The United States Department of Education Office for Civil Rights (OCR) recently issued a resource providing examples of how the use of artificial intelligence (AI) in educational or security software could result in discrimination on the basis of disability, race, or sex.
The guidance, titled “Avoiding the Discriminatory Use of Artificial Intelligence,” is available here.
The guidance uses twenty-one (21) examples of how AI may be used in schools and how its use may have discriminatory outcomes that could cause OCR to open an investigation upon receiving a complaint. It specifically addresses the use of AI to write Section 504 plans or IEPs, as well as how the use of security software may have discriminatory impacts. All of the examples indicate that OCR might open an investigation, but the guidance also notes that the decision to do so would be based on the individual facts and circumstances of each case. It is also important to note that this guidance does not have the force of law and does not create new legal standards.
The resource includes examples shedding light on how OCR views some of the new uses for AI in the educational setting and what educational leaders should be aware of when evaluating vendors offering AI-based educational or security products.
For instance, the resource includes an example involving the use of AI products designed to flag plagiarism or prevent cheating. In the example, the AI product has a low error rate when evaluating essays written by native English speakers, but a high error rate when evaluating essays written by non-native English speakers. According to the resource, “OCR would likely have reason to open an investigation if a person filed a complaint based on these facts.”
Another example involves potential harassment on the basis of race. In this scenario, an AI security vendor used facial recognition technology that has difficulty accurately identifying Black individuals. The facial recognition technology could result in disparate treatment based on race in violation of Title VI if students are misidentified and questioned or pulled from class as a result of the faulty software.
Yet another example involved the use of risk assessment software that relies on historical discipline data (which may reflect disparities in discipline based on race) to score students’ risk of future discipline issues and recommend discipline. Even if the underlying data does not include student race, a “risk score” built on that data may still recommend more severe discipline outcomes for students of color and may result in different treatment based on race.
One more example involved using AI software to create academic schedules for students. In this scenario, the software relies upon historical data and student demographics to determine course enrollments, which could result in treating students differently on the basis of sex. OCR would have reason to open an investigation under these facts.
The above examples frequently note that when complaints are made, the school district fails to investigate or act, relying entirely on the software instead. This reinforces the need for school districts to act upon complaints, conduct an adequate investigation, and take remedial action if necessary.
School districts should review and discuss these examples as they consider implementing AI software. OCR is clearly signaling that it expects schools to evaluate the potential discriminatory impact of this evolving technology in the school setting.