Study: Algorithms Used by Universities to Predict Student Success May Be Racially Biased

Washington, July 11, 2024—Predictive algorithms commonly used by colleges and universities to determine whether students will be successful may be racially biased against Black and Hispanic students, according to new research published today in AERA Open, a peer-reviewed journal of the American Educational Research Association. The study—conducted by Denisa Gándara (University of Texas at Austin), Hadis Anahideh (University of Illinois Chicago), Matthew Ison (Northern Illinois University), and Lorenzo Picchiarini (University of Illinois Chicago)—found that predictive models also tend to overestimate the potential success of White and Asian students. 

Video: Co-authors Denisa Gándara and Hadis Anahideh discuss findings and implications of the study

“Our results show that predictive models yield less accurate results for Black and Hispanic students, systematically making more errors,” said study co-author Denisa Gándara, an assistant professor in the College of Education at the University of Texas at Austin.

These models incorrectly predict failure for Black and Hispanic students 19 percent and 21 percent of the time, respectively, compared to false negative rates for White and Asian groups of 12 percent and 6 percent. At the same time, the models incorrectly predict success for White and Asian students 65 percent and 73 percent of the time, respectively, compared to false positive rates for Black and Hispanic students of 33 percent and 28 percent.
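The disparities above are differences in per-group false negative rates (students who succeed but are predicted to fail) and false positive rates (students who struggle but are predicted to succeed). A minimal sketch of how such rates are computed, using illustrative toy data rather than the study's dataset or code:

```python
# Illustrative sketch (not the authors' code): per-group false negative
# and false positive rates, the error metrics compared in the study.

def group_error_rates(y_true, y_pred, groups):
    """Return {group: (false_negative_rate, false_positive_rate)}.

    y_true -- 1 if the student actually succeeded, 0 otherwise
    y_pred -- 1 if the model predicted success, 0 otherwise
    groups -- group label for each student
    """
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        # False negative: actual success, predicted failure
        fn = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 0)
        pos = sum(1 for i in idx if y_true[i] == 1)
        # False positive: actual failure, predicted success
        fp = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 1)
        neg = sum(1 for i in idx if y_true[i] == 0)
        rates[g] = (fn / pos if pos else 0.0,
                    fp / neg if neg else 0.0)
    return rates
```

A gap between groups in either rate, even when overall accuracy looks acceptable, is the kind of bias the study documents.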

“Our findings reveal a troubling pattern—models that incorporate commonly used features to predict success for college students end up forecasting worse outcomes for racially minoritized groups and are often inaccurate,” said co-author Hadis Anahideh, an assistant professor of industrial engineering at the University of Illinois Chicago. “This underscores the necessity of addressing inherent biases in predictive analytics in education settings.”

The study used nationally representative data from the U.S. Department of Education’s National Center for Education Statistics, covering 15,244 students over a 10-year period.

Findings from the study also point to the potential value of using statistical techniques to mitigate bias, although there are still limitations.

“While our research tested various bias-mitigation techniques, we found that no single approach fully eliminates disparities in prediction outcomes or accuracy across different fairness notions,” said Anahideh.  

Higher education institutions are increasingly turning to machine learning and artificial intelligence algorithms that predict student success to inform various decisions, including those related to admissions, budgeting, and student-success interventions. In recent years, concerns have been raised that these predictive models may perpetuate social disparities.

“As colleges and universities become more data-informed, it is imperative that predictive models are designed with attention to their biases and potential consequences,” said Gándara. “It is critical for institutional users to be aware of the historical discrimination reflected in the data and to not penalize groups that have been subjected to racialized social disadvantages.”

The study’s authors noted that the practical implications of the findings are significant but depend on how the predicted outcomes are used. If models are used to make college admissions decisions, admission may be denied to racially minoritized students if the models show that previous students of the same racial categories had lower success. Higher education observers have also warned that predictions could lead to educational tracking, encouraging Black and Hispanic students to pursue courses or majors that are perceived as less challenging.

On the other hand, biased models may lead to greater support for disadvantaged students. By falsely predicting failure for racially minoritized students who succeed, the model may direct greater resources to those students. Even then, Gándara noted, practitioners must be careful not to produce deficit narratives about minoritized students, treating them as though they had a lower probability of success.

“Our findings point to the importance of institutions training end users on the potential for algorithmic bias,” said Gándara. “Awareness can help users contextualize predictions for individual students and make more informed decisions.”

She noted that policymakers might consider policies to monitor or evaluate the use of predictive analytics, including their design, bias in predicted outcomes, and applications.

Funding note: This research was supported by the Institute of Education Sciences at the U.S. Department of Education.

Study citation: Gándara, D., Anahideh, H., Ison, M., & Picchiarini, L. (2024). Inside the black box: Detecting and mitigating algorithmic bias across racialized groups in college student-success prediction. AERA Open, 10(1), 1–15.


About AERA
The American Educational Research Association (AERA) is the largest national interdisciplinary research association devoted to the scientific study of education and learning. Founded in 1916, AERA advances knowledge about education, encourages scholarly inquiry related to education, and promotes the use of research to improve education and serve the public good. Find AERA on Facebook, X, LinkedIn, Instagram, Threads, and Bluesky.
