How AI Impacts Diversity in College Admissions

ODSC - Open Data Science
5 min read · Sep 26, 2024


Higher education institutions are beginning to use artificial intelligence to review college applications. Where does this leave students? Can the technology be fair when analyzing qualitative factors, such as essays and recommendations, that carry years of personal context?

How Many Universities Are Using AI in Admissions?

Today’s admissions professionals mainly use AI to automate administrative processes, lightening their workloads once application season opens. Traditionally, a handful of employees manually review letters of recommendation, personal essays, and high school transcripts. Processing multiple documents from thousands of applicants is tremendously time-consuming, which is why an automated tool is helpful.

While there’s no definitive data on how widespread AI is in admissions, many industry leaders view it favorably. In a 2023 survey, approximately 83% of higher education administrators said they would embrace this technology, believing it will have an overall positive impact by increasing administrative efficiency and improving student outcomes. Have they been misled, or are their assumptions correct?

AI’s Influence on Diversity During College Admissions

There are numerous aspects of the college admissions process where AI may affect diversity.

Before Admissions

AI begins affecting diversity in college admissions long before students are old enough to apply. Writing tools, tutoring platforms, and conversation engines can replace professional mentors and instructors, giving those with access a unique advantage. Since this technology is relatively new, the extent of its impact remains to be seen.

During Admissions

Generative AI may not be able to craft a unique personal essay, but it can act as a sounding board, helping applicants generate interesting ideas or form persuasive arguments. It can also produce convincing statements of financial need or letters of recommendation. These advantages can increase an applicant’s chances of getting accepted.

On the one hand, financially advantaged demographic groups have disproportionate access to advanced AI tools, giving them an unfair edge. On the other, underserved and underrepresented groups can use the technology in place of an admissions consultant, a service they may otherwise be unable to afford.

After Admissions

AI’s role in determining financial aid may also affect diversity. Depending on how a model weighs risk factors, statements of need, and demographic data, it may disproportionately offer grants and scholarships to a particular group. According to the National Education Association, this technology has biases built into it that affect its decision-making.

Potential Positive Implications of This Technology

Academic performance isn’t the only thing higher education institutions consider when reviewing applications. In fact, approximately 90% of students with perfect SAT scores and 4.0 GPAs aren’t accepted into the top 10 universities because admissions professionals want evidence of a unique drive or personality.

Large language models (LLMs) may be unable to adequately capture a student’s potential or accurately summarize their life story, but they can offer applicants content and grammar tips. Students from systemically underserved socioeconomic backgrounds would benefit from having a personalized admissions consultant and mentor guide them through the process.

Some universities are experimenting with AI interviews, which could improve diversity. An algorithm analyzes a pre-recorded video for tone of voice, facial expressions, and keyword use. In principle, it doesn’t carry the unconscious biases a human reviewer might, such as snap judgments about someone’s hairstyle, ethnicity, or accent.

Potential Negative Implications of This Technology

This technology’s influence on college admissions isn’t entirely positive. Experts say it is deepening the digital divide, since underserved socioeconomic, racial, and geographic groups often lack the internet-connected devices needed to use AI tools. According to the U.S. Government Accountability Office, millions of Americans still lack broadband access.

Even applicants who don’t use AI may still be affected, since admissions teams often deploy detection tools to identify work generated by an LLM. AI detection software is accurate less than 80% of the time and can incorrectly flag human-written content as AI-generated. Evidence suggests it flags non-native English speakers’ writing more often.
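To see why that error rate matters, here’s a minimal sketch of how an admissions office might audit a detector before trusting it. Everything in it is a hypothetical stand-in: the sample essays, the group labels, and the detector’s flags don’t come from any real product.

```python
from collections import defaultdict

# (writer_group, flagged_as_ai) for essays known to be human-written.
# All of this is illustrative stand-in data, not real detector output.
audit_sample = [
    ("native_speaker", False), ("native_speaker", True),
    ("native_speaker", False), ("native_speaker", False),
    ("non_native_speaker", True), ("non_native_speaker", True),
    ("non_native_speaker", False), ("non_native_speaker", True),
]

counts = defaultdict(lambda: [0, 0])  # group -> [false positives, total]
for group, flagged in audit_sample:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

for group, (fp, total) in counts.items():
    print(f"{group}: false-positive rate = {fp / total:.0%}")
```

If the rates diverge the way the research suggests, a flag from the detector shouldn’t be treated as evidence on its own.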

There’s also the potential for admissions teams to inadvertently cause harm. They may exacerbate racial disparities in education because training data reinforces subtle, preexisting biases. For example, a model may assume racial minorities are less likely to succeed academically, effectively treating race as a risk factor, even though any reasonable human understands that historically low acceptance and graduation rates are tied to segregation and xenophobia.

If two applicants are essentially tied, a model treating race as a risk factor could give one an unfair advantage, causing the other to lose their college of choice because of the color of their skin. Although many higher education institutions keep humans in the loop to prevent these situations, reviewers may grow lax as they become more confident in the technology. Periodic audits of the model’s recommendations can help keep that oversight honest.
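Here’s a minimal sketch of such an audit, applying the EEOC’s four-fifths rule of thumb to recommended-admit rates by group. The group names and rates are synthetic; a real audit would compute them from the institution’s own logged model recommendations.

```python
# Minimal disparate-impact check on a model's recommended-admit rates.
# Rates here are synthetic, for illustration only.
admit_rates = {"group_a": 0.30, "group_b": 0.21}

reference = max(admit_rates.values())  # most-favored group's rate
for group, rate in admit_rates.items():
    ratio = rate / reference
    # The four-fifths rule flags selection ratios below 0.8 for review.
    verdict = "OK" if ratio >= 0.8 else "REVIEW: possible disparate impact"
    print(f"{group}: selection ratio = {ratio:.2f} -> {verdict}")
```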

What College Administrators Should Do to Help

Administrators can improve diversity and reduce adverse outcomes by governing their AI more carefully. To begin, they should ensure training data is properly sourced, cleaned, and transformed before feeding it to a model. This minimizes bias, helping the system base decisions on quantifiable metrics instead of assumptions.
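As an illustration, here’s a small sketch of one such cleaning step, assuming the training data lives in a pandas DataFrame with hypothetical protected columns named race and gender: drop those attributes before training, and warn when a remaining numeric feature correlates strongly enough with one of them to act as a proxy.

```python
import pandas as pd

PROTECTED = ["race", "gender"]  # hypothetical column names

def prepare_training_data(df: pd.DataFrame) -> pd.DataFrame:
    """Drop protected attributes and warn about likely proxy features."""
    numeric_cols = df.select_dtypes("number").columns.difference(PROTECTED)
    for attr in PROTECTED:
        codes = df[attr].astype("category").cat.codes  # numeric encoding
        for col in numeric_cols:
            corr = df[col].corr(codes)
            if pd.notna(corr) and abs(corr) > 0.5:  # crude proxy threshold
                print(f"Warning: '{col}' may proxy for '{attr}' (r={corr:.2f})")
    return df.drop(columns=PROTECTED)
```

A flagged feature isn’t automatically disqualifying, but it deserves an explicit human decision rather than silent inclusion in the model.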

Since technology companies are growing increasingly secretive about what their training datasets contain, often because they include copyrighted, inaccurate, or explicit material, it may be wise to build a model from the ground up rather than sourcing one from a third party. Of course, doing so carries additional cost and time considerations.

Above all, administrators must ensure they still consider race, socioeconomic status, ethnicity, gender, and sexuality during their admissions processes. Their model may not be able to make equitable, context-relevant decisions, but humans can.

Ignoring the systemic obstacles that make it harder for historically disadvantaged and underserved groups to excel in school, apply to college, and graduate on time would do thousands of applicants a disservice. It isn’t enough to minimize AI bias; administrators must prioritize equity during both training and deployment.

The Bottom Line of Using AI in College Admissions

AI shouldn’t be used as a stand-alone admissions solution. It can’t adequately capture students’ uniqueness or appropriately consider qualitative factors during decision-making. Even though using it could create beneficial opportunities, neither institutions nor applicants should trust it completely. A human-in-the-loop process is crucial for improving outcomes.

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Interested in attending an ODSC event? Learn more about our upcoming events here.
