AI in Higher Education: Lessons from the EU
The recent adoption of the European Union's Artificial Intelligence Act (AIA, entry into force on August 1, 2024, and full application expected by 2027) has garnered worldwide attention as the first comprehensive regulatory framework for AI. Given its cross-sectoral approach, the AIA is poised to shape the development and use of AI tools in education. While it is premature to assume the AIA will serve as a global model, particularly in different socioeconomic contexts, its choices represent a stake in the ground that is likely to shape the ensuing debate. With the rising use of generative AI and other AI-driven tools in education, understanding the AIA and its impact matters greatly. This article offers high-level regulatory insights from the AIA to help institutional leaders and policymakers better grasp the issues at stake.
How the AIA Treats Education
The AIA employs a risk-based framework, categorising AI systems based on the risks they pose to health, safety, and fundamental rights. It places high-risk applications, including those in education, under stricter scrutiny while prohibiting systems that pose an unacceptable risk of harm.
Two prohibitions will most directly affect the education sector: the bans on AI for emotion recognition and biometric categorisation.
The AIA prohibits the use of AI to infer a person's emotions in educational institutions, citing concerns about the scientific validity of such systems and the inherent power imbalance in educational settings. While there is an exception for medical and safety reasons, the ban is broad and may prevent the deployment of potentially beneficial technologies. For instance, the law does not make an exception for tools that can be shown to improve learning outcomes, such as those that detect a student's engagement or frustration to help an instructor adjust their pacing or clarify confusing concepts. An AI system that notes a student's confusion during a difficult math lesson and prompts the teacher to offer one-on-one help would likely be prohibited.
The AIA also bans biometric categorisation that uses AI to deduce sensitive attributes like a person's race, political opinions, or religious beliefs. This prohibition addresses the widely discussed risk of AI perpetuating biases by detecting subtle patterns in data, leading to discriminatory outcomes. However, simply omitting sensitive attributes—like language background or disability—does not guarantee fairness. In fact, such data can be crucial for contextualising student work. For example, an AI essay-grading tool that penalises students for time spent or for grammatical errors could systematically disadvantage non-native English speakers or students with fewer resources. Research increasingly shows that ignoring sensitive attributes may unintentionally widen disparities.
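To make this concrete, below is a minimal sketch, using invented numbers and hypothetical field names, of the kind of disaggregated audit that surfaces such a gap: the sensitive attribute never enters the grading model, yet comparing outcomes across groups is what reveals the disparity.

```python
# Minimal sketch with invented data: an automated grader never "sees" who is a
# non-native speaker, but a disaggregated audit of its outputs can still reveal
# a systematic gap driven by proxies such as grammatical-error counts.
import statistics

# (automated_score, grammar_errors, non_native) -- toy records, not real data
records = [
    (90, 0, False), (85, 1, False), (82, 1, False), (78, 2, False),
    (73, 4, True),  (70, 5, True),  (64, 6, True),  (61, 7, True),
]

def mean_score(rows):
    return statistics.mean(score for score, _, _ in rows)

native = [r for r in records if not r[2]]
non_native = [r for r in records if r[2]]

print(f"Mean score, native speakers:     {mean_score(native):.1f}")
print(f"Mean score, non-native speakers: {mean_score(non_native):.1f}")
print(f"Score gap:                       {mean_score(native) - mean_score(non_native):.1f}")
```

The point is not the arithmetic but the workflow: the group label has to be available somewhere in the evaluation pipeline for an audit like this to be possible at all.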
Beyond these prohibitions, the AIA classifies most other education-related AI as "high-risk," reflecting the significant impact these tools can have on students' futures. Under Annex III of the Act, several educational use cases are designated high-risk:
Admissions: AI systems used to determine student access or admission.
Evaluation: AI tools that evaluate learning outcomes, such as automated exam scoring systems.
Student Placement: AI systems that assess the appropriate educational level or track for a student.
Monitoring: AI proctoring tools that detect academic misconduct during tests.
Classifying these systems as high-risk means they must comply with strict requirements. Providers, which can be edtech companies or even universities that develop their own AI, are obligated, among other things, to implement robust risk management, ensure high-quality and bias-free data, provide detailed technical documentation, and guarantee human oversight. In effect, a university that develops an AI to automate grading becomes a high-risk AI provider and must establish comprehensive governance to meet the AIA's standards. If, however, a university purchases a compliant solution from a third party, it acts as a "deployer" and incurs fewer obligations.
Finally, for systems that are neither prohibited nor high-risk, transparency obligations apply if the system is intended to interact directly with individuals. This requires designing systems so that users know they are interacting with an AI.
The AIA also dedicates a specific chapter to General-Purpose AI (GPAI), which refers to models, like large language models, that can perform a wide range of tasks. While this chapter does not explicitly target education, its rules will have significant implications for the sector, as future educational AI tools will increasingly be built on these regulated general-purpose models.
A key limitation of the AIA's regulatory approach is that modern large language models can be easily adapted for a plethora of uses without technical expertise. For example, an educator could use an off-the-shelf LLM to grade assignments—a high-risk application. In this scenario, the full weight of the AIA's compliance obligations may apply, making it practically infeasible for individual educators to customise AI tools. This could force them to rely on solutions from large commercial providers with the resources to ensure compliance.
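As a rough illustration of how low that barrier is, the snippet below repurposes a general-purpose chat model as an assignment grader in a handful of lines. This is a sketch assuming the OpenAI Python SDK and an API key in the environment; the model name and rubric are placeholders, and any comparable chat API would work the same way.

```python
# Sketch: turning an off-the-shelf LLM into an assignment grader with a prompt.
# Assumes the OpenAI Python SDK and an API key in OPENAI_API_KEY; the model
# name and the rubric below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

RUBRIC = "Grade the essay from 0-100 for argument quality, evidence, and clarity."

def grade_essay(essay_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": essay_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(grade_essay("Essay text goes here..."))
```

A few lines like these are exactly the scenario described above: the technical effort is trivial, yet the intended use would fall squarely into the high-risk category.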
Potential Shortcomings of the AIA for Advancing Education
While the AIA rightly seeks to mitigate AI's risks and advance fundamental rights through its risk-based approach, some requirements and prohibitions may inadvertently hinder educational innovation.
A significant concern is that treating many learning-support AI tools as uniformly "high-risk" can stifle experimentation. The Act does not conduct an application-specific risk-benefit assessment, weighing a tool's potential benefits against its risks before subjecting it to stricter oversight. Instead, it classifies applications as high-risk based on their "intended purpose" as declared by the provider.
This approach is sensible in that it aligns with existing EU regulatory frameworks like product safety legislation. However, the AIA's descriptions of high-risk applications in Annex III are often abstract and overly broad, causing the classification to be too comprehensive. For instance, an AI tutoring system providing private, real-time feedback to students without impacting their grades falls under the same "high-risk" umbrella as an AI system that decides university admissions. Regulating solely by presumed risk, absent a public-benefit analysis, could obstruct the use of highly beneficial AI tools in education.
This broad-brush approach can likely be explained by the sheer difficulty of conducting a granular risk-benefit evaluation within a horizontal piece of legislation like the AIA. Such an exercise would have required a massive repository of risks and benefits and a complex evaluation methodology, demanding deep sectoral expertise. As a consequence, while the Act's risk classification is useful for mitigating certain dangers like illegal discrimination, its implementation may undermine beneficial innovations.
Furthermore, the AIA has a strong preventative focus on risk mitigation. This inevitably leaves less ground for experimentation and the collection of data on actual use, which would be necessary to perform a more nuanced risk classification over time. Policymakers outside the EU could opt for a different approach where additional regulatory scrutiny is triggered only when there is sufficient evidence that a risk materialises to a significant degree.
The AIA's specific prohibitions also risk constraining valuable pedagogical strategies. Banning emotion-sensing altogether may eliminate potentially useful tools for detecting student interest, stress, or other factors critical for learning. Such insights could help educators identify students who need extra support before they fall behind or recognise those who are highly engaged and ready for more challenging material. By precluding all forms of emotion inference, the ban stifles innovation aimed at creating responsible, privacy-preserving ways to leverage these insights. Although research is not directly subject to the AIA, these restrictions could also create a chilling effect on academic research in these areas.
With its core focus on risk avoidance, the AIA is also unlikely to encourage systems designed to foster creativity, critical thinking, and adaptive teaching methods. Moreover, the compliance burden for high-risk applications may stifle educational innovation, especially for smaller EdTech developers and academic institutions. The laundry list of obligations—from risk assessments and data governance to detailed record-keeping and quality management—is both complex and resource-intensive. Many universities and smaller firms may lack the resources to comply, potentially limiting the diversity of educational tools and concentrating the market in the hands of a few large providers.
Recommendations for Balancing Innovation and Ethics in Higher Education outside the EU
Understanding the intentions and challenges of the AIA presents an opportunity for leaders and policymakers in other jurisdictions to experiment with alternative approaches.
Higher Education Leaders
To avoid top-down implementation, leaders should engage both educators and students in AI-related decision-making from the very beginning. This means creating inclusive processes where faculty, administrators, and learners can collaboratively evaluate AI systems, conduct thorough risk-benefit analyses, and ensure that any adopted tool aligns with institutional values and pedagogical goals.
At the institutional level, leaders should consider a "regulation by design" approach. Rather than relying solely on external compliance mechanisms, they should embed safeguards and accountability directly into the AI systems. For instance, a university could provide access to customised AI tools that have its specific academic integrity policies built-in, which could help detect when systems are being misused. By integrating responsible design and oversight into technical designs, higher education leaders can foster an environment that harnesses AI's strengths while minimising unintended consequences.
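A minimal sketch of what "regulation by design" could look like in practice is shown below, with entirely hypothetical policy text, names, and backend: institutional rules are attached to every model call and every interaction is logged for human oversight, rather than relying solely on after-the-fact compliance checks.

```python
# Sketch of "regulation by design": a thin wrapper that injects institutional
# policy into every request and keeps an audit log for human oversight.
# Policy text, names, and the dummy backend are hypothetical placeholders.
from datetime import datetime, timezone
from typing import Callable, Dict, List

INTEGRITY_POLICY = (
    "You are a study aid at Example University. Explain concepts and give "
    "feedback, but do not write graded assignments on the student's behalf."
)

class PolicyWrappedAssistant:
    def __init__(self, generate: Callable[[str], str]):
        self.generate = generate          # any chat backend: (prompt) -> reply
        self.audit_log: List[Dict] = []   # reviewable record for oversight

    def ask(self, user_id: str, question: str) -> str:
        prompt = f"{INTEGRITY_POLICY}\n\nStudent question: {question}"
        reply = self.generate(prompt)
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "user": user_id,
            "question": question,
            "reply": reply,
        })
        return reply

# Usage with a dummy backend, just to show the flow end to end.
assistant = PolicyWrappedAssistant(lambda prompt: "Here is a hint, not a full answer...")
print(assistant.ask("student-42", "Can you write my essay on the French Revolution?"))
print(len(assistant.audit_log), "interaction(s) recorded for oversight")
```

The design choice is simply that the safeguard and the accountability trail live inside the tool itself, so misuse can be detected and reviewed by the institution rather than discovered later through external complaints.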
Crucially, general-purpose AI tools provide an unprecedented opportunity for individual educators to leverage powerful technology themselves, without needing technical or coding skills. For example, a history professor could develop a custom writing coach that provides feedback on student essays using an off-the-shelf large language model. While the AIA's rules might make such educator-led innovation difficult in the EU, other jurisdictions may pursue a path that broadens access and improves the quality of education, while still ensuring proper caution and oversight.
Policymakers
Effective AI governance in education requires a shift away from imposing rigid, top-down restrictions and toward building a smarter framework founded on partnership and trust.
First, it is crucial to clarify the division of roles. Regulators should focus on their core mandate: advancing fundamental rights and equity by ensuring that AI tools are unbiased, protect student data, and are safe. Educators, in turn, must remain in control of instructional practice—deciding which tools best serve their students and how to integrate them into the curriculum. This separation ensures that safety and equity are prioritised without stifling classroom innovation.
Second, policymakers should recognise that a one-size-fits-all approach is ill-suited for AI. To address the distinct needs of higher education, sector-specific AI regulations would be more effective. The context of a large research university is vastly different from that of a K-12 school or a tech startup. Rules should be tailored accordingly. This could mean creating specific regulations for higher education or implementing a flexible "public-interest test" for new tools, which would weigh their demonstrated educational benefits against potential risks to students.
Finally, policy should emphasise support over punishment. Rather than only penalising misuse, the primary goal should be to help schools and developers adopt AI responsibly. This can be achieved by offering flexible compliance pathways—such as simplified checklists or phased deadlines—that keep costs manageable, particularly for smaller organisations. Pairing new rules with robust training programmes and collaborative initiatives would further guide the education community in using these powerful tools safely and effectively.
Final Thoughts
No single regulatory framework can ensure the responsible adoption of AI in higher education. While broad prohibitions and abstract risk classifications may provide strong safeguards, they also risk stifling the innovation needed to equip students for the future—especially in under-resourced settings where AI could be a powerful equaliser. The legitimate goal of protecting fundamental rights like privacy and non-discrimination should not preclude the advancement of another fundamental right: education.
The rise of AI challenges educators to proactively harness its potential in the classroom while guarding against abuse. AI literacy—among students, educators, and administrators—will be crucial for fostering an academic culture that values genuine engagement and integrity. As AI literacy becomes a key skill in many sectors, it is something educators must seek to foster in their students.
In contexts outside the EU where policy frameworks are still developing, flexible regulations that balance oversight with innovation may better serve local needs. Such approaches can protect individual rights while recognising AI's promise to expand access to high-quality learning. Coupled with training, collaboration, and clear guidelines, this balanced approach can empower educators to harness new technologies in ways that advance equity and improve student outcomes without sacrificing core educational values.
-
Robert Mahari is the Associate Director of Stanford's CodeX Centre and holds a JD-PhD in Legal Artificial Intelligence from Harvard Law School and the MIT Media Lab. His research focuses on Computational Law, using technology to analyse and enhance legal systems, improve practice, expand access to justice, and increase judicial efficiency. He collaborates with public and private organisations globally to develop and apply computational legal solutions in real-world settings.
-
Gabriele Mazzini is a pioneer and leading expert in Artificial Intelligence governance and regulation and a sought-after advisor, lecturer, and public speaker across the world. A former Team Leader at the European Commission, he designed and led the drafting of the Commission's proposal for the EU AI Act and was the principal advisor during the legislative negotiations with the Parliament and the Council. He also shaped earlier policy work on the European approach to AI from 2017 onwards, including the White Paper on the ecosystem of excellence and trust for AI and the work on liability for emerging technologies.