23/09/2025
Without an ethical framework, AI risks losing credibility in English language learning and assessment. Cambridge University Press & Assessment has released a new paper that defines six key principles for ethical AI in English language education.
Cambridge University Press & Assessment has defined six principles for ensuring the ethical use of AI in delivering English language education. The paper follows increased concern about the role of AI in English learning and assessment. A recent YouGov poll found that, when asked about the use of AI in English proficiency tests, the British public's top concerns were a greater risk of cheating and a failure to test appropriate language skills (39% each).
Central to Cambridge’s stance is the recommendation of a human-centred approach to AI: acknowledging the vital role of the human in both language attainment and quality assessment. Cambridge also urges more care to be taken to ensure AI is fair and inclusive and that security, privacy and consent are consistently prioritised.
Dr Nick Saville, Director of Thought Leadership at Cambridge University Press & Assessment and co-author of the paper, believes AI can bring huge benefits to English language education:
“The rapid adoption of AI in English language learning and assessment can provide significant benefits for learners, teachers and institutions around the world, but it’s critical that it’s delivered ethically. Despite the huge benefits AI can bring, without an ethical framework in place, it risks losing credibility and people’s trust. The six principles we have defined will help deliver effective AI-based language learning and assessment solutions. By focussing on keeping a human in the loop and maintaining robust standards, we can carve out a future where teachers and learners feel safe and empowered to use new technology to reach their potential.”
Francesca Woodward, Global Managing Director, English, at Cambridge University Press & Assessment adds:
“To maintain high standards in learning and assessment, we must consistently put learners first. AI offers a world of possibilities, but with that comes a responsibility to make sure solutions are ethical, high-quality, and accessible. The use of AI in education lacks consistent regulation, which means we, as a sector, have a responsibility to champion innovation with integrity. We’ve defined these principles to provide a research-based framework that we encourage others to choose to adopt.”
Reflecting the rapidly evolving AI landscape, and live discussions about the efficacy of English testing methods, the paper urges test providers to collect robust evidence showing how AI scores meet the same standards as highly skilled and experienced human examiners. It also calls for greater transparency and explainability, ensuring all parties are aware of the role AI plays in assessment.
Acknowledging the importance of sustainability and the environmental impact of AI, the paper also urges test providers and all stakeholders to consider the vast amounts of energy consumed when AI products are developed and used.
Cambridge’s six key principles for ethical AI in English learning and assessment:
1. AI must consistently meet human examiners’ standards – AI systems must accurately assess the right language skills and deliver results people can trust. Test providers must collect robust evidence to show how AI scores meet the same standards as highly skilled and experienced human examiners.
2. Fairness isn’t optional – it’s foundational – AI-based language learning and assessment systems must be trained on inclusive data to ensure they are fair and free from bias. Critical to this is the use of diverse data sets in the training of AI models and continuous monitoring for bias.
3. Data privacy and consent are non-negotiable – All parties must be clearly informed about what data is collected, how it’s stored, and what it’s used for. Behind the scenes, this means implementing robust encryption, secure storage protocols, and safeguards against hacking.
4. Transparency and explainability are key – Learners need to know when and how AI is used to determine their results. AI systems must be developed and deployed transparently, with robust oversight and governance. Providers must be able to clearly articulate the role AI plays, as well as the frameworks that are in place to ensure test integrity and accuracy.
5. Language learning must remain a human endeavour – While AI can enhance learning and assessment, it cannot replace the uniquely human experience of acquiring and using language. AI-based assessment must always keep a human in the loop: ensuring accountability on the part of test providers, and allowing a human to step in where oversight, clarity, or a correction is needed for quality control.
6. Sustainability is an ethical issue – AI isn’t just a digital tool – it’s a physical one, with real-world environmental costs. AI systems crunch vast amounts of data, which comes with massive energy needs. The environmental impact of AI use must be kept in mind when considering how AI should be developed or used.
The full paper, Ethical AI for Language Learning and Assessment, by Dr Carla Pastorino-Campos and Dr Nick Saville is accessible here: