05/05/2026
- New Cambridge English report highlights that convenience must not come at the expense of security in remote language testing
- As remote assessment expands globally, robust safeguards are critical for life-changing decisions in education and work
- Six principles set out what “fit for purpose” remote assessment should look like
As at-home English tests are increasingly used for consequential decisions around the world, a new report from Cambridge English raises critical questions about whether they can, in every case, deliver the security, consistency and trust required for outcomes that affect people’s lives, including access to education and employment.
The report, Remotely Delivered Language Assessment: What Makes It Fit for Purpose?, highlights that while remote testing has rapidly evolved from a niche alternative to a mainstream delivery option, stronger safeguards are needed to ensure it is fit for purpose in high-stakes contexts.
Where English test scores inform decisions about study and work, the issue is not just operational efficiency but ensuring public trust in the integrity of the system.
The security challenge
The report does not argue against remote testing; rather, it sets out the conditions under which remote assessment can be considered valid, secure and fair – and where in-centre testing may remain the more appropriate option.
It identifies a number of emerging risks associated with fully remote testing, including impersonation, unauthorised assistance, item harvesting and identity spoofing. It also highlights new threats such as deepfake audio and video, and the use of hidden connected devices.
While technologies such as AI flagging, biometric checks and lockdown browsers can help mitigate these risks, the report emphasises that no single solution is sufficient. Instead, secure remote testing requires multiple layers of protection, ongoing monitoring and continuous adaptation as threats evolve.
Dr Evelina Galaczi, Director of Research, Cambridge English, said: “Remote testing has an important role to play in improving geographical access and flexibility. However, when test results determine high-stakes decisions – for example, visa outcomes – the bar for security must be exceptionally high.
Our research shows that while technology can help, it cannot eliminate risk alone. A layered approach, combining technology, human oversight and robust design, is essential to ensure results can be trusted.”
Human oversight remains a critical component of this model. The report highlights the role of trained proctors in interpreting context, intervening in real time and ensuring fair outcomes – particularly in sensitive or regulated settings where public trust is paramount.
Why this matters globally
As remote assessment becomes the default delivery option in some regions, confidence in test results is essential. Policymakers and regulators increasingly rely on English language test scores to make consequential decisions about student admissions and work visas. If those results cannot be trusted, public confidence in the entire system erodes.
The report calls for a more evidence-based approach to remote testing decisions. It stresses that delivery modes should be guided not by convenience alone, but by whether they are demonstrably secure, fair and fit for purpose.
Six principles for best practice
To support policymakers, regulators and test providers globally, Cambridge English has set out six principles for best practice in remote language assessment:
- Testing what matters: Ensure test design and tasks are carefully aligned with the language skills being evaluated and the assessment’s purpose
- Rigorous test security: Maintain robust identity checks, effective monitoring and clear protections against malpractice, with appropriate remote proctoring oversight
- Standardised test conditions: Implement consistent standards across all remote sessions to ensure fairness, using user-friendly platforms and clear protocols
- Maintain human involvement: Deploy technology to support delivery and monitoring, but never as a replacement for human judgement and oversight
- Fair and inclusive participation: Accommodate differences in equipment, connectivity, digital confidence and environment to give all candidates a fair opportunity to demonstrate competence
- Comparability across delivery modes: Validate and continuously monitor that results from remote tests are equivalent to those from in-centre tests