Dr Kevin Cheung has worked for Cambridge Assessment since 2015 and is now Head of Marking and Results. Prior to joining Cambridge English, he lectured in Social Psychology and Research Methods at Loughborough University, Birmingham City University and the University of Derby, as well as working for the Probation Service. He is a Chartered Psychologist with research specialisms in academic writing, scale development and assessment. He holds a PhD in Psychology and is an Associate Fellow of the British Psychological Society.
Describe your role and involvement in Linguaskill.
I oversee research on writing across all Cambridge English products. I became involved in Linguaskill in November 2017 and I am currently working on the product’s Writing component.
What were the needs you were looking to solve with Linguaskill?
Many language proficiency tests do not include a written component because it is logistically challenging to mark. Such tests consequently rely solely on multiple-choice questions that focus on reading and grammar. However, writing is one of the most important skills that employers and educational institutions are interested in. Linguaskill therefore offers an option to include writing, without compromising on cost, efficiency or speed of results delivery. This means that writing skills can be assessed when they previously wouldn’t have been.
In your opinion, what makes Linguaskill different from other tests?
The fact that the writing automarker uses machine learning research from the University of Cambridge to deliver instantaneous results. Because of the collaboration between researchers at ALTA, Cambridge English and Cambridge University Press, we have a unique automarker tailored to the context of English for speakers of other languages (ESOL) exams. This collaboration has allowed the writing automarker to be developed using the Cambridge Learner Corpus, a collection of genuine exam scripts submitted by ESOL test takers. ALTA’s research uses novel techniques, which means the technology we have is cutting edge.
What have you learned while developing Linguaskill?
That some people are resistant to the idea of a computer marking their writing, even if you present evidence that it performs as well as (if not better than) human examiners. It is therefore part of my job to present evidence that demonstrates this to stakeholders, in a way that is easy for them to understand.
Now that Linguaskill is in the market, what are you most satisfied with in terms of the product and market adoption?
The enthusiasm that we have had from centres about a test that is quick, efficient and easy to use. It is great to see stakeholders responding positively to improvements in user experience (UX) and ease of use – this makes the testing and development of the platform feel worthwhile.
How do you see Linguaskill developing over the next 2–3 years?
It will possess greater adaptivity so that it offers a more personalised and targeted testing experience, linked to the test taker’s level. Additionally, it will provide more feedback on performance for candidates and institutions.
How do you see computer-based testing changing in the future, with the increasing use of AI?
More widespread use of AI will make our tests quicker and more resistant to attempted subversion of the results.
Are there other key trends that you see impacting language learning and testing over the next five years?
I see more personalised learning experiences being made possible through more granular assessment information. Being able to link test-taker data together will help us tailor support across the learning journey and manage expectations around progression. Knowing more about how particular groups of test takers improve their language proficiency will also allow us to advise on the best way to progress in specific circumstances. Finally, there will be more assessment happening outside of the exam hall, facilitated by mobile and wearable devices.