Principal Research Manager
Dr Kevin Cheung has worked for Cambridge Assessment since 2015 and is now a Principal Research Manager. Prior to joining Cambridge English, he lectured in Social Psychology and Research Methods at Loughborough University, Birmingham City University and the University of Derby, as well as working for the Probation Service. He is a Chartered Psychologist with research specialisms in academic writing, scale development and assessment. He holds a PhD in Psychology and is an Associate Fellow of the British Psychological Society.
I oversee research on writing across all Cambridge English products. I became involved in Linguaskill in November 2017 and I am currently working on the product’s Writing component.
Many language proficiency tests do not include a written component because it is logistically challenging to mark, and consequently rely solely on multiple-choice questions that focus on reading and grammar. However, writing is one of the skills that employers and educational institutions are most interested in. Linguaskill therefore offers the option to include writing without compromising on cost, efficiency or speed of results delivery. This means that writing can be assessed in contexts where it previously would not have been.
The fact that the writing automarker uses machine learning research from the University of Cambridge to deliver instantaneous results. Thanks to the collaboration between researchers at ALTA, Cambridge English and Cambridge University Press, we have a unique automarker tailored to the context of English for speakers of other languages (ESOL) exams. This collaboration has allowed the automarker to be developed using the Cambridge Learner Corpus, a collection of genuine exam scripts submitted by ESOL test takers. ALTA's research draws on novel techniques, which keeps the technology we have at the cutting edge.
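To give a rough sense of what training a feature-based essay scorer on graded scripts can look like, here is a purely illustrative sketch. It is not the ALTA automarker and uses invented data in place of the Cambridge Learner Corpus; every detail below is an assumption for illustration only.

```python
# Illustrative toy essay scorer: word n-gram TF-IDF features feeding a ridge
# regression model, trained on a handful of invented (script, score) pairs.
# The real Linguaskill automarker is far more sophisticated; nothing here
# reflects Cambridge English's actual system or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Invented scripts standing in for graded learner writing.
scripts = [
    "I am writing to complain about the service I received last week.",
    "My holiday was good. I go to beach and swim every day.",
    "The report outlines three factors that influenced the decision.",
    "He like football very much and play it all days.",
]
scores = [5.5, 3.0, 6.5, 2.5]  # invented band-style scores

model = make_pipeline(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),
    Ridge(alpha=1.0),
)
model.fit(scripts, scores)

# Score an unseen script instantly, mirroring the idea of automatic marking.
new_script = ["I would be grateful if you could send me further information."]
print(round(float(model.predict(new_script)[0]), 2))
```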
That some people are resistant to the idea of a computer marking their writing, even when presented with evidence that it performs as well as (if not better than) human examiners. Part of my job is therefore to present that evidence to stakeholders in a way that is easy for them to understand.
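One simple way such evidence can be summarised is by measuring agreement between automarker and human examiner scores. The sketch below is illustrative only, with invented scores and metrics chosen as an assumption; it does not reproduce Cambridge English's actual validation evidence.

```python
# Illustrative agreement check between human and automarker scores on the
# same scripts, using invented integer band scores (1-6).
import numpy as np
from sklearn.metrics import cohen_kappa_score

human      = np.array([4, 3, 5, 2, 6, 4, 3, 5, 4, 2])
automarker = np.array([4, 3, 5, 3, 6, 4, 4, 5, 4, 2])

# Pearson correlation: how closely the two sets of scores move together.
pearson = np.corrcoef(human, automarker)[0, 1]

# Quadratic weighted kappa: chance-corrected agreement that penalises
# large disagreements more heavily than near-misses.
qwk = cohen_kappa_score(human, automarker, weights="quadratic")

print(f"Pearson r = {pearson:.2f}, quadratic weighted kappa = {qwk:.2f}")
```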
The enthusiasm that we have had from centres about a test that is quick, efficient and easy to use. It is great to see stakeholders responding positively to improvements in user experience (UX) and ease of use – this makes the testing and development of the platform feel worthwhile.
It will become more adaptive, offering a more personalised and targeted testing experience linked to the test taker's level. It will also provide more feedback on performance for candidates and institutions.
More widespread use of AI will make our tests quicker and more resistant to attempted subversion of the results.
I see more personalised learning experiences being made possible through more granular assessment information. Being able to link test-taker data together will help us tailor support across the learning journey and manage expectations around progression. Knowing more about how particular groups of test takers improve their language proficiency will also allow us to advise on the best way to progress in specific circumstances. Finally, there will be more assessment happening outside of the exam hall, facilitated by mobile and wearable devices.