Products and Services
Our innovative products and services for learners, authors and customers are based on world-class research and are relevant, exciting and inspiring.
We unlock the potential of millions of people worldwide. Our assessments, publications and research spread knowledge, spark enquiry and aid understanding around the world.
Our staff – the largest dedicated research team of any UK-based language assessment organisation – are our greatest asset in delivering our commitment to excellence. Our rigorous systems of quality are subject to independent checks and meet international standards, providing accountability and giving confidence to those who rely on our exams.
The Cambridge English Principles of Good Practice outline the systems and processes that drive our search for excellence and continuous improvement. While these systems involve complex research and technology, the underlying philosophy is simple:
We have published Principles of Good Practice to:
Download Principles of Good Practice (PDF, 798 KB)
In Principles of Good Practice we state our commitment to providing users with data that will allow them to evaluate for themselves the reliability of our exams (appendix, Reliability section F). That data can be found below in the Reporting Reliability section.
The tools and analysis used to develop these figures are also listed for those unfamiliar with analysing and reporting test reliability.
Reliability and validity are the two most important properties of a test. They form part of the Cambridge English VRIPQ approach as described in the Principles of Good Practice booklet. It is a general principle that in any testing situation one needs to maximise validity and reliability to produce the most useful results for test users, within existing practical constraints.
Cambridge English takes the view that reliability is an integral component of validity; there can be no validity without reliability. Hence any approach to estimating reliability must reflect potential sources of evidence for the construct validity of the tests.
Reliability (normally expressed as a figure between 0 and 1) indicates the replicability of test scores when a test is given twice or more to the same group of people, or when two tests constructed in the same manner are given to the same group of people. The expectation is that the candidates would receive nearly the same results on all occasions. If candidates’ results are consistent across occasions, the test is said to be reliable; the degree of score consistency is therefore a measure of the reliability of the test.
There are various ways to estimate the reliability of an exam. Most Cambridge English exams have two main types of component: objective papers and performance papers. Objective papers are those that do not require human judgement for their scoring, i.e. tests of reading comprehension, listening comprehension and use of English. The score for each of these sub-tests is calculated simply by adding up the number of correct responses in each section. The reliability estimates for these papers are calculated using a statistic called Cronbach’s Alpha. The closer the Alpha is to 1, the more reliable the test section is.
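The Alpha calculation for an objective paper can be sketched as follows. This is a minimal illustration with invented response data, not the operational Cambridge English analysis: each row is one candidate, each column one item, scored 1 for a correct response and 0 otherwise.

```python
# A minimal sketch of Cronbach's Alpha for an objective paper, assuming
# item-level scored responses (1 = correct, 0 = incorrect). The response
# data below are invented for illustration.

def cronbach_alpha(responses):
    """responses: one row per candidate, one column per item."""
    k = len(responses[0])  # number of items

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Sum of the variances of the individual items
    item_vars = sum(variance([row[i] for row in responses]) for i in range(k))
    # Variance of the candidates' total scores
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
alpha = cronbach_alpha(responses)
print(round(alpha, 2))  # 0.8 for this invented data set
```

Alpha rises as the items vary together (candidates who score well on one item tend to score well on the others), which is exactly the internal consistency the statistic is meant to capture.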
Performance papers, however, involve the judgement of human raters. Almost all Cambridge English Speaking tests use a paired format in which two Oral Examiners assess each candidate’s performance.
In Writing tests each candidate’s performance is usually marked by one human rater, with a sample of scripts marked by a second or third marker. When two examiners mark a performance test, we use the Pearson correlation between their marks as a measure of the consistency of the ratings. When this is not the case, or where a sample of performances is marked by more than one examiner, we use a statistic called the g-coefficient, which is derived from Generalizability theory. What is common to all these methods is a scale ranging from 0 to 1, very similar to the Alpha used for objective papers.
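The inter-rater consistency measure described above can be sketched as a Pearson correlation between two examiners' marks for the same candidates. The marks below are invented for illustration; the g-coefficient used in the single-marker case requires a variance-components analysis and is not shown here.

```python
# A minimal sketch of inter-rater consistency: the Pearson correlation
# between two examiners' marks for the same candidates.
# The marks below are invented for illustration.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

examiner_a = [3.0, 4.5, 2.5, 5.0, 4.0]  # first examiner's marks
examiner_b = [3.5, 4.0, 2.5, 5.0, 4.5]  # second examiner's marks
r = pearson(examiner_a, examiner_b)
print(round(r, 3))
```

A value near 1 means the two examiners rank and score candidates very similarly; a value near 0 would indicate that their judgements are unrelated.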
The decision to pass or fail a candidate is almost always taken at the syllabus level for Cambridge English exams. That means the overall score on the test is the composite of all the scores in a test’s subcomponents. It is this score which is reported to candidates. The score is reported in the range of 0 to 100 by scaling raw scores to standardised scores. While it is worth having a measure of the reliability of each test component, what matters most to candidates and test users is the overall reliability of the whole syllabus. This reliability is called the composite reliability.
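One standard way to obtain a composite reliability for a whole syllabus is the textbook formula for the reliability of a weighted sum of components, which assumes the components' measurement errors are independent. The sketch below uses that formula; the weights, standard deviations, reliabilities and inter-component correlations are invented, and the operational Cambridge English procedure may differ in detail.

```python
# A hedged sketch of composite reliability: the reliability of a weighted
# sum of component scores, assuming independent measurement errors.
# All figures below are invented for illustration.

def composite_reliability(weights, sds, rels, corr):
    """weights, sds, rels: per-component lists; corr: correlation matrix."""
    k = len(weights)
    # Variance of the composite score, from the component SDs and correlations
    comp_var = sum(weights[i] * weights[j] * sds[i] * sds[j] * corr[i][j]
                   for i in range(k) for j in range(k))
    # Error variance contributed by each component: (w * sd)^2 * (1 - reliability)
    err_var = sum((weights[i] * sds[i]) ** 2 * (1 - rels[i]) for i in range(k))
    return 1 - err_var / comp_var

# Four equally weighted papers, e.g. Reading, Writing, Listening, Speaking
weights = [1, 1, 1, 1]
sds     = [5.0, 4.0, 5.0, 4.5]
rels    = [0.88, 0.80, 0.85, 0.82]
corr    = [[1.0, 0.6, 0.7, 0.5],
           [0.6, 1.0, 0.6, 0.6],
           [0.7, 0.6, 1.0, 0.5],
           [0.5, 0.6, 0.5, 1.0]]
rc = composite_reliability(weights, sds, rels, corr)
print(round(rc, 3))
```

Note that the composite comes out higher than any single component's reliability: summing several correlated components averages out some of each component's measurement error, which is why whole-syllabus figures above 0.90 are achievable even when individual papers sit lower.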
The standard error of measurement (SEM) is not a separate approach to estimating reliability, but rather a different way of reporting it. Language testing is subject to the influence of many factors that are not relevant to the ability being measured. Such irrelevant factors contribute to what is called ‘measurement error’. The SEM is a transformation of reliability in terms of test scores. While reliability refers to a group of test takers, the SEM shows the impact of reliability on the likely score of an individual: it indicates how close a test taker’s score is likely to be to their ‘true score’, to within some stated probability. For example, where a candidate receives a score of 67 on a test with an SEM of 3, there is a high probability that their true score is between 64 and 70. This is a very useful piece of information that test users can use in their decision making.
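The transformation from reliability to the SEM is the usual formula SEM = SD × √(1 − reliability). In the sketch below the SD and reliability values are invented, chosen so that the result reproduces the SEM of 3 and the 64–70 band from the worked example above.

```python
# A minimal sketch of the SEM relationship: SEM = SD * sqrt(1 - reliability),
# and the resulting band around an individual's reported score.
# The SD and reliability values are invented for illustration.
import math

def sem(sd, reliability):
    return sd * math.sqrt(1 - reliability)

# e.g. a score scale with standard deviation 10 and reliability 0.91
s = sem(10, 0.91)               # gives an SEM of 3.0
score = 67
band = (score - s, score + s)   # the +/- 1 SEM band around the score
print(round(s, 1), band)
```

A band of ±1 SEM corresponds to roughly 68% confidence under the usual normal-error assumption; widening the band to ±2 SEM raises that to roughly 95%.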
Tables 1–12 below report typical reliability and SEM figures for Cambridge English exams for 2010.
Components: The reliability figures for objective papers are based on raw scores. Speaking is based on inter-rater correlation and Writing is based on g-coefficients. SEM figures are based on raw scores.
Total score: As can be seen from the tables below, the composite reliability for these exams is above 0.90 and the SEM is around 3. These figures demonstrate a high degree of trustworthiness in the overall scores reported.
Table 1: Cambridge English: Key (Key English Test, KET)
Table 2: Cambridge English: Key for Schools (Key English Test for Schools)
Table 3: Cambridge English: Preliminary (Preliminary English Test, PET)
Table 4: Cambridge English: Preliminary for Schools (Preliminary English Test for Schools)
Table 5: Cambridge English: First (First Certificate in English, FCE)
Note: Cambridge English: First for Schools figures are as for Cambridge English: First.
Table 6: Cambridge English: Advanced (Certificate in Advanced English, CAE)
Table 7: Cambridge English: Proficiency (Certificate of Proficiency in English, CPE)
Table 8: Cambridge English: Business Preliminary (Business English Certificate, BEC Preliminary)
Table 9: Cambridge English: Business Vantage (Business English Certificate, BEC Vantage)
Table 10: Cambridge English: Business Higher (Business English Certificate, BEC Higher)
Table 11: Cambridge English: Young Learners (Young Learners English, YLE)
Table 12: TKT (Teaching Knowledge Test)