ISSN Number

2632-6779 (Print)  

2633-6898 (Online)


Journal Index 2022-1

Equating Rasch Values and Expert Judgement Through Externally-Referenced Anchoring

Tony Lee
LanguageCert, UK


Michael Milanovic
LanguageCert, UK


Nigel Pike
LanguageCert, UK

 

Abstract
This paper reports on the use of externally-referenced anchoring by LanguageCert as a methodology for calibrating language test materials and aligning test forms. The datasets used are taken from tests at each of the six levels of the LanguageCert IESOL suite, all of which have been aligned to the CEFR through expert judgement. We illustrate the extent to which externally-referenced anchoring, using Item Response Theory (IRT) but grounded in expert judgement, can serve as an effective, reliable and valid methodology. The approach is based on the premise that successful anchoring may be achieved by reference to well-targeted, expertly written test forms that have been aligned to the underlying traits of a particular CEFR level by expert judgement and verified through the use of IRT.


This study focuses on the analysis of 18 LanguageCert test forms, three at each CEFR level. The LanguageCert Item Difficulty (LID) scale, which underlies all LanguageCert test materials, is linked empirically to the CEFR, and each test was placed on the LID scale at the midpoint of its item-difficulty distribution. This midpoint was then used as the externally-referenced anchor for the given CEFR level.
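The midpoint-anchoring step described above can be sketched in a few lines: shift a form's Rasch item difficulties so that the midpoint of their distribution coincides with the external anchor for the target CEFR level. The function name, the logit values, and the anchor value below are illustrative assumptions, not LanguageCert's actual method or data.

```python
import statistics

def anchor_to_external_midpoint(item_difficulties, external_anchor):
    """Shift Rasch item difficulties (in logits) so that the midpoint
    (here, the median) of the form's difficulty distribution coincides
    with the externally referenced anchor value for the target CEFR level.

    All names and values are hypothetical illustrations."""
    midpoint = statistics.median(item_difficulties)
    shift = external_anchor - midpoint
    return [d + shift for d in item_difficulties]

# Hypothetical logit difficulties for one test form, anchored to a
# hypothetical external anchor of 0.0 for its CEFR level.
form = [-1.2, -0.4, 0.1, 0.6, 1.3]
anchored = anchor_to_external_midpoint(form, 0.0)
```

After the shift, relative distances between items are preserved; only the origin of the scale moves, which is what allows forms calibrated separately to be compared on a common externally referenced scale.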


The findings of this study indicate that, while the match between the distribution of items in the selected LanguageCert IESOL tests and the LID scale was not perfect, a relatively close match was generally found between the items in the tests and the LID scale and, consequently, the corresponding CEFR level. In each test, most of the items fell between the 25th and 75th percentiles of the given level, this range representing the lower and upper bounds of LID scale values for each CEFR level. These results demonstrate that LanguageCert IESOL test items are well constructed and appropriately positioned at their respective CEFR levels on the basis of expert judgement. The study illustrates that externally-referenced anchoring based on expert judgement may be used as a methodology for aligning test forms to an external frame of reference, in this case the CEFR.
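The percentile-band check reported above can be sketched as a simple coverage calculation: given a form's anchored item difficulties and the lower and upper LID-scale bounds for a level (the 25th and 75th percentiles), compute the proportion of items falling inside the band. The band values and item difficulties below are hypothetical, not the paper's actual LID-scale figures.

```python
def proportion_within_band(item_difficulties, lower, upper):
    """Fraction of a form's item difficulties falling between the
    lower (25th-percentile) and upper (75th-percentile) bounds of a
    CEFR level on the LID scale. Bounds here are hypothetical."""
    inside = [d for d in item_difficulties if lower <= d <= upper]
    return len(inside) / len(item_difficulties)

# Hypothetical anchored difficulties and hypothetical band bounds
# for one CEFR level.
form = [-1.3, -0.5, 0.0, 0.5, 1.2]
coverage = proportion_within_band(form, -0.6, 0.6)
```

A coverage value close to 1.0 would indicate that most items sit within the target level's band, the pattern the study reports for the majority of items in each form.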


Keywords
Externally-referenced anchoring, calibrating test materials, aligning test forms, Rasch