Research into Language Difficulty

CEFR.AI calibrates language difficulty by combining text complexity with task demand.

CEFR.AI triangulation model: an equilateral triangle with three evidence anchors (professional materials, CEFR.AI Test scores, and linguistic research) feeding the CEFR.AI Calibration Engine.

How We Calibrate

Professional Materials

Texts + tasks from publisher-grade learning materials.

CEFR.AI Placement Scores

Real learner outcomes from CEFR.AI's proprietary assessment.

Linguistic Research

GSE-aligned evidence on vocabulary, grammar, and skills.
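As a rough illustration of the triangulation idea, the sketch below reconciles three level estimates, one per evidence anchor, into a single calibrated level. The function name, the averaging rule, and the example values are assumptions made purely for illustration, not CEFR.AI's published calibration logic.

```python
from statistics import mean

# Ordered CEFR scale used by the toy triangulation rule below.
CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]


def triangulate(materials_level: str, placement_level: str, research_level: str) -> str:
    """Toy reconciliation of three evidence anchors into one level.

    CEFR.AI's real calibration engine is not published here; this simply
    averages the three estimates on the CEFR scale for illustration.
    """
    indices = [CEFR_LEVELS.index(level)
               for level in (materials_level, placement_level, research_level)]
    return CEFR_LEVELS[round(mean(indices))]


# e.g. publisher materials suggest B2, placement outcomes suggest B1,
# and GSE-aligned research suggests B2 -> calibrated estimate B2.
print(triangulate("B2", "B1", "B2"))
```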

Text + Task is the Unit of Difficulty

Text analysis tools are meaningless without considering the demands placed on the reader: a text may be difficult, but if all the learner has to do is identify its topic, the effective demand is low. That is why CEFR.AI is the first open-source research project into language-level difficulty grounded in real-world language use. Find out more on our Research page.
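To make this concrete, here is a minimal sketch, assuming a toy combination rule, of why a text and its task must be scored together. The class, the min-based rule, and the example levels are illustrative assumptions, not CEFR.AI's scoring model.

```python
from dataclasses import dataclass

# Ordered CEFR scale used by the toy combination rule below.
CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]


@dataclass
class TextTaskItem:
    """A text paired with the task a learner must perform on it."""
    text_level: str  # estimated complexity of the text itself
    task_level: str  # demand the task places on the learner

    def overall_level(self) -> str:
        # Toy rule: an undemanding task (e.g. "identify the topic") caps the
        # effective difficulty of even a hard text, so the combined level is
        # bounded by the task demand. The real calibration is more nuanced;
        # this only shows why text and task are scored as one unit.
        text_idx = CEFR_LEVELS.index(self.text_level)
        task_idx = CEFR_LEVELS.index(self.task_level)
        return CEFR_LEVELS[min(text_idx, task_idx)]


# A C1 article with an A2 "find the topic" task demands far less of the
# learner than the text's complexity alone would suggest.
print(TextTaskItem(text_level="C1", task_level="A2").overall_level())  # A2
```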


Open Methodology

Scoring logic, assumptions, and calibration are designed to be inspectable and reproducible as part of the open-source project. We welcome suggestions and contributions from the research community and will coordinate them through the project accordingly.

Platform Direction

CEFR.AI is the core layer. Products can run as first-party apps such as 'Analyse' and 'Level Test', or as external tools powered by CEFR.AI. An API will soon be available so you can freely build your own language app on top of CEFR.AI.
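Once the API is released, usage might look something like the sketch below. The endpoint URL, request fields, and response shape are placeholders, since the actual API has not yet been published.

```python
import requests

# Entirely hypothetical: the CEFR.AI API is not yet released, so the
# endpoint, payload fields, and response shape here are placeholders.
API_URL = "https://api.cefr.ai/v1/analyse"

payload = {
    "text": "The committee deferred its decision pending further review.",
    "task": "Identify the main topic of the passage.",
}

response = requests.post(API_URL, json=payload, timeout=10)
response.raise_for_status()
# Expected to include an estimated CEFR level for the text + task pair.
print(response.json())
```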