Open Source Research into Language Difficulty.

CEFR.AI calibrates language difficulty by combining text complexity with task demand.

How We Calibrate

We triangulate the CEFR.AI difficulty rating algorithm against three evidence sources: placement outcomes from real learners, professional-grade text + task materials, and linguistic research on the difficulty of vocabulary, grammar, and skills.
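To make the idea of triangulation concrete, here is a minimal sketch of how three independent difficulty estimates could be blended into one rating. The function name, weights, and 0–1 scale are illustrative assumptions, not CEFR.AI's actual calibration engine.

```python
# Illustrative sketch only: one way to triangulate a difficulty rating
# across three evidence sources. All names and weights are assumptions,
# not CEFR.AI's real algorithm.

def triangulate(materials: float, placements: float, research: float,
                weights: tuple[float, float, float] = (1.0, 1.0, 1.0)) -> float:
    """Weighted average of three difficulty estimates, each on a 0..1 scale:
    professional materials, placement scores, and linguistic research."""
    total = sum(weights)
    return (weights[0] * materials
            + weights[1] * placements
            + weights[2] * research) / total

# Example: placement evidence pulls the combined estimate down slightly.
estimate = triangulate(0.70, 0.60, 0.68)
print(round(estimate, 2))  # 0.66
```

In practice each anchor would carry its own reliability weight; the equal weights here are purely for illustration.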

[Diagram: CEFR.AI triangulation model — a triangle with three evidence anchors (Professional Materials, Placement Scores, Linguistic Research) feeding the CEFR.AI Calibration Engine.]

Professional Materials

Texts + tasks from publisher-grade learning materials.

Placement Scores

Real learner outcomes from texts + tasks in production.

Linguistic Research

GSE-aligned evidence on vocabulary, grammar, and skills.

Text + task is the unit of difficulty.

Text analysis tools are meaningless without considering the demands placed on the reader. A text may be linguistically difficult, but if all the task requires is identifying the topic, the overall demand on the learner is low. That is why CEFR.AI is the first open-source research project into language-level difficulty based on real-world use of language. Find out more on our Research page.
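The point above — that the same text yields different difficulty depending on the task — can be sketched as follows. The band list and the blending formula are hypothetical assumptions for illustration only, not CEFR.AI's published method.

```python
# Hypothetical sketch: combining text complexity with task demand.
# The weights, scale, and band mapping are illustrative assumptions,
# not CEFR.AI's actual scoring logic.

CEFR_BANDS = ["A1", "A2", "B1", "B2", "C1", "C2"]

def combined_difficulty(text_complexity: float, task_demand: float,
                        task_weight: float = 0.5) -> str:
    """Blend text complexity and task demand (each on a 0..1 scale)
    into a single CEFR band. The text + task pair, not the text alone,
    is treated as the unit of difficulty."""
    score = (1 - task_weight) * text_complexity + task_weight * task_demand
    index = min(int(score * len(CEFR_BANDS)), len(CEFR_BANDS) - 1)
    return CEFR_BANDS[index]

# A difficult text (0.9) with an easy task (identify the topic, 0.2)
# lands in a lower band than the text complexity alone would suggest.
print(combined_difficulty(0.9, 0.2))  # B2
print(combined_difficulty(0.9, 0.9))  # C2
```

Note how the easy task pulls a C2-level text down to B2 in this toy model — the text-only score would have placed it at the top band.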

Open Methodology

Scoring logic, assumptions, and calibration strategy are designed to be inspectable and reproducible.