Language Difficulty Infrastructure

Text + task is the unit of difficulty.

CEFR.AI is moving from text-only scoring to a calibrated framework that measures task demand alongside text complexity. The goal is transparent, open, and verifiable difficulty ratings for language learning products.
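To make the "text + task" unit concrete, here is a minimal sketch of how a rated unit might pair text complexity with task demand. All names, weights, and the scoring formula are hypothetical illustrations, not the CEFR.AI algorithm.

```python
from dataclasses import dataclass

CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

@dataclass
class RatedUnit:
    """A text paired with a task; difficulty is rated for the pair, not the text alone."""
    text_complexity: float  # 0.0 (easiest) .. 1.0 (hardest), from text features
    task_demand: float      # 0.0 .. 1.0, from the task (e.g. skim for gist vs. summarize)

    def difficulty(self) -> float:
        # Illustrative combination: the same text is harder under a more demanding task.
        return min(1.0, 0.6 * self.text_complexity + 0.4 * self.task_demand)

    def cefr_band(self) -> str:
        # Map the 0..1 score onto the six CEFR bands.
        idx = min(int(self.difficulty() * len(CEFR_LEVELS)), len(CEFR_LEVELS) - 1)
        return CEFR_LEVELS[idx]

# Same text, two tasks: gist reading vs. written summary.
gist = RatedUnit(text_complexity=0.5, task_demand=0.2)
summary = RatedUnit(text_complexity=0.5, task_demand=0.9)
```

With this toy formula, an identical text lands in a lower band under the gist task than under the summary task, which is the point of rating text + task together.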

How We Calibrate

We calibrate the CEFR.AI difficulty rating algorithm by triangulating three evidence streams: placement outcomes from real learners, professional-grade text+task materials, and linguistic research on the difficulty of vocabulary, grammar, and skills.

[Diagram: CEFR.AI triangulation model. Three evidence streams feed and calibrate the CEFR.AI difficulty rating algorithm: Placement Scores (real users completing texts + tasks in production), Professional Materials (calibrated texts + tasks from publisher-grade sources), and Linguistic Research (GSE-informed evidence on vocabulary, grammar, and skills).]
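One standard way to pool multiple evidence streams, sketched here purely for illustration, is precision weighting: streams with tighter evidence pull the combined rating harder. The function, the per-stream numbers, and the variances below are all hypothetical assumptions, not CEFR.AI's actual calibration.

```python
def triangulate(estimates):
    """Precision-weighted (inverse-variance) pooling of per-stream difficulty estimates.

    estimates: list of (difficulty, variance) pairs, one per evidence stream.
    Lower variance = more trusted stream = larger weight in the pooled rating.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    return sum(w * d for (d, _), w in zip(estimates, weights)) / total

# Hypothetical estimates for one text+task unit, difficulty on a 0..1 scale:
streams = [
    (0.62, 0.01),  # placement scores: many real learner outcomes, low variance
    (0.55, 0.04),  # professional materials: publisher-grade calibration
    (0.70, 0.09),  # linguistic research: GSE-informed vocab/grammar evidence
]
pooled = triangulate(streams)
```

Here the placement-score stream dominates because it has the lowest variance, so the pooled rating lands close to 0.62 rather than the unweighted mean.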

Open Methodology

Scoring logic, assumptions, and calibration strategy are designed to be inspectable and reproducible.

Platform Direction

CEFR.AI is the core layer. Products can run as first-party apps or external tools powered by CEFR.AI.