Open Methodology
Scoring logic, assumptions, and calibration strategy are designed to be inspectable and reproducible.
Language Difficulty Infrastructure
CEFR.AI is moving from text-only scoring to a calibrated framework that measures task demand alongside text complexity. The goal is transparent, open, and verifiable difficulty ratings for language learning products.
We calibrate the CEFR.AI difficulty rating algorithm by triangulating three sources of evidence: placement outcomes from real learners, professionally authored text-and-task materials, and linguistic research on vocabulary, grammar, and skills difficulty.
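As an illustration only, the triangulation above can be sketched as a weighted combination of three normalized difficulty signals mapped onto a CEFR band. Everything here — the weights, the equal-width band mapping, and the function names — is a hypothetical assumption, not the actual CEFR.AI algorithm.

```python
# Hypothetical sketch of triangulating three difficulty signals.
# Weights and band thresholds are illustrative, not CEFR.AI's real values.

CEFR_BANDS = ["A1", "A2", "B1", "B2", "C1", "C2"]

def triangulate(placement: float, materials: float, research: float,
                weights=(0.5, 0.3, 0.2)) -> str:
    """Combine three difficulty estimates, each normalized to [0, 1],
    into a single CEFR band via a weighted average."""
    signals = (placement, materials, research)
    for s in signals:
        if not 0.0 <= s <= 1.0:
            raise ValueError("each signal must be normalized to [0, 1]")
    score = sum(w * s for w, s in zip(weights, signals))
    # Map the combined score onto six equal-width CEFR bands.
    index = min(int(score * len(CEFR_BANDS)), len(CEFR_BANDS) - 1)
    return CEFR_BANDS[index]
```

A real calibration would replace the fixed weights with parameters fit against learner placement data, but the shape of the computation — normalize, weight, combine, band — stays the same.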
CEFR.AI is the core layer. Products can run as first-party apps or external tools powered by CEFR.AI.