Language frameworks are not optional in language learning; they are the minimum structure required for reliable decisions. If we want to match learners with texts and tasks that are challenging but manageable, we need shared standards for what “difficulty” means.
This note is a simple introduction to language frameworks: why they exist, where they came from, and why CEFR.AI selected CEFR plus GSE as its working model.
Why Language Frameworks Exist
A language framework gives structure to a problem that is otherwise too subjective. It provides:
- Shared definitions of proficiency.
- Common labels for placement and progression.
- Descriptor-based targets for curriculum and assessment design.
Without this structure, “easy” and “hard” become subjective labels that do not transfer across classrooms, tools, or institutions.
A Brief History
For decades, systems such as ILR and ACTFL provided important proficiency guidance in specific regions and use cases. CEFR, published by the Council of Europe in 2001 and later expanded through the Companion Volume, became the most globally portable reference for general language proficiency.
That portability is critical for an open methodology: a framework should be widely understood, externally documented, and stable enough to support longitudinal calibration work.
Why We Selected CEFR as the Baseline
We chose CEFR as the baseline because it offers:
- A globally recognized A1-C2 structure.
- Public, descriptor-based proficiency definitions.
- Strong compatibility with existing teaching materials and placement workflows.
In short, CEFR gives us the broad reference language we need to communicate levels clearly.
The Granularity Problem Inside CEFR Bands
When we moved from broad reporting to practical calibration, we ran into a common issue: CEFR bands are broad. A learner who has just entered B2 and one who is approaching C1 carry the same label, despite a substantial gap in ability.
Many tools and schools solve this by introducing local sublevels such as B2.1, B2.2, B2.3, and B2.4. We researched these schemes because they reflect a real operational need: educators want smaller steps than a single B2 label.
The issue is not the idea of sublevels. The issue is consistency. B2.2 in one system may not map cleanly to B2.2 in another system, which creates ambiguity across products and institutions.
Why We Selected GSE for Finer Resolution
After evaluating ad-hoc sublevel approaches, we selected GSE as our fine-grained scale.
GSE gives us:
- A continuous 10-90 range with smaller progress steps.
- Published alignment to CEFR bands.
- A standardized alternative to provider-specific sublevel naming.
Practically, this means we can communicate in CEFR (A2, B1, B2) while calibrating with finer distinctions where needed.
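As a concrete illustration of that dual-scale idea, here is a minimal sketch of converting a GSE score into a broad CEFR label. The cut points below are approximate and included only for illustration; Pearson's published GSE-CEFR alignment is the authoritative source, and the function name is ours, not part of any official API.

```python
# Illustrative GSE -> CEFR band lookup. Boundary values are approximate
# and assumed for this sketch; consult Pearson's published alignment
# for authoritative cut points.
GSE_TO_CEFR = [
    (10, 21, "<A1"),
    (22, 29, "A1"),
    (30, 35, "A2"),
    (36, 42, "A2+"),
    (43, 50, "B1"),
    (51, 58, "B1+"),
    (59, 66, "B2"),
    (67, 75, "B2+"),
    (76, 84, "C1"),
    (85, 90, "C2"),
]

def gse_to_cefr(score: int) -> str:
    """Return the CEFR label whose (assumed) band contains a GSE score."""
    if not 10 <= score <= 90:
        raise ValueError("GSE scores run from 10 to 90")
    for low, high, label in GSE_TO_CEFR:
        if low <= score <= high:
            return label
    raise AssertionError("bands cover the full 10-90 range")
```

With a table like this, internal work can stay on the continuous scale while any external report collapses to the familiar band label.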
CEFR + GSE in Practice
Our framework layer is simple:
- CEFR for broad proficiency communication.
- GSE for finer internal resolution and progression tracking.
This gives us a common public language and enough precision for day-to-day decision making.
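The two-layer split above can be sketched as a small data model: a learner record stores a fine-grained GSE score internally and derives its public CEFR label from it. The class name, field names, and band edges here are hypothetical illustrations, not CEFR.AI's actual schema.

```python
# A minimal sketch of two-layer reporting: fine-grained GSE internally,
# broad CEFR label publicly. Band edges are approximate and assumed.
from dataclasses import dataclass

@dataclass
class LearnerLevel:
    gse: int  # internal resolution, 10-90

    @property
    def cefr(self) -> str:
        """Broad public label derived from assumed GSE cut points."""
        bands = [(22, "A1"), (30, "A2"), (43, "B1"),
                 (59, "B2"), (76, "C1"), (85, "C2")]
        label = "<A1"
        for low, name in bands:
            if self.gse >= low:
                label = name
        return label

# Two snapshots of the same learner: the public CEFR label is unchanged,
# but the internal GSE score shows measurable progress within the band.
before = LearnerLevel(gse=60)
after = LearnerLevel(gse=65)
```

This is the practical payoff of the pairing: progression tracking does not stall just because a learner has not yet crossed a band boundary.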
Closing
Language frameworks are foundational infrastructure for serious language work. CEFR gives global interoperability; GSE gives finer operational precision. That combination is why we selected CEFR + GSE rather than relying on local sublevel conventions such as B2.1-B2.4.
Looking Ahead
This site is actively investigating granularity itself. We build on GSE because it offers a strong, practical fine-grained scale, but we treat that scale as a starting point for research rather than a final answer.
Two questions matter for future work:
- Are all points on a 10-90 scale equally meaningful in real learner progression?
- Are there compressed zones or gaps that become visible when we test against real speaker performance data?
Our long-term goal is not simply to adopt granularity, but to test which granularity is educationally meaningful and empirically stable.
For more information, explore the CEFR Companion Volume or the GSE Resources from Pearson.
Note: Scoring by CEFR.AI uses the Global Scale of English and its components © Pearson as one component in its algorithm. Scores given here are not validated by Pearson.