Beyond explainability: didactic controllability and task density as foundations for responsible learning with AI

Introduction - when revision masks displacement

A student turns in a revised essay. The argument flows. The tone is calibrated. The structure is tight. Yet when asked to explain her revisions, she pauses. “The tool helped,” she says. “It just sounded better.”

There is no deception here. No misconduct. Only a quiet absence of authorship.

As intelligent systems become embedded in everyday learning - offering corrections, rephrasings, summaries, even prompts for reflection - the quality of student work often improves. But in that improvement, something crucial may be lost: the visible trace of human deliberation. The voice that can say, “I chose this, and here is why.”

In educational contexts, the shift is subtle yet profound. We are no longer confronting the binary of originality versus plagiarism, but the murkier question of attribution. Not who wrote the sentence, but who thought the thought. And when the answer becomes unclear, so too does the meaning of learning.

Existing tools excel at assessing what has been produced. What we lack are conceptual instruments to ask: Who was cognitively present in the making of this? Who evaluated alternatives, resolved ambiguity, justified their moves - not merely in the product, but in the process?

This paper introduces two such instruments. The first, didactic controllability, refers to the educator’s capacity to verify whether a learner has retained decision-making authority over their own thinking. The second, task density, aims to render visible what often remains hidden: the distribution of cognitive effort between learner and system.

Together, these constructs suggest a new mode of educational interpretation. Not focused on outcomes alone, but on the ownership embedded - or displaced - within them.

Learning requires ownership, not just output

To learn is not simply to produce. It is to engage, to wrestle, to choose. Decades of educational research affirm this. Zimmerman’s work on self-regulation frames learning as a process of goal-setting, monitoring, and reflection. Deci and Ryan’s theory of self-determination grounds motivation in autonomy, not compliance. Learning, in this view, is not what happens when an answer is correct - it is what happens when the answer becomes yours.

This lens casts a different light on AI-supported performance. When learners rely on systems that rephrase sentences, reorder arguments, or recommend stylistic improvements, the surface may indeed become more polished. Yet the polish tells us little about the underlying engagement. The learner may have accepted changes, but did they understand them? Did they evaluate, or simply approve?

Here, the concept of didactic controllability becomes crucial. It is not a technical parameter, but a pedagogical concern: can the learner reconstruct the path that led to the product? Can they explain, justify, or revise independently of the system? If not, controllability is lost - not because help was offered, but because ownership quietly transferred.

Task density complements this perspective. It does not measure time spent, but cognitive displacement. When the core operations of a task - analysis, reasoning, integration - are carried out predominantly by the system, the role of the learner shifts from agent to curator. Even active selection among system-generated options can obscure the fact that the cognitive groundwork was not theirs.

This does not mean all system support undermines learning. But it does mean that support must be interpreted through the lens of contribution. A system may assist - but if it displaces too much, the signal of learning is replaced by a signal of efficiency.

Educational tools, then, must not only be judged by what they produce, but by what they preserve: the learner’s ability to think, to decide, and to reflect as the source of their own understanding.

A model for interpretation, not prediction

In a field increasingly shaped by data-driven metrics, it is tempting to seek predictive certainty: Will this tool improve outcomes? Will it raise scores? Yet prediction is not interpretation, and in education, the difference matters.

The model proposed here resists classification schemes and fixed rubrics. It is not designed to rank tools or automate judgments. Instead, it functions as an interpretive framework - an invitation to observe, question, and reason through the ways AI mediates learning. It asks not what happened, but how, why, and with whom.

This reasoning unfolds across four interrelated dimensions:

Clarity of context. The starting point is language. Descriptions of tasks, tools, goals, and learner roles must be semantically clear, contextually grounded, and conceptually coherent. When these elements are vague or misaligned, any subsequent interpretation becomes unstable. The model suspends judgment in such cases, prioritizing input quality over premature conclusions.

Traceability of thought. At the heart of the framework is a concern with how decisions are made. What options were presented? What rationale guided the learner’s choices? Were these choices reactive or deliberative? The model does not assume that every action must originate with the learner - but it does ask whether the learner could retrace it, independently and with reason.

Distribution of execution. Rather than measuring engagement through superficial activity, the model examines the cognitive substance of participation. Who generated the insight? Who performed the integration? Who evaluated alternatives? When a learner delegates these functions to a system, the model flags the risk - not as a failure, but as a potential loss of pedagogical depth.

Integrity of the learning ecology. Finally, the model examines the ethical and structural conditions under which learning occurs. Are system interventions transparent? Is human oversight possible? Can decisions be reconstructed? If these elements are absent, the pedagogical validity of the tool must be reconsidered. In such cases, the model refrains from recommending usage without explicit human justification.

Each of these dimensions is designed not to control learners, but to support educators as interpreters of cognitive presence. The goal is not to eliminate AI, but to understand its place - and its limits - within the learning process.
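
To make these dimensions easier to work with, the sketch below encodes them as a simple checklist an educator might walk through for a single piece of work. It is a minimal, hypothetical illustration: the names (LearningEpisode, interpret, and the individual fields) are illustrative assumptions rather than part of the framework, and in keeping with the model's interpretive intent it returns questions and flags, never a score.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical encoding of the four dimensions described above. The framework
# itself is interpretive; this sketch only makes its stated decision rules explicit.

@dataclass
class LearningEpisode:
    # Clarity of context: are task, tool, goals, and learner role clearly described?
    context_is_clear: bool
    # Traceability of thought: could the learner retrace the decisions, with reasons?
    learner_can_retrace: bool
    # Distribution of execution: which core operations did each party carry out?
    system_performed: List[str] = field(default_factory=list)   # e.g. ["analysis", "integration"]
    learner_performed: List[str] = field(default_factory=list)  # e.g. ["evaluation"]
    # Integrity of the learning ecology: transparency and oversight conditions.
    interventions_transparent: bool = True
    human_oversight_possible: bool = True

def interpret(episode: LearningEpisode) -> List[str]:
    """Return interpretive notes for the educator, not a score or a ranking."""
    notes: List[str] = []

    # Clarity of context: suspend judgment rather than interpret unstable input.
    if not episode.context_is_clear:
        return ["Context unclear: suspend judgment until the task description is clarified."]

    # Traceability of thought: can the learner reconstruct the path independently?
    if not episode.learner_can_retrace:
        notes.append("Traceability at risk: ask the learner to reconstruct the reasoning.")

    # Distribution of execution: flag displacement of core cognitive operations.
    if len(episode.system_performed) > len(episode.learner_performed):
        notes.append("Possible displacement: core operations carried out mainly by the system.")

    # Integrity of the learning ecology: require explicit human justification.
    if not (episode.interventions_transparent and episode.human_oversight_possible):
        notes.append("Ecology concern: do not recommend usage without explicit human justification.")

    return notes or ["No flags raised: ownership appears to rest with the learner."]
```

The point of such a sketch is not automation. Each note is a prompt for a conversation, and the final judgment remains with the educator.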

Supporting educators as cognitive interpreters

Teaching in the presence of AI is no longer about withholding tools. It is about seeing clearly what remains invisible to most metrics: the student’s thinking. If learning is to be more than performance, educators must shift from scoring products to tracing cognition.

This interpretive shift requires practical footholds. What does it mean, in a real classroom, to attend to agency and attribution?

Consider a student who submits a rephrased argument suggested by an AI editor. The surface has improved - but does the student understand why the change works? One strategy is trace-back prompting: asking the learner to reconstruct the reasoning behind a revision. Not just to identify what changed, but to articulate its purpose.

Another strategy is justification mapping. When students integrate AI-generated suggestions - be they formulations, summaries, or feedback - they annotate them with their own rationale. What did they accept, what did they reject, and why? This dual authorship makes visible what would otherwise remain obscured: the boundary between assistance and understanding. One possible way to record such a map is sketched after the third strategy below.

A third approach is controlled detachment. Temporarily removing the AI support for comparable tasks can reveal the degree of dependency. If learners falter not in language, but in logic - unable to synthesize or decide without prompts - the educator gains a diagnostic insight: the tool has replaced thinking rather than supported it.
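
As one way to picture justification mapping in practice, the sketch below records each AI-derived suggestion alongside the student's decision and stated reason, so that unexplained adoptions stand out. The record format and the names JustificationEntry and missing_rationales are hypothetical; in a classroom, the same map can live in a margin note or a shared document.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical record for justification mapping: one entry per AI-derived
# suggestion, annotated by the student with a decision and a rationale.

@dataclass
class JustificationEntry:
    suggestion: str           # the AI-generated formulation, summary, or feedback
    accepted: bool            # did the student keep it?
    rationale: Optional[str]  # the student's own reason for accepting or rejecting

def missing_rationales(entries: List[JustificationEntry]) -> List[JustificationEntry]:
    """Return entries adopted or rejected without a stated reason - the places
    where the boundary between assistance and understanding stays hidden."""
    return [e for e in entries if not e.rationale]

# Example: two accepted suggestions, one of them never justified.
log = [
    JustificationEntry("Move the claim ahead of the evidence in paragraph two.", True,
                       "The argument now states its point before defending it, as my outline intended."),
    JustificationEntry("Replace 'very important' with 'decisive'.", True, None),
]
print(len(missing_rationales(log)))  # prints 1
```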

These interventions do not aim to punish AI use. They aim to preserve accountability. The educator is not a gatekeeper but a witness - someone who can say, with clarity, whether the learner still owns the process.

Interpretation, in this model, is not a luxury. It is a form of pedagogical vigilance. It allows educators to see what the dashboard does not: the presence - or absence - of authentic cognitive engagement.

Reframing the educator’s role: from scoring to diagnosing

The entrance of generative systems into education does not diminish the role of the educator - it demands its transformation. When learners produce polished, fluent, and even insightful outputs with the aid of AI, traditional assessment rubrics begin to blur. Accuracy is no longer a reliable proxy for understanding. Fluency no longer guarantees depth.

In this shifting terrain, the educator’s task becomes diagnostic rather than judgmental. Instead of asking, “Is this correct?”, the more pertinent question becomes: Can the learner explain how this came to be?

This reframing moves beyond detecting misuse. It turns toward identifying displacement - those moments when a learner appears engaged, but has silently relinquished the work of thinking to the system. It also invites a finer kind of skepticism: not about the tool’s function, but about its cognitive implications.

Was this insight constructed or accepted? Was this argument shaped or suggested? Could the learner arrive here again without assistance?

To answer such questions, the educator must cultivate interpretive fluency. Not in the workings of algorithms, but in the signs of ownership: explanation, resistance, revision, and transfer. These are the markers of learning that endure beyond the task.

AI may elevate the surface. But unless the learner can navigate the substance, no sustainable learning has occurred. The educator’s role is to make that distinction visible - not as an auditor, but as a guide who brings thinking back into view.

Conclusion - toward visible learning in invisible systems

The rise of AI in education is not, at its core, a technological event. It is an epistemic shift. Systems that once supported discrete tasks now mediate entire learning processes. Their interventions are subtle, their influence distributed, and their boundaries - between aid and automation - increasingly porous.

This shift creates a new kind of invisibility. Not of action, but of authorship. Outputs may appear seamless, but the path toward them is often obscured. We see the product, not the process. The improvement, not the intention.

The framework presented here is not a solution. It is a lens - one that reframes how educators, designers, and learners might think about responsibility in the age of intelligent systems. It argues that two concepts - didactic controllability and task density - can help restore what automation tends to conceal: the learner’s voice, agency, and presence in their own learning.

These concepts do not oppose technological advancement. They align it with pedagogical integrity. They offer a way to engage with AI not as a threat to education, but as a context that demands new forms of observation, reflection, and design.

Ultimately, the model invites one central question to guide the use of AI in learning environments. Not: Is this accurate? Not even: Is this original? But rather: Who thought here?

That question, if sustained, may prove more transformative than any tool.
