By Digital Education Council
April 15, 2026
AI is moving from experimental use to embedded practice in higher education, but a coherent institutional response remains uneven.
Despite widespread adoption among students and faculty, programme-level integration of AI is still largely fragmented. Many efforts are confined to scattered experimentation rather than a coordinated, strategic approach.
Institutions are under pressure to respond to growing student use and employer expectations. Yet there is little agreement on what “AI readiness” should mean in practice, or how it should be applied across disciplines.
A number of distinct models are beginning to emerge, reflecting different ways in which humans and AI collaborate to perform tasks.
At one end, AI is used as a human-guided tool supporting activities such as summarisation, content creation and analysis under clear human direction. This remains the most common form of adoption across teaching and learning.
A second model involves collaborative human–AI workflows, where individuals work alongside AI systems to complete tasks. In these settings, AI may take on a more active role in execution, while human users provide direction, oversight and judgement.
A third, less widespread model is beginning to emerge in more advanced contexts, where AI systems operate more autonomously, carrying out parts of a workflow with limited human supervision. Here, human involvement shifts towards designing the workflows up front, then monitoring them and intervening where needed.
Institutions are beginning to translate these models into programme design, preparing students for human–AI collaboration in the workforce.
For instance, DEC member Northeastern University has focused on embedding AI fluency across programmes and disciplines, shifting from standalone AI courses to a model where AI capability is developed throughout the full curriculum. Its approach combines classroom instruction with applied learning through project-based work and industry placements.
During DEC Executive Briefing #027, Javed (Jay) Aslam, Chief of Artificial Intelligence at Northeastern University, pointed to this approach as a response to how work is evolving: “We need to prepare our students for a workplace with co-intelligence, where we'll be working with and alongside AI tools.”
To support this, Northeastern drew on the DEC AI Literacy Framework as a reference point and adapted it to its own context. Implementation was structured as a faculty-led, bottom-up effort, with representatives across colleges defining what AI readiness should look like within their disciplines.
Aslam explains: “AI readiness is going to be inherently discipline-specific, but we really needed an overarching framework that would be common across all colleges and programmes.”
This was operationalised through programme-level mapping, where faculty reviewed the courses within each programme to identify which aspects of AI literacy and fluency could be further developed.
The approach is designed to evolve, with input from industry and applied learning environments shaping its development.
For now, most institutions' AI strategies remain at an early stage. But as AI becomes more embedded in professional work, the pressure to move beyond experimentation towards more systematic approaches is likely to grow.
To watch the full Executive Briefing #027, log in to the DEC Member Area.