By Digital Education Council
January 29, 2026

As universities grapple with artificial intelligence (AI), student perspectives are often referenced but rarely examined in depth.
The Digital Education Council brought together a panel of students from universities across North America, Latin America, Europe, Africa, the Middle East, and Asia Pacific to discuss their perceptions and uses of AI.
The students didn’t disappoint, offering raw, unvarnished anecdotes from their lived experience at their universities. There was much to be hopeful about, and much for institutions across the world to learn, as they navigate the realities of AI in higher education in 2026.
The panels were part of DEC’s series of Executive Briefings, conducted monthly for higher education institutions worldwide.
Moderated by Charlene Chun, Research and Intelligence Associate at DEC, the discussion surfaced candid student perspectives on how AI is reshaping learning in practice.
AI Use Is Defined by Purpose, Not Presence
When asked what makes AI use valuable for students, the panellists agreed that effective AI use depends less on where it is used and more on the intention behind it.
AI was described as helpful for overcoming inertia during deep research and offloading menial tasks, particularly when students feel stuck. However, students agreed that when AI replaces thinking entirely, they risk losing the cognitive processes that build foundational understanding of a subject.
A common pain point was that learning becomes compromised when AI generates outcomes without student ownership.
Students emphasised forming their own thesis and arguments first, then using AI to filter information, test logic, or explore relevance. This sequence was seen as critical for developing key skills such as critical thinking, reasoning, and judgement.
Several students described this as a personal responsibility to stay accountable to their learning. One student shared a simple accountability benchmark: “If I can’t explain my work to a professor without AI’s help, then I didn’t actually learn it.”
From Automation to Cognitive Partnership
Rather than viewing AI as a shortcut, many students find it productive to use AI as a “thinking companion” or a “co-supervisor.” AI tools were used to challenge assumptions, refine arguments, and introduce alternative perspectives.
One student explained how he uses AI to analyse his professor’s published work, prompting AI to break down how arguments are constructed and how academic writing is structured.
Panellists also highlighted the value of AI for accessibility and inclusion, particularly for neurodivergent learners.
AI was described as helping to declutter thoughts and reduce cognitive overload. Practical uses included generating flashcards across multiple topics or restructuring material in formats that made it easier to focus on learning, rather than managing anxiety or time pressure.
Institutional Silence Drives AI Use Underground
One of the strongest tensions raised during the panel was institutional silence around AI use. When AI is not openly acknowledged in the classroom, students continue using it “discreetly” without guidance or shared expectations.
As one student noted: “When institutional guidelines are unclear, students tend to rely more on personal ethics than formal rules. This means that every student is quietly creating their own internal boundary.”
This ambiguity was described as creating peer tension between students who interpret unspoken norms conservatively and those who use AI more freely.
Students called for greater transparency through normalised AI-use disclosures, rather than an overreliance on AI detection tools.
Many pointed to the unreliability of current AI detectors, citing instances where original work was incorrectly flagged as AI-generated and the friction this created among students.
They also noted a persistent misconception among both peers and educators that AI-assisted work is “inauthentic”, reinforcing the need to demystify AI use within academic settings.
Overall, students viewed openness as essential to building a positive learning-centred culture around AI, rather than one shaped by fear and surveillance.
What Students Want to Learn Next
Students were candid that much of their AI learning still happens informally through trial and error.
While AI is increasingly present in the classroom, structured guidance on its ethical, critical, and responsible use remains limited, leaving students to navigate these questions independently.
Many expressed concern that current curricula lag behind the pace of real-world AI adoption.
Beyond prompt engineering, which currently dominates most institutional training, students expressed a desire to learn how to evaluate AI outputs for accuracy, bias, and hallucinations.
As one student explained: “We’re learning how to ask AI questions, but not how to analyse its outputs.”
A recording of the Briefing is available for all Digital Education Council members.
Participating Member Institutions:
Ajman University (UAE), Ateneo de Manila University (Philippines), EDHEC Business School (France), Imperial College London (UK), Northeastern University (USA), Singapore Management University (Singapore), Stellenbosch University (South Africa), The Hong Kong University of Science and Technology (Hong Kong, China), the University of British Columbia (Canada), University of New South Wales (Australia), University of Nottingham Ningbo China (China), and Universidad Peruana de Ciencias Aplicadas (Peru).