
What Are We Actually Doing as Universities? Rethinking Assessment in the Age of AI

By Digital Education Council

July 3, 2025

“We probably need to start from the learning outcomes of what we actually want student capabilities to look like in two years, in five years and in 10 years’ time.”

Danny Liu, Professor of Educational Technologies at the University of Sydney, shared his vision during the Digital Education Council (DEC) Executive Briefing #018: The Next Era of Assessment.

Speaking with Danny Bielik, President of DEC, Professor Liu explored how universities can redesign assessment to build real capabilities in graduates and respond to the challenges and opportunities presented by artificial intelligence.

From Discussion to Action

Despite widespread debate, Liu noted that universities have been slow to move beyond discussion. “I think it's been two and a half years of a lot of talking but not enough action,” he said.

For Liu, real progress means focusing on what students need to thrive. 

“It's still really critical for students to develop capabilities themselves and not to have AI come in and replace these things,” he explained, emphasising the importance of designing assessments that actively build student skills and independence.

Redesigning Assessment with Purpose

Liu urges institutions to start with the end in mind. 

“Everything flows from the learning outcomes, a good kind of backwards design.” 

He recommends universities define the capabilities graduates will need in the years ahead, and design assessments that build towards these goals. This means actively revisiting and updating learning outcomes as pedagogy evolves.

As a practical starting point, Liu suggests universities workshop learning outcomes across faculties, break down silos, and ensure assessments measure what matters for future-ready graduates, not just what’s easy to test.

Building and Maintaining Trust

As universities incorporate AI into assessment, Liu warns that trust must not be lost. 

“Students come to us as human teachers because they have a certain level of trust in what we do. And part of building that trust is to read their work, look at their work, and to evaluate their work and also give them feedback on their work so they know how to get better.” 

Liu cautions that relying too much on AI for grading could break this trust and undermine the human connection at the heart of meaningful learning.

He warns that outsourcing grading to AI could set off a harmful cycle: 

“Then it'll be AI making work, AI grading work, everyone goes home and no one benefits or learns anything.”

Looking Ahead

Liu challenges universities to look beyond short-term fixes and confront the bigger questions at the heart of higher education: “Because if we don't answer those questions, we risk five years into the future thinking, what are we actually doing as universities?”
