Designing AI-Resistant Assessments Using AI: Moving Beyond Invigilation to Authentic Problem-Solving

Andrew Komoder, Senior Learning Experience Designer at Western Sydney University

Description

Domain:
Curriculum & Learning Design
Challenge Area:
Curriculum Coherence and Alignment
Status:
Emerging Practice (pilots and experimental practices)
Implementation Complexity:
Medium

Rather than retreating to invigilated exams in response to generative AI, this practice demonstrates how to use AI as a design partner to create assessments that are inherently resistant to AI cheating. 

By developing scenario-based applied challenges that require contextual problem-solving, document analysis, and synthesis of multiple concepts, assessments become far harder to shortcut with generic AI prompts. Using a custom AI agent embedded with pedagogical frameworks, one learning designer created varied, realistic workplace scenarios at scale for a 300-hour microcredential. This approach demonstrates that the answer to AI in assessment is not restriction but redesigning what and how we assess.

Practical Implementation

This practice was applied in the development of a government-funded 300-hour microcredential in Disability Access and Participation in Healthcare. Working as the sole learning designer in partnership with one academic, the designer created a custom AI agent to function as a learning experience design partner rather than an automation tool. The agent was explicitly configured with established pedagogical frameworks, including Jonassen’s problem-solving framework, Gagné’s Nine Events of Instruction, and Krathwohl’s taxonomies.
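As one way to picture what "embedding pedagogical frameworks" in an agent can mean in practice, the sketch below composes a system prompt that frames the AI as a design partner bound by those frameworks. The framework summaries, function name, and prompt wording are illustrative assumptions, not the actual configuration used in this project.

```python
# Hypothetical sketch of a "design partner" agent configuration.
# The framework summaries below are paraphrases for illustration only.
FRAMEWORKS = {
    "Jonassen": "Frame assessments as ill-structured workplace problems with "
                "multiple defensible solutions, not single-answer recall tasks.",
    "Gagné": "Sequence each challenge through the Nine Events of Instruction, "
             "from gaining attention to assessing performance.",
    "Krathwohl": "Target higher-order levels of the revised taxonomy "
                 "(analyse, evaluate, create) alongside affective outcomes.",
}

def build_design_partner_prompt(course: str, scenario_brief: str) -> str:
    """Compose a system prompt that keeps the AI in a design-partner role."""
    framing = "\n".join(f"- {name}: {rule}" for name, rule in FRAMEWORKS.items())
    return (
        f"You are a learning experience design partner for '{course}'.\n"
        "Apply these frameworks to every assessment you draft:\n"
        f"{framing}\n"
        "Draft scenario-based applied challenges for the brief below, and "
        "flag anything the academic must verify for disciplinary accuracy:\n"
        f"{scenario_brief}"
    )

prompt = build_design_partner_prompt(
    "Disability Access and Participation in Healthcare",
    "A regional clinic is auditing physical and procedural accessibility.",
)
```

Keeping the frameworks in the system prompt, rather than in each request, is one way to make every generated challenge pass through the same pedagogical lens while the human partners supply the brief.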

Through iterative collaboration, the AI agent was used to generate scenario-based applied challenges that require learners to solve complex workplace problems within a fictional healthcare organisation undergoing accessibility transformation. Each assessment presents learners with realistic documents to analyse, such as email trails, budget reports, policy drafts, and incident reports, which must be synthesised to propose solutions aligned with course concepts.

These scenarios are not generic case studies but contextualised narratives with recurring characters and evolving situations across the course. The applied challenges prepare learners for summative assessments where they support key characters in navigating complex professional situations. Because the scenarios are unique and course-specific and demand synthesis of multiple concepts, learners cannot rely on generic AI prompting to generate answers. Throughout the process, the academic retained full control over learning outcomes and content accuracy, while the AI supported the generation of authentic, scalable assessment materials that would otherwise be impractical within standard resourcing constraints.
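Maintaining recurring characters across dozens of interlinked artefacts is a continuity problem as much as a writing problem. One plausible way to manage it, assumed here rather than taken from the project, is to hold the cast and artefacts as structured data and check generated material against the shared cast:

```python
from dataclasses import dataclass, field

# Hypothetical data model for scenario continuity; names and fields are
# illustrative, not the project's actual tooling.
@dataclass
class Character:
    name: str
    role: str
    traits: list = field(default_factory=list)

@dataclass
class Artefact:
    kind: str            # e.g. "email trail", "budget report", "policy draft"
    title: str
    characters: list     # names of recurring characters appearing in it

@dataclass
class AppliedChallenge:
    module: str
    scenario: str
    artefacts: list

def continuity_errors(challenges, cast):
    """Flag artefacts that mention characters missing from the shared cast."""
    known = {c.name for c in cast}
    return [
        (ch.module, a.title, name)
        for ch in challenges
        for a in ch.artefacts
        for name in a.characters
        if name not in known
    ]

cast = [Character("Mei Lin", "Access coordinator")]
challenges = [
    AppliedChallenge(
        "Module 1",
        "Clinic accessibility audit",
        [Artefact("email trail", "Re: ramp budget", ["Mei Lin", "Unknown Person"])],
    )
]
errors = continuity_errors(challenges, cast)
```

A check like this is what lets AI-generated artefacts scale (25 challenges, 100+ documents, 13 characters) without the cast drifting between modules.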

Impact Measurement

As the microcredential launches in 2026, direct measures of student learning outcomes are not yet available. Impact is therefore assessed through indicators of feasibility, design quality, and capacity, demonstrating the viability of this approach to assessment redesign in an AI-enabled context.

From a capacity perspective, course development at this scale would typically require a multidisciplinary team of six to seven staff. This project was delivered by one academic and one learning designer, using AI as a design partner rather than an automation tool, enabling high-quality, authentic assessment design within realistic institutional constraints. The AI-supported process enabled the creation of 25 unique applied challenges supported by over 100 interconnected workplace artefacts, including email trails, reports, budgets, and policy documents. Thirteen recurring character profiles were maintained consistently across the course narrative, allowing scenarios to evolve over time; this level of contextual depth and variation would be impractical using traditional templated approaches.

Academic validation provides a further indicator of impact. All AI-generated scenarios and assessments were reviewed and approved by the subject matter expert for disciplinary accuracy, alignment with learning outcomes, and authenticity to professional practice, confirming that AI-enhanced design can meet academic standards while supporting efficiency and scale. While reductions in academic misconduct cannot yet be measured, the assessment design itself limits the effectiveness of generic AI prompting by requiring learners to engage with course-specific contexts, analyse multiple interrelated documents, and synthesise concepts to propose solutions.

Post-launch evaluation will examine student performance data, learner feedback, and academic integrity indicators in comparison with traditional assessment formats. At this stage, demonstrated feasibility, academic validation, and design quality establish this practice as a credible model for assessment redesign in the age of generative AI.

Enablers

  • Dedicated learning designer–academic partnership
  • Custom AI agent configured for learning design (not automation)
  • Use of established pedagogical frameworks to guide assessment design
  • Scenario-based assessment grounded in authentic professional practice
  • Strong academic oversight and validation of AI-generated materials
  • Institutional support for assessment redesign over invigilation

Files

Designing AI-Resistant Assessments Using AI: Moving Beyond Invigilation to Authentic Problem-Solving
Worksheet: Adapting Designing AI-Resistant Assessments through Authentic Problem-Solving