Danielle Logan-Fleming, Torrens University Australia
When eAssessment expert Dr Mathew Hillier reached out while co-authoring a section of ASCILITE’s Contextualising Horizon Report 2025, he was looking for a high-quality resource to help staff embed Interactive Oral (IO) assessments in their subjects and programs. What followed was more than a quick exchange of links. It sparked a broader conversation about the origins, diffusion, and future of IOs, particularly in the context of rising interest and emerging risks in the age of artificial intelligence.
As I gathered resources developed across the many institutions I’ve supported over the past decade, I found myself reflecting not only on the strength and scalability of the model, but also on its continued relevance in a rapidly evolving learning and teaching landscape.
A Growing Body of Practice
IOs have now been implemented at scale in more than 30 higher education institutions worldwide, in classes of up to 800–1,000 students. These assessments differ from oral exams, vivas, presentations, and scripted Q&A sessions. Instead, IOs are scenario-based, professionally aligned conversations in which students are required to think on their feet, synthesise their learning, and communicate clearly in real time.
When implemented as intended, IOs offer a powerful assessment approach that delivers on three critical objectives: student engagement, employability, and academic integrity. The model below illustrates how these outcomes are achieved through alignment with specific design characteristics such as scaffolding, authentic scenarios, and professional relevance.

For those exploring IOs for the first time, the original Griffith University resource remains a solid starting point: https://tinyurl.com/IO-assessments. It offers a clear foundation for understanding the intended design and purpose of IOs and is rich with examples and resources collated from around the world.
In a recently co-authored chapter, Harnessing the Potential of Interactive Oral Assessments in Blended Learning: Lessons from Case Studies (Sotiriadou & Logan-Fleming, 2025), the same team has explored how IOs have evolved while staying grounded in their original purpose.
Other quality resources from institutions that have implemented IOs at scale include:
Dublin City University: DCU IO resource and overview video
University of Sydney: Interactive Oral Assessment in Practice
University of Sydney Business School: Interactive Oral Assessment (IOA)
University of Auckland: TeachWell IO Resource
Charles Sturt University: Interactive oral assessment resource
Singapore Institute of Technology:
These resources vary by discipline and context but share a commitment to the original design principles of IOs: dynamic, dialogic, and embedded in real-world professional practice.
Protecting What Makes IOs Distinct
As IOs grow in popularity, one of the most pressing challenges they face, alongside emerging risks related to artificial intelligence (AI), is the inconsistent use of the term itself. Increasingly, “Interactive Oral Assessment (IOA)” is being applied as a blanket label for a wide range of oral assessment types, including presentations with Q&A, oral exams, vivas, and other loosely interactive formats. Bundling fundamentally different approaches under a single term risks undermining the integrity of the original model.
Such dilution is more than a semantic issue. IOs are purposefully designed as scenario-based, discipline-aligned, and dialogic assessments that require students to apply and extend their learning through real-time, authentic conversation. When the term is used too broadly, it becomes difficult for educators and institutions to distinguish the original intent, scale the model with fidelity, or evaluate its impact effectively.
Mathew offered the following reflection:
“The label usage does matter because, as you point out, the original method gets tarnished by implementations that stray too far from the intended design. The labels of viva or oral exam should stay because those are different methods. I see the same thing happening now with ‘programmatic assessment’, which, like IOs, is a specific and well-researched approach. While it is great that people will always try to adapt a method to fit their local context, risks arise if there is inadequate investment into the necessary cultural or structural changes or where the design eventually implemented strays too far from the original.”
This is a common challenge in the diffusion of educational innovation. As interest grows, so does the risk of adaptations that overlook or compromise core design features. While some researchers choose to protect their models through licensing or trademarks, the IO model has always been shared openly to support sector-wide learning, experimentation, and uptake. This openness has enabled hundreds of educators globally to embed IOs into practice. At the same time, it places the responsibility of maintaining design integrity in the hands of a broad and continually expanding community.
Since their development at Griffith University in 2015, IOs have evolved from their original inspiration in the International Baccalaureate’s interactive oral. The IB model, often used in high school English and drama contexts, emphasised performative elements. In contrast, the higher education adaptation of IOs focuses on professional authenticity, disciplinary alignment, and student-centred communication.
Initially introduced to address concerns around disengagement, employability, and academic misconduct, IOs gained significant traction during the pandemic as a scalable alternative to in-person exams. Today, they continue to serve as a rigorous and adaptable assessment approach, well suited to the evolving needs of educators and students working in AI-informed environments.
Preserving the clarity of what an IO is, and is not, is essential. Mislabelling may seem minor, but it poses a serious risk to the sustainability of the model. If we lose sight of what makes IOs effective, we risk weakening a method that has already proven to support academic integrity, student engagement, and graduate readiness across diverse contexts.
Facing Forward: AI and the Integrity of Online Assessment
Mathew and I eventually turned to the future. Our conversation explored new threats to academic integrity made possible by AI, especially in regard to remote interactive assessment formats. While some of these technologies are still maturing, they are evolving fast and demand attention.
He identified these developments:
1. AI-Generated Video Doubles (Doppelgangers)
Students can now use AI platforms such as HeyGen and Synthesia to generate avatars that replicate their appearance and voice. These avatars can speak using uploaded scripts or audio files, making it difficult to verify authenticity in pre-recorded video submissions.
2. Real-Time Interactive Avatars
There are two forms of this risk:
- LLM-Driven Avatars simulate live conversations using large language models. Though still clunky, they are improving rapidly.
- “Sock Puppeting” by a human operator represents the highest risk. In this case, a contract cheating provider could control the avatar in real time. The avatar looks and sounds like the student, but the responses are coming from someone else. This method is already being used in job interviews and virtual meetings and may soon reach assessment contexts.
While I appreciated the seriousness of these risks, my experience designing and supporting IOs over the past decade leads me to believe that well-designed IOs are more than capable of withstanding these emerging threats… at least for now.
Built for Integrity
What protects IOs is not just format but pedagogy. IOs are scaffolded throughout a course or program, contextualised to the student’s learning journey, and structured to assess synthesis, judgment, and communication in real-time, professionally relevant scenarios.
Many IOs are delivered in pairs or small groups where conversational turns are unpredictable. Even one-on-one IOs are not interrogations. Prompts are not scripted questions, and students are not grilled. Instead, assessors guide a curious, open-ended conversation that draws on personalised learning and authentic scenario responses.
These qualities make impersonation significantly more difficult than with conventional formats. For someone to successfully fake an IO, whether using AI or human support, they would need to replicate the student’s voice, prior learning experiences, feedback history, disciplinary knowledge, and communication style. In most cases, without that depth of engagement, it becomes evident that something is missing. This is particularly true when the assessor has prior familiarity with the student and their learning journey. In larger subjects, where delivery may involve multiple educators or casual staff unfamiliar with individual students, this may not always be possible. This reinforces the importance of robust pre- and post-moderation processes and consistent marker briefings to support integrity in IO delivery.
We also acknowledge that in rare cases of full academic outsourcing, where an individual has completed the learning on the student’s behalf, impersonation may still be possible. In such scenarios, assessment design alone cannot protect against fraud. This is why strong identity verification practices are essential. IO resources recommend that assessors verify photographic ID at the point of assessment, whether in person or online, and cross-check this against institutional records. Without this step, even the most rigorous assessment design remains vulnerable.
Ultimately, the effort required to impersonate a student in a well-designed IO closely mirrors the effort required to genuinely engage. Whether artificial or human, the impersonator would still need to demonstrate genuine insight and real-time application, which is exactly what IOs are designed to reveal.
Looking Ahead with Confidence
Although it is important to remain vigilant about new developments, IOs continue to offer one of the most secure, authentic, and pedagogically powerful approaches available in higher education. Their success has come not only from sound design but from a strong and growing community of practice.
To date, IOs have been adopted across Australasia and beyond, supported by shared resources, collaborative research, and a collective belief in the importance of preparing students to speak, think, and respond with confidence in complex real-world contexts.
Get in Touch
If you are exploring IOs, or working with authentic or programmatic assessment, and would like to collaborate, share ideas, or exchange practice, I would welcome the opportunity to connect. The success of any innovation lies not only in its design, but in the strength of the community that shapes, develops, and sustains it.
Acknowledgements
I wish to acknowledge the contribution of Dr Mathew Hillier, both as the inspiration for this blog post and as a critical friend in refining the final version.
In addition, throughout this post, I used the Generative AI tool, ChatGPT 4o, when needed, to adjust prose and correct syntax. I also sought advice from ChatGPT 4o on the logical flow of ideas.
References
Charles Sturt University Division of Learning and Teaching. (2021). Interactive oral assessment. Retrieved 24 July, 2025. https://www.csu.edu.au/division/learning-teaching/assessments/assessment-types/interactive-oral-assessment
Dublin City University Teaching Enhancement Unit. (2025). Interactive Oral Assessment. Retrieved 21 July, 2025. https://www.dcu.ie/teu/interactive-oral-assessment#tab-519426-1
Hillier, M. (2023). Making Online Assessment Active and Authentic. In Technology-Enhanced Learning and the Virtual University (pp. 1-46). Singapore: Springer Nature Singapore. https://doi.org/10.1007/978-981-19-9438-8_17-1
Krautloher, A. (2024). Improving assessment equity using Interactive Oral Assessments. Journal of University Teaching & Learning Practice, 21(4), 1–17. https://doi.org/10.53761/4hg1me11
Lim, S. M. M., & Lim, C. Y. (2023). Use of interactive oral assessment to increase workplace readiness of occupational therapy students. The Asia Pacific Scholar, 8(2). https://doi.org/10.29060/TAPS.2023-8-2/SC2804
Logan, D., Sotiriadou, P., Daly, A., & Guest, R. (2017). Interactive oral assessments: Pedagogical and policy considerations. In J. Dron & S. Mishra (Eds.), Proceedings of E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 403–409). Vancouver, British Columbia, Canada: Association for the Advancement of Computing in Education (AACE). Retrieved 21 July 2025, from https://www.learntechlib.org/primary/p/181304/
Logan-Fleming, D., Sotiriadou, P., Daly, A., & Guest, R. (2024). Interactive oral assessment: An authentic and integral alternative to examination. https://sway.cloud.microsoft/yQ2s0Bm3ILkWtGll?ref=Link
O’Riordan, F., Thangaraj, J., Girme, P. & Ward, M. (2025). Interactive oral assessment: Staff perceptions, challenges and benefits of this robust, authentic assessment design approach. Innovations in Education and Teaching International. Advance online publication. https://doi.org/10.1080/14703297.2025.2477160
Sotiriadou, P., Logan, D., Daly, A., & Guest, R. (2020). The role of authentic assessment to preserve academic integrity and promote skill development and employability. Studies in Higher Education, 45(11), 2132-2148. https://doi.org/10.1080/03075079.2019.1582015
Sotiriadou, P., & Logan-Fleming, D. (2025). Harnessing the potential of interactive oral assessments in blended learning: Lessons from case studies. In P. K. Misra, S. Mishra, & S. Panda (Eds.), Case studies on blended learning in higher education (pp. 193–210). Springer. https://doi.org/10.1007/978-981-96-0722-8_11
Stevenson, L., Miller, B., & Sitbon, C. (n.d.). Interactive oral assessment in practice. The University of Sydney. https://educational-innovation.sydney.edu.au/teaching@sydney/interactive-oral-assessment-in-practice/
Tan, C. P., Howes, D., Tan, R. K., & Dancza, K. M. (2022). Developing interactive oral assessments to foster graduate attributes in higher education. Assessment & Evaluation in Higher Education, 47(8), 1183-1199. https://doi.org/10.1080/02602938.2021.2020722
TEQSA. (2023). Assessment reform in the age of artificial intelligence: A discussion paper. https://www.teqsa.gov.au/sites/default/files/2023-09/assessment-reform-age-artificial-intelligence-discussion-paper.pdf
Ward, M., O’Riordan, F., Logan-Fleming, D., Cooke, D., Concannon-Gibney, T., Efthymiou, M., & Watkins, N. (2024). Interactive oral assessment case studies: An innovative, academically rigorous, authentic assessment approach. Innovations in Education and Teaching International, 61(5), 930-947. https://doi.org/10.1080/14703297.2023.2251967
University of Auckland. (2024, April 7). Interactive oral assessments. TeachWell Digital.