Srecko Joksimovic (University of South Australia), Marion Blumenstein (University of Auckland), Hazel Jones (independent consultant), Linda Corrin (Deakin University)
In the evolving landscape of higher education, artificial intelligence (AI) is no longer just a tool. It’s an active participant in learning environments. As AI technologies become embedded in learning platforms and educational processes, a fundamental question arises: How can we use Learning Analytics (LA) to generate meaningful insights into student learning when AI is in the mix? This is the central theme of an upcoming workshop series, “Figuring out AI Learning Processes”, presented by the ASCILITE Learning Analytics SIG.
In three interactive online workshops, educators, researchers, learning designers, and developers will collectively explore how we can make learning visible and interpretable in AI-mediated environments. More than ever, we need to sharpen our understanding of how learners interact with AI systems, how their learning evolves, and how to measure their mastery of knowledge and skills in such dynamic settings.
Whether you’re investigating how students develop agency in AI-rich environments, designing analytics dashboards for teachers, or simply curious about how to tell if students are really learning, this series will offer you insights, questions, and a vibrant community of inquiry.
Unpacking Learning Processes in AI-Supported Contexts
Researchers in LA and related fields have been grappling with one of the field’s most persistent challenges: making the process of learning visible. Even in well-structured environments, such as traditional classrooms or standard online courses, tracing how learning unfolds over time is far from straightforward. Models of instruction, learning activities, feedback, and assessment offer helpful scaffolds, but the reality is often more complex, messy, and context-dependent.
With the integration of AI into learning environments, this complexity intensifies. AI is no longer just a passive provider of information. It can generate explanations, suggest strategies, or simulate dialogue. This shifts the nature of interaction: learning becomes co-constructed with the system, and the boundaries between what counts as learner input, AI mediation, or shared decision-making begin to blur.
The first workshop will focus on this critical question: What are the actual learning processes in AI-assisted environments? How do we understand learning when it emerges from a dynamic interplay between the learner and an AI system? Are students passively absorbing content, or actively regulating and negotiating their engagement? What new cognitive, metacognitive, and even epistemic skills are required to learn with AI?
These questions aren’t just academic. They have real implications for how we design, measure, and support learning in AI-rich contexts. Answering them requires us to rethink not just how learning happens, but how it becomes visible and what counts as evidence for it.
Making Sense of Learning Data
As the nature of learning changes, so too must our approach to measurement.
The second workshop will explore the types of LA data available to educators and researchers in AI-supported environments. This includes not just traditional metrics like time on task, quiz scores, or forum posts, but new, more nuanced indicators such as:
- Interaction logs with AI tools,
- Revision trajectories and feedback loops,
- Self-reported confidence or trust in AI suggestions,
- Measures of curiosity, persistence, and reflection.
Crucially, we need to ask: which data are truly indicative of learning, and which merely reflect activity?
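To make that distinction concrete, here is a minimal sketch in Python, using entirely hypothetical event types and field names, of how AI-interaction logs might be separated into simple activity counts and more process-oriented indicators, such as whether learners revise their work after receiving AI feedback. It is an illustration of the idea only, not a recommended measurement approach.

```python
# A minimal sketch (hypothetical data model and field names) contrasting raw
# activity metrics with tentative learning-process indicators drawn from
# AI-interaction logs.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class InteractionEvent:
    student_id: str
    timestamp: datetime
    event_type: str   # e.g. "ai_prompt", "ai_response", "draft_revision", "self_report"
    detail: str       # free-text payload, e.g. the prompt text or a confidence rating


def summarise(events: list[InteractionEvent]) -> dict:
    """Contrast a simple activity metric with a tentative process indicator."""
    prompts = [e for e in events if e.event_type == "ai_prompt"]
    revisions = [e for e in events if e.event_type == "draft_revision"]
    response_times = [e.timestamp for e in events if e.event_type == "ai_response"]

    # Activity metric: how much the student used the AI tool.
    prompt_count = len(prompts)

    # Tentative process indicator: did revisions follow AI responses,
    # suggesting the learner acted on feedback rather than only requesting it?
    revisions_after_feedback = sum(
        1 for r in revisions if any(resp < r.timestamp for resp in response_times)
    )

    return {
        "prompt_count": prompt_count,
        "revisions_after_feedback": revisions_after_feedback,
    }
```

Even in this toy example, the second figure says more about the learning process than the first: two students with identical prompt counts may differ entirely in whether they ever revised their work in response to the AI.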
Focusing on What Matters: Mastery and Meaningful Skills
The third workshop in the series will focus on what we should be measuring. As we move beyond simple content recall, we aim to surface indicators of complex, transferable skills, such as self-regulated learning (SRL), critical thinking, and employability-relevant competencies.
For instance, SRL can be inferred through goal-setting behaviors, strategic use of AI prompts, and evidence of planning and self-monitoring. Critical thinking might be visible in how students challenge AI-generated content, seek additional evidence, or justify alternative perspectives. Employability skills could be tracked through collaborative work, problem-solving approaches, and how learners act on feedback from AI.
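As a rough illustration only, the sketch below (with invented behavior labels and construct mappings, not validated instruments) shows how observed behaviors from interaction logs might be tentatively grouped under constructs such as SRL or critical thinking. Any real mapping would need the multi-source validation discussed next.

```python
# A minimal sketch: all behavior labels and construct mappings are illustrative
# assumptions, not validated indicators of SRL, critical thinking, or employability.

# Hypothetical mapping from logged behaviors to candidate constructs.
BEHAVIOR_TO_CONSTRUCT = {
    "sets_goal_before_prompting": "SRL: planning",
    "reviews_own_draft_after_ai_feedback": "SRL: self-monitoring",
    "asks_ai_for_sources_or_evidence": "critical thinking",
    "rejects_or_edits_ai_suggestion_with_reason": "critical thinking",
    "coordinates_tasks_with_peers": "employability: collaboration",
}


def tag_behaviors(observed: list[str]) -> dict[str, list[str]]:
    """Group observed behaviors under the constructs they tentatively indicate."""
    tagged: dict[str, list[str]] = {}
    for behavior in observed:
        construct = BEHAVIOR_TO_CONSTRUCT.get(behavior)
        if construct:
            tagged.setdefault(construct, []).append(behavior)
    return tagged


# Example: a student who set a goal and then questioned an AI suggestion.
print(tag_behaviors([
    "sets_goal_before_prompting",
    "rejects_or_edits_ai_suggestion_with_reason",
]))
```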
These indicators are subtle and often require multi-source, multi-modal evidence to truly capture what mastery looks like. Our workshop series aims to explore practical methods for identifying such evidence while also critically reflecting on its interpretability, fairness, and ethical use across diverse higher education contexts.
Join the Conversation
As AI becomes a co-learner, co-teacher, and co-designer in our learning ecosystems, it’s urgent that we rethink how we evaluate learning. The “Figuring out AI Learning Processes” workshops are designed to foster exactly this kind of rethinking, rooted in evidence, ethics, and educational value. Join us at the ASCILITE conference in Adelaide, 30 November to 3 December 2025, where we will present the findings from our workshop series and discuss how to move forward.
If you’re interested in contributing to or attending the “Figuring out AI Learning Processes” online discovery workshop series, we invite you to share your interest and stay updated on upcoming sessions. Whether you’re a researcher, educator, designer, or practitioner working at the intersection of AI and learning, we’d love to hear from you.
Register your interest here.