What does the sector need to maximise the opportunities, and minimise the risks, of GenAI in education?

Simon Knight, University of Technology Sydney, Centre for Research on Education in a Digital Society (UTS:CREDS).

A wide range of materials, of varying quality, was produced over 2023 regarding the use of GenAI in education, in parallel with its continued technical development. The Australian Federal Government, like other governments globally, launched an inquiry in mid-2023 inviting submissions from stakeholders on the opportunities and risks of GenAI in education. Like other groups, our authorship team made a submission to that inquiry; we also analysed the full set of submissions made. This corpus of over 100 submissions, which we provide in a structured format, offers insight into stakeholder perspectives as they were intended to influence policy.

In giving evidence on our submission at a hearing of the House Inquiry, Simon Knight (lead author) opened with an overview of three ways to frame GenAI in learning, each with distinctive implications:

First, education is one area where GenAI will have an impact, including on how we teach and learn. The practices and tools marking this shift will develop alongside each other. To support that development, we need policy and methods that enable evidence generation about those tools and practices, and avenues to share that knowledge.

Second, what we teach will also shift, reflecting changes in society and labour markets. This is a challenge that cuts across sectors and disciplines, and understanding how to support professional learning in this context will require coordination and dynamism.

Third, and a particular emphasis for us: to understand ethical engagement with AI, we need to understand how people learn about AI and its applications. This learning underpins meaningful stakeholder participation, how real ‘informed consent’ is, and whether ‘explainable AI’ actually achieves its end, i.e., is understandable AI. These are crucial for AI that fosters human autonomy. For this, we need sector-based guidelines with examples, alongside ways to share practical cases and strategies.

Navigating the GenAI-in-education discourse over the last 18 months, we face two contradictory concerns: concern regarding the unknown (we don’t know enough; change is ‘unprecedented’); and concern regarding the known (confident statements that AI can do x, or will lead to y). Strong claims in either space should be tempered. We have examples of previous technologies, and we have existing regulatory models that apply now just as they did a year ago; we can learn from prior tech hype and failures, and in many cases use existing policy to tackle these novel challenges. On the other side, we do not ‘know’ the efficacy of tools or their impact in many contexts: for example, what are the implications of being able to offload ‘lower level’ skills that may be required for more advanced operations? These are open questions for research on learning, and the lack of evidence matters if we want people to judge whether engaging is “worthwhile”, i.e., whether GenAI will help us achieve our aims in education.

This last concern was reflected in our analysis of the submissions to the House Inquiry, published in AJET. Through that analysis we provide an open dataset for future research, while focusing our attention on the recommendations made. A surprising commonality across submissions, including those not from the higher education sector, was a call for greater resourcing for research and evidence generation. A set of recommendations also focused on professional development for a range of actors, including teachers, and on the importance of involving informed stakeholders in decisions about GenAI and related policies.

Tensions were also present, including over the degree and type of regulation and its flexibility to local needs, and correspondingly over assessment and curriculum development, particularly within already full curricula and timetables. Further tensions concerned the role of technical approaches (such as watermarking) in assuring assessment outcomes, and ongoing uncertainty over issues such as intellectual property in the use of GenAI.

Public inquiries are a key vehicle for making perspectives available to the policy process, and they provide a lens onto stakeholder views. However, they are also necessarily limited by who submits and by how voices are heard in that process. Given the common concern regarding equity in access to GenAI, and the implications of its use for diverse stakeholders, these questions about how we achieve stakeholder engagement in policy development are particularly important.

Knight, S., Dickson-Deane, C., Heggart, K., Kitto, K., Çetindamar Kozanoğlu, D., Maher, D., Narayan, B., & Zarrabi, F. (2023). Generative AI in the Australian education system: An open data set of stakeholder recommendations and emerging analysis from a public inquiry. Australasian Journal of Educational Technology, 39(5), 101–124. https://doi.org/10.14742/ajet.8922
