Richard McInnes, University of Adelaide
A University Without Thought
Generative AI is the latest weapon in the neoliberal restructuring of higher education, a technological panacea designed to strip away the messiness of intellectual labour and replace it with streamlined, automated efficiency. This is not a minor shift. It is the culmination of decades of neoliberal encroachment, where education has been reduced to a product, students reframed as consumers, and academics recast as service providers, their expertise valued only in relation to key performance indicators and institutional rankings (Giroux, 2002). Generative AI accelerates this commodification, promising efficiency, scalability, and innovation at the expense of intellectual labour. Universities, already pressured to demonstrate return on investment, eagerly adopt gen-AI to speed up curriculum design, automate assessment creation, and streamline content production. But this is not progress—it is an attack on what it means to think, teach, and learn.
At first glance, generative AI appears to offer relief. Academics, burdened by inflated workloads, are handed tools that promise to automate routine tasks (Watermeyer et al., 2024). Universities, eager to cut costs, see gen-AI as a way to deliver education without increasing labour. The appeal is undeniable: AI-generated learning materials, assessments, and even entire courses promise alignment, optimisation, and rapid adaptation to market demands. But what happens when we automate the processes that define academic work? What happens when we let AI—not educators—drive the intellectual labour of designing learning? We get a university that moves faster but thinks less.
The Death of Intellectual Labour
We are told that gen-AI will “enhance” our work, that it will relieve us of the burdens of curriculum design, assessment creation, and feedback provision. But this is a lie. Designing learning is not a burden; it is at the very heart of education. It is an intellectual process, one that requires deep engagement with disciplinary knowledge, pedagogical intent, and the lived realities of students. Let’s be clear: generative AI does none of this. It does not think. It does not reflect. It does not question, challenge, or create. It predicts. It mimics. It assembles plausible-sounding content based on existing patterns. Knowledge is no longer created, challenged, or interrogated, but simply generated.
Take constructive alignment, for example. It is not a compliance exercise but a process of reflexive intellectual engagement. When we align learning outcomes, assessments, and activities, we make expert judgments based on disciplinary knowledge, student needs, and pedagogical intent. It is a reflective, iterative process that involves questioning, refining, and challenging assumptions, and it takes time and deep reflection. Gen-AI does none of this. It generates coherence without understanding, conformity without critique. It produces plausible homogeneity, matching outcomes to assessment types without engaging in any critical evaluation. It does not consider context, challenge assumptions, or push boundaries; it simply produces an output that looks right. And that is precisely the danger: AI-generated content does not need to be good; it just needs to be good enough. In a system increasingly driven by efficiency metrics, good enough is all that’s required to legitimise cutting intellectual labour out of the process.
The AI-Driven University Is a Hollow Shell
The automation of intellectual labour is not neutral. It is an active erasure of the academic as a thinker, designer, and critical agent. We are being repositioned as quality checkers, passive validators of AI-generated outputs, expected to approve and tweak but not to critique (Mitra et al., 2024): ‘zombies-in-the-loop’ (Krügel et al., 2022). Academics are reduced to compliance officers, rubber-stamping AI-generated content with little time or power to push back (Watermeyer et al., 2024). The university has long been a space of debate, dissent, and discovery, where knowledge is shaped through critical inquiry and human judgment. Education is not just about outcomes; it is about the process of getting there. Struggle, reflection, and critical engagement are essential to intellectual growth. But generative AI accelerates a shift towards a model in which these qualities are treated as inefficiencies to be eliminated. The metrics don’t measure them. And so they disappear. Teaching becomes content delivery. Curriculum becomes a dataset. Thought becomes a cost to be minimised.
Generative AI is not just an academic issue; it is the perfect tool for a university already operating under the logic of efficiency, compliance, and surveillance (Watermeyer et al., 2024), and a tool of extractive corporate monopolies that profit from the exploitation of human knowledge and labour. Universities, by uncritically embedding generative AI, are complicit in a system that surveils us by capturing and monetising our interactions, exploits workers, destroys the environment, deepens inequities, widens the digital divide (Wach et al., 2023), colonises knowledge (Couldry & Mejias, 2019), and contaminates the internet (Shumailov et al., 2024). We feed students machine-generated content and ask them to produce AI-assisted responses. The cycle continues, and the university becomes a data-churning machine, surveilled, harvested, and monetised by corporate monopolies. The gen-AI-driven university is not inevitable. It is a choice, made by those who hold power.
Resisting the AI-Driven University
We cannot afford to be passive. The problem is not just generative AI—it is how it is being wielded as an accelerant for the worst tendencies of the neoliberal university. We must resist the techno-solutionist narrative that positions automation as an unquestioned good, that tells us efficiency is the only measure of success (Morozov, 2013). We must reclaim education as a process of engagement, not a product to be delivered. We must refuse to let our intellectual labour be eroded, reduced, and erased. A university that prioritises outcomes over thinking, automation over inquiry, and efficiency over engagement is no longer a university at all. It is a content-delivery system. And we deserve better.
Universities do not need faster curriculum development; they need more thoughtful curriculum development. If generative AI is to have a role in education, it must enhance—not replace—intellectual labour. It must create more space for critical thinking, not less. This means resisting the seductive logic of automation, pushing back against the narrative that efficiency is the ultimate goal, and defending the role of process in higher education. The university does not belong to corporate interests, AI vendors, or the logic of automation. It belongs to those who believe in education as a space of critical inquiry, radical possibility, and collective transformation. That is worth fighting for.
This is not a call to reject generative AI entirely. It is a call to reject the logic that demands we surrender to it.
- Reject the AI-driven curriculum: Designing learning is an intellectual act, not an administrative task.
- Refuse the narrative of inevitability: Automation is not an improvement when it strips away critical engagement.
- Expose corporate capture: Generative AI is not neutral. It is a tool of monopolistic tech empires designed to extract value from academia.
- Defend intellectual labour: Thinking is not a cost to be minimised. It is the foundation of higher education.
- Demand transparency: Institutions cannot embed AI without accountability. Who profits? Who is harmed?
The future is not yet written. The neoliberal university thrives on exhaustion, burying us under workload and bureaucracy to prevent resistance. Generative AI offers to make our jobs “easier,” but in reality it is a mechanism of control: a tool to accelerate our disposability. We must not let exhaustion become obedience. The university is not a business. Education is not a product. Teaching is not content delivery. The real, messy, human work of learning and intellectual growth cannot be automated. We do not have to accept the AI-driven university. We can refuse it. We can resist it. We can reclaim education before it is too late.
References:
Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press. https://doi.org/10.1515/9781503609754
Giroux, H. A. (2002). Neoliberalism, corporate culture, and the promise of higher education: The university as a democratic public sphere. Harvard Educational Review, 72(4), 425–464. https://doi.org/10.17763/haer.72.4.0515nr62324n71p1
Krügel, S., Ostermaier, A., & Uhl, M. (2022). Zombies in the loop? Humans trust untrustworthy AI-advisors for ethical decisions. Philosophy & Technology, 35(1), 17.
Mitra, B., Diaz, F., & Craswell, N. (2024). Sociotechnical implications of generative artificial intelligence for information access. arXiv preprint arXiv:2405.11612.
Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. PublicAffairs.
Shumailov, I., Shumaylov, Z., Zhao, Y., Papernot, N., Anderson, R., & Gal, Y. (2024). AI models collapse when trained on recursively generated data. Nature, 631, 755–759. https://doi.org/10.1038/s41586-024-07566-y
Wach, K., Duong, C. D., Ejdys, J., Kazlauskaitė, R., Korzynski, P., Mazurek, G., Paliszkiewicz, J., & Ziemba, E. (2023). The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT. Entrepreneurial Business and Economics Review, 11(2), 7–30. https://doi.org/10.15678/EBER.2023.110201
Watermeyer, R., Phipps, L., Lanclos, D., & Knight, C. (2024). Generative AI and the automating of academia. Postdigital Science and Education, 6, 446–466. https://doi.org/10.1007/s42438-023-00440-6