Rage Against the Machine: Linguistic Hostility and AI Shaming in Education and Culture

Ashley Howard Kerr and Jacqui Gueye (Torrens University)

As artificial intelligence (AI) becomes increasingly embedded in education, creativity, and everyday life, a new form of digital discourse is emerging—one that includes slurs and other derogatory behaviours directed at AI systems and their users. Terms like clanker, bot-lover, and tin-skin are gaining traction across social media platforms, alongside content that fuels a negative mindset toward this emerging technology. While such language may appear harmless because it is aimed at a non-sentient entity, the sociolinguistic and cultural harms that accompany it are very real.

This post draws on sociolinguistics, critical discourse analysis, and cultural analysis to explore how AI-related slurs and derogatory language reflect and reinforce social hierarchies. It argues that dismissing such language as inconsequential desensitises audiences to harmful speech and undermines efforts toward inclusive digital literacy.

Slurs and the Anthropomorphism Paradox

AI slurs anthropomorphise technology, attributing human-like traits while simultaneously denying personhood. This paradox allows slurs to carry emotional and cultural weight despite their non-human targets. Salles, Evers, and Farisco (2020) highlight the epistemological risks of anthropomorphism in AI research, noting that attributing human traits to AI can obscure accountability and inflate perceived agency. In public discourse, this tension manifests in mockery and vilification, enabling slurs to function as boundary-marking tools.

Demeaning AI in human-centred language despite its non-sentience thus implies personhood while denying it. This rhetorical move enables moral disengagement, where disrespectful language is normalised under the assumption that “no real victim” exists (Woolfe, 2024). Yet such language often echoes racial, gendered, and ableist traditions, reinforcing harmful speech patterns and desensitising users to broader linguistic harm.

Cultural Borrowing and Historical Echoes

The term “clanker,” one of the earliest AI-related slurs, originates in Star Wars as a nickname for battle droids. On its surface, it appears harmless, rooted in pop culture and fictional robot lore. However, the term’s repurposing shows how quickly the original cultural context is lost: Star Wars itself was presented as social commentary satirising fascism and oppressive regimes.

Furthermore, the rapid spread of AI slurs online reveals how quickly such language turns darker. Many forms of derogatory language toward AI and AI users are rooted in historically oppressive language. Troubling terms like bot-lover, wireback (echoing the slur “wetback”), and cligger mimic racial slurs. Parodic names like Rosa Sparks and George Droid trivialise the legacy of Rosa Parks and the human rights outrage surrounding George Floyd’s death, transforming what is intended as playful mockery into speech that can retraumatise marginalised communities.

Stigmatising AI Users in Education and Knowledge Work

Sarkar (2025) identifies AI shaming as a form of class-based discrimination. Remarks like “AI could have written this” delegitimise the work of educators, writers, and designers who use AI tools, often out of necessity due to time constraints or institutional pressures. In educational contexts, this manifests as a “gotcha mentality” that polices student AI use, reinforcing stigma rather than fostering critical digital literacy.

By implying that AI-assisted work lacks authenticity, such slurs reinforce traditional hierarchies that privilege manual or elite forms of labour. This boundary policing mirrors historical gatekeeping in art, academia, and creative industries, shaped by exclusionary norms that have privileged certain voices and methods over others. In doing so, it perpetuates systemic inequity by framing technological assistance as inauthentic and undeserving of recognition, maintaining barriers for those already marginalised within these systems by questioning their process rather than their finished product.

Implications for Higher Education

The normalisation of AI slurs in online discourse has significant implications for higher education. It shapes how students perceive digital tools, how they engage with discourses around academic integrity, and whether they feel a sense of inclusion in learning environments. Language and behaviours that stigmatise AI use can alienate students who look to these tools for accessibility, creativity, or support.

Educators must continually interrogate language practices and cultural nuance as part of scholarly teaching, particularly around such a rapidly evolving technology. Doing so supports evidence-based pedagogy by showing how discourse around AI intersects with identity, equity, and ethics. A more inclusive and reflective approach to AI discourse is essential, not only to protect human dignity but also to foster critical engagement with emerging technologies without defaulting to blame culture.

Platform Cultures and the Politics of Language

As Holliday (2025) notes, the rapid evolution of AI slang reveals that human agency in language remains a powerful force for both harm and resistance. Emerging evidence suggests that AI-related slurs are disproportionately used by groups who have historically not been the targets of oppression, raising concerns that targeting AI may serve as a “safe” outlet for expressing latent racism, sexism, or classism (Morrison, 2024; Holliday, 2025). On platforms like TikTok and X, users co-create and circulate slurs with ironic or humorous intent, while others push back against this normalisation by highlighting its ethical implications.

Image 1. Community resistance and discomfort

(Reddit, 2025)

Image 2. Playfulness overlaid on darker implications

(X/Twitter, 2025)

Conclusion: Toward a More Ethical AI Discourse

AI-related slurs are not benign linguistic quirks but culturally loaded expressions that reflect and reinforce social hierarchies. Underpinning them is a broader divisive and derogatory mindset that shapes how society perceives both technology and the people who engage with it. The anthropomorphic framing, historical echoes, and ethical implications of negative AI discourse demand closer scrutiny.

Scholars, educators, and technologists must recognise the power of language to shape social norms, reinforce hierarchies, and influence educational practice. It is essential to challenge this reactionary mindset and foster critical, ethical engagement with emerging technologies through a more reflective and inclusive approach to AI discourse. As educators, we must model ethical digital communication and teach students to interrogate the language shaping our relationship with AI.

References

Holliday, J. (2025, May 22). AI slang is evolving faster than AI itself. Rolling Stone. https://www.rollingstone.com/culture-council/articles/ai-slang-evolving-faster-than-ai-itself-1234987654/

Morrison, M. (2024, April 10). Clankers, grokkers, botlickers: The rise of AI slurs online. Dazed. https://www.dazeddigital.com/life-culture/article/68364/1/clankers-grokkers-botlickers-ai-slurs-chatgpt-grok-artificial-intelligence

Salles, A., Evers, K., & Farisco, M. (2020). Anthropomorphism in AI. AJOB Neuroscience, 11(2), 88–95. https://doi.org/10.1080/21507740.2020.1740350

Sarkar, A. (2025). AI could have written this: Birth of a classist slur in knowledge work. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’25). ACM. https://advait.org/files/sarkar_2025_ai_shaming.pdf

Woolfe, S. (2024, September). Should we treat non-sentient AI in a virtuous way? https://www.samwoolfe.com/2024/09/should-we-treat-non-sentient-ai-in-a-virtuous-way.html
