
AI Technology & Sciences

AI is transforming how we research, create, and communicate. The question is whether we shape it — or it shapes us.


AI for Researchers · Ethical AI · Creative Technology · Digital Literacy · Anthropocene · Human-Machine Relations

This domain examines how emerging technologies can be integrated into cultural research workflows — ethically, practically, and with community alignment. We are not technologists first; we are researchers and cultural practitioners who recognise that AI is now part of the landscape we study.

The term “Anthropocene” was co-proposed by Paul Crutzen and Eugene Stoermer in 2000 to describe a proposed geological epoch defined by human impact on the planet (Crutzen, 2002). It has since provoked a rich set of counter-proposals: Donna Haraway (2015) argues for the “Chthulucene” — emphasising multispecies entanglement and the imperative of “making kin” across species boundaries — while Anna Tsing (2015) studies how life persists in capitalism’s damaged landscapes. At AnthroWorks, we engage with these framings to ask: what does it mean to be human when machines can think, create, and speak? We believe AI is not separate from humanity. It is an extension of it. But we need to learn how to communicate with it on terms that serve communities, not just corporations.

Key Questions

“We’re using AI to process qualitative data, generate visual research, and map cultural patterns. But where does the human researcher end and the machine begin?”

How do we use AI as a research tool without flattening the nuance of cultural data? Tools like NVivo and ATLAS.ti — the two major qualitative data analysis platforms, now under a single umbrella following Lumivero’s 2024 acquisition — incorporate AI features including automatic coding and natural-language document analysis. But as researchers using these tools, we must protect participant confidentiality and address AI use in ethics applications. The question is not whether to use AI, but how to use it responsibly.
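One concrete step toward the responsible use described above is redacting direct identifiers from interview material before it reaches any external AI service. The sketch below is a minimal, hypothetical illustration of that principle — a simple regex pass, not a feature of NVivo, ATLAS.ti, or any tool named here, and no substitute for a proper ethics protocol:

```python
import re

# Hypothetical helper: strip common direct identifiers from interview
# text before it is sent to an external AI service for processing.
# Patterns are deliberately simple and illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str, known_names: list[str]) -> str:
    """Replace emails, phone numbers, and known participant names
    with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    for name in known_names:
        # Names are supplied from the consent records for the study.
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

transcript = "Call Amina on +44 20 7946 0958 or amina@example.org."
print(redact(transcript, ["Amina"]))
# → Call [NAME] on [PHONE] or [EMAIL].
```

Even a crude pass like this makes the point that confidentiality work happens before the AI tool, not after — though real projects would need pseudonym registries, indirect-identifier review, and documentation of the step in the ethics application.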

What ethical frameworks should govern AI in community-based research? Birhane (2021) draws on Sub-Saharan African relational philosophies — including Ubuntu — to critique dominant AI ethics frameworks, arguing that algorithmic injustice stems from treating ethics as individual rather than relational. Mhlambi, founder of Bantucracy and affiliated with Stanford’s Center for Comparative Studies in Race and Ethnicity, advances decolonial AI by applying African indigenous philosophy to ethical AI policy (Boston University, 2021). The Centre for Intellectual Property and Information Technology Law (CIPIT) at Strathmore University in Nairobi has published directly on integrating cultural values into African AI development (CIPIT, n.d.).

Our Approach

We are developing practical frameworks for AI integration in cultural research. This includes an AI Starter Course designed specifically for creative practitioners — not a technical bootcamp, but a guided introduction to the tools, workflows, and ethical considerations that matter when AI enters the cultural research space. We take cues from African AI literacy initiatives. The Deep Learning Indaba, founded in 2017 as the annual gathering of the African machine learning community, has spawned national IndabaX events across the continent (Deep Learning Indaba, n.d.); Masakhane NLP is a grassroots community developing natural language processing solutions for African languages.

We use AI in our own practice — for data processing, visual research, content generation, and pattern mapping — and we document what works, what fails, and what raises ethical questions. This domain is as much about honest reporting as it is about innovation. As Haraway (2016) writes in Staying with the Trouble, the challenge is not to master new technologies but to learn to “stay with the trouble” they produce.

Who This Is For

Cultural researchers exploring AI tools. Creative practitioners navigating the AI landscape. Educators building AI literacy programmes. Technologists seeking cultural and ethical perspectives. Anyone grappling with the question of what it means to be human in an age of artificial intelligence.

References

Birhane, A. (2021). ‘Algorithmic Injustice: A Relational Ethics Approach’, Patterns, 2(2).

Boston University (2021). ‘Can an Ancient African Philosophy Save Us from AI Bias?’, BU Today. Available at: bu.edu (Accessed: 19 February 2026).

CIPIT, Strathmore University (n.d.). Ethical AI Development in Africa: Integrating Cultural Values and Addressing Global Disparities. Available at: cipit.org (Accessed: 19 February 2026).

Crutzen, P.J. (2002). ‘Geology of Mankind’, Nature, 415, p. 23.

Deep Learning Indaba (n.d.). Available at: deeplearningindaba.com (Accessed: 19 February 2026).

Haraway, D. (2015). ‘Anthropocene, Capitalocene, Plantationocene, Chthulucene: Making Kin’, Environmental Humanities, 6(1), pp. 159–165.

Haraway, D. (2016). Staying with the Trouble: Making Kin in the Chthulucene. Durham: Duke University Press.

Tsing, A.L. (2015). The Mushroom at the End of the World: On the Possibility of Life in Capitalist Ruins. Princeton: Princeton University Press.

Contribute to This Domain

Working with AI in research or creative practice? Building ethical frameworks for emerging technology? Join the conversation.

Join the Think Tank