There is no shortage of AI courses. Universities offer them. Tech companies run bootcamps. YouTube is saturated with tutorials. And yet, when we speak with cultural researchers, anthropologists, designers, and creative practitioners across Southern Africa, the overwhelming response is the same: none of these courses speak to what we actually do. The gap is not technical knowledge per se — it is contextual knowledge. How does AI fit into qualitative research workflows? What are the ethical implications of using machine learning on community-generated data? How do you use these tools without reproducing the extractive dynamics they were built on?
This is the problem we are trying to solve. AnthroWorks is developing an AI starter course designed specifically for cultural practitioners — people whose primary concern is not building algorithms but understanding how AI can serve (or undermine) research that centres human relationships, cultural knowledge, and community benefit.
Learning from Africa’s AI Community
Any serious engagement with AI in the African context must begin with the communities already doing this work. The Deep Learning Indaba, founded in 2017, has become the continent’s most significant gathering of machine learning researchers and practitioners. What began as a single event has spawned IndabaX satellite conferences across dozens of African countries, creating a distributed network of AI researchers who understand the specific challenges of working with African data, African languages, and African communities. The Indaba model — building local capacity rather than importing expertise — is precisely the approach our course aims to follow.
Equally significant is Masakhane, the grassroots community dedicated to natural language processing for African languages. Founded on the recognition that the vast majority of NLP research is conducted in English and a handful of other European languages, Masakhane has brought together researchers across the continent to build language technologies that actually serve African communities. Their work demonstrates a crucial principle: AI tools are only as useful as the data and languages they are trained on, and for most African cultural practitioners, the dominant tools are built on datasets that do not represent their realities.
“The question is not whether cultural researchers should use AI. It is whether the AI tools available were built with any awareness of the communities those researchers serve.”
Tools That Already Exist
Cultural practitioners do not need to build AI systems from scratch. They need to understand the AI capabilities already embedded in the tools they use — and the implications of those capabilities. NVivo and ATLAS.ti, the qualitative data analysis platforms that many researchers already rely on, have increasingly integrated AI-powered features: automated coding suggestions, sentiment analysis, pattern recognition across large qualitative datasets. These features can dramatically accelerate analysis, but they also introduce new risks. When an algorithm suggests thematic codes for interview transcripts, whose categories is it applying? When sentiment analysis is run on conversations conducted in isiZulu or Sesotho, how reliable are the results?
Our course will address these practical questions directly, not as abstract ethical dilemmas but as concrete workflow decisions that researchers face daily. When should you use AI-assisted coding? When should you turn it off? How do you validate machine-generated themes against community knowledge? How do you explain your use of AI tools to research participants in ways that maintain trust?
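To make the validation question concrete: one simple, well-established check is Cohen's kappa, which measures how far agreement between an AI coder and a human coder exceeds chance. The sketch below is illustrative only; the thematic codes and segment data are invented, not drawn from any real study.

```python
# Illustrative sketch: comparing AI-suggested codes against a human
# coder's codes for the same transcript segments, using Cohen's kappa
# as a basic agreement check before trusting machine-generated themes.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels over the same segments."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: proportion of segments coded identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: chance overlap given each coder's label frequencies.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented example: codes assigned to ten interview segments.
ai_codes    = ["kinship", "land", "land", "ritual", "kinship",
               "land", "ritual", "kinship", "land", "ritual"]
human_codes = ["kinship", "land", "ritual", "ritual", "kinship",
               "land", "ritual", "land", "land", "ritual"]

kappa = cohens_kappa(ai_codes, human_codes)
print(f"Cohen's kappa: {kappa:.2f}")
```

A low kappa does not settle whose codes are right, but it flags where the algorithm's categories diverge from the researcher's and where human interpretive review is most needed.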
The Anthropocene and the Algorithm
The broader context for this work is the recognition that we are living through multiple simultaneous transformations. Crutzen’s (2002) identification of the Anthropocene — the geological epoch defined by human impact on Earth systems — has fundamentally reframed how we understand the relationship between human activity and planetary futures. AI is part of this transformation: its energy consumption, its material infrastructure, its reshaping of labour and knowledge production are all ecological and political phenomena, not merely technical ones.
Haraway’s (2016) call to “stay with the trouble” — to resist both uncritical techno-optimism and paralysing techno-pessimism — offers a useful orientation. Cultural practitioners do not need to become AI evangelists or AI sceptics. They need to become literate, critical, and capable users who can make informed decisions about when and how to deploy these tools in service of their research communities.
Ethical Guardrails for Community-Based Research
The ethical dimension of AI in community-based research cannot be an add-on module at the end of the course. It must be woven through every session. Key guardrails include: informed consent that specifically addresses AI processing of participant data; data sovereignty protocols that give communities control over how their knowledge is stored, analysed, and shared; transparency about the limitations and biases of AI tools; and a commitment to human interpretive authority — the principle that AI-generated analysis is always a starting point for human judgement, never a replacement for it.
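One way to operationalise the first two guardrails is to make AI-specific consent an explicit, checkable property of every data record, so that no AI-assisted step can silently touch non-consenting participants' data. The sketch below is a minimal illustration of that idea; every field name, function name, and value is hypothetical, not a standard or an existing tool's API.

```python
# Hypothetical sketch: consent to AI processing recorded per participant,
# then enforced as a filter before any AI-assisted analysis step.
from dataclasses import dataclass

@dataclass(frozen=True)
class ParticipantRecord:
    participant_id: str
    community: str
    consented_to_ai_processing: bool  # explicit, AI-specific consent
    data_steward: str                 # who holds authority over this data

def ai_processing_allowed(record: ParticipantRecord) -> bool:
    """Gate: AI tools may only touch records with explicit AI consent."""
    return record.consented_to_ai_processing

# Invented example records.
records = [
    ParticipantRecord("p-001", "eMkhondo", True, "community council"),
    ParticipantRecord("p-002", "eMkhondo", False, "community council"),
]

# Only consenting participants' data reaches any AI-assisted step.
eligible = [r for r in records if ai_processing_allowed(r)]
print([r.participant_id for r in eligible])  # → ['p-001']
```

The design choice worth noting is that consent here is a property of the data, carried with each record, rather than a one-off checkbox at project start; revoking consent means flipping one field, and every downstream AI step respects it automatically.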
We are building this course in public, and this thread is part of that process. What tools do you need to understand? What ethical questions keep you up at night? What would make an AI course actually useful for the work you do? The conversation is open.
References
Crutzen, P.J. (2002) ‘Geology of mankind’, Nature, 415(6867), p. 23.
Haraway, D.J. (2016) Staying with the Trouble: Making Kin in the Chthulucene. Durham: Duke University Press.
Masakhane (no date) A grassroots NLP community for Africa, by Africans. Available at: https://www.masakhane.io (Accessed: 8 January 2026).
The Deep Learning Indaba (2017–) Strengthening African Machine Learning. Available at: https://deeplearningindaba.com (Accessed: 8 January 2026).