The integration of artificial intelligence into cultural research is no longer hypothetical. Researchers are already using AI to transcribe interviews, code qualitative data, generate visual outputs, identify patterns in large datasets, and even draft preliminary analyses. The question is no longer whether AI will transform cultural research but how — and whether the transformation will serve the communities that research claims to represent.

This is not a neutral technical development. Every tool embeds the assumptions of its makers. When we use AI to process qualitative data about African cultural practices, we are running that data through systems trained predominantly on Western, English-language texts, built by engineers in Silicon Valley, and optimised for patterns that may not map onto the phenomena we are trying to understand.

The Qualitative Research Toolkit

The two dominant platforms for qualitative data analysis, NVivo (now owned by Lumivero) and ATLAS.ti, anchor the market for computer-assisted qualitative data analysis software (CAQDAS). Both platforms have increasingly incorporated AI features: automated coding suggestions, sentiment analysis, pattern recognition, and natural language processing. These features promise to accelerate research workflows that have traditionally been labour-intensive and time-consuming.

The promise is real. A researcher working with hundreds of hours of interview transcripts can use AI-assisted coding to identify preliminary themes in a fraction of the time it would take to code manually. Visual pattern recognition can surface connections across large image datasets that a human researcher might miss. Language processing tools can work across multiple languages, making multilingual research more feasible.

But the risks are equally real. Automated coding systems learn from existing codebooks, which means they tend to reproduce the analytical frameworks they were trained on rather than allowing new frameworks to emerge from the data. This is particularly problematic in cross-cultural research, where the most important insights often come from patterns that do not fit existing Western categories.
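The codebook-dependence problem is visible even in a toy sketch. The keyword-matching coder below is a drastic simplification of real CAQDAS suggestion engines, and its codebook, themes, and transcript excerpts are invented for illustration; but it shows the structural limit in miniature: the tool can only ever assign categories it was given in advance.

```python
# Hypothetical codebook mapping themes to indicator keywords
# (both themes and keywords are invented for this illustration).
CODEBOOK = {
    "land": {"land", "farm", "soil", "harvest"},
    "ritual": {"ceremony", "ancestors", "ritual", "blessing"},
    "migration": {"city", "moved", "left", "journey"},
}

def suggest_codes(segment: str, codebook=CODEBOOK):
    """Suggest themes whose indicator words appear in a transcript segment."""
    words = {w.strip(".,;:!?").lower() for w in segment.split()}
    return sorted(theme for theme, keys in codebook.items() if words & keys)

segments = [
    "We held the blessing ceremony before the harvest.",
    "My brother moved to the city after the drought.",
    "The spirits of the forest guided our decision.",  # outside the codebook
]
for s in segments:
    print(suggest_codes(s) or "(uncoded: invisible to the tool)")
```

The third excerpt, arguably the most interesting of the three, registers as nothing at all: the coder cannot surface a category it was never given, which is precisely how automated coding reproduces existing frameworks rather than letting new ones emerge from the data.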

“The danger is not that AI will replace cultural researchers. The danger is that it will make cultural research more efficient at asking the wrong questions.”

Algorithmic Injustice and Relational Ethics

Abeba Birhane’s work on algorithmic injustice offers a crucial framework for understanding what is at stake. Birhane (2021) argues that the dominant approach to AI ethics — which focuses on individual fairness, bias mitigation, and transparency — is insufficient because it operates within the same individualist ontological framework that produced the problems in the first place. Drawing on Ubuntu and relational ethics, Birhane proposes that AI systems should be evaluated not by whether they treat individuals fairly in isolation but by whether they strengthen or weaken the web of relationships that constitute community life.

This is a profound reframing. It means that an AI tool used in cultural research cannot be evaluated solely by its technical accuracy or its efficiency. It must be evaluated by its relational effects: does it strengthen the relationship between researcher and community? Does it make community members more or less visible in the research process? Does it concentrate analytical power in the hands of the researcher or distribute it?

Decolonial AI: The Bantucracy Framework

Sabelo Mhlambi’s Bantucracy project represents one of the most ambitious attempts to develop AI frameworks grounded in African philosophical traditions. Mhlambi argues that the current AI paradigm — which prioritises efficiency, optimisation, and scale — reflects a specific cultural logic that is neither universal nor inevitable. An AI paradigm grounded in Ubuntu would prioritise relational wellbeing, collective benefit, and the strengthening of community bonds.

This is not a rejection of AI technology. It is a call for AI technology to be developed and deployed within different value systems. The Centre for Intellectual Property and Information Technology Law (CIPIT) at Strathmore University in Nairobi has been at the forefront of this work, developing frameworks for ethical AI deployment across Africa that centre community consent, data sovereignty, and collective benefit rather than individual privacy alone.

Participant Confidentiality in the Age of AI

One of the most pressing concerns is participant confidentiality. When qualitative data — interview transcripts, field notes, photographs, audio recordings — is processed through AI systems, it enters a technical infrastructure that the researcher may not fully understand or control. Cloud-based AI processing means that sensitive community data may be stored on servers in jurisdictions with different privacy laws. Machine learning systems may retain traces of the data they process, creating potential pathways for re-identification.
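One concrete mitigation is to pseudonymise transcripts locally before any data leaves the researcher's machine. The sketch below is a minimal illustration under stated assumptions: the names, the pseudonym scheme, and the mapping are invented, and whole-word substitution of known names is nowhere near a complete de-identification protocol (places, dates, and indirect identifiers all survive it).

```python
import re

# Illustrative mapping from participant names to stable pseudonyms;
# in practice this would be built per project and stored offline,
# never alongside the pseudonymised data.
PSEUDONYMS = {"Amina": "P01", "Kwame": "P02"}

def pseudonymise(text: str, mapping=PSEUDONYMS) -> str:
    """Replace each known name with its pseudonym before cloud processing.

    Whole-word matching only; this is not a substitute for a full
    de-identification review, since indirect identifiers remain.
    """
    for name, code in mapping.items():
        text = re.sub(rf"\b{re.escape(name)}\b", code, text)
    return text

print(pseudonymise("Amina said Kwame left for Accra in 2019."))
```

Even in this tiny example, the place name and the year pass through untouched, which is why local pseudonymisation can only ever be one layer of a confidentiality strategy, not the whole of it.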

For cultural researchers working with marginalised communities, these are not abstract concerns. Communities that have experienced generations of surveillance, extraction, and exploitation have every reason to distrust systems that promise efficiency while moving their data through opaque technical architectures. The burden is on researchers to demonstrate that their use of AI tools does not compromise the trust that participants have placed in them.

A Path Forward

None of this means that AI should be avoided in cultural research. It means that its adoption must be deliberate, critical, and community-informed. Researchers should be transparent with participants about which AI tools are used and how data is processed. Communities should have meaningful input into whether and how AI is deployed in research that concerns them. And the field as a whole needs to develop ethical frameworks that go beyond individual consent to address collective data rights and relational accountability.

AI is a tool. Like all tools, it reflects the values of the hand that wields it. The challenge for cultural researchers is to ensure that the hand remains human — and that the values it carries are shaped by the communities the research serves, not by the corporations that build the tools.

References

Birhane, A. (2021) ‘Algorithmic Injustice: A Relational Ethics Approach’, Patterns, 2(2), pp. 1–9.

CIPIT (2023) Artificial Intelligence for Africa: Frameworks for Ethical Deployment. Nairobi: Strathmore University Centre for Intellectual Property and Information Technology Law.

Mhlambi, S. (2020) ‘From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance’, Carr Center Discussion Paper Series, 2020-009. Cambridge, MA: Harvard Kennedy School.

Silver, C. and Lewins, A. (2014) Using Software in Qualitative Research: A Step-by-Step Guide. 2nd edn. London: SAGE Publications.