Artificial Intelligence Governance and Ubuntu: democratising access to knowledge for all.
Published On: May 30, 2025


From left to right: Bathabile Dlamini (Moderator and Media & Communications Officer, Accountability Lab Zimbabwe), panelists Beloved Chiweshe (Programs & Campaigns Manager, Accountability Lab Zimbabwe), Meamande Wamukwamba (Human Rights Lawyer and Executive Director, MESDA), Tawanda Mugari (Co-founder, Digital Society of Africa), and Noor Armad (Communications Manager, Media Monitoring Africa) during the panel “Ubuntu and AI Governance.” The session explored knowledge as a commons for ethical and inclusive technology, examining the tensions between AI’s growing role in knowledge production and the urgent need for that knowledge to be accurate, accessible, inclusive, and culturally grounded.

“I am because you are.”

This simple expression of Ubuntu is both a philosophical statement and a political principle that guides many Africans. As Africa confronts the challenges and opportunities presented by artificial intelligence (AI), this guiding principle should be at the centre of how we imagine a brave new world in which AI is shaping and reshaping how we live, understand, work, and access resources.
At the Digital Rights and Inclusion Forum (DRIF25) in Lusaka, Zambia, the Accountability Lab hosted a side session titled “Ubuntu and AI Governance.” The session explored knowledge as a commons for ethical and inclusive technology, examining the tensions between AI’s growing centrality in knowledge production and the urgent need for that knowledge to be accurate, accessible, inclusive, and culturally grounded.

The session was based on the understanding that, even as it develops rapidly and continues to change in scope and ability, AI remains blind to contexts outside its current centres of training in the West. As a result, despite valiant efforts to introduce various languages, it remains insensitive to cultural difference. It is inaccessible to some who are already online in the “developing world” because of costs, comprehension barriers, and global restrictions on existing AI tools, and, perhaps more importantly, to the large swaths of populations adversely affected by existing inequalities in internet access. From a governance perspective, this introduces a variety of issues.
As AI reshapes societies, economies, and governance systems globally, questions arise about the kind of technologies being developed, who is building them, and for whom they are being built. There are also questions about who understands these systems, and who can question and shape them. More importantly, while there is clear recognition, even outside the doomers, of the immense good that AI has to offer, there are real questions about what kind of guardrails are being put in place to ensure that, even as much good is done, the harms and potential ills that AI comes with are mitigated.


The revolutionary and novel nature of AI also means that its development is unprecedented and, try as we may, we cannot see beyond the bend; we do not have a shared morality around its development and deployment. The vast amounts of capital that have been invested, and that can be earned by developing and deploying AI, make it difficult for developers and technologists to self-regulate in pursuit of keeping AI for good rather than nefarious purposes. The concept of AI for good is beneficial, and no one can openly dispute the need to focus on it as technologists and developers build Artificial General Intelligence and AI agents, and work with various large language models (LLMs). However, as James Madison is famously paraphrased, constitutions that delimit our rights and obligations were not written for the angels within us, but for men, who must first be controlled by government, and for a government that must, in turn, be obliged to control itself.

It is precisely this understanding: while men can be good, they can also do evil, and to stem that evil they must be regulated. The same applies to AI; it can be beneficial, but to mitigate its potential for harm, it needs to be controlled. This should be done under a set of principles that reflect the spirit of Ubuntu. Such governance would ensure that our development of AI is guided by the spirit of compassion, reciprocity, dignity, harmony, and community, because “I am because you are.”
AI is trained and thrives on data, most of it generated by everyday people through their online interactions, images, location tags, research, writing, and posts. As Cathy O’Neil argues in Weapons of Math Destruction, these digital inputs feed powerful algorithms that increasingly use sophisticated yet opaque techniques to determine access to resources, information, and opportunities. Meanwhile, ownership is consolidated, transparency is elusive, and public understanding is minimal.

For most people, the processes that drive AI are invisible, the implications abstract, and the consequences misunderstood. Yet people are increasingly affected, often unknowingly, by the decisions and actions of these systems, with real socio-economic and political consequences. As the documentary Coded Bias illustrates, bias, privilege, and unfairness are often embedded in code as AI is developed, pushing these challenges beyond the purely technical and into the realm of politics and social justice.
This is not accidental. It is an ethical, structural, and political issue that can be addressed through inclusive values, such as those espoused by the Ubuntu philosophy. The policies, terms, and governance frameworks surrounding AI have, thus far, been written in dense, technocratic language that alienates most people, yet the technology is so sweeping that it will have a lasting impact on the general population. Beyond the question of values as a guiding light in AI development and use, this makes participation in digital governance a necessity for the average person.

Without values-based and inclusive development platforms, powerful actors, be they governments or corporations, can act without accountability, exploiting the knowledge commons without contributing to or allowing public stewardship.
This is why the focus on Ubuntu and AI governance at DRIF25 was timely and apt. Ubuntu is more than a value system. It is a demand for a different kind of digital politics, one that insists on visibility, participation, and shared ownership. It challenges us to treat knowledge in the digital age as a common good rather than a private commodity. If we are to apply this ethic to AI governance genuinely, then making AI legible, comprehensible, and usable for everyone, regardless of education, geography, or language, is a political imperative.

In light of the case we are making for centring Ubuntu in Africa, localisation is key. AI policies and technologies must be grounded in the lived realities of African communities. That begins with language: not just the large language models being developed with the aid of linguists and polyglots, but also the legal documents and terms of service, which have hitherto been written in impenetrable English or legalese. This is a translation task, but also an evangelising one, in which those fortunate enough to be at the forefront of the AI game take it upon themselves to explain it in everyday terms that people can grasp, so that they are able to engage and debate.
Communities should be empowered to ask: What are we giving up when we click “accept”? What are the risks of sharing our data? How do these systems affect our access to public services, education, or livelihoods? Participatory dialogue is also essential. Instead of top-down policymaking, forums should be established where people, particularly young people, rural communities, and marginalised groups, can co-create and feed into AI governance frameworks that reflect their values. These conversations should not be symbolic but substantive, with real influence on how technologies are designed, regulated, and deployed. Ubuntu teaches us that no one is fully human in isolation. Likewise, no governance model can claim legitimacy without the input and understanding of those it governs.
This political approach requires challenging dominant narratives, including the notion that innovation is inherently global and detached from local contexts. Africa must resist the temptation to merely adopt AI models developed elsewhere, models trained on data that reflects entirely different cultures, biases, and priorities. Instead, the continent must invest in developing its own systems: language models trained in African languages, AI tools tailored to African problems, and data governance frameworks that protect people’s rights while reflecting communal values.
To do this effectively, digital governance and rights efforts must bridge the gap between policy and practice. That means not only advocating for regulation but also building the capacity of communities to engage meaningfully with technology. It means equipping young Africans with the tools to design, test, and scale technologies rooted in local knowledge. And it means understanding that democratising access to AI is not a matter of charity or inclusion; it is a matter of power.
Even Ubuntu itself, though we propose it as a guiding frame, must be interrogated critically. While it provides a powerful vision of collective well-being, it must be balanced with the right to privacy and the individual safeguards that are increasingly under threat in the digital age. The real challenge is not choosing between collective values and personal rights, but designing systems that can hold both in tension. Legal and technological frameworks must reflect this complexity.

They must be creative, context-sensitive, and, above all, rooted in the belief that everyone deserves to understand and influence the tools that shape their lives.

Africa has the philosophical foundation, historical memory, and human capital to lead a new era of AI governance, one that is participatory, localised, and just. But that leadership begins not in the cloud or the lab, but in the village meeting, the WhatsApp group, the community forum. This solid base must be informed by social justice principles for global application, ones that centre representation, recognition, redistribution, and participation parity in developing and harnessing the myriad advantages that AI has to offer, without perpetuating harm.

Making AI visible, understandable, and negotiable for ordinary people is not just good practice; it is a political necessity. Because in the spirit of Ubuntu, technology must serve our collective humanity, not obscure it.

END // by McDonald Lewanika, Beloved Chiweshe, Thulani Mswelanto, Bathabile Dlamini and Makomborero Muropa
