“AI has the potential to expand access to HIV information for people who might otherwise feel too afraid or stigmatised to ask questions – but technology alone cannot eliminate stigma.” – Tatiene Ngoie, CHAPS South Africa
A few years ago, when people talked about robots taking over the world, we laughed. It sounded like something out of a science-fiction movie. Well, here we are. If you used ChatGPT this week, or if a robot vacuum cleaned your floors, the age of artificial intelligence is already in your home – and it is already shaping the HIV response.
On 16 March 2026, ITPC Global convened an important webinar, AI in HIV: Centering Communities in the Age of Algorithms. We brought together over 100 participants and six distinguished panellists from across the health, technology, governance, and community sectors. The conversation was bold, urgent, and at times unsettling: while algorithms hold real promise for the HIV response, communities must shape them, not merely be subjected to them.
The Conversation
The panel brought together perspectives from an AI product developer, a community researcher working on chatbot bias, a government public health physician, an ethics and governance consultant, an HIV activist and youth advocate, and ITPC Global’s own Executive Director, who is a member of the Lancet Global Health Commission on AI & HIV.
The moderator, Charlene Omrawo, framed the discussion within the arc of industrial revolutions, from steam power to the current fifth revolution, defined by people-centric technology, human-robot collaboration, and ethical stewardship. The question for the HIV sector is clear: will we be passive passengers in this shift, or will we drive it?
Key Discussion Points
1. The Promise: From Data Overload to Life-Changing Efficiency
The panel opened with a direct question: what is the single greatest benefit of AI for the HIV response? The answers were telling. Panellists highlighted AI’s capacity to process vast volumes of complex data, from testing to treatment adherence, far faster than any human team. For overburdened healthcare workers, this could be transformative.
One panellist shared a striking personal example: a visit to a clinic that usually consumed five hours was completed in under 30 minutes, thanks to a machine called Abby that autonomously processed vitals and sent results via email. AI chatbots were also highlighted for their potential to provide confidential, stigma-free information around the clock – a genuine lifeline for young people in communities where speaking openly about HIV or sexual health still carries enormous risk.
Solange Baptiste added another dimension: the ability to ask dozens of small, personal health questions before a doctor’s appointment, without waiting, without judgment, and without feeling like a burden. That shortening of the feedback loop, she noted, is a real, tangible benefit for people living with HIV.
2. The Risks: Bias, Exploitation, and the Erosion of Community Governance
The panel was equally clear-eyed about the dangers. When AI systems are built on datasets that exclude marginalised communities, as most currently are, they do not just fail those communities. They actively deepen inequality. One panellist from the technology sector described how generative AI tools trained primarily on data from high-income, high-connectivity environments will, by default, produce outputs that do not reflect or serve people in sub-Saharan Africa or other under-represented regions.
In the context of HIV, where stigma already silences people, a chatbot that responds with subtle judgment, or fails to recognise South African slang or local clinic names, does more than fail to help. It actively harms. The evaluator working on AI chatbot interactions in South Africa described a painstaking process of testing responses for stigma, local relevance, empathy, and plain-language comprehension – work that most AI deployments never bother to do.
Ethics and governance consultant Rohit Malpani raised a broader structural concern: AI is being developed and deployed at a speed that vastly outpaces regulation. A technology can move from concept to deployment in as little as 12 months, with little or no regulatory oversight, and at a moment when HIV funding is being dramatically cut, the pressure to use AI as a cheap substitute for human capacity will be enormous. That pressure, he warned, will push us toward shortcuts and greater risks.
Perhaps the most sobering warning came from Solange Baptiste, who cautioned against AI eroding the governance model that made the HIV response effective in the first place. When programme priorities are set by predictive algorithms rather than community realities, and when who is eligible for services is determined by a risk score rather than lived experience, AI stops being a tool in the hands of communities and starts replacing them.
3. Governance, Data Sovereignty, and Community Control
The panel’s most animated discussion centred on governance: who controls the algorithms, who owns the data, and what enforceable rights do communities have? Solange Baptiste drew a powerful parallel to the extractive economics of agriculture and pharmaceuticals: communities giving up raw materials (in this case, health data) while value flows elsewhere, with no meaningful return. She called for more than theory: enforceable rights, not lovely platitudes.
Rohit Malpani pointed to concrete mechanisms communities can use to assert control: data cooperatives (as pioneered by indigenous groups in New Zealand and Canada); participatory design processes; red teaming, where affected communities trial and stress-test technologies before deployment; and litigation, which has historically been critical to shaping how health technologies serve or fail marginalised populations.
From a government perspective, Dr. Hudson Balidawa of Uganda’s Ministry of Health underscored the very real challenge of regulating a technology that evolves faster than law. Existing data protection frameworks were not designed with AI in mind, and accountability gaps remain serious, particularly around the cost of AI-enhanced HIV programmes at a time when fiscal space is shrinking.
Youth advocate Nomonde Ngema brought the discussion back to its human core: communities must not only be consulted after solutions are designed, they must be present from the very moment a problem is defined. If communities were genuinely at the table, she argued, we would not be seeing the current proliferation of AI chatbots while tools for tracking viral load progress and adherence go underdeveloped.
“AI is no different from any other tool. Keep communities at the centre of how the tools are designed, governed, and used – then we can see that AI will strengthen the response.” – Solange Baptiste, Executive Director, ITPC Global
Three Key Takeaways
1. AI is already here and so is the window to shape it.
The question is no longer whether AI will enter the HIV response; it already has, from diagnostics and adherence tools to data analytics and chatbot services. The urgent question is governance: who decides how these tools are built, who they serve, and what recourse exists when they cause harm. As Rohit Malpani put it, the rules of the road are being written while we race down the freeway. Communities cannot afford to be passive while those rules take shape.
2. Bias is not a glitch. It is a structural problem that requires structural solutions.
When AI is trained on data that does not include African communities, young people, or people living with HIV, the resulting tools will not just underserve those communities, they will replicate and amplify existing stigma and inequality. Meaningful community participation must begin at the design stage, not as a consultation afterthought. Human oversight, continuous evaluation, and empathy are non-negotiable requirements, not optional upgrades.
3. Community leadership is the blueprint for AI as for everything else.
The HIV response has always succeeded when communities lead. Generic ARVs, community-based testing, PrEP, and community-led monitoring all depended on communities being more than data sources. They were decision-makers. The same principle must apply in the age of algorithms. AI must start at the pain point of the affected community. It cannot be a solution looking for a problem.
What’s Next
This webinar is the beginning of a larger ITPC Global conversation on AI and HIV. Two important initiatives are in the pipeline:
- AI 101 Training for Community Leaders: developed in collaboration with Audere, a health tech innovation company. Watch this space for dates.
- Global Community AI Survey: capturing hopes, concerns, and experiences of communities using AI in HIV and health contexts worldwide. Results will be published ahead of AIDS 2026.
For more information or to share your reflections on this discussion, contact us at admin@itpcglobal.org.
If AI is going to shape the HIV response, communities must shape AI.
