The rapid development of any technology brings a range of risks that must be carefully assessed and mitigated before scaling solutions. This is especially true for fast-evolving technologies like LLMs, where the pace of change can be difficult to track. Failing to consider the societal impacts of these technologies risks entrenching harmful business models, reinforcing inequalities, and amplifying unintended harms that could shape the long-term trajectory of AI adoption.
This is particularly critical in international development, where these risks can have direct and severe consequences for the world’s poorest communities. Moving fast and breaking things is not an option. Instead, AI solutions must be developed carefully and responsibly to ensure they address real needs while avoiding harm.
![](https://images.squarespace-cdn.com/content/v1/6160742e58596279bef906ba/d5da8150-8964-4f5b-84ef-762f7615dc80/image-asset.jpg)
Misinformation
One of the key risks for this pilot was the potential for misinformation, particularly given the healthcare context, where inaccurate information can have serious consequences. The Avatr solution was explicitly designed to reduce misinformation and provide users with reliable medical guidance.
LLMs generate text by predicting the next most likely word or sequence of words based on patterns learned from their training data (Weidinger et al., 2021). This method presents two challenges:
- LLMs learn from large datasets that may contain both factual and false information, making them susceptible to reproducing inaccuracies.
- Even if an LLM were trained solely on factually correct data, it would still generate text based on statistical likelihood rather than an understanding of truth. The fundamental issue is that statistical word prediction is not the same as verifying and relaying accurate information.
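To make this concrete, the sketch below (illustrative only, not Avatr’s code) uses the open-source Hugging Face transformers library and the public gpt2 model to inspect the probabilities an LLM assigns to possible next tokens. The ranking reflects statistical likelihood learned from training data, not medical truth.

```python
# Minimal illustration: an LLM scores *likely* continuations, not *true* ones.
# Assumes the Hugging Face `transformers` library and the public "gpt2" model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The recommended adult dose of paracetamol is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

# The top-ranked tokens are simply the most *probable* continuations;
# nothing in this computation checks them against medical fact.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, tok_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([tok_id])!r}: {p:.3f}")
```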
While Retrieval-Augmented Generation (RAG) does not entirely eliminate misinformation risks, it significantly mitigates them. Avatr uses RAG to:
- Supplement LLM-generated text with information retrieved from a curated database of verified medical factsheets and peer-reviewed sources.
- Ensure hospitals control the knowledge base, allowing them to filter and verify all information used in responses.
By retrieving trusted medical content rather than synthesising responses from raw probabilities, Avatr greatly reduces the risk of misinformation.
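The sketch below shows the general RAG pattern described above; it is not Inavya’s implementation. It assumes the open-source sentence-transformers library, and the factsheet entries and prompt wording are placeholders.

```python
# Simplified RAG sketch: retrieve verified factsheets, then constrain the
# LLM's prompt to that curated content. Placeholder data throughout.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Curated, hospital-verified knowledge base (placeholder entries).
factsheets = [
    "Factsheet A: post-operative wound care guidance ...",
    "Factsheet B: safe medication dosage guidance ...",
]
doc_vecs = encoder.encode(factsheets, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k factsheets most similar to the question."""
    q_vec = encoder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec  # cosine similarity (vectors are normalised)
    return [factsheets[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question: str) -> str:
    """Anchor the LLM to retrieved, verified content rather than raw recall."""
    context = "\n\n".join(retrieve(question))
    return (
        "Answer using only the verified factsheets below. "
        "If they do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```

Because the hospital curates `factsheets`, it retains control over every piece of information the model can draw on, which is the property the pilot relies on.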
While it is generally true that LLMs do not inherently distinguish between truth and statistical likelihood, different efforts have been made to mitigate this limitation. Reinforcement Learning from Human Feedback (RLHF) is one such approach, where humans rank outputs based on accuracy, coherence, and factuality, helping fine-tune responses over time. However, RLHF is not foolproof: its effectiveness depends on factors such as the quality of human oversight and the diversity of feedback sources.
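As a rough illustration of the mechanism, the sketch below implements the pairwise preference loss commonly used to train an RLHF reward model (a Bradley-Terry style objective). The linear scorer and random embeddings are placeholders, not a production setup.

```python
# Toy sketch of the core RLHF ingredient: a reward model trained on human
# preference rankings. The loss pushes the reward of the human-preferred
# response above the rejected one.
import torch
import torch.nn as nn

reward_model = nn.Linear(768, 1)  # placeholder: scores a response embedding

def preference_loss(chosen_emb: torch.Tensor,
                    rejected_emb: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(r_chosen - r_rejected): reward preferred responses more."""
    r_chosen = reward_model(chosen_emb)
    r_rejected = reward_model(rejected_emb)
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

# Random tensors stand in for embeddings of ranked response pairs.
loss = preference_loss(torch.randn(4, 768), torch.randn(4, 768))
loss.backward()  # the fitted reward model then guides fine-tuning (e.g. PPO)
```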
Data Privacy
Ensuring data privacy and security is fundamental to any AI-powered solution. This is particularly true in international development, where projects typically involve a network of stakeholders (Anderson, 2019), including:
- Donor agencies
- Foreign and local implementing organisations
- Private-sector software developers
- The communities and individuals who use these solutions
This complex structure can lead to ambiguity around data ownership and ethical responsibilities. In regions with weak data governance or high corruption, personal data could be exploited, placing vulnerable populations at risk. To address these concerns, Inavya has implemented strict data protection measures, including:
- Secure Storage: All sensitive medical data is stored on Microsoft Azure’s secure servers, benefiting from built-in privacy controls and encryption (MS Azure, n.d.).
- Minimised Data Sharing: Patient data is not transferred between multiple organisations, reducing the risk of corruption and misuse.
- Role-Based Access Controls: Access to patient data is restricted to authorised doctors, who must receive explicit patient consent; Inavya developers do not have access to patient data (see the sketch after this list).
- Regulatory Compliance: Data storage and processing comply with GDPR and other key cybersecurity regulations, ensuring high standards of privacy protection.
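As an illustration of how role-based access and consent checks of this kind can be enforced in code, here is a minimal sketch. The role names, consent registry, and helper functions are hypothetical and not drawn from Inavya’s system.

```python
# Illustrative role-based access control with an explicit-consent gate.
# All names and data structures are placeholders.
from dataclasses import dataclass

AUTHORISED_ROLES = {"doctor"}  # assumption: only doctors may read records
consent_registry: dict[str, set[str]] = {}  # patient_id -> consenting doctor ids

@dataclass
class User:
    user_id: str
    role: str

def record_consent(patient_id: str, doctor_id: str) -> None:
    """Register a patient's explicit consent for a specific doctor."""
    consent_registry.setdefault(patient_id, set()).add(doctor_id)

def can_access(user: User, patient_id: str) -> bool:
    """Both conditions must hold: an authorised role AND explicit consent."""
    return (user.role in AUTHORISED_ROLES
            and user.user_id in consent_registry.get(patient_id, set()))

record_consent("patient-42", "dr-jones")
assert can_access(User("dr-jones", "doctor"), "patient-42")
assert not can_access(User("dev-1", "developer"), "patient-42")  # devs blocked
```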