One area we would like to explore in more depth is the question of where AI can add the most value.

There is no point replacing an existing system with an AI-based one if the AI solution adds no tangible benefit, or costs more to develop than those benefits justify. Again, providing a comprehensive answer to this question is well beyond the scope of this module. Instead, we’re providing a series of questions, surfaced from our case studies and the wider literature, which can help identify circumstances where AI has scope to add value. It’s also worth noting that a solution doesn’t have to meet all of these conditions for AI to add value, and that meeting many of them doesn’t necessarily mean you should develop an idea, given the potential ethical considerations around a solution.

  • Is there a high level of human error in the existing process? 

Human error is the rate at which humans make mistakes when performing a task or process. In our case study exploring the use of AI to improve the classification of faults in WASH assets, the error rate was high: a significant number of people were misidentifying faults in the assets. In other tasks, the error rate is relatively low. AI systems have scope to add value where the human error rate is already high and there is scope for a system to improve accuracy (Mitra, 2018).

With AI systems, it is far easier to reach 90% accuracy than to achieve 95%+ accuracy. If humans are already achieving a high level of accuracy, you need to consider whether introducing AI is worth the effort and cost of developing a system whose predictions are accurate enough to beat that baseline. The sketch below works through this comparison.
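As a rough illustration, consider comparing a human baseline against two candidate models. The figures below are hypothetical, chosen only to show how the arithmetic works; a minimal sketch in Python:

```python
def errors_per_thousand(accuracy: float) -> float:
    """Expected number of mistakes per 1,000 cases at a given accuracy."""
    return (1 - accuracy) * 1000

# Hypothetical figures: humans already perform well on this task.
human_accuracy = 0.96
model_accuracy_easy = 0.90   # accuracy reachable with modest effort
model_accuracy_hard = 0.95   # accuracy requiring far more data and tuning

for label, acc in [("human baseline", human_accuracy),
                   ("model at 90%", model_accuracy_easy),
                   ("model at 95%", model_accuracy_hard)]:
    print(f"{label}: {errors_per_thousand(acc):.0f} errors per 1,000 cases")

# Even the harder-to-reach 95% model still makes more errors than the human
# baseline (50 vs 40 per 1,000), so here AI would reduce accuracy, not improve it.
```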

  • What is the scale of the problem you are trying to address?

Many of the case studies we have explored involve a situation where there is an immense amount of work to be done and insufficient resources to do it. In our case study into the early-warning forest fire detection system, one of the key reasons the solution added value was that the cameras could be deployed to cover a huge area which would otherwise have required an immense amount of human resource to monitor. While the human error rate for detecting forest fires is very low, the sheer volume of monitoring required makes the task impractical to perform manually.

  • Is the expertise needed to address the problem unavailable?

As well as considering the scale of the task, it is also useful to evaluate the level of expertise needed to perform it. In our study into the Avatr system, the challenge was that people could not access the expertise of highly skilled doctors to answer questions about their outpatient care. By working in collaboration with experts to design, continually monitor, and integrate AI systems, we can democratise access to their expertise through the models they help build. In low-resource contexts where expertise can be hard to access, these kinds of applications have real potential for impact.

  • Are there large amounts of distributed data that need to be analysed to perform the task?  

Humans are not well suited to analysing large distributed datasets. By distributed, we mean that the data comes from a wide range of sources covering different factors. Consider the example of creating a weather prediction model. To accurately predict the weather, a wide range of factors must be considered: temperature, humidity, atmospheric pressure, and so on, all of which need to be integrated into one comprehensive model. Humans struggle to process such a large amount of data coming from different sources. One of the main advantages of AI systems is that they can draw together distributed datasets from different sources at scale to create outputs which humans can easily understand (Mitra, 2018).
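To make the weather example concrete, here is a minimal sketch of integrating distributed sources into a single feature table. The file names and column names are hypothetical stand-ins for real data feeds:

```python
import pandas as pd

# Hypothetical CSV feeds, each from a different source, keyed by weather
# station and timestamp.
temperature = pd.read_csv("temperature_readings.csv")  # station_id, timestamp, temp_c
humidity = pd.read_csv("humidity_readings.csv")        # station_id, timestamp, humidity_pct
pressure = pd.read_csv("pressure_readings.csv")        # station_id, timestamp, pressure_hpa

# Join the sources on their shared keys so that each row holds every factor
# a prediction model would need to learn from.
features = (
    temperature
    .merge(humidity, on=["station_id", "timestamp"])
    .merge(pressure, on=["station_id", "timestamp"])
)

print(features.head())
```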

  • Does the model already exist? Or can you fine-tune a model rather than train one from scratch?

There has been a proliferation of open-source models, particularly since the advent of generative AI. Hugging Face (huggingface.co) has grown roughly a thousandfold since 2022 and now hosts millions of openly licensed models that can be fine-tuned for a fraction of the cost of training a model from scratch. Many of these models can be run on your own inference servers to keep your data completely private.
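As a sketch of what this looks like in practice, the snippet below fine-tunes a small pretrained checkpoint using the Hugging Face transformers and datasets libraries. The checkpoint and dataset named here are illustrative placeholders, not recommendations for any particular task:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

# Start from a small pretrained checkpoint rather than training from scratch.
checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# A public dataset standing in for your own labelled data.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

# Fine-tune on a small subset; a fraction of the cost of full pretraining.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].shuffle(seed=42).select(range(500)),
)
trainer.train()
```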

  • Do you have access to enough representative data to train the model?  

As we’ve explored throughout the module, access to data is crucial to the development of AI systems. Only with enough training data can you navigate the balance between overfitting and underfitting a model. And only if the data is sufficiently representative of the real-world phenomena it describes will the patterns it contains support training a model of sufficient complexity. Even where this data has been collected, there is still a question of access. There can be complicated licensing rules which limit access to data, particularly with survey data, which may be useful for international development use cases (Andersen, 2019). Commercial interests might also prevent organisations from sharing useful training data openly.
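One practical check on whether your data is sufficient is to compare a model's performance on the training set against a held-out validation set. This is a minimal sketch using scikit-learn, with a synthetic dataset standing in for your own data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for a real training set.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# An unconstrained decision tree will happily memorise a small dataset.
model = DecisionTreeClassifier().fit(X_train, y_train)

train_acc = model.score(X_train, y_train)
val_acc = model.score(X_val, y_val)
print(f"train accuracy: {train_acc:.2f}, validation accuracy: {val_acc:.2f}")

# A large gap between the two scores suggests the model has overfitted:
# it has memorised the training data rather than learning patterns that
# generalise, a sign that more (or more representative) data is needed.
```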

  • Can you collect the data, or find a secondary source, if not?

If you don’t have access to primary training data, the natural next step is to consider whether it can be collected. In our interview with Omdena, we discussed how their local chapter approach allows them to collect high-quality data at scale through the large number of people available to work in each chapter. Additionally, there may be secondary data sources which offer an indication of what you are looking to measure, such as satellite images of buildings being used to derive data on the width of roads.

  • Are there quantifiable patterns in the data which an AI system can pick up on?

As we’ve explained throughout the module, AI is focused on the analysis of patterns. Quantifiable patterns are those which can be rationally analysed, measured, and modelled. Intuitive patterns are those which are much harder to analyse rationally or to verbalise: when we try to teach people how to communicate effectively or engage with others with emotional intelligence, it’s very difficult to set out concrete rules for doing it well. AI systems are well suited to analysing the quantifiable patterns in data, but less so the intuitive patterns which humans have mastered through experience (Mitra, 2018).

When we’re creating AI solutions, we need to evaluate the balance of quantifiable and intuitive patterns involved in the task. Using this evaluation, we can identify whether there is scope for AI to add value and how to balance the tasks automated by AI with those best left to humans.

  • Who is going to adopt the solution?  

You need to have a clear use case for the tool. If no one is committed to using the tool once it has been built, there is no real scope for impact. You also need to determine whether the organisations in the ecosystem whose actions will affect adoption recognise the problem your solution addresses and buy into taking an AI approach to it. Everyone we interviewed emphasised the importance of engaging key stakeholders early, understanding their interests, and building a tool which complements those interests.