Artificial Intelligence for Social Good (AI4SG) Seminar at Dagstuhl 2022

Artificial Intelligence (AI) and Machine Learning (ML) researchers from various universities, together with representatives from NGOs based in Benin, Tanzania, Uganda, and the Netherlands as well as globally operating ones, came together for a five-day seminar in Dagstuhl, Germany to pursue various social good goals, such as improving air quality, increasing agricultural productivity with the help of technology, transforming health care, providing humanitarian support, and defeating poverty. The seminar facilitated the exploration of possible collaborations between AI and ML researchers and NGOs through a two-pronged approach, combining high-level talks and discussions on the one hand with a hands-on hackathon on the other. The talks and discussions focused first on the central concepts and theories in AI and ML and in the NGOs’ development work, before diving into specific issues such as generalisability, data pipelines, and explainability. They allowed all participants – in a very short time-frame – to reach a sufficient level of understanding of each other’s work. This understanding formed the basis for then jointly investigating, through the hackathon, how AI and ML could help address the real-world challenges presented by the NGOs.

Hackathon

During the hackathon, which was spread out over two days, four groups consisting of machine learning experts and NGOs tackled the real-world issues that the latter had brought to Dagstuhl. As several NGOs had brought issues that required similar technological expertise, they joined forces and benefited from each other’s perspectives, challenges, and lessons learned.

Group 1

AirQo from Uganda and Laterite from the Netherlands brought seemingly different cases to the table: measuring air quality more efficiently with a limited number of sensors placed at various locations across Uganda (fixed locations as well as moving vehicles), and predicting school drop-out on the basis of various data sources, such as population surveys, respectively. Nonetheless, in machine learning terms, what unified both cases was that their data was feature-based (and not, for example, image- or text-based). Machine learning algorithms such as Gaussian Processes (GPs) and Gradient Boosted Decision Trees (GBDTs) perform well for such use-cases. Before actually applying these algorithms to their respective cases, both AirQo and Laterite needed to reframe their problems and properly set up their data pipelines. Laterite opted for GBDTs, while AirQo used GPs, balancing the various trade-offs of these algorithms. At the end of the hackathon, different approaches had been tested with real-world data, and the group had agreed to continue their collaboration beyond the seminar.
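
To make the tabular setting more concrete, here is a minimal sketch of a gradient-boosted tree baseline of the kind Laterite could start from. The file name, feature columns, and drop-out label are purely illustrative stand-ins, not the organisation’s actual data or pipeline.

```python
# Minimal sketch of a GBDT baseline for tabular survey data.
# "survey.csv", the feature columns, and the "dropped_out" label are
# illustrative stand-ins, not Laterite's actual data.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("survey.csv")                         # one row per student
X = pd.get_dummies(df.drop(columns=["dropped_out"]))   # one-hot encode categoricals
y = df["dropped_out"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Histogram-based GBDTs handle missing values natively, which helps with
# the gaps that are common in field survey data.
model = HistGradientBoostingClassifier(max_iter=300, learning_rate=0.05)
model.fit(X_train, y_train)

print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

A GP-based approach, as explored for AirQo’s sensor data, would follow a similar fit-and-predict pattern but model the spatial correlations between sensor locations explicitly.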

Group 2

Soon dubbed “the text group”, Oxfam Novib, Save the Children, and the Red Cross joined forces to discuss and address challenges involving natural language. While Oxfam Novib and Save the Children aimed at automating knowledge management and reporting, the Red Cross wanted an algorithm to classify open-text survey responses. Oxfam Novib and Save the Children invested their time mostly in scoping their problem, conducting exploratory online interviews with internal stakeholders, and designing a plan for action after the seminar. The Red Cross sub-team developed a prototype algorithm that could effectively perform the desired classification. The team also considered the applicability of the model beyond the particular domain it looked at for this case (i.e. rumors and opinions about COVID-19), documented the results of tests with various models, and, last but not least, secured internal funding at the Red Cross to continue working on this project.
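
As a rough illustration of the Red Cross use-case, the sketch below trains a simple bag-of-words classifier on a handful of made-up survey responses. The labels, example texts, and choice of model are assumptions for illustration only, not the prototype built during the hackathon.

```python
# Minimal sketch of classifying open-text survey responses.
# Texts, labels, and model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I heard the vaccine changes your DNA",     # rumour
    "Where can I get tested in my district?",   # question
    "Thank you for the food distribution",      # feedback
]
labels = ["rumour", "question", "feedback"]

# TF-IDF features plus a linear classifier: a common, easy-to-explain baseline
# before moving to larger language models.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

print(clf.predict(["People say masks make you sick"]))
```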

Group 3

TechnoServe, Humanitarian OpenStreetMap, and again the Red Cross all brought cases requiring computer vision technology. This group probably got the furthest in developing and testing actual machine learning models. At the end of the hackathon, their model could indeed recognize trees (catering to TechnoServe’s case) and buildings (useful for the cases of Humanitarian OpenStreetMap and the Red Cross). The machine learning experts in the group committed to remaining available for questions after the seminar, and the NGOs agreed to demonstrate the models within their respective organizations.
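
For a sense of what such a model involves, here is a minimal sketch of a binary building-versus-background segmentation setup on aerial image tiles. The architecture, tile size, and random stand-in tensors are assumptions made for illustration, not the models the group actually built during the hackathon.

```python
# Minimal sketch of binary segmentation ("building" vs background) on aerial tiles.
# Random tensors stand in for real imagery and hand-labelled masks.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# No pretrained weights here; this only illustrates the training setup.
model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-in for a batch of 4 RGB tiles (256x256) and their pixel-level masks.
images = torch.rand(4, 3, 256, 256)
masks = torch.randint(0, 2, (4, 256, 256))

model.train()
optimizer.zero_grad()
logits = model(images)["out"]   # shape: (4, 2, 256, 256)
loss = criterion(logits, masks)
loss.backward()
optimizer.step()
print("one training step, loss =", loss.item())
```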

Group 4

In the fourth and last group, D-tree from Tanzania, which has strong in-house machine learning expertise, mostly scoped out ideas on how to further advance their use of machine learning. Together with the machine learning experts at the seminar, they did two iterations on these ideas with the team in Tanzania. This resulted in a focused list of ideas that D-tree could pick up after the seminar (e.g. solving the interpretability problem, tailoring their health survey, designing a continuous rather than binary scale, personalizing incentives, detecting suspicious visits, and predicting the type of intervention instead of predicting the outcome). For most of these ideas, relatively simple solutions, such as additional software engineering, could already be very valuable. To bring these ideas into practice, the group also identified possible collaborations and partnerships in this area.
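
As one possible illustration, the idea of detecting suspicious visits could be framed as anomaly detection over visit logs. The sketch below applies an Isolation Forest to a few made-up visit features; both the framing and the feature names are assumptions, not D-tree’s actual approach or data.

```python
# Minimal sketch of flagging suspicious health-worker visits as anomalies.
# The feature names and values are hypothetical, not D-tree's visit logs.
import pandas as pd
from sklearn.ensemble import IsolationForest

visits = pd.DataFrame({
    "duration_minutes": [22, 25, 3, 30, 2, 28],
    "questions_answered": [40, 38, 41, 39, 5, 42],
    "distance_from_household_km": [0.1, 0.2, 8.0, 0.1, 6.5, 0.3],
})

# Isolation Forest scores each visit; -1 marks likely outliers
# (e.g. implausibly short visits logged far from the household).
detector = IsolationForest(contamination=0.3, random_state=0)
visits["flag"] = detector.fit_predict(visits)
print(visits)
```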

All in all, the hackathon was a success: NGOs with lower AI/ML maturity increased their understanding of the capabilities of AI/ML, while NGOs that already had a more advanced understanding and use of AI/ML technology could take a next step. Key to this success was the presence of AI/ML experts whose respective fields of expertise could seamlessly be matched with the varied needs of the different NGOs. In times of COVID-19 – with the many impediments to international travel and even a fair share of last-minute cancellations – the seminar was fortunate to have this nearly perfect match between the supply of and demand for skills and expertise. Finally, the participants came together to discuss and formulate guidelines for effective AI4SG collaborations.

Guidelines on how to do effective AI for social good collaborations in the future

On the final day of the seminar, the participants reflected on the success of the seminar and on what had been the enabling factors, and formulated a set of guidelines on how to do effective AI for social good collaborations in the future.

For seminars like the AI for Social Good seminar at Dagstuhl, the importance of in-person attendance was underscored. In addition to AI/ML researchers and domain experts, software engineers should be invited, and the presence of financial partners could be considered. The seminar programme could also include a session on what AI can and cannot do, if several NGO participants lack this knowledge. For AI/ML participants, a session or a talk at the start of the seminar about the structures and constraints of NGOs could be useful. In particular, domain experts should “educate” AI experts on what is needed and feasible in the field, so that the expectations of both worlds can be better aligned. Concretely, AI experts should share experiences from previous (failed) AI for social good pilots, experiments, and projects. For visibility purposes, it is advisable to advertise AI for Social Good seminars broadly to NGOs (e.g. through NetHope, or at individual organizations’ events) and to invest in communication via social media (e.g. by publishing success stories of seminar 22091 to justify NGOs’ time investment).

In general, AI for social good collaborations should be clear from the outset about the willingness on both sides to pursue the collaboration in the long term. Long-term partnerships should be established between affiliations/organizations, not (only) between individuals; one point of attention is that partnerships between organizations can be painful to set up. To facilitate long-term partnerships, a GitHub team page or repository could be set up with a to-do list per project, so that others can pick up work when they have time, or at another workshop. A concrete avenue here would be to tap into existing summer schools or other university programmes related to AI for social good (e.g. UMass Amherst, University of Washington, DSSG, the Data summer school at Ecole polytechnique in France, or the Deep Learning Indaba). More long-term partnerships with academics can also be very useful, because academics have many students in need of projects. The challenge here is that the students are junior, so the senior academic needs to buy into the collaboration and supervise their students.

AI for social good collaborations should ideally follow this sequence: 1) a scoping exercise to clearly define the problem and identify potentially relevant datasets; 2) a “cleanathon” where software engineers look at the data; and 3) the actual hackathon(s) where algorithms and models are designed and tested. In-person interactions are deemed effective for all three of these phases. Finally, it was suggested to bring NGOs to AI conferences and workshops (e.g. NeurIPS and the like).

A complete report for the seminar can be found here: https://doi.org/10.4230/DagRep.12.2.134. (This blogpost was adapted mainly from Sections 5 and 6 of this report.)

Written on March 7, 2022