August 16, 2023

Using chatbots in the social sector: Five things to consider

From keeping communities at the forefront, to using data responsibly, here are five things to keep in mind before integrating chatbots into your work.

4 min read

This article was originally published on The Engine Room.

Adopting new tech tools is exciting and can benefit an organisation’s work. But there are also many reasons to proceed with caution.

One tool in particular that we’ve seen growing uptake of is the chatbot, especially in the humanitarian sector. So far, however, there has been limited research exploring risks, harms and opportunities related to their use. 

Last year we kicked off a project examining chatbot use in humanitarian work, with support from the IFRC and UNHCR. We looked into the types of chatbots used by humanitarian and civil society organisations across various contexts in more than 10 countries, including Ukraine, Ecuador, Kazakhstan and Libya. In this post, we share some of the key learnings for humanitarian organisations that emerged from our research—keep an eye out for the full report soon! 

1. Problem first, solution second, tech third

At The Engine Room, we generally find that new tech tools are best approached in the following order: problem first, solution second, tech third.

In our research, we found that sometimes organisations wanted to use a chatbot for the sake of using a chatbot (i.e. a “solution-first” approach), and not because it was necessarily the best fit for addressing the problem they needed to solve—for example, addressing specific community needs or making up for gaps in staff capacity. 

This could lead to the chatbot being unsuccessful. In cases we encountered, community needs were sometimes better addressed through other means, such as hiring more staff to conduct site visits or answer queries; meanwhile, the chatbots saw low usage or poor user feedback, or simply didn’t answer the questions people were actually coming to them with (in one case, for example, people used a Covid misinformation bot to try to access food aid).

In terms of solving staff capacity issues, chatbots could sometimes create unforeseen double work instead. In some cases, for example, staff ended up having to manually transfer data from the chatbot into Excel spreadsheets, or scan chatbot interactions to answer unaddressed questions.

A chatbot might not be a viable option in situations where smartphone saturation is low. | Picture courtesy: Eric Tyler / CC BY

2. Centring contextual considerations

Our research found that contextual considerations tend to play a key role in whether a chatbot helps an organisation achieve its goals or not. For example, a chatbot might not be a viable option in situations where smartphone saturation is low or where people share SIM cards. Likewise, some of our interviewees mentioned that younger community members are more likely to use a chatbot integrated into a platform they use regularly, like Telegram or WhatsApp, whereas older members of the community might prefer in-person interactions.

Other factors to take into account included tech literacy, access to devices and the internet, and accessibility.

3. Checking operational and user expectations 

A key research finding was the need to align what humanitarian practitioners want a chatbot to do with what the chatbots they are considering can actually do. This means adjusting monetary and staffing expectations, as well as thinking critically about which type of chatbot (e.g. simple FAQ- or button-based bots, mid-range bots, or AI-driven bots) would be useful, if it is determined that a chatbot could be a useful tool in the first place.
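
To make those categories a little more concrete, here is a minimal Python sketch of the simplest type: a button/menu-based FAQ bot. The menu options, answers and handle_message function are hypothetical illustrations rather than any specific organisation’s bot; a real deployment would sit behind a messaging platform’s API and be shaped by the contextual factors discussed above.

```python
# Minimal sketch of the simplest chatbot category: a button/menu-based FAQ bot.
# The menu, answers and handler below are hypothetical illustrations only.

FAQ_MENU = {
    "1": ("Registration", "To register, visit your nearest field office with an ID document."),
    "2": ("Distribution dates", "Distributions take place on the first Monday of each month."),
    "3": ("Talk to a person", None),  # escalate to staff instead of auto-answering
}

WELCOME = (
    "Hi! I'm an automated assistant (not a human). Reply with a number:\n"
    + "\n".join(f"{key}. {title}" for key, (title, _) in FAQ_MENU.items())
)


def handle_message(text: str) -> str:
    """Return a canned answer for a recognised option, or fall back gracefully."""
    choice = text.strip()
    if choice in FAQ_MENU:
        _title, answer = FAQ_MENU[choice]
        if answer is None:
            return "Okay, connecting you to a staff member. There may be a short wait."
        return answer
    # Unrecognised input: repeat the menu rather than looping on an error message.
    return "Sorry, I didn't understand that.\n" + WELCOME


if __name__ == "__main__":
    print(WELCOME)
    print(handle_message("2"))
    print(handle_message("where can I get food?"))
```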

In addition to managing operational considerations, our research found that managing user expectations is essential in order to mitigate frustrating user experiences. For humanitarian organisations using chatbots, this means being transparent and upfront with affected communities about what their chatbot can do, and what it can’t.

For example, our research found that many people wanted a level of personalisation that most chatbots currently deployed by humanitarian organisations cannot provide. This can result in a frustrating experience when users are sent into error loops, are repeatedly shown a predetermined script, or can only ask questions rather than make comments. The frustration is worsened if it is not made clear that users are interacting with a bot and not a human.
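
One way to make that transparency concrete is sketched below: a hedged Python example in which the bot discloses upfront that it is automated, and hands the conversation to a human after repeated misunderstandings rather than looping on the same error message. The Conversation class, the threshold and the wording are assumptions made for illustration, not practices documented in our research.

```python
# Hypothetical sketch: say upfront that this is a bot, and hand off to a human
# after repeated misunderstandings instead of trapping the user in an error loop.

MAX_FAILED_ATTEMPTS = 2  # example threshold, not a recommendation


class Conversation:
    def __init__(self) -> None:
        self.failed_attempts = 0

    def greet(self) -> str:
        # Be clear that the user is talking to a bot, and state what it can do.
        return (
            "Hello! I'm an automated assistant, not a person. "
            "I can answer questions about registration and distribution dates."
        )

    def reply(self, understood: bool, answer: str = "") -> str:
        if understood:
            self.failed_attempts = 0
            return answer
        self.failed_attempts += 1
        if self.failed_attempts > MAX_FAILED_ATTEMPTS:
            # Escalate rather than repeating the same script yet again.
            return "I'm having trouble understanding. Let me connect you to a staff member."
        return "Sorry, I didn't get that. Could you rephrase, or type 'menu' to see options?"


if __name__ == "__main__":
    convo = Conversation()
    print(convo.greet())
    print(convo.reply(understood=False))
    print(convo.reply(understood=False))
    print(convo.reply(understood=False))  # third miss in a row: hand off to a human
```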

4. Weaving in responsible data 

Throughout this project we set out to understand what responsible data means when it comes to the use of chatbots in humanitarian contexts. Responsible data considerations that came up in our interviews and desk research touched on ongoing discussions around organisational data policies, GDPR compliance, and data sharing practices and agreements between humanitarian organisations, governments and corporations (among others). 

Though responsible data is not a prescriptive practice, the following questions could be a starting point for those considering deploying chatbots in humanitarian contexts (a short illustrative sketch of one such practice follows the list):

  • What data is collected and how is it stored (and for how long)? Is this data shared? 
  • How is consent achieved? Are there viable alternatives for impacted communities to access services without using the chatbot? 
  • What data protection agreements are in place? Are risk assessments being conducted? 
  • What platforms are the chatbots hosted on? What are the privacy policies of these companies (e.g., Meta, WhatsApp, Telegram, Viber, etc)? 
  • What metadata is used to improve how the chatbot functions? How is this data minimised and deleted? 
  • What safeguards are in place (e.g. trauma informed design)? 
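
To make a couple of these questions more tangible, the Python sketch below illustrates one possible approach to minimisation and retention: pseudonymising phone numbers before storage and deleting chat records after a fixed period. The field names, the 90-day period and the hashing approach are assumptions for the example; appropriate choices depend on context, organisational policy and applicable law.

```python
# Hypothetical sketch of two responsible-data practices raised above: storing a
# pseudonym instead of a raw phone number, and deleting chat records after a
# fixed retention period. All values here are illustrative assumptions.

import hashlib
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # example value only


def pseudonymise(phone_number: str, salt: str) -> str:
    """Replace a raw phone number with a salted hash before anything is stored."""
    return hashlib.sha256((salt + phone_number).encode()).hexdigest()


def prune_old_records(records: list[dict]) -> list[dict]:
    """Drop chat records older than the retention period."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    return [r for r in records if datetime.fromisoformat(r["timestamp"]) >= cutoff]


if __name__ == "__main__":
    records = [
        {
            "user": pseudonymise("+10000000000", salt="per-deployment-secret"),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "message": "When is the next distribution?",
        }
    ]
    print(prune_old_records(records))  # recent record kept; older ones would be dropped
```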

5. Centring the needs and priorities of affected populations

A recurring theme in our research was the question of what role automation plays in a humanitarian context, and the need to maintain human interaction even with the adoption of automated tools like chatbots.

Another issue that arose in the research was the lack of participatory, user-centred design practices that are inclusive of language and cultural contextualisation. Oftentimes, the people building tech like chatbots are not from the communities that will use them, including the relevant language communities (particularly important when machine learning is involved). Further, chatbots can be deployed before being adequately tested by, or before feedback is solicited from, the people who will need to rely on them to receive necessary services or information.

Situations where chatbots are linguistically and culturally incompatible, or make it harder to access services (because of tech illiteracy, limited device access, error loops, etc.), can potentially be avoided by first considering whether a chatbot is the right fit, given the specific needs and context of each situation.

Our analysis surfaced a common tension: effective chatbots require time and resources to set up, but both tend to be in short supply when working in emergencies. This tension would benefit from additional research focused specifically on emergency situations.

We’ll be publishing our full research report soon—follow us on Twitter or subscribe to our newsletter for updates! 

ABOUT THE AUTHOR
Olivia Johnson

Olivia works on issues related to technology, surveillance, and the impact of AI on marginalised communities. Prior to joining The Engine Room, she was a research consultant with the Immigrant Defense Project, and worked on their Surveillance, Technology, and Immigration Policing Project.
