AI in Health and Care: How do we innovate responsibly?

How do we innovate responsibly when it comes to AI? This was the focus of the Anthropology + Technology Conference at Bristol’s Watershed on 3 October, which I organised to foster collaboration between social scientists and technologists working on emerging technology projects. I posed the question: how do we design innovative new digital technologies that have a positive impact on people and society, making people’s lives better?

To address the question of how we innovate responsibly in Health and Care, Mundy & Anson partnered with One HealthTech Bristol (OHT) for an event in November as part of the Bristol Technology Festival. I gave a talk, on which this article is based, about how to innovate responsibly and why anthropologists make valuable members of tech teams.


Social impact of algorithmic decision-making

Making people's lives better, or if not better, certainly not worse, is the focus of healthcare professionals and those working in the health tech space: supporting people in times of uncertainty and enabling them to achieve their health goals.

If we genuinely want to help those who are either in our care or whom we seek to serve through digital health technologies, we should take responsibility for creating technologies that don’t amplify the existing inequalities in society. As Virginia Eubanks writes in her bestselling book, Automating Inequality, we don’t all experience “this new regime of digital data” in the same way. Take, for example, the Glow app, which failed to take into account that not all women want to track their menstrual cycle in order to get pregnant. One frustrated user, Maggie Delano, described it as "yet another example of technology telling" her that she wasn't even a woman, one that made her feel her "identity...was completely erased".

AI is just a tool

There’s a lot of hype around AI. As Dr Mark Woods, Head of Autonomy and Robotics at SCISYS, commented in an interview with me, “40% of AI start-ups actually have no discernible AI in their approach, but it helps them get funding given the popularity of the technology with Venture Capitalists”. Most of what we’re seeing is actually algorithmic automated decision-making (ADM).
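
To illustrate that distinction, here is a minimal, purely hypothetical sketch in Python: a rule-based "triage" system. It makes automated decisions from hand-written rules, with no machine learning anywhere, yet it is exactly the kind of product that could plausibly be marketed as AI. The rules and thresholds are invented for illustration only.

```python
# A hypothetical rule-based triage system: automated decision-making
# (ADM) with no machine learning involved. Every outcome comes from a
# hand-written rule, yet a product like this could be marketed as "AI".

def triage(age: int, temperature_c: float, chest_pain: bool) -> str:
    """Return a triage category from three hard-coded rules."""
    if chest_pain and age >= 40:
        return "urgent"       # rule 1: possible cardiac event
    if temperature_c >= 39.0:
        return "same-day"     # rule 2: high fever
    return "routine"          # rule 3: everything else

if __name__ == "__main__":
    print(triage(age=55, temperature_c=37.0, chest_pain=True))   # urgent
    print(triage(age=25, temperature_c=39.5, chest_pain=False))  # same-day
    print(triage(age=30, temperature_c=36.8, chest_pain=False))  # routine
```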

While we should make full use of the power of this new technology, it's important to remember that "AI is just a tool", as Roelof Pieters, one of the leading AI visionaries and developers in Europe, reminds us. So we should ask whether it is the right tool for the job, perhaps starting with the following questions suggested by Algorithm Watch:

  • What data does the system use, and is the use of this data legal? (Consider how London’s Royal Free hospital "failed to comply with the Data Protection Act when it handed over personal data of 1.6 million patients to DeepMind, a Google subsidiary".)
  • What decision-making model is applied, and does it have a problematic bias, for example because it uses a biased data set or was developed by people with underlying prejudices that were not controlled for? (A minimal sketch of what such a bias check might look like follows this list.)
  • What’s behind the idea to use it in the first place? Is it because there is a problem that cannot be addressed in any other way, perhaps due to its inherent complexity? Is it because austerity measures mean that automation is seen as a way to save money, perhaps because of severe cuts in NHS funding? Or is it because of a political decision?
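
To make the bias question more concrete, here is a minimal, hypothetical audit in Python. The records, group labels, and fields are all invented for illustration; a real audit would use actual system outputs, established fairness toolkits, and domain expertise. The idea is simply to compare the system's false-positive rate across groups.

```python
# A minimal, hypothetical bias audit: compare an automated system's
# false-positive rate across two groups. All records are invented.

from collections import defaultdict

records = [
    # (group, flagged_by_system, actually_at_risk)
    ("A", True,  False), ("A", False, False), ("A", True,  True),
    ("A", False, False), ("B", True,  False), ("B", True,  False),
    ("B", True,  True),  ("B", False, False),
]

false_pos = defaultdict(int)   # flagged but not actually at risk
negatives = defaultdict(int)   # all records that are not at risk

for group, flagged, at_risk in records:
    if not at_risk:
        negatives[group] += 1
        if flagged:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"group {group}: false-positive rate = {rate:.0%}")
# group A: false-positive rate = 33%
# group B: false-positive rate = 67%
```

With these made-up records, the system wrongly flags group B twice as often as group A, which is exactly the kind of disparity the question above is asking us to look for.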

Data quality is crucial

An example from France, reported by Algorithm Watch, illustrates how the quality of the underlying data feeding an AI system shapes the results it yields.

The backbone of French medical information is the relatively new (2016) national health data system, the SNDS, which merges several databases created in the 1990s. The intention is that this system will feed data to AI algorithms, but the quality of that data is questionable. One study of severe maternal events revealed that the data collected nationally could be severely affected by false negatives and false positives, caused by coding errors introduced when a hospital’s records were digitised without review by someone with expert knowledge.

What this demonstrated was that expert review of the quality of the data in the SNDS is very much needed, because this data is going to be used by automated decision-making systems. Garbage in, garbage out.
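
To see how "garbage in, garbage out" plays out numerically, here is a small, hypothetical simulation in Python. The incidence and miscoding rates are invented for illustration, not taken from the French study; the point is simply that modest coding-error rates can noticeably distort what a downstream system computes from the records.

```python
# A hypothetical simulation of coding errors in hospital records.
# True case status is corrupted at fixed miscoding rates, and the
# incidence a downstream system would measure drifts accordingly.

import random

random.seed(42)

TRUE_INCIDENCE = 0.02    # 2% of patients truly have the event
FALSE_NEG_RATE = 0.15    # 15% of true cases miscoded as negative
FALSE_POS_RATE = 0.01    # 1% of non-cases miscoded as positive
N_PATIENTS = 100_000

true_cases = 0
recorded_cases = 0
for _ in range(N_PATIENTS):
    is_case = random.random() < TRUE_INCIDENCE
    true_cases += is_case
    if is_case:
        recorded = random.random() >= FALSE_NEG_RATE  # case may be dropped
    else:
        recorded = random.random() < FALSE_POS_RATE   # case may be invented
    recorded_cases += recorded

print(f"true incidence:     {true_cases / N_PATIENTS:.2%}")
print(f"recorded incidence: {recorded_cases / N_PATIENTS:.2%}")
```

With these made-up rates, the recorded incidence overstates the true incidence by roughly a third, purely through coding errors, before any algorithm has even been applied.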

It raises questions about what this means for patients’ health and wellbeing.

Is AI the right solution?

The GP at Hand app, owned by Babylon Health and now used by the NHS, gives you 24/7 access to doctors and medical advice through an automated chatbot and a video consultation service. The app is likely to appeal to people who are comfortable with technology, use a smartphone, and are what medical professionals term “the worried well”: young, fit and healthy, with minor health problems.

In the short term this is perhaps good news for patients who want a quick answer and who can’t get a non-urgent appointment with their GP without a two, or even three, week wait. But while questions are being raised about the app's safety, it's also worth noting that it takes you, as a patient, away from a standard GP practice: it pulls the 'easy' patients towards Babylon Health, leaving the more complex cases to already stretched GP practices around the country. If GPs are left dealing only with complex cases, their practices won't be financially viable. In the long term, will this affect our local GPs and, ultimately, the care we receive from them? And is this the right solution to the underlying problem, which is likely the severe underfunding of our NHS?

Why have anthropologists on tech teams?

So why have anthropologists on tech teams? For many reasons, in my view:

  • We need people asking critical questions. Challenging assumptions is what anthropologists do best, because that’s what we’re trained to do.
  • We take a holistic view of society. We reveal the messy complexity of human lives in context, which matters because nothing operates in a vacuum.
  • We make sense of human behaviour, and we often act as a "translator bridge between groups whose beliefs, values and practices may be completely different”.
  • While health professionals navigate other cultures’ ideas about health and medicine with skill and care on an almost daily basis, those designing digital health technologies may lack that knowledge or know-how.
  • Collectively, anthropologists have an enormous amount of knowledge about human lives. Let’s use it.

How do we innovate responsibly?

I began this article by asking: how do we innovate responsibly? How do we design innovative new digital technologies that have a positive impact on people and society? The stakes are real, because, as Sloane and Moss write in the journal Nature Machine Intelligence, “AI does not fail people in a lab; it fails them in real life, with real consequences”. As a start, I'd suggest:

  • Asking whether AI is the right solution to the problem. Much as healthcare professionals have to weigh the advantages and disadvantages of a particular drug in terms of the patient's quality of life (do the side effects or risks outweigh the benefits?), AI should be approached in the same way: is it appropriate, or not?
  • Ensuring the data that feeds the algorithms is of good quality, since patients’ health and wellbeing may be at risk.
  • Understanding the wider context in which the technology is being designed and will be used (social, economic, and even political), and whom you're designing for.

Rather than Zuckerberg's motto, "move fast and break things", "move slow and fix things" has to be the new mantra. We can build tech that has a positive, not a negative, impact on society. Responsible innovation is a choice.


Photo credits: Female doctor with young girl by Francisco Venâncio; Operating theatre by Piron Guillaume; Pills by freestocks.org; Medical textbooks and illustrations by Annie Spratt; Stethoscope by Hush Naidoo.