
UCL Health of the Public


Transcript for 'AI for Good - Tech and Ethics in Humanitarian Crises'

Xand Van Tulleken 
Hello and welcome to season 4 of Public Health Disrupted, with me, Xand van Tulleken…

Rochelle Burgess 
... and me Rochelle Burgess. Xand is a doctor, writer & TV presenter, and I’m a community health psychologist and Associate Professor at the UCL Institute for Global Health.

Xand Van Tulleken 
This podcast is about public health, but more importantly, it’s about the systems that need disrupting to make public health better. Join us each month as we challenge the status quo of the public health field, asking what needs to change, why and how to get there.

Rochelle Burgess 
Our guests will explore how technology is reshaping humanitarian response efforts and the promising technologies on the horizon that might help protect the health of populations. But what are the ethical implications of these technologies, and what challenges might need to be addressed? Let’s introduce today’s guests, who will be helping us to unpack all this…

Xand Van Tulleken 
Our first guest today is Professor Maria Kett. An anthropologist by training, Maria has extensive expertise in disability-inclusive humanitarian responses. Maria has undertaken research in countries across Africa and Asia, leading on a number of research programmes on disability and international development and is author of over 140 publications. Maria also leads on the humanitarian-focused work for the Global Disability Innovation Hub. She regularly serves as a consultant for numerous bilateral and multilateral donors, including the UK FCDO, the World Health Organisation and the United Nations. Maria is the Programme Director for the new UCL MSc Humanitarian Policy and Practice and she is also a friend of mine of about 20 years.

Rochelle Burgess 
You can't see Maria but she did a bit of a nice salute there. We’re also delighted to welcome our second guest, Sarah Spencer - Sarah is working with the UK Humanitarian Innovation Hub as their AI Technical Consultant for Humanitarian Innovation and Emerging Technologies. Sarah is a multi-domain expert working at the intersection of AI, national security, and public policy. She helps governments, industry, and civil society address the challenges posed by AI and ethically capitalise on the opportunities offered by AI. Sarah has spent over two decades working with and in support of communities affected by conflicts and crises and is a regular commentator on ‘AI for Good’ and the geopolitics of advanced technologies.

Xand Van Tulleken
Maria, can I start with you? Because I think a lot of people who are approaching humanitarian crises for the first time, or even working in them for the first time, find it hard to deal with the terminology: humanitarian crises, complex emergencies, violent conflicts, wars, natural disasters, all these different phenomena. So can you give us an overview of what a humanitarian crisis is, the impacts they have on population health, and the role that public health professionals play?

Maria Kett 
Just a small question to start with? So thanks for having me here. I think it's really important to start by saying that there are many definitions, and you've touched on that already. The one we can work with here is that it's an event, or a series of events, that overwhelms or poses a critical threat to health, security and safety. But one of the other important things to consider is that oftentimes it overwhelms national capacity to cope, and that requires some international assistance. Implicit in that international assistance, and in some of the questions we're asking today, is that it does mean external assistance from other countries or other organisations. And there are lots of things that come with this: humanitarian principles, ethics, solidarity, bearing witness, and we can unpack some of those and some of the implications of using technology and AI around them. There are a number of other things to think about too. Currently on TV we're seeing Gaza; before that it was Ukraine. Oftentimes humanitarian situations are not linear. They don't have a neat beginning, middle and end; they can go on chronically for years. I think about when I first started, and you talked about knowing me for 25 years: I was in Azerbaijan working on Nagorno-Karabakh, and we saw with that research not so long ago that these things can be ticking along, not really in the news, but still going on, and then flare back up. We also know that they're increasing in number, allegedly, and increasingly complex, and that's undoubtable. That's partly due to the changing nature of warfare and, again, the technologies involved in it, which we see daily on our screens: targeted warfare, targeted infrastructure, and so on. We also know about the different actors involved, which increasingly include the private sector, and again we might want to unpack who we mean by that. But there's also the ability to access populations, and I think this has been really stark: how we access affected populations, but also how we speak out about that access. Sometimes that access is contingent on not speaking out, and that can be really difficult for organisations when you think about bearing witness as one of the key factors. There's some evidence out there, from the last State of the Humanitarian System report from ALNAP I think, that funding levels are about the same, they haven't really changed, but we're maybe not doing as good a job as we were; the overall trajectory is the same or worse. And that's partly due to the fact that funding is going down, or being spread amongst more, and more difficult, situations. Because of all of these things, the impact on population health really varies; it depends on the type of disaster, the type of conflict. But we also think about the chronicity, the longer-term implications, and we're trying to deliver the same standard of health care as people might have had in their country of origin, or in the country as it was. That could be a pretty high standard of health care in many cases, and why not?
So we try to deliver those same standards of health care, but we often forget about basic public health measures, including vaccinations, the spread of communicable diseases, again something we're seeing a lot of in current complex emergencies, and sexual and reproductive health, again something we're seeing a lot about, for example, from Gaza. And how we measure and predict these is really important, which is something we can come back to when talking about technologies more broadly. The final thing was about the role of public health professionals, and I would say it's the classic one, isn't it: to prevent, predict and support, in emergencies and in longer-term responses, both local and international.

Xand Van Tulleken 
It's so difficult. I mean, my experience of working both in clinical health, delivering healthcare as a doctor in a hospital, and then in public health, was that those two jobs overlapped. A huge part of our role was actually about bearing witness, about gathering information, about telling stories, about being present as well. So it's a very, very complex role that you're trying to summarise there. And in a way, of course, almost all the different disciplines that work in humanitarian crises are in some ways doing public health; even peacekeepers are, in a way, trying to intervene in a health crisis as much as solve a political situation. So that's a lovely overview of the area.

Rochelle Burgess 
It feels like one of the domains within global health that is full of the most tension, which is a surprising thing to say, because the whole field is very much defined by tension. But I was really struck, Maria, because I've never really heard the word 'prevent' encapsulated in all that. Perhaps it's because I don't specifically focus on the context of complex emergencies and humanitarian crises, and I guess using the term 'complex emergency' possibly dates me as to the last time I was engaging with that literature. But, and maybe this isn't even an answerable question, this idea of how we engage with the notion and domain of prevention in the context of ongoing conflict and emergency: 'prevent' ultimately becomes deeply political, does it not?

Maria Kett 
Yes, absolutely. I mean, I was deliberately a bit vague about 'prevent'; one could have meant prevent cholera outbreaks, prevent hunger, prevent famine, but you're absolutely right. I was listening to a programme this morning on Radio 4 about war crimes tribunals. In public health we always say prevention is better than cure, so the logic extends, right: instead of thinking how better to fight a war, it's probably better not to fight it in the first place. But, as you say, it's entirely political. And I guess that's the other thing: obviously there are factors that dictate how responses are delivered, how much funding goes into particular scenarios, who makes the decisions about peace negotiations. These are entirely political. And we often see these new and emerging technologies, AI, as some kind of magic-bullet solution, and they're not going to be. Sarah and I have discussed this in other contexts, but they're not a magic-bullet solution. I think the temptation is always to find the next thing to try and solve, or at least alleviate, suffering and the things we do to each other that are just awful. But it'll be interesting. Let's see how this unpacks.

Rochelle Burgess
Sarah, can you jump in there? Because that was sort of my gut reaction on hearing about technology and AI: oh, is this our current magic-bullet interest? What does the use of technology actually mean in this context? What sort of shape does it take, and how is it changing things?

Sarah Spencer 
I think it's important, just to build on what Maria was saying around humanitarian action and humanitarian crises, to think explicitly about what is different about humanitarian response compared with, say, development aid and other interventions related to poverty reduction. Two things are really unique about humanitarian action. One is that it is ultimately about life-saving interventions. This isn't about structural budget reforms with the IMF or the World Bank, and it isn't about longer-term agricultural productivity in country X. This is really about responding to situations where there are opportunities to decrease morbidity or mortality, or both. So health has always been a key component of it. The origins of humanitarian assistance stretch back to the 19th century, and really relate to the rights of civilians, as well as combatants wounded in conflict and therefore no longer deemed combatants, to health care. Think of Florence Nightingale, the Red Cross, all those images, and it's grown from there. The other really important thing to consider, which technology impacts, is the humanitarian principles, and there's a wider argument about how those are adhered to or not these days. But two of those principles, neutrality and impartiality, are really critical for humanitarian agencies operating in places like Ukraine or Gaza, where the licence they have to operate is derived from the fact that they are neutral and impartial players in the conflict. They do not play a political role; they are not peacekeepers; they are not brokering peace agreements. They are there to provide life-saving interventions which would normally, for political scientists, be provided by the state, and the state in these instances is either unwilling or unable to provide those services. Those services may be related to public health, but they could also be related to social work, or employment, or cash vouchers, the way you'd normally receive some kind of cash support, or shelter, or other things. So, thinking about how technology weaves into that: what's happening these days is that humanitarian actors are looking at this escalating productivity, especially around generative AI, but also the wider landscape of AI. Thinking more broadly about technologies, there are roughly four baskets or buckets into which these use cases fall. One is about making internal operations more efficient: how technologies can improve supply chain management, or the predictive maintenance of infrastructure, so how often you need to service your generators or your fleets. The World Food Programme (WFP) and the ICRC have huge fleets of fixed-wing aircraft as well as vehicles, as do most other INGOs, and significant effort goes into maintaining them. Certain technologies are very good at maintaining machines, basically. So there's an internal operational efficiency that certain actors are looking at. The second is about extending the reach of who we can support. How do you get further?
How do you get to the last mile? This is where robotics and uncrewed aerial vehicles (UAVs), drones and so on, can take supplies and equipment the last mile. Or equally, how do you extend lines of communication into hard-to-reach areas? How do you get better information from unreachable areas about needs on the ground? The third is about the speed of the response: how do you make your response even quicker, how can you get out the door quickly? There are some really interesting use cases there linked to geospatial imagery, which we can get into, and the prediction of how a natural disaster, for example, might impact a certain community, down to around one square kilometre, so people can pre-position resources and logistics. And the last one is about the quality of the services delivered, and there are some really exciting use cases in the health and humanitarian space. I do want to say that there are some red herrings, and the desire to find the panacea, the cure-all for everything, can sometimes lead us astray. One of those areas is the prediction of population displacement, or the prediction of a new conflict, like a black swan event. There has been extensive money, resources, time and effort put into the design of predictive analytical models, machine learning models, to try and figure out: will the population go here, or there? And wouldn't it be great if we could figure that out, because we could pre-position the aid. In my experience, the first challenge there is that the likelihood of that prediction being correct is pretty low. But the more important point is that there's an assumption baked in that the response will be benevolent. For humanitarian actors, of course it's going to be benevolent, because that's the aim of humanitarian action. For political actors, you may well see: right, 20,000 people are expected to cross this border, and we should pre-position our aid. Well, that state might actually say, oh great, good to know, I'm going to seal that border; I'm not going to let those people cross and claim asylum as refugees. So, coming back to Maria and Xand's earlier conversation, there are political factors at play which will impact how those specific technological use cases are designed, and whether they're deployed with the intention they're meant to have.
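To make the geospatial prediction use case above a little more concrete, here is a minimal illustrative sketch of how a model might score one-square-kilometre grid cells for expected flood impact so that supplies can be pre-positioned in the highest-risk cells first. Everything in it, the feature names, the synthetic data and the model choice, is hypothetical and for illustration only; it is not drawn from any agency's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical per-cell features for 5,000 one-square-kilometre grid cells:
# forecast rainfall (mm), elevation above the river (m), population per km^2.
n_cells = 5000
X = np.column_stack([
    rng.gamma(2.0, 40.0, n_cells),
    rng.uniform(0.0, 500.0, n_cells),
    rng.lognormal(3.0, 1.0, n_cells),
])

# Synthetic "historical impact" labels: low-lying, high-rainfall cells flood more often.
y = ((X[:, 0] > 80) & (X[:, 1] < 100)).astype(int)

# Train a simple classifier and turn it into a per-cell risk score.
model = GradientBoostingClassifier().fit(X, y)
risk = model.predict_proba(X)[:, 1]

# Rank cells so responders can pre-position supplies in the riskiest ones first.
priority_cells = np.argsort(risk)[::-1][:20]
print("Top 20 grid cells to pre-position in:", priority_cells)
```

In practice such models are trained on historical impact data and far richer features; the point of the sketch is simply that per-cell risk scores give responders a ranked list of where to pre-position.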

Xand Van Tulleken 
That's such a lovely, comprehensive answer. And you raise this problem of the perfectibility of the response. Even if we imagine a response that was completely perfect, and I've worked in well-run camps where services are delivered on time and of a decent quality, you're still overseeing some massive human displacement; you're not in any way solving the problems that people want solved. Even the camps I've visited, even if I think of the camps in Calais and Dunkirk where they were providing very high-quality facilities in certain places, they were not what people wanted; what people wanted was to move. So I just wonder, Maria, can I start with you? I suppose there's a question about the ethical implications, or even the meaning, of this kind of technology. Once you can say there's going to be an atrocity or a war or a displacement in this moment, you use the AI to predict that, and then you have a drone deliver the food so that the people who've been predictably bombed, or who have predictably fled to a predictable camp somewhere, are served; you're just endlessly, marginally improving the delivery of goods and services in these terrible situations that shouldn't exist. Do you think the technology has ethical implications that we need to be aware of, that might alter the nature of humanitarianism, which is grounded in our shared humanity?

Maria Kett 
I think yes is the obvious answer, isn't it? All of the things you've just said have massive ethical implications. I've worked for years on how we could better include people with disabilities in humanitarian responses, and in every single humanitarian response someone gets left behind. We're doing better than we were 20 years ago on inclusion, but I can bet your bottom dollar I could go to any humanitarian context and find people who've been missed. We can predict population movements, but do we also think about who's not moving? If we predict and put all our resources in the place people are going to, then what about the people we've left behind? Which again comes back to your point about reaching the last mile, because there are always people who can't move, can't flee, or don't flee, for whatever reasons they have. What happens to those people? They're really among some of the most vulnerable, for obvious reasons. Agencies also have huge volumes of data, including biometric data and personal data, about people and about their movements. These are huge datasets. Who owns them? What security measures are in place when the data is being transferred? If you're a person at a field site for UNHCR transmitting that data, you've got to think about the potential avenues for a data breach. But I guess the other question is: what choice do people have about opting into these large-scale datasets? This is something we really haven't thought through. You've got to be in those datasets to get assistance, whether that's the tracing data for the ICRC or the needs-assessment data of UNHCR; you have to be opted in. And the sort of bias that is well known in AI algorithms: we've heard about this, a lot of the testing is done on standardised datasets, which calls into question what happens to data that sits outside the standard and isn't in there. So I guess it perpetuates exclusion, or excludes people who don't necessarily want to be included. And do humanitarians really understand all the implications of new technologies and AI? Do the AI folks really understand the humanitarian world? There's a lot of coming together of these worlds needed.

Sarah Spencer 
I was nodding enthusiastically at a lot of those comments. Just to pick up on the data point: it's not only a question of whether you have access to the data necessary to power a model, machine learning, AI or otherwise; it's a question of whether you can use it. And it's not only a legal question, it's an ethical one. The UN agencies that Maria was citing sit on the personally identifiable information (PII) of tens of millions of people, arguably some of the most vulnerable people on the planet. Now, are they allowed to use it to power an AI model? At the moment the legal parameters around that are pretty vague, particularly for international public organisations like UN agencies. But I don't think the ethical question is as vague, and I don't think there's an ethical dilemma there: using the data of tens of millions of people who are very vulnerable and living in very vulnerable situations, not only in relation to their own morbidity and mortality but to their own political existence, to what it means to be someone fleeing from persecution or fear of death, doesn't feel like an ethical application of AI. The right to opt out of automated decision-making is fast becoming a right in the UK and in Europe, because of moves by the European Union and the Council of Europe to regulate how AI is used, and it's becoming more present in the public consciousness that you should have a right to a human decision. And yet somehow those discussions are not passing into the humanitarian community, which is sort of perverse, right? It should be the opposite: people who are less empowered, financially, in terms of their own rights, in terms of their mobility, in terms of anything, shouldn't automatically be getting the short end of the stick, as it were. The consent and agency point is linked to that. I think that is where we're missing a trick with regard to technologies and the delivery of humanitarian services; that should be the fundamental starting point. Is this going to improve the way in which we deliver services to an individual? Does the individual know how those services are being delivered? And can we be held to account when it's not delivered in the way that it's intended? I've been in a couple of worrying conversations where computer scientists and engineers will say, well, we can't really explain these kinds of technologies to people in northern Burkina Faso, for example. And I think there are ways to explain them to populations. Particularly when thinking about food security or farming practices, there are lots of tricks of the trade that farmers and people in the agricultural sector use to predict certain outcomes and when the seasons will come, and it's sort of the same path. So there are ways to communicate these technologies; I fear it's a little bit of cutting corners and trying to find ways around ethical principles.

Xand Van Tulleken 
And it's the thing we heard back at the start of antiretrovirals being used: well, Africans can't have them because Africans don't wear watches, Africans can't tell the time, all this stuff. It's a very common kind of resistance to explaining things that, actually, either people in the Global North would equally struggle to understand, or that people in northern Kenya, for example, are perfectly capable of understanding, as you say.

Rochelle Burgess 
I mean, it's pretty lazy, and racist.

Xand Van Tulleken 
That was the word we were looking for

Rochelle Burgess 
You know, I'm very happy to say the thing; that's the thing. And I feel like in these fields and domains we work in, it is often the case that those are the two words we feel very uncomfortable with. But I wanted to ask you both what you thought about this: to me it feels like there's something bound up in the logic of emergency that also enables the laziness part. That was one of the things I always grappled with, that we were working within a logic of emergency, which you described so well earlier, Sarah, the need to respond and ensure survival, and that needs to come at pace, right? But we're still working in contexts where the emergency is so protracted and so embedded that actually there is sometimes enough time to take time and think about these things. And I just wondered how you both felt about how that plays into the risks of AI, and how likely we are to work around them.

Sarah Spencer 
I'll jump in and say one thing in defence of my humanitarian colleagues. Without betraying my age, for those of us who've been on the frontlines of emergencies for decades: I remember when I first started my career, my senior colleagues had responded to the Afghan crisis in the early 80s, and it has not gone away. I lived in Pakistan for several years, and the Government of Pakistan is still trying to work out what to do with Afghans who live there legally, as well as the displaced and refugee populations there. So I think part of it stems from: how are we going to find our way out of this? How can we actually have a really gold-standard humanitarian response? Is there a saviour technology that can help us accelerate that? But to your point before, Rochelle, about the inherent racism that exists in some of these technologies, that is baked into these technologies, and certainly plenty of them. I live in Nairobi, and there is a booming tech scene here, as there is in a couple of other cities across the continent, Sub-Saharan Africa especially. And when we talk about the use cases that exist for new, emerging, advanced technologies in the humanitarian sector, there are about five names we think of who can supply them. I don't necessarily want to name-check them, but you can think of which firms humanitarian agencies turn to: they do not turn to the ones in Kenya, they do not turn to the ones in Ghana or Nigeria. And I don't think that is laziness; I think that is just a huge humanitarian industry turning over 30 to 40 billion US dollars a year to support 300 million people in need. There are solutions out there, but it requires building those relationships. And to Maria's point earlier: are the tech people really understanding the humanitarian problems, and are the humanitarians really understanding the tech? There are some bridges to be built with firms and agencies in the Global South, so we don't roll back on the commitments that we as a humanitarian community have made to shift power, resources and decision-making to indigenous civil society actors who have been at the frontlines of these crises for decades. It's not just about the international agencies and the UN agencies; it is about local, indigenous civil society movements, as well as this growing tech startup scene, which is super exciting but still lacks access to that sort of customer base.

Maria Kett 
This is a really important point I want to highlight, because at the beginning I mentioned that I've worked on assistive technology for older adults and people with disabilities, all kinds of assistive technologies, and there are some really interesting innovations being done in places like Ghana and Kenya. The humanitarian sector has very much pushed for a localisation agenda; it's been a really key issue. So if I was thinking about a promising technology, it wouldn't necessarily be the technology itself. One of the promising approaches could be this idea of trying to join these dots up: a localisation agenda that works better with emerging technologies coming from the affected populations themselves, really thinking through how that comes back to accountability. Sarah touched on supply chains, and when you think about the sheer volume, just thinking about vaccination supplies and the cold chain, the logistics are unbelievably complicated and huge. So joining up some of those dots, bringing in the localisation agenda, bringing in local innovations, and trying to get more funding to people: I think that could be the promising 'technology' on the horizon, rather than listing different kinds of technologies.

Xand Van Tulleken 
But it sounds like both you and Sarah are resisting this sort of 'yay, the technology is going to fix all these problems'. And I guess particularly with your work on disability, that must be familiar: people saying, oh well, we can get rid of problems for anyone with a disability with technology, when in fact we built this world, and you wouldn't need that much technology to make things a bit more accessible. Am I hearing that correctly?

Maria Kett 
What I would say to that is that it's absolutely an ecosystem. You can have the product, but you need the procurement, you need the policy, you need the person; you need a whole system for it, not just one of those things. The WHO have a nice five Ps approach to this in the assistive technology world, but I think it works just as well outside it: you need a policy, you need a product, you need a procurement system, you need people to deliver it and people to be on the receiving end. Doing just one of those things isn't going to solve it, but a joined-up ecosystem approach would have a more promising chance of success.

Xand Van Tulleken 
Bearing in mind what both of you have just said, this question may sound a bit crass. But when people think of technology, particularly in healthcare, we often think of a particular widget or gadget, whether it's a wearable piece of tech or a glucose monitor or something like that; in humanitarian aid, you might think of a drone or a tracking device. Is there any tech, and I suppose I'm particularly thinking of AI, that is a game changer on its own, where you go: well, that one thing, that better facial recognition software or whatever, has just been outright useful? Is there anything you can point to and say, we are really excited about that thing, or does it not work that way?

Sarah Spencer 
I think the answer is 'yes, but', or 'yes, and'. The first thing to say is that AI rarely delivers this kind of profound change on its own, because it inherently works alongside lots of other technologies. The things that have excited me are about how AI and machine learning are being used to contain epidemics. We saw them used for cholera containment in Yemen a few years ago, to predict where cholera might spread at a sub-national, sub-regional level, and therefore to deploy preventative measures in response. Those are low-stakes, low-risk scenarios, because if you send out hand-washing stations and other water and sanitation interventions in advance of cholera arriving, it's no harm, no foul: everyone's got improved sanitation facilities and there's no negative consequence, aside from potentially a diversion of limited resources. They're using the same kinds of models to find efficiencies in vaccine delivery, specifically in sparsely populated areas. How can you route your vaccine delivery? If you have a couple of people on motorcycles, what's the best way to get your vaccines into arms most quickly and most effectively? AI is also really good at helping to discover new vaccines, for example. And, I'm not a health person, but there was something recently around AI and machine learning being able to identify bacteria, like MRSA, that are more resistant to certain types of antibiotics, and that was published in Nature. I know that AlphaFold and DeepMind are working on trying to get a more effective malaria vaccine out to market, and that really will be a game changer. I do think there is a risk around humanitarian surveillance that the humanitarian community hasn't yet quite gripped. You can go back ten years, five years, and find quotes from senior leaders in the UN or the humanitarian community saying: so exciting, we can now track Sarah, who was in a refugee camp in Jordan and has now moved to Egypt, and that means continuity of care. Because in their minds they're thinking, this is great, we can now make sure we're meeting her protection needs, make sure she still has all the paperwork she needs as an asylum seeker or refugee in that new context. But invert that, and think about any of the states in Europe that are really keen on reducing the numbers of asylum seekers at their borders, and how humanitarian data and the trend towards datafication of refugees can really lead to a reduction in their rights and their ability to enjoy those rights.
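To give a sense of what the vaccine-routing question Sarah raises looks like computationally, here is a minimal sketch of a greedy nearest-neighbour routing heuristic over a handful of hypothetical village coordinates. It is a toy stand-in only: the village names, coordinates and distances are invented, and real delivery planning would work over road networks, cold-chain time limits and proper optimisation rather than straight-line distance.

```python
import math

# Hypothetical clinic/village coordinates in km; "depot" is the cold-chain hub.
villages = {
    "depot": (0.0, 0.0),
    "Village A": (12.0, 5.0),
    "Village B": (3.0, 14.0),
    "Village C": (9.0, 9.0),
    "Village D": (15.0, 2.0),
}

def dist(p, q):
    """Straight-line distance; a real planner would use road travel times."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_route(start, stops):
    """Greedy heuristic: always ride to the closest unvisited village next."""
    route, current, remaining = [start], start, set(stops)
    while remaining:
        nxt = min(remaining, key=lambda v: dist(villages[current], villages[v]))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

route = nearest_neighbour_route("depot", [v for v in villages if v != "depot"])
total_km = sum(dist(villages[a], villages[b]) for a, b in zip(route, route[1:]))
print("Visit order:", " -> ".join(route), f"(about {total_km:.1f} km)")
```

Even this crude heuristic illustrates the core idea: turning "what's the best way to get vaccines into arms quickly?" into a ranking problem that software can re-answer as conditions change.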

Xand Van Tulleken 
And we've seen that happen. That's not an imagined scenario; that's real. If your biometrics are already registered, claiming asylum in Europe becomes a nightmare. So that's a very helpful level of balance.

Rochelle Burgess 
It's such an important point. Maria, do you have anything to add?

Maria Kett 
We've got lots of promising apps to do sign-language translation, other kinds of translation, speech-to-language tools; these are all fantastic, but I come back to the point I was making earlier: in and of themselves, they won't ensure inclusion, they won't ensure equity and inclusion. There have been lots of reports about who are the most marginalised, the most left behind, and in different contexts it will be different groups. Technology, I think, has the power to help promote inclusion, but it's not in and of itself enough, and I think that's the key message, really. What are our expectations around the technology? What do we want it to do? And what else has to be in place in order to enable that? There are some really great uses out there, not necessarily directly humanitarian, that could work in the humanitarian sector. But there's that caveat: in and of itself, it's not enough without the wider context.

Rochelle Burgess 
That final message, which I feel resonates across many generations of public health and health improvement, is that we need to be working across boundaries, across contexts, and thinking in complex ways. I wanted to thank you both so much; it's been really amazing, and I feel like I've learned a huge amount. Just as we come towards wrapping up: because we're interested in this idea of disrupting thinking, not just within public health but beyond, we ask every guest about a piece of art or music or poetry that has disrupted their perspective. So I wonder if either of you has yours to share with us today. Sarah's nodding, so you've both prepared; I'm really excited to hear what you have to say. Can we start with Sarah?

Sarah Spencer 
Yeah. As a parent of what feels like a legion of small people, you rarely have time to enjoy art, music or poetry, so this was a hard question for me. But I have always been fascinated by history, and I studied history as an undergraduate. I just finished Patrick Radden Keefe's book Empire of Pain, which is about the history of the Sackler family and the opioid crisis in the United States. One of the really interesting stories in that book is about Arthur Sackler and the Sackler enclave, which was the private storage facility he had in the Metropolitan Museum of Art and used as leverage over both the Met and the Smithsonian in Washington. It was arguably one of the biggest collections of Asian art in the world, and the book traces how that is linked to some of the philanthropic ventures of the Sackler family themselves. What it caused me to reflect on was what we as a society consider a private or a public good, and how they should be managed, and by whom. The link for me is around thinking about these new technologies, the vast wealth they are creating and that is associated with them, and the narratives around 'AI for good' and 'technology for good'. If they are for good, where is the public good in that? And how do we dig into the motivations behind corporate philanthropy, and the politics behind the philanthropic ventures of high-net-worth individuals?

Rochelle Burgess 
I want to start the podcast again from that statement; that is such an amazing insight. Maria?

Maria Kett 
I have to top that? Amazing. I'm actually going to stick with art: I'm going to say Ai Weiwei. It might be predictable, but I studied in China for a number of years and remember very well the artists' collectives starting up. I remember the first time I saw the Ai Weiwei video of him smashing the vases, and you can make a lot of criticisms of Ai Weiwei, but I think of the exhibition he did after the Sichuan earthquake. I was in China when the Lijiang earthquake happened and saw the might of the Chinese response to an earthquake. What was striking was the rucksacks of the children on a wall, spelling out a sentence. What was really poignant was that it was very simple, really. He has been criticised, and some people will say he's commodifying suffering in China, and that may also be true, but I think he's been pretty consistent. It's quite simple, and some of the works are really obvious: it might be a marble surveillance camera, or rods from the earthquake buildings. His work, for me, kind of does what it says on the tin; he's making a point, right? It's not subtly disruptive in some sense; it's kind of obvious. But he's been pretty consistent in his criticisms of how all governments or states can operate, and he's not the only artist who does that, but I think he's been quite striking in his simplicity about it.

Xand Van Tulleken 
Oh, you're both totally brilliant. Thank you very, very much indeed. That was great.

Rochelle Burgess 
Thank you so much.

Sarah Spencer 
Thanks, guys. Nice to meet you.

Rochelle Burgess 
You've been listening to Public Health Disrupted. This episode was presented by me, Rochelle Burgess and Xand Van Tulleken, produced by UCL Health of the Public, and edited by Annabelle Buckland at Decibelle Creative. Our thanks again to today’s guests, Maria Kett and Sarah Spencer.

Xand Van Tulleken
If you’d like to hear more of these fascinating discussions from UCL Health of the Public, make sure you’re subscribed to this podcast so you don’t miss future episodes! Come and discover more online and keep up with the school’s latest news, events and research – just Google ‘UCL Health of the Public’.   This podcast is brought to you by UCL Minds - bringing together UCL knowledge, insights, and expertise through events, digital content, and activities that are open to everyone.