[music playing]
Oby: From the campus of Harvard Medical School, this is Think Research, a podcast devoted to the stories behind clinical research. I'm Oby.
Brendan Keegan: And I'm Brendan. And we are your hosts. Think Research is brought to you by Harvard Catalyst, Harvard University's Clinical and Translational Science Center.
Oby: And by NCATS, the National Center for Advancing Translational Sciences. In vulnerable populations, implementation of health care and health care education are crucial but can be challenging. Among youth experiencing homelessness, there are additional obstacles to health care delivery.
The rate of HIV infection among youth experiencing homelessness in Los Angeles is 10 times the average rate. Dr. Tambe and his collaborators at the University of Southern California combined artificial intelligence with the expertise of Los Angeles-based social workers to map social networks and more effectively share HIV prevention information.
Dr. Tambe is the Gordon McKay Professor of Computer Science and director of the Center for Research on Computation and Society at Harvard University. He is also the director of AI for Social Good at Google Research India. Hi, Dr. Tambe. Welcome to the show.
Milind Tambe: Thank you for inviting me. Delighted to be here.
Oby: Great to have you. So you're a computer science researcher, and professor, and the director of AI for Social Good at Google Research India. What is AI for Social Good?
Milind Tambe: Great question. So AI is artificial intelligence. When we talk about social good, we mean advancing AI for the benefit of those who have not benefited from it-- generally speaking, low-resource communities, endangered communities. And when we say AI for social good, we really mean being able to demonstrate the good on the ground.
So it's not enough to say we are doing AI for social good and perhaps just provide advances in AI algorithms. It is important to actually demonstrate impact on the ground. And often, this is work with communities that cannot commission AI research for themselves. And so the kind of work that happens in AI for Social Good is often with nonprofits who are working with these low-resource communities.
Oby: So before coming to Harvard, you were at the University of Southern California. While you were there, you partnered with a group in Los Angeles that focused on HIV prevention among youth experiencing homelessness. Tell us about that study and what the goal was.
Milind Tambe: This work that you're referencing was jointly done with the USC School of Social Work. So I was in computer science, and my colleague, collaborator, friend Dr. Eric Rice was in the School of Social Work.
And so together, the goal was, on the one hand, to actually achieve social impact in the community of youth experiencing homelessness for HIV prevention, and on the other hand, to advance AI research. Because AI had not paid attention to the kinds of problems that arise in achieving this sort of social impact by leveraging the social networks of these youth.
And so the goal of the study was to improve knowledge of HIV prevention and to cause changes in behavior. Specifically, as you may know, there are 6,000 youth who sleep on the streets of Los Angeles every night. The rates of HIV among these youth are 10 times the rate among housed populations.
And so drop-in centers and other organizations conduct peer-led campaigns in order to spread information about HIV and reduce HIV risk behaviors. In a sense, you're trying to recruit key peer leaders-- because you cannot, obviously, talk to all 6,000 youth-- educate them about HIV prevention, and expect these peer leaders to talk to their friends, and their friends to talk to their friends, and the information to spread through the social network.
Now, this is face-to-face conversation-- not over electronic social media or anything like that. The question then became, could we do something better in selecting the key peer leaders, compared to how it was traditionally done, using AI techniques? And indeed, the result was that we were able to show that our AI algorithms for selecting peer leaders were far more effective in causing the spread of HIV prevention information and changes in HIV risk behaviors compared to traditional approaches.
Oby: Wow, that is incredibly interesting. I'm so curious about the AI techniques you used and what that means. We're talking about social networks and basically determining who within that larger network is the person who can disseminate the information-- who's kind of the influencer. So I'm very interested to know more: what were the techniques? How did you get to that point?
Milind Tambe: Awesome question. So the basic idea here is we want to select those influencers, those key peer leaders in the network, who would be able to spread the information in a way that reaches all of the different subcommunities. So this is a network with many different subcommunities-- maybe youth who play basketball together, youth who hang out at Venice Beach together, and so forth.
The initial approach, adopted in earlier work, was to select the most popular youth-- the nodes in the social network with the highest degree. And selecting them makes sense: they're the most popular. However, they are all concentrated, if you will, at the center of the network, meaning they all know each other.
But then you don't get to reach more of these subcommunities. What we want to do instead is to select maybe one or two nodes in the center, with the other nodes more strategically placed. Maybe nodes that connect different communities are more effective in spreading information.
And so what this AI algorithm is doing is not looking at demographic information-- it's not looking at anything other than the purely strategic placement of the nodes in the network. So if you imagine a network of, say, 300 youth, and within that we want to select, let's say, 30 youth as peer leaders, we want the combination of 30 out of 300 that is strategically placed to spread this information.
So choosing a combination of 30 out of 300-- if you think about it, that's 300 choose 30. That's a massive number of combinations to think through. And the AI algorithm is essentially sifting through all these combinations to figure out the most effective way of putting together a coalition of peer leaders for spreading this information.
So they need to be placed just the right distance from each other, because if you just choose neighbors, that's not as effective. They need to be able to reach different subcommunities. So the algorithm is, in effect, simulating all of these different possibilities and then ultimately coming up with the right choice of peer leaders.
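To make the scale concrete, 300 choose 30 is roughly 1.7 x 10^41 possible coalitions, far too many to enumerate, so the selection has to be done algorithmically. The study's actual algorithms are more sophisticated, but a minimal sketch of the classic greedy approach to influence maximization-- repeatedly simulating spread under an independent cascade model and adding whichever youth helps most-- might look like the following. The networkx graph, the spread probability p, and the trial count are illustrative assumptions, not parameters from the study.

```python
import random
import networkx as nx

def simulate_spread(G, seeds, p=0.1, trials=50):
    """Estimate expected spread under an independent cascade model:
    each newly informed youth passes the information to each friend
    with probability p. Returns the average number of youth reached."""
    total = 0
    for _ in range(trials):
        informed = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in G.neighbors(u):
                    if v not in informed and random.random() < p:
                        informed.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(informed)
    return total / trials

def greedy_peer_leaders(G, k):
    """Greedily pick k peer leaders, each round adding the youth whose
    inclusion most increases estimated spread, sidestepping the
    ~1.7e41 subsets of size 30 in a network of 300."""
    seeds = []
    for _ in range(k):
        best = max((n for n in G if n not in seeds),
                   key=lambda n: simulate_spread(G, seeds + [n]))
        seeds.append(best)
    return seeds

# Illustrative network: 300 youth in 10 loosely linked subcommunities.
G = nx.connected_caveman_graph(10, 30)
print("Chosen peer leaders:", greedy_peer_leaders(G, 5))
```

Notably, the greedy strategy tends to scatter the chosen youth across subcommunities on its own: once one clique is informed, adding a second leader from the same clique barely raises the expected spread.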
And often it is a surprising choice, because it often ends up with youth who at first glance may not be the most popular youth, may appear to be around the edges of the network. But they just happen to be the right youth for spreading this information. And that's what was clearly seen in the result of the experiment.
I should say, though, that the network itself is not given to us. So it's not as though we know the social network ahead of time. And so in the first pilot studies we did, our social work colleagues, students of Professor Rice, for example, would painfully collect this data by querying all of the youth in the study, and doing field observations, and so forth. And that in itself became a very complex operation.
So the next approach, then, was could we sample a small fraction of the network? And the key is to figure out what nodes to sample. So in a sense, we don't know the network.
Can we figure out what the network might be by sampling intelligently through a few nodes, and then, based on just that small sample, figure out who the key peer leaders are? And so the interesting part of the algorithm was the sampling, and then also selecting the right peer leaders.
Oby: Wow. I'm fascinated by everything you're talking about-- how you're thinking about homelessness among youth, and how you're using this technology to respond to what is happening and disseminate information. And one of the things you just talked about was the social network and how people painfully collected the data describing those social networks.
And I know in this study, one of the things you had to do in the study was map the social networks of the youth experiencing homelessness. How did you do that part? Because the mapping-- you talked about how you got the social networks, but how are you doing the mapping?
Milind Tambe: Correct. This is a very interesting question, because often in computer science, the work done on social networks for influence maximization was driven by something like viral marketing, where the idea is that you are given the social network ahead of time. And then, given that network, you develop a very clever algorithm for selecting the right influencers.
The problem in our case is, indeed, that the network is not given. And as I said, one approach would be to painfully collect all of that data. But that is not scalable. That's not going to work if you want to use this technique across many different locations.
And so the next thing was to sample the network. And to sample, essentially, you ask a particular youth, who are your top five friends? Or some simple query that gives us information about their edges, the connections that they have.
The key is to figure out who to sample, which youth to ask questions. And that's where the cleverness of our algorithm comes in. I should mention that the study was led by my great PhD student, Bryan Wilder, who has been the lead in this work. And so he came up with this clever algorithm for sampling the right kind of youth.
So you sample, let's say, 15% of the youth-- that's it-- by asking them who their friends are. And then, just having that much information is sufficient to then select the right peer leaders and then proceed from there.
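A rough sketch of that two-stage idea, under stated assumptions: survey only a fraction of the youth, build a partial graph from their answers, and then choose peer leaders from that partial view. The 15% figure and the top-five-friends query come from the conversation; the ask_top_five_friends function is a hypothetical stand-in for the field survey, and the degree-based selection at the end is a deliberate simplification-- the published algorithm chooses far more carefully under uncertainty about the unobserved edges.

```python
import random
import networkx as nx

def sample_network(all_youth, ask_top_five_friends, fraction=0.15):
    """Build a partial social network by surveying only `fraction`
    of the youth. ask_top_five_friends(y) is a hypothetical stand-in
    for the field query 'who are your top five friends?'"""
    surveyed = random.sample(all_youth, int(fraction * len(all_youth)))
    G = nx.Graph()
    G.add_nodes_from(all_youth)
    for y in surveyed:
        for friend in ask_top_five_friends(y):
            G.add_edge(y, friend)  # one reported friendship edge
    return G

def pick_peer_leaders(partial_G, k):
    """Choose k peer leaders from the sampled view. As a placeholder,
    take the k highest-degree nodes of the partial graph."""
    ranked = sorted(partial_G.degree, key=lambda nd: nd[1], reverse=True)
    return [node for node, _ in ranked[:k]]
```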
What we learned here, as I've learned in my other work in AI for social impact, is that going into the real world to solve these socially impactful problems often opens up new research challenges in computer science that we had not thought about. And this was one example, because traditionally the work assumes that the social network is given. Here, the social network is not given.
And so it leads to some very interesting new research challenges. So in some sense, we get wonderful problems to work on in advancing AI-- and they are driven by socially impactful applications.
Oby: As a group of computer scientists, what was it like partnering with a group made up mostly of social workers?
Milind Tambe: Oh, it's such a fascinating experience. And I must say, it is extremely inspiring. It's so different. In computer science circles, people are talking about startups, startups being bought, seed funding, venture capital, all that kind of stuff. At the School of Social Work, it's all about social justice.
It's just such a different and inspiring kind of conversation-- a real change of context. And it led to many ideas. The particular application I mentioned, HIV prevention, was just the tip of the iceberg, just the start.
And then, following that, there seemed to be so many things we could work on together: substance abuse prevention, suicide prevention, addressing malnutrition and nutrient deficiency. There's just a ton of problems we could collaborate on.
And it was beneficial for both sides, because AI researchers had not encountered these kinds of problems before. And obviously on the social work side, there was this need to apply AI techniques. So fascinating.
And we discovered sometimes that there were problems that neither discipline actually worked on. So there's this big gap where nobody was working on those problems. And that was a very interesting discovery.
And sometimes words have very different meanings on the two sides. I'll give you an example. The word "objective" in AI, in computer science, has a very specific meaning. It means mathematically expressing the actual quantity that you are trying to optimize.
So in some sense, it has to be a mathematically expressed term that says something like "sum over i of x_i"-- a mathematically precise formulation. And so I was on this phone call-- social work researchers on one side, AI colleagues on the other-- and it was about substance abuse prevention.
And the AI researcher asked, what is your objective? And this after we had clearly been talking about substance abuse prevention for half an hour. The social work colleague said: we've been talking for half an hour-- the objective is substance abuse prevention among these youth. What more do you want to know?

And what the AI researcher was asking for was, give me "sum over i of x_i, plus y"-- some very specific thing. And it was just very interesting to understand that the word objective had these very different meanings.
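For readers wondering what the AI researcher was after, a textbook way to write down the objective for influence maximization (an illustrative formulation, not necessarily the one used in this work) is:

```latex
\max_{S \subseteq V,\; |S| \le k} \; \mathbb{E}\left[\sigma(S)\right]
```

Here V is the set of youth in the network, S is the chosen set of peer leaders, k is the budget (say, 30), and \sigma(S) counts how many youth the information reaches when S is the seed set. That level of precision is what "objective" means on the AI side.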
So there's some of that, and it's always great fun-- lots of sparks of, oh, this is a problem, and we can use this technique. Just the cross-fertilization of ideas in terms of what we had assumed and what they had assumed, and so much education on the AI side.
And for my students who went and worked alongside the social work students, it was a transformative experience for them to understand what is really going on in the city. It's a very different world compared to, as I said, the world of computer science.
So I've seen students come back inspired that this is something they could really contribute to and adopt as their mission in life. It's a very powerful connection.
Oby: What was the outcome of this study?
Milind Tambe: So we recruited 750 youth overall. And I must, again, say this is joint work with my friend and colleague Eric Rice. There were three arms in the study: our algorithm, which is called CHANGE; the traditional approach of bringing in the most popular youth, which I'll refer to as degree centrality; and third, a no-intervention arm. In each arm of the study, 250 youth were recruited.

And in each arm, we selected peer leaders according to the method for that arm. So for CHANGE, it was our AI algorithm; for degree centrality, the traditional approach, it was the most popular youth; and for no intervention, nobody was selected.
And then we looked at the results in terms of changes in the behavior of the youth at the end of one month and at the end of three months. We looked at different types of changes in behavior.
So first was condomless anal sex. And there we found that at the end of one month, there was more than a 30% reduction in the CHANGE group-- the group where the AI intervention took place. Whereas there was no difference in the degree centrality group-- the more traditional approach. And there was also no change in the no-intervention group, though of course there was no intervention there.
So basically, the AI algorithm led to a significant improvement-- a 30% reduction in this HIV risk behavior-- while in the other two arms there was no change at all, showing that this really worked much better. At the end of three months, we found that the more traditional approach began to catch up, but the change in behavior was still not as large as in the AI arm.
So what this showed is that our algorithm was able to cause changes in behavior much faster, since those changes appeared by the end of one month. And this is important because this is a risk behavior, and also because it's a community where youth come and go, so achieving this reduction in risk behavior quickly mattered.
We looked at other metrics-- condomless vaginal sex, knowledge of HIV. And we could see that the AI intervention was faster and more significant compared to the other two arms.
And of course, we did statistical tests and all of those things, and those are in our papers. But what this essentially showed is that the AI algorithms are significantly more effective in spreading HIV information and reducing HIV risk behaviors compared to traditional approaches.
Oby: That is amazing. I mean, honestly, it's really-- I love hearing how different scientists, social scientists, different groups of people come together to answer a question like this and how it's more powerful when different groups come together to think about a solution to a really big issue.
What do you have in mind about the future of AI and these large-scale social issues that are not going anywhere? I heard you mention quite a few of them earlier, and I'm curious whether you've been thinking about using AI in any other spaces.
Milind Tambe: AI has a huge role to play in social good-- in public health, conservation, agriculture-- and certainly a huge role to play in emerging markets and the developing world. So one way to unlock the potential of AI, if you will, is to enable AI researchers to find the problems where they could have an impact.
When I go and meet with students, I feel there is this desire to somehow contribute to social causes. But they aren't able to connect with the right NGO, the right problem, the right group of people to work with, where their science would have an impact.
So with that in mind, in my other position at Google Research India, what we started is a matchmaking process. Essentially, faculty members from around the world can apply to our program, and NGOs can apply. And so we just completed a call.
And then we do matchmaking based on the interests expressed by the AI researcher and their team, and the specific problems the NGOs have mentioned. We then get each AI researcher to meet with three NGOs, and each NGO to meet with three AI researchers.
And somewhere there is a match. Somebody suddenly realizes that this problem matches this technique that I know, and then they write proposals. And then we support those proposals. We fund them, and so on and so forth.
That's something we piloted in 2019, and it seemed to work really well-- there were six projects that came out of that first pilot. And now we are hoping to launch many more. Very soon these matches will be made.
The idea here is to enable AI researchers to contribute to these social causes-- enable meaning simply, via matchmaking, getting the right information to the right people. So I'm hopeful this will light up AI research in terms of social impact.
And hopefully it'll be a start, whereby once people get a taste of how this is done, many more AI researchers will find the right NGOs to work with on their own, and this will become a chain reaction in which we see a lot of AI work going toward social causes.
And I emphasize that this is a collaboration. This is not AI researchers sitting in the lab, wherever they are-- I'm sitting here at Harvard-- saying, OK, I have this clever algorithm, let me helicopter it in to some rural community in India, and it's going to work. That's not the way it will be.
It has to be a partnership. It has to be a collaboration. It has to be something that we have, just like you mentioned, this kind of deep collaboration between the two sides in order to have impactful AI research and an impactful application that is very carefully designed with the partners in mind, working with us, working with AI researchers all along.
Oby: It has been such a pleasure having this conversation with you, Dr. Tambe.
Milind Tambe: It's been such a pleasure to have this conversation. Thank you for your interest. It's really so nice to be able to discuss the actual work that was done, the broader issues, and the actual social good. Thank you for inviting me.
Brendan Keegan: Thank you for listening. If you've enjoyed this podcast, please rate us on iTunes, and help us spread the word about the amazing research taking place across the Harvard community.
Oby: To learn more about the guests on this episode, visit our website-- catalyst.harvard.edu/thinkresearch.
[music playing]