[STEF 2022] AI for social impact: Results from deployments for public health

Hello, welcome to this special session of the Sony Technology Exchange Fair, the STEF conference 2022. The title of today's session is AI for social impact: results from deployments for public health. It is my pleasure to welcome Professor Milind Tambe, who is the Gordon McKay Professor of Computer Science and the director of the Center for Research on Computation and Society at Harvard University, as well as the principal scientist and director for AI for social good at Google Research.

So welcome, Professor Tambe. It is a pleasure to have you here. And I would also like to introduce Dr. Hiroaki Kitano, who is Senior Executive Vice President and Chief Technology Officer of Sony Group Corporation. My name is Allan Sumiyama; I'm the head of corporate technology strategy at Sony Group Corporation, and I'm honored to be the moderator of this session. So thank you, Milind, and thank you, Hiroaki, for joining.

So without further ado, I would like to ask Milind to begin his talk on AI for social impact: results from deployments for public health. That will be followed by a dialogue session between Milind and Hiroaki. Please go ahead, Milind. Hello, my name is Milind Tambe.

I'm from Harvard University and Google Research, and I'm going to talk to you today about AI for social impact. For the past 15 years, my team and I have been advancing AI and multiagent systems research for social impact, focusing on public health, conservation, and public safety and security, with the key challenge of how to optimize our limited intervention resources. I'll start by giving some background about our research. The first point is that achieving social impact and AI innovation go hand in hand, and I want to convince you that AI for social impact doesn't mean just taking AI out of the box as is and applying it toward social impact. Innovation is required, and we see this, for example, in the area of public health.

Here we have large populations to serve but a limited number of public health resources. A concrete example is work we've done with youth experiencing homelessness in Los Angeles. By harnessing the social networks of these youth, we were able to show that our social network influence maximization algorithms are far more effective in reducing HIV risk behaviors compared to traditional approaches. This work required innovation in the area of social network influence maximization, because the networks themselves are not known ahead of time. With respect to conservation, we have large conservation areas to protect but a limited number of ranger resources. A concrete example is work we've done in Uganda and Cambodia where, harnessing past poaching data, we are able to predict where poachers set traps or snares, and for the past several years we have been able to remove thousands, if not tens of thousands, of these snares.

But this work required innovation in an area we call green security games, which combines machine learning and game theory. With respect to public safety and security, we have contributed a new model called Stackelberg security games and new algorithms that have been in use by security agencies in the United States, such as the US Coast Guard, the US Federal Air Marshal Service, and others. These may seem like very different application areas, but what ties them together is the underlying common multiagent systems research.

In today's talk I'm going to discuss AI for maternal and child care. I'll cover papers from the last two years on this topic. I'll focus on real-world results in this talk, but there are more simulation results in our papers. And I'll highlight the role of the lead PhD student or postdoc by putting their picture in the top right-hand corner of the slide on which their work is shown.

So the motivation for this work is the UN Sustainable Development target that, by 2030, the maternal mortality ratio should be below 70 per 100,000 live births. That is, mothers dying during childbirth or soon after should be fewer than 70 per 100,000. That's the UN target. If you look at where we are today, Western Europe has much lower numbers; the United States is rising but still much lower. In the developing world, some maternal mortality rates are falling, but they're still much higher.

For example, in India it is higher than 100, and that's where we begin today. What this high maternal mortality ratio implies is that a woman dies in childbirth every 20 minutes in India, and 4 out of 10 children end up being too thin or too short.

We are very fortunate to be working with a nonprofit called ARMMAN that tries to address these issues. ARMMAN is working with 26 million beneficiaries, mothers in India, and is active in 19 states. We are very inspired by the founder of ARMMAN, Dr. Aparna Hegde, who says that pregnancy is not a disease, childhood is not an ailment, and dying due to a natural life event is not acceptable.

And I met with her in the summer of 2019, and we agreed that we should collaborate and do something with AI, but what could we do? We arrived at a solution: we should focus on adherence to the mMitra mobile health program. mMitra is one of the programs that ARMMAN runs, and it builds on the fact that cell phones are used widely by people in India. The mMitra program uses that to send weekly two-minute automated health messages to mothers who register in the program. So for new or expecting mothers, from the time they register to the time the baby is one year old, there are 140 messages that go to these mothers. These are small, two-minute messages, something like: you're three months into your pregnancy, you should use this health supplement; or, your baby is three weeks old, get the baby vaccinated.

ARMMAN has shown with randomized controlled trials that mothers who enroll in the mMitra program and listen to all of the messages benefit significantly in their own health and the health of their baby. In fact, 2 million mothers have enrolled so far in this mMitra program. So where do we come in? Unfortunately, 30 to 40% of these mothers enroll and then become low listeners or drop out of the program. To understand why, ARMMAN took us around to the hospitals where they register these mothers and the localities where these mothers live. I have grown up in Mumbai, and I am familiar with some of these neighborhoods.

These are families that live significantly below the international poverty line. Going into their homes allowed me to understand the pressures they are under, as a result of which they may become low listeners. And we, of course, came away with significant admiration for the work these nonprofits do in these neighborhoods. The question is, what can we do to prevent these dropouts? ARMMAN has a service call center from which they can place service calls to persuade beneficiaries to stay adhered and not drop out of the program. The issue is that there is only a small number of service call employees, the people who place these service calls, and they cannot call all of the mothers. So how do you optimize this limited intervention resource? Think of 100,000 beneficiaries registered in mMitra at a time; each of these hundred thousand gets a health message on their cell phone every week.

To prevent dropout, we can call only 1,000 of these per week. Which 1,000 should we call so as to maximize the total number of health messages that are listened to? To understand this, consider, for illustration, a service call worker who has five mothers under her care. Four of them are shown in red: they haven't listened to an automated health message. One is shown in green: she listened to her health message last week. The health worker has to decide who to give service calls to, and in this case only two service calls can be given, so she chooses the first two. This turns out to be a good choice because these two reds turn to green. Another red also turns to green on its own, so now the health worker has to decide who to call next week to encourage everybody to continue listening to health messages. In this case, she picks the two who are red. But this turns out to be a bad choice, because these two reds don't turn to green, and in fact those who were green also turn to red. What this shows is that a service call may not change a beneficiary's state, and beneficiaries may change state on their own,

and yet we have to prioritize 1,000 beneficiaries per week. So how do we do that? We appeal to the restless bandits approach, selecting K out of N arms per week. In a restless bandit, each arm is a Markov decision process (MDP); each mother, each beneficiary, is modeled by such an MDP. The mother might be in a bad state, where she has not listened to a voice message, or a good state, where she has listened to a voice message. We have the action of either giving a service call to the mother or not intervening, no service call.

And now we have this transition matrix which models the behavior of the mother. So for example, when there is no service call the probability of going from a bad state to good state is 0.2. But when there is a service call the probability of going from a bad state to a good state increases to 0.8.
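Written as code, one mother's transition model might look like the following sketch. Only the bad-to-good probabilities, 0.2 without a call and 0.8 with a call, come from the talk; the remaining entries are invented for illustration.

```python
import numpy as np

# One mother's restless-bandit arm as a pair of 2x2 transition matrices.
# Rows are the current state (0 = bad / not listening, 1 = good / listening),
# columns are the next state. Only the bad->good entries (0.2 and 0.8)
# come from the talk; the other entries are illustrative assumptions.
P_no_call = np.array([[0.8, 0.2],   # bad -> {bad, good} with no service call
                      [0.4, 0.6]])  # good -> {bad, good} with no service call
P_call = np.array([[0.2, 0.8],      # a service call lifts bad -> good to 0.8
                   [0.1, 0.9]])

# Each row must be a probability distribution over the next state.
assert np.allclose(P_no_call.sum(axis=1), 1.0)
assert np.allclose(P_call.sum(axis=1), 1.0)
```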

So in reality, of course, this is the model of just one mother. We have 100,000 of these mothers, 100,000 of these Markov decision processes, and we have to choose 1,000; 100,000 choose 1,000 is a massive problem. To solve this, we instead compute what's called a Whittle index. Informally, it computes the benefit of intervention on each arm.

The benefit of giving a service call to each arm. This allows us to rank all 100,000 arms, one per mother, and then pick the top 1,000, which have the highest benefit of intervention. Formally, the Whittle index is the subsidy we would give to the passive action in a state so that the Q-values of the passive and active actions become equal.
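For a two-state arm with known transition matrices, that subsidy definition can be turned into a small numerical procedure: binary-search for the subsidy that makes the passive and active Q-values equal under value iteration. This is a minimal sketch of the definition, not the 2016 algorithm the talk refers to; the discount factor, reward vector, and search bounds are assumptions.

```python
import numpy as np

def whittle_index(P_passive, P_active, state, reward=(0.0, 1.0),
                  gamma=0.95, tol=1e-4):
    """Subsidy for the passive action that equalizes the passive and
    active Q-values in `state` (0 = bad, 1 = good)."""
    r = np.array(reward)  # r[s] = 1 when the message is listened to

    def q_values(subsidy):
        # Value iteration on the 2-state MDP, with the subsidy added to
        # the passive action's immediate reward.
        V = np.zeros(2)
        for _ in range(1000):
            Q_pass = r + subsidy + gamma * P_passive @ V
            Q_act = r + gamma * P_active @ V
            V_new = np.maximum(Q_pass, Q_act)
            if np.max(np.abs(V_new - V)) < 1e-9:
                break
            V = V_new
        return Q_pass[state], Q_act[state]

    lo, hi = -2.0, 2.0
    while q_values(hi)[0] < q_values(hi)[1]:  # grow until passive wins
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        q_pass, q_act = q_values(mid)
        if q_pass < q_act:  # subsidy too small: active still preferred
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

An arm whose service call genuinely helps gets a positive index, and an arm where calling changes nothing gets an index near zero, which is what lets the indices rank the arms.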

It's not important to understand the formal definition for the purposes of this talk. There is no out-of-the-box algorithm to compute this Whittle index. Fortunately, we defined one in 2016 that was useful for our purposes, and that's the algorithm we are going to use for this work. Now, one issue here is that computing the Whittle index requires that the model parameters, the transition probabilities, be known, but we are not given these ahead of time. When a new mother arrives, we don't know her transition probabilities of going from a bad state to a good state, and so on. What we do have are the features of past mothers who have enrolled, such as age, income, and education level, together with their engagement sequences. For example: the mother was in a bad state, she got a service call, she remained in a bad state; bad state, service call, transition to a good state.

So given this data that we have, the features and engagement sequences of past mothers, we can use clustering, and from this clustering we can learn a mapping from the features (age, income, etc.) of a particular mother to the cluster she belongs to. So when a new mother walks in, we can take her features, age, income, education, and map her to an existing cluster. From the cluster center, we can infer her transition probabilities. This is how we predict a mother's transition probabilities.
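That lookup can be sketched as follows. The cluster centres, feature choices, and probabilities here are all hypothetical, invented for illustration; in practice the centres would come from clustering past beneficiaries' features and the probabilities from their engagement sequences.

```python
import numpy as np

# Hypothetical cluster centres over (age, monthly income in thousands of
# rupees, years of education), learned from past beneficiaries.
centroids = np.array([[22.0, 8.0, 5.0],
                      [30.0, 15.0, 10.0],
                      [26.0, 10.0, 12.0]])

# Per-cluster bad->good probabilities (no call, call), estimated from
# the engagement sequences of the mothers assigned to each cluster.
p_bad_to_good = {0: (0.1, 0.6), 1: (0.3, 0.8), 2: (0.2, 0.7)}

def infer_probs(features):
    """Assign a new mother to the nearest cluster centre and return the
    transition probabilities stored there. (In practice the features
    would be normalized before computing distances.)"""
    dists = np.linalg.norm(centroids - np.asarray(features), axis=1)
    return p_bad_to_good[int(np.argmin(dists))]
```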

Having done these predictions of transition probabilities, we can now compute a Whittle index for each mother, and having the Whittle indices of all the mothers, we can choose the top 1,000. That's how we figure out which 1,000 mothers to call each week. Then it was time to run a field study, which we did with 23,000 beneficiaries, as far as we know the first large-scale application of restless bandits for public health. We divided these 23,000 into three groups of 7,667 beneficiaries each. The first group was the restless bandit group; the second was round robin.

The third was the current standard of care. In each group, we pulled 225 arms, that is, we called 225 mothers. In the restless bandit group, the 225 were the ones with the highest Whittle indices. In the round robin group, we called the first 225, then the next 225, going through the group in sequence. In the current standard of care, no calls go out.

Now we want to know how many more health messages are listened to in the restless bandit group and the round robin group, due to our service calls, over the current standard of care group where no service calls go out. So here's the result: on the x axis are the different weeks, and on the y axis is how many more health messages are listened to in the round robin group and the restless bandit group over the current standard of care group. What we see is that in blue, the restless bandit group, 600 more messages are listened to compared to the current standard of care, while in the round robin group, shown in orange, very few additional messages are listened to. In terms of statistical significance: restless bandit (RMAB) versus current standard of care, yes, statistically significant; round robin versus current standard of care, not statistically significant.

So what can we infer? First, it's important to optimize service calls: if we just call mothers in a round robin fashion, there is no improvement over the current standard of care. And second, the restless bandit (RMAB) improvement is statistically significant as well; it cut the dropout rate by 30% over the current standard of care. So this result is very encouraging. In fact, we have now deployed this restless bandit model in a system called Saheli with ARMMAN. It's actually in use every week: service calls are chosen based on recommendations made by Saheli.

100,000 beneficiaries have so far been assisted by Saheli, and we are continuing to assist more. Dr. Aparna Hegde, having seen these results, points out that we're able to reach more and more women each week, get them back into the fold, and save lives because of AI. You can see this in our video, which is on YouTube: "AI for social good in partnership with Armman." Also in that video is an interview with one of the beneficiaries who benefited from these AI-based service calls. I'm going to play a small clip of that interview.

I was unable to listen to the calls earlier. Then the mMitra worker reached out and explained the benefits of listening to the messages. Now I listen to the calls regularly, it feels like someone from your own family is looking after you. I follow all the advice and take good care of my baby. So this is indeed very satisfying because this mother is speaking in Marathi and this is all in Mumbai. So this is showing you some of the benefits of our Saheli deployment, but there are of course lots of interesting issues for AI research as well.

So the first of them is decision-focused learning. This work takes place in the context of a whole data-to-deployment pipeline. We start with the data; then there is machine learning to map features (age, income, etc.) to behaviors (transition probabilities); then we optimize, choosing the top K beneficiaries; and then we deploy the software. So first we maximize learning accuracy, and then we maximize decision quality: two separate stages. But maximizing learning accuracy doesn't translate into maximizing decision quality. Here's an actual deployment result on two datasets, orange and blue. With the orange dataset, ARMMAN has higher predictive accuracy; they are able to more accurately predict mothers' behaviors.

However, in terms of the deployment result, when we actually compute Whittle indices and call mothers based on the orange dataset versus the blue dataset, which has lower predictive accuracy, it turns out that orange leads to fewer messages being heard by the mothers, and blue leads to more. So even though blue has lower predictive accuracy, from a deployment perspective, computing Whittle indices and calling mothers, blue is better. How does this happen?

So let's look at this example, with features on the x axis and transition probabilities on the y axis. The blue dots are all the low-risk mothers, meaning less risk of dropping out; the red dots are the high-risk mothers, with a higher risk of dropping out. If we take the two-stage approach, first maximizing learning accuracy and then maximizing decision quality, the learned regression line hugs the many blue dots. It achieves high learning accuracy, making better predictions on average, but it misses the high-risk mothers, and so it leads to lower decision quality. Decision-focused learning is the idea of modifying the loss function to directly maximize decision quality.
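The intuition can be illustrated with a toy regression echoing the slide: many low-risk mothers and a few high-risk ones. A plain least-squares fit stands in for the two-stage approach; a crude decision-focused variant simply upweights the decision-relevant high-risk points. This reweighting is only a stand-in for the idea, since the actual method modifies the loss by differentiating through the optimization itself; all the data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: many low-risk mothers (high listening probability) and a
# few high-risk mothers (low listening probability).
x_low, y_low = rng.normal(0.0, 1.0, 40), rng.normal(0.7, 0.05, 40)
x_high, y_high = rng.normal(3.0, 0.3, 5), rng.normal(0.2, 0.05, 5)
x, y = np.concatenate([x_low, x_high]), np.concatenate([y_low, y_high])

def fit_line(x, y, w=None):
    """Weighted least squares: minimize sum_i w_i * (a*x_i + b - y_i)^2."""
    w = np.ones_like(x) if w is None else w
    sw = np.sqrt(w)
    A = np.stack([x, np.ones_like(x)], axis=1)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef  # (slope a, intercept b)

# Two-stage stand-in: plain MSE treats every mother equally.
a2, b2 = fit_line(x, y)

# Decision-focused stand-in: upweight the high-risk mothers whose
# predictions actually drive the top-K call decision.
w = np.where(x > 2.0, 10.0, 1.0)
ad, bd = fit_line(x, y, w)

def high_risk_error(a, b):
    return float(np.mean((a * x_high + b - y_high) ** 2))

# The decision-focused fit predicts the high-risk mothers better,
# even though its fit to the mass of low-risk points is looser.
assert high_risk_error(ad, bd) < high_risk_error(a2, b2)
```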

What it learns here is the green regression line shown at the bottom, which more accurately predicts the red dots. Overall learning accuracy is lower, but decision quality is higher. We ran an experiment with decision-focused learning with 9,000 mothers: 3,000 in decision-focused learning, 3,000 with two-stage learning, and 3,000 in the current standard of care. In terms of predictive accuracy, the two-stage model is higher and decision-focused learning is lower. But here is the actual deployment performance, when we compute Whittle indices and call mothers based on them; this is an actual result from the deployed system.

We see that decision-focused learning leads to far more messages being listened to, shown in blue here, over a span of 10 weeks. The 10 weeks are shown on the x axis, and the y axis shows cumulatively how many more messages are listened to. We see 560 more messages listened to with decision-focused learning, but much fewer with the two-stage model.

So decision-focused learning is what we have now deployed with ARMMAN. The point here is that newer research opportunities have come about through our work with ARMMAN in the social impact space. There are many other challenges. For example, with limited data we may not be able to make accurate predictions of transition probabilities. So we may only be able to say that the transition probabilities lie within some interval, for example 0.4-0.7 or 0.2-0.4. And this leads to a newer challenge: how do you do robust restless bandits? Essentially, we solve this problem by minimizing maximum regret, playing a zero-sum game against nature. So this is one interesting challenge. Another example shows that there are different applications of restless bandits.
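The minimax-regret idea just mentioned can be sketched on a toy version of the interval example: nature may set each arm's true call benefit anywhere in its interval, and we pick the arm whose worst-case regret is smallest. The intervals below echo the talk's numbers, but treating them directly as each arm's "lift from a call" is a simplification of the real formulation.

```python
import itertools
import numpy as np

# Uncertain bad->good improvement from a service call, per arm, known
# only up to an interval (the ranges echo the talk's example).
intervals = {"arm_A": (0.4, 0.7), "arm_B": (0.2, 0.4)}

def max_regret(choice, grid=5):
    """Worst-case regret of calling `choice` when nature can pick each
    arm's true lift anywhere in its interval (discretized on a grid)."""
    worst = 0.0
    for lifts in itertools.product(
            *(np.linspace(lo, hi, grid) for lo, hi in intervals.values())):
        truth = dict(zip(intervals, lifts))
        regret = max(truth.values()) - truth[choice]
        worst = max(worst, regret)
    return worst

# Minimize maximum regret: the robust choice against adversarial nature.
robust_choice = min(intervals, key=max_regret)
```

Here arm_A's interval dominates arm_B's, so calling arm_A can never be regretted, which is exactly what the minimax-regret criterion picks up.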

For example, tuberculosis prevention. Tuberculosis is a terrible disease, in India, for example, and across the globe. In India, half a million people die every year due to tuberculosis. Treating TB requires that patients take pills for six months, but that's very difficult, so patients drop out. Again, service call centers have health workers calling these patients to remind them to take their pills. And again, every day the health worker has to decide who to call. She may call the first three patients.

The next day she has to decide who to call next. This is a problem similar to the maternal and child care problem I mentioned, of health workers deciding who to call every week from a call center. And so again, this can be solved with restless bandits. There's an issue of partial observability that needs to be addressed, another interesting challenge that comes up in this space.

I'll end by saying that there are many interesting research challenges. Our own research is now trying to scale things up. For example, with ARMMAN it is 100,000 beneficiaries; we're going to go up to 1 million beneficiaries in 2023. We are talking to the Government of India about the Kilkari program, which is 10 times larger than ARMMAN's mMitra program, and we will hopefully be working with the government on Kilkari as well, again trying to reduce dropouts from their program of providing health information to mothers. We're also working with a nonprofit called Khushibaby to reduce malnutrition among babies.

And our IJCAI 22 paper discusses our work with a nonprofit called Helpmum in Nigeria. So this is just to show you that we are very excited about this work in AI for social impact. We are hopeful that the work we are doing will be of assistance to these frontline health workers who are trying to save lives. So thank you very much for listening, and I look forward to your questions. Thank you. Thank you so much, Milind, for such an inspiring talk. The solutions that you described are very innovative, provide a lot of insight, and provoke a lot of questions for us. So let us dive into the questions that we have

and you can answer them, we'll have our CTO Hiroaki comment, and then we can start the dialogue. Okay. So the first question is: what were the key challenges you faced in working with ARMMAN? Some of the technological challenges, but also infrastructure-wise, and maybe some of the relationship building with ARMMAN? How did you overcome those challenges? It's such an honor to be invited here; I'm very grateful to be in the company of Dr. Kitano and yourself. Thank you for inviting me and letting me share our work with you. As you anticipated, the challenges we face go right from the beginning, building the relationship in AI for social impact, all the way to deployment of the software. It's the entire data-to-deployment pipeline, and there are interesting challenges to overcome along the way. The work started with our first meeting with the founder of ARMMAN, Dr. Aparna Hegde, in a Starbucks in Mumbai in 2019.

And at that time, all we understood was that there is this nonprofit doing really inspiring work, and we want to assist them. But exactly what do we do, and how can AI help? This required a number of follow-on meetings to try to understand, among all of the different potential problems that ARMMAN is facing, where AI could play a role. And after a number of discussions, we arrived at the problem statement that I discussed in my talk. But then there are questions; I will get to them in detail a little later on.

Is this a problem where there is data? Is this a big enough problem that solving it is going to make an impact? Is this a problem where, once we build a solution, ARMMAN would be able to pilot it, experiment with it, deploy it, and scale it up? All of these issues have to be addressed. And in the meanwhile, we also have to understand whether we can actually solve the problem with AI given the amount of data we have available.

So from building up the relationship, to immersion in the domain, to deriving a predictive AI solution, then making recommendations, all the way to deployment, there are interesting challenges. And the main way we feel we have to work on these challenges is to do this work in deep partnership with the nonprofit. It cannot be that we develop a solution independently as AI researchers and then just provide an answer to ARMMAN; rather, throughout the process we go back to them again and again and iterate with them. We initially made some wrong choices and came up with the wrong solution, and I can discuss that more, but we realized it was the wrong solution and came up with a better one. So I would say the main way of overcoming this hurdle is partnership, partnership, partnership.

Those are the three main points of trying to overcome the challenges we face. Thank you very much for sharing your experience. I think it is really inspiring that you discussed with the founder of this organization from the beginning, identifying the problem and how to solve it, rather than saying, "Hey, we have technology, can we apply it for them?" That kind of approach may not work; there can be a significant mismatch between the problem and the technology that people might have. And I was particularly intrigued by the approach you took: identifying the issue as the dropout rate for the calls, then applying the technology, framing it as a bandit problem, and doing call pattern optimization, which actually improved how many people keep listening to the calls, and through that, the overall wellness of the people. That's really impressive: the solution, the approach, and the theoretical basis really match the reality. Rather than saying, let's get big data and deep learning and we're going to solve the problem, which sometimes works but is not necessarily applicable to the problem at hand.

And so I think this is really a beautiful example of a brilliant theoretical solution made into reality, with impact on society. I'd really like to congratulate you on this accomplishment. But I'd also like to have your insight: how did you get there? Of course, you had a series of very serious discussions with this organization, identifying the real problem and arriving at the proper solution. But not many people can do that. It would be nice if you could share your experience and thoughts with us: the mindset, how you got there, how you actually came up with the idea.

Okay, this is a bandit problem, so we're going to do call pattern optimization. I think that was a really brilliant move. I'm extremely grateful for your very kind remarks; I'm very honored, Hiroaki, by those words. I should point out that this is not a solution we arrived at at the very beginning.

After initial discussions, we did identify the dropout rate as a potential problem to focus on. And the initial solution was indeed to ask: can we just predict who is at high risk of dropping off? So we made those predictions. And then it turned out we could only say that 50% of the people are at high risk of dropping off.

That means, in a group of 100,000, we would say 50,000 people are going to drop off or are at risk of dropping off. That's useless to ARMMAN. Well, it's somewhat useful, but really it doesn't prioritize the 1,000 people, and it doesn't take into account the fact that the status of these mothers changes over time. So the initial solution we offered was indeed a predictive solution, and that was a wrong step. But we created that solution, showed it to ARMMAN, and realized that it doesn't work.

Then we took a step back and said, okay, we have to refine the solution. We initially thought we could make predictions that would narrow things down to a very small number of people at high risk of dropping off, but only after working with the data did we realize that it's a much larger number. But what you said, Hiroaki, is really crucial. This is use-inspired research. This is not something where we as technologists can go in with a hammer and say everything we look at is a nail; rather, we start from the problem and then understand what is the right AI solution for it.

And here, it was only after these initial false steps and going back that we realized this is a bandit problem that can be solved using restless bandits. In general, in my experience with many of these problems, it is indeed the case that sometimes we arrive at a solution that may be mathematically brilliant, but it doesn't quite fit the problem at hand. And so we have to be very careful to go along with our partners and make sure they are satisfied with our solution. Thank you very much.

So you mentioned use-inspired research, starting from the problem. And when you think about which problems to work on, you mentioned the availability of data and the impact that solving the problems would have on society. So my next question is, how do you actually decide or choose the social problems that you will work on? What are some of the factors that influence your decision? There are many aspects to this answer, and it's such an important question, figuring out which problem to focus on.

Having grown up in Mumbai, in India, I'm inspired by problems that I saw when I was growing up. These are challenges related to the marginalized communities I saw there. And public health is clearly a big challenge that has motivated me.

And as we look at these problems of public health, as AI researchers we have to ask whether the problem is big enough that it is going to have some impact, especially the part that can be solved with AI. There's actually a list of seven questions that comes out of an AI institute called Wadhwani AI, which is in Mumbai. I'm repeating some of their questions, and I want to give them credit for enumerating them. These are essentially the questions I myself ask when I am trying to work with a partner. Is this a big enough problem that it will have an impact? Is there an AI solution to the problem? Because sometimes I'll go to try to solve a problem, and the partners on the other side will say that they just need better equipment; they don't need AI, they just need better hardware, for example. In the case of ARMMAN that was not so, of course.

They did want an AI solution that would solve their problem. Is there data for the problem, and if there is, can we have access to it? When we produce a solution, is it something that will be piloted by the organization? Because they may not even be able to pilot the solution. Can we do a detailed experiment with the solution we have, so that we can provide evidence that our AI-based solution works? Once we provide all that evidence, can the solution be deployed and scaled up? Is this an ethical solution? And of course, as AI researchers we want to know, in all of this, is there an interesting AI problem? Because otherwise it could be a very simple solution, perhaps more suited to a class project or a student exercise. So these are the types of questions we ask.

And sometimes we find that an organization will come and start to work with us. They're very excited, we are very excited, and then it turns out they don't own the data; the data belongs to somebody else, and they cannot give it to us, for example. So there are a lot of challenges to overcome in each of the questions I've listed, and we want to ask those questions upfront and, based on all of that, decide whether this is the right organization and problem to work on. Thank you very much, Milind. I think that is very insightful, and I think the criteria you mention could be quite universal.

And of course, when we talk about sustainability issues or social good, there are many important problems we want to solve. But at the same time, AI cannot contribute to every problem. Sometimes, as you say, you just need the hardware, or you just need money, or you just need the associated infrastructure; AI is not involved.

But I think I believe that AI can contribute a lot. But of course, like their problem, we can actually use AI or we cannot use AI. But I think that this choice is very interesting. And I also like very fortunate that you happen to meet the ARMMAN founder, I think there's certainly a part of the factor of the luck. But at the same time, I think you have to be prepared to just take advantage of that to make something happen. Yes, I should point out there have been other organizations, other problems that I have worked on which are equally crucial and important. One example is there was an organization that was working with Lost and Found children. I mean, this is such an emotional issue,

and I had really wanted to work with them to see what AI could do, but there was no data. They didn't have any data for us to work on. And so, as you point out, Hiroaki, I really appreciate the thought you put forth there. Indeed, for some problems AI is not the right solution, and for others it is. But even if it is the right solution, there are further hurdles, like getting the right kind of data and the organization being able to supply that data to us, and then all the other problems that I've mentioned, such as being able to deploy the solution as well.

Thank you, Milind, for a very insightful set of questions to ask before we take on these social problems. Now I'd like to ask you, based on your experience, could you touch on some general principles in applying AI to such social problems? And on the other hand, what are some of the pitfalls? You mentioned some already, but what are some of the pitfalls that we should avoid? Some of the pitfalls that we should avoid include problems related to ethical deployment, responsible deployment, and challenges that may come up, because there may be crucial side effects that we may not know about.

In our work with nonprofits, we always, of course, want to make sure that these are nonprofits that are working ethically with the local communities. And certainly ARMMAN is an organization that has done impressive work, very deeply rooted in the local communities. But sometimes there are challenges that we observe. Some years ago, for example, we were working on problems related to wildlife conservation. If you're using drones for wildlife conservation, are there problems in the use of drones that may come up? Are the drones being used for some other purposes than the ones we are working with the organization for? So these are some of the questions that may come up about ethical, responsible deployment that we may want to be very careful with.

I think, in terms of other pitfalls that may come up, there can be a sense that we as AI researchers want to show off our brilliant AI solutions, and we want to publish papers. But that solution may not actually bring as much benefit to the organization. There is a cost to this AI solution to be deployed, and there's a benefit.

And this cost and benefit needs to be properly balanced. Perhaps our egos shouldn't get in the way by making us show off the AI solution even at the cost of a loss to the organization deploying it. Several problems may arise: if we say that a certain AI solution is working brilliantly but it is not, it may mislead others into thinking that the solution works and lead other organizations down the wrong path.

So these are some of the challenges that I can immediately think of in terms of using AI solutions for social problems. And I'm sure there are others as well, but let's start there. Thank you very much. It's a great point.

We talk about responsible AI, but with your point about ethical deployment, I think this adds a new dimension, because it's not just the AI's functionality by itself but how the organization uses the AI in deployment. At the same time, there are the cost issues.

Of course, we researchers tend to wish to claim that our technology is useful. But is it really useful? Are we actually imposing extra costs on the organization? The organization might say, well, this is great, but in the long run it doesn't work, and then they will discard it. And then misinformation gets propagated and people repeat the same mistake, and in the long run that's not sustainable. So I think this is a really important issue. It means that you really have to work with the organization and have good insight into the issues, so that you can identify the potential issues in deployment for a specific problem, as well as the nature of the organization, whether the organization is trustworthy or not.

That whole thing has to come into play, and that's why I think you have to spend a lot of time discussing with the organization, identifying the problem, building up trust, getting to a mutual understanding of where we're going to go, and solving the problem together rather than just transferring technology. So, am I correct in understanding that it's a long journey, but that's probably the only way? I appreciate that analysis. Yes, it is indeed a long journey. And it implies there are a few things that we as AI researchers need to do to support this long journey.

One of them concerns the way many AI conferences and publication venues work; being able to publish is the currency for AI research. They are focused very much on methodological improvements showing the brilliance of the AI solution, as opposed to understanding that this long journey brings about its own challenges, which are not necessarily all methodological or algorithmic challenges. They include evaluating the costs and benefits of an AI solution, and understanding experimentally whether an AI solution worked or not. All of these are important scientific contributions in some way, but we need to value them.

And unless we encourage AI researchers to publish these works and share their findings, it is going to be difficult for others to understand that these are important challenges, and AI researchers will not engage in them. So it is our responsibility to provide opportunities and platforms for AI researchers to share this kind of work, to recognize that it is actually an important scientific contribution, and to encourage it so that this long journey can be undertaken. It also implies that currently there's a certain gap, because these are often applications built for nonprofits; this is work for marginalized communities, for wildlife conservation organizations, for nonprofits working in difficult circumstances. We certainly wouldn't want them to have to hire software engineers to maintain the software, the AI, after we as AI researchers are done with it. So currently there's a bit of a gap in how these AI solutions will be maintained past the point when the AI researchers have done the research, shown that it works, deployed it, and gone home to a new problem. Who's going to maintain that software if the organization is not able to pay for it? This is going to be an important problem in this field that needs further thought and support from the AI community.

Yeah, again, Milind, I think that's a good point. Kick-starting projects is great, and we get to deliver the result, but can it be sustainable? Who will maintain it? You can't just throw it in and expect it to stay there; someone has to maintain and continuously improve it. Do you have insight on how that can be approached? For example, through corporate social responsibility, NGOs, or government sectors? But at the same time, it has to be done by engineers with at least a certain level of understanding.

You can't really ask a random person to take care of it. Hiroaki, I look to you for advice on this matter. I mean, you were brilliant in starting the RoboCup initiative and bringing together the whole world to try to build robots that play World Cup-level soccer.

And so we need an effort of that type: a RoboCup for AI for social impact, whereby organizations can come together and form partnerships that would support these long-term deployments. I certainly feel there are a lot of talented AI researchers and software engineers who want to lend their talent to these social causes, but they cannot do it full time. So maybe they have some time they can lend, and there are organizations which need this talent, and maybe there's a matchmaking service available on a global, or at least regional, scale whereby some of this software can be supported.

Those are some thoughts I have. But really, I feel that it needs a RoboCup 2050 kind of effort in the AI for social impact space to make this happen, and it's an important problem. I should also mention that even within this space of software maintenance, there are lots of research problems as well. It's not simply a matter of maintaining the software.

Because as these AI systems learn, for example with machine learning continuously going on, they may learn newer things. There's a lot we don't understand about how drift might happen, what kind of changes may occur in their behavior, and how to deal with the fact that because of these changes the AI may start doing the wrong thing. How do you overcome some of those limitations? So it's not as though we've completely solved the problem of long-term deployment and it's only a matter of maintenance; there are some interesting challenges still open there. Yeah, I think your point is exactly right. I mean,

RoboCup turned out to be a research-oriented initiative, but it also expanded into education and disaster rescue, and became a global phenomenon. Now it runs on its own, with a stable organization globally. I think we can learn the lessons from that and apply them to this specific problem, and Sony Group Corporation would wish to be a part of such an initiative.

I think we can continue this dialogue, and I hope we can find the solutions and take it further together. Absolutely, I will be delighted to participate in such an initiative. Thank you very much. On that note, I'd like to ask Milind: do you have any advice for aspiring researchers, engineers, and students in AI who want to work on such social problems by applying their AI expertise? What kind of advice would you give them? To engage in AI for social impact, first, you really have to be interested in social impact issues. And not everybody necessarily needs to be, because some people are mostly motivated by trying to improve the technology, and that's all wonderful. So if this sort of social impact issue is indeed important to you, then the next step is to find the right kind of partners, and there are many avenues available to find these partnerships.

I should point out, for example, that at Google we've initiated this matchmaking of AI researchers to nonprofits. It's an annual program: AI researchers can apply, and nonprofits can apply. Then we do matchmaking where each AI researcher meets with three nonprofits and each nonprofit meets with three AI researchers, a sort of speed dating, and on the basis of a match they'll write a proposal and then we'll fund it. So that's one way in which junior AI researchers may be able to find nonprofit partners, but there are others as well. At Harvard, for example, we have invited local-area nonprofits to our Center for Research on Computation and Society, and then used those meetings between local nonprofits and local researchers to do matchmaking and find interesting problems to work on.

We've also sometimes invited nonprofits into our classrooms to find problems. So there are many avenues to find the right partners. And then I would advise them to go to the field, to the place where the actual work is going on. This work cannot be done by sitting in the lab and thinking of solutions. There are many instances where going into the field will reveal fundamental constraints that we may get completely wrong if we just derive the solution in the lab.

Immerse yourself in the domain and understand it; this will also build trust. The next piece of advice is really patience. Sometimes I'll have students who say, well, I had my first meeting with a nonprofit, and they didn't even give me an AI problem to work on.

And I have to advise them that it's not as though the nonprofit will come to the meeting and start saying, "sum over i of x_i, that's the problem you're going to solve." Initially, they don't know what AI is capable of, or what kinds of problems are even solvable. Or sometimes they may have the opinion that AI can solve everything, and they may come and say, well, AI can solve the education problem, or something like that.

And so you have to have this dialogue to arrive at the right kind of solution, and it takes some time. Then, after developing an initial solution, iterate with the nonprofit; sometimes you may get it wrong. I think we have to have the patience and the respect towards our partners to understand that, yes, we may get it quite wrong as AI researchers, and they may have a fundamental point that the solution needs to be changed in some different way.

Throughout, I think it's an issue of having patience, of deep partnership, of being immersed in the domain, and of being deeply motivated by causes of social impact, as much as we are interested in writing our own papers and so forth. It should not be purely driven by "we've got to get this paper done," but very much by "we want to really achieve this social impact," and along the way comes AI research as a side product. In essence, we are saying that AI research and social impact are both first-class citizens of this universe, both need to go hand in hand, and it cannot be the case that AI research is the top priority and social impact comes second.

These are some of the things that I may humbly offer as advice for junior researchers who are starting in this space. Thank you so much. I think that's a great point, and I think this applies not only to AI but to the broad range of technologies we apply to problems to be solved. One of the researchers at Sony, Ken Endo, used to be in robotics research, creating a biped robot. He then transformed himself into a prosthetic device researcher. He got his degree at the MIT Media Lab, and one of the projects he did was going to Jaipur in India, working with Jaipurfoot, trying to create a cheap prosthetic device with a little bit of 3D-printing technology, so as to make it much easier for people with a prosthetic device to walk.

But of course, the cost has to be very low, like $10 or so, while still offering much better flexibility than the conventional prosthetic devices available in that region at the time. I saw one of his videos, a very emotional one, where a young lady had lost her leg and usually moved around by jumping on one leg. The moment she got another leg, the prosthetic device, and started walking on two legs was one of the most emotional scenes of the video, showing how technology can contribute. At the same time, in Ken's case, he traveled to Jaipur many times to understand the situation, and then came up with a solution.

He went back and forth between India and MIT, and now he's researching at Sony CSL, at the high end, working on robotic prosthetic devices. He also makes blades for athletes aiming to win at the Paralympics, and he contributed to the team that won the bronze medal at the Tokyo Paralympics. So this is really a mindset issue in addition to a technological issue: how you want to contribute to society and to people in need, so that you can set the high-end thing aside. We have some wisdom, some know-how, about how to get the essence of the technology into a very, very affordable solution that is appropriate in the region. I think what you have described has deep implications that every one of us should remember when we try to apply technology to the real world, particularly for social good. What you described is absolutely a beautiful example of how technology has impacted people in Jaipur, and thank you for all the work that your colleagues are doing towards that end.

And one of the main points you made, which I appreciate very much, is that at least when it comes to social impact for more marginalized communities, the settings are often, in AI terms, ones of low data, low compute, and low resources. We have to work with those constraints, because this is not a situation where we are working with organizations that have tons of data. We have to have data, but it may not be plentiful, and often we have to find ways to work within those limitations. What kind of extra data can be collected on a selective basis becomes an important problem in itself. Low amounts of compute, and also low amounts of resources, because we can't just say, hey, go out and deploy sensors everywhere, and then we can solve the problem.

Or, why don't you give high-end smartphones to everybody, and then we can solve the problem? As you described with the foot that was built, it's low-cost technology that makes a big impact. I think that's a beautiful example, and that's the type of thing whereby, through this partnership, we can understand the constraints we are operating under and build a solution that fits within those constraints. I often tell my PhD students that the PhD is often actually in the innovation that comes about because these constraints are so important: they constrain the kind of AI solution that you will build, and that's often where the PhD comes in. Normally you may not think of such a solution, but now that there are these real-world constraints, a completely new solution has to be developed. So it's a win-win for the AI researcher as well.

Thank you very much. Now I would like to pose my final question, this time to both Milind and Hiroaki. To Milind: you touched on the role private enterprise can play in advancing AI for social good. What are your thoughts on that? And to Hiroaki: how do you think Sony can contribute to promoting and advancing AI for social good? As I mentioned in my earlier remarks, I've been extremely impressed by what Hiroaki has done with RoboCup, really starting a worldwide movement for advancing robotics and AI, one that has spread into disaster response, education, and other areas.

We need something very similar: the RoboCup of AI for social impact, a global movement. And I certainly feel that private enterprise will have a big role to play, just as Sony did with the RoboCup effort. This is not only in terms of encouraging people to contribute to AI for social impact, but, as we've discussed earlier, in providing some type of infrastructure so that once the solutions are developed they can be maintained and supported via AI talent, software engineering, and the other resources needed, so that the solutions don't die off once the interesting research is done. We also need help in terms of promoting this research by encouraging venues. The RoboCup global competition is an important example whereby the research in RoboCup was highlighted.

Maybe not necessarily a competition, but a collaboration for AI research: a global stage where people can come and feel that their work is appreciated. That may also encourage further AI research. So, from helping build partnerships with nonprofits, to helping encourage research, to helping sustain the research, in all of these avenues private enterprise can really play an important role. And thank you for having invited me, and thank you for all of these wonderful questions; I really appreciate it. Thank you very much, Milind, once again, and thank you, Allan, for the question.

And I think this is really an important question: as a good corporate citizen, how can the Sony Group contribute to AI for social good? And not just AI for social good, but how can we help people in need, social good in general, using technology broadly, not necessarily AI. Because sometimes AI can be useful for a solution, and sometimes other technology could be useful as well. I think the bottom line is that we cannot do it alone. We need partnership: partnership with corporate sectors, with non-governmental organizations,

with people in need, like people in the villages and people in the cities, and also with governments as well. But I think we need to extend the creator network, the partnership. There are some problems Google and Sony can work on together, solving specific problems involving universities like Harvard and other places as well.

And I think the really important issue is to identify the problem, identify the people in need, and work out how we are going to reach a solution which is sustainable. From there, we can take it to the next stage as well. One of the lessons we learned with RoboCup is that if you have the right vision and genuine management, and a mechanism for the organization to grow by itself, people will come. Passion and vision actually make things into reality; this is something people drive. But of course, something like RoboCup, and Milind, thank you very much for mentioning RoboCup many times, cannot grow and succeed as it has today without the help of many companies, very passionate researchers, and multiple government sectors supporting RoboCup activities. We learned that lesson well. There are successful parts in RoboCup,

and there are things we have to improve. At the same time, the good news is that the AI and robotics community has produced many RoboCuppers. We are RoboCuppers; we have Peter Stone, the current president of RoboCup. And at Google, Sony, and other high-tech companies there are many RoboCuppers, so they understand from experience how things can be created. I think that's a really important part. I think the partnership

is what matters, and shared visions and shared solutions are a very important part. The second point is that these real-world problems sometimes look messy, but you can come up with extremely interesting scientific outcomes which can be applied broadly. I was stunned looking at the solution that Milind offered for this bandit problem, the optimization of which specific calls to make.

It started as a problem for a specific maternal-health issue, trying to minimize the dropout ratio: rather than random calls, there is a specific call pattern. And he has already applied it to other issues, like tuberculosis compliance as well. Compliance is a big issue: we have a drug, or a specific measure, through which people can improve their health, but on the compliance side they drop out. If we can apply this technology or approach to the broader domains where compliance and dropout are the issue, other people can benefit. It's not only for social issues in developing countries.

Even in industrialized countries we have the issue of obesity, and other issues where patients need consistent compliance with exercise or drug intake, and the dropout ratio is rather high. So it can be applied to industrialized countries as well. I think the solution that Milind came up with is something a lot of people want. He has shown solid mathematics, solid theory, and practical application; he delivered, and the outcome is very clear.
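The call-scheduling setup discussed here can be sketched, very loosely, in code. The snippet below is an illustrative toy, not ARMMAN's actual system: the names (`plan_calls`, `cohort`) and all probabilities are invented for the example, and a simple myopic "lift" heuristic stands in for the Whittle-index computation used in the real restless-bandit research.

```python
def plan_calls(arms, k):
    """Pick the k beneficiaries whose engagement benefits most from a call.

    Each arm is a dict with transition probabilities for a two-state
    (engaged / dropped-out) model:
      p_act   - P(stay engaged next week | engaged, called)
      p_noact - P(stay engaged next week | engaged, not called)
    The myopic index below (the immediate lift from acting) is a
    simplified stand-in for a true Whittle index.
    """
    # Rank arms by how much a call improves their chance of staying engaged.
    ranked = sorted(range(len(arms)),
                    key=lambda i: arms[i]["p_act"] - arms[i]["p_noact"],
                    reverse=True)
    return ranked[:k]

# Toy cohort: three beneficiaries, budget for one live call this week.
cohort = [
    {"p_act": 0.90, "p_noact": 0.85},  # small lift from a call
    {"p_act": 0.80, "p_noact": 0.40},  # large lift: prioritize this one
    {"p_act": 0.95, "p_noact": 0.94},  # nearly no lift
]
print(plan_calls(cohort, k=1))  # -> [1]
```

In a real deployment the transition probabilities would be estimated from call and engagement logs rather than assumed, and the index would account for long-term effects, but the budgeted "pick the k arms with the highest index each round" structure is the same.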

I think he really reached a powerful solution; I was stunned looking at it. For this kind of problem, sometimes people think, oh, this is so messy, it won't yield anything nice in theory that you can publish in a good journal. That's not necessarily the case. This is a very good example showing that real-world problems which appear messy, if you think them through, have a truth in there.

You can come up with a nice, standing theory with broad applicability and big impact. That brings us yet another long journey, I guess, but still, it really is a high-impact outcome. So I think partnership, together with the capability to really dig deep into both the reality and the theory, will be the holy grail of research and of how it can be applied to real-world problems. Agreed, wonderful remarks. I'm happy to end it here if you want, but also happy to continue the conversation, whichever way you prefer.

Okay, we'll end it here. Milind, thank you very much for joining us. This has been a wonderful conversation; we learned a lot, and we witnessed real depth and compassion. I think this is a great endeavor that you have accomplished with this program, and I'm sure you'll have many other such programs now and in the future. I hope that we can work together in some sense and contribute together to making the world a better place.

I'm extremely grateful for your kind remarks, and thank you for inviting me. I would love to continue this conversation and you mentioned you are sometimes in Bangalore. Hopefully, our times in Bangalore will overlap and we can meet in person there. Thank you very much. Let's do that.

So this brings us to the end of today's discussion on AI for social impact. Thank you, Professor Milind Tambe and Dr. Hiroaki Kitano, for such an inspiring and thought-provoking discussion, as well as practical advice based on your deep and extensive experience in solving social problems in the field. We really look forward to working with you in the future. Once again, thank you so much. Thank you. Thank you very much, Milind.

2022-12-08 21:10
