Closing Keynote: Human-Centered AI for sustainability: Case Social Robots
Welcome to this last session of the conference, where we end all our discussions and all our content with a closing keynote, as tradition indicates. Today, for the closing keynote, we have our esteemed guest Kaisa Väänänen from Tampere University, and she will give a keynote about AI and sustainability. You might know Kaisa, as she is a distinguished member of our HCI community. Just to remind you of who she is:
She did her PhD in Darmstadt, then she was at Nokia, and now she's a professor at Tampere University, which used to be Tampere University of Technology. But now they merged, and they have a really cool brand and a really cool building that, I believe, you can see in the background of Kaisa's image. Most of us will know Kaisa for her work on user experience evaluation, but you might also know that recently she has been discussing a lot of questions related to AI, and specifically how it affects design processes and the way we create interactive systems. And without further ado:
There you go, Kaisa Väänänen.

Okay, thank you so much, Pawel, for the nice introduction. It's really, really nice to be here, even though I would have really liked to be there with you physically. I'm here in Tampere, but I'm happy that these systems work well these days and I'm able to give this talk to you from a distance.
And of course, I'm happy to take some questions at the end; let's hope that there will be some time for that. So indeed, my topic today is Human-Centered AI for sustainability, and more specifically, I will be talking about some studies that we have done with social robots.
So that will be the application domain, the scope of my talk today. Okay, let's move on. So the outline of my talk is first a little bit about sustainability.
I guess we don't need to be reminded of its importance, but nevertheless, some words about that. Then, what is human-centered AI? This is not a technical AI talk, in case you are wondering about that; this is really the human-centered perspective on AI. Then indeed, about social robots and how they can be designed for societally important causes, and then finally wrapping up my talk. Okay, so, interaction design and interaction research for sustainability: of course, there are great opportunities for us as, broadly speaking, the interaction design community.
I don't know how everybody who's listening to the talk identifies themselves, but if I think of ISS as a community, and of course HCI more broadly, this is something that we, as a research community and a practitioner community, can work for. And when we talk about sustainability, we don't only mean environmental sustainability, which is maybe the thing that first comes to people's minds when you use the term, but very much also economic sustainability, and also very much emphasizing social sustainability as well: inclusion of people, and really taking the whole global community into account.
And the ways in which sustainability can appear in our work are, first of all, that we, as researchers and practitioners, can really take the users' values into account, while also always remembering that we as designers are responsible for our choices and our designs. So this is the human-centered design viewpoint. And then, of course, there is already
a wealth of applications that help people make sustainable choices, and digital systems that advance inclusion; some of those I will be talking about today.
There are, of course, ways to consider how we, as humankind, can optimize the resources that we have on this globe. And we can also think of, broadly speaking, systemic changes towards sustainability, such as reducing transportation or using different types of vehicles for our transportation, for example. So if you look at this map, which I'm quite sure we are all aware of:
There are probably things we can do in this community in all of these areas. But if we just point out some of these as more prominent: first of all, equality, so how can we advance equality with digital technologies; education and well-being; good health, health systems, and other types of health applications; industrial innovation, which can certainly be advanced with interactive technologies; sustainable cities, smart cities and communities; responsible consumption and production; and also, obviously, climate action. These areas, I believe, are especially prominent for the sort of output from our community, but of course all the others can probably benefit as well.
But these are maybe the ones that at least I see as the biggest opportunity areas. So if we now move on to this idea of what human-centered AI is: for the purposes of this presentation, I won't start presenting or arguing what exactly AI is, because we could spend the whole keynote time discussing that. But for the purposes of this presentation, I would say that the important elements of AI are that there is some level of autonomy; it can be proactive in its actions with people; it can be adaptive;
it learns while it interacts with people, so it is in some way intelligent. That's the level of detail I will go into about AI in this discussion. So this gives some background to what characteristics of AI are important when we think about human-centered AI. And what, then, is human-centered AI? In short, human-centered AI means AI solutions that focus on human values and needs, understanding the agency that AI takes when interacting with people and society, and the context of use. More specifically, human-centered AI tries to extend rather than replace human intelligence.
It helps people to understand the AI's actions, so explainability is, of course, a very prominent topic in this domain. There should be some kind of good, positive, or desirable user experience coming out of people working with human-centered AI. AI should be ethical and safe in all its decisions and actions, and we should also think about the stakeholder roles more broadly.
So, not just the actual user. For example, in the example that we have here in the bottom right corner,
the autonomous car, there are other stakeholders than just the user: there are the pedestrians, there are the people who are in the car, there are people who are in the other cars, and so on. So there are lots of different stakeholders in the whole system.
So we always need to think about those different roles when we are designing human-centered AI. Here are a few examples of what kinds of user experiences people expect from AI. This is a study we did a couple of years ago, and very briefly, I will just give some examples, a kind of summary, of the positive user experiences that people had had with AI. This was really based on their own definition of AI,
so what they considered to be AI; it was very much a user perspective on this topic. Sense of control, and trust and reliability, were clearly the most prominent ones that people had already had, and they also expected to have these kinds of positive experiences. But of course also other things, such as relief, a feeling of safety,
and so on. On the negative side, people have had feelings that something dangerous might be happening. They might have felt disappointed by the outcome of an AI system, for example when it didn't work too well. Irritation is
maybe a somewhat milder experience or feeling, but nevertheless important to acknowledge. And in some cases they felt that AI should not be too humanized: it should not feel too much like another person, but rather something that is clearly seen as a machine or technology. So these are the kinds of experiences people have had, and what they might expect.
So, in terms of human-centered AI, one aspect that is really important is human-AI collaboration, because one of the key criteria there is that the user somehow keeps control, or is aware of what the AI is doing and how it can support the user's own actions. So there should be some kind of shared goals and agency.
Somehow the human should be in the loop; this is another concept that has been broadly discussed in the AI and human-centered AI community. So there must be some kind of interactivity involved. Of course, there are situations,
we can easily argue, where the AI just works in the background and the user doesn't interfere. But nevertheless, there should be an opportunity for the user to somehow step into control when necessary, or when they want to have that kind of control. From the user experience side, feelings of trust, companionship, and transparency are very important when you think about collaboration. You could even compare this to a human relationship, where you really want to have a good companion
that is supportive of you and gives you some information about what they are thinking, what they are about to do, what they have done, and why. And emotional appropriateness. These kinds of things could be expected from collaborative AI systems. Okay, so this gave an overview of human-centered AI and the sustainability viewpoint.
But now I move on to maybe the main part of my talk: social robots and how they can be designed for and used for societal good. Before going to the actual cases that I will present from our own work, just a brief recap of what a social robot is. This might be well known to most if not all of you, but nevertheless, it's maybe good to recap that social robots are embodied agents that can function in various areas of society:
at people's homes, in public spaces, or in workplaces. And after all, they are machines, no matter how intelligent they might be or become in the future; they are still machines. To be social, they have to interact and communicate somehow with humans, and to some extent obey human behavioral norms. They probably don't do it to the full extent, but there must be something that resembles human communication for them to be called social.
And indeed, they are embodied and expressive: they have expressions that to some extent can resemble human expressions, but not necessarily completely.
There can be some emotional responses by the robots, and robots can also recognize people's emotions, which then helps to build good collaboration and good interaction between such a robot and a person. Okay, so here in this picture we can see an elderly person interacting with a social robot. Here's another example, from a factory context.
And here's a third one, which most of you probably recognize as the robot Sophia, which was released, I think, two years ago. I'm showing this here because, of course, we can all imagine situations where human-robot interaction might not be completely optimal. For example, here we can see an expression of the uncanny valley: the robot tries to be very humanized, but it doesn't quite succeed,
so it kind of becomes creepy. I won't be talking too much about those things in this talk, but nevertheless, it's good to remind ourselves that they don't always succeed. So now I will briefly go through three studies that we have done in my group of Human-Centered Technology at Tampere University. The first one is Elias robot, a NAO-based robot, which was used as a teacher's assistant in primary schools in Tampere. We did a long-term field study.
It was four months long, which is usually considered quite long in these human-technology interaction studies. My colleagues
were running this study, and they did this work in very close collaboration with the teachers of those schools, who actually had a very strong role in it. Again, if we think of the stakeholder viewpoint, it is of course very important to have both the teachers and the parents on board, because it's not just about kids interacting with the robot; there is the whole stakeholder community behind that. The focus of this study was really on user experience and user expectations. The domain of the teaching was language learning, and the kids were nine years old.
And we used mixed methods: there were interviews, observations, questionnaires, and diaries. So, just a few highlights of the findings. Elias was really taken as a member of the group and of the whole school: it was called a kind of "popular dude" or a mascot of the whole school. The kids were mostly very attached to Elias as part of their classroom environment, and of course the physical embodiment of the Elias
robot was very important here. Kids even expressed empathy and tenderness toward Elias; of course they knew it was a machine, but nevertheless they displayed these kinds of emotional responses. Further on the positive experiences with Elias: the kids, as well as the teachers, were quite enthusiastic. Of course, it must be said that the teachers who were involved were already quite willing and ready to interact with the Elias robot, so if you were to force the robot on all teachers, you might not get all-positive responses. But in this case, it was really a positive response from the teachers' side as well. Kids from the other classes even expressed a lot of interest and were even a little bit envious that they did not have Elias in their class, because, as I said, Elias was used in the classes of the nine-year-olds.
There were also some negative experiences, but these were quite rare. Of course, it needs to be acknowledged that sometimes these machines don't work, and then it's frustrating and even disappointing. And in some very rare cases there were sudden movements that the robot might make, when the kids might be just momentarily a little bit scared, but it was nothing long-term in this case.
And then the motivations, because of course we have to understand what the motivations are behind using these kinds of technologies. In this case, there were many things that advanced this motivation.
For example, as mentioned earlier, the teachers' innovative role was really important: their enthusiasm certainly also spread to the pupils, so that they were positive when Elias came to the class. More specifically, there were things like the "candy eyes", because Elias has these very friendly eyes, and there is a sound that comes out when Elias blinks or flashes its eyes; and verbal feedback, of course, and gestures and movements. Kids were mimicking its movements, so they kind of mirrored
some of its actions. And there were some entertaining elements, such as telling jokes and singing songs. All in all, Elias in some ways felt alive to the kids, and this was indeed displayed as a positive learning atmosphere in these classes
That utilized Elias. Okay, so that was one example so school. So people, people are those pupils were, at least, at least in this language. Learning case. They were quite quite enthusiastic about having a robot.
there kind of teaching them together with the teacher. So, of course, the teacher has a significant role, another area. I'm just looking how we're doing with the time. We are like, is we're still fine.
So social robots have also been used in autism rehabilitation, and I will show... actually, these videos are not from our own work, so I will just show very brief glimpses, just so that you see that social robots have been used in this learning of social skills. I will try to see; I hope this works. Meet Adrian, age 6 (This is some great work) and his robot friend, Kiwi (You are doing an amazing job). On this weekend morning, they've settled in to play some games along with big brother
Darren. Adrian is on the autism spectrum, and Kiwi is not a toy; it's a socially assistive robot. You are doing really great.
Keep up the good work. Socially assistive robotics is a new field that we actually founded about 15 years ago, based on the ability of robots to help people. Many children on the autism spectrum can respond positively to robots, and in fact can be motivated and learn social skills. Okay.
So in this case, indeed, as the person explaining mentioned, for this specific group of people with autism spectrum disorder, robots can really be a better way to learn certain skills than learning with human beings. So let me just briefly show one short clip from the other video.
Milo is a humanoid robot that is specifically designed to work with students with autism. He can make a sad face, he can make a grimace, he can make a surprised face. He does that
so that the student can look and be able to imitate. That was so much fun, good job, I like that. So indeed, what we as designers might think of as deficiencies in these kinds of devices (for example, that there is only a limited number of facial expressions) can actually be a benefit for this specific group of people. So there are some maybe surprising benefits of social robots that we might not necessarily think of. But let me briefly explain
the study that we did at our university. This was actually a very good Master's thesis project; sometimes we have those excellent students who do these really good pieces of research. This was done in a local hospital where young people go through a rehabilitation period; these were teenagers. The picture here is actually not from the study:
we were not allowed to take pictures of our teenagers, so I just took another picture from another source, which you can see there. But nevertheless, this was to study whether this NAO robot would actually help in a physical exercise group at the rehabilitation center. The robot would work, again, as an assistant for the exercise coach, who would be the real person there. The robot would do the initial warm-up and the relaxation at the end, but in the middle there would be the real person, the actual physical trainer, who would do the actual exercise. The personality of the robot was designed to be friendly, helpful, and positive.
Here are some details of this study. We collected data through observations, questionnaires, and interviews, both from the actual teenagers, from whom it turned out to be quite challenging to get any very detailed qualitative data, and also by interviewing the personal assistants as well as the physical educator. Then some conclusions, or sort of design implications, were drawn from this study, which are summarized on this slide. So again, as mentioned earlier, these social robots:
Their very clear and even somewhat limited communication manners can be really helpful for people with autism spectrum disorders, as well as the constancy: the same things happen in the same way. This can be really important in this situation. And of course, the robot has patience; it never runs out of patience, unless it runs out of battery, of course, but otherwise it has a kind of limitless
patience when it interacts with people. Robots can also be personalized to individual users' needs. And a robot can, in a way, act as a middleman between the user and another person, because sometimes it may be easier to speak to a robot than to a person, especially if you have the types of limitations or characteristics that come, for example, from the autism spectrum.
So that was the second study, again showing that maybe robots can also be used for this kind of inclusion and for improving people's well-being. Then the third one (looking at my time... I guess we still have some time left) is this civic robot study, a four-year study that we have running, funded
by the Academy of Finland, where we study social robots that can help, motivate, and persuade young people to be more societally active; I will give some examples of that in case it sounds very complex. And furthermore, we took sustainable development as the subject domain here: how could we help young people become more societally active in sustainable development by offering them, and designing together with them, these social robots? The project started already two years ago, and of course we have had some challenges due to the current pandemic situation.
We were running some workshops in February and the first week of March 2020, and then the situation changed. After that we have actually been running online workshops as well, so luckily we didn't have to completely stop working. In these workshops, using certain stimulus materials, we asked the young people to ideate meaningful purposes and contexts of use for what we call civic robots: social robots that somehow help with civic activities, societal participation, and so on. They were also designing, even designing the appearance. Of course, as you can see here, the prototypes, or the mock-ups of their ideas, are very initial, but nevertheless they have given us
lots of interesting information about what kinds of expectations young people would have when working with these kinds of robots. And here is just a headline summary of what kinds of purposes these civic robots could have. They could help young people acquire societal skills, for example economic skills or even language skills; let's say you are moving to another country, you might be an immigrant, and then this kind of robot could be helpful in that. Then societal participation:
voting, for example, or collecting ideas like in the cartoon story here. The cartoons are in Finnish; I did not have a chance to translate them into English or some other language more understandable than Finnish.
But still: here the robot is going to the school area and asking the young people to come and give some... what it says here is "climate suggestions".
So it's collecting climate suggestions, whatever ideas the young people might have, and later on in the cartoon the robot gives a summary of what kinds of suggestions have come in. Then somehow the young people could take those further together with some authorities, for example. But this is, of course, just an initial sketch,
so these things need to be developed further. And then mental well-being: the robot could be some kind of a coach. One of the very interesting ideas was what we called the "rantbot": a robot that a person can go and really rage or be angry at, and really let their frustrations come out, because sometimes it's not so easy to act that way with real people. So the robot could be a sort of ventilation channel for these kinds of expressions; just one idea. And then, of course, the robot could help with environmental choices, such as waste recycling, and so on. So these are the categories, and there were plenty of further ideas underneath them.
Based on the young people's ideas, we have also designed some further scenarios. Here, the young people are hanging out in the shopping mall, and the robot comes by and asks them to join in discussing certain environmental issues. And then here, in the bottom right corner, the robot
says, okay, thanks for coming, and then it goes and tries to find other people to join. Then, a new study that we did: this is one example of the studies we have done on these kinds of scenarios, these ideas of what civic robots could be used for.
So we ran this study with 9th graders, 15-year-old pupils from school. There was also a discussion part in the study, but then we gave them three scenarios, which they could evaluate individually and give feedback on. Here is just a summary of the scenarios: there was one where the robot goes on a climate strike.
Some of these scenarios were also a little bit extreme, for example the robot going on a climate strike. In the second, the robot would interview the person regarding environmental issues. And in the third, the robot would come and ask the person to follow it, a bit similarly to one of the cartoons that I just showed; so different contexts and different roles that the robot would have. And then we also gave them two different versions: in one, the robot would display more neutral emotions.
In the other version, it would display more negative emotions and related behavior. Just a brief recap of the findings we got from this; of course, this is not the full display of all the findings. In terms of transparency, it turned out that the robot should display its own purpose at the very beginning of the encounter.
Also, the robot should make the user (if I now use that term; the person) aware of who the humans behind the robot are, because of course people understand that robots are not independent actors as such; there is always someone, or some party, behind them. People were also quite negative about the robot being deceptive. For example, in the third scenario the robot would say, hey, come here, you have to follow me; it wasn't exactly deception, but the robot wouldn't explicitly explain where it was taking the person.
So this was, maybe not too surprisingly, not so positively received. Nevertheless, as a kind of summary of this: it's really important with these robots that the users understand who the people behind the robots are. This must be made very clear, and the whole stakeholder setup must be explicitly explained to the users.
So that they actually know who they are communicating with. And regarding the emotional expressions and roles: of course, not all roles are appropriate for robots. For example, there were comments that robots cannot really be on strike;
it doesn't make any sense. And of course, we purposely wrote these scenarios in a way that would make them a little bit controversial. But this still gives the conclusion, or outcome, that all these roles must be thought through very carefully, considering what is appropriate for the robot, and the robot's purpose in the specific context of use should be made explicitly available. Okay, so now I think I'm coming towards the end of my talk.
So the question, after seeing these three examples of how social robots could be used in society (and of course these examples were all with young people and children, but we can also think of other user groups), is: can robots really be an advantage for these sustainable development goals? Of course, social robots
obviously don't solve all the issues, but I at least believe that there is some potential for using social robots for certain tasks. The examples that I showed were especially focused on advancing inclusion, so really getting more and more people to be active, either in the classroom or in society in general. Education and well-being are very prominent for these kinds of interactions, and of course there are different types of new collaborations between different stakeholders. But there are pitfalls, and we must always acknowledge that not everything is necessarily all positive. There are very practical things, such as the cost of robots,
and practicalities: they break down easily, and so on. These are quite obvious ones, but there might also be a role mismatch in some cases. Especially, I guess this is something that has been talked about a lot in the press: if we replace people with robots in certain situations, let's say in the care of elderly people.
Then we are going in the wrong direction. Or even in the classroom: we cannot just think that, okay, we will put a robot there and the teacher goes and does something else; the teacher still has to be there.
So the robot is an additional part of the interactions between people, not a replacement. This kind of role mismatch must be thought through really carefully, so that we actually get good experiences and good outcomes out of these kinds of interactions. This is my final slide, just to summarize what it is that we need to do as interaction design researchers, designers, or practitioners when designing these kinds of systems, whether a social robot or something else. I gave these examples from this domain, but of course human-centered AI is much more: focusing on users' values and the context of use, on what kind of agency the AI plays in this collaboration of humans and technology, and on stakeholder involvement in all parts of the system design.
And of course, when it's actually deployed. We need to set goals for what we are trying to achieve in terms of sustainability, whether it's environmental sustainability, social sustainability, economic sustainability, or all of these, but also not to forget the user experience viewpoint.
So what kinds of user experiences are we actually aiming at? And later on, of course, we need to evaluate whether we actually achieved the kind of user experience we set the targets for. As already said a few times here, these different roles need to be considered carefully, as well as what kind of human-AI collaboration takes place. And sometimes it can happen, and it does happen in the industrial part of this whole community as well as in research, that the integration of humans and technology is not so optimal: we have people who understand human beings, and then we have people who understand really well what is possible with technology, and we need a tight integration of these different people working together when we are designing and implementing these kinds of systems. And very finally, I believe that we as an interaction design and research community can really drive this kind of responsible AI development, to make things that are really meaningful to people and also help advance sustainable development.
Okay, so that was my talk. I hope I kept more or less to the time, and I would be happy if we have some time to answer some questions. Thanks a lot for your attention.