Can AI help us think for ourselves? | The Royal Society



CHAIR: Good evening. It's great to be back at the Royal Society for this ceremony after two years of online lectures. My name is Marta, I'm a Professor at the University of Oxford, and I currently chair the Royal Society Milner Award Committee. It is my great pleasure to welcome you all, both the people whose faces I can see and the people who are watching online.

Tonight's lecture, entitled "Technologies to think with", is being given by Professor Yvonne Rogers, Fellow of the Royal Society, winner of the 2022 Milner Award for her contributions to Human-Computer Interaction and to human-centred technology. The Royal Society Milner Award and Lecture recognise excellence in science and technology and celebrate outstanding scientific achievement. This award, the Milner Award, is the premier European award for outstanding achievement in computer science, and it is named in honour of Professor Robin Milner. The lecture is very kindly supported by Microsoft Research, and, at this point, I would like to invite Professor Abigail Sellen from Microsoft Research, who will say a few words about the history of the award.

ABIGAIL SELLEN: Thank you for this opportunity to say a few words of introduction ahead of this year's Royal Society Milner Award Lecture. I'm Director of Microsoft Research in Cambridge. We are a lab with a history of ambitious, multi-disciplinary research exploring the future of technology. Like today's award-winner, we have a strong focus on research that puts human aspiration and human values at the very centre of the technology that we develop. So it is a special privilege to be here with this year's award winner, at a time when the impact of technology on our everyday lives and on society at large has never seemed more unpredictable. It has never been more important that we understand the relationship between people and technology, and how to shape that relationship. Before she is introduced, I would like

to share with you a bit of background about Robin Milner, and why we created this award and partnership with the Royal Society. Robin Milner, a Fellow of the Royal Society and winner of the ACM A.M. Turing Award, was a brilliant mathematician and computer scientist. His dream was to make it possible to verify mathematical proofs by computer, leading to three core contributions.

One was a tool for automated theorem proving; to program it, he created the ML programming language, enabling polymorphic type inference and type-safe exception handling. Later on, he worked on parallel computation, made famous in two seminal books: the Calculus of Communicating Systems and the Pi Calculus. Over his career, he developed a remarkable body of mathematical work, grounded in practical questions, that continues to influence generations of theoreticians. Robin was a true pioneer, and therefore a clear choice to name the award after, which seeks to recognise outstanding achievements in computer science. It is therefore absolutely fitting that the award this year goes to Professor Yvonne Rogers. I

personally would like to congratulate her, and very much look forward to her lecture. At this point, I will hand back to Marta. Thank you. [Applause]. CHAIR: Before I hand over to tonight's speaker, I wanted to say a few words about her achievements and why she was recognised with the Milner Award. Yvonne Rogers is a pioneer of human-computer interaction, and one of the leaders who created the field of Ubiquitous Computing. Ubiquitous Computing, where computers disappear and weave themselves into everyday activities, was conceived by Mark Weiser in the early 1990s, and it was a topic that Robin Milner himself was concerned with; I worked on it with Robin Milner myself, along with several people in this room.

Mark Weiser called it "calm technology". Yvonne brought a new perspective to Ubiquitous Computing. She was an early proponent of placing humans at the centre of technological innovation, arguing that computers need to engage users, rather than simply do things for people. This new perspective went somewhat against the accepted view of Weiser's calm technology. Yvonne's research is concerned with augmenting human activities with a diversity of computational technologies, and, to this end, she investigates, invents, and designs interactive technologies that enhance life and work activities.

She is best known for her foundational work on innovative learning technologies, on new methodologies -- pioneering the study of technology in the wild -- and on new theories about how technology enhances human behaviour, with concepts such as external cognition. Her extensive research has been influential in shaping HCI and has led to many books and publications. Yvonne is the Director of the UCL Interaction Centre and a Deputy Head of the Computer Science Department at UCL. Prior to this, she held professorial positions at the Open University, Sussex University, and Indiana University. Yvonne is a Fellow of the ACM, a Fellow of the British Computer Society, and a member of the

ACM CHI Academy. She was elected a Fellow of the Royal Society in 2022. So, at this point, ladies and gentlemen, I'm very pleased to present Professor Yvonne Rogers. [Applause]. YVONNE ROGERS: Thank you for that introduction and welcome.

It's a pleasure and a joy to see so many faces out there -- not just my colleagues but my friends, my bridge players, and my family watching online -- so I'm really pleased that you've managed to come out in the rain, or that you're tucked up somewhere at home. I would like to start my lecture with a picture here of the great Robin Milner. I had the pleasure of meeting him a few years before he died, at a workshop on Ubiquitous Computing that we were both attending. As you can see, Robin had a twinkle in his eye, and he was very endearing and approachable. Even though our areas of computer science are worlds apart -- as you heard from Marta and Abi, his work was in theoretical computing, and mine is the human aspects -- he was interested in how our minds could meet, and in thinking about Ubiquitous Computing from the user experience as well as from the theoretical aspects and the design. We had a number of conversations about this, so I'm very lucky to have met him. In honour of Robin, I'm wearing an orange shirt like he is!

My lecture will be in three parts. First, the early stages of my career, where I very much helped to inform the field of Ubiquitous Computing, in particular looking at how we can inspire children to learn. Then I'm going to talk

about some research I've been doing in the last two or three years, where we've been trying to design software to help people think more systematically. Finally, I'm going to finish by looking into the future at how we might develop new AI tools to facilitate creativity.

So at the start of my career, I was very much interested in how we could design new technology for children, and this is what we were confronted with: rows of children sitting in front of PCs by themselves, following tasks, and trying to get something finished. It was very dull and drill-based. And I thought: surely we can do better than that? There was this amazing technology that we were just discovering, taking computing off the desktop, and this is where I got involved in an area called Technology Enhanced Learning.

And what we tried to do was think about how we might move computers out of the classroom and into the wild. The reason for this is that kids get excited when they're outdoors, and we wanted to encourage them to be more self-initiated in their thinking, to talk to each other, and to approach scientific inquiry in a much more engaged way -- and also to inspire curiosity. One of the projects I'm best known for is called the Ambient Wood, which we did 20 years ago now, and which was really a field trip with a difference. This was work done with partners on the Equator IRC project, which brought together like-minded researchers from eight UK universities who dared to think differently -- one of whom, Tom Rodden, is in the audience here. As part of this project we worked with people from different disciplines: from design, developmental psychology, engineering, and computer science.

We suddenly felt like we were in a sweet shop: we had been given these new technologies to experiment with. We designed all manner of innovative technologies to help children get really inspired by using technology to think. So we developed what were called probing devices, with which they could collect readings from the environment -- moisture and light. We also made our very own handheld devices -- this was before mobile phones were around -- through which the students could get feedback and information whilst walking around outdoors in the woodland. One of my colleagues, Danielle Wilde -- she was at the RCA at the time -- created this periscope device down at the bottom here. The idea was that children would come across it and a video would be played right there in the woodland, rather than them watching something in the classroom before going out. This was a David Attenborough video

about the bluebell lifecycle, or something like that, and they could see it immediately, and that would whet their curiosity. What we tried to do, then, was to encourage learning through exploring and discovering. We didn't tell them explicitly what they had to do; we just gave them these tools we had built and told them to go forth, experiment, and see what they could find. One of the things we were experimenting with at the time was what we could do with these new ubiquitous technologies, so we thought about allowing the children to see the invisible and hear the inaudible. It might be an invisible action, where walking past a flower causes a digital event to occur. This weird-looking device down here, called the ambient horn, would play a sound every time it went past something that had one associated with it.

I will play you the sound that these two girls are listening to, and I want you to guess what it is. It is a butterfly drinking nectar! [Laughter]. Now you know. We wanted them to notice things that you take

for granted or never really think about. We also had a sound for photosynthesis and what that might sound like -- again, to get them curious about these things. And we found that when we let the children, who were aged ten to 12, go out into this woodland, much self-initiated learning took place, and we got them to go out in pairs so they could talk to each other and collaborate. With this probing device, they probed everything -- the air, the ground, the trees, the foliage. And what we discovered is, you

know, kids being kids, they liked to find the most extreme: the wettest, the lightest, the darkest. And of course they tested different parts of their own bodies to see whether they were the lightest, the wettest, or the darkest. It was a pleasure to see them enjoying that. An important part of working in pairs was that one of the children would go and probe with the device but wouldn't get the reading immediately.

They would have to join the other child to look at the display and talk about what they had found. The displays, or visualisations, we used were really simple, so they could just see relative levels, and that would trigger them to think where they might go and probe next. Here are two girls using the devices together. You can see how one does the probing, then the other reads out what the reading is, and on the basis of that they hypothesise where to go next to find somewhere even drier or even wetter. [Video]. We couldn't stop them from going around testing things, and thinking about why something was dark or light, and where something was even darker.
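As an aside, the probe-and-display pairing described here can be pictured with a small sketch. This is a hypothetical reconstruction in Python, not the Ambient Wood code: the device names, the sensor stub, and the five relative bands are all assumptions.

```python
import random


class Probe:
    """Stand-in for the Ambient Wood probing device (hypothetical)."""

    def read(self, quantity: str) -> float:
        # A real probe would sample a moisture or light sensor;
        # here we fake a normalised reading between 0 and 1.
        return random.random()


class PartnerDisplay:
    """The second child's handheld, showing only *relative* levels."""

    BANDS = ["very low", "low", "medium", "high", "very high"]

    def show(self, label: str, value: float) -> None:
        # Deliberately coarse feedback -- a band, not a number -- so the
        # pair has to talk about what it means and where to probe next.
        index = min(int(value * len(self.BANDS)), len(self.BANDS) - 1)
        print(f"{label}: {self.BANDS[index]}")


probe, display = Probe(), PartnerDisplay()
for spot in ["tree bark", "ground", "foliage"]:
    display.show(f"moisture at {spot}", probe.read("moisture"))
```

The design choice worth noticing is the coarse, relative display: withholding precise numbers is what pushed the pairs into conversation and hypothesising.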

But little did they know that once they had finished experimenting and exploring the woodland, they had to come back to the classroom -- but the classroom wasn't back at school. We made a pop-up classroom, which you can see on the left here: this rather stripy-looking tent.

We got the pairs of children to come back and share their experiences, and what they didn't realise was that every reading they had taken, every probe, we had recorded, and one of our software designers then represented those on a bird's-eye view of the woodland, showing where they had been. They could click on these dots and the reading they had taken would appear, showing the relative light and moisture level. They were absolutely fascinated by this and tried to remember which of the readings they had taken were moisture or light levels.

This triggered a natural conversation between the children, and they compared the different places they had been exploring. This led them to hypothesise about the ecosystem in the woodland: for example, which plants grew in the wetter areas and why, and what creatures and insects thrived in different parts of the woodland. Just to show you how different that type of learning is compared with what we were up against in the classroom: I think I can be confident enough to say that, at the time, we pioneered a new way of designing technology to dare children to think differently. It was hard work -- as you can see here, after a hard day's work -- but it brought us all together, and we also brought indoor and outdoor learning together in a novel way. The children talked freely and excitedly with each other, not just whilst they were there in the Ambient Wood, but also on the bus back. Believe it or not, ten or 15 years later, we came across some of these children, by then nice young adults, and they remembered their day in the Ambient Wood as one of the best days at school. And they loved doing scientific

inquiry like this in the wild. But they were also fascinated by the underlying technology. At the time, this was a woodland belonging to one of my colleagues, where his wife used to do yoga, so we had to wire it up -- literally -- making our own Wi-Fi with aerials and putting laptops in trees. The children would go round trying to find that technology as well, to work out how it was possible that things were pinging and noises were being made. So it got them interested in the ubiquitous technology as well as in understanding more about the ecosystems.

So Marta mentioned how I contributed to the field of ubiquitous computing, and doing this type of work led me to think that the field should be much more exciting, provocative, stimulating, and engaging -- not, as many people thought it should be, following on from Mark Weiser's view, making our lives efficient, calm, and easy by doing things on our behalf. The trouble with that is that we get lazy and expect the technology to do it. I think technology is really there to exercise our minds.

In particular, we should be designing engaging user experiences, and for most of my career that's what I've been doing: thinking about the different technologies out there that can encourage us to be more active and more reflective in our learning, but also in our living and our work, and to facilitate creativity. So that is part one of my talk, and how it has inspired me throughout. Twenty years on from the beginning of ubiquitous computing, there's a lot of technology around that has been developed and that we can experiment with.

So we have had PCs for a long time, but now we've got tablets and various mobile devices. We've got what is called the Internet of Things, which basically means putting sensors into the environment, into objects, and connecting them to the internet so that they can talk to each other. We've also got what are called tangibles and physical computing, where the computation is in some artefact, and this allows us to think about what we want to do with the digital world in respect to the objects that are out there. Then there is augmented reality, virtual reality, wearables, speech interfaces, robots and chatbots, and artificial intelligence and machine learning. And the question is: which of these technologies

do we design for, and how? This gets me on to thinking about thinking. There are many kinds of thinking that we can use different technologies to augment; we are all involved in different aspects throughout our lives, whether it is planning, deciding what to do, choosing between alternatives, reasoning about things, making sense, reflecting on what is happening, contemplating, or solving problems. So how the hell do we match these up? How do we know which of these various technologies to use to support which kind of thinking? We could use PCs and tablets to support problem-solving, but then again we might want to use them for planning; we could use artificial intelligence to support decision-making, or augmented reality to support reflecting; we could use tangibles to support planning, and so on and so forth. There really isn't any systematic research or guidance out there as to how to make those decisions. And

what we do in human-computer interaction is a bit of, you know, trial and error and a bit of experimentation, but we also go to theory, particularly in psychology, to inspire us. One theory I'm going to mention -- though there are many theories I've looked at and been inspired by when thinking about how to design technologies -- is from Daniel Kahneman's book Thinking, Fast and Slow. How many of you have bought the book? I won't ask if you've read it! I would say 90% of you. It's a best-seller, and it's a really great title. Basically, in the book he argues that there are two types of thinking system: one which is intuitive and fast -- no effort, instinctive, automatic -- and another which is more effortful, slower, more orderly and deliberate. He argues that System 1 is what routinely guides our thoughts and actions, and it is often right, but it is prone to making errors, particularly of judgment and decision-making.

System 2, by contrast, is meant to be the voice of reason, and he argues that we should employ it more when we've detected a bias in our thinking. Now, it's rather an oversimplification of how thinking works, but when you're modelling, you do try to set up contrasts. Thinking of these two systems as alternating -- with our thinking sometimes somewhere in between -- is a useful heuristic, a useful theory for us in human-computer interaction to ask: how can we stop people, or reduce their biased thinking, and how might we promote what is called System 2 thinking? Having been inspired by reading this theory, what we do next is develop our own concepts to inform the design of the technology. This is where, in collaboration with my students -- particularly, are you here, Leon? -- I've come up with the notion of scaffolded thinking. The idea, as with scaffolding, is that you somehow use the technology to guide people, and maybe stop them or slow them down to reflect more on their decisions. I'm going to give a case study where we think we can design technology to support scaffolded thinking.

The second one I'm going to talk about is what I'm calling integrated thinking: designing technology to help people externalise their thoughts and be more systematic when problem-solving. So, the first one: scaffolded thinking. We think we can use this concept to help us design technology to support people who invest in stocks. I don't know what you did during the pandemic, but apparently there was an astronomical uptake of trading apps -- do any of you dabble in trading apps? A few who admit to it. Apparently 130 million people used them in 2021, and the most popular is Robinhood, designed for the novice who doesn't have much expertise. But many people new to investing made costly mistakes.

We might think about how to design technology to slow down their thinking and help them avoid these mistakes. So what happens when you've just invested in a stock, it goes up and then it levels off, and you see this on your phone? You panic. You get emotional, sweaty; you don't know what to do; you think, if I leave it like that, I'm going to lose all my money. So you sell, but you often sell too much too fast, and then you regret it later. And the problem with novice traders is

they don't have a good strategy to deal with this situation. This is where we come in: to think about how we can help novice traders learn to think more methodically. This is one of my PhD students here, Ava, who volunteered to pose for this -- but seeing that, you get the stress, and it is really difficult to think under stress. Professional traders, though, develop a voice of reason, and they will have a set of questions and criteria they use before finalising their trading decisions.

Unless they have had a couple of glasses of wine, they will think through whether this is the best thing to do. But beginner traders don't have this, and they make rash decisions. So we wanted to help them become more experienced and expert by scaffolding this voice of reason, so that it stops them acting impulsively and gets them to think about what they're doing and why.

If you look over here to the right, these are the interior monologue questions that we would like them to be thinking through, in the way many expert traders do. So, rather than "I need to act fast", it's "How did I come to this conclusion?", "Have you considered criterion X?", "Overall, this seems to be the best alternative". You have this conversation with yourself, and then you can decide whether to buy or to sell. We decided on a chatbot for our technology intervention. For those who are not familiar with chatbots: I suspect most of you have been on the British Gas or banking sites -- they all have chatbots. It's essentially a virtual agent that a person has a conversation with. It can be customer services, marketing, sales, or travel -- this one is travel.

The user types in a question on the right, the chatbot answers, and the user then asks another question or answers in turn. So it is a simple conversation. Now, our chatbot in this context was designed to probe traders about their intentions, and to help externalise their hunches, which aren't necessarily well thought through. We call it a ProberBot because essentially it's probing the user, asking them questions. And we designed it to be embedded in the software, so that as the user is looking at how well their stocks are doing and deciding whether to sell or to buy, the ProberBot will pop up at an opportune time and ask them questions. It asks: if your

investment hypothesis has changed, what made you change it? The user types "recent news" in the blue box. I will show you how it works in action. We developed a software simulation for trading: you can see here, on the left, the stock list that the person has, and on the right it shows the information; this panel pops up if they want to trade, and that is the point where the ProberBot appears and asks what their trading hypothesis is. This may be enough for them to think: is this what I wanted to do? So that is how it works. It pops up at key moments when the user is about to make a trade, stops to get the user to think and reflect, and is embedded in the trading tool so that it dovetails the task execution and the thinking. So how effective is our chatbot? Well, my student Leon ran a pilot study with six traders, presenting three scenarios where they had to make investment decisions about whether to buy or sell a stock. Then we used the HCI method of think-aloud within in-depth interviews. There is a paper out there if you're interested,

but the idea was: would it make them stop and think? From the thinking aloud, this was very much the case. Very occasionally it was annoying for it to pop up, and that is something you have to design for -- it must not become like Clippy, popping up too much -- but rather encourage reflection on decision-making by helping in the moment when it matters. They also said that it would help reduce impulsive actions, which suggests that having this type of chatbot appear can make an investor's thinking more systematic.
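The pop-up logic described here -- probe at the key moment, but not so often that it becomes a Clippy -- might look something like the following sketch. The class, the questions, and the probe budget are illustrative assumptions, not the published ProberBot implementation.

```python
from typing import Optional

# Interior-monologue questions of the kind expert traders ask themselves
# (paraphrased from the talk).
PROBE_QUESTIONS = [
    "What is your investment hypothesis for this trade?",
    "How did you come to this conclusion?",
    "If your hypothesis has changed, what made you change it?",
]


class ProberBot:
    """Illustrative probe-before-trade hook embedded in a trading tool."""

    def __init__(self, probe_budget: int = 3):
        # Cap the interruptions: the pilot study found too many pop-ups
        # annoying, so a real design would tune this carefully.
        self.probe_budget = probe_budget
        self.asked = 0

    def on_trade_intent(self, action: str, stock: str) -> Optional[str]:
        """Return a probing question at the key moment, or None once the
        budget is spent, letting the user get on with the task."""
        if self.asked >= self.probe_budget:
            return None
        question = PROBE_QUESTIONS[self.asked % len(PROBE_QUESTIONS)]
        self.asked += 1
        return f"You are about to {action} {stock}. {question}"


bot = ProberBot()
print(bot.on_trade_intent("sell", "ACME"))  # pops up before the trade goes through
```

The point of the hook-plus-budget shape is that the probing dovetails with task execution rather than interrupting it at random.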

Now, I've talked about novice traders; what about expert traders, who have so much information and knowledge that they can be tempted to be naughty? This is the second case study I want to present. Financial institutions are responsible for detecting this naughtiness, which is essentially market abuse and insider trading, where someone who has confidential information uses it to their advantage. Here is one that was in the news recently, where an investor accused Rocket's Dan Gilbert of insider trading, claiming he pocketed 500 million. A lot of this happens, and, as I said, financial institutions try to stop it, or at least to detect it. They employ compliance officers to do this, whose job is to detect abuse by conducting investigations, curating and collating data from several sources to build up a case and see whether it is true. But an awful lot of work is involved in being a compliance officer.

This is a hierarchical task analysis -- I'm not going to go through the different steps, just show you that there are many steps involved. There is a huge amount of cognitive work: much multitasking, scanning through thousands of alerts, sifting through millions of emails, checking lots of news feeds. There are huge demands on their attention, constantly having to switch between various resources, and much of it is done inside the expert's head; occasionally they might jot down their notes and thoughts. What if they were given a new kind of software toolkit that could help them with this work and support more integrated thinking? This is where I was working, a couple of years ago, with the behavioural science team at Nasdaq -- Wendy Jephson and Anna Lesley, who have both since left and co-founded their start-up called Let's Think. They thought about how you could develop what is called an "investigative canvas": a set of software tools where disparate information can be brought together in one place. Rather than having to go in and

out of all these different software tools, you have them there side by side, helping the officer to make and discover new connections. There were lots of tools that they came up with -- the alerts, for instance -- with the canvas in the middle. The way this was designed, the compliance officer can decide which tools to bring together. They start off with a blank canvas, called the investigative canvas, and at the top there is the case-builder, where they can start to build up their case. They can populate it with information they have found from potential alerts. Then they might want to bring up what is called the people profile.

This is an early design. Here you can see what is going on between two people: is there any strange communication, or a lot of communication happening? They can start to see that there, and then they might want to add another tool -- this one here is trading information, which will be very useful for them. Over here is a scratch pad where you can bring information from the different tools and have it in place, ready to add to the case-builder. I've given you an example of a few of the tools, but there were many others being developed. How effective is this approach? Well, I think our initial evaluations showed how it could be used by compliance officers to externalise their thoughts -- to project their sometimes random internal thoughts and make them more systematic -- and also to share them with other team members, rather than jotting things down in a notebook, so they could understand each other's lines of thinking and collaborate. One of my PhD students called it a whiteboard on steroids, because you can discover new connections by having the set of side-by-side tools, you can move the information around and test hypotheses that come to mind, and it also enables you to think about something you may not have.

If you get distracted, you can pick up where you left off, because it's out there.
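One way to picture the canvas is as a set of tool panels feeding a shared scratch pad and case-builder. The sketch below is a guess at the shape of such a thing -- the class names and fields are mine, not Nasdaq's or Let's Think's software.

```python
from dataclasses import dataclass, field


@dataclass
class CanvasItem:
    source: str  # which tool it came from, e.g. "people profile"
    note: str    # the analyst's own words about why it matters


@dataclass
class InvestigativeCanvas:
    """Side-by-side tools drop findings into a shared scratch pad;
    promising items get promoted into the case being built."""

    scratch_pad: list = field(default_factory=list)
    case_builder: list = field(default_factory=list)

    def jot(self, source: str, note: str) -> None:
        # Externalise a thought so it survives interruptions and can be
        # seen (and challenged) by other team members.
        self.scratch_pad.append(CanvasItem(source, note))

    def promote(self, index: int) -> None:
        self.case_builder.append(self.scratch_pad.pop(index))


canvas = InvestigativeCanvas()
canvas.jot("alerts", "unusual volume the day before the announcement")
canvas.jot("people profile", "burst of messages between traders A and B")
canvas.promote(1)
print([item.note for item in canvas.case_builder])
```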

So how generalisable is this approach? Well, we've been noticing how others are now developing what are called orchestration platforms, which take siloed data from multiple storage locations and organise it so that data analysts have it ready to hand. So there is a lot of interest in this approach to integrated thinking. The start-up company which I am part of, as the CTO, is called Letsthink.com, and if you have areas where you think this approach would be useful, do get in touch. We are trying to develop our kinds of canvas tools in education and in finance, and our strapline is enabling people to think brilliantly. I want to recap. I've talked through two approaches by which we've used theory in HCI to think about designing tools to empower us.

I've shown how we can try to slow down people's thinking, and how we can scaffold and integrate their thoughts. We can externalise their cognition -- Marta mentioned one of the theories, called external cognition, which I won't be going through today, but one of my contributions has been to think about how we can do that and what the design principles can be. We can also see new connections; it can help us reason and reflect and, in some cases, reduce the biases in our decision-making. There is also the potential for supporting multitasking. So matching technology type to thinking type is still very much an art form rather than a science. But in some ways, it

doesn't really matter. I think the key thing is the theory you use to inform which of these you choose, and why. Just to recap: we turn that theory -- it can be from psychology or behavioural economics -- into concepts that inform the design of an interface. I've come up with numerous design principles throughout my career, and one is dynalinking, where you link representations on different displays, so that if you make a change to one, it is reflected in the other, and you can see by looking across what the effects of making a change are. It might be a simulation, or you might be building something up. This type of dynalinking is really important when you're thinking about designing complex interfaces.
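Dynalinking is essentially the observer pattern applied across representations. A minimal sketch, with invented view classes, might look like this:

```python
class LinkedModel:
    """One model, many representations: a change made through any view
    is pushed to all of them, which is the essence of dynalinking."""

    def __init__(self):
        self.views = []
        self.value = 0

    def attach(self, view) -> None:
        self.views.append(view)

    def set_value(self, value: int) -> None:
        self.value = value
        for view in self.views:
            view.refresh(value)  # every linked display updates together


class BarView:
    def refresh(self, value: int) -> None:
        print("bar   :", "#" * value)


class NumberView:
    def refresh(self, value: int) -> None:
        print("number:", value)


model = LinkedModel()
model.attach(BarView())
model.attach(NumberView())
model.set_value(5)  # one change, both representations reflect it
```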

We also look at all sorts of questions about the specifics of the interface: should we use voice or text? What type of conversation? Should it be open? What type of feedback? Not that type of feedback! Where should information be placed on the display? And what kind of interactivity, and how much? There are lots of questions that automatically come to us when we start thinking about new areas in which to design these technologies. I'm going to finish, though, by thinking about the future. We've heard how Abi has changed her mind about how AI can make a difference and be very useful.

I believe very strongly that there is huge potential -- and we're just seeing it -- in AI changing how we think. I'm not going to go into the discussion about whether it is going to take over our jobs -- I will leave that to someone else -- but I think it can support creative thinking, particularly in art and design. So what is creative thinking? It involves looking at things differently, finding novel designs and solutions. In a nutshell, it's making something new: it could be a poem, a picture, a design, a piece of music, a recipe, a dance, an app. Some of us find it difficult to be creative, so wouldn't it be wonderful to design AI tools to help us discover new ways of being creative? And it is happening. This summer, I was amazed at how many people were talking about these new AI tools, and I suspect some of you out there have tried them. They have emerged to support

creativity, and in particular OpenAI have developed a process known as diffusion that turns text into images. For those who haven't tried it: the user types some words into the box here, and the AI tool generates images to match them. DALL·E 2 is the best known one, and there are others, such as Stable Diffusion. I typed in "blue sky, sloths, melting clocks".
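For readers who want to try this from code rather than a web box, here is a minimal sketch using the open-source Hugging Face diffusers library with a Stable Diffusion checkpoint (one of the tools mentioned above); the particular model name and the assumption of a CUDA GPU are mine, not from the lecture.

```python
# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

# Download a public Stable Diffusion checkpoint from the Hugging Face hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU is available

# Text in, image out -- the prompt from the talk.
image = pipe("blue sky, sloths, melting clocks").images[0]
image.save("sloths.png")
```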

If you haven't tried one of these tools, there is a big waiting list for many of them, but this one, called Craiyon, you can get on to straightaway. It asks: what do you want to see? I first typed in "cat sat on a mat thinking" -- you can put any sentence in there -- and this is what it came back with. They're quite cute. Some have got squiffy eyes, and one looks like it has had its nose in the jam, but they're sitting on a mat and looking like they are thinking. It is clever how it does that matching up. Perhaps the most advanced one is DALL·E 2.

I typed in "a modern painting of a professor giving a lecture and being nervous", and it came back with four male professors. I thought, that's not good. So I typed in "a picture of Yvonne Rogers giving a lecture and being nervous". They don't look like me -- I don't think they look like me -- but it certainly does look like someone giving a lecture. The one on the right looks angry rather than nervous. But it is like you then want

to write another sentence, and you can't stop yourself; you can't stop using these. There is mass appeal. Whoever I talk to is really excited by them -- artists, computer scientists, architects, designers -- and the general public have gone crazy using them. Why?

Aditya Ramesh said that DALL·E is a useful assistant that amplifies what a person can normally do, but is really dependent on the creativity of the person using it. I've marked "amplifies" in red: it's not replacing you, but you find yourself thinking, what can I write now? Some might ask: is this creativity? I would argue yes. When I typed in "Is DALL·E 2 creative?", it came up with the design on the left.

Every time you write a few words, the AI tool makes you think of a new idea and enables you to dare to think differently. Some ask: is it really an art form? I was having a discussion in my lab this week and saying, well, just as photography became a new form of creativity, so will this new breed of AI apps. They've only just started to come out; in the next year or two, we will see many more.

Another debate -- not one for me to go into here, but just to mention -- is whether it is stealing the work of the artists it uses in its training data, and whether we can find a way of compensating or paying them. I want to finish by saying that, in the future, the successful AI tools will be those that help humans in their work. Just like the ProberBot chatbot I talked about, the most effective AI tools will be those that are embedded in other software tools, so that you use them whilst you're doing your task or your work. And just like the investigative canvas, I think the most powerful AI tools will be those that facilitate integrated thinking, enabling us to think with more and more resources. Given that Microsoft are supporting this lecture, I want to end with an exciting new Microsoft tool called Microsoft Designer. Unfortunately, there is a waiting list to get it -- hopefully not too long before I get my hands on it. What it does is use DALL·E 2: you can type in a description here, like "kitten adoption day", and it will come up with designs, which you can then use in whatever it is you're creating. It might be a website, a poster, a newsletter, or social media. Here again, it's embedding the tool in what you're doing, and here again, you can start with -- oh, it's the same one.

You just add or remove content. So here they're creating a newsletter. The idea is that it is really easy for anyone to use, and it opens up many possibilities for thinking about new designs. I'm really excited by this tool, and I think there will be many more coming out that actually match what we as human beings are doing, rather than replacing us.

So, to conclude, I think there is a diversity of technology tools to think with, and I've just described a couple of them. My field, human-computer interaction, is helping to design and shape those tools. The most empowering ones are those that are embedded in ongoing tasks and activities, especially those with a canvas that enables you to put things down, move them around, discover, explore, and investigate -- and those that enable professionals and the general public to extend how they create work. I think the future is very much human-AI thinking rather than AI replacing thinking -- I've always thought that -- and the best tools will be there to empower us, engage us, and excite us. I would like to thank Microsoft, the Royal Society, and the late Robin Milner for this award. And also the many,

many researchers I've collaborated with -- I've only really mentioned those at the universities I've worked at and on Equator, but there are many others at universities around the world, and without them I wouldn't be here today. If your name is not up here and you're in the audience and think it should be, let me know! Thank you. [Applause]. CHAIR: We have time for questions, and I want to remind those watching us online that you can ask questions on Slido. Do we have any questions?

I can see -- hi there. FLOOR: That was a marvellous talk. It seems to me you have your best work ahead of you. When we were both starting out -- I was an adviser, and you were arguing that the computer should vie for attention -- do you think it does vie for attention, and is that a positive thing? YVONNE ROGERS: That is a very good question. For those who didn't hear it: I worked with Julie on a project funded by Intel, and I was for making technology visible, while she was for the Mark Weiser view of making it hidden. She is asking: hasn't it gone too far? I would agree. I think people

have got addicted to using their mobile phones too much. There are some clever apps and games out there which are difficult to put down, and I think the way to overcome that, as with any addictive activity -- whether it is drugs, alcohol, eating too much food, or all the others -- is to find ways to help people who find it really difficult. There are various software tools out there, or attempts to get people to stop, and sometimes they're quite blunt instruments, so I think there's a lot of opportunity to help people wean themselves off -- or simply to throw the phone away. I'm probably guilty myself of using mine too much.

CHAIR: While you're thinking, we will take someone online -- you will be next. Please think up some more questions. The question is from Warren: thank you very much for the great lecture. During it, you mentioned two case studies which use a chatbot and visualisations respectively to make people think. What are your views on the kinds of contexts in which each method could help people think best? YVONNE ROGERS: I think chatbots can be used in -- I mentioned one or two types in a sort of commercial domain, but we were also trying to think about how they can probe, and they've been used for other applications and contexts. For example, there is

a chatbot called Replika, which has been designed to support people's wellbeing and get them to think and interact with it; so they've been used in different contexts. In terms of visualisations, again, I think there are many application areas where you can use them, and we've seen that in data analysis. Some of the work I'm currently engaged in is thinking about what types of data visualisation we might create for life-long narratives, and how people can reflect on different aspects of their life -- not just coming up with graphs, but thinking about what other kinds of visualisation there might be. So, as I showed in the slides, there are no hard-and-fast rules as to which type of technology you should use for which type of activity. It is obviously easier and cheaper to design for a mobile phone -- one of my colleagues who can't be here tonight wrote a book called There's Not an App for That -- but everyone goes straight to designing apps because it's easier to do. I actually think we can design a whole range of technologies, rather than just going for the one that's easier and cheaper. FLOOR: Hi, my name is Suyosh, a student studying game design.

I'm looking to make especially educational games -- games that can teach kids about different subjects and topics. The slide you showed about System 1 and System 2 thinking was really interesting, because in games a lot of it is about really fast reflexes -- shooting, going around: System 1 -- and with educational games you want to ask a question or make them think or learn something, which engages the System 2 brain, but that makes the game less fun and less entertaining; it makes it kind of boring. So how can one solve the challenge of mixing learning with having fun? YVONNE ROGERS: I don't think System 2 is always boring, though maybe children see it as such. System 1 and System 2 are meant to be a metaphor; the key is to be able to alternate between them. At certain points, let them be fast and just react, and at other times you might want to slow them down so they can be more strategic, and get them to think: is this the best way to race, or whatever else you want them to do in the educational game? I think some early educational software was designed to try to combine different approaches and strategies. So, using it as a heuristic: maybe after they have been playing for a long time, give them an activity that will slow them down, or have a chatbot ask whether this is the best thing to do, to encourage metacognition, which is thinking about your thinking. That would be my suggestion. CHAIR: Can we go over there?

One more? FLOOR: Thank you so much, Yvonne, for a really amazing, eye-opening talk. I have a question about the chatbot and, again, the System 1 and System 2 thinking -- I find that whole concept extremely interesting. Did you consider how to get users to engage more with the chatbot? In System 1 thinking they're running on emotions -- scared, angry, or sad -- and when something pops up in that moment, knowing that it is a robot, the person might not even consider engaging with it. Did you consider what the chatbot could be doing -- could it perhaps affect the emotions of the person, maybe by creating a shock scenario, for example, showing what could happen if they make this bad decision, or by using trust? I mean, it's really a question of what the different ideas were: why would the person engage with a chatbot in that moment? YVONNE ROGERS: That is a really good question. I think our

research in this area has only just begun. First of all, we looked at how we could facilitate teams of clinicians working together, sense-making about data where they didn't know what was causing the different trends. We designed our chatbots to trigger more conversation between the team -- it was very much about how you can get more conversations going -- whereas the next tranche of research looked at individual users and how you might get them to stop and think. So I think there's huge scope for chatbots that model, or at least understand, the types of human emotions and tap into them. The key thing is that you need to find a sweet spot, because you can just annoy people and they will switch the chatbot off; it becomes annoying or frustrating. That's where doing good user testing can come in to help: is this too much? Our first ProberBot was perhaps a bit too in-your-face for the traders, so we reduced the number of turns in the conversation. So the key is

how long should the conversation be, so that they can get back to the task? If they just want to explore their mental health or their emotions, you might have a much longer conversation, which is what Replika does. So I think there's huge scope for doing much more in this area, and for getting beyond the sort of Q&A model that underlies much commercial use of chatbots.

CHAIR: [Inaudible]. FLOOR: Thank you, Yvonne, for an excellent and informative lecture. I'm beginning to wonder whether this question follows neatly from the preceding one. One of the ways we scaffold thinking in society is through debate, challenge, and criticism -- a quite scratchy way of engaging with people.

So I'm interested in where that fits within a model, and how you can do that whilst remaining engaging. YVONNE ROGERS: That is a tricky one, because even humans themselves can find it hard to be all of those things -- particularly in marriages -- in understanding when it is best to say certain things, when it is good to be scratchy, and when it is good to be blunt or open-ended. But I think there is a lot of scope for us moving in that direction. There's been a lot of work in AI and natural-language processing on quite simple conversations, but in the last year or two there's been more understanding of the nature of conversations, of the nature of these discussions that might go on. So I don't have an answer to that, other than more research into these things, and for us to understand a bit better what goes on in these types of discussion -- scratchiness, as you call it. Do we have good understandings and theories about what goes on in human conversations of this nature? If so, can we borrow from them to design these types of chatbots and other interventions? Certainly, at some of these large government meetings, it would be very powerful and useful to have some of these chatbots to help! CHAIR: Okay, looking at the time, I think I will take these last two questions. Let's go there. FLOOR: Hi, Adrian Ghegry. I'm a business leader, so I'm coming at this from that angle.

Really interesting presentation and lecture. I was just wondering: in your financial services example, it was about AI slowing down and scaffolding the thinking. What about access to expert opinion, and what did that open up in terms of regulatory requirements as well? I imagine that was quite a tricky subject, but I'm really interested in your thoughts on it. YVONNE ROGERS: There were two case studies. In one we focused very much on novice traders, trying to get them to develop new strategies and think about the criteria. In the financial world, I'm not an expert on the regulatory matters there -- sorry, what was the second part of your question? FLOOR: So, a lot of the AI you were talking about was around the decision process, and part of the decision process is access to expert opinion.

Now, in financial services, expert opinion is regulated -- it's a complete minefield: what do the experts think about this, how does it affect my decision-making, and how do I get access to it? The regulation would be quite stifling in terms of getting access to that. I wondered what your thoughts were, and how you came across that in your example. YVONNE ROGERS: I think we steered clear of that. If we had information that competitors might find useful, we couldn't just give it out freely in our chatbot. We weren't trying to tap into that expertise in order to let people interact with expert chatbots; it was more about getting them to develop their own thinking strategies. That's a very different area of work, and I think we steered clear of it because it's a minefield. CHAIR: The last question.

FLOOR: This is probably a broad question, but I was wondering how such designs of decision-support tools might be applied in more time-constrained settings like healthcare, where people are making high-stakes decisions, are already cognitively overloaded, and may not have the cognitive resources at their disposal to make such systematic, System 2-type decisions. I'm coming from a PhD studying decision-making. YVONNE ROGERS: I think artificial intelligence has come a long way in helping people with those types of decision, particularly in diagnostics and diagnosing, and it will continue to be developed to help. But the key, in my view, is not for it to take over completely; it is to know when to trust these tools, when to use them, and what people could and would like to do themselves. That is what we call "human AI", very much one of the research areas happening at the moment: to think about where AI can replace certain activities that are time-consuming or can be unreliable, so that doctors and other clinicians can use it, but also to give them new tools that empower them to be creative in ways they couldn't be before. I think there are two things: one, as I said, for people who are overworked, how can we help them; but also, for those trying to think about the future of medicine, how can we help them with these new types of creativity tool? I think there's a place for both. CHAIR: Okay, so thank you.

Thank you for the very stimulating questions, and thanks, Yvonne, for the great lecture. And now, I think I have the pleasure of actually giving the award! YVONNE ROGERS: Thank you. [Applause]. There it is! It is real! [Applause]. Thank you very much. And another scroll. Wow. Look at that! Thank you.

Well, I will hang this somehow around my neck, but not for the time being. I am really -- words fail me. I'm just touched that so many of you are here, and that you think I'm worthy of this award, so thank you very much. And long live

... [Applause]. Just to say that without all the many people I've worked with, I wouldn't be standing here, so thank you again, and thank you for being a great audience. I hope to see you around at some point. [Applause]. CHAIR: Hope to see you next year, and remember, nominations are about to open, so you can nominate yourself for the next one.


