Harald Leitenmüller - Could AI help to build back a better world?

[Music] What is artificial intelligence, where does this technology stand today, what can artificial intelligence do, and what will it never be able to do?

I prefer to call this "intelligent technologies". We have been looking at AI with the goal of helping people achieve more with intelligent technologies: to perceive more, to be able to do more, to participate in society. For me, the term "artificial intelligence" is a buzzword that contains everything and nothing. There are of course already very concrete subfields, such as machine learning and deep learning, that are part of AI, but strategically considered they are, taken together, intelligent technologies. I very much like the definition that says: everything that a human develops and uses to do tasks, and that a human would describe as "intelligent".

Does this perhaps also explain why people are afraid of being left behind by machines?

Well, if you consider a human only as someone who does tasks, tasks in a niche, then of course he or she competes with algorithms and intelligent technologies.

But if you take the human being as a whole, where emotional intelligence, creativity and the like come in too, then of course we are far away from this competition. The term artificial intelligence comes from the idea of a general intelligence that can do everything, so I think the fear is understandable. You know movies like "Terminator"; of course such movies trigger this fear. But these are science fiction dystopias, and we are far away from them.

What are the most important applications of artificial intelligence?

It is interesting that when you ask people "Do you use intelligent solutions, do you work with them?", most say: "No, I don't use this."

If you go into detail, you see that they use, for example, online video streaming services such as Netflix or Amazon and have certainly clicked on a suggested video and watched it. That is done by machine learning. Or if someone buys a suggested product on Amazon, there is machine learning behind it. Also in medicine: an X-ray diagnosis is preprocessed by image recognition and refinement algorithms. Or cell phones: photos taken on cell phones, communication solutions, virus detection, things like computer crime detection. All of this has already been in use for some time.
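
As a rough illustration of the kind of machine learning behind such suggestions, here is a minimal item-based recommendation sketch; the interaction data and the cosine-similarity heuristic are invented for illustration and are not how Netflix or Amazon actually work:

```python
# Minimal sketch of item-based collaborative filtering, the kind of
# technique behind "you may also like" suggestions. Data is invented.
import numpy as np

# Rows = users, columns = items; 1 means the user watched/bought the item.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two item vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / denom if denom else 0.0

# Item-item similarity matrix over the interaction columns.
n_items = interactions.shape[1]
sim = np.array([[cosine_similarity(interactions[:, i], interactions[:, j])
                 for j in range(n_items)] for i in range(n_items)])

def recommend(user_row, top_k=2):
    """Score unseen items by similarity to items the user already consumed."""
    scores = sim @ user_row
    scores[user_row > 0] = -np.inf          # do not re-recommend seen items
    return np.argsort(scores)[::-1][:top_k]

print(recommend(interactions[0]))  # ranks the items user 0 has not yet seen
```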

Where, then, does the transition begin, where people start to fear artificial intelligence?

I see this on two levels. One is the fear of the unknown: you don't know what it is concretely, how powerful it is, what it does, how it is developing and what it means for you. The second is the fear that these systems now make concrete decisions for the human being, decisions that affect people directly or indirectly. People fear that the machine knows more about us, takes more criteria into account in the decision-making process, and thereby perhaps decides against our will, against the gut feeling we would have.

How far along are these systems? Do they make autonomous decisions or do they only support people in making decisions?

We have thought about a set of rules for what makes sense to decide autonomously and what does not. I have defined three situations for myself. First, there are situations in which quick decisions have to be made. For example, a self-driving car in a dangerous situation may have to take the initiative within a millisecond to prevent damage. A human is simply not able to make a good decision in such a short time. By the way, in a crisis situation a human does not decide on the basis of ethical considerations either, but on instinct and his very own experiences, and that decision may not be morally correct.

The second situation is restricted communication. Imagine sending a robot to Mars, where communication takes 30 minutes or may not be possible at all; then this device must be autonomous on site. It must function, make decisions on site, and be able to move or act there. The third, I think, is actually the most interesting area: taking over the jobs of people who work in dangerous conditions.

Imagine a nuclear accident in a power plant. You should not have to send a human being there to measure the radiation; a robot can do that autonomously. So this is an area where it makes sense to replace jobs with machines. Those are the three areas for me. We have developed decision frameworks, ethics guidelines one could say, that cover six rough criteria areas, plus a question catalog that tries to clarify basic things:

What is this innovation about? What about competence? Is it properly scoped? And so on. Whenever one of these questions is not answered satisfactorily, our ethics committee is entrusted with it.

What experience does Microsoft have with the use of this ethics framework?

We address two levels here. One is decision-making: do we want to do something, does it make sense, is it appropriate or not? The second is how we want to do something. We call these ethics design guidelines, or value-sensitive design, so that they can always be taken into account when implementing something.

Think of a chatbot. How should the chatbot behave in a dialog? There are rules; for example, I have one written down here: "Humans are the heroes". This means: respect the human being; he or she may be right and deserves respect. Respect is, of course, the abstract goal here. "Know the context": what is this about? The information must really be available.

What is the context of this dialog? Or the balance between emotion and intelligence: you know the problem where you are having a technical discussion, someone does not understand you, and he keeps arguing on an emotional level. A chatbot should recognize this: now is the moment where the context has been lost. It is also very important that the system continues to develop over time, otherwise people lose interest in the dialogue with the machine, feeling it is too stupid or doesn't understand them. These are simple design guidelines that our engineers take into account when they develop such systems.
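
As a toy illustration of how such a guideline might be operationalized, here is a minimal sketch of the "balance emotion and intelligence" rule; the keyword list and function names are invented stand-ins for the trained sentiment and intent models a real chatbot would use:

```python
# Toy sketch of the "balance emotion and intelligence" guideline: detect
# when a dialog has drifted to the emotional level and respond with
# acknowledgement instead of more technical detail. The keyword heuristic
# is a stand-in for a real sentiment/intent model.
EMOTION_MARKERS = {"frustrated", "angry", "useless", "annoying", "hate"}

def is_emotional(message: str) -> bool:
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & EMOTION_MARKERS)

def reply(message: str, technical_answer: str) -> str:
    if is_emotional(message):
        # "Humans are the heroes": acknowledge the person first,
        # rather than arguing on with technical content.
        return "I understand this is frustrating. Let's take it step by step."
    return technical_answer

print(reply("This setup is useless, I hate it!", "Set the flag to true."))
print(reply("How do I enable the flag?", "Set the flag to true."))
```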

You mentioned the fear that machines could make decisions autonomously instead of humans. To what extent is there a risk that people simply act more and more on the suggestions of machines? Imagine a doctor who makes decisions under time pressure, not only in the intensive care unit but also with perfectly ordinary insured patients, with only a few minutes per patient. Let us imagine an AI system did the prescreening and suggested which medicine to prescribe for this patient. Under the time pressure a doctor is under, will he really make a different decision, or just put a check mark under the suggestion?

I would look at it this way: on the one hand we should, at least for now, avoid fully automatic decisions in such situations and under such circumstances. On the other hand, I think there are already systems that make better decisions through machine learning. They make better decisions by providing more information.

If you think about it, machine learning is actually nothing other than making, out of the data you have, the data you would like to have. This generated data is of course only correct with a certain probability, but it gives me more information. A doctor can get more information, at least in theory, and make better decisions. Especially in the health sector, I believe the potential for better decisions is enormous. We already know from studies that, for example with chronic diseases, such systems have a much better success rate than a doctor deciding alone on the basis of his own knowledge.
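
What "making data out of data, with a certain probability of being correct" can look like in code, as a minimal sketch with invented numbers (using scikit-learn's logistic regression, which reports a probability rather than a hard answer):

```python
# Minimal sketch: a model turns data we have (measurements and known
# outcomes) into data we would like to have (an assessment for a new case),
# with an explicit probability attached. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Data we have: two measurements per past patient, plus the known outcome.
X_train = np.array([[5.1, 1.2], [4.8, 0.9], [6.9, 3.1],
                    [7.2, 3.5], [5.0, 1.0], [7.0, 3.3]])
y_train = np.array([0, 0, 1, 1, 0, 1])   # 0 = healthy, 1 = condition present

model = LogisticRegression().fit(X_train, y_train)

# Data we would like to have: an assessment for a new patient. The model
# does not return certainty, only a probability that the label is correct.
new_patient = np.array([[6.5, 2.8]])
prob = model.predict_proba(new_patient)[0, 1]
print(f"Estimated probability of condition: {prob:.2f}")
```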

A second topic is uses of data and algorithms that seem far-fetched, where the population or the individual thinks: "Where do you get the idea to use this data?" I read recently that an insurance company's algorithm uses data about how often someone completely drains his cell phone battery. Draining the battery is used as an indicator of whether this is a reliable person who plans his life, charges his battery in the morning, and will also pay off his loan. A very misappropriated use of data which, however, apparently has a high predictive quality. I can understand quite well that people don't want that.

I think that especially in Europe we have a relatively good legal basis with the General Data Protection Regulation: careful handling of personal data is required, and there are consequences for data abuse. On the other hand, we are now getting into areas where not everything is regulated, and that is exactly where ethics plays an important role.

I understand it as a value-based framework that should enable every human being to make decisions about the use of data even where it may not be sufficiently regulated. The person should be able to make a decision that minimizes the risk of his data being abused or used for harm. That is the topic we are currently focusing on: that as many people as possible understand what they are doing here, what they are deciding, what impact AI has and what consequences it could have for the individual and for society.

Does this mean that the competent handling of technologies and deciding on high tech and new technologies is an educational task?

On different levels, yes; I would say on two levels. One concerns those who develop and design systems; they need completely new competencies. That is the merging of social innovation and technical innovation, so that you get broader thinking: designing solutions rather than reinventing them; developing, synthesizing.

This becomes more meaningful, so skills like abstraction will become more important. The second level is the use. Of course the designer gives the agent a purpose and defines the field of application and what the performance of the system can be. But still, even a good system can be misused.

Here too, users have to develop new competences. It is like driving: a driver's license alone does not mean you are ready for the road; the driver also has a responsibility to use the car sensibly.

You said there is a contrast between developing and designing. Where do you see this contrast?

Well, we often talk about innovation. In my opinion there are three innovation concepts: invention, discovery and design. For me, invention is engineering: mathematically calculating the optimum. In electrical engineering you have the synthetic generation of circuits; that is a classic. Discovery involves a bit of coincidence: I discovered something new.

And design is a broad approach with many, many parameters, such as environmental aspects and social aspects. You need much more general solutions. A colleague once said that we do not need simple solutions; we need generalist solutions that everyone can benefit from. That is design; that is another level of abstraction.

It is a category of solutions, not just one single solution.

To what extent are these three different innovation concepts applied at Microsoft?

I believe only design is left. You know terms like design thinking: you have to engage with the customers, sit down with them and think about what the innovation means.

Not only on a technical level, but really on a social level: for the business model, for the company's role in society, for the future and so on. These are relatively complex but also very interesting processes. A software developer nowadays builds solutions with a completely different engagement, because he or she understands the context in which the solution is used.

You mentioned design thinking. Many companies are already using design thinking methods.

However, some companies shy away from the complexity of such a design thinking process. You have experience with design thinking; what would you tell these cautious companies?

What I find a pity is that I think companies shy away not because of the complexity, but because they do not want to invest the time in this approach. We all have less and less time. If we lean back and look at a problem from a larger perspective, we may realize that in another context the problem does not exist at all, or that when two or three problems are put together, a different solution emerges.

I think it is a pity that we do not take the time to think. One issue is time; the other is thinking together, co-design.

Where do you see the opportunities of co-design, and where do you see the limits of co-design settings?

I see big opportunities in the interdisciplinary approach.

That is the exciting part of artificial intelligence: it is implemented in completely new industries. Think about what this technology, this concept, means for my area: for agriculture, medicine, education. Everywhere you can find a starting point for what you could do with it.

And when experts from these different areas come together to think about "What can I, or what should I, do with AI in this area?", completely new solutions come to light.

Let's talk about the future outlook. Which technologies will radically change our lives in the next five to ten years?

I have a four-step process in mind. Digitalization as a concept is an exponential phenomenon.

Within digitalization we have experienced the cloud wave. In principle this is a sharing innovation model for scalable infrastructure and reuse. It is an interesting basis for the Internet of Things, the automation of data generation, which builds on it and can be processed on highly scalable infrastructure. Then there is the whole field of big data analytics: a lot of data is available and needs to be evaluated. This grows in parallel; we develop new strategies, new concepts, new algorithms to deal with large amounts of data. Parallel to this, or perhaps as the next step, there is edge computing.

Transporting the data of millions of sensors into a large data center is not always useful, and for data protection reasons not ideal either. Edge computing sits upstream, close to the sensors; preprocessing and machine learning algorithms reduce the amount of data, and only decisions are transported to the data center. And the last interesting step is certainly artificial intelligence or machine learning itself. For me this is the automation of big data: a machine automatically runs highly scalable analyses to support decisions and to generate new data, which interestingly also leads to data reduction. Once the results are in use, each big data analysis needs less data. It is interesting that the better artificial intelligence becomes and the more abstractly it works, the less data is needed to make decisions, because the decision happens on a knowledge basis. We need a lot of data only for the learning phase; afterwards we need less again.
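
A minimal sketch of the edge computing pattern described above: aggregate raw readings close to the sensor and transport only a compact decision to the data center. The sensor model, threshold and window size are invented for illustration:

```python
# Minimal sketch of edge preprocessing: instead of streaming every raw
# sensor reading to the data center, the edge node aggregates a window
# locally and forwards only a compact decision. Numbers are invented.
import random

THRESHOLD = 75.0   # alert level for the (hypothetical) sensor value
WINDOW = 100       # readings aggregated per decision

def read_sensor() -> float:
    return random.gauss(70.0, 5.0)   # stand-in for real sensor I/O

def edge_decision(window):
    """Reduce a window of raw readings to one small message."""
    mean = sum(window) / len(window)
    return {"mean": round(mean, 1), "alert": mean > THRESHOLD}

raw = [read_sensor() for _ in range(WINDOW)]
message = edge_decision(raw)       # this is all that crosses the network
print(f"{WINDOW} readings reduced to 1 message: {message}")
```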

That will also contribute to us returning, in the future, to higher-quality information, which is what we actually need to make decisions. These are the things that will change our lives massively. If we reach the level at which decisions are made on this data basis, then we can possibly overcome the critical, big problems, the challenges we are facing now.

This coincides with some conversations I have had with other entrepreneurs. The Internet of Things is a technology that is just ahead of us. However, companies report that it is very difficult to find contexts where people can actually use it, where they have a benefit. A sensor of this size that costs only two euros, a complete computer: what for? Why should people want that, what's in it for them?

I will give you a concrete example from agriculture. It is predicted that by 2050 we will need 70 percent more food. That means the challenge will be to work our land more precisely, so that yields are optimal and environmental needs are also considered. When you look at sensors, sensors that measure soil moisture or chemical substances are relatively expensive in this environment. So you think about new types of sensors: image recognition or drones. If we use image algorithms instead of hardware sensors, it is much cheaper and can have large-scale effects. In the agricultural sector, completely different solution concepts suddenly arise.
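
One concrete way image algorithms can stand in for hardware sensors, as a hedged sketch: the standard NDVI vegetation index computed from the near-infrared and red bands of a drone image. The band values and the stress threshold here are invented for illustration:

```python
# Minimal sketch of replacing hardware sensors with image algorithms:
# the NDVI vegetation index, (NIR - Red) / (NIR + Red), computed from the
# near-infrared and red bands of a drone image. The tiny arrays are invented
# stand-ins for real imagery; healthy vegetation has NDVI closer to 1.
import numpy as np

nir = np.array([[0.60, 0.55], [0.20, 0.58]])   # near-infrared reflectance
red = np.array([[0.10, 0.12], [0.18, 0.09]])   # red reflectance

ndvi = (nir - red) / (nir + red + 1e-9)        # epsilon avoids divide-by-zero
stressed = ndvi < 0.3                          # flag possibly dry/bare patches

print(np.round(ndvi, 2))
print("Patches needing attention:", np.argwhere(stressed).tolist())
```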

Artificial intelligence triggers many fears in society, but there are a lot of hopes too. How can you deal with these opportunities and threats?

I think we should stop immediately blaming all our problems on technology. Humans make mistakes, and we blame them on technology. With artificial intelligence there is a tremendous opportunity to solve problems that are with us right now, and if we focus only on the weaknesses or the dangers, we forget this potential. We should create the conditions that help us pursue these opportunities. There are ideas like "sandboxes": protected environments where you can experiment and learn from mistakes in order to then exploit the potential. So I think it is really worth pursuing this positive approach, with the right safeguards, so that you can benefit from the opportunities and not only discuss the negative.

Thank you for this conversation.

It was a pleasure. [Music]
