AI Discussion Series - AI Technologies for Decision Making.
Okay, so today we have Professor Mirco Musolesi. Mirco leads the Machine Intelligence Lab, which is part of the Autonomous Systems research group, and he is here to talk about autonomous decision making with AI systems. Some of his research that sparked my interest, and that I think is quite related to this theme, is his work on multi-agent reinforcement learning, where he has worked on trust-based consensus, decentralized coordination, and modeling moral choices in social dilemmas. So, very excited to have Professor Musolesi; thank you for coming.

Thanks a lot for the invitation. Is the audio fine? Okay. So, thanks again for the invitation; I like this format a lot, and it is a bit new for me. I will try to highlight some points for this discussion, because this is a discussion series, so the idea, I think, is a conversation around these themes. Here is a bit of an agenda for the talk. The talk essentially highlights some ideas and thoughts about decision making in AI, and the theme of this discussion is AI threats, broadly defined, so I will try to bring up some problems, challenges, and interesting areas of research in this space. The talk itself is not about technical details; I am happy to come back and give a talk about those, or some of my students, some of whom are here, can come and give talks about the more technical aspects of this presentation.

So, the problem of decision making in AI. In my opinion, the general discourse on AI threats has in recent years been driven a lot by journalists and the press. Not only by them, clearly: there are also efforts in academia and in think tanks and so on. But the public discourse has been driven a lot by very specific
case studies, let's say problematic situations that we have encountered. Some are very serious, like the use of COMPAS for sentencing decisions, which is the one on the right here. There is the famous case of Microsoft, when they tried to release a chatbot that was not really fine-tuned. And then you also get very weird things: this was on the BBC some years ago, an "AI teacher" appointed at a UK boarding school, and when you read it, it is just a chatbot answering students' questions when they have a question about, I don't know, the timetable and so on. But then people say, oh, now education is driven by machines, there is no human touch anymore, and so on. So sometimes these are exaggerations, in my opinion, and, as I said, there are also very serious problems, like the cases of gender discrimination in automatic decision making for financial decisions and so on.

So what I am saying is that it is a mix. What I see as potentially problematic is that some of these discussions are not really evidence-based. For us, being at a university, the importance of evidence-based discussion of these themes is, in my opinion, extremely high; this is probably the first point I want to make. Then, clearly, decision making in general is also quite a technical problem. Some of you work on, say, reinforcement learning, on problems that are highly technical, so explaining the highly technical aspects to the public is, in my opinion, another thing that is extremely important in this discussion, and so is basing the discussion on
the technical limitations of these technologies as well.

In my opinion it is also a very interesting area, and sometimes it is also quite funny. I don't know if you have seen the video from The Economist; it is this one, let's see if it works. You can see here, it is moving, yes. It is an agent-based simulation of an economic system, decision making in action essentially, and quite funny in terms of visualization; there is no audio, but that is fine. The idea is that you might also use this kind of thing for very serious purposes, for example in economics. And this is the AI Economist; you have probably seen this paper. Why am I highlighting it? Because it is, in my opinion, a good example of the application of a very well-founded theory to a very precise and specific problem. At the same time, and this is another point that I want to make here, doing research that is impactful, starting potentially from a real problem, in this case taxation, I think is important. So again, to stress: evidence-based work on important problems. And when we consider these problems, we should also consider the technical aspects. This is, I would say, the framework of the talk around these issues.

As I said, decision making is not something that we invented as computer scientists, even if we have that tendency to take the credit. It has a long tradition: John von Neumann, who was also a computer scientist, a physicist, and a bomb maker, essentially started
the discipline more or less in the 40s. And again, this is an example, in my opinion, of why we should be open to other disciplines. The other message that is quite important when we talk about these issues, not only the problem of decision making but also the risks associated with it, is that it is very important to interact with people from other disciplines, because what we do is most often linked to, or might even be based on, the work of other disciplines. And this is an area where we can also give back: these are extremely serious matters, and I think the work that we do can have an impact back on other disciplines.

One of the most important applications of decision making, for sure, is in conflict resolution, or peacekeeping. What you can see here is The Strategy of Conflict, a very good book by Thomas Schelling, who was also a Nobel laureate in economics, and which is essentially about the application of game theory to conflict resolution. So the work that we do has importance here as well. And, as you can imagine, this is probably the risk I was talking about when we were organizing this talk: one of the potential risks of work in this space is that we might end up applying our technologies to domains, such as conflict and strategic decision making, that have a direct impact on decisions involving humans, possibly a very large number of people, in situations that can be extremely critical. So, thinking about the technology that we have now, this is, in my opinion, not a risk exactly, but something that we should think about: the technology that we are developing for decision making can have applications in economics, okay, but also increasingly in
situations of conflict. Another application that I want to highlight is in other fields, and again it is related to this idea of thinking about the problem more broadly and learning from it. So here I put, I don't know if you have ever heard of him, George Price, from UCL. He wrote a paper, in the 70s, 1973 actually, on mathematically modeling conflicts in animals, and by extension in humans. The point that I want to make is that the decision making process can be modeled, and by modeling it we can also think about potential risks and misuse. What I am saying sounds trivial, but reflect on the fact that there is a tradition in other disciplines, one being international relations, another being biology, a long tradition of understanding how certain situations might lead to conflicts, problems, unexpected behavior, and looking at this type of work, I think, can be extremely helpful for everyone. The AI community sometimes tends, not to forget exactly, but to have a certain way of seeing these problems without considering the existing literature. And this tradition, which I think is extremely interesting and important in this space, is a tradition of studying human behavior, or animal behavior, using computer models. The interesting aspect here, related to the theme of this discussion, is that we might use these models to try to understand whether any unexpected behavior might appear in the system under consideration, which I think is very interesting. In particular, George Price was interested in cooperation, and cooperation is a big theme, as you know, in AI; cooperative AI is a big area at the moment, especially in multi-agent systems.
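The kind of conflict model from that literature can be made concrete with the classic hawk-dove game. Below is a minimal sketch of replicator dynamics for it; the payoff values, learning rate, and function names are my own illustrative choices, not anything from the talk:

```python
def hawk_dove_step(p_hawk, value=4.0, cost=6.0, lr=0.1):
    """One replicator-dynamics step for the hawk-dove game: the hawk
    fraction grows when hawks earn more than the population average."""
    # Expected payoffs against a population with hawk fraction p_hawk.
    payoff_hawk = p_hawk * (value - cost) / 2 + (1 - p_hawk) * value
    payoff_dove = (1 - p_hawk) * value / 2
    mean_payoff = p_hawk * payoff_hawk + (1 - p_hawk) * payoff_dove
    return p_hawk + lr * p_hawk * (payoff_hawk - mean_payoff)

p = 0.1
for _ in range(500):
    p = hawk_dove_step(p)
print(f"equilibrium hawk fraction: {p:.2f}")  # converges toward value/cost
```

With these numbers the population settles at the mixed equilibrium value/cost = 2/3: fighting is costly enough that pure aggression cannot take over, which is exactly the kind of qualitative insight such models give about when conflict stabilizes.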
And Price himself is quite an interesting figure: he was a researcher here at UCL, studying cooperation, and at a certain point he decided essentially to leave his research, sell everything he had, give everything to the poor, and live in a commune. It is also a very sad story; in the end he took his own life, unfortunately. But it is a very interesting story of a researcher, and since I mentioned it, I also want to suggest this book, called The Price of Altruism. I think it is quite relevant for those of you interested in these themes of cooperation and potential conflict; it is a good book, and something that, in my opinion, we should rediscover more and more.

Going back now to conflicts among humans: what is the risk that I wanted to highlight? The risk, which is also an opportunity if you want, is the set of potential problems associated with using AI techniques in, say, the situation room. Increasingly, when you are going to make a decision, the decision making is supported by AI technologies, and there is a potential risk that the unexpected and unpredictable behavior that emerges from this technology is not considered in taking the decision. I am not talking here about long-term existential risk; I am talking about medium-term and short-term risk: the problem that you might use technology that we do not fully understand. We understand it, but, given its complexity, understanding the output of these algorithms might be difficult, and it might be that the actual misbehavior of these systems is difficult to understand or to model. So,
going back to this idea of the importance of modeling these systems: I think this is becoming more and more important now that these systems are really used to make decisions, in the situation room, but also in the military context and so on. This is something that, in my opinion, is very interesting, and we are in a way responsible for it, so contributing work in this space, I think, is important. And again, a sort of book club: another book on this, on decision making and autonomous decision making in the military context, is this one, by Paul Scharre, who works in a think tank in Washington. It is a very good book on the use of AI in the military context, and he also has a new book out on AI and military decision making.

Finally, and I mention this because I think it gives us a direct example of the risk I was talking about: AI is extensively used now in finance, as we know, and we already see what happens when you have unexpected behavior that emerges from the interaction of different systems. It is a dynamical system, and we can have emergent behavior from this dynamical system that is unexpected. The chair of the Securities and Exchange Commission, when asked, said that the next financial crisis could come from AI, in the sense that you can have these kinds of cascading effects, which are extremely difficult to model. This is an example of what might happen in other fields too: we see it now in finance, but it might happen elsewhere as well.

So then there is a question: it seems that a computer can do everything, but there are things that computers can't really do yet, and
you might be the people who essentially enable computers to do them. In this context, and here I am referring to the specific context of potential emerging challenges in decision making, there was a philosopher, Hubert Dreyfus, who was quite critical of good old-fashioned AI; this book is from 1972 actually, and it is extremely interesting, critical of all the people of the 60s who were working on logic-based AI. You are probably aware of this book, I don't know, but in it, and in his later work, I think he pointed out some very interesting things about computers. He identified four assumptions that I want to bring to your attention.

One is the biological assumption: that the brain processes information in a discrete way, by means of a biological equivalent of on/off switches. So, in the end, what we have is a computer that is in any case limited by the type of representation that we use; that is probably not that important for us, but I wanted to list it. Then there is the psychological assumption: that the brain can be viewed as a device operating on bits of information according to formal rules. This is another assumption that was made, again not that important for us, but it is the way computers work.

Then there is the epistemological assumption: that all knowledge can be formalized. That is something that, in my opinion, is difficult to do. Okay, now we have foundation models and so on, and we are kind of assuming that we are able to get all possible knowledge into these systems, but that is not always the case, and assuming that it is the case is problematic, especially in situations that can be risky. So we have this epistemological assumption that is not always true, and there is a risk, I would say, that
we are assuming that all the knowledge is contained in these models, which might not be the case, and this can be a cause of potential issues; being aware of this, in my opinion, is important. And the final one is the ontological assumption: that the world consists only of independent facts that can be formalized with precision. There is this idea of independence: the world is very complex, and assuming that we can formalize things without considering their interactions is probably problematic. This is, I would say, in the context of the knowledge that is necessary for taking a decision. And, this is something that I added: if you want a reinforcement learning example, when you make a decision the state might be only partially observable, and the things in the state can also be intertwined. This is just another aspect that I want to highlight; when I was preparing this talk I thought it is an important aspect that sometimes we don't think about, and I wanted to bring it to your attention for discussion; these are just my thoughts.

The interesting thing is that Dreyfus also wrote a book 20 years later, a kind of updated edition, called What Computers Still Can't Do, which is quite interesting because by the time that version came out, essentially all the revolution of neural networks and connectionist systems had taken place. So in that book he has, in my opinion, a somewhat more positive view of AI, especially in relation to the possibilities of deep learning and neural networks, but also reinforcement learning. It is very interesting, and you can find the text quite easily; it is quite good to look at what people were thinking while the technology that we have right now was being developed.
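The partial-observability point can be made concrete with a tiny example. In a POMDP view of reinforcement learning, the agent never sees the hidden state directly; it maintains a belief, a probability distribution over states, updated by Bayes' rule from noisy observations. This is a hand-rolled sketch with an assumed binary state and an assumed sensor accuracy, not code from any particular library:

```python
def update_belief(belief, obs, accuracy=0.8):
    """Bayes update for a binary hidden state given a noisy observation.
    belief = P(state = 1); the sensor reports the true state with
    probability `accuracy`."""
    like1 = accuracy if obs == 1 else 1 - accuracy      # P(obs | state = 1)
    like0 = 1 - accuracy if obs == 1 else accuracy      # P(obs | state = 0)
    post = like1 * belief
    return post / (post + like0 * (1 - belief))

b = 0.5  # start with no information about the hidden state
for obs in [1, 1, 0, 1]:
    b = update_belief(b, obs)
print(f"P(state=1 | observations) = {b:.3f}")
```

Even with a fairly accurate sensor, the belief never reaches certainty, which is one way of seeing why the epistemological assumption (that all relevant knowledge is in the model) fails in practice.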
This was around the time when reinforcement learning itself was being developed, so it is very interesting.

Okay, so, the problem of decision making machines, just to summarize a bit. Decision making machines are, in my opinion, one of the biggest and most interesting problems in AI. But there is a question: decision making about what, and for whom? If you think about why we are using these technologies: it can be for helping humans in their own decision making, for helping humans take their own decisions, in order to act individually, but also for acting as a group, or for collaborating with other humans. So there is this aspect of collaboration as well, which is very important.

Another aspect that I want to stress is the difference between decision making in humans and decision making in machines, and the fact that we have interaction between the two. We struggle to understand how humans make decisions; we may design the machines, but we might not really know how the machines behave, and this is a problem when using a system for decision making in the situation room, because we don't know exactly what is going on. A lot of people talk about human-in-the-loop, and it is good: if you need to do labeling, human-in-the-loop is very good; if you want to do fine-tuning, it is very good. But increasingly we also have machine-in-the-loop systems, systems that are helping us to do things, or taking decisions for us; I don't know whether those are really the same thing. I think there is not a lot of attention on designing systems where the human, the perception of the human, and the way the person uses the machine are considered, and this is quite important if you want to give the machine the possibility of taking decisions, of being
autonomous or semi-autonomous. And this goes back, if you want, to the tradition of studying potential existential risks in this field. Are current machines fully autonomous? In my opinion, no, at least under a strict definition of the term, which in my opinion is the right one: machines are not able to set their own goals, and this, at least in the current technological scenario, prevents certain types of risk from materializing, because machines cannot define their own purpose, their own goals. Probably some people disagree, but at least in the current technological scenario this is the case. And of course there is a lot of very interesting work on safety for the situation where machines are fully autonomous, but at the moment there is this step that, in my opinion, has not been taken, and this is something we should be aware of. This is my point of view, and I am very happy to discuss it with you; I don't want to be controversial, but I think this is the current landscape.

Okay, so let us try to look at what we can do, and at interesting things we can do. These are challenges in dealing with potential risks, with potential negative outcomes of these technologies, but they are also great opportunities to do good, impactful work, not only for dealing with the risks but for doing interesting, impactful, fun research. There is always this duality of use and misuse of these technologies, and I think we can really build better technologies if we treat these risk problems as opportunities for building better technology, not being scared of the technologies or of their use, but being very aware, as I
said: evidence, technical knowledge of the technology, understanding that these systems interact with humans, and understanding their limitations. There is a lot of space here to do very interesting work, and I will now try to list some of it briefly. I think you said 40 minutes, more or less? Yeah, okay.

So, one area, which many of you work on and we are also working on, is the problem of building systems that are cooperative among themselves and also with humans, and maybe using these systems as a basis for positive decision making, for example in a diplomatic context, an international relations context, and also in very problematic situations where you have institutions that are difficult to define, where you have different beliefs and so on. Cooperative AI: I am not going to spend a lot of time on this; it is a very interesting topic, and as I said, some of my students can come and talk more about it.

And there are a lot of interesting problems here, especially now, because, as we know, there are situations where we have a finite amount of resources; typical cases are the environment and natural resources. This is also a setting where we might have competing systems, systems that are competing for resources, collaborating or fighting, if you want, over a certain type of resource that is finite. What I am saying is quite generic, but: if you have different planning systems, or different systems that make allocation decisions, and these multiple systems are driven by individual goals, then you might end up with the tragedy of the commons, where you have a non-ideal exploitation of resources.
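The tragedy-of-the-commons dynamic is easy to reproduce in a toy simulation. All the numbers below (harvest fraction, regeneration rate, capacity) are invented purely for illustration; the point is only the qualitative effect that individually modest extraction becomes collectively ruinous:

```python
def simulate_commons(n_agents, harvest_frac=0.05, regen=0.10,
                     stock=100.0, capacity=100.0, steps=100):
    """Shared resource: each self-interested agent harvests a fixed
    fraction of the current stock; the stock then regenerates toward
    a fixed capacity. More agents push extraction past regeneration."""
    for _ in range(steps):
        for _ in range(n_agents):
            stock *= 1 - harvest_frac               # each agent harvests in turn
        stock = min(capacity, stock * (1 + regen))  # capped regrowth
    return stock

print(f"1 agent:  final stock = {simulate_commons(1):.1f}")
print(f"5 agents: final stock = {simulate_commons(5):.1f}")
```

With one agent the stock stays at capacity; with five, extraction outpaces regeneration and the shared resource collapses toward zero, even though each agent's individual harvest rate is unchanged.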
I am talking in very generic terms, but this can be applied to markets, for example financial markets; that is a typical case. Automated exploitation of resources is still, in my opinion, an unsolved problem in economics: designing a regulatory system in which decision making systems lead to a positive outcome for society, and for the group of individuals using them, is a big problem. And again, in case you don't know the area, I put up a paper: this was a survey on cooperative AI that came out some years ago, and if you are not aware of the area it is a good introductory survey, just as a pointer.

Then there is the problem of making decisions while considering different ethical and moral frameworks. This is work that Lisa, who is here somewhere, is doing, and she can come and give a talk about it: the problem of trying to find a way of embedding moral values and ethical constraints in decision making systems. This is a super interesting problem, because you might have different ethical and philosophical frameworks that potentially need to interact, and I think it is unsolved; it is very hard to really model decision making that is underpinned by moral values. It is an open area; Lisa is doing a PhD on this, but I think there is a lot of interesting work to do, and it is also an area where interaction with other communities, the humanities and philosophy, is quite central.

And then, I would say, the final point is about regulation. Regulating autonomous machines for certain types of application might be a solution; regulation is the thing we are all thinking about, but there is a problem. And I think
it is a problem that a lot of people are thinking about, and I want to highlight it as a final point; I really don't have an answer, I just want to give this point to the audience. The problem, which we haven't solved, is related both to cooperation and to the ethical frameworks: the fact that there might be asymmetric behavior and asymmetric regulation. What if one country decides to regulate while another country does not? What happens to the country that regulates these types of technologies, or follows certain moral standards, especially at the international level, in diplomatic scenarios, in military scenarios? This is, I think, a very big risk, and I don't have an answer, because imposing regulation is something that we have tried as humans, and it looks like we are not really that good at it. It is challenging, but it is something I would really like to hear your thoughts about. There is also the potential competitive advantage of not respecting the rules: what happens in that situation? This might be very risky. And are there ways to create strong incentives to comply?

Then there is another aspect: besides this problem of asymmetry, there is the problem of emergence, if you want, broadly defined. And here, I liked this paper, Sparks of Artificial General Intelligence; I know a lot of people didn't like it, but I liked it. What they were essentially claiming is that there is some very interesting emerging behavior, and in my opinion there is indeed some interesting emerging behavior in foundation models. But this emergent behavior is very difficult to model. As I always say to the students, it is kind of becoming a sort of experimental physics: you have a system, you are observing the
system, and you try to see what happens if you probe it; it is really like experimental physics. And modeling emergent behavior is difficult. We know that emergent behavior can be very difficult to model, especially when you have interaction: interacting agents, interacting particles, interacting humans, interacting states. It is a hard problem, and it is essentially the general problem of emergence, which has a big tradition, the tradition of studying emergence in complex systems. This is another classic paper, from Science in 1972: More Is Different. It is a seminal paper by Anderson about the emergent nature of complex systems and their behavior, in which he discusses how applying the fundamental laws is not easy in systems with many interactions. So more is different, and more is different also in AI. I think there is a lot of potential work to be done in understanding emergent behavior in complex systems; it is still an active area, because we haven't really figured out how to model these systems, except in very specific cases. We have some models, but they make very strong assumptions, and we don't want such assumptions for systems that are deployed widely in the real world; we would like to understand how to deal with these potential problems.

So, more is different, and at a certain point you might get behavior that is completely unexpected: chaotic behavior. This, again, comes from outside AI, from the study of dynamical systems. Period Three Implies Chaos is another seminal paper in this field, about the emergence of chaotic behavior: you can have behavior that looks random but is not, a sort of deterministic behavior in these systems. And I think we are just at the beginning in terms of understanding how to think about such systems.
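A standard minimal example of this deterministic-but-seemingly-random behavior is the logistic map, the setting of the Li and Yorke result just mentioned. The parameter values below are conventional illustrative choices, nothing more:

```python
def logistic_orbit(r, x0=0.2, warmup=500, n=8):
    """Iterate the logistic map x -> r*x*(1-x) and return the orbit after
    discarding a transient. A one-line deterministic rule, yet for r near 4
    the trajectory looks random."""
    x = x0
    for _ in range(warmup):       # discard the transient
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print("r = 2.8 (settles on a fixed point):", logistic_orbit(2.8))
print("r = 3.9 (chaotic regime):          ", logistic_orbit(3.9))
```

The same one-line update rule produces a fixed point at one parameter value and an effectively unpredictable orbit at another, which is why observed behavior alone is a poor guide to a system's underlying complexity.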
For foundation models, for example, this can really lead, as I said before, to unexpected situations. These problems of modeling complex, chaotic behavior have also been studied in the context of financial systems, and have been used to understand, or to try to understand, past crashes, so I see quite a useful parallel there for studying this problem.

Okay, so, regulation: in particular, the regulation of algorithmic financial markets shows that some sort of protection is possible. We can use a framework of rules, we can have automatic breaks in the system, we can focus beforehand on what can go wrong, and we can analyze it a posteriori, a kind of forensics of potential misbehavior of the system. I stress this example of financial markets because there has been a lot of work around these types of systems, and algorithmic decision making systems are already deployed there, so I think it is a good area to look at. These mechanisms are not perfect, but I think there are lessons that can be drawn from them.

And again, about this kind of unpredictable behavior: there is an aspect of these systems where the unexpected behavior comes from some form of, let's say, creative behavior. The question is: is it really creative? I put up a paper on this that I wrote with George, who is here; if you want to talk with him, he can give a talk. There is this quite interesting point that these systems show a sort of emergent behavior that looks creative, and if it is creative, then it is very difficult to regulate.
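The "automatic breaks" mentioned above can be sketched as a toy circuit breaker. The 7% threshold mirrors the first trigger level of the US market-wide circuit breakers, but the price stream and the halting policy here are entirely made up for illustration:

```python
def run_with_circuit_breaker(prices, drop_limit=0.07):
    """Process a price stream and halt trading when the price falls by more
    than `drop_limit` from the session's reference (opening) price."""
    reference = prices[0]
    accepted = []
    for p in prices[1:]:
        if (reference - p) / reference > drop_limit:
            return accepted, "HALTED"   # a priori rule stops the cascade
        accepted.append(p)
    return accepted, "OPEN"

# A cascading sell-off: each algorithmic reaction pushes the price lower.
stream = [100.0, 99.0, 97.5, 95.0, 92.0, 88.0]
accepted, status = run_with_circuit_breaker(stream)
print(status, accepted)
```

Real exchange rules are far more elaborate (multiple trigger levels, timed halts, reference-price conventions), but even this toy version shows the pattern: a simple a priori rule that bounds how far a cascading interaction can run before humans intervene.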
So is it really creative or not? We think it is not really creative, but it looks creative, and modeling something that is new and novel is very hard, in the context of foundational models for example. Okay, just to conclude this part: in the current technological scenario, systems set sub-goals, not goals. For that reason I think they are not fully autonomous, and a certain type of risk is not there: we do not have unpredictable goals. This is a positive; it makes the problem easier. The problem will come when we have a situation where the goals themselves might be set by the system, which is where the creative aspect comes in again: if the system sets a new goal by itself, that might be a problematic situation. Some people might say this is already happening, and I think about the Stanford paper on interacting agents, where an agent sets a new goal; but is it really a new goal, or is it sampled from an existing distribution? This is not to dismiss the work in this space, which in my opinion is super interesting, but at least at the moment our problems are related to sub-goals towards a goal that is predefined; everything will change if we have systems that also set goals by themselves. Okay, so what can we do now? As I said: the importance of evidence-based policy, which is the first thing I want to summarize; the importance of being involved in this problem as scientists, because there is a lot of interesting work to be done for the public good; the importance of discussing these matters with people from the humanities, the social sciences, and so on; the importance of working on problems that lead to positive outcomes, which again is something we should focus on; and finally the importance of having a positive view of these technologies. I think it is also our role to make sure that policy makers will think about technology in a positive way. There
is a potential risk, which in my opinion is happening with the AI Act, that these technologies are seen as dangerous, or as only problematic, because a lot of the discourse was driven by the negative cases we saw. Trying to think about this technology in a positive way, while being very aware of the risks that I listed, is in my view fundamental. And this is not the Panglossian view. I don't know if you know Pangloss: in Voltaire's Candide, Pangloss was a philosopher who kept repeating that we live in the best of all possible worlds. We do not live in the best possible world, so ours is not the point of view of Dr. Pangloss from Voltaire's Candide. As an aside, I also asked ChatGPT to reframe the problems of AI from the point of view of Pangloss; I suggest you try it yourself, it is quite funny, and it is quite possible to do these days. But anyway, the point is not to forget about these problems; it is to be aware of them. And this image, by the way, is automatically generated; it is not a painting. I think we should have a positive view, try to enjoy what we are doing, and also have some fun, so I will finish my talk with this completely nonsensical generated image. Okay, good, and we finish here. I wanted to thank the people in my lab, some of whom are here; as I said, we are happy to give more technical talks about these topics, since today was, let's say, very high level, about general problems. And also a commercial: I have a postdoc position, so if you are interested in a postdoc, please contact me. The ad will be out soon, but just contact me if you are interested. And yes, that's it.
These are my contact details. Okay, thanks a lot.

It's run a bit long, but we've got time for some questions; feel free to head off if you want to. I'm going to try and use my phone as a microphone for questions. Please keep questions short if you can. Okay, should I kick off with a short one? You mentioned a lot of these things that we don't fully understand yet, and that this might cause risks in the future. Can you be concrete about what kind of risks you're imagining?

Like, for example, in terms of the things that we don't know, in terms of modeling?

Yes, that was one: you had these four points, from the book by Dreyfus, of assumptions that go into AI, and how if those fail then there might be trouble. But what kind of trouble are you imagining?

No, the problem that I imagine is, as I said, a very practical one: the problem of taking a decision. I don't know, should I stop this nuclear plant or not, given what I see happening? If I have a sensor that does not work, I do not have a full view of the nuclear plant, so I might take a wrong decision. The risk I am thinking of is that relying heavily, for planning, say for diplomatic or military action, or for industrial or economic planning, on a system that you like too much, without really understanding what is going on or understanding its limitations, might be problematic. So the risks can be economic risks, financial risks, diplomatic risks, military risks, and so on. I am thinking about these very practical problems, and I think they are very real as well.

Okay, thank you. I think my question is: if we want to inject AI into decision-making, how do we make sure that the models that we use are
actually capable of the task we want to solve, and not learning an internal model that is actually too simple? One that works for the majority of cases, or for all the test cases, because a simple relationship is enough to model them, but the moment you have an edge case, or even a unique set of inputs that is still within the training distribution, it just catastrophically fails?

Yes, exactly. That is the problem I was describing: you might not have full knowledge, in the sense that you only ever have partial information about your states. In my opinion, the only thing you can do at the moment is to have a system that has brakes, or that makes the person taking the decision reflect on what they see. My point was that there is a potential risk in delegating too much to systems that might be too simple. This happens every day, all the time, and for many tasks it is probably perfectly fine to use these systems, but there are things that we need to think about as designers: pointing out what is possible to do, also in terms of responsibility. I am pretty sure that the people who design foundation models for health are thinking about this very carefully, and in my opinion fully checking a foundation model for health is fundamental. But, as you said, maybe the model is too simple, so do we want the human in the loop, or the machine in the loop, in that case? Do we have all the knowledge in the system? Can the experience of a surgeon who has been working for 20 or 30 years be compressed into a foundation model? And also, this is something I did not say earlier, but not everything
that is in our human experience can be summarized easily and used in a dataset for a learning model. So, as you say, the model can be too simple, but the data, the view of the world, can be too simple as well.

Thanks for the talk. You mentioned this distinction between goals and sub-goals as it relates to autonomy: AI systems are not autonomous because they can only create sub-goals, while humans create goals. I guess a claim you could make is that human intelligence is a product of natural selection, which has the objective of, you know, inclusive genetic fitness, and so all of the goals that we create are kind of sub-goals of that one. What is the relevant distinction between the goals that humans create and the goals that AI creates? Is it just that the AI distribution is much narrower and more predictable?

I would say we are getting into free will here. It depends how you see it, but we set our goals day by day. I mean, there is the general goal of staying alive, and maybe of reproducing, and so on, so there is a general goal. But what I am saying is that there is also goal setting that you can attribute to free will: if you believe in free will, you can say there is goal setting, because I have the free will to set my own goals. In a machine, a goal is usually given from outside, by a human, the designer. That is the way I see the problem: machines have goals that are given externally. Does that make sense?

Yes, but I guess I am still a little bit confused about "externally", because you could say that the natural selection objective was also given externally. What makes my goals count as internal?

Okay, so, yes, I see what
you mean: you are talking about a meta-goal that is not the day-to-day activity. You can probably see it in the same way, but it seems to me that there are two things here. One is that the sub-goals are somehow dependent on the goal that is set; but as humans we are also able, potentially, to create sub-goals out of nothing, and the question is whether we really are or not: there is a creativity question here. So from that point of view, even if you see the meta-goal as given, we are still able to set new sub-goals; I know there is a bit of recursion here. The fact is that both humans and machines have general goals, say staying alive, but we are able to set new goals that are not simply sampled from something given. We do not really understand clearly how this works, but I believe, at least, that we have the capacity to set new goals, freely, without an external input, if you want to put it philosophically. People who do not believe in free will essentially say: I receive an input and I react to that input; and a machine, in my opinion, is like that. If you believe that humans have free will, there are various arguments for it; one is more mathematical: you do not have a fully observable state, there are quantum aspects, and so the decision is not fully dependent on your input. That, in my opinion, is what a human is, and it is different from a machine, which is fully dependent on its input. This is another way of reformulating it, if you want, a more mathematical one; I am happy to discuss more. But I think there is this level as well; I am thinking at a
different level, but I understand your point: there is another level of indirection, that is what you are saying. Okay.

All right, and since Anthony had his hand up: do you still have a question? I'll stay around anyway. It's probably more of a philosophical than a practical question, or even a relevant one, but when you were talking about what computers can't do, you mentioned an epistemological assumption, which sounded like something that is always precluded by the incompleteness theorem: that no sufficiently powerful formal system can be both consistent and complete. Would you say that's true?

This is a very good question, because it is related to the earlier point. Sorry, what is your name? So the question is: are we, as humans, slaves, let's say, of Gödel's incompleteness theorem, or not? I think probably we are not, and if we are not, then we are more powerful than machines in that respect. And then the question is: can you have a machine that creates truths outside the arithmetic system it has been trained in? If it cannot, then yes, I think what I said is a consequence of that as well.

Let's leave it there. So next week I think we'll have to have Roger Penrose come and settle that question.

He gave a talk here; I went to one of his talks. Fantastic, because he still uses transparent slides, with animations as well, one on top of the other.

All right, let's give Mirco a final thank you. And you said you're around? Yes, I'm around. Thank you very much, Mirco. I will record it and get it sent out as well, so thank you.
2023-10-25 07:42