Managing security in an insecure world: AI for understanding war and conflict


Thank you all for attending this session. We're going to spend the next hour talking about AI in the context of conflict, war and peace, and I've got four distinguished international panel members with me. What I thought I'd start by doing is give them five minutes each to introduce themselves and say something interesting, or maybe controversial, about the topic, before going into some questions that I've pre-prepared, and then finally opening it up to the audience to ask questions and provoke some conflicts amongst our panel members. So let me start by inviting Nick, if I may, to introduce himself.

Good afternoon, ladies and gentlemen. Can you all hear me okay? Right, great, wonderful. So, I'm Nicola Small. I work for BAE Systems as Principal Technologist and also as Technology Strategy Manager. I'm also a visiting professor at Cranfield University, where I'm assigned to the Autonomous and Cyber-Physical Systems Centre. So I guess I'm really a futurist, but unlike most futurists I occasionally get asked to prove it, and to prove it through state-of-the-art demonstrations. This has included the development and demonstration of a surrogate UAV, autonomous elements of a MALE UAV mission system, UAV airspace integration such as sense-and-avoid capability, and mixed-reality cockpits and command centres designed to manage multiple semi-autonomous vehicles. I'm really fortunate to have four quite brilliant friends on this topic: Wing Commander Keith Dear in Joint Forces Command; Lieutenant Colonel Al Brown, author of the MoD's human-machine teaming joint concept note; Dr Heather Roff, a leading AI ethicist formerly of DeepMind; and last but not least Professor Antonios Tsourdos, a leading aerospace and defence autonomous systems expert at Cranfield University.
Each of these people has really influenced my perspectives over the years. So, here are three areas of interest and concern.

The first is the ease of weaponisation of commercial off-the-shelf AI and autonomous systems technologies: drones with IEDs and offensive cyber algorithms are just a couple of examples. This is a particular concern with rogue threat actors who don't share the same values and conservatism as we do, nor do they follow the rules, so bans would be ineffective. Ease of weaponisation, because these technologies can be readily repurposed, are available, affordable and untraceable, whilst being increasingly capable. Just consider that the US DoD accounted for 36 percent of all global R&D spend in 1960, whereas now it's circa 1 percent; commercial technology dominates. So do we need a weaponisation index, and measures taken to reduce the risk from commercial technology?

My next interest is in the very small: not large weapons, or UAVs carrying weapon systems, but vehicles the size of your hand down to your fingernail; systems that are difficult to detect and counter; systems capable of having significant combat mass through sheer overwhelming numbers. Equipped with neuromorphic processing, a brain on a chip or even a gel, such systems are already emerging, promising ever higher neuromorphic performance per unit size, weight and power, making them pretty smart and pretty compact.

Last in my top three are adversarial networks: systems capable of breaking deep-learning neural networks. You may have seen pictures of school buses misclassified as an ostrich through the addition of subtle noise that you and I would find very, very difficult to spot. We rely on such systems at some risk, because we cannot yet explain them and we don't yet trust them. It is no longer a tank, it is now an ambulance, or vice versa, because the adversary has included a particular pattern of camouflage generated by an adversarial network. Worryingly, humans may not be exempt from such exploitation: recent work in a Harvard lab on monkeys has demonstrated the potential to change mental state and more. This is perhaps a more profound form of stimulation than, say, looking at an image of a beautiful tropical beach. These are just some threats we need to understand, but in some cases there may be opportunities to leverage the technology to increase the capabilities of our own forces.
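The adversarial perturbation Nick describes, subtle noise that flips a classifier's output, can be sketched with the fast gradient sign method (FGSM). The "model" below is a deliberately tiny fixed linear classifier standing in for a deep network; the weights, input and epsilon are all illustrative values, not any real system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny, fixed linear classifier standing in for a deep network:
# predicts class 1 when w.x > 0.  Weights are illustrative only.
w = np.array([0.6, -0.4, 0.8])

def predict(x):
    return int(w @ x > 0.0)

def fgsm(x, y_true, eps):
    """Fast gradient sign method: nudge x by eps in the direction that
    increases the loss.  For logistic loss on this linear model,
    dLoss/dx = (sigmoid(w.x) - y_true) * w."""
    grad_x = (sigmoid(w @ x) - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.5, 0.2, 0.1])        # correctly classified as class 1 (w.x = 0.3)
x_adv = fgsm(x, y_true=1.0, eps=0.25)  # each coordinate moves by at most 0.25
```

Each coordinate moves by at most epsilon, yet the predicted class flips; the same mechanism, scaled up to image classifiers, underlies the ostrich and camouflage examples.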

But we must do so carefully and avoid the many slippery slopes on this topic. For me this initially comes down to two things. First, what is the compelling need for speed, in which there is little scope for a human in the loop, or even on the loop, and a need to progress through the OODA loop (observe, orient, decide and act) faster than any human can? Given the emergence of machine-speed warfare, the digitised battlefield, cyber-electromagnetics, directed-energy weapons, automated command and control and so on, when does speed necessitate a machine-driven solution? Secondly, what is the risk, considering the impact of not acting, and, when we do act, the potential impact of that action on non-combatants and blue forces when things go wrong? There is a framework here I've been developing that attempts to show when automation is reckless, when there is no choice, when there are other options, and when the risks associated with things going wrong are very low. We can overlay different autonomous solutions on top of this framework to see where the issues are. This applies to security as well as defence, but it requires the incorporation of several other dimensions, many of which we'll touch on in the discussions today, no doubt, and I think then we can get on to questions of trust, predictability, reliability and explainability in the applications of AI in this particular field. Thank you for listening.

Thank you, Nick. Next we'll go across to our Australian colleague; it's my pleasure to welcome Adam, all the way from Australia.

Can you all hear me okay? My name is Adam. I work for the Australian Government, for DST, the Defence Science and Technology organisation. My background is in linguistics, so I bring a linguistic focus to the work, and the work of the team that I work with is in high-level information fusion.
We work on bringing heterogeneous data sources together to provide real-time situation analysis and prediction of events, activity-based analysis and so on. So I'd like to pick up on some of the issues that have been nicely drawn out. Some of the key points we're looking at are the reliability of information and the four V's: volume, variety, veracity and velocity. As we all know, we have a data deluge to deal with, and a lot of the information comes to us at different levels of structure: unstructured information sources, partially structured and highly structured. There are hard data sources and soft data sources. These all represent particular challenges of their own and require different techniques in order to bring the information together and provide it to analysts in real time. We need techniques that will allow the analyst to have confidence in the information.
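One standard way to make that confidence explicit, combining reports of differing reliability into a single degree of belief, is naive-Bayes evidence fusion. The sketch below is illustrative only; the prior and the sensor reliability figures are invented for the example, and real fusion systems rarely get to assume conditional independence this cleanly.

```python
def fuse(prior, reports):
    """Naive-Bayes fusion of independent binary reports about one event.
    prior: P(event).  reports: list of (said_yes, tpr, fpr) where
    tpr = P(report says yes | event) and fpr = P(report says yes | no event).
    Returns the posterior P(event | all reports)."""
    like_event, like_no_event = prior, 1.0 - prior
    for said_yes, tpr, fpr in reports:
        like_event *= tpr if said_yes else (1.0 - tpr)
        like_no_event *= fpr if said_yes else (1.0 - fpr)
    return like_event / (like_event + like_no_event)

# Two sources of differing reliability both report the event:
# a rare event (prior 0.1) becomes more likely than not.
posterior = fuse(0.1, [(True, 0.9, 0.2), (True, 0.7, 0.3)])
```

Here two moderately reliable corroborating reports lift a 10 percent prior to roughly even odds, which is exactly the kind of graded confidence an analyst can interrogate.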

How much can the analysts trust the information, and how much can they engage with it in a natural way? This draws out things like human-computer interaction techniques, as well as validation of the information. We're also investigating techniques to handle degrees of certainty and uncertainty, which requires us to investigate different ways of looking at the uncertainty of heterogeneous information within a combined data space. So as you can see, there is a plethora of different problems that we're addressing, and we don't claim to have all the answers; that's why we look out to the state of the art, to the research community and the academic community, to bring these in. Another aspect which is very important for us is trust in automated systems and the level of trust we can expect to have.

You asked us to bring out a particular topic of relevance or importance. I've heard some nice talks this morning about ethics. As you can imagine, in the field I work in we are very, very concerned, as the Australian government, about data ethics and using data in an ethical way, and I've observed that a lot of the Western liberal democracies are very concerned that we use data ethically. But is it the case that our adversaries are also so concerned with using data in an ethical way? I challenge you to think about that: how do we engage with that problem, as people who are very concerned that data is used ethically, facing adversaries who may not be so concerned? That's a very big challenge for us as well. So I think I'll leave it there and pass over to the next speaker.

Thank you. I'll now bring in one of our PhD students, who will introduce himself.

Good afternoon, everyone. My name is Caleb. I'm finishing my PhD at Warwick University, and I'm currently on a research placement at the Alan Turing Institute working on conflict modelling. I'm a physicist by background, mostly in statistical mechanics; in particular I focus on the study of complex networks. In network analysis we use graph theory to apply methods of physics to a variety of real-world scenarios. For instance, I've been working on opinion dynamics, where the focus is on studying the formation of consensus and the diffusion of opinions in a network of social contacts. I've also worked on transport networks, where you study the topology of some sort of distribution system (that can be rail networks, road networks or energy distribution systems) and try to find similarities in the topology of these systems that explain their robustness and their vulnerabilities. Another area of application of my work is the biological sciences: I work with genetic data to study gene regulatory networks or protein-protein interaction networks, all the way to the level of neural networks in the biological sense. Here as well you're trying to find similarities in the topologies of these biological systems and compare them to social or transport systems. And with all of these, I have found an application in the field of conflict modelling that I cannot really explain myself.
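The opinion-dynamics work Caleb describes, consensus forming across a network of social contacts, can be sketched with the classic DeGroot averaging model. The five-agent network and initial opinions below are toy values chosen purely for illustration.

```python
import numpy as np

# Undirected social network as an adjacency matrix, with self-loops so
# that each agent keeps some weight on its own opinion (toy example).
A = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1],
], dtype=float)
W = A / A.sum(axis=1, keepdims=True)  # row-stochastic influence weights

x = np.array([0.0, 0.2, 0.9, 1.0, 0.4])  # initial opinions in [0, 1]
for _ in range(200):                      # DeGroot model: repeated averaging
    x = W @ x                             # each agent adopts the weighted
                                          # mean of its neighbours' opinions
consensus = float(x[0])
```

Because the graph is connected and the self-loops make it aperiodic, repeated averaging drives every agent to the same value: consensus emerges from purely local interactions, which is the basic phenomenon these models study.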

At least, right now, I can't fully explain how I got there. But the reality is that conflict modelling is a very rich academic field which, in my experience, has been very interesting to work on, and it has a very natural connection to the work on complex-network analysis I've been doing with the rest of the team. So the idea of my work at the Turing Institute is to try to apply network science to the modelling and prediction of conflict in real-world scenarios. In this sort of field you have two different approaches. One is the modelling point of view: finding theoretical arguments for why war or cooperation develops between countries, or between actors in general, in the international system. The other is prediction and forecasting: using machine learning, AI and, in general, data-driven models to inform policy and to predict where the next big conflict is going to be. There have been many different efforts in that direction, and I think we will have time to discuss what the role of AI is in this sort of analysis, and what challenges AI may pose to policymakers and academics. With that, I'll move to the next speaker. Thank you.

Thank you. And last but not least, Anthony, from one of our partner organisations, Dstl.

Hello, my name is Anthony. I'm from the Defence Science and Technology Laboratory, which is part of the MoD; it's a great honour to be here, having only been with the MoD for two months. We are starting to use AI in a lot of different things. An obvious application is the defence of platforms: there's a missile coming towards a ship, how best to defeat it? My specific role is in wargaming. I should just back up and explain what wargaming is. Basically, it's trying to predict the future by having people play a game, which is some model of how we think the world works. That way we can test courses of action, and test them in the most ruthless way possible, where you have a thinking adversary trying to outwit what you've just done. There are lots of things we can do with that. For instance, we can learn how to use AI to play war games, which sounds simple because we've all heard of AlphaGo, but even with the most simple game, like Ogre, it's fiendishly difficult to get an AI to actually play; that's something we're actively developing at the moment. Also, how do you log a war game? Remember, these are very complex discussions; a lot of issues are being raised, there's a negotiation going on between partners. How do we log it? I mean looking at people's faces, seeing how they're changing, how the pattern of negotiation is changing in terms of people accepting or disregarding somebody's offer. And the thing that really impassions me is trying to make it less kinetic, to bring in more of the political and social science out there.

I should back up and explain my past. I was originally in physics, then investment banking, and then for twelve years I made documentaries, directing them for TV out in the former Soviet Union and so on. I became really concerned about the mistakes people were making on the ground, and I felt there needed to be more sophisticated thinking going on. That inspired me to retrain in political science, and I founded a company called Peace Engine. The idea was: can you model a complex social system, a society, and figure out how we can steer it towards peace? Can we run thousands of different courses of action and figure out the best way to act? Can we predict the unpredictable?
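Anthony's point that even a simple game is hard for an AI to play can be made concrete: the textbook starting point is game-tree search. Below is a minimal memoised minimax for a toy Nim-style game (normal play: whoever takes the last counter wins). It is purely illustrative and has nothing to do with Dstl's actual tooling; real wargames blow this approach up immediately because their state spaces are vastly larger.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def current_player_wins(heaps):
    """Exhaustive minimax over a Nim position.
    heaps: a sorted tuple of pile sizes.  Normal play convention:
    the player who takes the last counter wins."""
    if sum(heaps) == 0:
        return False  # no counters left: the previous player just won
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            nxt = list(heaps)
            nxt[i] -= take
            nxt = tuple(sorted(nxt))
            if not current_player_wins(nxt):
                return True  # found a move leaving the opponent losing
    return False
```

Even this toy solver explores the full game tree; scaling past a few small piles already requires the pruning, abstraction or learned evaluation that make real game-playing AI a research problem rather than a loop.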

Which leads me to applying this now in the Ministry of Defence. I want to encourage you to think about a future where we can solve global crises through understanding how societies tick, understanding the motivations of people, rather than just calling them terrorists. Let's go into the grievances; let's try to think intelligently about how to unpick complex conflicts. We can use war games for this; we can use machine learning to understand how societies work; we can take the discourse of what's going on on the radio and turn it into concept maps of people's psychology, and try to build up a much more sophisticated picture. And I believe this can help create a safer world. Take migration: why deploy a ship there if you can stop it at the root cause? If we can go to the Sahel and figure out how to increase stability between rival tribes, you can do an awful lot of good. Especially considering the UK has a lot of power here: 0.7 percent of our gross national income goes into soft power, into influence. Let's make that work far more effectively, and I think that machine learning, AI and all this can help with that. So for me there are a lot of opportunities out there to push on good old British values of human rights and a non-polarised world.

So, Anthony just answered about five of my opening questions, which is quite good, thank you. My idea is that I will continue going through my questions, but I'll perhaps pose them to the other panel members to start with. My first question was going to be: how can AI help with the understanding of complex current operating environments?

I think that AI is very often used as a very broad term for different technologies and capabilities, so the answer to your question comes down to the particular type of technology that may be best suited.
Okay, so given that we're at CogX, and AI to me appears to be a catch-all phrase for anything where one can derive value from the exploitation of data, I will recouch my question: append to AI data science, statistics, computer science, data-driven technologies. How can they help with the understanding of complex operating environments of that kind? So I will broaden your question again rather than make it a specific technology question.

I think one of the answers, from the data-driven approach, is that you can use these techniques to identify behaviours that exist hidden in the data, in other words latent behaviours that you may not be aware of; using well-structured data-processing techniques you can reveal the hidden needles in the haystack, if you like.

Sorry, can you give us an example? Yes, sure. We're doing some work on detecting illegal fishing. About a third of the world's fishing stock is taken through illegal means; this is a potential impact on our world resources and it's unsustainable. Now, you can use self-reported data, AIS data about the behaviour of ships, to detect, for example, the potential for vessels to be operating in marine zones, or where vessels turn off their AIS self-reporting.
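A crude version of the AIS "going dark" check described here, flagging vessels whose gap between consecutive position reports exceeds a threshold, could look like the sketch below. The vessel IDs, timings and six-hour threshold are synthetic placeholders, not values from any real maritime-surveillance system.

```python
from datetime import datetime, timedelta
from collections import defaultdict

def ais_gaps(pings, max_gap=timedelta(hours=6)):
    """pings: iterable of (vessel_id, datetime) AIS position reports.
    Returns {vessel_id: [(gap_start, gap_end), ...]} listing every
    suspicious silence longer than max_gap for each vessel."""
    by_vessel = defaultdict(list)
    for vessel, ts in pings:
        by_vessel[vessel].append(ts)
    gaps = defaultdict(list)
    for vessel, times in by_vessel.items():
        times.sort()
        for earlier, later in zip(times, times[1:]):
            if later - earlier > max_gap:
                gaps[vessel].append((earlier, later))
    return dict(gaps)

# Synthetic traffic: V1 falls silent for eight hours, V2 reports hourly.
pings = ([("V1", datetime(2019, 6, 1, h)) for h in range(5)]
         + [("V1", datetime(2019, 6, 1, 12))]
         + [("V2", datetime(2019, 6, 1, h)) for h in range(13)])
suspicious = ais_gaps(pings)
```

In practice a gap is only a lead, not proof: the flagged interval and the vessel's positions either side of it are what an analyst, or a downstream classifier, would then examine.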

Machine learning techniques over large datasets can then potentially identify cases where vessels are going into marine zones and fishing, or doing a transshipment. Thank you.

Caleb? So, I think complex modelling, at least from my perspective, which is a sort of global or macroscopic perspective on the phenomenon itself, can help in mainly three different ways. The first, and most obvious, is data collection. As you may imagine, collecting data about war in general, conflict between states or domestic conflict or whatever, is quite hard and surrounded with lots of difficulties. Nowadays AI is mainly used here in the form of NLP: automated processes that check news from everywhere in the world and try to pick up signals of interesting events going on that may have something to do with conflict. Another area is image processing: people use images from satellites, for instance, to try to recover anything from socioeconomic data to conflict or destruction data.

The second point where AI helps is in the modelling itself. People concerned with theoretical considerations on how conflict emerges in human society are widely using machine-learning methods too, because the sort of models you use in these theoretical discussions tend to be quite complex, so in terms of optimising them and finding parameter ranges, machine-learning methods have been crucial in the advancement of conflict models.

And the third one is the more crude and probably the more applied one, which is forecasting itself. People have been building, for decades now, these so-called early-warning systems: integrated systems where people use data-driven models to get information on which are the hot spots of the world, or of a given region, in a given period of time. These systems use very complex hierarchical models, so you need some sort of complex machine-learning methodology to actually make sense of them. But the nice thing is that, in contrast with the modelling perspective, where you want a proper understanding of the phenomena, in forecasting sometimes you only need risk prediction: you are less interested in knowing the exact mechanism driving these forecasted crises, and you just want a good predictor.

In any case, there's a lot of interaction between these three topics I've mentioned. All of them are crucial if we want to make conflict prediction and conflict modelling part of the decision-making that policymakers take into account regarding conflict, whether that is imposing sanctions, declaring wars obviously, or trying to come up with combined strategies in the international community on where we should intervene. And there are a lot of ethical considerations there that we can probably get into more deeply later.
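The forecasting side described here, a risk predictor that scores regions without explaining mechanism, can be caricatured in a few lines as a logistic-regression early-warning model trained by gradient descent. Everything below is synthetic: the two "features" (think past unrest level and an economic-shock index) and the conflict labels are generated for the example, and real early-warning systems use far richer hierarchical models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: each row is a region-year with two invented
# features; the label marks whether conflict broke out.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient-descent logistic regression (no regularisation).
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * float(np.mean(p - y))

def risk(features):
    """Return a conflict-risk score in [0, 1] for a region's features."""
    return float(sigmoid(np.array(features) @ w + b))
```

The model yields a usable ranking of hot spots, for example `risk([2.0, 1.0])` versus `risk([-2.0, -1.0])`, while saying nothing about why a region is risky, which is exactly the trade-off between prediction and understanding being discussed.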

Just really quickly, some observations from the discussion so far. I think one of the things that AI does with respect to conflict is that it focuses the mind on the laws of armed conflict, or the ethics of rules of engagement, and it's a good thing to be thinking about those subjects more. I think we've already mentioned its use as a prediction tool: the ability to predict what will happen if we try out different things. Those different things could be the soft power Anthony mentioned earlier; to say that actually what we're trying to achieve in a conflict is persuasion or dissuasion of an adversary's course of action, and there are many means at our disposal for that. If we can play out some of those effects and find that some soft-power-type influence might bring about dissuading the adversary, then that's a good thing, so you might think of AI as a soft-power amplifier. And lastly, I think there's a behavioural-assessment tool, which I think we've just been discussing, where perhaps the scenario is underway and we're trying to correlate different behaviours and different agents in the network to understand who's doing what to whom, and where the priorities are in that particular space.

Thank you. Now, that topic is not often spoken about in the popular press: we don't often hear about the uses of data science and artificial intelligence for the understanding of complex environments, and therefore for better decision-making, not least, I guess, because it's not a particularly interesting topic to the general public at large. What is often spoken about are the potential misuses of this technology. So, to switch the focus slightly, I'm particularly keen to hear from you about how worried we should really be about the misuse of AI, interpreted in the most general sense, in war and conflict. And the follow-up question: what should we, as an academic community and then as a society, do to prevent and regulate this?
I think the first thing to say is that no warfighter wants a weapon that they don't trust, that is unpredictable, that could perhaps cause excessive collateral damage or perhaps cause fratricide; that is simply not a reliable weapon, and I don't think any warfighter ever wants that. So that's the first thing to say. Then I should also emphasise that the defence industry, and certainly the UK and our allies, are in a position where we are quite conservative in terms of warfare: we do follow the rules. But the risk is that the adversary doesn't follow the rules. If we went down the road of attempting to ban autonomous weapons, the risk is that the adversary will pay attention to that and behind the scenes will effectively ignore it, while saying in social media or the press or elsewhere that they are complying. The difficulty is the verification that they are complying, because at the end of the day we're talking about software. So I think there's got to be another route through this. I'm not against regulation; I think regulation will be required, but it has to be appropriate, to bring about the results that we want, and that is to avoid the slippery slopes associated with this particular topic.

Yes, I'd like to add to that. Just as we catch humans, all of us, in regulatory and social frameworks and conventions within which we operate (there are legal frameworks, social frameworks, practical frameworks such as driving on the left, stopping at traffic lights and so on), I think we need to catch AI systems in similar social, ethical and legal frameworks that we as an international community agree to. Now, not all humans comply with those frameworks, and so we have to develop mechanisms by which we tell those humans: whoops, you've gone outside what is our conventional norm, so you have to come back inside.
Similarly, I think that's the sort of approach we need to adopt as an international community. How we police it is another issue, but there does need to be an internationally recognised framework, in my opinion, around how these systems operate, because, just as you said, Nick, it's a fallacy to think that these technologies will not be used in conflict situations.

Not much to add really, mostly echoing Nick's comments. It's UK government policy that we don't ever envisage autonomous weapons delivering lethal force without there being a human in the loop.

So, deadly swarming killer robots? No, and I've not met anybody inside who actually wants that.

Well, from the broad perspective of conflict prediction, the use of AI has certain problems that the community is aware of. For instance, imagine we reach a point where, with AI or machine learning or statistical methods or whatever, we have very robust knowledge of where the next conflict is going to be. There's a clear risk of self-fulfilling prophecies here: I am a state, and I see that there's a high risk of conflict here, so I move towards a quick intervention, and in doing that I am probably increasing the probability of actually creating a conflict in that area, fulfilling the prediction, whereas the outcome is not what I expected at the beginning. So there's a problem there. There's a more general problem too: with these sorts of early-warning systems, where governments can be equipped with knowledge of what will happen if they intervene here or there, we basically have some players that can misuse this. I think the way for the community to address this is, as with everything in AI, transparency and full public availability. Of the early-warning systems I mentioned before, there are a few which are private and not publicly accessible, but there are some which are starting to move towards transparent models, and I think that is the way to go.

There's a very different problem here as well with the use of AI, and it has to do with the general rebranding AI has had as something that offers you a view from nowhere: a supposedly neutral view, in which the predictions people give you have no ideology behind them. I think this is not the case. Every machine-learning system needs to be trained, and the way you train the system actually influences the decisions it will make. So we need to be quite aware of that when we are training either a model for conflict prediction, or a drone that is moving in the combat space recognising signatures in order to carry out an attack. The way you train that drone, and all the intelligence of that drone, is going to be extremely important, and again you need to be transparent and open with that if you don't want to reach a point that is very problematic.

Thank you. So really we've touched upon two potential risks or worries that we should have: one is the obvious one that everybody jumps to, which is autonomous weapons, and secondly we've covered some potential risks associated with conflict prediction.

Anthony, you mentioned influence operations and the news. On the use of artificial intelligence in those kinds of operations, is there anything you can offer us from that perspective, where you see there might be a risk of using AI in the context of trying to influence one person, all the way through to a whole nation state?

Sure. There's the obvious one that we all know about: filter bubbles being targeted with stuff that knows exactly what buttons to push in our minds, and it will send us into a polarised, divided society. So clearly we need to come up with some kind of antidote for this. This could be that we get our news in different ways, and we need to understand how this process works sociologically. Maybe that's more for civil society: it's up to civil society to figure out ways to not be polarised.

To continue: I was directing a documentary about Pussy Riot for the BBC in Moscow, and we were getting involved in an information war unwittingly. Many of you will remember 2011; this was the high-water mark of the anti-Putin protests, and also when Pussy Riot basically had this show trial. The Russian government was making an enormous meal out of it, and they were helping the journalists; they ended up televising the trial with three different cameras and then releasing the footage, so we made a documentary about it. Why would they do this? Simultaneously, they were showing the Russian people: here's Pussy Riot, who are really liberal, and they don't share traditional Russian values, and by the way, they're the opposition. And half the opposition were saying, well, I don't like Putin, but I'm not signing up to those values either.
They split the opposition straight down the middle. We talk about voter suppression; that was it in action, and the journalists were part of this media frenzy. And of course they were using that: they would take a few pictures of us and say, by the way, the West supports Pussy Riot, which means they're against you. So there's a lot of this enemy-of-my-enemy kind of logic. This is an example of understanding a society and using that knowledge to wage politics, and I think we can use it for good. If we can understand how this works, we can use the same dynamics against militia groups; we can start to understand a society in order to do the same kind of thing to al-Qaeda. There's a lot of potential in applying this understanding for good. There's a dark side of the force, and there's a good side of the force as well.

Thank you. So in the opening forty minutes or so we've covered the potential positive uses of AI with respect to understanding conflict, and the things that we should perhaps be worrying about as individuals and as a society. At this point I thought it would be appropriate to pause my questions and see whether there are any questions from the audience, and there was one immediate hand raised, so somebody's got a very interesting question; we'll just wait for a microphone to appear. I will go in the order in which I saw the questions, so if it gets to the gentleman behind you first, then you next, then we'll come over here.

Sorry, could you just introduce yourself? My name is Marcus. A question for the panel: how do you collect the data for managing conflict when a lot of the data is actually hidden?
and how they interact with each other and what they're actually doing, it doesn't come out in the news, certainly not very often, and you probably need quite a lot of data to be able to model it, and it tends to be very specific to the situation. So how do you actually model it? And I suppose this also covers the grievance side, which is psychological, I suppose, as well as sociological.

Yes. So I think since the early sixties there has been a huge effort in the social sciences community on data gathering. You can find a wide variety of datasets; most of them are basically counts of the number of casualties that there have been in a given geographic region or temporal cell, but there are also datasets on threats between countries, or diplomatic crises, and stuff like that. Obviously, the data you need will depend on the model that you are working with. So if you are looking at some of the global patterns of conflict in the world, you will probably be okay with, you know, casualty data and things

like that. If you want to study a particular case of a terrorist network or something like that, you will probably need something much more refined, and that's where agencies can help. Sometimes there are collaborations between private or governmental research institutes for making these datasets available, so you would be surprised by the level of data that you can actually find. For instance, I work on the civil wars in Colombia, in collaboration with some people there, and there have been efforts from the government, doing interviews with victims and trying to build very, very specific datasets. It obviously takes years to gather this data, but once you have it, it is wonderful for modeling.

So data collection is a super important aspect of the story, and methods in AI for actually sampling and finding where you are maybe over- or under-sampled matter too. Having methods for detecting biases in your data in an automated way is very important: I mean, how can I detect in this dataset that this region of the world is clearly under-sampled? That is the problem.

Thank you. In the interest of getting through the questions, what I'll do is allow one panel member to answer each question, unless anybody has got anything specific to add. So I'll come to you next; okay, fine, if you want to hand it across, then we'll go to the lady just there. Great.

One thing I noticed came up several times in the panel is this fear that our adversaries will not reciprocate any kind of ethics we implement, and it comes up, I think, almost even before the ethics are discussed. So I was wondering whether any of you have ideas for mechanisms to build reciprocity into norms, some kind of incentives to have different international actors reciprocate.

Who would like to take that question?

It's a really good question, because I guess cyberspace is an area where the speed of response required,
in terms of defense and offense, exceeds that which any human could achieve, so there's that necessity for a machine response. The thing about cyber for me is that the consequences of all-out cyber warfare are just so unthinkable these days, in terms of the economic collapse that would occur on both sides and the disruption to the people that voted the government in, that it is something which I think governments would avoid, at least between major powers, state to state. I think it's almost in the same category as nuclear weapons: there's a type of power balance which is strong. The other thing about cyber for me is that it can offer the ability to de-escalate, by being used as leverage in terms of what could happen: we'll wind back the dial if you wind back the dial. So there are some concerns there.

I think I'm asking a similar question to what was just posed, about ethics and algorithmic accountability. It's probably true to say that we have the potential to create systems that are technologically impressive but socially quite harmful, that have the potential to cause great harm, and that's certainly within the realm of algorithmic accountability. So the question, given we have government and industry on the panel, is: how do we incentivize government to spend the money to test and create more accountability, and how do we incentivize business to create systems that are more accountable and more truthful in what they're saying?

Great question, thank you. Well, I can only speak from an Australian perspective. Certainly you talk about quite a distinct divide between government and industry; where I sit, we think of industry as partners, close collaborators, and I think a lot of our industry think the same about us. I think the incentive really here is
good collaboration, a collaboration which is beneficial for all parties. The key really is that ethics can be built into AI systems. As you mentioned, the timescales are so critical that we don't have the luxury to pass the ethical problem out to a human. We need to build the human's ethics into the AI, so that those ethics are in there, and then we will have confidence that, as

humans, an AI system can explain to us the ethics that it is using in its calculations. And if we are prepared to trust those ethics, then we can say to the system: as long as you adhere to those ethics that we agree with, that are internationally recognized and adopted, you can make those decisions.

Basically, simulating societies and what-if scenarios gives you better decision support, so anything that improves the model above what the alternatives are is good, I would argue. It's always going to be an arms race with these things; you always need to get better data, and we shouldn't be making models where we understand this particular thing really, really well just as the other side develops a counter. So there have to be epochs in model design.

A question from... thank you. Alex, from the museum. You've mentioned that in the case of state-to-state interactions you can have a sort of mutually assured destruction in terms of cyber security. When it comes down to individual malicious actors, though: in the past, if it was nuclear, you would have physical detectors for that sort of thing, and for chemical material there's a whole mechanism for tracking it. Is there even in principle a way of tracking online behavior for malicious software at the individual level? Is that, from a network approach, even possible?

I think it's a pretty scary possibility. Where you've got two adversaries who have a lot to lose, then you can perhaps strike a balance and a pseudo-peace, whereas, as I said in the opening talk, there are a number of commercial off-the-shelf technologies which could be weaponized, and advanced cyber algorithms could be used in an offensive sense by lone threat actors or rogue states without a lot to lose. Then, you know, these things are in my opinion very real future risks. We
need to do everything we can to prevent that, but, you know, it's the same argument that I made before: it is very, very difficult to verify and validate, to understand who exactly has got what, particularly when much of this code is available freely for many, many legitimate uses, good uses that move economies forward, make people's lives better, and so on. There's always that risk of misuse. So what can we do, and what can industry, who are putting things out there and making things open source, do to ensure that it's very difficult to weaponize those pieces of code? I'm not sure what the answer is, but I think there is an answer required in there somewhere.

Hi, Martin Simpson. I just have a question about that sort of no-man's land between virtual space and moving towards a kinetic solution to a threat, and I'm thinking specifically in the context of social division, where you might have a state actor who's deliberately stirring the pot over a sustained period of time. Where do you see ethics, and rules of war and rules of engagement, coming in such that you can move from a virtual threat that's causing disruption and civil unrest in a Western society, potentially, to seeing a justification for a kinetic solution to mitigate that threat?

I think that's probably one for you, Nick, really. I think that we need to be thinking in terms of a balance of power of influence: you mess with me, I'm messing with you. It's a bit like with cyber: we've both demonstrated we have the ability, and we both sort of step back, because we'll split each other's countries apart otherwise.

Or we try to figure out how to neutralize somebody else's influence operation. For example, in east Ukraine, speaking personally for a second, I felt very strongly that if the Ukrainian government had just said, hey guys, we recognize your grievances, they're legitimate, we're sorry, and by the way, you're not terrorists, we love you, that kind of thing, then my sense is that a lot of the rebels would have said, you know what, thanks. The reason I'm standing here is because nobody's listening to me and my life sucks, and so on. I think we need to think in terms of reaching the high ground first, about how to defeat somebody else's influence campaign by seeing what they're up to and trying to take that ground from underneath them. In other words, can we make an immune system to combat an influence attack, rather than just letting it ravage our society? That, to me, is really interesting, and I think it's almost totally new territory. If somebody here knows how to do that, please get in touch.

Any final questions? I think we've got time for one final question. No? Okay, well, I'll ask my final question then. If each of you had a million pounds to spend on research in this area, and a sentence to describe that research, what would you spend that million pounds on?

So I think I would spend it on two things. One is neuromorphic processing, some novel forms of neuromorphic processing, to perhaps provide a more explainable AI solution. I'm fascinated by Dr. Simon Stringer's work on spiking neural networks, which potentially solve a number of problems, including explainability, resilience to adversarial attacks, and
providing a means of avoiding the information loss that you see moving through a learning neural network as you get to the output. So most of my money would be on Simon's work, and probably a little bit on Regius Professor Lee Cronin's computing work at the University of Glasgow, to grow next-generation neuromorphic devices. Thank you.

You've each got a million pounds, so you can each speak for at least 20 seconds. Well, probably I'm biased, because I like fundamental stuff like that, but I would spend it on fundamental research, and I would do it on multidisciplinary research, trying to bridge classical social scientists with machine learning engineers. I think there's a huge risk here of obscuring the whole discipline with these names of AI, neural networks and stuff like that, and actually hiding all of these concerns about ethics and about how and when we should intervene. I think the social and humanist point of view should not be lost, so probably multidisciplinary teams. Thank you.

I would like to echo the statements here: both explainability and social research, combined together, I think are the key future factors.

Yeah, the research on spiking neural networks is really interesting, so I like that. But I did my master's in security studies, and there were quantitative methods but no machine learning, and I think that AI has so much to contribute to operationalizing political science for policy, so I'd like research on that.

Thank you, and with that, please

ask you to join me in thanking my panel. Thank you.

2019-06-27 15:53
