(upbeat music) - Ladies and gentlemen, in a world of exponential innovation, what does the future of cybersecurity look like? What are some of the big threats coming our way that we should think about? And what are some of the big opportunities we can start thinking about to make the world a little more secure on a planet-wide basis? My objectives today are to help you think a bit bigger, to broaden the lens if you like, to get you outside of the everyday to think about the next 50 years. But before I begin, I call myself a futurist. So let me quickly tell you what that means to me.
Let me let you in a little bit on my methodology 'cause it's a little different. And the only way I know how to do what I do with integrity is to go and talk to people who are much, much smarter than I am. It's my strategy for life. So my methodology is to spend about half the year traveling, I travel with my wife on the road, visiting scientists and science labs, and talking to them about what they're building and what they're seeing in their specialist areas.
Then I turn it into stories to share with you. So everything good that you're about to hear, everything that you might like, isn't down to me, it's down to the generosity of these scientists. These are some of the people who have contributed to this presentation today. And you'll see some pretty amazing people there. Like Yann LeCun, one of the founding fathers of artificial intelligence, of this new generation over the last 10 years. Two of his colleagues won the Nobel Prize just a few weeks ago.
And I'm gonna introduce you to some of them as we go along. There's actually a few more of them that contributed, but it's their generosity in giving me an hour or a day or a half day in their labs that helps me do what I do. And all the mistakes, they're mine. And I do make lots of mistakes 'cause futurism is hard, especially predicting the timing.
Often when I visit scientists they will look me in the eye and give me a lot of confidence about whether something is going to happen, but very often as well, there'll be a lot of elasticity in the timing. So let me give you a quick example. You'll see there just to the left of the middle, William Oliver.
So William Oliver heads quantum computing research at MIT. And here is a quantum computer that he placed in the palm of my hand as we chatted in his lab. Isn't that amazing? Isn't it beautiful? Isn't it small? So all that chandelier stuff behind, that's just the cooling and the isolation, the actual computer is that bit in the palm of my hand. When I look in the eyes of someone like William, or other colleagues in quantum computing around the world, they'll all give me a pretty firm understanding that this is going to happen. You can believe it, and you can ask them a lot of questions and come away with a very firm belief.
And they're pretty confident that they'll break PKI, and that this post-quantum stuff that you're all talking about is really necessary. When it comes to the timing, however, well, here's my best approximation, 'cause every scientist will give me a different answer. This is one of the hard ones. Quantum computers should be useful in solving, or helping to solve, some simulation or optimization problems in industry around 2030. But there's a big plus or minus of several years on that.
That's my best estimation. Breaking PKI is a little further out. My best estimation of that is 2040, but it could be seven years out either way. So it gives you a sense of just how I operate. Some of the timings will be high confidence and I'll tell you about those, some will be low.
And after this presentation I'm gonna be around for a couple hours. So come and tell me where I'm wrong and we'll learn together. That's what the process is all about. So let's begin.
Let's begin by spending a little bit of time on what's coming, to help you think a bit bigger about some of the big changes that will also affect security. And I'd like to begin with AI. And I want you to think very seriously about this concept: equivalence. I want you to take it really seriously. AIs that are equivalent to humans in your experience when you are working with them.
Don't worry about consciousness or any of those more difficult terms. Just think equivalence. Today, with three seconds of audio, I can simulate your voice with AI. Three seconds; give me seven or eight and I can do it really well.
Give me five minutes of video with an iPhone and I can create an avatar of you of this quality and put it on my website. Five minutes. And a single text prompt will generate with AI a background of this quality, that's today. Do we have a deepfake challenge? Oh yes. If I point the same technology at video stored on your social media, and I get several hours' worth, I can create a service. For example, there are already six or seven services that I've seen in the world that will bring back a dead relative so you can have a conversation with them every morning.
Is that profound? I find that really challenging. But it does illustrate really well the sorts of security challenges we've got in equivalence in being able to not distinguish between the human and AI. But that's today.
Let me show you in three slides what's happening in AI over the next 10 to 20 years. This is my best distillation of it and I visit all the labs, MIT, I was just at Oxford and Cambridge and it's warp speed. The single biggest message I can leave with you today, the single biggest message is that AI will be the biggest force of change in all industries, all jobs, all roles for another 20 years. What you've seen today in AI, everything in your experience is less than 1% of what's coming. So just quickly, scientists represent their goals like this.
They say: we want AIs to learn like humans do, like babies. And we want them to learn forever, never stop. So not just make a model and freeze it, which is what we've been doing lately, but keep plasticity in the model and keep it learning forever. And we're just at that boundary of plasticity now; we're doing that for the first time. So if you are thinking large language models, which are multimodal learners in the sense that they go out there and just munge lots of data off the internet and create a language model, and interesting things pop out of that, that's a tiny fraction of what they're working on. Of course, they're also working on structured data sets which are much more controlled and much more powerful, and they might be image based.
Others are working on AIs that are soon gonna look at every video ever made and learn from all the video. And then of course there are AIs training AIs like you see here in autonomous vehicle training. So most training of autonomous vehicles is now done with simulations created by AIs for other AIs and running driving tests for them, but not one at a time, but 10,000 at a time.
There are physical models of the world coming that will be embedded in AIs, cause-and-effect models, emotional models, which we're gonna talk about a fair bit. Experiential models. So this is an AI learning how to manipulate a cube through trial and error. It drops the cube, it tries a different finger arrangement, and then it can solve Rubik's cubes, and then it can make a cup of coffee, because it's experimenting with the world. They're of course also learning by adding sensors. A lot of AIs now learn through listening as well as watching, and soon by touching as well.
Mathematical models, I could go on. There's actually a lot more I can squeeze onto this chart. And soon AIs will identify the gaps in their learning and ask you questions to fill those gaps.
That last one alone is worth all the others combined. So AI is going multimodal, and it's also going multi-level. So Yann LeCun's concept of what he's working on next is AIs supervising AIs within the same brain: higher-order supervisory AIs and lower-order instinctive AIs.
And it's becoming multi-agent. Very soon you'll have 10,000 AIs that ask each other questions 'cause they're specialists in their fields and their areas. This is what's happening in 2024 right now. Really an exciting space. So we're getting 10x and 20x and 200x improvements in AI quality by having one agent construct an essay and the other one act as a critic of the essay. We've been doing that in coding for a while.
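A minimal sketch of that generator-and-critic pattern, just to make the loop concrete. The generate() and critique() functions here are hypothetical stand-ins for calls to two specialist models; only the loop structure is the point.

```python
# Generator/critic loop: one agent drafts, another reviews, and the
# draft is revised until the critic has no further objections.

def generate(prompt: str, feedback: str = "") -> str:
    # Hypothetical stand-in for a generator model.
    draft = f"essay on {prompt}"
    if feedback:
        draft += f" (revised to address: {feedback})"
    return draft

def critique(draft: str) -> str:
    # Hypothetical stand-in for a critic model.
    # Returns an empty string when it has no further objections.
    return "" if "revised" in draft else "needs stronger evidence"

def refine(prompt: str, max_rounds: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if not feedback:          # critic is satisfied, stop iterating
            break
        draft = generate(prompt, feedback)
    return draft

print(refine("AI safety"))
```

The same shape covers the essay example and the coding case the talk mentions: the only thing that changes is what the two agents specialize in.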
We're doing that in autonomous vehicles, with a supervisory AI looking at the decisions of the vehicle and going: I don't like the safety parameters of that, there's too much risk, I'll override and we'll just stop right there. So there are actually multiple agents in the vehicle now. And that's it, in three slides. So when you think about the future of customer service, or the future of financial apps like this, this is not the interface of the future. This is the interface of the future. Fully conversational equivalence. And we're getting this now.
When I travel with my wife, she does all the admin, she calls the hotels in advance. Recently we were in Savannah, Georgia. And she's on the phone to a hotel for about 20 minutes trying to arrange so we can drop off our bags, do a late check-in, late checkout, do all the things we have to do. And she gets off the phone and she says to me, I just realized that wasn't a human. Those conversational assistants are rolling out in all kinds of industries already.
But I want you to think about this from a security perspective. Because this is the future of interfacing. And if you are not doing that in customer service, you're not competitive, people want this. Now think about the trust that's involved when you start humanizing your interface like that. People trust them.
What happens when you do a man-in-the-middle attack on something like that? And let's go further. What happens when you have assistants that you spend lots of time with, and you fall in love with them? It's already starting to happen and it will very definitely happen in future. And the leading AI people I talk to, this is their biggest worry. If you haven't seen it, I wanna give you a bit of homework.
I want you to go and watch this movie, "Her." It's a movie about a man who falls in love with the AI in his phone. And as you watch it, you will think of 400 security threats coming outta that story. Because it's not just trust, it's love. And I want you to understand: the AI assistants that are coming, what Meta is building, what these guys are building, they are built to make us laugh and make us cry, to engage with us emotionally. We will love them and they will love us back.
And all the crowds around this planet are gonna demand that they have them. They won't wanna give them up. So when I talk about the greatest threat to cybersecurity on the planet and one of the greatest threats of AI to this world, this is it.
You used to worry about what happens when your children have friends online. What happens when they have artificial friends? Think about it, it's a big threat. The AI threat is not Terminator or Skynet, it's much more subtle.
It's the next 10 years of AI interfaces coming in and taking over so many tasks for us, and we'll trust them with our credit card information, with all kinds of tasks as they go along, and they're hackable. Let's now just flesh this out with a few more things. Internet of things: add AI to it and you get the same thing with robots.
So we have our first general purpose robots rolling out, and they will of course explain what they're doing, and they can have a conversation with you already, like this Figure robot, which is being deployed in a test in a BMW plant in Spartanburg, South Carolina. The same risks apply: the trust, the emotional attachment. If we look at robots at Columbia University now, I visited their lab with Hod Lipson recently. This robot engages with you emotionally. It smiles before you do, because it watches your emotions so carefully that it knows when you're about to smile, and it smiles first. Really engaging, but also dangerous.
The same goes for vehicles. Lots of people worry about vehicles being hacked and caused to crash. Like this wonderful scene in "Leave the World Behind," that TV series.
Really scary. I don't worry about that at all. And I'll show you why in a little while. I worry much more about the fact that all of the interfaces will be conversational.
They'll be just like a personal agent. They're already becoming conversational, 'cause the computer scientists want them to explain their driving decisions to you. But you won't be worried about the driving so much as ordering your pizza, or maybe the dating service which is matchmaking you with the ride share, and all the other things that are gonna happen with that car, which will be listening to you for every second of the ride.
'Cause it has to. Is that a security vector? Absolutely. Then there's the stuff in the sky: satellites. That's how many satellites are scheduled to be launched and orbiting in 10 years.
If I go back a few years, it was 1,500 in all of human history. This is a really good thing for some aspects of security. First of all, they're in low Earth orbit. All of your phones are becoming sat phones. If you didn't realize it already, that means soon you'll be able to get an SOS out from any square foot of planet Earth.
That's a good thing. Secondly, there are 10,000 satellite services that are coming out for all kinds of industries that are really valuable with security benefits. For example, for policing and ambulance services.
I work with those sorts of departments around the planet and we can do so many things now. It's like Google Earth is becoming live, that sort of imagery. And with a credit card we can access that imagery. My favorite that I heard about this year: in two years, we'll have a network of satellites that will detect any fire on planet Earth that's more than five meters across. And they will give 20-minute updates on all of those fires to any fire department on planet Earth, for free.
You think that's a good thing for security? That's fantastic. But all of those satellites and all of those new chips in your phones are a new threat vector, a new attack opportunity. And look at that image down the bottom left.
That's my house. See that little red car down there? I can watch when my daughter's boyfriend is parked outside my house. Okay, not yet, but soon. If I get into the workplace it gets even more tricky. So just a few things to open up your horizons here.
Smart glasses are coming back into vogue, but it could be just phones. They're going to geolocate themselves indoors. So this is a phone which is imaging itself all the way around a hardware store for shopping. It geolocates itself. You can drop a PIN for something that you want someone to pick up for you.
All those good things. It's a whole new dimension in IT. But think about the security of that. The phone watching all day, all those smart glasses watching all day, which are hackable. But even if they're not hackable, think of the employees walking around with smart glasses. You don't even know they're on watching screens, passwords, all sorts of procedures.
Privacy, physical privacy, is gonna disappear as a result of that. In 2020 there was a lot of controversy, 'cause we found out that 2,200 police departments were already using a service to take an image, reverse engineer it and find an identity. Okay, so that's Clearview AI. Very controversial back then. But really I thought, when I was preparing this presentation, how long will it be before we can all do it? And I started putting some numbers on it, and I'm like, 2032 and all that. And while I'm preparing for the PCI SSC conference, a couple of students at Harvard put together a little hack. They reverse engineered some smart glasses and attached them to a large language model.
And it turned out they could walk around anywhere in Cambridge, Massachusetts. And whenever they saw a face with their glasses, the image was fed to AI, which found out where else that image occurred. And then it pulled out names, family members, what conferences they'd attended.
And live, they could dox these people. They could walk up and go, hi Bruce, I met you at the PCI SSC Conference in Hanoi. And I'd be going, oh, I don't remember you, but yes, yes, of course. And they just did this all day. Anonymity disappeared this year for hackers that wanna do this. Have a look at that, take a photo of that.
Look at what Mr. Nguyen did. This wonderful student. And he hasn't made it available to everyone. He's just published his methodology and it's all there. You can go and look at it. And the research papers that have been published, very provocative.
We're using Wi-Fi waves now to track people in aged-care homes. Wi-Fi waves bounce off everyone differently, so we can track individuals in medical facilities, and when they have a fall, we know. It's free.
It's a free resource, right? Isn't that amazing science? It happened five years ago for the first time. We can also track people's breathing and their pulse, by the way Wi-Fi waves bounce off them, in a facility like this.
That is going to be embodied in every Alexa, every Google Home device, using ultra-wideband radar. Why? Because we can do so much diagnostics medically. We're gonna have continuous background health monitoring in the home, and you'll have it in the workplace as well. So there are dozens of companies building out services. Is that a threat vector? Of course it is. And it sounds like science fiction: hackable EEG interfaces, using our thoughts to do things in the workplace.
And that headline there, my prediction that you'll see those interfaces everywhere pretty soon, comes from the fact that I wear hearing aids. I'm wearing them now. And every hearing aid is gonna get an EEG reader, to read who I'm trying to listen to in a room so it can dial down all the other voices.
And they're not all out yet, right? And I can't wait. And the hundreds of thousands of people in the world who wear hearing aids can't wait. It's an EEG reader. The same technology, as you've probably seen on the news, is going into Apple AirPods: EEG readers.
And we've already had a successful hack, Oxford University, UC Berkeley, where they reverse engineered EEG waves to see if they could recreate the PIN numbers and the passwords you were thinking of. And they did it with about 30% accuracy, which means they're gonna get one in three, right? Oh my god, there's a new vector. And we're going to have to regulate around it.
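A quick back-of-envelope on why "one in three" is so alarming at scale: if each attempt succeeds with probability 0.30, the chance of at least one success across n independent victims is 1 - (1 - 0.30)^n. The 30% figure is from the study mentioned above; the rest is just arithmetic.

```python
# Probability of at least one successful PIN recovery across n victims,
# given a per-victim success rate p. p = 0.30 per the talk's figure.

def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 3, 10):
    print(n, round(p_at_least_one(0.30, n), 3))
# At n = 10 victims the attacker is already above 97% likely to get at
# least one PIN.
```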
And finally, just to finish this off, of course it's not just credit card data. There's some really big new honeypots coming and the biggest of them is our DNA data. Most definitely in the future of medicine, which I present on all the time, everyone in this room is going to be genetically sequenced. The benefits are enormous and it can't be anonymized. The point of it is to tailor the medicines to you. Now there's a target for ransomware.
Is it gonna be exponential? You get a feel for it, right? So what are the scalable responses? Let's spend the next 15 minutes on just sort of some of the big things we can do to respond. And let's start again with AI and my favorite AI of 2024. This is my absolute favorite AI and I'm sure you'll love it.
Because while AI is scaling up as a weapon against you guys, we can also scale it up as a defense. So let me show you just a little project, which I think is gonna become a very big project very quickly. You know that hobby people have, when they get a scammer trying to ring them up and they try and keep them on the phone. You ever seen people do that? So a professor called Dali Kaafar used to do that for a bit of fun to learn about it. And also it helps to hurt the scammer, right? Keep them on the phone as long as you can. And the problem is when you keep them on the phone for an hour, it's an hour of your time too.
So that's not very helpful. What if, when you get a scam call, you could hand it over to an AI that has your voice, but maybe an older, more vulnerable version of you? It speaks more slowly and it needs some help with the passwords: but I don't have my password. And it's also designed to ask subtle questions, to get a bit of intelligence from the scammer as well. So that's what this professor has invented, a victim bot, and it's really good. Let me tell you, it's equivalent.
You can't distinguish it. It's outstanding, but you can't buy it to put on your iPhone today, because he's doing something better. Why would we put it on your iPhone? Let's talk to AT&T, to NTT. Let's talk to every big telco in the world and let's give it to them.
So when they see scam calls come in, and they identify a lot of them, 45 billion at AT&T last year, they only have to spin up a few hundred thousand victim bots on a server and they've just killed the business model. Isn't that amazing? I love that. It's just lovely. It's called Apate, if you wanna look it up. And they're out there making deals today to help all of us, right? It's very scalable. It's an exponential response.
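The carrier-side shape of that idea can be sketched very simply: flag a likely scam call, and route it to a bot pool instead of the subscriber. Everything here is a stub for illustration; the blocklist numbers are fictitious and real scam detection is far richer than a set lookup.

```python
# Sketch of carrier-side scam-call diversion: flagged calls never reach
# the subscriber; they are handed to a disposable victim bot instead.

import random

KNOWN_SCAM_NUMBERS = {"+1-555-0100", "+1-555-0199"}   # hypothetical blocklist

def is_likely_scam(caller_id: str) -> bool:
    # Stand-in for the telco's real scam-detection pipeline.
    return caller_id in KNOWN_SCAM_NUMBERS

def route_call(caller_id: str) -> str:
    if is_likely_scam(caller_id):
        # Scammer talks to a bot, not a person; bots are cheap to spin up.
        return f"victim-bot-{random.randint(1, 100_000)}"
    return "subscriber"                 # normal call goes through

print(route_call("+1-555-0100"))   # handed to a bot
print(route_call("+1-555-0123"))   # reaches the subscriber
```

The economics are the point: each diverted call costs the telco almost nothing, while it burns real minutes of the scam operator's time.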
Apate is the Greek goddess of deception. I love it. Less sophisticated. But just as important is the changing nature of regulations. Of course, most of the regulatory response you're familiar with so far has targeted you.
Protect people's data. But one of the great things that's changing slowly, but it is changing is the appetite for regulation to put liability on software companies for the quality of their software. That shift alone is scalable and big. So it used to be about this. And now we're seeing things like IoT security laws. You are liable if you don't put reasonable security on your internet of things device in California.
Now we'll have to see what the court cases say as it plays out. But the appetite from consumers, citizens, voters is changing. We want that. That's a good thing. Finally, because we know that lives depend on good quality software and it should be regulated strongly. Another example would be what's happened with the Artificial Intelligence Act in Europe.
Strong controls now on what you can do as an AI provider. Lots of pushback from the IT industry, because this slows down software innovation, but that is a price we need to pay. So that's a scalable development. Another regulatory one is more regulation around crypto. Because, as Bruce Schneier said to me, he's the security guru at Harvard now, if you think about it, without crypto there'd be almost no ransomware.
Think about that, and if any of you own crypto, I know you didn't go into it for this reason, but you are acting as a smokescreen for money launderers. It is a huge money laundering machine and it is relatively easy to see the money laundering at the bad exchanges. You can identify the types of activity. So the sooner we get financial regulations that push out a little bit, and here's Mr. Schneier, talking away.
As soon as we do that. And we say to US banks for example, you cannot trade with this exchange or you won't be able to trade as a bank. As soon as you introduce those regulations, you start to disrupt the money laundering process. So that's a big one as well.
As old as information is the idea that compartmentalizing information is a good thing. So let's look at three things that we could be doing now. One is everyone in this room could be keeping less information. Now the trend at the moment in industry is to hoard everything, because everybody's realizing that machine learning might pull more value out of our data sets, and they're keeping it even though they dunno what they might do with it. But you guys understand this really well, because I know lots of the software tool providers out there in the foyer there, they help you identify data you shouldn't be keeping, so you can get rid of it, that sort of thing. So the less we hold onto that we don't need, the better.
Here's another form of data compartmentalization which is exciting me in 2024. You might have seen Apple's announcements around AI. And the biggest thing in there from a cybersecurity point of view was their promise, so this is their branding strategy now, their promise not to keep your data, to do the processing for AI at the edge, not on the servers, and to make that publicly auditable. Huge.
I think we're gonna see more companies follow suit and that's a huge game changer for security. Now of course, that's only a promise they have to do it. Remember these people? Remember the hack? They used to collect money to say we'd get rid of all your data. And then when they were hacked, all the records were exposed and the data was still there. So we actually have to see them act on it. A third method, and this is probably the gold standard for compartmentalization, is to allow customers to manage their data in encrypted form and their own permissions for how that data is used.
Now, I've been watching those sorts of initiatives for 20 years, but I finally found one that I think has some good traction. So imagine three different patients with their health data, with encrypted pods, personal data stores on the internet, fully encrypted, releasing just the types of records, or the aspects, that they want to different hospitals as they need it. So it's reversing the idea of hospitals managing the records. It's really the patient owning the entirety of their record.
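A toy model of that permissioned-release idea, just to show the inversion of control. The record types and names are made up; real Solid pods use linked-data documents and access-control lists rather than a Python dict, but the shape is the same: the patient holds everything and grants each hospital only what it needs.

```python
# Toy patient-held data pod: the owner grants per-requester, per-record
# access; anything not explicitly granted is refused.

class Pod:
    def __init__(self, records: dict[str, str]):
        self._records = records          # record type -> (notionally encrypted) data
        self._grants: dict[str, set[str]] = {}

    def grant(self, requester: str, record_type: str) -> None:
        self._grants.setdefault(requester, set()).add(record_type)

    def read(self, requester: str, record_type: str) -> str:
        if record_type not in self._grants.get(requester, set()):
            raise PermissionError(f"{requester} may not read {record_type}")
        return self._records[record_type]

pod = Pod({"allergies": "penicillin", "genome": "<sequence>"})
pod.grant("city-hospital", "allergies")
print(pod.read("city-hospital", "allergies"))   # allowed
# pod.read("city-hospital", "genome") would raise PermissionError
```

Note the default: the hospital holds nothing, and an ungranted read fails, which is exactly the reversal described above.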
So the Solid pods protocol is actually backed by the inventor of the web, Tim Berners-Lee, and it has been deployed now. So this is pushing the whole thing back to the customer, to manage their data so you don't hold any of it. It has been implemented now in various health systems.
The NHS has started to do some; also in Belgium, in Flanders; and the BBC. So people with their streaming preferences for TV just release what they need as they go to different TV services, instead of it being held in a database. It could be any credential you like. So take a look at that as a metaphor for where we're going, and something you can do now. Let me combine some of these with a quick IoT story.
I mentioned autonomous vehicles and why I wasn't worried about them all crashing and being hacked. So let me show you a glimpse of the future of autonomous vehicles. And it's multilayered, it combines, lemme just go back. It combines AI learning forever, regulation, compartmentalization, and a few other things as well.
First of all, transport is strongly regulated. And I want you to think about the aircraft industry as the metaphor for where the car industry is going. There will be high expectations for how that data is managed because lives will be lost and it's obvious that they will if it goes wrong. So we've got good people working on strong regulations in all kinds of places. Secondly, the future of autonomous vehicles is not just this picture.
It's not just vehicles with sensor networks. It's vehicles that actually connect to other vehicles and share metadata for all the vehicles around them for a square mile. So every vehicle knows the speed, the trajectory, the destination of every other vehicle, and what objects they might be seeing, the metadata around them. Furthermore, not only are they sharing that information, but they're using it to learn and become more resilient. So the Head of Robotics at Carnegie Mellon University said to me, the way he sees it is if a car skids on some black ice in the mountains in Hokkaido in Japan, at a certain time of year when there's icy conditions on a certain corner, every other vehicle in the world will also know that black ice sometimes occurs on that corner, on that mountain in Hokkaido at that time of year.
That's the hive mind of autonomous vehicles that we're moving to. Thirdly, multiple layers of oversight. Those vehicles not only are controlled by their own systems, but they have AIs overseeing them. And those overseers are just interested in any risk parameters that might be involved in decisions.
And over them, at a ratio of, let's say, one human to every 18 or 800 vehicles, we don't know what that'll look like, but they're humans. There are always humans in the loop. We'll see those vehicles stoppable, we'll see vehicles reroute for ambulances. That is the future and it's quite inevitable. And if we look at them today, so this was a more advanced rollout today. This is totally driverless, at the University of Florida, Jacksonville.
There's a bunch of these shuttles around. They generally zone them. If you cross outside the geofence in the zone, it stops. That's also a metaphor for the future of cybersecurity generally, all of those layers to me.
The same goes, by the way, just to be a bit of a distraction, for aerial taxis. We're gonna be getting these aerial taxis around. They are here, they are now certificate-approved in China. So they're commercially operational. It'll happen here, I'm sure.
Some of you will pay a little bit extra to get here from the airport in one of these autonomous aerial taxis sometime in the next 10 years or so. And we better have all the same security controls, right? Couple of messages on authentication. We're all struggling with how do we add more layers of authentication? More biometrics. We all know the value of it, right? But at a certain point it gets less convenient. Not more convenient, less convenient 'cause we're asking too much of people.
So let me show you a couple of glimpses of things that I think are scalable, that add more layers to authentication. And they're all about doing it at the edge: decentralized authentication. At the moment, lots of big government initiatives are moving the other way, they're trying to centralize it. So in China, for example, France, Japan, Australia, lots of initiatives to say: why don't we get the government involved in providing all of your credentials, proof of age, the fact that you have a driver's license, that you're a citizen, anything like that. And then we can provide tokens to any business that asks for them. So now there's a single provider, and that is a solution.
But it's problematic, because it requires so many businesses to come on board. Australia's been saying we're gonna do this, and the Australian Federal Government has an initiative. I think this centralized approach might gain traction, but it's gonna take a long time. If it's gonna gain traction anywhere, I would pick China, 'cause there's strong control, strong conformance, strong ability to influence business decisions to come along.
So China's discussions about having a single internet ID for people, managed by the government, that's where I would pick meaningful inroads, if that takes place. Now let's go the other way. Let's decentralize. So here's a decentralized approach to authentication at Arizona State University.
This professor has done liveness detection simply by adding a chip to your phone, a very cheap chip. And this liveness detection, are you a human, can be done at the beginning of a conversation or multiple times during a conversation. All he's done is said: let's collect the voice biometrics with the standard microphones, and let's also collect the biometrics for movement, breathing, pulse and other things. Just the movements, and they must match.
They must correlate, if it's a human, with the speech. And then you tokenize that and say: yes, this phone is interfacing with a human. Very cute, requires no ID whatsoever. You don't have to know who it is, but it is liveness detection. Let me give you another one, which is also a soft authentication at the edge of the network. And it's more provocative.
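The correlation test behind that liveness check can be sketched in a few lines. The signals and the threshold below are entirely made up for illustration; a real system would compare a microphone's loudness envelope against accelerometer or radar motion streams, not hand-typed lists.

```python
# Liveness sketch: the speech loudness envelope and the body-motion
# signal should be strongly correlated when a live human is talking.

import math

def pearson(x: list[float], y: list[float]) -> float:
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    if sx == 0 or sy == 0:
        return 0.0                      # a flat signal can't correlate
    return cov / (sx * sy)

voice  = [0.1, 0.8, 0.9, 0.2, 0.7, 0.9, 0.1]   # speech loudness envelope
motion = [0.2, 0.7, 0.8, 0.3, 0.6, 0.9, 0.2]   # chest/jaw movement, same frames
replay = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]   # flat: "speaker" isn't moving

LIVE_THRESHOLD = 0.8   # hypothetical cutoff
print("live human:", pearson(voice, motion) > LIVE_THRESHOLD)
print("replayed audio:", pearson(voice, replay) > LIVE_THRESHOLD)
```

A replayed recording fails because the audio is there but the correlated body movement isn't, which is exactly why no identity is needed for this check.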
Around 2007 I spotted the first lie detection software being used in insurance company call centers. Now it wasn't that accurate, but what it did was voice stress analysis. And then when the person making the claim sounded like they might not be telling the truth, the operator got prompted to ask more questions. So you don't call them a liar, but ask more questions. And here are some more questions you can ask.
About 14 insurance companies were using it that I found in 2007. In 2008, I spotted this device deployed in Iraq and Afghanistan by the US military. And I managed to get hold of the research papers around it. It's called PCASS, the Preliminary Credibility Assessment Screening System, or something like that. A lie detector. They ask you baseline questions.
Is your name Bruce McCabe? Yes. Is the sky blue? Yes. Are we in Hanoi? Yes. Or are we in Iraq? Yes. Did you know anything about this bomb? Yes or no? Very provocative: green light, red light. And the person asking the questions has a gun.
I found that really shocking. Soon after that I saw lie detection deployed at the Arizona-Mexico border, now using 40 different biometrics. Arizona State University again, doing some interesting things in Arizona. Soon after that I saw the same software deployed in airports in Europe, and ports I think in Canada, and various other places. And of course we don't know it's there. And it's assisting the customs officer.
The kiosk there says, Bruce, how long do you intend to stay in the United States? Do you plan to work here? Do you plan to go home after your trip here? And then it hands an assessment over to the customs officer to ask more questions. Provocative, right? All of that happened before AI took off. Now the accuracy is zooming. A paper published in the very prestigious journal "Nature" a few weeks ago shows it zooming way past human abilities to detect lies. We're now sitting at 85% or better. Another paper, from the University of Würzburg in Germany, actually studied how people's behavior changed when they had access to the software. There are 40 or 50 software applications out there you can install today. And they're getting better fast. That's authentication at the edge: dangerous, interesting, highly scalable.
Used well, as a prompt to ask additional questions rather than to call someone a liar, it could be a game changer. Lastly, there's a very human opportunity. Let me give you two assertions, which I believe to be true. No police initiative has ever worked without the cooperation of citizens, in all of history.
The second one is that the vast majority of people are good. I really believe that. I travel a lot. I really believe that. I think 8 billion people on this planet, with a population of 8.2 billion, really hate scammers and really want you to succeed. And they want you to be secure and they wanna help if they can.
That leaves 200 million bad guys. That's a lot of them, right? But most people wanna help. So at a human level, are you mobilizing the crowds in the best possible way? There are lots of examples of interesting mobilizations of the crowds to help you. You have a community here, which is wonderful.
You mobilize each other. That professor there, he got his students mobilized to go and catch the scammers behind the Zeus malware attack. And they did. What a great project. It was fun. They loved it. Why isn't every cybersecurity course at every university hooked into industry to help you guys? That same professor is part of a network called InfraGard, which helps all kinds of professionals like you protect critical infrastructure.
32,000 members. That's sponsored by the US government. You reward bug fixes. We do this, right? We ask our employees to identify security threats.
We incentivize that. How do we extend the crowd? That last one there, you see: identifying stolen cars became a fun thing to do in the Netherlands. It's an app from the police. You photograph a car and they'll tell you if it's stolen or not. Be part of the police force.
Just to inspire you, here's something a bit outside your context. The Detroit Police Department launched a project in 2016, which I've been watching very closely. They went to people who own 7-Elevens, 'cause a lot of crime happens around 7-Elevens, convenience stores. They said: if you pay for a camera and a network connection and the bandwidth, we will run live monitoring of your premises from a surveillance center at the Detroit Police Department.
But you need to pay for it. They did a trial with eight stores, and the crime around those eight stores, 'cause you can see there's signage as well as the cameras, dropped 50% in one year. The next year they got 100 stores, then 300, then 400. Today it's about 1,000. They've got daycare centers, they've got all sorts of people.
They've become part of the process and spending their own money to be part of it. And the message there is if the Detroit Police Department can mobilize the crowd like that, you guys definitely, definitely, definitely can. Okay, last message. And it's just one slide.
We've looked at exponential threats. We've looked at scalable opportunities to address them, at least some of them, because there is lots of opportunity to do this better. I know some of you are probably feeling a little bit stressed, because I'm giving you sort of 20 years' worth of stuff and you're thinking, how do we cope with that level of change? So I just wanna give you something to go home with, I guess. You can't plan, you can't write a plan for cybersecurity in 2050 or even 2030. No one can do this.
The single best thing you can do when you leave this conference, when you go back to your work, is to help your security team, your security department, as many of the people you can influence as possible, become a more innovative unit. Create a more innovative culture, one that can respond faster and embrace new tactics faster. It is the single best thing. Innovation is a human process.
It is never a technical process. It is never about economics. It is always about how people work together. So let me give you a few ingredients. I've been studying this for 30 years. They're kind of fun, but actually I'm really serious about them, okay? And we can talk about them; I'm gonna hang around for the next two hours.
So come and ask about them. First one is, I wear this t-shirt for a reason. It says yuki in Japanese: courage. Because courage, some of it comes from in here, but most of it comes from the people around us. And in your organization, if someone speaks up and you as a leader shut them down, you rob them of courage.
They will never speak up again. But if you encourage them, even when they say something crazy, you are creating an innovative culture. You're providing psychological safety. So give and receive courage. Practice it. It's the number one factor behind the successful innovation teams at Google: psychological safety. We've got great data backing it up.
Practice your language with people that come up with ideas. Tell them where you want innovation. What keeps you awake at night? Just prioritize. You don't want risk-taking everywhere. Nobody does. But there are some things in your organization where you really need new ideas. Communicate that. Foster diverse people, experiences, jobs, backgrounds, as much as you possibly can.
Diversity is the juice of innovation. The sameness you're looking for is values. You want people that care about the same things you do. That's it. But the diversity, preserve it any way you can in your hiring. And even after people are hired, preserve their ability to be different. It matters.
Incentivize. You can give financial rewards to people or you can just give recognition to people. Never take credit for the good things your people do. Always push down the credit.
Find ways to incentivize downwards, so your people come to work every day going, is there a better way of doing this? If you do those well, that experimental culture kind of just comes anyway. People will experiment. But the thing people do the least, the mistake people make, is they don't experiment early with customers.
They don't expose their ideas early. So create a culture where you put together a PowerPoint really quickly and start circulating it with your new ideas. Experimentation can take many forms. And finally, have fun. Just like that crowd stuff, the professor and the students.
There is so much fun in what you do. Yeah, take a photo of that, use it as a reference. Innovation is a human thing and it is the single best thing you can do. So that's it, ladies and gentlemen. My name's Bruce McCabe.
It's been a privilege to be with you today. And I will be hanging around for the next two hours. I can discuss any technology you want and I'd be only too pleased to. And tell me where I'm wrong as well when we talk, and we'll all learn together. Thank you so much for having me. (audience applauds) ♪ But I don't care if I get behind ♪ - Thank you so much, Dr. McCabe. Thank you. That was truly, truly an insightful moment, that you were able to share all of your research and experience with us.
But I do have one question for you. - Okay. - You spend a lot of your time in the future, predicting what's gonna happen in the future, living in the future. Do you spend any time living in the present, in the now? (Bruce laughs) - Well, I would say that's probably a great question. I try very hard to do that, because I think it's probably the biggest challenge in what I do. They say mental health is linked to living in the present, right? We should all meditate, be in the now, all that sort of stuff.
And my profession does exactly the opposite. And I do spend, even though I spend a lot of time on opportunities, I spend a lot of time on some of the bad things that are coming and that can get really, really depressing. So yes, I spend lots of time disconnecting and doing things which are totally different. So I do a lot of history reading and things like that, which have nothing to do with the future. And that helps (laughs).
- And then are there any other challenges that you find you've experienced as part of your world? You mentioned that there are some negative things. So how do you find you separate yourself from the negativity that, you know, is forthcoming? - Well, one of the other big challenges is sometimes you are well ahead of the curve. So what you're saying is unpopular. You can't get people to hear it and you can't get people to listen to it. So on AI, for some years I was visiting labs going, oh my God, this is the biggest thing in IT ever. That started around 2016, and there is no question, it's the game changer of our time.
And along with gene editing and biology and medicine, and agriculture, it's probably one of the two technologies that will change the world most in the next 30 years. That was a pretty lonely couple of years. 'Cause I was saying that at conferences and people were shutting, not shutting down, but they're going, that's really entertaining. You know? And they're just not taking it seriously.
And the same thing has happened in various aspects of the energy industry. So you can be lonely when you're ahead of the curve. That's a big challenge. And like I said at the beginning, the biggest challenge I think operationally is the dates. You've gotta put a date on it. That's my job.
But the dates can move dramatically. You know, you can get quite certain about the direction, but not the dates. - Thank you so much for sharing your insights.
As Dr. McCabe mentioned, he will be around with us to share some more information. So if you have some questions, he'll be here. Pick his brain certainly. And that's all the time that we have for Dr. McCabe.
So thank you so much for joining us. - Alright, thank you. Cheers. Bye-bye. Do you want that one? - Oh, thank you. (upbeat music)