Hello and welcome to the Edge Tech Podcast, where technology meets the edge of innovation. I'm David Kosloski. Today we're talking about AI at the edge and why it matters. Alongside me I've got Ed Barkhuysen, who is the VP of Product, and Matt Forde, who is the Sales Director for EMEA. Thank you guys so much for being on the show today and talking about AI at the edge.
Now, we've talked a lot about edge, and I think we've also mentioned that there's sort of an unspoken assumption that AI is being implemented with the edge. So I want to start with Ed. Can you explain a little bit about edge and maybe what it was before AI, or rather the decisions that were being made without it? Yeah, sure. So I mean, AI has been around for a long time. We can come back to that, perhaps.
But edge computing has also been around for a long time, and it has been quite cyclical: compute has moved as far out from the data center as it can go, and then been pulled back in closer to the data center. That cycle has been driven by CPU enhancements and improvements in the level of performance you can get from a server-like system down to a smaller, more power-efficient system out towards the edge, and by the level of connectivity that supported it.
So it has always been a movement between the edge and the data center. And what we're seeing today is levels of compute performance that we've never seen before, tied in with levels of connectivity that we've never seen before, with satellite-based communication systems being easily deployable alongside 5G networks. So now I think it's all coming together, and we're really seeing that tying into AI as an important part of it. But whether you look at it from an image-rendering point of view, or a local connected backup system, or data manipulation, a lot of that still happens at the edge and needs to be close to the consumers and users relying on that compute, without necessarily bringing AI into it.
So they are still two separate fields, but they definitely complement each other. Yeah. And Matt, you had some good examples before we started, specifically around surveillance. It's a funny example, but also a serious one, about breaking and entering. Can you give that example? Yeah.
So, in terms of what AI is at the edge and how that affects the companies and partners we work with: we do a lot with retail, we do a lot with QSR (quick-service restaurants). As a foundation, I think it's important to point out the difference between what can be done in the cloud and what can be done very efficiently at the edge. For a retailer, let's start with the problem statement.
So what's affecting retailers today? One of the things is getting labor. It's a real challenge even opening your store. We have customers in the US who operate QSRs where, in 40% of their locations, the store itself is not open to sell burgers.
The drive-thru is open, but the store is not, because they can't get the staff. So where does AI help with that? There are lots of people who can use a camera to count the number of people coming into your store and give you predictive analytics: how busy are you in the morning, when is your busiest time in the afternoon. And all of that, counting people, gender recognition, that whole piece, can be done in the cloud.
But if you're a QSR operator trying to figure out how to optimize queue times for your customers, or you're a post office trying to make sure you don't get fined because people are waiting in a queue too long, that is information that has to be gathered at the edge, interpreted at the edge, and acted upon at the edge, because the action has to happen in real time. There's no point in saying, "I had a really long queue of people waiting for an hour at 9:00 this morning." It's too late. You've got to act on it within a couple of minutes to be able to solve the problem. So that's where the edge, and having hardware at the edge, allows these new AI models to work on the device itself, right at the edge.
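The real-time queue check Matt describes can be sketched in a few lines of edge-side logic. This is a hypothetical illustration, not any vendor's product: the per-frame people counts, threshold, and alert policy are all made up, and a real deployment would get the counts from a camera plus a person-detection model.

```python
# Minimal sketch: alert locally, in real time, when a queue stays too long.
# All parameters here (threshold, sustained_frames, the counts) are illustrative.
from collections import deque

def queue_alert(counts, threshold=5, sustained_frames=3):
    """Return the frame index at which to alert: the queue has held
    `threshold` or more people for `sustained_frames` consecutive frames.
    Returns None if no alert is needed."""
    window = deque(maxlen=sustained_frames)
    for i, count in enumerate(counts):
        window.append(count >= threshold)
        if len(window) == sustained_frames and all(window):
            return i  # act now, on the device, with no round trip to the cloud
    return None

# e.g. per-second people counts coming off the camera feed
frames = [2, 3, 6, 7, 8, 4]
alert_at = queue_alert(frames)  # fires on the third sustained busy frame
```

The point of running this on edge hardware is the latency budget: the decision is made frame by frame, which is exactly the "act within a couple of minutes, not at end of day" property described above.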
And maybe to challenge something Ed said earlier about having connectivity everywhere: unfortunately, that's actually not the case. Talk to a lot of retailers and bandwidth restrictions are a real problem. Getting bandwidth to a store, whether it's in rural England or the rural United States, is a real challenge. So we work with customers who have to handle all that video and those applications locally.
In the past, what used to happen was you took your video and stored it on a DVR locally. And, what do you know, when you get broken into, the first thing the criminal goes for is the DVR. They destroy the DVR, so the evidence is gone. That's why you saw a movement towards taking the video and storing it in the cloud. Storing in the cloud? Absolutely fine.
But expensive. They don't tell you that when you're buying the camera, but when you do the analysis, if you need to store your data for more than 30 days, it gets really expensive. And a side story, especially in the US: you have bad actors who will come in and say, "I had a slip and fall on your premises 45 days ago." Why do they say 45 days? Because they know the recordings are no longer there, so you can't defend the court case. But I went off on a tangent there. Apologies. Back to the edge piece.
If you're looking for data to be interpreted in real time, it has to reside at the edge. Another big challenge you're seeing, particularly in US retail, is theft. It has become a real problem, a multi-billion-dollar problem. And one of the ways to defend against it is using AI at checkout to determine whether someone is stealing something, not paying for something, or not scanning something.
Again, that needs to happen in real time. You can't do that in the cloud; you can't rely on your bandwidth to get that information up to the cloud and back again fast enough to defend against that type of activity. That's the type of workload that suits the edge, and that's why you're seeing an emergence of edge hardware that allows these applications to sit very comfortably and work really well at the edge. And I would agree.
And I think some of the discussions we've had on earlier podcasts were around semi-autonomous systems out at the edge that are able to continue to operate when they can't get the data connectivity they need, whether that's a bandwidth issue, or connectivity has dropped, or they're in a location where connectivity simply isn't available. We were talking about some extreme scenarios in very harsh environments, perhaps underwater or perhaps in space. You need a system that can hold a model locally and run the AI inference out there at the edge, often with no connection to the cloud, or at least operate semi-autonomously with a limited connection back to the cloud or to other systems. So, yeah, it needs to be a blend.
But now, with the level of connectivity we have today and the level of compute that can be put out at the edge, there's a good level of choice. If you're a meteorologist looking at the last 20 years of weather data, that model is going to be in the cloud; you're not going to have that at the edge. But then again, the inference you're gaining from it may not need to be as real-time as in a theft-protection scenario, where someone is potentially putting something in their pocket and within ten seconds they're out the door and down the street, and you need to get ahead of that.
You need to have your security guards waiting to meet and greet them outside the door. So, as we look at the way AI is being implemented today, I suppose there are three key areas where we're seeing it used. And remember that AI isn't something new. There were language models being deployed over 15 years ago; a good example is Google Translate, which looks at text in one language, but not just as individual words. It conceptualizes it and provides a translation that is meaningful in a different language.
Various evolutions of AI have given it extra visibility from a public perspective. But as we see it being deployed right now, it's visual monitoring, predictive maintenance, which we've touched on a little, and operational efficiency. Those are the three key areas where we're seeing it used today. But that's going to grow and change, and there's a particular report, I think from IBM just recently, analyzing the level of change we can expect.
They were saying that the impact the mobile phone has had on society will be nothing compared to the impact AI is going to have over the next ten years. Makes me wonder: when we talk about AI, where does AI inference play a part in edge computing? And maybe we need to start with your definitions of AI inference, because I feel like the terminology can get very skewed from person to person. I think it's when you have a trained model, which you can then use at the edge to make an inference. It's not gen AI going and generating something that a human might otherwise have produced.
A classic, easy example is text generation: as you start writing a message, it tries to auto-complete your sentence for you. A really good example, and something you perhaps wouldn't want in the cloud, is an office controlled through facial recognition. As you approach, that's AI inference: looking at your face and then unlocking the door for you. Yeah.
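The shape of "inference" as Ed defines it can be shown with a toy example: the training already happened elsewhere, and the device simply applies frozen weights to new input. This is not a real face recognizer; the weights, features, and threshold below are invented for illustration, and a real system would use a deep network, but the operation, model in, score out, local decision, is the same.

```python
# Toy "inference at the edge": apply an already-trained model's frozen
# weights to a new input, entirely on the local device. All numbers are
# illustrative; real facial recognition uses a deep network.
import math

WEIGHTS = [0.8, -0.5]  # learned offline, during training (not here)
BIAS = 0.1

def infer(features):
    """Score a feature vector with the frozen model: sigmoid(w.x + b)."""
    z = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def unlock_door(features, threshold=0.6):
    """Edge decision: made locally in milliseconds, no cloud call."""
    return infer(features) > threshold

decision = unlock_door([2.0, 0.5])  # a "match" feature vector -> opens
```

No gradient updates, no data leaving the device: inference is just a fast forward pass over inputs that arrive at the edge, which is why the two-second door-opening budget Matt mentions next is achievable locally.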
And by allowing the AI inference to happen at the edge, what's the big impact there versus a cloud solution? Speed is one thing, for certain. In that particular example, you can't wait thirty seconds for the door to open. That has to happen within two seconds, or you're going to have a lot of very frustrated people.
So I think that's certainly one. The other thing, with GDPR in Europe, is where you keep that data. You're seeing a big move towards making sure that data is all kept within your own silo, within your own control, on your own local area network, as opposed to sitting somewhere else where it may be compromised, or compromised on its way to the cloud.
That's one of the areas of concern, and one of the reasons you'd want that data to sit under your own control. Yeah. So when we talk about edge devices and implementing AI: if it had to be in the cloud at one point, I'm assuming that was a very large system, like a server. Is that fair to say, that the hardware had to be large? It's changed so much over the years, but I think that's the traditional concept. Yeah.
And so today, these edge devices don't necessarily need to be large, right? They could fit in cameras, or rather next to cameras. Is that fair? Yeah.
I mean, that's one of the big changes. I spoke earlier about the challenges retailers have, and retailers generally do not like having a lot of equipment on site. How do you manage it? It would be much easier for them to manage one central repository of information, or outsource it to a cloud provider and let them manage the kit. So they're reluctant to have large amounts of kit on site. And this is where companies like Simply NUC come in, with the new extreme edge server range, where the device is much smaller. You can fit two of them in a rack, and they consume a lot less power.
These things are actually important. We're working with a client at the moment where the power savings alone from moving from traditional large rack servers in store to small form factor servers in store were 4 million pounds a year. That's a big number. Yeah. You can take the sales guy away from the desk, but you can't take the sales out of the salesman. Sorry. I mean, what could you do with that 4 million? Yeah.
That's right. Yeah. No, that's great. Based on what we know about AI models, does the machine itself, you said two in a server rack, need to get smaller still? Are we limited by the size of AI? Does the machine need to stay large, can it keep getting smaller, or is there a progression where the models get leaner, which means the devices get smaller?
Where can we go with this in the future? Because if it's something that takes up a server-sized rack, I feel like there's a lot of wiggle room ahead. It should be the size of our phone, right? Hopefully in the future that can hold an AI model making decisions, AI inference, on the fly. Is that possible? There's a lot of effort going on right now into making large language models, which many of these solutions rely on, more efficient, and not just from an integrity point of view.
So that what's in there is accurate and correct. A great example: if you were to go and create a large language model from Wikipedia, it would be so big that you wouldn't normally be able to have it right out at the edge; it would need to be stored in a centralized data center and leveraged from there. But is that going to change as compute, storage, and memory capabilities improve in those small form factor devices placed out towards the edge, in tandem with improvements in the efficiency of the large language models themselves? Yeah, absolutely. We're going to continue to see a greater level of autonomy and deployment of these models outwards.
Is it there today? It really depends on the scale. If it's a more specifically targeted model, say in the theft-protection example, it's looking at where your hands are going: your hand going into your pocket with something in it, and that something isn't a mobile phone, it's a product from the store.
That's something which can be trained and leveraged at the edge today; it does exist today. So, are there any hypothetical ideas out there? I feel like AI is in its infancy, even though, as you said, Google Translate was happening years ago. Then ChatGPT came out and obviously there was this boom: everyone's using AI, or at least a lot of people are early adopters. It still comes across like it's this unknown. Is that a fair assessment, that we're learning as we're building? And if so, what are the potential possibilities we're not even thinking of, especially around edge computing? A good example: back to the loss prevention model, okay.
There are companies out there actively working on machine models that will count the number of items in front of you and compare that to the number of items put through the POS, so they can determine whether your checkout operator has checked through everything they should have. In the US there's a term for it, "sweethearting": giving product away to somebody you know. Now, if the model is 90% accurate, then what? That means 10% of it is not going to be accurate.
And if you have 10% that's not accurate, and you're referring that to human inspection, that's a huge amount of human inspection across a huge number of transactions, which actually outweighs the value of having the solution there in the first place. So where AI will develop is getting that to 99% accuracy.
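The back-of-the-envelope arithmetic behind that point is worth making explicit: at 90% accuracy the human-review burden is ten times what it is at 99%, and can swamp the value of the system. The transaction volume below is a made-up figure for illustration.

```python
# Illustrative only: how review burden scales with model accuracy.
# The daily transaction volume is a hypothetical figure.
def flagged_for_review(transactions_per_day, accuracy):
    """Transactions the model gets wrong, i.e. needing human inspection."""
    return round(transactions_per_day * (1.0 - accuracy))

daily = 10_000                             # hypothetical chain-wide volume
at_90 = flagged_for_review(daily, 0.90)    # 1,000 manual reviews per day
at_99 = flagged_for_review(daily, 0.99)    # 100 manual reviews per day
```

An order-of-magnitude drop in manual reviews is the difference Matt describes between a solution that costs more than it saves and one where the data is worth more than the effort of inspecting it.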
And when it's 99% accurate, the data is worth more than the effort of inspecting it. That's what you'll see: it getting better and better and more accurate. I think that's what you'll see from the development of AI across lots of different fields. So, to play devil's advocate: if the hypothetical is 10%, what is the benefit of implementing AI at the edge today? The reason you're seeing companies move to doing the inference at the edge is that the models are getting more sophisticated. They call it AI, right? But it's not necessarily artificial intelligence; it's human beings building models that are able to recognize items, recognize different things.
But the way the models are built now, you're introducing levels of machine learning where the model knows what it's looking for and how to build on that, and it can do that quicker at the edge, on the fly. So even though there's a discrepancy of 10%, you need the AI to continue to work and learn, to shorten that 10% down towards the 99%, is what you're saying. Yeah, yeah. If you're counting the number of people who went into a store and it's 95% accurate, that's really fine. It's okay.
But if you're using it to detect theft, it's not. Think of the situation in a retailer: if you're stopping 5% or 10% of your shoppers at the door because the AI model suspects they did something inappropriate, you're going to have a serious problem getting customers to come to your store. With 10%, you'd be stopping and searching one in ten shoppers as they exit.
So that's how the models have to get much better, from an AI perspective. So I'll throw a curveball for each of you, personally. It could be consumer-based, could be business-based.
What are you most excited about in edge computing? The future that doesn't exist today. Let's say there's no size issue; it can be as small as you want it to be. What are you most excited about implementing AI in your own lives? Me personally, fully autonomous driving. We're on the road going in that direction, if you like.
But we're not there yet. And I think if you look at those almost continuously updated, containerized applications of AI, with the evolution of large language models and self-learning, built around a neural network approach with weightings attributed to the various decisions that can be made, constantly evolved and updated: when a particular car sees something happening, responds in a particular way, and that's deemed a successful way of dealing with that situation on the road, that learning gets shared. As a new driver, you'd have to go through and learn all of that yourself: the different junctions you might need to pull out from, the different speeds of cars and distances all around you. Whereas with the model being constantly updated for automated driving, it's instantly distributed amongst all the cars. As one car learns something, the whole network learns it instantly.
And that will be an acceleration towards a much safer environment for all of us. Sure. Matt, you got an opinion? I assume yours is going to be about golf. I was trying to see if... Yes. Well, I'm a proud driver of classic cars, so I'd be really concerned about Ed's future. But what's interesting about the AI piece and the driving piece is that as it matures and becomes so much better than it is today, the consequence may be that you will not be able to get insurance to drive a car yourself, because the AI cars, the automated cars, are doing it so much better: less traffic, because they learn and they say, okay, if we all slow down to 50, we're not going to have this stop-and-go back and forth.
That is certainly one development I wouldn't want to see. Having my poor car sitting in the driveway would be very sad. So he acknowledges it's possibly inevitable, but at the same time hates it. And I think there's no level of AI that could help me with my golf right now. Is there anything that excites you personally? I'd like to see how it develops from a teaching perspective. I'm a musician as well, so I'd like to see how that develops in music education.
I'm certainly not interested in seeing how it writes music and replaces musicians and replaces creativity. But I think there's a lot of potential from a teaching perspective in the music world. So, yeah. And you're seeing some of it already. Do you have any examples? Yeah, there are lots of applications now. If you're learning to play piano, it is actively listening to you, listening to which notes you're hitting correctly, and giving you feedback in the moment on whether or not you've hit the right note. So you're learning from that perspective. I think there's still a long way to go. You see it for pianos and guitars, and I'm sure it's there for wind instruments.
So yeah, it would be interesting to see where that goes. Yeah, I've seen apps that help you sing better as well. Yeah.
Fortunately, I don't need that app. Yeah. Fair enough. Awesome.
Well, guys, this has been a really great conversation about edge and AI. Is there anything else before we close it out? Anything for last? Yeah, there was something we didn't cover that I think is important. You talked before about how, as the devices get smaller, there'll be more and more devices.
In parallel to that, you're seeing customers wanting to consolidate the number of devices. What we're seeing, and it's happening today, is that instead of having a separate device for each function, appliances as they're generally called, instead of six different security appliances from six different vendors, what you're seeing now is: okay, I'll take the software version of each, and I'm going to run them on two devices. And those devices work as a redundant pair, so that if one goes down, the other takes over. Instead of six devices, I have two small form factor devices that do exactly what the other six did.
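The redundant-pair behavior Matt describes can be sketched as a simple active/standby arrangement with heartbeats. The names, timeout, and heartbeat scheme below are illustrative assumptions, not any real product's failover protocol.

```python
# Minimal sketch of a redundant pair: one active appliance, one standby;
# if the active node's heartbeat goes stale, the standby takes over.
# Names and the 2-second timeout are hypothetical.
import time

class Appliance:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

class RedundantPair:
    def __init__(self, primary, standby, timeout=2.0):
        self.primary, self.standby = primary, standby
        self.timeout = timeout

    def active_node(self, now=None):
        """Primary serves unless it's down or its heartbeat has gone stale."""
        now = time.monotonic() if now is None else now
        if self.primary.alive and (now - self.primary.last_heartbeat) < self.timeout:
            return self.primary
        return self.standby  # failover: standby runs the same software stack

a, b = Appliance("edge-1"), Appliance("edge-2")
pair = RedundantPair(a, b)
serving = pair.active_node()   # edge-1 while healthy
a.alive = False                # simulate edge-1 going down
failover = pair.active_node()  # edge-2 takes over
```

Because both nodes carry the full consolidated software stack, losing one appliance degrades redundancy rather than functionality, which is the point of collapsing six single-purpose boxes into a pair.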
Because again, it comes back to space, and not wanting all that kit. Every IoT endpoint is a device that has to be managed by somebody, and from a security perspective, every IoT endpoint, whether it's a camera or a sensor, is a security risk. So if you can consolidate that down onto as small a number of hardware appliances as possible, that's definitely happening. And as those small devices become more powerful, you'll see that multiply and continue. That's great. Any other last thoughts? No, I'll just end by saying this is a very rapidly moving technology.
And I think as individuals, but also as organizations and companies, there are many fellow travelers out there who you can partner with to go and learn and combine solutions together. That's going to be a key part of the evolution of this: bringing different technology partners together to create the solutions that will make a difference in the world we live in tomorrow. That's great. Thanks, guys, so much for jumping on the show and talking about edge computing and AI. And thank you all for listening in; make sure you like, share, and subscribe.
Thanks for tuning in to the Edge Tech Podcast. Again I'm David Kosloski. Take care.