Panel Discussion on Putting Artificial Intelligence to Work


I want to start with you, because I know that you at Shell have had AI as part of the working process, driving efficiencies in a number of different use cases. You're also using it to help with the transition, to do the transformation away from

some of the carbon-intensive parts of the business towards that green evolution, and you see that as consequential. How do you assess what the needs are and then isolate and focus on the potential solutions? Before we get on to the question of how you incorporate it, how do you zero in on what the business actually needs? I think it's like any improvement initiative: you have a lot of different ways of looking for ideas. You can do it top down, you can do benchmarking, you can ask where the cost sits, what's your intuition?

What did you learn in your last business? And you can do it bottom up as well, where you ask the machine operator: how is this process, how could we do it better? You also look for any kind of very manual process, or something which is very tricky or often goes wrong. I think the trouble is there's opportunity everywhere; the only difference is how much value it carries. So once you've got your kind of long

list of ideas or areas you think you could look at, it's then a question of your standard two-by-two matrix of feasibility versus value: how easy is it versus how much it's worth? That's the piece we use from the beginning to say, do we think this is worth looking at? And then as we go through and learn more about the problem, you find it's actually harder than you thought, or easier than you thought, or the value isn't there for some reason, and you adjust. Basically it's your usual approach. I don't actually think it's that different from any other lever you can pull in a business in terms of process redesign and the rest of it, because it's also not just about efficiency. Often it's a quality thing as well. If you want to improve customer service,

you don't just want to talk to more customers in an hour and handle more of them, you also want to provide a better service: make sure you answer that query quickly. But it's fairly standard. Is it a harder sell to the board because of some of the unknowns around at least the new generative parts of AI? Or do you get it across to the board as easily as you would any other initiative? It's an interesting one, because I think with AI in particular, and gen AI especially, you have your usual problem with AI, which is that on the one hand people think it's magic, and magic is an easy sell; on the other hand you have to deliver the magic, and that's not easy. So it's always this mix of you have to

do a certain amount of marketing. You have to tell people what we can genuinely do, because if you haven't seen it before, this is amazing stuff, but also set expectations that this isn't magic. It's hard work, it often fails, and it requires just as much process change and change management from the business as anything else. That's a very balanced message, but what I've certainly found with our leadership is a lot of receptivity, and some of that is the excitement of the moment. So let's bring you in at this point. Cloud services, the cloud, are essential to

what you do, and Temenos recently launched an AI-powered product for transactions as well, to make it easier for your clients, financial services and banking clients, to monitor and give detailed information on transactions for their own clients and customers. How do you identify the need? How have you been thinking about this and some of the potential pitfalls and opportunities around how to implement it? Yeah, I think, as Amy was saying, it's a lever, right? And we were joking over coffee just before that the answer is generative AI, now what's your problem? And that's a pitfall. So we're also big fans of appropriate use of technology, and that can be the pragmatic stuff. A lot of it, again, building on what Amy was saying, is around not only the AI elements but the culture and the organisation as well. If you are changing people's jobs, that's part of what we're doing, and hopefully for the better. And so when you look at things like, you

know, transaction classification, it's not new; we've been doing it for years and years, based originally off rules, then traditional AI, if I can call it that at this point, then moving into explainable AI and now generative. So it's layering on these tools and techniques, coupling it with the culture change and the underlying organisation change that you need as well.
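As a rough illustration of that layering (a minimal sketch with made-up categories and keywords, not Temenos code), a classifier can apply cheap, fully explainable rules first and fall back to a small trained model only where no rule fires:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Layer 1: deterministic keyword rules. Cheap and fully explainable.
RULES = {"TFL": "transport", "PRET": "eating out", "OCADO": "groceries"}

def classify_by_rule(description: str):
    """Return a category if any keyword rule matches, else None."""
    upper = description.upper()
    for keyword, category in RULES.items():
        if keyword in upper:
            return category
    return None

# Layer 2: a small supervised model, trained on labelled descriptions,
# used only when no rule fires (hypothetical training data).
train_texts = ["monthly rent", "salary payment", "coffee shop", "rail ticket"]
train_labels = ["housing", "income", "eating out", "transport"]
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

def classify(description: str) -> str:
    return classify_by_rule(description) or model.predict([description])[0]

print(classify("OCADO RETAIL LTD"))      # rule hit -> "groceries"
print(classify("intercity train fare"))  # no rule  -> model prediction
```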

Does that ring true with you, Alex? Absolutely. And of course we really first look at the business end to end and understand the operating costs of our retail partners. So Ocado, we are no longer Ocado Retail, we own 50% of Ocado Retail along with M&S; Ocado's business today is providing an end-to-end e-commerce grocery platform for retailers the world over, Kroger and Coles to name just a few. The teams that work for me across Ocado Technology focus on understanding the customer's business model, and last-mile logistics is a very significant proportion of sales in terms of cost, as is operating the warehousing and logistics. So that's where we really focus an awful lot of our innovation: these two areas of very high cost for retailers operating a logistics platform. But we need to have machine learning

pervasive through the end-to-end capability in order to be able to realise those incremental gains that really determine whether the business is profitable or not, because of how thin the margins are in grocery. So, you know, the interaction from the web shop through to the warehouse and supply chain systems, all the way to last-mile delivery, has to be very tightly coupled, and insights from one have to be fed through to the others. Customer shopping habits influence how we design warehouses, all the way through to using AI to do things like picking and packing, which is what my teams do at the moment. Retailers around the world range different items; in the UK, they range around 70,000 different types of items.

And so the diversity and complexity are huge. There's no way you can build these systems using traditional technologies and heuristics; you really have to embrace systems that learn from data, such as AI, and do that by focusing the investment where it generates that return. How do you decide whether to build in-house or to buy in or pull in third parties? So Ocado, in effect, is monetising its intellectual property, and so we have to be very careful about

how we use third-party technology and the way it's licensed. But we leverage an awful lot of common AI technology and tooling, such as TensorFlow. A few years ago we were working on a project to predict failures in some of our pieces of actual physical automation; that came out of aerospace a few years ago, and so we leveraged something called WaveNet, a machine learning capability developed by Google for time-domain signals. It was originally developed for speech, but you are able to adapt this technology to learn any time-domain signal, and if you are in the business of physical automation, motors and sensors all generate time-domain data streams.
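For a sense of what adapting that idea looks like in practice, here is a minimal sketch in Keras (an illustration, not Ocado's model): a stack of dilated causal 1-D convolutions over a window of sensor readings, ending in a single failure-probability output.

```python
import tensorflow as tf

def build_wavenet_style_model(window: int = 256, channels: int = 1) -> tf.keras.Model:
    """Dilated causal convolutions over a sensor window, WaveNet-style."""
    inputs = tf.keras.Input(shape=(window, channels))
    x = inputs
    for dilation in (1, 2, 4, 8, 16):  # receptive field grows exponentially
        x = tf.keras.layers.Conv1D(
            filters=32, kernel_size=2, padding="causal",
            dilation_rate=dilation, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# Hypothetically trained on windows of motor-current or vibration data,
# labelled with whether a failure followed shortly afterwards.
model = build_wavenet_style_model()
model.summary()
```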

And so we leverage things like WaveNet, and then we build our own. The latest control system that we have leverages a different type of generative AI, not language models but something called behaviour cloning, where we use the cognitive ability of humans to effectively remote-control the robots to start with. We then use that data to train the AI, and it is end to end, right? So video comes in, robot control comes back out, and the ability of these systems to generalise across problems they haven't seen is phenomenal.
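Behaviour cloning, at its simplest, is supervised learning on logged pairs of what the human operator saw and what they did. A minimal sketch, assuming Keras and toy data shapes (the frame size and four control outputs are illustrative, not Ocado's stack):

```python
import numpy as np
import tensorflow as tf

# Stand-ins for logged tele-operation data: camera frames and the human
# operator's control commands at each timestep (hypothetical shapes).
frames = np.random.rand(1000, 64, 64, 3).astype("float32")
controls = np.random.rand(1000, 4).astype("float32")

policy = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(4),  # regress the four control outputs
])
policy.compile(optimizer="adam", loss="mse")

# "Cloning": fit the policy to imitate the demonstrations, then run it
# end to end: frame in, control command out.
policy.fit(frames, controls, epochs=3, batch_size=32)
predicted_control = policy(frames[:1])
```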

I mean, it truly blows my mind in terms of the capability. So there's the human training element, then the humans step back and it functions on its own? Yes, but AI isn't perfect; AI goes wrong. And so we have humans and AI working collaboratively: if the robot, for whatever reason, is unable to complete a pick, maybe it drops an item, the packaging has changed, or the packaging has been damaged or something,

then we're able to use remote operators to effectively collaborate with the AI to help complete the task. It performs operational recovery in order to maintain the very reliable service that we need, 24/7, picking millions of items a day. And then we can use the data from a failed pick that has been recovered by these remote operators to go and improve the system over time. So having a strategy where you build technology, but where this remote-operation capability helps us build at the beginning, helps us operate against real-world operational challenges, and then also helps us build further improvements, means you're generating momentum in this flywheel: it gets faster and faster and faster.

So it's a combination of your own solutions and third parties. What about at Shell? How do you think about whether to build it out yourselves? You have the resources, you have the team, and you're a cash-rich company. Do you do it in-house or do you go to third parties? I think it's similar to Alex. I don't think anyone could say they're building everything in-house nowadays, because the component parts you get from outside are just so good now; you'd be crazy to try and do it all on your own. But we always think in terms of: is it our core IP? What are we? We produce energy, we process it, we distribute it and we trade it.

That's core to what we do. If it's something outside of that, then we should use the market standard. And if it's something within that, then we might want to look carefully at whether it's our core IP and therefore something we want to keep. I think build versus buy is a bit too simple, though, because there are plenty of things in between, like partnering or building on something that's actually third party. And obviously we use the cloud

platforms like everyone else; we're not going to try and build those component parts ourselves. So actually it's always a patchwork of taking what you can from the market, building what you need internally, and being really clear about how best to use your time and resources.

How do you handle the frictions when it comes to that? You think of an oil company, an energy company like Shell, and you think of engineers out there on rigs. You have those engineers, but you also have your software engineers building out some of these AI solutions. How do you meld the two?

Is there a friction there? How do you get past that? That's a different type of friction. I mean, I don't see that so much with the front line necessarily. It's always a change management question that you would have with any population, whatever it is, whether it's call centre workers or people on an oil rig. There are always going to be some people who think, let's try something new, let's see how we can do this differently, and some people who say, I don't know, this workflow has been fine

for 25 years, so why change it? And other people could go either way. So it's always an effort to really do the change management, make sure everyone's bought in, make sure it's explained properly, and make sure it is to everyone's benefit as well; of course people don't want to adopt things they're not sure about. But that's a very different type of tension, I think, and again it's no different from any other process change, any other change to a business. Tony, is it easier for you because you're a pure software play? Maybe you don't have those frictions; I mean, you're smiling at this, so maybe. How do you proceed once you've selected what the problem is, you've chosen whether you're going to do it in-house, pull in a third party or make an acquisition, and you have a team tackling it with that particular solution?

How do you embed that into the business and mitigate the downside risks? So I think you've been reading some of our blog posts, because the key word there is embed, right? So yes, very much so. We take a mixed approach: we're not going to build out the nuts and bolts ourselves, there's a great deal of open source out there that we want to use, and again, partnering is another angle. But then, you know, pulling it all

together is where it starts to stick, where the magic happens, as it were. So what we try to do, and what we try to encourage our clients, the banks, to do as well, is not to sort of put AI off in a corner. We want all of our product managers, all of our product teams, all of our engineering teams to be able to use this platform capability, which, you know, may well change over time. And that's fine.

That's why we put the platform over the top of it, so that they can then build our core banking IP on top of those models, on top of those capabilities, regardless of what the underlying tech is. So it's about exactly that: embedding it into the teams, embedding it into the products. That's how we've seen a lot of our customers be successful and that's how we're rolling out our AI-powered products. A lot of this innovation is obviously

happening far ahead of, and moving far more quickly than, the regulators. And you work obviously in a very regulated space. So how do you keep ahead of your competitors whilst also ensuring your clients don't fall foul of regulators? So one of the things that we talk about a lot is this concept of responsible AI.

And you know, previously we talked about explainable AI and ethical AI, so we kind of roll it all together into this concept of responsible AI. And again, we were talking at the break about how we can see legislation coming down the track.

We know it's coming, but moreover it's a duty of care, if you will. So if somebody has a loan application rejected, they need to understand why. You can't be using black-box models where, you know, the computer just says no. You need to understand why, you need to be able to challenge it, and you need to provide evidence that the decision is behaviour based. So again, we were talking about the idea

that perhaps, because of something the AI has latched onto, you may not get a loan because your cholesterol is high. But unless you know that's what the problem is, you don't have a way of fixing it and you don't have a way of challenging it. So yes, there's all this regulation coming down the track, and we keep looking to the wider industry, obviously, but a lot of that regulation, hopefully, and maybe this is a little optimistic, is there to protect the consumers and ourselves.
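A minimal sketch of that "explain and challenge" requirement, assuming an inherently interpretable model such as logistic regression and entirely hypothetical features (an illustration, not a real credit policy): each decision can be traced back to the features that pushed the score down.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: columns are income (k), debt ratio, years in job.
X = np.array([[55, 0.2, 6.0], [22, 0.7, 1.0], [70, 0.1, 10.0], [30, 0.9, 0.5]])
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = rejected
features = ["income", "debt_ratio", "years_in_job"]

model = LogisticRegression().fit(X, y)

def reasons_for_decision(applicant):
    """Rank features by their signed contribution to the log-odds,
    so a rejection can be explained and challenged."""
    contributions = model.coef_[0] * applicant
    order = np.argsort(contributions)  # most negative (most harmful) first
    return [f"{features[i]}: {contributions[i]:+.2f}" for i in order]

applicant = np.array([25, 0.8, 1.0])
print("approval probability:", model.predict_proba([applicant])[0, 1])
print("reason ranking:", reasons_for_decision(applicant))
```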

So really, you know, it comes back to doing what's good and what's correct; that's the north star, if you will. Okay. So that explains the importance of explainable models versus some of the black boxes, and maybe it's a bit of a critique of chatbots and OpenAI on that level. There's actually a question from the audience around your uses of OpenAI.

We'll get to that very shortly. But Alex, on the question of regulation and the question of data: obviously your clients are customer facing. I may not want your clients to know that on a Friday I buy ten types of ice cream and a bottle of whiskey. So how do you think about data and abuses of data, and how do you ringfence that so that you don't lose that client trust? So again, we have a responsible AI policy; for everything we do for clients, we track the data and we check for bias.

Fundamentally, in terms of your question of how we ensure that kind of confidence from our retail partners: we don't track individual people. They hold all of that data in their own systems, and what we get is effectively tokenised and anonymised data, so that we can use it to improve the systems without having access to any individual. So, on this question that's come through from one of our viewers, or someone in the room: OpenAI, do you use it in your systems? Not within our systems at the moment. With the free-to-use ChatGPT, everything that you put into it is public domain, and so we are being very careful about

how we do use it. However, our teams of engineers do leverage it to support them in doing their work. I have a team of engineers 3D-printing a robot arm, and we were exploring different permutations of print technology in order to drive down cost and get the right kind of mechanical performance. So we leveraged ChatGPT to explore those permutations of print technology by posing it a series of questions. It came back with the answers in a split

second. We validated a number of them for accuracy, they were 100% accurate, and then we just believed the rest. So as a support tool, if we understand how to write the right query, yes, it is incredibly powerful. Next time, bring the arm in with you; I'll help you do that. Okay, so you use ChatGPT

and you stress-test for some of those hallucinations. Amy, you've allowed ChatGPT to be used at Shell? Not me personally, but yes. I know some institutions and some banks have blocked it, for example; we haven't actually blocked public ChatGPT yet. We've trained people, but we haven't

blocked that. But more importantly, we are using the OpenAI APIs via Microsoft to create services within Shell. We've built a whole programme using those and other types of models, in both vision and text, to explore what we can do around the business to improve efficiency for sure, but also to look at quality and other impacts. So we do have a large generative AI programme now.

Okay. Another question coming through from the audience: where do you see the trend of automation going, especially when we're influenced by chip shortages and AI enthusiasm? Tony, let's put that question to you. Nvidia is very much in the headlines; over the last few weeks, some of the most high-end chips have been costing about 20 to 30,000 US dollars, and we have a shortage of them here in the UK. What is your take on that question? So to an extent the answer is cloud. At the risk of coming back to the usual answer: the answer is cloud, now what's your problem? You know, pragmatic use of technology, putting the right workloads on the most efficient chips; cloud gives you that ability.

Most banks, typically for historic reasons, will have an amount of tin that they can reuse. It's unlikely they've got vast amounts of GPUs or AI-friendly kit lying around, and, you know, in 2023, do you want to be building that? Then there are my friends at IBM; we've had a few discussions on this and on the idea of a hybrid cloud, where some of our customers, some of our banks, need to keep their data on-prem, or want to for whatever reason, and there's a different set of chip technology that they can bring to bear there. So cloud, but hybrid; a combination of those two, with the appropriate technology for the right use case.

Alex, if you project out five years, where do you see automation taking us as we weigh up these two seemingly conflicting pressures, the shortage of chips and this hype and enthusiasm in the space? I mean, I see our operation for the retailers that we support fundamentally becoming ever more efficient.

And without getting into a different, challenging question about AI and jobs, the main direct costs that our retailers face are fairly low-skill labour roles, whether that's picking and packing in a warehouse or other types of handling, and I'm afraid I think that at some point on the horizon those jobs will be automated. Moving in uncertain environments out on the road is much more complicated than handling things in a warehouse, which is where we are focusing at the moment. When it comes to the challenges and constraints of chip availability: I recall two years ago we were outside our threshold of how much spend we were willing to make in terms of cloud cost. So today we are very conscious, when we build new systems, about the amount of logging we are creating and the amount of data we are storing; our ability to generate data far exceeds the utility we get from that data. Having a very strong focus on how you maximise your utility from data ultimately ends up meaning we have to be quite efficient with the hardware that we're using where we deploy robots.

The robots need to hit a certain price point, and so I'm afraid we can't deploy them with tens of GPUs each. So we have to find ways of deploying very advanced physical automation systems, and software-based automation systems, that don't just consume all the compute you can throw at them. Being very efficient with data, which in turn means being efficient with compute, is key. Amy, how much free rein do you have to spend on compute? Are you concerned that the lack of compute, the shortage of chips, the cost, could hold back some of these innovations? Oh, yes. I think the supply chains currently are

very fragile, and there are efforts being made to make them more robust with government support in various countries; that will take some time to flow through. So I think the combination of the absolute hype and the fragile supply chain could lead to a bit of a crunch. Whether that's actually going to impact everyday operations, I'm not sure, because these things take time to develop and really put into practice anyway, but it definitely is a concern. There's another question coming in about quantum computing and its impact on the pace of AI, in terms of how quantum computing may speed up the implementation of AI. Anyone jumping in with a view,

a strong view on that? I think it will take some time. It'll take some time. So anyway, we have researchers dabbling occasionally with a problem called the knapsack problem, which is a very classic optimisation problem. It looks not that dissimilar to a packing problem that we have, where we theoretically tessellate all of the potential groceries into our shopping baskets. As you are clicking "add to basket", we are doing that tessellation live, so that we can maximise the efficiency of the travelling salesman problem that is last-mile grocery logistics. And at the point where we've tried to formulate this as a quantum problem, the number of qubits we need is in the millions, and at the moment D-Wave machines are in

the thousands. Maybe there are some governments out there with machines which are far more powerful, but for me, when we've dabbled in it, it looks very distant.
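For reference, the knapsack problem the panellist mentions is small enough to state in a few lines; the classical dynamic-programming solution below (a textbook sketch, with made-up grocery volumes rather than Ocado data) is the baseline a quantum formulation would have to beat.

```python
def knapsack(values, weights, capacity):
    """Classic 0/1 knapsack: maximise total value without exceeding capacity."""
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate capacity downwards so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# Toy groceries: item values and volumes, packed into a crate of volume 10.
values = [6, 10, 12, 7]
volumes = [1, 2, 3, 5]
print(knapsack(values, volumes, capacity=10))  # -> 29 (the items of volume 2, 3 and 5)
```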

Not yet, for sure. Not yet. We're talking timeframes of years, decades. So, I mean, depending on who you talk to, quantum computing is either tomorrow or 300 years away, right? So somewhere in between, and worth pursuing. All right. Fantastic.

Well, thank you very much.
