The truth about predictive maintenance - with Nat Ford – August 2023

Hello and welcome to this episode of the Trend Detection Podcast. It's great to be back with my colleague Nat. Regular listeners of the podcast will know we've had a series going for the past few months, years really, but there's been a little bit of a gap. I'm really pleased to have Nat back, so we can keep digging into our regular series about the truth about predictive maintenance, which is a broad topic.

And as you know, Nat talks very much from the hip, which is what we like. Today we're going to talk about a very broad but very important topic, because I think there's often a lot of confusion around the term predictive maintenance, which is obviously very close to our heart but is often confused in this market, with lots of different uses of it for different products. We're going to try and unpick that a bit today. But before we get started, I'll hand over to you, Nat, just to introduce yourself quickly. Yeah, thanks, Niall. Hello to everyone who's listened before.

I'm Nathaniel. I'm currently business development manager at Senseye within Siemens; previously I was sales director at Senseye when we were in startup mode.

But essentially my job is about going out and meeting organizations who are interested in exploring and understanding predictive maintenance, and helping them to unbundle some of the confusion around it. It's one of those terms which needs to be appreciated and understood before a company can really work out what they can achieve from predictive maintenance and set out a plan to go ahead and do so. And as I always say, why have I got the right to come and pontificate about anything to do with the truth about predictive maintenance? Really for the very reason I just said: me and people like me spend years of our lives engaging with people who are struggling to work out what to do next. As a result, we tend to be there from the very beginning of the concept of implementing whatever technology we happen to be selling at a given time, through to successful deployment and scaling. So if you look at anybody within a business, we're probably the people who, if you cut the project in half, you'll find still there all the way through.

So we have a good view on what people are doing and how they're doing it, what the misunderstandings are, that sort of thing. So thanks for having me back. Yeah, I thought today it'd be interesting to talk about this concept that's coming up quite a lot with our clients. It tends to be an epiphany that clients go through. We probably try and tell them this in conversations like this one, but people don't listen until the context is right.

They have an epiphany when they realize that predictive maintenance, or condition monitoring, or however you want to describe it, is not a technology; it's a methodology. I was toying earlier with the idea that predictive maintenance is "not dot dot dot", and people who are a bit more expansive in mind might call it an ideology or whatever. But the point is that when people first start discussing predictive maintenance, they will typically think about a piece of technology. What they discover is that they are reviewing, assessing, and trying to understand, implement and use a methodology.

We happen to sell a piece of technology that underpins those efforts. But the stuff that we do really well is the stuff which is not the technology: it's the implementation and the coordination and the communication and the education, and getting people to learn how to make decisions. So that's what I think we can discuss today.

How do you help people break out of that mode quickly? How do you give the technology part of it the respect it's due? If you're going to talk to Senseye within Siemens, you have to understand how our technology is configured, how it works, and why we've been successful and can help clients. But you need to move very quickly on from that to talk about how you're going to run a project that looks at the really important stuff, which is everything that happens after the technology. That was a really interesting way you put that, how you mixed it up there: we're a technology, but we actually have a lot of expertise on the other side, the cultural side of it, as well. So I can see why that's a slightly confusing picture for clients. Right: you're a technology vendor, we're buying technology.

What would you know about the cultural side of it? I guess that would be an obvious question from someone completely new to all this and wondering what's happening. Okay, well, that's a good question. I worry sometimes about using the word cultural because it's got all sorts of connotations. I guess what we're talking about is behavior: the behavioral change towards getting people to use a system versus not using it. But then it's not binary.

It's about all the great things you can do and the complex things you can do once you've started using a system. So it's that stuff that we have to become experts in learning how to affect and change and measure, and in understanding what the status quo is and where it needs to get to. And for people who don't see why that might be, this is the epiphany they need to go through.

It's not the technology, it's that behavioral stuff. And why? Well, the technology is the easy bit, "easy" in inverted commas. We've created it, it exists. Our platform is very, very good at doing stuff that you don't need to worry about and raising cases, which we talk about as predictive maintenance cases.

The stuff that we need to be really good at is getting companies to learn how to take that information off the platform and assimilate it into their stand-ups, their conversations and their decision-making processes, so that they get benefit from it. And the two things are necessarily very different. A piece of technology is a piece of technology; an iPhone is an iPhone.

The things that you do on an iPhone, the behavioral stuff, are very different. This isn't a very good analogy, but I'll use it since my iPhone is in front of me: if you wanted to test the capabilities of a new karaoke application for your iPhone, you wouldn't test that by going and looking at the way the iPhone is built. You don't care about that.

You just assume that it works. It's the stuff that you do behaviorally with the outcome of it that matters. As I said, it's not a very good analogy, but the important stuff is how people react to our cases, not the technology.

And this harks back to some of the conversations we've had before. But I still find it astonishing how people want to run a predictive maintenance project which they know is about maintenance practices and they know is about outcomes. They want less downtime, less unplanned downtime, and better maintenance interventions.

Those things are about activities, and yet they still want to test the process by doing a small experiment to see if the technology works. Which doesn't make any sense, because you can prove the technology works, but then you still have no idea whether it will be used in your environment, how it will be used, or how effective it will be. They're two completely different things, bearing in mind the iPhone analogy.

It's like testing a tennis racket when what you want to work out is whether you'd enjoy being a member of a club. They're not the same thing. But it's very difficult to get people to the point of understanding that, because one thing is easy to describe and measure and build a project around. But it's our job, Niall, to convince people that if they want to assess whether predictive maintenance is something for them, they have to understand that it is a methodology and not a technology. Assume the technology works, go and speak to another client, and test the methodology: how it works in your environment, what the uptake is, what changes you need to make, because you might fail outright. You might discover that your maintenance teams are not in a position to use it; maybe the way that they work is very low level.

They don't have a good understanding of their assets, they don't make intelligent decisions, maybe they just follow rules. That's applicable in some circumstances, but it means they wouldn't be ripe for a predictive maintenance project. So that's something you need to assess, not the technology.

So you need to describe a way of thinking: what are the things that might not work, what are the things we want out of it from a behavioral point of view? That's why our workshops, we call them scoping workshops, focus primarily on outcomes. Right. So minds get fixed on that area, and I know we've talked before on these podcasts about the technology side. I don't know whether things like AI and machine learning, and those terms being thrown around, have complicated the view and made people more interested in what's behind the technology when, like you said, it doesn't really matter. At the end of the day it's quite a logical thing: it doesn't matter what's happening in the background as long as it produces the results you want it to produce. Right?

Yeah, absolutely. People are rightly interested in AI, but you don't need to become an expert on it to use a PdM platform, any more than you do to use a mobile phone or a tennis racket. It could be interesting how tennis rackets are made, but it doesn't help you use one. There are obviously some organizations where they have data scientists, people whose job it is to understand some of these things fundamentally. And there are probably a lot of companies out there with R&D departments who want to investigate what's possible with AI and build some robust, good solutions. And probably over time that will become an easier task for them.

But under the current circumstances, that route into PdM, compared with working with a company like Siemens and Senseye, is not that beneficial, from our point of view. Quite often it's the data scientists who reveal a heavy bias towards a different approach than ours because of the questions that they ask. Typically the data scientists will want to ask questions about the validity of cases. They want to be very empirical, they want to have some understanding of how accurate we are.

And that's not a misunderstanding; a misunderstanding would be if they misperceived something we told them. But it needs a bit of education, because with the way our technology works there's really effectively no such thing as a false positive or an accuracy figure. We're alerting people to change. We may give some detail around what we think that change means. We might give some guidance about what we think you should do to mitigate that change.

But we're not saying X is going to happen, because we're not a solution built specifically to look for very specific things and say, "Aha, we can see that specific thing is going to happen." So in a way, those questions from data scientists are useful, because they instruct us on the way the company views predictive maintenance. It gives us the opportunity to start educating them slightly differently, towards the way we think it should be done. And do you often speak to customers or prospects who have maybe spoken to another vendor, another provider, and got an alternative view, and they're bringing that to the table when they meet us? "Oh, this provider said predictive maintenance should do this and should do that."

And what's your response to that kind of view? Well, first of all, in this industry and any other industry I've ever worked in, people have a strange habit of not telling you who they're speaking to. And I've never understood why, because knowing who someone else is speaking to allows you to help them make a decision, because you know what it is you're being compared against. Not knowing means you have to stab around in the dark trying to work out what's going to be most useful to tell them. But people have always been, and probably always will be, very guarded about that stuff.

I have not been as involved as I should have been in my life in buying cycles. So perhaps I need to go and get involved with the purchasing department at Siemens and help purchase some solutions, to try and work out why that response happens and help me debunk it a bit. But anyway, that wasn't your question; it has just always interested me why people don't share that information. But it's not complicated with this technology because, and there's a lot of generalization here, there are really three routes to market, and we're one of them.

One route is taking lots of data scientists and lots of data and trying to build something you can throw data into and ask to predict the future. Maybe one day that'll be possible; currently it's not. But the main other way of doing predictive maintenance, as opposed to our way, is looking at the world through the lens of failure. What does failure look like? Very often we'll speak to people who are talking to one of a number of successful vendors whose view of the world is looking at how things fail and building models to cater for, or to look for, those failures. So they're kind of like a spider's web, where they have very specific things that they're looking for.

So if you think of PdM as a spider's web, there are signals that they want to listen out for, signals they will recognize as being indicative of a failure mode, and they can prebuild those. If they've seen a particular type of motor before, or a pump or a filter or whatever it might be, they can come pre-armed with models which look for the signals that indicate that failure. What we know as a business, from our multiple clients, is that most things that happen, happen in a slightly unique way; a lot of things that happen have never happened before, and a lot of things that happen are never going to happen again.

So that idea of coming pre-armed with models of failure won't cater for everything. We reckon, and this is not from a wide study, so don't quote it in a scientific paper, that something like 80% of the cases would not be caught by those systems, because they're anomalous things, like an accidental pool of grease collecting in a place you would never have thought it would. We cater for all of that, because what we're looking for, our lens, our view of the world, is: what does normal look like? Then we can start to make observations about things which step away from normal, either in an anomalous way, or as a trend, or by moving towards a pre-set threshold. There are a lot of different ways that things can be described as moving away from normal. And the intelligent and clever piece of the technology, so not the methodology bit but the technology bit, is looking at those changes and assessing whether it thinks they should be attended to. Does it need attention? So we have this sort of changing attention index, as opposed to saying, yes, we think X is happening.
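To make that "what does normal look like" lens a bit more concrete, here is a deliberately minimal Python sketch. It is not Senseye's actual algorithm; the function name, window sizes and limits are invented for illustration. It learns a baseline from an assumed "normal" period of a single sensor series and then flags the three kinds of departure described above: an anomalous step away from normal, a sustained trend, and movement towards a pre-set threshold.

```python
import numpy as np

def attention_flags(readings, baseline_window=200, z_limit=4.0,
                    threshold=None, trend_window=50):
    """Toy 'deviation from normal' check on a 1-D sensor series.

    Illustrative sketch only: not Senseye's algorithm, and every
    parameter here is a made-up assumption.
    """
    readings = np.asarray(readings, dtype=float)
    baseline = readings[:baseline_window]               # assume this early period is "normal"
    mu, sigma = baseline.mean(), baseline.std() + 1e-9

    flags = []

    # 1. Anomalous step away from normal: large z-score versus the learned baseline.
    z = (readings - mu) / sigma
    if np.any(np.abs(z[baseline_window:]) > z_limit):
        flags.append("anomalous deviation from baseline")

    # 2. Sustained trend: fit a straight line to the most recent window.
    recent = readings[-trend_window:]
    slope = np.polyfit(np.arange(len(recent)), recent, 1)[0]
    if abs(slope) > 3 * sigma / trend_window:
        flags.append("sustained trend away from normal")

    # 3. Heading towards a pre-set threshold, if one has been configured.
    if threshold is not None and slope > 0:
        samples_to_threshold = (threshold - recent[-1]) / slope
        if 0 < samples_to_threshold < 10 * trend_window:
            flags.append(f"projected to cross threshold in ~{samples_to_threshold:.0f} samples")

    return flags  # an empty list means nothing currently merits attention
```

The point of the sketch is the shape of the output: it reports that something merits attention, and roughly why, rather than asserting that a specific named failure is about to occur.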

So I'm giving you a rather waffly answer to your question. But people come to meetings with a view of the world which is very much set in the ideology that you should have empirically measurable, accurate cases saying that X will happen. Our view is: fantastic, really useful, if X is going to happen.

But what about all the rest of the alphabet? Scrap that bit from the recording. What about all the other things that can happen, A to Y? You need to cater for all of those. It's a bit like the immune system.

The view that others take is: here is a jab that caters for COVID. But what about flu, and what about X, and what about Z? If we were an immune system, we would be looking out for anything, for any kind of antigen that appears in your system. Some of them we will know about, some we won't, but we will raise your attention. Whereas the opposite view is only looking for a set of known diseases. Yeah, I guess that's interesting, because the other thing people want is that early warning sign, those early indications, in time. And as cases are opened, the fix is made, and then they're closed.

That builds up a history and a backlog as well. If you only have that one alert, a silver bullet if you want to call it that, rather than the whole journey that asset has been on, with all those little changes and indications, you lose a lot. Surely the whole journey is more important: that one was right, that one was because of this, and it's all captured there as well.

I think that's quite powerful. So you've got the whole story of one asset, and anyone within the organization can even compare this asset to that asset of a similar age: how have they been behaving? Rather than sitting and waiting to see. Yeah, exactly. You're not saying, did X, Y, and Z happen to this asset? You're saying, what did happen to this asset? Can we compare those changes? Can we compare that drift? Is it the same across two or three assets, or different? Yeah, it's like hearing the punchline without being told the story beforehand. It's great that you know that's happening, but what's the background to it? Why did it happen? Is there a certain pattern we need to be aware of, based on previous data and other similar types of asset? That kind of information is also useful for people. Yeah, but you build it up over time.

In a way, if we look back at the question we're asking, which is why is it a methodology, or an approach, or an ideology, or whatever you want to call it, as opposed to a technology: if you have a system which is simply looking out for a known failure mode, then really it is a technology. It's alerting you to a known failure. That's a small subset of what we would do. But if it's a system which is alerting you to change, then it necessarily becomes a methodology. Let's say you're in charge of predictive maintenance across a business, and you're looking at existing lines, and perhaps at a number of new lines, or a new factory that's been built.

Your task would not be to say, don't worry, everybody, we're going to implement a system that looks out for these known failures. What you'll be saying is, hey, when we're designing this new line, or when we're considering doing something to this existing line, we need to speak to the right stakeholders, because we're going to implement a technology that will alert you, raise your awareness, to change. When it raises your awareness, it will give some level of detail, sometimes scant, sometimes very insightful, about the likelihood of when a failure might happen and what that failure might be.

But it's a distribution of information, and that technology sits there doing that. What's important is how you use that information. So the individual who's tasked across the business with being the main stakeholder for maintenance needs to take the maintenance team and help them understand how to read that information, what they should do with it, which meetings they should take it to, and where to filter it into their decision-making process. What does it replace? If you're currently doing some walking around, taking measurements and then peering at the data from those measurements to think about condition indicators, you won't need to do that anymore. But the other things, it augments.

So you might already have a fairly complex system of things that you do in your stand-ups to try and decide what positive actions you should take in your next set of interventions, so how do you filter this information in? That's a process. They need to be more cognizant about getting that right than about the implementation. The implementation should become easy over time. It's the usage and the methodology that they should be focused on.

I hadn't thought of it from this point of view. That is interesting, because the other vendors that we sell against in this environment won't be talking about that process; they will necessarily be much more about saying, don't worry, we'll raise an alert if X is going to fail in a particular way, which is a totally different consideration. I guess what you're saying, and we talk about it a lot, is about the human element of PdM as well. Because if you focus on the technology, it feels like human input is less required, because you've got this technology. But aside from the feedback loop, which again is mentioned quite a lot, to actually implement a methodology you need people, right? I think that's what we're saying here.

If we look at it as a methodology, it's about people, and adjusting, and user feedback. Yeah, I often hark back to that example of the chap that me and young Jack needed to speak to, and he repeatedly didn't come to meetings. Then one day we were on site at Client X and we happened to meet him, but we didn't recognize him at first, because he walked through a door wearing, I don't know how you describe it, a full fire outfit: shiny silver suit, big hat, glass face mask and big gloves. And there was that aha moment.

Of course he's not come to meetings, he's doing his job, he's maintaining machines in a really, really difficult environment. Probably not all the time, but on some days, and probably on an ad hoc, unplanned basis. And we realized it didn't matter if we talked about our brilliant integration with Microsoft Teams or our fancy new GUI that let them drill down rapidly to where they were geographically. He's not going to do any of that. He's wearing gloves that mean he probably couldn't open his laptop, let alone do finicky stuff. So our job then became much more about working out how, working with the business, we could as rapidly as possible get interesting and important cases that had been raised out of our system, off a laptop, and into the environment where he was going to see them, written large on a whiteboard in a room for discussion, so he could have his helmet under his arm, talk to the right people and make decisions based on the information.

So yes, the behavior and the methodology become much more important than the technology. The technology is just sitting there doing its job. What's really important is how it's used. He was a great example of that, and yet we only learned it by chance.

Yeah, exactly. We now have to assume... sorry, go on. That's all right, I was just going to say:

that's the best way to learn, isn't it, when it's a bit of a surprise like that. The point I was going to press on then was... and it's gone from my mind, so we might have to edit this bit out. My mind's gone.

It'll come back. What was it you were talking about? Being on site... there was something I was thinking. Oh, that was it. Okay, so you were talking about being on site, and we talk about a predictive maintenance methodology, but it's not a case of just applying a cookie cutter. I guess it's also about adaptability, like your example of that person who's out there doing his job; it's about adapting to existing processes and things like that as well.

It's not just a one-size-fits-all approach, I guess. Yeah. The first thing you need to try and understand is how a company currently communicates internally about maintenance. You can talk to them about their current maintenance approach, and as a vendor we probably try too hard to fit people into boxes: oh, well, they have a preventative approach. Of course, they can have a smorgasbord of different things that they do.

What we really need to understand is how they communicate about maintenance and how they make decisions about maintenance, because that's where we need to filter in, in whichever way is appropriate. So with the example of the client I just gave you, with the man in the outfit, what we learned was that they have their own homegrown platform for asset maintenance that people trust, use and log into. So that's the natural home for our information.

So the next piece of work for Jack was to make sure that everything of importance from our system appears in that environment, so that other people will be cognizant of it, make decisions and communicate about them, and Mr. Man in the Mask will hear about it in a timely fashion and do the right things, get the right stuff done. But you won't know about those things unless, A, you ask, and B, you say why you're asking, because people won't naturally assume that it's more than just technology.

So you need to go through this education process of explaining that. You need to help them understand how to filter this stuff in so they can use it successfully. They need to know that, in order to share enough for us to be able to see where the pitfalls and the opportunities are. Yeah. So it's about more than what's coming into the app, what you can see in the app: is it actually having a real positive effect on that person who's just walked through with his hard hat and is out on the site? That's the flip side of the question that possibly isn't considered.

I mean, there is an example, isn't there, of Company Y in Asia, where there was a total misunderstanding that if we implemented our technology, several things would happen: the OEE would go up by magic, we would implement our solution and positive things would start to happen in their production environment. They were totally missing out the bit in the middle, where they had to do something based on the outcomes of the application. But you can never blame a client for a misunderstanding like that. You have to think, well, that's what they perceived from our communication. Of course, in order to communicate with people so they don't misunderstand what you're saying, you first have to understand what they currently know and think, so you understand what they're hearing based on what you're saying. There's a lot of psychology in how we communicate and how it's understood.

But in that instance, we were at fault for letting an organization think that there was going to be some outcome without any input from them. And that's probably because they fundamentally believed it was a technology that achieved an outcome, as opposed to a methodology, underpinned by a technology, that achieved an outcome. That's actually quite a good way of describing it, I guess. The most successful projects... I talk to Gartner quite regularly, and I've talked a few times about how the client and vendor relationship should be more collaborative, and I think that's particularly important with predictive maintenance, right? Absolutely, yeah.

And that's not natural. You can't just say to a client, hey, we'd like to do some business with you. Here's our technology; there are other ways of doing it, but ours is the best.

You need to collaborate with us. There are a number of iterations of communication and trust building and demonstration of ability that you have to go through. And then of course, at some point the organization has to say, okay, we're going to put our energies behind you. But it has to be a collaboration for everybody on our side: for every Jack and Rebecca and Chris and the CS team globally.

For every one of them involved in the delivery, there have to be just as many excited, involved and invested individuals on the client side: people who are not in watch-and-see mode but in get-stuck-in mode, who understand the benefits to them individually, personally and at a company level. So that stuff all has to be understood, ingrained, and put into the project. A good measure of that, actually, is whether it's exciting to turn up to the steering meetings. Yeah, exactly. If it's "I haven't looked at the app,

I haven't thought about this this week", then something's missing. I guess the point is, and I think I've heard this before, I don't have a specific example, but just from talking to some of our colleagues: a customer, or someone from the customer side, comes into a meeting and almost, in a nice way, takes over the meeting, almost answering all the questions. And it might be one of yours. Yeah, I think we discussed this before.

So there is that kind of magical moment when you know that if you handed your notice in and went and did something else, and there was no one around to manage the project, it would have a life of its own now. The client has taken over. You're not a hindrance, but you're not necessary.

Maybe there's some scaffolding in place that you put there, but fundamentally the client now owns the thing, and you normally see that happen when there's an individual who becomes, in effect, Senseye within their business and starts saying things you've heard yourself say before. That's a really good moment. It's not a moment to walk away, though, because one of the challenges with that, and this is a really subtle point and quite complicated to consider when you're working with an organization, is that you need to help them not to become one of us, viewing their company in the same way that we do.

You need to find a way for them to stay on their company's side, being a voice within it and disseminating knowledge and information about predictive maintenance. What I mean by that is that when we're talking to Client X, we'll be considering their environment, but it'll be very much us talking about how we are going to infect people with the idea that Senseye is a good thing. And when you infect people successfully, you don't want them to join you in this camp over here.

I'm just saying the same thing because I don't know how else to describe it, but you don't want them to become one of you, in a closed team looking out at and viewing their organization. You want them to stay in their organization conceptually and be somebody who is communicating about it from within. I'm not saying they're literally going to come and join us in the company, but you don't want them to turn around and see the same communication issues and challenges that we see. You want them to be the resolution to those issues, by remaining on that side and being infectious in that way.

I need to work on how I describe that, but I know what I mean. Yeah, I understand what you mean. It's a careful balance. But once you get that right, it goes back to another thing we talk about: champion users, who would naturally fall into that category as the ones who become the experts and the drivers within the organization, to scale at the appropriate time as well.

And then we're there to support that, both technically and, I don't want to call it politically, but you see what I mean: the conversations that need to be had internally for that to happen, building a business case. There we go, that's a better way to put it, I was going to say. So, the time has gone very quickly, which is great.

I wanted to touch on one thing finally, and then I'll ask you to summarize a little as well. You mentioned before a mix of approaches, and how it's not all "let's just scrap everything and focus on predictive maintenance". Could we talk about that a little? How you can have a bit of everything: there's reactive, there's corrective, there are all these different approaches. Would you agree a blend of them is better than just focusing on one? Or is it, again, a case-by-case basis, you might say?

Well, I guess the point is that when you're a vendor of any type of solution that you're selling into a maintenance team, and you want to understand what their approach is, you might make the rather absurd assumption that they have a one-size-fits-all approach across all their assets and all their environments. That's not going to be the case, because you might mandate an approach, you might suggest an approach, or you might have a culture that errs towards one approach, but you're going to have different individuals who do things differently, et cetera. We used to have a slide that talked about the evolution of the maintenance approach, which was already wrong, because it suggested that as you went along you were getting better or more appropriate.

Of course, that's not necessarily correct, but the idea is that you would have people who are purely reactive: run assets till they fail and then go and fix them. Then you have a slightly more robust approach of planning to avoid things, so you would have preventative maintenance: you would do lots of activity, go and service the machines, oil them, do whatever you need to do to try and prevent failures and breakdowns from happening. Then you would move into condition-based maintenance, where you monitor the condition of your machines by some method, take that information and make decisions about how to maintain them based on their condition, but that's based on their condition now. So you've got reactive, preventative, and then reacting to knowledge about your machines as they are now.

And then we sit in the next bit, where you're not preventing based on the condition now; you're preventing based on a prediction of what the condition might be. Predictive maintenance is based on condition information as well.

It's just using that information differently: instead of saying we know what the machine is like now, it's using that data to say we think we can tell you what the machine is going to be like in the future. Do we detect a failure coming? Is change occurring that merits attention? And then the next stage is prognostics, which is not just intimating that change may happen, but saying what that change is and talking about remaining useful life. And it goes on.
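As a rough illustration of the step from "condition now" to "condition in the future", here is a toy remaining-useful-life estimate. It is our own simplification, not how any particular product works: it just extrapolates a linear trend in a condition indicator to estimate when a failure threshold would be crossed, and the argument names and the linear-degradation assumption are invented.

```python
import numpy as np

def estimate_rul(timestamps_h, condition, failure_threshold):
    """Crude remaining-useful-life estimate by linear extrapolation.

    Hypothetical sketch: a real prognostics model would be far richer.
    """
    t = np.asarray(timestamps_h, dtype=float)   # hours since monitoring started
    y = np.asarray(condition, dtype=float)      # condition indicator, e.g. vibration RMS
    slope, intercept = np.polyfit(t, y, 1)      # fit a straight degradation trend

    if slope <= 0:
        return None  # no upward degradation trend, so no meaningful RUL estimate

    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - t[-1], 0.0)             # hours until the threshold is projected to be crossed

# Example with made-up readings:
# estimate_rul([0, 24, 48, 72], [1.0, 1.2, 1.5, 1.9], failure_threshold=3.0)
# returns roughly 90 hours.
```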

But of course you're never going to have an organization that steps neatly from one to the other, to the other, to the other. There are some assets you're going to run to failure and then fix, because, a, it may not be that problematic when they fail and, b, it may not be that costly. There may be some core assets which you have a totally neurotic approach to, on which you've spent huge amounts of money and can never afford to have fail. You may have budget that gets taken away from you halfway through the year for completely separate commercial reasons that affect the maintenance approach. So, to finish this answer: first of all, this idea that there's a discrete path that people take is nonsense.

But secondly, and just as important, a lot of the people we engage with think, probably because of the way companies market this approach, that they can indeed step away from their current process and become these predictive maintenance monsters who just do predictive maintenance, who eat, sleep and breathe predictive maintenance. And again, that's an unhelpful vision, because then they buy technology and think this thing is going to happen. Of course it's not. Why would it? It is just more information to help them make better decisions. So Mr.

Man in the Mask is not going to change his overall approach and behavior. Mr. Man in the Mask is going to be the recipient of better, better-informed decisions. So for the asset that he's all dressed up to go and deal with, where he may previously have done a lot of preventative-style work, he may now change the things that he does based on better insight into the health of that asset. Or he may go and do something radical, because he's discovered, through conversation with the decision makers based on the information that came through the tool, that something is afoot and that he needs to do something to avert it. But it's not a total step change where everybody starts wearing a new outfit and a new hat and stops doing the things they were doing before. As people learn to trust our system, and if they implement it fully across the balance of an environment, then you will see an overall change. To give a simplified example:

Let's say you have 100 assets and cases on five of them. Clearly you've got better control over those five than you previously had when you didn't have cases. But on the other 95 there are no cases. If you trust Senseye's technology, and you understand that it indicates change where there is change and no change where there is no change, then perhaps you can take a better set of decisions about the preventative maintenance you do as well: turn it down a bit, spread out the period between interventions, make better use of the maintenance person's time. So over time, as people trust the system, it will change some of the methodology and ideology. But you don't go from X straight to pure predictive maintenance.
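That "turn the preventative maintenance down a bit" idea could be expressed as a simple scheduling rule. The sketch below is a hypothetical policy, not a recommendation and not anything Senseye prescribes: it stretches an asset's preventative interval when monitoring has shown no change for a while, and tightens it while cases are open. All names and numbers are assumptions.

```python
def next_pm_interval(current_interval_days, open_cases, stable_since_days,
                     stretch_factor=1.25, max_interval_days=180,
                     min_interval_days=7):
    """Toy rule for adjusting a preventative-maintenance interval.

    Hypothetical policy, for illustration only.
    """
    if open_cases > 0:
        # Change is being investigated: pull the next intervention forward.
        return max(current_interval_days // 2, min_interval_days)
    if stable_since_days > current_interval_days:
        # Monitoring shows no change for longer than a full cycle: relax a little.
        return min(int(current_interval_days * stretch_factor), max_interval_days)
    return current_interval_days

# Example: an asset on a 30-day cycle, no open cases, stable for 45 days:
# next_pm_interval(30, open_cases=0, stable_since_days=45) -> 37
```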

Okay, so let's refine the opening statement: predictive maintenance is not a technology, it's a strand of methodology that sits amongst incumbent processes and makes them better. Yeah, nice way to put it. I interviewed someone from one of our customers, aptly named James Bond, not the crime-fighting secret agent but a maintenance person, who talked about Senseye amongst all those other different methodologies.

It's like a toolbox. When he looks at a problem, he goes, what's in my toolbox? Like you said, it's there to help, to augment decision making. You can add different layers and have different angles on things. Another thing: it's about those small incremental changes. Like I said, it's not removing preventative maintenance, but it's making it more efficient, or letting you do it less often, or whatever.

So it's actually enhancing other areas as well, rather than completely wiping them out. Yeah, exactly. Just to finish, I wanted to finish on a question. You said earlier that you wanted to put yourself in the shoes of someone in a buying cycle, so maybe you could think about this a little in that context, because I wanted to finish on the questions people should ask a vendor about predictive maintenance. What questions should they ask to gauge whether a vendor understands predictive maintenance and what's involved? I don't know how many questions. Three questions,

five questions, however many you've got up your sleeve will be fine. Just so they can peel beneath the surface, rather than the vendor just saying, oh, we do predictive maintenance: how can they dig beneath that? That's a great question to hijack me with as the doors are closing. I think they should ask for guidance on how to put in place a really robust set of KPIs, or measures, that they can use to assess, in the first deployment and on an ongoing basis, the value and success of a predictive maintenance solution.

And having asked for those measures, they should then shut up and listen to what the vendor says, because it will be telling if the vendor wants to measure it purely by the successful prediction of a failure. Because it's not technology, it's about methodology. There should be a whole raft of suggestions that come back about the different phases and stages of engagement: asset selection, integration, project methodology, training, implementation, cases per week, how cases are reacted to, capturing that data, the reporting internally and externally, lots of things. And if there's a really healthy set of things they're encouraged to include, and some suggestions for those measures, then it's probably a strong indicator that the company knows what they're talking about and what they're doing. They could also ask a trick question.

How many months will it take us until we are only doing predictive as an approach? Because any answer to that question other than "that won't happen" would be the wrong answer. My job is not to trick other vendors.

My job is to make sure people understand that we're the best road in, which is why I work here, because we are. Yeah, well said. It's also about telling the truth, right? Because telling porkies early on in the process is only going to create more pain for both vendor and customer, the customer obviously being the more important. That's an interesting point, actually.

Okay. We speak and people hear stuff. We write stuff down, people read it.

And almost all the time when you're speaking, what people understand is slightly different from what you said. When we talked about Gong, which is our system that records our calls, I said I've always enjoyed using it, because I'm horribly aware of the fact that I never say what I thought I said. So, as the very first point, when I go and listen to those recordings, the things I come away thinking I said, I am never correct about. Therefore I don't know what I told the client unless I can go back and listen to it. Secondly, the way the client perceives what I said will be slightly different again, and then there's what they decided based on that. So by the time you're four steps away, the impact of what you say is four steps away from, in my case, what I think I've said.

Now, maybe I'm an outlier in this, maybe I'm punishing myself too much, or maybe I'm really bad at communicating; I hope not. But it seems like the responsibility is not just to not tell lies to the client; it's to make sure that they're not mishearing, in a really deleterious way, what it is you're saying.

So you don't end up in a situation with a client who thinks they're going to press the on button and the OEE is going to jump. In that earlier example, I never lied. I never said that to the client; nobody in the business did. But that's what they perceived, and that was our fault.

So there's a responsibility to make sure not just that you tell the truth, but that the right stuff is understood, appreciated and acted on. Yeah, that's good. Very interesting point.

A good nuance. Because it's easy, especially from a vendor perspective, talking in our language or using our terms, to assume that they're just automatically understood and taken on board. But you're right, that's why you have to have that conversation with the customer, in collaboration.

Yeah. In fact, there's something I know we never did: I think we should periodically stop our clients and say, "Stop. What do you think we sell?" Yeah, just do a quick sanity check to make sure they really understood everything we've been explaining. Exactly.

One to take away. But no, I think that was a really good conversation. It's good to have you back on the podcast again. We'll have to do another one soon, with no real preparation either, so, told you it would all work out in the end.

So it was good. It's an interesting topic and I'm sure it will come back into our conversations in future. Yeah, look out for that.

But yeah, thank you again, Nat, for joining us today. Thank you, everyone, for listening out there, and we'll see you on the next episode. Speak to you soon. Take care. Bye.
