Welcome to Kromecast — check it out. I'm Sam Mager, Commercial Director of Krome Technologies. In this edition of Kromecast we're talking about legacy system migrations, from the hardware up to the application layer, and with a slight change of format: we've beefed up the intellectual horsepower of this edition, and I'm joined by two people, namely Ben Randall, Technical Director, and Rupert Mills, co-founder of Krome and my business partner. Chaps, good afternoon.

Afternoon. Good morning — that's how long it's taken us to set up, but we'll go with afternoon.

Afternoon, let's go with that. Perfect. So, change of format: the idea being that today, hopefully, we're going to do this a bit more technically than usual, based on some client feedback as well — people want to see more of Ben Randall.

Give the people what they want, of course.

And just looking at the subject matter: we've recently done our customer satisfaction survey, and the feedback from that has given us the data that people are actually very interested in migrations, specifically from legacy platforms — be that desktop platforms through to big back-end infrastructure. I think, potentially due to the pandemic, people have got on with doing business and not necessarily done some of the heavy lifting we'd have done in years before. Now that's caught up with them, and they're looking at things like server refreshes, migrations to the cloud, new platforms on premise, et cetera. But as we know, there's a whole raft of challenges that can come with that. I'll use my cheat notes, but if we look at it top down or bottom up, whichever way, starting at the hardware: if we're doing a migration or refresh, we've got the hardware, the software and the applications all intrinsically linked, but we sometimes have to look at them as independent parts to make migrations work.

Sure. Yeah, I mean, if we were addressing it for
a client — and as we often do — you look at the four different stages: the hardware, the virtualisation layer if it's relevant, the operating system layer, and then the application layer. And you're right, you've got to take all of those into account when you're looking at what you're trying to do, and which of those you're trying to get away from, if not all of them. So we can start with the hardware — it's as good a place as any — and there's a lot of change in hardware going on at the moment. Obviously there's a lot of migration to SSD, so a lot of new storage arrays: if you're sticking with on-premise, arrays are SSD-based, and you've got to look at their suitability for the data you're putting on them, work out whether it's going to fit on SSDs, and how the various compression and data-reduction algorithms within the platform will work with the data you've got. Historically you used to say "I've got 100 terabytes of data" and you'd get 100 terabytes of storage. Now what tends to happen is you say "I've got 100 terabytes of data" but you actually need 20 terabytes of storage, or 10. That's been around for a little while now, but with hardware migrations it's becoming a bigger and bigger concern to make sure you size things appropriately. And then the chipsets are moving forward so fast from Intel at the moment that you've got to look at what you get out of the new horsepower compared to what you had historically, and size that correctly as well. There are some horrible old chips around which you can accidentally buy, or you can end up with exactly what you need in the newer chipsets and a lot of horsepower. But you've got to do that sizing operation in perhaps more detail than you ever did before.
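As a back-of-the-envelope illustration of that sizing exercise — the reduction ratios and headroom figure below are assumptions for illustration, not vendor-quoted numbers, and real arrays report "effective" capacity in their own ways — the arithmetic looks something like:

```python
# Hypothetical sizing sketch: ratios and headroom are illustrative,
# not vendor figures. Treat as a planning starting point only.

def required_raw_tb(logical_tb: float, reduction_ratio: float,
                    headroom: float = 0.2) -> float:
    """Raw capacity to buy for `logical_tb` of data at a given
    dedupe/compression ratio (e.g. 4.0 means 4:1), plus free-space
    headroom so the array never runs near full."""
    effective = logical_tb / reduction_ratio
    return effective * (1 + headroom)

# 100 TB of data on an all-flash array quoted at 4:1 vs 6:1:
print(round(required_raw_tb(100, 4.0), 1))  # 30.0 TB raw
print(round(required_raw_tb(100, 6.0), 1))  # 20.0 TB raw
# Same data on spinning disk with no data reduction:
print(round(required_raw_tb(100, 1.0), 1))  # 120.0 TB raw
```

The point is that the raw capacity you buy now depends heavily on the reduction ratio the array actually achieves on your specific data set, so it's worth validating a vendor's claimed ratio against a sample of your real data before committing.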
I suppose you're also looking at potentially splitting that, so you could be part on-prem, part cloud. Again, not all data is equal or needs to live in the same place. And, like you said, historically you'd have gone same-for-same but a bit newer; if you're splitting your data set and putting part of it elsewhere, it becomes a question of what you actually need. Everyone's trying to push SSD, let's not get away from that, but spinning rust is still available — might that be a suitable medium for what you need if you're putting part of it in the cloud?

Yeah, and depending on the solution, you may get a lot more performance than you expect out of a traditional mechanical spinning-disk hard drive. If you've got enough spindles you can actually get some surprising performance.

And let's be honest, they're not expensive. Some of the vendors — Dell, obviously, who we know well — the ME4 takes a vast number of spindles and can deliver a huge amount of performance for quite a cost-effective solution. But compare that to a PowerStore full of SSD with four-to-one data reduction — that'd be six-to-one with the next firmware, et cetera.

In sizing that up, there's a certain amount you can look at at the operating system layer as well, like data deduplication, which is now available in Windows Server, for example. At the operating system level you can achieve some economies there that older servers — Server 2012, say — may not have had, depending on your file system. So that can be part of your migration.

An important question on that: does it make some of the functionality vendors have built into arrays obsolete? Because dedupe and compression have been a big thing — as you just said, it used to be you'd buy 100 terabytes of disk for 100 terabytes of data, or 130 terabytes of disk for 100 terabytes of data, and now you can buy less because we fit more in. But that's because dedupe and compression have been done at the array, primarily.

Well, dedupe and compression have been in the arrays for a long time now. The difference is that historically it's been optional, so you could choose whether or not to use it — you knew that in a worst-case scenario 100 terabytes of data meant 100 terabytes of disk, but you could turn on dedupe and compression and get more from it. Now, with the SSD arrays that are all-flash, you're getting to the point where you don't have an option: it's on by default, you can't turn it off, and you have to rely on it — because you can't price 100 terabytes of flash against 100 terabytes of spinning disk and expect them to be anywhere near the same market. The arrays rely on that technology to bring the cost in. In terms of the operating system making some of it redundant, it's been doing that for a while now, and there's a choice of whether you do it in the hardware, in the operating system, or — with some features that have been around for ages — at the virtualisation layer, in between the two. So you've got different places to do all of that, and that's part of the planning exercise we talked about at the beginning. If you do your planning properly you can work out where you're going to use those features, and also which of them might conflict with each other — because with some of them, if you turn one on you won't see the impact, because you've got the other one on. You've got to look at what you're doing there, or alternatively whether it's going to hugely impact performance: if you turn on dedupe in two different places, are you going to
suddenly see performance drop through the floor because the system can't cope with doing it in both locations?

Before we jump into virtualisation, though, a couple of things — I know you've done a lot around this, you two, not me. There's the hardware sizing pre-migration into cloud, but also what we might call the compatibility piece. If we're looking at moving legacy subsystem A into new subsystem B, it's not necessarily — to coin a phrase — "next, next, finish". It's not as easy as just dragging and dropping the data, and we've seen some of those challenges.

Well, yeah, it depends. In some cases there may be a supported migration path — for example, different SANs from the same vendor may have a direct migration that minimises your downtime, where you can replicate volumes over to the newer storage. And some operating systems may have a direct migration path too. So it's something to think about: whether you're going to have to do a forklift upgrade, basically, or whether you can do something much more seamless. That's a consideration for sure.

I'll chuck the cloud piece your way, if you don't mind.

Yeah. It's just as important, when you're looking at lifting and shifting an environment, not only to look at what hardware you've got on-prem, but at the specification you're putting into the cloud. Cloud is charged on metered usage as a rule of thumb, and it matters if you're using the wrong-size virtual machine. If you've got a historic virtualisation estate on site and the hardware is over-specced, you can give away RAM, CPU counts and so on to machines because you don't care — you've already paid for that hardware. As soon as you put it into the cloud, if you put something in that's five times the size it needs to be, you'll be paying five times as much for it, even if it's not using that performance, because the virtual machine is spun up at that rate.
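To make the cost of over-provisioning concrete — the instance names and hourly rates below are invented for illustration, not real cloud list prices — the arithmetic is simply:

```python
# Illustrative only: instance names and hourly prices are made up,
# not real cloud rates. The point is the shape of the sum.

PRICE_PER_HOUR = {  # hypothetical on-demand rates
    "4vcpu-16gb": 0.20,
    "8vcpu-32gb": 0.40,
    "16vcpu-64gb": 0.80,
}

def monthly_cost(instance: str, hours: float = 730) -> float:
    """Metered cost for one VM left running all month (~730 h)."""
    return PRICE_PER_HOUR[instance] * hours

# A VM over-specced on-prem (16 vCPU) vs what it actually needs (4 vCPU):
oversized = monthly_cost("16vcpu-64gb")
rightsized = monthly_cost("4vcpu-16gb")
print(f"oversized:  {oversized:.2f}/month")
print(f"rightsized: {rightsized:.2f}/month")
print(f"wasted:     {oversized - rightsized:.2f}/month per VM")
```

Multiply that waste by a few hundred lifted-and-shifted VMs and it's easy to see why right-sizing at migration time, rather than after the first bill, matters.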
There's also the migration to cloud technologies, which is different — that comes further down the stack — but if you're just lifting and shifting the virtual machines, you've got to make sure you potentially resize them as part of that lift and shift, so you don't end up paying a huge bill you weren't expecting.

Yeah — and obviously once you've migrated to the cloud, the ability to change instance type becomes very much easier, thankfully. But the cost is a huge consideration. We've seen people go wholesale, as-is, into the cloud, and then the unexpected bill arrives — and we've seen people just keep paying that bill rather than take the time to look at how much money can be saved simply by adjusting to the right size.

Yeah, absolutely. And there's various software we can run to do that automatically on a regular basis, as we've talked about in previous podcasts, but doing it at the point you migrate, as a baseline, is a good idea.

That makes sense. Okay, so I'll shuffle you on — you're obviously desperate to talk about virtualisation, so I'll chuck that one back your way as well.

Not quite so desperate, but yeah. The big issue is that lots of people have sat on a virtualisation stack for a long time and said "it's working, it's good" — you patch it and move it forwards. But it's amazing the number of virtualisation stacks out there that aren't patched properly, because people patch their OS layer but not the virtualisation. And now, all of a sudden, ransomware operators have started targeting the virtualisation stack, which is pushing people into those upgrades. The problem with that is that you've got supported hardware platforms — hardware compatibility, absolutely. To keep up with the latest release of any operating system, but specifically the hypervisor, you need to have hardware which is up to the spec of the latest
VMware or Hyper-V release. So yeah, that's important.

Yeah, and that drives off the previous hardware conversation: if you want to go from, say, VMware 6.5 to 7 and your hardware isn't on the hardware compatibility list, you may find it drives a hardware migration at the same time. So you've got to run all those compatibility checks before you go and do that migration. And then again, supported migration paths: we've bumped into this quite a few times, where people say "we're going to upgrade VMware", and historically a lot of the time the easiest way to do it was to say "I'll just build a new vCenter and connect everything to it". One of the big gotchas at the moment is people with backup systems, where the backup system now ties into your virtualisation rather than your operating system. If you just do a wholesale replacement of your vCenter, all of a sudden your backup footprint doubles — because your existing backup is there and versioned, and if you back up from a new vCenter it sees everything as new. So the gotcha is: we've got a cloud backup service, and suddenly we've doubled the amount of data we're consuming in it — and by the time you've done it, it's too late to think about it. We've seen this one a couple of times, haven't we?

Absolutely, yeah. It's something you have to consider. So actually upgrading the vCenter — which is a bit harder than just spinning up a new one — is something you have to seriously consider if you've got that situation.

Yeah, you've got to take those things into consideration and work it out. The virtualisation stack is considered an easy option, but you've got to give it a bit more thought these days because of all the integrations with it.

Just on the back of this — it's either a really easy question, or... no, I'll be quiet. You're talking about backing up previous versions of VMware and so on and so forth. If you're restoring to a later, newer version of VMware, is there any issue taking that from a backup and mounting it, if it thinks it should be mounted on 6.5 and now you've got version 7?

That's an interesting question, actually. I think it depends on the version. There is backwards compatibility in the newer versions — I don't know how far back you can go, but certainly for a previous version or two I think you're okay. You're essentially restoring the VMDK files — the virtual disk files — and they remain compatible. It's definitely something to consider if you're restoring a very old backup, like an archive; that'll be the situation where you start to run into problems. Also, what operating system is supported to be virtualised? If we were to go way back — a very old version of NT or something — is that supported any more on your latest hypervisor stack?

That's going quite a way back, to NT.

Yeah, but the potential is there. I mean, we have had clients who've had systems they wanted to run on NT — legacy systems, clearly, which are very ripe for migration. And around this, when we get to the application side, we had one particular client who was running systems that dated back to 1978.
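Circling back briefly to the backup-footprint gotcha above, a rough model shows why the spike happens and how long it lasts. The retention behaviour here is an assumption about how chain-based backup products typically behave when pointed at a rebuilt vCenter, not any specific vendor's logic, and the figures are illustrative:

```python
# Rough model of the "backup footprint doubles" gotcha: a rebuilt
# vCenter makes the backup product treat every VM as new, so fresh
# full backups are taken while the old chains age out over the
# retention window. Assumed behaviour, illustrative numbers.

def peak_backup_tb(protected_tb: float, retention_days: int,
                   days_since_cutover: int) -> float:
    """Stored backup data at `days_since_cutover` days after pointing
    the backup product at a brand-new vCenter."""
    old_chain = protected_tb if days_since_cutover < retention_days else 0.0
    new_chain = protected_tb  # fresh fulls taken immediately
    return old_chain + new_chain

print(peak_backup_tb(50.0, retention_days=30, days_since_cutover=1))   # 100.0
print(peak_backup_tb(50.0, retention_days=30, days_since_cutover=45))  # 50.0
```

So on a metered cloud backup service, 50 TB of protected VMs briefly costs you 100 TB of stored backup — and that peak lasts the full length of your retention window.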
I remember that from the discussions with them — and actually they couldn't get off those systems, because they couldn't get the data out. When you look at that conversation, the question is where you lift that and move it forwards.

Yeah. I guess we're talking about migration, but when we get into that sort of stuff we're really talking about an active-archive conversation, or potentially ETL. People wouldn't necessarily serve that from a backup — that wouldn't be the use case — so potentially I'm being painful in conversation for no reason. [Laughter] The one thing you can think about with virtualised backup restores is that we can use them as a cloud migration path as well: generally you can restore back to a different platform. The devil's in the detail, as has been said.

Yeah. So as you're looking at it, either you've got a certified path within a vendor's platform — you're going from a Dell this to a Dell that, or a NetApp this to a NetApp that — and it's nice and easy, or they're different vendors and we can deal with that in different ways: VMware can do it, or from backup there are different options available to us, or Hyper-V to VMware, VMware to Hyper-V, depending on what the right choice is going forwards — again because vendors are changing licensing models and all the rest of it. So when you're moving to that new platform you've got to look at right-sizing the virtualisation infrastructure as well as the hardware.

Yeah. Okay — a fair bit for me to try and digest and pretend I understand. So, moving up the stack from virtualisation into the operating system: there are some interesting changes facing us there. We've just touched on some legacy operating systems, and I know Windows 11 et cetera is on the horizon — there's
some interesting stuff there.

Well, even closer than the horizon — it's basically here, isn't it? So yeah, there are hardware considerations with Windows 11.

The requirements — drill into that, because some of it was news to me.

So you've got the requirement for TPM 2.0 — the Trusted Platform Module, which is essentially the security chip within the hardware. Version 2.0 was released in November 2019, so quite recently, really. If you've got an estate of laptops, for example, that's older than that, you may need to look at whether it's going to be compatible.

And this TPM module — I thought it was a standard thing.

It's a cost option with some of the hardware we're looking at, so it's not guaranteed to be in your estate — it wasn't required for Windows 10.

And that release date — I hate to remind everyone, but 2019 backs us right into just pre-pandemic, and we all know the first few months of 2020 were a complete bun fight of everyone trying to grab devices and laptops to enable work from home. There's a whole raft of devices out there that we know people grabbed — the spec wasn't the most important thing, it was just having a device. So there are probably devices out there that do have TPM and some that don't, and it will be a consideration for people who now want to move to Windows 11 but are bound by the parameters of hardware they purchased unknowingly.

Yeah — and if they're going to sweat that asset for five years, you may be looking at a 2025 migration date, which doesn't leave that long to get off Windows 10, to be fair.

Indeed. But as Ben said, it's the TPM stuff — the whole idea of security in the operating system. In Windows 10 you could turn BitLocker on and off, for example; the TPM piece was optional. In Windows 11 they've made it compulsory, to make it a more secure operating system. You can understand why: it drives the whole market forward, and it will drive the reputation of the operating system as being more secure, et cetera. It just means you're locking out all the people who traditionally said "I'll sweat this asset for five years" — seven years, nine years in some clients' cases.

Yeah, exactly. In fairness we've had pretty good mileage out of that: the Windows 7, 8 and 10 generations have actually been quite lenient. If I look back, historically your laptop went obsolete very quickly on the supported-hardware front; more recently we've sweated the assets for quite a long time and done quite well out of it. But now there's a clear delineation — a piece of hardware which may or may not be in there.

And the minimum specs haven't jumped that far either, have they? You're still talking 64 gig of storage for the OS and four gig of RAM.

We all know minimum specs are not to be aimed for, though.

Exactly — don't go there unless you want your user experience to be minimum spec. The bottom line is the minimum spec hasn't really changed that much, so in theory you can still run it on the older hardware. The thing that gets overlooked is people saying "oh yeah, we can roll this estate out" when, for example, the machines need a firmware flash to bring them up to the latest TPM firmware — again for security reasons. Do you have a method for flashing your entire estate's firmware? Most people have a method for rolling out a Windows update or similar — and if they haven't, they should have — but do they have a method for rolling out a firmware update? They're not always that simple. So when you're doing that migration you've got to think: are we going to need a manual touch on those machines, or a wholesale hardware replacement? As you said, the people who snapped up whatever was on the shelf — "can I have a thousand of those" — to get devices rolled out to their users: are they suddenly going to find they need to replace those thousand machines? That may mean an earlier budget cycle, or it may mean delaying the upgrade cycle. It's certainly a consideration.

There's DirectX 12 as well, isn't there?

Yes — the graphics card needs to be capable of DirectX 12 or above.

Any other thoughts on operating-system-level considerations? I'm quite keen to switch you both on to talking about the application layer.

I've already touched on the hardware for the laptops and so on, but there are also considerations about what an operating system can support in terms of disk volumes. On Windows Server, for example, if you're using ReFS volumes and you were considering disconnecting drives and connecting them to other servers: strictly speaking it's not a portable drive format across versions. It's okay to take a volume up a version, but once you've gone up you can't go back and read the data on an older version. Things like that you need to consider — going upwards, backwards compatibility is good, but going back the other way, not so much. Try to avoid it.

Yeah, okay. Obviously server versions, moving forwards, are going to become largely similar to what they're doing with the desktop versions, I'm sure. So if you're doing server upgrades, it's worth considering whether you buy your servers with the TPM chip in them at the same time.
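As a sketch of how you might triage a client estate against that Windows 11 baseline — the inventory rows, field names and audit logic here are all hypothetical, not any particular asset-management tool's schema; in practice you'd export these fields from whatever tooling you run:

```python
# Hypothetical estate audit against the Windows 11 baseline discussed
# above (TPM 2.0, 4 GB RAM, 64 GB storage, DirectX 12). Inventory
# rows and field names are invented for illustration.

BASELINE = {"tpm": 2.0, "ram_gb": 4, "storage_gb": 64, "directx": 12}

def eligible(device: dict) -> bool:
    """True if the device meets every baseline requirement."""
    return (device["tpm"] >= BASELINE["tpm"]
            and device["ram_gb"] >= BASELINE["ram_gb"]
            and device["storage_gb"] >= BASELINE["storage_gb"]
            and device["directx"] >= BASELINE["directx"])

inventory = [
    {"name": "pandemic-buy-01", "tpm": 1.2, "ram_gb": 8,
     "storage_gb": 256, "directx": 12},
    {"name": "recent-laptop-02", "tpm": 2.0, "ram_gb": 16,
     "storage_gb": 512, "directx": 12},
]

for d in inventory:
    print(d["name"], "OK" if eligible(d) else "needs hardware/firmware work")
```

A real check would also need the supported-CPU list and Secure Boot status, and — as discussed above — a device failing only on TPM firmware may be fixable with a flash rather than a replacement.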
These days it's not a lot of money — I think it was £150 extra or something like that when we did it recently for a client that didn't have TPM in the servers. You can actually buy it as a plug-in module later, but it's worth considering that it might become part of what you're trying to achieve in the long run as well.

Yeah, okay. So going on from server, my brain leads us up into the application layer. We've seen challenges over the last couple of years with people having, let's say, fun with vendors around the commercials of migrating data from live or legacy platforms. Rupert, I know you specifically have had a bit of work in this area.

One or two, yeah.

Exactly — so it'd be good to get some information out of you: the sort of things you've seen, the challenges people are facing. And it's not a one-time thing — we've helped people in more than a few instances, but it's an ongoing challenge.

Sure. You've got different things there. There are the standard applications people will look at — they'll talk about Office and the like, and we go through that all the time: how are you going to upgrade to the latest version of Office, what are you going to do to upgrade to the latest version of Sage, SAP, whatever you're running on the client side. And for the client upgrades — we talked earlier about operating systems — there are new deployment mechanisms, a whole load of Azure AD-joined stuff such as Autopilot, for getting operating systems out and deploying new apps while you're doing it, which can be built into those projects. So there's the client side of applications to get out there. But in terms of the thing that's really locking people in on legacy platforms, it generally seems to be data. If you've got a major issue with an old, legacy application, you need to get that data out — and it's not uncommon at all for vendors to play the "well, there's no way to get that data out; you need to do this, and you need to keep buying our platform for the next 25 years" card, or whatever it might be. More often than not — and we've done a lot of work in the ETL space recently, extraction, transformation and load — we'll say: let's have a look at that system and see how we can get the data out. We did one just recently where one of the large database vendors was suggesting the client needed to spend, I think, a quarter of a million pounds on upgrading licences and keeping the system going ad infinitum. We said we thought we could get the data out in about 20 days instead, and give the client the data so they could do something else with it. We proposed a three-day proof of concept — and actually did the whole job within the three-day PoC.

Against a quarter of a million pounds, that's obviously a significant saving.

Yeah, absolutely. And it's not uncommon; we see it a lot in the space of trying to get stuff out of legacy applications. As a rule of thumb — and I don't want to tar them all with the same brush — vendors will often try to keep you on their platform by pointing out how hard it is to migrate away. Of course, our job in a lot of these situations is to look at whether the right thing to do is to stay on that vendor's platform: stay there, upgrade it, bring it through that whole cycle we've been talking about, get up to the current version — and then you go back through the usual cycle of which hardware versions are supported, which software versions and operating systems are supported, and then you put the latest application on there. But
actually, in a lot of cases, when people talk about migrating away from legacy systems — which is what came out in the customer satisfaction survey — they're talking about "how do I get rid of that thing that's 20 years old, ticking away in the corner, that I don't need any more?"

Yeah, unfortunately. We see it a lot in the M&A work. Ben and I have just been on a project where a very large organisation has carved off a small piece of itself, our client has bought that small piece, and we're looking at getting involved in the various carve-outs of the systems. There's a whole bunch of stuff that's not needed from, for example, a very big SAP installation. They've got their own SAP expert working on that, but it's just one example where not taking all of it across is saving them millions in SAP licensing costs. Again, we did a job recently with SAP where the client needed a piece of data out of it, not the legacy SAP system itself. We were able to extract that data using the APIs that connect into SAP, give it back to the client, and save them the need to license SAP — or those components — for some time into the future.

Yeah — and the actual volume of data might be considerably smaller than the whole original system, so even the volume you have to move is comparatively small. It's just understanding what it is, and working at the application layer to extract it.

That's it. And normally it's the compliance or data-governance teams who make you hold on to that data, for valid reasons: okay, you're going to hold this for seven years and be able to access it. But that doesn't mean you've got to hold it in the source system. A lot of the time you can say: I've still got the data, I can still pull back whatever we were meant to keep for regulatory reasons, but I don't need it in the source system.
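The kind of extraction described here can be sketched end to end. In this sketch, SQLite stands in for the legacy database, and the `legacy_orders` table and its columns are invented for the example — a real job would go through the vendor's APIs or a proper database driver, as in the SAP case above:

```python
# Minimal extract-transform-load sketch: pull the records you're
# obliged to keep out of a "legacy system" (sqlite3 stands in here;
# table and column names are invented) and land them in a flat,
# read-only-friendly CSV archive.

import csv
import os
import sqlite3
import tempfile

def extract_to_csv(db_path: str, out_path: str) -> int:
    """Copy the compliance-relevant rows into a CSV archive.
    Returns the number of rows written."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT id, created, amount FROM legacy_orders ORDER BY id"
    ).fetchall()
    conn.close()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "created", "amount"])  # header
        writer.writerows(rows)
    return len(rows)

# Stand-in "legacy system": a throwaway SQLite file with two rows.
workdir = tempfile.mkdtemp()
db = os.path.join(workdir, "legacy.db")
conn = sqlite3.connect(db)
conn.execute("CREATE TABLE legacy_orders (id INTEGER, created TEXT, amount REAL)")
conn.executemany("INSERT INTO legacy_orders VALUES (?, ?, ?)",
                 [(1, "1998-01-04", 120.0), (2, "1999-07-19", 85.5)])
conn.commit()
conn.close()

archive = os.path.join(workdir, "orders.csv")
n = extract_to_csv(db, archive)
print(n, "rows archived to", archive)
```

The archive format matters less than the principle: once the retained records are out, they can sit somewhere cheap and read-only, and the expensive source system can be decommissioned.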
Yeah — it comes down to where we started; we've gone full circle. We're looking at types of data, and not all data is equal. Looking at that information: you potentially need some of it, but do you need to be running a very expensive system "just in case"? If we can transform it and move it somewhere else, we can still get access to the raw data without all the trimmings we're paying a fortune for — which don't really do much other than cost us an enormous amount.

Yeah — especially if it's data you're just going to refer to. Essentially it's read-only; you're not really working with it in the way you were, you're just referring back to it. So there's a whole load of functionality you don't need in the system you're migrating it to.

Yeah, absolutely. We had a very interesting conversation recently with a potential client who has an AI platform they want to feed a load of data into. Getting access to all that legacy data will let them feed their AI and let it learn a lot faster — and the question was whether someone could assist with that piece in the middle: getting the data out of the legacy systems and feeding it into the modern systems. That's a whole new take on it as well. But it's all migrating away from those legacy systems, and either keeping the data purely for compliance reasons — parking it away somewhere read-only, as Ben said — or saying "actually, we want to feed it into something new and modernise the whole system", with a decent intention behind it.

Yeah, exactly. We've done all sorts — and we shall continue to.

Listen, guys, thank you very much — it's been really interesting today. I'm sure we'll do more of this; hopefully we can get some deeper technical information than I certainly bring to the table. So thank you very much.

Thank you very much for your time.

Yeah, thank you for having me.

Thanks. Cheers, guys. And thank you for
joining us on this edition of Kromecast — check it out. By all means leave feedback in the comments section below, and if there's anything you'd like us to cover in future episodes: like, comment, subscribe and share. Join us again next time.

[Music]
2022-02-25