Getting Data Off a Failed Pre-Built NAS


In this video, I'm going to take a look at how you can get data off a NAS that isn't booting by pulling out the drives, putting them directly into a PC, and mounting and dealing with the different file systems and RAID formats these NAS units use. I'm going to try to make this a semi-generalized guide that works with most off-the-shelf, pre-built NAS units. While they're all a little bit different, this works for most of the units I've seen, and I've come across quite a few of these boxes, each with its own drive layout and formatting. Even though they're slightly different, almost all of these systems are using Linux under the hood.

They're typically using Linux mdadm for RAID, often Linux LVM as well, and then a Linux file system like Btrfs, ext4, or others. I'm going to go over how to identify what's actually being used on these drives, how to start up that RAID again, start up the LVM volume, and then mount the file system inside of that to get the data off your drives. While this should work for most units, I've seen some units that use proprietary layers.

Unfortunately, this won't work there. So let's take a look at the example I'm going to be working with today: a little TerraMaster NAS.

But as I said earlier, this is very similar across almost all the off-the-shelf NAS models I've seen over the years. The other thing I'm going to need is a computer, and in this case it needs to be running Linux, because these pre-built NAS units are almost always running Linux. So I also need a Linux PC that can read their data.

For the purpose of this video, that's going to be a Fedora system, but any Linux distro should work fine. If you don't already have a Linux PC, one other option is to boot your current Windows PC or other x86 system into Linux using a live disk. If you look up the website of most of the large Linux distros, they give you an ISO download, and you can use a tool like Rufus to put that onto a USB stick and boot from it. The Linux install I have right now is basically fresh: I installed Linux and haven't done anything with it.

While I'm going to try to make this as simple as possible, unfortunately this almost always requires diving into the terminal to get these file systems mounted, as this isn't a standard mount, so auto-mount tools typically won't work here. I just have an open-air test bench on this system, but any type of computer that can fit all your drives will work. Now, I will note that you can get away with the minimum number of drives the RAID level requires. For example, with a RAID 1 of two drives, like I have here, you could use just one. But typically I like to start off with both drives, or however many drives are in that RAID array, and you're going to want them all plugged into the system at the same time.

The other thing you want for your system is some sort of drive that you can copy the data onto. While technically you could keep working with the data on the drives the way they're formatted, I really suggest copying it onto another drive so you have it in a more standard format to move on with after you finish. One other thing that might be a good idea, if you have very important data, is to make an image of all of these drives first. Essentially, that means putting each drive into a system and using some sort of imaging software in Linux, like ddrescue, to copy all the data off of it. Then you have image files, each the same size as its drive.

So, for example, a two-terabyte drive would give you a two-terabyte image file, and then you can assemble those image files together. That means you can always go back and recopy the image from these drives. It can also be advantageous if you have failing drives, as it lets you use tools like ddrescue that pull the most data possible off a failing disk. But it's not always needed, and often I'll just start trying it on the bare drives themselves.
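If you go the imaging route, a minimal sketch with ddrescue might look like the following. The device name and output paths are examples only; always confirm the device with lsblk first, because imaging the wrong direction destroys data.

```shell
# Sketch: image a possibly-failing drive before working on it.
# /dev/sdb and the output filenames are example values -- adjust to your system.
sudo ddrescue -d /dev/sdb sdb.img sdb.mapfile        # first pass: grab easy data fast
sudo ddrescue -d -r3 /dev/sdb sdb.img sdb.mapfile    # retry bad sectors up to 3 times
# Later, the image can be attached as a loop device instead of the real disk:
sudo losetup --find --show --partscan sdb.img
```

The mapfile lets ddrescue resume and retry only the unread areas, which is the main reason to prefer it over plain dd for a flaky drive.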

So let's get into actually, physically plugging these drives into my little Linux system. I also want to make one quick note that you might want to look into professional data recovery services if your data is extremely important, if you've had a physical hard drive failure, or anything complex. If you're new to doing something like this, there are a lot of ways it can go wrong.

That might just be the safest option if your data is extremely important. So now let's get into the actual process of getting the data back from these drives and getting it mounted. On my screen right now, I have the Linux system I've just logged into. What I first like to do is make sure the drives show up in the system, and also take a look at the SMART data, or self-reported health, of the drives. On a lot of Linux distros, there's a tool for this just called Disks.

I'm going to search for it, and I can see these two one-terabyte drives here. They show a variety of RAID information and things I'm going to work with later, but what I want to look at immediately is that under SMART Data & Self-Tests it says OK. I don't have to do this and could immediately start trying to get the data back.

But this lets me know that the drives are working correctly. If a drive is reporting errors, that's likely going to make some of the later steps error out too, and I want to know that now, before I get in deeper and start wondering why things aren't working. In the case that one of my drives reports its health as not being good,

I'd likely start off by seeing if my RAID array actually needs that disk. For example, if I have a RAID 6 of six disks and one is bad, I might try assembling it degraded with only five disks. I'd also look into making an image of the failing disk using ddrescue, which is a tool that tries to get the most data possible off a failing disk. Those are just some ideas to look into; I'm going to skip that in this simpler guide.
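If you'd rather check drive health from the terminal than the Disks GUI, smartctl from the smartmontools package does the same job. The device name here is an example:

```shell
# Sketch: query self-reported drive health. /dev/sdb is an example device.
sudo smartctl -H /dev/sdb    # overall health verdict (PASSED / FAILED)
sudo smartctl -A /dev/sdb    # full attribute table (reallocated sectors, etc.)
```

A PASSED verdict isn't a guarantee, but a FAILED one, or growing reallocated/pending sector counts, is a strong hint to image the drive first.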

What I want to do first on this system is open up the terminal to get access to all the drives and information. The first tool I'm going to use is called lsblk, which lists all the block devices. What I'm looking for now are my drives. Since I know these are two one-terabyte drives, I can see that sdb and sdc are each physical disks on the system, and they're the one-terabyte drives.

The next thing I note is that sda is my boot drive. I don't want to be touching my boot drive, so it's going to be left alone. Then the next thing I'm going to take a look at is sdb1, 2, 3, and 4: the partitions of these drives.
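Picking explicit columns makes lsblk's output easier to read for this job. The example output below is reconstructed from the sizes described in this video; your names and sizes will differ.

```shell
# List block devices with the columns that matter here.
lsblk -o NAME,SIZE,TYPE,FSTYPE
# Example layout on the system in the video (reconstructed, yours will differ):
#   sda          ...     disk                      <- boot drive, leave alone
#   sdb          931.5G  disk
#   |-sdb1       285M    part                      <- small boot partition
#   |-sdb2       7.6G    part  linux_raid_member
#   | `-md126    ...     raid1                     <- OS RAID, auto-started
#   |-sdb3       1.9G    part                      <- likely swap
#   `-sdb4       921G    part  linux_raid_member   <- the data partition
#   sdc          931.5G  disk                      <- same layout as sdb
```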

The way a lot of these pre-built NAS units work is that the drives you put in them store the OS as well as the data. They often actually have multiple RAID arrays: a RAID 1, or mirror, for the boot partition, which is typically quite small, and then a larger partition that stores all your data. You can probably set it up some other ways too. Typically, I don't care about that boot partition.

It's just the OS that comes with these units and some of the configuration, which really isn't useful if you're not using one of these NASes. If you're trying to get the data back from one of these units, you likely only care about the data on the actual RAID array. In this case, I can see a small sdb1 partition of 285 megabytes. That's too small to be my data partition; it's possibly some sort of boot or swap information.

I'm going to just skip over that one for now. Next I see my sdb2 partition, which is 7.6 gigabytes. Nested inside of it, lsblk shows md126. Since it says md, that means the system has detected and started this md array. We'll look more into how md arrays can be set up and managed in a little bit, but it looks like the system is auto-detecting and trying to start these arrays, which is typically a good sign things are going well.

The next thing I see is another small partition of about 1.9 gigs. This is likely something like swap or something else that isn't very important. And then there's sdb4, the big partition at 921 gigabytes, which is likely what has all of my data.

Looking at the second drive in the system, I see about the same information. It has all the same four partitions, which makes sense because I set these up in a mirrored RAID 1, and it shows the same md arrays (md125 and md127) on it.

Because of the sizes of these partitions, I currently only care about the data in sdb4 and sdc4, the last ones. Most likely the other roughly 10 gigabytes is OS info that I'm not going to look at now and likely don't care about.

Now that I've identified that this fourth partition on both of these drives has all of my data, the next thing to figure out is how to actually get at it. All of these NAS units are slightly different, but a lot of them work like this: they have multiple partitions, each with its own Linux md RAID; inside that md RAID there's LVM; and inside that LVM is the actual data you want.

So let's take a look at actually reading this info. I'm going to run sudo file -sL (that's a lowercase s and a capital L) on /dev/sdb4, the fourth partition; it doesn't matter which drive in this case, as it looks the same on both. What it tells me is something like Linux software RAID version such-and-such, with TNAS info and a little more after it.

So immediately this tells me that it's Linux software RAID, so any time I want to assemble or open it, I'm going to want to use mdadm. The other thing it tells me is level=1 and disks=2. That means it's a RAID 1 array and there are two disks in it, so it looks like I have both of my disks and everything I need. If it said, for example, disks=4, I'd start wanting to hunt for those extra disks.
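As a command sketch, the identification step looks like this. The sample output line is approximate; the exact wording varies by md metadata version and NAS vendor:

```shell
# Identify what's on the big partition. -s reads block/special files,
# -L follows symlinks; /dev/sdb4 is the example device from this system.
sudo file -sL /dev/sdb4
# Roughly what you'd see for an md RAID member (details vary):
#   /dev/sdb4: Linux Software RAID ... level=1 disks=2 name=TNAS:... 
```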

Since I'm on a Linux distro that tries to auto-detect things for me, it looks like it has actually already opened those md partitions, the Linux md RAID. But I'm going to close it so I can show you how to open it if your system doesn't do that automatically. So, behind the scenes, I've turned off the LVM and mdadm array that it auto-detected, so we can go through the process manually in case your system doesn't detect them automatically.

Now that I've taken it offline, you can see that sdb4 and sdc4 don't show any md info. If I run the same sudo file -sL /dev/sdb4 again, it still shows a Linux software RAID. So I've detected a software RAID, but nothing shows nested under it in lsblk: it's a Linux RAID partition, but nothing has opened or is using it. So now I want to start the Linux RAID, essentially assembling it, so I can see all the data within it.

I'm going to type sudo mdadm. If this throws a command-not-found error, you might have to run sudo apt install mdadm, or dnf, or whatever package manager your Linux distro uses. Then I'm going to run it with --assemble and --scan. What that should do is automatically scan your system and assemble the arrays it finds.

What it shows on my system is that the array has been started with two drives, and the name looks like TNAS, so that's the TerraMaster system, with some extra info. If I run lsblk now, I can see that it's started this md127 device.
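The assembly step as a sketch; the device and partition names below are the ones from this example system, so substitute your own:

```shell
# Assemble any md RAID arrays found on the attached disks.
sudo mdadm --assemble --scan
# If the scan misses it, name the member partitions explicitly:
sudo mdadm --assemble /dev/md127 /dev/sdb4 /dev/sdc4
# For a degraded array with a missing disk, --run forces it to start anyway:
#   sudo mdadm --assemble --run /dev/md127 /dev/sdb4
cat /proc/mdstat    # check which arrays are running and their state
```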

So now we're starting to open up the layers. We've gotten the RAID started, and now we need to see what's inside the RAID. I'm going to run file -sL again, but this time, instead of using sdb4, the partition, I'm going to use the md RAID device itself.

I'm going to take a look at what's nested under sdb4, in this case md127, and run sudo file -sL on that. What I see there is an LVM PV. LVM, if you're not familiar with it, is Linux's Logical Volume Manager.

What LVM lets you do is take a variety of physical disks, partitions, or other devices, put them together into one volume group, and run a variety of logical volumes on top of it. It allows you to have multiple volumes, add or move disks as you see fit, and other things. What this means for this process is that we have to get all of the physical volumes, the PVs, online; once we do that, we can access the logical volumes, which is where the data is actually stored.

My system looks like it automatically detects that LVM exists here and brings it online, but I don't want that, so I'm going to do it manually for the video. I'm going to run sudo pvscan, and it should scan my system for PVs.

A PV is the physical volume part of LVM in Linux; pvscan scans the system for any of these physical volumes that aren't being used and brings them up. On my system, it detected the PV on that md127 device, and it looks like it's starting things up and making it work.

If it doesn't work like that, I can add --devices /dev/md127, or whatever the path of my md array is, and it'll detect and start it. Now that it has scanned and knows there's an LVM volume here, I like to use some of the Linux LVM commands to find out what's going on. I'm going to run sudo vgdisplay, which should list all the volume groups on my system. One thing to note is that a lot of Linux installs might be using LVM on the boot drive, so you want to be aware of that.

Sometimes the volume group name on the NAS might be the same as one already on your system, and if that's the case, you might have to use or set up a system that doesn't use LVM, because LVM will not be happy with a duplicate name. When I run vgdisplay, it gives me a bit of info about the volume group, like its name, vg0.

I'm also going to look at the size. Since it says 921 gigs, I know this is the data volume on this system and not something like a system partition or swap: it's the large one, not a small system partition. The other thing I'm going to take a look at is the current LVs. An LV, as I said earlier, is a logical volume, essentially the volume the data goes into on top of LVM. And I see that Open LV is 0.
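The LVM inspection steps as a sketch (the vg0 name is from this example system):

```shell
# Inspect the LVM layers sitting on top of the md array.
sudo pvscan       # find physical volumes (should list the md device)
sudo vgdisplay    # volume groups -- look for the big ~921G one, vg0 here
sudo lvdisplay    # logical volumes -- note the "LV Path" and "LV Status" lines
```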

What I can run now is lvdisplay, which will list all of the LVs. Basically, all of these disk commands need to be run with sudo, and they'll throw errors like this if I don't. So I take a look at my lvdisplay output, my logical volumes, which hold the different partitions with my data on them. I can see the path and some other information, and I see it says NOT available.

In order to activate this logical volume, I'm going to run lvchange -ay (a for activate, y for yes) with the path afterwards. The path will be what lvdisplay showed as LV Path. Running that lvchange, if it works correctly, won't give any output.

But when I run lvdisplay, it should say LV Status available, which means I can now actually read the data from it. And if I run lsblk again, I can see vg0-lv0, which should have all of my actual partition information and data on it.
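The activation step as a sketch; the vg0/lv0 path comes from lvdisplay's LV Path on this system, so use whatever yours reports:

```shell
# Activate the logical volume so a /dev node appears for it.
sudo lvchange -ay /dev/vg0/lv0
sudo lvdisplay /dev/vg0/lv0    # should now report "LV Status    available"
```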

To find out what's actually on there, I'm going to run sudo file -sL again, but this time on /dev/vg0/lv0, the same path I saw in lvdisplay earlier. What this shows me is that it's a Btrfs file system, with some extra Btrfs information, and hopefully it will just mount. So now I need a spot to mount this Btrfs file system, or on other NASes maybe something like ext4. What I like to do is make a mount point in /mnt: that would be sudo mkdir (to make a directory) /mnt/nas_mount, for example, or whatever we want to call it.

Then I'm going to run sudo mount /dev/vg0/lv0 (the path from lvdisplay earlier) /mnt/nas_mount, or wherever I want to mount the drive.

Again, if this works correctly, it should print nothing. It does print something here on my system because fstab has been changed, but that shouldn't affect whether it actually mounts. To see whether it's mounted, I'm going to run df, which shows all of the mounted drives and how much data is on them. At the very bottom, I can see /mnt/nas_mount. The raw byte values are annoying to read, so I like adding -h.
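The mount step as a sketch, with the example names from this system:

```shell
# Make a mount point and mount the activated logical volume.
sudo mkdir -p /mnt/nas_mount
sudo mount /dev/vg0/lv0 /mnt/nas_mount
df -h /mnt/nas_mount    # confirm it mounted; -h gives human-readable sizes
```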

That shows it's about 922 gigs, about what I'd expect for two one-terabyte drives in RAID 1, and it's about half used. So let's dive in using the terminal to see what data is on here. I'm going to cd to /mnt/nas_mount and ls, and I can see some information: it looks like some apps, cache, and desktop information, plus a test share that probably has some of the data.

It's going to say permission denied, so I run it with sudo again. With a lot of these permission issues, sometimes it might just be easier to use sudo -s to go into a root shell and run everything as root, because you're going to need root for most of these drive activities.

Then I can cd into the test share, and if I ls: hey, that's all of the data I copied over for this test. Now that all of my data looks like it's here and looks good, I'm going to use a tool like rsync, or something else on the command line, to copy it.
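A copy sketch with rsync; the source share name matches this example, and the destination path is a hypothetical backup drive, so substitute your own:

```shell
# Copy the recovered data to another drive. -a preserves permissions and
# timestamps, -h prints human-readable sizes. Destination path is an example.
sudo rsync -avh --progress /mnt/nas_mount/test_share/ /run/media/user/backup_drive/nas_backup/
```

The trailing slash on the source copies the share's contents rather than nesting an extra directory level.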

If I want to copy it in the GUI, which might be easier, probably the easiest way is to open the file manager and go directly to /, because it doesn't want to show you a lot of the system data by default, then go under /mnt/nas_mount, as we created earlier, and into the test share. I'll type my password in, and hopefully it'll let me see it. I can see these folders I made for testing earlier.

I'm just going to press Ctrl+C to copy them and paste them, for example, into my Documents folder, and get all the data from those drives. As I said earlier, I really recommend moving the data onto a different drive or storage solution, but you technically can keep using this mounted system. Hopefully this means you'll get all your data back and it'll work correctly. If I cd into this data, I can see, yeah, this looks like contents from a camera: different XML and MXF video files I can play.

It looks like I got all of my video files and other things from that NAS copied onto my own system. So everything worked, and I got the data off after one of these NASes couldn't boot. Now, I do want to note that different systems are different.

For example, if your system uses caching, it might have a caching layer, as well as other things that can make this more annoying. I also want to explain a bit more of the process now, so it makes more sense what's going on. In a lot of these NAS units, those earlier partitions we saw are used for the OS: things like all the boot and OS files, the internal apps, and some other info.

There are also often going to be swap partitions. Even though these systems have a tiny bit of flash included for the basic bootloader, so you can get the OS installed even if you don't have drives with data on them already, that's not enough to store any swap. So they'll often create a partition on the drives to use as swap, so that if they need more memory, they won't run out. And then there are the data partitions.

From what I see, these systems almost always use mdadm first, which is the standard Linux RAID solution, and then LVM on top of that. I've seen LVM used so that they can combine multiple mdadm RAIDs together. That way you can have something like a RAID 5 with mixed drive capacities, by having multiple mdadm RAID configurations, each with a different number of drives, essentially within partitions, and then putting those all together into one big LVM volume. You can also use the LVM logical volumes to have multiple file systems in there, of different sizes: when you create a file system smaller than the whole volume, it's just making a smaller LV.

You could then expand that later on if you want, as LVM adds a bit of extra flexibility under the hood. And then they run a file system within that. Luckily, all of these NAS units use a relatively standard way of doing it, so you can mount them on systems like this, which is really awesome to see. Have you done something like this, or do you want to get data from a pre-built NAS? Let me know in the comments below, as I'm looking forward to hearing your experiences.
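To recap the whole stack bottom to top, here is the command that opens each layer, using the device, VG, and LV names from this example system:

```shell
# Recap of the layers on a typical pre-built NAS drive set:
sudo mdadm --assemble --scan              # RAID member partitions -> /dev/md127
sudo pvscan                               # md device -> LVM physical volume
sudo lvchange -ay /dev/vg0/lv0            # volume group vg0 -> logical volume lv0
sudo mount /dev/vg0/lv0 /mnt/nas_mount    # Btrfs/ext4 file system -> your files
```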

And hopefully this has been useful if you wanted to get data off one of these NASes, and gave you some tips on how you could do that. Thanks for watching.

2025-05-04 02:20
