Rack Mounted Buffalo NAS TS-RXL LC66 8TB Emergency Data Recovery

Wednesday 16:49 – Client made contact
The client (a media company) contacted us for advice. Their Buffalo rack-mounted NAS, configured as RAID 5 across four Seagate 2 Terabyte SATA hard disks, had failed and contained business-critical data.

The NAS had flagged that one of the disks had failed, so on the instruction of a third-party support company they had replaced the disk. During the rebuild of the RAID the operation gave an error and they were not able to proceed any further: the RAID would not rebuild, the device would not mount, and all of the data stored on it was inaccessible.

R3 sales explained several reasons why this may have happened and quoted a price for recovery on an emergency service (normally 12 to 48 hours after arrival at our lab).

Wednesday 17:02 – Client Accepted
The client decided to proceed and R3 arranged for the device and disks to be sent overnight from London to our lab in Sheffield.

Thursday 10:45 – Received
Upon arrival, all jobs are booked into our in-house bespoke database: we note down all serial numbers, take images of all devices and check for any abnormalities, whether with the packaging or the devices. Everything is labelled with the client’s unique reference number and the client is informed of the arrival.

Everything was as expected so the job was sent to the engineers in the lab.

Thursday 11:07 – Diagnosis
The disks were assessed one by one by an engineer, who determined the following (a rough sketch of this kind of initial health check follows the list):

  • Disk 1 (model number and photo) was in good working order. When this is the case, R3 engineers make a copy of the disk to another stable disk for use later in the recovery process; at no point during any recovery should a recovery attempt be made directly from an original device, even when the disk is working.
  • Disk 2 (model number) had areas of media degradation, so cloning began.
  • Disk 3 (model number) had a firmware failure that was preventing the disk from being detected at all; the firmware fault was fixed and cloning was started.
  • Disk 4 had a mechanical failure: the read/write heads had failed and required a rebuild before work could continue. The database flagged that a part was required and the parts department began searching for a suitable donor disk.
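To give a rough idea of the kind of initial triage involved (this is an illustrative sketch, not R3’s in-house tooling), the snippet below queries the SMART health status of each member disk with the standard smartctl utility; the device names are assumptions made up for the example.

```python
# Illustrative triage sketch: query SMART health for each suspected RAID member.
# Assumes smartmontools is installed and the disks appear as /dev/sda../dev/sdd
# (hypothetical device names for this example).
import subprocess

MEMBER_DISKS = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]

def smart_health(device: str) -> str:
    """Return the overall SMART health line for a device, or a note if it fails."""
    try:
        result = subprocess.run(
            ["smartctl", "-H", device],
            capture_output=True, text=True, timeout=30,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
        return f"{device}: could not query ({exc})"
    for line in result.stdout.splitlines():
        if "overall-health" in line or "SMART Health Status" in line:
            return f"{device}: {line.strip()}"
    # A disk with failed firmware or electronics may not answer at all.
    return f"{device}: no SMART response (disk not detected?)"

if __name__ == "__main__":
    for disk in MEMBER_DISKS:
        print(smart_health(disk))
```

A disk like Disk 3, whose firmware prevents it from being detected, simply won’t answer this kind of query, which is one reason hands-on assessment is still needed.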

Thursday 11:20 – Cloning
The first three disks were all cloning, and an internal inspection of Disk 4 revealed severe damage to its platter surface, which would at the very least make the head rebuild problematic.
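Sector-level clones of degraded disks are commonly made with a tool such as GNU ddrescue, which records unreadable regions in a map file so they can be retried on later passes. The sketch below illustrates that general approach under assumed device names and paths; it is not the exact lab procedure.

```python
# Cloning sketch using GNU ddrescue (must be installed). The first pass (-n)
# copies everything that reads easily and skips problem areas; the second pass
# (-r3) goes back and retries the remaining bad sectors up to three times.
# Device names and paths are hypothetical; -f allows writing to a raw device.
import subprocess

SOURCE = "/dev/sdb"               # degraded original disk (only ever read from)
TARGET = "/dev/sdf"               # known-good destination disk
MAPFILE = "/recovery/disk2.map"   # records which sectors are still unread

def run(args):
    print("running:", " ".join(args))
    subprocess.run(args, check=True)

run(["ddrescue", "-f", "-n", SOURCE, TARGET, MAPFILE])
run(["ddrescue", "-f", "-r3", SOURCE, TARGET, MAPFILE])
```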

Thursday 11:25 – Client advised
Customer services advised the client of the situation with all four disks. We were up front: a successful recovery might be possible using only the first three disks thanks to the redundancy of RAID 5, but we also had the option of rebuilding the fourth disk, as our team has a great deal of experience with mechanically failed disks that others would not take on. To keep the client’s costs down, we would only go down that route if needed.

Thursday 16:15 – Cloning complete
Just after 4 o’clock we had the first three disks cloned: Disks 1 and 3 had no unread areas, and Disk 2 had some corrected sectors.

Thursday 16:20 – Raid rebuild
In some RAID recoveries (for reasons we won’t go into here) the disk order has to be worked out. In cases such as this, however, the disk order is obvious; what isn’t so obvious is what is known as the stripe size. The complexities of exactly how RAID works are beyond the scope of this article (we will publish more on this subject at a later date), but the disk order, stripe size and file system are some of the parameters that have to be worked out to allow the RAID to be reconstructed and so allow access to the lost data. The sketch below gives a rough idea of how those parameters determine where each block of data sits.
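The following is an illustrative model of one common RAID 5 layout (left-symmetric, the default used by Linux md). The Buffalo unit’s actual layout, chunk size and member order had to be determined from the disks themselves, so every value here is an example.

```python
# Illustrative RAID 5 block mapping (left-symmetric layout, the Linux md default).
# Shows why disk order, stripe (chunk) size and disk count all have to be known
# before a broken array can be virtually reassembled. Values below are examples.

N_DISKS = 4                 # member disks, in their order within the array
CHUNK_SIZE = 64 * 1024      # stripe/chunk size in bytes (assumed for the example)

def locate(logical_byte: int):
    """Map a logical array offset to (disk index, byte offset on that disk)."""
    chunk_no = logical_byte // CHUNK_SIZE        # which data chunk, array-wide
    within = logical_byte % CHUNK_SIZE           # offset inside that chunk
    stripe = chunk_no // (N_DISKS - 1)           # each stripe holds n-1 data chunks
    d = chunk_no % (N_DISKS - 1)                 # data chunk index within the stripe
    parity_disk = (N_DISKS - 1) - (stripe % N_DISKS)
    data_disk = (parity_disk + 1 + d) % N_DISKS  # left-symmetric: data follows parity
    disk_offset = stripe * CHUNK_SIZE + within
    return data_disk, disk_offset

if __name__ == "__main__":
    for lba_byte in (0, CHUNK_SIZE, 3 * CHUNK_SIZE, 10 * CHUNK_SIZE):
        disk, off = locate(lba_byte)
        print(f"array byte {lba_byte:>8} -> disk {disk}, offset {off}")
```

Get the disk order or chunk size wrong and every block beyond the first stripe is read from the wrong place, which is why these parameters are verified against known file-system structures before any data is copied off.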

Thursday 16:35 – Data Transfer
The most common way of recovering a RAID is to rebuild it in a virtual environment.
Approximately 1.5TB of data was stored on the RAID, and all of it was copied over to a new external hard disk. Because of the time it takes to physically move that volume of data, several business-critical files were copied off first to a separate disk so they could be returned directly to the customer, along with the original device, without waiting for the full transfer.
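As a simple illustration of this “critical files first” approach (the paths and file names below are invented for the example), the sketch copies a short priority list off the mounted virtual volume before the bulk copy of the whole volume begins.

```python
# Sketch: copy business-critical files off the reassembled (virtual) volume first,
# then start the bulk transfer. All paths and file names here are hypothetical.
import shutil
from pathlib import Path

MOUNTED_VOLUME = Path("/mnt/virtual_raid")   # reassembled array, mounted read-only
PRIORITY_OUT = Path("/recovery/priority")    # small disk for the urgent files
BULK_OUT = Path("/recovery/full_copy")       # new external disk for everything

PRIORITY_FILES = [
    "accounts/invoices.xlsx",
    "projects/current_edit/master_project.dat",
]

# 1. Urgent files first, so they can be returned to the client immediately.
for rel in PRIORITY_FILES:
    src = MOUNTED_VOLUME / rel
    dst = PRIORITY_OUT / rel
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    print("priority copy:", rel)

# 2. Then the full volume (roughly 1.5TB here, so this runs for hours).
shutil.copytree(MOUNTED_VOLUME, BULK_OUT, dirs_exist_ok=True)
```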

Thursday 17:00 – FTP transfer complete
The most critical of the data was back in the hands of the client, who were able to resume their work.
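The article doesn’t detail the transfer setup, but an upload along these lines, using Python’s standard ftplib over TLS with placeholder host, credentials and file names, shows one way the urgent files could be pushed straight to a client’s FTP server.

```python
# Sketch: send the urgent files back over FTPS (FTP over TLS) with the standard
# library. Host, credentials and file names are placeholders, not real details.
from ftplib import FTP_TLS
from pathlib import Path

HOST = "ftp.example-client.co.uk"
USER = "recovery_upload"
PASSWORD = "change-me"
FILES = [Path("/recovery/priority/accounts/invoices.xlsx")]

ftps = FTP_TLS(HOST)
ftps.login(USER, PASSWORD)
ftps.prot_p()                    # encrypt the data channel as well as the login
for path in FILES:
    with open(path, "rb") as fh:
        ftps.storbinary(f"STOR {path.name}", fh)
        print("uploaded:", path.name)
ftps.quit()
```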

The NAS was set up with new hard disks and was back up and running.

Mike – R3 Data Recovery