External Hard Disk is not accessible. "The disk structure is corrupted and unreadable"

First of all, do not do anything more with the disk (at the very least, never write to it). The disk not being recognized at all (as opposed to "being recognized and found empty or with unreadable data") seems to indicate either a completely blasted disk, which chkdsk is not wont to cause, or something wrong with the partition table or geometry of the disk, or with the way the USB enclosure handles it. A hardware failure is also possible.

This can and will happen when USB enclosures mishandle the negotiation between the disk and the computer they're connected to. So the first thing to do is to take an image of the disk onto an (obviously larger) disk at the closest-to-physical level possible, using dd under Linux. Then you can fiddle with the image copy to your heart's content, without risk of further damage to the real disk.
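
A minimal sketch of that imaging step, assuming the enclosure shows up as /dev/sdX (how to find the actual node is described below) and that /mnt/backupdisk has room for the full image; conv=noerror,sync keeps dd going past unreadable sectors instead of aborting, padding them with zeroes (a smaller block size limits the data lost around a bad spot):

# dd if=/dev/sdX of=/mnt/backupdisk/DiskImage.raw bs=1M conv=noerror,sync   # sdX is a placeholder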

Update: device recognition in Linux

We have no fewer than three entities in our "external disk": the USB enclosure hardware, exposing itself as a block device; the physical disk inside the enclosure, i.e., the sequence of LBA sectors from first to last; and finally zero or more data partitions, hosting the file systems. To be "recognized" and displayed on a desktop, all links of the chain need to be working. But to take an image of the physical disk you need only the first two. If you plug in the device and run the command-line dmesg (as root), you ought to see something like this:

[4984939.028491] usb 8-6: new high speed USB device using ehci_hcd and address 3
[4984939.166658] usb 8-6: configuration #1 chosen from 1 choice
[4984939.170660] scsi7 : SCSI emulation for USB Mass Storage devices
[4984939.172003] usb-storage: device found at 3
[4984939.172005] usb-storage: waiting for device to settle before scanning

...which is the enclosure getting recognized, and then identifying itself and its contents:

[4984939.170660] usb 8-6: New USB device found, idVendor=1058, idProduct=1021
[4984939.170660] usb 8-6: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[4984939.170660] usb 8-6: Product: Ext HDD 1021
[4984939.170660] usb 8-6: Manufacturer: Western Digital
[4984939.170660] usb 8-6: SerialNumber: 574D43305431303831303734
[4984944.400970] usb-storage: device scan complete

Next you'll see the driver reporting the disk's geometry and nature, and implicitly its device node, here sdd (the fourth SCSI disk, since sda, sdb and sdc were already taken):

[4984944.404739] scsi 7:0:0:0: Direct-Access     WD      Ext HDD 1021    2021 PQ: 0 ANSI: 4
[4984944.404739] sd 7:0:0:0: [sdd] 1953519616 512-byte hardware sectors (1000202 MB)
[4984944.407367] sd 7:0:0:0: [sdd] Write Protect is off
[4984944.407369] sd 7:0:0:0: [sdd] Mode Sense: 17 00 10 08
[4984944.407371] sd 7:0:0:0: [sdd] Assuming drive cache: write through
[4984944.408741] sd 7:0:0:0: [sdd] 1953519616 512-byte hardware sectors (1000202 MB)

Then the kernel recognizes that there is a partition (if you don't see this, the partition is not there or is invalid):

[4984944.411497]  sdd: sdd1

Now Linux has everything it needs and reports successful attachment:

[4984944.416739] sd 7:0:0:0: [sdd] Attached SCSI disk
[4984944.416739] sd 7:0:0:0: Attached scsi generic sg4 type 0

And so the search for the data partition begins: OK, we have sdd1, but what is on it? The answer is:

[4984997.498613] NTFS driver 2.1.29 [Flags: R/W MODULE].
[4984997.554613] NTFS volume version 3.1.
[4984997.568859] NTFS-fs error (device sdd1): load_system_files(): $LogFile is not clean.  Mounting read-only.  Mount in Windows.
[4985390.027808] NTFS-fs error (device sdd1): ntfs_remount(): Volume has errors and is read-only.  Cannot remount read-write.
[4985442.423299] NTFS volume version 3.1.
[4985442.425032] NTFS-fs error (device sdd1): load_system_files(): $LogFile is not clean.  Mounting read-only.  Mount in Windows.

The above was a "good" mount. But just knowing that the device is sdd (or sdc, or sdb) allows me to make a binary copy (assuming I have enough free space on /mnt/backupdisk): input file /dev/sdd, output file DiskImage.raw, block size 1 MB:

# dd if=/dev/sdd of=/mnt/backupdisk/DiskImage.raw bs=1M

Note that the input file is /dev/sdd and not /dev/sdd1 (or any other number).  Now if I wanted to, I could find out the offset of the data partition within DiskImage.raw, and mount it with the help of a loop device. Here you'll find the dirty details.
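
For instance, a minimal sketch assuming the partition starts at sector 63 as in the fdisk listing further below, so the offset is 63 × 512 = 32256 bytes (ro keeps the image untouched):

# mount -o ro,loop,offset=32256 /mnt/backupdisk/DiskImage.raw /mnt/recovery   # 32256 assumes a start sector of 63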

First recovery attempt

The second thing to do would be to put the physical disk into another enclosure, thereby checking whether the old enclosure is at fault and giving the new one a chance of interpreting the disk correctly. If the disk reappears, it probably was the previous enclosure that was broken. Just in case, back up all of the newfound drive's contents, verify the backup, zero the disk with a disk-overwrite utility so that it goes completely dumb (you can't have two devices with different opinions in a device chain), reformat it natively from Windows and restore the data. It's a long shot, but I have seen it happen; and the attempt isn't too expensive, good enclosures going for about US$ 19.99 new.
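
The zeroing step, sketched with dd; /dev/sdX is a placeholder for the external disk's device node (DANGER: triple-check the node, and run this only after the backup has been verified; as noted further below, a couple of megabytes of zeroes over the partition table are usually enough):

# dd if=/dev/zero of=/dev/sdX bs=1M   # sdX is a placeholder; zeroing a full 1 TB disk takes hours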

In case the original enclosure was bad, you won't be able to reformat the disk from it, or the disk won't be accessible at all. You can retry with the new enclosure and, if it works, either replace the old enclosure or keep using the new one (though that is only worthwhile if the new enclosure is quite a bit better than a US$ 19.99 El Cheapo).

Professional recovery

Professional recovery services are the kind you can find with Google. A not-too-honest way of going about it would be to send over the physical disk and, in case you got a "Yes, there is no hardware damage and we can recover all of your data for just US$ $$$,$$$!" answer, you would then know that the data was still salvageable. So you could attempt the recovery yourself, for free, on the image backup you took, and only pay for the diagnosis and disk S&H. If you failed, the option of coughing up the requested dough would still be there. If there is hardware damage, a professional service is basically your only option. There are several voodoo tricks that will (temporarily) revive a "dead" disk, often long enough to recover at least the most important data, but none that is guaranteed to work every time (heating the disk, cooling it, "twirling" it; I have even seen it suggested to smartly rap it against a hard surface). All of them will do more damage, so you have to be sure to use the one trick that will work on the first try, or you'll have killed the disk for good. I just added this to explain why you'll see success stories about revived disks: there are such stories. But if you want to be (mostly) sure it will happen for you, well, hire a professional.

If you're sure the hardware is OK – disk spins, no rattles, no strange sounds or buzzes, no clickety-clackety recalibrations – then "all" that happened is that chkdsk messed up some data.

DIY recovery

"Home" recovery usually would go like this (basically the same thing the pro guys would do once hardware damage had been discounted), working on the disk image copy:

  • check whether the first sector of the disk image is a valid partition table. If not, scan the disk image looking for either a valid partition table or a recognizable NTFS or FAT32 boot sector, depending on what FS was on the unit (for a 1 TB unit, NTFS seems the only logical possibility). Either way you ought to find something within the first few megabytes (the 'Manual recovery from Linux' section below shows the actual commands).

  • if the partition table is found, verify that the data partition is where the partition table says it should be. If it is not, this is very good news: probably the partition table is all that's wrong. Fixing it is easy (several Linux partition editors will do it), and you can expect a 100% recovery. Just to be on the safe side, try mounting the data partition in Linux with a loop device in read-only mode (as sketched earlier) to see whether it's readable. If it is, partition-table borking is confirmed, and the disk may be pronounced on its way to sure and full recovery. If it is not, possibly the partition table is right and (part of) the data partition has been overwritten. This is bad; see below under 'things go sour'.

  • if the partition table is found and valid, check it against the drive geometry; if they don't match, that's actually also a good thing, since you might have found the root cause of the problem. You can force the physical geometry on the kernel (at Linux boot, or when opening the disk; see the sketch after this list). See if the new geometry leads to the disk being recognized in Linux. If it does, back up the data, verify the backup is correct, and zero the disk with dd (a couple of megabytes of zeroes to the appropriate sd device are enough). Power down the computer (don't just reboot; OK, it's paranoid, but it costs little and may accomplish something), then boot up Windows and have it format the now clueless disk into what it thinks is the best format. This ensures no conflicts with Windows. Restore the data onto the disk. Enjoy.

  • if the geometry trick does not work, or the partition can't be found, or once found it appears to be empty, things go sour. You need a recovery tool capable of scanning the disk image in search of the data areas (MFT, etc.) of the lost data and, once they're found, of interpreting them in order to get at the data. This is a difficult job and can't always be fully automated. At a lower and more desperate level, it involves scanning for the signatures of the individual files, hoping that they lie in contiguous blocks on the disk. This kind of operation I'd gladly leave to professionals, though. I did it several times, always successfully, with old FAT disks. I did it again, about 50% successfully, with newer, larger and more fragmented FAT32 disks. I attempted it a couple of times, with poor results (but I had full backups and wasn't really giving it my all), on the much more complicated NTFS and ext4 file systems.
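
A minimal sketch of the geometry override mentioned above, with assumed values (255 heads and 63 sectors per track is the classic translation; substitute whatever your drive really uses):

# fdisk -H 255 -S 63 /dev/sdd   # 255/63 are assumed values, not read from the drive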

Manual recovery from Linux

OK, so you try to mount the partition in Linux and get errors (notice that /dev/sdc and /dev/sdcN are different things – the image refers to /dev/sdc).

# mount -t ntfs /dev/sdc1 /mnt/recovery
ntfs_mst_post_read_fixup_warn: magic: 0x00000000  size: 1024   usa_ofs: 0  usa_count: 65535: Invalid argument
Record 1 has no FILE magic (0x0)
Failed to open inode $MFTMirr: Input/output error

...this seems to indicate that the partition, as the system believes it to be, is either wrong or badly damaged. Let's check the first possibility first:

# fdisk -l /dev/sdc

You get something like this:

Disk /dev/sdc: 1000.2 GB, 1000204885504 bytes
1 heads, 63 sectors/track, 31008335 cylinders, total 1953525167 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9d2b7596

Device       Boot      Start    End          Blocks       Id  System
/dev/sdc1              63       1953520127   976760032+   7   HPFS/NTFS/exFAT

The next step is to check the actual partition start. By seeking into the image file (or /dev/sdc) we will search for the NTFS boot-sector signature, which looks like this:

  00000000:EB 52 90 4E 54 46 53 20 -20 20 20 00 02 08 00 00 .R.NTFS ........ 
  00000010:00 00 00 00 00 F8 00 00 -3F 00 FF 00 3F 00 00 00 ........?...?... 
  00000020:00 00 00 00 80 00 80 00 -4A F5 7F 00 00 00 00 00 ........J....... 

# dd if=/dev/sdc bs=512 count=1 skip=63 2>/dev/null | hexdump -C | head -n 1

...from the fdisk output above we expect the NTFS boot sector to be at sector 63; that's why we set skip=63. Also, we'll try every sector in the first (say) gigabyte...

# dd if=/dev/sdc bs=512 count=2000000 2>/dev/null | hexdump -C | grep "eb 52 90 4e 54 46 53"

...just to be sure there is only one boot sector (I had this happen to me once, on a FAT32 disk, but still) and that there are no read errors anywhere.

Your result

00007e00  eb 52 90 4e 54 46 53 20  20 20 20 00 02 08 00 00  |.R.NTFS    .....|

is exactly what we would expect: sector 63 gives an offset of 63×512 = 32256 = 7e00 hexadecimal. The NTFS boot sector is there and the partition table appears to be correct.
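
(You can let the shell double-check that arithmetic:)

# printf '%x\n' $((63 * 512))
7e00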

So we can now copy a large chunk of /dev/sdc1 onto, say, /tmp/mydisk.img and attempt to fix it from Linux. This won't damage the physical disk, which will still be available, unchanged, for other attempts. And since we now know the partition table to be correct, we can use /dev/sdc1 for the copy and entertain hopes that we couldn't entertain before:

# dd if=/dev/sdc1 of=/tmp/mydisk.img bs=1G count=10
...after copying 10 gigabytes...

# ntfsfix /tmp/mydisk.img

If ntfsfix does not work, well, we're in trouble. There are more accurate utilities that can be tried, though. And if you need to recover JPEG picture files and the file system wasn't fragmented, this can be done automatically by seeking out the JPEG headers. Almost the same goes for PDF, TIFF and Office documents, except that I don't know how to recognize them (for JPEGs, I would :-) ). As a final option I've found these guys – I don't know them, am not related to them and won't accept any blame. However, as these things go, the price is very reasonable.
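
A minimal sketch of that JPEG hunt, run against the image copied above (JPEG files begin with the SOI marker FF D8 FF; LC_ALL=C makes grep treat the image as raw bytes):

# LC_ALL=C grep -obaP '\xff\xd8\xff' /tmp/mydisk.img | head   # prints byte offsets of candidate JPEG starts

From each offset you would then carve forward to the EOI marker (FF D9), which only works cleanly when the file lies in contiguous blocks, as said above.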