How can I test the full capacity of an SD card in Linux?

If anyone sees this later: someone wrote an open-source tool called "F3" to test the capacity of SD cards and other such media. It can be found on the project homepage and on GitHub.
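
For reference, basic usage looks something like this (the mountpoint /media/sdcard is just a placeholder for wherever your card is mounted):

    f3write /media/sdcard/   # fills the card's free space with 1 GB test files
    f3read /media/sdcard/    # reads them back and reports how much data survived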


The cheating has now been confirmed by the following steps:

  • Generate a random data file.  (4194304 = 4 × 1024 × 1024 = 4 MiB, total size = 40 × 4 MiB = 160 MiB)

    Command:

    dd if=/dev/urandom of=test.orig bs=4194304 count=40
    40+0 records in
    40+0 records out
    167772160 bytes (168 MB) copied, 11.0518 s, 15.2 MB/s
    
  • Copy the data to the SD card, seeking to the suspected capacity boundary.  (2038400 × 4096 = 8153600 KiB = 7962.5 MiB)

    Command:

    sudo dd if=test.orig of=/dev/sde seek=2038399 bs=4096
    40960+0 records in
    40960+0 records out
    167772160 bytes (168 MB) copied, 41.6087 s, 4.0 MB/s
    
  • Read the data back from the SD card.

    Command:

    sudo dd if=/dev/sde of=test.result skip=2038399 bs=4096 count=40960
    40960+0 records in
    40960+0 records out
    167772160 bytes (168 MB) copied, 14.5498 s, 11.5 MB/s
    
  • Show the result

    Command:

    hexdump test.result | less
    ...
    0000ff0 b006 fe69 0823 a635 084a f30a c2db 3f19
    0001000 0000 0000 0000 0000 0000 0000 0000 0000
    *
    1a81000 a8a5 9f9d 6722 7f45 fbde 514c fecd 5145
    ...
    

What happened? We observed a gap of zeros. This indicates that the random data were never actually written to the card. But why does the data come back after offset 1a81000? Obviously the card has some kind of internal cache.
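
Instead of eyeballing the hexdump, you can also let cmp locate the first mismatch; a quick check using the two files from the steps above:

    cmp test.orig test.result   # prints the offset of the first differing byte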

We can also try to investigate the behaviour of the cache.

hexdump test.orig | grep ' 0000 0000 '

returns no matches, which means the generated rubbish contains no such pattern. However,

hexdump test.result | grep ' 0000 0000 '
0001000 0000 0000 0000 0000 0000 0000 0000 0000
213b000 0000 0000 0000 0000 0000 0000 0000 0000
407b000 0000 0000 0000 0000 0000 0000 0000 0000
601b000 0000 0000 0000 0000 0000 0000 0000 0000

yields 4 matches.

So this is why the card passes the badblocks check.  Further tests showed that the actual capacity is 7962.5 MiB, slightly less than 8 GB.
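
To double-check that boundary, one can probe a single block on either side of it. A sketch reusing the numbers above (same /dev/sde device; the probe.* files are just scratch names; oflag=direct/iflag=direct tell dd to bypass the kernel page cache, though as seen above the card's own cache can still return stale data, so treat a lone "OK" with suspicion):

    # Destructive: overwrites two blocks on the card.
    for blk in 2038399 2038400; do    # last block inside 7962.5 MiB, and the first past it
        head -c 4096 /dev/urandom > probe.orig   # fresh random block each time
        sudo dd if=probe.orig of=/dev/sde seek=$blk bs=4096 oflag=direct 2>/dev/null
        sudo dd if=/dev/sde of=probe.back skip=$blk bs=4096 count=1 iflag=direct 2>/dev/null
        cmp -s probe.orig probe.back && echo "block $blk: OK" || echo "block $blk: corrupted"
    done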

I conclude that this is very unlikely to be a random hardware failure; it is more likely a kind of cheating (i.e., fraud).  I would like to know what action I can take to help other victims.

Update 11/05/2019

  • People asked how I figured out that the correct seek parameter is 2038399. I did many more experiments than I have shown above. Basically, you have to guess at first: guess a reasonable amount of data to write, and guess where the corruption might be. But you can always use the bisection method to narrow it down (see the sketch after this list).

  • In a comment below it was assumed that the second step above (copying the data to the SD card) only copies one sector. I did not make that mistake in my experiment. Rather, the seek value shows that in the "show the result" step, offset 1000 simply happens to fall in the second sector of the written data. With a seek of 2038399 sectors, the corruption is at the 2038400th sector.
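
A minimal sketch of that bisection, assuming the same /dev/sde device and 4 KiB block size as above (destructive; it assumes failures start at a single boundary, and since the card's cache can fool single-block probes, real runs may need larger writes per probe):

    lo=0
    hi=$(( $(sudo blockdev --getsize64 /dev/sde) / 4096 ))   # advertised size in 4 KiB blocks
    while [ $(( hi - lo )) -gt 1 ]; do
        mid=$(( (lo + hi) / 2 ))
        head -c 4096 /dev/urandom > probe.orig
        sudo dd if=probe.orig of=/dev/sde seek=$mid bs=4096 oflag=direct 2>/dev/null
        sudo dd if=/dev/sde of=probe.back skip=$mid bs=4096 count=1 iflag=direct 2>/dev/null
        # keep the half where reads still match writes
        if cmp -s probe.orig probe.back; then lo=$mid; else hi=$mid; fi
    done
    echo "last good 4 KiB block: $lo (real capacity ≈ $(( (lo + 1) * 4 )) KiB)"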


First of all, read the F3 answer by @Radtoo. It is the correct way.

I had somehow missed it and tried my own way:

  1. Create a 1 GiB test file: dd if=/dev/urandom bs=1024k count=1024 of=testfile1gb

  2. Write copies of that file to the SD card (64 is the card's size in GiB): for i in $(seq 1 64); do dd if=testfile1gb bs=1024k of=/media/sdb1/test.$i; done

  3. Check the MD5 sums of the files (all but the last, which is incomplete, should match): md5sum testfile1gb /media/sdb1/test.*
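
One caveat: if you read the files back immediately, Linux may serve them from the page cache rather than from the card, so the checksums can match even when the card silently dropped the data. Remounting first forces real reads (assuming the card is /dev/sdb1 mounted at /media/sdb1, as above):

    sudo umount /media/sdb1 && sudo mount /dev/sdb1 /media/sdb1   # drop cached copies
    md5sum testfile1gb /media/sdb1/test.*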