Quick: Use badblocks to check larger hard drives (8TB+)

Checking newly purchased hard drives can be a smart move, especially before adding them to a pool. While this does not guarantee there won't be any issues, it can catch some before they cause you a real headache.
I recently purchased two additional drives in preparation for doubling my current storage capacity, and for having the hardware necessary to migrate to another machine without much downtime or other fun stuff. Unfortunately they arrived somewhat poorly packaged, so I had an extra reason to put them through their paces first. Fortunately they seem fine, largely thanks to Western Digital's own packaging, I suppose.
Anyway, I digress.
If you've tried running `badblocks` before on a larger drive and ran into an error message along the lines of "Value too large for defined data type", this quick guide is for you.
Technically speaking, `badblocks` was apparently never made for larger drives, though there does not seem to be a clear alternative, which is a bit unfortunate. Fortunately the error above is easy to fix: simply specify the block size manually.
Be sure to have `badblocks` installed on your system before proceeding. For Ubuntu and Arch/Manjaro you'll want to install `e2fsprogs`, which includes `badblocks`. For other flavors be sure to check your flavor's package repositories for the equivalent package name, as it might vary a bit.
Expect `badblocks` to take a full day to run through larger hard drives like 8TB and up. If you need to test multiple drives you can run these checks in parallel; as far as I can tell this does not slow them down.
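If you do run several drives at once, launching each check in the background with its own log file keeps things manageable. A minimal sketch, assuming two placeholder devices (sdX/sdY) and a 4096-byte block size — substitute your own values, and note the launch line is commented out because it is destructive:

```shell
#!/bin/sh
# Sketch only: sdX/sdY and the 4096 block size are placeholders.
DRIVES="sdX sdY"
BLOCKSIZE=4096

for d in $DRIVES; do
  CMD="badblocks -t random -w -s -b $BLOCKSIZE /dev/$d"
  echo "would run: sudo $CMD > badblocks-$d.log 2>&1 &"
  # Uncomment to actually launch (destructive!):
  # sudo $CMD > "badblocks-$d.log" 2>&1 &
done
wait  # once launched with '&', wait blocks until every check finishes
```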
For the following commands you'll need to know which drive is the one you want to test. You can use your Linux flavor's favorite disk manager GUI to find this, or use `sudo fdisk -l` for a command line option.
In my examples I'll use `/dev/sdX`, which you should substitute with the correct device identifier you have found.
First, let's take a look at what your drive's recommended blocksize is:
❯ sudo -n blockdev --getbsz /dev/sdX
The value that this command returns is the value we'll use as the block size. In my case, for an 8TB drive, I got `4096`, but be sure to double-check with your own drives to make sure you use the correct value, otherwise the results might not be accurate.
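Since we'll need this number again in a moment, it can be handy to capture it in a shell variable instead of copying it by hand. A sketch — the device path is a placeholder, and the `|| echo 4096` fallback is only there so the sketch still runs when the placeholder device doesn't exist:

```shell
# Query the drive's block size and keep it for the next command.
# /dev/sdX is a placeholder; on a real drive the query result wins
# and the fallback is never used.
DEVICE=/dev/sdX
BLOCKSIZE=$(sudo -n blockdev --getbsz "$DEVICE" 2>/dev/null || echo 4096)
echo "using block size: $BLOCKSIZE"
```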
Next, let's use the value we have just found to run the `badblocks` command.
Warning: Running `badblocks` with the `-w` option is a destructive action. Do not do this on a drive that already holds any data you don't want to lose!
❯ sudo badblocks -t random -w -s -b {blocksize} /dev/sdX
Here's what the options stand for:

- `-t random` sets the test pattern to random, which seems like a good option to use when testing a drive.
- `-w` uses the write-mode test, which is a destructive action. It's good to be able to fully test a drive for both reads and writes, but make sure your drive does not contain any sensitive data that you have not (yet) backed up anywhere else.
- `-s` shows an estimate of the scan's progress. This isn't 100% accurate, but it's better to see something rather than have no clue where it's at.
- `-b {blocksize}` specifies the block size. Be sure to replace `{blocksize}` with the number you found with the previous command.
- `/dev/sdX` is the drive you want to test. Replace it with the actual drive. Be extra careful, as you don't want to accidentally destroy data on the wrong disk.
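Given how destructive `-w` is, a tiny pre-flight check can save you from targeting the wrong disk. A sketch — the device path is a placeholder, and checking `/proc/mounts` is my own addition here, not part of `badblocks` itself:

```shell
#!/bin/sh
# Safety sketch before the destructive run: refuse if the device is
# missing or currently mounted. /dev/sdX is a placeholder.
DEVICE=/dev/sdX

if [ ! -b "$DEVICE" ]; then
  STATUS="missing"
elif grep -q "^$DEVICE " /proc/mounts 2>/dev/null; then
  STATUS="mounted"
else
  STATUS="ok"
fi
echo "$DEVICE: $STATUS"   # only proceed with badblocks if this says "ok"
```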

That's it, `badblocks` should now casually walk its way through your entire drive and check it as thoroughly as it can. While you could still technically have bad blocks or other types of drive issues even with this test reporting none, it's still a fair bit less risky than relying on completely untested drives.
In my case it took about 25 hours for both drives to be fully tested. Fortunately both reported zero issues.
I hope this helps.