Using dd as a Swiss Army knife

Here are some useful examples of how a programmer (and not only a programmer) can use the dd command as a Swiss Army knife. Many of us are used to using a command similar to this one:

  dd if=/dev/sda of=sda.img bs=1k

This is a faster version of the cat command thanks to the bs= option. But dd can do much more, even for a programmer. Here are some examples of dd usage that I find useful as debugging aids:

Cut part of a file

You can cut out part of a file using:

  dd if=infile of=outfile skip=10 bs=1 count=15

This command cuts 15 bytes from infile starting at byte offset 10. It is slow on big files because of the 1-byte buffer; if you can express the offset and size in kilobytes, use bs=1k instead.
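A toy run of the command above, with made-up file names, to make the offset arithmetic concrete:

```shell
# Create a 36-byte sample file, then cut 15 bytes starting at offset 10.
printf 'abcdefghijklmnopqrstuvwxyz0123456789' > infile
dd if=infile of=outfile skip=10 bs=1 count=15 2>/dev/null
cat outfile        # -> klmnopqrstuvwxy
```

The first 10 bytes (a through j) are skipped, and the next 15 (k through y) land in outfile.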

Quickly creating large files

Sometimes I quickly need a large file to test something (like checking whether the code I wrote handles large files properly). You can do that like this:

  dd of=bigfile count=0 bs=1M seek=8000

This will instantly create a file named bigfile with a size of 8000 MB. The trick is that it is a sparse file (a file with holes), so it doesn't actually occupy disk space. Check what the stat command reports for this file: it should say that 0 blocks are allocated for it. When reading this file you will see all zeros; you may also write to it. The space actually used grows only as data gets written to the file. Not every file system supports sparse files, so it's possible that after issuing this command the system actually writes 8000 MB of zeros to the file, but the popular Linux file systems behave as expected.
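You can verify the sparseness by comparing the file's apparent size with its disk usage (a sketch; exact output varies by system and file system):

```shell
# Create the sparse file, then compare apparent size vs. allocated space.
dd of=bigfile count=0 bs=1M seek=8000 2>/dev/null </dev/null
ls -l bigfile      # apparent size: 8388608000 bytes (8000 MB)
du -k bigfile      # disk usage: 0 KB (or a few KB of metadata)
stat -c '%s bytes, %b blocks allocated' bigfile
```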

One interesting property of sparse files is that you may actually create files bigger than your drive's capacity; you are limited only by the design limits of the file system. A very interesting trick is to create a sparse file bigger than your drive, create a file system on it, and mount it using a loop device, or even export the file as an iSCSI volume (for example using iSCSI Enterprise Target) to get a SCSI disk much larger than you could buy. Very useful for certain kinds of system or programming testing and experiments.
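A minimal sketch of the loop-device variant (the image size, file system, and mount point here are made-up examples, and the mkfs/mount steps need root, so they are shown as comments):

```shell
# Create a 100 MB sparse image file:
dd of=disk.img count=0 bs=1M seek=100 2>/dev/null </dev/null
# Put a file system on it and mount it via a loop device (root required):
#   mkfs.ext4 -q disk.img
#   mkdir -p /mnt/virtual
#   mount -o loop disk.img /mnt/virtual
#   df -h /mnt/virtual
```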

Quickly benchmarking hardware's read/write performance

I sometimes use dd to quickly see how fast a disk/file system is at writing or reading continuous data.

For writing:

  time dd if=/dev/zero of=test bs=1M count=64 conv=fdatasync

Notice the conv=fdatasync option, which forces the data to be flushed to the medium, so combined with the time command you get the actual time of writing 64 MB to the disk. Another option is to use oflag=direct and/or iflag=direct to bypass OS buffering entirely during the operation, not just flush the data at the end.
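For the read direction, a sketch along the same lines (the cache-dropping step needs root; without it you mostly measure the page cache, not the disk):

```shell
# Write 64 MB, flushed to the medium before dd exits:
dd if=/dev/zero of=test bs=1M count=64 conv=fdatasync
# Drop the page cache so the reads hit the disk (root only):
#   echo 3 > /proc/sys/vm/drop_caches
# Read the file back; dd's final status line reports the throughput:
dd if=test of=/dev/null bs=1M
```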

Watching statistics during dd run

If you send the USR1 signal to a running dd process, it prints its current status. Try it on a larger copy. A useful scenario:

  dd if=/dev/zero of=test bs=1M count=2000 & echo $!
  # the PID is displayed, use it here:
  kill -USR1 PID
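The same idea can be scripted so the shell asks for progress once a second until the copy finishes (a sketch; the file name and sizes are examples):

```shell
# Run a long copy in the background and poll its status via USR1.
dd if=/dev/zero of=test bs=1M count=500 &
pid=$!
sleep 0.2          # give dd a moment to install its USR1 handler
# kill fails once the process is gone, which ends the loop:
while kill -USR1 "$pid" 2>/dev/null; do
    sleep 1
done
```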

The bs= argument

When using dd, bs is one of the most important command arguments. It sets the block (buffer) size used for the operation. You must remember that:
  • It affects performance: a larger buffer is faster, so don't use bs=1 unless necessary. Values above a few megabytes rarely make sense.
  • The buffer is allocated in memory, so its size is limited by the amount of RAM and available address space.
  • Arguments to count=, skip=, and seek= are in units of bs.
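A quick illustration of the unit rule, with throwaway file names: both commands below write exactly 4096 bytes, because count= is measured in blocks of bs:

```shell
# 4 blocks of 1 KB vs. 1 block of 4 KB -- same total size:
dd if=/dev/zero of=four_a bs=1k count=4 2>/dev/null
dd if=/dev/zero of=four_b bs=4k count=1 2>/dev/null
ls -l four_a four_b     # both 4096 bytes
```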


