We are living in a time when the quality of hard drives has drastically decreased. Vendors are pushing capacity ever higher, which leads to ECC errors, and the susceptibility to bad blocks has increased as well. It is no wonder that an old IDE Spinpoint F1 by Samsung is still doing its work in one of my PCs, while I already had to replace 3 drives in my NAS within 2 years because of dead sectors.
Such dead sectors are blocks on the surface of a hard drive that are no longer writable due to physical damage. Running into bad blocks can be a death sentence for a RAID array.
But how can we be sure that a hard drive is healthy?
If everything looks just fine, as above, there would be nothing to worry about, if it were not for the risk of bad blocks: they only become conspicuous when read or write operations actually hit them. To make sure the array is intact, we perform a manual check:
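With md, the check is a one-liner: writing "check" into the array's sync_action file makes the kernel read every block and verify the parity. (md42 is the array name used in this post; on most systems it will be md0 or similar, and the command needs root.)

```shell
echo check > /sys/block/md42/md/sync_action
```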
Now we have to verify that the "check" we have just written to /sys/block/md42/md/sync_action actually produces an effect:
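While the check runs, its progress shows up in /proc/mdstat; afterwards, mismatch_cnt reports how many inconsistent blocks were found (ideally 0):

```shell
cat /proc/mdstat
cat /sys/block/md42/md/mismatch_cnt
```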
Recently I had to grow the capacity of my NAS. The NAS runs Debian and the RAID is powered by md. It had 3 hard drives attached, working as a RAID5 array. I could write about how to grow a RAID, but you can read that on plenty of other websites. Instead I would rather share a benchmark I created using Bonnie++, showing the effect of the number of hard drives on read and write performance.
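For reference, a Bonnie++ run like the one behind this benchmark could look as follows. The mount point is an assumption for illustration; Bonnie++ refuses to run as root unless you tell it which user to run as with -u:

```shell
bonnie++ -d /mnt/raid -u nobody
```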
At work I need an application that can store and restore images of a computer, and I use Clonezilla to do so. Clonezilla is a Debian (sid) based live distribution that handles all of these jobs, and its advantage is that it is very customizable. Because I store all the images in the same place and use the same network setup every time, it became counterproductive to set it up again and again for every single image. Thank god Clonezilla can run a prerun script which does this for me. If you want to do the same, get the latest version of Clonezilla as a .zip file and extract it. The changes need to be made in the /syslinux/syslinux.cfg file, which defines the menu that is shown when Clonezilla boots.
Here we have two menu entries; I replaced the ones I did not need. Let me explain the meaning of the syntax:
label – Names the entry within the config and can be set to any value
MENU DEFAULT – Defines which entry is booted when the countdown runs out (define it only once in the config; comment the rest out with #)
MENU HIDE – Hides this entry from the menu.
MENU LABEL – The label that is shown in the menu.
MENU PASSWD – Lets you ask for a password when choosing the entry, but I do not need that.
"kernel /live/vmlinuz" and "append initrd=/live/initrd.img boot=live config noswap nolocales edd=on nomodeset" – Starts Clonezilla as it is.
ocs_prerun="mount -t cifs -o user=administrator,domain=domain.net 172.28.64.141:/Images /home/partimag" ocs_live_run="/opt/drbl/sbin/ocs-sr -u restoredisk ask_user sda" ocs_live_extra_param="" ocs_live_keymap="/usr/share/keymaps/i386/qwertz/de-latin1.kmap.gz" ocs_live_batch="no" ocs_lang="en_US.UTF-8" vga=788 toram=filesystem.squashfs nosplash – Here it becomes very tricky, but do not worry, I will explain it for you:
ocs_prerun= – Commands in this value run before Clonezilla starts.
mount -t cifs -o – Mounts a Samba share with these parameters:
user=administrator – Log in as "administrator"
domain=domain.net – Name of the domain (if you do not have one, leave it out; home networks do not use domains)
172.28.64.141:/Images – Where the images are stored or should be placed
/home/partimag – Clonezilla mounts the share here, which is why it does not ask for any other place to look for the images.
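Put together, the prerun mount is just this one command. Note that mount.cifs normally expects the //server/share form, so depending on your version you may have to write the share as //172.28.64.141/Images instead of the NFS-style path shown above:

```shell
mount -t cifs -o user=administrator,domain=domain.net //172.28.64.141/Images /home/partimag
```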
ocs_live_run="/opt/drbl/sbin/ocs-sr -u restoredisk ask_user sda" – ocs_live_run is defined twice in my config. This one runs Clonezilla's restore function.
ocs_live_run="/opt/drbl/sbin/ocs-sr -u -q2 -z1p -i 2048 -p poweroff savedisk ask_user sda" – This is the second entry, which runs the store function:
-u – Asks the user for the image name (it could also be set in the config).
restoredisk or savedisk – Which mode to run: store or restore, a single partition or the whole disk.
ask_user – This would be the name of the image, but "-u" requests it from the user instead.
sda – Which hard drive should be written or read.
-q2 – Use "partclone". I prefer this.
-z1p – Use gzip compression (with multiple cores).
-i 2048 – Split size in megabytes (start a new backup file every 2 GB).
-p poweroff – Power off after the script has run successfully.
toram=filesystem.squashfs – Copies all files to a ramdisk, so you can remove the stick once Clonezilla has booted.
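Putting all of the pieces together, a restore entry in /syslinux/syslinux.cfg could look like this. The IP address, domain and paths are the ones from this post; adjust them to your environment:

```
label restore
  MENU DEFAULT
  MENU LABEL Restore image to sda
  kernel /live/vmlinuz
  append initrd=/live/initrd.img boot=live config noswap nolocales edd=on nomodeset ocs_prerun="mount -t cifs -o user=administrator,domain=domain.net 172.28.64.141:/Images /home/partimag" ocs_live_run="/opt/drbl/sbin/ocs-sr -u restoredisk ask_user sda" ocs_live_extra_param="" ocs_live_keymap="/usr/share/keymaps/i386/qwertz/de-latin1.kmap.gz" ocs_live_batch="no" ocs_lang="en_US.UTF-8" vga=788 toram=filesystem.squashfs nosplash
```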
After modifying the config, we can write it to a flash drive (e.g. using UNetbootin or a similar tool) and test it.
If you have any problems with this how-to, feel free to ask me for help or more information.