Storage Guide


  • Modern large disks (>2TB) require GPT partition tables - so we need tools that support them: goodbye fdisk, welcome parted/gdisk
  • parted is already included in OpenNode OS, while gdisk is not (it is available from the EPEL repo)
  • the gdisk package also ships sgdisk, a scriptable version of gdisk
# Creating partition table interactively with parted (softraid example)
# run "parted" and enter the following commands at the (parted) prompt
select /dev/sda
mklabel gpt
mkpart primary 0 1G
mkpart primary 1G -1
set 1 boot on
set 1 raid on
set 2 raid on
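The same layout can also be created non-interactively, which is handy for provisioning scripts. A minimal sketch, assuming /dev/sda is the target device; the commands are collected and only printed here (dry run) so they can be reviewed before removing the echo and running them for real:

```shell
# Non-interactive equivalent of the interactive parted session above.
# DISK is an assumption - adjust to the real target device.
DISK=/dev/sda
CMDS=""
for c in \
    "mklabel gpt" \
    "mkpart primary 0 1G" \
    "mkpart primary 1G -1" \
    "set 1 boot on" \
    "set 1 raid on" \
    "set 2 raid on"
do
    # "--" keeps parted from treating the negative end value (-1) as an option
    CMDS="$CMDS
parted -s -- $DISK $c"
done
# dry run: review first, then execute each line by hand (or pipe to sh)
echo "$CMDS"
```

The `--` separator matters in scripted mode: without it parted would parse `-1` (meaning "end of disk") as a command-line option.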

# Cloning sda partition table to sdb with sgdisk
sgdisk -R=/dev/sdb /dev/sda
# randomize the disk and partition GUIDs on the clone, so they don't collide with sda
sgdisk -G /dev/sdb

Extending /storage filesystem

# First - find out how much free space you have available to donate
# check the "VFree" column - it shows the free space left in the LVM Volume Group
vgs
# this example adds 50GB to the /storage logical volume
lvextend -L+50G /dev/VolGroupL0/storage
# grow the filesystem to match the new logical volume size
resize2fs /dev/VolGroupL0/storage
# verify new filesystem size
df -h
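For scripting, the VFree figure can be pulled out of the `vgs` output directly. A sketch of the parsing, run here against sample output captured in a variable (the VG names and sizes are illustrative, not from a real system):

```shell
# "vgs --noheadings -o vg_name,vg_free" prints one VG per line.
# Sample output stored in a variable so the parsing can be demonstrated
# without a live LVM setup (values below are illustrative).
sample='  VolGroupL0  120.00g
  VolGroupL1    0.00g'
# print only volume groups that still have free space to donate
echo "$sample" | awk '$2 !~ /^0(\.00)?[a-z]?$/ {print $1, "has", $2, "free"}'
```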

RAID management

BIOS Fakeraid

In the last few years, a number of hardware products have come onto the market claiming to be IDE or SATA RAID controllers. These have shown up in a number of desktop/workstation motherboards and lower-end servers such as the HP DL360 G5, if ordered without the optional RAID card. Virtually none of these are true hardware RAID controllers; instead, they are simply multi-channel disk controllers combined with special BIOS configuration options and software drivers that assist the OS in performing RAID operations. This gives the appearance of hardware RAID, because the RAID configuration is done using a BIOS setup screen and the operating system can be booted from the RAID.

# raid devices/member device status
[root@se6 ~]# dmraid -r
/dev/sda: isw, "isw_ccgfgjbcib", GROUP, ok, 976773165 sectors, data@ 0
/dev/sdb: isw, "isw_ccgfgjbcib", GROUP, ok, 488397165 sectors, data@ 0

# raid set status (this example shows a broken array - status "nosync")
[root@se6 ~]# dmraid -s -v

*** Group superset isw_ccgfgjbcib
--> Active Subset
name   : isw_ccgfgjbcib_Volume0
size   : 488390656
stride : 128
type   : mirror
status : nosync
subsets: 0
devs   : 2
spares : 0

# rebuild array (adding replacement drive)
dmraid -R isw_ccgfgjbcib_Volume0 /dev/sda

Replacing faulty drive of softraid (RAID1) array

# mark ALL array member partitions of the failed disk as faulty
mdadm /dev/md0 -f /dev/sdb1
mdadm /dev/md1 -f /dev/sdb2
# remove each partition from the RAID arrays
mdadm /dev/md0 -r /dev/sdb1
mdadm /dev/md1 -r /dev/sdb2
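On disks with more partitions the fail/remove steps above are easy to script. A sketch, using the md0/md1 layout from this guide's example; the commands are only echoed (dry run), so remove the echo to actually execute them:

```shell
# fail and remove every member partition of a failed disk from its array
# (dry run - the mdadm commands are only printed; drop "echo" to run them)
FAILED_DISK=sdb
# each pair is "<md device> <partition number>" - this guide's example layout
for pair in "md0 1" "md1 2"; do
    set -- $pair          # $1 = md device, $2 = partition number
    echo mdadm /dev/$1 -f /dev/${FAILED_DISK}$2
    echo mdadm /dev/$1 -r /dev/${FAILED_DISK}$2
done
```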
# Physically remove the faulty disk
# Physically install the new disk and check that it appeared
dmesg | grep sd
# find out about array member statuses
cat /proc/mdstat
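In /proc/mdstat the `[UU]` field shows member state: an underscore means a failed or missing member. A sketch that flags degraded arrays automatically, run here against sample content stored in a variable (the sample is illustrative; on a real host read /proc/mdstat itself):

```shell
# sample /proc/mdstat content (illustrative, not from a live system)
mdstat='md1 : active raid1 sda2[0]
      488282112 blocks [2/1] [U_]
md0 : active raid1 sda1[0] sdb1[1]
      1048512 blocks [2/2] [UU]'
# an underscore inside the [..] member-state field marks a failed/missing member
echo "$mdstat" | awk '/^md/ {dev=$1} /\[[U_]*_[U_]*\]/ {print dev, "is degraded"}'
```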

# copy disk partition table from existing array member (from sda to sdb)
# NB! older sfdisk versions only handle MBR tables - for GPT disks use
# "sgdisk -R=/dev/sdb /dev/sda" followed by "sgdisk -G /dev/sdb" instead
sfdisk -d /dev/sda | sfdisk /dev/sdb

# refresh kernel partition table
partprobe /dev/sdb
# Add the new partitions back to their arrays
mdadm /dev/md0 -a /dev/sdb1
mdadm /dev/md1 -a /dev/sdb2
# Watch the array sync progress
cat /proc/mdstat
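Instead of re-running cat by hand, `watch cat /proc/mdstat` refreshes the view automatically. The recovery percentage can also be extracted for scripts; a sketch run against sample rebuild output (the numbers are illustrative):

```shell
# during a rebuild /proc/mdstat contains a "recovery = X%" line;
# sample content below is illustrative, not from a live system
mdstat='md1 : active raid1 sdb2[2] sda2[0]
      [=>...................]  recovery =  7.3% (35712/488282112) finish=42.1min speed=17856K/sec'
# field after "recovery =" is the completion percentage
echo "$mdstat" | awk '/recovery/ { for (i=1; i<=NF; i++) if ($i == "recovery") print "rebuild at", $(i+2) }'
```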

Fixing already removed disk status in raid array

  • If the disk was physically removed or died completely before it could be marked faulty and removed from the array, the following has to be done:
# remove the failed disk from the array (if it is already physically gone)
mdadm /dev/md1 -r detached
# re-add the new disk back to the array
mdadm --manage /dev/mdX --add /dev/sdX