Over the last few weeks, I have spent quite some time wondering how to arrange the hard disk layout of my production systems in the future. This article outlines my thoughts and asks the lazyweb for comments.
I try to keep my Debian servers as identical as possible, making it possible to talk non-Linux people through a system remotely without having to worry about that particular box's configuration.
I have been especially worrying about:
- (h1) root FS location (/etc/fstab, grub/menu.lst)
- (h2) LVM volume naming (/etc/fstab)
On my current systems, I usually have the root file system on /dev/hda1, with /dev/hda2 being a PV which is the only PV of a VG vg0, which in turn has LVs named home, var and usr.
This setup has a bunch of disadvantages.
- It is necessary to deviate from the standard setup when RAID or crypto is used. In these cases, the root fs needs to be in LVM as well, and hda1 becomes /boot.
- The LVM setup breaks in recovery and/or migration scenarios, when the disks from one server are connected to another one: two VGs named vg0 are then present. These situations are solvable by renaming one VG via its UUID (vgrename), though.
- Migration to libata is painful since the dreaded "hda" string lurks in half a dozen places, and Debian grub does not really cleanly support having different kernels that need different root= clauses on their command line.
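As a sketch of the recovery case above (assuming LVM2's vgrename, which accepts a VG UUID as the source argument; the UUID and new name below are placeholders):

```shell
# Two VGs named "vg0" are visible after attaching a second
# server's disks; list them together with their UUIDs:
vgs -o vg_name,vg_uuid

# Rename one of them by UUID so both become usable again:
vgrename Zvlifi-Ep3t-e0Ng-U42h-o0ye-KHu1-nl7Ns4 vg0_rescue

# Activate the renamed VG's LVs:
vgchange -ay vg0_rescue
```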
Thankfully, there are a number of possible solutions:
- (l1) Mount the root fs with UUID
- - UUID needs to be adapted in /etc/fstab and grub/menu.lst if the file system is rebuilt.
- - mount-by-UUID is a function of the Debian initrd; the mechanism is non-portable, and an initrd is required.
- + no conflicts when a server can access another system's disks, as UUIDs are unique.
- (l2) Mount root fs with label (all root file systems have the same label)
- - Conflicts when disks are moved between servers, making it possible to boot the wrong system.
- (l3) Mount root fs with a host-specific label
- - When the system (and the root fs) is renamed, /etc/fstab and grub/menu.lst need to be adapted.
- + no conflicts when disks of more than one server are plugged in.
- (l4) VG is named identically on all servers
- - Manual intervention is necessary (see above) if another server's disks are placed into one server.
- + Backup and other disk related scripts can be configured identically in all systems.
- (l5) VG is called like the server
- - Attention needed when renaming a system (/etc/fstab needs to be adjusted).
- - Backup and other disk related scripts need to be configured differently on each system, no standard config possible.
- + no conflicts in the "more than one set of disks in a server" case.
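For comparison, the (l1) and (l3) variants would look roughly like this in /etc/fstab and /boot/grub/menu.lst (the UUID, label, and kernel path are placeholders):

```
# (l1) root fs mounted by UUID -- /etc/fstab:
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /  ext3  defaults,errors=remount-ro  0  1
# matching kernel line in /boot/grub/menu.lst:
#   kernel /vmlinuz root=UUID=3e6be9de-8139-11d1-9106-a43f08d823a6 ro

# (l3) root fs mounted by host-specific label, set once with
#   e2label /dev/hda1 root-myhost
LABEL=root-myhost  /  ext3  defaults,errors=remount-ro  0  1
# matching kernel line in /boot/grub/menu.lst:
#   kernel /vmlinuz root=LABEL=root-myhost ro
```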
I am pondering the following solution, but suspect that it is totally overengineered.
- (l1) The root fs is mounted via label, so that /boot/grub/menu.lst only needs to be edited once, during system installation. After the migration to grub2, the root FS label could be generated automatically by a hook script (if grub2's update-grub finally supports hooks). molly-guard refuses to reboot the system when the root FS label from the boot manager configuration is not identical to the label of the currently mounted root fs, thus forcing the system to boot with the current root fs or not to boot at all. Do I need to call blkid -g in that case?
- (l2) /etc/fstab is identical on all systems and has entries like
- /dev/disk/localhost/usr /usr ext3 defaults
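The /dev/disk/localhost/* names do not exist by default; they could be provided by a local udev rule that symlinks the LVs of the host-named VG. A sketch, assuming the VG is named after the host and relying on the DM_VG_NAME/DM_LV_NAME properties set by the device-mapper udev rules (the rule file name is made up, and "myhost" would have to be substituted per machine at installation time):

```
# /etc/udev/rules.d/60-disk-localhost.rules (hypothetical)
# For each LV in this host's VG, create /dev/disk/localhost/<lvname>,
# so that /etc/fstab can be identical on all systems.
ENV{DM_VG_NAME}=="myhost", ENV{DM_LV_NAME}=="?*", \
    SYMLINK+="disk/localhost/$env{DM_LV_NAME}"
```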
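The molly-guard check from (l1) above could be sketched as follows; the label extraction is shown on a sample kernel line, and the comparison against the running system is only indicated in a comment (this is a hypothetical hook, not an existing molly-guard feature):

```shell
# Extract the label from a grub kernel line (sample line for illustration):
kernel_line='kernel /vmlinuz-2.6.26 root=LABEL=root-myhost ro quiet'
boot_label=$(printf '%s\n' "$kernel_line" \
             | grep -o 'root=LABEL=[^ ]*' | cut -d= -f3)
echo "$boot_label"    # prints: root-myhost

# A hook would compare this against the currently mounted root fs, e.g.:
#   root_dev=$(awk '$2 == "/" { print $1 }' /proc/mounts)
#   cur_label=$(blkid -o value -s LABEL "$root_dev")
# and refuse the reboot if the two labels differ.
```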
Did I miss something? Is this a realistic solution? How do you handle this issue? Comments on this article will be open for quite a while.