"Upgrading" host vicki from Debian 5.0 "lenny" to Debian 6.0 "squeeze"
Some earlier communications on the matter:
pre-"upgrade" overview of hosts:
- SF-LUG uses:
- sf-lug.org (hosted elsewhere, and difficult to access/maintain)
- sf-lug.com (Xen domU vm sflug on Xen dom0 host vicki)
- BALUG uses:
- dreamhost.com (hosted elsewhere; definitely has its limitations/drawbacks)
- balug-sf-lug-v2.balug.org, etc. (e.g. this wiki; Xen domU vm balug on Xen dom0 host vicki)
- vicki - Xen dom0 host for sflug and balug Xen domUs noted above.
- this "upgrade" is essentially focused on vicki and VMs (presently Xen DomUs) hosted on vicki
- remote access to vicki is quite limited - essentially just ssh for management
- IPMI, though theoretically present, isn't yet sufficiently functional at last checks (perhaps needs to be enabled in BIOS?)
- no remote Keyboard-Video-Mouse
- no remote access to console serial connection
- vicki is in a colocation (colo) facility, which is generally good (e.g. good power and network connectivity)
- colo access is quite inconvenient: getting any of the responsible sysadmins physical access to vicki, if/when needed, is not a quick and convenient matter - it typically requires a fair bit of advance planning to coordinate, and only a very limited number (one?) of SF-LUG/BALUG sysadmin folks are on the direct access list to be able to gain physical access
- OS presently: for vicki, sflug & balug: Debian GNU/Linux 5.0.9 "Lenny" i386
draft/outline of upgrade plans/procedure & background:
# hostname --fqdn; pwd -P; expand -t 2 < 0010_a_general_plan_or_outline
vicki.sf-lug.com
/root/upgrades/5.0_lenny_to_6.0_squeeze
SF-LUG & BALUG: System OS upgrades *soon*(?) - volunteer(s)?

Jim, et al.,

Do we have a quorum of volunteers (or should we also try to add a person
or two)?  In this case, I'm specifically thinking colo box, physical
access, and associated systems administration stuff (there's also lots
that can be done mostly remotely).

Anyway, I see some fairly major upgrades due in our near future.
Impacted are:
SF-LUG:
  sflug (guest on vicki, hosts [www.]sf-lug.com)
  vicki (host for the above)
BALUG:
  vicki (noted above, hosts the immediately below)
  balug-sf-lug-v2.balug.org (guest on vicki, hosts lots of BALUG production)
  aladfar.dreamhost.com. (hosted, will be upgraded/replaced for us,
    hosts [www.]balug.org, etc.)

Security support for Debian 5.0 "lenny" ends *soon* (2012-02-06).
To the extent feasible, we should upgrade the relevant systems soon,
preferably before that date if that's doable, but if not, soon
thereafter.

Also planning out, reviewing & discussing those upgrades, etc.
at:
o Noisebridge Linux Discussion 2012-01-25
o SF-LUG 2012-02-05

Roughly, I have in mind (what I'd like to do):
o there isn't any officially supported upgrade path from i386 to amd64
o the Silicon Mechanics physical box is and will run amd64/x86_64
o the Silicon Mechanics physical box supports hardware virtualization
o suitably backup (including on-disk as feasible)
o generally prepare for upgrades
o do "upgrades" as follows:
  o vicki:
    o DONE: backup / move / "shove" stuff around beginning of disk
      suitably out-of-the-way (on-disk backups / access to existing
      data)
      IMPLEMENTED AS:
      sd[ab]1 md0 RAID1 >243MiB available for use for upgrade/install
        (/boot data copied to /boot.2012-01-30.tar.gz)
      sd[ab]2 md1 RAID1 >16GiB available for use (data relocated; md1
        removed from LVM; md1 data wiped to all binary zeros)
    o install Debian 6.0.4 amd64, using beginning area(s) of disks
      (md[01] (sd[ab]12) and area preceding sd[ab]1 (boot blocks, MBR,
      partition table)) - partition layout to be preserved, all data on
      all partitions to be preserved, except sd[ab]12 via md[01] will
      be used for /boot and LVM2 respectively; general architecture
      layout mostly quite as before (everything mirrored, separate
      /boot, rest under LVM2, separate filesystems, etc.)
    o install/configure vicki as above to fully support qemu-kvm.
      Note that on amd64, and with hardware virtualization, that will
      allow vicki to support i386 and amd64 images under qemu-kvm.
  o sflug & balug-sf-lug-v2.balug.org:
    o once the above vicki upgrades are done, sflug and
      balug-sf-lug-v2.balug.org can be dealt with remotely; however,
      it may be desirable, in the interest of time, to convert sflug
      to run under qemu-kvm and verify such is operational before
      leaving the site.
    o at minimum, before departing the site, it should be ensured that
      host vicki reboots properly to provide remote ssh access to it,
      and that it is suitably configured to run i386 and amd64 images
      under qemu-kvm.
o sflug & balug-sf-lug-v2.balug.org can each be dealt with separately
  by their primary/lead sysadmin(s) as may be desired; in general, for
  them, I'd probably recommend proceeding as follows:
  o get the existing Xen guests converted to qemu-kvm and then running
    again, more-or-less as they were (will require some adjustments -
    most notably boot bits)
  o upgrade guests to Debian 6.0.4 (or latest 6.0.x)
  o optional: change guests from i386 to amd64 - use above guests as
    reference installations, and do an install/merge to get the
    guest(s) as desired to amd64 architecture

Security support of Debian GNU/Linux 5.0 (code name "lenny") will be
terminated 2012-02-06.
Debian released Debian GNU/Linux 5.0 alias "lenny" 2009-02-14.
Debian released Debian GNU/Linux 6.0 alias "squeeze" 2011-02-06.
references:
http://lists.debian.org/debian-security-announce/2011/msg00238.html
#
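The plan above assumes the physical box can run an amd64 kernel and supports hardware virtualization. A minimal pre-flight sketch (run on the existing lenny host) checking the standard /proc/cpuinfo flags; the `has_flag` helper is just for illustration:

```shell
# has_flag FLAGS WORD — succeed iff WORD appears as a word in FLAGS
has_flag() {
    printf '%s\n' "$1" | grep -q -w -- "$2"
}

# first CPU's flags line from /proc/cpuinfo
flags=$(sed -n 's/^flags[^:]*: //p' /proc/cpuinfo | head -n 1)

# lm ("long mode") => CPU is 64-bit capable; vmx/svm => hardware virtualization
has_flag "$flags" lm && echo "64-bit capable: amd64 install possible"
{ has_flag "$flags" vmx || has_flag "$flags" svm; } \
    && echo "hardware virtualization: qemu-kvm can fully virtualize guests" \
    || echo "no vmx/svm flag: check BIOS virtualization setting" >&2
```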
some notes/points/questions/observations/commentary/etc. from planning (meeting(s), etc.)
- meetings held or to be held on discussing these planned upgrades:
- 2012-01-25 at Noisebridge Linux Discussion
- 2012-02-05 at SF-LUG meeting
- "requirements"? - Erick P. Scott made the keen observation that there should be a "requirements" document, or something to that effect.
- Even if not specifically "requirements", something documenting relevant considerations, relative importance, etc., would be useful
- We do sort of have such a document; see: System Administration - Rules of the Road (this box): objectives - but it's rather/quite outdated … though much of it is still rather/quite applicable.
- probably a gross oversimplification, but an approximate summary of overall host engineering design goals:
- provide a stable dependable relatively high availability and reasonably manageable platform to …
- well satisfy, as feasible, the needs and interests of the LUGs (SF-LUG and BALUG), including:
- doing it as well as feasible with, e.g. relatively limited available resources
- allowing the LUGs to relatively "do their own thing" without, as feasible, "cramping each other's style", stepping upon each other's resources/toes, needing (excessive) coordination/communication, etc. (This is a key objective behind why, some years back, we went to a VM environment, with a general design of a quite stable host and each LUG then having its own VM on the host. This has generally made it much easier for each LUG to do independent and relatively non-conflicting work while minimizing the need to carefully coordinate each LUG's system activities … the prior setup was one single shared host used by both LUGs - that was significantly more difficult to manage, notably on the coordination, etc.)
- What OS/distribution - from, and to:
- from: Debian GNU/Linux 5.0.9 "Lenny" i386 (vicki host & its Xen domU guests)
- "upgrade" - there's no official supported way to "upgrade" i386 to amd64, so those aren't really "upgrades", but rather new installations and then merging in of the older data - so the result will be approximately as if such an upgrade path officially existed and was supported.
- to:
- vicki: Debian GNU/Linux 6.0.x "Squeeze" amd64
- although a reasonable alternative might be Ubuntu Server LTS amd64/x86_64, that's probably not preferred at this point in time for this installation/"upgrade"
- (partial) rationale/plan:
- going from i386 to amd64 allows guests to be i386 and/or amd64 (whereas i386 host limits guests to i386)
- convert guests from Xen to qemu-kvm - using full virtualization eliminates some rather sticky guest/host dependencies, e.g. specific kernel(s) and lower-level bits, and allows (quite) different guest distributions/versions and even altogether different operating systems, etc. Regular remote maintenance has been challenging with Xen - particularly without remote console access to vicki. Converting to qemu-kvm will make upgrading/updating both host and guests significantly easier going forward.
- guests - earlier plan was a host that could run Xen or qemu-kvm guests - turns out that's not particularly feasible (without an intermediary guest, and it's not warranted in our case to add that level of complication), so instead …
- guests - convert from existing Xen to run the existing guests (after host "upgrade") under qemu-kvm
- once the host is upgraded and guests are running under qemu-kvm, guests can then be further "upgraded", etc., remotely.
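Once the host is on squeeze with qemu-kvm installed, the quickest sign that full virtualization is actually available is the /dev/kvm character device (created when the kvm and kvm_intel/kvm_amd modules load). A minimal sketch; the `kvm_usable` helper name is illustrative:

```shell
# kvm_usable [DEV] — succeed iff the KVM device node exists as a char device
# (defaults to the standard /dev/kvm; parameter exists only for testing)
kvm_usable() {
    [ -c "${1:-/dev/kvm}" ]
}

if kvm_usable; then
    echo "/dev/kvm present: qemu-kvm hardware virtualization available"
else
    echo "no /dev/kvm: check BIOS VT setting and kvm/kvm_intel/kvm_amd modules" >&2
fi
```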
disk layout, etc. details:
# hostname --fqdn; pwd -P; more 00[23]* | expand -t 2
vicki.sf-lug.com
/root/upgrades/5.0_lenny_to_6.0_squeeze
::::::::::::::
0020_vicki-pre-disk-analysis
::::::::::::::
//hard drive partitions, we have:
# 2>>/dev/null sfdisk -uS -l /dev/sda; 2>>/dev/null sfdisk -uS -l /dev/sdb

Disk /dev/sda: 30401 cylinders, 255 heads, 63 sectors/track
Units = sectors of 512 bytes, counting from 0

   Device Boot     Start       End  #sectors  Id  System
/dev/sda1              63    498014    497952  fd  Linux raid autodetect
/dev/sda2          498015  35648234  35150220  fd  Linux raid autodetect
/dev/sda3        35648235 488392064 452743830   5  Extended
/dev/sda4               0         -         0   0  Empty
/dev/sda5        35648298  92213099  56564802  fd  Linux raid autodetect
/dev/sda6        92213163 148777964  56564802  fd  Linux raid autodetect
/dev/sda7       148778028 205342829  56564802  fd  Linux raid autodetect
/dev/sda8       205342893 261907694  56564802  fd  Linux raid autodetect
/dev/sda9       261907758 318472559  56564802  fd  Linux raid autodetect
/dev/sda10      318472623 375037424  56564802  fd  Linux raid autodetect
/dev/sda11      375037488 431602289  56564802  fd  Linux raid autodetect
/dev/sda12      431602353 488167154  56564802  fd  Linux raid autodetect
/dev/sda13      488167218 488392064    224847  fd  Linux raid autodetect

Disk /dev/sdb: 30401 cylinders, 255 heads, 63 sectors/track
Units = sectors of 512 bytes, counting from 0

   Device Boot     Start       End  #sectors  Id  System
/dev/sdb1              63    498014    497952  fd  Linux raid autodetect
/dev/sdb2          498015  35648234  35150220  fd  Linux raid autodetect
/dev/sdb3        35648235 488392064 452743830   5  Extended
/dev/sdb4               0         -         0   0  Empty
/dev/sdb5        35648298  92213099  56564802  fd  Linux raid autodetect
/dev/sdb6        92213163 148777964  56564802  fd  Linux raid autodetect
/dev/sdb7       148778028 205342829  56564802  fd  Linux raid autodetect
/dev/sdb8       205342893 261907694  56564802  fd  Linux raid autodetect
/dev/sdb9       261907758 318472559  56564802  fd  Linux raid autodetect
/dev/sdb10      318472623 375037424  56564802  fd  Linux raid autodetect
/dev/sdb11      375037488 431602289  56564802  fd  Linux raid autodetect
/dev/sdb12      431602353 488167154  56564802  fd  Linux raid autodetect
/dev/sdb13      488167218 488392064    224847  fd  Linux raid autodetect

//or if we present that data a bit differently, to show just how
//identical the partitioning on the two /dev/sd[ab] disks is:

Disk /dev/sd[ab]: 30401 cylinders, 255 heads, 63 sectors/track
Units = sectors of 512 bytes, counting from 0

   Device Boot     Start       End  #sectors  Id  System
/dev/sd[ab]1           63    498014    497952  fd  Linux raid autodetect
/dev/sd[ab]2       498015  35648234  35150220  fd  Linux raid autodetect
/dev/sd[ab]3     35648235 488392064 452743830   5  Extended
/dev/sd[ab]4            0         -         0   0  Empty
/dev/sd[ab]5     35648298  92213099  56564802  fd  Linux raid autodetect
/dev/sd[ab]6     92213163 148777964  56564802  fd  Linux raid autodetect
/dev/sd[ab]7    148778028 205342829  56564802  fd  Linux raid autodetect
/dev/sd[ab]8    205342893 261907694  56564802  fd  Linux raid autodetect
/dev/sd[ab]9    261907758 318472559  56564802  fd  Linux raid autodetect
/dev/sd[ab]10   318472623 375037424  56564802  fd  Linux raid autodetect
/dev/sd[ab]11   375037488 431602289  56564802  fd  Linux raid autodetect
/dev/sd[ab]12   431602353 488167154  56564802  fd  Linux raid autodetect
/dev/sd[ab]13   488167218 488392064    224847  fd  Linux raid autodetect

# 2>>/dev/null sfdisk -uS -d /dev/sda; 2>>/dev/null sfdisk -uS -d /dev/sdb
# partition table of /dev/sda
unit: sectors

/dev/sda1 : start=       63, size=   497952, Id=fd
/dev/sda2 : start=   498015, size= 35150220, Id=fd
/dev/sda3 : start= 35648235, size=452743830, Id= 5
/dev/sda4 : start=        0, size=        0, Id= 0
/dev/sda5 : start= 35648298, size= 56564802, Id=fd
/dev/sda6 : start= 92213163, size= 56564802, Id=fd
/dev/sda7 : start=148778028, size= 56564802, Id=fd
/dev/sda8 : start=205342893, size= 56564802, Id=fd
/dev/sda9 : start=261907758, size= 56564802, Id=fd
/dev/sda10: start=318472623, size= 56564802, Id=fd
/dev/sda11: start=375037488, size= 56564802, Id=fd
/dev/sda12: start=431602353, size= 56564802, Id=fd
/dev/sda13: start=488167218, size=   224847, Id=fd
# partition table of /dev/sdb
unit: sectors

/dev/sdb1 : start=       63, size=   497952, Id=fd
/dev/sdb2 : start=   498015, size= 35150220, Id=fd
/dev/sdb3 : start= 35648235, size=452743830, Id= 5
/dev/sdb4 : start=        0, size=        0, Id= 0
/dev/sdb5 : start= 35648298, size= 56564802, Id=fd
/dev/sdb6 : start= 92213163, size= 56564802, Id=fd
/dev/sdb7 : start=148778028, size= 56564802, Id=fd
/dev/sdb8 : start=205342893, size= 56564802, Id=fd
/dev/sdb9 : start=261907758, size= 56564802, Id=fd
/dev/sdb10: start=318472623, size= 56564802, Id=fd
/dev/sdb11: start=375037488, size= 56564802, Id=fd
/dev/sdb12: start=431602353, size= 56564802, Id=fd
/dev/sdb13: start=488167218, size=   224847, Id=fd
//excepting extended partition, all logical and non-zero-length primary
//partitions paired up between the sda and sdb devices' partitions as md
//raid1 devices:
# mdadm --verbose --examine --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=aa643e53:bf543ced:313266d4:d5715d2d
  devices=/dev/sdb1,/dev/sda1
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=042ceb88:cc906844:9895a7df:5145afdc
  devices=/dev/sdb2,/dev/sda2
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=0246205e:28218c5d:abb3665b:0b743010
  devices=/dev/sdb5,/dev/sda5
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=2d2f4ea7:c64ec7bc:abb3665b:0b743010
  devices=/dev/sdb6,/dev/sda6
ARRAY /dev/md4 level=raid1 num-devices=2 UUID=865ccab9:d4b974f9:abb3665b:0b743010
  devices=/dev/sdb7,/dev/sda7
ARRAY /dev/md5 level=raid1 num-devices=2 UUID=658546bb:0f1cd14a:abb3665b:0b743010
  devices=/dev/sdb8,/dev/sda8
ARRAY /dev/md6 level=raid1 num-devices=2 UUID=a36c8141:20c78911:abb3665b:0b743010
  devices=/dev/sdb9,/dev/sda9
ARRAY /dev/md7 level=raid1 num-devices=2 UUID=fa9405b0:a35f0051:abb3665b:0b743010
  devices=/dev/sdb10,/dev/sda10
ARRAY /dev/md8 level=raid1 num-devices=2 UUID=28693012:1c28e9e4:abb3665b:0b743010
  devices=/dev/sdb11,/dev/sda11
ARRAY /dev/md9 level=raid1 num-devices=2 UUID=bdc04439:43e908da:abb3665b:0b743010
  devices=/dev/sdb12,/dev/sda12
ARRAY /dev/md10 level=raid1 num-devices=2 UUID=c828b7de:3f56cc42:abb3665b:0b743010
  devices=/dev/sdb13,/dev/sda13
///dev/md0 is used for boot:
# fgrep /boot /etc/fstab
/dev/md0 /boot ext3 nosuid,nodev,ro,noatime 0 2
# df -k /boot
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0                241036     57135    171457  25% /boot
//md[1-6] used for LVM
# for tmp in /dev/md[1-9] /dev/md[1-9][0-9]; do echo $(pvdisplay "$tmp" | fgrep -e 'PV Name' -e 'VG Name'); done; unset tmp
PV Name /dev/md1 VG Name vg00
PV Name /dev/md2 VG Name vg-balug
PV Name /dev/md3 VG Name vg-sflug
PV Name /dev/md4 VG Name vg-balug
PV Name /dev/md5 VG Name vg-balug
PV Name /dev/md6 VG Name vg-local
No physical volume label read from /dev/md7 Failed to read physical volume "/dev/md7"
No physical volume label read from /dev/md8 Failed to read physical volume "/dev/md8"
No physical volume label read from /dev/md9 Failed to read physical volume "/dev/md9"
No physical volume label read from /dev/md10 Failed to read physical volume "/dev/md10"
//are md[7-9] and/or md10 in use for anything?
//not used for dom0 swap
//not mounted
//not referenced in /etc/fstab
//not used by xen guests or dom0 for guests
//fuser and fuser -m show nothing having them open
//apparently /dev/md[7-9] and /dev/md10 not in use (free/available)
::::::::::::::
0030_vicki_initial_disk_prep
::::::::::::::
//before:
sda1  sdb1  md0  /boot
sda2  sdb2  md1  vg00
sda5  sdb5  md2  vg-balug
sda6  sdb6  md3  vg-sflug
sda7  sdb7  md4  vg-balug
sda8  sdb8  md5  vg-balug
sda9  sdb9  md6  vg-local
sda10 sdb10 md7  (unused)
sda11 sdb11 md8  (unused)
sda12 sdb12 md9  (unused)
sda13 sdb13 md10 (unused)
//after:
sda1  sdb1  md0  /boot
sda2  sdb2  md1  (unused)
sda5  sdb5  md2  vg-balug
sda6  sdb6  md3  vg-sflug
sda7  sdb7  md4  vg-balug
sda8  sdb8  md5  vg-balug
sda9  sdb9  md6  vg-local
sda10 sdb10 md7  vg00
sda11 sdb11 md8  (unused)
sda12 sdb12 md9  (unused)
sda13 sdb13 md10 (unused)
#
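Since the install preserves the partition layout and touches the boot blocks, it's cheap insurance to snapshot each disk's partition table and boot code first (the plan's "backup MBR (440 bytes)" step). A sketch; the function name and output paths are illustrative:

```shell
# backup_disk_meta DEV OUTDIR — save a restorable sfdisk dump of DEV's
# partition table plus the 440-byte boot-code area preceding it.
backup_disk_meta() {
    dev=$1 out=$2 name=$(basename "$1")
    mkdir -p "$out"
    # restorable later via:  sfdisk -uS "$dev" < "$out/$name.sfdisk"
    sfdisk -uS -d "$dev" > "$out/$name.sfdisk" 2>/dev/null || true  # needs root on real devices
    # first 440 bytes only: boot code, excluding partition table & signature
    dd if="$dev" of="$out/$name.mbr" bs=440 count=1 2>/dev/null
}

# e.g. on vicki (as root):
#   backup_disk_meta /dev/sda /root/upgrades/backups
#   backup_disk_meta /dev/sdb /root/upgrades/backups
```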
host networking bits
# hostname --fqdn; pwd -P; expand < 0050_networking
vicki.sf-lug.com
/root/upgrades/5.0_lenny_to_6.0_squeeze
//still quite accurate:
http://www.wiki.balug.org/wiki/doku.php?id=system:ip_addresses
IPv4 addresses (this subnet): 208.96.15.248/29
208.96.15.248 network
208.96.15.249 Default Gateway
208.96.15.250 "vicki" dom0 (Xen host - Silicon Mechanics box primary IP)
208.96.15.251 (temporarily?) in use by sflug domU
208.96.15.252 sflug domU (Xen "guest" of host "vicki", sf-lug.com., etc.)
208.96.15.253 (useable - reserved for future use(?))
208.96.15.254 balug domU (Xen "guest" of host "vicki", for BALUG use
              (balug-sf-lug-v2.balug.org, etc.))
208.96.15.255 broadcast
//DNS servers (colo provided):
216.93.160.11
216.93.160.16
//and specific bits shown from host and guests:
$ hostname; /sbin/ifconfig | sed -ne '/HWaddr/{p;n;/inet addr/p;}'; cat /etc/resolv.conf; netstat -nr
vicki
eth0   Link encap:Ethernet  HWaddr 00:30:48:91:97:90
       inet addr:208.96.15.250  Bcast:208.96.15.255  Mask:255.255.255.248
peth0  Link encap:Ethernet  HWaddr 00:30:48:91:97:90
vif2.0 Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
vif5.0 Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
search sf-lug.com
nameserver 216.93.160.11
nameserver 216.93.160.16
nameserver 64.81.79.2
Kernel IP routing table
Destination    Gateway        Genmask          Flags  MSS Window  irtt Iface
208.96.15.248  0.0.0.0        255.255.255.248  U        0 0          0 eth0
0.0.0.0        208.96.15.249  0.0.0.0          UG       0 0          0 eth0
$
$ hostname; /sbin/ifconfig | sed -ne '/HWaddr/{p;n;/inet addr/p;}'
balug-sf-lug-v2.balug.org
eth0   Link encap:Ethernet  HWaddr 00:16:3e:4f:52:43
       inet addr:208.96.15.254  Bcast:208.96.15.255  Mask:255.255.255.248
$ dig -t A sf-lug.com. +short
208.96.15.252
$ dig -t A www.sf-lug.com. +short
208.96.15.252
$
$ hostname; /sbin/ifconfig | sed -ne '/HWaddr/{p;n;/inet addr/p;}'
sflug
eth0   Link encap:Ethernet  HWaddr 00:16:3e:7d:0c:67
       inet addr:208.96.15.252  Bcast:208.96.15.255  Mask:255.255.255.248
eth0:0 Link encap:Ethernet  HWaddr 00:16:3e:7d:0c:67
       inet addr:208.96.15.251  Bcast:208.96.15.255  Mask:255.255.255.248
$
//qemu-kvm doesn't automagically do quite as much bridge setup for us,
//so we'll need to do a bit more of that manually; for Debian 6.0 squeeze,
//we'll need at least package bridge-utils on vicki
//relevant networking file bits should look like:
//based on relatively similar Debian GNU/Linux 6.0.4 amd64 host
$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo br0
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface br0 inet static
    bridge_ports eth0
    address 208.96.15.250
    netmask 255.255.255.248
    network 208.96.15.248
    broadcast 208.96.15.255
    gateway 208.96.15.249
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers 216.93.160.11 216.93.160.16
    dns-search sf-lug.com
$
#
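The static br0 stanza above hand-codes the network and broadcast addresses alongside the address/netmask; a small POSIX-shell helper (name illustrative) can double-check that those values are mutually consistent before a remote host is rebooted on them:

```shell
# ip4_calc IP MASK — print "network broadcast" computed from IP and MASK
ip4_calc() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    IFS=. read -r m1 m2 m3 m4 <<EOF
$2
EOF
    # network = IP AND mask; broadcast = network OR complement-of-mask
    n1=$((a & m1)) n2=$((b & m2)) n3=$((c & m3)) n4=$((d & m4))
    b1=$((n1 | (255 - m1))) b2=$((n2 | (255 - m2)))
    b3=$((n3 | (255 - m3))) b4=$((n4 | (255 - m4)))
    echo "$n1.$n2.$n3.$n4 $b1.$b2.$b3.$b4"
}

ip4_calc 208.96.15.250 255.255.255.248   # → 208.96.15.248 208.96.15.255
```

For vicki's 208.96.15.250/29 this agrees with the stanza's network 208.96.15.248 and broadcast 208.96.15.255.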
xen --> qemu-kvm (specific example sflug):
# hostname --fqdn; pwd -P; expand -t 2 < 0060_vicki_sflug_xen2qemu-kvm
vicki.sf-lug.com
/root/upgrades/5.0_lenny_to_6.0_squeeze
The bits on converting sflug from Xen to qemu-kvm.
This isn't everything; this is mostly just the less-than-trivial bits.
(Detailed) example given is sflug; balug would be fairly similar.
o sflug is Debian GNU/Linux 5.0.9 i386 (excepting some slight bits that
  may predate that - e.g. some existing low-level bits quite
  interdependent with existing Xen host/domU, e.g. kernel - but
  nevertheless, even those bits are at least Debian GNU/Linux 5.0.x
  i386)
o existing sflug has sda1 (/) and sda2 (swap), but no sda presented
  from host to it
o create an sda as follows:
  o LVM volume suitably sized on host to house sflug / filesystem (may
    be resized) and its existing swap
  o partition the above to act like and be configured as an sda (when
    presented from host) with suitably sized sda1 and sda2, using Linux
    and Linux swap types respectively, and with sda1 set as bootable
  o use losetup (with -o and --sizelimit options) to create loop
    devices to access the sda1 and sda2 partitions within the above
    (note that this needs a sufficiently current losetup - well the
    case under Debian 6.0.x, but 5.0.9 lacks the --sizelimit option)
  o use dd with output of the above loop devices and input of existing
    sflug root (/) and swap respectively (or resized root (/)
    filesystem, as applicable)
o UUIDs should be unique, so we adjust accordingly:
  o use tune2fs to change UUID of target root (/) filesystem
  o use mkswap to recreate target swap
  o use blkid to determine UUID and label(s) of the above targets
o mount target root filesystem and inspect/edit /etc/fstab
  appropriately:
  o adjust UUIDs in /etc/fstab as appropriate
  o if/as appropriate, temporarily and suitably comment out anything
    that shouldn't be initially mounted
o note that the above sets the (virtual) sda up nearly to be bootable,
  but not quite, since those bits weren't written to such a virtual
  drive (nor needed under Xen) for the existing Xen sflug
o boot an installation sflug qemu-kvm using virtual sda as noted above,
  using virt-install(1), and suitably adjusted configuration
  approximately as follows - and with CD-ROM (virtual) of:
  Debian GNU/Linux 5.0.9 "Lenny" - Official i386 CD Binary-1 20111001-17:16
  and with suitable ssh X11 forwarding enabled, etc.:
  DISPLAY=localhost:10.0 XAUTHORITY=/home/mpaoli/.Xauthority \
  virt-install \
    --name=sflug \
    --ram=256 \
    --os-type=linux \
    --os-variant=debianlenny \
    --network=bridge=br0 \
    --hvm \
    --virt-type kvm \
    --cdrom=/var/local/pub/mirrored/cdimage.debian.org/debian-cd/5.0.9/i386/iso-cd/debian-509-i386-CD-1.iso \
    --disk path=/dev/vg-sflug/sflug-sda,format=raw,bus=scsi \
    --wait=-1
  boot the (virtual) guest from CD into graphical recovery mode
  ...
  Device to use as root file system: /dev/sda1
  Execute a shell in /dev/sda1
o from outside the chroot(8), bind mount the already-mounted bits we'll
  need:
  # mount -o bind /cdrom /target/cdrom
o and within chroot(8), to keep from driving myself batty:
  sh-3.2# set -o vi
  sh-3.2# FCEDIT=nvi
o and some other environment bits to avoid problems in our chroot(8)
  with aptitude and friends:
  cd / && exec env -i SHELL=/bin/sh TERM="$TERM" USER="$USER" \
  > PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
  > HOME=/root /bin/sh
o and again to keep from driving myself batty:
  sh-3.2# set -o vi
  sh-3.2# FCEDIT=nvi
o edit /etc/apt/sources.list - comment out all active entries
o use apt-cdrom to update /etc/apt/sources.list, e.g.:
  sh-3.2# apt-cdrom -m -d=/cdrom add
o create a backup of /boot "just in case" (and/or for reference):
  sh-3.2# mkdir /boot.bak && (cd /boot && tar -cf - .) |
  > (cd /boot.bak && tar -xf -)
o (mostly) update aptitude as feasible:
  note that it may be necessary to update /etc/apt/sources.list (e.g.
  to use archive.debian.org for Debian GNU/Linux 5.0.9)
  sh-3.2# aptitude update
  sh-3.2# aptitude safe-upgrade
o package changes freeing ourselves from Xen:
  sh-3.2# aptitude install grub linux-image-2.6-686 libc6 libc6-xen_ \
  linux-image-2.6-xen-686_ linux-image-2.6.26-2-xen-686_ \
  linux-modules-2.6-xen-686_ linux-modules-2.6.26-2-xen-686_
o install/configure grub:
  sh-3.2# grub-install --no-floppy /dev/sda
o suitably create/adjust /boot/grub/menu.lst, e.g.:
  sh-3.2# update-grub
o get our new root filesystem to a consistent state - umount it, or
  remount it ro:
  from outside chroot(8):
  # umount /target/cdrom
  from inside chroot(8):
  sh-3.2# exit
o From the rescue menu, choose: Reboot the system
o if all's well, should get to grub boot prompt; boot single-user mode,
  sanity check system
  # cd / && exec shutdown -h now
o reconfigure guest to:
  add networking (if not already done)
  add access to cdrom image
  add access to any additional storage as appropriate, e.g.:
  inspect/adjust/update /etc/fstab
o boot guest
o suitably update /etc/apt/sources.list
o sanity check services running from guest (e.g. sshd, DNS & Apache
  from Internet)
o configure guest to restart upon host reboot, e.g.:
  # virsh autostart sflug
o shutdown guest(s), reboot host, check that all properly comes up
#
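The losetup -o/--sizelimit step above can be sketched as follows. Offsets are given to losetup in bytes, computed from the 512-byte sector start/size that sfdisk -uS reports; the example start/size values here are illustrative, not sflug's real geometry:

```shell
# sectors_to_bytes SECTORS — convert 512-byte sector count to bytes
sectors_to_bytes() { echo $(( $1 * 512 )); }

# map_partition IMG START SIZE — attach a loop device covering one
# partition inside IMG (START/SIZE in sectors); prints the loop device
map_partition() {
    losetup -f --show \
        -o "$(sectors_to_bytes "$2")" --sizelimit "$(sectors_to_bytes "$3")" "$1"
}

# e.g. (as root, with a sufficiently current losetup, i.e. squeeze's;
# device paths and geometry below are illustrative):
#   loop=$(map_partition /dev/vg-sflug/sflug-sda 63 16000000)
#   dd if=/dev/vg-sflug/old-root of="$loop" bs=1M   # copy old / into new sda1
#   tune2fs -U random "$loop"                       # give the copy a unique UUID
#   losetup -d "$loop"
```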
vicki host install/"upgrade" procedure outline + select details:
# hostname --fqdn; pwd -P; expand -t 2 < 0070_vicki_upgrade_steps
vicki.sf-lug.com
/root/upgrades/5.0_lenny_to_6.0_squeeze
vicki upgrade key steps (outline of general procedure - fair bit of
detail, but skipping details of many of the more obvious/routine steps)
(remaining) pre steps:
o remount /boot ro
o backup /boot data
o backup MBR (440 bytes)
o shutdown VM guests
o attach KVM
o shutdown vicki
install(/merge/upgrade) steps:
o attach bootable USB image: debian-6.0.4-amd64-CD-1.iso
o boot USB
o Advanced options
o Graphical expert install
o ...
o Configure locales, add all en_US* locales
  o default locale: en_US.UTF-8
o ...
o Load installer components from CD, include:
  o cfdisk-udeb
  o choose-mirror
  o multipath
  o openssh
  o parted-udeb
  o partman-reiserfs
  o reiserfs-modules
o Configure the network
  o Auto-configure network with DHCP: no
  o IP address: 208.96.15.250
  o Netmask: 255.255.255.248
  o Gateway: 208.96.15.249
  o Name server addresses: 216.93.160.11 216.93.160.16
  o Hostname: vicki
  o Domain name: sf-lug.com
o Set up users and passwords
  o Allow login as root: yes (will change that later)
  o Create a normal user account now?: No
o Configure the clock
o ...
o Pacific
o Partition disks
  o Manual
    ONLY TOUCH PARTITIONS #1 AND #2 ON EACH DISK; DO NOT RECREATE OR
    MOVE THEM - THEY MUST REMAIN EXACTLY WHERE THEY ARE, AND THE OTHER
    PARTITIONS TOTALLY UNTOUCHED
  o Configure software RAID
    o Keep current partition layout and configure RAID?: Yes
    o Create MD device
      o RAID1
      o Number of active devices for the RAID1 array: 2
      o Number of spare devices for the RAID1 array: 0
      o select partition 1 from both disks
    o Keep current partition layout and configure RAID?: Yes
    o Create MD device
      o RAID1
      o Number of active devices for the RAID1 array: 2
      o Number of spare devices for the RAID1 array: 0
      o select partition 2 from both disks
    o Keep current partition layout and configure RAID?: Yes
    o Finish
  o RAID1 device #0 #1
    o Ext3
    o /boot
    o vicki-boot
  o RAID1 device #1 #1
    o physical volume for LVM
  o Configure the Logical Volume Manager
    o Write the changes to disks and configure LVM?: Yes
    o Create volume group
      o Volume group name: vicki
      o /dev/md1
    o Create logical volume
      logical volume name & size:
        root 1G
        usr  2G
        var  4G
        home 2G
      (we'll add swap later, not now:
        swap1 512M
        swap2 512M
        swap3 512M
        swap4 512M )
  o Partition disks
    LVM   mount  fs type
    root  /      ext3
    usr   /usr   ext3
    var   /var   ext3
    home  /home  ext3
  o RAID1 device #0 #1
    o Ext3
    o /boot
    o vicki-boot
  o Finish partitioning and write changes to disk (continue without
    swap)
o Install base system
after that, and before kernel, etc.: ID "merging":
At this point, target filesystems are mounted on/under /target,
including /etc/{passwd,shadow,group,gshadow} files (although root
password isn't in there yet).  Here's where we start UID/GID
reconciliation, alignment, and merging in, etc., of IDs from "old"
vicki.
merge/reconcile /etc/{passwd,shadow,group,gshadow}:
activate and use console session on tty2 (Ctrl+Alt+F2)
start additional md (mdadm) devices, e.g.:
~ # mdadm --assemble /dev/md2 /dev/sd[ab]5
~ # mdadm --assemble /dev/md3 /dev/sd[ab]6
~ # mdadm --assemble /dev/md4 /dev/sd[ab]7
~ # mdadm --assemble /dev/md5 /dev/sd[ab]8
~ # mdadm --assemble /dev/md6 /dev/sd[ab]9
~ # mdadm --assemble /dev/md7 /dev/sd[ab]10
~ # mdadm --assemble /dev/md8 /dev/sd[ab]11
~ # mdadm --assemble /dev/md9 /dev/sd[ab]12
~ # mdadm --assemble /dev/md10 /dev/sd[ab]13
Scan for volume groups:
~ # vgscan
and activate:
~ # vgchange -a y vg00
~ # vgchange -a y vg-local
~ # vgchange -a y vg-sflug
~ # vgchange -a y vg-balug
mount "old" root filesystem and copy ID information to handy location,
and while we're at it, ssh keys:
~ # mkdir /tmp/mnt
~ # mount -o ro /dev/vg00/root /tmp/mnt
~ # (umask 077 && mkdir /target/var/tmp/etc /target/var/tmp/etc/ssh)
~ # (cd /tmp/mnt/etc && cp -p passwd shadow group gshadow /target/var/tmp/etc/)
~ # (cd /tmp/mnt/etc/ssh && cp -p *key* /target/var/tmp/etc/ssh/)
unmount "old" root filesystem and remove our temporary mountpoint:
~ # umount /tmp/mnt && rmdir /tmp/mnt
chroot into target we're building:
~ # cd / && exec chroot /target
optionally give us a more "friendly" shell, etc.:
# cd / && exec /bin/bash --posix --login
bash-4.1# PS1='# '
# set -o vi; FCEDIT=vi VISUAL=vi EDITOR=vi; export FCEDIT VISUAL EDITOR
mount /proc filesystem:
# mount /proc
VERY CAREFULLY merge in ID information from /var/tmp/etc/* files into
corresponding /etc/* files:
# vipw
# pwconv
# vipw -s
# vigr
# grpconv
# vigr -s
etc. as needed, and NOTE ANY CHANGES WE'LL NEED TO MAKE, e.g.:
o any changes on NEW filesystem(s) and/or
o any changes on OLD filesystem(s)
At last dry run check on 2012-02-25, the following issues were found,
and their recommended actions:
login/UID conflicts/issues:
  libuuid new UID/GID on target
    adjust any old data before allowing import, multichown IDspec:
    107,100,107,101
  100 uid conflict (libuuid vs. old Debian-exim)
    change Debian-exim to available <1000 UID
    adjust any old data before allowing import, multichown IDspec
    (replacing Debian-exim with new target UID, e.g. 125): 100,125
group/GID conflicts/issues:
  libuuid new GID on target: 101
    adjust any old data before allowing import, ,,107,101
  crontab new GID on target: 102
    adjust any old data before allowing import, ,,101,102
  102 gid conflict (vs. old Debian-exim)
    change Debian-exim to available <1000 GID
    adjust any old data before allowing import, ,,102,125
umount /proc filesystem from within chroot, and exit chroot:
# umount /proc && exit
o continue with installation (kernel, etc.)
o include non-free (in case we need it for, e.g., firmware)
o Software selection
  o Choose software to install: select only:
    x SSH server
    x Standard system utilities
    (we'll add other stuff later)
o Is the system clock set to UTC?: Yes
...
reboot to single user
empty contents of /tmp
edit /etc/fstab to use tmpfs for /tmp:
tmpfs /tmp tmpfs nosuid,nodev 0 0
mount /tmp
recopy our "old" ssh keys to /etc/ssh/
reboot
Should be able to login via ssh as "regular" user and su to root;
verify that, and if okay, disable root login via ssh.
start additional md (mdadm) devices, e.g.:
# mdadm --assemble /dev/md2 /dev/sd[ab]5
# mdadm --assemble /dev/md3 /dev/sd[ab]6
# mdadm --assemble /dev/md4 /dev/sd[ab]7
# mdadm --assemble /dev/md5 /dev/sd[ab]8
# mdadm --assemble /dev/md6 /dev/sd[ab]9
# mdadm --assemble /dev/md7 /dev/sd[ab]10
# mdadm --assemble /dev/md8 /dev/sd[ab]11
# mdadm --assemble /dev/md9 /dev/sd[ab]12
# mdadm --assemble /dev/md10 /dev/sd[ab]13
add them to /etc/mdadm/mdadm.conf:
# /usr/share/mdadm/mkconf | diff --ed /etc/mdadm/mdadm.conf -
...
and apply those changes
mount our CD image and configure into apt
install: sudo libvirt-bin libvirt-doc virtinst virt-viewer qemu-kvm ed nvi
reconfigure network using br0
convert sflug from Xen to qemu-kvm and configure to autostart
reboot vicki, confirm all comes up as expected
time permitting, work on IPMI
remainder can be handled remotely
#
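The nine mdadm --assemble commands in the procedure above follow one fixed pattern: /dev/mdN pairs with partition N+3 of each disk, for N=2..10. A small loop (function name illustrative) that prints the same commands, so they can be reviewed before being run as root:

```shell
# gen_assemble — print the mdadm --assemble command for each of md2..md10,
# pairing mdN with sda/sdb partition N+3 (the layout documented above)
gen_assemble() {
    md=2
    for part in 5 6 7 8 9 10 11 12 13; do
        echo "mdadm --assemble /dev/md$md /dev/sda$part /dev/sdb$part"
        md=$((md + 1))
    done
}

gen_assemble            # review output first; then, as root:  gen_assemble | sh
```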
checklist/outline of things to bring onsite:
- laptop(s) - Michael Paoli
- Ethernet cables - Michael Paoli
- 10/100/1000 Mbit Ethernet switch - Michael Paoli
- "Home" router (optional; Michael Paoli may bring)
- portable power strip - Michael Paoli
- off-line accessible copies of reference documentation, Michael Paoli:
- existing root passwords
- IP addresses & networking configuration information
- Debian GNU/Linux 6.0 "Squeeze" amd64 release notes and installation documentation
- vicki/SF-LUG/BALUG prepared install/upgrade outline/documentation
- boot/install images: Michael Paoli, on bootable USB flash drives:
- Debian GNU/Linux 6.0.4 "Squeeze" - Official amd64 CD Binary-1 20120128-13:42
- Debian GNU/Linux 6.0.4 "Squeeze" - Official amd64 NETINST Binary-1 20120129-00:39
procedure "details" - the "short" list/outline (specifics, etc.)
- sf-lug.com: disable its 2nd IP, 208.96.15.251, both currently and from restart (not needed for sf-lug.com, and highly useful to have a 2nd IP freed for upgrade/etc. procedures)
- Michael Paoli linux laptop - configure for network:
- down dhcp server, repoint dhcp server to config. for Servepath
- reconfig br0: # ifconfig br0 208.96.15.253 netmask 255.255.255.248 broadcast 208.96.15.255 up
- # route add default gw 208.96.15.249
- DNS: 216.93.160.11 216.93.160.16
- restart dhcp server
- connect:
- Gig switch: Internet, vicki, Michael Paoli linux laptop, Linksys "Internet" Ethernet port
- Linksys Ethernet ports 1-5 - additional laptop(s) (drive/watch process, Internet via NAT/SNAT)
- (www.)sf-lug.com - prepare to have services temporarily on Michael Paoli's laptop:
- have command set, but don't yet hit enter: # ifconfig br0:1 208.96.15.252 netmask 255.255.255.248 broadcast 208.96.15.255 up
- bring down sflug domU, and once down, hit enter on above command
- test connectivity/functionality (sflug domU down, but DNS and http up on 208.96.15.252)
- shutdown the balug VM
- continue with vicki "upgrade"/install as noted on this wiki page, and including also:
- before bringing sflug VM back up on Internet, on Michael Paoli's Linux laptop, down the IP 208.96.15.252:
- # ifconfig br0:1 down
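Once services move to the laptop (and again once they move back to the sflug VM), a quick check that the public names still resolve to the expected addresses helps catch mistakes early. A small helper using the system resolver; the `check_a` name is illustrative:

```shell
# check_a NAME EXPECTED_IP — succeed iff NAME's first IPv4 address matches
# EXPECTED_IP (getent consults the system resolver, honoring /etc/resolv.conf)
check_a() {
    getent ahostsv4 "$1" | awk '{print $1; exit}' | grep -q -x "$2"
}

# e.g. (expected address from the networking section of this page):
#   check_a sf-lug.com 208.96.15.252 && echo "sf-lug.com resolves as expected"
```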
to do:
- save needed upgrade related data (e.g. procedure steps/outlines) to handy locations accessible throughout upgrade
system/vicki_debian_lenny_to_squeeze.1330326579.txt.bz2 · Last modified: 2012-02-27T07:09:39+0000 by michael_paoli