wiki:VirtualisationTools


Use of virtualisation tools with Mondo Rescue can be quite helpful for development, testing and debugging. It can also help to give an administrator slightly more peace of mind in situations where real restore runs are not possible.

The following describes the use of virtualisation tools available today in conjunction with Mondo Rescue. Note that the emphasis here is on doing restores in a virtual machine (VM). You are, of course, also free to run mondoarchive inside the VM (guest).

QEMU

QEMU is a CPU emulator for a number of different architectures, both for the host and the guest (VM). Performance-wise, QEMU probably lies somewhere between Bochs and VMware.

Getting QEMU

Being GPL'd software (as opposed to VMware), QEMU should come with your distribution. For Debian, for instance, there is a qemu package that works just fine (for me in sid).

However, if QEMU is not included in your distribution or you want a newer version, you can get it from here: http://www.qemu.org/. I have not compiled it myself, so if you have, feel free to add the missing details!

Just ensure that you are using gcc v3.x for your compilation of qemu/kqemu, as gcc v4.x is not supported yet. It may get tricky when you want kqemu (non-GPL) to load with your kernel, as you will again need the same compiler version, which for up-to-date distributions probably means recompiling the kernel with gcc v3.x.
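
I have not verified this, but a build from source with gcc v3.x should look roughly like the following (the version number and compiler package name are only examples; QEMU's configure script accepts --cc to select the compiler):

tar xvzf qemu-0.8.1.tar.gz
cd qemu-0.8.1
./configure --cc=gcc-3.4 --prefix=/usr/local
make
make install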

Setting Things Up

QEMU comes with quite extensive information which includes a detailed manpage.

Getting going really only requires the following two things though:

Create a disk image using qemu-img:

qemu-img create hda.img 8G

This will create a sparse 8 GB raw disk image. This is neither a partition nor a filesystem; it is a whole disk. Sparse means that it will initially use only very little space in the host filesystem and keep growing as required, up to 8 GB. As a side note, sparse also means that you can create a disk image file the size of your physical hard disk and have Mondo Rescue restore your physical hard disk onto the disk image that QEMU uses, as long as somewhat more than 50% of your physical disk is free. This can be useful to avoid mondorestore reporting that there is not enough space inside QEMU.
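
As a quick check that the image really is sparse, you can compare its apparent size with the space it actually occupies on the host filesystem:

ls -lh hda.img
du -h hda.img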

The second prerequisite is to have a Mondo Rescue boot image or CD ready. You can just use the mondorescue.iso that mondoarchive has created (by default under /root/images/mindi).
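
If you do not have such a backup yet, a mondoarchive run writing ISO images might look something like the following (options and paths are only an example and not specific to this page; adjust to your setup):

mondoarchive -Oi -d /var/cache/mondo -E /var/cache/mondo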

Running QEMU

Run QEMU, e.g. like this, using the disk image we created beforehand and the mondorescue.iso from our last mondoarchive run:

qemu -cdrom /root/images/mindi/mondorescue.iso -m 256 -boot d -hda hda.img

That's it. If all goes well you get a new window with the familiar Mondo Rescue boot screen.

Some remarks:

  • '-m 256' gives 256 MB of physical RAM to the QEMU instance.
  • '-hda hda.img' could be written as just 'hda.img'. However, I include the switch for clarity's sake. As a side note, this means that only IDE disks are supported by QEMU, no SCSI.
  • '-boot d' makes QEMU boot off the CD-ROM. Once a restore is finished and you want to check the result, i.e. verify that the restored system is bootable and working, use '-boot c' (see the example after this list).
  • If the task is to boot off a real optical disk, use '-cdrom /dev/hdc' or whatever the correct device for the optical drive is.
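
For example, to boot the freshly restored system from the disk image, a command along the following lines should do:

qemu -hda hda.img -m 256 -boot c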

Advanced Topics

Networking

Hardware

QEMU emulates a PCI NE2000 NIC. This means you need support for this on your Mondo Rescue restore media. The only way I have gotten this to work is by appending 8390 and ne2k-pci to the NET_MODS variable in mindi (< 2.0.8), which may then look like this:

NET_MODS="sunrpc nfs nfs_acl lockd loop mii e100 bcm5700 e1000 eepro100 tg3 pcnet32 vmxnet 8390 ne2k-pci"
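
One way to make that change (assuming mindi is installed as /usr/sbin/mindi, which may differ on your system; keep a backup copy first) is a small sed edit:

cp /usr/sbin/mindi /usr/sbin/mindi.orig
sed -i -e 's/^NET_MODS="\(.*\)"/NET_MODS="\1 8390 ne2k-pci"/' /usr/sbin/mindi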

Configuration

QEMU supports networking in various ways. The simplest is 'user networking', which requires no setup (and no command line switches).

QEMU has a built-in DHCP server at address 10.0.2.2 which always issues the IP address 10.0.2.15. Theoretically, adding:

ipconf=dhcp

should make it so that networking is started using busybox's udhcpc. Unfortunately, I have not had any success with this yet. However, adding the following kernel parameter does the trick for me and yields a working network connection:

ipconf=10.0.2.15:255.255.255.0:10.0.2.255:10.0.2.2

Newer versions of Mondo Rescue require the network interface as the first parameter:

ipconf=eth0:10.0.2.15:255.255.255.0:10.0.2.255:10.0.2.2

So, a complete kernel line at the prompt could read:

boot: nuke ipconf=10.0.2.15:255.255.255.0:10.0.2.255:10.0.2.2

or, with network interface:

boot: nuke ipconf=eth0:10.0.2.15:255.255.255.0:10.0.2.255:10.0.2.2

Using QEMU for mondorescue PXE mode

I've found only one mode which works in a useful way for our tests:

  • Create a disk image using qemu-img:

qemu-img create pxe.qemu 3G

  • Go to ROM-o-Matic to generate a bootable image that supports network boot for the emulated NIC (thanks to http://tomas.andago.com/cgi-bin/trac.cgi/wiki/QEMUPXE)
    • Select the latest version (as of 20060528 that is 5.4.2)
    • Choose the NIC/ROM type. QEMU emulates a Realtek 8029, so choose ns8390:rtl8029 -- [0x10ec,0x8029]
    • Select the bootable ISO image without legacy floppy emulation.
    • Go to the configuration page and set the following fields:
      • Check USE_STATIC_BOOT_INFO
      • STATIC_CLIENT_IP: 10.0.2.15
      • STATIC_SUBNET_MASK: 255.255.255.0
      • STATIC_SERVER_IP: 10.0.2.2
      • STATIC_GATEWAY_IP: 10.0.2.2
      • STATIC_BOOTFILE: /usr/local/tftpboot/pxelinux.bin
    • Click on 'Get ROM' (a copy is also available at ftp://ftp.mondorescue.org/mondo/ftp/pxe/pxe-5.4.2.iso)
  • Add an entry for Mondo Rescue to your pxelinux configuration (e.g. in /usr/local/tftpboot/pxelinux.cfg/default):

label mondo
	KERNEL kmondo
	APPEND initrd=imondo.img load_ramdisk=1 prompt_ramdisk=0 ramdisk_size=36864 rw root=/dev/ram expert_mode acpi=off apm=off devfs=nomount exec-shield=0 noresume selinux=0 barrier=off
  • Copy the correct kernel and initrd from your mondoarchive images:
    • mount /mondo/mondo/images/victoria2-1.iso /mnt/cdrom -o loop
    • cp /mnt/cdrom/vmlinuz /usr/local/tftpboot/kmondo
    • cp /mnt/cdrom/initrd.img /usr/local/tftpboot/imondo.img
  • Use that ISO to launch your QEMU VM (the expected layout of the TFTP directory is sketched after this list):

qemu -hda pxe.qemu -cdrom pxe-5.4.2.iso -boot d -tftp /usr/local/tftpboot

  • Enjoy!
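
For reference, once everything is in place the TFTP root should contain roughly the following files (pxelinux.cfg/default is the usual pxelinux configuration file and would hold the 'label mondo' entry from above; the exact layout may differ on your setup):

/usr/local/tftpboot/pxelinux.bin
/usr/local/tftpboot/pxelinux.cfg/default
/usr/local/tftpboot/kmondo
/usr/local/tftpboot/imondo.img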

Remote NFS Server Configuration

Even if all of the above works, access to the NFS share required for the restore may still fail because the remote NFS server rejects mount requests coming from an 'illegal' (unprivileged) port. You may find something similar to the following in /var/log/syslog:

May 24 17:20:01 emden2 rpc.mountd: refused mount request from aurich3.ostfriesland for /srv/backups (/srv/backups): illegal port 32889

This is because QEMU does port translation to avoid clashes with host networking activities. The fix is to add the 'insecure' option to the NFS server's exports file, e.g.:

[...] /srv/backups aurich3(rw,root_squash,sync,insecure) [...]
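
After changing the exports file, the NFS server needs to re-read its export table, e.g. on a typical Linux NFS server:

exportfs -ra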

As a side note, NFS requests (and all other network traffic from QEMU) will appear to come from the host's IP address because of QEMU's internal network address translation (NAT).

Issues

mondorestore hangs when extracting configuration files from all.tar.gz on the loopback-mounted ISO image located on the NFS server. A workaround is to boot in expert mode, manually mount the first ISO image, e.g.:

mount /tmp/isodir/aurich3/aurich3_nfs_nofiles-1.iso -t iso9660 -o loop,ro /mnt/cdrom

and then copy all.tar.gz to /:

cp /mnt/cdrom/images/all.tar.gz .

This will take a surprisingly long time. But after it has finished (the copy can then be deleted) and /mnt/cdrom has been unmounted, 'mondorestore --nuke' should start OK.

It looks like this may be somewhat fixed in QEMU 0.8.1, at least on Debian systems. With 0.8.1, the restore starts but then stalls after a number of files have been unpacked, continues again after a few minutes, and so on.

Doing a restore from an ISO image works fine, though (apart from the 'disk not found' issue discussed in more detail under VMware Issues below).

VMware Server

The following was done on a Debian Sid system. It should work with slight modifications on any Linux system.

Getting VMware Server

VMware Server can be downloaded here: http://www.vmware.com/download/server/ :

  • VMware-server-e.x.p-23869.tar.gz
  • VMware-server-linux-client-e.x.p-23869.zip

Note: At the time of this writing (24 May 06), this is still a beta with an expiry date in the not too distant future. This is supposed to change once the final version is out, which will be unlimited.

Note: There are other packages like VMware-mui-e.x.p-23869.tar.gz which are not required for a server/console only scenario.

Note: Another possibility is to use VMware Player.

In addition to the actual software, it is highly recommended to also get the following manuals from http://www.vmware.com/support/pubs/ :

  • VMware Server Beta 3 Administration Guide
  • VMware Server Beta 3 Virtual Machine Guide

Setting Up VMware Server

Call me a paranoid bastard, but I don't like running reasonably complex Perl scripts that scatter files all over my beloved Debian system. (The uninstall routine is said to work well, but I haven't used it.) So, before the actual installation, we'll set up a chroot environment. You can skip this step if you are more trusting than I am and proceed directly to the installation.

Setting Up The Chroot

Note: debootstrap is Debian-specific. I am sure, though, that other distributions have similar tools for bootstrapping a system.

The chroot will potentially be quite large in size depending on the number and sizes of the virtual machines. This needs to be taken into account when choosing the location of the chroot directory.

Setting things up is then quite straightforward:

  • use debootstrap to create the chroot, e.g. (using a local apt-proxy cache):

debootstrap sid ./sid-vmware-chroot http://emden2:9999/debian

  • create a normal user in the chroot, e.g.:

chroot ./sid-vmware-chroot adduser andree

  • mount things (needs to be repeated after every reboot, or see script below):

mount --bind /tmp ./sid-vmware-chroot/tmp
mount --bind /dev ./sid-vmware-chroot/dev

Note: This will create some files in /dev, somewhat undermining the original purpose. However, without this, it does not work - there is a pipe error.

chroot ./sid-vmware-chroot mount -t proc proc /proc
chroot ./sid-vmware-chroot mount -t sysfs sys /sys

  • in the chroot, install the following packages:
    • psmisc
    • libdb3
    • gcc
    • make
    • linux-headers-$(uname -r)
    • xbase-clients, i.e.:

chroot ./sid-vmware-chroot apt-get install psmisc libdb3 gcc make linux-headers-$(uname -r) xbase-clients

Note: xbase-clients is definitely more than is strictly required. However, it comes with some potentially helpful tools.
Note: linux-headers-$(uname -r) is the headers package for the running kernel. See below for implications.

  • allow anyone access to X (needs to be repeated after every reboot):

xhost +
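
Note: 'xhost +' disables X access control completely. If that is a concern, limiting access to local connections is slightly less permissive and is usually sufficient here:

xhost +local: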

  • use the following script to set the environment and to launch the console:
    #!/bin/bash
    #
    # /usr/local/bin/chvmware - Start VMware in a chroot and bring up Console
    #
    # Changelog:
    # 21May06AL: - initial version
    # 22May06AL: - umount everything in chroot upfront
    #            - umount /proc explicitly because -a does not do it
    #            - also mount /sys in addition to /proc in chroot
    #            - use grep -c to count (and compare to 1 in mount checks)
    #

    # simple parameter check
    if [ $# -ne 2 ] ; then
      echo "Usage: $0 <chroot dir> <user in chroot>"
      exit 1
    fi

    # parameters
    CHROOT_DIR=$1
    CHROOT_USER=$2

    # allow X to be accessed by anyone
    xhost +

    # umount everything in chroot
    sudo chroot "$CHROOT_DIR" umount -a
    sudo chroot "$CHROOT_DIR" umount /proc

    # bind mount /tmp and /dev into chroot
    if [ `mount | grep -c "$CHROOT_DIR/tmp"` -ne 1 ]; then
      sudo mount --bind /tmp "$CHROOT_DIR/tmp"
    fi
    if [ `mount | grep -c "$CHROOT_DIR/dev"` -ne 1 ]; then
      sudo mount --bind /dev "$CHROOT_DIR/dev"
    fi

    # mount /proc and /sys in chroot
    if [ ! -e "$CHROOT_DIR/proc/cpuinfo" ]; then
      sudo chroot "$CHROOT_DIR" mount -t proc proc /proc
    fi
    if [ ! -e "$CHROOT_DIR/sys/kernel" ]; then
      sudo chroot "$CHROOT_DIR" mount -t sysfs sys /sys
    fi

    # start vmware if not running already
    if [ -z "`ps -C vmnet-bridge -o pid=`" ]; then
      echo "starting vmware server"
      sudo chroot "$CHROOT_DIR" invoke-rc.d vmware start
    fi

    # start vmware console if not running already
    if [ -z "`ps -C vmware -o pid=`" ]; then
      echo "starting vmware console"
      sudo chroot "$CHROOT_DIR" su "$CHROOT_USER" -c vmware
    fi

    exit 0

Note: The largest remaining issue with the chroot is that the virtual machines are not properly stopped when the system goes down. This could be addressed by means of an init.d script.
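
A rough sketch of such an init.d script is shown below (entirely hypothetical and untested; the chroot path would need adjusting and the script would have to be registered with something like 'update-rc.d chvmware defaults'):

#!/bin/bash
#
# /etc/init.d/chvmware - start/stop VMware inside the chroot at boot/shutdown
# (hypothetical sketch, not part of the original setup)
#

CHROOT_DIR=/srv/sid-vmware-chroot   # adjust to the actual chroot location

case "$1" in
  start)
    chroot "$CHROOT_DIR" invoke-rc.d vmware start
    ;;
  stop)
    chroot "$CHROOT_DIR" invoke-rc.d vmware stop
    ;;
  restart)
    chroot "$CHROOT_DIR" invoke-rc.d vmware restart
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac

exit 0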

Installing VMware Server

  • unpack the server package:

tar xvzf VMware-server-e.x.p-23869.tar.gz

  • change into the server directory and start the installation:

cd vmware-server-distrib
./vmware-install.pl

Note: Leave everything at its default value, but choose only the first networking option, i.e. no NAT and no host-only networking.
Note: Ignore the error regarding vmware-cmd.
Note: The installer should automatically find the kernel headers. If the kernel changes, the installation process will have to be run again with the new header files.

  • unpack the server console package:

tar xvzf VMware-server-console-e.x.p-23869.tar.gz
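
Note: this tarball is contained in VMware-server-linux-client-e.x.p-23869.zip; if you have not extracted that zip yet, it has to happen first:

unzip VMware-server-linux-client-e.x.p-23869.zip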

  • change into the server console directory and start the installation:

cd vmware-server-console-distrib
./vmware-install.pl

Running VMware Server

After installation, VMware should be running. If you have installed into a chroot the following should start VMware Console:

chroot ./sid-vmware-chroot/ su andree -c vmware

If you have chosen a normal install, there may be a new icon in your menu. Alternatively, the following command should get you there:

vmware

Lean back and enjoy, or rather, create your first virtual machine. Much of what was said about disk images and booting off ISOs for QEMU above applies here as well.

Issues

Restore works fine; after the reboot GRUB starts up OK and the system begins booting, but then it cannot find the hard disk. This happens for both Linux and Windows.

The underlying reason appears to be that the IDE controller inside VMware differs from the one on the real machine, and the restored systems are not prepared to deal with the new IDE controller inside the VM. Funnily enough, trying with kernel 2.4.27 does indeed work, but 2.6.16 does not (both stock Debian). The fact that Windows can't cope is not too surprising; Linux not working is more of a worry.

With regard to Linux, or more specifically Debian Sid, the problem is caused by the chipset driver not being included in the initrd image that is built when the kernel is installed. This can be fixed as follows:

After the restore is finished (but before rebooting), remount the newly restored system to, say, /mnt/target. Note that this may entail mounting multiple partitions on top of each other. Then chroot into /mnt/target, mount /proc and /sys, and run dpkg-reconfigure for all kernel images. A full session may look like this:

mkdir /mnt/target
mount /dev/hda3 /mnt/target
mount /dev/hda5 /mnt/target/tmp
mount /dev/hda6 /mnt/target/var
mount /dev/hda7 /mnt/target/usr
mount /dev/hda8 /mnt/target/usr/local
mount /dev/hda9 /mnt/target/home
chroot /mnt/target
mount -t proc proc /proc
mount -t sysfs sys /sys
dpkg-reconfigure linux-image-2.6.16-1-k7
exit
umount -a
reboot
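
If you are unsure which kernel image packages are installed in the restored system, you can list them from within the chroot with something like:

dpkg -l 'linux-image-*' | grep ^ii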

Note: The idea of running dpkg-reconfigure on the kernel image package to rebuild the initrd image is not mine. Rather, it was suggested by Guillaume Pernot here: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=325877. Thanks, Guillaume!

Appendix A: Sarge Differences

  • Call debootstrap like this (using local apt-proxy cache):

debootstrap sarge ./sarge-vmware-chroot http://gateway:9999/debian

  • Replace 'sid' with 'sarge' in all commands above
  • Manually create the file /etc/apt/sources.list in the sarge chroot, e.g.:

deb http://192.168.1.1:9999/debian sarge main
deb http://192.168.1.1:9999/security sarge/updates main
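
After creating sources.list, update the package lists inside the chroot:

chroot ./sarge-vmware-chroot apt-get update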

  • In Sarge, packages psmisc and libdb3 are already installed by debootstrap.
  • The kernel headers package on Sarge is: kernel-headers-$(uname -r)
  • Install package module-init-tools for handling 2.6 kernels.