Changes between Initial Version and Version 1 of VirtualisationTools


Timestamp: Aug 3, 2006, 3:22:26 PM
Author: andree
Comment: Copy from BerliOS

Use of virtualisation tools with Mondo Rescue can be quite helpful for development, testing and debugging. It can also give an administrator slightly more peace of mind in situations where real restore runs are not possible.

The following describes the use of virtualisation tools available today in conjunction with Mondo Rescue. Note that the emphasis here is on doing restores in a virtual machine (VM). You are, of course, free to run mondoarchive inside the VM or guest as well.

== QEMU ==

QEMU is a CPU emulator for a number of different architectures, both for the host and the guest (VM). Performance-wise, QEMU probably lies somewhere between Bochs and VMware.

=== Getting QEMU ===

Being GPL'd software (as opposed to VMware), QEMU should come with your distribution; on Debian, for instance, there is a qemu package that works just fine (for me in sid).

However, if QEMU is not included in your distribution or you want a newer version, you can get it from http://www.qemu.org/. I have not compiled it myself - so if you have, feel free to add the missing details!

Just ensure that you use gcc v3.x to compile qemu/kqemu, as gcc v4.x is not supported yet. It may get tricky when you want kqemu (non-GPL) to load with your kernel, as you will again need the same compiler version - which, on up-to-date distributions, probably means recompiling the kernel with gcc v3.x.

=== Setting Things Up ===

QEMU comes with quite extensive documentation, including a detailed manpage.

Getting going really only requires the following two things though:

Create a disk image using qemu-img:

  qemu-img create hda.img 8G

This creates a sparse 8 GB raw disk image. It is neither a partition nor a filesystem - it is a whole disk. Sparse means that the image initially uses very little space in the host filesystem and grows as required, up to 8 GB. As a side note, sparse also means that you can create a disk image file the size of your physical hard disk and have Mondo Rescue restore your physical hard disk onto the disk image QEMU uses, as long as somewhat more than 50% of your physical disk is free. This can be useful to avoid mondorestore reporting that there is not enough space inside QEMU.
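The sparse behaviour is easy to see for yourself. A raw qemu image is just a sparse file, so the sketch below uses dd to create one (qemu-img produces the same kind of file) and compares the apparent size with the space actually used on the host:

```shell
# Create an 8 GB sparse file, equivalent to 'qemu-img create hda.img 8G'
# for the purpose of this demonstration.
dd if=/dev/zero of=hda.img bs=1 count=0 seek=8G 2>/dev/null

# apparent size: 8 GB
ls -lh hda.img

# space actually allocated on the host filesystem: (close to) zero
du -h hda.img

rm hda.img
```

The gap between the two numbers is the unallocated part of the image; it shrinks as the guest writes data.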

The second prerequisite is to have a Mondo Rescue image or CD ready. You can simply use the mondorescue.iso that mondoarchive created (under /root/images/mindi by default).

=== Running QEMU ===

Run QEMU, e.g. like this, using the disk image we created beforehand and the mondorescue.iso from our last mondoarchive run:

  qemu -cdrom /root/images/mindi/mondorescue.iso -m 256 -boot d -hda hda.img

That's it. If all goes well, you get a new window with the familiar Mondo Rescue boot screen.

Some remarks:
 * '-m 256' gives 256 MB of RAM to the qemu instance.
 * '-hda hda.img' could be written as just 'hda.img'. However, I include the switch for clarity's sake. As a side note, this means that only IDE disks are supported by qemu, no SCSI.
 * '-boot d' makes qemu boot off the CDROM. Once a restore is finished and you want to check the result, i.e. verify that the restored system is bootable and working, use '-boot c'.
 * If the task is to boot off a real optical disc, use '-cdrom /dev/hdc' or whatever the correct device for the optical drive is.

== Advanced Topics ==

=== Networking ===

==== Hardware ====

QEMU emulates a PCI NE2000 NIC. This means you need support for it on your Mondo Rescue restore media. The only way I have gotten this to work is by appending 8390 and ne2k-pci to the NET_MODS variable in mindi (< 2.0.8), which may then look like this:

  NET_MODS="sunrpc nfs nfs_acl lockd loop mii e100 bcm5700 e1000 eepro100 tg3 pcnet32 vmxnet 8390 ne2k-pci"
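One way to make that change without editing mindi by hand is a one-line sed over the NET_MODS assignment. This is only a sketch: the location of the mindi script varies by distribution (the path below is an assumption; adjust MINDI to your installation), and a backup copy is kept first:

```shell
# Append the two NE2000 modules to mindi's NET_MODS line.
# MINDI is an assumed path - adjust to where your mindi script lives.
MINDI=/usr/sbin/mindi
cp "$MINDI" "$MINDI.orig"

# Capture everything inside the quotes and re-emit it with the
# two modules appended before the closing quote.
sed -i 's/^\(NET_MODS="[^"]*\)"/\1 8390 ne2k-pci"/' "$MINDI"

# show the result
grep '^NET_MODS=' "$MINDI"
```

Remember that the change has to be re-applied if the mindi package is upgraded.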

==== Configuration ====

QEMU supports networking in various ways. The simplest is 'user networking', which requires no setup (and no command line switches).

QEMU has a built-in DHCP server at address 10.0.2.2 which always issues the IP address 10.0.2.15. Theoretically, adding:

  ipconf=dhcp

should make networking start using busybox's udhcpc. Unfortunately, I have not had any success with this yet. However, adding the following kernel parameter does the trick for me and yields a working network connection:

  ipconf=10.0.2.15:255.255.255.0:10.0.2.255:10.0.2.2

So, a complete kernel line at the prompt could read:

  boot: nuke ipconf=10.0.2.15:255.255.255.0:10.0.2.255:10.0.2.2
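The positional fields of the ipconf= parameter are not named anywhere above; judging from the example values, the order appears to be IP address, netmask, broadcast address, gateway. A tiny sketch assembling the kernel line from named parts makes the format explicit:

```shell
# Field order inferred from the example above: IP:netmask:broadcast:gateway.
IP=10.0.2.15
NETMASK=255.255.255.0
BROADCAST=10.0.2.255
GATEWAY=10.0.2.2

echo "nuke ipconf=${IP}:${NETMASK}:${BROADCAST}:${GATEWAY}"
# -> nuke ipconf=10.0.2.15:255.255.255.0:10.0.2.255:10.0.2.2
```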

==== Using QEMU for mondorescue PXE mode ====

I've found only one mode which works in a useful way for our tests:

 * Create a disk image using qemu-img:

  qemu-img create pxe.qemu 3G

 * Go to ROM-o-Matic to generate a bootable image that supports network boot for the emulated NIC ''(thanks to [http://tomas.andago.com/cgi-bin/trac.cgi/wiki/QEMUPXE])''
   * Select the latest version (as of 20060528 this is 5.4.2)
   * Choose the NIC/ROM type. Qemu emulates a Realtek 8029 - choose ns8390:rtl8029 -- [0x10ec,0x8029]
   * Select the bootable ISO image without legacy floppy emulation.
   * Go to the configuration page and set the following fields:
     * Check USE_STATIC_BOOT_INFO
     * STATIC_CLIENT_IP: 10.0.2.15
     * STATIC_SUBNET_MASK: 255.255.255.0
     * STATIC_SERVER_IP: 10.0.2.2
     * STATIC_GATEWAY_IP: 10.0.2.2
     * STATIC_BOOTFILE: /usr/local/tftpboot/pxelinux.bin
   * Click on 'Get ROM' (a copy is also available at ftp://ftp.mondorescue.org/mondo/ftp/pxe/pxe-5.4.2.iso)

 * Configure your pxelinux environment as usual (refer to http://syslinux.zytor.com/pxe.php). For example, add the following lines to your pxelinux.cfg/default configuration file:

{{{
label mondo
        KERNEL kmondo
        APPEND initrd=imondo.img load_ramdisk=1 prompt_ramdisk=0 ramdisk_size=36864 rw root=/dev/ram expert_mode acpi=off apm=off devfs=nomount exec-shield=0 noresume selinux=0 barrier=off
}}}

 * Copy the correct kernel and initrd from your mondoarchive images:
   * mount /mondo/mondo/images/victoria2-1.iso /mnt/cdrom -o loop
   * cp /mnt/cdrom/vmlinuz /usr/local/tftpboot/kmondo
   * cp /mnt/cdrom/initrd.img /usr/local/tftpboot/imondo.img

 * Use that ISO to launch your qemu VM:

  qemu -hda pxe.qemu -cdrom pxe-5.4.2.iso -boot d -tftp /usr/local/tftpboot

 * Enjoy!

==== Remote NFS Server Configuration ====

Even if all of the above works, access to the NFS share needed for a restore may still fail because the remote NFS server rejects mount requests coming from an unprivileged port. You may find something similar to the following in /var/log/syslog:

  May 24 17:20:01 emden2 rpc.mountd: refused mount request from aurich3.ostfriesland for /srv/backups (/srv/backups): illegal port 32889

This is because QEMU does port translation to avoid clashes with host networking activities. The fix is to add the 'insecure' option in the NFS server's exports file, e.g.:

  [...]
  /srv/backups aurich3(rw,root_squash,sync,insecure)
  [...]
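For completeness, here is what the change looks like end to end on the NFS server, as a sketch (share path and client name are taken from the example above; adjust to your setup):

```shell
# Add (or amend) the export line with the 'insecure' option.
echo '/srv/backups aurich3(rw,root_squash,sync,insecure)' >> /etc/exports

# Re-read /etc/exports and apply the change without restarting the server.
exportfs -ra

# Verify that the option is active for the share.
exportfs -v | grep insecure
```

No restart of the NFS daemons is required; exportfs -ra re-exports everything in place.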

As a side note, NFS requests (and all other network traffic from QEMU) will appear to come from the host's IP address because of QEMU's internal network address translation (NAT).

=== Issues ===

mondorestore hangs when extracting configuration files from all.tar.gz on the loopback-mounted ISO image located on the NFS server. A workaround is to boot in expert mode, manually mount the first ISO image, e.g.:

  mount /tmp/isodir/aurich3/aurich3_nfs_nofiles-1.iso -t iso9660 -o loop,ro /mnt/cdrom

and then copy all.tar.gz to /:

  cp /mnt/cdrom/images/all.tar.gz .

This takes a surprisingly long time. But after it has finished (the copy can then be deleted) and /mnt/cdrom has been unmounted, 'mondorestore --nuke' should start OK.

This appears to be somewhat fixed in 0.8.1, at least on Debian systems. With 0.8.1, the restore starts but then stalls after a number of files have been unpacked, continues again after a few minutes, and so on.

Doing a restore from an ISO image works fine, though (minus the disk-not-found issue discussed in more detail under VMware Issues below).

== VMware Server ==

The following was done on a Debian Sid system. It should work with slight modifications on any Linux system.

=== Getting VMware Server ===

VMware Server can be downloaded from http://www.vmware.com/download/server/ :

 * VMware-server-e.x.p-23869.tar.gz
 * VMware-server-linux-client-e.x.p-23869.zip

'''Note:''' At the time of this writing (24 May 06), this is still a beta and has an expiry date not too far in the future. This is supposed to change once the final version is out, which will be unlimited.

'''Note:''' There are other packages, like VMware-mui-e.x.p-23869.tar.gz, which are not required for a server/console-only scenario.

'''Note:''' Another possibility is to use VMware Player.

In addition to the actual software, it is highly recommended to also get the following manuals from http://www.vmware.com/support/pubs/ :

 * VMware Server Beta 3 Administration Guide
 * VMware Server Beta 3 Virtual Machine Guide

=== Setting Up VMware Server ===

Call me a paranoid bastard, but I don't like running reasonably complex Perl scripts that scatter files all over my beloved Debian system. (The uninstall routine is said to work well, but I haven't used it.) So, before the actual installation, we'll set up a chroot environment. You can skip this step if you are more trusting than I am and proceed directly to the installation.

==== Setting Up The Chroot ====

'''Note:''' debootstrap ''is'' Debian-specific. I am sure, though, that other distributions have similar tools for bootstrapping a system.

The chroot can potentially become quite large, depending on the number and sizes of the virtual machines. This needs to be taken into account when choosing the location of the chroot directory.

Setting things up is then quite straightforward:

 * use debootstrap to create the chroot, e.g. (using a local apt-proxy cache):
      debootstrap sid ./sid-vmware-chroot http://emden2:9999/debian
 * create a normal user in the chroot, e.g.:
      chroot ./sid-vmware-chroot adduser andree
 * mount things (needs to be repeated after every reboot, or see the script below):
      mount --bind /tmp ./sid-vmware-chroot/tmp
      mount --bind /dev ./sid-vmware-chroot/dev
'''Note:''' This will create some files in /dev, somewhat undermining the original purpose. However, without it, things do not work - there is a pipe error.
      chroot ./sid-vmware-chroot mount -t proc proc /proc
      chroot ./sid-vmware-chroot mount -t sysfs sys /sys
 * in the chroot, install the following packages:
   * psmisc
   * libdb3
   * gcc
   * make
   * linux-headers-`uname -r`
   * xbase-clients
   i.e.:
      chroot ./sid-vmware-chroot apt-get install psmisc libdb3 gcc make linux-headers-`uname -r` xbase-clients

'''Note:''' xbase-clients is definitely more than is strictly required. However, it comes with some potentially helpful tools.
'''Note:''' linux-headers-`uname -r` is the headers package for the running kernel. See below for implications.

 * allow anyone access to X (needs to be repeated after every reboot; note that 'xhost +' disables X access control entirely):
      xhost +

 * use the following script to set up the environment and to launch the console:
  {{{
  #!/bin/bash
  #
  # /usr/local/bin/chvmware - Start VMware in a chroot and bring up Console
  #
  # Changelog:
  # 21May06AL: - initial version
  # 22May06AL: - umount everything in chroot upfront
  #            - umount /proc explicitly because -a does not do it
  #            - also mount /sys in addition to /proc in chroot
  #            - use grep -c to count (and compare to 1 in mount checks)
  #

  # simple parameter check
  if [ $# != 2 ] ; then
    echo "Usage: $0 <chroot dir> <user in chroot>"
    exit 1
  fi

  # parameters
  CHROOT_DIR=$1
  CHROOT_USER=$2

  # allow X to be accessed by anyone
  xhost +

  # umount everything in chroot
  sudo chroot $CHROOT_DIR umount -a
  sudo chroot $CHROOT_DIR umount /proc

  # bind mount /tmp and /dev into chroot
  if [ `mount | grep -c "$CHROOT_DIR/tmp"` -ne 1 ]; then
    sudo mount --bind /tmp $CHROOT_DIR/tmp
  fi
  if [ `mount | grep -c "$CHROOT_DIR/dev"` -ne 1 ]; then
    sudo mount --bind /dev $CHROOT_DIR/dev
  fi

  # mount /proc and /sys in chroot
  if [ ! -e "$CHROOT_DIR/proc/cpuinfo" ]; then
    sudo chroot $CHROOT_DIR mount -t proc proc /proc
  fi
  if [ ! -e "$CHROOT_DIR/sys/kernel" ]; then
    sudo chroot $CHROOT_DIR mount -t sysfs sys /sys
  fi

  # start vmware if not running already
  if [ -z "`ps -C vmnet-bridge -o pid=`" ]; then
    sudo chroot $CHROOT_DIR invoke-rc.d vmware start
  fi

  # start vmware console if not running already
  if [ -z "`ps -C vmware -o pid=`" ]; then
    sudo chroot $CHROOT_DIR su $CHROOT_USER -c vmware
  fi

  exit 0
  }}}

'''Note:''' The largest remaining issue with the chroot is that the virtual machines are not properly stopped when the system goes down. This could be remedied by means of an init.d script.

==== Installing VMware Server ====
 * unpack the server package:
      tar xvzf VMware-server-e.x.p-23869.tar.gz
 * change into the server directory and start the installation:
      cd vmware-server-distrib
      ./vmware-install.pl
'''Note:''' Leave everything at its default value, but choose only the first networking option, i.e. no NAT and no host-only networking.
'''Note:''' Ignore the error regarding vmware-cmd.
'''Note:''' The installer should automatically find the kernel headers. If the kernel changes, the installation process has to be run again with the new header files.
 * unpack the server console package (contained in VMware-server-linux-client-e.x.p-23869.zip):
      tar xvzf VMware-server-console-e.x.p-23869.tar.gz
 * change into the server console directory and start the installation:
      cd vmware-server-console-distrib
      ./vmware-install.pl


==== Running VMware Server ====

After installation, VMware should be running. If you have installed into a chroot, the following should start VMware Console:
  chroot ./sid-vmware-chroot/ su andree -c vmware

If you have chosen a normal install, there may be a new icon in your menu. Alternatively, the following command should get you there:
  vmware

Lean back and enjoy - or rather, create your first virtual machine. Many of the things said about disk images and booting off ISOs for QEMU above apply here as well.

==== Issues ====

The restore works fine and after reboot GRUB starts up OK; the system starts booting but then cannot find the hard disk. This happens both for Linux and Windows.

The underlying reason appears to be that the IDE controller inside VMware differs from the one in the real machine. The restored systems are not prepared to deal with the new IDE controller inside the VM. Funnily enough, kernel 2.4.27 does indeed work, but 2.6.16 does not (both stock Debian). The fact that Windows can't cope is not too surprising. Linux not working is more of a worry.

With regard to Linux, or more specifically Debian Sid, the problem is caused by the chipset driver not being included in the initrd image that is built when the kernel is installed. This can be fixed as follows:

After the restore is finished (but before rebooting), remount the newly restored system to, say, /mnt/target. Note that this may entail mounting multiple partitions on top of each other. Then chroot into /mnt/target, mount /proc and /sys, and run dpkg-reconfigure for all kernel images. A full session may look like this:

{{{
mkdir /mnt/target
mount /dev/hda3 /mnt/target
mount /dev/hda5 /mnt/target/tmp
mount /dev/hda6 /mnt/target/var
mount /dev/hda7 /mnt/target/usr
mount /dev/hda8 /mnt/target/usr/local
mount /dev/hda9 /mnt/target/home
chroot /mnt/target
mount -t proc proc /proc
mount -t sysfs sys /sys
dpkg-reconfigure linux-image-2.6.16-1-k7
exit
umount -a
reboot
}}}

'''Note:''' The idea of running dpkg-reconfigure on the kernel image package to rebuild the initrd image is not mine. Rather, it was suggested by Guillaume Pernot here: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=325877. Thanks, Guillaume!


==== Appendix A: Sarge Differences ====

 * Call debootstrap like this (using a local apt-proxy cache):
      debootstrap sarge ./sarge-vmware-chroot http://gateway:9999/debian
 * Replace 'sid' with 'sarge' in all commands above.
 * Manually create the file /etc/apt/sources.list in the sarge chroot, e.g.:
      deb http://192.168.1.1:9999/debian sarge main
      deb http://192.168.1.1:9999/security sarge/updates main
 * In Sarge, the packages psmisc and libdb3 are already installed by debootstrap.
 * The kernel headers package on Sarge is kernel-headers-`uname -r`.
 * Install the package module-init-tools for handling 2.6 kernels.