Although we have a number of Linux servers running on our ESX hosts, until recently the need had never arisen to convert an existing physical Linux server to a virtual one. However, lately I've been doing a lot of work on using virtualization to improve Disaster Recovery options for SME clients (typically 2-25 servers), where cost is always important. Nowadays it's not unusual to find networks of this size running one or two Linux boxes amongst their Windows servers, but often the in-house IT staff have only minimal Linux admin skills.
VMware have now released Converter 4, which has support for P2V conversion of Linux systems, but only live conversion using a helper VM, and that has a number of drawbacks; for a start it currently only supports Red Hat, SUSE and Ubuntu. My Linux guru said he wouldn't bother using any virtualization utilities; instead he would back up all the config files and other data, install the OS from scratch on a new Virtual Machine and restore the configs. I'm sure that would work, but I wouldn't have a clue where to start doing that in Linux, so instead I worked out a simple step-by-step process that any Windows IT person should be able to follow easily. Another important advantage of this "cold cloning" method is that no changes are made to the source Linux server, as it remains offline throughout the process, so there is no risk of accidentally corrupting it.
Step One: Obtain the Required Tools
First of all, this guide assumes you will be using VMware ESX3 as your virtual environment, although there is no reason you can't import your virtualized server into a VMware Server system instead. You will need an ESX server with enough free storage capacity to hold the whole capacity of your Linux server's hard disks, including free space. So if your Linux server has a 120GB RAID5 disk with 30GB of data on it, you will still need 120GB of free space on your ESX datastore. After conversion you will be able to shrink the virtual disks to reclaim some of that space if necessary.
The conversion tool we will use is the VMware Converter 3 Enterprise BootCD; if you already have a VMware Infrastructure license or support agreement, you are entitled to download this from here. However, if you are still only evaluating virtualization you can still get it: I discovered that registering for the 60-day Infrastructure trial enables the other downloads for you as well. Make sure you download the Zip file version of "VMware VCenter Converter 3.0.3 (Standalone Enterprise Edition)", as this also includes the BootCD image; the standard installer download does not. Once you have downloaded it, just extract the ISO file, burn it to a CD and you are ready to go.
Step Two: Virtualize your Server
Shut down your Linux server and boot it from the Converter BootCD. This loads a WinPE environment with driver support for a wide range of hardware, so it shouldn't have any problems detecting your system. Should you be unfortunate enough to have an exotic server that isn't supported, VMware do supply a tool for creating your own BootCD with the necessary drivers, but that is beyond the scope of this article. Once VMware Converter has loaded you will find that it is very similar to the Windows version, if you have used that before. It will attempt to obtain an IP address by DHCP, but if you want to set one manually just go to the "Administration" menu, select "Network Configuration" and enter the details. Usually you will convert the server directly to an ESX server, but if you want to convert to a file you should map a network drive first in the "Network Configuration" window. Now you are ready to convert your server as follows:
- Start the wizard by clicking “Convert Machine”
- Click “Next”, then select “Import all disks and maintain size” (resizing Linux disks when virtualizing is not supported)
- Click “Next” to move onto the Destination selection screen
- If you are converting to an ESX server select “VMware Infrastructure Virtual Machine” and click “Next”
- If you want to create a standalone image for VMware Server etc. then select “Other Virtual Machine”
- On the next window either enter the IP address and login details for your ESX server, or browse to the network drive you created earlier for your standalone image.
- On the next few windows you will need to enter a name for your VM and then select which datastore it will be created in.
- Next select which virtual NICs on your ESX server you want to be available for your VM, and whether to connect them at power on
- The guest OS customization and VMware Tools install options are not available for Linux VMs so you can ignore those sections.
- Now click “Finish” and the conversion process will start; depending on the size of your server and the speed of the network connection, this could take several hours.
- When the conversion process has completed you may remove the BootCD from the server and reboot it if you want it back online whilst you test your new VM.
Step Three: Power Up Your Linux VM
Once the conversion process is completed, connect to your ESX server and you should see your new VM waiting for you to start it up, but resist the urge for the moment. Right-click the VM and select the “Edit Settings” option; the Converter isn’t very good at analyzing Linux systems, so you will need to set various options manually. The main one is the memory setting: adjust it to a suitable amount, then check the list of installed hardware. I’d recommend removing anything that isn’t essential, such as USB support and floppy drives. Also check the “Guest Operating System” option under the “Options” tab; you will need to change this to whichever option is closest to your install.
Now you can try starting your VM. If you’re lucky it will boot up fine with everything in place; chances are, though, it won’t, and you will get an error screen. The error message will hopefully give you some indication of the problem, but usually it comes down to the fact that the bootloader cannot find the main Linux install, much like the “cannot find NTLDR” errors in Windows. Again like Windows, it will not be very helpful when it comes to fixing the problem, so you need to get yourself a recovery environment (the equivalent of the Windows Recovery Console). With most Linux distros this means booting from the first CD/DVD of the install media; you will then either get a menu option offering “Linux Rescue”, or you can try “linux rescue” at the boot prompt. The Linux Rescue mode is preferable as it should mount the hard drive file system for you; you could do the same with any “LiveCD”, but then you have to mount the relevant partitions manually.
Once you have booted to a terminal prompt, I suggest the first step is to try the “fdisk -l” command, which will list the partitions on your virtual hard drive. Depending on your Linux distro you may see just two: a small boot partition and a large “Linux LVM” partition.
Otherwise you may have the more classic Linux multiple-partition configuration, where in addition to the boot partition there may be three or four more (var, usr, swap etc.). Note the initial “Device” column, as this shows you where Linux sees the disks and partitions; usually this will now be /dev/sda1 for the boot partition, but if you have multiple virtual hard disks you may also see /dev/sdb and so forth.
The “s” is for SCSI; if it were an IDE disk it would be /dev/hda, and this is where the first potential problem comes from: if you have virtualized a system which had an IDE boot disk, the bootloader is going to be looking in the wrong place for the OS.
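To show what you are looking for, here is a sketch that parses a hypothetical “fdisk -l” listing (the disk size, block counts and partition layout are made up for illustration; a real rescue prompt would print this directly):

```shell
# Hypothetical output of "fdisk -l" from a converted VM with the
# two-partition LVM layout described above, saved to a file so the
# filtering step below can be demonstrated.
cat > /tmp/fdisk-sample.txt <<'EOF'
Disk /dev/sda: 120.0 GB, 120034123776 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          25      200781   83  Linux
/dev/sda2              26       14593   117017460   8e  Linux LVM
EOF

# Pick out just the partition lines: device name, boot flag and type.
grep '^/dev/' /tmp/fdisk-sample.txt
```

The “*” in the Boot column marks the boot partition, and the System column tells you at a glance whether you are dealing with plain Linux partitions or an LVM volume.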
There are two common bootloaders for Linux, GRUB and LILO, and a quick Google will find plenty of “how to” guides explaining how to fix problems with them. It basically comes down to either reinstalling your bootloader from scratch, or editing its “.conf” file so it has the correct /dev/sda entries. Provided you don’t have a “Linux LVM” partition, you should find this gets your VM booting into Linux fine. With an LVM partition an extra step is necessary, as the bootloader loads an “initrd” image, a ramdisk containing the various drivers required to access the file system. In the interests of efficiency the “initrd” is usually built during the original install process to include only the drivers that system needs, so of course now you have virtualized it, the SCSI controller has changed to a type it doesn’t recognize. Fortunately it’s not hard to fix provided you have the Rescue CD for your distribution, as that will automatically load the correct drivers (or modules, as Linux calls them).
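The “.conf” edit can be sketched as a one-liner. This example works on a hypothetical GRUB config fragment in /tmp (on a real system the file would be /boot/grub/grub.conf or menu.lst, and you should always keep a backup):

```shell
# Hypothetical grub.conf fragment still pointing at the old IDE disk (hda).
cat > /tmp/grub.conf <<'EOF'
default=0
timeout=5
title Linux
        root (hd0,0)
        kernel /vmlinuz-2.6.27 ro root=/dev/hda2
        initrd /initrd-2.6.27.img
EOF

# After P2V the boot disk appears as SCSI, so hda references must become sda.
# Keep a backup copy, then rewrite every occurrence in place.
cp /tmp/grub.conf /tmp/grub.conf.bak
sed -i 's|/dev/hda|/dev/sda|g' /tmp/grub.conf

# Confirm the kernel line now boots from the SCSI device.
grep 'root=' /tmp/grub.conf
```

If the VM has several disks, repeat the check for hdb/sdb and so on before rebooting.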
I did a test conversion using a Dell PowerEdge 1600 running a standard install of Fedora Core 10, which uses an LVM partition and a ramdisk loader; sure enough, the first boot failed with an error of exactly this kind.
The process to fix this wasn’t too difficult; once I’d remembered to change the VM BIOS to boot from the virtual CD first, I connected the Fedora installation DVD ISO and did the following:
- Boot VM from installation DVD, choose “rescue installed system”
- Change to HD filesystem – at the prompt enter chroot /mnt/sysimage
- Change to boot partition – cd /boot
- View partition contents – ls
- Rebuild the ramdisk installer – mkinitrd -v -f initrdxxxx.img xxxxx (note that once you start typing the initrdxx part, pressing the Tab key will autocomplete; the “xxxx” part is the kernel version details)
- You should get a couple of pages of output; if you get nothing but the console prompt again, you have probably done something wrong, as the mkinitrd command doesn’t appear to have any useful error reporting.
Once you’ve done that, just type exit a couple of times and your VM will reboot; it should then load up fine.
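Rather than relying on Tab completion, the kernel version string that mkinitrd needs can be read off the filenames in /boot. A minimal sketch, using a simulated /boot directory with a made-up Fedora kernel version (on the real system, after chroot /mnt/sysimage, you would point this at /boot itself):

```shell
# Simulated /boot contents; the version string here is hypothetical.
mkdir -p /tmp/boot
touch /tmp/boot/vmlinuz-2.6.27.5-117.fc10.i686 \
      /tmp/boot/initrd-2.6.27.5-117.fc10.i686.img

# Strip everything up to and including "vmlinuz-" from the kernel image
# name to recover the exact version string.
KVER=$(ls /tmp/boot/vmlinuz-* | sed 's|.*/vmlinuz-||')

# Print the mkinitrd command you would then run from the rescue shell.
echo "mkinitrd -v -f /boot/initrd-${KVER}.img ${KVER}"
```

This avoids mistyping the long version string, which would leave mkinitrd rebuilding a ramdisk the bootloader never loads.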
While not as simple as virtualizing Windows systems, you shouldn’t let the fact that it’s Linux put you off; most of the problems you might encounter aren’t difficult to solve. Of course, another advantage of virtualization is that you can now snapshot your VM before changing bootloader options, so you don’t have to worry about making a mess of your VM.
Once you have your Linux VM up and running there isn’t much else to worry about, although you may have to set any static IP addresses manually again, as the NIC will have changed. From the ESX point of view you can manage it like any other VM and back it up using VCB if required.
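On Red Hat-style distros the NIC change usually shows up as a stale hardware address in the interface config. A sketch of the fix, using a hypothetical ifcfg-eth0 in /tmp (on the real VM the file lives under /etc/sysconfig/network-scripts/, and the MAC and IP shown are invented):

```shell
# Hypothetical pre-conversion NIC config still pinned to the physical
# server's MAC address.
cat > /tmp/ifcfg-eth0 <<'EOF'
DEVICE=eth0
HWADDR=00:14:22:AA:BB:CC
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
EOF

# The virtual NIC has a new MAC, so delete the stale HWADDR line; the
# interface then binds to whatever adapter the VM presents.
sed -i '/^HWADDR=/d' /tmp/ifcfg-eth0

# Show the cleaned config.
cat /tmp/ifcfg-eth0
```

After editing the real file, restarting the network service (or rebooting the VM) should bring the interface up with its old address on the new virtual NIC.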