Tuesday, December 22, 2009

RHEL, KVM virtualization, and migrating from Xen

My Christmas project is to convert the half-dozen or so Xen virtual guests we have over to KVM.

Why KVM?  It's the future for Red Hat; it's lightweight; and it's built right into the kernel, so you don't need to run a special hypervisor kernel the way you do with Xen.

My first project was to take my Dell server that's currently running Xen and a few Windows instances, add a RHEL image or two, and see how much work it would be to get everything switched over.  The official line is that while it's possible to migrate a Xen host to KVM, it's not possible to migrate a guest.  We'll see.

The place to start is the brand new RHEL Virtualization Guide, here:

http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Virtualization_Guide/index.html

Well, no.  The place to start was very, very good backups.  These guests all live on loopback disk files, so I backed those files up first.
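In my case that just meant shutting each Xen guest down and copying its loopback file somewhere safe; something like this, where the destination path is just an example:

xm shutdown zimbra
cp -a /xmdata1/xen-zimbra /backup/xen-zimbra.pre-kvm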

On the host, I did a 'yum install kvm' to pull in the KVM bits.  Then I did a 'yum install kernel kernel-devel' on both the host and the guest, changed grub.conf to boot into the new non-Xen kernel, and rebooted the host.
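For the record, the package-and-kernel dance boils down to something like this (the kernel version strings in your grub.conf will obviously vary):

yum install kvm                    # host only
yum install kernel kernel-devel    # host and guest
vi /boot/grub/grub.conf            # point 'default' at the non-xen entry
reboot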

Probably the biggest change on the host is networking.  You'll either have to use NAT or bridging, and since I didn't want to add Yet Another Layer of NAT into my network, I chose bridging.

/etc/sysconfig/network-scripts/ifcfg-eth1 originally looks like this:

DEVICE=eth1
BROADCAST=192.168.1.255
IPADDR=192.168.1.140
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
GATEWAY=192.168.1.250
TYPE=Ethernet

I had to comment out the IPADDR line, and add this to the bottom:

BRIDGE=br1
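With those two changes, the finished ifcfg-eth1 looks like this:

DEVICE=eth1
BROADCAST=192.168.1.255
#IPADDR=192.168.1.140
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
GATEWAY=192.168.1.250
TYPE=Ethernet
BRIDGE=br1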

Then add ifcfg-br1:

DEVICE=br1
BROADCAST=192.168.1.255
IPADDR=192.168.1.140  
NETMASK=255.255.255.0
NETWORK=192.168.1.0 
ONBOOT=yes
GATEWAY=192.168.1.250
TYPE=Bridge

(Note: case is important here.  It must be a capital 'B' followed by lowercase 'ridge'.)

And restart the network with 'service network restart'.  The kernel routing table (this is route output) will now look like this:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     *               255.255.255.0   U     0      0        0 br1
default         saratoga1.denma 0.0.0.0         UG    0      0        0 br1
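If you want a second opinion on the bridge itself, 'brctl show' (from the bridge-utils package) should show eth1 attached to br1 - something along these lines, though your bridge id will differ:

brctl show
bridge name     bridge id               STP enabled     interfaces
br1             8000.xxxxxxxxxxxx       no              eth1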

Add this to /etc/sysctl.conf:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

and then

sysctl -p /etc/sysctl.conf

This is so you don’t have to put rules into iptables to forward the bridge traffic.
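A quick read-back confirms the settings took:

sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 0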


OK: new kernel installed, and networking fixed.  Now it's time to build a guest xml file.  As I said, I've already got a loopback file for the Xen image, sitting in /xmdata1.  So let's import that file:

virt-install --name=zimbra --ram=2048 --vcpus=2 --check-cpu --accelerate \
    --file=/xmdata1/xen-zimbra --vnc --import

The --import flag tells virt-install to use the existing disk image instead of kicking off a fresh install.  You've now got a definition file in /etc/libvirt/qemu called zimbra.xml.

The --accelerate flag is important.  Without it, the guest gets created as domain type='qemu', which caused each guest to use 100% of the host's CPU and run bog slow.  If you forget the switch, edit the xml file (they all live in /etc/libvirt/qemu) and change the type to 'kvm' by hand - but if you do that, remember to restart /etc/init.d/libvirtd.
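For reference, the line in question is the very first element in /etc/libvirt/qemu/zimbra.xml.  With --accelerate it reads:

<domain type='kvm'>

and without it you'll see type='qemu' instead.  (I believe a 'virsh define zimbra.xml' would also pick up a hand edit, but restarting libvirtd definitely works.)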

You control the guests using the 'virsh' command.  So with all of that done, I started up zimbra with:

virsh start zimbra

And that was it.  I didn't have to do a thing on the guest side, except add a normal kernel.  It's been running for two days now without a hitch.  To make sure a guest starts up when the host does, simply do a

virsh autostart kantech
Domain kantech marked as autostarted

and I'm done.
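Before moving on: a few other stock virsh subcommands come in handy here:

virsh list --all        # every defined guest, running or not
virsh shutdown zimbra   # polite ACPI shutdown
virsh destroy zimbra    # yank the virtual power cord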

But that was the easy part, right? I had two more guests to get going - a Windows 2000 Server loop disk, which isn't even officially supported, and a 2008R2 64-bit server on a real disk. Those were going to be the fun ones.  I budgeted two days, since I'm an optimist.

For the Win2K server, I did the virt-install exactly as above, started it from the console, and... it worked.  The stupid thing booted, installed all new drivers, rebooted, and it's been running ever since.

The 2008R2 server was slightly trickier.  Because it sits on 'real' disk partitions rather than a loopback file, I had to do this:

virt-install --name=antivirus --ram=2048 --vcpus=2 --check-cpu --accelerate \
    --disk path=/dev/sdb1 --disk path=/dev/sdb2 --disk path=/dev/sde1 --import

Did that; did the start, and... this one didn't even bother to install new drivers.  It just ran.

So the week-long installation and debugging process has taken about three hours.  I'm going to spend the rest of the week catching up on BOFH episodes.


