KVM was up second, being the (default) Hypervisor in Ubuntu Server 12.04 LTS, with which I already had some experience. Ubuntu Server is free and the LTS editions come with "Long Term Support", so updates are made available over several years. No hassles with download registration or license keys… easy. The Ubuntu Server Guide documents how to install KVM for virtualization, but the setup process also offers it as the last step, where you can state your intent for the server. In my case, I selected only SSH server and VM host as functions for the box. Snappy, with a well-defined result. I already had experience with Ubuntu Server, and it generally impresses me with its ease of installation.
My first intent was to use thumb drives, just as for VMWare, but although booting from one works, the installer specifically looks for a CD-ROM device and simply refuses to use the USB key. Installation to a USB drive was not possible either, so I installed to the 500 GB drive. This was no big disadvantage compared to the VMWare setup, because there I had to claim the same drive for the obligatory datastore. KVM itself required no additional configuration; just follow the guide to install the necessary bits. The first network interface defaulted to DHCP, but I changed that to a fixed IP address and was ready to leave the console alone.
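For reference, a static address on a 12.04 host is set in /etc/network/interfaces; the addresses below are examples, not my actual network:

```
# /etc/network/interfaces (Ubuntu 12.04) -- example addresses
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
```

A `sudo /etc/init.d/networking restart` (or a reboot) applies the change.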
Getting a management interface up and running turned out to be a bit more work, as I'm trying to be as lazy as possible and wanted a GUI. Virt-manager is great, but you need an X11 server to run it, because there is no Windows-based app (nor any effort or interest in building one). I have X11 machines available, but was working from a Windows 7 machine as "controller", so I had Xming and PuTTY installed from earlier experiments. This works great: you tell PuTTY to enable X11 tunneling and it will cause the DISPLAY environment variable to be set at logon, so any GUI will work as long as Xming is alive (and configured to accept clients from remote machines!).
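From a Linux workstation the same trick needs no Xming, since ssh does the tunneling itself; a sketch, with hostname and user as placeholders:

```
# -X enables X11 forwarding; sshd on the host must allow it (X11Forwarding yes)
ssh -X user@kvmhost
echo $DISPLAY        # set by sshd on login, e.g. localhost:10.0
virt-manager &       # the window appears on the local X server
```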
Creating the First VM
Creating a client is no problem at all, so I was up and running with my first Ubuntu virtual server in no time. The biggest difference was that I could only use resources local to the server for the installation media, so I couldn't use that fancy VMWare trick of mounting an ISO that was on my PC. I gave the VM a virtual disk to use and the installation progressed exactly as for the host itself. Really no surprises at all. It simply works as advertised. Now for the difficult bits: adding the drives as devices and the networking interface.
Direct I/O with Virtio
KVM supports direct device I/O using Virtio, which is nowadays installed by default. Most documentation I could find talked about how to enable it in the Linux kernel, but that was already done. The real problem was how to use it for a guest. The dialog for adding devices gave no clue at all, and documentation was not available: the virt-manager website is minimal and Ubuntu's pages don't describe the scenario. However, I found that, when adding storage, I could pick an existing file instead of creating a virtual disk, so (armed with the knowledge that everything in Linux is accessible as a file) I just went ahead and put in "/dev/sdc", that being the block device for my 2 TB disk. In the same dialog I found Virtio as a choice, so I selected that. Caching? No. That should be it, right? And it was! At the next boot the client had a new disk, "/dev/vdb", with the single partition on it recognized as well, as "/dev/vdb1". OK, that was more like it!
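What that dialog writes into the domain definition (visible via `virsh edit`) is roughly the stanza below; the device path is from my setup, the rest is a sketch of the usual libvirt disk element:

```xml
<!-- whole-disk passthrough of /dev/sdc as a virtio block device -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>   <!-- "Caching? No." -->
  <source dev='/dev/sdc'/>
  <target dev='vdb' bus='virtio'/>  <!-- shows up as /dev/vdb in the guest -->
</disk>
```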
Default networking for a VM is to use an anonymous local network, meaning your VM appears to be connected to a LAN with a DHCP server. The default gateway you get from that will pass on traffic to the outside world using the host’s networking stack, but you’re basically sandboxed. Internet access works, but nobody knows where you are. Not what you want for a server. I tackled this before, so setting up a (or the) network adapter as a bridge was no big problem. For someone new to this field however, it certainly is not trivial.
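For completeness, the bridge setup on a 12.04 host amounts to moving the interface's address onto a bridge in /etc/network/interfaces (bridge-utils must be installed; the addresses are examples):

```
# /etc/network/interfaces -- eth0 becomes a port of br0
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0
    bridge_stp off
```

VMs then attach to `br0` instead of the default NAT network and get their addresses from the real LAN.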
So how about getting a network adapter to be owned by the VM? This turned out to be even less trivial. First up, the virt-manager GUI is no help at all; you can select all kinds of interesting things in the dialogs, but I had no clue what to select for which result. The dialogs just translate the configuration file into a GUI, instead of offering a task-oriented interface. "Assign physical device to VM" would be an option in such a GUI, stating the intent and guiding you through the necessary steps. I could add a network adapter or add a PCI device, but which was better was not stated. Also, there is no check whether what you are trying to do will work: the client will simply refuse to start up, even though the dialog accepts your input without complaint or warning.
The network card is still a no-go. I tried several things, but adding it to the client made the client throw errors at startup, before actually becoming a VM, so I'm doing something wrong there. I even found some pages from Red Hat describing how to pass a PCI device through, but that also gave me a non-starting VM. All documentation I could Google expected me to know what I was doing, which I didn't. As a Hypervisor, KVM on Ubuntu installed fast and easily, and adding disk devices directly is no problem, as long as you know how. Documentation is the real problem here: just like for VMWare, the "5 easy steps" are well documented and easy. Doing the unusual is a real puzzle.
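For the record, the Red Hat recipe boils down to a hostdev entry like the sketch below (the PCI address is an example; find the real one with `lspci`). One common reason for a non-starting VM with this setup is that PCI passthrough requires VT-d/IOMMU support, enabled both in the BIOS and in the kernel (`intel_iommu=on`), which desktop-class boards of that era often lacked:

```xml
<!-- pass the host NIC at PCI 02:00.0 to the guest; the address is an example -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```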
So, KVM is definitely an option for my box, if I accept network bridges.