In a “Best Buy Guide” on Tweakers.net (in Dutch), a new Home Server category was introduced. Since I was already looking at my server and its capacity, I was intrigued by the remark that VMware’s ESXi hypervisor is free for personal use. There was also a remark that some Intel Core i5 processors support VT-d, which apparently allows you to pass hardware (more or less) directly to a guest VM, so you don’t have to create virtual disks if you want the full device to go to the guest. You can also pass PCI (and PCIe) devices to VMs with minimal overhead that way. A post on the forums (also in Dutch) described the installation in some more detail, including the nice touch of using a USB thumb drive as the installation disk, so the hypervisor has almost no footprint on the host. So I gave it a spin.
I started out with brand new toys: motherboard, processor, memory, and disks. I had a spare 500 GB disk already available, and I added a 2 TB and a 3 TB Seagate Barracuda for use by the file server: the biggest disk for backups, the other for media and such. With the new system built, I configured the BIOS to forget about floppy drives and parallel printers, and to enable all virtualization support.
Installing the VMware Hypervisor
The VMware website gave the usual confusion about “free”, “supported”, and “trial”. Most of the text describes the “Free Trial”, but the Hypervisor is in fact free; use this link to go to the page that actually mentions the free download. What you miss out on is some of the more “enterprisey” features and support. I mention the support, but that is a big thing: people get training and certifications for being able to install, configure, and maintain this product, and there is a lot you can configure. Luckily I got some quick feedback on that Tweakers forum to help me through some tight spots, but if you expect to get everything working smoothly, it’s either that or start reading the gigabytes of documentation.
Installation of VMware ESXi itself was pretty straightforward; the USB drive was recognized and I could use it as the installation drive. The only thing that needed to be set was the management password. After installation I removed the installation media and rebooted, to get a host doing nothing much in particular. Still, the installation accepted my hardware well enough, so I was greeted with a yellow screen sporting an invitation to go to a URL, http://0.0.0.0, to download the management tools. Wait, what?
The first thing you need to do is configure the management network interface. Sure, it recognized there was a network adapter, but it wasn’t configured. I guess that’s the first place where the enterprise background shows, because you don’t want your bare-metal VM host to start offering itself to the whole network. So you log on to the console (screen and keyboard connected to the physical machine) with the password you set during installation. Here I found out there was actually a user called “root”, so we’re apparently on some kind of Unix system. (It turns out there is a Linux base beneath the Hypervisor, but that aside.) Next I selected the “Configure Management Network” option and then “Network Adapters”. This showed me a list of two adapters, whereas my box had three (one on-board, one recent PCI card with Gigabit Ethernet support, and one older card with generic 10/100 Mbit support). VMware doesn’t appear to have as much hardware support as Linux has, but I guess they focus on the more recent (and higher-performance) options. I selected the interface I wanted and used the other menu options to configure a static IP. A handy test option is on the menu as well to check if it all works as intended, and it did.
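For reference, the same settings can also be applied from the command line once you have shell access to the host. A minimal sketch, assuming the management interface is the default vmk0 and the addresses below are placeholders for your own network:

```shell
# Set a static IPv4 address on the management VMkernel interface
# (vmk0 is the default one; replace the addresses with your own).
esxcli network ip interface ipv4 set \
    --interface-name=vmk0 --type=static \
    --ipv4=192.168.1.10 --netmask=255.255.255.0

# The default gateway is configured separately.
esxcfg-route 192.168.1.1

# Add a DNS server for the host itself.
esxcli network ip dns server add --server=192.168.1.1
```

The console menu is the friendlier route for a one-off setup; the commands mainly matter if you ever have to script this or fix it over SSH.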
I did take a look at the system logs presented here as well, which is where I found some very recognizable Linux log messages, but basically there wasn’t much to do. Interestingly, you can enable SSH and ESXi Shell support, but they are under the “Troubleshooting Options”, so you’re not really expected to use them. Logging out from this console got me back to the screen with the “Download tools to manage this host” message, which now sported the correct IP address. After accepting the risk of a dummy certificate (for “localhost.localdomain” :-)) I got a page with download links for the vSphere Client and some other toolkits. I installed the vSphere Client for Windows, which appears to have been written in Microsoft F#. When starting the client you have to log in to the host, using “root” as the username and the password mentioned before.
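Those “Troubleshooting Options” have a command-line counterpart too; once you have console access, the same services can be switched on with vim-cmd. A sketch:

```shell
# Enable and start the SSH service on the host.
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

# The same pair exists for the local ESXi Shell.
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell
```

Handy if you want SSH on without walking back to the physical console, though the console menu does exactly the same thing.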
Getting the License Installed
When connecting for the first time, you are confronted with two complaints:
- The license is a Trial that will expire in so many days, and
- There is no Datastore available.
I thought the first one would be simple, because VMware gave me a license code when I downloaded the software (after creating an account, of course), but it doesn’t say where I should enter it. Odd.
I searched the help, which tells me I should choose “Home”, then “Administration”, then “Licensing”, but I could not find a “Licensing” entry in the “Administration” pane. This actually had me stumped, because I could view the host and the user “root” was an “Administrator” (with all possible privileges assigned), but nowhere could I see this “Licensing” part of the application. The forums gave me the answer: select the host in the tree, open the “Configuration” tab, then select “Licensed Features” in the “Software” box. You are greeted with information about the evaluation mode the host is running in. Above the information on the right is a link named “Edit…”, which brings up the dialog box where you can add the key. Only then will this complaint disappear.
OK, the free Hypervisor supports 32 GB of memory and at most 8 processors. I guess that last bit includes Hyper-Threading, so a Core i7 with 4 cores and Hyper-Threading enabled is the maximum. No big deal for a hobbyist setup, but a limitation nonetheless.
Datastores and Raw Device Mappings (which don’t work)
So now we’re ready to set up a VM. Remember those big disks? I wanted to build a file server with as little overhead from the Hypervisor as possible, so I wanted those disks to go straight to the guest. In ESXi terms this is called a “Raw Device Mapping” (RDM), and it won’t work for local disks. Don’t ask me why, but you simply cannot pass the disk straight to the VM. Of course the Hypervisor has to do some filtering, so you cannot access just any disk in the system, but I have a system here that supports this (Intel’s VT-d feature), so why won’t it work? I asked on the forum and this is what I gathered: RDM is for certain types of enterprise hardware (mostly SAN storage), but not for your SATA disk.
So what does work: you can use that SSH access to the host and create a VMware disk image descriptor that fakes a drive image from a device. A command creates something that looks like an image file but is actually linked to your physical device. This is not something you can do from the vSphere Client, so I’m not sure about the status of your support (if you buy it). You will have to put this set of files (the command produces two files: the descriptor and the virtual file) on a datastore, however, so I guess we have to do that first. Creating a datastore is another way of saying that you’re going to partition a hard disk and create a VMware file system on it. No problems there. Next I tried the “virtual RDM” and ran into my second snag: VMware’s file system won’t support files larger than 2 TB, so I couldn’t use this trick on my 3 TB drive. Bummer. The file system itself will go to 64 TB, but files on it cannot.
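The command in question is vmkfstools. A sketch of creating such a virtual RDM over SSH; the device identifier and datastore paths below are placeholders, so substitute your own:

```shell
# List the local disks to find the raw device identifier.
ls /vmfs/devices/disks/

# Create a virtual RDM: a small descriptor .vmdk on the datastore that
# points at the raw device (-r = virtual compatibility mode, which
# routes the guest's I/O through the VMkernel).
vmkfstools -r /vmfs/devices/disks/t10.ATA_____YOUR_DISK_ID_HERE \
    /vmfs/volumes/datastore1/fileserver/data-rdm.vmdk
```

Afterwards you attach the resulting .vmdk to the VM in the vSphere Client as an existing disk.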
Update: I goofed here. A colleague of mine (Wiebo de Wit) pointed out that I hadn’t read the article far enough down. I had created a virtual RDM, which is indeed limited in size, but not a physical RDM, which does real passthrough. After retesting with the correct settings, I found I could indeed create the 3 TB disk as a physical RDM. In fact, the Ubuntu kernel got to see the actual disk, so it was shown with the correct model name rather than “VMware Virtual disk”. Great, thanks Wiebo!
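For the record, the physical RDM that ended up working uses the same tool with a different flag; again a sketch with placeholder paths:

```shell
# -z = physical compatibility mode: ESXi passes SCSI commands through
# almost untouched, so the guest sees the real disk, model name and all,
# and the 2 TB virtual-RDM file limit no longer applies.
vmkfstools -z /vmfs/devices/disks/t10.ATA_____YOUR_3TB_DISK_ID_HERE \
    /vmfs/volumes/datastore1/fileserver/media-rdm.vmdk
```

Only the small descriptor lives on the datastore; the data itself stays on the raw disk.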
Strike Three: a Network Card
Last, but certainly not least: the network cards. I managed to put the PCI card into passthrough mode, so I could add it to the VM. Then starting the VM crashed the host. Hard.
Update: Wiebo also asked me to try a paravirtualized network interface, which means you don’t let VMware emulate a physical card but instead expose the fact that it is a VMware device. The driver for this device (named VMXNET3) is available in all major distributions, so it was recognized and activated. This results in minimal overhead in the VMware kernel, and it’s about as good as you can get without going physical.
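If you prefer looking at the VM’s configuration directly rather than picking the adapter type in the vSphere Client, the paravirtualized NIC shows up as a handful of lines in the VM’s .vmx file. A sketch, with the network name as a placeholder:

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "VM Network"
ethernet0.addressType = "generated"
```

The `virtualDev` key is the interesting one: "vmxnet3" selects the paravirtual device, where "e1000" would make VMware emulate a physical Intel card instead.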
Mixed emotions about this one. Sure, the VMware ESXi Hypervisor is an enterprise-level tool, but for me as a hobbyist it appears to have some downsides. Let’s summarize the good and the bad.
The good:
- Compact Hypervisor. (it installs to a 4 GB thumb drive)
- Supports my hardware.
- Lots of documentation. (and it needs it, too; there’s a lot to configure)
- Free. (if your hardware isn’t too enterprisey)
- Great management tool. (I could boot from an ISO on my PC: remote mounting!)

The bad:
- Not as free as Linux; you don’t get everything, nor the source.
- No RDM for local disks unless you use command-line tools.
- That crash was definitely not good. Needs some work, methinks.
Now let’s see if I can do a comparable installation with Ubuntu Server as KVM host.