
If I wanted to set up a host for a bunch of headless VMs, what's the OS/Hypervisor to run these days? I'm doing this out of curiosity and for testing purposes. I don't exactly have appropriate hardware - an i5 with 16GB of memory - but it should be sufficient to run 5-10 VMs for my very limited purposes (private network, none of the VMs will be public-facing). QEMU/KVM looks like the best choice for a FOSS advocate? Other recommendations? I could particularly use a good HOWTO or tutorial if anyone knows of one. Thanks. -- Giles http://www.gilesorr.com/ gilesorr@gmail.com

On Fri, Aug 26, 2016 at 10:37:37AM -0400, Giles Orr via talk wrote:
If I wanted to set up a host for a bunch of headless VMs, what's the OS/Hypervisor to run these days? I'm doing this out of curiosity and for testing purposes. I don't exactly have appropriate hardware - an i5 with 16GB of memory - but it should be sufficient to run 5-10 VMs for my very limited purposes (private network, none of the VMs will be public-facing). QEMU/KVM looks like the best choice for a FOSS advocate? Other recommendations? I could particularly use a good HOWTO or tutorial if anyone knows of one. Thanks.
I certainly like kvm. Works well. Finding examples for how to start it isn't hard. I am personally NOT a fan of libvirt and the associated crap it provides and much prefer just making a shell script to pass the right arguments to qemu myself.

As long as you have VT support (most if not all i5s do, as long as it is on in the BIOS/UEFI), I would think that should be fine. 16GB would certainly allow you ten 1GB or five 2GB VMs without any issue. Creative people would try KSM (kernel same-page merging) to merge identical pages between VMs and save some resources. It's a neat feature.

Depending on what you intend to do with them and put in them, some people might use containers instead (like lxc and such). They have their own limitations but use fewer resources. If you are looking to run different OSs, though, then containers are not what you want.

-- Len Sorensen
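The sort of per-VM launcher script described here might look like this minimal sketch; the VM name, disk path, and sizes are illustrative assumptions, not details from the thread:

```shell
#!/bin/sh
# Keep the long qemu argument list in a script instead of libvirt XML.
# Everything here (names, paths, sizes) is a hypothetical example.

vm_cmd() {
    name=$1
    mem=$2                                  # guest RAM in MiB
    disk=/var/lib/vms/$name.qcow2
    echo "qemu-system-x86_64 -enable-kvm -name $name -m $mem" \
         "-drive file=$disk,if=virtio -nographic"
}

# With 16GB on the host, ten 1GB guests leave headroom for the host:
vm_cmd vm1 1024

# To actually boot one (needs /dev/kvm and a disk image):
#   qemu-img create -f qcow2 /var/lib/vms/vm1.qcow2 20G
#   eval "$(vm_cmd vm1 1024)"
```

-nographic keeps the guest headless with its console on the terminal; swap it for something like -vnc :1 if you'd rather point a VNC client at it.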

I've used proxmox. It got me up and running with a gui quick. But I've also used VirtualBox (oracle: yuck) and that also got me up and running quick. "Professionally" I sit in front of a lot of vmware, but that's closed / for pay / proprietary / expensive (but feature rich). (I've not used libvirt / rhev / kvm so my perspective is limited.) David

On Fri, Aug 26, 2016 at 11:34 AM, Lennart Sorensen via talk <talk@gtalug.org> wrote:
> [quoted message snipped]
--- Talk Mailing List talk@gtalug.org https://gtalug.org/mailman/listinfo/talk

virt-manager, which is part of the libvirt package, is functional enough for most use. I use virt-manager backed by xen to run between 5 and 10 VMs on a couple of machines. If you have the hots to setup a complete server you could download xenserver. You could put RDO on a system and install OpenStack. OpenStack has a nice GUI and management environment but is a bit heavyweight to just put up a few VMs.

On 08/26/2016 01:55 PM, David Thornton via talk wrote:
I've used proxmox . It got me up and running with a gui quick.
But I've also use virtual box ( oracle : yuck ) and that also got me up and running quick.
"professionally" I sit in front of a lot of vmware, but that's closed / for pay / proprietary / expensive. ( but feature rich )
(I've not used libvirt / rhev / kvm so my perspective is limited)
Proxmox is KVM based.
David
> [quoted message snipped]
-- Alvin Starr || voice: (905)513-7688 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||

having the hots for technology is a list requirement. On 08/26/2016 10:58 PM, John Moniz wrote:
On Aug 26, 2016 2:42 PM, Alvin Starr via talk <talk@gtalug.org> wrote:
If you have the hots to setup a complete server you could download
xenserver.
I've never had the hots for any computer, so I guess xenserver is not for me... 😀
-- Alvin Starr || voice: (905)513-7688 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||

Might want to see a doctor about that. David On Sat, Aug 27, 2016, 7:30 AM Alvin Starr via talk <talk@gtalug.org> wrote:
> [quoted message snipped]

On Fri, Aug 26, 2016 at 11:34 AM, Lennart Sorensen via talk <talk@gtalug.org> wrote:
> [quoted message snipped]
I second the vote for qemu-kvm. It seems to be the Swiss Army knife. The only thing I've wanted to do with it that I haven't been able to is boot 1994 Yggdrasil Linux. I liked the libvirt environment I tried out a year or so ago, but abandoned it because it seemed to use about the same amount of memory that my ~4G VM did. I can't imagine why it is so enormous. Cheers, Mike

Mike via talk wrote:
I second the vote for qemu-kvm. It seems to be the swiss army knife. The only thing I've wanted to do with it that I haven't been able to is to boot 1994 Yggdrasil Linux.
The other thing it doesn't do is give you a bunch of virtual machines with collectively more memory than the server has installed. I've had to deal with a box where people had given away 110% of the RAM chips to their various QEMU instances and the server was crawling around on the floor in a very slow and hesitant way.

Someone mentioned options: yes, QEMU has a lot of them, and having a script that sets the standard ones and lets you specify just what's different between each one and the next can make starting them a lot easier. QEMU's native commandline can easily start onto a third line on an 80-column screen. Also, do please use kernel bridging and not the userspace virtual switch.

If you're running a bunch of heterogeneous operating systems, though, QEMU gets you there. I've even seen people use it for Windows.

If everything is this-decade Linux then LXC may be an option. It isolates each VM's processes and root filesystem and network interfaces while still sharing memory, CPUs, and (optionally, if VMs are in the same filesystem) disk space. I've run three generations of Debian all under the kernel that came with the latest. If you're also wanting to run eg Fedora then there's a good chance of it just working, or if not you may have to compare kernel configs and build a host kernel that makes everyone happy.

LXC lets you run collectively a far bigger party; users have access to all the CPUs and RAM when they have a large compute job, not just the tiny ration the QEMU config sets aside for them. So far I haven't had to configure resource limits on LXC; YMMV.

-- Anthony de Boer

William Park via talk wrote:
On Sat, Aug 27, 2016 at 11:53:39AM -0400, Anthony de Boer via talk wrote:
Also, do please use kernel bridging and not the userspace virtual switch.
Can you elaborate? Are there some parameters at compile time?
The qemu manpage where it talks about setting up a TAP interface to talk to a bridge (br0 in the example) should set you on the right path. You'll also want to look at the brctl manpage. Bridging lets the kernel tie together several real and/or virtual interfaces, and in this case can be set up to let your QEMU virtual talk to the LAN. The vde switch it talks about later does not deal well with heavy traffic and will go pear-shaped. -- Anthony de Boer
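A sketch of that host-side plumbing, following the qemu and brctl manpages; br0/tap0/eth0 are the conventional example names and the commands need root, so they're wrapped in a function here rather than run directly:

```shell
#!/bin/sh
# Kernel bridge + TAP setup for a qemu guest (run setup_bridge as root).
# Interface names are examples; eth0 is the assumed physical NIC.

setup_bridge() {
    brctl addbr br0                   # create the kernel bridge
    brctl addif br0 eth0              # attach the real LAN interface
    ip tuntap add dev tap0 mode tap   # create the guest's tap device
    brctl addif br0 tap0
    ip link set br0 up
    ip link set tap0 up
}

# Then hand the tap device to qemu instead of the vde switch:
#   qemu-system-x86_64 -enable-kvm ... \
#       -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
#       -device virtio-net-pci,netdev=net0
```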

On Sat, Aug 27, 2016 at 11:53:39AM -0400, Anthony de Boer via talk wrote:
The other thing it doesn't do is give you a bunch of virtual machines with collectively more memory than the server has installed. I've had to deal with a box where people had given away 110% of the RAM chips to their various QEMU instances and the server was crawling around on the floor in a very slow and hesitant way.
qemu/kvm supports balloon drivers in the guest. Also the host could run KSM. So you have multiple options for allowing you to overprovision the system and still do fine.
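For reference, a sketch of what enabling those looks like on the host; the sysfs paths are the standard KSM ones, the tuning value is arbitrary, and the writes need root, so they're wrapped in a function:

```shell
#!/bin/sh
# Enable KSM so the kernel merges identical pages across guests, and
# note the qemu flag for a guest balloon device.

enable_ksm() {
    echo 1 > /sys/kernel/mm/ksm/run               # start the ksmd scanner
    echo 2000 > /sys/kernel/mm/ksm/pages_to_scan  # arbitrary example tuning
    # Watch /sys/kernel/mm/ksm/pages_shared to see how much gets merged.
}

# enable_ksm      # uncomment on a real VM host (needs root)

# Per guest, a balloon device lets the host reclaim idle guest memory:
#   qemu-system-x86_64 -enable-kvm ... -device virtio-balloon-pci
```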
Someone mentioned options: yes, QEMU has a lot of them, and having a script that sets the standard ones and lets you specify just what's different between each one and the next can make starting them a lot easier. QEMU's native commandline can easily start onto a third line on an 80-column screen.
Also, do please use kernel bridging and not the userspace virtual switch.
Yes a kernel bridge with tap interfaces in qemu works pretty well. There might even be better options than the tap interface these days.
If you're running a bunch of heterogenous operating systems, though, QEMU gets you there. I've even seen people use it for Windows.
If everything is this-decade Linux then LXC may be an option. It isolates each VM's processes and root filesystem and network interfaces while still sharing memory, CPUs, and (optionally, if VMs are in the same filesystem) disk space. I've run three generations of Debian all under the kernel that came with the latest. If you're also wanting to run eg Fedora then there's a good chance of it just working, or if not you may have to compare kernel configs and build a host kernel that makes everyone happy.
LXC lets you run collectively a far bigger party; users have access to all the CPUs and RAM when they have a large compute job, not just the tiny ration the QEMU config sets aside for them. So far I haven't had to configure resource limits on LXC; YMMV.
You can restrict resources with lxc as far as I recall. But you might not have to. I guess it depends if you trust the stuff inside the container to be well behaved or not. -- Len Sorensen
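The per-container limits recalled here go in the container's config file; a hedged sketch (keys as in the LXC 1.x of that era, values arbitrary examples):

```
# /var/lib/lxc/<container>/config
lxc.cgroup.memory.limit_in_bytes = 1G    # cap the container's RAM
lxc.cgroup.cpu.shares = 512              # relative CPU weight
lxc.cgroup.cpuset.cpus = 0-1             # pin to two cores
```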

On Fri, Aug 26, 2016 at 02:33:10PM -0400, Mike via talk wrote:
I second the vote for qemu-kvm. It seems to be the swiss army knife. The only thing I've wanted to do with it that I haven't been able to is to boot 1994 Yggdrasil Linux.
Well it will boot the floppy, but it doesn't seem to like real ide cdrom drives. That kernel is just a bit too old. Trying to fake it with other methods doesn't seem to finish booting. No idea why. If someone wrote emulation of the old matsushita or panasonic or mitsumi cdrom interfaces it might work, but why would anyone bother? -- Len Sorensen

| From: Lennart Sorensen via talk <talk@gtalug.org> | If someone wrote emulation of the old matsushita or panasonic or mitsumi | cdrom interfaces it might work, but why would anyone bother?

I'm a bit of a collector of antique systems (hoarder, more accurately). This would be less burdensome if I could collect just the software and not the hardware.

More generally, we are losing access to our past technology at an alarming rate, but only the hardware actually deteriorates. If we could reliably run ancient systems, a lot of the risks could be reduced.

On Mon, Aug 29, 2016 at 01:54:46PM -0400, D. Hugh Redelmeier via talk wrote:
I'm a bit of collector of antique systems (hoarder, more accurately). This would be less burdensome if I could collect just the software and not the hardware.
More generally, we are losing access to our past technology at an alarming rate but only the hardware actually deteriorates. If we could reliably run ancient systems, a lot of the risks could be reduced.
Well there are emulators for lots of systems, including PCs, but it seems the CDROM interfaces have been left out somehow. -- Len Sorensen

On Fri, Aug 26, 2016 at 10:37:37AM -0400, Giles Orr via talk wrote:
> [quoted message snipped]
- QEMU and VirtualBox. They both use KVM.
- VirtualBox practically needs no manual. It's all mouse clicks. The only time I actually had to read something was to convert VMDK to VDI format (using VBoxManage on the command line in Windows).
- QEMU requires the manpage and a shell script to store all the options you discovered. :-)

I'm not sure about "headless". From memory, it seems to have closer association with VirtualBox than with QEMU. -- William

On 26 August 2016 at 21:33, William Park via talk <talk@gtalug.org> wrote:
> [quoted message snipped]
I like VirtualBox and use it heavily. But it's a Type 2 hypervisor ( https://en.wikipedia.org/wiki/Hypervisor ) and in many circumstances I've seen it suck up significant amounts of processor even though its hosted machine(s) are idle. So I was hoping for a Type 1 hypervisor.

I had a previous, unsatisfactory experience with KVM - although I admit I was at the time trying to run fully graphical VMs. Lennart's suggestion to skip libvirt and just run shell scripts for QEMU agrees with my general sentiment and behaviour, but I attempted to do that previously and discovered the string of options for QEMU to run one of the machines I wanted stretched to fill most of an 80x24 terminal. So that didn't seem like a probable option to me: too high a barrier to entry.

This time out I'm trying Proxmox (thanks Dave!). It claims the client performance degradation over bare metal is only 1-3% ... and I had a client machine up and running in about three-quarters of an hour (that includes the time to install both Proxmox and the Debian client), almost without reference to their documentation. That suggests to me that it's amazingly intuitive for such a high-level piece of software, although most of the terms and behaviour made sense to me only because I'm so familiar with VirtualBox.

My primary complaint about Proxmox is that its install disk says "all your disk is belong to me." It won't install to a partition; it simply takes over the entire drive. I get that it's meant for VMs and needs space, but it's still unimpressive behaviour.

An important difference I'm noticing between Proxmox and VirtualBox: while you can see and interact with the VM within the web interface that Proxmox provides, the assumption appears to be that they'll mostly be treated as headless. And to go with this, when you set up a VM and run a (Debian, in this case) installer, the VM by default has its own outward-facing (virtual) NIC. There are options for other arrangements, but that's the default.
VB's default is that each VM has a NATed NIC that's not publicly available. You can change VB's behaviour, but this is looking like it's got good defaults and good basic behaviour for what I want now. Thanks everyone. -- Giles http://www.gilesorr.com/ gilesorr@gmail.com

On Fri, Aug 26, 2016 at 09:33:50PM -0400, William Park via talk wrote:
- QEMU and VirtualBox. They both use KVM.
Virtualbox does not use kvm. It will use vt-x if you have it. kvm requires it.
- VirtualBox practically needs no manual. It's all mouse clicks. The only time I actually had to read something, was to convert VMDK to VDI format (using VBoxManage on command line in Windows) - QEMU requires manpage and shell script to store all the options you discovered. :-)
But the flexibility is great.
I'm not sure about "headless". From memory, I seems to have closer association with VirtualBox than with QEMU.
qemu's ability to run as a vnc server is handy. -- Len Sorensen

On 08/29/2016 10:07 AM, Lennart Sorensen via talk wrote:
On Fri, Aug 26, 2016 at 09:33:50PM -0400, William Park via talk wrote:
> > - QEMU and VirtualBox. They both use KVM.
> Virtualbox does not use kvm. It will use vt-x if you have it. kvm requires it.

It's kind of the other way around. VirtualBox uses QEMU, as does Xen, and I am sure it appears in some form in other virtualization platforms. QEMU is the go-to source for hardware emulation.
> > - VirtualBox practically needs no manual. It's all mouse clicks. The only time I actually had to read something, was to convert VMDK to VDI format (using VBoxManage on command line in Windows) - QEMU requires manpage and shell script to store all the options you discovered. :-)
> But the flexibility is great.
VirtualBox is more akin to libvirt in that it's a hypervisor management tool. VirtualBox gives a lot of flexibility if you want to dig under the covers and use the command line interface, but it's ugly. Libvirt has the advantage that it will interface with multiple back-end hypervisors. Using QEMU directly gives more flexibility, but you have to manage most of the system plumbing tasks on your own. VirtualBox and libvirt help you manage those tasks but make some implementation choices that you may or may not like.
> > I'm not sure about "headless". From memory, it seems to have closer association with VirtualBox than with QEMU.
> qemu's ability to run as a vnc server is handy.
I know libvirt allows for a serial console so that there is no need for a graphical one, and I have used that quite a bit because I often found myself at the end of a low-bandwidth connection where running VNC was just too painful to be believed. I am sure VirtualBox has the same feature but I have never used it.

In general VirtualBox is easier to install and works well for running up a handful of virtual machines. VirtualBox seems to make better default choices for performance and gives better emulation of things like USB interfaces. It also does a reasonable job at simple network plumbing and importing and exporting of disk images.

virt-manager is somewhat like VirtualBox and is part of the libvirt suite of tools. It is more generic and allows you to manage several virtualization servers along with different hypervisors. It does a reasonable job at network plumbing, but its hardware emulation is only as good as the mainline QEMU support, and the Oracle folks are able to pay to get access to proprietary information about hardware, making their emulation a bit better. The libvirt development tends to track the KVM/QEMU development, so just about every feature that is available directly in QEMU can be found in libvirt.

The above tools are good for straightforward flat networks, but if you require complex network plumbing like NAT interfaces and simple firewalls and multiple isolated networks then you are getting beyond the range of the simple virtualization managers and are looking more at something like OpenStack and its competitors.

-- Alvin Starr || voice: (905)513-7688 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||

On Mon, Aug 29, 2016 at 11:01:55AM -0400, Alvin Starr wrote:
On 08/29/2016 10:07 AM, Lennart Sorensen via talk wrote:
On Fri, Aug 26, 2016 at 09:33:50PM -0400, William Park via talk wrote:
> > > - QEMU and VirtualBox. They both use KVM.
> > Virtualbox does not use kvm. It will use vt-x if you have it. kvm requires it.
> Its kind of the other way around. Virtualbox uses QEMU as does Xen and I am sure it appears in some form with other virtualization platforms. QEMU is the go-to source for hardware emulation.
qemu yes, kvm no. Virtualbox and kvm (and xen I believe) use qemu for device emulation. Virtualbox does not use kvm (that would make it hard to run on windows after all). qemu-system knows how to use kvm if you ask it to (with -enable-kvm) which is of course much faster than the normal qemu cpu emulation. -- Len Sorensen

On Mon, Aug 29, 2016 at 10:07:03AM -0400, Lennart Sorensen via talk wrote:
On Fri, Aug 26, 2016 at 09:33:50PM -0400, William Park via talk wrote:
- QEMU and VirtualBox. They both use KVM.
Virtualbox does not use kvm. It will use vt-x if you have it. kvm requires it.
Now you got me confused. I thought VirtualBox and QEMU are just "emulators" which use the KVM kernel modules (ie. kvm, kvm-intel, kvm-amd) if they are available, or do full software emulation if not available. And VT-x is required for the KVM kernel modules to work. -- William

On 09/01/2016 10:07 PM, William Park via talk wrote:
> On Mon, Aug 29, 2016 at 10:07:03AM -0400, Lennart Sorensen via talk wrote:
> > On Fri, Aug 26, 2016 at 09:33:50PM -0400, William Park via talk wrote:
> > > - QEMU and VirtualBox. They both use KVM.
> > Virtualbox does not use kvm. It will use vt-x if you have it. kvm requires it.
> Now you got me confused. I thought VirtualBox and QEMU are just "emulator" which use KVM kernel modules (ie. kvm, kvm-intel, kvm-amd) if they are available, or do full software emulation if not available. And, VT-x is required for KVM kernel modules to work.

Part of the problem is comparing a mixture of apples and oranges. You cannot run a virtual machine using KVM on its own. KVM is a set of kernel APIs and modules that make use of the underlying CPU hardware.

VirtualBox is a complete product that contains its own APIs and a bunch of tools to provide a user interface. VirtualBox is better compared with things like VMware or virt-manager or proxmox.

QEMU is a middleware tool that has been forked by several products to provide hardware emulation, but is generally not enough on its own to provide all the features of something like VirtualBox or virt-manager. You can use QEMU to roll your own virtual systems, but you then need to manage most of the scaffolding on your own. It's kind of like comparing GNOME or KDE to the X server.

-- Alvin Starr || voice: (905)513-7688 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||

On Thu, Sep 01, 2016 at 10:29:16PM -0400, Alvin Starr via talk wrote:
Part of the problem is comparing a mixture of apples and oranges. you cannot run a virtual machine using KVM on its own. KVM is a set of kernel API's and modules that make use of the underlying CPU hardware.
Virtualbox is a complete product that contains its own API's and a bunch of tools to provide a user interface.
Virtualbox is better compared with things like VMware or virtual-manager or proxmox.
Virtualbox is like vmware: a complete system. Proxmox is just a UI on top of kvm and lxc. virt-manager is a UI layer on top of a lot of things (kvm, qemu, etc).
QEMU is a middleware tool that has been forked by several products to provide hardware emulation but is generally not enough to provide all the features of something like virtualbox or virt-manager. You can use QEMU to roll your own virtual systems but you then need to manage most of the scaffolding on your own.
Yes qemu has tons of options and features, but not a friendly UI, and certainly not friendly defaults.
Its kind of like comparing gnome or KDE to the x-server.
-- Len Sorensen

On Thu, Sep 01, 2016 at 10:07:41PM -0400, William Park via talk wrote:
Now you got me confused. I thought VirtualBox and QEMU are just "emulator" which use KVM kernel modules (ie. kvm, kvm-intel, kvm-amd) if they are available, or do full software emulation if not available. And, VT-x is required for KVM kernel modules to work.
Virtualbox runs fine without vt-x, but can't run 64 bit guests in that case.

kvm is a linux kernel feature for doing virtual machines, and on x86 it requires vt-x (or the amd equivalent), while on other architectures (powerpc and arm) it requires different features. On arm it requires HYP mode on the CPU, while I think on powerpc it should run on anything.

qemu is an emulator with quite fast cpu emulation that lets you emulate lots of different architectures. It's rather neat. It of course has to emulate lots of hardware as well as the CPU to emulate complete systems. So since it emulates lots of hardware very well, kvm decided to use qemu as the frontend and the device emulation, but to skip the cpu emulation and instead run natively with kvm providing that part. It was originally a fork with patches, but is now merged into the normal qemu. If you want an arm vm or powerpc vm, qemu is the thing to use. Won't be fast, but they do work.

Qemu also has a user mode emulation where instead of emulating a whole machine, it emulates a running linux system on your existing system. It runs a binary, translating the instructions for the cpu type (like arm or powerpc), and then translates the system calls to your native kernel, so you are only emulating the application, not the whole system, which makes it faster. This can let you do neat cross compile tricks with autoconf and such, which normally hate cross compiling, since you are actually pretending to be the target system when running things, even though using a cross compiler to actually build things. The kernel's binfmt handler helps make this happen automatically when you register the qemu handlers correctly.

virtualbox also uses qemu code for device emulation (at least some of it), but I believe it uses its own way of doing the cpu handling (similar to kvm, but since it runs on many platforms it does not use kvm, and I believe it actually predates kvm). It is almost certainly more similar to what vmware has been doing for years too. It is able to use the cpu vt-x (and similar) features when present for better performance, and in the case of 64 bit guests, it is in fact required (as it also is for vmware and any other vm system as far as I know).

Virtualbox does NOT use kvm, and in fact gives an error if the kvm modules are loaded (or at least it used to). It wants to do vt-x itself instead. -- Len Sorensen
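A session sketch of the user-mode emulation described above; the binary name is hypothetical, and on Debian the binfmt registration comes from the qemu-user-static package:

```
# Run a single cross-compiled ARM binary on an x86 host:
$ qemu-arm -L /usr/arm-linux-gnueabihf ./hello-arm

# With the binfmt_misc handlers registered (qemu-user-static on
# Debian), the kernel invokes qemu transparently:
$ ./hello-arm
```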

On Fri, Sep 02, 2016 at 12:22:47PM -0400, Lennart Sorensen via talk wrote:
Virtualbox does NOT use kvm, and in fact gives an error if the kvm modules are loaded (or at least it used to). It wants to do vt-x itself instead.
This is where my confusion starts...
- I can run VirtualBox or QEMU, but not both at the same time.
- VirtualBox can run with or without KVM modules.
- QEMU requires KVM modules.

Current machine: i3 cpu, H97 chipset -- William

On Sat, Sep 03, 2016 at 07:39:30PM -0400, William Park via talk wrote:
This is where my confusion starts...
- I can run VirtualBox or QEMU, but not both at the same time.
You can run qemu without -enable-kvm at the same time as virtualbox, but it is quite slow when run that way. It is only when running kvm (which is qemu with -enable-kvm) that you can't run virtualbox (and probably not vmware either for that matter). Only one vm system using vt-x can be enabled at a time.
- VirtualBox can run with or without KVM modules.
No, apparently virtualbox cannot run if the kvm kernel modules are loaded.
- QEMU requires KVM modules.
Only if you run it with -enable-kvm. Otherwise it does its own (slower) thing.
Current machine: i3 cpu, H97 chipset
As far as I know all i3 chips have vt-x, so that should be fine. -- Len Sorensen

So I didn't know that about qemu. I've been thinking about doing a Linux From Scratch build for my Pi Zero. I feel like qemu would be the tool to do that. At the risk of forking the thread... anyone done that? On Tue, Sep 6, 2016, 9:18 AM Lennart Sorensen via talk <talk@gtalug.org> wrote:
On Sat, Sep 03, 2016 at 07:39:30PM -0400, William Park via talk wrote:
This is where my confusion starts...
- I can run VirtualBox or QEMU, but not both at the same time.
You can run qemu without -enable-kvm at the same time as virtualbox, but it is quite slow when run that way. It is only when running kvm (which is qemu with -enable-kvm) that you can't run virtualbox (and probably not vmware either for that matter). Only one vm system using vt-x can be enabled at a time.
- VirtualBox can run with or without KVM modules.
No, apparently virtualbox cannot run if the kvm kernel modules are loaded.
- QEMU requires KVM modules.
Only if you run it with -enable-kvm. Otherwise it does its own (slower) thing.
Current machine: i3 cpu, H97 chipset
As far as I know all i3 chips have vt-x, so that should be fine.
-- Len Sorensen --- Talk Mailing List talk@gtalug.org https://gtalug.org/mailman/listinfo/talk

On Tue, Sep 06, 2016 at 04:03:16PM +0000, David Thornton wrote:
So I didn't know that about qemu. I've been thinking about doing a Linux from scratch for my pi Zero. I feel like qemu would be the tool to do that .
At the risk of forking the thread... anyone done that?
Well, in Debian there is a qemu-debootstrap which lets you set up a root fs for another architecture. Not the same as compiling everything, but it does involve running some stuff on the host machine and some emulated for the target machine. So, I haven't done it, but it should work. -- Len Sorensen
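The qemu-debootstrap approach mentioned above might look something like this. The release name, target directory, and mirror are examples, and armel is an assumption for the Pi Zero's ARMv6 CPU (Debian armhf targets ARMv7, which the Zero lacks):

```shell
# Hypothetical sketch: build an armel root filesystem on an x86
# Debian host.  Assumes the qemu-user-static package (which provides
# qemu-debootstrap and the binfmt handlers) is installed.
sudo qemu-debootstrap --arch=armel jessie /srv/pi-rootfs \
    http://deb.debian.org/debian

# Binaries inside the chroot run under qemu-arm automatically
# via binfmt_misc, so you can work in it as if it were native:
sudo chroot /srv/pi-rootfs /bin/sh
```

From there you could install packages or build software "on" the target architecture before copying the tree onto an SD card.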

I don't think this was mentioned before, but KVM/libvirt has a shell interface: besides the gui interface (virt-manager) there's a shell interface, virsh, which lets you configure VMs using shell scripts.

libvirt is just a bunch of processes with, and I'm sorry, xml config files located in /etc/libvirt. You can have any number of virtualization technologies (LXC, KVM) and any number of user frontends (virt-manager, virsh); moving all of that between hosts just requires copying configuration and image files around -- it doesn't get any simpler than that. Unlike plain QEMU, you don't need to pass a long line of arguments to start up the machine, and you get such niceties as autostarting VMs on machine boot, not to mention that the whole thing is enterprise-y and supported by Red Hat.

Actually virt-manager works pretty well -- libvirt likes ssh and uses a client/server architecture, so when connecting to a remote host you don't have to tunnel it through ssh by hand; just have virt-manager on the client machine and give it the correct url. On Fri, Aug 26, 2016 at 10:37 AM, Giles Orr via talk <talk@gtalug.org> wrote:
If I wanted to set up a host for a bunch of headless VMs, what's the OS/Hypervisor to run these days? I'm doing this out of curiosity and for testing purposes. I don't exactly have appropriate hardware - an i5 with 16GB of memory - but it should be sufficient to run 5-10 VMs for my very limited purposes (private network, none of the VMs will be public-facing). QEMU/KVM looks like the best choice for a FOSS advocate? Other recommendations? I could particularly use a good HOWTO or tutorial if anyone knows of one. Thanks.
-- Giles http://www.gilesorr.com/ gilesorr@gmail.com
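The virsh workflow Alex describes might look something like this; the host and guest names are invented for illustration:

```shell
# Sketch of managing libvirt guests from the shell with virsh.
# "vmhost" and "myguest" are hypothetical names.

# List all domains on a remote host; libvirt handles the ssh
# transport itself, no manual tunnel needed:
virsh -c qemu+ssh://user@vmhost/system list --all

# Start a guest, and mark it to autostart when the host boots:
virsh -c qemu+ssh://user@vmhost/system start myguest
virsh -c qemu+ssh://user@vmhost/system autostart myguest
```

The same qemu+ssh:// url works in virt-manager's "Add Connection" dialog, which is what makes remote headless hosts convenient to drive from a desktop.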

On Mon, Aug 29, 2016 at 04:10:51PM -0400, Alex Volkov via talk wrote:
I don't think this was mentioned before, but KVM/libvirt has a shell interface: besides the gui interface (virt-manager) there's a shell interface, virsh, which lets you configure VMs using shell scripts.
libvirt is just a bunch of processes with, and I'm sorry, xml config files located in /etc/libvirt. You can have any number of virtualization technologies (LXC, KVM) and any number of user frontends (virt-manager, virsh); moving all of that between hosts just requires copying configuration and image files around -- it doesn't get any simpler than that. Unlike plain QEMU, you don't need to pass a long line of arguments to start up the machine, and you get such niceties as autostarting VMs on machine boot, not to mention that the whole thing is enterprise-y and supported by Red Hat.
Actually virt-manager works pretty well -- libvirt likes ssh and uses a client/server architecture, so when connecting to a remote host you don't have to tunnel it through ssh by hand; just have virt-manager on the client machine and give it the correct url.
My first (and last) encounter with libvirt was that I wanted to use kvm with smp, and at the time it didn't know about the smp option, so you simply couldn't do that. So given the use of both xml and the inability to keep up with kvm/qemu options, I determined it was hopeless and went with doing it myself, which works great. -- Len Sorensen
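A "do it yourself" script in the spirit Len describes might look like this. Every name, path, and size here is invented for illustration; it is a sketch, not his actual script:

```shell
#!/bin/sh
# Hypothetical example of driving qemu/kvm directly from a shell
# script instead of using libvirt.

NAME=testvm
DISK=/var/lib/vms/$NAME.qcow2
MEM=1024   # megabytes

# Create the disk image once, if it doesn't exist yet:
[ -f "$DISK" ] || qemu-img create -f qcow2 "$DISK" 10G

# Headless guest: no display, serial console multiplexed with the
# qemu monitor on stdio, user-mode networking.
exec qemu-system-x86_64 \
    -enable-kvm \
    -name "$NAME" \
    -m "$MEM" \
    -smp 2 \
    -drive file="$DISK",if=virtio,format=qcow2 \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
    -display none \
    -serial mon:stdio
```

The appeal of this approach is that every qemu option is available the day qemu ships it, with no waiting for a management layer to catch up.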

On 08/29/2016 04:13 PM, Lennart Sorensen via talk wrote:
On Mon, Aug 29, 2016 at 04:10:51PM -0400, Alex Volkov via talk wrote:
I don't think this was mentioned before, but KVM/libvirt has a shell interface: besides the gui interface (virt-manager) there's a shell interface, virsh, which lets you configure VMs using shell scripts.
libvirt is just a bunch of processes with, and I'm sorry, xml config files located in /etc/libvirt. You can have any number of virtualization technologies (LXC, KVM) and any number of user frontends (virt-manager, virsh); moving all of that between hosts just requires copying configuration and image files around -- it doesn't get any simpler than that. Unlike plain QEMU, you don't need to pass a long line of arguments to start up the machine, and you get such niceties as autostarting VMs on machine boot, not to mention that the whole thing is enterprise-y and supported by Red Hat.
Actually virt-manager works pretty well -- libvirt likes ssh and uses a client/server architecture, so when connecting to a remote host you don't have to tunnel it through ssh by hand; just have virt-manager on the client machine and give it the correct url. My first (and last) encounter with libvirt was that I wanted to use kvm with smp, and at the time it didn't know about the smp option, so you simply couldn't do that. So given the use of both xml and the inability to keep up with kvm/qemu options, I determined it was hopeless and went with doing it myself, which works great.
That has somewhat changed now and libvirt is very full featured. But I understand your feelings. I once tried to use KVM and found it a bug farm so moved back to Xen and am still there. -- Alvin Starr || voice: (905)513-7688 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||

-- snip -- On Aug 26, 2016 10:38 AM, "Giles Orr via talk" <talk@gtalug.org> wrote:
If I wanted to set up a host for a bunch of headless VMs, what's the OS/Hypervisor to run these days? I'm doing this out of curiosity and for testing purposes. I don't exactly have appropriate hardware - an i5 with 16GB of memory - but it should be sufficient to run 5-10 VMs for my very limited purposes (private network, none of the VMs will be public-facing). QEMU/KVM looks like the best choice for a FOSS advocate? Other recommendations? I could particularly use a good
I've had great success with Vagrant and VirtualBox. Not the most FOSS friendly, but it makes for a good way to programmatically define a network of virtual machines. It sure beats manually spinning up a dozen vms. https://www.vagrantup.com/docs/virtualbox/ Jason
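The programmatic workflow Jason mentions boils down to a few commands; the box name below is just a commonly used example, not something from his post:

```shell
# Sketch of the basic Vagrant + VirtualBox workflow.
vagrant init ubuntu/trusty64   # writes a Vagrantfile in the current dir
vagrant up                     # downloads the box and boots the VM
vagrant ssh                    # log into the running guest
vagrant halt                   # shut it down
vagrant destroy                # delete the VM entirely
```

Editing the generated Vagrantfile is where the "network of machines" part comes in: it can define multiple guests, private networks, and provisioning scripts, so a dozen VMs come up from one `vagrant up`.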

On 31 August 2016 at 09:01, Jason Shaw <grazer@gmail.com> wrote:
-- snip -- On Aug 26, 2016 10:38 AM, "Giles Orr via talk" <talk@gtalug.org> wrote:
If I wanted to set up a host for a bunch of headless VMs, what's the OS/Hypervisor to run these days? I'm doing this out of curiosity and for testing purposes. I don't exactly have appropriate hardware - an i5 with 16GB of memory - but it should be sufficient to run 5-10 VMs for my very limited purposes (private network, none of the VMs will be public-facing). QEMU/KVM looks like the best choice for a FOSS advocate? Other recommendations? I could particularly use a good
I've had great success with Vagrant and VirtualBox. Not the most FOSS friendly, but it makes for a good way to programmatically define a network of virtual machines. It sure beats manually spinning up a dozen vms.
I love VirtualBox, but I've come to think of it as better for running local graphical virtual machines rather than remote headless ones. As I've mentioned, I've seen VirtualBox consume a lot of CPU cycles just sitting still (i.e. guest machines idle) both on Mac OS X and on Linux: this doesn't seem like a good quality in a hypervisor.

I've also developed a dislike of Vagrant, probably unjustified. Let me explain: I tried Vagrant, created a box, modified it. Rebooted, modified it again, rebooted ... and found that it had reset to the base box - all my mods gone. I suspect this stems from their idea that a Vagrant box is a local representation of an immutable remote deploy - but I wish they'd make up their minds and go full read-only like Docker. My other problem with Vagrant in this context is that it's meant for internal use - just on the local machine. I'm trying to set up a bunch of VMs that are usable on the local network and behave essentially like remote cloud-based machines (I don't think I specified that clearly, my apologies).

Docker, KVM, and Xen could all use further investigation ... but I'm not sure my life is long enough when I've found a solution that does nearly everything I need in the form of Proxmox. KVM or Xen might be better general solutions. Docker might be better for just containerized stuff: although I don't like the limitation to the local kernel, and the read-only aspect of it is limiting (and makes basic setup more time-consuming) even if the resource use is considerably less. So I'm good for now. Thanks! -- Giles http://www.gilesorr.com/ gilesorr@gmail.com

On Wed, Aug 31, 2016 at 11:05:43AM -0400, Giles Orr via talk wrote:
Docker, KVM, and Xen could all use further investigation ... but I'm not sure my life is long enough when I've found a solution that does nearly everything I need in the form of Proxmox. KVM or Xen might be better general solutions. Docker might be better for just containerized stuff: although I don't like the limitation to the local kernel, and the read-only aspect of it is limiting (and makes basic setup more time-consuming) even if the resource use is considerably less.
Well, from what I can tell Proxmox uses kvm and lxc depending on what you are trying to do. -- Len Sorensen
participants (11)
- Alex Volkov
- Alvin Starr
- Anthony de Boer
- D. Hugh Redelmeier
- David Thornton
- Giles Orr
- Jason Shaw
- John Moniz
- lsorense@csclub.uwaterloo.ca
- Mike
- William Park