ivoras’ FreeBSD blog

September 29, 2007

How slow is VMWare (Server)?

Filed under: FreeBSD — ivoras @ 9:47 pm

VMWare is slow. But how slow is it? Here are two runs of benchmarks/unixbench on the same machine: the first in a VMWare guest under VMWare Server 1.0 on Windows XP, the second under the native OS on the same hardware.

Here are the results on VMWare:


INDEX VALUES
TEST BASELINE RESULT INDEX

Dhrystone 2 using register variables 116700.0 6330202.6 542.4
Double-Precision Whetstone 55.0 1606.8 292.1
Execl Throughput 43.0 468.4 108.9
File Copy 1024 bufsize 2000 maxblocks 3960.0 36722.0 92.7
File Copy 256 bufsize 500 maxblocks 1655.0 11696.0 70.7
File Copy 4096 bufsize 8000 maxblocks 5800.0 49643.0 85.6
Pipe Throughput 12440.0 95945.5 77.1
Pipe-based Context Switching 4000.0 21320.3 53.3
Process Creation 126.0 1209.9 96.0
Shell Scripts (8 concurrent) 6.0 1.0 1.7
System Call Overhead 15000.0 47093.0 31.4
=========
FINAL SCORE 70.1

And here on the raw hardware:

INDEX VALUES
TEST BASELINE RESULT INDEX

Dhrystone 2 using register variables 116700.0 6467105.1 554.2
Double-Precision Whetstone 55.0 1633.7 297.0
Execl Throughput 43.0 2030.9 472.3
File Copy 1024 bufsize 2000 maxblocks 3960.0 63783.0 161.1
File Copy 256 bufsize 500 maxblocks 1655.0 57489.0 347.4
File Copy 4096 bufsize 8000 maxblocks 5800.0 53476.0 92.2
Pipe Throughput 12440.0 930715.9 748.2
Pipe-based Context Switching 4000.0 204248.8 510.6
Process Creation 126.0 5373.3 426.5
Shell Scripts (8 concurrent) 6.0 563.7 939.5
System Call Overhead 15000.0 720641.0 480.4
=========
FINAL SCORE 387.4
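For reference, unixbench derives each per-test index as result / baseline × 10, and the final score as the geometric mean of the indices. A minimal Python sketch of that arithmetic, using the VMWare Server run's numbers (small rounding differences against the printed values are expected, since the published indices are themselves rounded):

```python
import math

# (baseline, result) pairs from the VMWare Server run above
tests = {
    "Dhrystone 2":                  (116700.0, 6330202.6),
    "Whetstone":                    (55.0, 1606.8),
    "Execl Throughput":             (43.0, 468.4),
    "File Copy 1024":               (3960.0, 36722.0),
    "File Copy 256":                (1655.0, 11696.0),
    "File Copy 4096":               (5800.0, 49643.0),
    "Pipe Throughput":              (12440.0, 95945.5),
    "Pipe-based Context Switching": (4000.0, 21320.3),
    "Process Creation":             (126.0, 1209.9),
    "Shell Scripts (8 concurrent)": (6.0, 1.0),
    "System Call Overhead":         (15000.0, 47093.0),
}

# index = result / baseline * 10
indices = {name: result / baseline * 10
           for name, (baseline, result) in tests.items()}

# final score = geometric mean of the per-test indices
final_score = math.exp(sum(math.log(i) for i in indices.values())
                       / len(indices))

print(round(indices["Dhrystone 2"], 1))  # -> 542.4
print(round(final_score, 1))             # -> 70.1
```

The geometric mean is why the abysmal "shell scripts" index (1.7) drags the VMWare score down so hard: a single near-zero index pulls the whole product toward zero, which an arithmetic mean would largely hide.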

Both systems run FreeBSD 7-CURRENT with debugging disabled. The results are not 100% comparable since the VMWare image was run without SMP, but in this benchmark SMP positively influences only the “shell scripts” result (parallel execution); the other results are either comparable or negatively influenced by SMP (the CPU is a dual-core Athlon 64 in i386 mode).

Draw your own conclusions, but I consider the IO and context-switch performance so bad that it makes the whole system unusable in production (at least where performance is important).

Update:

In VMWare’s defense, I’ve run unixbench on a VMWare ESX 3 server (though on a system not at all comparable to the one in the benchmarks above – a 3 GHz Xeon from the NetBurst era, running 6.2-RELEASE as a guest), and the results are better:


INDEX VALUES
TEST BASELINE RESULT INDEX

Dhrystone 2 using register variables 116700.0 5113310.0 438.2
Double-Precision Whetstone 55.0 935.0 170.0
Execl Throughput 43.0 555.5 129.2
File Copy 1024 bufsize 2000 maxblocks 3960.0 55662.0 140.6
File Copy 256 bufsize 500 maxblocks 1655.0 17818.0 107.7
File Copy 4096 bufsize 8000 maxblocks 5800.0 66604.0 114.8
Pipe Throughput 12440.0 132556.6 106.6
Pipe-based Context Switching 4000.0 18074.1 45.2
Process Creation 126.0 1414.9 112.3
Shell Scripts (8 concurrent) 6.0 130.7 217.8
System Call Overhead 15000.0 62919.9 41.9
=========
FINAL SCORE 121.2

I still wouldn’t use it where performance is important, but at least these results look half-usable. The major improvement seems to be in context switching and parallel execution.

Second update:

Here’s the same setup as in the first VMWare Server benchmark (same machine, Windows XP host, 7-CURRENT guest), this time with QEmu+kqemu (kernel+user code acceleration):


INDEX VALUES
TEST BASELINE RESULT INDEX

Dhrystone 2 using register variables 116700.0 5456588.4 467.6
Double-Precision Whetstone 55.0 1492.1 271.3
Execl Throughput 43.0 166.5 38.7
File Copy 1024 bufsize 2000 maxblocks 3960.0 13744.0 34.7
File Copy 256 bufsize 500 maxblocks 1655.0 4426.0 26.7
File Copy 4096 bufsize 8000 maxblocks 5800.0 23832.0 41.1
Pipe Throughput 12440.0 23079.7 18.6
Pipe-based Context Switching 4000.0 2159.5 5.4
Process Creation 126.0 409.8 32.5
Shell Scripts (8 concurrent) 6.0 8.6 14.3
System Call Overhead 15000.0 9728.4 6.5
=========
FINAL SCORE 33.3
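Putting the three runs on the Athlon 64 machine side by side, the final scores work out to the following fractions of bare-metal performance (a quick back-of-the-envelope calculation; the ESX 3 score is left out since it came from different hardware):

```python
# Final unixbench scores from the runs above (same Athlon 64 machine)
native = 387.4
vmware_server = 70.1
qemu_kqemu = 33.3

# Fraction of bare-metal performance retained by each virtualizer
for name, score in [("VMWare Server 1.0", vmware_server),
                    ("QEmu+kqemu", qemu_kqemu)]:
    print(f"{name}: {score / native:.1%} of native")
# VMWare Server 1.0: 18.1% of native
# QEmu+kqemu: 8.6% of native
```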

Compared to this, VMWare server doesn’t look bad at all :(

17 Comments

  1. Of course there’s significant overhead in virtualizing with hosted products. VMware Server is perfectly fine for small, development, and test environments. It’s not meant to serve as a production solution in an enterprise network. The disk overhead is bad on Server. It’s not as bad if you use pre-filled disks rather than growing ones, but the performance hit vs. the bare metal is significant. For low load machines, that doesn’t matter. For high load machines where you actually need close to the bare metal’s capabilities, you need to be running ESX.

    I’ve done similar testing except using Windows and IOMeter as the comparison. Server is significantly slower than bare metal, but ESX is very close. It’s more than usable in enterprise environments.

    Your “unusable in production” statement contradicts reality. Millions of companies use Server and ESX in production with largely satisfactory results.

    Comment by Chris Buechler — September 30, 2007 @ 4:07 am

  2. Consumer Electronics Reviews…

    I couldn’t understand some parts of this article, but it sounds interesting…

    Trackback by Consumer Electronics Reviews — October 16, 2007 @ 2:19 am

  3. The performance impact of the Windows host in addition to that of the VMWare virtualization translation is evident in your test.

Out of curiosity, why was a Linux host OS not benchmarked?

VMWare themselves use a modified Redhat 7 as the Virtual Machine Manager for their enterprise ESX product. VMWare Server supports both Windows and various flavors of Linux as the host OS. I would expect a lightweight variant of Linux to be a much better reference than Windows. Is your expectation that Windows will yield faster throughput than Linux? Historical data does not necessarily back up such an assumption.

    -=dave

    Comment by Dave Johnson — May 28, 2008 @ 11:40 pm

  4. Linux hosts weren’t tested because I don’t have any available. I agree that ESX is much faster by design; I didn’t expect that the “plain” VMWare Server would be that much slower than it – hence the post.

    Calling it “unusable” was a bit over the top though; there are workloads where performance is not that important.

    Comment by ivoras — May 29, 2008 @ 12:29 pm

Hi, I am writing my diploma thesis about virtualization. In the last part of my work I want to test using some benchmarks (one of them is unixbench). I’ve installed two kinds of virtual machines on Ubuntu 8.04 on my Toshiba T2330 (2 GB RAM): VMware Workstation 6.04 and Sun xVM VirtualBox 1.6.2. VMware has a lot of useful functions like cloning or taking a snapshot. But how can I test them? I want to run unixbench on my host system, then run openSUSE 11 on VMware and test with unixbench, then run VirtualBox with openSUSE 11 as the guest, and then compare the results. Do you think this is a good idea?

I also want to test with IOmeter, but I have some problems running it on Ubuntu.

    Is there any other tests/benchmarks i can use for my work?

    Thanks for advice.

    Comment by bleser — June 27, 2008 @ 11:48 pm

  6. I don’t see any problems with what you want to do. Maybe the first thing you should do is decide which aspects of the system you want to measure and measure each of them separately; for example: CPU, IO, context switching, etc. Then you can compare the system in each of the categories.

    Comment by ivoras — July 4, 2008 @ 9:33 am

  7. FYI, Red Hat 7 is *not* used as the “virtual machine manager” for ESX. It is currently based on RedHat Enterprise Linux 3, however there is an important distinction to be made. When the physical hardware boots, it loads RHEL. RHEL eventually loads the ESX kernel (through the kernel module interface) which then completely takes over the bare metal and transfers the running RHEL image into a separate VM, managed by the ESX kernel. The RHEL VM is then used for the management interface.

    In other words: ESX uses RHEL as a bootloader and then a management interface. The actual grunt work is done by VMware’s proprietary ESX kernel.

    Comment by Dan Parsons — July 20, 2008 @ 7:50 am

[...] is too good to be true, compared to what I got previously on the same hardware in VMWare (in short: a score of [...]

    Pingback by Lotsa FreeBSD » VirtualBox — July 31, 2008 @ 12:57 am

  9. We are looking for someone to perform similar tests with a VMware add-on solution we have that may offer improved performance. This would be on a contracted project basis. If you are interested, let me know and I’ll outline the tests.

    Comment by Paul Swart — August 18, 2008 @ 11:55 pm

  10. I would be interested in an update to this blog for Server 2.0 vs. ESXi.

    Comment by Chris — February 9, 2009 @ 3:01 pm

  11. VMWare ESXi performs better for Linux (and probably Windows) and worse for FreeBSD. See http://lists.freebsd.org/pipermail/freebsd-performance/2009-February/003686.html

    Comment by ivoras — February 12, 2009 @ 1:15 pm

Slow. VMware Server 2 was unusable on my machine: the web service on the default http and https ports took 23 minutes to load after logging in. I gave up and went back to VirtualBox, because I could not even create a new guest OS. I tried in Firefox and Opera running on Ubuntu with gcc 4.2.4.

    Comment by Dean.L — March 4, 2009 @ 4:58 pm

  13. Described very exactly.

    Comment by ????????? — May 11, 2009 @ 2:23 pm

  14. Dan Parsons:
Actually, I was speaking from my experience, which was with ESX 2.x. So, I *should* have said “VMWare themselves use a modified Redhat 7 as the Virtual Machine Manager for their enterprise ESX [2.x] product.” Just as you *should* have stated “FYI, Red Hat 7 is *not* used as the ‘virtual machine manager’ for ESX [3.x].”

    Comment by Dave Johnson — May 11, 2009 @ 11:02 pm

  15. This benchmark is long past, but performing the same benchmark with VMWare Server 2.0 using RAW would be interesting to see. If it were closer to the hardware result, it may prove more capable of providing service for heavy disk IO workloads.

    Thanks again for the results.

    -=dave

    Comment by Dave Johnson — May 11, 2009 @ 11:07 pm

I hope you will keep writing! Thank you!!!

    Comment by shuriken — May 29, 2009 @ 4:57 pm

  17. ??????? ????? ???????? ?????? ?????? ???? ?? ???????!

    Comment by Evfrosin — July 11, 2009 @ 1:59 pm
