Date:      Mon, 02 May 2011 21:02:24 -0700
From:      Ted Mittelstaedt <tedm@mittelstaedt.us>
To:        Adam Vande More <amvandemore@gmail.com>
Cc:        freebsd-emulation@freebsd.org
Subject:   Re: virtualbox I/O 3 times slower than KVM?
Message-ID:  <4DBF7E50.8050401@mittelstaedt.us>
In-Reply-To: <BANLkTikf2JU_KKp1tt2j8DkaqYBxMzWerw@mail.gmail.com>
References:  <10651953.1304315663013.JavaMail.root@mswamui-blood.atl.sa.earthlink.net> <BANLkTikyzGZt6YUWVc3KiYt_Of0gEBUp+g@mail.gmail.com> <4DBEFBD8.8050107@mittelstaedt.us> <BANLkTi=dhtSFJm_gZhHTu1ohyE2-kQgy_A@mail.gmail.com> <4DBF227A.1000704@mittelstaedt.us> <BANLkTikf2JU_KKp1tt2j8DkaqYBxMzWerw@mail.gmail.com>

On 5/2/2011 7:39 PM, Adam Vande More wrote:
> On Mon, May 2, 2011 at 4:30 PM, Ted Mittelstaedt <tedm@mittelstaedt.us>
> wrote:
>
>     that's sync within the VM.  Where is the bottleneck taking place?  If
>     the bottleneck is hypervisor to host, then the guest-to-VM write may
>     write all its data to a memory buffer in the hypervisor, which then
>     writes it out to the filesystem more slowly.  In that case, killing
>     the guest without killing the VM manager will allow the buffer to
>     finish emptying, since the hypervisor isn't actually being shut down.
>
>
> No, the bottleneck is the emulated hardware inside the VM process
> container.  This is easy to observe: just start a bound process in the
> VM and watch top on the host side.  Also, the hypervisor uses the native
> host I/O driver, so there's no reason for it to be slow.  Since it's the
> emulated NIC which is the bottleneck, there is nothing left to issue the
> write.  Further empirical evidence for this can be seen by watching
> gstat on a VM running with md- or ZVOL-backed storage.  I already use
> ZVOLs for this, so it was pretty easy to confirm that no I/O occurs when
> the VM is paused or shut down.
>
>     Is his app going to ever face the extremely bad scenario, though?
>
>
> The point is that it should be relatively easy to induce the patterns
> you expect to see in production.  If you can't, I would consider that a
> problem.  Testing out theories (performance-based or otherwise) on a
> production system is not a good way to keep the continued faith of your
> clients when that system is a mission-critical one.  Maybe throwing
> more hardware at a problem is the first line of defense for some
> companies; unfortunately, I don't work for them.  Are they hiring? ;)  I
> understand the logic of such an approach and have even argued for it
> occasionally.  Unfortunately, payroll is already in the budget; extra
> hardware is not, even if it would be a net savings.
>
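
To put my buffering theory above in concrete terms, here is a rough
Python sketch of the write-behind model I was describing.  It is only a
toy (the queue size, flush delay, and names are all made up, not
anything VirtualBox actually exposes), but it shows why killing the
guest without killing the VM manager would still let the data land: the
guest thread stops producing, yet the hypervisor-side flusher keeps
draining the buffer to disk.

  import queue
  import threading
  import time

  # Toy write-behind buffer.  Names, sizes, and delays are illustrative
  # only, not VirtualBox internals.
  write_buffer = queue.Queue(maxsize=1024)  # guest writes parked in host RAM
  FLUSH_DELAY = 0.01                        # pretend the backing store is slow

  def guest_writer(blocks):
      # The guest's "sync" returns as soon as its blocks are buffered.
      for b in range(blocks):
          write_buffer.put(b)

  def hypervisor_flusher(guest_gone):
      # Drains to the real filesystem, slower than the guest fills it.
      while not (guest_gone.is_set() and write_buffer.empty()):
          try:
              write_buffer.get(timeout=0.1)
          except queue.Empty:
              continue
          time.sleep(FLUSH_DELAY)           # slow host-side write
          write_buffer.task_done()

  guest_gone = threading.Event()
  flusher = threading.Thread(target=hypervisor_flusher, args=(guest_gone,))
  flusher.start()

  guest_writer(200)    # guest finishes (or is killed) here...
  guest_gone.set()     # ...but the "hypervisor" process is still up
  flusher.join()       # so the buffer drains completely anyway
  print("buffer empty:", write_buffer.empty())

Of course, if Adam really sees no I/O hit the ZVOL once the VM is
paused, that argues against a host-side buffer like this still draining.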

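On the observation side, Adam's gstat-on-a-ZVOL check is the right way
to see whether anything is still landing on the backing store after the
guest is paused.  Something like the little poller below would do it;
the provider name is just a placeholder, and I'm assuming a gstat that
supports batch output (-b, print one snapshot and exit) with the usual
column layout.  Adjust if yours differs, or just watch the interactive
display instead.

  import subprocess
  import time

  # Placeholder: the GEOM provider backing the VM (e.g. your zvol).
  # Swap in your own name.  Assumes "gstat -b" prints one snapshot and
  # exits, and that the 7th column is write kBps; adjust if needed.
  PROVIDER = "zvol/tank/vm0"

  def write_kbps(provider):
      out = subprocess.run(["gstat", "-b"], capture_output=True,
                           text=True).stdout
      for line in out.splitlines():
          if line.strip().endswith(provider):
              return float(line.split()[6])  # kBps written
      return 0.0

  # Pause (or kill) the guest first, then poll: if this stays at zero,
  # nothing host-side is still draining buffered writes to the ZVOL.
  for _ in range(10):
      print(write_kbps(PROVIDER))
      time.sleep(1)
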
Most, if not all, sites I've been in that run Windows servers behave in
this manner.  At most of these sites, SOP is to "prove" that the
existing hardware is inadequate by loading whatever Windows software
management wants loaded and then letting the users on the network
scream about it.  Money then magically frees itself up where there
wasn't any before, since of course management will never blame the OS
for the slowness, only the hardware.

Understand I'm not advocating this, just making an observation.

Understand that I'm not against testing, but I've seen people get so
engrossed in constructing test suites that they end up wasting a lot of
money.  I would have to ask: how much time did the OP who started this
thread spend building two systems, a Linux one and a BSD one?  How much
time has he spent trying to get the BSD system to "work as well as the
Linux system"?  Wouldn't it have been cheaper for him not to spend that
time and just put the Linux system into production?

Ted


