Skip site navigation (1)Skip section navigation (2)
Date:      Mon, 02 May 2011 14:30:34 -0700
From:      Ted Mittelstaedt <tedm@mittelstaedt.us>
To:        Adam Vande More <amvandemore@gmail.com>
Cc:        freebsd-emulation@freebsd.org
Subject:   Re: virtualbox I/O 3 times slower than KVM?
Message-ID:  <4DBF227A.1000704@mittelstaedt.us>
In-Reply-To: <BANLkTi=dhtSFJm_gZhHTu1ohyE2-kQgy_A@mail.gmail.com>
References:  <10651953.1304315663013.JavaMail.root@mswamui-blood.atl.sa.earthlink.net>	<BANLkTikyzGZt6YUWVc3KiYt_Of0gEBUp%2Bg@mail.gmail.com>	<4DBEFBD8.8050107@mittelstaedt.us> <BANLkTi=dhtSFJm_gZhHTu1ohyE2-kQgy_A@mail.gmail.com>

next in thread | previous in thread | raw e-mail | index | archive | help
On 5/2/2011 12:43 PM, Adam Vande More wrote:
> On Mon, May 2, 2011 at 1:45 PM, Ted Mittelstaedt <tedm@mittelstaedt.us
> <mailto:tedm@mittelstaedt.us>> wrote:
>
>     On 5/2/2011 5:09 AM, Adam Vande More wrote:
>
>         On Mon, May 2, 2011 at 12:54 AM, John<aqqa11@earthlink.net
>         <mailto:aqqa11@earthlink.net>>  wrote:
>
>             On both the FreeBSD host and the CentOS host, the copying
>             only takes 1
>             second, as tested before.  Actually, the classic "dd" test
>             is slightly
>             faster on the FreeBSD host than on the CentOS host.
>
>             The storage I chose for the virtualbox guest is a SAS
>             controller.  I found
>             by default it did not enable "Use Host I/O Cache".  I just
>             enabled that and
>             rebooted the guest.  Now the copying on the guest takes 3
>             seconds.  Still,
>             that's clearly slower than 1 second.
>
>             Any other things I can try?  I love FreeBSD and hope we can
>             sort this out.
>
>
>         Your FreeBSD Host/guest results seem relatively consistent with
>         what I would
>         expect since VM block io isn't really that great yet, however
>         the results in
>         your Linux VM seems too good to be true.
>
>
>     We know that Linux likes to run with the condom off on the file system,
>     (async writes) just because it helps them win all the know-nothing
>     benchmark contests in the ragazines out there, and FreeBSD does not
>     because it's users want to have an intact filesystem in case the
>     system crashes or loses power.  I'm guessing this is the central issue
>     here.
>
>
>         Have you tried powering off the
>         Linux VM immediately after the cp exits and md5'ing the two
>         files?  This
>         will insure your writes are completing successfully.
>
>
>     That isn't going to do anything because the VM will take longer than 3
>     seconds to close and it it's done gracefully then the VM won't close
>     until the writes are all complete.
>
>
> No, this is no correct.  You can kill the VM before it has a chance to
> sync(in Vbox, the poweroff button does this, and the qemu/kvm stop
> command is not a graceful shutdown either).

that's sync within the VM.  Where is the bottleneck taking place?  If
the bottleneck is hypervisor to host, then the guest to vm write may 
write all it's data to a memory buffer in the hypervisor that is then 
slower-writing it to the filesystem.  In that case killing the guest 
without killing the VM manager will allow the buffer to complete 
emptying since the hypervisor isn't actually being shut down.

   I haven't actually tested
> this, but it would seem to be a large bug if doesn't work this way since
> there are also graceful shutdown options in both hypervisors and the
> documenation states you may lose data with this option.  If nothing
> else, the real power cord will do the same thing.
>
>
>         http://ivoras.sharanet.org/blog/tree/2009-12-02.using-ministat.html
>         http://lists.freebsd.org/pipermail/freebsd-current/2011-March/023435.html
>
>
>     However that tool doesn't mimic real world behavior, either.
>
>
> That tool is for analyzing benchmarks, not running them.
>
>     The only
>     real way to test is to run both systems in production and see what
>     happens.
>
>
> Any dev/testing environment I setup or worked with has a method for
> simulating extremely bad scenarios production might face like 10,000
> devices phoning home at once to an aggregation network, with an equally
> severe load coming from the web frontend.  I thought this was pretty
> common practice.
>

Is his app going to ever face the extremely bad scenario, though?

If your server is going to melt down at 10,000 devices phoning home then
the difference between FreeBSD and Centos may be that the Centos system
lasts 50 milliseconds longer than the FreeBSD system before cascading
into an overload.

You can spend a lot of time setting up a test environment to simulate a
production environment when just running it in production for a while 
would answer the question.  Not to mention that for high-volume apps the
iron is always the cheapest part so most admins in that scenario just
throw a little more money at the iron.

>     I would not make a choice of going with one system over another based
>     on a single large file write difference of 2 seconds.  We have to
>     assume he's got an application that makes hundreds to thousands of large
>     file writes where this discrepancy would actually make a difference.
>
>  From the information given, that's not an assumption I'm comfortable
> with.  OP will have to find his own way on that whether it's something
> like blogbench or bonnie or "real data" with real data being the best.
> Agreed that discrepancy surely would make a difference if it's
> consistent across his normal workload.  However, there are many cases
> where that might not be true.
>

The lack of further info from the OP makes this more of a speculators
discussion than anything else.

Ted


> --
> Adam Vande More




Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?4DBF227A.1000704>