Date:      Mon, 9 Aug 2010 09:11:24 -0700
From:      Jeremy Chadwick <freebsd@jdc.parodius.com>
To:        Ivan Voras <ivoras@freebsd.org>
Cc:        Joshua Boyd <boydjd@jbip.net>, freebsd-stable@freebsd.org
Subject:   Re: 8-STABLE Slow Write Speeds on ESXI 4.0
Message-ID:  <20100809161124.GA4618@icarus.home.lan>
In-Reply-To: <AANLkTimuPnac_h-ipCyD76j+0HGttBxDYyTNdtdU0_sm@mail.gmail.com>
References:  <AANLkTi=FNZ+=4yMPJBu+ucGJiHqwMwQvoGcgqB+tPJF2@mail.gmail.com> <i3jhn0$ovp$1@dough.gmane.org> <AANLkTik+S2fe-sS242OXQprsEA4Oh4t6-CvBCuBCASz7@mail.gmail.com> <AANLkTimMA6OQKt-d6ecM=GmG2ciBTis-nHNovEwvjCB-@mail.gmail.com> <AANLkTimu2JoC6bmaBcSY3e5ovBPnwZ_s_zbRK=v8h7f6@mail.gmail.com> <AANLkTimuPnac_h-ipCyD76j+0HGttBxDYyTNdtdU0_sm@mail.gmail.com>

On Mon, Aug 09, 2010 at 05:12:21PM +0200, Ivan Voras wrote:
> On 9 August 2010 16:55, Joshua Boyd <boydjd@jbip.net> wrote:
> > On Sat, Aug 7, 2010 at 1:58 PM, Ivan Voras <ivoras@freebsd.org> wrote:
> >>
> >> On 7 August 2010 19:03, Joshua Boyd <boydjd@jbip.net> wrote:
> >> > On Sat, Aug 7, 2010 at 7:57 AM, Ivan Voras <ivoras@freebsd.org> wrote:
> >>
> >> >> It's unlikely they will help, but try:
> >> >>
> >> >> vfs.read_max=32
> >> >>
> >> >> for read speeds (but test using the UFS file system, not as a raw
> >> >> device
> >> >> like above), and:
> >> >>
> >> >> vfs.hirunningspace=8388608
> >> >> vfs.lorunningspace=4194304
> >> >>
> >> >> for writes. Again, it's unlikely but I'm interested in results you
> >> >> achieve.
> >> >>
> >> >
> >> > This is interesting. Write speeds went up to 40MBish. Still slow, but 4x
> >> > faster than before.
> >> > [root@git ~]# dd if=/dev/zero of=/var/testfile bs=1M count=250
> >> > 250+0 records in
> >> > 250+0 records out
> >> > 262144000 bytes transferred in 6.185955 secs (42377288 bytes/sec)
> >> > [root@git ~]# dd if=/var/testfile of=/dev/null
> >> > 512000+0 records in
> >> > 512000+0 records out
> >> > 262144000 bytes transferred in 0.811397 secs (323077424 bytes/sec)
> >> > So read speeds are up to what they should be, but write speeds are still
> >> > significantly below what they should be.
> >>
> >> Well, you *could* double the size of the "runningspace" tunables and
> >> try that :)
> >>
> >> Basically, in tuning these two settings we are cheating: increasing
> >> read-ahead (read_max) and write in-flight buffering (runningspace) in
> >> order to offload as much IO to the controller (in this case vmware) as
> >> soon as possible, so as to work around the expensive IO-induced
> >> context switches vmware has.  It will help sequential performance,
> >> but nothing can help random IOs.
> >
> > Hmm. So what you're saying is that FreeBSD doesn't properly support the
> > ESXi controller?
> 
> Nope, I'm saying you will never get raw disk-like performance with any
> "full" virtualization product, regardless of specifics. If you want
> performance, go OS-level (like jails) or some example of
> paravirtualization.
> 
> > I'm going to try 7.3-RELEASE today, just to make sure that this isn't a
> > regression of some kind. It seems from reading other posts that this used to
> > work properly and satisfactorily.
> 
> Nope, I've been messing around with VMWare for a long time and the
> performance penalty was always there.
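
(As a footnote for anyone finding this in the archives: the tunables
Ivan suggested are plain sysctls, so they can be set at runtime with
sysctl(8) and persisted in /etc/sysctl.conf.  Untested on my end, but
roughly:

  # at runtime
  sysctl vfs.read_max=32
  sysctl vfs.hirunningspace=8388608
  sysctl vfs.lorunningspace=4194304

  # in /etc/sysctl.conf, to survive a reboot
  vfs.read_max=32
  vfs.hirunningspace=8388608
  vfs.lorunningspace=4194304

Doubling the "runningspace" values as suggested above just means
repeating the same thing with 16777216 and 8388608.)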

I thought Intel VT-d was supposed to help address things like this?

I can confirm on VMware Workstation 7.1 (not ESXi) that disk I/O
performance isn't that great.  I only test with Windows XP SP3 as the
host OS, and with the LSI SAS/SATA option as the guest's hard disk
controller.  I can't imagine IDE/ATA being faster, since Workstation
(at least) emulates an Intel ICH2.

I was under the impression that ESXi provided native access to the
hardware in the system (vs. Workstation, which emulates everything)?
The controller seen by FreeBSD in the OP's system is:

mpt0: <LSILogic SAS/SATA Adapter> port 0x4000-0x40ff mem 0xd9c04000-0xd9c07fff,0xd9c10000-0xd9c1ffff irq 18 at device 0.0 on pci3
mpt0: [ITHREAD]
mpt0: MPI Version=1.5.0.0

Which looks an awful lot like what I see on Workstation 7.1.
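
If anyone wants to double-check what the guest is really being handed,
something along these lines should show it (pciconf is in the base
system; the grep patterns are just guesses at what the output looks
like on your box):

  pciconf -lv | grep -B4 -i lsi
  grep -i mpt /var/run/dmesg.boot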

FWIW, Workstation 7.1 is fairly adamant about stating "if you want
faster disk I/O, pre-allocate the disk space rather than let disk use
grow dynamically".  I've never tested this, however.
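
If someone does want to test it, I believe the vmware-vdiskmanager
utility that ships with Workstation can convert an existing growable
.vmdk into a preallocated one; something like the below, though the
exact flags should be checked against the docs for your version (the
filenames are obviously placeholders):

  vmware-vdiskmanager -r growable.vmdk -t 2 preallocated.vmdk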

How does Linux's I/O perform with the same setup?
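
If you do try a Linux guest, the closest equivalent to the dd runs
earlier in the thread would be something like this (conv=fdatasync
makes GNU dd flush before reporting, so the number isn't just the
page cache):

  dd if=/dev/zero of=/var/testfile bs=1M count=250 conv=fdatasync
  dd if=/var/testfile of=/dev/null bs=1M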

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |



