Date:      Tue, 10 Aug 2010 00:16:24 -0400
From:      Joshua Boyd <boydjd@jbip.net>
To:        Jeremy Chadwick <freebsd@jdc.parodius.com>
Cc:        freebsd-stable@freebsd.org, Ivan Voras <ivoras@freebsd.org>
Subject:   Re: 8-STABLE Slow Write Speeds on ESXI 4.0
Message-ID:  <AANLkTikfo+aXCG5ix2nO15Dx-YSv3w0MMEt_rz=O6z+p@mail.gmail.com>
In-Reply-To: <20100810040519.GA21921@icarus.home.lan>
References:  <AANLkTi=FNZ+=4yMPJBu+ucGJiHqwMwQvoGcgqB+tPJF2@mail.gmail.com> <i3jhn0$ovp$1@dough.gmane.org> <AANLkTik+S2fe-sS242OXQprsEA4Oh4t6-CvBCuBCASz7@mail.gmail.com> <AANLkTimMA6OQKt-d6ecM=GmG2ciBTis-nHNovEwvjCB-@mail.gmail.com> <AANLkTimu2JoC6bmaBcSY3e5ovBPnwZ_s_zbRK=v8h7f6@mail.gmail.com> <AANLkTimuPnac_h-ipCyD76j+0HGttBxDYyTNdtdU0_sm@mail.gmail.com> <20100809161124.GA4618@icarus.home.lan> <AANLkTimYKupOZXDgL6O2SRxp3JHJcMGSrcVK697tKPss@mail.gmail.com> <20100810040519.GA21921@icarus.home.lan>

On Tue, Aug 10, 2010 at 12:05 AM, Jeremy Chadwick <freebsd@jdc.parodius.com> wrote:

> On Mon, Aug 09, 2010 at 11:59:46PM -0400, Joshua Boyd wrote:
> > On Mon, Aug 9, 2010 at 12:11 PM, Jeremy Chadwick <freebsd@jdc.parodius.com> wrote:
> >
> > > On Mon, Aug 09, 2010 at 05:12:21PM +0200, Ivan Voras wrote:
> > > > On 9 August 2010 16:55, Joshua Boyd <boydjd@jbip.net> wrote:
> > > > > On Sat, Aug 7, 2010 at 1:58 PM, Ivan Voras <ivoras@freebsd.org> wrote:
> > > > >>
> > > > >> On 7 August 2010 19:03, Joshua Boyd <boydjd@jbip.net> wrote:
> > > > >> > On Sat, Aug 7, 2010 at 7:57 AM, Ivan Voras <ivoras@freebsd.org> wrote:
> > > > >>
> > > > >> >> It's unlikely they will help, but try:
> > > > >> >>
> > > > >> >> vfs.read_max=32
> > > > >> >>
> > > > >> >> for read speeds (but test using the UFS file system, not as a raw device like above), and:
> > > > >> >>
> > > > >> >> vfs.hirunningspace=8388608
> > > > >> >> vfs.lorunningspace=4194304
> > > > >> >>
> > > > >> >> for writes. Again, it's unlikely but I'm interested in results you achieve.
> > > > >> >>
> > > > >> >
> > > > >> > This is interesting. Write speeds went up to 40MBish. Still slow, but 4x faster than before.
> > > > >> > [root@git ~]# dd if=/dev/zero of=/var/testfile bs=1M count=250
> > > > >> > 250+0 records in
> > > > >> > 250+0 records out
> > > > >> > 262144000 bytes transferred in 6.185955 secs (42377288 bytes/sec)
> > > > >> > [root@git ~]# dd if=/var/testfile of=/dev/null
> > > > >> > 512000+0 records in
> > > > >> > 512000+0 records out
> > > > >> > 262144000 bytes transferred in 0.811397 secs (323077424 bytes/sec)
> > > > >> > So read speeds are up to what they should be, but write speeds are still significantly below what they should be.
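
(A side note for anyone reproducing the numbers above: the read test uses dd's default 512-byte block size and reads back a file that was just written, so it is probably served largely from the buffer cache. A closer apples-to-apples re-run with a matching block size might look something like this, reusing the same path and size as above:

[root@git ~]# dd if=/var/testfile of=/dev/null bs=1M

Using a test file a few times larger than the guest's RAM would also keep caching from flattering either number.)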
> > > > >>
> > > > >> Well, you *could* double the size of the "runningspace" tunables and try that :)
> > > > >>
> > > > >> Basically, in tuning these two settings we are cheating: increasing read-ahead (read_max) and write in-flight buffering (runningspace) in order to offload as much IO to the controller (in this case VMware) as soon as possible, so as to reduce the impact of the horrible IO-caused context switches VMware has. It will help sequential performance, but nothing can help random IOs.
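
For reference, these are all plain sysctls, so they can be tried at runtime and then persisted once a good combination is found. A minimal sketch, assuming a stock 8-STABLE install and the values suggested above (double the runningspace pair to test the follow-up idea):

# apply at runtime
sysctl vfs.read_max=32
sysctl vfs.hirunningspace=8388608
sysctl vfs.lorunningspace=4194304

# make the settings survive a reboot
printf 'vfs.read_max=32\nvfs.hirunningspace=8388608\nvfs.lorunningspace=4194304\n' >> /etc/sysctl.conf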
> > > > >
> > > > > Hmm. So what you're saying is that FreeBSD doesn't properly support the ESXi controller?
> > > >
> > > > Nope, I'm saying you will never get raw disk-like performance with any "full" virtualization product, regardless of specifics. If you want performance, go OS-level (like jails) or some form of paravirtualization.
> > > >
> > > > > I'm going to try 7.3-RELEASE today, just to make sure that this isn't a regression of some kind. It seems from reading other posts that this used to work properly and satisfactorily.
> > > >
> > > > Nope, I've been messing around with VMware for a long time and the performance penalty was always there.
> > >
> > > I thought Intel VT-d was supposed to help address things like this?
> > >
> >
> > Our ESXi boxes are AMD rigs, so VT-d doesn't help here.
>
> AMD offers the same technology; it's called AMD-Vi these days, and was
> previously known as IOMMU.  I don't have any familiarity with it.
>

As far as I know, all it gets you is passthrough.
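
For what it's worth, a quick way to see which storage controller ESXi is actually presenting to the guest (and therefore which FreeBSD driver is in play) is pciconf; something along these lines, with the output trimmed to the HBA:

# list PCI devices with vendor/device strings and pick out the storage controller
pciconf -lv | grep -B4 -i storage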

>
> --
> | Jeremy Chadwick                                   jdc@parodius.com |
> | Parodius Networking                       http://www.parodius.com/ |
> | UNIX Systems Administrator                  Mountain View, CA, USA |
> | Making life hard for others since 1977.              PGP: 4BD6C0CB |
>
>


-- 
Joshua Boyd
JBipNet

E-mail: boydjd@jbip.net

http://www.jbip.net


