Date:      Mon, 2 May 2011 22:49:32 -0700
From:      Jan Koum <jan@whatsapp.com>
To:        Jeremy Chadwick <freebsd@jdc.parodius.com>
Cc:        freebsd-fs@freebsd.org, Chris Peiffer <chris@whatsapp.com>
Subject:   Re: very strange IO issue with FreeBSD 8 and SSD
Message-ID:  <BANLkTinchOrXFo+7RqV9-pf_2zFoBtVdeQ@mail.gmail.com>
In-Reply-To: <20110503041718.GA34604@icarus.home.lan>
References:  <BANLkTin-qEoxxFbjJkDaA_-UZMkza08NNQ@mail.gmail.com> <20110502233601.GA29710@icarus.home.lan> <BANLkTik5tXegwoRvB7XAvpEPb385KjGEtA@mail.gmail.com> <BANLkTinQt4YZiudZUSgxL0x8dJ6MJTueRw@mail.gmail.com> <20110503041718.GA34604@icarus.home.lan>

On Mon, May 2, 2011 at 9:17 PM, Jeremy Chadwick <freebsd@jdc.parodius.com> wrote:

>
> To emulate "iostat 1", you will need to run this from inside of a while
> loop via the shell.  E.g. in sh or bash:
>
> while true; do gstat -b; sleep 1; done
>
>
sure:

$ sudo gstat -b | head -2 ; while true; do sudo gstat -b | grep 'a$'; sleep 1; echo; done
dT: 1.009s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
  258     56     16     42    0.2     40    312    2.2    1.0  ad4s1a
  288     76     20     81    0.2     57    387    4.0    1.2  ad5a
  255    208     28     76    0.4    180   1977   12.1    3.1  ad6a
  276     83     26    139    0.5     58    499    6.2    3.1  ad7a

    0     17     16     40    0.2      1      4    0.2    0.4  ad4s1a
    0     30     28     95    5.4      2     20    0.2   15.1  ad5a
    0   2943     30    139   17.9   2913  46257  261.6   40.5  ad6a
    0     24     23     82    0.2      1      4    1.6    0.6  ad7a

    0    791     30    137    0.5    762   6897   24.2   16.1  ad4s1a
    0    858     18     68    0.2    840   8261   35.7   16.7  ad5a
    0   1308     18     46    1.7   1290  13023   25.5   22.1  ad6a
    0    791     21    113    1.5    771   7320   19.8   21.3  ad7a

    0   3152     26     77   18.1   3126  46089  236.0   44.0  ad4s1a
    0    385     30    109   10.6    355   2420   11.4   28.1  ad5a
    0   1263     25    107   11.5   1239   7172   37.3   27.8  ad6a
  696    761     32    159   12.2    730   4510   22.5   31.1  ad7a

    0    456     26     76    0.4    430   1892   19.0    9.4  ad4s1a
    0    616     14     36    0.2    602   4971   20.3    8.6  ad5a
    0    811     14     46    0.3    797   6186   27.0   10.4  ad6a
    0    207     19     58    2.1    188   2982   25.2   10.3  ad7a

  313    467     20     76    0.2    447   3834   19.2    4.6  ad4s1a
   10     33     17     96    0.2     16    123   82.7    8.8  ad5a
    3     32     16     62    0.2     16     98    0.3    0.6  ad6a
    1     40     20     52    0.2     20    223    0.3    0.7  ad7a

  151   1624     18     77   51.6   1606  10039  106.3   69.1  ad4s1a
   25    232      8     22   95.1    224   3565   94.4   64.5  ad5a
    0    868     15     48    0.2    854   7438   20.7   17.7  ad6a
    0    821     11     73    1.2    810   8846   26.3   17.1  ad7a
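
(to eyeball the totals: column 7 is the per-disk write kBps, so summing it
across the four ad*a lines of each sample gives the aggregate write rate --
a rough awk sketch over the same loop:)

$ while true; do sudo gstat -b | grep 'a$'; sleep 1; done | \
    awk '{ w += $7; n++ } n == 4 { print w, "kBps total writes"; w = n = 0 }'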




> I believe the concern that started the thread was that
> 4MBytes/sec was considered bad performance.



sorry, not quite...  i am not judging "performance" - what i am trying to
get to the bottom of is why in the world 500KB of file updates
(write/append) per second would generate so much IO



> There are indications from
> your iostat output that occasionally the writes are buffered and come in
> "in a burst" at 10-11MByte/sec, but your overall average is around
> 4-5MByte/sec.
>
>

we see higher averages, but OK -- don't you think 4-5MB/sec is still way too
high for the little IO the application is doing?
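
(fwiw, the burstiness itself would be consistent with the UFS syncer
flushing dirty buffers on its fixed delays -- the intervals are visible via
sysctl; a quick check, stock values are on the order of ~30 seconds:)

$ sysctl kern.filedelay kern.dirdelay kern.metadelay   # syncer flush delays, in seconds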


(dd doesn't really reproduce the real-life usage of a filesystem with
multiple directories and threads using the underlying fs)
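
(something closer would be several concurrent appenders spread over
directories -- a minimal sh sketch, with made-up paths and rates, not the
real app:)

#!/bin/sh
# minimal sketch: 4 concurrent writers, each appending ~1 KB/s to a file
# in its own directory (hypothetical paths/rates); ^C to stop
for i in 1 2 3 4; do
  mkdir -p /tmp/bench/dir$i
  ( while true; do
      head -c 1024 /dev/zero >> /tmp/bench/dir$i/log
      sleep 1
    done ) &
done
wait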



> I can safely say the conversation is going to immediately turn to "how
> does your application work?", including people asking for full source
> code and so on.



it is a very very very simple app built on top of the erlang file module:
http://www.erlang.org/doc/man/file.html
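
(procstat can at least show how the app's descriptors were opened --
append, sync and so on; the pid below is a placeholder:)

$ procstat -f <pid-of-beam-process>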



> Unless I misunderstand, that's effectively what you're
> asking: "why does our application perform so badly on these SSDs?"
>
>
not really.  what i am asking is: why is there so much IO overhead?  where
is it coming from?
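
(the way i'd try to pin it down: compare what the vm asks the kernel to
write against what gstat shows hitting the disks -- a rough ktrace sketch,
pid is a placeholder:)

$ sudo ktrace -p <pid-of-beam-process>   # start syscall tracing, logs to ./ktrace.out
$ sleep 10; sudo ktrace -C               # stop after ~10 seconds
$ sudo kdump | awk '$3=="RET" && $4=="write" { split($5,a,"/"); sum+=a[1] } END { print sum, "bytes written by the app" }'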


