Date:      Mon, 13 Jun 2011 23:50:50 +0100
From:      "Steven Hartland" <killing@multiplay.co.uk>
To:        "jhell" <jhell@DataIX.net>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Impossible compression ratio on ZFS
Message-ID:  <BC78B9F4947D4BA18B2227C809092871@multiplay.co.uk>
References:  <F21D6DCDBA494B4A9FDF20A13BC4947A@multiplay.co.uk> <20110613094803.GA10290@icarus.home.lan> <4E09C82B45BA46019281930B2EB13AC1@multiplay.co.uk> <20110613193529.GA21103@DataIX.net>


----- Original Message ----- 
From: "jhell" <jhell@DataIX.net>
To: "Steven Hartland" <killing@multiplay.co.uk>
Cc: "Jeremy Chadwick" <freebsd@jdc.parodius.com>; <freebsd-fs@freebsd.org>
Sent: Monday, June 13, 2011 8:35 PM
Subject: Re: Impossible compression ratio on ZFS
> 

> Hi Steve,
> 
> Knowing that there were patches out for v28 on 8.X, can you confirm that
> you are in fact using v15 ZFS? I would assume you are because of the
> release, but I don't want to assume.

Confirmed: this is a pure 8.2-RELEASE build machine with no additional patches,
except for compiling libz without assembly optimisations, as that's known
to cause crashes.

Specifically the following, as directed by Xin LI:-
cd /usr/src/lib/libz
make cleandir
make cleandir                              # yes, run it a second time
make MACHINE_ARCH=x86_64 obj depend all
make MACHINE_ARCH=x86_64 install


> If not, then seeing you have compression turned on... did you just dump
> that whole table into the database? It's quite possible that the
> compression was still happening in ARC before it was finally written out,
> and this would also explain why that happened.

The table was just rebuilt due to changing an index, so in effect, yes,
the data would have been copied from the old table into a fresh new copy
and then renamed.
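
For example (table and index names made up here), an ALTER along these lines
is the sort of statement that makes InnoDB copy the table into a new .ibd
file and rename it into place:

mysql mydb -e "ALTER TABLE orders ADD INDEX idx_created (created_at)"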

> Also what level of compression are you using ?

Standard lzjb, which is achieving 1.9x overall and 2.45x on this table file.
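
For reference, assuming the dataset is called tank/mysql, those ratios can be
read straight off it:

zfs get compression,compressratio tank/mysql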

It does indeed sound like this data was still being processed in some way, but
I'm surprised it took quite so long to show something other than the initial
file creation size.

It's not a big issue in this case, but it does raise the concern that if it wasn't
showing the "correct" file size, the data may not have been committed to
disk, and hence could have been unsafe for this quite extended period.
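
A rough cross-check, assuming the table file lives at something like
/tank/mysql/data/mytable.ibd, is to compare the logical size against the
blocks actually allocated; with compression the two will legitimately differ,
but du should only grow once data has genuinely been written out:

ls -l /tank/mysql/data/mytable.ibd    # logical (apparent) file size
du -h /tank/mysql/data/mytable.ibd    # blocks allocated on disk after compression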

Settings that may be relevant in this case within mysql are:-
innodb_log_file_size = 1024M
innodb_log_buffer_size = 8M
innodb_flush_method = O_DIRECT
innodb_use_native_aio = 1

So it's possible that the table was in the innodb log, but to be honest I've
never witnessed that before; it's also only very recently that we moved
our db server from UFS to ZFS, hence the questions.
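
As a sanity check (local socket/auth options omitted), the values actually in
effect and a rough measure of redo-log activity can be read back from the
running server:

mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_flush_method'"
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size'"
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written'"    # bytes written to the redo log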

    Regards
    Steve


