Date:      Wed, 01 Jul 2020 10:03:31 +0000
From:      bugzilla-noreply@freebsd.org
To:        ports-bugs@FreeBSD.org
Subject:   [Bug 247690] sysutils/openzfs-kmod performance problem
Message-ID:  <bug-247690-7788@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=247690

            Bug ID: 247690
           Summary: sysutils/openzfs-kmod performance problem
           Product: Ports & Packages
           Version: Latest
          Hardware: amd64
                OS: Any
            Status: New
          Severity: Affects Some People
          Priority: ---
         Component: Individual Port(s)
          Assignee: freqlabs@FreeBSD.org
          Reporter: spam123@bitbert.com
             Flags: maintainer-feedback?(freqlabs@FreeBSD.org)

Created attachment 216107
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=216107&action=edit
logs + sample test script

I have been running a number of ZFS benchmarks. Among the test candidates was
a notebook with a single SSD, on which I compared the performance of the ZFS
shipped with the FreeBSD base system against sysutils/openzfs-kmod. It turned
out that openzfs-kmod is much slower than FreeBSD's base ZFS. I used
benchmarks/fio (3.20) as the test tool.

To pick one test (on the same hardware in both cases): a random read/write
test with the "sync" ioengine on a pool created with ashift=12, run as

  fio --name=randrw --rw=randrw --direct=1 --ioengine=sync --bs=8k \
      --numjobs=2 --rwmixread=80 --size=1G --runtime=600 --group_reporting

Native (base-system) ZFS delivers 81.7 MiB/s read and 20.5 MiB/s write
bandwidth, whereas openzfs-kmod only reaches 9665 KiB/s read and 2422 KiB/s
write.
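
For clarity, each run looked roughly like the sketch below. This is
illustrative only: the device and pool names are placeholders, and the exact
commands are in the attached unenc-openzfs.sh.

  #!/bin/sh
  # Sketch of one test run; repeat once with the base system's zfs.ko
  # loaded and once with the zfs.ko from sysutils/openzfs-kmod.
  # "testpool" and /dev/ada0p3 are placeholders, not my actual setup.
  sysctl vfs.zfs.min_auto_ashift=12    # have the new pool use ashift=12
  zpool create testpool /dev/ada0p3
  cd /testpool
  fio --name=randrw --rw=randrw --direct=1 --ioengine=sync --bs=8k \
      --numjobs=2 --rwmixread=80 --size=1G --runtime=600 \
      --group_reporting
  zpool destroy testpool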

The difference is quite big across all my tests, which cover combinations of
the io-engines posixaio, psync, mmap, sync, pvsync, and vsync with ZFS
ashift=9 and ashift=12. This was not a rigorous scientific test, but it still
shows an obvious performance gap. fio is primarily a Linux tool but works on
FreeBSD; I don't know how the different io-engines are implemented there, yet
the difference shows up in every test.
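
The sweep over the io-engines was essentially a loop of the following shape
(again only a sketch; the attached unenc-openzfs.sh has the exact invocations
and any per-engine options):

  #!/bin/sh
  # Sketch of the io-engine sweep; per-engine options may differ
  # slightly, see the attached script for the real invocations.
  for engine in posixaio psync mmap sync pvsync vsync; do
      fio --name=randrw-$engine --rw=randrw --ioengine=$engine \
          --bs=8k --numjobs=2 --rwmixread=80 --size=1G \
          --runtime=600 --group_reporting > randrw-$engine.log
  done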

Tests on ordinary HDDs are still to come, but those of course take
significantly longer. I am also testing geli/ZFS encryption on FreeBSD and
LUKS encryption on Linux. That is out of scope here; I mention it only to note
that I can provide logs for different ZFS layouts (2-disk raidz1/mirror,
3-disk raidz1) at a later date if desired.

Attached are 24 log files with the results of the fio tests (plus the
output/timing of a dd command). See the included unenc-openzfs.sh for how and
which tests were run.

--
You are receiving this mail because:
You are the assignee for the bug.
