Date:      Wed, 01 Aug 2018 12:39:01 +0000
From:      bugzilla-noreply@freebsd.org
To:        bugs@FreeBSD.org
Subject:   [Bug 230260] [FUSE] [PERFORMANCE]: Performance issue (I/O block size)
Message-ID:  <bug-230260-227@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230260

            Bug ID: 230260
           Summary: [FUSE] [PERFORMANCE]: Performance issue (I/O block
                    size)
           Product: Base System
           Version: 11.1-RELEASE
          Hardware: Any
               URL: https://robo.moosefs.com/support/fuse_helloworld.tgz
                OS: Any
            Status: New
          Severity: Affects Some People
          Priority: ---
         Component: kern
          Assignee: bugs@FreeBSD.org
          Reporter: freebsd@moosefs.com

This is one of three issues we detected in FreeBSD FUSE while developing our
distributed file system. All three issues can be replicated using this simple
test script:
https://robo.moosefs.com/support/fuse_helloworld.tgz
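
For reference, here is a minimal sketch of the kind of filesystem such a test
might use (assuming the high-level libfuse API; the actual contents of
fuse_helloworld.tgz may differ, and the file name /hello and the 1 GiB size
are illustrative). Logging the size argument in the read handler makes the 4k
request pattern visible:

/* Minimal sketch (assumption: high-level libfuse 2.x API; the real test
 * script may differ). Logging "size" in the read handler shows the request
 * granularity the kernel uses. */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <sys/stat.h>
#include <string.h>
#include <stdio.h>
#include <stdint.h>
#include <errno.h>

#define FILE_SIZE (1024L * 1024L * 1024L)  /* 1 GiB of zeros from RAM */

static int hello_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, "/hello") == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = FILE_SIZE;
    } else {
        return -ENOENT;
    }
    return 0;
}

static int hello_read(const char *path, char *buf, size_t size,
                      off_t offset, struct fuse_file_info *fi)
{
    (void)path; (void)fi;
    /* Observe the I/O block size chosen by the kernel. */
    fprintf(stderr, "read: offset=%jd size=%zu\n", (intmax_t)offset, size);
    if (offset >= FILE_SIZE)
        return 0;
    if (offset + (off_t)size > FILE_SIZE)
        size = FILE_SIZE - offset;
    memset(buf, 0, size);  /* serve zeros straight from memory */
    return (int)size;
}

static struct fuse_operations hello_ops = {
    .getattr = hello_getattr,
    .read    = hello_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &hello_ops, NULL);
}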

Performance issue in FUSE: if a program uses FUSE without the "direct" option,
any I/O is always performed in 4k blocks. The maximum I/O speed we managed to
get was 600MB/s (no physical I/O, just sending zeros from a RAM buffer).

With "direct" it's fast, 5GB/s, but "direct" is not the best solution: no
cache, read operation has no limit on block size and if one uses extremely =
big
block size, the read speed drastically drops again (we performed dd with bs=
=3D1G
and the speed was only 40MB/s). Generally, "direct" is geared toward
stream-like data (character devices) and should not be used for disk-like I=
/O.
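
For context, this is how the "direct" behavior is typically enabled in a
filesystem built on the high-level libfuse API, extending the sketch above
(the per-open direct_io flag is real libfuse; the handler itself is
illustrative):

/* Extends the sketch above: enabling "direct" per open file. The same
 * effect can be forced for the whole mount with -o direct_io. */
#include <fcntl.h>

static int hello_open(const char *path, struct fuse_file_info *fi)
{
    if (strcmp(path, "/hello") != 0)
        return -ENOENT;
    if ((fi->flags & O_ACCMODE) != O_RDONLY)
        return -EACCES;
    fi->direct_io = 1;  /* bypass the page cache: reads come through at the
                           caller's block size, but nothing is cached */
    return 0;
}
/* ...and add ".open = hello_open" to hello_ops. */

With this in place, a sequential read such as dd if=/mnt/hello of=/dev/null
bs=1M (mount point hypothetical) is passed through at whatever block size the
caller requests, which is exactly what makes the bs=1G case above pathological.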

Other FUSE implementations (Linux, macOS) use a 64k block size.

Best regards,
Peter / MooseFS Team

--
You are receiving this mail because:
You are the assignee for the bug.


