Date:      Fri, 23 Dec 2016 00:30:29 +0000
From:      bugzilla-noreply@freebsd.org
To:        freebsd-ports-bugs@FreeBSD.org
Subject:   [Bug 215503] net/glusterfs: Glusterfs client does not refresh the content of files
Message-ID:  <bug-215503-13@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=215503

            Bug ID: 215503
           Summary: net/glusterfs: Glusterfs client does not refresh the
                    content of files
           Product: Ports & Packages
           Version: Latest
          Hardware: Any
                OS: Any
            Status: New
          Severity: Affects Some People
          Priority: ---
         Component: Individual Port(s)
          Assignee: freebsd-ports-bugs@FreeBSD.org
          Reporter: craig001@lerwick.hopto.org

A new issue was reported to me via email regarding glusterfs not refreshing
files. Opening this PR to track and fix the issue.

Krzysztof Kosarzycki (Chris) reported -


I have a problem with the glusterfs client on FreeBSD 10.3.
The glusterfs client does not refresh the content of files,
but the file size is always correct. From the server point of view
all is correct (bricks are synchronizing, logs are clean, etc.).
By the way, a Linux gluster client operating on a FreeBSD brick behaves OK.


The scenario was simple:
I created 4 bricks (3 on FreeBSD, 1 on Ubuntu Linux).
All bricks are additional 500 GB disks (the test is performed in a VMware 5.5
environment).
The bricks on FreeBSD are on a ZFS file system. The brick on Ubuntu is on an
XFS file system.
All bricks are mounted on /brick1, /brick2, etc.
Command to create the gluster volume:
# gluster volume create test replica 2 z-zt1:/brick1 z-zt2:/brick2
z-zt3:/brick3 z-zt4:/brick4
# gluster volume start test

root@z-zt1:~ # gluster volume status test
Status of volume: test
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick z-zt1:/brick1                         N/A       N/A        N       N/A
Brick z-zt2:/brick2                         49152     0          Y       9326
Brick z-zt3:/brick3                         49152     0          Y       9315
Brick z-zt4:/brick4                         49152     0          Y       1317
NFS Server on localhost                     2049      0          Y       798
      (temporary workaround by NFS mount)
Self-heal Daemon on localhost               N/A       N/A        Y       797
NFS Server on z-zt4                         N/A       N/A        N       N/A
Self-heal Daemon on z-zt4                   N/A       N/A        Y       1344
NFS Server on z-mail                        N/A       N/A        N       N/A
Self-heal Daemon on z-mail                  N/A       N/A        Y       3925
NFS Server on z-zt5                         N/A       N/A        N       N/A
Self-heal Daemon on z-zt5                   N/A       N/A        Y       1363
NFS Server on z-zt3                         N/A       N/A        N       N/A
Self-heal Daemon on z-zt3                   N/A       N/A        Y       9321
NFS Server on z-zt2                         N/A       N/A        N       N/A
Self-heal Daemon on z-zt2                   N/A       N/A        Y       9332

Task Status of Volume test
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 3fb64829-7626-4681-a8ca-272567c95ae6
Status               : completed

root@z-zt1:~ # gluster peer status
Number of Peers: 5

Hostname: z-zt4
Uuid: 719494e9-d584-4016-b918-aa19b8f1957a
State: Peer in Cluster (Connected)

Hostname: z-zt2
Uuid: 0cc9a9f2-0a90-4a8c-bebd-5d2260fbb2e0
State: Peer in Cluster (Connected)

Hostname: z-zt5
Uuid: cfc52d78-6cd3-4e9e-8db6-ce9e67535a51
State: Peer in Cluster (Connected)

Hostname: z-mail
Uuid: a5a40b84-0bca-4fbd-bec8-73594251677e
State: Peer in Cluster (Connected)

Hostname: z-zt3
Uuid: ef1c0986-cd15-4e04-b6f4-8ab1e911a806
State: Peer in Cluster (Connected)
root@z-zt1:~ #

Peers z-zt5 and z-mail are candidates for expanding the test volume with the
next 2 bricks.
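
For reference, that expansion would use gluster's standard add-brick command;
the brick paths below are assumptions, not part of the report:

# gluster volume add-brick test z-zt5:/brick5 z-mail:/brick6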

Command to mount the gluster volume:
# mount_glusterfs z-zt1:test /root/test

root@z-zt1:~ # mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
brick1 on /brick1 (zfs, local, nfsv4acls)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
/dev/fuse on /root/test (fusefs, local, synchronous)
root@z-zt1:~ #
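
One diagnostic worth trying (not something reported above, and the option
values and mount point are assumptions): the glusterfs client binary accepts
FUSE attribute and entry timeout options, so mounting with those caches
disabled may show whether stale FUSE metadata caching is involved:

# glusterfs --volfile-server=z-zt1 --volfile-id=test \
    --attribute-timeout=0 --entry-timeout=0 /root/test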

I tested by creating a file and checking it on the other nodes: the file is
there and all nodes register the new file OK.
When I edit this file on another node and then close it, the node that
originally created it registers the new file size, but not the new content.
When I unmount the gluster volume and mount it again, all is OK: new size and
new content. This does not happen when I use the glusterfs client on Ubuntu
and check the changes on another Ubuntu client connected to a FreeBSD host.
The situation occurs on all three FreeBSD hosts. There are no
file-system-related parameters set on the glusterfs client. I discovered that
gluster has an embedded NFS server and turned this functionality on on the
z-zt1 host. When I mount the gluster volume as an NFS mount, all problems are
gone.
Maybe I made some stupid error, but I cannot find where.
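
For reference, the NFS workaround mentioned above would look roughly like the
following on FreeBSD; the exact options are an assumption (Gluster's embedded
NFS server speaks NFSv3 over TCP):

# mount -t nfs -o nfsv3,tcp z-zt1:/test /root/test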

-- 
You are receiving this mail because:
You are the assignee for the bug.


