Date:      Tue, 10 Aug 2010 04:14:21 +0200
From:      Thomas Steen Rasmussen <thomas@gibfest.dk>
To:        pjd@FreeBSD.org
Cc:        freebsd-fs@freebsd.org
Subject:   Re: HAST initial sync speed
Message-ID:  <4C60B5FD.9080603@gibfest.dk>
In-Reply-To: <4C5ECA78.6010803@gibfest.dk>
References:  <4C57E20E.2030908@gibfest.dk> <20100806135001.GF1710@garage.freebsd.pl> <4C5ECA78.6010803@gibfest.dk>

On 08-08-2010 17:17, Thomas Steen Rasmussen wrote:
 > On 06-08-2010 15:50, Pawel Jakub Dawidek wrote:
 >> On Tue, Aug 03, 2010 at 11:31:58AM +0200, Thomas Rasmussen wrote:
 >>
 >>> Hello list,
 >>>
 >>> I finally got my ZFS/HAST setup up and running, or trying to at least.
 >>> I am wondering how fast the initial HAST sync normally is - I created
 >>> these 4 HAST providers yesterday on four 146 GB drives, and each of
 >>> them still has over 90 gigabytes 'dirty' today. The machines are
 >>> powerful (Dell R710), otherwise idle, and connected to the same
 >>> gigabit switch.
 >>>
 >>> I can supply details about any part of the configuration if needed,
 >>> but I just wanted to ask whether you believe something is wrong here.
 >>> I can't help thinking that if the initial sync takes 24+ hours, then
 >>> if I ever need to replace one of the servers, I will be without
 >>> redundancy until the new server reaches 0 'dirty' bytes, correct?
 >>>
 >> Correct, but synchronization should take much, much less time.
 >> Is the dirty count actually decreasing?
 >>
 >>
 > Hello,
 >
 > Yes, it was decreasing steadily but very slowly. It finished between
 > Thursday evening and Friday morning, and the dirty count is now 0. All
 > in all it took over 72 hours, transferring at around 20 Mbit/s the whole
 > time. However, if I copied a large file to the primary HAST node, it
 > would use a lot more bandwidth. It is as if HAST were synchronizing the
 > "empty space" at a lower priority or something. Does that make any
 > sense? The servers are not in production, so I can perform any testing
 > needed. Thank you for your reply.
 >
 > Regards
 >
 > Thomas Steen Rasmussen
 >
Hello again,

I just wanted to include the configs here for completeness:

/etc/hast.conf:
-----------------------------
resource hasthd4 {
         local /dev/label/hd4
         on server1 {
                 remote 192.168.0.15
         }
         on server2 {
                 remote 192.168.0.14
         }
}
resource hasthd5 {
         local /dev/label/hd5
         on server1 {
                 remote 192.168.0.15
         }
         on server2 {
                 remote 192.168.0.14
         }
}
resource hasthd6 {
         local /dev/label/hd6
         on server1 {
                 remote 192.168.0.15
         }
         on server2 {
                 remote 192.168.0.14
         }
}
resource hasthd7 {
         local /dev/label/hd7
         on server1 {
                 remote 192.168.0.15
         }
         on server2 {
                 remote 192.168.0.14
         }
}
-----------------------------
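
For reference, hast.conf(5) also documents a per-resource "replication"
keyword; the "hastctl status" output further down reports memsync, which
appears to be the default here. I have not experimented with it, but
pinning the mode explicitly would look something like this:

resource hasthd4 {
        local /dev/label/hd4
        replication memsync
        on server1 {
                remote 192.168.0.15
        }
        on server2 {
                remote 192.168.0.14
        }
}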

To create the setup I ran the following commands on both servers:

glabel label ssd0 /dev/mfid1
glabel label ssd1 /dev/mfid2
glabel label hd4 /dev/mfid3
glabel label hd5 /dev/mfid4
glabel label hd6 /dev/mfid5
glabel label hd7 /dev/mfid6
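
As a sanity check before involving HAST, the labels can be listed again;
this just confirms the label-to-provider mapping:

glabel status | egrep 'ssd|hd'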

And on server2:
[root@server2 ~]# hastctl create hasthd4
[root@server2 ~]# hastctl create hasthd5
[root@server2 ~]# hastctl create hasthd6
[root@server2 ~]# hastctl create hasthd7
[root@server2 ~]# /etc/rc.d/hastd start
[root@server2 ~]# hastctl role secondary all

And on server1:
[root@server1 ~]# hastctl create hasthd4
[root@server1 ~]# hastctl create hasthd5
[root@server1 ~]# hastctl create hasthd6
[root@server1 ~]# hastctl create hasthd7
[root@server1 ~]# /etc/rc.d/hastd start
[root@server1 ~]# hastctl role primary all

This made the HAST devices appear on server1 under /dev/hast/
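
That is, the providers show up named after the resources:

[root@server1 ~]# ls /dev/hast/
hasthd4 hasthd5 hasthd6 hasthd7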

Then I created the ZFS filesystem on top, on server1:
zpool create hatank raidz2 /dev/hast/hasthd4 /dev/hast/hasthd5 \
    /dev/hast/hasthd6 /dev/hast/hasthd7 \
    cache /dev/label/ssd0 /dev/label/ssd1
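
A quick check that the pool came up as intended:

[root@server1 ~]# zpool status hatank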

This resulted in the following "hastctl status" output, on server1:
hasthd4:
   role: primary
   provname: hasthd4
   localpath: /dev/label/hd4
   extentsize: 2097152
   keepdirty: 64
   remoteaddr: 192.168.0.15
   replication: memsync
   status: complete
   dirty: 146051956736 bytes
hasthd5:
   role: primary
   provname: hasthd5
   localpath: /dev/label/hd5
   extentsize: 2097152
   keepdirty: 64
   remoteaddr: 192.168.0.15
   replication: memsync
   status: complete
   dirty: 146045665280 bytes
hasthd6:
   role: primary
   provname: hasthd6
   localpath: /dev/label/hd6
   extentsize: 2097152
   keepdirty: 64
   remoteaddr: 192.168.0.15
   replication: memsync
   status: complete
   dirty: 146047762432 bytes
hasthd7:
   role: primary
   provname: hasthd7
   localpath: /dev/label/hd7
   extentsize: 2097152
   keepdirty: 64
   remoteaddr: 192.168.0.15
   replication: memsync
   status: complete
   dirty: 146047762432 bytes

--------------------------------------------------
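
For anyone wanting to watch the counters drop, a small /bin/sh loop along
these lines should do (a rough sketch; the awk pattern relies on the
"dirty: N bytes" format shown above):

#!/bin/sh
# print the total number of dirty bytes across all resources once a minute
while true; do
        hastctl status | awk '/dirty:/ { sum += $2 } END { print sum, "bytes dirty" }'
        sleep 60
done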

The problem, again, is simply that the initial synchronization took far
too long. If I copy a large file to the primary HAST server now, it syncs
very quickly. I am open to any input; I obviously can't really use HAST
before this problem is solved.
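
If it helps, I can also run a raw network test between the two boxes to
rule out the network itself; something like this should do (untested
sketch, the port number is arbitrary):

# on server2: accept a connection and discard the incoming data
nc -l 4444 > /dev/null

# on server1: push 1 GB of zeroes across the link; dd reports the throughput
dd if=/dev/zero bs=1m count=1024 | nc 192.168.0.15 4444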

Thank you again.

Thomas Steen Rasmussen


