Date:      Fri, 28 Nov 2014 00:45:51 +0100
From:      Lorenzo Perone <lopez.on.the.lists@yellowspace.net>
To:        freebsd-fs@freebsd.org
Cc:        bill@ethernext.com
Subject:   Re: HAST, zvols, istgt and carp working...
Message-ID:  <5477B7AF.5020802@yellowspace.net>
In-Reply-To: <AANLkTim5FRJkf_S0aSV74S=JY+g1DBZLhjYW7X9C0MkP@mail.gmail.com>
References:  <AANLkTim5FRJkf_S0aSV74S=JY+g1DBZLhjYW7X9C0MkP@mail.gmail.com>

Hello,

I am quoting the thread below in full, since it is an old one, but I 
ended up here while searching on the same subject.

I have a question about the same kind of setup. In some situations I 
thought it might be useful to run HAST on ZVOLs for individual jails, 
which might be running on machines a and b, or c and d.

So my idea is the following:

HAST on zvols, with two volumes per filesystem: one for the UFS 
filesystem itself and one for its gjournal.

Example - quick code rush:

# Both resources must already be defined in /etc/hast.conf (see the
# sketch further below). Create a sparse 2T zvol (4k blocks) for the
# data and a 32G zvol for the journal:
zfs create -o compression=on -s -b 4096 -V 2T rtank/vols/jail1
zfs create -o compression=on -b 4096 -V 32G rtank/vols/jail1_j
# Initialize the HAST metadata, start hastd and take the primary role:
hastctl create jail1
service hastd onestart
hastctl create jail1_j
hastctl role primary jail1
hastctl role primary jail1_j
# Glue data and journal together, then create and mount the UFS:
gjournal label hast/jail1 hast/jail1_j
newfs -J -U /dev/hast/jail1.journal
mount /dev/hast/jail1.journal /jails/jail1
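A quick sanity check after the above (nothing here is specific to my setup):

hastctl status jail1
hastctl status jail1_j
gjournal list            # should show hast/jail1.journal with both providers
df -h /jails/jail1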

I thought of gjournal because it lets me choose replication = memsync 
for the filesystem volume and async for the journal volume: a speedup 
while keeping the filesystem consistent.
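For reference, a minimal /etc/hast.conf sketch for this layout (the 
hostnames nodeA/nodeB and the addresses are made up, and it assumes a 
hastd that implements the memsync and async replication modes):

resource jail1 {
    replication memsync
    local /dev/zvol/rtank/vols/jail1
    on nodeA {
        remote 10.0.0.2
    }
    on nodeB {
        remote 10.0.0.1
    }
}

resource jail1_j {
    replication async
    local /dev/zvol/rtank/vols/jail1_j
    on nodeA {
        remote 10.0.0.2
    }
    on nodeB {
        remote 10.0.0.1
    }
}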

As far as I can tell it works in my tests, and performance is OK.

So, now down to my question: it would be great to be able to mount 
read-only snapshots of the zvol, but that does not seem to work:

zfs snapshot rtank/vols/jail1@hellotest
mount -t ufs -o ro /dev/zvol/rtank/vols/jail1@hellotest /mnt
mount: /dev/zvol/rtank/vols/jail1@hellotest: Invalid argument

Even cloning the snapshot first does not help. I guess this is because 
the newfs, of course, was done on the HAST device, behind the HAST 
metadata and the gjournal layer, so the UFS superblock is not where 
mount expects it on the raw snapshot device.

Is there any way to mount the snapshot read-only, or to have a 'hast' 
wrapper for the snapshot without having to really 'hast' it?
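One idea I have not tried yet: if I understand HAST's on-disk layout 
correctly, the metadata sits at the head of the local provider and the 
data area starts at an offset (hastctl dump should report it). If so, 
gnop could expose the shifted data area of a clone. A rough, untested 
sketch; the clone name and the offset value are made up:

zfs clone rtank/vols/jail1@hellotest rtank/vols/jail1_probe
hastctl dump jail1        # note the data offset reported here
gnop create -o 131072 /dev/zvol/rtank/vols/jail1_probe
mount -t ufs -o ro /dev/zvol/rtank/vols/jail1_probe.nop /mnt

Even then the filesystem might refuse to mount cleanly, since it lives 
behind the gjournal layer and its journal may not have been replayed.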

Thanks a lot for any comment or hint (even if it is: "bad idea, don't 
do any of that").

Of course I could build another zpool on top of the HAST volumes and 
then snapshot inside that pool instead. But I have a bad feeling about 
the reliability of a setup like that (is that feeling justified?).
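For completeness, that nested-pool alternative would look roughly like 
this (the pool name jpool is made up, untested):

zpool create jpool /dev/hast/jail1
zfs create jpool/jail1
zfs snapshot jpool/jail1@hellotest   # snapshots then work natively on the inner pool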

Thanks a lot in advance to anyone who takes the time to reply.

Greetings and Regards,

Lorenzo


On 01.03.11 04:35, Bill Desjardins wrote:
> Hello All,
>
> as an experiment today I set up a couple of 8-stable guests on vmware ESXi to test
>   hast with zfs, carp and istgt for a redundant nas system I am putting
> together.
> I haven't seen any mention of anyone using hast to mirror a zfs zvol, so I
> figured I would try it, and at least my proof of concept seems to work just fine.
>
> is anyone doing this and using it in a production environment?
>
> here's how I set up the testing environment...
>
> - created two 8-stable hosts on ESXi 4.1: hast-1 & hast-2 (OS on da0)
>
> on both hast-1 and hast-2
>
> - added 4 x 8GB disks to each (da1 - da4)
> - glabel'd disks disk1 - disk4
> - zpool create tank mirror disk1 disk2 mirror disk3 disk4
> - zfs create -p -s -b 64k -V 4G tank/hzvol.1
>
> hast.conf on each
>
> resource tank_hzvol.1 {
>      local /dev/zvol/tank/hzvol.1
>      on hast-1 {
>          remote x.x.x.9
>      }
>      on hast-2 {
>          remote x.x.x.8
>      }
> }
>
> on hast-1 and hast-2
>
>> hastd_enable="YES" in rc.conf
>> hastctl create tank_hzvol.1
>> /etc/rc.d/hastd start
>
> on hast-2
>
>> hastctl role secondary tank_hzvol.1
>
> on hast-1
>
>> hastctl role primary tank_hzvol.1
>
> hastctl status reports all is well so far...
>
> next I configured istgt identically on hast-1 and hast-2 for the hast device
>
>>>   LUN0 Storage    /dev/hast/tank_hzvol.1 3G
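>
> for context, that line sits inside an istgt.conf LogicalUnit section which
> looks roughly like this (target and group names are illustrative, not my
> actual config):
>
>    [LogicalUnit1]
>      TargetName hzvol1
>      Mapping PortalGroup1 InitiatorGroup1
>      UnitType Disk
>      LUN0 Storage /dev/hast/tank_hzvol.1 3G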
>
> istgt was started (istgt onestart) and the zvol target was set up on
> another vmware esxi server,
> which was then formatted as a vmfs volume. I created a 2GB disk on this volume
> and added it to another 8-stable host as a ufs disk mounted on /mnt. so far
> so good, everything working as expected.
>
> to test hast replication, I created a few 200MB files on the host with the
> ufs vmdk volume and saw traffic over the hast network from hast-1 to hast-2. on
> hast-1, the zvol size reflected correct sparse disk space usage, but
> hast-2 showed
> the full 4GB zvol allocated, which I suspect is due to hast (presumably its
> initial full synchronization writes every block on the secondary, defeating
> the sparseness).
>
> to test failover of the iSCSI zvol target from hast-1 to hast-2:
>
> on hast-1
>
>> istgt stop
>> hastctl role secondary tank_hzvol.1
>
> on hast-2
>
>> hastctl role primary tank_hzvol.1
>> istgt onestart
>
> NOTE: carp does not seem to work on esxi for me (possibly the vSwitch
> security defaults rejecting the CARP virtual MAC), so between hast-1 and
> hast-2 I manually moved the IP for istgt to hast-2.
>
> the result was that the istgt hast zvol successfully failed over to hast-2
> with only a brief stall while I manually performed the failover process. I only
> performed the ideal manual failover scenario for proof of concept. I will be
> testing this on 2 real development servers later this week for a more
> complete understanding.
>
>
> I see some real advantages for zvol-only hast:
> ++++++++++++++++++++++++++++++++++++++++++++++++
>
> + no need to hast each individual disk in the zpool so you can access all
> available storage on either storage unit
> + maintaining storage units remains functionally consistent between them
> + once set up, zvols are easily migrated to new storage environments in real time
> since there is only a single zvol hast resource to replicate. (no need
> to have all
> matching zpool hast members, just reconfigure the primary zvol hast
> resource to point to
> a new secondary server and swap roles/failover when ready)
> + can have active hast zvols on each unit to distribute IO
> + no need for zpool export/import on failover
> + hast easily added to current zvols
> + retains performance of entire zpool
> + zpool can be expanded without changing hast config
> + minimizes hast replication traffic between storage units
> + hast split-brain localized to specific zvols
> + can use ufs on hast zvol resource for things like samba and nfs
>
> cons
> -------------------------------------------
>
> - performance impact (???)
> - each hast zvol requires distinct application configurations (more
>    configurations to deal with/screw up)
> - zfs sparse volumes seem not to be working correctly via hast (???)
> - expanding a zvol requires hastctl create, init, startup, plus it may need
>    application-specific changes/restart.
> - other methods needed to replicate data in rest of pool
> - possible long rebuild time on large zvols?
> - snapshots / rollbacks (???)
> - many more???
>
> my main question is whether using hast to replicate a zvol is a supported
> configuration, and what the possible drawbacks are. It's more than
> likely I am overlooking some very basic requirements/restrictions and
> am blatantly wrong in all this, but if it can perform, I think it's a big +
> for freebsd and zfs usability as a nas server.
>
> thoughts? comments? criticisms? :)
>
> Best,
>
> Bill



