Date:      Thu, 6 Jun 2013 15:39:11 -0700
From:      Jeremy Chadwick <jdc@koitsu.org>
To:        mxb <mxb@alumni.chalmers.se>
Cc:        "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: zpool export/import on failover - The pool metadata is corrupted
Message-ID:  <20130606223911.GA45807@icarus.home.lan>
In-Reply-To: <016B635E-4EDC-4CDF-AC58-82AC39CBFF56@alumni.chalmers.se>
References:  <D7F099CB-855F-43F8-ACB5-094B93201B4B@alumni.chalmers.se> <CAKYr3zyPLpLau8xsv3fCkYrpJVzS0tXkyMn4E2aLz29EMBF9cA@mail.gmail.com> <016B635E-4EDC-4CDF-AC58-82AC39CBFF56@alumni.chalmers.se>

On Fri, Jun 07, 2013 at 12:12:39AM +0200, mxb wrote:
> 
> Then MASTER goes down, CARP on the second node goes MASTER (devd.conf, and script for lifting):
> 
> root@nfs2:/root # cat /etc/devd.conf
> 
> 
> notify 30 {
> match "system"		"IFNET";
> match "subsystem"	"carp0";
> match "type"		"LINK_UP";
> action "/etc/zfs_switch.sh active";
> };
> 
> notify 30 {
> match "system"          "IFNET";
> match "subsystem"       "carp0";
> match "type"            "LINK_DOWN";
> action "/etc/zfs_switch.sh backup";
> };
> 
> root@nfs2:/root # cat /etc/zfs_switch.sh
> #!/bin/sh
> 
> DATE=`date +%Y%m%d`
> HOSTNAME=`hostname`
> 
> ZFS_POOL="jbod"
> 
> 
> case $1 in
> 	active)
> 		echo "Switching to ACTIVE and importing ZFS" | mail -s ''$DATE': '$HOSTNAME' switching to ACTIVE' root
> 		sleep 10
> 		/sbin/zpool import -f jbod
> 		/etc/rc.d/mountd restart
> 		/etc/rc.d/nfsd restart
> 		;;
> 	backup)
> 		echo "Switching to BACKUP and exporting ZFS" | mail -s ''$DATE': '$HOSTNAME' switching to BACKUP' root
> 		/sbin/zpool export jbod
> 		/etc/rc.d/mountd restart
> 		/etc/rc.d/nfsd restart
> 		;;
> 	*)
> 		exit 0
> 		;;
> esac
> 
> This works most of the time, but sometimes I'm forced to re-create the pool.  These machines are supposed to go into production.
> Losing the pool (and the data inside it) stops me from deploying this setup.

This script looks highly error-prone.  Hasty hasty...  :-)

This script assumes that the "zpool" commands (import and export) always
succeed; there is no exit code ($?) checking anywhere.
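
For example, a rough (untested) sketch of what I mean, using the same
pool name as your script:

	if ! /sbin/zpool import -f jbod; then
		echo "zpool import of jbod FAILED" | \
			mail -s "${HOSTNAME}: zpool import FAILED" root
		exit 1
	fi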

Since this is run from within devd(8): where do stdout/stderr go when
running a program/script under devd(8)?  Do they effectively go to the
bit bucket (/dev/null)?  If so, you'd never know whether the import or
export actually succeeded (the export sounds more likely to be the
problem point).

I imagine there would be some situations where the export would fail
(some files on filesystems under pool "jbod" still in use), yet CARP is
already blindly assuming everything will be fantastic.  Surprise.
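
You can at least see what is holding the pool busy before attempting
the export.  For example (assuming the pool's filesystems are mounted
under /jbod):

	# list processes with files open on the filesystem mounted at
	# /jbod; if this prints anything, a plain "zpool export" will
	# probably fail
	fstat -f /jbod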

I also do not know if devd.conf(5) "action" commands spawn a sub-shell
(/bin/sh) or not.  If they don't, you won't be able to use things like:
'action "/etc/zfs_switch.sh active >> /var/log/failover.log";'.  You
would then need to implement the equivalent of logging within your
zfs_switch.sh script.
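
The simplest approach I know of is to do the redirection from inside
the script itself, e.g. near the top of zfs_switch.sh (untested sketch;
pick whatever log file you like):

	# send all subsequent stdout/stderr to a log file, since devd(8)
	# may not give the script a useful stdout/stderr of its own
	exec >> /var/log/failover.log 2>&1

Alternately, send individual messages to syslog via logger(1).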

You may want to consider the -f flag to zpool import/export
(particularly export).  However, there are risks involved -- userland
applications which have an fd/fh open on a file which is stored on a
filesystem that has now completely disappeared can sometimes crash
(segfault) or behave very oddly (100% CPU usage, etc.) depending on how
they're designed.
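
If you do go that route, I'd still attempt a clean export first and
only force it as a last resort, e.g. (sketch):

	if ! /sbin/zpool export jbod; then
		logger -t zfs_switch "clean export of jbod failed, retrying with -f"
		/sbin/zpool export -f jbod || \
			logger -t zfs_switch "forced export failed; pool still imported"
	fi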

Basically, what I'm trying to say is that using devd(8) as a form of HA
(high availability) and load balancing is not always workable.
Real/true HA (especially with SANs) is often done very differently (now
you know why it's often proprietary :-) ).

-- 
| Jeremy Chadwick                                   jdc@koitsu.org |
| UNIX Systems Administrator                http://jdc.koitsu.org/ |
| Making life hard for others since 1977.             PGP 4BD6C0CB |



