Date:      Fri, 05 Dec 2008 18:16:02 +0100
From:      Hilko Meyer <Hilko.Meyer@gmx.de>
To:        Ulf Lilleengen <lulf@stud.ntnu.no>
Cc:        adnan@hochpass.uni-hannover.de, freebsd-geom@freebsd.org
Subject:   Re: System freeze with gvinum
Message-ID:  <6snij41mh5vtm92ch1d045upgjj6atbkn1@mail.gmx.net>
In-Reply-To: <20081204063410.GA1465@nobby.lan>
References:  <fmg3j4lkbngkmcm41lvbrjvuj2o4iagcvb@4ax.com> <20081130153558.GA2120@nobby.lan> <des5j4pqfbceh1hg4ad26arivjdmcqvr7m@mail.gmx.net> <20081130222445.GA1528@carrot.studby.ntnu.no> <nh86j4h8c0hf2musqeubinbqp5snrfmm8h@mail.gmx.net> <20081201021720.GA1949@carrot.studby.ntnu.no> <mucej495asrgc6s5gfu4clp7feeluat1bl@mail.gmx.net> <20081204063410.GA1465@nobby.lan>

Ulf Lilleengen wrote:
>On Thu, Dec 04, 2008 at 03:02:39AM +0100, Hilko Meyer wrote:
>> Unfortunately I have some other work for you. After changing the
>> BIOS-setting to AHCI, I tried gvinum with 6.4 again. And strangely
>> enough it worked. No freeze with newfs and I could copy several GB to
>> the volumes, but after a reboot gvinum list looks like this:
>>
>> | D sata3                 State: up       /dev/ad10       A: 9/476939 MB (0%)
>> | D sata2                 State: up       /dev/ad8        A: 9/476939 MB (0%)
>> | D sata1                 State: up       /dev/ad4        A: 9/476939 MB (0%)
>> |
>> | 2 volumes:
>> | V homes_raid5           State: down     Plexes:       1 Size:        465 GB
>> | V dump_raid5            State: down     Plexes:       1 Size:        465 GB
>> |
>> | 2 plexes:
>> | P homes_raid5.p0     R5 State: down     Subdisks:     3 Size:        465 GB
>> | P dump_raid5.p0      R5 State: down     Subdisks:     3 Size:        465 GB
>> |
>> | 6 subdisks:
>> | S homes_raid5.p0.s0     State: stale    D: sata1        Size:        232 GB
>> | S homes_raid5.p0.s1     State: stale    D: sata2        Size:        232 GB
>> | S homes_raid5.p0.s2     State: stale    D: sata3        Size:        232 GB
>> | S dump_raid5.p0.s0      State: stale    D: sata1        Size:        232 GB
>> | S dump_raid5.p0.s1      State: stale    D: sata2        Size:        232 GB
>> | S dump_raid5.p0.s2      State: stale    D: sata3        Size:        232 GB
>>
>> Then we updated to FreeBSD 7.1-PRERELEASE, but nothing changed. After a
>> reboot the volumes are down. In dmesg I found
>> g_vfs_done():gvinum/dump_raid5[READ(offset=65536, length=8192)]error = 6
>> but I think that occurred during a try to mount a volume.
>>=20
>Well, this can happen if there were errors reading/writing to the volumes
>previously. When volumes are in the down state, it is not possible to use
>them. You have a few options:
>
>If you currently have any data on the volumes and would like to recover
>without reinitializing them, you can try to force the subdisk states to up
>by doing:
>
>1. 'gvinum setstate -f up <subdisk>' on all subdisks. The plexes should then
>go into the up state as all the subdisks are up.
>2. Run fsck on the volumes to ensure that they are ok. If so, you are ready
>to go again. Note that you might have to pass -t ufs to fsck, as vinum
>volumes previously set their own disklabels and other weird stuff.
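For reference, the two steps above could be scripted roughly like this. This is only a sketch: the subdisk and volume names are taken from the gvinum list output earlier in this thread, and the `run` wrapper defaults to printing the commands instead of executing them, since `setstate -f` changes live RAID state.

```shell
#!/bin/sh
# Dry-run sketch of the recovery steps: force subdisks up, then fsck.
# DRY_RUN=1 (the default) prints each command; set DRY_RUN=0 to execute.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# Step 1: force all six subdisks to the up state.
for sd in homes_raid5.p0.s0 homes_raid5.p0.s1 homes_raid5.p0.s2 \
          dump_raid5.p0.s0 dump_raid5.p0.s1 dump_raid5.p0.s2; do
    run gvinum setstate -f up "$sd"
done

# Step 2: fsck the volumes; -t ufs because vinum volumes used to
# carry their own disklabels.
run fsck -t ufs /dev/gvinum/homes_raid5
run fsck -t ufs /dev/gvinum/dump_raid5
```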

That didn't help. After a reboot the subdisks were stale again.

>If you don't have any valuable data yet, you can run 'gvinum start <volume>'
>on all volumes, which should reinitialize the plexes,

That worked. All up after a reboot. It took nine hours per volume, though...

In dmesg I found
| GEOM_VINUM: subdisk 'homes_raid5.p0.s2' init: finished successfully
| GEOM_VINUM: subdisk 'homes_raid5.p0.s0' init: finished successfully
| GEOM_VINUM: plex homes_raid5.p0 state change: down -> up
| GEOM_VINUM: g_access failed on drive sata2, errno 1
| GEOM_VINUM: subdisk 'homes_raid5.p0.s1' init: finished successfully

Do I have to worry about "g_access failed on drive sata2, errno 1"?

>or you can just recreate the entire config. Recreating the entire config
>might also work if you have data, but I'd try the tip above first.
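For completeness, "recreating the entire config" would mean feeding `gvinum create` a config file reconstructed from the list output above, along these lines. The drive-to-device mapping and subdisk sizes come from the listing; the 512k stripe size is an assumption, not something stated in this thread.

```
drive sata1 device /dev/ad4
drive sata2 device /dev/ad8
drive sata3 device /dev/ad10
volume homes_raid5
  plex org raid5 512k
    sd length 232g drive sata1
    sd length 232g drive sata2
    sd length 232g drive sata3
volume dump_raid5
  plex org raid5 512k
    sd length 232g drive sata1
    sd length 232g drive sata2
    sd length 232g drive sata3
```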

I had tried that before writing my last mail, but didn't mention it. It
didn't work.

thanks for your help,
Hilko
