Date:      Sat, 13 Dec 2008 21:22:27 +0100 (CET)
From:      oxy@field.hu
To:        "Ulf Lilleengen" <ulf.lilleengen@gmail.com>
Cc:        Michael Jung <mikej@paymentallianceintl.com>, freebsd-geom@freebsd.org
Subject:   Re: Encrypting raid5 volume with geli
Message-ID:  <3934.79.122.6.53.1229199747.squirrel@webmail.field.hu>
In-Reply-To: <917871cf0812130559r6d423688q57287dd765d6edf4@mail.gmail.com>
References:  <20081212155023.GA82667@keira.kiwi-computer.com> <ADC733B130BF1D4A82795B6B3A2654E2777381@exchange.paymentallianceintl.com> <917871cf0812130559r6d423688q57287dd765d6edf4@mail.gmail.com>

As I read this, it seems there is no point in syncing the RAID after
initializing it; I have no chance of encrypting it with geli then. Am I right?

On Sat, December 13, 2008 14:59, Ulf Lilleengen wrote:
> On Fri, Dec 12, 2008 at 5:00 PM, Michael Jung <mikej@paymentallianceintl.com> wrote:
>>
>> FreeBSD charon.confluentasp.com 7.1-PRERELEASE FreeBSD 7.1-PRERELEASE #2: Thu Sep  4 12:06:08 EDT 2008
>>
>>
>> In the interest of this thread I tried to duplicate the problem. I
>> created:
>>
>>
>> 10 drives:
>> D d9                    State: up       /dev/da9        A: 0/17366 MB (0%)
>> D d8                    State: up       /dev/da8        A: 0/17366 MB (0%)
>> D d7                    State: up       /dev/da7        A: 0/17366 MB (0%)
>> D d6                    State: up       /dev/da6        A: 0/17366 MB (0%)
>> D d5                    State: up       /dev/da5        A: 0/17366 MB (0%)
>> D d4                    State: up       /dev/da4        A: 0/17366 MB (0%)
>> D d3                    State: up       /dev/da3        A: 0/17366 MB (0%)
>> D d2                    State: up       /dev/da2        A: 0/17366 MB (0%)
>> D d1                    State: up       /dev/da1        A: 0/17366 MB (0%)
>> D d0                    State: up       /dev/da0        A: 0/17366 MB (0%)
>>
>>
>> 1 volume:
>> V test                  State: up       Plexes:       1 Size:        152 GB
>>
>>
>> 1 plex:
>> P test.p0            R5 State: up       Subdisks:    10 Size:        152 GB
>>
>>
>> 10 subdisks:
>> S test.p0.s9            State: up       D: d9           Size:         16 GB
>> S test.p0.s8            State: up       D: d8           Size:         16 GB
>> S test.p0.s7            State: up       D: d7           Size:         16 GB
>> S test.p0.s6            State: up       D: d6           Size:         16 GB
>> S test.p0.s5            State: up       D: d5           Size:         16 GB
>> S test.p0.s4            State: up       D: d4           Size:         16 GB
>> S test.p0.s3            State: up       D: d3           Size:         16 GB
>> S test.p0.s2            State: up       D: d2           Size:         16 GB
>> S test.p0.s1            State: up       D: d1           Size:         16 GB
>> S test.p0.s0            State: up       D: d0           Size:         16 GB
>>
>>
>> Which I can newfs and mount
>>
>>
>> (root@charon) /etc# mount /dev/gvinum/test /mnt
>> (root@charon) /etc# df -h
>> Filesystem                 Size    Used   Avail Capacity  Mounted on
>> /dev/ad4s1a                357G    119G    209G    36%    /
>> devfs                      1.0K    1.0K      0B   100%    /dev
>> 172.0.255.28:/data/unix    1.3T    643G    559G    54%    /nas1
>> /dev/gvinum/test           148G    4.0K    136G     0%    /mnt
>>
>>
>> But with /dev/gvinum/test unmounted if I try:
>>
>>
>> (root@charon) /etc# geli init -P -K /root/test.key /dev/gvinum/test
>> geli: Cannot store metadata on /dev/gvinum/test: Operation not permitted.
>> (root@charon) /etc#
>>
>>
>> My random file was created like this:
>>
>>
>> dd if=/dev/random of=/root/test.key bs=64 count=1
>>
>> I use GELI at home with no trouble, although not with a gvinum volume.
>>
>>
>
> Hello,
>
>
> When I tried this myself, I also got the EPERM error in return. I thought
> this was very strange. I went through the gvinum code today and put
> debugging prints everywhere, but everything looked fine, and it was only
> raid5 volumes that failed. Then I saw that the EPERM error came from the
> underlying providers of geom (more specifically from the read requests to
> the parity stripes etc.), so I started to suspect that it was not a gvinum
> error. But still, I was able to write/read from the disks from outside of
> gvinum!
>
> Then I discovered that the geom userland code opens the disk where
> metadata should be written in write-only mode, and that explained the
> problem: gvinum tries to write to the stripe in question, but has to read
> back the parity data from one of the other stripes. Since the providers
> are opened O_WRONLY, those read requests fail. I tried opening the device
> as O_RDWR, and everything is fine.
>
> Phew :) You can bet I was frustrated.
>
>
> I hope to commit the attached change in the near future.
>
>
> --
> Ulf Lilleengen
> _______________________________________________
> freebsd-geom@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-geom
> To unsubscribe, send any mail to "freebsd-geom-unsubscribe@freebsd.org"
>
>
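
Since the attached change is not included above, here is a minimal C sketch of
the idea Ulf describes: open the provider read-write instead of write-only, so
that gvinum's raid5 parity reads can complete while the geli metadata is being
written. The helper name provider_open() and the surrounding code are
assumptions made for illustration; this is not the actual patch.

/*
 * Sketch only: open a GEOM provider under /dev by name.  Opening with
 * O_RDWR (rather than O_WRONLY) lets the raid5 write path read back
 * parity stripes on the same descriptor.
 */
#include <sys/param.h>

#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int
provider_open(const char *name, int dowrite)
{
	char path[MAXPATHLEN];

	snprintf(path, sizeof(path), "/dev/%s", name);
	/* Was effectively: dowrite ? O_WRONLY : O_RDONLY */
	return (open(path, dowrite ? O_RDWR : O_RDONLY));
}

int
main(int argc, char **argv)
{
	int fd;

	/* e.g.: ./provider_open gvinum/test */
	fd = provider_open(argc > 1 ? argv[1] : "gvinum/test", 1);
	if (fd == -1)
		err(1, "provider_open");
	close(fd);
	return (0);
}

With a read-write descriptor, the parity reads that the raid5 write path
issues can succeed on the same open, so geli init should be able to store its
metadata on the volume instead of failing with EPERM as shown above.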