Date:      Mon, 3 Dec 2007 19:42:25 +0100
From:      Johan Ström <johan@stromnet.se>
To:        freebsd-current@freebsd.org
Subject:   Re: 7.0-Beta 3: zfs makes system reboot
Message-ID:  <DB09DEDC-6C2A-4774-8457-C5702BA231A1@stromnet.se>
In-Reply-To: <F9BBE18F-1C22-4194-9FDA-33143B5439B0@stromnet.se>
References:  <475039D5.4020204@web.de> <CAAD8692-4EBA-4BEF-9523-721EFFC5643E@stromnet.se> <F9BBE18F-1C22-4194-9FDA-33143B5439B0@stromnet.se>

On Dec 2, 2007, at 14:15 , Johan Ström wrote:

> On Dec 2, 2007, at 13:33 , Johan Str=F6m wrote:
>
>> On Nov 30, 2007, at 17:27 , Michael Rebele wrote:
>>
>>> Hello,
>>>
>>> I've been testing ZFS since 7.0-Beta 1.
>>> At first, I only had access to a 32-bit machine (P4/3GHz with 2GB
>>> RAM, 2xHD for RAID1 and 2xHD for ZFS RAID 0).
>>>
>>> While running iozone with the following call:
>>> iozone -R -a -z -b file.wks -g 4G -f testile
>>>
>>> (This is inspired by Dominic Kay from Sun; see
>>> http://blogs.sun.com/dom/entry/zfs_v_vxfs_iozone for details.)
>>>
>>> the well-known "kmem_malloc" error occurred and stopped the system.
>>> (panic: kmem_malloc(131072): kmem_map too small: 398491648 total
>>> allocated cpuid=1)
>>>
>>> I tested several optimizations as suggested in the ZFS Tuning
>>> Guide and in several postings on this list.
>>> The problem stayed essentially the same: it stopped with a
>>> "kmem_malloc" panic or rebooted without warning, depending on
>>> whether I raised the vm.kmem_* sizes, only KVA_PAGES, or both.
>>> But it never once completed the benchmark. With more memory in
>>> vm.kmem_size and vm.kmem_size_max, the problem just came later.
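
(For reference, the KVA_PAGES knob Michael mentions is an i386 kernel
config option, while the vm.kmem_* sizes are loader tunables. The
Tuning Guide suggests something along these lines; the value below is
the commonly quoted one, not necessarily what he actually used:

    # i386 kernel configuration file: enlarge the kernel virtual
    # address space (i386 default is 256 pages-worth)
    options         KVA_PAGES=512

Unlike the loader tunables, that one needs a kernel rebuild.)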
>>>
>>>
>>>
>>> But OK, the main target for ZFS is amd64, not i386.
>>> Now I have access to an Intel Woodcrest system, a Xeon 5160
>>> with 4GB RAM and 1xHD. It has UFS for the system and home, and one
>>> ZFS filesystem just for data (for the iozone benchmark).
>>> It has a vanilla kernel; I haven't touched it. I've tested the
>>> default settings from Beta 3 and applied the tuning tips from the
>>> Tuning Guide.
>>> It shows the same behaviour as on the 32-bit machine, with one
>>> major difference: it always reboots. There's no kmem_malloc error
>>> message (which is what made the system hang on i386).
>>>
>>> The problem is the "-z" option in the iozone benchmark. Without
>>> it, the benchmark works (on both the i386 and the amd64 machine).
>>> This option makes iozone test small record sizes for large
>>> files. On a UFS filesystem, iozone works with the "-z" option.
>>> So it seems to me that this is a problem with ZFS.
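
(So the obvious workaround for now is presumably the same invocation
minus -z, i.e. something like:

    iozone -R -a -b file.wks -g 4G -f testile

though that just sidesteps the small-record/large-file combination
rather than fixing anything.)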
>>>
>>> Here is some more information (from the amd64 system):
>>>
>>> 1. The captured iozone output
>>>
>>> [root@zfs /tank/iozone]# iozone -R -a -z -b filez-512M.wks -g 4G -f testile
>>> ...
>>
>>
>> For the record, I can reproduce the same thing on amd64 FreeBSD
>> RELENG_7 (installed from beta3 two days ago). It's a C2D box with
>> 2GB of memory and two SATA drives in a zpool mirror. No special
>> tweaking whatsoever yet..
>> The panic was "page fault, supervisor read instruction, page not
>> present".. so not the (apparently) regular kmem_malloc? So I doubt
>> the other patch that Alexandre linked to would help?
>>
>> iozone got to
>>         Run began: Sun Dec  2 13:11:53 2007
>>
>>         Excel chart generation enabled
>>         Auto Mode
>>         Cross over of record size disabled.
>>         Using maximum file size of 4194304 kilobytes.
>>         Command line used: iozone -R -a -z -b file.wks -g 4G -f testile
>>         Output is in Kbytes/sec
>>         Time Resolution = 0.000001 seconds.
>>         Processor cache size set to 1024 Kbytes.
>>         Processor cache line size set to 32 bytes.
>>         File stride size set to 17 * record size.
>>                                                              random  random    bkwd  record  stride
>>               KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  freread
>>               64       4  122584  489126   969761  1210227 1033216  503814  769584  516414  877797   291206   460591   703068   735831
>>               64       8  204474  735831  1452528  1518251 1279447  799377 1255511  752329 1460430   372410   727850  1087638  1279447
>> ......
>>           131072       4   65734   71698  1011780   970967  755928    5479 1008858  494172  931232    65869    68155   906746   910950
>>           131072       8   79507   74422  1699148  1710185 1350184   10907 1612344  929991 1372725    34699    74782  1407638  1429434
>>           131072      16   82479   74279  2411000  2426173 2095714   25327 2299061 1608974 2038950    71102    69200  1887231  1893067
>>           131072      32   75268   73077  3276650  3326454 2954789   70573 3195793 2697621 2987611
>> then it died
>>
>> No cores were dumped, however.. I'm running swap on a gmirror, and
>> if I recall correctly at least 6.x couldn't dump to a gmirror, so I
>> guess 7.x can't either.. Yet although the dump message DID say it
>> dumped memory (and it did say "Dump complete"), savecore didn't
>> find any dumps at boot..
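
(If I want dumps out of this box, I suppose the usual trick would be
to point dumpon at one of the raw partitions underneath the mirror
rather than at the gmirror device itself. The device name below is
just an example, and I haven't actually tried this here:

    # /etc/rc.conf: dump to a raw swap partition, not /dev/mirror/swap
    dumpdev="/dev/ad4s1b"

    # or set it on the fly:
    dumpon /dev/ad4s1b
)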
>>
>> The box didn't do anything else during this test, and is not
>> running any apps yet. I hadn't encountered the problem before, but
>> then again I've only been playing with it for two days without any
>> real hard test (just scp'ed about 50 gigs of data to it, but that's
>> it).
>
> Ehr.. I'm sorry, I think I misread the dump.. or rather, the whole
> dump wasn't on screen.. I just ran the same test again:
>
> panic: kmem_malloc(131082): kmem_map too small: 412643328 total
> allocated
> and page fault..
>
> I'll test that VM patch.

Back again after >24h of testing. Without the patch it crashes (two
out of two times).

After applying the patch I haven't had a single crash, and it ran
almost all the way through the full test (after 10 hours I didn't
really think it would crash anyway, so :P).
I tried with no special loader.conf settings, then with
vm.kmem_size(_max)="1G", and then also with vfs.zfs.arc_max="512M"..
No crashes anywhere..
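
(In /boot/loader.conf terms, the tuned runs looked like this, give or
take the exact spelling; the values are the ones stated above:

    vm.kmem_size="1G"
    vm.kmem_size_max="1G"
    vfs.zfs.arc_max="512M"
)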

So this patch seems to make it not crash.. But as Michael Rebele
pointed out in another post, it was written 4 weeks before beta3 and
yet still hasn't been applied?



