Date:      Wed, 29 Jan 2014 10:30:39 +0100
From:      Matthias Gamsjager <mgamsjager@gmail.com>
To:        Anton Sayetsky <vsjcfm@gmail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS and Wired memory, again
Message-ID:  <CA+D9QhuBU12e6tQyiAqPMmg3C-k0k06wrD693J1E_b+VUd_wMA@mail.gmail.com>
In-Reply-To: <CA+D9QhveCGuaeTfjUNaJmmJLXWaTRNFFE6nOxZj9h_0GFuEcwg@mail.gmail.com>
References:  <CAFG2KC+ZSHEVFbpPD9e1QHRdY=Sd6EuAD80vyDLDDQcpgCQNhA@mail.gmail.com> <CAFG2KCJUWtLwR_j2Ykr1J+O6PESgs3RdztS_Yx0gNJ_7UmrGJw@mail.gmail.com> <CA+D9QhveCGuaeTfjUNaJmmJLXWaTRNFFE6nOxZj9h_0GFuEcwg@mail.gmail.com>

Found it: it's in the freebsd-current list, under the subject "ARC 'pressured
out', how to control/stabilize".
Looks kind of similar.
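For what it's worth, the wired-minus-ARC gap measured by hand in the quoted mail below can be computed on a live box from a couple of sysctls. A minimal sketch: the sysctl MIB names in the comments are the usual FreeBSD ones, the `non_arc_wired_mib` helper is just a name I made up, and the worked example reuses the third sample from the thread.

```shell
# Sketch: compute wired memory not accounted for by the ARC, in MiB.
# Inputs: wired page count, page size in bytes, ARC size in bytes.
non_arc_wired_mib() {
    wired_pages=$1 pagesize=$2 arc_bytes=$3
    echo $(( (wired_pages * pagesize - arc_bytes) / 1048576 ))
}

# On a live FreeBSD system the inputs would come from sysctl:
#   non_arc_wired_mib "$(sysctl -n vm.stats.vm.v_wire_count)" \
#                     "$(sysctl -n hw.pagesize)" \
#                     "$(sysctl -n kstat.zfs.misc.arcstats.size)"

# Worked example from the thread's third sample: 2523 MiB wired
# (expressed as 4 KiB pages) and a 2067 MiB ARC leave 456 MiB.
non_arc_wired_mib $(( 2523 * 256 )) 4096 $(( 2067 * 1048576 ))
```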


On Wed, Jan 29, 2014 at 10:28 AM, Matthias Gamsjager
<mgamsjager@gmail.com> wrote:

> I remember reading something similar a couple of days ago, but I can't find
> the thread.
>
>
> On Tue, Jan 28, 2014 at 7:50 PM, Anton Sayetsky <vsjcfm@gmail.com> wrote:
>
>> 2013-11-22 Anton Sayetsky <vsjcfm@gmail.com>:
>> > Hello,
>> >
>> > I'm planning to deploy a ~150 TiB ZFS pool, and while playing with ZFS I
>> > noticed that the amount of wired memory is MUCH bigger than the ARC size
>> > (in the absence of other hungry memory consumers, of course). I'm afraid
>> > this strange behavior may get even worse on a machine with a big pool
>> > and some hundreds of gibibytes of RAM.
>> >
>> > So let me explain what happened.
>> >
>> > Immediately after booting the system, top says the following:
>> > =====
>> > Mem: 14M Active, 13M Inact, 117M Wired, 2947M Free
>> > ARC: 24M Total, 5360K MFU, 18M MRU, 16K Anon, 328K Header, 1096K Other
>> > =====
>> > OK, wired mem - ARC = 92 MiB
>> >
>> > Then I started reading the pool (tar cpf /dev/null /).
>> > Memory usage when the ARC size is ~1 GiB
>> > =====
>> > Mem: 16M Active, 15M Inact, 1410M Wired, 1649M Free
>> > ARC: 1114M Total, 29M MFU, 972M MRU, 21K Anon, 18M Header, 95M Other
>> > =====
>> > 1410-1114=296 MiB
>> >
>> > Memory usage when the ARC size reaches its maximum of 2 GiB
>> > =====
>> > Mem: 16M Active, 16M Inact, 2523M Wired, 536M Free
>> > ARC: 2067M Total, 3255K MFU, 1821M MRU, 35K Anon, 38M Header, 204M Other
>> > =====
>> > 2523-2067=456 MiB
>> >
>> > Memory usage a few minutes later
>> > =====
>> > Mem: 10M Active, 27M Inact, 2721M Wired, 333M Free
>> > ARC: 2002M Total, 22M MFU, 1655M MRU, 21K Anon, 36M Header, 289M Other
>> > =====
>> > 2721-2002=719 MiB
>> >
>> > So why has the wired RAM on a machine running only a minimal set of
>> > services grown from 92 to 719 MiB? Sometimes I even see about a gig!
>> > I'm using 9.2-RELEASE-p1 amd64. Test machine has a T5450 C2D CPU and 4
>> > G RAM (actual available amount is 3 G). ZFS pool is configured on a
>> > GPT partition of a single 1 TB HDD.
>> > Disabling/enabling prefetch doesn't help. Limiting the ARC to 1 gig
>> > doesn't help either.
>> > When reading the pool, the evict-skip counter can increment very fast,
>> > and ARC metadata sometimes exceeds its limit (by 2x-5x).
>> >
>> > I've attached logs with system configuration, outputs from top, ps,
>> > zfs-stats and vmstat.
>> > conf.log = system configuration, also uploaded to
>> http://pastebin.com/NYBcJPeT
>> > top_ps_zfs-stats_vmstat_afterboot = memory stats immediately after
>> > booting system, http://pastebin.com/mudmEyG5
>> > top_ps_zfs-stats_vmstat_1g-arc = after ARC grown to 1 gig,
>> > http://pastebin.com/4AC8dn5C
>> > top_ps_zfs-stats_vmstat_fullmem = when ARC reached limit of 2 gigs,
>> > http://pastebin.com/bx7svEP0
>> > top_ps_zfs-stats_vmstat_fullmem_2 = few minutes later,
>> > http://pastebin.com/qYWFaNeA
>> >
>> > What should I do next?
>> BUMP
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>>
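For reference, the gap the quoted mail is tracking by hand can be recomputed straight from the three top(1)/ARC samples above; a throwaway sketch (figures in MiB, copied verbatim from the quote, with `gap` being just an ad-hoc helper name):

```shell
# Wired total vs. ARC total (MiB) from the three samples quoted above.
gap() { awk '{ print $1 - $2 " MiB not in ARC (wired " $1 ", ARC " $2 ")" }'; }

printf '%s\n' '1410 1114' '2523 2067' '2721 2002' | gap
```

The point of the thread in one pipeline: the non-ARC wired portion grows from 296 to 456 to 719 MiB across the samples.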
>
>
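The ARC cap and prefetch toggle mentioned in the quoted mail are normally set as loader tunables. A sketch of the relevant /boot/loader.conf lines, with illustrative values only (these are the 9.x-era tunable names):

```shell
# /boot/loader.conf -- illustrative values, adjust to the machine
vfs.zfs.arc_max="1G"          # cap the ARC at 1 GiB (the limit tried above)
vfs.zfs.prefetch_disable=1    # turn off ZFS file-level prefetch
```

Both take effect at boot, so a reboot is needed after editing.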


