Date:      Tue, 7 Jul 2015 07:29:55 -0300
From:      Christopher Forgeron <csforgeron@gmail.com>
To:        Adrian Chadd <adrian@freebsd.org>
Cc:        Csaba Banhalmi <bimmer@field.hu>, FreeBSD Net <freebsd-net@freebsd.org>
Subject:   Re: FreeBSD 10.1-REL - network unaccessible after high traffic
Message-ID:  <CAB2_NwBEzq7uVpSqba8BD4=YHR-WsvvTG6AwHBh_SukdTeyX4Q@mail.gmail.com>
In-Reply-To: <CAJ-VmokfH3XBXeDgH1cjeCzyjcTY6ve2zWLz9q2XZG5dW+8u2w@mail.gmail.com>
References:  <374339249.53058039.1433681874571.JavaMail.root@uoguelph.ca> <55744F28.5000402@field.hu> <CAB2_NwA-D7bH47=Qkf9QLF3=mZOQBVo81bUsQzQr02W9U4vHMA@mail.gmail.com> <557AB1BB.60502@field.hu> <CAB2_NwA9i-wMXGH2+cP9SWxDMNomFRjoVP25hsGWaTDGjBxFTw@mail.gmail.com> <557AD10D.5070205@field.hu> <CAB2_NwAeD43tSwWO3LGuniRMNZ3TVupOuLWj3aUm228jLT2y1A@mail.gmail.com> <557AD2FA.103@field.hu> <CAB2_NwCgEvmMxqmAotO1USsipXOSaGkwK3Uu+iVbKd9_bn+LWg@mail.gmail.com> <CAJ-VmokXk69V_YURWOjLOQmKrW+2-YAiFQnhLOA7SKO6ipw_KQ@mail.gmail.com> <CAB2_NwANWRB2SJY0rO7n+_8RK61dyGJ5FCphH_ewQG-E7eOAUg@mail.gmail.com> <CAJ-Vmo=hSE+k1q_JrX9wKOshSRa_WJ78hbL54ZaXMH03PrNFdg@mail.gmail.com> <559109C3.7070900@field.hu> <CAJ-VmokfH3XBXeDgH1cjeCzyjcTY6ve2zWLz9q2XZG5dW+8u2w@mail.gmail.com>

Hello,

 Sorry for not replying sooner, but I've been gathering more info.

 I still have the problem across all my heavily loaded machines, and will be
posting detailed info later today. My plan is to start a new thread so as not
to hijack this one further.

 I'll CC you, Adrian, on the new thread so you can find it easily.

 Thanks for your help.
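
 In case it's useful context for the new thread: what I've been collecting is
roughly the vmstat -z / vmstat -m data Adrian asked for below, captured with a
simple loop along these lines (only a rough sketch; the one-second interval
and log locations are just what I happen to use, nothing special):

    #!/bin/sh
    # Append timestamped vmstat -z and vmstat -m snapshots once a second,
    # so the samples taken just before the box wedges are preserved on disk.
    while : ; do
        date >> /var/tmp/vmstat-z.log
        vmstat -z >> /var/tmp/vmstat-z.log
        date >> /var/tmp/vmstat-m.log
        vmstat -m >> /var/tmp/vmstat-m.log
        sleep 1
    done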

On Mon, Jun 29, 2015 at 2:27 PM, Adrian Chadd <adrian@freebsd.org> wrote:

> hi,
>
> I asked for the output of vmstat -z and vmstat -m in a loop. :)
>
>
> -a
>
>
> On 29 June 2015 at 02:02, Csaba Banhalmi <bimmer@field.hu> wrote:
> > Hi All,
> >
> > "vmstat 5" output when system freezes:
> >  procs      memory      page                    disks faults         cpu
> >  r b w     avm    fre   flt  re  pi  po    fr  sr ad0 ad1   in sy   cs us sy id
> >  0 0 0   8752M   126M  5663   0   0   0  4042 445  66   0 1219 7148 4870  3 2 95
> >  0 0 0   8650M   145M  2167   0   0   0  3501 447  79   0  974 4042 3578  1 1 98
> >  0 0 0   8374M   201M  3113   0   0   0  6790 441   5   0 1130 6670 3729  3 1 96
> >  0 0 0   8252M   220M  2632   0   0   0  4014 435   4   0  726 11653 2401  2 1 97
> >  0 0 0   8188M   224M  1625   0   0   0  2189 434   5   0  713 6714 2376  1 1 98
> >  0 0 0   7992M   233M  1504   0   0   0  2254 433   2   0  867 2890 2868  1 1 98
> >  4 0 0   8032M   216M  2145   0   0   0  1995 435  18   0  526 3769 2048  1 1 98
> >  0 0 0   8180M   195M  1949   0   0   0  1741 435  50   0  593 3441 2363  1 1 98
> >  0 0 0   8186M   178M  2859   0   0   0  2525 436   6   0  499 3313 1733  2 1 97
> >  1 0 0   8410M   146M  2521   0   0   0  1764 440  11   0  736 67271 2121  4 2 94
> >  0 0 0   8182M   205M  2910   0   0   0  6378 927   8   0  495 16043 1775  1 1 98
> >  1 1 0   7944M   210M  3009   0   0   0  3696 438   8   0  522 4247 1963  2 1 97
> >  0 0 0   8091M   169M  7529   0   0   0  3601 436 105   0 1359 75290 4400  9 3 88
> >  0 0 0   8121M   141M  4607   0   0   0  3288 444  62   0  949 12169 3268  5 1 94
> >  0 0 0   8044M   201M  1782   0   0   0  4954 1795   9   0  446 3025 1927  1 1 99
> >  0 0 0   7916M   222M  1296   0   0   0  2671 438   5   0  525 2984 1920  1 1 98
> >  1 0 0   7870M   230M   888   0   0   0  1677 432   8   0  473 6424 2126  1 1 99
> >  0 0 0   7968M   228M  3375   0   0   0  2625 433  51   0  768 4100 2852  3 1 96
> >  0 0 0   8238M   194M  7586   0   0   0  4758 436  88   0 1026 9631 3908  4 2 94
> >  0 0 0   8293M   185M  3253   0   0   0  2362 437  52   0  747 4475 3105  2 1 97
> >
> > I increased vm.v_free_min, but it did not help. That time it was a
> > different kind of freeze: the system was unreachable even through IPMI and
> > needed a hard reset.
> >
> > Regards,
> > Csaba
> >
> >
> >
> > On 12 June 2015 at 20:17, Adrian Chadd wrote:
> >>
> >> On 12 June 2015 at 10:57, Christopher Forgeron <csforgeron@gmail.com>
> >> wrote:
> >>>
> >>> I agree it shouldn't run out of memory. Here's what mine does under
> >>> network
> >>> load, or rsync load:
> >>>
> >>> 2 0 9   1822M  1834M     0   0   0   0    14   8   0   0 22750  724 136119 0 23 77
> >>> 0 0 9   1822M  1823M     0   0   0   0     0   8   0   0 44317  347 138151 0 16 84
> >>> 0 0 9   1822M  1761M     0   0   0   0    17   8   0   0 23818  820 92198 0 12 88
> >>> 0 0 9   1822M  1727M     0   0   0   0    14   8   0   0 40768  634 126688 0 17 83
> >>> 0 0 9   1822M  8192B     0   8   0   0    15   3   3   0 9236  305 57149 0 33 67
> >>>
> >>> That's with vmstat on a 5-second interval. Once free memory hits 8 KiB,
> >>> the system is nearly completely brain-dead and needs a hard power-off.
> >>>
> >>>
> >>> I've seen it go from 6 GiB free to 8 KiB in 5 seconds as well. Currently
> >>> my large machines are set to keep 12 GiB free to stop them from crashing,
> >>> from what I presume is just network load due to lots of iSCSI / NFS
> >>> traffic on my 10 GbE network.
> >>>
> >>>
> >>> I haven't had time to type this up for the list yet, but I'm putting it
> >>> here just to make sure people know it's real.
> >>>
> >> Hi,
> >>
> >> Then something is leaking or holding onto memory when it shouldn't be.
> >>
> >> Try doing vmstat -z and vmstat -m in a one-second loop, and post the data
> >> just before it falls over.
> >>
> >>
> >> -adrian
> >
> >
>
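
P.S. For anyone following along: the knob Csaba mentions above (vm.v_free_min)
is counted in 4 KiB pages rather than bytes, so a 12 GiB floor works out
roughly like this (the value below is only an example of the arithmetic, not a
recommendation):

    # 12 GiB expressed as 4 KiB pages: 12 * 1024 * 1024 / 4 = 3145728
    sysctl vm.v_free_min=3145728
    # to persist across reboots, put vm.v_free_min=3145728 in /etc/sysctl.conf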


