Date:      Mon, 29 Feb 2016 21:21:49 +0000
From:      Steven Hartland <killing@multiplay.co.uk>
To:        freebsd-fs@freebsd.org
Subject:   Re: abnormally high CPU load after zfs destroy
Message-ID:  <56D4B66D.4070007@multiplay.co.uk>
In-Reply-To: <56D4964D.3010604@quip.cz>
References:  <56D4964D.3010604@quip.cz>

It's likely churning through the actual delete; show system processes in 
top and you'll see it.
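
For example (a quick sketch rather than anything from the original thread; -S 
and -H are standard FreeBSD top flags, but the exact kernel process and thread 
names can differ between releases):

    # -S includes system (kernel) processes, -H lists individual threads
    top -SH

Since the destroy is processed inside the kernel, the CPU time should show up 
against something like the zfskern process (e.g. its txg sync threads) rather 
than against any userland process.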

On 29/02/2016 19:04, Miroslav Lachman wrote:
> I am using a ZFS pool (4x 3TB) as small backup storage. Backups are made 
> by rsync and there are a few snapshots. When I use "zfs destroy -r", 
> there is high disk activity (which seems normal) but also a high CPU load 
> of 80+.
>
> The system was doing nothing else at the time, just deleting old ZFS 
> snapshots, so why is the load so high?
>
> last pid: 90302;  load averages: 81.63, 43.60, 19.28                up 43+03:33:04  19:56:16
> 36 processes:  1 running, 34 sleeping, 1 zombie
> CPU:  0.0% user,  0.0% nice, 96.0% system,  0.0% interrupt,  4.0% idle
> Mem: 5836K Active, 20M Inact, 4046M Wired, 755M Free
> ARC: 1572M Total, 82M MFU, 1018M MRU, 24M Anon, 26M Header, 422M Other
> Swap: 5120M Total, 17M Used, 5103M Free
>
>   PID USERNAME     THR PRI NICE   SIZE    RES STATE   C   TIME WCPU COMMAND
>   592 root           1  20    0 26160K 18080K select  1   3:24 0.00% /usr/sbin/ntpd -g -c /etc/ntp.conf -p /var/run/ntpd.pid -f /var/db/ntpd.drift
>   624 root           1  20    0 61224K  4204K select  1   2:20 0.00% /usr/sbin/sshd
>   672 root           1  20    0 24136K  4592K select  0   1:10 0.00% sendmail: rejecting connections on daemon Daemon0: load average: 77 (sendmail)
>   655 root           1  20    0 25124K  4148K select  0   1:01 0.00% /usr/sbin/bsnmpd -p /var/run/snmpd.pid
>   443 root           1  20    0 14512K  1760K select  1   0:43 0.00% /usr/sbin/syslogd -ss
>   679 root           1  22    0 16612K   672K nanslp  1   0:32 0.00% /usr/sbin/cron -s
> 95875 xyz            1  20    0 65892K  4708K select  1   0:05 0.00% sshd: xyz @pts/0 (sshd)
>   649 root           1  20    0 30704K  1432K nanslp  1   0:04 0.00% /usr/local/sbin/smartd -c /usr/local/etc/smartd.conf -p /var/run/smartd.pid
>   352 root           1  20    0 13624K  1204K select  0   0:02 0.00% /sbin/devd
> 95873 root           1  20    0 65892K  4580K select  1   0:02 0.00% sshd: xyz  [priv] (sshd)
> 95912 root           1  20    0 25772K   652K pause   1   0:02 0.00% screen
>   675 smmsp          1  20    0 24136K  1152K pause   0   0:01 0.00% sendmail: Queue runner@00:30:00 for /var/spool/clientmqueue (sendmail)
> 89875 root           1  20    0 21940K  3188K CPU1    1   0:00 0.00% top
> 95914 root           1  20    0 23592K  3152K pause   0   0:00 0.00% -/bin/tcsh
> 95913 root           1  20    0 25772K  3192K select  1   0:00 0.00% screen
> 95895 root           1  52    0 23592K     0K pause   0   0:00 0.00% -su (<tcsh>)
> 95876 xyz            1  52    0 23592K     0K pause   1   0:00 0.00% -tcsh (<tcsh>)
> 95894 xyz            1  23    0 47740K     0K wait    0   0:00 0.00% /usr/bin/su - root (<su>)
> 90271 mrtg           1  52    0 17088K  2508K wait    1   0:00 0.00% /bin/sh ./local_iostat_disk.sh
> 89976 root           1  21    0 16612K  1704K piperd  1   0:00 0.00% cron: running job (cron)
> 90270 mrtg           1  52    0 17088K  2504K piperd  1   0:00 0.00% /bin/sh ./local_iostat_cpu.sh
>   726 root           1  52    0 14508K  1700K ttyin   0   0:00 0.00% /usr/libexec/getty Pc ttyv0
>   733 root           1  52    0 14508K  1700K ttyin   1   0:00 0.00% /usr/libexec/getty Pc ttyv7
>   728 root           1  52    0 14508K  1700K ttyin   1   0:00 0.00% /usr/libexec/getty Pc ttyv2
>   727 root           1  52    0 14508K  1700K ttyin   1   0:00 0.00% /usr/libexec/getty Pc ttyv1
>   729 root           1  52    0 14508K  1700K ttyin   0   0:00 0.00% /usr/libexec/getty Pc ttyv3
>   731 root           1  52    0 14508K  1700K ttyin   0   0:00 0.00% /usr/libexec/getty Pc ttyv5
>   730 root           1  52    0 14508K  1700K ttyin   1   0:00 0.00% /usr/libexec/getty Pc ttyv4
>   732 root           1  52    0 14508K  1700K ttyin   0   0:00 0.00% /usr/libexec/getty Pc ttyv6
> 90286 mrtg           1  52    0 18740K  2236K nanslp  1   0:00 0.00% iostat -w 250 -c 2 -x ada0 ada1 ada2 ada3
> 90283 mrtg           1  52    0 18740K  2228K nanslp  1   0:00 0.00% iostat -d -C -n 0 -w 240 -c 2
> 90287 mrtg           1  52    0 12356K  1952K piperd  1   0:00 0.00% tail -n 4
>   137 root           1  52    0 12352K     0K pause   1   0:00 0.00% adjkerntz -i (<adjkerntz>)
> 90288 mrtg           1  52    0 17088K  2508K piperd  0   0:00 0.00% /bin/sh ./local_iostat_disk.sh
>  3313 root           1  52    0 16728K  1884K select  1   0:00 0.00% /usr/sbin/moused -p /dev/ums0 -t auto -I /var/run/moused.ums0.pid
>
>
>
> # uname -srmi
> FreeBSD 10.2-RELEASE-p10 amd64 GENERIC
>
> # grep CPU: /var/run/dmesg.boot
> CPU: Intel(R) Pentium(R) Dual  CPU  E2160  @ 1.80GHz (1795.53-MHz K8-class CPU)
>
>
> Miroslav Lachman



