Date:      Fri, 15 Nov 2019 13:23:51 +0700
From:      Eugene Grosbein <eugen@grosbein.net>
To:        FreeBSD stable <freebsd-stable@freebsd.org>
Subject:   Re: panic: I/O to pool appears to be hung on vdev
Message-ID:  <463ec8a3-1f99-d8ad-32e9-b350a3d768ba@grosbein.net>
In-Reply-To: <32dc05dc-e006-0d21-d9e0-f9a86d5bf47d@grosbein.net>
References:  <32dc05dc-e006-0d21-d9e0-f9a86d5bf47d@grosbein.net>

15.11.2019 13:08, Eugene Grosbein wrote:
> Hi!
> 
> Recently I did a routine source upgrade from 11.2-STABLE/amd64 to 11.3-STABLE r354667
> that went without any problems. After less than 2 days of uptime the system panicked and then failed to reboot (hung);
> a screenshot is here: http://www.grosbein.net/freebsd/zpanic.png
> 
> The system did not panic while running 11.2-STABLE, although it did have some ZFS performance problems.
> 
> Hardware: Dell PowerEdge R640 with 360G RAM, an mrsas(4)-supported controller PERC H730/P Mini (LSI MegaRAID SAS-3 3108 [Invader])
> and 7 SSDs; two of them hold the FreeBSD installation (a distinct boot pool) and the other five
> are GELI-encrypted and combined into another pool, the RAIDZ1 pool 'sata' mentioned in the screenshot.
> 
> vfs.zfs.arc_max=160g
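> 
> (A sketch of where that setting lives, assuming it is applied as a boot-time loader tunable rather than at runtime:)
> 
>   # /boot/loader.conf: cap the ZFS ARC at 160 GiB of the 360 GiB RAM
>   vfs.zfs.arc_max="160g"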
> 
> The system runs several bhyve instances on top of ZVOLs. Many snapshots are routinely
> created and destroyed, so the system generally issues many TRIM requests to the underlying SSDs.
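> 
> (The snapshot churn looks roughly like this; the dataset and snapshot names here are made up for illustration:)
> 
>   # take a fresh snapshot of a guest's zvol and drop an old one;
>   # destroying a snapshot frees blocks, and ZFS turns the freed ranges
>   # into TRIM (BIO_DELETE) requests for the SSDs underneath
>   zfs snapshot sata/vm/guest0@2019-11-15
>   zfs destroy  sata/vm/guest0@2019-11-01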
> 
> After 1.5 days of uptime (before the panic) I set kern.cam.da.[2-6].delete_max=262144,
> changing it from the default of 17179607040, hoping it would decrease the latency of read/write operations
> such as listing snapshots. No other non-default ZFS settings were made.
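> 
> (For the record, roughly how I applied it, the loop itself being only a sketch; the device numbers follow the da[2-6] range above:)
> 
>   # shrink the largest single BIO_DELETE (TRIM) request CAM will issue
>   # to each pool member from ~16 GiB (the default) down to 256 KiB
>   for i in 2 3 4 5 6; do
>           sysctl kern.cam.da.${i}.delete_max=262144
>   done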
> 
> What does "panic: I/O to pool appears to be hung on vdev" mean, provided the hardware is healthy?

I also wonder why it panicked instead of just degrading the RAIDZ pool.




