Date: Fri, 11 Oct 2013 17:14:10 -0700
From: John-Mark Gurney
To: Maksim Yevmenkin
Cc: Maksim Yevmenkin, "current@freebsd.org"
Subject: Re: [rfc] small bioq patch
Message-ID: <20131012001410.GA56872@funkthat.com>
In-Reply-To: <72DA2C4F-44F0-456D-8679-A45CE617F8E6@gmail.com>

Maksim Yevmenkin wrote this message on Fri, Oct 11, 2013 at 15:39 -0700:
> > On Oct 11, 2013, at 2:52 PM, John-Mark Gurney wrote:
> >
> > Maksim Yevmenkin wrote this message on Fri, Oct 11, 2013 at 11:17 -0700:
> >> i would like to submit the attached bioq patch for review and
> >> comments. this is a proof of concept. it helps with smoothing disk
> >> read service times and appears to eliminate outliers. please see the
> >> attached pictures (about a week's worth of data)
> >>
> >> - c034 "control" unmodified system
> >> - c044 patched system
> >
> > Can you describe how you got this data?  Were you using the gstat
> > code or some other code?
>
> Yes, it's basically gstat data.

The reason I ask is that I don't think the data you are getting from
gstat is what you think it is...  It accumulates time for a set of
operations and then divides by the count...  So I'm not sure the
improvements you are seeing in those stats are as meaningful as you
might think they are...

> > Also, was your control system w/ the patch, but w/ the sysctl set to
> > zero to possibly eliminate any code alignment issues?
>
> Both systems use the same code base and build.  The patched system has
> the patch included; the "control" system does not have the patch.  I
> can rerun my tests with the sysctl set to zero and use that as the
> "control".  So, the answer to your question is "no".

I don't believe the code alignment would make a difference; I mostly
wanted to know what the control was...

> >> graphs show max/avg disk read service times for both systems across
> >> 36 spinning drives. both systems are relatively busy serving
> >> production traffic (about 10 Gbps at peak). grey shaded areas on the
> >> graphs represent time when systems are refreshing their content,
> >> i.e. disks are both reading and writing at the same time.
> >
> > Can you describe why you think this change makes an improvement?
> > Unless you're running 10k or 15k RPM drives, 128 seems like a large
> > number.. as that's about half the number of IOPs that a normal HD
> > handles in a second..
>
> Our (Netflix) load is basically random disk io.  We have tweaked the
> system to ensure that our io path is "wide" enough, i.e. we read 1MB
> per disk io for the majority of requests.  However, the offsets we
> read from are all over the place.  It appears that we are getting into
> a situation where larger offsets are getting delayed because smaller
> offsets keep "jumping" ahead of them.  Forcing a bioq insert tail
> operation, and effectively moving the insertion point, seems to help
> avoid getting into this situation.  And, no, we don't use 10k or 15k
> drives; just regular enterprise 7200 RPM SATA drives.

I assume the 1MB reads are then further broken up into 8 128KB bios?
So it's more like every 16 reads (128 / 8) in your workload that you
insert the "ordered" io...

I want to make sure that we choose the right value for this number..

What number of IOPs are you seeing?

> > I assume you must be regularly seeing queue depths of 128+ for this
> > code to make a difference, do you see that w/ gstat?
>
> No, we don't see large (128+) queue sizes in the gstat data.  The way
> I see it, we don't have to have a deep queue here.  We could just have
> a steady stream of io requests where new, smaller offsets consistently
> "jump" ahead of older, larger offsets.  In fact, the gstat data show a
> shallow queue of 5 or fewer items.

Sorry, I misread the patch the first time...  After rereading it, the
short summary is that if there hasn't been an ordered bio
(bioq_insert_tail) within the last 128 requests, the next request will
be made "ordered"...
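To make sure we're talking about the same thing, here's a toy userland
model of that policy as I understand it; the names, the counter
handling, and the 128 threshold are my guesses from the description
above, not the actual diff:

/*
 * Toy userland model of the insertion policy described above:
 * requests are normally kept sorted by offset, but once N requests
 * have gone by without an "ordered" (tail) insert, the next one is
 * forced to the tail, which moves the insertion point forward so a
 * stream of small offsets can't keep jumping ahead of a large one.
 * The names, the counter handling, and BIOQ_FORCE_ORDER_EVERY are
 * guesses from the description, not the actual patch.
 */
#include <sys/types.h>
#include <sys/queue.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BIOQ_FORCE_ORDER_EVERY  128     /* guess at the sysctl default */

struct req {
        off_t                   offset;
        TAILQ_ENTRY(req)        link;
};
TAILQ_HEAD(reqq, req);

static unsigned int insert_count;

static void
req_insert(struct reqq *q, struct req *r)
{
        struct req *cur;

        if (++insert_count >= BIOQ_FORCE_ORDER_EVERY) {
                /* Behave as if this came in via bioq_insert_tail(). */
                insert_count = 0;
                TAILQ_INSERT_TAIL(q, r, link);
                return;
        }

        /* Otherwise keep the queue sorted by offset. */
        TAILQ_FOREACH(cur, q, link) {
                if (r->offset < cur->offset) {
                        TAILQ_INSERT_BEFORE(cur, r, link);
                        return;
                }
        }
        TAILQ_INSERT_TAIL(q, r, link);
}

int
main(void)
{
        struct reqq q = TAILQ_HEAD_INITIALIZER(q);
        struct req *r;
        int i;

        /* Queue up requests with random offsets and dump the result. */
        for (i = 0; i < 300; i++) {
                if ((r = malloc(sizeof(*r))) == NULL)
                        abort();
                r->offset = (off_t)arc4random() * 512;
                req_insert(&q, r);
        }
        TAILQ_FOREACH(r, &q, link)
                printf("%jd\n", (intmax_t)r->offset);
        while ((r = TAILQ_FIRST(&q)) != NULL) {
                TAILQ_REMOVE(&q, r, link);
                free(r);
        }
        return (0);
}

If that's roughly what the patch does, then a large offset can be
bypassed by at most 127 newer requests before the insertion point gets
dragged past it, which would explain the smoother max service times...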
> > Also, do you see a similar throughput of the system?
>
> Yes, we see almost identical throughput from both systems.  I have not
> pushed the system to its limit yet, but having a much smoother disk
> read service time is important for us because we use it as one of the
> components of our system health metrics.  We also need to ensure that
> a disk io request is actually dispatched to the disk in a timely
> manner.

Per the above, have you measured at the application layer that you are
getting better read latency?  Maybe by doing a ktrace of the io and
calculating the time between the read call and its return, or something
like that...

Have you looked at the geom disk scheduler work that Luigi did a few
years back?  There have been known issues w/ our io scheduler for a
long time...  If you search the mailing lists, you'll see lots of
reports of some processes starving out others, probably due to a
similar issue...  I've seen similar unfair behavior between processes,
but haven't spent the time to track it down...

It does look like a good improvement though...  Thanks for the work!

-- 
  John-Mark Gurney                              Voice: +1 415 225 5579
     "All that I will do, has been done, All that I have, has not."
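To make the application-layer measurement suggested above concrete,
here is a minimal userland sketch that times each read at the call
site instead of going through ktrace; the file argument, the 1MB read
size, and the random-offset pattern are placeholder assumptions, not
the actual Netflix workload:

/*
 * Minimal sketch: measure per-read service time at the application
 * layer by wrapping pread() with clock_gettime(), then report the
 * average and maximum.  Reads that hit the buffer cache will look
 * instant; O_DIRECT could be added to the open() to bypass it.
 */
#include <sys/types.h>
#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define READ_SIZE       (1024 * 1024)   /* 1MB, matching the io size above */
#define NUM_READS       1000

int
main(int argc, char **argv)
{
        static char buf[READ_SIZE];
        struct timespec t0, t1;
        double ms, max_ms = 0.0, sum_ms = 0.0;
        off_t filesize, offset;
        int fd, i;

        if (argc != 2)
                errx(1, "usage: %s <file on the disk under test>", argv[0]);
        if ((fd = open(argv[1], O_RDONLY)) == -1)
                err(1, "open %s", argv[1]);
        if ((filesize = lseek(fd, 0, SEEK_END)) < READ_SIZE)
                errx(1, "file too small");

        for (i = 0; i < NUM_READS; i++) {
                /* Pick a random 1MB-aligned offset within the file. */
                offset = (off_t)(arc4random() % (filesize / READ_SIZE)) *
                    READ_SIZE;

                clock_gettime(CLOCK_MONOTONIC, &t0);
                if (pread(fd, buf, READ_SIZE, offset) == -1)
                        err(1, "pread");
                clock_gettime(CLOCK_MONOTONIC, &t1);

                ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
                sum_ms += ms;
                if (ms > max_ms)
                        max_ms = ms;
        }
        printf("reads: %d  avg: %.2f ms  max: %.2f ms\n",
            NUM_READS, sum_ms / NUM_READS, max_ms);
        close(fd);
        return (0);
}

Comparing the average and maximum printed here against gstat's
per-interval numbers would show whether the improvement is visible in
per-read latency as seen by the application, rather than only in the
interval-averaged service times.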