From owner-freebsd-geom@FreeBSD.ORG Fri Oct 17 16:58:53 2014
Date: Fri, 17 Oct 2014 09:58:49 -0700 (PDT)
From: John-Mark Gurney
To: Sourish Mazumder
Cc: freebsd-geom@freebsd.org
Subject: Re: geom gate network
Message-ID: <20141017165849.GX1852@funkthat.com>
List-Id: GEOM-specific discussions and implementations
Sourish Mazumder wrote this message on Fri, Oct 17, 2014 at 17:34 +0530:
> I am planning to use geom gate network for accessing remote disks. I set up
> geom gate as per the FreeBSD Handbook. I am using FreeBSD 9.2.
> I am noticing a heavy performance impact on disk IO when using geom gate. I
> am using the dd command to write directly to the SSD to test performance.
> The IOPS drop to 1/3 when accessing the SSD remotely over a geom gate
> network, compared to the IOPS achieved when writing to the SSD directly on
> the system where the SSD is attached.
> I thought there might be some problem with the network, so I decided to
> create a geom gate disk on the same system where the SSD is attached. This
> way the IO does not go over the network. However, in this case I noticed
> the IOPS drop to 2/3 of those achieved when writing to the SSD directly.
> 
> So, I have an SSD and its geom gate network disk created on the same node,
> and the same IOPS test using the dd command gives 2/3 the IOPS on the
> geom gate disk compared to running the test directly on the SSD.
> 
> This points to some performance issue with geom gate itself.

Not necessarily... Yes, it's slower, but at the same time, you now have to
run lots of network and TCP code in addition to the disk work for each and
every IO...

> Is anyone aware of any such performance issues when using geom gate network
> disks?
> If so, what is the reason for such an IO performance drop, and are there
> any solutions or tuning parameters to rectify it?
> 
> Any information regarding the same will be highly appreciated.

I did some work on this a while back... and if you're interested in
improving performance and willing to do some testing, I can send you some
patches..

There are a couple of issues that I know about..

First, ggate specifically sets the socket buffer sizes, which disables
TCP's window autosizing.. This means that on a high latency, high
bandwidth link, you'll be limited to 128k / rtt of bandwidth.

Second, ggate isn't issuing multiple IOs at a time. This means that NCQ
or tagged queueing can't be used, whereas when running natively it can
be...

-- 
John-Mark Gurney				Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."
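A back-of-the-envelope sketch of the two effects above. Every latency and
RTT figure in it is an assumed, illustrative value, not a measurement from
this thread; the formulas are just the bandwidth-delay product for a fixed
window and the round-trip cost of serializing one IO at a time:

```python
# Rough model of the two ggate issues discussed above. All latency/RTT
# numbers are assumptions chosen for illustration only.

def window_limited_throughput(window_bytes: float, rtt_s: float) -> float:
    """With TCP window autosizing disabled, throughput <= window / RTT."""
    return window_bytes / rtt_s

def single_queue_iops(device_latency_s: float, transport_rtt_s: float) -> float:
    """With only one outstanding IO, each request must finish a full
    round trip before the next is issued, so IOPS = 1 / (latency + RTT)."""
    return 1.0 / (device_latency_s + transport_rtt_s)

WINDOW = 128 * 1024  # the fixed 128k socket buffer mentioned above

# Issue 1: a fixed window caps bandwidth as RTT grows.
for rtt_ms in (1, 10, 100):
    bps = window_limited_throughput(WINDOW, rtt_ms / 1e3)
    print(f"RTT {rtt_ms:3d} ms -> at most {bps / 1e6:6.1f} MB/s")

# Issue 2: serialized IOs add the transport round trip to every request.
ssd = 100e-6  # assumed 100 us SSD write latency
print(f"direct:     {single_queue_iops(ssd, 0):8.0f} IOPS")
print(f"local gate: {single_queue_iops(ssd, 50e-6):8.0f} IOPS")   # assumed 50 us loopback RTT
print(f"network:    {single_queue_iops(ssd, 200e-6):8.0f} IOPS")  # assumed 200 us LAN RTT
```

With these made-up latencies the model lands in the same ballpark as the
2/3 (local) and 1/3 (remote) ratios reported above, but the real numbers
depend entirely on the hardware and network.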