Date:      Fri, 4 Jul 2014 00:10:51 +0400
From:      Slawa Olhovchenkov <slw@zxy.spb.ru>
To:        Adrian Chadd <adrian@freebsd.org>
Cc:        Nikolay Denev <ndenev@gmail.com>, Sreenivasa Honnur <shonnur@chelsio.com>, FreeBSD Current <freebsd-current@freebsd.org>, Kevin Oberman <rkoberman@gmail.com>
Subject:   Re: FreeBSD iscsi target
Message-ID:  <20140703201051.GT5102@zxy.spb.ru>
In-Reply-To: <CAJ-Vmomfi2NmTzEsHfrpd9j1GOkLkXp+8pbRLShSOANUB1t21w@mail.gmail.com>
References:  <CAN6yY1t2qDzfeO37p2s_3=vzEVv5C813M0ttqjnM4tJGkkBhyA@mail.gmail.com> <20140702112609.GA85758@zxy.spb.ru> <CAN6yY1uzfjoDfEdti91Ogy11LzT3-5JvLREBdW6ynEOgm0uUPA@mail.gmail.com> <20140702203603.GO5102@zxy.spb.ru> <CAN6yY1von-Z586V=8qs3+OfV3oXes380s2GD-149EYWLxws-qA@mail.gmail.com> <CA+P_MZE013dv22Sb-rk7ZoiYbCTodmth0d-XpdM6mrpw3WxQmg@mail.gmail.com> <20140703091321.GP5102@zxy.spb.ru> <CA+P_MZEJ=Gj4+8tKdZHAObjw-_riGLYLFOeiUXj9vn=JkwShmQ@mail.gmail.com> <20140703102901.GQ5102@zxy.spb.ru> <CAJ-Vmomfi2NmTzEsHfrpd9j1GOkLkXp+8pbRLShSOANUB1t21w@mail.gmail.com>

On Thu, Jul 03, 2014 at 10:28:19AM -0700, Adrian Chadd wrote:

> Which NIC?

I can't find those forum posts again (the last time I found them was
about a year ago). Maybe it was this one:
http://hardforum.com/showthread.php?t=1662769
In that case it was a Mellanox ConnectX-2 QDR InfiniBand card.
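
For what it's worth, here is a toy sketch of the mechanism as I
understand it. This is my own illustration, not code from any real
switch or NIC; the 4 x 10G lane split and the hash function are
assumptions. When a 40G path is internally built from four 10G lanes
and the lane is chosen by hashing the flow's 5-tuple, every packet of
one connection lands on the same lane, so no single flow can exceed
about 10 Gbit/s:

    /*
     * Hypothetical sketch: hash-based lane selection caps one flow
     * at one lane's rate. Real hardware typically uses something
     * like a Toeplitz hash (as in RSS); this toy hash just shows
     * the idea.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define NLANES 4        /* assumed: 4 x 10G lanes behind a 40G port */

    struct flow {
            uint32_t src_ip, dst_ip;
            uint16_t src_port, dst_port;
            uint8_t  proto;
    };

    static unsigned
    lane_for_flow(const struct flow *f)
    {
            uint32_t h;

            h = f->src_ip ^ f->dst_ip ^
                ((uint32_t)f->src_port << 16 | f->dst_port) ^ f->proto;
            h ^= h >> 16;
            return (h % NLANES);
    }

    int
    main(void)
    {
            /* One iSCSI connection: every packet maps to one lane,
             * so this flow never exceeds that lane's ~10G rate. */
            struct flow f = { 0x0a000001, 0x0a000002, 49152, 3260, 6 };
            uint16_t p;

            printf("single flow -> lane %u\n", lane_for_flow(&f));

            /* Several connections spread across lanes and can use
             * the full 40G in aggregate. */
            for (p = 49152; p < 49156; p++) {
                    f.src_port = p;
                    printf("sport %u -> lane %u\n", p, lane_for_flow(&f));
            }
            return (0);
    }

This is also why multiple iSCSI connections can fill the pipe in
aggregate while a single connection tops out at one lane's rate.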


> On 3 July 2014 03:29, Slawa Olhovchenkov <slw@zxy.spb.ru> wrote:
> > On Thu, Jul 03, 2014 at 10:35:55AM +0100, Nikolay Denev wrote:
> >
> >> >> I found this white paper useful in understanding how this works:
> >> >> http://www.cisco.com/c/en/us/products/collateral/switches/nexus-3000-series-switches/white_paper_c11-726674.pdf
> >> >
> >> > In the real world, "reality is quite different than it actually is".
> >> > http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/white_paper_c11-696669.html
> >> >
> >> > See "Packet Path Theory of Operation. Ingress Mode".
> >> >
> >>
> >> Interesting; however, this seems like an implementation-specific
> >> detail, not a limitation of native 40 Gbit Ethernet.
> >
> > I saw some performance tests on Solaris over a 40G link.
> > In those tests, performance was limited to about 10 Gbit/s per flow.
> > Maybe I can find the links to those tests again.
> >
> > Maybe some NIC implementation-specific detail also limits
> > per-flow performance.
> >
> >> Still, it's something that one must be aware of (esp when dealing with
> >> Cisco gear :) )
> >>
> >> I wonder why they are not doing something like this:
> >> http://blog.ipspace.net/2011/04/brocade-vcs-fabric-has-almost-perfect.html
> >>
> >> --Nikolay
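
As an aside: an easy way to tell whether such a limit is per-flow or
per-link is to compare one TCP stream against several parallel ones,
e.g. an "iperf -c <target>" run versus "iperf -c <target> -P 4"
(<target> is a placeholder for the target host). If the aggregate of
the parallel run scales well past what a single flow achieves, the
bottleneck is the per-flow hashing, not the link itself.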


