Date:      Fri, 20 Nov 2009 10:11:14 -0800
From:      patrick <gibblertron@gmail.com>
To:        Grant Peel <gpeel@thenetnow.com>
Cc:        questions@freebsd.org
Subject:   Re: NFS- SAN - FreeBSD
Message-ID:  <b043a4850911201011p15ba4d94rcf03da1316670f53@mail.gmail.com>
In-Reply-To: <534AF36AC3BE4B3581FFB7756DD9ADFC@GRANT>
References:  <25A3192F31A344B99F50583BDC58C921@GRANT> <C4577BCC84D24FFE97FD4036C2C4FB82@GRANT> <f151ba00907201321x363de61ai27c54d4902d1d9fc@mail.gmail.com> <85A4A9F5895D4CDCAEDF23E8181A118D@GRANT> <4A6535A2.90707@studsvikscandpower.com> <26D9A85FF5344B9CA8F5DCDA1AFFBC46@GRANT> <4A66368C.3010009@studsvikscandpower.com> <FB08EAB37B8347FAA6C2A7E107D7DB05@GRANT> <4A6656F2.50909@studsvikscandpower.com> <534AF36AC3BE4B3581FFB7756DD9ADFC@GRANT>

Hi Grant,

I'm in a situation similar to the one you were in back in July, and I was
wondering which route you ended up taking?

Patrick


On Tue, Jul 21, 2009 at 4:42 PM, Grant Peel <gpeel@thenetnow.com> wrote:
> Chris,
>
> Again, thanks for the info.
>
> I only have one server with a PERC (RAID) card installed, and I believe it
> is an older PERC 3 DCI, which I doubt would do the job. I would not be able
> to add more PERC cards to the other machines.
>
> I am looking to have the connections all done via Ethernet. Again, the
> connections would be local (device to my switch, switch to the individual
> servers).
>
> Does this mean I should be considering iSCSI, or, since the connections
> will all be on a local network, can I continue to consider NFS?
>
> Any takers?
>
> -Grant
>
> ----- Original Message ----- From: "Christopher J. Umina"
> <chris.umina@studsvikscandpower.com>
> To: "Grant Peel" <gpeel@thenetnow.com>
> Cc: <questions@freebsd.org>
> Sent: Tuesday, July 21, 2009 8:01 PM
> Subject: Re: NFS- SAN - FreeBSD
>
>
>> Grant,
>>
>> DAS = Direct-Attached Storage, sorry for the confusion.
>>
>> I cannot personally speak to the performance of FreeBSD's NFS, but I
>> wouldn't expect it to be the bottleneck in the situation described.
>> Maybe others with more experience could chime in on this topic.
>>
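>> For what it's worth, the NFS side of this is just the stock FreeBSD
>> setup. A minimal sketch, where the 192.168.0.0/24 network below is only
>> a placeholder for your LAN:
>>
>>     # /etc/rc.conf on the storage box
>>     rpcbind_enable="YES"
>>     nfs_server_enable="YES"
>>     mountd_enable="YES"
>>
>>     # /etc/exports on the storage box
>>     /home -alldirs -network 192.168.0.0 -mask 255.255.255.0
>>
>>     # then start rpcbind, mountd and nfsd (or simply reboot),
>>     # and check the export list with: showmount -e localhost
>>
>> See exports(5) and nfsd(8) for the details.
>>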
>> The way to use a DAS is to connect the DAS to a server with an external
>> SAS cable (or two). The PERC6/E controller you would need inside the
>> server is very well supported in FreeBSD. The DAS system would basically
>> act the same as internal disks would (in the case of the MD1000). Of
>> course you'll want to check with Dell before you make any purchases to be
>> positive that your hardware will all communicate nicely, as I'm no Dell
>> salesperson.
>>
>> Depending on how large an array you plan to make (if larger than 2 TB),
>> you may have to investigate gpart/GPT to partition it correctly, but
>> that's quite simple in my experience.
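>>
>> As a rough sketch only: with a PERC6/E the array shows up as a single
>> mfi(4) logical disk (e.g. /dev/mfid0, a placeholder name here), and a
>> volume larger than 2 TB would be GPT-labelled with gpart and then newfs'd:
>>
>>     gpart create -s gpt mfid0
>>     gpart add -t freebsd-ufs mfid0
>>     newfs -U /dev/mfid0p1
>>     mount /dev/mfid0p1 /home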
>>
>> Chris
>>
>> Grant Peel wrote:
>>>
>>> Chris,
>>>
>>> Thanks for the insight!
>>>
>>> I will definitely investigate that DAS ... although I am not (yet) sure
>>> what the acronym means, I am sure it is something akin to "Direct Access
>>> SCSI".
>>>
>>> You are quite right, I would like to use NFS to connect the device to the
>>> 6 servers I have; again, it would only be hosting the /home partition for
>>> each of them. Do you know if there would be any NFS I/O slowdowns using it
>>> in that fashion? Would FreeBSD support (on the storage device) that many
>>> connections?
>>>
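>>> What I have in mind on each of the six servers is just the ordinary
>>> fstab NFS mount; as a sketch, with "storage" standing in for whatever
>>> the NFS box ends up being called:
>>>
>>>     # /etc/fstab on each web server
>>>     storage:/home   /home   nfs     rw,intr   0   0
>>>
>>> or, by hand, "mount -t nfs storage:/home /home".
>>>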
>>> Also, do the Dell DAS machines run with FreeBSD?
>>>
>>> Also, from what you explained, I doubt I really need the versatility of
>>> the SAN at this point, or in the near future. I simply want a mass /home
>>> storage unit.
>>>
>>> -Grant
>>>
>>> ----- Original Message ----- From: "Christopher J. Umina"
>>> <chris.umina@studsvikscandpower.com>
>>> To: "Grant Peel" <gpeel@thenetnow.com>
>>> Cc: <questions@freebsd.org>
>>> Sent: Tuesday, July 21, 2009 5:43 PM
>>> Subject: Re: NFS- SAN - FreeBSD
>>>
>>>
>>>> Grant,
>>>>
>>>> I mean to say that oftentimes external SCSI solutions (direct attached)
>>>> are cheaper and perform better (in terms of I/O) than iSCSI SANs,
>>>> especially if you're using many disks. SANs are generally chosen for
>>>> the ability to be split into LUNs for different servers. Think of it as
>>>> a disk which you can partition and serve out to servers on a
>>>> per-partition basis, over Ethernet. That's essentially what an iSCSI
>>>> SAN does. While DAS systems allow the same sort of configuration, they
>>>> don't serve out over Ethernet, only SCSI/SAS.
>>>>
>>>> Since you plan to use NFS to share the files to the other servers, I
>>>> think it may make more sense for you to use a SCSI solution if you
>>>> don't need the versatility of a SAN.
>>>>
>>>> Of course I know nothing of how you plan to expand this system, but
>>>> from what I understand, with Dell DAS hardware it is possible to
>>>> connect up to 4 different servers to the DAS and expand to up to six
>>>> 15-disk enclosures. The MD3000i (iSCSI) expands only to 3.
>>>>
>>>> Another issue is that without compiling in special versions of the
>>>> iSCSI initiator, even in 8.0-BETA2 (which is not production-ready),
>>>> iSCSI performance and reliability are terrible. There are other
>>>> versions of the code (which I currently use) for the iscsi_initiator
>>>> kernel module, but unless you're comfortable doing that, you may want
>>>> to consider DAS in terms of ease of implementation and maintenance as
>>>> well.
>>>>
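>>>> For reference, the stock initiator is the iscsi_initiator(4) module
>>>> driven by iscontrol(8); very roughly, with the address and IQN below
>>>> being placeholders (see iscontrol(8) for the exact syntax):
>>>>
>>>>     kldload iscsi_initiator
>>>>     iscontrol -v -t 192.168.0.50 targetname=iqn.2009-07.example:home
>>>>     # the LUN then attaches as a normal da(4) disk, e.g. /dev/da1
>>>>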
>>>> Chris
>>>>
>>>> Grant Peel wrote:
>>>>>
>>>>> Chris,
>>>>>
>>>>> I don't know what a direct attached array is.....
>>>>>
>>>>> What I was just thinking was to move all of the servers' /home
>>>>> directories to a huge NFS mount.
>>>>>
>>>>> If you have the time to elaborate further, I would appreciate it...
>>>>>
>>>>> This iSCSI thing has me intrigued, but I must admit I know little
>>>>> about it at this point.
>>>>>
>>>>> -Grant
>>>>>
>>>>> ----- Original Message ----- From: "Christopher J. Umina"
>>>>> <chris.umina@studsvik.com>
>>>>> To: "Grant Peel" <gpeel@thenetnow.com>
>>>>> Sent: Monday, July 20, 2009 11:27 PM
>>>>> Subject: Re: NFS- SAN - FreeBSD
>>>>>
>>>>>
>>>>>> Grant,
>>>>>>
>>>>>> I have to ask, is there a reason you're intent on going with a SAN
>>>>>> versus a direct-attached array?
>>>>>>
>>>>>> Chris
>>>>>>
>>>>>> Grant Peel wrote:
>>>>>>>
>>>>>>> Thanks for the reply.
>>>>>>>
>>>>>>> I have not used/investigated the iSCSI thing yet....
>>>>>>>
>>>>>>> The original question is: can I just use an NFS mount to the
>>>>>>> storage's /home partition?
>>>>>>>
>>>>>>> -Grant
>>>>>>> ----- Original Message ----- From: mojo fms
>>>>>>> To: Grant Peel
>>>>>>> Cc: freebsd-questions@freebsd.org
>>>>>>> Sent: Monday, July 20, 2009 4:21 PM
>>>>>>> Subject: Re: NFS- SAN - FreeBSD
>>>>>>>
>>>>>>>
>>>>>>> You would be better off at least having the SAN on 1 Gb Ethernet,
>>>>>>> or even better triple 1 Gb Ethernet (a 100 Mb switch should be fine,
>>>>>>> but you need failover for higher availability), for latency and
>>>>>>> failover reasons, with a hot backup on the network controller. I
>>>>>>> don't see why you could not do this; it's normally just an iSCSI
>>>>>>> connection, so there is not a big issue getting FreeBSD to connect
>>>>>>> to it. We run two of the 16 TB PowerVaults, which do pretty well for
>>>>>>> storage; one runs everything and the other is a replicated offsite
>>>>>>> backup. Performance-wise, it really depends on how many servers you
>>>>>>> have pulling data from the SAN and how hard the I/O works on the
>>>>>>> current servers. If you have 100 servers you might push the I/O a
>>>>>>> bit, but it should be fine if you're not serving more than 2 Mb/s
>>>>>>> out to everyone; the servers and disks are going to cache a fair
>>>>>>> amount of always-used data.
>>>>>>>
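>>>>>>> A failover pair like that is just lagg(4) in rc.conf; a sketch, with
>>>>>>> the interface names and address below being nothing but placeholders:
>>>>>>>
>>>>>>>     cloned_interfaces="lagg0"
>>>>>>>     ifconfig_em0="up"
>>>>>>>     ifconfig_em1="up"
>>>>>>>     ifconfig_lagg0="laggproto failover laggport em0 laggport em1 192.168.0.10/24"
>>>>>>>
>>>>>>> Traffic stays on em0 and moves to em1 only if the first link drops.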
>>>>>>>
>>>>>>> On Mon, Jul 20, 2009 at 11:52 AM, Grant Peel <gpeel@thenetnow.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Hi all,
>>>>>>>
>>>>>>> I am assuming by the lack of response that my question was too
>>>>>>> long-winded, so let me re-phrase:
>>>>>>>
>>>>>>> What kind of performance might I expect if I load FreeBSD 7.2 on a
>>>>>>> 24-disk Dell PowerVault whose only mission is to serve as a local
>>>>>>> storage unit (/home), obviously to store all users' /home data,
>>>>>>> through an NFS connection via fast (100 Mb/s) Ethernet? Each of the
>>>>>>> six connecting servers contains about 200 domains.
>>>>>>>
>>>>>>> -Grant
>>>>>>>
>>>>>>> ----- Original Message ----- From: "Grant Peel" <gpeel@thenetnow.com>
>>>>>>> To: <freebsd-questions@freebsd.org>
>>>>>>> Sent: Saturday, July 18, 2009 10:35 AM
>>>>>>> Subject: NFS- SAN - FreeBSD
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Hi all,
>>>>>>>
>>>>>>> Up to this point, all of our servers are standalone, i.e. all
>>>>>>> services and software required are installed on each local server.
>>>>>>>
>>>>>>> Apache, Exim, vm-pop3d, MySQL, etc.
>>>>>>>
>>>>>>> Each local server is connected to the Internet via a VLAN (WAN), to
>>>>>>> our colo's switch.
>>>>>>>
>>>>>>> Each server contains about 300 domains, and each domain has its own
>>>>>>> IP.
>>>>>>>
>>>>>>> Each server is also connected to a VLAN (LAN) via the same (Dell
>>>>>>> 48-port managed) switch.
>>>>>>>
>>>>>>> We have been considering consolidating all users' data from each
>>>>>>> server to a central (local) storage unit.
>>>>>>>
>>>>>>> While I do have active NFS mounts running (for backups, etc.), on
>>>>>>> the LAN only, I have never attempted to create one mass storage unit.
>>>>>>>
>>>>>>> So I suppose the questions are:
>>>>>>>
>>>>>>> 1) Is there any specific hardware that anyone might recommend? I
>>>>>>> want to stick with FreeBSD as the OS, as I am quite comfortable
>>>>>>> administering it.
>>>>>>>
>>>>>>> 2) Would anyone recommend NOT using FreeBSD? Why?
>>>>>>>
>>>>>>> 3) Assuming I am using FreeBSD as the storage system's OS, could NFS
>>>>>>> simply be used?
>>>>>>>
>>>>>>> 4) Considering our whole Internet traffic runs at about 2 Mb/s, is
>>>>>>> there any reason the port to the storage unit should be more than
>>>>>>> 100 Mb/s (would it be imperative to use a 1 Gb/s link)?
>>>>>>>
>>>>>>> TIA,
>>>>>>>
>>>>>>> -Grant
>>>>>>>
>>>>>>> -- Who knew
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>
>


