Date:      Fri, 14 Oct 2011 17:18:31 -0600 (MDT)
From:      Dennis Glatting <freebsd@penx.com>
To:        Tim Daneliuk <tundra@tundraware.com>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: Very large swap
Message-ID:  <alpine.BSF.2.00.1110141647030.27991@Elmer.dco.penx.com>
In-Reply-To: <4E98707F.3070202@tundraware.com>
References:  <alpine.BSF.2.00.1110132256080.3996@Elmer.dco.penx.com> <4E9866CF.6010209@gmx.com> <4E98707F.3070202@tundraware.com>



On Fri, 14 Oct 2011, Tim Daneliuk wrote:

> On 10/14/2011 11:43 AM, Nikos Vassiliadis wrote:
>> On 10/14/2011 8:08 AM, Dennis Glatting wrote:
>>> 
>>> This is kind of stupid question but at a minimum I thought it would be
>>> interesting to know.
>>> 
>>> What are the limitations in terms of swap devices under RELENG_8 (or 9)?
>>> 
>>> A single swap device appears to be limited to 32GB (there are truncation
>>> messages on boot). I am looking at a possible need of 2-20TB (probably
>>> more) with as much main memory as is affordable.
>> 
>> The limit is raised to 256GB in HEAD and RELENG_8
>> http://svnweb.freebsd.org/base?view=revision&revision=225076
>> 
>>> I am working with large data sets and there are various ways of solving
>>> the problem sets but simply letting the processors swap as they work
>>> through a given problem is a possible technique.
>> 
>> I would advise against this technique. Possibly it's easier to design
>> your program to use smaller amounts of memory and avoid swapping.
>> 
>> After all, designing your program to use large amounts of swapped-out
>> memory *and* still perform in a timely manner can be very challenging.
>> 
>> Nikos
>
> Well ... I dunno how much large dataset processing you've done, but
> it's not that simple.  Ordinarily, with modern machines and
> architectures, you're right.  In fact, you NEVER want to swap;
> instead, throw memory at the problem.
>
> But when you get into really big datasets, it's a different story.
> You probably will not find a mobo with 20TB memory capacity :)
> So ... you have to do something with disk.  You generally get
> two choices:  Memory mapped files or swap.  It's been some years
> since I considered either seriously, but they do have some tradeoffs.
> MM files give the programmer very fine-grained control of just what
> might get pushed out to disk, at the cost of user-space context
> switching.  Swap gets managed by the kernel, which is about as
> efficient as disk I/O is going to get, but that means what and how
> things get moved on and off disk is invisible to the application.
>
> What a lot of big data shops are moving to is SSD for such operations.
> SSD is VERY fast and can be RAIDed to overcome at least the early
> SSD products' tendency to, um ... blow up.
>
> As always, scale is hard, and giant data problems are Really Hard (tm).
> That's why people like IBM, Sun/Oracle, and Teradata make lots of money
> building giant iron farms.
>
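
On the mmap-versus-swap point: for anyone following along, the
memory-mapped-file route Tim describes looks roughly like the sketch
below. This is purely illustrative; the file name and the access
pattern are invented, not anything from my actual code.

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <err.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    main(void)
    {
            /* "work.dat" is a placeholder for a pre-sized data file. */
            int fd = open("work.dat", O_RDWR);
            if (fd == -1)
                    err(1, "open");

            struct stat sb;
            if (fstat(fd, &sb) == -1)
                    err(1, "fstat");

            /*
             * Map the whole file.  The kernel pages pieces in and out as
             * they are touched; the application decides what gets touched,
             * which is the "fine-grained control" part.
             */
            unsigned char *p = mmap(NULL, sb.st_size,
                PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                    err(1, "mmap");

            /* Toy access pattern: walk the file a page at a time. */
            for (off_t i = 0; i < sb.st_size; i += 4096)
                    p[i] ^= 0xff;

            if (msync(p, sb.st_size, MS_SYNC) == -1)
                    err(1, "msync");
            munmap(p, sb.st_size);
            close(fd);
            return (0);
    }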

This is a proof-of-concept project that is personally educational with a 
substantial amusement factor. I am doing it on the cheap, which means 
commercial products. I'm also doing it at home, which means expenses come 
out of my pocket.

This project is about manipulating and creating large data sets for
crypto-related applications (use your imagination here). The manipulations
are fairly stupid, and generally I am using UNIX utilities because I don't
want to re-invent existing code (I'm not that bored). I have also written
some programs, but they are no more than a few pages of code. I am also
using MPI, OpenMP, and pthreads where they are supported and make sense.
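
Just to give a sense of the scale of those programs, the flavor is
roughly the following (this is not my real code, just the shape of it;
the buffer size and the transform are invented for the example):

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
            /* 1 GiB scratch buffer, purely for illustration. */
            size_t n = 1UL << 30;
            unsigned char *buf = malloc(n);
            if (buf == NULL)
                    return (1);

            /* Each thread transforms its own slice of the buffer. */
    #pragma omp parallel for
            for (long i = 0; i < (long)n; i++)
                    buf[i] = (unsigned char)(i * 2654435761UL);

            printf("%d\n", buf[0]);
            free(buf);
            return (0);
    }

(Built with the usual -fopenmp flag; the pthreads and MPI bits are no
more exciting than that.)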

I have committed five machines to the project. Three (the 'attack'
machines) run overclocked Phenom II X6 processors with 16GB of RAM, a 1TB
disk for the OS, a 1TB disk for junk, and a RAIDz array of three 2TB
disks. Two of the three motherboards (ASUS CROSSHAIR V FORMULA, Gigabyte
GA-990FXA-UD7, and something I had lying around) are upgradable to 8150s,
and I have one of those on order. These machines are liquid cooled for no
reason in particular other than that I thought it would be fun (definitely
educational). Roughly fifty percent of the parts were lying around, but
more importantly my wife and I keep our finances separate. :)
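
The RAIDz arrays themselves are nothing fancy; roughly the following,
with pool and device names that are placeholders rather than the real
ones:

    zpool create tank raidz ada2 ada3 ada4
    zpool status tank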

A data manipulation server is running a quad-core i7 (not overclocked, but
turbo is enabled) with 24GB of fast RAM. It has twelve 2TB disks and two
1TB disks (OS), plus a few SSDs, configured across three volumes (OS,
junk, and a work volume).

A repository server is a six-core i7 at 3.3GHz (not overclocked) with 24GB
of RAM and several volumes, two of which are RAIDz, plus SSDs and other
junk. I NFS the data from this server to the attack servers.
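
The NFS side is nothing special either: the standard rc.conf knobs on
the repository server plus an /etc/exports entry, something like the
following (the path and the network are placeholders, not my real
layout):

    /etc/rc.conf:
        nfs_server_enable="YES"
        rpcbind_enable="YES"
        mountd_enable="YES"

    /etc/exports:
        /data/repo -ro -network 192.168.10.0 -mask 255.255.255.0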

I am using an Intel gigabit card, picked because it supports large MTUs
(jumbo frames).
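
Jumbo frames are just the mtu knob on the interface, e.g. in
/etc/rc.conf (the interface name and address here are examples, not my
actual ones):

    ifconfig_em0="inet 192.168.10.1 netmask 255.255.255.0 mtu 9000"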

All are running RELENG_8.
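
As for the original swap question: given the per-device cap Nikos
pointed at, the plan is simply to spread swap across several devices in
/etc/fstab, along these lines (device names are placeholders):

    /dev/ada1p2   none   swap   sw   0   0
    /dev/ada2p2   none   swap   sw   0   0
    /dev/ada3p2   none   swap   sw   0   0

A "swapon -a" picks them all up and swapinfo shows the total.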

If this project moves beyond proof-of-concept, and is therefore no longer
on my own money, we're talking about 100-1,000 TB of data on real servers
(SMART, iLO, an NMS, etc.). I have my doubts this will happen, but in the
meantime it's play day.




