Date:      Mon, 26 Nov 2001 23:10:41 -0800
From:      "Ted Mittelstaedt" <>
To:        "Robert Suetterlin" <>
Cc:        <freebsd-questions@FreeBSD.ORG>
Subject:   RE: storing 10TB in 494x435x248mm, with power of 200W (at 28VDC) (was: why are You asking here)
Message-ID:  <000001c17712$9f775880$>
In-Reply-To: <>

>-----Original Message-----
>From: owner-freebsd-questions@FreeBSD.ORG
>[mailto:owner-freebsd-questions@FreeBSD.ORG]On Behalf Of Robert
>Sent: Monday, November 26, 2001 3:05 AM
>To: Ted Mittelstaedt
>Cc: freebsd-questions@FreeBSD.ORG
>Subject: storing 10TB in 494x435x248mm, with power of 200W (at 28VDC)
>(was: why are You asking here)
>Hello Ted,
>thanks for discussing this problem so far, it really helps making up
>my mind when I get different opinions on this topic.

You're welcome.

>> >> 3) What exactly is a HSM going to do for you?
>> >HSM is useful if You have a hierarchy of storage with different
>> >qualities.
>> That's what I said.
>> You indicated that you're doing archival only - ie: "one quality" in your
>> terms.  Is this not true?
>Sorry.  I seem to keep too much of my thoughts to myself: The data rate
>of 260MByte per second and the size and power limits seemed to prohibit
>real-time compression.  (We need 'lossless' compression, and there has
>not been an algorithm selected / designed yet.)

The only people I know of in the industry with experience in on-the-fly
high-ratio compression who might possibly help you are Stac Electronics;
they have some patented algorithms that could possibly be scaled up to what
you want.

I would not discount on-the-fly compression if I were you.  Even a small
amount of compression would reduce the storage needs tremendously.  The
benefits are so large that the option is worth exploring.
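To make the point concrete, here's a minimal sketch of how one might estimate the payoff of lossless compression on a sample of instrument data before committing to hardware.  zlib stands in for whatever algorithm eventually gets selected, and the "databurst" here is synthetic; real sensor data will compress differently.

```python
# Estimate original_size / compressed_size on a sample databurst.
# zlib is only a stand-in for the (not yet selected) flight algorithm.
import zlib

def compression_ratio(sample: bytes, level: int = 6) -> float:
    """Return original_size / compressed_size for one data sample."""
    compressed = zlib.compress(sample, level)
    return len(sample) / len(compressed)

# Highly regular data compresses very well; random noise barely at all.
regular = bytes(range(256)) * 4096          # 1 MiB repeating pattern
ratio = compression_ratio(regular)
print(f"ratio on regular data: {ratio:.1f}:1")
```

Even a modest 1.5:1 ratio on real data would turn the 10TB-per-shuttle problem into a 6.7TB one.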

>I concluded we would have to
>record the 45 Minute 260MB/s databursts to some kind of fast
>intermediate storage (for example RAM or Solid State Disks, etc.).  Then
>we could spend some time (e.g. a day) on transferring that data to
>permanent storage and possibly reprocessing / compressing offline.
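The intermediate-storage requirement implied above is worth spelling out; a quick back-of-the-envelope check (my arithmetic, not a figure from the thread):

```python
# Buffer needed for one burst: 45 minutes at 260 MB/s.
burst_seconds = 45 * 60          # 2700 s
rate_mb_per_s = 260
total_mb = burst_seconds * rate_mb_per_s
print(total_mb / 1000, "GB per burst")   # ~702 GB
```

So the fast intermediate store needs roughly 0.7TB per burst, which in 2001 terms rules out RAM alone and points at solid-state or striped disk.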

>Sometimes experimentalists would like to read some 'old' data back into
>the machinery to do a little analysis online (via telescience) before a
>shuttle takes the 'tapes' back to earth.  In addition the system should
>'automatically' record metadata and implement redundancy.

Would the experimentalists really need the high resolution for online
analysis?  What about recording two databursts at the same time: a
high-resolution one and a low-resolution one?  The low-resolution one would
spool off to temporary storage and perhaps only be saved for a few days, or
downloaded Earthside.  That might be enough to figure out where the
interesting things are.
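The dual-recording idea can be sketched in a few lines.  The decimation factor and the stream shape below are made up for illustration; a real system would do proper filtered downsampling, not bare sample-skipping.

```python
# Keep a decimated low-rate copy alongside the full-rate stream,
# so quick-look analysis never has to touch the archival data.
import numpy as np

def split_streams(burst: np.ndarray, factor: int = 16):
    """Return (high_res, low_res); low_res keeps every `factor`-th
    sample -- a crude stand-in for real downsampling."""
    return burst, burst[::factor]

burst = np.arange(1_000_000, dtype=np.float32)
hi, lo = split_streams(burst)
print(len(lo) / len(hi))   # → 0.0625, i.e. 1/16 of the storage cost
```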

>To me there seemed to exist two possible solutions (except for the
>completely handmade thingy): 1) a kind of batchmode system running
>{record, reprocess, store} and {reread, reprocess, manipulate} jobs that
>would be similar to a VCR and would require implementing metadata and
>redundancy 'by hand'. 2) or using some kind of storage manager, that
>could handle these tasks and manage resources 'automatically'.

Well, I think you've got two redundancy needs here.  The first is what you
would call data integrity: that's what you want the metadata for; all it
does is verify that the storage medium isn't trash.  It's important for
media like tape, because tape is nowhere near the reliability you want.
But I'd pose the question: if the storage medium were "perfect", or nearly
so, would there be the same redundancy needs?

The second redundancy need is for the case where a shuttle is lost on a
return trip and your tapes get disintegrated.

>> >150TB of data per year.  If there is a
>> >shuttle mission each month, they will have to transport 10TB of data
>> >each time.
>> So then have someone design an optical solution for you.
>I have monitored the 'optical solutions' market for three years now and
>I do not trust them being able to deliver better data density than
>magnetic media in the next ten years... but then I would be delighted if
>they proved me wrong.
>> >I also would like to use the
>> >same general technique today that we will launch in five years (and then
>> >use for up to ten more).  So I would like to use standard hardware and
>software solutions.  These will most likely adapt to the future
>> >automatically.
>> Any standard solution will be hopelessly obsolete 10 years from now and
>> getting
>> parts for it will cost astronomically.
>You are completely right --- talking about fixed hardware.  (I see that
>my paragraph hints to such a thing.)  Yet I thought about a combination
>of software and hardware, where both could be upgraded independently
>while still keeping standard interfaces available, and called that a
>'standard solution'.  I mean something like Intel CPU, PC Architecture
>and *BSD.  All have changed over the last ten years quite a lot.  But
>still I could run a software that would rely on 'standard' interfaces
>(like files, ports, pipes, etc.) from ten years back on today's most
>modern hardware and newest *BSD version.  And the prices would even have

I would caution you about that.  One of the poorest-kept secrets in computer
hardware today is that we have long since reached the point of diminishing
returns in the "industry standard" Wintel architecture.

There's a lot of future computer technology, like voice recognition,
machine intelligence, and so on, that is going to require an order of
magnitude more computer hardware than what we have today.  Microsoft and
Intel both know it, and both are caught between a rock and a hard place.
On one hand, if they break with the past and go to a totally new
architecture, they escape the old IBM PC/XT limitations; but in doing so
they redefine the market and open the door for a huge host of new
competitors.  If they do nothing, people will eventually stop buying new
hardware and software except to replace failed electronics, and both of
them will go out of business.

I think we are seeing the warning shots across the bow now: the computer
market is getting restless and sending the message loud and clear that the
same _old_ Wintel in a new box won't cut it anymore.  Buyers aren't buying,
and it's hurting the entire technology sector.  I suspect that in 10 years
computers are going to look very, very different than they do today, with
fundamentally different internal architectures.  I hope that FreeBSD keeps
up!

>> Instead of attempting to get some sort of standard now, what you need
>> to do is trust in the capitalistic system to use whatever work your
>> R&D budget creates and make commercial products out of it.  Then 10
>> years from now such high-capacity arrays will cost $100 and be
>> available from Costco.  Attempting to use off-the-shelf products now
>> does not help anybody realize any R&D leverage from your budget -
>> which is the entire point of the money spent on ISS.
>> The dollars are available from the hardware manufacturers.  Your
>> project would have high marketing value to them.  Leverage that value!
>Yes.  Maybe that is the best strategy to go with... and it would be in
>the best spirit of the ISS project.  Considering my 'spare' time and all
>I heard in several discussions this will be a likely

Well, I'm sure we would be interested in future news on this here,
especially if it involves FreeBSD.  Keep us apprised!

Ted Mittelstaedt                             
Author of:                           The FreeBSD Corporate Networker's Guide
Book website:                

