Date:      Mon, 26 Nov 2001 12:04:31 +0100
From:      Robert Suetterlin <robert@mpe.mpg.de>
To:        Ted Mittelstaedt <tedm@toybox.placo.com>
Cc:        freebsd-questions@freebsd.org
Subject:   storing 10TB in 494x435x248mm, with power of 200W (at 28VDC) (was: why are You asking here)
Message-ID:  <20011126120431.C1170@robert2.mpe-garching.mpg.de>
In-Reply-To: <000001c1744d$ca806340$1401a8c0@tedm.placo.com>; from tedm@toybox.placo.com on Fri, Nov 23, 2001 at 10:36:40AM -0800
References:  <20011123175912.B1170@robert2.mpe-garching.mpg.de> <000001c1744d$ca806340$1401a8c0@tedm.placo.com>

Hello Ted, 

thanks for discussing this problem so far; it really helps me make up
my mind to hear different opinions on this topic.

[...]
> >And we must have a solution
> >that can be serviced by astronauts / kosmonauts easily.
> >
> what hardware are they already using and how much data is it storing?
> Maybe someone can write a FreeBSD driver for it.
[...]
for our current experiment on the ISS we use a venerable Sun laptop that
runs the control software and records housekeeping data (about 1-5 MB
per month).  The real data is just standard video images recorded by
standard VCRs.  Neither is an option for the planned experiment.

[...]
> >> 3) What exactly is a HSM going to do for you?
> >HSM is useful if You have a hierarchy of storage with different
> >qualities.  
> That's what I said.
> 
> You indicated that you're doing archival only - i.e. "one quality" in your
> terms.  Is this not true?
[...]
Sorry, I seem to keep too much of my thinking to myself: the data rate of
260 MB per second, together with the size and power limits, seemed to rule
out real-time compression.  (We need lossless compression, and no
algorithm has been selected or designed yet.)  I concluded we would have to
record the 45-minute 260 MB/s data bursts to some kind of fast
intermediate storage (for example RAM or solid-state disks).  Then
we could spend some time (e.g. a day) transferring that data to
permanent storage, possibly reprocessing / compressing it offline.
Sometimes experimentalists would like to read some 'old' data back into
the machinery to do a little analysis online (via telescience) before a
shuttle takes the 'tapes' back to Earth.  In addition, the system should
record metadata and implement redundancy 'automatically'.
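For concreteness, the figures above can be sanity-checked with a few lines of arithmetic.  This is only a back-of-the-envelope sketch using decimal units (1 MB = 10^6 bytes) and the numbers already quoted in this thread; nothing else is assumed:

```python
# Back-of-the-envelope check of the burst and transfer figures.
# Assumptions: one 45-minute observation burst at 260 MB/s,
# drained to permanent storage within one day (86400 s).

BURST_RATE = 260e6          # bytes per second during a burst
BURST_SECONDS = 45 * 60     # 45-minute burst

burst_bytes = BURST_RATE * BURST_SECONDS
print(f"one burst: {burst_bytes / 1e9:.0f} GB")      # -> one burst: 702 GB

# sustained rate needed to move one burst to permanent storage in 24 h
drain_rate = burst_bytes / 86400
print(f"drain rate: {drain_rate / 1e6:.1f} MB/s")    # -> drain rate: 8.1 MB/s

# bursts needed to fill the 10 TB carried down per monthly shuttle
bursts_per_month = 10e12 / burst_bytes
print(f"bursts per 10 TB: {bursts_per_month:.1f}")   # -> bursts per 10 TB: 14.2
```

So each burst produces roughly 700 GB, and draining it within a day needs only a modest sustained rate; the hard part is the 260 MB/s ingest, not the offline transfer.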

To me there seemed to be two possible solutions (apart from a
completely handmade thing): 1) a kind of batch-mode system running
{record, reprocess, store} and {reread, reprocess, manipulate} jobs,
which would be similar to a VCR and would require implementing metadata
and redundancy 'by hand'; or 2) some kind of storage manager that
could handle these tasks and manage resources 'automatically'.
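As a toy illustration of option (1), a batch chain might look like the sketch below.  Every name in it is my own invention for the example, not anything specified for the actual experiment; the point is only that metadata recording has to be wired in 'by hand' at each step:

```python
# Illustrative sketch of option (1): fixed batch job chains, with a
# hand-rolled metadata trail appended after every step.
# All function and field names are invented for this example.

def run_chain(steps, ctx, log):
    for step in steps:
        step(ctx)
        log.append((step.__name__, dict(ctx)))  # 'by hand' metadata trail
    return ctx

def record(ctx):    ctx["raw"] = "45min-burst"
def reprocess(ctx): ctx["data"] = "compressed:" + ctx.pop("raw")
def store(ctx):     ctx["archived"] = ctx.pop("data")

log = []
result = run_chain([record, reprocess, store], {}, log)
print(result)                      # {'archived': 'compressed:45min-burst'}
print([name for name, _ in log])   # ['record', 'reprocess', 'store']
```

A storage manager (option 2) would, in effect, own `run_chain`, the metadata log, and the redundancy policy itself, instead of leaving them to the experiment software.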

[...]
> >150TB of data per year.  If there is a
> >shuttle mission each month, they will have to transport 10TB of data
> >each time.
> So then have someone design an optical solution for you.
[...]
I have monitored the 'optical solutions' market for three years now, and
I do not trust them to deliver better data density than magnetic media
within the next ten years... but I would be delighted to be proven
wrong.
[...]
> >I also would like to use the
> >same general technique today that we will launch in five years (and then
> >use for up to ten more).  So I would like to use standard hardware and
> >software solutions.  These will most likely adapt to the future
> >automatically.
> Any standard solution will be hopelessly obsolete 10 years from now and
> getting
> parts for it will cost astronomically.
[...]
You are completely right --- for fixed hardware.  (I see that my
paragraph hints at such a thing.)  But I was thinking of a combination
of software and hardware where both could be upgraded independently
while keeping standard interfaces available, and I called that a
'standard solution'.  I mean something like Intel CPUs, the PC
architecture, and *BSD.  All have changed quite a lot over the last ten
years, yet software from ten years back that relies on 'standard'
interfaces (files, ports, pipes, etc.) still runs on today's most
modern hardware and the newest *BSD version.  And the prices would even
have dropped.
[...]
> > An HSM would be an important building block in my
> >concept of a solution to the data storage problem, as it relieves me of
> >all the problems connected with hierarchical storage management.
> No, it relieves you of having to redesign a storage solution sometime in the
> future when you START needing HSM.
[...]
That is exactly what I was looking for: a solution that can be deployed
now, that can already be used and experimented with while the
experiment is built and improved, and whose components will evolve
under market pressure over the next five years.
[...]
> I think that you're grossly underestimating the technical problems of handling
> 10TB of data a month in a 494x435x248mm space, with power of 200W (at 28VDC).
[...]
> You could end up with a need for 50TB of data
[...]
> You have an application here that's crying out for a custom-built
> high-capacity data storage device that takes small space and little
> power and you're screwing around looking for an off-the-shelf HSM array
> that will do it.  Such an animal doesn't exist.  You need to take
> that R&D budget you have and put it towards R&D with one of the major
> technical players - perhaps IBM - that can supplement your budget.  Of
> course you're going to end up with a custom-built solution but that's
> the entire point of the exercise as then such a solution can be
> translated into a real product.
> 
> Instead of attempting to get some sort of standard now, what you need
> to do is trust in the capitalist system to use whatever work your
> R&D budget creates and make commercial products out of it.  Then 10
> years from now such high-capacity arrays will cost $100 and be
> available from Costco.  Attempting to use off-the-shelf products now
> does not help anybody realize any R&D leverage from your budget -
> which is the entire point of the money spent on ISS.
> 
> The dollars are available from the hardware manufacturers.  Your
> project would have high marketing value to them.  Leverage that value!
Yes.  Maybe that is the best strategy to go with... and it would be in
the best spirit of the ISS project.  Considering my 'spare' time and
everything I have heard in several discussions, this is the likely
solution.

Regards, Robert.

-- 
Robert Suetterlin (robert@mpe.mpg.de)
phone: (+49)89 / 30000-3546   fax: (+49)89 / 30000-3950

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-questions" in the body of the message



