Date:      Mon, 08 Jul 2002 16:37:21 -0700
From:      Peter Wemm <peter@wemm.org>
To:        Julian Elischer <julian@elischer.org>
Cc:        John Nielsen <hackers@jnielsen.net>, hackers@FreeBSD.ORG
Subject:   Re: offtopic: low level format of IDE drive. 
Message-ID:  <20020708233721.DC2C33808@overcee.wemm.org>
In-Reply-To: <Pine.BSF.4.21.0207081422100.29644-100000@InterJet.elischer.org> 

Julian Elischer wrote:
> this is not a 'reformat'
> 
> what I want to do is an old-fashioned reformat/verify where the controller
> writes new track headers etc.

The thing is, just about all IDE drives larger than a few GB do 'track
writing' and have no fixed sectoring or sector positioning.  ie: each time
you write a single sector to a track, the drive does a read-modify-write of
*THE ENTIRE TRACK*.  This is why we have to have write caching turned on for
IDE drives to get decent performance.  Without it, the drive essentially
rewrites the entire track over and over and over again, because it can never
fill its write buffer enough to write one contiguous block that completely
replaces what was there before.  ie: each track is one giant physical sector
with multiple logical sectors inside it.
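A toy back-of-the-envelope model (pure arithmetic, no real disk; the 63
sectors-per-track figure is just an illustrative assumption) shows why the
write cache matters so much on a track-writing drive:

```shell
# Toy model: a track of 63 logical sectors stored as one physical unit.
# Without caching, every single-sector write forces a whole-track rewrite;
# with caching, the drive batches the sectors and writes the track once.
sectors_per_track=63
sector_bytes=512
track_bytes=$((sectors_per_track * sector_bytes))

# Uncached: each of the 63 sector writes rewrites the entire track.
uncached=$((sectors_per_track * track_bytes))

# Cached: the drive fills its buffer and writes the track exactly once.
cached=$track_bytes

echo "uncached: $uncached bytes moved; cached: $cached bytes moved"
```

Filling a single track this way moves 63x the data without the cache -- and
that is before counting the extra read half of every read-modify-write.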

The really annoying thing is that most newer SCSI drives do this too.

The sad thing is that this makes softdep almost completely useless, because
its basic assumption is that sectors that were not explicitly written to
will not be touched.  The problem is that this isn't the case, even with
write caching turned off.  Writing a single sector causes the drive to
completely rebuild the track and all the sectors on it... in a different
relative position to what was there before.  A power loss mid-write can
therefore leave an absolute mess in sectors *adjacent* to the ones soft
updates was carefully writing.  This means that the 'power off failsafe'
file system idea isn't possible with these drives.  The only thing that can
deal with this sort of failure mode is being willing to resort to 'newfs
and restore', or a log structured file system (can you say LFS?).

Get a UPS if you value the data. :-]

Back to the topic for a moment..   In theory, dd if=/dev/zero of=disk bs=64k
is as good as it gets for a low level format on these drives.  With write
caching turned on, you are causing every single bit on the disk to be
written, including the metadata.  And dd if=/dev/disk of=/dev/null is the
read verify.  Some drives can have write verify turned on (I know of certain
Maxtor models), where the drive reads back the data and rewrites the entire
track if necessary.
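Spelled out as a runnable pair (using a scratch file in place of the raw
device here, since pointing the zero pass at a real disk destroys everything
on it; on a FreeBSD box of that era the device would be something like
/dev/ad0):

```shell
# Scratch file standing in for the raw device.  Substitute the real disk
# device to do it for real -- the zero pass DESTROYS ALL DATA on it.
disk=$(mktemp)

# "Low level format": write zeros over every sector.  With write caching
# enabled the drive can batch the 64k chunks into whole-track writes.
# (Drop 'count=16' when targeting a real disk so dd runs to the end.)
dd if=/dev/zero of="$disk" bs=64k count=16

# Read verify: dd exits non-zero at the first unreadable sector.
dd if="$disk" of=/dev/null bs=64k && echo "read verify passed"
```

If the verify pass hits a bad sector, dd reports the I/O error and the
offset, which tells you where the drive failed to remap.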

Take the above with a grain of salt, I've never actually worked at a drive
manufacturer.  The only thing for sure is that all hard drives suck. :-)

Cheers,
-Peter
--
Peter Wemm - peter@wemm.org; peter@FreeBSD.org; peter@yahoo-inc.com
"All of this is for nothing if we don't go to the stars" - JMS/B5

