Date:      Sat, 13 Jan 2001 20:24:32 -0800 (PST)
From:      Matthew Jacob <mjacob@feral.com>
To:        Eric Lee Green <eric@estinc.com>
Cc:        freebsd-scsi@FreeBSD.ORG
Subject:   Re: Why filemarks in sardpos?
Message-ID:  <Pine.BSF.4.21.0101132006280.14728-100000@beppo.feral.com>
In-Reply-To: <Pine.LNX.4.21.0101131547160.6204-100000@h23.estsatnet>


I've been giving this some thought and rereading the spec some more. I'm
inclined to think that the difference between the first and last logical block
values- and the likelihood of a drive vendor pooching one of them- has to do
with whether one is using LOGICAL or HARDWARE block positioning.

The paragraphs I quoted earlier, to recap, are:

+The first block location field indicates the block address associated with
+the current logical position. The value shall indicate the block address of
+the next data block to be transferred between the initiator and the target
+if a READ or WRITE command is issued.
+
+The last block location field indicates the block address (see 10.1.6)
+associated with the next block to be transferred from the buffer to the
+medium. The value shall indicate the block address of the next data block to
+be transferred between the buffer and the medium.  If the buffer does not
+contain a whole block of data or is empty, the value reported for the last
+block location shall be equal to the value reported for the first block
+location.
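
(For reference, here's roughly where those two fields live in the short-form
READ POSITION data. The offsets are from my reading of the spec, so treat
this as a sketch and check it against your copy:)

#include <stdint.h>

/*
 * Sketch: field offsets in the short-form READ POSITION data, as I
 * read the spec (byte 0 flags, byte 1 partition, bytes 4-7 first
 * block location, bytes 8-11 last block location, bytes 13-15 number
 * of blocks in buffer).  Double-check before relying on it.
 */
struct rdpos {
        uint32_t first_block;   /* next block initiator <-> target */
        uint32_t last_block;    /* next block buffer -> medium */
        uint32_t blks_in_buf;   /* blocks not yet on the medium */
        int      bpu;           /* Block Position Unknown bit */
};

static uint32_t
get_be32(const uint8_t *p)
{
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
            ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}

static void
parse_read_position(const uint8_t buf[20], struct rdpos *rp)
{
        rp->bpu = (buf[0] & 0x04) != 0;
        rp->first_block = get_be32(&buf[4]);
        rp->last_block = get_be32(&buf[8]);
        rp->blks_in_buf = ((uint32_t)buf[13] << 16) |
            ((uint32_t)buf[14] << 8) | (uint32_t)buf[15];
}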



A diagram of the tape model must be something like:

                       Tape Buffer
                    +----------------+
Initiator --->>-----| Z .. E D C B A |-->>---> Tape
                    +----------------+
                      ^            ^
          First Block Value    Last Block Value


So, all other things being equal, "First Block Value" should be good enough if
one assumes that blocks Z..A get flushed to tape.
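
(To put numbers on "good enough"- the names below are mine, purely for
illustration- with LOGICAL positioning I'd expect the two fields to differ by
exactly the number of blocks still sitting in the buffer, so a flush just
drives that difference to zero:)

#include <stdint.h>

/*
 * Illustration only, my naming.  With LOGICAL block positioning I'd
 * expect
 *
 *      first_block == last_block + blks_in_buf
 *
 * to hold, and after a successful flush blks_in_buf goes to 0 and the
 * two values become equal- so noting first_block before the flush is
 * the same as noting last_block after it, provided Z..A really hit tape.
 */
static int
position_is_consistent(uint32_t first_block, uint32_t last_block,
    uint32_t blks_in_buf, int bpu)
{
        if (bpu)
                return (0);     /* drive admits it doesn't know where it is */
        return (first_block == last_block + blks_in_buf);
}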

The thing I would expect to break on real drives is HARDWARE block positions...
That is because the actual HARDWARE block position might not in fact be known
until the data is actually on the media. One would *hope* that such a device
would flush before reporting hardware position anyway, but, well...

LOGICAL block position is different- I would expect block #Z to be 26 more
than block #A above, and that difference shouldn't change. Again, this assumes
Z..A make it to tape (if they don't, that's a catastrophic error).

I notice you're using MTIOCRSPOS- I think I would tend to agree that this
wouldn't require a flush. Hardware position should probably stay where it is
now.
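
(Here's roughly what I picture the application side doing- a sketch only, and
I'm assuming the MTIOCRDSPOS/MTIOCRDHPOS spellings from mtio.h, which may not
be exactly what you're calling:)

#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/mtio.h>

#include <err.h>
#include <fcntl.h>
#include <stdio.h>

/*
 * Sketch: read back the logical block position from a no-rewind tape
 * device.  MTIOCRDSPOS is the logical flavor; MTIOCRDHPOS would be the
 * hardware one (which is where I'd expect an implicit flush to matter).
 */
int
main(void)
{
        u_int32_t pos;
        int fd;

        fd = open("/dev/nsa0", O_RDWR);
        if (fd < 0)
                err(1, "open /dev/nsa0");

        if (ioctl(fd, MTIOCRDSPOS, &pos) < 0)
                err(1, "MTIOCRDSPOS");

        printf("logical block position: %u\n", (unsigned)pos);
        return (0);
}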

Anyone else out there have an opinion on this?

-matt

p.s.: Not all drives, btw, have a horrible speed loss with the flush
operation. DAT drives seem fine. DLTs are horribly affected.

p.p.s:

[ yes, I'll ignore your barbarous manners. the content *is* important- not
because BRU or ESTinc (ESTINK? New Unix errno?) is all that important- but the
problem *has* been seen before... And, oh, btw, NetBackup ended up owned by
Veritas, not Legato, I believe ]



