[Tfug] A/V drives

Bexley Hall bexley401 at yahoo.com
Thu Dec 25 03:38:19 MST 2008


Hi, Harry,

--- On Thu, 12/25/08, Harry McGregor <micros at osef.org> wrote:

> >> Secondly, A/V rated drives did not throw out thermal
> >> recalibration, they could delay it slightly, hopefully until idle.
> >
> > Exactly.  Note my reference to *deferring* T-cal.
>
> The length of the deferral matters; in many of the older A/V
> drives, it was on the order of ms, or maybe a few seconds at most.

So, to my mind, that doesn't buy you *squat*!  (or, *does* it?)
I guess it would depend on your available bandwidth and the
depth of cache (either in the OS *or* on the drive).

Regardless, it's a kludge.
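
To put rough numbers on it (purely illustrative figures -- pick
your own stream rate and stall length, these aren't specs for any
particular drive):

    # Back-of-the-envelope: how much buffering rides out a deferred T-cal?
    # Both numbers below are assumptions, for illustration only.
    stream_rate_MBps = 3.5      # e.g. a DV-class video stream, ~3.5 MB/s
    tcal_pause_s     = 0.5      # hypothetical worst-case recalibration stall

    buffer_needed_MB = stream_rate_MBps * tcal_pause_s
    print("Need ~%.2f MB of cache to cover a %.1f s stall"
          % (buffer_needed_MB, tcal_pause_s))
    # -> Need ~1.75 MB of cache to cover a 0.5 s stall

I.e., a sub-second deferral only buys you something if there's at
least that much buffer somewhere between the application and the
platters.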

> > But, this presupposes there is a time when you *know* you
> > can safely do this without compromising performance!
> > So, it seems like A/V drives just kicked the can down the
> > road in the *hope* that there *might* be a more opportune
> > time to do this, "later".
>
> Yep
>
> > I.e., this wasn't a "fix".  Rather, it is akin to buying a faster
> > computer to avoid "making coasters" with early CD writers (buffer
> > underrun).  There's no *guarantee* that the computer still
> > won't make coasters -- since it depends on what else the machine
> > is doing at the time, how much swapping, etc.
> 
> Or the way I avoided coasters with the $1K 2x SCSI Sony CD
> writer I was working with... bag of ice on top of the drive.

Hmmm... I'll assume that was a *different* problem than
buffer underrun?  I.e., it seemed most CD writer problems
(early on) had to do with the host being unable to supply data fast
enough (and continuously) to keep up with the laser...
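
(Rough coaster math, purely for illustration -- the buffer size is
a made-up example, not any particular drive's spec:

    # How long can the host stall before an early burner underruns?
    write_rate_KBps = 300       # ~2x CD speed (1x = 150 KB/s)
    drive_buffer_KB = 1024      # hypothetical 1 MB on-drive buffer

    slack_s = drive_buffer_KB / float(write_rate_KBps)
    print("Host can stall for at most ~%.1f s before the burn dies" % slack_s)
    # -> ~3.4 s

i.e., a few seconds of host hiccup -- a swap storm, a busy SCSI
bus -- and the session is trash.)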

> > OTOH, adding track-size buffers and supporting track-at-a-time
> > burning was a *real* solution to the coaster problem.
>
> yep
>
> > The (original) A/V drive approach is similar to throwing
> > MIPS at a problem to try to make a non-RT system behave
> > deterministically.
>
> You're still never going to get true RT performance out of
> it...

Yes.  But most folks are happy with the "illusion"  :>

> > Or, doing away with dynamic object creation to try to make
> > GC-based languages behave deterministically.
> >
> > These are just work-arounds, not solutions.
> >
> > My concern is there are some technologies (not related to
> > A/V drives at all) that will probably be commercially viable
> > in 2012 and I need to make an educated gamble as to which
> > "problems" in those technologies will be surmounted (with time)
> > vs. those that won't.  And, decide if I want to design with
> > support for them in mind or sidestep the issue.
>
> Without knowing more, it's hard to say on that :)

It's hard to say *with* knowing more!  :-/  And, given
the state of The Economy, all bets are off...

> > This is relevant as many technologies are rushed to market
> > "crippled" and later refined ("fixed"?).  Witness the CD/DVD
> > writer issue, wear-leveling in MNOS devices, *new* A/V
> > drive technology, etc.  I.e., if the upcoming technologies
> > fall in with this crowd, their future looks promising...
> > if not, <shrug>
>
> Ok
> 
> >> This has nothing to do with a 24/7 i/o stream, or broadcast
> >> level A/V or anything to that effect.
> >>
> >> It was more of a marketing ploy to get IDE drives in where
> >> enterprise class SCSI was the only option.
> >
> > Understood.  As with the technologies I mentioned (above),
> > "get your foot in the door" (even if the technology isn't
> > "quite right", yet) and then fix it once/if it gains
> > traction in the marketplace.
> >
> > Moral:  if the technology *can* be fixed, you (I) just
> > have to make a wager on how *likely* it is to take hold...
>
> The current fix for the A/V t-cal issues is a larger cache.

I think there have also been changes in the way data is
recorded on the medium so that any recalibration can be done
"adjacent" to the data tracks (?), without incurring the cost of
a full-swing seek, etc.

> With the price of memory, can you really justify not having a
> large cache?

Agreed.  While adding memory fixes *many* problems, it doesn't
solve *all*.  E.g., adding memory won't make something run cooler  :>

> Current drives are shipping with 32MB on board; with the
> cost of memory through the floor, I can see drive makers
> jumping to 64MB or even 128MB.

Though all this extra smarts in the drive opens the door to
other types of failures.  E.g., what happens if data cached
*in* the drive never gets written onto the media (i.e.,
asynchronous vs. synchronous writes)?
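
(For anything that really must hit the platters, the usual
belt-and-suspenders move is an explicit sync -- a minimal sketch,
with a made-up filename:

    import os

    # Ask the kernel to push the data all the way to the device.
    # Caveat: fsync() only covers the *OS* cache; if the drive ACKs
    # writes out of its own volatile cache and loses power before
    # destaging, the data can still vanish unless the drive honors
    # cache-flush commands or write caching is disabled.
    fd = os.open("journal.dat", os.O_WRONLY | os.O_CREAT, 0o644)
    os.write(fd, b"must-not-lose record\n")
    os.fsync(fd)
    os.close(fd)

...and even that only helps if the drive isn't lying about what
it has committed.)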

> The IBM storage array I spoke of has 120GB of memory, of
> which quite a bit is used as cache.

Is this to improve overall performance (read-ahead, write-behind)
or to support special QoS needs?  (e.g., multimedia, etc.)

Or, is it just a good *space heater*???!  :>   (I know my office
is 4 degrees warmer than the rest of the house :-/ )

Cherry Mristmas,
--don


      



