[23197] in Athena Bugs


Re: Zip drives are horribly broken

daemon@ATHENA.MIT.EDU (Garry Zacheiss)
Thu Jul 24 04:39:06 2003

Date: Thu, 24 Jul 2003 04:39:04 -0400 (EDT)
Message-Id: <200307240839.h6O8d4Gk029497@bart-savagewood.mit.edu>
From: Garry Zacheiss <zacheiss@MIT.EDU>
To: John Hawkinson <jhawk@MIT.EDU>
CC: hotline@MIT.EDU, bugs@MIT.EDU
In-reply-to: "[23170] in Athena Bugs"

>> I conclude that essentially no zip drives in the W20 cluster work properly
>> from a user standpoint -- "SUCK."

   I went and investigated this evening.  Findings below, organized by
machine and drive type.

Suns w/USB zip drives - Currently not working, but for known reasons.
When 9.2.N+1 comes out with an entry for rpc.smserverd in
/etc/inet/inetd.conf, these will start working again, and will automount
on /rmdisk like they're supposed to.

Suns w/SCSI zip drives - Same as above.
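
For reference, the inetd.conf entry in question should look something
like the stock Solaris one below (quoted from memory; the exact line
will ship with the release):

    # rpc.smserverd: removable media server, RPC program 100155
    100155/1 tli rpc/ticotsord wait root /usr/lib/smedia/rpc.smserverd rpc.smserverd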

So the Suns are all not working, but we understand why; all of the
corrective action that needs to happen on the development side already
has, and we're awaiting deployment.

Linux is an altogether more depressing situation.

Linux w/USB zip drives - I didn't even know we had this combination, and
I do know we've not put any effort into making it work; the athena-ws
init script only deals with the parallel port case.  Something (RH9
kudzu, I assume) is putting a line in /etc/fstab to mount the drive on
/mnt/zip250.0, which would be useful if it included the "user" option;
without it, you need to be root to mount the disk, as you observed.
This is correctable with minimal development effort.
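
For illustration, a usable fstab line would look something like the
following (the device name is a guess; zip media conventionally lives
on partition 4 of whatever SCSI disk the drive shows up as):

    /dev/sda4  /mnt/zip250.0  auto  noauto,user  0 0

With "user" in there, an ordinary user can just run
"mount /mnt/zip250.0" and have it work.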

Linux w/parallel port zip drives - This is the strangest case of the
four; the drives look like they're working, and the code in the
athena-ws init script to support them is clearly getting run, but any
attempt to use the drive produces errors about being unable to read the
partition table, and dmesg shows the kernel module logging something
about a bus reset.  This happened on 3 distinct machines; I'm not sure
whether it's indicative of mass hardware failure or some subtler
software problem.
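
For anyone trying to reproduce this, the failure sequence is roughly
the following (module and device names assumed; the older drives use
ppa, the ZIP 250s use imm, and the drive appears as the first SCSI disk
on machines with no other SCSI hardware):

    modprobe ppa          # SCSI-over-parallel driver for the Zip drive
    dmesg | tail          # bus reset messages from the module show up here
    fdisk -l /dev/sda     # fails complaining it can't read the partition table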

In addition to the software problems, there are lots of hardware
problems as well, primarily with the older (SCSI and parallel port)
drives.  Missing eject buttons and missing power supplies were the two
most common ones I ran across.

To be honest, I'm not sure having the drives around is doing anyone a
favor at this point, but whether we remove them is a question for owls.

Garry




