[355] in bugtraq
Re: Full Disclosure works, here's proof:
daemon@ATHENA.MIT.EDU (smb@research.att.com)
Mon Dec 5 09:02:32 1994
From: smb@research.att.com
To: Karl Strickland <karl@bagpuss.demon.co.uk>
Cc: Bela Lubkin <belal@sco.COM>, bugtraq@fc.net
Date: Mon, 05 Dec 94 07:26:06 EST
> > If the reader believes that the holes originally exist as stated and
> > that SCO has made a good faith effort to fix them, it is sensible to
> > install the fixes even if it eventually turns out that a narrower
> > hole remains.
> What if it turns out that they open an even bigger hole?  I'm thinking
> of binmail.
Yup. Let me add some quotes from Fred Brooks' classic ``The Mythical Man-
Month''. (Disclaimer: I studied software engineering lo these many years
ago from Brooks. And he's had a far greater effect on my thinking than
anyone else.)
    Betty Campbell, of MIT's Laboratory for Nuclear Science, points
    out an interesting cycle in the life of a particular release of a
    program....  Initially, old bugs found and solved in previous
    releases tend to reappear in a new release.  New functions of the
    new release turn out to have defects.  These things get shaken
    out, and all goes well for several months.  Then the bug rate
    begins to climb again...

    The fundamental problem with program maintenance is that fixing a
    defect has a substantial (20-50 percent) chance of introducing
    another.  So the whole process is two steps forward and one step
    back.
This was published about 20 years ago, btw.
Without expressing any opinion one way or another on full disclosure,
I will point out that it's quite rational to refrain from installing
patches if you don't think you're susceptible to the problem. You
never know when you'll actually make things worse.
Let me give one relevant case in point. I was one of the two people
most responsible for the general strategy of the System V pty allocator.
Our goal was to eliminate both the complexity and the security holes
present in the 4.3bsd version, such as the race conditions on setting
the ownership and modes of the slave device. In my opinion, we succeeded.
But despite all our design efforts, and all our attention to subtle
timing flaws, we've all just heard that there was a bug in one particular
instantiation. Furthermore, that bug -- though unquestionably involving
incorrect code on all releases -- only showed up as a security problem
when there was an interaction with totally unrelated operating system
design decisions.
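For concreteness, here is roughly what that allocation sequence looks
like from an application's point of view.  This is a minimal sketch of
the SVR4 clone-device interface, not the internals of any particular
vendor's release; grantpt()/unlockpt()/ptsname() are the documented
calls, the error handling is mine, and the _XOPEN_SOURCE line is only
needed to compile it on modern systems.

    #define _XOPEN_SOURCE 600   /* for grantpt()/unlockpt()/ptsname() */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int mfd, sfd;

        /* Open the clone master; the kernel hands back an unused pair. */
        mfd = open("/dev/ptmx", O_RDWR);
        if (mfd < 0) { perror("/dev/ptmx"); return 1; }

        /* grantpt() sets the slave's owner and mode before any
           unprivileged open is possible, so there is no window where
           the slave sits with stale ownership -- which is exactly the
           4.3bsd chown()/chmod() race mentioned above. */
        if (grantpt(mfd) < 0)  { perror("grantpt");  return 1; }

        /* unlockpt() then makes the slave openable at all. */
        if (unlockpt(mfd) < 0) { perror("unlockpt"); return 1; }

        printf("slave is %s\n", ptsname(mfd));  /* e.g. /dev/pts/3 */

        sfd = open(ptsname(mfd), O_RDWR);       /* now safe to open */
        if (sfd < 0) { perror("slave"); return 1; }

        close(sfd);
        close(mfd);
        return 0;
    }

The point of the design is that the ownership change happens before
the slave is ever reachable, rather than being patched up afterwards.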
Getting code right is hard. Getting code right in a complex system is
*very* hard. While one can, I claim, do better for security stuff than
in the general case, I do not think it is humanly possible to build
a large system with no security flaws. (And yes, I put firewalls in
that category -- which is why good firewalls are as small and simple
as possible.)
Finally, if you don't believe me, go and read (or reread) Mythical
Man-Month.
--Steve Bellovin