RISKS DIGEST 18.59

From: RISKS List Owner <risko@csl.sri.com>
Date: Thu, 7 Nov 96 17:12:11 PST
To: risks@MIT.EDU

RISKS-LIST: Risks-Forum Digest  Thursday 7 November 1996  Volume 18 : Issue 59

   FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

***** See last item for further information, disclaimers, caveats, etc. *****

  Contents:
Intel product reaches directly into networked workstations (Jeff Mantei)
Big Internet is Watching You (Martin Minow)
Careful AeroPerusal (Peter Ladkin)
Risks of using keyless coinlockers in Vienna (Stefan Sachs)
Re: Fault-induced crypto attacks ... (Brian Randell)
Why cryptography is harder than it looks (Bruce Schneier) [long]
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Thu, 7 Nov 1996 13:35:48 -0800
From: mantei@bbs.ug.eds.com (Jeff Mantei (EDS OO/AI Svcs, Troy MI))
Subject: Intel product reaches directly into networked workstations

I don't think this product or class of products has been mentioned in RISKS
before, but I think their potential for abuse is self-evident and should be
more widely known.  I quote the following from:

http://www.intel.com/comm-net/sns/showcase/netmanag/ld_virus/whites/wp-etem.htm

"Intel is working to make the network's help desk more than just an
 answering service. With LANDesk Manager's remote access facility,
 network managers can take over a node and perform most of the tasks
 that typically would require a visit to the problem workstation. 

"Under Novell's NMS, a network manager clicks on the node's network
 map object and launches LANDesk Manager's remote access tool. The
 manager can take over the user's PC and directly control the user's PC
 keyboard and mouse. The network manager can also access utilities
 and applications remotely, permitting checks of CONFIG.SYS,
 AUTOEXEC.BAT and WIN.INI files or anything else on the machine.
 This eliminates the laborious process of asking an end user to describe
 cryptic error messages and codes."

------------------------------

Date: Thu, 7 Nov 1996 07:29:32 -0800
From: Martin Minow <minow@apple.com>
Subject: Big Internet is Watching You

Over the past month or so, a mailing list I subscribe to has endured a flame
war with a disgruntled (ex-)subscriber. A few days ago, an anonymous
participant provided what I'll call an Internet Biography of the subscriber.

The anonymous message began with "I had some free time this morning, and
just for fun, thought I'd  create a brief Net profile of our friend ..."
Among the discoveries are the following:

-- Home address and phone number from http://www.yahoo.com
   (Four11 people search)
-- Birthday from http://www.boutell.com/birthday.cgi/[Month]/[Day]
-- Company name and internet domain ownership from InterNIC.
-- An uncomplimentary "who is ..." from a private academic site.
-- A Usenet author profile from http://www.dejanews.com, showing over 500
   messages posted to about 50 newsgroups over the last 18 months.
-- An uncomplimentary note from an academic, private "legends" homepage.
-- Several professional contributions to FAQ's.

Over ten years ago, when computer bulletin boards appeared at my former
employer, I formulated Minow's Law: "Never write anything you don't want
to see on your resume."  I seem to have been more prophetic than I expected.

Martin Minow, minow@apple.com

------------------------------

Date: Fri, 8 Nov 1996 01:52:12 +0100
From: Peter Ladkin <ladkin@TechFak.Uni-Bielefeld.DE>
Subject: Careful AeroPerusal (Ladkin RISKS-18.51, PGN RISKS-18.57)

There are a lot of rumors about the latest AeroPeru news.  CNN's reports on
the latest AeroPeru findings have been inaccurate and incomplete. PGN may
have helped to spread another one in RISKS-18.57.

The facts, from a source at the NTSB as well as from information about the
B757 static system (obtainable from my Compendium, see below), are these:

a) *Masking tape*, not duct tape or `Remove Before Flight' covers,
   was covering the *left-side static ports* on the aircraft [NTSB]; 
   (there's no way to attach covers: the ports are flush with the
    fuselage [B757 P/S system diagram]);

b) Static ports to all three independent pitot-static systems are on both
   the left side and the right side of the fuselage, including those for the
   electro-mechanical backup: both static ports and pitot in each system are
   interconnected by an open tube [B757 pitot-static system diagram];

c) the right-side static ports have not been recovered; it is therefore
   not known whether masking tape was also covering these [NTSB];

d) blockage of all the left static ports would cause some degradation
   of *all* the air data in both EFIS-displayed P/S systems plus the backup;
   blockage of the right-side static ports as well would cause worse
   degradation [general aero and system knowledge]; this is thus a *common
   failure mode* of all three independent P/S systems: both primaries
   and backup (a numeric sketch of the effect appears after this list);

e) the Peruvian Transport Ministry said that this obstruction of the 
   sensors "could explain the erroneous and confusing altitude and 
   speed information received by the pilots after takeoff" [NTSB source,
   quoting an official statement]. This contrasts with the Minister's
   reported statement on October 2, which the press took to cite
   computer problems as the cause.

f) Putting masking tape on the ports when cleaning the aircraft is
   a normal maintenance procedure [NTSB]; however, leaving it on is
   certainly not! I don't know whether, after such a procedure, the
   aircraft must be explicitly `signed off' after inspection by a
   qualified inspector, who would then make a `returned to service'
   entry in the maintenance logs. This is so for most procedures which
   render an aircraft temporarily unairworthy (as putting tape on the
   static ports does). This question remains to be answered here, and
   I'm sure there are many readers who could do so;

g) A further question, posed by Jim Wolper, is why the air crew
   did not notice the tape on static ports on the pre-flight inspection.
   It was dark, but nonetheless on most airplanes visually checking the
   static ports is an explicit item on the pre-flight inspection check list.
   The B757 body is relatively high off the ground, but nevertheless I 
   should have thought that tape on the ports would be clearly visible.

h) The CVR and DFDR have been recovered, examined in the NTSB Laboratories,
   and the data returned to Peruvian colleagues [NTSB].
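
To illustrate item (d): here is a minimal Python sketch, using textbook
standard-atmosphere formulas rather than anything from an actual air-data
computer, of how a static reading trapped at ground-level pressure corrupts
both indicated altitude and indicated airspeed as an aircraft climbs.  The
constant true airspeed and the sea-level density in the dynamic-pressure
term are simplifying assumptions for the demonstration only.

    # Illustrative sketch only: ISA approximations, not air-data computer
    # logic.  A static port taped over at ground level corrupts BOTH the
    # indicated altitude and the indicated airspeed.
    import math

    P0, T0, LAPSE, RHO0 = 101325.0, 288.15, 0.0065, 1.225  # sea-level ISA
    EXPN = 5.2559                                           # g*M/(R*LAPSE)

    def static_pressure(alt_m):
        """ISA static pressure (Pa) at altitude alt_m (metres)."""
        return P0 * (1.0 - LAPSE * alt_m / T0) ** EXPN

    def indicated_altitude(p_static):
        """Altitude (m) an altimeter infers from sensed static pressure."""
        return (T0 / LAPSE) * (1.0 - (p_static / P0) ** (1.0 / EXPN))

    def indicated_airspeed(p_total, p_static):
        """Incompressible approximation: speed (m/s) from dynamic pressure."""
        q = max(p_total - p_static, 0.0)   # blocked static can drive q <= 0
        return math.sqrt(2.0 * q / RHO0)

    v_true = 120.0                         # m/s, held constant for the demo
    p_trapped = static_pressure(0.0)       # tape applied at ground level

    for alt in (0.0, 250.0, 500.0, 1000.0):
        # The pitot (total) port still senses true static plus dynamic pressure.
        p_total = static_pressure(alt) + 0.5 * RHO0 * v_true ** 2
        print("true alt %5.0f m -> indicated alt %5.0f m, "
              "indicated speed %5.1f m/s (true %5.1f)"
              % (alt, indicated_altitude(p_trapped),
                 indicated_airspeed(p_total, p_trapped), v_true))

The altimeter stays frozen near field elevation while the indicated
airspeed decays toward zero as the aircraft climbs: exactly the sort of
erroneous and confusing altitude and speed information described in
item (e).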

In RISKS-18.51, I expressed extreme scepticism that computer failure could
be the sole cause of any B757 accident (except for one possibility which has
never happened to any aircraft). It should now be clear that the
recently-discovered failure mode under discussion is (a) not
computer-related, and (b) deemed sufficient by itself to cause the known
effects and history of the flight. This does not of course rule out other
simultaneous failure modes that are computer-related. We still await the
release of the CVR and DFDR data.

More information about the B757 systems and about how a static-system
blockage would affect the air data, as well as a history of the rather
misleading statements and press reports about the Aeroperu accident, may be
found (from Friday 8 November 1996, and so dated) in the section on the
AeroPeru accident in `Computer-Related Incidents and Accidents to Commercial
Airplanes', at http://www.techfak.uni-bielefeld.de/~ladkin/.  Until 8
November, information on the B757 pitot-static system may be found in the
BirgenAir section.

Peter Ladkin

------------------------------

Date: Thu, 31 Oct 1996 22:32:22 +0100
From: Stefan Sachs <ssachs@acm.org>
Subject: Risks of using keyless coinlockers in Vienna

On my last trip to Vienna, I placed my baggage in a very advanced coinlocker
in an urban train station. The coinlocker uses a magnetic card instead of a
conventional key, and the user is guided by an LCD screen on an operating
panel serving six compartments (the panel also has a numeric keypad, which
is not used for normal operation).  I received my card and, since such cards
are quite common in car-parking facilities, left with confidence. On my way
back, with only twenty minutes left before my train to the airport, I
followed the instructions on the screen and fed the card, correctly
positioned, into the slot. Nothing happened, and the screen continued to
show the instruction to feed the card into the slot to release the lock.
When I asked at the ticket counter for help, the attendant was in no way
astonished and explained that this happens because of children playing
around with the keypad. A service technician was called; he used the keypad
to release the compartment lock, and then started a debugging session,
collecting several cards from the machine.  When I complained that I needed
my luggage, I was told that he had already made an `exception' by handing me
my suitcase without checking my identity, and that losing my card was my
problem. Considering the need to reach my plane and the fact that I couldn't
prove I had correctly inserted my card, I took my baggage and left.

The risks I see are these: If such a mechanism fails, it should in any case
return the keycard it didn't accept. Since the keycard is not further
protected by a PIN, retaining it does nothing to prevent abuse. Since the
card is the only receipt, it is in the best interest of both the user and
the owner of the coinlocker that it always be available.  Leaving a keypad
that is obviously required only for servicing purposes open to the public is
a completely unnecessary risk; sooner or later someone will succeed in
opening a locker using the keypad.  It is absolutely irresponsible to
continue to operate a system in which malfunction is so common (during the
short time I had to wait for the technician to open the locker, two
passers-by told me that they had experienced the same problem before).

I can only recommend avoiding a coinlocker with such a setup under any
circumstances.

Dr. Stefan Sachs, Ringreiterweg 20, 23558 Luebeck, Germany  +49-451-8714936
   ssachs@acm.org     Dalbacka 30, 66600 Bengtsfors Sweden  +46-531-26069

------------------------------

Date: Wed, 6 Nov 1996 18:00:28 +0000
From: Brian.Randell@newcastle.ac.uk (Brian Randell)
Subject: Re: Fault-induced crypto attacks ... (Kocher, RISKS-18.57)

A different sort of fault, perhaps, but Tony Sale's lecture here a few weeks
ago revealed that Bletchley Park's initial breaking of the Lorenz
teleprinter (a.k.a. "Fish") ciphers in the early years of WW2, which led
subsequently to the building of the Colossus computers, was entirely due to
*one* fault on the part of one German teleprinter operator. They found that
he had resent one lengthy message, but by re-keying it (somewhat
inaccurately) rather than using the punched teleprinter tape. From this one
pair of messages they managed to discover the full detailed logical
operation of the cipher machine unseen, and create a means of breaking the
messages that were being sent using it to and from the German High Command.
As Tony said, for the rest of the war, the cryptanalysts prayed that no
over-eager Allied soldier captured a Fish machine!

Brian

PS. Years ago, after a lecture here by Donald Davies on DES, and emboldened
merely by my reading of David Kahn and the like, I brought a
typically-academic discussion of its security to a screeching halt by
suggesting that perhaps sometime in the future I would be the proud
possessor of a DES-based cipher machine -- which (like the Enigma cipher
machine that I already own) was historically famous for the importance of
the messages that machines like it had failed to protect.  :-)

Dept. of Computing Science, University of Newcastle, Newcastle upon Tyne,
NE1 7RU, UK  Brian.Randell@newcastle.ac.uk   +44 191 222 7923

  [And of course the Brits also invented E-fish-ient Chips.  PGN]

------------------------------

Date: Wed, 6 Nov 1996 16:45:52 -0500
From: Bruce Schneier <schneier@counterpane.com>
Subject: Why cryptography is harder than it looks

From e-mail to cellular communications, from secure Web access to digital
cash, cryptography is an essential part of today's information systems.
Cryptography helps provide accountability, fairness, accuracy, and
confidentiality.  It can prevent fraud in electronic commerce and assure the
validity of financial transactions.  Used properly, it protects your
anonymity and can prove your identity.  It can keep vandals from altering your
Web page and prevent industrial competitors from reading your confidential
documents.  And in the future, as commerce and communications continue to
move to computer networks, cryptography will become more and more vital.

But the cryptography now on the market doesn't provide the level of security
it advertises.  Most systems are designed and implemented not by
cryptographers, but by engineers who think cryptography is like any other
computer technology.  It's not.  You can't make systems secure by tacking on
cryptography as an afterthought.  You have to know what you are doing every
step of the way, from conception through installation.

Billions of dollars are spent on computer security, and most of it is wasted on
insecure products.  After all, weak cryptography looks the same on the shelf
as strong cryptography.  Two e-mail encryption products may have almost the
same user interface, yet one is secure while the other permits
eavesdropping.  A feature comparison chart may suggest that two programs
have similar features, although one has gaping security holes that the other
doesn't.  An experienced cryptographer can tell the difference.  So can a
thief.

Present-day computer security is a house of cards; it may stand for now,
but it can't last.  Many insecure products have not yet been broken because
they are still in their infancy.  But as these products become more and
more widely used, they will become tempting targets for criminals.  The
press will publicize the attacks, undermining public confidence in these
systems.  Ultimately, products will win or lose in the marketplace
depending on the strength of their security.

Threats to computer systems

Every form of commerce ever invented has been subject to fraud, from rigged
scales in a farmers' market to counterfeit currency to phony invoices.
Electronic commerce schemes will also face fraud, through forgery,
misrepresentation, denial of service, and cheating.  You can't walk the
streets wearing a mask of someone else's face, but in the digital world it
is easy to impersonate others.  In fact, computerization makes the risks
even greater, by allowing automated and systematic attacks that are
impossible against non-automated systems.  A thief can make a living
skimming a penny from every Visa cardholder.  Only strong cryptography can
protect against these attacks.

Privacy violations are another threat.  Some attacks on privacy are
targeted: a member of the press tries to read a public figure's e-mail, or a
company tries to intercept a competitor's communications.  Others are broad
data-harvesting attacks, searching a sea of data for interesting
information: a list of rich widows, AZT users, or people who view a
particular Web page.

Electronic vandalism is an increasingly serious problem. Already computer
vandals have graffitied the CIA's web page, mail-bombed Internet providers,
and canceled thousands of newsgroup messages.  And of course, vandals and
thieves routinely break into networked computer systems.  When security
safeguards aren't adequate, trespassers run little risk of getting caught.
Attackers don't follow rules.  They can attack a system using techniques not
anticipated by the original designers; they cheat.  In California, art
thieves burgle homes by cutting through the walls with a chain saw.
Sophisticated, expensive home security systems don't stand a chance against
this sort of attack.

Computer thieves come through the walls too.  They steal technical data,
bribe insiders, modify software, and collude.  They take advantage of
technologies newer than the system, and even invent new mathematics to
attack the system with.

Attackers also have more time; it's unusual for a good guy to disassemble
and examine a public system.  SecurID was around for years before anyone
looked at its key management, and the vendor didn't even strip the binaries.
And the odds favor the attacker: defenders have to protect against every
possible vulnerability, but an attacker only has to find one security flaw
to compromise the whole system.

What cryptography can and can't do

No one can guarantee 100% security.  But we can work toward 100% risk
acceptance.  Fraud exists in current commerce systems: cash can be
counterfeited, checks altered, credit card numbers stolen.  Yet these
systems are still successful because the benefits and conveniences outweigh
the losses.  Privacy systems -- wall safes, door locks, curtains -- are not
perfect, but they're often good enough.  A good cryptographic system strikes
a balance between what is possible and what is acceptable.

Strong cryptography can successfully withstand targeted attacks up to a
point-the point at which it becomes easier to get the information some other
way.  A computer encryption program, no matter how good, will not prevent an
attacker from going through someone's garbage.  But it can absolutely
prevent data-harvesting attacks; no attacker can go through enough trash to
find every AZT user in the country.

The good news about cryptography is that we already have the algorithms and
protocols we need to secure our systems.  The bad news is that that was the
easy part; successful implementation requires considerable expertise. The
areas of security that interact with people -- key management, human/computer
interface security, access control -- often defy analysis.  And the disciplines
of public-key infrastructure, software security, computer security, network
security, and tamper-resistant hardware design are very poorly understood.

Companies often get the easy part wrong and implement insecure algorithms
and protocols.  But even so, practical cryptography is rarely broken through
the mathematics; other parts of systems are much easier to break.  The best
protocol ever invented can fall to an easy attack if no one pays attention
to the more complex and subtle implementation issues.  Netscape's security
fell to a bug in the random-number generator.  Flaws can be anywhere: the
threat model, the system design, the software or hardware implementation,
the system management.  Security is a chain, and a single weak link can
break the entire system.  Fatal bugs may be far removed from the security
portion of the software; a design decision that has nothing to do with
security can nonetheless create a security flaw.
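
The Netscape break is instructive: Ian Goldberg and David Wagner found that
the browser seeded its session-key generator with guessable quantities such
as the time of day and process IDs.  The following Python sketch shows the
generic failure mode; it is not Netscape's actual code, and the seed recipe
is invented for illustration.

    # A "random" session key derived from a guessable seed (time, PID).
    # Sketch of the generic flaw, not Netscape's implementation.
    import os
    import random
    import time

    def weak_session_key(seed):
        """Derive a 128-bit 'random' key; deterministic given the seed."""
        return random.Random(seed).getrandbits(128)

    # Server side: seed from the time of day and the process ID.
    key = weak_session_key(int(time.time()) * 100000 + os.getpid())

    def crack(key, now):
        """Enumerate the tiny seed space an eavesdropper would search."""
        for t in range(now - 2, now + 1):   # a few seconds of uncertainty
            for pid in range(1, 32768):     # classic PID limit (assumed)
                if weak_session_key(t * 100000 + pid) == key:
                    return t, pid
        return None

    print("recovered seed:", crack(key, int(time.time())))

A 128-bit key whose seed carries perhaps twenty bits of real uncertainty is
effectively a twenty-bit key; the strength of the encryption algorithm
itself never enters into it.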

Once you find a security flaw, you can fix it.  But finding the flaws to
begin with can be incredibly difficult.  Security is different from any
other design requirement, because functionality does not equal quality.  If
a word processor prints successfully, you know that the print function
works.  Security is different; just because a safe recognizes the correct
combination does not mean that its contents are secure from a safecracker.
No amount of general beta testing will reveal a security flaw, and there's
no test possible that can prove the absence of flaws.

Threat models

A good design starts with a threat model: what is the system designed to
protect, from whom, and for how long?  The threat model must take the entire
system into account: not just the data to be protected, but the people who
will use the system and how they will use it.  What motivates the attackers?
What kinds of abuses can be tolerated?  Must attacks be prevented, or can
they just be detected?  If the worst happens and one of the fundamental
security assumptions of a system is broken, what kind of disaster recovery
is possible?  The answers to these questions can't be standardized; they're
different for every system.  Too often, designers don't take the time to
build accurate threat models or analyze the real risks.

Threat models allow both product designers and consumers to determine what
security measures they need.  Does it make sense to encrypt your hard drive
if you don't put your files in a safe?  How can someone inside the company
defraud the commerce system?  What exactly is the cost of defeating the
tamper-resistance on the smart card?  You can't design a secure system
unless you understand what it has to be secure against.

System design

System design should only begin after you understand the threat model.  This
design work is the mainstay of the science of cryptography, and it is very
specialized.  Cryptography blends several areas of mathematics: number
theory, complexity theory, information theory, probability theory, abstract
algebra, and formal analysis, among others.  Few can do the science
properly, and a little knowledge is a dangerous thing: inexperienced
cryptographers almost always design flawed systems.  Good cryptographers
know that nothing substitutes for extensive peer review and years of
analysis.  Quality systems use published and well-understood algorithms and
protocols; to use unpublished or unproven elements in a design is risky at
best.

Cryptographic system design is also an art.  A designer must strike a
balance between security and accessibility, anonymity and accountability,
privacy and availability.  Science alone cannot prove security; only
experience, and the intuition born of experience, can help the cryptographer
design secure systems and find flaws in existing designs.

Good security systems are made up of small, verifiable (and verified!)
chunks, each of which provides some service that clearly reduces to a
primitive, such as the difficulty of forging a certain hash function.  There
are a lot of big systems out there (DCE, for example) which are just too big
to verify in a reasonable amount of time.
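
As a concrete (and hedged) sketch of such a chunk: message authentication
built with HMAC from a standard hash primitive, so that the security
argument reduces to the strength of the underlying hash.  Key management is
deliberately out of scope here, and the all-zero key is a placeholder.

    # A small, auditable chunk: HMAC message authentication.  Its security
    # reduces to that of the underlying hash function (SHA-256 here).
    import hashlib
    import hmac

    def tag(key, message):
        """Authentication tag for message under key (both bytes)."""
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify(key, message, received_tag):
        """Constant-time comparison avoids a timing side channel."""
        return hmac.compare_digest(tag(key, message), received_tag)

    key = b"\x00" * 32                     # placeholder; use a real random key
    msg = b"transfer $100 to account 42"
    t = tag(key, msg)
    assert verify(key, msg, t)
    assert not verify(key, msg + b".", t)  # any tampering is detected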

Implementation

There is an enormous difference between a mathematical algorithm and its
concrete implementation in hardware or software.  Cryptographic system
designs are fragile.  Just because a protocol is logically secure doesn't
mean it will stay secure when a designer starts defining message structures
and passing bits around.  Close isn't close enough; these systems must be
implemented exactly, perfectly, or they will fail.  A poorly-designed user
interface can make a hard-drive encryption program completely insecure.  A
bad clock interface can leave a gaping hole in a communications security
program.  A false reliance on tamper-resistant hardware can render an
electronic commerce system all but useless.  Since these mistakes aren't
apparent in testing, they end up in finished products.

Implementers are under pressure from budgets and deadlines.  They make the
same mistakes over and over again, in many different products.  They use bad
random-number generators, don't check properly for error conditions, and
leave secret information in swap files.  Many of these flaws cannot be
studied in the scientific literature because they are not technically
interesting. The only way to learn how to prevent these flaws is to make and
break systems, again and again.
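
On the random-number point in particular, the fix is well understood today:
draw key material from the operating system's cryptographic source rather
than from a general-purpose PRNG.  A minimal sketch in modern Python (the
secrets module long postdates this essay and appears here purely for
illustration):

    # The recurring RNG mistake and its fix.  The random module is a
    # statistical PRNG: reproducible, and cryptographically worthless for
    # keys.  The secrets module draws from the OS's cryptographic source.
    import random
    import secrets

    weak_key = bytes(random.randrange(256) for _ in range(16))  # WRONG for keys
    strong_key = secrets.token_bytes(16)                        # unpredictable

    print(weak_key.hex(), strong_key.hex())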

Procedures and management

In the end, many security systems are broken by the people who use them;
most fraud against commerce systems is perpetrated by insiders.  Honest
users cause problems too, because they usually don't care about security.
They want simplicity, convenience, and compatibility with existing
(insecure) systems.  They choose bad passwords, write them down, give
friends and relatives their private keys, leave computers logged in, and so
on.  It's hard to sell door locks to people who don't want to be bothered
with keys.  A well-designed system must take people into account, and people
are often the hardest factor to design around.

This is where you find the real cost of security.  It's not in the
algorithms; strong cryptography is no more expensive than weak cryptography.
It's not even in the design or the implementation; a good system, while
expensive to build and verify, is far cheaper than the losses from an
insecure system.  The cost is in getting people to use it.  It's hard to
convince consumers that their financial privacy is important when they are
willing to leave a detailed purchase record in exchange for one thousandth
of a free trip to Hawaii.  It's hard to build a system that provides strong
authentication on top of systems that can be penetrated by knowing someone's
mother's maiden name.  Security is routinely bypassed by store clerks,
senior executives, and anyone else who just needs to get the job done.

Even when users do understand the need for strong security, they have no way
of comparing systems.  Computer magazines compare security products by
listing their features, not by evaluating their security.  Marketing
literature makes claims that are just not true; a competing product that is
more secure and more expensive will only fare worse in the market.  People
rely on the government to look out for their safety and security in areas
where they lack the knowledge to make evaluations -- food packaging,
aviation, medicine.  For cryptography, the U.S. government is doing just
the opposite.

Tomorrow's problems

When an airplane crashes, there are inquiries, analyses, and reports.
Information is widely disseminated, and everyone learns from the failure.
You can read a complete record of airline accidents from the beginning of
commercial aviation.  When a bank's electronic commerce system is breached
and defrauded, it's usually covered up.  If it does make the newspapers,
details are omitted.  No one analyzes the attack; no one learns from the
mistake.  The bank tries to patch things in secret, hoping that the public
won't lose confidence in a system that deserves no confidence.

It's no longer good enough to install security patches in response to
attacks.  Computer systems move too quickly; a security flaw described on
the Internet can be exploited by thousands in a day. Today's systems must
anticipate future attacks.  Any comprehensive system -- whether for
authenticated communications, secure data storage, or electronic
commerce -- is likely to remain in use for five years or more.  To remain
secure, it must
be able to withstand the future: smarter attackers, more computational
power, and greater incentives to subvert a widespread system.  There won't
be time to upgrade it in the field.

History has taught us: never underestimate the amount of money, time, and
effort someone will expend to thwart a security system.  Use orthogonal
defense systems: different ways of doing the same thing.  Secure
authentication might mean digital signatures on the desktop, SSL protecting
the incoming transmission, and IPsec from the firewall to the back end,
along with multiple audit points along the way for recovery and evidence.
Breaking parts of it gives an attacker a wedge, but doesn't cause the whole
system to collapse.

It's always better to assume the worst.  Assume your adversaries are better
than they are.  Assume science and technology will soon be able to do things
they cannot yet. Give yourself a margin for error.  Give yourself more
security than you need today.  When the unexpected happens, you'll be glad
you did.

Bruce Schneier, Counterpane Systems  Author of APPLIED CRYPTOGRAPHY  
For Blowfish C code, see ftp.ox.ac.uk:/pub/crypto/misc/blowfish.c.gz

------------------------------

Date: 15 Aug 1996 (LAST-MODIFIED)
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)

 The RISKS Forum is a MODERATED digest.  Its Usenet equivalent is comp.risks.
=> SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent) 
 if possible and convenient for you.  Or use Bitnet LISTSERV.  Alternatively,
 (via majordomo) DIRECT REQUESTS to <risks-request@csl.sri.com> with one-line, 
   SUBSCRIBE (or UNSUBSCRIBE) [with net address if different from FROM:] or
   INFO     [for unabridged version of RISKS information]
=> The INFO file (submissions, default disclaimers, archive sites, .mil/.uk
 subscribers, copyright policy, PRIVACY digests, etc.) is also obtainable from
 http://www.CSL.sri.com/risksinfo.html  ftp://www.CSL.sri.com/pub/risks.info
 The full info file will appear now and then in future issues.  *** All 
 contributors are assumed to have read the full info file for guidelines. ***
=> SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line.
=> ARCHIVES are available: ftp://ftp.sri.com/risks or
 ftp ftp.sri.com<CR>login anonymous<CR>[YourNetAddress]<CR>cd risks
 or http://catless.ncl.ac.uk/Risks/VL.IS.html      [i.e., VoLume, ISsue].
 The ftp.sri.com site risks directory also contains the most recent 
 PostScript copy of PGN's comprehensive historical summary of one liners:
   get illustrative.PS

------------------------------

End of RISKS-FORUM Digest 18.59 
************************
