[145487] in cryptography@c2.net mail archive
Re: A mighty fortress is our PKI, Part II
daemon@ATHENA.MIT.EDU (Steven Bellovin)
Wed Jul 28 23:14:05 2010
From: Steven Bellovin <smb@cs.columbia.edu>
In-Reply-To: <E1Oe6ab-0007PQ-KJ@wintermute02.cs.auckland.ac.nz>
Date: Wed, 28 Jul 2010 20:44:48 -0400
Cc: ben@links.org, cryptography@metzdowd.com
To: Peter Gutmann <pgut001@cs.auckland.ac.nz>
On Jul 28, 2010, at 9:22 AM, Peter Gutmann wrote:
> Steven Bellovin <smb@cs.columbia.edu> writes:
>
>> For the last issue, I'd note that using pki instead of PKI (i.e., many
>> different per-realm roots, authorization certificates rather than identity
>> certificates, etc.) doesn't help: Realtek et al. still have no better way or
>> better incentive to revoke their own widely-used keys.
>
> I think the problems go a bit further than just Realtek's motivation, if you
> look at the way it's supposed to work in all the PKI textbooks it's:
>
> Time t: Malware appears signed with a stolen key.
> Shortly after t: Realtek requests that the issuing CA revoke the cert.
> Shortly after t': CA revokes the cert.
> Shortly after t'': Signature is no longer regarded as valid.
>
> What actually happened was:
>
> Time t: Malware appears signed with a stolen key.
> Shortly after t: Widespread (well, relatively) news coverage of the issue.
>
> Time t + 2-3 days: The issuing CA reads about the cert problem in the news.
> Time t + 4-5 days: The certificate is revoked by the CA.
> Time t + 2 weeks and counting: The certificate is regarded as still valid
> by the sig-checking software.
>
> That's pretty much what you'd expect if you're familiar with the realities
> of PKI, but definitely not PKI's finest hour. In addition you have:
>
> Time t - lots: Stuxnet malware appears (i.e. is noticed by people other
> than the victims)
> Shortly after t - lots: AV vendors add it to their AV databases and push
> out updates
>
> (I don't know what "lots" is here, it seems to be anything from weeks to
> months depending on which news reports you go with).
>
> So if I'm looking for a defence against signed malware, it's not going to
> be PKI. That was the point of my previous exchange with Ben: assume that
> PKI doesn't work and you won't be disappointed, and more importantly, you
> now have the freedom to design around it to try and find mechanisms that
> do work.
When I look at this, though, little of the problem is inherent to PKI.
Rather, there are faulty communications paths.
You note that at t+2-3 days, the CA read the news. Apart from the question
of whether or not "2-3 days" is "shortly after" -- the time you suggest the
next step takes place -- how should the CA or Realtek know about the
problem? Did the folks who found the offending key contact either party?
Should they have? The AV companies are in the business of looking for
malware or reports thereof; I think (though I'm not certain) that they have
a sharing agreement for new samples. (Btw -- I'm confused by your
definition of "t" vs. "t-lots". The first two scenarios appear to be
"t == the published report appearing"; the third is confusing, but if you
change the timeline to "t+lots" it works for "t == initial, unnoticed
appearance in the wild". Did the AV companies push something out long
before the analysis showed the stolen key?)
Suppose, though, that Realtek has some Google profile set up to send them
reports of malware affecting any of their products. Even leaving aside
false positives, once they get the alert they should do something. What
should that something be? Immediately revoke the key? The initial reports
I saw were not nearly specific enough to identify which key was involved.
Besides, maybe the report was not just bogus but malicious -- a DoS attack
on their key. They really need to investigate it; I don't regard 2-3 days
as unreasonable to establish communications with a malware analysis company
you've never heard of and which has to verify your bona fides, check it
out, and verify that the allegedly malsigned code isn't something you
actually released N years ago as release 5.6.7.9.9.a.b for a minor product
line you've since discontinued. At that point, a revocation request should
go out; delays past that point are not justifiable. The issue of software
still accepting it, CRLs notwithstanding, is more a sign of buggy code.
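To make that last point concrete, here's a minimal sketch (all names and
the data model are hypothetical, not any real verifier's API) of what a
non-buggy signature checker has to do: consult the revocation state at
verification time, not just validate the chain.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Certificate:
    subject: str
    serial: int

def signature_valid(cert: Certificate, revoked_serials: set[int]) -> bool:
    """Accept a signature only if the signing cert is not on the CRL.

    The buggy behavior described above amounts to skipping (or silently
    ignoring a failure of) this lookup and trusting the chain alone.
    """
    if cert.serial in revoked_serials:
        return False  # revoked: signature must no longer be regarded as valid
    return True       # chain and crypto checks elided in this sketch

# Before the CA acts, the CRL doesn't list the stolen key's cert...
cert = Certificate("Realtek Semiconductor Corp", serial=0x15C1)
crl: set[int] = set()
assert signature_valid(cert, crl)

# ...and once the revocation request goes through, acceptance must stop.
crl.add(0x15C1)
assert not signature_valid(cert, crl)
```

The "2 weeks and counting" failure above is exactly a verifier that keeps
returning true after the second state.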
The point about the communications delay is that it's inherent to anything
involving the source company canceling anything -- whether it's a PKI cert,
a pki cert, a self-validating URL, a KDC, or magic fairies who warn
sysadmins not to trust certain software.
What's interesting here is the claim that AV companies could respond much
faster. They have three inherent advantages: they're in the business of
looking for malware; they don't have to complete the analysis to see if a
stolen key is involved; and they can detect problems after installation,
whereas certs are checked only at installation time. Of course, speedy
action can have its own problems; see
http://www.pcworld.com/businesscenter/article/194752/few_answers_after_mcafee_antivirus_update_hits_intel_others.html
for a recent example, but there have been others.
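The third advantage is the structural one, and it can be shown in a toy
model (everything here is illustrative, not how any real installer or AV
engine is built): the cert check runs once, against the state of the world
at install time, while a scanner re-checks already-installed files every
time its database updates.

```python
installed: dict[str, bool] = {}  # filename -> passed install-time cert check

def install(filename: str, revoked_at_install: bool) -> None:
    # One-shot check: recorded once and never revisited, even if the
    # signing cert is revoked later.
    installed[filename] = not revoked_at_install

def av_rescan(av_database: set[str]) -> list[str]:
    # Re-evaluates everything already on disk against the CURRENT database.
    return [f for f in installed if f in av_database]

install("driver.sys", revoked_at_install=False)  # cert still valid at time t
av_db: set[str] = set()
assert av_rescan(av_db) == []                    # nothing known yet

av_db.add("driver.sys")                          # AV vendors push an update
assert av_rescan(av_db) == ["driver.sys"]        # caught post-installation
assert installed["driver.sys"]                   # cert check was never re-run
```

A later revocation changes nothing for the file that's already installed;
only the re-scanning path catches it.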
Note that I'm not saying that PKI is a good solution. But it's important
to factor out the different contributing factors in order to understand
what needs to be fixed. It's also important to understand the failure
modes of replacements. (To pick a bad example, Kerberos for the Internet
is extremely vulnerable to compromise of the KDC, unless you use the
public key variants of Kerberos.)
--Steve Bellovin, http://www.cs.columbia.edu/~smb
---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo@metzdowd.com