[145657] in cryptography@c2.net mail archive
RE: Has there been a change in US banking regulations recently?
daemon@ATHENA.MIT.EDU (eric.lengvenis@wellsfargo.com)
Sat Aug 14 15:03:25 2010
From: <eric.lengvenis@wellsfargo.com>
To: <lynn@garlic.com>, <jon@callas.org>
CC: <pgut001@cs.auckland.ac.nz>, <cryptography@metzdowd.com>
Date: Fri, 13 Aug 2010 14:55:32 -0500
In-Reply-To: <4C659278.7090902@garlic.com>
>Ann & Lynn Wheeler wrote:
> the original requirement for SSL deployment was that it was on from the
> original URL entered by the user. The drop-back to using SSL for only small
> subset ... was based on computational load caused by SSL cryptography ... in
> the online merchant scenario, it cut thruput by 90-95%; alternative to handle
> the online merchant scenario for total user interaction would have required
> increasing the number of servers by factor of 10-20.
>
> One possibility is that the institution has increased the server capacity ...
> and/or added specific hardware to handle the cryptographic load.
Moore's law helped immensely here. In the last 5 years systems have gotten about 8 times faster, substantially reducing the processing cost of crypto. I'm familiar with one site that has 24 servers evenly divided across three geographical areas. Entirely SSL-enabling their site required only one new server at each location. Meanwhile, load-balancing SSL terminators/accelerators have improved even faster, due to improvements in load balancing, network compression, etc. So putting one of them in front of a previously naïve load-balancing scheme, like basic round-robin, would provide enough offloading to SSL-enable an entire site.
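To make that concrete, here's a back-of-envelope calculation in Python. The 8x and 10-20x figures come from the messages above; treating crypto cost as scaling directly with CPU speed is my own simplifying assumption, not a measurement:

    import math

    # 8x faster systems over ~5 years implies a doubling period of
    # 5 * 12 / log2(8) = 20 months -- a bit slower than the classic 18.
    speedup = 8.0
    doubling_months = 5 * 12 / math.log2(speedup)
    print(f"implied doubling period: {doubling_months:.0f} months")

    # 2005-era figure from the quoted post: SSL-enabling everything would
    # have taken 10-20x the servers (throughput cut by 90-95%).
    for overhead_2005 in (10.0, 20.0):
        overhead_2010 = overhead_2005 / speedup
        print(f"{overhead_2005:.0f}x in 2005 -> {overhead_2010:.2f}x in 2010")

    # The residual 1.25-2.5x is what the SSL terminator/accelerator in
    # front of the load balancer absorbs.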
The big drawback is that those who want to follow NIST's recommendation to migrate to 2048-bit keys will be returning to the 2005-era overhead. Dan Kaminsky provided some benchmarks in a different thread on this list [1] showing 2048-bit keys performing at 1/9th the speed of 1024-bit keys. My own internal benchmarks have been closer to 1/7th or 1/8th. Either way, that's back in line with the 90-95% overhead stated above. Meaning, in Dan's words, "2048 ain't happening."
There are some possibilities my co-workers and I have discussed. For purely internal systems, TLS-PSK (RFC 4279) provides symmetric encryption through pre-shared keys, which gives us whitelisting as well as removing the asymmetric crypto. Alternatively, we could step the key size up in accordance with Moore's law: each time a certificate expired, the new certificate would be issued at the next higher key length, reaching 2048 bits after several years. A rough sketch of that schedule follows.
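Here is what that stepped schedule could look like. It's purely illustrative: the intermediate sizes, the 2x cost budget, and the cubic cost model are all my assumptions, not a worked-out policy.

    # At each annual renewal, issue the largest key whose private-key cost,
    # after Moore's-law speedup, stays within a budget relative to a
    # 1024-bit key on 2010 hardware.
    SIZES = [1024, 1280, 1536, 2048]   # common intermediate RSA sizes
    DOUBLING_YEARS = 5 / 3             # ~8x per 5 years, as above
    BUDGET = 2.0                       # tolerate up to 2x the 2010 per-op cost

    for year in range(8):
        speedup = 2 ** (year / DOUBLING_YEARS)
        # RSA private-key ops scale roughly cubically with modulus size.
        affordable = [b for b in SIZES if (b / 1024) ** 3 / speedup <= BUDGET]
        print(2010 + year, max(affordable))

Under these assumptions the schedule steps through 1280 and 1536 and reaches 2048 bits around 2014, which is roughly the "several years" above.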
Besides, I think we all know that NIST's 2010 algorithm transition isn't going to happen on schedule. At the IEEE Key Management Summit back in May (IIRC), Elaine Barker from NIST presented a back-off talk in which NIST only "strongly recommends" 112-bit security by 2011, and pushes the real deadline out to the end of 2013. That was subsequently released as draft SP 800-131, available for comments [2].
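For reference, the reason "112-bit security" translates to 2048-bit RSA certificates is the strength-equivalence table in NIST SP 800-57 Part 1:

    # NIST SP 800-57 security-strength to RSA-modulus equivalences
    STRENGTH_TO_RSA = {80: 1024, 112: 2048, 128: 3072, 192: 7680, 256: 15360}
    print(f"112-bit security -> RSA-{STRENGTH_TO_RSA[112]}")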
Of course, the industry had five years to plan for this, and no major vendors seem to be ready. Most comments are essentially vendors sighing with relief.
[1] http://www.mail-archive.com/cryptography@metzdowd.com/msg11245.html
[2] http://csrc.nist.gov/publications/drafts/800-131/draft-sp800-131_spd-june2010.pdf
Eric Lengvenis
InfoSec Arch.
---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo@metzdowd.com