[145186] in cryptography@c2.net mail archive


Re: "Against Rekeying"

daemon@ATHENA.MIT.EDU (Jon Callas)
Thu Mar 25 08:48:06 2010

From: Jon Callas <jon@callas.org>
In-Reply-To: <7E9DC0BD-6F89-4729-8970-298F18D30C95@st.cs.uni-sb.de>
Date: Wed, 24 Mar 2010 15:36:48 -0700
Cc: Jon Callas <jon@callas.org>,
 "Perry E. Metzger" <perry@piermont.com>,
 cryptography@metzdowd.com
To: Stephan Neuhaus <neuhaus@st.cs.uni-sb.de>


On Mar 24, 2010, at 2:07 AM, Stephan Neuhaus wrote:

>
> On Mar 23, 2010, at 22:42, Jon Callas wrote:
>
>> If you need to rekey, tear down the SSL connection and make a new one. There should be a higher-level construct in the application that abstracts the two connections into one session.
>
> ... which will have its own subtleties and hence probability of failure.

Exactly, but they're at the proper place in the system. That's what layering is all about.

I'm not suggesting that there's a perfect solution, or even a good one. There are times when a designer has a responsibility to make a decision and times when a designer has a responsibility *not* to make a decision.

In this particular case, rekeying introduced the most serious problem we've ever seen in a protocol like that. Rekeying itself has always been a bit dodgy. If you're rekeying because you are worried about the strength of the key (e.g. you're using DES), picking a better key is a better answer (use AES instead). The most compelling reason to rekey is not the key, but the data size. For ciphers that have a 64-bit block size, rekeying because you've sent 2^32 blocks is a much better reason to rekey. But -- an even better solution is to use a cipher with a bigger block size. Like AES. Or Camellia. Or Twofish. Or Threefish (which has a 512-bit block size in its main version). It's far more reasonable to rekey because you've encrypted 32G of data than because you are worried about the key.
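The 2^32-block figure above is the birthday bound: for a cipher with b-bit blocks, ciphertext-block collisions become likely after roughly 2^(b/2) blocks. A minimal sketch (the function name is mine, not from the post) turning that bound into a byte limit:

```python
def rekey_limit_bytes(block_bits: int) -> int:
    """Rough data limit per key from the birthday bound:
    about 2^(b/2) blocks of a b-bit block cipher, in bytes."""
    blocks = 2 ** (block_bits // 2)
    block_bytes = block_bits // 8
    return blocks * block_bytes

# 64-bit blocks (DES, Blowfish): 2^32 blocks * 8 bytes = 32 GiB,
# matching the "32G of data" figure above.
print(rekey_limit_bytes(64))
# 128-bit blocks (AES, Camellia, Twofish): 2^64 blocks * 16 bytes,
# far beyond any practical data volume.
print(rekey_limit_bytes(128))
```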

However, once you've graduated up to ciphers that have at least 128 bits of key and at least 128 bits of block size, the security considerations shift dramatically. I will ask explicitly the question I handwaved before: what makes you think that the chance there is a bug in your protocol is less than 2^-128? Or if you don't like that question -- I am the one who brought up birthday attacks -- what makes you think the chance of a bug is less than 2^-64? I believe that it's best to stop worrying about the core cryptographic components and worry about the protocol and its use within a stack of related things.

I've done encrypted file managers like the one I alluded to, and it's so easy to get rekeying of active files right that you don't have to worry: just pull a new bulk key from the PRNG every time you write a file. Poof, you're done. For inactive files, rekeying is isomorphic to writing a garbage collector, and garbage collectors are hard to get right. We designed, but never built, an automatic rekeying system. The added security wasn't worth the trouble.
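The active-file scheme described above can be sketched in a few lines; this is an illustrative toy (class and method names are mine, and XOR stands in for a real cipher such as AES-GCM), not the design of any actual product:

```python
import os

class EncryptedStore:
    """Toy store where every write draws a fresh bulk key from the
    system PRNG, so active files are rekeyed simply by being rewritten."""

    def __init__(self):
        self._files = {}  # name -> (key, ciphertext)

    def write(self, name: str, plaintext: bytes) -> None:
        key = os.urandom(32)              # fresh 256-bit bulk key per write
        self._files[name] = (key, self._xor(key, plaintext))

    def read(self, name: str) -> bytes:
        key, ct = self._files[name]
        return self._xor(key, ct)

    @staticmethod
    def _xor(key: bytes, data: bytes) -> bytes:
        # Repeating-key XOR as a stand-in for a real authenticated cipher.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

Inactive files are the hard case, because something has to walk the store and rewrite them -- the garbage-collector analogy in the text.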

Getting back to your point: yes, you're right, but if rekeying is just opening a new network connection, or rewriting a file, it's easy to understand and get right. Rekeying makes sense when you (1) don't want to create a new context (because that automatically rekeys) and (2) don't like your crypto parameters (key, data length, etc.). I hesitate to say that it never happens, but I think that coming up with a compelling use case where rekeying makes more sense than tearing down and recreating the context is a great exercise. Inconvenient use cases, sure. Compelling, that's hard.
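The higher-level construct suggested at the top of the thread -- a session that abstracts several connections -- might look like this sketch, where teardown-and-reconnect replaces in-place rekeying once a byte budget is spent (all names here are hypothetical):

```python
class Session:
    """Wraps a connection factory; after `rekey_after` bytes, tears the
    connection down and opens a fresh one, which gets fresh keys."""

    def __init__(self, connect, rekey_after: int):
        self._connect = connect      # factory returning a new connection
        self._budget = rekey_after
        self._sent = 0
        self._conn = connect()

    def send(self, data: bytes) -> None:
        if self._sent + len(data) > self._budget:
            self._conn.close()       # new connection => new handshake, new keys
            self._conn = self._connect()
            self._sent = 0
        self._conn.send(data)
        self._sent += len(data)
```

The subtleties Neuhaus mentions live in exactly this layer (what happens to in-flight data, how the peer recognizes the continued session), but they are application-level subtleties rather than changes to the crypto protocol underneath.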

	Jon

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo@metzdowd.com
