[1023] in Kerberos


Re: Why is initial user authentication done the way it is?

daemon@ATHENA.MIT.EDU (smb@ulysses.att.com)
Fri Jun 15 17:18:51 1990

From: smb@ulysses.att.com
To: "Jonathan I. Kamens" <jik@pit-manager.MIT.EDU>
Cc: kerberos@ATHENA.MIT.EDU
Date: Fri, 15 Jun 90 16:27:58 EDT

	 
	   Perhaps I'm confused, but it seems to me that if crypt() is doing
	 25 encryptions, and Kerberos DES is doing 1, then crypt() is going to
	 be slower than Kerberos DES.

It depends on how much ciphertext you have to decrypt before you can
determine if you've succeeded in cracking it.  And the ratio isn't
25:1, because one can omit the IP and IP^-1 steps between each pair
of encryptions, and can in fact omit the initial IP because the constant
string is 0.  Matt Bishop wrote a lovely paper describing all sorts
of optimizations....
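To see why the effective ratio is less than 25:1, here is a back-of-the-envelope cost model of the optimization just described (the relative weights of a round versus a permutation are illustrative assumptions, not measurements):

```python
# Illustrative cost model for the crypt() optimization described above.
# The weights are assumed for illustration, not measured.
ROUND_COST = 1.0   # one DES Feistel round
PERM_COST = 0.5    # one IP or IP^-1 (assumed weight)

# One full DES operation, as Kerberos performs: IP + 16 rounds + IP^-1.
single_des = PERM_COST + 16 * ROUND_COST + PERM_COST

# Naive crypt(): 25 full DES operations.
naive = 25 * single_des

# Optimized crypt(): each interior IP^-1/IP pair cancels, and IP of
# the all-zero starting block is itself all-zero, leaving only the
# 25 * 16 rounds plus one final IP^-1.
optimized = 25 * 16 * ROUND_COST + PERM_COST

print(naive / single_des)      # exactly 25.0 without the optimization
print(optimized / single_des)  # noticeably less than 25 with it
```

Under any weighting where the permutations cost something, the optimized loop comes out cheaper than 25 standalone DES operations.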

	   Furthermore, if you have a database of passwords you want to try in
	 your attack, crypt() requires you to encrypt each of them 4096 times
	 (64 characters are legal in the seed, and there are two seed
	 characters), which means you've got 4096 times as many passwords to
	 store in your password database.

That's not a significant problem in practice.  Even with the seed,
a dictionary of 10K passwords takes only 400M bytes -- not a trivial
amount of storage, but hardly outrageously expensive.
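The arithmetic behind that figure checks out; the roughly 10 bytes per stored entry below is an assumption chosen to be consistent with the total stated above:

```python
passwords = 10_000        # dictionary size from the text
salts = 64 * 64           # two salt characters, 64 legal values each
bytes_per_entry = 10      # assumed packed size of one stored result

total_bytes = passwords * salts * bytes_per_entry
print(total_bytes / 1e6, "MB")  # 409.6 MB -- roughly the 400M figure
```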

	   Granted, Unix encryption does have the added problem that once you
	 *do* do all of those crypt()s, the database you have needs only to be
	 compared to the passwords you're trying to crack using strcmp() (or
	 its equivalent), rather than having to do an encryption using them as
	 the key.  However, I'm not sure how relevant this is, since the type
	 of attack I'm going to do on a Kerberos tgt isn't really the same as
	 the type of attack I'm going to use on a Unix password.

A key point, incidentally, is the availability of subsidiary information
about the user -- say, the stuff that fingerd spits out.

	   One final note -- one of the big mistakes made in the design of Unix
	 authentication is that it assumed that machines were slow enough that
	 brute-force attacks would not be feasible.  We all know that nowadays
	 this is increasingly no longer the case.  Therefore, do we want to
	 make the same assumption, and same mistake, in the design of the
	 Kerberos authentication scheme?

I do not agree that that mistake was made.  The mistake was in not enforcing
better password standards years ago.  I'll leave the arithmetic as
an exercise for the reader, but my calculations show that for an 8-character
password composed of mixed upper/lower case -- i.e., 52 characters -- and
an encryption time of 1 microsecond (better than I've seen any hardware
designs), a brute-force attack would take about 2 years.
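The exercise left for the reader works out as follows, assuming one trial encryption per microsecond and an exhaustive search of the full keyspace:

```python
keyspace = 52 ** 8           # 8 characters drawn from 52 letters
seconds = keyspace * 1e-6    # 1 microsecond per trial encryption
years = seconds / (365 * 24 * 3600)
print(f"{years:.1f} years")  # about 1.7 years for a full search
```

An attacker expects to succeed after searching half the keyspace on average, so the expected time is closer to ten months; the roughly two-year figure is the worst case.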
