

RE: Fermat's primality test vs. Miller-Rabin

daemon@ATHENA.MIT.EDU (Anton Stiglic)
Wed Nov 16 11:37:07 2005

X-Original-To: cryptography@metzdowd.com
From: "Anton Stiglic" <astiglic@okiok.com>
To: "'Alexander Klimov'" <alserkli@inbox.ru>,
	<cryptography@metzdowd.com>
Date: Tue, 15 Nov 2005 19:55:25 -0500
In-Reply-To: <TheMailAgent.190093e87522fa@15507a2322d7947218dcf>


>The general consensus is that for 500-bit numbers one needs only 6 MR
>tests for 2^{-80} error probability [1]:

>... 

> and thus a single test gives ~2^{-13}.

If you just took the exponent 80 and divided it by 6 to get ~13, I don't
think that is the right reasoning.  Look at Table 4.3 of the Handbook of
Applied Cryptography: for t = 1 (one iteration) and a 500-bit candidate,
we have probability P(X | Y_1) <= 2^-56, which is better than what you
concluded.  (Here X is the event that the candidate n is composite, and Y_t
is the event that Miller-Rabin(n, t) declares n to be prime.)
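For concreteness, here is a rough Python sketch of what "Miller-Rabin(n, t)"
refers to above, i.e. t independent iterations of the strong pseudoprime test
with random bases (purely illustrative, not anybody's production code):

import random

def miller_rabin(n, t):
    """Return True if n passes t rounds of the strong pseudoprime test."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(t):
        a = random.randrange(2, n - 1)   # random base
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a is a witness: n is certainly composite
    return True            # n is declared (probably) prime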

The results in Tables 4.3 and 4.4 of HAC are for randomly (uniformly) chosen
candidates, and I think you also need to do some basic sieving first (I don't
remember whether that is strictly necessary, but I believe it is).  Under
these conditions the strong pseudoprime test performs much better than the
generic 1/4 error probability per iteration (the value of P(Y_t | X) is very
low).  This result is due to Damgård, Landrock and Pomerance, building on
earlier work of Erdős and Pomerance.
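
To make that setting concrete, the sort of procedure I have in mind is: pick
a uniform random odd candidate of the stated size, sieve it against a few
small primes, then apply the test above.  A rough sketch (the small-prime
bound of 47 is my own arbitrary choice, just for illustration, and it reuses
the miller_rabin sketch from above):

import random

SMALL_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def random_probable_prime(bits, t):
    """Return a random odd bits-bit candidate that survives sieving
    and t Miller-Rabin iterations."""
    while True:
        # uniform random odd candidate with the top bit set
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if any(n % p == 0 for p in SMALL_PRIMES):
            continue            # cheap sieve before the expensive test
        if miller_rabin(n, t):
            return n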

--Anton


---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo@metzdowd.com
