A
primality test is an
algorithm for determining whether an input number is
prime. Among other fields of
mathematics, it is used for
cryptography. Unlike
integer factorization, primality tests do not generally give
prime factors,
only stating whether the input number is prime or not. Factorization is
thought to be a computationally difficult problem, whereas primality
testing is comparatively easy (its
running time is
polynomial in the size of the input). Some primality tests
prove that a number is prime, while others like
Miller–Rabin prove that a number is
composite. Therefore, the latter might be called
compositeness tests instead of primality tests.
Simple methods
The simplest primality test is
trial division: Given an input number
n, check whether any prime integer
m from 2 to
√n evenly
divides n (the division leaves no
remainder). If
n is divisible by any
m then
n is
composite, otherwise it is
prime.
[1]
For example, we can do a trial division to test the primality of 100. Let's look at all the divisors of 100 (other than 1 and 100 itself):
- 2, 4, 5, 10, 20, 25, 50
Here we see that the largest factor is 100/2 = 50. This is true for all
n: all divisors are less than or equal to
n/2. If we take a closer look at the divisors, we will see that some of them are redundant. If we write the list differently:
- 100 = 2 × 50 = 4 × 25 = 5 × 20 = 10 × 10 = 20 × 5 = 25 × 4 = 50 × 2
the redundancy becomes obvious. Once we reach 10, which is
√100, the divisors just flip around and repeat. Therefore, we can further eliminate testing divisors greater than
√n.
[1] We can also eliminate all the even numbers greater than 2, since if an even number can divide
n, so can 2.
Let's look at another example, and use trial division to test the
primality of 17. Since we now know we do not need to test using divisors
greater than
√n, we only need to use integer divisors less than or equal to √17 ≈ 4.12.
Those would be 2, 3, and 4. As stated above, we can skip 4 because if 4
evenly divides 17, 2 must also evenly divide 17, which we would already
have checked. That leaves us with just 2 and 3. After
dividing, we find that 17 is not divisible by 2 or 3, and we can confirm
that 17 must be prime.
The algorithm can be improved further by observing that all primes are of the form
6k ± 1, with the exception of 2 and 3. This is because all integers can be expressed as
(6k + i) for some integer
k and for
i = −1, 0, 1, 2, 3, or 4; 2 divides
(6k + 0), (6k + 2), (6k + 4); and 3 divides
(6k + 3). So, a more efficient method is to test if
n is divisible by 2 or 3, then to check through all the numbers of the form 6k ± 1 with 6k ± 1 ≤ √n. This is 3 times as fast as testing all
m.
Generalising further, it can be seen that all primes are of the form
c#k + i for
i < c#, where
c and
k are integers,
c# represents
c primorial (the product of all primes up to and including
c), and
i represents the numbers that are
coprime to
c#. For example, let
c = 6. Then
c# = 2 · 3 · 5 = 30. All integers are of the form
30k + i for
i = 0, 1, 2,...,29 and
k
an integer. However, 2 divides 0, 2, 4,...,28 and 3 divides 0, 3,
6,...,27 and 5 divides 0, 5, 10,...,25. So all prime numbers are of the
form
30k + i for
i = 1, 7, 11, 13, 17, 19, 23, 29 (i.e. for
i < 30 such that
gcd(i,30) = 1). Note that if
i and 30 are not coprime, then
30k + i is divisible by a prime divisor of 30, namely 2, 3 or 5, and is therefore not prime.
As
c → ∞, the number of values that
c#k + i can take over a certain range decreases, and so the time to test
n decreases. For this method, it is also necessary to check for divisibility by all primes that are less than
c. Observations analogous to the preceding can be applied
recursively, giving the
Sieve of Eratosthenes.
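To make the c = 6 case concrete, here is a minimal Python sketch of trial division over the mod-30 wheel; the function name and gap table are illustrative, not from any standard library:

def is_prime_wheel30(n):
    """Trial division testing only candidates coprime to 30."""
    if n < 2:
        return False
    for p in (2, 3, 5):
        if n % p == 0:
            return n == p
    # Gaps between consecutive numbers coprime to 30, starting from 7:
    # 7, 11, 13, 17, 19, 23, 29, 31, 37, ...
    gaps = (4, 2, 4, 2, 4, 6, 2, 6)
    i, g = 7, 0
    while i * i <= n:
        if n % i == 0:
            return False
        i += gaps[g]
        g = (g + 1) % 8
    return True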
A good way to speed up these methods (and all the others mentioned
below) is to pre-compute and store a list of all primes up to a certain
bound, say all primes up to 200. (Such a list can be computed with the
Sieve of Eratosthenes or by an algorithm that tests each incremental
m against all known primes ≤
√m). Then, before testing
n for primality with a serious method,
n
can first be checked for divisibility by any prime from the list. If it
is divisible by any of those numbers then it is composite, and any
further tests can be skipped.
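A sketch of this screening step in Python, with the bound of 200 from the text (all names here are illustrative):

def primes_up_to(bound):
    """Sieve of Eratosthenes: return the list of all primes <= bound."""
    sieve = [True] * (bound + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(bound ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, bound + 1, p):
                sieve[m] = False
    return [p for p in range(2, bound + 1) if sieve[p]]

SMALL_PRIMES = primes_up_to(200)   # pre-computed once, reused for every n

def has_small_factor(n):
    """Cheap screen: True if some listed prime properly divides n."""
    return any(n % p == 0 and n != p for p in SMALL_PRIMES)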
A simple, but very inefficient primality test uses
Wilson's theorem, which states that
p is prime if and only if:
(p − 1)! ≡ −1 (mod p)
Although this method requires about
p modular multiplications,
rendering it impractical, theorems about primes and modular residues
form the basis of many more practical methods.
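For illustration only, a direct Python rendering of this impractically slow test (the function name is ours):

def wilson_is_prime(p):
    """Wilson's theorem: p > 1 is prime iff (p - 1)! ≡ -1 (mod p)."""
    if p < 2:
        return False
    fact = 1
    for i in range(2, p):        # about p modular multiplications
        fact = (fact * i) % p
    return fact == p - 1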
Pseudocode
The following is a simple primality test in
pseudocode for not very large numbers.
function is_prime(n)
    if n ≤ 1
        return false
    else if n ≤ 3
        return true
    else if n mod 2 = 0 or n mod 3 = 0
        return false
    let i ← 5
    while i * i ≤ n
        if n mod i = 0 or n mod (i + 2) = 0
            return false
        i ← i + 6
    return true
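For readers who want a runnable version, the same algorithm translates directly into Python:

def is_prime(n):
    """Deterministic trial division with the 6k ± 1 optimization."""
    if n <= 1:
        return False
    if n <= 3:
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        # i and i + 2 run over the numbers of the form 6k - 1 and 6k + 1.
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True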
Heuristic tests
These
are tests that seem to work well in practice, but are unproven and
therefore are not, technically speaking, algorithms at all. The Fermat
test and the Fibonacci test are simple examples, and they are
very effective when combined.
John Selfridge has conjectured that if
p is an odd number, and
p ≡ ±2 (mod 5), then
p will be prime if both of the following hold:
- 2^(p−1) ≡ 1 (mod p),
- f_(p+1) ≡ 0 (mod p),
where
f_k is the
k-th
Fibonacci number. The first condition is the Fermat primality test using base 2.
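To make the two conditions concrete, here is a minimal Python sketch (the function names are ours; fib_pair uses the standard fast-doubling identities to compute Fibonacci numbers modulo p):

def fib_pair(k, m):
    """Return (f(k) mod m, f(k+1) mod m) by fast doubling."""
    if k == 0:
        return (0, 1)
    a, b = fib_pair(k // 2, m)
    c = a * ((2 * b - a) % m) % m    # f(2j)   = f(j) * (2 f(j+1) - f(j))
    d = (a * a + b * b) % m          # f(2j+1) = f(j)^2 + f(j+1)^2
    return (d, (c + d) % m) if k % 2 else (c, d)

def selfridge_conditions(p):
    """Check both conditions for an odd p with p ≡ ±2 (mod 5)."""
    assert p % 2 == 1 and p % 5 in (2, 3)
    fermat = pow(2, p - 1, p) == 1            # 2^(p-1) ≡ 1 (mod p)
    fibonacci = fib_pair(p + 1, p)[0] == 0    # f_(p+1) ≡ 0 (mod p)
    return fermat and fibonacci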
Selfridge,
Carl Pomerance, and
Samuel Wagstaff together offer $620 for a counterexample. The problem is still open as of September 11, 2015.
[2] Pomerance thinks that Selfridge's estate would probably pay his share of the reward.
[3]
The
Baillie–PSW primality test is another excellent heuristic, using the
Lucas sequence in place of the Fibonacci sequence. It has no known counterexamples.
Probabilistic tests
Probabilistic tests
are more rigorous than heuristics in that they provide provable bounds
on the probability of being fooled by a composite number. Many popular
primality tests are probabilistic tests. These tests use, apart from the
tested number
n, some other numbers
a which are chosen at random from some
sample space;
the usual randomized primality tests never report a prime number as
composite, but it is possible for a composite number to be reported as
prime. The probability of error can be reduced by repeating the test
with several independently chosen values of
a; for two commonly used tests, for
any composite
n at least half the
a's detect
n's compositeness, so
k repetitions reduce the error probability to at most 2^(−k), which can be made arbitrarily small by increasing
k.
The basic structure of randomized primality tests is as follows:
- Randomly pick a number a.
- Check some equality (corresponding to the chosen test) involving a and the given number n. If the equality fails to hold true, then n is a composite number, a is known as a witness for the compositeness, and the test stops.
- Repeat from step 1 until the required accuracy is achieved.
After one or more iterations, if
n is not found to be a composite number, then it can be declared
probably prime.
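This structure can be sketched as a small Python driver, where one_round stands for whichever test is chosen (all names here are illustrative):

import random

def probably_prime(n, one_round, k=20):
    """Run k independent rounds; one_round(n, a) returns False
    when a witnesses that n is composite."""
    if n < 4:
        return n in (2, 3)
    for _ in range(k):
        a = random.randrange(2, n - 1)   # step 1: pick a at random
        if not one_round(n, a):          # step 2: a is a witness
            return False                 # n is definitely composite
    return True                          # no witness found: probably prime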
Fermat primality test
The simplest probabilistic primality test is the
Fermat primality test (actually a compositeness test). It works as follows:
- Given an integer n, choose some integer a coprime to n and calculate a^(n − 1) modulo n. If the result is different from 1, then n is composite. If it is 1, then n may or may not be prime.
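In Python, one round of this test is essentially a single call to the built-in modular exponentiation (the function name is ours); plugged into the driver above, probably_prime(n, fermat_round) gives the repeated Fermat test:

def fermat_round(n, a):
    """True if a^(n-1) ≡ 1 (mod n); False proves n composite."""
    return pow(a, n - 1, n) == 1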
If
a^(n−1) (modulo
n) is 1 but
n is not prime, then
n is called a
pseudoprime to base
a. In practice, we observe that, if
a^(n−1) (modulo
n) is 1, then
n is usually prime. But here is a counterexample: if
n = 341 and
a = 2, then
2^340 ≡ 1 (mod 341),
even though 341 = 11·31 is composite. In fact, 341 is the smallest pseudoprime base 2 (see Figure 1 of
[4]).
There are only 21853 pseudoprimes base 2 that are less than 2.5×10^10 (see page 1005 of
[4]). This means that, for
n up to 2.5×10^10, if
2^(n−1) (modulo
n) equals 1, then
n is prime, unless
n is one of these 21853 pseudoprimes.
Some composite numbers (
Carmichael numbers) have the property that
a^(n − 1) is 1 (modulo
n) for every
a that is coprime to
n. The smallest example is
n = 561 = 3·11·17, for which
a^560 is 1 (modulo 561) for all
a
coprime to 561. Nevertheless, the Fermat test is often used if a rapid
screening of numbers is needed, for instance in the key generation phase
of the
RSA public key cryptographic algorithm.
Miller–Rabin and Solovay–Strassen primality test
The
Miller–Rabin primality test and
Solovay–Strassen primality test are more sophisticated variants, which detect all composites (once again, this means: for
every composite number
n, at least 3/4 (Miller–Rabin) or 1/2 (Solovay–Strassen) of numbers
a are witnesses of compositeness of
n). These are also compositeness tests.
The Miller–Rabin primality test works as follows: Given an integer
n, choose some positive integer
a <
n. Let 2^s·d =
n − 1, where
d is odd. If
a^d ≢ ±1 (mod n)
and
a^(2^r·d) ≢ −1 (mod n) for all 0 ≤ r ≤ s − 1,
then
n is composite and
a is a witness for the compositeness. Otherwise,
n may or may not be prime. The Miller–Rabin test is a
strong pseudoprime test (see PSW
[4] page 1004).
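A minimal Python sketch of one round follows (names ours); it can be plugged into the driver above as probably_prime(n, miller_rabin_round):

def miller_rabin_round(n, a):
    """False if a witnesses that odd n > 2 is composite, else True."""
    d, s = n - 1, 0
    while d % 2 == 0:                # write n - 1 = 2^s * d with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:         # a^d ≡ ±1 (mod n)
        return True
    for _ in range(s - 1):           # a^(2^r * d) for r = 1, ..., s - 1
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False                     # a is a witness for compositeness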
The Solovay–Strassen primality test uses another equality: Given an odd number
n, choose some integer
a <
n. If
a^((n−1)/2) ≢ (a/n) (mod n), where
(a/n) is the Jacobi symbol,
then
n is composite and
a is a witness for the compositeness. Otherwise,
n may or may not be prime. The Solovay–Strassen test is an
Euler pseudoprime test (see PSW
[4] page 1003).
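A corresponding sketch for this test needs a Jacobi-symbol routine, written out below using the standard binary algorithm since Python's standard library does not provide one (names ours); it can be used as probably_prime(n, solovay_strassen_round):

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    t = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):      # (2/n) = -1 iff n ≡ ±3 (mod 8)
                t = -t
        a, n = n, a                  # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

def solovay_strassen_round(n, a):
    """False if a witnesses that odd n is composite, else True."""
    j = jacobi(a, n) % n             # represent -1 as n - 1
    return j != 0 and pow(a, (n - 1) // 2, n) == j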
For each individual value of
a, the Solovay–Strassen test is weaker than the Miller–Rabin test. For example, if
n = 1905 and
a = 2, then the Miller–Rabin test shows that
n
is composite, but the Solovay–Strassen test does not. This is because
1905 is an Euler pseudoprime base 2 but not a strong pseudoprime base 2
(this is illustrated in Figure 1 of PSW
[4]).
Frobenius primality test
The
Miller–Rabin and the Solovay–Strassen primality tests are simple and
are much faster than other general primality tests. One method of
improving efficiency further in some cases is the
Frobenius pseudoprimality test;
a round of this test takes about three times as long as a round of
Miller–Rabin, but achieves a probability bound comparable to seven
rounds of Miller–Rabin.
The Frobenius test is a generalization of the
Lucas pseudoprime
test. One can also combine a Miller–Rabin type test with a Lucas
pseudoprime test to get a primality test that has no known
counterexamples. That is, this combined test has no known composite
n for which the test reports that
n is probably prime. One such test is the
Baillie–PSW primality test, several variations of which exist.
[5]
Other tests
Leonard Adleman and Ming-Deh Huang presented an errorless (but expected polynomial-time) variant of the
elliptic curve primality test. Unlike the other probabilistic tests, this algorithm produces a
primality certificate, and thus can be used to prove that a number is prime.
[6] The algorithm is prohibitively slow in practice.
If
quantum computers were available, primality could be tested
asymptotically faster than by using classical computers. A combination of
Shor's algorithm, an integer factorization method, with the
Pocklington primality test could solve the problem in Õ((log n)^3).
[7]
Fast deterministic tests
Near the beginning of the 20th century, it was shown that a corollary of
Fermat's little theorem could be used to test for primality.
[8] This resulted in the
Pocklington primality test.
[9] However, as this test requires a partial
factorization of
n − 1, the running time was still quite slow in the worst case. The first
deterministic primality test significantly faster than the naive methods was the
cyclotomy test; its runtime can be proven to be
O((log n)^(c·log log log n)), where
n is the number to test for primality and
c is a constant independent of
n.
Many further improvements were made, but none could be proven to have
polynomial running time. (Note that running time is measured in terms of
the size of the input, which in this case is ~ log
n, that being the number of bits needed to represent the number
n.) The
elliptic curve primality test can be proven to run in O((log n)^6), if some conjectures on
analytic number theory are true.
Similarly, under the
generalized Riemann hypothesis, the deterministic
Miller's test, which forms the basis of the probabilistic Miller–Rabin test, can be proved to run in
Õ((log n)^4).
[10]
In practice, this algorithm is slower than the other two for sizes of
numbers that can be dealt with at all. Because the implementation of
these two methods is rather difficult and creates a risk of programming
errors, slower but simpler tests are often preferred.
In 2002, the first provably unconditional deterministic polynomial time test for primality was invented by
Manindra Agrawal,
Neeraj Kayal, and
Nitin Saxena. The
AKS primality test runs in Õ((log n)^12) (improved to Õ((log n)^7.5)
[11] in the published revision of their paper), which can be further reduced to Õ((log n)^6) if the
Sophie Germain conjecture is true.
[12] Subsequently, Lenstra and Pomerance presented a version of the test which runs in time Õ((log n)^6) unconditionally.
[13]
Agrawal, Kayal and Saxena suggest a variant of their algorithm which would run in Õ((log n)^3) if
Agrawal's conjecture is true; however, a heuristic argument by Hendrik Lenstra and Carl Pomerance suggests that it is probably false.
[11] A modified version of Agrawal's conjecture, the Agrawal–Popovych conjecture,
[14] may still be true.
Complexity
In
computational complexity theory, the formal language corresponding to the prime numbers is denoted as PRIMES. It is easy to show that PRIMES is in
Co-NP: its complement COMPOSITES is in
NP because one can decide compositeness by nondeterministically guessing a factor.
In 1975,
Vaughan Pratt showed that there existed a certificate for primality that was checkable in polynomial time, and thus that PRIMES was in
NP, and therefore in
NP ∩ coNP. See
primality certificate for details.
The subsequent discovery of the Solovay–Strassen and Miller–Rabin algorithms put PRIMES in
coRP. In 1992, the Adleman–Huang algorithm
[6] reduced the complexity to
ZPP =
RP ∩
coRP, which superseded Pratt's result.
The
Adleman–Pomerance–Rumely primality test from 1983 put PRIMES in
QP (
quasi-polynomial time), which is not known to be comparable with the classes mentioned above.
Because of its tractability in practice, the existence of polynomial-time algorithms
assuming the Riemann hypothesis, and other similar evidence, it was long
suspected but not proven that primality could be solved in polynomial
time. The existence of the
AKS primality test finally settled this long-standing question and placed PRIMES in
P. However, PRIMES is not known to be
P-complete, and it is not known whether it lies in classes lying inside
P such as
NC or
L. It is known that PRIMES is not in
AC^0.
[15]
Number-theoretic methods
Certain number-theoretic methods exist for testing whether a number is prime, such as the
Lucas test and
Proth's test. These tests typically require factorization of
n + 1,
n
− 1, or a similar quantity, which means that they are not useful for
general-purpose primality testing, but they are often quite powerful
when the tested number
n is known to have a special form.
The Lucas test relies on the fact that the
multiplicative order of a number
a modulo
n is
n − 1 for a prime
n when
a is a
primitive root modulo n. If we can show
a is a primitive root modulo
n, we can show
n is prime.
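As a minimal sketch of this idea, assuming the distinct prime factors of n − 1 are already known (names ours):

def lucas_test(n, a, prime_factors):
    """True if a has multiplicative order n - 1, proving n prime.
    prime_factors must be the distinct primes dividing n - 1."""
    if pow(a, n - 1, n) != 1:
        return False                      # Fermat condition fails
    for q in prime_factors:
        if pow(a, (n - 1) // q, n) == 1:  # order divides (n - 1) / q
            return False
    return True

# Example: 17 - 1 = 16 has the single prime factor 2, and 3 is a
# primitive root modulo 17.
assert lucas_test(17, 3, [2])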