M23,209 is 36 CPU hours ≡ 1979
Welcome to 1979!
- Yeah. I know. :-) --Short Circuit 07:55, 21 February 2008 (MST)
List of known Mersenne primes
The table below lists all known Mersenne primes:
|#||n||Mn||Digits in Mn||Date of discovery||Discoverer|
|13||521||686479766…115057151||157||January 30 1952||Robinson|
|14||607||531137992…031728127||183||January 30 1952||Robinson|
|15||1,279||104079321…168729087||386||June 25 1952||Robinson|
|16||2,203||147597991…697771007||664||October 7 1952||Robinson|
|17||2,281||446087557…132836351||687||October 9 1952||Robinson|
|18||3,217||259117086…909315071||969||September 8 1957||Riesel|
|19||4,253||190797007…350484991||1,281||November 3 1961||Hurwitz|
|20||4,423||285542542…608580607||1,332||November 3 1961||Hurwitz|
|21||9,689||478220278…225754111||2,917||May 11 1963||Gillies|
|22||9,941||346088282…789463551||2,993||May 16 1963||Gillies|
|23||11,213||281411201…696392191||3,376||June 2 1963||Gillies|
|24||19,937||431542479…968041471||6,002||March 4 1971||Tuckerman|
|25||21,701||448679166…511882751||6,533||October 30 1978||Noll & Nickel|
|26||23,209||402874115…779264511||6,987||February 9 1979||Noll|
|27||44,497||854509824…011228671||13,395||April 8 1979||Nelson & Slowinski|
|28||86,243||536927995…433438207||25,962||September 25 1982||Slowinski|
|29||110,503||521928313…465515007||33,265||January 28 1988||Colquitt & Welsh|
|30||132,049||512740276…730061311||39,751||September 19 1983||Slowinski|
|31||216,091||746093103…815528447||65,050||September 1 1985||Slowinski|
|32||756,839||174135906…544677887||227,832||February 19 1992||Slowinski & Gage on Harwell Lab Cray-2|
|33||859,433||129498125…500142591||258,716||January 4 1994||Slowinski & Gage|
|34||1,257,787||412245773…089366527||378,632||September 3 1996||Slowinski & Gage|
|35||1,398,269||814717564…451315711||420,921||November 13 1996||GIMPS / Joel Armengaud |
|36||2,976,221||623340076…729201151||895,932||August 24 1997||GIMPS / Gordon Spence |
|37||3,021,377||127411683…024694271||909,526||January 27 1998||GIMPS / Roland Clarkson |
|38||6,972,593||437075744…924193791||2,098,960||June 1 1999||GIMPS / Nayan Hajratwala |
|39||13,466,917||924947738…256259071||4,053,946||November 14 2001||GIMPS / Michael Cameron |
|40*||20,996,011||125976895…855682047||6,320,430||November 17 2003||GIMPS / Michael Shafer |
|41*||24,036,583||299410429…733969407||7,235,733||May 15 2004||GIMPS / Josh Findley |
|42*||25,964,951||122164630…577077247||7,816,230||February 18 2005||GIMPS / Martin Nowak |
|43*||30,402,457||315416475…652943871||9,152,052||December 15 2005||GIMPS / Curtis Cooper & Steven Boone|
|44*||32,582,657||124575026…053967871||9,808,358||September 4 2006||GIMPS / Curtis Cooper & Steven Boone|
- The 45th and 46th Mersenne primes have been discovered; is the intention to keep this table up to date? --Lupus 17:24, 2 December 2008 (UTC)
- The above list seems to be copied in toto from the updated Wikipedia site:
The Java version is still limited by types. Integer.parseInt(args) limits p to 2147483647. Also the fact that isMersennePrime takes an int limits it there too. For full arbitrary precision every int needs to be a BigInteger or BigDecimal and a square root method will need to be created for them. The limitation is OK I think (I don't think we'll be getting up to 2^2147483647 - 1 anytime soon), but the claim "any arbitrary prime" is false because of the use of ints. --Mwn3d 07:45, 21 February 2008 (MST)
Speeding things up
The main loop in Lucas-Lehmer is doing (n*n) mod M where M=2^p-1, and p > 1. But we can do it without division.
We compute the remainder of a division by M. Now, intuitively, dividing by 2^p-1 is almost
like dividing by 2^p, except the latter is much faster since it's a shift.
Let's compute how much the two divisions differ.
We will call S = n*n. Notice that since the remainder mod M is computed again and again, the value of n must be < M at
the beginning of a loop, that is at most 2^p-2, thus S = n*n <= 2^(2*p) - 4*2^p + 4 = 2^p * (2^p - 2) + 4 - 2*2^p
When dividing S by M, you get quotient q1 and remainder r1 with S = q1*M + r1 and 0 <= r1 < M
When dividing S by M+1, you get likewise S = q2*(M+1) + r2 and 0 <= r2 <= M
In the latter, we divide by a larger number, so the quotient must be less, or maybe equal, that is, q2 <= q1.
Subtract the two equalities, giving
0 = (q2 - q1)*M + q2 + r2 - r1
(q1 - q2)*M = q2 + r2 - r1
Since S = 2^p * (2^p - 2) + 4 - 2*2^p
<= 2^p * (2^p - 2),
then the quotient q2 is at most 2^p - 2 (remember, when computing q2, we divide by M+1 = 2^p).
Now, 0 <= q2 <= 2^p - 2
0 <= r2 <= 2^p - 1
0 <= r1 <= 2^p - 2
Thus the right hand side is >= 0, and <= 2*2^p - 3.
The left hand side is a multiple of M = 2^p - 1.
Therefore, this multiple must be 0*M or 1*M: it cannot be 2*M = 2*2^p - 2,
which would be > 2*2^p - 3, and no higher multiple would fit either.
So we have proved that q1 - q2 = 0 or 1.
This means that division by 2^p is almost equivalent (regarding the quotient)
to dividing by 2^p-1: it's the same quotient, or maybe too short by 1.
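This claim about the quotient can be checked by brute force; a quick Python verification (purely illustrative, not part of any of the solutions):

```python
# Check: for S = n*n with 0 <= n <= 2^p - 2, the shifted quotient S >> p
# equals the true quotient S // M, or falls short by exactly 1.
for p in range(2, 12):
    M = (1 << p) - 1
    for n in range(M):          # n runs over 0 .. 2^p - 2
        S = n * n
        assert (S // M) - (S >> p) in (0, 1)
print("quotient claim verified for p < 12")
```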
Now for the remainder S mod M.
We start with a quotient q = S div 2^p, or simply q = S >> p (right shift).
The remainder is S - q*M = S - q*(2^p - 1) = S - q*2^p + q, and the multiplication by 2^p is a left shift.
And this remainder may be a bit too large, if our quotient is a bit too small (by one): in this case we must subtract M.
So, in pseudo-code, we are done if we do:
S = n*n
q = S >> p
r = S - (q << p) + q
if r >= M then r = r - M
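In Python, this pseudo-code translates directly; a sketch (the name mod_mersenne is illustrative, not taken from any of the solutions):

```python
def mod_mersenne(n, p, M):
    """Compute (n*n) mod M for M = 2**p - 1 without division."""
    S = n * n
    q = S >> p             # quotient of division by 2**p; may be 1 too small
    r = S - (q << p) + q   # S - q*M, since q*M = (q << p) - q
    if r >= M:             # the quotient was short by one: drop one extra M
        r -= M
    return r

p = 13
M = (1 << p) - 1
print(mod_mersenne(4094, p, M) == (4094 * 4094) % M)  # True
```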
We can go a bit further: taking S >> p then q << p simply keeps the higher bits of S.
But then we subtract these higher bits from S, so we only keep the lower bits,
that is, we do (S & mask), and this mask is simply M! (remember, M = 2^p - 1, a bit mask of p bits equal to "1")
The pseudo-code can thus be written
S = n*n
r = (S & M) + (S >> p)
if r >= M then r = r - M
And we have computed a remainder mod M without any division, only a few addition/subtraction/shift/bitwise-and,
which will be much faster (each has a linear time complexity).
How much faster? For exponents between 1 and 2000, in Python, the job is done 2.87 times as fast.
For exponents between 1 and 5000, it's 3.42 times as fast. And it gets better and better, since the complexity is lower.
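Putting it all together: a self-contained Lucas-Lehmer test in Python using the mask-based reduction (a sketch; the function name is illustrative):

```python
def is_mersenne_prime(p):
    """Lucas-Lehmer test for M = 2**p - 1, with the division-free reduction."""
    if p == 2:
        return True          # M2 = 3 is prime; the loop below assumes odd p
    M = (1 << p) - 1         # doubles as the p-bit mask
    s = 4
    for _ in range(p - 2):
        S = s * s - 2
        s = (S & M) + (S >> p)   # remainder mod M, possibly one M too large
        if s >= M:
            s -= M
    return s == 0

# Known Mersenne prime exponents; the table above picks up at #13 (p = 521):
print(all(is_mersenne_prime(p) for p in [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521]))  # True
print(is_mersenne_prime(11))  # False: 2**11 - 1 = 2047 = 23 * 89
```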
May 2015 - fixed small bugs in both Python implementations. In the first, execution failed (Python 3) without a cast to int in the test. In the second, there was a typo - an 'r' should have been 's'.
- Timing for some solutions for 2..11213:
|Time (s)||Solution|
|871.2||Python without optimizations|
|314.7||Python with optimizations|
|124.7||Perl Math::GMP without optimizations|
|61.8||Perl Math::GMP with optimizations|
|33.0||Python using gmpy2 (skipping non-primes)|
|14.2||C/GMP with even more optimizations|
|13.3||Perl Math::Prime::Util::GMP (source of C/GMP code)|
Rust solution for 2..11213 -> 8.6 s