Hamming numbers: Difference between revisions

(→‎{{header|Picat}}: Added {{out}}. Corrected the time for the larger problem.)
(Added Easylang)
 
(43 intermediate revisions by 15 users not shown)
Line 31:
{{trans|Python}}
 
<syntaxhighlight lang="11l">F hamming(limit)
V h = [1] * limit
V (x2, x3, x5) = (2, 3, 5)
Line 52:
 
print((1..20).map(i -> hamming(i)))
print(hamming(1691))</syntaxhighlight>
 
{{out}}
Line 61:
 
=={{header|360 Assembly}}==
<syntaxhighlight lang="360asm">* Hamming numbers 12/03/2017
HAM CSECT
USING HAM,R13 base register
Line 168:
HH DS 1691F array h(1691)
YREGS
END HAM</syntaxhighlight>
{{out}}
<pre>
Line 202:
It will fail to compute the billionth number because we use an array on the stack to store all numbers. It is possible to get rid of this array, though it will make the code slightly less readable.
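For comparison, the bounded-array algorithm this entry uses can be sketched in Python (an illustrative sketch, not part of the Ada entry; Python's arbitrary-precision integers and heap-allocated lists sidestep the stack-array concern, though the O(n) storage remains):

```python
# Sketch of the classic bounded-array algorithm: h holds the first n Hamming
# numbers; i, j, k are back-pointers into h for the next unused multiples
# of 2, 3 and 5.
def hamming(n):
    h = [1] * n
    i = j = k = 0
    x2, x3, x5 = 2, 3, 5
    for m in range(1, n):
        h[m] = min(x2, x3, x5)
        # advance every stream that produced the minimum, removing duplicates
        if h[m] == x2: i += 1; x2 = 2 * h[i]
        if h[m] == x3: j += 1; x3 = 3 * h[j]
        if h[m] == x5: k += 1; x5 = 5 * h[k]
    return h[n - 1]

print(hamming(1691))  # → 2125764000
```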
 
<syntaxhighlight lang="ada">
with Ada.Numerics.Generic_Elementary_Functions;
with Ada.Text_IO; use Ada.Text_IO;
Line 316:
Put_Line (Hamming.Image (Hamming.Compute (1_000_000)));
end Hamming;
</syntaxhighlight>
{{out}}
<pre>
Line 327:
{{works with|Algol 68 Genie 1.19.0}}
Hamming numbers are generated in a trivial iterative way as in the Python version below. This program keeps the series needed to generate the numbers as short as possible using flexible rows; on the downside, it spends considerable time on garbage collection.
<syntaxhighlight lang="algol68">PR precision=100 PR
 
MODE SERIES = FLEX [1 : 0] UNT, # Initially, no elements #
Line 363:
OD;
print ((newline, whole (hamming number (1 691), 0)));
print ((newline, whole (hamming number (1 000 000), 0)))</syntaxhighlight>
{{out}}
<pre>
Line 375:
<br>
This uses the algorithm from Dr Dobbs (as in the Python version). The Coffee Script solution has some notes on how it works.
<syntaxhighlight lang="algolw">begin
% returns the minimum of a and b %
integer procedure min ( integer value a, b ) ; if a < b then a else b;
Line 415:
write( H( MAX_HAMMING ) )
end
end.</syntaxhighlight>
{{out}}
<pre>
Line 424:
=={{header|Arturo}}==
 
<syntaxhighlight lang="rebol">hamming: function [limit][
if limit=1 -> return 1
h: map 0..limit-1 'z -> 1
Line 449:
print map 1..20 => hamming
print hamming 1691
print hamming 1000000</syntaxhighlight>
 
{{out}}
Line 458:
 
=={{header|ATS}}==
<syntaxhighlight lang="ats">
//
// How to compile:
Line 548:
//
} (* end of [main0] *)
</syntaxhighlight>
{{out}}
<pre>
Line 575:
 
=={{header|AutoHotkey}}==
<syntaxhighlight lang="autohotkey">SetBatchLines, -1
Msgbox % hamming(1,20)
Msgbox % hamming(1690)
Line 625:
 
return Output
}</syntaxhighlight>
 
=={{header|AWK}}==
<syntaxhighlight lang="awk">
# syntax: gawk -M -f hamming_numbers.awk
BEGIN {
Line 654:
return((x < y) ? x : y)
}
</syntaxhighlight>
{{out}}
<pre>
Line 661:
1000000: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000
</pre>
 
=={{header|BASIC256}}==
{{trans|FreeBASIC}}
<syntaxhighlight lang="freebasic">print "The first 20 Hamming numbers are :"
for i = 1 to 20
print Hamming(i);" ";
next i
 
print
print "H( 1691) = "; Hamming(1691)
end
 
function min(a, b)
if a < b then return a else return b
end function
 
function Hamming(limit)
dim h(1000000)
 
h[0] = 1
x2 = 2 : x3 = 3 : x5 = 5
i = 0 : j = 0 : k = 0
for n = 1 to limit
h[n] = min(x2, min(x3, x5))
if x2 = h[n] then i += 1: x2 = 2 *h[i]
if x3 = h[n] then j += 1: x3 = 3 *h[j]
if x5 = h[n] then k += 1: x5 = 5 *h[k]
next n
return h[limit -1]
end function</syntaxhighlight>
 
 
=={{header|BBC BASIC}}==
<syntaxhighlight lang="bbcbasic"> @% = &1010
FOR h% = 1 TO 20
PRINT "H("; h% ") = "; FNhamming(h%)
Line 683 ⟶ 714:
IF m = x5 k% += 1 : x5 = 5 * h%(k%)
NEXT
= h%(l%-1)</syntaxhighlight>
{{out}}
<pre>
Line 710 ⟶ 741:
 
=={{header|Bc}}==
<syntaxhighlight lang="bc">cat hamming_numbers.bc
define min(x,y) {
if (x < y) {
Line 740 ⟶ 771:
hamming(1000000)
quit
</syntaxhighlight>
 
{{out}}
Line 776 ⟶ 807:
=={{header|Bracmat}}==
{{trans|D}}
<syntaxhighlight lang="bracmat">( ( hamming
= x2 x3 x5 n i j k min
. tbl$(h,!arg) { This creates an array. Arrays are always global in Bracmat. }
Line 810 ⟶ 841:
& out$(hamming$1691)
& out$(hamming$1000000)
);</syntaxhighlight>
{{out}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
2125764000
519312780448388736089589843750000000000000000000000000000000000000000000000000000000</pre>
 
=={{header|Bruijn}}==
{{trans|Haskell}}
<code>n1000000</code> takes a very long time but eventually reduces to the correct result.
<syntaxhighlight lang="bruijn">
:import std/Combinator .
:import std/Number .
:import std/List .
 
merge y [[[∅?1 0 (1 [[2 [[go]]]])]]]
go 3 <? 1 (3 : (6 2 4)) (1 : (6 5 0))
 
# classic version while avoiding duplicate generation
hammings-classic (+1) : (foldr u empty ((+2) : ((+3) : {}(+5))))
u [[y [merge 1 ((mul 2) <$> ((+1) : 0))]]]
 
:test ((hammings-classic !! (+42)) =? (+162)) ([[1]])
 
# enumeration by a chain of folded merges (faster)
hammings-folded ([(0 ∘ a) ∘ (0 ∘ b)] (foldr merge1 empty)) $ c
merge1 [[1 [[1 : (merge 0 2)]]]]
a iterate (map (mul (+5)))
b iterate (map (mul (+3)))
c iterate (mul (+2)) (+1)
 
:test ((hammings-folded !! (+42)) =? (+162)) ([[1]])
 
# --- output ---
 
main [first-twenty : (n1691 : {}n1000000)]
first-twenty take (+20) hammings-folded
n1691 hammings-folded !! (+1690)
n1000000 hammings-folded !! (+999999)
</syntaxhighlight>
 
=={{header|C}}==
Using a min-heap to keep track of numbers. Does not handle big integers.
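The min-heap idea is easy to model in Python with the standard `heapq` module (an illustrative sketch, not the C code; a seen-set replaces the C version's in-heap duplicate handling, and Python's integers remove the big-number limitation mentioned above):

```python
import heapq

def hamming_heap(n):
    """Return the n'th Hamming number using a min-heap of pending candidates."""
    heap, seen = [1], {1}
    for _ in range(n - 1):
        x = heapq.heappop(heap)        # smallest candidate is the next number
        for p in (2, 3, 5):
            if x * p not in seen:      # avoid pushing duplicates like 2*3 and 3*2
                seen.add(x * p)
                heapq.heappush(heap, x * p)
    return heap[0]
```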
<syntaxhighlight lang="c">#include <stdio.h>
#include <stdlib.h>
 
Line 873 ⟶ 938:
/* free(q); */
return 0;
}</syntaxhighlight>
===Alternative===
Standard algorithm. Numbers are stored as exponents of factors instead of big integers, while GMP is only used for display. It's much more efficient this way.
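The same storage trick can be sketched in Python: carry a float log2 for ordering plus the exponent triple, and build the big integer only once at the end (a sketch of the idea, not the C code; all names are mine):

```python
from math import log2

def hamming_log(n):
    """Classic algorithm on (log2, exp2, exp3, exp5) tuples; the big number
    is only constructed at the end as 2**e2 * 3**e3 * 5**e5."""
    lb3, lb5 = log2(3.0), log2(5.0)
    def mul2(t): return (t[0] + 1.0, t[1] + 1, t[2], t[3])
    def mul3(t): return (t[0] + lb3, t[1], t[2] + 1, t[3])
    def mul5(t): return (t[0] + lb5, t[1], t[2], t[3] + 1)
    h = [(0.0, 0, 0, 0)] * n            # (log2 approximation, exponent triple)
    x2, x3, x5 = mul2(h[0]), mul3(h[0]), mul5(h[0])
    i = j = k = 0
    for m in range(1, n):
        c = min(x2, x3, x5, key=lambda t: t[0])   # order by the float log only
        h[m] = c
        # advance by exact exponent comparison, never by float equality
        if c[1:] == x2[1:]: i += 1; x2 = mul2(h[i])
        if c[1:] == x3[1:]: j += 1; x3 = mul3(h[j])
        if c[1:] == x5[1:]: k += 1; x5 = mul5(h[k])
    _, e2, e3, e5 = h[n - 1]
    return 2 ** e2 * 3 ** e3 * 5 ** e5
```

The float log is only used for ordering; at the sizes tested here the gaps between distinct Hamming numbers' logarithms dwarf any accumulated rounding error.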
<syntaxhighlight lang="c">#include <stdio.h>
#include <stdlib.h>
#include <string.h>
Line 971 ⟶ 1,036:
printf("10,000,000: "); show_ham(get_ham(1e7));
return 0;
}</syntaxhighlight>
{{out}}
<pre> 1,691: 2125764000
Line 979 ⟶ 1,044:
=={{header|C sharp|C#}}==
{{trans|D}}
<syntaxhighlight lang="csharp">using System;
using System.Numerics;
using System.Linq;
Line 1,009 ⟶ 1,074:
}
}
}</syntaxhighlight>
{{out}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
Line 1,017 ⟶ 1,082:
===Generic version for any set of numbers===
The algorithm is similar to the one above.
<syntaxhighlight lang="csharp">using System;
using System.Numerics;
using System.Linq;
Line 1,053 ⟶ 1,118:
}
}
}</syntaxhighlight>
 
{{out}}
Line 1,081 ⟶ 1,146:
--Mike Lorenz
 
<syntaxhighlight lang="csharp">using System;
using System.Linq;
using System.Numerics;
Line 1,153 ⟶ 1,218:
}
}
}</syntaxhighlight>
 
{{out}}
Line 1,174 ⟶ 1,239:
I wanted to fix the enumerator (old) version, as it wasn't working. It became a bit of an obsession... after a few iterations I came up with the following, which is the fastest C# version on my computer - your mileage may vary. It combines the speed of the Log method; Log(2)+Log(3)=Log(2*3) to help determine which is the next one to use. Then I have added some logic (using the series property) to ensure that exponent sets are never duplicated - which speeds the calculations up a bit.... Adding this trick to the Fast Version will probably result in the fastest version, but I'll leave that to someone else to implement. Finally it's all enumerated through a crazy one-way-linked-list-type-structure that only exists as long as the enumerator and is left up to the garbage collector to remove the bits no longer needed... I hope it's commented enough... follow it if you dare!
 
<syntaxhighlight lang="csharp">using System;
using System.Collections.Generic;
using System.Linq;
Line 1,297 ⟶ 1,362:
}
}
</syntaxhighlight>
{{out}}
<pre>5-Smooth:
Line 1,324 ⟶ 1,389:
It isn't nearly as fast as a Haskell, Scala or even Clojure and Scheme (GambitC) versions of this algorithm, being about five times slower is primarily due to its use of many small heap based instances of classes, both for the LazyList's and for closures (implemented using at least one class to hold the captured free variables) and the inefficiency of DotNet's allocation and garbage collection of many small instance objects (although about twice as fast as F#'s implementation, whose closures must require even more small object instances); it seems Haskell and the (Java) JVM are much more efficient at doing these allocations/garbage collections for many small objects. The slower speed to a relatively minor extent is also due to less efficient BigInteger operations:
{{trans|Haskell}}
<syntaxhighlight lang="csharp">using System;
using System.Collections;
using System.Collections.Generic;
Line 1,404 ⟶ 1,469:
}
}
}</syntaxhighlight>
 
===Fast enumerating logarithmic version===
Line 1,412 ⟶ 1,477:
The following code eliminates or reduces all of those problems by being non-generic, eliminating duplicate calculations, saving memory by "draining" the growable List's used in blocks as back pointer indexes are used (thus using memory at the same rate as the functional version), thus avoiding excessive allocations/garbage collections; it also is enumerates through the Hamming numbers although that comes at a slight cost in overhead function calls:
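The drained-list scheme is easiest to see in a small model; here is one in Python (my own sketch of the idea, not the C# code, using `deque` so the consumed front of each intermediate list is physically discarded):

```python
from collections import deque
from itertools import islice

def hammings():
    """Each yielded value is fed back as 2x into s2 and, via mrg, as 3x into
    s3; popleft drains consumed heads so memory tracks only the live spans."""
    s2, s3 = deque([1]), deque()
    s5, mrg = 5, 3
    while True:
        if s2[0] < mrg:
            v = s2.popleft()
        else:
            v = mrg
            s3.append(3 * v)
            if s3[0] < s5:
                mrg = s3.popleft()
            else:
                mrg = s5
                s5 *= 5
        s2.append(2 * v)
        yield v
```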
{{trans|Nim}}
<syntaxhighlight lang="csharp">using System;
using System.Collections;
using System.Collections.Generic;
Line 1,429 ⟶ 1,494:
private const double lb5 = 2.3219280948873623478703194294894; // Math.Log(5) / Math.Log(2);
private struct logrep {
public double lg;
public uint x2, x3, x5;
public logrep(double lg, uint x, uint y, uint z) {
this.lg = lg; this.x2 = x; this.x3 = y; this.x5 = z;
}
public static bool operator <(logrep x, logrep y) {
return x.lg < y.lg;
}
public static bool operator >(logrep x, logrep y) {
return x.lg > y.lg;
}
public logrep mul2() {
return new logrep(this.lg + 1.0, this.x2 + 1, this.x3, this.x5);
}
public logrep mul3() {
return new logrep(this.lg + lb3, this.x2, this.x3 + 1, this.x5);
}
public logrep mul5() {
return new logrep(this.lg + lb5, this.x2, this.x3, this.x5 + 1);
}
}
public IEnumerator<Tuple<uint, uint, uint>> GetEnumerator() {
var one = new logrep();
var s2 = new List<logrep>(); var s3 = new List<logrep>();
s2.Add(one); s3.Add(one.mul3());
var s5 = one.mul5(); var mrg = one.mul3();
var s2hdi = 0; var s3hdi = 0;
while (true) {
if (s2hdi >= s2.Count) { s2.RemoveRange(0, s2hdi); s2hdi = 0; }
var v = s2[s2hdi];
if (v.lg < mrg.lg) { s2.Add(v.mul2()); s2hdi++; }
else {
if (s3hdi >= s3.Count) { s3.RemoveRange(0, s3hdi); s3hdi = 0; }
v = mrg; s2.Add(v.mul2()); s3.Add(v.mul3());
s3hdi++; var chkv = s3[s3hdi];
if (chkv.lg < s5.lg) { mrg = chkv; }
else { mrg = s5; s5 = s5.mul5(); s3hdi--; }
}
yield return Tuple.Create(v.x2, v.x3, v.x5);
}
}
Line 1,483 ⟶ 1,539:
.Select(t => HammingsLogArr.trival(t))
.ToArray()));
Console.WriteLine(HammingsLogArr.trival(NthHamming(new HammingsLogArr()).findNthElementAt((int)1691 - 1)));
 
var n = 1000000UL;
Line 1,511 ⟶ 1,567:
Console.WriteLine();
}
}</syntaxhighlight>
{{output}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
Line 1,527 ⟶ 1,583:
The above code is about as fast as one can go generating sequences; however, if one is willing to forego sequences and just calculate the nth Hamming number (again), then some reading on the relationship between the size of numbers to the sequence numbers is helpful (Wikipedia: regular number). One finds that there is a very distinct relationship and that it quite quickly reduces to quite a small error band proportional to the log of the output value for larger ranges. Thus, the following code just scans for logarithmic representations to insert into a sequence for this top error band and extracts the correct nth representation from that band. It reduces time complexity to O(n^(2/3)) from O(n) for the sequence versions, but even more amazingly, reduces memory requirements to O(n^(1/3)) from O(n^(2/3)) and thus makes it possible to calculate very large values in the sequence on common personal computers. The code is as follows:
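The error-band idea can be sketched briefly in Python (a hedged sketch under assumptions: the band half-width `2.0/lgest` below is a deliberately generous guess of mine, not the tuned constants used by the fast implementations, and the sketch simply raises if the band ever misses):

```python
from math import log2

def nth_hamming(n):
    """Estimate log2 of the n'th Hamming number, keep only the candidates
    whose log2 falls inside a narrow band around it, sort, and index."""
    lb3, lb5 = log2(3.0), log2(5.0)
    fctr = 6.0 * lb3 * lb5
    crctn = log2(30.0) / 2.0          # correction to the continuous estimate
    lgest = (fctr * n) ** (1.0 / 3.0) - crctn
    w = 2.0 / lgest                   # assumed (generous) band half-width
    if 2.0 * w >= 1.0:
        raise ValueError("n too small for this sketch")
    lghi, lglo = lgest + w, lgest - w
    count, band = 0, []
    k = 0
    while k * lb5 <= lghi:
        j = 0
        while k * lb5 + j * lb3 <= lghi:
            lg = k * lb5 + j * lb3
            i = int(lghi - lg)        # exponents of 2 from 0..i all fit
            count += i + 1
            if lg + i >= lglo:        # band is < 1 wide: only the top one can be in it
                band.append((lg + i, (i, j, k)))
            j += 1
        k += 1
    band.sort(reverse=True)
    idx = count - n                   # n'th smallest == idx'th largest counted
    if not 0 <= idx < len(band):
        raise OverflowError("band missed the target; widen w")
    e2, e3, e5 = band[idx][1]
    return 2 ** e2 * 3 ** e3 * 5 ** e5
```

Only O(n^(2/3)) triples are scanned and only the O(n^(1/3))-sized band is retained, which is the source of the complexity reduction described above.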
{{trans|Nim}}
<syntaxhighlight lang="csharp">using System;
using System.Collections;
using System.Collections.Generic;
Line 1,616 ⟶ 1,672:
Console.WriteLine();
}
}</syntaxhighlight>
 
The output is the same as above except that the time is too small to be measured. The billionth number in the sequence can be calculated in just about 10 milliseconds, the trillionth in about one second, the thousand trillionth in about a hundred seconds, and it should be possible to calculate the 10^19th value in less than a day (untested) on common personal computers. The (2^64 - 1)th value (18446744073709551615) cannot be calculated due to a slight overflow problem as it approaches that limit.
Line 1,622 ⟶ 1,678:
=={{header|C++}}==
===C++11 For Each Generator===
<syntaxhighlight lang="cpp">
#include <iostream>
#include <vector>
Line 1,645 ⟶ 1,701:
}
};
</syntaxhighlight>
 
===5-Smooth===
<syntaxhighlight lang="cpp">
int main() {
int count = 1;
Line 1,660 ⟶ 1,716:
return 0;
}
</syntaxhighlight>
Produces:
<pre>
Line 1,668 ⟶ 1,724:
 
===7-Smooth===
<syntaxhighlight lang="cpp">
int main() {
int count = 1;
Line 1,678 ⟶ 1,734:
return 0;
}
</syntaxhighlight>
Produces:
<pre>
Line 1,689 ⟶ 1,745:
{{trans|Haskell}}
{{works with|C++11}}
<syntaxhighlight lang="cpp">#include <chrono>
#include <iostream>
#include <gmpxx.h>
Line 1,784 ⟶ 1,840:
auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
std::cout << *hmgs.head << " in " << ms.count() << "milliseconds.\n";
}</syntaxhighlight>
 
{{output}}
Line 1,802 ⟶ 1,858:
{{trans|Rust}}
{{works with|C++11}}
<syntaxhighlight lang="cpp">#include <chrono>
#include <iostream>
#include <vector>
Line 1,865 ⟶ 1,921:
auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
std::cout << *hmgitr << " in " << ms.count() << "milliseconds.\n";
}</syntaxhighlight>
 
{{output}}
Line 1,880 ⟶ 1,936:
{{trans|Haskell}}[[Hamming_numbers#Avoiding_generation_of_duplicates]]
{{works with|Chapel|1.24.1|or greater, maybe lesser}}
<syntaxhighlight lang="chapel">use BigInteger; use Time;
 
// Chapel doesn't have closure functions that can capture variables from
Line 1,950 ⟶ 2,006:
for h in hammings() { cnt += 1; if cnt < 1000000 then continue; write(h); break; }
timer.stop(); writeln(".\nThis last took ",
timer.elapsed(TimeUnits.milliseconds), " milliseconds.");</syntaxhighlight>
 
{{out}}
Line 1,961 ⟶ 2,017:
The above time is as run on an Intel Skylake i5-6500 at 3.6 GHz (turbo, single-threaded).
 
It isn't as fast as the following versions due to the many memory allocations and de-allocations typical of functional forms of code, but it is in the order of speed of many actual functional languages and faster than some, depending on how well their memory management is adapted to this programming paradigm, and also because the "bigint" implementation isn't as fast as the "gmp" package used by many languages for multi-precision integer calculations.
 
This shows that the functional forms of most algorithms can be translated into Chapel, although some concessions need to be made for the functional facilities that Chapel doesn't have. '''However, there is one major problem in using languages such as this for functional algorithms when such languages do not have cyclic reference breaking capabilities: the code will leak memory due to the cyclic memory reference cycles and it is perhaps impossible to change the functional algorithm to manually free, even though the code uses "shared"/reference counting facilities.'''
Line 1,969 ⟶ 2,025:
In general, we can convert functional algorithms into imperative style algorithms using Array's to emulate memoizing lazy lists and simple mutable variables to express the recursion within a while loop, as in the following codes (as also used when necessary in the above code). Rather than implement the true Dijkstra merge algorithm which is slower and uses more memory, the following codes implement the better non-duplicating algorithm:
{{trans|Nim}}
<syntaxhighlight lang="chapel">use BigInteger; use Time;
 
iter nodupsHamming(): bigint {
var s2dom = { 0 .. 1023 }; var s2: [s2dom] bigint; // init so can double!
var s3dom = { 0 .. 1023 }; var s3: [s3dom] bigint; // init so can double!
s2[0] = 1: bigint; s3[0] = 3: bigint;
var x5 = 5: bigint; var mrg = 3: bigint;
var s2hdi, s2tli, s3hdi, s3tli: int;
while true {
s2tli += 1;
if s2hdi + s2hdi >= s2tli { // move in place to avoid allocation!
s2[0 .. s2tli - s2hdi - 1] = s2[s2hdi .. s2tli - 1];
s2tli -= s2hdi; s2hdi = 0; }
const s2sz = s2.size;
if s2tli >= s2sz then s2dom = { 0 .. s2sz + s2sz - 1 };
var rslt: bigint; const s2hd = s2[s2hdi];
if s2hd < mrg { rslt = s2hd; s2hdi += 1; }
else {
s3tli += 1;
if s3hdi + s3hdi >= s3tli { // move in place to avoid allocation!
s3[0 .. s3tli - s3hdi - 1] = s3[s3hdi .. s3tli - 1];
s3tli -= s3hdi; s3hdi = 0; }
const s3sz = s3.size;
if s3tli >= s3sz then s3dom = { 0 .. s3sz + s3sz - 1 };
rslt = mrg; s3[s3tli] = rslt * 3;
s3hdi += 1; const s3hd = s3[s3hdi];
if s3hd < x5 { mrg = s3hd; }
else { mrg = x5; x5 = x5 * 5; s3hdi -= 1; }
}
s2[s2tli] = rslt * 2;
yield rslt;
}
}
Line 2,000 ⟶ 2,061:
// test it...
write("The first 20 hamming numbers are: ");
var cnt = 0: uint(64);
for h in nodupsHamming() {
if cnt >= 20 then break; cnt += 1; write(" ", h); }

write("\nThe 1691st hamming number is "); cnt = 1;
for h in nodupsHamming() {
if cnt >= 1691 { writeln(h); break; } cnt += 1; }

write("The millionth hamming number is ");
var timer: Timer; cnt = 1;
timer.start(); var rslt: bigint;
for h in nodupsHamming() {
if cnt >= 1000000 { rslt = h; break; } cnt += 1; }
timer.stop();
write(rslt);
writeln(".\nThis last took ",
timer.elapsed(TimeUnits.milliseconds), " milliseconds.");</syntaxhighlight>
{{out}}
<pre>The first 20 hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
The 1691st hamming number is 2125764000
The millionth hamming number is 519312780448388736089589843750000000000000000000000000000000000000000000000000000000.
This last took 125114.51867 milliseconds.</pre>
 
The above time is as run on an Intel Skylake i5-6500 at 3.6 GHz (turbo, single-threaded).
Line 2,024 ⟶ 2,090:
'''Alternate version using logarithm approximations for sorting order'''
 
To greatly reduce the time used for BigInteger calculations, the following algorithm uses logarithmic approximations (real(64)) for the internal sorting calculations and only outputs the final answer(s) as BigInteger(s):
{{trans|Nim}}
<syntaxhighlight lang="chapel">use BigInteger; use Math; use Time;
 
config const nth: uint(64) = 1000000;
 
const lb2 = 1: real(64); // log base 2 of 2!
const lb3 = log2(3: real(64)); const lb5 = log2(5: real(64));
 
record LogRep {
var lg: real(64); var x2: uint(32);
var x3: uint(32); var x5: uint(32);
inline proc mul2(): LogRep {
while vm > 0 { if vm & 1 then rslt *= bsm; bsm *= bsm; vm >>= 1; }
return new LogRep(this.lg + lb2, this.x2 + 1, this.x3, this.x5); }
return rslt;
inline proc mul3(): LogRep {
return new LogRep(this.lg + lb3, this.x2, this.x3 + 1, this.x5); }
inline proc mul5(): LogRep {
return new LogRep(this.lg + lb5, this.x2, this.x3, this.x5 + 1); }
proc lr2bigint(): bigint {
proc xpnd(bs: uint, v: uint(32)): bigint {
var rslt = 1: bigint; var bsm = bs: bigint; var vm = v: uint;
while vm > 0 { if vm & 1 then rslt *= bsm; bsm *= bsm; vm >>= 1; }
return rslt;
}
return xpnd(2: uint, this.x2) *
xpnd(3: uint, this.x3) * xpnd(5: uint, this.x5);
}
proc writeThis(lr) throws {
lr <~> this.lr2bigint();
}
}
operator <(const ref a: LogRep, const ref b: LogRep): bool { return a.lg < b.lg; }
const one = new LogRep(0, 0, 0, 0);
 
iter nodupsHammingLog(): LogRep {
var s2dom = { 0 .. 1023 }; var s2: [s2dom] LogRep; // init so can double!
var s3dom = { 0 .. 1023 }; var s3: [s3dom] LogRep; // init so can double!
s2[0] = one; s3[0] = one.mul3();
var x5 = one.mul5(); var mrg = one.mul3();
var s2hdi, s2tli, s3hdi, s3tli: int;
while true {
s2tli += 1;
if s2hdi + s2hdi >= s2tli { // move in place to avoid allocation!
s2[0 .. s2tli - s2hdi - 1] = s2[s2hdi .. s2tli - 1];
s2tli -= s2hdi; s2hdi = 0; }
const s2sz = s2.size;
if s2tli >= s2sz then s2dom = { 0 .. s2sz + s2sz - 1 };
var rslt: LogRep; const s2hd = s2[s2hdi];
if s2hd.lg < mrg.lg { rslt = s2hd; s2hdi += 1; }
else {
s3tli += 1;
if s3hdi + s3hdi >= s3tli { // move in place to avoid allocation!
s3[0 .. s3tli - s3hdi - 1] = s3[s3hdi .. s3tli - 1];
s3tli -= s3hdi; s3hdi = 0; }
const s3sz = s3.size;
if s3tli >= s3sz then s3dom = { 0 .. s3sz + s3sz - 1 };
rslt = mrg; s3[s3tli] = mrg.mul3(); s3hdi += 1;
const s3hd = s3[s3hdi];
if s3hd.lg < x5.lg { mrg = s3hd; }
else { mrg = x5; x5 = x5.mul5(); s3hdi -= 1; }
}
s2[s2tli] = rslt.mul2();
yield rslt;
}
}
 
// test it...
write("The first 20 hamming numbers are: ");
var cnt = 0: uint(64);
for h in nodupsHammingLog() {
if cnt >= 20 then break; cnt += 1; write(" ", h); }

write("\nThe 1691st hamming number is "); cnt = 1;
for h in nodupsHammingLog() {
if cnt >= 1691 { writeln(h); break; } cnt += 1; }

write("The ", nth, "th hamming number is ");
var timer: Timer; cnt = 1;
timer.start(); var rslt: LogRep;
for h in nodupsHammingLog() {
if cnt >= nth { rslt = h; break; } cnt += 1; }
timer.stop();
write(rslt);
writeln(".\nThis last took ",
timer.elapsed(TimeUnits.milliseconds), " milliseconds.");</syntaxhighlight>
{{out}}
<pre>The first 20 hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
The 1691st hamming number is 2125764000
The 1000000th hamming number is 519312780448388736089589843750000000000000000000000000000000000000000000000000000000.
This last took 86.926372 milliseconds.</pre>
 
The above time is as run on an Intel Skylake i5-6500 at 3.6 GHz (turbo, single-threaded).
 
As you can see, the time expended for the required task is almost too fast to measure, meaning that much of the time expended in previous versions was just the time doing multi-precision arithmetic; the program takes about 138.41 seconds to find the billionth Hamming number.
 
===Very Fast Algorithm Using a Sorted Error Band===
Line 2,111 ⟶ 2,189:
{{trans|Nim}}
{{works with|Chapel|1.22|for zero based tuple indices}}
<syntaxhighlight lang="chapel">use BigInteger; use Math; use Sort; use Time;
 
config const nth = 1000000: uint(64);
Line 2,195 ⟶ 2,273:
else write("It's too long to print");
writeln("!\nThis last took ",
timer.elapsed(TimeUnits.milliseconds), " milliseconds.");</syntaxhighlight>
{{out}}
<pre>The first 20 Hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
Line 2,211 ⟶ 2,289:
{{trans|Nim}}
{{works with|Chapel|1.22|for zero based tuple indices}}
<syntaxhighlight lang="chapel">use BigInteger; use Math; use Sort; use Time;
 
config const nth = 1000000: uint(64);
Line 2,300 ⟶ 2,378:
else write("It's too long to print");
writeln("!\nThis last took ",
timer.elapsed(TimeUnits.milliseconds), " milliseconds.");</syntaxhighlight>
 
The above code has the same output as before and doesn't take an appreciably different amount time to execute; it can produce the billionth Hamming number in about 31 milliseconds, the trillionth in about 0.546 seconds and the thousand trillionth (which is now possible without error) in about 39.36 seconds. Thus, it successfully extends the usable range of the algorithm to near the maximum expressible 64 bit number in a few hours of execution time on a modern desktop computer although the (2^64 - 1)th Hamming number can't be found due to the restrictions of the expressible range limit in sizing of the required error band.
Line 2,308 ⟶ 2,386:
=={{header|Clojure}}==
This version implements Dijkstra's merge solution, so is closely related to the Haskell version.
<syntaxhighlight lang="clojure">(defn smerge [xs ys]
(lazy-seq
(let [x (first xs),
Line 2,324 ⟶ 2,402:
(smerge (map #(*' 3 %) hamming))
(smerge (map #(*' 2 %) hamming))
(cons 1))))</syntaxhighlight>
Note that the above version uses a lot of space and time after calculating a few hundred thousand elements of the sequence. This is no doubt due to not avoiding the generation of duplicates in the sequences as well as its "holding on to the head": it maintains the entire generated sequences in memory.
 
Line 2,331 ⟶ 2,409:
In order to fix the memory use and extra time expended by the above program, the following code implements the Haskell idea as a function so that it does not retain the pointers to the streams used, allowing them to be garbage collected from the beginning as they are consumed. It avoids duplicate number generation by using intermediate streams for each of the multiples, building each on the results of the last; it also orders the streams from least dense to most so that the intermediate streams retained are as short as possible, with the "s5" stream only from one fifth to a third of the current value, the "s35" stream only between a third and a half of the current output value, and the "s235" stream only between a half and the current output. As the sequence is not very dense with increasing range, not many values need be retained:
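The same self-feeding lazy-stream layering exists as a well-known Python generator recipe (shown for comparison, not a translation of the Clojure code): the output stream is teed back into itself as the ×2, ×3 and ×5 sources, and `groupby` collapses the duplicates that `merge` lets through.

```python
from itertools import tee, chain, groupby, islice
from heapq import merge

def hamming_stream():
    """Lazy unbounded stream of Hamming numbers via self-referential tee."""
    def deferred_output():
        for i in output:      # 'output' is bound below, before the first next()
            yield i
    result, p2, p3, p5 = tee(deferred_output(), 4)
    m2, m3, m5 = (2 * x for x in p2), (3 * x for x in p3), (5 * x for x in p5)
    combined = chain([1], merge(m2, m3, m5))
    output = (k for k, g in groupby(combined))   # dedup equal neighbours
    return result
```

As with the first Clojure version, the `tee` buffers hold on to a window of the sequence, so memory grows with the range traversed.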
{{trans|Haskell}}
<syntaxhighlight lang="clojure">(defn hamming
"Computes the unbounded sequence of Hamming 235 numbers."
[]
Line 2,343 ⟶ 2,421:
(u [s n] (let [r (atom nil)]
(reset! r (merge s (smult n (cons 1 (lazy-seq @r)))))))]
(cons 1 (lazy-seq (reduce u nil (list 5 3 2))))))</syntaxhighlight>
 
Much of the time expended for larger ranges (say 10 million or more) is due to the time doing extended precision arithmetic, with also a significant percentage spent in garbage collection. Following is the output from the REPL after compiling the program:
Line 2,364 ⟶ 2,442:
 
=={{header|CoffeeScript}}==
<syntaxhighlight lang="coffeescript"># Generate hamming numbers in order. Hamming numbers have the
# property that they don't evenly divide any prime numbers outside
# a given set, such as [2, 3, 5].
Line 2,445 ⟶ 2,523:
numbers = generate_hamming_sequence(primes, 10000)
console.log numbers[1690]
console.log numbers[9999]</syntaxhighlight>
 
=={{header|Common Lisp}}==
Maintaining three queues, popping the smallest value every time.
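The three-queue scheme can be sketched in Python with `deque`s (an illustrative sketch, not the Lisp code):

```python
from collections import deque

def hamming_queues(n):
    """Pop the smallest front value each step (from every queue that holds
    it, which removes duplicates) and append its 2x, 3x and 5x successors."""
    q2, q3, q5 = deque([2]), deque([3]), deque([5])
    x = 1
    for _ in range(n - 1):
        x = min(q2[0], q3[0], q5[0])
        if q2[0] == x: q2.popleft()
        if q3[0] == x: q3.popleft()
        if q5[0] == x: q5.popleft()
        q2.append(2 * x); q3.append(3 * x); q5.append(5 * x)
    return x
```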
<syntaxhighlight lang="lisp">(defun next-hamm (factors seqs)
(let ((x (apply #'min (map 'list #'first seqs))))
(loop for s in seqs
Line 2,468 ⟶ 2,546:
(if (or (< n 21)
(= n 1691)
(= n 1000000)) (format t "~d: ~d~%" n x))))</syntaxhighlight>
A much faster method:
<syntaxhighlight lang="lisp">(defun hamming (n)
(let ((fac '(2 3 5))
(idx (make-array 3 :initial-element 0))
Line 2,489 ⟶ 2,567:
 
(loop for i in '(1691 1000000) do
(format t "~d: ~d~%" i (hamming i)))</syntaxhighlight>
{{out}}
<pre> 1: 1
Line 2,516 ⟶ 2,594:
=={{header|Crystal}}==
{{trans|Bc}}
<syntaxhighlight lang="ruby">require "big"
 
def hamming(limit)
Line 2,537 ⟶ 2,615:
puts "Hamming Number 1,000,000: #{hamming 1_000_000}"
puts "Elasped Time: #{(Time.monotonic - start).total_seconds} secs"
</syntaxhighlight>
 
System: I7-6700HQ, 3.5 GHz, Linux Kernel 5.6.17, Crystal 0.35
Line 2,552 ⟶ 2,630:
 
{{trans|Kotlin}}
<syntaxhighlight lang="ruby">require "big"
 
# Unlike some languages like Kotlin, Crystal doesn't have a Lazy module,
Line 2,628 ⟶ 2,706:
Hammings.new.skip(999_999).first(1).each { |h| print h }
elpsd = (Time.monotonic - start_time).total_milliseconds
printf(".\r\nThis last took %f milliseconds.\r\n", elpsd)</syntaxhighlight>
 
{{out}}
Line 2,644 ⟶ 2,722:
In order to show the time expended in multi-precision integer calculations, the following code implements the same algorithm as above but uses logarithmic estimations rather than multi-precision integer arithmetic to compute each instance of the Hamming number sequence, only converting to `BigInt` for the results:
 
<syntaxhighlight lang="ruby">require "big"
 
# Unlike some languages like Kotlin, Crystal doesn't have a Lazy module,
Line 2,759 ⟶ 2,837:
HammingsLogRep.new.skip(999_999).first(1).each { |h| print h.toBigInt }
elpsd = (Time.monotonic - start_time).total_milliseconds
printf(".\r\nThis last took %f milliseconds.\r\n", elpsd)</syntaxhighlight>
 
{{out}}
Line 2,773 ⟶ 2,851:
To show that the majority of the time for the above implementations is used in memory allocations/deallocations for the functional lazy list form of code, the following code implements this imperatively by using home-grown "growable" arrays; these "growable" arrays were hand implemented using pointer allocations to avoid the automatic bounds checking done for conventional Array's; note that the `LogRep` is now a `struct` rather than a `class` as now there aren't many value copies and to save the quite large amount of time required to allocation/deallocate memory as if `class`'s were used:
 
{{trans|Nim}}
<syntaxhighlight lang="ruby">require "big"

struct LogRep
  LOG2_2 = 1.0_f64
  LOG2_3 = Math.log(3.0) / Math.log(2.0)
  LOG2_5 = Math.log(5.0) / Math.log(2.0)

  def initialize(@logrep : Float64, @x2 : Int32, @x3 : Int32, @x5 : Int32)
  end

  def mult2
    LogRep.new(@logrep + LOG2_2, @x2 + 1, @x3, @x5)
  end

  def mult3
    LogRep.new(@logrep + LOG2_3, @x2, @x3 + 1, @x5)
  end

  def mult5
    LogRep.new(@logrep + LOG2_5, @x2, @x3, @x5 + 1)
  end

  def <(other : LogRep)
    @logrep < other.@logrep
  end

  def toBigInt
    rslt = BigInt.new(1)
    @x2.times { rslt *= 2 }; @x3.times { rslt *= 3 }; @x5.times { rslt *= 5 }
    rslt
  end
end

class HammingsImpLogRep
  include Iterator(LogRep)
  private ONE = LogRep.new(0.0, 0, 0, 0)
  # use pointers to avoid bounds checking...
  @s2 = Pointer(LogRep).malloc 1024; @s3 = Pointer(LogRep).malloc 1024
  @s5 : LogRep = ONE.mult5; @mrg : LogRep = ONE.mult3
  @s2sz = 1024; @s3sz = 1024
  @s2hdi = 0; @s2tli = 0; @s3hdi = 0; @s3tli = 0

  def initialize
    @s2[0] = ONE; @s3[0] = ONE.mult3
  end

  def next
    @s2tli += 1
    if @s2hdi + @s2hdi >= @s2sz # unused is half of used
      @s2.move_from(@s2 + @s2hdi, @s2tli - @s2hdi)
      @s2tli -= @s2hdi; @s2hdi = 0
    end
    if @s2tli >= @s2sz # grow array, copying former contents
      @s2sz += @s2sz; ns2 = Pointer(LogRep).malloc @s2sz
      ns2.move_from(@s2, @s2tli); @s2 = ns2
    end
    rsltp = @s2 + @s2hdi
    if rsltp.value < @mrg
      @s2[@s2tli] = rsltp.value.mult2; @s2hdi += 1
    else
      @s3tli += 1
      if @s3hdi + @s3hdi >= @s3sz # unused is half of used
        @s3.move_from(@s3 + @s3hdi, @s3tli - @s3hdi)
        @s3tli -= @s3hdi; @s3hdi = 0
      end
      if @s3tli >= @s3sz # grow array, copying former contents
        @s3sz += @s3sz; ns3 = Pointer(LogRep).malloc @s3sz
        ns3.move_from(@s3, @s3tli); @s3 = ns3
      end
      @s2[@s2tli] = @mrg.mult2; @s3[@s3tli] = @mrg.mult3
      @s3hdi += 1; ns3hdp = @s3 + @s3hdi
      rslt = @mrg; rsltp = pointerof(rslt)
      if ns3hdp.value < @s5
        @mrg = ns3hdp.value
      else
        @mrg = @s5; @s5 = @s5.mult5; @s3hdi -= 1
      end
    end
    rsltp.value
  end
end
print "The first 20 Hamming numbers are: "
HammingsImpLogRep.new.first(20).each { |h| print " #{h.toBigInt}" }
print ".\r\nThe 1691st Hamming number is "
HammingsImpLogRep.new.skip(1690).first(1).each { |h| print h.toBigInt }
print ".\r\nThe millionth Hamming number is "
start_time = Time.monotonic
HammingsImpLogRep.new.skip(999_999).first(1).each { |h| print h.toBigInt }
elpsd = (Time.monotonic - start_time).total_milliseconds
printf(".\r\nThis last took %f milliseconds.\r\n", elpsd)</syntaxhighlight>
 
{{out}}
<pre>The first 20 Hamming numbers are:  1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36.
The 1691st Hamming number is 2125764000.
The millionth Hamming number is 519312780448388736089589843750000000000000000000000000000000000000000000000000000000.
This last took 107.330211 milliseconds.</pre>
 
As can be seen by comparing with the above results using the same Intel Skylake i5-6500 CPU, this is about eighteen times faster than the functional version also using logarithmic representations, due to less time spent doing memory allocations/deallocations by using the imperative form of code. This version can find the billionth Hamming number in about 7.6 seconds on this machine.
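For readers more familiar with Python, the same duplicate-avoiding scheme with drained backing arrays can be sketched as follows (an illustrative sketch only; the generator and helper names are ours, and the drain threshold is arbitrary):

```python
from itertools import islice
from math import log2

def hammings_logrep():
    """Duplicate-free Hamming generator over (log2, i, j, k) tuples: a *2
    chain feeds on the merged output, a *3 chain feeds the 3/5 merge, and
    consumed list prefixes are deleted in blocks to bound memory."""
    lb3, lb5 = log2(3.0), log2(5.0)
    m2 = lambda t: (t[0] + 1.0, t[1] + 1, t[2], t[3])
    m3 = lambda t: (t[0] + lb3, t[1], t[2] + 1, t[3])
    m5 = lambda t: (t[0] + lb5, t[1], t[2], t[3] + 1)
    one = (0.0, 0, 0, 0)
    s2, s3 = [one], [m3(one)]
    s2hd, s3hd = 0, 0            # head indices into the two backing lists
    mrg, s5 = m3(one), m5(one)   # next 3/5-merge value and next *5 value
    while True:
        if s2hd > len(s2) // 2:          # drain the consumed prefix
            del s2[:s2hd]; s2hd = 0
        rslt = s2[s2hd]
        if rslt[0] < mrg[0]:
            s2.append(m2(rslt)); s2hd += 1
        else:
            rslt = mrg
            if s3hd > len(s3) // 2:      # drain the consumed prefix
                del s3[:s3hd]; s3hd = 0
            s2.append(m2(mrg)); s3.append(m3(mrg))
            s3hd += 1
            if s3[s3hd][0] < s5[0]:
                mrg = s3[s3hd]
            else:
                mrg, s5 = s5, m5(s5); s3hd -= 1
        yield rslt

def to_int(t):
    _, i, j, k = t
    return 2**i * 3**j * 5**k

print([to_int(t) for t in islice(hammings_logrep(), 20)])
```

Draining the consumed prefixes keeps the live windows at roughly O(n^(2/3)) and O(n^(1/3)) entries respectively, mirroring what the Crystal version does with its pointer arrays.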
 
=={{header|D}}==
===Basic Version===
This version keeps all numbers in memory, computing all the Hamming numbers up to the needed one. Performs constant number of operations per Hamming number produced.
<syntaxhighlight lang="d">import std.stdio, std.bigint, std.algorithm, std.range, core.memory;
 
auto hamming(in uint n) pure nothrow /*@safe*/ {
    // ... (dynamic-programming body elided) ...
}

void main() {
    GC.disable;
    iota(1, 21).map!hamming.writeln;
1_691.hamming.writeln;
1_000_000.hamming.writeln;
}</syntaxhighlight>
{{out}}
<pre>[1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36]
2125764000
519312780448388736089589843750000000000000000000000000000000000000000000000000000000</pre>
This keeps <math>O(n^{2/3})</math> numbers in memory, but over-computes a sequence by a factor of about <math>\Theta(n^{2/3})</math>, calculating extra multiples past that as well. Incurs an extra <math>O(\log n)</math> factor of operations per each number produced (reinserting its multiples into a tree). Doesn't stop when the target number is reached, instead continuing until it is no longer needed:
{{trans|Java}}
<syntaxhighlight lang="d">import std.stdio, std.bigint, std.container, std.algorithm, std.range,
core.memory;
 
// ... (tree-based implementation elided) ...

void main() {
    GC.disable;
    writeln("First 20 Hamming numbers: ", iota(1, 21).map!hamming);
writeln("hamming(1691) = ", 1691.hamming);
writeln("hamming(1_000_000) = ", 1_000_000.hamming);
}</syntaxhighlight>
{{out}}
<pre>First 20 Hamming numbers: [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36]
hamming(1691) = 2125764000
hamming(1_000_000) = 519312780448388736089589843750000000000000000000000000000000000000000000000000000000</pre>
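The min-tree scheme just described amounts to a priority queue of pending values; a minimal Python sketch using `heapq` (a hypothetical illustration, not the D entry's actual structure) looks like this:

```python
import heapq
from itertools import islice

def hamming_stream():
    """Priority-queue formulation: repeatedly pop the smallest pending
    value and push its 2x, 3x and 5x multiples, skipping duplicates."""
    heap, seen = [1], {1}
    while True:
        h = heapq.heappop(heap)
        yield h
        for m in (2 * h, 3 * h, 5 * h):
            if m not in seen:
                seen.add(m)
                heapq.heappush(heap, m)

print(list(islice(hamming_stream(), 20)))
```

Unlike the D version, this sketch never discards already-consumed entries from `seen`, so its memory use grows faster than the O(n^(2/3)) bound mentioned above.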
Does exactly what the first version does, creating an array and filling it with Hamming numbers, keeping the three back pointers into the sequence for next multiples calculations, except that it represents the numbers as their coefficients triples and their logarithm values (for comparisons), thus saving on BigInt calculations.
{{trans|C}}
<syntaxhighlight lang="d">import std.stdio: writefln;
import std.bigint: BigInt;
import std.conv: text;
// ... (logarithmic triple representation and getHam implementation elided) ...

void main() {
foreach (immutable n; [1691, 10 ^^ 6, MAX_HAM])
writefln("%8d: %s", n, n.getHam);
}</syntaxhighlight>
The output is similar to the second C version.
Runtime is about 0.11 seconds if MAX_HAM = 1_000_000 (as the task requires).
It's a little slower than the preceding version, but it uses much less RAM, so it allows computing the result for larger n.
<syntaxhighlight lang="d">import std.stdio: writefln;
import std.bigint: BigInt;
import std.conv: text;
// ... (reduced-memory implementation elided) ...

void main() {
foreach (immutable n; [1691, 10 ^^ 6, 10_000_000])
writefln("%8d: %s", n, n.getHam);
}</syntaxhighlight>
The output is the same as the second alternative version.
 
=={{header|Dart}}==
 
In order to produce reasonable ranges of Hamming numbers, one needs the BigInt type, but processing of many BigInt's in generating a sequence slows the code; for that reason the following code records the determined values as a combination of an approximation of the log base two value and the triple of the powers of two, three and five, only generating the final output values as BigInt's as required:
<syntaxhighlight lang="dart">import 'dart:math';
 
final lb2of2 = 1.0;
// ... (sequence implementation and timing harness elided) ...
print("in full as: ${trival2BigInt(answr)}");
print("This test took $elpsd milliseconds.");
}</syntaxhighlight>
{{Output}}
<pre>The first 20 Hamming numbers are: ( 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 )
in full as: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000</pre>
Although not a Hamming sequence generator, the following code uses the known characteristics of the distribution of Hamming numbers to just scan through to find all possibilities in a relatively narrow "error band" which then can be sorted based on the log base two approximation and the nth element determined inside that band; it has a huge advantage that memory requirements drop to O(n^(1/3)) and asymptotic execution complexity drops from O(n) to O(n^(2/3)) for an extremely fast execution speed (thanks to WillNess for the start of this algorithm as referenced in the Haskell section):
{{translated from|Haskell}}
<syntaxhighlight lang="dart">import 'dart:math';
 
final lb2of2 = 1.0;
// ... (error-band scan implementation elided) ...
print("in full as: ${trival2BigInt(answr)}");
print("This test took $elpsd milliseconds.");
}</syntaxhighlight>
 
The output from the above code is the same as the above version but it is so fast that the time to find the millionth Hamming number is too small to be measured other than the Dart VM JIT time. It can find the billionth Hamming number in a fraction of a second and the trillionth in seconds.
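The error-band idea can be illustrated with the following Python sketch; the band centre uses the cube-root estimate with a (1 + lb3 + lb5)/2 correction, and the ±0.5 band width is an assumption of this sketch that holds for the sizes tested here:

```python
from math import log2

def nth_hamming(n):
    """n-th (1-based) Hamming number via the error-band scan (sketch)."""
    if n < 2:
        return 1
    lb3, lb5 = log2(3.0), log2(5.0)
    # leading-term estimate of log2(h_n) with a second-order correction;
    # the band of +/-0.5 either side is an assumption of this sketch
    v = (6.0 * lb3 * lb5 * n) ** (1.0 / 3.0) - (1.0 + lb3 + lb5) / 2.0
    lo, hi = v - 0.5, v + 0.5
    band = []          # (log2 value, (i, j, k)) falling inside the band
    count_below = 0    # Hamming numbers known to lie below the band
    k = 0
    while k * lb5 <= hi:
        j = 0
        while k * lb5 + j * lb3 <= hi:
            base = k * lb5 + j * lb3
            imax = int(hi - base)                      # largest power of 2
            ilo = int(lo - base) + 1 if base < lo else 0
            count_below += ilo
            for i in range(ilo, imax + 1):
                band.append((base + i, (i, j, k)))
            j += 1
        k += 1
    band.sort()
    idx = n - 1 - count_below                          # index into the band
    if not 0 <= idx < len(band):
        raise ValueError("error band too narrow for n = %d" % n)
    i, j, k = band[idx][1]
    return 2**i * 3**j * 5**k

print(nth_hamming(1691))  # → 2125764000
```

Only the O(n^(2/3)) candidates near the top are ever materialized, and only the single answer is expanded to a big integer, which is what makes the approach so fast.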
 
For arguments higher than about 1e13, the precision of the Double log base two approximations used above is not adequate to do an accurate sort, but the algorithm continues to work (although perhaps slightly slower) by changing the code to use BigInt log base two representations as follows:
<syntaxhighlight lang="dart">import 'dart:math';
 
final biglb2of2 = BigInt.from(1) << 100; // 100 bit representations...
// ... (BigInt-logarithm error-band implementation elided) ...
print("in full as: ${bigtrival2BigInt(answr)}");
print("This test took $elpsd milliseconds.");
}</syntaxhighlight>
 
With these changes, the algorithm can find the 1e19'th Hamming number in the order of days depending on the CPU used.
 
=={{header|DCL}}==
<syntaxhighlight lang="dcl">$ limit = p1
$
$ n = 0
$! ... (computation loop elided) ...
$
$ n = limit - 1
$ write sys$output h_'n</syntaxhighlight>
{{out}}
<pre>
</pre>
=={{header|Delphi}}==
See [https://rosettacode.org/wiki/Hamming_numbers#Pascal Pascal].
=={{header|EasyLang}}==
{{trans|11l}}
<syntaxhighlight>
func hamming lim .
len h[] lim
h[1] = 1
x2 = 2 ; x3 = 3 ; x5 = 5
i = 1 ; j = 1 ; k = 1
for n = 2 to lim
h[n] = lower x2 lower x3 x5
if x2 = h[n]
i += 1
x2 = 2 * h[i]
.
if x3 = h[n]
j += 1
x3 = 3 * h[j]
.
if x5 = h[n]
k += 1
x5 = 5 * h[k]
.
.
return h[lim]
.
for nr = 1 to 20
write hamming nr & " "
.
print ""
print hamming 1691
</syntaxhighlight>
{{out}}
<pre>
1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
2125764000
</pre>
 
=={{header|Eiffel}}==
<syntaxhighlight lang="eiffel">
note
description : "Initial part, in order, of the sequence of Hamming numbers"
]"
warning : "[
The formatting (<syntaxhighlight lang="text">) specifications for Eiffel in RosettaCode are slightly obsolete:
`note' and other newer keywords not supported, red color for manifest strings.
This should be fixed soon.
	]"
	-- ... (class implementation elided) ...
end
end
</syntaxhighlight>
 
 
=={{header|Elixir}}==
<syntaxhighlight lang="elixir">defmodule Hamming do
def generater do
queues = [{2, queue}, {3, queue}, {5, queue}]
    # ... (queue-based generator implementation elided) ...
  end
end

IO.puts "1691st Hamming number:"
IO.puts Hamming.generater |> Enum.take(1691) |> List.last
IO.puts "one millionth Hamming number:"
IO.puts Hamming.generater |> Enum.take(1_000_000) |> List.last</syntaxhighlight>
 
=={{header|Elm}}==
 
The Elm language has many restrictions that make the implementation of the Hamming Number sequence algorithms difficult, as the classic Edsger Dijkstra algorithm as written in Haskell [[Hamming_numbers#The_classic_version]] cannot be written in Elm as current Elm forbids cyclic value references (the value "hamming" is back referenced three times), and the implementation wouldn't be efficient even if it could as the current Elm version 0.19.x has removed the "Lazy" package that would defer the memoization of the result of a computation as necessary in implementing Haskell's lazy lists. Thus, one has to implement memoization using a different data structure than a lazy list; however, all current Elm data structures are persistent/forbid mutation and can only implement some sort of Copy On Write (COW), thus there is no implementation of a linear array and the "Array" module is a tree based structure (with some concessions to data blocks for slightly better performance) that will have a logarithmic execution complexity when the size increases above a minimum. In fact, all Elm data structures that could be used for this also have a logarithmic response (Dict, Set, Array). The implementation of List is not lazy so new elements can't be added to the "tail" but need to be added to the "head" for efficiency, which means if one wants to add higher elements to a list in increasing order, one needs to (COW) reverse the List (twice) in order to do it!
 
The solution here uses a pure functional implementation of a Min Heap (Binary Heap) Priority Queue so that the minimum element can be viewed in O(1) time although inserting new elements/replacing elements still takes O(log n) time where "n" is the number of elements in the queue. As written, no queue needs to be maintained for the multiples of five, but two queues are maintained, one for the merge of the multiples of five and three, and the larger one for the merge of all the multiples of five, three, and two. In order to minimize redundant computation time, the implementation maintains the "next" comparison values as part of the recursive function loop states that can change with every loop.
 
To express the sequence, a Co-Inductive Stream (CIS) is used as a deferred execution (lazy) stream; it does not memoize computations (as discussed above) but that isn't necessary for this application where the sequence is only traversed once and consumed as being traversed.
 
In addition, in order to reduce the "BigInt" computation time, the calculations are done on the basis of a "Float" logarithmic approximation while maintaining a "Trival" triple representation of the powers of two, three, and five that are multiplied in order to obtain the current value represented by the logarithmic approximation. The working code is as follows:
 
<syntaxhighlight lang="elm">module Main exposing ( main )
 
import Bitwise exposing (..)
import BigInt
import Task exposing ( Task, succeed, perform, andThen )
import Html exposing (Html, div, text )
import Browser exposing ( element )
import Time exposing ( now, posixToMillis )
-- ... (LogRep, CIS, and min-heap priority queue implementations elided) ...
let omin = case peekMinPQ bpq of
Just (lr, trv) -> LogRep lr trv
                 Nothing -> m235 -- at the beginning!
nm235 = multLR2 omin
nbpq = replaceMinPQ m235.lr m235.trv bpq
-- ...
let omin = case peekMinPQ mpq of
Just (lr, trv) -> LogRep lr trv
                 Nothing -> mrg -- at the beginning!
nm35 = multLR3 omin
nmrg = if ltLogRep nm35 m5 then nm35 else m5
-- ...
in CIS (0, 0, 0) <| \ () ->
next emptyPQ emptyPQ im235 imrg im35 im5
 
timemillis : () -> Task Never Int -- a side effect function
timemillis() = now |> andThen (\ t -> succeed (posixToMillis t))

test : Int -> Cmd Msg -- side effect function chain (includes "perform")...
test lmt =
  let msg1 = "The first 20 Hamming numbers are: " ++
               (hammingsLog() |> takeCIS2List 20
                              |> List.map showTrival
                              |> String.join ", ") ++ "."
      msg2 = "The 1691st Hamming number is " ++
               (hammingsLog() |> nthCIS 1691
                              |> showTrival) ++ "."
      msg3 = "The " ++ String.fromInt cLIMIT ++ "th Hamming number is:"
  in timemillis()
       |> andThen (\ strt ->
            let rsltstr = hammingsLog() |> nthCIS lmt
                                        |> showTrival in
            timemillis()
              |> andThen (\ stop ->
                   succeed [ msg1, msg2, msg3, rsltstr ++ " in "
                             ++ String.fromInt (stop - strt)
                             ++ " milliseconds." ]))
       |> perform Done

-- following code has to do with outputting to a web page using MUV/TEA...

type alias Model = List String

type Msg = Done Model

main : Program () Model Msg
main = -- starts with empty list of strings; views model of filled list...
  element { init = \ _ -> ( [], test cLIMIT )
          , update = \ (Done mdl) _ -> ( mdl, Cmd.none )
          , subscriptions = \ _ -> Sub.none
          , view = div [] << List.map (div [] << List.singleton << text) }</syntaxhighlight>
{{out}}
<pre>The first 20 Hamming numbers are: 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36.
The 1691st Hamming number is 2125764000.
The 1000000th Hamming number is:
519312780448388736089589843750000000000000000000000000000000000000000000000000000000 in 767 milliseconds.</pre>
 
Do note that, due to the logarithmic response of the Min Heap Priority Queue, the execution time grows with an extra logarithmic factor in the number of elements evaluated rather than linearly, so if it takes 0.7 seconds to find the millionth Hamming number, it takes about 10 seconds to find the ten millionth value instead of about 7 seconds. Considering that the generated "native" code is just JavaScript, it is reasonably fast and somewhat competitive with easier implementations in other languages such as F#.
 
=={{header|Erlang}}==
For relatively small values of n we can use elegant code:
 
<syntaxhighlight lang="erlang">
list(N) -> array:to_list(element(1, array(N, [2, 3, 5]))).
 
nth(N) -> array:get(N-1, element(1, array(N, [2, 3, 5]))).
 
array(N, Primes) -> array(array:new(), N, 1, [{P, 1, P} || P <- Primes]).
 
array(Array, Max, Max, Candidates) -> {Array, Candidates};
array(Array, Max, I, Candidates) ->
Smallest = smallest(Candidates),
N_array = array:set(I, Smallest, Array),
array(N_array, Max, I+1, update(Smallest, N_array, Candidates)).
 
update(Val, Array, Candidates) -> [update_(Val, C, Array) || C <- Candidates].
 
update_(Val, {Val, Ind, Mul}, Array) ->
{Mul*array:get(Ind, Array), Ind+1, Mul};
update_(_, X, _) -> X.
 
smallest(L) -> lists:min([element(1, V) || V <- L]).
</syntaxhighlight>
 
However, when n becomes large (say above 5e7) the memory needed grows very large as I store all the values. Fortunately, the algorithm uses only a small fraction of the end of the array. So I can drop the beginning of the array when it is no longer needed.
 
<syntaxhighlight lang="erlang">
nth(N, Batch) ->
array:get(N-1, element(1, compact_array(N, Batch, [2, 3, 5]))).
 
compact_array(Goal, Lim, Primes) ->
{Array, Candidates} = array(Lim, Primes),
compact_array(Goal, Lim, Lim, Array, Candidates).
 
compact_array(Goal, _, Index, Array, Candidates) when Index > Goal ->
{Array, Candidates};
compact_array(Goal, Lim, Index, Array, Candidates) ->
{N_array, N_candidates} =
array(compact(Array, Candidates), Index + Lim, Index, Candidates),
compact_array(Goal, Lim, Index+Lim, N_array, N_candidates).
 
compact(Array, L) ->
Index = lists:min([element(2, V) || V <- L]),
Keep = [E || E <- array:sparse_to_orddict(Array), element(1, E) >= Index],
array:from_orddict(Keep).
</syntaxhighlight>
 
With this approach memory is no longer an issue:
 
{{out}}
<pre>
timer:tc(task_hamming_numbers, nth, [100_000_000, 1_000_000]).
{232894309,
18140143309611363532953342430693354584669635033709097929462505366714035156593135818380467866054222964635144914854949550271375442721368122191972041094311075107507067573147191502194201568268202614781694681859513649083616294200541611489469967999559505365172812095568020073934100699850397033005903158113691518456912149989919601385875227049401605594538145621585911726469930727034807205200195312500}
</pre>
 
So a bit less than 4 minutes to get the 100 000 000th regular number. The complexity is slightly worse than linear, which is not a surprise given that all the regular numbers are computed.
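The compaction idea above (keep only the tail of the table that the back-pointers can still reach) can be sketched in Python; the `batch` size and function name are ours:

```python
def nth_hamming_windowed(n, batch=4096):
    """Classic three-pointer dynamic programming, but entries that no
    back-pointer can reach any more are dropped in batches, so memory
    tracks the window between the slowest pointer and the end."""
    h = [1]
    i = j = k = 0        # back-pointers for the *2, *3 and *5 chains
    base = 0             # sequence index of h[0]
    for _ in range(n - 1):
        n2, n3, n5 = 2 * h[i - base], 3 * h[j - base], 5 * h[k - base]
        m = min(n2, n3, n5)
        h.append(m)
        if m == n2: i += 1
        if m == n3: j += 1
        if m == n5: k += 1
        lo = min(i, j, k)
        if lo - base >= batch:      # compact: drop the dead prefix
            del h[:lo - base]
            base = lo
    return h[-1]

print(nth_hamming_windowed(1691))  # → 2125764000
```

Since the slowest pointer trails the end of the sequence by roughly n^(2/3) entries, the window (and so the memory) grows much more slowly than the full table would.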
 
=={{header|ERRE}}==
For bigger numbers, you have to use an external program, like MULPREC.R
 
<syntaxhighlight lang="erre">PROGRAM HAMMING
 
 
!$DOUBLE
! ... (HAMMING procedure and main loop elided) ...
HAMMING(1691->RES)
PRINT("H(1691)=";RES)
END PROGRAM</syntaxhighlight>
{{out}}<pre>H( 1 )= 1
H( 2 )= 2
Line 3,953 ⟶ 4,124:
=={{header|F_Sharp|F#}}==
This version implements Dijkstra's merge solution, so is closely related to the Haskell classic version.
<syntaxhighlight lang="fsharp">type LazyList<'a> = Cons of 'a * Lazy<LazyList<'a>> // ': fix colouring
 
let rec hammings() =
// ... (merge and nthLazyList implementation elided) ...
printfn "%A" (hammings() |> nthLazyList 1691)
printfn "%A" (hammings() |> nthLazyList 1000000)
0</syntaxhighlight>
 
The above code memory residency is quite high as it holds the entire lazy sequence in memory due to the reference preventing garbage collection as the sequence is consumed,
Line 3,983 ⟶ 4,154:
The following code reduces that high memory residency by making the routine a function and using internal local stream references for the intermediate streams so that they can be collected as the stream is consumed as long as no reference is held to the main results stream (which is not in the sample test functions); it also avoids duplication of factors by successively building up streams and further reduces memory use by ordering of the streams so that the least dense are determined first:
{{trans|Haskell}}
<syntaxhighlight lang="fsharp">let cNUMVALS = 1000000

type LazyList<'a> = Cons of 'a * Lazy<LazyList<'a>>

let hammings() =
let rec merge (Cons(x, f) as xs) (Cons(y, g) as ys) =
  // ... (remainder of implementation elided) ...
printfn "Found this last up to %d in %d milliseconds." cNUMVALS ((stop - strt) / 10000L)
    0 // return an integer exit code</syntaxhighlight>
 
Both codes output the same results as follows but the second is over three times faster:
<pre>( 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 )
2125764000
519312780448388736089589843750000000000000000000000000000000000000000000000000000000</pre>
===Fast somewhat imperative sequence version using logarithms===
 
Since the above pure functional approach isn't very efficient, a more imperative approach using "growable" arrays which are "drained" of unnecessary older values in blocks once the back pointer indices are advanced is used in the following code. The code also implements an algorithm to avoid duplicate calculations and thus does the same number of operations as the above code but faster due to using integer and floating point operations rather than BigInteger ones. Due to the "draining" the memory use is the same as the above by a constant factor. However, note that other than the contents of these arrays, pure functional code using immutable values is used.
 
F# has a particularly slow enumeration ability in the use of the `Seq` type (although easy to use) so in order to be able to bypass that, the following code still uses the imperative `ResizeArray`'s but outputs a closure "next" function that can be used directly to avoid the generation of a `Seq` sequence where maximum speed is desired:
{{trans|Nim}}
<syntaxhighlight lang="fsharp">let cCOUNT = 1000000
 
type LogRep = struct val lr: double; val x2: uint32; val x3: uint32; val x5: uint32
new(lr, x2, x3, x5) = {lr = lr; x2 = x2; x3 = x3; x5 = x5 } end
let one: LogRep = LogRep(0.0, 0u, 0u, 0u)
let lg2_2: double = 1.0
let lg3_2: double = log 3.0 / log 2.0
let lg5_2: double = log 5.0 / log 2.0
let inline mul2 (lr: LogRep): LogRep = LogRep(lr.lr + lg2_2, lr.x2 + 1u, lr.x3, lr.x5)
let inline mul3 (lr: LogRep): LogRep = LogRep(lr.lr + lg3_2, lr.x2, lr.x3 + 1u, lr.x5)
let inline mul5 (lr: LogRep): LogRep = LogRep(lr.lr + lg5_2, lr.x2, lr.x3, lr.x5 + 1u)
 
let hammingsLog() = // imperative arrays, eliminates the BigInteger operations...
let s2 = ResizeArray<_>() in let s3 = ResizeArray<_>()
    s2.Add(one); s3.Add(mul3 one)
    let mutable s5 = mul5 one in let mutable mrg = mul3 one
    let mutable s2hdi = 0 in let mutable s3hdi = 0
    let next() = // imperative next function to advance value
if s2hdi + s2hdi >= s2.Count then s2.RemoveRange(0, s2hdi); s2hdi <- 0
let mutable rslt: LogRep = s2.[s2hdi]
if rslt.lr < mrg.lr then s2.Add(mul2 rslt); s2hdi <- s2hdi + 1
else
if s3hdi + s3hdi >= s3.Count then s3.RemoveRange(0, s3hdi); s3hdi <- 0
rslt <- mrg; s2.Add(mul2 rslt); s3.Add(mul3 rslt); s3hdi <- s3hdi + 1
            let chkv: LogRep = s3.[s3hdi]
if chkv.lr < s5.lr then mrg <- chkv
            else mrg <- s5; s5 <- mul5 s5; s3hdi <- s3hdi - 1
rslt
next
 
let hl2Seq f = Seq.unfold (fun v -> Some(v, f())) (f())
let nthLogHamming n f =
let rec nxt i = if i >= n then f() else f() |> ignore; nxt (i + 1) in nxt 0
 
let lr2BigInt (lr: LogRep) = // convert trival to BigInteger
let rec xpnd n mlt rslt =
if n <= 0u then rslt
        else xpnd (n - 1u) mlt (mlt * rslt)
    xpnd lr.x2 2I 1I |> xpnd lr.x3 3I |> xpnd lr.x5 5I
 
[<EntryPoint>]
let main argv =
    printf "( "; hammingsLog() |> hl2Seq |> Seq.take 20
                 |> Seq.iter (printf "%A " << lr2BigInt); printfn ")"
printfn "%A" (hammingsLog() |> hl2Seq |> Seq.item (1691 - 1) |> lr2BigInt)
let strt = System.DateTime.Now.Ticks
// slow way using Seq:
 
// let rslt = (hammingsLog()) |> hl2Seq |> Seq.item (1000000 - 1)
// fast way using closure directly:
let rslt = (hammingsLog()) |> nthLogHamming (1000000 - 1)
 
let stop = System.DateTime.Now.Ticks
 
    printfn "%A" (rslt |> lr2BigInt)
printfn "Found this last up to %d in %d milliseconds." cCOUNT ((stop - strt) / 10000L)
printfn ""
    0 // return an integer exit code</syntaxhighlight>
{{out}}
<pre>( 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 )
2125764000
519312780448388736089589843750000000000000000000000000000000000000000000000000000000
Found this last up to 1000000 in 57 milliseconds.</pre>
 
The above code can find the billionth Hamming number in about 60 seconds on the same Intel i5-6500 at 3.6 GHz (single threaded boosted). If the "fast way" is commented out and the commenting out removed from the "slow way", the code is about twice as slow.
 
===Extremely fast non-enumerating version sorting values in error band===
Line 4,089 ⟶ 4,270:
If one is willing to forego sequences and just calculate the nth Hamming number, then some reading on the relationship between the size of numbers to the sequence numbers is helpful (Wikipedia: regular number). One finds that there is a very distinct relationship and that it quite quickly reduces to quite a small error band proportional to the log of the output value for larger ranges. Thus, the following code just scans for logarithmic representations to insert into a sequence for this top error band and extracts the correct nth representation from that band. It reduces time complexity to O(n^(2/3)) from O(n) for the sequence versions, but even more amazingly, reduces memory requirements to O(n^(1/3)) from O(n^(2/3)) and thus makes it possible to calculate very large values in the sequence on common personal computers. The code is as follows:
{{trans|Haskell}}
<syntaxhighlight lang="fsharp">let nthHamming n =
if n < 1UL then failwith "nthHamming; argument must be > 0!"
if n < 2UL then 0u, 0u, 0u else // trivial case for first value of one
    // ... (error-band scan implementation elided) ...
System.Console.ReadKey(true) |> ignore
printfn ""
    0 // return an integer exit code</syntaxhighlight>
{{output}}
<pre>( 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 )
2125764000
519312780448388736089589843750000000000000000000000000000000000000000000000000000000</pre>
Due to the limited 53-bit mantissa of 64-bit double floating point numbers, the above code can't properly sort the error band for input arguments somewhere above 10**13; the following code makes the sort accurate by using a multi-precision logarithm representation of sufficient precision so that the sort is accurate for arguments well beyond the uint64 input argument range, at about a doubling in cost in execution speed:
{{trans|Haskell}}
<syntaxhighlight lang="fsharp">let nthHamming n =
if n < 1UL then failwith "nthHamming: argument must be > 0!"
if n < 2UL then 0u, 0u, 0u else // trivial case for first value of one
    // ... (band collection implementation elided) ...
    let sbnd = bnd |> List.sortBy (fun (lg, _) -> -lg) // sort in descending order
let _, rslt = sbnd.[ndx]
    rslt</syntaxhighlight>
 
=={{header|Factor}}==
{{trans|Scala}}
<syntaxhighlight lang="factor">USING: accessors deques dlists fry kernel make math math.order
;
IN: rosetta.hamming
! ... (iterator implementation elided) ...
 
: nth-from-now ( hamming-iterator n -- m )
    1 - over '[ _ next drop ] times next ;</syntaxhighlight>
 
<hamming-iterator> 20 next-n .
{{trans|Haskell}}
Lazy lists are quite slow in Factor, but still.
<syntaxhighlight lang="factor">USING: combinators fry kernel lists lists.lazy locals math ;
IN: rosetta.hamming-lazy
 
! ... (definitions elided) ...
h 2 3 5 [ '[ _ * ] lazy-map ] tri-curry@ tri
sort-merge sort-merge
] lazy-cons h! h ;</syntaxhighlight>
 
20 hamming ltake list>array .
=={{header|Forth}}==
64-bit cell represents a number 2^l*3^m*5^n, where l, m, and n are
bitfields in the cell (20 bits for now). It also uses a fixed-point logarithm to compare the Hamming numbers and prints them in factored form. This code has been tested up to the 10^9th Hamming number.
<syntaxhighlight lang="forth">\ manipulating and computing with Hamming numbers:
 
: extract2 ( h -- l )
\ ... (definitions elided) ...
20 .nseq
cr 1691 nthseq h.
cr 1000000 nthseq h.</syntaxhighlight>
{{out}}
<pre>
</pre>
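The packed-bitfield representation with a fixed-point logarithm for comparisons can be sketched in Python as follows (the 20-bit fields match the description above, while the 2^20 fixed-point scale and all names are assumptions of this sketch, not taken from the Forth code):

```python
from math import log2

# the exponents of 2, 3 and 5 live in 20-bit fields of a single integer,
# and ordering is decided by a fixed-point base-2 logarithm
SCALE = 1 << 20
LB3 = round(log2(3) * SCALE)
LB5 = round(log2(5) * SCALE)

def pack(i, j, k):
    return (k << 40) | (j << 20) | i

def unpack(h):
    return h & 0xFFFFF, (h >> 20) & 0xFFFFF, h >> 40

def fixlog(h):                  # fixed-point log2, for comparisons only
    i, j, k = unpack(h)
    return i * SCALE + j * LB3 + k * LB5

def value(h):                   # expand to the actual integer
    i, j, k = unpack(h)
    return 2**i * 3**j * 5**k

sample = [pack(5, 2, 1), pack(0, 0, 0), pack(3, 0, 2), pack(1, 1, 0)]
print([value(h) for h in sorted(sample, key=fixlog)])  # → [1, 6, 200, 1440]
```

Since the fixed-point logarithm is monotone in the represented value, sorting or comparing by it gives the same order as comparing the (much larger) expanded integers.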
A smaller, less capable solution is presented here. It solves two out of three requirements and is ANS-Forth compliant.
<syntaxhighlight lang="forth">2000 cells constant /hamming
create hamming /hamming allot
( n1 n2 n3 n4 n5 n6 n7 -- n3 n4 n5 n6 n1 n2 n8)
\ ... (definitions elided) ...
cr 21 1 ?do i . i hamming# . cr loop
1691 hamming# . cr
;</syntaxhighlight>
 
=={{header|Fortran}}==
{{works with|Fortran|90 and later}}
Using big_integer_module from here [http://fortran.com/big_integer_module.f95]
<syntaxhighlight lang="fortran">program Hamming_Test
use big_integer_module
implicit none
! ... (implementation elided) ...
end if
end function mini
end program</syntaxhighlight>
{{out}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
2125764000
519312780448388736089589843750000000000000000000000000000000000000000000000000000000</pre>
 
=={{header|FreeBASIC}}==
<syntaxhighlight lang="freebasic">' FB 1.05.0 Win64
 
' The biggest integer which FB supports natively is 8 bytes so unable
' ... (implementation elided) ...
Print
Print "Press any key to quit"
Sleep</syntaxhighlight>
 
=={{header|FunL}}==
{{trans|Scala}}
<syntaxhighlight lang="funl">native scala.collection.mutable.Queue
 
val hamming =
// ... (definition elided) ...
for q <- [q2, q3, q5] do q.enqueue( 1 )
stream()</syntaxhighlight>
 
{{trans|Haskell}}
<syntaxhighlight lang="funl">val hamming = 1 # merge( map((2*), hamming), merge(map((3*), hamming), map((5*), hamming)) )
 
// ... (merge definition elided) ...
println( hamming.take(20) )
println( hamming(1690) )
println( hamming(2000) )</syntaxhighlight>
 
=={{header|Fōrmulæ}}==
 
{{FormulaeEntry|page=https://formulae.org/?script=examples/Hamming_numbers}}
 
'''Solution'''
 
[[File:Fōrmulæ - Hamming numbers 01.png]]
 
'''Case 1.''' First twenty Hamming numbers
 
[[File:Fōrmulæ - Hamming numbers 02.png]]
 
[[File:Fōrmulæ - Hamming numbers 03.png]]
 
'''Case 2.''' 1691-st Hamming number
 
[[File:Fōrmulæ - Hamming numbers 04.png]]
 
[[File:Fōrmulæ - Hamming numbers 05.png]]
 
'''Case 3.''' One million-th Hamming number
 
[[File:Fōrmulæ - Hamming numbers 06.png]]
Programs in Fōrmulæ are created/edited online in its [https://formulae.org website], however they run on execution servers. By default remote servers are used, but they are limited in memory and processing power, since they are intended for demonstration and casual use. A local server can be downloaded and installed; it has no limitations (it runs on your own computer). Because of that, example programs can be fully visualized and edited, but some of them will not run if they require moderate or heavy computation/memory resources and no local server is being used.
 
[[File:Fōrmulæ - Hamming numbers 07.png]]
In '''[https://formulae.org/?example=Hamming_numbers this]''' page you can see the program(s) related to this task and their results.
 
=={{header|Go}}==
===Concise version using dynamic-programming===
<syntaxhighlight lang="go">package main
 
import (
Line 4,627 ⟶ 4,826:
fmt.Println(h[1691-1])
fmt.Println(h[len(h)-1])
}</syntaxhighlight>
{{out}}
<pre>
Line 4,636 ⟶ 4,835:
===Longer version using dynamic-programming and logarithms===
More than 10 times faster.
<syntaxhighlight lang="go">package main
 
import (
Line 4,728 ⟶ 4,927:
}
show(table[ordinal-1])
}</syntaxhighlight>
{{out}}
<pre>
Line 4,751 ⟶ 4,950:
The program implements the memoized streams/lazylists with a "roll-your-own" implementation and only the necessary methods as required by this algorithm, as Go does not have a library to supply such, and uses a function closure to implement a simple form of enumeration of the Hamming values. It uses "llmult" to perform the function of the "map" function used in the Haskell code, which is to produce a new stream which has each value of the input stream multiplied by a constant. Instead of the Haskell "foldl" function, this program uses a simple Go "for" comprehension over the input primes array.
{{trans|Haskell}}
<syntaxhighlight lang="go">// Hamming project main.go
package main
 
Line 4,861 ⟶ 5,060:
fmt.Printf("Found the %vth Hamming number as %v in %v.\r\n", n, rslt.String(), end.Sub(strt))
}
</syntaxhighlight>
 
The outputs are about the same as for the above versions. In implementing this algorithm, one can see how much more verbose Go is than more functional languages such as Haskell or F# for this primarily functional algorithm.
Line 4,869 ⟶ 5,068:
While the above version can calculate to larger ranges due to somewhat reduced memory use, it is still somewhat limited as to range by memory limits due to the increasing size of the big integers used, limited in speed due to those big integer calculations, and also limited in speed due to Go's slow memory allocations and de-allocations. The following code uses combined techniques to overcome all three limitations: 1) as for other solutions on this page, it uses a representation using integer exponents of 2, 3, and 5 and a scaled integer logarithm only for value comparisons (scaled such that round-off errors aren't a factor over the applicable range); thus memory use per element is constant rather than growing with range for big integers, and operations are simple integer comparisons and additions and are thus very fast. 2) memory reductions are by draining the used arrays by batches (rather than one by one as above) in place to reduce the time required for constant allocations and de-allocations. The code is as follows:
{{trans|Rust}}
<syntaxhighlight lang="go">package main
 
import (
Line 5,016 ⟶ 5,215:
 
fmt.Printf("This last found the %vth hamming number in %v.\r\n", n, end.Sub(strt))
}</syntaxhighlight>
{{output}}
<pre>[1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36]
Line 5,030 ⟶ 5,229:
The above code is not as fast as one can go, as it is limited by the need to calculate all Hamming numbers in the sequence up to the required one; some reading on the relationship between the size of the numbers and their position in the sequence is helpful (Wikipedia: regular number). One finds that there is a very distinct relationship and that it quite quickly reduces to quite a small error band proportional to the log of the output value for larger ranges. Thus, the following code just scans for logarithmic representations to insert into a sequence for this top error band and extracts the correct nth representation from that band. It reduces time complexity to O(n^(2/3)) from O(n) for the sequence versions, but even more amazingly, reduces memory requirements to O(n^(1/3)) from O(n^(2/3)) and thus makes it possible to calculate very large values in the sequence on common personal computers. The code is as follows:
{{trans|Nim}}
<syntaxhighlight lang="go">package main
 
import (
Line 5,152 ⟶ 5,351:
 
fmt.Printf("This last found the %vth hamming number in %v.\r\n", uint64(1e6), end.Sub(strt))
}</syntaxhighlight>
{{output}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
Line 5,165 ⟶ 5,364:
 
=={{header|Groovy}}==
<syntaxhighlight lang="groovy">class Hamming {
 
static final ONE = BigInteger.ONE
Line 5,202 ⟶ 5,401:
priorityQueue.add(lowest.multiply FIVE)
}
}</syntaxhighlight>
 
=={{header|Haskell}}==
===The classic version===
<syntaxhighlight lang="haskell">hamming = 1 : map (2*) hamming `union` map (3*) hamming `union` map (5*) hamming
 
union a@(x:xs) b@(y:ys) = case compare x y of
Line 5,221 ⟶ 5,420:
-- [1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36]
-- (2125764000,2147483648)
-- 519312780448388736089589843750000000000000000000000000000000000000000000000000000000</syntaxhighlight>
Runs in about a second on [http://ideone.com/q3fma Ideone.com].
The nested <code>union</code>s' effect is to produce the minimal value at each step,
Line 5,234 ⟶ 5,433:
 
The classic version can be sped up quite a bit (about twice, with roughly the same [http://en.wikipedia.org/wiki/Analysis_of_algorithms#Empirical_orders_of_growth empirical orders of growth]) by avoiding generation of duplicate values in the first place:
<syntaxhighlight lang="haskell">hammings :: () -> [Integer]
hammings() = 1 : foldr u [] [2,3,5] where
u n s = -- fix (merge s . map (n*) . (1:))
Line 5,247 ⟶ 5,446:
print $ take 20 (hammings ())
print $ (hammings ()) !! 1690
print $ (hammings ()) !! (1000000-1)</syntaxhighlight>
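The same duplicates-free sequence can also be produced imperatively with the classic three-pointer dynamic-programming scheme (a Python sketch for comparison; it is not a translation of the Haskell above):

```python
def hamming(limit):
    # classic three-pointer scheme: h[n] is the smallest of the next
    # pending 2-, 3- and 5-multiples; advancing every matching pointer
    # is what avoids duplicates such as 2*3 == 3*2
    h = [1] * limit
    x2, x3, x5 = 2, 3, 5
    i = j = k = 0
    for n in range(1, limit):
        h[n] = min(x2, x3, x5)
        if h[n] == x2: i += 1; x2 = 2 * h[i]
        if h[n] == x3: j += 1; x3 = 3 * h[j]
        if h[n] == x5: k += 1; x5 = 5 * h[k]
    return h[-1]

print(hamming(1691))  # 2125764000
```

Python's arbitrary-precision integers make the million-element case work unchanged, just slowly.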
 
===Explicit multiples reinserting===
This is a common approach which explicitly maintains an internal buffer of <math>O(n^{2/3})</math> elements, removing the numbers from its front and reinserting their 2- 3- and 5-multiples in order. It overproduces the sequence, stopping when the ''n''-th number is no longer needed instead of when it's first found. It also overworks by maintaining this buffer in total order, where just a heap would be sufficient. Worse, this particular version uses a sequential list for its buffer. That means <math>O(n^{2/3})</math> operations for each number, instead of <math>O(1)</math> of the above version (and thus <math>O(n^{5/3})</math> overall). Translation of [[#Java|Java]] (which does use a priority queue though, so should have ''O''&zwj;&thinsp;&zwj;(''n''&zwj;&thinsp;&zwj;log''n'') operations overall). Uses <code>union</code> from the "classic" version above:
<syntaxhighlight lang="haskell">hammFrom n = drop n $
      iterate (\(_ , (a:t)) -> (a, union t [2*a,3*a,5*a])) (0, [1])</syntaxhighlight>
{{out}}
<syntaxhighlight lang="haskell">> take 20 $ map fst $ hammFrom 1
[1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36]
 
Line 5,276 ⟶ 5,475:
 
> map (logBase 2) $ zipWith (/) =<< tail $ [402,638,1007,1596]
[0.67,0.66,0.66]</syntaxhighlight>
Runs too slowly to reach 1,000,000, with empirical orders of growth above ''~''&zwj;&thinsp;&zwj;(''n''&zwj;&thinsp;&zwj;<sup>1.7</sup>&zwj;&thinsp;&zwj;) and worsening. Last two lines show the internal buffer's length for several sample ''n''&zwj;&thinsp;&zwj;s, and its [http://en.wikipedia.org/wiki/Analysis_of_algorithms#Empirical_orders_of_growth ''empirical'' orders of growth] which strongly support the <math>O(n^{2/3})</math> claim.
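The buffer behaviour described above is easy to demonstrate with a short Python sketch of the reinserting approach (the sorted-list buffer and the names here are illustrative only, not taken from any entry on this page):

```python
import bisect

def hamming_reinsert():
    # keep a sorted buffer; pop the front, yield it, and reinsert its
    # 2-, 3- and 5-multiples in order, skipping duplicates
    buf = [1]
    while True:
        h = buf.pop(0)
        yield h
        for m in (2 * h, 3 * h, 5 * h):
            i = bisect.bisect_left(buf, m)
            if i == len(buf) or buf[i] != m:  # avoid duplicate insertions
                buf.insert(i, m)

g = hamming_reinsert()
seq = [next(g) for _ in range(1691)]
print(seq[:20])
print(seq[-1])  # 2125764000
```

Watching `len(buf)` grow while `seq` is consumed gives a direct feel for the ~n^(2/3) buffer size claimed above.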
 
===Enumeration by a chain of folded merges===
 
<syntaxhighlight lang="haskell">
hamm = foldr merge1 [] . iterate (map (5*)) .
foldr merge1 [] . iterate (map (3*))
Line 5,291 ⟶ 5,490:
3, 6, 12, 24, 48, 96, ...
9, 18, 36, 72, 144, 288, ...
  27, ...                           -}</syntaxhighlight>
 
Uses <code>merge</code>, as there's no need for duplicates-removing <code>union</code> because each number is produced only once here, too.
Line 5,305 ⟶ 5,504:
 
The total count of thus produced triples is then the band's topmost value's index in the Hamming sequence, 1-based. The ''n''th number in the sequence is then found by a simple lookup in the sorted band, provided it was wide enough. This produces the 1,000,000-th value instantaneously. Following the 2017-10 IDEOne update to a faster 64-bit system, the 1 trillionth number [https://ideone.com/01dpQu is found in 0.7s] on Ideone.com:
<syntaxhighlight lang="haskell">-- directly find n-th Hamming number, in ~ O(n^{2/3}) time.
-- based on "top band" idea by Louis Klauder, from the DDJ discussion.
-- by Will Ness, original post: drdobbs.com/blogs/architecture-and-design/228700538
Line 5,339 ⟶ 5,538:
let (i,frac) = pr (hi-q) ; r = hi - frac -- r = i + q
] where pr = properFraction -- pr 1.24 => (1,0.24)
  foldl_ = foldl'</syntaxhighlight>
{{out}}
<pre>-- time: 0.00s memory: 4.2MB
Line 5,351 ⟶ 5,550:
As well, although it isn't quite as elegant in a Haskell style sense, one can get an additional constant factor in execution time by replacing the "loops" based on list comprehensions to tail-recursive function call "loops", as in the following code:
 
<syntaxhighlight lang="haskell">{-# OPTIONS_GHC -O3 -XStrict #-}
 
import Data.Word
Line 5,390 ⟶ 5,589:
(s,res) = ( sortBy (flip compare `on` fst) b, s!!m ) -- sorted decreasing, result<
 
main = putStrLn $ show $ nthHam 1000000000000</syntaxhighlight>
 
This implementation can likely calculate the 10^19th Hamming number in less than a day and can't quite reach the (2^64-1)th (18446744073709551615th) Hamming due to a slight range overflow as it approaches this limit. Maximum memory used to these limits is less than a few hundred Megabytes, so possible on typical personal computers given the required day or two of computing time.
Line 5,399 ⟶ 5,598:
 
All of these codes using algorithms can't do an accurate sort of the error band for arguments somewhere above 10^13 due to the limited precision of the Double logarithm values, but this is easily fixed by using "roll-your-own" Integer logarithm values as follows with very little cost in execution time as it only applies to the relatively very small error band:
<syntaxhighlight lang="haskell">{-# OPTIONS_GHC -O3 -XStrict #-}
 
import Data.Word
Line 5,446 ⟶ 5,645:
(s,res) = ( sortBy (flip compare `on` fst) b, s!!m ) -- sorted decreasing, result<
 
main = putStrLn $ show $ nthHam 1000000000000</syntaxhighlight>
 
All of these codes run a constant factor faster using the forced "Strict" mode, which shows that it is very difficult to anticipate the Haskell strictness analyser, especially in the case of the first code using List comprehensions.
Line 5,454 ⟶ 5,653:
 
Lazy evaluation is used to improve performance.
<syntaxhighlight lang="unicon"># Lazily generate the three Hamming numbers that can be derived directly
# from a known Hamming number h
class Triplet : Class (cv, ce)
Line 5,506 ⟶ 5,705:
t1.nextVal() | delete(triplers, t1)
}
end</syntaxhighlight>
 
=={{header|J}}==
'''Solution:'''<br>
A concise tacit expression using a (right) fold:
<syntaxhighlight lang="j">hamming=: {. (/:~@~.@] , 2 3 5 * {)/@(1x ,~ i.@-)</syntaxhighlight>
'''Example usage:'''
<syntaxhighlight lang="j">   hamming 20
1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
 
{: hamming 1691
2125764000</syntaxhighlight>
For the millionth (and billionth (1e9)) Hamming number see the <code>nh</code> verb from [[j:Essays/Hamming Number|Hamming Number essay]] on the J wiki.
 
Line 5,523 ⟶ 5,722:
I'll explain this J-sentence by dividing it into three parts from left to right, omitting the leftmost <code>{.</code>:
* sort and remove duplicates
<syntaxhighlight lang="j"> /:~@~.@]</syntaxhighlight>
* produce 3 elements by selection and multiplication (we have already produced smaller values, this will overproduce slightly larger values, but the extra values overlap, and we handle that by discarding duplicates):
<syntaxhighlight lang="j"> 2 3 5 * {</syntaxhighlight>
note that LHA (2 3 5 * {) RHA [http://www.jsoftware.com/help/dictionary/intro05.htm is equivalent] to
<syntaxhighlight lang="j"> 2 3 5 * LHA { RHA</syntaxhighlight>
* the RH part forms an array of descending indices and the initial Hamming number 1
<syntaxhighlight lang="j"> (1x ,~ i.@-)</syntaxhighlight>
e.g. if we want the first 5 Hamming numbers, it produces the array:
4 3 2 1 0 1
in other words, we compute a sequence which begins with the desired hamming sequence and then take the first n elements (which will be our desired hamming sequence)
<syntaxhighlight lang="j">   ({. (/:~@~.@] , 2 3 5 * {)/@(1x ,~ i.@-)) 7
1 2 3 4 5 6 8</syntaxhighlight>
This starts using a descending sequence with 1 appended:
<syntaxhighlight lang="j">   (1x ,~ i.@-) 7
6 5 4 3 2 1 0 1</syntaxhighlight>
and then the fold expression is inserted between these list elements and the result computed:
<syntaxhighlight lang="j">   6(/:~@~.@] , 2 3 5 * {) 5(/:~@~.@] , 2 3 5 * {) 4(/:~@~.@] , 2 3 5 * {) 3(/:~@~.@] , 2 3 5 * {) 2(/:~@~.@] , 2 3 5 * {) 1(/:~@~.@] , 2 3 5 * {) 0(/:~@~.@] , 2 3 5 * {) 1
1 2 3 4 5 6 8 9 10 12 15 18 20 25 30 16 24 40</syntaxhighlight>
(Note: A train of verbs in J is evaluated by supplying arguments to the every other verb (counting from the right) and the combining these results with the remaining verbs. Also: <code>{</code> has been implemented so that an index of 0 will select the only item from an array with no dimensions.)
 
Line 5,548 ⟶ 5,747:
 
Inserting the top number's three multiples deep into the priority queue as it does, incurs extra cost for each number produced. To not worsen the expected algorithm complexity, the priority queue should have (amortized) <math>O(1)</math> implementation for both insertion and deletion operations but it looks like it's <math>O(\log n)</math> in Java.
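For comparison, the same pop-the-minimum/push-the-multiples scheme fits in a few lines of Python, whose <code>heapq</code> is likewise an O(log n) binary heap and whose native integers play the role of Java's <math>BigInteger</math> (a sketch of the idea, not a translation of the Java code in this section):

```python
import heapq

def hamming_pq(n):
    # min-heap of candidate values; a "seen" set suppresses duplicates
    # such as 2*3 and 3*2 before they are pushed
    heap, seen = [1], {1}
    for _ in range(n - 1):
        h = heapq.heappop(heap)
        for m in (2 * h, 3 * h, 5 * h):
            if m not in seen:
                seen.add(m)
                heapq.heappush(heap, m)
    return heap[0]  # after n-1 pops, the heap minimum is the n-th number

print(hamming_pq(1691))  # 2125764000
```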
<syntaxhighlight lang="java5">import java.math.BigInteger;
import java.util.PriorityQueue;
 
Line 5,584 ⟶ 5,783:
System.out.println("Hamming(1000000) = " + hamming(1000000));
}
}</syntaxhighlight>
{{out}}
<pre>Hamming(1 .. 20) = 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
Line 5,592 ⟶ 5,791:
Another possibility is to realize that Hamming numbers can be represented and stored as triples of nonnegative integers which are the powers of 2, 3 and 5, and do all operations needed by the algorithms directly on these triples without converting to <math>BigInteger</math>, which saves tremendous memory and time. Also, the search frontier through this three-dimensional grid can be generated in an order that never reaches the same state twice, so we don't need to keep track which states have already been encountered, saving even more memory. The objects of <math>HammingTriple</math> encode Hamming numbers in three fields <math>a</math>, <math>b</math> and <math>c</math>. Multiplying by 2, 3 and 5 can now be done just by incrementing that field. The order comparison of triples needed by the priority queue is implemented with simple logarithm formulas of multiplication and addition, resorting to conversion to <math>BigInteger</math> only in the rare cases that the floating point arithmetic produces too close results.
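The triple encoding can be sketched in Python: heap keys are floating-point logarithms, the exponents ride along, and a big integer is only built once at the end. The names are illustrative; unlike the Java version described above, this sketch has no exact-arithmetic fallback for near-equal logarithms, which is safe only for modest n:

```python
import heapq
from math import log2

def hamming_triple(n):
    # heap entries: (approximate log2 of the value, exp2, exp3, exp5)
    steps = ((1.0, 1, 0, 0), (log2(3), 0, 1, 0), (log2(5), 0, 0, 1))
    heap, seen = [(0.0, 0, 0, 0)], {(0, 0, 0)}
    for _ in range(n - 1):
        lg, i, j, k = heapq.heappop(heap)
        for dl, di, dj, dk in steps:
            t = (i + di, j + dj, k + dk)
            if t not in seen:
                seen.add(t)
                heapq.heappush(heap, (lg + dl,) + t)
    _, i, j, k = heap[0]
    return 2**i * 3**j * 5**k  # one big-integer build at the very end

print(hamming_triple(1691))  # 2125764000
```

All comparisons are plain float comparisons, so no big-integer arithmetic happens inside the loop.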
 
<syntaxhighlight lang="java5">
import java.math.BigInteger;
import java.util.*;
Line 5,703 ⟶ 5,902:
}
}
</syntaxhighlight>
<pre>
Hamming number #1000000
Line 5,730 ⟶ 5,929:
This uses memoized streams - similar in principle to the classic lazy-evaluated sequence, but with a java flavor. Hope you like this recipe!
 
<syntaxhighlight lang="java">
import java.math.BigInteger;
 
Line 5,841 ⟶ 6,040:
}
}
</syntaxhighlight>
 
<pre>
Line 5,857 ⟶ 6,056:
 
Note the use of <code>'''for''' (x in obj)</code> to iterate over the ''properties'' of an object, versus <code>'''for each''' (y in obj)</code> to iterate over the ''values'' of the properties of an object.
<syntaxhighlight lang="javascript">function hamming() {
var queues = {2: [], 3: [], 5: []};
var base;
Line 5,883 ⟶ 6,082:
for (; i <= 1690; i++)
ham.next();
print(i + " => " + ham.next());</syntaxhighlight>
{{out}}
<pre>1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36
Line 5,897 ⟶ 6,096:
--Mike Lorenz
 
<syntaxhighlight lang="javascript"><html>
<head></head>
<body>
Line 5,994 ⟶ 6,193:
});
</script>
</html></syntaxhighlight>
 
{{out}}
Line 6,074 ⟶ 6,273:
new builtins such as "limit" and "nth".
==== Hamming number generator ====
<syntaxhighlight lang="jq"># Return the index in the input array of the min_by(f) value
def index_min_by(f):
. as $in
Line 6,119 ⟶ 6,318:
[1, [[2],[3],[5]], 1] | _hamming;
 
. as $n | hamming($n)</syntaxhighlight>
'''Examples''':
<syntaxhighlight lang="jq"># First twenty:
hamming(20)
# See elsewhere for output
Line 6,132 ⟶ 6,331:
hamming(-1000000)
# => 1.926511252902403e+44
</syntaxhighlight>
==== Hamming numbers as triples ====
In this section, Hamming numbers are represented as triples,
Line 6,138 ⟶ 6,337:
respectively. We therefore begin with some functions for managing
Hamming numbers represented in this manner:
<syntaxhighlight lang="jq"># The log (base e) of a Hamming triple:
def ln_hamming:
if length != 3 then error("ln_hamming: \(.)") else . end
Line 6,199 ⟶ 6,398:
end;
[[0,0,0], [ [[1,0,0]] ,[[0,1,0]], [[0,0,1]] ], 1] | _hamming;
</syntaxhighlight>
'''Examples'''
<syntaxhighlight lang="jq"># The first twenty Hamming numbers as integers:
hamming(-20) | hamming_toi
# => (see elsewhere)
Line 6,211 ⟶ 6,410:
# The millionth:
hamming(-1000000)
# => [55,47,64]</syntaxhighlight>
 
=={{header|Julia}}==
Simple brute force algorithm, derived from the discussion at ProgrammingPraxis.com.
<syntaxhighlight lang="julia">function hammingsequence(N)
if N < 1
throw("Hamming sequence exponent must be a positive integer")
Line 6,239 ⟶ 6,438:
println(hammingsequence(20))
println(hammingsequence(1691)[end])
println(hammingsequence(1000000)[end])</syntaxhighlight>{{output}}<pre>
[1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36]
2125764000
Line 6,246 ⟶ 6,445:
 
The above code is terribly inefficient, just as said, but can be improved by about a factor of two by using intermediate variables (next2, next3, and next5) to avoid recalculating the long multi-precision integers for each comparison, as it seems that the Julia compiler (version 1.0.2) is not doing common subexpression elimination:
<syntaxhighlight lang="julia">function hammingsequence(N::Int)
if N < 1
throw("Hamming sequence index must be a positive integer")
Line 6,261 ⟶ 6,460:
end
ham
end</syntaxhighlight>
 
===Infinite generator, avoiding duplicates, using logarithms for faster processing===
 
The above code is slow for several reasons, partly because it is doing many multi-precision integer multiplications requiring much memory allocation and garbage collection, for which the current Julia version 1.0.2 is quite slow, but also because there are many repeated calculations (3 times 2 equals 2 times 3, etc.). The following code is about 60 times faster by using floating point logarithms for multiplication and comparison; it is also an infinite generator (an iterator), which means that memory consumption can be greatly reduced by eliminating values which are no longer of any use:
{{trans|Nim}}
<syntaxhighlight lang="julia">struct LogRep
    lg :: Float64
    x2 :: UInt32
    x3 :: UInt32
    x5 :: UInt32
end
const ONE = LogRep(0.0, 0, 0, 0)
const LB2_2 = 1.0
const LB2_3 = log(2,3)
const LB2_5 = log(2,5)
function mult2(lr :: LogRep) # :: LogRep
    LogRep(lr.lg + LB2_2, lr.x2 + 1, lr.x3, lr.x5)
end
function mult3(lr :: LogRep) # :: LogRep
    LogRep(lr.lg + LB2_3, lr.x2, lr.x3 + 1, lr.x5)
end
function mult5(lr :: LogRep) # :: LogRep
    LogRep(lr.lg + LB2_5, lr.x2, lr.x3, lr.x5 + 1)
end
function lr2BigInt(lr :: LogRep) # :: BigInt
    BigInt(2)^lr.x2 * BigInt(3)^lr.x3 * BigInt(5)^lr.x5
end

mutable struct HammingsLog
    s2 :: Vector{LogRep}
    s3 :: Vector{LogRep}
    s5 :: LogRep
    mrg :: LogRep
    s2hdi :: Int
    s3hdi :: Int
    HammingsLog() = new(
        [ONE],
        [mult3(ONE)],
        mult5(ONE),
        mult3(ONE),
        1, 1,
    )
end
Base.eltype(::Type{HammingsLog}) = LogRep
function Base.iterate(HM::HammingsLog, st = HM) # :: Union{Nothing,Tuple{LogRep,HammingsLog}}
    s2sz = length(st.s2)
    if st.s2hdi + st.s2hdi - 2 >= s2sz
        ns2sz = s2sz - st.s2hdi + 1
        copyto!(st.s2, 1, st.s2, st.s2hdi, ns2sz)
        resize!(st.s2, ns2sz); st.s2hdi = 1
    end
    rslt = @inbounds(st.s2[st.s2hdi])
    if rslt.lg < st.mrg.lg
        st.s2hdi += 1
    else
        s3sz = length(st.s3)
        if st.s3hdi + st.s3hdi - 2 >= s3sz
            ns3sz = s3sz - st.s3hdi + 1
            copyto!(st.s3, 1, st.s3, st.s3hdi, ns3sz)
            resize!(st.s3, ns3sz); st.s3hdi = 1
        end
        rslt = st.mrg; push!(st.s3, mult3(rslt))
        st.s3hdi += 1; chkv = @inbounds(st.s3[st.s3hdi])
        if chkv.lg < st.s5.lg
            st.mrg = chkv
        else
            st.mrg = st.s5; st.s5 = mult5(st.s5); st.s3hdi -= 1
        end
    end
    push!(st.s2, mult2(rslt)); rslt, st
end

function test(n :: Int) # :: Tuple{LogRep, Float64}
    start = time(); rslt :: LogRep = ONE
    let count = n; for t in HammingsLog() count <= 1 && (rslt = t; break); count -= 1 end end
    elpsd = (time() - start) * 1000
    rslt, elpsd
end

foreach(x -> print(lr2BigInt(x)," "), (Iterators.take(HammingsLog(), 20))); println()
let count = 1691; for t in HammingsLog() count <= 1 && (println(lr2BigInt(t)); break); count -= 1 end end
rslt, elpsd = test(1000000)
println(lr2BigInt(rslt))
println("This last took $elpsd milliseconds.")</syntaxhighlight>
{{out}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
2125764000
519312780448388736089589843750000000000000000000000000000000000000000000000000000000
This last took 16.8759822845459 milliseconds.</pre>
The above execution time is as run on an Intel i5-6500 at 3.6 GHz (single threaded boost), and the program can find the billionth Hamming number in about 17 seconds.
 
===Determination of the nth Hamming number by processing of error band===
 
For some phenomenal speed in determining the nth Hamming/regular number, one doesn't need to find all the values up to that limit but rather only the values within an error band which is a factor of two either way from the correct value; this has the advantage that the number of processing loops is reduced from O(n) to O(n^(2/3)) for a considerable saving for larger ranges, and has the further advantage that memory consumption is reduced to O(n^(1/3)), meaning that huge ranges can be computed on a common desktop computer. The following code can compute the trillionth (10^12th) Hamming number in a couple of seconds:
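Before the full Julia version that follows, the shape of the band method can be sketched in Python. The count estimate and the band-width factor 0.509 follow the description above; treat the constants as assumptions, and note that the sketch only works once n is beyond the first few dozen (tiny n can trip the band-size check):

```python
from math import log2, sqrt

def nth_hamming(n):
    # count(lg) ~ (lg + log2(sqrt(30)))^3 / (6*log2(3)*log2(5)), so invert
    # it to estimate log2 of the n-th number, then enumerate only the
    # exponent triples (i, j, k) whose log lies inside the error band
    lb3, lb5 = log2(3), log2(5)
    fctr = 6.0 * lb3 * lb5
    crctn = log2(sqrt(30.0))
    lgest = (fctr * n) ** (1.0 / 3.0) - crctn     # estimated log2 of answer
    frctn = 0.509                                  # assumed band-width factor
    lghi = (fctr * (n + frctn * lgest * lgest)) ** (1.0 / 3.0) - crctn
    lglo = 2.0 * lgest - lghi                      # symmetric lower bound
    count, band = 0, []
    k = 0
    while k * lb5 <= lghi:
        p = k * lb5
        j = 0
        while p + j * lb3 <= lghi:
            q = p + j * lb3
            imax = int(lghi - q)
            count += imax + 1                      # exact count of values <= lghi
            i = imax
            while i >= 0 and q + i >= lglo:        # keep only the band members
                band.append((q + i, (i, j, k)))
                i -= 1
            j += 1
        k += 1
    band.sort(reverse=True)
    ndx = count - n                                # descending rank of the n-th
    assert 0 <= ndx < len(band), "error band estimated too narrow"
    return band[ndx][1]

i, j, k = nth_hamming(1691)
print(2**i * 3**j * 5**k)  # 2125764000
```

Only the O(n^(1/3))-sized band is ever stored, and one big-integer product is formed at the end.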
<syntaxhighlight lang="julia">function nthhamming(n :: UInt64) # :: Tuple{UInt32, UInt32, UInt32}
# take care of trivial cases too small for band size estimation to work...
n < 1 && throw("nthhamming: argument must be greater than zero!!!")
Line 6,371 ⟶ 6,595:
foreach(x-> print(trival(nthhamming(UInt(x))), " "), 1:20); println()
println(trival(nthhamming(UInt64(1691))))
println(trival(nthhamming(UInt64(1000000))))</syntaxhighlight>
 
Above about a range of 10^13, a Float64 logarithm doesn't have enough precision to be able to sort the error band properly, so a refinement of using a "roll-your-own" extended precision logarithm must be used, as follows:
<syntaxhighlight lang="julia">function nthhamming(n :: UInt64) # :: Tuple{UInt32, UInt32, UInt32}
# take care of trivial cases too small for band size estimation to work...
n < 1 && throw("nthhamming: argument must be greater than zero!!!")
Line 6,418 ⟶ 6,642:
_, tri = band[ndx]
tri
end</syntaxhighlight>
 
The above code can find the trillionth Hamming number in about two seconds (very little slower) and the thousand trillionth value in about 192 seconds. This routine would be able to find the million trillionth Hamming number in about 20,000 seconds or about five and a half hours.
Line 6,425 ⟶ 6,649:
{{trans|Java}}
 
<syntaxhighlight lang="scala">import java.math.BigInteger
import java.util.*
 
Line 6,456 ⟶ 6,680:
System.out.println("\nHamming(1691) = ${hamming(1691)}")
System.out.println("Hamming(1000000) = ${hamming(1000000)}")
}</syntaxhighlight>
 
{{out}}
Line 6,464 ⟶ 6,688:
 
===Overloaded function:===
<syntaxhighlight lang="scala">import java.math.BigInteger
import java.util.*
 
Line 6,496 ⟶ 6,720:
println("Hamming($r) = " + hamming(r))
arrayOf(1691, 1000000).forEach { println("Hamming($it) = " + hamming(it)) }
}</syntaxhighlight>
 
===Recursive function:===
<syntaxhighlight lang="scala">import java.math.BigInteger
import java.util.*
 
Line 6,530 ⟶ 6,754:
fun main(args: Array<String>) {
arrayOf(1..20, 1691, 1000000).forEach { println("Hamming($it) = " + hamming(it)) }
}</syntaxhighlight>
 
{{out}}
Line 6,541 ⟶ 6,765:
The following code implements a functional version, with the only mutable state being that required to implement a recursive binding, as commented in the code. It is fast because it uses non-generic functions so that much of the boxing/unboxing can be optimized away, and it takes very little memory because of the avoiding duplicates, the order that the primes are processed with least dense first, and because it is implemented in such a way so as to use only local bindings for the heads of the lazy lists so that they can be consumed as used and garbage collected away. Kotlin does not have a lazy list like Haskell or a memoized lazy Stream like Scala, so the code implements a basic version of LazyList to be used by the algorithm (Java 8 Streams are not memoized as required here):
{{trans|scala}}
<syntaxhighlight lang="scala">import java.math.BigInteger as BI
 
data class LazyList<T>(val head: T, val lztail: Lazy<LazyList<T>?>) {
Line 6,588 ⟶ 6,812:
val stop = System.currentTimeMillis()
println("Took ${stop - strt} milliseconds for the last.")
}</syntaxhighlight>
{{output}}
<pre>[1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36]
Line 6,595 ⟶ 6,819:
Took 381 milliseconds for the last.</pre>
Run on a AMD Bulldozer FX8120 3.1 GHz which is about half the speed as an equivalent Intel (but also half the price).
 
=={{header|Lambdatalk}}==
 
===1) recursive version===
 
<syntaxhighlight lang="scheme">
 
{def hamming
{def hamming.loop
{lambda {:h :a :i :b :j :c :k :m :n}
{if {>= :n :m}
then {A.last :h}
else {let { {:h {A.set! :n {min :a :b :c} :h}}
{:a :a} {:i :i}
{:b :b} {:j :j}
{:c :c} {:k :k}
{:m :m} {:n :n}
} {hamming.loop :h
{if {= :a {A.get :n :h}}
then {* 2 {A.get {+ :i 1} :h}} {+ :i 1}
else :a :i}
{if {= :b {A.get :n :h}}
then {* 3 {A.get {+ :j 1} :h}} {+ :j 1}
else :b :j}
{if {= :c {A.get :n :h}}
then {* 5 {A.get {+ :k 1} :h}} {+ :k 1}
else :c :k}
:m
{+ :n 1} }
}}}}
{lambda {:n}
{hamming.loop {A.new {S.serie 1 :n}} 2 0 3 0 5 0 :n 1}
}}
-> hamming
 
{S.map hamming {S.serie 1 20}}
-> 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
 
{hamming 1691}
-> 2125764000 // < 200ms
 
Currently limited to javascript's integers and by stackoverflow on some computers.
 
</syntaxhighlight>
 
===2) iterative version===
 
Build a table of 2^i•3^j•5^k from i,j,k = 0 to n and sort it.
 
===2.1) compute===
 
<syntaxhighlight lang="scheme">
 
{def ham
{lambda {:n}
{S.sort <
{S.map {{lambda {:n :i}
{S.map {{lambda {:n :i :j}
{S.map {{lambda {:i :j :k}
{* {pow 2 :i} {pow 3 :j} {pow 5 :k}}} :i :j}
{S.serie 0 :n} } } :n :i}
{S.serie 0 :n} } } :n}
{S.serie 0 :n} }
}}}
-> ham
 
{def H {ham 30}}
-> H
 
{S.slice 0 19 {H}}
-> 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
 
{S.get 1690 {H}}
-> 2125764000 // on my macbook pro
 
</syntaxhighlight>
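The same build-a-table-and-sort idea in a Python sketch for comparison (exponents up to 30 suffice for the first 1691 values, since the 1691st number is below 2^31):

```python
from itertools import product

def hamming_table(n):
    # brute force: all products 2^i * 3^j * 5^k with exponents 0..n, sorted;
    # only a prefix of the result is guaranteed to be a complete run of
    # Hamming numbers, so n must be chosen large enough for the index wanted
    return sorted(2**i * 3**j * 5**k for i, j, k in product(range(n + 1), repeat=3))

t = hamming_table(30)
print(t[:20])
print(t[1690])  # 2125764000
```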
 
===2.2) display===
 
Display a hamming number as 2<sup>a</sup>•3<sup>b</sup>•5<sup>c</sup>
 
<syntaxhighlight lang="scheme">
 
{def factor
{def factor.r
{lambda {:n :i}
{if {> :i :n}
then
else {if {= {% :n :i} 0}
then :i {factor.r {/ :n :i} :i}
else {factor.r :n {+ :i 1}} }}}}
{lambda {:n}
:n is the product of 1 {factor.r :n 2} }}
-> factor
 
{def asproductofpowers
{def asproductofpowers.loop
{lambda {:a :b :c :n}
{if {= {S.first :n} 1}
then 2{sup :a}•3{sup :b}•5{sup :c}
else {asproductofpowers.loop
{if {= {S.first :n} 2} then {+ :a 1} else :a}
{if {= {S.first :n} 3} then {+ :b 1} else :b}
{if {= {S.first :n} 5} then {+ :c 1} else :c}
{W.rest :n} }
}}}
{lambda {:n}
{asproductofpowers.loop 0 0 0 {S.reverse :n}}}}
-> asproductofpowers
 
{factor 2125764000}
-> 2125764000 is the product of 1 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 5 5 5
 
{asproductofpowers {factor 2125764000}}
-> 2^5•3^12•5^3
 
{S.map {lambda {:i} {div}:i: {S.get :i {H}} =
{asproductofpowers {factor {S.get :i {H}}}}}
{S.serie 0 19}}
->
0: 1 = 2^0•3^0•5^0
1: 2 = 2^1•3^0•5^0
2: 3 = 2^0•3^1•5^0
3: 4 = 2^2•3^0•5^0
4: 5 = 2^0•3^0•5^1
5: 6 = 2^1•3^1•5^0
6: 8 = 2^3•3^0•5^0
7: 9 = 2^0•3^2•5^0
8: 10 = 2^1•3^0•5^1
9: 12 = 2^2•3^1•5^0
10: 15 = 2^0•3^1•5^1
11: 16 = 2^4•3^0•5^0
12: 18 = 2^1•3^2•5^0
13: 20 = 2^2•3^0•5^1
14: 24 = 2^3•3^1•5^0
15: 25 = 2^0•3^0•5^2
16: 27 = 2^0•3^3•5^0
17: 30 = 2^1•3^1•5^1
18: 32 = 2^5•3^0•5^0
19: 36 = 2^2•3^2•5^0
 
</syntaxhighlight>
 
See http://lambdaway.free.fr/lambdawalks/?view=hamming_numbers3 for a better display as 2<sup>a</sup>•3<sup>b</sup>•5<sup>c</sup>.
 
=={{header|Liberty BASIC}}==
LB has unlimited precision integers.
<syntaxhighlight lang="lb">
dim h( 1000000)
 
Line 6,622 ⟶ 6,990:
next n
hamming =h( limit -1)
end function</syntaxhighlight>
<pre>
1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
Line 6,631 ⟶ 6,999:
 
=={{header|Logo}}==
<syntaxhighlight lang="logo">to init.ham
; queues
make "twos [1]
Line 6,656 ⟶ 7,024:
repeat 20 [print next.ham]
repeat 1690-20 [ignore next.ham]
print next.ham</syntaxhighlight>
 
=={{header|Lua}}==
<syntaxhighlight lang="lua">function hiter()
hammings = {1}
prev, vals = {1, 1, 1}
Line 6,682 ⟶ 7,050:
n, l = 0, 0
while n < 2^31 do n, l = j(), n end
print(l)</syntaxhighlight>
 
=={{header|M2000 Interpreter}}==
===For Long Only===
We have to exit the loop (and function) before calculating a new X2, X3 or X5, or we get an overflow error
<syntaxhighlight lang="m2000 interpreter">
Module hamming_long {
function hamming(l as long, &h(),&last()) {
Line 6,724 ⟶ 7,092:
}
hamming_long
</syntaxhighlight>
{{out}}
<pre style="height:30ex;overflow:scroll">
Line 6,755 ⟶ 7,123:
We have to exit the loop (and function) before calculating a new X2, X3 or X5, or we get an overflow error
 
<syntaxhighlight lang="m2000 interpreter">
Module hamming {
function hamming(l as long, &h(),&last()) {
Line 6,795 ⟶ 7,163:
}
hamming
</syntaxhighlight>
 
{{out}}
Line 6,826 ⟶ 7,194:
 
=={{header|Mathematica}} / {{header|Wolfram Language}}==
<syntaxhighlight lang="mathematica">HammingList[N_] := Module[{A, B, C}, {A, B, C} = (N^(1/3))*{2.8054745679851933, 1.7700573778298891, 1.2082521307023026} - {1, 1, 1};
Take[ Sort@Flatten@Table[ 2^x * 3^y * 5^z ,
{x, 0, A}, {y, 0, (-B/A)*x + B}, {z, 0, C - (C/A)*x - (C/B)*y}], N]];</syntaxhighlight>
<pre>HammingList[20]
-> {1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36}
Line 6,841 ⟶ 7,209:
The ''n'' parameter was chosen by trial and error. You have to pick an ''n'' large enough that the powers of 2, 3 and 5 will all be greater than ''n'' at the 1691st Hamming number.
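The bound-selection idea can be sketched in Python (an illustrative sketch only, not the MATLAB code itself; the exponent bound <code>n</code> and the cut-off <code>2**n</code> are assumptions that must be chosen large enough for the requested count, just as described above):

```python
def hamming_by_enumeration(N, n=40):
    # enumerate all products 2^i * 3^j * 5^k up to a chosen limit,
    # sort them, and keep the first N; n must be picked large enough
    # (by trial and error, as the text says) for the requested N
    limit = 2 ** n
    products = sorted(p
                      for i in range(n)
                      for j in range(n)
                      for k in range(n)
                      if (p := 2**i * 3**j * 5**k) <= limit)
    return products[:N]
```

With the default bound, the first 1691 values are all well below the cut-off, so the result agrees with the other entries on this page.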
 
<syntaxhighlight lang="matlab">n = 40;
 
powers_2 = 2.^[0:n-1];
Line 6,860 ⟶ 7,228:
 
disp(powers_235(1:20))
disp(powers_235(1691))</syntaxhighlight>
 
=={{header|Mojo}}==
 
Since current Mojo (version 0.7) does not support many forms of recursive expression, the code below is an imperative version of the First In First Out (FIFO) queue variant of the fastest iterative Nim version, using logarithmic approximations for the comparisons and a final conversion of the power tuples to a big integer for output. Since Mojo does not currently have a big integer library, enough of the required functionality of one (multiplication and conversion to string) is implemented in the following code:
 
{{trans|Nim}}
 
<syntaxhighlight lang="mojo">from collections.vector import (DynamicVector, CollectionElement)
from math import (log2, trunc, pow)
from memory import memset_zero #, memcpy)
from time import now
 
alias cCOUNT: Int = 1_000_000
 
struct BigNat(Stringable): # enough just to support conversion and printing
''' Enough "infinite" precision to support as required here - multiply and
divide by 10 conversion to string...
'''
var contents: DynamicVector[UInt32]
fn __init__(inout self):
self.contents = DynamicVector[UInt32]()
fn __init__(inout self, val: UInt32):
self.contents = DynamicVector[UInt32](4)
self.contents.resize(1, val)
fn __copyinit__(inout self, existing: Self):
self.contents = existing.contents
fn __moveinit__(inout self, owned existing: Self):
self.contents = existing.contents^
fn __str__(self) -> String:
var rslt: String = ""
var v = self.contents
while len(v) > 0:
var t: UInt64 = 0
for i in range(len(v) - 1, -1, -1):
t = ((t << 32) + v[i].to_int())
v[i] = (t // 10).to_int(); t -= v[i].to_int() * 10
var sz = len(v) - 1
while sz >= 0 and v[sz] == 0: sz -= 1
v.resize(sz + 1, 0)
rslt = str(t) + rslt
return rslt
fn mult(inout self, mltplr: Self):
var rslt = DynamicVector[UInt32]()
rslt.resize(len(self.contents) + len(mltplr.contents), 0)
for i in range(len(mltplr.contents)):
var t: UInt64 = 0
for j in range(len(self.contents)):
t += self.contents[j].to_int() * mltplr.contents[i].to_int() + rslt[i + j].to_int()
rslt[i + j] = (t & 0xFFFFFFFF).to_int(); t >>= 32
rslt[i + len(self.contents)] += t.to_int()
var sz = len(rslt) - 1
while sz >= 0 and rslt[sz] == 0: sz -= 1
rslt.resize(sz + 1, 0); self.contents = rslt
 
alias lb2: Float64 = 1.0
alias lb3: Float64 = log2[DType.float64, 1](3.0)
alias lb5: Float64 = log2[DType.float64, 1](5.0)
 
@value
struct LogRep(CollectionElement, Stringable):
var logrep: Float64
var x2: UInt32
var x3: UInt32
var x5: UInt32
fn __del__(owned self): return
@always_inline
fn mul2(self) -> Self:
return LogRep(self.logrep + lb2, self.x2 + 1, self.x3, self.x5)
@always_inline
fn mul3(self) -> Self:
return LogRep(self.logrep + lb3, self.x2, self.x3 + 1, self.x5)
@always_inline
fn mul5(self) -> Self:
return LogRep(self.logrep + lb5, self.x2, self.x3, self.x5 + 1)
fn __str__(self) -> String:
var rslt = BigNat(1)
fn expnd(inout rslt: BigNat, bs: UInt32, n: UInt32):
var bsm = BigNat(bs); var nm = n
while nm > 0:
if (nm & 1) != 0: rslt.mult(bsm)
bsm.mult(bsm); nm >>= 1
expnd(rslt, 2, self.x2); expnd(rslt, 3, self.x3); expnd(rslt, 5, self.x5)
return str(rslt)
 
alias oneLR: LogRep = LogRep(0.0, 0, 0, 0)
 
alias LogRepThunk = fn() escaping -> LogRep
 
fn hammingsLogImp() -> LogRepThunk:
var s2 = DynamicVector[LogRep](); var s3 = DynamicVector[LogRep](); var s5 = oneLR; var mrg = oneLR
s2.resize(512, oneLR); s2[0] = oneLR.mul2(); s3.resize(1, oneLR); s3[0] = oneLR.mul3()
# var s2p = s2.steal_data(); var s3p = s3.steal_data()
var s2hdi = 0; var s2tli = -1; var s3hdi = 0; var s3tli = -1
@always_inline
fn next() escaping -> LogRep:
var rslt = s2[s2hdi]
var s2len = len(s2)
s2tli += 1;
if s2tli >= s2len:
s2tli = 0
if s2hdi == s2tli:
if s2len < 1024:
s2.resize(1024, oneLR)
else:
s2.resize(s2len + s2len, oneLR) # ; s2p = s2.steal_data()
for i in range(s2hdi):
s2[s2len + i] = s2[i]
# memcpy[UInt8, 0](s2p + s2len, s2p, sizeof[LogRep]() * s2hdi)
s2tli += s2len; s2len += s2len
if rslt.logrep < mrg.logrep:
s2hdi += 1
if s2hdi >= s2len:
s2hdi = 0
else:
rslt = mrg
var s3len = len(s3)
s3tli += 1;
if s3tli >= s3len:
s3tli = 0
if s3hdi == s3tli:
if s3len < 1024:
s3.resize(1024, oneLR)
else:
s3.resize(s3len + s3len, oneLR) # ; s3p = s3.steal_data()
for i in range(s3hdi):
s3[s3len + i] = s3[i]
# memcpy[UInt8, 0](s3p + s3len, s3p, sizeof[LogRep]() * s3hdi)
s3tli += s3len; s3len += s3len
if mrg.logrep < s5.logrep:
s3hdi += 1
if s3hdi >= s3len:
s3hdi = 0
else:
s5 = s5.mul5()
s3[s3tli] = rslt.mul3(); let t = s3[s3hdi];
mrg = t if t.logrep < s5.logrep else s5
s2[s2tli] = rslt.mul2(); return rslt
return next
 
fn main():
print("The first 20 Hamming numbers are:")
var f = hammingsLogImp();
for i in range(20): print_no_newline(f(), " ")
print()
f = hammingsLogImp(); var h: LogRep = oneLR
for i in range(1691): h = f()
print("The 1691st Hamming number is", h)
let strt: Int = now()
f = hammingsLogImp()
for i in range(cCOUNT): h = f()
let elpsd = (now() - strt) / 1000
 
print("The " + str(cCOUNT) + "th Hamming number is:")
print("2**" + str(h.x2) + " * 3**" + str(h.x3) + " * 5**" + str(h.x5))
let lg2 = lb2 * Float64(h.x2.to_int()) + lb3 * Float64(h.x3.to_int()) + lb5 * Float64(h.x5.to_int())
let lg10 = lg2 / log2(Float64(10))
let expnt = trunc(lg10); let num = pow(Float64(10.0), lg10 - expnt)
let apprxstr = str(num) + "E+" + str(expnt.to_int())
print("Approximately: ", apprxstr)
let answrstr = str(h)
print("The result has", len(answrstr), "digits.")
print(answrstr)
print("This took " + str(elpsd) + " microseconds.")</syntaxhighlight>
 
{{out}}
 
<pre>The first 20 Hamming numbers are:
1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
The 1691st Hamming number is 2125764000
The 1000000th Hamming number is:
2**55 * 3**47 * 5**64
Approximately: 5.1931278110620553E+83
The result has 84 digits.
519312780448388736089589843750000000000000000000000000000000000000000000000000000000
This took 3626.192 microseconds.</pre>
 
The above was run on an AMD 7840HS CPU, single-thread boosted to 5.1 GHz. It is about the same speed as the Nim version from which it was translated.
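The final conversion step described above (power tuple to exact big integer, plus a logarithm-only scientific approximation) can be sketched in Python, where arbitrary-precision integers are built in; the function names here are illustrative, not taken from the Mojo code:

```python
from math import log2

def triple_to_int(a, b, c):
    # exact value of 2^a * 3^b * 5^c using native big integers
    return 2**a * 3**b * 5**c

def triple_to_sci(a, b, c):
    # approximate mantissa/exponent computed purely from logarithms,
    # without ever forming the (possibly huge) integer
    lg10 = (a + b * log2(3) + c * log2(5)) / log2(10)
    e = int(lg10)
    return 10.0 ** (lg10 - e), e
```

For the millionth Hamming number (2^55 * 3^47 * 5^64) the logarithmic route reproduces the approximation 5.193…E+83 shown in the output above.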
 
=={{header|MUMPS}}==
<syntaxhighlight lang="mumps">Hamming(n)	New count,ok,next,number,which
For which=2,3,5 Set number=1
For count=1:1:n Do
Line 6,896 ⟶ 7,441:
20: 36
1691: 2125764000
2000: 8062156800</syntaxhighlight>
 
=={{header|Nim}}==
Line 6,902 ⟶ 7,447:
 
===Classic Dijkstra algorithm===
<syntaxhighlight lang="nim">import bigints
 
proc min(a: varargs[BigInt]): BigInt =
Line 6,936 ⟶ 7,481:
echo ""
echo hamming(1691)
echo hamming(1_000_000)</syntaxhighlight>
{{out}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
Line 6,947 ⟶ 7,492:
 
The following code improves on the above by reducing the number of computationally-time-expensive BigInt comparisons slightly:
<syntaxhighlight lang="nim">import bigints, times
proc hamming(limit: int): BigInt =
Line 6,985 ⟶ 7,530:
 
echo rslt
echo "This last took ", (stop - strt)*1000, " milliseconds."</syntaxhighlight>
{{output}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
Line 7,000 ⟶ 7,545:
{{works with|Nim 1.4.0}}
Note, the following code uses the "bigints" library that doesn't ship with the Nim compiler; install it with "nimble install bigints".
<syntaxhighlight lang="nim">import bigints, times
iterator func_hamming() : BigInt =
Line 7,063 ⟶ 7,608:
 
echo rslt
echo "This last took ", (stop - strt)*1000, " milliseconds."</syntaxhighlight>
{{output}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
Line 7,080 ⟶ 7,625:
{{works with|Nim 1.4.0}}
Note, the following code uses the "bigints" library that doesn't ship with the Nim compiler; install it with "nimble install bigints".
<syntaxhighlight lang="nim">from times import inMilliseconds
import std/monotimes, bigints
from math import log2
Line 7,161 ⟶ 7,706:
echo "This last took ", elpsd, " milliseconds."
 
main()</syntaxhighlight>
{{out}}
<pre>The first 20 hammings are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
Line 7,176 ⟶ 7,721:
The following code uses imperative techniques to implement the same algorithm, using sequences for storage, indexes as back pointers to the results of previous calculations, and custom in-place deletion of unused values in chunks (with steadily growing capacity) so that the same sequence can be reused for longer and far fewer new memory allocations need to be made:
 
<syntaxhighlight lang="nim">import bigints, times
 
iterator nodups_hamming(): BigInt =
Line 7,233 ⟶ 7,778:
 
echo rslt
echo "This last took ", (stop - strt)*1000, " milliseconds."</syntaxhighlight>
 
{{out}}
Line 7,247 ⟶ 7,792:
===Much faster iterating version using logarithmic calculations===
 
Still, much of the above time is spent in BigInt calculations and in the many remaining heap allocations/deallocations, as each BigInt holds an internal sequence containing its arbitrary-precision binary digits. The following code uses an internal logarithmic representation of the values rather than BigInt for the sorting comparisons, so the only mathematical operations required are integer and floating-point additions and comparisons; as well, since these don't require heap space, there is almost no allocation/deallocation at all, for greatly increased speed:
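The logarithmic representation can be sketched in Python: each number is a tuple of (log2 value, exponents), ordering compares only the float, and the exact integer is reconstructed from the exponents only when actually needed (illustrative names, not the Nim code):

```python
from math import log2

LB3, LB5 = log2(3), log2(5)
ONE = (0.0, 0, 0, 0)  # (log2 of value, power of 2, power of 3, power of 5)

def mul2(lr): return (lr[0] + 1.0, lr[1] + 1, lr[2], lr[3])
def mul3(lr): return (lr[0] + LB3, lr[1], lr[2] + 1, lr[3])
def mul5(lr): return (lr[0] + LB5, lr[1], lr[2], lr[3] + 1)

def to_int(lr):
    # the exact value is only materialized at the very end
    return 2**lr[1] * 3**lr[2] * 5**lr[3]
```

Sorting by the float log field gives the same order as sorting the exact values, which is why the sequence logic below never needs a BigInt comparison.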
 
<syntaxhighlight lang="nim"># HammingsLogImp.nim
# compile with: nim c -d:danger -t:-march=native -d:LTO --gc:arc HammingsLogImp

import bigints, std/math
from std/times import inMicroseconds
from std/monotimes import getMonoTime, `-`

type LogRep = (float64, uint32, uint32, uint32)

let one: LogRep = (0.0, 0'u32, 0'u32, 0'u32)

let lb2 = 1.0'f64; let lb3 = 3.0.log2; let lb5 = 5.0.log2
proc mul2(me: Logrep): Logrep {.inline.} =
  (me[0] + lb2, me[1] + 1, me[2], me[3])
proc mul3(me: Logrep): Logrep {.inline.} =
  (me[0] + lb3, me[1], me[2] + 1, me[3])
proc mul5(me: Logrep): Logrep {.inline.} =
  (me[0] + lb5, me[1], me[2], me[3] + 1)

proc lr2BigInt(lr: Logrep): BigInt =
  proc xpnd(bs: uint, v: uint32): BigInt =
    result = initBigInt 1
    var bsm = initBigInt bs
    var vm = v.uint
    while vm > 0:
      if (vm and 1) != 0: result *= bsm
      bsm *= bsm; vm = vm shr 1
  xpnd(2, lr[1]) * xpnd(3, lr[2]) * xpnd(5, lr[3])

iterator hammingsLogImp(): LogRep =
  var
    s2 = newSeq[LogRep](1024) # give these a reasonable starting size
    s3 = newSeq[LogRep](1024) # so that size doubling works
    s5 = one.mul5
    mrg = one.mul3
    s2hdi, s2tli, s3hdi, s3tli = 0
  yield one
  s2[0] = one.mul2; s3[0] = one.mul3
  while true:
    s2tli += 1
    if s2hdi + s2hdi >= s2tli: # move in-place to avoid allocation
      copyMem(addr(s2[0]), addr(s2[s2hdi]), sizeof(LogRep) * (s2tli - s2hdi))
      s2tli -= s2hdi; s2hdi = 0
    let cps2 = s2.len
    if s2tli >= cps2: s2.setLen(cps2 + cps2)
    var rsltp = addr(s2[s2hdi])
    if rsltp[][0] < mrg[0]: s2[s2tli] = rsltp[].mul2; s2hdi += 1; yield rsltp[]
    else:
      s3tli += 1
      if s3hdi + s3hdi >= s3tli: # move in-place to avoid allocation
        copyMem(addr(s3[0]), addr(s3[s3hdi]), sizeof(LogRep) * (s3tli - s3hdi))
        s3tli -= s3hdi; s3hdi = 0
      let cps3 = s3.len
      if s3tli >= cps3: s3.setLen(cps3 + cps3)
      s2[s2tli] = mrg.mul2; s3[s3tli] = mrg.mul3; s3hdi += 1
      let arsltp = addr(s3[s3hdi])
      let rslt = mrg
      if arsltp[][0] < s5[0]: mrg = arsltp[]
      else: mrg = s5; s5 = s5.mul5; s3hdi -= 1
      yield rslt

var cnt = 0
for h in hammingsLogImp():
  write stdout, h.lr2BigInt, " "; cnt += 1
  if cnt >= 20: break
echo ""
cnt = 0
for h in hammingsLogImp():
  cnt += 1
  if cnt >= 1691: echo h.lr2BigInt; break

let strt = getMonoTime()
var rslt: LogRep
cnt = 0
for h in hammingsLogImp():
  cnt += 1
  if cnt >= 1_000_000: rslt = h; break # """
let elpsd = (getMonoTime() - strt).inMicroseconds

let (_, x2, x3, x5) = rslt
writeLine stdout, "2^", x2, " + 3^", x3, " + 5^", x5
let lgrslt = (x2.float64 + x3.float64 * 3.0f64.log2 +
              x5.float64 * 5.0f64.log2) * 2.0f64.log10
let (whl, frac) = lgrslt.splitDecimal
echo "Approximately: ", 10.0f64.pow(frac), "E+", whl.uint64
let brslt = rslt.lr2BigInt
let s = brslt.to_string
let ls = s.len
echo "Number of digits: ", ls
if ls <= 2000:
  for i in countup(0, ls - 1, 100):
    if i + 100 < ls: echo s[i .. i + 99]
    else: echo s[i .. ls - 1]
echo "This last took ", elpsd, " microseconds."</syntaxhighlight>
{{output}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
2125764000
2^55 + 3^47 + 5^64
Approximately: 5.193127804483804E+83
Number of digits: 84
519312780448388736089589843750000000000000000000000000000000000000000000000000000000
This last took 6004 microseconds.</pre>
 
The time as shown is for compilation as in the second line of code; with these options, the billionth Hamming number can be calculated in about 7 seconds.

'''Faster alternate to the above using a ring buffer'''
 
As other language contributions refer to it, the above code is left in place; however, it seems that the amount of time spent "draining" the buffers by already-used values using copying as used in the above code can be eliminated by using the buffers as "ring buffers" by making the indices wrap around from the end of the buffers to the beginning and detecting when the buffer needs to be "grown" by when the next/last/tail index runs into the first/head index, and changing the "grow" logic a little so as to open up a hole between the next and first indexes by the size of the expansion once the buffer size has "grown". The code is as follows:
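The wraparound-and-grow scheme can be sketched generically in Python (a hypothetical <code>RingQueue</code>, not the Nim code): indices wrap with a power-of-two mask, and growing opens a hole between the wrapped tail and the head so that no drain-copying of already-used values is ever needed:

```python
class RingQueue:
    # power-of-two sized ring buffer; when full it doubles, opening a
    # hole between the wrapped tail and the relocated head
    def __init__(self, size=4):
        self.buf = [None] * size
        self.head = 0
        self.nxt = 0  # next free slot (tail)

    def push(self, v):
        self.buf[self.nxt] = v
        self.nxt = (self.nxt + 1) & (len(self.buf) - 1)
        if self.nxt == self.head:              # full: grow
            old = len(self.buf)
            self.buf.extend([None] * old)      # double the storage
            if self.head == 0:
                self.nxt = old                 # contents already contiguous
            else:
                # move the block head..old-1 up by `old`, leaving the
                # hole between the wrapped tail and the new head
                self.buf[self.head + old: old + old] = self.buf[self.head: old]
                self.head += old

    def pop(self):
        v = self.buf[self.head]
        self.head = (self.head + 1) & (len(self.buf) - 1)
        return v
```

Popping never shifts elements; only the rare growth step copies the still-pending block once, which is the saving the paragraph above describes.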
# HammingsLogDQ.nim">
<syntaxhighlight lang="nim"># HammingsLogDQ.nim
# compile with: nim c -d:danger -t:-march=native -d:LTO --gc:arc HammingsLogDQ
 
import bigints, std/math
from std/times import inMicroseconds
from std/monotimes import getMonoTime, `-`
 
type LogRep = (float64, uint32, uint32, uint32)
 
let one: LogRep = (0.0, 0'u32, 0'u32, 0'u32)
 
let lb2 = 1.0'f64; let lb3 = 3.0.log2; let lb5 = 5.0.log2
proc mul2(me: Logrep): Logrep {.inline.} =
(me[0] + lb2, me[1] + 1, me[2], me[3])
proc mul3(me: Logrep): Logrep {.inline.} =
(me[0] + lb3, me[1], me[2] + 1, me[3])
proc mul5(me: Logrep): Logrep {.inline.} =
(me[0] + lb5, me[1], me[2], me[3] + 1)
 
proc lr2BigInt(lr: Logrep): BigInt =
proc xpnd(bs: uint, v: uint32): BigInt =
result = initBigInt 1
var bsm = initBigInt bs;
var vm = v.uint
while vm > 0:
if (vm and 1) != 0: result *= bsm
bsm *= bsm; vm = vm shr 1
xpnd(2, lr[1]) * xpnd(3, lr[2]) * xpnd(5, lr[3])
 
proc `$`(lr: LogRep): string {.inline.} = $lr2BigInt(lr)
 
iterator hammingsLogQ(): LogRep =
var s2msk, s3msk = 1024
var s2 = newSeq[LogRep] s2msk; var s3 = newSeq[LogRep] s3msk
s2msk -= 1; s3msk -= 1; s2[0] = one; var s2nxti = 1
var s2hdi, s3hdi, s3nxti = 0
var s5 = one.mul5; var mrg = one.mul3
while true:
let s2hdp = addr(s2[s2hdi])
if s2hdp[][0] < mrg[0]:
s2[s2nxti] = s2hdp[].mul2; s2hdi += 1; s2hdi = s2hdi and s2msk
yield s2hdp[]
else:
s2[s2nxti] = mrg.mul2; s3[s3nxti] = mrg.mul3; yield mrg
let s3hdp = addr(s3[s3hdi])
if s3hdp[0] < s5[0]:
mrg = s3hdp[]; s3hdi += 1; s3hdi = s3hdi and s3msk
else: mrg = s5; s5 = s5.mul5
s3nxti += 1; s3nxti = s3nxti and s3msk
if s3nxti == s3hdi: # buffer full - expand...
let sz = s3msk + 1; s3msk = sz + sz; s3.setLen(s3msk); s3msk -= 1
if s3hdi == 0: s3nxti = sz
else: # put extra space between next and head...
copyMem(addr(s3[s3hdi + sz]), addr(s3[s3hdi]),
sizeof(LogRep) * (sz - s3hdi)); s3hdi += sz
s2nxti += 1; s2nxti = s2nxti and s2msk
if s2nxti == s2hdi: # buffer full - expand...
let sz = s2msk + 1; s2msk = sz + sz; s2.setLen s2msk; s2msk -= 1
if s2hdi == 0: s2nxti = sz # copy all in a single block...
else: # make extra space between next and head...
copyMem(addr(s2[s2hdi + sz]), addr(s2[s2hdi]),
sizeof(LogRep) * (sz - s2hdi)); s2hdi += sz
 
# testing it...
var cnt = 0
for h in hammingsLogQ():
write stdout, h, " "; cnt += 1
if cnt >= 20: break
echo ""
cnt = 0
for h in hammingsLogQ():
cnt += 1
if cnt >= 1691: echo h; break
let strt = getMonoTime()
var rslt: LogRep
cnt = 0
for h in hammingsLogQ():
cnt += 1
if cnt >= 1_000_000: rslt = h; break # """
let elpsd = (getMonoTime() - strt).inMicroseconds
let (_, x2, x3, x5) = rslt
writeLine stdout, "2^", x2, " + 3^", x3, " + 5^", x5
let lgrslt = (x2.float64 + x3.float64 * 3.0f64.log2 +
x5.float64 * 5.0f64.log2) * 2.0f64.log10
let (whl, frac) = lgrslt.splitDecimal
echo "Approximately: ", 10.0f64.pow(frac), "E+", whl.uint64
let s = $rslt
let ls = s.len
echo "Number of digits: ", ls
if ls <= 2000:
for i in countup(0, ls - 1, 100):
if i + 100 < ls: echo s[i .. i + 99]
else: echo s[i .. ls - 1]
echo "This last took ", elpsd, " microseconds."</syntaxhighlight>
{{out}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
2125764000
2^55 + 3^47 + 5^64
Approximately: 5.193127804483804E+83
Number of digits: 84
519312780448388736089589843750000000000000000000000000000000000000000000000000000000
This last took 5044 microseconds.</pre>
As tested on an Intel i5-6500 (3.6 GHz single-threaded boosted), this is about a millisecond or about twenty percent faster than the version above, and can find the billionth Hamming number in about 4.5 seconds on this machine. The reason this is faster is mostly due to the elimination of the majority of the copy operations.
 
===Extremely fast version inserting logarithms into the top error band===
Line 7,357 ⟶ 8,013:
The above code is about as fast as one can go generating sequences; however, if one is willing to forego sequences and just calculate the nth Hamming number (repeatedly), then some reading on the relationship between the size of numbers to the sequence numbers is helpful (Wikipedia: Regular Number). One finds that there is a very distinct relationship and that it quite quickly reduces to quite a small error band proportional to the log of the output value for larger ranges. Thus, the following code just scans for logarithmic representations to insert into a sequence for this top error band and extracts the correct nth representation from that band. It reduces time complexity to O(n^(2/3)) from O(n) for the sequence versions, but even more amazingly, reduces memory requirements to O(n^(1/3)) from O(n^(2/3)) and thus makes it possible to calculate very large values in the sequence on common personal computers. The code is as follows:
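The error-band scan can be sketched in Python (constants follow the versions on this page; as a sketch it is valid only for n large enough, roughly a few hundred, that the estimation band is well-formed and narrower than one power of two):

```python
from math import log2

def nth_hamming(n):
    # returns the exponent triple (i, j, k) of the nth (1-based) Hamming
    # number, scanning only values whose log2 lies in a narrow error band
    lb3, lb5 = log2(3), log2(5)
    fctr = 6.0 * lb3 * lb5
    crctn = log2(30 ** 0.5)
    lgest = (fctr * n) ** (1.0 / 3.0) - crctn        # estimated log2 of the nth value
    frctn = 0.509 if n < 1_000_000_000 else 0.106
    lghi = (fctr * (n + frctn * lgest)) ** (1.0 / 3.0) - crctn
    lglo = 2.0 * lgest - lghi                        # lower edge of the band
    count = 0                                        # values with log2 <= lghi
    band = []                                        # (log2, triple) inside the band
    k = 0
    while k * lb5 <= lghi:
        j = 0
        while k * lb5 + j * lb3 <= lghi:
            q = k * lb5 + j * lb3
            i = int(lghi - q)                        # max power of 2 under lghi
            count += i + 1
            if q + i >= lglo:
                band.append((q + i, (i, j, k)))
            j += 1
        k += 1
    band.sort(reverse=True)
    ndx = count - n                                  # index from the top of the band
    assert 0 <= ndx < len(band), "band estimate too narrow/wide"
    return band[ndx][1]
```

Only O(n^(2/3)) triples are visited and only the O(n^(1/3))-sized band is stored and sorted, matching the complexity claims above.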
{{trans|Rust}}
<syntaxhighlight lang="nim">import bigints, math, algorithm, times
 
type TriVal = (uint32, uint32, uint32)
Line 7,428 ⟶ 8,084:
else: echo s[i .. ls - 1]
 
echo "This last took ", (stop - strt) * 1000, " milliseconds."</syntaxhighlight>
 
The output is the same as above except that the execution time is much too small to be measured. The billionth number in the sequence can be calculated in under 5 milliseconds, the trillionth in about 0.38 seconds. The (2^64 - 1)th value (18446744073709551615) cannot be calculated due to a slight overflow problem as it approaches that limit. However, this version gives inaccurate results much above the 1e13th Hamming number because the log base two (double) approximate representation does not have enough precision to accurately sort the values put into the error band array.
Line 7,435 ⟶ 8,091:
 
To solve the problem of inadequate precision in the double log base two representation, the following code uses a BigInt representation of the log value with about twice the significant bits, which is then sufficient to extend the usable range well beyond any reasonable requirement:
<syntaxhighlight lang="nim">import bigints, math, algorithm, times
 
type TriVal = (uint32, uint32, uint32)
Line 7,513 ⟶ 8,169:
else: echo s[i .. ls - 1]
 
echo "This last took ", (stop - strt) * 1000, " milliseconds."</syntaxhighlight>
 
The above code has the same output as before and takes no appreciably different time to execute; it can produce the trillionth Hamming number in about 0.35 seconds and the thousand trillionth (which is now possible without error) in about 34.8 seconds. Thus, it successfully extends the usable range of the algorithm to near the maximum expressible 64-bit number in a few hours of execution time on a modern desktop computer, although the (2^64 - 1)th Hamming number can't be found due to the restrictions of the expressible range limit in sizing of the required error band.
Line 7,521 ⟶ 8,177:
A simple implementation using an integer Set as a priority queue. The semantics of the standard library Set provide a minimum element and prevent duplicate entries. <i>min_elt</i> and <i>add</i> are <em>O</em>(log N).
 
<syntaxhighlight lang="ocaml">module ISet = Set.Make(struct type t = int let compare=compare end)
 
let pq = ref (ISet.singleton 1)
Line 7,544 ⟶ 8,200:
done;
 
Printf.printf "\nThe 1691st is %d\n" (next ());</syntaxhighlight>
 
Output:
Line 7,554 ⟶ 8,210:
 
An arbitrary precision version for the one millionth number. Compile with eg: <i>ocamlopt -o hamming.exe nums.cmxa hamming.ml</i>
<syntaxhighlight lang="ocaml">open Big_int
 
module APSet = Set.Make(
Line 7,578 ⟶ 8,234:
done;
 
Printf.printf "\nThe %dth is %s\n" n (string_of_big_int (next ()));</syntaxhighlight>
Output:
<blockquote>The 1000000th is 519312780448388736089589843750000000000000000000000000000000000000000000000000000000</blockquote>
Line 7,587 ⟶ 8,243:
 
{{trans|Haskell}}
<syntaxhighlight lang="oz">declare
fun lazy {HammingFun}
1|{FoldL1 [{MultHamming 2} {MultHamming 3} {MultHamming 5}] LMerge}
Line 7,618 ⟶ 8,274:
{ForAll {List.take Hamming 20} System.showInfo}
{System.showInfo {Nth Hamming 1690}}
{System.showInfo {Nth Hamming 1000000}}</syntaxhighlight>
 
 
Line 7,625 ⟶ 8,281:
The strict version uses iterators and a priority queue.
Note that it can calculate other variations of the hamming numbers too. By changing K, it will calculate the p(K)-smooth numbers. (E.g. K = 3, it will use the first three primes 2,3 and 5, thus resulting in the 5-smooth numbers, see [https://en.wikipedia.org/wiki/Hamming_numbers])
<syntaxhighlight lang="oz">
functor
import
Line 7,789 ⟶ 8,445:
end
end
</syntaxhighlight>
Strict version made by pietervdvn; do what you want with the code.
 
=={{header|PARI/GP}}==
This is a basic implementation; finding the millionth term requires 1 second and 54 MB. Much better algorithms exist.
<syntaxhighlight lang="parigp">Hupto(n)={
  my(r=Vec([1],n), v=primes(3), [v1,v2,v3]=v, i=1, j=1, k=1, t);
  for(m=2,n,
    r[m]=t=min(v1,min(v2,v3));
    if(v1 == t, v1 = v[1] * r[i++]);
    if(v2 == t, v2 = v[2] * r[j++]);
    if(v3 == t, v3 = v[3] * r[k++]);
  );
  r
};
H(n)=Hupto(n)[n];
Line 7,809 ⟶ 8,464:
Hupto(20)
H(1691)
H(10^6)</syntaxhighlight>
{{out}}
<pre>%1 = [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36]
Line 7,818 ⟶ 8,473:
Simple brute force up to 2^32-1. I was astonished by the speed. The inner loop is taken 2^32-1 times. DIV by a constant is optimized to MUL and shift.
Using FPC_64 3.1.1, i4330 3.5 Ghz
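The per-number test at the heart of this brute force is tiny; a Python sketch of the check (not the Pascal code, and far too slow to actually run over all 2^32-1 values in Python):

```python
def is_hamming(n):
    # divide out the factors 2, 3 and 5; a Hamming number leaves 1
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

def count_hammings_upto(limit):
    # the count the Pascal program accumulates over its whole range
    return sum(1 for n in range(1, limit + 1) if is_hamming(n))
```

The Pascal version gets away with the full 2^32 sweep only because the compiler turns the divisions by the constants 2, 3 and 5 into multiply-and-shift sequences.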
<syntaxhighlight lang="pascal">
program HammNumb;
{$IFDEF FPC}
Line 7,893 ⟶ 8,548:
Begin
Check;
End.</syntaxhighlight>
Output
<pre>
Line 7,905 ⟶ 8,560:
 
The above is not a true sequence generator: rather than producing each new value from the accumulated state of all values generated so far, it inefficiently regenerates all previous values for each new one, so its execution time is not linear in the number of generated values. More elegant solutions use functional programming techniques, but Pascal is by no means a functional language: it lacks essentials such as closures, which are difficult (although not impossible) to emulate with classes/objects. The following code therefore implements an imperative version of the non-duplicating Hamming sequence, which also saves both time and space by not processing duplicates (for instance, with two times three already accounted for, there is no need to process three times two). As well, since Pascal has no standard "infinite" precision integer library and numbers larger than 64 bits can't easily be handled, the code uses the "triplet" method and sorts based on a logarithmic estimate of the multiples:
<syntaxhighlight lang="pascal">{$OPTIMIZATION LEVEL4}
program Hammings(output);
 
Line 8,148 ⟶ 8,803:
writeln('The millionth Hamming number is ', LogRep2String(h), '.');
writeln('This last took ', elpsd, ' milliseconds.')
end.</syntaxhighlight>
{{out}}
<pre>The first 20 Hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36.
Line 8,186 ⟶ 8,841:
Changing sizeOf(tElem) to 32 {maxPrimFakCnt = 3+8} instead of 16 ( x2) {maxPrimFakCnt = 3} results in increasing the runtime by x4 ( 2^2 )
 
<syntaxhighlight lang="pascal">program hammNumb;
{$IFDEF FPC}
{$MODE DELPHI}
Line 8,519 ⟶ 9,174:
' elemcount ',FA[i].frMaxIdx+1:7,' out of',length(FA[i].frElems):7);
LoEFree(FA);
End.</syntaxhighlight>
{{out|@ TIO.RUN}}
<pre>
Line 8,533 ⟶ 9,188:
3 elemcount 1209 out of 1236
5 elemcount 1 out of 2
 
...
change to use 12 primes [2..37] ( 32 bit ) -> 2.2x runtime over using 3 primes
Line 8,554 ⟶ 9,210:
31 elemcount 15 out of 17
37 elemcount 1 out of 2
 
@home:
//tested til 1E12 with 4.4 Ghz 5600G Free Pascal Compiler version 3.2.2-[2022/11/22] for x86_64
Timed 1,000,000,000,000 in 57:53.015
ln 19444.3672890 2^1126 3^16930 5^40 -> see Haskell-Version [https://ideone.com/RnAh5X]
Actual Index 1000075683108
ln 19444.3672890 2^8295 3^2426 5^6853
2 elemcount 106935365 out of 156797362
3 elemcount 12083 out of 12969
5 elemcount 1 out of 2
user 57m51.015s <<
sys 0m1.616s
</pre>
 
=={{header|Perl}}==
<syntaxhighlight lang="perl">use strict;
use warnings;
use List::Util 'min';
Line 8,592 ⟶ 9,260:
#++$i, $h->() until $i == 999999;
#print ++$i, "-th: ", $h->(), "\n";
</syntaxhighlight>
{{out}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 40 ...
Line 8,604 ⟶ 9,272:
{{libheader|Phix/mpfr}}
standard and gmp versions
<!--<syntaxhighlight lang="phix">(phixonline)-->
<span style="color: #008080;">with</span> <span style="color: #008080;">javascript_semantics</span>
<span style="color: #008080;">function</span> <span style="color: #000000;">hamming</span><span style="color: #0000FF;">(</span><span style="color: #004080;">integer</span> <span style="color: #000000;">N</span><span style="color: #0000FF;">)</span>
Line 8,649 ⟶ 9,317:
<span style="color: #7060A8;">printf</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"%s\n"</span><span style="color: #0000FF;">,{</span><span style="color: #7060A8;">mpz_get_str</span><span style="color: #0000FF;">(</span><span style="color: #000000;">mpz_hamming</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1691</span><span style="color: #0000FF;">))})</span>
<span style="color: #7060A8;">printf</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"%s\n"</span><span style="color: #0000FF;">,{</span><span style="color: #7060A8;">mpz_get_str</span><span style="color: #0000FF;">(</span><span style="color: #000000;">mpz_hamming</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1000000</span><span style="color: #0000FF;">))})</span>
<!--</syntaxhighlight>-->
{{out}}
<pre>
Line 8,661 ⟶ 9,329:
This proved much easier to implement than scanning the other entries suggested [not copied, they all frighten me].<br>
At some point, comparing logs will no doubt get it wrong, but I have no idea when that might happen.
<!--<syntaxhighlight lang="phix">(phixonline)-->
<span style="color: #008080;">with</span> <span style="color: #008080;">javascript_semantics</span>
<span style="color: #000080;font-style:italic;">-- numbers kept as {log,{pow2,pow3,pow5}},
Line 8,721 ⟶ 9,389:
<span style="color: #0000FF;">?</span><span style="color: #000000;">mpz_hint</span><span style="color: #0000FF;">(</span><span style="color: #000000;">hamming</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1000000</span><span style="color: #0000FF;">))</span>
<!--</syntaxhighlight>-->
{{out}}
<pre>
Line 8,736 ⟶ 9,404:
 
=={{header|Picat}}==
<syntaxhighlight lang="picat">go =>
println([hamming(I) : I in 1..20]),
time(println(hamming_1691=hamming(1691))),
Line 8,757 ⟶ 9,425:
M := M + 1
end,
Hamming = A[N-1].</syntaxhighlight>
 
{{out}}
Line 8,768 ⟶ 9,436:
 
=={{header|PicoLisp}}==
<syntaxhighlight lang="picolisp">(de hamming (N)
(let (L (1) H)
(do N
Line 8,780 ⟶ 9,448:
(println (make (for N 20 (link (hamming N)))))
(println (hamming 1691)) # very fast
(println (hamming 1000000)) # runtime about 13 minutes on i5-3570S</syntaxhighlight>
{{out}}
<pre>(1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36)
Line 8,787 ⟶ 9,455:
 
=={{header|PL/I}}==
<syntaxhighlight lang="pli">(subscriptrange):
Hamming: procedure options (main); /* 14 November 2013 with fixes 2021 */
declare (H(2000), p2, p3, p5, twoTo31, Hm, tenP(11)) decimal(12)fixed;
Line 8,873 ⟶ 9,541:
put skip list (H(1691));
 
end Hamming;</syntaxhighlight>
Results:
<pre>
Line 8,903 ⟶ 9,571:
=={{header|Prolog}}==
===Generator idiom===
<syntaxhighlight lang="prolog">%% collect N elements produced by a generator in a row
take( 0, Next, Z-Z, Next).
Line 8,923 ⟶ 9,591:
mkHamm(G),take(20,G,A-[],_), write(A), nl,
take(1691-1,G,_,G2),take(2,G2,B-[],_), write(B), nl,
take( N -1,G,_,G3),take(2,G3,[C1|_]-_,_), write(C1), nl.</syntaxhighlight>
SWI Prolog 6.2.6 produces (in about 7 ideone seconds):
&nbsp;?- time( main(1000000) ).
Line 8,934 ⟶ 9,602:
Works with SWI-Prolog. Laziness is simulate with '''freeze/2''' and '''ground/2'''.<br>
Took inspiration from this code : http://chr.informatik.uni-ulm.de/~webchr (click on ''hamming.pl: Solves Hamming Problem'').
<syntaxhighlight lang="prolog">hamming(N) :-
% to stop cleanly
nb_setval(go, 1),
Line 9,007 ⟶ 9,675:
(format('~w ', [X]),
N1 is N - 1,
watch_20(N1, L))).</syntaxhighlight>
{{out}}
<pre>?- hamming(20).
Line 9,023 ⟶ 9,691:
 
=={{header|PureBasic}}==
<syntaxhighlight lang="purebasic">#X2 = 2
#X3 = 3
#X5 = 5
Line 9,051 ⟶ 9,719:
Next
Ham(1691)
Input()</syntaxhighlight>
{{out}}<pre>H(1) = 1
H(2) = 2
Line 9,076 ⟶ 9,744:
=={{header|Python}}==
===Version based on example from Dr. Dobb's CodeTalk===
<syntaxhighlight lang="python">from itertools import islice
 
def hamming2():
Line 9,121 ⟶ 9,789:
multindeces = [i - mini for i in multindeces]
#
yield h</syntaxhighlight>
{{out}}
<pre>>>> list(islice(hamming2(), 20))
Line 9,133 ⟶ 9,801:
===Another implementation of same approach===
This version uses a lot of memory, it doesn't try to limit memory usage.
<syntaxhighlight lang="python">import psyco
 
def hamming(limit):
Line 9,157 ⟶ 9,825:
print [hamming(i) for i in xrange(1, 21)]
print hamming(1691)
print hamming(1000000)</syntaxhighlight>
 
===Implementation based on priority queue===
This is inspired by the Picolisp implementation further down, but uses a priority queue instead of a search tree. Computes 3x more numbers than necessary, but discards them quickly so memory usage is not too bad.
 
<syntaxhighlight lang="python">from heapq import heappush, heappop
from itertools import islice
 
Line 9,178 ⟶ 9,846:
print list(islice(h(), 1690, 1691))
print list(islice(h(), 999999, 1000000)) # runtime 9.5 sec on i5-3570S
</syntaxhighlight>
 
==="Cyclical Iterators"===
The original author is Raymond Hettinger and the code was first published [http://code.activestate.com/recipes/576961/ here] under the MIT license. Uses iterators dubbed "cyclical" in a sense that they are referring back (explicitly, with <code>p2, p3, p5</code> iterators) to the previously produced values, same as the above versions (through indices into shared storage) and the classic [[#Haskell|Haskell]] version (implicitly timed by lazy evaluation).
 
Memory is efficiently maintained automatically by the <code>tee</code> function for each of the three generator expressions, i.e. only that much is maintained as needed to produce the next value (although, for Python versions older than 3.6 it looks like the storage is not shared so three copies are maintained implicitly there -- whereas for 3.6 and up the storage <i>is</i> shared between the returned iterators, so only a single underlying FIFO queue is maintained, according to the [https://docs.python.org/3.6/library/itertools.html#itertools.tee documentation]).
<syntaxhighlight lang="python">from itertools import tee, chain, groupby, islice
from heapq import merge
 
Line 9,208 ⟶ 9,876:
print list(islice(raymonds_hamming(), 20))
print islice(raymonds_hamming(), 1689, 1690).next()
print islice(raymonds_hamming(), 999999, 1000000).next()</syntaxhighlight>
Results are the same as before.
 
Line 9,216 ⟶ 9,884:
This [http://ideone.com/PIkWEN gravely impacts the efficiency]. Not to be used.
 
<syntaxhighlight lang="python">from heapq import merge
from itertools import tee
 
Line 9,228 ⟶ 9,896:
if n != last:
yield n
last = n</syntaxhighlight>
 
====Cyclic generator method #2.====
Considerably faster due to early elimination (before merge) of duplicates. Currently the fastest Python version. Direct copy of [[Hamming_numbers#Avoiding_generation_of_duplicates | Haskell code]].
<syntaxhighlight lang="python">from itertools import islice, chain, tee
 
def merge(r, s):
Line 9,269 ⟶ 9,937:
print hamming(1, 21)
print hamming(1691)[0]
print hamming(1000000)[0]</syntaxhighlight>
 
=={{header|QBasic}}==
{{works with|QBasic|1.1}}
{{works with|QuickBasic|4.5}}
<syntaxhighlight lang="qbasic">FUNCTION min (a, b)
IF a < b THEN min = a ELSE min = b
END FUNCTION
 
FUNCTION Hamming (limit)
DIM h(limit)
h(0) = 1
x2 = 2
x3 = 3
x5 = 5
i = 0
j = 0
k = 0
FOR n = 1 TO limit
h(n) = min(x2, min(x3, x5))
IF x2 = h(n) THEN
i = i + 1
x2 = 2 * h(i)
END IF
IF x3 = h(n) THEN
j = j + 1
x3 = 3 * h(j)
END IF
IF x5 = h(n) THEN
k = k + 1
x5 = 5 * h(k)
END IF
NEXT n
Hamming = h(limit - 1)
END FUNCTION
 
PRINT "The first 20 Hamming numbers are :"
FOR i = 1 TO 20
PRINT Hamming(i); " ";
NEXT i
 
PRINT
PRINT "H( 1691) = "; Hamming(1691)</syntaxhighlight>
 
=={{header|Qi}}==
{{incomplete|Qi|Parts 2 & 3 of task missing.}}
{{trans|Clojure}}
<syntaxhighlight lang="qi">(define smerge
[X|Xs] [Y|Ys] -> [X | (freeze (smerge (thaw Xs) [Y|Ys]))] where (< X Y)
[X|Xs] [Y|Ys] -> [Y | (freeze (smerge [X|Xs] (thaw Ys)))] where (> X Y)
Line 9,293 ⟶ 10,004:
[S|Ss] N -> [S|(stake (thaw Ss) (1- N))])
 
(stake (value hamming) 20)</syntaxhighlight>
{{out}}
<pre>
Line 9,303 ⟶ 10,014:
Uses <code>smoothwith</code> from [[N-smooth numbers#Quackery]].
 
<syntaxhighlight lang="quackery"> ' [ 2 3 5 ] smoothwith [ size 1000000 = ]
dup 20 split drop echo cr
dup 1690 peek echo cr
-1 peek echo
</syntaxhighlight>
 
{{out}}
Line 9,317 ⟶ 10,028:
=={{header|R}}==
Recursively find the Hamming numbers below <math>2^{31}</math>. Shown are results for tasks 1 and 2. Arbitrary precision integers are not supported natively.
<syntaxhighlight lang="r">hamming=function(hamms,limit) {
tmp=hamms
for(h in c(2,3,5)) {
Line 9,330 ⟶ 10,041:
h <- sort(hamming(1,limit=2^31-1))
print(h[1:20])
print(h[length(h)])</syntaxhighlight>
{{out}}
<pre>
Line 9,340 ⟶ 10,051:
The '''nextn''' R function provides the needed functionality:
 
<syntaxhighlight lang="r">hamming <- function(n) {
a <- numeric(n)
a[1] <- 1
Line 9,347 ⟶ 10,058:
}
a
}</syntaxhighlight>
 
'''Output'''
Line 9,355 ⟶ 10,066:
 
=={{header|Racket}}==
<syntaxhighlight lang="scheme">#lang racket
(require racket/stream)
(define first stream-first)
Line 9,377 ⟶ 10,088:
(for/list ([i 20] [x hamming]) x)
(stream-ref hamming 1690)
(stream-ref hamming 999999)</syntaxhighlight>
{{out}}
<pre>'(1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36)
Line 9,387 ⟶ 10,098:
The above version consumes quite a lot of memory, as the streams are retained because the head of the stream is a globally defined binding "hamming". The following code implements (hamming) as a function, and all heads of streams are locally defined so that they can be garbage collected as they are consumed; as well, it is formulated so that no duplicate values are generated, which simplifies the calculation and minimizes the number of values in the streams; to further the latter, it also evaluates the least dense stream first. The following code is about three times faster than the above code:
{{trans|Haskell}}
<syntaxhighlight lang="scheme">#lang racket
(require racket/stream)
(define first stream-first)
Line 9,414 ⟶ 10,125:
(for/list ([i 20] [x (hamming)]) x) (newline)
(stream-ref (hamming) 1690) (newline)
(stream-ref (hamming) 999999) (newline)</syntaxhighlight>
 
The output of the above code is the same as that of the earlier code.
Line 9,424 ⟶ 10,135:
{{Works with|rakudo|2015-11-04}}
The limit scaling is not <em>required</em>, but it cuts down on a bunch of unnecessary calculation.
<syntaxhighlight lang="raku" line>my $limit = 32;
 
sub powers_of ($radix) { 1, |[\*] $radix xx * }
Line 9,435 ⟶ 10,146:
 
say @hammings[^20];
say @hammings[1690]; # zero indexed</syntaxhighlight>
{{out}}
<pre>(1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36)
Line 9,445 ⟶ 10,156:
This version uses a lazy list, storing a maximum of two extra value from the highest index requested
 
<syntaxhighlight lang="raku" line>my \Hammings := gather {
my %i = 2, 3, 5 Z=> (Hammings.iterator for ^3);
my %n = 2, 3, 5 Z=> 1 xx 3;
Line 9,461 ⟶ 10,172:
say Hammings.[1691 - 1];
say Hammings.[1000000 - 1];
</syntaxhighlight>
 
{{out}}
Line 9,470 ⟶ 10,181:
=={{header|Raven}}==
{{trans|Liberty Basic}}
<syntaxhighlight lang="raven">define hamming use $limit
[ ] as $h
1 $h 0 set
Line 9,496 ⟶ 10,207:
# Raven can't handle > 2^31 using integers
#
#"Hamming(1000000) is: " print 1000000 hamming print "\n" print</syntaxhighlight>
{{out}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
Line 9,506 ⟶ 10,217:
This REXX program was a direct copy from my old REXX subroutine to compute &nbsp; '''UGLY''' &nbsp; numbers,
<br>it computes &nbsp; ''just enough'' &nbsp; Hamming numbers &nbsp; (two Hamming numbers after the current number).
<syntaxhighlight lang="rexx">/*REXX program computes Hamming numbers: 1 ──► 20, # 1691, and the one millionth. */
numeric digits 100 /*ensure enough decimal digits. */
call hamming 1, 20 /*show the 1st ──► twentieth Hamming #s*/
Line 9,525 ⟶ 10,236:
 
say right( 'length of last Hamming number =' length(@.y), 70); say
return</syntaxhighlight>
{{out|output|text=&nbsp; when using the default inputs:}}
<pre>
Line 9,559 ⟶ 10,270:
===optimized===
This REXX version is roughly twice as fast as the 1<sup>st</sup> REXX version.
<syntaxhighlight lang="rexx">/*REXX program computes Hamming numbers: 1 ──► 20, # 1691, and the one millionth.*/
call hamming 1, 20 /*show the 1st ──► twentieth Hamming #s*/
call hamming 1691 /*show the 1,691st Hamming number. */
Line 9,584 ⟶ 10,295:
 
say right( 'length of last Hamming number =' length(@.y), 70); say
return</syntaxhighlight>
{{out|output|text=&nbsp; is identical to the 1<sup>st</sup> REXX version.}}
 
Line 9,592 ⟶ 10,303:
It can, however, computer much larger Hamming numbers &nbsp; (by storing the larger numbers in exponential format).
<br>This is possible because larger Hamming numbers have a significant number of trailing zeros.
<syntaxhighlight lang="rexx">/*REXX program computes Hamming numbers: 1 ──► 20, # 1691, and the one millionth.*/
call hamming 1, 20 /*show the 1st ──► twentieth Hamming #s*/
call hamming 1691 /*show the 1,691st Hamming number. */
Line 9,617 ⟶ 10,328:
 
say right( 'length of last Hamming number =' length(@.y / 1), 70); say
return</syntaxhighlight>
{{out|output|text=&nbsp; is identical to the 1<sup>st</sup> REXX version.}} <br><br>
 
=={{header|Ring}}==
<syntaxhighlight lang="ring">
see "h(1) = 1" + nl
for nr = 1 to 19
Line 9,642 ⟶ 10,353:
hamming = h[limit]
return hamming
</syntaxhighlight>
Output:
<pre>
Line 9,666 ⟶ 10,377:
h(20) = 36
h(1691) = 2125764000
</pre>
 
=={{header|RPL}}==
RPL does not provide any multi-precision capability, so only parts 1 and 2 of the task can be implemented.
 
Using global variables <code>In</code> and <code>Xn</code> avoids stack acrobatics that would have made the code slower and unintelligible, despite the ugly <code> 'var_name' STO</code> syntax inherited from vintage HP calculators.
 ≪ 1 'I2' STO 1 'I3' STO 1 'I5' STO 2 'X2' STO 3 'X3' STO 5 'X5' STO
 { 1 } 1 ROT 1 - '''FOR''' n
 X2 X3 MIN X5 MIN
 SWAP OVER + SWAP
 '''IF''' X2 OVER == '''THEN''' 1 'I2' STO+ OVER I2 GET 2 * 'X2' STO '''END'''
 '''IF''' X3 OVER == '''THEN''' 1 'I3' STO+ OVER I3 GET 3 * 'X3' STO '''END'''
 '''IF''' X5 == '''THEN''' 1 'I5' STO+ DUP I5 GET 5 * 'X5' STO '''END'''
'''NEXT'''
≫ 'HAMM' STO
 
20 HAMM
1691 HAMM DUP SIZE GET
{{out}}
<pre>
2: { 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 }
1: 2125764000
</pre>
 
Line 9,671 ⟶ 10,404:
{{trans|Scala}}
{{works with|Ruby|1.9.3}}
<syntaxhighlight lang="ruby">hamming = Enumerator.new do |yielder|
next_ham = 1
queues = [[ 2, []], [3, []], [5, []] ]
Line 9,682 ⟶ 10,415:
queues.each {|m,queue| queue.shift if queue.first==next_ham}
end
end</syntaxhighlight>
And the "main" part of the task
<syntaxhighlight lang="ruby">start = Time.now
 
hamming.each.with_index(1) do |ham, idx|
Line 9,696 ⟶ 10,429:
end
 
puts "elapsed: #{Time.now - start} seconds"</syntaxhighlight>
{{out}}
<pre style='height: 30ex; overflow: scroll'>
Line 9,732 ⟶ 10,465:
Alternative version:
{{trans|Crystal}}
<syntaxhighlight lang="ruby">def hamming(limit)
h = Array.new(limit, 1)
x2, x3, x5 = 2, 3, 5
Line 9,751 ⟶ 10,484:
puts "Hamming Number 1691: #{hamming 1691}"
puts "Hamming Number 1,000,000: #{hamming 1_000_000}"
puts "Elasped Time: #{Time.new - start} secs"</syntaxhighlight>
 
System: I7-6700HQ, 3.5 GHz, Linux Kernel 5.6.17
Line 9,766 ⟶ 10,499:
 
=={{header|Run BASIC}}==
<syntaxhighlight lang="runbasic">
dim h(1000000)
for i =1 to 20
Line 9,789 ⟶ 10,522:
next n
hamming = h(limit -1)
end function</syntaxhighlight>
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
Hamming List First(1691) = 2125764000
Line 9,799 ⟶ 10,532:
{{trans|D}}
Improved by minimizing the number of BigUint comparisons:
<syntaxhighlight lang="rust">extern crate num;
use num::bigint::BigUint;
 
Line 9,868 ⟶ 10,601:
 
println!("This last took {} milliseconds", dur);
}</syntaxhighlight>
{{output}}
<pre>[ 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36 ]
Line 9,879 ⟶ 10,612:
 
Much of the time above is wasted doing big integer multiplications that are duplicated elsewhere as in 2 times 3 and 3 times 2, etc. The following code eliminates such duplicate multiplications and reduces the number of comparisons, as follows:
<syntaxhighlight lang="rust">fn nodups_hamming(n: usize) -> BigUint {
let two = BigUint::from(2u8);
let three = BigUint::from(3u8);
Line 9,910 ⟶ 10,643:
_ => panic!("nodups_hamming: arg is zero; no elements")
}
}</syntaxhighlight>
 
Substitute the calls to the above code for the calls to "basic_hamming" (three places) in the "main" function above. The output is the same except that the time expended is less (249 milliseconds), for over two and a half times faster.
Line 9,919 ⟶ 10,652:
 
Another problem is that the above versions use so much memory that they can't compute even the billionth hamming number without running out of memory on a 16 Gigabyte machine. This version greatly reduces the memory use to about O(n^(2/3)) by eliminating no longer required back values in batches so that with about 9 Gigabytes it will calculate the hamming numbers to 1.2e13 (it's limit due to the ranges of the exponents) in a day or so. The code is as follows:
<syntaxhighlight lang="rust">fn log_nodups_hamming(n: u64) -> BigUint {
if n <= 0 { panic!("nodups_hamming: arg is zero; no elements") }
if n < 2 { return BigUint::from(1u8) } // trivial case for n == 1
Line 10,001 ⟶ 10,734:
for _ in 0 .. o.exp5 { ob = ob * &five }
ob
}</syntaxhighlight>
 
Again, this function can be used with the same "main" as above and the outputs are the same, except that the execution time is only 7 milliseconds. It calculates the hamming number to a billion in just over a second and to one hundred billion in just over 100 seconds - O(n) time complexity. As well as eliminating duplicate calculations and working with exponents rather than BigUint operations, it also reduces the time required as compared to other similar algorithms by scaling the logarithms of two, three, and five into 64-bit integers so that no floating point operations are required. The scaling is such that round-off errors will not affect the order of results until well past the usable range.
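The scaled-logarithm trick described above can be illustrated with a small Python sketch (the scale constant here is a hypothetical choice for illustration; the Rust code's actual scaling may differ): log2(2), log2(3) and log2(5) are pre-multiplied by a large power of two and rounded, so that ordering candidates needs only integer additions and comparisons.

```python
from math import log2

SCALE = 1 << 40                    # hypothetical scale factor for illustration
LB2 = SCALE                        # log2(2) == 1, scaled
LB3 = round(log2(3) * SCALE)
LB5 = round(log2(5) * SCALE)

def logrep(i, j, k):
    """Scaled-integer log2 of 2**i * 3**j * 5**k."""
    return i * LB2 + j * LB3 + k * LB5

# Ordering by the scaled integer logs agrees with ordering by the true
# values, because the rounding error is tiny next to the gap between
# distinct 5-smooth numbers in this range:
triples = [(i, j, k) for i in range(6) for j in range(6) for k in range(6)]
by_log = sorted(triples, key=lambda t: logrep(*t))
by_val = sorted(triples, key=lambda t: 2**t[0] * 3**t[1] * 5**t[2])
assert by_log == by_val
```

The same idea carries much further than these small exponents: the scale only has to keep the accumulated rounding error smaller than the log gap between neighbouring Hamming numbers in the range of interest.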
Line 10,011 ⟶ 10,744:
As the task actually asks for a sequence of Hamming numbers, any of the above three solutions can easily be adapted to output an Iterator sequence; in this case the last fastest one is converted as follows:
 
<syntaxhighlight lang="rust">extern crate num; // requires dependency on the num library
use num::bigint::BigUint;
 
Line 10,141 ⟶ 10,874:
 
println!("This last took {} milliseconds.", dur);
}</syntaxhighlight>
{{output}}
<pre>[ 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36 ]
Line 10,170 ⟶ 10,903:
{{trans|Haskell}}
{{works with|Rust 1.53.0}}
<syntaxhighlight lang="rust">extern crate num;
use num::bigint::BigUint;
Line 10,416 ⟶ 11,149:
println!("This last took {} milliseconds.", dur);
}</syntaxhighlight>
As can be seen, there is little code necessary for the "hammings" and "main" functions if the rest were available in libraries, as they really should be.
{{output}}
Line 10,436 ⟶ 11,169:
Although we can't eliminate the memory leak of the above code, we can increase the speed by eliminating the many BigUint calculations, and also reduce the memory used (and thus leaked) by using a LogRep structure instead of the variable-length container whose contained BigUint gets constantly bigger with increasing range, as per the following code:
{{works with|Rust 1.53.0}}
<syntaxhighlight lang="rust">extern crate num;
use num::bigint::BigUint;
 
Line 10,732 ⟶ 11,465:
println!("This last took {} milliseconds.", dur);
}</syntaxhighlight>
{{out}}
<pre>[ 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36 ]
Line 10,740 ⟶ 11,473:
 
As can be seen, the above version takes about two thirds of the time as the previous version running on the same Intel Skylake i5-6500 - although it still has a memory leak, the size of the leak for a given range will be many times smaller. It still isn't as fast as Haskell running the same algorithm, but it is only about 30% slower and about as fast as most other languages that compile their code to a running executable.
 
===Very fast sequence version using imperative code (mutable vectors) and logarithmic approximations for sorting===
 
Most of the remaining execution time for the above version is due to the many allocations/deallocations used in implementing the functional lazy list sequence; the following code avoids that overhead by memoizing the past values in linear vectors, with the head and tail positions marked by tracking indices:
{{trans|Nim}}
<syntaxhighlight lang="rust">extern crate num;
use num::bigint::BigInt;
 
use core::fmt::Display;
use std::time::Instant;
use std::iter;
 
const NUM_ELEMENTS: usize = 1000000;
 
const LB2_2: f64 = 1.0_f64; // log2(2.0)
const LB2_3: f64 = 1.5849625007211563_f64; // log2(3.0)
const LB2_5: f64 = 2.321928094887362_f64; // log2(5.0)
 
#[derive (Clone)]
struct LogRep {
lr: f64,
x2: u32,
x3: u32,
x5: u32,
}
impl LogRep {
fn int_value(&self) -> BigInt {
BigInt::from(2).pow(self.x2) * BigInt::from(3).pow(self.x3) * BigInt::from(5).pow(self.x5)
}
 
#[inline(always)]
fn mul2(&self) -> Self {
LogRep {
lr: self.lr + LB2_2,
x2: self.x2 + 1,
x3: self.x3,
x5: self.x5,
}
}
 
#[inline(always)]
fn mul3(&self) -> Self {
LogRep {
lr: self.lr + LB2_3,
x2: self.x2,
x3: self.x3 + 1,
x5: self.x5,
}
}
 
#[inline(always)]
fn mul5(&self) -> Self {
LogRep {
lr: self.lr + LB2_5,
x2: self.x2,
x3: self.x3,
x5: self.x5 + 1,
}
}
 
}
 
impl Display for LogRep {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let val = self.int_value();
let x2 = self.x2;
let x3 = self.x3;
let x5 = self.x5;
write!(f, "[{x2} {x3} {x5}]=>{val}")
}
}
 
const ONE: LogRep = LogRep { lr: 0.0, x2: 0, x3: 0, x5: 0 };
struct LogRepImperativeIterator {
s2: Vec<LogRep>,
s3: Vec<LogRep>,
s5: LogRep,
mrg: LogRep,
s2i: usize,
s3i: usize,
}
impl LogRepImperativeIterator {
pub fn new() -> Self {
LogRepImperativeIterator {
s2: vec![ONE.mul2()],
s3: vec![ONE.mul3()],
s5: ONE.mul5(),
mrg: ONE.mul3(),
s2i: 0,
s3i: 0,
}
}
 
fn iter(&self) -> impl Iterator<Item = LogRep> {
iter::once(ONE).chain(LogRepImperativeIterator::new())
}
}
impl Iterator for LogRepImperativeIterator {
type Item = LogRep;
 
#[inline(always)]
fn next(&mut self) -> Option<Self::Item> {
if self.s2i + self.s2i >= self.s2.len() {
self.s2.drain(0..self.s2i);
self.s2i = 0;
}
let result: LogRep;
if self.s2[self.s2i].lr < self.mrg.lr {
self.s2.push(self.s2[self.s2i].mul2());
result = self.s2[self.s2i].clone(); self.s2i += 1;
} else {
if self.s3i + self.s3i >= self.s3.len() {
self.s3.drain(0..self.s3i);
self.s3i = 0;
}
 
result = self.mrg.clone();
self.s2.push(self.mrg.mul2());
self.s3.push(self.mrg.mul3());
 
self.s3i += 1;
if self.s3[self.s3i].lr < self.s5.lr {
self.mrg = self.s3[self.s3i].clone();
} else {
self.mrg = self.s5.clone();
self.s5 = self.s5.mul5();
self.s3i -= 1;
}
};
 
Some(result)
}
}
 
fn main() {
LogRepImperativeIterator::new().iter().take(20)
.for_each(&|h: LogRep| print!("{} ", h.int_value()));
println!();
 
println!("{} ", LogRepImperativeIterator::new().iter()
.take(1691).last().unwrap().int_value());
 
let t0 = Instant::now();
let rslt = LogRepImperativeIterator::new().iter()
.take(NUM_ELEMENTS).last().unwrap();
let elpsd = t0.elapsed().as_micros() as f64;
 
println!("{}", rslt.int_value());
println!("This took {} microseconds for {} elements!", elpsd, NUM_ELEMENTS)
}</syntaxhighlight>
 
{{out}}
 
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
2125764000
519312780448388736089589843750000000000000000000000000000000000000000000000000000000
This took 6517 microseconds for 1000000 elements!</pre>
 
The code above is almost twenty times faster than the previous functional lazy-list sequence code because it avoids the many small allocations/deallocations of heap (reference-counted) objects and has no recursive references; it also does not leak memory. This version can calculate the billionth Hamming number in about 8.1 seconds.
 
===Extremely fast non-sequence version by calculation of top band of Hamming numbers===
Line 10,745 ⟶ 11,637:
One might ask "What could possibly be done to further speed up finding Hamming numbers?": the answer is quite a lot, but one has to give up the ability to iterate a sequence, as that depends on being able to refer to past calculated values by back pointers into the memoized O(n^(2/3)) arrays or lists and thus quite large amounts of memory. If one just wants to find very large Hamming numbers individually, one looks to the [https://en.wikipedia.org/wiki/Regular_number mathematical analysis of Hamming/regular numbers on Wikipedia] and finds there is quite an exact relationship between 'n', the sequence number, and the logarithmic magnitude of the resulting Hamming number, and that the error term is directly proportional to the logarithm of that output number. This means that only the band of Hamming values as wide as this error and including the estimated value needs to be generated, and that we need only iterate over two of the three prime exponents, thus O(n^(2/3)) time complexity and O(n^(1/3)) space complexity. The following code was adapted from [http://www.drdobbs.com/architecture-and-design/hamming-problem/228700538 an article in DDJ] and the Haskell code, with further refinements to decrease the memory requirements as described above:
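The leading term of that relationship can be sketched in Python (an illustrative approximation only, using only the dominant term of the estimate; the real code also has to account for the error band around it): the natural log of the n-th Hamming number grows like the cube root of 6·n·ln 2·ln 3·ln 5.

```python
from math import log

L2, L3, L5 = log(2), log(3), log(5)

def estimate_log(n):
    """Leading-term estimate of ln(H(n)), the n-th Hamming number."""
    return (6 * L2 * L3 * L5 * n) ** (1 / 3)

# Rough check against the known 1691st Hamming number, 2125764000;
# the relative error shrinks as n grows (around 1% by n = 1000000).
assert abs(estimate_log(1691) - log(2125764000)) / log(2125764000) < 0.1
```

Inverting this estimate gives the target magnitude, and only the narrow band of candidates around it needs to be enumerated and sorted.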
{{trans|Haskell}}
 
<syntaxhighlight lang="rust">extern crate num; // requires dependency on the num library
use num::bigint::BigUint;
 
Line 10,839 ⟶ 11,732:
 
println!("This last took {} milliseconds.", dur);
}</syntaxhighlight>
 
<pre>[ 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36 ]
Line 10,856 ⟶ 11,749:
 
=={{header|Scala}}==
<syntaxhighlight lang="scala">class Hamming extends Iterator[BigInt] {
import scala.collection.mutable.Queue
val qs = Seq.fill(3)(new Queue[BigInt])
Line 10,868 ⟶ 11,761:
def hasNext = true
qs foreach (_ enqueue 1)
}</syntaxhighlight>
However, the usage of closures adds a significant amount of time. The code below, though a bit uglier because of the repetitions, is twice as fast:
<syntaxhighlight lang="scala">class Hamming extends Iterator[BigInt] {
import scala.collection.mutable.Queue
val q2 = new Queue[BigInt]
Line 10,890 ⟶ 11,783:
def hasNext = true
List(q2, q3, q5) foreach (_ enqueue 1)
}</syntaxhighlight>
Usage:
<pre>
Line 10,904 ⟶ 11,797:
There's also a fairly mechanical translation from Haskell using purely functional lazy streams
{{trans|Haskell}}
<syntaxhighlight lang="scala">val hamming : Stream[BigInt] = {
def merge(inx : Stream[BigInt], iny : Stream[BigInt]) : Stream[BigInt] = {
if (inx.head < iny.head) inx.head #:: merge(inx.tail, iny) else
Line 10,912 ⟶ 11,805:
 
1 #:: merge(hamming map (_ * 2), merge(hamming map (_ * 3), hamming map (_ * 5)))
}</syntaxhighlight>
Use of "force" ensures that the stream is computed before being printed, otherwise it would just be left suspended and you'd see "Stream(1, ?)"
<pre>
Line 10,932 ⟶ 11,825:
 
One can fix the problems of the memory use of the above code resulting from the entire stream being held in memory due to the use a "val hamming: Stream[BigInt]" by using a function "def hamming(): Stream[BigInt]" and making temporary local variables for intermediate streams so that the beginnings of the streams are garbage collected as the output stream is consumed; one can also implement the other Haskell algorithm to avoid factor duplication by building each stream on successive streams, again with memory conserved by building the least dense first:
<syntaxhighlight lang="scala"> def hamming(): Stream[BigInt] = {
def merge(a: Stream[BigInt], b: Stream[BigInt]): Stream[BigInt] = {
if (a.isEmpty) b else {
Line 10,943 ⟶ 11,836:
lazy val r: Stream[BigInt] = merge(s, smult(n, 1 #:: r))
r }
    1 #:: List(5, 3, 2).foldLeft(Stream.empty[BigInt]) { u } }</syntaxhighlight>
Usage:
<pre>
Line 10,958 ⟶ 11,851:
 
=={{header|Scheme}}==
<syntaxhighlight lang="scheme">(define-syntax lons
(syntax-rules ()
((_ lar ldr) (delay (cons lar (delay ldr))))))
Line 11,005 ⟶ 11,898:
(newline)
(display (llist-ref 1000000 hamming))
(newline)</syntaxhighlight>
{{out}}
<pre>(1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36)
Line 11,016 ⟶ 11,909:
 
Although the algorithm above is true to the classic Dijkstra version and although the algorithm does require a form of lazy list/stream processing in order to utilize memoization and avoid repeated recalculations/comparisons, the stream implementation can be simplified, and the modified algorithm as per the Haskell code avoids duplicate generations of factors. As well, the following code implements the algorithm as a procedure/function so that it restarts the calculation from the beginning on every new call and so that internal stream variables are not top level so that the garbage collector can collect the beginning of all intermediate and final streams when they are no longer referenced; in this way total memory used (after interspersed garbage collections) is almost zero for a sequence of the first million numbers. Note that Scheme R5RS does not define "map" or "foldl" functions, so these are provided (a simplified "smult" which is faster than using map for this one purpose):
<syntaxhighlight lang="scheme">(define (hamming)
(define (foldl f z l)
(define (foldls zs ls)
Line 11,040 ⟶ 11,933:
(display (stream-take->list 20 (hamming))) (newline)
(display (stream-ref (hamming) (- 1691 1))) (newline)
(display (stream-ref (hamming) (- 1000000 1))) (newline)</syntaxhighlight>
 
{{output}}
Line 11,050 ⟶ 11,943:
 
=={{header|Seed7}}==
<syntaxhighlight lang="seed7">$ include "seed7_05.s7i";
include "bigint.s7i";
 
Line 11,109 ⟶ 12,002:
writeln(hamming(1691));
writeln(hamming(1000000));
end func;</syntaxhighlight>
{{out}}
<pre>
Line 11,118 ⟶ 12,011:
 
=={{header|Sidef}}==
<syntaxhighlight lang="ruby">func ham_gen {
var s = [[1], [1], [1]]
var m = [2, 3, 5]
Line 11,138 ⟶ 12,031:
 
{ h() } << (i+1 ..^ 1691)
say h()</syntaxhighlight>
 
{{out}}
Line 11,149 ⟶ 12,042:
{{works with|GNU Smalltalk}}
This is a straightforward implementation of the pseudocode snippet found in the Python section. Smalltalk supports arbitrary-precision integers, but the implementation is too slow to try it with 1 million.
<syntaxhighlight lang="smalltalk">Object subclass: Hammer [
Hammer class >> hammingNumbers: howMany [
|h i j k x2 x3 x5|
Line 11,169 ⟶ 12,062:
 
(Hammer hammingNumbers: 20) displayNl.
(Hammer hammingNumbers: 1690) last displayNl.</syntaxhighlight>
 
{{works with|Pharo Smalltalk}}
<syntaxhighlight lang="smalltalk">
limit := 10 raisedToInteger: 84.
tape := Set new.
Line 11,194 ⟶ 12,087:
sc at: 1691. "2125764000"
sc at: 1000000. "519312780448388736089589843750000000000000000000000000000000000000000000000000000000"
</syntaxhighlight>
 
 
Line 11,203 ⟶ 12,096:
The stream can only move forward, for economy, we don't bother buffering past values.
The counterpart is that we have no direct indexing and must keep the position counter by ourself.
<syntaxhighlight lang="smalltalk">tape := Heap with: 1 -> 1.
hammingStream :=
[| next |
Line 11,220 ⟶ 12,113:
tape size. "See how many we have buffered => 24904"
 
</syntaxhighlight>
 
=={{header|SQL}}==
This uses SQL99's "WITH RECURSIVE" (more like co-recursion) to build a table of Hamming numbers, then selects out the desired ones. With sqlite it is very fast. It doesn't try to get the millionth number because sqlite doesn't have bignums.
 
<syntaxhighlight lang="sql">
CREATE TEMPORARY TABLE factors(n INT);
INSERT INTO factors VALUES(2);
Line 11,263 ⟶ 12,156:
sqlite> SELECT h FROM hamming ORDER BY h LIMIT 1 OFFSET 1690;
2125764000
</syntaxhighlight>
 
=={{header|Tcl}}==
This uses coroutines to simplify the description of what's going on.
{{works with|Tcl|8.6}}
<syntaxhighlight lang="tcl">package require Tcl 8.6
 
# Simple helper: Tcl-style list "map"
Line 11,325 ⟶ 12,218:
puts "hamming{1690} = $h"
for {} {$i <= 1000000} {incr i} {set h [hamming]}
puts "hamming{1000000} = $h"</syntaxhighlight>
{{out}}
<pre>
Line 11,352 ⟶ 12,245:
</pre>
A faster version can be built that also works on Tcl 8.5 (or earlier, if only small hamming numbers are being computed):
<syntaxhighlight lang="tcl">variable hamming 1 hi2 0 hi3 0 hi5 0
proc hamming {n} {
global hamming hi2 hi3 hi5
Line 11,385 ⟶ 12,278:
puts "hamming{1692} = [hamming 1692]"
puts "hamming{1693} = [hamming 1693]"
puts "hamming{1000000} = [hamming 1000000]"</syntaxhighlight>
 
=={{header|uBasic/4tH}}==
uBasic's single array does not have the required size to calculate the 1691st number, let alone the millionth.
<syntaxhighlight lang="text">For H = 1 To 20
Print "H("; H; ") = "; Func (_FnHamming(H))
Next
Line 11,412 ⟶ 12,305:
Next
 
Return (@(a@-1))</syntaxhighlight>
{{out}}
<pre>H(1) = 1
Line 11,439 ⟶ 12,332:
=={{header|UNIX Shell}}==
{{works with|ksh93}}
{{works with|Bourne Again SHell|4+}}
Large numbers are not supported.
<syntaxhighlight lang="bash">
typeset -a hamming=(1) q2 q3 q5
function nextHamming {
integer h=${hamming[${#hamming[@]}-1]}
q2+=( $(( h*2 )) )
q3+=( $(( h*3 )) )
Line 11,455 ⟶ 12,349:
 
function ashift {
  typeset -n ary=$1
  printf '%s\n' "${ary[0]}"
ary=( "${ary[@]:1}" )
}
Line 11,462 ⟶ 12,356:
function min3 {
if (( $1 < $2 )); then
(( $1 < $3 )) && print --printf '%s\n'$1 || print --printf '%s\n'$3
else
(( $2 < $3 )) && printf '%s\n' $2 || printf '%s\n' $3
fi
}
for ((i=1; i<=20; i++)); do
nextHamming
printf '%d\t%d\n' "$i" "${hamming[i-1]}"
done
for ((; i<=1690; i++)); do nextHamming; done
nextHamming
printf '%d\t%d\n' "$i" "${hamming[i-1]}"
printf 'elapsed: %s\n' "$SECONDS"
</syntaxhighlight>
 
{{out}}
number with respect to them. An elegant but inefficient formulation based on the J solution is the
following.
<syntaxhighlight lang="ursala">#import std
#import nat
 
smooth"p" "n" = ~&z take/"n" nleq-< (rep(length "n") ^Ts/~& product*K0/"p") <1></syntaxhighlight>
This test program
<syntaxhighlight lang="ursala">main = smooth<2,3,5>* nrange(1,20)</syntaxhighlight>
yields this list of the first 20 Hamming numbers.
<pre>
Although all calculations are performed using unlimited precision, the version
above is impractical for large numbers. A more hardcore approach is the following.
<syntaxhighlight lang="ursala">#import std
#import nat
 
#cast %nL
 
main = smooth<2,3,5>* nrange(1,20)--<1691,1000000></syntaxhighlight>
{{out}}
The great majority of time is spent calculating the millionth Hamming number.
 
=={{header|VBA}}==
<syntaxhighlight lang="vb">'RosettaCode Hamming numbers
'This is a well known hard problem in number theory:
'counting the number of lattice points in a
finis_time = GetTickCount
Debug.Print "Execution time"; (finis_time - start_time); " milliseconds"
End Sub</syntaxhighlight>{{out}}
<pre>The first twenty Hamming numbers are:
1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32
=={{header|VBScript}}==
{{trans|BBC BASIC}}
<syntaxhighlight lang="vb">
For h = 1 To 20
WScript.StdOut.Write "H(" & h & ") = " & Hamming(h)
Hamming = h(l-1)
End Function
</syntaxhighlight>
 
{{Out}}
</pre>
 
=={{header|V (Vlang)}}==
{{trans|go}}
===Concise version using dynamic-programming===
<syntaxhighlight lang="v (vlang)">import math.big
 
fn min(a big.Integer, b big.Integer) big.Integer {
println(h[1691-1])
println(h[h.len-1])
}</syntaxhighlight>
{{out}}
<pre>[1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36]
2125764000
519312780448388736089589843750000000000000000000000000000000000000000000000000000000</pre>
 
===Fast version with no duplicates algorithm using arrays for memoization and logarithmic approximations===
 
The V (Vlang) language (version 0.30) isn't yet stable enough to support a fully functional version using generic lazy lists as in the Haskell versions, and in truth it is mostly an imperative language anyway. However, it can already complete the page task very quickly with a more imperative algorithm that uses arrays for memoization and logarithmic approximations for the sorting comparisons, reserving "infinite" precision integer arithmetic for the final result values. The following code is Nim's "ring buffer" version, which is faster because it requires less copying:
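For orientation (a hypothetical Python sketch, not part of the V solution), the merged-multiples dynamic-programming scheme that the logarithmic version refines looks like this when done with exact big-integer arithmetic:

```python
def hamming(n):
    """Return the n-th Hamming number via the three-index merge scheme."""
    h = [1] * n                    # memoized results, h[0] = 1
    i2 = i3 = i5 = 0               # back indices into h, one per multiplier
    x2, x3, x5 = 2, 3, 5           # next candidate multiples
    for i in range(1, n):
        m = min(x2, x3, x5)
        h[i] = m
        if m == x2:                # advance every stream that produced m,
            i2 += 1                # which also eliminates duplicates
            x2 = 2 * h[i2]
        if m == x3:
            i3 += 1
            x3 = 3 * h[i3]
        if m == x5:
            i5 += 1
            x5 = 5 * h[i5]
    return h[n - 1]
```

The V code replaces the exact `min` comparisons with comparisons of `f64` log2 approximations (keeping only the 2/3/5 exponents for the final expansion) and replaces the single growing array with per-multiplier ring buffers.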
{{trans|Nim}}
<syntaxhighlight lang="v (vlang)">// compile with: v -cflags -march=native -cflags -O3 -prod HammingsLogQ.v
 
import time
import math.big
import math { log2 }
import arrays { copy }
 
const num_elements = 1_000_000
 
struct LogRep {
lg f64
x2 u32
x3 u32
x5 u32
}
const (
one = LogRep { 0.0, 0, 0, 0 }
lb2_2 = 1.0
lb2_3 = log2(3.0)
lb2_5 = log2(5.0)
)
[inline]
fn (lr &LogRep) mul2() LogRep {
return LogRep { lg: lr.lg + lb2_2,
x2: lr.x2 + 1,
x3: lr.x3,
x5: lr.x5 }
}
[inline]
fn (lr &LogRep) mul3() LogRep {
return LogRep { lg: lr.lg + lb2_3,
x2: lr.x2,
x3: lr.x3 + 1,
x5: lr.x5 }
}
[inline]
fn (lr &LogRep) mul5() LogRep {
return LogRep { lg: lr.lg + lb2_5,
x2: lr.x2,
x3: lr.x3,
x5: lr.x5 + 1 }
}
[inline]
fn xpnd(x u32, mlt u32) big.Integer {
mut r := big.integer_from_int(1)
mut m := big.integer_from_u32(mlt)
mut v := x
for {
if v <= 0 { break }
else {
if v & 1 != 0 { r = r * m }
m = m * m
v >>= 1
}
}
return r
}
fn (lr &LogRep) to_integer() big.Integer {
return xpnd(lr.x2, 2) * xpnd(lr.x3, 3) * xpnd(lr.x5, 5)
}
fn (lr LogRep) str() string {
return (&lr).to_integer().str()
}
 
struct HammingsLog {
mut:
// automatically initialized with LogRep = one (default)...
s2 []LogRep = []LogRep { len: 1024, cap: 1024 }
s3 []LogRep = []LogRep { len: 1024, cap: 1024 }
s5 LogRep = one.mul5()
mrg LogRep = one.mul3()
s2msk int = 1023
s2hdi int
s2nxti int = 1
s3msk int = 1023
s3hdi int
s3nxti int
}
[direct_array_access][inline]
fn (mut hl HammingsLog) next() ?LogRep {
mut rsltp := &hl.s2[hl.s2hdi]
if rsltp.lg < hl.mrg.lg {
hl.s2[hl.s2nxti] = rsltp.mul2()
hl.s2hdi++
hl.s2hdi &= hl.s2msk
} else {
mut rslt := hl.mrg
rsltp = &rslt
hl.s2[hl.s2nxti] = hl.mrg.mul2()
hl.s3[hl.s3nxti] = hl.mrg.mul3()
s3hdp := &hl.s3[hl.s3hdi]
if unsafe { s3hdp.lg < hl.s5.lg } {
hl.mrg = *s3hdp
hl.s3hdi++
hl.s3hdi &= hl.s3msk
} else {
hl.mrg = hl.s5
hl.s5 = hl.s5.mul5()
}
hl.s3nxti++
hl.s3nxti &= hl.s3msk
if hl.s3nxti == hl.s3hdi { // buffer full: grow it
sz := hl.s3msk + 1
hl.s3msk = sz + sz
unsafe { hl.s3.grow_len(sz) }
hl.s3msk--
if hl.s3hdi == 0 {
hl.s3nxti = sz
} else {
unsafe { vmemcpy(&hl.s3[hl.s3hdi + sz], &hl.s3[hl.s3hdi],
int(sizeof(LogRep)) * (sz - hl.s3hdi)) }
hl.s3hdi += sz
}
}
}
hl.s2nxti++
hl.s2nxti &= hl.s2msk
if hl.s2nxti == hl.s2hdi { // buffer full: grow it
sz := hl.s2msk + 1
hl.s2msk = sz + sz
unsafe { hl.s2.grow_len(sz) }
hl.s2msk--
if hl.s2hdi == 0 {
hl.s2nxti = sz
} else {
unsafe { vmemcpy(&hl.s2[hl.s2hdi + sz], &hl.s2[hl.s2hdi],
int(sizeof(LogRep)) * (sz - hl.s2hdi)) }
hl.s2hdi += sz
}
}
return *rsltp
}
 
fn (hmgs HammingsLog) nth_hammings_log(n int) LogRep {
mut cnt := 0
if n > 0 { for h in hmgs {
cnt++
if cnt >= n { return h } }
}
panic("argument less than 1 for nth!")
}
 
{
hs := HammingsLog {}
mut cnt := 0
for h in hs {
print("$h ")
cnt++
if cnt >= 20 { break }
}
println("")
}
 
println("${(HammingsLog{}).nth_hammings_log(1691)}")
 
start_time := time.now()
rslt := (HammingsLog{}).nth_hammings_log(num_elements)
duration := (time.now() - start_time).microseconds()
println("$rslt")
println("Above result for $num_elements elements in $duration microseconds.")</syntaxhighlight>
{{out}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
2125764000
519312780448388736089589843750000000000000000000000000000000000000000000000000000000
Above result for 1000000 elements in 4881 microseconds.</pre>
The above result is as computed on an Intel i5-6500 at 3.6 GHz (single-threaded, boosted). The execution time is somewhat variable because V currently uses garbage collection by default; the intention is to eventually default to automatic reference counting, which should make it slightly faster and more consistent, apart from variations caused by the memory allocator. The above version can calculate the billionth Hamming number in about 5.3 seconds.
 
===Extremely fast version inserting values into the error band and using logarithmic approximations for sorting===
 
The above code is about as fast as sequence/iteration generation can go. However, if one is willing to forgo sequences and just calculate the nth Hamming number directly (repeatedly when a sequence is desired, but that is needed only for the first of the three tasks, and then only over a trivial range), some reading on the relationship between the magnitude of Hamming numbers and their position in the sequence is helpful (Wikipedia: Regular Number). There is a very distinct relationship, and for larger ranges it quickly reduces to a small error band proportional to the logarithm of the output value. The following code therefore scans only this top error band, inserting logarithmic representations into a sequence and extracting the correct nth representation from that band. This reduces time complexity to O(n^(2/3)) from O(n) for the sequence versions and, even more remarkably, reduces memory requirements to O(n^(1/3)) from O(n^(2/3)), making it possible to calculate very large members of the sequence on common personal computers. This version uses a multi-precision integer as the logarithmic approximation of each value when sorting the error band, extending the precision for accurate results up to almost the 64-bit number range (in about a day on common desktop computers). The code is as follows:
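To make the idea concrete, here is a hypothetical Python condensation of the band-scanning scheme (not part of the V solution); the Wikipedia size estimate and the 0.509/0.105 band fractions are taken from the V code, while plain `f64`-style floats stand in for its extended-precision sort keys:

```python
from math import log2, sqrt

def nth_hamming(n):
    """Directly find the n-th Hamming number in about O(n^(2/3)) time."""
    if n < 2:
        return 1
    lb3, lb5 = log2(3.0), log2(5.0)
    fctr = 6.0 * lb3 * lb5
    crctn = log2(sqrt(30.0))
    lgest = (fctr * n) ** (1.0 / 3.0) - crctn       # log2 estimate of the answer
    frctn = 0.509 if n < 1_000_000_000 else 0.105
    lghi = (fctr * (n + frctn * lgest)) ** (1.0 / 3.0) - crctn
    lglo = 2.0 * lgest - lghi                       # lower edge of the band
    count = 0      # how many Hamming numbers have log2 <= lghi
    band = []      # (log2, (i, j, k)) exponent triples falling inside the band
    for k in range(int(lghi / lb5) + 1):
        p = k * lb5
        for j in range(int((lghi - p) / lb3) + 1):
            q = p + j * lb3
            ir = lghi - q                           # room left for powers of 2
            count += int(ir) + 1
            lg = q + int(ir)                        # largest value for this (j, k)
            if lg >= lglo:
                band.append((lg, (int(ir), j, k)))
    band.sort(reverse=True)                         # biggest first
    i, j, k = band[count - n][1]                    # rank of band[0] is count
    return 2**i * 3**j * 5**k
```

Since `count` is the rank of the band's top element among all Hamming numbers, indexing `count - n` into the descending band yields the answer without ever materializing the sequence.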
{{trans|Nim}}
<syntaxhighlight lang="v (vlang)">// compile with: v -cflags -march=native -cflags -O3 -prod HammingsLog.v
 
import time
import math.big
import math { log2, sqrt, pow, floor }
 
const num_elements = 1_000_000
 
struct LogRep {
lg big.Integer
x2 u32
x3 u32
x5 u32
}
const (
one = LogRep { big.zero_int, 0, 0, 0 }
// 1267650600228229401496703205376
lb2_2 = big.Integer { digits: [u32(0), 0, 0, 16],
signum: 1, is_const: true }
// 2009178665378409109047848542368
lb2_3 = big.Integer { digits: [u32(11608224), 3177740794, 1543611295, 25]
signum: 1, is_const: true }
// 2943393543170754072109742145491
lb2_5 = big.Integer { digits: [u32(1258143699), 1189265298, 647893747, 37],
signum: 1, is_const: true }
smlb2_2 = f64(1.0)
smlb2_3 = log2(3.0)
smlb2_5 = log2(5.0)
fctr = f64(6.0) * smlb2_3 * smlb2_5
crctn = log2(sqrt(30.0))
)
fn xpnd(x u32, mlt u32) big.Integer {
mut r := big.integer_from_int(1)
mut m := big.integer_from_u32(mlt)
mut v := x
for {
if v <= 0 { break }
else {
if v & 1 != 0 { r = r * m }
m = m * m
v >>= 1
}
}
return r
}
fn (lr LogRep) to_integer() big.Integer {
return xpnd(lr.x2, 2) * xpnd(lr.x3, 3) * xpnd(lr.x5, 5)
}
fn (lr LogRep) str() string {
return lr.to_integer().str()
}
 
fn nth_hamming_log(n u64) LogRep {
if n < 2 { return one }
lgest := pow(fctr * f64(n), f64(1.0)/f64(3.0)) - crctn // from WP formula
frctn := if n < 1_000_000_000 { f64(0.509) } else { f64(0.105) }
lghi := pow(fctr * (f64(n) + frctn * lgest), f64(1.0)/f64(3.0)) - crctn
lglo := f64(2.0) * lgest - lghi // and a lower limit of the upper "band"
mut count := u64(0) // need to use extended precision, might go over
mut band := []LogRep { len: 1, cap: 1 } // give it one value so doubling size works
mut ih := 0 // band array insertion index
klmt := u32(lghi / smlb2_5) + 1
for k in u32(0) .. klmt {
p := f64(k) * smlb2_5
jlmt := u32((lghi - p) / smlb2_3) + 1
for j in u32(0) .. jlmt {
q := p + f64(j) * smlb2_3
ir := lghi - q
lg := q + floor(ir) // current log value (estimated)
count += u64(ir) + 1
if lg >= lglo {
len := band.len
if ih >= len { unsafe { band.grow_len(len) } }
bglg := lb2_2 * big.integer_from_u32(u32(ir)) +
lb2_3 * big.integer_from_u32(j) +
lb2_5 * big.integer_from_u32(k)
band[ih] = LogRep { lg: bglg, x2: u32(ir), x3: j, x5: k }
ih++
}
}
}
band.sort_with_compare(fn(a &LogRep, b &LogRep) int {
return b.lg.abs_cmp(a.lg)
}
)
if n > count { panic("nth_hamming_log: band high estimate is too low!") }
ndx := int(count - n)
if ndx >= band.len { panic("nth_hamming_log: band low estimate is too high!") }
return band[ndx]
}
 
for i in 1 .. 21 { print("${nth_hamming_log(i)} ") }
println("")
 
println("${nth_hamming_log(1691)}")
 
start_time := time.now()
rslt := nth_hamming_log(num_elements)
duration := (time.now() - start_time).microseconds()
println("$rslt")
println("Above result for $num_elements elements in $duration microseconds.")</syntaxhighlight>
{{out}}
<pre>1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
2125764000
519312780448388736089589843750000000000000000000000000000000000000000000000000000000
Above result for 1000000 elements in 277 microseconds.</pre>
The output is the same as above except that the execution time is almost too small to measure; this version can produce the billionth Hamming number in about five milliseconds, the trillionth in about 440 milliseconds, and the thousand trillionth (which is now possible without error) in about 42.4 seconds. It thus successfully extends the usable range of the algorithm to near the maximum expressible 64-bit number in a few hours of execution time on a modern desktop computer, although the (2^64 - 1)th Hamming number can't be found due to limits on the expressible range when sizing the required error band. This is in spite of the current V standard library using its own implementation of multi-precision integers rather than the highly optimized "gmp" library used by some languages, which could be somewhat faster.
 
=={{header|Wren}}==
===Simple but slow===
{{libheader|Wren-big}}
<syntaxhighlight lang="wren">import "./big" for BigInt, BigInts
 
var primes = [2, 3, 5].map { |p| BigInt.new(p) }.toList
System.print()
System.print("The 1,000,000th Hamming number is:")
System.print(h[999999])</syntaxhighlight>
 
{{out}}
519312780448388736089589843750000000000000000000000000000000000000000000000000000000
</pre>
 
===Much faster logarithmic version===
{{trans|Go}}
{{libheader|Wren-dynamic}}
{{libheader|Wren-long}}
{{libheader|Wren-math}}
A translation of Go's 'extremely fast version inserting logarithms into the top error band'.
 
Not as fast as the statically typed languages but fast enough for me :)
 
<syntaxhighlight lang="wren">import "./dynamic" for Struct
import "./long" for ULong
import "./big" for BigInt
import "./math" for Math
 
var Logrep = Struct.create("LogRep", ["lg", "x2", "x3", "x5"])
 
var nthHamming = Fn.new { |n|
if (n < 2) {
if (n < 1) Fiber.abort("nthHamming: argument is zero!")
return [0, 0, 0]
}
var lb3 = 1.5849625007211561814537389439478
var lb5 = 2.3219280948873623478703194294894
var fctr = 6 * lb3 * lb5
var crctn = 2.4534452978042592646620291867186
var lgest = (n.toNum * fctr).cbrt - crctn
var frctn = (n < 1000000000) ? 0.509 : 0.106
var lghi = ((n.toNum + lgest * frctn) * fctr).cbrt - crctn
var lglo = lgest * 2 - lghi
var count = ULong.zero
var bnd = []
var klmt = (lghi/lb5).truncate.abs + 1
for (k in 0...klmt) {
var p = k * lb5
var jlmt = ((lghi - p)/lb3).truncate.abs + 1
for (j in 0...jlmt) {
var q = p + j * lb3
var ir = lghi - q
var lg = q + ir.floor
count = count + ir.truncate.abs + 1
if (lg >= lglo) bnd.add(Logrep.new(lg, ir.truncate.abs, j, k))
}
}
if (n > count) Fiber.abort("nthHamming: band high estimate is too low!")
var ndx = (count - n).toSmall
if (ndx >= bnd.count) Fiber.abort("nthHamming: band low estimate is too high!")
bnd.sort { |a, b| b.lg < a.lg }
var rslt = bnd[ndx]
return [rslt.x2, rslt.x3, rslt.x5]
}
 
var convertTpl2BigInt = Fn.new { |tpl|
var result = BigInt.one
for (i in 0...tpl[0]) result = result * 2
for (i in 0...tpl[1]) result = result * 3
for (i in 0...tpl[2]) result = result * 5
return result
}
 
System.print("The first 20 Hamming numbers are:")
for (i in 1..20) {
System.write("%(convertTpl2BigInt.call(nthHamming.call(ULong.new(i)))) ")
}
System.print("\n\nThe 1,691st Hamming number is:")
System.print(convertTpl2BigInt.call(nthHamming.call(ULong.new(1691))))
var start = System.clock
var res = nthHamming.call(ULong.new(1e6))
var end = System.clock
System.print("\nThe 1,000,000 Hamming number is:")
System.print(convertTpl2BigInt.call(res))
var duration = ((end-start) * 1000).round
System.print("The last of these found in %(duration) milliseconds.")</syntaxhighlight>
 
{{out}}
<pre>
The first 20 Hamming numbers are:
1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
 
The 1,691st Hamming number is:
2125764000
 
The 1,000,000 Hamming number is:
519312780448388736089589843750000000000000000000000000000000000000000000000000000000
The last of these found in 16 milliseconds.
</pre>
 
=={{header|XPL0}}==
<syntaxhighlight lang="XPL0">func Hamming(N); \Return 'true' if N is a Hamming number
int N;
[if N = 1 then return true;
if rem(N/2) = 0 then return Hamming(N/2);
if rem(N/3) = 0 then return Hamming(N/3);
if rem(N/5) = 0 then return Hamming(N/5);
return false;
];
 
int N, C;
[N:= 1; C:= 0;
loop [if Hamming(N) then
[C:= C+1;
IntOut(0, N); ChOut(0, ^ );
if C >= 20 then quit;
];
N:= N+1;
];
CrLf(0);
N:= 1<<31; \ 8-)
repeat N:= N-1 until Hamming(N);
IntOut(0, N);
]</syntaxhighlight>
{{out}}
<pre>
1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36
2125764000</pre>
 
=={{header|Yabasic}}==
{{trans|Run BASIC}}
<syntaxhighlight lang="yabasic">dim h(1000000)
for i =1 to 20
print hamming(i)," ";
sub hamming(limit)
local x2,x3,x5,i,j,k,n
 
h(0) =1
next n
return h(limit -1)
end sub</syntaxhighlight>
 
=={{header|zkl}}==
<syntaxhighlight lang="zkl">var BN=Import("zklBigNum"); // only needed for large N
fcn hamming(N){
h:=List.createLong(N+1); (0).pump(N+1,h.write,Void); // fill list with stuff
}
[1..20].apply(hamming).println();
hamming(1691).println();</syntaxhighlight>
{{out}}
<pre>
Line 11,916 ⟶ 13,211:
OK, I was wrong: calculating the nth Hamming number can be fast and efficient.
{{trans|Haskell}} as direct a translation as I can, except using a nested for loop instead of list comprehension (which makes it easier to keep the count).
<syntaxhighlight lang="zkl">#-- directly find n-th Hamming number, in ~ O(n^{2/3}) time
#-- by Will Ness, based on "top band" idea by Louis Klauder, from DDJ discussion
#-- http://drdobbs.com/blogs/architecture-and-design/228700538
}
return(cnt,b.close());
}</syntaxhighlight>
<syntaxhighlight lang="zkl">fcn printHam(n){
r,t:=nthHam(n); i,j,k:=t; h:=trival(i,j,k);
println("Hamming(%,d)-->2^%d * 3^%d * 5^%d-->\n%s".fmt(n,i,j,k,h));
printHam(0d1_000_000); //(55,47,64), 84 digits
printHam(0d10_000_000); //(80,92,162), 182 digits, 80 zeros at end
printHam(0d1_000_000_000); //(1334,335,404), 845 digits</syntaxhighlight>
{{out}}
<pre>
Line 11,982 ⟶ 13,277:
=={{header|ZX Spectrum Basic}}==
{{trans|BBC_BASIC}}
<syntaxhighlight lang="zxbasic">10 FOR h=1 TO 20: GO SUB 1000: NEXT h
20 LET h=1691: GO SUB 1000
30 STOP
1120 NEXT n
1130 PRINT "H(";h;")= ";a(h)
1140 RETURN </syntaxhighlight>