Closest-pair problem: Difference between revisions

(Closest-pair problem in FreeBASIC)
(Rename Perl 6 -> Raku, alphabetize, minor clean-up)
Line 67:
*   [http://www.cs.iupui.edu/~xkzou/teaching/CS580/Divide-and-conquer-closestPair.ppt Closest pair (IUPUI)]
<br><br>
 
 
=={{header|360 Assembly}}==
Line 206 ⟶ 205:
[ 0.925092, 0.818220]
</pre>
 
 
=={{header|Ada}}==
Line 384 ⟶ 382:
=={{header|C}}==
See [[Closest-pair problem/C]]
 
=={{header|C sharp|C#}}==
We provide a small helper class for distance comparisons:
<lang csharp>class Segment
{
    public Segment(PointF p1, PointF p2)
    {
        P1 = p1;
        P2 = p2;
    }

    public readonly PointF P1;
    public readonly PointF P2;

    public float Length()
    {
        return (float)Math.Sqrt(LengthSquared());
    }

    public float LengthSquared()
    {
        return (P1.X - P2.X) * (P1.X - P2.X)
             + (P1.Y - P2.Y) * (P1.Y - P2.Y);
    }

    // Distance between two points; the loops below call this as Segment.Length(p1, p2)
    public static float Length(PointF p1, PointF p2)
    {
        return new Segment(p1, p2).Length();
    }
}</lang>
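For illustration only, a minimal usage sketch of the helper (not part of the original entry); it assumes System.Drawing's PointF and the Segment class above:
<lang csharp>var s = new Segment(new PointF(0f, 0f), new PointF(3f, 4f));
Console.WriteLine(s.Length());        // 5
Console.WriteLine(s.LengthSquared()); // 25
Console.WriteLine(Segment.Length(new PointF(1f, 2f), new PointF(1f, 7f))); // 5</lang>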
 
Brute force:
<lang csharp>Segment Closest_BruteForce(List<PointF> points)
{
    int n = points.Count;
    var result = Enumerable.Range(0, n - 1)
        .SelectMany(i => Enumerable.Range(i + 1, n - (i + 1))
            .Select(j => new Segment(points[i], points[j])))
        .OrderBy(seg => seg.LengthSquared())
        .First();

    return result;
}</lang>
 
 
And divide and conquer: the points are sorted by X, split into two halves that are solved recursively, and a final pass checks the band around the dividing line for a closer pair that crosses it.
<lang csharp>
public static Segment MyClosestDivide(List<PointF> points)
{
    return MyClosestRec(points.OrderBy(p => p.X).ToList());
}

private static Segment MyClosestRec(List<PointF> pointsByX)
{
    int count = pointsByX.Count;
    if (count <= 4)
        return Closest_BruteForce(pointsByX);

    // Left and right lists are still sorted by X, as order is retained from the full list
    var leftByX = pointsByX.Take(count / 2).ToList();
    var leftResult = MyClosestRec(leftByX);

    var rightByX = pointsByX.Skip(count / 2).ToList();
    var rightResult = MyClosestRec(rightByX);

    var result = rightResult.Length() < leftResult.Length() ? rightResult : leftResult;

    // There may be a shorter distance that crosses the divider,
    // so extract all the points within result.Length() of it on either side
    var midX = leftByX.Last().X;
    var bandWidth = result.Length();
    var inBandByX = pointsByX.Where(p => Math.Abs(midX - p.X) <= bandWidth);

    // Sort by Y, so we can efficiently check for closer pairs
    var inBandByY = inBandByX.OrderBy(p => p.Y).ToArray();

    int iLast = inBandByY.Length - 1;
    for (int i = 0; i < iLast; i++)
    {
        var pLower = inBandByY[i];

        for (int j = i + 1; j <= iLast; j++)
        {
            var pUpper = inBandByY[j];

            // Each point is compared against successively increasing Y values,
            // so we can terminate as soon as deltaY alone is greater than the best result
            if ((pUpper.Y - pLower.Y) >= result.Length())
                break;

            if (Segment.Length(pLower, pUpper) < result.Length())
                result = new Segment(pLower, pUpper);
        }
    }

    return result;
}
</lang>
 
The difference in speed between the brute-force and divide-and-conquer versions is remarkable:
<lang csharp>var randomizer = new Random(10);
var points = Enumerable.Range( 0, 10000).Select( i => new PointF( (float)randomizer.NextDouble(), (float)randomizer.NextDouble())).ToList();
Stopwatch sw = Stopwatch.StartNew();
var r1 = Closest_BruteForce(points);
sw.Stop();
Debugger.Log(1, "", string.Format("Time used (Brute force) (float): {0} ms", sw.Elapsed.TotalMilliseconds));
Stopwatch sw2 = Stopwatch.StartNew();
var result2 = MyClosestDivide(points);
sw2.Stop();
Debugger.Log(1, "", string.Format("Time used (Divide & Conquer): {0} ms",sw2.Elapsed.TotalMilliseconds));
Assert.Equal(r1.Length(), result2.Length());</lang>
 
{{out}}
<pre>Time used (Brute force) (float): 145731.8935 ms
Time used (Divide & Conquer): 1139.2111 ms</pre>
 
Non-LINQ brute force:
<lang csharp>
Segment Closest_BruteForce(List<PointF> points)
{
    Trace.Assert(points.Count >= 2);

    int count = points.Count;

    // Seed the result - doesn't matter what points are used
    // This just avoids having to do null checks in the main loop below
    var result = new Segment(points[0], points[1]);
    var bestLength = result.Length();

    for (int i = 0; i < count; i++)
        for (int j = i + 1; j < count; j++)
            if (Segment.Length(points[i], points[j]) < bestLength)
            {
                result = new Segment(points[i], points[j]);
                bestLength = result.Length();
            }

    return result;
}</lang>
 
Targeted search: much simpler than divide and conquer, and it actually runs faster for the random points. The key optimization is that once the distance along the X axis alone is greater than the best total length found so far, the inner loop can terminate early. However, as it only sorts in the X direction, it degenerates into an N^2 algorithm if all the points have the same X (see the sketch after the code below).
 
<lang csharp>
Segment Closest(List<PointF> points)
{
    Trace.Assert(points.Count >= 2);

    int count = points.Count;
    points.Sort((lhs, rhs) => lhs.X.CompareTo(rhs.X));

    var result = new Segment(points[0], points[1]);
    var bestLength = result.Length();

    for (int i = 0; i < count; i++)
    {
        var from = points[i];

        for (int j = i + 1; j < count; j++)
        {
            var to = points[j];

            var dx = to.X - from.X;
            if (dx >= bestLength)
            {
                break;
            }

            if (Segment.Length(from, to) < bestLength)
            {
                result = new Segment(from, to);
                bestLength = result.Length();
            }
        }
    }

    return result;
}
</lang>
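To illustrate the degenerate case mentioned above, here is a rough timing sketch (an assumed harness, not part of the original entry): when every point shares the same X, the dx early exit never triggers and the targeted search does the full quadratic amount of work, while uniformly random points stay fast.
<lang csharp>// Sketch only - assumes the Closest() method and Segment class above are in scope.
var rng = new Random(10);
var randomXY = Enumerable.Range(0, 10000)
    .Select(i => new PointF((float)rng.NextDouble(), (float)rng.NextDouble()))
    .ToList();
var sameX = Enumerable.Range(0, 10000)
    .Select(i => new PointF(0f, (float)rng.NextDouble()))
    .ToList();

var sw = Stopwatch.StartNew();
Closest(randomXY);
sw.Stop();
Console.WriteLine(string.Format("Random points: {0} ms", sw.Elapsed.TotalMilliseconds));

sw = Stopwatch.StartNew();
Closest(sameX);     // dx == 0 for every pair, so the break never fires
sw.Stop();
Console.WriteLine(string.Format("Same X:        {0} ms", sw.Elapsed.TotalMilliseconds));</lang>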
 
=={{header|C++}}==
Line 606 ⟶ 777:
(cp (sort (copy-list points) '< :key 'car))
(values pair distance))))</lang>
 
=={{header|Crystal}}==
Line 1,285 ⟶ 1,283:
The closest pair is 2 and 5 at a distance of 0.07791019135517516
</pre>
 
 
=={{header|Go}}==
Line 2,369 ⟶ 2,366:
</lang>
Distance =0.77910191e-1 between ( 0.891663, 0.888594) and ( 0.925092, 0.81822)
 
 
=={{header|Maple}}==
Line 2,852 ⟶ 2,848:
[v[at[1]],v[at[2]]]
};</lang>
 
=={{header|Pascal}}==
The brute-force version only calculates the square of the distance, as the AWK entry and others do.
Line 3,047 ⟶ 3,044:
my ($a1, $b1, $d1) = closest_pair(\@points);
print "$d1\n";</lang>
 
 
=={{header|Phix}}==
Line 3,944 ⟶ 3,857:
Closest points on a quadratic curve (0,0) (1,1)
</lang>
 
=={{header|Raku}}==
(formerly Perl 6)
 
{{trans|Perl 5}}
 
We avoid taking square roots in the slow method because the squares are just as comparable.
(This doesn't always work in the fast method because of distance assumptions in the algorithm.)
<lang perl6>sub MAIN ($N = 5000) {
    my @points = (^$N).map: { [rand * 20 - 10, rand * 20 - 10] }

    my ($af, $bf, $df) = closest_pair(@points);
    say "fast $df at [$af], [$bf]";

    my ($as, $bs, $ds) = closest_pair_simple(@points);
    say "slow $ds at [$as], [$bs]";
}

sub dist-squared($a,$b) {
    ($a[0] - $b[0]) ** 2 +
    ($a[1] - $b[1]) ** 2;
}

sub closest_pair_simple(@arr is copy) {
    return Inf if @arr < 2;
    my ($a, $b, $d) = flat @arr[0,1], dist-squared(|@arr[0,1]);
    while @arr {
        my $p = pop @arr;
        for @arr -> $l {
            my $t = dist-squared($p, $l);
            ($a, $b, $d) = $p, $l, $t if $t < $d;
        }
    }
    return $a, $b, sqrt $d;
}

sub closest_pair(@r) {
    my @ax = @r.sort: { .[0] }
    my @ay = @r.sort: { .[1] }
    return closest_pair_real(@ax, @ay);
}

sub closest_pair_real(@rx, @ry) {
    return closest_pair_simple(@rx) if @rx <= 3;

    my @xP = @rx;
    my @yP = @ry;
    my $N = @xP;
    my $midx = ceiling($N/2) - 1;
    my @PL = @xP[0 .. $midx];
    my @PR = @xP[$midx+1 ..^ $N];
    my $xm = @xP[$midx][0];
    my @yR;
    my @yL;
    push ($_[0] <= $xm ?? @yR !! @yL), $_ for @yP;
    my ($al, $bl, $dL) = closest_pair_real(@PL, @yR);
    my ($ar, $br, $dR) = closest_pair_real(@PR, @yL);
    my ($m1, $m2, $dmin) = $dR < $dL
        ?? ($ar, $br, $dR)
        !! ($al, $bl, $dL);
    my @yS = @yP.grep: { abs($xm - .[0]) < $dmin }
    if @yS {
        my ($w1, $w2, $closest) = $m1, $m2, $dmin;
        for 0 ..^ @yS.end -> $i {
            for $i+1 ..^ @yS -> $k {
                last unless @yS[$k][1] - @yS[$i][1] < $dmin;
                my $d = sqrt dist-squared(@yS[$k], @yS[$i]);
                ($w1, $w2, $closest) = @yS[$k], @yS[$i], $d if $d < $closest;
            }
        }
        return $w1, $w2, $closest;
    } else {
        return $m1, $m2, $dmin;
    }
}</lang>
 
=={{header|REXX}}==