Parallel calculations

=={{header|Perl 6}}==
Assuming that <tt>prime-factors</tt> is defined exactly as in the prime decomposition task (a minimal sketch of such a routine follows the example):
<lang perl6>my @nums = (1_000_000 .. 100_000_000).pick: 100_000;

my \factories = @nums.race(:batch(@nums / 32)).map: *.&prime-factors.cache;
my $gmf = ([max] factories»[0] »=>« @nums).value;</lang>
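For reference, here is one minimal way <tt>prime-factors</tt> might be defined. This is a plain trial-division sketch for illustration only; the prime decomposition task page uses a faster implementation.
<lang perl6>sub prime-factors ( Int $n is copy where * > 0 ) {
    my @factors;
    for 2, 3, *+2 ... * -> $d {            # trial divisors 2, 3, 5, 7, 9, ...
        last if $d * $d > $n;              # remaining smallest factor can't exceed sqrt($n)
        while $n %% $d { @factors.push: $d; $n div= $d }
    }
    @factors.push: $n if $n > 1;           # whatever remains is itself prime
    @factors;
}</lang>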
The second statement takes the list of numbers and converts it to a <tt>RaceSeq</tt>, stored in a sigilless variable. A <tt>RaceSeq</tt> overloads <tt>map</tt> and <tt>grep</tt> so that values are transformed and filtered by worker threads; the runtime picks the number of OS-level threads and assigns worker threads to them while avoiding stalls in any part of the program. A <tt>RaceSeq</tt> is also lazy, so the values are computed in batches as the following statement demands them.
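As an aside, the difference between <tt>race</tt> and its ordered counterpart is easy to demonstrate with a toy computation (the numbers here are arbitrary): <tt>hyper</tt> keeps results in input order, while <tt>race</tt> delivers them in whatever order the workers finish. <tt>:batch</tt> controls the chunk size and <tt>:degree</tt> the number of workers.
<lang perl6>say (1..10).hyper(:batch(2), :degree(4)).map: * ** 2;          # (1 4 9 16 25 36 49 64 81 100), in order
say (1..10).race(:batch(2), :degree(4)).map({ $_ ** 2 }).sort; # same values, sorted for display</lang>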
In the factoring example itself, the <tt>map</tt> stage gives each number its own cached factor list, and the actual factoring runs in the worker threads as those lists are demanded. The hyperoperators then demand the first, hence smallest, value from each factor list and build the pairs associating each smallest factor with its original number. The <tt>[max]</tt> reduction finds the pair with the largest key, from which we can easily extract the number whose minimum prime factor is greatest, after which that number can be factored completely.
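To make the pair-building step concrete, here is the same idiom on hand-written data (these smallest factors are made up for illustration):
<lang perl6>my @orig = 12757923, 12878611, 123456789;
my @mins = 3, 7, 3;                     # pretend smallest prime factors
say @mins »=>« @orig;                   # (3 => 12757923 7 => 12878611 3 => 123456789)
say ([max] @mins »=>« @orig).value;     # 12878611, whose minimum factor is largest</lang>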
 
Besides <tt>RaceSeq</tt> and its order-preserving equivalent <tt>HyperSeq</tt>, [[Rakudo]] supports primitive threads, locks, and high-level promises. Channels and supplies allow values to be moved thread-safely from one thread to another, and a <tt>react</tt> block can be used as a central hub for message passing.
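A minimal sketch of that style, assuming nothing beyond core Rakudo: a worker thread sends values down a channel, and a <tt>react</tt> block consumes them until the channel closes.
<lang perl6>my $results = Channel.new;
start {                                 # runs on a worker thread
    $results.send: $_ ** 2 for 1 .. 5;
    $results.close;
}
react {
    whenever $results { say "got $_" }  # the react block ends when the channel closes
}</lang>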
As for the hyperoperators themselves, [[Rakudo]] does not actually run them in parallel yet, but when it does, this code can automatically parallelize further. (Hypers do parallelize in [[pugs]], but pugs does not support some of the other features relied on here.) It will be up to each individual compiler to determine how many cores to use for any given hyperoperator; the construct merely promises the compiler that the operation can be parallelized, it does not require that it be.
 
In the future, hyperoperators, junctions, and feeds will also be candidates for autothreading. There is additional pipelining that can happen within the factoring routine itself when it is written with a <tt>gather</tt>/<tt>take</tt> construct, which the compiler may implement using either coroutines or threads as it sees fit; threaded pipelines can make more sense on, say, a Cell-style architecture. A lazy variant is sketched below.
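For instance, a lazy <tt>gather</tt>/<tt>take</tt> variant of the factoring routine (hypothetical name, same trial division as the sketch above) yields each factor as soon as it is found, so a consumer that only wants the smallest factor does not pay for the rest:
<lang perl6>sub lazy-prime-factors ( Int $n is copy where * > 0 ) {
    lazy gather {
        for 2, 3, *+2 ... * -> $d {
            last if $d * $d > $n;
            while $n %% $d { take $d; $n div= $d }
        }
        take $n if $n > 1;              # leftover prime, if any
    }
}
say lazy-prime-factors(123456789)[0];   # 3, without forcing the full factorization</lang>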
 
=={{header|PicoLisp}}==