Parallel calculations: Difference between revisions

→‎{{header|Perl 6}}: Expand output to more fully show what is going on, style twiddles
(→‎{{header|Perl 6}}: Expand on parallelization parameters)
Takes the list of numbers and converts it to a <tt>HyperSeq</tt> that is stored in a raw (sigilless) variable. A <tt>HyperSeq</tt> overloads <tt>map</tt> and <tt>grep</tt> to convert and pick values in worker threads. The runtime will pick the number of OS-level threads and assign worker threads to them while avoiding stalls in any part of the program. A <tt>HyperSeq</tt> is lazy, so the computation of values happens in chunks as they are requested.
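As a minimal sketch of the idea (not part of the task code), any list can be hyper-ized and mapped in parallel while the result order is preserved:

<lang perl6># .hyper returns a HyperSeq; its .map runs the block on a pool of
# worker threads, but results come back in the original order.
my @squares = (1..1000).hyper.map: * ** 2;
say @squares[^5];  # (1 4 9 16 25)</lang>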
 
The <tt>hyper</tt> (and <tt>race</tt>) methods can take two named parameters that tweak how the parallelization occurs: <tt>:degree</tt> and <tt>:batch</tt>. <tt>:degree</tt> is the number of worker threads to allocate to the job. By default it is set to the number of physical cores available; if you have a hyper-threading processor and the tasks are not CPU-bound, it may be useful to raise that number, but it is a reasonable default. <tt>:batch</tt> is how many sub-tasks are parcelled out at a time to each worker thread; the default is 64. For small numbers of CPU-intensive tasks a lower number will likely be better, but too low a value may make the dispatch overhead cancel out the benefit of threading; conversely, too high a value will over-burden some threads and starve others. Over long-running processes with many hundreds or thousands of sub-tasks, the scheduler will automatically adjust the batch size up or down to try to keep the pipeline filled. For small batches of CPU-intensive tasks (such as this one) it is useful to give it a smaller starting batch size.
 
On my system, under the load I was running, I found a batch size of 3 to be optimal for this task; it may differ for different systems and different loads.
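As a hedged illustration (the sub name and the numbers here are invented for the sketch, not taken from the task), both knobs are passed as named arguments to <tt>hyper</tt>:

<lang perl6># Hypothetical tuning sketch: 8 worker threads (:degree), each handed
# 3 sub-tasks at a time (:batch). Worth trying when each task is CPU-bound.
sub expensive-work ($n) { ($n ... 1).first: *.is-prime }  # stand-in CPU-heavy task
my @nums = 10**6 .. 10**6 + 20;
my @results = @nums.hyper(:degree(8), :batch(3)).map: &expensive-work;
say @results.elems;  # 21</lang>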
278352769033314050117, 281398154745309057242, 292057004737291582187;
 
my @factories = @nums.hyper(:batch(3)).map: *.&prime-factors.cache;
 
printf "%21d factors: %s\n", |$_ for @nums Z @factories;
 
say my $gmf = {}.append(@factories»[0] »=>« @nums).max: +*.key;
 
say "\nGreatest minimum factor: ", $gmf.key;
 
say "from: { $gmf.value }\n";
 
say 'Run time: ', now - INIT now;
 
}</lang>
{{out|Typical output}}
<pre>64921987050997300559 factors: 736717 88123373087627
70251412046988563035 factors: 5 43 349 936248577956801
71774104902986066597 factors: 736717 97424255043641
83448083465633593921 factors: 736717 113270202079813
84209429893632345702 factors: 2 3 3 3 41 107880821 352564733
87001033462961102237 factors: 736717 118092881612561
87762379890959854011 factors: 3 3 3 3 331 3273372119315201
89538854889623608177 factors: 736717 121537652707381
98421229882942378967 factors: 736717 133594351539251
259826672618677756753 factors: 7 37118096088382536679
262872058330672763871 factors: 3 47 1864340839224629531
267440136898665274575 factors: 3 5 5 71 50223499887073291
278352769033314050117 factors: 7 39764681290473435731
281398154745309057242 factors: 2 809 28571 46061 132155099
292057004737291582187 factors: 7 151 373 2339 111323 2844911
 
Greatest minimum factor: 736717
from: 64921987050997300559 71774104902986066597 83448083465633593921 87001033462961102237 89538854889623608177 98421229882942378967
 
Run time: 0.289377429642621</pre>
 
Besides <tt>HyperSeq</tt> and its (allowed to be) out-of-order equivalent <tt>RaceSeq</tt>, [[Rakudo]] supports primitive threads, locks and high-level promises. Using channels and supplies, values can be moved thread-safely from one thread to another. A react-block can be used as a central hub for message passing.
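A small sketch (not from the task) of moving values between threads with a <tt>Channel</tt> and collecting them in a <tt>react</tt> block:

<lang perl6>my $ch = Channel.new;
start {                      # producer runs on another thread
    $ch.send($_) for 1..5;
    $ch.close;               # closing lets the react block finish
}
react {
    whenever $ch -> $n {     # fires once per received value
        say "got $n";
    }
}</lang>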