Concurrent computing: Difference between revisions

 
(38 intermediate revisions by 19 users not shown)
{{task|Concurrency}}
[[Category:Basic language learning]]
 
;Task:
Using either native language concurrency syntax or freely available libraries, write a program to display the strings "Enjoy" "Rosetta" "Code", one string per line, in random order.
 
Concurrency syntax must use [[thread|threads]], tasks, co-routines, or whatever concurrency is called in your language.
The language's standard libraries are to be used, with external libraries only being used if the language does not allow this task to be completed with standard libraries.
<br><br>
 
=={{header|Ada}}==
<syntaxhighlight lang="ada">with Ada.Text_IO, Ada.Numerics.Float_Random;
 
procedure Concurrent_Hello is
Line 24 ⟶ 25:
begin
null; -- the "environment task" doesn't need to do anything
end Concurrent_Hello;</syntaxhighlight>
 
Note that the random generator object is local to each task; it cannot be accessed concurrently without mutual exclusion. To give each task's local generator a different initial state, Reset is called (see [http://www.adaic.org/resources/add_content/standards/05rm/html/RM-A-5-2.html ARM A.5.2]).
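The per-task generator idea is not Ada-specific: giving every thread a private generator with a distinct initial state avoids any locking around the random calls themselves. As an illustrative cross-language aside (a Python sketch, not part of the Ada entry; all names here are ours):

```python
import random
import threading
import time

printed = []                # collected only so the result can be inspected
lock = threading.Lock()     # protects the shared list and the console

def worker(name, seed):
    rng = random.Random(seed)        # private generator, distinct initial state
    time.sleep(rng.random() / 10)    # random delay shuffles the output order
    with lock:                       # only the shared output needs exclusion
        print(name)
        printed.append(name)

threads = [threading.Thread(target=worker, args=(word, i))
           for i, word in enumerate(["Enjoy", "Rosetta", "Code"])]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each `rng` is thread-local, the `random` calls need no synchronization; only the shared `print`/list access does.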
 
=={{header|ALGOL 68}}==
<syntaxhighlight lang="algol68">main:(
PROC echo = (STRING string)VOID:
printf(($gl$,string));
Line 37 ⟶ 38:
echo("Code")
)
)</syntaxhighlight>
 
=={{header|APL}}==
{{works with|Dyalog APL}}
 
Dyalog APL supports the <code>&</code> operator, which runs a function on its own thread.
 
<syntaxhighlight lang="apl">{⎕←⍵}&¨'Enjoy' 'Rosetta' 'Code'</syntaxhighlight>
{{out}}
(Example)
<pre>Enjoy
Code
Rosetta</pre>
 
=={{header|Astro}}==
<syntaxhighlight lang="python">let words = ["Enjoy", "Rosetta", "Code"]
 
for word in words:
(word) |> async (w) =>
sleep(random())
print(w)</syntaxhighlight>
 
=={{header|BASIC}}==
==={{header|BaCon}}===
{{libheader|gomp}}
{{works with|OpenMP}}
BaCon is a BASIC-to-C compiler; this demonstration assumes GCC. Based on the C OpenMP version.
 
<syntaxhighlight lang="freebasic">' Concurrent computing using the OpenMP extension in GCC. Requires BaCon 3.6 or higher.
 
' Specify compiler flag
Line 70 ⟶ 84:
PRINT str$[i]
NEXT
</syntaxhighlight>
 
{{out}}
Line 83 ⟶ 97:
Rosetta</pre>
 
==={{header|BBC BASIC}}===
{{works with|BBC BASIC for Windows}}
The BBC BASIC interpreter is single-threaded so the only way of achieving 'concurrency' (short of using assembler code) is to use timer events:
<syntaxhighlight lang="bbcbasic"> INSTALL @lib$+"TIMERLIB"
tID1% = FN_ontimer(100, PROCtask1, 1)
Line 116 ⟶ 130:
PROC_killtimer(tID2%)
PROC_killtimer(tID3%)
ENDPROC</syntaxhighlight>
 
=={{header|C}}==
 
{{works with|POSIX}}
{{libheader|pthread}}

<syntaxhighlight lang="c">#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

pthread_mutex_t condm = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int bang = 0;

#define WAITBANG() do { \
   pthread_mutex_lock(&condm); \
   while( bang == 0 ) \
   { \
      pthread_cond_wait(&cond, &condm); \
   } \
   pthread_mutex_unlock(&condm); } while(0)

void *t_enjoy(void *p)
{
  WAITBANG();
  printf("Enjoy\n");
  pthread_exit(0);
}

void *t_rosetta(void *p)
{
  WAITBANG();
  printf("Rosetta\n");
  pthread_exit(0);
}

void *t_code(void *p)
{
  WAITBANG();
  printf("Code\n");
  pthread_exit(0);
}

typedef void *(*threadfunc)(void *);

int main()
{
  int i;
  pthread_t a[3];
  threadfunc p[3] = {t_enjoy, t_rosetta, t_code};

  for(i=0;i<3;i++)
  {
    pthread_create(&a[i], NULL, p[i], NULL);
  }
  sleep(1);
  bang = 1;
  pthread_cond_broadcast(&cond);
  for(i=0;i<3;i++)
  {
    pthread_join(a[i], NULL);
  }
}</syntaxhighlight>
 
'''Note''': since the threads are created one after another, execution tends to follow the order of creation. To make this less evident, a ''bang'' (starting gun) is implemented with a condition variable: each thread runs its code only once the bang is heard. Nonetheless, the output still tends to follow the creation order (Enjoy, Rosetta, Code), perhaps because of the order in which the locks are acquired. The only reliable way to obtain randomness seems to be to add a random wait in each thread (or to wait for a special CPU load condition).
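The starting-gun-plus-random-wait idea from the note above is not specific to pthreads. As an illustrative aside (a Python sketch, not part of the C entry; names and delays are ours), the same structure with a start event and a random sleep per thread:

```python
import random
import threading
import time

bang = threading.Event()    # the "starting gun", analogous to the condition variable
order = []
olock = threading.Lock()    # serializes access to the console and the list

def racer(word):
    bang.wait()                       # block until the gun fires
    time.sleep(random.random() / 20)  # the random wait that breaks creation order
    with olock:
        print(word)
        order.append(word)

threads = [threading.Thread(target=racer, args=(w,))
           for w in ("Enjoy", "Rosetta", "Code")]
for t in threads:
    t.start()
bang.set()                  # fire: all three threads wake at (nearly) the same time
for t in threads:
    t.join()
```

`threading.Event` plays the role of the mutex-protected `bang` flag plus `pthread_cond_broadcast`: all waiters are released at once, and the random sleeps then decide the printing order.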
 
===OpenMP===
Compile with <code>gcc -std=c99 -fopenmp</code>:
<syntaxhighlight lang="c">#include <stdio.h>
#include <omp.h>
 
int main()
{
const char *str[] = { "Enjoy", "Rosetta", "Code" };
#pragma omp parallel for num_threads(3)
for (int i = 0; i < 3; i++)
printf("%s\n", str[i]);
return 0;
}</syntaxhighlight>
 
=={{header|C sharp|C#}}==
===With Threads===
<syntaxhighlight lang="csharp">
static Random tRand = new Random();

static void Main(string[] args)
{
    Thread t = new Thread(new ParameterizedThreadStart(WriteText));
    t.Start("Enjoy");

    t = new Thread(new ParameterizedThreadStart(WriteText));
    t.Start("Rosetta");

    t = new Thread(new ParameterizedThreadStart(WriteText));
    t.Start("Code");

    Console.ReadLine();
}

private static void WriteText(object p)
{
    Thread.Sleep(tRand.Next(1000, 4000));
    Console.WriteLine(p);
}
</syntaxhighlight>
 
An example result:
<pre>
Enjoy
Code
Rosetta
</pre>
===With Tasks===
{{works with|C sharp|7.1}}
<syntaxhighlight lang="csharp">using System;
using System.Threading.Tasks;

public class Program
{
    static async Task Main() {
        Task t1 = Task.Run(() => Console.WriteLine("Enjoy"));
        Task t2 = Task.Run(() => Console.WriteLine("Rosetta"));
        Task t3 = Task.Run(() => Console.WriteLine("Code"));

        await Task.WhenAll(t1, t2, t3);
    }
}</syntaxhighlight>
===With a parallel loop===
<syntaxhighlight lang="csharp">using System;
using System.Threading.Tasks;

public class Program
{
    static void Main() => Parallel.ForEach(new[] {"Enjoy", "Rosetta", "Code"}, s => Console.WriteLine(s));
}</syntaxhighlight>
 
=={{header|C++}}==
{{works with|C++11}}
The following example compiles with GCC 4.7.
 
<code>g++ -std=c++11 -D_GLIBCXX_USE_NANOSLEEP -o concomp concomp.cpp</code>
 
<syntaxhighlight lang="cpp">#include <thread>
#include <iostream>
#include <vector>
#include <random>
#include <chrono>

int main()
{
    std::random_device rd;
    std::mt19937 eng(rd()); // mt19937 generator with a hardware random seed.
    std::uniform_int_distribution<> dist(1,1000);
    std::vector<std::thread> threads;

    for(const auto& str: {"Enjoy\n", "Rosetta\n", "Code\n"}) {
        // between 1 and 1000ms per our distribution
        std::chrono::milliseconds duration(dist(eng));

        threads.emplace_back([str, duration](){
            std::this_thread::sleep_for(duration);
            std::cout << str;
        });
    }

    for(auto& t: threads) t.join();

    return 0;
}</syntaxhighlight>
 
Output:
<pre>Enjoy
Code
Rosetta</pre>
 
{{libheader|Microsoft Parallel Patterns Library (PPL)}}
 
<syntaxhighlight lang="cpp">#include <iostream>
#include <ppl.h> // MSVC++
 
void a(void) { std::cout << "Eat\n"; }
void b(void) { std::cout << "At\n"; }
void c(void) { std::cout << "Joe's\n"; }
 
int main()
{
// function pointers
Concurrency::parallel_invoke(&a, &b, &c);
 
// C++11 lambda functions
Concurrency::parallel_invoke(
[]{ std::cout << "Enjoy\n"; },
[]{ std::cout << "Rosetta\n"; },
[]{ std::cout << "Code\n"; }
);
return 0;
}</syntaxhighlight>
Output:
<pre>
Joe's
Eat
At
Enjoy
Code
Rosetta
</pre>
 
=={{header|Cind}}==
 
<syntaxhighlight lang="cind">
execute() {
{# host.println("Enjoy");
Line 232 ⟶ 345:
# host.println("Code"); }
}
</syntaxhighlight>
 
=={{header|Clojure}}==
 
A simple way to obtain concurrency is using the ''future'' function, which evaluates its body on a separate thread.
<syntaxhighlight lang="clojure">(doseq [text ["Enjoy" "Rosetta" "Code"]]
(future (println text)))</syntaxhighlight>
Using the new (2013) ''core.async'' library, "go blocks" can execute asynchronously,
sharing threads from a pool. This works even in ClojureScript (the JavaScript target of Clojure)
on a single thread. The ''timeout'' call is there just to shuffle things up: note this delay doesn't block a thread.
<syntaxhighlight lang="clojure">(require '[clojure.core.async :refer [go <! timeout]])
(doseq [text ["Enjoy" "Rosetta" "Code"]]
(go
(<! (timeout (rand-int 1000))) ; wait a random fraction of a second,
(println text)))</syntaxhighlight>
 
=={{header|CoffeeScript}}==
Line 255 ⟶ 368:
JavaScript, which CoffeeScript compiles to, is single-threaded. This approach launches multiple process to achieve concurrency on [http://nodejs.org Node.js]:
 
<syntaxhighlight lang="coffeescript">{ exec } = require 'child_process'
 
for word in [ 'Enjoy', 'Rosetta', 'Code' ]
exec "echo #{word}", (err, stdout) ->
console.log stdout</syntaxhighlight>
 
===Using Node.js===
Line 265 ⟶ 378:
As stated above, CoffeeScript is single-threaded. This approach launches multiple [http://nodejs.org Node.js] processes to achieve concurrency.
 
<syntaxhighlight lang="coffeescript"># The "master" file.
 
{ fork } = require 'child_process'
Line 272 ⟶ 385:
words = [ 'Enjoy', 'Rosetta', 'Code' ]
 
fork child_name, [ word ] for word in words</syntaxhighlight>
 
<syntaxhighlight lang="coffeescript"># child.coffee
 
console.log process.argv[ 2 ]</syntaxhighlight>
 
=={{header|Common Lisp}}==
Line 284 ⟶ 397:
Concurrency and threads are not part of the Common Lisp standard. However, most implementations provide some interface for concurrency. [http://common-lisp.net/project/bordeaux-threads/ Bordeaux Threads], used here, provides a compatibility layer for many implementations. (Binding <var>out</var> to <code>*standard-output*</code> before threads are created is needed as each thread gets its own binding for <code>*standard-output*</code>.)
 
<syntaxhighlight lang="lisp">(defun concurrency-example (&optional (out *standard-output*))
(let ((lock (bordeaux-threads:make-lock)))
(flet ((writer (string)
Line 293 ⟶ 406:
(bordeaux-threads:make-thread (writer "Enjoy"))
(bordeaux-threads:make-thread (writer "Rosetta"))
(bordeaux-threads:make-thread (writer "Code")))))</syntaxhighlight>
 
=={{header|Crystal}}==
Crystal requires the use of channels to ensure that the main fiber doesn't exit before any of the new fibers are done, since each fiber sleeping could return control to the main fiber.
<syntaxhighlight lang="ruby">require "channel"
require "fiber"
require "random"
 
done = Channel(Nil).new
 
"Enjoy Rosetta Code".split.map do |x|
spawn do
sleep Random.new.rand(0..500).milliseconds
puts x
done.send nil
end
end
 
3.times do
done.receive
end</syntaxhighlight>
 
=={{header|D}}==
<syntaxhighlight lang="d">import std.stdio, std.random, std.parallelism, core.thread, core.time;
 
void main() {
Line 303 ⟶ 436:
s.writeln;
}
}</syntaxhighlight>
 
===Alternative version===
{{libheader|Tango}}
<syntaxhighlight lang="d">import tango.core.Thread;
import tango.io.Console;
import tango.math.Random;
Line 315 ⟶ 448:
(new Thread( { Thread.sleep(Random.shared.next(1000) / 1000.0); Cout("Rosetta").newline; } )).start;
(new Thread( { Thread.sleep(Random.shared.next(1000) / 1000.0); Cout("Code").newline; } )).start;
}</syntaxhighlight>
 
=={{header|Dart}}==
===Future===
Using Futures (called Promises in JavaScript).
<syntaxhighlight lang="javascript">import 'dart:math' show Random;
 
main(){
Line 337 ⟶ 470:
code() => Future.delayed( Duration( milliseconds: rng.nextInt( 10 ) ), () => "Code");
 
</syntaxhighlight>
===Isolate===
Using Isolates, which are similar to threads except that each has its own memory; they are closer to Rust threads than to C++ threads.
<syntaxhighlight lang="javascript">import 'dart:isolate' show Isolate, ReceivePort;
import 'dart:io' show exit, sleep;
import 'dart:math' show Random;
Line 400 ⟶ 533:
}
 
</syntaxhighlight>
 
=={{header|Delphi}}==
<syntaxhighlight lang="delphi">program ConcurrentComputing;
 
{$APPTYPE CONSOLE}
Line 441 ⟶ 574:
 
WaitForMultipleObjects(Length(lThreadArray), @lThreadArray, True, INFINITE);
end.</syntaxhighlight>
 
=={{header|dodo0}}==
<syntaxhighlight lang="dodo0">fun parprint -> text, return
(
fork() -> return, throw
Line 457 ⟶ 590:
parprint("Code") ->
 
exit()</syntaxhighlight>
 
=={{header|E}}==
<syntaxhighlight lang="e">def base := timer.now()
for string in ["Enjoy", "Rosetta", "Code"] {
timer <- whenPast(base + entropy.nextInt(1000), fn { println(string) })
}</syntaxhighlight>
 
Nondeterminism from preemptive concurrency rather than a random number generator:
 
<syntaxhighlight lang="e">def seedVat := <import:org.erights.e.elang.interp.seedVatAuthor>(<unsafe>)
for string in ["Enjoy", "Rosetta", "Code"] {
seedVat <- (`
Line 475 ⟶ 608:
}
`) <- get(0) <- (string)
}</syntaxhighlight>
 
=={{header|EchoLisp}}==
<syntaxhighlight lang="scheme">
(lib 'tasks) ;; use the tasks library
 
Line 492 ⟶ 625:
#task:id:67:running code
#task:id:65:running Enjoy
</syntaxhighlight>
 
=={{header|Egel}}==
<syntaxhighlight lang="egel">
import "prelude.eg"
import "io.ego"
Line 506 ⟶ 639:
[_ -> print "rosetta\n"])
[_ -> print "code\n"] in nop
</syntaxhighlight>
 
=={{header|Elixir}}==
<syntaxhighlight lang="elixir">defmodule Concurrent do
def computing(xs) do
Enum.each(xs, fn x ->
Line 521 ⟶ 654:
end
Concurrent.computing ["Enjoy", "Rosetta", "Code"]</syntaxhighlight>
 
{{out}}
Line 532 ⟶ 665:
=={{header|Erlang}}==
hw.erl
<syntaxhighlight lang="erlang">-module(hw).
-export([start/0]).
 
Line 550 ⟶ 683:
_N -> wait(N-1)
end
end.</syntaxhighlight>
 
running it
<syntaxhighlight lang="erlang">|erlc hw.erl
|erl -run hw start -run init stop -noshell</syntaxhighlight>
 
=={{header|Euphoria}}==
<syntaxhighlight lang="euphoria">procedure echo(sequence s)
puts(1,s)
puts(1,'\n')
Line 573 ⟶ 706:
task_schedule(task3,1)
 
task_yield()</syntaxhighlight>
 
Output:
Line 582 ⟶ 715:
=={{header|F_Sharp|F#}}==
We define a parallel version of <code>Seq.iter</code> by using asynchronous workflows:
<syntaxhighlight lang="fsharp">module Seq =
let piter f xs =
seq { for x in xs -> async { f x } }
Line 593 ⟶ 726:
["Enjoy"; "Rosetta"; "Code";]
 
main()</syntaxhighlight>
 
With version 4 of the .NET framework and F# PowerPack 2.0 installed, it is possible to use the predefined <code>PSeq.iter</code> instead.
 
=={{header|Factor}}==
<syntaxhighlight lang="factor">USE: concurrency.combinators
 
{ "Enjoy" "Rosetta" "Code" } [ print ] parallel-each</syntaxhighlight>
 
=={{header|Forth}}==
Line 606 ⟶ 739:
Many Forth implementations come with a simple cooperative task scheduler. Typically each task blocks on I/O or explicit use of the '''pause''' word. There is also a class of variables called "user" variables which contain task-specific data, such as the current base and stack pointers.
 
<syntaxhighlight lang="forth">require tasker.fs
require random.fs
 
Line 622 ⟶ 755:
s" Code" task
begin pause single-tasking? until ;
main</syntaxhighlight>
 
=={{header|Fortran}}==
Fortran doesn't have threads, but several compilers support OpenMP, e.g. gfortran and Intel. The following code has been tested with the Intel 11.1 compiler on WinXP.
 
<syntaxhighlight lang="fortran">program concurrency
implicit none
character(len=*), parameter :: str1 = 'Enjoy'
Line 670 ⟶ 803:
!$omp end parallel do
 
end program concurrency</syntaxhighlight>
 
=={{header|FreeBASIC}}==
<syntaxhighlight lang="freebasic">' FB 1.05.0 Win64
' Compiled with -mt switch (to use threadsafe runtime)
' The 'ThreadCall' functionality in FB is based internally on LibFFi (see [https://github.com/libffi/libffi/blob/master/LICENSE] for license)
Line 701 ⟶ 834:
Print
Sleep
Loop While Inkey <> Chr(27)</syntaxhighlight>
 
Sample output
Line 717 ⟶ 850:
Code
</pre>
 
=={{header|FutureBasic}}==
<syntaxhighlight lang="futurebasic">
include "NSLog.incl"
 
long priority(2)
priority(0) = _dispatchPriorityDefault
priority(1) = _dispatchPriorityHigh
priority(2) = _dispatchPriorityLow
 
dispatchglobal , priority(rnd(3)-1)
NSLog(@"Enjoy")
dispatchend
 
dispatchglobal , priority(rnd(3)-1)
NSLog(@"Rosetta")
dispatchend
 
dispatchglobal , priority(rnd(3)-1)
NSLog(@"Code")
dispatchend
 
HandleEvents
</syntaxhighlight>
 
=={{header|Go}}==
Line 724 ⟶ 881:
This solution also shows a good practice for generating random numbers in concurrent goroutines. While certainly not needed for this RC task, in the more general case where you have a number of goroutines concurrently needing random numbers, the goroutines can suffer congestion if they compete heavily for the sole default library source. This can be relieved by having each goroutine create its own non-sharable source. Also particularly in cases where there might be a large number of concurrent goroutines, the source provided in subrepository rand package (exp/rand) can be a better choice than the standard library generator. The subrepo generator requires much less memory for "state" and is much faster to seed.
 
<syntaxhighlight lang="go">package main
import (
Line 746 ⟶ 903:
fmt.Println(<-q)
}
}</syntaxhighlight>
 
===Afterfunc===
time.Afterfunc combines the sleep and the goroutine start. log.Println serializes output in the case goroutines attempt to print concurrently. sync.WaitGroup is used directly as a checkpoint.
<syntaxhighlight lang="go">package main
 
import (
Line 774 ⟶ 931:
}
q.Wait()
}</syntaxhighlight>
 
===Select===
This solution might stretch the intent of the task a bit. It is concurrent but not parallel. Also, it doesn't sleep and doesn't call the random number generator explicitly. It works because the select statement is specified to make a "pseudo-random fair choice" among multiple channel operations.
<syntaxhighlight lang="go">package main
 
import "fmt"
Line 803 ⟶ 960:
}
}
}</syntaxhighlight>
Output:
<pre>
Line 820 ⟶ 977:
 
=={{header|Groovy}}==
<syntaxhighlight lang="groovy">'Enjoy Rosetta Code'.tokenize().collect { w ->
Thread.start {
Thread.sleep(1000 * Math.random() as int)
println w
}
}.each { it.join() }</syntaxhighlight>
 
=={{header|Haskell}}==
Line 831 ⟶ 988:
Note how the map treats the list of processes just like any other data.
 
<syntaxhighlight lang="haskell">import Control.Concurrent
 
main = mapM_ forkIO [process1, process2, process3] where
process1 = putStrLn "Enjoy"
process2 = putStrLn "Rosetta"
process3 = putStrLn "Code"</syntaxhighlight>
 
A more elaborated example using MVars and a random running time per thread.
 
<syntaxhighlight lang="haskell">import Control.Concurrent
import System.Random
 
Line 858 ⟶ 1,015:
-- until we write another value to it
putMVar v (s : val) -- append a text string to the MVar and block other
-- threads from writing to it unless it is read first</syntaxhighlight>
 
==Icon and {{header|Unicon}}==
The following code uses features exclusive to Unicon
<syntaxhighlight lang="unicon">procedure main()
L:=[ thread write("Enjoy"), thread write("Rosetta"), thread write("Code") ]
every wait(!L)
end</syntaxhighlight>
 
=={{header|J}}==
 
Using J's new threading primitives (in place of some sort of thread emulation):

<syntaxhighlight lang=J>reqthreads=: {{ 0&T.@''^:(0>.y-1 T.'')0 }}
dispatchwith=: (t.'')every
newmutex=: 10&T.
lock=: 11&T.
unlock=: 13&T.
synced=: {{
  lock n
  r=. u y
  unlock n
  r
}}
register=: {{ out=: out, y }} synced (newmutex 0)
task=: {{
  reqthreads 3 NB. at least 3 worker threads
  out=: EMPTY
  #@> register dispatchwith ;:'Enjoy Rosetta Code'
  out
}}</syntaxhighlight>

Sample use:

<syntaxhighlight lang=J>   task''
Enjoy
Rosetta
Code
   task''
Enjoy
Code
Rosetta</syntaxhighlight>
 
=={{header|Java}}==
Create a new <code>Thread</code> array, shuffle the array, start each thread.
<syntaxhighlight lang="java">
import java.util.Arrays;
import java.util.Collections;

public class EnjoyRosettaCode {
    public static void main(String[] args) {
        Thread[] threads = new Thread[3];
        threads[0] = new Thread(() -> System.out.println("Enjoy"));
        threads[1] = new Thread(() -> System.out.println("Rosetta"));
        threads[2] = new Thread(() -> System.out.println("Code"));
        Collections.shuffle(Arrays.asList(threads));
        for (Thread thread : threads)
            thread.start();
    }
}
</syntaxhighlight>
<br />
An alternate demonstration
{{works with|Java|1.5+}}
Uses CyclicBarrier to force all threads to wait until they're at the same point before executing the println, increasing the odds they'll print in a different order (otherwise, while they may execute in parallel, the threads are started sequentially and, with such a short run time, will usually output sequentially as well).
 
<syntaxhighlight lang="java5">import java.util.concurrent.CyclicBarrier;
 
public class Threads
Line 920 ⟶ 1,106:
new Thread(new DelayedMessagePrinter(barrier, "Code")).start();
}
}</syntaxhighlight>
 
=={{header|JavaScript}}==
 
JavaScript now enjoys access to a concurrency library thanks to [http://en.wikipedia.org/wiki/Web_worker Web Workers]. The Web Workers specification defines an API for spawning background scripts. This first code is the background script and should be in the concurrent_worker.js file.
<syntaxhighlight lang="javascript">self.addEventListener('message', function (event) {
self.postMessage(event.data);
self.close();
}, false);</syntaxhighlight>
This second block creates the workers, sends them a message and creates an event listener to handle the response.
<syntaxhighlight lang="javascript">var words = ["Enjoy", "Rosetta", "Code"];
var workers = [];
 
Line 939 ⟶ 1,125:
}, false);
workers[i].postMessage(words[i]);
}</syntaxhighlight>
 
=={{header|Julia}}==
{{works with|Julia|0.6}}
 
<syntaxhighlight lang="julia">words = ["Enjoy", "Rosetta", "Code"]
 
function sleepprint(s)
Line 953 ⟶ 1,139:
@sync for word in words
@async sleepprint(word)
end</syntaxhighlight>
 
=={{header|Kotlin}}==
{{trans|Java}}
<syntaxhighlight lang="scala">// version 1.1.2
 
import java.util.concurrent.CyclicBarrier
Line 972 ⟶ 1,158:
val barrier = CyclicBarrier(msgs.size)
for (msg in msgs) Thread(DelayedMessagePrinter(barrier, msg)).start()
}</syntaxhighlight>
 
{{out}}
Line 983 ⟶ 1,169:
 
=={{header|LFE}}==
<syntaxhighlight lang="lisp">
;;;
;;; This is a straight port of the Erlang version.
Line 1,010 ⟶ 1,196:
(0 0)
(_n (wait (- n 1)))))))
</syntaxhighlight>
 
=={{header|Logtalk}}==
Works when using SWI-Prolog, XSB, or YAP as the backend compiler.
<syntaxhighlight lang="logtalk">:- object(concurrency).
 
:- initialization(output).
Line 1,025 ⟶ 1,211:
)).
 
:- end_object.</syntaxhighlight>
 
=={{header|Lua}}==
<syntaxhighlight lang="lua">co = {}
co[1] = coroutine.create( function() print "Enjoy" end )
co[2] = coroutine.create( function() print "Rosetta" end )
Line 1,043 ⟶ 1,229:
i = i + 1
end
until i == 3</syntaxhighlight>
 
=={{header|M2000 Interpreter}}==
Line 1,054 ⟶ 1,240:
Threads actually run in a wait loop. Main.Task can be used as a loop, which is also a thread. Threads can run while we wait for input in the M2000 console, or for events from M2000 GUI forms. Events always run sequentially.
 
<syntaxhighlight lang="m2000 interpreter">
Thread.Plan Concurrent
Module CheckIt {
Line 1,108 ⟶ 1,294:
CheckIt
 
</syntaxhighlight>
 
=={{header|Mathematica}} / {{header|Wolfram Language}}==
Parallelization requires Mathematica 7 or later
<syntaxhighlight lang="mathematica">ParallelDo[
Pause[RandomReal[]];
Print[s],
{s, {"Enjoy", "Rosetta", "Code"}}
]</syntaxhighlight>
 
=={{header|Mercury}}==
<syntaxhighlight lang="text">:- module concurrent_computing.
:- interface.
 
Line 1,131 ⟶ 1,317:
spawn(io.print_cc("Enjoy\n"), !IO),
spawn(io.print_cc("Rosetta\n"), !IO),
spawn(io.print_cc("Code\n"), !IO).</syntaxhighlight>
 
=={{header|Neko}}==
<syntaxhighlight lang="actionscript">/**
Concurrent computing, in Neko
*/
Line 1,167 ⟶ 1,353:
 
/* Let the threads complete */
sys_sleep(4);</syntaxhighlight>
 
{{out}}
Line 1,183 ⟶ 1,369:
=={{header|Nim}}==
Compile with <code>nim --threads:on c concurrent</code>:
<syntaxhighlight lang="nim">const str = ["Enjoy", "Rosetta", "Code"]
 
var thr: array[3, Thread[int32]]
Line 1,192 ⟶ 1,378:
for i in 0..thr.high:
createThread(thr[i], f, int32(i))
joinThreads(thr)</syntaxhighlight>
 
===OpenMP===
Compile with <code>nim --passC:"-fopenmp" --passL:"-fopenmp" c concurrent</code>:
<syntaxhighlight lang="nim">const str = ["Enjoy", "Rosetta", "Code"]
 
for i in 0||2:
echo str[i]</syntaxhighlight>
 
===Thread Pools===
Compile with <code>nim --threads:on c concurrent</code>:
<syntaxhighlight lang="nim">import threadpool
const str = ["Enjoy", "Rosetta", "Code"]
 
proc f(i: int) {.thread.} =
echo str[i]
 
for i in 0..str.high:
spawn f(i)
sync()</syntaxhighlight>
 
=={{header|Objeck}}==
<syntaxhighlight lang="objeck">
bundle Default {
class MyThread from Thread {
Line 1,242 ⟶ 1,428:
}
}
</syntaxhighlight>
 
=={{header|OCaml}}==
 
<syntaxhighlight lang="ocaml">#directory "+threads"
#load "unix.cma"
#load "threads.cma"
Line 1,259 ⟶ 1,445:
let () =
Random.self_init ();
List.iter (Thread.join) threads</syntaxhighlight>
 
=={{header|Oforth}}==
Line 1,265 ⟶ 1,451:
Oforth uses tasks to implement concurrent computing. A task is scheduled using #& on a function, method, block, ...
 
<syntaxhighlight lang="oforth">#[ "Enjoy" println ] &
#[ "Rosetta" println ] &
#[ "Code" println ] &</syntaxhighlight>
mapParallel method can be used to map a runnable on each element of a collection and returns a collection of results. Here, we println the string and return string size.
 
<syntaxhighlight lang="oforth">[ "Enjoy", "Rosetta", "Code" ] mapParallel(#[ dup . size ])</syntaxhighlight>
 
=={{header|Ol}}==
<syntaxhighlight lang="scheme">
(import (otus random!))
 
(for-each (lambda (str)
(define timeout (rand! 999))
(async (lambda ()
(sleep timeout)
(print str))))
'("Enjoy" "Rosetta" "Code"))
</syntaxhighlight>
{{Out}}
<pre>Code
Enjoy
Rosetta
</pre>
 
=={{header|ooRexx}}==
<syntaxhighlight lang="oorexx">
-- this will launch 3 threads, with each thread given a message to print out.
-- I've added a stoplight to make each thread wait until given a go signal,
Line 1,309 ⟶ 1,512:
call syssleep .5 -- add another sleep here
say text
</syntaxhighlight>
 
=={{header|Oz}}==
The randomness comes from the unpredictability of thread scheduling (this is how I understand this exercise).
 
<syntaxhighlight lang="oz">for Msg in ["Enjoy" "Rosetta" "Code"] do
thread
{System.showInfo Msg}
end
end
</syntaxhighlight>
 
=={{header|PARI/GP}}==
Here is a GP implementation using the [http://pari.math.u-bordeaux.fr/cgi-bin/gitweb.cgi?p=pari.git;a=tree;h=refs/heads/bill-mt;hb=refs/heads/bill-mt bill-mt] branch:
<syntaxhighlight lang="parigp">inline(func);
func(n)=print(["Enjoy","Rosetta","Code"][n]);
parapply(func,[1..3]);</syntaxhighlight>
 
This is a PARI implementation which uses <code>fork()</code> internally. Note that the [[#C|C]] solutions can be used instead if desired; this program demonstrates the native PARI capabilities instead.
 
For serious concurrency, see Appendix B of the User's Guide to the PARI Library which discusses a solution using [[wp:Thread-local storage|tls]] on [[wp:POSIX Threads|pthreads]]. (There are nontrivial issues with using PARI in this environment, do not attempt to blindly implement a [[#C|C]] solution.)
<syntaxhighlight lang="c">void
foo()
{
Line 1,346 ⟶ 1,549:
pari_printf("Rosetta\n");
}
}</syntaxhighlight>
 
See also [http://pari.math.u-bordeaux1.fr/Events/PARI2012/talks/pareval.pdf Bill Allombert's slides on parallel programming in GP].
 
=={{header|Pascal}}==
{{trans|Delphi}} modified for linux. Using simple running thread-counter to circumvent WaitForMultipleObjects.<BR>
Output of difference of sleep time and true sleep time ( running with 0..1999 threads you see once a while 1)
 
<syntaxhighlight lang="pascal">program ConcurrentComputing;
{$IFdef FPC}
{$MODE DELPHI}
{$ELSE}
{$APPTYPE CONSOLE}
{$ENDIF}
uses
{$IFDEF UNIX}
cthreads,
{$ENDIF}
SysUtils, Classes;
 
type
TRandomThread = class(TThread)
private
FString: string;
T0 : Uint64;
protected
procedure Execute; override;
public
constructor Create(const aString: string); overload;
end;
const
MyStrings: array[0..2] of String = ('Enjoy ','Rosetta ','Code ');
var
gblRunThdCnt : LongWord = 0;
 
constructor TRandomThread.Create(const aString: string);
begin
inherited Create(False);
FreeOnTerminate := True;
FString := aString;
interlockedincrement(gblRunThdCnt);
end;
 
procedure TRandomThread.Execute;
var
i : NativeInt;
begin
i := Random(300);
T0 := GettickCount64;
Sleep(i);
//output of difference in time
Writeln(FString,i:4,GettickCount64-T0 -i:2);
interlockeddecrement(gblRunThdCnt);
end;
 
var
lThreadArray: Array[0..9] of THandle;
i : NativeInt;
begin
Randomize;
 
gblRunThdCnt := 0;
For i := low(lThreadArray) to High(lThreadArray) do
lThreadArray[i] := TRandomThread.Create(Format('%9s %4d',[myStrings[Random(3)],i])).Handle;
 
while gblRunThdCnt > 0 do
sleep(125);
end.</syntaxhighlight>
{{out}}
<pre>
Enjoy 4 16 0
Code 0 22 0
Code 1 32 0
Rosetta 7 117 0
Enjoy 2 137 0
Code 6 214 0
Code 5 252 0
Enjoy 3 299 0</pre>
 
=={{header|Perl}}==
 
{{libheader|Time::HiRes}}
<syntaxhighlight lang="perl">use threads;
use Time::HiRes qw(sleep);
 
$_->join for map {
    threads->create(sub {
        sleep rand;
        print shift, "\n";
    }, $_)
} qw(Enjoy Rosetta Code);</syntaxhighlight>
 
Or using coroutines provided by {{libheader|Coro}}
<syntaxhighlight lang="perl">use feature qw( say );
use Coro;
use Coro::Timer qw( sleep );

$_->join for map {
    async {
        sleep rand;
        say @_;
    } $_
} qw( Enjoy Rosetta Code );</syntaxhighlight>
 
=={{header|Phix}}==
Without the sleep, the output is almost always Enjoy, Rosetta, Code in that order, because create_thread() is more costly than echo() itself: the former has to create a new call stack, etc.<br>
The lock prevents the displays from mangling each other.
<!--<syntaxhighlight lang="phix">(notonline)-->
<span style="color: #008080;">without</span> <span style="color: #008080;">js</span> <span style="color: #000080;font-style:italic;">-- (threads)</span>
<span style="color: #008080;">procedure</span> <span style="color: #000000;">echo</span><span style="color: #0000FF;">(</span><span style="color: #004080;">string</span> <span style="color: #000000;">s</span><span style="color: #0000FF;">)</span>
    <span style="color: #7060A8;">sleep</span><span style="color: #0000FF;">(</span><span style="color: #7060A8;">rand</span><span style="color: #0000FF;">(</span><span style="color: #000000;">100</span><span style="color: #0000FF;">)/</span><span style="color: #000000;">100</span><span style="color: #0000FF;">)</span>
    <span style="color: #7060A8;">enter_cs</span><span style="color: #0000FF;">()</span>
    <span style="color: #7060A8;">puts</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #000000;">s</span><span style="color: #0000FF;">)</span>
    <span style="color: #7060A8;">puts</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">'\n'</span><span style="color: #0000FF;">)</span>
    <span style="color: #7060A8;">leave_cs</span><span style="color: #0000FF;">()</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">procedure</span>
 
<span style="color: #008080;">constant</span> <span style="color: #000000;">threads</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{</span><span style="color: #000000;">create_thread</span><span style="color: #0000FF;">(</span><span style="color: #7060A8;">routine_id</span><span style="color: #0000FF;">(</span><span style="color: #008000;">"echo"</span><span style="color: #0000FF;">),{</span><span style="color: #008000;">"Enjoy"</span><span style="color: #0000FF;">}),</span>
                    <span style="color: #000000;">create_thread</span><span style="color: #0000FF;">(</span><span style="color: #7060A8;">routine_id</span><span style="color: #0000FF;">(</span><span style="color: #008000;">"echo"</span><span style="color: #0000FF;">),{</span><span style="color: #008000;">"Rosetta"</span><span style="color: #0000FF;">}),</span>
                    <span style="color: #000000;">create_thread</span><span style="color: #0000FF;">(</span><span style="color: #7060A8;">routine_id</span><span style="color: #0000FF;">(</span><span style="color: #008000;">"echo"</span><span style="color: #0000FF;">),{</span><span style="color: #008000;">"Code"</span><span style="color: #0000FF;">})}</span>
<span style="color: #000000;">wait_thread</span><span style="color: #0000FF;">(</span><span style="color: #000000;">threads</span><span style="color: #0000FF;">)</span>
<span style="color: #7060A8;">puts</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"done"</span><span style="color: #0000FF;">)</span>
<span style="color: #0000FF;">{}</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">wait_key</span><span style="color: #0000FF;">()</span>
<!--</syntaxhighlight>-->
 
=={{header|PicoLisp}}==
===Using background tasks===
<syntaxhighlight lang="picolisp">(for (N . Str) '("Enjoy" "Rosetta" "Code")
(task (- N) (rand 1000 4000) # Random start time 1 .. 4 sec
Str Str # Closure with string value
(println Str) # Task body: Print the string
      (task @) ) )                 # and stop the task</syntaxhighlight>
===Using child processes===
<syntaxhighlight lang="picolisp">(for Str '("Enjoy" "Rosetta" "Code")
(let N (rand 1000 4000) # Randomize
(unless (fork) # Create child process
(wait N) # Wait 1 .. 4 sec
(println Str) # Print string
      (bye) ) ) )                  # Terminate child process</syntaxhighlight>
 
=={{header|Pike}}==
Using POSIX threads:
<syntaxhighlight lang="pike">int main() {
// Start threads and wait for them to finish
({
        Thread.Thread(write, "Enjoy\n"),
        Thread.Thread(write, "Rosetta\n"),
        Thread.Thread(write, "Code\n")
    })->wait();
// Exit program
exit(0);
}</syntaxhighlight>
Output:
Enjoy
 
Using Pike's backend:
<syntaxhighlight lang="pike">int main(int argc, array argv)
{
call_out(write, random(1.0), "Enjoy\n");
Line 1,435 ⟶ 1,716:
call_out(exit, 1, 0);
return -1; // return -1 starts the backend which makes Pike run until exit() is called.
}</syntaxhighlight>
Output:
Rosetta
=={{header|PowerShell}}==
Using Background Jobs:
<syntaxhighlight lang="powershell">$Strings = "Enjoy","Rosetta","Code"
 
$SB = {param($String)Write-Output $String}
foreach ($String in $Strings)
{
    Start-Job -ScriptBlock $SB -ArgumentList $String
}
 
Get-Job | Wait-Job | Receive-Job
Get-Job | Remove-Job</syntaxhighlight>
 
Using .NET Runspaces:
<syntaxhighlight lang="powershell">$Strings = "Enjoy","Rosetta","Code"
 
$SB = {param($String)Write-Output $String}
$Pipeline.Dispose()
}
$Pool.Close()</syntaxhighlight>
 
=={{header|Prolog}}==
Line 1,478 ⟶ 1,759:
Create a separate thread for each word. Join the threads to make sure they complete before the program exits.
 
<syntaxhighlight lang="prolog">main :-
thread_create(say("Enjoy"),A,[]),
thread_create(say("Rosetta"),B,[]),
Line 1,489 ⟶ 1,770:
Delay is random_float,
sleep(Delay),
    writeln(Message).</syntaxhighlight>
 
=={{header|PureBasic}}==
<syntaxhighlight lang="purebasic">Global mutex = CreateMutex()
 
Procedure Printer(*str)
EndIf
 
FreeMutex(mutex)</syntaxhighlight>
 
=={{header|Python}}==
{{works with|Python|3.7}}
Using the asyncio module:
<syntaxhighlight lang="python">import asyncio
 
 
async def print_(string: str) -> None:
print(string)
 
 
async def main():
strings = ['Enjoy', 'Rosetta', 'Code']
coroutines = map(print_, strings)
await asyncio.gather(*coroutines)
 
 
if __name__ == '__main__':
    asyncio.run(main())</syntaxhighlight>
 
{{works with|Python|3.2}}
 
Using the [http://docs.python.org/release/3.2/library/concurrent.futures.html concurrent.futures library] (new in Python 3.2) and choosing processes over threads; the example will use up to as many processes as your machine has cores. This does not, however, guarantee any particular order of the sub-process results.
<syntaxhighlight lang="python">Python 3.2 (r32:88445, Feb 20 2011, 21:30:00) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from concurrent import futures
>>> with futures.ProcessPoolExecutor() as executor:
... _ = list(executor.map(print, 'Enjoy Rosetta Code'.split()))
...
Enjoy
Rosetta
Code
>>></syntaxhighlight>
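The same <code>Executor</code> interface also works with threads rather than processes. The following is a rough standalone-script sketch of that variant (not part of the original entry; the one-worker-per-word choice is illustrative):

<syntaxhighlight lang="python">from concurrent.futures import ThreadPoolExecutor

words = 'Enjoy Rosetta Code'.split()

# One worker thread per word; map() dispatches the print calls to the
# pool, and leaving the with-block waits for all of them to finish.
with ThreadPoolExecutor(max_workers=len(words)) as executor:
    list(executor.map(print, words))</syntaxhighlight>

As with the process pool, <code>map</code> gathers results in submission order, but the worker threads themselves are scheduled by the OS.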
 
{{works with|Python|2.5}}
 
<syntaxhighlight lang="python">import threading
import random
def echo(text):
print(text)
threading.Timer(random.random(), echo, ("Enjoy",)).start()
threading.Timer(random.random(), echo, ("Rosetta",)).start()
threading.Timer(random.random(), echo, ("Code",)).start()</syntaxhighlight>
 
Or, by using a for loop to start one thread per list entry, where our list is our set of source strings:
 
<syntaxhighlight lang="python">import threading
import random
 
def echo(text):
print(text)
 
for text in ["Enjoy", "Rosetta", "Code"]:
threading.Timer(random.random(), echo, (text,)).start()</syntaxhighlight>
 
=== threading.Thread ===
<syntaxhighlight lang="python">import random, sys, time
import threading
 
lock = threading.Lock()
 
def echo(s):
time.sleep(1e-2*random.random())
    with lock:  # serialize the two writes so output lines don't interleave
        sys.stdout.write(s)
        sys.stdout.write('\n')
 
for line in 'Enjoy Rosetta Code'.split():
    threading.Thread(target=echo, args=(line,)).start()</syntaxhighlight>
 
=== multiprocessing ===
 
{{works with|Python|2.6}}
<syntaxhighlight lang="python">from __future__ import print_function
from multiprocessing import Pool
 
def main():
p = Pool()
p.map(print, 'Enjoy Rosetta Code'.split())
 
if __name__=="__main__":
    main()</syntaxhighlight>
 
=== twisted ===
<syntaxhighlight lang="python">import random
from twisted.internet import reactor, task, defer
from twisted.python.util import println
 
delay = lambda: 1e-4*random.random()
d = defer.DeferredList([task.deferLater(reactor, delay(), println, line)
for line in 'Enjoy Rosetta Code'.split()])
d.addBoth(lambda _: reactor.stop())
reactor.run()</syntaxhighlight>
 
=== gevent ===
<syntaxhighlight lang="python">from __future__ import print_function
import random
import gevent
 
delay = lambda: 1e-4*random.random()
gevent.joinall([gevent.spawn_later(delay(), print, line)
for line in 'Enjoy Rosetta Code'.split()])</syntaxhighlight>
 
=={{header|Racket}}==
 
Threads provide a simple API for concurrent programming.
<syntaxhighlight lang="racket">
#lang racket
(for ([str '("Enjoy" "Rosetta" "Code")])
(thread (λ () (displayln str))))
</syntaxhighlight>
 
In addition to <code>thread</code>, which is implemented as green threads (useful for IO etc.), Racket has "futures" and "places", which are similar tools for using multiple OS cores.
 
=={{header|Raku}}==
(formerly Perl 6)
{{works with|Rakudo|2018.9}}
<syntaxhighlight lang="raku" line>my @words = <Enjoy Rosetta Code>;
@words.race(:batch(1)).map: { sleep rand; say $_ };</syntaxhighlight>
{{out}}
<pre>Code
Line 1,572 ⟶ 1,927:
 
=={{header|Raven}}==
<syntaxhighlight lang="raven">[ 'Enjoy' 'Rosetta' 'Code' ] as $words
 
thread talker
Line 1,581 ⟶ 1,936:
talker as a
talker as b
talker as c</syntaxhighlight>
 
=={{header|Rhope}}==
{{works with|Rhope|alpha 1}}
<syntaxhighlight lang="rhope">Main(0,0)
|:
Print["Enjoy"]
Print["Rosetta"]
Print["Code"]
:|</syntaxhighlight>
In Rhope, expressions with no shared dependencies run in parallel by default.
 
=={{header|Ruby}}==
<syntaxhighlight lang="ruby">%w{Enjoy Rosetta Code}.map do |x|
Thread.new do
sleep rand
Line 1,601 ⟶ 1,956:
end.each do |t|
t.join
end</syntaxhighlight>
 
=={{header|Rust}}==
{{libheader|rand}}
<syntaxhighlight lang="rust">extern crate rand; // not needed for recent versions
use std::thread;
use rand::thread_rng;
}
thread::sleep_ms(1000);
}</syntaxhighlight>
 
=={{header|Scala}}==
<syntaxhighlight lang="scala">import scala.actors.Futures
List("Enjoy", "Rosetta", "Code").map { x =>
Futures.future {
println(x)
}
}.foreach(_())</syntaxhighlight>
 
=={{header|Scheme}}==
<syntaxhighlight lang="scheme">(parallel-execute (lambda () (print "Enjoy"))
                  (lambda () (print "Rosetta"))
                  (lambda () (print "Code")))</syntaxhighlight>
 
If your implementation doesn't provide parallel-execute, it can be implemented with [https://srfi.schemers.org/srfi-18/srfi-18.html SRFI-18].
<syntaxhighlight lang="scheme">(import (srfi 18))
(define (parallel-execute . thunks)
(let ((threads (map make-thread thunks)))
(for-each thread-start! threads)
    (for-each thread-join! threads)))</syntaxhighlight>
 
=={{header|Sidef}}==
A very basic threading support is provided by the '''Block.fork()''' method:
<syntaxhighlight lang="ruby">var a = <Enjoy Rosetta Code>
 
a.map{|str|
say str
}.fork
}.map{|thr| thr.wait }</syntaxhighlight>
 
{{out}}
Line 1,661 ⟶ 2,016:
Rosetta
</pre>
 
=={{header|Slope}}==
<syntaxhighlight lang="slope">(coeval
(display "Enjoy")
(display "Rosetta")
(display "Code"))</syntaxhighlight>
 
=={{header|Swift}}==
Using Grand Central Dispatch with concurrent queues.
<syntaxhighlight lang="swift">import Foundation
 
let myList = ["Enjoy", "Rosetta", "Code"]
}
 
dispatch_main()</syntaxhighlight>
{{out}}
<pre>
Line 1,684 ⟶ 2,045:
=={{header|Standard ML}}==
Works with PolyML
<syntaxhighlight lang="standard ml">structure TTd = Thread.Thread ;
structure TTm = Thread.Mutex ;
 
end ;
</syntaxhighlight>
call
threadedStringList [ "Enjoy","Rosetta","Code" ];
 
=={{header|Tcl}}==
Assuming that "random" means that we really want the words to appear in random (rather then "undefined" or "arbitrary") order:
 
<syntaxhighlight lang="tcl">after [expr int(1000*rand())] {puts "Enjoy"}
after [expr int(1000*rand())] {puts "Rosetta"}
after [expr int(1000*rand())] {puts "Code"}</syntaxhighlight>
 
will execute each line after a randomly chosen number (0...1000) of milliseconds.
A step towards "undefined" would be to use <tt>after idle</tt>, which is Tcl for "do this whenever you get around to it". Thus:
 
<syntaxhighlight lang="tcl">after idle {puts "Enjoy"}
after idle {puts "Rosetta"}
after idle {puts "Code"}</syntaxhighlight>
 
(While no particular order is guaranteed by the Tcl spec, the current implementations will all execute these in the order in which they were added to the idle queue).
 
It's also possible to use threads for this. Here we do this with the built-in thread-pool support:
<syntaxhighlight lang="tcl">package require Thread
set pool [tpool::create -initcmd {
proc delayPrint msg {
tpool::release $pool
after 1200 ;# Give threads time to do their work
exit</syntaxhighlight>
 
=={{header|UnixPipes}}==
<syntaxhighlight lang="bash">(echo "Enjoy" & echo "Rosetta" & echo "Code" &)</syntaxhighlight>
 
=={{header|VBA}}==
Three tasks scheduled for the same time with OnTime. The last scheduled task gets executed first.
<syntaxhighlight lang="vb">Private Sub Enjoy()
Debug.Print "Enjoy"
End Sub
Application.OnTime when, "Rosetta"
Application.OnTime when, "Code"
End Sub</syntaxhighlight>
 
=={{header|Visual Basic .NET}}==
 
<syntaxhighlight lang="vbnet">Imports System.Threading
 
Module Module1
End Sub
 
End Module</syntaxhighlight>
===Alternative version===
[https://tio.run/##TY9PC8IwDMXv@xRhpw60oODFm@gEQUWs4Llbg6t0zWjrn3362bmBvssjCfnl5VlMS3LYdbu6IRc8iNYHrPmlciiVtrfkQOphEAabJRC10TU4q2Dl4YgvOEurqGbZdyYeBRyktmPZ6ySdNAYN35LLZVmxNLd3auFMHkOQsCaFKReN0YGlkGaTHsL8D9BrCMSFQWxYPM6P@A5svsgyWEaC9WSQX50OuNcW/7fzmDQCh8ZYJL0PL3XdBw Try It Online!]
<syntaxhighlight lang="vbnet">Imports System.Threading
Module Module1
Dim rnd As New Random()
End Sub)
End Sub
End Module</syntaxhighlight>
{{out}}
<pre>Rosetta
Enjoy
Code</pre>
 
=={{header|V (Vlang)}}==
===Porting of Go code===
<syntaxhighlight lang="go">import time
import rand
import rand.pcg32
import rand.seed
 
fn main() {
words := ['Enjoy', 'Rosetta', 'Code']
seed_u64 := u64(time.now().unix_time_milli())
q := chan string{}
for i, w in words {
go fn (q chan string, w string, seed_u64 u64) {
mut rng := pcg32.PCG32RNG{}
time_seed := seed.time_seed_array(2)
seed_arr := [u32(seed_u64), u32(seed_u64 >> 32), time_seed[0], time_seed[1]]
rng.seed(seed_arr)
time.sleep(time.Duration(rng.i64n(1_000_000_000)))
q <- w
}(q, w, seed_u64 + u64(i))
}
for _ in 0 .. words.len {
println(<-q)
}
}</syntaxhighlight>
 
===Vlang Idiomatic version===
<syntaxhighlight lang="go">import time
import rand
import rand.pcg32
import rand.seed
 
fn main() {
words := ['Enjoy', 'Rosetta', 'Code']
mut threads := []thread{} // mutable array to hold the id of the thread
for w in words {
threads << go fn (w string) { // record the thread
mut rng := pcg32.PCG32RNG{}
time_seed := seed.time_seed_array(4) // the time derived array to seed the random generator
rng.seed(time_seed)
time.sleep(time.Duration(rng.i64n(1_000_000_000)))
println(w)
}(w)
}
threads.wait() // join the thread waiting. wait() is defined for threads and arrays of threads
}</syntaxhighlight>
{{out}}<pre>Code
Rosetta
Enjoy
 
Rosetta
Enjoy
Code</pre>
 
=={{header|Wren}}==
<syntaxhighlight lang="wren">import "random" for Random
 
var words = ["Enjoy", "Rosetta", "Code"]
var rand = Random.new()
}
System.print()
}</syntaxhighlight>
 
{{out}}
Sample run:
<pre>
Enjoy
Code
Rosetta
 
Code
Enjoy
Rosetta
 
Rosetta
Enjoy
Code
</pre>
 
=={{header|XPL0}}==
Works on Raspberry Pi using XPL0 version 3.2. Processes actually execute
simultaneously, one per CPU core (beyond single-core RPi-1). Lock is
necessary to enable one line to finish printing before another line starts.
<syntaxhighlight lang="xpl0">int Key, Process;
[Key:= SharedMem(4); \allocate 4 bytes of memory common to all processes
Process:= Fork(2); \start 2 child processes
case Process of
0: [Lock(Key); Text(0, "Enjoy"); CrLf(0); Unlock(Key)]; \parent process
1: [Lock(Key); Text(0, "Rosetta"); CrLf(0); Unlock(Key)]; \child process
2: [Lock(Key); Text(0, "Code"); CrLf(0); Unlock(Key)] \child process
other [Lock(Key); Text(0, "Error"); CrLf(0); Unlock(Key)];
Join(Process); \wait for all child processes to finish
]</syntaxhighlight>
 
{{out}}
<pre>
Code
Enjoy
Rosetta
</pre>
 
=={{header|zkl}}==
<syntaxhighlight lang="zkl">fcn{println("Enjoy")}.launch(); // thread
fcn{println("Rosetta")}.strand(); // co-op thread
fcn{println("Code")}.future(); // another thread type</syntaxhighlight>
{{out}}
<pre>
Rosetta
Code
Enjoy
</pre>
 
{{omit from|AWK}}
{{omit from|bc}}
{{omit from|Brlcad}}
{{omit from|dc}}
{{omit from|GUISS}}
{{omit from|Lilypond}}
{{omit from|Maxima}}
{{Omit From|Metafont}}
{{omit from|Openscad}}
{{omit from|TI-83 BASIC|Does not have concurrency or background processes.}}
{{omit from|TI-89 BASIC|Does not have concurrency or background processes.}}
{{omit from|TPP}}
{{omit from|Vim Script}}
{{omit from|ZX Spectrum Basic}}
{{omit from|Axe}}