Concurrent computing: Difference between revisions

From Rosetta Code

Revision as of 17:40, 30 October 2016

Task

Using either native language concurrency syntax or freely available libraries, write a program to display the strings "Enjoy" "Rosetta" "Code", one string per line, in random order.

Concurrency syntax must use threads, tasks, co-routines, or whatever concurrency is called in your language.

Ada

<lang ada>with Ada.Text_IO, Ada.Numerics.Float_Random;

procedure Concurrent_Hello is

  type Messages is (Enjoy, Rosetta, Code);
  task type Writer (Message : Messages);
  task body Writer is
     Seed : Ada.Numerics.Float_Random.Generator;
  begin
     Ada.Numerics.Float_Random.Reset (Seed); -- time-dependent, see ARM A.5.2
     delay Duration (Ada.Numerics.Float_Random.Random (Seed));
     Ada.Text_IO.Put_Line (Messages'Image(Message));
  end Writer;
  Taks: array(Messages) of access Writer -- 3 Writer tasks will immediately run
    := (new Writer(Enjoy), new Writer(Rosetta), new Writer(Code));

begin

  null; -- the "environment task" doesn't need to do anything

end Concurrent_Hello;</lang>

Note that the random generator object is local to each task; it cannot be accessed concurrently without mutual exclusion. In order to give the local generators different initial states, Reset is called with a time-dependent seed (see ARM A.5.2).

ALGOL 68

<lang algol68>main:(

 PROC echo = (STRING string)VOID:
     printf(($gl$,string));
 PAR(
   echo("Enjoy"),
   echo("Rosetta"),
   echo("Code")
 )

)</lang>

BBC BASIC

The BBC BASIC interpreter is single-threaded so the only way of achieving 'concurrency' (short of using assembler code) is to use timer events: <lang bbcbasic> INSTALL @lib$+"TIMERLIB"

     tID1% = FN_ontimer(100, PROCtask1, 1)
     tID2% = FN_ontimer(100, PROCtask2, 1)
     tID3% = FN_ontimer(100, PROCtask3, 1)
     
     ON ERROR PRINT REPORT$ : PROCcleanup : END
     ON CLOSE PROCcleanup : QUIT
     
     REPEAT
       WAIT 0
     UNTIL FALSE
     END
     
     DEF PROCtask1
     PRINT "Enjoy"
     ENDPROC
     
     DEF PROCtask2
     PRINT "Rosetta"
     ENDPROC
     
     DEF PROCtask3
     PRINT "Code"
     ENDPROC
     
     DEF PROCcleanup
     PROC_killtimer(tID1%)
     PROC_killtimer(tID2%)
     PROC_killtimer(tID3%)
     ENDPROC</lang>

C

Works with: POSIX
Library: pthread

<lang c>#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

pthread_mutex_t condm = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int bang = 0;

#define WAITBANG() do { \
  pthread_mutex_lock(&condm); \
  while( bang == 0 ) \
  { \
     pthread_cond_wait(&cond, &condm); \
  } \
  pthread_mutex_unlock(&condm); } while(0)

void *t_enjoy(void *p) {

 WAITBANG();
 printf("Enjoy\n");
 pthread_exit(0);

}

void *t_rosetta(void *p) {

 WAITBANG();
 printf("Rosetta\n");
 pthread_exit(0);

}

void *t_code(void *p) {

 WAITBANG();
 printf("Code\n");
 pthread_exit(0);

}

typedef void *(*threadfunc)(void *);

int main() {

  int i;
  pthread_t a[3];
  threadfunc p[3] = {t_enjoy, t_rosetta, t_code};
  
  for(i=0;i<3;i++)
  {
    pthread_create(&a[i], NULL, p[i], NULL);
  }
  sleep(1);
  bang = 1;
  pthread_cond_broadcast(&cond);
  for(i=0;i<3;i++)
  {
    pthread_join(a[i], NULL);
  }

}</lang>

Note: since the threads are created one after another, their code is likely to execute in the order of creation. To make this less evident, I've added a "bang" using a condition variable: each thread really executes its code only once the starting-gun bang is heard. Nonetheless, I still obtain the creation order (Enjoy, Rosetta, Code), perhaps because of the order in which the lock is acquired. The only way to obtain random ordering seems to be to add a random wait in each thread (or to wait for a special CPU-load condition).
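A minimal sketch of that random-delay variant (not part of the original entry): it assumes POSIX usleep and picks each delay up front in main, since rand() is not guaranteed to be thread-safe.

<lang c>#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <pthread.h>

struct job { const char *msg; unsigned delay_ms; };

/* Sleep for the pre-chosen random delay, then print the message. */
static void *delayed_print(void *arg)
{
  struct job *j = arg;
  usleep(j->delay_ms * 1000);
  printf("%s\n", j->msg);
  return NULL;
}

int main(void)
{
  const char *words[3] = { "Enjoy", "Rosetta", "Code" };
  struct job jobs[3];
  pthread_t t[3];
  int i;

  srand(time(NULL));
  for (i = 0; i < 3; i++) {
    jobs[i].msg = words[i];
    jobs[i].delay_ms = rand() % 1000;  /* delay chosen before the thread starts */
    pthread_create(&t[i], NULL, delayed_print, &jobs[i]);
  }
  for (i = 0; i < 3; i++)
    pthread_join(t[i], NULL);
  return 0;
}</lang>

As with the version above, compile and link with the pthread library (e.g. -pthread).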

OpenMP

Compile with gcc -std=c99 -fopenmp: <lang C>#include <stdio.h>
#include <omp.h>

int main()
{
  const char *str[] = { "Enjoy", "Rosetta", "Code" };
  #pragma omp parallel for num_threads(3)
  for (int i = 0; i < 3; i++)
    printf("%s\n", str[i]);
  return 0;
}</lang>

C++

Works with: C++11

The following example compiles with GCC 4.7.

g++ -std=c++11 -D_GLIBCXX_USE_NANOSLEEP -o concomp concomp.cpp

<lang cpp>#include <thread>
#include <iostream>
#include <vector>
#include <random>
#include <chrono>

int main() {

 std::random_device rd;
 std::mt19937 eng(rd()); // mt19937 generator with a hardware random seed.
 std::uniform_int_distribution<> dist(1,1000);
 std::vector<std::thread> threads;
 for(const auto& str: {"Enjoy\n", "Rosetta\n", "Code\n"}) {
   // between 1 and 1000ms per our distribution
   std::chrono::milliseconds duration(dist(eng)); 
   threads.emplace_back([str, duration](){                                                                    
     std::this_thread::sleep_for(duration);
     std::cout << str;
   });
 }
 for(auto& t: threads) t.join(); 
 return 0;

}</lang>

Output:

Enjoy
Code
Rosetta

<lang cpp>#include <iostream>
#include <ppl.h> // MSVC++

void a(void) { std::cout << "Eat\n"; }
void b(void) { std::cout << "At\n"; }
void c(void) { std::cout << "Joe's\n"; }

int main() {

   // function pointers
   Concurrency::parallel_invoke(&a, &b, &c);
   // C++11 lambda functions
   Concurrency::parallel_invoke(
       []{ std::cout << "Enjoy\n";   },
       []{ std::cout << "Rosetta\n"; },
       []{ std::cout << "Code\n";    }
   );
   return 0;

}</lang>

Output:

Joe's
Eat
At
Enjoy
Code
Rosetta

C#

<lang csharp>static Random tRand = new Random();

static void Main(string[] args)
{
    Thread t = new Thread(new ParameterizedThreadStart(WriteText));
    t.Start("Enjoy");

    t = new Thread(new ParameterizedThreadStart(WriteText));
    t.Start("Rosetta");

    t = new Thread(new ParameterizedThreadStart(WriteText));
    t.Start("Code");

    Console.ReadLine();
}

private static void WriteText(object p)
{
    Thread.Sleep(tRand.Next(1000, 4000));
    Console.WriteLine(p);
}</lang>

An example result:

Enjoy
Code
Rosetta

Clojure

A simple way to obtain concurrency is using the future function, which evaluates its body on a separate thread. <lang clojure>(doseq [text ["Enjoy" "Rosetta" "Code"]]

 (future (println text)))</lang>

Using the new (2013) core.async library, "go blocks" can execute asynchronously, sharing threads from a pool. This works even in ClojureScript (the JavaScript target of Clojure) on a single thread. The timeout call is there just to shuffle things up: note this delay doesn't block a thread. <lang clojure>(require '[clojure.core.async :refer [go <! timeout]])

(doseq [text ["Enjoy" "Rosetta" "Code"]]

 (go
   (<! (timeout (rand-int 1000))) ; wait a random fraction of a second,
   (println text)))</lang>

CoffeeScript

Using Bash (or an equivalent shell)

Works with: Node.js
Works with: Bash

JavaScript, which CoffeeScript compiles to, is single-threaded. This approach launches multiple processes to achieve concurrency on Node.js:

<lang coffeescript>{ exec } = require 'child_process'

for word in [ 'Enjoy', 'Rosetta', 'Code' ]

   exec "echo #{word}", (err, stdout) ->
       console.log stdout</lang>

Using Node.js

Works with: Node.js

As stated above, CoffeeScript is single-threaded. This approach launches multiple Node.js processes to achieve concurrency.

<lang coffeescript># The "master" file.

{ fork } = require 'child_process'
path = require 'path'
child_name = path.join __dirname, 'child.coffee'
words = [ 'Enjoy', 'Rosetta', 'Code' ]

fork child_name, [ word ] for word in words</lang>

<lang coffeescript># child.coffee

console.log process.argv[ 2 ]</lang>

Common Lisp

Concurrency and threads are not part of the Common Lisp standard. However, most implementations provide some interface for concurrency. Bordeaux Threads, used here, provides a compatibility layer for many implementations. (Binding the out parameter to *standard-output* before the threads are created is needed because each thread gets its own binding for *standard-output*.)

<lang lisp>(defun concurrency-example (&optional (out *standard-output*))

 (let ((lock (bordeaux-threads:make-lock)))
   (flet ((writer (string)
            #'(lambda () 
                (bordeaux-threads:acquire-lock lock t)
                (write-line string out)
                (bordeaux-threads:release-lock lock))))
     (bordeaux-threads:make-thread (writer "Enjoy"))
     (bordeaux-threads:make-thread (writer "Rosetta"))
     (bordeaux-threads:make-thread (writer "Code")))))</lang>

D

<lang d>import std.stdio, std.random, std.parallelism, core.thread, core.time;

void main() {

   foreach (s; ["Enjoy", "Rosetta", "Code"].parallel(1)) {
       Thread.sleep(uniform(0, 1000).dur!"msecs");
       s.writeln;
   }

}</lang>

Alternative version

Library: Tango

<lang d>import tango.core.Thread;
import tango.io.Console;
import tango.math.Random;

void main() {

   (new Thread( { Thread.sleep(Random.shared.next(1000) / 1000.0); Cout("Enjoy").newline; } )).start;
   (new Thread( { Thread.sleep(Random.shared.next(1000) / 1000.0); Cout("Rosetta").newline; } )).start;
   (new Thread( { Thread.sleep(Random.shared.next(1000) / 1000.0); Cout("Code").newline; } )).start;

}</lang>

Delphi

<lang Delphi>program ConcurrentComputing;

{$APPTYPE CONSOLE}

uses SysUtils, Classes, Windows;

type

 TRandomThread = class(TThread)
 private
   FString: string;
 protected
   procedure Execute; override;
 public
   constructor Create(const aString: string); overload;
 end;

constructor TRandomThread.Create(const aString: string);
begin

 inherited Create(False);
 FreeOnTerminate := True;
 FString := aString;

end;

procedure TRandomThread.Execute;
begin

 Sleep(Random(5) * 100);
 Writeln(FString);

end;

var

 lThreadArray: Array[0..2] of THandle;

begin

 Randomize;
 lThreadArray[0] := TRandomThread.Create('Enjoy').Handle;
 lThreadArray[1] := TRandomThread.Create('Rosetta').Handle;
 lThreadArray[2] := TRandomThread.Create('Code').Handle;
 WaitForMultipleObjects(Length(lThreadArray), @lThreadArray, True, INFINITE);

end.</lang>

dodo0

<lang dodo0>fun parprint -> text, return (

  fork() -> return, throw
     println(text, return)
  | x
  return()

) | parprint

parprint("Enjoy") -> parprint("Rosetta") -> parprint("Code") ->

exit()</lang>

E

<lang e>def base := timer.now()
for string in ["Enjoy", "Rosetta", "Code"] {

   timer <- whenPast(base + entropy.nextInt(1000), fn { println(string) })

}</lang>

Nondeterminism from preemptive concurrency rather than a random number generator:

<lang e>def seedVat := <import:org.erights.e.elang.interp.seedVatAuthor>(<unsafe>)
for string in ["Enjoy", "Rosetta", "Code"] {

  seedVat <- (`
      fn string {
          println(string)
          currentVat <- orderlyShutdown("done")
      }
  `) <- get(0) <- (string)

}</lang>

EchoLisp

<lang scheme> (lib 'tasks) ;; use the tasks library

(define (tprint line ) ;; task definition
    (writeln _TASK line)
    #f )

(for-each task-run ;; run three // tasks

     (map (curry make-task tprint) '(Enjoy Rosetta code )))
  →
  #task:id:66:running     Rosetta    
  #task:id:67:running     code    
  #task:id:65:running     Enjoy 

</lang>

Elixir

<lang Elixir> defmodule ConcurrentComputing do

 def print(xs) do
   Enum.map(xs, fn x ->
     spawn(fn -> IO.puts x end)
   end)
 end

end

ConcurrentComputing.print ["Enjoy", "Rosetta", "Code"] </lang>

Erlang

hw.erl <lang erlang>-module(hw).
-export([start/0]).

start() ->

  [ spawn(fun() ->  say(self(), X) end) || X <- ['Enjoy', 'Rosetta', 'Code'] ],
  wait(2),
  ok.

say(Pid,Str) ->

  io:fwrite("~s~n",[Str]),
  Pid ! done.

wait(N) ->

  receive
      done -> case N of
          0 -> 0;
          _N -> wait(N-1)
      end
  end.</lang>

running it <lang erlang>|erlc hw.erl
|erl -run hw start -run init stop -noshell</lang>

Euphoria

<lang euphoria>procedure echo(sequence s)

   puts(1,s)
   puts(1,'\n')

end procedure

atom task1,task2,task3

task1 = task_create(routine_id("echo"),{"Enjoy"})
task_schedule(task1,1)

task2 = task_create(routine_id("echo"),{"Rosetta"})
task_schedule(task2,1)

task3 = task_create(routine_id("echo"),{"Code"})
task_schedule(task3,1)

task_yield()</lang>

Output:

Code
Rosetta
Enjoy

F#

We define a parallel version of Seq.iter by using asynchronous workflows: <lang fsharp>module Seq =

   let piter f xs =
       seq { for x in xs -> async { f x } }
       |> Async.Parallel
       |> Async.RunSynchronously
       |> ignore

let main() = Seq.piter

               (System.Console.WriteLine:string->unit)
               ["Enjoy"; "Rosetta"; "Code";]

main()</lang>

With version 4 of the .NET framework and F# PowerPack 2.0 installed, it is possible to use the predefined PSeq.iter instead.

Factor

<lang factor>USE: concurrency.combinators

{ "Enjoy" "Rosetta" "Code" } [ print ] parallel-each</lang>

Forth

Works with: gforth version 0.6.2

Many Forth implementations come with a simple cooperative task scheduler. Typically each task blocks on I/O or explicit use of the pause word. There is also a class of variables called "user" variables which contain task-specific data, such as the current base and stack pointers.

<lang forth>require tasker.fs
require random.fs

: task ( str len -- )
 64 NewTask 2 swap pass
 ( str len -- )
 10 0 do
   100 random ms
   pause 2dup cr type
 loop 2drop ;
: main
 s" Enjoy"   task
 s" Rosetta" task
 s" Code"    task
 begin pause single-tasking? until ;

main</lang>

Fortran

Fortran doesn't have threads, but there are several compilers that support OpenMP, e.g. gfortran and Intel. The following code has been tested with the Intel 11.1 compiler on WinXP.

<lang Fortran>program concurrency

 implicit none
 character(len=*), parameter :: str1 = 'Enjoy'
 character(len=*), parameter :: str2 = 'Rosetta'
 character(len=*), parameter :: str3 = 'Code'
 integer                     :: i
 real                        :: h
 real, parameter             :: one_third = 1.0e0/3
 real, parameter             :: two_thirds = 2.0e0/3
 interface
    integer function omp_get_thread_num
    end function omp_get_thread_num
 end interface
 interface
    integer function omp_get_num_threads
    end function omp_get_num_threads
 end interface
 ! Use OpenMP to create a team of threads
 !$omp parallel do private(i,h)
 do i=1,20
    ! First time through the master thread output the number of threads
    ! in the team
    if (omp_get_thread_num() == 0 .and. i == 1) then
       write(*,'(a,i0,a)') 'Using ',omp_get_num_threads(),' threads'
    end if
    ! Randomize the order
    call random_number(h)
    !$omp critical
    if (h < one_third) then
       write(*,'(a)') str1
    else if (h < two_thirds) then
       write(*,'(a)') str2
    else
       write(*,'(a)') str3
    end if
    !$omp end critical
 end do
 !$omp end parallel do

end program concurrency</lang>

FreeBASIC

<lang freebasic>' FB 1.05.0 Win64
' Compiled with -mt switch (to use threadsafe runtime)
' The 'ThreadCall' functionality in FB is based internally on LibFFi (see [https://github.com/libffi/libffi/blob/master/LICENSE] for license)

Sub thread1()

 Print "Enjoy"

End Sub

Sub thread2()

 Print "Rosetta"

End Sub

Sub thread3()

 Print "Code"

End Sub

Print "Press any key to print next batch of 3 strings or ESC to quit" Print

Do

 Dim t1 As Any Ptr = ThreadCall thread1
 Dim t2 As Any Ptr = ThreadCall thread2
 Dim t3 As Any Ptr = ThreadCall thread3
 ThreadWait t1
 ThreadWait t2
 ThreadWait t3
 Print
 Sleep

Loop While Inkey <> Chr(27)</lang>

Sample output

Output:
Press any key to print next batch of 3 strings or ESC to quit

Enjoy
Code
Rosetta

Enjoy
Rosetta
Code

Go

Channel

Simplest and most direct solution: Start three goroutines, give each one a word. Each sleeps, then returns the word on a channel. The main goroutine prints words as they return. The print loop represents a checkpoint--main doesn't exit until all words have returned and been printed. <lang go>package main

import (

   "fmt"
   "math/rand"
   "time"

)

func main() {

   words := []string{"Enjoy", "Rosetta", "Code"}
   rand.Seed(time.Now().UnixNano())
   q := make(chan string)
   for _, w := range words {
       go func(w string) {
           time.Sleep(time.Duration(rand.Int63n(1e9)))
           q <- w
       }(w)
   }
   for i := 0; i < len(words); i++ {
       fmt.Println(<-q)
   }

}</lang>

Afterfunc

time.AfterFunc combines the sleep and the goroutine start. log.Println serializes output in case goroutines attempt to print concurrently. sync.WaitGroup is used directly as a checkpoint. <lang go>package main

import (

   "log"
   "math/rand"
   "os"
   "sync"
   "time"

)

func main() {

   words := []string{"Enjoy", "Rosetta", "Code"}
   rand.Seed(time.Now().UnixNano())
   l := log.New(os.Stdout, "", 0)
   var q sync.WaitGroup
   q.Add(len(words))
   for _, w := range words {
       w := w
       time.AfterFunc(time.Duration(rand.Int63n(1e9)), func() {
           l.Println(w)
           q.Done()
       })
   }
   q.Wait()

}</lang>

Select

This solution might stretch the intent of the task a bit. It is concurrent but not parallel. Also, it doesn't sleep and doesn't call the random number generator explicitly. It works because the select statement is specified to make a "pseudo-random fair choice" among multiple channel operations. <lang go>package main

import "fmt"

func main() {

   w1 := make(chan bool, 1)
   w2 := make(chan bool, 1)
   w3 := make(chan bool, 1)
   for i := 0; i < 3; i++ {
       w1 <- true
       w2 <- true
       w3 <- true
       fmt.Println()
       for i := 0; i < 3; i++ {
           select {
           case <-w1:
               fmt.Println("Enjoy")
           case <-w2:
               fmt.Println("Rosetta")
           case <-w3:
               fmt.Println("Code")
           }
       }
   }

}</lang>

Output:

Code
Rosetta
Enjoy

Enjoy
Rosetta
Code

Rosetta
Enjoy
Code

Groovy

<lang groovy>'Enjoy Rosetta Code'.tokenize().collect { w ->

   Thread.start {
       Thread.sleep(1000 * Math.random() as int)
       println w
   }

}.each { it.join() }</lang>

Haskell

Note how the map treats the list of processes just like any other data.

<lang haskell>import Control.Concurrent

main = mapM_ forkIO [process1, process2, process3] where

 process1 = putStrLn "Enjoy" 
 process2 = putStrLn "Rosetta"
 process3 = putStrLn "Code"</lang>

A more elaborated example using MVars and a random running time per thread.

<lang haskell>import Control.Concurrent
import System.Random

concurrent :: IO ()
concurrent = do

   var <- newMVar [] -- use an MVar to collect the results of each thread
   mapM_ (forkIO . task var) ["Enjoy", "Rosetta", "Code"] -- run 3 threads
   putStrLn "Press Return to show the results." -- while we wait for the user,
   -- the threads run
   _ <- getLine
   takeMVar var >>= mapM_ putStrLn -- read the results and show them on screen
   where
       -- "task" is a thread
       task v s = do
           randomRIO (1,10) >>= \r -> threadDelay (r * 100000) -- wait a while
           val <- takeMVar v -- read the MVar and block other threads from reading it
           -- until we write another value to it
           putMVar v (s : val) -- append a text string to the MVar and block other
           -- threads from writing to it unless it is read first</lang>

Icon and Unicon

The following code uses features exclusive to Unicon <lang unicon>procedure main()

  L:=[ thread write("Enjoy"), thread write("Rosetta"), thread write("Code") ]
  every wait(!L)

end</lang>

J

Example:

<lang j>   smoutput&>({~?~@#);:'Enjoy Rosetta Code'
Rosetta
Code
Enjoy</lang>

NOTES AND CAUTIONS:

1) While J's syntax and semantics are highly parallel, it is a deterministic sort of parallelism (analogous to the design of modern GPUs) and not the stochastic parallelism implied in this task specification (which is usually obtained by timeslicing threads of control). The randomness implemented here is orthogonal to the parallelism in the display (and you could remove smoutput& without altering the appearance, in this trivial example).

2) The current release of J (and the past implementations) do not implement hardware based concurrency. This is partially an economic issue (since all of the current and past language implementations have been distributed for free, with terms which allow free distribution), and partially a hardware maturity issue (historically, most CPU multi-core development has been optimized for stochastic parallelism with minimal cheap support for large scale deterministic parallelism and GPUs have not been capable of supporting the kind of generality needed by J).

This state of affairs is likely to change, eventually (most likely this will be after greater than factor of 2 speedups from hardware parallelism are available for the J users in cases which are common and important enough to support the implementation). But, for now, J's parallelism is entirely conceptual.

Java

Works with: Java version 1.5+

Uses CyclicBarrier to force all threads to wait until they're at the same point before executing the println, increasing the odds they'll print in a different order (otherwise, while they may be executing in parallel, the threads are started sequentially and, with such a short run-time, will usually output sequentially as well).

<lang java5>import java.util.concurrent.CyclicBarrier;

public class Threads {

 public static class DelayedMessagePrinter implements Runnable
 {
   private CyclicBarrier barrier;
   private String msg;
   
   public DelayedMessagePrinter(CyclicBarrier barrier, String msg)
   {
     this.barrier = barrier;
     this.msg = msg;
   }
   
   public void run()
   {
     try
     {  barrier.await();  }
     catch (Exception e)
     {  }
     System.out.println(msg);
   }
 }
 
 public static void main(String[] args)
 {
   CyclicBarrier barrier = new CyclicBarrier(3);
   new Thread(new DelayedMessagePrinter(barrier, "Enjoy")).start();
   new Thread(new DelayedMessagePrinter(barrier, "Rosetta")).start();
   new Thread(new DelayedMessagePrinter(barrier, "Code")).start();
 }

}</lang>

JavaScript

JavaScript now enjoys access to a concurrency library thanks to Web Workers. The Web Workers specification defines an API for spawning background scripts. This first code is the background script and should be in the concurrent_worker.js file. <lang javascript>self.addEventListener('message', function (event) {

 self.postMessage(event.data);
 self.close();

}, false);</lang>

This second block creates the workers, sends them a message and creates an event listener to handle the response. <lang javascript>var words = ["Enjoy", "Rosetta", "Code"];
var workers = [];

for (var i = 0; i < words.length; i++) {

 workers[i] = new Worker("concurrent_worker.js");
 workers[i].addEventListener('message', function (event) {
   console.log(event.data);
 }, false);
 workers[i].postMessage(words[i]);

}</lang>

LFE

<lang lisp>
;;; This is a straight port of the Erlang version.
;;; You can run this under the LFE REPL as follows:
;;;   (slurp "concurrent-computing.lfe")
;;;   (start)

(defmodule concurrent-computing

 (export (start 0)))

(defun start ()

 (lc ((<- word '("Enjoy" "Rosetta" "Code")))
   (spawn (lambda () (say (self) word))))
 (wait 2)
 'ok)

(defun say (pid word)

 (lfe_io:format "~p~n" (list word))
 (! pid 'done))

(defun wait (n)

 (receive
   ('done (case n
            (0 0)
            (_n (wait (- n 1)))))))

</lang>

Logtalk

Works when using SWI-Prolog, XSB, or YAP as the backend compiler. <lang logtalk>:- object(concurrency).

   :- initialization(output).
   output :-
       threaded((
           write('Enjoy'),
           write('Rosetta'),
           write('Code')
       )).
:- end_object.</lang>

Lua

<lang lua>co = {}
co[1] = coroutine.create( function() print "Enjoy" end )
co[2] = coroutine.create( function() print "Rosetta" end )
co[3] = coroutine.create( function() print "Code" end )

math.randomseed( os.time() )
h = {}
i = 0
repeat

   j = math.random(3)
   if h[j] == nil then
      coroutine.resume( co[j] )
      h[j] = true
      i = i + 1
   end

until i == 3</lang>


Mathematica / Wolfram Language

Parallelization requires Mathematica 7 or later <lang Mathematica>ParallelDo[

   Pause[RandomReal[]];
   Print[s],
   {s, {"Enjoy", "Rosetta", "Code"}}

]</lang>

Mercury

<lang>:- module concurrent_computing.

:- interface.
:- import_module io.
:- pred main(io::di, io::uo) is cc_multi.

:- implementation.
:- import_module thread.

main(!IO) :-

  spawn(io.print_cc("Enjoy\n"), !IO),
  spawn(io.print_cc("Rosetta\n"), !IO),
  spawn(io.print_cc("Code\n"), !IO).</lang>

Nim

Compile with nim --threads:on c concurrent: <lang nim>const str = ["Enjoy", "Rosetta", "Code"]

var thr: array[3, TThread[int32]]

proc f(i) {.thread.} =

 echo str[i]

for i in 0..thr.high:

 createThread(thr[i], f, int32(i))

joinThreads(thr)</lang>

OpenMP

Compile with nim --passC:"-fopenmp" --passL:"-fopenmp" c concurrent: <lang nim>const str = ["Enjoy", "Rosetta", "Code"]

for i in 0||2:

 echo str[i]</lang>

Thread Pools

Compile with nim --threads:on c concurrent: <lang nim>import threadpool const str = ["Enjoy", "Rosetta", "Code"]

proc f(i) {.thread.} =

 echo str[i]

for i in 0..str.high:

 spawn f(i)

sync()</lang>

Objeck

<lang objeck> bundle Default {

 class MyThread from Thread {
   New(name : String) {
     Parent(name);
   }
   method : public : Run(param : Base) ~ Nil {
     string := param->As(String);
     string->PrintLine();
   }
 }
 class Concurrent {
   New() {
   }
   function : Main(args : System.String[]) ~ Nil {
     t0 := MyThread->New("t0");
     t1 := MyThread->New("t1");
     t2 := MyThread->New("t2");
     t0->Execute("Enjoy"->As(Base));
     t1->Execute("Rosetta"->As(Base));
     t2->Execute("Code"->As(Base));
   }
 }

} </lang>

OCaml

<lang ocaml>#directory "+threads"
#load "unix.cma"
#load "threads.cma"

let sleepy_print msg =

 Unix.sleep (Random.int 4);
 print_endline msg

let threads =

 List.map (Thread.create sleepy_print) ["Enjoy"; "Rosetta"; "Code"]

let () =

 Random.self_init ();
 List.iter (Thread.join) threads</lang>

Oforth

Oforth uses tasks to implement concurrent computing. A task is scheduled using #& on a function, method, block, ...

<lang Oforth>#[ "Enjoy" println ] &

  1. [ "Rosetta" println ] &
  2. [ "Code" println ] &</lang>

mapParallel method can be used to map a runnable on each element of a collection and returns a collection of results. Here, we println the string and return string size.

<lang Oforth>[ "Enjoy", "Rosetta", "Code" ] mapParallel(#[ dup . size ])</lang>

ooRexx

<lang ooRexx>-- this will launch 3 threads, with each thread given a message to print out.
-- I've added a stoplight to make each thread wait until given a go signal,
-- plus some sleeps to give the threads a chance to randomize the execution
-- order a little.
launcher = .launcher~new
launcher~launch

::class launcher
-- the launcher method.  Guarded is the default, but let's make this
-- explicit here
::method launch guarded
  runner1 = .runner~new(self, "Enjoy")
  runner2 = .runner~new(self, "Rosetta")
  runner3 = .runner~new(self, "Code")
  -- let's give the threads a chance to settle in to the
  -- starting line
  call syssleep 1
  guard off   -- release the launcher lock.  This is the starter's gun

-- this is a guarded method that the runners will call.  They
-- will block until the launch method releases the object guard
::method block guarded

::class runner
::method init
  use arg launcher, text
  reply  -- this creates the new thread
  call syssleep .5  -- try to mix things up by sleeping
  launcher~block    -- wait for the go signal
  call syssleep .5  -- add another sleep here
  say text

</lang>

Oz

The randomness comes from the unpredictability of thread scheduling (this is how I understand this exercise).

<lang oz>for Msg in ["Enjoy" "Rosetta" "Code"] do

  thread
     {System.showInfo Msg}
  end

end </lang>

PARI/GP

Here is a GP implementation using the bill-mt branch: <lang parigp>inline(func); func(n)=print(["Enjoy","Rosetta","Code"][n]); parapply(func,[1..3]);</lang>

This is a PARI implementation which uses fork() internally. Note that the C solutions can be used instead if desired; this program demonstrates the native PARI capabilities instead.

For serious concurrency, see Appendix B of the User's Guide to the PARI Library which discusses a solution using tls on pthreads. (There are nontrivial issues with using PARI in this environment, do not attempt to blindly implement a C solution.) <lang C>void foo() {

 if (pari_daemon()) {
   // Original
   if (pari_daemon()) {
     // Original
     pari_printf("Enjoy\n");
   } else {
     // Daemon #2
     pari_printf("Code\n");
   }
 } else {
   // Daemon #1
   pari_printf("Rosetta\n");
 }

}</lang>

See also Bill Allombert's slides on parallel programming in GP.

Perl

<lang perl>use threads;
use Time::HiRes qw(sleep);

$_->join for map {

   threads->create(sub {
       sleep rand;
       print shift, "\n";
   }, $_)

} qw(Enjoy Rosetta Code);</lang>

Or using coroutines provided by

Library: Coro

<lang perl>use feature qw( say );
use Coro;
use Coro::Timer qw( sleep );

$_->join for map {

   async {
       sleep rand; 
       say @_;
   } $_

} qw( Enjoy Rosetta Code ); </lang>

Perl 6

Works with: pugs

Hyper-operators are unordered: <lang perl6>my @words = <Enjoy Rosetta Code>;
@words».say</lang>
Output: <lang>Rosetta
Code
Enjoy</lang>

Phix

Without the sleep it is almost always Enjoy Rosetta Code, because create_thread() is more costly than echo(), as the former has to create a new call stack etc.
The lock prevents the displays from mangling each other. <lang Phix>procedure echo(string s)

   sleep(rand(100)/100)
   enter_cs()
   puts(1,s)
   puts(1,'\n')
   leave_cs()

end procedure

constant threads = {create_thread(routine_id("echo"),{"Enjoy"}),

                   create_thread(routine_id("echo"),{"Rosetta"}),
                   create_thread(routine_id("echo"),{"Code"})}

wait_thread(threads)
puts(1,"done")
{} = wait_key()</lang>

PicoLisp

Using background tasks

<lang PicoLisp>(for (N . Str) '("Enjoy" "Rosetta" "Code")

  (task (- N) (rand 1000 4000)              # Random start time 1 .. 4 sec
     Str Str                                # Closure with string value
     (println Str)                          # Task body: Print the string
     (task @) ) )                           # and stop the task</lang>

Using child processes

<lang PicoLisp>(for Str '("Enjoy" "Rosetta" "Code")

  (let N (rand 1000 4000)                   # Randomize
     (unless (fork)                         # Create child process
        (wait N)                            # Wait 1 .. 4 sec
        (println Str)                       # Print string
        (bye) ) ) )                         # Terminate child process</lang>

Pike

Using POSIX threads: <lang Pike>int main() {
    // Start threads and wait for them to finish
    ({ Thread.Thread(write, "Enjoy\n"),
       Thread.Thread(write, "Rosetta\n"),
       Thread.Thread(write, "Code\n") })->wait();

    // Exit program
    exit(0);
}</lang>

Output:

Enjoy
Rosetta
Code

Using Pike's backend: <lang Pike>int main(int argc, array argv) {

   call_out(write, random(1.0), "Enjoy\n");
   call_out(write, random(1.0), "Rosetta\n");
   call_out(write, random(1.0), "Code\n");
   call_out(exit, 1, 0);
   return -1; // return -1 starts the backend which makes Pike run until exit() is called.

}</lang>

Output:

Rosetta
Code
Enjoy

PowerShell

Using Background Jobs: <lang Powershell>$Strings = "Enjoy","Rosetta","Code"

$SB = {param($String)Write-Output $String}

foreach($String in $Strings) {

   Start-Job -ScriptBlock $SB -ArgumentList $String | Out-Null    
   }

Get-Job | Wait-Job | Receive-Job
Get-Job | Remove-Job</lang>

Using .NET Runspaces: <lang Powershell>$Strings = "Enjoy","Rosetta","Code"

$SB = {param($String)Write-Output $String}

$Pool = [RunspaceFactory]::CreateRunspacePool(1, 3)
$Pool.ApartmentState = "STA"
$Pool.Open()
foreach ($String in $Strings) {

   $Pipeline  = [System.Management.Automation.PowerShell]::create()
   $Pipeline.RunspacePool = $Pool
   [void]$Pipeline.AddScript($SB).AddArgument($String)
   $AsyncHandle = $Pipeline.BeginInvoke()
   $Pipeline.EndInvoke($AsyncHandle)
   $Pipeline.Dispose()
   }

$Pool.Close()</lang>

Prolog

This example works in SWI-Prolog. It may work in other Prolog implementations too.

Create a separate thread for each word. Join the threads to make sure they complete before the program exits.

<lang prolog>main :-

   thread_create(say("Enjoy"),A,[]),
   thread_create(say("Rosetta"),B,[]),
   thread_create(say("Code"),C,[]),
   thread_join(A,_),
   thread_join(B,_),
   thread_join(C,_).

say(Message) :-

   Delay is random_float,
   sleep(Delay),
   writeln(Message).</lang>

PureBasic

<lang PureBasic>Global mutex = CreateMutex()

Procedure Printer(*str)
  LockMutex(mutex)
  PrintN( PeekS(*str) )
  UnlockMutex(mutex)
EndProcedure

If OpenConsole()
  LockMutex(mutex)
  thread1 = CreateThread(@Printer(), @"Enjoy")
  thread2 = CreateThread(@Printer(), @"Rosetta")
  thread3 = CreateThread(@Printer(), @"Code")
  UnlockMutex(mutex)

  WaitThread(thread1)
  WaitThread(thread2)
  WaitThread(thread3)

  Print(#CRLF$ + #CRLF$ + "Press ENTER to exit")
  Input()

  CloseConsole()
EndIf

FreeMutex(mutex)</lang>

Python

Works with: Python version 3.2

Using the concurrent.futures library (new in Python 3.2) and choosing to use processes over threads; the example will use up to as many processes as your machine has cores. This doesn't, however, guarantee an order of sub-process results. <lang python>Python 3.2 (r32:88445, Feb 20 2011, 21:30:00) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from concurrent import futures
>>> with futures.ProcessPoolExecutor() as executor:
...     _ = list(executor.map(print, 'Enjoy Rosetta Code'.split()))
...
Enjoy
Rosetta
Code
>>></lang>

Works with: Python version 2.5

<lang python>import threading
import random

def echo(text):

   print(text)

threading.Timer(random.random(), echo, ("Enjoy",)).start() threading.Timer(random.random(), echo, ("Rosetta",)).start() threading.Timer(random.random(), echo, ("Code",)).start()</lang>

Or, by using a for loop to start one thread per list entry, where our list is our set of source strings:

<lang python>import threading
import random

def echo(text):

   print(text)

for text in ["Enjoy", "Rosetta", "Code"]:

   threading.Timer(random.random(), echo, (text,)).start()</lang>

threading.Thread

<lang python>import random, sys, time
import threading

lock = threading.Lock()

def echo(s):

   time.sleep(1e-2*random.random())
   # use `.write()` with lock due to `print` prints empty lines occasionally
   with lock:
       sys.stdout.write(s)
       sys.stdout.write('\n')

for line in 'Enjoy Rosetta Code'.split():

   threading.Thread(target=echo, args=(line,)).start()</lang>

multiprocessing

Works with: Python version 2.6

<lang python>from __future__ import print_function
from multiprocessing import Pool

def main():

   p = Pool()
   p.map(print, 'Enjoy Rosetta Code'.split())

if __name__=="__main__":

   main()</lang>

twisted

<lang python>import random
from twisted.internet import reactor, task, defer
from twisted.python.util import println

delay = lambda: 1e-4*random.random()
d = defer.DeferredList([task.deferLater(reactor, delay(), println, line)
                        for line in 'Enjoy Rosetta Code'.split()])

d.addBoth(lambda _: reactor.stop())
reactor.run()</lang>

gevent

<lang python>from __future__ import print_function
import random
import gevent

delay = lambda: 1e-4*random.random()
gevent.joinall([gevent.spawn_later(delay(), print, line)
                for line in 'Enjoy Rosetta Code'.split()])</lang>

Racket

Threads provide a simple API for concurrent programming. <lang racket>
#lang racket

(for ([str '("Enjoy" "Rosetta" "Code")])
  (thread (λ () (displayln str))))
</lang>

In addition to "thread" which is implemented as green threads (useful for IO etc), Racket has "futures" and "places" which are similar tools for using multiple OS cores.

Raven

<lang raven>[ 'Enjoy' 'Rosetta' 'Code' ] as $words

thread talker

   $words pop "%s\n"
   repeat dup print
       500 choose ms

talker as a
talker as b
talker as c</lang>

Rhope

Works with: Rhope version alpha 1

<lang rhope>Main(0,0) |:

   Print["Enjoy"]
   Print["Rosetta"]
   Print["Code"]
|</lang>

In Rhope, expressions with no shared dependencies run in parallel by default.

Ruby

<lang ruby>%w{Enjoy Rosetta Code}.map do |x|

   Thread.new do
       sleep rand
       puts x
   end

end.each do |t|

 t.join

end</lang>

Rust

<lang rust>extern crate rand;
use std::thread;
use rand::thread_rng;
use rand::distributions::{Range, IndependentSample};

fn main() {

   let mut rng = thread_rng();
   let rng_range = Range::new(0u32, 100);
   for word in "Enjoy Rosetta Code".split_whitespace() {
       let snooze_time = rng_range.ind_sample(&mut rng);
       let local_word = word.to_owned();
       std::thread::spawn(move || {
           thread::sleep_ms(snooze_time);
           println!("{}", local_word);
       });
   }
   thread::sleep_ms(1000);

}</lang>

Scala

<lang scala>import scala.actors.Futures

List("Enjoy", "Rosetta", "Code").map { x =>

   Futures.future {                           
     Thread.sleep((Math.random * 1000).toInt)   
      println(x)                                 
   }         

}.foreach(_())</lang>

Scheme

<lang scheme>(parallel-execute (lambda () (print "Enjoy"))

                 (lambda () (print "Rosetta"))
                 (lambda () (print "Code")))</lang>

Sidef

Very basic threading support is provided by the Block.fork() method: <lang ruby>var a = <Enjoy Rosetta Code>

a.map{|str|

   {   Sys.sleep(1.rand)
       say str
   }.fork

}.map{|thr| thr.wait }</lang>

Output:
Enjoy
Code
Rosetta

Swift

Using Grand Central Dispatch with concurrent queues. <lang Swift>import Foundation

let myList = ["Enjoy", "Rosetta", "Code"]

for word in myList {

   dispatch_async(dispatch_get_global_queue(0, 0)) {
       NSLog(word)
   }

}

dispatch_main()</lang>

Output:
2015-02-05 10:15:01.831 rosettaconcurrency[1917:37905] Code
2015-02-05 10:15:01.831 rosettaconcurrency[1917:37902] Enjoy
2015-02-05 10:15:01.831 rosettaconcurrency[1917:37904] Rosetta

Tcl

Assuming that "random" means that we really want the words to appear in random (rather then "undefined" or "arbitrary") order:

<lang tcl>after [expr int(1000*rand())] {puts "Enjoy"}
after [expr int(1000*rand())] {puts "Rosetta"}
after [expr int(1000*rand())] {puts "Code"}</lang>

will execute each line after a randomly chosen number (0...1000) of milliseconds.

A step towards "undefined" would be to use after idle, which is Tcl for "do this whenever you get around to it". Thus:

<lang tcl>after idle {puts "Enjoy"}
after idle {puts "Rosetta"}
after idle {puts "Code"}</lang>

(While no particular order is guaranteed by the Tcl spec, the current implementations will all execute these in the order in which they were added to the idle queue).

It's also possible to use threads for this. Here we do this with the built-in thread-pool support: <lang tcl>package require Thread
set pool [tpool::create -initcmd {
    proc delayPrint msg {
        after [expr int(1000*rand())]
        puts $msg
    }
}]
tpool::post -detached $pool [list delayPrint "Enjoy"]
tpool::post -detached $pool [list delayPrint "Rosetta"]
tpool::post -detached $pool [list delayPrint "Code"]
tpool::release $pool
after 1200  ;# Give threads time to do their work
exit</lang>

UnixPipes

<lang bash>(echo "Enjoy" & echo "Rosetta"& echo "Code"&)</lang>

Visual Basic .NET

<lang vbnet>Imports System.Threading

Module Module1

  Public rnd As New Random
  Sub Main()
      Dim t1 As New Thread(AddressOf Foo)
      Dim t2 As New Thread(AddressOf Foo)
      Dim t3 As New Thread(AddressOf Foo)
      t1.Start("Enjoy")
      t2.Start("Rosetta")
      t3.Start("Code")
      t1.Join()
      t2.Join()
      t3.Join()
  End Sub
  Sub Foo(ByVal state As Object)
      Thread.Sleep(rnd.Next(1000))
      Console.WriteLine(state)
  End Sub

End Module</lang>

zkl

<lang zkl>fcn{println("Enjoy")}.launch();   // thread
fcn{println("Rosetta")}.strand(); // co-op thread
fcn{println("Code")}.future();    // another thread type</lang>

Output:
Rosetta
Code
Enjoy