Metered concurrency: Difference between revisions
Thundergnat (talk | contribs) m (syntax highlighting fixup automation)
(→{{header|Haskell}}: Specified imports, applied Ormolu, swapped print for putStrLn)
Line 6:
The interface for the counting semaphore is defined in an Ada package specification:
<syntaxhighlight lang="ada">
protected type Counting_Semaphore(Max : Positive) is
entry Acquire;
Line 14:
Lock_Count : Natural := 0;
end Counting_Semaphore;
end Semaphores;</syntaxhighlight>
The ''Acquire'' entry has a condition associated with it. A task can only execute the ''Acquire'' entry when ''Lock_Count'' is less than ''Max''. This is the key to making this structure behave as a counting semaphore. This condition, and all the other aspects of ''Counting_Semaphore'' are contained in the package body.
<syntaxhighlight lang="ada">
------------------------
Line 55:
end Counting_Semaphore;
end Semaphores;</syntaxhighlight>
We now need a set of tasks to properly call an instance of ''Counting_Semaphore''.
<syntaxhighlight lang="ada">
with Ada.Text_Io; use Ada.Text_Io;
Line 93:
Crew(I).Start(2.0, I);
end loop;
end Semaphores_Main;</syntaxhighlight>
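The guarded ''Acquire'' entry above is, in essence, a condition-variable wait: a caller is admitted only while ''Lock_Count'' is below ''Max''. As an illustrative aside (not part of the Ada solution), the same idea can be sketched in Python; all names here are invented for the sketch:

```python
import threading

class CountingSemaphore:
    """Illustrative analogue of the guarded-entry protected type above."""
    def __init__(self, max_count):
        self.max = max_count
        self.lock_count = 0               # mirrors Lock_Count
        self.cond = threading.Condition()

    def acquire(self):
        with self.cond:
            # the entry barrier: proceed only while Lock_Count < Max
            while self.lock_count >= self.max:
                self.cond.wait()
            self.lock_count += 1

    def release(self):
        with self.cond:
            self.lock_count -= 1
            self.cond.notify()            # re-open the barrier for one waiter

sem = CountingSemaphore(3)
for _ in range(3):
    sem.acquire()                         # all three succeed without blocking
print(sem.lock_count)
```

A fourth acquire would block in `cond.wait()` until some holder calls `release`, exactly as a task queues on the Ada entry.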
=={{header|ALGOL 68}}==
Line 100:
{{works with|ALGOL 68G|Any - tested with release [http://sourceforge.net/projects/algol68/files/algol68g/algol68g-1.18.0/algol68g-1.18.0-9h.tiny.el5.centos.fc11.i386.rpm/download 1.18.0-9h.tiny]}}
{{wont work with|ELLA ALGOL 68|Any (with appropriate job cards) - tested with release [http://sourceforge.net/projects/algol68/files/algol68toc/algol68toc-1.8.8d/algol68toc-1.8-8d.fc9.i386.rpm/download 1.8-8d] - due to PAR and SEMA being unimplemented}}
<syntaxhighlight lang="algol68">
PROC job = (INT n)VOID: (
Line 112:
( DOWN sem ; job(2) ; UP sem ) ,
( DOWN sem ; job(3) ; UP sem )
)</syntaxhighlight>
Output:
<pre>
Line 123:
{{works with|BBC BASIC for Windows}}
In BBC BASIC concurrency can only be achieved by timer events (short of running multiple processes).
<syntaxhighlight lang="bbcbasic">
DIM tID%(6)
Line 183:
PROC_killtimer(tID%(i%))
NEXT
ENDPROC</syntaxhighlight>
'''Output:'''
<pre>
Line 205:
=={{header|C}}==
{{works with|POSIX}}
<syntaxhighlight lang="c">
#include <pthread.h>
#include <stdlib.h>
Line 255:
return sem_destroy(&sem);
}</syntaxhighlight>
=={{header|C sharp}}==
C# has a built-in semaphore system where acquire is called via Wait(), release via Release(), and the count is read via semaphore.CurrentCount.
<syntaxhighlight lang="csharp">
using System.Threading;
using System.Threading.Tasks;
Line 287:
}
}
}</syntaxhighlight>
=={{header|C++}}==
With std::counting_semaphore and std::jthread from C++20's standard library:
<syntaxhighlight lang="cpp">
#include <iostream>
#include <format>
Line 319:
return 0;
}</syntaxhighlight>
=={{header|D}}==
<syntaxhighlight lang="d">
import std.stdio ;
import std.thread ;
Line 370:
foreach(inout c ; crew)
c.wait ;
}</syntaxhighlight>
===Phobos with tools===
Using the scrapple.tools extension library for Phobos ..
<syntaxhighlight lang="d">
import tools.threads, tools.log, tools.time, tools.threadpool;
Line 393:
for (int i = 0; i < 10; ++i)
done.acquire;
}</syntaxhighlight>
=={{header|E}}==
This semaphore differs slightly from the task description; the release operation is not on the semaphore itself but is given out with each acquisition, and cannot be invoked too many times.
<syntaxhighlight lang="e">
var current := 0
def waiters := <elib:vat.makeQueue>()
Line 436:
for i in 1..5 {
work(i, 2000, semaphore, timer, println)
}</syntaxhighlight>
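The one-shot release-token design described above, where each acquisition hands back its own releaser that cannot over-release, can be sketched in Python; the class and names below are invented for illustration:

```python
import threading

class TokenSemaphore:
    """Illustrative sketch: acquire() returns a one-shot release function."""
    def __init__(self, n):
        self._sem = threading.Semaphore(n)

    def acquire(self):
        self._sem.acquire()
        released = [False]
        def release_once():
            # a second call on the same token is ignored, so the
            # semaphore can never be released more times than acquired
            if not released[0]:
                released[0] = True
                self._sem.release()
        return release_once

sem = TokenSemaphore(1)
release = sem.acquire()
release()
release()                                # second call is a no-op
ok = sem._sem.acquire(blocking=False)    # exactly one permit came back
print(ok)
```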
=={{header|EchoLisp}}==
<syntaxhighlight lang="scheme">
(require 'tasks) ;; tasks library
Line 453:
;; run 10 // tasks
(for ([i 10]) (task-run (make-task task i ) (random 500)))
</syntaxhighlight>
{{out}}
<pre>
Line 478:
=={{header|Erlang}}==
In this implementation the semaphore is handled as its own process, taking advantage of Erlang's receive queues, which act as a FIFO for 'acquire' requests. As workers come online and request the semaphore they will receive it in order. 'receive' pauses the process until a message is matched, so there is no idle looping.
<syntaxhighlight lang="erlang">
-module(metered).
-compile(export_all).
Line 546:
lists:foreach(fun (P) -> receive {done, P} -> ok end end, Pids),
stop(Sem).
</syntaxhighlight>
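The semaphore-as-process idea, where the count lives in one server loop and waiters block on a reply rather than spinning, can be imitated in Python with a thread and queues. This is an illustrative analogue, not a translation of the Erlang code; all names are invented:

```python
import queue, threading

def semaphore_process(n, requests):
    # the semaphore's own "process": owns the count, serves requests FIFO
    free = n
    waiting = []                          # parked acquirers, in arrival order
    while True:
        msg, reply = requests.get()
        if msg == "stop":
            return
        if msg == "acquire":
            if free > 0:
                free -= 1
                reply.put("ok")           # grant immediately
            else:
                waiting.append(reply)     # no idle loop: requester blocks on reply
        elif msg == "release":
            if waiting:
                waiting.pop(0).put("ok")  # hand the permit to the next waiter
            else:
                free += 1

requests = queue.Queue()
t = threading.Thread(target=semaphore_process, args=(2, requests))
t.start()

def acquire():
    reply = queue.Queue()
    requests.put(("acquire", reply))
    reply.get()                           # blocks until the grant arrives

def release():
    requests.put(("release", None))

acquire(); acquire()                      # both granted: two permits exist
release()
requests.put(("stop", None))
t.join()
print("done")
```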
=={{header|Euphoria}}==
<syntaxhighlight lang="euphoria">
sems = {}
constant COUNTER = 1, QUEUE = 2
Line 613:
while length(task_list())>1 do
task_yield()
end while</syntaxhighlight>
Output:
Line 638:
=={{header|Factor}}==
<syntaxhighlight lang="factor">
concurrency.semaphores formatting kernel sequences threads ;
Line 649:
] with-semaphore
"task %d released\n" printf
] curry parallel-each</syntaxhighlight>
{{out}}
<pre>
Line 675:
=={{header|FreeBASIC}}==
<syntaxhighlight lang="freebasic">
Dim Shared As Any Ptr ttylock
Line 721:
' Clean up when finished
Mutexdestroy(ttylock)
Sleep</syntaxhighlight>
=={{header|Go}}==
Line 732:
A couple of other concurrency-related details used in the example are the log package for serializing output and sync.WaitGroup used as a completion checkpoint. Functions of the fmt package are not synchronized and can produce interleaved output with concurrent writers. The log package does nice synchronization to avoid this.
<syntaxhighlight lang="go">
import (
Line 776:
rooms.release()
studied.Done() // signal that student is done
}</syntaxhighlight>
Output for this and the other Go programs here shows 10 students studying immediately, about a 2 second pause, 10 more students studying, then another pause of about 2 seconds before returning to the command prompt. In this example the count values may look jumbled. This is a result of the student goroutines running concurrently.
===Sync.Cond===
A more traditional approach implementing a counting semaphore object with sync.Cond. It has a constructor and methods for the three operations requested by the task.
<syntaxhighlight lang="go">
import (
Line 841:
studyRoom.release()
studied.Done()
}</syntaxhighlight>
=={{header|Groovy}}==
Solution:
<syntaxhighlight lang="groovy">
private int count = 0
private final int max
Line 862:
synchronized int getCount() { count }
}</syntaxhighlight>
Test:
<syntaxhighlight lang="groovy">
(1..12).each { threadID ->
Thread.start {
Line 878:
}
}
}</syntaxhighlight>
Output:
Line 909:
The QSem (quantity semaphore) waitQSem and signalQSem functions are the Haskell equivalents of acquire and release, and the MVar (synchronizing variable) functions are used to pass the workers' statuses to the main thread for printing. Note that this code is likely only compatible with GHC due to the use of "threadDelay" from Control.Concurrent.
<syntaxhighlight lang="haskell">
( newQSem,
signalQSem,
Line 937:
prints = 2 * workers
mapM_ (forkIO . worker q m) [1 .. workers]
replicateM_ prints $ takeMVar m >>= putStrLn</syntaxhighlight>
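The MVar trick above, where workers never print directly but hand status strings to the main thread, which serializes the output, has a straightforward queue-based analogue. The following Python sketch is illustrative only; its names are invented:

```python
import queue, threading

statuses = queue.Queue()                  # plays the role of the MVar

def worker(i):
    # workers never print; they post status lines for the main thread
    statuses.put(f"worker {i} acquired")
    statuses.put(f"worker {i} released")

workers = 3
threads = [threading.Thread(target=worker, args=(i,))
           for i in range(1, workers + 1)]
for t in threads:
    t.start()
for _ in range(2 * workers):              # main thread does all the printing
    print(statuses.get())
for t in threads:
    t.join()
```

Because only one thread ever writes to stdout, the lines can never interleave mid-line, though their order still depends on scheduling.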
==Icon and {{header|Unicon}}==
Icon doesn't support concurrency. A Unicon solution is:
<syntaxhighlight lang="unicon">
n := integer(A[1] | 3) # Max. number of active tasks
m := integer(A[2] | 2) # Number of visits by each task
Line 958:
every wait(!threads)
end</syntaxhighlight>
Sample run:
Line 1,000:
Here's an approach which uses the new (j904, currently in beta) threading primitives:
<syntaxhighlight lang="j">
sleep=: 6!:3
task=: {{
Line 1,012:
> task t.''"0 i.10 NB. dispatch and wait for 10 tasks
14 T. lock NB. discard lock
}}</syntaxhighlight>
An example run might look like this:
<syntaxhighlight lang="j">
Task 0 has the semaphore
Task 1 has the semaphore
Line 1,026:
Task 7 has the semaphore
Task 8 has the semaphore
Task 6 has the semaphore</syntaxhighlight>
An alternative implementation, while (barely) sufficient for this task's requirements, is for demonstration purposes only, and is not meant for serious work:
<syntaxhighlight lang="j">
id=:'dumb',":x:6!:9''
wd 'pc ',id
Line 1,060:
sleep 2
release 0
}}</syntaxhighlight>
Task example:
<syntaxhighlight lang="j">
unit 1 acquired semaphore, t=54683.6
unit 0 acquired semaphore, t=54685.6
unit 4 acquired semaphore, t=54687.7
unit 2 acquired semaphore, t=54689.7
unit 3 acquired semaphore, t=54691.7</syntaxhighlight>
=={{header|Java}}==
<syntaxhighlight lang="java">
private int lockCount = 0;
private int maxCount;
Line 1,130:
}
}</syntaxhighlight>
=={{header|Julia}}==
<syntaxhighlight lang="julia">
function acquire(num, sem)
sleep(rand())
Line 1,155:
runsem(4)
</syntaxhighlight>
Sleeping and running 4 tasks.
Task 4 waiting for semaphore
Line 1,169:
=={{header|Kotlin}}==
<syntaxhighlight lang="kotlin">
import java.util.concurrent.Semaphore
Line 1,188:
}
}
}</syntaxhighlight>
Sample output:
Line 1,214:
=={{header|Logtalk}}==
Using Logtalk's multi-threading notifications, which use a per-object FIFO message queue, thus avoiding the need for idle loops. Works when using SWI-Prolog, XSB, or YAP as the backend compiler.
<syntaxhighlight lang="logtalk">
:- object(metered_concurrency).
Line 1,272:
:- end_object.
</syntaxhighlight>
Output:
<syntaxhighlight lang="text">
| ?- metered_concurrency::run.
Worker 1 acquired semaphore
Line 1,291:
Worker 4 releasing semaphore
yes
</syntaxhighlight>
=={{header|Nim}}==
Line 1,298:
This program must be compiled with option <code>--threads:on</code>.
<syntaxhighlight lang="nim">
type SemaphoreError = object of CatchableError
Line 1,349:
for n in 0..9: createThread(threads[n], task, n)
threads.joinThreads()
sem.close()</syntaxhighlight>
{{out}}
Line 1,412:
Using Nim standard mechanisms provided by module “locks”. As for the previous program, it must be compiled with option <code>--threads:on</code>.
<syntaxhighlight lang="nim">
type Semaphore = object
Line 1,474:
for n in 0..9: createThread(threads[n], task, n)
threads.joinThreads()
sem.close()</syntaxhighlight>
{{out}}
Line 1,542:
If the channel is empty, a task waits until it is no longer empty.
<syntaxhighlight lang="oforth">
Object Class new: Semaphore(ch)
Line 1,551:
Semaphore method: acquire @ch receive drop ;
Semaphore method: release 1 @ch send drop ;</syntaxhighlight>
Usage :
<syntaxhighlight lang="oforth">
while( true ) [
s acquire "Semaphore acquired" .cr
Line 1,565:
| s i |
Semaphore new(n) ->s
10 loop: i [ #[ s mytask ] & ] ;</syntaxhighlight>
=={{header|Oz}}==
Counting semaphores can be implemented in terms of mutexes (called "locks" in Oz) and dataflow variables (used as condition variables here). The mutex protects both the counter and the mutable reference to the dataflow variable.
<syntaxhighlight lang="oz">
fun {NewSemaphore N}
sem(max:N count:{NewCell 0} 'lock':{NewLock} sync:{NewCell _})
Line 1,628:
for I in 1..10 do
{StartWorker I}
end</syntaxhighlight>
=={{header|Perl}}==
Line 1,635:
=={{header|Phix}}==
{{trans|Euphoria}}
<!--<syntaxhighlight lang="phix">-->
<span style="color: #008080;">without</span> <span style="color: #008080;">js</span> <span style="color: #000080;font-style:italic;">-- (tasks)</span>
<span style="color: #004080;">sequence</span> <span style="color: #000000;">sems</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{}</span>
Line 1,707:
<span style="color: #0000FF;">?</span><span style="color: #008000;">"done"</span>
<span style="color: #0000FF;">{}</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">wait_key</span><span style="color: #0000FF;">()</span>
<!--</syntaxhighlight>-->
{{out}}
<pre>
Line 1,738:
=={{header|PicoLisp}}==
<syntaxhighlight lang="picolisp">
(for U 4 # Create 4 concurrent units
(unless (fork)
Line 1,745:
(wait 2000)
(prinl "Unit " U " releasing the semaphore") )
(bye) ) ) )</syntaxhighlight>
=={{header|PureBasic}}==
Line 1,751:
After a thread has completed, it releases the semaphore and a new thread
will be able to start.
<syntaxhighlight lang="purebasic">
#Parallels=3
Global Semaphore=CreateSemaphore(#Parallels)
Line 1,774:
WaitThread(i)
EndIf
Next</syntaxhighlight>
Sample output
<pre>Thread #0 active.
Line 1,792:
Python's threading module includes a semaphore implementation. This code shows how to use it.
<syntaxhighlight lang="python">
import threading
Line 1,828:
running = 0
for t in workers:
    t.join()</syntaxhighlight>
=={{header|Racket}}==
<syntaxhighlight lang="racket">
#lang racket
Line 1,846:
(printf "Job #~a done\n" i)
(semaphore-post sema)))))
</syntaxhighlight>
=={{header|Raku}}==
(formerly Perl 6)
Uses a buffered channel to hand out a limited number of tickets.
<syntaxhighlight lang="raku">
has $.tickets = Channel.new;
method new ($max) {
Line 1,873:
}
await @units;
}</syntaxhighlight>
{{out}}
<pre>unit 0 acquired
Line 1,889:
Counting semaphores are built in:
<syntaxhighlight lang="text">
4 semaphore as sem
Line 1,905:
group
10 each drop worker
list as workers</syntaxhighlight>
Thread joining is automatic by default.
Line 1,912:
This one uses SizedQueue class from the standard library since it blocks when the size limit is reached. An alternative approach would be having a mutex and a counter and blocking explicitly.
<syntaxhighlight lang="ruby">
require 'thread'
Line 1,958:
threads.each(&:join)
</syntaxhighlight>
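The bounded-queue technique described above, where put blocks once the size limit is reached, works in any language with a blocking bounded queue. An illustrative Python sketch (the class name and methods are invented for the sketch):

```python
import queue

class QueueSemaphore:
    """Illustrative sketch: a bounded queue acting as a counting semaphore."""
    def __init__(self, n):
        self.q = queue.Queue(maxsize=n)   # put() blocks once n tokens are in

    def acquire(self):
        self.q.put(None)                  # blocks while all n permits are held

    def release(self):
        self.q.get()                      # frees a slot, waking a blocked put()

    def count(self):
        return self.q.qsize()             # number of current holders

sem = QueueSemaphore(4)
sem.acquire()
sem.acquire()
print(sem.count())
sem.release()
print(sem.count())
```

Note the inversion relative to a ticket channel: here putting a token is the acquire, and the queue's capacity limit is what enforces the maximum.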
=={{header|Rust}}==
<syntaxhighlight lang="rust">
//! Rust has a perfectly good Semaphore type already. It lacks count(), though, so we can't use it
//! directly.
Line 2,075:
}
</syntaxhighlight>
{{out}}
<pre>
Line 2,101:
=={{header|Scala}}==
<syntaxhighlight lang="scala">
private var lockCount = 0
Line 2,130:
}
}
}</syntaxhighlight>
=={{header|Tcl}}==
{{works with|Tcl|8.6}}
Uses the Thread package, which is expected to form part of the overall Tcl 8.6 release.
<syntaxhighlight lang="tcl">
package require Thread
Line 2,200:
foreach t $threads {
thread::release -wait $t
}</syntaxhighlight>
=={{header|UnixPipes}}==
The number of concurrent jobs can be set by issuing that many echo '1's into sem at the beginning.
<syntaxhighlight lang="bash">
acquire() {
Line 2,223:
( acquire < sem ; job 3 ; release > sem ) &
echo 'Initialize Jobs' >&2 ; echo '1' > sem</syntaxhighlight>
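The pipe-as-semaphore trick, where N bytes written into a pipe are the permits and reads block once the pipe is empty, can also be shown outside the shell. A minimal Python sketch of the same mechanism (names invented for the sketch):

```python
import os

r, w = os.pipe()                  # the pipe plays the role of "sem"
os.write(w, b"11")                # two tokens, like two echo '1's

def acquire():
    os.read(r, 1)                 # blocks when no tokens remain

def release():
    os.write(w, b"1")             # put a token back

acquire(); acquire()              # both tokens taken; the pipe is now empty
release()                         # return one token
token = os.read(r, 1)             # a third "acquire": succeeds immediately
print(token == b"1")
```

As in the shell version, the kernel's pipe buffer does the counting and the blocking, so no user-space locking is needed.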
=={{header|Visual Basic .NET}}==
Line 2,229:
This code shows using a local semaphore. Semaphores can also be named, in which case they will be shared system wide.
<syntaxhighlight lang="vbnet">
sem.WaitOne() 'Blocks until a resource can be acquired
Dim oldCount = sem.Release() 'Returns a resource to the pool
'oldCount has the Semaphore's count before Release was called</syntaxhighlight>
=={{header|Wren}}==
{{libheader|Wren-queue}}
In Wren, only one fiber can be run at a time but can yield control to another fiber and be resumed later. Also other tasks can be scheduled to run when a fiber is suspended by its sleep method. The following script (with 6 tasks) therefore takes just over 4 seconds to run rather than 12.
<syntaxhighlight lang="wren">
import "timer" for Timer
import "/queue" for Queue
Line 2,300:
// call the first one
tasks[0].call(1)
System.print("\nAll %(numTasks) tasks completed!")</syntaxhighlight>
{{out}}
Line 2,335:
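The cooperative single-thread scheduling that Wren's fibers provide has a close analogue in Python's asyncio, where tasks yield at await points and a semaphore meters how many are inside the critical section at once. This sketch is illustrative, not a translation of the Wren code:

```python
import asyncio

async def task(i, sem, log):
    async with sem:                       # acquire; auto-released on exit
        log.append(f"task {i} acquired")
        await asyncio.sleep(0)            # yield to the scheduler, like a fiber
    log.append(f"task {i} released")

async def main():
    sem = asyncio.Semaphore(3)            # at most 3 tasks inside at once
    log = []
    await asyncio.gather(*(task(i, sem, log) for i in range(6)))
    return log

log = asyncio.run(main())
print(len(log))
```

Everything runs on one thread; concurrency comes purely from tasks suspending at `await`, which is why no locking beyond the semaphore itself is required.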
=={{header|zkl}}==
Semaphores are built in.
<syntaxhighlight lang="zkl">
name.println(" wait"); sem.acquire();
name.println(" go"); Atomic.sleep(2);
Line 2,342:
// start 3 threads using the same semaphore
s:=Thread.Semaphore(1);
job.launch("1",s); job.launch("2",s); job.launch("3",s);</syntaxhighlight>
{{out}}
<pre>
|