Arena storage pool

{{task}}
Dynamically allocated objects take their memory from a [[heap]].
 
The memory for an object is provided by an '''allocator''' which maintains the storage pool used for the [[heap]].
 
Often a call to allocator is denoted as
<syntaxhighlight lang="ada">P := new T</syntaxhighlight>
where &nbsp; '''T''' &nbsp; is the type of an allocated object, &nbsp; and &nbsp; '''P''' &nbsp; is a [[reference]] to the object.
 
The storage pool chosen by the allocator can be determined by either:
* the object type &nbsp; '''T'''
* the type of the pointer &nbsp; '''P'''.
In the former case objects can be allocated only in one storage pool.
 
In the latter case objects of the type can be allocated in any storage pool or on the [[stack]].
 
 
;Task:
The task is to show how allocators and user-defined storage pools are supported by the language.
 
In particular:
# define an arena storage pool. &nbsp; An arena is a pool in which objects are allocated individually, but freed by groups.
# allocate some objects (e.g., integers) in the pool.
 
 
Explain what controls the storage pool choice in the language.
<br><br>
 
=={{header|Ada}}==
In [[Ada]] the choice of storage pool is controlled by the type of the pointer. Objects pointed to by anonymous access types are allocated in the default storage pool. Pool-specific pointer types may get a pool assigned to them:
<syntaxhighlight lang="ada">type My_Pointer is access My_Object;
for My_Pointer'Storage_Pool use My_Pool;</syntaxhighlight>
The following example illustrates implementation of an arena pool. Specification:
<syntaxhighlight lang="ada">with System.Storage_Elements; use System.Storage_Elements;
with System.Storage_Pools; use System.Storage_Pools;
 
Core : Storage_Array (1..Size);
end record;
end Arena_Pools;</syntaxhighlight>
Here is an implementation of the package:
<syntaxhighlight lang="ada">package body Arena_Pools is
procedure Allocate
( Pool : in out Arena;
return Pool.Size;
end Storage_Size;
end Arena_Pools;</syntaxhighlight>
The following is a test program that uses the pool:
<syntaxhighlight lang="ada">with Arena_Pools;
use Arena_Pools;
 
Z := new Integer;
Z.all := X.all + Y.all;
end Test_Allocator;</syntaxhighlight>
 
=={{header|C}}==
 
To use dynamic memory, the header for the standard library must be included in the module.
<syntaxhighlight lang="c">#include <stdlib.h></syntaxhighlight>
Uninitialized memory is allocated using the malloc function. To obtain the amount of memory that needs to be allocated, sizeof is used; sizeof is not a normal C function, but is evaluated by the compiler to determine the amount of memory needed.
<syntaxhighlight lang="c">int *var = malloc(n*sizeof(int));
Typename *var = malloc(sizeof(Typename));
Typename *var = malloc(sizeof var[0]);</syntaxhighlight>
Since pointers to structures are needed so frequently, often a
typedef will define a type as being a pointer to the associated structure.
Once one gets used to the notation, programs are actually easier to read, as the
variable declarations don't include all the '*'s.
<syntaxhighlight lang="c">typedef struct mytypeStruct { .... } sMyType, *MyType;
 
MyType var = malloc(sizeof(sMyType));</syntaxhighlight>
The calloc() function initializes all allocated memory to zero. It is also often
used for allocating memory for arrays of some type.
<syntaxhighlight lang="c">/* allocate an array of n MyTypes */
MyType var = calloc(n, sizeof(sMyType));
 
MyType third = var+3; /* a reference to the 3rd item allocated */
 
MyType fourth = &var[4]; /* another way, getting the fourth item */</syntaxhighlight>
Freeing memory dynamically allocated from the heap is done by calling free().
<syntaxhighlight lang="c">free(var);</syntaxhighlight>
One can allocate space on the stack using the alloca() function. You do not
free memory that has been allocated with alloca(); it is released automatically when the function returns.
<syntaxhighlight lang="c">Typename *var = alloca(sizeof(Typename));</syntaxhighlight>
An object oriented approach will define a function for creating a new object of a class.
In these systems, the size of the memory that needs to be allocated for an instance of the
class will often be included in the 'class' record.
See http://rosettacode.org/wiki/Polymorphic%20copy#C
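
As a sketch of that idea (the Class and Point types and the newInstance function below are purely illustrative, not taken from any particular library), the instance size can be stored in a class descriptor and used by a constructor-like function:
<syntaxhighlight lang="c">#include <stdlib.h>
#include <stdio.h>

/* A hypothetical class descriptor carrying the instance size. */
typedef struct Class {
    size_t instanceSize;
    const char *name;
} Class;

typedef struct Point {
    int x, y;
} Point;

static const Class PointClass = { sizeof(Point), "Point" };

/* Allocate one zero-initialized instance using the size stored in the class record. */
void *newInstance(const Class *cls)
{
    return calloc(1, cls->instanceSize);
}

int main(void)
{
    Point *p = newInstance(&PointClass);
    if (p) {
        p->x = 3;
        p->y = 4;
        printf("%s(%d,%d)\n", PointClass.name, p->x, p->y);
        free(p);
    }
    return 0;
}</syntaxhighlight>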
 
Without using the standard malloc, things get a bit more complicated. For example, here is some code that implements something like it using the mmap system call (for Linux):
 
<syntaxhighlight lang="c">#include <sys/mman.h>
#include <unistd.h>
#include <stdio.h>
 
// VERY rudimentary C memory management independent of C library's malloc.
 
// Linked list (yes, this is inefficient)
struct __ALLOCC_ENTRY__
{
void * allocatedAddr;
size_t size;
struct __ALLOCC_ENTRY__ * next;
};
typedef struct __ALLOCC_ENTRY__ __ALLOCC_ENTRY__;
 
// Keeps track of allocated memory and metadata
__ALLOCC_ENTRY__ * __ALLOCC_ROOT__ = NULL;
__ALLOCC_ENTRY__ * __ALLOCC_TAIL__ = NULL;
 
// Add new metadata to the table
void _add_mem_entry(void * location, size_t size)
{
__ALLOCC_ENTRY__ * newEntry = (__ALLOCC_ENTRY__ *) mmap(NULL, sizeof(__ALLOCC_ENTRY__), PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
if (__ALLOCC_TAIL__ != NULL)
{
__ALLOCC_TAIL__ -> next = newEntry;
__ALLOCC_TAIL__ = __ALLOCC_TAIL__ -> next;
}
else
{
// Create new table
__ALLOCC_ROOT__ = newEntry;
__ALLOCC_TAIL__ = newEntry;
}
__ALLOCC_ENTRY__ * tail = __ALLOCC_TAIL__;
tail -> allocatedAddr = location;
tail -> size = size;
tail -> next = NULL;
__ALLOCC_TAIL__ = tail;
}
 
// Remove metadata from the table given pointer
size_t _remove_mem_entry(void * location)
{
__ALLOCC_ENTRY__ * curNode = __ALLOCC_ROOT__;
// Nothing to do
if (curNode == NULL)
{
return 0;
}
// First entry matches
if (curNode -> allocatedAddr == location)
{
__ALLOCC_ROOT__ = curNode -> next;
size_t chunkSize = curNode -> size;
// No nodes left
if (__ALLOCC_ROOT__ == NULL)
{
__ALLOCC_TAIL__ = NULL;
}
munmap(curNode, sizeof(__ALLOCC_ENTRY__));
return chunkSize;
}
// Walk the rest of the list looking for a matching entry
while (curNode -> next != NULL)
{
__ALLOCC_ENTRY__ * nextNode = curNode -> next;
if (nextNode -> allocatedAddr == location)
{
size_t chunkSize = nextNode -> size;
if(curNode -> next == __ALLOCC_TAIL__)
{
__ALLOCC_TAIL__ = curNode;
}
curNode -> next = nextNode -> next;
munmap(nextNode, sizeof(__ALLOCC_ENTRY__));
return chunkSize;
}
curNode = nextNode;
}
// Nothing was found
return 0;
}
 
// Allocate a block of memory with size
// Note: customMalloc'ing an already mapped location causes undefined behavior
void * customMalloc(size_t size)
{
// Now we can use 0 as our error state
if (size == 0)
{
return NULL;
}
void * mapped = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
// Store metadata
_add_mem_entry(mapped, size);
return mapped;
}
 
// Free a block of memory that has been customMalloc'ed
void customFree(void * addr)
{
size_t size = _remove_mem_entry(addr);
munmap(addr, size);
}
 
int main(int argc, char const *argv[])
{
int *p1 = customMalloc(4*sizeof(int)); // allocates enough for an array of 4 int
int *p2 = customMalloc(sizeof(int[4])); // same, naming the type directly
int *p3 = customMalloc(4*sizeof *p3); // same, without repeating the type name
if(p1) {
for(int n=0; n<4; ++n) // populate the array
p1[n] = n*n;
for(int n=0; n<4; ++n) // print it back out
printf("p1[%d] == %d\n", n, p1[n]);
}
customFree(p1);
customFree(p2);
customFree(p3);
return 0;
}</syntaxhighlight>
 
This is ''not'' how the real malloc is implemented on Linux. For one, memory leaks cannot be caught by Valgrind, and using a linked list to keep track of allocated blocks is very inefficient.
 
=={{header|C++}}==
* You can replace the global allocation/deallocation routines, which are used by new/delete whenever there are no class specific functions available.
* You can write operator new/operator delete with additional arguments, both in a class and globally. To use those, you add those parameters after the keyword <code>new</code>, like
<syntaxhighlight lang="cpp">T* foo = new(arena) T;</syntaxhighlight>
* In addition, for objects in containers, there's a completely separate allocator interface, where the containers use an allocator object for allocating/deallocating memory.
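
As a minimal sketch of that last point (illustrative only; a real pool allocator would hand out memory from its own arena instead of forwarding to <code>operator new</code>), a container can be parameterized with a custom allocator type:
<syntaxhighlight lang="cpp">#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

// Minimal C++11 allocator that simply logs its calls.
template<class T>
struct LoggingAllocator
{
    typedef T value_type;

    LoggingAllocator() = default;
    template<class U> LoggingAllocator(const LoggingAllocator<U>&) {}

    T* allocate(std::size_t n)
    {
        std::cout << "allocating space for " << n << " element(s)\n";
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

template<class T, class U>
bool operator==(const LoggingAllocator<T>&, const LoggingAllocator<U>&) { return true; }
template<class T, class U>
bool operator!=(const LoggingAllocator<T>&, const LoggingAllocator<U>&) { return false; }

int main()
{
    // The vector obtains all of its storage through the allocator object.
    std::vector<int, LoggingAllocator<int>> v;
    for (int i = 0; i < 5; ++i) v.push_back(i);
    for (int x : v) std::cout << x << ' ';
    std::cout << '\n';
}</syntaxhighlight>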
 
The following code uses class-specific allocation and deallocation functions:
 
<syntaxhighlight lang="cpp">#include <cstdlib>
#include <cassert>
#include <new>
memory(static_cast<char*>(::operator new(size))),
free(memory),
end(memory + size)
{
prev = cur;
delete my_second_pool; // also deallocates the memory for p6 and p7
 
} // Here my_pool goes out of scope, deallocating the memory for p1, p2 and p3</syntaxhighlight>

=={{header|Delphi}}==
{{libheader| Winapi.Windows}}
{{libheader| System.SysUtils}}
{{libheader| system.generics.collections}}
<syntaxhighlight lang="delphi">
program Arena_storage_pool;
 
{$APPTYPE CONSOLE}
 
uses
Winapi.Windows,
System.SysUtils,
system.generics.collections;
 
type
TPool = class
private
FStorage: TList<Pointer>;
public
constructor Create;
destructor Destroy; override;
function Allocate(aSize: Integer): Pointer;
function Release(P: Pointer): Integer;
end;
 
{ TPool }
 
function TPool.Allocate(aSize: Integer): Pointer;
begin
Result := GetMemory(aSize);
if Assigned(Result) then
FStorage.Add(Result);
end;
 
constructor TPool.Create;
begin
FStorage := TList<Pointer>.Create;
end;
 
destructor TPool.Destroy;
var
p: Pointer;
begin
while FStorage.Count > 0 do
begin
p := FStorage[0];
Release(p);
end;
FStorage.Free;
inherited;
end;
 
function TPool.Release(P: Pointer): Integer;
var
index: Integer;
begin
index := FStorage.IndexOf(P);
if index > -1 then
FStorage.Delete(index);
FreeMemory(P);
Result := index;
end;
 
var
Manager: TPool;
int1, int2: PInteger;
str: PChar;
 
begin
Manager := TPool.Create;
int1 := Manager.Allocate(sizeof(Integer));
int1^ := 5;
 
int2 := Manager.Allocate(sizeof(Integer));
int2^ := 3;
 
writeln('Allocate at address ', cardinal(int1).ToHexString, ' with value of ', int1^);
writeln('Allocate at address ', cardinal(int2).ToHexString, ' with value of ', int2^);
 
Manager.Free;
readln;
end.</syntaxhighlight>
{{out}}
<pre>Allocate at address 026788D0 with value of 5
Allocate at address 026788E0 with value of 3</pre>
 
=={{header|Erlang}}==
Given automatic memory handling, the only way to ask for memory in Erlang is by creating a process. Likewise, the only way to manually return memory is by killing a process. So the pool could be built like this. The unit for memory is the word, by the way.
 
<syntaxhighlight lang="erlang">
-module( arena_storage_pool ).
 
 
set( Pid, Key, Value ) -> Pid ! {set, Key, Value}.
</syntaxhighlight>
 
=={{header|Fortran}}==
Run-time memory allocation is a latter-day feature in Fortran. In the beginning, a programme would either fit in the available memory or it would not. Any local variables declared in subroutines, especially arrays, would have some storage requirement that had been fixed at compile time, and space would be reserved for all of them whether any subroutine would be invoked or not in a particular run. Fixed array sizes were particularly troublesome in subroutines, as pre-specifying some largeish size for all such arrays would soon exhaust the available memory and this was especially annoying when it was never going to be the case that all the arrays had to be available simultaneously because not all the subroutines would be invoked or be active together in a particular run. Thus, developers of complicated calculations, say involving a lot of matrix manipulation, would be forced towards devising some storage allocation scheme involving scratchpad arrays that would be passed as additional parameters for subroutines to use as working storage, and soon enough one escalated to having a "pool" array, with portions being reserved and passed about the various routines as needed for a given run. Possibly escalating to further schemes involving disc storage and a lot of effort, repaid in suddenly having larger problems solvable.
 
Fortran 90 standardised two ameliorations. A subroutine can now declare arrays whose size is specified at run time, with storage typically organised via a stack, since on exit from the subroutine such storage is abandoned, which is to say, returned to the system pool. Secondly, within a routine, and not requiring entry into a subroutine (nor a <code>begin ... end;</code> block as in Algol), storage can be explicitly allocated with a specified size for arrays as needed, this time from a "heap" storage pool, and later de-allocated. Again, on exiting the subroutine, storage for such arrays (if declared within the subroutine) is abandoned.
 
Thus, in a sense, a group of items for which storage has been allocated can have their storage released en masse by exiting the routine. However, it is not the case that items A, B, C can be allocated in one storage "area" (say called "Able") and another group D, E in a second named area (say "Baker"), and that by discarding "Able" all its components would be de-allocated without the need to name them in tedious detail.
 
So, for example: <syntaxhighlight lang="fortran"> SUBROUTINE CHECK(A,N) !Inspect matrix A.
REAL A(:,:) !The matrix, whatever size it is.
INTEGER N !The order.
REAL B(N,N) !A scratchpad, size known on entry..
INTEGER, ALLOCATABLE::TROUBLE(:) !But for this, I'll decide later.
INTEGER M
 
M = COUNT(A(1:N,1:N).LE.0) !Some maximum number of troublemakers.
 
ALLOCATE (TROUBLE(1:M**3)) !Just enough.
 
DEALLOCATE(TROUBLE) !Not necessary.
END SUBROUTINE CHECK !As TROUBLE is declared within CHECK.</syntaxhighlight>
 
Whereas previously a problem might not be solvable via the existing code because of excessive fixed-size storage requirements, now reduced demands can be made and those only for subroutines that are in action. Thus larger problems can be handled without agonising attempts to cut-to-fit, the usage for scratchpads such as B being particularly natural as in Algol from the 1960s. But on the other hand, a run might exhaust the available storage (either via the stack or via the heap) somewhere in the middle of job because its particular execution path made too many requests and the happy anticipation of results is instead met by a mess - and a bigger mess, because larger problems are being attempted.
 
 
=={{header|FreeBASIC}}==
<syntaxhighlight lang="freebasic">
/' FreeBASIC supports three basic kinds of memory allocation:
- Static allocation occurs for static and global variables.
The memory is allocated once when the program starts and persists for
the whole lifetime of the program.
- Stack allocation occurs for procedure parameters and local
variables. The memory is allocated when the corresponding block is
entered and released when the block is left, as many times as needed.
- Dynamic allocation is the subject of this article.

Static allocation and stack allocation have two things in common:
- The size of the variable must be known at compile time.
- Memory allocation and deallocation happen automatically (when the
variable is instantiated and then destroyed). The user cannot
anticipate the destruction of such a variable.

Most of the time that is fine. However, there are situations in which
one or other of these restrictions causes problems (when the memory
needed depends on user input, the size can only be determined at
run time).

1) Keywords for dynamic memory allocation:
There are two sets of keywords for dynamic allocation/deallocation:
* Allocate / Callocate / Reallocate / Deallocate: for raw memory
allocation and then deallocation, for predefined simple types
or user buffers.
* New / Delete: for memory allocation + construction, then
destruction + deallocation.
Mixing keywords from these two sets when managing the same memory
block is strongly discouraged.

2) Variant using Redim / Erase:
FreeBASIC also supports dynamic arrays (variable-length arrays).
The memory used by a dynamic array to store its elements is
allocated at run time on the heap. Dynamic arrays can hold simple
types and complex objects.
When using Redim, the user does not need to call the constructor /
destructor because Redim does so automatically when adding / removing
an element.
Erase then destroys all remaining elements to completely free
the memory allocated to them.
 
'/
 
Type UDT
Dim As String S = "FreeBASIC" '' induce an implicit constructor and destructor
End Type
 
' 3 then 4 objects: Callocate, Reallocate, Deallocate, (+ .constructor + .destructor)
Dim As UDT Ptr p1 = Callocate(3, Sizeof(UDT)) '' allocate cleared memory for 3 elements (string descriptors cleared,
'' but maybe useless because of the constructor's call right behind)
For I As Integer = 0 To 2
p1[I].Constructor() '' call the constructor on each element
Next I
For I As Integer = 0 To 2
p1[I].S &= Str(I) '' add the element number to the string of each element
Next I
For I As Integer = 0 To 2
Print "'" & p1[I].S & "'", '' print each element string
Next I
Print
p1 = Reallocate(p1, 4 * Sizeof(UDT)) '' reallocate memory for one additional element
Clear p1[3], 0, 3 * Sizeof(Integer) '' clear the descriptor of the additional element,
'' but maybe useless because of the constructor's call right behind
p1[3].Constructor() '' call the constructor on the additional element
p1[3].S &= Str(3) '' add the element number to the string of the additional element
For I As Integer = 0 To 3
Print "'" & p1[I].S & "'", '' print each element string
Next I
Print
For I As Integer = 0 To 3
p1[I].Destructor() '' call the destructor on each element
Next I
Deallocate(p1) '' deallocate the memory
Print
 
' 3 objects: New, Delete
Dim As UDT Ptr p2 = New UDT[3] '' allocate memory and construct 3 elements
For I As Integer = 0 To 2
p2[I].S &= Str(I) '' add the element number to the string of each element
Next I
For I As Integer = 0 To 2
Print "'" & p2[I].S & "'", '' print each element string
Next I
Print
Delete [] p2 '' destroy the 3 element and deallocate the memory
Print
 
' 3 objects: Placement New, (+ .destructor)
Redim As Byte array(0 To 3 * Sizeof(UDT) - 1) '' allocate buffer for 3 elements
Dim As Any Ptr p = @array(0)
Dim As UDT Ptr p3 = New(p) UDT[3] '' only construct the 3 elements in the buffer (placement New)
For I As Integer = 0 To 2
p3[I].S &= Str(I) '' add the element number to the string of each element
Next I
For I As Integer = 0 To 2
Print "'" & p3[I].S & "'", '' print each element string
Next I
Print
For I As Integer = 0 To 2
p3[I].Destructor() '' call the destructor on each element
Next I
Erase array '' deallocate the buffer
Print
 
' 3 then 4 objects: Redim, Erase
Redim As UDT p4(0 To 2) '' define a dynamic array of 3 elements
For I As Integer = 0 To 2
p4(I).S &= Str(I) '' add the element number to the string of each element
Next I
For I As Integer = 0 To 2
Print "'" & p4(I).S & "'", '' print each element string
Next I
Print
Redim Preserve p4(0 To 3) '' resize the dynamic array for one additional element
p4(3).S &= Str(3) '' add the element number to the string of the additional element
For I As Integer = 0 To 3
Print "'" & p4(I).S & "'", '' print each element string
Next I
Print
Erase p4 '' erase the dynamic array
Print
Sleep
</syntaxhighlight>
 
 
=={{header|Go}}==
<syntaxhighlight lang="go">package main
 
import (
*j = 8
fmt.Println(*i + *j) // prints 15
}</syntaxhighlight>
{{output}}
<pre>
15
</pre>

=={{header|J}}==
For example, you can define a class which allocates a pool of integers:
 
<syntaxhighlight lang="j">coclass 'integerPool'
require 'jmf'
create=: monad define
set=: adverb define
y memw m
)</syntaxhighlight>
 
With this script you can then create instances of this class, and use them. In this case, we will create a pool of three integers:
 
<syntaxhighlight lang="j"> pool0=: 3 conew 'integerPool'
x0=: alloc__pool0 0
x1=: alloc__pool0 0
x2 set__pool0 9
x0 get__pool0 + x1 get__pool0 + x2 get__pool0
24</syntaxhighlight>
 
Finally, the pool can be destroyed:
 
<syntaxhighlight lang="j"> destroy__pool0 _</syntaxhighlight>
 
That said, using J's built-in support for integers (and for using them) usually results in better code.
 
=={{header|Java}}==
The Java programmer does not generally need to be concerned about memory management as the
Java Virtual Machine (JVM) automatically takes care of memory allocation for new objects,
and the releasing of that memory when an object is no longer needed. The latter operation
is accomplished by a garbage collector which runs in the background at intervals determined
by the JVM. The programmer can force a garbage collection by invoking the System.gc() method,
although this cannot be restricted to a single specified object.
The programmer can simulate an arena storage pool by using an array or one of the built-in
classes such as List, Map or Set. A simple example is given below.
<syntaxhighlight lang="java">
 
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;
 
public final class ArenaStoragePool {
 
public static void main(String[] args) {
List<Object> storagePool = new ArrayList<Object>();
storagePool.addLast(42);
storagePool.addLast("Hello World");
storagePool.addFirst(BigInteger.ZERO);
System.out.println(storagePool);
storagePool = null;
System.gc();
}
 
}
</syntaxhighlight>
{{ out }}
<pre>
[0, 42, Hello World]
</pre>
 
=={{header|Julia}}==
All program elements in Julia are dynamically allocated objects which are garbage collected
as required after they are out of scope. If a specific storage pool is needed in advance, perhaps for
memory efficiency reasons, that pool can be optionally preallocated as an array or other large structure.
For example, a large 1000 X 1000 X 1000 matrix that will need to be changed repeatedly might be
allocated and initialized to zero with:
<syntaxhighlight lang="julia">
matrix = zeros(Float64, (1000,1000,1000))
# use matrix, then when done set variable to 0 to garbage collect the matrix:
matrix = 0 # large memory pool will now be collected when needed
</syntaxhighlight>
 
=={{header|Kotlin}}==
The Kotlin JVM programmer does not generally need to worry about memory management as the JVM automatically takes care of memory allocation for new objects and the freeing of that memory when an object is no longer needed. The latter is accomplished using a garbage collector which runs in the background at intervals determined by the JVM though the programmer can force a collection using the System.gc() function.
 
Similarly, the Kotlin Native runtime takes care of memory allocation for new Kotlin objects and cleans them up when there are no longer any references to them. Currently, the latter is accomplished using 'automatic reference counting' together with a collector for cyclic references though, in principle, other systems could be plugged in.
 
However, where interoperation with C is required, it is often necessary to allocate memory on the native heap so that a pointer to it can be passed to a C function. It is the responsibility of the Kotlin Native programmer to allocate this memory and free it when no longer needed to avoid memory leaks.
 
In general native memory is allocated via the NativePlacement interface together with the alloc() or allocArray() functions. Currently, placement normally takes place using the nativeHeap object (which calls malloc() and free() in the background) though other placement objects (such as the stack) are possible in principle.
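
For instance, an explicit allocation on the native heap might look like the following minimal sketch (depending on the Kotlin/Native version, an opt-in annotation for the foreign API may also be required, and the memory must be freed manually):
<syntaxhighlight lang="scala">import kotlinx.cinterop.*

fun main(args: Array<String>) {
    val intVar = nativeHeap.alloc<IntVar>()  // backed by malloc()
    intVar.value = 42
    println(intVar.value)
    nativeHeap.free(intVar)                  // must be freed explicitly to avoid a leak
}</syntaxhighlight>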
 
To make life easier for the programmer when a number of allocations are required, it is possible for these to take place under the auspices of a MemScope object (an 'arena') which implements NativePlacement and automatically frees up the memory for these objects when they are no longer in scope. The process is automated by the memScoped function which takes a block as a parameter whose receiver is an implicit MemScope object. Here's a very simple example:
<syntaxhighlight lang="scala">// Kotlin Native v0.5
 
import kotlinx.cinterop.*
 
fun main(args: Array<String>) {
memScoped {
val intVar1 = alloc<IntVar>()
intVar1.value = 1
val intVar2 = alloc<IntVar>()
intVar2.value = 2
println("${intVar1.value} + ${intVar2.value} = ${intVar1.value + intVar2.value}")
}
// native memory used by intVar1 & intVar2 is automatically freed when memScoped block ends
}</syntaxhighlight>
 
{{output}}
<pre>
1 + 2 = 3
</pre>
=={{header|Lua}}==
Lua is a higher-level language with automatic allocation and garbage collection, and in most cases the programmer need not worry about the internal details - though some control over the strategy and scheduling of the garbage collector is exposed. ''(the reader is directed to the Lua Reference Manual if further internal specifics are desired, because it would be impractical to reproduce that material here, particularly since the details may vary among releases)''
 
If the goal is to simply create a collection of 'objects' that can be deleted en masse as a group, here is one possible approach that might suffice..
<syntaxhighlight lang="lua">pool = {} -- create an "arena" for the variables a,b,c,d
pool.a = 1 -- individually allocated..
pool.b = "hello"
pool.c = true
pool.d = { 1,2,3 }
pool = nil -- release reference allowing garbage collection of entire structure</syntaxhighlight>
 
=={{header|Mathematica}}/{{header|Wolfram Language}}==
Mathematica does not allow stack/heap control, so all variables are defined on the heap. However, tags must be given a ''value'' for a meaningful assignment to take place.
<syntaxhighlight lang="mathematica">f[x_] := x^2</syntaxhighlight>
 
=={{header|Nim}}==
Nim proposes two ways to manage memory. In the first one, objects are allocated on the heap using "new" and their deallocation is managed by the compiler and the runtime. In this case, objects are accessed via references ("ref").
 
The second method of allocating objects on the heap consists of doing this the C way, using the procedures "alloc" or "alloc0" to get blocks of memory. Deallocation is managed by the user, who must call "dealloc" to free the memory. Using this method, objects are accessed via pointers ("ptr").
 
The preferred way is of course to use references, which are safe. To allow a better fit to user needs, Nim offers several options for automatic memory management. By specifying "--gc:refc" on the command line when compiling, a garbage collector using reference counting is used. This is currently the default. Other options are "markAndSweep", "boehm" and "go" to use a different GC, each one having its pros and cons. "--gc:none" deactivates automatic memory management entirely, which considerably limits the available functionality.
 
Since version 1.2, two more options have been available. The first one, "--gc:arc", selects "arc" memory management which, in fact, is not a garbage collector. This is the most efficient option, but it cannot free data structures containing cycles. The second option, "--gc:orc", selects "orc" memory management, which is in fact "arc" with a cycle detection mechanism. It will probably become the default option in a future version.
 
In some cases, especially when a lot of similar objects must be allocated, it is a good idea to use a pool. It allows allocating in blocks and freeing everything at once.
 
Nim doesn't provide any mechanism to manage such pools. There is an option, "regions", which may be useful for this purpose, but it is experimental and not documented, so we will ignore it for now.
 
One way to manage pools consists of allocating blocks of memory and using some pointer arithmetic to return pointers to objects. The blocks may be allocated using "new" (in which case they will be freed automatically) or using "alloc" or "alloc0", in which case deallocation must be requested explicitly.
 
We provide here an example using blocks allocated by "new".
 
<syntaxhighlight lang="nim">
####################################################################################################
# Pool management.
 
# Pool of objects.
type
Block[Size: static Positive, T] = ref array[Size, T]
Pool[BlockSize: static Positive, T] = ref object
blocks: seq[Block[BlockSize, T]] # List of blocks.
lastindex: int # Last object index in the last block.
 
#---------------------------------------------------------------------------------------------------
 
proc newPool(S: static Positive; T: typedesc): Pool[S, T] =
## Create a pool with blocks of "S" type "T" objects.
 
new(result)
result.blocks = @[new(Block[S, T])]
result.lastindex = -1
 
#---------------------------------------------------------------------------------------------------
 
proc getItem(pool: Pool): ptr pool.T =
## Return a pointer on a node from the pool.
 
inc pool.lastindex
if pool.lastindex == pool.BlockSize:
# Allocate a new block. It is initialized with zeroes.
pool.blocks.add(new(Block[pool.BlockSize, pool.T]))
pool.lastindex = 0
result = cast[ptr pool.T](addr(pool.blocks[^1][pool.lastindex]))
 
 
####################################################################################################
# Example: use the pool to allocate nodes.
 
type
 
# Node description.
NodePtr = ptr Node
Node = object
value: int
prev: NodePtr
next: NodePtr
 
type NodePool = Pool[5000, Node]
 
proc newNode(pool: NodePool; value: int): NodePtr =
## Create a new node.
 
result = pool.getItem()
result.value = value
result.prev = nil # Not needed, allocated memory being initialized to 0.
result.next = nil
 
proc test() =
## Build a circular list of nodes managed in a pool.
 
let pool = newPool(NodePool.BlockSize, Node)
var head = pool.newNode(0)
var prev = head
for i in 1..11999:
let node = pool.newNode(i)
node.prev = prev
prev.next = node
# Display information about the pool state.
echo "Number of allocated blocks: ", pool.blocks.len
echo "Number of nodes in the last block: ", pool.lastindex + 1
 
test()
 
# No need to free the pool. This is done automatically as it has been allocated by "new".</syntaxhighlight>
 
{{out}}
<pre>
Number of allocated blocks: 3
Number of nodes in the last block: 2000
</pre>
 
=={{header|Oforth}}==
 
The only way to allocate memory is to send the new class method to a class object. This creates an instance of that class on the heap. The heap is managed by the garbage collector.
 
The stacks (the data stack and the execution stack) only hold addresses of these objects. No objects are created on the stacks, apart from small integers.
 
There is no user-defined storage pool and it is not possible to explicitly destroy an object.
 
<syntaxhighlight lang="oforth">Object Class new: MyClass(a, b, c)
MyClass new</syntaxhighlight>
 
=={{header|ooRexx}}==
In ooRexx:
 
=={{header|OxygenBasic}}==
<syntaxhighlight lang="oxygenbasic">
'==============
Class ArenaPool
 
pool.empty
</syntaxhighlight>
 
=={{header|PARI/GP}}==
 
PARI allocates objects on the PARI stack by default, but objects can be allocated on the heap if desired.
<syntaxhighlight lang="c">pari_init(1<<20, 0); // Initialize PARI with a stack size of 1 MB.
GEN four = addii(gen_2, gen_2); // On the stack
GEN persist = gclone(four); // On the heap</syntaxhighlight>
 
=={{header|Pascal}}==
{{works with|Free_Pascal}}
The procedure New allocates memory on the heap:
<syntaxhighlight lang="pascal">procedure New (var P: Pointer);</syntaxhighlight>
 
The Pointer P is typed and the amount of memory allocated on the heap matches the type. Deallocation is done with the procedure Dispose. In ObjectPascal constructors and destructors can be passed to New and Dispose correspondingly. The following example is from the rtl docs of [[Free_Pascal]]
 
<syntaxhighlight lang="pascal">Program Example16;
{ Program to demonstrate the Dispose and New functions. }
Type
T^.i := 0;
Dispose ( T, Done );
end .</syntaxhighlight>
 
Instead of implicitly specifying the amount of memory via a type, the amount can be specified directly with the procedure getmem (out p: pointer; Size: PtrUInt);
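
For example (a minimal sketch; the pointer type and variable names are illustrative):
<syntaxhighlight lang="pascal">Program GetMemExample;
Type
  PInt = ^Integer;
Var
  P : PInt;
Begin
  getmem (P, SizeOf (Integer));  { allocate an explicit number of bytes }
  P^ := 42;
  Writeln (P^);
  freemem (P);                   { return the memory to the heap }
End.</syntaxhighlight>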
 
=={{header|Phix}}==
Phix applications do not generally need to allocate and free memory explicitly, except for use in FFI, and even then the cffi package or
any of the GUI wrappers can handle most or all of it for you automatically. Both arwen and win32lib (both now superseded by pGUI, and note that both are 32-bit only, with 4-byte alignment) contain arena storage implementations which may be of interest: see e.g. allocate_Rect() in demo\arwen\Quick_Allocations.ew, which also offers performance benefits via a circular buffer for short-term use, and also w32new_memset()/w32acquire_mem()/w32release_mem() in win32lib.
 
The simplest approach however is to rely on automatic memory management (as used by pGUI, and first implemented after arwen and win32lib were originally written):
<!--<syntaxhighlight lang="phix">(notonline)-->
<span style="color: #008080;">without</span> <span style="color: #008080;">js</span>
<span style="color: #004080;">atom</span> <span style="color: #000000;">mem</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">allocate</span><span style="color: #0000FF;">(</span><span style="color: #000000;">size</span><span style="color: #0000FF;">,</span><span style="color: #004600;">true</span><span style="color: #0000FF;">)</span>
<!--</syntaxhighlight>-->
If the optional cleanup flag is non-zero (or true, as above), the memory is automatically released once it is no longer required
(ie when the variable mem drops out of scope or gets overwritten, assuming you have not made a copy of it elsewhere, which would
all be handled quite properly and seamlessly, with the deallocation not happening until all copies were also overwritten
or discarded), otherwise (ie cleanup is zero or omitted) the application should invoke free() manually.
 
For completeness, here is a very simplistic arena manager, with just a single pool, not that it would be tricky to implement multiple pools:
<!--<syntaxhighlight lang="phix">(notonline)-->
<span style="color: #008080;">without</span> <span style="color: #008080;">js</span>
<span style="color: #004080;">sequence</span> <span style="color: #000000;">ap</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{}</span>
<span style="color: #008080;">function</span> <span style="color: #000000;">ap_allocate</span><span style="color: #0000FF;">(</span><span style="color: #004080;">integer</span> <span style="color: #000000;">size</span><span style="color: #0000FF;">)</span>
<span style="color: #000080;font-style:italic;">-- allocate some memory and add it to the arena pool 'ap' for later release</span>
<span style="color: #004080;">atom</span> <span style="color: #000000;">res</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">allocate</span><span style="color: #0000FF;">(</span><span style="color: #000000;">size</span><span style="color: #0000FF;">)</span>
<span style="color: #000000;">ap</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">append</span><span style="color: #0000FF;">(</span><span style="color: #000000;">ap</span><span style="color: #0000FF;">,</span><span style="color: #000000;">res</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">return</span> <span style="color: #000000;">res</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">function</span>
<span style="color: #008080;">procedure</span> <span style="color: #000000;">ap_free</span><span style="color: #0000FF;">()</span>
<span style="color: #000080;font-style:italic;">-- free all memory allocated in arena pool 'ap'</span>
<span style="color: #7060A8;">free</span><span style="color: #0000FF;">(</span><span style="color: #000000;">ap</span><span style="color: #0000FF;">)</span>
<span style="color: #000000;">ap</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{}</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">procedure</span>
<!--</syntaxhighlight>-->
 
=={{header|PicoLisp}}==
freed with '[http://software-lab.de/doc/refZ.html#zap zap]'.
 
=={{header|PL/I}}==
Allocation of storage other than via routine or block entry is via the ALLOCATE statement applied to variables declared with the CONTROLLED attribute. Such storage is obtained from and returned to a single "heap" storage area during the course of execution and not necessarily corresponding to the entry and exit of routines or blocks. However, variables can further be declared as being BASED on some other variable which might be considered to be a storage area that can be manipulated separately. This can be escalated to being based IN the storage area of a named variable, say POOL. In this situation, storage for items declared IN the POOL are allocated and de-allocated within the storage space of POOL (and there may be insufficient space in the POOL, whereupon the AREA error condition is raised) so this POOL, although obtained from the system heap, is treated as if it were a heap as well.
 
One reason for doing this is that the addressing of entities within the POOL is relative to the address of the POOL so that pointer variables linking items with the POOL do not employ the momentary machine address of the POOL storage. The point of this is that the contents of a POOL may be saved and read back from a suitable disc file (say at the start of a new execution) and although the memory address of the new POOL may well be different from that during the previous usage, addressing within the new POOL remains the same. In other words, a complex data structure can be developed within the POOL then saved and restored simply by writing the POOL and later reading it back, rather than having to unravel the assemblage in some convention that can be reversed to read it back piece-by-piece. Similarly, if the POOL is a CONTROLLED variable, new POOL areas can be allocated and de-allocated at any time, and by de-allocating a POOL, all of its complex content vanishes in one poof.
 
=={{header|Python}}==

=={{header|Racket}}==
As is common with high-level languages, Racket usually deals with memory automatically. By default, this means using a precise generational GC. However, when there's need for better control over allocation, we can use the <tt>malloc()</tt> function via the FFI, and the many variants that are provided by the GC:
<syntaxhighlight lang="racket">
(malloc 1000 'raw) ; raw allocation, bypass the GC, requires free()-ing
(malloc 1000 'uncollectable) ; no GC, for use with other GCs that Racket can be configured with
(malloc 1000 'atomic-interior) ; same for atomic chunks
(malloc-immobile-cell v) ; allocates a single cell that the GC will not move
</syntaxhighlight>
 
=={{header|Raku}}==
(formerly Perl 6)
 
Raku is a high level language where, to a first approximation, everything is an object. Raku dynamically allocates memory as objects are created and does automatic garbage collection and freeing of memory as objects go out of scope. There is almost no high level control over how memory is managed, it is considered to be an implementation detail of the virtual machine on which it is running.
 
If you absolutely must take manual control over memory management you would need to use the foreign function interface to call into a language that provides the capability, but even that would only affect items in the scope of that function, not anything in the mainline process.
 
There is some ability to specify data types for various objects which allows for (but does not guarantee) more efficient memory layout, but again, it is considered to be an implementation detail, the use that the virtual machine makes of that information varies by implementation maturity and capabilities.
 
=={{header|REXX}}==
In the REXX language, each (internal and external) procedure has its
own storage (memory) to hold local variables and other information
pertaining to a procedure.
<br>Each call to a procedure (to facilitate recursion) has its own
storage.
<br>Garbage collection can be performed after a procedure finishes
or some other external action),
but this isn't specified in the language.
<br>A &nbsp; '''drop''' &nbsp; (a REXX verb) will mark a variable
as not defined, but doesn't necessarily deallocate its storage; the freed
storage can be used by other variables within the program (or procedure).
<br>Essentially, the method used by a particular REXX interpreter isn't
of concern to a programmer as there is but one type of variable
<br>Some REXX interpreters have built-in functions to query how much free
memory is available (these were written when real storage was at a premium
during the early DOS days).<br>
<syntaxhighlight lang="rexx">/*REXX doesn't have declarations/allocations of variables, */
/* but this is the closest to an allocation: */
 
stemmed_array.dog = stemmed_array.6000 / 2
 
drop stemmed_array.</syntaxhighlight>
 
=={{header|Rust}}==
<syntaxhighlight lang="rust">#![feature(rustc_private)]
 
extern crate arena;
 
use arena::TypedArena;
 
fn main() {
// Memory is allocated using the default allocator (currently jemalloc). The memory is
// allocated in chunks, and when one chunk is full another is allocated. This ensures that
// references to an arena don't become invalid when the original chunk runs out of space. The
// chunk size is configurable as an argument to TypedArena::with_capacity if necessary.
let arena = TypedArena::new();
 
// The arena crate contains two types of arenas: TypedArena and Arena. Arena is
// reflection-based and slower, but can allocate objects of any type. TypedArena is faster, and
// can allocate only objects of one type. The type is determined by type inference--if you try
// to allocate an integer, then Rust's compiler knows it is an integer arena.
let v1 = arena.alloc(1i32);
 
// TypedArena returns a mutable reference
let v2 = arena.alloc(3);
*v2 += 38;
println!("{}", *v1 + *v2);
 
// The arena's destructor is called as it goes out of scope, at which point it deallocates
// everything stored within it at once.
}</syntaxhighlight>
 
=={{header|Scala}}==
Today's languages such as Scala rely on memory management outside the scope of the application programmer, so memory leaks and the like are no longer an issue. This is governed by a garbage collector.
The purpose of Scala is realizing solutions, not making trouble with memory, transistors and cooling fans. The use of memory is completely transparent to the Scala programmer. The noticeable effect is garbage-collection latency, which gives somewhat random execution times. There is only one API call which addresses this, <tt>scala.compat.Platform.collectGarbage()</tt>, but its effect is not guaranteed. If you still want to use this "Don't Try This At Home" stuff and your VM is a JVM, the Java run-time library can be used. Good luck.
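
A minimal sketch of what little control there is (written for Scala 2; in recent versions the Platform helper is deprecated in favour of calling System.gc() directly):
<syntaxhighlight lang="scala">object ArenaLike extends App {
  var pool: List[Int] = List(1, 2, 3)      // objects are allocated on the JVM heap
  println(pool.sum)                        // use them
  pool = null                              // drop the only reference to the whole group
  scala.compat.Platform.collectGarbage()   // merely a hint; the JVM decides when to collect
}</syntaxhighlight>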
 
=={{header|Tcl}}==
Tcl does not really expose the heap itself, and while it is possible to use [http://www.swig.org/ SWIG] or [[Critcl]] to map the implementation-level allocator into the language, this is highly unusual.
 
The pool engine class itself (a metaclass):
<syntaxhighlight lang="tcl">package require Tcl 8.6
oo::class create Pool {
superclass oo::class
return $cls
}
}</syntaxhighlight>
Example of how to use:
<syntaxhighlight lang="tcl">Pool create PoolExample {
variable int
 
}
puts ""
PoolExample clearPool</syntaxhighlight>
Produces this output (red text to <tt>stderr</tt>, black text to <tt>stdout</tt>):
<span style="color:red">Initializing ::oo::Obj4::Obj5 with 0
Line 733 ⟶ 1,403:
Finalizing ::oo::Obj4::Obj6 which held 10
Finalizing ::oo::Obj4::Obj7 which held 11</span>
 
=={{header|Wren}}==
Memory is managed automatically by Wren's VM which allocates memory for new objects and reclaims that memory when those objects are no longer used.
 
However, the built-in List class (a dynamic array) does give the programmer some control of memory in that a minimum size can be specified when the List is created. It's also possible to request a garbage collection (though not for a specific unused object) by calling the System.gc method.
 
We can therefore simulate an arena storage pool as follows.
<syntaxhighlight lang="wren">var arena = List.filled(5, 0) // allocate memory for 5 integers
for (i in 0..4) arena[i] = i // insert some integers
System.print(arena) // print them
arena = null // make arena eligible for garbage collection
System.gc() // request immediate garbage collection</syntaxhighlight>
 
{{out}}
<pre>
[0, 1, 2, 3, 4]
</pre>
 
=={{header|zkl}}==
Memory allocation "just happens", unreachable memory is recovered via garbage collection. The closest thing to explicit memory allocation is Data object, which is a bit bucket you can [optionally] set the size of upon creation. However, it grows as needed. The closest thing to "new" is the create method, which tells an object to create a new instance of itself. For this task:
<syntaxhighlight lang="zkl">var pool=List(); // pool could be any mutable container
pool.append(Data(0,1234)); // allocate mem blob and add to pool
pool=Void; // free the pool and everything in it.</syntaxhighlight>
 
{{omit from|6502 Assembly|Does not have controlled memory allocation.}}
{{omit from|8086 Assembly|Does not have controlled memory allocation.}}
{{omit from|68000 Assembly|Does not have controlled memory allocation.}}
{{omit from|Clojure}}
{{omit from|Erlang|Erlang does not have a program-controllable heap.}}
{{omit from|Haskell|Haskell does not have a program-controllable heap.}}
{{omit from|Io}}
{{omit from|Lily}}
{{omit from|Logtalk}}
{{omit from|M4}}
Line 749 ⟶ 1,441:
{{omit from|Oz|Oz does not have a program-controllable heap.}}
{{omit from|TI-89 BASIC|Does not have controlled memory allocation.}}
{{omit from|Z80 Assembly|Does not have controlled memory allocation.}}