Safe mode

 
Along with a simple yes/no answer, describe what features are restricted when running in safe mode.
 
=={{header|6502 Assembly}}==
The 6502 has no safe mode.
 
=={{header|68000 Assembly}}==
There is a "Supervisor Mode"; however, nothing really stops you from setting the supervisor flag if you can execute arbitrary code.
 
=={{header|8080 Assembly}}==
The 8080 has no safe mode.
 
=={{header|8086 Assembly}}==
The 8086 has no safe mode.
 
=={{header|AWK}}==
<syntaxhighlight lang="awk">
# syntax: GAWK --sandbox -f SAFE_MODE.AWK
#
# GAWK's --sandbox option disables the system() function, input redirection
# with getline, output redirection with print/printf, and dynamic extensions
BEGIN {
exit(0)
}
</syntaxhighlight>
 
=={{header|C3}}==
The C3 compiler has a <code>--fast</code> and a <code>--safe</code> mode. The latter, intended for development, enables a wide range of checks, from out-of-bounds and null-dereference checks to runtime contracts with full stack traces.
 
<!-- == Free Pascal == -->
{{omit from|Free Pascal}}
 
=={{header|Frink}}==
Frink provides a security manager that can sandbox the operations a program is allowed to perform. The easiest way to test this is to add the <CODE>--sandbox</CODE> option when starting Frink. This enforces the strictest sandboxing mode. Similarly, when creating a Frink interpreter from Java code, the most restrictive security can be enabled by calling its <CODE><I>Frink</I>.setRestrictiveSecurity(true)</CODE> method.
 
<syntaxhighlight lang="java">
frink.parser.Frink interp = new frink.parser.Frink();
interp.setRestrictiveSecurity(true);
</syntaxhighlight>
 
Below are some operations that can be allowed/disallowed from a custom security manager. For most of these, the permission can be restricted to allow/disallow a ''particular'' file, URL, class, or method:
* Write a file
* Open a graphics window
* Construct an expression from an expression type and argument list
* Transform an expression
* Create a transformation rule
* Set a class-level variable
 
All of these operations are disallowed when the most restrictive security is enabled.
 
=={{header|Go}}==
''Any'' code written in Go is considered to be 'safe' unless it uses one or more of the following features:
 
* The 'unsafe' package.
 
* The 'reflect' package.
 
* cgo.
<br>
Although 'normal' Go code uses pointers, arithmetic on them is not permitted and so they cannot be made to point to arbitrary locations in memory. However, the 'unsafe' package contains features which do allow one to perform pointer arithmetic with all the risks this entails.
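
For illustration, a minimal sketch (the array and offset are arbitrary) of the pointer arithmetic that the 'unsafe' package makes possible:

<syntaxhighlight lang="go">package main

import (
    "fmt"
    "unsafe"
)

func main() {
    a := [3]int32{10, 20, 30}

    // Ordinary, safe Go: pointers exist, but there is no arithmetic on them.
    p := &a[0]
    fmt.Println(*p) // 10

    // With the unsafe package, the address can be advanced by the element size.
    // Nothing prevents the offset from pointing outside the array.
    q := (*int32)(unsafe.Add(unsafe.Pointer(p), unsafe.Sizeof(a[0])))
    fmt.Println(*q) // 20
}</syntaxhighlight>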
 
The 'reflect' package allows one to inspect and manipulate objects of arbitrary types and exposes internal data structures such as string and slice headers. This can result in fragile code where mistakes which would normally be identified at compile time will instead manifest themselves as runtime panics.
 
'cgo' is Go's bridge to using C code. As such it is just as unsafe as writing C code directly.
 
=={{header|J}}==
The ''security level'' (default: 0) can be increased to 1 by executing:
<syntaxhighlight lang="j">(9!:25) 1</syntaxhighlight>
Afterwards, all verbs able to alter the environment outside J are prohibited. See the [https://code.jsoftware.com/wiki/Vocabulary/ErrorMessages#security J Community Wiki] for details regarding the restrictions.
 
=={{header|Jsish}}==
The '''jsish''' interpreter allows a '''-s''', '''--safe''' command line switch to restrict access to the file system.
 
For example, given '''safer.jsi''':
 
<syntaxhighlight lang="javascript">File.write('/tmp/safer-mode.txt', 'data line');</syntaxhighlight>
 
Run in safe mode ('''jsish --safe safer.jsi'''), the write to <code>/tmp</code> is refused, since safe mode blocks file system writes outside any explicitly permitted directories.

Some control is allowed over the restrictions provided by safer mode.
 
<syntaxhighlight lang="javascript">var interp1 = new Interp({isSafe:true, safeWriteDirs:['/tmp'], safeReadDirs:['/tmp']});</syntaxhighlight>
 
=={{header|Julia}}==
Julia does not have a "sandbox" mode that restricts access to operating system resources such as files, since this is considered to be the province of the underlying operating system. Julia does, however, have functions that work directly with raw memory through C-style pointers. Such functions, including <code>unsafe_wrap</code>, <code>unsafe_read</code>, <code>unsafe_load</code>, <code>unsafe_write</code>, <code>unsafe_trunc</code>, <code>unsafe_string</code>, <code>unsafe_store!</code> and <code>unsafe_copyto!</code>, are prefixed with "unsafe_" to indicate that a memory access fault can occur if their arguments are incorrect.
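
A minimal sketch (the vector and index are arbitrary) showing why these functions are segregated by name: <code>unsafe_load</code> reads memory through a raw pointer with no bounds checking:

<syntaxhighlight lang="julia"># GC.@preserve keeps `v` alive while its raw pointer is in use.
v = [10, 20, 30]
GC.@preserve v begin
    p = pointer(v)              # a raw Ptr{Int}, comparable to a C pointer
    println(unsafe_load(p, 2))  # 20; an out-of-range index is undefined behaviour
end</syntaxhighlight>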
 
=={{header|Nim}}==
Nim doesn't provide a safe mode, but it makes a distinction between safe and unsafe features. Safe features are those which cannot corrupt memory integrity, while unsafe ones can.

There are currently no restrictions on using unsafe features, but programmers should be aware that they must be used with care.
 
Here are some unsafe features:
 
* The ones dealing with raw memory and especially those using pointers. Note that Nim makes a difference between pointers which allow access to raw (untraced) memory and references which allow access to traced memory.
 
* Type casting which, contrary to type conversion, is a simple assignment of a new type without any conversion to make the value fully compatible with the new type.
 
* Using <code>cstring</code> variables as no index checking is performed when accessing an element.
 
* Inserting assembly instructions with the <code>asm</code> statement.
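
A small illustrative sketch (not part of the original entry) of the first two features, untraced pointers and <code>cast</code>:

<syntaxhighlight lang="nim"># Raw, untraced pointers and type casting are permitted but unchecked.
var x = 42
let p: ptr int = addr x     # `ptr` is an untraced pointer into raw memory
p[] = 43                    # dereference and write: no safety checks
echo x                      # 43

let bits = cast[uint](p)    # cast: reinterpret the pointer's bits as an integer
echo bits</syntaxhighlight>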
 
=={{header|Perl}}==
There's really no switch to flip to make Perl code more secure. It is up to the programmer to follow security best practices, such as enabling the <code>strict</code> and <code>warnings</code> pragmas, using the 3-argument form of <code>open</code> for filehandles, being careful about the contents of <code>$ENV{PATH}</code>, and so forth. The CPAN module <code>Perl::Critic</code> can be helpful in this regard. Read further on this topic in the language documentation on [https://perldoc.perl.org/perlsec.html Perl security].
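
For illustration, a minimal sketch of those practices; the filename is hypothetical:

<syntaxhighlight lang="perl">use strict;
use warnings;

# The three-argument form of open keeps a caller-supplied name from being
# interpreted as a pipe or as a string with an embedded mode.
my $file = 'data.txt';
open my $fh, '<', $file or die "Cannot open $file: $!";
while (my $line = <$fh>) {
    print $line;
}
close $fh;</syntaxhighlight>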
 
=={{header|Phix}}==
''See [[Untrusted_environment#Phix]]''
 
=={{header|Raku}}==
(formerly Perl 6)
 
''Mostly a cut-n-paste from the [[Untrusted_environment#Raku|Untrusted environment]] task.
 
Raku doesn't really provide a high security mode for untrusted environments. By default, Raku is sort of a walled garden. It is difficult to access memory directly, especially in locations not controlled by the Raku interpreter, so unauthorized memory access is unlikely to be a threat with default Raku commands and capabilities.
 
It is possible (and quite easy) to run Raku with a restricted setting which will disable many IO commands that can be used to access or modify things outside of the Raku interpreter. However, a determined bad actor could theoretically work around the restrictions, especially if the NativeCall interface is available. The NativeCall interface allows directly calling in to and executing code from C libraries, so anything possible in C is now possible in Raku. This is great for all of the power it provides, but along with that comes the responsibility and inherent security risk. The same issue arises with unrestricted loading of modules. If modules can be loaded, especially from arbitrary locations, then any and all restrictions imposed by the setting can be worked around.
 
The restricted setting is modifiable, but by default places restrictions on or completely disables the following things:
:* method gist() ''display method''
 
Really, if you want to lock down a Raku instance so it is "safe" for unauthenticated, untrusted, general access, you are better off running it in some kind of locked down virtual machine or sandbox managed by the operating system rather than trying to build an ad hoc "safe" environment.
 
=={{header|REXX}}==
Regina REXX supports a '''--restricted''' command-line option, and embedded interpreters can also be set to run restricted. Many commands are disabled in this mode, including most access to hosted services. The intrinsic '''FUNCTION REXX()''' extension in GnuCOBOL defaults to restricted mode, and programmers must explicitly use '''FUNCTION REXX-UNRESTRICTED(script, args...)''' for access to the full REXX programming environment from that [[COBOL]] implementation.
 
<syntaxhighlight lang="cobol"> identification division.
program-id. rexxtrial.
 
      *> ...
display "No exception raised: " exception-status
goback.
end program rexxtrial.</syntaxhighlight>
 
{{out}}
<pre>
...
success
No exception raised:</pre>
 
=={{header|Rust}}==
 
While Rust compiles to native code and does not provide any kind of runtime sandbox, it does implement a compile-time enforced distinction between "safe" and "unsafe" code, intended to improve the maintainability of complex codebases by confining sources of certain types of difficult-to-debug problems to small, clearly marked subsets of the code which can be audited more intensely.
 
Safe code, which is the default, cannot cause memory unsafety or data races as long as the unsafe code it depends on upholds the invariants expected of it.
 
Marking a function, block, or trait (interface) with the <code>unsafe</code> keyword enables four additional language capabilities whose correct use the compiler cannot verify. They are intended for building safe abstractions, such as the standard library's <code>Mutex</code> and reference-counted pointers.
 
Those four capabilities are:
* Dereferencing raw pointers (Rust's name for C-style pointers)
* Calling <code>unsafe</code> functions (all foreign functions, as well as native APIs with safety invariants that are impossible or impractical to encode in the type system)
* Interacting with mutable static variables (the idiomatic alternative is "interior mutability" via a wrapper type like <code>Mutex</code> or <code>RwLock</code>, which allows a mutable value to be stored in an "immutable" static variable)
* Implementing an <code>unsafe</code> trait (interface)
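
As a small illustrative sketch (not from the task entry) of the first capability: creating a raw pointer is ordinary safe code, but dereferencing it compiles only inside an <code>unsafe</code> block:

<syntaxhighlight lang="rust">fn main() {
    let x: i32 = 42;

    // Making a raw pointer is safe; it is only a value.
    let p: *const i32 = &x;

    // Dereferencing it is not: the compiler cannot prove the pointer is valid,
    // so the dereference must be wrapped in `unsafe`.
    let y = unsafe { *p };
    assert_eq!(y, 42);
}</syntaxhighlight>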
 
To further the goal of improving maintainability in large codebases, the Rust compiler can also be configured to warn or error out if <code>unsafe</code> code is encountered within a given scope.
 
(An example of this would be an enterprise project where the coders most experienced in low-level work are responsible for the module where <code>unsafe</code> is allowed, while the majority of the codebase lives in modules which depend on the unsafe-containing module, but are configured to forbid the use of <code>unsafe</code> within their own code.)
 
=={{header|Scala}}==
With a high-level programming language such as Scala, it is arguably a bad idea to flag or unflag a "safe mode" in the language itself; this should be a task for the target system.
 
=={{header|Wren}}==
Wren code is considered to be 'safe' in itself. Method/function calls are dynamically checked and generate runtime errors which can be caught and handled. A fiber's stack grows if it gets close to overflowing. There is no support for pointers or reflection, and memory is managed automatically by the runtime.
 
However, when Wren is embedded in a host application, one needs to deal with the embedding API (or a wrapper thereof) to pass data between Wren and the host. The embedding API is written in C and, as such, is intrinsically unsafe. It is generally up to the programmer to ensure that API functions are passed the correct number of arguments and those arguments are of the correct types.
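
As an illustrative sketch only (it assumes the 0.4-series C embedding API; the module name and script are arbitrary), a minimal host program:

<syntaxhighlight lang="c">#include <stdio.h>
#include "wren.h"

/* Nothing on this side is checked by Wren: passing the wrong slot index or
   reading a slot with the wrong type accessor is entirely the host's problem. */
static void writeFn(WrenVM* vm, const char* text) {
    (void)vm;
    fputs(text, stdout);
}

int main(void) {
    WrenConfiguration config;
    wrenInitConfiguration(&config);
    config.writeFn = writeFn;   /* route System.print output to stdout */

    WrenVM* vm = wrenNewVM(&config);
    wrenInterpret(vm, "main", "System.print(\"hello from Wren\")");
    wrenFreeVM(vm);
    return 0;
}</syntaxhighlight>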
 
=={{header|Z80 Assembly}}==
The Z80 has no safe mode.
 
=={{header|zkl}}==
zkl is unsafe. Any program can access any method and many methods access
the system. Additionally, any program can load a program or
eval (compile and run) text.
 
=={{header|Zig}}==
Zig provides compilation modes for safety (<code>Debug</code> and <code>ReleaseSafe</code>) and a per-block annotation (<code>@setRuntimeSafety</code>) that give the user tight control over optimization and safety.
The list of checked safety properties, and the tooling for debugging existing memory problems, is extensive.
Compile-time computations are unconditionally checked for these errors and memory problems.
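
A brief illustrative sketch (assuming a recent Zig compiler) of scoping the checks with <code>@setRuntimeSafety</code>:

<syntaxhighlight lang="zig">const std = @import("std");

// Safety checks can be switched off for one block or function while the rest
// of a Debug or ReleaseSafe build stays checked.
fn addUnchecked(a: u8, b: u8) u8 {
    @setRuntimeSafety(false); // overflow here is undefined behaviour, not a checked panic
    return a + b;
}

pub fn main() void {
    std.debug.print("{}\n", .{addUnchecked(10, 20)}); // 30
}</syntaxhighlight>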
 
Zig is optimized for compilation speed and therefore does not include advanced analyses such as Rust's borrow checker in the compilation phase, nor does it compromise compilation performance to gather the necessary data.
Temporal memory safety and data-race safety are thus not covered by compile-time analysis and must be tested for, e.g. with a thread sanitizer, Valgrind, or the testing allocator.
There are plans to offer upper-bound stack analysis and uninitialized-memory analysis.