Literals/Integer
 
=={{header|11l}}==
<langsyntaxhighlight lang="11l">print(255) // decimal literal
print(0000'00FF) // hexadecimal literal
print(00'FF) // short hexadecimal literal
print(377o) // octal literal
print(1111'1111b) // binary literal
print(255'000) // decimal literal</syntaxhighlight>
 
{{out}}
</pre>
=={{header|6502 Assembly}}==
Conventions vary between assemblers, but typically a $ represents hexadecimal and a % represents binary. The absence of either of those symbols means decimal. Single or double quotes represent an ASCII value. Keep in mind that without a # in front, any quantity is interpreted as a dereference operation at the memory location equal to the supplied number, rather than a constant value.
<langsyntaxhighlight lang="6502asm">;These are all equivalent:, and each load the constant value 65 into the accumulator.
LDA #$41
LDA #65
LDA #%01000001
LDA #'A'</syntaxhighlight>
 
Since all are equivalent, which one you use is entirely up to your preference. It's a good practice to use the representation that conveys the intent and meaning of your data the best.
 
Negative numbers can be represented by a minus sign. Minus signs only work for decimal numbers, not hexadecimal or binary. The assembler will interpret the negative number using the two's complement method, sign-extending it as necessary to fit the context it was provided in. This typically means that -1 maps to 0xFF, -2 to 0xFE, -3 to 0xFD, and so on. For absolute addresses, -1 gets converted to 0xFFFF, -2 to 0xFFFE, etc.
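A minimal illustration of this, assuming an assembler that accepts signed decimal immediates (the resulting bytes follow from the two's complement rule described above):
<syntaxhighlight lang="6502asm">LDA #-1    ;assembles to the same byte as LDA #$FF
LDA #-2    ;assembles to the same byte as LDA #$FE</syntaxhighlight>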
 
=={{header|68000 Assembly}}==
Conventions vary between assemblers, but typically a $ represents hexadecimal and a % represents binary. The absence of either of those symbols means decimal. Single or double quotes represent an ASCII value. Keep in mind that without a # in front, any quantity is interpreted as a dereference operation at the memory location equal to the supplied number, rather than a constant value.
<langsyntaxhighlight lang="68000devpac">;These are all equivalent:
MOVE.B #$41,D0
MOVE.B #65,D0
MOVE.B #%01000001,D0
MOVE.B #'A',D0</syntaxhighlight>
 
=={{header|8086 Assembly}}==
Supported integer literals may differ across assemblers.
The following work with UASM which is MASM-compatible:
* A "0x" prefix or "h" suffix for hexadecimal.
* A % prefix for binary
* No prefix for base 10
 
<syntaxhighlight lang="asm">MOV AX,4C00h
MOV BX,%1111000011110000
MOV CX,0xBEEF
MOV DL,35</syntaxhighlight>
 
=={{header|AArch64 Assembly}}==
 
{{works with|aarch64-linux-gnu-as/qemu-aarch64}}
<syntaxhighlight lang="arm_assembly">.equ STDOUT, 1
.equ SVC_WRITE, 64
.equ SVC_EXIT, 93
_exit:
mov x8, #SVC_EXIT
svc #0</syntaxhighlight>
 
=={{header|Ada}}==
Here <base> can be from the range 2..16.
For example:
<langsyntaxhighlight lang="ada">with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;
 
procedure Test_Literals is
Put (8#1_327#);
Put (2#10_1101_0111#);
end Test_Literals;</syntaxhighlight>
{{out}}
<pre>
</pre>
 
=={{header|Aime}}==
<langsyntaxhighlight lang="aime">if ((727 == 0x2d7) && (727 == 01327)) {
o_text("true\n");
} else {
o_text("false\n");
}</syntaxhighlight>
 
=={{header|ALGOL 68}}==
Binary literals are of type BITS, and need to be converted
to INT using the operator ABS.
<langsyntaxhighlight lang="algol68">main:(
SHORT SHORT INT ssdec = SHORT SHORT 727,
END CO
 
)</syntaxhighlight>
[http://sourceforge.net/projects/algol68/files/algol68g/algol68g-1.18.0/algol68g-1.18.0-9h.tiny.el5.centos.fc11.i386.rpm/download algol68g] output:
<pre>
</pre>

=={{header|ALGOL W}}==
Algol W has only decimal integer literals. Hexadecimal values can be written (prefixed with #) but these are of type "bits" and the
standard number function must be used to "convert" them to an integer.
<langsyntaxhighlight lang="algolw">begin
write( 16, number( #10 ) )
end.</syntaxhighlight>
{{out}}
<pre>
</pre>
 
=={{header|AmigaE}}==
<langsyntaxhighlight lang="amigae">PROC main()
IF ($2d7 = 727) AND (%001011010111 = 727) THEN WriteF('true\n')
ENDPROC</syntaxhighlight>
 
=={{header|ARM Assembly}}==
{{works with|as|Raspberry Pi}}
<syntaxhighlight lang="arm assembly">
/* ARM assembly Raspberry PI */
/* program integer.s */
 
 
</syntaxhighlight>
 
=={{header|Arturo}}==
<langsyntaxhighlight lang="rebol">num: 18966
 
print [num "->" type num]</syntaxhighlight>
 
{{out}}
 
=={{header|AutoHotkey}}==
<syntaxhighlight lang="autohotkey">If (727 == 0x2d7)
MsgBox true</syntaxhighlight>
 
=={{header|Avail}}==
Avail's built-in lexers recognize "traditional" binary, octal, and hexadecimal prefixes <code>0b</code>, <code>0o</code>, and <code>0x</code> respectively:
<syntaxhighlight lang="avail">Print: "0b11001101 = " ++ “0b11001101”;
Print: "0o755 = " ++ “0o755”;
Print: "0xDEADBEEF = " ++ “0xDEADBEEF”;</langsyntaxhighlight>
Arbitrary integer bases from 2 to 36 are supported with the format ''digits r base''. As additional digit characters are needed, they are taken from the latin alphabet in order.
<syntaxhighlight lang="avail">Print: "ZZr36 = " ++ “ZZr36”;</syntaxhighlight>
While the task limits examples to those understood by the compiler and "not involve the calling of any functions/methods", the line is not so clear cut in Avail. For example, one could define new lexers to understand new integer formats which are then accepted by the compiler, allowing for an unlimited array of integer literal kinds.
 
=={{header|AWK}}==
 
{{works with|gawk|3.1.7}}
<langsyntaxhighlight lang="awk">BEGIN {
if ( (0x2d7 == 727) &&
(01327 == 727) ) {
print "true with GNU awk"
}
}</syntaxhighlight>
 
nawk parses <tt>01327</tt> as <tt>1327</tt>, and parses <tt>0x2d7</tt> as <tt>0 x2d7</tt> (which is the string concatenation of <tt>"0"</tt> and variable <tt>x2d7</tt>).
 
<langsyntaxhighlight lang="awk">BEGIN {
x2d7 = "Goodbye, world!"
print 0x2d7 # gawk prints "727", nawk prints "0Goodbye, world!"
print 01327 # gawk prints "727", nawk prints "1327"
}</syntaxhighlight>
 
=={{header|Axe}}==
In addition to decimal integer literals, Axe supports hexadecimal and binary integers using a leading exponent operator or pi, respectively. Note that the leading E below is the small-caps E.
<langsyntaxhighlight lang="axe">123
ᴇFACE
π101010</syntaxhighlight>
 
=={{header|BASIC}}==
&O = octal; &H = hexadecimal. Some flavors of BASIC also support &B = binary, but they're somewhat rare.
 
<langsyntaxhighlight lang="qbasic">PRINT 17
PRINT &O21
PRINT &H11</syntaxhighlight>
Output:
<pre>17
</pre>
==={{header|BaCon}}===
BaCon allows (as it converts to C) C style integer literals. zero prefix Octal, 0x prefix Hexadecimal, no prefix Decimal, and if supported by the underlying compiler, 0b prefix for Binary. 0x and 0b can be upper case 0X and 0B.
<langsyntaxhighlight lang="freebasic">' literal integers
PRINT 10
PRINT 010
PRINT 0x10
' C compiler dependent, GCC extension
PRINT 0b10</syntaxhighlight>
 
{{out}}
16
2</pre>
 
==={{header|BASIC256}}===
<syntaxhighlight lang="freebasic">print 17
print 0o21 #octal
print 0x11 #hexadecimal
print 0b10001 #binary
 
print FromRadix(17,10) #FromRadix(string, base)
print FromOctal(21)
print FromHex(11)
print FromBinary(10001)</syntaxhighlight>
 
==={{header|BBC BASIC}}===
<langsyntaxhighlight lang="bbcbasic"> PRINT 1234 : REM Decimal
PRINT &4D2 : REM Hexadecimal
      PRINT %10011010010 : REM Binary</syntaxhighlight>
'''Output:'''
<pre>
</pre>
 
==={{header|IS-BASIC}}===
<syntaxhighlight lang="is-basic">PRINT 17
PRINT BIN(10001)
PRINT ORD(HEX$("11"))</langsyntaxhighlight>
 
==={{header|Yabasic}}===
<syntaxhighlight lang="freebasic">print 17
print 0x11 //hexadecimal
print 0b10001 //binary
 
print dec("11",16)
print dec("10001",2)</syntaxhighlight>
 
=={{header|bc}}==
This example shows the literal -727 in all bases from 2 to 16. (It never prints "Impossible!")
 
<langsyntaxhighlight lang="bc">ibase = 2
b[10] = -1011010111
ibase = 11 /* 3 */
for (i = 2; i <= 16; i++) if (b[i] != -727) "Impossible!
"
quit</syntaxhighlight>
 
The digits 0-9 and A-F are valid with all input bases. For example, FF from base 2 is 45 (because 15 * 2 + 15 is 45), and FF from base 10 is 165 (because 15 * 10 + 15 is 165). Most importantly, <tt>ibase = A</tt> always switches to base ten.
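A short interactive session sketching the rules above (each expression statement prints its own value):
<syntaxhighlight lang="bc">ibase = 2
FF        /* prints 45, i.e. 15 * 2 + 15 */
ibase = A /* A is ten in every input base, so this switches back to base ten */
FF        /* prints 165, i.e. 15 * 10 + 15 */</syntaxhighlight>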
=={{header|Befunge}}==
While Befunge doesn't directly support numbers aside from 0-9 (base 10), characters in strings are essentially treated as base-256 numbers.
 
<langsyntaxhighlight lang="befunge">" ~"..@</langsyntaxhighlight>
 
Output:
126 32
 
=={{header|BQN}}==
BQN only supports base ten integer literals. There are some things to note, however:
 
A high minus must be used instead of a plain minus for negative numbers (also a feature of APL):
<syntaxhighlight lang="bqn">¯5
¯3000</syntaxhighlight>
 
Underscores are ignored in numeric literals in general.
<syntaxhighlight lang="bqn">1_000_000</syntaxhighlight>
 
=={{header|Bracmat}}==
=={{header|C}}==
Leading 0 means octal, 0x or 0X means hexadecimal. Otherwise, it is just decimal.
 
<langsyntaxhighlight lang="c">#include <stdio.h>
 
int main(void)
 
return 0;
}</syntaxhighlight>
 
GCC supports specifying integers in binary using the [http://gcc.gnu.org/onlinedocs/gcc/Binary-constants.html 0b prefix] syntax, but it's not standard. Standard C has no way of specifying integers in binary.
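A one-line sketch of the GCC extension (not portable to compilers without it):
<syntaxhighlight lang="c">int mask = 0b101101;   /* GCC extension: binary literal, value 45 */</syntaxhighlight>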
=={{header|C sharp|C#}}==
C# has decimal and hexadecimal integer literals, the latter of which are prefixed with <code>0x</code>:
<langsyntaxhighlight lang="csharp">int a = 42;
int b = 0x2a;</syntaxhighlight>
Literals of either form can be suffixed with <code>U</code> and/or <code>L</code>. <code>U</code> will cause the literal to be interpreted as an unsigned type (necessary for numbers exceeding 2<sup>31</sup> or hex literals that have a first digit larger than <code>7</code>) and <code>L</code> signifies the use of a <code>long</code> type – using <code>UL</code> or <code>LU</code> as suffix will then use <code>ulong</code>. C# has no syntactic notion of expressing integer literals of smaller types than <code>Int32</code>; it is a compile-time error to have an assignment such as
<langsyntaxhighlight lang="csharp">byte x = 500;</langsyntaxhighlight>
'''Update'''<br/>
As of C#7, integer literals can be written in binary with the prefix <code>0b</code>. Furthermore, underscores can be used as separators:
<langsyntaxhighlight lang="csharp">
int x = 0b1100_1001_1111_0000;
</syntaxhighlight>
 
=={{header|C++}}==
The same comments apply as to the [[#C|C example]].
 
<langsyntaxhighlight lang="cpp">#include <iostream>
 
int main()
return 0;
}</syntaxhighlight>
 
=={{header|Cherrycake}}==
 
<syntaxhighlight lang="cherrycake">
515142 # Interpreted as an integer, 515142
0b10111011 # Interpreted as a binary integer, 10111011 (187)
0x0AB3 # Interpreted as a hexadecimal integer, 0AB3 (2739)
</syntaxhighlight>
 
=={{header|Clojure}}==
Clojure uses the Java octal (0...) and hexadecimal (0x...) notation; for any other base, nR... is used, 2 <= n <= 36.
 
<langsyntaxhighlight lang="lisp">user=> 2r1001
9
user=> 8r64
user=> 0x4b
75
user=></syntaxhighlight>
 
=={{header|COBOL}}==
</pre>
 
<syntaxhighlight lang="cobol">
display B#10 ", " O#01234567 ", " -0123456789 ", "
H#0123456789ABCDEF ", " X#0123456789ABCDEF ", " 1;2;3;4
</syntaxhighlight>
 
{{out}}
Some characters are removed by the COBOL text manipulation facility, and are allowed in numeric literals. These symbols are stripped out, along with comment lines, before being seen by the compiler proper.
 
<langsyntaxhighlight lang="cobol">
if 1234 = 1,2,3,4 then display "Decimal point is not comma" end-if
if 1234 = 1;2;3;4 then display "literals are equal, semi-colons ignored" end-if
</syntaxhighlight>
 
Comma is a special case, as COBOL can be compiled with <code>DECIMAL POINT IS COMMA</code> in the <code>CONFIGURATION SECTION</code>. The <tt>1,2,3,4</tt> comparison test above would cause a compile time syntax error when <code>DECIMAL POINT IS COMMA</code> is in effect.
 
=={{header|Comal}}==
<syntaxhighlight lang="comal">IF 37=$25 THEN PRINT "True"
IF 37=%00100101 THEN PRINT "True"
</syntaxhighlight>
 
=={{header|Common Lisp}}==
 
binary: #b, octal: #o, hexadecimal: #x, any base from 2 to 36: #Nr
<langsyntaxhighlight lang="lisp">>(= 727 #b1011010111)
T
>(= 727 #o1327)
T
>(= 727 #20r1g7)
T</syntaxhighlight>
 
=={{header|D}}==
 
D besides hexadecimal, has also binary base. Additionally you can use '''_''' to separate digits in integer (and FP) literals. Octal number literals are library-based to avoid bugs caused by the leading zero.
<langsyntaxhighlight lang="d">import std.stdio, std.conv;
 
void main() {
 
writefln("%x", 0xFEE1_BAD_CAFE_BABEuL);
}</syntaxhighlight>
{{out}}
<pre>oct: 511
</pre>
 
=={{header|DCL}}==
<syntaxhighlight lang="dcl">$ decimal1 = 123490
$ decimal2 = %D123490
$ octal = %O12370
$ hex = %X1234AF0</syntaxhighlight>
 
=={{header|Delphi}}==
<syntaxhighlight lang="delphi">const
DEC_VALUE = 256; // decimal notation
HEX_VALUE = $100; // hexadecimal notation
BIN_VALUE = %100000000; // binary notation (since Delphi 10.4 version)
</syntaxhighlight>
 
=={{header|DWScript}}==
 
DWScript has decimal and hexadecimal integer literals, the latter of which are prefixed with <code>$</code>:
<langsyntaxhighlight lang="delphi">var a : Integer := 42;
var b : Integer := $2a;</syntaxhighlight>
Both notations can also be used for character codes (when prefixed by <code>#</code>).
 
=={{header|Dyalect}}==
Dyalect has decimal and hexadecimal integer literals, the latter of which are prefixed with 0x:
 
<syntaxhighlight lang="dyalect">var a = 42
var b = 0x2a</syntaxhighlight>
 
=={{header|Dylan}}==
<syntaxhighlight lang="dylan">42 // a decimal integer
#x2A // a hexadecimal integer
#o52 // an octal integer
#b101010 // a binary integer</syntaxhighlight>
 
=={{header|E}}==
 
<langsyntaxhighlight lang="e">? 256
# value: 256
 
 
? 0123
# syntax error: Octal is no longer supported: 0123</syntaxhighlight>
 
=={{header|EasyLang}}==
EasyLang's ability to use hexadecimal literals is undocumented.
<syntaxhighlight lang="easylang">
decimal = 57
hexadecimal = 0x39
print decimal
print hexadecimal
</syntaxhighlight>
{{out}}
<pre>
57
57
</pre>
 
=={{header|Efene}}==
 
<langsyntaxhighlight lang="efene">@public
run = fn () {
io.format("0xff : ~B~n", [0xff])
Line 734 ⟶ 801:
io.format("0b1011: ~B~n", [0b1011])
}
</syntaxhighlight>
 
=={{header|Eiffel}}==
Integer literals can be specified in decimal, hexadecimal, octal and binary. Only decimal literals can have an optional sign. Underscores may also be used as separators, but cannot begin or end the literal. Literals are case insensitive.<syntaxhighlight lang="eiffel">
123 -- decimal
-1_2_3 -- decimal
Line 743 ⟶ 810:
0c173 -- octal
0b111_1011 -- binary
</syntaxhighlight>
 
Literals are by default interpreted as type INTEGER, where INTEGER is a synonym for either INTEGER_32 or INTEGER_64 (depending on the compiler option) but can be explicitly converted to another type.<syntaxhighlight lang="eiffel">
{NATURAL_8} 255
{INTEGER_64} 2_147_483_648
</syntaxhighlight>
 
=={{header|Elena}}==
<langsyntaxhighlight lang="elena">
var n := 1234; // decimal number
var x := 1234h; // hexadecimal number
</syntaxhighlight>
 
=={{header|Elixir}}==
<langsyntaxhighlight lang="elixir">1234 #=> 1234
1_000_000 #=> 1000000
0010 #=> 10
0B10 #=> syntax error before: B10
0X10 #=> syntax error before: X10
0xFF       #=> 255</syntaxhighlight>
 
=={{header|Emacs Lisp}}==
<syntaxhighlight lang="lisp">123     ;; decimal       all Emacs
#b101 ;; binary Emacs 21 up, XEmacs 21
#o77 ;; octal Emacs 21 up, XEmacs 21
#xFF ;; hex Emacs 21 up, XEmacs 21
#3r210  ;; any radix 2-36  Emacs 21 up (but not XEmacs 21.4)</syntaxhighlight>
 
The digits and the radix character can both be any mixture of upper and lower case. See [http://www.gnu.org/software/emacs/manual/html_node/elisp/Integer-Basics.html GNU Elisp reference manual "Integer Basics"].
 
=={{header|EMal}}==
<syntaxhighlight lang="emal">
^|
| EMal internally uses 64 bit signed integers.
|^
int hex = 0xff # base16
int oct = 0o377 # base8
int bin = 0b11111111 # base2
int dec = 255 # base10
writeLine(hex)
writeLine(oct)
writeLine(bin)
writeLine(dec)
# here we check that they give the same value
writeLine(0b1011010111 == 0o1327 and
0o1327 == 0x2d7 and
0x2d7 == 727 and
727 == 0b1011010111)
</syntaxhighlight>
{{out}}
<pre>
255
255
255
255
</pre>
 
=={{header|Erlang}}==
Erlang allows integer literals in bases 2 through 36. The format is Base#Number. For bases greater than 10, the values 10-35 are represented by A-Z or a-z.
<langsyntaxhighlight lang="erlang">
> 2#101.
5
> 36#3z.
143
</syntaxhighlight>
 
=={{header|ERRE}}==
% = binary, & = octal; $ = hexadecimal.
<syntaxhighlight lang="erre">
PRINT(17)
PRINT(&21)
PRINT($11)
PRINT(%1001)
</syntaxhighlight>
Output:
<pre>
Line 806 ⟶ 901:
 
=={{header|Euphoria}}==
<langsyntaxhighlight lang="euphoria">
printf(1,"Decimal:\t%d, %d, %d, %d\n",{-10,10,16,64})
printf(1,"Hex:\t%x, %x, %x, %x\n",{-10,10,16,64})
Line 813 ⟶ 908:
printf(1,"Floating Point\t%3.3f, %3.3f, %+3.3f\n",{-10,10.2,16.25,64.12625})
printf(1,"Floating Point or Exponential: %g, %g, %g, %g\n",{10,16,64,123456789.123})
</syntaxhighlight>
{{out}}
<pre>
Line 827 ⟶ 922:
===Base prefixes===
Binary numbers begin with 0b, octal numbers with 0o, and hexadecimal numbers with 0x. The hexadecimal digits A-F may be in any case.
<langsyntaxhighlight lang="fsharp">0b101 // = 5
0o12 // = 10
0xF   // = 16</syntaxhighlight>
 
===Type suffixes===
Most type suffixes can be preceded with a 'u', which indicates the type is unsigned.
<langsyntaxhighlight lang="fsharp">10y // 8-bit
'g'B // Character literals can be turned into unsigned 8-bit literals
10s // 16-bit
Line 840 ⟶ 935:
10I // Bigint (cannot be preceded by a 'u')
 
10un // Unsigned native int (used to represent pointers)</syntaxhighlight>
 
=={{header|Factor}}==
<langsyntaxhighlight lang="factor">10 . ! decimal
0b10 . ! binary
-0o10 . ! octal
0x10  . ! hexadecimal</syntaxhighlight>
{{out}}
<pre>
Line 855 ⟶ 950:
</pre>
Factor also supports the arbitrary use of commas in integer literals:
<langsyntaxhighlight lang="factor">1,234,567 .
1,23,4,567 .</syntaxhighlight>
{{out}}
<pre>
Line 862 ⟶ 957:
1234567
</pre>
 
=={{header|Fennel}}==
<syntaxhighlight lang="fennel">;; Fennel, like Lua, supports base 10 and hex literals (with a leading 0x).
1234 ;1234
0x1234 ;4660
 
;; Optionally, underscores can be used to split numbers into readable chunks.
123_456_789 ;123456789
0x1234_5678 ;305419896</syntaxhighlight>
 
=={{header|Forth}}==
The standard method for entering numbers of a particular base is to set the user variable BASE to the desired radix from 2 to 36. There are also convenience words for setting the base to DECIMAL and HEX.
<langsyntaxhighlight lang="forth">HEX
FEEDFACE
2 BASE !
DECIMAL
1234
: mask var @ [ base @ hex ] 3fff and [ base ! ] var ! ;</syntaxhighlight>
The Forth numeric parser will look for symbols embedded within the stream of digits to determine whether to interpret it as a single cell, double cell, or floating point literal ('e').
<langsyntaxhighlight lang="forth">1234 ( n )
123.4 ( l h )
123e4 ( F: n )</syntaxhighlight>
 
===Base prefixes===
{{works with|GNU Forth}}
In addition, many Forths have extensions for using a prefix to temporarily override BASE when entering an integer literal. These are the prefixes supported by GNU Forth.
<langsyntaxhighlight lang="forth">$feedface \ hexadecimal
&1234 \ decimal
%1001101 \ binary
'a        \ base 256 (ASCII literal)</syntaxhighlight>
Some Forths also support "0xABCD" hex literals for compatibility with C-like languages.
 
=={{header|Fortran}}==
 
<langsyntaxhighlight lang="fortran">program IntegerLiteral
 
implicit none
print *, dec, hex, oct, bin
 
end program IntegerLiteral</syntaxhighlight>
 
Outputs:
 
=={{header|FreeBASIC}}==
<langsyntaxhighlight lang="freebasic">' FB 1.05.0 Win64
 
' The following all print 64 to the console
Print 64ULL '' Decimal unsigned 8 byte integer (ULongInt)
 
Sleep</syntaxhighlight>
 
=={{header|Frink}}==
Bases from 2 to 36 are allowed in Frink. All literals can be arbitrarily large. Frink does not subscribe to the insanity that a leading 0 implies octal.
<langsyntaxhighlight lang="frink">
123456789123456789 // (a number in base 10)
123_456_789_123_456_789 // (the same number in base 10 with underscores for readability)
0b100001000101111111101101 // (Common binary notation)
0b1000_0100_0101_1111_1110_1101 // (Binary with underscores for readability)
</syntaxhighlight>
 
=={{header|FutureBasic}}==
<langsyntaxhighlight lang="futurebasic">window 1
include "ConsoleWindow"
 
printf @" Decimal: %ld", 100
def tab 2
printf @" Hexadecimal: %x", 100
printf @" Octal: %o", 100
print @" Binary: "; bin$(100)
 
HandleEvents</syntaxhighlight>
print " Decimal 100:", 100
print " Hexadecimal &h64:", &h64, hex$(100)
print " Octal &o144:", &o144, oct$(100)
print " Binary &x1100100:", &x1100100, bin$(100)
</lang>
Output:
<pre>
Decimal 100: 100
Hexadecimal &h64: 100 0000006464
Octal &o144: 100 00000000144144
Binary &x1100100: 100 Binary: 00000000000000000000000001100100
</pre>
 
=={{header|GAP}}==
 
<langsyntaxhighlight lang="gap"># Only decimal integers, but of any length
31415926
1606938044258990275541962092341162602522202993782792835301376</syntaxhighlight>
 
=={{header|Go}}==
Constant expressions are evaluated at compile time at an arbitrary precision.
It is only when a constant is assigned to a variable that it is given a type and an error produced if the constant value cannot be represented as a value of the respective type.
<langsyntaxhighlight lang="go">package main
 
import "fmt"
fmt.Println(727 == '˗') // prints true
}
</syntaxhighlight>
 
=={{header|Groovy}}==
Solution:
<langsyntaxhighlight lang="groovy">println 025 // octal
println 25 // decimal integer
println 25l // decimal long
println 25g // decimal BigInteger
println 0x25   // hexadecimal</syntaxhighlight>
 
Output:
=={{header|Harbour}}==
Hexadecimal integer literals are supported - the leading symbols must be 0x or 0X:
<syntaxhighlight lang="visualfoxpro">? 0x1f</syntaxhighlight>
Output:
<pre>31</pre>
=={{header|Haskell}}==
 
Oct(leading 0o or 0O), Hex(leading 0x or 0X)
<langsyntaxhighlight lang="haskell">Prelude> 727 == 0o1327
True
Prelude> 727 == 0x2d7
True</syntaxhighlight>
 
=={{header|hexiscript}}==
<langsyntaxhighlight lang="hexiscript"># All equal to 15
println 15
println 000015 # Leading zeros are ignored
println 0b1111
println 0o17
println 0xf</syntaxhighlight>
 
=={{header|HicEst}}==
=={{header|HolyC}}==
HolyC supports various integer sizes.
 
<langsyntaxhighlight lang="holyc">U8 i; // 8 bit integer
U16 i; // 16 bit integer
U32 i; // 32 bit integer
U64 i; // 64 bit integer</syntaxhighlight>
 
By default all integers are decimal. Leading "0x" implies hexadecimal.
<langsyntaxhighlight lang="holyc">U16 i = 727; // decimal
U16 i = 0x2d7; // hexadecimal</syntaxhighlight>
 
=={{header|Icon}} and {{header|Unicon}}==
Icon/Unicon supports digit literals of the form <base>r<value> with base being from 2-36 and the digits being from 0..9 and a..z.
<syntaxhighlight lang="icon">procedure main()
L := [1, 2r10, 3r10, 4r10, 5r10, 6r10, 7r10, 8r10, 9r10, 10r10, 11r10, 12r10, 13r10, 14r10,
15r10, 16r10, 17r10, 18r10,19r10, 20r10, 21r10, 22r10, 23r10, 24r10, 25r10, 26r10, 27r10,
Line 1,056 ⟶ 1,158:
 
every write(!L)
end</syntaxhighlight>
 
=={{Header|Insitux}}==
 
<syntaxhighlight lang="insitux">
[123 0x7F 0xFFF 0b0101001]
</syntaxhighlight>
 
{{out}}
 
<pre>
[123 127 4095 41]
</pre>
 
=={{header|J}}==
Arbitrary base numbers begin with a base ten literal (which represents the base of this number), and then the letter 'b' and then an arbitrary sequence of digits and letters which represents the number in that base. Letters a..z represent digits in the range 10..35. Each numeric item in a numeric constant must have its base specified independently.
 
<langsyntaxhighlight lang="j"> 10b123 16b123 8b123 20b123 2b123 1b123 0b123 100b123 99 0 0bsilliness
1
123 291 83 443 11 6 3 10203 99 0 1 28</syntaxhighlight>
 
This may be used to enter hexadecimal or octal or binary numbers. However, note also that J's primitives support a variety of binary operations on numbers represented as sequences of 0s and 1s, like this:
 
<langsyntaxhighlight lang="j">0 1 0 0 0 1 0 0 0 1 1 1 1</langsyntaxhighlight>
 
 
J also supports extended precision integers, if one member of a list ends with an 'x' when they are parsed. Extended precision literals can not be combined, in the same constant, with arbitrary base literals. (The notation supports no way of indicating that extra precision in an arbitrary base literal should be preserved and the extra complexity to let this attribute bleed from any member of a list to any other member was deemed not worth implementing.)
 
<langsyntaxhighlight lang="j"> 123456789123456789123456789 100000000000x
123456789123456789123456789 100000000000
 
16b100 10x
|ill-formed number</syntaxhighlight>
 
J also allows integers to be entered using other notations, such as scientific or rational.
 
<langsyntaxhighlight lang="j"> 1e2 100r5
100 20</syntaxhighlight>
 
Internally, J freely [http://www.jsoftware.com/help/dictionary/dictg.htm converts] fixed precision integers to floating point numbers when they overflow, and numbers (including integers) of any type may be combined using any operation where they would individually be valid arguments.
=={{header|Java}}==
A leading 0 means octal, 0x or 0X means hexadecimal. Otherwise, it is just decimal.
 
<langsyntaxhighlight lang="java5">public class IntegerLiterals {
public static void main(String[] args) {
System.out.println( 727 == 0x2d7 &&
727 == 01327 );
}
}</syntaxhighlight>
 
You may also specify a <tt>long</tt> literal by adding an <tt>l</tt> or <tt>L</tt> (uppercase is preferred as the lowercase looks like a "1" in some fonts) to the end (ex: <tt>long a = 574298540721727L</tt>). This is required for numbers that are too large to be expressed as an <tt>int</tt>.
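For example, using the value from the note above:
<syntaxhighlight lang="java5">long bigNumber = 574298540721727L; // too large for an int literal without the L suffix</syntaxhighlight>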
{{works with|Java|7}}
Java 7 has added binary literals to the language. A leading 0b means binary. You may also use underscores as separators in all bases.
<langsyntaxhighlight lang="java5">public class BinaryLiteral {
public static void main(String[] args) {
System.out.println( 727 == 0b10_1101_0111 );
}
}</syntaxhighlight>
 
=={{header|JavaScript}}==
 
<langsyntaxhighlight lang="javascript">if ( 727 == 0x2d7 &&
727 == 01327 )
window.alert("true");</langsyntaxhighlight>
 
=={{header|jq}}==
=={{header|Julia}}==
Julia has binary, octal and hexadecimal literals. We check that they give the same value.
<langsyntaxhighlight lang="julia">julia> 0b1011010111 == 0o1327 == 0x2d7 == 727
true</syntaxhighlight>
 
=={{header|Kotlin}}==
Kotlin supports 3 types of integer literal: decimal, hexadecimal and binary. Hexadecimal literals are prefixed with <code>0x</code> or <code>0X</code>, and binary literals with <code>0b</code> or <code>0B</code>. Hexadecimal digits can be uppercase or lowercase, or a combination of the two.

A signed integer literal can be assigned to a variable of any signed integer type. If no type is specified, Int (4 bytes) is assumed. If the value cannot fit into an Int, Long (8 bytes) is assumed.

An unsigned integer literal is made by appending <code>u</code> or <code>U</code> to a signed integer literal. Unsigned literals can be assigned to any unsigned integer type, with UInt (4 bytes) being assumed if none is specified, or ULong (8 bytes) if the value cannot fit into a UInt.

Signed and unsigned integer literals can be forced to be interpreted as Long or ULong respectively by appending the suffix <code>L</code> to the literal (lower case 'l' is not allowed as it is easily confused with the digit '1').

Underscores can be used between digits of a literal for clarity.

<syntaxhighlight lang="kotlin">
fun main() {
    // signed integer literals
    val d = 255        // decimal
    val h = 0xff       // hexadecimal (can use 0X instead of 0x)
    val b = 0b11111111 // binary (can use 0B instead of 0b)

    // signed long integer literals (cannot use l instead of L)
    val ld = 255L        // decimal
    val lh = 0xffL       // hexadecimal
    val lb = 0b11111111L // binary

    // unsigned integer literals (can use U instead of u)
    val ud = 255u        // decimal
    val uh = 0xffu       // hexadecimal
    val ub = 0b11111111u // binary

    // unsigned long integer literals (can use U instead of u)
    val uld = 255uL        // decimal
    val ulh = 0xffuL       // hexadecimal
    val ulb = 0b11111111uL // binary

    // implicit conversions
    val ld2 = 2147483648  // decimal signed integer literal automatically converted to Long since it cannot fit into an Int
    val ush : UShort = 0x7fu     // hexadecimal unsigned integer literal automatically converted to UShort
    val bd : Byte = 0b01111111   // binary signed integer literal automatically converted to Byte

    println("$d $h $b $ud $uh $ub $ld $lh $lb $uld $ulh $ulb $ld2 $ush $bd")
}</syntaxhighlight>

{{out}}
<pre>
255 255 255 255 255 255 255 255 255 255 255 255 2147483648 127 127
</pre>
 
=={{header|Lasso}}==
<syntaxhighlight lang="lasso">42
<lang Lasso>42
0x2a</langsyntaxhighlight>
 
=={{header|Limbo}}==
Integer literals in Limbo can be written in any base from 2 to 36 by putting the base (or radix), then 'r' or 'R', and the digits of the number. If no base is explicitly given then the number will be in base 10.
<syntaxhighlight lang="limbo">implement Command;
 
include "sys.m";
sys->print("%d\n", 15); # decimal
sys->print("%d\n", 16rF); # hexadecimal
}</syntaxhighlight>
 
=={{header|LiveCode}}==
LiveCode supports hexadecimal literals, and if "convertOctals" is set to true, then integer literals with leading zeroes are interpreted as octal and not base 10.
 
Hex example<syntaxhighlight lang="livecode">put 0x1 + 0xff</syntaxhighlight>
 
=={{header|Logo}}==
Line 1,191 ⟶ 1,322:
=={{header|Logtalk}}==
Built-in support for bases 2, 8, 10, and 16:
<langsyntaxhighlight lang="logtalk">
:- object(integers).
 
 
:- end_object.
</syntaxhighlight>
Sample output:
<langsyntaxhighlight lang="text">
| ?- integers::show.
Binary 0b11110101101 = 1965
Hexadecimal 0x7AD = 1965
yes
</syntaxhighlight>
 
=={{header|Lua}}==
Lua supports either base ten or hex
<syntaxhighlight lang="lua">
45, 0x45
</syntaxhighlight>
 
=={{header|M2000 Interpreter}}==
<syntaxhighlight lang="m2000 interpreter">
Def ExpType$(x)=Type$(x)
Print ExpType$(12345678912345#)="Currency", 12345678912345#
\\ used for unsigned integers (but it is double)
Print ExpType$(0xFFFFFFFF)="Double", 0xFFFFFFFF=4294967295
</syntaxhighlight>
 
=={{header|M4}}==
m4 has decimal, octal and hexadecimal literals like C.
 
<syntaxhighlight lang="m4">eval(10)  # base 10
eval(010) # base 8
eval(0x10) # base 16</syntaxhighlight>
 
Output: <pre>10 # base 10
</pre>
{{works with|GNU m4}}
 
<syntaxhighlight lang="m4">eval(0b10)     # base 2
eval(`0r2:10') # base 2
...
eval(`0r36:10') # base 36</syntaxhighlight>
 
Output: <pre>2 # base 2
</pre>
 
=={{header|Mathematica}}/{{header|Wolfram Language}}==
<syntaxhighlight lang="mathematica">b^^nnnn is a valid number in base b (with b ranging from 2 to 36) :
2^^1011
-> 11
 
36^^1011
-> 46693</syntaxhighlight>
 
=={{header|MATLAB}} / {{header|Octave}}==
Matlab uses only base 10 integers.
<syntaxhighlight lang="matlab">> 11
<lang MATLAB>> 11
ans = 11</langsyntaxhighlight>
 
Octave also allows a hexadecimal representation
<syntaxhighlight lang="octave">> 0x11
ans = 17</syntaxhighlight>
 
Representations in other bases need to be converted by functions
<syntaxhighlight lang="matlab">hex2dec(s)
bin2dec(s)
base2dec(s,base)</syntaxhighlight>
 
Different integer types can be defined by casting.
<syntaxhighlight lang="matlab">int8(8)
uint8(8)
int16(8)
uint32(8)
int64(8)
uint64(8)</syntaxhighlight>
 
=={{header|Maxima}}==
<langsyntaxhighlight lang="maxima">/* Maxima has integers of arbitrary length */
170141183460469231731687303715884105727</syntaxhighlight>
 
=={{header|Mercury}}==
 
<syntaxhighlight lang="mercury">Bin = 0b010101,
Octal = 0o666,
Hex = 0x1fa,
CharCode = 0'a.</syntaxhighlight>
 
An integer is either a decimal, binary, octal, hexadecimal, or character-code literal. A decimal literal is any sequence of decimal digits. A binary literal is <tt>0b</tt> followed by any sequence of binary digits. An octal literal is <tt>0o</tt> followed by any sequence of octal digits. A hexadecimal literal is <tt>0x</tt> followed by any sequence of hexadecimal digits. A character-code literal is <tt>0'</tt> followed by any single character.
Line 1,303 ⟶ 1,434:
=={{header|Metafont}}==
 
<langsyntaxhighlight lang="metafont">num1 := oct"100";
num2 := hex"100";</langsyntaxhighlight>
 
Metafont numbers can't be greater than 4096, so that the maximum octal and hexadecimal legal values are <tt>7777</tt> and <tt>FFF</tt> respectively. To be honest, <tt>"100"</tt> is a string, and <tt>oct</tt> is an "internal" "''macro''"; but this is the way Metafont specifies numbers in base 8 and 16.
 
=={{header|MIPS Assembly}}==
This ultimately depends on the assembler you're using.
{{works with|https://github.com/Kingcom/armips ARMIPS}}
Hexadecimal numbers are prefixed with <tt>0x</tt>, binary with <tt>0b</tt>. A number with no prefix is decimal.
 
If fewer than the maximum number of digits is specified, the number is padded with zeroes to fill the declared space.
 
<code>.byte</code> is 8-bit, <code>.halfword</code> is 16-bit, and <code>.word</code> is 32-bit.
 
The endianness of your CPU determines what order the bytes are actually stored in. Bytes are always stored in the order they are declared, but words and halfwords will be endian-swapped if you are assembling for a little-endian MIPS CPU such as the PlayStation 1. On a big-endian MIPS CPU (e.g. Nintendo 64), words and halfwords are assembled as-is.
 
You can have multiple declarations on the same line separated by commas, and if you do, you only need to specify the data type once for that entire line. (Everything in that line is understood to be the same data type.) Or, you can put each on its own line with the data type declaration in front of each. Either way, the memory layout of the declared literals is the same. How you present the data in your source code is up to you, so it's best to display it in a way that maximizes readability and communicates your intent.
 
<syntaxhighlight lang="mips">.word 0xDEADBEEF
.byte 0b00000000,0b11111111,0,255
.halfword 0xCAFE,0xBABE</syntaxhighlight>
 
A minus sign can be used to indicate a negative number. Negative number literals are sign-extended to fit whatever operand size matches the context.
<syntaxhighlight lang="mips">addi $t0,-1 ;assembled the same as "addi $t0,0xFFFF"
li $t0,-2 ;assembled the same as "li $t0,0xFFFFFFFE"</syntaxhighlight>
 
=={{header|Modula-3}}==
All numbers 2 to 16 are allowed to be bases.
<langsyntaxhighlight lang="modula3">MODULE Literals EXPORTS Main;
 
IMPORT IO;
IO.PutInt(2_1011010111);
IO.Put("\n");
END Literals.</syntaxhighlight>
 
=={{header|Neko}}==
Neko supports base 10 and 0x prefixed base 16 integer literals. Leading zero is NOT octal.
 
<syntaxhighlight lang="actionscript">/**
Integer literals, in Neko
Base 10 and Base 16, no leading zero octal in Neko
var num = 2730
if (num == 02730) $print("base 10, even with leading zero\n")
if (num == 0xAAA) $print("base 16, with leading 0x or 0X\n")</syntaxhighlight>
 
=={{header|Nemerle}}==
<syntaxhighlight lang="nemerle">42             // integer literal
1_000_000 // _ can be used for readability
1_42_00 // or unreadability...
10ub, 10bu // unsigned byte
10L // long
10UL, 10LU     // unsigned long</syntaxhighlight>
 
Formally (adapted from [http://nemerle.org/wiki/index.php?title=Lexical_structure_%28ref%29 Reference Manual]):
character is <tt>B</tt> or <tt>b</tt>, and the digits of ''string'' must be either <tt>0</tt> or <tt>1</tt>, each representing a single bit.
 
<syntaxhighlight lang="netrexx">/* NetRexx */
options replace format comments java crossref symbols
 
iv = 32B1111111111111111; say '32B1111111111111111'.right(20) '==' iv.right(8) -- 65535
 
return</syntaxhighlight>
'''Output:'''
<pre>
</pre>
 
=={{header|Nim}}==
<langsyntaxhighlight lang="nim">var x: int
x = 0b1011010111
x = 0b10_1101_0111
Line 1,475 ⟶ 1,627:
var g = 128'u16
var h = 129'u32
var i = 130'u64</syntaxhighlight>
 
=={{header|Objeck}}==
As of v1.1, Objeck only supports hexadecimal and decimal literals.
<langsyntaxhighlight lang="objeck">
bundle Default {
class Literal {
}
}
</syntaxhighlight>
 
=={{header|OCaml}}==
 
Bin(leading 0b or 0B), Oct(leading 0o or 0O), Hex(leading 0x or 0X)
<langsyntaxhighlight lang="ocaml"># 727 = 0b1011010111;;
- : bool = true
# 727 = 0o1327;;
- : bool = true
# 12345 = 12_345 (* underscores are ignored; useful for keeping track of places *);;
- : bool = true</syntaxhighlight>
 
Literals for the other built-in integer types:
=={{header|Oz}}==
To demonstrate the different numerical bases, we unify the identical values:
<langsyntaxhighlight lang="oz">try
%% binary octal dec. hexadecimal
0b1011010111 = 01327 = 727 = 0x2d7
catch _ then
{Show unexpectedError}
end</syntaxhighlight>
 
Negative integers start with "~":
<langsyntaxhighlight lang="oz">X = ~42</langsyntaxhighlight>
 
=={{header|PARI/GP}}==
=={{header|Pascal}}==
octal (with leading ampersand: &) and
binary (with leading percent sign: %) literals:
<syntaxhighlight lang="pascal">const
 
DEC_VALUE = 15;
OCTAL_VALUE = &017;
BINARY_VALUE = %1111;
</syntaxhighlight>
 
=={{header|Perl}}==
 
<langsyntaxhighlight lang="perl">print "true\n" if ( 727 == 0x2d7 &&
727 == 01327 &&
727 == 0b1011010111 &&
12345 == 12_345 # underscores are ignored; useful for keeping track of places
);</syntaxhighlight>
 
=={{header|Phix}}==
The included mpfr/gmp library allows working with extremely large integers with arbitrary precision, very efficiently.
 
<!--<syntaxhighlight lang="phix">(phixonline)-->
<span style="color: #0000FF;">?{</span><span style="color: #000000;">65</span><span style="color: #0000FF;">,</span><span style="color: #000000;">#41</span><span style="color: #0000FF;">,</span><span style="color: #008000;">'A'</span><span style="color: #0000FF;">,</span><span style="color: #7060A8;">scanf</span><span style="color: #0000FF;">(</span><span style="color: #008000;">"55"</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"%d"</span><span style="color: #0000FF;">),</span><span style="color: #000000;">0o10</span><span style="color: #0000FF;">,</span><span style="color: #000000;">0(7)11</span><span style="color: #0000FF;">}</span>
<!--</syntaxhighlight>-->
 
{{out}}
=={{header|PHP}}==
 
<langsyntaxhighlight lang="php"><?php
if (727 == 0x2d7 &&
727 == 01327) {
$a = 0b11111111; // binary number (equivalent to 255 decimal)
$a = 1_234_567; // decimal number (as of PHP 7.4.0)
</syntaxhighlight>
 
=={{header|Picat}}==
All output is in base 10.
<syntaxhighlight lang="picat">% All outputs are in base 10
main =>
println(100), % plain integer
println(1_234_567_890), % underscores can be used for clarity
println(1_000_000_000_070_000_030_000_001), % arbitrary precision
nl,
 
println(0x10ABC), % Hexadecimal
println(0xBe_ad_ed_83), % lower or upper case are the same
nl,
println(0o666), % Octal
println(0o555_666_777),
nl,
 
println(0b1111111111111), % binary
println(0b1011_1111_1110)</syntaxhighlight>
 
{{out}}
<pre>1234567890
1000000000070000030000001
 
68284
3199069571
 
438
95907327
 
8191
3070</pre>
 
 
=={{header|PicoLisp}}==
In the strict sense of this task, PicoLisp reads only integers at bases which are a power of ten (scaled fixpoint numbers). This is controlled via the global variable '[http://software-lab.de/doc/refS.html#*Scl *Scl]':
<syntaxhighlight lang="picolisp">: (setq *Scl 4)
-> 4
 
: 123.456789
-> 1234568</syntaxhighlight>
However, the reader is normally augmented by read macros, which can read any
base or any desired format. Read macros are not executed at runtime, but
initially when the sources are read.
<syntaxhighlight lang="picolisp">: '(a `(hex "7F") b `(oct "377") c)
-> (a 127 b 255 c)</syntaxhighlight>
In addition to standard formats like
'[http://software-lab.de/doc/refH.html#hex hex]' (hexadecimal) and
 
=={{header|PL/I}}==
<syntaxhighlight lang="pl/i">
12345
'b4'xn /* a hexadecimal literal integer. */
'ffff_ffff'xn /* a longer hexadecimal integer. */
1101b /* a binary integer, of value decimal 13. */
</syntaxhighlight>
 
=={{header|Plain English}}==
Plain English has two types of numerical literals. The first is the ordinary "number literal", which is expressed in base ten.
<syntaxhighlight lang="text">
12345
-12345 \ with a negative sign
+12345 \ with a positive sign
</syntaxhighlight>
 
The second is the "nibble literal", which is a dollar sign followed by a hexadecimal literal.
<syntaxhighlight lang="text">$12345DEADBEEF</syntaxhighlight>
 
Numerical literals can also be embedded into "ratio" or "mixed literals".
<syntaxhighlight lang="text">
123/456 \ ratio literal
1-2/3 \ mixed literal
</syntaxhighlight>
 
=={{header|PostScript}}==
Integer literals in PostScript can be either standard decimal literals or in the form ''base''<code>#</code>''number''. ''base'' can be any decimal integer between 2 and 36, ''number'' can then use digits from <code>0</code> to ''base''&nbsp;−&nbsp;1. Digits above <code>9</code> are replaced by <code>A</code> through <code>Z</code> and case does not matter.
<langsyntaxhighlight lang="postscript">123 % 123
8#1777 % 1023
16#FFFE % 65534
2#11011 % 27
5#44     % 24</syntaxhighlight>
 
=={{header|PowerShell}}==
PowerShell only supports base 10 and 16 directly:
<langsyntaxhighlight lang="powershell">727 # base 10
0x2d7   # base 16</syntaxhighlight>
Furthermore there are special suffixes which treat the integer as a multiple of a specific power of two, intended to simplify file size operations:
<langsyntaxhighlight lang="powershell">3KB # 3072
3MB # 3145728
3GB # 3221225472
3TB     # 3298534883328</syntaxhighlight>
A number can be suffixed with <code>d</code> to make it a <code>decimal</code>. This doesn't work in conjunction with above suffixes, though:
<pre>PS> 4d.GetType().ToString()
</pre>
=={{header|PureBasic}}==
PureBasic allows integer literals to be specified in base 10, base 2 by using the prefix '%', or base 16 by using the prefix '$'.
<syntaxhighlight lang="purebasic">x = 15      ;15 in base 10
x = %1111 ;15 in base 2
x = $f      ;15 in base 16</syntaxhighlight>
An integer literal representing a character code can also be expressed by surrounding the character with single quotes. More than one character can be included in the single quotes (i.e. 'abc'). Depending on whether code is compiled in Ascii or Unicode mode this will result in the integer value being specified in base 256 or base 65536 respectively.
 
<syntaxhighlight lang="purebasic">x = 'a'     ;129</syntaxhighlight>
 
=={{header|Python}}==
{{works with|Python|3.0}}
Python 3.0 brought in the binary literal and uses 0o or 0O exclusively for octal.
<langsyntaxhighlight lang="python">>>> # Bin(leading 0b or 0B), Oct(leading 0o or 0O), Dec, Hex(leading 0x or 0X), in order:
>>> 0b1011010111 == 0o1327 == 727 == 0x2d7
True
>>></syntaxhighlight>
{{works with|Python|2.6}}
Python 2.6 has the binary and new octal formats of 3.0, as well as keeping the earlier leading 0 octal format of previous 2.X versions for compatibility.
<langsyntaxhighlight lang="python">>>> # Bin(leading 0b or 0B), Oct(leading 0o or 0O, or just 0), Dec, Hex(leading 0x or 0X), in order:
>>> 0b1011010111 == 0o1327 == 01327 == 727 == 0x2d7
True
>>></syntaxhighlight>
{{works with|Python|2.5}}
<langsyntaxhighlight lang="python">>>> # Oct(leading 0), Dec, Hex(leading 0x or 0X), in order:
>>> 01327 == 727 == 0x2d7
True
>>></syntaxhighlight>
 
In Python 2.x you may also specify a <tt>long</tt> literal by adding an <tt>l</tt> or <tt>L</tt> (the latter form is preferred as the former looks like a "1") to the end (ex: <tt>574298540721727L</tt>), but this is optional, as integer literals that are too large for an <tt>int</tt> will be interpreted as a <tt>long</tt>.
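A brief Python 2 session illustrating this (the trailing <tt>L</tt> in the echoed value is how Python 2 displays longs):
<syntaxhighlight lang="python">>>> 574298540721727L             # explicit long literal (Python 2 only)
574298540721727L
>>> type(12345678901234567890)   # too big for an int, automatically a long
<type 'long'>
</syntaxhighlight>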
=={{header|Quackery}}==
The default base can be overridden for a section of code using the compiler directive <code>now!</code> like this;
 
<syntaxhighlight lang="quackery">[ 2 base put ] now!
 
( The Quackery compiler now expects numeric literals to be in binary. )
Line 1,699 ⟶ 1,902:
 
( The Quackery compiler now expects numeric literals to be in whichever
  base they were previously. The default base is decimal. )</syntaxhighlight>
 
If a new compiler directive akin to <code>hex</code> is required, say to allow occasional octal literals in the form <code>octal 7777</code>, the compiler can be extended like this;
 
<syntaxhighlight lang="quackery">  [ 8 base put
nextword dup
$ '' = if
Line 1,715 ⟶ 1,918:
$ '" is not octal.'
join message put bail ]
    base release ]               builds octal ( [ $ --> [ $ )</syntaxhighlight>
 
=={{header|R}}==
0x or 0X followed by digits or the letters a-f denotes a hexadecimal number. The suffix L means that the number should be stored as an integer rather than numeric (floating point).
<syntaxhighlight lang="r">0x2d7==727     # TRUE
identical(0x2d7, 727) # TRUE
is.numeric(727) # TRUE
Line 1,726 ⟶ 1,929:
is.numeric(0x2d7) # TRUE
is.integer(0x2d7) # FALSE
is.integer(0x2d7L)    # TRUE</syntaxhighlight>
For more information, see [http://cran.r-project.org/doc/manuals/R-lang.pdf Section 10.3.1 of the R Language definition] (PDF).
 
=={{header|Racket}}==
 
<langsyntaxhighlight lang="racket">
#lang racket
#b1011010111
#d727
#x2d7
</syntaxhighlight>
 
Output:
=={{header|Raku}}==
(formerly Perl 6)
These all print 255.
<syntaxhighlight lang="raku" perl6line>say 255;
say 0d255;
say 0xff;
say :4<3333>;
say :12<193>;
say :36<73>;</syntaxhighlight>
There is a specced form for bases above 36, but rakudo does not yet implement it.
 
=={{header|REBOL}}==
<syntaxhighlight lang="rebol">1</syntaxhighlight>
 
=={{header|Retro}}==
<syntaxhighlight lang="retro">#100  ( decimal )
%100 ( binary )
$100 ( hex )
'c ( ascii character )
100   ( number in current base )</syntaxhighlight>
 
Numbers without a prefix are interpreted using the current '''base''', which is a variable. Valid characters are stored in a string called '''numbers''', which can also be altered to allow for larger bases.
 
=={{header|REXX}}==
<langsyntaxhighlight lang="rexx">/*REXX pgm displays an integer (expressed in the pgm as a literal) in different bases*/
/*────────── expressing decimal numbers ──────────*/
ddd = 123 /*a decimal number (expressed as a literal). */
thingy8= ' + 123 ' /*╚══════════════════════════════════════════════╝*/
 
                                     /*stick a fork in it, we're all done. */</syntaxhighlight>
{{out|output}}
<pre>
</pre>
 
=={{header|Ring}}==
<langsyntaxhighlight lang="ring">
see "Decimal literal = " + 1234 + nl
see "Hexadecimal literal = " + dec("4D2") + nl
end
return output
</syntaxhighlight>
Output:
<pre>
</pre>
 
=={{header|RPL}}==
Unsigned integers, which must begin with <code>#</code>, can be expressed in binary, octal, decimal or hexadecimal. A final lowercase letter defines the base.
 #1011b   <span style="color:grey">@ Base 2</span>
 #1234o   <span style="color:grey">@ Base 8</span>
 #6789d   <span style="color:grey">@ Base 10</span>
 #ABCDh   <span style="color:grey">@ Base 16</span>
=={{header|Ruby}}==
 
<langsyntaxhighlight lang="ruby">727 == 0b1011010111 # => true, binary
727 == 0x2d7 # => true, hex
727 == 0o1327 # => true, octal
 
12345 == 12_345 # => true underscores are ignored; useful for keeping track of places
</syntaxhighlight>
 
=={{header|Rust}}==
<langsyntaxhighlight lang="rust">10 // Decimal
0b10 // Binary
0x10 // Hexadecimal
Line 1,877 ⟶ 2,091:
1_000 // Underscores may appear anywhere in the numeric literal for clarity
10_i32 // The type (in this case i32, a 32-bit signed integer) may also be appended.
10i32 // With or without underscores</syntaxhighlight>
 
=={{header|Scala}}==
=={{header|Scheme}}==
 
binary: #b, octal: #o, decimal: #d (optional obviously), hex: #x
<langsyntaxhighlight lang="scheme">> (= 727 #b1011010111)
#t
> (= 727 #o1327)
#t
> (= 727 #x2d7)
#t</syntaxhighlight>
 
=={{header|Seed7}}==
In [[Seed7]] integer literals may have the form <base>#<numeral>. Here <base> can be from the range 2..36. For example:
<langsyntaxhighlight lang="seed7">$ include "seed7_05.s7i";
 
const proc: main is func
writeln(2#1011010111);
end func;
</syntaxhighlight>
Sample output:
<pre>
</pre>
 
=={{header|Sidef}}==
<langsyntaxhighlight lang="ruby">say 255;
say 0xff;
say 0377;
say 0b1111_1111;</syntaxhighlight>
{{out}}
<pre>255
</pre>
=={{header|Slate}}==
<langsyntaxhighlight lang="slate">2r1011010111 + 8r1327 + 10r727 + 16r2d7 / 4</langsyntaxhighlight>
 
=={{header|Smalltalk}}==
<langsyntaxhighlight lang="smalltalk">2r1011010111 + 5r100 + 8r1327 + 10r727 + 16r2d7 / 4</langsyntaxhighlight>
binary, base-5, octal, decimal, hexadecimal, decimal (default).
Any base between 2 and 32 can be used (although only 2, 8, 10 and 16 are typically needed).
 
There is no size limit (except memory constraints), the runtime chooses an appropriate representation automatically:
<langsyntaxhighlight lang="smalltalk">16r1B30964EC395DC24069528D54BBDA40D16E966EF9A70EB21B5B2943A321CDF10391745570CCA9420C6ECB3B72ED2EE8B02EA2735C61A000000000000000000000000 = 100 factorial
"evaluates to true"
 
2r101010101011111100000011111000000111111111111111110101010101010101010100101000000000111111100000000111
bitCount -> 55</syntaxhighlight>
 
=={{header|Standard ML}}==
 
Hex(leading 0x), Word (unsigned ints, leading 0w), Word Hex (leading 0wx)
<langsyntaxhighlight lang="sml">- 727 = 0x2d7;
val it = true : bool
- 727 = Word.toInt 0w727;
* worth mentioning because it's unusual
*)
val it = ~727 : int</syntaxhighlight>
 
=={{header|Stata}}==
 
=={{header|Swift}}==
<syntaxhighlight lang="swift">let hex = 0x2F // Hexadecimal
let bin = 0b101111 // Binary
let oct = 0o57 // Octal</syntaxhighlight>
 
=={{header|Tcl}}==
{{works with|Tcl|8.5}}
(This is an interactive tclsh session; <tt>expr</tt> is only called to evaluate the equality test.)
<langsyntaxhighlight lang="tcl">% expr 727 == 0x2d7
1
% expr 727 == 0o1327
1
% expr 727 == 0b1011010111
1</syntaxhighlight>
 
=={{header|TI-89 BASIC}}==
Binary, decimal, and hexadecimal are supported. The system base mode sets the default output base, but does not affect input; unmarked digits are always decimal.
 
<langsyntaxhighlight lang="ti89b">0b10000001 = 129 = 0h81</langsyntaxhighlight>
 
=={{header|UNIX Shell}}==
The <tt>expr</tt> command accepts only decimal literals.
 
<langsyntaxhighlight lang="bash">$ expr 700 - 1
699
$ expr 0700 - 01
699</syntaxhighlight>
 
Some shells have arithmetic expansion. These shells may accept literals in other bases. This syntax only works in places that do arithmetic expansion, such as in <tt>$(( ))</tt>, or in Bash's <tt>let</tt> command.
 
{{works with|bash}}
<langsyntaxhighlight lang="bash">dec=727
oct=$(( 01327 ))
bin=$(( 2#1011010111 ))
# or e.g.
let bin=2#1011010111
let "baseXX = 20#1g7"</langsyntaxhighlight>
 
{{works with|pdksh|5.2.14}}
<langsyntaxhighlight lang="bash">dec=727
oct=$(( 01327 ))
bin=$(( 2#1011010111 ))
# or e.g.
(( bin = 2#1011010111 ))
(( baseXX = 20#1g7 ))</syntaxhighlight>
 
=={{header|Ursa}}==
Ursa supports signed, base-10 integers.
<langsyntaxhighlight lang="ursa">decl int i
set i 123
set i -456</syntaxhighlight>
 
=={{header|Ursala}}==
 
Natural numbers (i.e., unsigned integers) of any size are supported. Only decimal integer literals are recognized by the compiler, as in a declaration such as the following.
<syntaxhighlight lang="ursala">n = 724</syntaxhighlight>
Signed integers are also recognized and are considered a separate type from natural numbers, but non-negative integers and natural numbers have compatible binary representations.
<syntaxhighlight lang="ursala">z = -35</syntaxhighlight>
Signed rational numbers of unlimited precision are yet another primitive type and can be expressed
in conventional decimal form.
<syntaxhighlight lang="ursala">m = -2/3</syntaxhighlight>
The forward slash in a rational literal is part of the syntax and not a division operator. Finally, a signed or unsigned integer with a trailing underscore, like this
<syntaxhighlight lang="ursala">t = 4534934521_</syntaxhighlight>
is used for numbers stored in binary converted decimal format, also with unlimited precision, which may perform better in applications involving very large decimal numbers.
 
=={{header|Uxntal}}==
Uxntal only allows hexadecimal literals, and they can be either one or two bytes. In order to push them to the stack, rather than writing them directly to the assembled binary, they must be prefixed with <code>#</code>.
<syntaxhighlight lang="Uxntal">#2a ( byte literal )
#c0de ( short literal )</syntaxhighlight>
And yes, they do have to be in lowercase hex.
 
=={{header|Verbexx}}==
<langsyntaxhighlight lang="verbexx">// Integer Literals:
//
// If present, base prefix must be: 0b 0B (binary) 0o 0O (octal)
// no prefix, the numeric literal cannot begin with underscore:
 
@SAY 100_000  1_u1  0x_FFFF_u16  1__0__   0x__7890_ABCD_EFAB_CDEF__u64; </syntaxhighlight>
 
=={{header|Visual Basic}}==
{{works with|VBA|7.1}}
Integer literals can be expressed in octal, decimal and hexadecimal form.
<langsyntaxhighlight lang="vb">Sub Main()
 
'Long: 4 Bytes (signed), type specifier = &
Debug.Assert b2 = b3
End Sub</syntaxhighlight>
 
=={{header|V (Vlang)}}==
<syntaxhighlight lang="Vlang">
fn main() {
w := 727
x := 0x2d7
y := 0o1327
z := 0b10110_10111
println([w, x, y, z])
}
</syntaxhighlight>
 
{{out}}
<pre>
[727, 727, 727, 727]
</pre>
 
=={{header|Wren}}==
 
As the only difference between integers and other numbers is that the former do not have a decimal part, it is also possible to represent integers using scientific notation.
<syntaxhighlight lang="wren">var a = 255
var b = 0xff
var c = 0255 // not an octal literal
var d = 2.55e2
System.print([a, b, c, d])</syntaxhighlight>
 
{{out}}
<pre>
[255, 255, 255, 255]
</pre>
 
=={{header|XPL0}}==
<syntaxhighlight lang="xpl0">code CrLf=9, IntOut=11;
def A=123, B=$123, C=%11_0011, D=^A;
[IntOut(0, A); CrLf(0); \decimal
IntOut(0, C); CrLf(0); \binary
IntOut(0, D); CrLf(0); \ASCII
]</syntaxhighlight>
 
Output:
<pre>
123
291
51
65
</pre>
=={{header|Z80 Assembly}}==
Numeric values can be defined in decimal, binary, or hexadecimal.
<langsyntaxhighlight lang="z80">byte &55 ;hexadecimal 55
byte $42 ;hexadecimal 42
byte 33 ;decimal 33
byte %00001111 ;binary equivalent of &0F</syntaxhighlight>
 
=={{header|zkl}}==
Three int types the compiler understands: decimal, hex, binary. Other bases (2-36) require a method call.
<langsyntaxhighlight lang="zkl">123, 0d1_000
0x123, 0x12|34
0b1111|0000</syntaxhighlight>
 
 