Integer

==Signed/Unsigned==
Integer variables can be declared as either '''signed''' or '''unsigned''', and this affects how the compiler handles them. CPUs compare values differently depending on whether a variable is intended to be signed or unsigned. Notice the word "intended" - the CPU doesn't actually know whether your data is meant to be signed or unsigned. This means that the bit pattern <tt>0xFFFFFFFF</tt> can represent either negative 1 or 4,294,967,295. But which one is it? Most high-level languages lock you into picking one, but at the hardware level it can be whatever you want it to be at any given moment. (Kind of like the Ace in Blackjack.)
 
In most programming languages, integer variables (and numeric variables in general) are treated as signed by default (and some languages don't even give you a choice).
<lang C>int foo; //this is a signed integer
unsigned int bar; //this is an unsigned integer</lang>
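The point about <tt>0xFFFFFFFF</tt> above can be demonstrated directly: the same 32-bit pattern prints as two different numbers depending on whether you tell <tt>printf</tt> to treat it as signed or unsigned. This is a minimal sketch (the variable names are just for illustration):
<lang C>#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t bits = 0xFFFFFFFF;  /* one 32-bit pattern, all bits set */

    printf("as unsigned: %u\n", bits);            /* prints 4294967295 */
    printf("as signed:   %d\n", (int32_t)bits);   /* prints -1 */
    return 0;
}</lang>
Nothing about the data changes between the two lines; only the interpretation does, exactly as it would at the CPU level.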
 
==Two's Complement==
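As a quick illustration of the idea (a sketch, not taken from this page's text): in two's complement, a value is negated by inverting all of its bits and adding one, which is why all-ones is negative 1.
<lang C>#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t x = 5;

    /* two's complement negation: flip every bit, then add 1 */
    int32_t neg = (int32_t)(~(uint32_t)x + 1u);

    printf("%d\n", neg);               /* prints -5 */
    printf("0x%08X\n", (uint32_t)-1);  /* prints 0xFFFFFFFF */
    return 0;
}</lang>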
 
==Integer Overflow==
And I'm sure you've figured it out by now. Yes, if a signed number gets too big, it suddenly wraps around to a very, very small (very negative) value. This is known as integer overflow, and it occurs when a number crosses over from <tt>0x7F...</tt> to <tt>0x80...</tt> (fill in the dots with Fs/0s depending on the size of your numeric data type.) Luckily, nearly all CPUs have special hardware just for detecting overflow, and they do so automatically after every calculation. Here's a simple example from [[x86 Assembly]].
<lang asm>MOV EAX,0x7FFFFFFF ;EAX = largest positive signed 32-bit value
ADD EAX,1          ;EAX wraps to 0x80000000 (-2,147,483,648) and the CPU sets the overflow flag (OF)
JO HandleOverflow  ;jump if OF was set (HandleOverflow is a label you would define)</lang>
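The same check can be expressed in C, with one caveat: signed overflow is undefined behavior in standard C, so portable code can't just add and look at the result. This sketch uses the <tt>__builtin_add_overflow</tt> intrinsic, which is a GCC/Clang extension (not standard C) that exposes the CPU's overflow detection:
<lang C>#include <stdio.h>
#include <limits.h>

int main(void) {
    int a = INT_MAX;  /* 0x7FFFFFFF on typical platforms */
    int result;

    /* returns nonzero if the true sum doesn't fit in 'result' */
    if (__builtin_add_overflow(a, 1, &result))
        printf("overflow detected\n");
    else
        printf("sum: %d\n", result);
    return 0;
}</lang>
Under the hood the compiler typically emits an add followed by a branch on the overflow flag - essentially the ADD/JO pair shown above.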