Talk:Host introspection


The C example is not correct: While it is true that on current popular platforms a pointer is as large as a word, this is not universally true. For example, in 16-bit x86 code using the large memory model a pointer occupies two words (one for the segment, one for the offset). I wouldn't be surprised if there are other, more current platforms (especially embedded ones) where pointer size and word size disagree as well.
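For illustration, this is roughly the kind of assumption in question (a minimal sketch, not the actual task code): it guesses the word size from the size of a data pointer and an 8-bit byte, and both of those can be wrong.

 #include <stdio.h>
 
 int main(void)
 {
     /* Fragile: assumes a data pointer is exactly one machine word
        and that a byte is 8 bits -- neither is guaranteed by C. */
     printf("guessed word size: %u bits\n",
            (unsigned)(sizeof(void *) * 8));
     return 0;
 }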

Before 64-bit platforms appeared, the size of an int was a good indicator of the word size, because int was intended to be the fastest integer type, and that is typically the word size. Thus for 16-bit code int was 16 bits, and for 32-bit code int was 32 bits. However, when the transition to 64 bits came, compatibility with code making hard assumptions about the size of int was considered more important, so int remained at 32 bits.
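You can see why int stopped tracking the word size with a quick check like the following (just a sketch; the output depends on the platform). On a typical LP64 system such as 64-bit Linux it prints 4, 8 and 8: int stayed at 32 bits, while long and data pointers grew to 64 bits.

 #include <stdio.h>
 
 int main(void)
 {
     /* Typical LP64 output: 4 8 8 -- int kept its 32-bit size
        for compatibility, long and pointers follow the word size. */
     printf("sizeof(int)=%u sizeof(long)=%u sizeof(void*)=%u\n",
            (unsigned)sizeof(int),
            (unsigned)sizeof(long),
            (unsigned)sizeof(void *));
     return 0;
 }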

Maybe using sizeof(size_t) would be a better test: Even though on 16-bit x86 pointers can be 32 bits, object sizes always fit into 16 bits (because on 16-bit systems a segment is only that large). Of course it's not guaranteed either, but it's at least the best bet you can make.

Also, multiplying by 8 isn't quite right either: While today a byte is commonly 8 bits, this is not guaranteed. I'm not sure whether systems are sold today where a byte is not 8 bits (again, embedded systems might be prime candidates to look at, as they might well lack support for sub-word addressing, making a byte as large as a word). However, limits.h provides a macro CHAR_BIT that holds the number of bits in a byte, so there's no need to make any possibly wrong assumptions.
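Combining the two points, a word-size estimate along these lines avoids both assumptions (still only a best guess, as noted above, and only a sketch of the idea rather than the exact code I'll put on the page):

 #include <limits.h>
 #include <stdio.h>
 
 int main(void)
 {
     /* size_t is meant to hold any object size, and CHAR_BIT is the
        real number of bits per byte instead of an assumed 8. Still
        not guaranteed to match the machine word, but a better bet
        than sizeof(void *) * 8. */
     printf("estimated word size: %u bits\n",
            (unsigned)(sizeof(size_t) * CHAR_BIT));
     return 0;
 }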

I'm going to change the C example according to the explanations above. --Ce 17:35, 13 October 2008 (UTC)