Talk:Bitwise IO: Difference between revisions

 
::::As far as I know, packing into bytes follows the usual mathematical convention (that is, if we want to talk about endianness, big-endian): the least significant bits are on the right and the most significant bits are on the left, so that, writing the binary digits as A<sub>i</sub> with i &#x2208; [0,7], A<sub>0</sub> is the ''units'' bit, i.e. A<sub>0</sub>&sdot;2<sup>0</sup>. So the binary number 10010001 is ''packed'' into a byte simply as 10010001. When we are interested in fewer than 8 bits, they should be ''counted'' (so to say) from right to left; e.g. if I want to write the 4 bits 1001, they must be right-aligned in the ''variable'' holding them. But these are conventions the user of the functions must know; it was not mandatory to do it the way I did. I've implemented everything so that you must always right-align the bits; the functions then shift left so that the intended most significant bit of the ''bits datum'' becomes the leftmost bit in the ''container'' (an unsigned int in the C implementation). It is enough that the user of the functions knows in which order your functions extract bits from the data they pass; then it is ''intended'' that the first bit (the leftmost or the rightmost, according to your convention) is the first bit of the output stream, so it will be the first bit you read when you ask that stream for a single bit. Maybe your misunderstanding comes from the fact that you handle single bits (in your explanation; I still haven't looked at your code) as units held by an array. In C it would be like having char bitarray[N], where each ''char'' can only be 0 or 1. Doing it this way, you can pick your preferred convention, i.e. whether bitarray[0] or bitarray[N-1] is the first bit to be output.
In C that implementation would be hard to use; if I want to write integers less than 16, which fit in just 4 bits, it is enough to pass the number to the function, say 12 (binary 1100), and tell the function I want just (the first) 4 bits; otherwise I (as a user of the functions) would have to split the integer into chars, each representing one bit, pack them into an array in the right ''endianness'', and then pass the array to the function. Maybe this is easier in Ada (I don't know), but it would be harder in C. Consider e.g. the array of output integers of the LZW task: in C, it is enough to take one integer at a time and tell the function to output (e.g.) 11 bits of that integer (of course, 11 bits must be enough to hold the value!); you are then outputting 11-bit-long ''words'' (the ''words'' of the LZW dictionary). Hope I've understood well what you've said and expressed my explanation well (too long as usual, sorry). --[[User:ShinTakezou|ShinTakezou]] 21:53, 20 December 2008 (UTC)
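The interface described above (pass a right-aligned value plus a bit count, emit MSB-first) can be sketched in C roughly as follows. This is only an illustration of the convention being discussed, not the actual implementation; the names `BitWriter`, `bw_write` and `bw_flush` are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical minimal bit writer: bits are passed right-aligned in
   `value`; the lowest `count` bits are emitted most-significant-first
   into an accumulator and flushed to the stream as whole bytes. */
typedef struct {
    uint8_t acc;   /* partially filled output byte */
    int used;      /* bits already occupied in acc */
    FILE *out;
} BitWriter;

void bw_write(BitWriter *w, unsigned value, int count)
{
    for (int i = count - 1; i >= 0; i--) {   /* MSB of the datum first */
        int bit = (value >> i) & 1;
        w->acc = (uint8_t)((w->acc << 1) | bit);
        if (++w->used == 8) {                /* a full byte: flush it */
            fputc(w->acc, w->out);
            w->acc = 0;
            w->used = 0;
        }
    }
}

void bw_flush(BitWriter *w)                  /* zero-pad the last byte */
{
    while (w->used != 0)
        bw_write(w, 0, 1);
}
```

With this sketch, `bw_write(&w, 12, 4)` followed by `bw_flush(&w)` writes the single byte 0xC0, matching the 4-bit example (binary 1100, padded on the right with zeros); writing the 11-bit LZW words would just be `bw_write(&w, code, 11)` per code.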
 
Ok, seen the code. In your implementation you need this specification, since you can choose. In the C implementation, or in any other ''low-level'' implementation, there is no need, since there is only one choice: we must pack bits so that the leftmost is the most significant and the rightmost the least significant (which is, so to say, the ''mathematical'' convention). How these significant bits are aligned in the container is a choice of the implementation. The only important thing is that if I want to output the 4 bits 1100, then the single byte of output must be 0xC0 (i.e. 1100'''0000''', where the bold bits are just padding). In this way, when reading, the first bit will be 1, the second 1, the third 0 and the fourth 0. If we put one bit after another in reading order, we obtain 1100 (and we must also obtain 1100 if we read the bits in a single shot). This is what ''bit oriented'' means. --[[User:ShinTakezou|ShinTakezou]] 22:07, 20 December 2008 (UTC)
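The reading side described above (first bit out is the most significant bit of each byte) can be sketched the same way. Again a hypothetical illustration, not the real code; the names `BitReader` and `br_read_bit` are made up.

```c
#include <stdio.h>

/* Hypothetical counterpart reader: returns bits one at a time in the
   order they were written, i.e. the most significant bit of each
   input byte first. Returns -1 at end of stream. */
typedef struct {
    int acc;    /* current input byte */
    int left;   /* bits still unread in acc */
    FILE *in;
} BitReader;

int br_read_bit(BitReader *r)
{
    if (r->left == 0) {             /* accumulator empty: fetch a byte */
        r->acc = fgetc(r->in);
        if (r->acc == EOF)
            return -1;
        r->left = 8;
    }
    r->left--;
    return (r->acc >> r->left) & 1; /* leftmost unread bit */
}
```

Reading four bits from a stream containing the byte 0xC0 yields 1, 1, 0, 0, i.e. the 1100 of the example.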