
In message 473AE49A.40801@inaccessnetworks.com you wrote:
> This is the main misunderstanding. When you said "int" I thought you meant dereferencing an "int *"; in fact, not only me but other people on the list as well. So your proposal is to convert the "char val" to an "int val". You don't solve the problem I mentioned by doing this.
Well, the original error description said the problem was caused by the fact that "char" might be treated as "unsigned char" on some platforms, so the test for "< 0" would always fail.
> Let us not forget that all we want to do here is take the *bits* of the buffer one by one, starting from the MSB. Checking for negativity is just a hack to acquire the MSB, since signed values are two's complement.
If this was the intention of the code, then the implementation of that part is wrong (as has been pointed out before). It is already wrong to assume that you are on a two's complement machine...
> architectures, but I fail to see how this is cleaner than converting the val to "unsigned char" like the "data" and doing "val & 0x80".
It depends on the purpose of the code (which I didn't bother to dig into). If you really want to distinguish between < 0 and >= 0, you should use integer types. If you want to test whether a certain bit is set, then you should use a logical AND operation. In this case the bug was in using '< 0', not in the variable type.
I think we can close this now. A patch was submitted which cleans this up.
Best regards,
Wolfgang Denk