With a signed integer, the top bit is used to hold the sign of the number, so the range of values that can be held is -(2^(number_of_bits - 1)) to 2^(number_of_bits - 1) - 1. With an unsigned integer, all the bits are used to store the number, which is always non-negative, so the range is 0 to 2^number_of_bits - 1.

For example, with 16 bits, a signed integer has the range -(2^15) to 2^15 - 1 = -32768 to 32767, whereas an unsigned integer has the range 0 to 2^16 - 1 = 0 to 65535.
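
As a quick sanity check, here is a minimal C sketch (using the fixed-width types from <stdint.h>) that prints those 16-bit limits:

#include <stdio.h>
#include <stdint.h>

/* Print the 16-bit signed and unsigned ranges quoted above. */
int main(void)
{
    printf("int16_t  range: %d to %d\n", INT16_MIN, INT16_MAX);   /* -32768 to 32767 */
    printf("uint16_t range: 0 to %u\n", (unsigned)UINT16_MAX);    /* 0 to 65535 */
    return 0;
}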

The point in particular to note is how the numbers (bit patterns) with the top bit set are interpreted: with unsigned integers they are greater than those without the top bit set (e.g. 0xffff = 65535 > 0x7fff = 32767), but with signed integers they are less (e.g. 0xffff = -1 < 0x7fff = 32767).
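
To illustrate, here is a small C sketch that stores the same 16-bit pattern in an unsigned and a signed variable and prints both interpretations (the signed result assumes the usual two's-complement representation):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t u = 0xffff;            /* all 16 bits set */
    int16_t  s = (int16_t)0xffff;   /* same bit pattern, signed interpretation */

    printf("unsigned: %u\n", (unsigned)u);   /* prints 65535 */
    printf("signed:   %d\n", (int)s);        /* prints -1 on two's-complement machines */
    return 0;
}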

I'm not sure about the significance of declaring a constant as unsigned (having never used one), but at a guess I would say it tells the compiler that the value is an unsigned bit pattern, so that it can raise a warning (or error) if the constant is mixed with signed values, e.g. in a comparison with a signed int. In other words, it's an aid to cutting down on bugs by ensuring that things are only used for their intended purpose.
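
As a rough sketch of that idea in C: the 'u' suffix marks a constant as unsigned, and compilers such as gcc or clang can warn (with -Wsign-compare, included in -Wextra) when it is compared against a signed value. MAX_COUNT here is just a made-up name for illustration:

#include <stdio.h>

#define MAX_COUNT 100u   /* the 'u' suffix makes this constant unsigned */

int main(void)
{
    int delta = -1;

    /* In the comparison below the signed operand is converted to unsigned,
       so -1 becomes a very large value and the "wrong" branch is taken.
       Compilers typically warn about this signed/unsigned comparison. */
    if (delta < MAX_COUNT)
        printf("delta is smaller\n");
    else
        printf("delta compares as larger!\n");   /* this branch runs */

    return 0;
}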
