What is the largest real number that can be stored in binary using 16 bits, where 1 bit is used for the sign, 5 bits for the characteristic and 10 for the mantissa?
In binary, the largest bit pattern (using the IEEE binary16 layout,
ignoring special values for now) would be: 0111 1111 1111 1111
(grouping the bits in nybbles* for easier reading). This is split
as |0|111 11|11 1111 1111|, which
represents:
0 = sign
111 11 = exponent
11 1111 1111 = mantissa.
Using IEEE style, the exponent is offset (biased) by 011 11
(decimal 15), making the maximum exponent 100 00 (decimal 16).
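A quick sketch of the bias arithmetic in Python (the bit values are the ones above):

```python
# 5-bit exponent field: stored values run 0..31, with a bias of 15 (011 11).
bias = 0b01111        # 15
stored_max = 0b11111  # 31, all exponent bits set
# If an all-ones exponent were usable, the effective (unbiased) value would be:
print(stored_max - bias)  # 16, i.e. 100 00 in binary
```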
This is scientific notation using binary instead of decimal. As
such there must be a non-zero digit before the binary point, but in
binary this can only ever be a 1, so to save storage it is not
stored and the mantissa effectively has an extra bit, which for the
10 bits specified makes it 11 bits long. Thus the mantissa
represents: 1.1111 1111 11
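The effect of the hidden bit can be sketched in Python (the variable names are just illustrative):

```python
# The 10 stored mantissa bits, all ones:
stored = 0b1111111111             # 1023
# Prepend the implicit leading 1 to get the effective 11-bit mantissa:
with_hidden = (1 << 10) | stored  # 2047, i.e. 1.1111111111 in binary
print(with_hidden / 2**10)        # 1.9990234375
```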
This gives the largest number as:
1.1111 1111 11 × 10^10000
(all digits are binary, not decimal.) This expands to 1 1111
1111 1100 0000 (binary) = 0x1ffc0 = 131,008
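That expansion can be checked with a couple of lines of Python:

```python
# 1.1111111111 (binary) = (2**11 - 1) / 2**10 = 2047/1024, scaled by 2**16:
mantissa = (2**11 - 1) / 2**10   # 1.9990234375
value = int(mantissa * 2**16)
print(value, hex(value))  # 131008 0x1ffc0
```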
Note that this is NOT accurate in storage - there are 6 bits
which are forced to be zero, making the number only accurate to ±32
(decimal): the second largest possible real would be 1 1111 1111
1000 0000 = 0x1ff80 = 130,944 - the numbers are only accurate to
about 4 decimal digits; the largest decimal real number would be
1.310 × 10^5, and the next 1.309 × 10^5 and so on.
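The spacing (and hence the ±32 accuracy) follows from the 6 forced-zero bits; a sketch:

```python
# At exponent 16, adjacent representable values differ by 2**(16 - 10) = 64,
# so any real number in between is at most 32 away from a representable one.
step = 2 ** (16 - 10)
print(step)            # 64
print(131008 - step)   # 130944, the second largest value
```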
However, with proper IEEE, an exponent with all bits set is used
to identify special numbers (infinities and NaNs), which makes the
largest possible 0111 1011 1111 1111, which is 1.1111 1111 11 ×
10^1111 = 1111 1111 1110 0000 = 0xffe0 = 65504, accurate to ±16,
i.e. the largest is about 6.55 × 10^4.
* a nybble is half a byte which is directly representable as a
single hexadecimal digit.
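For the proper-IEEE figure, Python's struct module has supported the half-precision format code 'e' since Python 3.6, so the maximum can be verified directly (the bit patterns are the ones from the answer above):

```python
import struct

# 0x7bff is 0|11110|1111111111: the largest finite binary16 value.
(largest,) = struct.unpack('<e', (0x7BFF).to_bytes(2, 'little'))
print(largest)  # 65504.0

# One pattern higher, 0x7c00, has the all-ones exponent: positive infinity.
(inf,) = struct.unpack('<e', (0x7C00).to_bytes(2, 'little'))
print(inf)  # inf
```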