log2 200 = ln 200 ÷ ln 2 ≈ 7.64, so round up → 8 bits are needed.
If a signed number is being stored, 9 bits would be needed, since one extra bit is required to indicate the sign of the number.
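The two answers above can be checked in a few lines of Python; this is a sketch, and the helper names `bits_unsigned` and `bits_signed` are my own, not from the original.

```python
# Sketch: how many bits are needed to store a non-negative value n.
def bits_unsigned(n: int) -> int:
    # int.bit_length() gives ceil(log2(n + 1)) for n > 0,
    # i.e. the number of bits in the binary representation.
    return max(1, n.bit_length())

def bits_signed(n: int) -> int:
    # One extra bit is reserved for the sign, as noted above.
    return bits_unsigned(n) + 1

print(bits_unsigned(200))  # 8
print(bits_signed(200))    # 9
```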
5
To represent an eight-digit decimal number in Binary-Coded Decimal (BCD), each decimal digit is encoded in 4 bits, so the total is 8 digits × 4 bits/digit = 32 bits.
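A minimal sketch of the BCD encoding described above; the function name `to_bcd` and the sample digits are my own illustration.

```python
# Sketch: encode each decimal digit as its own 4-bit nibble (BCD).
def to_bcd(digits: str) -> str:
    return " ".join(format(int(d), "04b") for d in digits)

print(len("12345678") * 4)  # 32 bits for an eight-digit number
print(to_bcd("2025"))       # 0010 0000 0010 0101
```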
103
6 bits: 111110₂ = 32 + 16 + 8 + 4 + 2 + 0 = 62
If the 8 bits represent a signed number, the range is usually -128 to +127. This is -2⁷ to 2⁷ - 1.
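The two's-complement range above generalizes to any width n; here is a sketch (the helper name `signed_range` is my own).

```python
# Sketch: the range of an n-bit two's-complement signed integer
# is -2^(n-1) to 2^(n-1) - 1.
def signed_range(n: int) -> tuple:
    return (-(2 ** (n - 1)), 2 ** (n - 1) - 1)

print(signed_range(8))   # (-128, 127)
print(signed_range(16))  # (-32768, 32767)
```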
8 bits if unsigned, 9 bits if signed
How many bits are needed to represent decimal values ranging from 0 to 12,500?
14 bits, since 2¹³ = 8,192 is too small and 2¹⁴ = 16,384 ≥ 12,501.
1200
8
8 (assuming unsigned numbers - i.e., you don't reserve a bit for the sign).
Four bytes are 32 bits, and 32 bits can represent 2³² = 4,294,967,296 distinct values.
17 bits would allow an unsigned value up to 2¹⁷ - 1 = 131,071.
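Both counts above follow directly from powers of two; a quick sketch to confirm:

```python
# Sketch: 32 bits give 2**32 distinct values;
# the largest unsigned 17-bit value is 2**17 - 1.
print(2 ** 32)      # 4294967296
print(2 ** 17 - 1)  # 131071
```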