10 digits.
It takes 7 digits.
All I know is that when a number is negative, you still convert the magnitude to binary, but the sign has to be encoded as well; in the usual two's-complement scheme you invert the bits and add 1, which leaves the leading (high-order) bits of a negative number all set to 1.
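Here is a minimal Python sketch of that idea (the 8-bit width and the function name twos_complement are just for illustration, not part of the original answer):

def twos_complement(value, bits=8):
    # Wrap negative values into the unsigned range 0 .. 2**bits - 1,
    # which is exactly what two's complement does.
    if value < 0:
        value += 1 << bits
    return format(value, '0{}b'.format(bits))

print(twos_complement(9))    # 00001001
print(twos_complement(-9))   # 11110111 -- the leading bits are 1s for a negative number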
7 digits
When you convert this decimal number to binary, you get 111001001, which has 9 digits, so 9 bits are required to represent it in the normal (unsigned) case. To convert decimals to binary you can use http://acc6.its.brooklyn.cuny.edu/~gurwitz/core5/nav2tool.html
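As a quick cross-check (a sketch only, assuming the number in question is 457, which is what 111001001 works out to), Python's built-ins give the same answer:

n = 457                   # decimal value corresponding to 111001001
print(bin(n))             # 0b111001001
print(n.bit_length())     # 9 -> nine bits are needed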
To see the difference between straight binary and BCD, split the binary number into groups of 4 binary digits (bits), starting from the units. In 4 bits there are 16 possible values, from 0000 to 1111 (0 to 15).

In straight binary all of these possible combinations are used, thus:
4 bits can represent the decimal numbers 0-15
8 bits can represent the decimal numbers 0-255
12 bits can represent the decimal numbers 0-4095
16 bits can represent the decimal numbers 0-65535
etc.
In arithmetic, all combinations of bits are used, thus:
0000 1001 + 0001 = 0000 1010

In BCD, or Binary Coded Decimal, only the representations of the decimal digits 0-9 are used (that is, 0000 to 1001 in binary), and each 4-bit group (nybble) is read as one decimal digit, thus:
4 bits can represent the decimal numbers 0-9
8 bits can represent the decimal numbers 0-99
12 bits can represent the decimal numbers 0-999
16 bits can represent the decimal numbers 0-9999
In arithmetic, only the representations of decimal digits are used, thus:
0000 1001 + 0001 = 0001 0000

When BCD is used, each half of a byte is read directly as a decimal digit. BCD is obviously inefficient as storage for large numbers, since each nybble uses only 10 of its 16 possible values; however, it is sometimes easier and quicker to work with decimal digits (for example, when there is a lot of display of counting numbers to do, less binary-to-decimal conversion is needed).
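A short Python sketch of the contrast, using decimal 10 as in the addition example above (the helper name to_bcd is just illustrative):

def to_bcd(n):
    # Encode each decimal digit of n as its own 4-bit nybble.
    return ' '.join(format(int(digit), '04b') for digit in str(n))

n = 10
print(format(n, '08b'))   # straight binary: 00001010
print(to_bcd(n))          # BCD:             0001 0000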
5 bits are 5 binary digits. If they represent a decimal number, then that number can be anything from zero to 31, and can have either 1 or 2 digits.
10 bits would be required: a number 10 bits (10 binary digits) long can take 1024 distinct values, covering 0 up to 1023.
The largest decimal number 5 bits can hold is binary 11111, which is decimal 31.
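The last few answers all follow the same rule: n bits give 2^n distinct values, the largest being 2^n - 1. A quick check in Python:

for bits in (4, 5, 8, 10):
    print(bits, "bits:", 2**bits, "values, largest is", 2**bits - 1)
# 5 bits: 32 values, largest is 31
# 10 bits: 1024 values, largest is 1023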
56 in binary is 111000. Unlike the decimal number system, where we use the digits 0 through 9, binary uses only the digits 0 and 1.
1 million < 16^5, so 6 digits would be more than enough.
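If this refers to hexadecimal digits (base 16 is an assumption here, suggested by the 16^5 comparison), the exact count for one million can be checked directly in Python:

import math

n = 1_000_000
print(format(n, 'x'))                      # f4240 -> 5 hex digits
print(math.floor(math.log(n, 16)) + 1)     # 5, so 6 digits are certainly enough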
Seven will be more than enough.
Assuming you start from 0, you need at least 4 bits. 15 in binary: 15 = 8 + 4 + 2 + 1 = 1111₂