I don't believe anyone suggested it; certainly no one who could be named, at least. Long before we had digital computers, we had machines that were fully capable of processing binary information. The Jacquard loom is a prime example, and it pre-dates Charles Babbage's early computer designs. Although Babbage's own designs were decimal, binary is by far the simplest method of implementing a numeric system at the machine level.
Decimal 30 = binary 11110. The binary-coded decimal (BCD) representation, however, is 0011 0000: each decimal digit is encoded separately (3 → 0011, 0 → 0000).
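One way to check a BCD encoding is to convert each decimal digit to its own 4-bit group. A minimal Python sketch (the helper to_bcd is made up for this example; bin and format are standard built-ins):

    # Encode a non-negative integer as BCD: one 4-bit group per decimal digit.
    def to_bcd(n):
        return " ".join(format(int(d), "04b") for d in str(n))

    print(bin(30))     # 0b11110 -- plain binary
    print(to_bcd(30))  # 0011 0000 -- BCD: 3 -> 0011, 0 -> 0000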
I'm pretty sure binary is just 1s and 0s.
Hex BAD16: binary = 1011 1010 1101 0001 0110, decimal = 765206.
The 0x prefix marks a number as hexadecimal. FFFF is the hexadecimal equivalent of i) 65535 in the decimal system and ii) 1111111111111111 in the binary system.
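Python's built-in conversions can verify both equivalences; a quick sketch using only the standard bin(), format(), and hex-literal machinery:

    n = 0xFFFF              # 0x prefix marks a hexadecimal literal
    print(n)                # 65535
    print(bin(n))           # 0b1111111111111111

    m = 0xBAD16
    print(m)                # 765206
    print(format(m, "b"))   # 10111010110100010110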
8 in decimal is 1000 in binary
Decimal.
Computers use a binary system, not decimal.
Binary 10 = decimal 2.
The binary system uses only the digits 1 and 0. The decimal system uses the digits 0 through 9 and can include a decimal point, for example: 1.25.
No.
Binary is base 2, using the digits 0 and 1. Decimal is base 10, using the digits 0-9.
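Converting between the two bases is a one-liner in most languages; for instance, in Python:

    print(int("11110", 2))  # parse a binary string -> 30
    print(bin(30))          # format an integer as binary -> 0b11110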
8
Just as in decimal, you can put a minus sign in front. For example, if 101 (binary) is decimal 5, then -101 (binary) is decimal -5.
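For what it's worth, Python's parser accepts the sign the same way (a minimal sketch; real hardware typically encodes negatives differently, e.g. with two's complement):

    print(int("101", 2))    # 5
    print(int("-101", 2))   # -5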
There is no decimal equivalent for the binary number 13, because 13 cannot be a binary number: the digit 3 is not a valid binary digit.
The decimal representation of numbers is shorter: binary numbers require about 3.3 times as many digits.
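That factor comes from log2(10): each decimal digit carries about 3.32 bits of information. A quick check in Python:

    import math

    print(math.log2(10))    # 3.3219... bits per decimal digit

    n = 10**12
    print(len(str(n)))      # 13 decimal digits
    print(len(bin(n)) - 2)  # 40 binary digits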
110.101 could simply be a decimal number as written. If it is meant as binary, though, the point is a radix (binary) point, and the places to its right stand for halves, quarters, eighths, and so on. Read as binary, 110.101 = 4 + 2 + 0.5 + 0.125 = 6.625 in decimal.
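A minimal Python sketch of that conversion (the helper bin_to_decimal is made up for this example):

    # Convert a binary string with an optional radix point to a number.
    def bin_to_decimal(s):
        whole, _, frac = s.partition(".")
        value = int(whole, 2) if whole else 0
        for i, bit in enumerate(frac, start=1):
            value += int(bit) * 2 ** -i   # places after the point: 1/2, 1/4, 1/8, ...
        return value

    print(bin_to_decimal("110.101"))  # 6.625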