0000 0000 1111 1000. F (or 15) = 1111 in binary, and 8 = 1000 in binary, so F8 is 1111 1000.
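To check a hex-to-binary conversion like this, here is a minimal Python sketch (assuming the value in question is hex F8) that prints the whole byte and then each nibble separately:

```python
# Hex F8 as one 8-bit binary value, then nibble by nibble.
print(format(0xF8, "08b"))                      # 11111000
print(format(0xF, "04b"), format(0x8, "04b"))   # 1111 1000
```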
1 + 1,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111 = 1,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,112
Unless it is binary, in which case:
1 + 111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 = 1000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
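A quick way to sanity-check the binary case is to let Python do the carry; this small sketch assumes the operand is a run of 87 ones, matching the binary digits written above:

```python
# Adding 1 to a run of binary 1s carries all the way to the top:
# 87 ones + 1 = a 1 followed by 87 zeros.
ones = int("1" * 87, 2)
result = format(ones + 1, "b")
print(result)
print(result == "1" + "0" * 87)   # True
```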
5
300 = 256 + 32 + 8 + 4 = Binary 0000 0001 0010 1100
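A short Python check of that breakdown (the 16-bit width is just for display):

```python
# 300 as a 16-bit binary value, plus the powers of two that sum to it.
print(format(300, "016b"))                              # 0000000100101100
print([2 ** i for i in range(16) if 300 & (1 << i)])    # [4, 8, 32, 256]
```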
Floating point numbers are stored in scientific notation using base 2, not base 10. There are a limited number of bits, so they are stored to a certain number of significant binary figures. Various numbers of bytes are used to store the numbers, the bits being split between the mantissa (the significant digits of the number) and the exponent (the power of the base by which the mantissa is multiplied to put the binary point back where it belongs). Examples:

Single precision (IEEE) uses 4 bytes: 1 bit for the sign of the number, 8 bits for the exponent and 23 bits for the mantissa.
Double precision (IEEE) uses 8 bytes: 1 bit for the sign, 11 bits for the exponent and 52 bits for the mantissa.
The Commodore PET used 5 bytes: 8 bits for the exponent, 1 bit for the sign and 31 bits for the mantissa.
The Sinclair QL used 6 bytes: 12 bits for the exponent (stored in 2 bytes, 16 bits, 4 bits of which were unused), 1 bit for the sign and 31 bits for the mantissa.

The numbers are stored normalised. In decimal scientific notation the digit before the decimal point is non-zero, i.e. one of {1, 2, ..., 9}. In binary the only non-zero digit is 1, so every binary floating point number (except 0) has a 1 before the binary point; that initial 1 is therefore not stored (it is implicit).

The exponent is stored with an offset (bias) added. The IEEE formats use a bias of 2^(exponent bits - 1) - 1, e.g. 127 = 0111 1111 for an 8-bit exponent (some older formats used 2^(exponent bits - 1) = 128 instead). Zero is stored by having an exponent of zero and a mantissa of zero.

Example, 10 (decimal):
10 (decimal) = 1010 in binary → 1.010 × 10^11 (all digits binary, i.e. 1.010 × 2^3), which is stored in single precision as:
sign = 0
exponent = 0111 1111 + 0000 0011 = 1000 0010
mantissa = 010 0000 0000 0000 0000 0000 (the 1 before the binary point is implicit, not stored)

Example, -0.75 (decimal):
-0.75 (decimal) = -0.11 in binary (0.75 = ½ + ¼) → -1.1 × 10^-1 (all digits binary, i.e. -1.1 × 2^-1), which is stored in single precision as:
sign = 1
exponent = 0111 1111 - 0000 0001 = 0111 1110
mantissa = 100 0000 0000 0000 0000 0000

Note that 0.1 in decimal is a recurring binary fraction: 0.1 (decimal) = 0.0001100110011... in binary, which is one reason floating point numbers have rounding issues when dealing with decimal fractions.
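To see the single-precision layout for the two worked examples, here is a small Python sketch using the standard struct module; the field widths (1 sign bit, 8 exponent bits with bias 127, 23 mantissa bits) are the IEEE values described above:

```python
import struct

def float_bits(x):
    """Pack x as IEEE 754 single precision and split out sign, exponent and mantissa."""
    (raw,) = struct.unpack(">I", struct.pack(">f", x))
    sign = raw >> 31
    exponent = (raw >> 23) & 0xFF
    mantissa = raw & 0x7FFFFF
    return sign, exponent, mantissa

for x in (10.0, -0.75):
    s, e, m = float_bits(x)
    print(f"{x}: sign={s} exponent={e:08b} (unbiased {e - 127}) mantissa={m:023b}")

# 10.0:  sign=0 exponent=10000010 (unbiased 3)  mantissa=01000000000000000000000
# -0.75: sign=1 exponent=01111110 (unbiased -1) mantissa=10000000000000000000000
```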
It is 1 0000 0000 0011
I assume you mean BCD, Binary Coded Decimal. BCD uses 4 bits to represent one decimal digit. The easiest way is to make a table with decimal, BCD, hex and straight binary:

Decimal   BCD         Hex   Straight binary
1         0000 0001   1     0000 0001
2         0000 0010   2     0000 0010
3         0000 0011   3     0000 0011
...skip a bit...
9         0000 1001   9     0000 1001
10        0001 0000   A     0000 1010
11        0001 0001   B     0000 1011
...skipping again...
15        0001 0101   F     0000 1111
16        0001 0110   10    0001 0000

Get the idea? In BCD, 4 binary bits are matched with one decimal digit. In straight binary, the number just scrolls on. Interestingly, this caused some problems, earning itself the name 'the 2.1K bug': some systems, generally small systems like EFTPOS terminals, wrote values in BCD but read them back as straight binary. So dates were written as BCD 10, but read back as (check the table) ordinary binary 16. Hilarity ensued.
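Here is a small Python sketch of that write-as-BCD, read-as-binary mix-up; the to_bcd helper is just an illustration for this answer, not any particular terminal's code:

```python
def to_bcd(n):
    """Encode a non-negative decimal number as packed BCD, one nibble per digit."""
    bcd = 0
    for shift, digit in enumerate(reversed(str(n))):
        bcd |= int(digit) << (4 * shift)
    return bcd

year_digits = 10                              # e.g. the "10" in 2010
bcd = to_bcd(year_digits)
print(f"BCD:             {bcd:08b}")          # 00010000
print(f"Straight binary: {year_digits:08b}")  # 00001010
# Reading the BCD byte back as if it were straight binary gives 16, not 10.
print("BCD byte misread as binary:", bcd)     # 16
```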
0001 0000
192 = 1100 0000
168 = 1010 1000
0 = 0000 0000
1 = 0000 0001
192.168.0.1 = 11000000.10101000.00000000.00000001, or with leading zeros dropped, 11000000.10101000.0.1
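The same conversion in a couple of lines of Python, in case you want to try other addresses:

```python
def ip_to_binary(ip):
    """Convert a dotted-decimal IPv4 address to dotted 8-bit binary octets."""
    return ".".join(f"{int(octet):08b}" for octet in ip.split("."))

print(ip_to_binary("192.168.0.1"))   # 11000000.10101000.00000000.00000001
```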
The binary value 1000 0000 represents the decimal number 128. In binary, each digit's place value doubles from right to left, starting at 1: the rightmost position is worth 2^0 = 1 and the leftmost of the eight positions is worth 2^7 = 128. Since the only 1 in 1000 0000 is in that leftmost position, the value is 128.
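A tiny Python illustration of those place values, doubling from right to left:

```python
bits = "10000000"
# Place value of each position, rightmost first: 1, 2, 4, ..., 128.
print([2 ** i for i in range(len(bits))])
# Sum the place values where the bit is 1 -> 128.
print(sum(2 ** i for i, b in enumerate(reversed(bits)) if b == "1"))
```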
Decimal 30 = binary 11110. The binary-coded decimal (BCD) form, however, is 0011 0000 (3 = 0011, 0 = 0000).
The binary number 10000000 represents the decimal 128
in EBCDIC: 11001000, 10000101, 10010011, 10010011, 10010110
in ASCII: 1001000, 1100101, 1101100, 1101100, 1101111
in Unicode: 0000 0000 0100 1000, 0000 0000 0110 0101, 0000 0000 0110 1100, 0000 0000 0110 1100, 0000 0000 0110 1111
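A short Python sketch that reproduces these encodings; it assumes code page 037 as the EBCDIC variant and big-endian UTF-16 for the Unicode column:

```python
text = "Hello"

# ASCII: 7 bits per character
print("ASCII: ", ", ".join(f"{b:07b}" for b in text.encode("ascii")))

# EBCDIC (code page 037): 8 bits per character
print("EBCDIC:", ", ".join(f"{b:08b}" for b in text.encode("cp037")))

# UTF-16 big-endian: 16 bits per character for these code points
raw = text.encode("utf-16-be")
print("UTF-16:", ", ".join(f"{int.from_bytes(raw[i:i+2], 'big'):016b}"
                           for i in range(0, len(raw), 2)))
```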
The Alphabet in Binary Code

Letter   Binary Code
A        01000001
B        01000010
C        01000011
D        01000100
E        01000101
F        01000110
G        01000111
H        01001000
I        01001001
J        01001010
K        01001011
L        01001100
M        01001101
N        01001110
O        01001111
P        01010000
Q        01010001
R        01010010
S        01010011
T        01010100
U        01010101
V        01010110
W        01010111
X        01011000
Y        01011001
Z        01011010

Letter   Binary Code
a        01100001
b        01100010
c        01100011
d        01100100
e        01100101
f        01100110
g        01100111
h        01101000
i        01101001
j        01101010
k        01101011
l        01101100
m        01101101
n        01101110
o        01101111
p        01110000
q        01110001
r        01110010
s        01110011
t        01110100
u        01110101
v        01110110
w        01110111
x        01111000
y        01111001
z        01111010
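The whole table can be regenerated with a few lines of Python, since these are simply the 8-bit ASCII codes of the letters:

```python
import string

# Print the 8-bit ASCII code for each letter, upper case then lower case.
for letter in string.ascii_uppercase + string.ascii_lowercase:
    print(letter, format(ord(letter), "08b"))
```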