Best Answer

Many computer systems (though not all) represent numbers internally in base two, by using large numbers of heavily miniaturized devices called bistables.

Bistables really are a class of devices, like "cars" for instance. Various technologies will implement bistables in various ways, but like "cars", they will all implement some basic common functionality. In the case of the bistable this basic functionality is the fact that they can only have one of two states, at any one time: "on" and "off".

The "on" state of a bistable represents a binary "1" and the "off" state represents a binary "0". Thus, the decimal number 3 (binary 11) for example can be represented by two bistables, each of them in their "on" state.

Because bistables have two possible states, they can only be used to represent base two numbers for which only two possible digits (1 and 0) are needed. Therefore computers are said to operate in base two.

However, any number can be converted from any base to any base, so any decimal number one can think of can also be stored by a computer in base two, with no disadvantage.

Any content that can be represented numerically can also be represented by binary numbers, so computers can employ various quantities of bistables to represent all sorts of content, from images to text to software.
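The mapping from a decimal number to a row of bistable states can be sketched in a few lines (a Python sketch; the helper name and the eight-bistable width are illustrative assumptions, not anything specified above):

```python
# Sketch: a decimal number as a row of bistable on/off states.
# A "1" marks a bistable switched on, a "0" one switched off.
def to_bistables(n, width=8):
    """Render non-negative integer n as `width` on/off states."""
    return format(n, "0{}b".format(width))

print(to_bistables(3))     # the decimal number 3 needs two "on" bistables
print(to_bistables(127))
```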

Wiki User · 14y ago

More answers

Wiki User · 11y ago

Computers store numbers in a base two. We think in base ten (I purposely avoid digits for a reason here.) Also, this may seem a bit condescending when I talk about base ten - I'm not assuming you don't know how normal numbers work - I'm just explaining it in the kind of detail necessary to understand unfamiliar bases. (Skip to the bold paragraph if you want the quick and dirty explanation - the extended version assumes you want to know how to actually work with binary representations of numbers.)

A note on how numerical bases work: each position's value, moving left from the rightmost position (using whole numbers; decimals make translation to/from the on/off format a little more complex), is the base multiplied by the value of the position to its right. In base ten, a one in the third position from the right is worth ten times a one in the second position. (Yes, I just said a hundred is ten times ten.) Similarly, in base two, the third position is worth two times the second position, which is worth two times the first (which is always one). So the third digit is a four; the second is a two. The fourth is an eight; the fifth is a 16, and so on. (This is why I explained it in base ten first: base ten makes a lot more sense to people who haven't gotten used to other numerical bases.)

A bit is stored by a physical on/off switch - in modern hardware, transistor circuitry that is either on or off. Since there are only two positions, in a numerical context it can only function in base two. So to represent a number like, say, 127, to a computer, one first needs to translate the concept of one-hundred twenty-seven into a base-two form. (I use 127 because it's easy to convert.) It's 1+2+4+8+16+32+64. So there's a 1 in the 1, 2, 4, 8, 16, 32 and 64 positions: 127 in base ten equals 0111 1111 in base two.

(Tangent: the leading zero is habitual. Most programmers prefer to think in hexadecimal (base 16) instead of binary (base 2) - the numbers are a lot shorter and thus easier for humans to comprehend, but they also relate directly to what the computer uses: every 4 digits in binary is 1 digit in hexadecimal. And it's always the same digit - it doesn't matter whether "0111" turns up in the first set of 4 binary digits or the tenth, it always means 4+2+1 = 7 in hexadecimal. The letters a-f represent the digits 10-15, respectively.)
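The four-binary-digits-per-hex-digit correspondence is easy to check (a Python sketch using the 127 example from above):

```python
# Every 4-bit group (nybble) maps to exactly one hex digit,
# independent of where it sits in the number.
n = 0b01111111               # 127, the example above
hex_form = format(n, "x")
print(hex_form)              # the nybbles 0111 and 1111 as two hex digits
assert format(0b0111, "x") == "7"   # 0111 always means 4+2+1 = 7
```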

In short, computers can represent large numbers using only 1s and 0s because they work in a lower numerical base (meaning they use more digits, but there are fewer possible digits for each position in the number).

If you're wondering, there is such a thing as base one, though it's not very useful for large numbers. Tally marks have only ones; there's no zero and thus no exponential gain in meaning as you get more digits. (Even then, when we use them practically, we still usually group them in sets of 5 so they're easier to keep track of).

Oh, and if you're wondering how a computer actually keeps track of where a number starts and stops: it's complex, and it varies by CPU - which is why nobody actually programs in 0's and 1's (machine language) unless they really have to; the language varies significantly depending on the exact model of processor you're using. (Assembly is bad enough, but at least there's some standardization and human logic in it.) Details at that level may even be proprietary: memory management there can have a noticeable impact on how fast a computer runs, and Intel and AMD both want that slight edge over their competitor. It's also completely irrelevant to the vast majority of programmers, let alone the general population.

Wiki User · 8y ago

That depends on the type and coding of the variable containing the number.

Integer numbers are typically represented using Binary coding with 8, 16, 32, 64, or 128 bits on most modern computers although occasionally more bits are used for special applications.

Real numbers are typically represented using Floating Point coding with 32, 64, 80, or 128 bits on most modern computers although graphics processors often use 16 bits and special applications sometimes use 256 bits.

Complex numbers, being made of two Real numbers, use twice as many bits as the Real type they are built from.

Some numbers are represented in a coded Decimal format instead of Binary.

Computers in the 1950s through 1970s used as few as 6 bits to as many as 72 bits for numbers.

It can be almost anything. I once wrote a special program that used more than 60,000 bits for a few numbers that required very high precision to get the correct answer. That computer had variable word-length hardware that supported such very wide words directly; on modern computers, which all have fixed word-length hardware, numbers of the same size can be handled using software arithmetic libraries.
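Python's built-in integers are one such software-arithmetic facility: they grow to any width, so a number tens of thousands of bits wide is no problem (a sketch; the 60,000-bit figure just echoes the anecdote above):

```python
# Arbitrary-precision "software arithmetic": Python ints grow as needed,
# far past any fixed hardware word length.
n = 2 ** 60000
print(n.bit_length())   # this one number needs 60,001 bits
```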

Wiki User · 12y ago

Various mathematical systems are used by computing systems to represent, encode or process numbers.

Binary: only two digit values (0 and 1), commonly grouped into 8-bit blocks (bytes), provide all numerical values.

Hexadecimal: 16 digit values, 0 to 9 and A to F, each representing one 4-bit block.

Many other systems are used, and more information can be found in multiple places online.

Example systems: Decimal, Binary, Octal, Hexadecimal

Wiki User · 12y ago

Most modern computers use straight binary notation with 2's complement negative numbers. (e.g. 10101 = 16 + 4 + 1 = 21)
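Two's complement can be sketched by masking a Python integer down to a fixed word (the 8-bit width and helper name are assumptions for illustration):

```python
# Two's complement in a k-bit word: the pattern for -n is 2**k - n.
# Masking with (1 << bits) - 1 yields that pattern for negative ints.
def twos_complement(n, bits=8):
    return format(n & ((1 << bits) - 1), "0{}b".format(bits))

print(twos_complement(21))    # 00010101 = 16 + 4 + 1
print(twos_complement(-21))   # 11101011 = 256 - 21 = 235
```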

Early computers used many different methods including: various Binary Coded Decimal formats, 9's complement negative numbers, 1's complement negative numbers, signed magnitude negative numbers.

Floating point involves storing a significand (mantissa) and an exponent, both signed, together - usually packed into the same word.
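The packed fields of a 64-bit IEEE 754 double can be pulled apart with Python's standard struct module (field widths 1/11/52 per the standard; the value 1.5 is just an example):

```python
import struct

# Reinterpret a double's 64 bits as an unsigned integer, then slice
# out the IEEE 754 fields: 1 sign bit, 11 exponent bits, 52 mantissa bits.
bits, = struct.unpack(">Q", struct.pack(">d", 1.5))
sign     = bits >> 63
exponent = (bits >> 52) & 0x7FF
mantissa = bits & ((1 << 52) - 1)
print(sign, exponent, mantissa)  # 1.5 = +1.1 (binary) x 2**0, exponent bias 1023
```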

Wiki User · 12y ago

Computers use binary in a sort of on/off system. For instance, with switches 1, 5, and 9 on:

Switch: 1 2 3 4 5 6 7 8 9
State:  1 0 0 0 1 0 0 0 1

A second answer:

Computers use switches made of transistors. They are either ON or OFF. Binary numbers satisfy this system, because there are only two states, 1 and 0.

By grouping binary digits into Bytes, large numbers can be represented.

The processor has an in-built instruction set. When triggered by a binary number on its bus, it can look up a table and perform an instruction.

Wiki User · 14y ago

A computer represents numbers in binary code. For example, the decimal number 17 equals 10001 in binary.

Q: How does a computer represent numbers using bits?

Related questions

How many numbers can you represent with 4 bits?

16 of them.


How many binary bits are necessary to represent 748 different numbers?

10 bits. With n bits you can represent 2^n different numbers, and 2^9 = 512 < 748 ≤ 1024 = 2^10.


How many bits are required in decimal numbers in range 0 to 999 using Straight binary code and BCD code?

Straight binary: 10 bits, since 2^10 = 1024 covers 0 to 1023. BCD: 12 bits, since each of the three decimal digits needs its own 4-bit group.
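Both counts can be checked mechanically (a Python sketch):

```python
# Straight binary: smallest width such that 2**width covers 0..999.
straight = (999).bit_length()      # bits needed for the largest value
# BCD: one 4-bit group per decimal digit, three digits for 0..999.
bcd = 4 * len(str(999))
print(straight, bcd)               # 10 and 12
```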


What decimal numbers can 1 byte represent?

A byte has 8 bits. Interpreted as an unsigned number, all bits at 0 is zero and all bits at 1 is 255, so one byte can represent the decimal numbers 0 through 255.


How are floating point numbers handled as binary numbers?

Floating point numbers are typically stored as numbers in scientific notation, but in base 2. A certain number of bits represent the mantissa, other bits represent the exponent. This is a highly simplified explanation; there are several complications in the IEEE floating point format (or other similar formats).


How many bits are required to represent double data type in memory?

It differs slightly depending on the platform or the language you are using. For the Java programming language, which is platform-independent, a double is 64 bits.


How does a series of bits represent data?

A series of bits represents data through an agreed-upon encoding: groups of bits are read as numbers, and those numbers stand for characters, pixel colours, audio samples, or machine instructions depending on context. Anything from keystrokes to pictures to movies and music can be stored this way.


How many bits are needed to represent colors?

Most modern digital cameras use 24 bits (8 bits per primary) to represent a color. But more or less can be used, depending on the quality desired. Many early computer graphics cards used only 4 bits to represent a color.


Using bits and bytes in different combinations to represent a code is known as?

An encoding (coding scheme), such as ASCII.


How many bits are needed to represent decimal value ranging from 0 to 12500?

14 bits: the range 0 to 12,500 has 12,501 distinct values, and 2^13 = 8192 < 12,501 ≤ 16,384 = 2^14.
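A one-liner confirms the count (a Python sketch):

```python
# Smallest bit width whose 2**width reaches 12,501 distinct values.
bits = (12500).bit_length()
print(bits)   # 14, since 2**13 = 8192 < 12501 <= 16384 = 2**14
```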


How does BCD differ from the straight binary number system?

To compare straight binary with BCD, split the binary number into 4-bit groups (nybbles) starting from the units. In 4 bits there are 16 possible values, from 0000 to 1111 (0 to 15).

In straight binary, all 16 combinations are used:
4 bits can represent the decimal numbers 0-15
8 bits can represent the decimal numbers 0-255
12 bits can represent the decimal numbers 0-4095
16 bits can represent the decimal numbers 0-65535
and arithmetic uses every combination, e.g. 0000 1001 + 0001 = 0000 1010 (9 + 1 = 10).

In BCD, or Binary Coded Decimal, only the codes for the decimal digits 0-9 (0000 to 1001 in binary) are used, and each nybble is read as one decimal digit:
4 bits can represent the decimal digits 0-9
8 bits can represent the decimal digits 0-99
12 bits can represent the decimal digits 0-999
16 bits can represent the decimal digits 0-9999
and arithmetic skips the unused codes, e.g. 0000 1001 + 0001 = 0001 0000 (9 + 1 = 10).

When BCD is used, each half of a byte is read directly as a decimal digit. BCD is obviously inefficient as storage for large numbers, since each nybble uses only 10 of its 16 possible values. However, it is sometimes easier and quicker to work with decimal digits directly - for example, when counting values must be displayed frequently, less binary-to-decimal conversion needs to be done.
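The difference between the two codings of the same value can be sketched in Python (the helper name is illustrative):

```python
# Pack each decimal digit of n into its own 4-bit nybble (BCD).
def to_bcd(n):
    result = 0
    for i, digit in enumerate(reversed(str(n))):
        result |= int(digit) << (4 * i)
    return result

print(format(to_bcd(10), "08b"))  # 0001 0000 : BCD, one digit per nybble
print(format(10, "08b"))          # 0000 1010 : straight binary
```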


What is the range of numbers that can be encoded in 4 bits using 2s complement notation?

Using 4 bits the signed range of numbers is -8 to 7. When working with signed numbers one bit is the sign bit, thus with 4 bits this leaves 3 bits for the value. With 3 bits there are 8 possible values, which when using 2s complement have ranges: for non-negative numbers these are 0 to 7; for negative numbers these are -1 to -8. Thus the range for signed 4 bit numbers is -8 to 7.
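Enumerating all sixteen 4-bit patterns reproduces that range (a Python sketch):

```python
# In two's complement, patterns with the top bit set are negative:
# their value is the pattern minus 2**4.
values = sorted(p if p < 8 else p - 16 for p in range(16))
print(values[0], values[-1])   # -8 and 7
```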