Q: Why are calculations in computer systems performed in binary and not ASCII?

Everything has to be binary encoded, since binary is the only language natively understood by a digital computer; even ASCII character codes are stored as binary. However, it's not clear how you would use ASCII itself to perform a calculation, because the American Standard Code for Information Interchange is a character encoding scheme that maps 7-bit binary codes to characters (how those characters are drawn as glyphs is a separate matter for the font or code page in use). You can certainly use the encodings in calculations, but not the characters themselves, because the digits '0' through '9' in the ASCII table do not map to the values 0 through 9. To translate an ASCII digit into the value it represents, you first have to subtract 48 (the code for '0') from the ASCII character value. From the computer's perspective, determining the actual value represented by the character '7' requires the binary calculation 00110111 - 00110000 = 00000111. In hexadecimal, this equates to 0x37 - 0x30 = 0x07, because 0x37 maps to the character '7' in the ASCII table, while 0x30 maps to '0'. Thus you could also say '7' - '0' = 7.
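
A minimal C sketch of that conversion (the variable names are just for illustration, not part of any standard API):

    #include <stdio.h>

    int main(void) {
        char c = '7';            /* stored as 0x37, i.e. binary 00110111 */
        int value = c - '0';     /* 0x37 - 0x30 = 0x07, i.e. the value 7 */
        printf("'%c' is stored as 0x%02X and represents the value %d\n",
               c, (unsigned)c, value);
        return 0;
    }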

Wiki User (10y ago)

More answers

Digital computers use binary representations because binary is by far the simplest representation to implement. Computers use various physical methods to represent a binary digit (a bit), such as switches that are either on or off, or capacitors that are charged or discharged via transistors. The more rapidly the hardware can switch an individual bit between these two states, the faster it will operate, but it must also be able to "read" the state just as rapidly. With only two possible states this is extremely simple, because the computer only needs to test whether a signal (of any kind) is above or below a single threshold. If we tried to do the same thing with a decimal system, the computer would need to distinguish 10 possible states using 9 thresholds per digit. In binary, the computer can distinguish 16 possible states using just 4 bits (and therefore 4 thresholds). Binary is thus clearly the more efficient way to represent digital information.
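
As a toy illustration of the threshold idea, here is a C sketch; the voltages and threshold values are invented purely for this example and do not correspond to any real hardware:

    #include <stdio.h>

    /* Reading a bit needs only one threshold test. */
    int read_bit(double volts) {
        return volts > 1.65 ? 1 : 0;   /* assumed 3.3 V logic, threshold at the midpoint */
    }

    /* A hypothetical base-10 memory cell would need nine thresholds per digit. */
    int read_decimal_digit(double volts) {
        const double thresholds[9] = {0.33, 0.66, 0.99, 1.32, 1.65, 1.98, 2.31, 2.64, 2.97};
        int digit = 0;
        for (int i = 0; i < 9; i++)
            if (volts > thresholds[i])
                digit = i + 1;         /* signal clears this threshold, so the digit is at least i+1 */
        return digit;
    }

    int main(void) {
        printf("2.8 V read as a bit:           %d\n", read_bit(2.8));
        printf("2.8 V read as a decimal digit: %d\n", read_decimal_digit(2.8));
        return 0;
    }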

Wiki User (10y ago)

There are only two electrical states in a digital computer:

- No potential (0 Volts)

- Potential (typically between 3 V and 5 V, depending on the processor architecture)


Therefore, a digital computer can "understand" only two values: zero and one, yes and no, electrical potential and no electrical potential. That is called the binary system.
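
A short C sketch of how a run of such on/off states is read back as a number; the bit pattern here is just an example (it happens to be 0x37, the ASCII code for '7' from the first answer):

    #include <stdio.h>

    int main(void) {
        /* Eight on/off states, most significant bit first. */
        int bits[8] = {0, 0, 1, 1, 0, 1, 1, 1};
        int value = 0;
        for (int i = 0; i < 8; i++)
            value = value * 2 + bits[i];   /* shift left by one place, then add the next bit */
        printf("bit pattern 00110111 = %d decimal = 0x%02X\n", value, (unsigned)value);
        return 0;
    }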


Analogue computers are an entirely different story.

Wiki User (12y ago)

Computers only understand 1 (on) and 0 (off).

Wiki User (13y ago)