Best Answer

Because 1 byte is defined [now] to be 8 bits.

Digital computers use binary memory locations, i.e. each memory location can hold one of two values, 0 or 1, since each state is easily represented by whether an electrical charge/voltage exists or not.

Using a single BInary digiT, or bit for short, to represent data is not very useful, as there are only two possible data items: 0 and 1. However, by combining more binary digits into a single unit, more data can be represented in each unit:

Using 1 bit only 2 numbers can be represented: 0 & 1

Using 2 bits, 4 numbers can be represented: 0, 1, 2 & 3

Using 3 bits, 8 numbers can be represented: 0, 1, 2, 3, 4, 5, 6 & 7

Using 4 bits, 16 numbers can be represented: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 & 15

And so on: every additional bit doubles the quantity of numbers that can be represented.
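The doubling described above can be sketched in a few lines of Python (an illustrative snippet, not part of the original answer):

```python
# Each extra bit doubles the count of representable numbers.
for bits in range(1, 9):
    count = 2 ** bits
    print(f"{bits} bit(s): {count} numbers, from 0 to {count - 1}")
```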

Using 8-bit storage units, two decimal digits can be stored quite efficiently; this is Binary Coded Decimal (BCD). 8-bit computers probably became popular with their manufacturers partly because BCD stores two decimal digits in a single unit. I don't know the etymology fully, but I guess that back in the 1950s a term was needed to describe this 8-bit storage unit, and it was either considered an extended bit (bitE) or, as there were eight bits, a bit-eight abbreviated to "bite". However, "bite" would be very easy to misread or miswrite as "bit", so it was deliberately misspelled with a 'y' instead of the 'i': byte.
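The BCD packing described above can be sketched as follows (the function names are mine, chosen for illustration):

```python
def pack_bcd(tens, units):
    """Pack two decimal digits (0-9 each) into one byte: tens digit in the high nibble."""
    if not (0 <= tens <= 9 and 0 <= units <= 9):
        raise ValueError("each BCD digit must be 0-9")
    return (tens << 4) | units

def unpack_bcd(value):
    """Split a packed-BCD byte back into its two decimal digits."""
    return value >> 4, value & 0x0F

# 42 packs into the single byte 0x42 -- the hex form mirrors the decimal digits.
assert pack_bcd(4, 2) == 0x42
assert unpack_bcd(0x42) == (4, 2)
```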

Early microprocessor digital computers used 4-bit units, so that one storage unit could hold one of 16 different numbers: 0-9 (the decimal digits) and a further 6 numbers (10-15, often represented by the hexadecimal "digits" A-F).

When a word was wanted to describe the 4-bit storage unit, the term nybble (nibble spelled with a 'y', to match byte) was probably coined as a pun: take two nibbles of something (e.g. a piece of cake) and you get a bite out of it.
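The nibble/hex-digit correspondence can be illustrated like this (the example byte value is hypothetical):

```python
byte = 0b10111010            # the byte 0xBA
high_nibble = byte >> 4      # 0b1011 -> hexadecimal digit B
low_nibble = byte & 0x0F     # 0b1010 -> hexadecimal digit A
assert (high_nibble, low_nibble) == (0xB, 0xA)
# Each 4-bit nibble maps to exactly one hexadecimal digit.
```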

Wiki User

10y ago

Q: Why does 1 byte equal 8 bits?
Continue Learning about Other Math

What is smaller than a byte?

A bit: 1 byte = 8 bits. A nibble (half a byte, or 4 bits) is also smaller than a byte. The bit is the smallest unit; there is no standard unit smaller than a bit.


Can a 16-bit CPU process 2 bytes at a time?

Yes. A 16-bit CPU has 16-bit registers and a 16-bit data path, so it can process 2 bytes (2 × 8 = 16 bits) at a time.


How could you convert MegaBits per second to MegaBytes per second?

1 Byte = 8 bits (by definition).
-- Take the number of megabits.
-- Divide it by 8.
-- The answer is the number of megabytes.
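The conversion is a single division; a minimal sketch (the function name and example rate are illustrative):

```python
def megabits_to_megabytes(megabits):
    """1 byte = 8 bits, so megabits / 8 = megabytes."""
    return megabits / 8

# A hypothetical 80 Mb/s link moves 10 MB/s.
assert megabits_to_megabytes(80) == 10.0
```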


Can two hexadecimal digits be stored in one byte?

Yes. A byte is 8 bits, and one hexadecimal digit takes up four bits, so two hexadecimal digits can be stored in a byte. The largest hexadecimal digit is F (15 in base ten), which converts to 1111 in base two; that is why four bits are enough to store any hexadecimal digit. With 8 bits, two hexadecimal digits can be stored (FF would be 11111111, which is 8 bits). Generally, 4 bits are always used per hexadecimal digit, with leading zeros where necessary. For example, the hexadecimal digit 5 would be stored as 0101, and the hexadecimal digits 5A would be stored as 01011010.
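The 5A example above can be checked directly in Python:

```python
# Two hex digits per byte: 0x5A is the 8-bit pattern 0101 1010.
value = 0x5A
binary = format(value, "08b")  # pad to 8 bits with leading zeros
assert binary == "01011010"
assert 0xFF == 0b11111111      # FF fills the whole byte
```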


How many zeros in a nanobyte?

We would have to say that there's no such thing as a nanobyte. A "byte" is defined as a word or number composed of 8 bits. A "bit" is defined as the quantum of information, that is, the smallest possible unit of it, which can't be divided down into anything smaller. So we'd have to say that the smallest possible fraction of a byte is 1/8th of it. Once you cut up a byte into 8 pieces, you can't cut them any smaller. So there's certainly no such thing as a billionth of a byte. (That's what 'nano' means.)