Q: Why are 6 bits of least significance in the image representation?
Related questions

Explain why these 6 bits are of least significance in the image representation?

The least significant 6 bits in an image representation typically encode subtle detail or noise that matters far less to visual perception than the more significant bits. They contribute fine texture or color variations that the eye rarely notices, so discarding or heavily compressing these low-order bits reduces file size without a noticeable loss in image quality.
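As a rough illustration (my own sketch, not part of the original answer), assume 16-bit grayscale samples; clearing the 6 least significant bits with a bitmask changes each value by at most 63 out of 65535, which is why those bits matter so little visually:

    # Minimal sketch: zero the 6 least significant bits of a 16-bit sample.
    # The 16-bit range (0..65535) is an assumption for illustration only.
    def drop_low_bits(sample, n=6):
        mask = ~((1 << n) - 1) & 0xFFFF   # n=6 -> 0xFFC0
        return sample & mask

    original = 40123
    quantized = drop_low_bits(original)
    print(original, quantized, original - quantized)   # difference is at most 63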


How many bits do you need to represent months of the year?

Well, honey, to represent the 12 months of the year you need at least 4 bits: 3 bits only cover 2^3 = 8 values, while 4 bits cover 2^4 = 16 (0 to 15), which is plenty for 12. Using 5 bits also works, but it is not more efficient; 4 bits is the minimum.
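A quick way to check this (assuming the months are simply numbered 0 to 11) is to find the smallest n with 2**n >= 12:

    # Minimum number of bits needed for 12 distinct values (months numbered 0..11).
    import math

    months = 12
    bits = math.ceil(math.log2(months))
    print(bits)        # 4
    print(2 ** bits)   # 16 representable values, enough for 12 months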


What do you call an image made out of bits and bytes?

That's called a digital image.


What is the BCD representation of the decimal number 41 in 6-bit?

41 in decimal is 0100 0001 in BCD (this is 8 bits, not 6 bits).
41 in decimal is 101001 in binary (this is 6 bits, but plain binary, not BCD).
There is no 6-bit BCD representation of the decimal number 41.
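A short sketch of why this is so: BCD spends one 4-bit nibble per decimal digit, so the two digits of 41 always cost 8 bits, while plain binary needs only 6:

    # Encode a decimal number in BCD, one 4-bit nibble per decimal digit.
    def to_bcd(n):
        return ' '.join(format(int(d), '04b') for d in str(n))

    print(to_bcd(41))        # 0100 0001 -> 8 bits of BCD
    print(format(41, 'b'))   # 101001    -> 6 bits of plain binary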


What is the miniature representation of all located graphics in the clip art task?

Thumbnails.


What is meant by data precision in computer?

Precision is the total number of bits or digits in the representation of a number. Accuracy is the number of correct bits or digits in a number. Given a certain representation on a computer, all numbers stored in that representation have the same precision; however, the accuracy of different numbers will vary, depending on their source and on the calculations performed on them.
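To make the distinction concrete (my example, not from the answer above): Python floats are IEEE-754 doubles, so every value has the same precision of 53 significand bits, but the accuracy of a result depends on how it was computed:

    # Precision is fixed by the representation; accuracy depends on the calculation.
    import sys

    print(sys.float_info.mant_dig)   # 53 -> precision of every Python float
    print(0.1 + 0.2)                 # 0.30000000000000004 -> last digits inaccurate
    print(0.1 + 0.2 == 0.3)          # False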


What does bitmap mean?

A bitmap is a series of bits which represents a rasterized graphic image, each pixel being represented as a group of bits.
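As a tiny sketch (assuming the simplest case of 1 bit per pixel), each pixel is a single bit and 8 pixels pack into one byte of the raster:

    # Pack one row of 8 monochrome pixels (1 bit each) into a single byte.
    rows = [
        [1, 0, 0, 1, 1, 0, 0, 1],
        [0, 1, 1, 0, 0, 1, 1, 0],
    ]

    for row in rows:
        byte = 0
        for pixel in row:
            byte = (byte << 1) | pixel
        print(format(byte, '08b'), hex(byte))   # 10011001 0x99, then 01100110 0x66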


How can the subnet mask for a class C address be represented?

255.255.255.0 - decimal representation
11111111.11111111.11111111.00000000 - binary representation (3 bytes with all bits equal to 1, the last byte with all bits equal to 0)
/24 - short CIDR prefix representation (the number of bits equal to 1)
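A small sketch showing how the dotted-decimal mask falls out of the /24 prefix length:

    # Build the subnet mask from a prefix length of 24 one-bits.
    prefix = 24
    mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
    octets = [(mask >> shift) & 0xFF for shift in (24, 16, 8, 0)]
    print('.'.join(str(o) for o in octets))   # 255.255.255.0
    print(format(mask, '032b'))               # 24 ones followed by 8 zeros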


How many bits are used in a bitmap?

1024 bits.
If "Bitmap" refers to a specific entity, image or file, I do not know. But if "Bitmap" refers to a general image, then it is 8 bytes or 64 bits per pixel. I just made three 1*1 bitmap images at color depths of 2 bits (monochrome), 8 bits (256 colors) and 24 bits (16 million colors). The sizes of these images were the same! (Surprised me too.) Then I made a 1*2 pixel image and it was 66 bytes (528 bits), so the overhead Microsoft Paint puts on a bitmap is 400 bits. This could be an effect of limitations inherent in Microsoft Paint.
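For what it's worth, an uncompressed BMP's size can be estimated from its header plus padded pixel rows. The sketch below assumes the common 54-byte header (14-byte file header plus 40-byte info header), 24-bit color and no extra metadata; real files, such as those saved by MS Paint, may carry additional data and come out larger, which may explain the figures above:

    # Rough size estimate for an uncompressed 24-bit BMP (assumes a bare
    # 54-byte header and rows padded to 4-byte boundaries; real files may differ).
    def bmp_size_bytes(width, height, bits_per_pixel=24):
        row_bytes = (width * bits_per_pixel + 31) // 32 * 4
        return 54 + row_bytes * height

    print(bmp_size_bytes(1, 1))   # 58
    print(bmp_size_bytes(1, 2))   # 62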


How many pixels is 30 KB?

30 KB of image data does not correspond to a fixed number of pixels; it depends on how many bits each pixel uses and on whether the image is compressed. For an uncompressed image at 24 bits (3 bytes) per pixel, 30 KB holds about 10,240 pixels.
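The estimate above comes from a simple division, assuming uncompressed pixels at 3 bytes each:

    # 30 KB of uncompressed 24-bit pixel data (3 bytes per pixel).
    size_bytes = 30 * 1024
    bytes_per_pixel = 3
    print(size_bytes // bytes_per_pixel)   # 10240 pixels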


In a class A network, how many subnet bits are needed to make at least 365 usable hosts or subnets?

You would need to borrow at least 9 bits. Eight bits give only 2^8 = 256 combinations (254 usable hosts), which is not enough for 365; a ninth bit doubles that to 2^9 = 512 (510 usable hosts), which covers at least 365 subnets or hosts.
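A small check, using the usual rule that a block of n host bits gives 2**n - 2 usable addresses (the network and broadcast addresses are excluded):

    # Smallest number of bits whose 2**n - 2 usable addresses cover 365.
    needed = 365
    bits = 0
    while 2 ** bits - 2 < needed:
        bits += 1
    print(bits)            # 9
    print(2 ** bits - 2)   # 510 usable hosts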


How do computers store information?

All computers read, write and store information as binary (1s and 0s), in the form of bits (the smallest unit of information) and bytes (8 bits make a byte).
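A tiny illustration of this: the character 'A' is stored as the single byte 01000001, which is the binary form of 65:

    # One character stored as one byte of bits.
    value = ord('A')
    print(value)                  # 65
    print(format(value, '08b'))   # 01000001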