Q: What is floating point in computing?
Best Answer

Decimal Cases

In programming, a floating point number is expressed as a significand scaled by a base raised to an exponent. In general, a floating-point number can be written as

    value = (-1)^S × M × B^E

where
* M is the fraction, also called the mantissa or significand;
* E is the exponent;
* B is the base, which is 10 in the decimal case.

Binary Cases

As an example, a 32-bit word is used in the MIPS computer to represent a floating-point number:

    | S (1 bit) | E (8 bits) | M (23 bits) |

* The implied base is 2 (it is not explicitly shown in the representation).
* The exponent can be represented in signed 2's complement (but also see the biased notation later).
* The implied binary point is between the exponent field E and the significand field M.
* More bits in field E mean a larger range of representable values.
* More bits in field M mean higher precision.
* Zero is represented by all bits equal to 0.

Normalization

To use the bits available for the significand efficiently, it is shifted to the left until all leading 0's disappear (they contribute nothing to the precision); the value is kept unchanged by adjusting the exponent accordingly. Moreover, since the most significant bit of a normalized significand is always 1, it does not need to be shown explicitly: the significand can be shifted left by one more bit to gain an extra bit of precision, with the leading 1 before the binary point left implicit. The actual value represented is then

    value = (-1)^S × 1.M × 2^E
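To make the 1/8/23-bit layout above concrete, here is a small Python sketch (the helper name fields32 is my own, and the snippet assumes the host follows IEEE 754 single precision) that packs a value into 32 bits and pulls the three fields back out:

```python
import struct

def fields32(x):
    """Pack x as an IEEE 754 single-precision value and split the 32 bits
    into sign (1 bit), stored exponent (8 bits), and fraction (23 bits)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign     = bits >> 31
    exponent = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF
    return sign, exponent, fraction

print(fields32(37.5))   # (0, 132, 1441792) -- the stored exponent is biased, as explained below
```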
However, to avoid possible confusion, in the following the default normalization does not assume this implicit 1 unless otherwise specified. Zero is represented by all 0's and is not (and cannot be) normalized.

Example: a given binary number can be represented in 14-bit floating-point form (1 sign bit, a 4-bit exponent field and a 9-bit significand field) in several equivalent ways, differing only in how far the significand has been shifted, optionally with an implied 1.0. By normalization, the highest precision is achieved.

Biased Exponent

The bias depends on the number of bits in the exponent field. If there are e bits in this field, the bias is 2^(e-1) - 1, which lifts the stored representation (not the actual exponent) by half of the range, so that negative exponents no longer have to be represented in 2's complement. The range of actual exponents representable is still the same. With a biased exponent E' (for example, 8 bits with bias 127, as in IEEE 754 single precision), the value represented by the notation is

    value = (-1)^S × 1.M × 2^(E' - 127)

Note:
* An actual exponent of zero is stored as 127, the bias of the notation.
* The range of exponents representable is from -126 to 127.
* The largest stored exponent (255, all 1's) is reserved: with an all-zero significand it represents infinity, and with a non-zero significand it represents not-a-number (NaN), which may occur when, for example, a number is divided by zero.
* The smallest stored exponent (all 0's) is reserved to represent denormalized numbers (values smaller than 2^-126, which cannot be normalized) and zero; for example, 2^-127 can only be stored in this denormalized form.
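The two reserved exponent encodings can be seen in a decoding sketch like the following (decode32 is a hypothetical helper of mine, assuming the IEEE 754 single-precision rules just listed):

```python
def decode32(bits):
    """Interpret a 32-bit pattern as an IEEE 754 single-precision value,
    handling the two reserved exponent encodings explicitly."""
    sign     = -1.0 if (bits >> 31) else 1.0
    exponent = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF
    if exponent == 255:          # all 1's: infinity (fraction 0) or NaN
        return sign * float("inf") if fraction == 0 else float("nan")
    if exponent == 0:            # all 0's: zero or a denormalized number
        return sign * (fraction / 2**23) * 2**-126      # no implicit leading 1
    return sign * (1 + fraction / 2**23) * 2**(exponent - 127)

print(decode32(0x3F800000))   # 1.0
print(decode32(0x7F800000))   # inf
print(decode32(0x00000001))   # 1.401298464324817e-45, the smallest denormal
```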
Normalization with an implied base of 2^q: if the implied base is 2^q, the significand must be shifted q bits at a time, so that the exponent can be adjusted in whole steps to keep the value unchanged. The representation is normalized if at least one of the first q bits of the significand is 1. Obviously, the implied 1 can no longer be used.

Example: when the base is 4 (instead of 2), the significand has to be shifted to the left two bits at a time during normalization, because the smallest reduction of the exponent that keeps the represented value unchanged is 1, corresponding to dividing the value by 4. Similarly, if the implied base is 8, the significand has to be shifted 3 bits at a time. In general, if the base is 2^q, normalization means left-shifting the significand q bits at a time until there is at least one 1 in the highest q bits of the significand.
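As a rough illustration of that rule (the function name normalize_base_2q and the 9-bit significand width are my own choices, not from the original notes):

```python
def normalize_base_2q(significand, exponent, q, width=9):
    """Left-shift a 'width'-bit significand q bits at a time (implied base 2**q),
    decrementing the base-2**q exponent once per shift, until at least one of
    the top q bits is 1.  The represented value stays unchanged."""
    if significand == 0:
        return significand, exponent        # zero cannot be normalized
    top_q_mask = ((1 << q) - 1) << (width - q)
    while (significand & top_q_mask) == 0:
        significand = (significand << q) & ((1 << width) - 1)
        exponent -= 1
    return significand, exponent

# 0.000000101 (binary) x 4**3 = 0.625; base 4, so shift two bits at a time.
print(normalize_base_2q(0b000000101, 3, q=2))   # (320, 0): 0.101000000 (binary) x 4**0 = 0.625
```

With q = 2 the loop moves the significand two bits per step, which is exactly the base-4 behaviour described above.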
Example: to represent a number in this biased notation with e bits for the exponent field, normalize it first (the bias is 2^(e-1) - 1 and the implied base is 2), then add the bias to the actual exponent to obtain the biased exponent; the significand can then be written either without or with the implied 1.

Example: to find the value represented in this biased notation when the exponent field holds 17, subtract the bias to recover the actual exponent (17 minus the bias); the value is the significand, taken either without or with the implied 1, multiplied by 2 raised to that actual exponent.
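For a concrete sense of the bias arithmetic, here is a brief Python sketch (to_biased is a made-up helper; it assumes an 8-bit exponent field with bias 127 and the implicit-1 form, and it does not handle zero or denormals):

```python
import math

def to_biased(x, e_bits=8):
    """Return (sign, biased exponent, significand in [1, 2)) for a non-zero x,
    using bias = 2**(e_bits - 1) - 1.  Zero and denormals are not handled."""
    bias = 2**(e_bits - 1) - 1
    m, exp = math.frexp(abs(x))                 # abs(x) = m * 2**exp, with 0.5 <= m < 1
    significand, actual_exp = 2 * m, exp - 1    # rescale so 1 <= significand < 2
    return (0 if x >= 0 else 1), actual_exp + bias, significand

print(to_biased(-0.3125))   # (1, 125, 1.25): -1.01 (binary) x 2**-2, biased exponent -2 + 127
```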
Examples of IEEE 754 (single precision, 8-bit exponent field, bias 127):

* -0.3125 = -0.0101 in binary = -1.01 × 2^-2. The biased exponent is -2 + 127 = 125, and the sign bit is 1.
* 1.0 = 1.0 × 2^0. The biased exponent is 0 + 127 = 127.
* 37.5 = 100101.1 in binary = 1.001011 × 2^5. The biased exponent is 5 + 127 = 132.
* -78.25 = -1001110.01 in binary = -1.00111001 × 2^6. The biased exponent is 6 + 127 = 133, and the sign bit is 1.
* A value whose exponent would have to be smaller than -126 (the most negative exponent representable) is a denorm, which cannot be normalized.

by GAURAV PANDEY & VIJAY MAHARA, AMRAPALI INSTITUTE
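If you want to double-check those worked examples, a quick Python snippet (mine, not part of the original answer) prints the stored fields of each value, assuming the host uses IEEE 754 single precision:

```python
import struct

def show(x):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    print(f"{x:>8}: sign={bits >> 31}  "
          f"biased exponent={(bits >> 23) & 0xFF}  "
          f"fraction={bits & 0x7FFFFF:023b}")

for value in (-0.3125, 1.0, 37.5, -78.25):
    show(value)

# Output:
#  -0.3125: sign=1  biased exponent=125  fraction=01000000000000000000000
#      1.0: sign=0  biased exponent=127  fraction=00000000000000000000000
#     37.5: sign=0  biased exponent=132  fraction=00101100000000000000000
#   -78.25: sign=1  biased exponent=133  fraction=00111001000000000000000
```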


Wiki User

15y ago
More answers

Wiki User

9y ago

In most scientific writing, large floating-point numbers are written with the decimal point moved, and the number is then followed by ×10 raised to some power, which indicates where the decimal point actually belongs. As examples, 15,000 would be 1.5×10^4 and 0.0123 would be 1.23×10^-2. However, on computers and calculators, those numbers would be written as 1.5e+4 and 1.23e-2.
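Most programming languages will produce and accept that e-notation directly; a tiny Python illustration:

```python
# Scientific (e) notation, as typically printed by a language runtime.
print(f"{15000:.1e}")     # 1.5e+04
print(f"{0.0123:.2e}")    # 1.23e-02
print(float("1.5e+4"))    # 15000.0 -- the same notation parses back into a number
```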


Wiki User

11y ago

In computing, a floating point number is one that does not have a fixed position for the decimal point. For example, currency is usually not stored as a floating point number, because most currencies use exactly two decimal places. A floating point operation is an arithmetic operation performed on floating point numbers, one of the more complex tasks in computer arithmetic.
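A short sketch of why currency tends to avoid binary floating point (contrasting Python's built-in float with its decimal module):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.10 exactly, so currency math drifts.
print(0.10 + 0.20)                         # 0.30000000000000004
print(0.10 + 0.20 == 0.30)                 # False

# A decimal type keeps exactly two decimal places, which suits currency.
print(Decimal("0.10") + Decimal("0.20"))   # 0.30
```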


Related questions

How does a computer represent floating point numbers?

In computing, floating point refers to a method of representing an approximation of a real number in a way that can support a very large range of values.
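To illustrate that range, Python exposes the limits of its (double-precision) floating point type:

```python
import sys

# The same 64 bits cover both astronomically large and tiny magnitudes.
print(sys.float_info.max)   # 1.7976931348623157e+308
print(sys.float_info.min)   # 2.2250738585072014e-308 (smallest normalized value)
```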


Where is apple computer company strong and weak in its operations as an open system?

A measure of computing speed equal to one billion floating-point operations per second.


Apple Computer's G4 is considered a supercomputer because its operations can be measured in gigaflops. What is a gigaflop?

A measure of computing speed equal to one billion floating-point operations per second


When was Floating Point created?

Floating Point was created in 2007-04.


What is a giga flop?

A giga-flop stands for a billion floating-point operations per second (FLOPS). It says nothing about the number of integer or memory load/store/jump operations. The term is primarily used in the scientific computing field, which mostly runs large-scale simulations, and those are (almost) exclusively floating point calculations.


What is the floating point unit used for on the processor system?

"Floating Point" refers to the decimal point. Since there can be any number of digits before and after the decimal, the point "floats". The floating point unit performs arithmetic operations on decimal numbers.


What are some common errors detected by the CPU?

Fixed point overflow, Floating point overflow, Floating point underflow, etc.
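The two floating point cases are easy to demonstrate (a Python sketch; integer overflow is not shown because Python's ints are arbitrary precision):

```python
print(1e308 * 10)       # inf -- floating point overflow (exceeds the largest double)
print(1e-308 / 1e100)   # 0.0 -- floating point underflow (too small even for a denormal)
```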


Is the fixed/floating point choice not an important ISA consideration?

The fixed/floating point choice is an important ISA consideration.


Benefits of using floating point arithmetic over fixed point arithmetic in CPUs?

Fixed point numbers usually allow only 8 bits (in 32-bit computing) for the fractional portion of the number, which means many decimal fractions are recorded inaccurately. Floating point numbers use an exponent to shift the decimal point, so they can store fractional values more accurately than fixed point numbers. However, the CPU has to perform extra arithmetic to work with numbers stored in this format.
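A rough sketch of that trade-off, assuming a fixed-point format with 8 fractional bits (a step of 1/256), as described above:

```python
FRACTION_BITS = 8                   # 8 bits after the binary point -> steps of 1/256

def to_fixed(x):
    """Store x as an integer count of 1/256 steps (fixed point)."""
    return round(x * (1 << FRACTION_BITS))

def from_fixed(n):
    return n / (1 << FRACTION_BITS)

print(from_fixed(to_fixed(0.3)))    # 0.30078125 -- nearest multiple of 1/256
print(0.3)                          # 0.3        -- float keeps far more fractional precision
```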


Apple Computer's G4 is a supercomputer because its operation can be measured in gigaflops What is a gigaflop?

A measure of computing speed equal to one billion floating-point operations per second.


What is the possible reason if the turbo C compiler displays an error like this Illegal use of floating point?

Floating-point library not linked in.


How many bits are used in double precision floating point format number representation?

It depends on the format:
* IEEE 754 double precision floating point is 64 bits.
* IBM 7094 double precision floating point was 72 bits.
* CDC 6600 double precision floating point was 120 bits.
* Sperry UNIVAC 1110 double precision floating point was 72 bits.
* The DEC VAX had about half a dozen different floating point formats varying from 32 bits to 128 bits.
* The IBM 1620 had floating point sizes from 4 decimal digits to 102 decimal digits (yes, digits, not bits).
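For the IEEE 754 case, a quick Python check that a double occupies 64 bits, split into 1 sign bit, 11 exponent bits, and 52 fraction bits (that split is the standard IEEE 754 double layout, added here for illustration rather than taken from the answer above):

```python
import struct

packed = struct.pack(">d", -78.25)     # IEEE 754 double precision, big-endian
print(len(packed) * 8)                 # 64 bits

bits = int.from_bytes(packed, "big")
sign     = bits >> 63
exponent = (bits >> 52) & 0x7FF        # 11-bit stored exponent, bias 1023
fraction = bits & ((1 << 52) - 1)      # 52-bit fraction
print(sign, exponent - 1023)           # 1 6  (-78.25 = -1.00111001 (binary) x 2**6)
```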