If you are referring to normalization of floating-point numbers, it is done to preserve as much precision as possible. Leading zeros in the significand of a floating-point representation are wasted precision, so normalization removes them by shifting the significand left and decreasing the exponent by one for each shift. If the calculation was done in a hidden extended-precision register (such as the 80-bit IEEE double-extended format used by x87), the extra precision bits can be shifted into the least significant bits before the result is rounded back to a standard single- or double-precision value, reducing the loss of precision.
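As a rough illustration of the shift-and-adjust step, here is a minimal Python sketch. The `normalize` function, its parameters, and the example values are made up for illustration; real hardware does this on the raw significand bits, and (as noted above) an extended-precision register would shift in genuine guard bits rather than zeros.

```python
def normalize(significand: int, exponent: int, width: int = 24):
    """Shift a binary significand left until its top bit (of `width` bits)
    is 1, lowering the exponent once per shift so the value is unchanged.
    A zero significand is returned as-is, since it cannot be normalized."""
    if significand == 0:
        return significand, exponent
    while significand < (1 << (width - 1)):  # leading bit is still 0
        significand <<= 1                    # shift out a leading zero...
        exponent -= 1                        # ...and compensate in the exponent
    return significand, exponent

# Example: three leading zeros are removed, recovering three bit positions
# at the low end (filled with zeros here, real guard bits in hardware).
sig, exp = normalize(0b000110000000000000000001, 0)
print(bin(sig), exp)  # 0b110000000000000000001000 -3
```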