Normal number (computing)

In computing, a normal number is a non-zero number in a floating-point representation that falls within the balanced range supported by a given floating-point format: it is a floating-point number that can be represented without leading zeros in its significand.
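One way to test this predicate for Python floats (which are IEEE 754 binary64 values on virtually all platforms) is to compare magnitudes against sys.float_info.min, the smallest positive normal number. Python has no built-in isnormal, so the helper below is a minimal sketch of our own:

```python
import math
import sys

def is_normal(x: float) -> bool:
    """Sketch of a normality test for IEEE 754 binary64 floats.

    A float is normal when it is finite, non-zero, and at least as
    large in magnitude as the smallest positive normal number
    (sys.float_info.min == 2.0**-1022).
    """
    return math.isfinite(x) and abs(x) >= sys.float_info.min

print(is_normal(1.0))             # True
print(is_normal(0.0))             # False: zero is neither normal nor subnormal
print(is_normal(5e-324))          # False: smallest positive subnormal
print(is_normal(float("inf")))    # False: not finite
```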

The magnitude of the smallest normal number in a format is given by

    $b^{E_{min}}$

where b is the base (radix) of the format (commonly 2 or 10, for binary and decimal number systems respectively), and $E_{min}$ depends on the size and layout of the format.
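For example, binary64 has b = 2 and $E_{min}$ = −1022 (see the table below), so its smallest positive normal number is 2^−1022. A quick sanity check, assuming a platform with standard IEEE 754 binary64 doubles:

```python
import sys

# Smallest positive normal number for binary64: b**E_min = 2**-1022
smallest_normal = 2.0 ** -1022
assert smallest_normal == sys.float_info.min
print(smallest_normal)  # 2.2250738585072014e-308
```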

Similarly, the magnitude of the largest normal number in a format is given by

    $b^{E_{max}} (b - b^{1-p})$

where p is the precision of the format in digits and $E_{max}$ is related to $E_{min}$ as:

    $E_{max} = -(E_{min} - 1) = 1 - E_{min}$
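Continuing the binary64 example (b = 2, p = 53, $E_{max}$ = 1023), the formula yields 2^1023 × (2 − 2^−52), which is exactly the largest finite double. Again a sketch assuming IEEE 754 binary64:

```python
import sys

b, p, e_max = 2, 53, 1023
# Largest normal number: b**E_max * (b - b**(1 - p))
largest_normal = float(b) ** e_max * (b - b ** (1 - p))
assert largest_normal == sys.float_info.max
print(largest_normal)  # 1.7976931348623157e+308
```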

In the IEEE 754 binary and decimal formats, b, p, $E_{min}$, and $E_{max}$ have the following values:[1]

Smallest and largest normal numbers for common numerical formats

Format       b    p     E_min    E_max    Smallest normal number        Largest normal number
binary16     2    11    −14      15       2^−14 ≈ 6.10 × 10^−5          2^15 × (2 − 2^−10) = 65504
binary32     2    24    −126     127      2^−126 ≈ 1.18 × 10^−38        2^127 × (2 − 2^−23) ≈ 3.40 × 10^38
binary64     2    53    −1022    1023     2^−1022 ≈ 2.23 × 10^−308      2^1023 × (2 − 2^−52) ≈ 1.80 × 10^308
binary128    2    113   −16382   16383    2^−16382 ≈ 3.36 × 10^−4932    2^16383 × (2 − 2^−112) ≈ 1.19 × 10^4932
decimal32    10   7     −95      96       10^−95                        9.999999 × 10^96
decimal64    10   16    −383     384      10^−383                       9.999999999999999 × 10^384
decimal128   10   34    −6143    6144     10^−6143                      (10 − 10^−33) × 10^6144

For example, in the smallest decimal format in the table (decimal32), the range of positive normal numbers is 10^−95 through 9.999999 × 10^96.
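These endpoints can be reproduced exactly with Python's decimal module by evaluating the formulas above with the decimal32 parameters from the table. A sketch; the working precision of 50 digits is an arbitrary choice that is simply large enough for exact results here:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # enough working precision for exact results

b, p, e_min, e_max = Decimal(10), 7, -95, 96  # decimal32 parameters

smallest_normal = b ** e_min                        # b**E_min
largest_normal = b ** e_max * (b - b ** (1 - p))    # b**E_max * (b - b**(1-p))

print(smallest_normal)  # 1E-95
print(largest_normal)   # 9.999999E+96
```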

Non-zero numbers smaller in magnitude than the smallest normal number are called subnormal numbers (or denormal numbers).
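To see one in practice (again assuming binary64): halving the smallest positive normal number produces a value that is non-zero yet below the normal range, i.e. subnormal:

```python
import sys

sub = sys.float_info.min / 2  # loses one bit of the significand
print(sub)                          # 1.1125369292536007e-308
print(sub != 0.0)                   # True: still non-zero
print(sub < sys.float_info.min)     # True: below the smallest normal number
```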

Zero is considered neither normal nor subnormal.


References

  1. IEEE Standard for Floating-Point Arithmetic, IEEE Std 754-2008, 2008-08-29. doi:10.1109/IEEESTD.2008.4610935. ISBN 978-0-7381-5752-8. Retrieved 2015-04-26.

