# What is Floating-Point Representation?

In our previous post, we discussed an Introduction to Numerical Techniques/Analysis. If you haven't read it yet, check it out first!

First, we have to know what floating-point representation is. Floating-point representation is the scientific notation of binary numbers, and it is used to represent values over a very wide range. It divides a number into three parts: a sign, an exponent, and a fixed-point fraction called the mantissa (or significand).

Every floating-point value carries a sign bit that is either 0 or 1: 0 represents a positive value (+) and 1 represents a negative value (-).

Floating-point representation is one of the most important concepts in computing: in nearly every technology we use today, real numbers are stored in floating-point form, which is essentially scientific notation.

The format of the scientific notation is ±M × B^E, where M is the mantissa, B is the base, and E is the exponent.
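As a quick illustration of this notation in base 2, Python's standard `math.frexp` decomposes a float into a mantissa and an exponent (a minimal sketch, not part of the original article):

```python
import math

# A float can be viewed as sign * mantissa * base**exponent.
# math.frexp returns (m, e) such that x == m * 2**e and 0.5 <= |m| < 1.
m, e = math.frexp(6.5)
print(m, e)  # 0.8125 3, since 0.8125 * 2**3 == 6.5
```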

## How many numbers can be represented with floating points?

A floating-point representation system is used to represent, with a fixed number of digits, numbers of very different orders of magnitude: e.g. the distance between galaxies and the size of an atom can be expressed with the same unit of length. The consequence of this dynamic range is that the numbers which can be represented are not uniformly spaced; the gap between two consecutive representable numbers varies with their magnitude.
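This non-uniform spacing can be observed directly with `math.ulp` (Python 3.9+), which returns the gap between a double and the next representable double:

```python
import math

# The gap (ULP, "unit in the last place") between consecutive
# representable doubles grows with the magnitude of the number.
print(math.ulp(1.0))   # 2.220446049250313e-16 (i.e. 2**-52)
print(math.ulp(1e6))   # roughly 1.16e-10
print(math.ulp(1e16))  # 2.0 -- whole doubles this large are 2 apart
```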

For many years, a variety of floating-point representations were used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s the most commonly encountered representations are those defined by IEEE 754.
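For a concrete look at the IEEE 754 layout, a double can be unpacked into its sign, exponent, and fraction fields with the standard `struct` module (a minimal sketch for the binary64 format):

```python
import struct

# IEEE 754 binary64: 1 sign bit, 11 exponent bits (biased by 1023),
# and 52 fraction bits.
bits = struct.unpack('<Q', struct.pack('<d', -6.5))[0]
sign = bits >> 63
exponent = (bits >> 52) & 0x7FF
fraction = bits & ((1 << 52) - 1)

# -6.5 == -1.101 (binary) * 2**2, so sign 1 and unbiased exponent 2.
print(sign, exponent - 1023)  # 1 2
```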

The speed of floating-point operations is measured in FLOPS (floating-point operations per second), an important characteristic of a computer system, especially for applications that involve heavy mathematical calculation.

A floating-point unit (FPU, colloquially a math coprocessor) is the part of a computer system specially designed to carry out operations on floating-point numbers.

## What does the decimal point represent?

The decimal point is the point (dot) used to separate the whole-number part from the fractional part of a decimal number. A decimal number is a number that consists of a whole part and a fractional part.
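For example, Python's `math.modf` splits a number at the decimal point into its fractional and whole parts:

```python
import math

# modf returns (fractional part, whole part), both as floats.
frac, whole = math.modf(4321.654)
print(whole)  # 4321.0
print(frac)   # roughly 0.654 (inexact, since 0.654 has no finite binary form)
```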

EXAMPLES

• 9 × 10^8

From the above example, 9 is the mantissa, 10 is the base, and 8 is the exponent.

• 4321.654 = 4321654 × 10^-3

From the above example, 4321654 is the mantissa, 10 is the base, and -3 is the exponent.
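The base-10 normalization behind these examples can be sketched as a small function (`normalize10` is a hypothetical helper for illustration, not a library call):

```python
import math

def normalize10(x):
    """Return (m, e) such that x == m * 10**e and 0.1 <= |m| < 1.

    Hypothetical helper: normalizes a number into base-10
    scientific notation with the mantissa below 1.
    """
    if x == 0:
        return 0.0, 0
    e = math.floor(math.log10(abs(x))) + 1
    return x / 10 ** e, e

print(normalize10(4321.654))  # approximately (0.4321654, 4)
```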

Now let us add two normalized numbers. Here, X1 = 0.4273 × 10^3 and Y2 = -0.3400 × 10^2.

Solution:

X3 = X1 + Y2

First of all, we have to make the exponents of X1 and Y2 equal. Rewriting Y2 with exponent 3:

Y2 = -0.3400 × 10^2 = -0.0340 × 10^3

Then:

X3 = X1 + Y2

= 0.4273 × 10^3 + (-0.0340 × 10^3)

= 0.4273 × 10^3 - 0.0340 × 10^3

= (0.4273 - 0.0340) × 10^3

= 0.3933 × 10^3
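The exponent-alignment procedure in this worked example can be sketched in Python (a minimal illustration; `add_normalized` is a hypothetical helper name):

```python
def add_normalized(m1, e1, m2, e2, base=10):
    """Add m1*base**e1 + m2*base**e2 by aligning to the larger exponent.

    Hypothetical helper mirroring the worked example: the operand with
    the smaller exponent is shifted right before the mantissas are added.
    """
    e = max(e1, e2)
    m1 *= base ** (e1 - e)  # no-op for the operand that already has exponent e
    m2 *= base ** (e2 - e)
    return m1 + m2, e

# X1 = 0.4273e3, Y2 = -0.3400e2, as in the example above.
print(add_normalized(0.4273, 3, -0.3400, 2))  # roughly (0.3933, 3)
```

Note that a real FPU would also renormalize and round the result; this sketch only shows the alignment-and-add step.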

## History of Floating-Point Representation

In 1914, Leonardo Torres y Quevedo suggested a form of floating point in the course of discussing his design for a special-purpose electromechanical calculator. In 1938, Konrad Zuse of Berlin completed the Z1, the first binary programmable mechanical computer, which used a 24-bit binary floating-point number representation with a 7-bit signed exponent, a 17-bit significand, and a sign bit. The more reliable relay-based Z3, completed in 1941, had representations for both positive and negative infinities; in particular, it implemented defined operations with infinity, and it stopped on undefined operations.

Konrad Zuse's Z3 computer used a 22-bit binary floating-point representation. Zuse also proposed, but did not complete, carefully rounded floating-point arithmetic that included special representations, anticipating features of the IEEE Standard by four decades. In contrast, von Neumann recommended against floating-point numbers for the 1951 IAS machine, arguing that fixed-point arithmetic was preferable.

The mass-produced IBM 704 followed in 1954; it introduced the use of a biased exponent. For many years afterwards, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers", since only they had that capability (see also Extensions for Scientific Computation (XSC)). It was not until the Intel i486 in 1989 that general-purpose personal computers had floating-point capability in hardware as a standard feature.