Have you ever considered that the plural of “half” is “whole”?
Until now, we have worked with data as either numbers or strings. Ultimately, however, computers represent everything in terms of binary digits, or bits. A decimal digit can take on any of 10 values: zero through nine. A binary digit can take on one of two values, zero or one. Using binary, computers (and computer software) can represent and manipulate numerical and character data. In general, the more bits you can use to represent a particular thing, the greater the range of possible values it can take on.
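For example, an n-bit field can distinguish 2^n values, so each extra bit doubles the available range. A quick gawk one-liner (purely illustrative) makes the growth visible:

```
$ gawk 'BEGIN { for (n = 8; n <= 32; n += 8) printf "%2d bits: %.0f values\n", n, 2^n }'
 8 bits: 256 values
16 bits: 65536 values
24 bits: 16777216 values
32 bits: 4294967296 values
```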
Modern computers support at least two, and often more, ways to do arithmetic. Each kind of arithmetic uses a different representation (organization of the bits) for the numbers. The kinds of arithmetic that interest us are:
Decimal arithmetic: This is the kind of arithmetic you learned in elementary school, using paper and pencil (and/or a calculator). In theory, numbers can have an arbitrary number of digits on either side (or both sides) of the decimal point, and the results of a computation are always exact.
Some modern systems can do decimal arithmetic in hardware, but usually you need a special software library to provide access to these instructions. There are also libraries that do decimal arithmetic entirely in software.
Despite the fact that some users expect gawk to be performing decimal arithmetic, it does not do so.
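You can see this for yourself: a sum that is exact in decimal arithmetic is only approximated in the binary floating point that gawk actually uses. (A small sketch; the exact digits printed may vary slightly across platforms.)

```
$ gawk 'BEGIN { printf "%.17g\n", 0.1 + 0.2 }'
0.30000000000000004
$ gawk 'BEGIN { if (0.1 + 0.2 == 0.3) print "equal"; else print "not equal" }'
not equal
```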
Integer arithmetic: In school, integer values were referred to as “whole” numbers—that is, numbers without any fractional part, such as 1, 42, or −17. The advantage to integer numbers is that they represent values exactly. The disadvantage is that their range is limited.
In computers, integer values come in two flavors: signed and unsigned. Signed values may be negative or positive, whereas unsigned values are always greater than or equal to zero.
In computer systems, integer arithmetic is exact, but the possible range of values is limited. Integer arithmetic is generally faster than floating-point arithmetic.
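As noted below, standard awk stores all numbers as double-precision floating-point values, so integer results stay exact only up to 2^53; beyond that point, distinct integers collapse into the same value (a small sketch):

```
$ gawk 'BEGIN { printf "%.0f\n%.0f\n", 2^53, 2^53 + 1 }'
9007199254740992
9007199254740992
```

The full 64-bit ranges shown later in Table 16.1 are therefore out of reach for exact arithmetic in standard awk.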
Floating-point arithmetic: Floating-point numbers represent what were called in school “real” numbers (i.e., those that have a fractional part, such as 3.1415927). The advantage to floating-point numbers is that they can represent a much larger range of values than can integers. The disadvantage is that there are numbers that they cannot represent exactly.
Modern systems support floating-point arithmetic in hardware, with a limited range of values. There are software libraries that allow the use of arbitrary-precision floating-point calculations.
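For instance, the decimal value 0.1 has no exact binary floating-point representation; printing it with enough digits exposes the nearest double (a small sketch; output is typical of IEEE 754 doubles):

```
$ gawk 'BEGIN { printf "%.20f\n", 0.1 }'
0.10000000000000000555
```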
POSIX awk uses double-precision floating-point numbers, which can hold more digits than single-precision floating-point numbers. gawk has facilities for performing arbitrary-precision floating-point arithmetic, which we describe in more detail shortly.
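As a preview, gawk's -M (--bignum) option enables that support, and the PREC variable sets the working precision in bits. A minimal sketch, assuming your gawk was built with the MPFR and GMP libraries:

```
$ gawk -M -v PREC=100 'BEGIN { printf "%0.25f\n", 0.1 }'
0.1000000000000000000000000
```

With 100 bits of precision, the stored value of 0.1 is accurate well beyond the 25 digits requested, whereas an ordinary double is good for only about 15–17 significant decimal digits.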
Computers work with integer and floating-point values of different ranges. Integer values are usually either 32 or 64 bits in size. Single-precision floating-point values occupy 32 bits, whereas double-precision floating-point values occupy 64 bits. (Quadruple-precision floating-point values also exist. They occupy 128 bits, but such numbers are not available in awk.) Floating-point values are always signed. The possible ranges of values are shown in Table 16.1 and Table 16.2.
Table 16.1. Value ranges for integer representations

| Representation | Minimum value | Maximum value |
|---|---|---|
| 32-bit signed integer | −2,147,483,648 | 2,147,483,647 |
| 32-bit unsigned integer | 0 | 4,294,967,295 |
| 64-bit signed integer | −9,223,372,036,854,775,808 | 9,223,372,036,854,775,807 |
| 64-bit unsigned integer | 0 | 18,446,744,073,709,551,615 |
Table 16.2. Approximate value ranges for floating-point representations

| Representation | Minimum positive nonzero value | Minimum finite value | Maximum finite value |
|---|---|---|---|
| Single-precision floating-point | 1.175494 × 10^-38 | -3.402823 × 10^38 | 3.402823 × 10^38 |
| Double-precision floating-point | 2.225074 × 10^-308 | -1.797693 × 10^308 | 1.797693 × 10^308 |
| Quadruple-precision floating-point | 3.362103 × 10^-4932 | -1.189731 × 10^4932 | 1.189731 × 10^4932 |
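You can probe these limits from awk itself. With ordinary doubles, integers near the top of the 64-bit range become indistinguishable; gawk's -M option (again assuming MPFR/GMP support) keeps them exact:

```
$ gawk 'BEGIN { x = 2^64; printf "%.0f\n%.0f\n", x + 1, x - 1 }'
18446744073709551616
18446744073709551616
$ gawk -M 'BEGIN { x = 2^64; print x + 1; print x - 1 }'
18446744073709551617
18446744073709551615
```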