
RETROSPECTIVE:

How to Print Floating-Point Numbers Accurately

Guy L. Steele Jr.
Sun Microsystems Laboratories
1 Network Drive, UBUR02-311
Burlington, MA 01803 USA
[email protected]

Jon L White
Lisp Wizard
2001 The Forward March of Technology Drive
Oceanview, Kansas 99999 USA
[email protected]

1. History

Our PLDI paper [28] was almost 20 years in the making.

How should the result of dividing 1.0 by 10.0 be printed? In 1970, one usually got “0.0999999” or “0.099999994”; why not “0.1”? (The problem of binary-to-decimal conversion is amazingly tricky, even for fixed-point fractions, let alone floating-point numbers [31][22][20][12].) Some programming systems of the day required a prespecified limit for the number of decimal digits to be printed for a binary floating-point number. If, in our example, the limit were set to nine decimal places, then the printed result might well be 0.099999994; but if the limit were set to seven decimal places, then poorly implemented printers would produce 0.0999999 and those that took the trouble to perform decimal rounding would produce either 0.1000000 or 0.1.

What is the correct number of digits to produce if the user doesn’t specify? If a system prints too many digits, the excess digits may be “garbage,” reflecting more information than the number actually contains; if a system prints too few digits, the result will be wrong in a stronger sense: converting the decimal representation back to binary may not recover the original binary value.
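Both failure modes are easy to see today in any language whose printer produces shortest round-tripping output. The following sketch is our illustration (not from the systems of the day), using Python and an IEEE double:

```python
x = 1.0 / 3.0

# Too few digits: 7 significant decimal digits cannot pin down this
# IEEE double, so reading the printed form back yields a different value.
assert float("%.7g" % x) != x

# Enough digits: Python's repr emits the shortest decimal string that
# converts back to the original binary value, so the round trip is exact.
assert float(repr(x)) == x
```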

It really bugged one of us (White) that systems of the day not only produced incorrect decimal representations of binary numbers, but provided no guaranteed bound on how wrong they might be. In early 1971, he began to analyze the machine-language algorithm used in the implementation of MacLisp [23][27] to convert PDP-10 floating-point numbers into decimal notation. He also investigated the behavior of other language systems, both those that printed a variable number of digits and those that printed a fixed number of digits but failed to perform decimal rounding correctly.

Using the “bignum” (high-precision integer) facilities that had recently been added to MacLisp for other purposes, White investigated the exact representations of certain floating-point numbers that seemed to print out with too many decimal digits. He discovered that virtually all of the systems he examined suffered from one or more of the following problems on many floating-point values: (a) printing many more decimal digits than were necessary, (b) printing decimal values that differed from the binary values by many units in the last place (ulps), and (c) exhibiting substantial “drift” when a value is printed, then read back in, the new value printed and again read back in, and so on.

White found that the programmers or authors of these systems invariably claimed that their systems must be producing “correct” results because (1) they produced lots of decimal digits, making no claim of producing “shortest” representations, and/or (2) they

20 Years of the ACM/SIGPLAN Conference on Programming Language Design and Implementation (1979-1999): A Selection, 2003. Copyright 2003 ACM 1-58113-623-4 ...$5.00.

had coded some variant of Taranto’s algorithm [31]—“I found it in Knuth!” [18, ex. 4.4–3]. Some also dismissed examples of discrepancies with hand-waving generalizations like those found in Knuth (“... floating point arithmetic is inherently approximate” [18, section 4.2.1]) and later in Steele (“In general, computations with floating-point numbers are only approximate.” [29, section 12.1]). But White saw no reason why the printing problem could not be solved exactly, and he set out to find a way.

One suggested plan for obtaining the shortest output that correctly preserves the floating-point value was simply to produce many more decimal digits than necessary; then one would “round backwards” in decimal, converting the decimal string back into binary at each step, until the shortest string of digits was found that would convert back into the original number. However, this approach is quite inefficient. It fails to take advantage of information that is incrementally available during the steps of digit generation and that adequately characterizes the difference between what has already (incrementally) been output and the original number.
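The brute-force plan can be sketched in a few lines of modern Python, which already provides correctly rounded conversion in both directions; the function name is ours, and the search runs from the short end rather than literally rounding backwards from the long end, which yields the same answer:

```python
def shortest_by_reconversion(x: float) -> str:
    """Brute-force search for the shortest decimal form of x: try decimal
    roundings of increasing length, converting each candidate back to
    binary, until one recovers the original value.  17 significant
    digits always suffice for an IEEE double, so the loop terminates."""
    for ndigits in range(1, 18):
        candidate = "%.*g" % (ndigits, x)
        if float(candidate) == x:   # round trip succeeded: this is shortest
            return candidate
    return "%.17g" % x

print(shortest_by_reconversion(0.1))   # "0.1"
```

Every candidate requires a full decimal-to-binary conversion, which is exactly the inefficiency the paragraph above complains about.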

The first attempt to take advantage of such information was by White, later in 1971. By 1972, when Steele joined White at MIT to work on MacLisp, there was a rudimentary version of the algorithm in the MacLisp PRINT function, better than previous printing algorithms but not yet correct in all cases. White then decided to try tracking the error propagation by watching what happened to a value equal to 1/2 ulp, subjected to the same arithmetic operations that were being performed on the fraction being printed. This line of investigation led to the use of the variable M in our algorithms. (We called this value “M” because White thought of it as defining a “mask,” the complement of the interval [M, 1 − M], that filters (“masks”) out fractional values for which enough digits had been generated and passes through only fractions for which more digits are required, namely those lying in the gap [M, 1 − M]. As M grows, the mask grows and the interval at its center shrinks, until eventually M exceeds 1/2, the gap shrinks away to nothing, and no remaining fraction can pass through the mask.)

When Steele left MIT in 1980 to go to Carnegie-Mellon University, MacLisp was using the algorithm Dragon2, with double-precision floating-point prescaling of values that were large or small enough to require “E” notation. By about this time Steele had written a very early draft of the PLDI paper. (He wrote it in TeX, so it must have been after Fall 1978, when he first ported TeX from Stanford to MIT, and it must have been prior to the 1981 publication of the second edition of “Knuth Volume 2” [19], which mentions this unpublished draft in the answer to exercise 4.4–3.)

Steele continued to polish the algorithm, off and on, during the early 1980s. His principal contributions were (1) to replace floating-point prescaling with implicit rational prescaling, thus introducing the scale factor S to the algorithm (and incidentally committing to the use of bignum arithmetic in the printing algorithm itself); (2) to use two mask quantities M+ and M− rather than just one; (3) to separate the digit-generation process from the formatting process, organizing them as coroutines; (4) to provide a proof of correctness for the digit-generation process; and (5) to code the whole thing up (in Pascal), including many different formatters. (The PL/I-style “picture” formatter was omitted from the PLDI paper for lack of space, alas.) Steele finished all this by 1985.
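The flavor of the resulting digit generator can be conveyed by a much-simplified, free-format sketch in Python rather than the paper’s Pascal. It is our rendering, not the paper’s code: the value is held exactly as the integer ratio R/S, and the two masks (here mplus and mminus) track half an ulp on either side; the unequal-gap adjustment at powers of two is omitted for brevity, so such boundary values may not always be handled minimally:

```python
import math

def free_format(v: float):
    """Shortest-digits conversion for a positive finite double v, using
    exact integer ("bignum") arithmetic throughout.  Returns (digits, k)
    meaning v == 0.d1 d2 ... * 10**k."""
    m, e = math.frexp(v)               # v == m * 2**e with 0.5 <= m < 1
    f, e = int(m * 2 ** 53), e - 53    # v == f * 2**e with f an integer
    if e >= 0:
        R, S = f << (e + 1), 2         # R/S == v exactly
        mplus = mminus = 1 << e        # half an ulp, scaled like R
    else:
        R, S = f << 1, 1 << (1 - e)
        mplus = mminus = 1
    k = 0
    while R + mplus > S:               # too big to print as 0.ddd...:
        S *= 10                        #   shift the decimal point left
        k += 1
    while (R + mplus) * 10 <= S:       # leading digit would be 0:
        R, mplus, mminus = R * 10, mplus * 10, mminus * 10
        k -= 1                         #   shift the decimal point right
    digits = []
    while True:
        R, mplus, mminus = R * 10, mplus * 10, mminus * 10
        d, R = divmod(R, S)
        low = R < mminus               # close enough to round down and stop
        high = R + mplus > S           # close enough to round up and stop
        if not (low or high):
            digits.append(d)           # neither: more digits are required
        else:                          # last digit: take the nearer candidate
            digits.append(d + 1 if high and (not low or 2 * R >= S) else d)
            return "".join(map(str, digits)), k

# free_format(0.1) -> ("1", 0): a single digit suffices for 0.1
```

When both masks fire at once, the final comparison 2 * R >= S picks whichever candidate digit is nearer to the true value, which is the essence of why the output both round-trips and is short.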

During the 1980s, White investigated the question of whether one could use limited-precision arithmetic after all rather than bignums. He had earlier proved by exhaustive testing that just 7 extra bits suffice for correctly printing 36-bit PDP-10 floating-point numbers, if powers of ten used for prescaling are precomputed using bignums and rounded just once. But can one derive, without exhaustive testing, the necessary amount of extra precision solely as a function of the precision and exponent range of a floating-point format? This problem is still open, and appears to be very hard.

It weighed on Steele’s mind that Knuth had shown confidence in him by citing—in a book!—a paper that didn’t really exist yet. Eventually Steele summoned the effort to complete the paper and submit it to the 1990 PLDI, just to rid himself of the nagging burden. Imagine our surprise when we discovered that Will Clinger had written a paper on the decimal-to-binary problem and submitted it to the same conference! (By the way, the third edition of “Knuth Volume 2” [21] cites our PLDI paper, so all is well now.)

Since our paper was published, there seem to have been exactly two follow-up papers [9][2] of any consequence. Both provided important practical improvements on our work; we recommend them to interested readers and especially to implementors.

2. Influence on Programming Languages

Since our paper was published, it has had a clear influence on the specification and implementation of many programming languages.

We had coded an early version of this algorithm as part of the runtime system for MacLisp while we were at MIT in the late 1970s; this version was not completely accurate because it first used floating-point operations to scale the number to be printed by a power of 10 if the number was very large or small, but it was completely accurate for numbers not requiring such prescaling.

The MacLisp successor Lisp Machine Lisp, also developed at MIT, was influenced by our work [32, p. 281]:

The number of digits printed is the “correct” number; no information present in the flonum is lost, and no extra trailing digits are printed that do not represent information in the flonum. Feeding the [printed representation] of a flonum back to the reader is always supposed to produce an equal flonum.

The documentation for the commercial version of this language, Symbolics’ Zetalisp, has similar wording [30, p. 15].

After publication of our paper, the idea began to spread, to the language Id, another MIT project, in 1991 [1, p. 15]:

Utilities for accurate reading and printing of double precision floating point numbers have been implemented. The floating point reader [3] has been implemented in Common Lisp and Id90, the floating point printer [28] in Common Lisp.

and to Modula-3 [14, p. 15]:

The idea of converting to decimal by retaining just as many digits as are necessary to convert back to binary exactly was popularized by Guy L. Steele Jr. and Jon L White [28]. David M. Gay pointed out the importance ... of demanding that the conversion to binary handle mid-point cases by a known rule [9].

and to the fourth revision of the report on Scheme, which cites [28] and [3] and also remarks [24, p. 24]:

... the result [of a binary-to-decimal conversion] is expressed using the minimum number of digits ...

The fifth revision of the report [17] has similar wording but cites [2] rather than [28].

Steele had a hand in establishing conversion accuracy requirements in The Java Language Specification [11, pp. 506–507], which requires that all binary-to-decimal and decimal-to-binary conversions be correctly rounded and that binary-to-decimal conversions generate as few digits as possible.

The description of the “basis library” for Standard ML says [25]:

toDecimal should produce only as many digits as are necessary for fromDecimal to convert back to the same number, i.e., for any Normal or SubNormal real value r, we have: fromDecimal (toDecimal r) = r ... Algorithms for accurately and efficiently converting between binary and decimal real representations are readily available, e.g., see the technical report by [9].

Soon after that, the language Limbo adopted accurate base conversion as one of its improvements over C [13, p. 271]:

The most important numerical development at the language level recently has been accurate binary/decimal conversion [3][9][28]. Thus printing a real using %g and reading it on a different machine guarantees recovering identical bits.

As for Haskell, a comment that appears in the code for the function floatToDigits in the Haskell 98 library [16, p. 14] says that the code is based on Burger and Dybvig’s work [2].

Borneo is a numerical programming language [4, pp. 7, 9]:

Although published algorithms exist for both correctly rounded input [3] and output [28], conversion problems persist. Correctly rounded algorithms are also acceptably fast for common cases [2], [9]. While working on the BEEF tests for transcendental functions, it was discovered that the Turbo C 1.0 compiler did not convert 11.0 exactly into a floating point number equal to 11! ... Java requires correctly rounded decimal to binary and binary to decimal conversion ... Borneo maintains Java’s base conversion requirements ...

Steele served as Project Editor for the first edition of the ECMA standard for the web browser scripting language ECMAScript (more popularly known as JavaScript), which recommends, but does not require, accurate conversions [5, p. 36]:

NOTE: The following observations may be useful as guidelines for implementations, but are not part of the normative requirements of this Standard: If x is any number value other than 0, then ToNumber(ToString(x)) is exactly the same number value as x. ... Implementors of ECMAScript may find useful the paper and code written by David M. Gay for binary-to-decimal conversion of floating-point numbers [9].

A recent extension of the GNU Fortran compiler makes use of accurate conversion techniques [26, p. 319]:

... support routines for performing conversion between character strings and intervals ... were developed based on routines for floating-point input [9] and floating-point output [2].

Finally, David M. Gay himself has been working on an algebraic modeling language called AMPL [10, p. 3]:

AMPL and its solver interface library use correctly rounded binary↔decimal conversions, which is now possible on all machines where AMPL has run other than old Cray machines. Details are described in Gay [9]. Part of the reason for mentioning binary↔decimal conversions here is to point out a recent extension to Gay [9] that carries out correctly rounded conversions for other arithmetics with properties similar to binary IEEE arithmetic. This includes correct directed roundings and rounding of a decimal string to a floating-point interval of width at most one unit in the last place, both of which are obviously useful for rigorous interval computations. There is no paper yet about this work, but the source files are available as ftp://netlib.bell-labs.com/netlib/fp/gdtoa.tgz, which includes a README file for documentation.

The IEEE 754 standard for floating-point arithmetic [15] has a conversion accuracy requirement, but a liberal one. Now that the practicality of our approach has been established, we have high hopes that an upcoming revision of that standard will adopt a requirement of full conversion accuracy.

3. Why “Dragon”?

A very few readers have wondered why the principal algorithms in our work are named “Dragon2” and “Dragon3” and “Dragon4”; this was an obscure joke (entirely Steele’s fault) alluding to the “dragon curves” discovered and analyzed by John E. Heighway, Bruce A. Banks, and William G. Harter and reported on by Martin Gardner [7][8]. The initial letters in the multiword description of a “Dragon” algorithm form a sequence of letters ‘F’ and ‘P’ that represents the sequence of (valley) Folds and (mountain) Peaks in a piece of paper that has been folded to form a dragon curve.
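For readers who want to see such a sequence: folding a strip of paper in half n times, always in the same direction, and then unfolding yields the crease pattern below. The generator and its name are our illustration, not part of the original work; each fold appends a new valley fold at the middle followed by a flipped, reversed copy of the existing pattern:

```python
def dragon_folds(n: int) -> str:
    """Sequence of (valley) Folds 'F' and (mountain) Peaks 'P' read along
    a strip of paper folded in half n times in the same direction and
    then unfolded; the creases trace a dragon curve."""
    flip = str.maketrans("FP", "PF")   # exchange folds and peaks
    s = ""
    for _ in range(n):
        # old pattern, a new fold at the middle, then the old pattern
        # reversed with folds and peaks exchanged
        s = s + "F" + s[::-1].translate(flip)
    return s

print(dragon_folds(3))  # FFPFFPP
```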

REFERENCES

[1] Boughton, G. A., editor. Computation Structures Group Progress Report 1990–91. CSG Memo 337. MIT Laboratory for Computer Science (Cambridge, MA, June 1991).

[2] Burger, Robert G., and Dybvig, R. Kent. Printing floating-point numbers quickly and accurately. In Proc. ACM SIGPLAN ’96 Conf. Prog. Lang. Design and Implementation. ACM (Philadelphia, PA, June 1996), 108–116.

[3] Clinger, William D. How to read floating point numbers accurately. In Proc. ACM SIGPLAN ’90 Conf. Prog. Lang. Design and Implementation. ACM (White Plains, NY, June 1990), 92–101. ACM SIGPLAN Notices 25, 6 (June 1990).

[4] Darcy, Joseph D. Borneo 1.0.2: Adding IEEE 754 floating point support to Java. U. California, Berkeley, May 1998.

[5] ECMAScript Language Specification, third edition. ECMA (December 1999). Standard ECMA-262.

[6] Gansner, Emden R., and Reppy, John H. The Standard ML Basis Manual. Cambridge University Press (New York, 2003). Not yet published—available from October 2003.

[7] Gardner, Martin. Mathematical games. Scientific American 216, 3 (March 1967), 124–125; 216, 4 (April 1967), 118–120; 217, 1 (July 1967), 115.

[8] Gardner, Martin. Mathematical Magic Show. Vintage (New York, 1978), 207–209, 215–220.

[9] Gay, David M. Correctly Rounded Binary-Decimal and Decimal-Binary Conversions. Numerical Analysis Manuscript 90-10. AT&T Bell Laboratories (Murray Hill, NJ, November 1990).

[10] Gay, David M. Symbolic-Algebraic Computations in a Modeling Language for Mathematical Programming. Technical Report 00-3-02. Computing Sciences Research Center, Bell Laboratories (Murray Hill, NJ, July 2000).

[11] Gosling, James, Joy, Bill, and Steele, Guy. The Java Language Specification. Addison-Wesley (Reading, MA, 1996).

[12] Gries, David. Binary to decimal, one more time. In Feijen, W. H. J., van Gasteren, A. J. M., Gries, D., and Misra, J., eds., Beauty Is Our Business: A Birthday Salute to Edsger W. Dijkstra. Springer-Verlag (Berlin, 1990), 141–148.

[13] Grosse, Eric. Real Inferno. In Boisvert, Ronald F., editor, The Quality of Numerical Software: Assessment and Enhancement: Proc. IFIP TC2/WG 2.5 Working Conf., Oxford, United Kingdom, 8–12 July 1996. Chapman & Hall on behalf of IFIP (London, 1997), 270–279.

[14] Horning, Jim, Kalsow, Bill, McJones, Paul, and Nelson, Greg. Some Useful Modula-3 Interfaces. Memo 113. Digital Equipment Corporation Systems Research Center (Palo Alto, CA, December 1993).

[15] IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Std 754-1985 edition. IEEE (New York, 1985).

[16] Standard libraries for the Haskell 98 programming language. http://www.haskell.org/definition/haskell98-library.pdf, February 1999.

[17] Kelsey, Richard, Rees, Jonathan, Clinger, William, et al. The revised5 report on the algorithmic language Scheme. ACM SIGPLAN Notices 33, 9 (September 1998), 26–76.

[18] Knuth, Donald E. Seminumerical Algorithms. Addison-Wesley (Reading, MA, 1969).

[19] Knuth, Donald E. Seminumerical Algorithms (Second Edition). Addison-Wesley (Reading, MA, 1981).

[20] Knuth, Donald E. A simple program whose proof isn’t. In Feijen, W. H. J., van Gasteren, A. J. M., Gries, D., and Misra, J., eds., Beauty Is Our Business: A Birthday Salute to Edsger W. Dijkstra. Springer-Verlag (Berlin, 1990), 233–242.

[21] Knuth, Donald E. Seminumerical Algorithms (Third Edition). Addison-Wesley (Reading, MA, 1998).

[22] Matula, David W. In-and-out conversions. Communications of the ACM 11, 1 (January 1968), 47–50.

[23] Moon, David A. MacLISP Reference Manual. MIT Project MAC (Cambridge, MA, April 1974).

[24] Rees, Jonathan, Clinger, William, et al. The revised4 report on the algorithmic language Scheme. ACM SIGPLAN Lisp Pointers 4, 3 (July–September 1991), 1–55.

[25] Reppy, John H., et al. http://cm.bell-labs.com/cm/cs/what/smlnj/doc/basis/pages/real.html, October 1997. To be published as [6].

[26] Schulte, M. J., Zelov, V. A., Akkas, A., and Burley, J. C. The interval-enhanced GNU Fortran compiler. In Csendes, Tibor, ed., Developments in Reliable Computing. Kluwer (Dordrecht, Netherlands, 1999), 311–322.

[27] Steele, Guy L., Jr., and Gabriel, Richard P. The evolution of Lisp. In History of Programming Languages. ACM Press (New York, 1996), 233–330.

[28] Steele, Guy L., Jr., and White, Jon L. How to print floating-point numbers accurately. In Proc. ACM SIGPLAN ’90 Conf. Prog. Lang. Design and Implementation. ACM (White Plains, NY, June 1990), 112–126. ACM SIGPLAN Notices 25, 6 (June 1990).

[29] Steele, Guy L., Jr., Fahlman, Scott E., Gabriel, Richard P., Moon, David A., and Weinreb, Daniel L. Common Lisp: The Language. Digital Press (Burlington, MA, 1984).

[30] Reference Guide to Symbolics-Lisp. Symbolics, Inc. (Cambridge, MA, March 1985).

[31] Taranto, Donald. Binary conversion, with fixed decimal precision, of a decimal fraction. Communications of the ACM 2, 7 (July 1959), 27.

[32] Weinreb, Daniel, and Moon, David. LISP Machine Manual, Third Edition. MIT Artificial Intelligence Laboratory (Cambridge, MA, March 1981).
