
Floating Point Verification: Unifying Abstract Interpretation and Decision Procedures

Daniel Kroening

13 November 2012

(joint work with Leopold Haller, Vijay D'Silva, Michael Tautschnig, Martin Brain)

Leopold Haller

Vijay D'Silva

Michael Tautschnig

+ Martin Brain (no photo, but he is here)

References

• TACAS 2012: paths in floating-point programs with intervals

• POPL 2013: Framework

• VMCAI 2013: DPLL(T)

• FMCAD 2012: Learning for intervals

• SAS 2012: propositional SAT


Presentation Outline

Existing approaches to FP verification: manual/semi-automated, decision procedures, abstract interpretation (compared on how scalable and precise they are)

Our research: abstract satisfiability

Part I covers the existing approaches, Part II our research.

Part I


IEEE754 Floating Point Numbers

Special values: −0, +0, −∞, +∞, NaN
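These special values can be observed directly in C; a small illustrative example (not from the slides, assuming IEEE-754 doubles):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double pos_zero = 0.0, neg_zero = -0.0;
    double pos_inf  = INFINITY, neg_inf = -INFINITY;
    double nan_val  = NAN;

    /* +0 and -0 compare equal, but they are distinct values. */
    printf("0.0 == -0.0: %d, signs: %g %g\n",
           pos_zero == neg_zero, copysign(1.0, pos_zero), copysign(1.0, neg_zero));

    /* Infinities behave like values of unbounded magnitude. */
    printf("inf + 1 == inf: %d, -inf < -1e308: %d\n",
           pos_inf + 1.0 == pos_inf, neg_inf < -1e308);

    /* NaN is unordered: it compares unequal even to itself. */
    printf("NaN == NaN: %d, isnan(NaN): %d\n", nan_val == nan_val, isnan(nan_val));
    return 0;
}
```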

The Pitfalls of FP

(Five small example programs, I to V, shown as figures on the slide.)
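The five example programs themselves are not in the transcript; the sketch below shows a few classic pitfalls as a stand-in (inexact decimal constants, absorption, overflow to infinity, NaN comparisons):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Decimal constants are not exact in binary floating point. */
    printf("0.1 + 0.2 == 0.3 : %d\n", 0.1 + 0.2 == 0.3);        /* 0 */

    /* Absorption: adding a small number to a huge one can be a no-op. */
    printf("1e16 + 1.0 == 1e16 : %d\n", 1e16 + 1.0 == 1e16);    /* 1 */

    /* Overflow saturates to infinity instead of wrapping or trapping. */
    printf("1e308 * 10 = %g\n", 1e308 * 10.0);                  /* inf */

    /* NaN compares unequal to everything, including itself. */
    double n = NAN;
    printf("NaN == NaN : %d\n", n == n);                        /* 0 */
    return 0;
}
```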

Is this program correct?

(We will ignore the case x = NaN)
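The program on the slide is not reproduced in the transcript. Judging by the input range and result bounds quoted later in the talk, it is a small polynomial approximation of sine with a bound assertion; the following is only a hypothetical reconstruction of such a program:

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical reconstruction (the slide's code is not in the transcript):
 * a truncated Taylor series for sin(x), checked against a bound. */
float sine_approx(float x) {
    return x - (x * x * x) / 6.0f + (x * x * x * x * x) / 120.0f;
}

int main(void) {
    /* Is this program correct? That is: can the assertion be violated for
     * some x in [-pi/2, pi/2] (ignoring NaN)? Later slides check bounds
     * ranging from 2.0 down to 0.99. */
    for (float x = -1.570796f; x <= 1.570796f; x += 1e-4f) {
        float result = sine_approx(x);
        assert(result <= 1.2f && result >= -1.2f);
    }
    printf("no assertion violated for the sampled inputs\n");
    return 0;
}
```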

What does correctness mean?

Three possible meanings:

• Result is sufficiently close to the real number result

• Result is sufficiently close to the sine function

• The assertion cannot be violated

How can we check correctness?

Abstract Interpretation

Manual

Decision Procedures


Manual, semi-automated

• Use an interactive theorem prover

• Experts write proof scripts with machine assistance

• Potentially powerful, but expensive

• Proof scripts require expert understanding, may be much harder to write than programs

C program → theorem statement (automatic translation)

Prover returns unsolved subgoals

User simplifies subgoals

Manual, semi-automated

User enters proof

Computer keeps track of what is left to prove

Manual, semi-automated

etc.

Selection of notable work:

• John Harrison (Intel) - Verification of FP hardware and firmware using HOL

• Various formalizations of IEEE754 FP arithmetic for different theorem provers

• Boldo, Filliâtre, Melquiond et al. - Theorem prover combined with an incomplete FP prover.

Manual, semi-automated


Conclusion:

• Manual or semi-automated techniques can be very powerful, but require experts and large time investments

• Typically feasible for small system components of critical importance (e.g., Intel’s verification of processor components)

Manual, semi-automated


References

M. Daumas, L. Rideau and L. Théry. A Generic Library for Floating-Point Numbers and Its Application to Exact Computing. TPHOLs 2001

G. Melquiond. Floating-point arithmetic in the Coq system. RNC 2008.

P. Miner and S. Boldo. Float PVS library. http://shemesh.larc.nasa.gov/fm/ftp/larc/PVS-library/pvslib.html

P. Miner. Defining the IEEE-854 Floating-Point Standard in PVS. NASA Technical Memorandum, NASA Langley Research Center, 1995

J. Harrison. A machine-checked theory of floating-point arithmetic. TPHOLs 1999

Axiomatisations of FP

A. Ayad and C. Marché. Behavioral properties of floating-point programs. Hisseo publications, 2009.

A. Ayad and C. Marché. Multi-prover verification of floating-point programs. www.lri.fr/~marche/ayad10ijcar-submission.pdf, 2010

S. Boldo and J.C. Filliâtre. Formal verification of floating-point programs. ARITH 2007

Specification of FP properties


References

J. Harrison. Floating point verification in HOL light: The exponential function. FMSD, 16(3), 2000

J. Harrison. Floating-point verification. FM 2005

J. Harrison. Formal verification of square root algorithms. FMSD, 22(2), 2003

B. Akbarpour, A.T. Abdel-Hamid, S. Tahar, and J. Harrison. Verifying a synthesized implementation of IEEE-754 floating-point exponential function using HOL. CJ 53(4), 2010

J. O’Leary, X. Zhao, R. Gerth, C.H. Seger. Formally Verifying IEEE Compliance of Floating-Point Hardware

R. Kaivola and M. D. Aagaard. Divider circuit verification with model checking and theorem proving. TPHOLs 2000

M. Cornea-Hasegan. Proving the IEEE correctness of iterative floating-point square root, divide and remainder algorithms. Intel Technology Journal, Q2, 1998

J S. Moore, T. Lynch and M. Kaufmann. A mechanically checked proof of the correctness of the kernel of the AMD5K86 floating-point division algorithm. IEEE Transactions on Computers, 47(9), 1998.

D. Russinoff. A mechanically checked proof of IEEE compliance of a register-transfer-level specification of the AMD-K7 floating-point multiplication, division, and square root instructions. LMS Journal of Computation and Mathematics, 1, 1998.

J. Sawada. Formal verification of divide and square root algorithms using series calculation. ACL2 2002.

S. Boldo, J.-C. Filliâtre and G. Melquiond. Combining Coq and Gappa for Certifying Floating-Point Programs. Calculemus 2009.

Applications


Requires experts, expensive, powerful

Abstract Interpretation

Manual

Decision Procedures


Abstract Interpretation

Error

• Instead of exploring all executions, explore a single abstract execution

• Abstract execution contains all concrete executions!

• Highly efficient and scalable, but imprecise

(Figure: program traces and their abstract representation; the error states do not overlap the abstract representation, hence the program is safe.)

Program → Abstract Interpreter → "Program is safe" or "?"

Interpreter ↔ Abstract Domain

An abstract interpreter modularly uses operations provided by an abstract domain. Changing the domain changes the analysis.

Example: Signs domain {+, −} ∪ {⊥}

x = +, y = +  ⟹  z = +   (safe!)

Constants domain {c | c ∈ FP} ∪ {⊥}

x = ⊥, y = 5  ⟹  z = ⊥   (possibly unsafe)

Abstract Interpretation
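To make "operations provided by an abstract domain" concrete, here is a minimal, generic sign-domain sketch with an abstract multiplication; this is illustrative only, the program and exact domain on the slide may differ:

```c
#include <stdio.h>

/* A minimal sign domain: BOT = unreachable, NEG/ZERO/POS, TOP = unknown. */
typedef enum { BOT, NEG, ZERO, POS, TOP } sign;

/* Abstract transformer for multiplication over signs. */
static sign sign_mul(sign a, sign b) {
    if (a == BOT || b == BOT) return BOT;
    if (a == TOP || b == TOP) return TOP;   /* no information on either side */
    if (a == ZERO || b == ZERO) return ZERO;
    return (a == b) ? POS : NEG;            /* equal signs give +, else -    */
}

static const char *show(sign s) {
    static const char *names[] = { "bot", "-", "0", "+", "top" };
    return names[s];
}

int main(void) {
    /* x = +, y = +  =>  z = x * y = +  (cf. the signs example above) */
    printf("+ * +   = %s\n", show(sign_mul(POS, POS)));
    printf("+ * top = %s\n", show(sign_mul(POS, TOP)));
    return 0;
}
```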

Interpreter ↔ Abstract Domain

An abstract interpreter modularly uses operations provided by an abstract domain. Changing the domain changes the analysis.

Example: Interval domain {[l, u] | l, u ∈ Int}

x, y ∈ [min(Int), max(Int)]

x, y ∈ [min(Int), −1]

x ∈ [5, 5], y ∈ [min(Int), max(Int)]

x ∈ [min(Int), 5], y ∈ [min(Int), max(Int)]

Abstract Interpretation

Floating Point Intervals: {[l, u] | l, u ∈ FP} ∪ {⊥}

x ∈ [−1.570796, 1.570796]

result ∈ [−2.216760, 2.216760]

result ∈ [−2.301135, 2.301135]

result ∈ [−2.296453, 2.296453]

Potentially unsafe

Abstract Interpretation
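To make the interval bounds above concrete: the toy interval evaluation below, applied to an assumed quintic sine approximation standing in for the program on the slides, produces bounds very close to the ones quoted. A sound analyser would also round the interval bounds outward, which this sketch omits.

```c
#include <stdio.h>

/* A toy interval domain over doubles (no outward rounding). */
typedef struct { double lo, hi; } itv;

static itv itv_add(itv a, itv b) { return (itv){ a.lo + b.lo, a.hi + b.hi }; }
static itv itv_sub(itv a, itv b) { return (itv){ a.lo - b.hi, a.hi - b.lo }; }
static double min4(double a, double b, double c, double d)
{ double m = a; if (b < m) m = b; if (c < m) m = c; if (d < m) m = d; return m; }
static double max4(double a, double b, double c, double d)
{ double m = a; if (b > m) m = b; if (c > m) m = c; if (d > m) m = d; return m; }
static itv itv_mul(itv a, itv b) {
    double p1 = a.lo * b.lo, p2 = a.lo * b.hi, p3 = a.hi * b.lo, p4 = a.hi * b.hi;
    return (itv){ min4(p1, p2, p3, p4), max4(p1, p2, p3, p4) };
}
static itv itv_divc(itv a, double c) { return (itv){ a.lo / c, a.hi / c }; }  /* c > 0 */

int main(void) {
    itv x  = { -1.570796, 1.570796 };                    /* input range [-pi/2, pi/2] */
    itv x2 = itv_mul(x, x), x3 = itv_mul(x2, x), x5 = itv_mul(x3, x2);
    /* result = x - x^3/6 + x^5/120, evaluated interval by interval */
    itv result = itv_add(itv_sub(x, itv_divc(x3, 6.0)), itv_divc(x5, 120.0));
    printf("result in [%f, %f]\n", result.lo, result.hi); /* roughly [-2.2965, 2.2965] */
    return 0;
}
```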

Astrée Abstract Interpreter

• Mature abstract interpreter by Cousot et al.

• Large number of domains

• Sold and supported by AbsInt GmbH

• Successful in proving large avionics control software correct: 100k lines of code in 1h → highly scalable

• Various domains for floating point analysis:

(Embedded excerpt from the ASTRÉE paper, describing variable packing, the floating-point interval linear form abstraction, which evaluates an expression such as (x + y) + z over the reals with explicit rounding-error terms, and the simplified filter abstract domains. Fig. 2: filter trace, ellipsoid abstraction, octagon abstraction, interval abstraction; the interval and octagon envelopes are unstable, the ellipsoid is stable.)

Figure labels: Ellipses, Octagons, Intervals, Original traces

Abstract Domains for Floating Point

• Abstract domains are typically formulated over the real or rational numbers

• Numeric domains rely on mathematical properties, such as associativity, which do not hold over floating-point numbers (see the example after this slide):

(a + b) + c = a + (b + c)

• Solution (Miné 2004): Interpret operations over floating-point numbers as real number operations plus error terms
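A small example (not from the slides) of associativity failing in double precision:

```c
#include <stdio.h>

int main(void) {
    double a = 1e16, b = -1e16, c = 1.0;
    /* (a + b) + c = 0 + 1 = 1, but a + (b + c) = 1e16 + (-1e16) = 0,
     * because adding 1.0 to -1e16 is absorbed by rounding. */
    printf("(a+b)+c = %g\n", (a + b) + c);   /* 1 */
    printf("a+(b+c) = %g\n", a + (b + c));   /* 0 */
    return 0;
}
```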

Fluctuat: Errors as First Class Citizens

• Static analyser built for FP precision analysis

• Idea: Keep track separately of three distinct values for each variable

(fx, rx, ex)

FP value Real value FP error

• Abstract these values separately

([1, 1], [1, 1], [0, 0])

([0, 0], [0, 0], [0, 0])

([0, 1], [0, 1], [0, 0])  FP and real value are imprecise, but there is no rounding error

Fluctuat: Tracking errors with Zonotopes

• Fluctuat uses zonotope abstractions, which combine intervals with noise symbols

x = ε1   (≡ x ∈ [0, 1])

Noise variables take values in [0, 1]

y = 2·ε1 + c·ε2 + 1.0f

The relation to x is preserved; the new noise symbol models the rounding error

• The sources of imprecision can be precisely tracked

Imprecision in Abstract Interpretation

• The efficiency of abstract interpreters comes at the cost of precision. Imprecision is accumulated from three sources:

• Statements

• Control-flow

• Loops

x ∈ [−5, 5], y ∈ [−25, 25]

x ∈ [0, 1];  x, y ∈ [0, 1]

x ∈ [−1, 1]

x, y ∈ [1, 1];  x ∈ [100001, max(Int)]

y ∈ [min(Int), max(Int)]

(Interval annotations from the example programs on the slide; the programs themselves are not reproduced here. A small illustration follows below.)
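The programs behind these intervals are not in the transcript; the assumed examples below illustrate the first two sources, statements and control-flow joins, for an interval analysis:

```c
#include <stdio.h>

/* Statements: for x in [-5, 5], interval arithmetic gives x * x the range
 * [-25, 25] (it multiplies bounds blindly), although every concrete value
 * of x * x lies in [0, 25]. */
float square(float x) { return x * x; }

/* Control flow: the join of the two branches is the interval [-1, 1],
 * although the value 0 can never occur. */
float sign_of(float x) { return x < 0.0f ? -1.0f : 1.0f; }

int main(void) {
    printf("%f %f\n", square(3.0f), sign_of(-2.0f));
    return 0;
}
```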

Imprecision in Abstract Interpretation

• For efficiency reasons, most numeric abstract domains are convex

(Figures: examples of convex abstractions, via an embedded excerpt from the ASTRÉE paper showing ellipses, octagons, intervals and the original traces, plus convex polyhedra; and a zonotope, via an embedded excerpt from the Taylor1+/APRON paper on joining affine forms and implementing their coefficients in double precision.)

Imprecision in Abstract Interpretation

What if convex abstractions are too weak?

(Figure: program traces and two error regions.)

Very common scenario


Handling Imprecision

What happens if the analysis is imprecise?

(Figure: two error regions.)

Customer → sends code → AbsInt GmbH → sends a manually created configuration back

Researcher → manually creates a new abstract domain

Conclusion:

• Very scalable

• Imprecise

• Precise results require experts and research effort

• Expert created domains are moderately reusable

• Feasible for programs with homogeneous structure and behaviour (success in avionics)

Abstract Interpretation


References

A. Chapoutot. Interval slopes as a numerical abstract domain for floating-point variables. SAS 2010

L. Chen, A. Miné and P. Cousot. A sound floating-point polyhedra abstract domain. APLAS 2008

A. Miné. Relational abstract domains for the detection of floating-point run-time errors. ESOP 2004

L. Chen, A. Miné, J. Wang and P. Cousot. An abstract domain to discover interval Linear Equalities. VMCAI 2010

L. Chen, A. Miné, J. Wang and P. Cousot. Interval polyhedra: An Abstract Domain to Infer Interval Linear Relationships. SAS 2009

K. Ghorbal, E. Goubault and S. Putot. The zonotope abstract domain Taylor1+. CAV 2009

B. Jeannet and A. Miné. Apron: A library of numerical abstract domains for static analysis. CAV 2009

D. Monniaux. Compositional analysis of floating-point linear numerical filters. CAV 2005

J. Feret. Static analysis of digital filters. ESOP 2004

F. Alegre, E. Feron and S. Pande. Using ellipsoidal domains to analyze control systems software. CoRR 2009

E. Goubault and S. Putot. Weakly relational domains for floating-point computation analysis. NSAD 2005

E. Goubault. Static analyses of the precision of floating-point operations. SAS 2001

Floating point abstract domains


References: Industrial Case Studies

E. Goubault, S. Putot, P. Baufreton, J. Gassino. Static analysis of the accuracy in control systems: principles and experiments. FMICS 2007

D. Delmas, E. Goubault, S. Putot, J. Souyris, K. Tekkal, F. Védrine. Towards an industrial use of FLUCTUAT on safety-critical avionics software.  FMICS 2009

J. Souyris and D. Delmas. Experimental assessment of Astrée on safety-critical avionics software. SAFECOMP 2007

J. Souyris. Industrial experience of abstract interpretation-based static analyzers. IFIP 2004

P. Cousot. Proving the absence of run-time errors in safety-critical avionics code. EMSOFT 2007

FP Static Analysers

B. Blanchet, P. Cousot, R. Cousot, J. Feret, L. Mauborgne, A. Miné, D. Monniaux and X. Rival. A static analyzer for large safety-critical software. SIGPLAN 38(5), 2003

P. Cousot, R. Cousot, J. Feret, L. Mauborgne, A. Miné, D. Monniaux and X. Rival. The ASTRÉE analyzer. ESOP 2005

E. Goubault, M. Martel and S. Putot. Asserting the precision of floating-point computations: a simple abstract interpreter. ESOP 2002


Requires experts, expensive, powerful

Abstract Interpretation

Manual

Decision Procedures

Scalable and efficient. Precise analysis requires experts

Error

Decision Procedures

• Precisely explore a large set of program traces

• For efficiency, represent problem symbolically as satisfiability of a logical formula

Program traces

The program has a bug exactly if isTrace(t) ∧ error(t) is satisfied by some t

Propositional SAT

Propositional formula: φ = (a ∨ ¬b) ∧ (¬a ∨ b) ∧ ¬b

Is there an assignment to a,b that makes the formula true?

(Figure: "SAT Solvers are Efficient", from Malik and Zhang 2009: decrease in SAT solving time for SAT algorithms 2000-2007; y-axis from 1s to 100s.)

Why are SAT solvers so efficient?

Probe for solution Learn from failure

failure

• SAT solvers learn from failure

• SAT solvers spot relevance


Example

c → (r = a ÷32 b)  ∧  ¬c → (r = a ×32 b)  ∧  a > 0 ∧ b > 0 ∧ r < 0

(÷32 and ×32 denote 32-bit machine division and multiplication.)

Can be translated to propositional logic using divider and multiplier circuits

The formula evaluates to true under the following assignment:

a, b ↦ 123456789

r ↦ −1757895751

c ↦ false

Decision Procedures

Counterexample!
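The slide's satisfying assignment can be checked directly. In the C sketch below, the multiplication is performed in uint32_t so that the 32-bit wrap-around is well defined; converting the result back to int32_t yields the value from the slide on two's-complement platforms:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int32_t a = 123456789, b = 123456789;
    int c = 0;                                   /* take the multiplication branch */

    /* r = a *32 b: 32-bit multiplication wraps around modulo 2^32. */
    int32_t r = c ? a / b : (int32_t)((uint32_t)a * (uint32_t)b);

    printf("r = %d\n", r);                       /* prints -1757895751 */
    assert(!(a > 0 && b > 0 && r < 0));          /* violated: positive * positive < 0 */
    return 0;
}
```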

Bounded Model Checking

Loops require unrolling before translation

If the loop does not have a known fixed bound, it is unrolled up to a chosen depth.


Bounded Model Checking

Program → Decision Procedure

Satisfiable: the program has a bug, a counterexample is returned

Unsatisfiable: ? (no bug found within the unrolling bound)

FP support in CBMC (2008)

• CBMC implements bit-precise reasoning over floating-point numbers using a propositional encoding

• Uses IEEE-754 semantics with support for various rounding modes

• Allows proofs of complex, bit-level properties


Scalability of Propositional Encoding

• Floating-point arithmetic is flattened to propositional logic

• Requires instantiation of large floating point arithmetic circuits

N     Nr. variables    Memory use
5     ~130,000         ~90 MB
10    ~260,000         ~180 MB

• Resulting formulas are hard for SAT solvers and take up large amounts of memory


Mixed Abstractions for Floating Point Arithmetic (2009)

• Use propositional abstraction to increase efficiency and ease memory requirements

• Novel mixed abstraction framework

• Over-approximations allow more behaviours: Reduce the initial number of variables. Eases memory requirements and improves efficiency.

• Under-approximations restrict behaviours: Allows us to quickly identify solutions.

• Integrated with CBMC and the Boolector SMT solver

Mixed Abstractions for Floating Point Arithmetic (2009)


Related work

Constraint satisfaction

C. Michel, M. Rueher and Y. Lebbah: Solving constraints over floating-point numbers. CP2001

B. Botella, A. Gotlieb and C. Michel: Symbolic execution of floating-point computations. STVR2006

SMT

P. Ruemmer and T. Wahl. An SMT-LIB theory of binary floating-point arithmetic. SMT 2010

A. Brillout, D. Kroening and T. Wahl. Mixed abstractions for floating point arithmetic. FMCAD 2009

R. Brummayer and A. Biere. Boolector: An Efficient SMT Solver for Bit-Vectors and Arrays. TACAS 2009

Incomplete Solvers

S. Boldo, J.-C. Filliâtre and G. Melquiond. Combining Coq and Gappa for Certifying Floating-Point Programs. Calculemus 2009.


Requires experts, scalable, precise

Abstract Interpretation

Manual

Decision Procedures

Scalable. Precision requires experts

Precise. Scalability requires experts

Conclusion Part I

(Figure: theorem proving, decision procedures and abstract interpretation plotted against the axes automatic, scalable and precise.)

Abstract Interpreter: answers "Safe" or "?"

Decision Procedures: answer "Bug" or "?"

Questions so far?


Part II


(Figure: decision procedures and abstract interpretation plotted against the axes automatic, scalable and precise.)

We are interested in techniques that are

• scalable

• sufficiently precise to prove safety

• fully automatic

Central insight: Modern decision procedures are abstract interpreters!

Manually adjusting analysis precision by abstract partitioning

(Figure: two error regions.)

y ∈ [−1, 1]

Potentially unsafe!   Safe!

How do we find the partition automatically?


SAT solving by example

The main data structure of a SAT solver is a partial variable assignment, which represents a solution candidate:

V → {t, f}   (partial map)

SAT solvers accept formulas in conjunctive normal form, i.e. conjunctions of clauses, which are disjunctions of literals:

φ = (p ∨ ¬q) ∧ ... ∧ (¬r ∨ w ∨ q)

SAT solving: Deduction

φ = p ∧ (¬p ∨ ¬q) ∧ (q ∨ r ∨ ¬w) ∧ (q ∨ r ∨ w)

SAT deduces new facts from clauses:

{p ↦ t}  ⟶  {p ↦ t, q ↦ f}

At this point, the clauses yield no further information

SAT is Abstract Analysis: Deduction

φ = p ∧ (¬p ∨ ¬q) ∧ (q ∨ r ∨ ¬w) ∧ (q ∨ r ∨ w)

{p ↦ t}  ⟶  {p ↦ t, q ↦ f}

The result of deduction is identical to applying interval analysis to the corresponding program: p ∈ [1, 1], q ∈ [0, 0]

Deduction in a SAT solver is abstract analysis

SAT solving: Decisions

φ = p ∧ (¬p ∨ ¬q) ∧ (q ∨ r ∨ ¬w) ∧ (q ∨ r ∨ w)

Pick an unassigned variable and assign a truth value:

{p ↦ t, q ↦ f}  ⟶  {p ↦ t, q ↦ f, r ↦ f}

The SAT solver makes a "guess". Now new deductions are possible.

SAT solving: Learning

φ = p ∧ (¬p ∨ ¬q) ∧ (q ∨ r ∨ ¬w) ∧ (q ∨ r ∨ w)

{p ↦ t, q ↦ f, r ↦ f}  ⟶  {p ↦ t, q ↦ f, r ↦ f, w ↦ f}  ⟶  conflict

The variable w would have to be both true and false.

The contradiction is the result of r being assigned false as part of a decision. The SAT solver therefore learns that r must be true:

φ ← φ ∧ r
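The deduction, decision and learning steps on this formula can be replayed with a few lines of code; a naive unit-propagation sketch (illustrative only, not how a real solver is implemented):

```c
#include <stdio.h>

/* Naive unit propagation over the example formula from the slides:
 *   phi = p /\ (~p \/ ~q) /\ (q \/ r \/ ~w) /\ (q \/ r \/ w)
 * Literals are encoded as +v for v and -v for ~v; 0 terminates a clause. */
enum { P = 1, Q = 2, R = 3, W = 4, NVARS = 4 };
static const int clauses[][4] = {
    { P, 0 }, { -P, -Q, 0 }, { Q, R, -W, 0 }, { Q, R, W, 0 } };
static int val[NVARS + 1];  /* 0 = unassigned, 1 = true, -1 = false */

static int lit_val(int l) { return l > 0 ? val[l] : -val[-l]; }

/* Returns 1 on conflict, 0 otherwise. */
static int propagate(void) {
    int changed = 1;
    while (changed) {
        changed = 0;
        for (unsigned c = 0; c < sizeof clauses / sizeof clauses[0]; c++) {
            int unassigned = 0, last = 0, satisfied = 0;
            for (int i = 0; clauses[c][i]; i++) {
                int v = lit_val(clauses[c][i]);
                if (v == 1) satisfied = 1;
                if (v == 0) { unassigned++; last = clauses[c][i]; }
            }
            if (satisfied) continue;
            if (unassigned == 0) return 1;            /* conflict        */
            if (unassigned == 1) {                    /* unit clause     */
                if (last > 0) val[last] = 1; else val[-last] = -1;
                changed = 1;
            }
        }
    }
    return 0;
}

int main(void) {
    propagate();                        /* deduces p = true, q = false  */
    printf("after deduction: p=%d q=%d r=%d w=%d\n", val[P], val[Q], val[R], val[W]);
    val[R] = -1;                        /* decision: r = false          */
    if (propagate())                    /* both clauses over w conflict */
        printf("conflict under r=false => learn the unit clause r\n");
    return 0;
}
```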

SAT is Abstract Analysis: Decisions & Learning

Decisions and learning in a SAT solver are abstract partitioning

φ ← φ ∧ r

SAT is Abstract Analysis

• Deduction in SAT is abstract interpretation

• Decisions and learning are abstract partitioning

• The SAT algorithm is really an automatic partition refinement algorithm.

Expanding the scope of SAT: SAT(A), parameterised by an abstract domain A

Data: from propositional logic to rich logics, e.g. FP

Control: from Boolean programs to programs

SAT for programs: Abstract Implication Graph

(Figure, repeated over several animation steps, from Haller's "DPLL is Abstract Interpretation" slides. The example program has nodes n1, c1, c2, c3, c4, n2 with guards [a ≤ −2], [a = −1], [a = 0], [a ≥ 1], assignments b := 2, b := −2, b := −1, b := 1, and the error guard [b = 0]. The implication graph records deductions at decision level 0, e.g. c1: a ≤ −2, c2: a = −1, c3: a = 0, c4: a ≥ 1, n2: b ≤ 2, b ≥ −2. After the decision n1: a ≤ −42 at decision level 1, deduction yields c2: ⊥, c3: ⊥, c4: ⊥, c1: a ≤ −42 and n2: b ≥ 2, hence ⊥: SAFE. The safe result is generalised by finding a cut with a maximal wp-underapproximation transformer, e.g. ¬(n1: a ≤ −2) and ¬(n2: b ≥ 1); the complementary case yields n2: b ≤ −1 and again ⊥: SAFE.)

Prototype: Abstract Conflict Driven Learning (ACDL)

• Implementation over floating-point intervals

• Automatically refines an analysis in a way that is

• Property dependent

• Program dependent

• Uses learning to intelligently explore partitions

• Significantly more precise than mature abstract interpreters

• Significantly more efficient than floating-point decision procedures on short non-linear programs


Demo


More results

(Embedded excerpt from the evaluation section of the paper.)

Fig. 2. Execution times of Astrée, CBMC, and CDFL per benchmark (log scale, 0.1s to 1000s); wrong results set to 3600s.

Fig. 3. Effects of learning (learning disabled vs. with learning) and of decision heuristics (range decisions vs. random decisions).

Several observations: on average, our analysis is 264 times faster than CBMC, if CBMC terminates properly at all. The largest speed-up is a factor of 1595. Although Astrée is often faster than our prototype, its precision is insufficient in many cases: we obtained 16 false alerts for the 33 safe benchmarks.

Decision Heuristics and Learning. Figure 3 visualises the effects of learning and decision heuristics. Learning has a significant influence on runtime, as does the choice of a decision heuristic. We compare a random heuristic, which picks a restriction over a random variable, with a range-based one, which always aims to restrict the least restricted variable. Random decision making outperforms range-based. Activity-based heuristics common in SAT may work as well in our case.

Dynamic Precision Adjustment. One of the main advantages of our procedure is that refinement is property-dependent. The precision of the analysis dynamically adapts to match the precision required by the property. This is illustrated in Figure 4, where we check bounds on the result of computing a sine approximation under the input range [−π/2, π/2]. The input value is shown on the x-axis, the result of the computation on the y-axis. The bound we check against is depicted as two red horizontal lines; boundaries of explored partitions are shown as black vertical lines. The actual maximum of the function lies at about 1.00921. As the checked bound (Figure 4 shows bounds 1.2 and 1.01) approaches this maximum, more partitions are explored.

Average speedup over CBMC ~270x

Property-dependent Trace Partitioning

(Figure, repeated over several slides as an animation, from Haller's "DPLL is Abstract Interpretation" slides: the sine approximation plotted for x ∈ [−π/2, π/2], with the checked bounds result ≤ c and result ≥ −c for c ∈ {2.0, 1.5, 1.2, 1.1, 1.01, 1.001, 1.0, 0.99} drawn as horizontal lines and the boundaries of the explored partitions as vertical lines.)

Implementation

Number of partitions vs. tightness of bound: the tighter the checked bound, the more partitions the analysis explores.

Current and Future Work

• Develop an SMT solver for floating point logic

• Model on the success of propositional SAT:

• Simple abstract domain

• Highly efficient data structures

Data: from propositional logic to rich logics, e.g. FP; Control: from Boolean programs to programs

(Figure: "SAT Solvers are Efficient", Malik and Zhang 2009: decrease in SAT solving time for SAT algorithms 2000-2007.)

Current and Future Work

• Reengineer prototype into a tool for floating point verification

• Significantly improved efficiency

• Generic interface for integrating abstract domains

• Development and generalisation of heuristics and learning strategies


Refining loops with ACDL

• Currently, loops cause imprecision in our analysis

• Analysis may fail to prove safety

Successful analysis requires distinguishing between even and odd numbers of loop iterations


Refining loops with ACDL

Exhaustive testing

Full abstraction

• Solution: Apply the SAT algorithm to control flow itself

• Make decisions over control-flow (e.g., assume odd number of loop iterations)

• Learning permanently alters control flow

• Resulting analysis can dynamically vary precision from full abstraction to precise case exploration

ACDL

Maximal precision  ↔  Maximal efficiency

Conclusion - Part II

(Figure: theorem proving, decision procedures and abstract interpretation plotted against the axes automatic, scalable and precise.)

Scalability  ←  ACDL  →  Precision

Fully automatic

Thank you for your attention


Additional slides


Lazy and eager SMT

Two approaches to lift SAT to a richer logic L

Eager approach: fully translate Φ ∈ L to a propositional formula φ ∈ Prop and run a propositional SAT solver, which answers SAT or UNSAT.

Lazy approach: partially translate Φ ∈ L to φ ∈ Prop. The propositional SAT solver either answers UNSAT or produces a candidate assignment, which is checked by a theory solver for L; if the theory solver finds it infeasible, this is fed back to the SAT solver, otherwise the answer is SAT.
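A skeleton of the lazy loop, with hypothetical helper functions (boolean_abstraction, sat_solve, theory_check, block) standing in for a real SAT solver and theory solver; they are stubbed here only so the sketch compiles:

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct { int dummy; } Formula;
typedef struct { int dummy; } Model;

/* Placeholders, not the API of any real solver. */
static Formula boolean_abstraction(Formula phi) { return phi; }
static bool sat_solve(Formula prop, Model *m)   { (void)prop; (void)m; return false; }
static bool theory_check(Formula phi, Model m)  { (void)phi; (void)m; return true; }
static Formula block(Formula prop, Model m)     { (void)m; return prop; }

/* Returns true iff phi is satisfiable (lazy SMT loop). */
static bool lazy_smt(Formula phi) {
    Formula prop = boolean_abstraction(phi);   /* partial translation to Prop     */
    Model m;
    while (sat_solve(prop, &m)) {              /* propositional SAT solver        */
        if (theory_check(phi, m))              /* theory solver: model feasible?  */
            return true;                       /* SAT                             */
        prop = block(prop, m);                 /* infeasible: add blocking clause */
    }
    return false;                              /* UNSAT                           */
}

int main(void) { Formula phi = {0}; printf("%d\n", lazy_smt(phi)); return 0; }
```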

Limits of lazy SMT for FP

Lazy SMT works if the logic can be decomposed into an efficiently solvable theory component and a propositional component.

(Figure: the logic L split into a propositional part, handled by the propositional SAT solver, and a theory part L_T, handled by the theory solver.)

The approach breaks down if significant communication is necessary between the two.

Due to non-numeric behaviour in floating-point arithmetic, such as rounding and special values, there is no clear decomposition. Therefore, the analysis is often performed over the real numbers instead, which may lead to unsound results.
