
Using Formal Specifications for Functional Validation of Hardware Designs

Kanna Shimizu and David L. Dill
Stanford University

Formal specifications can help resolve both ambiguity issues and correctness problems in verifying complex hardware designs. This new methodology shows how specifications can also help design productivity by automating many procedures that are now done manually. Input sequences, output assertions, and a simulation coverage metric for the design under verification are all generated directly from the specification.



MANY OF TODAY'S digital circuit designs depend on the tight integration of multiple design modules. These modules, designed by different engineers, interact via an interface, such as a bus or IP core interface. The interface is of utmost importance because it is where the various designers' conceptions come together. If designers do not uniformly interpret the interface specification, modules will not meld together easily or will behave incorrectly when connected. Therefore, the interface protocols must be clearly defined so that the designers can view the expected module behavior consistently. Furthermore, the protocol must be thoroughly debugged and solid, because so much of the module design depends on it.

Despite the critical role it plays, the interface protocol is usually defined informally in a natural language such as English. Unfortunately, such an informal specification fundamentally lacks preciseness and clarity, and consequently tends to be ambiguous and subject to misinterpretation. Another drawback is that a protocol specified in English cannot be debugged until it is implemented in software. Then, after much work, the protocol can be simulated and corrected so that it operates as expected. Therefore, a new, informally specified protocol is likely to be buggy and contradictory until extensively tested in implementations.

Formal specifications can help resolve both the ambiguity and correctness problems. These specifications use well-defined logic languages and are accordingly precise. Furthermore, they can be functionally verified directly, before any implementations are designed, via automated tools. To understand the difference between a formal specification and an informal one, consider this example: "trdy# cannot be driven until devsel# is asserted." This informally stated rule comes from the specification of the peripheral component interconnect (PCI) bus protocol, used in many designs. The formal version might look like this:

trdy → devsel ("if trdy is true, then devsel must be true")

Clearly, an entire specification written this way has no room for ambiguity and can be mechanically checked for correctness.
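
As a concrete taste of what "mechanically checked" can mean, the following short sketch (ours, in Python; the trace values are made up for illustration and are not from the PCI specification) treats the rule as a predicate evaluated on every cycle of a simulation trace:

def rule_trdy_implies_devsel(trdy, devsel):
    # The formal rule trdy -> devsel: if trdy is asserted, devsel must be too.
    return (not trdy) or devsel

# Hypothetical three-cycle trace of the two interface signals.
trace = [
    {"trdy": False, "devsel": False},
    {"trdy": False, "devsel": True},
    {"trdy": True,  "devsel": True},
]

for cycle, sig in enumerate(trace):
    assert rule_trdy_implies_devsel(sig["trdy"], sig["devsel"]), \
        f"protocol violation at cycle {cycle}"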



The advantages of formal specifications are obvious. However, many designers avoid such specifications because of a perceived cost-value problem; they are often considered too costly for the benefits they promise. (See the sidebar, "Assertion-based and constraint-based verification," for more information on formal methods.) On the down side, formal specifications may require lengthy development time and formal verification expertise, and, for most designers, the value of a correct and precise specification does not justify these costs. In this day of diminishing time to market, precious resources are allocated to more pressing design needs.


Assertion-based and constraint-based verification

Carl Pixley, Synopsys

In the past few years, there has been great progress in CAD vendor deployment of formal methods for commercial design. Static Boolean equivalence is now routine and clearly preferable to simulation-based equivalence checking. The current trend is for assertion-based checking in simulation, emulation, and formal verification. Assertions (also known as properties, monitors, and checkers) are becoming widely accepted in industry. Approximately a dozen companies, such as Synopsys, Cadence, and Tempus Fugit, offer various forms of assertions, including

• assertion checkers (for example, tristate multiplexer checking, and array indices out of bounds);
• predefined assertion suites (such as AMBA and PCI assertion checker suites);
• source code semantic analysis (such as checking full_case and parallel_case); and
• formal verification of assertions with model checkers based on various technologies such as BDDs, SAT (Boolean satisfiability), symbolic simulation, automatic test-pattern generation, automatic abstraction, and lightweight theorem proving.

Several hotly debated assertion languages are vying for standardization. For example, the Accellera technical committee has chosen IBM's Sugar language (http://www.eda.org/vfv/), and Synopsys has announced OpenVera Assertions (http://www.eedesign.com/story/OEG20020415S0029).

This article by Shimizu and Dill continues a sequence of articles related to constraint-based verification.1-3 These articles, and others, have shown that simple Boolean constraints (that is, assertion checkers) with small companion state machines could define a bus protocol, which could be easily model checked. This constraint-based modeling resulted in identifying bugs in the well-established PCI protocol. The current article explains how the same methodology can be used to check bus interface units and also measure simulation coverage of the constraints themselves. The fundamental computational algorithm of constraint-based verification is the method for automatically converting simple (but powerful) interface checkers into stimulus generators3 for simulation-based verification, and environments necessary for formal verification.1 Hence, constraint-based verification unites informal (that is, simulation-based) verification with formal verification, both supporting assertion-based verification.

Constraint-based verification establishes a new paradigm for verification. Constraints can be developed incrementally and inexpensively without a heavyweight test bench, and can help animate the design under verification (DUV) at the earliest opportunity. Constraints are simple to write, and designers can use them directly, well before turning over the design to a verification team. Constraints can then be "flipped" to become checkers at higher integration levels, whereas conventional simulation drivers are just discarded during full-chip or system-on-a-chip integration. Constraints therefore enable assume/guarantee reasoning; that is, assumptions about the environment of a DUV must be proven when the DUV is connected to its true environment. Constraint-based verification can be easily integrated into, and extend, a conventional verification flow.

References
1. M. Kaufmann, A. Martin, and C. Pixley, "Design Constraints in Symbolic Model Checking," Proc. Int'l Conf. Computer-Aided Verification (CAV 98), Springer-Verlag, Berlin, 1998, pp. 477-487.
2. J. Kukula and T. Shiple, "Building Circuits from Relations," Proc. Int'l Conf. Computer-Aided Verification (CAV 00), Springer-Verlag, Berlin, 2000, pp. 113-123.
3. J. Yuan et al., "Modeling Design Constraints and Biasing in Simulation Using BDDs," Proc. Int'l Conf. Computer-Aided Design (ICCAD 99), ACM Press, New York, 1999, pp. 584-589.

Carl Pixley is a senior director at Synopsys. Contact him at [email protected].


To counter these disincentives, we have developed a methodology that increases a specification's value beyond its role as interface documentation. If designers can use formal specifications in novel ways that enhance design productivity, they may be less reluctant to develop them. In this light, our methodology exploits formal specifications to automate simulation-based validation procedures that are now done manually. For register-transfer-level (RTL) designs simulated in software, designers can use the specification to directly generate the design's inputs, check its behavior, and monitor simulation coverage.

The problem

To verify a hardware description language (HDL) design module with software simulation, an engineer needs additional tools, as Figure 1 illustrates:

• Input generator. Logic is required to drive the design inputs. One way is to use random sequences. The problem with this approach is that because the inputs are not guaranteed to be correct (they might be garbage from the design's point of view), it is difficult to gauge the design's correctness. A less haphazard method is directed testing, in which input sequences are manually written, but this approach is time-consuming, and writing such sequences correctly is difficult.

• Output correctness checker. Logic to determine the correctness of the module's behavior is needed because manual scrutiny is usually too cumbersome. There are two types of correctness. When designers check design outputs for protocol violations, they focus on the interface for correctness. In contrast, when they check design behavior by observing actions at multiple interfaces, the correctness centers more on the design and less on a single interface. The output correctness checker described in this article can check only for the former, in which the checker's observations are contained within one interface.

• Coverage metric. Because complete verification coverage that tests all possible input sequences is not possible, some metric must quantify its progress. Such a metric lets the verification engineer know whether the design's functionalities have been thoroughly exercised and all interesting cases have been reached during the simulation.

Our approach

Our methodology is based on a unified framework approach in which the three aforementioned tools are generated from a single source specification, as Figure 2 shows. This unified framework is possible because all three tools are based on the interface protocol. The input generator generates input sequences based on what the interface protocol allows, the output correctness checker compares the output to what the protocol deems correct, and the coverage metric quantifies coverage by exploiting the fact that the protocol defines the set of all possible interface events. Thus, designers can transform the interface specification into the three tools using automated methods. Currently, each verification aid is written from scratch, requiring a tremendous amount of development time and effort. By eliminating this step, our methodology enhances productivity and shortens overall development time.


Figure 1. Three aids are required to verify a design's functional behavior: an input generator, a coverage metric, and a correctness checker.


Furthermore, a thoroughly debugged, solid specification invariably leads to correct input sequences, correctness-checking properties, and coverage metrics. The correctness of the core document guarantees the correctness of the derived tools. In contrast, with current methods each verification aid must be individually debugged. The advantages of only having to check the specification are most pronounced for standard interfaces, where the correctness effort can be concentrated in the standards committee responsible for defining the interface and not duplicated among the many interface implementers. Furthermore, a change made to the protocol (a frequent occurrence in industry) requires only the associated change in the protocol specification because the verification aids can be regenerated from the revised document. Otherwise, the engineer would have to manually determine the effect of the change for each tool.

Generating the three tools

The output correctness checker is the most straightforward of the three tools to derive from the specification. The checker functions on the fly; during simulations, it flags an error as soon as the module violates the interface protocol. It checks for protocol conformance but does not check for the design-centric correctness described earlier. For example, a PCI checker will verify that a module obeys the protocol rules trdy → devsel or prev(trdy ∧ stop) → stop. The specification is guaranteed to be executable by the style rules, so the translation from it to an HDL checker requires minimal changes. We and Hu describe the details of this translation from the specification to the output correctness checker in another article.1
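
As a rough illustration of what such an on-the-fly checker does (this sketch is ours, in Python, and not the HDL checker produced by the translation just cited), the two PCI rules above can be checked cycle by cycle against sampled signal values:

class PciChecker:
    """Illustrative on-the-fly monitor for two PCI rules (not the authors' tool)."""
    def __init__(self):
        self.prev = None  # interface signal values from the previous cycle

    def check(self, cycle, sig):
        # Rule 1: trdy -> devsel (combinational, current cycle only).
        if sig["trdy"] and not sig["devsel"]:
            raise AssertionError(f"cycle {cycle}: trdy asserted without devsel")
        # Rule 2: prev(trdy & stop) -> stop (needs one cycle of history).
        if self.prev and self.prev["trdy"] and self.prev["stop"] and not sig["stop"]:
            raise AssertionError(f"cycle {cycle}: stop deasserted too early")
        self.prev = sig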

We do not describe the details of the output correctness checker here. In this article, we focus on the other two tools. The input generator produced by our method is dynamic and reactive; the generated inputs depend on the previous-cycle outputs of the design under verification. In addition, these inputs always obey the protocol, and the generation is a one-pass process. On every clock cycle, the algorithm solves the constraints that the specification imposes on the inputs. The design cannot discern the difference between interacting with this setup and interacting with an actual HDL implementation of the environment.

Although input generation using constraint solvers is not by itself novel, our approach is the first to use and exploit a complete specification. Writing constraints on an ad hoc basis for the express purpose of generating signal sequences is common. However, few have succeeded in transforming an existing, complete specification into a generator for verifying the complex designs commonly found in industry.

Finally, we introduce a new simulation coverage metric and describe the automatic input biasing based on this metric. Although more experiments are needed to validate this metric's effectiveness in measuring coverage, its main advantage (currently) is that it is specification-based and saves time. Extra work is not needed to write out a metric or to pinpoint the interesting scenarios; they are gleaned mechanically from the specification document.

Methodology

The bedrock of our methodology is the specification style, which is language independent. We explain the style rules that determine the specification structure so that the specification can be used for input generation and tracking coverage.


Figure 2. Unified framework approach. Three verification tools can be derived from one source specification.


Specification style

The specification style can be applied to many specification languages, from SMV to Verilog. Best described as a way to structure and restrict a specification, the style has been used to formally specify the core subsets of the signal-level PCI1 and Intel's Itanium processor bus protocols.2

A structured specification has many benefits that a freeform one lacks. For example, although Yuan et al. have also developed an input generator, SimGen,3 our generator is far more memory efficient than SimGen because our approach exploits the specification's structure. The structure lets only the relevant portions of the specification be extracted for signal generation. This results in dramatically smaller data structures and can allow input generation for large designs that previously could not be handled. For example, with the PCI design that we experimented on, a SimGen-like method would have required, in the worst case, 2^161 nodes in the data structure, whereas our method required only 2^15 nodes.

The specification uses multiple constraints to collectively define the signaling behavior at the interface. The constraints are short Boolean formulas that follow certain syntactic rules. The constraints are also independent of one another; rely on state variables for historic information; and, when joined together by an AND operator, define exactly the correct behavior of the interface. This method is similar to using temporal logic for describing behavior. However, our methodology allows and requires only the most basic operators for writing the constraints, and eschews the more powerful operators such as "eventually event A will happen" or "it is always possible for event A to happen." Furthermore, many designers use temporal formulas to describe a system's isolated characteristics, but this methodology requires constructing a self-contained, complete specification for the interface.

Style rule 1. The first style rule requires that the constraints be written in the following form:

prev(signal_0 ∧ … ∧ ¬signal_j ∧ … ∨ variable_0 ∧ … ∧ ¬variable_k) → signal_i ∨ … ∧ ¬signal_n

where "→" is the logical symbol for "implies." The antecedent is the expression to the left of the "→" symbol, and the consequent is the expression to the right of it. The allowed operators are AND, OR, NEGATION, and prev. The prev construct allows the value of a signal (or the state of a state machine) a cycle before the current state to be expressed. The constraints are written as an implication with the past expression (events from any state previous to the current state) as the antecedent and the current expression as the consequent. In essence, the past history, when it satisfies the antecedent expression, requires the current consequent expression to be true; otherwise, the constraint is not activated, and the interface signals do not have to obey the consequent in the current cycle. In this way, the activating logic and the constraining logic are separated. For example, the PCI protocol constraint, prev(trdy ∧ stop) → stop, means "if the trdy and stop signals were true in the previous cycle (the activating logic), then stop must be true in the current cycle (the constraining logic)."

This separation is key to memory-efficient signal generation. It identifies the relevant (that is, activated) constraints on a particular cycle so that only these constraints are used. The other constraints, because they are not activated, can be ignored for this particular cycle. Also, the separation allows the final constraint formulas to contain only the consequent halves. The activating half can be discarded because it is used only to determine the relevance of the constraint in a cycle. For these two reasons, the final formula is far smaller, and memory efficiency is greatly improved.
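
As a sketch of this form (our own illustration in Python, not the authors' specification language), each constraint can be held as an (antecedent, consequent) pair, with the antecedent evaluated over previous-cycle values and the consequent over current-cycle values:

from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, bool]

@dataclass
class Constraint:
    antecedent: Callable[[State], bool]  # activating logic, over prev-cycle values
    consequent: Callable[[State], bool]  # constraining logic, over current values

# prev(trdy AND stop) -> stop
c = Constraint(
    antecedent=lambda prev: prev["trdy"] and prev["stop"],
    consequent=lambda cur: cur["stop"],
)

def holds(c, prev, cur):
    # A constraint that is not activated places no obligation on the current cycle.
    return (not c.antecedent(prev)) or c.consequent(cur)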

Style rule 2. The second style rule—separability—requires each constraint to constrain only one component's behavior. Thus, a single constraint should constrain only outputs from one component, and not its inputs or other components' outputs. Equivalently, because the constraining part is isolated from the activating part (due to the first style rule), the separability rule requires the consequent to contain only one component's outputs.


Because only the consequent halves are used when the signals are generated and the consequents contain only outputs from one component, the number of variables in the constraint formula is greatly reduced. Otherwise, the formula will contain internal state variables, input variables, or output variables from other components. Consider the PCI constraint, "master must raise irdy# within eight cycles of the assertion of frame#," which translates to "IF the agent is the master and it has been seven cycles since frame was asserted and irdy has not been asserted yet and irdy is not asserted in this cycle, THEN the output irdy must be true in the next cycle."

prev((masteris = true) ∧ (frame8 = 7) ∧ ¬irdy) → irdy

Without the technique, the constraint formula will have to contain masteris (a 1-bit state variable), frame8 (a 3-bit counter), irdy (a 1-bit state variable), and irdy_out (a 1-bit free variable whose value we will choose). With the technique, the constraint formula contains only irdy_out if the constraint is activated and nothing if it is not. This is why, for the PCI example, the separation technique reduced the number of Boolean variables in the data structure from 161 to 15 and the space complexity from 2^161 to 2^15.
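
To make the variable reduction concrete, here is a small sketch (in Python, with names taken from the text; the layout is our illustration, not the authors' tool) showing that once the antecedent has been evaluated from known state, the only free variable handed to the solver is irdy_out:

def antecedent(prev):
    # Evaluated from known values: masteris (1 bit), the frame8 counter
    # (3 bits), and the previous irdy value. No free variables appear here.
    return prev["masteris"] and prev["frame8"] == 7 and not prev["irdy"]

def consequent(out):
    # The constraining half mentions only the component's own output.
    return out["irdy_out"]

prev_state = {"masteris": True, "frame8": 7, "irdy": False}
if antecedent(prev_state):
    solver_vars = ["irdy_out"]  # formula this cycle is simply: irdy_out must be true
else:
    solver_vars = []            # constraint not activated; contributes nothing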

Style rule 3. The third rule requires that the specification be free of a certain type of contradiction. This rule effectively guarantees that an output vector satisfying all the activated constraints always exists for a module as long as the output sequence so far has not violated the constraints. There is a universal test that can verify this property for a specification. Using a model checker, the following computation tree logic (CTL)4 property can be checked against the constraints, and any violation will pinpoint the dead state:

AG(all constraints have been true so far → EX(all constraints are true))

This rule is necessary to guarantee that a correct vector exists for every clock cycle when generating signals. If there were a contradiction, there would be no possible correct output for the module at a particular execution point.

Deriving an input generator

After designers fully specify a protocol with the list of constraints, they can use these constraints directly to emulate the protocol in software. Two setups are possible. In one scenario, no implementations have been designed yet, but designers would like to see possible interface signal traces to understand the protocol. For this scenario, dummy agents, each representing a particular module at the interface, can be created automatically. By using the constraints, the dummy agents generate outputs in every cycle and demonstrate how the interface modules would interact using the protocol. This is particularly useful because it is hard to visualize how the protocol works just from a collection of constraints.

In the second scenario, an implementation for a module has been designed and correct inputs are needed to stimulate the design. The dummy agents (minus the one for the implemented module) can emulate the behavior of the other agents at the interface and act as the environment for the module. The pseudocode in Figure 3 outlines how dummy agents can be created from the structured specification to generate inputs for the design, as in Figure 4.

Biasing the inputs

There is a lot of interest in how to steer simulations through meaningful scenarios so that bugs can be found. Most designers acknowledge that the main challenges in simulation are determining the interesting scenarios and deciding how to lead the design to those scenarios. We explain how designers can use the specification to attain these two goals.

Coverage metric. As a first-order approximation of interesting scenarios, or corner cases, designers can use the antecedents of the constraints, because the implementation needs to comply with the constraint clause only when the antecedent clause is true. For example, consider the PCI constraint, "master must raise irdy within eight cycles of the assertion of frame." The antecedent is "the counter that starts counting from the assertion of frame has reached 7 and irdy still has not been asserted," and the consequent is "irdy is asserted." Unless this antecedent condition happens during the simulation, compliance with this constraint cannot be completely known. For a simulation run that has triggered only 10% of the antecedents, only 10% of the constraints have been checked for the implementation. In this sense, the number of antecedents fired during a simulation run is a rough coverage metric.
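
A minimal sketch of this metric (ours, in Python, assuming constraints are stored as antecedent/consequent function pairs as in the earlier sketches) simply records which antecedents have fired at least once over a trace:

def antecedent_coverage(constraints, trace):
    """constraints: list of (antecedent_fn, consequent_fn) pairs;
    trace: per-cycle dictionaries of signal and state values."""
    fired = [False] * len(constraints)
    for prev, _cur in zip(trace, trace[1:]):
        for i, (ante, _) in enumerate(constraints):
            if ante(prev):
                fired[i] = True
    # Fraction of constraints whose activating condition was ever exercised.
    return sum(fired) / len(constraints)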

One major drawback results from using this metric for coverage. The problem is integral to the general relationship between implementation and specification. To create an implementation, the designer chooses an action from the choices offered by the specification for every state in the state machine. As a result, the implementation will not cover the full range of behavior allowed by the specification. Thus, some of the antecedents in the specification will never be true, because the implementation precludes any paths to a state where the antecedent is true. Unless verification engineers are familiar with the implementation design, they cannot know whether an antecedent has been missed because of the lack of appropriate simulation vectors or because the antecedent is structurally impossible.



Group constraints according to the interface component they specify.
    //This is possible because of style rule 2, separability.
    //If there are n interface components, there will be n groups.
Remove the group whose constraints apply to the component under verification.
    //These will not be needed. Now there are n - 1 groups of constraints.
For each group of constraints, perform the following steps on every clock cycle of the simulation run.
    //The goal is to choose an output assignment for each dummy agent, and
    //consequently an input assignment for the design in the next cycle.
    //Step 1
    For each constraint, evaluate just the antecedent half.
        //The antecedent values are determined by internal state variables
        //and observed interface signal values.
    For antecedents that evaluate to true, mark the corresponding constraints as activated.
    //Step 2
    Within each group, AND together the consequent halves of activated constraints to form the final formula.
        //As a result, there is one final formula for each dummy agent.
        //Because of rule 2, the final formulas are mutually independent and do
        //not have common variables; thus, finding solutions is easier.
    //Step 3
    Use a BDD-based satisfiability solver to determine a solution to each final formula.
        //Because the specification is flexible and allows a range of behaviors,
        //multiple solutions are likely. The chosen solutions are the output
        //vectors for each dummy agent in this cycle.
    //Step 4
    Go back to step 1 in the next clock cycle.
End

Figure 3. Pseudocode outlining how dummy agents can be created from the structured specification to generate inputs for the design.
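
The same loop can be sketched in executable form. The following Python fragment is our illustration; a brute-force enumeration over the agent's output bits stands in for the BDD-based solver, and the constraint representation follows the earlier sketches rather than the authors' tool:

import itertools
import random

def generate_outputs(group, prev, output_vars):
    """group: list of (antecedent_fn, consequent_fn) pairs for one dummy agent;
    prev: previous-cycle signal and state values;
    output_vars: names of this agent's output signals."""
    # Step 1: keep only the constraints whose antecedent fired this cycle.
    activated = [cons for ante, cons in group if ante(prev)]
    # Steps 2-3: find output assignments satisfying the AND of the activated
    # consequents (brute force here; the authors use a BDD-based solver).
    candidates = []
    for values in itertools.product([False, True], repeat=len(output_vars)):
        assignment = dict(zip(output_vars, values))
        if all(cons(assignment) for cons in activated):
            candidates.append(assignment)
    # Style rule 3 guarantees at least one satisfying assignment exists.
    return random.choice(candidates)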


Deriving biases for missed corner cases. To reach interesting corner cases, verification engineers often apply biasing to input generation. If problematic states are caused by certain inputs being true often, the engineer programs the input generator to set the variable n% true, instead of the neutral 50% true. For example, to verify how a component reacts to an environment that delays its response, env_response, the engineer can set the biasing so that the input, env_response, is true only 5% of the time. Designers should not use 0%, because it might cause the interface to deadlock. With prevailing methods, designers must provide the biasing numbers, requiring expert knowledge of the design, and must determine the biases by hand. In contrast, by targeting antecedents, designers can automatically derive interesting biases from the specification without knowing anything about the design. The algorithm works as follows:

1. Gather the constraints that specify the outputs of the component to be verified. The goal is to get as many antecedents of these constraints as possible to become true during the simulation runs.

2. Set biases for all input signals to neutral (50% true) in the input generator.

3. Run the simulation.

4. Determine which antecedents have not fired so far.

5. Pick one missed antecedent and use it to determine the variable biasing. If, for example, antecedent ¬a ∧ b ∧ ¬c has not been true, set the following biases: a is true seldom (2% of the time), b is true often (98%), and c is true seldom (2%); see the sketch following this list.

6. Rerun the simulation and repeat from step 4. Continue until all antecedents have been considered.
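
Step 5 is mechanical enough to sketch directly. In the fragment below (our illustration; the 98%/2% values follow the text, while the literal representation of the antecedent is an assumption), a missed antecedent expressed as signal/polarity literals is turned into a bias table for the input generator:

def biases_from_antecedent(literals):
    """literals: e.g. [("a", False), ("b", True), ("c", False)]
    for the missed antecedent (not a) and b and (not c)."""
    bias = {}
    for signal, wanted_true in literals:
        # Skew each input toward the value the antecedent needs.
        bias[signal] = 0.98 if wanted_true else 0.02
    return bias

print(biases_from_antecedent([("a", False), ("b", True), ("c", False)]))
# {'a': 0.02, 'b': 0.98, 'c': 0.02}; inputs not mentioned stay at the neutral 50%.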

Several interesting conclusions can be drawn regarding this algorithm. First, although effort was invested in determining optimal bias numbers exactly, biases that simply allowed a signal to be true (or false) often were sufficient.


Figure 4. Input generation algorithm. There are three components at the interface (a); the algorithm groups each constraint according to which input component that constraint specified (b); for every clock cycle, the algorithm evaluates the antecedent of each constraint, and if it evaluates to true, the corresponding constraint is marked activated (c); the algorithm combines the consequents of the activated constraints to form the final formula, and finds a satisfying solution for it (d).


Empirically, interpreting "often" as 49 out of 50 times (98%) seems to work well. Second, an antecedent expression contains not only interface signal variables but also counter values and other variables that cannot be skewed directly. Just skewing the input variables in the antecedent is primary biasing; dependency analysis produces a more refined, secondary biasing. For example, many hard-to-reach cases are states in which a counter has reached a high value; using dependency analysis, we determined biases that allow a counter to increment frequently without resetting.

Implementing biasing. To find variable assignments that satisfy a Boolean constraint, algorithms often use a binary decision diagram (BDD).5 A constraint can be transformed into a tree-like BDD in which each node corresponds to a variable in the constraint, as Figure 5a shows. A node has two outgoing branches; the algorithm takes the THEN branch if the variable is set to true, and the ELSE branch if false. By traversing this tree, the algorithm will eventually reach one of the two leaf nodes. Terminal node 1 indicates that the choices of variable assignments along the path taken (a = true, b = false, c = false, …) satisfy the constraint, whereas node 0 indicates that the assignment does not. For example, in Figure 5a, the path (a = false, b = true, c = false, f = false, h = false) leads to node 1, so the assignment satisfies the constraint.

The biasing of the input variables occurs during the BDD traversal stage of the input generation. Yuan et al. introduced the basic technique,3 and we modified it. After the algorithm builds the input formula BDD for a component, the algorithm traverses the structure according to the biases. If variable b is biased to be true 49 out of 50 times, the TRUE branch from it is taken 49 out of 50 times (as in Figure 5a). Likewise, if b is biased to be false 49 out of 50 times, the FALSE branch will be taken with that probability.

The input generation algorithm has an extra step to accommodate the biasing. Often, for a single simulation run targeting a specific antecedent, only a few input variables are biased. The variables must be reordered so that the biased variables are at the top of the BDD, and their truth values are not determined by the other variables. In Figure 5b, variable c is intended to be true most of the time. However, because c is buried toward the bottom of the BDD, if {a = 0, b = 1} is chosen, c is forced to be false to satisfy the constraint. In contrast, if c is at the top of the BDD, the true branch can be taken as long as the other variables are set accordingly (for example, a = 1).
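
As an illustration of the traversal idea (our own sketch in Python; the node layout and the reachability fallback are assumptions, not the authors' BDD package), the biased walk picks each node's branch with the variable's bias and falls back to the other branch only when the preferred one cannot reach the 1-terminal:

import random

class Node:
    """Minimal BDD node: either a terminal (0 or 1) or a decision on one variable."""
    def __init__(self, var=None, then=None, els=None, terminal=None):
        self.var, self.then, self.els, self.terminal = var, then, els, terminal

def satisfiable(node):
    # True if the 1-terminal is reachable from this node.
    if node.terminal is not None:
        return node.terminal == 1
    return satisfiable(node.then) or satisfiable(node.els)

def biased_solution(node, bias, assignment=None):
    """Walk the BDD, choosing each variable true with probability bias[var]
    (default 50%), but never entering a subgraph that cannot reach 1.
    Assumes the overall formula is satisfiable (style rule 3)."""
    assignment = {} if assignment is None else assignment
    if node.terminal is not None:
        return assignment
    prefer_true = random.random() < bias.get(node.var, 0.5)
    first, second = (node.then, node.els) if prefer_true else (node.els, node.then)
    if satisfiable(first):
        assignment[node.var] = prefer_true
        return biased_solution(first, bias, assignment)
    assignment[node.var] = not prefer_true
    return biased_solution(second, bias, assignment)

# Example: BDD for (a or b), with b biased to be true 98% of the time.
one, zero = Node(terminal=1), Node(terminal=0)
b_node = Node(var="b", then=one, els=zero)
a_node = Node(var="a", then=one, els=b_node)
print(biased_solution(a_node, {"b": 0.98}))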


Figure 5. A traversal down a biased binary decision diagram (BDD) helps find a solution to a constraint (a); incorrect ordering of the BDD variables will not allow the desired biasing in some cases (b).


Experimental results

To demonstrate our verification methodology on a meaningful design, we chose for verification the I/O component from the Stanford Flash design,6 which is part of the multiprocessor project. The I/O unit, along with the rest of the project, had been extensively debugged, fabricated, and tested, and is part of an operational system. We evaluated the methods on the PCI interface.

The design is described by 8,000 lines of Verilog and contains 283 variables, each ranging from 1 to 32 bits, a complexity that renders straightforward model checking unsuitable.

The setup

We used a formal PCI specification to constrain the inputs and check the outputs at the design's PCI interface. Using a compiler tool written in OCAML (http://caml.inria.fr), we generated, from the specification, a simulation checker that flags PCI protocol violations and an input generator that controls the design's PCI inputs. The I/O unit (the design under verification), checker, and input generator are connected and simulated together, and results are viewed using the Value Change Dump (VCD) file. We skewed the inputs with different biases for each simulation run to produce various extreme environments and stress the I/O unit.

Verification results

Using the 70 assertions provided by the interface specification, we found nine previously unreported bugs in the I/O unit. Most resulted from incorrect state machine design. For example, one bug manifested itself by violating the protocol constraint, "once trdy has been asserted, it must stay asserted until the completion of a data phase." Because of an incorrect path in the state machine, in some cases the design would assert trdy and then, before the completion of the data phase, deassert trdy. This can deadlock the bus if the counterparty waits indefinitely for the assertion of trdy. We easily corrected the bug by removing the problematic, and most likely unintended, path.

The setup makes the verification process far easier; the process of finding signal-level bugs is now nearly automated, so most of the effort can focus on reasoning about a bug once it is found.

Coverage results

Because the Flash PCI design is conservative and implements a small subset of the specification, basing the coverage metric on the component's specification was not especially useful. For example, the design initiates only single data-phase transactions, never multiple data-phase transactions. Or, instead of having the flexibility to respond with any of the three termination modes, it always responds with the same mode. Thus, most of the antecedents remained false because the component never performed many of the actions allowed by the specification.

However, we found the coverage metric far more powerful when we used it to ensure that the environment is as flexible as possible. The goal is to ensure that the design is compatible with any component that complies with the interface protocol. The design should be stimulated with the most general set of inputs, and so using the antecedents from the constraints that specify the environment (in Figure 6, a0, a1, and so on) to determine biases was extremely fruitful; most of the design bugs were unearthed with these biases.

Performance results

Performance issues, such as speed and memory use, did not pose problems, so we were free to focus on generating interesting simulation inputs. However, to demonstrate the method's scalability for larger designs, we tabulated performance results. We ran the simulations on a four-processor Sun UltraSparc-II 296-MHz system with 1.28 Gbytes of main memory.


Figure 6. The metric using the antecedents of the environment proved more useful than the metric using the antecedents of the design under verification.


The specification provided 63 constraints to model the environment. The BDDs used for signal generation were small; the peak number of nodes during simulation was 193, and the peak amount of memory used was 4 Mbytes.

Furthermore, speed was only slightly sacrificed to achieve this space efficiency. Table 1 lists the execution times for different settings. With no constraint solving, where inputs are randomly set, the simulation takes 0.64 seconds for 12,000 simulator time steps. If we used the input generator, the execution time increased by 57% to 1.00 second. Such an increase in time was not debilitating, and the inputs were guaranteed to be correct. Table 1 also indicates how progressively adding signal value dumps, a correctness checker module, and coverage monitor modules increased the execution time.

Table 1. Time performance of the methodology in the Flash example (for 12,000 simulator time steps).

Setting          User time (s)    System time (s)    Total time (s)
Random           0.53             0.11               0.64
Constrained      0.77             0.23               1.00
With dump        0.77             0.26               1.03
With monitor     1.33             0.29               1.62
With coverage    1.54             0.25               1.79

EXPERIMENTS to determine whether our input generation algorithm can handle designs too large for SimGen-type algorithms would further validate our methodology's memory efficiency. More extensive experiments to quantify the speed penalty for building the BDDs dynamically would also be useful.

Acknowledgment

This research was supported by GSRC contract SA220623106PG2.

References
1. K. Shimizu, D.L. Dill, and A.J. Hu, "Monitor-Based Formal Specification of PCI," Proc. Int'l Conf. Formal Methods in Computer-Aided Design (FMCAD 00), Lecture Notes in Computer Science, vol. 1954, Springer-Verlag, Berlin, 2000, pp. 335-353.
2. K. Shimizu, D.L. Dill, and C.T. Chou, "A Specification Methodology by a Collection of Compact Properties as Applied to the Intel Itanium Processor Bus Protocol," Proc. 11th Advanced Research Working Conf. Correct Hardware Design and Verification Methods (CHARME 01), Lecture Notes in Computer Science, vol. 2144, Springer-Verlag, Berlin, 2001, pp. 340-354.
3. J. Yuan et al., "Modeling Design Constraints and Biasing in Simulation Using BDDs," Proc. Int'l Conf. Computer-Aided Design (ICCAD 99), ACM Press, New York, 1999, pp. 584-589.
4. E. Clarke and E. Emerson, "Synthesis of Synchronization Skeletons for Branching Time Temporal Logic," Proc. Workshop Logic of Programs, Lecture Notes in Computer Science, vol. 131, Springer-Verlag, Berlin, 1981, pp. 52-71.
5. R. Bryant, "Graph-Based Algorithms for Boolean Function Manipulation," IEEE Trans. Computers, vol. 35, no. 8, Aug. 1986, pp. 677-691.
6. J. Kuskin et al., "The Stanford FLASH Multiprocessor," Proc. Int'l Symp. Computer Architecture, ACM Press, New York, 1994, pp. 302-313.

Kanna Shimizu is a PhD candidate in the Computer Systems Laboratory at Stanford University. Her research interests include practical formal verification and specification methodologies. Shimizu has a BS in electrical engineering from the California Institute of Technology and an MSc in computation from the University of Oxford.

David L. Dill is a professor of computer science at Stanford University. His research interests include the theory and application of formal verification techniques to system designs, including hardware, protocols, and software. Dill has an SB in electrical engineering and computer science from the Massachusetts Institute of Technology, and an MS and a PhD in computer science from Carnegie Mellon University.

Direct questions and comments about this article to Kanna Shimizu, Gates Hall 352, Computer Systems Laboratory, Stanford University, Stanford, CA 94305-9030; [email protected].

