An historical perspective of software vulnerability management

Andy Gray
Chief technology officer, Icons

Andrew Gray is chief technology officer for Icons and a partner in the firm. Gray combines an impressive technical knowledge of the past with a strong insight into the future to lead Icons’ technical direction, research and growth. Gray joined Icons from The Open Group Research Institute, where he was instrumental in their early public key infrastructure development. During his time at the institute he conducted technical risk assessments of the use of public key infrastructures with the US Banker’s Round Table, co-authoring the paper ‘The Global Security Architecture Project: Toward A Blueprint for Safe and Secure Electronic Commerce over the Public Internet’ that was presented to the White House. He also completed significant research on the issues of network survivability. His early research efforts at The Ohio State University applied information security principles to workflow architecture. Gray holds a BA in Philosophy from Bucknell University.
Abstract
The intent of this article is to provide the
reader with an historical perspective of
software vulnerability assessment. This
historical overview will examine the lessons
learned from the periods of formal
approaches as applied to system
certification and validation; to the periods
where ‘simplistic’ tools are introduced to
perform the tasks of vulnerability
assessment; then to an overview of a
macroscopic approach for studying the
fundamental output of the complex
nonlinear system known as software
development; and finally to the present,
where state-of-the-art tools and
methodologies are beginning to apply
principles of formal methods to the
evaluation of software. The events and
lessons learned from each of these periods
will become evident to the reader,
concluding with a requirement set and an
outline for moving vulnerability analysis
into the future.
“The fundamental problem of
communication is that of reproducing at
one point either exactly or approximately a
message selected at another point.
Frequently the messages have meaning; that
is, they refer to or are correlated according
to some system with certain physical or
conceptual entities. These semantic aspects
of communication are irrelevant to the
engineering problem. The significant aspect
is that the actual message is one selected
from a set of possible messages. The system
must be designed to operate for each
possible selection, not just the one which
will actually be chosen since this is
unknown at the time of design.”
C. E. Shannon, A Mathematical Theory
of Communication, 1948 [Shannon48]
1. Introduction

The failure of system architects and
application developers to understand
Shannon’s last sentence has led to the
emergence of two of the greatest evils
within software engineering today: the ‘bug’
and the ‘exploit’. Locating and
exterminating bugs in software has become
a big business, as has the associated
‘detection or prevention’ of the exploit.
The focus of this article is to outline for
the reader a timeline of vulnerability
analysis efforts framed by an understanding
of a generic taxonomy. We shall see how
the evolution of a formal certification and
validation process for software systems
transforms into the realm known today as
‘software vulnerability analysis’. It is no
coincidence that Bell, LaPadula, and
Denning are still referred to today as gospel.
Concepts that they introduced with the
‘formal’ system certification methodologies
are passed on to the future as underlying
tenets of understanding for today’s newest
testing and analysis techniques. Research
and tool development throughout the 1990s
provides numerous approaches for locating
and identifying vulnerabilities. Basic manual
and automatic syntax checking methods,
and automatic vulnerability scanning
techniques that search for software issues
known a priori, are assumed to be
understood by the reader. We will focus on
more recent advances surrounding
environment manipulation. We will
conclude with the state-of-the-art today,
where the formal method and the actual
tool capability are converging into a useful
state. (This is not to say that formal
methods in and of themselves were never
useful.)
In 1973, with a relatively innocuous note
within the Communications of the ACM
[Lampson73], Butler Lampson founded,
perhaps inadvertently, an entire arena of
information security. His note on the
‘confinement problem’1 and the
‘constraining’ of a service, such that it
cannot transmit information to anyone
other than its caller, initiated a rash of work
focused on ensuring the ‘security’ and
‘trustworthiness’ of a system. While much
of this early work was focused specifically
on the location and elimination of ‘covert
channels’, the focus quickly transformed
into the formal application of classical
information theory to problems of
information security [Cohen77]. Much of
this early work, rooted in formal methods,
while maybe not as readily applicable today
to the vast majority of software engineering
efforts, does maintain and push forward
fundamental information security analysis
paradigms.
With the ‘Morris worm’ of 1988
[Rochlis89], the computing community
realized the devastating effects of a ‘buffer
overflow’ [Aleph96]. To this day we still
experience the same threat (MSBlaster
worm) and the same programming mistakes
are still made. However, newer approaches
to software vulnerability assessment are
emerging. Even more encouraging is the fact
that vulnerability management is playing a
vanguard role in prevention. Perhaps we
simply realized that we would never get it
right? However, we could also be keeping
our heads stuck in the sands of apathy
[Cowan03]. Until architects and developers
realize the importance of ‘each possible
selection’ by using techniques and means
that secure programmatic input and output
with sound algorithms and computation,
there will always be a need for software
vulnerability analysis.
2. Defining a vulnerability taxonomy
Much of the earlier work on defining
software flaw and vulnerability taxonomies
[Landwehr94, Bishop86, Bishop95] focused
on the ‘how, when and where’, but not on
the ‘why’. Taxonomies tend to define
arbitrary sets of classes in which the whole
world of a certain type of object can fit.
Landwehr’s and Bishop’s taxonomies can
be used as is, or even extended, as a solid
basis for a descriptive effort. Only recently
was work introduced on defining the
impact side of a vulnerability [Wang03] by
associating security risks with software
quality factors. A buffer overflow in a C program, for example, directly affects the ‘correctness’ of a bank account balance, which in turn may affect the reputation of the bank.
Auditors and others infatuated with the
world of metrics and actuarial sciences
love to analyze. One of the basic ideas
behind a taxonomy is that a set of
instances over time can be aggregated so
that underlying issues can be identified. If, for example, this buffer overflow was the 30th discovered within the last month, wouldn’t that indicate that the particular software developer needs training? Analyses such as this
are usually used to identify root causes of
‘risk’. However, do analyses like this really
identify the absolute root of the
introduction of the potential vulnerability?
If one really thinks about it, was the potential for the vulnerability introduced when the decision was made to use the inherently unsafe C language as opposed to a more type-safe language such as Java? Without approaching Hegelian2 levels, the implementation and modeling of this decision-theoretic view within a relatively static taxonomy is the foundation for future work.
The work of Landwehr, Bishop, and
Wang can be combined and further
extended into a taxonomical framework
suitable for use within the remainder of this
article, as follows.
1. Imagine an attacker with access
to a computing system that controls
the launch of a nuclear missile.
Would he be able to gain knowledge
of an imminent launch by observing
a spike in CPU usage?
2. Georg Wilhelm Friedrich Hegel
— German philosopher, 1770–1831.
One might consider Hegel as the
prototypical compulsive taxonomist
in his attempt to classify
‘everything’.
A taxonomy of a computer program flaw:
Genesis (Landwehr et al) implies origin,
either intentional or inadvertent (see Table
1). Is the flaw an ‘intentional non-malicious
timing-based covert channel’ or an
‘inadvertent validation error’?
Time of introduction (Landwehr et al)
can be during design, development,
maintenance or operation.
Location (Landwehr et al), as originally
proposed by Landwehr, encompasses
hardware and software (operating system,
support applications or the application).
With the ubiquity of interconnected
applications, the ‘network’ must now be
added.
Execution environment (Bishop) defines
files used, system calls, execution modes
etc.
Quality impact (Wang et al) binds the
flaw to specific risks within the logical
system. Is, for example, an illicit transfer of
funds possible?
Method of discovery can encompass
formal method procedures with theorem
proving languages, either manual or
automatic source code analysis, traditional
vulnerability assessment tools (Nessus etc),
application instrumentation etc.
Threat and exploitation scenarios can
include, for example, denial of service,
arbitrary code execution etc, and
furthermore indicates likelihood and
potential motive.
Monitoring and exploitation detection
scenarios may be as easily implemented as a
signature fed to an intrusion detection
system or as difficult to detect as a passive
observer timing communication channels.
Table 1: A taxonomy of a computer program flaw (Genesis).

Genesis
  Intentional
    Malicious
      Trojan Horse (Non-Replicating or Replicating)
      Trapdoor
      Logic/Time Bomb
    Non-Malicious
      Covert Channel (Storage or Timing)
      Other
  Inadvertent
    Validation Error
    Domain Error (Including Object Re-Use, Residuals, and Exposed Representation Errors)
    Serialization/Aliasing (Including TOCTTOU errors)
    Identification/Authentication Inadequate
    Boundary Condition Violation (Including Resource Exhaustion and Violable Constraint Errors)
    Other Exploitable Logic Error
Limitation and remediation scenarios,
for example, might be an access control list
on a router, or conversely may be
impossible.
Elimination methods may be as simple as adding the line of code that performs proper bounds-checking, or as difficult as re-architecting the system.
This taxonomy means different things to
many different people. The president of the
bank will not care about the lines of source
code where the flaw exists (location),
preferring to view only the analysis of the
bottom-line — ‘software quality impact’.
The auditor of the bank will focus on
limitation and elimination steps, whereas a
software quality assurance expert will need
information on all 10 classes. Within this
vulnerability taxonomy are several classes
that will be important for the remainder of
this article. The phrase ‘software vulnerability management’ speaks directly to the validity of several of these taxonomy classes. Threat and exploitation scenarios, for example, can encompass ‘denial of service’, ‘arbitrary code execution’, information leakage etc.
Simply thinking of these three subclasses
leads to the premise that it is necessary to
examine the possibility of the existence of
each within a software vulnerability
assessment. As will be seen in the remainder
of this paper, no one discovery or
assessment methodology encompasses all
potential vulnerabilities.
3. Software vulnerability management
The traditional steps of vulnerability management derive from early analytical efforts focused on the certification and validation of systems.3
The formal methods of Bell, LaPadula and
others in early trusted system evaluation
[Bell75, Neumann76] focused on
constraining and tightly defining the rules
to minimize the risks of flaw introduction.
By tightly defining and subsequently
proving and verifying the security attributes
associated with flows of execution between
the modes of operation (e.g. user-mode,
kernel-mode, suid, ruid, etc), attempts can
be made to ‘certify’ that a system is secure.
“Computer system security relies in part
on information flow control, that is, on
methods of regulating the dissemination of
information among objects throughout the
system. An information flow policy
specifies a set of security classes for
information, a flow relation defining
permissible flows among these classes, and
a method of binding each storage object to
some class. An operation, or series of
operations, that uses the value of some
object, say x, to derive a value for another,
say y, causes a flow from x to y. This flow is
admissible in the given flow policy only if
the security class of x flows into the
security class of y.”
[Denning77]
Often cited, but seldom read, Bell,
LaPadula, Denning and others provided
many of the fundamental tenets of today’s
information security — access control,
multi-level security, file permissions, and so
on. These models typically targeted military classification requirements, where security levels are ordered and compared together with their categorical subsets:

<top-secret, (NASA, NATO)> “dominates” <secret, (NATO)>

and flows are governed by abstract rules: no ‘read up’, no ‘write down’. (In other words, one can only read a file if the subject dominates or equals the target object.)
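To make the dominance relation concrete, the following is a minimal sketch (my own illustration, representing a label as a level plus a category bitmask; this is not code from the certification literature) of the ‘no read up’ check in C:

#include <stdbool.h>
#include <stdio.h>

enum level { UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET };

struct label {
    enum level level;
    unsigned categories;                /* bitmask, e.g. NASA=1, NATO=2 */
};

/* a subject dominates an object iff its level is at least as high and
   its category set is a superset of the object's */
static bool dominates(struct label subject, struct label object)
{
    return subject.level >= object.level &&
           (subject.categories & object.categories) == object.categories;
}

int main(void)
{
    const unsigned NASA = 1u, NATO = 2u;
    struct label subject = { TOP_SECRET, NASA | NATO }; /* <top-secret,(NASA,NATO)> */
    struct label object  = { SECRET, NATO };            /* <secret,(NATO)>          */

    /* 'no read up': a read is permitted only if subject dominates object */
    printf("read permitted: %s\n", dominates(subject, object) ? "yes" : "no");
    return 0;
}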
3. This is not to say that ‘post-implementation analyses’ were not performed in the sense of a vulnerability assessment, as in the case of the MULTICS vulnerability assessment [Karger74].
Early certification and validation models
focus on an attempt to define and
determine the security of information under
a particular flow model. It is argued that
the flow model can be represented by a uni-
directional lattice [Denning76] that decides
if a subject has permission to read from and
write to a particular object. Implicit information passing is the threat of, for example, executing a statement conditionally on the value of some piece of secret data and then allowing someone who should not have rights to that data to discover it by testing whether the statement has been executed. Bell and LaPadula proved formally that the ‘no read up, no write down’ paradigm was not violated in the US Air Force’s use of the MULTICS kernel [Bell75].
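The implicit information passing described above is worth seeing in code. The following is a minimal sketch (my own illustration) of a conditional statement leaking one bit of a secret with no direct assignment from high to low:

#include <stdio.h>

int main(void)
{
    int secret = 1;        /* high: e.g. <secret,(NATO)>            */
    int leaked = 0;        /* low: observable by anyone              */

    if (secret)            /* the branch condition depends on high data... */
        leaked = 1;        /* ...so this assignment causes a flow          */

    /* a low observer learns the secret merely by testing whether the
       conditional statement was executed */
    printf("observed: %d\n", leaked);
    return 0;
}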
A potential flaw in the ideas presented in these models is that they sometimes seem to push for certification as a means of guaranteeing security. All certification really means is that the threat scenario, as modeled, is averted with a high degree of confidence. Although this is clearly
confidence. Although this is clearly
possible, it is expensive and error-prone. In
the end, it would require humans to
evaluate code that they did not write and
therefore probably won’t understand
completely. Regarding other threat scenarios
contained within the taxonomical class
Threat and exploitation scenarios, for
example denial of service, the models need
more support.
One aspect of all the early work that is
evident is that each focused on specific
times within the development and
subsequent instantiation of the system.
Denning, for example, was interested in risk
alleviation during the design and
implementation phase with an idea for a
formal check prior to implementation.
Today we are faced with the reality of post-
implementation detection of both known
and unknown software vulnerabilities.
Granted, early work directed by the US
Department of Defense was targeted at one
fundamental tenet, namely ‘keeping secret
information secret’. The implementors also
had the advantage of working in a highly
structured waterfall software development
environment. In today’s environment, where
functionality and release schedules govern
software development, security takes a back
seat.
Few, if any, of these early models actually
provided a basic methodology for
interfacing into a ‘business risk’ evaluation
capability required today. Requiring
extremely skilled practitioners, these
approaches are time consuming [Walker79],
potentially cost prohibitive and, in today’s
day and age, one might think next to
impossible to model. Formal methods can
and should be used selectively within a
system, especially where complex behaviors
need to be analyzed rather than just
reviewed when dealing with aspects such as
safety-related software requirements
[Rushby95]. We will however revisit the
more formal approach to software
vulnerability assessment after we examine
briefly what can be characterized as our
scrambling of recent history.
4. Moving forward with the “ringing” of MULTICS4 in our ears…
With the Morris worm that was
introduced on 2 November 1988, which
supposedly infected 10% of the hosts on the
Internet, came a universal eye-opener about
the implicit dangers of the Internet.
Propagating through its use of
vulnerabilities in Sendmail and fingerd, the
worm introduced the dangers of one of the
most insidious software vulnerabilities —
the ‘buffer overflow’ or stack overflow, and
its close cousin the heap overflow. A buffer
overflow in a computing program occurs when the program either fails to allocate enough memory for an input array or fails to test whether the length of the array lies within its valid range. Consequences range from ‘nothing happens’ to the annoying ‘An application error has occurred’ dialog. A malicious entity can exploit such a weakness by submitting an extra-long input to the program, designed to overflow its allocated input buffer (temporary storage area) and modify the values of nearby variables, cause the program to jump to unintended places, or even replace the program’s instructions with arbitrary code, by overwriting the EIP register, for example.
4. Admittedly, it’s a bad pun.

The location of the buffer overflow in the following code snippet can easily be identified:

void my_routine(char *uname)
{
    char a_buffer[5];
    strcpy(a_buffer, uname);
}

int main(int argc, char *argv[])
{
    char *user_name = "AndrewGray";
    /* ....(do stuff)... */
    my_routine(user_name);
}

A copy of a 10 byte array is attempted into a five byte buffer — in essence a buffer overflow. Although this is a very simple example, it is a very common error. Considering multi-threaded programs with numerous potential paths of execution, the problem of locating these errors is exacerbated, as one flow may perform the correct bounds-checking operation while another may not. We will revisit a very elegant approach to this problem at the end of this article with an examination of MOPS.
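As a sketch of the bounds-checking repair alluded to in Section 2 (one common pattern, assuming silent truncation of over-long input is acceptable), my_routine can be hardened as follows:

#include <string.h>

void my_routine(char *uname)
{
    char a_buffer[5];
    /* copy at most sizeof(a_buffer) - 1 bytes and terminate explicitly */
    strncpy(a_buffer, uname, sizeof a_buffer - 1);
    a_buffer[sizeof a_buffer - 1] = '\0';
}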
This bug, however, is a little more subtle:

…..
/* at this point in the program, the current working directory is "/var/ftp" */
chroot("/var/ftp/pub/");
filename = read_file_from_network();
fd = open(filename, O_RDONLY);
…..

([Wagner00])
I will refer to several of these examples later in this article in order to describe and exhibit the limitations of various models and software vulnerability analysis techniques. The flaw5 in the second program should be evident to any experienced C programmer, yet various testing techniques and methodologies will, with varying degrees of certainty, indicate the presence of the error.

5. The flaw in the program is that the chroot() system call, used in an attempt to create a jailed environment, was not immediately followed by a chdir() (change directory) call to that directory. In a production system this flaw could feasibly result in the reading of any file on the system.
5. Instrumenting software for security analysis
Simply put, there are two distinct approaches to a vulnerability analysis of an application, each requiring different techniques. You either have the source code (white) or you do not (black). Little time will be spent here on traditional white-box source code auditing or black-box ‘vulnerability scanning’ and analysis techniques, as much work has already been done on them. The most thorough examination will always fall short of ideal. These analyses will always be governed by time, money, the inherent limitations of tools, and the examiners’ ultimate knowledge of the language and infrastructure.
Supplementing traditional software
penetration testing, an interesting technique
that is gaining favor within the software
security assessment community deals with
manipulation of the environment using
instrumentation. Instrumenting software in
the application security sense can be
thought of as adding tools such that input
and output of a process, shared library or
file system, among others, can be modified
or manipulated dynamically. Subsequent
return conditions can then be analyzed and
further manipulated.
Instrumentation of source code to locate
buffer overflows [Ghosh98a, Ghosh98b] can
be thought of simplistically to proceed as
follows:
1. Automatically scan the source code to
locate each function call.
2. Beginning with the innermost function call, modify the value of the character array to pass it a very large buffer.
3. Examine the return.
4. Repeat steps 2 and 3 moving
successively higher in the call stack.
In the buffer overflow example above,
instrumentation of strcpy() would be
followed by instrumentation of
my_routine(), ending with instrumentation
of main(). This technique is not only
applicable to C, but can also be used with
other languages in order to locate
programmatic flaws and inadequate error
handling routines.
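As a concrete illustration of steps 2 and 3, the following is a minimal sketch (my own harness, assuming a POSIX environment; this is not the tooling of [Ghosh98a]) that drives the suspect call with an oversized input in a child process and classifies the outcome:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* the function under test, from the example in Section 4 */
static void my_routine(char *uname)
{
    char a_buffer[5];
    strcpy(a_buffer, uname);            /* the suspect innermost call */
}

int main(void)
{
    static char huge[4096];             /* step 2: oversized injected input */
    memset(huge, 'A', sizeof huge - 1);
    huge[sizeof huge - 1] = '\0';

    pid_t pid = fork();
    if (pid < 0)
        return 1;
    if (pid == 0) {                     /* child drives the call...           */
        my_routine(huge);
        _exit(0);                       /* ...and exits cleanly if it survives */
    }

    int status;                         /* step 3: examine the return */
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
        printf("crash (signal %d): probable overflow\n", WTERMSIG(status));
    else
        printf("no crash observed\n");
    return 0;
}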
What may be surprising to some readers is that both white- and black-box testing scenarios are open to instrumentation. In the white-box case the technique is readily apparent: add the requisite instrumentation code inline with the program code, or redefine function calls to invoke an instrumented implementation of the function. The black-box environment, however, accepts instrumentation concepts just as easily. One of the
instrumentation concepts. One of the
approaches to working solely with the
application, library or object code involves
a technique known as ‘binary wrapping’
[Cargille92]. With binary wrapping, calls into
an executable or library can easily be
intercepted and modified by manipulating
epilogue and prologue procedures and their
associated entry and exit points within
symbol tables — in essence, we redefine an
exported function to call our own. Other techniques are as simple as using the age-old trick of symbolically linking a critical file to /etc/passwd, or merely disconnecting the network.
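On ELF systems a closely related interposition trick requires no editing of the binary at all. The following is a minimal sketch (my own illustration using the dynamic loader’s LD_PRELOAD mechanism, assuming glibc; this is not [Cargille92]’s object-code rewriting) that intercepts every call to an exported function:

/* wrap.c: compile with gcc -shared -fPIC wrap.c -o wrap.so -ldl and
   run the target unmodified as LD_PRELOAD=./wrap.so ./target */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <string.h>

/* our definition shadows the exported libc symbol */
char *strcpy(char *dst, const char *src)
{
    static char *(*real_strcpy)(char *, const char *);
    if (!real_strcpy)                   /* look up the genuine implementation */
        real_strcpy = (char *(*)(char *, const char *))
                      dlsym(RTLD_NEXT, "strcpy");
    fprintf(stderr, "intercepted strcpy of %zu bytes\n", strlen(src) + 1);
    return real_strcpy(dst, src);       /* forward (or modify) the call */
}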
One of the promising aspects of the instrumentation techniques that have been proposed is their grounding in real-world results. Early work in software and application instrumentation was performed at the University of Wisconsin using a technique called ‘fuzzing’ [Miller90, Miller95, Forr00].
The studies in 1990 and 1995 examined
UNIX and its underlying utilities while the
study in 2000 focused on Windows 95 and
NT. The results were surprising in several
senses. Using instrumentation techniques that injected random input into numerous UNIX utilities (e.g. lex, spell, ftp), Miller was able to crash or hang up to 43% of the native utility programs on a NeXT system (NEXTSTEP 3.2) in the 1995 study. More common operating systems of the time, such as SunOS, Solaris, HPUX and AIX, all fared better but were still in the 18–23% range. While these figures are slightly lower than those in the 1990 study, they remain significant. Maybe not so surprising is that only 9% of the utilities on a Slackware Linux system (v2.1.0) crashed or hung. Miller was able to classify five overarching causes of the problems:
• Incorrect management of pointers and
arrays.
• Dangerous input functions.
Vulnerability Assessment
40 Information Security Technical Report. Vol. 8, No. 4
ISTR 0804.qxd 05/12/2003 12:39 Page 40
• Improper management of signed
characters.
• Division by zero.
• Improper end-of-file checks.
More importantly, Miller demonstrates
the efficacy of this technique in the
discovery of software vulnerabilities. Even
though these utilities are all non-suid
utilities, one can only imagine how many
are running as root (or suid) in custom
processes across the world.
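In its simplest form the technique is almost trivially easy to reproduce. The following is a minimal sketch (my own illustration, not the Wisconsin fuzz tool) that feeds a stream of random bytes to a utility’s standard input and reports how the process ends:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char *argv[])
{
    const char *target = (argc > 1) ? argv[1] : "spell";
    srand((unsigned)time(NULL));

    FILE *p = popen(target, "w");       /* the target reads our output as stdin */
    if (!p) { perror("popen"); return 1; }

    for (int i = 0; i < 100000; i++)    /* inject random bytes */
        fputc(rand() & 0xff, p);

    int status = pclose(p);             /* observe how the target terminated */
    printf("%s exited with status %d\n", target, status);
    return 0;
}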
Miller’s work in 2000 is, however, much more significant in demonstrating the importance of random injection. The results pointed not only to a ‘bug’ but to a serious design flaw in Microsoft’s Win32 API [Paget02]. Microsoft’s operating system relies on a queue-based ‘messaging’ infrastructure to communicate information between applications. For example, when you click on a link, the operating system sends a mouse-event message to that window. When the message queues of running windows awaiting such notifications were subjected to random input, 96% of the Windows applications tested crashed. Two years later, exploitation of this flaw was demonstrated on BugTraq [Bugtraq02]. The fundamental design flaw is that all windows are considered peers, with no access control. In essence, this manifests as a dramatic failure of the Bell and LaPadula ‘no read up’ rule, with a ‘write up’ capability.
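Reproducing the experiment is straightforward. The following is a minimal sketch (my own illustration, assuming a Windows toolchain; the target window title is merely an example) that posts random messages to another application’s window, which, as noted above, no access control prevents:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* find some other application's top-level window by its title */
    HWND hwnd = FindWindowA(NULL, "Untitled - Notepad");
    if (!hwnd) {
        fprintf(stderr, "target window not found\n");
        return 1;
    }
    /* any process may post to any window: the design flaw noted above */
    for (int i = 0; i < 100000; i++)
        PostMessageA(hwnd, rand() & 0xffff, rand(), rand());
    return 0;
}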
Further efforts [Du98, Du00, Thompson02] within the software security instrumentation arena are very promising. As opposed to an informal approach with random stream input into a limited set of applications and functions, Wenliang Du of Purdue’s COAST Laboratory has moved this arena forward by defining a ‘fault-injection’ environment [Du98, Du00] and drawing a distinction between what is internal and what is external to a process or program. Whereas
‘user input’ is internal to the process, a
symbolic link within the file system is
external. By observing historical causes and
locations of software flaws, appropriate
targets for instrumentation and fault
injection can be postulated. Even more
importantly, specifically crafted input can
be used with the injection. Thinking back
to our simplistic buffer overflow, if
user_name had been introduced via
command arguments, then both the
techniques of Du and Miller could catch the
overflow. However, only Du’s technique can
guarantee its detection since ‘user port’ is a
component of his fault injection model.
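As an illustration of perturbing something external to the process, the following minimal sketch (my own, assuming POSIX; ./target and input.dat are hypothetical names, and this is not Du’s actual tooling) replaces a data file the target expects with a symbolic link before launching it:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* external perturbation: the target expects ./input.dat to be an
       ordinary data file under its control */
    unlink("input.dat");
    if (symlink("/etc/passwd", "input.dat") != 0) {
        perror("symlink");
        return 1;
    }
    /* launch the target and observe the consequences */
    int status = system("./target input.dat");
    printf("target exited with status %d\n", status);
    return 0;
}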
6. Moving instrumentation to the web
Perhaps the most appropriate use of
instrumentation is one where re-entrance is
a fundamental problem, i.e. the World Wide
Web. While maintaining concurrency and
safe re-entrant paradigms has been
fundamental for distributed and also multi-
threaded applications, the web provides not
only a cornerstone but the entire house in
which the problem resides. The underlying
issue of state maintenance within web
applications has given rise to a host of new
applications and software threats. A simple
search on ‘SQL-Injection’ will return tens of
thousands of web pages. The following is
an example of a common login script for an
ASP-based Microsoft SQL Server
application that is all too often presented to
developers as the correct way:
ASP code:

sql = "SELECT * from users where username='" & request.form("username") & "' and password='" & request.form("password") & "'"
By passing SQL statements as the username and password, an attacker can easily break into the application as follows6:

Username: admin '--
Password: junk

6. The '-- effectively comments out the second half of the SQL query, leaving sql = "SELECT * from users where username='" & request.form("username") & "'" as the query string. Other attacks deploy a variety of techniques, ranging from error return analysis to the triggering of stored procedures (especially targeting xp_cmdshell() and family within MS SQL Server).
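The standard remediation is to bind user input as data rather than concatenating it into the query text. The following is a minimal sketch of the principle (my own illustration using the SQLite C API, since C is this article’s working language; the same parameterized-query approach applies to the ASP/MS SQL Server stack above):

#include <sqlite3.h>

/* returns 1 if a matching row exists, 0 otherwise, -1 on error */
int check_login(sqlite3 *db, const char *user, const char *pass)
{
    sqlite3_stmt *stmt;
    const char *sql =
        "SELECT * FROM users WHERE username = ? AND password = ?";

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
        return -1;

    /* user input is bound as data; it can never alter the query text */
    sqlite3_bind_text(stmt, 1, user, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 2, pass, -1, SQLITE_TRANSIENT);

    int found = (sqlite3_step(stmt) == SQLITE_ROW);
    sqlite3_finalize(stmt);
    return found;
}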
Traditional software analysis activities, within either a black or white environment, will always look for this type of entry into applications by following the standard pattern: locate a potential entry, test it, and hopefully penetrate. For web-based applications, this implies form field ‘crawling’ in the case of SQL-Injection, or potentially HTTP header manipulation. Several commercial products, from Sanctum and AppSecInc for example, have automated this technique, and an open source capability [Spike03] is available for the examination of web server returns following ‘fuzzing’ of form field inputs. In addition, recent research advances [Huang03] have added better behavior analysis techniques for analysing returns, leading towards a self-learning knowledge base.
Almost all practitioners of web application assessments use SQL-injection as a test. Fuzzing techniques are applied to web form fields in an attempt to gain entry to the database or application. What rarely occurs, however, is an examination of the backend applications and environment. Imagine the surprise of the analyst using an ancillary tool, tasked with creating a demographic profile of the customer base, when he discovers that I really live in “ ‘EXECUTE xp_cmdshell....”.
7. Current work and conclusions
Giving hope for the future is the current
work being done by David Wagner and
others at the University of California at
Berkeley. Using a more traditional engineering approach, as opposed to the ‘testing’ approach, Wagner et al not only submit a formal proof to the world but actually provide software that focuses on ‘detecting violations of ordering constraints, also known as temporal safety properties’ [Wagner02]. The chroot/chdir bug exhibited earlier can be captured by a very simple safety property: chroot must be followed immediately by a chdir call. Keeping this in mind, Wagner’s MOPS not only attempts to locate instances of this programming flaw but, more importantly, can be used to formally prove their absence within the system. Unique to MOPS is the element of ‘multiplication’, allowing complex security models to be built from simple components:
{process privilege} × {risky system calls} = “the property that a process should not make risky system calls while in the privileged state”
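Each such property is, in effect, a small finite state automaton over the program’s possible traces; MOPS model-checks the automaton against every execution path of the program. The following minimal sketch (my own illustration, not MOPS itself) merely runs the chroot/chdir automaton over a single concrete trace to show the shape of the property:

#include <stdio.h>
#include <string.h>

enum state { SAFE, AFTER_CHROOT, VIOLATED };

static enum state step(enum state s, const char *call)
{
    switch (s) {
    case SAFE:
        return strcmp(call, "chroot") == 0 ? AFTER_CHROOT : SAFE;
    case AFTER_CHROOT:                  /* only an immediate chdir is acceptable */
        return strcmp(call, "chdir") == 0 ? SAFE : VIOLATED;
    default:
        return VIOLATED;                /* violations are absorbing */
    }
}

int main(void)
{
    /* a trace corresponding to the flawed ftp example of Section 4 */
    const char *trace[] = { "chroot", "open", "read" };
    enum state s = SAFE;
    for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++)
        s = step(s, trace[i]);
    printf(s == VIOLATED ? "property violated\n" : "property holds\n");
    return 0;
}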
Not only does Wagner model these security-relevant properties; the software exists both to construct the components and to prove the non-existence of violations within an application7.

7. http://www.cs.berkeley.edu/~daw/mops/
Much of the previous discussion within
this article has focused on the C
programming language. Today’s
environment demands that these same
techniques are applied to other languages
and technologies as well. Mentioning the
word ‘application’ today quickly brings to
mind the idea of a web browser interfacing
directly with a server in order to perform
some task. Fortunately for us all, the C
language did not persist as the language of
choice from its early days within NCSA’s
HTTPD project. We have introduced new
languages and technologies that demand
different tools and approaches. We have
moved well beyond the centralized
computing environment where the trusted
computing base can easily be identified and
maintained to a heterogeneous environment
fraught with illicit trust relationships and
insecure networks. Truth-positive methods
for certifying and validating security are
impossible in today’s world, and we are left
with the requirement of ‘assessing’ security.
It is now 15 years since the Morris worm, yet only now are we seeing the introduction of technology components that allow us to protect ourselves from ourselves. StackGuard, with its associated modifications to the gcc compiler, and Microsoft’s subsequent adoption of similar (the same?) technology, provide that fundamental protection. Recent advances by Wagner
et al provide hope for the future of the C
language and those who use it. How long
will it take for this paradigm of protection
to move into the compiler space? Most
importantly, how long will it be until these
techniques move into other languages and
their associated infrastructures?
References

[Aleph96] Aleph One. Smashing the stack for fun and profit. Phrack 49, 14, November 1996. Available from
http://www.phrack.com.
[Bell75] D. Bell and L. LaPadula, 1975. Secure Computer
Systems: Unified Exposition & Multics Interpretation,
Technical Report NTIS AD-A023588, MITRE Corp, July
1975.
[Bishop86] M. Bishop, 1986. Analyzing the Security of an
Existing Computer System, 1986 Proceedings of the Fall
Joint Computer Conference pp. 1115–1119 (November
1986).
[Bishop95] M. Bishop, 1995. A Taxonomy of UNIX
System and Network Vulnerabilities, Technical Report,
Department of Computer Science, University of
California at Davis, May 1995.
[Bugtraq02]
http://www.securityfocus.com/bid/5408/discussion/
[Cargille92] J. Cargille and B. P. Miller, 1992. Binary
Wrapping: A Technique for Instrumenting Object Code,
ACM SIGPLAN Notices, 27(6):17–18, June 1992.
[Cohen77] E. Cohen, 1977. Information transmission in
computational systems, ACM SIGOPS Operating
Systems Review, 11(5):133–139, 1977.
[Cowan03] C. Cowan, 2003. Activity. A note posted to
the Sardonix Mailing List, 25 March 2003.
http://mail.wirex.com/pipermail/sardonix/2003-
March/0153.html.
[Denning76] D. Denning, 1976. A Lattice Model of
Secure Information Flow, Communications of the ACM,
19(5):236–243, May 1976.
[Denning77] D. Denning and P.J. Denning, 1977.
Certification of programs for secure information
flow, Communications of the ACM, 20 (7) (1977)
504–513.
[Du98] W. Du and A. Mathur, 1998. Vulnerability Testing
of Software System Using Fault Injection. Department of
Computer Sciences, Purdue University, Coast TR 98-02,
1998 75.
[Du00] W. Du and A. Mathur, 2000. Testing for software
vulnerability using environment perturbation, Proceeding
of the International Conference on Dependable Systems
and Networks (DSN 2000).
[Forr00] J. Forrester and B. Miller, 2000. An Empirical
Study of the Robustness of Windows NT Applications
Using Random Testing, The 4th Usenix Windows System
Symposium, Seattle, August 2000.
[Ghosh98a] A.K. Ghosh, T. O’Connor and G. McGraw,
1998. An Automated Approach for Identifying Potential
Vulnerabilities in Software, Proceedings of the 1998 IEEE
Symposium on Security and Privacy.
[Ghosh98b] A. Ghosh and T. O’Connor. Analyzing
Programs for Vulnerability to Buffer Overrun Attacks,
Technical Report, Reliable Software Technologies,
January 1998.
[Huang03] Yao-Wen Huang, Shih-Kun Huang, Tsung-Po
Lin and Chung-Hung Tsai. Web application security
assessment by fault injection and behavior monitoring,
WWW 2003: 148–159.
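[Karger74] P. A. Karger and R. R. Schell, 1974. Multics Security Evaluation: Vulnerability Analysis, Technical Report ESD-TR-74-193, Vol. II, HQ Electronic Systems Division, Hanscom AFB, June 1974.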
[Lampson73] B. Lampson, 1973. A Note on the
Confinement Problem, Communications of the ACM,
16(10):613–615, 1973.
[Landwehr94] C. Landwehr, A. Bull, J. McDermott and W.
Choi, 1994. A Taxonomy of Computer Program Security
Flaws, with Examples, ACM Computing Surveys 26, no.
3, September 1994.
[Miller90] B. Miller, L. Fredricksen and B. So, 1990. An
Empirical Study of the Reliability of Unix Utilities,
Communications of the ACM, vol.33, no.12, Dec. 1990,
pp. 32–44.
[Miller95] B. Miller, D. Koski, C. P. Lee, V. Maganty, R.
Murthy, A. Natarajan and J. Steidl, 1995. Fuzz
Revisited: A Re-examination of the Reliability of UNIX
Utilities and Services, Technical Report, Computer
Science Department, University of Wisconsin,
November 1995.
[Neumann76] P. G. Neumann et al, 1976. Software
development and proofs of multi-level security, Proc.
2nd International Conference on Software Engineering,
pp. 421–428, San Francisco, CA, 1976.
[Paget02] C. Paget, 2002. Exploiting design flaws in the
Win32 API for privilege escalation.
http://security.tombom.co.uk/shatter.htm, 2002.
[Rochlis89] J. Rochlis and M. Eichin. With microscope and
tweezers: The worm from MIT’s perspective,
Communications of the ACM, June 1989.
[Rushby95] J. Rushby, 1995. Formal Methods and their
Role in the Certification of Critical Systems, Technical
Report CSL-95-1, Computer Science Laboratory, SRI
International, March 1995.
[Shannon48] C. Shannon, 1948. A Mathematical Theory
of Communication, The Bell System Technical Journal,
vol. 27, pp. 379–423, 1948.
[Spike03] http://www.immunitysec.com/spike.html.
[Thompson02] H. Thompson, J. Whittaker and F. Mottay,
2002. Software Security Vulnerability Testing in Hostile
Environments, Proceedings of the 17th ACM Software
Applications Conference (ACM-SAC), Madrid, Spain, 2002.
[Viega00] J. Viega, J.T. Bloch, T. Kohno and G. McGraw,
2000. ITS4: A static vulnerability scanner for C and C++
code, Annual Computer Security Applications
Conference, 2000.
[Walker79] B. J. Walker, R. A. Kemmerer and G. J. Popek,
1979. Specification and Verification of the UCLA Unix
Security Kernel, Proceedings of the 7th ACM Symposium
on Operating Systems Principles (SOSP), pp. 64–65, 1979.
[Wang03] H. Wang and C. Wang, 2003. Taxonomy of
security considerations and software quality.
Communications of the ACM, 46(6): 75–78, 2003.
[Wagner00] D. Wagner, J. Foster, E. Brewer and A. Aiken,
2000. A First Step Towards Automated Detection of
Buffer Overrun Vulnerabilities, Symposium on Network
and Distributed Systems Security (NDSS ‘00), pp. 3–17,
San Diego, CA, February 2000.
[Wagner02] D. Wagner and H. Chen, 2002. MOPS: an
infrastructure for examining security properties of
software, Technical Report, UCB//CSD-02-1197, UC
Berkeley, 2002.