
Why Everything Keeps Failing

Fred Cohen

Networks dominate today’s computing landscape and commercial technical protection is lagging behind attack technology. As a result, protection programme success depends more on prudent management decisions than on the selection of technical safeguards. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.

Did you hear about PGP?

PGP has been one of the most trusted software packages in the history of computing, and yet it had a major security flaw introduced a few years ago, and nobody apparently found or published the flaw in the open literature until late August of 2000. The flaw in PGP, and in GPG (the GNU version of PGP), was, in my view, typical of what we have come to expect in the software arena. A strong package was, once again, made weak by adding excessive functionality and not taking adequate care.

In the case of PGP it wasn’t a particularly subtle bug. It certainly would have been detected if anybody had done the analysis of the design change that we would expect from, for example, a bridge refit. In the case of any cryptosystem where we are changing the key distribution and management system (which is what happened with PGP), we would expect that a reasonably thorough cryptographic protocol analysis would have been done. Apparently none was performed, at least not by those who seem to support the notion that PGP, and cryptography, are good security methods. It may have been done by those who suggested or designed the changes to the protocol, and if so, the flaw was intentional.

At any rate, the next time somebody tells you that “open source” makes it safe, tell them that PGP had a major security hole in it for more than a year, and the “open source” community only figured it out once experimentalists demonstrated the vulnerability. Tell them that this isn’t the first time.

For example, there was a period of about a year when the IRC daemon for Unix had a vulnerability in the form of an intentional Trojan horse allowing remote system access to anyone using the client. In the case of PGP, probably more than a million public keys were susceptible to the attack. In the case of the IRC program, about 100 000 systems were vulnerable. How many actual attacks there were, and what they were used for, we will likely never know.

Did you hear about the DNS system?

Recently, a variety of attacks against the Domain Name System (DNS) of the Internet have been found and exploited to great effect. In this case, the DNS vulnerability had been around for several years. Hundreds of thousands to millions of systems ran the faulty DNS servers, and certainly thousands were exploited for remote root access and for extending attacks to other systems. In one case alone, more than 250 DNS servers in as many Internet domains were successfully taken over by the attacker.

When you look at why this DNS software was exploited, you find another rather obvious vulnerability: an input overflow that allowed a remote user to overwrite program memory with their own program code. This is common today; the techniques have been widely published for many years, and there are even programs that test for the presence of many input-overrun conditions and identify the potential for abuse. Any prudent designer of such a security-critical application (software that controls all translations of domain names to IP addresses and back, and is at the core of the Internet as normal users experience it today) would surely have made such a simple, obvious, and easy check of their software. Certainly we would expect that a competent designer would have found such an obvious and widely understood flaw as this.
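The flaw class itself is easy to show. Here is a minimal sketch in C - the buffer size and function names are invented for illustration, not taken from any DNS server's actual code - contrasting the unchecked copy that creates an input overflow with the one-line length check that prevents it:

```c
#include <stdio.h>
#include <string.h>

#define DNAME_MAX 64   /* illustrative fixed-size buffer */

/* Unsafe: copies attacker-supplied data with no length check.
   Input longer than DNAME_MAX overwrites adjacent memory - the
   classic input overflow described above. */
void copy_name_unsafe(char dst[DNAME_MAX], const char *input) {
    strcpy(dst, input);                 /* no bounds check */
}

/* Safe: refuse any input that does not fit the buffer. */
int copy_name_safe(char dst[DNAME_MAX], const char *input) {
    if (strlen(input) >= DNAME_MAX)
        return -1;                      /* reject oversized input */
    memcpy(dst, input, strlen(input) + 1);
    return 0;
}
```

The check costs one comparison; the missing check cost remote root on hundreds of thousands of machines.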

As an aside, the DNS systems that were attacked were open source, so hundreds of thousands of people from the open source community had the opportunity, over a period of years, to examine this software and find the flaws at any time. None apparently ever did. It wasn’t until these systems were found to be widely exploited that the flaw was found. Is this the last such flaw? Did the DNS fix that was released a few days after the flaw was widely published fix all of those flaws? Who in the open source community verified this?

Did you hear about the thousands of Microsoft Word viruses?

That’s thousands of viruses that exploit the same basic, unnecessary, and almost never legitimately used feature of Microsoft Word: the macros. I have personally never desired to write a document that acted like a program. If I want to write a program, I write a program. If all the macros are really used for is filling out forms and changing default behaviours of documents, why do they have the ability to, for example, delete all of your files and remove functions from the Word menu bars?

I like to call it creeping featurism. Programmers keep adding features because that’s what programmers do. Forget the fact that there are no users or particularly legitimate uses for those features. The programmer’s philosophy is, “build it and they will come”. Of course the legitimate users of Word macros have not come in droves, but the virus writers have. It’s the same sort of thing as the feature that causes most menu systems to grant unlimited access to users who use special key sequences (like ! followed by a command) and causes many CGI scripts to grant general-purpose access to the system they run on (like the use of back quotes in an argument). It’s the use of trivial hacks to run general-purpose interpreters on user-supplied data, without adequate input checking and filtering, instead of taking the time to write a proper program that does the right and desired function and nothing else.
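The back-quote problem is worth making concrete. Below is a hedged sketch in C of the pattern, with invented function names rather than any real CGI program's code: the unsafe version hands user data to a shell, which happily executes anything inside back quotes, while the allow-list filter accepts only characters known to be harmless:

```c
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* DANGEROUS pattern: the shell expands back quotes inside `user`,
   so user = "alice`rm -rf ~`" runs the embedded command with the
   privileges of the CGI program. */
void lookup_unsafe(const char *user) {
    char cmd[256];
    snprintf(cmd, sizeof cmd, "finger %s", user);
    system(cmd);           /* shell interprets ` $ ; | & and friends */
}

/* Allow-list filter: accept only characters we know are harmless,
   rather than trying to enumerate every dangerous one. */
int is_safe_argument(const char *s) {
    if (*s == '\0')
        return 0;          /* empty input is not a valid argument */
    for (; *s; s++)
        if (!isalnum((unsigned char)*s)
            && *s != '-' && *s != '_' && *s != '.')
            return 0;      /* back quote, $, ;, | all rejected here */
    return 1;
}
```

The allow-list is the "proper program" approach: it defines what is permitted, and everything else - including metacharacters nobody thought of - is refused by default.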

Word, by the way, isn’t open source, but the vulnerability of Word macros to virus exploitation has been published since the 1980s, more than 12 years ago now. The exploits have not changed significantly over that time, nor have the applications that support these massive security holes. Microsoft has known about this for many years, but they have not acted to remedy the situation, nor are they particularly likely to unless it becomes a competitive advantage. Of course any competent person who desired to eliminate such a flaw should be able to do so, to a reasonable extent, at a reasonable cost. So why is it that the same set of flaws is still present today as 12+ years ago? The same CGI-script flaws are also present, as are the same excessive-functionality flaws in the menu systems of today.

If I left you out...

It was from a lack of space... not a lack of examples. I could write a daily column on these situations and still not have enough space to keep up with the security flaws being found and exploited in current software. This is a completely ridiculous situation in my opinion. It is beyond belief that those making so many billions of dollars and claiming to be building the future of humanity cannot keep themselves from making the same stupid mistakes again and again and again. It’s not just a short-term thing; they have been doing it for more than a decade. Now, if I had an employee who kept making the same mistake again and again and again for a period of ten years, I would fire myself for retaining such a poor performer for so long. The employee would have to go as well, but it’s hardly their fault that their employers keep paying them to do stupid things.

I don’t mean to claim that nobody should ever make a mistake. I make plenty of them. I like to tell my students that the way I got to know so much about computers was by making so many mistakes over such a long period of time. But I should also mention that there is another key ingredient to this equation: you need to learn from your mistakes. And that is where the computing community seems to be failing. The companies that sell computer software are not learning from their mistakes; they are simply making more of them.

After all, computer-related companies are making record profits... How bad can they be? The answer is obvious: just not quite as bad as the next company. Whose fault is this? Yours. After all, you buy this software, you don’t demand your money back when it fails, you don’t sue for negligence when the same problem happens again and again and again, and you probably don’t even go to another vendor. There are companies that do a very good job on both sides of this issue; there are companies that make very good software, and companies that try to buy only very good software, but there aren’t enough of them.

Hurry up - do it wrong and get on with it.

I sometimes rush to get the 80% solution like anyone else. It’s the right thing to do in many circumstances. But there are two major problems with the way today’s business community uses the 80% solution in computing:

• Once the 80% point is reached, the product hits the market. But instead of working it further and producing the 99+% solution, the push is to get the next 80% solution; after all, this product can now sell, so it’s time to make another one, or the next generation. The unseen improvements to quality just don’t sell more product, so they are not worth making; they only benefit the customer and not the seller. Besides, when it breaks, the seller will just tell the customer it’s the old version and have them pay for a new and improved version with different flaws.

• When it’s really important to get it right, because the consequences of getting it wrong are high, it is treated just like everything else, and the 80% solution is used. The truth is, the 80% solution is probably just right for game software. After all, those subtle bugs in the program only make the game play a little bit differently or crash once every day or two, but it’s only a game. If your window manager crashes every few days, you reboot the computer; no big deal. The problem is that if your security system fails, even once over the lifetime of the product, it can be a very big deal. You could lose all of your work, get wrong answers and have the bridge you are designing fall down, have your company confidential information leaked out to a competitor who uses it to put you out of business, end up in my next article as an example of how not to run your security, or end up in jail because I decided to make it look like you committed the next big Internet crime.

The computing community seems to suffer from a lack of pride in the quality of workmanship and a lack of ability to tell what’s important. I suppose this is because money has simply taken too high a precedence in our society. Money is chosen over doing the job well; after all, you are responsible to the stockholders to make as high a return on investment as you can, regardless of how you do it. As long as it’s legal, you can sell crappy software for a big profit, and, as I have heard many times, “There’s nothing wrong with that.”

Pride is one of the seven deadly sins, and yet without self-respect we lose a part of the social fabric that is at the heart of society’s ability to bring prosperity to all of us. When we decide to use the ability to accumulate wealth as the basis for self-respect, we move away from the thing that has led to success for all successful societies over the course of history, and we move toward the causes of the downfall of most of them. The software industry’s lack of pride in workmanship is symptomatic, not of the revolutionary nature of the information age, but of the pending downfall of Western society, unless...

Unless what?

That was pretty provocative... Western society will collapse unless we balance the drive for money with the drive for the survival of humankind. That applies to a lot of things: to the environmental impacts of destroying Earth for profits, to the mass extinction of animal forms because we want to eat them or don’t care if we destroy them via pollution, to the destruction of rain forests. But this isn’t quite so dire. If we continue to use low quality in our software development process for security-critical elements, we will simply allow for massive loss of the financial wealth we have created through the use of information technology, to those who are clever enough to steal it from us using our own tools. Will it be the Chinese? Perhaps the French? Criminal elements from all over the world? Who knows, but they are all trying, and they are starting to succeed.

Conclusions

No big conclusion this time. Just the same conclusions we have been drawing for a long time. Things we thought we could trust, or that we once trusted, are no longer trusted because they were never trustworthy, and we have just now come to realize it. We live in a decaying world, and we need to struggle eternally to keep it functioning with some level of assurance.

We can reasonably expect that everything we trust will fail from time to time, as it has always been and will likely always be. These days things are falling apart in our field at a fast and furious pace. But there is a solution...

The solution is diversity, redundancy, good intelligence, rapid detection, and appropriate response. I’ll lay it out a bit more:

• Diversity: We need to place less trust in each thing we trust. In this way, as things we trust fail, we suffer smaller consequences and are more resilient to their failure. This is traded off against higher operating costs for the more diverse environment, but it gains in that we will have already invested in the technologies that do better, and are thus better prepared for a wider range of futures.

• Redundancy: Whenever one thing fails, something else backs it up. Again, we pay, but in exchange we have redundant protection to compensate for individual protection faults so that the overall system does not fail. Thus we attain a reliable system from unreliable components. (Thank you, John von Neumann.)
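Von Neumann’s observation can be sketched in a few lines of C - an illustrative triple-modular-redundancy vote, not any particular system’s implementation: three unreliable components compute the same answer independently, and a majority vote masks any single failure.

```c
/* Majority vote over three replicas. If at least two agree, their
   answer wins, so one faulty replica cannot change the result.
   If all three differ there is no majority; we fall back to b as
   an arbitrary tie-break (a real system would flag the fault). */
int majority3(int a, int b, int c) {
    if (a == b || a == c)
        return a;
    return b;
}
```

The same voting idea applies at every scale, from redundant servers to redundant detection mechanisms: the system keeps working as long as failures do not gang up on the same question at the same time.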

• Good Intelligence: It is very important to know when things are failing so you can respond to those failures. This means that you need to know what is failing, and when, and how to fix it before your redundancy also fails. Similarly, you need to know about the changing threat so as to strategically adapt your defences as needed.

• Rapid Detection: Rapid detection is a major key to the mitigation of consequences. How rapid depends on the situation.

• Appropriate Response: Over-reaction can be as bad as, or worse than, most consequences we encounter. Under-reaction can leave you open for a long time and susceptible to liabilities. The notion of measured and appropriate response is something that develops with practice and practical experience.

Well... I am off to write the millennium article... again.

About The Author:

Fred Cohen is exploring information protection as a Principal Member of Technical Staff at Sandia National Laboratories, helping clients meet their information protection needs as the Managing Director of Fred Cohen and Associates in Livermore, California, and educating defenders over-the-Internet on all aspects of information protection as a practitioner in residence in the University of New Haven’s Forensic Sciences Program. He can be reached by sending email to fc@all.net or visiting http://all.net/

Meteors and Managers

The current state of Internet and E-commerce safeguards is a confusing melange of technical safeguards that work, more or less. Difficult as it is to make them work together, they can be effective in managing risks if they are implemented. However, at present many security professionals find it difficult, in spite of the post-Y2K awareness of senior executives, to get sufficient investment for safeguards for electronic business systems. It seems that for every manager who appreciates the significance of media reports of a successful hack against prominent Web business sites, there are two others who have not seen the point yet, who subscribe to the “meteor strike” theory of risk management (i.e. “never seen a meteor strike anyone I know”), and who thus need to be educated about the risks arising from E-commerce before they will make an adequate investment in security.

When talking with management it is important to help them really appreciate the full extent of the changes arising from E-commerce. With regard to E-business, we are in a fundamentally new world. The 20th-century experience base that is the foundation for management actions and priorities is no longer, of itself, sufficient to really appreciate the nature and extent of the changes sweeping through the business environment. The reach and efficiency of electronic business, both internally aligning the business to exploit the technology as well as the substantial external scope of activities, is creating