Vulnerabilities and patches
Economics of Security and Privacy (BMEVIHIAV15)
Mark Felegyhazi, assistant professor
CrySyS Lab.
BME Department of Telecommunications (Híradástechnikai Tanszék)
mfelegyhazi(at)crysys(dot)hu
Goal of risk management
vulnerabilities → threats → incidents → losses (some unknown)
Goal: minimize the costs associated with risks (threats)
Vulnerabilities and patches 2 Mark Felegyhazi, CrySyS Lab, Department of Telecommunications, BME
Software vulnerabilities lifecycle
software contains bugs; thorough testing requires strategic interactions and faces uncertainties about system strength
discovery of vulnerabilities: miscreants exploit them, honest users assess risks and apply patches
questions: how to produce secure software, discover vulnerabilities, and patch vulnerabilities
Timing in risk management: incentive issues
Incentive issues
produce secure software → discover vulnerabilities → apply patches
Incentive issues: software production
producing secure software (supply side): lemon market
security investments (demand side): tragedy of the commons and free-riding
→ MARKET FAILURE
Solutions: regulations, liability (more at the end), new market mechanisms
feedback on quality: prices are indicators
Incentive issues
produce secure software → discover vulnerabilities → apply patches
Incentive issues: vulnerability discovery
discovering and reporting security information: profitability, privacy, and sensitive information
Solution: vulnerability markets
Böhme, R., A Comparison of Market Approaches to Software Vulnerability Disclosure, ETRICS 2006
Classification of vulnerability markets
bug challenges, vulnerability brokers, exploit derivatives, cyber-insurance
Vulnerability challenges
also called: market price of a vulnerability; ex.: Mozilla security bug bounties
allocate money for a vulnerability: better to get the money than to exploit the bug
Problems: difficult to get the reward right; price as an indicator for security; incentive issues by selling (waiting for the right price); are rewards high enough?
dynamic price setting: buyer-driven auctions (bug offering), seller-driven auctions
Vulnerability brokers
vulnerability sharing circles: distribute alerts in a closed circle
cooperative game; membership control (black hats?)
examples: iDefense Vulnerability Contributor Program, TippingPoint/3Com Zero Day Initiative, Digital Armaments
CERTs: a non-profit vulnerability broker; a social planner may be better than a market approach, but might need public funding
Kannan, K., Telang, R., An economic analysis of markets for software vulnerabilities, WEIS 2004
Exploit derivatives
based on option pricing in finance; two complementary contracts:
C: there is an exploit of software X on OS Y until date D
C-bar: there is NO exploit of software X on OS Y until date D
both contracts pay the same amount M if fulfilled
Matsuura, K.: Security tokens and their derivatives. Technical report, Centre for Communications Systems Research (CCSR), University of Cambridge, UK, 2001
Exploit derivative market
exploit derivatives = a free market to trade such contracts
hypothesis: the market price indicates the probability of an exploit
issue such bundles to enable information sharing
market efficiency: liquidity (a high number of participants), low transaction costs
requires a TTP to announce results at the end of the contract (can be distributed)
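A minimal sketch of the pricing intuition, with hypothetical numbers: because the two contracts together always pay out M, the price of the C contract divided by M can be read as the market's implied probability that an exploit appears before date D (assuming risk-neutral traders and negligible interest and transaction costs).

```python
def implied_exploit_probability(price_c: float, payout_m: float) -> float:
    """Implied probability of an exploit before date D, read off the market
    price of the C ('exploit happens') contract. Assumes risk-neutral
    pricing and negligible interest/transaction costs."""
    if not 0.0 <= price_c <= payout_m:
        raise ValueError("price must lie between 0 and the payout M")
    return price_c / payout_m

# Hypothetical quotes: the C contract trades at 12 while both contracts
# pay M = 100 if fulfilled; the complementary C-bar contract should then
# trade near M - 12 = 88.
p_exploit = implied_exploit_probability(12.0, 100.0)
```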
Exploit derivative market
demand:
users and insurers: C-type contracts to distribute risks
investors: C-bar-type contracts to diversify
software vendors: both types (risk transfer + trust signal); could be an incentive scheme for developers
security experts: invest depending on their assessment
Cyber-insurance
provides a solution for several liability issues
insurance for software vendors; insurance for software users
premiums are adjusted to individual risk profiles, unlike in vulnerability challenges and exploit derivatives
more in Chapter 10
Comparison: functions
information: use the market price as a signal for quality; measurability (accuracy, timeliness, and availability); separate security effects from the rest
incentive: reward security professionals to participate (developers + controllers); monetary vs. reputation; short-term vs. long-term
risk-balancing: survive critical events; "taxes" bad security; reduces the overall risk of large-scale events
Comparison: market efficiency
market properties:
liquidity = a high number of participants
low transaction costs = inexpensive to participate
accountability = low default risks
transparency = fair rules, public price quotes
dependencies between properties
Comparison of vulnerability markets
Table 2. Comparison of Vulnerability Markets

Market type            Information  Incentives  Risk-balancing  Efficiency
Bug challenges         −            +           −−              −
Vulnerability brokers  −−           ±           −−              −
Exploit derivatives    ++           +           +               +
Cyber-insurance        +            ++          ++              −

Symbols ranging from −− (poor) to ++ (excellent)
4.5 A provisional assessment of market types

Putting the three functions and the efficiency property together gives us a framework for a structured comparison of the market types discussed in Section 3. A summary of the correspondence of each market type to the criteria is given in Table 2. Note that the evaluation is based on a qualitative assessment and should be regarded as a starting point for exchanges of view rather than as outright evidence. Some arguments backing the relative assessment of different market types are given below.

The incentive function is fulfilled by all market types, though to varying degree. The ambivalent evaluation for vulnerability brokers is due to the questionable incentives created for adversaries to join the circle in order to obtain sensitive vulnerability information before the general public [3]. Conversely, we consider cyber-insurance as particularly good at the incentive function because the incentives to give security a higher priority are not limited to bug hunters and developers, but also affect the end user. This fosters security awareness on a large basis.

As to the information function, bug challenges fail to provide accurate indicators when vulnerabilities are reported frequently. Vulnerability brokers do not reveal timely information to the public at all. Even worse, the usual practice of requiring vulnerability discoverers to sign non-disclosure agreements hinders the vital exchange of security-relevant information. We consider exploit derivatives as superior to cyber-insurance, because insurance contracts are re-negotiated less frequently, which negatively affects the timeliness of a price indicator. And it is questionable whether price information on actual cyber-insurance contracts (not merely unspecified offers) will ever be made available to the public on a large and regular basis. This, together with the presumably high transaction costs of insurance contracts, justifies a slightly negative assessment of cyber-insurance with respect to efficiency.

Bug challenges and vulnerability brokers provide no risk-balancing instruments at all. Exploit derivatives are somewhat worse than cyber-insurance…
involves users too; timeliness and accuracy
difficult to manage portfolios
no risk-balancing function
Critiques of a market-based approach
is vulnerability discovery useful?
markets err in the short term, but stabilize in the long term
a vulnerability market bubble?
close relation between information security and financial markets: effects spill over
Is it worth discovering vulnerabilities?
enormous effort is spent on discovery and disclosure; losses by intrusions decrease
but: bug finding is much easier for attackers; maybe the bug would have remained obscure
assumption: vulnerabilities are found at random
Questions: Is it better that the good guys discover vulnerabilities, or should they be left to the bad guys? Does vulnerability discovery increase software security?
Rescorla, E., Is finding a security hole a good idea?, WEIS 2004
Vulnerability lifecycle
white hat
industrial software. The literature on software reliability mostly focuses on large fault-tolerant systems, not on personal computer systems. Moreover, such studies typically focus on all faults, not on security vulnerabilities.

Chou et al. [4] measured the rate of bug finding and fixing in the Linux and BSD kernels but did not distinguish between vulnerabilities and other bugs. They did not attempt to fit a parametric model, but instead used Kaplan-Meier estimation, and did not attempt to compute rediscovery probabilities. Their estimate of bug lifetime (mean = 1.8 years) is somewhat shorter than ours, but they see a generally similar curve. Unfortunately, it is not possible to directly compare Chou et al.'s results with ours because we have extensive and unmeasurable censoring (i.e., if there are vulnerabilities that persist past the study period, our technique does not know about them at all). However, for the two programs for which we have long time baselines, thus partially ameliorating the censoring (NT 4.0 and Solaris 2.5.1), we find a much slower decay curve than found by Chou et al.

The theoretical work on bug finding is also sparse. In [5] and [6] Anderson presents a theoretical argument using reliability modeling that suggests that a large number of low-probability vulnerabilities favors the attacker rather than the defender, because it is easier to find a single bug than to find all of them. Thus, the defender needs to work much harder than a dedicated attacker in order to prevent a single penetration. In a related paper, Brady et al. [7] argue that reducing bugs through testing quickly runs into diminishing returns in large systems once the most obvious bugs (and hence vulnerabilities) are removed.

Anderson et al. do not, however, address the question of disclosure or whether attempting to find vulnerabilities is worthwhile. Answering these questions requires addressing empirical data, as we do in this paper. As far as we know, we are the first to do so.
3 The life cycle of a vulnerability

In order to assess the value of vulnerability finding, we must examine the events surrounding discovery and disclosure. Several authors, including Browne et al. [3] and Schneier [8], have considered the life cycle of a vulnerability. In this paper, we use the following model, which is rather similar to that described by Browne.

Introduction: the vulnerability is first released as part of the software.
Discovery: the vulnerability is found.
Private Exploitation: the vulnerability is exploited by the discoverer or a small group known to him.
Disclosure: a description of the vulnerability is published.
Public Exploitation: the vulnerability is exploited by the general community of black hats.
Fix Release: a patch or upgrade is released that closes the vulnerability.

These events do not necessarily occur strictly in this order. In particular, Disclosure and Fix Release often occur together, especially when a manufacturer discovers a vulnerability and releases the announcement along with a patch. We are most interested in two potential scenarios, which we term White Hat Discovery (WHD) and Black Hat Discovery (BHD).

3.1 White Hat Discovery

In the White Hat Discovery scenario, the vulnerability is discovered by a researcher with no interest in exploiting it. The researcher then notifies the vendor (often he is an employee of the vendor) and the vendor releases an advisory along with some sort of fix. Note that it is of course possible for an advisory to be released prior to a fix, but this is no longer common practice. During the rest of this paper, we will assume that fixes and public disclosures occur at the same time. In this scenario, Disclosure and Fix Release happen simultaneously, as the entire world (with the exception of the discoverer and vendor) finds out about the vulnerability at the same time. There is no Private Exploitation phase. Public Exploitation begins at the time of Disclosure.
[Figure 1: White Hat Discovery process when disclosure and fix release occur together; machines over time, marking Introduction, Discovery, Disclosure/Fix Release, the vulnerable-machine count, and the intrusion rate during Public Exploitation]

Figure 1 shows the sort of process we expect to see. The bottom curve shows the number of intrusions as a function of time. As the vulnerability is not known to attackers prior to Disclosure, there are no intrusions up to this time. At Disclosure time the Public Exploitation phase begins and we start to see intrusions. The rate of…
private exploitation: white hat vs. black hat discovery
Costs of a vulnerability
intrusion cost: private (Cpriv) and public (Cpub) exploitation
exploit cost: bias towards valuable machines during the private exploitation time
the real cost distribution is unclear: depending on the scenario, admins patch more quickly; early exploiters after public release might in fact be private
fixing cost: applying patches, personnel time, system downtime
Costs of a vulnerability
intrusion cost: private (Cpriv) and public (Cpub)
exploit cost
cost-benefit analysis = estimate:
the probability that the vulnerability is never discovered
the probability that it is discovered by a black hat (pr)
the probability that it is discovered by a white hat (pw)
estimate pr from measurement data (the ICAT database)
the area under the intrusion rate curve. Clearly, some machines are more valuable than others and therefore their compromise will have a greater cost. We expect that Black Hats will preferentially attack high-value targets and that those high-value targets will be fixed relatively quickly. Therefore, we might expect that the machines compromised during the private exploitation period will be more valuable in general than those compromised during the public exploitation period. Overall, this is probably true; however, even high-value targets often take days or weeks to fix and therefore there will be substantial exposure right after Disclosure. Because most of the Black Hat community likely does not know about a given vulnerability, the period just after disclosure (when the rest of the Black Hats find out as well) will have plenty of new opportunities for attacks on high-value targets. In addition, when a worm is available it will generally not discriminate between high- and low-value targets.
3.4 WHD vs. BHD

It seems intuitively obvious that if one has to choose between the BHD and WHD scenarios, one should prefer WHD, as WHD eliminates the period of Private Exploitation. As a first approximation, we assume that except for this difference the WHD and BHD scenarios are identical. Thus, the cost advantage of WHD over BHD is the cost incurred during the Private Exploitation phase. If we denote the cost of Private Exploitation as Cpriv and the cost of Public Exploitation as Cpub, then the cost of intrusions in the WHD scenario is given by:

CWHD = Cpub   (1)

and the cost of intrusions in the BHD scenario is:

CBHD = Cpriv + Cpub   (2)

The advantage of WHD is

CBHD − CWHD = Cpriv   (3)

Obviously, this approximation is imperfect and probably overestimates the cost difference. First, administrators are likely to be more diligent about patching if they know that a vulnerability is being actively exploited. Thus, the total number of vulnerable systems will decline more quickly in the BHD scenario and the peak rate of disclosure will be correspondingly lower. Similarly, some of the "early exploiters" immediately after Disclosure are likely part of the Private Exploitation community and therefore Disclosure will likely not produce as large a rise in initial exploitation in the BHD case as in the WHD case. However, the simple and conservative approach is to ignore these effects.
4 Cost-Benefit Analysis of Disclosure

Imagine that you are a researcher who is the first person anywhere to discover a vulnerability in a widely used piece of software. You have the option of keeping quiet or disclosing the vulnerability to the vendor. If you notify the vendor, the WHD scenario of Section 3.1 will follow. If you do not notify the vendor, a Black Hat may independently discover the vulnerability, thus initiating the BHD scenario. However, there is also some chance that the vulnerability will never be rediscovered at all or that it will be rediscovered by another White Hat. In the first case, the cost of disclosure will never be incurred. In the second, it will be incurred later. Either outcome is superior to immediate disclosure.

Consequently, in order to assess whether disclosure is a good thing or not, we need to estimate the probability of the following three outcomes:
1. The vulnerability is never rediscovered (pnull)
2. The vulnerability is rediscovered by a White Hat (pwhd)
3. The vulnerability is rediscovered by a Black Hat (pbhd)

We consider a "worst-case" model: assume that all potential rediscovery is by Black Hats and denote the probability of rediscovery as pr. Consistent with our practice, this simplifying assumption introduces a bias in favor of disclosure. The only way in which failure to disclose does harm is if the vulnerability is rediscovered by a Black Hat. Thus, assuming that vulnerabilities are always rediscovered by Black Hats overestimates the damage done by rediscovery and therefore the advantage of disclosure. Using standard decision theory (see, for instance, [12, 13]) we get the choice matrix of Figure 3.

              Not Rediscovered (pnull)   Rediscovered (pr)
Disclose      Cpub                       Cpub
Not Disclose  0                          Cpub + Cpriv

Figure 3 Disclose/not disclose decision matrix

Working through the math, we find that the choice to disclose only reduces the expected cost of intrusions if:

pr (Cpriv + Cpub) > Cpub   (4)

In order to justify disclosing, then, the expected cost of excess intrusions in the case of BHD has to be large enough to outweigh the known cost of intrusions incurred by disclosing in the first place. The rest of this paper is concerned with this question.
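Equation (4)'s decision rule can be expressed directly; the numbers below are illustrative, not from the paper.

```python
def disclosure_reduces_cost(p_r: float, c_priv: float, c_pub: float) -> bool:
    """Equation (4): disclosing reduces the expected cost of intrusions
    only if p_r * (C_priv + C_pub) > C_pub, i.e. the expected cost of
    Black Hat rediscovery outweighs the certain cost of disclosing."""
    cost_disclose = c_pub                    # incurred with certainty
    cost_withhold = p_r * (c_priv + c_pub)   # incurred only on rediscovery
    return cost_withhold > cost_disclose

# With a low rediscovery probability, withholding has the lower expected
# cost: 0.1 * (50 + 100) = 15 < 100, so disclosure does not pay off.
low = disclosure_reduces_cost(0.1, 50.0, 100.0)   # False
high = disclosure_reduces_cost(0.8, 50.0, 100.0)  # True: 120 > 100
```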
allows us to aggregate the data into months instead of quarters; thus the exponential scale constants are a factor of three larger than with the cohort regressions of Figures 9 and 13.

[Figure 14: fitted overall vulnerability decay curves (exponential and Weibull); number of bugs vs. age of bug in months]
[Figure 15: exponential fit parameters for vulnerability age at discovery time]
[Figure 16: Weibull fit parameters for vulnerability age at discovery time: shape 1.25, scale 36.6]

Figure 17 shows the cumulative distribution functions for the probability that a vulnerability will be found by time t given these fit parameters.

[Figure 17: probability that a vulnerability will be found by time t; CDF vs. age of bug in months, exponential and Weibull fits]

As mentioned above, this estimate of pr is very likely to be an overestimate because of sampling bias.
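The Weibull curve of Figure 17 can be reproduced from the fit parameters that survived extraction (shape 1.25, scale 36.6 months); the exponential fit constants did not survive, so only the Weibull CDF is sketched here.

```python
import math

def weibull_rediscovery_cdf(t_months: float,
                            shape: float = 1.25,
                            scale: float = 36.6) -> float:
    """Probability that a vulnerability has been found by age t (in months),
    using the Weibull parameters reported in Figure 16."""
    if t_months < 0:
        raise ValueError("age must be non-negative")
    return 1.0 - math.exp(-((t_months / scale) ** shape))

# Near the scale parameter (~3 years) the fitted probability is roughly
# 1 - 1/e, i.e. a bit above 0.6.
p_3_years = weibull_rediscovery_cdf(36.0)
```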
6.7 Are we depleting the pool of vulnerabilities?

We are now in a position to come back to our basic question from Section 5: to what extent does vulnerability finding deplete the pool of vulnerabilities? The data from Sections 6.5 and 6.4 provides only very weak support for a depletion effect. Even under conditions of extreme bias, the highest depletion estimate we can obtain from Section 6.5.1 is that the half-life for vulnerabilities is approximately 2.5 years. However, no depletion whatsoever cannot be ruled out given this data. In that case, the probability of rediscovery pr would be vanishingly small.

The conclusion that there is fairly little depletion accords with anecdotal evidence. It's quite common to discover vulnerabilities that have been in programs for years, despite extensive audits of those programs. For instance, OpenSSH has recently had a number of vulnerabilities [23] that were in the original SSH source and survived audits by the OpenSSH team.

6.8 Sources of Error

In any analysis of this type there are a large number of potential sources of error. We discuss the known sources in this section.

6.8.1 Unknown Versions

As indicated in Section 6.2.2, a number of the program versions were listed as ".", meaning "unknown version". In approximately 15% of our data points, time of first introduction was therefore somewhat indeterminate. We discarded these data points in our initial analysis. As a check for bias, we manually investigated all of these vulnerabilities and were able to determine version numbers for approximately 100 (8% of the total data set). We reran our regressions with largely similar results to the original data set. With this change, the 2000 cohort linear regression is now barely significant (p = .0446) instead of barely insignificant (p = .0565).

6.8.2 Bad Version Assignment

One problem with the introduction version algorithm described in Section 6.5 is that some programs do not issue version numbers in strict sequence. For instance, FreeBSD for some time maintained the 3.x and 4.x branches in parallel. Because we use version number as our primary sort, in some unusual cases this can make vulnerabilities appear younger than they in fact are, thus making disclosure look more attractive.

For instance, a vulnerability which appeared only in FreeBSD 4.1 and FreeBSD 3.5 would be recorded as "introduced" in 3.5, even though 4.1 was released…
Main conclusion
Discovering vulnerabilities is not worthwhile!
there is a large number of bugs, so there is always a new one; security increases, but discounting balances it out
lessons learned:
de-emphasize finding bugs; collect more data; improve patching (efficiency and time)
Incentive issues
produce secure software → discover vulnerabilities → apply patches
Incentive issues: applying patches
applying security patches is a real economic decision: an uncertain future benefit (increased security) vs. immediate losses (personnel time + lost productivity = higher operational cost for increased security)
Why not patch immediately? too many patches; prioritizing takes effort; patch testing is needed; distribution is not standardized; installation causes downtime
Why not release patches immediately? to reduce overhead for users: periodic release (ex.: MS)
Cavusoglu, H., Cavusoglu, H., and Zhang, J., Economics of security patch management, WEIS 2006
Model: the game
risk management decision: patch timing
Stackelberg game; players: software vendor and user (firm)
one vendor and one firm
vulnerabilities are discovered according to a Poisson process (rate λ)
moves: the vendor sets the patch release cycle Tv; the firm sets the software update cycle Tf
Model: costs
vendor costs:
patch release cost: fixed cost Kv for dissemination; variable cost n·cv for the development of n patches
reputation cost: before patch release (db) and after release (da)
firm costs:
patch application cost: fixed cost Kf for downtime; variable cost n·cf for testing and installation of n patches
damage cost by attacks: before patch release (cb) and after release (ca)
Centralized system
the optimal solution is calculated by a social planner
patch release and update must be synchronized
Proposition 1. Let Tv* and Tf* be the optimal patch release and update cycles for the integrated system, respectively. Then

Tv* = Tf* = sqrt( 2 (Kv + Kf) / (cb λ) ),   (4)

and the minimum expected average system cost is

CI(Tv*, Tf*) = sqrt( 2 (Kv + Kf) cb λ ) + λ (cf + cv).
Proposition 1 states that at social optimality patch release and update cycles must be synchronized, that is, patches must be applied to the system as soon as they are released by the vendor. In practice, Tf and Tv are determined by the firm and the vendor individually. In the next section, we discuss how patch release and update cycles change when these are determined in a noncooperative setting.
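Proposition 1's closed form can be checked numerically; the parameter values below are hypothetical, and the symbol names follow the slides (Kv, Kf fixed costs, cb the pre-release damage rate, λ the Poisson rate).

```python
import math

def centralized_optimum(K_v: float, K_f: float, c_b: float, lam: float) -> float:
    """Socially optimal synchronized cycle (Proposition 1):
    T* = sqrt(2 (K_v + K_f) / (c_b * lam))."""
    return math.sqrt(2.0 * (K_v + K_f) / (c_b * lam))

def centralized_cost(T: float, K_v: float, K_f: float, c_b: float, lam: float,
                     c_f: float = 0.0, c_v: float = 0.0) -> float:
    """Expected average system cost for a synchronized cycle of length T:
    fixed costs per cycle plus expected pre-patch damage plus variable costs."""
    return (K_v + K_f) / T + c_b * lam * T / 2.0 + lam * (c_f + c_v)

# With K_v=100, K_f=50, c_b=2, lam=1.5 the optimum is T* = sqrt(300/3) = 10,
# and the cost there matches the closed form sqrt(2 (K_v+K_f) c_b lam) = 30.
T_star = centralized_optimum(100.0, 50.0, 2.0, 1.5)
```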
4. The Decentralized System

To solve this game, we work backward and first focus on the firm's patch update problem, assuming a patch release cycle Tv is given.
4.1 The Firm's Problem

Given that the firm follows a nested policy, that is, Tf = kTv, one patch update cycle of the firm is divided into k patch release cycles. Because the release times of n patches are uniformly distributed within kTv time units, (n1, . . . , nk) follows a multinomial distribution. Hence, the firm's expected average cost per unit time is

Cf(Tv, k) = cb λ Tv / 2 + ca λ (k − 1) Tv / 2 + Kf / (k Tv) + λ cf   (5)

For a given Tv, the firm's problem can be formulated as

min_k { cb λ Tv / 2 + ca λ (k − 1) Tv / 2 + Kf / (k Tv) + λ cf  |  k is an integer }

Lemma 1 characterizes the optimal patch update policy for the firm.

Lemma 1. For a given patch release cycle Tv of the vendor, k* satisfies

k* (k* − 1) ≤ 2 Kf / (ca λ Tv²)  and  k* (k* + 1) ≥ 2 Kf / (ca λ Tv²)   (6)
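Lemma 1's integer condition can be implemented with a small search; the check against neighboring values of k uses equation (5) with hypothetical parameters.

```python
def firm_best_k(K_f: float, c_a: float, lam: float, T_v: float) -> int:
    """Smallest integer k with k*(k+1) >= 2*K_f/(c_a*lam*T_v**2); by
    construction it also satisfies k*(k-1) <= that ratio (Lemma 1)."""
    ratio = 2.0 * K_f / (c_a * lam * T_v ** 2)
    k = 1
    while k * (k + 1) < ratio:
        k += 1
    return k

def firm_cost(k: int, T_v: float, K_f: float, c_a: float, c_b: float,
              lam: float, c_f: float = 0.0) -> float:
    """Equation (5): the firm's expected average cost per unit time."""
    return (c_b * lam * T_v / 2.0 + c_a * lam * (k - 1) * T_v / 2.0
            + K_f / (k * T_v) + lam * c_f)

# With K_f=50, c_a=1, lam=1, T_v=2 the ratio is 25, so k* = 5; its cost
# under (5) beats both neighbors k=4 and k=6.
k_star = firm_best_k(50.0, 1.0, 1.0, 2.0)
```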
Given the firm's best response, we next study how the vendor chooses its patch release cycle.

4.2 The Vendor's Problem

Taking the firm's response k* into consideration, the vendor chooses its patch release cycle Tv to minimize its average cost. The expected average cost for the vendor can be written as

Cv(Tv, k*) = Kv / Tv + db λ Tv / 2 + da λ (k* − 1) Tv / 2 + λ cv   (7)

The vendor chooses Tv to minimize Cv(Tv, k*) for a given reaction k*. Note that k* is uniquely identified by the constraints in (6); therefore, the vendor's problem can be formulated as

min_{Tv, k}  Kv / Tv + db λ Tv / 2 + da λ (k − 1) Tv / 2 + λ cv
s.t.  k (k − 1) ≤ 2 Kf / (ca λ Tv²)  and  k (k + 1) ≥ 2 Kf / (ca λ Tv²)

Unfortunately, there are no closed-form solutions for the vendor's problem. Because our focus is on coordination schemes to achieve the socially optimal patch release and update cycles, and the patch…
Decentralized system
Stackelberg game: the vendor is the leader; no closed-form solution in general
if Kf / ca ≤ 2 Kv / db (the firm's tolerance to wait for an update does not exceed the vendor's tolerance to wait to release one)
then Tv** = Tf** = sqrt( 2 Kv / (db λ) )
Even in this case, the patch release cycle is not the social optimum.
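The special case on this slide can be sketched as follows; the inequality direction is reconstructed from the model (k* = 1 exactly when the firm's tolerance Kf/ca does not exceed the vendor's tolerance 2Kv/db), and the numbers are hypothetical.

```python
import math

def firm_patches_every_release(K_f: float, c_a: float,
                               K_v: float, d_b: float) -> bool:
    """Condition for the closed-form case: the firm's tolerance to wait
    for an update, K_f/c_a, is at most the vendor's tolerance 2*K_v/d_b."""
    return K_f / c_a <= 2.0 * K_v / d_b

def vendor_release_cycle(K_v: float, d_b: float, lam: float) -> float:
    """Vendor's equilibrium cycle when the firm applies every patch:
    T_v** = T_f** = sqrt(2*K_v/(d_b*lam))."""
    return math.sqrt(2.0 * K_v / (d_b * lam))

# Hypothetical numbers: K_f/c_a = 50 <= 2*K_v/d_b = 100, so the firm
# patches every release and T** = sqrt(2*100/(2*1)) = 10 -- which still
# differs from the social optimum of Proposition 1, since K_f and c_b
# do not enter the vendor's trade-off.
T_eq = vendor_release_cycle(100.0, 2.0, 1.0)
```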
Coordination schemes
cost-sharing = (part of) the patch update cost is charged to the vendor
liability = (part of) the damage before patch release is charged to the vendor
depending on the tolerance levels, either or both schemes can achieve socially optimal patch release timing
Main conclusion:
Cost-sharing and liability are not complements; they should not be used together.
Reading for next time
E. Gal-Or, A. Ghose, The Economic Incentives for Sharing Security Information, Information Systems Research Journal, 2005
optional:
Gordon, S. and Ford, R., When Worlds Collide: Information Sharing for the Security and Anti-virus Communities, IBM research report, 1999
Gordon, L.A., Loeb, M.P., and Lucyshyn, W., Sharing information on computer systems security: An economic analysis, Journal of Accounting and Public Policy, 2003