In the past decade, the information technology (IT) sector has experienced tremendous growth as firms have widely adopted IT throughout the enterprise. Despite its vast benefits, a major issue that continues to be poorly addressed is security. Between 1993 and 2003, the number of reported Internet security incidents grew from roughly 1,300 to 137,000 (CERT 2004), while the economic cost of fourteen of the most severe attacks was estimated at $36.5 billion (Computer Economics 2004). In more recent studies, researchers found that phishing attacks cost $3.2 billion in 2007 (Gartner 2007) and that the average total cost of a data breach was $6.65 million per incident in 2008 (Ponemon 2009).
Internet security is in the public’s best interest, but, thus far, investments to increase security have yet to yield their full potential. For example, the Code Red worm exploited a buffer-overflow vulnerability in Microsoft IIS in 2001. Surprisingly, Microsoft had developed and released a security patch for this vulnerability a full month before Code Red erupted. However, to predict the behavior of a system, one must predict the behavior of all of its components, which jointly determine how well the system performs. With software security, users form an essential component of the system, but how often can one actually predict the behavior of users? Even with the technical fix publicly available, many users failed to patch their installations, and 360,000 servers were struck by the worm. As in the case of Code Red, software firms typically do invest in creating patches once security vulnerabilities become known, but, in many cases, end users lack incentives to properly maintain their computing systems by applying these patches. To effectively address the security problem, we must understand user behavior. Specific economic factors that influence a user’s patching decision include the cost of deploying patches on enterprise systems, the likelihood of security attacks, and the cost of being struck by an attack and incurring downtime. When we account for these factors, we can better predict the impact of actions taken to increase software security (e.g., investments in security, rebates on patching).
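The patching calculus sketched above can be illustrated with a toy expected-cost comparison (all parameters below are hypothetical, chosen only for illustration): a risk-neutral user patches only when the cost of deploying the patch falls below the expected loss from remaining unpatched.

```python
# Toy model of a user's patching decision (hypothetical parameters).
# A risk-neutral user patches when the deployment cost is below the
# expected loss from remaining unpatched.

def should_patch(patch_cost, attack_prob, loss_if_attacked):
    """Return True if patching minimizes the user's expected cost."""
    expected_loss_unpatched = attack_prob * loss_if_attacked
    return patch_cost < expected_loss_unpatched

# A home user: cheap patch, modest potential loss.
print(should_patch(patch_cost=50, attack_prob=0.10, loss_if_attacked=2_000))
# -> True

# An enterprise: costly patch testing/deployment, larger downtime loss.
print(should_patch(patch_cost=100_000, attack_prob=0.05,
                   loss_if_attacked=1_000_000))
# -> False
```

Under these toy numbers, the home user patches while the enterprise, facing large testing and deployment costs, rationally does not, which is consistent with the unpatched enterprise servers struck by Code Red.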
As both public and private entities increasingly rely on the Internet, communications systems, and electronically transmitted data to perform business functions, government must assume an active role in maintaining the reliability and security of the global cyber infrastructure. The U.S. government has identified protecting the nation’s 17 critical infrastructure and key resources sectors as one of the four primary goals of its strategy for homeland security, and it has specifically emphasized the importance of the cyber infrastructure to national security and economic stability (Bush 2007). However, securing the cyber infrastructure presents difficult challenges because it requires coordinating the efforts of government, the private sector, and society (U.S. White House 2003).
When misaligned incentives induce outcomes with destructive social consequences, there is a natural call for government involvement. To date, the government’s strategy has been largely suggestive, an attempt to lead by example. However, in the face of increasing security attacks and the associated economic losses, the government may need to adopt a more hands-on approach to securing the Internet (Lemos 2002, U.S. White House 2003, Krebs 2003). Information security is not simply a technical problem; it is a socio-technical one, greatly influenced by the underlying economic incentives of technology firms and users (Anderson 2001). To strategically craft policies aimed at its protection, we must develop a better understanding of how the decisions of individual actors affect aggregate measures of security and social welfare, as well as how to appropriately influence those decisions toward serving the public interest.
Figure 1 illustrates how government, the private sector, and consumers interact to influence information security risk. Appropriate measures of this risk include the likelihood of both directed and undirected security attacks as well as the expected economic losses due to security vulnerabilities. One can trace the impact of a software liability policy to better understand the complexity of this system. If the government determined that software vendors should be held liable for a share of security losses, this decision would directly affect the economic costs faced by both software firms and users, as traced through arrows G1 and G2, respectively. In response to such a policy, however, software firms can alter their investments in the security of their own software products, which directly affects security risk (S1). Software firms can also provide various incentives to users, and even alter prices, to affect user behavior in response to software liability (S2). Finally, taking into account both government liability policy and the vendor’s response, users choose which types of software to consume and whether to invest in security by patching, choices that directly affect security risk (U). Because of these interactions, an effective liability policy must coordinate decision making across all components of the system. In subsequent sections, we refer back to this figure to discuss how each topic fits within the scope of the larger study.
It is imperative that the security community continue to discover new technical approaches to increasing information security. However, such improvements alone are likely to be insufficient because information security is deeply affected by how technologies are deployed and maintained. To better manage security risk, one must learn more about the behavior of the firms producing technologies and the users who consume them. Thus, alongside new security technology development, researchers in applied microeconomics must concurrently study how the economic incentives of technology producers and users interact with these technological advances, in order to form a better understanding of the aggregate impact on information security. With a more comprehensive picture of the security landscape that incorporates firm and user decision making, not only will firms learn to better manage security risk, but R&D organizations will also gain guidance on how to design technical security improvements to be more incentive compatible and, therefore, more effective.
For years, Microsoft has been dealing with two major problems related to its network software products: excessive security risk and piracy. In particular, Microsoft has struggled with whether to permit pirates of its software products to have access to security patches (Microsoft 2002, Worthington 2004, Salloway 2004). Unlike the software itself, security patches are difficult to pirate: they are released on a weekly to monthly cadence, and their value lies in being deployed before attacks occur. Because many users will not rely on untrustworthy sources for such high-frequency patches, Microsoft can strategically use a restrictive patch policy to place software pirates in a compromised security position, providing additional economic incentives for conversion into legitimate users. To illustrate how economic modeling of security phenomena can yield important insights that impact firm strategy, software security, and social surplus, we briefly discuss one prior work.
August and Tunca (2008) explicitly model the interaction between software piracy and the security risk stemming from unpatched usage. Surprisingly, they demonstrate that under several market conditions, permitting pirates to have security patches can be preferable both to Microsoft and to society as a whole. Due to security interdependence, the entire network benefits from a security perspective when pirates are allowed to patch their systems. However, lenient policies can also lead to excessive populations of pirates, and not all pirates properly maintain the security of their systems. By incorporating these tradeoffs into their model, August and Tunca (2008) establish how optimal patch policies are influenced by market conditions such as piracy enforcement, security risk, and piracy tendencies in the user population. They clarify that restrictive patch policies, which may intuitively seem an effective tool, perform well only under market conditions in which pirates have strong incentives to convert into legitimate users. When these conversion incentives are weak, a permissive approach can actually increase both profits and welfare.
In one particular set of conditions, August and Tunca (2008) show that when a population’s tendency for piracy is low and government enforcement of software piracy law is weak, a profit-maximizing software vendor such as Microsoft should select a permissive policy for security patches. If Microsoft intended to use a restrictive policy, then, because enforcement is low, it would need to reduce prices to engender pirate conversion. However, price optimality calls for Microsoft to maintain a higher price level, which provides very little conversion incentive under a restrictive policy. In this case, many pirates remain in the population as unpatched users, which increases security risk on the network. Therefore, under these conditions, choosing a permissive approach enhances security and increases the value of the software in the economy. Notably, these market conditions are typical of the U.S., which had a piracy rate of around 20% in 2008, far lower than the rates exceeding 90% seen in several other countries (BSA-IDC 2009). An important implication of this work is that, in the face of low piracy rates and weak enforcement, Microsoft should continue to exercise permissive patch policies within the U.S. The work also provides government with recommendations on how to use piracy enforcement appropriately.
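The intuition behind this result can be conveyed with a deliberately simple numerical sketch (this is not the August and Tunca (2008) model; the functional form and all parameter values are hypothetical): aggregate attack risk grows with the share of unpatched machines on the network, so barring pirates from patches can lower welfare when pirates do not convert into legitimate users.

```python
# Illustrative toy of patch policy under interdependent risk
# (hypothetical numbers; not the actual August-Tunca model).

def network_risk(unpatched_share, base_prob=0.3):
    # Interdependent risk: attack probability scales with the
    # share of unpatched machines on the network.
    return base_prob * unpatched_share

def expected_welfare(legit, pirates, pirates_can_patch, patch_rate=0.8,
                     value_per_user=100.0, loss_per_user=60.0):
    """Total expected surplus generated by the software on the network."""
    total = legit + pirates
    unpatched = legit * (1 - patch_rate)
    # Under a restrictive policy, every pirate stays unpatched.
    unpatched += pirates * ((1 - patch_rate) if pirates_can_patch else 1.0)
    risk = network_risk(unpatched / total)
    return total * (value_per_user - risk * loss_per_user)

# Low piracy rate (as in the U.S.) with weak enforcement: pirates
# do not convert, so the policy choice only changes who can patch.
print(expected_welfare(legit=80, pirates=20, pirates_can_patch=False))  # restrictive
print(expected_welfare(legit=80, pirates=20, pirates_can_patch=True))   # permissive
```

With these toy numbers, the permissive policy yields higher welfare because letting pirates patch shrinks the unpatched share, lowering the risk borne by everyone, including legitimate users.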
However, many questions still remain unanswered. For example, if Microsoft were held liable for a portion of the economic losses associated with security vulnerabilities in its software, would software products become more secure? Would society be better off? Recently, there has been a trend in computing back toward a service-based model: products such as those offered by Salesforce.com are hosted by software vendors who charge users a subscription fee, so users no longer need to install and maintain local systems. How will this trend impact software security? Can software firms strategically alter their product mix to reduce aggregate security risk? The open source movement has been gaining strength and even recent government support. What role can open source software play in improving the current security risk landscape? For this project, we intend to continue the style of research found in August and Tunca (2008) to significantly increase our understanding of how government, software firms, and users interact to affect software security. Below, we provide an overview of our research questions and goals and then expound on each topic individually in the remaining sections.
Research Questions: Focusing on software that relies on communication across networks, we intend to examine several promising methods to increase software security in a unified study with a consistent methodology. The study is motivated by three major research questions: (1) Does imposing liability on software vendors reduce information security risk, and what is the appropriate role for government policy? (2) How do various software deployment models (e.g., SaaS versus on-premises) impact security by inducing usage structures with distinct risk profiles? (3) When underlying economic incentives are accounted for, can open source business models lead to heightened security, and under which market conditions are they most effective?
Research Approach and Goals: To study these issues, we will develop quantitative models grounded in rational economic theory and conduct formal mathematical analysis to garner insights into security risk management. We build upon an interdependent security framework that incorporates software patching, as formalized in August and Tunca (2006), while drawing on ideas in Kunreuther and Heal’s seminal work (2002, 2003, 2007). Our contribution to the literature is to investigate these important security determinants in a context where users affect aggregate security risk on the network through their individual usage and patching decisions. No existing models fully integrate these topics with users’ incentives. In such a setting, negative security externalities arise; hence, the impact of firm strategy and government policy is difficult to ascertain. There is also no economics literature that studies how the openness of source code interacts with security risk. This project will therefore generate several significant research outcomes:
- An enriched economic framework for modeling software security that permits examination of (i) software liability on security losses and patching costs, as well as firm investments to reduce security risk in software; (ii) both directed and undirected security attacks and their relationship to the structure of SaaS and on-premises software offerings; and (iii) differences in business models based on open source and proprietary software and how each interacts with security risk
- Guidance for software firms on how to invest in security, under what market conditions to employ SaaS offerings to better manage security risk, and how to choose appropriate source code and licensing strategies for software classes with distinct security risk characteristics
- Guidance for policy makers on what types of software liability will have the greatest impact on software security from a social welfare perspective, what classes of open source software development to consider for subsidization, and how to think about software design for large networks of users to induce better security characteristics, particularly in government sponsored endeavors such as the smart grid
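The interdependent security framework underlying the research approach can be illustrated with a minimal two-agent game in the spirit of Kunreuther and Heal (the parameter values below are hypothetical): each agent chooses whether to invest in protection, but contagion from an unprotected partner can make under-investment individually rational even though joint investment is socially cheaper.

```python
# Minimal two-agent interdependent security game in the spirit of
# Kunreuther and Heal (hypothetical parameters, for illustration only).
import itertools

p, q, L, c = 0.2, 0.5, 100.0, 22.0  # direct risk, contagion prob., loss, investment cost

def expected_cost(i_invests, j_invests):
    """Expected cost to agent i given both agents' investment choices."""
    cost = c if i_invests else 0.0
    direct = 0.0 if i_invests else p * L
    # Contagion: j is breached directly only if unprotected, then spreads
    # with probability q, harming i only if i escaped a direct breach.
    j_breach = 0.0 if j_invests else p
    survive_direct = 1.0 if i_invests else (1 - p)
    return cost + direct + survive_direct * j_breach * q * L

for mine, theirs in itertools.product([True, False], repeat=2):
    print(f"i invests={mine}, j invests={theirs}: cost={expected_cost(mine, theirs):.1f}")
```

With these numbers, not investing is a dominant strategy (expected cost 20 versus 22 against an investor, 28 versus 32 against a non-investor), yet the resulting equilibrium costs each agent 28 compared with 22 under joint investment, capturing the negative security externality central to the proposed models.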
Microsoft (2002, August). Windows XP product activation. Microsoft.com.