Much existing guidance defines what should be done to address software assurance, but offers no explanation of why it is needed in the first place. (This set of principles was assembled by a team of researchers at the Software Engineering Institute; it is an excerpt from "Foundations for Software Assurance" by Carol Woody and Nancy Mead of the SEI and Dan Shoemaker of the University of Detroit Mercy, presented at the Hawaii International Conference on System Sciences (HICSS), Maui, Hawaii, 2012. http://resources.sei.cmu.edu/library/asset-view.cfm?assetID=75631.) Much of the security thinking in use today is based on principles established by Saltzer and Schroeder in their paper "The Protection of Information in Computer Systems," which appeared in Communications of the ACM in 1974 [1]. They defined security as "techniques that control who may use or modify the computer or the information contained in it" and described the three main categories of concern: confidentiality, integrity and availability (CIA). Their proposed design principles, which focus on protection mechanisms to "guide the design and contribute to an implementation without security flaws" [1], are still taught in today's classrooms. They established eight principles for security in software design and development [1]:
1. Economy of mechanism: keep the design as simple and small as possible.
2. Fail-safe defaults: base access decisions on permission rather than exclusion, so that the default is denial of access.
3. Complete mediation: every access to every object must be checked for authority.
4. Open design: the design should not be secret; protection mechanisms should not depend on the ignorance of potential attackers, but rather on the possession of specific, and more easily protected, keys or passwords.
5. Separation of privilege: where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key.
6. Least privilege: every program and every user of the system should operate using the least set of privileges necessary to complete the job.
7. Least common mechanism: minimise the amount of mechanism common to more than one user and depended on by all users.
8. Psychological acceptability: the human interface should be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly.
These principles remain valid; however, it is appropriate to consider that they were developed prior to the Morris worm, which generated a massive denial of service by infecting over 6000 UNIX machines on November 2, 1988 [2]. To provide a technology context, consider that the IBM System/360 was in use from 1964 to 1978, and the IBM System/370 came on the market in 1972.
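Two of these principles, fail-safe defaults and complete mediation, can be illustrated with a minimal sketch. The permission table, user names and resource names below are hypothetical, invented purely for illustration; the sketch simply shows every access flowing through one check that denies anything not explicitly granted:

```python
# Minimal sketch of "fail-safe defaults" and "complete mediation".
# The permission table and names are illustrative, not from the paper.

PERMISSIONS = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    """Every access is checked here (complete mediation); anything
    not explicitly granted is denied (fail-safe defaults)."""
    return action in PERMISSIONS.get((user, resource), set())

def read_resource(user: str, resource: str) -> str:
    # No code path reaches the resource without passing the check.
    if not is_allowed(user, resource, "read"):
        raise PermissionError(f"{user} may not read {resource}")
    return f"contents of {resource}"

print(is_allowed("alice", "payroll.db", "read"))    # True: explicitly granted
print(is_allowed("alice", "payroll.db", "write"))   # False: never granted
print(is_allowed("mallory", "payroll.db", "read"))  # False: unknown user
```

Because the lookup falls back to an empty set, an unknown user or an unlisted action is refused by default rather than admitted by oversight, which is the essence of fail-safe defaults.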
An advanced operating system, MVS (Multiple Virtual Storage), was released in March 1974 [3]. These principles also predate the identification of the more than 46,500 software vulnerabilities and exposures that are currently exploitable in today's software products, as described in the Common Vulnerabilities and Exposures (CVE) database at http://cve.mitre.org/. When these principles were developed, "buffer overflow," "malicious code," "cross-site scripting" and "zero-day vulnerabilities" were not part of the everyday vocabulary of operational software support personnel. Patches were carefully tested and scheduled to minimise operational disruption instead of pushed into operation to minimise attack vectors. While these principles remain useful for the consideration of security within an individual piece of technology, they are no longer sufficient to address the complexity and sophistication of the environment within which that component must operate. We must broaden our horizon to consider the large-scale, highly networked, software-dependent systems upon which our entire critical infrastructure depends, from phones, power and water to industries such as banking, medicine and retail. The term software assurance has been used to describe this broader context. The Committee on National Security Systems (CNSS) [4] defines software assurance as "the level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its life cycle, and that the software functions in the intended manner." There are vast lists of practices and procedures that describe what should be done to address software assurance. There are also an equal number of complaints that effective assurance is not being addressed in today's software. We posit that some of the inaction stems from a general lack of understanding about why this additional work is needed. In our scrutiny of the wide range of materials published, the case for why to focus on software assurance, a question any two-year-old would ask, has not yet been addressed.
We propose the following seven principles in response. Organisations without effective software assurance perceive risks based on successful attacks to software and systems and usually respond reactively. They may implement assurance choices such as policies, practices, tools and restrictions based on their perception of the threat of a similar attack and the expected impact should that threat be realised. Organisations can incorrectly perceive risk when they do not understand their threats and impacts. Effective software assurance requires that risk knowledge be shared among all stakeholders and technology participants; however, too frequently, risk information is considered highly sensitive and is not shared, resulting in uninformed organisations making poor risk choices. It also requires the alignment of risk across all stakeholders and all interconnected technology elements; otherwise, critical threats will be missed or ignored at different points in the interactions. It is no longer sufficient to consider only highly critical components when everything is highly interconnected. Interactions occur at many technology levels (e.g., network, security appliances, architecture, applications, data storage, etc.) and are supported by a wide range of roles.
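The reactive pattern described above, where assurance choices rest on perceived threat and expected impact, can be made concrete with a common rule of thumb: risk exposure is the likelihood of a threat being realised multiplied by its impact. The figures and variable names in this sketch are entirely hypothetical, but it shows why an organisation that misjudges likelihood (for example, because risk information was not shared) will budget against the wrong number:

```python
# Hypothetical sketch: risk exposure as likelihood x impact.
# All figures are invented for illustration.

def risk_exposure(likelihood: float, impact: float) -> float:
    """Expected loss: probability of the threat being realised
    (0..1) times the impact if it is."""
    return likelihood * impact

impact = 1_000_000  # assumed cost of one successful attack

# Same threat, two views of its likelihood.
perceived = risk_exposure(0.01, impact)  # org believes attacks are rare
actual = risk_exposure(0.25, impact)     # shared incident data says otherwise

print(perceived)  # 10000.0
print(actual)     # 250000.0
# Relying on the smaller figure, the organisation under-invests in
# assurance by a factor of 25 for this threat.
```

This is only a sketch; real risk models weigh many threats and stakeholders at once, which is precisely why the principles above stress sharing and aligning risk knowledge across all of them.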