There are standards, practices, and methods to define what should be done to address software assurance, but no explanation of why it is needed in the first place. This set of principles, assembled by a team of researchers at the Software Engineering Institute (SEI), formulates a response to that need. This article is primarily an excerpt from "Foundations for Software Assurance" by Carol Woody and Nancy Mead of the SEI and Dan Shoemaker of the University of Detroit Mercy, presented at the Hawaii International Conference on System Sciences (HICSS), Maui, Hawaii, 2012.
The full paper is available for your review at http://resources.sei.cmu.edu/library/asset-view.cfm?assetID=75631.
Much of the information protection in place today is based on principles established by Saltzer and Schroeder in their paper "The Protection of Information in Computer Systems," which appeared in Communications of the ACM in 1974 [1]. They defined security as "techniques that control who may use or modify the computer or the information contained in it" and described the three main categories of concern: confidentiality, integrity and availability (CIA). Their proposed design principles, which focus on protection mechanisms to "guide the design and contribute to an implementation without security flaws" [1], are still taught in today's classrooms. They established eight principles for security in software design and development [1]:
1. Economy of mechanism: Keep the design as simple and small as possible.
2. Fail-safe defaults: Base access decisions on permission rather than exclusion.
3. Complete mediation: Every access to every object must be checked for authority.
4. Open design: The design should not be secret. The mechanisms should not depend on the ignorance of potential attackers, but rather on the possession of specific, and more easily protected, keys or passwords.
5. Separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key.
6. Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job.
7. Least common mechanism: Minimise the amount of mechanism common to more than one user and depended on by all users.
8. Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly.
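To make several of these ideas concrete, the following is a minimal sketch in Python. It is entirely our own illustration rather than anything from Saltzer and Schroeder or the excerpted paper, and every class, permission, and identifier name in it is hypothetical. It shows how fail-safe defaults, complete mediation, and least privilege might look in a toy access-control layer:

# Illustrative sketch only; all names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    name: str
    # Least privilege: a user carries only the permissions it needs.
    permissions: frozenset = frozenset()

class Document:
    def __init__(self, doc_id: str, contents: str):
        self.doc_id = doc_id
        self._contents = contents

    def read(self, user: User) -> str:
        # Complete mediation: every access to every object is checked;
        # there is no cached "already authorised" shortcut.
        if not self._authorised(user, "read"):
            raise PermissionError(f"{user.name} may not read {self.doc_id}")
        return self._contents

    def _authorised(self, user: User, action: str) -> bool:
        # Fail-safe default: grant only on explicit permission;
        # anything not explicitly permitted is denied.
        return f"{action}:{self.doc_id}" in user.permissions

alice = User("alice", frozenset({"read:report-42"}))  # one permission only
report = Document("report-42", "quarterly figures")
print(report.read(alice))       # permitted: explicit "read" permission held
try:
    report.read(User("mallory"))
except PermissionError as exc:
    print(exc)                  # denied by default: no permission held

Note how the deny-by-default check means that forgetting to grant a permission produces a safe failure, whereas a deny-list design would fail open.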
Time has shown the value and utility of these principles; however, it is worth remembering that they were developed prior to the Morris worm, which generated a massive denial of service by infecting over 6,000 UNIX machines on November 2, 1988 [2]. To provide a technology context, consider that the IBM System/360 was in use from 1964 to 1978, and the IBM System/370 came on the market in 1972. An advanced operating system, MVS (Multiple Virtual Storage), was released in March 1974 [3].
These principles were also assembled prior to the identification of the more than 46,500 software vulnerabilities and exposures that are currently exploitable in today's software products, as catalogued in the Common Vulnerabilities and Exposures (CVE) database at http://cve.mitre.org/. When these principles were developed, "buffer overflow," "malicious code," "cross-site scripting" and "zero-day vulnerabilities" were not part of the everyday vocabulary of operational software support personnel. Patches were carefully tested and scheduled to minimise operational disruption, rather than pushed into operation to minimise attack vectors.
While these principles remain useful for the security of an individual piece of technology, they are no longer sufficient to address the complexity and sophistication of the environment within which that component must operate. We must broaden our horizon to consider the large-scale, highly networked, software-dependent systems upon which our entire critical infrastructure depends, from phones, power and water to industries such as banking, medicine and retail.
Software assurance is the term commonly used to describe this broader context. The Committee on National Security Systems (CNSS) [4] defines software assurance as follows:
"Software assurance (SwA) is the level of confidence
that software is free from vulnerabilities, either
intentionally designed into the software or accidentally
inserted at any time during its life cycle, and that the
software functions in the intended manner."
There are vast lists of practices and procedures that describe what should be done to address software assurance. There is an equally large body of complaints that effective assurance is not being achieved in today's software. We posit that some of this inaction stems from a general lack of understanding of why this additional work is needed. In our scrutiny of the wide range of published materials, the case for why to focus on software assurance, the question any two-year-old would ask, has not yet been made. We propose the following seven principles in response:
Risk
A perception of risk drives assurance decisions.
Organisations without effective software assurance perceive risk based on successful attacks on software and systems and usually respond reactively. They may implement assurance choices such as policies, practices, tools and restrictions based on their perception of the threat of a similar attack and the expected impact should that threat be realised. Organisations can incorrectly perceive risk when they do not understand their threats and impacts. Effective software assurance requires that risk knowledge be shared among all stakeholders and technology participants; too frequently, however, risk information is considered highly sensitive and is not shared, leaving uninformed organisations to make poor risk choices.
Interactions
Highly connected systems like the Internet require alignment of risk across all stakeholders and all interconnected technology elements; otherwise, critical threats will be missed or ignored at different points in the interactions. When everything is highly interconnected, it is no longer sufficient to consider only the most critical components. Interactions occur at many technology levels (e.g., network, security appliances, architecture, applications and data storage) and are supported by a wide range of roles.