
Certification Objective 2.01–Describe Concepts of Insecure Systems, User Trust, Threat, and Risk

Although the expressions secure system and insecure system are frequently used in various contexts, and most of us intuitively understand what they mean, the exam objectives require that you have a clear understanding of these concepts. Unlike information security, which has a generally accepted definition, a secure system has no single, universally applicable formal definition, so let us formulate one for the purposes of this guide:

A secure system has certain security functionalities and provides certain assurances that it will function in accordance with and enforce a defined security policy in a known environment provided it is operated in a prescribed manner.

Let's now consider this definition part by part to see what each part means.

"A secure system has certain security functionalities…"

For a system to be secure, it should have security mechanisms in place and implement controls that, in particular, prevent, detect, and recover from security violations. At this point, it would be useful to compare the concepts of security and safety. Although these concepts are closely connected and related, they are fundamentally different in scope. Safety is about protection against unintentional, natural threats—that is, actions that are not performed by an intelligent and malicious adversary. Security, in contrast, is about protection against intentional acts of intelligent and malicious adversaries. Although in many cases security mechanisms and safety measures may overlap and support each other, they are not the same, because they guard against different threats.

"…and provides certain assurances…"

Functionality alone is useless if you cannot depend on it. A secure system provides certain assurances that it will enforce its security policy and that its functionality performs as described. The degree of assurance may, for example, be described as very high, high, medium, low, or very low, or may be expressed in more formal terms. We briefly discussed the issue of functionality versus assurance in Chapter 1, and we will also discuss some aspects of assurance as they relate to certification and accreditation of systems in Chapter 3.

"…that it will function in accordance with and enforce a defined security policy…"

To be described as secure, a system must have a defined security policy, because the security policy defines what is "secure" in every particular case. There can be no secure system without a security policy, because it is the security policy that says what should or should not be allowed, when this should or should not occur, and how it should or should not be allowed.

"…in a known environment…"

How and where a system is used greatly affect its security. A system that may be considered secure when used in one environment—such as an isolated, well-guarded data center operated by professionals—may not be considered secure in another environment—such as on a street corner, connected to the Internet, and operated by amateurs. This is because it is impossible to create a system that is secure in all environments and against all threats and that at the same time is functional, effective, and efficient.

"…provided it is operated in a prescribed manner."

There is no such thing as a completely automatic or foolproof system. This final requirement stipulates that a system must be professionally installed, operated, and maintained in accordance with its requirements and must not be misused.

As you can see, the definition of a secure system includes many clarifications, restrictions, and conditions. This illustrates why it is so difficult to develop secure systems and to keep them secure once deployed: complex interactions exist between various external and internal entities, requirements, and environments.

Trust

Security is basically a matter of trust, and the concept of trust has several different meanings when used in the information security context. The first is the traditional dictionary definition of trust: assured reliance on the character, ability, strength, or truth of someone or something. Another definition, attributed to the U.S. National Security Agency, relates to trusted systems: a trusted system or component has the power to break one's security policy. A trusted system breaking security may seem like an oxymoron—how do you trust a component that can break your security policy?—but it is not. Although it is a good engineering practice to have as few trusted components as possible (remember the principles of least privilege and minimization), it is impossible to eliminate them altogether. This means that in any system, you have to trust at least one component that may theoretically break your security policy. This also means that maximum engineering and maintenance efforts are directed at this trusted component to minimize the possibility of security violation.

Trusted path is the term used to describe the secure communication channel that exists between the user and the software (an application or the operating system itself). A trusted path exists when a mechanism is in place to assure users that they are indeed interacting with the genuine application or the operating system and not software that impersonates them. Put simply, trusted path is an assurance that the user's keystrokes are read only by the intended application, and screen output indeed comes from the intended application. Although an important feature of trusted computer systems, trusted path facilities are not widely available in general-purpose operating systems.

Trust relationships between entities follow several rules of trust, and the three most important of those are briefly summarized here:

  • Trust is not transitive. If A trusts B, and B trusts C, it does not mean that A automatically trusts C.

  • Trust is not symmetric. If A trusts B, it doesn't mean that B trusts A.

  • Trust is situational. If A trusts B in situation X, it doesn't mean that A trusts B in situation Y.
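
To make these rules concrete, the following minimal sketch (in Python, using entirely hypothetical entities and situations) models trust as a set of directed, situational relations; nothing is inferred transitively, symmetrically, or across situations unless it is declared explicitly.

```
# A trust relation is modeled as a set of (truster, trustee, situation) triples.
# Trust holds only for triples that are explicitly present: nothing is derived
# transitively or symmetrically, and nothing carries over between situations.
trust = {
    ("A", "B", "payment processing"),
    ("B", "C", "payment processing"),
    ("A", "B", "log storage"),
}

def trusts(truster: str, trustee: str, situation: str) -> bool:
    """Return True only if this exact directed, situational relation was declared."""
    return (truster, trustee, situation) in trust

# Not transitive: A trusts B and B trusts C, yet A does not trust C.
print(trusts("A", "C", "payment processing"))   # False

# Not symmetric: A trusts B, but B does not trust A.
print(trusts("B", "A", "payment processing"))   # False

# Situational: A trusts B for log storage, but not for, say, key management.
print(trusts("A", "B", "key management"))       # False
```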

User trust refers to the users' expectations of reasonable security of systems, which in practical terms is the responsibility of the security administrators who enforce the security policy set by management. User trust may also refer to expectations of reasonable operation of systems (hardware and software), which is closely linked to the issue of assurance. User trust is gained and maintained by defining sound security policies and by implementing and enforcing them professionally.

Threats

A threat is anyone or anything that can exploit a vulnerability. Threats to information systems may be grouped into the following broad categories:

  • Natural threats: Floods, earthquakes, fires, and tornadoes

  • Physical threats: Damage, loss, theft, and destruction by humans

  • Logical threats: Network attacks, malicious hackers, and software glitches

Exam Watch 

A threat describes the potential for attack against, or exploitation of, a vulnerable business asset. The term also reflects the cost of an attack weighed against the benefit the attacker can obtain from it; the benefits may be financial, strategic, tactical, or indirect. A threat does not describe an administrator's decision to accept a specific risk.

It is important to realize that threats come in all shapes and sizes and are not limited to the preceding examples. All threats, regardless of type, affect the confidentiality, integrity, or availability of information.

Vulnerabilities

Vulnerabilities are weaknesses that can be exploited by threats. As with threats, vulnerabilities can be of very different types—software bugs, uneducated staff, absence of access controls, and inadequate security management, to name a few. One thing that unites all vulnerabilities is that they can be exploited by threats, thus posing a risk. As the risk formula presented later in this section shows, if no vulnerabilities are present, there is no risk—however, we know that in practice any system has vulnerabilities. Although a myriad of vulnerabilities are possible, they belong to one or both of the following groups.

Vulnerability by Design

When a system is poorly designed (that is, security considerations are either not taken into account or are inadequately designed), it is vulnerable by design. Systems vulnerable by design are insecure regardless of whether or not the design in question is well-implemented. The definition of systems in this context is wide and includes computers, networks, protocols, devices, software, operating systems, and applications.

Vulnerability by Implementation

Vulnerability by implementation is caused by bad implementation of an otherwise well-designed system. This means that even if a system is well-designed, with security taken into account at the design stage, it may still be vulnerable because of poor implementation. Of course, systems may be vulnerable both by design and by implementation if ill-designed and ill-implemented at the same time.

Risks and Risk Management

Risk is the likelihood and cost of a threat exploiting a vulnerability. Information security management is essentially about risk management, because in the vast majority of cases it is either impossible or not cost-effective to eliminate all risks. In these cases, risk management helps us to understand risks and to decide which risks to minimize, which to transfer (insure against), and which to accept. An integral part of risk management is risk analysis and assessment: identifying risks, assessing the possible damage they could cause, and deciding how to handle them. Information security risk management involves three steps:

  1. Assign a value and relative importance to information assets and information processing resources.

  2. Assess and analyze the risk.

  3. Decide how to handle identified risks, which usually includes selecting and implementing countermeasures.

A simple formula that conveniently shows the relationship between threats, vulnerabilities, and risk is shown here:

Threats × Vulnerabilities × Asset value = Risk

As you can see, if threats, vulnerabilities, or asset value equals zero, the resulting risk is also zero. That is, if the asset in question has no value, has no vulnerabilities, or faces no threats, there is no risk. In practice, however, risk is never zero.
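
As a purely illustrative sketch of this formula, the Python fragment below multiplies hypothetical threat and vulnerability ratings (on a 0-to-1 scale) by a hypothetical asset value; the numbers and scales are invented for the example, not prescribed by the exam objectives.

```
def risk(threat_level: float, vulnerability_level: float, asset_value: float) -> float:
    """Risk = Threats x Vulnerabilities x Asset value (all inputs hypothetical)."""
    return threat_level * vulnerability_level * asset_value

# Hypothetical ratings: moderate threat, low vulnerability, asset worth 100,000.
print(risk(0.5, 0.2, 100_000))   # 10000.0

# If any factor is zero -- no threats, no vulnerabilities, or no asset value --
# the resulting risk is also zero.
print(risk(0.0, 0.2, 100_000))   # 0.0
```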

Assignment of Value to Information Assets

When determining the value of information assets and information processing resources, it is important that you take into account the total value of information, which is often much higher than it may appear at first glance. The following factors, in particular, should be considered when estimating the value of information assets:

  • Cost to acquire or develop information

  • Cost to maintain and protect information

  • Value of information to owners and users

  • Value of information to adversaries

  • Value of intellectual property

  • Price others are willing to pay for the information

  • Cost to replace information if stolen or lost

  • Cost of productivity lost if information is not available

  • Cost of legal and regulatory considerations

  • Damage to reputation or public confidence if information is compromised or lost

Valuation of information assets is a complex and subjective exercise in which very often no single correct value exists; however, the more factors you consider, the more accurate your valuation will be.
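
One simple, purely illustrative way to make such a valuation repeatable is to record an estimate for each factor and total them, as in the sketch below; the factor names echo the list above, but all figures are hypothetical.

```
# Hypothetical per-factor estimates (in monetary units) for a single information asset.
valuation_factors = {
    "cost to acquire or develop": 40_000,
    "cost to maintain and protect": 10_000,
    "cost to replace if stolen or lost": 55_000,
    "productivity lost if unavailable": 25_000,
    "legal and regulatory exposure": 15_000,
}

total_value = sum(valuation_factors.values())
print(f"Estimated asset value: {total_value}")   # Estimated asset value: 145000
```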

Risk Analysis and Assessment

Various approaches and methodologies have been developed for risk analysis and assessment; however, all of them follow a qualitative, quantitative, or hybrid risk-analysis approach. Quantitative risk analysis requires assignment of numeric/monetary values to assets and risks; as a result, it provides more objective metrics than qualitative risk analysis but is also more complex to perform. Qualitative risk analysis, on the other hand, does not use numeric values but instead deals with subjective estimates such as "low," "medium," and "high" when ranking risks and the value of assets. Qualitative risk analysis also depends heavily on the knowledge and experience of those performing it, because they use their judgment to decide what values particular risks should be assigned.
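
As an illustrative sketch of the quantitative approach, the example below computes per-incident and annualized loss estimates using the commonly used terms single loss expectancy (SLE), annualized rate of occurrence (ARO), and annualized loss expectancy (ALE); the figures are hypothetical, and the terms are industry conventions rather than definitions taken from this section.

```
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """Loss expected from a single incident: asset value x fraction of value lost."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, annual_rate_of_occurrence: float) -> float:
    """Expected loss per year: per-incident loss x expected incidents per year."""
    return sle * annual_rate_of_occurrence

# Hypothetical figures: an asset worth 200,000, of which 25% would be lost per
# incident, with an incident expected roughly once every two years (ARO = 0.5).
sle = single_loss_expectancy(200_000, 0.25)    # 50000.0
ale = annualized_loss_expectancy(sle, 0.5)     # 25000.0
print(f"Per-incident loss estimate: {sle}")
print(f"Annualized loss estimate:  {ale}")
```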

Regardless of the risk analysis method used, the results of risk analysis should in particular include the following:

  • Lists of assets, threats, risks, and vulnerabilities

  • Estimated rates of occurrence

  • Estimates of potential losses on a per-incident and annualized basis

  • Probability of threat occurrences

The relationship between threats, vulnerabilities, and risks may be further expressed as follows:

  1. Threat exploits vulnerability.

  2. Vulnerability results in risk.

  3. Risk can result in loss.

  4. Exposure to loss can be counteracted by safeguards.

  5. Safeguards affect the ability of threats to exploit vulnerabilities.

The output of risk analysis and assessment is the input to the next step in risk management: deciding how to handle the risks and selecting and implementing countermeasures, mechanisms, and controls that minimize the identified risks.

Selection and Implementation of Countermeasures

After information assets have been identified, their values estimated, and risks affecting them analyzed and assessed, it's time to handle the risks. Generally, you can handle risks in four ways: transfer them, reduce them, accept them, or ignore them. The first three are perfectly acceptable approaches to handling risk, but the last one—ignoring risks—is a recipe for disaster.

Reduction of risk requires selection of countermeasures, which is both a business and a technical decision: the countermeasures should not only be appropriate from the technical viewpoint but should also make sense from the business viewpoint. This is one of the requirements of good information systems governance and management, and it means that the process of selecting countermeasures should involve both management and technical staff and should take into account the following diverse considerations:

  • An organization's strategy

  • Product/service/solution cost

  • Planning and design costs

  • Implementation/installation costs

  • Environment modification costs

  • Compatibility issues

  • Maintenance requirements

  • Testing requirements

  • Repair costs

  • Operational costs

  • Effect on productivity of staff

  • Effect on system/network performance

These are just some of the issues that must be considered before selecting a security mechanism or control, to make sure it addresses the risks in question, is cost-effective, and, taken as a whole, brings more benefit than hassle to the organization.
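
One widely used way to capture the cost-effectiveness requirement, sketched below with hypothetical figures, is to check that the annual risk reduction a countermeasure delivers exceeds its total annual cost; this rule of thumb is an illustration, not a complete selection procedure.

```
def countermeasure_is_worthwhile(ale_before: float, ale_after: float,
                                 annual_cost: float) -> bool:
    """A countermeasure pays off if the annual risk reduction exceeds its annual cost."""
    risk_reduction = ale_before - ale_after
    return risk_reduction > annual_cost

# Hypothetical figures: annualized loss drops from 25,000 to 5,000, and the
# countermeasure (purchase, installation, maintenance, operation) costs 12,000 a year.
print(countermeasure_is_worthwhile(25_000, 5_000, 12_000))   # True
```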

Exam Watch 

Risk assessment is a critical element in designing the security of systems and is a key step in the accreditation process, the formal acceptance of the adequacy of the system's overall security by the management of a particular organization. For more information about accreditation, see Chapter 3.

