
Cyber Risk Concepts - CRISC certification notes - Part 2 - Risk Assessment

  • brencronin
  • 1 day ago
  • 21 min read

Risk Evaluation & Risk Assessment


Once the scope of the risk analysis is clearly defined, the next critical phase is Risk Evaluation. This stage involves assessing the potential risks to the organization's people, assets, and data within the context of the defined system or environment.


Risk Evaluation serves as the analytical core of any cyber risk management process. While much of the industry content and discussion around cyber risk tends to focus on this phase, it's important to remember that a mature and effective risk management program must begin with strong Risk Governance to provide structure, alignment, and strategic direction.


This phase is also broad, as it encompasses a variety of risk evaluation methodologies and techniques, ranging from qualitative assessments to quantitative modeling. The diversity of approaches highlights the flexibility required to adapt risk evaluation to different organizational needs, regulatory environments, and risk appetites.


Risk Assessment versus Risk Evaluation


A risk assessment is the structured process of identifying, evaluating, and analyzing risk to provide a clear picture of an organization’s exposure and resilience.


  • Risk Identification pinpoints threats, vulnerabilities, assets, and existing controls.

  • Risk Evaluation determines the likelihood and potential impact of threats exploiting vulnerabilities, leading to harm or loss of value.

  • Risk Analysis integrates these elements, assessing control effectiveness, identifying gaps, and highlighting the difference between the current and desired risk state.


It’s important to clarify terminology: when someone refers to a “risk assessment,” they may mean only the evaluation phase, or the broader lifecycle that includes evaluation, response, and ongoing monitoring. Depending on the framework or standard, risk identification & evaluation may be embedded in the assessment or treated as a separate step.


In general, the assessment process follows a consistent sequence: identifying threat sources and events, assessing vulnerabilities, analyzing controls, determining likelihood and impact, prioritizing risks, recommending controls, and documenting results.


Clarifying the Boundaries Between Risk Assessment, Response, and Monitoring


It's common for organizations and practitioners to blur the lines between Risk Assessment and Risk Response because the two are closely interconnected: risk responses are inherently driven by the outcomes of risk evaluations and assessments. Similarly, Risk Monitoring & Communication is often blended into these two phases, creating further overlap.

This merging of concepts is frequently referred to under the broader terms "Risk Analysis" or "Risk Assessment." While this shorthand can be convenient, it's important to recognize the distinctions:


  • Risk Evaluation/Assessment identifies and quantifies the risks.

  • Risk Response involves selecting and implementing strategies to manage those risks.

  • Risk Monitoring & Communication ensures risks remain within acceptable thresholds and informs stakeholders of changes.


In fact, the CRISC certification has a completely separate domain, 'Domain 3: Risk Response and Reporting,' which covers risk response as well as reporting on that risk.





Clarifying the Risk Assessment Domain


To better understand risk assessments, it is helpful to break the domain into core areas and their major subcomponents:


1. Risk Identification

  • Risk events

  • Threat modeling

  • Vulnerability analysis

  • Risk scenario development


2. Risk Analysis Methods

  • Qualitative analysis

  • Quantitative analysis

  • Business Impact Analysis (BIA)

  • Risk register


3. Assessment Frameworks

  • FAIR (Factor Analysis of Information Risk)

  • OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation)

  • ISO 31000

  • NIST Risk Management Framework


4. Risk Types

  • Inherent risk

  • Residual risk

  • Risk treatment gap


Risk Identification


Risk Identification - Risk Events


Key threat-related definitions:


  • Threat: A potential cause of an unwanted incident (e.g., cyberattack, natural disaster, insider misuse).

  • Threat Actor: An entity (individual or group) responsible for carrying out the threat (e.g., hacker, nation-state, insider).

  • Threat Vector: The method or pathway used to execute the threat (e.g., phishing email, malware, unpatched software).

  • Vulnerability: A weakness in a system, process, or control that can be exploited (e.g., outdated software, weak passwords).

  • Risk: The potential for loss or damage when a threat exploits a vulnerability.

  • Harm: The actual impact to the organization, whether financial, reputational, operational, or legal.


Threat

  └── Executed by → Threat Actor

        └── Uses → Threat Vector

              └── Exploits → Vulnerability

                    └── Leads to → Risk

                          └── Results in → Harm to Organization


Risk analysis involves both internal and external perspectives:


  • Inside-Out (Internal Perspective): Focuses on understanding your own assets, data, systems, and operations. This includes performing a Business Impact Analysis (BIA) to evaluate the consequences of system downtime, data loss, or corruption. Example: If a key application fails or sensitive data is compromised, what business impact follows?

  • Outside-In (External Perspective): Involves identifying threats originating from outside the organization, such as cyberattacks, third-party failures, or regulatory changes. These risks may be harder to predict or quantify due to limited visibility.


To effectively identify risks from both perspectives, organizations use structured techniques that support comprehensive risk discovery and prioritization.


Risk Identification - Threat Modeling


Threat modeling is a foundational technique in cybersecurity risk identification. It focuses on discovering how adversaries might exploit vulnerabilities in your systems.


Key Phases of Threat Modeling:


  1. Identify Potential Threats - What adversaries, tools, or techniques might target your assets?

  2. Analyze Vulnerabilities and Weaknesses - Where are your systems most susceptible to attack?

  3. Attack Modeling - How might an attacker exploit identified vulnerabilities to compromise systems or data?

  4. Data Flow Mapping - Diagram how data moves across systems to reveal attack vectors and entry points.

The best time to do threat modeling is in the design phase!

Threat Modeling - Understanding the Relationship: Threat, Vulnerability, and Risk


A threat is any circumstance or event with the potential to exploit a vulnerability and cause harm to an organization. Threats are typically carried out by threat actors through specific threat vectors (methods of attack). When a threat successfully exploits a vulnerability, it creates risk, the possibility of loss, damage, or disruption.


Threat Modeling: Understanding the CIA Triad and Adversary Objectives


A fundamental concept in cybersecurity, and a key pillar of threat modeling, is the CIA Triad: Confidentiality, Integrity, and Availability. These three principles define the core objectives for protecting systems, data, and operations. Every cyber threat or attack ultimately aims to compromise one or more of these areas.


The CIA Triad


  • Confidentiality – Ensures that sensitive information is only accessible to authorized individuals. Example Impact: Data breaches or unauthorized disclosures.

  • Integrity – Ensures that data and systems remain accurate, unaltered, and trustworthy. Example Impact: Data tampering, unauthorized system changes.

  • Availability – Ensures that systems and data are accessible when needed to support business functions. Example Impact: Denial of service, system outages.

Extended Considerations: Some practitioners expand the triad to include:
  • Authenticity – Verifying that users, systems, and data are genuine.

  • Non-repudiation – Ensuring actions or events cannot be denied after they’ve occurred.


While useful, these two concepts are often considered subcomponents of Integrity.


Adversarial Objectives: The DAD Triad


To complement the CIA Triad, threat modeling also considers the DAD triad (Disclosure, Alteration, Denial), which represents the core objectives of adversaries. Each adversary goal aligns as the opposing force to one of the CIA principles:

  • Disclosure – Exposing sensitive or private information to unauthorized entities. (Counters Confidentiality)

  • Alteration – Modifying, corrupting, or injecting unauthorized changes into data or systems. (Counters Integrity)

  • Denial – Disrupting access to systems, data, or services, rendering them unavailable. (Counters Availability)


Threat Modeling Frameworks: Key Examples and Approaches


Threat modeling helps organizations identify, assess, and mitigate potential threats early in the system design or operational lifecycle. Several established frameworks offer structured methodologies to analyze and address risks from different perspectives. Below are notable threat modeling frameworks and methodologies, each with unique strengths and areas of focus.


STRIDE: Microsoft’s Developer-Centric Model


STRIDE is a widely used threat classification model that categorizes threats into six key types:

  • Spoofing – violates Authenticity

  • Tampering – violates Integrity

  • Repudiation – violates Non-repudiation

  • Information Disclosure – violates Confidentiality

  • Denial of Service (DoS) – violates Availability

  • Elevation of Privilege – violates Authorization

STRIDE Process:


Phase 1: System Decomposition


  • Break down the system into components (e.g., servers, APIs, databases).

  • Create Data Flow Diagrams (DFDs) to understand how data moves across the system.

  • Identify trust boundaries and authentication/authorization points.


Phase 2: Threat Analysis


  • Evaluate each component against STRIDE categories.

  • Use threat trees or attack path analysis to assess potential vulnerabilities.

  • Consider business impact and document assumptions.


Phase 3: Mitigation Planning


  • Implement technical and procedural controls for each identified threat.

  • Plan for detection, response, and recovery.

  • Examples: strong authentication for spoofing, integrity checks for tampering, rate-limiting for DoS.


PASTA: Process for Attack Simulation and Threat Analysis


PASTA is a risk-centric threat modeling framework that emphasizes aligning technical threats with business impact.


PASTA's 7 Stages:


  1. Define business and security objectives

  2. Define technical scope (assets, boundaries)

  3. Application decomposition (data flow, architecture)

  4. Threat analysis (external and internal threats)

  5. Vulnerability and weakness analysis

  6. Attack modeling (simulating adversary behavior)

  7. Risk and impact analysis (business consequences)


DREAD: Threat Scoring System


DREAD is a risk assessment model used to prioritize threats based on potential damage and exploitability.

  • Damage – How severe is the impact?

  • Reproducibility – How easy is it to reproduce the attack?

  • Exploitability – How easy is it to exploit?

  • Affected Users – How many users are impacted?

  • Discoverability – How easy is it to discover the threat?

Each factor is typically scored, and total scores help prioritize remediation efforts.
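As a concrete illustration, the DREAD factors can be turned into a simple scorer. The 1–10 scale, the two example threats, and plain summation are assumptions for this sketch; teams calibrate their own scales and weights.

```python
# Illustrative DREAD scorer; the 1-10 scale and simple summation/averaging
# are assumptions -- teams typically calibrate their own scales.
from dataclasses import dataclass

@dataclass
class DreadScore:
    damage: int            # how severe is the impact?
    reproducibility: int   # how easy is it to reproduce the attack?
    exploitability: int    # how easy is it to exploit?
    affected_users: int    # how many users are impacted?
    discoverability: int   # how easy is it to discover the threat?

    def total(self) -> int:
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability)

    def average(self) -> float:
        return self.total() / 5

# Rank two hypothetical threats by DREAD total to prioritize remediation.
sqli = DreadScore(damage=9, reproducibility=8, exploitability=7,
                  affected_users=9, discoverability=6)
verbose_errors = DreadScore(damage=3, reproducibility=9, exploitability=5,
                            affected_users=2, discoverability=8)
ranked = sorted([("sqli", sqli), ("verbose_errors", verbose_errors)],
                key=lambda t: t[1].total(), reverse=True)
print([(name, s.total()) for name, s in ranked])  # [('sqli', 39), ('verbose_errors', 27)]
```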


Attack Trees


Attack Trees are visual models that map how a system can be compromised, starting from an attacker's goal and branching into sub-tasks or attack paths. They are useful for brainstorming and visualizing complex attack scenarios.
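A minimal sketch of how an attack tree can be represented and evaluated in code; the node names and leaf probabilities are hypothetical. AND nodes require every child step to succeed, while OR nodes require any one.

```python
# Minimal attack-tree sketch: AND nodes need every child to succeed,
# OR nodes need at least one. Node names/probabilities are hypothetical.
def tree_probability(node):
    kind, children = node.get("kind"), node.get("children")
    if children is None:
        return node["p"]                  # leaf: estimated success probability
    probs = [tree_probability(c) for c in children]
    if kind == "AND":                     # all sub-steps must succeed
        result = 1.0
        for p in probs:
            result *= p
        return result
    fail = 1.0                            # OR: at least one sub-step succeeds
    for p in probs:
        fail *= (1 - p)
    return 1 - fail

steal_data = {"kind": "OR", "children": [
    {"kind": "AND", "children": [{"p": 0.4},    # phish an admin
                                 {"p": 0.5}]},  # bypass MFA
    {"p": 0.1},                                 # exploit exposed service
]}
print(round(tree_probability(steal_data), 3))  # 0.28
```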


TRIKE


TRIKE is a risk-based framework that produces a risk model from system requirements and security audits. It focuses on user roles, actions, and assets to map threats and prioritize mitigations based on defined acceptable risk levels.


VAST (Visual, Agile, and Simple Threat)


VAST is designed for integration into DevOps and Agile environments. It promotes scalability and operational alignment by using:


  • Application threat models for development teams

  • Operational threat models for infrastructure and security teams


Cyber Kill Chain (by Lockheed Martin)


The Cyber Kill Chain outlines the phases of a targeted cyberattack:


  1. Reconnaissance

  2. Weaponization

  3. Delivery

  4. Exploitation

  5. Installation

  6. Command & Control

  7. Actions on Objectives


It’s commonly used for threat detection and incident response planning.


Cyber Threat Intelligence (CTI) and Threat Actor Profiling


Understanding threat actors is a foundational element of cyber threat intelligence. Effective profiling helps organizations assess potential risks and design defensive strategies tailored to the adversary's tactics, techniques, and objectives.


Key Threat Actor Profile Elements (A-T-S-C-St-P-I-Ai-Ag)


Each threat actor can be assessed based on the following characteristics:

  • A – Adversary: Who or what is the threat actor? (e.g., nation-state, insider, hacktivist)

  • T – Target: What or who is the primary target of the actor’s campaign?

  • S – Scope: What is the extent or range of the targeting activity?

  • C – Capability: How advanced and mature are the tools and techniques used by the actor?

  • St – Stealth: How concerned is the actor with avoiding detection during operations?

  • P – Persistence: How long is the actor likely to remain in the network once a breach occurs?

  • I – Intent (Internal): Has the actor been observed targeting or operating inside your organization?

  • Ai – Activity (Internal): What is the level of internal activity related to the organization?

  • Ag – Activity (Global): Is the actor active globally, and what is the severity of their operations?

Threat Assessment Scoring Model


Each attribute is rated on a scale of 1 to 5, with 5 representing the highest risk or sophistication. The score can be plugged into a weighted formula to assess the overall threat level:


Threat Assessment Score:


Threat Score = A + ∑T + ∑S + C + St + P × (∑I + ∑Ai + ∑Ag)


Customize the formula weights based on your organization’s risk tolerance and environment.
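The scoring formula above can be transcribed directly into code. The sample ratings below are hypothetical, and no extra weights are applied (the formula is used exactly as written, with P multiplying the intent/activity sum).

```python
# Sketch of the threat-score formula above. Ratings use the 1-5 guide;
# the sample values are hypothetical, and operator precedence follows
# the formula as written (P multiplies the intent/activity sum).
def threat_score(A, T, S, C, St, P, I, Ai, Ag):
    """A, C, St, P are single ratings; T, S, I, Ai, Ag are lists (summed)."""
    return A + sum(T) + sum(S) + C + St + P * (sum(I) + sum(Ai) + sum(Ag))

# Example: a nation-state actor (A=5) with two rated targets and one scope entry.
score = threat_score(A=5, T=[5, 4], S=[4], C=5, St=5, P=5,
                     I=[5], Ai=[4], Ag=[5])
print(score)  # 5 + 9 + 4 + 5 + 5 + 5*(5+4+5) = 98
```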

Threat Actor Rating Guide (Scale: 1 – 5)

  • Adversary: 5 = Nation-state, 4 = Insider, 3 = Criminal Group, 2 = Terrorist Group, 1 = Hacktivist

  • Target: 5 = Critical Infrastructure, 4 = Organization, 3 = Hardware, 2 = Software, 1 = No Clear Target

  • Scope: 5 = High-Value Targets, 4 = Org-Specific, 3 = Industry-Wide, 2 = Technology-Specific, 1 = Broad/Unfocused

  • Capability: 5 = Integrated Custom Tools, 4 = Advanced Toolkits, 3 = Developed Tools, 2 = Augmented Tools, 1 = Acquired Tools

  • Stealth: 5 = Extremely High Concern, 4 = High Concern, 3 = Moderate Concern, 2 = Limited Concern, 1 = No Concern

  • Persistence: 5 = Long-Term Access (Months+), 4 = Persistent (Weeks+), 3 = Moderate (Days–Weeks), 2 = Short-Term (Hours–Days), 1 = None/Opportunistic

  • Intent: 5 = Geopolitical/Strategic, 4 = Financial Gain, 3 = Stepping Stone, 2 = Personal Motive, 1 = Unknown

  • Activity (Internal): 5 = Active Currently, 4 = Recent Activity, 3 = Historical Activity, 2 = Expressed Interest, 1 = None Observed

  • Activity (Global): 5 = Critical Global Presence, 4 = High Activity, 3 = Moderate Activity, 2 = Low Activity, 1 = Dormant/Inactive

Other Models: Threat Box Assessment


(Based on Andy Piazza's "Quantifying Threat Actors with Threat Box.")


Intent: Why would this actor target the organization with this type of attack?

  • 5 — Target-Specific Data

  • 4 — Ideology Association

  • 3 — Sector Association

  • 2 — Regional Association

  • 1 — Target of Opportunity


Willingness Modifier: What constraints could impact the actor’s intent?

  • 0: Strained diplomatic relations, prior hostilities, or perceived significant economic disruption from the organization’s operations.

  • -1: Moderate relations and moderate economic dependencies between the actor and organization.

  • -2: Strong diplomatic, economic, and security ties with relevant governments.


Capability: What evidence indicates the actor can conduct this type of attack?

  • 5 — Significant Capability: Strong evidence of prior attacks of this type, confirmed by multiple trusted sources.

  • 4 — Credible Capability: Credible operational capability, moderately confirmed.

  • 3 — Limited Capability: Some evidence, but from limited sources.

  • 2 — Possible Capability: Very limited evidence; only feasibility confirmed.

  • 1 — Not Capable: No evidence; feasibility unconfirmed.


Novelty Modifier: What advanced skills or techniques are evident?

  • 0: Custom toolset per campaign with demonstrated “living off the land” techniques.

  • -1: Limited/high-cost toolset used across multiple campaigns.

  • -2: Tools are generally available and widely used.
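The Threat Box scores above can be sketched as two adjusted axes: intent plus the willingness modifier, and capability plus the novelty modifier. Clamping the results to the 1–5 band is an assumption for this illustration.

```python
# Sketch of a Threat Box calculation: intent adjusted by the willingness
# modifier, capability adjusted by the novelty modifier. Clamping the
# adjusted values to the 1-5 band is an assumption for illustration.
def threat_box(intent, willingness_mod, capability, novelty_mod):
    adj_intent = max(1, min(5, intent + willingness_mod))
    adj_capability = max(1, min(5, capability + novelty_mod))
    return adj_intent, adj_capability

# Actor with sector-association intent (3) constrained by strong diplomatic
# ties (-2), and credible capability (4) using widely available tools (-2):
print(threat_box(3, -2, 4, -2))  # (1, 2)
```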


Threat Modeling Tools: Examples and Capabilities


Threat modeling tools help teams visualize system architectures, identify potential security threats, and plan mitigation strategies. Below are examples of widely used commercial and open-source tools that support various threat modeling frameworks and development workflows.


Commercial Tools


Microsoft Threat Modeling Tool


  • Framework: STRIDE-based

  • Functionality: Enables users to visually model system architectures and automatically identify threats based on STRIDE categories.

  • Best For: Developers and architects seeking an easy-to-use, visual tool for STRIDE analysis.


IriusRisk


  • Functionality: Provides automated threat modeling with risk scoring, security control recommendations, and integration into DevSecOps pipelines.

  • Key Features: Real-time updates, threat libraries, and support for compliance standards.

  • Best For: Organizations requiring automation and integration with CI/CD workflows.


ThreatModeler


  • Functionality: A cloud-native threat modeling platform that supports scalable modeling of application architectures.

  • Key Features: Collaborative modeling, automated threat identification, risk scoring, and integration with existing tools (e.g., JIRA, Jenkins).

  • Best For: Enterprises looking for collaboration and enterprise-grade threat modeling capabilities.


Open Source Tools


OWASP Threat Dragon


  • Functionality: Web-based and desktop tool that supports threat modeling through system diagramming and threat analysis.

  • Features: Integrates with GitHub for version control; follows STRIDE methodology.

  • Best For: Security-focused teams seeking a free, accessible, and community-supported tool.


PyTM


  • Functionality: Python-based threat modeling framework allowing users to define system components and threats programmatically.

  • Use Case: Ideal for automating threat models and integrating them into development pipelines.

  • Best For: Developers comfortable with Python looking for flexibility and automation.


ThreatSpec


  • Functionality: Enables threat modeling through code annotations, aligning security considerations with infrastructure-as-code (IaC) practices.

  • Best For: DevSecOps teams looking to embed threat modeling directly into their codebase for seamless, real-time security analysis.


Risk Identification - Vulnerability Analysis


Vulnerability Analysis is the systematic identification and evaluation of weaknesses in systems, processes, people, or controls that could be exploited by threats to cause harm to assets or business objectives.


Inputs to Vulnerability Analysis


Understanding the sources that feed vulnerability identification is critical for the CRISC exam:

  • Asset Inventory: Knowing what you have is the foundation — systems, data, applications, people, and processes.

  • Threat Intelligence: External and internal threat data help correlate vulnerabilities with realistic threat actors or conditions.

  • Vulnerability Scans & Assessments: Automated tools (e.g., Nessus, Qualys) or manual assessments that identify known weaknesses.

  • Penetration Tests: Controlled exploitation to validate vulnerabilities and assess compensating controls.

  • Configuration Reviews: Identifying insecure configurations, weak hardening, or deviation from baselines.

  • Audit Findings / Incident Reports: Historical issues and recurring control failures reveal systemic vulnerabilities.


CRISC expects you to distinguish between technical, procedural, and organizational vulnerabilities:


  • Technical: Unpatched systems, insecure APIs, misconfigurations, unencrypted data transmissions.

  • Procedural: Inadequate change management, poor backup processes, lack of incident response planning.

  • Organizational: Skills gaps, inadequate segregation of duties, weak governance, or unclear accountability.


Analysis and Prioritization of Vulnerabilities


Once vulnerabilities are identified, the next step is evaluating their significance:


  • Likelihood of Exploitation: How easy it is to exploit (CVSS score, threat activity, exploit availability).

  • Impact Severity: Business or operational consequence if exploited (data loss, downtime, compliance violation).

  • Exposure Level: Whether mitigating controls are in place and their effectiveness.
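These three factors can be combined in a simple prioritization sketch. The 0–1 scales, the multiplicative form, and the example findings are illustrative assumptions; many teams instead start from CVSS scores and adjust for exposure and compensating controls.

```python
# Illustrative vulnerability prioritization combining the three factors
# above. The 0-1 scales, multiplicative form, and example findings are
# assumptions; real programs often start from CVSS and adjust for exposure.
def vuln_priority(likelihood, impact, exposure):
    """Each factor on a 0-1 scale; a higher result means fix sooner."""
    return likelihood * impact * exposure

findings = {
    "unpatched-vpn-gateway": vuln_priority(likelihood=0.9, impact=0.8, exposure=1.0),
    "internal-test-server":  vuln_priority(likelihood=0.9, impact=0.3, exposure=0.2),
}
for name, score in sorted(findings.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```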


Risk Identification - Risk scenario development


Key CRISC Concept: The analysis must produce risk scenarios — each scenario linking threats → vulnerabilities → assets → impacts.


  1. Brainstorming - Facilitated group discussions to generate a wide range of potential risk scenarios.

  2. Risk Scenario Development - Constructing "what-if" scenarios to explore possible incidents and their business impacts. Risk managers can approach scenario development from two perspectives: top-down and bottom-up.


Bottom-Up vs. Top-Down Scenarios


Bottom-up risk scenario development begins with specific assets, asset groups, or threats. These scenarios are typically more technical in nature and less suited for executive-level reporting, but they provide valuable insight into the scope, nature, and details of risks, enabling risk managers to uncover nuances that might otherwise be overlooked. This deeper understanding strengthens top-down scenario development by enriching it with concrete data and context, ensuring a more complete view of the organization’s risk landscape. Top-down scenarios, in turn, offer the primary benefit of communicating risks more clearly to executive management.


Realistic Scenario Development


For risk scenarios to be effective, they must go beyond theory and align with the realities of the business. Strong scenario development requires that scenarios are:


  • Relevant to the organization: Scenarios should highlight specific risks and their potential impact on business objectives, rather than relying on generic examples.

  • Kept current: As technology and operations evolve, risk scenarios must be regularly updated to reflect changing exposures and magnitudes.

  • Communicated with purpose: Scenarios often serve as the first and most direct way management learns about emerging risks. They must be presented in a clear, actionable format that enables informed decisions on treatment and response.


Well-crafted scenarios bridge the gap between technical risk details and executive decision-making, ensuring risks are both understood and addressed.


Threat Modeling and CTI Sources


A closely related area to threat modeling is the use of frameworks that organize cyber threat intelligence (CTI) on adversaries and the tactics, techniques, and procedures (TTPs) they employ, most notably, MITRE ATT&CK. Defenders use these frameworks not only to evaluate security control coverage against common attack techniques, but also to map intelligence about specific threat actors targeting their organization. By identifying which techniques those adversaries most frequently deploy, defenders can validate whether their controls effectively mitigate the highest-priority threats. MITRE has further advanced this approach by introducing ATT&CK Campaigns, which provide insight into how techniques evolve and are used in real-world operations.


In MITRE ATT&CK, a Campaign groups related attack events by objectives, timeframe, and targets, even when a single threat actor is not yet attributed. This structure improves visibility into the evolution of adversary techniques, distinguishes overlapping operations, and lets users see trends, changes in tactics, and continued technique usage across campaigns. MITRE incorporated Campaigns starting with the v12 release, converting appropriate existing “Group” entries into Campaigns and supporting both attributed and unattributed campaigns. The Campaign object is defined in STIX format and integrates with existing ATT&CK elements like Groups, Techniques, and Software, preserving backward compatibility with prior versions.


These frameworks also provide critical support for:


  • Threat hunting prioritization – focusing efforts on the most relevant adversary techniques.

  • Red team/blue team planning – aligning offensive and defensive exercises with real-world adversary behaviors.

  • Executive and risk committee reporting – translating technical threats into business risk for leadership visibility.


In practice, they are often applied to control validation activities such as penetration tests, red team engagements, and targeted control assessments. One model that helps organizations structure these assessments is Micro Emulation Plans, a streamlined approach to adversary emulation developed by the MITRE Center for Threat-Informed Defense: https://github.com/center-for-threat-informed-defense/adversary_emulation_library/tree/v4.0/micro_emulation_plans



Risk Analysis Methods


Mapping Risks to Business Impact


For cybersecurity professionals, the real value lies not just in identifying technical vulnerabilities but in translating them into business consequences. Saying “a server is vulnerable” is insufficient; you must articulate what that means in terms of operations, finances, and reputation.


Impact Categories


Frame risks in business-centric terms:


  • Confidentiality – data breaches, loss of intellectual property.

  • Integrity – data tampering, ransomware, unauthorized changes.

  • Availability – downtime, disrupted services, lost productivity.

  • Reputation – loss of customer trust, regulatory scrutiny, brand damage.


Analysis Approaches


  • Qualitative – Use scales like Catastrophic, Major, Moderate, Minor, Insignificant to simplify communication with executives.

  • Quantitative – Where possible, tie risks to numbers (e.g., 72 hours downtime, $5M potential regulatory fine) to provide precision and prioritize response.


Qualitative analysis


Specialized Techniques in Risk Analysis


  • Bayesian Analysis: Uses statistical inference to update risk probabilities based on new evidence.

  • Bowtie Analysis: Visualizes risk management through diagrams showing causes, controls, and consequences.

  • Brainstorming/Interview

  • Cause and consequence analysis: Combines fault tree analysis (FTA) and event tree analysis (ETA).

  • Cause and effect analysis

  • Checklists

  • Delphi Technique: Gathers expert opinions via structured questionnaires to achieve objective risk assessments.

  • Event Tree Analysis: Examines potential outcomes resulting from an initial event.

  • FAIR

  • Fault Tree Analysis (FTA): Identifies root causes of risks through a top-down approach.

  • Human reliability analysis

  • Lotus Blossom Brainstorming method

  • Markov Analysis: Analyzes systems with multiple states to predict future states and transitions.

  • Monte-Carlo Simulation: Models various outcomes to understand risk impacts through probability distributions.

  • Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE)

  • Sneak Circuit Analysis (SCA)

  • Structured What If Technique (SWIFT)
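To make the Monte-Carlo entry above concrete, here is a minimal simulation sketch: sample an uncertain event frequency and an uncertain per-event loss many times to build a loss distribution. The distribution choices and parameters are illustrative assumptions.

```python
# Monte-Carlo sketch for the technique listed above: sample uncertain
# annual event frequency and per-event loss to build a loss distribution.
# Distribution choices and parameters are illustrative assumptions.
import random

random.seed(7)  # fixed seed for a reproducible illustration

def simulate_annual_loss(trials=10_000):
    losses = []
    for _ in range(trials):
        events = random.randint(0, 6)                  # uncertain frequency
        loss = sum(random.uniform(10_000, 90_000)      # uncertain severity
                   for _ in range(events))
        losses.append(loss)
    losses.sort()
    return losses

losses = simulate_annual_loss()
mean = sum(losses) / len(losses)
p95 = losses[int(0.95 * len(losses))]
print(f"mean is about ${mean:,.0f}; 95th percentile is about ${p95:,.0f}")
```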


Quantitative analysis


Risk Analysis - Identify Risk Criteria - Likelihood Criteria


One effective model for estimating risk probability is based on Sherman Kent’s "Words of Estimative Probability," originally used in intelligence assessments:

  • Certain – 100% (± 0%)

  • Almost Certain – 93% (± 6%)

  • Probable – 75% (± 12%)

  • Even Chances – 50% (± 10%)

  • Probably Not – 30% (± 10%)

  • Almost Certainly Not – 7% (± 5%)

  • Impossible – 0% (± 0%)
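The estimative scale above can be encoded as a small lookup table that turns a qualitative term into a probability band:

```python
# Sherman Kent's estimative terms as a lookup table of
# (probability, margin of error), mirroring the scale above.
KENT_SCALE = {
    "Certain":              (1.00, 0.00),
    "Almost Certain":       (0.93, 0.06),
    "Probable":             (0.75, 0.12),
    "Even Chances":         (0.50, 0.10),
    "Probably Not":         (0.30, 0.10),
    "Almost Certainly Not": (0.07, 0.05),
    "Impossible":           (0.00, 0.00),
}

def likelihood_range(term):
    """Return the (low, high) probability band for an estimative term."""
    p, margin = KENT_SCALE[term]
    return max(0.0, p - margin), min(1.0, p + margin)

lo, hi = likelihood_range("Probable")
print(round(lo, 2), round(hi, 2))  # 0.63 0.87
```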


Risk Analysis - Identify Risk Criteria - Impact Criteria


Impact levels (e.g., Low, Medium, High, Critical) must be clearly defined using measurable thresholds:


  • Downtime thresholds (e.g., 0–4 hrs = Low, 4–12 hrs = Medium, >12 hrs = High)

  • Financial losses

  • Reputation damage

  • Personnel safety impacts


Although some impacts may seem intangible (like brand reputation), they often tie back to financial loss, which provides a common baseline.
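The downtime thresholds above can be expressed as a small classifier. Treating the upper edge of each band as inclusive is a judgment call, not part of the definition.

```python
# Downtime-based impact classifier using the example thresholds above
# (0-4 hrs = Low, 4-12 hrs = Medium, >12 hrs = High). Treating each
# band's upper edge as inclusive is an assumption.
def downtime_impact(hours):
    if hours <= 4:
        return "Low"
    if hours <= 12:
        return "Medium"
    return "High"

print([downtime_impact(h) for h in (2, 8, 36)])  # ['Low', 'Medium', 'High']
```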


Business Impact Analysis (BIA)


BIA identifies critical systems and processes, maps their dependencies, and prioritizes recovery based on potential business impact. The BIA is important for defining impact criteria, and it also:


  • Guides selection of risk response and mitigation strategies

  • Supports defining Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs)


Completing a BIA provides a structured view of each critical process and system, including:


  • Name and purpose of the process or system

  • Responsible owner or operator

  • Functional description

  • Dependencies on other systems, suppliers, or key employees

  • Quantified impacts, such as revenue loss, users affected, or disrupted business functions


Criticality Analysis (CA)


Once BIA data is collected and organized, a criticality analysis can be conducted. CA evaluates each process and system by examining:


  • The organizational impact if the system becomes unavailable

  • The likelihood of such an outage

  • The cost of mitigating or reducing the impact


Essentially, CA is a focused form of risk analysis that zeroes in on high-value processes and systems. To be complete, it must also include or reference a threat analysis, an assessment that identifies realistic threats, maps them to probability of occurrence, and considers the effect of existing or planned mitigating controls. This ensures the organization understands both the inherent risk and the residual risk after protections are applied.


Disaster Recovery (DR) vs. Business Continuity (BC)

  • Disaster Recovery (DR) – Technical recovery of apps, data, and infrastructure

  • Business Continuity (BC) – Continuation of business operations during/after disruptions

  • Business Impact Analysis (BIA) – Prioritizes recovery actions and supports planning decisions


Key Recovery Metrics


  • Recovery Time Objective (RTO): How quickly services must be restored (forward-looking).

  • Recovery Point Objective (RPO): The maximum acceptable amount of data loss (backward-looking).

  • Maximum Tolerable Downtime (MTD): The absolute limit of downtime the business can sustain.


Example: If the business can withstand at most 6 hours of downtime, and you target restoring the system within 4 hours:


  • RTO = 4 hours

  • MTD = 6 hours


(The RTO must always be less than or equal to the MTD.)
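These metrics lend themselves to a simple consistency check. Comparing the backup interval against the RPO is an added assumption for illustration, since RPO feasibility depends on how data is actually replicated.

```python
# Minimal consistency check for recovery targets: the RTO must not
# exceed the MTD, and the RPO should be achievable given how often
# backups run (the backup-interval comparison is an added assumption).
def validate_recovery_targets(rto_hours, mtd_hours, rpo_hours, backup_interval_hours):
    issues = []
    if rto_hours > mtd_hours:
        issues.append("RTO exceeds MTD: restoration target is too slow")
    if backup_interval_hours > rpo_hours:
        issues.append("Backups are less frequent than the RPO allows")
    return issues

# The example above: restore within 4 hours, business tolerates 6 hours.
print(validate_recovery_targets(rto_hours=4, mtd_hours=6,
                                rpo_hours=1, backup_interval_hours=1))  # []
```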

 

Resilience Concepts


  • Resiliency: The ability of systems to absorb disruptions and maintain operations.

  • Recovery: The act of restoring operations and systems after failure.

  • Reliability: A system’s ability to perform consistently and meet expected standards.


Assessment Frameworks


Types of Assessments


Organizations use different types of assessments to identify risks, threats, and vulnerabilities in systems, processes, and data. These assessments may be manual, automated, or a combination of both.


  • Risk Assessment – Identifies and classifies risks tied to systems or processes.

  • Gap Assessment – Evaluates compliance with policies, standards, or requirements.

  • Threat Modeling – A threat-focused approach to identify possible attack scenarios.

  • Vulnerability Assessment – Detects weaknesses in processes, systems, or applications.

  • Maturity Assessment – Measures process or capability maturity against frameworks such as CMMI or NIST CSF.

  • Penetration Testing – Simulates real-world attacks to validate and exploit vulnerabilities using tools like scanners, fuzzers, and password crackers.

  • Data Discovery – Identifies sensitive data within systems and reviews access rights.

  • Architecture & Design Review – Examines system or process designs for potential weaknesses.

  • Code Review – Manual review of source code for logic and security flaws.

  • Code Scan – Automated scanning of source code for vulnerabilities.

  • Audit – Formal inspection to ensure controls and processes are effective and properly followed.


FAIR (Factor Analysis of Information Risk)


The Factor Analysis of Information Risk (FAIR) framework is a quantitative risk analysis model used to measure and express information security risk in financial terms (e.g., dollars).

FAIR provides a structured and repeatable way to understand, analyze, and communicate risk by decomposing it into measurable factors rather than relying on subjective “high, medium, low” ratings.


FAIR defines risk as:

The probable frequency and probable magnitude of future loss.

This means FAIR quantifies both:


  • How often a loss event is likely to occur (frequency)

  • How severe the loss will be if it occurs (impact)


FAIR Process Steps


  1. Identify the Asset at Risk (e.g., customer data, systems)

  2. Identify the Threats (actors, events that could cause harm)

  3. Identify Vulnerabilities (weaknesses that could be exploited)

  4. Estimate Loss Event Frequency (probable number of events per year)

  5. Estimate Loss Magnitude (financial impact per event)

  6. Derive and Report Risk in Quantitative Terms



  1. Loss Event Frequency (LEF)

    • How often a threat event results in a loss.

    • Derived from:

      • Threat Event Frequency (TEF) → how often threats act.

      • Vulnerability (Vuln) → probability that an action results in loss.


    LEF = TEF × Vuln


  2. Loss Magnitude (LM)

    • The total expected impact when a loss event occurs.

    • Includes both:

      • Primary Loss (direct costs, e.g., system repair, response costs)

      • Secondary Loss (indirect costs, e.g., reputation damage, legal fees)


  3. Risk Calculation

    • The overall risk is calculated as:


    Risk = Loss Event Frequency × Loss Magnitude


The result is a quantitative estimate of annualized loss exposure (ALE).
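The two FAIR equations above can be combined into a point-estimate sketch. The real model uses calibrated ranges and Monte Carlo simulation rather than single values; the scenario numbers below are hypothetical:

```python
# Hedged sketch of FAIR's top-level math with single-point estimates.

def loss_event_frequency(tef_per_year: float, vulnerability: float) -> float:
    """LEF = TEF × Vuln, where Vuln is the probability (0–1) that a
    threat event becomes a loss event."""
    return tef_per_year * vulnerability

def annualized_loss_exposure(lef: float, primary_loss: float,
                             secondary_loss: float) -> float:
    """ALE = LEF × Loss Magnitude, where LM = primary + secondary loss."""
    return lef * (primary_loss + secondary_loss)

# Hypothetical scenario: 10 threat events/year, 30% result in loss,
# $50k direct + $20k indirect cost per loss event.
lef = loss_event_frequency(10, 0.30)                 # ≈ 3 loss events/year
ale = annualized_loss_exposure(lef, 50_000, 20_000)
print(f"ALE ≈ ${ale:,.0f}")                          # ALE ≈ $210,000
```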


When communicating risk analysis, it is important to highlight what FAIR calls the risk management stack. The bottom line: risk analysis is about making better decisions.






FAIR CAM


The Controls Analytics Model (CAM), grounded in the FAIR framework, provides a structured way to evaluate the value and effectiveness of cybersecurity controls. It helps answer questions that often challenge cyber risk managers:


  • Which controls provide the most risk reduction in your cybersecurity program?

  • Which controls contribute the least value?

  • How do controls work together to reduce overall risk?

  • Which control is best suited to mitigate a specific risk?


Traditional approaches, such as maturity models, attempt to assess an organization’s security posture by comparing it against a standard list of controls (e.g., NIST CSF). These models assume a linear relationship (more controls equal less risk), but they fail to measure the actual impact of controls on risk. Similarly, outside-in scanning tools generate numerous false positives and do not link controls directly to measurable risk reduction.


FAIR-CAM addresses these gaps by categorizing controls based on their risk impact:


  1. Loss Event Controls – Controls that directly reduce risk. Example: Multi-factor authentication prevents unauthorized access.

  2. Variance Management Controls – Controls that indirectly affect risk by improving the reliability and consistency of other controls. Example: Regular patching reduces vulnerabilities, ensuring primary controls function as intended.

  3. Decision Support Controls – Controls that influence risk indirectly by guiding better decision-making. Example: Threat intelligence feeds inform security decisions, helping prioritize responses.


Each category can be further analyzed by functional impact on risk, specifically how a control influences either the frequency or magnitude of potential loss:


  • Loss Event Prevention – Reduces the likelihood of a risk event occurring.

  • Loss Event Detection – Improves the ability to identify risk events promptly.

  • Loss Event Response – Reduces the impact or severity of an event once it occurs.


FAIR-CAM enables organizations to prioritize controls based on measurable risk reduction, rather than assuming all controls are equally valuable, helping security leaders allocate resources more effectively and make data-driven decisions.
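One way to make "measurable risk reduction" concrete is to value a control as the difference between risk without it and residual risk with it. This is a simplification of what FAIR-CAM formalizes; the MFA figures below are hypothetical:

```python
# Illustrative sketch (not FAIR-CAM itself): valuing a control by the
# annualized risk reduction it produces, rather than by a maturity score.

def annualized_risk(lef: float, loss_magnitude: float) -> float:
    """Risk = Loss Event Frequency × Loss Magnitude."""
    return lef * loss_magnitude

def control_value(lef_before: float, lef_after: float,
                  lm_before: float, lm_after: float) -> float:
    """Value = risk without the control minus residual risk with it."""
    return annualized_risk(lef_before, lm_before) - annualized_risk(lef_after, lm_after)

# Hypothetical: MFA (a Loss Event Prevention control) cuts LEF from 2.0
# to 0.2 events/year; loss magnitude is unchanged at $100k per event.
value = control_value(2.0, 0.2, 100_000, 100_000)
print(f"Risk reduction ≈ ${value:,.0f}")   # ≈ $180,000 per year
```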


FAIR Materiality Assessment Model (FAIR-MAM) https://safe.security/fair-mam/


FAIR-MAM helps organizations proactively estimate the potential financial loss from cyber risk scenarios and, post-incident, assess the material impact of an actual breach.


OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation)


The OCTAVE methodology, developed by Carnegie Mellon University’s Software Engineering Institute (SEI) in 1999, is a strategic, organizational-level risk assessment framework designed to help organizations identify, evaluate, and manage information security risks. It emphasizes asset-based risk identification, business-focused evaluation, and organizational risk mitigation strategies, rather than focusing solely on IT systems.

OCTAVE Variants:


  • OCTAVE: Designed for larger organizations, leveraging workshops that include internal staff and sometimes external risk facilitators.

  • OCTAVE-S: Tailored for smaller organizations (typically <100 employees) with streamlined processes.

  • OCTAVE Allegro (2007): Scalable across organizations of all sizes, this version is more business-centric, emphasizing critical assets and organizational priorities.


Core Phases of OCTAVE Allegro:


  1. Establish Drivers: Define risk measurement criteria and methodology. Uses qualitative approaches, with optional quantitative inputs for likelihood and impact.

  2. Profile Assets: Identify and characterize critical information assets, their value, priority, impact, and associated security requirements. Map assets to their containers, including networks, systems, and outsourced services.

  3. Identify Threats: Determine areas of concern and develop threat scenarios, using tools such as threat trees to map actors and potential attack paths.

  4. Identify and Mitigate Risks: Assess risk based on asset impact and likelihood, develop risk scores, and recommend mitigation strategies aligned with organizational objectives.


OCTAVE provides a structured, repeatable, and strategic framework for enterprise risk assessment. Its asset-driven, business-focused approach ensures that risk management decisions are aligned with organizational priorities and critical assets.
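The scoring implied by Allegro's final phase can be sketched qualitatively. The 1–3 scales and threat scenarios below are illustrative, not part of OCTAVE Allegro itself:

```python
# Toy qualitative scoring: rank each threat scenario by likelihood and
# impact on a 1–3 scale and sort by score. Scenarios are hypothetical.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Simple relative score = likelihood level × impact level."""
    return LEVELS[likelihood] * LEVELS[impact]

scenarios = [
    ("Insider leaks customer records", "medium", "high"),
    ("Ransomware on file server", "high", "high"),
    ("Lost unencrypted laptop", "high", "medium"),
]

# Highest-scoring scenarios surface first for mitigation planning.
for name, lik, imp in sorted(scenarios, key=lambda s: -risk_score(s[1], s[2])):
    print(f"{risk_score(lik, imp)}  {name}")
```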


ISO 31000


ISO 31000 provides the overarching principles and framework for organizational risk management, guiding how organizations establish, implement, and maintain effective risk management practices.


ISO/IEC 31010:2019 is particularly important for IT governance and is essential for exam preparation because it details risk assessment techniques. Titled “Risk Management – Risk Assessment Techniques,” this standard builds on ISO 31000:2018 and provides a practical roadmap for the risk assessment process.


Key elements of ISO/IEC 31010 include:


  • Establishing the risk context within the organization

  • Risk identification, analysis, and evaluation

  • Communication and consultation with stakeholders and management

  • Risk treatment and response planning

  • Monitoring and reviewing risks over time


ISO/IEC 31010 serves as the primary reference for understanding and applying structured, repeatable risk assessment practices aligned with organizational objectives.


NIST Risk Management Framework


  • NIST SP 800-30 – The Guide for Conducting Risk Assessments; provides guidance on assessing risk for information systems and organizations.

  • NIST SP 800-37 – Establishes the Risk Management Framework (RMF) for federal information systems, consisting of seven steps:

    1. Prepare

    2. Categorize information systems

    3. Select security controls

    4. Implement security controls

    5. Assess security controls

    6. Authorize information systems

    7. Monitor security controls

  • NIST SP 800-161 – Focuses on cybersecurity supply chain risk management (C-SCRM).


Other Risk Assessment Frameworks


  • ISO/IEC 27005:2018 – Guides information security risk management aligned with ISO/IEC 27001 and 27002. Covers the full risk management lifecycle, including context development, scope definition, and risk assessment principles. It outlines qualitative and quantitative assessment methodologies without prescribing a specific method.

  • BSI Standard 100-3 – From Germany’s Federal Office for Information Security (BSI); focuses on risk analysis for IT infrastructure based on IT-Grundschutz.

  • ISACA Risk IT Framework – Structured into three core process areas: Risk Governance (RG), Risk Evaluation (RE), and Risk Response (RR). This framework aligns with ISO/IEC standards and is particularly relevant for ISACA certifications and exams.


Risk Types


Risk Acceptance Process – A formalized approach to managing risks the organization chooses to accept:


  1. Document the risk, including the business owner accepting it.

  2. Specify risk countermeasures in place.

  3. Define the duration of acceptance.

  4. Obtain final sign-off from the executive team.
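The four steps map naturally onto a record structure. This sketch (names and fields are illustrative, not from any standard) shows how a time-bound, signed-off acceptance might be tracked:

```python
# Illustrative data structure capturing the four acceptance steps:
# documented owner, countermeasures, duration, and executive sign-off.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskAcceptance:
    risk_description: str
    business_owner: str            # step 1: who accepts the risk
    countermeasures: list[str]     # step 2: compensating controls in place
    expires: date                  # step 3: acceptance is time-bound
    executive_signoff: str = ""    # step 4: final approval

    def is_valid(self, today: date) -> bool:
        """Acceptance lapses when it expires or lacks executive sign-off."""
        return bool(self.executive_signoff) and today <= self.expires

# Hypothetical example record.
ra = RiskAcceptance("Legacy app without MFA", "VP Finance",
                    ["network segmentation", "enhanced logging"],
                    expires=date(2026, 6, 30), executive_signoff="CISO")
print(ra.is_valid(date(2026, 1, 1)))   # → True
```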


Risk Profile – The organization’s overall exposure to all types of risk, including potential impacts from new regulations, technological changes, or shifts in business direction.

Risk Appetite – The amount of risk an organization is willing to accept in order to achieve its objectives. It reflects the general level of risk that management is comfortable taking across all business activities and is influenced by the organization’s culture, market environment, and regulatory landscape.

Risk Tolerance – The acceptable level of variation from the organization’s expected risk. It defines how much deviation is permissible before corrective action is needed. When risk exceeds tolerance thresholds, it can threaten the organization’s risk capacity and potentially its continued viability.

Risk Capacity – The maximum level of risk an organization can take without jeopardizing its ongoing existence. Risk appetite and risk tolerance should always remain below this limit.

Inherent Risk - Inherent risk is the level of risk that exists before any controls or mitigating actions are applied. It represents the natural exposure an organization faces due to its activities, systems, or environment, assuming no risk response or control mechanisms are in place. Inherent risk helps identify the baseline level of exposure, providing a starting point for evaluating how effective risk management and controls must be.

Residual Risk - Residual risk is the risk that remains after controls or mitigation strategies have been applied. Even after implementing safeguards (like MFA, firewalls, or monitoring), some risk always remains. Residual risk reflects the effectiveness and limitations of the implemented controls. Residual risk helps determine whether the remaining exposure is within the organization’s risk appetite or if additional mitigation steps are necessary.

Risk Treatment Gap - The portion of risk that current controls do not sufficiently reduce: the distance between the residual risk and the level of risk the organization is willing to accept. This “gap” highlights areas where existing controls do not bring risk down to an acceptable level, identifying unmitigated or under-mitigated exposure requiring further treatment (e.g., stronger controls, risk transfer via insurance, or acceptance). Understanding the risk treatment gap supports risk response planning, deciding whether to accept, transfer, avoid, or further mitigate the remaining risk.


Risk appetite and tolerance should always be less than risk capacity.
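These definitions fit together arithmetically. The sketch below uses hypothetical annualized-loss figures and one common reading of the treatment gap (residual risk above the acceptable level):

```python
# Toy sketch tying inherent/residual risk to appetite, tolerance, and
# capacity; all figures are hypothetical annualized-loss values in dollars.

def risk_reduced(inherent: float, residual: float) -> float:
    """Risk eliminated by the controls currently in place."""
    return inherent - residual

def treatment_gap(residual: float, acceptable: float) -> float:
    """Residual risk still above the acceptable level (0 if within it)."""
    return max(residual - acceptable, 0)

def evaluate(residual: float, appetite: float, tolerance: float,
             capacity: float) -> str:
    # Appetite and tolerance should always sit below risk capacity.
    assert appetite < capacity and tolerance < capacity
    if residual <= appetite:
        return "within appetite"
    if residual <= tolerance:
        return "within tolerance - monitor"
    return "exceeds tolerance - further treatment required"

inherent, residual = 500_000, 120_000
print(risk_reduced(inherent, residual))    # → 380000
print(treatment_gap(residual, 100_000))    # → 20000
print(evaluate(residual, appetite=100_000, tolerance=150_000,
               capacity=1_000_000))        # within tolerance - monitor
```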


