
CMS Threat Modeling Handbook

Information and resources for teams to help them initiate and complete their system threat model

Last reviewed: 2/21/2024

Contact: CMS Threat Modeling Team | ThreatModeling@cms.hhs.gov

Disclaimer: The information and resources in this document are directed at CMS internal teams and ADOs to help them initiate and complete threat model exercises. While you may be viewing this document as a publicly available resource, its content (and any information excluded from it) is intended for CMS-specific audiences.

What is Threat Modeling?

Threat Modeling is a proactive, holistic approach to analyzing potential threats and risks in a system or application so that they can be identified and addressed early. It involves analyzing how an attacker might try to exploit weaknesses in the system and then taking steps to mitigate those risks. It enables informed decision-making about application security risks. In addition to producing a model diagram, the process also produces a prioritized list of security improvements to the conception, requirements gathering, design, or implementation of an application.

At CMS, we use threat modeling to help identify potential weaknesses that could be exploited by malicious actors. The CMS Threat Modeling Team works with System Teams to analyze their system's components, understand how they interact, and envision how an attacker might exploit vulnerabilities. This important work allows System/Business Owners, ISSOs, and Developers to implement appropriate security measures – such as encryption, access controls, or regular software updates – to reduce the chances of a successful attack and to protect sensitive information.

While Threat Modeling is often performed alongside end-phase security testing, it can be conducted at any time and is ideally done early in the design phase of the Software Development Life Cycle (SDLC). Once completed, a threat model can be updated as needed throughout the SDLC and should be revisited with each new feature or release. This practice promotes identifying and remediating threats, as well as continuously monitoring the effects of internal or external changes.

What are the benefits of Threat Modeling?

At CMS, Threat Modeling supports system security and continuous monitoring efforts by advancing the following goals:

  • Detecting problems early in the software development life cycle (SDLC)
  • Identifying system security requirements 
  • Creating a structured plan to address both system requirements and deficiencies
  • Evaluating attacks that CMS system teams might not have considered, including security issues unique to your system
  • Staying one step ahead of attackers
  • Getting inside the minds of threat agents and their motivations, skills, and capabilities 
  • Serving as a resource for CMS Penetration Testing and Contingency Planning activities

Threat Modeling frameworks

Teams choosing to participate in Threat Modeling at CMS will have the option to work with the CMS Threat Modeling Team during a series of sessions. To successfully complete these sessions, the CMS Threat Modeling Team uses a number of proven frameworks, described in the sections below.

These methods were chosen by the CMS Threat Modeling Team because they are expedient, reliable models that use industry-standard language and provide immediate value to CMS teams. Read on to learn about the specifics of these frameworks. 

Four-Question Frame for Threat Modeling 

As your team embarks on its Threat Modeling journey, it’s important that these four questions remain top-of-mind:

  1. What are we working on?
  2. What can go wrong?
  3. What are we going to do about it?
  4. Did we do a good enough job?

These questions form the base of the work that your team and the CMS Threat Modeling Team will complete together. The questions are actionable, and designed to quickly identify problems and solutions, which is the core purpose of Threat Modeling.

The STRIDE Model 

The STRIDE Threat Modeling framework is a systematic approach used to identify and analyze potential security threats and vulnerabilities in software systems. It provides a structured methodology for understanding and addressing security risks during the design and development stages of a system.

The acronym STRIDE stands for the six types of threats that the framework helps to identify:

Threat type | Property Violated | Threat Definition
Spoofing | Authentication | Pretending to be something or someone other than yourself
Tampering | Integrity | Modifying something on disk, network, memory, or elsewhere
Repudiation | Non-Repudiation | Claiming that you didn’t do something or were not responsible; can be honest or false
Information Disclosure | Confidentiality | Providing information to someone not authorized to access it
Denial of Service | Availability | Exhausting resources needed to provide service
Elevation of Privilege | Authorization | Allowing someone to do something they are not authorized to do
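
For teams that want to record identified threats in a structured, machine-readable way alongside the Confluence template, the following is a minimal sketch (in Python, with illustrative names) of how the STRIDE categories and the properties they violate could be captured with each finding. It is one possible format, not a CMS-mandated tool.

```python
from dataclasses import dataclass
from enum import Enum


class Stride(Enum):
    """STRIDE categories and the security property each one violates."""
    SPOOFING = "Authentication"
    TAMPERING = "Integrity"
    REPUDIATION = "Non-Repudiation"
    INFORMATION_DISCLOSURE = "Confidentiality"
    DENIAL_OF_SERVICE = "Availability"
    ELEVATION_OF_PRIVILEGE = "Authorization"


@dataclass
class Threat:
    """One identified threat, as it might be recorded during a session."""
    title: str
    category: Stride
    affected_element: str          # DFD element or interaction the threat applies to
    proposed_mitigation: str = "TBD"


# Hypothetical example entries for an imaginary system
threats = [
    Threat("Forged session token on public API", Stride.SPOOFING, "API Gateway -> App"),
    Threat("Verbose stack traces returned to users", Stride.INFORMATION_DISCLOSURE, "App -> Browser"),
]

for t in threats:
    print(f"{t.category.name}: {t.title} (violates {t.category.value})")
```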

More information about using the STRIDE method to complete your Threat Modeling Session can be found in the section “How to create your Threat Model”.

Other Threat Modeling frameworks 

Apart from the STRIDE Threat Modeling framework, there are several other popular Threat Modeling frameworks commonly used in the field of software security. Here are a few notable ones:

PASTA (Process for Attack Simulation and Threat Analysis)

PASTA is a risk-centric Threat Modeling framework that focuses on the business impact of threats. It involves a seven-step iterative process, including defining the objectives, creating an application profile, identifying threats, assessing vulnerabilities, analyzing risks, defining countermeasures, and validating the results with active vulnerability or penetration testing.

LINDDUN

LINDDUN threat modeling is a comprehensive approach that extends beyond traditional security threat modeling by focusing explicitly on various aspects of privacy. It is particularly relevant in the development of systems where user data privacy is of utmost importance, such as in applications handling personal or sensitive information. Here's a breakdown of what LINDDUN stands for and how it is applied:

  1. Linkability: This aspect evaluates whether an attacker can link two or more items of interest (such as messages, actions, individuals) in a way that the system’s design did not intend. The goal is to prevent unauthorized linking of information to protect user privacy.
  2. Identifiability: This examines the risk of identifying a subject (like a user) from the available data. The system should be designed to prevent unauthorized identification of users.
  3. Non-repudiation: This component assesses the possibility that a user cannot deny an action they performed. While non-repudiation is often a security goal, in the context of privacy, it can be undesirable as it might lead to the exposure of a user’s actions.
  4. Detectability: This refers to the ability of an attacker to determine that an item of interest exists. For privacy protection, certain information should not be detectable by unauthorized parties.
  5. Disclosure of Information: This looks at the risk of exposing information to unauthorized entities. The goal is to ensure that confidential information remains private.
  6. Unawareness & Unintervenability: This considers whether users are unaware of the data processing practices, which might impact their privacy. Ensuring that users are informed and consenting to data processing is key to protecting privacy.
  7. Non-compliance: This evaluates the risk of the system not complying with privacy policies and regulations. Ensuring compliance is crucial for legal and ethical reasons.

Mozilla’s Rapid Risk Assessment (RRA)

RRA is designed to quickly identify and prioritize security risks in software projects, allowing teams to allocate their resources effectively. It aims to be a lightweight and agile approach to risk assessment.

These are just a few examples of additional Threat Modeling frameworks. Each framework has its strengths and focuses on different aspects of Threat Modeling, but they all aim to identify and address potential security risks effectively. It may be beneficial for your team to review these frameworks as you start your own threat model. 

Supplemental frameworks and tools

CVSS (Common Vulnerability Scoring System)

CVSS is a vulnerability severity classification system that identifies metrics around the ease of exploitation and the privilege levels required to exploit a CVE. It is not a method of threat modeling or of tracking risk; it is used to advise on remediation cadence and urgency. Once a threat is identified, its associated vulnerability can receive a CVSS rating of Critical, High, Medium, Low, or Informational to guide prioritization.
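
As a concrete illustration, CVSS v3.1 maps a numeric base score (0.0 to 10.0) onto qualitative severity ratings. The sketch below follows the standard CVSS v3.1 bands; the helper name is illustrative, and treating a 0.0 score as "Informational" (CVSS calls that band "None") is an assumption made only to match the labels used above.

```python
def cvss_severity(base_score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative severity rating."""
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if base_score == 0.0:
        return "Informational"   # CVSS v3.1 calls this band "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"


print(cvss_severity(9.8))  # e.g. a remotely exploitable, unauthenticated flaw -> "Critical"
```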

MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge)

ATT&CK is not a threat modeling methodology per se, but it can be used in conjunction with other threat modeling frameworks. ATT&CK is a collection of tactics, techniques, and procedures (TTPs) that enumerate the exploitation and post-exploitation actions threat actors can take against vulnerabilities. Unlike vulnerabilities, which receive CVE classifications, ATT&CK is a repository of steps an adversary can chain together which, taken as a whole, create a kill chain, or successful attack. It is a good tool for referencing attack actions in a consistent manner across technical and non-technical departments. It can be used with threat modeling once threats have been identified to associate attack actions with each identified threat. ATT&CK is not a compliance framework.
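
To make the association between identified threats and ATT&CK concrete, here is a minimal sketch of how a team might tag each threat from a session with the ATT&CK technique IDs it corresponds to. The threat names are hypothetical; the technique IDs shown (T1078 Valid Accounts, T1190 Exploit Public-Facing Application, T1565 Data Manipulation) are real ATT&CK entries, but the mapping itself is purely illustrative.

```python
# Illustrative mapping of identified threats (hypothetical) to MITRE ATT&CK technique IDs.
threat_to_attack_techniques = {
    "Forged session token on public API": ["T1078"],    # Valid Accounts
    "SQL injection in search endpoint": ["T1190"],       # Exploit Public-Facing Application
    "Unauthorized edits to audit records": ["T1565"],    # Data Manipulation
}

for threat, technique_ids in threat_to_attack_techniques.items():
    print(f"{threat}: ATT&CK techniques {', '.join(technique_ids)}")
```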

Many tools and frameworks exist that support threat modeling activities or that can be mapped to a threat modeling methodology such as STRIDE, but these should not be relied upon in isolation from other methods.

Threat Modeling tools

The tools needed for Threat Modeling can be as simple as using a Whiteboard to brainstorm ideas and a method to record threats and mitigations (paper, a photo of a diagram, etc.). At CMS, the CMS Threat Modeling Team uses the following tools to communicate with teams and record ideas and information:

Mural (for drawing DFD diagrams)

Teams primarily use Mural as a digital whiteboard for drawing Data Flow Diagrams (DFDs). You can sign up for a Mural space to complete this work by contacting the CMS Cloud Team (CMS email account required). 

NOTE: Some other drawing tools may be alternatively used such as app.diagrams.net (formerly Draw.io), Lucidchart, etc.

CMS Confluence (for recording threats)

Teams use Confluence to fill out their Threat Model Template in a space that is protected and safe from outside users. 

Zoom (for team collaboration)

The CMS Threat Modeling Team will use Zoom to collaborate with other team members on a Threat Model. Threat Modeling sessions are recorded so that all artifacts can be transferred to other systems of record.

YouTube (for additional training)

Your team is encouraged to review the CMS CASP Threat Modeling playlist on the CMS YouTube channel before you start your Threat Model.

Additional or alternative tools may be added in the future to further help CMS ADO Teams with creating and maintaining Threat Models.

Supplemental Threat Modeling tools

As a reference, here are some other threat modeling tools in the industry that may be considered in the future for use at CMS:

Free Tools:

OWASP Threat Dragon

The OWASP Threat Dragon is a free, open-source, cross-platform application for creating threat models. Use it to draw threat modeling diagrams and to identify threats for your system. With an emphasis on flexibility and simplicity, it is easily accessible for all types of users.

Microsoft Threat Modeling Tool

The Threat Modeling Tool is a core element of the Microsoft Security Development Lifecycle (SDL). It allows software architects to identify and mitigate potential security issues early, when they are relatively easy and cost-effective to resolve. As a result, it greatly reduces the total cost of development. Also, the tool is designed with non-security experts in mind, making threat modeling easier for all developers by providing clear guidance on creating and analyzing threat models.
NOTE: The Microsoft Threat Modeling Tool is a desktop-only tool that can be installed on Microsoft operating systems only.

Paid Tools (requires paid / annual license(s) for usage):

IriusRisk

IriusRisk is an open Threat Modeling platform that automates and supports creating threat models at design time. The threat model includes recommendations on how to address the risk. IriusRisk then enables the user to manage security risks throughout the rest of the software development lifecycle (SDLC) with best-in-class architectural diagramming and full customization to enable every stakeholder to collaborate.

ThreatModeler

ThreatModeler is a commercial SaaS platform whose patented technology enables intuitive, automated, collaborative threat modeling and integrates directly into DevSecOps tool chains, automating the “Sec” in DevSecOps from design to code to cloud at scale. The platform is designed to keep applications, infrastructure, and cloud assets secure and compliant from the design stage, reducing incident response costs, remediation costs, and regulatory fines. It is used by software, security, and cloud architects, engineers, and developers at companies worldwide. Founded in 2010, ThreatModeler is headquartered in Jersey City, NJ.

Devici

Devici is a threat modeling platform built around secure design from the inception of every project, allowing teams of any size to integrate security into a software system's blueprint (“Secure by Design”). The product draws its name from Leonardo da Vinci and his study of the connections between art and science, and aims to help developers and engineers examine the design of their software in depth, uncover potential security and privacy threats, and implement secure-by-design foundations.
 

How to create your Threat Model 

Read the Threat Modeling Handbook 

Learn about the process of Threat Modeling to decide the right time to engage with the CMS Threat Modeling Team based on your system’s current compliance and authorization schedule.

Fill out the Threat Modeling intake form

Please complete the Threat Modeling Intake Form. The CMS Threat Modeling Team will use the answers you provide in this questionnaire to help inform future planning sessions.

Meet with the CMS Threat Modeling Team

To start things off, facilitators from the CMS Threat Modeling Team will meet with the System/Business Owner, ISSO, and up to two Senior Developers to talk about the process, time commitment, and outputs expected in future Threat Model Sessions. 

Gather system information

Your team should gather and document high level system information, including:

  • System name
  • System description
  • Types or sensitivity of data
  • Scope and external interactions
  • Primary workflows (use cases)

This information will help the CMS Threat Modeling Team in the initial stages of creating your Threat Model.

Gather existing diagrams

The team should gather any existing diagrams, such as architecture diagrams, sequence diagrams, etc., that would be helpful in understanding the system or application. This will help inform the creation (or update) of a Data Flow Diagram (DFD) during the first whiteboard session.

NOTE: The DFD doesn’t have to be created before the first Threat Modeling session – it can be created together with the CMS Threat Modeling Team.

Identify stakeholders and personas

Before conducting the Threat Model Session, it is important to identify the key stakeholders who will be participating in the creation of the Threat Model. These perspectives/personas are critical to a successful Threat Modeling session. You can use the following table to inform your work to develop these personas:

Persona | Description
Developer | Someone who understands the current application design and has had the most depth of involvement in the design decisions made to date. They were involved in the design brainstorming or whiteboarding sessions leading up to this point, when they would typically have been thinking about threats to the design and possible mitigations to include.
Business | Someone who represents the business outcomes of the workload or feature that is part of the Threat Modeling process. This person should have an intimate understanding of the functional and non-functional requirements of the workload, and their job is to make sure that these requirements aren’t unduly impacted by any proposed mitigations to address threats.
Security | Someone who understands application security principles and how they may be applied to designing, building, and testing applications for resilience and protection against security attacks. The purpose of this role is to support the development team in evaluating threats and devising security controls that mitigate them.
Infrastructure | Someone who understands the physical or virtual components that make up the underlying infrastructure of the application. Design decisions are offset by infrastructure considerations, and these should be voiced during the Threat Modeling session, though there are often aspects of Shared Responsibility Models that may be reflected in the technology used.
Threat Model Coordinator | The Threat Model subject matter expert (SME) should be the most familiar with the Threat Modeling process and discussion moderation methods, and should have a depth of IT security knowledge and experience. Discussion moderation is crucial for keeping the overall objectives of the exercise on track and maintaining the appropriate balance between security and delivery of the customer outcome.

Document current and upcoming work

This information is used to help answer “What are we working on?” in terms of changes to the system.

Complete the Threat Model Template in Confluence 

The CMS Threat Modeling Team uses Confluence to organize their threat models. Copy the Threat Model Template to your own Confluence space, and record the data collected in the previous steps.

Schedule your Threat Modeling Sessions

Work with your team to coordinate dates and times, and then reach out to the CMS Threat Modeling Team to schedule your Threat Model Sessions. It’s up to the team if they prefer to have one session or to break it up into multiple sessions. Breaking up the session (e.g., three sessions, two hours each, one day apart) gives the team the time and space to learn the structure and concepts involved before going into the next session.

Prepare your team

Send a welcome email to everyone who will attend your Threat Modeling Session. Be sure to include links to the shared resources the team will use during the sessions (for example, the team’s Threat Model Template page in Confluence and the Mural board).

These shared resources will allow everyone on the team to have access to the information they need to successfully complete the Threat Model.

Identify threats using the STRIDE Model

As a structured method of Threat Modeling, STRIDE is meant to help teams locate threats in a system. It offers a way to organize information so that teams can plan how to mitigate or eliminate the threats. Remember that the acronym STRIDE stands for the six types of threats that the framework helps to identify:

Spoofing Identity

Identity spoofing occurs when the hacker pretends to be another person, assuming the identity and information in that identity to commit fraud. A very common example of this threat is when an email is sent from a false email address, appearing to be someone else. Typically, these emails request sensitive data. A vulnerable or unaware recipient provides the requested data, and the hacker is then easily able to assume the new identity.

Identities that are faked can include both human and technical identities. Through spoofing, the hacker can gain access through just one vulnerable identity to then execute a much larger cyber attack.

Tampering With Data

Data tampering occurs when data or information is changed without authorization. Ways that a bad actor can execute tampering could be through changing a configuration file to gain system control, inserting a malicious file, or deleting/modifying a log file.

Change monitoring, also known as file integrity monitoring (FIM), is essential for identifying if and when data tampering occurs. This process critically compares files against a baseline of what a ‘good’ file looks like. Proper logging and storage are critical to support file monitoring.
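
As a minimal illustration of the baseline comparison that FIM tools perform, the sketch below hashes a set of files and compares the results to a previously stored baseline. Real FIM products add scheduling, alerting, and tamper-resistant storage; the file paths here are placeholders.

```python
import hashlib
import json
from pathlib import Path

MONITORED_FILES = ["app/config.yaml", "app/settings.py"]  # placeholder paths
BASELINE_PATH = Path("fim_baseline.json")


def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def record_baseline() -> None:
    """Capture the current 'known good' hashes."""
    baseline = {f: sha256_of(f) for f in MONITORED_FILES}
    BASELINE_PATH.write_text(json.dumps(baseline, indent=2))


def check_integrity() -> list[str]:
    """Return the monitored files whose hashes no longer match the baseline."""
    baseline = json.loads(BASELINE_PATH.read_text())
    return [f for f, digest in baseline.items() if sha256_of(f) != digest]


if __name__ == "__main__":
    if BASELINE_PATH.exists():
        changed = check_integrity()
        print("Tampering suspected in:", changed if changed else "none")
    else:
        record_baseline()
```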

Repudiation Threats

Repudiation threats happen when a bad actor performs an illegal or malicious operation in a system and then denies their involvement with the attack. In these attacks, the system lacks the ability to actually trace the malicious activity to identify a hacker.

Repudiation attacks are relatively easy to execute on e-mail systems, as very few systems check outbound mail for validity. Most of these attacks begin as access attacks.
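
One common countermeasure is structured audit logging that ties every sensitive operation to an authenticated identity and a timestamp so that actions can later be traced. The sketch below is a minimal, assumed example using Python's standard logging module; production systems would also ship these records to tamper-resistant, centralized storage.

```python
import logging

# Structured audit logger; in production, ship these records to protected, centralized storage.
audit_log = logging.getLogger("audit")
logging.basicConfig(
    format="%(asctime)s AUDIT user=%(user)s action=%(action)s resource=%(resource)s",
    level=logging.INFO,
)


def record_action(user: str, action: str, resource: str) -> None:
    """Record who did what, to which resource, and when."""
    audit_log.info("audit event", extra={"user": user, "action": action, "resource": resource})


record_action("jdoe", "DELETE", "claims/12345")  # hypothetical event
```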

Information Disclosure

Information disclosure is also known as information leakage. It happens when an application or website unintentionally reveals data to unauthorized users. This type of threat can affect the process, data flow and data storage in an application. Some examples of information disclosure include unintentional access to source code files via temporary backups, unnecessary exposure of sensitive information such as credit card numbers, and revealing database information in error messages.

These issues are common, and can arise from internal content that is shared publicly, insecure application configurations, or flawed error responses in the design of the application.
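
A frequent example of the flawed error responses mentioned above is returning raw database or stack-trace details to the client. The sketch below shows the general pattern of logging the detail server-side while returning only a generic message; the function and error-ID scheme are illustrative assumptions, not a specific CMS standard.

```python
import logging
import uuid

logger = logging.getLogger("app")


def safe_error_response(exc: Exception) -> dict:
    """Log full details server-side; return only a generic, correlatable message to the client."""
    error_id = uuid.uuid4().hex  # correlation ID so support can find the detailed log entry
    logger.exception("Unhandled error %s", error_id, exc_info=exc)
    return {"error": "An internal error occurred.", "error_id": error_id}


# Example: never echo the exception text (which may contain SQL, paths, or PII) to the user.
try:
    raise RuntimeError("ORA-00942: table or view does not exist")  # simulated database error
except Exception as exc:
    print(safe_error_response(exc))
```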

Denial of Service

Denial of Service (DoS) attacks restrict an authorized user from accessing resources that they should be able to access. This affects the process, data flow and data storage in an application. 

Despite increases in DoS attacks, protective tools such as AWS Shield and Cloudflare appear to remain effective.
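
Alongside managed protections like those above, teams sometimes add application-level throttling as a defense-in-depth measure. The following token-bucket rate limiter is a minimal, assumed sketch of that idea and is not a substitute for network-layer DDoS protection.

```python
import time


class TokenBucket:
    """Token-bucket limiter: allow roughly `rate` requests per second, bursting up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


limiter = TokenBucket(rate=5, capacity=10)  # hypothetical per-client limits
print([limiter.allow() for _ in range(12)].count(True))  # only about the burst capacity is admitted
```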

Elevation of Privileges

Through the elevation of privileges, an authorized or unauthorized user in the system can gain access to other information that they are not authorized to see. An example of this attack could be as simple as a missed authorization check, or even elevation through data tampering where the attacker modifies the disk or memory to execute non-authorized commands.
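
The "missed authorization check" case can be made concrete with a small sketch: every privileged operation re-verifies the caller's role on the server side rather than trusting anything supplied by the client. The role names and functions below are illustrative assumptions.

```python
from functools import wraps

# Hypothetical server-side role store; never trust roles supplied by the client.
USER_ROLES = {"jdoe": {"viewer"}, "asmith": {"viewer", "admin"}}


class AuthorizationError(Exception):
    pass


def requires_role(role: str):
    """Decorator that re-checks authorization on every privileged call."""
    def decorator(func):
        @wraps(func)
        def wrapper(username: str, *args, **kwargs):
            if role not in USER_ROLES.get(username, set()):
                raise AuthorizationError(f"{username} lacks role '{role}'")
            return func(username, *args, **kwargs)
        return wrapper
    return decorator


@requires_role("admin")
def delete_record(username: str, record_id: str) -> str:
    return f"record {record_id} deleted by {username}"


print(delete_record("asmith", "42"))   # allowed
# delete_record("jdoe", "42") would raise AuthorizationError -- the check an attacker hopes is missing
```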

Evaluate system interactions and elements

When using the STRIDE method for Threat Modeling to create your DFD, your team can evaluate threats per interaction and per element. To do this, your team will need to analyze the potential risks associated with each interaction and element within your system (one lightweight way to represent them is sketched after the list below). Remember that:

  • Interactions are how different components, modules, users, or external entities communicate with each other. It’s important for teams to understand the flow of information, data, or control between these entities. 
  • Elements are different components of a system, like databases, APIs, user interfaces, and other network components.
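
For teams that like to keep a lightweight, text-based inventory next to the Mural diagram, this is one assumed way to represent DFD elements, data flows, and trust boundaries so that each interaction can be walked through during STRIDE analysis. The element names are placeholders for an imaginary system.

```python
from dataclasses import dataclass


@dataclass
class DataFlow:
    """One interaction on the DFD: source, destination, and the data that moves (a 'tuple')."""
    source: str
    destination: str
    data_type: str
    crosses_trust_boundary: bool


# Placeholder elements for an imaginary system
elements = ["Browser", "Web App", "Claims API", "Claims DB"]

flows = [
    DataFlow("Browser", "Web App", "login credentials", crosses_trust_boundary=True),
    DataFlow("Web App", "Claims API", "claim lookup request", crosses_trust_boundary=False),
    DataFlow("Claims API", "Claims DB", "SQL query", crosses_trust_boundary=True),
]

# Walk each interaction during the session; flows crossing a trust boundary deserve extra scrutiny.
for flow in flows:
    marker = "** trust boundary **" if flow.crosses_trust_boundary else ""
    print(f"{flow.source} -> {flow.destination}: {flow.data_type} {marker}")
```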

To apply STRIDE to your DFD, your team will complete the following steps:

  1. Apply STRIDE categories to interactions and elements

At the start of your analysis, your team will apply STRIDE per interaction to determine if there are any threats related to the data flows between components. After completing the interaction analysis, you will then investigate additional threats by applying STRIDE to each element. Any threats that fall outside of interactions and elements should be classified as unstructured threats.

  2. Analyze threats

Consider how each type of threat can manifest and brainstorm potential attack scenarios or vulnerabilities that align with each category. Many development teams will already have ideas of what issues exist inside their systems. Their first-hand experience should be welcomed into the Threat Model Session. Key questions to ask during your session include: How would you attack the system? What are you (most) concerned about?

  3. Determine threat impact and likelihood

Evaluate the potential impact of each identified threat. Consider the consequences in terms of confidentiality, integrity, availability, regulatory compliance, or other relevant factors. Assess the potential damage or harm that can occur if the threat is successfully exploited. Also consider factors such as the level of access required, the complexity of the attack, the presence of mitigating controls, and the motivation and capabilities of potential attackers. Once the initial threat analysis is complete, your team may find that many of the threats are unlikely, low impact, and/or not in the scope of the team’s area of responsibility.
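
Teams sometimes capture this step with a simple qualitative scoring scheme, for example rating impact and likelihood on a 1-to-3 scale and multiplying them to rank threats. The sketch below assumes that scheme purely for illustration; it is not a prescribed CMS scoring method.

```python
# Illustrative 1-3 qualitative scales; not a prescribed CMS scoring method.
IMPACT = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3}

identified_threats = [
    {"title": "Verbose stack traces returned to users", "impact": "medium", "likelihood": "likely"},
    {"title": "Forged session token on public API", "impact": "high", "likelihood": "possible"},
    {"title": "Physical theft of data center hardware", "impact": "high", "likelihood": "unlikely"},
]

for threat in identified_threats:
    threat["risk"] = IMPACT[threat["impact"]] * LIKELIHOOD[threat["likelihood"]]

# Highest-risk threats first; low-scoring items may be accepted or deferred as out of scope.
for threat in sorted(identified_threats, key=lambda t: t["risk"], reverse=True):
    print(f"risk={threat['risk']}: {threat['title']}")
```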

  4. Prioritize threats and define mitigation strategies

Review the remaining threats and work with the team, specifically the ISSO and Business Owner, to identify the major threats. The team should then build the proposed mitigation plan by identifying the team members responsible for mitigating the threats, estimating dates of completion, and including this information in the final report for follow-up at a later date (generally 90 days).

  5. Validate and refine: Review the threat analysis and proposed mitigations with your team regularly. Refine the threat analysis and update the mitigation strategies when changes occur within your system.

What to do following your Threat Model Session(s)

In order to answer the question “Did we do a good enough job?”, it is important to review the identified threats, understand the mitigations, determine the risks, and communicate the results with others.

Complete the Threat Model Report

Using the Threat Model Report Template, the data gathered from the Threat Model Session is transferred into a shared report or PDF that can be used for a final review with all stakeholders. It provides information from the Threat Model Session, including system information, the DFD, identified (possible) threats, and proposed mitigations. Your team’s options for post-session reporting include:

  • After a review with stakeholders, the final report should be uploaded to the “Assessments” tab of CMS’ FISMA Continuous Tracking System (CFACTS) by the system’s ISSO.
  • Instead of a full report, a PDF of the Mural board + Confluence page may be sufficient for use by the CMS ADO Team. In other cases, a formal document may be needed in order to justify a budgetary request to address a vulnerability that will require additional funds.

Send feedback survey

Create a post-session email to all attendees thanking them for their participation and providing a link to the Threat Model Session feedback form. This information will be used by the CMS Threat Modeling Team for continuous improvement of the CMS Threat Modeling process.

Threat mitigation follow up

Mitigation follow-up is managed by the application ISSO and should be completed approximately 90 days after the Threat Model Session. All mitigations should be commented on, updated, and attached to the Threat Model report.

Threat Modeling terms and definitions

Term | Definition
Impact | A measure of the potential damage caused by a particular threat. Impact and damage can take a variety of forms. A threat may result in damage to physical assets, or may result in obvious financial loss. Indirect loss may also result from an attack and needs to be considered as part of the impact.
Likelihood | A measure of the possibility of a threat being carried out. A variety of factors can affect the likelihood of a threat being carried out, including how difficult the threat is to implement and how rewarding it would be to the attacker.
Controls | Safeguards or countermeasures that you put in place in order to avoid, detect, counteract, or minimize potential threats against your information, systems, or other assets.
Preventions | Controls that may completely prevent a particular attack from being possible.
Mitigations | Controls that are put in place to reduce either the likelihood or the impact of a threat, while not completely preventing it.
Data Flow Diagram | A depiction of how information flows through your system. It shows each place that data is input into or output from each process or subsystem. It includes anywhere that data is stored in the system, either temporarily or long-term.
Trust boundary (in the context of Threat Modeling) | A location on the Data Flow Diagram where data changes its level of trust. Any place where data is passed between two processes is typically a trust boundary. If your application makes a call to a remote process, or a remote process makes calls to your application, that's a trust boundary. If you read data from a database, there's typically a trust boundary because other processes can modify the data in the database. Any place you accept user input in any form is always a trust boundary.
Workflows (Use Cases) | A written description of how users will perform tasks within your system or application. It outlines, from a user's point of view, a system's behavior as it responds to a request. Each workflow is represented as a sequence of simple steps, beginning with a user's goal and ending when that goal is fulfilled.
System Name | The FISMA system name, which can be found in CFACTS.
System Description | A high-level description of the system, which can be found in CFACTS.
External Entity | An outside system or process that sends or receives data to and from the diagrammed system; a source or destination of information.
Process | A procedure that manipulates data and its flow by taking incoming data, changing it, and producing an output.
Data Store | Holds information for later use, waiting to be processed. Data inputs flow through a process and then into a data store, while data outputs flow out of a data store and then through a process.
Data Flow | The path the system’s information takes from external entities through processes and data stores.
Spoofing | Threat action aimed at accessing and using another user’s credentials, such as username and password.
Tampering | Threat action intending to maliciously change or modify persistent data, or to alter data in transit between two computers over an open network, such as the Internet.
Repudiation | Threat action aimed at performing prohibited operations in a system that lacks the ability to trace the operations.
Information Disclosure | Threat action intending to read a file that one was not granted access to, or to read data in transit.
Denial of Service (DoS) | Threat action attempting to deny access to valid users, such as by making a web server temporarily unavailable or unusable.
Escalation of Privileges | Threat action intending to gain privileged access to resources in order to gain unauthorized access to information or to compromise a system.
Tuple | A way of looking at a section of a Data Flow Diagram by identifying the source, destination, and data type of a data flow.

Threat Modeling references

The following is a list of industry resources that the CMS Threat Modeling Team has identified as helpful for those within the CMS community who want to learn more about Threat Modeling:

OWASP Threat Modeling Process

Threat Modeling Manifesto

Threat Modeling Capabilities

Awesome Threat Modeling - curated list of resources 

AWS - How to Approach Threat Modeling 

STRIDE Threat Modeling: What You Need To Know 

Mozilla: Rapid Risk Assessment (RRA)