Tony Turner

Securing Critical Infrastructure with SEMM - Security Engineering Maturity Matrix

Secure by Design

The topic of Secure by Design encompasses the practice of designing for security as a functional requirement. It embodies a culture of understanding and empowerment in pursuit of designing and implementing more resilient systems.

A framework to mature security engineering inside your organization and understand your capability to design for security

There’s been a lot of discussion over the past few months about the need for security engineering design in critical infrastructure. The Cyber Informed Engineering Strategy document from the US Department of Energy (DoE) in 2022 and the National Cybersecurity Strategy released by the US White House in March 2023 are prime examples of this. CISA has gone on record several times lambasting product vendors for failing to build products securely, including a collaborative paper released on April 13, 2023 advocating for greater supplier accountability and radical transparency, as well as organizational changes to support these initiatives.

The topic of security engineering is decades old at this point, but cycles in the cybersecurity product vendor market have largely focused on downstream symptoms or narrowly scoped problems, because those are simpler to understand and it is easier to sell products that solve a specific challenge. Security engineering ends up being a big topic, almost too overwhelming for many critical infrastructure organizations to grasp, and one that sometimes produces no direct or measurable result from their security investment.

This has led to an enormous problem in our industry where security controls become additive in nature, sometimes without our understanding why we are implementing a particular control in the first place. We have largely adopted compliance- or conformance-based approaches because we frequently do not understand the systems we are securing. The end result is a cyber hygiene-based approach for industrial control systems that is costly and inefficient. Even worse, as we add more security products to the mix, we increase our attack surface. After all, security products are still software, and software has vulnerabilities; security tools are not immune.

So how do we tackle this very large problem?

Fantastic frameworks already exist, such as the excellent NIST series on Systems Security Engineering in the form of Special Publication 800-160 Volumes 1 and 2, along with resources from IEEE and other groups. As mentioned above, the Cyber Informed Engineering (CIE) strategy from DoE is seeing a lot of traction in the industry as well, and you will likely see me make additional references to CIE, since a lot of what I am building dovetails with it. These are valuable documents, but they don’t really address the core problem: the process of doing security engineering, how to define the capabilities required, and how to measure the results to understand the maturity and progress of a security engineering program.

I’ve been doing a lot of deep process analysis on the problems involved in designing and developing a software product to manage the workflow of security engineering in critical infrastructure and empower non-security engineers to build secure ICS systems. That work led me to the excellent framework from the US Department of Transportation, frequently referred to as the Engineering Vee, so named for the V shape the process follows: it starts with a basic concept of operations, moves downward on the left side describing design deliverables, and then works upward on the right-hand side of the V as it validates those designs prior to deployment and operations.

As part of this work, I realized that while this is a well-known approach, it’s rarely adopted in a way that delivers the outcomes organizations are looking for. Additionally, without understanding the maturity of the security engineering function, it is challenging for organizations to set expectations for what they should be seeing from their security investment in engineering. Hence, the Security Engineering Maturity Matrix (SEMM) was born.

The Security Engineering Maturity Matrix is not the first maturity model focused on security engineering, but it is the first that I am aware of that focuses on the process of doing the work and the activities involved with building a program. Prior efforts have been more focused on the very specific security risks that need to be addressed, and while this is important and covered in other frameworks, that is not the intent of SEMM.

SEMM breaks down the process into six discrete activity areas, further broken down into 40 separate activities that support these lifecycle phases:

  • Plan
  • Design
  • Build (or Buy)
  • Test
  • Deploy
  • Operate

This approach aligns closely with the DOT Engineering V, but also expands on that model by including additional security capabilities and organizational constructs required to execute the program. The model then establishes a 3-tiered maturity matrix:

  • Level 1 – “Ad-hoc”, meaning the activity is performed when needed, but there is no real program or process; it is more of a “just in time” delivery model.
  • Level 2 – “Defined”, where we start to have a rough definition of the activity, and it may be performed more than once, but it has not reached the level of process maturity to be repeatable in a consistent fashion, nor has it become truly strategic across the organization.
  • Level 3 – “Optimized”, where we find practices so ingrained in the culture that they are universally applied as standard practice, and so strategic that they are commonly referenced in board decks and other executive planning activities.
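Because the tiers map to small integers, they are easy to encode for later scoring. A minimal sketch in Python (the enum and constant names are mine for illustration; SEMM does not prescribe an encoding):

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """The three SEMM tiers; the integer values feed directly into averaging."""
    AD_HOC = 1     # performed when needed; no real program or process
    DEFINED = 2    # roughly defined, not yet consistently repeatable or strategic
    OPTIMIZED = 3  # ingrained in the culture and universally applied

# An activity that is not performed at all can be scored as 0.
NOT_PERFORMED = 0

print(int(MaturityLevel.DEFINED))  # 2
```

Using an IntEnum rather than plain constants keeps the labels readable while still allowing the values to be averaged directly.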

Let’s break down the approach a bit more as we dive into the specific activities for the program.

The first phase is Plan. Before we can build any program there are many activities that are required to establish authority, obtain resources, define processes, and plan for the formation and execution of the program.

  • 1.1 Governance
  • 1.2 Budget
  • 1.3 Staffing
  • 1.4 Process
  • 1.5 Security Culture
  • 1.6 Risk Register
  • 1.7 Training
  • 1.8 Organizational Principles (from CIE)

The second phase is Design. This is arguably where the majority of security engineering effort is applied and is where the system starts to take shape.

  • 2.1 Concept of Operations
  • 2.2 Requirements Management
  • 2.3 Functional Requirements
  • 2.4 Compliance Requirements
  • 2.5 Security Requirements
  • 2.6 Design Process
  • 2.7 Architecture
  • 2.8 Threat Modeling
  • 2.9 Security Design Principles
  • 2.10 Integrate Design
  • 2.11 Finalize Design

The third phase is Build. This is where, depending on the decisions we have made so far, we determine whether to build a system internally or to procure the system from a third party. The processes involved with building or procuring, including supply chain risk management, are covered in this phase.

  • 3.1 Build process
  • 3.2 Unit Testing
  • 3.3 QA
  • 3.4 Retrospectives
  • 3.5 Security Feedback
  • 3.6 Acquisition
  • 3.7 Supply Chain Risk Management
  • 3.8 Security Integration

The fourth phase is Test. In this phase we evaluate what we have done already; it is really a verification of the design process. Test is interesting because, as you will see, these activities can span the Design, Build, and even Deploy phases. We have built assurance activities into other phases, but since testing is the primary activity for this phase, and different organizations’ development methodologies vary, we cover the core capabilities here. For instance, in a waterfall-style project assurance might happen at the end, while in Agile it tends to happen more iteratively.

  • 4.1 Subsystem verification
  • 4.2 System Verification
  • 4.3 Security Testing
  • 4.4 System Validation

The fifth phase is Deploy. This is where all our good work starts to result in outcomes we can see, and where we work with stakeholders to start gaining acceptance. In many organizations, especially those with a continuous integration/continuous deployment delivery model, this phase can be highly iterative and is usually highly automated. So, as we look at the activities defined here, it’s important to think about what humans need to do and where there are opportunities to automate the process.

  • 5.1 Integration
  • 5.2 Security Testing
  • 5.3 Acceptance Testing

The sixth and final phase is Operate. This is the state that we will find ourselves in for most of the life of the system until we need to cycle back into previous phases or iterate on new requirements for the system.

  • 6.1 Operate
  • 6.2 Maintain
  • 6.3 Monitor
  • 6.4 Re-Engineer
  • 6.5 Retire

Using a 3-tiered approach makes scoring very easy, as some basic math yields a maturity indicator for the program.

The average can be taken at each lifecycle stage or in aggregate across the entire program. For instance, an organization that scores 1.2 for Plan, 2 for Design, 1.2 for Build, 0 for Test, 2.4 for Deploy, and 2.6 for Operate would have an aggregate score of (1.2 + 2 + 1.2 + 0 + 2.4 + 2.6) / 6 ≈ 1.57, somewhere between Ad-hoc and Defined.
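The arithmetic above can be sketched in a few lines of Python. The phase scores come from the example; the function name is an illustrative assumption, not part of SEMM:

```python
from statistics import mean

# Per-phase maturity scores from the example above
# (0 = not performed, 3 = Optimized; each phase score is
# itself an average of that phase's activity scores).
phase_scores = {
    "Plan": 1.2,
    "Design": 2.0,
    "Build": 1.2,
    "Test": 0.0,
    "Deploy": 2.4,
    "Operate": 2.6,
}

def aggregate_score(scores):
    """Average the phase scores into a single program-level indicator."""
    return mean(scores.values())

score = aggregate_score(phase_scores)
print(f"Aggregate maturity: {score:.2f}")  # 9.4 / 6 ≈ 1.57
```

The same function works at any level of the matrix: feed it the activity scores within a phase to get that phase's score, or the phase scores to get the program-level indicator.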

It is not expected that organizations will reach a maturity level of 3 in any but the most critical sectors, such as Energy, Water, or Defense. Nor is it anticipated that all of these activities will be performed on every system; a risk-based approach should guide the expenditure of resources. But if an organization does not even have the maturity to engage in these activities, it will be incapable of applying proper engineering rigor to its most critical projects.

We hope you find this approach valuable; a detailed breakdown of the model and the criteria is maintained at

We are looking for contributors to help us take SEMM to a mature release candidate. Future releases will include a risk-scoring tool and further guidance on the application of this model.

Originally posted April 19, 2023 at


Tony Turner

Founder, CEO

Experienced cybersecurity executive with 30+ years in the field; author of SANS SEC547 Defending Product Supply Chains and Software Transparency.
