Understanding the Importance of Designing for Security


By Camille Morhardt, Director of Security Initiatives and Communications at Intel, and Tom Garrison, VP and GM of Client Security Strategy and Initiatives at Intel

Robust security is a necessary and critical component of achieving a high-quality product. This is obvious when we consider security in a home or safety in a car or an airplane. And it’s the same with computing devices. From the initial architecture formation through the device build process to ongoing product service and retirement, how companies embrace security best practices in their designs and how they follow through over the device’s lifespan to keep it safe can significantly impact partners and customers.

What does it mean to “design for security” in today’s digital and increasingly connected world? Let’s go over six questions that can help illuminate the importance of designing for security and highlight the critical steps along the way.

  1. Where should you start?

The best way to achieve good security is to design it into the system or device from the very beginning, at the concept phase, and then keep security at the forefront for product architects and engineers at every stage of development.

When designing a product, you need to think beyond what you are building your product to do and anticipate use cases you may not have intended. For example, consider a server platform embedded in an MRI machine in a hospital. A data center is a very different environment from a hospital basement. You have to think holistically about your product and work through the security implications of unintended use cases down the road. Hackers use this philosophy, exercising devices in completely unexpected ways to uncover potential vulnerabilities. It’s hard to imagine every potential use case for a particular device (or every way bad actors might attack it), so you need to think about security proactively in layers and design in defense in depth so that no single exploit is likely to succeed.

  2. What’s the first thing that needs to happen when creating a new product?

From an architecture standpoint, you have to think about how a device might come under attack. That could include hardware, firmware, OS, application, and connectivity types of attacks. Using a ‘design for security’ mindset, you must think through all of these attack scenarios, because the weakest link breaks the chain. For example, when thinking about making airplanes safe, designers build in redundancy, so a single failure isn’t likely to cause a crash. But they also consider passenger safety and how best to exit planes quickly. They have robust communications and procedures for what to do if communications are down, and many, many other aspects that contribute to a safer airplane trip. This same mindset exists in technology, with many security layers built into products from the beginning. An adversary will avoid heavily protected elements of a product and look for the easiest way to break the system.

This means threat modeling needs to be one of the first things to happen when building a product. You can threat model everything from environmental factors and natural disasters to global geopolitics, or you can narrow it down to something like a network or access to a system. It’s about guarding against bad outcomes. Mature organizations often have teams of researchers dedicated to creating and evaluating threat models.
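As an illustration only (not any specific Intel process), a first pass at threat modeling can be as simple as crossing each asset with a standard threat taxonomy such as STRIDE to seed a review worksheet; every asset name below is a hypothetical example:

```python
# Minimal threat-modeling sketch: cross a list of assets with the
# well-known STRIDE threat categories to produce review items.
# The assets listed here are hypothetical examples.
from itertools import product

STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

assets = ["firmware image", "debug port", "update channel"]

# Every (asset, threat) pair becomes one entry a review team must
# either mitigate or explicitly accept.
worksheet = [f"{threat} of/against {asset}"
             for asset, threat in product(assets, STRIDE)]

for entry in worksheet:
    print(entry)
```

Real threat models weigh likelihood and impact as well, but even this exhaustive cross-product makes it harder to silently skip a scenario.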

  3. How do you prioritize security when designing and developing a new product?

Once you get into actual design and development, you want to be able to catch known security threats. That process is part of the Secure Development Lifecycle, or SDL. The SDL is a series of processes that build security principles and privacy tenets into product development to support engineers, developers, and researchers. These processes incorporate security-minded engineering and testing from the onset of product development, when they are most effective and efficient to employ. The SDL includes not only knowledge sharing but also tools and services that, for example, allow someone to run checks against code. You can imagine that the number of checks becomes massive over time, so you need a process that is efficient and scales, helping teams catch security vulnerabilities.

Automation plays a vital role here. This involves using tools that embed these checks and automate the process so designers can run a multitude of complex security checks with the click of a button. Our teams are constantly working to stay ahead of attackers by trying to find these issues and vulnerabilities before an attacker can exploit them. Beyond the SDL, other initiatives play a major role in security, including training, conferences, Product Security Incident Response Teams (or PSIRTs), bug bounty programs, offensive and defensive research, and industry collaboration.
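To make the automation idea concrete, here is a minimal, hypothetical sketch (not an Intel tool) of what “one-click” checking can look like: register simple checks, run them all over a piece of source, and aggregate pass/fail results. Both check functions and the sample input are illustrative assumptions:

```python
# Hedged sketch of automated security checking: each check is a small
# function, and a single runner executes all of them over the source.
import re

def check_hardcoded_secrets(text: str) -> bool:
    # Passes only if no obvious hardcoded credential pattern appears.
    return re.search(r"(password|secret)\s*=\s*['\"]\w+['\"]", text, re.I) is None

def check_banned_calls(text: str) -> bool:
    # Passes only if no known-dangerous C function is referenced.
    return not any(fn in text for fn in ("strcpy(", "gets(", "sprintf("))

CHECKS = [check_hardcoded_secrets, check_banned_calls]

def run_checks(source: str) -> dict:
    """Run every registered check; return {check name: passed}."""
    return {check.__name__: check(source) for check in CHECKS}

sample = 'password = "hunter2"\nstrcpy(dst, src);'
results = run_checks(sample)
print(results)  # both checks fail on this sample
```

Production SDL tooling is far richer (static analyzers, fuzzers, dependency scanners), but the scaling property is the same: new checks are added to the registry once and then run everywhere automatically.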

  4. Is there some sort of final security check involved before a product goes to market?

There’s no single security check but rather a gauntlet of checks whose completion makes a product ready for market. Even early in the Intel development process, a product is generally required to meet the security milestones appropriate to that development phase in order to proceed. At Intel, we don’t just check for security at the end; it is an integral part of the entire development process. We have an internal team of more than 200 security researchers who work collaboratively with the product teams to evaluate products throughout development.

Our teams work to find and mitigate potential vulnerabilities through internal code reviews, red team activities such as Hack-A-Thons, and other events before products go to market. The data we collect is then used to develop automation and required training to help eliminate future occurrences. We also partner with the external research community, which is full of extremely smart and creative people. We want them working with us, making our platforms better. This is sometimes known as “crowdsourced security” and can include bug bounty programs, which give researchers incentives to report vulnerabilities.

  5. What happens if researchers identify a major vulnerability via bug bounty programs after the product is already in the field?

At a high level (and this can differ depending on the vulnerability), products with a vulnerability initially go to PSIRTs. At Intel, this team engages with the researcher who uncovered the issue and does the preliminary evaluation to validate and replicate it. Then, very quickly, the issue is triaged with Intel experts for that specific platform area, who drop everything to prioritize its resolution. Finding and deploying mitigations could take days, weeks, or months, depending on the complexity. In the meantime, because Intel follows the common industry practice of Coordinated Vulnerability Disclosure (CVD) for reported security vulnerabilities in launched products, we align with the researchers on a date to publicly disclose the issue. That window allows time to identify and deploy mitigations, reducing the adversary’s advantage.

Once we have a mitigation, we need to help ensure it doesn’t create other unintended problems. Before rolling it out into customer environments, we need to make sure we understand the full extent of its potential impact. First, internally, we do what’s called ‘no harm testing’. Later, we do more robust testing with partners and then roll out the update to customers in a coordinated fashion. When possible, we bundle updates together so they can be validated together, saving customers time and money. In addition to practicing inbound CVD in partnership with external security researchers, Intel also coordinates outbound vulnerability disclosure with industry partners and other external stakeholders, as appropriate, so that all affected parties disclose in unison for an optimal defensive position. It’s all about coordinated disclosure.

  6. What role does working with the larger hardware community play in designing for security?

Compute is a complex endeavor that involves hardware from multiple vendors, firmware, operating systems, and applications. And of course, if your hardware goes online, which more and more of it does with the expansion of the Internet of Things, you must strive to secure compute systems across entire ecosystems. We’re really in an interesting time now. With so many connected and smart systems, we must consider security and privacy in every design decision for every product we create. These topics require broad discussion and collaboration, and they deserve our detailed attention to ethical considerations.

And as an industry, we are far from consensus on these critical considerations: not every company designs for security or maintains even a basic framework for updating its products to stay safer from attackers. There is no real unanimity across the industry about what holistic security looks like, and those are things customers really care about. We at Intel, together with our partners in the technology market, have the opportunity to demonstrate what more comprehensive security means. We are leading by example, inviting others to follow, and educating customers that we all should demand more from technology suppliers. That raises the security bar for ourselves and the industry, because so much of the world depends on technology.

Designing for security is critical for any organization producing technology products and services today. If you haven’t already, consider the above questions and move to a security by design mindset to help ensure your organization can deliver safer, more reliable products that earn trust within the market.

About the Authors

Camille Morhardt – Director of Security Initiatives and Communications at Intel

Tom Garrison – VP and GM of Client Security Strategy and Initiatives at Intel


