The Serverless Security Machine – Cyber Defense Magazine
By Art Sturdevant, Director of Operations, Censys
Servers are BS. They require constant maintenance, monitoring, and tweaking. As a security practitioner, regardless of where your team lands on the org chart, you're being charged with securing an ever-evolving landscape against all internal and external threats. The time required just to keep basic services functioning is daunting, and now you're probably working even harder to secure and protect your remote workforce, all while working from home. While the time required to evaluate and respond to threats keeps increasing, security budgets, personnel, and tooling are not being adjusted at the same rate, or are only adjusted in response to a particular threat or incident.
Given that time is at such a premium, why is your team still deploying infrastructure that requires constant supervision? With all these demands on your team, now is the time to move to a serverless infrastructure.
Traditional servers are great in that they can be provisioned and run forever, but unless the server is under constant load, you’re likely wasting money and resources managing it. Teams are using all kinds of complex tools to deploy new servers, apply configurations, update users, and apply security patches and still, there are servers that live outside of these tools or silently lose connectivity, never to be managed again. Every time a new server is deployed, you’re really managing three different problems — server updates, software updates, and code updates.
Server updates can be risky, which is why large organizations employ a change advisory board (CAB) to approve changes and security updates. Teams schedule downtime or work to deploy across zones without interruption, but because these changes apply to the entire operating system and are likely not authored by your team, it can be difficult to anticipate how a change will affect the service you're trying to manage, and even tougher to debug when something goes wrong.
Software updates are easier to manage and are likely better understood since the code was written by a team you know. If you’re already familiar with CI/CD models, then you might already be well suited to the serverless lifestyle. Code changes go in, peers review the changes, and the code is deployed in a seamless fashion. It may not always be that flawless, but debugging the code you wrote is almost always easier than debugging operating system changes or behaviors.
By moving to a serverless architecture, you're offloading most of the issues around software and security updates, system breaches, user provisioning, system health monitoring, and more. These issues are no longer your team's problem because you're only responsible for deploying code that runs. The operating system and runtime updates needed to run that code are maintained behind the scenes.
Moving to a serverless architecture doesn’t have to be “all or nothing” in order to maximize your time investment. For example, a good first step might be to evaluate the servers in your environment that only perform one task or those that are heavily underutilized. A good sign that you’ve identified a solid candidate is when you find a service/server that is performing a very event-driven task such as a server that collects and ships logs from various SaaS services or systems. If the service operates on a schedule or cron job – you’ve got a perfect first candidate!
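To make the "perfect first candidate" concrete, here is a minimal sketch of a cron-style log shipper restructured as an event-driven function. The names `fetch_logs` and `ship` are illustrative stand-ins, not a specific vendor API; in a real deployment they would call a SaaS audit-log endpoint and forward to your SIEM.

```python
import json
from datetime import datetime, timezone

def fetch_logs():
    """Stand-in for pulling recent events from a SaaS audit-log API."""
    return [
        {"actor": "alice", "action": "login", "ok": True},
        {"actor": "bob", "action": "delete_user", "ok": False},
    ]

def ship(records, sink):
    """Stand-in for forwarding records to a SIEM; here we just collect them."""
    for rec in records:
        sink.append(json.dumps(rec))

def handler(event=None, context=None):
    """Entry point a scheduler (e.g. a cloud cron trigger) would invoke."""
    sink = []
    ship(fetch_logs(), sink)
    return {"shipped": len(sink), "at": datetime.now(timezone.utc).isoformat()}
```

Because the whole job is a single stateless entry point with no long-lived process, it maps directly onto either a container run on a schedule or a FaaS function behind a cron trigger.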
Most users start by moving to a containerized version of their code. Docker is a popular tool and is available on nearly all platforms. Once you’ve containerized your code, simply deploy it to a docker host, or a cloud service capable of running containers. Every major cloud provider has support for running containers in production environments.
If you're looking for something that is truly serverless, consider evaluating a cloud provider's "Function as a Service" (FaaS) offering. These come with a slight learning curve but also a lot of great features, including a deployment model that is even simpler than containers. FaaS is a model for deploying a piece of code (think a Python script) and running it over and over in response to an event. A common scenario might be to fire a chat notification if a storage bucket becomes public, or to update TLS certificates on specific hosts as they near expiration. Serverless architecture can allow your team to quickly deploy proof-of-concept applications, or full-blown applications, to manage all corners of your security program.
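The "alert when a bucket becomes public" scenario might look like the hedged sketch below. The event shape and the `notify` helper are assumptions for illustration, not any particular provider's schema; a real function would parse the provider's ACL-change event and POST to a chat webhook.

```python
def notify(channel, text, outbox):
    """Stand-in for a chat webhook POST; here we append to a list."""
    outbox.append({"channel": channel, "text": text})

def handler(event, context=None, outbox=None):
    """Invoked by the provider whenever a bucket's ACL changes (assumed event shape)."""
    if outbox is None:
        outbox = []
    bucket = event.get("bucket", "unknown")
    # Only alert on the transition we care about: the bucket going public.
    if event.get("acl") == "public-read":
        notify("#security", f"Bucket {bucket} is now PUBLIC", outbox)
    return outbox
```

The function holds no state between invocations, so the provider can run zero copies when nothing happens and many copies in parallel during a burst of ACL changes.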
Although serverless assets can and often do reduce the administrative burden of managing servers, there are some limitations to be aware of as you adopt this new model.
- Potential Learning Curve: Containerization and FaaS both require a new skill set, if for no other reason than to get deployments working seamlessly from your Continuous Integration/Continuous Deployment (CI/CD) tool. Once your team understands the requirements to deploy a service, it becomes a very repeatable process. Deploying your first serverless project is likely an afternoon project for you or your team.
- Additional Expense: Misconfigurations can result in higher costs than a traditional virtual appliance in the cloud. However, even at the increased expense, consider that your team doesn't need to manage updates or security patches, or worry about attackers compromising the server. It is a good idea to understand cloud pricing models before automating these tasks to avoid a surprise at the end of the month. In particular, design functions to process work at a sensible granularity: read each word in the book, not each letter, and not the whole book at once.
- Increased Latency: Depending on the cloud provider, FaaS and containerized services could result in increased latency because of the “cold start time”. However, once the service is started up, running a second or hundredth service should be fairly quick.
- Task Timeouts: Most cloud providers limit the amount of time a FaaS task can run before it is terminated. A common timeout is between 30 seconds and 15 minutes. If you have a long-running task, you might want to consider breaking it into smaller tasks or moving to containerization since container deployments do not have the same timeout limitations.
- Updates Require Redeployments: To update containers with new code or new software packages, you’ll need to redeploy the container to the cloud. If you’re updating a FaaS function, you’ll just need to redeploy the code. While this might seem like a headache, if you update and deploy using CI/CD tools, this is actually pretty straightforward. Most clouds allow you to deploy with a canary model – meaning you can direct some traffic to your new code and some to your old code and keep adjusting until you’re confident that you haven’t introduced any unexpected problems.
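The usual workaround for task timeouts, breaking one long job into many short ones, can be sketched as a simple fan-out. `enqueue` is a stand-in for a real message queue that would trigger one function invocation per batch; the batch size is a tunable assumption, chosen so each invocation finishes well inside the provider's timeout.

```python
def chunk(items, size):
    """Yield fixed-size slices of a work list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def enqueue(batch, queue):
    """Stand-in for publishing a batch to a message queue."""
    queue.append(batch)

def fan_out(hosts, batch_size=100):
    """Split a long-running scan into per-batch function invocations."""
    queue = []
    for batch in chunk(hosts, batch_size):
        enqueue(batch, queue)
    return queue
```

With this shape, a scan of 250 hosts becomes three short invocations instead of one long-running task, and a failed batch can be retried on its own without rerunning the whole job.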
Help your security team alleviate the administrative burdens of managing servers by moving to a fully serverless infrastructure. It may seem daunting at first, but once you have a couple of services or workflows moved over, you’ll wonder why you didn’t make the move sooner.
About the Author
Art Sturdevant is the Director of Operations at Censys. An information security professional with over 15 years of experience, Art maintains a passion for open-source projects, entrepreneurship, and the outdoors. Before joining Censys in 2019, he was a Sr. Security Engineer at Duo Security, and he graduated with honors from Central Michigan University with a Bachelor of Science in Business Administration. To learn more about Censys, visit censys.io or email Art at art@censys.io.