6 Docker Security Best Practices for Your Application


Developing and running secure Docker applications demands a strategic approach, from keeping unnecessary bloat out of your images to controlling how your containers can be accessed. One crucial aspect to master in Docker development is understanding image layering and optimization. Docker images are constructed from layers, each representing specific changes or instructions in the image’s build process.

In this article, we’ll delve into the significance of Docker image layering, the importance of choosing minimal base images, and practical approaches like multi-stage builds. Additionally, we’ll discuss the critical practices of running applications as non-root users, checking images for vulnerabilities using tools like Docker Scout, and implementing Docker Content Trust for image integrity. 

This comprehensive guide aims to equip developers and operators with actionable insights to enhance the security and efficiency of Docker applications.

Understanding Docker image layering

Before we jump into Docker security aspects, we need to understand Docker image layering and optimization. For a better understanding, let’s consider this Dockerfile, retrieved from a sample repository. It’s a simple React program that prints “Hello World.” The core code uses React, a JavaScript library for building user interfaces. 

Docker images comprise layers, and each layer represents a set of file changes or instructions in the image’s construction. These layers are stacked on each other to form the complete image (Figure 1). To combine them, a union filesystem is created, which overlays all of the image’s layers together. These layers are immutable. When you’re building an image, you’re simply creating new filesystem diffs, not modifying previous layers.

Figure 1: Visual representation of layers in a Docker image.

When you build a Docker image, each instruction in your Dockerfile creates a new layer. Layers are cached, so if you make a change in your code and rebuild the image, only the layers affected by that change will be recreated, saving time and bandwidth. This layering system makes images efficient to use.

You might notice that there are two COPY instructions (as shown in Figure 1). The first COPY instruction copies only package.json (and potentially package-lock.json) into the image. The second COPY instruction copies the remaining application code (excluding files already copied in the first COPY command). If only application code changes, the dependency layers — the first COPY and the npm install — are served from cache, avoiding re-downloading and reinstalling dependencies and significantly speeding up builds.
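
You can see this caching behavior for yourself. A quick sketch (the image name is illustrative):

# Rebuild after changing only application source; the COPY package*.json
# and RUN npm install layers come from cache
docker build -t react-app .

# Inspect the layers that make up the image
docker history react-app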

1. Choose a minimal base image

Docker Hub has millions of images, and choosing the right image for your application is important. It is always better to consider a minimal base image with a small size, as slimmer images contain fewer dependencies, resulting in a smaller attack surface. Not only does a smaller image improve your image security, but it also reduces the time needed to pull and push images, optimizing the overall development lifecycle.
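
Before committing to a base image, you can pull a couple of candidates and compare their sizes locally (the non-Alpine tag here is illustrative; the Alpine tag is the one used throughout this article):

docker pull node:21.6-alpine3.18
docker pull node:21.6

# List all local images in the node repository with their sizes
docker images node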

Figure 2: Example of Docker images with different sizes.

As depicted in Figure 2, we opted for the node:21.6-alpine3.18 image due to its smaller footprint. We selected the Alpine image for our Node application below because it omits the additional tools and packages present in the default Node image. This decision aligns with good security practice, as it minimizes the attack surface by eliminating components that are unnecessary for running your application.

# Use the official Node.js image with Alpine Linux as the base image
FROM node:21.6-alpine3.18

# Set the working directory inside the container to /app
WORKDIR /app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install Node.js dependencies based on the package.json
RUN npm install

# Copy all files from the current directory to the working directory in the container
COPY . .

# Expose port 3000
EXPOSE 3000

# Define the command to run your application when the container starts
CMD ["npm", "start"]

2. Use multi-stage builds

Multi-stage builds offer a great way to streamline Docker images, making them smaller and more secure. They allow us to trim down a hefty 1.9 GB image to a lean 140 MB by using different build stages. In this approach, we leverage multiple FROM statements and carefully pick only the necessary pieces from one stage to another. 

We have converted our Dockerfile to a multi-stage one (Figure 3). In the first stage, we use a Node.js image to build the app, manage dependencies, and create application files (see the Dockerfile below). In the second stage, we copy the lightweight files generated in the first step and use Nginx to serve them. The build tools required in the first stage are left out of the final stage entirely, which is why the final image is small and suitable for the production environment. This also illustrates a key idea: the heavyweight environment we build on doesn’t need to ship with the app; we can copy the build artifacts into a much lighter runtime image.

Figure 3: High-level representation of Docker multi-stage build.

# Stage 1: Build the application
FROM node:21.6-alpine3.18 AS builder

# Set the working directory for the build stage
WORKDIR /app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the application source code into the container
COPY . .

# Build the application
RUN npm run build

# Stage 2: Create the final image
FROM nginx:1.20

# Set the working directory within the container
WORKDIR /app

# Copy the built application files from the builder stage to the nginx html directory
COPY --from=builder /app/build /usr/share/nginx/html

# Expose port 80 for the web server
EXPOSE 80

# Start nginx in the foreground
CMD ["nginx", "-g", "daemon off;"]

You can access this Dockerfile directly from a repository on GitHub.
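
To see the difference in practice, you can build both variants and compare their sizes (the tags are illustrative; Dockerfile.multi is the file name used in the build command in the next section):

docker build -t react-app:single -f Dockerfile .
docker build -t react-app:multi -f Dockerfile.multi .
docker images react-app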

3. Check your images for vulnerabilities using Docker Scout

Let’s take the multi-stage Dockerfile from the previous section and build an image from it.

You can run the following command to build a Docker image:

docker build -t react-app-multi-stage . -f Dockerfile.multi

Once the build process is complete, the CLI lets you view a summary of image vulnerabilities and recommendations. That’s what Docker Scout is all about.

=> exporting to image                                                                                                      0.0s
 => => exporting layers                                                                                                     0.0s
 => => writing image sha256:f348bcb19411fa1c4abf2e682f3dded7963c0c0c9b39c31804df5cd0e0f185d9                                0.0s
 => => naming to docker.io/library/react-node-app                                                                           0.0s

View build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/sci2bo7xihgwnfihigd8x9uh1

What's Next?
  View a summary of image vulnerabilities and recommendations → docker scout quickview
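
As the build output suggests, you can get a first look from the CLI before opening Docker Desktop (these subcommands are part of the Docker Scout CLI plugin; the image name comes from the build above):

# Summarize vulnerabilities in the image we just built
docker scout quickview react-app-multi-stage

# List the detected CVEs in detail
docker scout cves react-app-multi-stage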

Docker Scout analyzes the contents of container images and generates a report of packages and vulnerabilities that it detects, helping users to identify and remediate issues. Docker Scout image analysis is more than point-in-time scanning; the analysis gets reevaluated continuously, meaning you don’t need to re-scan the image to see an updated vulnerability report.

If your base image has a security concern, Docker Scout will check for updates and patches and suggest a replacement image. If issues exist in other layers, Docker Scout will reveal precisely where they were introduced and make recommendations accordingly (Figure 4).

Figure 4: How Docker Scout works.

Docker Scout uses Software Bills of Materials (SBOMs) to cross-reference with streaming Common Vulnerabilities and Exposures (CVE) data to surface vulnerabilities (and potential remediation recommendations) as soon as possible.

An SBOM is a nested inventory, a list of ingredients that make up software components. Docker Scout is built on a streaming, event-driven data model, providing actionable CVE reports. Once an SBOM has been generated, Docker Scout automatically matches it against newly disclosed CVEs, so you see updates for new CVEs without re-scanning artifacts.
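
If you want to see the SBOM that Scout works from, recent versions of the Scout CLI can print it directly (the --format flag is assumed to be available in your version):

docker scout sbom --format list react-app-multi-stage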

After building the image, we will open Docker Desktop (ensure you have the latest version installed), analyze the level of vulnerabilities, and fix them. We can also use Docker Scout from the Docker CLI, but Docker Desktop provides a better way to visualize the results.
Select Docker Scout from the sidebar and choose the image.

Here, we have chosen react-app-multi-stage, the image we just built. As you can see, Scout immediately shows the vulnerabilities and their severity levels. We can select View packages and CVEs to take a deeper look and get recommendations (Figure 5).

Figure 5: Docker Scout tab in Docker Desktop.

A window will now open showing a detailed report of the vulnerabilities and a layer-by-layer breakdown (Figure 6).

Figure 6: Detailed report of vulnerabilities.

To get recommendations to fix the image vulnerabilities, select Recommended Fixes in the top-right corner, and a dialog box will open with the recommended fixes.

As shown in Figure 7, it recommends upgrading Nginx from version 1.20 to 1.24, which has fewer vulnerabilities and fixes all the critical and high-severity issues. Notably, even though version 1.25 was available, Scout still recommends 1.24, because 1.25 carries critical vulnerabilities that 1.24 does not.

Figure 7: Recommendation tab for fixing vulnerabilities in Docker Desktop.

Now, we need to rebuild our image by changing the base image of the final stage to the recommended version 1.24 (Figure 8), which will fix those vulnerabilities.

Figure 8: Advanced image analysis with Docker Scout.

The key features and capabilities of Docker Scout include:

  • Unified view: Docker Scout provides a single view of your application’s dependencies from all layers, allowing you to easily understand your image composition and identify remediation steps.
  • Event-driven vulnerability updates: Docker Scout uses an event-driven data model to continuously detect and surface vulnerabilities, ensuring that analysis is always up-to-date and based on the latest CVEs.
  • In-context remediation recommendations: Docker Scout provides integrated recommendations visible in Docker Desktop, suggesting remediation options for base image updates and dependency updates within your application code layers.

Note that Docker Scout is available through multiple interfaces, including the Docker Desktop and Docker Hub user interfaces, as well as a web-based user interface and a command-line interface (CLI) plugin. Users can view and interact with Docker Scout through these interfaces to gain a deeper understanding of the composition and security of their container images.

4. Use Docker Content Trust

Docker Content Trust (DCT) lets you sign and verify Docker images, ensuring they come from trusted sources and haven’t been tampered with. This process acts like a digital seal of approval for images, whether signed by people or automated processes. To enable Docker Content Trust, follow these steps:

Initialize Docker Content Trust

Before you can sign images, ensure that Docker Content Trust is enabled for your shell session. Open a terminal and run the following command:

export DOCKER_CONTENT_TRUST=1

Sign the Docker image

Build and sign the Docker image using the following commands:

docker build -t <your_namespace>/node-app:v1.0 .
docker trust sign <your_namespace>/node-app:v1.0
...

v1.0: digest: sha256:5fa48a9b4e52a9d9681a5786b4885be080668d06019e91eece6dfded5a0f8a47 size: 1986
Signing and pushing trust metadata
Enter passphrase for <namespace> key with ID 96c9857:
Successfully signed docker.io/<your_namespace>/node-app:v1.0

Push the signed image to a registry

You can push the signed Docker image to a registry with:

docker push <your_namespace>/node-app:v1.0

Verify the signature

To verify the signature of an image, use the following command:

docker trust inspect --pretty <your_namespace>/node-app:v1.0

Signatures for <your_namespace>/node-app:v1.0

SIGNED TAG   DIGEST                                                             SIGNERS
v1.0         5fa48a9b4e52a9d968XXXXXX19e91eece6dfded5a0f8a47   <your_namespace>

List of signers and their keys for <your_namespace>/node-app:v1.0

SIGNER       KEYS
ajeetraina   96c985786950

Administrative keys for <your_namespace>/node-app:v1.0

  Repository Key:	47214511f851e28018a7b0443XXXXXXc7d5846bf6f7
  Root Key:	52bae142a9ac98a473c5275bXXXXXX2f4f5068081d567903dd

By following these steps, you’ve enabled Docker Content Trust for your Node.js application, signing and verifying the image to enhance security and ensure the integrity of your containerized application throughout its lifecycle.

5. Practice least privileges

Security is crucial in containerized environments. Embracing the principle of least privilege ensures that Docker containers operate with only the necessary permissions, thereby reducing the attack surface and mitigating potential security risks. Let’s explore specific best practices for achieving least privilege in Docker.

Run as non-root user

We minimize potential risks by running applications without unnecessary root privileges. Many applications don’t need root at all. So, in the Dockerfile, we can create (or reuse) a non-root system user and run the application inside the container with that user’s limited privileges, improving security and adhering to the principle of least privilege.

# Stage 1: Build the application
FROM node:21.6-alpine3.18 AS builder

# Set the working directory for the build stage
WORKDIR /app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the application source code into the container
COPY . .

# Build the application
RUN npm run build

# Stage 2: Create the final image
FROM nginx:1.20

# Set the working directory within the container
WORKDIR /app

# Set ownership and permissions for nginx user
RUN chown -R nginx:nginx /app && \
    chmod -R 755 /app && \
    chown -R nginx:nginx /var/cache/nginx && \
    chown -R nginx:nginx /var/log/nginx && \
    chown -R nginx:nginx /etc/nginx/conf.d

# Create the nginx pid file and give the nginx user ownership of it
RUN touch /var/run/nginx.pid && \
    chown -R nginx:nginx /var/run/nginx.pid

# Switch to the nginx user
USER nginx

# Copy the built application files from the builder stage to the nginx html directory
COPY --from=builder /app/build /usr/share/nginx/html

# Expose port 80 for the web server
EXPOSE 80

# CMD to start nginx in the foreground
CMD ["nginx", "-g", "daemon off;"]

If we are using Node as the final base image (Figure 9), we can add USER node to our Dockerfile to run the application as a non-root user. Unlike root, which has full control over the system, the node user is created within the official Node image with restricted permissions; it exists by default precisely so you can avoid running your application container as root.

Figure 9: Images tab in Docker Desktop.

Limit capabilities

Limiting Linux kernel capabilities is crucial for controlling the privileges available to containers. Docker, by default, runs with a restricted set of capabilities. You can enhance security by dropping unnecessary capabilities and adding only the ones required.

docker run --cap-drop all --cap-add CHOWN node-app

Let’s take our simple Hello World React containerized app and integrate it with these least-privilege practices. Note that capability flags are docker run options (as shown above), not part of the image itself, so the Dockerfile stays simple:

FROM node:21.6-alpine3.18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000

# Define the command to run the application; capabilities are dropped
# at run time with: docker run --cap-drop all --cap-add CHOWN node-app
CMD ["npm", "start"]

Add the --no-new-privileges flag

Running containers with the --security-opt=no-new-privileges flag is essential to prevent privilege escalation through setuid or setgid binaries. The setuid and setgid bits allow a user to run an executable with the file system permissions of the executable’s owner or group, respectively (and, on directories, to change ownership-inheritance behavior). This flag ensures that the container’s privileges cannot be escalated during runtime.

docker run --security-opt=no-new-privileges node-app

Disable inter-container communication

Inter-container communication (icc) is enabled by default in Docker, allowing containers to communicate using the docker0 bridged network. docker0 bridges your container’s network (or any Compose networks) to the host’s main network interface, meaning your containers can reach the network and you can reach the containers. Disabling icc enhances security by requiring communication to be defined explicitly (for example, with --link options). Note that icc is a daemon-level setting, not a docker run flag, so it is disabled when starting the Docker daemon:

dockerd --icc=false

Use Linux Security Modules

When you’re running applications in Docker containers, you want to make sure they’re as secure as possible. One way to do this is by using Linux Security Modules (LSMs), such as seccomp, AppArmor, or SELinux.

These tools can provide additional layers of protection for Linux systems and containerized applications by controlling which actions a container can perform on the host system:

  • Seccomp is a Linux kernel feature that allows a process to make a one-way transition into a “secure” state where it’s restricted to a reduced set of system calls. It restricts the system calls that a process can make, reducing its attack surface and potential impact if compromised.
  • AppArmor confines individual programs to predefined rules, specifying their allowed behavior and limiting access to files and resources.
  • SELinux enforces mandatory access control policies, defining rules for interactions between processes and system resources to mitigate the risk of privilege escalation and enforce least privilege principles.

By leveraging these LSMs, administrators can enhance the security posture of their systems and applications, safeguarding against various threats and vulnerabilities.

For instance, when running a simple Hello World React application containerized within Docker, the default seccomp profile is applied unless you override it with the --security-opt option. This flexibility enables administrators to explicitly define security policies based on their specific requirements, as demonstrated in the following command:

docker run --rm -it --security-opt seccomp=/path/to/seccomp/profile.json node-app

Customize seccomp profiles

Customizing seccomp profiles at runtime offers several benefits:

  • Flexibility: By separating the seccomp configuration from the Dockerfile, you can adjust the security settings without modifying the image itself. This approach allows for easier experimentation and iteration.
  • Granular control: Custom seccomp profiles let you precisely define which system calls are permitted or denied within your containers. This level of granularity allows you to tailor the security settings to the specific requirements of your application.
  • Security compliance: In environments with strict security requirements, custom seccomp profiles can help ensure compliance by enforcing tighter restrictions on containerized processes.
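
For illustration only, here is the general shape of a custom profile (a deliberately tiny sketch: it blocks every syscall except the handful listed, which is far too restrictive for a real Node.js app):

{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group", "futex", "rt_sigreturn"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}

You would then pass the file at run time with --security-opt seccomp=/path/to/seccomp/profile.json, as shown earlier.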

Limit container resources

In Docker, containers are granted flexibility to consume CPU and RAM resources up to the extent allowed by the host kernel scheduler. While this flexibility facilitates efficient resource utilization, it also introduces potential risks:

  • Security breaches: In the unfortunate event of a container compromise, attackers could exploit its unrestricted access to host resources for malicious activities. For instance, a compromised container could be exploited to mine cryptocurrency or execute other nefarious actions.
  • Performance bottlenecks: Resource-intensive containers have the potential to monopolize system resources, leading to performance degradation or service outages across your applications.

To mitigate these risks effectively, it’s crucial to establish clear resource limits for your containers:

  • Allocate resources wisely: Assign specific amounts of CPU and RAM to each container to ensure fair distribution and prevent resource dominance.
  • Enforce boundaries: Set hard limits that containers cannot exceed, effectively containing potential damage and thwarting resource exhaustion attacks.
  • Promote harmony: Efficient resource management ensures stability, allowing containers to operate smoothly and fulfill their tasks without contention.

For example, to limit CPU usage, you can run the container with:

docker run -it --cpus=".5" node-app

This command limits the container to use only 50% of a single CPU core.
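
Memory can be capped in the same way (the value here is illustrative):

docker run -it --memory="256m" node-app

If the container exceeds this limit, the kernel terminates its processes rather than letting them exhaust host memory.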

Remember, setting resource limits isn’t just about efficiency — it’s a vital security measure that safeguards your host system and promotes harmony among your containerized applications.

To prevent potential denial-of-service (DoS) attacks, limiting resources such as memory, CPU, file descriptors, and processes is crucial. Docker provides mechanisms to set these limits for individual containers.

docker run --restart=on-failure:<number_of_restarts> --ulimit nofile=<number> --ulimit nproc=<number> node-app

By diligently adhering to these least privilege principles, you can establish a robust security posture for your Docker containers. 

6. Choose the right base image

Finding the right image can seem daunting with more than 8.3 million repositories on Docker Hub. Two beacons can help guide you toward safe waters: Docker Official Images (DOI) and Docker Verified Publisher (DVP) badges.

  • Docker Official Images (marked by a blue badge shield) offer a curated set of open source and drop-in solution repositories. These are your go-to for common bases like Ubuntu, Python, or Nginx. Imagine them as trusty ships, built with quality materials and regularly inspected for seaworthiness. 
  • Docker Verified Publisher Images (signified by a gold check mark) are like trusted partners, organizations who have teamed up with Docker to offer high-quality images. Docker verifies the authenticity and security of their content, giving you extra peace of mind. Think of them as sleek yachts, built by experienced shipwrights and certified by maritime authorities.

Remember that Docker Official Images are a great starting point for common needs, and Verified Publisher images offer an extra layer of trust and security for crucial projects.
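
From the CLI, you can narrow a search to official images only (for example, when looking for a Node.js base):

docker search --filter is-official=true node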

Conclusion

Optimizing Docker images for security involves a multifaceted approach, addressing image size, access controls, and vulnerability management. By understanding Docker image layering and leveraging practices such as choosing minimal base images and employing multi-stage builds, developers can significantly enhance efficiency and security. Running applications with least privileges, monitoring vulnerabilities with tools like Docker Scout, and implementing content trust further fortify the containerized ecosystem. 

As the Docker landscape evolves, staying informed about best practices and adopting proactive security measures is paramount. This guide serves as a valuable resource, empowering developers and operators to navigate the seas of Docker security with confidence and ensuring their applications are not only functional but also resilient to potential threats.
