When talking about increasing development velocity for your teams, containers are at the forefront of any conversation about modern development practices. By using them as deployment artifacts, developers can be sure that their code works on the target platforms, and security teams can rest assured that the application code is much less likely to permanently affect the machines it is deployed to. In a way, unlike most other tooling, containers have mostly been a “win-win” proposition, increasing both development velocity and security.
As with all technological advancement, any new tool introduced into your environment brings new security concerns. Even though containerized workloads run in isolation, you can easily undo all of that advantage with a few benign-looking keystrokes if you’re not careful. With that in mind, in the following sections we will cover the top 3 pitfalls that you should keep an eye out for when working with containers.
1. Mounting the Docker Daemon Socket into the Container
Even though the Docker Engine is being slowly phased out in many of the orchestration engines, for the foreseeable future it will still be the dominant container runtime. Unfortunately, it currently has what is probably the most egregious and pervasive security omission seen in the wild: mounting the Docker daemon socket (`/var/run/docker.sock` on most platforms) into a container to enable extra functionality comes at a non-obvious cost to your security.
To start us out, let’s take a look at how Portainer, a popular Docker management platform, is intended to be run:
```shell
docker run -d \
  -p 9000:9000 \
  --name portainer \
  --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer
```
Do you see it? Take a look at the mounted volume: `-v /var/run/docker.sock:/var/run/docker.sock`. You have just indirectly given this container root-level permissions on the host.
To examine this problem a bit more in detail, we need to step back and take a closer look at what exactly this `/var/run/docker.sock` file is. What might not be very obvious when you run the `docker` command on the CLI is that you are not running the commands directly on the daemon. The CLI is just an API wrapper that connects to `/var/run/docker.sock` for the actual execution of the code. What this means is that, by passing this socket file into the container, you are effectively granting it control of your host’s Docker instance and, with some clever volume mounting, almost all of your host’s protections can be invalidated.
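To see that the CLI really is just an API client, you can talk to the daemon directly with any HTTP client that supports Unix sockets. The sketch below assumes a host with Docker installed and permission to read the socket; the second command illustrates the escalation path, so only run it on a machine you control:

```shell
# The Docker CLI is only a thin client: the same /version information
# it reports can be fetched straight from the daemon's Unix socket.
curl --silent --unix-socket /var/run/docker.sock http://localhost/version

# From inside a container that has the socket mounted, that same API is
# reachable -- meaning the container can ask the daemon to start a NEW
# container that mounts the host's root filesystem, escaping isolation:
docker run --rm -it \
  -v /:/host \
  alpine chroot /host /bin/sh
```

The second command drops you into a shell rooted at the host’s `/`, which is why handing the socket to an untrusted image is equivalent to handing over the host.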
While this practice is generally dangerous, it does have some rare valid uses in container management platforms. In the Portainer example mentioned earlier, this setup is required since the tool needs to manage your Docker instance, and you may also find that some “Docker-in-Docker” (DinD) configurations cannot work without it. Given its potential for misuse, though, you should always ensure that the images you run with this type of configuration are ones that you have audited and that you trust, to avoid potential breaches.
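If you must expose the daemon to a management tool, one way to reduce the blast radius is to put a filtering proxy in front of the socket so only the API endpoints the tool actually needs are reachable. The sketch below uses one community proxy image as an illustration; the image name, environment variables, and port are assumptions you should verify and audit before trusting:

```shell
# Hypothetical hardening sketch: the proxy holds the raw socket and
# exposes only whitelisted API sections (here, container listing):
docker run -d \
  --name docker-socket-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -e CONTAINERS=1 \
  tecnativa/docker-socket-proxy

# The management container then talks to the proxy over TCP instead of
# mounting the host socket itself:
docker run -d \
  -e DOCKER_HOST=tcp://docker-socket-proxy:2375 \
  portainer/portainer
```

This does not eliminate the risk (a proxy is only as good as its filter rules), but it avoids handing the full daemon API to the managed tool.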
2. Neglecting Your Container Images
The second biggest security lapse in the use of containers is outdated dependencies and tools within the container images themselves. With unmaintained apps, infrequent builds and overzealous image layer caching, there is a very high risk that the images you create, or even the base images you depend on, are stale. These stale images directly increase the risk of your containers being compromised by attackers using the latest exploits against the unpatched and outdated libraries inside them. Even though your hosts are secured from your containers by kernel protections to some extent, the data that your container has access to, as well as its processing resources, can still be hijacked to suit the attacker’s needs.
Due to this pervasive attack vector and its potential to create a jump-point for host compromises, you should ensure that you: continuously update your images (including the ones you use as bases for other images), frequently update your code dependencies and occasionally invalidate the container build cache. Unfortunately, the reality is that these updates often have a chance of breaking some app functionality, so, depending on your organization’s security-vs-stability requirements, you can tweak the update frequency of your tooling to suit your needs.
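In practice, the two docker build flags below cover the “invalidate the cache” and “refresh the base image” steps in one go; the image name is illustrative, and the scanner shown is one of several equivalent options:

```shell
# --pull re-downloads the base image even if a cached copy exists, and
# --no-cache invalidates all cached layers, so RUN steps (e.g. package
# installs) are re-executed against current package repositories:
docker build --pull --no-cache -t myorg/myapp:latest .

# Periodically scan built images for known CVEs with a scanner such as
# Trivy (any comparable image scanner works the same way):
trivy image myorg/myapp:latest
```

Wiring these into a scheduled CI job gives you the “continuous updates” described above without relying on developers to remember the flags.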
3. Use of Unverified Images
The third (but by no means the last!) big problem in container security is a general lack of OpSec when choosing base images. Most developers rely heavily on DockerHub for their base images, which can easily open them up to unexpected attacks if sufficient scrutiny isn’t applied during development and image selection. The distilled version of the issue is that there is no verification that the functionality DockerHub presents for an image actually matches what the container does, so you can easily end up running a botnet C&C node or mining Bitcoin for someone. By using only verified images from the author(s) of the tool you need and auditing the image layers, you can, in most cases, avoid this type of attack. However, you can really only reduce the risk, not eliminate it completely, as image tags are mutable and uploader credentials are liable to compromise.
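Docker ships some tooling that helps here. The commands below sketch the verification steps; the `nginx` image is just an example, and the digest shown is a placeholder for whatever digest you actually audited:

```shell
# Refuse unsigned images by enabling Docker Content Trust for the pull:
DOCKER_CONTENT_TRUST=1 docker pull nginx:latest

# Better still, pin the exact image you audited by its immutable digest
# rather than a mutable tag (digest value here is illustrative):
docker pull nginx@sha256:0123456789abcdef...

# Inspect the layer history of an image before trusting it:
docker history nginx:latest
```

Pinning by digest sidesteps the mutable-tag problem mentioned above: even if the `latest` tag is later re-pointed at a malicious build, the digest you deploy cannot silently change.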
It’s also important to note that to minimize risks you shouldn’t just look for signs that the images you use are safe; you should also minimize your use of niche images that could instead be created from relatively safer ones with little effort. For example, with only a few lines in a Dockerfile you can have a fully functioning OpenSSL CLI built on almost any big verified distribution, yet you may see someone opt for a niche image that is not from a known organization and could contain any number of problems (both intentional and not) in tooling that is supposed to secure your infrastructure. Are the keys that image creates safe? Is it intentionally using an outdated or flawed version of SSL? The bottom line is that you can’t be sure, but using a safer base image like Alpine and manually installing openssl is far more likely to avoid security issues than using a niche image. Keep this in mind when you choose what images to include in your infrastructure.
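The “few lines in a Dockerfile” really are just a few; this sketch uses an Alpine tag chosen for illustration, so pin whichever release you have actually vetted:

```dockerfile
# A minimal, auditable alternative to a niche "openssl" image: start
# from a well-known verified base and install the tool yourself.
FROM alpine:3.19
RUN apk add --no-cache openssl
ENTRYPOINT ["openssl"]
```

Every layer here is something you wrote and can audit, which is exactly the property a niche third-party image cannot give you.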
Containerization has been taking over the tech world as the next step in improving both cycle times and the security of your infrastructure, but relying on containerization alone to provide all of your security is usually a recipe for breaches. Hopefully, this short list of common issues will help your organization improve its security posture when using these new technologies. This is by no means an exhaustive list, but rather an attempt to make you aware that every new tool has its quirks and problems, and containers are no exception. If you decide to embark on this new paradigm, make sure to keep informed about what new attack vectors the technology adds and how to avoid them. Don’t forget to let us know in the CyberArk Commons if you would like to see more articles about this topic on our blog!
Srdjan Grubor is a Software Engineer at CyberArk, where he is building the next generation of digital security products. Srdjan is the author of the book “Deployment with Docker,” was one of the first people to receive the Docker Certified Associate certification, and has worked on Linux systems at scale for over a decade. He enjoys breaking things just to see how they work, tinkering, and solving challenging problems.