At their core, containers are just Linux processes that are namespaced. In practice, this means that many containers run side by side as processes on the same host machine. While namespaces and cgroups create good boundaries between these processes, the isolation is still not perfect.
Note that while the isolation of containers is not as strong as that of, for example, virtual machines on a hypervisor, even hypervisors sometimes have trouble fully isolating workloads from each other, especially when it comes to I/O or randomness.
The Wikipedia article on system resources lists the following major system resources that are typically shared when running on a single operating system or host:
- Filesystem I/O
- Network I/O
- File descriptors
It turns out that Kubernetes has solid concepts for dealing with CPU and memory by using Resource Quotas (Limits). However, all the other resources cannot be constrained as easily. This means that any container can consume as much of these resources on a worker node as it wants.
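To illustrate the CPU and memory case, a minimal pod spec with requests and limits might look like the following sketch (the pod name, container name, image, and values are illustrative, not taken from the repository):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod       # illustrative name
spec:
  containers:
  - name: app
    image: busybox        # illustrative image
    resources:
      requests:
        cpu: "250m"       # scheduler reserves a quarter of a core
        memory: "64Mi"
      limits:
        cpu: "500m"       # container is throttled above half a core
        memory: "128Mi"   # container is OOM-killed above this
```

CPU limits are enforced by throttling, and exceeding a memory limit gets the container OOM-killed. Notice that there is no equivalent field here for file descriptors or I/O.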
Of course, I am not the first to notice this; you can read up on Jessie Frazelle's wonderful blog post about multi-tenancy in Kubernetes. It turns out that multi-tenancy is a complex topic that has no real solution as of now.
So how can we investigate and test the isolation between containers? In my k8s-isolation repository, I have written four different containers to demonstrate the various forms of isolation (or non-isolation):
- A container that launches a forkbomb, effectively using up all PIDs. PIDs are typically not isolated; depending on your Kubernetes installation, this may crash your node.
- A container that consumes file descriptors. These are also typically not isolated between containers, and exhausting them may crash your node.
- A container that consumes CPU resources. It can be used to test Resource Quotas (Limits), as these can properly isolate the process.
- A container that consumes memory. In a typical setup, this container will be OOM-killed at some point, especially if you enforce resource limits.
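To get a feel for how a single process eats into the shared file-descriptor pool, here is a small Python sketch along the lines of the file-descriptor container (the `consume_fds` helper and the cap of 100 are my own illustration, not code from the repository; the real container would keep going until the limit is hit):

```python
import resource
import tempfile

def consume_fds(limit: int):
    """Open file descriptors up to `limit` and keep them alive."""
    held = []
    try:
        for _ in range(limit):
            # Each open file consumes one descriptor from the shared pool.
            held.append(tempfile.TemporaryFile())
    except OSError:
        # EMFILE/ENFILE: the process or host ran out of descriptors.
        pass
    return held

# Inspect the per-process soft limit before consuming anything.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
fds = consume_fds(100)
print(f"holding {len(fds)} descriptors (soft limit: {soft})")
```

The point is that `RLIMIT_NOFILE` is a per-process limit; nothing in a default pod spec stops many containers from collectively exhausting the node-wide pool.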
As mentioned above, other resources, such as Network I/O and Filesystem I/O, are currently not properly isolated between containers on the same worker node.
Use the containers above with care. I have heard of Kubernetes clusters being brought to their knees by forkbombs.
References:
- https://en.wikipedia.org/wiki/System_resource
- https://kubernetes.io/docs/concepts/policy/resource-quotas
- https://blog.jessfraz.com/post/hard-multi-tenancy-in-kubernetes/
- https://github.com/simonkrenger/k8s-isolation