Context
I want to host public-facing applications on a server in my home, without compromising security. I realize containers might be one way to do this, and want to explore that route further.
Requirements
I want to run applications within containers such that they:
- Must not be able to interfere with applications running on host
- Must not be able to interfere with other containers or applications inside them
- Must have no access to, or influence on, other devices on the local network, and must not otherwise compromise the security of the network, while still being reachable by local devices via SSH.
Note: all of this within reason. I understand that there may be occasional vulnerabilities, in the kernel for example, that eventually get fixed. I am willing to accept reasonable risks like this.
What I found so far
- Running containers in rootless mode: in other words, running the container daemon as an unprivileged host user
- Running applications in the container under unprivileged users: the user the containerized application runs as should also be unprivileged
- Networking: The container’s networking must be restricted. I am still not sure how to do this and shall explore it more, but would appreciate any resources.
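For what it's worth, here is a minimal sketch of all three points using rootless Podman. The image name, user, UID, and port are placeholders, and exact flags can vary by version, so treat this as a starting point rather than a hardened recipe:

```shell
# Build an image whose application runs as an unprivileged user.
# Containerfile (hypothetical app "myapp"):
#
#   FROM alpine:3.19
#   RUN adduser -D -u 10001 appuser
#   USER appuser
#   CMD ["/usr/local/bin/myapp"]

# Run it rootless (invoked as a normal user, no sudo).
# --read-only makes the container's root filesystem immutable,
# --cap-drop=ALL drops all Linux capabilities,
# no-new-privileges blocks setuid/setgid privilege escalation,
# and publishing on 127.0.0.1 keeps the service off the LAN
# (other devices can then reach it through an SSH tunnel).
podman run -d --name myapp \
  --read-only \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  -p 127.0.0.1:8080:8080 \
  localhost/myapp:latest
```

Binding to loopback plus SSH tunneling is one simple way to satisfy the "no LAN access, but reachable via SSH" requirement; custom firewall rules or a dedicated container network are alternatives worth exploring.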
Alternative solution
I have seen bubblewrap presented as an alternative, but it seems like it is not intended to be used directly in this manner, and information about using it for this is scarce.
By default a container runs with network, storage and resources isolated from the host. What about this isolation is not “proper”?
Because OP is looking for security isolation, which isn’t what containers are for. Much like an umbrella stops rain, but not bullets.
I still don’t understand why you think containers aren’t adequate.
Say you break into a container, how would you break out?
Kernel exploits. Containers logically isolate resources, but they’re still effectively processes running on the same kernel and sharing the same hardware. There was one of those just last year: https://blog.aquasec.com/cve-2022-0185-linux-kernel-container-escape-in-kubernetes
Virtual machines are a whole other beast because the isolation is enforced at the hardware level, so you have to exploit hardware vulnerabilities like Spectre, or a virtual device: a few years ago people found a breakout bug (VENOM) in the legacy floppy-emulation driver that QEMU still attaches to VMs by default.
You don’t design security solutions on the premise that they’re not working.
Security comes in layers, so if you’re serious about security you do in fact plan for things like that. You always want to limit the blast radius if your security measures fail. And most of the big cloud providers do that for their container/kubernetes offerings.
If you run Portainer, for example, and it gets breached, that’s essentially a free container escape, because you can trick Docker into mounting and exposing whatever you need from the host. It’s also not uncommon for people to give a container more permissions than it really needs.
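To make that concrete, here is an illustrative comparison (container names and images are examples; Portainer legitimately needs the socket to function, which is exactly why a breach of it is so costly):

```shell
# Risky pattern: handing a management UI the Docker socket. Anyone who
# breaches this container can ask the daemon to start a new privileged
# container with the host filesystem bind-mounted -- effectively root
# on the host.
docker run -d --name portainer \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer-ce

# By contrast, an ordinary app container with no socket, no extra
# capabilities, and no host mounts leaves a much smaller blast radius
# if it gets breached.
docker run -d --name myapp \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  myapp:latest
```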
It’s not like making a VM dedicated to running your containers costs anything. It’s basically free. I don’t do it every time, but if something is exposed to the Internet and there’s other stuff on the box I want to keep hard to get into, like my home server or desktop, then it definitely gets a VM.
Otherwise, why even bother putting your apps in containers? You could just make the apps themselves fully secure and unbreachable. Why do we need a container for isolation at all? One should assume the app’s security measures are working, right?
If they can find a kernel exploit they might find a hardware exploit too. There’s no rational reason to assume containers are more likely to fail than VMs, just bias.
Oh and you can fix a kernel exploit with an update, good luck fixing a hardware exploit.
Now you’re probably going to tell me how a hardware exploit is so unlikely, but since we’re playing make-believe, I can make it as likely as suits my argument, right?