Hi! Question in the title.

I get that it's super easy to set up. But is it really worthwhile to have something that:

  • runs everything as root (there don't seem to be many well-built images with proper user management)
  • doesn't let you really know what's in the images: you must trust whoever built them
  • leaves lots of mess in the system (mounts, virtual networks, firewall rules…)

I always host on bare metal when I can, but sometimes (Immich, I'm looking at you!) it seems almost impossible.

I get Docker in a work environment, but for self-hosting? Is it really worthwhile? I would like to hear your opinions, fellow hosters.

  • Display Name@lemmy.ml
    • Podman solves the root issue
    • you can inspect what's in the images. You don't have to, but with popular, widespread images there's little reason to be paranoid
    • I have no mess

    It's great that you install things on bare metal; I did that in the beginning, until I discovered Docker, and I will never go back. Docker/Podman compose is just so good
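
    For anyone curious how little boilerplate that is, here's a minimal sketch of a compose file (the service and image are just examples, not anything from this thread) that runs the same under docker compose and rootless podman-compose:

    ```yaml
    # Minimal sketch: one service with a relative bind mount for its data.
    # "vaultwarden" is only an example image; swap in whatever you actually host.
    services:
      vaultwarden:
        image: vaultwarden/server:latest
        ports:
          - "8080:80"          # high host port, so no root or extra capabilities needed
        volumes:
          - ./vw-data:/data    # persistent data lives next to the compose file
        restart: unless-stopped
    ```

    Bring it up with docker compose up -d or podman-compose up -d.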

    • redcalcium@lemmy.institute

      you can inspect what's in the images. You don't have to, but with popular, widespread images there's little reason to be paranoid

      Dive is a great tool for inspecting Docker images. I wish I had found it sooner.
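
      For anyone who hasn't tried it, typical usage looks like this (the image name is just an example):

      ```sh
      # Interactively browse each layer of an image and see which files it adds or changes
      dive nginx:latest

      # Non-interactive mode for scripts/CI; exits non-zero if the image fails the efficiency rules
      CI=true dive nginx:latest
      ```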

    • Molecular0079@lemmy.world

      Out of curiosity, what reverse proxy container do you use that can run rootless in Podman? My main issue, and feel free to correct me if I am wrong, is that most of them require root. And then it's not possible to easily connect those containers to the same network as your rootless containers, so your other containers have to be root anyway. I don't really want my other containers to be host-accessible; I want them to be accessible only from within the Podman network that the reverse proxy has access to.

      And then there are issues where you have to enable lingering for normal users, allow access to ports below 1024, deal with docker-compose being a pain, etc. I haven't really found a good solution for rootless, but I really want to eventually move that way.
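
      For what it's worth, the lingering and low-port parts have stock systemd/sysctl workarounds; a rough sketch (the user name and port are placeholders):

      ```sh
      # Keep the user's services (and their rootless containers) running after logout
      sudo loginctl enable-linger myuser

      # Let unprivileged processes bind ports >= 80, so a rootless reverse proxy
      # can listen on 80/443 directly
      echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-rootless.conf
      sudo sysctl --system
      ```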

    • QuikxSpec@lemmy.world

      Do you have a preferred resource? I'm setting up my NAS and starting to prepare for setting up containers. In the meantime it's just static storage until I get comfortable.

    • Shimitar@feddit.itOP

      I probably need to study Podman; stuff running as root is my main dislike.

      Probably, if I only used Docker images I created myself, I would be less concerned about losing track of what I am really deploying, but wouldn't that defeat the main advantage of easy deployment?

      Portability is a point I hadn't considered either… But rebuilding a bare metal server, properly compartmentalized, only took me a few hours, so is that really so important?

          • Display Name@lemmy.ml

            Unfortunately I do not have a source, but my impression is that Podman will replace Docker as the container technology, since Red Hat now focuses on Podman rather than Docker and Kubernetes doesn't support Docker anymore. Transitioning obviously takes ages because companies move very slowly.

            • N0x0n@lemmy.ml

              I hope you're wrong… With RH's recent choices regarding FOSS… I really hope Podman won't replace Docker, especially in the self-hosted/FOSS community!

              • Display Name@lemmy.ml

                What’s wrong with podman?

                It's still many, many years away. Just think about how there's still Fortran and assembly code around.

                • N0x0n@lemmy.ml

                  Probably nothing, I have never tried it… but docker compose feels so comfortable right now, and relearning everything… ugh!

      • null@slrpnk.net

        But rebuilding a bare metal server, properly compartmentalized, only took me a few hours, so is that really so important?

        Depends on how much you value your time.

        Compare a few hours on bare metal to a few minutes with containers. Then consider that you also spend extra time on bare metal cleaning up messes. Containers don’t make a mess in the first place.

    • TCB13@lemmy.world

      It doesn't really matter that there are truly open-source and open ecosystems of containerization technologies, because in the end people/companies will pick the proprietary/closed option just because "it's easier to use", or some other specific thing that is good in the short term and very bad in the long term. This happened with CentOS vs Debian, is currently unfolding with Docker vs LXC/rkt/Podman, and will happen with Ubuntu vs Debian for all those who moved from CentOS to Ubuntu.

      • ericjmorey@programming.dev

        It cuts both ways. Less commercial interest means only hobby-level development (which can be high quality, but is typically slow and unpolished for users).

        So you can spend your energy on making up the gap between the ease of use of the commercially supported software and the pure volunteer projects, or you can have free time for things you're more interested in and jump ship when they squeeze too hard for cash.

            • nonprofitparrot@lemmy.world

              Lots of Docker guides and documentation just don't work, specifically with podman-compose. The networking options are not fully featured; I ended up having to rig up a bunch of Kubernetes services just to be able to use my VPN as a network bridge for my media server stack. I got Podman working eventually, because I think it's neat, but it definitely would have been twice as easy to just use Docker.

      • lemmyvore@feddit.nl

        I mean, “it’s easier to use” is a pretty good quality to have. People tend to pick the most user-friendly and time-saving solution, should we really be surprised? On the contrary, I think FOSS should strive to be easier to use.

        • TCB13@lemmy.world

          I mean, “it’s easier to use” is a pretty good quality to have. People tend to pick the most user-friendly and time-saving solution

           And they don't consider anything else, and then they get themselves into CentOS situations. Or large monopolies like the one Microsoft has over Office.

          I think FOSS should strive to be easier to use.

          Yes so do I.

          • lemmyvore@feddit.nl

            And they don't consider anything else, and then they get themselves into CentOS situations. Or large monopolies like the one Microsoft has over Office.

            But so what? The kind of people who do this were not going to be grand contributors to FOSS anyway. They’re just consumers, not makers, and they consume the products that make the most sense to them.

            Also, let’s not lay everything solely on consumer stupidity. Microsoft spends a crapton of money lobbying governments, administrations, universities, schools and so on around the world to maintain their monopoly. Corruption at all levels of society is a big factor.

  • umbrella@lemmy.ml

    people are rebuffing the criticism already.

    here's the main advantage imo:

    no messy system or leftovers. some programs use directories all over the place and it gets annoying fast if you host many services. sometimes you will have some issue that requires you to do quite a bit of hunting and redoing things.

    docker makes this painless. you can deploy and redeploy stuff easily and quickly, without a mess. updates are painless and quick too, with everything neatly self-contained.

    much easier to maintain once you get the hang of things.

  • Moonrise2473@feddit.it

    About the root problem: as of now, new installs try to have you run everything as a limited user. And even if the program runs as root inside the container, in order to escape from it an attacker would need a double zero-day exploit (one to get RCE inside the container, one to escape the container).

    The alternative to "don't really know what's in the image" usually is: "just download this easy, minified and incomprehensible trustmeimtotallynotavirus.sh script and run it as root". That requires much more trust than a container you can delete, with no traces, in literally seconds.

    If the program you want to run requires Python modules or Node modules, it will make much more of a mess on the system than a container.

    Downgrading to a previous version (or a beta preview) of the app you're running because of bugs is trivial: you just change a tag and launch it again. Doing this on bare metal requires you to be a terminal guru.
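
    A sketch of what "change a tag" means in practice (the image and version here are only examples):

    ```yaml
    services:
      immich-server:
        # was: image: ghcr.io/immich-app/immich-server:release
        image: ghcr.io/immich-app/immich-server:v1.90.2   # pin an older tag to roll back
    ```

    Then docker compose up -d pulls that tag and recreates just the affected service.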

    Finally, migrating to a fresh new server is just docker compose down, then rsync to the new server, then docker compose up -d. No praying to ten different gods because after three years you forgot how you installed the app on bare metal.
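
    Roughly, assuming the compose file and all bind-mounted data live in one directory (paths and hostname are placeholders):

    ```sh
    docker compose down                                   # stop and remove containers; bind-mounted data stays on disk
    rsync -a ~/stacks/immich/ newhost:~/stacks/immich/    # copy the whole stack directory to the new server
    ssh newhost 'cd ~/stacks/immich && docker compose up -d'
    ```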

    Docker is perfect for common people like us self-hosting at home; the professionals at work use Kubernetes.

  • haui@lemmy.giftedmc.com

    Imo, yes.

    • only run containers from trusted sources (btw. Google, MS, Apple have proven they can't be trusted either)
    • run apps without dependency hell
    • even if someone breaks in, they’re not in your system but in a container
    • have everything web facing separate from the rest
    • get per app resource statistics

    Those are just what was in my head. Probably more to be said.

  • Aniki 🌱🌿@lemm.ee

    1.) No one runs rooted docker in prod. Everything is run rootless.

    2.) That's just patently not true. docker inspect is your friend. Also, you can build your own containers trusting no one, FROM scratch: https://hub.docker.com/_/scratch/ (see the sketch after this list)

    3.) I think mess here is subjective. Docker folders make way more sense than Snap mounts.
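
    A minimal "trust no one" sketch of the FROM scratch approach (it assumes a statically linked Go binary, since scratch has no libc or shell):

    ```dockerfile
    # Build stage: compile a static binary from your own source
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /app .

    # Final stage: an empty image containing nothing but your binary
    FROM scratch
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]
    ```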

    • eluvatar@programming.dev

      1 is just not true, sorry. There's loads of stuff that only works as root, and people use it.

  • Big P@feddit.uk

    Docker is messy and not ideal, but it was born out of necessity: getting multiple services to coexist outside of containers can be a nightmare, updating and moving configuration is a nightmare, and removing things can leave stuff behind, which gets messier and messier over time. Docker just standardises most of the configuration whilst requiring minimal effort from the developer.

  • Hexarei@programming.dev

    Others have addressed the root and trust questions, so I thought I’d mention the “mess” question:

    Even the messiest bowl of ravioli is easier to untangle than a bowl of spaghetti.

    The mounts/networks/rules and such aren’t “mess”, they are isolation. They’re commoditization. They’re abstraction - Ways to tell whatever is running in the container what it wants to hear, so that you can treat the container as a “black box” that solves the problem you want solved.

    Think of Docker containers less like pets and more like cattle, and it very quickly justifies a lot of that stuff because it makes the container disposable, even if the data it’s handling isn’t.

    • paws@cyberpaws.lol

      I ended up using Docker to set up pict-rs and y’all are making me happy I did

  • MartianSands@sh.itjust.works

    I find it makes my life easier, personally, because I can set up and tear down environments I’m playing with easily.

    As for your user & permissions concern, are you aware that Docker these days can be configured to map "root" in the container to a different, unprivileged user on the host? Personally I prefer to use Podman, though, which doesn't have that problem to begin with.

    • micka190@lemmy.world

      I find it makes my life easier, personally, because I can set up and tear down environments I’m playing with easily.

      Same here. I self-host a bunch of dev tools for my personal toy projects, and I decided to migrate from Drone CI to Woodpecker CI this week. I didn't have to worry about uninstalling anything, learning what commands I need to start/stop/restart Woodpecker properly, etc. I just commented out my Drone CI/Runner services in my docker-compose file, added the Woodpecker stuff, pointed it at my Gitea variables, and ran docker compose up -d.

      If my server ever crashes, I can just copy it over and start from scratch.

  • ssdfsdf3488sd@lemmy.world

    Because if you use relative bind mounts, you can move a whole Docker Compose set of containers to a new host: docker compose stop, then rsync it over, then docker compose up -d.

    Portability and backup are dead simple.
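
    For context, "relative bind mounts" just means paths like these in the compose file (names are placeholders), so all the data travels with the directory you rsync:

    ```yaml
    services:
      app:
        image: example/app:latest       # placeholder image
        volumes:
          - ./config:/etc/app:ro        # read-only config, stored next to the compose file
          - ./data:/var/lib/app         # persistent data, picked up by the rsync
    ```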

  • MigratingtoLemmy@lemmy.world

    Docker can be run rootless. Podman is rootless by default.
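
    If anyone wants to try rootless Docker specifically, the upstream setup tool works roughly like this (check the official docs for prerequisites such as uidmap; this is a sketch, not a full guide):

    ```sh
    # Run as the unprivileged user that will own the containers
    dockerd-rootless-setuptool.sh install
    systemctl --user enable --now docker    # rootless daemon as a systemd user service
    docker context use rootless             # point the docker CLI at the rootless socket
    ```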

    I build certain containers from scratch. Very popular FOSS software can be trusted, but if you're that paranoid, you should probably be running the bare minimum of software in the first place.

    It's a mess if you're not used to it. But yes, normal Unix networking is somewhat simpler (like someone mentioned, LXC containers can be a decent idea). Well, you'll realise that Docker is not really top dog in terms of complexity when you start playing with the big boys, like full-fledged k8s.

  • DeltaTangoLima@reddrefuge.com

    To answer each question:

    • You can run rootless containers but, importantly, you don’t need to run Docker as root. Should the unthinkable happen, and someone “breaks out” of docker jail, they’ll only be running in the context of the user running the docker daemon on the physical host.
    • True but, in my experience, most docker images are open source and have git repos - you can freely download the repo, inspect the build files, and build your own. I do this for some images I feel I want 100% control of, and have my own local Docker repo server to hold them.
    • It's the opposite - you don't really need to care about Docker networks, unless you have an explicit need to confine a given container's traffic to its own local net, and bind mounts are just maps to physical folders/files on the host system, with the added benefit of mounting read-only where required.

    I run containers on top of containers - Proxmox cluster, with a Linux container (CT) for each service. Most of those CTs are simply a Debian image I’ve created, running Docker and a couple of other bits. The services then sit inside Docker (usually) on each CT.

    It’s not messy at all. I use Portainer to manage all my Docker services, and Proxmox to manage the hosts themselves.

    Why? I like to play.

    Proxmox gives me full separation of each service - each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.

    Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.

    Let’s say there’s a new contender that competes with Immich. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT).

    I can spin up a Proxmox CT from my own template, use my Ansible playbook to provision Docker and all the other bits, load it in my Portainer management platform, and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.

    I have a play with the competitor for a bit. If I don’t like it, I just delete the CT and move on. If I do, I can point my photos... hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shutdown, maybe not - just in case I discover something I don’t like about the new kid on the block.

    • lemmyvore@feddit.nl

      Should the unthinkable happen, and someone “breaks out” of docker jail, they’ll only be running in the context of the user running the docker daemon on the physical host.

      There is no daemon in rootless mode. Instead of a daemon running containers in client/server mode you have regular user processes running containers using fork/exec. Not running as root is part and parcel of this approach and it’s a good thing, but the main motivator was not “what if someone breaks out of the container” (which doesn’t necessarily mean they’d get all the privileges of the running user on the host and anyway it would require a kernel exploit, which is a pretty tall order). There are many benefits to making running containers as easy as running any kind of process on a Linux host. And it also enabled some cool new features like the ability to run only partial layers of a container, or nested containers.

      • DeltaTangoLima@reddrefuge.com

        Yep, all true. I was oversimplifying in my explanation, but you’re right. There’s a lot more to it than what I wrote - I was more relating docker to what we used to do with chroot jails.

  • Possibly linux@lemmy.zip

    Well, Docker tends to be more secure if you configure it right. As far as images go, it really is just a matter of getting your images from official sources. If there isn't an image already available, you can make one.

    The big advantage to containers is that they are highly reproducible. You no longer need to worry about issues that arise when running on the host directly.

    Also if you are looking for a container runtime that runs as a local user you should check out podman. Podman works very similarly to docker and can even run your containers as a systemd user service.
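
    A sketch of the systemd user service part (the container name is a placeholder; newer Podman versions prefer Quadlet files, but this still works):

    ```sh
    mkdir -p ~/.config/systemd/user
    podman generate systemd --new --name mycontainer > ~/.config/systemd/user/mycontainer.service
    systemctl --user daemon-reload
    systemctl --user enable --now mycontainer.service
    ```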

  • oranki@sopuli.xyz

    Portability is the key for me, because I tend to switch things around a lot. Containers generally isolate the persistent data from the runtime really well.

    Docker is not the only, or IMO even the best, way to run containers. If I were providing services for customers, I would definitely build most container images daily in some automated way. Well, I already do that for quite a few.

    The mess is only a mess if you don't really understand what you're doing; the same goes for traditional services.

  • aleq@lemmy.world

    the biggest selling point for me is that I’ll have a mounted folder or two, a shell script for creating the container, and then if I want to move the service to a new computer I just move these files/folders and run the script. it’s awesome. the initial setup is also a lot easier because all dependencies and stuff are bundled with the app.

    in short, it’s basically the exe-file of the server world
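
    i.e. the whole "install" can be a script along these lines (image, name and paths are placeholders):

    ```sh
    #!/bin/sh
    # Recreate the container from scratch; the data survives in the mounted folder
    docker rm -f myapp 2>/dev/null
    docker run -d \
      --name myapp \
      --restart unless-stopped \
      -p 8080:8080 \
      -v "$(pwd)/myapp-data:/data" \
      example/myapp:latest
    ```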

    runs everything as root (there don't seem to be many well-built images with proper user management)

    that’s true I guess, but for the most part shit’s stuck inside the container anyway so how much does it really matter?

    doesn't let you really know what's in the images: you must trust whoever built them

    you kinda can, reading a Dockerfile is pretty much like reading a very basic shell script for the most part. regardless, I do trust most creators of images I use. most of the images I have running are either created by the people who made the app, or official docker images. if I trust them enough to run their apps, why wouldn’t I trust their images?

    leaves lots of mess in the system (mounts, virtual networks, firewall rules…)

    that’s sort of the point, isn’t it? stuff is isolated