What do people use and recommend for this? I’ve read a bit about Portainer, but I’m still learning and don’t know what the best solutions are.

Today I have a handful of selfhosted services running on my home machine - mostly installed directly, but a couple running as docker containers. As the scale of my selfhosting has grown, I’ve realized that things would be a lot easier to manage if each service was run as its own container, so that installed services are isolated.

The solution I’m looking for would make it easy (possibly a web UI) for me to monitor, modify, update, and remove containerized services, including networking and storage.

Edit: Also I would only want a FOSS solution.

  • Daniel Quinn@lemmy.ca · 20 hours ago

    Kubernetes. For a homelab, the stripped-down k3s is fantastic and surprisingly easy to get going.

    Once you’ve got Kubernetes set up, you can lean on all the many tools already out there for things like deploying complex projects (Helm) and monitoring (Prometheus/Grafana). OpenLens is a nice piece of software you can use to monitor and control your cluster too, as is k9s.

    • Jul (they/she)@piefed.blahaj.zone · 16 hours ago

      What do you use for repeatable recovery and deployment of systems?

      I’ve looked at ArgoCD and FluxCD. ArgoCD was too flaky: when I made changes to Helm files it would often fail to deploy them, and the UI often wouldn’t show the detailed errors from things like Helm syntax errors, so it was a pain to troubleshoot.

      FluxCD was just a real pain to configure in the first place, and I didn’t want to learn Kustomize when I already have Helm charts.

      And neither really supported staged deployments or dealt with dependent services well, so I couldn’t get either to deploy infrastructure-level Helm charts like PostgreSQL before the services that depend on them. In theory, deployment order shouldn’t matter with Kubernetes, but in practice, when ArgoCD deployed everything else first and waited for it to come up (which it never did, because its dependencies weren’t there), it choked a lot.

      That’s just one example of the issues I’ve had. But I really want an easy way to make lots of small changes to charts and deploy them quickly, as well as a way to quickly recover the cluster from backups if something catastrophic happens, like a fire, without having to manually deploy each chart. Just curious how others handle it, or whether it’s always manual deployment of charts via the CLI.

      • Daniel Quinn@lemmy.ca · edited · 6 hours ago

        I’ve used FluxCD in the past and have looked into ArgoCD, but honestly, I haven’t seen a big benefit from either. I use k8s both at home and at work, and in both cases we do “imperative” deploys: you run helm install ... either directly or via CI, and stuff is deployed.

        So, for example, at my last job our GitLab CI had a section, triggered exclusively on merges into master, that ran helm install ... for all three environments. We had three values.yaml files, one per environment, and when we wanted to deploy a new version, the process was:

        1. Create a tag for our release version (e.g. 1.2.3) and push it to the repo. This triggered a build and pushed the resulting image to the container registry.
        2. Push an update to the repo with the new tag set in the appropriate Helm values file. If we wanted to deploy 1.2.3 to development but not yet to staging or production, then the tag: value in each of the environment files would look like this:
        • k8s/chart/environments/development.yaml: tag: 1.2.3
        • k8s/chart/environments/staging.yaml: tag: 1.2.2
        • k8s/chart/environments/production.yaml: tag: 1.2.2

        Once that change was pushed, the CI would automatically apply it with helm install ... and make sure that all three environments were what they were supposed to be.
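        The per-environment deploy step above can be sketched roughly like this. The release name ("myapp") and chart path are assumptions for illustration, and the loop only assembles and prints each command so the sketch runs anywhere; a real CI job would execute it. (helm upgrade --install is the idempotent variant of helm install commonly used in CI, since it also works when the release already exists.)

```shell
#!/bin/sh
# Hypothetical CI deploy step: apply the chart once per environment,
# pointing at that environment's values file. Release name and chart
# path are illustrative, not from the thread; the command is echoed
# rather than run so this sketch has no dependency on helm itself.
for env in development staging production; do
  cmd="helm upgrade --install myapp-$env ./k8s/chart -f k8s/chart/environments/$env.yaml"
  echo "$cmd"
done
```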

        As for dependent services, those should all be in your Helm chart so they’re stood up and torn down together. The specific case you mention, “Service A” depending on “Service B” but being stood up before “Service B” is ready, is a classic problem, but easily solved:

        The dependent service (“A” in this case) should have an entrypoint that checks for everything else before starting. Here’s what I’m using right now in a project:

        #!/bin/sh
        
        # Block until PostgreSQL is accepting TCP connections on its service port.
        while ! nc -z postgres 5432; do
          echo "Waiting for postgres..."
          sleep 0.1
        done
        echo "PostgreSQL started"
        
        # Mark the container as ready (e.g. for a readiness probe), then hand
        # off to the container's real command.
        touch /tmp/ready
        
        exec "$@"
        

        I’ve even got some code that checks that all the Django migrations have run, for the same situation. The Kubernetes philosophy is that any container can die at any time and will eventually be brought back up, and every container needs to be prepared for this. Typically this means your containers should operate on the basis of “if I can’t work, die, and hope the problem is solved by the time Kubernetes redeploys me”.
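        A minimal sketch of that crash-and-retry idea, with a generic probe command standing in for a real check like nc -z postgres 5432 (the function name and retry counts here are illustrative assumptions): give the dependency a bounded number of chances, then fail hard and let Kubernetes restart the container.

```shell
#!/bin/sh
# wait_or_die: run a probe command up to $1 times; if it never succeeds,
# return non-zero so the entrypoint can exit and Kubernetes can reschedule
# the pod once the dependency is (hopefully) available.
wait_or_die() {
  max=$1; shift
  tries=0
  until "$@"; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max" ]; then
      echo "dependency never came up after $max tries" >&2
      return 1
    fi
    sleep 0.1
  done
}

# A probe that always fails exhausts its retries and reports failure...
wait_or_die 3 false 2>/dev/null || echo "giving up; Kubernetes will restart us"
# ...while one that succeeds passes through immediately.
wait_or_die 3 true && echo "dependency ready"
```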

    • thejml@sh.itjust.works · 19 hours ago

      This is how I went, and it’s what I’d recommend. That said, it’s a bit of a steep learning curve, as not everything in the selfhosted/homelab community comes with Helm charts.

      • motruck@lemmy.zip · 37 minutes ago

        Generally my suggestion is: if you use Kubernetes at work or want to learn it, self-host that way to learn. If not, don’t add all the complexity; use Docker, Docker Compose, Podman, etc. Kubernetes is overkill for 90% of the use cases most companies have, but here we are.