Hi,
I’m using docker-compose to host all my server services (jellyfin, qbittorrent, sonarr, etc.). I’ve recently grouped some of them into categories and merged the individual docker-compose.yml files I had for each service into one per category. But is there actually any reason not to keep them together?
The reason I ask: I’ve started configuring homepage and thought to myself “wouldn’t it be cool if, instead of giving the server IP each time (per configured service in homepage), I just used the service name?” (AFAIK this only works if the containers are all in the same file.)
For simplicity’s sake alone I would say no. As long as services don’t share infrastructure (e.g. a database) you shouldn’t mix them, so you have an easier time updating your scripts.
Another point is handling stacks. When you create containers via compose you are not supposed to touch them individually. Collecting them all, or even just grouping by category, muddies that concept, since you have unrelated services grouped in a single stack and would need to update/up/down/… them all even if you only needed that for a single one.
Lastly, networks. Usually you’d add networks to your stacks to isolate their respective backends into closed networks, with only the exposing container (e.g. a web frontend) being in the publicly available network, to increase security and avoid side effects.
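As a sketch of that isolation pattern (service, image, and network names here are made up), a compose file might look like:

```yaml
# Hypothetical example: only the web frontend joins the public network;
# the database stays on an isolated backend network.
services:
  web:
    image: nginx
    networks: [public, backend]
    ports:
      - "8080:80"
  db:
    image: postgres
    networks: [backend]

networks:
  public:
  backend:
    internal: true   # containers here have no route outside this network
```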
So right now I have a single compose file with a file structure like this:
docker/
├─ compose/
│  ├─ docker-compose.yml
├─ config/
│  ├─ service1/
│  ├─ service2/
Would you in that case use a structure like the following?
docker/
├─ service1/
│  ├─ config/
│  ├─ docker-compose.yml
├─ service2/
│  ├─ config/
│  ├─ docker-compose.yml
Or a different folder structure?
The second one is exactly what I have. One folder for each service containing its compose file and all persistent data belonging to that stack (unless it’s something like your media files).
The second is exactly how I do it. Keeps everything separate, so it’s easy to move individual services to another host if needed, and easy to restart a single service without taking them all down. Keeps everything neat and organized (IMO).
I have a folder that all my docker services are in. Inside the folder is a folder for each discrete service and within that folder is a unique compose file necessary to run the service. Also in the folder is all the storage folders for that service so it’s completely portable, move the folder to any server and run it and you’re golden. I shut down all the services with a script then I can just tar the whole docker folder and every service and its data is backed up and portable.
In case anyone cares here is my script, I use this for backups or shutting down the server.
#!/bin/bash
logger "Stopping Docker compose services"
# Array of the full paths to all service subdirectories
services=(/home/user/docker/*)
# Bring each stack down in parallel
for dir in "${services[@]}"; do
    docker compose -f "$dir/docker-compose.yml" down &
done
# Wait for all the background commands to finish
wait
This is exactly what I do and could not be happier!
Exactly my setup and for exactly the reasons you mentioned
Exactly what I do except my master folder is ~
I do ~/docker so I also have a docker-prototype folder for my sandbox/messing around with non-production stuff and I have a third folder for retired docker services so I keep the recipe and data in case I go back.
Does portainer just work?
To answer my own question, yes, yes it does. Should’ve done this ages ago…
Could you share your script?
Thanks!
@czardestructo I like the tidiness of this.
No, keep them ungrouped, migration to a new server is much easier, otherwise you need to migrate everything everywhere all at once
You can have the same effect (connect to the named container) if you create a docker network and place everything on the same network
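A sketch of that setup (the network name `shared` is an assumption): create it once with `docker network create shared`, then mark it as external in every compose file whose containers should resolve each other by name:

```yaml
# In each docker-compose.yml that should join the shared network
services:
  sonarr:
    image: linuxserver/sonarr
    networks: [shared]

networks:
  shared:
    external: true   # created beforehand with: docker network create shared
```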
No, no you should not. I haven’t used homepage, but you probably just need to attach the services to the same network, or map the ports on the host and use the host IP.
Probably want to keep services with different life cycles in separate docker compose files, to allow you to shutdown/restart/reconfigure them separately. If containers depend on each other, then combining them into one compose file makes sense.
That said, experimenting is part of the fun, nothing wrong with testing it out and seeing if you like it.
I would not. Create an external network and just add those to the compose files.
Bingo. Or just bite the bullet and dive into Kubernetes
Overkill for home use
Back when I used to use Docker this is what I was doing. If you use a reverse proxy that is Docker-aware (eg Traefik), it can still connect to the services by name and expose them out as subdomains or subpaths based on the names.
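If you go that route, a rough sketch of what that looks like (the service name, domain, and label values here are illustrative — Traefik’s docker provider discovers containers via labels like these):

```yaml
# Hypothetical Traefik-aware service: the proxy routes a subdomain to it
# by container name, so no hardcoded IPs are needed.
services:
  jellyfin:
    image: jellyfin/jellyfin
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.lan`)"
      - "traefik.http.services.jellyfin.loadbalancer.server.port=8096"
```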
But I graduated to Kubernetes a long time ago.
So there’s a million ways to do things and what works for you works for you. For me, putting all services in a single compose file only has downsides.
- Difficult to search, I guess searching for a name and then just editing the file works - but doesn’t it become a mess fairly quickly? I sometimes struggle with just a regular yaml file lmao
- ^also missing an overview of what exactly you’re running - docker ps -a or ctop or whatever works - but with an ls -la I clearly see what’s on my system
- How do you update containers? I use a ‘docker compose pull’ to update the images which are tagged with latest.
- I use volume mounts for config and data. A config dir inside the container is mounted like ‘./config:/app/data/config’ - works pretty neatly if each compose file is in its own named directory
- Turning services on/off is just going into the directory and saying docker compose up -d / or down - in a huge compose file don’t you have to comment the service out or something?
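A minimal sketch of that per-directory workflow (the paths and service name are assumptions; the docker commands are commented out so the snippet is safe to run as-is):

```shell
#!/bin/sh
# One directory per stack under $DOCKER_ROOT (default: ~/docker)
ROOT="${DOCKER_ROOT:-$HOME/docker}"

# "What am I running?" is just a directory listing:
ls -1 "$ROOT" 2>/dev/null || echo "no stacks found in $ROOT"

# Update a single service in place (uncomment to use):
# cd "$ROOT/jellyfin" && docker compose pull && docker compose up -d

# Stop one service without touching the others:
# cd "$ROOT/jellyfin" && docker compose down
```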
You can use an external network if you wish to refer to them all by a name. Just make sure all the containers you wish to refer to are in it.
A compose file is meant for different components of a single service but you’re allowed to experiment with whatever you want
I personally don’t. It is just messier. I only group things that belong together, like a webserver+database, torrentclient+vpn and so on.
I’ll be the opposite of everyone I guess - I have all my services in one compose file. Never had an issue with it. Why? I have no exposed ports and everything is accessed through a reverse proxy, and the big one: it’s easy to just run docker compose and have them all come up or down.
Same for me, it all mostly started from the desire to have a single MariaDB and PostgreSQL container holding all the databases. Not sure if I could achieve the same result with different compose files, perhaps I can, but never had the need.
I actually find my setup super comfortable to use
I have multiple files but a single stack. I use an alias to call compose like this:
docker compose -f file1.yaml -f file2.yaml
Etc.
I was thinking about that just today. I have something like 30+ services running in a single compose file and maintenance is slowly becoming hard. Probably moving to multiple compose files.
I’ve thought about going that route, but ultimately decided to adopt something like portainer.io. My thought process behind it was that some projects within each category may have overlapping dependencies and so I’d end up with multiple entries for a particular dependency in the same file which I didn’t like.
I don’t expose services to the internet from my home lab, so I generally just add host entries manually to each of my computers so that I don’t have to type in ip and port.
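For example, a hypothetical entry in each machine’s /etc/hosts (the IP and hostnames are made up):

```
# /etc/hosts on each client machine
192.168.1.50   jellyfin.lan sonarr.lan qbittorrent.lan
```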
I go so far the other way with this personally… I actually have a separate LXC for each docker container, and a lot of the time I use docker run instead of docker-compose…
I’ve still not had anyone explain to me why compose is better than a single command line…
I always thought the compose file is great for maintenance. You can always save the docker run commands elsewhere so at the end of the day it’s more of an orchestration choice.