Can confirm that there are no ingress or egress fees, since this is not an S3-style object storage server, but a simple FTP server that also has a Borg/Restic module. So it simply doesn't fall into the ingress/egress cost model.
Because using a containerization system to run multiple services on the same machine is vastly superior to running everything on bare metal, both from a security and an ease-of-use standpoint. Why wouldn't you use Docker?
Caddy and Authentik play very nicely together thanks to Caddy's forward_auth directive. Regarding ACLs, you'll have to read some documentation, but it shouldn't be difficult to figure out. The documentation and forum are great sources of info.
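A minimal sketch of the pattern (hostnames, ports and the outpost path here are assumptions based on Authentik's documented Caddy integration, so adjust them to your setup):

app.yourdomain.com {
    # ask the Authentik outpost whether this request is allowed
    forward_auth authentik:9000 {
        uri /outpost.goauthentik.io/auth/caddy
        copy_headers X-Authentik-Username X-Authentik-Email X-Authentik-Groups
    }
    # only reached once Authentik approves the request
    reverse_proxy app:8080
}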
AdGuard Home supports static clients. Unless the instance is only used over plain DNS (port 53, unencrypted), the far better way is to put client names into the DNS server addresses and unblock clients based on those names.
For DoT: clientname.dns.yourdomain.com
For DoH: https://dns.yourdomain.com/dns-query/clientname
A client, especially a mobile one, simply cannot guarantee that it will always have the same IP address.
If you don't fear using a little bit of terminal, Caddy imo is the better choice. It makes SSL even more brainless (since it's 100% automatic), is very easy to configure (especially for reverse proxying) yet very powerful if you need it, has wonderful documentation and an extensive extension library, doesn't require a MySQL database that eats 200 MB of RAM, and doesn't have unnecessary limitations due to UI abstractions. There are many more advantages to Caddy over NPM. I have not looked back since I switched.
An example Caddyfile for reverse proxying to a Docker container from a hostname, with automatic SSL certificates, automatic WebSockets and all the other typical bells and whistles:
https://yourdomain.com {
    reverse_proxy radarr:7878
}
The demo instance would be their commercial service, I suppose: https://ente.io/. Since, in their own words, the GitHub code 1:1 represents the code running on their own servers, the result when self-hosting should be identical.
There's a Dockerfile that you can use for building. It barely changes the flow of how you set up the container. The bigger issue imo is that it literally is the code they use for their premium service, meaning that all the payment stuff is in there. And I don't know if the apps even have support for connecting to a custom instance.
Edit: their docs state that the apps all support custom instances, making this more intriguing.
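Building from their Dockerfile would look roughly like this (the image tag is made up and the exact run flags depend on their setup, so treat this as a sketch):

# from a checkout of their repo, next to the Dockerfile
docker build -t ente-server .
# then reference ente-server in your compose file instead of a registry image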
Is location the only reason not to use it as the AP? If I had a larger house I'd agree, but as I live in a small apartment, the current router location can easily serve the entire flat, so that is no concern right now.
I've wanted one of these for a while to replace my ISP's modem + router + switch + wifi AP. But apparently these devices can be finicky to get good wifi going, and I don't feel like adding three new devices (mini PC, switch, AP) to my "we don't talk about it" corner where all the IT is stored. Do you know anything about wifi on these?
You can docker compose up -d <service> to (re)create only one service from your compose file.
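For example, with a compose file like this (service names and images are just examples):

services:
  radarr:
    image: lscr.io/linuxserver/radarr:latest
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest

After editing only the radarr section, docker compose up -d radarr recreates just that container while sonarr keeps running untouched.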
I'll plug another Subsonic-compatible server here: gonic. It does not have a web player UI, which saves on RAM. And it is really fast, too.
It supports sharing via public link. But I don’t think it has sharing with registered users via username.
You make a good point. But I still find that directly exposing a port on my home network feels more dangerous than doing so on a remote server. I want to prevent attackers from sidestepping the proxy and directly accessing the server itself, which feels more likely to allow circumventing the isolation provided by Docker in case of a breach.
Judging from a couple of articles I read online, if I wanted to publicly expose a port on my home network, I should also isolate the public-facing server from the rest of the local LAN with a VLAN, for which I'd need to first replace my router and learn a whole lot more about networking. Doing it this way, which is basically a homemade Cloudflare Tunnel, lets me rest easier at night.
Very true! For me, that specific server was a chance to try out ARM-based servers. Also, I initially wanted to spin up something billed by the hour for testing, and then it worked so quickly that I just left it running.
But I'll keep my eye out for some low-spec yearly-billed servers, and move sooner or later.
Your first paragraph hits the nail on the head. From what I've read, bots all over the net will find any openly exposed port in no time and start attacking it blindly, putting strain on your router and posing a general risk to your home network.
Regarding bandwidth: 100% of the traffic via the domain name (not the local network) runs through the proxy server. But these datacenters have 1 to 10 gigabit uplinks, so the slowest link in the chain is usually your home internet connection. Which, in my case, is 500 Mbit down and 50 Mbit up, and that's easily saturated in both directions by the tunnel and VPS. Plus, streaming a 4K Blu-ray remux usually only requires between 35 and 40 Mbit of upload speed, so speed is rarely a worry.
Hey! I’m also running my homelab on unraid! :D
The reverse proxy basically allows you to open only one port on your machine for generic web traffic, instead of opening (and exposing) a port for each app individually. You then address each app by a certain hostname or domain path, so either something like movies.myhomelab.com or myhomelab.com/movies.
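In Caddy, the two styles look roughly like this (hostnames and the upstream port are placeholders):

movies.myhomelab.com {
    reverse_proxy radarr:7878
}

myhomelab.com {
    # handle_path strips the /movies prefix before proxying
    handle_path /movies/* {
        reverse_proxy radarr:7878
    }
}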
The issue is that you'll have to point your domain directly at your home IP, which then means that whenever you share a link to an app on your homelab, you also indirectly leak your home location (to the degree that IP geolocation allows). Which I simply do not feel comfortable with. The easy solution is running the traffic through Cloudflare (this can be set up in 15 minutes), but they impose traffic restrictions on free plans, so it's out of the question for media or cloud apps.
That's what my proxy VPS is for. Basically Cloudflare Tunnel rebuilt: an encrypted, direct tunnel between my homelab and a remote server in a datacenter, meaning I expose no port at home, and visitors connect to that datacenter IP instead of my home one. There is also no one in between my two servers, so I don't give up any privacy. Comes with near-zero bandwidth loss in both directions too! And it requires near-zero computational power, so it's all running on a machine costing me 3.50 a month.
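A rough sketch of the VPS side of such a tunnel with WireGuard (keys, addresses and ports are placeholders):

# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address = 10.0.0.1/24
PrivateKey = <vps-private-key>
ListenPort = 51820

[Peer]
# the homelab box dials out to the VPS and keeps the tunnel alive,
# so no inbound port ever needs to be opened on the home router
PublicKey = <homelab-public-key>
AllowedIPs = 10.0.0.2/32

The reverse proxy on the VPS then simply forwards requests to 10.0.0.2 over the tunnel.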
I haven't researched this, but my gut tells me one should be able to connect the two servers via WireGuard (direct tunnel, Tailscale, ZeroTier, what have you) and access the Docker API over that link without making it publicly available.
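One way that should work (assuming the homelab box is reachable at a tunnel address like 10.0.0.2 and has SSH enabled): leave the Docker socket local and let the Docker CLI reach it over SSH.

# run on the remote machine; talks to dockerd on the homelab box via SSH
DOCKER_HOST=ssh://user@10.0.0.2 docker ps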
Email encryption, as far as I know, is rarely implemented to this day. So your host, as well as any entity sitting between the participants, will be able to read your messages. SimpleLogin is also provided by Proton, if that means anything to you.
I find there is less management overhead regarding inboxes with SL, compared to creating, managing and logging into multiple receiving addresses under a real mail server.
Sure, you can set up one mail account on your domain and define it as a catch-all, but then you won't be able to send from those addresses.
Or you can create the accounts you want, but then you cannot quickly create new inboxes without opening your control dashboard.
Obviously, if you want to register with a service anonymously, you’d use one of the SL domains, which I do plenty too!
And at the end of the chain, all messages run into the same singular Google inbox, making it easier for me to manage all messages from all domains.
I’m sure paid email hosters will have their own advantages, but as I said at the beginning of my original comment, I want to show an alternative solution, not a better solution.
Both UnraidFS and mergerFS can merge drives of different types and sizes into one array. They also allow adding or removing drives without disturbing the array. None of this is possible with traditional RAID (or at least not without a significant time sink of rebuilding the array), no matter the type of RAID you use.
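With mergerFS, pooling mismatched drives is essentially one fstab line (paths and options here are just a plausible example; check the mergerfs README for tuning):

# /etc/fstab: pool three drives of any size into /mnt/pool
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  allow_other,category.create=mfs,minfreespace=10G  0 0

Adding a drive later is just mounting it and appending it to the colon-separated list.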