I’ve been using glauth + Authelia for a couple years with no issues and almost zero maintenance.
Yes, absolutely. Ideally there would be an automated check that runs periodically and alerts if things don’t work as expected.
Monitoring if the backup task succeeded is important, but that’s the easy part of ensuring it works.
A backup is only working if it can be restored. If you don’t test that you can restore it in case of disaster, you don’t really know if it’s working.
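For illustration, a minimal sketch of what such a periodic restore test could look like, assuming restic run from cron or a systemd timer (the repo path, canary file and email address are all made up):

```bash
#!/usr/bin/env bash
# Sketch of a periodic restore test. Assumes restic and that
# RESTIC_PASSWORD (or RESTIC_PASSWORD_FILE) is exported.
set -euo pipefail

REPO="/mnt/backups/restic-repo"
CANARY="home/user/.backup-canary"   # a file we know is in every snapshot

SCRATCH="$(mktemp -d)"
trap 'rm -rf "$SCRATCH"' EXIT

# Restore just the canary from the latest snapshot...
restic -r "$REPO" restore latest --target "$SCRATCH" --include "/$CANARY"

# ...and compare it to the live copy (the canary never changes).
if ! cmp -s "/$CANARY" "$SCRATCH/$CANARY"; then
  # assumes a configured MTA; swap for your alerting of choice
  echo "Restore test failed for $REPO" | mail -s "backup alert" you@example.com
  exit 1
fi
```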
Ah got it. I didn’t know there was a free tier!
How do you use ChatGPT anonymously? It requires a valid login linked to a payment method. It doesn’t get any less anonymous than that.
The main “instability” I’ve found with `testing` or `sid` is just that, because new packages are added quickly, sometimes you’ll have dependency clashes. Pretty much every time, the package manager will take care of keeping things sane and not upgrade a package that would cause an incompatibility. The main issue is if at some point you decide to install something that has conflicting dependencies with something you already have installed. Those are usually solvable with a little `aptitude`-fu, as long as there are versions available to sort things out neatly.
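For illustration, the kind of thing I mean (the package name is hypothetical):

```bash
# See which versions of a package exist across the releases you track
apt policy libfoo-dev

# aptitude proposes resolutions interactively; reject a proposal with "n"
# and it will offer the next candidate solution
sudo aptitude install libfoo-dev
```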
A better first step to newer packages is probably `stable` with `backports`, though.
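Something like this, assuming current stable is bookworm (the package is just an example):

```bash
# Add the backports repo and pull a single package from it;
# everything else stays on stable
echo 'deb http://deb.debian.org/debian bookworm-backports main' | \
  sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
sudo apt install -t bookworm-backports cockpit
```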
Not much use going to Ubuntu or Mint, unless you have specific issues with Debian that don’t happen with those. Even then, the fix may be one `apt install` away.
If you want to try out BSD, power to you. I wouldn’t experiment on a backup computer though, unless by backup you just mean you want to have the spare hardware and will format it with Debian if you ever need to make it your main computer anyway.
Otherwise, just run Debian!
> Stability is no longer an advantage when you are cherry picking from Sid lol.
This makes no sense. When 95% of the system is based on Debian `stable`, you get pretty much full stability of the base OS. All you need to pull in from the other releases is Mesa and related packages.
Perhaps the kernel as well, but I suspect they’re compiling their own with relevant parameters and features for the SD anyway, so not even that.
Why would they manually package them? Just grab the packages you need from `testing` or `sid`. This way you keep the solid Debian `stable` base OS and still bring in the latest and greatest of the things that matter for gaming.
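A sketch of the pinning that makes this work (assumes both stable and testing entries in your sources.list):

```
# /etc/apt/preferences.d/99-stable-first
Package: *
Pin: release a=stable
Pin-Priority: 900

Package: *
Pin: release a=testing
Pin-Priority: 100
```

Then something like `sudo apt install -t testing mesa-vulkan-drivers` pulls just Mesa and its direct dependencies from `testing` while everything else stays on `stable`.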
I don’t think I’ve ever come across a DNS provider that blocks wildcards.
I’ve been using wildcard DNS and certificates to accompany them, both at home and professionally in large-scale services (think hundreds to thousands of applications), for many years without an issue.
The problem described in that forum is real (and in fact is pretty much how the recent attack on Fritz!Box users works) but in practice I’ve never seen it being an issue in a service VM or container. A very easy way to avoid it completely is to just not declare your host domain the same as the one in DNS.
If they’re all resolving to the same IP and using a reverse proxy for name-based routing, there’s no need for multiple A records. A single wildcard should suffice.
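Roughly like this (all names and addresses hypothetical):

```
; one wildcard record instead of an A record per service
*.home.example.com.  300  IN  A  192.0.2.10
```

```nginx
# the reverse proxy routes on the Host header (nginx sketch, TLS omitted)
server {
    listen 80;
    server_name jellyfin.home.example.com;
    location / {
        proxy_pass http://127.0.0.1:8096;  # Jellyfin's default port
    }
}
```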
Not sure if this is helpful in any way, but it might give you a clue.
Addresses in 100.64.0.0/10 are reserved for CG-NAT (RFC 6598).
This is probably the IPv4 address your modem/router is receiving from the ISP.
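An easy way to check, if you can see the router’s WAN address (the IPs here are made up):

```bash
# CG-NAT uses 100.64.0.0/10 (100.64.0.0 - 100.127.255.255, RFC 6598).
WAN_IP="100.72.13.5"                 # example: from the router's status page
PUBLIC_IP="$(curl -s https://ifconfig.me)"

if [ "$WAN_IP" != "$PUBLIC_IP" ]; then
  echo "WAN ($WAN_IP) differs from public ($PUBLIC_IP): likely CG-NAT"
fi
```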
I might pick it back up some day, but at the moment I have other projects going on.
I’m still using Proxmox myself but unfortunately it’s all fairly manually configured.
I started writing a Terraform provider for Proxmox a while ago.
Unfortunately, the API is a massive mess and the documentation is not very helpful either. It was a nightmare and I eventually gave up.
On macOS I’ve been using Ollama. It’s very easy to set up, can run as a service and exposes an API. You can talk to it directly from the CLI (`ollama run`) or via applications and plugins (like https://continue.dev) that consume the API.
It can run on Linux but I haven’t personally tried it.
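If it helps, the API is plain HTTP on localhost. For example (assuming you’ve pulled the `mistral` model):

```bash
# Ollama listens on port 11434 by default
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```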
Circa 1993, at the age of 13. Took me weeks to download Slackware from BBSs and get it installed. Played around with Mandrake (got an installer CD at an event). Eventually settled on Debian (which took me another few weeks to download, then burn the CDs and install it).
Used Debian on all my computers for many many years. Eventually got a MacBook (around 2005 IIRC) and have been on Mac laptops since. My gaming desktop runs Debian (wrote a blog post about my setup recently: https://blog.c10l.cc/09122023-debian-gaming). My servers, VMs and containers are usually Debian or something directly based on it (Devuan on some containers, Proxmox on my homelab’s bare metal).
I’ve used many other distros along the way, either for work or to experiment. I have huge respect for Fedora on a technical level but still prefer Debian’s philosophy and the `apt` ecosystem.
It’s pretty easy with Ollama. Install it, then `ollama run mistral-7b` (or another model; there are a few available ootb). https://ollama.ai/
Another option is Llamafile. https://github.com/Mozilla-Ocho/llamafile
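Llamafile is even more self-contained; the flow is roughly this (the filename is hypothetical):

```bash
# a .llamafile bundles the weights and the runtime in one executable;
# running it serves a local chat UI in the browser
chmod +x mistral-7b-instruct.llamafile
./mistral-7b-instruct.llamafile
```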
Same. Git GUIs can be great for examining commit trees, visualising patches, etc. For any write operations (this includes things like `fetch` and `pull`, which write to `.git`), it’s all in the shell.
I second Debian. Stable is excellent.
Testing has newer packages and is generally almost as stable.
I published my Debian gaming setup a few days ago. Haven’t tried VR on it either as I don’t have a headset, but I assume it works.
Ah NFS… It’s so good when it works! When it doesn’t though, figuring out why is like trying to navigate someone else’s house in pitch dark.
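A few torches for the dark house, for what it’s worth (server name hypothetical):

```bash
# what is the server actually exporting?
showmount -e nfs-server.lan

# are the RPC services the client needs registered?
rpcinfo -p nfs-server.lan

# kernel messages on the client often name the real problem
dmesg | grep -i nfs

# mount verbosely with an explicit version to rule out negotiation issues
sudo mount -t nfs -o vers=4.2 -v nfs-server.lan:/export /mnt/test
```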