I’ve not tried GPT4All, but Ollama combined with Open WebUI is really great for self-hosted LLMs, and it can run with Podman. I’m running Bazzite too, and this is what I do.
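As a rough sanity check (a minimal sketch, assuming the container publishes Ollama’s default port 11434 on localhost), you can hit the Ollama API and list the pulled models before pointing Open WebUI at it:

```python
import requests

# Default Ollama API endpoint; adjust if your podman port mapping differs.
OLLAMA_URL = "http://localhost:11434"

# GET /api/tags returns the models that have been pulled locally.
resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
resp.raise_for_status()

for model in resp.json().get("models", []):
    print(model["name"], model.get("size"))
```

If that lists your models, Open WebUI should be able to reach the same endpoint from its own container.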
I see there’s an M.2 slot too, with what looks to be a Kingston SSD.
I’m still not sure what era this laptop is from; it might be a SATA M.2 rather than NVMe.
Wayland was subject to “first mover disadvantage” for a long time. Why be the first to switch and have to solve all the problems? Instead, be last and everyone else will do the hard work for you.
But without big players moving to it, those issues never get fixed. And users rightly should not be forced to migrate to a broken system that isn’t ready. People just want a system that works, right?
Eventually someone had to decide it was ‘good enough’ and try an industry-wide push to move away from a hybrid approach that wastes developer time and confuses users.
Right in the middle of Berlin.
(This information may be 30 years out of date)
Nihilism is for suckers. I’m going to make the world a better place and have some fun along the way too.
I’ve had exactly this happen to me. It was my own fault, but it took a bit of work to figure out.
I don’t really engage with the online mechanics in Elden Ring… Maybe I should? I’ve put hundreds of hours into the game otherwise. I rate and leave messages, but I’ve never summoned help for co-op or invaded people, except for Varre’s quest, where I always just get obliterated by people who are way better prepared than me.
Backups need to be reliable and I just can’t rely on a community of volunteers or the availability of family to help.
So yeah, I pay for S3 and/or a VPS. I consider it one of the few things worth paying a larger hosting company for.
I’m from the Midwest US and I know there are words and sounds I pronounce with a Midwestern accent but I can still type and spell them correctly.
If’n I typ lik dis den o’course people gonna think I hev the big dumb or that I’m a mole from a Redwall book.
Unironically, PowerShell is great, and learning it has propelled me through the last 12 years of my career as a sysadmin. My biggest complaints with it are generally Windows complaints or issues with legacy PowerShell modules.
If magic was real, expert magic users would not trust it at all.
“Haha yeah I mostly do transfiguration magic but I do some evocation too occasionally. No I don’t eat any transfigured food or do any of that at home or anything honestly I’m surprised it works at all.”
I intentionally do not host my own git repos mostly because I need them to be available when my environment is having problems.
I make use of local runners for CI/CD though, which is nice, but git is one of the few things I need to not have to worry about.
I can tell the time perfectly well unless someone asks me what time it is. Then my brain is completely useless and I just have to twist my wrist around awkwardly to show them.
No need to optimize when you can just push people to upgrade their hardware more frequently so you make fat stacks of cash from OEMs.
Am I crazy? I’m not seeing “Organic Maps” on F-Droid at all. Do I need to add a specific repo?
Edit: Okay, yes. I had to include ‘other anti-features’, which it did warn me about. I hadn’t adjusted that setting beyond the default and assumed I didn’t need to go enable those things.
Do you have any links or guides that you found helpful? A friend wanted to try this out but basically gave up when he realized he’d need an Nvidia GPU.
I’ve been testing Ollama in Docker/WSL with the idea that if I like it, I’ll eventually move my GPU into my home server and get an upgrade for my gaming PC. When you run a model, it has to load the whole thing into VRAM. I use the 8 GB models, so it takes 20-40 seconds to load the model; each response is really fast after that, and the GPU hit is pretty small. After (I think) five minutes by default, it will unload the model to free up VRAM.
Basically this means you either need to wait a bit for the model to warm up, or you need to extend that timeout so it stays warm longer. It also means I can’t really use my GPU for anything else while the LLM is loaded.
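That timeout is adjustable: the API accepts a keep_alive value per request (and there’s also an OLLAMA_KEEP_ALIVE environment variable). A minimal sketch against the default local endpoint, with a hypothetical model tag:

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # default Ollama endpoint

# keep_alive controls how long the model stays loaded in VRAM after this request:
# the default is about 5 minutes; "30m" keeps it warm longer, "0" unloads right away.
resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={
        "model": "llama3:8b",        # hypothetical model tag; use whatever you've pulled
        "prompt": "Say hi in one sentence.",
        "stream": False,
        "keep_alive": "30m",
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```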
I haven’t tracked power usage, but besides the VRAM requirements it doesn’t seem too intensive on resources, though maybe I just haven’t done anything complex enough yet.
I’ve been using ZFS for a few years now for all my data drives/pools, but I haven’t gotten brave enough to boot from it yet. Snapshotting a system drive would be really handy.
Burger place with yellowed ceiling tiles and a laminated menu? That shit is gonna be good.
This message brought to you by Jill Stein, who pops up every four years to grift money and accomplish nothing except cozying up to Putin. I guess there is still the brain worm dead animal guy?
Third parties in the US are unserious. I wish that weren’t the case but that’s the reality.