  • Cool. So what happens if I run a version of Android that doesn’t inherit Google’s security-theater cruft? That is to say… what if the user simply… does not… upgrade to an affected Android version (e.g. uses an old phone, or blocks OS version updates)?

    My phone is going on 7yrs old. Perfectly happy with it. When it breaks, I will get a phone of the same era (2nd hand or new-old stock) or investigate other options.

    So, it seems to me, the winning move is not to play the game (in any one of 100 diff ways).

    Or am I missing something here? Is there something that will prevent older tech from working? Because if so, I am happy to YOLO my phone and switch to a dumbphone if I have to.



  • Hey, me too :) As my school teachers used to tell me, “Great minds think alike (but fools seldom differ :)”

    For me, I’m thinking of having an LLM as one layer / one container in a homelab that does some specific stuff (rough sketch after this list):

    • queries against local docs / notes / manuals / PDFs / wiki material as the trusted knowledge layer
    • uses tools for search, file lookup, shell, git, Docker, Home Assistant, calendar, etc.
    • a local “Codex” / wiki layer that turns my own source material into an inspectable knowledge base
    • provenance and audit trails
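
    A minimal sketch of what I mean by the trusted knowledge layer. Everything here is an assumption on my part: the docs path, the model tag, and the deliberately dumb keyword retrieval (no embeddings yet), with the model told to answer only from local sources. It talks to a local Ollama instance on its default port.

    ```python
    # Sketch of the "trusted knowledge layer": answer only from local docs.
    # DOCS_DIR, the model tag, and the scoring are placeholders, not a spec.
    from pathlib import Path
    import requests

    DOCS_DIR = Path.home() / "homelab-docs"          # hypothetical notes folder
    OLLAMA_URL = "http://localhost:11434/api/generate"

    def find_relevant(query: str, limit: int = 3) -> list[str]:
        """Dumb keyword retrieval over plain-text notes; swap in embeddings later."""
        terms = query.lower().split()
        scored = []
        for f in DOCS_DIR.glob("**/*.md"):
            text = f.read_text(errors="ignore")
            score = sum(text.lower().count(t) for t in terms)
            if score:
                scored.append((score, f.name, text[:2000]))
        scored.sort(reverse=True)
        return [f"[{name}]\n{snippet}" for _, name, snippet in scored[:limit]]

    def ask(query: str) -> str:
        context = "\n\n".join(find_relevant(query)) or "NO LOCAL SOURCES FOUND"
        prompt = (f"Answer ONLY from these local sources and cite file names.\n"
                  f"{context}\n\nQuestion: {query}")
        r = requests.post(OLLAMA_URL, json={"model": "qwen2.5:3b",
                                            "prompt": prompt, "stream": False})
        return r.json()["response"]

    print(ask("which pins on that header are ground"))
    ```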

    I want to take a screenshot of something, drop it into Syncthing from my phone, then later ask “did I fuck up the pins on this?” … and have it look up the schematics, eyeball the pins, and tell me. Or I say “hey, can you grab a copy of X for me, usual params” and have the LLM instruct Sonarr/Radarr/SABnzbd to do that. (That is, make your OWN “Alexa” with an Arduino ESP32, stick it in a room, and call it when you need it.)

    So instead of asking a 70B model to “know” why your media server is down, the system checks service status, logs, last config changes, prior notes, Docker state, network state, etc., then the LLM explains the result in human language. You can probably do that with a 4B (I’m testing that assumption now).
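
    Something like this is what I mean by “the system checks, the LLM explains.” The evidence gathering is plain subprocess calls against standard docker/systemd CLIs; the container name, mount point, and model tag are placeholders from my own setup, and the Ollama endpoint is assumed to be on its default port.

    ```python
    # Sketch: gather real evidence first, then let a small model narrate it.
    # "jellyfin", /srv/media, and the model tag are placeholders.
    import subprocess
    import requests

    def run(cmd: list[str]) -> str:
        out = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
        return f"$ {' '.join(cmd)}\n{out.stdout}{out.stderr}"

    def media_server_evidence() -> str:
        checks = [
            ["docker", "ps", "--all", "--filter", "name=jellyfin",
             "--format", "{{.Names}} {{.Status}}"],
            ["docker", "logs", "--tail", "40", "jellyfin"],
            ["systemctl", "is-active", "docker"],
            ["df", "-h", "/srv/media"],
        ]
        return "\n\n".join(run(c) for c in checks)

    prompt = ("Below is raw diagnostic output from my homelab. Explain in plain "
              "English the most likely reason the media server is down. Do not "
              "guess beyond the evidence.\n\n" + media_server_evidence())
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "qwen2.5:3b", "prompt": prompt,
                            "stream": False})
    print(r.json()["response"])
    ```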

    Same for “find that motherboard note,” “summarize this email thread,” “turn this into a task,” “compare this eBay listing to my saved hardware notes,” “what did I do last time this broke,” or “run the smoke test and tell me the first real failure.”

    I think small models are the shit for this because if the model only has to classify intent, route the request, render structured evidence, and talk like a normal human…then it doesn’t need to be a giant oracle. The expensive (time wise) part becomes less “make the model smarter” and more “build a better control plane around it.”

    Basically: local LLM as semantic HID; expert system/tool router underneath; user owns the data and the machine.
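
    Concretely, the “semantic HID” part can be as thin as this: the model is only allowed to emit a structured intent, and deterministic code dispatches it. The tool names and the JSON schema are mine, not anything standard; it leans on Ollama’s JSON output mode to keep the model honest.

    ```python
    # Sketch of "LLM as semantic HID": the model emits a structured intent,
    # deterministic code does the work. Tool names are invented placeholders.
    import json
    import requests

    TOOLS = {
        "find_note":     lambda arg: f"(grep local notes for {arg!r})",
        "check_service": lambda arg: f"(query status of {arg!r})",
        "grab_media":    lambda arg: f"(ask Sonarr/Radarr to fetch {arg!r})",
    }

    def classify(utterance: str) -> dict:
        prompt = (f"Map this request to JSON with keys 'tool' (one of "
                  f"{sorted(TOOLS)}) and 'arg'. Reply with JSON only.\n"
                  f"Request: {utterance}")
        r = requests.post("http://localhost:11434/api/generate",
                          json={"model": "qwen2.5:3b", "prompt": prompt,
                                "stream": False, "format": "json"})
        return json.loads(r.json()["response"])

    def handle(utterance: str) -> str:
        intent = classify(utterance)
        tool = TOOLS.get(intent.get("tool"))
        return tool(intent.get("arg", "")) if tool else "no matching tool; ask the user"

    print(handle("find that motherboard note"))
    ```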

    As always, ICBW…but fuck it, I’m gonna try.

    PS: I have an idea of how to apply that to coding too… but that’s a project for much later. I’ve been cooking this shit for far too long. The next thing I wanna do is a fun project for myself (that is: ROM hack a parachute and a grappling gun into Super Mario Sunshine, so I can basically play “What if Super Mario Sunshine but actually Just Cause 2” on my Wii with the kids).


  • I’m actually thinking of pivoting my router/orchestrator entirely. I think the way forward is to look at expert systems (yes, those ancient things from the long, long ago of… 1980), but with modern, user-updatable tooling and a small LLM in the middle that the user can talk to. That is, de-emphasize the central role of the LLM entirely; make it the user-facing NLP input/output and let the real programs, running on real silicon, do the work. I might have a different use case than most, but I bet not that different (online LLM discussion seems to gravitate around users who use LLMs for coding; Anthropic and OAI internal reports say otherwise). Toy version of what I mean below.
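
    By “expert system” I literally mean rules-as-data that the user can edit, in the spirit of this toy forward-chainer. The facts and rules are invented homelab examples; the small LLM’s only job would be translating natural language into facts, and the conclusions back into sentences.

    ```python
    # Toy forward-chaining expert system: rules are plain, user-editable data.
    # Facts and rules below are invented examples, not a real ruleset.
    RULES = [
        # (facts required, fact concluded)
        ({"jellyfin_down", "docker_running"}, "inspect_jellyfin_container"),
        ({"jellyfin_down", "docker_stopped"}, "restart_docker_first"),
        ({"disk_full"}, "prune_old_media"),
    ]

    def infer(facts: set[str]) -> set[str]:
        """Fire rules until no new conclusions appear (forward chaining)."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for needed, conclusion in RULES:
                if needed <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived - facts

    # The LLM turns "media server is broken and docker won't start" into
    # {"jellyfin_down", "docker_stopped"}; the engine does the reasoning.
    print(infer({"jellyfin_down", "docker_stopped"}))  # {'restart_docker_first'}
    ```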

    Ironically, I’m writing the blurb now while waiting for smoke test #90238472398 to finish.




  • Yeah, transcoding entirely off - directly streaming stored 720p/1080p files (downloaded like that, although I did once use HandBrake on the Pi to transcode Space: 1999 season 1. Took about 2 days, I think).

    Someone else was just talking about Wyse thin clients. I’m fairly sure a $40 Wyse thin client outperforms even the best Pi 4 (and maybe sometimes a Pi 5). If I can’t find a way to fix mine, I may have to buy a few for, uh… science. IIRC, they idle at about the same power as the Pi.



  • It’s very OK, as long as you don’t expect multiple 4K streams from it.

    I ran Jellyfin on a Pi 4 for about 3 or 4 years before it started acting up. So long as you don’t transcode, it works wonderfully well. I had it serving up to 4-5 simultaneous 720p streams. IIRC, it can just about do a single 4K @ 60? Never tried - all my media is 1080p or less.
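
    If you want to make sure the Pi never has to transcode, a quick check like this over your library helps. The “safe” codec lists are my assumption about what typical clients direct-play; ffprobe (ships with ffmpeg) does the actual inspection.

    ```python
    # Sketch: flag files that would likely force Jellyfin to transcode.
    # The codec whitelists are assumptions; adjust for your own clients.
    import subprocess
    import sys

    DIRECT_PLAY_VIDEO = {"h264", "hevc"}
    DIRECT_PLAY_AUDIO = {"aac", "ac3", "mp3"}

    def codec(path: str, stream: str) -> str:
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", stream,
             "-show_entries", "stream=codec_name",
             "-of", "default=noprint_wrappers=1:nokey=1", path],
            capture_output=True, text=True)
        lines = out.stdout.strip().splitlines()
        return lines[0] if lines else "none"

    path = sys.argv[1]
    v, a = codec(path, "v:0"), codec(path, "a:0")
    ok = v in DIRECT_PLAY_VIDEO and a in DIRECT_PLAY_AUDIO
    print(f"{path}: video={v} audio={a} -> "
          f"{'direct play' if ok else 'will transcode'}")
    ```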

    IIRC, mine is overclocked and undervolted using PiTools (and is in an Argon 40 case with an M.2). The Argon 40 case (I think) is causing it to short (something with the daughter board? Dunno). There are better options these days.

    Paperless I don’t use, but I don’t see why it shouldn’t be possible.

    Don’t try Immich unless you like pain (or turn off the AI stuff).