
  • I don’t disagree, but if it’s a case where the janky file problem ONLY appears in Jellyfin but not Plex, then, well, jank or not, that’s still Jellyfin doing something weird.

    No reason why Jellyfin would decide the French audio track should be played every 3rd episode, or that it should just pick a random subtitle track when Plex isn’t doing it on exactly the same files.


  • One thing I ran into, though it was a while ago, was that having disk caching enabled would trash write performance on removable media for me.

    The issue ended up being that the kernel would periodically flush the cache to disk, and while it was doing that, none of your transfers were happening. It would end up doubling the copy time or worse, because the write cache wasn’t actually helping removable drives.

    It might be worth remounting without any caching, if it’s on, and seeing if that fixes the mess (there’s a sketch of what I mean at the end of this comment).

    But, as I said, this was a few years ago, so it may no longer be the case.
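
    If it helps, here’s a minimal sketch of what I mean, assuming /media/usb is your mount point (needs root): the relevant bit is remounting with the sync option so writes go straight to the device.

    ```python
    # Remount a removable drive with synchronous writes, so the kernel
    # doesn't build up a big write cache that it then stalls on flushing.
    # Assumption: /media/usb is where the drive is mounted.
    import subprocess

    subprocess.run(["mount", "-o", "remount,sync", "/media/usb"], check=True)
    ```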


  • If you share access to your media with anyone you’d consider even remotely non-technical, do not drop Jellyfin in their laps.

    The clients aren’t nearly as good as Plex’s, they’re not as universally supported, and the whole thing just has needs-another-year-or-two-of-polish vibes.

    And before the pitchfork crowd shows up: I’m using Jellyfin exclusively, but I also don’t have anyone using it who can’t figure out why half the episodes in a TV season pick a different language, why the subtitles are sometimes English and sometimes German, or why some videos occasionally don’t have proper audio (L and R are swapped), and how to take care of all of those things.

    I’d also agree with your thought that Docker is the right approach: you don’t need Docker Swarm, or Kubernetes, or whatever other nonsense for a personal Plex install, unless you want to learn those technologies.

    Install a base Debian via netinstall, install Docker, run the Plex container, done.
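
    If you want the rough shape of that last step, something like this (using the official plexinc/pms-docker image; the paths and timezone are placeholders for your own):

    ```python
    # Start the official Plex container. Host networking keeps local
    # discovery/DLNA simple; the volume paths are just examples.
    import subprocess

    subprocess.run([
        "docker", "run", "-d",
        "--name", "plex",
        "--network", "host",
        "--restart", "unless-stopped",
        "-e", "TZ=Etc/UTC",                 # set your timezone
        "-v", "/srv/plex/config:/config",   # Plex database/metadata
        "-v", "/srv/media:/data",           # your media library
        "plexinc/pms-docker",
    ], check=True)
    ```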


  • Timely post.

    I was about to make one, because iDrive has decided to hike their prices, probably because they could.

    $30/TB/year to $50/TB/year is a pretty big jump, but they were also way under the market price, so capitalism gonna capital and they’re “optimizing” or some shit.

    I’d love to be able to push my stuff to some other provider for closer to that $30, but uh, yeah, no freaking clue who, since $60/TB/year seems to be the more average price.

    Alternatively, a storage option that’s not S3-based would also probably be acceptable. Backups are ~300GB, give or take, and the stuff that does need S3-style storage I can stuff in Cloudflare’s free tier.


  • “The chances of both failing are very low.”

    If they’re sequential off the manufacturing line and there’s a fault, they’re more likely to fail around the same time and in the same manner. And a rebuild is exactly when that bites: you put the surviving drive under a LOT of stress when you start rebuilding after replacing the dead drive.

    Like, that’s the most likely scenario to lose multiple drives and thus the whole array.

    I’ve seen far too many arrays built out of a single box of drives lose one or two, then lose another few during the rebuild and nuke the whole array. So uh, “they probably won’t both fail” is maybe true, but I wouldn’t wager my data on that assumption.

    (If you care about your data, backups, test the backups, and then even more backups.)


  • You can find reasonably stable, easy-to-manage software for everything you listed.

    I know this is horribly unpopular around here, but if you want to go this route, you should look at Nextcloud. It’s a monolithic mess of PHP, but it’s also stable, tested, used and trusted in production, and doesn’t have a history of lighting user data on fire.

    It also doesn’t really change dramatically, because again, it’s used by actual businesses in actual production, so changes are slow (maybe too slow) and methodical.

    The common complaints around performance and the mobile clients are all valid, but if neither of those really cause you issues then it’s a really easy way to handle cloud document storage, organization, photos, notes, calendars, contacts, etc. It’s essentially (with a little tweaking) the entire gSuite, but self-hosted.

    That said, you still need to babysit it, and babysit your data. Backups are a must, and you’re responsible for doing them and testing them. That last part is actually important: a backup that isn’t regularly tested to make sure it can be restored from isn’t a backup, it’s just thoughts and prayers sitting somewhere.
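
    For what a test can look like, here’s a rough sketch: restore the latest backup to a scratch directory, then spot-check it against the live data. The paths here are made up.

    ```python
    # Spot-check a restored backup: hash a random sample of live files and
    # compare against the same paths in the restore. Files that changed
    # since the backup ran will flag, so run this right after a fresh backup.
    import hashlib
    import random
    from pathlib import Path

    LIVE = Path("/srv/data")          # what you back up (example path)
    RESTORED = Path("/tmp/restore")   # where the test restore landed

    def sha256(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    files = [p for p in LIVE.rglob("*") if p.is_file()]
    for f in random.sample(files, min(5, len(files))):
        rel = f.relative_to(LIVE)
        if sha256(f) != sha256(RESTORED / rel):
            raise SystemExit(f"backup mismatch: {rel}")
    print("sampled files match")
    ```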


  • It’s mostly professional/office use where this makes sense. I’ve implemented this (well, a similar thing that does the same job) for clients that want versioning and compliance.

    I’ve worked with/for a lot of places that keep everything because disks are cheap enough that they’ve decided it’s better to have a copy of every git version than not have one and need it some day.

    Or places that have compliance reasons to have to keep copies of every email, document, spreadsheet, picture and so on. You’ll almost never touch “old” data, but you have to hold on to it for a decade somewhere.

    It’s basically cold storage that can immediately pull the data into a fast cache if/when someone needs the older data, but otherwise it just sits there forever on a slow drive.
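
    The idea is basically promote-on-read; a toy sketch (mount points made up, and a real system would also evict stale entries from the cache):

    ```python
    # Toy tiering: reads hit the fast cache first; on a miss, the file is
    # promoted from the cold tier so later reads of it are fast.
    import shutil
    from pathlib import Path

    FAST = Path("/mnt/ssd-cache")   # small, fast tier (example path)
    COLD = Path("/mnt/archive")     # huge, slow tier (example path)

    def read(relpath: str) -> bytes:
        hot = FAST / relpath
        if not hot.exists():                   # cache miss
            hot.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(COLD / relpath, hot)  # promote into the cache
        return hot.read_bytes()
    ```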


  • …depends on what your use pattern is, but I doubt you’d enjoy it.

    The problem is the cached data will be fast, but the uncached will, well, be on a hard drive.

    If you have enough cache space to keep your OS and your working data on it, it’s great, but if you have enough SSD space for all of that anyway, why are you doing this in the first place?

    If your cache drive isn’t big enough to keep your commonly used data on it, then it’s absolutely going to perform worse than just buying another SSD.

    So I guess if this is an ‘I keep my whole Steam library installed, but only play 3 games at a time’ kind of use case, it’ll probably work fine.

    For everything else, eh, I probably wouldn’t.

    Edit: a better use case for this is the ‘I have 800TB of data, but 99% of it is historical and the daily working set is just a couple hundred gigs’ NAS-type thing.



  • One thing you probably need to figure out first: how the dGPU and iGPU are connected to each other, and which ports are wired to which GPU.

    Everyone does funky shit with this: you’ll sometimes have dGPUs that require the iGPU to do anything, or cases where the internal panel is only hooked up to the iGPU (or only the dGPU), and the HDMI and DisplayPort outputs and so on can be any damn thing.

    So uh, before you get too deep into planning what gets which GPU, you probably need to see if the outputs you need support what you want to do.
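
    On Linux, sysfs will at least tell you which card owns which connector and what’s plugged in; a quick sketch (the cardN numbering isn’t guaranteed to map to iGPU/dGPU, so cross-check with lspci):

    ```python
    # List DRM connectors per GPU and whether a display is attached.
    from pathlib import Path

    for conn in sorted(Path("/sys/class/drm").glob("card*-*")):
        status = (conn / "status").read_text().strip()  # "connected" or "disconnected"
        print(f"{conn.name}: {status}")
    ```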



  • Then the correct answer is ‘the one you won’t screw up’, honestly.

    I’m a KISS proponent with security for most things, and uh, the more complicated it gets, the more likely you are to either screw up unintentionally, or get annoyed at it and do something dumb on purpose that you totally meant to fix later.

    Pick the one that makes sense, is easy for you to deploy and maintain, and won’t end up being so much of a hindrance that you start making edge-case exceptions, because those are the things that will 100% bite you in the ass later.

    I’ve seen so many people with otherwise sane, if maybe over-complicated, security designs, who do actually know what they’re doing, turn off a firewall, enable port forwarding, set a weak password, or loosen permissions too far, and just end up getting owned. They got burned by wandering off from their own standards because what they originally implemented was a pain to deal with in day-to-day use.

    So yeah, figure out your concerns, figure out what you’re willing to tolerate in terms of inconvenience and maintenance, and then make sure you don’t ever deviate from there without stopping and taking a good look at what you’re doing, what could happen if you do it, and coming up with a worst-case scenario first.