


  • When I was still dual-booting Windows and Linux, I found that “raw disk” mode virtual machines worked wonders. I used VirtualBox, so you’d want a guide somewhat like this: https://superuser.com/questions/495025/use-physical-harddisk-in-virtual-box - other VM solutions are available, which don’t require you to accept an agreement with Oracle.

    Essentially, rather than setting aside a file on disk as your VM’s disk, you can set aside a whole existing disk. That can be a disk that already has Windows installed on it; it doesn’t erase what you have. Then you can start Windows in a VM and let it do its updates - since it can’t see the bootloader from within the VM, it can’t fuck it up. You can run any software that doesn’t have particularly high graphics requirements, too. (There’s a rough sketch of the one-off setup command at the end of this comment.)

    I was also able to just “restart in Windows” if I wanted full performance for a game or something like that, but since Linux has gotten very good indeed at running games, that became less and less necessary until one day I just erased my Windows partition to recover the space.
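
    A minimal sketch of that one-off setup step, assuming VirtualBox: the VBoxManage call below creates a small “raw” VMDK descriptor that points at a whole physical disk, which you then attach to a new VM as its hard disk. The device node /dev/sdb and the file name are placeholders - substitute your own, and run it as a user with read/write access to the raw device.

    ```python
    # Rough sketch, not a polished script: create a raw-disk VMDK descriptor so the
    # VM boots the existing Windows install directly from the physical disk.
    # Assumptions: VBoxManage is on PATH and /dev/sdb is the Windows disk.
    import subprocess

    RAW_DISK = "/dev/sdb"          # placeholder device node - check yours with lsblk
    VMDK = "windows-raw.vmdk"      # tiny descriptor file, not a copy of the disk

    subprocess.run(
        ["VBoxManage", "internalcommands", "createrawvmdk",
         "-filename", VMDK,
         "-rawdisk", RAW_DISK],
        check=True,  # raise if VBoxManage reports an error
    )
    print(f"Attach {VMDK} to a new VM as its hard disk and boot it.")
    ```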


  • The PS3 most certainly had a separate GPU - it was based on the GeForce 7800 GTX. Console GPUs tend to be a little faster than their desktop equivalents, as they share the same memory. Rather than the CPU having to send e.g. model updates across a bus to update what the GPU is going to draw in the next frame, it can change the values directly in the GPU’s memory. And of course, the CPU can read the GPU framebuffer and make tweaks to it - that’s incredibly slow on desktop PCs, but console games can do things like tone mapping whenever they like, and it’s been a big problem for the RPCS3 developers to make that kind of thing run quickly.

    The Cell cores are a bit more like the ‘tensor’ cores that you’d get on an AI GPU than a full-blown CPU core. They can’t speak to main RAM directly, just exchange data between themselves - the main CPU has to copy data in and out of them, and also schedule any jobs that must run on them; they can’t do that themselves. They’re also a lot more limited than a main CPU core in what they can do, but they are very, very fast at it.

    If you are doing the kind of calculations where you’ve a small amount of data that needs a lot of repetitive maths done on it, they’re ideal. Bitcoin mining or crypto breaking, for instance - set them up, let them go, check in on them occasionally. The main CPU acts as an orchestrator, keeping all the Cell cores filled up with work to do and processing the end results (there’s a toy sketch of that pattern at the end of this comment). But if that’s not what you’re trying to do, then they’re borderline useless, and that’s a problem for the PS3, because most of its processing power is tied up in those cores.

    Some games have a somewhat predictable workload where offloading makes sense. Got some particle effects - some smoke where you need to do some complicated fluid-and-gravity simulations before copying the end result to the GPU? Maybe your main villain has a very dramatic cape that they like to twirl, and you need to run the simulation on that separately from everything else that you’re doing? Problem is, working out what you can and can’t offload is a massive pain in the ass; it requires a lot of developer time to optimise, when really you’d want the design team implementing that kind of thing; and slightly newer GPUs are a lot more programmable and can do the simpler versions of that kind of calculation both faster and much more in parallel.

    The Cell processor turned out to be an evolutionary dead end. The resources needed to work with it (expensive developer time) just didn’t really make sense for a gaming machine. The things that it was better at are things that it just wasn’t quite good enough at - modern GPUs are Bitcoin monsters, far exceeding what the Cell can do, and if you’re really serious about crypto breaking then you probably have your own ASICs. Lots of identical, fast CPU cores are what developers want to work with - they’re much easier to reason about.
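
    To make the orchestrator point concrete, here’s the toy sketch promised above - nothing to do with the real Cell SDK, just ordinary Python worker processes standing in for SPE-style cores. The main process carves the data into chunks, keeps the workers fed, and combines the results, because the workers can’t fetch work for themselves.

    ```python
    # Toy sketch of the orchestrator pattern - NOT the Cell SDK.
    # Workers do a small, repetitive calculation on chunks of data; the main
    # process does all of the scheduling and all of the copying in and out.
    from multiprocessing import Pool

    def crunch(chunk):
        # Stand-in for "a lot of repetitive maths on a small amount of data".
        return sum(x * x for x in chunk)

    def main():
        data = list(range(1_000_000))
        step = 10_000
        chunks = [data[i:i + step] for i in range(0, len(data), step)]

        # The orchestrator: hand chunks out, collect results, combine them.
        with Pool(processes=6) as pool:   # 6 is illustrative (the PS3 exposed 6 usable SPEs)
            partials = pool.map(crunch, chunks)
        print(sum(partials))

    if __name__ == "__main__":
        main()
    ```

    The part this toy glosses over is exactly the pain described above: deciding which work is worth the copying overhead in the first place.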


  • Yes, because it doesn’t do as much to protect you from data corruption.

    If you have a use case where a barely-measurable increase in speed is essential, but not so essential that you wouldn’t just pay for more RAM to keep it in cache, and also it doesn’t matter if you get the wrong answer because you’ve not noticed the disk is failing, and you can afford to lose everything in the case of a power cut, then sure, use a legacy filesystem. Otherwise, use a modern one.



  • emerges from a brand you’ve probably never heard of

    Writing this on a Tuxedo Pulse 14 / gen 3 as we speak. Great little laptop. I’d wanted something with a few more pixels than my previous machine, but there’s a massive jump from bog-standard 1080p to extremely expensive 4K screens. This one has a three-megapixel screen at a premium-but-not-insane price, compiles code like a champion, does an extremely competent job of 3D gaming, came with Linux and runs it all perfectly.

    “Tuxedo Linux”, which is their in-house distro, is Ubuntu + KDE Plasma. Seemed absolutely fine, although I replaced it with Arch btw since that’s more my style. Presumably they’re using Debian for the ARM support on this new one? This one runs pretty cold most of the time, but you definitely know that you’ve got a 54W processor in a very thin mobile device when you try e.g. playing simulation games - it gets a bit warm on the knees. “Not x64” would be a deal-breaker for my work, but for most uses the added battery life would be more valuable than the inconvenience.




  • We’ve a few rescue cats - we got them all when they were about three or four years old. We kept them inside for the first six weeks or so, and made sure that they’d got used to living in a new house before we let them outside.

    The one which had been abandoned and had been living outside for a few weeks (a boy) stopped using his litter tray completely, as soon as he was allowed outside again.

    The other two, both girls who had a ‘smooth’ changeover, took a bit more time to get used to being outside. One transitioned off her litter tray by herself after a couple of months; the other took more like four months, and she was a bit of a fair-weather pooper for a while as well.

    My take-home message would be that cats generally prefer to do their business as far away from where they live as possible. Only possible bit of advice would be to wait until the weather’s getting better in case your cats dislike the wind and the rain. I believe forest cats love the frosty weather anyway, though?


  • Yeah.

    There are a couple of ways of looking at it: general-purpose computers generally implement ‘soft’ real-time functionality. It’s usually a requirement for music and video production; if you want to keep to a steady 60 fps, then you need to update the screen and the audio buffer absolutely every 16 ms. To achieve that, the AV thread runs at a higher priority than any other thread. The real-time scheduler doesn’t let a lower-priority thread run until every higher-priority thread is finished. Normally that means worse performance overall, and in some cases it can softlock the system - if the AV thread gets stuck in a loop, your computer won’t even respond to keyboard input. (There’s a small sketch of how a thread asks for that kind of priority at the end of this comment.)

    Soft real-time is appropriate when no-one will die if a timeslot is missed. A video stutter won’t kill you. Hard real-time is for things like industrial control. If the anti-lock brakes in your car are meant to evaluate your wheels one hundred times a second, then taking 11 ms to do that evaluation is a complete system failure, even if the answer is correct. Note that it doesn’t matter if it gets the right answer in 1 ms or 9 ms, as long as it never ever takes more than 10. Hard real-time performance does not mean good performance, it means predictable performance.

    When we program up PLCs in industrial settings, we’ll use processor interrupts for our ‘critical sections’, so that we know our code will absolutely run in time. We use specialised languages as well - no loops, no recursion - that don’t let you do things that can’t be checked for an upper time bound. Lots of finite state machines! But when we’re done, we know that we’ve got code that won’t miss a time slot in the next twenty years of operation.

    That does mean, ironically, that my old Amiga was a better music computer than my current desktop, despite being millions of times less powerful. OctaMED could take over the whole CPU whenever it liked, whereas a modern desktop might have to respond to a USB device or a hard drive at any moment, leading to a potential stutter at any time. Tiny probability, but not an acceptable one.
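
    To illustrate the soft real-time half of this, here’s roughly how a process asks Linux for a real-time priority, the way an AV thread would. It’s a sketch: SCHED_FIFO is the real policy name, but the priority value is an arbitrary choice, and the call normally needs root, CAP_SYS_NICE, or a suitable rtprio limit to succeed.

    ```python
    # Minimal sketch: run under the SCHED_FIFO real-time policy, as an audio/video
    # thread would. SCHED_FIFO tasks preempt everything on the normal scheduler -
    # which is also why a runaway loop here can make the whole machine unresponsive.
    import os
    import time

    def enable_soft_realtime(priority=50):    # 1..99; 50 is an arbitrary choice
        try:
            os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
            print("running with SCHED_FIFO priority", priority)
        except PermissionError:
            # Needs root, CAP_SYS_NICE, or an appropriate rtprio rlimit.
            print("not permitted - staying on the normal scheduler")

    def av_loop():
        deadline = time.monotonic()
        while True:
            # ... render one frame / fill one audio buffer here ...
            deadline += 1 / 60                # aim for a slot every ~16 ms
            time.sleep(max(0.0, deadline - time.monotonic()))

    if __name__ == "__main__":
        enable_soft_realtime()
        av_loop()
    ```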




  • Cat food is enriched with the amino acid taurine, which cats can’t produce themselves. Dog food is not. Feeding cats exclusively on dog food will kill them eventually, via blindness and heart disease.

    Not a disaster if they steal it from the dog once or twice, but it cannot be their long-term diet.



  • There are, but it’s complicated. Doom (2016), for instance, doesn’t handle the very large Vulkan swap chain that’s possible on some modern graphics cards, and crashes on start-up. Someone patched Proton around that time so that Doom would start; the patch was later reverted since it broke other games. Other games based off of that engine - a couple of Wolfensteins, Doom Eternal - have the problem fixed in their binaries, and so run on up-to-date Proton, but depending on your hardware, only a few specific, old versions of Proton will do for Doom.

    Regressions get fixed - that’s okay. Buggy behaviour which depended on regressions that got fixed - that’s a problem.


  • That’s almost exactly the problem. English forms the future tense exclusively with helper words, and indeed uses helper words like ‘to’ to form an infinitive. ‘Will’ is the helper word to show that something is a fact, that it is definite - grammatically, it is indicative. (The sun will rise tomorrow.) ‘Would’ is the helper word to show that something is an opinion, or dependent on something else - grammatically, it is subjunctive. (If you push that, it would fall; if it was cheaper, I would buy it.)

    Spanish has both helper words for the future tense (conjugations of ‘ir’, analogous to ‘going to’, often used in speech) and straight-up conjugations for the future tense (which don’t exist in English; often used in writing). It also conjugates verbs differently depending on whether they’re indicative, subjunctive, or imperative (asking or telling someone to do something). This is how Spanish manages to have fifty-odd ways to conjugate every verb, which is very confusing to English speakers, who make do with three ways and helper words.

    Translating a ‘future tense sentence’ for Duolingo requires you to have psychic powers about whether something is fact or opinion, which helper words are wanted, and so on, and it usually comes down to guessing between multiple ‘correct’ answers, which Duo will reject all but one of.


  • Absolutely this. I’d have argued that ‘every day’ is a more idiomatic translation than ‘daily’, and what native speakers would say, but that’s irrelevant. English tends to emphasise the end of sentences as the most important part, so all these translations are correct depending on the nuance that you intend:

    • Daily in Hamburg, many ships arrive (as opposed to e.g. cars, or few ships)
    • Daily, many ships arrive in Hamburg / Many ships arrive daily in Hamburg (as opposed to e.g. Bremen)
    • Many ships arrive in Hamburg daily (as opposed to e.g. weekly)

    Wouldn’t question any of those constructions as a native speaker. In fact, the original responder’s example is why I gave up on Duolingo myself some years ago. Translating ‘future tense’ sentences from Spanish into English or back again is always going to be a matter of opinion, since English doesn’t have the verb conjugations that Spanish does. Guessing the ‘sanctified answer’ is tedious, when a lot of the time it’s not even the most natural form of the sentence.


  • Spot on advice. I’d observe that media files tend to be quite large, and if all the disk has been used for is copying these files onto it, then they’re likely to be both relatively unfragmented and at the start of the disk, so the reduction in partition size isn’t going to be as slow as it usually is. (Which is very slow.)

    Since media files are read relatively infrequently, I’d probably want to use a filesystem that checks for bit rot instead of ext4 - to make sure that they’ve not become corrupt by the time you want to use them. But that’s Linux holy war territory, so I’ll leave it alone.
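
    Just to show what ‘checks for bit rot’ means in practice, here’s a rough do-it-yourself stand-in for what a checksumming filesystem does automatically: record a hash per file once, then verify the bytes still match before relying on them. The paths are made up for the example.

    ```python
    # DIY stand-in for a checksumming filesystem's bit-rot detection:
    # store a SHA-256 per file, and later verify that the bytes still match.
    # Paths are illustrative - point it at your own media directory.
    import hashlib
    import json
    import pathlib

    MEDIA_DIR = pathlib.Path("/srv/media")    # hypothetical media directory
    MANIFEST = pathlib.Path("checksums.json")

    def file_hash(path, block=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(block):
                h.update(chunk)
        return h.hexdigest()

    def record():
        sums = {str(p): file_hash(p) for p in MEDIA_DIR.rglob("*") if p.is_file()}
        MANIFEST.write_text(json.dumps(sums, indent=2))

    def verify():
        for name, expected in json.loads(MANIFEST.read_text()).items():
            status = "OK" if file_hash(name) == expected else "CORRUPT"
            print(status, name)

    if __name__ == "__main__":
        record() if not MANIFEST.exists() else verify()
    ```

    A filesystem like Btrfs or ZFS does the equivalent of the verify step on every read (and on a scrub), which is the whole point of preferring one for rarely-read archives.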