• 0 Posts
  • 29 Comments
Joined 1 year ago
Cake day: February 10th, 2024

  • When compatible hardware is available, it’s expected that having packages built for RVA23 will have a big impact on performance. You can already see a big part of that with the vector (V) extension: running programs built without it is akin to running x86 programs built without SSE or AVX. RVA23 is the first RVA profile that makes V mandatory rather than optional.

    You might see a similar performance impact if you target something like RVA22+V instead of RVA23, but as far as I know the only hardware systems that would benefit from that are the SpacemiT ones (OPi RV2, BPI-F3, Jupiter), while that would still leave behind the VisionFive 2, Pioneer, P550/Megrez, and even an upcoming processor that UltraRISC announced recently. The profiles aren’t really intended for those kinds of fine-tuned combinations, and it’s possible some of the other RVA23 extensions (Zvbb, Zicond, etc.) have a substantial impact too.

    Hardware vendors want to showcase their systems performing as well as they can, so I expect Ubuntu’s aim is to have RVA23 builds ready before RVA23 hardware arrives so that they’ll be the distro of choice for future hardware, even if that means abandoning all existing RISC-V users. IMO it would’ve been better to maintain separate builds for RV64GC and RVA23, but I guess they just don’t care enough about existing RISC-V users to maintain two builds.
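    To put the targeting difference in concrete terms, it mostly comes down to which baseline -march the whole distro is built with; roughly something like the following (profile names in -march need a fairly new GCC or Clang, so treat the exact spellings as an assumption to verify against your toolchain):

        # Today's common baseline: RV64GC, no vector extension
        gcc -O2 -march=rv64gc -mabi=lp64d -o app app.c

        # A fine-tuned middle ground like "RVA22 + V" is roughly this ISA string
        gcc -O2 -march=rv64gcv -mabi=lp64d -o app app.c

        # Targeting the full RVA23 profile directly (newer toolchains accept profile names)
        gcc -O2 -march=rva23u64 -mabi=lp64d -o app app.c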


  • The parent comment mentions working on security for a paid OS, so consider the perspective of users of something like RHEL or SUSE: supply chain “paranoia” absolutely does matter a lot to enterprise users, many of whom are bound by contract to specific security standards (especially when governments are involved). I noted that concerns at that level are rather meaningless to home users.

    On a personal system, people generally do whatever they need to in order to get the software they want. Those things I listed are very common options for installing software outside of your distro’s repos, and all of them offer less inherent vetting than Flathub while also tampering with your system more substantially. Though most of them at least use system libraries.

    they added “bash scripts you find online”, which are only a problem if you don’t look them over or cannot understand them

    I would honestly expect that the vast majority of people who see installation steps including curl [...] | sh (so common that even reputable projects like cargo/rust recommend it) simply run the command as-is without checking the downloaded script, and do the same even when it pipes into sudo sh. That can still be more or less fine if you trust the vendor/host, its SSL certificate, and your ability to type or copy the domain without error. Even if you do look at the script, that might not get you far if it happens to be self-extracting, unless you also check its payload.
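    If you do want to vet one of those scripts, the safer habit is to download it to a file first instead of piping it straight into sh; something like this (the URL and filename are just placeholders):

        # Save the installer instead of piping it straight into sh
        curl -fsSL https://example.com/install.sh -o install.sh

        # Read it (and whatever payload it downloads or extracts) before running it
        less install.sh

        # Only then execute it, with sudo only if it genuinely needs root
        sh install.sh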


  • zarenki@lemmy.ml to Linux@lemmy.ml · Fan of Flatpaks ...or Not?
    8 days ago

    A few reasons security people can have to hesitate on Flatpak:

    • In comparison to sticking with strictly vetted repos from the big distros like Debian, RHEL, etc., using Flathub and other sources means normalizing installing software that isn’t so strongly vetted. Flathub does at least have a review process but it’s by necessity fairly lax.
    • Bundling libraries with an application means you can still be vulnerable to an exploit in some library even after your OS vendor has rolled out the fix, if a Flatpak app still ships and loads the vulnerable version. The freedesktop runtimes at least help limit the scope of this issue but don’t eliminate it.
    • The sandboxing isn’t as secure as many users might expect, which can further encourage installing untrusted software.

    From a typical home user’s perspective this probably seems like nothing; in terms of security you’re still usually better off with Flatpak than installing random AUR packages, adding random PPA repos, using AppImage programs, installing a bunch of Steam games, blindly building an unfamiliar project you cloned from github, or running bash scripts you find online. But in many contexts none of that is acceptable.
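    On the sandboxing point, it’s at least easy to check how wide a given app’s sandbox actually is; something like this (org.videolan.VLC is just an example app ID):

        # List installed Flatpak apps
        flatpak list --app

        # Show the static permissions an app ships with
        flatpak info --show-permissions org.videolan.VLC

        # Tighten (or loosen) them per app if you want
        flatpak override --user --nofilesystem=home org.videolan.VLC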


  • zarenki@lemmy.ml to Android@lemmy.world · Lock screen and ads
    2 months ago

    Not a phone, but probably the most mainstream example in the US market: Amazon devices often use lock screen ads by default. They charge $15-$20 more to buy a version of the device without those ads or to get them removed from an existing device. Affects both Fire HD tablets (which use a version of Android without Google services) and Kindle epaper devices (which aren’t Android).


  • The command you’re looking for is btrfs send. See man btrfs-send.

    I know of at least one tool, btrbk, which automates both periodic snapshots and incremental sync, but here’s an example manual process so you can get the basic idea. Run all of this in a root shell or with sudo.

    As initial setup:

    • Create a btrfs filesystem on the sender drive and another on the receiver drive. No need to link them or sync anything yet, although the receiver’s filesystem does need to be large enough to actually accept your syncs.
    • Use btrfs subvolume create /mnt/mybtrfs/stuff on the sender, substituting the actual mount point of your btrfs filesystem and the name you want to use for a subvolume under it.
    • Put all the data you care about inside that subvolume. You can mount the filesystem with a mount option like -o subvol=stuff if you want to treat the subvolume as its own separate mount from its parent.
    • Make a snapshot of that subvolume. Name it whatever you want, but something simple and consistent is probably best. Note that btrfs send needs the snapshot to be read-only, so create it with -r. Something like mkdir /mnt/mybtrfs/snapshots; btrfs subvolume snapshot -r /mnt/mybtrfs/stuff /mnt/mybtrfs/snapshots/stuff-20250511.
    • If the receiver is a separate computer, make sure it’s booted up and running an SSH server. If you’re sending to another drive on the same system, make sure it’s connected and mounted.
    • Send/copy the entire contents of the snapshot with a command like btrfs send /mnt/mybtrfs/snapshots/stuff-20250511 | btrfs receive /mnt/backup. You can run btrfs receive through SSH if the receiver is a separate system.

    For incremental syncs after that:

    • Make another snapshot (again read-only) and make sure not to delete the previous one: btrfs subvolume snapshot -r /mnt/mybtrfs/stuff /mnt/mybtrfs/snapshots/stuff-20250518.
    • Use another send command, this time with the -p option pointing at the snapshot from the last successful sync so the transfer is incremental: btrfs send -p /mnt/mybtrfs/snapshots/stuff-20250511 /mnt/mybtrfs/snapshots/stuff-20250518 | btrfs receive /mnt/backup.

    If you want to script a process like this, make sure the receiver stores the name of the latest synced snapshot somewhere only after the receive completes successfully, so that you aren’t trying to do incremental syncs based on a parent that didn’t finish syncing.
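    A rough sketch of that kind of script, using the same paths as the examples above (the state-file name and date-based snapshot naming are my own assumptions, and the SSH variant is left out):

        #!/bin/bash
        # Sketch: incremental btrfs send/receive with a state file recording the last synced snapshot.
        set -euo pipefail

        SRC=/mnt/mybtrfs/stuff
        SNAPDIR=/mnt/mybtrfs/snapshots
        DEST=/mnt/backup
        STATE="$DEST/.last-synced"          # assumed name for the state file

        NEW="stuff-$(date +%Y%m%d)"

        # btrfs send needs a read-only snapshot
        btrfs subvolume snapshot -r "$SRC" "$SNAPDIR/$NEW"

        if [ -f "$STATE" ]; then
            PREV="$(cat "$STATE")"
            btrfs send -p "$SNAPDIR/$PREV" "$SNAPDIR/$NEW" | btrfs receive "$DEST"
        else
            btrfs send "$SNAPDIR/$NEW" | btrfs receive "$DEST"
        fi

        # Record the new parent only after the receive succeeded (pipefail aborts above on failure)
        printf '%s\n' "$NEW" > "$STATE"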


  • zarenki@lemmy.ml to Linux@lemmy.ml · This looks cool but can it game?
    2 months ago

    “Dynamically compiled” and dynamic linking are very different things, and in turn dynamic linking is completely different from system calls and inter-process communication. I’m no emulation expert, but I’m pretty sure you can’t just swap out a dynamically linked library for a different architecture’s build of it at load time and expect the ABI to somehow work out, unless you only do this with a select few manually vetted libraries where you can smooth over the ABI differences. Calling into drivers or communicating with other processes that run as the native architecture is generally fine, at least.

    I don’t know how much Asahi makes use of the capability (if at all), but Apple’s M series processors add special architecture extensions (notably hardware support for x86-style TSO memory ordering) that let x86 emulation perform much better than it can on any other ARM system.

    I wouldn’t deny that you can get a lot of things playable enough, but this is very much not hardware you buy for gaming: a CPU and motherboard combo that costs $1440 (64-core 2.2GHz) or $2350 (128-core 2.6GHz) yet performs substantially worse in most games than a $300 Ryzen CPU+motherboard combo (and has GPU compatibility quirks to boot) will be very disappointing if that’s what you want it for. The same could be said, to a lesser extent, about x86 workstations that prioritize core count like Xeon/Epyc/Threadripper. For compiling code, running automated tests, and other highly threaded workloads, this hardware is quite a treat.


  • With one of these Altra CPUs (Q64-22), I can compile the Linux kernel (defconfig aarch64 with modules on GCC 15.1) in 3m8s with -j64. Really great for compiling, and much lower power draw than any x86 system with a comparable core count. Idles at 68W full system power, pulls 130W when all cores are under full load. Pulling out some of my 4 RAM sticks can drive that down a lot more than you’d expect for just RAM. lm_sensors claims the “CPU Power” is 16W and 56W in those two situations.

    Should be awful for gaming. It’s possible to run x86 things with emulation, sure, but performance (especially single-thread) suffers a lot. Through qemu, I run a few containers where the performance hit really doesn’t matter.
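    A sketch of that kind of setup, assuming qemu user-mode emulation registered through binfmt_misc (podman shown here; docker’s --platform flag works the same way):

        # Register qemu user-mode emulators with binfmt_misc
        # (distro packages like qemu-user-static can also set this up)
        podman run --rm --privileged docker.io/multiarch/qemu-user-static --reset -p yes

        # Run an x86_64 container on the aarch64 host through qemu emulation
        podman run --rm --platform linux/amd64 docker.io/library/alpine uname -m   # prints x86_64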

    Ampere has a weird PCIe bug that results in either outright incompatibility or a video output filled with strange artifacts/distortion for the vast majority of GPUs; the known-good cards are only a few select Nvidia models. I don’t happen to own any of those Nvidia cards myself, but this workstation ships with one. Other non-GPU PCIe devices like NICs, NVMe, and SAS storage controllers work great, with tons of PCIe lanes.


  • Depends on what you consider self-hosted. Web applications I use over LAN include Home Assistant, NextRSS, Syncthing, cockpit-machines (VM host), and media stuff (Jellyfin, Kavita, etc). Without web UI, I also run servers for NFS, SMB, and Joplin sync. Nothing but a Wireguard VPN is public-facing; I generally only use it for SSH and file transfer but can access anything else through it.

    I’ve had NextCloud running for a year or two but honestly don’t see much point and will probably uninstall it.

    I’ve been planning to someday also try out Immich (photo sync), Radicale (calendar), ntfy.sh, paperless-ngx, ArchiveBox (web archive), Tube Archivist (YouTube archive), and Frigate NVR.


  • The 6-month release cycle makes the most sense to me on desktop. Except during the times I choose to tinker with it at my own whim, I want my OS to stay out of my way and not feel like something I have to maintain and keep up with, so rolling (Arch, Tumbleweed) is too often. Wanting to use modern hardware and the current version of my DE makes a 2-year update cycle (Debian, Rocky) feel too slow.

    That leaves Ubuntu, Fedora, and derivatives of both. I hate Snap, and Ubuntu has been pushing it more and more in recent years; packages that more closely resemble their upstream projects are also nice, so I use Fedora. I also like the way Fedora has rolling kernel updates but a fixed release for most userspace, like the best of both worlds.

    I use Debian stable on my home server. Slower update cycle makes a lot more sense there than on desktop.

    For work and other purposes, I sometimes touch Ubuntu, RHEL, Arch, Fedora Atomic, and others, but I generally only use each when I need to.


  • If the only problem is that you can’t use dynamic linking (or otherwise make relinking possible), you can still legally use LGPL libraries, as long as you license the project using that library as GPL or LGPL as well.

    However, those platforms tend to be a problem for the GPL in other ways. The GPL has long been known to conflict with Apple’s App Store and similar services, for example, because it forbids imposing extra limits that restrict user freedom while those stores have terms of service that do exactly that.


  • If it was a community addition why would it matter? And why would they remove the codecs.

    You don’t have to be a corporation to be held liable for legal issues with hosting codecs. You just need to be big enough for lawyers to see you as an attractive target and be in a country where codec patent issues apply. There’s a very good reason why the servers for deb-multimedia (Debian’s multimedia repo), RPM Fusion (Fedora’s multimedia repo), VLC’s site, and others are all hosted in France and do not offer US-based mirrors. France is a safe haven for FOSS media codecs because its law does not consider software patentable, unlike the US and even most other EU nations.

    Fedora’s main repos are hosted in the US. Even if they weren’t, the ability for any normal user around the world to host and use mirrors is a very important part of an open, community-friendly distro, and the existence of patented codecs in that repo would open any mirrors up to liability. Debian has the exact same issue, and both distros settled on the same solution: point users to a separate repo, hosted in France, that contains extra packages for patent-encumbered codecs.
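    On Fedora’s side that separate repo is RPM Fusion, and enabling the free repo is a single dnf command (this follows RPM Fusion’s own documented instructions; check rpmfusion.org for the current URL):

        # Enable the RPM Fusion "free" repo
        sudo dnf install \
          https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm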


  • I stopped using Arch a long time ago for this same reason. Either Fedora (or derivatives like Nobara) or an atomic/immutable distro (like Bazzite, Silverblue, Kinoite) is probably the way to go.

    I used to feel like Ubuntu was a good option for this, but it no longer is: too often they push undesirable changes that need manual tweaking to fix after release upgrades. Debian Stable is generally good for low-maintenance use but doesn’t keep up as well with newer hardware or newer updates to video drivers and Mesa, which makes it suboptimal for typical gaming use. Debian Testing can be prone to breaking things in updates (in my experience, worse than Arch does).

    I saw another comment recommend Rocky/RHEL, but note that their kernel doesn’t support btrfs. Since you mentioned a root snapshot, I expect you probably use it.



  • I’ve been using single-disk btrfs for my rootfs on every system for almost a decade. Great for snapshots while still being an in-tree driver. I also like being able to use subvolumes to treat / and /home (and maybe others) like separate filesystems without them actually being different partitions.
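    As a rough sketch of that layout (the subvolume names and device path are just examples):

        # Create separate subvolumes for the root and home trees on one filesystem
        btrfs subvolume create /mnt/newfs/@
        btrfs subvolume create /mnt/newfs/@home

        # Mount the same partition twice, selecting a different subvolume each time
        mount -o subvol=@ /dev/sdX2 /mnt/target
        mount -o subvol=@home /dev/sdX2 /mnt/target/home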

    I had used it for my NAS array too, with btrfs raid1 (on top of LUKS), but migrated that over to ZFS a couple years ago because I wanted to get more usable storage space for the same money. btrfs raid5 is widely reported to be flawed and seemed stuck in a purgatory of never being fixed, so I moved to raidz1 instead.

    One thing I miss is heterogeneous arrays: with btrfs I can gradually upgrade my storage one disk at a time (without rewriting the filesystem) and it uses all of my space. For example, two 12TB drives, two 8TB drives, and one 4TB drive add up to 44TB, and raid1 cuts that in half to 22TB of effective space. ZFS doesn’t do that. Before I could migrate to ZFS I had to commit to buying a bunch of new drives (5x12TB, not counting the backup array) so that every drive would be the same size, and I had to feel confident it would be enough space to last me a long time, since growing it after the fact is a burden.


  • My best guess: whatever they’re filing now was so exhaustively researched that it took months to prepare the strongest case they’re able to make, possibly delayed by the lawyers working on several other cases. Plus waiting until sales have dried up can maximize damages.

    Another possibility is that Nintendo/TPC is planning to make some big Pokémon announcements soon and wants to target this shortly before their own new games to reduce competition. Palworld might seem like more of a threat to the execs now that Pokémon is nearing a major release than it was in the middle of a long drought for the series.



  • A standard called SystemReady exists. For systems that actually follow it, you can have a single ARM OS installation image that you copy to a USB drive, boot through UEFI, and run with no problems on an Ampere server, an NXP device, an Nvidia Jetson system, and more.

    Unfortunately it’s a pretty new standard, around only since 2020, and Qualcomm in particular is a major holdout that hasn’t been using it.

    Just like x86, you still need the OS to have drivers for the particular device you’re installing on, but this standard at least lets you have a unified image, and many ARM vendors have been getting better about upstreaming open-source drivers in the Linux kernel.


  • To the contrary, I would expect the sample to skew more towards people who have a heavily customized X session and strong opinions about window managers while drastically underrepresenting average GNOME users who stick with the default Wayland session. Someone who likes their custom setup can still be waiting for a Wayland equivalent while casual Ubuntu users have been defaulted to Wayland on new non-nvidia installs since early 2021.



  • There’s only one case I’ve found where Wi-Fi use seems acceptable in IoT: ESPHome. It’s open-source firmware for microcontrollers that makes DIY IoT sensors and controls accessible over LAN without phoning home to whatever remote server, without trying to make anything accessible over the Internet, and without breaking in any way if the device has no route to the Internet.

    I still wouldn’t call Wi-Fi use ideal even there; mesh can help in larger homes and Z-Wave/Zigbee radios tend to be more power efficient, though ESP32 isn’t exactly suited for a battery-powered device that’s expected to run 24/7 regardless.