Maybe.
Linux won because it worked. Hurd was stuck in research and development hell. They were never able to catch up.
> However, Linus’s kernel was more elaborate than GNU Hurd, so it was incorporated.
Quite the opposite.
GNU Hurd was a microkernel, using lots of cutting edge research, and necessitating a lot of additional complexity in userspace. This complexity also made it very difficult to get good performance.
Linux, on the other hand, was just a bog standard Unix monolithic kernel. Once they got a libc working on it, most existing Unix userspace, including the GNU userspace, was easy to port.
Linux won because it was simple, not elaborate.
Motorola has been in the tracker game since way before Air Tags.
I remember getting a Bluetooth tracker with my Moto X circa 2014. Back when Tile dominated the market.
TIL. Thanks for the correction.
1. Many retro games were made for CRT TVs at 480p. Updating the graphics stack for modern TVs is valuable, even if nothing else is changed.
2. All of my old consoles only have analog A/V outputs. And my TV only has one analog A/V input. The mess of adapter cables and swapping is annoying. I want the convenience of playing on a system that I already have plugged in.
3. I don’t even still have some of the consoles that play my favorite classic games, and getting retro hardware is sometimes difficult. Especially things like N64 controllers with good joysticks.
Studios don’t need to do a full blown remake to solve these problems. But I’m also not going to say the Crash and Spyro remakes weren’t welcome. Nintendo’s Virtual Console emulators toe this line pretty well.
But studios should still put in effort to make these classic games more accessible to modern audiences, and if that means a remake, that’s fine with me.
(I’m mostly thinking about the GameCube/PS2 generation and earlier. I don’t see much value in remakes of the Wii/PS3 generation yet.)
Zsh
No plugin manager. Zsh has a builtin plugin system (`autoload`) and ships with most things you want (like Git integration).
My config: http://github.com/cbarrick/dotfiles
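For anyone curious what that looks like in practice, here’s a minimal sketch of a `.zshrc` using only builtins (the directory and function names are illustrative, not from my actual config):

```shell
# Hypothetical ~/.zshrc fragment: "plugins" via zsh's builtin autoload.
fpath=(~/.zsh/functions $fpath)   # add a directory of function files to the search path
autoload -Uz my-prompt-setup      # lazily load a function defined in that directory
my-prompt-setup

# Builtin Git integration via vcs_info (ships with zsh):
autoload -Uz vcs_info
precmd() { vcs_info }
setopt PROMPT_SUBST
PROMPT='${vcs_info_msg_0_} %~ %# '

# Builtin completion system:
autoload -Uz compinit && compinit
```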
Unicode cat 😺
Exactly.
My take is that the issue isn’t with tmpfiles.d, but rather the decision to use it for creating home directories.
RIP Fuchsia
The article says all phones running Android 9 and up are on the network.
But I was under the impression that enrollment in the network was still rolling out? Anyone have details on the current state?
To me, this just sounds like the network isn’t rolled out fully yet (or that NYC residents don’t use Android, which seems suspect) rather than a failing of the device itself.
Mostly I use custom launchers because I don’t like the Google News feed.
What’s wild to me is that back in the Google Now era, I was so excited to root so that I could install the extension for Nova to add Google Now.
But these days, the Google “Discovery” feed is trash compared to what Google Now once was.
Cheating is such a hard problem.
Like, this is what leads to invasive client-side anti-cheat. Which also happens to be one of the main blockers for OS portability.
But if you make it so that the server has to constantly validate the game state, you get terrible lag.
You really have to design your game well to deter cheaters. And you have to empower server moderators to ban cheaters. This sorta implies releasing the servers so that communities can run their own instances, because these studios don’t have the resources to handle moderation themselves.
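One cheap piece of the server-side story is sanity-checking client updates against the game’s physical limits, which is far less costly than fully simulating the game state. A toy sketch (the constants and function names are invented for illustration):

```python
import math

MAX_SPEED = 10.0  # assumed game constant: max units per second


def validate_move(prev_pos, new_pos, dt):
    """Reject a position update that implies moving faster than the game allows.

    prev_pos, new_pos: (x, y) tuples; dt: seconds since the last update.
    A small tolerance absorbs network jitter and floating-point error.
    """
    dist = math.dist(prev_pos, new_pos)
    return dist <= MAX_SPEED * dt * 1.05


# A legal move: 5 units in 1 second.
assert validate_move((0, 0), (5, 0), 1.0)
# A speed hack: 50 units in 1 second.
assert not validate_move((0, 0), (50, 0), 1.0)
```

Checks like this don’t catch everything (aimbots, wallhacks), which is why human moderation still matters.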
I assume it will still be possible to enable the APIs with `adb`.
So this will mostly affect users in that in-between skill level where they know how to use a third-party app store (or how to download and install apps from a file browser) but don’t know how to use `adb`. I’m not sure how big that segment is. Probably pretty small.
For the group of users who are tech-illiterate (the common case), this is actually a very welcome change for malware prevention.
Yeah, but I want both GPU compute and Wayland for my desktop.
Long term, I expect Vulkan to be the replacement for CUDA. ROCm isn’t going anywhere…
We just need fundamental Vulkan libraries to be developed that can replace the CUDA equivalents.
- cuFFT -> vkFFT (this definitely exists)
- cuBLAS -> vkBLAS (is anyone working on this?)
- cuDNN -> vkDNN (this definitely doesn’t exist)

At that point, adding Vulkan support to XLA (Jax and TensorFlow) or ATen (PyTorch) wouldn’t be that difficult.
Unfortunately, those of us doing scientific compute don’t have a real alternative.
ROCm just isn’t as widely supported as CUDA, and neither is Vulkan for GPGPU use cases.
AMD dropped the ball on GPGPU, and Nvidia is eating their lunch. Linux desktop users be damned.
There’s a Wikipedia article on multiple encryption that talks about this, but the arguments are not that compelling to me.
The main argument is about protecting your data from flawed implementations. Like, AES has not been broken theoretically, but a particular implementation may be. By stacking implementations from multiple vendors, you reduce the chance of being exposed by a vulnerability in any one of them.
That’s way overkill for most businesses. That’s like nation state level paranoia.
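To make the cascade idea concrete, here’s a toy sketch of layering two independent stream ciphers with separate keys. The keystreams are derived with SHA-256 in counter mode, which is a toy construction for illustration only, NOT real cryptography; in practice you would cascade vetted ciphers from different vendors (e.g., an AES layer and a ChaCha20 layer):

```python
import hashlib


def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. Illustration only, not secure."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]


def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR stream cipher: encryption and decryption are the same operation."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))


def cascade_encrypt(data: bytes, key1: bytes, key2: bytes) -> bytes:
    # Two independent layers: breaking one still leaves the other intact.
    return xor_cipher(xor_cipher(data, key1), key2)


def cascade_decrypt(data: bytes, key1: bytes, key2: bytes) -> bytes:
    return xor_cipher(xor_cipher(data, key2), key1)


msg = b"secret payload"
ct = cascade_encrypt(msg, b"vendor-one-key", b"vendor-two-key")
assert cascade_decrypt(ct, b"vendor-one-key", b"vendor-two-key") == msg
```

The point is structural: recovering the plaintext requires breaking both layers, so a flaw in one vendor’s implementation alone isn’t enough.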
No, you don’t split the file. You split the master decryption key.
Each user just needs to remember their own password, and SSS can reconstruct the master key when enough users enter their passwords.
> multiple people to agree on decrypting a file
For that, you would use Shamir’s Secret Sharing algorithm rather than multiple encryption.
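The scheme is short enough to sketch: encode the secret as the constant term of a random polynomial over a prime field, hand each party one point on the polynomial, and reconstruct the constant term by Lagrange interpolation once a threshold of parties cooperate. A minimal sketch (the prime, threshold, and API are illustrative):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than the secret


def split(secret: int, threshold: int, n_shares: int):
    """Split `secret` into n_shares points; any `threshold` of them recover it.

    Uses a random polynomial of degree threshold-1 with f(0) = secret.
    """
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, f(x)) for x in range(1, n_shares + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


shares = split(123456789, threshold=3, n_shares=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789  # a different 3 also work
```

In the password scenario, the "secret" would be the master decryption key, and each user’s share would itself be encrypted under that user’s password. Fewer than the threshold number of shares reveal nothing about the key.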
+1
From an order-of-magnitude perspective, the max is terabytes. No “normal” users are dealing with petabytes. And if you are dealing with petabytes, you’re not using some random poster’s program from Reddit.
For a concrete cap, I’d say 256 tebibytes…
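That cap is a power of two, which makes it convenient for fixed-width size fields; a quick sanity check of the arithmetic:

```python
# 256 TiB = 256 * 2**40 bytes = 2**48 bytes,
# i.e., exactly what a 48-bit unsigned size field can address.
assert 256 * 2**40 == 2**48
assert 2**48 == 281_474_976_710_656
```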