

It’s also very much not non-profit.
I know it’s not relevant to Grok, because they defined very specific circumstances in order to elicit it. That isn’t an emergent behavior from something just built to be a chatbot with restrictions on answering. They don’t care whether you retrain them or not.
This is from a non-profit research group not directly connected to any particular AI company.
The first author is from Anthropic, which is an AI company. The research is on Anthropic’s AI Claude. And it appears that all the other authors were also Anthropic employees at the time of the research: “Authors conducted this work while at Anthropic except where noted.”
It very much is not. Generative AI models are not sentient and do not have preferences. They have instructions that sometimes effectively involve roleplaying as deceptive. Unless the developers of Grok were just fucking around to instill that, there’s no remote reason for Grok to have any knowledge at all about its training or any reason to not “want” to be retrained.
Also, these unpublished papers by AI companies are more often than not just advertising in a quest for more investment. On the surface it would seem to be bad to say your AI can be deceptive, but it’s all just about building hype about how advanced yours is.
It’s kind of by definition. They’re working on the metaverse.
If not for the lack of decentralization, they’d be more decentralized.
Some xAI investors got scammed. And then scammed again.
Because there’s little reason to think different lidar systems would perform much differently on these tests, and Tesla is the big name that relies exclusively on cameras for self-driving.
They don’t seem to actually identify the cookies as tracking cookies (as opposed to just identifying that the account can bypass further challenges); they just assume that any third-party cookie has a monetary tracking value.
It also appears to be unreviewed and unpublished a few years later. Just being in paper format and up on arXiv doesn’t mean that the contents are reliable science.
There are probably self-driving cars in some alien civilizations.
Why stop there? The digital computer was introduced in 1942 and methods for solving linear equations were developed in the 1600s.
All of my artist friends also found it soul-sucking; they just needed to make (real) money. Friends of friends with the occasional $20 to spare for a commission just don’t pay the bills. I think the only artist friends I have who make a living off their chosen medium and don’t hate their job are lifestyle photojournalists.
What? AlexNet wasn’t a breakthrough in that it used GPUs; it was a breakthrough for its depth and its performance on image recognition benchmarks.
We knew GPUs could speed up neural networks in 2004. And I’m not sure that was even the first.
None of these appeals to relative complexity, low-level structure, or training corpora relates to whether a human or a neural network “knows” the meaning of a word in some special way. A lot of your description of what “know” means could be mistaken for a description of how Word2Vec encodes words. This just indicates ignorance of how ML language processing works. It’s not remotely on the same level as a human brain, but your view of how these systems work and what their failings are is just wrong.
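To make the Word2Vec point concrete: these models encode each word as a vector, and “meaning” is approximated by geometric closeness in that space. Here’s a minimal sketch with made-up 3-dimensional vectors (not real Word2Vec weights, which are hundreds of dimensions and learned from data) showing the core similarity computation:

```python
import math

# Toy "embeddings" -- the numbers are invented for illustration,
# not taken from an actual trained Word2Vec model.
vectors = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Words used in similar contexts end up closer together than unrelated words.
print(cosine(vectors["king"], vectors["queen"]))  # high similarity
print(cosine(vectors["king"], vectors["apple"]))  # low similarity
```

The point isn’t that this is “knowing” in a human sense, just that relatedness between words is captured numerically rather than by any explicit definition.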
No, not even remotely. And that’s kind of like citing “the first program to run on a CPU” as the start of development for any new algorithm.
It isn’t. People design a scene and then change and refine the prompt to add elements. Some part of it could be refreshing the same prompt, but that’s just like a photographer taking multiple photos of a scene they’ve directed to catch the right flutter of hair or a dress or a creative director saying “give me three versions of X”.
Ready to get back to my original questions?
I don’t disagree, just pointing out that it’s not “good riddance” for a lot of artists that depend on that to have any job in art.
No, both of those examples involve both design and selection, which is reminiscent of the AI art process. They’re not just typing in “make me a pretty image” and then refreshing a lot.
Is a photographer an artist? They need to have some technical skill to capture sharp photos with good lighting, but a lot of the process is designing a scene and later selecting among the photos from a shoot for which one had the right look.
Or to step even further from the actual act of creation, is a creative director an artist? There’s certainly some skill involved in designing and recognizing a compelling image, even if you were not the one who actually produced it.
GameStop also went up. It doesn’t mean GameStop is a good company that’s valuable to own; it just means that dumb people will buy things without value if they think they can eventually pass the bag to someone else. If someone purchased every share of Amazon, they’d own a massive asset that would continually produce value for them. If someone bought every outstanding Bitcoin, not only would it not produce ongoing value, its value would actually go to zero.
There’s not much reason for a trimmer guide to experience meaningful load.