Why stop there? The digital computer was introduced in 1942 and methods for solving linear equations were developed in the 1600s.
All of my artist friends also found it soul-sucking; they just needed to make (real) money. Friends of friends with the occasional $20 to spare for a commission just don’t pay the bills. I think the only artist friends I have who make a living off their chosen medium and don’t hate their job are lifestyle photojournalists.
What? AlexNet wasn’t a breakthrough because it used GPUs; it was a breakthrough for its depth and its performance on image recognition benchmarks.
We knew GPUs could speed up neural networks in 2004. And I’m not sure that was even the first.
None of these appeals to relative complexity, low-level structure, or training corpora relate to whether a human or an NN “knows” the meaning of a word in some special way. A lot of your description of what “know” means could be mistaken for a description of how Word2Vec encodes words. This just indicates ignorance of how ML language processing works. It’s not remotely on the same level as a human brain, but your view of how things work and what its failings are is just wrong.
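To make the Word2Vec point concrete, here’s a minimal sketch of the idea behind embedding-style word representations: words become vectors, and relatedness is approximated by vector similarity. The vectors below are made up for illustration; real embeddings are learned from co-occurrence statistics and have hundreds of dimensions.

```python
import math

# Hypothetical 3-d embeddings; real Word2Vec vectors are learned, not hand-written.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for parallel vectors, ~0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Related words end up closer together than unrelated ones.
print(cosine(embeddings["king"], embeddings["queen"]))  # high
print(cosine(embeddings["king"], embeddings["apple"]))  # low
```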
No, not even remotely. And that’s kind of like citing “the first program to run on a CPU” as the start of development for any new algorithm.
It isn’t. People design a scene and then change and refine the prompt to add elements. Some part of it could be refreshing the same prompt, but that’s just like a photographer taking multiple photos of a scene they’ve directed to catch the right flutter of hair or a dress or a creative director saying “give me three versions of X”.
Ready to get back to my original questions?
I don’t disagree, just pointing out that it’s not “good riddance” for a lot of artists that depend on that to have any job in art.
No, both of those examples involve both design and selection, which is reminiscent of the AI art process. They’re not just typing in “make me a pretty image” and then refreshing a lot.
Is a photographer an artist? They need to have some technical skill to capture sharp photos with good lighting, but a lot of the process is designing a scene and later selecting among the photos from a shoot for which one had the right look.
Or to step even further from the actual act of creation, is a creative director an artist? There’s certainly some skill involved in designing and recognizing a compelling image, even if you were not the one who actually produced it.
GameStop also went up. It doesn’t mean GameStop is a good company that’s valuable to own, it just means that dumb people will buy things without value if they think they can eventually pass the bag to someone else. If someone purchased every share of Amazon, they’d own a massive asset that would continually produce value for them. If someone bought every outstanding Bitcoin, not only would it produce no ongoing value, its value would actually go to zero.
The problem is that shit art is what employs a lot of artists. Like, in a post-scarcity society no one needing to spend any of their limited human lifespan producing corporate art would be awesome, but right now that’s one of the few reliable ways an artist can actually get paid.
I’m most familiar with photography as I know several professional photographers. It’s not like they love shooting weddings and clothing ads, but they do that stuff anyway because the alternative is not using their actual expertise and just being a warm body at a random unrelated job.
I’m comparing ChatGPT’s initial benchmarks to its capabilities today. Observable improvements have been made in less than two years. Even if you want to track time from the development of modern transformer LLMs (“Attention Is All You Need”/BERT), it’s still a short history with major gains (AlexNet isn’t really meaningfully related). These haven’t been incremental changes on a slow and steady march to AI sometime in the sci-fi-scale future.
Except when it comes to LLMs, the fact that the technology fundamentally operates by probabilistically stringing together the next most likely word in the sentence, based on how frequently those words appeared in the training data, is a fundamental limitation of the technology.
So long as a model has no regard for the actual, you know, meaning of the word, it definitionally cannot create a truly meaningful sentence.
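For reference, the frequency-based picture being described above can be caricatured as a bigram model built from raw counts over a toy corpus. This is a sketch of that caricature only; real LLMs use learned transformer weights over long contexts, not literal frequency tables.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a literal "next most likely word" table.
nexts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    nexts[word][nxt] += 1

def most_likely_next(word):
    """Return the word that most frequently followed `word` in the corpus."""
    return nexts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often here
```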
This is a misunderstanding of what “probabilistic word choice” can actually accomplish and the non-probabilistic systems that are incorporated into these systems. People also make mistakes and don’t actually “know” the meaning of words.
The belief system that humans have special cognizance unlearnable by observation is just mysticism.
Bonus trivia, sometimes you may see a downvote on a Beehaw post. As far as I understand the system, that’s because someone on your server downvoted the thing. The system then sends it off to Beehaw to be recorded on the “real” post and Beehaw just doesn’t apply it.
Sure, and that point is being made in multiple other places in these comments. I find it patronizing, but that’s neither here nor there as it’s not what this comment thread is about.
This is a post on the Beehaw server. They don’t propagate downvotes.
Yes? AI is a lot of things, and most have well-defined accuracy metrics that regularly exceed human performance. You’re likely already experiencing it as a mundane tool you don’t really think about.
If you’re referring specifically to generative AI, that’s still premature, but as I pointed out, the interactive chat form most people worry about is 18 months old and making shocking performance gains. That’s not the perpetual “10 years away” it’s been for the last 50 years; that’s something that’s actually happening in the near term. Jobs are already being lost.
People are scared about AI taking over because they recognize it (rightfully) as a threat. That’s not because they’re worthless. If that were the case you’d have nothing to fear.
So just more patronizing. It’s their life, you don’t know better than them how to live it, grief or no.
Except those things didn’t really solve any problems. Well, dotcom did, but that actually changed our society.
AI isn’t vaporware. A lot of it is premature (so maybe overblown right now) or just lies, but ChatGPT is 18 months old and look where it is. The core goal of AI is replacing human effort, which IS a problem wealthy people would very much like to solve, and it has a real monetary benefit wherever they manage it. It’s not going to just go away.
There are probably self-driving cars in some alien civilizations.