Researchers found that ChatGPT’s performance varied significantly over time, showing “wild fluctuations” in its ability to solve math problems, answer questions, generate code, and do visual reasoning between March and June 2023. In particular, ChatGPT’s accuracy on one math test dropped drastically, from over 97% in March to just 2.4% in June. ChatGPT also stopped explaining its reasoning over time, making it less transparent. While ChatGPT became “safer” by declining to engage with sensitive questions, the researchers note that providing less rationale limits understanding of how the AI works. The study highlights the need to continuously monitor large language models to catch performance drift over time.
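The kind of continuous monitoring the study calls for can be sketched very simply: keep a fixed evaluation set, score each model snapshot against it, and flag when accuracy falls between snapshots. This is a minimal illustration, not the study's actual methodology; the toy data, function names, and threshold below are all hypothetical.

```python
# Minimal sketch of drift monitoring for an LLM, assuming a fixed eval set
# with known gold answers. All names and data here are illustrative.

def accuracy(answers, gold):
    """Fraction of model answers that match the gold labels."""
    return sum(a == g for a, g in zip(answers, gold)) / len(gold)

def detect_drift(old_answers, new_answers, gold, threshold=0.05):
    """Flag drift if accuracy drops by more than `threshold` between snapshots."""
    old_acc = accuracy(old_answers, gold)
    new_acc = accuracy(new_answers, gold)
    return (old_acc - new_acc) > threshold, old_acc, new_acc

# Toy example loosely mirroring a yes/no task scored at two dates:
gold  = ["yes", "no", "yes", "no"]
march = ["yes", "no", "yes", "no"]   # 100% correct
june  = ["no",  "no", "no",  "no"]   # 50% correct
drifted, old_acc, new_acc = detect_drift(march, june, gold)
print(drifted, old_acc, new_acc)  # → True 1.0 0.5
```

In practice the eval set would need to be large and held out, and you would track more than raw accuracy (refusal rates, explanation length, and so on), but the core loop is just this: same questions, every snapshot, compare.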
I think this might be what stops AI from taking over as much as people fear. If I were a business owner, I wouldn’t want to put my trust in a black box when I can pay someone to ensure it works exactly to my specification.
deleted by creator
You uh… you might have chosen the wrong field if you hate displacing labour
deleted by creator
You know, I wouldn’t care about being replaced by a machine, as long as I get UBI. Then I could just do what I like to do and wouldn’t need to care whether I actually make money with it.
That’s not how UBI is supposed to work. You would certainly have enough time to do what you like, just not the resources. Any money you’d get would only cover the absolute necessities like shelter and food.
According to who? Who defines what a “basic necessity” is? It could easily be argued that hobbies are a necessity.
I think that’s part of what the Hollywood writers strike is about: AI generating “good enough” scripts, and studios shelling out a few peanuts for some writers to finalize them.
And that’s exactly how it will be used