  • One mechanism is multiple parallel goals. It makes it hard to stop playing, since there’s always something you just want to finish or do “quickly”.

    Say you want to build a house. Chop some trees, make some walls. Oh, need glass for windows. Shovel some sand, make more furnaces, dig a room to put them in - oh, there’s a cave with shiny stuff! Quickly explore a bit. Misstep, fall, zombies, dead. You hadn’t placed a bed yet, so gotta run. Night falls. Dodge spiders and skeletons. Trouble finding the new house. There it is! Venture into the cave again to recover your lost equipment. As you come up, a creeper awaitsssss you …

    Another mechanism is luck. The world is procedurally generated, and you can craft and create almost anything anywhere - except for a few things, like spawners. I was once lucky enough to have two skeleton spawners right next to each other, not far from the surface. In later worlds, I probably spent hours in total trying to find something similar.

    The social aspect can also push you to play longer or more than you actually want to. Do I lose my “friends” when I stop playing their game?

    I don’t think Minecraft does any of this maliciously; it’s just a great game. Nevertheless, it has a couple of mechanics which can make it addictive and problematic.


  • “For agencies like the FTC to seriously consider action, there has to be harm to customers. But the sneaky formula that mobile developers have pioneered is one where the app itself is free, and the gameplay technically does exist in the application, so where’s the harm? Any rEaSoNaBlE viewer won’t be harmed. They will see and uninstall, and there’s disclosures, so who cares? But these companies aren’t targeting ‘the reasonable customers’, they are targeting the people with addictive personalities who get easily sucked in from a deceptive ad to a predatory product.”

    Damn, that’s insane and evil. Like a drug cartel distributing free candy after school, with crystal meth inside. They just weather the storm, knowing full well that a few “customers” will stick.

    I still don’t understand how this can work so well, which it apparently does, given the numbers and scale. I have questions:

    • Why bother making a “main product” at all, if people come for the mini game? Why not make the mini game itself addictive and predatory, save even more development cost and get fewer negative reviews as a bonus? Like, why bother with the candy when you can legally sell meth?
    • Why is this exclusive to the mobile market? The same games, ads and arguments could be made for any other platform with “free”, downloadable content, like PC. Why don’t they hand out their crack candy at college?






  • This is silly.

    The article is an anecdote about one incompetent user trying out a new tool: ChatGPT.

    He uses the wrong tool for what he’s trying to accomplish: finding sources. The free version of ChatGPT cannot search the internet, and it has no internal database of facts, contrary to what he seems to assume.

    So he, like many others, runs into hallucinations.

    Then he jumps to conclusions:

    • Our jobs are safe
    • ChatGPT doesn’t make mistakes or tell falsehoods – it just gets confused
    • He was going to have to produce the substance of his keynote address the old-fashioned way

    How much weight does this assessment or article really carry?

    People who better understand what to expect from an LLM, and who are willing to invest a tad more time into learning how to use a new tool well, will of course produce better results.

    If you want an LLM which can find sources, use an LLM which can find sources: the paid ChatGPT with GPT-4, Bing AI or perplexity.ai.

    Used well, like any tool, they become a productivity multiplier, which naturally means less workforce is required to do the same work. If your job involves text and you refuse to learn how to use state-of-the-art tools, your job is probably not that safe. Yes, maybe “for the next week or so”, but AI development has not stopped, so what good does that do? You’re not going to be replaced by AI, but by people who learned how to work with AI.


    Here’s a paper on the topic, which comes to vastly different conclusions than this anecdotal opinion piece: GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models

    You can upload it to https://www.chatpdf.com/ to get summaries or ask questions.




  • The article complains that the use of the word “hallucinations” is …

    feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species.

    Whether that is true depends on whether we eventually create human-level (or beyond) machine intelligence. No one can predict the future. Personally, I think it’s just a matter of time, but there are good arguments on both sides.

    I find the term “hallucinations” fitting, because it conveys to uneducated people that a claim by ChatGPT should not be trusted, even if it sounds compelling. The article suggests “algorithmic junk” or “glitches” instead. I believe naive users would refuse to accept an output as junk or a glitch: these terms suggest something is broken, although the output still seems sound. “Hallucinations” is a pretty good term for that job, and it is already established.

    The article instead suggests that it is the creators who are hallucinating, in their predictions of how useful the tools will be. Again, no one can predict the future, so maybe. But mostly: it could be both.


    Reading the rest of the article required a considerable amount of goodwill on my part. It’s a bit too polemical for my liking, but I can mostly agree with the challenges and injustices it sees forthcoming.

    I mostly agree with #1, #2 and #3. #4 is particularly interesting and funny, as I think it describes Embrace, Extend, Extinguish.


    I believe AI could help us create a better world (in the grand scopes the article addresses), but I’m afraid it won’t. The tech is so expensive to develop that the most advanced models will come from people who already sit on top of the pyramid, and it will first and foremost multiply their power, which they can use to deepen the moat.

    On the other hand, we haven’t found a solution to the alignment and control problem, and we aren’t certain we will. It seems very likely we will continue to empower these tools without a plan for what to do when a model actually shows near-human or even super-human capabilities, but can already copy, back up, debug and enhance itself.

    The challenges to economy and society along the way are profound, but I’m afraid that pales in comparison to the end game.


  • “the results are ‘high’, as much as 10 percent, because the researchers do not want to downplay how ‘intelligent’ their new technology is. But it’s not that intelligent, as we and they all know. There is currently 0 chance any ‘AI’ can cause this kind of event.”

    Yes, the current state of the art is not that intelligent. But that’s also not what the experts’ estimates are about.

    The estimates and worries concern a potential future, if we keep improving AI, which we do.

    This is similar to standing in the 1990s and saying climate change is of no concern because the CO2 levels at the time are no big deal. Sure, but they won’t stay at that level, and then they can very well become a threat.