• 0 Posts
  • 9 Comments
Joined 1 year ago
Cake day: July 1st, 2023



  • And you’re absolutely right about that. That’s not the same thing as LLMs being incapable of producing anything written in a novel way, but the fact that they will, with very little prodding, readily regurgitate complete works verbatim is definitely a problem. That’s not a remix. That’s publishing the same track and slapping your name on it. Doing it two bars at a time doesn’t make it better.

    It’s so easy to get ChatGPT, for example, to regurgitate its training data that you could do it by accident (at least until someone published the trick last year). But, the critics cry, you’re using ChatGPT in an unintended way. Indeed, exploiting ChatGPT to reveal its training data is a lot like lobotomizing a patient or torture victim to get them to reveal where they learned something, but that objection really betrays that these models don’t actually think at all. They don’t contribute anything of their own; they simply have such a large volume of data to reorganize that it’s (by design) impossible to divine which source is being plagiarized at any given token.

    Add to that the fact that every regulatory body confronted with the question of LLM creativity has so far decided that humans, and only humans, are capable of creativity, at least as far as our ordered societies will recognize. By legal definition, ChatGPT cannot transform (term of art) a work. Only a human can do that.

    It doesn’t really matter how an LLM does what it does. You don’t need to open the black box to know that it’s a plagiarism machine, because plagiarism doesn’t depend on methods (or sophisticated mental gymnastics); it depends on content. It doesn’t matter whether you intended the work to be transformative: if you repeated the work verbatim, you plagiarized it. It’s already been demonstrated that an LLM will repeat its training data verbatim a non-zero portion of the time. In small chunks that’s arguably indistinguishable from the way a real mind might handle language, but in large chunks it’s always plagiarism, because an LLM does not think and cannot “remix”. A DJ can make a mashup; an AI, at least as of today, cannot. The question isn’t whether the LLM spits out training data; the question is the extent to which we’re willing to accept some amount of plagiarism in exchange for the utility of the tool.





  • Hear, hear. This isn’t a case of Mercedes selling an upgrade. It’s more akin to selling the car pre-booted and then demanding a monthly payment to remove the boot, under threat of returning to re-apply it if a payment is missed. It’s absolutely a protection racket. Sure would be a shame if something happened to those fancy features we installed.

    The good news is that the companies who will float this first are the ones most likely to do business with politicians, and unfortunately I’m cynical enough to believe that the best way to get regulation in place is to personally inconvenience the decision-makers. I hope that results in action.

    If it doesn’t, well, the next step is self-help. If we’re changing the definition of private property, it’s only a matter of time before people begin questioning whether there’s any point in having private property at all.


  • More and more I think that might be the point. In the absence of users having control over the content they see, the only users left will be the ones who are naive, not tech-savvy, and highly tolerant of manipulation: people who don’t leave because they don’t know how, can’t understand or remember that a better alternative is possible, or can’t muster the effort.

    These are, incidentally, also the same people who are most likely to be unable to distinguish ads from content, most likely to click on ads, and most likely to engage with click- and rage-bait content. That is: the people most vulnerable to corporate predation.

    It’s just like scam robocalls. It’s bad by design, because half the point is to immediately weed out anybody smart enough not to fall for it.


  • It really does. I uninstalled it last night, and now there’s a big empty space on my phone where Sync is supposed to live. My wife and I were talking about it this morning, and we found ourselves talking about it in terms of grief. It’s kind of like a family member died, a family member who held a lot of memories and history, and so it really is a kind of mourning process.

    For what it’s worth, that grief is part of what motivates me, like I think it does many of us refugees, to really push Lemmy and find ways to get the most out of it. I want to punish the people who did this, and the way to do that is to hurt their bottom line. After I uninstalled Sync, my next step was the data request, and my next few days are going to be spent converting bookmarks over to their Lemmy equivalents until nothing is left.