• 1 Post
  • 28 Comments
Joined 1 year ago
Cake day: June 11th, 2023


  • Imo, the true fallacy of using AI for journalism or general text lies not so much in generative AI’s fundamental unreliability, but rather in its existence as an affordable service.

    Why would I want to parse through AI-generated text on times.com when, for free, I could speak to some of the most advanced AI on bing.com, OpenAI’s ChatGPT, Google Bard, or a Meta product? These, after all, are the back ends that most journalistic or general written-content websites are using to generate text.

    To be clear: why not cut out the middleman if they’re just serving me AI content?

    I use AI products frequently, and I think they have quite a bit of value. However, when I want new, accurate information on current developments, or really anything more reliable or deeper than a Wikipedia article, I turn exclusively to human sources.

    The only justification a service has for serving me AI-generated text is perhaps the promise that it has a custom-trained model with highly specific training data. I can imagine, for example, weather.com developing highly specialized AI models which tie into an in-house LLM and provide me with up-to-date and accurate weather information. The question I would have in that case is: why am I reading an article rather than just being given access to the LLM for a nominal fee? At some point, they are no longer a regular website; they are a vendor for an in-house AI.





  • I’m not anti-AI, I use generative AI all of the time, and I actually come from a family of professional artists myself (though I am not one). I agree that it’s a tool which is useful; however, I disagree that it is not destructive or harmful to artists simply because it is most effective in their hands.

    1. It concentrates the power of creativity into firms which can afford to produce and distribute AI tools. While AI models are getting smaller, there are frequently licensing issues involved in these small models (not copyright, but simply utilizing the tools for profit). We have no defined roadmap for the democratization of these tools, and most signs point towards large compute requirements.

    2. It enables artists to effectively steal the intellectual labor of other artists. Just because you create cool art with it doesn’t mean it’s right for you to scrape a book or portfolio to train your AI. This is purely for practical reasons: artists today work their asses off to make the very product AI stands to consolidate and distribute for pennies on the dollar.

    You fail to recognize the possibility that I support AI but oppose its content being copyrightable, purely because firms would immediately utilize this to evade licensing work. Why pay top dollar for a career concept artist’s vision when you can pay a starting liberal arts grad pennies to use the Adobe suite to generate images trained on said concept artist’s work?

    Yes, that liberal arts grad deserves to get paid, but they also deserve at least some potential for career advancement.

    Now imagine instead if new laws required that generative AI license its inputs in order to sell for profit. Sure, small generative AI would still scrape the Internet to produce art, but it would create a whole new avenue for artists to create and license art. Advanced generative AI may need smaller datasets, and small teams of artists may be able to utilize and license boutique models.


  • I disagree with this reductionist argument. The article essentially states that because AI generation is the “exploration of latent space,” and photography is also fundamentally the “exploration of latent space,” the two are equivalent.

    It disregards the intention of copyright. The point isn’t to protect the sanctity or spiritual core of art. The purpose is to protect the financial viability of art as a career. It is an acknowledgment that capitalism, if unregulated, would destroy art and make it impossible to pursue.

    AI stands to replace artists in a way which digital art and photography never really did. It’s not a medium; it is inference. As such, if copyright was ever good to begin with, it should oppose AI until compromises are made.








  • Digital Foundry had an interesting take on this. CoD makes more than $1 billion a year, and probably costs more each year than any other franchise to develop and maintain. If Microsoft made it an Xbox exclusive, they might cut that $1 billion figure in half, and the franchise might bleed more money than MS would make selling more consoles. In fact, the franchise might go negative.

    Basically, they can’t afford to lose the PS5 player base.
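    The back-of-envelope version of that argument can be sketched like this. Only the ~$1 billion/year revenue figure comes from the comment; the 50/50 platform split and per-console profit are made-up placeholders.

```python
# Hypothetical break-even estimate for making CoD Xbox-exclusive.
# All numbers except the ~$1B/year revenue are illustrative assumptions.
annual_revenue = 1_000_000_000        # ~$1B/year across all platforms
ps_share = 0.5                        # assume half of that comes from PlayStation
lost_revenue = annual_revenue * ps_share

profit_per_extra_console = 100        # hypothetical profit per additional Xbox sold
extra_consoles_needed = lost_revenue / profit_per_extra_console

print(f"Extra Xboxes needed per year to break even: {extra_consoles_needed:,.0f}")
# → 5,000,000 consoles, every year — which is why the exclusivity math looks bad
```

    Under these (invented) numbers, Microsoft would have to sell millions of additional consoles annually just to offset the lost PlayStation revenue.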






  • Just an FYI, defederation doesn’t mean you as a user can’t see any content from a given instance, or vice versa. It’s more that, from the time of defederation, users on the other instance can’t be seen commenting or posting on your instance. I believe there are other consequences too, but it’s not as straightforward as a ban.

    Defederation is a feature, not a bug. Lemmy was designed with the idea that instances could be more specific in their content, so for your instance to defederate from a Ukraine war footage instance might not be a condemnation so much as a curation decision.

    Think of it like this: an instance has the potential to be either a Reddit alternative or a collection of related subreddits.
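    The cutoff behavior described above can be sketched as a toy model. This is purely illustrative (instance names are invented, and it is not Lemmy’s actual implementation): content federated before the cutoff stays visible, while activity after it no longer propagates.

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    """Toy model of one fediverse instance's view of its peers."""
    name: str
    # Maps a peer instance's domain to the time we defederated from it.
    defederated_at: dict = field(default_factory=dict)

    def defederate(self, peer: str, now: int) -> None:
        self.defederated_at[peer] = now

    def sees(self, peer: str, posted_at: int) -> bool:
        # Content from before the defederation cutoff remains visible;
        # activity after the cutoff simply stops federating to us.
        cutoff = self.defederated_at.get(peer)
        return cutoff is None or posted_at < cutoff

home = Instance("lemmy.example")
home.defederate("other.example", now=100)
print(home.sees("other.example", posted_at=50))   # True: pre-cutoff content stays
print(home.sees("other.example", posted_at=150))  # False: new activity is hidden
print(home.sees("third.example", posted_at=150))  # True: unrelated peers unaffected
```

    The point of the sketch is that defederation is a per-peer cutoff, not a retroactive purge — which is why it feels different from an outright ban.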