• 4 Posts
  • 13 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • Of course. I know some open source devs who advise backing up raw training data, LoRAs, and essentially the original base models used for fine-tuning; a rough sketch of what that kind of backup could look like is below.

    Politicians sent an open letter out in protest when Meta released their LLaMA 2. It is not unreasonable to assume they will intervene for the next one unless we speak out against this.
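    As a rough illustration of what that kind of backup could look like, here is a minimal Python sketch using the huggingface_hub library. The repo IDs and local paths are just placeholders (the adapter repo in particular is hypothetical), and gated models such as Llama 2 additionally require an access token.

    ```python
    # Minimal sketch: mirror a base model and a LoRA adapter to local disk.
    # Repo IDs and paths are placeholders, not anything named in this thread.
    from huggingface_hub import snapshot_download

    BACKUPS = {
        "base-model": "meta-llama/Llama-2-7b-hf",   # example base weights (gated repo)
        "lora-adapter": "someuser/alpaca-lora-7b",  # hypothetical fine-tune adapter
    }

    for name, repo_id in BACKUPS.items():
        path = snapshot_download(
            repo_id=repo_id,
            local_dir=f"./llm-backups/{name}",  # plain on-disk copy, outside the HF cache
        )
        print(f"Saved {repo_id} to {path}")
    ```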




  • cll7793@lemmy.world (OP) to LocalLLaMA@sh.itjust.works · *Permanently Deleted* · 1 year ago

    I hope so, but from what I can tell, we are going to have a repeat of the Patriot Act and the horrors it caused, as shown by Edward Snowden.

    The politicians are only getting one side of the argument about AI, from CEOs and those in positions of power. It is important that they recognize the good AI is doing as well. That is why I made this post: to try to get some voices out there.


  • It would be difficult indeed, but without a doubt they will still try, and cause massive damage to our basic freedoms in the process. For example, imagine if one day all chips required DRM at the hardware level that could not be disabled. That is just one example of the damage they could do, and there isn’t much any consumer could do about it, since developing your own GPU is nearly impossible.


  • cll7793@lemmy.world (OP) to LocalLLaMA@sh.itjust.works · *Permanently Deleted* · 1 year ago

    They are requesting something beyond watermarking. Yes, it is good to have a robot tell you when it made a film. What is particularly concerning is that the witnesses want the government to keep track of every prompt and output ever made, so that any output can eventually be traced back to its origin. That means all open source models would somehow have to encode some form of signature, much like the hidden yellow dots printers produce on every sheet.

    There is a huge difference between a watermark stating that “this is AI generated” and hidden encodings, much like a backdoor, that let them trace any publicly released AI image, video, and perhaps even text output back to a specific model, or, worse, DRM-enforced “yellow dot” injection.

    I know researchers have already looked into encoding hidden, undetectable patterns in text output, so extending that to everything else is not unjustified (a toy sketch of the idea is at the end of this comment).

    Also, if the encodings are not detectable by humans, then they have failed the original purpose of making AI-generated content known.
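
    The “hidden patterns” mentioned above usually mean a statistical watermark: the sampler is nudged toward a pseudorandom “green” subset of the vocabulary seeded by the previous token, and a detector simply measures how often a text lands in that subset. Here is a toy sketch of the idea; the vocabulary, seeding scheme, and 0.5 split are made up, and this is not any specific paper’s implementation.

    ```python
    # Toy illustration of green-list text watermarking and detection.
    import hashlib
    import random

    VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary

    def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
        """Pseudorandomly pick a 'green' subset of the vocab, seeded by the previous token."""
        seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
        rng = random.Random(seed)
        return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

    def watermarked_choice(prev_token: str, candidates: list[str]) -> str:
        """Generator side: prefer candidate tokens that fall in the green list."""
        greens = [c for c in candidates if c in green_list(prev_token)]
        return random.choice(greens or candidates)

    def green_fraction(tokens: list[str]) -> float:
        """Detector side: fraction of tokens in the green list of their predecessor."""
        hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
        return hits / max(len(tokens) - 1, 1)

    # Unwatermarked text hovers around 0.5; watermarked text sits well above it,
    # which a human reader cannot see but a detector can measure.
    ```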