I personally don’t mind! If you find some models you think are interesting to share, go for it!
The best way to grow a community is to share the highest quality information possible. I actually stopped being a lurker because another Lemmy user told me this.
I want to see Lemmy and LocalLLaMA grow. If you can make the content here so good that others seek out our posts for information, then the community will naturally grow.
I will do the same. No problem! I’m very happy that my post was heard! Thank you!!!
Of course. I know some open source devs who advise backing up raw training data, LoRAs, and the original base models used for fine-tuning.
Politicians sent out an open letter in protest when Meta released LLaMA 2. It is not unreasonable to assume they will intervene for the next release unless we speak out against this.
Also no problem! I feel like I had to share this one.
I hope so, but from what I can tell, we are headed for a repeat of the Patriot Act and the horrors it caused, as shown by Edward Snowden.
The politicians are only getting one side of the argument about AI, from CEOs and others in positions of power. It is important that politicians recognize the good AI is doing as well. That is why I made this post: to try to get our voice out there.
It would be difficult indeed, but without a doubt they will still try, and cause massive damage to our basic freedoms. For example, imagine if one day all chips were required to include hardware-level DRM that cannot be disabled. This is just one example of the damage they could do. There isn’t much any consumer could do about it, since developing your own GPU is nearly impossible.
They are requesting something beyond watermarking. Yes, it is good to have a robot tell you when it is making a film. What is particularly concerning is that the witnesses want the government to keep track of every prompt and output ever made, so that its origin can eventually be traced. So all open source models would have to somehow encode some form of signature, much like the hidden yellow dots printers produce on every sheet.
There is a huge difference between a watermark stating “this is AI generated” and hidden encodings, much like a backdoor, that can trace any publicly released AI image, video, or perhaps even text output back to a specific model, or worse, DRM-required “yellow dot” injection.
I know researchers have already looked into encoding hidden, human-undetectable patterns in text output, so extending this to everything else is not unjustified.
Also, if the encodings are not detectable by humans, then they have failed the original purpose of making AI-generated content known.
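To make the concern concrete, here is a toy Python sketch of one well-known way invisible signatures can be hidden in text: zero-width character steganography. This is an illustrative assumption on my part, not the specific scheme the witnesses or researchers proposed; real LLM watermarks typically bias token choices statistically instead. But it shows how a “model ID” can ride along in text that looks identical to a human reader.

```python
# Toy example (not any real vendor's scheme): hide a model ID in text
# using zero-width Unicode characters that are invisible when rendered.

ZERO = "\u200b"  # zero-width space      -> bit 0
ONE = "\u200c"   # zero-width non-joiner -> bit 1


def embed(text: str, model_id: str) -> str:
    """Append the model ID to the text as invisible zero-width bits."""
    bits = "".join(f"{ord(c):08b}" for c in model_id)
    payload = "".join(ONE if b == "1" else ZERO for b in bits)
    return text + payload


def extract(text: str) -> str:
    """Recover the hidden model ID from the zero-width characters."""
    bits = "".join("1" if c == ONE else "0" for c in text if c in (ZERO, ONE))
    chars = [chr(int(bits[i : i + 8], 2)) for i in range(0, len(bits), 8)]
    return "".join(chars)


stamped = embed("A perfectly normal sentence.", "model-42")
print(stamped == "A perfectly normal sentence.")  # False: payload is present
print(extract(stamped))                           # model-42
```

The point is exactly the tension above: the signature survives copy-paste and is machine-traceable, yet a human reading the text has no idea it is there.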
Of course! Please share this senate hearing around if you want to help. We need to bring awareness to what they are trying to do. Advocating for universal backdoors is insane…
Thank you so much for your answer!
I’m unable to access the wiki for any subreddit; I just get a “Forbidden” message. It may be only some users, or all, I’m not sure. But I’m glad we have lemmy!
For smaller communities, it is definitely better to have some content than none, so long as it isn’t spam, I feel. You are right in this case.