In its submission to the Australian government’s review of the regulatory framework around AI, Google said that copyright law should be altered to allow for generative AI systems to scrape the internet.
I agree with Google, only I go a step further: any AI model trained on public data should likewise be public for all, and its data sources should be public as well. You can’t have it both ways, Google.
To be fair, Google releases a lot of models as open source: https://huggingface.co/google
Using public content to create public models is also fine in my book.
But since it’s Google I’m also sure they are doing a lot of shady stuff behind closed doors.
I hope that too, but I’m less optimistic. We live in a capitalistic world.
Copyright law already allows generative AI systems to scrape the internet. You need to change the law to forbid something; it isn’t forbidden by default. Currently, if something is published publicly then it can be read and learned from by anyone (or anything) that can see it. Copyright law only prevents making copies of it, which a large language model does not do when trained on it.
A lot of licensing prevents or constrains creating derivative works and monetizing them. The question is for example if you train an AI on GPL code, does the output of the model constitute a derivative work?
If yes, GitHub Copilot is illegal, as it produces code that would have to comply with multiple conflicting license requirements. If no, I can write some simple “AI” that is “trained” to regurgitate its training data verbatim on a prompt, run a leaked copy of Windows through it, then go around selling Binbows and MSFT can’t do anything about it.
The truth is somewhere between the two. This is just piracy, which has always been a gray area because of the difficulty of prosecuting it: previously because the perpetrators were many and hard to find, now because the perpetrators are billion-dollar companies with expensive legal teams.
The question is for example if you train an AI on GPL code, does the output of the model constitute a derivative work?
This question is completely independent of whether the code was generated by an AI or a human. You compare code A with code B, and if the judge and jury agree that code A is a derivative work of code B then you win the case. If the two bodies of work don’t have sufficient similarities then they aren’t derivative.
If no, I can write some simple AI that is “trained” to regurgitate its output on a prompt
You’ve reinvented copy-and-paste, not an “AI.” AIs are deliberately designed to not copy-and-paste. What would be the point of one that did? Nobody wants that.
Filtering the code through something you call an AI isn’t going to have any impact on whether you get sued. If the resulting code looks like copyrighted code, then you’re in trouble. If it doesn’t look like copyrighted code then you’re fine.
AIs are deliberately designed to not copy-and-paste.
AI is a marketing term, not a technical one. You can call anything “AI”, but it’s usually predictive models that get called that.
AIs are deliberately designed to not copy-and-paste. What would be the point of one that did? Nobody wants that.
For example, if the powers that be decided that licenses don’t apply once you feed material through an “AI”, and failed to define AI, you could say you wrote this awesome OS using an AI that you trained exclusively on Microsoft proprietary code. Their licenses and copyright don’t apply to AI training data, so you could sell the new code your AI just created.
It doesn’t even have to be 100% identical to Windows source code. What if it’s just 80%? 50%? 20%? 5%? Where is the bar where the author can claim “that’s my code!”?
Just to compare, the people who set out to reimplement the Win32 APIs for use in Linux (the thing that has made it into macOS as well now) deliberately would not accept help from anyone who had ever seen any Microsoft source code, for fear of being sued. The bar was that high when it was a small FOSS organization doing it. It was 0%, proven beyond a doubt.
Now that Microsoft is the author, it’s not a problem when GitHub Copilot spits out GPL code word for word, ironically together with its license.
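The “where is the bar?” question above can be made concrete with a crude textual similarity score. This is only an illustration, not a legal standard: `difflib`’s ratio measures raw character overlap, while courts weigh substantial similarity qualitatively.

```python
import difflib

# Toy illustration of the "80%? 50%? 20%?" question: a crude
# character-level similarity score between two pieces of code.
# This is NOT a legal test for derivative works; it only shows
# how fuzzy any purely numeric bar would be.

def similarity(a: str, b: str) -> float:
    """Return a 0.0-1.0 similarity ratio between two strings."""
    return difflib.SequenceMatcher(None, a, b).ratio()

print(similarity("int main(void)", "int main()"))  # roughly 0.83
```

Identical strings score 1.0 and unrelated ones near 0.0, but nothing in the number tells you where infringement begins; that remains a human judgment.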
AI is a marketing term, not a technical one.
The reverse, actually. Artificial intelligence is a field of research that includes things like machine learning, as well as lots of even more mundane applications. It’s pop culture that has hijacked it to mean “a thing exactly as capable as a human brain, but in computer form.”
For example if the powers that be decided to say licenses don’t apply once you feed material through an “AI”, and failed to define AI, you could say you wrote this awesome OS using an AI that you trained exclusively using Microsoft proprietary code.
Once again, it doesn’t matter what you “feed code through.” Copyright applies to the tangible result. If the output from the AI matches closely to something that’s already copyrighted then that copyright applies to it. If it doesn’t match closely then that copyright doesn’t apply to it. The actual process by which the code was produced doesn’t matter one whit. If I took a Harry Potter book, put its pages through a shredder, randomly glued the particles of paper back together and it just so happened to closely replicate Lord of the Rings then the Tolkien estate has a case against me but the Rowling estate does not.
If the resulting code looks like copyrighted code, then you’re in trouble. If it doesn’t look like copyrighted code then you’re fine.
^^ Very much this.
Loads of people are treating the process of AI creating works as either violating copyright or not. But that is not how copyright works. It applies to the output of a process, not the process itself. If someone ends up writing something that happens to be a copy of something they read before, that is a violation of copyright law. If someone uses various works and creates something new and unique, then that is not a violation. It does not matter, at this point in time at least, whether that someone is a real person or an AI.
AI can violate copyright on one work and not on another. Each case is independent and would need to be litigated separately. But AI can produce so much content so quickly that it creates a real problem for a case-by-case analysis of copyright infringement. So it is quite likely the laws will need to change to account for this, and they will likely need to treat AI works differently from human-created works. Which is a very hard thing to actually deal with.
Now, one could also argue the model itself is a violation of copyright. But that IMO is a stretch - a model is nothing like the original work and the copyright law also does not cover this case. It would need to be taken to court to really decide on if this is allowed or not.
Personally I don’t think the conversation should be about what the laws currently allow - they were not designed for this - but about what the laws should allow, so we can steer the conversation towards a better future. Lots of artists are expressing their distaste for AI models being trained on their works; if enough people do this, laws can be crafted to back up this view.
then go around selling Binbows and MSFT can’t do anything about it
I think this has already happened. A very practical example: the Windows GUI has been copied by many Linux distros. And Windows 11 clearly references the Apple macOS GUI, with a sprinkling of Google’s Material Design.
Should Apple and Google be able to sue Microsoft because it “copied” their work? Should Google be able to sue Apple because they “copied” the notification drop-down in iOS?
As you say it’s really a grey area because the only reason we consider AI code to be “regurgitated” while human code to be “inspired” is only because we give humans more recognition of their intellectual abilities.
deleted by creator
Exactly this right here.
Someone getting sued does not mean they are wrong or that they lost the case. Each case needs to look at the works in question and decide whether that particular case violates copyright. Lots of things are taken into account here, and even if small elements have been used or are similar, that does not automatically win the case.
There is also a difference between a specific implementation and the overall feature in question. For instance, APIs are not copyrightable, nor are chords in music, nor what something does overall. Only specific implementations are copyrightable.
The same can apply to AI: if it generates a work that would violate copyright had a human created it, then it violates copyright; if not, it does not. But AI exposes a different problem: scale. There is only a limited amount of work a human can do, but an AI can produce vastly more content - enough that a case-by-case evaluation of infringement might not be viable. And if that becomes the case, then AI works might need to be treated differently from human-created works, or maybe how the models are created and how they can use copyrighted works will need to change. The current laws were never designed with the speed at which AI can work in mind.
deleted by creator
What do you mean by “infringement already”? Do you mean it automatically infringes copyright for all its output just because it might create something similar to a copyrighted work? Or do you mean that if it does create a copyrighted work, that work is infringing on a copyright? Your wording is vague here.
can be shown to be capable of reproducing something close enough to said material
I don’t think that is a good benchmark for forbidding AI generation of content. If you create a random image generator that has no inputs and is truly random, then it is capable of generating something similar to a copyrighted work by pure chance. Even if that chance is very low, you could generate enough images to show it can create something similar to copyrighted works.
What happens if you create one that is trained only on public domain images or works properly licensed? Its output is still partially random and could still generate an image similar to some other copyrighted work outside of its training set by pure chance.
I would argue that both of these should be allowed. They are not doing anything obviously wrong even if they could be used to generate copyrighted works. Just like you could use photoshop - or a paint brush to create copyrighted work.
But then, what if you take some other AI that is trained on all sorts of data, copyrighted or not, but its output is fed through a checker that compares it to the training set (and maybe more copyrighted content) and rejects/regenerates work until it is known not to infringe on copyrighted work, making the chances of it ever producing a copyrighted work far lower than the above programs? Should that be allowed? It is using copyrighted work much like an artist would, and you could argue that any copyrighted work it does produce was by pure accident, as there are intentional steps to mitigate that.
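The reject/regenerate pipeline described above can be sketched in a few lines. Everything here is a stand-in: `generate` and `too_similar` are toy stubs for a real generative model and a real similarity check (e.g. perceptual hashing or embedding distance).

```python
import hashlib

def generate(seed: int) -> str:
    """Stand-in for a generative model: deterministic toy 'work' per seed."""
    return hashlib.sha256(str(seed).encode()).hexdigest()[:8]

def too_similar(work: str, protected: set[str]) -> bool:
    """Stand-in for a similarity check against known protected works."""
    return work in protected

def generate_cleared(protected: set[str], max_tries: int = 100) -> str:
    """Regenerate until the output clears the similarity filter."""
    for seed in range(max_tries):
        work = generate(seed)
        if not too_similar(work, protected):
            return work
    raise RuntimeError("could not produce a non-infringing work")
```

The design question the comment raises survives the sketch: the filter only reduces the probability of a match against works the checker knows about, so accidental similarity to anything outside that set remains possible.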
If you use a paid service like Midjourney to generate copyrighted content, the company is essentially selling you access to copyrighted content they lack the rights to.
As far as I understand the laws involved, yeah, I would expect that to infringe on some copyright holder’s work, and Midjourney would likely be liable for damages. Just like hiring an artist to create some work: if they decide to copy some copyrighted work, that artist would also be liable for damages.
And you also have to consider another side of things: if you can effectively stop AI from training on most works, you will effectively stunt its usefulness. That could render all efforts in regulated nations useless and just move the technology to places that are much more open with it, where authors of copyrighted works will have far less control over things. IMO AI-generated content is out of the bag now and we will not get it back in. So the best we can do is ensure the right people get compensated for their works. Push too hard in the wrong direction (either way) and there is a real chance they never will.
I don’t really have the solutions to many of these problems - but I do think it is worth talking about and don’t think that outright bans (or actions leading to an effective ban) on this tech is the correct way to go.
deleted by creator
You should read this.
An AI model is a derivative work of its training data and thus a copyright violation if the training data is copyrighted.
A human is a derivative work of its training data, thus a copyright violation if the training data is copyrighted.
The difference between a human and ai is getting much smaller all the time. The training process is essentially the same at this point, show them a bunch of examples and then have them practice and provide feedback.
If that human is trained by drawing Disney art, then goes on to create similar-style art for sale, that isn’t copyright infringement. Nor should it be.
This is stupid and I’ll tell you why.
As humans, we have a perception filter. This filter is unique to every individual because it’s fed by our experiences and emotions. Artists make great use of this by producing art which leverages their view of the world, it’s why Van Gogh or Picasso is interesting because they had a unique view of the world that is shown through their work.
These bots do not have perception filters. They’re designed to break down whatever they’re trained on into numbers and decipher how the style is constructed so it can replicate it. It has no intention or purpose behind any of its decisions beyond straight replication.
You would be correct if a human’s only goal was to replicate Van Gogh’s style, but that’s not every artist. With these art bots, that’s the only goal they will ever have. I have to repeat this every time there’s a discussion on LLMs or art bots:
The imitation of intelligence does not equate to actual intelligence.

Absolutely agreed! I think if the proponents of AI artwork actually had any knowledge of art history, they’d understand that humans don’t just iterate the same ideas over and over again. Van Gogh, Picasso, and many others did work that was genuinely unique and not just a derivative of what had come before, because they brought more to the process than just looking at other artworks.
Yup. There seems to be a strong motive in many to not understand this concept as it makes their practices clearly ethically questionable.
My feeling is that the vast majority of pro-AI techbros come from a computer science, finance, or business background; undoubtedly intelligent people, but completely and utterly lacking in any appreciation or understanding of what actually goes into creative work. I’m sure they genuinely believe that there’s no difference between what a human does and what an AI does, because they think art (or writing, music, etc) are just the product of an algorithm.
Ironically, my background is in mathematics but I also happen to be a writer so I see both sides of the argument. I just see the utter lack of compassion people have for those who produce creative work and the same people believe that if it can be automated, it should be automated.
Likely. Which is weird because algorithms are only a subset of software engineering, which requires abstract and creative thought to perform well.
I really, really, really wish people would understand this.
AI can only create a synthesis of exactly what it’s fed. It has no life experience, no emotional experience, no nurture-related experiences, no cultural experiences that color its thinking, because it isn’t thinking.
The “AI are only doing what humans do” is such a brain-dead line of thinking, to the point that it almost feels like it’s 100% in bad faith whenever it’s brought up.
You’re completely wrong, and I’ll tell you why.
None of what you said matters, perception filters, intent, intelligence… it’s all irrelevant to the discussion.
Copyright only grants certain rights, and at least here in Canada, using works to train a model isn’t covered by any of them. The rights cover things like distribution, reproduction, public performance, communication, and exhibition. US law says you can’t “prepare derivative works based upon the work”, but the model isn’t a derivative work because it’s not really a work at all; you can’t even visually look at the model. You can’t copyright an algorithm in the US or Canada.
Only the created art should be scrutinized for copyright infringement, and these systems can generate both (just like a human can).
Any enforcement should then be handled when that protected work is then used to infringe on the actual rights of the copyright holder.
I wasn’t talking about copyright law in regards to the model itself.
I was talking about what is/isn’t grounds for plagiarism. I strongly disagree with the idea that artists and art bots go through the same process. They don’t and it’s reductive to claim otherwise. It negatively impacts the perception of artists’ work to assert that these models can automate a creative process which might not even involve looking at other artists’ work because humans are able to create on their own.
A person who has never looked upon a single painting in their life can still produce a piece but the same cannot be said for an art bot. A model must be trained on work that you want the model to be able to imitate.
This is why ChatGPT required the internet to do what it does (the privacy violation is another big concern there). The model needed vast quantities of information to be sufficiently trained because language is difficult to decipher. Languages evolved by getting in contact with other languages and organically making new words. ChatGPT will never invent a new word because it’s not intelligent, it is merely imitating intelligence.
“A person who has never looked upon a single painting in their life can still produce a piece but the same cannot be said for an art bot. A model must be trained on work that you want the model to be able to imitate.”
No, they really can’t. Go look at a one-year-old’s first attempt at “art”: it’s nothing more than random smashing of colour on paper. A computer could easily generate such “work” as well with no training data at all. They’ve seen art at that point, and still can’t replicate it because they need much more training first.
Humans require books (or teachers who read books) to learn how to read and write. That is “vast quantities of information” being consumed to learn how to do it. If you had never seen or heard of a book, you wouldn’t be able to write a novel. It also completely ignores the fact that you had to previously learn the spoken language as well (a vast quantity of information that takes a human decades to acquire proficiency in, even with daily practice).
Once again, being reductive about artists’ work. Jackson Pollock’s entire career was smashing colours on a canvas. If you want to argue that Pollock had to look at thousands of paintings before making his, I honestly can’t take you seriously at that point.
A computer could easily generate such “work” as well with no training data at all.
Yes and in the eyes of its creators, that was deemed a failure which is why Midjourney and Dall-E are the way they are. These bots don’t want to create art, they want to imitate it.
Children have barely any experiences and can still create something. You might not deem it worthy of calling it art but they created something despite their limited knowledge and life experience.
Of course, you’d need books to read and write. The words have to be written, and you need to see the words in written form if you also want to write them. But one thing you don’t take into account is handwriting, another thing that is unique to every individual. Some have worse handwriting than others, and with practice (like any muscle) it can be improved, but you don’t have to have seen handwritten text before writing it yourself. You only need to be taught how to hold a pen and you can write.
Novels are complex structures of language just like poetry. In order to write novels, you have to consume novels because it’s well understood that to find your own narrative voice you must see how others express theirs. Stories are told in unique ways and it’s crucial as a writer to understand and break these concepts down. Intention and purpose form a core part of storytelling and an LLM cannot and will not be able to express those things.
They’re written in certain ways because the author intended them to be that way, such as Cormac McCarthy deciding to be very minimalist with his punctuation.
I would love to see you make the point that an LLM, without being specifically prompted to do so, would make that stylistic decision. An LLM can’t make that decision because unless you specify a style it is aware of, it won’t organically do it.

I am also a writer. I’ve written a short story. One of my stylistic choices is that I don’t use dialogue tags like “said”. An LLM won’t make that choice because it isn’t designed to do so; it won’t decide to minimise its use of dialogue tags to improve the flow of the narrative unless you told it to.
It’s also completely ignoring the fact that you had to previously learn the spoken language as well (which is a vast quantity of information that takes a human decades to acquire proficiency in even with daily practice).
Yes, in order to learn a spoken language you have to have heard it. However, languages evolve over time. You develop regional accents and dialects. All of the UK speaks English but no two towns speak the same way.
this is stupid I’ll tell you why
Not sure why you think anyone would read anything if that’s how you start it.
A human does not copy previous work exactly like these algorithms do. What’s this shit take?
A human can absolutely copy previous works, and they do it all the time. Disney themselves license books teaching you how to do just that. https://www.barnesandnoble.com/w/learn-to-draw-disney-celebrated-characters-collection-disney-storybook-artists/1124097227
Not to mention the amount of porn online based on characters from copyrighted works. Porn that is often done as a paid commission, expressly violating copyright laws.
Neither does AI?
But considering that humans do get copyright strikes when they make something too similar, that should also apply to AI; it doesn’t matter if it’s not exact.
That should tell you something about how companies act. They’re fine with these LLMs plagiarising content but when someone gets marginally close to their own trademarks, they get slammed.
Humans and AI are not the same and an equivalence should never be drawn.
Your feelings don’t really matter; the fact of the matter is that the goal of AI is literally to replicate the function of a human brain. The way we’re building them often mimics the same processes.
And LLMs and related technologies, by themselves, are artificial but not intelligent. So, the facts are not in favor of your argument to allow commercial parasitism on creative works.
I think you’re missing a point here. If someone uses these models to produce and distribute copyright-infringing works, the original rights holder could go after the infringer.
The model itself isn’t infringing though, and the process of creating the model isn’t either.
It’s a similar kind of argument to the laws that protect gun manufacturers from culpability when someone uses their weapon to commit a crime. The user is the one doing the bad thing; the manufacturer just produces a tool.
Otherwise, could Disney go after a pencil company because someone used one of their pencils to infringe on their copyright? Even if that pencil company had designed the pencil to be extremely good at producing Disney imagery by looking at a whole bunch of Disney images and movies to make sure it matches the size, colour, etc.? No, because a pencil isn’t a copyright infringement of art, regardless of the process used to design it.
Nah. You’re missing the forest for the trees. Let’s get abstract:
Person A makes a living by making product X and selling it.
Person B makes a living by making product Y and selling it.
Both A and B are in the same industry.
Person C uses a machine to extract the essence of product X and Y and blend them. Person C then claims authorship and sells it as product Z, which they sell in competition to X and Y.
Person C has not created anything. Their machine does not have value in the absence of products X and Y, yet received no permission, offers no credit nor compensation. In addition, they are competing for the same customers and harming the livelihoods of A and B. Person C is acting in a purely parasitic manner that cannot be seen as ethical in any widely accepted definition of the word.
deleted by creator
The goal of AI is fictional, and there’s no solid evidence today that it will ever stop being fiction.
What we have today are stupid learning algorithms that are surprisingly good at mimicking intelligent people.
The most apt comparison today is a particularly clever parrot.
I’m all for having the discussion about how to handle AI when we have it, but it’s bad faith to apply it to what we have today.
Critically, what we have today will never ever go on strike, or really make any kind of correct moral decision on its own. We must treat it like dumb automation, because it is dumb automation.
the fact of the matter is that the goal of AI is literally to replicate the function of a human brain
…says who? That’s absolutely your feeling and not facts.
Derivative works are only copyright violations when they replicate substantial portions of the original without changes.
The entirety of human civilization is derivative works. Derivative works aren’t infringement.
That’s just not true
It absolutely is. There’s nothing out there from the past thousand years that isn’t based on prior art. Copyright law only applies to direct copies, and there are explicit carve-outs beyond that which allow you to directly copy some things if your work is transformative.
It is not a derivative work, the model does not contain any recognizable part of the original material that it was trained on.
Except when it produces exact copies of existing works, or when it includes a recognisable signature or watermark?
deleted by creator
The point is that if the model doesn’t contain any recognisable parts of the original material it was trained on, how can it reproduce recognisable parts of the original material it was trained on?
That’s sorta the point of it.
I can recreate the phrase “apple pie” in any number of styles and fonts using my hands and a writing tool. Would you say that I “contain” the phrase “apple pie”? Where is the letter ‘p’ in my brain?

Specifically, the AI contains the relationships between sets of words, and sets of relationships between lines, contrasts and colors.
From there, it knows how to take a set of words and make an image that proportionally replicates those line, contrast and color relationships.

You can probably replicate the Getty Images watermark closely enough for it to be recognizable, but you don’t contain a copy of it in the sense that people typically mean.
Likewise, because you can recognize the artist who produced a piece, you contain an awareness of that same relationship between color, contrast and line that the AI does. I could show you a Picasso you were unfamiliar with, and you’d likely know it was him based on the style.
You’ve been “trained” on his works, so you have internalized many of the key markers of his style. That doesn’t mean you “contain” his works.

Just because you can’t point to a specific part of your brain that contains the letter ‘p’ doesn’t mean it isn’t in there somewhere. If you didn’t contain the letter ‘p’, or the Getty watermark, or Picasso’s work, you wouldn’t be able to recognise them when you saw them or tried to replicate them. The act of recognising something familiar is basically the brain comparing what the eye sees with what is stored in memory. The brain stores it differently from an exact copy on a hard drive, but it does, nevertheless, contain everything that it remembers.
Ah, this old paper again. When it first came out it got raked over the coals pretty thoroughly. The authors used an older, poorly-trained version of Stable Diffusion that had been trained on only 160 million images and identified 350,000 images from the training set that had many duplicates and therefore could potentially be overfitted. They then generated 175 million images using tags commonly associated with those duplicate images.
After all that, they found 109 images in the output that looked like fuzzy versions of the input images. This is hardly a triumph of plagiarism.
As for the watermark, look closely at it. The AI clearly just replicated the idea of a Getty-like watermark, it’s barely legible. What else would you expect when you train an AI on millions of images that contain a common feature, though? It’s like any other common object - it thinks photographs often just naturally have a grey rectangle with those white squiggles in it, and so it tries putting them in there when it generates photographs.
These are extreme stretches and they get dredged up every time by AI opponents. Training techniques have been refined over time to reduce overfitting (since what’s the point in spending enormous amounts of GPU power to produce a badly-artefacted copy of an image you already have?) so it’s little wonder there aren’t any newer, better papers showing problems like these.
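One of the mitigations alluded to above - reducing duplicate-driven overfitting - starts with deduplicating the training set. A minimal sketch, assuming only exact byte-level duplicates need removing; real pipelines use perceptual or embedding similarity to catch near-duplicates as well:

```python
import hashlib

# Drop exact duplicates from a training set. Duplicated examples make
# memorization (overfitting) far more likely, which is why the paper's
# 350,000 heavily duplicated images were the ones at risk of being
# reproduced. Hashing raw bytes only catches identical files.

def dedupe(examples: list[bytes]) -> list[bytes]:
    seen: set[str] = set()
    unique: list[bytes] = []
    for ex in examples:
        digest = hashlib.sha256(ex).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(ex)
    return unique
```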
Nevertheless, the Getty watermark is a recognisable element from the images the model was trained on, therefore you cannot state that the models don’t spit out images with recognisable elements from the training data.
Take a close look at the “watermark” on the AI-generated image. It’s so badly mangled that you wouldn’t have a clue what it says if you didn’t already know what it was “supposed” to say. If that’s really something you’d consider “copyrightable” then the whole world’s in violation.
The only reason this is coming up in a copyright lawsuit is because Getty is using it as evidence that Stability AI used Getty images in the training set, not that they’re alleging the AI is producing copyrighted images.
I said “recognisable”, and it is clearly recognisable as Getty’s watermark, by virtue of the fact that many people, not only I, recognise it as such. You said that the models don’t use any “recognizable part of the original material that it was trained on”, and that is clearly false because people do recognise parts of the original material. You can’t argue away other people’s ability to recognise the parts of the original works that they recognise.
It’s not turning copyright law on its head; in fact, asserting that copyright needs to be expanded to cover training on a data set IS turning it on its head. This is not a reproduction of the original work; it’s learning about that work and making a transformative use of it. A generative work using a trained dataset isn’t copying the original; it’s learning about the relationships that original has to the other pieces in the data set.
This is artificial pseudointelligence, not a person. It doesn’t learn about or transform anything.
I’m not the one anthropomorphising the technology here.
To take those statements seriously, you will need to:
- define and describe in detail the processes by which “a person” learns
- define and describe in detail how “a person” transforms anything
- define and describe in detail what is “intelligence”
- define and describe in detail what these “artificial pseudointelligences” are doing
- define and describe in detail the differences between the latter and the previous points
Otherwise, I’ll claim that “a person” is running exactly the same processes (neural networks, LLMs, hallucinations), and that calling these AIs “artificial pseudointelligences” is nothing else than dehumanizing a minority just because you feel threatened by them.
The lines between learning and copying are being blurred with AI. Imagine if you could replay a movie any time you like in your head just from watching it once. Current copyright law wasn’t written with that in mind. It’s going to be interesting how this goes.
Imagine being able to recall the important parts of a movie, its overall feel, and significant themes and attributes after only watching it one time.
That’s significantly closer to what current AI models do. It’s not copyright infringement that there are significant chunks of some movies that I can play back in my head precisely. First because memory being owned by someone else is a horrifying thought, and second because it’s not a distributable copy.
The thought of human memory being owned is horrifying. We’re talking about AI. This is a paradigm shift. New laws are inevitable. Do we want AI to be able to replicate small creators’ work and ruin their chances at profitability? If we aren’t careful, we are looking at yet another extinction wave where only the richest who can afford the AI can make anything. I don’t think it’s hyperbole to be concerned.
The question to me is how you define what the AI is doing in a way that isn’t hilariously overbroad to the point of saying “Disney can copyright the style of having big eyes and ears”, or “computers can’t analyze images”.
Any law expanding copyright protections will be 90% used by large IP holders to prevent small creators from doing anything.
What exactly should be protected that isn’t?
If I had the answer I’d be writing my congresswoman immediately. All I know is allowing AI unfettered access to just have all content is going to be a huge problem.
How many movies are based on each other? It’s a lot, even if it’s just loosely based on it. If you stopped allowing that then you would run out of new things to do.
Let me ask you this: do you think our brains and LLM’s are, overall, pretty distinct? This is not a trick or bait or something, I’m just going through this methodically in hopes my position - which is shared by some others in this thread it seems - is better understood.
I don’t think they work the same way, but I think they work in ways that are close enough in function that they can be treated the same for the purposes of this conversation.
Pen and pencil are “the same”, and either of those and printed paper are “basically the same”.
The relationship between a typical modern AI system and the human mind is like that between a pencil-written document and a Word document: entirely dissimilar in essentially every way, except for the central issue of the discussion, namely as a means to convey the written word.

Both the human mind and a modern AI take in input data, and extract relationships and correlations from that data and store those patterns in a batched fashion with other data.
Some data is stored with a lot of weight, which is why I can quote a movie at you, and the AI can produce a watermark: they’ve been used as inputs a lot. Likewise, the AI can’t perfectly recreate those watermarks and I can’t tell you every detail from the scene: only the important bits are extracted. Less important details are too intermingled with data from other sources to be extracted with high fidelity.
my head […] not a distributable copy.
There has been an interesting counter-proposal to that: make all copies “non-distributable” by replacing the 1:1 copying, by AI:AI learning, so the new AI would never have a 1:1 copy of the original.
It’s in part embodied in the concept of “perishable software”, where instead of having a 1:1 copy of an OS installed on your smartphone/PC, a neural network hardware would “learn how to be a smartphone/PC”.
Reinstalling would mean “killing” the previous software, and training the device again.
Right, because the cool part of upgrading your phone is trying to make it feel like it’s your phone, from scratch. Perishable software is anything but desirable, unless you enjoy having the very air you breathe sold to you.
Well, depends on desirable “by whom”.
Imagine being a phone manufacturer and having all your users run a black box only you have the means to re-flash or upgrade, with software developers having to go through you so you can train users’ phones to “behave like they have the software installed”.
It’s a dictatorial phone manufacturer’s wet dream.
Yes, that’s exactly my problem with it.
Imagine if you could replay a movie any time you like in your head just from watching it once.
Two points:
- These AIs can’t do that; they need thousands or millions of repetitions to “learn” the movie, and every time they “replay” the movie it is different from the original.
- “Learning by rote” is something fleshbags can do, and are actually required to do by most education systems.

So either humans have been breaking copyright all this time, or the machines aren’t breaking it either.
You have one brain. You could have as many instances of AI as you can afford. In a general sense, it’s different, and acting like it’s not is going to hit you like a freight train if you don’t prepare for it.
That’s a different goalpost. I get the difference between 8 billion brains, and 8 billion instances of the same AI. That has nothing to do with whether there is a difference in copyright infringement, though.
If you want another goalpost, that IMHO is more interesting: let’s discuss the difference between 8 billion brains with up to 100 years life experience each, vs. just a million copies of an AI with the experience of all human knowledge each.
(That’s still not really what’s happening, which is tending more towards several billion copies of AIs with vast slices of human knowledge each).
It’s all theoretical at this stage, but like everything else that society waits until it’s too late for, I think it’s reasonable to be cautious and not just let AI go unregulated.
It’s not reasonable to regulate stuff before it gets developed. Regulation means establishing limits and controls on something, which can’t be reasonably defined before that “something” even exists, much less tested to decide whether the regulation achieves its intended effects.
For what it’s worth, a “theoretical regulation” already exists: Asimov’s Laws of Robotics. Turns out current AIs are not robots, and that regulation is nonsense when applied to Stable Diffusion or LLMs.
I disagree. Over the last twenty years or so we have plenty of examples of things that should have been regulated from the start but weren’t, and now it’s very difficult to do so. Every “gig economy” business, for example.
Well fleshbags have to pay several years worth of salary to get their education, so by your comparison, Google’s AI should too.
Imagine thinking Public Education doesn’t count. Or that no one without a college degree ever invented anything useful. That’s before we get to your notion of “College SHOULD be expensive, for everyone, always”.
The problem with education is NOT that some people pay less for theirs, or nothing at all, nor that some even have the audacity to learn quickly. AI could help everyone to have a chance to learn cheaply, even quickly.
You’re just off on your own little rant now, arguing points I never even implied.
That’s wrong on so many levels:
- Go check the Gutenberg Project and the patent registry, come back when you’ve learned them all, they’re 100% free for everyone.
- Fleshbags have to pay for “dumbed down” educational material just to have a chance at learning anything during their lifespan, AIs don’t.
- The lion’s share of “paying for education” isn’t even paid for education, but for certification. AIs would have to pay the same… if any were dumb enough to spend “several years worth of salary” on some diploma.
- The only part worth paying for, is “hands on experience”, which right now is far more expensive for AIs (need simulations and robots built).
- Training AIs already isn’t free, they need thousands to millions of repetitions to learn the stuff, which means quite a buck in server costs.
So just because fleshbags are really bad at learning, does not mean Google’s AI has to pay for the same shortcomings, they already pay for their own.
Removed by mod
So works derived from other works should not be copyrightable? Oh wait, that’s specifically allowed. As long as it’s not being reproduced 1:1 then it falls under fair use. The argument that one should get paid for that is absurd. You can’t copyright the idea of something. If that were the case then you could never write another poem or novel or short story because someone already did that and to do so would be “stealing.” It would be ridiculous.
You have really meandered off the path of what I was talking about. But please, meander. It’s interesting.
Well, that’s what the person you replied to was saying. Essentially the “AI” is only reading the book, it’s not copying the book.
I could rewrite the entire Lord of the Rings series in my own words and it wouldn’t be copyright infringement. I could sit there with the movies on repeat and the books all open for reference; I don’t owe the rights holder anything in that case, as long as I’m not reproducing their work.
They’re just trolling. Feel free to block and ignore, it’s the best way of dealing with them until moderation is more reliable.
Removed by mod
When you applied to join Beehaw you agreed to our standards, right? Please don’t be a dick on here, OK?
I might have applied, and I wasn’t being a dick. You removed my comment because why? You think The Lord of the Rings was written for grown-ups or something? Time to test you and see how outlandish you can be. I’ll think twice about participating here. Not a safe place for me when I haven’t said anything wrong. Fortunately for me, technology isn’t my specialty; it’s literature. So, say goodbye to me from your community. Also, didn’t appreciate your insult. I’m from the community you’re from. The comment from the other person came from another instance. You have nothing to worry about. In technology you won’t hear a peep from me, because I learned how this place works. Humanities and cultural literacy are not appreciated here.
You gotta speak to your audience.
What is this person’s audience? I’m not it. I guess they should have not picked me. LMAO. Poor kid. I just want to give the poor kid a huge hug, bake some Nestle Toll House cookies, and we can Netflix and chill to the whole Lord of the Rings plus the Hobbit. You know, because I have a heart, and this person needs this. Not that I’d enjoy any of it, I’d just be totally sacrificing my whole identity in favor, just to be helpful and all. LMAO
@LastOneStanding @SkepticElliptic ok but YA is more likely to have interesting queer relationships as far as stuff that I can find in a library or at a bookstore. All the adult queer literature tends to be sold online and the authors themselves have to do most of the promoting.
Bizarre amount of assumptions in your ignorant wall-of-text post. I’m an attorney who’s worked in copyright for small artists and creators. In my current job I fight back against the tech giants and try to rein in specifically Google, Amazon, and Meta with consumer protection regulations. The fuck are you?
I’m a person that has the same clout around here as you. You’re an anonymous rando unless you wish to advertise your legal services, put your name and pic up here for people to see and seek your services, which you are more than welcome to do. Until then, guess who I and you are? Nobody with an opinion. Welcome, Nobody, Attorney at Law. You just got irritated and you can’t do shit about it.
I don’t need to prove anything to you, but your now multiple wall of text rambling screeds say nothing except ignorant insults. If you want to actually engage with the issue, be my guest. Refute what I’ve said, or something new, or idk, at least interesting. You’re just being irritating for irritating sake, otherwise. You don’t have the same “clout” (lmao what is this, recess?) because you haven’t actually brought anything to the discussion.
They are a troll with nothing but nonsense to say. Thanks for your contribution to the discussion it was a great way to frame the issue.
Removed by mod
To be honest I’m fine with it in isolation, copyright is bullshit and the internet is a quasi-socialist utopia where information (an infinitely-copyable resource which thus has infinite supply and 0 value under capitalist economics) is free and humanity can collaborate as a species. The problem becomes that companies like Google are parasites that take and don’t give back, or even make life actively worse for everyone else. The demand for compensation isn’t so much because people deserve compensation for IP per se, it’s an implicit understanding of the inherent unfairness of Google claiming ownership of other people’s information while hoarding it and the wealth it generates with no compensation for the people who actually made that wealth. “If you’re going to steal from us, at least pay us a fraction of the wealth like a normal capitalist”.
If they made the models open source then it’d at least be debatable, though still suss since there’s a huge push for companies to replace all cognitive labor with AI whether or not it’s even ready for that (which itself is only a problem insofar as people need to work to live, professionally created media is art insofar as humans make it for a purpose but corporations only care about it as media/content so AI fits the bill perfectly). Corporations are artificial metaintelligences with misaligned terminal goals so this is a match made in superhell. There’s a nonzero chance corporations might actually replace all human employees and even shareholders and just become their own version of skynet.
Really what I’m saying is we should eat the rich, burn down the googleplex, and take back the means of production.
I agree with you and all, but as someone who just got done degoogling their phone, Google does give back a whole lot.
Not using their services puts you back a decade technologically. My phone is a small PC again instead of a Star Trek Tricorder.

That’s fair, also congratulations. Idk if I would count that towards contributing to the internet though, since it’s all within their walled garden on their own terms. It’s helpful for people, but only insofar as it helps Google. 10 years ago I might be less critical, since they were still in their “don’t be evil” phase and creating open source projects like Android left and right, something they’re evidently regretting now and trying to lock down using proprietary core apps. It’s also worth noting Google’s AI employees authored “Attention Is All You Need”, the paper which laid the groundwork for modern Transformer-based LLMs, though that’s an architecture and not a full model or code.
Or, if it was some non-profit doing the work for the good of everyone :')
If only there were some kind of open AI research lab lmao. In all seriousness Anthropic is pretty close to that, though it appears to be a public benefit corporation rather than a nonprofit. Luckily the open source community in general is really picking up the slack even without a centralized organization, I wouldn’t be surprised if we get something like the Linux Foundation eventually.
Okay so I took back the means of production but it says it’s a subscription basis now
That’s late-stage capitalism for you – even revolution comes with a subscription fee
Probably shoulda read the Revolution TOS before clicking “I Agree”.
Copyright law is gaslighting at this point. Piracy being extremely illegal but then this kind of shit being allowed by default is insane.
We really are living under the boot of the ruling classes.
If you want “this kind of stuff” (by which I assume you mean the training of AI) to not be allowed by default, then you are basically asking for a world in which the only legal generative AIs belong to giant well-established copyright holders like Adobe and Getty. That path leads deeper underneath the boots of those ruling classes, not out from under them.
I don’t think it should be allowed to be trained off any of this stuff for entertainment/art/etc. at all. Like the dream future of AI was all the shitty boring stuff handled for us so we could sit back, chill and focus on arts, real scientific research, general individual betterment etc.
Instead we have these companies trying to get them doing all the art and interesting things whilst we all either have no job, money, or good standard of living, or the dangerous / shitty jobs.
So to avoid being “under the boot of the ruling classes” you want the government to be in charge of deciding what is and is not the correct way to produce our entertainment and art?
I use Stable Diffusion to generate illustrations for tabletop roleplaying game adventures that I run for my friends. I use ChatGPT to brainstorm ideas for those adventures and come up with dialogue or descriptive text. How big a fine would I be facing under these laws?
I mean there has to be a price to pay here, we can’t have our cake and eat it, unfortunately. Caveats like “individual use” could allow this type of use while preventing companies from taking the piss.
You seem to be implying that the government is the ruling class too, which (I grant you) may at least in part be the case but at least they’re voted into place. Would you rather have companies that we have no control over realistically use it without limit?
Honest question, what would you see as a fair way to handle the situation?
I mean there has to be a price to pay here,
Why, because you say so?
Would you rather have companies that we have no control over realistically use it without limit?
Yes, because that means I can also use it without limit. And I see no reason to apply special restrictions to AI specifically, companies are already bound by lots of laws governing their behaviour and ultimately it’s their behaviour that is what’s important to control.
Honest question, what would you see as a fair way to handle the situation?
Handle it the way we already handle it. People are allowed to analyze publicly available data however they want. Training an AI is just a special case of analyzing that data, you’re using a program to find patterns in it that the AI is later able to make use of when generating new material.
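To make “finding patterns” concrete, here’s a minimal sketch (the corpus and function names are made up for illustration): a toy bigram model that keeps only word-transition counts, discards the text it was trained on, and then samples new sequences from those statistics rather than replaying the original.

```python
from collections import defaultdict
import random

def train_bigram(text):
    """Count word-to-word transitions; the source text itself is discarded."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length=6, seed=0):
    """Sample a new sequence from the learned transition statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        words, weights = zip(*nxt.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

model = train_bigram("the cat sat on the mat and the cat ran")
print(generate(model, "the"))
```

Real LLMs are vastly more sophisticated, but the principle is the same: what gets stored are statistical relationships extracted from the data, not the documents themselves.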
Why, because you say so?
This is just being obtuse and a bit of a cunt. You can’t expect not to have negative repercussions as an effect of companies being allowed to just churn out as much AI-generated shit as they can. Especially since you also say:
companies are already bound by lots of laws governing their behaviour and ultimately it’s their behaviour that is what’s important to control.
Please read what you’ve written again, but slowly this time. You’re saying you’re fine with all the other regulation, but it shouldn’t be done here because of individual liberties, when I’ve clearly stated free use can be specifically allowed for here…
Yes, because that means I can also use it without limit.
You’ve again stated your problem when I’ve given a more than sensible solution. Individual free use is fine; why would anyone want to stop you, individually or even with your friends, from being creative? The problem comes when companies with huge resources, influence, and nefarious motives decide to use it. How about this time we get ahead of it, instead of letting things get out of control and then trying to do something about it?
This is just being obtuse and a bit of a cunt.
No, I’m seriously asking. You said that there has to be a price to pay, but I really don’t see why. Why can’t people be free to do these things? It doesn’t harm anyone else.
It’s reasonable to create laws to restrict behaviour that harms other people, but that requires the person proposing those laws to show that this is actually the case. And that the restrictions placed by those laws are reasonable and proportionate, not causing more harm than they prevent.
Individual free use is fine; why would anyone want to stop you, individually or even with your friends, from being creative? The problem comes when companies with huge resources, influence, and nefarious motives decide to use it.
There is no sharp dividing line between these things. What if one of the adventures I create turns out so good that I decide to publish it? What if it becomes the basis for a roleplaying system that becomes popular enough that I start a publishing company for it?
The problem comes when companies with huge resources, influence, and nefarious motives decide to use it.
How about if one of those huge companies just wants to produce some entertainment that will sell really well and that I would enjoy?
You’re not really making an argument for banning AI, here. You’re making an argument for banning nefariousness. That’s fine, but that’s kind of a bigger separate issue.
The ruling class is seeing the end of capitalism. They’re getting desperate and making it obvious.
Can we get some young politicians elected who have degrees in IT? Boomers don’t understand technology; that’s why these companies keep screwing the people.
It’s because they’re corrupt and young people are just as susceptible to lobbyists bribes, unfortunately. The gerontocracy doesn’t make things better though, that’s for sure.
True but that doesn’t mean it wouldn’t be better to have politicians who have a better understanding of the systems they’re legislating. “People can be bribed” isn’t a good excuse to not change anything.
Definitely, I didn’t mean to sound too defeatist.
True. Human beings are the worst
This is more true than anything.
Personally I’d rather stop posting creative endeavours entirely than simply let it be stolen and regurgitated by every single company who’s built a thing on the internet.
I just take comfort in the fact that my art will never be good enough for a generative AI to steal.
If it’s on any major platform, these companies will probably still use it since I doubt at that point if they were allowed to scrape the whole internet they’d have any human looking over the art used.
It’ll just be thrown in with everything else similar to how I always seem to find paper towels in the dryer after doing laundry.
Then I take comfort in the fact it might serve to sabotage whatever it generates.
“Bad” art is still useful in training these models because it can be illustrative of what not to do. When prompting image generators it’s common to include “negative prompts” along with your regular one, telling the AI what sorts of things it should avoid putting in the output image. If I stuck “by Roundcat” into the negative prompts it would try to do things other than the things you did.
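Mechanically, a negative prompt plugs into classifier-free guidance: instead of steering away from an empty prompt, the sampler steers away from the negative prompt’s encoding. A toy sketch, with made-up 3-d vectors standing in for the model’s real noise predictions (the numbers here are illustrative, not from any actual model):

```python
import numpy as np

def guided_direction(positive, negative, scale=7.5):
    """Classifier-free guidance: push the result toward the positive
    prediction and away from the negative one."""
    return negative + scale * (positive - negative)

# Hypothetical stand-ins for conditioned noise predictions.
pos = np.array([1.0, 0.0, 0.0])   # what the prompt asks for
neg = np.array([0.0, 1.0, 0.0])   # style/content to avoid
out = guided_direction(pos, neg)
print(out)
```

The result is amplified along the positive direction and pushed opposite the negative one, which is why listing an artist in the negative prompt nudges outputs away from that artist’s learned features.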
I think the topic is more complex than that.
Otherwise you could say you’d rather stop posting creative endeavours entirely than simply let them be stolen and regurgitated by every single artist who uses the internet for references and inspiration.
There’s not only the argument “but companies do so for profit”, because many artists do the same; maybe they are designers, illustrators, or others, and your work will give them ideas for their commissions.
deleted by creator
let people reuse each other’s melodies
I think this is an interesting example, because it’s already like this. Songs reusing samples of other songs are released all the time, and it’s all perfectly legal. Only making a copy is illegal. No one can sue you if you create a character that resembles Mickey Mouse, but you can’t use Mickey Mouse himself.
And pharmaceutical patents serve the same purpose: they encourage the company to publicly release papers, data, and synthesis methods, so that other people can learn and research can move faster.
And the whole point of this is exactly regulating AI like people: no one will come after you because you’ve read something and now have an opinion about it, nobody will get angry if you saw an Instagram post and now have some ideas for your art.
Of course the distinction between likeness and copy is not that well defined, but that’s part of the whole debacle.
deleted by creator
Pharmaceutical patents are insanely harmful to the average consumer, at least in the US.
That’s more of a US problem than it is a pharmaceutical patents problem.
Only the rich and powerful or those willing to go deeply into debt are able to benefit from all of that extra research.
Only they are able to benefit from that research at first. Which is how it’s always been, new things are rare and expensive at first and become cheaper and more common over time.
deleted by creator
Look at this.
It’s just a single example; there are endless songs which are samples of samples of samples… Once in a while YouTube Content ID will have some problems, as it’s not perfect. That doesn’t mean the system is fundamentally flawed. It’s like saying every car on the planet is cursed because you once got a flat tyre.
Only the rich and powerful or those willing to go deeply into debt are able to benefit from all of that extra research.
Pay attention, because the alternative to patents is not a “free for all” approach; it’s industrial secrecy, as research is still very expensive for entities to carry out.
That aside: no, extra research benefits everyone in society, as new cures for diseases are discovered faster and medicine evolves organically. Patents were the compromise to ensure companies could monetize their research while sharing their knowledge. Are there other possible equilibria? Sure, but we still have to remember we live in the real world; you can’t have your cake and eat it too.
deleted by creator
YT’s system that had messed up and not the legal system.
Oh, the legal system is very much messed up; YouTube tried to put a bandage on it. You have to consider that usually you would need a fully personalized legal contract for each piece of copyrighted material you use. Content ID tries to automate the process, but it’s not perfect.
A 10-20% royalty should be more than enough to incentivise research while still preventing price-fixing and monopolies.
Which is what happens with patents today. The company holding the patent rarely also physically produces the drug; they usually have “manufacturing agreements”, especially in geographically distant markets, where they let a second company make the drug and freely sell it, in exchange for a percentage of the label price.
That’s also what happened with vaccines and many other medications, it’s like the standard procedure lol
And of course, the same principle must apply to the resulting AI models themselves.
Voluntary obscurity is always an option, I suppose.
We need to actively start sabotaging the data sources these LLMs are based on. Make AI worthless.
Your comment right here provides useful training data for LLMs that might use Fediverse data as part of their training set. How would you propose “sabotaging” it?
Books will start needing to add a robots.txt page to the back of the book
Which will be ignored by search engines, as is tradition?
… which was the style at the time.
OK, so I shall create a new thread, because I was harassed. Why bother publishing anything original if it’s just going to be subsumed by these corporations? Why bother being an original human being with thoughts to share that are significant to the world if, in the end, they’re just something to be sucked up and exploited? I’m pretty smart. Keeping my thoughts to myself.
This is a tendency I’ve heard about that I haven’t been able to understand. What is the new risk of expressing your thoughts, prose, or poetry online that didn’t exist before and exists now with LLMs scraping them? How would the corporations exploit your work through data scraping in a way that would demotivate you from expressing it at all? Because I know tone doesn’t come across well in text, I want to clarify that these are genuine questions; my answers to them seem to be very different from many people’s, and I’d like to understand where that difference in perspective comes from.
I think this largely boils down to the time scales required. A person copying your work has a minimum amount of time it takes them to do that, even when it’s just copy and paste. An LLM can copy thousands of different developer’s code, for instance, and completely launder the license. That’s not ok. Why would we allow machines to commit fraud when we don’t allow people to?
This is very interesting for me to think about, since I have so many issues with proprietary technology in general. An LLM copying the code from thousands of proprietary projects is kind of an interesting loophole considering that it would be difficult for any of the individual businesses to prove that their proprietary code was infringed unless the LLM does copy and paste the code exactly. That could cause major changes in the tech industry which I’m not able to predict. Optimally I would like technological development more in the hands of people than behind legal barriers such as with Open Source code and I am not a programmer, so take my musings with a grain of salt.
Except that isn’t exactly how neural networks learn. They aren’t exactly copying work, they’re learning patterns in how humans make those works in order to imitate them. The legal argument these companies are making is that the results from using AI are transformative enough that they qualify as totally new and unique works, and it looks as if that might end up becoming law, depending on how the lawsuits currently going through the courts turn out.
To be clear, technically an LLM doesn’t copy any of the data, nor does it store any data from the works it learns from.
Yes, they probably would, so long as the work is transformative enough. You wouldn’t be the first, or last, author to copy LoTR in their own works.
This is why you can go on Instagram and find people selling presets that give photos the look of a famous photographer. They advertise them as such. But even though they are trying to sell something that supposedly allows you to copy the style of someone else, it’s still legal, because it’s transformative enough.
It doesn’t have to make sense, and we don’t have to agree with it, but that’s how the law works.
The problem is that if I wholesale copy a paragraph word for word, then yes, I am engaging in plagiarism. The line is not as clear as you think. The difference is I can’t hide what I took as well as AI can, and I can’t do it to 10,000 people in an instant.
Just because I engage in plagiarism at scale and hide it better does not mean I did not engage in plagiarism.
Except that what it produces can be very similar or identical to some copyrighted works, licensed under the LGPL, like in this case. You don’t have to copy a whole program to plagiarize someone.
With each day I hate the internet and these fucking companies even more.
Google can go suck on a lemon!
Lemons are delicious af though. Why reward them for their bs?
Worth considering that this is already the law in the EU. Specifically, the Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market has exceptions for text and data mining.
Article 3 has a very broad exception for scientific research: “Member States shall provide for an exception to the rights provided for in Article 5(a) and Article 7(1) of Directive 96/9/EC, Article 2 of Directive 2001/29/EC, and Article 15(1) of this Directive for reproductions and extractions made by research organisations and cultural heritage institutions in order to carry out, for the purposes of scientific research, text and data mining of works or other subject matter to which they have lawful access.” There is no opt-out clause to this.
Article 4 has a narrower exception for text and data mining in general: “Member States shall provide for an exception or limitation to the rights provided for in Article 5(a) and Article 7(1) of Directive 96/9/EC, Article 2 of Directive 2001/29/EC, Article 4(1)(a) and (b) of Directive 2009/24/EC and Article 15(1) of this Directive for reproductions and extractions of lawfully accessible works and other subject matter for the purposes of text and data mining.” This one’s narrower because it also provides that, “The exception or limitation provided for in paragraph 1 shall apply on condition that the use of works and other subject matter referred to in that paragraph has not been expressly reserved by their rightholders in an appropriate manner, such as machine-readable means in the case of content made publicly available online.”
So, effectively, this means scientific research can data mine freely without rights holders being able to opt out, while other uses of data mining, such as commercial applications, can data mine provided there has not been an opt-out through machine-readable means.
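For what it's worth, the "machine-readable means" in Article 4 is usually implemented as crawler directives. A sketch of what a rights holder's opt-out in robots.txt might look like, assuming the crawlers actually honour it (GPTBot and Google-Extended are the user-agent tokens OpenAI and Google have published for training crawlers):

```
# robots.txt — one common machine-readable way to reserve rights
User-agent: GPTBot            # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended   # opt out of use for Google AI training
Disallow: /
```

Of course, this only binds crawlers that choose to respect it, which is part of the enforcement problem being discussed here.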
I think the key problem with a lot of the models right now is that they were developed for "research", without the rights holders having the option to opt out when the models were switched to for-profit. The portfolio and gallery websites from which the bulk of the artwork came didn't even have opt-out options until a couple of months ago. Artists were therefore treated as having opted in to their work being used commercially because they were never presented with the option to opt out.
So at the bare minimum, a mechanism needs to be provided for retroactively removing works that would have been opted out of commercial usage if the option had been available and the rights holders had been informed about the commercial intentions of the project. I would favour a complete rebuild of the models that only draws from works that are either in the public domain or whose rights holders have explicitly opted in to their work being used for commercial models.
Basically, you can't deny rights holders an ability to opt out, and then say "hey, it's not our fault that you didn't opt out, now we can use your stuff to profit ourselves".
Common sense would surely say that becoming a for-profit company or whatever they did would mean they’ve breached that law. I assume they figured out a way around it or I’ve misunderstood something though.
I think they just blatantly ignored the law, to be honest. The UK’s copyright law is similar, where “fair dealing” allows use for research purposes (legal when the data scrapes were for research), but fair dealing explicitly does not apply when the purpose is commercial in nature and intended to compete with the rights holder. The common sense interpretation is that as soon as the AI models became commercial and were being promoted as a replacement for human-made work, they were intended to be a for profit competition to the rights holders.
If we get to a point where opt outs have full legal weight, I still expect the AI companies to use the data “for research” and then ship the model as a commercial enterprise without any attempt to strip out the works that were only valid to use for research.
So at the bare minimum, a mechanism needs to be provided for retroactively removing works that would have been opted out of commercial usage if the option had been available and the rights holders had been informed about the commercial intentions of the project.
If you do this, you limit access to AI tools exclusively to big companies. They already employ enough artists to create a useful AI generator, and they'll simply add a clause to the employment contract saying the artist agrees to their work being used in training. After a while, the only people who have access to reasonably good AI are those major corporations, and they'll leverage that to depress wages and control employees.
The WGA’s idea that the direct output of an AI is uncopyrightable doesn’t distort things so heavily in favor of Disney and Hasbro. It’s also more legally actionable. You don’t name Microsoft Word as the editor of a novel because you used spell check even if it corrected the spelling and grammar of every word. Naturally you don’t name generative AI as an author or creator.
Though the above argument only really applies when you have strong unions willing to fight for workers, and with how gutted they are in the US, I don’t think that will be the standard.
The solution to only big companies having access to AI, by employing enough artists to create a useful generator, isn't to deny all artists globally any ability to control their work, though. If all works can be scraped and added to commercial AI models without any payment to artists, you completely obliterate all artists except for the small handful working for Disney, Hasbro, and the like.
AI models actually require a constant input of new human-made artworks, because they cannot create anything new or unique themselves, and feeding an AI content produced by AI ends up with very distorted results pretty quickly. So it’s simply not viable to expect the 99% of artists who don’t work for big companies to continuously provide new works for AI models, for free, so that others can profit from them. Therefore, artists need either the ability to opt out or they need to be paid.
(The word “artist” here is used to refer to everyone in the creative industries. Writing and music are art just like paintings and drawings are.)
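The feedback-loop problem described above ("model collapse") is easy to sketch. Below is a toy, purely illustrative simulation, not any real training pipeline: the "model" just memorizes token frequencies, and floor-rounding during generation stands in for sampling noise, so rare tokens fall below the model's resolution and vanish from the next generation's training data:

```python
from collections import Counter

def train(corpus):
    """Toy 'model': just the token frequencies it saw."""
    return Counter(corpus)

def generate(model, size):
    """Emit tokens in proportion to learned frequency.

    Floor-rounding plays the role of sampling noise: any token whose
    expected count falls below 1 is simply never generated.
    """
    total = sum(model.values())
    out = []
    for tok, count in model.items():
        out.extend([tok] * (count * size // total))
    return out

# Zipf-ish corpus: a few common words, a long tail of rare ones.
corpus = []
for i, count in enumerate([32, 16, 8, 4, 2, 1, 1]):
    corpus.extend([f"w{i}"] * count)

vocab = [len(set(corpus))]
for generation in range(4):
    model = train(corpus)
    corpus = generate(model, size=32)  # next model trains on its own output
    vocab.append(len(set(corpus)))

print(vocab)  # the rare tail vanishes after one self-training round
```

In this deterministic toy the vocabulary drops once and then stabilizes; with real stochastic sampling the erosion of the tail continues every generation, which is why a constant supply of fresh human-made work matters.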
Unfortunately, copyright protection doesn’t extend that far. AI training is almost certainly fair use if it is copying at all. Styles and the like cannot be copyrighted, so even if an AI creates a work in the style of someone else, it is extremely unlikely that the output would be so similar as to be in violation of copyright. Though I do feel that it is unethical to intentionally try to reproduce someone’s style, especially if you’re doing it for commercial gain. But that is not illegal unless you try to say that you are that artist.
https://www.eff.org/deeplinks/2023/04/how-we-think-about-copyright-and-ai-art-0
Copyright law on this varies, actually! In the UK, “fair dealing” actually has an exclusion for using copyrighted material for the purpose of commercially competing with the creator. This also includes derivative works. This does therefore cover style to a certain extent, because works imitating a style of an artist are generally intended to commercially compete with them. From that perspective, taking an artist’s entire portfolio, feeding it into an AI, and producing work in their style at a lower price than the artist does (because an AI produces something in seconds which takes the artist weeks), is pretty obviously an attempt to compete with the artist commercially.
While people like to draw comparisons between AIs and humans copying another artist’s style, the big difference here is that a human artist needs to spend hundreds of hours learning to imitate another artist’s style, at the expense of developing their own style, while the original artist is also continually developing their style. It is bloody hard to imitate another human’s art style. But an AI can do it in minutes, and I haven’t yet seen any valid arguments for how that’s not intended to commercially compete with human artists on a massive scale.
True, I wrote this from a US law perspective, where that kind of behavior is expressly protected. US law is also written specifically to protect things like search engines and aggregators to prevent services like Google from getting sued for their blurbs, but it’s likely also a defense for AI.
Regardless of whether it should be illegal, I feel that AI training and use is legal under current US law. And as OpenAI is a US company, dragging it into UK courts and extracting payment from it would be difficult for all but the most monied artists.
For the moment, US companies do actually care what the UK courts and regulatory bodies say, because the trifecta of US-UK-EU is what tends to form a base of what the rest of the world decides. It’s why Microsoft have been so unhappy about the UK’s Competition and Markets Authority initially blocking the merger with Blizzard: even with the US and EU antitrust bodies agreeing to it, it did actually matter if the UK didn’t agree (I am so disappointed in the CMA finally capitulating). And some of the lawsuits against the AI companies are taking place in the UK courts, with no indications that the AI companies are refusing to engage. Obviously at this point it’s hard to say what the outcome will be, but the UK legal system does actually have enough clout globally that it won’t be a meaningless result.
Practically, you would have to separate the model architecture from the weights. The weights would be licensed as research-use only, while the architecture is the actual scientific contribution. Maybe add some instructions on how best to train the model.
The only problem is that you can't really prove whether someone retrained from the research weights or trained from scratch with randomized weights. Also, certain alterations to the architecture are possible, so only the "headless" models would be usable.
I think there's some research into detecting retraining, but I can imagine it's not foolproof.
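A toy illustration of why naive detection only catches the lazy cases: weights fine-tuned from a released checkpoint stay almost parallel to it, while a from-scratch run looks uncorrelated. Everything here (sizes, noise scales, thresholds) is made up for the sketch; real detection research is far more involved:

```python
import math
import random

def cosine(a, b):
    """Cosine similarity between two flat weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

random.seed(0)
# Stand-in for released research-only weights.
released = [random.gauss(0, 1) for _ in range(10_000)]

# Fine-tuning nudges weights slightly; training from scratch re-rolls them.
finetuned = [w + random.gauss(0, 0.05) for w in released]
scratch = [random.gauss(0, 1) for _ in range(10_000)]

print(round(cosine(released, finetuned), 3))  # close to 1.0
print(round(cosine(released, scratch), 3))    # close to 0.0
```

The obvious evasion is also visible here: train a bit longer, permute equivalent neurons, or distill into a different architecture, and the similarity signal disappears, which is why this can't be foolproof.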
As proofs of concept, the AI models are kind of interesting. I don't like the content they produce much, because it is just so utterly same-y, so I haven't yet seen anything that made me go "wow, that's amazing". But the actual architecture behind them is pretty cool.
But at this point, they’ve gone beyond researching an interesting idea into full on commercial enterprises. If we don’t have an effective means of retraining the existing models to remove the data that isn’t licenced for commercial use (which is most of it), then it seems the only ethical way to move forward would be to start again with more selective training data, including only what is commercially licenced. Now the research has been done in how to create these models, it should be quicker to build new ones with more ethically sourced training data.
The standard needs to be opt-in, not opt-out. You can’t take people’s stuff without their permission. Just because they didn’t contact you and tell you directly that you’re not allowed to take their lawn ornaments doesn’t make them free.
Why not? Copyright is a monopoly. Generally society benefits from having it as weak as possible.
This is like the beginning of a Hitchhiker’s Guide to the Galaxy, where they put the responsibility on the main character to go to the department of transportation basement and see that they had posted a notice that they’re going to destroy his house. No Google, you don’t get to dictate that people come to your dark pattern website and tell you you’re not allowed to use their content. Disapproval is implied until people OPT-IN! It’s a good thing Google changed their motto from Don’t Be Evil or we’d have quite the conundrum.
🤖 I’m a bot that provides automatic summaries for articles:
The company has called for Australian policymakers to promote “copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models in Australia on a broad and diverse range of data, while supporting workable opt-outs for entities that prefer their data not to be trained in using AI systems”.
The call for a fair use exception for AI systems is a view the company has expressed to the Australian government in the past, but the notion of an opt-out option for publishers is a new argument from Google.
Dr Kayleen Manwaring, a senior lecturer at UNSW Law and Justice, told Guardian Australia that copyright would be one of the big problems facing generative AI systems in the coming years.
“The general rule is that you need millions of data points to be able to produce useful outcomes … which means that there’s going to be copying, which is prima facie a breach of a whole lot of people’s copyright.”
“If you want to reproduce something that’s held by a copyright owner, you have to get their consent, not an opt out type of arrangement … what they’re suggesting is a wholesale revamp of the way that exceptions work.”
Toby Murray, associate professor at the University of Melbourne’s computing and information systems school, said Google’s proposal would put the onus on content creators to specify whether AI systems could absorb their content or not, but he indicated existing licensing schemes such as Creative Commons already allowed creators to mark how their works can be used.
Google is smoking that pack.