US Copyright, Patents, and Generative AI

There’s a lot of misinformation and misrepresentation of copyright and patent law when it comes to generative AI. In fact, the US Copyright Office has already flip-flopped on this issue, and the Chinese courts have come up with a completely different ruling, holding that AI-generated images can be copyrighted. It appears this could be as much a political war as a legal one.

A lot of the social media hyperbole is being fueled by fear and uncertainty. That’s not to say there isn’t a real problem with generative AI threatening people’s livelihoods or possibly violating copyright, but it’s worth knowing what one is talking about before heading off with pitchforks and torches.

I found this article on IPWatchdog to be informative about the actual legal arguments – but it’s important to remember that the jury is still out, and the US Copyright Office ruled exactly the opposite way on this issue just a year ago. First off, what does copyright protect (compared to a patent)?

The Supreme Court laid out the difference first in Baker v. Selden, and re-emphasized it a century later in Mazer v. Stein. “Unlike a patent, a copyright gives no exclusive right to the art disclosed; protection is given only to the expression of the idea—not the idea itself.” In this way, each type of intellectual property right exists in different types of creations, which arise in different ways and have different requirements for protection. “[C]opyright protects originality rather than novelty or invention,” which is the domain of patents, said the Court in Mazer.

Indeed, what the Court made clear in Feist v. Rural is that authorial works need to be original; that is, both independently created and “creative.” Other cases, such as Bleistein v. Donaldson, spoke of original expression as a “personal reaction upon nature,” where the author contributes “something recognizably his own,” per Alfred Bell.

So the question for copyright becomes: ‘Is AI creative?’ This is a tough question, because it’s not clear what creativity really is. However, that philosophical or neuroscientific point is not that important when it comes to law. What is important is the language previously used to describe what is protected.

The article’s author suggests that the emerging legal arguments treat the kind of ‘creativity’ covered by copyright as belonging to human activity. Neither the courts nor the US Copyright Office have so far found AI to be creative with respect to the wording of existing copyright law.

Whether that argument is valid and sticks is a whole other story; law is fickle and can change. It also doesn’t touch on the question of fair use of publicly displayed images, or the argument that AI might simply be using copyrighted work to learn techniques while making its own reactive/derivative works, which is something that art students do and the whole point of going to art school.

Either way, we’re likely to see the most important legal decision in decades, with profound repercussions for future generations.

China rules AI Generated art is copyrightable

In stark contrast to Western rulings (well, except some early ones) and China’s previous stance of tight control over generative AI, a Chinese court just awarded copyright protection to AI-generated images.

The case revolved around the generation of a pop-idol image, not the use of copyrighted images in training a generative AI model, which is the subject of a current US lawsuit brought by artists.

The argument is one we’ve been hearing already: because it was a human being who wrote the relevant parameters for the AI model and ultimately selected the image in question, the final output is directly generated from their intellectual input and “reflects the plaintiff’s personalized expression.”

It will be interesting to see how this goes.

Generative AI legal battles heat up

More developments in the generative AI copyright case brought by artists. The previous lawsuit has been amended and updated.

After a first round in which the judge rejected a few arguments, things have been tightened up a bit.

  1. New artists – including photographers and game artists – have joined the lawsuit.
  2. New arguments have been added:
    • In an effort to expand what is protected, the complaint makes the claim that even non-copyrighted works may be automatically eligible for copyright protection if they include the artist’s “distinctive mark,” such as a signature, which many works do contain.
    • AI companies that relied upon the widely used LAION-400M and LAION-5B datasets — which reference copyrighted works but contain only links to them and other metadata, and were made available for research purposes — would have had to download the actual images to train their models, thus making “unauthorized copies.”
    • The suit claims that the very architecture of diffusion models — in which an AI adds visual “noise” to an image in multiple steps, then tries to reverse the process to recover something close to the initial image — is itself designed to come as close as possible to replicating the initial training material. The lawsuit cites several papers about diffusion models and claims that they are simply ‘reconstructing the (possibly copyrighted) training set’.

This third point is likely the actual meat of the suit, but they haven’t spelled it out as clearly as I think they should have. To me, the questions that are really the crux of the matter are:

  1. Do large-scale models work by generating novel output, or do they just copy and interpolate between individual training examples?
  2. Is training on copyrighted art covered by fair use, or does it qualify as a copyright violation?

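For readers unfamiliar with how the “noising” architecture works, the forward process the suit describes can be sketched in a few lines of Python. This is a minimal illustration of the general diffusion technique, not any particular company’s implementation; the schedule values are common textbook defaults, assumed here for demonstration:

```python
import numpy as np

# A linear "noise schedule": alpha_bar decays from ~1 (image intact)
# to ~0 (pure noise) over T steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def add_noise(x0, t, rng=None):
    """Forward diffusion: blend the training image x0 with Gaussian noise.

    At step t the corrupted image is
        sqrt(alpha_bar[t]) * x0 + sqrt(1 - alpha_bar[t]) * noise,
    and the model is trained to predict (and so learn to remove) that noise.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

x0 = np.random.rand(8, 8)     # stand-in for a training image
x_early = add_noise(x0, 10)   # mostly still the original image
x_late = add_noise(x0, T - 1) # almost entirely noise
```

Generation then runs this process in reverse: starting from pure noise, the trained network iteratively removes the noise it predicts, which is why the complaint characterizes the architecture as an attempt to reconstruct something close to the training material.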
Even if generative AI loses all of these arguments, it doesn’t mean generative AI is going away. Models can still be trained on huge volumes of non-copyrighted images and data, or on data purchased and licensed for the purpose. Beyond that, companies have already been training models with data collected from their products’ use (data you give them for free by using services like Siri on the iPhone, Amazon Alexa, and Google) and with synthetically generated training data.

The Trolley problem is not helpful for autonomous vehicles


Determining what autonomous driving algorithms should do in difficult life-and-death situations is a real problem. Until now, many have likened it to the famous ‘trolley problem’:

There is a runaway trolley barreling down its tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them but you are standing in the train yard next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two (and only two) options:

  1. Do nothing, in which case the trolley will kill the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

The problem asks: which is the more ethical option? Or, more simply: what is the right thing to do?

Analysts have noted that variations of these “trolley problems” largely just highlight the difference between deontological and consequentialist ethical systems. Researchers, however, are finding that this distinction isn’t actually that useful for determining what autonomous driving algorithms should do.

Instead, they note that drivers have to make many more realistic moral decisions every day. Should I drive over the speed limit? Should I run a red light? Should I pull over for an ambulance?

For example, if someone is driving 20 miles over the speed limit and runs a red light, then they may find themselves in a situation where they have to either swerve into traffic or get into a collision. There’s currently very little data in the literature on how we make moral judgments about the decisions drivers make in everyday situations.

Researchers developed a series of experiments designed to collect data on how humans make moral judgments about decisions that people make in low-stakes traffic situations, and from that developed the Agent Deed Consequence (ADC) model.

The approach is highly utilitarian: it side-steps complex ethical problems by simply collecting data on what average people consider ethical. The early ADC research claims that the judgments of average people and ethics experts very often match, even though the former were not trained in ethics. This more utilitarian approach may be sufficient for some tasks, but it inherently risks larger issues: “If everyone jumped off a bridge, would you?” This is often referred to as the bandwagon fallacy, and deferring decisions to the masses is something even Socrates argued against in Plato’s Republic.

David Attenborough AI narrates your life

Developer Charlie Holtz combined GPT-4 Vision (commonly called GPT-4V) and ElevenLabs voice-cloning technology to create an unauthorized AI version of the famous naturalist David Attenborough narrating your every move on camera.

How to get hit by a self-driving car

Daniel Coppen recently teamed up with media artist Tomo Kihara to develop “How (not) to get hit by a self-driving car,” a street-based game designed to improve pedestrian detection in autonomous vehicles by challenging players to avoid being recognized by an object-detection algorithm.

Participants use creative maneuvers like cartwheels and disguises to test and potentially enhance the AI’s ability to identify pedestrians in varied and unpredictable scenarios. The game’s creators hope to conduct a global tour to gather diverse data, aiming to share it with researchers and self-driving car developers for better training of these systems.
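The game’s pass/fail logic boils down to thresholding a detector’s output. Here is a minimal sketch of that idea; the class id follows the common COCO labeling convention used by many off-the-shelf detectors, and the function name and threshold are assumptions for illustration:

```python
PERSON_CLASS_ID = 1  # "person" in the COCO label set used by many detectors

def person_detected(labels, scores, threshold=0.5):
    """Return True if any detection is a person with confidence >= threshold."""
    return any(label == PERSON_CLASS_ID and score >= threshold
               for label, score in zip(labels, scores))

# Example detector output: a confident person plus a low-confidence car.
print(person_detected([1, 3], [0.92, 0.40]))  # True: the player was spotted
print(person_detected([1], [0.30]))           # False: the cartwheel worked
```

In the real game the labels and scores would come from a live object-detection model running on a camera feed; the interesting data is precisely the cases where a human is present but the check returns False.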

AI Hank Williams sings new songs – Like Straight Out of Compton

If you don’t think AI is changing things at a fundamental level, witness what is possible with voice models trained by ordinary people like ThereIRuinedIt:

Or Johnny Cash singing Barbie Girl

How? There are a number of different ways you can try this yourself – but the list grows daily at this point, so do some googling and see what’s available.