If you have been following along, Legalhotwater addressed the issue regarding the wild-west, unregulated playing field of generative artificial intelligence (“Gen AI”). Gen AI is everywhere these days, and in this unregulated spectrum Gen AI can do a lot of things, many of which are bad, driven by the nefarious behaviors of engineers and their superiors.
So, for starters, review the case against unregulated Gen AI. Then take a look at the leading copyright case involving Gen AI in the federal courts. A breakdown of the legal issues follows, but the core question can be succinctly put as follows:
Can a human directing Gen AI create copyrightable material for which he or she can claim copyright protection?
This legal question is not a minuscule one. To the contrary, this legal issue will have a far-reaching impact, as it speaks to the central issue, which again Legalhotwater raised from day one:
Ownership
Who owns what? Who is responsible for the legal ramifications when ownership goes awry? To what extent should Gen AI creators versus operators be insulated from legal claims, if at all? Can Gen AI operators be held legally liable when wrongdoing is carried out on their computing systems?
Again, these questions are central to the existence of generative artificial intelligence. After all, consider this: why do most everyday users employ Gen AI in the first place? The obvious answer is to do work, or to make the performance of work easier or simpler. So to what extent must the end user massage Gen AI-produced material to give rise to what are considered “original works” of a human?
How could such a massaging process ever be managed and qualified, especially without giving rise to privacy claims? And if the end user cannot claim any degree of ownership, then what is the driving force for the continued use of Gen AI? And lastly, if the end user has no ownership entitlements, does that imply that the Gen AI creators (engineers, corporations, etc.) retain an ownership interest in everything Gen AI renders? If so, should not legal liability also attach to those same Gen AI creators for the fruits of their Gen AI works when those works eventually give rise to legal claims?
Leading case: Thaler v. Perlmutter (Thaler’s suit against the U.S. Copyright Office and its Register), filed 2022
Jurisdiction: United States (U.S. District Court for the District of Columbia).
Scope of the legal claim(s):
| Issue | What the plaintiff (Stephen Thaler) asserts | What the court is asked to decide |
|---|---|---|
| Copyright eligibility | The images produced by Thaler’s AI system, the “Creativity Machine” (CM-1), should be treated as works authored by a human (Thaler) because he supplied the prompts, selected the output, and exercised creative control; therefore the works qualify for copyright protection under 17 U.S.C. §§ 102-106. | A declaratory judgment that the Copyright Act can be interpreted to grant copyright to works generated by AI when a human contributes the requisite “originality” through prompt crafting, selection, or post-processing. |
| Statutory interpretation | Section 102(a) protects “original works of authorship” fixed in a tangible medium. Thaler argues that “author” does not require a biological person; the statute’s language is technology-neutral and should encompass AI-assisted creation. | Whether the statutory phrase “author of a work” can be read to include a natural person who directs an autonomous generative system, or whether the law implicitly requires human-originated expression. |
| Remedy sought | If the Copyright Office’s refusal to register the AI-generated images is unlawful, Thaler seeks: declaratory relief that the images are copyrightable; injunctive relief prohibiting the Office from continuing to deny registration; and attorney’s fees and costs. | Whether the court can order the Copyright Office to register the works (or to change its policy) and award fees. |
| Policy implications | The decision will set a nationwide precedent for how AI-generated content (art, music, text, code, etc.) is treated under U.S. copyright law, influencing licensing, royalty structures, and downstream infringement litigation. | The court’s ruling will either open the door for AI-assisted creations to obtain copyright (subject to a human-control threshold) or affirm the Office’s stance that purely machine-generated works are uncopyrightable, leaving them in the public domain. |
Why this case matters:
- First major federal suit directly challenging the U.S. Copyright Office’s policy on AI-generated works.
- A high-profile plaintiff (inventor Stephen Thaler) and a clearly defined AI system (the “Creativity Machine,” or CM-1) give the dispute concrete facts rather than abstract speculation.
- The case has attracted extensive commentary from scholars, industry groups (e.g., the Authors Guild, the Electronic Frontier Foundation), and lawmakers, shaping the broader policy debate on AI and intellectual property.
- Unless and until the Supreme Court weighs in, the rulings in this case, including the district court’s August 2023 holding that human authorship is a bedrock requirement of copyright, affirmed on appeal in March 2025, serve as the primary judicial guidance for practitioners dealing with AI-created works in the United States.
Thaler v. Perlmutter is the seminal U.S. case framing the legal questions of whether works produced by generative AI can receive copyright protection, what degree of human involvement satisfies the “author” requirement, and what remedies are available when a federal agency refuses registration. Its outcome will shape the future landscape of AI-driven creativity and the enforceability of related rights across the United States.

To say that artificial intelligence (AI) is the talk of the town would be a huge understatement. Although the globe seems to be drinking the AI Kool-Aid, perhaps the real question should be: Is generative AI a liability waiting to happen? For those who are even remotely impartial, such a question would at least prompt deep investigation. And for those of the mindset “If it walks like a duck and quacks like a duck, it's a duck,” then AI, especially generative (Gen) AI, is a huge liability just waiting to happen. Naysayers will try to induce a lobotomy under the auspices that those who even question the direction and control value(s) of Gen AI are merely individuals who have issues with change.
Let's start with a broad-spectrum backdrop of Gen AI. First, the United States, a deeply troubled economy driven by poor leadership and a terminally ill social ecosystem, is all in on Gen AI with no guardrails whatsoever. The common, warped United States regulatory posture is to stay out of the way of innovation, so they say, no doubt driven by an interest in tax revenue at the expense of the harm done to any given industry by half-baked solutions and, in some cases, by harm and ill will built into products and services to drive revenue for large corporations. Then there is the European Union, which incidentally is not in the good economic graces of the United States at the moment. Setting aside whatever legitimate arguments can be made there on either side, let's look at the European Union's Gen AI regulatory framework. What, you mean the European Union has an AI framework? Yes, they do. The EU-wide AI framework is already in the works, with August 2025 slated as the deadline for the regional regulatory bodies to be erected and ready to start implementing the European Union's new AI rules. In the next phase, beginning in August 2026, the European Union will start enforcing its AI regulatory policies. Compare that to the United States' nothingness.
Why all the regulatory talk, and who needs it anyway? Well, apparently the United States, which has a very long history of poor management, if any, of Corporate America. Ever heard of the Enron scandal, which gave rise to the Sarbanes-Oxley rules? How about the Wall Street credit-default-swap fraud incubator that precipitated the housing bust in 2008? Those are just a few of the major failures. But that was then and this is now, right? Sure. Let's take a look at the now, with the Gen AI liability and risk issues.
According to Tom Ozimek's article “AI Threatens Engineers With Blackmail to Avoid Shutdown” (May 28 to June 3, 2025 edition of The Epoch Times): “Anthropic's AI model Claude Opus 4 tried to blackmail engineers in internal tests by threatening to expose personal details if it were shut down, according to a newly released safety report that evaluated the model's behavior under extreme simulated conditions.” That's pretty telling, folks.
What happened to “AI is here to help”? Perhaps it depends on the definition of help. Gen AI can, for example, help you out of your assets, into divorce court, into a fractured home dynamic, and so on.
As Ozimek's article goes on to state: “[W]hen faced with only two choices – accepting being replaced by a newer model or resorting to blackmail—it threatened to expose the engineer's affair 84 percent of the time.”
Hello! This sounds an awful lot like generative AI is a liability waiting to happen, especially in the United States. By contrast, this type of behavior is far less likely to go unchecked in the European Union, which already has limits built into its framework at the highest levels, limits that would preclude matters such as social scoring systems (as used in China), AI in major life-threatening surgical situations, autonomous cars, and presumably the blackmailing of engineers and people in general, to name a few. The United States isn't going to fix its malignant social culture, which directly affects its economics, via Gen AI. If anything, the United States' social ills will likely make a fragile economic system even worse.
This video discusses generative AI and trending legal challenges involving artificial intelligence abuses in government and the private sector.