Just as a leopard doesn’t change its spots, Google and Meta haven’t changed their ways. Despite mounting legal threats and public backlash, both big tech platforms continue to behave as if rules don’t apply to them.
New evidence has emerged to underscore that Google’s original unofficial motto of “Don’t be evil” was never really its true North Star. Instead, it was a smokescreen for big tech’s naked ambitions. Meta’s early motto—“Move fast and break things”—may have been more honest, but the honesty makes it even more damning. As it turns out, the broken things weren’t just outdated norms or sluggish competitors. They were the foundations of fair competition, user privacy, democratic discourse, and now, copyright law. The damage isn’t merely collateral; it is strategic.
Big tech’s anticompetitive behavior enters its AI era
Now we’re seeing a similar pattern unfold with generative AI. In Kadrey v. Meta, evidence unsealed early this year suggests that Meta executives, including Mark Zuckerberg, chose to pirate copyrighted content to train the company’s LLaMA AI model. It was revealed that Meta initially explored licensing but opted instead to download pirated content via BitTorrent from LibGen, reasoning that doing things the legal way would take too long.
Worse, the company allegedly stripped copyright management information from the files to cover its tracks. Clearly, they’re following the motto of moving fast and breaking things. This time around, they seem intent on breaking copyright law. Given Meta’s long track record, I’m not sure what is more surprising: the planning of such a sophisticated heist or the ham-handed cover-up. Either way, they graciously documented it all in email.
Meanwhile, over in Mountain View, Google has once again leveraged its search dominance to take traffic and revenue from publishers. In May, Google launched AI Mode, which scrapes and summarizes publishers’ original content to give users the answer without needing to click through, thereby stripping away the incentives that keep publishers producing that content in the first place.
In a bit of stunning bravado, Google rolled out AI Mode just 48 hours before closing arguments in the remedies phase of the Google Search trial, where the evidence clearly shows that Google abused its market power in search to maintain its significant advantages in crawling, clicks, and query data, all of which are paramount in the AI era. Google claims publishers can opt out. However, they can only do so by removing themselves from search entirely, which is no choice at all when the company handles more than 95% of mobile queries. Google’s unauthorized use of copyrighted content to create a substitutive product has, to no one’s surprise, led to a massive downturn in traffic to publisher sites.

Simultaneously, Google announced that Gemini will soon be on by default for consumers, collecting data about their activities. This is an oft-used strategy by Google: tune the defaults for maximum data collection, knowing full well that consumers won’t know about the settings or take the time to shut them off.
The courts push back
However, despite big tech’s brazen and predictable pattern of brutish behavior, the legal system may be starting to catch up with the platforms’ anticompetitive tactics. Google has been found liable for violating antitrust law in both the search and ad tech markets. And at least in the search case, the Court has been very focused on ensuring AI becomes a competitive marketplace rather than the fruit of more Google abuses. In addition, we’re starting to get further clarity on how copyright law applies in this new digital age of AI.
In Thomson Reuters v. Ross Intelligence, U.S. District Court Judge Stephanos Bibas ruled that Ross infringed copyright by using Westlaw’s headnotes to train an AI competitor, despite Ross’ claims of fair use. Initially, Ross reached out to Thomson Reuters to license the content but ultimately opted to acquire the Westlaw content from a third party, LegalEase (a fact pattern that sounds eerily similar to Kadrey v. Meta).
Judge Bibas rejected all of Ross’ defenses, stating that innocent infringement, copyright misuse, merger defense, scenes à faire defense, and fair use did not apply. On fair use, Judge Bibas eloquently analyzed the four established factors: the use’s purpose and character; the copyrighted work’s nature; how much of the work was used and how substantial a part it was relative to the copyrighted work’s whole; and how Ross’s use affected the copyrighted work’s value or potential market.
On the fourth factor, Judge Bibas found that Ross “meant to compete with Westlaw by developing a market substitute.” He wrote that this factor is “undoubtedly the single most important element of fair use.” That seems like an important ruling in light of the way Google’s AI Mode trains on and serves as a substitute for publishers’ original content.
In April, U.S. District Court Judge Sidney Stein rejected OpenAI and Microsoft’s motion to dismiss, thereby allowing all of the copyright and trademark dilution claims in The New York Times’ suit to proceed. While the bar is admittedly lower for a motion to dismiss, Judge Stein noted “that plaintiffs have plausibly alleged the existence of third-party end-user infringement and that defendants knew or had reason to know of that infringement.”
Then, in May, the U.S. Copyright Office released a report on AI training and fair use. It concluded that using massive troves of copyrighted content to generate commercial AI outputs likely fails fair use, especially when done through illegal means. The report also notes that “effective licensing options can ensure that innovation continues to advance without undermining intellectual property rights.” The Copyright Office rightly recognized that creative works are not mere “data” to be harvested, but expressions of human authorship protected by the Constitution and enshrined in U.S. copyright law.
From slogans to standards
So, what does this mean? For one, courts are rejecting the Silicon Valley myth that fair use lets AI companies take whatever they want. Licensing isn’t just viable; it’s required. Congress should pay attention.
Although there will inevitably be bumps along the road as fair use analysis is unique to each case, these rulings act as a compass to where things are headed. They send important signals to big tech companies with a history of anticompetitive behavior: don’t be evil or you may be held liable. The old playbook—take first, ask questions never—isn’t going to work in this new AI era. It’s time for a better North Star: accountability, transparency, and fair competition.