Will the Rule of Law Survive the Age of AI?
Artificial intelligence has already made its way into courtrooms and law firms, but the coming wave of artificial superintelligence will challenge the very foundations of our legal system. Unlike today's models, which still depend on human oversight, superintelligent systems could reason, argue, and even craft legal strategies at a level that rivals or exceeds that of the best human lawyers. The law is not ready for this, and recent cases show just how fragile our existing frameworks have become.
Copyright has been the first battlefield. In Bartz v. Anthropic, Judge William Alsup ruled that while training AI on legally acquired works might qualify as fair use, building a library of pirated books is not protected. Anthropic's $1.5 billion settlement, which included destroying the illicit datasets, revealed the extraordinary scale of risk companies face when they cut corners on training data. By contrast, Thomson Reuters v. Ross Intelligence made clear that copying curated Westlaw content to build a competing AI legal research tool was not fair use. Together, these rulings suggest courts may tolerate AI that learns from materials in transformative, research-oriented ways, but they will resist AI tools that substitute for the value created by human labor. Yet this patchwork approach leaves unanswered how the law should treat superintelligent systems that generate wholly original works of a creativity no human author could match.
The ethical challenges are equally stark. In Mata v. Avianca and Johnson v. Dunn, attorneys were sanctioned for filing briefs filled with hallucinated citations generated by AI. Judges reminded lawyers that they, not the machine, bear ultimate responsibility. But this logic depends on the assumption that humans can realistically verify AI's outputs. What happens when superintelligent systems produce doctrinal syntheses or arguments so intricate that even experts cannot fully evaluate them? Our professional responsibility rules rest on the premise of human judgment, but that premise may not hold in an era of ASI.
Identity and personhood are also at stake. In Lehrman & Sage v. Lovo, voice actors sued over unauthorized AI voice cloning, and a federal court allowed their core claims to proceed. The case showed how AI can exploit identity in ways that blur the line between intellectual property and personal rights. As systems advance, we will see not just cloned voices but entire digital performances and synthetic identities that mimic real people with uncanny precision. The law must decide whether identity itself deserves a new form of protection, rather than relying on the fragmented doctrines of copyright and misappropriation.
The gaps exposed by these cases point to a deeper crisis. We do not yet know how to assign responsibility when AI makes harmful decisions: should blame fall on the developer, the lawyer who used the tool, or the system itself? We lack standards for verifying AI-generated arguments, even as hallucination remains a constant risk. In most courts, lawyers are not required to disclose when they use AI, raising fairness concerns when one party has access to advanced systems and the other does not. And perhaps most troubling, the use of AI in bail decisions, sentencing, and evidence assessment threatens core constitutional protections so long as these systems remain opaque and unregulated.
The law has always adapted to new technologies, from the printing press to the internet. But artificial superintelligence is different. It is not simply a new tool; it is a potential new actor in the legal system. If courts and policymakers continue to respond case by case, we risk eroding public trust in the justice system before coherent rules can be built. The legal profession must act now to craft frameworks of accountability, transparency, and oversight that match the scale of this transformation. The stakes are not only intellectual property or courtroom ethics. They are whether the rule of law itself can survive the age of superintelligence.