March 06, 2026 | The AG Line

States Do Not Need an AI Law to Enforce AI

How attorneys general are applying existing civil rights, privacy, consumer protection, and antitrust laws to artificial intelligence

On February 25, 2026, Connecticut Attorney General William Tong issued a memorandum explaining how existing Connecticut laws apply to artificial intelligence, a reminder that states do not need AI-specific legislation to bring enforcement actions involving AI.

Rather than proposing new regulatory frameworks, the memorandum walks through the civil rights, privacy, consumer protection, and antitrust statutes the Office already enforces and explains how those legal frameworks apply when AI systems are used in commercial decision making.

Artificial intelligence does not create a regulatory vacuum. When AI systems influence outcomes in employment, credit, housing, healthcare, pricing, or consumer transactions, the same legal standards still apply.

Connecticut is simply the latest state to say explicitly what many Attorneys General are already acting on. Existing law already reaches artificial intelligence.

How the States See It

The Connecticut memorandum catalogs the primary statutes the Office can use to enforce existing law in AI-related contexts. These include the Connecticut Unfair Trade Practices Act (CUTPA), the Connecticut Data Privacy Act (CTDPA), the Connecticut Antitrust Act, and the state’s civil rights and antidiscrimination statutes.

The legal tools themselves are not new. What matters is the enforcement posture.

Across states, AI is increasingly framed as:

  • a delivery mechanism for conduct regulators already police;
  • a force multiplier capable of scaling discrimination, deception, or coordinated pricing; and
  • a business tool that must operate inside existing compliance frameworks rather than outside them.

The memorandum also cautions businesses to exercise care when deploying AI in commercial decision making. That language signals something practical. States expect companies to integrate AI into existing legal risk assessments rather than treat AI systems as an exception to established regulatory regimes.

Federal enforcement messaging has reinforced the same theme. In remarks at Oxford, former Deputy Attorney General Lisa O. Monaco observed that long-standing legal principles apply regardless of technological medium: discrimination using AI remains discrimination, price fixing using AI remains price fixing, and identity theft using AI remains identity theft.

We are now under a new administration, and federal AI policy is evolving. We have written separately about the Trump administration’s executive orders on artificial intelligence and what they signal for the federal regulatory posture.

But state authority does not depend on federal rulemaking. State Attorneys General enforce their own statutes, and those statutes already reach the types of conduct AI systems can influence.

Massachusetts officials have delivered a similar message publicly, emphasizing that existing consumer protection, civil rights, and privacy laws already apply when artificial intelligence systems are deployed in the marketplace.

From Practice to Pleading

This is where the risk assessment becomes real.

If AI governance lives only within product, engineering, or innovation teams, the legal framing will eventually be done somewhere else. And it will not be framed as an AI issue. It will be framed as a traditional cause of action.

The Connecticut memorandum effectively provides a roadmap for how those theories would appear in a complaint.

1. A discrimination case

If an AI system influences hiring decisions, credit approvals, housing access, insurance underwriting, healthcare eligibility, or other regulated outcomes, the claim will likely arise under existing civil rights statutes. The allegation will not be that the company deployed artificial intelligence. The allegation will be unlawful discrimination.

2. A privacy case

If personal data is collected, retained, profiled, or used in ways that fail to honor access, deletion, portability, or opt-out rights, the claim will arise under state privacy law such as the Connecticut Data Privacy Act.

3. A deception case

If a company overstates the capabilities of an AI system, relies on unreliable outputs, or deploys AI-generated content that misleads consumers, the theory will be an unfair or deceptive practice under consumer protection law.

A recent example illustrates how this translation works in practice. In 2024, the Texas Attorney General reached a settlement in what the Office described as the first healthcare generative AI enforcement investigation. The allegations focused on misleading claims about the accuracy and reliability of an AI-driven product, and the enforcement vehicle was the Texas Deceptive Trade Practices Act rather than a newly enacted AI statute.

4. A competition case

If algorithmic tools are used to stabilize pricing, coordinate market behavior, or suppress competition, the theory will arise under state antitrust statutes. The conduct may involve AI-assisted decision making, but the legal claim will still be restraint of trade.

In that translation, the novelty of the technology quickly fades. What remains is conduct mapped onto statutory frameworks that have existed for decades.

Artificial intelligence becomes legally unremarkable.

That translation is predictable, which means it can also be anticipated.

Monday Morning Checklist

If your company is implementing or expanding AI systems, pressure test the legal layer before the operational model hardens.

1. Map AI use cases to regulated activities

  • Where the system influences employment, credit, housing, pricing, healthcare, or consumer transactions
  • Which statutory regimes already govern those activities

2. Audit data inputs and downstream effects

  • What personal data is being ingested, profiled, or retained
  • Whether access, deletion, and opt-out rights function in practice rather than only on paper

3. Stress test the outward-facing narrative

  • How the system would be described in a regulatory complaint
  • Whether claims about accuracy, fairness, or pricing logic are fully supportable

If that exercise produces discomfort, it is doing its job.

Closing Thought

For companies deploying artificial intelligence, the key question is whether governance structures are integrated early enough to withstand being translated into a discrimination, privacy, deception, or antitrust claim.

We are working with clients to align product, data, compliance, and legal teams so that AI deployment is defensible from day one rather than reconstructed under investigative pressure. Reach out to us by email for more information on how we can help.


This communication is not intended to create or constitute, nor does it create or constitute, an attorney-client or any other legal relationship. No statement in this communication constitutes legal advice nor should any communication herein be construed, relied upon, or interpreted as legal advice. This communication is for general information purposes only regarding recent legal developments of interest, and is not a substitute for legal counsel on any subject matter. No reader should act or refrain from acting on the basis of any information included herein without seeking appropriate legal advice on the particular facts and circumstances affecting that reader. For more information, visit www.buchalter.com.