March 18, 2026|Publications

Artificial Intelligence Issues for Transactional Law Practice

By Roy Hadley, Mitzi L. Hill, Philip Nulud, Jonathan B. Wilson

Artificial intelligence is now affecting transactional law practice from multiple directions at once. On one level, AI is changing how lawyers themselves perform work: drafting contracts, reviewing diligence materials, summarizing records, analyzing risk, and organizing deal process. On another level, AI is changing the underlying businesses and assets that are the subject of transactions. Buyers, sellers, licensors, customers, vendors, investors, and counsel increasingly must evaluate whether AI creates value, introduces hidden risk, or both. For transactional lawyers, AI is no longer a niche technology topic. It is becoming a recurring issue in contract drafting, due diligence, deal structuring, disclosure, representations and warranties, and post-closing integration planning.

That shift matters because AI rarely presents only one legal issue. A single AI-enabled product or business practice can implicate intellectual property ownership, data rights, privacy compliance, cybersecurity exposure, product liability, consumer protection, employment law, and privilege concerns at the same time. The transactional lawyer therefore must approach AI not merely as a technical feature, but as a cross-disciplinary risk and value driver. In some deals, AI may justify premium valuation because it enhances scalability, efficiency, or differentiation. In others, AI may undermine value because the target lacks rights to its training data, depends too heavily on third-party models, cannot substantiate ownership of key outputs, or has deployed AI in ways that trigger emerging regulatory obligations.

This article surveys several of the principal AI issues confronting transactional lawyers today. The goal is practical rather than exhaustive: to identify the questions transactional lawyers should now be asking, the contractual provisions they should consider, and the diligence areas they should no longer treat as optional.

Generative AI and Intellectual Property

The ownership of intellectual property created by generative AI is an evolving landscape. The main types of intellectual property—copyright, trademark, and patents—each have different purposes and requirements, and are affected by generative AI differently.

Copyrights

A copyright protects an original work of authorship fixed in a tangible form or medium. Manuscripts, pictures, songs, videos, and even source code are all types of original works of authorship that might be protectable under a copyright theory. Copyright law, however, will only protect an original work authored by a human being. U.S. courts and the U.S. Copyright Office have repeatedly held that works created by non-humans are not copyrightable. Most recently, the D.C. Circuit held that “humanity [is] a necessary condition for authorship.” Thaler v. Perlmutter, 130 F.4th 1039 (D.C. Cir. 2025). Therefore, AI-generated works are not copyright protectable at this time.

While this states the law as it stands today, copyright law may develop differently over time as it is applied to concepts like vibe coding and other AI-driven software development. AI-generated source code is not eligible for copyright protection, but might copyright law permit some level of protection for an original work jointly created by human and non-human co-authors? How should copyright law treat source code that was initially authored by generative AI but subsequently finished or modified by a human? And on a practical level, how can subsequent reviewers distinguish code that is entirely the product of generative AI from code authored jointly by human and AI collaborators?

Legal academics might enjoy the prospect of future legal developments in this area, but practitioners need guidance now on how to address these issues in present-day contracts and negotiations.  

Trademarks

A trademark identifies the source of goods or services and serves as the brand for that source. Brand names like Nike, and logos such as the Nike “Swoosh,” are both trademarks. Unlike copyright law, trademark law does not refuse protection to a mark merely because it was created by generative AI. Marketplace participants may use generative AI to create logos and brands and be confident that their trademarks will be judged equally alongside human creations.

Patents

Patents are creatures of statute. In the U.S., a patent is a government-issued intellectual property right that gives the inventor the exclusive right to practice the claims taught in the patent. To be patentable, an invention must be novel, non-obvious, and useful.

As with copyright law, U.S. courts have held that a novel invention is eligible for patent protection only if the inventor named in the patent application is human. The Court of Appeals for the Federal Circuit reached this conclusion in Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022).

Consistent with that conclusion, the U.S. Patent and Trademark Office in late 2025 issued Inventorship Guidance addressing AI-assisted inventions. The guidance takes the position that AI-assisted inventions are not categorically unpatentable. Instead, an inventor may seek patent protection for an invention to which a human inventor made a significant contribution in collaboration with generative AI. Under the guidance, a human inventor must (1) contribute significantly to the conception of the claimed invention, (2) make a contribution that is not insignificant in quality relative to the full invention, and (3) do more than merely explain well-known concepts or recognize AI output. The USPTO has followed this guidance, rejecting applications where the human contribution was insignificant or did nothing more than explain or amplify a solution created by generative AI.

For transactional lawyers, this approach to patents creates challenges in service contracts and in M&A due diligence. 

Generative AI and Data Privacy

Although data privacy is not a form of intellectual property, AI and privacy law are intertwined in increasingly significant ways. 

As an initial matter, it is important to understand that entering “personal information” into an AI may be covered by a consumer privacy law such as the EU’s General Data Protection Regulation or the California Consumer Privacy Act. If a user is under a legal duty to protect personal information, exposing it to an AI could run afoul of those privacy laws. Much as in the copyright setting, a generative AI will ingest personal data, commingle it with other data, and make it part of its general knowledge base. Exposing personal information to an AI can make it possible for the AI to copy and use the personal information in ways not permitted by the data subject. For example, a healthcare worker with access to protected health information (“PHI”) pertaining to a patient potentially exposes that PHI to public access by entering the PHI into an AI.

Businesses that want to train AIs using personal information gathered from users, or that want to enter personal information in the course of using an AI to generate custom content, will need to obtain appropriate releases from the data subjects and implement appropriate safeguards. A data subject release for “research purposes” might not extend to exposing a data subject’s personal information to the public through an AI.

Generative AI and the Attorney-Client Privilege

At least one court has ruled on the application of legal privilege to AI prompts.

In United States v. Heppner, No. 1:25-cr-00503-JSR, the court ruled that a client’s prompts to a public, non-secure AI were not covered by attorney-client privilege.  Although the case involved a client’s research (not a lawyer’s), the ruling is a reminder that use of an AI can have privilege implications, depending on the information entered.  Lawyers therefore should exercise caution, and potentially engage in diligence about the nature of an AI before using it in a practice setting, especially if entering client information. 

Lawyers should also warn clients against using AI in connection with threatened or ongoing litigation. Courts will one day address myriad fact patterns involving the exposure of privileged information and documentation to an AI. Consider, for example, what might result if a client took a privileged memorandum and exposed it to an AI in order to ask for a review of the attorney’s advice. Would the privilege that would otherwise have protected the attorney’s advice have been waived by the client?

Another example could involve a privileged memorandum that contained confidential or trade secret information.  If a client exposed the memorandum to an AI, would the confidential or trade secret information contained in the memorandum lose its protected status?

Practical Applications to Transactional Practice

These IP and related considerations regarding generative AI suggest several practical steps that transactional lawyers should take in developing contracts, both for the development or delivery of services and in the M&A context.

            Application to Service Contracts

Service contracts include contracts for consulting services, software development, software support, and related services.  When purchasing such services, the customer will rely on the service contract to ensure that the services are delivered on time, that the services accomplish the goal intended, and that the customer has the right to use (or the right to own, when applicable) the deliverables promised in the contract.

If a vendor under a service contract uses generative AI to produce deliverables that contain software source code, those deliverables will not be eligible for copyright protection. If the ability to own the copyright in software deliverables matters to the customer, the customer’s counsel should take care in drafting the service contract to (a) require the vendor to represent whether it will use generative AI in creating deliverables and, if so, how, (b) where copyright ownership is essential, prohibit the vendor from using generative AI in producing the deliverables, and (c) if applicable, require the vendor to produce documentation evidencing that deliverables were produced by human authors, so that the customer can pursue and protect its copyright claims in those deliverables.

If the service contract requires the vendor to use or have access to confidential personal information, the contract should ensure that the vendor’s use does not involve exposure to an AI unless the data subjects have permitted such exposure.

            Application to M&A and Due Diligence

M&A transactions, whether structured as an asset purchase, stock purchase, or merger, usually involve both legal due diligence as well as contractual representations that rely on the conclusions reached during due diligence.

AI should now be a routine topic in acquisition due diligence. In many businesses, AI is no longer peripheral. It may be embedded in products and services, used internally to support decision-making, incorporated into software development workflows, trained on customer or employee data, or central to the target’s differentiation narrative. For that reason, a buyer should not limit diligence to asking whether the target “uses AI.” The more important questions are where AI is used, how it is used, what rights support that use, what dependencies it creates, and what risks it introduces.

1. Value and Exposure

A disciplined AI diligence review should examine both value and exposure. On the value side, the buyer may want to determine whether the target truly has proprietary AI capabilities, whether those capabilities are technically and legally defensible, whether the target owns or validly licenses the relevant models, code, datasets, and outputs, and whether the target’s personnel and records can support the company’s claims. On the exposure side, the buyer should assess risks involving IP ownership, data rights, privacy compliance, algorithmic discrimination, regulatory obligations, cybersecurity, vendor lock-in, and product performance. In some transactions, these issues may affect not only diligence conclusions but purchase price, earnout design, indemnity scope, escrow planning, and integration strategy.

An acquirer should begin by mapping all material AI use cases across the target. That includes not only external products marketed as AI-enabled, but also internal uses such as coding assistance, customer support automation, pricing, fraud detection, hiring, underwriting, compliance screening, sales enablement, forecasting, personalization, security operations, and executive analytics. The buyer should also determine whether AI adoption has been centralized and governed or whether employees have engaged in informal “shadow AI” use outside approved systems. Shadow use matters because it can create unrecorded data leakage, inconsistent outputs, undisclosed dependencies, and compliance gaps that are difficult to detect from contract schedules alone.

2. Understanding Your Data

The provenance of data is another core diligence issue. If the target trained models, fine-tuned third-party models, or developed AI-enabled functionality using structured or unstructured datasets, the buyer should examine whether the target had the legal right to use that data for those purposes. Counsel should ask where the data came from, what notices or consents governed its collection, whether contracts restricted reuse or model training, whether personal or sensitive data was involved, what retention and deletion rules applied, and what technical or organizational controls governed the process. Where personal information is involved, AI diligence must be coordinated with privacy diligence, because training or inference use may exceed what data subjects were told or what applicable law permits.

3. Vendors and Dependencies

Third-party dependency is equally important. Many companies describe their offerings as AI-powered while relying heavily on external model providers, APIs, cloud tools, or enterprise platforms. The target’s vendor and licensing contracts may restrict training, benchmarking, reverse engineering, output use, sublicensing, portability, or post-termination use. Those agreements may also contain change-of-control provisions, consent requirements, or pricing terms that become material at closing. A buyer therefore should determine whether the target’s AI capability is truly proprietary, partially dependent, or largely rented. That distinction may radically change how durable the business’s value really is.

4. Coding Practices

Where the target develops software, diligence should address AI-assisted coding practices. Counsel should ask what coding tools were used, what review and testing procedures existed, how code provenance was tracked, whether open-source obligations were triggered, and how human contributions were documented. If the target cannot separate heavily AI-generated material from conventionally authored code, the buyer may face uncertainty around copyright protection, infringement exposure, and reproducibility of the code base. Similarly, where patents are important, counsel should investigate the invention process closely enough to evaluate whether the facts support valid human inventorship under current law.

5. Governance

Governance is another major diligence area. A buyer should assess whether the target has policies, approval processes, documentation standards, testing protocols, escalation mechanisms, incident response plans, and board or management oversight related to AI. The absence of a formal AI governance structure may itself be a material diligence finding, particularly if the target uses AI in high-impact contexts such as employment, healthcare, credit, insurance, education, or consumer eligibility decisions. Emerging frameworks in the EU, Colorado, and California all point in the same direction: higher-risk AI use cases increasingly require disclosures, risk assessments, documented controls, and structured oversight.

Finally, diligence findings should inform the transaction documents. Depending on the target and the issues discovered, the buyer may seek AI-specific representations regarding ownership, data rights, model training practices, compliance with privacy and AI laws, sufficiency of disclosures, absence of algorithmic discrimination claims, validity of licenses, cybersecurity protections, and lack of governmental investigations. In higher-risk deals, buyers may also seek special indemnities, covenants to remediate identified issues, holdbacks, or conditions tied to consents or compliance milestones. The point is not merely to “spot” AI risk. It is to translate AI diligence into contractual protection.

Conclusion

Attorneys practicing in transactional matters should be aware of the ways AI is being used and how that usage can affect both routine service contracts and strategic M&A transactions. Contracts lawyers and M&A lawyers may not think of themselves as “AI lawyers,” but the pervasiveness of AI in the business world means that transactional lawyers cannot ignore the need to understand it. At the same time, in-house counsel and business executives who make legal decisions should consider how AI issues can present themselves in transactional law practice.


This communication is not intended to create or constitute, nor does it create or constitute, an attorney-client or any other legal relationship. No statement in this communication constitutes legal advice nor should any communication herein be construed, relied upon, or interpreted as legal advice. This communication is for general information purposes only regarding recent legal developments of interest, and is not a substitute for legal counsel on any subject matter. No reader should act or refrain from acting on the basis of any information included herein without seeking appropriate legal advice on the particular facts and circumstances affecting that reader. For more information, visit www.buchalter.com.