Who Is Liable When AI Goes Shopping? American Express Sets a New Precedent for Agentic Commerce


As autonomous AI agents begin to handle transactions on behalf of consumers, the financial world is facing a fundamental question: When a machine makes a purchase, who is legally and financially responsible for the outcome? With the rise of “agentic commerce,” card issuers are now under pressure to redefine authorization and liability frameworks before these systems reach mass adoption.

The Dawn of Agentic Commerce

The concept of AI-driven commerce transitioned from theory to reality in early 2025. With Apple integrating sophisticated agent-like intelligence into Siri for millions of users and Google weaving AI-driven shopping directly into its search ecosystem, autonomous systems are increasingly acting as intermediaries for consumer spending.

While this shift promises a frictionless, intent-driven shopping experience, it also introduces significant operational risks. History shows that every major evolution in payments—from the birth of e-commerce to the rise of mobile wallets—has provided new avenues for fraud. Agentic commerce is no different, but the speed at which AI operates compresses the window for detecting and mitigating these risks.

The American Express Precedent: A Safety Net for AI Errors

American Express has emerged as an early mover in addressing the liability gap. In 2026, the company launched an agentic commerce developer kit designed to create a “controlled environment” for AI transactions. Crucially, Amex committed to covering erroneous purchases made by registered AI agents operating on its network.

The Amex model relies on a multi-layered security approach:

  • Agent Registration: AI agents must be verified and issued specific payment credentials.
  • User Authentication: Cardholders must be authenticated before an agent is permitted to transact.
  • Trust Baseline: By covering errors, Amex aims to reduce merchant disputes and chargebacks, encouraging more volume through AI intermediaries.
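
The layered model described above can be sketched in a few lines of code. This is a minimal illustration only, not Amex's actual implementation; the registry, credential strings, and `authorize` function are all hypothetical, standing in for whatever credentialing and session-authentication systems a real network would use.

```python
from dataclasses import dataclass

# Hypothetical registry: agent credentials issued at registration,
# each bound to the cardmember who authorized the agent.
REGISTERED_AGENTS = {"agent-cred-123": {"owner": "cardmember-42"}}

# Hypothetical set of cardmembers authenticated in the current session.
AUTHENTICATED_USERS = {"cardmember-42"}

@dataclass
class AgentTransaction:
    agent_credential: str
    cardmember_id: str
    amount_cents: int

def authorize(txn: AgentTransaction) -> bool:
    """Layered check: (1) the agent holds a registered credential,
    (2) that credential is bound to this cardmember, and
    (3) the cardmember has been authenticated before the agent transacts."""
    record = REGISTERED_AGENTS.get(txn.agent_credential)
    if record is None:                        # layer 1: agent not registered
        return False
    if record["owner"] != txn.cardmember_id:  # layer 2: credential/user mismatch
        return False
    return txn.cardmember_id in AUTHENTICATED_USERS  # layer 3: user authenticated
```

The key design point is that a transaction fails closed: an unregistered agent, a credential bound to a different user, or an unauthenticated cardmember each independently blocks the purchase.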

The Opportunity: This approach treats liability as a design feature, signaling to the market that the network will back the technology’s reliability. Mastercard and Visa have also begun rolling out infrastructure to support similar agent-based transactions.

The Challenge: Critics note that Amex’s current framework focuses on “errors” (such as AI hallucinations or software bugs) rather than “fraud.” If a malicious actor successfully bypasses authentication to control a “verified” agent, the question of liability becomes far more complex.

Lessons from the Apple Pay Launch

To understand the potential pitfalls of agentic commerce, one only needs to look back at the 2014 launch of Apple Pay. At the time, Apple and card issuers touted the security of digital tokens, which made stolen credentials nearly useless. However, they failed to account for bad actors provisioning stolen identity data onto legitimate devices. This led to a surge in fraud at launch—a pattern that could easily repeat if AI agent authentication is not linked to ironclad identity verification.

The Identity Resolution Gap

Most current fraud controls are built to answer a simple question: “Is this payment authorized?” They are not necessarily designed to answer: “Is the person (or agent) behind this payment the actual card member?”

In North America, tools like 3D Secure—which provide device and IP data to issuers—have seen low adoption. Without these signals, issuers struggle to distinguish between a legitimate AI agent acting on a user’s behalf and a compromised bot. This uncertainty makes liability coverage a structural necessity for the industry rather than a mere marketing perk.
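
The gap between the two questions can be made concrete with a small sketch. The function names, signals, and threshold below are illustrative assumptions, not any issuer's real logic: the point is that a legacy check passes as soon as the payment itself is valid, while an identity-aware check also demands 3D Secure-style signals (known device, consistent IP) before concluding the actor is the card member.

```python
def legacy_check(card_valid: bool, funds_ok: bool) -> bool:
    # Answers only: "Is this payment authorized?"
    return card_valid and funds_ok

def identity_aware_check(card_valid: bool, funds_ok: bool,
                         device_known: bool, ip_matches_history: bool) -> bool:
    # Also asks: "Is the actor behind this payment the card member?"
    if not legacy_check(card_valid, funds_ok):
        return False
    # Hypothetical rule: require at least one corroborating identity signal.
    identity_signals = int(device_known) + int(ip_matches_history)
    return identity_signals >= 1
```

A valid card with no identity signals at all passes the legacy check but fails the identity-aware one, which is exactly the scenario a compromised bot exploits.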

Five Strategies for Issuers in the AI Era

Financial institutions must upgrade their authorization stacks before agentic transaction volumes scale. Key priorities include:

  1. Advanced Authorization Logic: Move beyond user-centric signals to include agent identity, provenance, and behavioral context in real-time decisions.
  2. Identity-Centric Infrastructure: Establish systems that verify the specific relationship between a human user and their authorized AI agent.
  3. Redesigned Dispute Frameworks: Create new categories for agent-driven disputes that reflect shared responsibility between the user, the agent provider, and the merchant.
  4. Internal Cross-Team Alignment: Ensure that fraud, product, and customer experience teams are not working in silos when managing AI risks.
  5. Standardization: Participate in industry-wide efforts to define how identity and liability are allocated across open ecosystems.
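
The first strategy, authorization logic that blends agent identity, provenance, and behavioral context into a real-time decision, might look roughly like the sketch below. The signal names, weights, and thresholds are invented for illustration; a production system would derive them from issuer data and models rather than hard-coded constants.

```python
# Hypothetical risk weights per signal; illustrative values only.
WEIGHTS = {
    "agent_unverified": 0.5,       # agent identity: no registered credential
    "new_agent_user_pairing": 0.2, # provenance: agent never seen with this user
    "atypical_merchant": 0.2,      # behavioral context: merchant outside user's pattern
    "velocity_spike": 0.3,         # behavioral context: unusual transaction rate
}

def risk_score(signals: dict) -> float:
    """Sum the weights of every risk signal that fired."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name, False))

def decide(signals: dict) -> str:
    """Three-way real-time decision instead of a binary approve/decline."""
    score = risk_score(signals)
    if score >= 0.5:
        return "decline"
    if score >= 0.2:
        return "step_up"  # require fresh cardholder authentication
    return "approve"
```

The "step_up" outcome reflects the article's larger point: when agent signals are ambiguous, the issuer can fall back to re-verifying the human rather than simply approving or declining.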

The Shift from Transactions to Intent

The era of agentic commerce is forcing a structural shift in the payments ecosystem. Liability is moving from a “downstream” process (handled after a dispute occurs) to an “upstream” design element of the transaction itself.

The ultimate success of this new era depends on whether systems can accurately interpret human intent before funds move. Banks and issuers that treat AI agents as just another digital wallet will likely face high friction and rising fraud. Conversely, those that treat identity as a continuous, three-way relationship between the user, the agent, and the issuer will set the standard for the next generation of global commerce.

Source: thefinancialbrand.com
