Agentic commerce and payments, driven by advanced artificial intelligence, are rapidly transforming how consumers shop and transact. This burgeoning technology ranges from sophisticated GenAI digital shopping assistants to autonomous AI agents capable of executing complex purchases based on human instructions. However, as these AI agents gain independence, a critical question emerges for financial institutions, merchants, and consumers: who bears the risk when things go awry?
The Looming Reality: Uncharted Territory for Payments
A recent white paper by the Consumer Bankers Association and Davis Wright Tremaine LLP highlights significant risks for all stakeholders if agentic commerce, especially autonomous payments, advances without robust safeguards. Unlike traditional payment channels with established protections, the world of AI-driven transactions presents a new challenge.
The Core Dilemma: Who Pays When Bots Go Rogue?
Rules under the Electronic Fund Transfer Act limit consumer liability for unauthorized transactions. However, those protections may not extend to mistakes made by AI agents acting on a consumer’s behalf. Consumers could find themselves liable for costly errors their digital agents commit.
For banks, the risk is particularly acute: “Banks should expect customers to reach out for help when agentic transactions go wrong,” states the white paper. As the primary point of contact for payment disputes, financial institutions will inevitably become the battleground for these new-age transaction conflicts. Customer expectations remain high; they anticipate banks will rectify improper or failed payment transactions, regardless of the technology involved.
Understanding the Agentic Commerce Surge
- Explosive Growth: Consumer interest is soaring. During the 2025 holiday season, shopping via GenAI-powered chat services and browsers surged by nearly 700% compared to 2024, according to Adobe data.
- Trust is Key: PYMNTS.com reports that while consumers are ready to delegate significant purchasing tasks to agentic AI, widespread adoption hinges on “payments-grade trust.”
- Frictionless Future: Consumers aren’t seeking novelty but view agentic shopping as a crucial tool for reducing purchasing friction.
- Bypassing Controls: Emerging technologies, like OpenClaw, which allows AI to “do things,” threaten to bypass existing payment controls, necessitating proactive risk assessment by banks and payment providers.
Defining Agentic Commerce: Orchestrating Autonomous Transactions
Major players like Mastercard, Visa, PayPal, and Google are actively developing agentic commerce and payment solutions. The CBA white paper, “Agentic AI Payments: Navigating Consumer Protection, Innovation, and Regulatory Frameworks,” defines agentic tools narrowly as those that orchestrate and execute transactions autonomously, without direct human intervention.
Imagine a command: “Find and buy the best laptop under $500 for a 13-year-old.” A true agentic AI would handle the entire process. Widespread trust in such systems depends on the assurance that consumers remain protected by the same long-standing rules applicable to credit and debit cards—protections not originally designed with AI agents in mind.
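To make that flow concrete, here is a minimal, purely illustrative sketch of the logic such an agent might run. Everything in it is an assumption for illustration: the `Offer` and `SpendingMandate` types, the canned `search_catalog` results, and the `execute_payment` stub stand in for real product-search and payment APIs; no vendor’s actual interface is being described.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    merchant: str
    product: str
    price: float
    rating: float  # 0.0-5.0 review score

@dataclass
class SpendingMandate:
    """Consumer-granted limits the agent must respect (hypothetical)."""
    max_price: float
    category: str

def search_catalog(query: str) -> list[Offer]:
    # Stand-in for a real product-search API; returns canned results.
    return [
        Offer("ShopA", "13in laptop, 8GB RAM", 479.00, 4.4),
        Offer("ShopB", "14in laptop, 8GB RAM", 529.00, 4.6),
        Offer("ShopC", "13in laptop, 4GB RAM", 349.00, 3.9),
    ]

def choose_offer(offers, mandate):
    """Pick the best-rated offer that fits the consumer's budget."""
    eligible = [o for o in offers if o.price <= mandate.max_price]
    if not eligible:
        return None  # escalate to the human rather than overspend
    return max(eligible, key=lambda o: o.rating)

def execute_payment(offer: Offer, mandate: SpendingMandate) -> str:
    # A real agent would call a tokenized payment rail here; this sketch
    # only re-checks the mandate and returns a fake confirmation ID.
    assert offer.price <= mandate.max_price, "mandate violated"
    return f"CONFIRMED:{offer.merchant}:{offer.product}"

if __name__ == "__main__":
    mandate = SpendingMandate(max_price=500.00, category="laptop")
    offers = search_catalog("best laptop under $500 for a 13-year-old")
    pick = choose_offer(offers, mandate)
    if pick is None:
        print("No offer within budget; asking the consumer for guidance.")
    else:
        print(execute_payment(pick, mandate))
```

The design point is the mandate check: the agent re-verifies the consumer’s $500 cap at payment time and escalates to the human instead of overspending, mirroring the concern about agents exceeding the authority a consumer actually granted.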
A critical concern is the emergence of new payment rails, such as cryptocurrency or stablecoins, which currently lack established mechanisms for refunds and chargebacks—essential components of consumer trust in digital payments. This shift could push consumers, merchants, and banks into unregulated territory.
Near-term U.S. government action to regulate AI agents appears unlikely, with the prevailing sentiment being to let these tools flourish before imposing significant restrictions.
Mapping Risks for Consumers
The rise of agentic commerce introduces several potential pitfalls for consumers:
- Agent Bias: AI agents might not always act in the consumer’s best interest, potentially favoring certain merchants or payment rails due to developer incentives, even if better deals exist elsewhere.
- Costly Mistakes: Poor instructions or insufficient training data could lead to ill-advised purchases. Current AI agents may lack the context to avoid errors, such as buying bulk frozen food for a user with limited freezer space.
- The “Tickle Me Elmo” Trap: A scenario where numerous AI agents simultaneously target the same product, potentially causing price spikes, availability issues, or overwhelming merchant systems.
- Data Breaches: Powerful agents require access to sensitive consumer data—payment history, account balances, health information, past purchases—increasing the attack surface for data breaches. Many AI agent developers may also fall outside the scope of existing data protection laws like the Gramm-Leach-Bliley Act.
- Malicious Agents: The threat of criminals creating fake AI agents to steal financial information or execute unauthorized transactions.
- Overreach and Liability: While consumer liability for unauthorized electronic funds transfers is typically capped, an exception exists if a consumer gives an access device to someone who then exceeds their authority. The legal status of an AI agent in this context is currently undefined.
Key Risks for Banks in the Agentic Era
Many consumer and merchant issues will inevitably ripple back to financial institutions:
- Increased Disputes: If agents frequently exceed instructions or generate disputed transactions, banks could face a surge in dispute handling and potential reimbursement obligations. Agents making purchasing decisions without human review may also increase exposure to merchant fraud or order fulfillment issues.
- Defaulting to Banks: There’s a risk that agents will bypass merchant customer service, defaulting to banks’ dispute and chargeback systems as the first line of resolution.
- Compliance Exposure: AI agents designed to assess creditworthiness could inadvertently perpetuate historical discrimination present in their training data, leading to fair-lending violations.
- Heightened Fraud Risk: If third-party developers create agents operating through merchant APIs or open banking infrastructure, banks may have limited visibility into agent behavior, increasing fraud vulnerability (a hypothetical agent-attribution check is sketched after this list).
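To illustrate the visibility gap in the last bullet, below is a small, hypothetical sketch of how a bank-side rule engine might treat transactions that carry (or lack) agent attribution. The `agent_id` field, the `REGISTERED_AGENT_CAPS` registry, and the allow/review/decline outcomes are all invented for illustration; no card network or open banking standard is being described.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount: float
    merchant: str
    agent_id: str | None  # hypothetical attribution field; None = human-initiated

# Hypothetical registry of agents the bank has vetted, with per-agent caps.
REGISTERED_AGENT_CAPS = {
    "agent-shop-assist-v1": 500.00,
    "agent-grocery-bot": 150.00,
}

def score_agent_risk(txn: Transaction) -> str:
    """Route agent-initiated transactions through agent-specific rules."""
    if txn.agent_id is None:
        return "allow"  # ordinary human-initiated payment path
    cap = REGISTERED_AGENT_CAPS.get(txn.agent_id)
    if cap is None:
        return "review"  # unknown agent: hold for manual review
    if txn.amount > cap:
        return "decline"  # exceeds the limit the bank set for this agent
    return "allow"

if __name__ == "__main__":
    print(score_agent_risk(Transaction("acct-1", 89.99, "ShopA", "agent-grocery-bot")))      # allow
    print(score_agent_risk(Transaction("acct-1", 899.99, "ShopB", "agent-shop-assist-v1")))  # decline
    print(score_agent_risk(Transaction("acct-1", 25.00, "ShopC", "agent-unknown")))          # review
```

The point is not these specific rules but the signal itself: without some attribution field, every agent-initiated payment looks identical to the human-initiated path, leaving the bank no way to apply agent-specific controls at all.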
Solutions: Still Works in Progress
While various solutions to mitigate these risks have been discussed, many face political or regulatory hurdles, or lack the speed required to address high-velocity agentic purchasing scenarios.
Potential long-term solutions include:
- Agentic AI Version of RESPA: A model similar to the Real Estate Settlement Procedures Act, which bars kickbacks and fees for real estate settlement service referrals, could be adapted for broader consumer commerce.
- State Licensing of AI Providers: Comparable to state-level licensing for money transmitters, though some fear this could unduly burden AI startups.
- Industry Self-Regulation: Developing common standards for agent operations could offer a flexible, industry-led approach to risk management.
Source: Thefinancialbrand.com