The landscape of digital commerce is evolving rapidly with the emergence of agentic commerce and agentic payments, a frontier where artificial intelligence agents are set to transform how we shop and transact. From sophisticated GenAI-powered digital shopping assistants to autonomous systems executing purchases on human instructions, the technology promises unprecedented convenience. But as these AI agents gain autonomy, critical questions arise: how smoothly will the transition unfold for consumers, merchants, and financial institutions, and who bears responsibility when things go wrong?
A recent white paper by the Consumer Bankers Association (CBA) and Davis Wright Tremaine LLP highlights significant risks. It warns that widespread adoption of agentic commerce, especially autonomous payments, without robust safeguards could produce failures that the protections built into traditional payment channels were never designed to handle.
Who Pays When AI Agents Misbehave?
One of the most pressing concerns centers on liability. Under the existing Electronic Fund Transfer Act, consumers typically have limited liability for unauthorized transactions. However, this protective shield may not extend to scenarios involving AI agents. The white paper cautions that “consumers may be liable for mistakes their agents make, and these mistakes could be costly.” This potential shift in responsibility places consumers in uncharted territory.
For banks, the stakes are equally high. The report stresses that disputes arising from agentic commerce will invariably land in the laps of financial institutions. “Banks cannot be slow in understanding how their customers will be using AI and agentic payments tools,” the paper states, “because banks should expect customers to reach out for help when agentic transactions go wrong.” As the primary touchpoint for payment disputes, banks are expected to “make them whole for failed improper payment transactions.”
The Growing Appetite for Autonomous Shopping
Consumer interest in agentic commerce is not just theoretical; it’s rapidly gaining traction:
- During the 2025 holiday season, shopping via GenAI-powered chat services and browsers surged by nearly 700% compared to 2024 levels, according to Adobe data.
- A January report by PYMNTS.com reveals that consumers are ready to delegate “meaningful purchasing and financial tasks to agentic AI,” driven not by novelty, but by the promise of reduced friction. However, broad adoption hinges on “payments-grade trust.”
- Emerging technologies like OpenClaw, dubbed “the AI that does things,” underscore the urgency for banks and payment providers to address these evolving risks, as they threaten to bypass existing controls.
Defining the Agentic Commerce Landscape
Major players, including Mastercard, Visa, PayPal, and Google, are actively championing agentic commerce and payments, with a steady stream of related announcements.
The CBA’s white paper, titled “Agentic AI Payments: Navigating Consumer Protection, Innovation, and Regulatory Frameworks,” crystallizes insights from a symposium attended by banks, tech firms, merchants, payment networks, policymakers, and consumer advocates. It defines agentic tools specifically as those that orchestrate and execute transactions autonomously, without direct human intervention. Imagine a command like: “Find and buy the best laptop under $500 for a 13-year-old.” A true agentic AI would handle the entire process.
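The "find and buy" instruction above can be pictured as a small orchestration loop. The sketch below is purely illustrative: the `Listing` type, `choose_and_buy` function, and `execute_payment` callback are invented for this example and do not correspond to any real payments API. It shows the property the paper keeps returning to: the agent selects a product and triggers payment with no human review between instruction and transaction, so the only protection is whatever guardrail (here, the budget check) is built in.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    price: float
    rating: float  # e.g., a review score out of 5

def choose_and_buy(listings, budget, execute_payment):
    """Pick the highest-rated listing within budget and pay for it.

    `execute_payment` stands in for whatever payment rail the agent
    uses; in a real system this step is where disputes originate,
    because no human confirms the choice before money moves.
    """
    affordable = [l for l in listings if l.price <= budget]
    if not affordable:
        return None  # guardrail: never exceed the user's stated limit
    best = max(affordable, key=lambda l: l.rating)
    execute_payment(best)  # autonomous execution, no confirmation step
    return best
```

Note that everything the consumer-protection questions hinge on, such as what counts as authorization and who absorbs a bad pick, lives inside that single unsupervised `execute_payment` call.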
The widespread acceptance of this autonomous digital commerce heavily relies on consumers’ confidence that established protections governing credit and debit cards will remain intact. Yet, these federal laws and traditional payment network policies were never designed with agentic commerce in mind. A critical vulnerability lies in new payment rails, such as cryptocurrency, which often lack the inherent mechanisms for refunds, chargebacks, and other essential consumer protections taken for granted in traditional digital channels. This development could even accelerate the adoption of new payment methods like stablecoins, pushing consumers, merchants, and banks into unregulated territory.
Adding to the complexity, immediate government action to regulate this space appears unlikely. The current administration has signaled a preference for allowing AI and agentic tools to flourish with a “hands-off” approach, letting consumers test these products before imposing significant regulations.
Unpacking the Risks for Consumers
The white paper meticulously details several risks agentic commerce poses to consumers:
- Conflicting Interests: AI agents might not always act in the consumer’s best interest. They could favor specific merchants or payment rails, potentially due to developer incentives, even if better deals exist elsewhere.
- Costly Mistakes: Poor instructions or insufficient training data could lead to erroneous purchase decisions. Current agent technology may lack the contextual awareness to prevent obvious blunders, such as ordering bulk frozen food for a tiny freezer.
- The “Tickle Me Elmo” Effect: A scenario where numerous agents simultaneously target the same product, price point, or merchant could cause market disruptions, supply chain issues, or even overwhelm merchants’ systems.
- Data Breach Vulnerabilities: To operate effectively, powerful agents will require extensive access to sensitive consumer data, including payment history, account balances, health information, and past purchases. Many agentic payment applications may be developed by non-financial entities not covered by existing data protection laws like the Gramm-Leach-Bliley Act.
- Malicious Agents: The risk of criminals creating fake agents that mimic legitimate tools to steal financial information or execute unauthorized transactions is a significant concern.
- Agent Overreach: While consumer liability for unauthorized electronic fund transfers is typically capped, an exception exists if the consumer grants access to another “person” who then exceeds their authority. The legal definition of an AI agent as a “person” in this context remains unclear.
Key Agentic Commerce Risks for Banks
The ripple effects of consumer and merchant issues will inevitably impact banks significantly:
- Surge in Disputes: Agents making purchasing decisions without human review could lead to more transactions susceptible to merchant fraud or simply merchants failing to fulfill unexpected order influxes. This could dramatically increase the volume of disputes banks must handle and potentially their need to reimburse customers.
- Defaulting to Bank Systems: There’s a risk that agents will bypass merchants’ customer service functions, defaulting directly to banks’ dispute and chargeback systems, further burdening financial institutions.
- Compliance Exposure: AI agents trained on biased data could inadvertently perpetuate discrimination or redlining, leading to violations of fair-lending laws by steering certain consumers away from credit options.
- Increased Fraud Risk: If third-party developers create agents operating via merchant APIs or open banking infrastructure, banks may have limited visibility into their behavior, escalating fraud risks.
Evolving Solutions: A Work in Progress
While various potential solutions have been discussed, many face significant hurdles in the current political and regulatory environment. For instance, designating the Federal Trade Commission (FTC) as the federal regulator for AI agents appears unlikely. Furthermore, some proposed solutions may lack the agility needed to address the rapid-fire risks posed by high-speed agentic purchasing.
However, some pathways might emerge over time:
- An Agentic AI Version of RESPA: Modeled after the Real Estate Settlement Procedures Act, which prohibits fees and kickbacks for real estate referrals, a similar framework could be adapted for broader consumer commerce.
- State Licensing for AI Providers: Comparable to state-level licensing for money transmitters and lenders, this approach could regulate AI providers, though symposium attendees voiced concerns about potential burdens on startups.
- Industry Self-Regulation: Developing industry-wide standards for agent operations could offer a flexible, responsive solution.
As agentic commerce continues its rapid ascent, banks and financial service providers must proactively engage with these new technologies, understand their implications, and prepare for a future where autonomous agents redefine consumer interaction and liability.
Source: TheFinancialBrand.com