The New Frontier of Civil Liability: Artificial Intelligence, Autonomy, and Consumer Protection
Sebastián Bozzo Hauri is a lawyer, holds a PhD in Law from the University of Valencia, and currently serves as Dean of the Faculty of Law at Universidad Autónoma de Chile. A specialist in civil law, consumer law, and artificial intelligence, he leads Fondecyt research projects and the Jean Monnet Module on AI and European Private Law. He has directed the Center for Regulation and Consumer Affairs and the Financial Autonomy platform.
Technological evolution has entered a phase that challenges the very foundations of private law. The emergence of systems based on artificial intelligence (AI)—particularly in their most recent form, so-called AI agents—compels a reassessment of the traditional framework of civil liability, especially in the field of consumer law.
The trajectory of AI has been marked by three distinct waves. The first was predictive AI, trained on historical data to anticipate future behavior, as in recommendation engines and segmentation models. The second introduced generative AI, such as ChatGPT or Gemini, capable of producing text, images, or decisions in response to prompts. It is the third wave, embodied by AI agents, that poses the greatest challenge: software capable of autonomous action, making decisions on behalf of users, interacting across platforms, and executing tasks with minimal human oversight.
These agents are defined as systems governed by operational rules within predefined environments, equipped with deep domain-specific knowledge and the capacity to process textual information. Unlike other AI systems, agents do not merely respond; they act, booking tickets, initiating transfers, and scheduling medical appointments automatically. In short, they assume roles traditionally reserved for humans.
This autonomous and adaptive deployment, across both physical and virtual environments, strains the classical elements of civil liability. As scholars such as Barros, Corral, and Aedo have pointed out, the starting point of liability systems is imputable human behavior, where harm is the result of an action or omission. But when damage is caused by a system acting independently, who is liable?
In Chile, civil liability in consumer matters is governed by Law No. 19.496 (LPDC), which rests on the notions of provider, product, and consumer. Here arises the first complexity: can an AI system be considered a product? And its developer or deployer, a provider? The LPDC defines products as goods offered and marketed, typically in exchange for a price. Yet in the digital environment, many AI-based services are offered free of charge in exchange for personal data, constituting a non-monetary consideration that is not properly regulated.
Even if one accepts that an AI system is a product, the difficulties persist. The inherent features of these systems—complexity, interconnectivity, opacity, self-learning, and autonomy—hinder both the identification of the cause of harm and the attribution of liability. For instance, in the event of a flawed recommendation by an AI agent operating on an insurance platform, who is liable: the algorithm’s designer, the data provider, the platform operator, or the third-party implementer?
The opacity of such systems—where not even developers can clearly explain why a system made a certain decision (the so-called "algorithmic black box")—directly impacts the consumer’s burden of proof. This is particularly problematic in a legal regime that requires establishing fault or negligence and a clearly identifiable causal link.
In Europe, these tensions have prompted significant reform. The new Directive on Liability for Defective Products (approved by the European Parliament in March 2024) expands the concept of product to include software and AI agents, and eases the burden of proof: legal presumptions now protect victims where damage arises from complex, opaque, or hard-to-audit systems. The directive also recognizes new liable parties, such as implementers, component integrators, and fulfilment service providers. Furthermore, it introduces joint and several liability among economic operators, together with a right of recourse between them.
Against this backdrop, it is urgent for Latin American countries to adjust their legal frameworks along at least three dimensions:
1. Recognition of new liable parties: including developers, implementers, integrators, and operators of AI systems as potential providers.
2. Expansion of the concept of product: to include autonomous software, even when it is not directly sold to consumers.
3. Presumptions in favor of the consumer: in cases involving damage caused by opaque or complex systems, a reversal of the burden of proof should be established, as the new European directive provides.
The autonomy of AI agents is not merely a technical challenge—it is a profound legal one that calls for a reexamination of the social contract between innovation and the protection of rights. Consumer trust in the digital ecosystem depends on the ability to identify who is liable, to claim redress for damages, and to access effective means of evidence. Without this balance, the promise of artificial intelligence risks becoming a new form of vulnerability.
The law must rise to meet this transformation—not to obstruct innovation, but to endow it with legitimacy and a sense of justice. Civil liability, particularly in the field of consumer protection, must be reimagined in light of AI agents.