Sofia Grossi

Innovation Law and Regulation Series: AI on Trial, Blaming the Byte

Foreword


Artificial Intelligence systems are set to be the next revolution, forever changing human lives. This new phenomenon and its many effects will bring great changes to our society, which is why regulation is the first step toward ethical development. Unregulated use of these technologies could give rise to negative consequences such as discriminatory practices and disregard for privacy rights. The challenges brought by the use of Artificial Intelligence urge legislators and experts to protect citizens and consumers: regulation becomes a priority if humans wish to protect themselves from unethical and abusive conduct. This series explores new technologies such as Artificial Intelligence systems and their possible regulation through legal tools. To do so, it starts with an explanation of the rise of new technologies and delves into the complicated question of whether machines can be considered intelligent. Subsequently, the interplay between Artificial Intelligence and different branches of law is analyzed. The first chapter of this series explored the possibility of granting AI systems legal personality and the main legislative steps taken in the EU in that direction. Moving into the realm of civil law, the second chapter considers the current debate on the liability regime concerning the use and production of AI. The third chapter will discuss the influence of AI on contract law and the stipulation of smart contracts. The use of AI in criminal law and the administration of justice will be examined in the following chapter, with a focus on both the positive and negative implications of its use. The fifth chapter will be dedicated to the use of Artificial Intelligence by public sector bodies. Finally, the complicated relationship between data protection and AI will be discussed in light of the EU General Data Protection Regulation.




Artificial Intelligence systems achieve extraordinary results in carrying out tasks that require a high level of intelligence and capability. Yet, as illustrated in the previous chapter of this series, Artificial Intelligence systems are not granted legal personality. As a result, they are not attributed rights and duties. Exempt from legal obligations, Artificial Intelligence systems bear no direct liability for the damages they might cause. This leads to a worrying scenario in which users who are harmed while using this technology cannot ask for compensation, or struggle to understand who can be held liable for the damages suffered. For this reason, regulation is not only desirable but essential in order to fill the gap that leaves consumers and individuals unprotected.

This essay will illustrate the state of the art of the liability regime surrounding and regulating the use of Artificial Intelligence systems, with the aim of providing knowledge on the topic and shedding light on the main solutions and pathways that legislators might decide to follow. The first part of the essay is devoted to the concept of direct liability of Artificial Intelligence systems and its main implications. The following part considers the concept of vicarious liability, already present in Roman times and still existing in current legal frameworks. The essay then turns to the possibility of envisioning a joint liability regime among the relevant parties. On this note, the black box phenomenon is also considered to shed light on possible issues. Subsequently, the European Union Directive on Product Liability is examined in order to understand how the provisions it contains might prove useful in regulating the liability of Artificial Intelligence systems. The current U.S. liability regime is also taken into account. Finally, the essay closes with conclusions and remarks.


Direct Liability

The lack of recognition of legal personality for Artificial Intelligence machines leaves a normative gap, with confusion over who should bear the consequences of the damages caused by the system (Glavaničová & Pascucci, 2022). In order to hold the machine liable, we would first have to attribute legal personality to it, which legislators seem reluctant to do (Čerka, Grigienė & Sirbikytė, 2015). The prevailing interpretation is in fact that Artificial Intelligence systems are not able to think like humans and are not considered sentient entities. In short, they are not considered fully aware of the actions they take, as illustrated in the previous chapter of this series. However, the imposition of direct responsibility would decrease confusion among consumers, who could take action against a single entity without having to navigate the current confusing legal scenario. The solution would be straightforward: once a consumer is allegedly damaged by an Artificial Intelligence system, he or she could take direct action against it in court, whether civil or criminal. Artificial Intelligence systems have been shown to be able to develop independent decisions, the result of a real, yet different, intelligence. This was the case in the experiment of the Gaak robot in England. The robot managed to escape when left unattended, showing the extraordinary capability of the machine to become aware of its circumstances and take deliberate action (Toločka, 2008). This would justify the proposed solution of a direct liability regime. If Artificial Intelligence systems are indeed intelligent and capable of taking conscious, independent decisions, it seems illogical that they should not take responsibility for the damages they might cause. However, despite the benefits that a regime of direct liability could bring, some problematic aspects remain unresolved. Artificial Intelligence machines are not real persons, and the legal concepts of intent or negligence are not easy to detect in their traditional sense. How liability is conceived would need to change before Artificial Intelligence systems could be expected to be held responsible. Furthermore, before holding machines responsible, it would be necessary to ensure pecuniary compensation for the injured parties. On this note, recent proposals suggest the creation of mandatory insurance covering the negative outcomes caused by the systems (Hevelke & Nida-Rümelin, 2015). Although this seems like an easy solution to implement, legislators must take action to make this possibility a reality before even considering attributing direct liability to Artificial Intelligence systems.


Figure 1: Campaign against robots on the occasion of the Gaak robot experiment (Ward, 2016).

Vicarious Liability

The concept of vicarious liability comes from Roman law. Under Roman law, the father, the paterfamilias, was responsible and liable for the actions of others, notably his children and slaves. The latter were not granted legal personality, exactly as Artificial Intelligence systems are not nowadays. The liability of entities and persons without legal personality is considered an example of limited liability; in Ancient Roman times, this was known as noxal liability (Hillman, 1997). Under this regime, some persons could not be held responsible for their wrongdoings, and the blame, culpa, would fall on the shoulders of another person. The same happened for the damages caused by animals, which are notoriously not recognized as legal persons (Batra, 2010). The rationale behind the concept of vicarious liability is captured by the Latin phrase culpa in vigilando: as one is considered the master of others, it is only reasonable that this person should also be liable for the legal consequences of their actions. This framework was kept across the years and was transposed into the French legal framework when the Napoleonic Code was issued in 1804 (Ruffolo, 2020). In the Italian Civil Code, too, there is evidence of provisions on liability for the actions of others: Articles 2048 and 2049 make parents responsible for the actions of their children and employers responsible for the actions of their employees, respectively.


The same framework could be interpreted in a new and evolutive manner appropriate to the needs raised by the use of new technologies. If not worthy of being considered legal persons, Artificial Intelligence systems might be considered mere things. As such, liability for the damages they cause could easily be attributed to their guardian, as provided by Article 2051 of the Italian Civil Code. However, this would not solve the problem of lack of protection for consumers, as they could be legally considered the guardians of Artificial Intelligence systems. Thus, the status of guardian would have to be conferred on the seller of the Artificial Intelligence system to ensure appropriate protection for consumers. This would solve the problem and ensure that an entity, such as the producer or seller, is liable for the actions performed by the machine, provided that corporations do not find loopholes to avoid liability (Diamantis, 2024). Corporations might, for instance, create licensing agreements that transfer responsibility to underfunded entities unable to ensure fair compensation to the affected consumers. Nevertheless, it would not be appropriate to make the producer of the system entirely responsible for the consequences linked to the use of Artificial Intelligence machines. The European Union has in fact made it clear in its proposal for an Artificial Intelligence Regulation that the consumer would be considered liable for the damages suffered when they are the result of the consumer's own misconduct (European Commission, 2021). Putting all responsibility on the manufacturer would disincentivize innovation and create an unbalanced relationship between the parties.


Figure 2: The Code Napoléon containing provisions on vicarious liability (Unknown artist, 1810).

A Joint Liability Regime

As mentioned above, responsibility could be borne by entities such as the producer or the seller of the machine. The producer is in fact the person responsible for the product's entry into the market. It follows that the producer could also be responsible for any subsequent malfunctioning or wrongdoing by the machine, under the above-mentioned concept of vicarious liability. However, other subjects become relevant in the debate on the liability of Artificial Intelligence systems.

The trainer is the person responsible for training the machine, with obvious involvement in how the machine's outcomes are reached. It is the trainer's role to decide which data to train the system on and what information the system will use to take subsequent actions and decisions. For this reason, the trainer should bear partial responsibility for the damages that consumers might suffer when the machine takes a wrong and harmful decision. The Committee on Legal Affairs of the EU Parliament has declared that trainers' liability should be proportional to their level of training of the AI system (Bashayreh, Sibai & Tabbara, 2021).

Even more importantly, the debate also focuses on the coder. This person is tasked with elaborating and creating the inner algorithm the Artificial Intelligence system uses to function. The algorithm might be considered the soul of the system; therefore, its creator should bear part of the responsibility (Barfield, 2018). This shared regime of liability would create a more balanced scenario in which responsibility is partially divided among the involved parties. It would need to be ensured through licensing agreements between the parties, so that it is clear how and when liability is to be shared (Vladeck, 2014).


Black Boxes Phenomenon

As extraordinary as the current development of technology is, it is still impossible to fully understand how Artificial Intelligence systems work (Neri, Coppola & Miele, 2020). This phenomenon is known as the "black box" problem. Machines rely on deep neural networks that are so complicated that even technical experts cannot fully comprehend them (Heinert, 2008).

This creates two main problems. Firstly, it is hard for the consumer to prove in court how the damages suffered are linked to, and in particular caused by, the functioning of the Artificial Intelligence system. Secondly, it might be unethical and unfair to hold individuals liable for actions of a system that they did not foresee when creating the system itself (Hallevy, 2015). Artificial Intelligence systems are in fact able to develop independent decision-making skills, so it is not possible to assume that every decision the system makes is the result of the training received from the trainer, or even foreseeable by the coder of the algorithm (Bélisle-Pipon, Monteferrante & Roy, 2022). The question becomes whether it is possible to hold the coder liable for an action taken by the system that he or she could not foresee and thus avoid. A possible answer has been given by the European Commission in its proposal for an Artificial Intelligence Regulation: a system of exemption clauses excluding the liability of producers and coders when appropriate mitigation measures have been implemented before the system is put on the market for sale (Bratu & Freeland, 2022).


Figure 3: Difference between a black box system and a white box system (Pintelas & Livieris, 2020).


EU Directive on Product Liability

Another possible way to fill the normative gap and ensure protection in the use of Artificial Intelligence systems is to update the Product Liability Directive of 1985. The European Parliament presented this idea in 2020 as a way to ensure a regime of civil liability (European Parliament, 2020). The Directive has in fact proved a useful and effective tool to protect consumers against defective products, and it could be interpreted in a new light so that its provisions apply to the use of Artificial Intelligence systems. The definition of a product as a movable item can easily be applied to Artificial Intelligence systems. Whenever the product presents a defect, defined as a divergence between the performance a consumer can legitimately expect and the real performance of the product, the consumer has the right to complain about the damages suffered and seek compensation. However, the consumer must demonstrate the damage suffered, the defect of the product, and the causal link between the two (European Council, 1985). The producer is exempted from liability only when proving one of the events and circumstances listed in Article 7 of the Directive. For instance, the producer is not liable when proving that the state of scientific and technical knowledge at the time did not allow the defect to be discovered. This exemption clause could also be invoked by producers of Artificial Intelligence systems capable of learning independently and therefore potentially able to cause unforeseeable problems and develop unforeseeable defects. This regime could prove very successful, especially when combined with the traditional civil liability frameworks of European countries. The European Court of Justice has in fact confirmed that the Product Liability Directive and the traditional civil provisions on liability are cumulative, so that higher protection can be ensured.


Liability Regime in the United States

It is now appropriate to look at how non-European systems are tackling the issue. The United States is a country of great innovation and development, and it comes as no surprise that the rising business of self-driving cars is flourishing there. The legal framework is one of the reasons why: legislators in fact aid the innovation and development of Artificial Intelligence products. U.S. courts have declared that the conformity of an Artificial Intelligence product to the minimum safety standards issued by the competent public authorities is enough to rule out the liability of the producer in case of malfunctioning or wrongdoing by the machine. While this certainly provides for a thriving business in which producers do not have to worry about bearing all the legal consequences of their complicated and obscure products, consumers are left with little to no protection. The Product Liability Directive offers a similar exemption clause, excluding liability when the defect of the product is due to the product's compliance with mandatory standards issued by a public body. The Member States of the European Union, however, all interpret the provision as meaning that conformity to the standards is the minimum condition for putting the product on the market and is not enough to exclude liability, as the defect has to be verified ex post (Ruffolo, 2020).


Figure 4: An example of a self-driving car (Marr, 2021).

Conclusions

In conclusion, this essay has illustrated the main positions on, and solutions to, the lack of a clear liability regime in the field of Artificial Intelligence. The normative gap left by legislators leaves consumers unprotected, stakeholders confused, and innovation hampered by normative inefficiencies. The skepticism toward recognizing the legal personality of Artificial Intelligence systems makes the eventuality of direct liability impossible or, at least, improbable. For this reason, it seems necessary and appropriate to turn toward different solutions and pathways. The most traditional is represented by the concept of vicarious liability, which has a long history behind it, as it was already present under Roman law. Currently, Italy, among other countries, has similar regimes in its Civil Code that might be applied to Artificial Intelligence products. Furthermore, the possibility of creating a joint liability regime seems more practical, as responsibility would be shared among the relevant parties involved in the manufacturing, creation, and training of the system. However, it is important to consider, for both legal and ethical reasons, that the complicated and obscure nature of Artificial Intelligence systems and their inner algorithms might interrupt the cause-effect relation between, for instance, the coder and the outcome produced by the product as a result of its independent learning abilities. The existing and still in force EU Directive on Product Liability might prove particularly helpful in settling the current debate, as the notions contained therein might easily be applied to the use of Artificial Intelligence products. The U.S. represents a completely different legal framework, in which producers can be exempted from liability in many circumstances, with a favorable effect on development and innovation but a worrying disregard for consumers. It is clear that legislators need to take action soon to ensure that the gap is filled with appropriate and balanced regulation aimed at, on the one hand, protecting consumers and, on the other, incentivizing innovation with fair and reasonable obligations for producers, trainers, and coders.


Bibliographical References

Barfield, W. (2018). Liability for Autonomous and Artificially Intelligent Robots. Paladyn, Journal of Behavioral Robotics, 9(1), 193-203.

Bashayreh, M., Sibai, F. N. & Tabbara, A. (2021). Artificial intelligence and legal liability: towards an international approach of proportional liability based on risk sharing. Information & Communications Technology Law, 30(2), 169-192.


Bélisle-Pipon, J. C., Monteferrante, E., Roy, M. C. et al. (2022). Artificial intelligence ethics has a black box problem. AI & Society.

Bratu, I. & Freeland, S. (2022). Artificial Intelligence, Space Liability and Regulation for the Future: A Transcontinental Analysis of National Space Laws. 73rd International Astronautical Congress (IAC), Paris, France, 18-22 September 2022.


Čerka, P., Grigienė, J. & Sirbikytė, G. (2015). Liability for damages caused by artificial intelligence. Computer Law & Security Review, 31(3), 376-389.


Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products.


Diamantis, M. E. (2024). Vicarious Liability for AI. Indiana Law Journal, 99 (forthcoming).


European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.


European Parliament. (2020). Resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence.


Glavaničová, D. & Pascucci, M. (2022). Vicarious liability: a solution to a problem of AI responsibility? Ethics and Information Technology, 24, 28.


Hallevy, G. (2015). Liability for Crimes Involving Artificial Intelligence Systems. Springer.


Heinert, M. (2008). Artificial neural networks – how to open the black boxes?. AIEG 2008 – First Workshop on Application of Artificial Intelligence in Engineering Geodesy.


Hillman, R. W. (1997). Limited Liability in Historical Perspective. Washington and Lee Law Review, 54(2).


Batra, M. (2010). How is the Master of an Animal Liable in Tort Law? SSRN Electronic Journal.


Neri, E., Coppola, F., Miele, V. et al. (2020). Artificial intelligence: Who is responsible for the diagnosis? La Radiologia Medica, 125, 517-521.

Toločka, T. R. (2008). Regulated mechanisms. Technologija.


Vladeck, D. C. (2014). Machines without Principals: Liability Rules and Artificial Intelligence. Washington Law Review, 89, 117-150.


