Assessing the Viability of Holding AI Criminally Liable – The Criminal Law Blog


-Vedant Saxena


Introduction

‘M3gan’, short for ‘Model 3 Generative Android’, is the latest in the line of movies involving Artificial Intelligence (AI) models going rogue. M3gan, hailed as a ‘marvel of artificial intelligence’, is an AI humanoid doll created by a skilled roboticist, Gemma, to give her recently orphaned niece, Cady, a human-like companion to fill the void. However, being programmed to protect her friend at all costs, M3gan grows overprotective of Cady and sets about killing anyone and everyone who tries to mess with her. While an AI doll going on a killing spree is arguably not happening anytime soon, the film creatively rekindles an invaluable discussion. The advent of applications such as ChatGPT and TayBot, which are built on deep learning, is evidence that AI now requires minimal human interference in generating results. However, the growing autonomy of AI, especially on account of deep learning neural networks, throws the doors wide open to the commission of criminal acts by AI. In light of certain recent incidents, it is therefore pertinent to assess the viability of holding AI criminally liable.

Deep Learning Neural Networking: The birth of consciousness within AI?

The emergence of deep learning has brought about a revolutionary change in the AI industry. A subset of machine learning, ‘deep learning’ draws inspiration from the human brain and the way it ‘learns’ from its surroundings and the information fed to it. An AI application built upon this concept comprises a ‘deep learning neural network’, similar to, though far less complicated than, a neural network in the human brain, which is responsible for analysing the information comprising the input. Based on the input received, the AI application generates results, unlike a traditional AI system that was pre-programmed to identify certain patterns in the input and subsequently perform pre-decided tasks. The concept of deep learning therefore effectively undercuts the established argument that AI is a mere machine that can only function as per the commands of its developers. The deep learning neural network allows the AI system to employ its own independent judgement in carrying out tasks.
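The distinction between a pre-programmed system and one that learns its behaviour from data can be illustrated with a deliberately simplified sketch. The toy ‘perceptron’ below is, of course, orders of magnitude removed from a real deep learning network, but it captures the legally relevant point: the rule it ends up applying is derived from examples, not written out by the programmer.

```python
# Illustrative sketch only: a single artificial "neuron" that learns the
# logical AND function from examples, instead of having the rule hard-coded.
# All names here are invented for this illustration.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust two weights and a bias from labelled examples."""
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for x1, x2, target in samples:
            # Step activation: the neuron "fires" if the weighted sum is positive
            output = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            error = target - output
            # Nudge the weights towards the correct answer -- the "learning" step
            w1 += lr * error * x1
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b

# Training data: the logical AND of two binary inputs
data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w1, w2, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
```

The programmer never states the AND rule anywhere in the code; the behaviour emerges from the data, which is precisely why responsibility for a learned system's acts is harder to trace back to its developer.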

The unusual case of Microsoft’s ‘Tay’

In order to establish the culpability of AI, what must be determined is whether the AI system harboured the requisite degree of consciousness or autonomous will so as to render it truly aware of its actions. In order to answer this question, both theoretical and practical aspects need to be considered. Tay, a chatbot developed by Microsoft, comprised a deep learning neural network that allowed it to improve its performance by chatting with its users. However, within barely a few hours of its launch, it began producing tweets on highly controversial issues, such as supporting Nazism and attacking feminists. It is, however, notable that before Tay developed its villainous arc, it was subjected to a number of racist, misogynistic and otherwise unpleasant remarks by users. Its unpleasant transformation could therefore be on account of it being programmed simply to replicate the data it was fed. Microsoft, evidently, cannot be held criminally responsible for the acts of the chatbot, since the acts were committed by Tay with no interference from Microsoft. However, with the chatbot essentially being a robot parrot, it is arguable whether it actually had the requisite consciousness to comprehend the implications of its actions.

Deep learning and the ingredient of ‘mens rea’

An act is considered punishable under criminal law only when the requisite mens rea, i.e., the presence of a guilty mind, is present. In the context of a traditional AI system, attributing mens rea to the application would not have been possible, since the way it went about accomplishing tasks was entirely dependent on how it was programmed. In such cases, therefore, it was possible to hold the programmer criminally liable, since the act had effectively been committed by him. However, with the advent of deep learning, AI is entirely capable of committing acts without any human interference. Deep learning grants the attributes of cognition and an ‘autonomous will’ to AI, both of which are necessary ingredients to formulate a guilty mind. The AI is therefore no longer merely a gun in the hand of the human perpetrator; it has grown capable of determining the angle and firing as per its judgement.

Attributing mens rea to AI is, however, to be considered in light of the attributes of the perpetrator. Since AI does not harbour emotion, it cannot be held liable for certain categories of crimes, such as hate crimes. Further, it is to be noted that an AI system that was not autonomous enough to have made an independent decision cannot be considered a perpetrator. However, specific intent may be attributed to an AI system that has grown capable of developing aims by itself.

The Chinese Room Argument and the Robot reply: Does autonomy guarantee comprehension?

While deep learning can be considered to grant AI sufficient cognition to perform tasks as per its own judgement, the ‘Chinese Room Argument’ has often been used to contend that AI can never possess genuine intelligence and cannot, in the true sense, comprehend the data it processes. The argument is based upon a hypothetical situation involving an English-speaking man, who does not understand Chinese, locked in a room containing a set of English instructions for manipulating Chinese symbols. Upon receiving a sheet of paper comprising a set of questions in Chinese, the man uses the instructions to send back his responses in Chinese. To the people outside, it may appear that the man understands Chinese; however, he is merely following a set of instructions and does not actually understand what the symbols mean. According to John Searle, the author of the ‘Chinese Room Argument’, when AI witnesses a particular situation, it merely replicates the actions of others who have been through the same situation and cannot truly comprehend the implications of its actions.[1] As per this argument, therefore, the case of AI may be drawn parallel with that of a person of unsound mind, who is incapable of comprehending the implications of his actions and therefore does not form the requisite mens rea. The case of the chatbot Tay could be cited as an example here.

This argument has, however, been subjected to much criticism over the years, with the ‘robot reply’ arguably being the most valuable in the context of the AI debate. This reply modifies the thought experiment in a significant manner by placing the programme inside a robot that is capable of perceiving and communicating with the outside world through sensors and effectors. The claim is that the robot’s causal connection with its environment ensures that it can understand Chinese, because that connection imbues the formal symbols with semantics.[2] This reply appears all the more relevant in the instance of deep learning, where the robot is not pre-programmed to perform a specific task but to perceive its surroundings and act accordingly.

The viability of subjecting AI to punishment

While the debate over whether AI could ever harbour the requisite mens rea to commit a crime seems to be here to stay for a while, it is important to consider the viability of subjecting AI to punishment. The jurisprudence of punishing a criminal is either to deter him from committing any such act in the future or to eliminate him from society entirely, in the event that there is no reasonable possibility of his reformation. In this context, it is notable that, in spite of possessing a deep learning neural network, an AI system is still a machine and does not harbour human emotions such as ‘fear’ or ‘pain’.

While punishing an AI system is unlikely to deter other systems from committing offences, the offending system could conceivably be rehabilitated by placing it in a controlled environment. A more effective approach may be to re-program the AI. While the effectiveness of such a mode of punishment is yet to be truly tested, it most definitely makes room for further research into the consciousness of an autonomous AI.

The aim of punishing AI may also involve providing some amount of psychological relief to the victim and ensuring that the architects of the AI system act more responsibly henceforth. Therefore, preventive measures analogous to conventional forms of punishment may be imposed: the permanent deletion of the AI software, equivalent to the death penalty imposed on a person, or the temporary deactivation of the AI software, in lieu of imprisonment. While such mechanisms may not have any real effect on the AI itself and may, in turn, cause heavy losses to the owners of the system, they would effectively prevent the repetition of such an instance. However, in the context of deep learning neural networks, it is almost impossible to determine what exactly needs to be re-programmed.

Conclusion

The chances of a killer AI doll leaving a trail of bodies currently appear more reel than real. However, it can easily be concluded that AI is here to stay and will only continue to expand into other fields. Moreover, with rapid advancements in deep learning neural networks, the autonomy of AI has increased manifold and is guaranteed to keep growing. This, coupled with instances such as Microsoft’s Tay, calls for the criminal culpability of AI, at least to a certain degree. While attributing the conventional elements of crime to AI would be difficult, as discussed above, it is certainly not impossible, and is rather an indispensable measure, keeping in mind the psychological effects on victims. It is therefore pertinent to amend the current laws to accommodate such instances.


[1] M’Naghten case.

[2] Boden, Margaret A., Escaping from the Chinese Room (University of Sussex, School of Cognitive Sciences, 1987).


The author is a fourth-year student pursuing B.A. LL.B. (Hons.) at Rajiv Gandhi National University of Law, Punjab.



