– Rohan Mishra
Introduction
‘Seeing is believing’, or ‘what you see is what you will believe’, is perhaps the most common maxim, but for how long can we rely upon it? As society advances with modern technology, people attach considerable probative weight to digital content such as images and videos. Such content is accepted at face value as evidence that an event occurred as alleged in the video or picture produced. The rise of ‘deepfake’ technology might turn this scenario upside down. With the help of generative AI, it has become relatively easy to create media such as images, video, and audio in which people appear to do or say things they never did or said.
It is inevitable that deepfake-generated media will cross the boundaries of the courtroom, and the present Indian legal landscape offers no adequate mechanism to tackle the presentation of deepfake evidence in court. Consider a scenario where a video of poor quality shows the accused stabbing the victim and demanding money while crossing the street. The accused’s voice and the birthmark on his face are also captured in the video. After his arrest, when the accused is confronted with the video, he protests its contents and claims it is fake. If this is indeed deepfake-generated media, the pressing questions are: how can the accused prove it in his defence, and how should the court treat such media when it is produced at trial?
In this piece, the author, firstly, examines the jurisprudential underpinnings of the admissibility of e-evidence in trials, focusing on the laws, judgments, and guidelines that determine the authenticity of e-evidence produced before courts. Secondly, the author argues that the existing legal standards governing the authenticity of e-evidence are inadequate to deal with deepfake-generated media produced before the court, as these standards were developed at a time when no such technology existed. Although the newly implemented Bharatiya Sakshya Adhiniyam, 2023 (hereinafter ‘BSA’) has to some extent tried to resolve the ambiguity, it fails to address the larger concern. Thirdly, the piece offers concluding remarks followed by some possible solutions that should be considered to best deal with the situation.
Deepfakes Defined: The Rise of Technology
‘Deepfake’ is a blend of ‘deep learning’ and ‘fake’. Deep learning is a type of ‘machine learning’ that learns from, interacts with, and deals with complex data sets. It is a training process by which the AI becomes extremely sharp as a plethora of data is continuously fed into the system. Earlier, deepfakes used to be generated on a single neural network, but as the technology has improved, deepfakes are now built using ‘generative adversarial networks’ [GANs]. A GAN consists of a two-part AI model, i.e., two neural networks: [i] the generator, which produces the altered media, such as audio or video, and tries to make it resemble the real content the system is imitating; and [ii] the discriminator, which works as the authenticator of the generated media and attempts to distinguish between the real content and the content created by the generator.
In a nutshell, deepfake software operates by feeding real data into a machine learning algorithm that is adept at superimposing one face on another. The more real data fed into the system, the more compelling the results. The two networks, the generator and the discriminator, compete against each other to better their systems: as the discriminator improves at spotting fakes and providing feedback, the generator corrects its mistakes based on that feedback and attempts to produce ever more realistic fake versions of the original content.
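For readers who want to see this adversarial loop in concrete terms, the following is a minimal sketch of GAN training in Python using PyTorch. It is illustrative only: real deepfake systems use large convolutional networks trained on facial imagery, whereas here the ‘real data’ is a simple synthetic distribution so that the script remains self-contained.

```python
# Minimal sketch of the adversarial training loop described above.
# Illustrative only: real deepfake systems use far larger networks
# and real image datasets; here "real" data is a synthetic distribution.
import torch
import torch.nn as nn

DATA_DIM, NOISE_DIM, BATCH = 16, 8, 64

# Generator: maps random noise to a fake sample.
G = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, DATA_DIM))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(1000):
    real = torch.randn(BATCH, DATA_DIM) + 2.0    # stand-in for real media
    fake = G(torch.randn(BATCH, NOISE_DIM))      # the generator's attempt

    # 1. Train the discriminator to tell real from fake.
    opt_D.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(BATCH, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(BATCH, 1))
    d_loss.backward()
    opt_D.step()

    # 2. Train the generator to fool the discriminator: the
    #    discriminator's "feedback" (its gradients) flows back into G.
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(BATCH, 1))  # want fakes labelled real
    g_loss.backward()
    opt_G.step()
```

The key design point is the second step: the generator never sees the real data directly; it learns solely from the discriminator’s gradients, which is why every improvement in detection translates into an improvement in forgery.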
Deepfake content first surfaced on the internet in 2017, when an unidentified Reddit user released a series of pornographic videos with the faces of famous female Hollywood celebrities swapped in. This kickstarted a series of incidents, and following these events another Reddit user created an application, ‘FakeApp’, that allows users to create deepfakes. With the help of FakeApp, creating deepfakes became increasingly inexpensive, extremely easy, and exceedingly difficult to detect. For instance, a video of Bollywood actress Rashmika Mandanna sparked a debate on the misuse of deepfakes: a video circulating online, which depicted the actress in a swimsuit and caused her humiliation, was later clarified to be a deepfake, with the original video traced to a social media influencer. Egregious use of this technology has also been seen during election campaigns, with political parties creating deepfake videos of rival parties with the intention of shaping public opinion. The potential misuse of deepfakes is a serious concern for stakeholders, as it could undermine democracy by spreading falsehoods and sowing discord.
The Nuances of Deepfakes in Court Proceedings
Deepfakes will make the task of trial advocates and judges significantly more challenging, as courts are now expected to go the extra mile to determine the authenticity of digital media before admitting it into evidence. Recently, the Delhi High Court in Nirmaan Malhotra v. Tushita Kaul refused to rely on photographs produced by a man alleging that his wife had been in a relationship with another man and was living in adultery, and was thus not entitled to receive maintenance. The court observed:
“We have looked at the photographs. It is not clear as to whether the respondent/wife is the person in the photographs, as alluded to by the learned counsel for the appellant/husband. We may take judicial notice of the fact that we are living in the era of deepfakes and, therefore, this is an aspect that the appellant/husband, perhaps, would have to prove by way of evidence before the Family Court,”
Deepfake-generated media will only complicate trial proceedings, as courts will now have to determine the authenticity of every single piece of digital evidence produced before them, even where the evidence looks convincing prima facie. As deepfake technology improves, it becomes difficult to identify what is real and what is fake; hence the need to re-evaluate the procedure for authenticating e-evidence that has otherwise been properly admitted.
The present set of provisions governing the presentation and authentication of e-evidence before courts falls short, as those provisions were enacted long before the advent of deepfake technology. Sections 65A and 65B of the Indian Evidence Act, 1872 (hereinafter ‘IEA’) are the two major sections, introduced as special law, that deal with the admissibility and authenticity standards for e-evidence in court. These sections were introduced with the intent to modernise Indian evidentiary practice and assist courts in dealing with e-evidence as technology advances.
Under the scheme of the Act, the purpose of Section 65A is only to refer to Section 65B, whereas Section 65B(1) contains a non-obstante clause and creates a deeming fiction: any information contained in an electronic record which is printed on paper, or transferred to any media such as a CD or USB drive, shall be treated as a document and is admissible as evidence of the contents of the electronic record without further proof or production of the original. The clause thus creates an exception to the best evidence principle, under which secondary evidence may not be produced when original evidence is available. Sections 65B(2)(a) to 65B(2)(d) then prescribe conditions that must be fulfilled in respect of a computer output, to address concerns about corruption and tampering of e-evidence. To establish that the conditions in sub-section (2) are satisfied, an avenue is found in sub-section (4), which provides for the production of a certificate identifying the electronic record containing the statement and describing the manner in which it was produced, giving the particulars of the device involved in the production of the electronic record, or dealing with any of the matters mentioned in sub-section (2). Such a certificate must be signed by a person occupying a responsible official position or any person managing the operation of the relevant device. Further, the apex court in Anvar PV v. PK Basheer put forth certain criteria for the introduction and validation of e-evidence, which include:
(a) There must be a certificate which identifies the electronic record containing the statement;
(b) The certificate must describe the manner in which the electronic record was produced;
(c) The certificate must furnish the particulars of the device involved in the production of that record;
(d) The certificate must deal with the applicable conditions mentioned under Section 65B(2) of the Evidence Act; and
(e) The certificate must be signed by a person occupying a responsible official position in relation to the operation of the relevant device.
Standing alone, the provisions and companion standards in the IEA for the authentication of e-evidence are insufficient to address the significant threat that deepfake technology poses, in several ways. Firstly, the apex court in Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal explicitly held that the certificate mentioned in sub-section (4) is not necessary if the original device itself is produced. This can be done if the owner of the laptop, computer, tablet, mobile phone, or other electronic device steps into the witness box and proves that the device produced is the one on which the original information was first stored, and that it is owned and operated by him. Thus, the formality of producing a certificate can be bypassed altogether by producing the e-evidence directly through this route.
Secondly, merely providing a certificate is too low a standard for determining the authenticity of the contents of the media in respect of which evidence is sought to be given. Further, the certifying person need only state that the contents are true to the best of his knowledge and belief. Electronic records are especially vulnerable to tampering, alteration, excision, and the like, and thus require reliable safeguards; what the IEA proposes is the very opposite, and without such safeguards, court proceedings resting on the proof of e-records can lead to a travesty of justice.
Thirdly, once a certificate is issued for e-evidence, the court assumes the contents to be true without further verification. For instance, a certificate issued for a CD will identify the CD as containing the statement sought to be introduced and describe the manner of its production in court. The provision does not call for any further corroboration of that identification and description. If the authenticity of the e-evidence is challenged, however, the court may in its discretion have the e-evidence examined in a forensic science laboratory under the relevant provision of the BSA. In practice, obtaining an expert opinion may take years, delaying the proceedings, and reports often end with the remark ‘inconclusive’, leaving it once again to the court’s conscience whether to admit the evidence.
Sections 62 and 63 of the BSA replace Sections 65A and 65B of the IEA, but they too fall short of introducing any significant change to the admissibility and authenticity regime for e-evidence, despite repeated remarks by courts on the growing misuse of deepfakes. Section 2(d) of the BSA defines ‘document’ to now include ‘electronic and digital records’, expanding its scope to cover emails, text messages, website content, and more. In terms of authentication standards, the mandatory production of a certificate is retained in the Adhiniyam, and an additional requirement is introduced: the certificate must be signed both by the person in charge of the electronic device and by ‘an expert’. Furthermore, the certificate must include a ‘hash value’, a unique alphanumeric value that represents the contents of a file or data set and acts as a digital fingerprint of the record. To verify the data, the hash value of the data received is compared with the hash value of the original, to check whether the data has been altered.
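A minimal sketch of this comparison, using Python’s standard hashlib library, is set out below. SHA-256 and the file names are assumptions made for illustration; the BSA does not prescribe a particular algorithm.

```python
# Minimal sketch of the hash-value check a BSA certificate contemplates.
# SHA-256 and the file names are assumptions for illustration only.
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hash of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hash recorded when the record was certified (hypothetical file) ...
certified_hash = sha256_of_file("evidence_original.mp4")
# ... compared against the hash of the copy produced in court.
produced_hash = sha256_of_file("evidence_produced.mp4")

# Any alteration, however small, yields a completely different hash.
# Note: a match only shows the file is unchanged since it was hashed;
# it says nothing about how the file was created in the first place.
print("unaltered" if certified_hash == produced_hash else "altered")
```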
Undoubtedly, the new provisions add a layer of security for verifying the authenticity of e-evidence, but some flaws need to be mentioned: (i) the section requires an expert’s signature but refrains from defining who qualifies as an expert capable of validating the authenticity of e-evidence; and (ii) the hash-value requirement may act as a safeguard against tampering with an e-record after it has been hashed, but it does not vouch for the legitimacy of the e-evidence itself: a deepfake hashed at the point of seizure will match its own hash perfectly.
Way Forward
The deepfake concern is not limited to court litigation; it extends to arbitration proceedings as well. Central legislation is therefore the need of the hour, as the present laws are not enough to separate the fake ‘chaff’ from the genuine ‘wheat’ that can survive a challenge to authenticity. Recently, the Central government, in the matter of Rajat Sharma v. Union of India, informed the Delhi High Court that a committee has been formed to consider a statutory framework to regulate deepfakes. Until comprehensive legislation is framed, lawyers and judges will need to exercise greater diligence in verifying the authenticity of digital evidence. This includes training judges and lawyers to identify deepfake media at first instance, which would save the effort of consulting a forensic expert on every occasion, a practice that prolongs litigation and runs up costs through extra diligence and heavy expenditure on lay and expert witnesses.
Rohan Mishra is an advocate practicing before the courts of Delhi-NCR