Deepfakes and Victimology: Exploring the Impact of Digital Manipulation on Victims

Authors

  • Mahrus Ali, Universitas Wahid Hasyim Semarang, Indonesia
  • Zico Junius Fernando, Fakultas Hukum Universitas Bengkulu, Indonesia
  • Chairul Huda, Universitas Muhammadiyah Jakarta, Indonesia
  • Mahmutarom Mahmutarom, Universitas Wahid Hasyim Semarang, Indonesia

DOI:

https://doi.org/10.56087/substantivejustice.v8i1.306

Keywords:

Deepfakes, Victimology, Digital Manipulation, Cyber Crime

Abstract

In the rapidly evolving era of information technology, the emergence of "deepfakes", hyper-realistic digital manipulations powered by artificial intelligence, has introduced complex and pressing challenges to the field of victimology. These synthetic media are increasingly used for cybercrime, online harassment, defamation, and disinformation, leading to serious psychological, reputational, and legal consequences for victims. This study employs a normative legal research method that integrates conceptual, comparative, and futuristic approaches. The conceptual approach explores the legal and psychosocial dimensions of digital victimization; the comparative approach identifies legal responses in various jurisdictions; and the futuristic approach is used to predict the trajectory of deepfake threats based on AI development trends and emerging digital behaviors. Unlike previous generalist analyses, this research provides concrete findings: it identifies four dominant forms of digital exploitation through deepfakes, namely non-consensual pornography, political disinformation, financial scams, and reputational sabotage. The study also reveals that psychological trauma, reputational harm, and repeat victimization are the most pressing victimological issues in deepfake cases. By applying content analysis to real-world cases, this paper builds a framework for understanding how deepfakes transform the victim–offender dynamic, and it proposes forward-looking strategies for legal reform, victim protection, and digital literacy. This study contributes to filling the current academic gap by offering a victim-centered perspective on the legal and psychosocial consequences of synthetic media, thereby promoting more inclusive and adaptive responses in the face of evolving digital threats.


INTRODUCTION

In today's digital age, society is witnessing the emergence of revolutionary technologies that also bring new challenges in understanding and addressing the harm their use can cause.[1] The phenomenon of "deepfakes", which encompasses advanced digital manipulation using artificial intelligence, has changed the way people interact with information, media, and each other online.[2] Amid these technological advancements, this research addresses the urgency of understanding and responding to the victimological impact of deepfakes, with the specific aim of protecting society from the potential harm they bring.

Deepfakes, a technology that uses sophisticated algorithms to create fake videos, images, or audio that are virtually indistinguishable from the real thing, have changed the face of cybercrime and disinformation.[3] With the ability to manipulate visual and audio reality with an unprecedented level of sophistication, deepfakes offer new opportunities for harassment, deception, and propaganda.[4] Concerns have been raised regarding the potential impacts of this technology, which include social divisions, compromises of national security, and the harmful spread of fake news.[5] In this context, it is important to understand how deepfakes affect the individuals who fall victim to this digital manipulation, and this is where the study of victimology comes in. Victimology, as the scientific study of victims and victimization, seeks to understand the experiences and consequences of being a victim of crime.[6] In the case of deepfakes, this includes an analysis of how individuals can become victims of digital manipulation and of the psychological, social, and economic impacts of this victimization. Historically, victimology has focused on the study of victims of conventional crime,[7] but with the rise of cybercrime and digital manipulation, the field has evolved to encompass new forms of victimization faced by individuals in the digital age. This includes understanding how deepfakes can be used as a tool for abuse and exploitation, as well as the long-term consequences of this victimization for individuals and society at large.

According to a study conducted in 2022, more than half (57%) of global respondents believed they could identify deepfake videos, while around 43% admitted difficulty distinguishing deepfake from authentic footage.[8] These figures highlight the varying levels of awareness and perceptiveness among global consumers regarding deepfakes. That 57% of respondents felt fairly confident in their ability to detect deepfake videos suggests that a significant number of individuals have developed an awareness and understanding of the techniques and signs that indicate digital manipulation. However, a substantial proportion, 43%, still struggled to distinguish genuine from manipulated content. This raises serious concerns about the potential negative impacts of such content, including the spread of disinformation, defamation, and even political manipulation. The difficulty also indicates considerable room for improvement in education and training to help people recognize fakes so that they can navigate the digital world in a safer and better-informed manner. These results likewise raise important questions about how technology and regulation may need to evolve to protect individuals from the dangers posed by deepfakes, including the development of better detection tools and more effective public education on the risks of deepfakes.

In recent years, the deepfake phenomenon has created a significant wave of excitement and concern, spanning a wide spectrum from high-profile individuals to global political figures. In Indonesia, well-known actress Nagita Slavina was allegedly the victim of a deepfake video that tarnished her good name.[9] A short video featured a woman with a face very similar to hers in an inappropriate scene. This case is not unlike a series of incidents involving prominent global figures. Former US Presidents Donald Trump and Barack Obama, as well as Meta founder Mark Zuckerberg, have been the subjects of outrageous deepfake videos, with voice and facial manipulations confusing viewers as to the authenticity of the footage.[10] Even world leaders such as Russian President Vladimir Putin and North Korean leader Kim Jong-un have not escaped the reach of this phenomenon, becoming targets of deepfake content that creates false narratives and images about them.[9] In April 2018, a disturbing video rapidly spread across WhatsApp, the globally prevalent instant messaging platform. Captured as if through a CCTV lens, it depicted a group of children engaged in a street cricket game. Abruptly, the scene turned alarming when two men on a motorcycle appeared, snatched one of the smaller children, and then swiftly drove off. This purported "kidnapping" footage sparked immense confusion and fear, igniting a period of mob violence that tragically spanned eight weeks and resulted in the deaths of at least nine innocent individuals.[11] This incident highlighted the dangerous potential of viral misinformation to incite real-world violence and chaos, and it confirms that deepfakes and related manipulated media have become powerful tools in the spread of disinformation, with profound and detrimental impacts on individuals and society at large.[11]

This research is conducted to address a critical knowledge gap in the study of digital victimization, specifically by examining how deepfakes generate new categories of victims and unprecedented forms of harm that are not yet adequately conceptualized in traditional victimology. Unlike earlier studies that have primarily focused on the technological aspects or legal implications of deepfakes, this study centers its analysis on the victimological dimensions, including psychological trauma, reputational damage, and repeat victimization. Through a qualitative examination of real-world deepfake cases, this research investigates the shifting dynamics between perpetrators and victims in the digital landscape, particularly how power, anonymity, and technology intersect to facilitate exploitation. Furthermore, this study aims to develop a comprehensive theoretical framework that goes beyond existing victimology theories by proposing new conceptual tools to understand how individuals become targets of algorithmic manipulation. This includes identifying the psychological pathways through which deepfake victimization occurs and the social contexts that enable its spread. In doing so, the study also explores adaptive coping mechanisms and community-based prevention strategies that are tailored to the unique characteristics of digital exploitation. The novelty of this research lies in its interdisciplinary integration of victimology, psychology, and digital media studies, resulting in a more holistic understanding of the deepfake phenomenon. It also offers forward-looking policy recommendations and strategic interventions that reflect the future trajectory of AI-driven harm, something rarely addressed in prior victimological literature. Additionally, by analyzing the broader social and cultural implications of deepfakes, such as the erosion of public trust, the polarization of discourse, and the destabilization of democratic institutions, this research contributes not only to academic theory but also to practical societal resilience in the face of emerging digital threats.

METHOD

This research utilizes the normative legal research method, which is divided into three main approaches: conceptual, comparative, and futuristic.[12] First, the conceptual approach is used to construct a foundational understanding of deepfakes, particularly in relation to emerging forms of victimization resulting from AI-based digital manipulation. This approach facilitates the development of a solid theoretical framework by analyzing both classical and contemporary victimology theories, as well as legal concepts concerning victim protection in the digital era. Second, the comparative approach serves to examine and contrast the legal responses of various jurisdictions in addressing the misuse of deepfake technology. This comparative analysis aims to identify best practices and highlight the strengths and weaknesses within different legal systems, particularly in terms of preventive measures and law enforcement efforts. It also illustrates how jurisdictions with adaptive legal frameworks are better positioned to protect deepfake victims. Third, the futuristic approach distinguishes this study by aiming to predict the future trajectory of deepfake technology and its potential socio-legal challenges. Drawing from current trends in artificial intelligence, predictive literature, and global case patterns, this approach is instrumental in formulating forward-looking prevention and intervention strategies that can effectively address the evolving risks posed by deepfakes. This study is descriptive-prescriptive in nature, meaning it not only describes current phenomena but also offers normative recommendations for legal reform to enhance the protection of victims of digital victimization.[13] The collected data are analyzed in detail through content analysis techniques, which aim to understand and interpret the qualitative information obtained during the research process.[14]
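
To make the content-analysis step concrete, the minimal sketch below illustrates, with invented placeholder cases rather than the study's actual dataset, how qualitatively coded deepfake cases can be tallied by exploitation category; the category labels follow the four dominant forms identified in this study.

```python
# A minimal sketch of frequency-based content analysis.
# The case records below are hypothetical placeholders,
# not the dataset analyzed in this study.
from collections import Counter

# Each collected case is hand-coded into one exploitation category.
coded_cases = [
    ("case-01", "non-consensual pornography"),
    ("case-02", "political disinformation"),
    ("case-03", "financial scam"),
    ("case-04", "reputational sabotage"),
    ("case-05", "non-consensual pornography"),
]

# Tally how often each category appears across the coded corpus.
frequencies = Counter(category for _, category in coded_cases)
for category, count in frequencies.most_common():
    print(f"{category}: {count}")
```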

ANALYSIS AND DISCUSSION

Unmasking the Illusion: Advanced Techniques and Challenges in Identifying and Verifying Deepfake Content

In today's digital age, where information travels at the speed of light, humans face new challenges in distinguishing between reality and fiction.[15] The deepfake phenomenon, which refers to digital content manipulated using artificial intelligence technology, has raised significant challenges in the field of content authentication and verification.[16] As a starting point, one must consider the advanced techniques currently available or under development to identify and verify deepfake content. These include the analysis of metadata, digital forensic examination, and artificial neural networks trained to detect inconsistencies and manipulations in videos or images. The development of these technologies is an important cornerstone in building defenses against the threat of deepfakes. However, as identification techniques evolve, deepfake makers also continue to update and refine their methods, creating a seemingly endless technological arms race. This creates a number of significant challenges. First, there are challenges related to the speed with which deepfakes can be created and disseminated, which often exceeds the ability of researchers and law enforcement to detect and respond. In addition, currently available detection techniques still have limitations, often producing false positives or failing to detect more sophisticated deepfakes. Furthermore, society also faces ethical and legal challenges relating to intervening against deepfake content. While there is an urgent need to protect individuals and society from the negative impact of manipulated media, there are also important considerations related to the right to privacy and freedom of expression. How can we balance the need to verify content with respect for these fundamental rights?
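
To illustrate the neural-network detection approach mentioned above, the sketch below shows a toy convolutional classifier that scores a face image as real or synthetic. It is a simplified, assumption-laden example (the architecture, input resolution, and names are illustrative), not a production detector or any specific system from the cited literature.

```python
# A toy convolutional "real vs. synthetic" classifier, sketched as an
# assumption: layer sizes, input resolution, and names are illustrative
# and do not reproduce any specific detector from the literature.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers learn to pick up low-level artifacts
        # (blending seams, inconsistent textures) typical of synthesis.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # A single output logit: higher values suggest "synthetic".
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 1),  # 224x224 input -> 56x56 feature maps
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Score one 224x224 RGB face crop (a random tensor stands in for a frame).
model = DeepfakeDetector()
frame = torch.randn(1, 3, 224, 224)
prob_fake = torch.sigmoid(model(frame))
print(f"Estimated probability the frame is synthetic: {prob_fake.item():.2f}")
```

In practice such a model would be trained on large labeled corpora of genuine and generated faces, and, as noted above, its accuracy degrades as generation techniques evolve, which is precisely the arms-race dynamic this section describes.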

In the short period between 2019 and 2020, the cyber industry witnessed a dramatic surge in the creation of fake content, with an astonishing 900% increase. Projections suggest that this rate of growth will not only be maintained but is likely to accelerate, with some in-depth analyses predicting that up to 90% of online content may be synthetically generated by 2026. This pattern, which often results in fraud and social engineering attacks, not only erodes trust in digital technology but also opens up potential dangers that further threaten the stability and security of the business world. More recently, in 2022, a study revealed that as many as 66% of cybersecurity professionals had witnessed or experienced deepfake attacks within their organizations.[17]

In this modern era, the deepfake phenomenon has emerged as one of the most obvious and prominent criminal threats resulting from the utilization of artificial intelligence (AI).[18] This technology, which allows for the creation of highly realistic fake videos or images, has opened up access to a new and alarming form of crime.[19] Deepfake technology, which is rooted in the utilization of advanced AI algorithms, has the potential to be used in a variety of crimes, including extortion, fraud, and the spread of disinformation.[20] Not only do deepfakes create new cyber threats, but they also create unprecedented challenges in the fields of law, cybersecurity, and media ethics.[21] This is because deepfakes blur the line between reality and fiction, creating space for bad actors to carry out virtually unlimited manipulation and exploitation.[22] On an individual level, the impact of deepfakes can be devastating. Victims of deepfake crimes can experience severe psychological impacts, including emotional distress and trauma. Meanwhile, at the societal level, deepfakes can undermine public trust in media and institutions and even threaten social and political stability. To address this threat, there is an urgent need to develop effective strategies and tools to detect and combat fake content. This includes improving digital literacy among the general public as well as developing new technologies capable of accurately detecting digital manipulation. In addition, there is a need to build a robust legal framework that can address the unique challenges brought about by deepfakes. This includes the creation of new laws that can address cases of deepfake crime, as well as the provision of adequate support and protection for victims of this type of crime. By recognizing and responding to the real criminal threat presented by deepfakes, society can move towards a future where AI technology is used in an ethical and responsible manner while minimizing the potential for abuse and exploitation.

Amidst the complexity of the challenges presented by the deepfake phenomenon, it is urgent to step up educational initiatives and raise public awareness regarding the serious implications of this technology. This calls for the development and implementation of educational curricula that emphasize digital media literacy, preparing individuals to identify and respond to deepfakes in a critical and informed manner. Furthermore, the urgency to expand collaboration between public and private entities is becoming increasingly important in order to create technological and policy solutions capable of addressing the complex challenges brought about by deepfakes. In this context, special consideration should be given to the potential impact of these technologies on the integrity of democratic processes and public policy formulation. Deepfakes, with their ability to alter public perception and manipulate narratives, present a significant threat to social and political stability. Along with this, it is imperative to formulate strategies that can mitigate this risk, including through advocacy for high-quality journalism and transparency in the dissemination of public information. In addressing this dilemma, society is faced with a series of critical questions: How can we create more efficient and responsive technologies to identify and validate fake content? How do we balance protecting the public from the damaging effects of fake news without infringing on human rights to privacy and free speech? And how can we foster deep digital media literacy among the general public, facilitating safer and more conscious navigation through an ever-changing and uncertain information landscape? By exploring the answers to these questions, we can move towards a society where technology is used as a catalyst to promote truth and justice rather than as an instrument for deception and manipulation. Although faced with monumental challenges, through interdisciplinary cooperation and collaboration, we can aspire to achieve substantive progress in mitigating the deepfake phenomenon and protecting society from its detrimental effects.

The emergence of deepfake technology has raised a series of significant challenges spanning legal, ethical, and social dimensions, with profound impacts on victims. What follows is an in-depth elaboration of the challenges presented by the phenomenon of deepfakes and their relationship to the implications for victims:

Figure 1. The phenomenon of deepfakes and its relationship to the implications for victims

Victim manipulation and exploitation

Deepfake technology allows for the creation of highly realistic but entirely fabricated audio-visual content, making it extremely difficult to distinguish between real and fake material. This technology has become a serious threat, as it is increasingly used to manipulate and exploit individuals. The misuse of deepfakes can take many harmful forms, including financial fraud, emotional manipulation, and technology-facilitated sexual abuse. One of the most alarming examples involves the creation of fake images or videos that depict someone in compromising or humiliating situations. These are often used as tools for blackmail, harassment, or public shaming, causing deep emotional harm to the victims. In this context, the science of victimology plays a crucial role in examining the psychological and social impacts experienced by those targeted. Victims of deepfake abuse may suffer from trauma, anxiety, loss of personal trust, reputational damage, and even withdrawal from social or professional life. The deceptive nature of deepfakes not only violates personal privacy and safety but also highlights the urgent need for strong legal protections and technological safeguards. Addressing this challenge requires an interdisciplinary approach that includes law, psychology, and digital ethics to ensure that victims are protected and that the risks of such digital deception are effectively minimized.

Defamation

The use of deepfakes in targeting individuals can lead to severe defamation, causing irreparable harm to their professional and personal reputation. This modern form of character assassination leverages the power of realistic, yet falsified, audiovisual content to malign a person's image, often leading to public humiliation, loss of credibility, and potential career damage. In addressing the consequences of such defamation, the field of victimology becomes crucial. It not only analyzes the profound impact on the victims but also focuses on their rehabilitation. Understanding the psychological trauma the victims experience, the changes in societal perception that result from the defamation, and the difficulties they encounter in their personal and professional lives are all part of the process. Rehabilitation efforts aim to provide comprehensive support to the victims, including legal assistance to address and rectify the dissemination of false information, psychological counseling to cope with the emotional distress, and strategies to rebuild and restore their tarnished reputation. The intersection of deepfake technology and defamation thus highlights a pressing need for a multidisciplinary approach that combines technological solutions, legal frameworks, and psychological support to effectively mitigate the damage and aid in the recovery of victims.

Impact on public trust

Deepfakes, with their ability to create highly realistic yet entirely fabricated audiovisual content, have a profound impact on public trust, fostering confusion and suspicion among the general populace. This erosion of trust extends to key institutions, including the media and governmental bodies. In this scenario, the collective victim is the general public, who find themselves in a predicament where distinguishing between authentic and counterfeit news becomes increasingly challenging. The ramifications of this are significant: as people grapple with discerning truth from fiction, their faith in the integrity of information disseminated by media outlets and public institutions diminishes. This situation creates a fertile ground for widespread misinformation and disinformation, leading to a greater polarization of society. Individuals may become ensconced in their respective echo chambers, further exacerbating societal divisions and making it difficult to establish a common ground of shared truths. The challenge, therefore, lies not just in combating the technical aspects of deepfake technology but also in restoring and maintaining public trust. This involves implementing robust verification mechanisms, educating the public about media literacy, and fostering a culture of critical thinking and skepticism towards online content. In a world increasingly reliant on digital media, the ability to maintain public trust amidst the deluge of potential disinformation becomes not just a technological issue, but a cornerstone of a healthy, functioning democracy.

Threat to the democratic process

The emergence of deepfakes poses a significant threat to the democratic process, particularly in the context of political campaigns and elections. This advanced form of digital manipulation has the potential to undermine the very foundations of democracy by distorting the truth, misleading voters, and altering the perception of political figures and their policies. In this scenario, the victims extend beyond individuals to encompass society as a whole and the democratic system itself. The field of victimology, in this context, plays a pivotal role in analyzing and understanding the potential impacts of deepfake technology on democratic processes. It examines the broader consequences of how these fabricated representations can influence public opinion, sway election outcomes, and erode trust in democratic institutions. The concern is not just about the immediate effects of a specific deepfake but also about the long-term implications for democratic governance, including the risk of increased cynicism and apathy among the electorate. Addressing this threat requires a multifaceted approach. This includes developing technological tools to detect and flag deepfakes, creating legal frameworks to penalize their malicious use, and educating the public about media literacy to enhance their ability to discern real from manipulated content. Moreover, there is a need for proactive measures by social media platforms and news organizations to prevent the spread of fabricated content. Protecting the democratic process from the influence of these technologies is crucial for ensuring that public discourse and decision-making are based on accurate and reliable information, thereby safeguarding the public interest and maintaining the integrity of democratic systems.

Legal and regulatory challenges

The emergence of deepfake technology poses significant legal and regulatory challenges, particularly in tracing perpetrators and enforcing accountability. Current legal frameworks often fall short in addressing the complexities of deepfake-related crimes, due to the anonymity of creators, the rapid global spread of manipulated content, and unclear legal boundaries regarding consent, privacy, and freedom of expression. Victimology offers valuable insights by shifting the focus beyond punishment to include the protection, support, and recovery of victims such as through easier reporting mechanisms, rapid takedown procedures, and access to psychological assistance. However, this discussion lacks a critical examination of how existing legal systems are adapting or failing to adapt to the deepfake phenomenon. Most of the narrative focuses on the threats posed by deepfakes, without adequately exploring how legal norms and technological advancements could interact in crafting appropriate responses. There is also an absence of references to comparative legal models or existing regulatory instruments, such as the Budapest Convention, the UK’s Online Safety Bill, or the EU Digital Services Act, which could serve as reference points for future legislation. Furthermore, the integration of victim-centered legal approaches such as trauma-informed practices or restorative justice remains underdeveloped. A more robust legal response requires interdisciplinary collaboration between legal scholars, victimologists, technologists, and policymakers to ensure that laws evolve in parallel with digital threats and remain both effective and equitable in protecting individuals from deepfake-related harm.

The phenomenon of deepfakes has profound implications on various sectors of society.[23] First of all, they serve as tools for the manipulation and exploitation of individuals, with the potential to create serious trauma through harassment or blackmail. Victimology in this context serves to explore and understand the psychosocial impact experienced by victims. Furthermore, deepfakes also harm an individual's personal and professional reputation through defamation, where victimology assists in formulating strategies for the rehabilitation and restoration of the victim's reputation. At a macro level, the impact of deepfakes extends to the public sphere, undermining public trust in trusted institutions and media while posing serious challenges to the integrity of democratic processes, with society and democracy itself as the victims. Finally, from a legal perspective, deepfakes pose new regulatory challenges in addressing and punishing these increasingly complex digital crimes. In this regard, the science of victimology contributes by seeking new approaches to protect victims and facilitate legal proceedings against perpetrators.

Digital Victimization in the Deepfake Era: A Comprehensive Study of Psychosocial Impacts

In an increasingly digitized world, the rise of deepfake technology has opened a new chapter in cybercrime and exploitation.[24] Deepfakes, digital manipulations that use artificial intelligence to create fake videos or images, have changed the way humans understand and experience reality.[25] At the same time, they have opened the door to new types of exploitation and abuse, which have a profound psychosocial impact on victims. In this context, this study seeks to investigate the psychosocial impact of deepfakes on victims. Digital technologies have developed rapidly in recent decades, enabling innovations that were previously unimaginable. However, these advancements have also brought new threats, with deepfakes standing out as one of the most prominent contemporary issues.[26] In the face of this phenomenon, it is important to understand and address the psychosocial impact experienced by victims. This impact can be analyzed from several perspectives. On an individual level, victims of deepfakes often experience increased levels of stress, anxiety, and depression. They may feel isolated and helpless, with their reputation and self-image threatened by fake content created without their consent.[27] Shame, loss of dignity, and concern about society's reaction are also common feelings among victims.

From a social perspective, deepfakes have the potential to damage social relationships and affect group dynamics. Fake content that misrepresents a person can lead to reputational damage, divorce, and even violence.[28] In addition, deepfakes can be used to run discreditation campaigns against individuals or groups, capitalizing on people's prejudices and fears to achieve specific goals. Society and the legal system are also faced with the challenge of identifying and tackling deepfake crimes. Given the sophistication of this technology, it can be difficult to distinguish between genuine and fake content, which makes litigation more complex.[29] This also creates a need for new approaches to educating the public about the dangers of deepfakes and developing strategies to protect potential victims. To help victims cope with the psychosocial impact of deepfakes, it is important to develop effective coping strategies and interventions. These may include psychological and social support as well as legal interventions to bring perpetrators to justice. Education and awareness are also key to preventing the spread of fake content and protecting individuals from further exploitation.

In the face of the challenges posed by deepfake technology, it is critical to recognize and mitigate the psychosocial impacts experienced by victims. This involves an in-depth exploration of individual experiences and the initiation of coordinated efforts to develop effective coping strategies and interventions. Through this approach, the aim is to protect society from the threat of deepfakes and encourage more responsible and safe use of digital technology. As research advances and understanding of the impact of deepfakes deepens, there is a need to build comprehensive, victim-focused solutions to address this phenomenon. This includes the formulation of new policies and regulations, along with public education efforts regarding the risks associated with deepfakes. Only by combining efforts from different sectors of society can the negative impact of this technology be minimized, creating a safer and more inclusive digital environment for all. Overall, this review emphasizes the urgent need to address the psychosocial impacts of deepfakes, which affect both individuals and society at large. Through deeper understanding and coordinated strategies, the risks associated with this phenomenon can be reduced and the ethical and responsible use of digital technologies promoted.

In the context of the deepfake phenomenon, theories in victimology can be an important tool to explore and understand new dimensions of victimization. Here, we can consider some key theories in victimology to explain their correlation with deepfakes:

Victim Precipitation Theory

Victim precipitation theory highlights the potential role that victims may play in provoking or attracting criminal behavior. In the context of deepfakes, this theory becomes particularly relevant in analyzing how individuals may unknowingly expose themselves to digital manipulation through their online behavior. For example, freely sharing personal information or media on social platforms can expand opportunities for exploitation through deepfake technology. It therefore becomes imperative to understand and raise awareness of how careless use of social media or certain online behaviors can increase the risk of falling victim to deepfakes. In response, society needs to adopt proactive prevention strategies through digital media literacy education and training to help individuals identify and avoid the potential risks associated with these technologies. Furthermore, this also calls for the development of more sophisticated and effective protection mechanisms that can minimize the potential exploitation and adverse impacts of the deepfake phenomenon.

Rational Choice Theory

Rational choice theory is an approach in criminology that seeks to understand criminal behavior from the offender's perspective. The theory holds that a person is more likely to commit a crime if they perceive that the benefits of doing so outweigh the risks or punishments they may face. In the context of deepfakes, this theory can be used to explain how offenders utilize the technology in the belief that they can achieve certain goals, such as financial gain or damaging someone's reputation, while perceiving their risk of punishment as relatively low. For example, a deepfake offender may create a fake video to damage someone's reputation, believing that it is difficult to be identified or recognized as the perpetrator. They may see potential benefits in creating this manipulative content and hope to avoid punishment or serious legal consequences. As such, rational choice theory can be used to analyze offender motivation and behavior in the context of deepfakes, illustrating how offenders make decisions by weighing benefits against risks. This is important in the effort to develop more effective law enforcement strategies and prevent the misuse of deepfake technology.

Routine Activity Theory

Routine activity theory suggests that crime is more likely to occur when three key elements converge: a motivated offender, a suitable target, and the absence of effective guardianship. In the context of the deepfake phenomenon, this theory remains relevant as it provides a foundation for understanding the factors that influence the spread and use of manipulative content. With the increased use of technology and the internet, the opportunity to commit crimes such as deepfakes has grown. Deepfakes utilize artificial intelligence technology to create highly convincing content, whether visual or audio. This opportunity is widespread due to easier access to deepfake creation software, social media as a dissemination platform, and the lack of effective policing of content in circulation. In light of this, cybersecurity and surveillance have become more important. To counter the spread of deepfakes that have the potential to damage reputations or create serious consequences, stricter surveillance measures on social media platforms and the internet are needed. This includes developing more sophisticated deepfake detection tools and implementing policies that limit the spread of questionable content.
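
Read formally, the theory is a conjunction of three conditions, and the toy predicate below (the function and variable names are purely illustrative assumptions, not an established model) makes that structure explicit for a deepfake scenario.

```python
# A toy encoding of routine activity theory's three elements.
# Names are illustrative; this is a conceptual aid, not a predictive model.
def crime_opportunity(motivated_offender: bool,
                      suitable_target: bool,
                      capable_guardian: bool) -> bool:
    """Crime becomes likely only when all three conditions converge."""
    return motivated_offender and suitable_target and not capable_guardian

# A deepfake scenario: accessible generation tools (motivated offender),
# abundant personal media online (suitable target), and weak platform
# moderation (no capable guardian).
print(crime_opportunity(motivated_offender=True,
                        suitable_target=True,
                        capable_guardian=False))  # True
```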

Repeat Victimization Theory

In the case of deepfakes, victims often face the risk of repeated victimization. This is due to the manipulative nature of the content, which can easily be shared and reused by perpetrators or others who wish to further exploit the victim. For example, a deepfake that defames someone can be used repeatedly in various contexts to damage reputations or create ongoing discomfort for the victim. A key challenge in addressing this repeated victimization is providing ongoing support to the victim. Early intervention is critical to identifying and stopping the spread of harmful fake content. This can include reporting the content to social media platforms or relevant authorities, as well as supporting victims in taking legal action if necessary. Ongoing support is also needed to assist victims in their long-term recovery. This could include psychological counseling or mental support, legal assistance in facing the deepfake perpetrator in court, and rebuilding the victim's reputation and dignity. In addition, restorative justice approaches can be a useful model for dealing with deepfake cases by promoting reconciliation between the victim, the perpetrator (if recognized), and the community. Protecting victims from repeat victimization requires cross-sector efforts, including cooperation between social media platforms, law enforcement agencies, and victim support organizations. With this approach, we can hope to provide better protection to victims of deepfakes and reduce the psychosocial impact they experience.

Amidst the rush of digitization, the phenomenon of deepfakes is increasingly showing its impact on society, with victimology as a critical lens to understand and navigate this challenge. An analysis from a victimology perspective, which considers both offender and victim dynamics, offers valuable insights and response strategies for this digital crime. Firstly, victim precipitation theory highlights the importance of education and digital literacy in preventing victimization. Educating the public about the dangers of sharing personal information freely and teaching them about adequate privacy settings can be a proactive measure to reduce the incidence of victimization. In line with rational choice theory, preventive efforts should be accompanied by strict law enforcement, creating an environment where perpetrators reconsider their decision to develop or distribute deepfake content due to serious legal consequences. Furthermore, within the framework of routine activity theory, there is an urgency to improve cybersecurity and surveillance. Society should invest in technologies that are capable of detecting and counteracting the production and distribution of fake content while facilitating the reporting and removal of such content more efficiently. This is an important step to limit the opportunities for perpetrators to commit crimes. Equally important is understanding and addressing the impact of repeated victimization, as demonstrated by repeat victimization theory. It underscores the importance of early intervention and ongoing support, assisting victims in long-term recovery and rebuilding their lives with dignity. It also builds on the principles of restorative justice, encouraging reconciliation and recovery through open dialogue and collaboration between victims, communities, and social media platforms. As such, integrating victimology theory into our analysis of deepfakes not only enables a deeper understanding of the mechanisms of victimization in the digital age but also facilitates the development of more effective and holistic response strategies. Society is faced with the urgent challenge of mitigating the negative impact of deepfakes, and this victimology-based approach offers a smart and informed way to achieve this goal.

CONCLUSION

The phenomenon of deepfakes has raised a number of multidimensional challenges in modern society, creating a serious threat to truth, justice, and social stability. The profound psychosocial impact of deepfakes has emerged as the greatest challenge, with individuals being subjected to manipulation, exploitation, and victimization. In response, an approach that integrates the principles of victimology is important to understand and address the negative impact of this phenomenon. Public education on the ethical use of technology, better cybersecurity, and psychosocial support for victims are important pillars in developing effective response strategies. In addition, cross-sectoral cooperation and the development of stronger laws and regulations are needed to prevent and address these digital crimes. Amidst these challenges, a deeper understanding of key theories in victimology, such as victim precipitation theory and rational choice theory, as well as the integration of restorative justice principles, can facilitate the development of more holistic and inclusive response strategies. This includes not only developing more sophisticated detection and verification technologies but also creating a stronger, more resilient, and more knowledgeable society that can protect itself and others from the dangers of deepfakes. Overall, the response to the deepfake phenomenon should take a multipronged approach that focuses on technology, education, and inter-sectoral cooperation, moving towards a safer, more responsible, and more just digital world in which technology serves as a tool to create truth and justice rather than a threat to it.

References

[1] Huang L, Ren J. The Negative Impact of Emerging Technology: A Literature Review. In: Proceedings of the International Conference on E-Business and E-Government, ICEE 2010; 2010. p. 2576–2579. https://doi.org/10.1109/ICEE.2010.651

[2] Karnouskos S. Artificial Intelligence in Digital Media: The Era of Deepfakes. IEEE Transactions on Technology and Society. 2020 Jul;1(3):138–147. https://doi.org/10.1109/TTS.2020.3001312

[3] Chesney B, Citron D. Deepfakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review. 2019;107(6):1753–1820. https://doi.org/10.15779/Z38RV0D15J

[4] Sharma GD, Shah MI, Shahzad U, Jain M, Chopra R. Exploring the nexus between agriculture and greenhouse gas emissions in BIMSTEC region: The role of renewable energy and human capital as moderators. Journal of Environmental Management. 2021;297. https://doi.org/10.1016/j.jenvman.2021.113316

[5] Hancock JT, Bailenson JN. The Social Impact of Deepfakes. Cyberpsychology, Behavior, and Social Networking. 2021;24(3):149-152. https://doi.org/10.1089/cyber.2021.29208.jth

[6] Fattah EA. Victims and Victimology: The Facts and the Rhetoric. International Review of Victimology. 1989;1(1):43-66. https://doi.org/10.1177/026975808900100104

[7] Davis JR, Elias R. The Politics of Victimization: Victims, Victimology, and Human Rights. The Journal of Criminal Law and Criminology (1973-). 1988;78(4):1183. https://doi.org/10.2307/1143426

[8] Ani P. Share of Consumers Who Say They Could Detect a Deepfake Video Worldwide as of 2022. Statista; 2022. https://www.statista.com/

[9] Hardiansyah Z. Selebriti dan Tokoh Publik yang Jadi Korban Video Deepfake Selain Nagita Slavina. tekno.kompas.com; 2022.

[10] Matern F, Riess C, Stamminger M. Exploiting Visual Artifacts to Expose Deepfakes and Face Manipulations. In: 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW), Waikoloa Village, HI, USA; 2019. p. 83–92. https://doi.org/10.1109/WACVW.2019.00020

[11] Vaccari C, Chadwick A. Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Social Media + Society. 2020 01;6(1):2056305120903408. https://doi.org/10.1177/2056305120903408

[12] Effendi E, Fernando ZJ, Anditya AW, Chandra MJA. Trading in influence (Indonesia): A critical study. Cogent Social Sciences. 2023 Dec;9(1):2231621. https://doi.org/10.1080/23311886.2023.2231621

[13] Fernando ZJ, Pujiyono P, Susetyo H, Candra S, Putra PS. Preventing bribery in the private sector through legal reform based on Pancasila. Cogent Social Sciences. 2022 Dec;8(1):2138906. https://doi.org/10.1080/23311886.2022.2138906

[14] Putra PS, Fernando ZJ, Nunna BP, Anggriawan R. Judicial Transformation: Integration of AI Judges in Innovating Indonesia's Criminal Justice System. Kosmik Hukum. 2023 08;23(3):233. https://doi.org/10.30595/kosmikhukum.v23i3.18711

[15] Junius Fernando Z, Rozah U, Rochaeti N. The freedom of expression in Indonesia. Cogent Social Sciences. 2022 Dec;8(1):2103944. https://doi.org/10.1080/23311886.2022.2103944

[16] Maras M, Alexandrou A. Determining authenticity of video evidence in the age of artificial intelligence and in the wake of Deepfake videos. The International Journal of Evidence & Proof. 2019 07;23(3):255–262. https://doi.org/10.1177/1365712718807226

[17] World Economic Forum. How Can We Combat the Worrying Rise in Deepfake Content? https://www.weforum.org/agenda/2023/05/how-can-we-combat-the-worrying-rise-in-deepfake-content/. Accessed 2023.

[18] Campbell C, Plangger K, Sands S, Kietzmann J. Preparing for an Era of Deepfakes and AI-Generated Ads: A Framework for Understanding Responses to Manipulated Advertising. Journal of Advertising. 2022 01;51(1):22–38. https://doi.org/10.1080/00913367.2021.1909515

[19] Kirchengast T. Deepfakes and image manipulation: criminalisation and control. Information & Communications Technology Law. 2020 09;29(3):308–323. https://doi.org/10.1080/13600834.2020.1794615

[20] Mirsky Y, Lee W. The Creation and Detection of Deepfakes: A Survey. ACM Computing Surveys. 2022 01;54(1):1–41. https://doi.org/10.1145/3425780

[21] Diakopoulos N, Johnson D. Anticipating and addressing the ethical implications of deepfakes in the context of elections. New Media & Society. 2021 07;23(7):2072–2098. https://doi.org/10.1177/1461444820925811

[22] Burkell J, Gosse C. Nothing new here: Emphasizing the social and cultural context of deepfakes. First Monday. 2019 Dec;24(12). https://doi.org/10.5210/fm.v24i12.10287

[23] Ghazi-Tehrani AK, Pontell HN. Phishing Evolves: Analyzing the Enduring Cybercrime. Victims & Offenders. 2021 04;16(3):316–342. https://doi.org/10.1080/15564886.2020.1829224

[24] Kerner C, Risse M. Beyond Porn and Discreditation: Epistemic Promises and Perils of Deepfake Technology in Digital Lifeworlds. Moral Philosophy and Politics. 2021 04;8(1):81–108. https://doi.org/10.1515/mopp-2020-0024

[25] Reddy RV, Nethi A, Sukhija S, Gupta Y. Detecting DeepFakes: A Deep Convolutional Neural Network Approach with Depth Wise Separable Convolutions. In: 2023 International Conference on Emerging Techniques in Computational Intelligence (ICETCI), Hyderabad, India; 2023. p. 33–38. https://doi.org/10.1109/ICETCI58599.2023.10331449

[26] Langguth J, Pogorelov K, Brenner S, Filkuková P, Schroeder DT. Don't Trust Your Eyes: Image Manipulation in the Age of DeepFakes. Frontiers in Communication. 2021 05;6:632317. https://doi.org/10.3389/fcomm.2021.632317

[27] Lucas KT. Deepfakes and Domestic Violence: Perpetrating Intimate Partner Abuse Using Video Technology. Victims & Offenders. 2022 07;17(5):647–659. https://doi.org/10.1080/15564886.2022.2036656

[28] Dobber T, Metoui N, Trilling D, Helberger N, De Vreese C. Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes?. The International Journal of Press/Politics. 2021 01;26(1):69–91. https://doi.org/10.1177/1940161220944364

[29] Delfino R. Deepfakes on Trial: A Call to Expand the Trial Judge's Gatekeeping Role to Protect Legal Proceedings from Technological Fakery. SSRN Electronic Journal. 2022. https://doi.org/10.2139/ssrn.4032094


Published

15-05-2025

How to Cite

Ali, M., Fernando, Z. J., Huda, C., & Mahmutarom, M. (2025). Deepfakes and Victimology: Exploring the Impact of Digital Manipulation on Victims. Substantive Justice International Journal of Law, 8(1). https://doi.org/10.56087/substantivejustice.v8i1.306
