Journal of International Commercial Law and Technology
2026, Volume 7, Issue 1 : 595-602 doi: 10.61336/Jiclt/26-01-60
Research Article
AI-Generated Deepfakes and Political Closures: A Comparative Study of India, UK, and USA (2014–2024)
Received: Jan. 2, 2026 | Revised: Jan. 10, 2026 | Accepted: Feb. 2, 2026 | Published: Feb. 18, 2026
Abstract

Synthetic media, specifically AI-generated deepfakes, have become powerful tools that shape political discourse across nations. Embedded in multi-layered narratives, their influence extends to shaping public perception itself. This study examines how deepfakes narrow the range of viewpoints in public discourse and marginalise dissenting voices, creating “political closures.” It also identifies the actors behind the deployment of deepfakes and their motivations. The researchers analyse 38 deepfake incidents on digital platforms between 2014 and 2024 across three countries: India, the UK, and the USA. India saw a surge in deepfakes before and during elections amid insufficient regulation; the USA saw a tussle between free speech and platforms’ self-policing; the UK was marked by debate over forward-looking legislation. The dominant lenses through which deepfakes were presented were security concerns and calls for stricter regulation, paving the way for heightened platform monitoring, speech clampdowns, and government influence over what is and is not acceptable in public discourse. While these appear to be solutions, they also risk threatening free speech in a democratic setup. Ultimately, deepfakes are not just synthetically generated audio-visual clips but part of a framework through which tech giants and states can settle what counts as the “ultimate truth” and what becomes of the rest.

INTRODUCTION

The last couple of years have seen a dramatic rise in generative AI technology (Reality Defender, 2025). The technology has become adept at creating realities out of thin air: audio-visual content depicting events that never happened. Deepfakes began as geeky experiments and have become frontline weapons in politics. The decade 2014–2024 saw an explosion of deepfakes worldwide (techUK, 2024). India, the UK, and the USA all saw a rise in the generation and propagation of deepfakes, particularly as elections heated up or in times of political instability (Al Jazeera, 2025; Drishti IAS, 2025). But to say that deepfakes exist in isolation would be a misrepresentation of facts.

 

Deepfakes are very strategically placed threads that are interwoven with stories spun across channels by media and other actors. These layered media narratives (Benkler et al., 2018) are powered by bots and algorithm boosts to sway the public and box in debate.

 

The 2024 Indian general election saw a sea of deepfakes that spread through WhatsApp groups and X trends fuelled by fake accounts (Blackbird.AI, 2025). The lack of technical infrastructure capable of detecting and flagging these fakes was the primary reason misinformation spread at industrial scale (Mehta, 2024). Political closures were created by narrowing the range of acceptable viewpoints in public discourse and sidelining the opposition, negatively affecting voter participation (Kubin & von Sikorski, 2021).

 

This paper argues that deepfakes serve as tools of intentional technological overlay (Westerlund, 2019), crafted and rolled out by a "technological intelligentsia" and media elites at the behest of, or in collaboration with, the state to create and strengthen “political closures” and endanger democratic processes (Abbas, 2024). So, instead of viewing them as fringe falsehoods, one needs to view them in the context of data-targeted ads, propaganda code, and platform rules that threaten open debate.

 

This study aims to accomplish three things: first, examine the impact of political deepfakes on three heavyweight democracies side by side; second, assess the extent to which deepfakes can be used as tools to create political closures; third, understand how and why quick fixes such as regulations and policies fail to address the problem and instead extend state power over narratives.

 

REVIEW OF LITERATURE

Layered Media Narratives and Political Communication

Political realities are created when some facts are highlighted and others omitted, strategically embedded within frames that steer how citizens understand them (Entman, 1993; Benkler et al., 2018). As the scholar Molly Andrews (2014) noted, politics in general is “the stage for competing stories to be told about the same phenomena.” In the digital age, the power of storytelling belongs to journalists, politicians, activists, and ordinary people alike, and algorithms amplify it. These actors build parallel narratives across fragmented platforms, creating “hybrid media systems” (Chadwick & Howard, 2017).

 

Within these media systems lies a specific threat: the orchestrated construction of “layered media narratives,” first noted by researchers Benkler, Faris, and Roberts (2018). These narratives are intentionally engineered and amplified using a variety of techniques, including misinformation, disinformation, logical fallacies, astroturfing, algorithmic manipulation and computational propaganda. The power lies not in the creation of these narratives but in their placement within networks and their repetition along those networks to create an illusion of consensus.

 

Synthetic Media, Deepfakes, and Democratic Risk

Deepfakes are defined as synthetic media created using artificial intelligence to modify pictures and audio, giving the false impression that the individuals depicted actually said or did the things shown (Chesney & Citron, 2019). Cheapfakes, a more accessible and lower-quality cousin of deepfakes, are audio-visual manipulations created using readily available, often free-to-use tools. While the output is less refined, it is good enough to have a persuasive effect on viewers.

 

The Oxford Internet Institute (Bradshaw & Howard, 2019) documented a 150% surge in social media manipulation campaigns between 2017 and 2019. Barari and Munger (2021) further explored the impact of deepfakes on a representative sample, in which almost 50% of participants were taken in by deepfakes' deceptive powers, suggesting that audiences, despite being aware that deepfakes exist, are still likely to fall for them.

 

Nor do deepfakes function in isolation. Vosoughi, Roy, and Aral (2018) found that fake news spreads significantly faster and deeper than real news across categories, most prominently in political news. Paris and Donovan (2019) show that engagement-driven recommendation algorithms amplify polarizing and emotionally provocative content and fail to flag or exclude synthetic media.

 

Political Closure and Democratic Erosion

Scholars across multiple studies have documented the phenomena of “echo chambers” and “ideological segregation” in online news consumption, in which users are exposed to information aligning with and reinforcing their existing beliefs (Hampton et al., 2014; Gentzkow & Shapiro, 2011). These echo chambers are sites where debate happens in silos, narrowing the diversity of perspectives and limiting the spaces where democratic discourse can exist. This phenomenon can be interpreted as “political closure” (Bakir, 2013).

 

This reduction of discursive space involves a systemic, often politically motivated and elite-driven process in which media coverage and digital environments deliberately exclude, diminish or delegitimize dissenting voices, creating a monolithic public sphere (Beck, 1992, 2006). When certain narratives dominate public discourse, alternative viewpoints become sidelined or obscure to the broader public. The result is the containment of dissent within a narrowed range of acceptable debate (Blumler, 1990; Blumler & Gurevitch, 1995).

 

The Technological Intelligentsia and Computational Propaganda

The main actors possessing the technical expertise and resources to interfere with and manipulate media narratives at mass scale are tech giants and media elites (Woolley & Howard, 2018; Susser et al., 2019), who work with governments and political parties out of various monetary and power interests. These players form a "technological intelligentsia" that acts as the gatekeeper of the new digital age, often influencing public discourse through the production and curation of content or by controlling the technical infrastructure of the media (Wang, 2014).

 

This is not merely a possibility; it is what happened around the 2016 US election and UK referendum, when the firm Cambridge Analytica used Facebook data to profile users psychographically and target potential voters with customized messages designed to exploit their psychological vulnerabilities (Carroll, 2025; Mukunde, 2024; Simms & Redden, 2022). It remains one of the most prominent examples of the weaponization of technology for political gain (Tarasov & Johnson, 2025; Zarouali et al., 2025).

 

Research Objectives and Questions

This study aims to understand the following:

·        Map political deepfakes' evolution in India, the UK, and the USA (2014-2024).

·        Understand how mainstream news, social media actors, and platforms spin narratives.

·        Theorize deepfakes as infrastructure for political closures.

·        Compare regulatory and platform responses across the three democracies.

 

Research Questions:

·        RQ1: How have the frequency, targets, and contexts of political deepfakes evolved in India, the UK, and the USA between 2014 and 2024?

·        RQ2: What are the dominant frames through which deepfake incidents are narrated in mainstream and digital media, and how do these frames relate to political closures?

·        RQ3: How do deepfake-related narratives function as justification for expanded surveillance, stricter speech regulations, and increased state/platform authority?

·        RQ4: How do differences in media systems and regulatory regimes (India vs. UK vs. USA) shape the deployment, visibility, and interpretation of deepfakes?

 

RESEARCH METHODOLOGY

Research Design

For the purposes of this study, a comparative design was used along with content and framing analysis. This mixed-methods approach includes descriptive mapping of incidents (quantifying incidents across the three nations) and qualitative narrative analysis (exploring the interpretative structures). The framework provides a strong foundation for understanding cross-national patterns, causal dynamics and contextual influences.

 

 

Sampling and Data Sources

Primary Data: 38 political deepfake incidents (India n=18, USA n=10, UK n=10). The incidents are publicly documented cases where AI-generated synthetic media were utilised for political purposes such as electoral campaigns, delegitimizing of opponents, propaganda, voter suppression or spreading political falsehoods.

 

Secondary: News coverage from major outlets (BBC, NBC, CNN, Reuters, The Guardian, The Hindu, Indian Express) reporting on deepfake incidents; platform policy statements and fact-checks; government advisories and legal instruments (IT Rules 2021, Online Safety Bill, congressional testimonies); and academic papers on deepfakes and election contexts.

 

 

Table 1: Deepfake Incident Distribution

Country | Incidents | Years Covered | Key Sources
India | 18 | 2019–2024 | WhatsApp, X, political parties
USA | 10 | 2016–2024 | Foreign actors, social media, troll networks
UK | 10 | 2022–2024 | Elections, fraud cases

 

Variables and Coding Scheme

For each incident, the following variables were coded:

Incident-level variables:

·        Country (India / USA / UK)

·        Year

·        Target (politician, political party, celebrity, institution)

·        Election context (yes/no; if yes, electoral cycle)

·        Alleged source/actor (political party, troll network, foreign actor, unknown)

·        Platform(s) of circulation (WhatsApp, X/Twitter, Facebook, YouTube, mainstream TV, other)

Narrative-level variables (framing analysis):

·        Problem definition: What is framed as "the problem"? (Technology itself, "bad actors," voter gullibility, opposition, state surveillance, platform failure, absence of regulation)

·        Causal attribution: Who is blamed for the deepfake? (Foreign state, opposition party, tech company, platform users, lack of regulation)

·        Moral evaluation: What is the ethical stance? (Outrageous violation, humorous spectacle, inevitable technology development, threat to democracy)

·        Treatment recommendation: What solution is proposed? (Regulation and takedowns, platform bans and moderation, user media literacy, increased surveillance and detection technology, legal penalties)

 

Political closure variables:

·        PC1: Justifies expansion of surveillance, data collection, or monitoring.

·        PC2: Justifies new speech restrictions, content takedowns, or speech regulations.

·        PC3: Delegitimizes specific political actors, categories, or movements.

·        PC4: Promotes cynicism or disengagement ("nothing is real," "you cannot trust anything").
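The coding scheme above can be sketched as a simple record structure. The following Python sketch is purely illustrative: the field names, default values, and the example incident coding are the editors' assumptions, not the authors' actual coding instrument.

```python
from dataclasses import dataclass, field

@dataclass
class DeepfakeIncident:
    # Incident-level variables
    country: str              # "India" | "USA" | "UK"
    year: int
    target: str               # politician, party, celebrity, institution
    election_context: bool
    alleged_actor: str        # party, troll network, foreign actor, unknown
    platforms: list[str] = field(default_factory=list)
    # Narrative-level variables (Entman's four framing components)
    problem_definition: str = ""
    causal_attribution: str = ""
    moral_evaluation: str = ""
    treatment_recommendation: str = ""
    # Political closure variables (binary codes PC1-PC4)
    pc: dict[str, bool] = field(default_factory=lambda: {
        "PC1": False, "PC2": False, "PC3": False, "PC4": False})

# Example: the Biden robocall incident, coded illustratively
robocall = DeepfakeIncident(
    country="USA", year=2024, target="politician",
    election_context=True, alleged_actor="unknown",
    platforms=["Phone"],
    problem_definition="voter suppression",
    causal_attribution="bad actors",
    moral_evaluation="threat to democracy",
    treatment_recommendation="legal penalties",
)
robocall.pc["PC4"] = True  # promotes cynicism/disengagement
```

Structuring each case this way keeps the incident-level, framing, and closure codes attached to a single record, which simplifies the cross-tabulations used in the comparative analysis.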

 

Analysis Procedure

·        Descriptive incident mapping: Deepfake incidents were organized chronologically and by country to identify trends in frequency, targets, and contexts (RQ1).

·        Framing analysis: For a stratified subsample of 12 incidents (4 per country, selected for narrative prominence and documentation richness), all available news coverage, platform statements, and fact-checks were subjected to framing analysis using Entman's (1993) four-component model: problem definition, causal attribution, moral evaluation, and treatment recommendation.

·        Political closure coding: Narratives surrounding deepfake incidents were inductively coded for how they justify regulatory expansion, speech restrictions, or state authority, and how they marginalize dissenting voices (RQ3).

·        Comparative analysis: Cross-national patterns in incident types, framing, and closure mechanisms were identified and theorized within the framework of layered media narratives and the technological intelligentsia.
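The descriptive-mapping step above amounts to counting coded incidents along each variable. A minimal sketch using only the standard library is shown below; the toy records are hypothetical, not the study's dataset.

```python
from collections import Counter

# Hypothetical coded incidents: (country, year, dominant_frame)
incidents = [
    ("India", 2024, "Security Threat"),
    ("India", 2024, "Security Threat"),
    ("USA", 2024, "Truth Crisis"),
    ("UK", 2023, "Platform Failure"),
]

# Tally incidents per country (RQ1) and per dominant frame (RQ2)
by_country = Counter(c for c, _, _ in incidents)
by_frame = Counter(f for _, _, f in incidents)

print(by_country)  # Counter({'India': 2, 'USA': 1, 'UK': 1})
print(by_frame)    # Counter({'Security Threat': 2, 'Truth Crisis': 1, 'Platform Failure': 1})
```

With the full 38-incident dataset, the same tallies would reproduce the country distribution in Table 1 and feed the cross-national frame comparison.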

 

FINDINGS AND DISCUSSION

Evolution of Political Deepfakes: A Decade of Escalation

India: A huge number of deepfakes, strategically deployed by political parties, spread across social media channels during the 2024 Indian general election. Indian voters received more than 50 million AI-generated voice calls impersonating local politicians delivering a party message, and 75% of voters encountered at least one deepfake (AI-generated campaign videos, memes or voice clones) during the campaign, making it the largest such campaign globally. Equally notable is the speed at which deepfakes circulated through WhatsApp groups: fact-checkers could not keep up, and platform systems failed to flag them.

 

USA: The USA treated its initial deepfakes as isolated novelties or scams. The 2016 election did see deepfakes of Barack Obama and Hillary Clinton, but they were largely dismissed and had little impact on voters. One prominent incident was a voice call impersonating President Biden that urged New Hampshire voters not to vote in the upcoming primary. A number of sextortion scams also surfaced, targeting both minors and adults. Unlike India's coordinated strategies, US deepfake incidents were decentralised, coming from scammers, political groups, and foreign actors, and were met with weak platform responses.

 

UK: The UK was slow to produce regulation but quick to anticipate the impact of deepfakes on its democracy. The Crown Prosecution Service (CPS) warned ahead of time that the UK was headed towards its first "deepfake election" in 2024. From Rishi Sunak and Keir Starmer to King Charles III, public figures across the UK were targeted by deepfake creators. After an AI-generated fake CFO was used to scam a British firm out of 25 million pounds, the UK government immediately engaged in debates around public safety. The Online Safety Bill proposed legal and policy frameworks to tackle deepfakes, citing concerns over electoral integrity and public safety.

Cross-national pattern: Across all three nations, deepfakes became part of broader electoral and media systems; the key difference lies in how they were used. India leads in scale and coordination, the UK focused on prevention, and the USA saw the most decentralised, and often criminal, use of deepfakes. In every case, deepfakes worked within existing media ecosystems alongside bots, algorithms and targeting; they are not the isolated threats they were initially believed to be.

 

Table 2: Country-Specific Deepfake Traits

Aspect | India | USA | UK
Peak Period | 2024 elections | 2020–2024 primaries | 2024 pre-election
Scale/Reach | 50M+ voice calls; 75% exposure | Targeted calls/scams | Fraud + political hits
Main Platforms | WhatsApp, X | Phone, social media | Social and finance apps
Coordination | Party-led | Decentralized | Institutional warning
 

Framing Deepfakes: Security, Truth, and Regulation

Analysis of news coverage and institutional discourse surrounding deepfake incidents reveals four dominant frames:

 

Frame 1: The Security Threat Frame

In India, the dominant framing of deepfakes is as a threat to both “electoral integrity” and “national security.” While the Election Commission of India (ECI) ordered political parties to take deepfakes down within three hours, enforcement was uneven. News coverage focused on synthetic media's potential to damage voter confidence and election legitimacy. This frame justified the ECI's regulatory actions but also expanded state control over political speech. The pattern is similar in the UK, where independent bodies raised alarms about deepfakes as a security threat requiring institutional response. Across the three democracies, this frame supports growing state surveillance and regulatory authority.

 

Frame 2: The Truth Crisis Frame

The “truth crisis” frame dominated in the USA, continually negotiated through the lenses of free-speech protection and the perceived need for stricter platform regulation. In India, it justified regulatory and platform interventions, pointing to Meta's failure to label AI-generated content, which was considered a fundamental failure to protect the truth. This perspective paints deepfakes as part of a broader "crisis of truth" in which trust in information systems and institutions is weakened, and it demands solutions that avoid undue restrictions on expression.

 

Frame 3: The Platform Failure Frame

This frame is consistent across the three countries: platforms are presented as weak defenders of information integrity. Prime examples are the deepfake bots on X, YouTube allowing ads containing misinformation to run, and Meta's failure to label AI-generated content. The frame argues that platforms should self-regulate more effectively but does not always call for state intervention. It encourages platform-led governance and puts the onus on the platforms themselves, which also runs the risk of selective content restrictions under the guise of user "safety."

 

Frame 4: The Technological Inevitability Frame

A comparatively smaller frame presents deepfakes as inevitable outcomes of technological advancement, often wrapped in optimism. Some Indian reports even called AI a “net positive for democracy,” arguing that it enabled parties to reach voters across language barriers. This view normalizes the existence of deepfakes and reduces the emphasis on stricter regulation, indirectly allowing their use to continue.

 

Closure implications: All four frames project deepfakes as a danger requiring institutional action by states, regulators, or platforms. None of them highlights the importance of independent journalism and fact-checking, civic media literacy, or open democratic debate. Collectively, these narratives reinforce state power and platform agency, turning deepfakes into a justification for political closures.

 

Table 3: Framing Breakdown

Frame | Problem Definition | Causal Blame | Moral Evaluation | Recommended Fix
Security Threat | Vote/national risk | Bad actors/state | Democratic threat | Regulation/surveillance
Truth Crisis | Epistemic collapse | Tech/platforms | Outrage | Labels/moderation
Platform Failure | Guard fail | Tech giants | Betrayal | Self-regulation/algorithms
Tech Inevitability | Progress side effect | Inevitable | Resignation/optimism | Adapt/no big regulations

 

Deepfakes as Infrastructure for Political Closure

Analysis reveals three mechanisms through which deepfakes function as infrastructure for political closure:

 

Mechanism 1: Delegitimisation of Opposition

Deepfakes of prominent Indian opposition leaders such as Rahul Gandhi were spread with false claims and mocking imagery; one showed an opposition leader behind bars, casually playing guitar and singing. The aim was to weaken the opposition without direct engagement: instead of debating actual issues, deepfakes were used to ridicule opponents. The USA, similarly, saw deepfakes of Hillary Clinton during her presidential run, with imagery reinforcing the "lock her up" slogan without offering factual critique. Portraying opponents as absurd or untrustworthy erodes public trust in their candidacy, often leaving people feeling they have no choice but to vote with the majority.

 

Mechanism 2: Voter Suppression and Disengagement

The cloned voice calls made on behalf of President Biden, telling Democratic voters in New Hampshire not to cast their votes in the upcoming primary, are a clear example of direct voter suppression using AI technology. A subtler yet profound impact, the "cynicism effect," is observed when deepfakes and misinformation saturate the public sphere: voters come to feel that “anything can be faked” and are likely to disengage entirely. This apathy causes voters to withdraw rather than participate, creating another form of political closure.

 

Mechanism 3: Justifying Regulatory Expansion

The potential danger deepfakes pose is often used as grounds to push for platform or state action and to justify broader control mechanisms. In India, calls for stricter IT rules and higher platform liability were made to tackle deepfakes, but they also expanded state oversight of online speech. The UK's Online Safety Bill granted the regulator wide authority over vaguely defined "harmful content", opening the door to its potential use against dissent in online discourse. In the USA, the First Amendment limits the state's direct involvement, but platform self-regulation has been on the rise, and safety concerns and platform policies are regularly used to demonetize or ban content and users.

 

Regulatory Responses and the Paradox of Closure

India: The IT Rules 2021 already require platforms to follow government takedown orders and appoint grievance officers answerable to the state. Deepfake cases were absorbed into this framework. Platforms removed flagged content, but the deeper issue is that the state’s power to define and erase “misinformation” remained untouched.

 

UK: The Online Safety Bill empowers Ofcom to regulate “harmful” online content and includes provisions on deepfakes and synthetic media. Although it aims to protect users, the broad definition of “harm” and the wide discretion given to regulators risk enabling political closure under the guise of safety.

 

USA: With no dedicated deepfake law in place, the US relies on platform self-regulation, supplemented by existing legal pathways for fraud or identity theft committed using deepfakes. Yet closure persists in other forms. Through demonetization, algorithmic downranking, content moderation and shadowbanning, platforms act in sync with government systems, and these takedowns are often opaque and questionable.

 

Paradox: In all three democracies, both state-led regulation (as seen in India and the UK) and platform governance (as seen in the USA) serve as means of institutional control over digital narratives. The collective response to deepfakes centralises interpretative power, paradoxically deepening political closure instead of addressing it.

 

Table 4: Regulatory Comparison

Country | Key Framework | Deepfake Fit | Closure Risk
India | IT Rules 2021 | Takedowns on government orders | State definition of misinformation
UK | Online Safety Bill | Ofcom harm regulations | Vague discretion
USA | First Amendment and platform self-regulation | Moderation/demonetization on safety grounds | Opaque algorithmic control

 

CONCLUSION

The decade 2014–2024 saw deepfakes evolve from technical curiosities into tools of political warfare. This research shows that deepfakes are not isolated fragments of accidental disinformation but deliberately crafted, layered pieces woven into media ecosystems by tech elites. India saw deepfakes targeting and influencing voters, the USA saw deepfakes scamming the public, and the UK anticipated the harm while experiencing multiple incidents. All of these events weakened public trust and participation while simultaneously strengthening platform and state control over information, justified through dominant frames of security threats, truth crises, platform failures, and technological inevitability.

 

But instead of restoring public faith in democracies and media systems, these regulations centralised the authority and turned deepfakes into tools for consensus-making and closure of democratic alternatives.

 

Limitations

Data availability: Deepfake incidents are not routinely catalogued; relying on published databases and press coverage may understate their number, particularly in India, where local-language deepfakes may go unrecorded internationally.

Problems with attribution: This study acknowledges that it is often impossible to identify the source of a deepfake and that attributions in news reports are occasionally speculative.

 

Temporal scope: Rapid institutional and technological change occurred between 2014 and 2024; earlier eras may not have had much documentation of deepfakes because of low awareness and technical complexity.

 

Selection bias: The "dark matter" of deepfakes propagating in closed groups (like WhatsApp) is mostly unquantified; the paper concentrates on documented, publicly accessible deepfake instances.

 

Recommendations

Future research should pursue systematic incident tracking and attribution techniques to build a cross-national deepfake database capable of distinguishing state, partisan, platform, and non-institutional origins of synthetic media.

 

In order to prevent centralized control, policy regulators must take transparency and democratic oversight into consideration.

 

Platforms ought to establish guidelines for "harmful content" that require labeling, and their algorithms should prioritize diverse points of view over cohesive narratives.

 

The development of fact-checking infrastructures, media literacy, and independent journalism should all be supported by civil society.

 

REFERENCES

1.      Abbas, F., and Araz Taeihagh. “Unmasking Deepfakes: A Systematic Review of Deepfake Detection and Generation Techniques Using Artificial Intelligence.” Expert Systems with Applications, vol. 252, 2024, article 124260. https://doi.org/10.1016/j.eswa.2024.124260.

2.      “Deepfake Democracy: Behind the AI Trickery Shaping India’s 2024 Election.” Al Jazeera, 20 Feb. 2024, https://www.aljazeera.com/news/2024/2/20/deepfake-democracy-behind-the-ai-trickery-shaping-indias-2024-elections.

3.      Andrews, Molly. Narrative Imagination and Everyday Life. Oxford UP, 2014.

4.      Bakir, Vian. News, Media and Terrorism: Public and Political Responses to 9/11. Sage, 2013.

5.      Barari, Soubhik, and Michael C. Munger. “Disinformation and the Millennial Generation.” American Politics Research, vol. 49, no. 3, 2021, pp. 262–278. https://doi.org/10.1177/1532673X20980138.

6.      Beck, Ulrich. Risk Society: Towards a New Modernity. Translated by Mark Ritter, Sage, 1992.

7.      Beck, Ulrich. The Cosmopolitan Vision. Polity Press, 2006.

8.      Benkler, Yochai, Robert Faris, and Hal Roberts. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford UP, 2018.

9.      “Indian Voters Inundated with Deepfakes During the 2024 General Election.” Blackbird.AI, 4 Feb. 2025, https://blackbird.ai/blog/india-election-deepfakes/.

10.   Blumler, Jay G. “Elections, the Media and the Modern Publicity Process.” Public Communication: The New Imperatives, edited by Marjorie Ferguson, Sage, 1990, pp. 101–113.

11.   Blumler, Jay G., and Michael Gurevitch. The Crisis of Public Communication. Routledge, 1995.

12.   Bradshaw, Samantha, and Philip N. Howard. The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation. Oxford Internet Institute, University of Oxford, 2019.

13.   Carroll, Rob. Cambridge Analytica Revisited: Data, Democracy, and Digital Manipulation. Routledge, 2025.

14.   Chadwick, Andrew, and Philip N. Howard, editors. The Routledge Handbook of Internet Politics. 2nd ed., Routledge, 2017.

15.   Chesney, Robert, and Danielle Citron. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.” California Law Review, vol. 107, no. 6, 2019, pp. 1753–1820. https://doi.org/10.15779/Z38BG2PF6K.

16.   “Deepfakes in Elections: Challenges and Mitigation.” Drishti IAS, 13 May 2024, https://www.drishtiias.com/daily-updates/daily-news-editorials/deepfakes-in-elections-challenges-and-mitigation.

17.   Entman, Robert M. “Framing: Toward Clarification of a Fractured Paradigm.” Journal of Communication, vol. 43, no. 4, 1993, pp. 51–58. https://doi.org/10.1111/j.1460-2466.1993.tb01304.x.

18.   Gentzkow, Matthew, and Jesse M. Shapiro. “Ideological Segregation Online and Offline.” The Quarterly Journal of Economics, vol. 126, no. 4, 2011, pp. 1799–1839. https://doi.org/10.1093/qje/qjr044.

19.   Hampton, Keith N., et al. “Social Media and the ‘Spiral of Silence.’” Pew Research Center, 26 Aug. 2014, https://www.pewresearch.org/internet/2014/08/26/social-media-and-the-spiral-of-silence/.

20.   Kubin, Emily, and Christian von Sikorski. “The Role of (Social) Media in Political Polarization: A Systematic Review.” Annals of the International Communication Association, vol. 45, no. 3, 2021, pp. 188–206. https://doi.org/10.1080/23808985.2021.1976070.

21.   Mehta, Shashank. “Deep Fakes, Deeper Impacts: AI's Role in the 2024 Indian General Election and Beyond.” GNET, 10 Sept. 2024, https://gnet-research.org/2024/09/11/deep-fakes-deeper-impacts-ais-role-in-the-2024-indian-general-election-and-beyond/.

22.   Paris, Britt, and Joan Donovan. Deepfakes and Synthetic Media in the Political Sphere. Data & Society Research Institute, 2019.

23.   “Can Deepfakes Impact Elections?” Reality Defender, 15 Dec. 2025, https://www.realitydefender.com/insights/how-deepfakes-can-impact-elections.

24.   “Deepfakes and Disinformation: What Impact Could This Have on Elections in 2024?” techUK, 15 Jan. 2024, https://www.techuk.org/resource/deepfakes-and-disinformation-what-impact-could-this-have-on-elections-in-2024.html.

25.   Vosoughi, Soroush, Deb Roy, and Sinan Aral. “The Spread of True and False News Online.” Science, vol. 359, no. 6380, 2018, pp. 1146–1151. https://doi.org/10.1126/science.aap9559.

26.   Westerlund, Mika. “The Emergence of Deepfake Technology: A Review.” Technology Innovation Management Review, vol. 9, no. 11, 2019, pp. 39–52. https://doi.org/10.22215/timreview/1282.

27.   Woolley, Samuel C., and Philip N. Howard. Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford UP, 2018.
