Deepfake videos have moved from internet novelty to a global legal, social, and business problem. They can be used for satire, film production, accessibility, education, and creative storytelling. But they can also be used to fake consent, steal identity, damage reputations, spread sexual abuse, and fuel fraud at scale. Regulators are responding, platforms are tightening rules, and AI companies are adding labels, provenance tools, and consent safeguards. Even so, the core issue remains simple: when a person’s face, voice, or likeness is turned into synthetic media without permission, the harm is often very real.
This article explains what deepfake videos are, what rights people may have when they are targeted, how deepfake videos affect those involved, and what AI companies are doing to fix the problem worldwide. It is educational, not legal advice. Where deepfake laws differ, the safest principle is consent first, disclosure second, and fast removal when harm occurs.

What are deepfake videos?
Deepfake videos are synthetic or manipulated videos created with AI that make a real person appear to say or do something they never actually said or did. Australia’s eSafety Commissioner describes deepfakes as digital photos, videos, or sound files of a real person created with AI to produce an extremely realistic but false depiction.
Not every edited video is a deepfake. A simple filter, colour correction, or cartoon effect is usually just an edit. A deepfake video crosses into higher risk when it imitates a real person’s face, body, voice, or actions in a realistic way that could mislead viewers. That is why YouTube’s disclosure rules focus on realistic altered or synthetic content, especially when it makes a real person appear to do something they did not do.
In practice, deepfake videos usually fall into four broad categories: non-consensual sexual content, impersonation and fraud, political or public-interest deception, and unauthorized commercial use of a person’s likeness. Each category can trigger different legal rights and platform remedies.
Why deepfake videos matter now
The scale of the problem is no longer theoretical. European Parliamentary Research Service material published in 2025 reported that in 2024 a deepfake attack occurred every five minutes, that 49% of companies experienced audio or video deepfakes, and that generative AI-based deepfake incidents rose by 118%. Australia’s eSafety Commissioner has also warned that explicit deepfakes have increased by as much as 550% year on year since 2019.
The harm is also highly gendered. eSafety said 98% of deepfake material online is pornographic and 99% of that material depicts women and girls. A 2025 UN joint statement similarly warned that 90% to 95% of deepfakes online are sexualised images of women.
Deepfake videos are also now a fraud issue, not just a misinformation issue. Deloitte noted that AI-driven deception has already enabled multimillion-dollar fraud, including a case where a worker was convinced during a deepfake video conference to transfer US$25 million. The World Economic Forum also highlighted surging deepfake fraud and losses exceeding US$200 million in Q1 2025 alone.
Snapshot table: what the data is telling us
| Signal | What it shows |
| --- | --- |
| Deepfake attack every five minutes in 2024 | Deepfakes are now frequent, not rare |
| 49% of companies experienced audio/video deepfakes in 2024 | Businesses are direct targets |
| Explicit deepfakes up as much as 550% since 2019 | Sexual abuse content is growing fast |
| 98% of deepfake material online is pornographic | Harm is concentrated in sexual misuse |
| Google says SynthID has watermarked 10 billion+ pieces of content | Industry is scaling provenance tools |
Source note: The first two figures come from European Parliamentary Research Service material published in 2025. The explicit deepfake figures come from Australia’s eSafety Commissioner. The SynthID figure comes from Google’s 2025 announcement.

What rights do people have when targeted by deepfake videos?
There is no single global “deepfake right.” Instead, people usually rely on a bundle of rights that vary by country. WIPO notes that likeness and voice are protected in many countries, but the protection is not harmonized. In other words, the legal theory may change from place to place, but the same core interests keep appearing: privacy, consent, reputation, control over identity, and protection from fraud and abuse.
1. Privacy and consent rights
If a deepfake video uses someone’s face, body, or voice without permission, privacy is often the first right implicated. This is especially clear in intimate-image abuse. The UK has expanded criminal offences around sharing and creating purported intimate images without consent, including synthetic or “deepfake” imagery. Australia’s eSafety system also allows people to report image-based abuse, including AI sexual deepfakes, and seek removal.
2. Likeness, publicity, and personality rights
A second major issue is control over identity. WIPO explains that deepfakes raise questions around likeness and voice rights, sometimes framed as publicity or personality rights. These rights matter most when a person’s image or voice is used to endorse products, appear in ads, or create a false commercial association.
3. Reputation and defamation
If a deepfake video falsely suggests criminal conduct, sexual conduct, abuse, or dishonesty, it can harm reputation in ways similar to defamation. The damage can be personal, professional, and long-lasting, especially because synthetic clips can spread quickly and remain searchable long after they are debunked. OECD incident summaries and UN human-rights materials both point to privacy violations, reputational damage, and democratic harm as recurring deepfake risks.
4. Data protection and erasure rights
In some jurisdictions, people may also have data protection remedies. The European Commission says individuals can request erasure of personal data when it is no longer needed or when processing is unlawful, and the UK ICO similarly explains the right to erasure under Article 17 of the UK GDPR. This does not solve every deepfake problem, but it can help when personal data, images, or profile material has been processed or distributed unlawfully.
5. Fraud and impersonation protections
When a deepfake video is used to scam someone, the case is not only about speech or privacy. It is also about deception. In the United States, the FTC’s impersonation rule prohibits materially false impersonation of government entities and businesses, and the TAKE IT DOWN Act, signed into law on May 19, 2025, added federal protections around non-consensual intimate visual depictions and removal duties for covered platforms.
How deepfake videos affect the people involved
The deepest harm from deepfake videos is often psychological. Victims can feel violated even when the event shown never happened, because the public sees a version of them that appears real. In sexual deepfake cases, the damage often includes humiliation, anxiety, fear, school or workplace fallout, and the feeling of losing control over one’s identity. eSafety’s guidance on AI image-based abuse makes this point very directly.
Deepfake videos can also affect careers and income. A fake executive message can trigger payment fraud. A fake employee clip can affect hiring. A fake celebrity or creator endorsement can undercut licensing value and trust. WIPO has highlighted the commercial dimension of deepfakes in entertainment, where likeness and performance can be reused or simulated in ways that raise consent and compensation questions.
Families, schools, and workplaces are also drawn into the fallout. Australia’s eSafety reported action against services used to “nudify” Australian school children and said some of these services were being used to create explicit deepfake abuse of peers. That shows how quickly deepfake videos move from a tech issue to a safeguarding issue.
At the public level, deepfake videos create a wider trust problem. They can mislead voters, imitate officials, and also create what some researchers call a “liar’s dividend,” where authentic evidence is dismissed as fake. UN human-rights material warns that deepfakes can chill public participation, especially for women journalists, human-rights defenders, and political figures, while other submissions note that real atrocity evidence can be dismissed as fabricated.
A global legal snapshot
| Jurisdiction | Current direction | Why it matters |
| --- | --- | --- |
| European Union | AI Act transparency rules require disclosure for deepfakes and AI-generated content | Pushes labelling and provenance into law |
| United Kingdom | Online Safety Act covers deepfake intimate image sharing; 2025 law adds an offence of creating purported intimate images without consent | Targets intimate-image abuse more directly |
| United States | TAKE IT DOWN Act creates federal removal obligations for non-consensual intimate depictions; FTC also targets impersonation fraud | Focus on victim removal and scam prevention |
| Australia | eSafety supports reporting and removal of image-based abuse, including AI sexual deepfakes | Practical takedown path for victims |
| China | Deep synthesis rules and 2025 AI labelling measures require labels and prohibit removing or concealing them | Strong platform-side labelling model |
Source note: This table summarises official or quasi-official sources from the EU, UK legislation, the White House, Australia’s eSafety Commissioner, and China’s translated regulatory texts.

What AI companies and platforms are doing to fix it
1. Labelling and disclosure
A major part of the response is simple disclosure. YouTube requires creators to disclose realistic altered or synthetic content and says creators must flag material that makes a real person appear to say or do something they did not do. TikTok similarly requires labelling of realistic AI-generated content and says it may automatically label content when C2PA Content Credentials are present. Meta says it uses industry-standard indicators to label AI-generated content across Facebook, Instagram, and Threads, and has expanded transparency labelling in ads as well.
2. Provenance, metadata, and watermarking
Many companies are moving beyond visible labels to provenance tools. C2PA describes Content Credentials as an open standard for establishing the origin and edits of digital content. Google embeds its SynthID watermark in content generated by its AI models, launched a SynthID Detector in 2025, and has since said SynthID has watermarked more than 10 billion pieces of content. Adobe’s Content Credentials system functions like a digital nutrition label, while Microsoft has continued expanding content integrity and watermarking tools across its ecosystem.
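To make provenance concrete, here is a minimal sketch of how a moderator or journalist might check a file for C2PA Content Credentials. It assumes the open-source c2patool command-line tool from the C2PA ecosystem is installed; the exact invocation and output format can vary between versions, so treat the details as illustrative rather than authoritative.

```python
# Minimal sketch: check a media file for C2PA Content Credentials by shelling
# out to the open-source c2patool CLI. Assumes c2patool is installed and that
# its default invocation prints the manifest store as JSON (version-dependent).
import json
import subprocess
import sys


def read_content_credentials(path: str):
    """Return the C2PA manifest as a dict, or None if none is found."""
    try:
        result = subprocess.run(
            ["c2patool", path],
            capture_output=True,
            text=True,
            check=True,
        )
        return json.loads(result.stdout)
    except (FileNotFoundError, subprocess.CalledProcessError, json.JSONDecodeError):
        # Tool missing, file carries no manifest, or output was not JSON.
        return None


if __name__ == "__main__":
    manifest = read_content_credentials(sys.argv[1])
    print("Content Credentials present" if manifest else "No provenance metadata found")
```

Note the asymmetry: finding credentials tells you something about origin, but finding nothing proves nothing, since most media in circulation today carries no provenance data at all.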
3. Model-level safeguards
AI companies are also tightening the tools themselves. OpenAI says images generated through ChatGPT and the DALL·E 3 API include C2PA metadata, and its Sora materials describe likeness-misuse filters, stricter moderation around uploads featuring people, and consent-based rules for depicting real persons. OpenAI also states that depicting a real person requires that person’s consent and prohibits using real people to impersonate, harass, or mislead.
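Vendors do not publish their moderation pipelines, so the following is a purely hypothetical sketch of what a consent-first gate for depicting real people could look like; every name, field, and check in it is invented for illustration.

```python
# Hypothetical consent-first gate for generation requests that depict a real
# person. All names and checks are illustrative, not any vendor's actual code.
from dataclasses import dataclass
from typing import Optional


@dataclass
class GenerationRequest:
    prompt: str
    depicts_real_person: bool            # e.g. flagged by an upstream identity detector
    consent_token: Optional[str] = None  # signed proof of consent, if provided


def verify_consent_token(token: str) -> bool:
    # Placeholder: a real system would validate a signature against a consent registry.
    return token.startswith("consent:")


def consent_gate(request: GenerationRequest) -> bool:
    """Allow generation unless a real person is depicted without verified consent."""
    if not request.depicts_real_person:
        return True
    if request.consent_token is None:
        return False  # consent first: block by default
    return verify_consent_token(request.consent_token)
```

The design choice worth noting is the default: when a real person is detected and no consent is on file, the request is refused rather than merely labelled, which matches the consent-first principle described earlier.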
4. Takedown and reporting systems
Fixing deepfake harm is not only about preventing creation. It is also about fast removal. YouTube expanded its privacy complaint process so people can request removal of AI-generated or synthetic content that simulates an identifiable person. TikTok lets users report deepfakes under misinformation and manipulated media categories. Meta’s privacy standards also say users can report imagery they believe violates privacy rights.
5. Industry collaboration
No single company can solve deepfake videos alone, which is why cross-industry coordination matters. The Munich AI Elections Accord brought together major companies including Adobe, Google, Meta, Microsoft, OpenAI, TikTok, and others to counter deceptive AI election content. Partnership on AI has also published a synthetic media framework backed by companies and civil society groups.
Why the current fixes are still not enough
The biggest weakness in the current response is that labels and watermarks work best when content comes from participating tools. They are much less useful when bad actors use open-source models, strip metadata, re-record screens, or circulate cropped clips on platforms that do not preserve provenance. C2PA itself is designed to show origin and edit history, but provenance is not the same as universal detection.
There is also still a legal patchwork. The EU is moving with transparency rules. The UK has strengthened intimate-image offences. The US has added a federal removal law for non-consensual intimate depictions. China has detailed labelling obligations. But globally, a person’s rights still depend heavily on where the content was made, where it was posted, and where the victim lives.
Practical use cases
Use case 1: Non-consensual sexual deepfake
A student or employee finds an explicit fake video using their face. Likely rights involved: privacy, image-based abuse, removal rights, and possibly criminal law. Likely remedies: platform reporting, police complaint where applicable, and rapid takedown requests.
Use case 2: Executive impersonation fraud
A finance team receives a realistic AI video call from a fake executive asking for a transfer. Likely rights involved: fraud, impersonation, and business deception. Likely remedies: payment controls, identity verification, incident response, and reporting to regulators or law enforcement.
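As a simple illustration of the “verify through a second channel” control, here is a hypothetical payment-release check; the threshold, names, and logic are invented, and a real treasury system would embed this in much broader controls.

```python
# Hypothetical out-of-band verification gate for high-value transfer requests.
# The threshold and field names are invented for illustration only.
HIGH_VALUE_THRESHOLD = 10_000  # example cutoff in the payment currency


def release_payment(amount: float, confirmed_out_of_band: bool) -> bool:
    """Release funds only when large requests are confirmed on a second channel."""
    if amount < HIGH_VALUE_THRESHOLD:
        return True
    # A deepfake can compromise any single channel. For large amounts, require
    # confirmation over an independent one, e.g. a callback to a number from
    # the company directory, never to contact details supplied in the request.
    return confirmed_out_of_band
```

The key property is that the confirmation channel is chosen by the verifier, not the requester, so a convincing fake video call cannot supply its own “proof.”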
Use case 3: Fake political or public-interest clip
A video shows a candidate, journalist, or public official saying something inflammatory that never happened. Likely issues: misinformation, defamation, public trust, and election integrity. Likely remedies: disclosure, labelling, rapid moderation, and provenance checks where available.
Final takeaway
Deepfake videos are not just a content-moderation problem. They are a rights problem. They can affect dignity, privacy, reputation, economic opportunity, democratic participation, and personal safety. AI companies are starting to respond with labels, watermarking, provenance standards, takedown tools, and consent safeguards, but the strongest long-term answer is a combination of law, platform enforcement, technical standards, and public awareness.
The clearest global direction is this: if deepfake videos involve real people, companies and governments increasingly expect disclosure, consent, and a way to remove harmful content quickly. That does not end the problem, but it is becoming the baseline.
FAQs
Are deepfake videos always illegal?
No. Deepfake videos are not automatically illegal everywhere. The legal outcome usually depends on consent, fraud, sexual content, defamation, impersonation, commercial misuse, and the specific deepfake laws in the country involved.
What is the simplest definition of a deepfake video?
A deepfake video is AI-generated or AI-manipulated video that realistically makes a real person appear to say or do something they never actually said or did. The more realistic and misleading it is, the higher the legal risk.
Why are deepfake videos such a serious rights issue?
They can violate privacy, damage reputation, misuse a person’s likeness or voice, enable fraud, and create emotional distress. In sexual deepfake cases, the fact that the event never happened does not reduce the harm to the victim.
Which people are most often targeted by deepfake abuse?
Women and girls are disproportionately targeted in sexual deepfakes. eSafety reported that 98% of deepfake material online is pornographic and 99% of that content depicts women and girls.
Can victims ask for a deepfake video to be removed?
Often, yes. Removal options may come from platform rules, privacy complaints, image-based abuse regimes, or data protection law. The exact process depends on the platform, the type of harm, and the country involved.
What are AI companies doing to reduce harm from deepfake videos?
They are using disclosure labels, watermarking, provenance metadata, moderation filters, consent-based rules for real people, and reporting tools. Many are also participating in cross-industry standards such as C2PA and synthetic media frameworks.
Do labels solve the deepfake problem by themselves?
No. Labels help, but they are not enough. Metadata can be stripped, clips can be re-recorded, and bad actors may use tools that do not preserve provenance. Labels are useful, but they are not a complete defense.
What is the difference between watermarking and provenance?
Watermarking marks content as AI-generated or altered. Provenance records the content’s history, such as who made it and how it changed. Stronger systems often combine both rather than relying on only one method.
Can deepfake videos be used for fraud, not just misinformation?
Yes. Deepfake videos and cloned voices are increasingly used in business scams, payment fraud, and impersonation attacks. This is one reason deepfake laws are now being discussed by regulators far beyond media policy circles.
What should a business do first if it receives a suspected deepfake video call?
Pause the transaction, verify identity through a second channel, preserve evidence, alert security staff, and review payment controls. Human confirmation and process discipline are now essential parts of deepfake risk management.
Do deepfake videos only affect celebrities and politicians?
No. Private individuals, employees, students, and families are also targets. Some of the fastest-growing harms involve school abuse, workplace scams, and intimate-image exploitation of ordinary people.
What is the global trend in deepfake regulation?
The global trend is toward clearer disclosure, stronger takedown rights, tighter rules for intimate-image abuse, and more pressure on platforms and model providers to label or trace synthetic content.


