From Porn to Scams, Deepfakes Are Becoming a Big Racket—And That’s Unnerving Business Leaders and Lawmakers

Deepfake videos, which use artificial intelligence to superimpose a celebrity’s face on a porn star’s body or to make a public figure appear to say or do something outrageous, are spreading like wildfire online.

The vast majority are obvious fakes, but there is a troubling rise, and even a thriving business, in deepfakes that target women. Evidence is also beginning to emerge of scammers and political operatives using the AI-powered tools in plots to destabilize governments and defraud businesses.

Deeptrace, an Amsterdam-based cybersecurity company that is building tools to detect the fakes, has published new research that seeks to quantify the growth of the deepfake phenomenon. It says that over the last seven months, the number of video deepfakes—a term that combines deep learning, a branch of AI, and “fake”—almost doubled to 14,678.

The vast majority of the suspect videos it found, 96%, are pornographic in nature. A much smaller number target politicians or well-known business figures such as Tesla CEO Elon Musk and Amazon CEO Jeff Bezos.

Some 2% of the deepfake videos it found on YouTube feature corporate figures. While most poke fun at their subjects, the trend should put business leaders on notice that deepfake videos could in the future become a potent weapon for smear campaigns and reputational damage, the researchers warn.

IT security firms have been sounding a similar alarm since reports emerged over the summer of fraudsters using artificial intelligence to mimic a company executive’s voice, a kind of deception known as “synthetic voice impersonation,” as part of an elaborate social engineering ploy.

“There have been several reported cases where synthetic voice audio has allegedly been used to defraud companies. While no concrete evidence has been provided to support claims that the audio was synthetic, the cases illustrate how synthetic voice cloning could be used to enhance existing fraud practices against businesses and individuals,” the report said.

In August, The Wall Street Journal reported on a case of criminals using artificial intelligence-based software to impersonate a chief executive’s voice. The ruse was convincing enough that the victims unwittingly wired $243,000 to the fraudsters, believing the request had come from the firm’s parent company in Germany. It was the victimized firm’s insurer, Euler Hermes Group, that went public with the story in an effort to raise awareness.

Security experts fear the number of such scams will rise. In its report, Deeptrace highlighted a growing commodification of tools and services for creating so-called synthetic media, which lowers the barrier for non-experts to make deepfakes. (Creating high-quality deepfakes still requires skill and experience, experts note.) The company also identified China and South Korea as two hotbeds for the creation of deepfakes.

For now, most deepfake videos are not good enough to fool most people, but they will grow more realistic and sophisticated, Henry Ajder, the lead author of Deeptrace’s report, “The State of Deepfakes,” told Fortune.

“The reality-warping, truly indistinguishable deep fakes aren’t here on a large scale yet, but they are coming and at the moment we are not prepared for them,” says Ajder.

Lawmakers, researchers and digital rights activists bemoan the fact that there are more tools to create deepfakes than there are to detect and police them.

So what is being done to counter the spread of malicious deepfakes?

The House Intelligence Committee convened a hearing in June about the national security challenges of artificial intelligence, manipulated media, and deepfakes.

Rep. Yvette Clarke (D-N.Y.) introduced the DEEPFAKES Accountability Act, the first attempt by Congress to criminalize synthetic media used to deceive, defraud, or destabilize the public. State lawmakers in Virginia, Texas, and New York have introduced or enacted their own legislation.

U.S. Sens. Marco Rubio (R-Fla.) and Mark Warner (D-Va.), both members of the Senate Intelligence Committee, raised concerns last week over the growing threat posed by deepfakes. In letters to 11 social media companies, including Facebook, Twitter, and YouTube, Rubio and Warner urged the platforms to develop industry standards for sharing, removing, and archiving synthetic content as soon as possible, in light of foreign threats to the upcoming U.S. election.

And last month Facebook, together with Microsoft, the Partnership on AI (a group created by the tech industry to improve understanding of artificial intelligence), and academics, announced it was putting up $10 million in funding to launch the Deepfake Detection Challenge.

The return of DeepNude

Deeptrace researchers also found what they call an “established ecosystem” of deepfake pornography websites. “The fact that those websites all contained advertising and there was a clear financial or business incentive for running these websites is also very important to recognize because it shows they aren’t going away any time soon,” Ajder said.

A case in point is DeepNude, a controversial computer app that let users “strip” photos of clothed women and that its creators took offline earlier this year. The software continues to be independently repackaged and distributed through online channels, giving it new life, the report said.

“The software will likely continue to spread and mutate like a virus, making a popular tool for creating non-consensual deepfake pornography of women easily accessible and difficult to counter,” the report said.

Many experts fear that video or audio deepfakes could be used to try to sway next year’s U.S. presidential election, after the last vote was dogged by allegations of Russian online interference.

A recent article for the Carnegie Endowment for International Peace, a think tank, said it was “only a matter of time before maliciously manipulated or fabricated content surfaces of a major presidential candidate in 2020.”

“The key is in the timing. Imagine the night before an election, a deepfake is posted showing a candidate making controversial remarks. The deepfake could tip the election and undermine people’s faith in elections,” the article quotes cybersecurity expert Katherine Charlet and Danielle Citron, vice president of the Cyber Civil Rights Initiative, as saying.

Deepfake videos can also pose a business risk and a legal headache for media companies, which need to guard against being hoaxed by people who send in videos purportedly showing news events. YouTube has removed a number of deepfakes from its service after users flagged them.

But there are concerns that too heavy-handed a crackdown could undermine free speech.
