Hundreds of AI luminaries sign letter calling for anti-deepfake legislation
Hundreds in the artificial intelligence community have signed an open letter calling for strict regulation of AI-generated impersonations, or deepfakes. While this is unlikely to spur real legislation (despite the House’s new task force), it does act as a bellwether for how experts lean on this controversial issue.
The letter, signed by over 500 people in and adjacent to the AI field at time of publishing, declares that “Deepfakes are a growing threat to society, and governments must impose obligations throughout the supply chain to stop the proliferation of deepfakes.”
The signatories call for the full criminalization of deepfake child sexual abuse material (CSAM, AKA child pornography), regardless of whether the figures depicted are real or fictional. The letter also demands criminal penalties in any case where someone creates or spreads harmful deepfakes, and it calls on developers to prevent harmful deepfakes from being made with their products in the first place, with penalties if their preventative measures prove inadequate.
Among the more prominent signatories of the letter are:
- Jaron Lanier
- Frances Haugen
- Stuart Russell
- Andrew Yang
- Marietje Schaake
- Steven Pinker
- Gary Marcus
- Oren Etzioni
- Genevieve Smith
- Yoshua Bengio
- Dan Hendrycks
- Tim Wu
Also present are hundreds of academics from across the globe and many disciplines. In case you’re curious, one person from OpenAI signed, a couple from Google DeepMind, and none at press time from Anthropic, Amazon, Apple, or Microsoft (except Lanier, whose position there is non-standard). Interestingly, the signatories are sorted in the letter by “Notability.”
This is far from the first call for such measures; in fact, they were debated in the EU for years before being formally proposed earlier this month. Perhaps it is the EU’s willingness to deliberate and follow through that spurred these researchers, creators, and executives to speak out.
Or perhaps it is the slow march of KOSA towards acceptance — and its lack of protections for this type of abuse.
Or perhaps it is the threat of (as we have already seen) AI-generated scam calls that could sway the election or bilk naive folks out of their money.
Or perhaps it is yesterday’s task force being announced with no particular agenda other than maybe writing a report about what some AI-based threats might be and how they might be legislatively restricted.
As you can see, there is no shortage of reasons for those in the AI community to be out here waving their arms around and saying “maybe we should, you know, do something?!”
Whether anyone will take notice of this letter is anyone’s guess — no one really paid attention to the infamous one calling for everyone to “pause” AI development, though this letter is a bit more practical. If legislators decide to take on the issue, an unlikely event in an election year with a sharply divided Congress, they will have this list to draw from in taking the temperature of AI’s worldwide academic and development community.