5 times AI fooled the internet in 2023

Deepfakes are a bit like virus mutations: the "best" and most effective ones are also the most unsettling. After generative AI went mainstream last year with the wide release of text-spewing chatbots, social media erupted with algorithmically generated images and sounds to go along with them. So as AI continued to chalk up wins, the loser was everyone's grip on reality.

On the plus side, some deepfakes do not seem like they were intended to fool anyone. Some are even downright whimsical. Unfortunately, the ones that go viral are in much grayer territory, ethically speaking.

These are the deepfakes from the first half of 2023 that made a splash because they looked a little too real for comfort. Enjoy them, and feel free to share them as entertainment, but please do not, y'know, use them to spread hoaxes. The world has more than enough lies in it already.

1. The imaginary version of Trump’s arrest

While the world awaited former President Donald Trump’s arrest, journalist and Bellingcat founder Eliot Higgins decided to get creative. Using AI image generator Midjourney, Higgins created images of Trump violently resisting arrest, running from the NYPD, and doing various activities in prison while wearing an orange jumpsuit.

In reality, Trump surrendered himself to law enforcement on April 4, and was allowed to forgo a mugshot by the Manhattan District Attorney's office. So those hoping to see a real version of the AI fantasy were likely disappointed by the undramatic reality of the arraignment.

2. The ‘Balenciaga Pope’ that was too good to be true

When an image of Pope Francis wearing a huge white puffer jacket went viral in March, the internet was delighted to see the Pope's swaggy sartorial upgrade. Unfortunately, it was fake. The image of the Pope in a Balenciaga-style puffer was created by a 31-year-old construction worker using Midjourney.

But the image was realistic enough to fool many people, including Chrissy Teigen, who tweeted, "I thought the Pope's puffer jacket was real and didn't give it a second thought. No way am I surviving the future of technology." We hear you, Chrissy.

3. Harry Potter serving Balenciaga (not) realness

Apparently, generative AI and Balenciaga go hand-in-hand, since the fashion house was referenced twice in the same month. But this time, Midjourney gave Harry Potter characters the Balenciaga treatment. “You are Balenciaga, Harry,” says a smoldering Hagrid to a Harry Potter with sunken cheeks and a sulky pout over dark trance music.

In the video, all the main characters get yassified in some form, with razor-sharp cheekbones and withering looks reminiscent of a fashion campaign that takes itself too seriously. The YouTube video, made by Demonflyingfox, is called "Harry Potter by Balenciaga," so it was never intended to actually dupe people into thinking it was real. But the way it nails the tone, aesthetic, and rendering of Harry Potter characters as Balenciaga models is a striking example of what tools like Midjourney can achieve.

4. The Weeknd and Drake collab that never happened

In April 2023, The Weeknd and Drake dropped a fire single called "Heart on My Sleeve." Except they didn't. It was AI-generated by an anonymous creator known as Ghostwriter. The song went viral, not because of how catchy it is (and it is catchy), but because of the messy copyright issues that generative AI poses to the music industry.

Although it's unclear what technology was used to create the song, it's shockingly easy to create audio deepfakes. There are numerous tools out there that use text-to-speech or an existing audio clip to essentially clone someone's voice and make it say whatever you want.
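To give a sense of how little effort this takes, here is a minimal sketch using one open-source option, Coqui's TTS library and its XTTS v2 model. This is purely illustrative and is not how Ghostwriter made the track, which remains unknown; the reference clip filename below is a placeholder.

```python
# Illustrative voice-cloning sketch using the open-source Coqui TTS package.
# Not the tool behind "Heart on My Sleeve" -- just one example of how easy
# this kind of audio generation has become. Install with: pip install TTS

from TTS.api import TTS

# Load a multilingual voice-cloning model (weights download on first run).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# "reference_voice.wav" is a hypothetical placeholder: a short clip of the
# voice you want to imitate. The model speaks the given text in that voice.
tts.tts_to_file(
    text="This sentence will be spoken in the cloned voice.",
    speaker_wav="reference_voice.wav",
    language="en",
    file_path="cloned_output.wav",
)
```

A few seconds of clean audio of the target voice is often enough for a passable clone, which is exactly why tracks like this keep surfacing.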

The song was eventually removed from YouTube, Spotify, Apple Music, and other streaming services for copyright violation. In a statement to Billboard, The Weeknd and Drake’s record label Universal Music Group said:

The training of generative AI using our artists’ music (which represents both a breach of our agreements and a violation of copyright law) as well as the availability of infringing content created with generative AI on DSPs, begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation.

5. The explosion at the Pentagon hoax

Unlike the Balenciaga Pope, which was pretty harmless, the fake image of an explosion at the Pentagon shows how dangerous generative AI can be in nefarious hands. In May, an image of fire and billowing clouds of smoke from an apparent explosion near the Pentagon circulated on Twitter.

The image was quickly debunked as a deepfake by local law enforcement. But the fake photo had real consequences, causing a brief dip in the stock market. Based on the blurred fencing in front of the building and the uneven column sizes, it was pretty clear that the image was AI-generated or digitally manipulated. But as generative AI gets more advanced, deepfakes will only get harder to spot.
