Viral image of Pope Francis in a puffy jacket exposes the dangers of AI-generated images

Over the weekend, you may have casually come across some images on Twitter or Instagram of Pope Francis looking fly as hell in a puffy white jacket. You might have believed those images to be real, because really, would Pope Francis wearing a puffy jacket be that surprising? You may have also seen images of former US President Donald Trump being arrested, or of Elon Musk holding hands with GM CEO Mary Barra and, later, with Democratic US Rep. Alexandria Ocasio-Cortez. You may have thought all of these images were real because of how we have come to perceive these public figures. The truth, however, is that they are all fake.

In recent months, a growing number of hyperreal AI images have been shared on social media. These images are so detailed that it can be hard for the average person to spot them as fake. As someone who has been covering technology for a few years now, I have been trained to look at images with a fair bit of suspicion. But my parents, for example, would believe in Pope Francis’ drip without hesitation. My mother showed me the image on her phone and told me, “only Pope Francis can pull that off.” She believed the image was real until I had to break it to her.

That image of Pope Francis in a puffy jacket was actually created with the AI image generator Midjourney and shared on a subreddit last week. It quickly made its way to Twitter, where it went viral over the weekend.

The speed at which artificial intelligence (AI) and machine learning (ML) have evolved in recent months is absolutely terrifying. ChatGPT is already articulate enough to put several jobs at risk. AI images shared on social media are making it hard to tell what’s real and what’s fake. That line is blurring so quickly that many people, especially the elderly, are being left behind.

And these images are just the start of what’s to come. Sooner rather than later, AI images will become so detailed and realistic that you won’t be able to tell they’re fake. That would open the floodgates for misinformation, something social media platforms have been battling since long before AI entered the picture. Imagine an AI-generated image of a politician caught in some scandal surfacing just before an election. We already have deepfakes that can make you believe Barack Obama called Donald Trump a “complete dipshit”.

This blurring of the line between what’s real and what’s fake is a serious concern, and I fear it’s only going to get worse from here on out. If people can’t trust the authenticity of the images and videos they see online, it becomes much harder to make informed decisions. We will end up believing in things that were never real, and the potential for that to warp our beliefs should not be underestimated. It’s also concerning from a national security standpoint, as deepfakes and other AI-generated content can be used to spread disinformation and sow discord. But deepfakes are so yesterday. Let’s get back to AI images, shall we?

What can we do about AI images?

The most obvious solution here is to develop tools that can detect AI-generated images, whether by training classifiers to recognize the telltale artifacts that generators leave behind or by checking an image’s provenance metadata to determine where it came from and whether it has been manipulated. Another solution is to educate people to scrutinize the images and videos they see online and judge their legitimacy before sharing them, which would reduce the spread of misinformation.
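As a rough illustration of the provenance angle, here is a minimal sketch in Python (using the Pillow library) of one weak heuristic: genuine photos usually carry camera EXIF metadata, while AI generators typically produce files with none. The EXIF tag IDs are standard; everything else is illustrative, not a production detector.

```python
from PIL import Image  # Pillow: pip install Pillow

# Standard EXIF tag IDs for camera make and model
EXIF_MAKE = 271
EXIF_MODEL = 272

def has_camera_metadata(path: str) -> bool:
    """Weak heuristic: return True if the file carries camera EXIF tags.

    Real photographs usually record the camera's make and model, while
    AI-generated images typically ship with no EXIF at all. This is only
    a hint, since screenshots and re-uploads also strip metadata.
    """
    exif = Image.open(path).getexif()
    return bool(exif.get(EXIF_MAKE) or exif.get(EXIF_MODEL))

if __name__ == "__main__":
    import sys
    for image_path in sys.argv[1:]:
        if has_camera_metadata(image_path):
            print(f"{image_path}: has camera metadata")
        else:
            print(f"{image_path}: no camera metadata (suspicious, not proof)")
```

Of course, a determined forger can fake metadata just as easily as a puffy jacket, which is why serious detection efforts lean on ML classifiers and cryptographic provenance standards like C2PA rather than any single heuristic.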

Social media platforms also need to improve their content moderation to curb the spread of AI-generated images. Twitter lets users add context to images through Community Notes, but by the time a note appears, the image may have already gone viral. And laying off moderators does not exactly help.

At the end of the day, the blurring line between what’s real and what’s fake is a serious concern that needs to be addressed. While AI-generated images and videos may seem harmless, they have the potential to cause real harm if they are used to spread misinformation or manipulate public opinion. By developing better tools for detecting such images, educating people on how to spot them, and improving content moderation, we can help ensure that the images and videos we see online are a true reflection of reality.