US politicians are pushing for new legislation to criminalize the creation of deepfake images after explicit fake photos of Taylor Swift went viral online. The images were shared on social media platforms including X and Telegram. US Representative Joe Morelle condemned the spread of the pictures, calling it “appalling.” X responded by saying it was actively removing the images, taking action against the accounts that spread them, and monitoring the platform to catch and remove any further violations.
Although many of the images have been taken down, one photo of Swift reportedly garnered 47 million views before being removed. Deepfakes use artificial intelligence (AI) to manipulate someone’s face or body in images and videos. A study conducted in 2023 found a 550% increase in the creation of doctored images since 2019, driven largely by advances in AI technology. There are currently no federal laws in the US specifically targeting the creation or sharing of deepfake images, although some states have taken steps to address the issue. In the UK, sharing deepfake pornography became illegal under the Online Safety Act in 2023.
Democratic Representative Morelle, who previously introduced the Preventing Deepfakes of Intimate Images Act to criminalize the non-consensual sharing of deepfake pornography, called for immediate action on the issue. He emphasized that such images and videos can cause irreparable emotional, financial, and reputational harm, and that women are disproportionately affected. According to a report published last year, women make up 99% of the targets of deepfake pornography. Democratic Representative Yvette D Clarke noted that women have been targeted by this technology for years and that advances in AI have made creating deepfakes easier and cheaper.
Republican Congressman Tom Kean Jr agreed, saying AI technology is clearly advancing faster than the safeguards needed to contain it. He stressed the need to establish measures to combat this alarming trend, whether the victim is Taylor Swift or any other young person in the country. While Swift has not publicly addressed the images, the Daily Mail reported that her team is considering legal action against the website that published the AI-generated images.
Concerns about AI-generated content have grown as billions of people take part in elections worldwide this year. A fake robocall purporting to be from US President Joe Biden, suspected to have been created with AI, recently prompted an investigation.