Calls for new legislation criminalizing the creation of deepfake images have intensified among US politicians after explicit, manipulated photos of Taylor Swift gained millions of views on various social media platforms, including X and Telegram.
US Representative Joe Morelle condemned the widespread dissemination of the fake images, calling it "appalling." X said it was actively removing the images and taking action against the accounts responsible for spreading them, and that it was monitoring the situation closely so any further violations could be addressed and the content removed promptly.
Despite the removal of many images, one photo of Taylor Swift reportedly garnered 47 million views before it was taken down. X has also restricted searches for terms related to Taylor Swift and AI-generated content.
Deepfakes use artificial intelligence (AI) to manipulate faces or bodies in images and video; according to a 2023 study, the creation of doctored images has risen 550% since 2019. There are currently no federal laws in the US specifically addressing the creation or sharing of deepfake images, although some states have taken steps to address the issue.
Morelle, a Democrat who introduced the Preventing Deepfakes of Intimate Images Act last year, emphasized the urgent need for action, highlighting the irreparable emotional, financial, and reputational harm such images can cause and noting that women are affected disproportionately.
Deepfake pornography makes up the majority of such manipulated content online, and 99% of those targeted are women, according to last year's State of Deepfakes report. Representative Yvette D Clarke acknowledged that women have been targets of this technology for years and pointed out that advances in AI have made deepfakes easier and cheaper to create.
Republican Congressman Tom Kean Jr echoed concerns about the rapid advancement of AI technology outpacing regulatory safeguards. He emphasized the need to establish measures to combat this concerning trend, whether the victim is Taylor Swift or any young person across the country.
While Taylor Swift has not publicly addressed the incident, reports suggest her team is considering legal action against the site that published the AI-generated images. The incident adds to growing worries about AI-generated content as billions of people vote in elections worldwide; those concerns were heightened recently by a fake robocall, suspected to have been generated with AI, that claimed to be from US President Joe Biden and has sparked an investigation.