Behind The Filter: AI, Power and the Cost of Silence
Imagine uploading a selfie to social media without a second thought. That was the situation just a few years, or even months, ago. What was the worst that could happen? Someone might download the image and edit it in Photoshop. But if you've been following the news recently, you'll have come across a new threat, one emerging from the heart of our society, that will profoundly change how we present ourselves online and even affect our offline lives.
AI tools that can undress people and create explicit images and videos from someone's face have existed ever since AI became capable of processing and generating image data. But perhaps people were unaware of these tools, or found visiting such sites shameful. At the end of last year, Grok made it possible to undress people on X without paying any money or facing any restrictions; you just had to comment "undress her" to get the result. In public. Visible to around 420 million monthly active users. Millions of images have been generated by Grok alone in recent weeks, including explicit material involving minors. Image generation has now been limited to Premium users, which is essentially digital sexual abuse behind a paywall ($8 per month). No technical knowledge or fancy hardware is required. All you need is an image, a prompt and a few cents. Every hurdle has been removed.
However, Grok is just the tip of the iceberg. AlgorithmWatch has started to combat NSTs (non-consensual sexualisation tools), and German activists have launched petitions to push for regulation or urgent prohibitions. Some governments are also increasing pressure on Grok.
Unfortunately, the truth is that there are hundreds, perhaps even thousands, of NST websites that exist solely to satisfy people's desire to see others undressed. It's unclear whether these sites are even limited to generating material featuring adults. They claim to have rules about only uploading images with consent, but how could they verify this? Most of them have no imprint. Some are free, some offer paid tiers and some accept cryptocurrency payments to ensure anonymity. All of them have presets that blatantly promote stereotypes. All are registered through domain privacy providers that hide the owner's identity. It's unclear exactly how much they earn from their users, but if a site claims to have 400,000 premium users at around $10 per month, that alone indicates the scale of this business: 400,000 × $10 × 12 months comes to $48 million a year, which matches the estimate that 48 million dollars have been made through technology-facilitated sexual violence (TFSV) on just one site. How did we end up here?
The issue, of course, is a societal one. However, if the hurdles to using, hosting or creating those tools were higher, society might change as well.
I had an argument with a man on LinkedIn who claimed that an undressing app from a Dubai company, one that was actually advertised on TikTok and Google, where minors could see it, would be fun. TFSV, or the use of NSTs, is not fun at all. It is humiliating and dangerous for the victims - maybe even deadly. And it doesn't matter whether the images are real or not; AI is improving all the time, and who can still tell the difference between real and fake? Victims can be blackmailed for money or sex, or the images can simply be published. They can be sent to an employer, costing someone their job. They can be shared in class chats at school, where the victims are even more vulnerable, perhaps to the point of no longer going to school, or even taking their own lives. Once something is on the internet, it usually stays there. What seems like a few seconds of fun for the user can destroy someone's whole life.
Changing society is hard and takes time. By the time we succeed, the next immoral tools for exploiting women and children will have been invented, so we need a faster approach.
However, there are actors within that large, opaque system who could effect change. Hosting providers could set clearer guidelines on what is and isn't allowed on their infrastructure. I've analysed over 240 sites (NSTs, NSFW AI chats, etc.), and they are hosted on all the major platforms.
If hosters prohibited these tools on their infrastructure, it would at least make things a bit harder for those who create them.
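The first step of such an analysis is easy to reproduce, by the way. Here is a minimal sketch in Python (the domains are hypothetical placeholders, not real sites): resolve each domain to an IP address, then check the reverse DNS entry, which often names the hosting provider.

```python
import socket

# Hypothetical placeholder domains, not real NST sites
domains = ["nst-example-1.com", "nst-example-2.io"]

for domain in domains:
    try:
        ip = socket.gethostbyname(domain)    # resolve the A record
        # Reverse DNS often reveals the hoster, e.g. *.amazonaws.com
        hostname = socket.gethostbyaddr(ip)[0]
        print(f"{domain} -> {ip} ({hostname})")
    except OSError as err:
        print(f"{domain}: lookup failed ({err})")
```

Mapping the IP address to the responsible network, and thus to the hoster's abuse contact, can then be done with a Whois lookup on the address.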
The next issue is missing imprints. Without one, you don't know who can be held responsible in case of misuse. In Germany, even blogs need an imprint, especially if money is being made from the service. Most of these websites, however, provide nothing more than a Telegram channel and perhaps a Gmail address as contact information. Perhaps global regulation to reform these internet rules would be a good idea, especially for a global issue like this one, which endangers people who are often already victims of violence, whether online or offline.
Withheld for Privacy is a domain privacy provider affiliated with NameCheap, a major registrar. When you register a domain through NameCheap with privacy protection, the public record lists Withheld for Privacy's details instead of yours. So if you check the Whois entry for such a domain, you will see nothing but an Icelandic address. Fun fact: the address is shared with a clothing store and the penis museum in Reykjavík. The same rule that applies to hosters should apply to registrars too: they should check what is behind a domain name and, if it is an NST or something similar, block the domain. They could also create filters to block the registration of explicit names entirely, or stop working with providers like Withheld for Privacy, DomainsByProxy and similar services. I know these providers can be useful for activists and journalists, but the same argument could be made for the darknet - it is now mainly misused by criminals. This again shows how tools designed for good can be misused, endangering their legitimate purpose.
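You can check this yourself with the standard `whois` command line tool, or with a few lines of Python speaking the WHOIS protocol directly. A minimal sketch (the server shown answers for .com domains; other TLDs use other servers):

```python
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Send a raw WHOIS query (RFC 3912: plain text over TCP port 43)."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(f"{domain}\r\n".encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # server closes the connection after replying
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

# For a privacy-proxied domain, the registrant fields show the proxy's
# data (e.g. Withheld for Privacy's Reykjavík address), not the owner's.
print(whois_query("example.com"))
```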
The same principle applies to payment providers, of course. I've seen payment options from all the usual suspects.
I know that enforcing such rules globally across different service providers (domains, infrastructure, payments, etc.) is difficult, but Big Tech has significant resources, and all the decision-makers and C-suite executives have women and children in their own families. So why aren't they acting? Is it money? Is it fear of being accused of censorship, or of clashing with the views of Trump, Musk and others?
If tech companies don't take action, politics becomes the last line of defence for safeguarding people. Now every country must step in and create laws. The UK threatened to block Grok if it did not filter explicit content from its image generation. Denmark has implemented a law on the ownership of one's own image, especially when it is used by AI without consent. Brazil and India are also discussing laws that would affect the training of such models, granting image owners compensation when their pictures are used for training.
These ideas and laws won't solve the issue on their own, but at least these governments are taking action rather than just talking about it.
When we talk about AI regulation, we're not talking about censorship, we're talking about basic human rights. This includes the right to one's own image, the implicit right not to be exposed online, and Article 1 of the German Basic Law: "Human dignity is inviolable".