- By Vikas Yadav
- Tue, 31 Oct 2023 11:46 AM (IST)
- Source:JND
Forged sexual images have been a concern ever since powerful AI tools arrived in the online landscape. Governments across the globe and big tech companies are rapidly ramping up efforts to curb such activity. In a similar vein, tech companies including Snapchat, OnlyFans, Stability AI and short video platform TikTok agreed to a joint statement pledging to work together to address the issue of child sexual abuse images generated using AI technologies.
According to Reuters, Britain announced the statement in a policy paper on Monday, listing the governments of the United States, Australia and Germany among 27 signatories. The announcement comes ahead of a global summit on AI safety hosted by the United Kingdom.
"We resolve to work together to ensure that we utilise responsible AI for tackling the threat of child sexual abuse and commit to continue to work collaboratively to ensure the risks posed by AI to tackling child sexual abuse do not become insurmountable...We resolve to sustain the dialogue and technical innovation around tackling child sexual abuse in the age of AI," the statement read.
While the policy paper praised the opportunities artificial intelligence offers for tackling the spread of such material, it also noted that offenders can use the technology to create sexual abuse material, potentially leading to an "epidemic" of this content online. Citing data from the Internet Watch Foundation, the paper noted that 11,108 AI-generated images were shared on a single dark web forum in one month, 2,978 of which depicted child sexual abuse.
The statement encouraged relevant actors to "provide transparency on their plans to measure, monitor and mitigate the capabilities which may be exploited by child sexual offenders." It urged the partners to collaborate on bolstering the safety of children and steering this evolving technology towards positive uses.