Artificial Intelligence (AI) is rapidly becoming a part of our daily lives. Whether you are asking Siri for assistance or receiving weather updates on your phone, AI is there for you.
But did you know AI may also be making the World Wide Web a safer place for our children? That’s right: this week Google released AI software that can find and identify child sexual abuse material (CSAM).
Google Engineering Lead Nikola Todorovic and Product Manager Abhi Chaudhuri wrote in Google’s official blog post: “Using the internet as a means to spread content that sexually exploits children is one of the worst abuses imaginable. That’s why since the early 2000s we’ve been investing in technology, teams, and working closely with expert organisations, to fight the spread of child sexual abuse material (CSAM) online.”
Using Google’s artificial intelligence, the software helps human moderators find abusive images and videos more quickly, speeding up the review process so the content can be removed faster.
But the benefits don’t end there. The programme will also help identify victims sooner, which in turn means they can receive the necessary help.
How much will the programme cost? Surely tech like this is worth thousands? Not at all: Google is making it available for free to NGOs and industry partners via its Content Safety API.
With the programme set to make a real difference, Todorovic and Chaudhuri want to see further development in online safety in the foreseeable future:
“We will continue to invest in technology and organisations to help fight the perpetrators of CSAM and to keep our platforms and our users safe from this type of abhorrent content. We look forward to working alongside even more partners in the industry to help them do the same.”
If you are interested in using the Content Safety API service, you can reach out to Google via this form.