Google Releases Content Safety API to Identify Child Abuse Images

Google has today announced new artificial intelligence (AI) technology designed to help identify online child sexual abuse material (CSAM) and reduce human reviewers’ exposure to the content.

The move comes as the internet giant faces growing heat over its role in helping offenders spread CSAM across the web. Last week, U.K. Foreign Secretary Jeremy Hunt took to Twitter to criticize Google over its plans to re-enter China with a censored search engine when it reportedly won’t help remove child abuse content elsewhere in the world.

Earlier today, U.K. Home Secretary Sajid Javid launched a new “call to action” as part of a government push to get technology companies such as Google and Facebook to do more to combat online child sexual abuse. The initiative comes after fresh figures from the National Crime Agency (NCA) found that as many as 80,000 people in the U.K. could pose a threat to children online.

The timing of Google’s announcement today is, of course, no coincidence.

Google’s new tool is built upon deep neural networks (DNNs) and will be made available for free to non-governmental organizations (NGOs) and other “industry partners,” including other technology companies, via a new Content Safety API.

News emerged last year that London’s Metropolitan Police was working on an AI solution that would teach machines how to grade the severity of disturbing images. The system is designed to solve two problems: it should expedite the rate at which CSAM is identified on the internet, and it should also alleviate the psychological trauma suffered by officers who manually trawl through the images.

Google’s new tool should assist in this broader push. Historically, automated tools have relied on matching images against previously identified CSAM. With the Content Safety API, Google said it can effectively “keep up with offenders” by targeting new content that has not previously been confirmed as CSAM, according to a blog post co-authored by engineering lead Nikola Todorovic and product manager Abhi Chaudhuri.
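Google’s blog post does not describe the system’s internals, but the distinction it draws can be sketched as a two-tier triage pipeline: an exact-fingerprint check against previously confirmed material, followed by a classifier pass that prioritizes likely new material for human review. Everything below is a hypothetical illustration; the function names, the 0.9 threshold, and the toy “classifier” are invented for this sketch and are not Google’s actual API.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Fingerprint an image so it can be matched against known material."""
    return hashlib.sha256(data).hexdigest()

def triage(image: bytes, known_hashes: set, score_fn) -> str:
    """Hypothetical two-tier triage: exact hash match, then classifier ranking."""
    # Tier 1: exact match against previously identified material.
    if sha256_hex(image) in known_hashes:
        return "known-match"
    # Tier 2: a classifier score prioritizes likely new material for
    # human review, so reviewers see the highest-risk images first.
    if score_fn(image) >= 0.9:  # assumed review threshold
        return "priority-review"
    return "low-priority"

# Demo with placeholder bytes and a deterministic stand-in "classifier".
known = {sha256_hex(b"previously-flagged-image-bytes")}
toy_score = lambda img: 0.95 if b"suspect" in img else 0.1

print(triage(b"previously-flagged-image-bytes", known, toy_score))  # known-match
print(triage(b"suspect-new-image", known, toy_score))               # priority-review
print(triage(b"benign-image", known, toy_score))                    # low-priority
```

The point of the second tier is the one the blog post makes: hash matching can only rediscover images someone has already confirmed, whereas a classifier can surface images never seen before, which is what lets reviewers “keep up with offenders.”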

“Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse,” they said. “We’re making this available for free to NGOs and industry partners via our Content Safety API, a toolkit to increase the capacity to review content in a way that requires fewer people to be exposed to it.”

Most of the major technology companies now leverage AI to detect all manner of offensive material, from nudity to abusive comments. But extending its image recognition technology to include new photos should go some way toward helping Google thwart — at scale — one of the most abhorrent forms of abuse imaginable. “This initiative will allow greatly improved speed in review processes of potential CSAM,” Todorovic and Chaudhuri continued. “We’ve seen firsthand that this system can help a reviewer find and take action on 700 percent more CSAM content over the same time period.”

Among Google’s partner organizations at launch is U.K.-based charity the Internet Watch Foundation (IWF), which has a mission to “minimize the availability of ‘potentially criminal’ internet content, specifically images of child sexual abuse.”

“We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders by targeting imagery that hasn’t previously been marked as illegal material,” added IWF CEO Susie Hargreaves. “By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users.”

Access to the Content Safety API is only available by request, via an online request form.

Social Media Threatened with Child Protection Laws

Social media firms are being threatened with new laws if they don’t do more to protect children online. In a letter to companies including Facebook and Google, Health Secretary Jeremy Hunt accuses them of “turning a blind eye” to their impact on children. He gives them until the end of April to outline action on cutting underage use, preventing cyberbullying, and promoting healthy screen time.

Google and Facebook say they share Mr Hunt’s commitment to safety.

The age requirement to sign up to Facebook, Instagram, Twitter and Snapchat is 13. To use WhatsApp or to have a YouTube account, you must also be at least 13.

In his letter to the internet firms, Mr Hunt said: “I am concerned that your companies seem content with a situation where thousands of users breach your own terms and conditions on the minimum user age. I fear that you are collectively turning a blind eye to a whole generation of children being exposed to the harmful emotional side effects of social media prematurely. This is both morally wrong and deeply unfair to parents who are faced with the invidious choice of allowing children to use platforms they are too young to access or excluding them from social interaction that often the majority of their peers are engaging in.”

Mr Hunt met social media companies six months ago to discuss how to improve the mental health of young people who use the technology.

He told the Sunday Times that there had been “warm words” and “a few welcome moves” since then, but the overall response had been “extremely limited” – leading him to conclude that a voluntary, joint approach would not be good enough. “None are easy issues to solve I realise, but an industry that boasts some of the brightest minds and biggest budgets should have been able to rise to the challenge,” said Mr Hunt.

The National Bullying Helpline, a charity which deals with online bullying, said the government needed to introduce legislation to govern the social media companies. “Asking Facebook and other social media giants to regulate themselves is like asking the press to regulate themselves. It won’t happen,” it added.

Mr Hunt said the government would not rule out introducing new legislation to tackle the issue when it publishes its response to the Internet Safety Strategy consultation in May. He has also asked the chief medical officer to launch a review into the impact of technology on the mental health of children and young people.

Katie O’Donovan, public policy manager at Google UK, said the company had shown its commitment to protecting children by developing its resources – such as an online safety course which has been taught to 40,000 schoolchildren.

Facebook said it welcomed Mr Hunt’s “continued engagement on this important issue” and shared his ambition to create a safe and supportive environment for young people online. “We continue to invest heavily in developing tools for parents and age-appropriate products to meet this challenge and we look forward to continuing to work with our child safety partners and government to make progress in this area,” said Karim Palant from Facebook.