AFP seeks ethically sourced images
The AFP is looking for a way to ethically source images after the Clearview AI controversy.
In 2020, the Australian Federal Police (AFP) admitted briefly trialling Clearview AI, a controversial facial recognition tool built on a database of billions of images scraped from the internet.
The “limited pilot” was conducted by the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) to see if it would be useful in child exploitation investigations.
Last year, an investigation by the Office of the Australian Information Commissioner (OAIC) found that Clearview AI had breached Australia's privacy rules, and that the AFP had separately failed to comply with its own privacy obligations by using the tool.
But now, the AFP is working with Monash University on an “ethically-sourced” database of images on which to train artificial intelligence algorithms to detect child exploitation.
The university’s AiLECS (AI for Law Enforcement and Community Safety) Lab will try to collect at least 100,000 images from the community over the next six months, calling on willing adult contributors to populate its “image bank” with childhood photos of themselves.
These photos will help the AI to “recognise the presence of children in ‘safe’ situations, to help identify ‘unsafe’ situations and potentially flag child exploitation material”, the AFP and Monash University said.
AiLECS Lab co-director Associate Professor Campbell Wilson said the project seeks to “build technologies that are ethically accountable and transparent”.
“To develop AI that can identify exploitative images, we need a very large number of children’s photographs in everyday ‘safe’ contexts that can train and evaluate the AI models,” he said.
“But sourcing these images from the internet is problematic when there is no way of knowing if the children in those pictures have actually consented for their photographs to be uploaded or used.”
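For readers unfamiliar with the approach, the training the researchers describe is a standard supervised image classification task: a model learns from labelled examples of each category. The sketch below is a minimal illustration of that general technique only, not the AiLECS Lab's actual system; the directory layout, model choice and hyperparameters are all hypothetical placeholders.

```python
# Minimal sketch of supervised image classification (hypothetical, not
# the AiLECS Lab's system): train a stock CNN on labelled categories.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layout: data/train/<category_a>/*.jpg, data/train/<category_b>/*.jpg
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# A stock ResNet-18 with its final layer replaced for two classes.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)  # compare predictions to labels
        loss.backward()
        optimiser.step()
```

As the quotes above make clear, the hard part of such a project is not the training loop but assembling a large, consent-based dataset to feed it.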