Apple announced this week its plans to create neuralMatch, software for iPhones (initially deployed only in the United States) that will scan all user images for illegal content involving children. The software is to be installed on all devices, regardless of the consent of the device owner.
The U.S. company’s plans for the new feature were reported by The Financial Times, which noted that Apple’s intentions raise serious doubts among security and privacy researchers. An intrusion into the privacy of millions of people on this scale could, in time, lead to surveillance of all devices for far more than this one category of content.
The neuralMatch system is designed to use an artificial intelligence algorithm to monitor iPhones and alert a team of content reviewers to potentially illegal images. If the team confirms that the detected content may be illegal, it will contact the appropriate law enforcement agencies.
Security experts are concerned
Security researchers told The Financial Times that while they support work to combat child abuse, they are concerned that Apple risks opening the door for governments around the world to access their citizens’ private data. Over time, that access could expand to content other than what the neuralMatch system was originally designed to detect.
Ross Anderson, professor of security engineering at the University of Cambridge, called neuralMatch “an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of […] our phones and laptops.”
The researchers also note that although the system is currently trained to spot child sexual abuse material, it could be adapted to scan for any other targeted imagery and text, for instance terror beheadings or anti-government signs at protests. Apple’s precedent could also increase pressure on other tech companies to use similar techniques.
Matthew Green, a security professor at Johns Hopkins University, warned of the possibilities of such technology: “This will break the dam — governments will demand it from everyone.”
The system Apple wants to introduce may also pressure other technology companies to build similar tools. The ability to continuously scan photos stored on iPhones, initially only those of U.S. users, will make governments more eager to work with Apple, and other companies may be forced to offer similar solutions to compete in this arena.
Algorithm that evaluates all images
An artificial intelligence algorithm will first assess whether an image on an iPhone user’s device is legitimate. According to Apple, the algorithm was trained using 200,000 images of sexual abuse collected by the U.S. nonprofit National Center for Missing and Exploited Children.
Every photo on the phone and uploaded to iCloud will receive a tag indicating whether it is suspicious. If a certain number of a user’s photos are flagged as suspicious, Apple will decrypt all of that person’s flagged content, and if the human review team determines that the content may be illegal, it will forward it to the appropriate authorities.
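The flow described above (tag each photo, count matches against a known-hash database, escalate once a threshold is crossed) can be sketched roughly as follows. The hash function, the hash list, and the threshold value here are all illustrative assumptions, not details of Apple’s actual system:

```python
import hashlib
from dataclasses import dataclass, field

# Illustrative values only; the real threshold and hash database are not public.
FLAG_THRESHOLD = 3
KNOWN_BAD_HASHES = {"a1b2", "c3d4", "e5f6"}  # stand-in hash database

def perceptual_hash(image_bytes: bytes) -> str:
    """Placeholder for a perceptual-hash model; a real system would use a
    hash robust to resizing and recompression, not a cryptographic one."""
    return hashlib.sha256(image_bytes).hexdigest()[:4]

@dataclass
class Account:
    flagged: list = field(default_factory=list)

def tag_photo(account: Account, photo_hash: str) -> bool:
    """Tag one photo by its hash; return True once the account crosses the
    threshold and would be escalated to human review."""
    if photo_hash in KNOWN_BAD_HASHES:
        account.flagged.append(photo_hash)
    return len(account.flagged) >= FLAG_THRESHOLD
```

The threshold is the key design choice: a single hash match is treated as noise, and only an accumulation of matches triggers decryption and human review.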
What if there are faulty verifications?
It is not yet known how the entire system will be protected against errors, so-called ‘false positives’, i.e. cases in which legal content is incorrectly flagged as illegal. It is also unclear who will be liable for damages that users may suffer in the case of false accusations.
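The scale of the false-positive problem can be illustrated with a back-of-the-envelope binomial calculation. All numbers below (photo count, per-image error rate, escalation threshold) are invented for illustration; Apple has not published such figures:

```python
from math import comb

def prob_account_flagged(n_photos: int, fp_rate: float, threshold: int) -> float:
    """Probability that at least `threshold` of `n_photos` innocent images are
    independently misflagged, given a per-image false-positive rate."""
    # Sum the complement (fewer than `threshold` false matches); summing the
    # tail directly would involve astronomically large binomial coefficients.
    p_below = sum(comb(n_photos, k) * fp_rate**k * (1 - fp_rate)**(n_photos - k)
                  for k in range(threshold))
    return 1.0 - p_below

# Illustrative numbers: a library of 10,000 photos, a one-in-a-million
# per-image error rate, and escalation after 3 matches.
print(prob_account_flagged(10_000, 1e-6, 3))  # roughly 1.7e-7 per account
```

A per-account probability that looks tiny still matters at Apple’s scale: multiplied across hundreds of millions of accounts, even such illustrative rates would produce a steady stream of innocent people sent to human review.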
Anyone who has seen the 2012 movie The Hunt, starring Mads Mikkelsen, knows how a mere accusation of pedophilia, whether it is right or wrong, can completely destroy a person’s reputation and life. The situation presented in the movie is not very different from reality.
But the most difficult question is whether the system will actually serve its purpose. Advanced iOS users, people who place a high value on their privacy, and above all people who knowingly store illegal content on their devices will look for ways to circumvent the neuralMatch system. Judging by analogous situations, it shouldn’t take them long to find a workaround.