Early tests show that NeuralHash can tolerate image resizing and compression, but not cropping or rotation.
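To see why a perceptual hash can survive resizing but not cropping, here is a toy "average hash" (aHash) in pure Python. This is *not* Apple's NeuralHash (which uses a neural network); it is just a minimal sketch of the general idea: downscale to a tiny grid, then record which cells are brighter than the mean. Resizing leaves the grid samples alone; cropping shifts them.

```python
def resize_nn(img, size):
    """Nearest-neighbour downscale of a square grayscale image (list of lists)."""
    n = len(img)
    return [[img[y * n // size][x * n // size] for x in range(size)]
            for y in range(size)]

def ahash(img, size=8):
    """64-bit average hash: one bit per cell, set when the cell beats the mean brightness."""
    small = resize_nn(img, size)
    flat = [p for row in small for p in row]
    mean = sum(flat) / len(flat)
    return tuple(int(p > mean) for p in flat)

# 64x64 checkerboard test image with 8-pixel blocks.
orig = [[255 if (x // 8 + y // 8) % 2 == 0 else 0 for x in range(64)]
        for y in range(64)]

resized = resize_nn(orig, 32)              # downscale to 32x32
cropped = [row[:56] for row in orig[:56]]  # crop to the top-left 56x56

print(ahash(orig) == ahash(resized))  # True: resizing preserves the hash
print(ahash(orig) == ahash(cropped))  # False: cropping shifts the sample grid
```

The crop moves which pixels land on the 8x8 sample grid, flipping bits in the hash; a real perceptual hash degrades more gracefully, but the same geometric fragility applies.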
Not “reverse engineered” as such. It was extracted from iOS 14.3, where it was already present, unannounced, and put in a test harness. The investigators demonstrated two quite distinct images with the same hash, which means Bad Guys can mount DoS attacks by generating as many colliding images as they like (n → ∞) and overloading the manual verification process.
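The collision-flooding idea is easy to demonstrate on a toy hash. Reusing the simple "average hash" sketch (again, not NeuralHash, whose collisions were found with gradient-based attacks on the network), we can forge a visibly different image with an identical hash by only touching pixels the downscaling grid never samples:

```python
def resize_nn(img, size):
    """Nearest-neighbour downscale of a square grayscale image (list of lists)."""
    n = len(img)
    return [[img[y * n // size][x * n // size] for x in range(size)]
            for y in range(size)]

def ahash(img, size=8):
    """64-bit average hash: one bit per cell, set when the cell beats the mean brightness."""
    small = resize_nn(img, size)
    flat = [p for row in small for p in row]
    mean = sum(flat) / len(flat)
    return tuple(int(p > mean) for p in flat)

# 64x64 checkerboard test image with 8-pixel blocks.
orig = [[255 if (x // 8 + y // 8) % 2 == 0 else 0 for x in range(64)]
        for y in range(64)]

# Flip every pixel EXCEPT those at multiples of 8 in both coordinates --
# the only 64 pixels the 8x8 nearest-neighbour grid ever "sees".
forged = [[orig[y][x] if (y % 8 == 0 and x % 8 == 0) else 255 - orig[y][x]
           for x in range(64)] for y in range(64)]

differing = sum(orig[y][x] != forged[y][x] for y in range(64) for x in range(64))
print(ahash(orig) == ahash(forged))  # True: a deliberate collision
print(differing)                     # 4032 of 4096 pixels differ
```

An attacker who can cheaply mint such collisions against the target list can flood the system with benign-looking flagged images, which is exactly the denial-of-service concern raised above.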
From the blog comments:
One wonders if Apple’s client-side scanning isn’t ultimately intended for markets where false positives are even less of an issue than in the US and the technology is merely being labelled as anti-pedophilia for western consumption.
… well duh … there are major “markets” in jurisdictions where not having a back door would be considered “obstruction of justice”.
One is also led to suspect that Apple announced its NeuralHash/CSAM-protection thing as cover when they knew the algorithm was about to be outed.
Edit: Apple claims to have mitigation strategies. I suspect the hashes of images of interest will leak eventually. Cross-posted to “Apple Privacy Issues”.