Deep neural networks (DNNs) have made the news recently as the predictive technology behind convincing fake videos of politicians and celebrities. You may not be aware, however, that you have most likely already used the same technology in a much less controversial form: the translation app on your smartphone.
These examples are just the tip of the iceberg. In the coming years, applications of this exciting technology promise to add color automatically to black-and-white photographs, translate a photograph of a menu into the language of your choice and, perhaps most impressively of all, transform medical image analysis, enabling faster and more accurate diagnosis of injuries and disease, and, very possibly, their treatment too.
It all sounds inspiring, but what if this technology, based loosely on the functioning of neural networks in the human brain, is also vulnerable to the brain's own limitations: misinterpretation and ‘human’ error? Can it be ‘deceived’ into making false predictions? And, if so, what are the implications both for DNNs and for research into the functioning of biological neural networks in the brain itself?