Recent research in artificial neural networks shows how to generate adversarial examples - slightly perturbed images that cause complete misclassification or other failures of the model. Is it possible to generate images that will cause surprising effects in biological neural networks?
Optical illusions and camouflage (both evolved and man-made) can be regarded as existing examples of such images.
Neural Population Control via Deep Image Synthesis - the first few layers of a pretrained AlexNet are used to approximate V4 responses of a macaque; an adversarial image is then generated with this model (using gradient ascent and random translations), with the goal of activating a chosen site (a few neurons around the electrode) while suppressing the others. And it works.
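A minimal PyTorch sketch of that recipe, assuming we simply treat one channel of an intermediate AlexNet feature layer as the "site" to drive (the actual study fits the model to recorded neural responses; the layer index, channel index, and other constants below are arbitrary placeholders):

```python
# Sketch only: one channel of an intermediate AlexNet layer stands in for
# the recorded V4 site; the real work maps network features to measured
# neural responses. Layer/channel indices and constants are arbitrary.
import torch
from torchvision.models import alexnet, AlexNet_Weights

model = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1).eval()
features = model.features          # convolutional part of AlexNet
for p in features.parameters():
    p.requires_grad_(False)

LAYER_INDEX = 8      # hypothetical: stop after this many feature layers
TARGET_CHANNEL = 42  # hypothetical "site" we want to drive

def site_responses(img):
    """Mean activation per channel at the chosen depth (one scalar per 'site')."""
    x = img
    for layer in features[:LAYER_INDEX]:
        x = layer(x)
    return x.mean(dim=(2, 3)).squeeze(0)

# Start from a mid-gray image (in normalized input space) and optimize it.
img = torch.zeros(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(500):
    # Random small translation so the result does not rely on exact alignment.
    dx, dy = torch.randint(-8, 9, (2,))
    shifted = torch.roll(img, shifts=(int(dx), int(dy)), dims=(2, 3))

    r = site_responses(shifted)
    target = r[TARGET_CHANNEL]
    others = (r.sum() - target) / (r.numel() - 1)

    loss = -(target - others)   # maximize the chosen site, suppress the rest
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        img.clamp_(-2.5, 2.5)   # keep pixels in a plausible normalized range
```

The random translation plays the same role as in feature-visualization work: it discourages images that only work for one exact pixel alignment.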
Deep Learning Human Mind for Automated Visual Classification - an RNN is used to classify EEG traces of humans viewing ImageNet images. Maybe this could be used to build a (terrible) differentiable model (which makes producing adversarial examples a lot easier), even with common non-invasive EEG - see the sketch after the next note. Unfortunately, the study seems to have sloppy statistics on its validation data.
This paper reports weaker (but still significant) results, but its experimental setup appears to be correct.
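Purely for illustration, a hedged sketch of that idea: if one had a differentiable proxy mapping an image to the class scores decoded from the viewer's EEG, ordinary adversarial-example machinery would apply. Nothing below comes from either paper - EEGProxy is an untrained placeholder for such a model, and every constant is arbitrary.

```python
# Hypothetical: a differentiable image -> "EEG-decoded class logits" proxy.
# EEGProxy is an untrained stand-in; a real proxy would have to be learned
# and validated, which is exactly what is questionable above.
import torch
import torch.nn as nn

class EEGProxy(nn.Module):
    """Placeholder for a learned image -> EEG-class-logits model."""
    def __init__(self, n_classes=40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )
    def forward(self, x):
        return self.net(x)

proxy = EEGProxy().eval()
for p in proxy.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 224, 224)   # the stimulus we start from
target_class = torch.tensor([7])     # class whose "EEG signature" we want to evoke

delta = torch.zeros_like(image, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    logits = proxy((image + delta).clamp(0, 1))
    loss = loss_fn(logits, target_class)   # push the prediction toward the target
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-0.05, 0.05)          # keep the perturbation small
```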
Synthesizing Robust Adversarial Examples - generating adversarial objects when we cannot control how the pixels of our example get mapped to the network input (translation, rotation, perspective). The authors even 3D-printed a physical object that causes misclassification of photos in which it appears.
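A rough sketch of that expectation-over-transformations trick - average the adversarial loss over random rotations and translations so the perturbation survives them. An off-the-shelf ResNet-18 stands in for the attacked classifier; the target class, transformation ranges and perturbation budget are arbitrary, and the paper additionally handles 3D rendering and printing effects.

```python
# Sketch of expectation over transformations (EOT): optimize a perturbation
# that fools the classifier *on average* over random rotations/translations
# instead of for one fixed pixel alignment. Constants are arbitrary.
import math
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 224, 224)   # stand-in for a real photo of the object
target = torch.tensor([413])         # arbitrary ImageNet target class

def random_transform(x):
    """Apply a random small rotation and translation (differentiable)."""
    angle = (torch.rand(1) - 0.5) * math.pi / 6   # about +/- 15 degrees
    tx, ty = (torch.rand(2) - 0.5) * 0.2          # about +/- 10% shift
    cos, sin = torch.cos(angle), torch.sin(angle)
    theta = torch.stack([
        torch.cat([cos, -sin, tx.view(1)]),
        torch.cat([sin,  cos, ty.view(1)]),
    ]).unsqueeze(0)
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

delta = torch.zeros_like(image, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

for _ in range(300):
    # Average the targeted-attack loss over a small batch of random transforms.
    loss = 0.0
    for _ in range(8):
        adv = random_transform((image + delta).clamp(0, 1))
        loss = loss + F.cross_entropy(model(adv), target)
    loss = loss / 8
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-8 / 255, 8 / 255)   # keep the perturbation visually small
```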
Trypophobia "is an aversion to the sight of irregular patterns or clusters of small holes, or bumps." Apparently the reaction to specific patterns is hardcoded in people by evolutionary selection (predators, parasites?).
BLIT is a concept from science fiction: an image that triggers seizures (or even death) in anyone who views it. In reality, we do have video sequences that occasionally trigger epileptic seizures. (Now, optimize the function "how large are the unusual EEG patterns caused by viewing this video" - see the sketch below.)
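Taking that parenthetical literally, a purely speculative sketch: if a differentiable model existed that scored "how abnormal is the EEG evoked by this clip", one could run gradient ascent directly on the frames. No such validated model exists - AbnormalityScore below is an untrained placeholder and the clip size is arbitrary.

```python
# Speculative: gradient ascent on video frames against a hypothetical
# differentiable "EEG abnormality" scorer. AbnormalityScore is an untrained
# placeholder; nothing here is backed by a real model.
import torch
import torch.nn as nn

class AbnormalityScore(nn.Module):
    """Placeholder: maps a clip (N, C, T, H, W) to one scalar."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(8, 1),
        )
    def forward(self, clip):
        return self.net(clip).squeeze()

score_model = AbnormalityScore().eval()
for p in score_model.parameters():
    p.requires_grad_(False)

clip = torch.rand(1, 3, 16, 64, 64, requires_grad=True)  # 16 small frames
opt = torch.optim.Adam([clip], lr=1e-2)

for _ in range(100):
    loss = -score_model(clip.clamp(0, 1))   # ascend the predicted abnormality
    opt.zero_grad()
    loss.backward()
    opt.step()
```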