====== Towards computing a BLIT ======

Recent research on artificial neural networks shows how to generate adversarial examples: slightly perturbed images that cause complete misclassification or other failures of the model. Is it possible to generate images that cause surprising effects in biological neural networks?

Optical illusions and camouflage (both evolved and man-made) can be regarded as examples of this.

  * [[https://www.biorxiv.org/content/10.1101/461525v1|Neural Population Control via Deep Image Synthesis]] - the first few layers of a pretrained AlexNet are used to approximate the responses of recorded sites in a macaque's V4; an adversarial example is then generated from this model (via gradient ascent with random translations) with the goal of activating a chosen site (a few neurons around an electrode) while deactivating the others. And it works. (A minimal sketch of this kind of optimization follows the list.)
  * [[http://perceive.dieei.unict.it/deep_learning_human_mind.php|Deep Learning Human Mind for Automated Visual Classification]] - an RNN is used to classify EEG traces of humans viewing ImageNet images. Maybe this could be used to build a (terrible) differentiable model (which makes producing adversarial examples a lot easier) even with common non-invasive EEG. Unfortunately, the study seems to have sloppy statistics on its validation data.
    * [[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4546653/|This paper]] reports weaker (but still significant) results and seems to have a sound experimental setup.
  * [[https://keenlab.tencent.com/en/whitepapers/Experimental_Security_Research_of_Tesla_Autopilot.pdf|Experimental Security Research of Tesla Autopilot]] - generating adversarial examples for a costly (1 inference per second) black-box model whose gradient we don't know. (A gradient-free attack sketch follows the list.)
  * [[https://arxiv.org/abs/1707.07397|Synthesizing Robust Adversarial Examples]] - generating adversarial objects when we can't control how the pixels of our example get mapped to the network input (translation, rotation, perspective). They even 3D-printed a physical object that causes misclassification of photos in which it appears. (See the transformation-robust sketch after the list.)
  * [[https://en.wikipedia.org/wiki/Trypophobia|Trypophobia]] "is an aversion to the sight of irregular patterns or clusters of small holes or bumps." Apparently the reaction to specific patterns is hardwired in people by evolutionary selection (predators, parasites?).
    * [[https://www.kaggle.com/cytadela8/trypophobia|There is a Kaggle dataset for that!]]

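The core trick of the population-control paper - gradient ascent on the input image with random translations - fits in a few lines. The sketch below is only an illustration of the general technique, not the authors' code: a feature channel of a pretrained AlexNet stands in for the fitted model of a V4 site, the "deactivate the others" term is omitted, and the layer index and hyperparameters are made-up assumptions.

<code python>
# Sketch: activation maximization with random translations.
# Assumption: an AlexNet channel stands in for a modeled "V4 site".
import torch
from torchvision import models

model = models.alexnet(pretrained=True).features.eval()
target_layer, target_channel = 8, 42   # hypothetical "site" to drive

img = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

def site_activation(x):
    # run through the truncated network and read out one feature channel
    for i, layer in enumerate(model):
        x = layer(x)
        if i == target_layer:
            return x[0, target_channel].mean()

for step in range(200):
    # random translation, so the resulting image is robust to small shifts
    dx, dy = torch.randint(-8, 9, (2,)).tolist()
    shifted = torch.roll(img, shifts=(dx, dy), dims=(2, 3))
    loss = -site_activation(shifted)   # gradient ascent on the site's activation
    opt.zero_grad()
    loss.backward()
    opt.step()
    img.data.clamp_(0, 1)              # keep pixel values in a valid range
</code>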
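When only predictions are available (as with the Tesla black box), the gradient can be estimated from queries, e.g. with NES-style finite differences. The sketch below is a generic illustration of that idea, not the method from the whitepaper; ''query_model'' is a hypothetical stand-in for the expensive black-box scorer.

<code python>
# Sketch: gradient-free (black-box) adversarial perturbation via NES-style
# gradient estimation. Every call to query_model is assumed to be expensive.
import numpy as np

def query_model(image: np.ndarray) -> float:
    """Hypothetical stand-in: return the target-class score for an image.
    Replace with the real (expensive) black-box query."""
    return float(image.mean())  # toy placeholder so the sketch runs

def nes_attack(image, steps=50, samples=20, sigma=0.1, lr=0.01, eps=0.05):
    """Estimate the gradient from queries only and push the score down."""
    x = image.copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        for _ in range(samples):
            noise = np.random.randn(*x.shape)
            # antithetic pair: two queries per sampled direction
            grad += noise * (query_model(x + sigma * noise)
                             - query_model(x - sigma * noise))
        grad /= 2 * sigma * samples
        x = np.clip(x - lr * np.sign(grad), 0.0, 1.0)   # descend the score
        x = np.clip(x, image - eps, image + eps)        # stay near the original
    return x

adv = nes_attack(np.random.rand(64, 64, 3))
</code>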
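The robust-examples paper builds on Expectation Over Transformation (EOT): average the loss over sampled transformations so the perturbation survives them. A minimal 2D sketch, assuming a differentiable classifier (a torchvision ResNet as a placeholder) and sampling only random shifts rather than the paper's full 3D transformations:

<code python>
# Sketch: Expectation Over Transformation (EOT) with random shifts only.
import torch
import torch.nn.functional as F
from torchvision import models

clf = models.resnet18(pretrained=True).eval()
target_class = 1                      # hypothetical class we want to force

x = torch.rand(1, 3, 224, 224)        # placeholder "object" image
delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for step in range(100):
    loss = 0.0
    for _ in range(8):                # expectation over sampled transformations
        dx, dy = torch.randint(-16, 17, (2,)).tolist()
        t = torch.roll(x + delta, shifts=(dx, dy), dims=(2, 3))
        loss = loss + F.cross_entropy(clf(t), torch.tensor([target_class]))
    opt.zero_grad()
    (loss / 8).backward()
    opt.step()
    delta.data.clamp_(-0.05, 0.05)    # keep the perturbation small
</code>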
===== Name of this page…? =====

[[https://en.wikipedia.org/wiki/BLIT_(short_story)|BLIT]] is a concept from science fiction: an image that triggers seizures (or even death) in people who see it. In reality, though, we do have [[https://en.wikipedia.org/wiki/Denn%C5%8D_Senshi_Porygon#Strobe_lights|video sequences]] that in rare cases trigger epileptic seizures. (Now optimize the objective "how strong are the unusual EEG patterns caused by viewing such a video".)

===== Possible uses =====

  * fun & trolling
  * advertisements exploiting bugs in the human brain
  * biological warfare