Wired reveals a garment that doesn’t exactly make its wearer invisible, but does make it harder for AI systems – like those used in CCTV networks worldwide – to identify the person wearing it:
Researchers at Northeastern University, MIT and IBM have designed a top printed with a kaleidoscopic patch of colour that renders the wearer undetectable to AI. It’s part of a growing number of “adversarial examples” – physical objects designed to counteract the creep of digital surveillance.
“The adversarial T-shirt works on the neural networks used for object detection,” explains Xue Lin, an assistant professor of electrical and computer engineering at Northeastern, and co-author of a recent paper on the subject. Normally, a neural network recognises someone or something in an image, draws a “bounding box” around it, and assigns a label to that object.
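The output stage of such a detector can be pictured as a list of box–label–score triples, with a confidence threshold deciding what counts as a detection. A minimal sketch, with illustrative data and a hypothetical threshold (not taken from the paper):

```python
# Minimal sketch of an object detector's output stage: each detection is a
# bounding box (x1, y1, x2, y2), a class label, and a confidence score.
# The detections and the 0.5 threshold are illustrative, not from the paper.

def filter_detections(detections, threshold=0.5):
    """Keep only detections the network is confident about."""
    return [d for d in detections if d["score"] >= threshold]

raw_detections = [
    {"box": (120, 40, 260, 380), "label": "person", "score": 0.91},
    {"box": (300, 200, 340, 260), "label": "dog",   "score": 0.32},
]

for d in filter_detections(raw_detections):
    print(d["label"], d["box"], d["score"])
```

An adversarial pattern succeeds when it pushes the “person” score below that threshold, so the box is never drawn.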
By finding the boundary points of a neural network – the thresholds at which it decides whether or not something is an object – Lin and colleagues were able to work backwards and create a design that confuses the network’s classification and labelling system.
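The idea of working backwards from a decision boundary can be shown on a toy linear classifier: follow the negative gradient of the “object” score until the input crosses the boundary. This is only a conceptual sketch with made-up weights, not the optimisation used in the paper:

```python
# Toy illustration of "working backwards" from a decision boundary:
# a linear scorer says "object" when w.x + b > 0. Following the negative
# gradient of the score (here simply -w) nudges the input until it crosses
# the boundary and is no longer classified as an object.
# Weights and input are made up for illustration.

def score(x, w, b):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def adversarial_perturb(x, w, b, step=0.1, max_iters=200):
    """Nudge x along -w until the score drops to zero or below."""
    x = list(x)
    for _ in range(max_iters):
        if score(x, w, b) <= 0:
            break
        x = [xi - step * wi for xi, wi in zip(x, w)]
    return x

w, b = [0.8, -0.4, 0.6], 0.2
x = [1.0, 0.5, 1.2]            # initially classified as "object"
x_adv = adversarial_perturb(x, w, b)
print(score(x, w, b) > 0)      # True
print(score(x_adv, w, b) > 0)  # False
```

Real detectors are deep, non-linear networks, but the principle is the same: the attacker optimises the input (here, a printed pattern) to drive the detection score across the boundary.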
Looking specifically at two object-recognition neural networks commonly used for training purposes – YOLOv2 and Faster R-CNN – the team were able to identify the areas of the body where adding pixel noise could confuse the AI and, in effect, render the wearer invisible.
The researchers recorded a person walking while wearing a checkerboard pattern and tracked the corners of each of the board’s squares in order to accurately map out how it wrinkles when the person moves. Using this technique improved the ability to evade detection from 27 per cent to 63 per cent against YOLOv2, and from 11 per cent to 52 per cent against Faster R-CNN.
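One way to picture what the tracked checkerboard buys you: each square’s four corners, followed from frame to frame, define how that patch of fabric has moved, so any point of the printed design can be re-located by interpolating between them. A simplified bilinear sketch – a stand-in for the deformation model actually fitted in the paper:

```python
# Simplified illustration of using tracked checkerboard corners to map a
# point of the printed pattern onto the wrinkled fabric. Each square's four
# corners, tracked per video frame, define a bilinear warp of that square.
# This is an illustrative stand-in, not the paper's actual deformation model.

def bilinear_warp(u, v, c00, c10, c01, c11):
    """Map normalised square coords (u, v) in [0, 1]^2 through the
    quadrilateral given by tracked corner positions c00..c11."""
    x = ((1 - u) * (1 - v) * c00[0] + u * (1 - v) * c10[0]
         + (1 - u) * v * c01[0] + u * v * c11[0])
    y = ((1 - u) * (1 - v) * c00[1] + u * (1 - v) * c10[1]
         + (1 - u) * v * c01[1] + u * v * c11[1])
    return (x, y)

# Corners of one square as tracked in a frame where the fabric has wrinkled:
c00, c10, c01, c11 = (0, 0), (10, 1), (1, 10), (12, 12)
print(bilinear_warp(0.5, 0.5, c00, c10, c01, c11))  # → (5.75, 5.75)
```

Modelling the wrinkling this way lets the adversarial pattern be optimised against the deformed fabric rather than an idealised flat print, which is what lifted the evasion rates reported above.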
Lin says that the team’s ultimate goal is to find holes in neural networks so that surveillance firms can fix them, rather than to assist people in avoiding detection.