Face ID

Researchers from Fudan University in China, the Chinese University of Hong Kong, Indiana University, and Alibaba Inc. have created a baseball cap that can reliably fool facial recognition software into thinking that you’re, well, not you.
The researchers laced the inside of the cap with tiny LEDs that projected infrared dots onto “strategic spots” on the wearer’s face to subtly alter their features, in a technique known as “adversarial learning.”
The device made a facial-recognition system called FaceNet misidentify its targets as various public figures (including musician Moby and Korean politician Lee Hoi-chang) 70 percent of the time.
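The impersonation idea can be pictured in embedding space: a system like FaceNet maps a face to a unit-length vector, and an attacker wants a small perturbation, confined to a few "strategic spots," that pulls their own embedding toward a target identity's. The sketch below is our own toy illustration, not the researchers' actual method: a random linear map stands in for the real network, a mask of eight entries stands in for the infrared dots, and plain gradient descent plays the role of the adversarial optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))  # stand-in "embedding model": R^64 -> R^16

def embed(x):
    """Unit-norm embedding, loosely mimicking FaceNet's normalized output."""
    v = W @ x
    return v / np.linalg.norm(v)

attacker = rng.normal(size=64)           # attacker's flattened "face"
target_emb = embed(rng.normal(size=64))  # target identity's embedding

mask = np.zeros(64)
mask[:8] = 1.0        # only 8 "strategic spots" may be perturbed
delta = np.zeros(64)  # the adversarial perturbation we optimize

for _ in range(500):
    v = W @ (attacker + mask * delta)
    n = np.linalg.norm(v)
    e = v / n
    # Loss = 1 - cos(e, target_emb); backprop through the normalization,
    # then restrict the gradient to the masked entries.
    g_e = -target_emb
    g_v = (g_e - e * (e @ g_e)) / n
    delta -= 0.5 * mask * (W.T @ g_v)

before = embed(attacker) @ target_emb                 # similarity pre-attack
after = embed(attacker + mask * delta) @ target_emb   # similarity post-attack
```

Even in this toy setting, the masked perturbation raises the cosine similarity to the target identity, which is the core mechanism behind impersonation attacks on embedding-based recognizers.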
The infrared dots aren’t visible to the naked eye. So while they can fool security cameras, they can’t yet fool your friends into thinking you’re Moby (as much as we wish they could).
The researchers claim their work is the first to examine the risk adversarial learning poses to facial recognition. Previous research tried to fool such software with 3D-printed glasses, an approach the new paper describes as “cool but also conspicuous.”
This research comes at a time when facial recognition in security cameras is well on its way to becoming ubiquitous. Nvidia, for example, is currently at work installing automatic facial recognition technology into CCTV cameras, with the goal of offering the systems to cities in the future.
This research is more than an experiment with a fun gadget. It should be a wake-up call to companies and governments that hope to build facial-recognition technology into security cameras.
“Face-recognition techniques today are still far from secure and reliable when being applied to critical scenarios like authentication and surveillance,” the paper reads.
Smart cities are exciting, but developers should be aware of their (many) vulnerabilities as they work toward making them a reality.