This installation invites participants to experiment with 'fooling' facial detection models and to reflect on the limits of existing surveillance technologies. Physical adversarial attacks 'fool' such models into labelling a face as 'unknown' or ignoring a face altogether, using physical props to alter the visual input to the model. Try out one of the props to conduct your own adversarial attack on a facial detection model!
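For the curious, here is a minimal sketch of the idea behind an adversarial patch. It is a toy illustration, not the installation's actual model: a made-up linear "detector" scores an image, and a small patch of altered pixels (standing in for a physical prop) pushes the score below the detection threshold.

```python
import numpy as np

# Toy sketch, NOT a real facial detection model: a linear "detector"
# reports a face when the weighted pixel sum exceeds a threshold.
# All names and numbers here are hypothetical.
w = np.ones((8, 8))          # detector weights: favour bright pixels
face = np.full((8, 8), 0.5)  # a uniformly "face-like" test image

def detects_face(img, threshold=0.0):
    return float(np.sum(w * img)) > threshold

# A physical-style adversarial "patch": alter only a small 3x3 corner
# region, pushing the detector's score below its threshold -- much as
# a prop changes only part of what the camera sees.
patch = np.zeros_like(face)
patch[:3, :3] = -5.0
attacked = face + patch

print(detects_face(face))      # True: the plain image is detected
print(detects_face(attacked))  # False: the patch suppresses detection
```

Real attacks target far more complex models, but the principle is the same: a carefully chosen local change to the input flips the model's decision.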
Hi! We are Glen and Ned. We are both PhD candidates at the College of Engineering and Computer Science, Australian National University. Glen studies the social implications of emerging technology practices, and Ned studies the participation of end users in developing AI-enabled systems.