AI and Cybersecurity

Our work on artificial intelligence (AI) and cybersecurity includes examinations of adversarial manipulation of machine learning systems, as well as the legal, policy, and other hurdles facing cybersecurity research in AI.

Project Resources

  • Robust Physical-World Attacks on Deep Learning Visual Classification

    In this paper, presented at the 2018 Conference on Computer Vision and Pattern Recognition (CVPR 2018), researchers show that malicious alterations to real-world objects, such as stickers placed on a stop sign, can cause an image classifier to misclassify the object.

    Research Paper
  • Physical Adversarial Examples for Object Detectors

    Presented at the 12th USENIX Workshop on Offensive Technologies (WOOT '18), this paper explores physical adversarial attacks on object detectors, a broader class of deep learning models widely used to detect and label multiple objects within a scene.

    Research Paper
  • Adversarial Machine Learning: Robust Physical-World Attacks on Machine Learning Models

    Although deep neural networks (DNNs) perform well in a variety of applications, they are vulnerable to adversarial examples: small-magnitude perturbations added to the input data that cause misclassification (see the sketch after this list). Recent studies have demonstrated, however, that such adversarial examples have limited effectiveness in the physical world because physical conditions change; they either fail to cause misclassification entirely or work only in restricted cases where a relatively complex image is perturbed and printed on paper.

    News
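
To make the idea of a small-magnitude perturbation concrete, the following is a minimal sketch of the fast gradient sign method (FGSM), one standard way to generate digital adversarial examples. It is illustrative only: it is not the physical-world attack algorithm from the papers above, and the pretrained ResNet-18 model, the random stand-in image, and the eps budget of 0.03 are assumptions chosen for this example.

    # Minimal FGSM sketch (illustrative; not the physical-world attack
    # from the papers above). Assumes PyTorch and torchvision are installed.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    def fgsm_example(model, x, label, eps=0.03):
        # Perturb each input value by at most eps in the direction that
        # increases the classification loss: the "small-magnitude" perturbation.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in [0, 1]

    # Hypothetical setup: a pretrained classifier and a random stand-in "image".
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    x = torch.rand(1, 3, 224, 224)
    label = model(x).argmax(dim=1)  # attack the model's own current prediction
    x_adv = fgsm_example(model, x, label)
    print(model(x_adv).argmax(dim=1) != label)  # True when the label flips

In the physical-world settings these papers study, a perturbation must additionally survive printing, viewing distance, angle, and lighting changes, which is what makes the attacks above substantially harder than this digital sketch.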