On Feasibility of Intent Obfuscating Attacks

Created by MG96

External, Public, cs.CR, cs.CV

Statistics

Citations: 0
References: 0
Authors

Zhaobin Li, Patrick Shafto
Project Resources

Name                Type               Source
ArXiv Paper         Paper              arXiv
GitHub Repository   Code Repository    GitHub
Abstract

Intent obfuscation is a common tactic in adversarial situations, enabling the attacker to both manipulate the target system and avoid culpability. Surprisingly, it has rarely been implemented in adversarial attacks on machine learning systems. We are the first to propose using intent obfuscation to generate adversarial examples for object detectors: by perturbing another non-overlapping object to disrupt the target object, the attacker hides their intended target. We conduct a randomized experiment on 5 prominent detectors -- YOLOv3, SSD, RetinaNet, Faster R-CNN, and Cascade R-CNN -- using both targeted and untargeted attacks and achieve success on all models and attacks. We analyze the success factors characterizing intent obfuscating attacks, including target object confidence and perturb object sizes. We then demonstrate that the attacker can exploit these success factors to increase success rates for all models and attacks. Finally, we discuss the main takeaways and legal repercussions.
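For intuition, the following is a minimal sketch, in PyTorch, of what an untargeted intent obfuscating attack can look like: a PGD-style perturbation is confined to a mask over a non-overlapping "perturb object", and the optimization drives down the detector's confidence in the hidden target. This is an illustrative reconstruction, not the authors' released code; `detector`, `target_score_fn`, and `perturb_mask` are hypothetical placeholders for whatever wrapper exposes a differentiable target score.

import torch

def intent_obfuscating_attack(detector, image, perturb_mask, target_score_fn,
                              eps=8 / 255, alpha=1 / 255, steps=200):
    # image:           (C, H, W) float tensor in [0, 1]
    # perturb_mask:    (1, H, W) binary mask that is 1 only inside the perturb
    #                  object, so the intended target's pixels are never touched
    # target_score_fn: maps raw detector outputs to the target object's
    #                  confidence score (a scalar tensor)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv = (image + delta * perturb_mask).clamp(0, 1)
        score = target_score_fn(detector(adv))
        score.backward()                        # gradient of target confidence
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # untargeted: push the score down
            delta.clamp_(-eps, eps)             # keep perturbation L-inf bounded
            delta.grad.zero_()
    return (image + delta * perturb_mask).clamp(0, 1).detach()

A targeted variant would instead ascend on the score of an attacker-chosen class at the target location; in both cases the visible perturbation sits entirely on the other object, which is what obscures the attacker's intent.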

Note:

No note available for this project.

Contact:

No contact available for this project.
