Physically realizable adversarial machine learning models against on-screen classifiers
Supervisor
Suitable for
Abstract
Co-supervised by the Systems Security Lab.
Taking into account the security-critical contexts in which computer vision models are expected to be used in mixed reality / the metaverse, it is of great interest to us to explore ways of crafting defences against malicious actors who use adversarial attacks.
Your objective will be to test, and ultimately improve, the robustness of existing computer vision models by crafting state-of-the-art adversarial examples, in the form of phishing websites, that remain effective on photographs of computer screens captured by a smartphone or XR headset camera.
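To give a rough sense of the kind of technique involved, the sketch below uses an Expectation-over-Transformation (EOT) style optimisation, one common way to make a perturbation survive the screen-to-camera pipeline; the project brief does not prescribe a specific method, and the classifier, the `camera_sim` transform parameters, the `eot_attack` helper, and all hyperparameters here are illustrative assumptions rather than part of the project description.

```python
# Hedged sketch: EOT-style attack against a placeholder on-screen classifier.
# Everything below (model choice, transforms, hyperparameters) is illustrative only.
import torch
import torchvision.transforms as T
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()  # placeholder classifier
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

# Random transforms loosely approximating how a phone / XR headset camera
# photographs a screen: slight rotation, perspective change, blur, colour jitter.
camera_sim = T.Compose([
    T.RandomRotation(degrees=5),
    T.RandomPerspective(distortion_scale=0.1, p=1.0),
    T.GaussianBlur(kernel_size=5, sigma=(0.1, 1.5)),
    T.ColorJitter(brightness=0.2, contrast=0.2),
])

def eot_attack(image, target_class, eps=8 / 255, steps=200, samples=8, lr=1e-2):
    """Optimise a bounded perturbation whose targeted effect (e.g. 'benign page')
    holds in expectation over simulated screen-to-camera transformations.
    `image` is a (1, 3, H, W) tensor in [0, 1]."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        loss = 0.0
        for _ in range(samples):  # Monte-Carlo estimate of the expected loss over transforms
            adv = (image + delta).clamp(0, 1)
            logits = model(normalize(camera_sim(adv)))
            loss = loss + torch.nn.functional.cross_entropy(logits, target)
        opt.zero_grad()
        (loss / samples).backward()
        opt.step()
        with torch.no_grad():  # keep the perturbation within an L-infinity budget
            delta.clamp_(-eps, eps)
    return (image + delta).clamp(0, 1).detach()

# Example use (hypothetical): screenshot is a (1, 3, 224, 224) tensor of a rendered page.
# adv_screenshot = eot_attack(screenshot, target_class=0)
```

The key design point is that the loss is averaged over randomly sampled camera-like transformations rather than computed on the clean digital image, which is what distinguishes a physically realizable attack from a purely digital one.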