A year ago, Pokémon Go became immensely popular as players explored their surroundings for Pokémon in the smartphone-based augmented reality (AR) app. This hyper-popular game, which barely scratched the surface of AR’s potential, led to increased interest in the technology. The AR industry is expected to grow to $100 billion by 2020, and with increasing interest in AR automotive windshields and head-mounted displays (HMDs), we could soon be able to experience immersive AR environments like the one depicted by designer and filmmaker Keiichi Matsuda in Hyper Reality.
But what would happen if a pop-up ad covered your game, causing you to lose? Or if, while you were driving, an AR object obscured a pedestrian?
These are the types of situations researchers consider in a new paper, Securing Augmented Reality Output. In the paper, Lab student Kiron Lebeck, along with CSE undergraduate Kimberly Ruth, Lab Affiliate Faculty Franzi Roesner, and Lab Co-Director Yoshi Kohno, addresses how to defend against buggy or malicious AR software that may, accidentally or deliberately, augment a user’s view of the world in undesirable or harmful ways. They ask: how can we enable the operating system of an AR platform to play a role in mitigating these kinds of risks? To address this issue, the team designed Arya, an AR platform that controls output through a designated policy framework, drawing policy conditions from a range of sources including the Microsoft HoloLens development guidelines and the National Highway Traffic Safety Administration (NHTSA)’s driver distraction guidelines.
The policy framework expresses policies as specific “if-then” statements, which allow the Arya platform to apply a designated mechanism, or action, to any virtual object that violates a policy’s condition. In a simulated driving experience, for example, Arya makes pop-up ads and notifications that could distract the driver transparent by applying the specified action, in this case transparency, to objects that violate the following policies:
• Don’t obscure pedestrians,
• Only allow ads to appear on billboards, and
• Don’t distract the user while driving.
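To make the “if-then” structure concrete, here is a minimal sketch of how such a policy framework might be organized: each policy pairs a condition with a mechanism that is applied to violating objects. All class, field, and function names here are illustrative assumptions, not taken from the Arya paper or its implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ArObject:
    """A hypothetical virtual object, with attributes a policy might inspect."""
    name: str
    is_ad: bool = False
    obscures_pedestrian: bool = False
    on_billboard: bool = False
    alpha: float = 1.0  # 1.0 = fully opaque, 0.0 = fully transparent

@dataclass
class Policy:
    """An 'if-then' rule: if `condition` holds for an object, apply `mechanism`."""
    description: str
    condition: Callable[[ArObject], bool]
    mechanism: Callable[[ArObject], None]

def make_transparent(obj: ArObject) -> None:
    """Example mechanism: render the offending object fully transparent."""
    obj.alpha = 0.0

# Illustrative policies mirroring the driving scenario described above.
policies = [
    Policy("Don't obscure pedestrians",
           lambda o: o.obscures_pedestrian, make_transparent),
    Policy("Only allow ads to appear on billboards",
           lambda o: o.is_ad and not o.on_billboard, make_transparent),
]

def enforce(objects: List[ArObject], active: List[Policy]) -> None:
    """Check every object against every active policy; apply mechanisms on violation."""
    for obj in objects:
        for policy in active:
            if policy.condition(obj):
                policy.mechanism(obj)
```

Under this sketch, a pop-up ad that is not anchored to a billboard would have `make_transparent` applied to it on enforcement, while a compliant billboard ad would be left untouched.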
By implementing Arya in a prototype AR operating system, the team was able to prevent undesirable behavior in case studies of three environments, including a simulated driving scenario. Additionally, the performance overhead of policy enforcement was acceptable even in the unoptimized prototype. The team, among the first to raise AR output security issues, demonstrated the feasibility of implementing a policy framework to address AR output security risks, while also surfacing lessons and directions for future efforts in the AR security space.