
August 13, 2014

Robotics, Law, and Policy: A Burgeoning Field

Robots are on the rise. From Amazon Prime Air to JIBO, the new crowdfunded family robot, intelligent machines and systems are gaining attention in both the public and academic sectors. The Pew Research Center’s recent report, “AI, Robotics, and the Future of Jobs,” is one example of the topic’s growing popularity. The report details the views of nearly 2,000 technology builders and analysts on robots, AI, and what they expect the 2025 job market to look like.

Reading the report, one is struck by how many of the academics surveyed focus on robotics. This post presents a sampling of the interesting robotics and law research that a number of these experts and their centers are working on. It is by no means comprehensive, but it gives the reader a sense of the emerging community.

People researching and teaching robotics law and policy

Artificial intelligence has caught the attention of University of Maryland law professors Danielle Citron and Frank Pasquale this year. Their January publication, “The Scored Society: Due Process for Automated Predictions,” examined intelligent systems that generate predictive scores about consumers. These systems, such as credit scoring and employee evaluation tools, can have a large impact on the people they rank. For this reason, Citron and Pasquale proposed several legal steps to prevent consumer abuse, including mechanisms to challenge and regulate the artificially intelligent systems and their scoring methods.

The University of Ottawa’s Ian Kerr has long been involved in the robotics scene, writing “Delegation, Relinquishment and Responsibility: The Prospect of Expert Robots” for the 2012 We Robot conference. Written with Jason Millar, the paper looks at the role of humans in a world of expert robots: machines, such as IBM’s Watson, that can effectively replace humans in a wide variety of tasks. Questions concerning medical applications, predictability, decision making, and liability are addressed throughout the piece. Kerr’s class at the University of Ottawa, The Laws of Robotics, lets law students examine similar ethical and legal questions about the future of robotics.

While serving as Executive Director of Yale University’s Information Society Project (ISP), Margot Kaminski, now an assistant professor of law at Ohio State University, published “Drone Federalism: Civilian Drones and the Things They Carry,” which examined how to regulate the growing civilian drone scene. Kaminski’s solution was drone federalism: instead of a single federal regulation, each state would create its own privacy laws for drone use. This approach would offer more flexible and finely tuned regulations suited to the wants of each state’s residents. Kaminski asserts that letting each state craft its own drone privacy laws will help reveal which balance between privacy and the freedom to operate drones works best. Her work on robotics extended into the classroom as well this past spring, when Kaminski co-taught Yale’s Artificial Intelligence, Robotics, and the Law course with ISP Director Jack Balkin.

The Lab’s own Ryan Calo published “Robotics and the New Cyberlaw” earlier this year, continuing his work at the intersection of robotics and policy. The paper examines how intelligent machines will require new legal standards as they become more prevalent. According to Calo, advancements in the social, physical, and computing capabilities of human-like robots will shape the new cyberlaw that governs them. The paper was presented at We Robot 2014, and the discussion between Calo and law professor David Post can be found on the conference website. Like Kerr and Kaminski, Calo has also taught a course on the topic at the University of Washington, titled Robotics Law and Policy.

Conferences and other events

Drones also made a place for themselves at New York University last October, when the Drones & Aerial Robotics Conference (DARC) was held there. The three-day event grew out of a partnership between the Center for Information Technology Policy at Princeton University, the Information Society Project at Yale University, and the MacArthur Foundation. DARC brought together academic, industry, and legal experts for dialogues on a range of topics surrounding aerial drones. The days were filled with talks, panels, workshops, and even a hackathon covering everything from the Mars rover to drone art. Attendees were also able to see the drones up close at AfterDARC, a demo show held on the first night. A complete schedule of events, along with information on the speakers and the drones presented, can be found on the conference website.

Since 2012, the annual We Robot conference has served as a place for members of the robotics community to present their work through papers, demonstrations, and panel discussions. The third We Robot conference kicked off this April at the University of Miami and featured over 100 academic and business robotics experts. The topics discussed ranged from labor to surveillance and beyond, and videos of the discussions can be found on the conference website. Similar to DARC’s demo show, We Robot featured Birds of a Feather sessions after the first night of the conference: dinners held at nearby restaurants to bring together attendees with similar interests. We Robot will be back next year on April 10th and 11th, continuing to add to the ongoing discussion on robotics.

Ongoing research initiatives

A common research goal in technology policy is to develop frameworks for emerging technology. Because advancing technologies like robotics raise relatively new issues, frameworks help provide clear guidance on how to interact with, understand, and legislate them. This goal drives the Oxford Internet Institute’s Digital Personhood project, which focuses on public spaces and robot proxies: OII senior research fellow Ian Brown and others aim to build a cohesive, reflective framework tying together privacy and the interactions between humans and robots. The same goal drives Tim Hwang’s Social Architecture of Intelligent Systems Initiative at the Data & Society Research Institute. Hwang, a member of the Institute’s inaugural class of fellows, is pursuing a research project that is relatively new but wide in scope: military, medical, and economic settings are just some of the fields he is looking to develop frameworks for. The guidelines his initiative produces will help lawmakers understand the nuances and finer points of intelligent systems across these different fields.

In an ongoing project at Stanford’s Center for Internet and Society (CIS), affiliate scholars Bryant Walker Smith and Ryan Calo have worked to clarify the legal issues of autonomous driving. A relatively new development, autonomous driving has quickly arrived in both the public eye and legislation: an analysis by CIS found that since 2012, four states have enacted laws allowing driverless cars to operate on their roads, and another ten have proposed similar legislation. Even as the amount of legislation on the topic increases, its legality is still being debated. Smith and Calo’s project, Legal Aspects of Autonomous Driving, is playing a large role in this discussion through multiple publications, blog posts, and media coverage. Questions such as “How do you ticket a driverless car?” and “What if your autonomous car keeps routing you past Krispy Kreme?” are answered by Smith and other researchers as they tackle the ethics, liability, and legality of the technology. Smith’s writings also carried over into his classroom while he was at Stanford, where his class, titled the same as the project, invited second- and third-year Stanford Law students to join in on the discussion.

While it may involve smaller robots, nanotechnology is by no means a small topic in robotics research. Given the roles it could play in the future of warfare, medicine, and civilian life, understanding the technology and crafting proper policy for it is vital. Nanotechnology Threat Anticipation, a project led by Georgia Tech’s Adam Stulberg and Margaret Kosal, aims to address this need. Housed at the Center for International Strategy, Technology, and Policy, the project focuses on creating guidelines for recognizing nanotechnology applications and determining whether an application is benign (e.g., research) or offensive and threatening in nature. Both Stulberg and Kosal have researched nanotechnology before, with publications such as the latter’s “The Security Implications of Nanotechnology.” Georgia Tech’s philosophy department has also approached the subject in broader terms with its Robot Ethics class.