Archives

Tyche: A Risk-Based Permission Model for Smart Homes

In this paper, presented at the 2018 IEEE Cybersecurity Development Conference (SecDev 2018), a team including Lab researchers presents Tyche, a secure development methodology that limits the risk apps pose to smart home users.

How Public Is My Private Life? Privacy in Online Dating

To understand how users reason about privacy risks they can potentially control through their own decisions, Lab members studied online dating users' perceptions about, and actions governing, their privacy. Their study reveals tensions between privacy and competing user values and goals, and they demonstrate how these results can inform future designs. This paper was presented at the 26th International World Wide Web Conference (WWW 2017).

Computer Security and Privacy for Refugees in the United States

Lab faculty and students are examining the cultural assumptions built into security mechanisms. In this paper, published at the 2018 IEEE Symposium on Security and Privacy, they interviewed refugees in the U.S. about computer security and privacy, finding that many security- and privacy-related practices embed deeply rooted U.S. or Western cultural knowledge and norms. Based on these interviews, they offer, and are currently exploring further, recommendations for concrete technical directions to better serve the security and privacy of diverse populations in the U.S. and around the world.

What Pushes Back from Considering Materiality in IT?

An interdisciplinary team of computer scientists, information scientists, and planners explores the invisible environmental impacts of digital technologies in this essay, presenting ideas on the forces that either de-emphasize or actively push against considering these impacts. This essay was presented at the Fourth Workshop on Computing within Limits (LIMITS 2018).

Decentralized Action Integrity for Trigger-Action IoT Platforms

This paper, presented at the Network and Distributed System Security Symposium (NDSS) 2018, introduces Decentralized Action Integrity, a security principle that prevents an untrusted trigger-action platform from misusing compromised OAuth tokens in ways that are inconsistent with any given user’s set of trigger-action rules.

Rethinking Access Control and Authentication for the Home Internet of Things

Computing is transitioning from single-user devices to the Internet of Things, in which multiple users with complex social relationships interact with a single device. In this paper from the 27th USENIX Security Symposium (USENIX Security 2018), a team including Lab researchers begins to re-envision access control and authentication for such settings in the home IoT.

Regulating Bot Speech

This article in the UCLA Law Review is the first to consider how efforts to regulate bots, while falling short of per se censorship, might nonetheless run afoul of the First Amendment. The article further considers how premature regulation of bot speech may inadvertently curtail a novel and still emerging form of expression.

Physical Adversarial Examples for Object Detectors

Presented at the 12th USENIX Workshop on Offensive Technologies (WOOT '18), this paper explores physical adversarial attacks on object detection models, a class of deep learning algorithms, broader than image classifiers, that is widely used to detect and label multiple objects within a scene.

Data Statements for NLP: Toward Mitigating System Bias and Enabling Better Science

In research published in Transactions of the Association for Computational Linguistics, experts in information science and computational linguistics investigate data statements as a practice for addressing the critical ethical and scientific issues that arise when systems developed with data from certain populations are deployed for use with other populations.