Human-centered security, privacy, AI
I direct the SPUD (Security, Privacy, Usability and Design) Lab. Our work, at the intersection of HCI, AI and cybersecurity, is oriented around answering the question: How can we design systems that empower people with improved agency over their personal data and experiences online? A few directions of particular interest to us at the moment include:
- Social cybersecurity: creating cybersecurity and privacy systems that have a better understanding of human social behavior;
- Human-centered adversarial machine learning: designing human-centered AI systems that subvert algorithmic surveillance;
- Privacy through Design: developing new design processes that foreground consideration of privacy;
- Corporeal cybersecurity: creating tangible / corporeal cybersecurity and privacy interfaces; and,
- Privacy for the People: designing an end-to-end system to facilitate grassroots privacy collective action.
A few of my papers have been recognized with awards: a best paper at UbiComp (2013), a distinguished paper at SOUPS (2020), three best paper honorable mentions at CHI (2016, 2017, 2020), a best paper honorable mention at CSCW (2021), and an honorable mention for the NSA's Best Scientific Cybersecurity Paper (2014). My lab's work has been generously supported by the NSF and Facebook. My work has also been covered in the popular press, including features in The Atlantic, The Financial Times and Dark Reading.
Prospective Ph.D. Students
I am always on the lookout for talented Ph.D. students. If you are interested in working with me, I encourage you to apply to the HCII Ph.D. program at Carnegie Mellon and mention my name in your application. I'm afraid I am unlikely to respond to cold e-mails; I simply have too many other demands on my limited workday. A few years ago, I made a YouTube video on improving your odds of landing a position in a C.S. Ph.D. program.