[Photo: Steven Goodman, shown from the shoulders up in a black-and-white headshot against a white background, wearing a dark shirt and smiling at the camera.]

About Me

Hello! I am a recent Ph.D. graduate of Human Centered Design & Engineering at the University of Washington, specializing in accessibility and human-centered AI. My dissertation explored personalizable sound recognition tools for Deaf, deaf, or hard of hearing users through interactive machine learning, and resulted in several publications at top HCI venues (CHI, ASSETS, IMWUT). Previously, at Google Research, I led the design and evaluation of an AI writing-support tool, built on large language models, for writers with dyslexia. Before that, I supported the development of wearable sensing systems at NASA and the University of Minnesota.

I am currently on the job market, seeking industry roles where I can leverage my experience to build inclusive, impactful technologies. I bring expertise in qualitative user research (studies, interviews, usability testing), iterative design and prototyping (wireframing, web applications, wearables), and translating findings into actionable product guidance, drawing on experience in academic, industry, and government settings. More broadly, I am passionate about issues at the intersection of AI and accessibility, including AI’s promise and pitfalls for people with disabilities, AI fairness, end-user agency and trust in AI systems, and privacy and data protection.

I’d love to chat with you. Please contact me if you’d like to learn more!

Last updated: March 2025

Selected Publications

Talks & Videos

  • April 28, 2025

    SPECTRA: Personalizable Sound Recognition via Interactive ML

    Prepared for CHI 2025, Yokohama, Japan | Slides
  • November 7, 2024

    Human-Centered Sound Recognition Tools (Defense)

    Public defense at University of Washington, Seattle, WA | Slides
  • April 18, 2023

    Human-Centered Sound Recognition Tools (Proposal)

    Public proposal at University of Washington, Seattle, WA | Slides
  • October 25, 2022

    LaMPost: AI-assisted Writing for Dyslexia

    Prepared for ASSETS 2022, Athens, Greece | Slides
  • September 22, 2021

    Toward User-Driven Sound Recognizer Personalization

    Prepared for UbiComp/ISWC 2021, Virtual Event | Slides
  • November 22, 2019

    Smartwatch Sound Feedback Across Contexts

    University of Washington (in anticipation of CHI 2020) | Slides
  • May 5, 2019

    Social Tensions with HMDs for Accessibility

    Social HMDs Workshop at CHI 2019, Glasgow, Scotland | Slides

Demos

  • January 16, 2025

    SPECTRA: Personalizable Sound Recognition via Interactive ML

    Video figure for first-author publication at CHI 2025.
  • April 3, 2022

    ProtoSound: Personalized, Scalable Sound Awareness

    Produced in support of co-authored work at CHI 2022.
  • July 31, 2020

    HoloSound: AR Sound Awareness

    Produced in support of co-authored poster at ASSETS 2020.