Adrian S. Roman
I am an Audio Software Engineer at Tesla and an M.S. student in Computer Science at the University of Southern California.
My research centers on developing explainable AI algorithms for multi-modal machine perception, with a particular emphasis on auditory models for sound event localization and detection (SELD). I believe auditory perception is crucial for creating robust, multi-modal embodied agents.
As a technologist, my core aim is to turn innovative ideas into products that improve people's lives, especially those with disabilities. At Oscillo Biosciences, I developed non-linear dynamical systems to simulate human synchronization, and I delivered a mobile digital therapy that gently re-trains rhythmic synchronization in patients with language disfluencies and aphasia.
As an engineer at Tesla, I develop sound user interfaces that help drivers navigate their vehicles through audio cues. My contributions also extend to other areas of the software stack, such as UI development, audio pipelines, and firmware. Beyond core audio software engineering, I build machine learning systems for speech enhancement and sound event localization and detection.