DUB is a grassroots alliance of faculty, students, researchers, and industry partners interested in Human-Computer Interaction & Design at the University of Washington. The DUB acronym stands for Design, Use, Build.
For more information about individual DUB Seminars, see the dub calendar: https://dub.washington.edu/calendar.html
Emotion Measurement in Natural Settings through Everyday Devices
Daniel McDuff, Microsoft Research
Emotions play an important role in our everyday lives. They influence memory, decision-making, and well-being. In order to advance the fundamental understanding of human emotions, build smarter affective technology, and ultimately help people, we need to perform research in situ. It is now possible to quantify emotional responses on a large scale using webcams and wearable devices in everyday environments. I will present work on state-of-the-art automated facial expression recognition tools and insights from analysis of the world's largest dataset of naturalistic emotional responses (featuring examples from millions of individuals). I'll show examples of how this data has allowed us to corroborate and extend the understanding of nonverbal behavior, including modeling gender and cultural differences in expression (and what makes a viral video). I'll present methods for remotely measuring physiology using webcams that allow low-cost and highly scalable measurement of cardio-pulmonary activity, including heart rate variability, allowing us to capture sympathetic nervous system activity in addition to expressions. Finally, I will discuss how this work will help us bring emotional intelligence to everyday digital devices and potentially track important health conditions.
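The webcam-based physiology measurement described above rests on a simple principle: the blood volume pulse causes tiny periodic changes in skin color that a camera can pick up. As a rough illustration (not the speaker's actual method, which involves face tracking and more sophisticated signal separation), the sketch below assumes a per-frame mean green-channel intensity has already been extracted from a face region, simulates such a trace, and recovers the heart rate from its dominant frequency; the frame rate and band limits are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of remote photoplethysmography (rPPG):
# we assume the mean green-channel intensity of a tracked face region
# has been extracted per video frame, and simulate that trace here
# instead of reading real video.

fps = 30.0                       # assumed camera frame rate
t = np.arange(0, 20, 1 / fps)    # 20 seconds of samples
true_bpm = 72.0                  # simulated "ground truth" pulse

# Simulated trace: a tiny pulse-induced color oscillation plus noise.
signal = 0.05 * np.sin(2 * np.pi * (true_bpm / 60.0) * t)
signal += 0.01 * np.random.default_rng(0).standard_normal(t.size)

# Remove the mean, then find the dominant frequency within a
# physiologically plausible heart-rate band.
signal = signal - signal.mean()
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
power = np.abs(np.fft.rfft(signal)) ** 2
band = (freqs >= 0.7) & (freqs <= 4.0)   # 42-240 beats per minute
est_bpm = 60.0 * freqs[band][np.argmax(power[band])]
print(f"Estimated heart rate: {est_bpm:.1f} bpm")
```

In practice the raw trace is far noisier (motion, lighting changes), which is why deployed systems combine multiple color channels and temporal filtering rather than a single FFT peak.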
Daniel McDuff is a Researcher at Microsoft, where he works on scalable tools to enable the automated recognition and analysis of emotions and physiology. He is also a visiting scientist at Brigham and Women's Hospital in Boston, where he works on deploying these methods in primary care and surgical applications. Daniel completed his PhD in the Affective Computing Group at the MIT Media Lab in 2014 and holds a B.A. and a master's degree from Cambridge University. Previously, Daniel was Director of Research at Affectiva and a post-doctoral research affiliate at the MIT Media Lab. During his PhD and at Affectiva, he built state-of-the-art facial expression recognition software and led analysis of the world's largest database of facial expression videos.
His work has received nominations and awards from Popular Science magazine (as one of the top inventions of 2011), South by Southwest Interactive (SXSWi), The Webby Awards, ESOMAR, and the Center for Integrated Medicine and Innovative Technology (CIMIT). His projects have been covered in many publications, including The Times, The New York Times, The Wall Street Journal, BBC News, New Scientist, and Forbes. Daniel was named a 2015 WIRED Innovation Fellow and has spoken at TEDx Berlin.