AIRS 4th Annual Meeting: 2012

Title: Introducing RAVDESS: A New Database of Emotional Song and Speech

Authors: Steven Livingstone (Ryerson University), Katlyn Peck (Ryerson University), Frank Russo (Ryerson University)

Abstract

This paper introduces the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). Our purpose in creating this battery was to provide researchers with a high-quality, freely available set of audio-visual recordings of emotional speech and song in North American English. The battery consists of 12 highly trained actors speaking and singing short statements with 9 different emotions, each at two emotional intensities. We report on psychometric evaluations, facial motion, and acoustic properties. The battery will allow researchers to assess the relative contributions of the audio and visual channels, and to draw comparisons between responses to emotional speech and song.

With a Ph.D. in Computer Science and bachelor's degrees in Physics and Information Technology, Steven Livingstone brings an interdisciplinary skill set to singing research. Since completing his Ph.D. in 2008, he has pursued a program of research dedicated to understanding the role of facial expressions in singing performance. In 2009, he provided the first time-course analysis of facial expressions in emotional singing. The study, conducted in collaboration with Bill Thompson and Frank Russo, revealed that performers' facial expressions differentiated their emotional intentions. This research continued under the supervision of Caroline Palmer and Marcelo Wanderley at McGill University, where he gained extensive experience with analytical techniques for the study of motion and auditory data. Steven has been an AIRS postdoctoral fellow since 2011, working with Frank Russo at Ryerson University on the development of facial mimicry in emotional singing.