AIRS 5th Annual Meeting: 2013
Title: Is Facial Mimicry an Automatic Response to Emotional Song?
Authors: Lisa P. Chan (Department of Psychology, Ryerson University), Frank Russo (Department of Psychology, Ryerson University)

Abstract
During vocal communication, our faces continuously move, expressing linguistic, musical, and affective information (Munhall et al., 2004; Thompson, Russo & Livingstone, 2010). Facial electromyography (f-EMG) has been used to observe and measure these facial movements, and subtle mirroring of visual aspects of emotional song has been found (Livingstone, Thompson & Russo, 2009; Chan, Livingstone & Russo, 2013). The facial feedback hypothesis states that individuals experience emotions (e.g., happiness) through producing the associated facial expressions (e.g., a smile). Thus, if an individual observes and unconsciously mimics a singer's expression, they may gain rapid access to the singer's emotional intent. Facial mimicry is thought to be an automatic response to non-music stimuli, but it remains unclear whether the same holds for emotional song. To test this, we presented video clips of emotional song to participants who were instructed to inhibit their facial movements. If they nonetheless mimicked the singers' expressions, this would support the automaticity of facial mimicry. We used f-EMG to measure mimicry-related muscle activity in the zygomaticus major (associated with happy emotion) and the corrugator supercilii (associated with sad emotion). Results showed greater zygomaticus activity for happy than for sad trials, and greater corrugator activity for sad than for happy trials. This suggests that facial mimicry may also be an automatic response to singers' facial expressions in emotional song; future research will continue to explore and clarify open questions in this area.