AIRS 5th Annual Meeting: 2013

Title: Simulation training using song to enhance emotion perception skills in autism

Authors: Lucy M. McGarry (Department of Psychology, Ryerson University), Frank Russo (Department of Psychology, Ryerson University)

Abstract

It has been found that individuals automatically generate simulations of other people's facial expressions during speech and song to aid in emotion perception. This is called the facial feedback hypothesis. It has been suggested that this simulation mechanism is related to mirror neuron system (MNS) activity; individuals with greater empathy show greater MNS activity and greater automatic mimicry of other people's movements. However, MNS activity and spontaneous motor mimicry are found to be dysfunctional in autism, a disorder that involves deficits in emotion perception. In the current study, we have created a video game for children with autism in which we plan to train spontaneous motor mimicry via simulation training. In the video game, children will be asked to explicitly act out the emotions of people in videos and will see their performances played back in real time. The use of song stimuli followed by speech stimuli is predicted to facilitate mimicry in this population. Videos will be presented audio-visually because multimodal presentation of song is also thought to stimulate MNS activation optimally in this population. Current preliminary data suggest that individuals with low scores on an empathy scale generate greater emotional intensity ratings after simulation training of vocally expressed emotions. We predict that in the current study, imitation training will facilitate emotion perception and will lead to changes in MNS functioning as measured using EEG.

(Relevant also to Themes 2 and 3.3)