Elsevier

NeuroImage

Volume 59, Issue 1, 2 January 2012, Pages 718-727

Decoding the neural representation of affective states

https://doi.org/10.1016/j.neuroimage.2011.07.037

Abstract

Brain activity was monitored while participants viewed picture sets that reflected high or low levels of arousal and positive, neutral, or negative valence. Pictures within a set were presented rapidly in an incidental viewing task while fMRI data were collected. The primary purpose of the study was to determine whether multi-voxel pattern analysis could be used within and between participants to predict the valence, arousal, and combined affective states elicited by the pictures based on distributed patterns of whole brain activity. A secondary purpose was to determine whether distributed patterns of whole brain activity can be used to derive a lower-dimensional representation of affective states consistent with behavioral data. Results demonstrated above-chance prediction of valence, arousal, and affective states that was robust across a wide range of voxel counts used in prediction. Additionally, individual differences multidimensional scaling based on the fMRI data clearly separated valence and arousal levels and was consistent with a circumplex model of affective states.

Highlights

► Affective category identification within participants from fMRI data.
► Affective category identification across participants from fMRI data.
► Circumplex representation of affect recovered from fMRI data.

Introduction

The representation and processing of emotional states in the brain has become a fundamental area of study within cognitive neuroscience. Two distinct approaches to understanding affective states have come to the forefront of the study of emotion. The categorical approach builds on the finding of a core set of distinct basic emotions, as demonstrated by studies of the perception of human facial emotional expressions and basic physiological responses of the autonomic nervous system to emotional stimuli (Ekman, 1992a, Ekman, 1992b). Moreover, these basic emotions, such as anger, fear, disgust, sadness and joy, are thought to be represented by different neural systems (Panksepp, 1992, Panksepp, 1998).

An alternative to the categorical approach is to consider the underlying structure of emotions as deriving from two or three basic dimensions of affective processing (Posner et al., 2005, Rolls, 1999, Schlosberg, 1954, Watson and Tellegen, 1985). A widely accepted dimensional model of affect, developed using multidimensional scaling techniques, conceptualizes the affective space as a circle or circumplex with two underlying primary dimensions: valence and arousal (Russell, 1980). Valence reflects the hedonic tone of the emotional state, ranging from positive to negative, while arousal, or activation, reflects the engagement of the organism, ranging from high to low (Roberts and Wedell, 1994). The circumplex model of affect suggests that all emotions or affective states can be distinguished in terms of varying levels of valence and arousal, with two distinct neural systems mediating the representation of affective states (Barrett, 1998).

As described above, both the categorical and dimensional approaches to understanding emotional states have support from behavioral and neuroimaging studies. One way to resolve this seeming contradiction is to assume that although emotional states can be described by dimensional variation along valence and arousal, further categorical processing of states may overlay this structure and result in activation of distinct cognitive and neural components. Thus, for example, anger and disgust may both be negative and high arousal states, but their categorical processing leads to different implications, as described in appraisal theory (Lazarus, 1991, Lazarus, 1995). Thus, while the methods we describe in the present study build on the circumplex model of affective states, we believe they may also be applied to categorical approaches.

Traditionally, neuroimaging studies have used univariate statistical parametric mapping methods to determine which areas of the brain subserve the processing of emotional stimuli and the generation of emotional states. In a meta-analysis of 162 neuroimaging studies of emotion, Kober et al. (2008) demonstrated that medial frontal areas are co-activated with core limbic structures and that the dorsomedial prefrontal cortex may underlie the generation of emotional states. Consistent with dimensional models of emotion, neuroimaging studies have demonstrated a dissociation of valence and arousal for various stimulus modalities, such as olfactory (Anderson et al., 2003), gustatory (Small et al., 2003), picture (Anders et al., 2004, Grimm et al., 2006, Nielen et al., 2009), word (Kensinger and Corkin, 2004, Lewis et al., 2007, Nielen et al., 2009, Posner et al., 2009), and face (Gerber et al., 2008), as well as emotional experiences induced by the presentation of evocative sentences (Colibazzi et al., 2010). The results of these studies suggest that valence and arousal may be represented in separate neural circuits containing the amygdala, insula, thalamus, dorsal anterior cingulate cortex, and prefrontal regions. These regions are generally consistent with the hypothesis that responses to valence are part of motivational circuitry linked to mesolimbic structures (Lang and Bradley, 2010). Response in the amygdala has been associated with both arousal (Anderson et al., 2003, Colibazzi et al., 2010, Lewis et al., 2007, Small et al., 2003) and valence (Anders et al., 2004, Gerber et al., 2008, Posner et al., 2009), suggesting this region may belong to both valence and arousal networks. 
Traditionally, the amygdala is thought to be a part of a subcortical pathway devoted to the processing of emotional stimuli, but a more recent view suggests the amygdala modulates multiple networks by identifying and allocating resources to biologically-relevant stimuli (Pessoa and Adolphs, 2010).

As a complement to traditional univariate analyses, examining whole brain processing of states may prove valuable in identifying emotions from neuroimaging and in determining the representational structure of those affective states. The techniques entailed in multi-voxel pattern analysis (MVPA) are well-suited to the study of affect, as the pattern-based approach of MVPA detects cognitive states by jointly investigating information in multiple voxels and is more sensitive than univariate statistical parametric mapping (Haynes and Rees, 2006, Norman et al., 2006, O'Toole et al., 2007). Likewise, the results of traditional univariate analyses may be informative for MVPA techniques, as they suggest a core group of brain structures that may contribute to whole brain patterns of activity. Previous neuroimaging studies have investigated the representation of affect with pattern-based approaches, successfully decoding affective states from patterns of brain activity located in specific regions of interest as well as from patterns of whole brain activity. Peelen et al. (2010) used MVPA to investigate which brain regions encode emotions independently of the modality (e.g., body, face, voice) through which they are perceived. Other neuroimaging studies employing MVPA methods have investigated modality-specific encoding of discrete emotional states using emotionally-evocative voice recordings (Ethofer et al., 2009), facial expressions (Said et al., 2010), faces showing expressions of fear (Pessoa and Padmala, 2007), and recall of emotional situations (Sitaram et al., 2011).
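The core idea of such pattern-based decoding can be illustrated with a brief sketch: a linear classifier trained jointly on many voxels can recover a condition whose signal is weak in any single voxel. All data below are simulated, and the classifier choice (logistic regression) is illustrative rather than the method of any particular study.

```python
# Minimal MVPA sketch: decode a simulated condition from distributed
# multi-voxel patterns using a linear classifier on held-out trials.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500

# Two conditions; the condition-specific signal (amplitude 0.4) is small
# relative to per-voxel noise (sd 1), but distributed across all voxels.
labels = np.repeat([0, 1], n_trials // 2)
signal = rng.normal(size=n_voxels) * 0.4
X = rng.normal(size=(n_trials, n_voxels))
X[labels == 1] += signal

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, stratify=labels, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

Because the classifier pools evidence across all 500 voxels, it decodes the condition reliably even though a univariate test on any single voxel would be underpowered.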

MVPA techniques may also prove useful when the affective state is subtly manipulated, as when incidental rather than intentional methods of inducing affective states are used. In intentional methods, participants are typically told to think about the emotional consequences of a particular affective cue, such as a picture, video, or story. As such, researchers may be analyzing neuroimaging signals related to those voluntary thought processes rather than to the emotional state itself. In incidental methods, the affective content of the stimuli is not explicitly processed. While a few studies have used incidental presentation of affective cue stimuli (Ethofer et al., 2009), such procedures may produce a weaker neural signal that reduces the power of univariate approaches. The MVPA approach is well suited to capturing subtle changes in neural processing distributed throughout the brain, and thus may be ideal for studying emotional responses in incidental exposure tasks.

The current study used an incidental affect inducement approach in which affectively scaled pictures were rapidly presented at a 200 ms rate, and the participant's task was simply to maintain focus on a central fixation point. Each picture set consisted of 20 photographs and represented one of five affective conditions resulting from the combination of low and high arousal with positive, neutral and negative valence. Our procedure is incidental in that participants were not told the nature of the study and were not asked to make any evaluations regarding the pictures. Prior scaling of these pictures (Lang et al., 2008) demonstrates that when people perceive each picture they have a reliable affective reaction that can be measured primarily along dimensions of valence and arousal. Our procedure does not disentangle the perception of affect from the experiencing of affect.

The purpose of our study was twofold. First, we explored whether MVPA methods could be used to identify affective states within each individual by decoding functional patterns of whole brain activity, thus extending previous MVPA studies of affect to the specific examination of the valence and arousal dimensions. Consistent with the circumplex model (Russell, 1980), we hypothesized that functional patterns of whole brain activity would contain information discriminating the states in terms of the valence and arousal levels elicited by viewing the visual stimuli. Thus, we tested for classification of positive versus negative valence, high versus low arousal, and finally the four separate states. Our statistical approach within individuals was to train classifiers on all but one trial of each type and cross-validate the pattern analysis on the remaining trials. We also used MVPA to predict affective states across individuals by training the classifier on all but one participant and then predicting the emotional state of the excluded participant.
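The two cross-validation schemes described above can be sketched as follows. Trial and subject counts here are illustrative, not the study's, and for brevity the within-participant folds pool trials across subjects; in practice each participant would be cross-validated separately.

```python
# Sketch of the two cross-validation schemes: leave one trial of each
# type out (within participants) and leave one subject out (across
# participants), using simulated data.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)
n_subjects, trials_per_subj, n_voxels = 6, 8, 100
n_trials = n_subjects * trials_per_subj
X = rng.normal(size=(n_trials, n_voxels))
y = np.tile([0, 1, 2, 3], n_trials // 4)  # four affective states

# Within-participant scheme: group trials into repetitions so each fold
# holds out exactly one trial of every state.
rep_id = np.arange(n_trials) // 4
within_folds = list(LeaveOneGroupOut().split(X, y, groups=rep_id))

# Across-participant scheme: train on all but one subject, test on the
# held-out subject.
subj_id = np.repeat(np.arange(n_subjects), trials_per_subj)
across_folds = list(LeaveOneGroupOut().split(X, y, groups=subj_id))

print(len(within_folds), len(across_folds))  # → 12 6
```

Each fold of the across-participant scheme tests whether the pattern learned from the other subjects generalizes to a brain the classifier has never seen.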

A second purpose of our study was to extract the internal representation of affective states elicited by viewing emotionally-evocative pictures from fMRI data and compare it with predictions from the circumplex model of affect. While our predictions for valence- and arousal-based classification are consistent with the circumplex model of affect, these predictions could also be accounted for by a categorical model that posits four distinct brain areas for the emotional states we induce. However, the categorical approach does not predict recovery of a circumplex relationship from similarity metrics derived from the fMRI data. We used individual differences multidimensional scaling (INDSCAL) to explore the lower dimensional representation of affective states from functional patterns of whole brain activity. The internal representation of affect has been previously shown to separate on the dimension of valence with electroencephalogram (EEG) data (Onton and Makeig, 2009). In this work we extended these findings by demonstrating that the internal representation of affect derived from fMRI data can be separated on both dimensions of valence and arousal, providing additional support for the circumplex model of affect. Thus unlike studies that focus on specific regions of interest, our focus was to utilize the distributed representations elicited by affective stimuli to capture the underlying affective states.
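The logic of recovering a low-dimensional affective space from fMRI data can be sketched as below. The study used INDSCAL; as a simplified stand-in, this sketch averages per-participant correlation distances between condition-level patterns and embeds them with classical metric multidimensional scaling. All data are simulated and the condition names are illustrative.

```python
# Sketch: derive a 2-D representation of affective conditions from
# simulated condition-level activation patterns via averaged
# dissimilarities and metric MDS (a simplified stand-in for INDSCAL).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
conditions = ["pos-high", "pos-low", "neg-high", "neg-low"]
n_subj, n_voxels = 10, 200

# One activation pattern per condition per participant (simulated).
patterns = rng.normal(size=(n_subj, len(conditions), n_voxels))

# Per-participant dissimilarity: 1 - Pearson correlation between the
# condition patterns, then averaged across participants.
diss = np.stack([1 - np.corrcoef(p) for p in patterns]).mean(axis=0)

# Embed the averaged dissimilarities in two dimensions; with real data
# the axes would be inspected for valence and arousal structure.
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(diss)
print(embedding.shape)  # → (4, 2)
```

Unlike this averaged-distance shortcut, INDSCAL fits a common space plus per-participant dimension weights, which is what allows individual differences in the salience of valence versus arousal to be examined.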

Section snippets

Participants

Thirteen right-handed volunteer adults (12 females) from the University of South Carolina community with normal or corrected to normal vision participated and gave written informed consent in accordance with the Institutional Review Board at the University of South Carolina.

Materials

Participants viewed a series of color photographs obtained from a database provided by the National Institute of Mental Health Center for the Study of Emotion and Attention. Pictures were selected based on normed valence and

Category identification of affective states within participants

A classifier was trained for each participant to determine if it was possible to identify the four affective categories based on whole brain activation elicited by the picture stimuli. Accuracies for classification of the four affective states significantly exceeded the chance level (0.25) at all levels of the most stable voxels (p < 0.05) for the majority of participants (Fig. 3). The highest classification accuracy obtained for a single participant was 0.77. Accurate classification
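Whether an observed accuracy exceeds the 0.25 chance level of four-way classification can be checked with an exact one-sided binomial test, sketched below. The trial count is illustrative, and this snippet does not specify the study's exact test.

```python
# Exact one-sided binomial test for above-chance classification
# accuracy, using only the standard library.
from math import comb

def binomial_p_above_chance(n_correct, n_trials, chance=0.25):
    """P(X >= n_correct) for X ~ Binomial(n_trials, chance)."""
    return sum(comb(n_trials, k) * chance**k * (1 - chance)**(n_trials - k)
               for k in range(n_correct, n_trials + 1))

# e.g., 31 of 60 trials correct (accuracy ~0.52) against 0.25 chance:
p = binomial_p_above_chance(31, 60)
print(f"p = {p:.4g}")
```

Even a modest-looking accuracy of 0.52 is far above the four-way chance level once the trial count is taken into account.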

Discussion

The current study explored whether MVPA methods applied to whole brain activity patterns could be used to identify the valence and arousal levels elicited by viewing of photographs and to examine the internal representation of affective states from fMRI data for comparison to dimensional models of affect. In contrast to studies that focus on specific regions of interest, our focus was to utilize the distributed representations elicited by affective stimuli to capture the underlying affective

Acknowledgments

We would like to thank John Richards, Michael Schillaci, and Jennifer Vendemia for helpful discussions at the earlier stages of the project, and the two anonymous reviewers for helpful suggestions on the earlier version of the manuscript.

References (54)

  • M.M.A. Nielen et al., Distinct brain systems underlie the processing of valence and arousal of affective pictures, Brain Cogn. (2009)
  • K.A. Norman et al., Beyond mind-reading: multi-voxel pattern analysis of fMRI data, Trends Cogn. Sci. (2006)
  • F. Pereira et al., Machine learning classifiers and fMRI: a tutorial overview, Neuroimage (2009)
  • J.S. Roberts et al., Context effects on similarity judgments of multidimensional stimuli: inferring the structure of the emotion space, J. Exp. Soc. Psychol. (1994)
  • D.L. Robins et al., Superior temporal activation in response to dynamic audio–visual emotional cues, Brain Cogn. (2009)
  • S.V. Shinkareva et al., Commonality of neural representations of words and pictures, Neuroimage (2011)
  • R. Sitaram et al., Real-time support vector classification and feedback of multiple emotional brain states, Neuroimage (2011)
  • D.M. Small et al., Dissociation of neural representation of intensity and affective valuation in human gustation, Neuron (2003)
  • S. Anders et al., Brain activity underlying emotional valence and arousal: a response-related fMRI study, Hum. Brain Mapp. (2004)
  • A.K. Anderson et al., Dissociated neural representations of intensity and valence in human olfaction, Nat. Neurosci. (2003)
  • L.F. Barrett, Independence and bipolarity in the structure of current affect, J. Pers. Soc. Psychol. (1998)
  • C.M. Bishop, Pattern Recognition and Machine Learning (2006)
  • I. Borg et al., Modern Multidimensional Scaling: Theory and Applications (1997)
  • T. Colibazzi et al., Neural systems subserving valence and arousal during the experience of induced emotions, Emotion (2010)
  • P. Ekman, Are there basic emotions? Psychol. Rev. (1992)
  • P. Ekman, An argument for basic emotions, Cogn. Emotion (1992)
  • C.C. Hagan et al., MEG demonstrates a supra-additive response to facial and vocal emotion in the right superior temporal sulcus, Proc. Natl. Acad. Sci. U. S. A. (2009)