Record

  Identifying object categories from event-related EEG: Toward decoding of conceptual representations

Simanova, I., Van Gerven, M., Oostenveld, R., & Hagoort, P. (2010). Identifying object categories from event-related EEG: Toward decoding of conceptual representations. Poster presented at HBM 2010 - 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.


External References

Description: -
OA Status:

Creators

Creators:
Simanova, Irina 1, Author
Van Gerven, Marcel 2, 3, Author
Oostenveld, Robert 2, Author
Hagoort, Peter 1, 2, Author
Affiliations:
1 Neurobiology of Language Group, MPI for Psycholinguistics, Max Planck Society, ou_102880
2 Donders Institute for Brain, Cognition and Behaviour, External Organizations, ou_55236
3 Institute for Computing and Information Sciences, ou_persistent22

Content

Keywords: -
Abstract:

Introduction: Identification of the neural signature of a concept is a key challenge in cognitive neuroscience. In recent years, a number of studies have demonstrated the possibility of decoding conceptual information from spatial patterns in functional MRI data (Hauk et al., 2008; Shinkareva et al., 2008). An important unresolved question is whether similar decoding performance can be attained using electrophysiological measurements. The development of EEG-based concept decoding algorithms is interesting from an applications perspective, because the high temporal resolution of the EEG allows pattern recognition in real time. In this study we investigate the possibility of identifying conceptual representations from event-related EEG on the basis of the presentation of an object in three different modalities: an object's written name, its spoken name, and its line drawing.

Methods: Twenty-four native Dutch speakers participated in the study. They were presented with concepts from three semantic categories: two relevant categories (animals, tools) and a task category. There were four concepts per category, and all concepts were presented in three modalities: auditory, visual (line drawings), and textual (written Dutch words). Each item was repeated 80 times (relevant) or 16 times (task) in each modality. The text and picture stimuli were presented for 300 ms. The interval between stimuli had a random duration between 1000 and 1200 ms. Participants were instructed to respond upon the appearance of items from the task category. Continuous EEG was registered using a 64-channel system. The data were divided into epochs of one second starting 300 ms before stimulus onset. We used the time-domain representation of the signal as input to the classifier (a linear support vector machine; Vapnik, 2000). The classifier was trained to identify which of the two semantic categories (animal or tool) was presented to the subject. Performance of the classifier was computed as the proportion of correctly classified trials. Significance of the classification outcome was computed using a binomial test (Burges, 1998). In the first analysis we classified the semantic category of stimuli from the entire dataset, with trials of all modalities equally represented. In the second analysis we classified trials within each modality separately. In the third analysis we compared classification performance for the real categories with that for pseudo-categories, to investigate the role of perceptual features of the presented objects without a transparent contribution of conceptual information. The pseudo-categories were composed by randomly arranging all the concepts into classes such that each class contained exemplars of both categories.
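The decoding procedure described above lends itself to a compact sketch. The following is a minimal illustration, not the authors' code: it assumes epoched EEG stored as a NumPy array of shape (trials, channels, samples) with binary animal/tool labels (all values here are hypothetical), and uses scikit-learn's linear SVM with cross-validation plus SciPy's binomial test, mirroring the classifier and significance test named in the Methods.

```python
import numpy as np
from scipy.stats import binomtest
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical epoched data: trials x channels x samples
# (1 s epochs, -300 ms to +700 ms, 64 channels); labels:
# 0 = animal, 1 = tool. Random data stands in for real EEG.
n_trials, n_channels, n_samples = 640, 64, 256
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)

# Time-domain representation as classifier input: each epoch is
# flattened into one feature vector (channels * samples).
X_flat = X.reshape(n_trials, -1)

clf = make_pipeline(StandardScaler(), LinearSVC())

# Cross-validated predictions; performance is the proportion of
# correctly classified trials, as described in the abstract.
y_pred = cross_val_predict(clf, X_flat, y, cv=5)
n_correct = int((y_pred == y).sum())
accuracy = n_correct / n_trials

# Significance of the classification outcome against the 50%
# chance level, assessed with a binomial test.
p_value = binomtest(n_correct, n_trials, p=0.5, alternative="greater").pvalue
print(f"accuracy = {accuracy:.2f}, p = {p_value:.3g}")
```

Restricting the same pipeline to trials of a single modality corresponds to the second analysis; relabeling trials with pseudo-categories corresponds to the third.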
Results: In the first analysis we assessed the ability to discriminate patterns of EEG signals referring to the representation of animals versus tools across the three tested modalities. Significant accuracy was achieved for nineteen out of twenty subjects. The highest classification accuracy achieved across modalities was 0.69, with a mean value of 0.61 over all 20 subjects. To check whether the performance of the classifier was consistent during the experimental session, we visualized the correctness of the classifier's decisions over the time course of the session. Figure 1 shows that the classifier identified trials from the picture blocks more accurately than trials from the text and audio blocks.

To further assess modality-specific classification performance, we trained and tested the classifiers within each of the individual modalities separately (Fig. 2). For pictures, the highest classification accuracy reached over all subjects was 0.92, and classification was significant (p<0.001) for all 20 subjects, with a mean value of 0.80. The classifier for the auditory modality performed significantly better than chance (p<0.001 and p<0.01) in 15 out of 20 subjects, with a mean value of 0.60. The classifier for the orthographic modality performed significantly better than chance in 5 out of 20 subjects, with a mean value of 0.56. Comparison of the classification performance for real and pseudo-categories revealed a high impact of conceptually driven activity on the classifier's performance (Fig. 3). Mean accuracies of pseudo-category classification over all subjects were 0.56 for pictures, 0.56 for audio, and 0.55 for text. Significant (p<0.005) differences from the real-category results were found for all pseudo-categories in the picture modality, for eight out of ten pseudo-categories in the auditory modality, and for one out of ten pseudo-categories in the orthographic modality.

Conclusions: The results show that stable neural patterns induced by the presentation of stimuli from different categories can be identified with EEG. High classification performance was achieved for all subjects. The visual modality proved much easier to classify than the other modalities, indicating the existence of category-specific patterns in visual recognition of objects (Kiefer, 2001; Liu et al., 2009). Currently we are working towards interpreting the patterns found during classification using Bayesian logistic regression. A considerable reduction in performance was found when using pseudo-categories instead of the real categories, indicating that the classifier identified neural activity at the level of conceptual representations. Our results could help to further the understanding of the mechanisms underlying conceptual representations. The study also provides a first step towards the use of concept decoding in the context of brain-computer interface applications.

References:
Burges, C. (1998). A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2), 121-167.
Hauk, O. (2008). Imagery or meaning? Evidence for a semantic origin of category-specific brain activity in metabolic imaging. European Journal of Neuroscience, 27(7), 1856-1866.
Kiefer, M. (2001). Perceptual and semantic sources of category-specific effects: Event-related potentials during picture and word categorization. Memory and Cognition, 29(1), 100-116.
Liu, H. (2009). Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex. Neuron, 62(2), 281-290.
Shinkareva, S. (2008). Using fMRI brain activation to identify cognitive states associated with perception of tools and dwellings. PLoS ONE, 3(1), e1394.
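As a companion illustration of the pseudo-category control described in the Methods above, the sketch below enumerates label reassignments in which each class mixes animals and tools. The concept names are hypothetical placeholders (the abstract does not list the stimuli); the study itself used ten such pseudo-categories per modality.

```python
from itertools import combinations

# Hypothetical concept names; placeholders only, since the actual
# stimuli are not listed in the abstract.
animals = ["cat", "dog", "horse", "cow"]
tools = ["hammer", "saw", "drill", "wrench"]
concepts = animals + tools

def pseudo_category_splits():
    """Enumerate splits of the 8 concepts into two classes of 4 such
    that each class contains exemplars of both real categories, so
    the animal/tool distinction cannot drive classification."""
    splits = []
    for class_a in combinations(concepts, 4):
        n_animals = sum(c in animals for c in class_a)
        if 0 < n_animals < 4:  # both classes must mix categories
            class_b = tuple(c for c in concepts if c not in class_a)
            splits.append((class_a, class_b))
    return splits

splits = pseudo_category_splits()
print(len(splits))  # 68 candidate assignments (each split counted twice)
# Relabel the trials according to one such split and rerun the same
# classification pipeline; the drop relative to real-category accuracy
# estimates the purely perceptual contribution.
```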

Details

Language(s):
Date: 2010
Publication status: Not specified
Pages: -
Place, publisher, edition: -
Table of contents: -
Type of review: Peer review
Identifiers: -
Degree: -

Event

Title: HBM 2010 - 16th Annual Meeting of the Organization for Human Brain Mapping
Venue: Barcelona, Spain
Start/End Date: 2010-06-06 - 2010-06-10
