  A Tutorial on Kernel Methods for Categorization

Jäkel, F., Schölkopf, B., & Wichmann, F. (2007). A Tutorial on Kernel Methods for Categorization. Journal of Mathematical Psychology, 51(6), 343-358. doi:10.1016/j.jmp.2007.06.002.

Basic
Item Permalink: http://hdl.handle.net/11858/00-001M-0000-0013-CADD-3
Version Permalink: http://hdl.handle.net/21.11116/0000-0003-B9AE-9
Genre: Journal Article

Creators

Jäkel, F. (1, 2), Author
Schölkopf, B. (1, 2), Author
Wichmann, F. A. (1, 2), Author
Affiliations:
1: Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795
2: Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Free keywords: -
 Abstract: The abilities to learn and to categorize are fundamental for cognitive systems, be it animals or machines, and therefore have attracted attention from engineers and psychologists alike. Modern machine learning methods and psychological models of categorization are remarkably similar, partly because these two fields share a common history in artificial neural networks and reinforcement learning. However, machine learning is now an independent and mature field that has moved beyond psychologically or neurally inspired algorithms towards providing foundations for a theory of learning that is rooted in statistics and functional analysis. Much of this research is potentially interesting for psychological theories of learning and categorization but also hardly accessible for psychologists. Here, we provide a tutorial introduction to a popular class of machine learning tools, called kernel methods. These methods are closely related to perceptrons, radial-basis-function neural networks and exemplar theories of categorization. Recent theoretical advances in machine learning are closely tied to the idea that the similarity of patterns can be encapsulated in a positive definite kernel. Such a positive definite kernel can define a reproducing kernel Hilbert space which allows one to use powerful tools from functional analysis for the analysis of learning algorithms. We give basic explanations of some key concepts—the so-called kernel trick, the representer theorem and regularization—which may open up the possibility that insights from machine learning can feed back into psychology.
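As an illustration of the ideas summarized in the abstract (this sketch is not part of the original record, and all names and data in it are invented), the snippet below defines a Gaussian (RBF) kernel as a similarity measure, uses it in an exemplar-style classifier that assigns a stimulus to the category with the greater summed kernel similarity to stored exemplars, and checks that the resulting Gram matrix is positive semidefinite, the property the abstract highlights:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel: a positive definite similarity measure."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def categorize(x, exemplars, labels, gamma=1.0):
    """Exemplar-style kernel classifier: pick the category whose stored
    exemplars have the greatest summed kernel similarity to x."""
    scores = {}
    for e, lab in zip(exemplars, labels):
        scores[lab] = scores.get(lab, 0.0) + rbf_kernel(x, e, gamma)
    return max(scores, key=scores.get)

# Toy exemplars for two categories (illustrative data only)
exemplars = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
labels = ["A", "A", "B", "B"]

print(categorize([0.1, 0.0], exemplars, labels))  # "A"

# Positive definiteness: the Gram matrix has no negative eigenvalues
K = np.array([[rbf_kernel(a, b) for b in exemplars] for a in exemplars])
print(np.all(np.linalg.eigvalsh(K) >= -1e-9))  # True
```

This mirrors the correspondence the paper discusses between exemplar theories of categorization and kernel machines: the kernel plays the role of a psychological similarity function, and positive definiteness is what licenses the functional-analytic machinery (RKHS, representer theorem) mentioned above.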

Details

Language(s):
 Dates: 2007-12
 Publication Status: Published in print
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Method: -
 Identifiers: DOI: 10.1016/j.jmp.2007.06.002
BibTeX Citekey: 4784
 Degree: -

Source 1

Title: Journal of Mathematical Psychology
Source Genre: Journal
Publ. Info: Orlando, Fla. : Academic Press
Pages: -
Volume / Issue: 51 (6)
Sequence Number: -
Start / End Page: 343 - 358
Identifier: ISSN: 0022-2496
CoNE: https://pure.mpg.de/cone/journals/resource/954922646040