  Some Theoretical Aspects of Human Categorization Behavior: Similarity and Generalization

Jäkel, F. (2007). Some Theoretical Aspects of Human Categorization Behavior: Similarity and Generalization. PhD Thesis, Eberhard-Karls-Universität, Tübingen, Germany.

Jäkel, F.¹,² (Author)
¹ Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795
² Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794


Free keywords: -
 Abstract: Explanations of human categorization behavior often invoke similarity. Stimuli that are similar to each other are grouped together, whereas stimuli that are very different are kept separate. Despite serious problems in defining similarity, both conceptually and experimentally, this is the prevailing view of categorization in prototype models (Posner & Keele, 1968; Reed, 1972) and exemplar models (Medin & Schaffer, 1978; Nosofsky, 1986). It is also the prevailing approach in machine learning (Schölkopf & Smola, 2002). In this thesis, we re-examine the notion of similarity as it is used in models of human categorization behavior from a machine learning perspective. Our current understanding of many machine learning methods has been deepened considerably by the realization that similarity can be modeled as a so-called positive definite kernel. One of the most commonly used similarity measures in psychology, Shepard's universal law of generalization (Shepard, 1987), is shown to be such a positive definite kernel. This leads to two theoretical insights about metric models of psychological similarity. First, early models of similarity introduced the notion of a psychological space with a Euclidean metric that represents the similarity of stimuli (Torgerson, 1952; Ekman, 1954). Shepard's early work on multidimensional scaling can be understood as an effort to overcome the assumption that the similarity of stimuli is captured by a Euclidean metric (Shepard, 1962). Later, Shepard summarized the relationship between similarity and metrics in many psychological spaces with his universal law of generalization (Shepard, 1987). Ironically, however, this thesis demonstrates that the universal law leads to an embedding of similarity into a high-dimensional Euclidean space and therefore results in a return to those roots of multidimensional scaling that Shepard tried to overcome.
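The kernel reading of Shepard's law can be illustrated numerically. The sketch below (an illustration, not the thesis's own code) takes generalization to decay exponentially with Euclidean distance, which is the Laplacian kernel of machine learning, and checks the hallmark of positive definiteness: the Gram matrix over any set of stimuli has no negative eigenvalues.

```python
import numpy as np

# Shepard's universal law: generalization decays exponentially with the
# psychological distance d(x, y).  With a Euclidean metric this similarity
# measure is the Laplacian kernel, K[i, j] = exp(-||x_i - x_j||).
def shepard_kernel(X, Y):
    """Gram matrix of exponential-decay similarities between stimuli."""
    dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return np.exp(-dists)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))   # 20 hypothetical stimuli in a 3-D space
K = shepard_kernel(X, X)

# Positive definiteness of the kernel means the Gram matrix is positive
# semidefinite: all eigenvalues are >= 0, up to numerical tolerance.
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-10)
```

Because this holds for any stimulus set, the similarities can always be reproduced as inner products of vectors in some (high-dimensional) Euclidean space, which is the embedding result described above.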
Second, models of similarity that are based on multidimensional scaling have been heavily criticized by Tversky and coworkers (Beals, Krantz, & Tversky, 1968; Tversky, 1977; Tversky & Gati, 1982). Despite this criticism, scaling methods have been used with great success, especially in categorization research (Nosofsky, 1986). Tversky and Gati (1982) reported data that are inconsistent with standard geometric interpretations of similarity that assume the triangle inequality and segmental additivity. Here, it is shown that there are metrics induced by Shepard's law of generalization that do not have the property of segmental additivity and are therefore consistent with the data. These metrics are also bounded from above, thereby implementing the intuition that stimulus similarity is best defined locally (Indow, 1994). As Shepard's law is used extensively in psychological models of categorization (Nosofsky, 1986; Kruschke, 1992; Love, Medin, & Gureckis, 2004), the insight that similarity can be modeled as a positive definite kernel can also benefit a theoretical analysis of categorization behavior. We show that exemplar models in psychology are closely related to kernel logistic regression (Hastie, Tibshirani, & Friedman, 2001). The link between kernel logistic regression and exemplar theories is their use of radial-basis-function neural networks (Poggio & Girosi, 1989; Poggio, 1990). A traditional objection to exemplar models is their lack of an abstraction mechanism, which seemingly limits their generalization performance (Smith & Minda, 1998, 2000). However, kernel logistic regression is used successfully in many machine learning applications. We find that exemplar theories in psychology are indeed prone to overfitting, i.e., they show poor generalization performance. However, like their relatives in machine learning, exemplar models can be equipped with regularization mechanisms that are known to improve generalization performance under real-world category learning conditions.
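The connection between exemplar models and kernel logistic regression, and the role of regularization, can be sketched in a few lines. This is a minimal illustration under assumed settings (Laplacian/Shepard similarity, gradient descent, an L2 penalty `lam`), not the thesis's implementation: each training stimulus acts as an exemplar, category probability is a logistic function of similarity-weighted exemplar votes, and the penalty is the regularization mechanism that curbs overfitting.

```python
import numpy as np

def laplacian_gram(X, Y):
    """Exemplar similarities via Shepard-style exponential decay."""
    return np.exp(-np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1))

def fit_klr(X, y, lam=0.1, lr=0.1, steps=500):
    """Fit exemplar weights alpha by gradient descent on the
    L2-regularized logistic loss (penalty: (lam/2) * alpha' K alpha)."""
    K = laplacian_gram(X, X)
    alpha = np.zeros(len(X))
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-K @ alpha))       # P(category 1 | stimulus)
        grad = K @ (p - y) / len(y) + lam * K @ alpha
        alpha -= lr * grad
    return alpha

def predict(alpha, X_train, X_new):
    """Category-1 probability for new stimuli from exemplar votes."""
    return 1.0 / (1.0 + np.exp(-laplacian_gram(X_new, X_train) @ alpha))

# Toy categories: exemplars clustered around -1 and +1 on one dimension.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(-1, 0.3, (10, 1)), rng.normal(1, 0.3, (10, 1))])
y = np.array([0] * 10 + [1] * 10)
alpha = fit_klr(X, y)
probs = predict(alpha, X, np.array([[-1.0], [1.0]]))
print(probs)  # low probability near -1, high probability near +1
```

Setting `lam=0` recovers an unregularized exemplar model, which is where overfitting appears; the penalty shrinks the exemplar weights and improves generalization, mirroring the regularization mechanisms discussed above.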


 Dates: 2007-11-09; 2007
 Publication Status: Published in print
 Pages: 98
 Publishing info: Tübingen, Germany : Eberhard-Karls-Universität
 Table of Contents: -
 Rev. Type: -
 Identifiers: BibTeX Citekey: 5597
 Degree: PhD
