On the informativeness of supervision signals

Sucholutsky, I., Battleday, R. M., Collins, K. M., Marjieh, R., Peterson, J., Singh, P., et al. (2023). On the informativeness of supervision signals. In R. Evans & I. Shpitser (Eds.), Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence (pp. 2036-2046).

Basic
Genre: Conference Paper

Creators
Creators:
Sucholutsky, Ilia (1), Author
Battleday, Ruairidh M. (1), Author
Collins, Katherine M. (2), Author
Marjieh, Raja (3), Author
Peterson, Joshua (1), Author
Singh, Pulkit (1), Author
Bhatt, Umang (2, 4), Author
Jacoby, Nori (5), Author
Weller, Adrian (2, 4), Author
Griffiths, Thomas L. (1, 2), Author
Affiliations:
(1) Dept. of Computer Science, Princeton University
(2) Dept. of Engineering, University of Cambridge
(3) Dept. of Psychology, Princeton University
(4) Alan Turing Institute
(5) Research Group Computational Auditory Perception, Max Planck Institute for Empirical Aesthetics, Max Planck Society

Content
Free keywords: -
 Abstract: Supervised learning typically focuses on learning transferable representations from training examples annotated by humans. While rich annotations (like soft labels) carry more information than sparse annotations (like hard labels), they are also more expensive to collect. For example, while hard labels only provide information about the closest class an object belongs to (e.g., “this is a dog”), soft labels provide information about the object’s relationship with multiple classes (e.g., “this is most likely a dog, but it could also be a wolf or a coyote”). We use information theory to compare how a number of commonly-used supervision signals contribute to representation-learning performance, as well as how their capacity is affected by factors such as the number of labels, classes, dimensions, and noise. Our framework provides theoretical justification for using hard labels in the big-data regime, but richer supervision signals for few-shot learning and out-of-distribution generalization. We validate these results empirically in a series of experiments with over 1 million crowdsourced image annotations and conduct a cost-benefit analysis to establish a tradeoff curve that enables users to optimize the cost of supervising representation learning on their own datasets.
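
As a rough illustration of the intuition in the abstract (not the paper's actual derivation), the sketch below compares naive upper bounds on the bits carried by a single hard label versus a single soft label over C classes; the function names and the 4-bit reporting precision are assumptions made here purely for illustration.

    import numpy as np

    def hard_label_bits(num_classes: int) -> float:
        # A single hard label selects one of C classes: at most log2(C) bits.
        return float(np.log2(num_classes))

    def soft_label_bits(num_classes: int, precision_bits: int = 4) -> float:
        # A soft label reports a point on the (C-1)-dimensional probability
        # simplex; at a finite reporting precision this is roughly
        # (C - 1) * precision_bits bits, ignoring annotator noise.
        return float((num_classes - 1) * precision_bits)

    if __name__ == "__main__":
        C = 10
        print(f"hard label: <= {hard_label_bits(C):.2f} bits")  # ~3.32 bits
        print(f"soft label: <= {soft_label_bits(C):.2f} bits")  # 36.00 bits

In practice the gap narrows once annotation noise and collection cost are taken into account, which is the tradeoff the paper's cost-benefit analysis quantifies.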

Details
Language(s): eng - English
 Dates: 2023
 Publication Status: Published online
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: -
 Degree: -

Event
Title: 39th Conference on Uncertainty in Artificial Intelligence (UAI)
Place of Event: Pittsburgh, PA
Start-/End Date: 2023-07-31 - 2023-08-04

Source 1
Title: Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence
Source Genre: Proceedings
 Creator(s):
Evans, RJ, Editor
Shpitser, I, Editor
Affiliations:
-
Publ. Info: -
Pages: -
Volume / Issue: 216
Sequence Number: -
Start / End Page: 2036 - 2046
Identifier: ISSN: 2640-3498