  Mainly the actions: Functional knowledge has a primary role in understanding real-world scenes portrayed by either fine or coarse visual information

Ciesielski, K., Webb, A., & Spotorno, S. (2023). Mainly the actions: Functional knowledge has a primary role in understanding real-world scenes portrayed by either fine or coarse visual information. Poster presented at Twenty-Third Annual Meeting of the Vision Sciences Society (VSS 2023), St. Pete Beach, FL, USA.

Description:
-
OA-Status:
Not specified

Creators

Creators:
Ciesielski, K., Author
Webb, A.¹, Author
Spotorno, S., Author
Affiliations:
1Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3017468              

Content

Free keywords: -
Abstract: Studies on how individuals understand real-world scenes, and form predictions about them, have traditionally focused on taxonomic knowledge about the scene's content (its structure and the objects it contains). Recently, functional knowledge, which represents the actions afforded by a scene, has been proposed as a fundamental dimension of scene processing. However, it is unclear how these two kinds of knowledge are related, and in particular whether functional scene understanding requires the mediation of object information. We examined how taxonomic (specifically object-based) and functional (action-based) rapid scene understanding use visual information about fine, local features and objects, conveyed by high spatial frequencies (HSF), and coarse, contextual features, conveyed by low spatial frequencies (LSF). In each trial across four experiments, we presented an HSF- or LSF-filtered scene and two object or action words, one highly consistent with the scene and the other inconsistent. Participants reported which word was consistent. In the first two experiments, the words were shown simultaneously as primes, followed by the scene image, which remained on screen until response (Exp. 1, online) or for 150 ms followed by a 100 ms frequency-matched pink-noise mask (Exp. 2, online). Exp. 3 (online) reversed the word-scene sequence, using the scene as a prime, with the same presentation times as in Exp. 2. Exp. 4 (lab-based) used the paradigm of Exp. 2 but added a bandstop-filtered condition containing both HSF and LSF. Responses were faster for action than for object words in all experiments, except in the bandstop condition, which showed no difference. We also found greater accuracy for action words across most experiments.
These results suggest that, independently of whether it has been activated before the scene instance is encountered, functional knowledge has a primary role in scene understanding when only fine or coarse visual features are provided, and does not require the mediation of object information.

Details

Language(s):
Dates: 2023-08
Publication Status: Issued
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: DOI: 10.1167/jov.23.9.5689
Degree: -

Event

Title: Twenty-Third Annual Meeting of the Vision Sciences Society (VSS 2023)
Place of Event: St. Pete Beach, FL, USA
Start-/End Date: 2023-05-19 - 2023-05-24


Source 1

Title: Journal of Vision
Abbreviation: jov
Source Genre: Journal
 Creator(s):
Affiliations:
Publ. Info: Charlottesville, VA : Scholar One, Inc.
Pages: -
Volume / Issue: 23 (9)
Sequence Number: 53.341
Start / End Page: 5689
Identifier: ISSN: 1534-7362
CoNE: https://pure.mpg.de/cone/journals/resource/111061245811050