The Time Course of Prediction-Based Adjustment of Visual Perception: Evidence from EEG
Research on visual predictive coding typically compares the neural activity evoked by stimuli that conform to expectations with that evoked by stimuli that violate them, while neglecting an intermediate stimulus type: stimuli that violate expectations but are perceptually similar to expected stimuli. Combining event-related potential (ERP) techniques with a visual statistical learning paradigm, this study examined the dynamic neural processes of visual perception under three conditions: expected, unexpected-dissimilar, and unexpected-similar. The results showed that in the early stage (90~140 ms), both expected and unexpected-similar stimuli elicited a larger P1; in the middle stage (200~280 ms), only unexpected-dissimilar stimuli elicited a larger N2; and in the late stage (350~500 ms), only expected stimuli elicited a larger P3. The early ERP activity for unexpected-similar stimuli resembled that for expected stimuli, whereas differences emerged at later stages. This indicates that the predictive coding mechanism dynamically trades off efficiency against precision, and that incorporating unexpected-similar stimuli into future research can more deeply reveal the internal model underlying visual predictive coding.
Visual predictive coding theory posits that the brain actively generates predictions about incoming sensory input and computes prediction errors when the input deviates from expectations. Numerous studies have investigated the neural correlates of predictive coding by comparing brain responses to expected and unexpected stimuli. However, most research has focused on the dichotomy between stimuli that either conform to or violate expectations, neglecting an intermediate stimulus type that falls between these two extremes: stimuli that violate expectations but share perceptual similarity with expected stimuli. Incorporating this stimulus type into the predictive coding framework could offer a more nuanced understanding of the neural mechanisms underlying visual perception and the updating of internal models. The present study investigated the dynamic neural processes underlying visual perception in three conditions (expected stimuli, unexpected-dissimilar stimuli, and unexpected-similar stimuli) by combining event-related potential (ERP) techniques with a visual statistical learning paradigm. We hypothesized that the perceptual similarity between unexpected and expected stimuli would modulate neural activity in a stage-specific manner, revealing the dynamic interplay between expectation and perceptual similarity in shaping visual predictive coding.
In this ERP study, human participants viewed sequentially presented pairs of visual object stimuli in which the identity of the first object predicted the identity of the second with varying degrees of expectancy, based on learned conditional probabilities. On expected trials, the first object predicted the identity of the second with 60% probability, whereas on unexpected trials it did so with only 20% probability. For unexpected stimuli, perceptual similarity was further manipulated by presenting either two visually similar objects or two perceptually distinct objects, referred to as "unexpected-similar" and "unexpected-dissimilar" stimuli, respectively. The experiment comprised three phases: an initial statistical learning phase to implicitly establish the predictive relationships between object pairs, a thresholding phase to calibrate task difficulty and equate baseline performance across participants, and the main experimental phase.
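The trial-generation logic described above can be sketched as follows. This is a minimal illustration, not the study's actual code: the condition labels, seed, and trial count are hypothetical placeholders, and only the 60/20/20 conditional-probability structure is taken from the design.

```python
import random

# Hypothetical sketch of the paradigm's conditional probabilities:
# given the leading object, the trailing object is the expected one
# with p = .60, and one of two unexpected alternatives (perceptually
# similar vs. dissimilar to the expected object) with p = .20 each.
CONDITIONS = [
    ("expected", 0.60),
    ("unexpected_similar", 0.20),
    ("unexpected_dissimilar", 0.20),
]

def make_trials(n_trials, seed=0):
    """Draw a trial sequence according to the 60/20/20 design."""
    rng = random.Random(seed)
    labels = [c for c, _ in CONDITIONS]
    weights = [w for _, w in CONDITIONS]
    return [rng.choices(labels, weights=weights, k=1)[0]
            for _ in range(n_trials)]

trials = make_trials(300)
# Empirical proportions should approximate the designed probabilities.
props = {c: trials.count(c) / len(trials) for c, _ in CONDITIONS}
```

With enough trials per participant, the observed proportions converge on the designed conditional probabilities, which is what allows the statistical learning phase to implicitly establish the predictive relationships.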
The results revealed clear differences over time in the pattern of neural activity related to predictive coding, demonstrating dynamic influences of predictions on visual processing and consciousness. In the early time window around 100 ms, both expected and unexpected-similar stimuli elicited an enhanced P1 component. Given the cognitive functions attributed to the P1, this indicates rapid attentional selection for both stimulus types. Subsequently, only unexpected-dissimilar stimuli elicited a greater N2 component around 200~300 ms, consistent with a neural surprise response, suggesting that a prediction error signal was generated and triggered higher-level processing to update the internal model. Finally, in the later time window around 350~500 ms, only expected stimuli elicited an enhanced P3 component, suggesting facilitated perceptual discrimination and decision-making for expected inputs. Moreover, the absence of heightened N2 and P3 components in response to unexpected-similar stimuli points to more intricate mechanisms in the predictive coding process: although they violate predictions, unexpected-similar stimuli do not prompt the updating of internal models and thus do not give rise to more accurate visual representations.
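The stage-wise comparison above rests on a standard mean-amplitude analysis: for each component, voltage is averaged within its time window and compared across conditions. A minimal sketch of that windowing step, assuming a single-channel onset-locked average sampled at 500 Hz (the sampling rate and the synthetic waveform are illustrative assumptions, not values reported here):

```python
SFREQ = 500  # Hz; assumed sampling rate for this illustration

# Component windows follow the time windows reported above (ms post-onset).
WINDOWS_MS = {
    "P1": (90, 140),
    "N2": (200, 280),
    "P3": (350, 500),
}

def mean_amplitude(erp, window_ms, sfreq=SFREQ):
    """Mean voltage of an onset-locked ERP within a latency window."""
    start = int(window_ms[0] / 1000 * sfreq)
    stop = int(window_ms[1] / 1000 * sfreq)
    segment = erp[start:stop]
    return sum(segment) / len(segment)

# Synthetic 600-ms waveform: baseline 1 µV, with 3 µV in the P3 window,
# standing in for an "enhanced P3 to expected stimuli".
erp = [1.0] * 300
for i in range(int(0.350 * SFREQ), int(0.500 * SFREQ)):
    erp[i] = 3.0

amps = {name: mean_amplitude(erp, win) for name, win in WINDOWS_MS.items()}
```

In practice such mean amplitudes would be extracted per participant and condition (e.g., with EEGLAB/ERPLAB or MNE-Python) and submitted to the condition-wise statistical comparisons summarized above.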
By incorporating the novel stimulus type of unexpected stimuli with similarity into the predictive coding framework, this study sheds light on the characteristics and necessary conditions for updating internal models, providing a more comprehensive understanding of visual predictive coding processes. The results highlight the dynamic interplay between expectation and perceptual similarity in shaping neural responses across different stages of visual processing. This research not only advances our theoretical understanding of predictive coding mechanisms but also has practical implications for optimizing the design of brain-inspired artificial intelligence systems. Furthermore, the findings may offer valuable insights into the neural basis of perceptual and cognitive dysfunctions in certain neurological and psychiatric disorders characterized by impaired predictive coding.
visual perception / predictive coding / perceptual similarity / ERP / visual statistical learning paradigm
| [1] |
In this functional magnetic resonance imaging study we tested whether the predictability of stimuli affects responses in primary visual cortex (V1). The results of this study indicate that visual stimuli evoke smaller responses in V1 when their onset or motion direction can be predicted from the dynamics of surrounding illusory motion. We conclude from this finding that the human brain anticipates forthcoming sensory input that allows predictable visual stimuli to be processed with less neural activation at early stages of cortical processing.
|
| [2] |
Many theories of perception are anchored in the central notion that the brain continuously updates an internal model of the world to infer the probable causes of sensory events. In this framework, the brain needs not only to predict the causes of sensory input, but also when they are most likely to happen. In this article, we review the neurophysiological bases of sensory predictions of "what' (predictive coding) and 'when' (predictive timing), with an emphasis on low-level oscillatory mechanisms. We argue that neural rhythms offer distinct and adapted computational solutions to predicting 'what' is going to happen in the sensory environment and 'when'.Copyright © 2012 Elsevier Ltd. All rights reserved.
|
| [3] |
This paper presents a review of theoretical and empirical work on repetition suppression in the context of predictive coding. Predictive coding is a neurobiologically plausible scheme explaining how biological systems might perform perceptual inference and learning. From this perspective, repetition suppression is a manifestation of minimising prediction error through adaptive changes in predictions about the content and precision of sensory inputs. Simulations of artificial neural hierarchies provide a principled way of understanding how repetition suppression - at different time scales - can be explained in terms of inference and learning implemented under predictive coding. This formulation of repetition suppression is supported by results of numerous empirical studies of repetition suppression and its contextual determinants.Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
|
| [4] |
The degree to which we perceive real-world objects as similar or dissimilar structures our perception and guides categorization behavior. Here, we investigated the neural representations enabling perceived similarity using behavioral judgments, fMRI and MEG. As different object dimensions co-occur and partly correlate, to understand the relationship between perceived similarity and brain activity it is necessary to assess the unique role of multiple object dimensions. We thus behaviorally assessed perceived object similarity in relation to shape, function, color and background. We then used representational similarity analyses to relate these behavioral judgments to brain activity. We observed a link between each object dimension and representations in visual cortex. These representations emerged rapidly within 200 ms of stimulus onset. Assessing the unique role of each object dimension revealed partly overlapping and distributed representations: while color-related representations distinctly preceded shape-related representations both in the processing hierarchy of the ventral visual pathway and in time, several dimensions were linked to high-level ventral visual cortex. Further analysis singled out the shape dimension as neither fully accounted for by supra-category membership, nor a deep neural network trained on object categorization. Together our results comprehensively characterize the relationship between perceived similarity of key object dimensions and neural activity.Copyright © 2019 The Authors. Published by Elsevier Inc. All rights reserved.
|
| [5] |
Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to adaptive success. This target article critically examines this “hierarchical prediction machine” approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action. Sections 1 and 2 lay out the key elements and implications of the approach. Section 3 explores a variety of pitfalls and challenges, spanning the evidential, the methodological, and the more properly conceptual. The paper ends (sections 4 and 5) by asking how such approaches might impact our more general vision of mind, experience, and agency.
|
| [6] |
|
| [7] |
|
| [8] |
This study aimed to characterize the neural generators of the early components of the visual evoked potential (VEP) to isoluminant checkerboard stimuli. Multichannel scalp recordings, retinotopic mapping and dipole modeling techniques were used to estimate the locations of the cortical sources giving rise to the early C1, P1, and N1 components. Dipole locations were matched to anatomical brain regions visualized in structural magnetic resonance imaging (MRI) and to functional MRI (fMRI) activations elicited by the same stimuli. These converging methods confirmed previous reports that the C1 component (onset latency 55 msec; peak latency 90-92 msec) was generated in the primary visual area (striate cortex; area 17). The early phase of the P1 component (onset latency 72-80 msec; peak latency 98-110 msec) was localized to sources in dorsal extrastriate cortex of the middle occipital gyrus, while the late phase of the P1 component (onset latency 110-120 msec; peak latency 136-146 msec) was localized to ventral extrastriate cortex of the fusiform gyrus. Among the N1 subcomponents, the posterior N150 could be accounted for by the same dipolar source as the early P1, while the anterior N155 was localized to a deep source in the parietal lobe. These findings clarify the anatomical origin of these VEP components, which have been studied extensively in relation to visual-perceptual processes.Copyright 2001 Wiley-Liss, Inc.
|
| [9] |
Numerous studies in psychology, cognitive neuroscience and psycholinguistics have used pictures of objects as stimulus materials. Currently, authors engaged in cross-linguistic work or wishing to run parallel studies at multiple sites where different languages are spoken must rely on rather small sets of black-and-white or colored line drawings. These sets are increasingly experienced as being too limited. Therefore, we constructed a new set of 750 colored pictures of concrete concepts. This set, MultiPic, constitutes a new valuable tool for cognitive scientists investigating language, visual perception, memory and/or attention in monolingual or multilingual populations. Importantly, the MultiPic databank has been normed in six different European languages (British English, Spanish, French, Dutch, Italian and German). All stimuli and norms are freely available at http://www.bcbl.eu/databases/multipic.
|
| [10] |
Advanced perceptual systems are faced with the problem \nof securing a principled (ideally, veridical) relationship between the \nworld and its internal representation. I propose a unified approach \nto visual representation, addressing the need for superordinate and \nbasic-level categorization and for the identification of specific \ninstances of familiar categories. According to the proposed theory, \na shape is represented internally by the responses of a small number \nof tuned modules, each broadly selective for some reference shape, \nwhose similarity to the stimulus it measures. This amounts to \nembedding the stimulus in a low-dimensional proximal shape space \nspanned by the outputs of the active modules. This shape space \nsupports representations of distal shape similarities that are \nveridical as Shepard's (1968) second-order isomorphisms \n(i.e., correspondence between distal and proximal similarities \namong shapes, rather than between distal shapes and their proximal \nrepresentations). Representation in terms of similarities to reference \nshapes supports processing (e.g., discrimination) of shapes that are \nradically different from the reference ones, without the need for the \ncomputationally problematic decomposition into parts required by other \ntheories. Furthermore, a general expression for similarity between two \nstimuli, based on comparisons to reference shapes, can be used to \nderive models of perceived similarity ranging from continuous, \nsymmetric, and hierarchical ones, as in multidimensional scaling \n(Shepard 1980), to discrete and nonhierarchical ones, as in the \ngeneral contrast models (Shepard & Arabie 1979; Tversky \n1977).
|
| [11] |
Reports of expectation suppression have shaped the development of influential predictive coding-based theories of visual perception. However recent work has highlighted confounding factors that may mimic or inflate expectation suppression effects. In this review, we describe four confounds that are prevalent across experiments that tested for expectation suppression: effects of surprise, attention, stimulus repetition and adaptation, and stimulus novelty. With these confounds in mind we then critically review the evidence for expectation suppression across probabilistic cueing, statistical learning, oddball, action-outcome learning and apparent motion designs. We found evidence for expectation suppression within a specific subset of statistical learning designs that involved weeks of sequence learning prior to neural activity measurement. Across other experimental contexts, whereby stimulus appearance probabilities were learned within one or two testing sessions, there was inconsistent evidence for genuine expectation suppression. We discuss how an absence of expectation suppression could inform models of predictive processing, repetition suppression and perceptual decision-making. We also provide suggestions for designing experiments that may better test for expectation suppression in future work.Copyright © 2021. Published by Elsevier Ltd.
|
| [12] |
Recent years have seen an explosion of research on the N2 component of the event-related potential, a negative wave peaking between 200 and 350 ms after stimulus onset. This research has focused on the influence of "cognitive control," a concept that covers strategic monitoring and control of motor responses. However, rich research traditions focus on attention and novelty or mismatch as determinants of N2 amplitude. We focus on paradigms that elicit N2 components with an anterior scalp distribution, namely, cognitive control, novelty, and sequential matching, and argue that the anterior N2 should be divided into separate control- and mismatch-related subcomponents. We also argue that the oddball N2 belongs in the family of attention-related N2 components that, in the visual modality, have a posterior scalp distribution. We focus on the visual modality for which components with frontocentral and more posterior scalp distributions can be readily distinguished.
|
| [13] |
This article concerns the nature of evoked brain responses and the principles underlying their generation. We start with the premise that the sensory brain has evolved to represent or infer the causes of changes in its sensory inputs. The problem of inference is well formulated in statistical terms. The statistical fundaments of inference may therefore afford important constraints on neuronal implementation. By formulating the original ideas of Helmholtz on perception, in terms of modern-day statistical theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts.
|
| [14] |
|
| [15] |
Humans are good at performing visual tasks, but experimental measurements have revealed substantial biases in the perception of basic visual attributes. An appealing hypothesis is that these biases arise through a process of statistical inference, in which information from noisy measurements is fused with a probabilistic model of the environment. However, such inference is optimal only if the observer's internal model matches the environment. We found this to be the case. We measured performance in an orientation-estimation task and found that orientation judgments were more accurate at cardinal (horizontal and vertical) orientations. Judgments made under conditions of uncertainty were strongly biased toward cardinal orientations. We estimated observers' internal models for orientation and found that they matched the local orientation distribution measured in photographs. In addition, we determined how a neural population could embed probabilistic information responsible for such biases.
|
| [16] |
Electrophysiological and hemodynamical responses of the brain allow investigation of the neural origins of human attention. We review attention-related brain responses from auditory and visual tasks employing oddball and novelty paradigms. Dipole localization and intracranial recordings as well as functional magnetic resonance imaging reveal multiple areas involved in generating and modulating attentional brain responses. In addition, the influence of brain lesions of circumscribed areas of the human cortex onto attentional mechanisms are reviewed. While it is obvious that damaged brain tissue no longer functions properly, it has also been shown that functions of non-lesioned brain areas are impaired due to loss of modulatory influence of the lesioned area. Both early (P1 and N1) and late (P3) event-related potentials are modulated by excitatatory and inhibitory mechanisms. Oscillatory EEG-correlates of attention in the alpha and gamma frequency range also show attentional modulation.
|
| [17] |
Sensory processing of action effects has been shown to differ from that of externally triggered stimuli, with respect both to the perceived timing of their occurrence (intentional binding) and to their intensity (sensory attenuation). These phenomena are normally attributed to forward action models, such that when action prediction is consistent with changes in our environment, our experience of these effects is altered. Although much progress has been made in recent years in understanding sensory attenuation and intentional binding, a number of important questions regarding the precise nature of the predictive mechanisms involved remain unanswered. Moreover, these mechanisms are often not discussed in empirical papers, and a comprehensive review of these issues is yet to appear. This review attempts to fill this void. We systematically investigated the role of temporal prediction, temporal control, identity prediction, and motor prediction in previous published reports of sensory attenuation and intentional binding. By isolating the individual processes that have previously been contrasted and incorporating these experiments with research in the related fields of temporal attention and stimulus expectation, we assessed the degree to which existing data provide evidence for the role of forward action models in these phenomena. We further propose a number of avenues for future research, which may help to better determine the role of motor prediction in processing of voluntary action effects, as well as to improve understanding of how these phenomena might fit within a general predictive processing framework. Furthermore, our analysis has important implications for understanding disorders of agency in schizophrenia.(PsycINFO Database Record (c) 2013 APA, all rights reserved).
|
| [18] |
|
| [19] |
Detecting the presence of an object is a different process than identifying the object as a particular object. This difference has not been taken into account in designing experiments on the neural correlates of consciousness. We compared the electrophysiological correlates of conscious detection and identification directly by measuring ERPs while participants performed either a task only requiring the conscious detection of the stimulus or a higher-level task requiring its conscious identification. Behavioral results showed that, even if the stimulus was consciously detected, it was not necessarily identified. A posterior electrophysiological signature 200-300 msec after stimulus onset was sensitive for conscious detection but not for conscious identification, which correlated with a later widespread activity. Thus, we found behavioral and neural evidence for elementary visual experiences, which are not yet enriched with higher-level knowledge. The search for the mechanisms of consciousness should focus on the early elementary phenomenal experiences to avoid the confounding effects of higher-level processes.
|
| [20] |
Kouider, Sid; Long, Bria; Le Stanc, Lorna; Charron, Sylvain; Fievet, Anne-Caroline; Barbosa, Leonardo S.; Gelskov, Sofie V. PSL Res Univ, Ecole Normale Super, EHESS CNRS ENS, Brain & Consciousness Grp,Lab Sci Cognit & Psycho, F-75005 Paris, France. Kouider, Sid New York Univ Abu Dhabi, Dept Psychol, Div Sci, Abu Dhabi, U Arab Emirates. Long, Bria Harvard Univ, Dept Psychol, Vis Sci Lab, Cambridge, MA 02138 USA.
|
| [21] |
Human event-related potentials (ERPs) have previously been observed to be attenuated for self-triggered sounds and amplified for deviant auditory stimuli. These auditory ERP modulations have been proposed to reflect internal predictions about the sensory consequences of our actions and more generally about our sensory context. The present exploratory ERP study (1) compared the processing of self-triggered tones by either intention-based or stimulus-driven actions, and (2) studied the impact of impulsivity traits on the prediction of action-effects. Our results are consistent with an early distinction, before N1, between tones triggered by intention-based actions and tones triggered by stimulus-driven actions. Moreover, we observed an enhanced N2b for deviant stimuli triggered by intention-based actions only. In addition, N2b modulations were correlated with impulsiveness scores. Altogether our results suggest that action-effect prediction is stronger in intention-based actions and impaired in impulsive participants but replication studies are needed to corroborate the reported findings.Copyright © 2019 Elsevier Ltd. All rights reserved.
|
| [22] |
ERPLAB toolbox is a freely available, open-source toolbox for processing and analyzing event-related potential (ERP) data in the MATLAB environment. ERPLAB is closely integrated with EEG LAB, a popular open-source toolbox that provides many EEG preprocessing steps and an excellent user interface design. ERPLAB adds to EEGLAB's EEG processing functions, providing additional tools for filtering, artifact detection, re-referencing, and sorting of events, among others. ERPLAB also provides robust tools for averaging EEG segments together to create averaged ERPs, for creating difference waves and other recombinations of ERP waveforms through algebraic expressions, for filtering and rereferencing the averaged ERPs, for plotting ERP waveforms and scalp maps, and for quantifying several types of amplitudes and latencies. ERPLAB's tools can be accessed either from an easy-to-learn graphical user interface or from MATLAB scripts, and a command history function makes it easy for users with no programming experience to write scripts. Consequently, ERPLAB provides both ease of use and virtually unlimited power and flexibility, making it appropriate for the analysis of both simple and complex ERP experiments. Several forms of documentation are available, including a detailed user's guide, a step-by-step tutorial, a scripting guide, and a set of video-based demonstrations.
|
| [23] |
|
| [24] |
The potentially interactive influence of attention and prediction was investigated by measuring event-related potentials (ERPs) in a spatial cueing task with attention (task-relevant) and prediction (probabilistic) cues. We identified distinct processing stages of this interactive influence. Firstly, in line with the attentional gain hypothesis, a larger amplitude response of the contralateral N1, and Nd1 for attended gratings was observed. Secondly, conforming to the attenuation-by-prediction hypothesis, a smaller negativity in the time window directly following the peak of the N1 component for predicted compared to unpredicted gratings was observed. In line with the hypothesis that attention and prediction interface, unpredicted/unattended stimuli elicited a larger negativity at central-parietal sites, presumably reflecting an increased prediction error signal. Thirdly, larger P3 responses to unpredicted stimuli pointed to the updating of an internal model. Attention and prediction can be considered as differentiated mechanisms that may interact at different processing stages to optimise perception.Copyright © 2017 Elsevier B.V. All rights reserved.
|
| [25] |
|
| [26] |
|
| [27] |
Psychologists and neuroscientists have had a long-standing interest in the P3, a prominent component of the event-related brain potential. This review aims to integrate knowledge regarding the neural basis of the P3 and to elucidate its functional role in information processing. The authors review evidence suggesting that the P3 reflects phasic activity of the neuromodulatory locus coeruleus-norepinephrine (LC-NE) system. They discuss the P3 literature in the light of empirical findings and a recent theory regarding the information-processing function of the LC-NE phasic response. The theoretical framework emerging from this research synthesis suggests that the P3 reflects the response of the LC-NE system to the outcome of internal decision-making processes and the consequent effects of noradrenergic potentiation of information processing.Copyright 2005 APA, all rights reserved.
|
| [28] |
|
| [29] |
|
| [30] |
We describe a model of visual processing in which feedback connections from a higher- to a lower-order visual cortical area carry predictions of lower-level neural activities, whereas the feedforward connections carry the residual errors between the predictions and the actual lower-level activities. When exposed to natural images, a hierarchical network of model neurons implementing such a model developed simple-cell-like receptive fields. A subset of neurons responsible for carrying the residual errors showed endstopping and other extra-classical receptive-field effects. These results suggest that rather than being exclusively feedforward phenomena, nonclassical surround effects in the visual cortex may also result from cortico-cortical feedback as a consequence of the visual system using an efficient hierarchical strategy for encoding natural images.
|
| [31] |
The responses of neurons in cortical areas V2 and V4 can be significantly modulated by attention to particular locations within an input image. We show that such effects emerge naturally when perception is viewed as a probabilistic inference process governed by Bayesian principles and implemented in hierarchical cortical networks. The proposed model can explain a rich variety of attention-related responses in cortical area V4 including multiplicative modulation of tuning curves, restoration of neural responses in the presence of distracting stimuli, and influence of attention on neighboring unattended locations. Our results suggest a new interpretation of attention as a cortical mechanism for reducing perceptual uncertainty by combining top-down task-relevant information with bottom-up sensory inputs in a probabilistic manner.
|
| [32] |
|
| [33] |
|
| [34] |
Event-related potentials (ERPs) elicited by transient nociceptive stimuli in humans are largely sensitive to bottom-up novelty induced, for example, by changes in stimulus attributes (e.g., modality or spatial location) within a stream of repeated stimuli. Here we aimed 1) to test the contribution of a selective change of the intensity of a repeated stimulus in determining the magnitude of nociceptive ERPs, and 2) to dissect the effect of this change of intensity in terms of "novelty" and "saliency" (an increase of stimulus intensity is more salient than a decrease of stimulus intensity). Nociceptive ERPs were elicited by trains of three consecutive laser stimuli (S1-S2-S3) delivered to the hand dorsum at a constant 1-s interstimulus interval. Three, equally spaced intensities were used: low (L), medium (M), and high (H). While the intensities of S1 and S2 were always identical (L, M, or H), the intensity of S3 was either identical (e.g., HHH) or different (e.g., MMH) from the intensity of S1 and S2. Introducing a selective change in stimulus intensity elicited significantly larger N1 and N2 waves of the S3-ERP but only when the change consisted in an increase in stimulus intensity. This observation indicates that nociceptive ERPs do not simply reflect processes involved in the detection of novelty but, instead, are mainly determined by stimulus saliency.
|
| [35] |
|
| [36] |
The oddball protocol has been used to study the neural and perceptual consequences of implicit predictions in the human brain. The protocol involves presenting a sequence of identical repeated events that are eventually broken by a novel “oddball” presentation. Oddball presentations have been linked to increased neural responding and to an exaggeration of perceived duration relative to repeated events. Because the number of repeated events in such protocols is circumscribed, as more repeats are encountered, the conditional probability of a further repeat decreases—whereas the conditional probability of an oddball increases. These facts have not been appreciated in many analyses of oddballs; repeats and oddballs have rather been treated as binary event categories. Here, we show that the human brain is sensitive to conditional event probabilities in an active, visual oddball paradigm. P300 responses (a relatively late component of visually evoked potentials measured with EEG) tended to be greater for less likely oddballs and repeats. By contrast, P1 responses (an earlier component) increased for repeats as a goal-relevant target presentation neared, but this effect occurred even when repeat probabilities were held constant, and oddball P1 responses were invariant. We also found that later, more likely oddballs seemed to last longer, and this effect was largely independent of the number of preceding repeats. These findings speak against a repetition suppression account of the temporal oddball effect. Overall, our data highlight an impact of event probability on later, rather than earlier, electroencephalographic measures previously related to predictive processes—and the importance of considering conditional probabilities in sequential presentation paradigms.
[37]
Subordinate-level object processing is regarded as a hallmark of perceptual expertise. However, the relative contribution of subordinate- and basic-level category experience in the acquisition of perceptual expertise has not been clearly delineated. In this study, participants learned to classify wading birds and owls at either the basic (e.g., wading bird, owl) or the subordinate (e.g., egret, snowy owl) level. After 6 days of training, behavioral results showed that subordinate-level but not basic-level training improved subordinate discrimination of trained exemplars, novel exemplars, and exemplars from novel species. Event-related potentials indicated that both basic- and subordinate-level training enhanced the early N170 component, but only subordinate-level training amplified the later N250 component. These results are consistent with models positing separate basic and subordinate learning mechanisms, and, contrary to perspectives attempting to explain visual expertise solely in terms of subordinate-level processing, suggest that expertise enhances neural responses of both basic and subordinate processing.
[38]
Predictive coding (PC) posits that the brain uses a generative model to infer the environmental causes of its sensory data and uses precision-weighted prediction errors (pwPEs) to continuously update this model. While supported by much circumstantial evidence, experimental tests grounded in formal trial-by-trial predictions are rare. One partial exception is event-related potential (ERP) studies of the auditory mismatch negativity (MMN), where computational models have found signatures of pwPEs and related model-updating processes. Here, we tested this hypothesis in the visual domain, examining possible links between visual mismatch responses and pwPEs. We used a novel visual “roving standard” paradigm to elicit mismatch responses in humans (of both sexes) by unexpected changes in either color or emotional expression of faces. Using a hierarchical Bayesian model, we simulated pwPE trajectories of a Bayes-optimal observer and used these to conduct a comprehensive trial-by-trial analysis across the time × sensor space. We found significant modulation of brain activity by both color and emotion pwPEs. The scalp distribution and timing of these single-trial pwPE responses were in agreement with visual mismatch responses obtained by traditional averaging and subtraction (deviant-minus-standard) approaches. Finally, we compared the Bayesian model to a more classical change model of MMN. Model comparison revealed that trial-wise pwPEs explained the observed mismatch responses better than categorical change detection. Our results suggest that visual mismatch responses reflect trial-wise pwPEs, as postulated by PC. These findings go beyond classical ERP analyses of visual mismatch and illustrate the utility of computational analyses for studying automatic perceptual processes.
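The precision-weighted prediction error (pwPE) at the core of this account can be sketched in a deliberately simplified form: a single-level Gaussian belief update, not the hierarchical Bayesian model the authors fitted. All names and values below are illustrative assumptions.

```python
def pwpe_update(mu, pi_prior, x, pi_sensory):
    """One belief update: the prediction error is weighted by the
    relative precision (inverse variance) of the sensory input.

    mu         -- prior prediction (mean)
    pi_prior   -- precision of the prior belief
    x          -- observed sensory input
    pi_sensory -- precision of the sensory input
    """
    delta = x - mu                                  # prediction error
    weight = pi_sensory / (pi_prior + pi_sensory)   # precision weighting
    mu_new = mu + weight * delta                    # weighted belief shift
    pi_new = pi_prior + pi_sensory                  # posterior precision
    return mu_new, pi_new

# A precise (reliable) input shifts the belief strongly...
print(pwpe_update(0.0, 1.0, 1.0, 4.0))
# ...while a precise prior makes the same input shift it weakly.
print(pwpe_update(0.0, 4.0, 1.0, 1.0))
```

The trial-by-trial trajectories regressed against EEG in the study arise from chaining updates of this general form across a stimulus sequence, with precisions themselves estimated hierarchically.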
[39]
Foreknowledge of target stimulus features improves visual search performance as a result of 'feature-based attention' (FBA). Recent studies have reported that 'feature-based expectation' (FBE) also heightens decision sensitivity. Superficially, it appears that the latter work has simply rediscovered (and relabeled) the effects of FBA. However, this is not the case. Here we explain why.
[40]

[41]
Repetition of a stimulus, as well as valid expectation that a stimulus will occur, both attenuate the neural response to it. These effects, repetition suppression and expectation suppression, are typically confounded in paradigms in which the nonrepeated stimulus is also relatively rare (e.g., in oddball blocks of mismatch negativity paradigms, or in repetition suppression paradigms with multiple repetitions before an alternation). However, recent hierarchical models of sensory processing inspire the hypothesis that the two might be separable in time, with repetition suppression occurring earlier, as a consequence of local transition probabilities, and suppression by expectation occurring later, as a consequence of learnt statistical regularities. Here we test this hypothesis in an auditory experiment by orthogonally manipulating stimulus repetition and stimulus expectation and, using magnetoencephalography, measuring the neural response over time in human subjects. We found that stimulus repetition (but not stimulus expectation) attenuates the early auditory response (40–60 ms), while stimulus expectation (but not stimulus repetition) attenuates the subsequent, intermediate stage of auditory processing (100–200 ms). These findings are well in line with hierarchical predictive coding models, which posit sequential stages of prediction error resolution, contingent on the level at which the hypothesis is generated.
[42]

[43]
Many previous studies have demonstrated that the visual N1 component is larger for attended-location stimuli than for unattended-location stimuli. This difference is observed typically only for tasks involving a discrimination of the attended-location stimuli, suggesting that the N1 wave reflects a discrimination process that is applied to the attended location. The present study tested this hypothesis by examining the N1 component elicited by attended stimuli under conditions that either required or did not require the subject to perform a discrimination. Specifically, the N1 elicited by foveal stimuli during choice-reaction time (RT) tasks was compared with the N1 elicited by identical stimuli during simple-RT tasks. In three experiments, a larger posterior N1 was observed in choice-RT tasks than in simple-RT tasks, even when several potential confounds were eliminated (e.g., arousal and motor preparation). This N1 discrimination effect was observed even when no motor response was required and was present for both color- and form-based discriminations. Moreover, this discrimination effect was equally large for easy and difficult discriminations, arguing against a simple resource-based explanation of the present results. Instead, the results of this study are consistent with the hypothesis that the visual N1 component reflects the operation of a discrimination process within the focus of attention.
[44]

[45]
Perceptual similarity is a cognitive judgment that represents the end-stage of a complex cascade of hierarchical processing throughout visual cortex. Previous studies have shown a correspondence between the similarity of coarse-scale fMRI activation patterns and the perceived similarity of visual stimuli, suggesting that visual objects that appear similar also share similar underlying patterns of neural activation. Here we explore the temporal relationship between the human brain's time-varying representation of visual patterns and behavioral judgments of perceptual similarity. The visual stimuli were abstract patterns constructed from identical perceptual units (oriented Gabor patches) so that each pattern had a unique global form or perceptual 'Gestalt'. The visual stimuli were decodable from evoked neural activation patterns measured with magnetoencephalography (MEG), however, stimuli differed in the similarity of their neural representation as estimated by differences in decodability. Early after stimulus onset (from 50ms), a model based on retinotopic organization predicted the representational similarity of the visual stimuli. Following the peak correlation between the retinotopic model and neural data at 80ms, the neural representations quickly evolved so that retinotopy no longer provided a sufficient account of the brain's time-varying representation of the stimuli. Overall the strongest predictor of the brain's representation was a model based on human judgments of perceptual similarity, which reached the limits of the maximum correlation with the neural data defined by the 'noise ceiling'. Our results show that large-scale brain activation patterns contain a neural signature for the perceptual Gestalt of composite visual features, and demonstrate a strong correspondence between perception and complex patterns of brain activity.
[46]

[47]
An important question in neural correlate of consciousness (NCC) studies is whether event-related potential (ERP) component P3 reflects visual awareness or the confidence with which one reports a visual experience. In the present study, participants detected visual stimuli presented at threshold-level contrast, then rated their subjective confidence with respect to their response on a four-point scale (very confident, quite confident, slightly confident, and not confident at all). Because awareness responses in trials with rating of "not confident at all" were likely noise, we analyzed the data excluding those trials. The ERP results revealed a significant positive difference in P3 amplitude between "aware" and "unaware" trials. P3 amplitude was more positive in aware trials compared to unaware trials. Importantly, this pattern was observed for trials with combined confidence ratings of "very confident" and "quite confident," and for trials with confidence ratings of "slightly confident," suggesting that awareness alone can modulate P3. A significant interaction between awareness and confidence is reported, suggesting that confidence influences P3 as well. In addition, ERP results revealed that visual awareness negativity (VAN) was observed over posterior temporal and occipital electrodes and largely not influenced by confidence. This result indicated that VAN is an early neural correlate of visual awareness.
[48]

[49]
Perceptual expectations can change how a visual stimulus is perceived. Recent studies have shown mixed results in terms of whether expectations modulate sensory representations. Here, we used a statistical learning paradigm to study the temporal characteristics of perceptual expectations. We presented participants with pairs of object images organized in a predictive manner and then recorded their brain activity with magnetoencephalography while they viewed expected and unexpected image pairs on the subsequent day. We observed stronger alpha-band (7-14 Hz) activity in response to unexpected compared with expected object images. Specifically, the alpha-band modulation occurred as early as the onset of the stimuli and was most pronounced in left occipito-temporal cortex. Given that the differential response to expected versus unexpected stimuli occurred in sensory regions early in time, our results suggest that expectations modulate perceptual decision-making by changing the sensory response elicited by the stimuli.
|