Laurence Aitchison and Máté Lengyel, With or without you: predictive coding and Bayesian inference in the brain, Current Opinion In Neurobiology, 46 (2017) 219-227.
Two theoretical ideas have emerged recently with the ambition to provide a unifying functional explanation of neural population coding and dynamics: predictive coding and Bayesian inference. Here, we describe the two theories and their combination into a single framework: Bayesian predictive coding. We clarify how the two theories can be distinguished, despite sharing core computational concepts and addressing an overlapping set of empirical phenomena. We argue that predictive coding is an algorithmic/representational motif that can serve several different computational goals of which Bayesian inference is but one. Conversely, while Bayesian inference can utilize predictive coding, it can also be realized by a variety of other representations. We critically evaluate the experimental evidence supporting Bayesian predictive coding and discuss how to test it more directly.
(web, pdf)
Shaun Raviv, The Genius Neuroscientist Who Might Hold the Key to True AI, Wired, 2019 pp. 1-16.
(pdf)
Anon., Supplementary Information, 2013 pp. 1-53.
(pdf)
Ned Block, If perception is probabilistic, why doesn't it seem probabilistic?, Philosophical Transactions B, 2018 pp. 1-25.
The success of the Bayesian perspective in explaining perceptual phenomena has motivated the view that perceptual representations are probabilistic. But if perceptual representation is probabilistic, why doesn't normal conscious perception reflect the full probability functions that the probabilistic point of view endorses? For example, neurons in cortical area MT/V5 that respond to the direction of motion are broadly tuned: a patch of cortex that is tuned to vertical motion also responds to horizontal motion, but when we see vertical motion, foveally, in good conditions, it does not look at all horizontal. The standard solution in terms of sampling runs into the problem that sampling is an account of perceptual decision rather than perception. This paper argues that the best Bayesian approach to this problem does not require probabilistic representation.
(pdf)
Paul Benjamin Badcock et al., The hierarchically mechanistic mind: A free-energy formulation of the human psyche, Physics Of Life Reviews, 31 (2019) 104-121.
This article presents a unifying theory of the embodied, situated human brain called the Hierarchically Mechanistic Mind (HMM). The HMM describes the brain as a complex adaptive system that actively minimises the decay of our sensory and physical states by producing self-fulfilling action-perception cycles via dynamical interactions between hierarchically organised neurocognitive mechanisms. This theory synthesises the free-energy principle (FEP) in neuroscience with an evolutionary systems theory of psychology that explains our brains, minds, and behaviour by appealing to Tinbergen’s four questions: adaptation, phylogeny, ontogeny, and mechanism. After leveraging the FEP to formally define the HMM across different spatiotemporal scales, we conclude by exploring its implications for theorising and research in the sciences of the mind and behaviour.
(web, pdf)
Moshe Bar, The proactive brain: using analogies and associations to generate predictions, Trends In Cognitive Sciences, 11 (2007) 280-289.
Rather than passively ‘waiting’ to be activated by sensations, it is proposed that the human brain is continuously busy generating predictions that approximate the relevant future. Building on previous work, this proposal posits that rudimentary information is extracted rapidly from the input to derive analogies linking that input with representations in memory. The linked stored representations then activate the associations that are relevant in the specific context, which provides focused predictions. These predictions facilitate perception and cognition by pre-sensitizing relevant representations. Predictions regarding complex information, such as those required in social interactions, integrate multiple analogies. This cognitive neuroscience framework can help explain a variety of phenomena, ranging from recognition to first impressions, and from the brain’s ‘default mode’ to a host of mental disorders.
(web, pdf)
Marika Berchicci et al., Prompting future events: Effects of temporal cueing and time on task on brain preparation to action, Brain And Cognition, 2020 vol. 141 p. 105565.
Prediction about event timing plays a leading role in organizing and optimizing behavior. We recorded anticipatory brain activities and evaluated whether temporal orienting processes are reflected by the novel prefrontal negative (pN) component, as already shown for the contingent negative variation (CNV). Fourteen young healthy participants underwent EEG and fMRI recordings in separate sessions; they were asked to perform a Go/No-Go task in which temporal orienting was manipulated: the external condition (a visual display indicating the time of stimulus onset) and the internal condition (time information not provided). In both conditions, the source of the pN was localized in the pars opercularis of the inferior frontal gyrus (IFG); the source of the CNV was localized in the supplementary motor area and cingulate motor area, as expected. Anticipatory activity was also found in the occipital-parietal cortex. Time on task EEG analysis showed a marked learning effect in the internal condition, while the effect was minor in the external condition. In fMRI, the two conditions had a similar pattern; similarities and differences of results obtained with the two techniques are discussed. Overall, data are consistent with the view that the pN reflects a proactive cognitive control, including temporal orienting.
(web, pdf)
Rafal Bogacz, A tutorial on the free-energy framework for modelling perception and learning, Journal Of Mathematical Psychology, 76 (2017) 198-211.
This tutorial covers the free-energy framework for modelling perception and learning developed by Friston, which extends the predictive coding model of Rao and Ballard. These models assume that the sensory cortex infers the most likely values of attributes or features of sensory stimuli from the noisy inputs encoding the stimuli. Remarkably, these models describe how this inference could be implemented in a network of very simple computational elements, suggesting that this inference could be performed by biological networks of neurons. Furthermore, learning about the parameters describing the features and their uncertainty is implemented in these models by simple rules of synaptic plasticity based on Hebbian learning. This tutorial introduces the free-energy framework using very simple examples, and provides step-by-step derivations of the model. It also discusses in more detail how the model could be implemented in biological neural circuits. In particular, it presents an extended version of the model in which the neurons only sum their inputs, and synaptic plasticity only depends on activity of pre-synaptic and post-synaptic neurons.
(web, pdf)
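As a concrete illustration of the kind of scheme the tutorial derives, here is a minimal Python sketch of single-feature inference by prediction-error minimisation. The variable names, the quadratic generative mapping and the toy numbers are illustrative assumptions in the spirit of the tutorial's worked example, not code from the paper.

# Toy single-feature predictive coding (illustrative sketch, not the paper's code).
u       = 2.0              # sensory observation
v_prior = 3.0              # prior mean of the hidden feature v
sigma_p = 1.0              # prior variance
sigma_u = 1.0              # observation-noise variance
g  = lambda v: v ** 2      # assumed generative mapping from feature to sensation
dg = lambda v: 2.0 * v     # its derivative

phi = v_prior              # current estimate of the hidden feature
dt  = 0.01
for _ in range(5000):
    eps_p = (phi - v_prior) / sigma_p        # prediction error on the prior
    eps_u = (u - g(phi)) / sigma_u           # prediction error on the sensation
    phi  += dt * (eps_u * dg(phi) - eps_p)   # gradient ascent on negative free energy

print(f"inferred feature value: {phi:.3f}")  # settles near the posterior mode

The two error terms can be read as the activities of prediction-error units, and the update as the dynamics of a value-encoding unit that only sums its weighted inputs.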
Romain Brette, Is coding a relevant metaphor for the brain?, Behavioral And Brain Sciences, 2019 pp. 1-44.
I argue that the popular neural coding metaphor is often misleading. First, the “neural code” often spans both the experimental apparatus and the brain. Second, a neural code is information only by reference to something with a known meaning, which is not the kind of information relevant for a perceptual system. Third, the causal structure of neural codes (linear, atemporal) is incongruent with the causal structure of the brain (circular, dynamic). I conclude that a causal description of the brain cannot be based on neural codes, because spikes are more like actions than hieroglyphs.
(web, pdf)
Christopher L Buckley et al., The free energy principle for action and perception: A mathematical review, Journal Of Mathematical Psychology, 81 (2017) 55-79.
The ‘free energy principle’ (FEP) has been suggested to provide a unified theory of the brain, integrating data and theory relating to action, perception, and learning. The theory and implementation of the FEP combines insights from Helmholtzian ‘perception as inference’, machine learning theory, and statistical thermodynamics. Here, we provide a detailed mathematical evaluation of a suggested biologically plausible implementation of the FEP that has been widely used to develop the theory. Our objectives are (i) to describe within a single article the mathematical structure of this implementation of the FEP; (ii) to provide a simple but complete agent-based model utilising the FEP; and (iii) to disclose the assumption structure of this implementation of the FEP to help elucidate its significance for the brain sciences.
(web, pdf)
M L Cappuccio and MD Kirchhoff, Unfulfilled Prophecies in Sport Performance: Active Inference and the Choking Effect, Journal Of Consciousness Studies, 2020.
Choking effect (choke) is the tendency of expert athletes to underperform in high-stakes situations. We propose an account of choke based on active inference--a corollary of the free energy principle in cognitive neuroscience. The active inference scheme can explain certain forms of sensorimotor skills disruption in terms of precision-modulated imbalance between sensory input and higher-level predictions. This model predicts that choke arises when the system fails to attenuate the error signal generated by proprioceptive sensory …
(web, pdf)
I Cheong et al., Predictive Codes for Forthcoming Perception in the Frontal Cortex, Science, 314 (2006) 1308-1311.
Incoming sensory information is often ambiguous, and the brain has to make decisions during perception. “Predictive coding” proposes that the brain resolves perceptual ambiguity by anticipating the forthcoming sensory environment, generating a template against which to match observed sensory evidence. We observed a neural representation of predicted perception in the medial frontal cortex, while human subjects decided whether visual objects were faces or not. Moreover, perceptual decisions about faces were associated with an increase in top-down connectivity from the frontal cortex to face-sensitive visual areas, consistent with the matching of predicted and observed evidence for the presence of faces.
(web, pdf)
Andy Clark, Whatever next? Predictive brains, situated agents, and the future of cognitive science, Behavioral And Brain Sciences, 36 (2013) 181-204.
Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to adaptive success. This target article critically examines this “hierarchical prediction machine” approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action. Sections 1 and 2 lay out the key elements and implications of the approach. Section 3 explores a variety of pitfalls and challenges, spanning the evidential, the methodological, and the more properly conceptual. The paper ends (sections 4 and 5) by asking how such approaches might impact our more general vision of mind, experience, and agency.
(web, pdf)
A Clark, Beyond the 'Bayesian blur': predictive processing and the nature of subjective experience, Journal Of Consciousness Studies, 2018.
Recent work in cognitive and computational neuroscience depicts the brain as in some (perhaps merely approximate) sense implementing probabilistic inference. This suggests a puzzle. If the processing that enables perceptual experience involves representing or approximating probability distributions, why does experience itself appear univocal and determinate, apparently bearing no traces of those probabilistic roots? In this paper, I canvass a range of responses, including the denial of univocality and determinacy itself. I argue that there is reason to think that it is our conception of perception itself that is flawed. Once we see perception aright, as the slave of action, the puzzlement recedes. Perceptual determinacy reflects only the mundane fact that we are embodied, active, agents who must constantly engage the world they perceptually encounter.
(web, pdf)
M Colombo, Bayes in the Brain--On Bayesian Modelling in Neuroscience, The British Journal For The Philosophy Of Science, 63 (2012) 697-723.
According to a growing trend in theoretical neuroscience, the human perceptual system is akin to a Bayesian machine. The aim of this article is to clearly articulate the claims that perception can be considered Bayesian inference and that the brain can be considered a Bayesian machine, some of the epistemological challenges to these claims; and some of the implications of these claims. We address two questions: (i) How are Bayesian models used in theoretical neuroscience? (ii) From the use of Bayesian models in theoretical neuroscience, have we learned or can we hope to learn that perception is Bayesian inference or that the brain is a Bayesian machine? From actual practice in theoretical neuroscience, we argue for three claims. First, currently Bayesian models do not provide mechanistic explanations; instead they are useful devices for predicting and systematizing observational statements about people’s performances in a variety of perceptual tasks. That is, currently we should have an instrumentalist attitude towards Bayesian models in neuroscience. Second, the inference typically drawn from Bayesian behavioural performance in a variety of perceptual tasks to underlying Bayesian mechanisms should be understood within the three-level framework laid out by David Marr ([1982]). Third, we can hope to learn that perception is Bayesian inference or that the brain is a Bayesian machine to the extent that Bayesian models will prove successful in yielding secure and informative predictions of both subjects’ perceptual performance and features of the underlying neural mechanisms.
(web, pdf)
M Colombo and C Wright, First principles in the life sciences: the free-energy principle, organicism, and mechanism, Synthese, 2018.
The free-energy principle states that all systems that resist a tendency to physical disintegration must minimize their free energy. Originally proposed to account for perception, learning, and action, the free-energy principle has been applied to the evolution, development, morphology, and function of the brain, and has been called a postulate, an unfalsifiable principle, a natural law, and an imperative. While it might afford a theoretical foundation for understanding the relationship between environment, life, and mind, its epistemic status and scope are unclear. Also unclear is how the free-energy principle relates to prominent theoretical approaches to life science phenomena, such as organicism and mechanicism. This paper clarifies both issues, and identifies limits and prospects for the free-energy principle as a first principle in the life sciences.
(web, pdf)
Dirk De Ridder et al., The Bayesian brain: Phantom percepts resolve sensory uncertainty, Neuroscience And Biobehavioral Reviews, 44 (2014) 4-15.
Phantom perceptions arise almost universally in people who sustain sensory deafferentation, and in multiple sensory domains. The question arises ‘why’ the brain creates these false percepts in the absence of an external stimulus? The model proposed answers this question by stating that our brain works in a Bayesian way, and that its main function is to reduce environmental uncertainty, based on the free-energy principle, which has been proposed as a universal principle governing adaptive brain function and structure. The Bayesian brain can be conceptualized as a probability machine that constantly makes predictions about the world and then updates them based on what it receives from the senses. The free-energy principle states that the brain must minimize its Shannonian free-energy, i.e. must reduce by the process of perception its uncertainty (its prediction errors) about its environment. As completely predictable stimuli do not reduce uncertainty, they are not worthwhile of conscious processing. Unpredictable things on the other hand are not to be ignored, because it is crucial to experience them to update our understanding of the environment. Deafferentation leads to topographically restricted prediction errors based on temporal or spatial incongruity. This leads to an increase in topographically restricted uncertainty, which should be adaptively addressed by plastic repair mechanisms in the respective sensory cortex or via (para)hippocampal involvement. Neuroanatomically, filling in as a compensation for missing information also activates the anterior cingulate and insula, areas also involved in salience, stress and essential for stimulus detection. Associated with sensory cortex hyperactivity and decreased inhibition or map plasticity this will result in the perception of the false information created by the deafferented sensory areas, as a way to reduce increased topographically restricted uncertainty associated with the deafferentation. In conclusion, the Bayesian updating of knowledge via active sensory exploration of the environment, driven by the Shannonian free-energy principle, provides an explanation for the generation of phantom percepts, as a way to reduce uncertainty, to make sense of the world.
(web, pdf)
Joe Dewhurst, Folk Psychology and the Bayesian Brain, Philosophy And Predictive Processing, 2017 pp. 1-13.
Whilst much has been said about the implications of predictive processing for our scientific understanding of cognition, there has been comparatively little discussion of how this new paradigm fits with our everyday understanding of the mind, i.e. folk psychology. This paper aims to assess the relationship between folk psychology and predictive processing, which will first require making a distinction between two ways of understanding folk psychology: as propositional attitude psychology and as a broader folk psychological discourse. It will be argued that folk psychology in this broader sense is compatible with predictive processing, despite the fact that there is an apparent incompatibility between predictive processing and a literalist interpretation of propositional attitude psychology. The distinction between these two kinds of folk psychology allows us to accept that our scientific usage of folk concepts requires revision, whilst rejecting the suggestion that we should eliminate folk psychology entirely.
In section 1 I will introduce predictive processing, giving a quick summary of the framework that focuses on the details most relevant for my comparison with folk psychology. I will also introduce folk psychology and define the distinction between propositional attitude psychology and folk psychological discourse. In section 2 I will consider the relationship between predictive processing and propositional attitude psychology, and in section 3 I will consider the relationship between predictive processing and folk psychological discourse. Finally, in section 4 I will argue that the distinction between propositional attitude psychology and folk psychological discourse makes space for us to revise our scientific usage of folk psychological concepts without thereby eliminating folk psychology altogether. In T. Metzinger & W. Wiese (Eds.). Philosophy and Predictive Processing: 9. Frankfurt am Main: MIND Group.
(pdf)
Dom, The Bayesian Brain: An Introduction to Predictive Processing, Mindcoolness.Com, 2019 pp. 1-13.
The greatest theory of all time?
The more I learn about the Bayesian brain, the more it seems to me that the theory of predictive processing is about as important for neuroscience as the theory of evolution is for biology, and that Bayes’ law is about as important for cognitive science as the Schrödinger equation is for physics.
(pdf)
Karl Friston, Hierarchical Models in the Brain, Plos Computational Biology, 4 (2008) e1000211-24.
This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely, dynamic expectation maximization. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.
(web, pdf)
K J Friston et al., DEM: A variational treatment of dynamic systems, Neuroimage, 41 (2008) 849-885.
This paper presents a variational treatment of dynamic models that furnishes time-dependent conditional densities on the path or trajectory of a system's states and the time-independent densities of its parameters. These are obtained by maximising a variational action with respect to conditional densities, under a fixed-form assumption about their form. The action or path-integral of free-energy represents a lower bound on the model's log-evidence or marginal likelihood required for model selection and averaging. This approach rests on formulating the optimisation dynamically, in generalised coordinates of motion. The resulting scheme can be used for online Bayesian inversion of nonlinear dynamic causal models and is shown to outperform existing approaches, such as Kalman and particle filtering. Furthermore, it provides for dual and triple inferences on a system's states, parameters and hyperparameters using exactly the same principles. We refer to this approach as dynamic expectation maximisation (DEM).
(web, pdf)
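For orientation, the generic variational bound underlying this treatment can be written (in one standard sign convention; the paper itself works with the time integral of this quantity, a variational action, in generalised coordinates of motion):

F[q] \;=\; \mathbb{E}_{q(\vartheta)}\!\left[\ln p(y,\vartheta) - \ln q(\vartheta)\right] \;=\; \ln p(y) \;-\; D_{\mathrm{KL}}\!\left[q(\vartheta)\,\|\,p(\vartheta\mid y)\right] \;\le\; \ln p(y)

Maximising F with respect to the conditional density q therefore both tightens the bound on the log-evidence \ln p(y) and drives q towards the posterior p(\vartheta\mid y).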
Karl Friston, Is the free-energy principle neurocentric?, Nature Reviews Neuroscience, 2010 pp. 1-2.
(web, pdf)
Karl Friston, The free-energy principle: a unified brain theory?, Nature Reviews Neuroscience, 2010 pp. 1-13.
A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories — optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework.
(web, pdf)
Karl Friston, The history of the future of the Bayesian brain, Neuroimage, 62 (2012) 1230-1233.
The slight perversion of the original title of this piece (The Future of the Bayesian Brain) reflects my attempt to write prospectively about ‘Science and Stories’ over the past 20 years. I will meet this challenge by dealing with the future and then turning to its history. The future of the Bayesian brain (in neuroimaging) is clear: it is the application of dynamic causal modeling to understand how the brain conforms to the free energy principle. In this context, the Bayesian brain is a corollary of the free energy principle, which says that any self organizing system (like a brain or neuroimaging community) must maximize the evidence for its own existence, which means it must minimize its free energy using a model of its world. Dynamic causal modeling involves finding models of the brain that have the greatest evidence or the lowest free energy. In short, the future of imaging neuroscience is to refine models of the brain to minimize free energy, where the brain refines models of the world to minimize free energy. This endeavor itself minimizes free energy because our community is itself a self organizing system. I cannot imagine an alternative future that has the same beautiful self consistency as mine. Having dispensed with the future, we can now focus on the past, which is much more interesting:
(web, pdf)
Karl Friston, Life as we know it, Journal Of The Royal Society Interface, 10 (2013) 20130475-12.
This paper presents a heuristic proof (and simulations of a primordial soup) suggesting that life—or biological self-organization—is an inevitable and emergent property of any (ergodic) random dynamical system that possesses a Markov blanket. This conclusion is based on the following arguments: if the coupling among an ensemble of dynamical systems is mediated by short-range forces, then the states of remote systems must be conditionally independent. These independencies induce a Markov blanket that separates internal and external states in a statistical sense. The existence of a Markov blanket means that internal states will appear to minimize a free energy functional of the states of their Markov blanket. Crucially, this is the same quantity that is optimized in Bayesian inference. Therefore, the internal states (and their blanket) will appear to engage in active Bayesian inference. In other words, they will appear to model—and act on—their world to preserve their functional and structural integrity, leading to homoeostasis and a simple form of autopoiesis.
(web, pdf)
Karl Friston et al., Active inference and epistemic value, Cognitive Neuroscience, 6 (2015) 187-214.
We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
(web, pdf)
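In the notation commonly used in this literature (a schematic rendering, suppressing time indices), the decomposition described above can be written as:

G(\pi) \;=\; -\,\mathbb{E}_{Q(o,s\mid\pi)}\!\left[ D_{\mathrm{KL}}\!\left[Q(s\mid o,\pi)\,\|\,Q(s\mid\pi)\right] \right] \;-\; \mathbb{E}_{Q(o\mid\pi)}\!\left[\ln P(o)\right]

The first term is (negative) epistemic value, the expected information gain about hidden states; the second is (negative) extrinsic value, the expected log-probability of outcomes under prior preferences. Policies are then selected via a softmax over -G(\pi), whose precision plays the role of the expected confidence in policies discussed in the abstract.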
Karl Friston et al., Active inference and learning, Neuroscience And Biobehavioral Reviews, 68 (2016) 862-879.
(web, pdf)
Karl Friston, Publisher Correction: Does predictive coding have a future?, Nature Neuroscience, 22 (2018) 144-144.
In the 20th century we thought the brain extracted knowledge from sensations. The 21st century witnessed a ‘strange inversion’, in which the brain became an organ of inference, actively constructing explanations for what’s going on ‘out there’, beyond its sensory epithelia. One paper played a key role in this paradigm shift.
(web, pdf)
Karl J Friston et al., Deep temporal models and active inference, Neuroscience And Biobehavioral Reviews, 90 (2018) 486-501.
How do we navigate a deeply structured world? Why are you reading this sentence first – and did you actually look at the fifth word? This review offers some answers by appealing to active inference based on deep temporal models. It builds on previous formulations of active inference to simulate behavioural and electrophysiological responses under hierarchical generative models of state transitions. Inverting these models corresponds to sequential inference, such that the state at any hierarchical level entails a sequence of transitions in the level below. The deep temporal aspect of these models means that evidence is accumulated over nested time scales, enabling inferences about narratives (i.e., temporal scenes). We illustrate this behaviour with Bayesian belief updating – and neuronal process theories – to simulate the epistemic foraging seen in reading. These simulations reproduce perisaccadic delay period activity and local field potentials seen empirically. Finally, we exploit the deep structure of these models to simulate responses to local (e.g., font type) and global (e.g., semantic) violations; reproducing mismatch negativity and P300 responses respectively.
From the introduction: In recent years, we have applied the free energy principle to generative models of worlds that can be described in terms of discrete states in an attempt to understand the embodied Bayesian brain. The resulting active inference scheme (for Markov decision processes) has been applied in a variety of domains (see Table 1). This paper takes active inference to the next level and considers hierarchical models with deep temporal structure (George and Hawkins, 2009; Kiebel et al., 2009; LeCun et al., 2015). This structure follows from generative models that entertain state transitions or sequences over time. The resulting model enables inference about narratives with deep temporal structure (c.f., sequential scene construction) of the sort seen in reading. In short, equipping an agent or simulated subject with deep temporal models allows them to accumulate evidence over different temporal scales to find the best explanation for their sensations. This paper has two agendas: to introduce hierarchical (deep) generative models for active inference under Markov decision processes (or hidden Markov models) and to show how their belief updating can be understood in terms of neuronal processes. The problem we focus on is how subjects deploy active vision to disambiguate the causes of their sensations. In other words, we ask how people choose where to look …
(web, pdf)
Karl J Friston, Waves of prediction, Plos Biology, 17 (2019) e3000426-7.
Predictive processing (e.g., predictive coding) is a predominant paradigm in cognitive neuroscience. This Primer considers the various levels of commitment neuroscientists have to the neuronal process theories that accompany the principles of predictive processing. Specifically, it reviews and contextualises a recent PLOS Biology study of alpha oscillations and travelling waves. We will see that alpha oscillations emerge naturally under the computational architectures implied by predictive coding – and may tell us something profound about recurrent message passing in brain hierarchies. Specifically, the bidirectional nature of forward and backward waves speaks to opportunities to understand attention and how it nuances bottom-up and top-down influences.
(web, pdf)
Karl Friston, A free energy principle for a particular physics, Arxiv.Org, 2019 1906.10184v1, q-bio.NC.
This monograph attempts a theory of every 'thing' that can be distinguished from other things in a statistical sense. The ensuing statistical independencies, mediated by Markov blankets, speak to a recursive composition of ensembles (of things) at increasingly higher spatiotemporal scales. This decomposition provides a description of small things; e.g., quantum mechanics - via the Schrodinger equation, ensembles of small things - via statistical mechanics and related fluctuation theorems, through to big things - via classical mechanics. These descriptions are complemented with a Bayesian mechanics for autonomous or active things. Although this work provides a formulation of every thing, its main contribution is to examine the implications of Markov blankets for self-organisation to nonequilibrium steady-state. In brief, we recover an information geometry and accompanying free energy principle that allows one to interpret the internal states of something as representing or making inferences about its external states. The ensuing Bayesian mechanics is compatible with quantum, statistical and classical mechanics and may offer a formal description of lifelike particles.
(web, pdf)
S Gallagher and B Aguda, Anchoring Know-How: Action, Affordance, and Anticipation, Journal Of Consciousness Studies, 2020.
Action is always situated, always tied to specific contexts, and this is the case with respect to both the non-conscious--and largely subpersonal--processes or mechanisms that make action possible, and the person-level--and sometimes conscious--aspects of action that make action more than mere behaviour. According to one theory about the kind of know-how that we require to do what we do, the 'automatic mechanisms' that support action are 'perfectly general' (Stanley, 2011, p. 84), in contrast to the detailed propositional …
(web, pdf)
Steven Gross, Probabilistic Representations in Perception: Are There Any, and What Would They Be?
Nick Shea’s Representation in Cognitive Science commits him to representations in perceptual processing that are about probabilities. This commentary concerns how to adjudicate between this view and an alternative that locates the probabilities rather in the representational states’ associated “attitudes”. As background and motivation, evidence for probabilistic representations in perceptual processing is adduced, and it is shown how, on either conception, one can address a specific challenge Ned Block has raised to this evidence.
(web, pdf)
Micha Heilbron and Maria Chait, Great Expectations: Is there Evidence for Predictive Coding in Auditory Cortex?, Neuroscience, 389 (2018) 54-73.
Predictive coding is possibly one of the most influential, comprehensive, and controversial theories of neural function. While proponents praise its explanatory potential, critics object that key tenets of the theory are untested or even untestable. The present article critically examines existing evidence for predictive coding in the auditory modality. Specifically, we identify five key assumptions of the theory and evaluate each in the light of animal, human and modeling studies of auditory pattern processing. For the first two assumptions – that neural responses are shaped by expectations and that these expectations are hierarchically organized – animal and human studies provide compelling evidence. The anticipatory, predictive nature of these expectations also enjoys empirical support, especially from studies on unexpected stimulus omission. However, for the existence of separate error and prediction neurons, a key assumption of the theory, evidence is lacking. More work exists on the proposed oscillatory signatures of predictive coding, and on the relation between attention and precision. However, results on these latter two assumptions are mixed or contradictory. Looking to the future, more collaboration between human and animal studies, aided by model-based analyses will be needed to test specific assumptions and implementations of predictive coding – and, as such, help determine whether this popular grand theory can fulfill its expectations.
(web, pdf)
Casper Hesp et al., A Multi-scale View of the Emergent Complexity of Life: A Free-Energy Proposal, Evolution, Development And Complexity, 132 (2019) 195-227.
We review some of the main implications of the free-energy principle (FEP) for the study of the self-organization of living systems – and how the FEP can help us to understand (and model) biotic...
(web, pdf)
I Hipólito et al., Is the free-energy principle a formal theory of semantics? From variational density dynamics to neural and phenotypic representations, Entropy, 1 (2020) 1-30.
The aim of this paper is twofold: (1) to assess whether the construct of neural representations plays an explanatory role under the variational free-energy principle and its corollary process theory, active inference; and (2) if so, to assess which philosophical stance—in relation to the ontological and epistemological status of representations—is most appropriate. We focus on non-realist (deflationary and fictionalist-instrumentalist) approaches. We consider a deflationary account of mental representation, according to which the explanatorily relevant contents of neural representations are mathematical, rather than cognitive, and a fictionalist or instrumentalist account, according to which representations are scientifically useful fictions that serve explanatory (and other) aims. After reviewing the free-energy principle and active inference, we argue that the model of adaptive phenotypes under the free-energy principle can be used to furnish a formal semantics, enabling us to assign semantic content to specific phenotypic states (the internal states of a Markovian system that exists far from equilibrium). We propose a modified fictionalist account – an organism-centered fictionalism or instrumentalism. We argue that, under the free-energy principle, pursuing even a deflationary account of the content of neural representations licenses the appeal to the kind of semantic content involved in the ‘aboutness’ or intentionality of cognitive systems; our position is thus coherent with, but rests on distinct assumptions from, the realist position. We argue that the free-energy principle thereby explains the aboutness or intentionality in living systems and hence their capacity to parse their sensory stream using an ontology or set of semantic factors.
(web, pdf)
Jakob Hohwy, How to Entrain Your Evil Demon, Philosophy And Predictive Processing, 2017 pp. 1-15.
The notion that the brain is a prediction error minimizer entails, via the notion of Markov blankets and self-evidencing, a form of global scepticism — an inability to rule out evil demon scenarios. This type of scepticism is viewed by some as a sign of a fatally flawed conception of mind and cognition. Here I discuss whether this scepticism is ameliorated by acknowledging the role of action in the most ambitious approach to prediction error minimization, namely under the free energy principle. I argue that the scepticism remains but that the role of action in the free energy principle constrains the demon’s work. This yields new insights about the free energy principle, epistemology, and the place of mind in nature. In T. Metzinger & W. Wiese (Eds.). Philosophy and Predictive Processing: 2. Frankfurt am Main: MIND Group.
(pdf)
Jakob Hohwy, The predictive processing hypothesis, The Oxford Handbook Of 4E Cognition.
Prediction may be a central concept for understanding perceptual and cognitive processing. Contemporary theoretical neuroscience formalizes the role of prediction in terms of probabilistic inference. Perception, action, attention and learning may then be unified as aspects of predictive processing in the brain. This chapter first explains the sense in which predictive processing is inferential and representational. Then follows an exploration of how the predictive processing framework relates to a series of considerations in favour of enactive, embedded, embodied and extended cognition (4e cognition). The initial impression may be that predictive processing is too representational and inferential to fit well to 4e cognition. But, in fact, predictive processing encompasses many phenomena prevalent in 4e approaches, while remaining both inferential and representational.
(web, pdf)
J Benjamin Hutchinson and Lisa Feldman Barrett, The Power of Predictions: An Emerging Paradigm for Psychological Research, Current Directions In Psychological Science, 28 (2019) 280-291.
In the last two decades, neuroscience studies have suggested that various psychological phenomena are produced by predictive processes in the brain. When considered together, these studies form a coherent, neurobiologically inspired program for guiding psychological research about the mind and behavior. In this article, we consider the common assumptions and hypotheses that unify an emerging framework and discuss the ramifications of such a framework, both for improving the replicability and robustness of psychological research and for renewing psychological theory by suggesting an alternative ontology of the human mind.
(web, pdf)
Ashima Keshava, Moving from the What to the How and Where - Bayesian Models and Predictive Processing, Philosophy And Predictive Processing, 2017 pp. 1-10.
The general question of our paper is concerned with the relationship between Bayesian models of cognition and predictive processing, and whether predictive processing can provide explanatory insight over and above Bayesian models. Bayesian models have been gaining influence in neuroscience and the cognitive sciences since they are able to predict human behavior with high accuracy. Models based on a Bayesian optimal observer are fitted on behavioral data. A good fit is hence interpreted as human subjects “behaving” in a Bayes’ optimal fashion. However, these models are performance-oriented and do not specify which processes could give rise to the observed behavior.
Here, David Marr’s (Marr 1982) levels of analysis can help understand the relationship between performance- and process-oriented models or explanations. Bayesian models are situated at the computational level since they specify what the system (in this case the brain) does and why it does it in this manner. Although Bayesian models can constrain the search space for hypotheses at the algorithmic level, they do not provide a precise solution about how a system realizes the observed behavior. Here predictive processing can shed more light on the underlying principles. Predictive processing provides a unifying functional theory of cognition and can thus i) provide an answer at the algorithmic level by answering how the brain realizes cognition, ii) can aid in the interpretation of neurophysiological findings at the implementational level. In T. Metzinger & W. Wiese (Eds.). Philosophy and Predictive Processing: 16. Frankfurt am Main: MIND Group.
(web, pdf)
Stefan J Kiebel, Perception and hierarchical dynamics, Frontiers In Neuroinformatics, 3 (2009) 1-9.
In this paper, we suggest that perception could be modeled by assuming that sensory input is generated by a hierarchy of attractors in a dynamic system. We describe a mathematical model which exploits the temporal structure of rapid sensory dynamics to track the slower trajectories of their underlying causes. This model establishes a proof of concept that slowly changing neuronal states can encode the trajectories of faster sensory signals. We link this hierarchical account to recent developments in the perception of human action; in particular artificial speech recognition. We argue that these hierarchical models of dynamical systems are a plausible starting point to develop robust recognition schemes, because they capture critical temporal dependencies induced by deep hierarchical structure. We conclude by suggesting that a fruitful computational neuroscience approach may emerge from modeling perception as non-autonomous recognition dynamics enslaved by autonomous hierarchical dynamics in the sensorium.
(web, pdf)
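A minimal Python sketch of the basic idea, assuming (purely for illustration) two Lorenz systems in which the slow, higher-level trajectory sets a control parameter of the fast, lower-level one; the specific coupling and constants are illustrative, not the paper's equations.

import numpy as np

# Two-level "hierarchy of attractors": slow dynamics enslave fast dynamics by
# modulating one of their parameters (illustrative assumption, not the paper's model).
def lorenz(x, rho, sigma=10.0, beta=8.0 / 3.0):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

dt, steps = 0.002, 20000
slow = np.array([1.0, 1.0, 25.0])   # higher-level (slowly varying) states
fast = np.array([1.0, 1.0, 25.0])   # lower-level (sensory-like) states
trace = []
for _ in range(steps):
    slow = slow + dt * 0.1 * lorenz(slow, rho=28.0)   # slow level evolves on a longer time scale
    rho_fast = 24.0 + 0.2 * slow[2]                   # slow state sets the fast level's parameter
    fast = fast + dt * lorenz(fast, rho=rho_fast)     # fast level is driven by the slow level
    trace.append(fast[0])

print("range of the fast first state:", round(min(trace), 2), round(max(trace), 2))

Recognition then amounts to inverting this kind of model: inferring the slow trajectory from the fast signal it generates.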
Michael D Kirchhoff, Predictive processing, perceiving and imagining: Is to perceive to imagine, or something close to it?, Philosophical Studies, 175 (2017) 751-767.
This paper examines the relationship between perceiving and imagining on the basis of predictive processing models in neuroscience. Contrary to the received view in philosophy of mind, which holds that perceiving and imagining are essentially distinct, these models depict perceiving and imagining as deeply unified and overlapping. It is argued that there are two mutually exclusive implications of taking perception and imagination to be fundamentally unified. The view defended is what I dub the ecological–enactive view given that it does not succumb to internalism about the mind-world relation, and allows one to keep a version of the received view in play.
(web, pdf)
Kestutis Kveraga et al., Top-down predictions in the cognitive brain, Brain And Cognition, 65 (2007) 145-168.
The human brain is not a passive organ simply waiting to be activated by external stimuli. Instead, we propose that the brain continuously employs memory of past experiences to interpret sensory information and predict the immediately relevant future. The basic elements of this proposal include analogical mapping, associative representations and the generation of predictions. This review concentrates on visual recognition as the model system for developing and testing ideas about the role and mechanisms of top-down predictions in the brain. We cover relevant behavioral, computational and neural aspects, explore links to emotion and action preparation, and consider clinical implications for schizophrenia and dyslexia. We then discuss the extension of the general principles of this proposal to other cognitive domains.
(web, pdf)
Eric Mandelbaum, Troubles with Bayesianism: An introduction to the psychological immune system, Mind And Language, 34 (2018) 141-157.
A Bayesian mind is, at its core, a rational mind. Bayesianism is thus well-suited to predict and explain mental processes that best exemplify our ability to be rational. However, evidence from belief acquisition and change appears to show that we do not acquire and update information in a Bayesian way. Instead, the principles of belief acquisition and updating seem grounded in maintaining a psychological immune system rather than in approximating a Bayesian processor.
(web, pdf)
Leonid M Martyushev, Living systems do not minimize free energy: Comment on “Answering Schrödinger’s question: A free-energy formulation” by Maxwell James Désormeau Ramstead et al., Physics Of Life Reviews, 24 (2018) 40-41.
(web, pdf)
T Marvan and M Havlík, Is Predictive Processing a Theory of Consciousness?, 2020.
Predictive Processing theory, hotly debated today in neuroscience, psychology and philosophy, promises to explain a number of perceptual and cognitive phenomena in a simple and elegant manner. In some of its versions, the theory is ambitiously advertised as a new theory of conscious perception. The task of this paper is to assess to which extent an explanation of consciousness needs to invoke the principles of the PP theory. We will be arguing that the PP theory mostly concerns the preconditions of conscious perception …
(web, pdf)
Erik L Meijs, Conscious perception in the predictive brain.
(pdf)
Wanja Wiese and Thomas Metzinger, Vanilla PP for Philosophers: A Primer on Predictive Processing, Philosophy And Predictive Processing, 2017 pp. 1-18.
The goal of this short chapter, aimed at philosophers, is to provide an overview and brief explanation of some central concepts involved in predictive processing (PP). Even those who consider themselves experts on the topic may find it helpful to see how the central terms are used in this collection. To keep things simple, we will first informally define a set of features important to predictive processing, supplemented by some short explanations and an alphabetic glossary. In T. Metzinger & W. Wiese (Eds.). Philosophy and Predictive Processing: 1. Frankfurt am Main: MIND Group.
(web, pdf)
Geoffrey Hinton, Neural Networks for Machine Learning, Lecture 6a: Overview of mini-batch gradient descent, 2012.
(pdf)
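For reference, a minimal Python sketch of mini-batch gradient descent of the kind the lecture surveys, here applied to a least-squares problem (the data, batch size and learning rate are illustrative choices, not taken from the lecture):

import numpy as np

# Mini-batch gradient descent on a linear least-squares problem (illustrative sketch).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
w_true = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=1000)

w, lr, batch_size = np.zeros(5), 0.05, 32
for epoch in range(20):
    order = rng.permutation(len(X))                  # reshuffle the data each epoch
    for start in range(0, len(X), batch_size):
        b = order[start:start + batch_size]          # indices of the current mini-batch
        grad = X[b].T @ (X[b] @ w - y[b]) / len(b)   # gradient of the mean squared error
        w -= lr * grad                               # one mini-batch update

print("estimated weights:", np.round(w, 2))

Each update uses a small random subset of the data, trading gradient noise for many cheap steps per pass; later parts of the same lecture series build on this basic loop with momentum and adaptive learning rates such as rmsprop.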
M F Panichello et al., Predictive feedback and conscious visual experience, Frontiers In Psychology, 2013.
The human brain continuously generates predictions about the environment based on learned regularities in the world. These predictions actively and efficiently facilitate the interpretation of incoming sensory information. We review evidence that, as a result of this facilitation, predictions directly influence conscious experience. Specifically, we propose that predictions enable rapid generation of conscious percepts and bias the contents of awareness in situations of uncertainty. The possible neural mechanisms underlying this facilitation are discussed.
(web, pdf)
Thomas Parr and Karl J Friston, Generalised free energy and active inference, Biological Cybernetics, 113 (2019) 495-513.
Active inference rests on the idea that the brain uses an internal generative model to predict incoming sensory data. The fit between this model and data may be improved in two ways. The brain could optimise probabilistic beliefs about the variables in the generative model (i.e. perceptual inference). Alternatively, by acting on the world, it could change the sensory data, such that they are more consistent with the model. This implies a common objective function (variational free energy) for action and perception that scores the fit between an internal model and the world. We compare two free energy functionals for active inference in the framework of Markov decision processes. One of these is a functional of beliefs (i.e. probability distributions) about states and policies, but a function of observations, while the second is a functional of beliefs about all three. In the former (expected free energy), prior beliefs about outcomes are not part of the generative model (because they are absorbed into the prior over policies). Conversely, in the second (generalised free energy), priors over outcomes become an explicit component of the generative model. When using the free energy function, which is blind to future observations, we equip the generative model with a prior over policies that ensure preferred (i.e. priors over) outcomes are realised. In other words, if we expect to encounter a particular kind of outcome, this lends plausibility to those policies for which this outcome is a consequence. In addition, this formulation ensures that selected policies minimise uncertainty about future outcomes by minimising the free energy expected in the future. When using the free energy functional—that effectively treats future observations as hidden states—we show that policies are inferred or selected that realise prior preferences by minimising the free energy of future expectations. Interestingly, the form of posterior beliefs about policies (and associated belief updating) turns out to be identical under both formulations, but the quantities used to compute them are not.
(web, pdf)
Cyriel M A Pennartz, Consciousness, Representation, Action: The Importance of Being Goal-Directed, Trends In Cognitive Sciences, 22 (2018) 137-153.
Recent years have witnessed fierce debates on the dependence of consciousness on interactions between a subject and the environment. Reviewing neuroscientific, computational, and clinical evidence, I will address three questions. First, does conscious experience necessarily depend on acute interactions between a subject and the environment? Second, does it depend on specific perception–action loops in the longer run? Third, which types of action does consciousness cohere with, if not with all of them? I argue that conscious contents do not necessarily depend on acute or long-term brain–environment interactions. Instead, consciousness is proposed to be specifically associated with, and subserve, deliberate, goal-directed behavior (GDB). Brain systems implied in conscious representation are highly connected to, but distinct from, neural substrates mediating GDB and declarative memory.
(web, pdf)
Giovanni Pezzulo et al., Hierarchical Active Inference: A Theory of Motivated Control, Trends In Cognitive Sciences, 22 (2018) 294-306.
Motivated control refers to the coordination of behaviour to achieve affectively valenced outcomes or goals. The study of motivated control traditionally assumes a distinction between control and motivational processes, which map to distinct (dorsolateral versus ventromedial) brain systems. However, the respective roles and interactions between these processes remain controversial. We offer a novel perspective that casts control and motivational processes as complementary aspects (goal propagation and prioritization, respectively) of active inference and hierarchical goal processing under deep generative models. We propose that the control hierarchy propagates prior preferences or goals, but their precision is informed by the motivational context, inferred at different levels of the motivational hierarchy. The ensuing integration of control and motivational processes underwrites action and policy selection and, ultimately, motivated behaviour, by enabling deep inference to prioritize goals in a context-sensitive way.
(web, pdf)
Fabienne Picard, State of belief, subjective certainty and bliss as a product of cortical dysfunction, Cortex, 49 (2013) 2494-2500.
(web, pdf)
Michał Andrzej Piekarski, Representations, direct perception and scientific realism. In defence of conservative predictive processing, 2019 pp. 1-25.
(pdf)
Z Radman, Postscript:'Aheadness'--Prospective Adaptations towards the Actual, Journal Of Consciousness Studies, 2020.
The term 'aheadness' has been coined and applied in order to account for the variety of embodied and enactive aspects that shape attitudes, which in turn impact the selection of stimuli in a prospective way. Such an approach is body-centred rather than brain-centred. Consequently, 'predictive coping' is taken to be a better explanatory candidate than 'predictive coding'. As the cognitive organism is never ignorant or neutral, 'aheadness' comes with attitudes, pre-shaping the forthcoming according to needs, moods, emotions, wishes, hopes …
(web, pdf)
Maxwell James Désormeau Ramstead et al., Answering Schrödinger’s question: A free-energy formulation, Physics Of Life Reviews, 24 (2018) 1-16.
The free-energy principle (FEP) is a formal model of neuronal processes that is widely recognised in neuroscience as a unifying theory of the brain and biobehaviour. More recently, however, it has been extended beyond the brain to explain the dynamics of living systems, and their unique capacity to avoid decay. The aim of this review is to synthesise these advances with a meta-theoretical ontology of biological systems called variational neuroethology, which integrates the FEP with Tinbergen’s four research questions to explain biological systems across spatial and temporal scales. We exemplify this framework by applying it to Homo sapiens, before translating variational neuroethology into a systematic research heuristic that supplies the biological, cognitive, and social sciences with a computationally tractable guide to discovery.
(web, pdf)
Maxwell James Désormeau Ramstead et al., Variational ecology and the physics of sentient systems, Physics Of Life Reviews, 31 (2019) 188-205.
This paper addresses the challenges faced by multiscale formulations of the variational (free energy) approach to dynamics that obtain for large-scale ensembles. We review a framework for modelling complex adaptive control systems for multiscale free energy bounding organism–niche dynamics, thereby integrating the modelling strategies and heuristics of variational neuroethology with a broader perspective on the ecological nestedness of biotic systems. We extend the multiscale variational formulation beyond the action–perception loops of individual organisms by appealing to the variational approach to niche construction to explain the dynamics of coupled systems constituted by organisms and their ecological niche. We suggest that the statistical robustness of living systems is inherited, in part, from their eco-niches, as niches help coordinate dynamical patterns across larger spatiotemporal scales. We call this approach variational ecology. We argue that, when applied to cultural animals such as humans, variational ecology enables us to formulate not just a physics of individual minds, but also a physics of interacting minds across spatial and temporal scales – a physics of sentient systems that range from cells to societies.
(web, pdf)
Maxwell James Désormeau Ramstead et al., A tale of two densities: Active inference is enactive inference, Adaptive Behavior, 2019 pp. 1-32.
(pdf)
Rajesh P N Rao and Dana H Ballard, Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects, Nature Neuroscience, 2 (1999) 79-87.
We describe a model of visual processing in which feedback connections from a higher- to a lower-order visual cortical area carry predictions of lower-level neural activities, whereas the feedforward connections carry the residual errors between the predictions and the actual lower-level activities. When exposed to natural images, a hierarchical network of model neurons implementing such a model developed simple-cell-like receptive fields. A subset of neurons responsible for carrying the residual errors showed endstopping and other extra-classical receptive-field effects. These results suggest that rather than being exclusively feedforward phenomena, nonclassical surround effects in the visual cortex may also result from cortico-cortical feedback as a consequence of the visual system using an efficient hierarchical strategy for encoding natural images.
(web, pdf)
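The scheme summarised in this abstract (top-down predictions, feedforward residual errors, slow Hebbian learning of the generative weights) can be illustrated in a few lines of code. The following is only a minimal sketch: the linear generative model, the patch size, the learning rates and the column normalisation are simplifying assumptions for illustration, not the exact formulation used by Rao and Ballard.

```python
import numpy as np

# Minimal predictive-coding sketch in the spirit of Rao & Ballard (1999):
# top-down predictions U @ r, feedforward residual errors e = I - U @ r,
# fast inference of the causes r and slow Hebbian learning of U.
rng = np.random.default_rng(0)
n_input, n_hidden = 64, 16                            # e.g. an 8x8 patch, 16 causes
U = rng.normal(scale=0.1, size=(n_input, n_hidden))   # top-down (generative) weights

def infer(I, U, n_steps=100, lr_r=0.05):
    """Settle the hidden causes r by gradient descent on 0.5 * ||I - U @ r||^2."""
    r = np.zeros(U.shape[1])
    for _ in range(n_steps):
        e = I - U @ r             # feedforward prediction error
        r += lr_r * (U.T @ e)     # error signal refines the top-down estimate
    return r, e

def learn(patches, U, lr_U=0.01, epochs=5):
    """Slow weight update: Hebbian product of residual error and inferred cause."""
    for _ in range(epochs):
        for I in patches:
            r, e = infer(I, U)
            U += lr_U * np.outer(e, r)
        # Keep the dictionary bounded (a common stabilising choice, not from the paper).
        U /= np.linalg.norm(U, axis=0, keepdims=True)
    return U

# Usage: placeholder random patches stand in for whitened natural-image patches.
patches = rng.normal(size=(200, n_input))
U = learn(patches, U)
```

With whitened natural-image patches in place of the random placeholders, this kind of error-driven dictionary learning is what gives rise to the simple-cell-like receptive fields the paper reports.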
K Rauss and Gilles Pourtois, What is bottom-up and what is top-down in predictive coding?, Frontiers In Psychology, 2013.
Everyone knows what bottom-up is, and how it is different from top-down. At least one is tempted to think so, given that both terms are ubiquitously used, but only rarely defined in the psychology and neuroscience literature. In this review, we highlight the problems and limitations of our current understanding of bottom-up and top-down processes, and we propose a reformulation of this distinction in terms of predictive coding.
(web, pdf)
I Robertson and M D Kirchoff, Anticipatory action: Active inference in embodied cognitive activity, Journal Of Consciousness Studies, 2020.
This paper addresses the cognitive basis of anticipatory action. It does so by taking up what we call the acuity problem: the problem of explaining how skilled action seems, on the one hand, to be executed and unfold automatically and reflexively and, on the other hand, to involve anticipation of context-sensitive and constantly changing conditions in performance. The acuity problem invites two contemporary forms of reply, which we label non-inferential enactivism and Helmholtzian inference, respectively. We advance a third avenue for …
(web, pdf)
Wiktor Rorot, Explaining “spatial purport of perception”: a predictive processing approach, Synthese, 2020 pp. 1-24.
Despite the large interest in the human ability to perceive space present in neuroscience, cognitive science and psychology, as well as philosophy of mind, the issues regarding egocentric space representation have received relatively little attention. In this paper I take up a unique phenomenon related to this faculty: the “spatial purport” of perceptual experiences. The notion was proposed by Rick Grush to describe the subjective, qualitative aspects of egocentric representations of spatial properties and relations. Although Grush offered an explanation of the mechanism giving rise to the appearance of spatial purport, his model had considerable shortcomings. In the paper I thoroughly analyze both the notion of spatial purport and Grush’s explanation of the mechanism at its core in order to develop his theory using the insights provided by the predictive processing theory of mind, and more particularly by the active inference framework. The extended account I offer, named Predictive and Hierarchical Skill Theory, explains phenomena that escaped Grush’s model and furthers the research on egocentric space representation from the perspective of both neuroscience and philosophy of mind.
(web, pdf)
Lukas Schwengerer, Self-Knowledge in a Predictive Processing Framework, , 2018 pp. 1-23.
In this paper I propose an account of self-knowledge based on a framework of predictive processing. Predictive processing understands the brain as a prediction-action machine that tries to minimize error in its predictions about the world. For this view to evolve into a complete account of human cognition we ought to provide an idea how it can account for self-knowledge – knowledge of one’s own mental states. I provide an attempt for such an account starting from remarks on introspection made by Hohwy (2013). I develop Hohwy’s picture into a general model for knowledge of one’s mental states, discussing how predictions about oneself can be used to capture self-knowledge. I further explore empirical predictions, and thereby argue that the model provides a good explanation for failure of self-knowledge in cases involving motor aftereffects, such as the broken escalator phenomenon. I conclude that the proposed account is incomplete, but provides a valuable first step to connect research on predictive processing with the epistemology of self-knowledge.
(pdf)
Peggy Seriès, Neurons That Update Representations of the Future., Trends In Cognitive Sciences, 22 (2018) 671-673.
A recent article shows that the brain automatically estimates the probabilities of possible future actions before it has even received all the information necessary to decide what to do next.
(web, pdf)
José Filipe Silva and Juhana Toivanen, The Active Nature of the Soul in Sense Perception: Robert Kilwardby and Peter Olivi, Vivarium, 48 (2010) 245-278.
This article discusses the theories of perception of Robert Kilwardby and Peter of John Olivi. Our aim is to show how in challenging certain assumptions of medieval Aristotelian theories of perception they drew on Augustine and argued for the active nature of the soul in sense perception. For both Kilwardby and Olivi, the soul is not passive with respect to perceived objects; rather, it causes its own cognitive acts with respect to external objects and thus allows the subject to perceive them. We also show that Kilwardby and Olivi differ substantially regarding where the activity of the soul is directed to and the role of the sensible species in the process, and we demonstrate that there are similarities between their ideas of intentionality and the attention of the soul towards the corporeal world.
(web, pdf)
M W Spratling, A review of predictive coding algorithms, Brain And Cognition, 112 (2017) 92-97.
Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term “predictive coding”. This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology.
(web, pdf)
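As a concrete example of the first family in Spratling's list, the signal-processing sense of linear predictive coding can be sketched as below. This is an illustrative toy only: the model order, the least-squares fit and the test signal are arbitrary choices, not details taken from the review.

```python
import numpy as np

# Toy linear predictive coding (signal-processing sense): predict each sample
# from the previous p samples and keep only the residual error signal.
def lpc_coefficients(x, p=4):
    """Least-squares fit of a so that x[n] ~ sum_k a[k] * x[n-1-k]."""
    X = np.column_stack([x[p - 1 - k : len(x) - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def lpc_residual(x, a):
    """Prediction error: the part of the signal the linear model cannot predict."""
    p = len(a)
    pred = np.array([a @ x[n - p:n][::-1] for n in range(p, len(x))])
    return x[p:] - pred

# Usage: a lightly noisy sinusoid is highly predictable, so its residual has far
# lower variance than the raw signal, which is why coding the residual pays off.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 500)
x = np.sin(2 * np.pi * 5 * t) + 0.05 * rng.normal(size=t.size)
a = lpc_coefficients(x)
e = lpc_residual(x, a)
print(f"signal variance {x.var():.4f}, residual variance {e.var():.4f}")
```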
D Huron, Psychological Anticipation: The ITPRA Theory, Journal Of Consciousness Studies, 2020.
A summary of the ITPRA theory of expectation is presented (Huron, 2006). The theory aims to explain the complex dynamic blend of feelings commonly evoked by unfolding events. The theory posits five response systems divided into pre- and post-outcome epochs. Pre-outcome responses include imagination (I), where contemplating future possibilities enables vicarious previewing of likely future feelings as a strategy for choosing current behaviours. Tension (T) refers to feelings associated with somatic preparations immediately preceding an anticipated event. Post-outcome responses include prediction (P) where positive or negative feelings arise in response to predictive accuracy, with the aim of improving predictive models. Reaction (R) refers to feelings arising from neurologically fast responses such as the startle response. Finally, appraisal (A) refers to feelings arising from neurologically slow cognitive assessments of the final outcome. The theory proposes that all five systems contribute to a dynamically evolving cocktail of feelings evoked by unfolding events.
(web, pdf)
Link R Swanson, The Predictive Processing Paradigm Has Roots in Kant, Frontiers In Systems Neuroscience, 10 (2016) 204-13.
Predictive processing (PP) is a paradigm in computational and cognitive neuroscience that has recently attracted significant attention across domains, including psychology, robotics, artificial intelligence and philosophy. It is often regarded as a fresh and possibly revolutionary paradigm shift, yet a handful of authors have remarked that aspects of PP seem reminiscent of the work of 18th century philosopher Immanuel Kant. To date there have not been any substantive discussions of how exactly PP links back to Kant. In this article, I argue that several core aspects of PP were anticipated by Kant (1996/1787) in his works on perception and cognition. Themes from Kant active in PP include: (1) the emphasis on “top-down” generation of percepts; (2) the role of “hyperpriors”; (3) the general function of “generative models”; (4) the process of “analysis-by-synthesis” and (5) the crucial role of imagination in perception. In addition to these, I also point out that PP echoes Kant’s general project in that it aims to explain how minds track causal structure in the world using only sensory data, and that it uses a reverse-engineer or “top-down” method of analysis. I then locate a possible source of Kant’s influence on PP by tracing the paradigm back to Hermann von Helmholtz, who saw himself as providing a scientific implementation of Kant’s conclusions. I conclude by arguing that PP should not be regarded as a new paradigm, but is more appropriately understood as the latest incarnation of an approach to perception and cognition initiated by Kant and refined by Helmholtz.
(web, pdf)
Thomas Metzinger and Wanja Wiese (eds.), Philosophy and Predictive Processing, MIND Group, 2017.
(web, pdf)
Kadi Tulver et al., Individual differences in the effects of priors on perception: A multi-paradigm approach, Cognition, 187 (2019) 167-177.
The present study investigated individual differences in how much subjects rely on prior information, such as expectations or knowledge, when faced with perceptual ambiguity. The behavioural performance of forty-four participants was measured on four different visual paradigms (Mooney face recognition, illusory contours, blur detection and representational momentum) in which priors have been shown to affect perception. In addition, questionnaires were used to measure autistic and schizotypal traits in the non-clinical population. We hypothesized that someone who in the face of ambiguous or noisy perceptual input relies heavily on priors, would exhibit this tendency across a variety of tasks. This general pattern would then be reflected in high pairwise correlations between the behavioural measures and an emerging common factor. On the contrary, our results imply that there is no single factor that explains the individual differences present in the aforementioned tasks, as further evidenced by the overall lack of robust correlations between the separate paradigms. Instead, a two-factor structure reflecting differences in the hierarchy of perceptual processing was the best fit for explaining the individual variance in these tasks. This lends support to the notion that mechanisms underlying the effects of priors likely originate from several independent sources and that it is important to consider the role of specific tasks and stimuli more carefully when reporting effects of priors on perception.
(web, pdf)
Annie Plessinger, Visualization, , 2012 pp. 1-4.
(pdf)
P F Velasco, Attention in the Predictive Processing Framework and the Phenomenology of Zen Meditation, Journal Of Consciousness Studies, 2017.
In this paper I will use the phenomenology of Zen meditation (zazen) to look at the role of attention within the predictive processing (PP) framework. Section 1 introduces PP, according to which the brain is a dynamical, hierarchical, hypothesis-testing mechanism. Section 2 discusses the current proposal that attention is the process of precision optimization (Hohwy, 2012) and presents some of the challenges for this theory. Section 3 introduces zazen and uses some of the emerging patterns of its phenomenology to clarify …
(web, pdf)
Daniel Williams, Hierarchical Bayesian models of delusion, Consciousness And Cognition, 61 (2018) 129-147.
(web, pdf)
Daniel Yon et al., The Predictive Brain as a Stubborn Scientist, Trends In Cognitive Sciences, 23 (2019) 6-8.
Bayesian theories of perception have traditionally cast the brain as an idealised scientist, refining predictions about the outside world based on evidence sampled by the senses. However, recent predictive coding models include predictions that are resistant to change, and these stubborn predictions can be usefully incorporated into cognitive models.
(web, pdf)
Sascha Benjamin Fink and Carlos Zednik, Meeting in the Dark Room: Bayesian Rational Analysis and Hierarchical Predictive Coding, Philosophy And Predictive Processing, 2017 pp. 1-13.
At least two distinct modeling frameworks contribute to the view that mind and brain are Bayesian: Bayesian Rational Analysis (BRA) and Hierarchical Predictive Coding (HPC). What is the relative contribution of each, and how exactly do they relate? In order to answer this question, we compare the way in which these two modeling frameworks address different levels of analysis within Marr’s tripartite conception of explanation in cognitive science. Whereas BRA answers questions at the computational level only, many HPC-theorists answer questions at the computational, algorithmic, and implementational levels simultaneously. Given that all three levels of analysis need to be addressed in order to explain a behavioral or cognitive phenomenon, HPC seems to deliver more complete explanations. Nevertheless, BRA is well-suited for providing a solution to the dark room problem, a major theoretical obstacle for HPC. A combination of the two approaches also combines the benefits of an embodied-externalistic approach to resolving the dark room problem with the idea of a persisting evidentiary border beyond which matters are out of cognitive reach. For this reason, the development of explanations spanning all three Marrian levels within the general Bayesian approach may require combining the BRA and HPC modeling frameworks. In T. Metzinger & W. Wiese (Eds.). Philosophy and Predictive Processing: 14. Frankfurt am Main: MIND Group.
(web, pdf)
Floris P de Lange et al., How Do Expectations Shape Perception?, Trends In Cognitive Sciences, 22 (2018) 764-779.
Perception and perceptual decision-making are strongly facilitated by prior knowledge about the probabilistic structure of the world. While the computational benefits of using prior expectation in perception are clear, there are myriad ways in which this computation can be realized. We review here recent advances in our understanding of the neural sources and targets of expectations in perception. Furthermore, we discuss Bayesian theories of perception that prescribe how an agent should integrate prior knowledge and sensory information, and investigate how current and future empirical data can inform and constrain computational frameworks that implement such probabilistic integration in perception.
(web, pdf)
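The probabilistic integration that such Bayesian theories prescribe has a particularly simple closed form in the Gaussian case, sketched below. The numbers are made up for illustration; only the precision-weighted averaging rule itself is the standard textbook result, not anything specific to the review above.

```python
# Gaussian prior-likelihood integration: the posterior mean is a
# precision-weighted average of the expected value and the measurement.
def integrate(prior_mean, prior_var, obs, obs_var):
    w_prior = (1.0 / prior_var) / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = w_prior * prior_mean + (1.0 - w_prior) * obs
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    return post_mean, post_var

# Usage: a narrow prior (expected direction 0 deg) pulls a noisy 10 deg
# measurement strongly towards it; a broad prior barely does.
print(integrate(prior_mean=0.0, prior_var=4.0, obs=10.0, obs_var=16.0))    # -> (2.0, 3.2)
print(integrate(prior_mean=0.0, prior_var=100.0, obs=10.0, obs_var=16.0))  # -> (~8.6, ~13.8)
```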