

Copyright © 2021 Ren et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

Research Articles: Behavioral/Cognitive

Automatic and fast encoding of representational uncertainty underlies the distortion of relative frequency

https://doi.org/10.1523/JNEUROSCI.2006-20.2021

Cite as: J. Neurosci 2021; 10.1523/JNEUROSCI.2006-20.2021

Received: 31 July 2020. Revised: 18 February 2021. Accepted: 18 February 2021.

This Early Release article has been peer-reviewed and accepted, but has not been through the composition and copyediting processes. The final version may differ slightly in style or formatting and will contain links to any extended data.

Alerts: Sign up at www.jneurosci.org/alerts to receive customized email alerts when the fully formatted version of this article is published.


Automatic and fast encoding of representational uncertainty underlies the distortion of relative frequency

Xiangjuan Ren1,2,3, Huan Luo1,4,5*, Hang Zhang1,3,4,5,6*

1. School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
2. Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
3. Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
4. PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China
5. Key Laboratory of Machine Perception, Ministry of Education, Peking University, Beijing, China
6. Chinese Institute for Brain Research, Beijing, China

*Corresponding authors: Huan Luo, email: [email protected]; Hang Zhang, email: [email protected]

Abbreviated title: Encoding of representational uncertainty

Counts: Abstract, 230 words; Introduction, 636 words; Discussion, 1443 words; Figures, 7.

Acknowledgments: HZ was supported by National Natural Science Foundation of China grants 31871101 and 31571117, and funding from the Peking-Tsinghua Center for Life Sciences. HL was supported by National Natural Science Foundation of China grant 31930052 and Beijing Municipal Commission of Science and Technology grant Z181100001518002. Part of the analysis was performed on the High-Performance Computing Platform of the Center for Life Sciences at Peking University. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.


Abstract

Humans do not have an accurate representation of probability information in the environment but distort it in a surprisingly stereotyped way ("probability distortion"), as shown in a wide range of judgment and decision-making tasks. Many theories hypothesize that humans automatically compensate for the uncertainty inherent in probability information ("representational uncertainty") and that probability distortion is a consequence of this uncertainty compensation. Here we examined whether and how the representational uncertainty of probability is quantified in the human brain and its relevance to probability distortion behavior. Human subjects (13 female and 9 male) tracked the relative frequency of one color of dot in a sequence of dot arrays while their brain activity was recorded by magnetoencephalography (MEG). We found converging evidence from both neural entrainment and time-resolved decoding analyses that a mathematically derived measure of representational uncertainty is automatically computed in the brain, even though it is not explicitly required by the task. In particular, the encodings of relative frequency and its representational uncertainty occur at latencies of approximately 300 ms and 400 ms, respectively. The relative strength of the brain responses to these two quantities correlates with probability distortion behavior. The automatic and fast encoding of representational uncertainty provides a neural basis for the uncertainty-compensation hypothesis of probability distortion. More generally, since representational uncertainty is closely related to confidence estimation, our findings exemplify how confidence might emerge prior to perceptual judgment.

Keywords: judgment; decision making; probability distortion; MEG; steady-state response (SSR); time-resolved decoding


Significance Statement

Human perception of probabilities and relative frequencies can be markedly distorted, which is a potential source of disastrous decisions. But the brain is not simply ignorant of probability: probability distortions are highly patterned and similar across different tasks. Recent theoretical work suggests that probability distortions arise from the brain's compensation for its own uncertainty in representing probability. Is such representational uncertainty really computed in the brain? To answer this question, we asked human subjects to track an ongoing stimulus sequence of relative frequencies and recorded their brain responses using MEG. Indeed, we found that the neural encoding of representational uncertainty accompanies that of relative frequency, even though the former is not explicitly required by the task.


Introduction

Humans do not have an accurate representation of probability or relative-frequency information in the environment but distort it in a surprisingly stereotyped way. Typically, small probability is overestimated and large probability underestimated, a pattern that is well fit by a linear-in-log-odds (LLO) model with two parameters (see Zhang & Maloney, 2012 for a review). Such "probability distortion" phenomena occur in a variety of judgment and decision-making tasks such as relative-frequency estimation (Attneave, 1953; Lichtenstein, Slovic, Fischhoff, Layman, & Combs, 1978; Varey, Mellers, & Birnbaum, 1990), confidence rating (Erev, Wallsten, & Budescu, 1994; Gigerenzer, Hoffrage, & Kleinbölting, 1991; Wallsten, Budescu, Erev, & Diederich, 1997), and decision under risk (Gonzalez & Wu, 1999; Luce, 2000; Tversky & Kahneman, 1992; Zhang, Ren, & Maloney, 2020), and are also widely reported in animal behaviors (Constantinople, Piet, & Brody, 2019; Ferrari-Toniolo, Bujold, & Schultz, 2019; Stauffer, Lak, Bossaerts, & Schultz, 2015; Yang & Shadlen, 2007). However, the neural computations that accompany such probability distortions remain largely unknown.

One computation that is assumed to be central to probability distortion (Fennell & Baddeley, 2012; Martins, 2006; See & Fox, 2006; Zhang et al., 2020) is compensation for the uncertainty inherent in the representation of noisy probability information ("representational uncertainty"). In particular, a recent comparison among an extensive set of computational models of probability distortion (Zhang et al., 2020) suggests that in their judgment and decision-making people take into account representational uncertainty that is proportional to p(1–p), where p denotes probability or relative frequency. That is, representational uncertainty is zero at p = 0 or 1 and maximal at p = 0.5. The p(1–p) form was also shown by Lebreton et al. (2015) to be a measure of uncertainty that correlates with explicitly reported confidence for a specific judged value. Specifically, their fMRI results showed that during the valuation process this uncertainty measure is automatically encoded in the ventromedial prefrontal cortex (vmPFC), the brain region known for encoding value, even in the absence of explicit confidence rating. Motivated by these neuroimaging studies and computational modeling, we ask whether the representational uncertainty of probability information can be automatically encoded in the human brain and, if so, whether this encoding can precede the explicit judgment of probability, as theories of probability distortion would expect (Fennell & Baddeley, 2012; Martins, 2006; See & Fox, 2006; Zhang et al., 2020).

In the present study, we designed a new experimental paradigm in which a major component of the representational uncertainty of probability information (i.e., p(1–p); see Results for details) was varied continuously over time. By using time-resolved neural measurements and assessing the temporal coupling between stimulus variables and ongoing brain activities, we examined whether and how the encoding of representational uncertainty proceeds in time and relates to probability distortion.

Twenty-two human subjects participated in the study and were instructed to continuously track the relative frequency (p) of one color of dots in a sequence of dot arrays (Fig. 1A), during which their brain activities were recorded by magnetoencephalography (MEG). First, we found that, though p was the only variable that subjects needed to track while p(1–p) was task-irrelevant, the periodic changes of p(1–p) as well as p entrained neural rhythms, supporting an automatic tracking of representational uncertainty in the brain even when it is task-irrelevant. Next, using a time-resolved decoding analysis to delineate the temporal course, we further found that the encoding of p and p(1–p) peaked around 300 and 400 ms after stimulus onset, respectively. Finally, the relative strength of the neural responses to the two variables (p and p(1–p)) in the parietal region correlated with the variation of probability distortion behavior across individuals. Taken together, our results provide neural evidence for an automatic, fast encoding of representational uncertainty in the human brain that might underlie the probability distortion observed in a wide range of human behaviors.


Materials and Methods

Experimental Design and Statistical Analyses

Subjects. Twenty-two human subjects (aged 18–27, 13 female) participated. No statistical methods were used to predetermine sample size, but our sample size was similar to that of previous human neuroimaging studies on probability or numerosity (Fornaciai, Brannon, Woldorff, & Park, 2017; Hsu, Krajbich, Zhao, & Camerer, 2009; Lebreton et al., 2015). All subjects had normal or corrected-to-normal vision and passed the Farnsworth-Munsell 100 Hue Color Vision Test (Farnsworth, 1943). The study was approved by the Institutional Review Board of the School of Psychological and Cognitive Sciences at Peking University (#2015-03-13c). Subjects provided written informed consent in accordance with the Declaration of Helsinki and were compensated for their time.

Apparatus. Subjects were seated ~86 cm in front of a projection screen (Panasonic PT-DS12KE: 49.6 × 37.2 cm, 1024 × 768 pixels, 60-Hz refresh rate) inside a magnetically shielded room. Stimuli were controlled by a Dell computer using Matlab (MathWorks) and PsychToolbox-3 (Brainard, 1997; Pelli, 1997). Subjects' behavioral responses were recorded by an MEG-compatible mouse system (FOM-2B-10B from Nata technologies) and their brain activities by a 306-channel MEG system (see MEG Acquisition and Preprocessing for details).


Task. Each trial started with a white fixation cross on a blank screen for 600 ms, after which a sequence of displays of orange and blue dots was presented on a gray background at a rate of 150 ms per display (Fig. 1A). Subjects were asked to fixate on the central fixation cross and track the relative frequency of each display. One thousand milliseconds after the sequence ended, a horizontal scale (0 to 100%) with its starting point in the middle of the bar appeared on the screen, and subjects were required to click on the scale to indicate the relative frequency of orange (or blue) dots on the last display. Half of the subjects reported the relative frequency of orange dots and half that of blue dots.

To encourage subjects to pay attention to each display, one out of six trials was a catch trial whose duration followed a truncated exponential distribution (1–6 s, mean 3 s), such that almost any display could be the last display. The duration of formal trials was 6 or 6.15 s. Only the formal trials were submitted to behavioral and MEG analyses.

On each display, all dots were randomly scattered without overlapping within an invisible circle that subtended a visual angle of 12°. The visual angle of each dot was 0.2° and the center-to-center distance between any two dots was at least 0.12°. The two colors of the dots were isoluminant (CIE_orange = [43, 19.06, 52.33], CIE_blue = [43, –15.49, –23.72]). Isoluminant dark gray pixels (CIE_gray = [43, 0, 0]) were filled in on the gray background (CIE_background = [56.5, 0, 0]) between the dots as needed to guarantee that each display had equal overall luminance, which prevented luminance from being a confounding factor for any of the abstract variables of interest.

Design. We adopted a steady-state response (SSR) design, which can achieve a higher signal-to-noise ratio than the conventional event-related design (Norcia, Appelbaum, Ales, Cottereau, & Rossion, 2015). The basic idea is to vary the value of a variable periodically at a specific temporal frequency and to observe the brain activity at the same frequency as an idiosyncratic response to that variable. The variables of most interest in the present study were the relative frequency p and its representational uncertainty, quantified by p(1–p).

For half of the trials (referred to as the P-cycle condition), the value of p in the sequence was drawn from uniform distributions whose ranges alternated between (0.1, 0.5) and (0.5, 0.9), so that the sequence of p formed cycles of 3.33 Hz. For the other half (referred to as the U-cycle condition), the value of p was drawn alternately from (0.1, 0.3) ∪ (0.7, 0.9) and (0.3, 0.7), so that the sequence of p(1–p) (i.e., the proxy for uncertainty) cycled at 3.33 Hz. For both the P-cycle and U-cycle conditions, p was randomly and independently sampled for each display from the defined distributions. That is, the two cycle conditions had matched individual displays but differed in which variable formed a periodic sequence. Accordingly, the p(1–p) sequence in the P-cycle condition and the p sequence in the U-cycle condition were aperiodic and had little autocorrelation.
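With two displays per 3.33-Hz cycle (2 × 150 ms = 300 ms), alternating the sampling range on every display makes the target variable periodic. The sampling scheme can be sketched as follows; this is a Python illustration (the original stimuli were generated in Matlab), and the function name and the 40-display trial length (40 × 150 ms = 6 s) are ours, not from the paper:

```python
import numpy as np

def make_p_sequence(n_displays, condition, rng):
    """Generate a sequence of relative frequencies p (one per 150-ms display).

    Alternating the sampling range every display makes the target variable
    (p in P-cycle, p(1-p) in U-cycle) periodic at 1 / (2 * 0.150 s) = 3.33 Hz.
    """
    p = np.empty(n_displays)
    for i in range(n_displays):
        if condition == "P-cycle":
            # p itself alternates between a low and a high range
            lo, hi = ((0.1, 0.5), (0.5, 0.9))[i % 2]
            p[i] = rng.uniform(lo, hi)
        elif condition == "U-cycle":
            # p(1-p) alternates: extreme p (low uncertainty) vs middle p (high)
            if i % 2 == 0:
                # (0.1, 0.3) U (0.7, 0.9): pick one sub-interval at random
                lo, hi = ((0.1, 0.3), (0.7, 0.9))[rng.integers(2)]
            else:
                lo, hi = 0.3, 0.7
            p[i] = rng.uniform(lo, hi)
        else:
            raise ValueError(condition)
    return p

rng = np.random.default_rng(0)
p_seq = make_p_sequence(40, "P-cycle", rng)   # one 6-s formal trial
u_seq = make_p_sequence(40, "U-cycle", rng)
```

Note that in the U-cycle condition the boundary value of p(1–p) between the two phases is 0.3 × 0.7 = 0.21, so the uncertainty sequence oscillates between values below and above 0.21.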


The total number of dots on a display (numerosity, denoted N) was varied across trials as well as across individual displays of the same trial. The value of N for a specific display was independent of the value of p and was generated as a linear transformation of a beta random variable following Beta(0.1, 10), so that the number of dots in each color (pN or (1–p)N), a possible confounding variable of p, would have only moderate correlations (Pearson's |r| < 0.5) with p. The linear transformation was chosen to locate N in the range [10, 90] for half of the trials (the N-small condition) and in [100, 900] for the other half (the N-large condition).

The number of dots in each color was pN or (1–p)N rounded to the nearest integer. As a result of rounding, the actual relative frequency differed slightly from the originally chosen p. In data analysis, we use p to denote the actual relative frequency that was presented to subjects.
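The numerosity sampling and rounding steps can be sketched as below. This is a Python illustration with our own function name; one simplification relative to the text is that the second color's count is taken as the complement N minus the first, so the two counts are guaranteed to sum to N:

```python
import numpy as np

def make_display(p, n_range, rng):
    """Sample numerosity N and dot counts for one display.

    N is a linear transformation of a Beta(0.1, 10) variable into n_range
    (e.g. [10, 90] for N-small, [100, 900] for N-large). The count of the
    tracked color is round(p * N); the actual relative frequency is then
    recomputed from the rounded count.
    """
    lo, hi = n_range
    N = int(round(lo + (hi - lo) * rng.beta(0.1, 10)))
    n_orange = int(round(p * N))          # tracked color
    n_blue = N - n_orange                 # complement, so counts sum to N
    p_actual = n_orange / N               # relative frequency after rounding
    return N, n_orange, n_blue, p_actual
```

Rounding shifts the presented relative frequency by at most 0.5/N, which is why the analyses use the recomputed value rather than the originally chosen p.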

To summarize, there were 2 (P-cycle vs. U-cycle) × 2 (N-small vs. N-large) experimental conditions, with all conditions interleaved and each condition repeated for 6 trials in each block. Each subject completed 10 blocks of 24 trials, resulting in 50 formal trials and 10 catch trials per condition. Before the formal experiment, subjects completed 32 practice trials (the first 20 consisted of a single display and the following 12 had the same settings as the formal experiment) to familiarize themselves with the procedure. No feedback was given during the experiment.

Behavioral Analysis

Measures of probability distortion. According to Zhang and Maloney (2012), inverted-S- or S-shaped probability distortions can be well captured by the LLO model

$$\lambda(\hat{p}) = \gamma\,\lambda(p) + (1-\gamma)\,\lambda(p_0) + \varepsilon, \qquad (1)$$

where $p$ and $\hat{p}$ respectively denote the objective and subjective probability or relative frequency, $\lambda(p) = \log\frac{p}{1-p}$ is the log-odds transformation, and $\varepsilon$ is Gaussian error on the log-odds scale with mean 0 and variance $\sigma^2$. The free parameter $\gamma$ is the slope of distortion, with $\gamma < 1$ corresponding to inverted-S-shaped distortion, $\gamma = 1$ to no distortion, and $\gamma > 1$ to S-shaped distortion. The free parameter $p_0$ is the crossover point where $\hat{p} = p$. See the inset of Figure 2B for three examples illustrating the functional form of the LLO model (Eq. 1).

For each subject and condition, we fit the reported relative frequencies to the LLO model (Eq. 1) and used the estimated $\hat{\gamma}$ and $\hat{p}_0$ as measures of relative-frequency distortion.
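Because the LLO model (Eq. 1) is linear on the log-odds scale, its two parameters can be recovered by ordinary least squares: the slope of the regression of $\lambda(\hat{p})$ on $\lambda(p)$ is $\gamma$, and the intercept equals $(1-\gamma)\,\lambda(p_0)$. A minimal Python sketch (our own helper names; the paper does not specify its fitting routine for Eq. 1 beyond this):

```python
import numpy as np

def logit(p):
    """Log-odds transformation lambda(p) = log(p / (1 - p))."""
    return np.log(p / (1 - p))

def fit_llo(p, p_hat):
    """Least-squares fit of the LLO model on the log-odds scale.

    Regress logit(p_hat) on logit(p): the slope is gamma and the
    intercept is (1 - gamma) * logit(p0), from which p0 is recovered.
    """
    gamma, intercept = np.polyfit(logit(p), logit(p_hat), 1)
    # invert the intercept; undefined when gamma == 1 (no distortion)
    p0 = 1 / (1 + np.exp(-intercept / (1 - gamma)))
    return gamma, p0
```

On noise-free data generated from Eq. 1 this recovers $\gamma$ and $p_0$ exactly; with behavioral noise it gives the least-squares estimates on the log-odds scale.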

Response time. The response time (RT) was defined as the interval between the onset of the response screen and the subject's first mouse movement. The RT of the first trial of three subjects was mis-recorded due to technical issues and was excluded from further analysis. We divided all trials evenly into 5 bins based on the to-be-reported p, and computed the mean p and mean RT for each bin. A similar binning procedure was applied to p(1–p) to visualize the relationship of RT to p(1–p).
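The equal-count binning procedure amounts to sorting trials by the binning variable and splitting them into five groups. A small Python sketch (function name ours):

```python
import numpy as np

def bin_means(x, y, n_bins=5):
    """Split trials into n_bins equal-count bins by x (e.g. reported p, or
    p(1-p)) and return the mean x and mean y (e.g. RT) within each bin."""
    order = np.argsort(x)
    bins_x = np.array_split(x[order], n_bins)
    bins_y = np.array_split(y[order], n_bins)
    return (np.array([b.mean() for b in bins_x]),
            np.array([b.mean() for b in bins_y]))
```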


Non-parametric measures of probability distortion. As a complement to the LLO model (Eq. 1), we used a non-parametric method to visualize the probability distortion curve for each subject. In particular, we smoothed $\hat{p}(p)$ using kernel regression with the commonly used Nadaraya-Watson kernel estimator (Aljuhani & Al turk, 2014; Nadaraya, 1964; Watson, 1964):

$$\hat{M}_h(x) = \frac{\sum_{i=1}^{m} K\!\left(\frac{x - x_i}{h}\right) y_i}{\sum_{i=1}^{m} K\!\left(\frac{x - x_i}{h}\right)}, \qquad (2)$$

where $x_i$ and $y_i$ ($i = 1, 2, \ldots, m$) denote observed pairs of stimuli and responses, $\hat{M}_h(x)$ denotes the smoothed response at the stimulus value $x$, and $h$ is a parameter that controls the degree of smoothing, set to 0.03. $K(\cdot)$ denotes the Gaussian kernel function:

$$K(z) = \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{z^2}{2}\right). \qquad (3)$$

Behavioral Modeling

Bounded Log-Odds (BLO) model. Zhang et al. (2020) proposed the BLO model of probability distortion, which is based on three assumptions. Below we briefly describe these assumptions and how they apply to the present study.

1. Probability is internally represented in log-odds. BLO assumes that any probability or relative frequency, p, is internally represented as a linear transformation of its log-odds,

$$\lambda(p) = \log\frac{p}{1-p}, \qquad (4)$$

which extends from minus infinity (as $p \to 0$) to plus infinity (as $p \to 1$).

2. Representation is encoded on a bounded Thurstone scale. By "Thurstone scale", we refer to a psychological scale perturbed by independent, identically distributed Gaussian noise, which was proposed by Thurstone (1927) and has been widely used in modeling representations of psychological magnitudes (e.g., Erev et al., 1994; Lebreton et al., 2015). BLO assumes that the Thurstone scale for encoding log-odds has a limited range and can only encode a selected interval of log-odds. It defines the bounding operation

$$\Lambda(\lambda) = \begin{cases} \Delta^{+}, & \lambda > \Delta^{+} \\ \lambda, & \Delta^{-} \le \lambda \le \Delta^{+} \\ \Delta^{-}, & \lambda < \Delta^{-} \end{cases} \qquad (5)$$

to confine the representation of log-odds to the interval $[\Delta^{-}, \Delta^{+}]$, where $\Delta^{-}$ and $\Delta^{+}$ are free parameters. That is, any $\lambda(p)$ that is outside the bounds is truncated to its nearest bound. The bounded interval $[\Delta^{-}, \Delta^{+}]$ is mapped to the bounded Thurstone scale through a linear transformation, so that the log-odds $\lambda(p)$ is encoded as

$$\psi(p) = \omega\left(\Lambda(\lambda(p)) - \frac{\Delta^{-} + \Delta^{+}}{2}\right), \qquad (6)$$

where $\omega > 0$ is a free parameter.

3. Representational uncertainty is compensated for in the final estimate of probability. In our relative-frequency judgment task, we assume that representational uncertainty partly arises from random variation in sampling. Suppose subjects may not have access to all of the dots in a display. For a display of $N$ dots, a sample of $n_s$ dots is randomly drawn from the display without replacement and used to infer the relative frequency. The resulting variance of $\hat{p}$ is thus (Cochran, 1977; Zhang et al., 2020):

$$V(\hat{p}) = \frac{p(1-p)}{n_s} \cdot \frac{N - n_s}{N - 1}. \qquad (7)$$

In the present study, $n_s$ is modeled as an increasing function of numerosity:

$$n_s = b N^a, \qquad (8)$$

where $a > 0$ and $b > 0$ are free parameters. To keep the resulting $V(\hat{p}) \ge 0$, we set $V(\hat{p}) = 0$ whenever Eq. 7 would otherwise yield a negative value. Note that Eq. 7 does not necessarily imply that $V(\hat{p})$ is an increasing function of $N$, given that $n_s$ is not constant but may vary with $N$ (Eq. 8). In fact, for a median subject ($a = 0.42$ and $b = 1.91$) in our experiment, $V(\hat{p})$ is mostly a decreasing function of $N$.
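Eqs. 7–8 can be checked numerically. The sketch below (Python, function name ours) uses the median-subject values a = 0.42 and b = 1.91 quoted in the text as defaults; it shows that V(p̂) peaks at p = 0.5 and, for these parameter values, decreases with N:

```python
import numpy as np

def rep_uncertainty(p, N, a=0.42, b=1.91):
    """Sampling variance of the relative-frequency estimate (Eq. 7),
    with sample size n_s = b * N**a (Eq. 8), clipped at zero."""
    ns = b * N**a
    v = p * (1 - p) / ns * (N - ns) / (N - 1)
    return np.maximum(v, 0.0)   # force V >= 0 when n_s would exceed N
```

Because n_s grows with N (Eq. 8), the factor 1/n_s shrinks faster than the finite-population correction (N − n_s)/(N − 1) grows, which is why V(p̂) falls as N rises for the median subject.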

BLO assumes that representational uncertainty is compensated for in the final estimate of probability, with the log-odds of $\hat{p}$ modeled as a weighted average of $\psi(p)$ and a fixed point $\Lambda_0$ on the log-odds scale:

$$\lambda(\hat{p}) = w_p\,\psi(p) + (1 - w_p)\,\Lambda_0 + \varepsilon, \qquad (9)$$

where $w_p = \frac{1}{1 + \theta V(\hat{p})}$ is a measure of the reliability of the internal representation, and $\varepsilon$ is a Gaussian noise term on the log-odds scale with mean 0 and variance $\sigma^2$. Here $\theta > 0$ and $\Lambda_0$ are free parameters. In total, the BLO model for our relative-frequency judgment task has eight parameters: $\Delta^{-}$, $\Delta^{+}$, $a$, $b$, $\omega$, $\Lambda_0$, $\theta$, and $\sigma$.
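Putting Eqs. 4–9 together, the deterministic part of the BLO prediction can be sketched as follows. This is a Python illustration under the notation above; the parameter values are illustrative defaults chosen to produce a symmetric inverted-S distortion, not fitted estimates from the paper:

```python
import numpy as np

def blo_predict(p, N, dm=-1.5, dp=1.5, a=0.42, b=1.91,
                omega=1.0, lam0=0.0, theta=1.0):
    """Noise-free BLO prediction of the reported relative frequency
    (Eqs. 4-9, omitting the Gaussian term). Parameter values are
    illustrative, not fitted estimates."""
    lam = np.log(p / (1 - p))                      # Eq. 4: log-odds
    lam_b = np.clip(lam, dm, dp)                   # Eq. 5: bounding
    psi = omega * (lam_b - (dm + dp) / 2)          # Eq. 6: Thurstone encoding
    ns = b * N**a                                  # Eq. 8: sample size
    v = np.maximum(p * (1 - p) / ns * (N - ns) / (N - 1), 0)  # Eq. 7
    w = 1 / (1 + theta * v)                        # reliability weight
    lam_hat = w * psi + (1 - w) * lam0             # Eq. 9, noise-free
    return 1 / (1 + np.exp(-lam_hat))              # back to probability
```

With symmetric bounds and the anchor at the midpoint, this sketch reproduces the signature inverted-S pattern: small p is overestimated and large p underestimated, while p = 0.5 is reported veridically.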


Model fitting. We considered the BLO model and 17 alternative models (described below). For each subject, we fit each model to the subject's reported relative frequencies on each trial using maximum likelihood estimation. The fminsearchbnd function (J. D'Errico), based on fminsearch in MATLAB (MathWorks), was used to search for the parameters that minimized the negative log likelihood. To verify that we had found the global minimum, we repeated the search 1,000 times with different starting points.

Factorial model comparison. Similar to Zhang et al. (2020), we performed a factorial model comparison (van den Berg, Awh, & Ma, 2014) to test whether each of the three assumptions of the BLO model outperformed plausible alternative assumptions in fitting the behavioral data. The models we considered differ along the following three dimensions.

D1: scale of transformation. The scale on which probability is internally represented can be the log-odds scale ($\lambda(p) = \log\frac{p}{1-p}$, as in Eq. 4), the Prelec scale ($\lambda(p) = -\log(-\log p)$) derived from the two-parameter Prelec function (Prelec, 1998), or the linear scale used in the neo-additive probability distortion functions (see Wakker, 2010 for a review). See Zhang et al. (2020) for more details.

D2: bounded versus bounds-free, concerning whether the bounding operation (Eq. 5) is involved.

D3: variance compensation. BLO compensates for representational uncertainty following Eq. 9. In relative-frequency judgment, the form of $V(\hat{p})$ is not only proportional to p(1–p) but also depends on N and the sampling strategy. In the present study, we modeled sample size as $n_s = b N^a$, which increases with N. Alternatively, sample size $n_s$ may be modeled as a constant; note that under constant $n_s$ the value of $V(\hat{p})$ is still proportional to p(1–p). A third alternative is to set $V(\hat{p})$ constant, which is not theoretically motivated but is effectively what classic descriptive models of probability distortion, such as LLO or the Prelec function, implement.

These three dimensions are independent of each other, analogous to the different factors manipulated in a factorial experimental design. In total, there are 3 (D1: log-odds, Prelec, or linear) × 2 (D2: bounded or bounds-free) × 3 (D3: $V(\hat{p})$ with $n_s = b N^a$, $V(\hat{p})$ with constant $n_s$, or constant $V(\hat{p})$) = 18 models.

In this three-dimensional model space, the BLO model we introduced earlier corresponds to (D1 = log-odds, D2 = bounded, D3 = $V(\hat{p})$ with $n_s = b N^a$). The model at (D1 = log-odds, D2 = bounded, D3 = $V(\hat{p})$ with constant $n_s$) is also a BLO model, which differs from our main BLO model only in the specification of the sampling strategy. For simplicity, in presenting the results of the factorial model comparison, we refer only to our main BLO model as BLO. The LLO and two-parameter Prelec models are also special cases of the 18 models, respectively corresponding to (D1 = log-odds, D2 = bounds-free, D3 = constant $V(\hat{p})$) and (D1 = Prelec, D2 = bounds-free, D3 = constant $V(\hat{p})$).

Model comparison method: AICc and group-level Bayesian model selection. We compared the goodness of fit of the BLO model with that of all the alternative models based on the Akaike information criterion with a correction for sample size (AICc) (Akaike, 1974; Hurvich & Tsai, 1989) and group-level Bayesian model selection (Daunizeau, Adam, & Rigoux, 2014; Rigoux, Stephan, Friston, & Daunizeau, 2014; Stephan, Penny, Daunizeau, Moran, & Friston, 2009).
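The AICc adds a small-sample correction term to the ordinary AIC. A one-line Python sketch (function name ours):

```python
def aicc(neg_log_lik, k, n):
    """AICc: Akaike information criterion with small-sample correction.

    neg_log_lik: minimized negative log likelihood of the model;
    k: number of free parameters; n: number of data points.
    """
    aic = 2 * neg_log_lik + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)   # correction vanishes as n grows
```

For the eight-parameter BLO model fit to a few hundred trials per subject, the correction term is under one likelihood unit but can still matter when models with different numbers of parameters are close.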

Model comparison method: 10-fold cross-validation. We used 10-fold cross-validation (Arlot & Celisse, 2010) to verify the model comparison results based on AICc. In particular, we divided the trials in each cycle and numerosity condition randomly into 10 folds. Each time, 1 fold served as the test set and the remaining 9 folds as the training set. The model parameters estimated from the training set were applied to the test set, for which the log likelihood of the data was calculated. This computation of cross-validated log likelihood was repeated with each fold serving as the test set. The cross-validated log likelihoods of all 10 folds were summed and multiplied by –2 to yield a metric comparable to AICc, denoted –2LL. For each subject, we used the model with the lowest –2LL as a reference to compute Δ(–2LL) for each model and summed Δ(–2LL) across subjects. We repeated the above procedure 10 times for each subject to reduce randomness and report the average summed cross-validated Δ(–2LL).

Model identifiability analysis. To evaluate potential model mis-identification in the model comparisons, we performed a further model identifiability analysis as follows. Model parameters had been estimated for individual subjects, and the parameters estimated from the 22 subjects' real data were used to generate synthetic datasets of 22 virtual subjects. We generated 50 synthetic datasets for each of the 18 models considered in our factorial model comparison and then fit all 18 models to each synthetic dataset. For each synthetic dataset, the model with the lowest summed AICc was identified as the best-fitting model. For the 50 datasets generated by each specific model, we calculated the percentage of datasets for which the model was correctly identified as the best model and the percentage for which each of the other models was mistakenly identified as the best model.

MEG Acquisition and Preprocessing

Subjects' brain activity was recorded by a 306-channel whole-head MEG system (Elekta-Neuromag; 102 magnetometers and 102 pairs of orthogonal planar gradiometers). Head position was measured before each block by a Polhemus Isotrak system with four head position indicator coils (two on the left and right mastoids, the other two on the left and right forehead below the hairline). Subjects whose between-block head movement exceeded 3 mm would be excluded from further analysis. Horizontal and vertical electrooculograms were recorded to monitor eye-movement artifacts. The sampling rate was 1000 Hz, and an analog band-pass filter from 0.1 to 330 Hz was applied. Maxwell filtering was used to minimize external magnetic interference and to compensate for head movements (Taulu & Kajola, 2005; Taulu & Simola, 2006).

Standard preprocessing procedures were applied using Matlab 2016b and the FieldTrip package (Oostenveld, Fries, Maris, & Schoffelen, 2011). The MEG data of each block were first low-pass filtered below 20 Hz and then segmented into epochs of 7.6 s relative to trial onset (–0.6 s to 7 s). Independent component analysis (ICA) was applied to the aggregated epoch data to remove artifacts including blinks, eye movements, breaths, and heart activity. No subject was excluded for excessive head movement or other artifacts. For two subjects, the first trial of one block was excluded due to a failure of synchronization between stimulus onset and MEG recording onset at the beginning of the block.

354

Phase Coherence Analysis 355

Given a periodically changing stimulus sequence (p in P-cycle or p(1–p) in U-cycle), stimulus-evoked brain responses imply that the phase difference between the stimulus and response time series should be coherent across trials at the stimulus frequency (Summerfield & Mangels, 2004). The phase coherence κ across trials is defined as (Kramer & Eden, 2016):

κ = Prs / √(Pss · Prr),   (10)

where Pss and Prr respectively denote the trial-averaged power spectra of the stimulus and response time series, and Prs denotes the magnitude of the trial-averaged cross-spectrum between stimuli and responses. The value of κ at any specific frequency lies between 0 and 1, with larger κ indicating stronger phase-locked responses.

For a specific variable (p or p(1–p)), we computed the phase coherence between the stimulus sequence and the MEG signals separately for each subject, each magnetometer, and each cycle and numerosity condition. In particular, we down-sampled the epoched MEG time series to 300 Hz, applied zero padding and a Hanning window to single trials, and used the Fast Fourier Transform (FFT) to calculate power spectra. The resulting frequency resolution was 0.03 Hz.

To evaluate the chance-level phase coherence, we shuffled the MEG time series across time points within each trial and computed phase coherence for the permuted time series. This permutation procedure was repeated 500 times to produce a distribution of chance-level phase coherence.
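The computation above can be sketched in NumPy (the published analysis used Matlab and FieldTrip; the function below is an illustrative re-implementation, with only the 300-Hz rate and 0.03-Hz resolution taken from the text, and all names our own):

```python
import numpy as np

def phase_coherence(stim, resp, fs=300, n_fft=10000):
    """Cross-trial phase coherence of Eq. 10. stim and resp are
    (n_trials, n_samples) arrays; n_fft = 10000 zero-pads the FFT so the
    frequency resolution is fs / n_fft = 0.03 Hz, as in the text."""
    win = np.hanning(stim.shape[1])                # Hanning window per trial
    S = np.fft.rfft(stim * win, n_fft)
    R = np.fft.rfft(resp * win, n_fft)
    Pss = np.mean(np.abs(S) ** 2, axis=0)          # trial-averaged power spectra
    Prr = np.mean(np.abs(R) ** 2, axis=0)
    Prs = np.abs(np.mean(S * np.conj(R), axis=0))  # |trial-averaged cross-spectrum|
    freqs = np.fft.rfftfreq(n_fft, d=1 / fs)
    return freqs, Prs / np.sqrt(Pss * Prr)
```

A response phase-locked to a 3.33-Hz stimulus yields coherence near 1 at that frequency, whereas shuffling the response within trials (as in the permutation test) destroys the phase alignment and drives the value toward chance level.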


Decoding Analysis

Time-resolved decoding. For an aperiodic stimulus sequence (p(1–p) in P-cycle or p in U-cycle), we could infer the brain's encoding of a stimulus variable at a specific time lag after stimulus onset by examining how well the value of the variable could be reconstructed from the neural responses at that time lag. In particular, we performed a time-resolved decoding analysis using regression methods (Crosse, Di Liberto, Bednar, & Lalor, 2016; Gonçalves, Whelan, Foxe, & Lalor, 2014; Lalor, Pearlmutter, Reilly, McDarby, & Foxe, 2006; Wyart, de Gardelle, Scholl, & Summerfield, 2012):

S(t) = Σn g(τ, n) · R(t + τ, n) + ε(t),   (11)

where S(t) denotes the value of the stimulus that starts at time t, g(τ, n) denotes the to-be-estimated decoding weight for channel n at time lag τ, R(t + τ, n) denotes the response of channel n at time t + τ, and ε(t) is a Gaussian noise term.

We used the MEG signals of all 306 sensors (102 magnetometers and 204 gradiometers) at single time lags to decode p from U-cycle trials or p(1–p) from P-cycle trials, separately for each subject, each cycle and numerosity condition, and each time lag between 0 and 900 ms. Epoched MEG time series were first down-sampled to 120 Hz. To eliminate multicollinearity and reduce overfitting, we submitted the time series of the 306 sensors to a principal component analysis (PCA) and used the first 30 components (explaining approximately 95% of the variance) as regressors. The decoding weights (i.e. regression coefficients) were estimated for normalized stimuli and responses using the L2-regularized (ridge) regression method implemented in the mTRF toolbox (Crosse et al., 2016), with the ridge parameter set to 1.
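The PCA-plus-ridge step can be sketched as follows (a NumPy stand-in for the Matlab mTRF pipeline; the 30-component and ridge-parameter settings come from the text, while shapes and names are illustrative):

```python
import numpy as np

def fit_ridge_decoder(X, y, n_comp=30, lam=1.0):
    """Estimate decoding weights g for one time lag (Eq. 11).
    X: (n_obs, n_sensors) responses at stimulus onset + lag;
    y: (n_obs,) stimulus values. Returns weights and the PCA scores."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
    Z = Xc @ Vt[:n_comp].T                             # first n_comp components
    Z = Z / Z.std(0)                                   # normalize regressors
    yz = (y - y.mean()) / y.std()                      # normalize stimulus
    g = np.linalg.solve(Z.T @ Z + lam * np.eye(n_comp), Z.T @ yz)  # ridge
    return g, Z
```

Repeating this fit for every time lag between 0 and 900 ms yields the time-resolved decoding profile.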

We used leave-one-out cross-validation (Arlot & Celisse, 2010) to evaluate the predictive power of the decoding as follows. Out of the 50 trials in question (or 49 trials in the case of trial exclusion), each time one trial served as the test set and the remaining trials as the training set. The decoding weights estimated from the training set were applied to the test set, for which Pearson's r was calculated between the predicted and the ground-truth stimulus sequence. This computation was repeated with each trial as the test set, and the averaged Pearson's r was used as the measure of decoding performance.
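The leave-one-out scheme can be sketched as (illustrative; trial counts, shapes and the built-in ridge fit are placeholders of our own):

```python
import numpy as np

def loo_decoding_r(Z, S, lam=1.0):
    """Leave-one-out cross-validated decoding performance.
    Z: (n_trials, n_samples, n_comp) regressors at one time lag;
    S: (n_trials, n_samples) stimulus values. Each trial serves once as
    the test set; returns the Pearson r averaged over folds."""
    n_trials, _, n_comp = Z.shape
    rs = []
    for test in range(n_trials):
        train = [i for i in range(n_trials) if i != test]
        Xtr = Z[train].reshape(-1, n_comp)
        ytr = S[train].ravel()
        # ridge weights from the training trials only
        g = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_comp), Xtr.T @ ytr)
        # Pearson r between predicted and ground-truth stimulus on the held-out trial
        rs.append(np.corrcoef(Z[test] @ g, S[test])[0, 1])
    return float(np.mean(rs))
```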

We identified time windows with above-chance decoding performance using cluster-based permutation tests (Maris & Oostenveld, 2007) as follows. Adjacent time lags with significantly positive decoding performance at the uncorrected significance level of .05 by right-sided one-sample t tests were grouped into clusters, and the summed t value across the time lags in a cluster was defined as the cluster-level statistic. We randomly shuffled the stimulus sequence of each trial, performed the time-resolved decoding analysis on the shuffled sequence and recorded the maximum cluster-level statistic. This procedure was repeated 500 times to produce a reference distribution of chance-level maximum cluster-level statistics, based on which we calculated the P value for each cluster in the real data. This test effectively controls the Type I error rate in situations involving multiple comparisons. For each subject, we defined the median of all time points for which the decoding performance exceeded the 95th percentile of the distribution as the time lag of the peak (Marti, King, & Dehaene, 2015).
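A compact sketch of this cluster-forming step (NumPy; the threshold and the permutation distribution of maxima are supplied by the caller, and all names are ours):

```python
import numpy as np

def significant_clusters(perf, t_crit, perm_max):
    """Cluster-based permutation test on decoding performance.
    perf: (n_subjects, n_lags) per-subject decoding r; t_crit: right-sided
    uncorrected threshold; perm_max: permutation distribution of maximum
    cluster-level statistics. Returns (start, stop, summed_t, p) tuples."""
    n = perf.shape[0]
    t = perf.mean(0) / (perf.std(0, ddof=1) / np.sqrt(n))  # one-sample t per lag
    sig = np.append(t > t_crit, False)
    clusters, start = [], None
    for i, s in enumerate(sig):
        if s and start is None:
            start = i                                      # cluster opens
        elif not s and start is not None:
            clusters.append((start, i, t[start:i].sum()))  # cluster closes
            start = None
    ref = np.asarray(perm_max)
    # Monte-Carlo P: fraction of permutation maxima >= observed statistic
    return [(a, b, stat, float((ref >= stat).mean())) for a, b, stat in clusters]
```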


Spatial decoding. For the conditions and time windows that had above-chance performance in the time-resolved decoding analysis, we further performed a spatial decoding analysis at individual sensor locations to locate the brain regions involved in encoding the variable in question. For a specific sensor location, the MEG signals at its three sensors (one magnetometer and two gradiometers) within the time window were used for decoding. The time window that enclosed the peak in the time-resolved decoding was 125 ms and 167 ms wide, respectively, for p and p(1–p) under the N-large condition. The decoding procedure was similar to that of the time-resolved decoding described above, except that the number of PCA components used was 19 and 24, respectively, for decoding p and p(1–p), which explained approximately 99% of the variance of the original 48 (16 time lags × 3 sensors) and 63 (21 time lags × 3 sensors) channels.

Considering that the magnetometer and the two planar gradiometers may be sensitive to very different locations, depths and orientations of the underlying dipoles, we also performed spatial decoding separately for magnetometers and pairs of gradiometers using all temporal samples within the decoded time windows, without any PCA procedure.

The same cluster-based permutation test described above was applied to the spatial decoding results, except that the cluster-level test statistic was defined as the sum of the t values of contiguous sensors (separated by gaps < 4 cm) (Groppe, Urbach, & Kutas, 2011; Maris & Oostenveld, 2007) in a given cluster.


Ruling out the Effects of Confounding Factors

We considered the following eight variables in the stimuli as potential confounding factors, which may covary with p or p(1–p) across displays.

(1) N: the total number of dots (numerosity) in the display, which by design was independent of both p and p(1–p).

(2) Nt: the number of dots in the target color in the display, that is, Nt = Np.

(3) No: the number of dots in the other color in the display, that is, No = N(1–p).

(4) AvgLumi: the mean luminance of the display, which was kept constant across different displays and different trials and was thus independent of both p and p(1–p).

(5) vCIE-L*: the color variance of the display on the L* dimension of the CIELAB color space, where L* ranges from black to white.

(6) vCIE-a*: the color variance of the display on the a* dimension of the CIELAB color space, where a* ranges from green to red.

(7) vCIE-b*: the color variance of the display on the b* dimension of the CIELAB color space, where b* ranges from blue to yellow.

(8) M-contrast: the Michelson luminance contrast (Wiebel, Singh, & Maertens, 2016), that is, M-contrast = (Lmax – Lmin)/(Lmax + Lmin).

According to our design, most of these variables had negligible correlations with p and p(1–p), and none of the absolute values of the correlation coefficients exceeded 0.53 (Pearson's r). Numerosity (N) had considerable correlations (Pearson's |r| > 0.79) with all the other confounding factors except for the mean luminance (AvgLumi), which was constant across displays. We performed phase coherence and time-resolved decoding analyses for these variables similar to those for p and p(1–p) but obtained very different patterns, which suggests that the findings we reported for p and p(1–p) in the main text are unlikely to be effects of confounding factors. The patterns of all these confounding factors except for AvgLumi were almost identical, which may reflect a common origin such as the automatic encoding of numerosity (Fornaciai et al., 2017; Park, DeWind, Woldorff, & Brannon, 2016).


Time-Resolved Decoding Analysis Based on a Cross-Validated Version of Confound Regression (CVCR)

To further rule out the possibility that the observed automatic encoding of p(1–p) was an artefact of confounding factors, we performed a time-resolved decoding analysis based on the cross-validated version of confound regression (CVCR) method proposed by Snoek, Miletić and Scholte (2019), which allowed us to regress out confounding variables from the MEG time series before the decoding analysis.

The CVCR method consists of two modules: encoding and decoding. During encoding, we fit a multiple linear regression model to the MEG time series separately for each magnetometer or gradiometer and each specific time lag, with the confounding variables as regressors:

Yj(τ) = Σ(i=1..8) βj,i(τ) · Ci + βj,0(τ) + εj(τ),   (12)

where j denotes the sensor number, τ denotes the time lag after stimulus onset (ranging from 0 to 900 ms), Yj(τ) denotes the MEG signal at sensor j and time lag τ, Ci (i = 1, 2, …, 8) denotes the value of the i-th confounding variable among the eight confounding variables considered above (i.e., N, Nt, No, AvgLumi, vCIE-L*, vCIE-a*, vCIE-b* and M-contrast), βj,i(τ) and βj,0(τ) are free parameters, and εj(τ) is a Gaussian noise term.

We then subtracted the variance explained by the confounding variables from the MEG signals and defined the residual as the "confounds-regressed-out MEG signal":

Yj,reg(τ) = Yj(τ) – Σ(i=1..8) β̂j,i(τ) · Ci – β̂j,0(τ).   (13)

The confounds-regressed-out MEG signal, Yj,reg(τ), was subsequently used for the time-resolved decoding analysis.

Following the suggestions of Snoek, Miletić and Scholte (2019), we used a leave-one-out procedure as follows. Out of the 50 trials in question (or 49 trials in the case of trial exclusion), each time one trial served as the test set and the remaining trials as the training set. The parameters of Eq. 12 estimated within each fold of the training data, β̂j,i^train(τ) and β̂j,0^train(τ), were used to remove the variance related to the confounds from both the training set and the test set. The resulting confounds-regressed-out training data, Yj,reg^train(τ), and test data, Yj,reg^test(τ), were then concatenated and used for the decoding module.
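One fold of this procedure might look like the following in NumPy (a sketch, not the authors' code; array shapes and names are our own):

```python
import numpy as np

def cvcr_fold(Y_train, C_train, Y_test, C_test):
    """One cross-validation fold of confound regression (Eqs. 12-13):
    betas are estimated on the training data only (Eq. 12) and then used
    to remove confound-related variance from BOTH training and test data
    (Eq. 13). Y_*: (n_obs, n_sensors); C_*: (n_obs, n_confounds)."""
    X_train = np.column_stack([C_train, np.ones(len(C_train))])  # confounds + intercept
    X_test = np.column_stack([C_test, np.ones(len(C_test))])
    beta, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)     # training-fold betas
    return Y_train - X_train @ beta, Y_test - X_test @ beta      # residuals (Eq. 13)
```

Estimating the betas on the training fold only prevents information from the test trial leaking into the confound removal.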


Statistical Analysis

Linear mixed models (LMMs). Linear mixed models were estimated using the 'fitlme' function in Matlab R2016b; F statistics, residual (denominator) degrees of freedom, and P values were approximated by the Satterthwaite method (Fai & Cornelius, 1996; Giesbrecht & Burns, 1985; Kuznetsova, Brockhoff, & Christensen, 2017). Specifications of random effects in the LMMs were kept as maximal as possible (Barr, Levy, Scheepers, & Tily, 2013) without overparameterizing (Bates, Kliegl, Vasishth, & Baayen, 2015). The LMMs reported in the Results are described below.

LMM1 and LMM2: the fitted LLO parameters γ̂ and p̂0, respectively, are the dependent variables; fixed effects include an intercept and the main and interaction effects of the categorical variables cycle condition (P-cycle versus U-cycle) and numerosity condition (N-small versus N-large); random effects include correlated random slopes of cycle and numerosity conditions within subjects and a random subject intercept.

LMM3 and LMM4: the fitted LLO parameters γ̂ and p̂0, respectively, are the dependent variables; fixed effects include an intercept and the main effect of the categorical variable numerosity bin (all trials were evenly divided into four bins according to the value of N in the last display); random effects include correlated random slopes of numerosity bin within subjects and a random subject intercept.

LMM5: the logarithm of response time, log(RT), is the dependent variable; fixed effects include an intercept and the main effects of the continuous variables p, p(1–p) and N; random effects include correlated random slopes of p, p(1–p) and N within subjects and a random subject intercept.


Correction for multiple comparisons. For phase coherence and decoding performance, grand averages across subjects were computed and the permutation tests described above were used to assess their statistical significance over chance (Maris & Oostenveld, 2007). The false discovery rate (FDR) method (Benjamini & Hochberg, 1995; Benjamini & Yekutieli, 2001) was used for multiple-comparison correction whenever applicable. For the phase coherence spectrum averaged across magnetometers, FDR corrections were performed among the 235 frequency bins within 0–7 Hz. For the phase coherence topography at 3.33 Hz, FDR corrections were performed among all 102 magnetometers.
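The Benjamini-Hochberg step-up procedure used for these corrections can be sketched as:

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg FDR: reject the k smallest P values, where k is
    the largest rank i such that p_(i) <= q * i / m. Returns a boolean mask."""
    p = np.asarray(pvals, float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m   # compare sorted p to ramp
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    reject = np.zeros(m, bool)
    reject[order[:k]] = True
    return reject
```

Applied to, say, the 235 frequency bins or the 102 magnetometers, this keeps the expected proportion of false discoveries among the rejected tests below q.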


Difference between the peak latencies of p(1–p) and p. We defined the peak latency of the decoding performance of p or p(1–p) as follows. First, for the temporal course of the decoding performance of a specific variable, adjacent time points with above-chance t values (uncorrected right-sided P < 0.05 by one-sample t test) were grouped into clusters. Among the clusters between 100 and 500 ms, we then identified the cluster with the largest sum of t values and defined the centroid position (i.e., the t-value-weighted average of the time points) of that cluster as the peak latency of the variable. For each subject, we estimated the peak latencies of p and p(1–p) in the decoding performance and computed the latency difference between p(1–p) and p.

To test whether the latency difference between p(1–p) and p was statistically significant, we estimated the 95% confidence interval of the difference using a bootstrap method (Efron & Tibshirani, 1993) as follows. In each virtual experiment, we randomly re-sampled 22 virtual subjects with replacement from the 22 subjects and calculated the mean latency difference across the virtual subjects. The virtual experiment was repeated 1,000 times to produce a sampling distribution of the peak latency difference, based on which we estimated the 95% confidence interval.
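The bootstrap described above can be sketched as (the 22-subject and 1,000-repetition settings come from the text; the seed and names are ours):

```python
import numpy as np

def bootstrap_ci(diffs, n_boot=1000, seed=0):
    """Percentile bootstrap 95% CI for the mean latency difference.
    diffs: NumPy array of per-subject peak-latency differences (ms)."""
    rng = np.random.default_rng(seed)
    n = diffs.size
    means = [rng.choice(diffs, size=n, replace=True).mean()
             for _ in range(n_boot)]          # one mean per virtual experiment
    return np.percentile(means, [2.5, 97.5])
```

A difference is deemed significant at the .05 level if the resulting interval excludes zero.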


Data Availability

All data are available for download at https://osf.io/db4ms .


Results

As shown in Figure 1A, on each trial subjects saw a sequence of displays consisting of isoluminant orange and blue dots that was refreshed every 150 ms, and were instructed to track the relative frequency of dots in one color (i.e. orange or blue, counterbalanced across subjects). To ensure persistent tracking throughout the trial, the sequence of displays ended after a random duration and subjects then needed to report the relative frequency of the last display by clicking on a percentage scale.

=================
Figure 1 about here
=================

A 2 (P-cycle or U-cycle) by 2 (N-small or N-large) experimental design was used, as we describe below. Specifically, for P-cycle trials, the sequence of displays was generated according to the p value (randomly chosen from a uniform distribution between 0.1 and 0.9) and the displays alternated between lower values of p (<0.5) and higher values of p (>0.5), with every two displays forming a cycle. As a consequence, the values of p in P-cycle trials varied at a rhythm of 3.33 Hz, while the values of p(1–p) were aperiodic. In contrast, for U-cycle trials (U for uncertainty), the sequence of displays was generated according to the value of p(1–p) in a similar way, so that the displays alternated between lower values of p(1–p) (<0.21) and higher values of p(1–p) (>0.21) in cycles of 3.33 Hz, while the values of p were aperiodic. In other words, in P-cycle trials the p value underwent a rhythmic fluctuation and the p(1–p) value followed an aperiodic random course, whereas the opposite pattern held for U-cycle trials. Figure 1B illustrates the P-cycle (left) and U-cycle (right) conditions.

Our experimental design had two important features. First, the values of p and p(1–p) were statistically independent of each other. Second, the values of p and p(1–p) were dissociated in periodicity. Note that the subjects' task was always to attend to p (relative frequency), while p(1–p) was completely task-irrelevant in both P-cycle and U-cycle trials.

In addition, the total number of dots in a display (numerosity, denoted N) was not constant but varied from display to display, independently of p or p(1–p), ranging from 10 to 90 (N-small trials) or from 100 to 900 (N-large trials). Manipulating N as well as p allowed us to manipulate the slope of probability distortion (Zhang & Maloney, 2012) and thus to further investigate the encoding of representational uncertainty (Schustek, Hyafil, & Moreno-Bote, 2019).


Behavioral probability distortions quantified by LLO

All 22 subjects performed well on reporting the relative frequency of the last display: their subjective estimates p̂ were highly correlated with the objective relative frequency p (all Pearson's r > 0.71, P < 0.001). Given that the stimulus sequence could end unexpectedly, subjects who failed to track the sequence might have missed the last stimulus. However, the mean percentage of large errors (|p̂ – p| > 0.4) was low, not only for the normal trials (0.7%) but also for the catch trials (1.9%), though the latter was slightly higher than the former (paired-sample t test, t(21) = 2.49, P = 0.021). Subjects' sensible reports thus provide evidence that they tracked the change of p throughout the trial as instructed.

Figure 2A shows the reported p̂ of one representative subject, who exhibited a typical inverted-S-shaped probability distortion, overestimating small p and underestimating large p. As expected, most subjects exhibited the typical inverted-S-shaped probability distortion, while a few exhibited the opposite S-shaped distortion, a profile also consistent with previous findings (Varey et al., 1990; Zhang & Maloney, 2012).

=================
Figure 2 about here
=================

We next used the LLO model to quantify such inverted-S- or S-shaped distortions, summarizing each subject's probability distortion behavior in each cycle (P-cycle or U-cycle) and dot numerosity (N-small or N-large) condition by two parameters fitted from p̂ (see Methods): the slope γ̂ and the crossover point p̂0. We used linear mixed-effects model analyses (numbered in Methods) to identify the effects of cycle condition (P-cycle versus U-cycle), numerosity condition (N-small versus N-large) and their interaction on the estimated γ̂ (LMM1) and p̂0 (LMM2). Only numerosity showed a significant influence on γ̂ (F(1, 21.00) = 36.30, P < 0.001); no other main effects or interactions on γ̂ or p̂0 reached significance (all P > 0.10). In particular, the γ̂ estimated from N-large trials was greater than that of N-small trials by 25.75% (Figure 2B). Our subsequent analyses therefore mainly focus on γ̂, collapsing the two cycle conditions when estimating it. As illustrated by the inset of Figure 2B, a greater γ̂ implies a less curved inverted-S-shaped distortion (for γ̂ < 1) or a more curved S-shaped distortion (for γ̂ > 1).
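For reference, the LLO transform (defined in Methods; Zhang & Maloney, 2012) is linear in log-odds, and a minimal sketch makes the roles of γ and p0 concrete:

```python
import numpy as np

def llo(p, gamma, p0):
    """Linear in log-odds: lo(pi(p)) = gamma * lo(p) + (1 - gamma) * lo(p0),
    with lo(x) = log(x / (1 - x)). gamma < 1 gives the inverted-S distortion;
    the curve crosses the identity line at p = p0."""
    lo = lambda x: np.log(x / (1 - x))
    z = gamma * lo(p) + (1 - gamma) * lo(p0)
    return 1 / (1 + np.exp(-z))   # map distorted log-odds back to probability
```

With gamma = 0.6 and p0 = 0.5, small probabilities are overestimated and large ones underestimated, producing the inverted-S shape of Figure 2A.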

It is noteworthy that LLO was used only as a way to quantify probability distortions and their differences between conditions. The LLO model by itself cannot explain why the slope of distortion would differ between numerosity conditions, as observed here (Fig. 2B). In contrast, the observed difference in γ̂ between the N-small and N-large conditions could be well captured by the bounded log-odds (BLO) model (Zhang et al., 2020), which compensates for the p(1–p) form of representational uncertainty, with greater γ̂ implying lower representational uncertainty. Next we present the BLO model and how it may account for our behavioral results. These modeling analyses provide behavioral evidence for the encoding of representational uncertainty in the brain, which motivates the subsequent MEG analyses. Readers who are not interested in behavioral modeling may feel free to skip some details of the modeling results.


Bounded Log-Odds model and its behavioral evidence

Zhang, Ren, and Maloney (2020) proposed the BLO model as a rational account of probability distortion, which can explain why probability distortion may vary with task or individual. BLO has three main assumptions (see Methods for details): (1) probability is internally represented as log-odds, (2) the representation is truncated to a bounded scale, and (3) representational uncertainty is compensated for in the final estimate of probability, so that estimates associated with higher uncertainty are discounted to a greater extent. A full description of BLO is beyond the scope of the present article, and we therefore focus only on the uncertainty compensation assumption below.

According to BLO, when representational uncertainty (denoted V(p̂)) is higher, the subjective estimate of probability depends less on the percept of the objective probability and more on a prior estimate, which implies that the higher the representational uncertainty, the shallower the slope of probability distortion. As illustrated in Figure 2C, the representational uncertainty modeled in our experiment (see Methods) is proportional to p(1–p), that is, an inverted-U-shaped function of p. Meanwhile, for a median subject, V(p̂) first slightly increases with N (Fig. 2C, from N=10 to N=20) and then dramatically decreases with N (Fig. 2C, from N=20 to N=900). Consequently, BLO predicts that the slope of distortion γ̂ is not necessarily constant, but may slightly decrease with N for very small N and mostly increase with larger N (Fig. 2D), which is indeed what we observed (Fig. 2E). As expected, linear mixed-effects model analyses showed that the slope of distortion (γ̂) increased with N (LMM3: F(3, 22.00) = 21.81, P < 0.001), whereas the crossover point of distortion (p̂0) hardly varied with N (LMM4: F(3, 22.00) = 2.11, P = 0.13).

Note that representational uncertainty V(p̂) here is not equal but proportional to p(1–p). In the neural analyses presented below, we chose to focus on the encoding of p(1–p) (as a proxy for representational uncertainty) rather than V(p̂), because V(p̂) was highly correlated with numerosity N and would thus be difficult to separate from the latter in brain activity. In contrast, p(1–p) and N were independent of each other by design. Besides, p(1–p) has a nice connection with the "second-order valuation" proposed by Lebreton et al. (2015).

Model versus data. Zhang et al. (2020) provided behavioral evidence for the BLO model in two different tasks, including relative-frequency judgment. Here we performed similar tests on our behavioral data, fitting the BLO model to the reported p̂ of each subject using maximum likelihood estimation. The performance of the LLO model served as a baseline. When evaluating LLO's performance in fitting the data, we estimated one set of parameters for all conditions instead of estimating different parameters for different conditions, as we did when applying LLO as a measuring tool for probability distortions. In Figure 2F, we plot the observed p̂ (smoothed and averaged across subjects) as a function of p separately for the different numerosity conditions and contrast it with the BLO and LLO model predictions. The BLO prediction agreed well with the observed probability distortion function, while LLO largely failed.

We further examined how well BLO could predict the observed numerosity effect on the slope of distortion. For each subject, we divided the trials of each numerosity condition evenly into two bins according to the value of N in the last display (i.e., the display that was reported) and then estimated γ̂ for each of the four bins. The pattern of γ̂ in the real data, an initial slight decrease followed by a dramatic increase with N, was quantitatively predicted by the fitted BLO model (Fig. 2E).

Model comparisons. Similar to Zhang et al. (2020), we performed a factorial model comparison (van den Berg et al., 2014) to test whether each of the three assumptions of the BLO model outperformed plausible alternative assumptions in fitting the behavioral data. Accordingly, we constructed 3 × 3 × 2 = 18 models and fit each model to each subject's data using maximum likelihood estimation (see Methods for details). The Akaike information criterion with a correction for sample size, AICc (Akaike, 1974; Hurvich & Tsai, 1989), which penalizes overfitting from additional free parameters, was used as the metric of goodness of fit; lower AICc indicates a better fit. For each subject, the model with the lowest AICc was used as a reference to compute ∆AICc for each model. According to the summed ∆AICc across subjects, BLO was the best of the 18 models (Fig. 3A). We further verified the superiority of BLO using a 10-fold cross-validation analysis, which reached the same conclusion (Fig. 3B).
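The AICc score behind this comparison is a direct transcription of the standard formula (lnL is the maximized log-likelihood, k the number of free parameters, n the number of data points):

```python
def aicc(log_lik, k, n):
    """AICc = -2*lnL + 2k + 2k(k+1)/(n - k - 1); the last term is the
    small-sample correction that penalizes extra free parameters more
    heavily when n is small (Hurvich & Tsai, 1989)."""
    return -2.0 * log_lik + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def delta_aicc(scores):
    """Delta-AICc of each model relative to the best (lowest-AICc) model."""
    best = min(scores)
    return [s - best for s in scores]
```

Summing the per-subject ∆AICc values across subjects then ranks the 18 models as in Figure 3A.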

We also evaluated whether each of BLO's three assumptions was the best among its alternatives on the same dimension. In particular, we used group-level Bayesian model selection (Daunizeau et al., 2014; Rigoux et al., 2014; Stephan et al., 2009) to compute the probability that each specific model outperforms the other models in the model set ("protected exceedance probability") and marginalized the protected exceedance probabilities for each dimension (i.e., adding up the protected exceedance probabilities across the other two dimensions). Indeed, all three assumptions of the BLO model in the present study, log-odds representation, boundedness, and compensation for V(p̂) (with ns = bN^a), outperformed their alternatives, with probabilities of 87.7%, 94.6% and 99.8%, respectively (Fig. 3C).

Furthermore, we performed a model recovery analysis (similar to Correa et al., 2018) to confirm that the advantage of BLO in the factorial model comparison is real and does not result from model mis-identification. In particular, we generated 50 synthetic datasets for each of the 18 models. All the datasets generated from BLO were best fit by BLO. Of the 850 datasets generated from the other models, only 0.24% were mis-identified as BLO (Fig. 3D).

=================
Figure 3 about here
=================

We also verified that the parameters of BLO could be reasonably well identified. Even if some of the BLO parameters were not perfectly identified, this would not influence the results of the neural analyses reported below, most of which did not rely on the estimated BLO parameters.

Evidence for representational uncertainty in response time

The above modeling analyses of subjects' reported relative frequency (i.e. the model fits as well as the factorial model comparison) suggest that subjects compensated for representational uncertainty proportional to p(1–p), and that the N-large condition was associated with lower uncertainty than the N-small condition.

The response time (RT) of reporting relative frequency (defined as the interval between response screen onset and the first mouse movement) provides an additional index of representational uncertainty, given that lower uncertainty should lead to shorter RTs. Consistent with Lebreton et al. (2015), RTs were longest at p=0.5 and shortest when p was close to 0 or 1 (Fig. 2G, left). More precisely, RTs increased almost linearly with p(1–p) (Fig. 2G, right), in agreement with what one would expect if representational uncertainty is proportional to p(1–p). Moreover, RTs were shorter in N-large trials than in N-small trials, which echoes the lower V(p̂) for larger N (Fig. 2C) and provides further evidence that the N-large condition was accompanied by lower representational uncertainty. According to a linear mixed-effects model analysis (LMM5), the logarithm of RT, log(RT), increased with p(1–p) (F(1, 22.35) = 50.34, P < 0.001), but decreased with N (F(1, 26.16) = 34.45, P < 0.001) and p (F(1, 25.95) = 7.00, P = 0.014).


In sum, we found that a considerable portion of the variability in subjects' probability distortion functions and RT patterns can be accounted for by differences in representational uncertainty. One may wonder whether the effects of representational uncertainty can be explained away by task difficulty. We doubt it, because higher representational uncertainty does not necessarily correspond to higher task difficulty. For example, representational uncertainty for relative-frequency estimation is proportional to p(1–p), which is maximal at p = 0.5 and minimal when p is close to 0 or 1. In contrast, regarding difficulty, there seems to be little reason to expect that relative-frequency estimation itself should be more difficult for p = 1/2 than for p = 1/3, or more difficult for p = 1/3 than for p = 1/4. As another counterexample, the representational uncertainty modeled in the present study is higher in the N-small condition than in the N-large condition (Fig. 2C), but there seems to be little reason to expect relative-frequency estimation to be more difficult for displays with fewer dots. It is only when the brain tries to compensate for potential sampling errors in the estimation that it may find more uncertainty (but not difficulty) in representing probability values closer to 0.5 than in representing those closer to 0 or 1, and likewise different levels of uncertainty for different numerosities of dots (Zhang et al., 2020).
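The two properties invoked above can be verified with a few lines of arithmetic. This is a plain numerical illustration of the stated formulas, not an analysis of the paper's data: the uncertainty proxy u(p) = p(1–p) peaks at p = 0.5, and the sampling variance of a relative-frequency estimate from N items, Var(p̂) = p(1–p)/N, shrinks as N grows.

```python
# u(p) = p(1-p): maximal at p = 0.5, smaller toward 0 or 1
def u(p):
    return p * (1 - p)

# sampling variance of the relative-frequency estimate from N dots
def var_p_hat(p, N):
    return u(p) / N

print(u(0.5), u(1 / 3), u(1 / 4))            # 0.25 > ~0.222 > 0.1875
print(var_p_hat(0.5, 300), var_p_hat(0.5, 600))
```

So p = 1/2 carries more representational uncertainty than p = 1/3 or p = 1/4, and larger N yields lower uncertainty, without either ordering implying anything about task difficulty.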


Neural entrainment to periodic changes of p or p(1–p)

After showing that the BLO models capture the behavioral results well, we next examined whether the brain response could track the periodically changing p values (P-cycle) or p(1–p) values (U-cycle) in the stimulus sequence. Recall that the P-cycle and U-cycle trials had identical individual displays and differed only in the ordering of the displays: In P-cycle trials, p alternated between small and large values in cycles of 3.33 Hz while p(1–p) was aperiodic; in U-cycle trials, p(1–p) alternated in cycles of 3.33 Hz while p was aperiodic.

As shown in Figure 4, the brain response indeed tracked the periodic changes of p and p(1–p). In particular, in P-cycle trials (Fig. 4, left), the phase coherence between the periodic p values and the MEG time series reached significance at 3.33 Hz (permutation test, FDR-corrected P_FDR < 0.05), mainly in the occipital and parietal sensors. Importantly, significant phase coherence was also found at 3.33 Hz in U-cycle trials between the periodic p(1–p) values and the MEG time series (Fig. 4, right). That is, the brain activity was also entrained to the periodic changes of the representational uncertainty of p (i.e., p(1–p)). Given that subjects were only asked to track the value of p but not p(1–p), the observed neural entrainment to the task-irrelevant p(1–p) suggests an automatic encoding of representational uncertainty in the brain.
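The phase-coherence logic can be sketched as follows. This is a generic cross-trial phase-consistency sketch in the spirit of the measure cited from Kramer & Eden (2016), not the authors' exact pipeline: at the 3.33 Hz entrainment frequency, a signal with a consistent phase lag relative to the stimulus cycle yields coherence near 1, while pure noise yields coherence near 0. All data and parameters below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur, f0 = 200.0, 3.0, 10 / 3            # sampling rate (Hz), trial length (s), 3.33 Hz
t = np.arange(0, dur, 1 / fs)
stim = np.sin(2 * np.pi * f0 * t)           # the periodic stimulus variable

def phase_at(x, f):
    """Phase of the Fourier component of x closest to frequency f."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return np.angle(np.fft.rfft(x)[np.argmin(np.abs(freqs - f))])

def coherence(trials):
    """Magnitude of the mean unit phasor of the stimulus-signal phase difference."""
    diffs = [phase_at(x, f0) - phase_at(stim, f0) for x in trials]
    return np.abs(np.mean(np.exp(1j * np.array(diffs))))

entrained = [np.sin(2 * np.pi * f0 * t - 0.8) + rng.normal(0, 1, len(t))
             for _ in range(50)]            # fixed 0.8 rad lag plus noise
noise_only = [rng.normal(0, 1, len(t)) for _ in range(50)]

c_entrained, c_noise = coherence(entrained), coherence(noise_only)
```

Entrained trials give high coherence at the target frequency despite heavy noise, and unstructured trials do not, which is the contrast the sensor-level test exploits.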

=================
Figure 4 about here
=================

Notably, the significant phase coherences between the periodic p or p(1–p) values and the MEG time series did not arise simply because the MEG time series contained 3.33 Hz frequency components. In a control analysis, we calculated the phase coherence between the aperiodic variable (i.e., p in U-cycle trials or p(1–p) in P-cycle trials) and the same MEG time series, and found no significant peaks at 3.33 Hz. Moreover, we had carefully controlled potential confounding variables in our experimental design. For example, p and p(1–p) were linearly independent of each other, and both had negligible correlations with numerosity and with low-level visual features such as luminance, contrast, and color variance (see Methods for details). Furthermore, unlike p and p(1–p), these potential confounding variables were associated with similar entrainment responses in P-cycles and U-cycles.
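The linear independence of p and p(1–p) admits a quick numerical check. Under the sampling scheme described in the paper (p uniform, hence symmetric around 0.5), p(1–p) is an even function of p − 0.5, so the two are uncorrelated; the simulation below is illustrative, not the paper's stimulus set.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.1, 0.9, 200_000)          # symmetric around 0.5
r = np.corrcoef(p, p * (1 - p))[0, 1]       # correlation of p with p(1-p)
print(r)                                     # close to 0
```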


Behavioral probability distortion and the neural entrainment to p and p(1–p)

We next examined the relationship between behavioral probability distortion and the neural entrainment to p and p(1–p) on a subject-by-subject basis. Specifically, the phase coherence between a specific variable (p or p(1–p)) and the MEG time series can be considered as a measure of the strength of neural responses to the variable (Kramer & Eden, 2016). We defined

Λ = C_p / C_{p(1–p)}  (13)

to quantify the relative strength of p to p(1–p) in neural responses, where C_p denotes the phase coherence for p in P-cycle trials, and C_{p(1–p)} denotes the phase coherence for p(1–p) in U-cycle trials. A higher value of Λ would imply a stronger neural encoding of relative frequency or a weaker encoding of representational uncertainty, and is thus expected to yield probability distortions of a greater slope (Zhang et al., 2020). Note that both the behavioral and neural measures we defined below were intra-subject ratios that were unitless and scale-free, and thus not subject to the potential scaling issues in inter-individual correlation (Lebreton, Bavard, Daunizeau, & Palminteri, 2019).

Given that the estimated slope of distortion γ̂ was greater in N-large trials than in N-small trials, we would expect Λ to change in the same direction across the numerosity conditions. That is, if we define Δγ̂ = ln(γ̂_{N-large} / γ̂_{N-small}) and ΔΛ = ln(Λ_{N-large} / Λ_{N-small}), there should be a positive correlation between Δγ̂ and ΔΛ.
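These ratio measures can be sketched in a few lines. The values below are entirely synthetic and the symbol names (C, Λ, γ̂) are reconstructions for illustration; the point is only the form of the computation: per-condition coherence ratios, log-changes across conditions, and a cross-subject correlation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj = 21
c_p = rng.uniform(0.2, 0.6, (n_subj, 2))      # coherence for p: [N-small, N-large]
c_u = rng.uniform(0.2, 0.6, (n_subj, 2))      # coherence for p(1-p): [N-small, N-large]
lam = c_p / c_u                                # Eq. 13, per subject and condition
d_lambda = np.log(lam[:, 1] / lam[:, 0])       # ln(Lambda_N-large / Lambda_N-small)
# synthetic behavioral slopes built to covary with the neural measure
d_gamma = 0.8 * d_lambda + rng.normal(0, 0.2, n_subj)
r = np.corrcoef(d_lambda, d_gamma)[0, 1]       # cross-subject Pearson correlation
```

Because both quantities are intra-subject log-ratios, any subject-specific multiplicative scaling of the coherences or slopes cancels out, which is the scale-free property noted above.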

As shown in Figure 5, there was indeed a significant correlation between the behavioral and neural measures across subjects in the fronto-parietal region (Pearson's r = 0.67, FDR-corrected one-tailed P_FDR = 0.033), associating the neural entrainment to p and p(1–p) with probability distortion behaviors.

=================
Figure 5 about here
=================


Neural encoding of p and p(1–p) over time and the associated brain regions

After establishing the automatic tracking of cyclic changes in p(1–p) and its behavioral relevance, we next aimed to delineate the temporal dynamics of p and p(1–p) in the brain signals. Recall that on each trial one of the two variables of interest (p or p(1–p)) changed periodically while the other was aperiodic. Therefore, we could decode the temporal course of the aperiodic p(1–p) in P-cycle trials and of the aperiodic p in U-cycle trials.

In particular, we performed a time-resolved decoding analysis based on all 306 sensors (see Methods) using a regression approach that has been employed in previous EEG and MEG studies (Crosse et al., 2016; Wyart et al., 2012), including ours (Huang, Jia, Han, & Luo, 2018; Jia, Liu, Fang, & Luo, 2017; Liu, Wang, Zhou, Ding, & Luo, 2017). The intuition behind the time-resolved decoding analysis is as follows. Suppose the onset of each new p or p(1–p) value in the stimulus sequence evokes phase-locked brain responses that extend across time and whose magnitudes are proportional to the encoded value. The resulting MEG time series would then be a superposition of the responses to all the preceding stimuli in the sequence. But as long as the values of an encoded variable are uncorrelated over time (i.e., free of autocorrelation), their brain response profiles are separable. In particular, for any stimulus in the sequence, we can use the MEG signals at a specific delay after the stimulus onset to predict the value of the stimulus. Importantly, unlike the phase coherence analysis (Fig. 4), which only reflects the overall strength of the brain responses, this time-resolved decoding analysis allows us to assess how p and p(1–p) are encoded over time.
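The intuition above can be sketched with a toy simulation. This is a synthetic single-sensor example under assumed parameters (a Gaussian response kernel peaking at 300 ms, 150 ms stimulus onset asynchrony), not the authors' 306-sensor regression: each stimulus adds a value-scaled copy of the kernel, the copies overlap, and yet the stimulus value can be read out by correlating values with the signal at a fixed delay after onset.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, soa = 1000, 0.150                        # sample rate (Hz), 150 ms between displays
n_stim = 400
values = rng.uniform(0.1, 0.9, n_stim)       # aperiodic stimulus values (no autocorrelation)
onsets = (np.arange(n_stim) * soa * fs).astype(int)

kernel_t = np.arange(int(0.5 * fs)) / fs     # 500 ms response kernel
h = np.exp(-(kernel_t - 0.3) ** 2 / (2 * 0.05 ** 2))  # assumed kernel peaking ~300 ms

sig = np.zeros(onsets[-1] + len(h))
for k, on in enumerate(onsets):
    sig[on:on + len(h)] += values[k] * h     # overlapping, value-scaled responses
sig += rng.normal(0, 0.5, len(sig))

def decode(delay):
    """Decoding performance: correlation between stimulus values and the
    signal sampled at a fixed delay after each stimulus onset."""
    return np.corrcoef(values, sig[onsets + int(delay * fs)])[0, 1]

delays = np.arange(0.0, 0.5, 0.01)
perf = np.array([decode(d) for d in delays])
peak_delay = delays[np.argmax(perf)]         # recovers the kernel's latency
```

Decoding performance as a function of delay recovers the latency of the underlying response kernel, which is the sense in which the analysis resolves when a variable is encoded.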

Figure 6 plots the temporal course of the decoding performance for p and p(1–p). We found that both p and p(1–p) were successfully decoded from the MEG signals in N-large trials (cluster-based permutation test, P_cluster < 0.05). In particular, the decoding performance for p peaked ~327 ms after stimulus onset (Fig. 6A), while that for p(1–p) peaked ~419 ms after stimulus onset (Fig. 6B). In other words, the neural encodings of relative frequency and its representational uncertainty had distinct time courses, with the latter occurring approximately 100 ms later than the former (95% confidence interval: 17–194 ms).

As a confirmation of our decoding results from aperiodic sequences, we performed an additional time-resolved decoding analysis for the periodic stimulus sequences, that is, p in P-cycle trials and p(1–p) in U-cycle trials. Although the periodicity of the stimulus sequence prevented us from separating the neural responses at time t from those at t+150 ms, t+300 ms, t+450 ms, etc., the largest peaks decoded from these periodic sequences fell into the same time windows as those of the aperiodic sequences (i.e., 225–350 ms for p and 308–475 ms for p(1–p)).


=================
Figure 6 about here
=================

In contrast, none of the potential confounding variables (numerosity, luminance, etc.) showed temporal courses similar to those of p or p(1–p) when the same time-resolved decoding procedure was applied. The automatic encoding of the task-irrelevant p(1–p) might seem surprising. To further verify that it was not an artifact of potential low-level confounds, we performed an additional decoding analysis for p(1–p) based on the cross-validated version of confound regression (CVCR) (Snoek et al., 2019), where the confounding variables were regressed out from the MEG time series before the time-resolved decoding analysis (see Methods for details). The decoded temporal course for p(1–p) was little changed (Fig. 7A). A similar decoding analysis for the complete form of representational uncertainty, V(p̂), resulted in a temporal course similar to that of p(1–p) (Fig. 7B). In sum, we found automatic encoding of p(1–p) in brain signals even when it was task-irrelevant and when low-level confounding factors were excluded.
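The core idea of cross-validated confound regression can be sketched as follows. This is a minimal one-confound illustration of the CVCR principle (Snoek et al., 2019) on synthetic data, not the authors' MEG pipeline: the confound model is fitted on training data only and its weights are then used to residualize held-out data, so confound removal itself cannot leak information into the test set.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
confound = rng.normal(size=n)                 # e.g., a low-level visual feature
feature = rng.normal(size=n)                  # the quantity to be decoded
signal = feature + 2.0 * confound + rng.normal(0, 0.5, n)

train, test = np.arange(1600), np.arange(1600, n)

# fit the confound model on the training set only
C_train = np.column_stack([np.ones(train.size), confound[train]])
w = np.linalg.lstsq(C_train, signal[train], rcond=None)[0]

# residualize held-out data with the training-set weights
C_test = np.column_stack([np.ones(test.size), confound[test]])
resid = signal[test] - C_test @ w

r_conf = np.corrcoef(resid, confound[test])[0, 1]   # near 0: confound removed
r_feat = np.corrcoef(resid, feature[test])[0, 1]    # large: feature preserved
```

After residualization, the held-out signal no longer correlates with the confound while the decodable feature survives, which is what licenses the conclusion that the p(1–p) decoding is not carried by low-level confounds.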

=================
Figure 7 about here
=================

Further, based on the time windows that achieved the highest time-resolved decoding performance, we performed spatial decoding analyses at each sensor location separately for p and p(1–p) to assess the associated brain regions (see Methods). The topographies using magnetometers, pairs of gradiometers, or both are shown in Figure 6. We found that the encoding of the two variables involved overlapping parietal-occipital regions, including the fronto-parietal region where ΔΛ and Δγ̂ were positively correlated (Fig. 5).


Discussion

We used MEG to investigate probability distortions in a relative-frequency estimation task and found that the human brain encodes not only the task-relevant relative frequency but also its task-irrelevant representational uncertainty. The neural encoding of the representational uncertainty occurs as early as approximately 400 ms after stimulus onset. These results suggest that the human brain automatically and quickly quantifies the uncertainty inherent in probability information. Our findings provide neural evidence for the theoretical hypothesis that probability distortion is related to representational uncertainty. More generally, these findings may connect to the functional role of confidence (the estimation of uncertainty) in judgment and decision making.


Neural computations underlying probability distortions

Humans show highly similar probability distortions in tasks involving probability or relative frequency, with subjective probability typically an inverted-S-shaped function of objective probability (Zhang & Maloney, 2012). Why do people distort probability information in such a systematic way? Though inverted-S-shaped distortions of probability had been found in the activity of a few brain regions (Hsu et al., 2009; Tobler, Christopoulos, O'Doherty, Dolan, & Schultz, 2008), the neural computations involved in transforming objective probabilities into subjective probabilities were largely unknown. Several theories (Fennell & Baddeley, 2012; Martins, 2006; See & Fox, 2006; Zhang et al., 2020) rationalize probability distortion as a consequence of compensating for representational uncertainty, in accordance with the framework of Bayesian Decision Theory (Maloney & Zhang, 2010). In brief, the brain discounts an internal representation of probability according to the level of uncertainty so that it can achieve a more reliable estimate of the objective probability: the higher the uncertainty associated with a representation, the more the representation is discounted. By modeling human subjects' probability distortion behavior in two representative tasks (relative-frequency estimation and decision under risk), Zhang et al. (2020) found evidence that representational uncertainty proportional to p(1–p) is compensated for in the human representation of probability or relative frequency.

What we found in the present study is neural evidence for the encoding of p(1–p) during the evaluation of p. For the first time, we have shown that the representational uncertainty p(1–p) is not just a virtual quantity assumed in computational models of probability distortion but is actually computed in the brain.


Representational uncertainty and confidence

The representational uncertainty that concerns us lies in the internal representation of probability or relative frequency. One concept that may occasionally coincide with such epistemic uncertainty but is entirely different is outcome uncertainty, which is widely studied in the literature on decision under risk (Fiorillo, Tobler, & Schultz, 2003; Tobler, O'Doherty, Dolan, & Schultz, 2007), prediction error (Fiorillo et al., 2003), and surprise (Preuschoff, 't Hart, & Einhäuser, 2011; van Lieshout, Vandenbroucke, Müller, Cools, & de Lange, 2018). For example, for a two-outcome gamble with probability p of receiving x1 and probability 1–p of receiving x2, the variance of the outcomes is proportional to p(1–p). In the present study, the relative-frequency estimation task we chose to investigate does not involve similar binary uncertain outcomes and thus avoids the possible confusion of outcome uncertainty with representational uncertainty. Therefore, the neural encoding of p(1–p) we found here cannot be attributed to the perception of outcome uncertainty.

The latency and spatial topography of the neural encoding of representational uncertainty reported here may remind the reader of the P300 event-related potential (ERP) component, a well-known neural marker of surprise (Donchin, 1981; Mars et al., 2008; Sutton, Braren, Zubin, & John, 1965): rarer and thus more surprising events in a stimulus sequence evoke a stronger positive ERP component that peaks ~300 ms after stimulus onset. In the present study, however, the task-relevant stimulus value p was uniformly sampled between 0.1 and 0.9, which implies that no events in our stimulus sequences could be considered rarer or more surprising than any others. Therefore, the encoding of representational uncertainty we found at ~400 ms should not derive from the P300.


Another concept that may sound similar to but is actually different from representational uncertainty is task difficulty. Conceptually, representational uncertainty is closely related to the uncertainty of estimation due to random sampling errors (Zhang et al., 2020), whereas task difficulty is usually associated with cognitive demands or effort (Bankó, Gál, Körtvélyes, Kovács, & Vidnyánszky, 2011; Philiastides, Ratcliff, & Sajda, 2006). As we reasoned earlier, p(1–p) is positively correlated with the representational uncertainty of p, but not necessarily with task difficulty. Moreover, previous EEG or MEG studies on task difficulty did not report the fast phasic neural responses that we found here for encoding representational uncertainty (peaking at ~400 ms). In particular, though a negative ERP component as early as ~220 ms, whose amplitude increased with the level of visual noise in an image classification task, was once identified as a neural signature of decision difficulty (Philiastides et al., 2006), it was later found to be a specific neural response to visual noise level rather than to difficulty in general (Bankó et al., 2011).

In fact, representational uncertainty is more closely related to confidence, except that it is not necessarily bound to any specific judgment or decision (Pouget, Drugowitsch, & Kepecs, 2016). Specifically, the encoding of representational uncertainty may be understood as a special case of the "second-order valuation" proposed by Lebreton et al. (2015). As described earlier, Lebreton et al. (2015) found a principled relationship between an overt numerical judgment and the individual's confidence about the judgment, with the latter being a quadratic function of the former. They showed in fMRI studies that this quadratic form, as a proxy for confidence, is automatically encoded in vmPFC, even when a confidence rating is not explicitly required. Using the same quadratic form as the proxy, here we found automatic encoding of the representational uncertainty of relative frequency during the tracking of a visual sequence. Meanwhile, our findings extend previous work and contribute to the confidence literature in the following three aspects.

First, we found automatic encoding of representational uncertainty even in the absence of overt judgment, not just in the absence of overt confidence estimation. In our experiment, subjects were asked to track the relative frequency in the stimulus sequence but made no overt judgment except for the last display. Moreover, displays were refreshed every 150 ms, apparently leaving no time for deliberation.

Second, we resolved the temporal course of the encoding of representational uncertainty, which would be inaccessible under the low temporal resolution of fMRI. In particular, representational uncertainty is encoded as early as 400 ms after stimulus onset, approximately 100 ms after the encoding of relative frequency itself. The parietal-occipital region is involved in the processing of both relative frequency and its representational uncertainty, but during different time windows. The fast encoding of representational uncertainty we found echoes recent psychophysiological findings in humans (Gherman & Philiastides, 2015, 2018; Zizlsperger, Sauvigny, Händel, & Haarmeier, 2014) and non-human primates (Kiani & Shadlen, 2009) that confidence encoding for perceptual judgments can be detected in the brain well before confidence rating and even prior to the overt judgment or decision.

Third, our findings connect to the functional importance of confidence encoding in information processing (Bahrami et al., 2010; Koriat, 2015; Pouget et al., 2016), which is still not well understood but is receiving growing attention. In particular, here we asked how the neural encoding of representational uncertainty may relate to probability distortion. By comparing experimental conditions under which the same individuals' distortions of relative frequency differed, we found that a relatively stronger response to representational uncertainty in the fronto-parietal region corresponds to a shallower slope of distortion. We conjecture that the automatic, fast encoding of representational uncertainty might not be just a post hoc evaluation but may indeed regulate probability distortion. Meanwhile, we are aware that our current findings are based on correlational analyses, and whether there is a causal link between the neural encoding of representational uncertainty and probability distortion still awaits future empirical tests.


Methodology implications

The "steady-state responses" (SSR) technique (Norcia et al., 2015), which uses rapid periodic stimulus sequences to entrain brain responses, has been widely used with EEG/MEG to investigate low-level perceptual processes; it increases the signal-to-noise ratio of detecting the automatic brain responses to the stimuli at the cost of temporal information. In our experiment, we constructed periodic p or p(1–p) sequences and showed that SSR can also be used to reveal the brain's automatic responses to these more abstract variables. Moreover, we demonstrated the feasibility of performing time-resolved decoding for the other, aperiodic variable embedded in the same sequence, thereby exploiting the advantages of both the SSR and time-resolved decoding techniques. Such a design would be useful for a broad range of problems that require dissociating the processing of two or more variables in brain activity.
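The design can be sketched for the P-cycle case. This is an illustrative construction under assumed parameters (64 displays per trial, alternation on every display, a 6.67 Hz display rate so the alternation frequency is 3.33 Hz), not the authors' stimulus generator: drawing an aperiodic p(1–p) target sequence and assigning the small root of p(1–p) = u to even displays and the large root to odd displays makes p periodic while p(1–p) stays aperiodic.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64                                        # displays per trial (assumed)
u = rng.uniform(0.09, 0.24, n)                # target p(1-p) values, aperiodic
# invert p(1-p) = u: small root on even displays, large root on odd displays,
# so p alternates around 0.5 while p(1-p) follows the aperiodic u sequence
half = np.sqrt(0.25 - u)
p = np.where(np.arange(n) % 2 == 0, 0.5 - half, 0.5 + half)

def nyquist_share(x):
    """Fraction of spectral amplitude at the alternation frequency (the
    Nyquist bin; 3.33 Hz for a 6.67 Hz display rate)."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    return spec[-1] / spec.sum()

share_p = nyquist_share(p)                    # large: p is periodic at 3.33 Hz
share_u = nyquist_share(p * (1 - p))          # small: p(1-p) stays aperiodic
```

Swapping the roles of the two variables yields the U-cycle case, so one quantity entrains SSR-style responses while the other remains available for time-resolved decoding.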



References

Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions 1028 on Automatic Control, 19(6), 716-723. DOI: 1029 https://doi.org/10.1109/TAC.1974.1100705 1030

Aljuhani, K. H., & Al turk, L. I. (2014). Modification of the adaptive Nadaraya-Watson 1031 kernel regression estimator. Scientific Research and Essays, 9(22), 966-971. 1032 DOI: https://doi.org/10.5897/SRE2014.6121 1033

Arlot, S., & Celisse, A. (2010). A survey of cross-validation procedures for model 1034 selection. Statistics Surveys, 4, 40-79. DOI: https://doi.org/10.1214/09-SS054 1035

Attneave, F. (1953). Psychological probability as a function of experienced frequency. 1036 Journal of Experimental Psychology, 46(2), 81-86. DOI: 1037 https://doi.org/10.1037/h0057955 1038

Bahrami, B., Olsen, K., Latham, P. E., Roepstorff, A., Rees, G., & Frith, C. D. (2010). 1039 Optimally interacting minds. Science, 329, 1081-1085. DOI: 1040 https://doi.org/10.1126/science.1185718 1041

Bankó, É. M., Gál, V., Körtvélyes, J., Kovács, G., & Vidnyánszky, Z. (2011). 1042 Dissociating the effect of noise on sensory processing and overall decision 1043 difficulty. The Journal of Neuroscience, 31(7), 2663-2674. DOI: 1044 https://doi.org/10.1523/JNEUROSCI.2725-10.2011 1045

Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for 1046 confirmatory hypothesis testing: Keep it maximal. Journal of Memory and 1047 Language, 68(3), 255-278. DOI: https://doi.org/10.1016/j.jml.2012.11.001 1048

Bates, D. M., Kliegl, R., Vasishth, S., & Baayen, H. (2015). Parsimonious mixed 1049 models. arXiv preprint, arXiv:1506.04967 [stat.ME]. DOI: 1050 https://arxiv.org/abs/1506.04967 1051

Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: a practical 1052 and powerful approach to multiple testing. Journal of the Royal Statistical 1053 Society, 57(1), 289-300. DOI: 1054 https://doi.org/10.1111/j.2517-6161.1995.tb02031.x 1055

Benjamini, Y., & Yekutieli, D. (2001). The control of the false discovery rate in multiple 1056 testing under dependency. The Annals of Statistics, 29(4), 1165-1188. DOI: 1057 https://doi.org/10.1214/aos/1013699998 1058

Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10, 433-436. DOI: 1059 https://doi.org/10.1163/156856897X00357 1060

Cochran, W. G. (1977). Sampling techniques. New York: John Wiley & Sons. DOI. 1061 Retrieved from 1062 https://www.wiley.com/en-gb/Sampling+Techniques%2C+3rd+Edition-p-978041063 71162407 1064

Constantinople, C. M., Piet, A. T., & Brody, C. D. (2019). An analysis of decision under 1065 risk in rats. Current Biology, 29, 1-9. DOI: 1066 https://doi.org/10.1016/j.cub.2019.05.013 1067

Correa, C. M. C., Noorman, S., Jiang, J., Palminteri, S., Cohen, M. X., Lebreton, M., & 1068

Page 57: Automatic and fast encoding of representational

56

Gaal, S. v. (2018). How the level of reward awareness changes the 1069 computational and electrophysiological signature of reinforcement learning. 1070 The Journal of Neuroscience, 38(48), 10338-10348. DOI: 1071 https://doi.org/10.1523/JNEUROSCI.0457-18.2018 1072

Crosse, M. J., Liberto, G. M., Bednar, A., & Lalor, E. C. (2016). The Multivariate 1073 Temporal Response Function (mTRF) Toolbox: A MATLAB Toolbox for 1074 Relating Neural Signals to Continuous Stimuli. frontiers in Human 1075 Neuroscience, 10, 604-617. DOI: https://doi.org/10.3389/fnhum.2016.00604 1076

Daunizeau, J., Adam, V., & Rigoux, L. (2014). VBA: a probabilistic treatment of 1077 nonlinear models for neurobiological and behavioural data. PLoS 1078 Computational Biology, 10(1). DOI: 1079 https://doi.org/10.1371/journal.pcbi.1003441 1080

Donchin, E. (1981). Surprise! … Surprise? Psychophysiology, 18(5), 493-513. DOI: 1081 https://doi.org/10.1111/j.1469-8986.1981.tb01815.x 1082

Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap. New York: 1083 Chapman & Hall/CRC DOI: https://doi.org/10.1201/9780429246593 1084

Erev, I., Wallsten, T. S., & Budescu, D. V. (1994). Simultaneous over- and 1085 underconfidence: the role of error in judgment processes. Psychological 1086 Review, 101(3), 519-527. DOI: https://doi.org/10.1037/0033-295X.101.3.519 1087

Fai, A. H.-T., & Cornelius, P. L. (1996). Approximate F-tests of multiple degree of 1088 freedom hypotheses in generalized least squares analyses of unbalanced 1089 split-plot experiments. Journal of Statistical Computation and Simulation, 54(4), 1090 363-378. DOI: https://doi.org/10.1080/00949659608811740 1091

Farnsworth, D. (1943). The Farnsworth-Munsell 100 Hue and Dichotomous Test for 1092 Color Vision. Journal of the Optical Society if America, 33(10), 568-578. DOI: 1093 https://doi.org/10.1364/JOSA.33.000568 1094

Fennell, J., & Baddeley, R. (2012). Uncertainty plus prior equals rational bias: an 1095 intuitive Bayesian probability weighting function. Psychological Review, 119(4), 1096 878-887. DOI: https://doi.org/10.1037/a0029346 1097

Ferrari-Toniolo, S., Bujold, P. M., & Schultz, W. (2019). Probability distortion depends 1098 on choice sequence in Rhesus Monkeys. The Journal of Neuroscience, 39(15), 1099 2915-2929. DOI: https://doi.org/10.1523/JNEUROSCI.1454-18.2018 1100

Fiorillo, C. D., Tobler, P. N., & Schultz, W. (2003). Discrete coding of reward probability 1101 and uncertainty by dopamine neurons. Science, 299(5614), 1898-1902. DOI: 1102 https://doi.org/10.1126/science.1077349 1103

Fornaciai, M., Brannon, E. M., Woldorff, M. G., & Park, J. (2017). Numerosity 1104 processing in early visual cortex. NeuroImage, 157, 429-438. DOI: 1105 https://doi.org/10.1016/j.neuroimage.2017.05.069 1106

Gherman, S., & Philiastides, M. G. (2015). Neural representations of confidence 1107 emerge from the process of decision formation during perceptual choices. 1108 NeuroImage, 106, 134-143. DOI: 1109 https://doi.org/10.1016/j.neuroimage.2014.11.036 1110

Page 58: Automatic and fast encoding of representational

57

Gherman, S., & Philiastides, M. G. (2018). Human VMPFC encodes early signatures 1111 of confidence in perceptual decisions. eLIFE, 7, e38293. DOI: 1112 https://doi.org/10.7554/eLife.38293 1113

Giesbrecht, F. G., & Burns, J. C. (1985). Two-stage analysis based on a mixed model: 1114 large-sample asymptotic theory and small-sample simulation results. 1115 International Biometric Society, 41(2), 477-486. DOI: 1116 https://doi.org/10.2307/2530872 1117

Gigerenzer, G., Hoffrage, U., & Kleinbölting, H. (1991). Probabilistic mental models: a 1118 Brunswikian theory of confidence. Psychological Review, 98(4), 506-528. DOI: 1119 https://doi.org/10.1037/0033-295X.98.4.506 1120

Gonçalves, N. R., Whelan, R., Foxe, J. J., & Lalor, E. C. (2014). Towards obtaining 1121 spatiotemporally precise responses to continuous sensory stimuli in humans: 1122 A general linear modeling approach to EEG. NeuroImage, 97, 196-205. DOI: 1123 https://doi.org/10.1016/j.neuroimage.2014.04.012 1124

Gonzalez, R., & Wu, G. (1999). On the Shape of the Probability Weighting Function. 1125 Cognitive Psychology, 38, 129-166. DOI: 1126 https://doi.org/10.1006/cogp.1998.0710 1127

Groppe, D. M., Urbach, T. P., & Kutas, M. (2011). Mass univariate analysis of event1128 related brain potentials/fields I: A critical tutorial review. Psychophysiology, 1129 48(12), 1711-1725. DOI: https://doi.org/10.1111/j.1469-8986.2011.01273.x 1130

Hsu, M., Krajbich, I., Zhao, C., & Camerer, C. F. (2009). Neural response to reward 1131 anticipation under risk is nonlinear in probabilities. Journal of Neuroscience, 1132 29(7), 2231-2237. DOI: https://doi.org/10.1523/JNEUROSCI.5296-08.2009 1133

Huang, Q., Jia, J., Han, Q., & Luo, H. (2018). Fast-backward replay of sequentially 1134 memorized items in humans. eLIFE, 7, e35164. DOI: 1135 https://doi.org/10.7554/eLife.35164 1136

Hurvich, C. M., & Tsai, C. L. (1989). Regression and time series model selection in 1137 small samples. Biometrika, 76(2), 297-307. DOI: 1138 https://doi.org/10.1093/biomet/76.2.297 1139

Jia, J., Liu, L., Fang, F., & Luo, H. (2017). Sequential sampling of visual objects during 1140 sustained attention. PLOS Biology, 15(6), 1-19. DOI: 1141 https://doi.org/10.1371/journal.pbio.2001903 1142

Kiani, R., & Shadlen, M. N. (2009). Representation of confidence associated with a 1143 decision by neurons in the parietal cortex. Science, 324(8), 759-763. DOI: 1144 https://doi.org/10.1126/science.1169405 1145

Koriat, A. (2015). Metacognition: decision-making processes in self-monitoring and 1146 self-regulation. In G. Keren & G. Wu (Eds.), The Wiley Blackwell handbook of 1147 judgment and decision making (Vol. 1, pp. 356-379). Malden, MA: Wiley 1148 Blackwell. DOI: https://doi.org/10.1002/9781118468333.ch12 1149

Kramer, M. A., & Eden, U. T. (2016). Analysis of coupled rhythms in an invasive 1150 electrocorticogram. In T. J. Sejnowski & T. A. Poggio (Eds.), Case studies in 1151 neural data analysis: A guide for the practicing neuroscientist (pp. 123-149). 1152

Page 59: Automatic and fast encoding of representational

58

United States of America: Cambridge, MA: The MIT Press. DOI. Retrieved 1153 from https://b-ok.cc/book/3417232/2c0c79 1154

Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. B. (2017). lmerTest Package: 1155 Tests in Linear Mixed Effects Models. Journal of Statistical Software, 82(13), 1156 1-26. DOI: https://doi.org/10.18637/jss.v082.i13 1157

Lalor, E. C., Pearlmutter, B. A., Reilly, R. B., McDarby, G., & Foxe, J. J. (2006). The 1158 VESPA: A method for the rapid estimation of a visual evoked potential. 1159 NeuroImage, 32, 1549-1561. DOI: 1160 https://doi.org/10.1016/j.neuroimage.2006.05.054 1161

Lebreton, M., Abitbol, R., Daunizeau, J., & Pessiglione, M. (2015). Automatic 1162 integration of confidence in the brain valutation signal. Nature Neuroscience, 1163 18(8), 1159-1167. DOI: https://doi.org/10.1038/nn.4064 1164

Lebreton, M., Bavard, S., Daunizeau, J., & Palminteri, S. (2019). Assessing 1165 inter-individual differences with task-related functional neuroimaging. Nature 1166 Human Behaviour, 3, 897-905. DOI: 1167 https://doi.org/10.1038/s41562-019-0681-8 1168

Lichtenstein, S., Slovic, P., Fischhoff, B., Layman, M., & Combs, B. (1978). Judge 1169 Frequency of Lethal Events. Journal of Experimental Psychology: Human 1170 Learning and Memory, 4(6), 551-578. DOI: 1171 https://doi.org/10.1037/0278-7393.4.6.551 1172

Liu, L., Wang, F., Zhou, K., Ding, N., & Luo, H. (2017). Perceptual integration rapidly 1173 activates dorsal visual pathway to guide local processing in early visual areas. 1174 PLOS Biology, 15(11), e2003646. DOI: 1175 https://doi.org/10.1371/journal.pbio.2003646 1176

Luce, R. D. (2000). Utility of gains and losses: measurement – theoretical, and 1177 experimental approaches. Mahwah, New Jersey, London: Lawrence Erlbaum 1178 Associates, Inc. DOI: https://doi.org/10.4324/9781410602831 1179

Maloney, L. T., & Zhang, H. (2010). Decision-theoretic models of visual perception and action. Vision Research, 50(23), 2362-2374. DOI: https://doi.org/10.1016/j.visres.2010.09.031

Maris, E., & Oostenveld, R. (2007). Nonparametric statistical testing of EEG- and MEG-data. Journal of Neuroscience Methods, 164, 177-190. DOI: https://doi.org/10.1016/j.jneumeth.2007.03.024

Mars, R. B., Debener, S., Gladwin, T. E., Harrison, L. M., Haggard, P., Rothwell, J. C., & Bestmann, S. (2008). Trial-by-trial fluctuations in the event-related electroencephalogram reflect dynamic changes in the degree of surprise. The Journal of Neuroscience, 28(47), 12539-12545. DOI: https://doi.org/10.1523/JNEUROSCI.2925-08.2008

Marti, S., King, J.-R., & Dehaene, S. (2015). Time-resolved decoding of two processing chains during dual-task interference. Neuron, 88, 1297-1307. DOI: https://doi.org/10.1016/j.neuron.2015.10.040

Martins, A. C. R. (2006). Probability biases as Bayesian inference. Judgment and Decision Making, 1(2), 108-117. Retrieved from http://journal.sjdm.org/jdm06124.pdf

Nadaraya, E. A. (1964). On Estimating Regression. Theory of Probability and its Applications, 9(1), 141-142. DOI: https://doi.org/10.1137/1109020

Norcia, A. M., Appelbaum, L. G., Ales, J. M., Cottereau, B. R., & Rossion, B. (2015). The steady-state visual evoked potential in vision research: A review. Journal of Vision, 15(6), 1-46. DOI: https://doi.org/10.1167/15.6.4

Oostenveld, R., Fries, P., Maris, E., & Schoffelen, J.-M. (2011). FieldTrip: open source software for advanced analysis of MEG, EEG and invasive electrophysiological data. Computational Intelligence and Neuroscience, 2011, 1-9. DOI: https://doi.org/10.1155/2011/156869

Park, J., DeWind, N. K., Woldorff, M. G., & Brannon, E. M. (2016). Rapid and direct encoding of numerosity in the visual stream. Cerebral Cortex, 26, 748-763. DOI: https://doi.org/10.1093/cercor/bhv017

Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spatial Vision, 10(4), 437-442. DOI: https://doi.org/10.1163/156856897X00366

Philiastides, M. G., Ratcliff, R., & Sajda, P. (2006). Neural representation of task difficulty and decision making during perceptual categorization: A timing diagram. The Journal of Neuroscience, 26(35), 8965-8975. DOI: https://doi.org/10.1523/JNEUROSCI.1655-06.2006

Pouget, A., Drugowitsch, J., & Kepecs, A. (2016). Confidence and certainty: distinct probabilistic quantities for different goals. Nature Neuroscience, 19(3), 366-374. DOI: https://doi.org/10.1038/nn.4240

Prelec, D. (1998). The Probability Weighting Function. Econometrica, 66(3), 497-527. DOI: https://doi.org/10.2307/2998573

Preuschoff, K., ’t Hart, B. M., & Einhäuser, W. (2011). Pupil dilation signals surprise: evidence for noradrenaline’s role in decision making. Frontiers in Neuroscience, 5(115), 1-12. DOI: https://doi.org/10.3389/fnins.2011.00115

Rigoux, L., Stephan, K. E., Friston, K. J., & Daunizeau, J. (2014). Bayesian model selection for group studies - Revisited. NeuroImage, 84, 971-985. DOI: https://doi.org/10.1016/j.neuroimage.2013.08.065

Schustek, P., Hyafil, A., & Moreno-Bote, R. (2019). Human confidence judgments reflect reliability-based hierarchical integration of contextual information. Nature Communications, 10, 5430. DOI: https://doi.org/10.1038/s41467-019-13472-z

See, K. E., & Fox, C. R. (2006). Between ignorance and truth: partition dependence and learning in judgment under uncertainty. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(6), 1385-1402. DOI: https://doi.org/10.1037/0278-7393.32.6.1385

Snoek, L., Miletić, S., & Scholte, H. S. (2019). How to control for confounds in decoding analyses of neuroimaging data. NeuroImage, 184, 741-760. DOI: https://doi.org/10.1016/j.neuroimage.2018.09.074

Stauffer, W. R., Lak, A., Bossaerts, P., & Schultz, W. (2015). Economic Choices Reveal Probability Distortion in Macaque Monkeys. The Journal of Neuroscience, 35(7), 3146-3154. DOI: https://doi.org/10.1523/JNEUROSCI.3653-14.2015

Stephan, K. E., Penny, W. D., Daunizeau, J., Moran, R. J., & Friston, K. J. (2009). Bayesian model selection for group studies. NeuroImage, 46, 1004-1017. DOI: https://doi.org/10.1016/j.neuroimage.2009.03.025

Summerfield, C., & Mangels, J. A. (2004). Coherent theta-band EEG activity predicts item-context binding during encoding. NeuroImage, 24, 692-703. DOI: https://doi.org/10.1016/j.neuroimage.2004.09.012

Sutton, S., Braren, M., Zubin, J., & John, E. R. (1965). Evoked-potential correlates of stimulus uncertainty. Science, 150(3700), 1187-1188. DOI: https://doi.org/10.1126/science.150.3700.1187

Taulu, S., & Kajola, M. (2005). Presentation of electromagnetic multichannel data: The signal space separation method. Journal of Applied Physics, 97, 124905. DOI: https://doi.org/10.1063/1.1935742

Taulu, S., & Simola, J. (2006). Spatiotemporal signal space separation method for rejecting nearby interference in MEG measurements. Physics in Medicine and Biology, 51, 1759-1768. DOI: https://doi.org/10.1088/0031-9155/51/7/008

Thurstone, L. L. (1927). A law of comparative judgment. Psychological Review, 34, 273-286. DOI: https://doi.org/10.1037/h0070288

Tobler, P. N., Christopoulos, G. I., O’Doherty, J. P., Dolan, R. J., & Schultz, W. (2008). Neuronal Distortions of Reward Probability without Choice. The Journal of Neuroscience, 28(45), 11703-11711. DOI: https://doi.org/10.1523/JNEUROSCI.2870-08.2008

Tobler, P. N., O’Doherty, J. P., Dolan, R. J., & Schultz, W. (2007). Reward value coding distinct from risk attitude-related uncertainty coding in human reward systems. Journal of Neurophysiology, 97, 1621-1632. DOI: https://doi.org/10.1152/jn.00745.2006

Tversky, A., & Kahneman, D. (1992). Advances in Prospect Theory: Cumulative Representation of Uncertainty. Journal of Risk and Uncertainty, 5, 297-323. DOI: https://doi.org/10.1007/BF00122574

van den Berg, R., Awh, E., & Ma, W. J. (2014). Factorial comparison of working memory models. Psychological Review, 121(1), 124-149. DOI: https://doi.org/10.1037/a0035234

van Lieshout, L. L. F., Vandenbroucke, A. R. E., Müller, N. C. J., Cools, R., & de Lange, F. P. (2018). Induction and relief of curiosity elicit parietal and frontal activity. The Journal of Neuroscience, 38(10), 2579-2588. DOI: https://doi.org/10.1523/JNEUROSCI.2816-17.2018

Varey, C. A., Mellers, B. A., & Birnbaum, M. H. (1990). Judgments of Proportions. Journal of Experimental Psychology: Human Perception and Performance, 16(3), 613-625. DOI: https://doi.org/10.1037/0096-1523.16.3.613

Wakker, P. P. (2010). Prospect theory: For risk and ambiguity. Cambridge, UK: Cambridge University Press. DOI: https://doi.org/10.1017/CBO9780511779329

Wallsten, T. S., Budescu, D. V., Erev, I., & Diederich, A. (1997). Evaluating and combining subjective probability estimates. Journal of Behavioral Decision Making, 10, 243-268. DOI: https://doi.org/10.1002/(sici)1099-0771(199709)10:3<243::aid-bdm268>3.0.co;2-m

Watson, G. S. (1964). Smooth regression analysis. Sankhyā: The Indian Journal of Statistics, Series A, 26(4), 359-372. Retrieved from https://www.jstor.org/stable/25049340

Wiebel, C. B., Singh, M., & Maertens, M. (2016). Testing the role of Michelson contrast for the perception of surface lightness. Journal of Vision, 16(11):17, 1-19. DOI: https://doi.org/10.1167/16.11.17

Wyart, V., de Gardelle, V., Scholl, J., & Summerfield, C. (2012). Rhythmic Fluctuations in Evidence Accumulation during Decision Making in the Human Brain. Neuron, 76, 847-858. DOI: https://doi.org/10.1016/j.neuron.2012.09.015

Yang, T., & Shadlen, M. N. (2007). Probabilistic reasoning by neurons. Nature, 447, 1075-1080. DOI: https://doi.org/10.1038/nature05852

Zhang, H., & Maloney, L. T. (2012). Ubiquitous log odds: a common representation of probability and frequency distortion in perception, action, and cognition. Frontiers in Neuroscience, 6, 1. DOI: https://doi.org/10.3389/fnins.2012.00001

Zhang, H., Ren, X., & Maloney, L. T. (2020). The bounded rationality of probability distortion. Proceedings of the National Academy of Sciences, 117(36), 22024-22034. DOI: https://doi.org/10.1073/pnas.1922401117

Zizlsperger, L., Sauvigny, T., Händel, B., & Haarmeier, T. (2014). Cortical representations of confidence in a visual perceptual decision. Nature Communications, 5, 3940. DOI: https://doi.org/10.1038/ncomms4940


Figure Legends

Figure 1. Experimental design.
(A) Relative-frequency tracking and judgment task. On each trial, a sequence of orange and blue dot arrays was presented at a rate of 150 ms per display. Subjects were asked to fixate on the central fixation cross and track the relative frequency of orange (or blue) dots in each display until a 0–100% response scale appeared. Subjects then clicked on the scale to report the relative frequency in the last display. (B) The p and p(1–p) sequences in an example P-cycle or U-cycle trial. The value of p on each display was randomly chosen from a uniform distribution ranging from 0.1 to 0.9. In P-cycles (left panels), the p sequence was generated in such a way that the p value in different displays alternated between lower (<0.5) and higher (>0.5) values, resulting in a frequency spectrum peaking at 3.33 Hz (solid line), while the corresponding p(1–p) sequence was aperiodic (dashed line). In U-cycles (right panels), it was the reverse: the p(1–p) sequence alternated between lower (<0.21) and higher (>0.21) values in cycles of 3.33 Hz (dashed line), while the p sequence was aperiodic (solid line). To illustrate the periodicity or aperiodicity of the p and p(1–p) sequences, the lower and higher p or p(1–p) values are respectively coded by lighter and darker colors in the bars above the frequency spectrum plots. Note that “lower” and “higher” do not imply discretization; the frequency spectra were computed from the continuous values of p and p(1–p).

Figure 2. Behavioral and modeling results.

(A) Probability distortion quantified by the LLO model. Main plots: Distortion of relative frequency for one representative subject. The subject’s reported relative frequency p̂ is plotted against the objective relative frequency p, separately for the N-small and N-large conditions. Each dot is for one trial. Black curves denote LLO model fits. (B) Slope of distortion γ̂ for the N-small and N-large conditions. Each gray line is for one subject. The thick green line denotes the group mean. Error bars denote SEM. Inset: illustration of LLO distortion patterns with varying parameters γ and p0. The gray-blue, brown, and dark blue curves respectively correspond to [γ=0.6, p0=0.3], [γ=0.6, p0=0.7], and [γ=1.8, p0=0.3]. (C) Illustration of representational uncertainty V(p̂) as a function of p and N. Larger values of N are coded in darker green (same as D). The maximal sample size is assumed to follow the form n_s = bN^a, with a=0.42 and b=1.91 (median estimates across subjects). (D) Illustration of probability distortion predicted by BLO: p̂ as a function of p and N. Larger values of N are coded in darker green. (E) Slopes of distortion: data versus model fits. According to the value of N in the last display, all trials were divided into four bins and the slope of distortion, γ̂, was estimated separately for each bin. Gray filled circles denote data. Error bars denote SEM. Red and blue circles respectively denote BLO and LLO model fits. (F) p̂ − p as a function of p and N. Black curves denote smoothed data on the group level, separately for the N-small and N-large conditions. Shadings denote SEM. Red and blue curves respectively denote BLO and LLO model fits. (G) Response time as an additional index of representational uncertainty. The mean response time (RT) for subjects to initiate their relative-frequency report is plotted against binned p or p(1–p), separately for the N-small and N-large conditions. Error bars denote SEM.
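The two model ingredients named in this legend reduce to simple formulas: the linear-in-log-odds (LLO) distortion with slope γ and crossover point p0 (the standard form from Zhang & Maloney, 2012, cited above), and a representational uncertainty shrinking with effective sample size n_s = bN^a. A minimal sketch, for illustration only; the binomial-variance form of V(p̂) used below is our assumption, not a quote from the paper:

```python
import math

def llo(p, gamma, p0):
    """Linear-in-log-odds distortion: the log odds of the distorted
    value are linear in the log odds of the objective value p, with
    slope gamma, crossing the identity line at p0."""
    lo = gamma * math.log(p / (1 - p)) + (1 - gamma) * math.log(p0 / (1 - p0))
    return 1 / (1 + math.exp(-lo))

def v_phat(p, N, a=0.42, b=1.91):
    """Illustrative representational uncertainty of a sample proportion,
    assuming a binomial-style variance with effective sample size
    n_s = b * N**a (a, b are the median estimates quoted in the legend)."""
    n_s = b * N ** a
    return p * (1 - p) / n_s

# gamma < 1 compresses probabilities toward p0; larger N shrinks V(p_hat).
```

With γ < 1 the distorted value is pulled toward p0 (e.g., llo(0.9, 0.6, 0.5) < 0.9), and V(p̂) peaks at p = 0.5 and decreases with N, matching the patterns the legend describes for panels B–D.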

Figure 3. Results of factorial model comparisons.
(A) Model comparison method: AICc. For each subject, the model with the lowest AICc was used as a reference to compute ∆AICc for each model. A lower value of ∆AICc summed across subjects indicates a better fit. The BLO model outperformed all the alternative models. (B) Model comparison method: 10-fold cross-validation. The cross-validated log likelihood times −2 (denoted −2LL), which is comparable to AICc, was computed. For each subject, the model with the lowest −2LL was used as a reference to compute ∆(−2LL) for each model. As with AICc, a lower value of ∆(−2LL) summed across subjects indicates a better fit. Again, the BLO model outperformed all the alternative models. D1, D2 and D3 denote the three dimensions of factorial model comparison. (C) Protected exceedance probability on each model dimension. Each panel is for model comparison on one dimension. Each assumption of the BLO model outperformed the alternative assumptions on its dimension. (D) Model identifiability analysis. Each column is for one specific model that was used to generate synthetic datasets. Each row is for one model that was fitted to the synthetic datasets. Summed ∆AICc was used to identify the best-fitting model for each dataset. The color of each cell codes the percentage of the 50 datasets generated by the model on its column for which the model on its row was identified as the best model. Higher values are coded as more reddish and lower values as more bluish. Values in each column add up to 1. From left (bottom) to right (top), the 18 models are 111, 112, 113, 121, 122, 123, 211, 212, 213, 221, 222, 223, 311, 312, 313, 321, 322, 323, where the first digit indexes the D1 assumption (1 for log-odds, 2 for Prelec, and 3 for linear), the second digit indexes the D2 assumption (1 for bounded and 2 for bounds-free), and the third digit indexes the D3 assumption (1 for V(p̂) with n_s = bN^a, 2 for V(p̂) with constant n_s, and 3 for constant V(p̂)). The BLO model with sample size n_s = bN^a is the first model (111), corresponding to the leftmost column and the bottom row, indicated by an arrow in the plot. The synthetic data generated by BLO were all best fit by BLO (see leftmost column), and those generated by the other models were seldom best fit by BLO (see bottom row).

Figure 4. Results of phase coherence analysis.
Grand-averaged phase coherence spectrum for magnetometers. Left: The phase coherence between the periodic p values and the MEG time series in P-cycle trials. Right: The phase coherence between the periodic p(1–p) values and the MEG time series in U-cycle trials. Light and dark green curves respectively denote the N-small and N-large conditions. Shadings denote SEM across subjects. The vertical line marks 3.33 Hz. Dots above the spectrum mark the frequency bins whose phase coherence was significantly above chance level (permutation tests, PFDR < 0.05 with FDR correction across frequency bins). Insets: Grand-averaged phase coherence topography at 3.33 Hz for each cycle and numerosity condition. Solid black dots denote sensors whose phase coherence at 3.33 Hz was significantly above chance level (permutation tests, PFDR < 0.05 with FDR correction across magnetometers).
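Phase coherence of the kind plotted here can be estimated as the consistency, across trials, of the phase difference between the stimulus value sequence and the MEG signal at the frequency of interest. A sketch under that assumption (one common definition; the paper's exact estimator may differ in detail):

```python
import numpy as np

def phase_coherence(stim_trials, meg_trials, fs, freq):
    """Phase coherence between a stimulus sequence and an MEG channel
    at one frequency. Both inputs are (trials x samples) arrays sampled
    at fs Hz. Take the FFT phase of each trial at the target frequency
    bin, form the per-trial phase difference, and measure how tightly
    those differences cluster across trials (resultant vector length).
    1 = perfectly locked phases, near 0 = random phases."""
    n = stim_trials.shape[1]
    bin_idx = int(round(freq * n / fs))
    phi_stim = np.angle(np.fft.rfft(stim_trials, axis=1)[:, bin_idx])
    phi_meg = np.angle(np.fft.rfft(meg_trials, axis=1)[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * (phi_meg - phi_stim))))
```

A signal that tracks the 3.33 Hz stimulus rhythm with any fixed delay yields coherence near 1, whereas a signal with random phase relative to the stimulus yields a value near zero, which is what the permutation tests in the legend assess.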

Figure 5. Neural responses to p and p(1–p) predict the slope of probability distortion.
We defined Δγ̂ = ln(γ̂_N-large / γ̂_N-small), and analogously the log-ratio across the two numerosity conditions of the relative strength of p to p(1–p) in neural responses, to respectively quantify how much the behavioral measure of the slope of distortion, γ̂, and the neural measure changed with numerosity. Main plot: Correlation between the behavioral and neural change measures at the fronto-parietal magnetometer channel MEG0421 (Pearson’s r = 0.67, one-tailed PFDR = 0.033 with FDR correction across 102 magnetometers). The logarithm transformation was used only for visualization. Each dot denotes one subject. Inset: Topography of the correlation coefficients between the behavioral and neural change measures, on which MEG0421 is marked by a solid black dot.
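The two change measures are per-subject log-ratios, which are then correlated across subjects. A sketch of that arithmetic with a plain Pearson correlation (the subject values would come from the fitted models and MEG regressions; the names here are illustrative):

```python
import math

def log_ratio(value_large, value_small):
    """Per-subject change measure: ln(value in N-large / value in N-small).
    Positive means the quantity increased with numerosity."""
    return math.log(value_large / value_small)

def pearson_r(x, y):
    """Plain Pearson correlation coefficient between two equal-length
    sequences of per-subject values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)
```

Correlating the behavioral log-ratios against the neural log-ratios, one value per subject, and then FDR-correcting that correlation across the 102 magnetometer channels yields the statistic reported in the legend.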

Figure 6. Results of decoding analyses.

Main plots: Time-resolved decoding performance over different time lags for p (A) and p(1–p) (B), separately for the N-small (light green) and N-large (dark green) conditions. Shadings denote SEM across subjects. Symbols above the curves indicate time lags that had above-chance decoding performance (cluster-based permutation tests), with vertical bars representing Pcluster < 0.01 and dots representing 0.01 ≤ Pcluster < 0.05. Insets: Topography of spatial decoding performance for the time window (highlighted by the funnel-shaped dashed lines) that contained the peak decoding performance in the time-resolved decoding analysis. The larger topography plot represents the spatial decoding results using temporal samples from both magnetometers and gradiometers at each location. The smaller topography plots represent the spatial decoding results separately for magnetometers (megmag) and pairs of gradiometers (meggrads). Solid black dots indicate the sensor clusters with above-chance decoding performance (Pcluster < 0.05, corrected for multiple comparisons using cluster-based permutation tests) for the N-large condition. The time-resolved decoding performance for p peaked at around 327 ms after stimulus onset, while that for p(1–p) peaked at around 419 ms. The encoding of p and p(1–p) involved overlapping parietal-occipital regions, including the fronto-parietal region where the neural and behavioral measures were positively correlated (Fig. 5).
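Time-resolved decoding of this kind fits one decoder per time lag and scores it on held-out trials. A sketch, assuming a linear least-squares decoder scored by the correlation between predicted and actual stimulus values (the paper's estimator and cross-validation scheme may differ):

```python
import numpy as np

def time_resolved_decoding(X, y, train, test):
    """X: (trials x sensors x lags) MEG data; y: stimulus value per trial.
    For each time lag, fit a linear decoder (least squares with an
    intercept) on the training trials and score it as the Pearson
    correlation between predicted and actual y on the held-out trials.
    Returns one score per lag; chance level is ~0."""
    n_lags = X.shape[2]
    scores = np.empty(n_lags)
    for lag in range(n_lags):
        A = np.column_stack([X[train, :, lag], np.ones(len(train))])
        w, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        B = np.column_stack([X[test, :, lag], np.ones(len(test))])
        scores[lag] = np.corrcoef(B @ w, y[test])[0, 1]
    return scores
```

Plotting `scores` against lag gives a curve of the kind shown in the main panels, with a peak at the lag where the sensor pattern carries the most information about p or p(1–p).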

Figure 7. Time-resolved decoding analysis based on a cross-validated version of confound regression (CVCR).
(A) Decoding performance for p(1–p) with confounds regressed out. (B) Decoding performance for V(p̂) with confounds regressed out. The V(p̂) was computed according to each subject’s fitted BLO model. Curves denote grand-averaged decoding performance over different time lags, separately for the N-small (light green) and N-large (dark green) conditions. Shadings denote SEM across subjects. Vertical bars above the curves indicate time lags that had above-chance decoding performance (cluster-based permutation tests, Pcluster < 0.01). Eight confounding variables (including N) were regressed out from the MEG time series before the decoding analysis (see Methods for methodological details). The decoded time course from CVCR for p(1–p) was similar to that of the standard time-resolved decoding analysis (Fig. 6B). The decoded time course was also similar for V(p̂).
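The essential point of CVCR is that the confound-regression coefficients are estimated on training trials only and then applied to both training and test data, so no test-set information leaks into the cleanup. A sketch under that reading of Snoek et al. (2019), with illustrative variable names rather than the paper's actual variables:

```python
import numpy as np

def cvcr(X_train, C_train, X_test, C_test):
    """Cross-validated confound regression. X: (trials x features) data,
    C: (trials x confounds). Regress X on C (plus an intercept) using
    the TRAINING trials only, then subtract the confound-predicted part
    from both training and test data with those same coefficients.
    Returns the residualized (X_train, X_test)."""
    A = np.column_stack([C_train, np.ones(len(C_train))])
    W, *_ = np.linalg.lstsq(A, X_train, rcond=None)
    B = np.column_stack([C_test, np.ones(len(C_test))])
    return X_train - A @ W, X_test - B @ W
```

Running the decoder of Fig. 6 on the residualized data then tests whether p(1–p) or V(p̂) is decodable beyond what the eight confounds (including N) explain.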
