0022-2496/$ - see front matter © 2006 Elsevier Inc. All rights reserved.
doi:10.1016/j.jmp.2005.01.005
Journal of Mathematical Psychology 50 (2006) 99–100
www.elsevier.com/locate/jmp
Editorial
Editors’ introduction
The main objective of the scientific enterprise is to find explanations for the phenomena we observe. Such explanations can oftentimes be couched in terms of mathematical models. In the field of psychology, one may for instance wonder what mathematical models best describe or explain distributions of response times, forgetting curves, changes in categorization performance with learning, and so on. Sir Harold Jeffreys, one of the founding fathers of modern Bayesian statistics, emphasized that a model should not be considered in isolation. On the one hand, although a certain model may be bad, it is still the best we have until we have discovered a better model to put in its place. On the other hand, although a certain model may be good, there is no perfect model, and there may be even better models to replace it.
Thus, it is of vital importance to quantitatively compare different models for the same set of data. Here we enter the arena of model selection. One of the most important tools in this arena is Occam's razor. This is a simplicity postulate that states that, all other things being equal, one should prefer the simplest model. Obviously, the simplest model will generally not do so well in terms of goodness-of-fit. Hence, model selection is concerned with the delicate tradeoff between simplicity and goodness-of-fit. These and other aspects of model selection were discussed at length in the June 2000 Journal of Mathematical Psychology special issue on model selection, edited by Jay Myung, Malcolm Forster, and Michael Browne.
The 2000 special issue featured a series of tutorial papers, and stimulated many experimental psychologists to evaluate models not just in terms of goodness-of-fit, but also in terms of model complexity. One may wonder why, only five years later, the Journal of Mathematical Psychology should host a second special issue on the same topic. One reason is that model selection is a very general academic endeavour that is of tremendous interest not just to mathematical psychologists, but also to statisticians, mathematicians, economists, sociologists, and experimental psychologists. Accordingly, the rate of theoretical development in this thriving field has been very high over the last five years. The present special issue is in large part concerned with
modern methods and modern questions that reflect the progress in the field. Another raison d'être is that the present special issue differs from the 2000 special issue in at least three important ways. First, the articles in the current special issue are not meant to be tutorial papers, although some of the articles are, we hope, very accessible to the uninitiated. Second, all articles in this special issue explicitly compare several different methods of model selection. Finally, almost all articles show how the model selection methods under discussion can be applied to data sets obtained inside or outside the laboratory.

The present special issue on model selection features
seven articles, whose content ranges from advanced Bayesian procedures to traditional hypothesis testing. Navarro, Griffiths, Steyvers, and Lee model individual differences using nonparametric Bayesian methods. Their account of the Dirichlet process is both detailed and illuminating. Key concepts involve Chinese restaurants and the successive breaking of a stick. The Karabatsos article also deals with nonparametric Bayesian methods, but Karabatsos uses these methods to select models based on their predictive utility. One of the appealing theoretical advantages of Karabatsos' method is its considerable generality. The article by Wagenmakers, Grünwald, and Steyvers reviews and applies the method of accumulative one-step-ahead prediction error (APE). Its ease of application and its acronym should not detract from APE's considerable theoretical sophistication: the method is closely related to the principle of Minimum Description Length and to Bayes factor model selection using Jeffreys' prior. The Myung article provides an excellent introduction to the modern concept of Minimum Description Length (MDL). This article also applies the modern MDL method to data from human category learning.

The article by de Rooij and Grünwald focuses on a
model selection problem in which the MDL method cannot be automatically applied. This problem is one where the models of interest (i.e., a Poisson and a geometric model) have infinite parametric complexity. De Rooij and Grünwald compare the performance of different model selection methods and show that APE can perform markedly worse than Bayesian methods and modified MDL methods. The article by Lee and Pope deals with one of the most common
statistical problems: the 2 × 2 contingency table or "rate problem". Lee and Pope directly compare objective Bayesian, MDL, and frequentist methods using extensive Monte Carlo simulations. Generally, the objective Bayesian method performs similarly to MDL, and both these methods may in some situations lead to different conclusions than the frequentist method. The Waldorp, Grasman, and Huizenga article is concerned with frequentist testing in case the model under consideration is only approximately true. The authors introduce a new, modified version of Hotelling's test to calculate goodness-of-fit in the presence of model misspecification. Waldorp et al. also discuss how to compute confidence intervals under misspecification, and apply their methodology to data from the daily news memory test.
This special issue was inspired by a model selection symposium held in Amsterdam, August 2004. The organizing committee included Denny Borsboom, Maarten Speekenbrink, and Ingmar Visser. Financial support was provided by the Netherlands Organization for Scientific Research (NWO), the Interuniversity Graduate School of Psychometrics and Sociometrics (IOPS), and the Graduate Research Institute for Experimental Psychology (EPOS). We would like to thank Jay Myung for his friendly advice on some of the organizational details of editing a special issue.
Guest Editors
Eric-Jan Wagenmakers
Lourens Waldorp
E-mail addresses: [email protected] (E.-J. Wagenmakers), [email protected] (L. Waldorp).