A “No Prescriptives, Please” Proposal Postscript: When Desperate Times Require Desperate Measures


    COMMENTARY

    A “No Prescriptives, Please” Proposal Postscript: When Desperate Times Require Desperate Measures

    Daniel H. Robinson & Joel R. Levin

    Educational Psychology Review (2013) 25:353–359
    DOI 10.1007/s10648-013-9238-y
    Published online: 15 August 2013
    © Springer Science+Business Media New York 2013

    Abstract  We appreciate the thoughtful reactions of our colleagues to the “no prescriptives, please” proposal of Robinson et al. (2013), as well as the opportunity to respond to them. For the most part, our colleagues seem to agree that a problem exists in terms of unwarranted recommendations for practice appearing too often in educational research journals. At the same time, we realize that because our proposed policy is rather Draconian (in that it seeks to eliminate all recommendations for practice in all primary research journals), every responder mentioned that it was too extreme. This was intentional on our part. If, as Harris (2013) suggested, increased awareness and scrutiny become topics for further discussion among APA editors, then our modest mission will have been accomplished. In this rejoinder, we attempt to restate and clarify our proposal, along with its entailments, followed by brief comments on each responder's response.

    Keywords  Prescriptives · Recommendations · Causality · Policy

    For primary research journals in the fields of psychology and education, over the years, the following editorial policy recommendations have been made:

    • Traditional statistical hypothesis testing should be banned and replaced by effect-size estimates and confidence intervals (e.g., Schmidt 1996).

    • The nature of a study's participants must be described more completely with respect to such aspects as their demographic, developmental, cognitive, and emotional and behavioral characteristics (e.g., Harris 2003).

    • Any study's findings that are reported as being “significant” must be preceded by the modifier “statistically” (e.g., Thompson 1996).

    • Structured abstracts are more compact and precise than are conventional abstracts, and so the former should replace the latter (e.g., Mosteller et al. 2004).


    D. H. Robinson (*)
    School of Education, Colorado State University, 1588 Campus Delivery, Fort Collins, CO 80523-1588, USA
    e-mail: dan.robinson@colostate.edu

    J. R. Levin
    University of Arizona, Tucson, Arizona, USA

    • Individual experiments or studies should not be published unless they also include some type of replication component (e.g., Thompson 1996).

    • The high incidence of underpowered studies appearing in our journals dictates that power estimates must be reported and deemed adequate for the statistical tests that are conducted (e.g., Schmidt 1996).

    • Because of documented prejudice against the null hypothesis, authors should submit their manuscripts for review with the Results and Discussion sections removed (e.g., Walster and Cleary 1970).

    • Primary research journals are not an appropriate outlet for authors to provide prescriptive statements (i.e., recommendations for practice) to others (Robinson et al. 2013).

    Certain of these recommendations have been adopted by various journals and others have not been. Many of them may sound unduly prescriptive and extreme. Consider, for example, the banning of null hypothesis significance testing in APA journals, an extreme proposal that was recommended by highly qualified and well-meaning psychological researchers and methodologists (e.g., Carver 1993; Schmidt 1996). The proposal was considered and given serious attention by APA editors and staffers, as well as APA's Publications and Communications Board and special committees that were assembled to write the statistical guidelines for the fifth (2001) and sixth (2010) editions of the APA Publication Manual. In the end, hypothesis testing survived scrutiny and continues to be endorsed by the manual as a critical inferential statistical tool (as it is in other academic disciplines), with researchers' provision of effect sizes and confidence intervals additionally encouraged.¹

    We appreciate the thoughtful reactions of our colleagues to the “no prescriptives, please” proposal of Robinson et al. (2013), as well as the opportunity to respond to them. For the most part, our colleagues seem to agree that a problem exists in terms of unwarranted recommendations for practice appearing too often in educational research journals. At the same time, we realize that because our proposed policy is rather Draconian (in that it seeks to eliminate all recommendations for practice in all primary research journals), every responder mentioned that it was too extreme. This was intentional on our part. If the asking price for a used car is $15,000 and you want to pay only $12,000, your counteroffer conversation starts lower than $12,000. In our case, the price that we are willing to pay is heightened awareness of a problem that will lead to increased editorial scrutiny. If, as Harris (2013) suggested, increased awareness and scrutiny become topics for further discussion among APA editors, then our modest mission will have been accomplished. In this rejoinder, we attempt to restate and clarify our proposal, along with its entailments, followed by brief comments on each responder's response.

    An Unfortunate Misrepresentation/Mischaracterization of Our Proposal

    Several authors seemed to confuse what we were really proposing with something else. For example, Renkl (2013) mentions both practical recommendations and implications. These are very different. Our proposal dealt only with the former. A recommendation would be something like “We recommend that teachers employ physical manipulatives when teaching geometry to 4th grade students.” A potential implication might be something like “The results of this study might eventually inform the way in which math is taught to 4th grade students.” Note that the former is very direct and prescriptive, whereas the latter is more vanilla and speculative.

    ¹ Although the incidence of effect size reporting has increased dramatically in education and psychology journals over the past decade, it is evident that such reporting generally occurs in an unthinking, indiscriminate fashion (Peng et al. 2013). Insofar as no randomized intervention experiment was conducted at the time and throughout the decade, it is not possible to determine exactly how much of the increase in authors' provision of effect size and confidence interval information is attributable to the manual's encouragement of them per se.

    The motto of the Royal Statistical Society is Aliis exterendum (let others thrash it out) and dates back to 1834. Thus, our “no prescriptives, please” recommendation may be a bit plagiaristic. All we are suggesting is that primary research articles should stick to reporting the study's findings. Beyond that, let others, many of whom were specifically identified by Robinson et al. (2013), thrash out the possibilities of what to do with the findings.

    Bans on smoking in indoor public places have increased in popularity in the USA over the past 20 years. It is now against the law to smoke in some restaurants that previously permitted it. Does this prevent smokers from smoking? Certainly not. Smokers are simply asked to step outside, where the secondary smoke is not as easily ingested by nonsmokers. Similarly, our proposed policy would not prevent educational psychologists (who Alexander, 2013, suggests are the perfect thrashers) from thrashing out policy recommendations from research findings. It would only restrict the locations where such thrashing is permitted.

    Would the Proposal Eliminate Potentially Useful Practical Information?

    Any intimation that our proposed policy would have the effect of preventing educational psychologists from influencing practice (e.g., Alexander, 2013; Renkl, 2013; Wainer and Clauser, 2013) seems to be more of an over-reaction than a reaction to the policy itself. Again, let us attempt to explain what the policy would do by taking the previous example of a recommendation for practice concerning manipulatives for teaching mathematics to fourth graders. If we prevent authors from including the recommendation in a research article, then what can they include that would inform practitioners? If they stick to presenting the findings of their study, it might be something like this: “Fourth-grade students who were taught geometry using physical manipulatives correctly solved more problems on a test than did those who were taught without the manipulatives.” This sentence could be the last sentence of the abstract and of the entire article. If practitioners read it, what is the chance that they would come away without the intended message? Do we really also need to include a recommendation?

    In the example that Harris (2013) provides about noise-reducing headphones, we feel that the authors should stick to stating the findings of the study, namely, that the headphones seemed to benefit the learning-disabled students in grades 3–5, whereas they actually harmed the performance of the nondisabled students. This is enough. There is no need to go on and recommend that teachers consider using the headphones for some students. Is that point not already obvious? Similarly, if a study finds that shark cartilage tablets benefitted physically active 45-year-old men in terms of their recovery time, is it necessary to recommend shark cartilage? We prefer that researchers stick to reporting their findings and let the reader decide. Aliis exterendum: let others thrash it out.

    Responses to Selected Author Comments

    Renkl (2013) suggested, “Even wrong claims might often have important functions for advancing scientific knowledge.” That may be true in some cases, but in others, wrong claims can be catastrophic. Consider the number of children who have died because their parents chose no