
Author accuracy, technical editing and peer review: Useful references

JAMA. 1998 Jul 15;280(3):267-9. Can the accuracy of abstracts be improved by providing specific instructions? A randomized controlled trial. Pitkin RM, Branagan MA. Obstetrics & Gynecology, Los Angeles, Calif 90024-3908, USA. [email protected] CONTEXT: The most-read section of a research article is the abstract, and therefore it is especially important that the abstract be accurate. OBJECTIVE: To test the hypothesis that providing authors with specific instructions about abstract accuracy will result in improved accuracy. DESIGN: Randomized controlled trial of an educational intervention specifying 3 types of common defects in abstracts of articles that had been reviewed and were being returned to the authors with an invitation to revise. MAIN OUTCOME MEASURE: Proportion of abstracts containing 1 or more of the following defects: inconsistency in data between abstract and body of manuscript (text, tables, and figures), data or other information given in abstract but not in body, and/or conclusions not justified by information in the abstract. RESULTS: Of 250 manuscripts randomized, 13 were never revised and 34 were lost to follow-up, leaving a final comparison between 89 in the intervention group and 114 in the control group. Abstracts were defective in 25 (28%) and 30 (26%) cases, respectively (P=.78). Among 55 defective abstracts, 28 (51%) had inconsistencies, 16 (29%) contained data not present in the body, 8 (15%) had both types of defects, and 3 (5%) contained unjustified conclusions. CONCLUSIONS: Defects in abstracts, particularly inconsistencies between abstract and body and the presentation of data in abstract but not in body, occur frequently. Specific instructions to authors who are revising their manuscripts are ineffective in lowering this rate. Journals should include in their editing processes specific and detailed attention to abstracts.
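
The abstract reports the group comparison only as 25/89 (28%) versus 30/114 (26%) with P=.78 and does not name the test used. As an illustration only, a Pearson chi-square on that 2x2 table (computed here with scipy, which is an assumption, not the authors' software) gives a P value in the same neighborhood:

```python
from scipy.stats import chi2_contingency

# Counts taken directly from the abstract: defective vs. acceptable abstracts
# in the intervention (n = 89) and control (n = 114) groups.
table = [[25, 89 - 25],    # intervention: 25 defective, 64 acceptable
         [30, 114 - 30]]   # control: 30 defective, 84 acceptable

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.3f}, P = {p:.2f}")  # comes out close to the reported .78
```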

Arch Dermatol. 2003 May;139(5):589-93. Quality of abstracts in 3 clinical dermatology journals. Dupuy A, Khosrotehrani K, Lebbe C, Rybojad M, Morel P. Service de Dermatologie, Hopital Saint-Louis, 1 avenue Claude Vellefaux, 75010 Paris, France. [email protected] BACKGROUND: Structured abstracts have been widely adopted in medical journals, with little demonstration of their superiority over unstructured abstracts. OBJECTIVES: To compare abstract quality among 3 clinical dermatology journals and to compare the quality of structured and unstructured abstracts within those journals. DESIGN AND DATA SOURCES: Abstracts of a random sample of clinical studies (case reports, case series, and reviews excluded) published in 2000 in the Archives of Dermatology, The British Journal of Dermatology, and the Journal of the American Academy of Dermatology were evaluated. Each abstract was rated by 2 independent investigators, using a 30-item quality scale divided into 8 categories (objective, design, setting, subjects, intervention, measurement of variables, results, and conclusions). Items applicable to the study and present in the main text of the article were rated as being present or absent from the abstract. A global quality score (range, 0-1) for each abstract was established by calculating the proportion of criteria among the eligible criteria that was rated as being present. A score was also calculated for each category. Interrater agreement was assessed with a kappa statistic. Mean +/- SD scores were compared among journals and between formats (structured vs unstructured) using analysis of variance. MAIN OUTCOME MEASURES: Mean quality scores of abstracts by journal and by format. RESULTS: Interrater agreement was good (kappa = 0.71). Mean +/- SD quality scores of abstracts were significantly different among journals (Archives of Dermatology, 0.78 +/- 0.07; The British Journal of Dermatology, 0.67 +/- 0.17; and Journal of the American Academy of Dermatology, 0.64 +/- 0.15; P = .045) and between formats (structured, 0.71 +/- 0.11; and unstructured, 0.56 +/- 0.18; P = .002). The setting category had the lowest scores. CONCLUSIONS: The quality of abstracts differed across the 3 tested journals. Unstructured abstracts were demonstrated to be of lower quality compared with structured abstracts and may account for the differences in quality scores among the journals. The structured format should be more widely adopted in dermatology journals.
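
As a rough sketch of the scoring described above (the 30-item checklist is not reproduced here, so the items and ratings below are hypothetical placeholders), the global quality score is simply the proportion of eligible criteria present, and interrater agreement can be summarized with Cohen's kappa:

```python
from sklearn.metrics import cohen_kappa_score

def global_quality_score(ratings):
    """Proportion of eligible criteria rated as present.

    ratings maps each checklist item to True (present in the abstract),
    False (absent), or None (not applicable to the study design)."""
    eligible = [r for r in ratings.values() if r is not None]
    return sum(eligible) / len(eligible) if eligible else float("nan")

# Hypothetical ratings of one abstract by one investigator
ratings = {"objective stated": True, "design named": True,
           "setting described": False, "intervention detailed": None}
print(global_quality_score(ratings))   # 2 of 3 eligible items present, about 0.67

# Hypothetical present/absent calls by two raters on the same items
rater_1 = [1, 1, 0, 1, 0, 1, 1, 0]
rater_2 = [1, 1, 0, 1, 1, 1, 1, 0]
print(cohen_kappa_score(rater_1, rater_2))  # kappa statistic for interrater agreement
```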

Ann Intern Med. 1994 Jul 1;121(1):11-21. Manuscript quality before and after peer review and editing at Annals of Internal Medicine. Goodman SN, Berlin J, Fletcher SW, Fletcher RH. Johns Hopkins University School of Medicine, Baltimore, Maryland. OBJECTIVE: To evaluate the effects of peer review and editing on manuscript quality. SETTING: Editorial offices of Annals of Internal Medicine. DESIGN: Masked before-after study. MANUSCRIPTS: 111 consecutive original research manuscripts accepted for publication at Annals between March 1992 and March 1993. MEASUREMENTS: We used a manuscript quality assessment tool of 34 items to evaluate the quality of the research report, not the quality of the research itself. Each item was scored on a 1 to 5 scale. Forty-four expert assessors unaware of the design or aims of the study evaluated the manuscripts, with different persons evaluating the two versions of each manuscript (before and after the editorial process). RESULTS: 33 of the 34 items changed in the direction of improvement, with the largest improvements seen in the discussion of study limitations, generalizations, use of confidence intervals, and the tone of conclusions. Overall, the percentage of items scored three or more increased by an absolute 7.3% (95% CI, 3.3% to 11.3%) from a baseline of 75%. The average item score improved by 0.23 points (CI, 0.07 to 0.39) from a baseline mean of 3.5. Manuscripts rated in the bottom 50% showed two- to threefold larger improvements than those in the top 50%, after correction for regression to the mean. CONCLUSIONS: Peer review and editing improve the quality of medical research reporting, particularly in those areas that readers rely on most heavily to decide on the importance and generalizability of the findings.
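
A minimal sketch of this kind of before-after comparison, using hypothetical 1-to-5 item scores; the study's own analysis (confidence intervals, masking, correction for regression to the mean) is not reproduced here:

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical mean item scores (1-5 scale) for the same manuscripts,
# rated before and after peer review and editing.
before = np.array([3.2, 3.5, 3.8, 3.1, 3.6, 3.4])
after = np.array([3.5, 3.7, 3.9, 3.5, 3.8, 3.6])

improvement = after - before
t, p = ttest_rel(after, before)  # paired comparison of the two versions
print(f"mean improvement = {improvement.mean():.2f} points, paired t = {t:.2f}, P = {p:.3f}")
```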

JAMA. 1998 Jul 15;280(3):231-3. What makes a good reviewer and a good review for a general medical journal? Black N, van Rooyen S, Godlee F, Smith R, Evans S. London School of Hygiene & Tropical Medicine, England. [email protected] CONTEXT: Selecting peer reviewers who will provide high-quality reviews is a central task of editors of biomedical journals. OBJECTIVES: To determine the characteristics of reviewers for a general medical journal who produce high-quality reviews and to describe the characteristics of a good review, particularly in terms of the time spent reviewing and turnaround time. DESIGN, SETTING, AND PARTICIPANTS: Surveys of reviewers of the 420 manuscripts submitted to BMJ between January and June 1997. MAIN OUTCOME MEASURES: Review quality was assessed independently by 2 editors and by the corresponding author using a newly developed 7-item review quality instrument. RESULTS: Of the 420 manuscripts, 345 (82%) had 2 reviews completed, for a total of 690 reviews. Authors' assessments of review quality were available for 507 reviews. The characteristics of reviewers had little association with the quality of the reviews they produced (explaining only 8% of the variation), regardless of whether editors or authors defined the quality of the review. In a logistic regression analysis, the only significant factor associated with higher-quality ratings by both editors and authors was reviewers trained in epidemiology or statistics. Younger age also was an independent predictor for editors' quality assessments, while reviews performed by reviewers who were members of an editorial board were rated of poorer quality by authors. Review quality increased with time spent on a review, up to 3 hours but not beyond. CONCLUSIONS: The characteristics of reviewers we studied did not identify those who performed high-quality reviews. Reviewers might be advised that spending longer than 3 hours on a review on average did not appear to increase review quality as rated by editors and authors.
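
The logistic regression mentioned above can be sketched as follows; the reviewer characteristics, their coding, and the simulated data are hypothetical stand-ins (fit here with statsmodels), not the BMJ dataset or the authors' model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Hypothetical reviewer characteristics
epi_stats_training = rng.integers(0, 2, n)   # 1 = trained in epidemiology/statistics
age = rng.normal(45, 8, n)                   # reviewer age, years
hours_spent = rng.uniform(0.5, 5.0, n)       # time spent on the review

# Simulated binary outcome: review rated as high quality
linear_predictor = -1.0 + 0.8 * epi_stats_training - 0.02 * (age - 45)
high_quality = rng.binomial(1, 1 / (1 + np.exp(-linear_predictor)))

X = sm.add_constant(np.column_stack([epi_stats_training, age, hours_spent]))
result = sm.Logit(high_quality, X).fit(disp=False)
print(result.params)  # intercept and coefficients for training, age, hours spent
```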

J Am Med Inform Assoc. 2005 Mar-Apr;12(2):225-8. Epub 2004 Nov 23. Accuracy of references in five biomedical informatics journals. Aronsky D, Ransom J, Robinson K. Department of Biomedical Informatics, Eskind Biomedical Library, Vanderbilt University Medical Center, 2209 Garland Avenue, Nashville, TN 37232-8340, USA. [email protected] OBJECTIVE: To determine the rate and type of errors in biomedical informatics journal article references. METHODS: References in articles from the first 2004 issues of five biomedical informatics journals, Journal of the American Medical Informatics Association, Journal of Biomedical Informatics, International Journal of Medical Informatics, Methods of Information in Medicine, and Artificial Intelligence in Medicine, were compared with MEDLINE for journal, authors, title, year, volume, and page number accuracy. If discrepancies were identified, the reference was compared with the original publication. Two reviewers independently evaluated each reference. RESULTS: The five journal issues contained 37 articles. Among the 656 eligible references, 225 (34.3%) included at least one error. Among the 225 references, 311 errors were identified. One or more errors were found in the bibliography of 31 (84%) of the 37 articles. The reference error rates by journal ranged from 22.1% to 40.7%. Most errors (39.0%) occurred in the author element, followed by the journal (31.2%), title (17.7%), page (7.4%), year (3.5%), and volume (1.3%) information. CONCLUSION: The study identified a considerable error rate in the references of five biomedical informatics journals. Authors are responsible for the accuracy of references and should more carefully check them, possibly using informatics-based assistance.
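
A schematic of the field-by-field check described in the methods above: each cited reference is compared with the corresponding MEDLINE record on the six elements the study examined. The toy records simply reuse the citation above, with a deliberately truncated author list standing in for a typical error; real checking would also need normalization of punctuation and abbreviations:

```python
# The six reference elements compared in the study
FIELDS = ("journal", "authors", "title", "year", "volume", "pages")

def reference_errors(cited, medline):
    """Return the elements on which the citation and the MEDLINE record disagree
    (a simple exact-match comparison)."""
    return [f for f in FIELDS if cited.get(f) != medline.get(f)]

medline = {"journal": "J Am Med Inform Assoc",
           "authors": "Aronsky D, Ransom J, Robinson K",
           "title": "Accuracy of references in five biomedical informatics journals",
           "year": 2005, "volume": 12, "pages": "225-8"}
cited = dict(medline, authors="Aronsky D, Ransom J")  # truncated author list

errors = reference_errors(cited, medline)
print(errors)            # ['authors'] -> one error, in the author element
print(len(errors) >= 1)  # this reference would count toward the "at least one error" rate
```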

Can Assoc Radiol J. 2004 Jun;55(3):170-3. The accuracy of references in manuscripts submitted for publication. Browne RF, Logan PM, Lee MJ, Torreggiani WC. Department of Radiology, Adelaide and Meath Hospital, Tallaght, Dublin, Ireland. OBJECTIVE: To analyze the errors present in references cited in papers submitted for peer review for possible publication. METHODS: Nineteen consecutive manuscripts submitted for peer review were assessed. They contained a total of 261 references. Manuscripts were submitted to 1 of 5 major radiology journals. Journal references were compared with either the original articles or abstracts obtained through MEDLINE. Book references were checked against the original book. In total, 259 of 261 references were obtained. The remaining 2 references were both out-of-print books that were not available. Each reference was checked and errors were identified as either major or minor, depending on the gravity of the error. Errors were analyzed to see whether they could be attributed to not adhering to journal guidelines or to other reasons. RESULTS: Of a total of 259 references, 56% (n = 145) contained at least 1 error, 53% (n = 137) contained minor errors and 15% (n = 39) contained major errors. Five per cent (n = 13) of references had more than 3 errors, and 79% (n = 274) of all errors were the direct result of authors not following journal instructions. CONCLUSION: Over half of all references included in manuscripts submitted to radiology journals contain at least 1 error. The majority are avoidable, resulting from failure to follow the journal's instructions to authors.

JAMA. 2002 Jun 5;287(21):2821-4. Effects of technical editing in biomedical journals: a systematic review. Wager E, Middleton P. Sideview, Buckinghamshire, England. [email protected] CONTEXT: Technical editing supposedly improves the accuracy and clarity of journal articles. We examined evidence of its effects on research reports in biomedical journals. METHODS: Subset of a systematic review using Cochrane methods, searching MEDLINE, EMBASE, and other databases from earliest entries to February 2000 by using inclusive search terms; hand searching relevant journals. We selected comparative studies of the effects of editorial processes on original research articles between acceptance and publication in biomedical journals. Two reviewers assessed each study and performed independent data extraction. RESULTS: The 11 studies on technical editing indicate that it improves the readability of articles slightly (as measured by Gunning Fog and Flesch reading ease scores), may improve other aspects of their quality, can increase the accuracy of references and quotations, and raises the quality of abstracts. Supplying authors with abstract preparation instructions had no discernible effect. CONCLUSIONS: Considering the time and resources devoted to technical editing, remarkably little is known about its effects or the effects of imposing different house styles. Studies performed at 3 journals employing relatively large numbers of professional technical editors suggest that their editorial processes are associated with increases in readability and quality of articles, but these findings may not be generalizable to other journals.

JAMA. 1994 Jul 13;272(2):119-21. Effects of peer review and editing on the readability of articles published in Annals of Internal Medicine. Roberts JC, Fletcher RH, Fletcher SW. Division of General Internal Medicine, Johns Hopkins Bayview Medical Center, Baltimore, MD 21224. OBJECTIVE: To measure the effect of the peer review and editorial processes on the readability of original articles. DESIGN: Comparison of manuscripts before and after the peer review and editorial processes. SETTING: Annals of Internal Medicine between March 1 and November 30, 1992. MANUSCRIPTS: One hundred one consecutive manuscripts reporting original research. MEASUREMENTS: Assessment of readability by means of two previously validated indexes: the Gunning fog index (units of readability in the fog index roughly correlate to years of education) and the Flesch reading ease score. Each manuscript was analyzed for readability and length on receipt and after it had passed through the peer review and editorial processes. Text and abstracts were analyzed similarly but separately. Mean readability scores were compared by two-tailed t tests for paired observations. RESULTS: Mean (+/- SD) initial readability scores of manuscripts and abstracts by the Gunning fog index were 17.16 +/- 1.55 and 16.65 +/- 2.80, respectively. At publication, scores were 16.85 +/- 1.42 and 15.64 +/- 2.42 (P = .0005 and P < .0001 for before-after differences, respectively). By comparison, studies of other print media showed scores of about 11 for the New York Times editorial page and about 18 for a typical legal contract. Similar changes were found for the Flesch scores. The median length of the manuscripts increased by 2.6% and that of the abstracts by 4.2% during the processes. CONCLUSIONS: The peer review and editorial processes slightly improved the readability of original articles and their abstracts, but both remained difficult to read at publication. Better readability scores may improve readership.
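
Since two of the studies above rest on the Gunning fog index and the Flesch reading ease score, here is a rough sketch of both using their standard published formulas. The syllable counter is a crude vowel-group heuristic, not the tool the authors used, so the resulting scores are only approximate:

```python
import re

def count_syllables(word):
    """Approximate syllables by counting groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)

    words_per_sentence = len(words) / len(sentences)
    # Gunning fog: 0.4 * (average sentence length + percentage of complex words)
    fog = 0.4 * (words_per_sentence + 100 * complex_words / len(words))
    # Flesch reading ease: 206.835 - 1.015 * (words/sentence) - 84.6 * (syllables/word)
    flesch = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables / len(words)
    return fog, flesch

fog, flesch = readability("The peer review and editorial processes slightly "
                          "improved the readability of original articles.")
print(f"Gunning fog approx. {fog:.1f}, Flesch reading ease approx. {flesch:.1f}")
```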