
Journal of Plastic, Reconstructive & Aesthetic Surgery (2009) 62, 1709-1711

LETTER FROM AMERICA

The top 10 plastic surgery research mistakes – what they are and how to avoid them

Americans love lists. Several books of lists have become best sellers here in the States. A feature on one of our late night television shows is a nightly collection of ten humorous points on a given topic. The United States is a great country. After all, where else could a retired actor become the Governor of California or even President of the United States? Is it too fanciful to believe that a ‘retired’ Hollywood cartoon character could offer a top ten list of helpful suggestions for improving the quality of scientific papers?

Consider this ‘Letter from America’:

My name is Michael Rodent. Previously, I worked for a major motion picture studio. Although my non-compete contract prohibits me from giving you my stage name, as readers of JPRAS, I know that you are clever enough to discover it. You may remember one of my classic roles in an Academy Award-winning film. I played a sorcerer’s apprentice whose experiment ended disastrously. Based upon that role I have learned much about scientific experimentation, way beyond sorcery or alchemy. Now, I am trying to make the world a better place by serving as a spokesperson for a group dedicated to the effective employment of animals in scientific research. Effective employment does not mean that research animals should earn living wages; rather it means that research animals should be employed effectively and not be wasted in experiments that are scientifically unsound for any number of reasons. Our group is called the Research Animals Trust.

I have had access to a number of papers that were submitted to or published in JPRAS and other plastic surgery journals and I have been appalled by the frequency and severity of many papers’ defects. Because our members literally lay their lives on the line for many of these papers, I wanted to take the time to list our ten ‘pet’ peeves in the hope that by doing so you will improve the quality of your research.

The defects consist of both errors of commission and omission, including flaws in experimental design, poor data analysis and problems with presentation. To best understand our group’s position, I have categorised these errors by type, but you should note that these types are not mutually exclusive. Any paper could potentially contain more than a single error:

Unintelligent design

Intelligent Design is a phrase used here in the States to describe an alternative to Darwin’s Theory of Evolution. I am not advocating that type of intelligent design. However, members of my group have been subjected to multiple experiments that can politely be described as unintelligently designed. Without doubt, this is the largest set of experimental errors and it is the single group that sadly cannot be salvaged.

Here are the most blatant examples of unintelligent design:

1. Lack of randomisation – There is no excuse for any animal experiment not being truly randomised. I mean truly randomised, and not pseudo-randomised by having alternating control and experimental groups. If you cannot figure out how to randomise your study, then go back to school and learn how before wasting time, money and animals’ lives doing laboratory experiments, or ask a biostatistician for help. (A brief sketch of a genuinely random allocation follows point 3 below.)

2. Poor controls – Please do not try to save time and money by using the same animal as his own control. Your experiment may create a systemic effect on my colleagues that may create a false negative, or type II, error because of its effect on the control side.

3. Poor models – Kate Moss and Gisele Bündchen are models. Sadly, all too often the models chosen for the experiment are like Kate or Gisele because they too bear little resemblance to the average person in the real world. When you are using animals keep these things in mind:

a. If you are studying nerve healing or nerve repair, do not use underage animals. Remember, any surgeon can look like a hero if he limits his peripheral nerve surgery to children. Children get much better results after nerve injury or repair. Therefore, please use adult animals if you are going to extrapolate your results to adult people.

b. If you are studying flap physiology, remember that rodents, rabbits and dogs are amongst the animals that have a panniculus carnosus. Humans and pigs do not. Unless you are studying musculocutaneous flaps, a pig is your best friend, not a dog nor a rodent nor a rabbit.

c. If you are studying microvascular anastomoses, be very careful about choosing rodents as your model. They are hardy creatures and you have to be extremely inept to have less than a 90% patency rate in a control group, so any experimental result that you hope to discover will require many anastomoses to achieve statistical significance. Also remember that rodents can do something that people cannot do as well, and that is recanalize a 1 mm diameter femoral artery. Unless you are regularly monitoring your rodents’ vessels, you may not know whether your vessel remained patent or recanalized.
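As promised in point 1, here is a minimal, purely illustrative sketch of what a genuinely random allocation can look like, as opposed to alternating animals between groups. The animal identifiers, group sizes and seed are invented for the example; in practice the randomisation scheme (simple, block or stratified) should be agreed with a biostatistician before the experiment begins.

```python
import random

def randomise(animal_ids, seed):
    """Shuffle the animals and split them into two equal groups.

    A seeded shuffle is reproducible and genuinely random, unlike
    'pseudo-randomisation' that alternates every other animal between
    the control and experimental groups.
    """
    rng = random.Random(seed)   # independent, reproducible generator
    ids = list(animal_ids)
    rng.shuffle(ids)            # random order, not alternating
    half = len(ids) // 2
    return {"control": ids[:half], "experimental": ids[half:]}

# Hypothetical example: 20 rats identified by ear-tag numbers 1-20.
groups = randomise(range(1, 21), seed=2009)
print(groups["control"])
print(groups["experimental"])
```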

Poor data analysis

4. Not seeing the need for blinding – A 2007 study of research articles in this journal and its American counterpart showed that the mean number of authors of research articles had grown to four by 2005. With a mean of four authors per paper, somebody could have analysed the results in a blinded manner. Too often, the only blind individuals in a study are the subject mice. Blind mice belong in nursery rhymes, neurology or ophthalmology journals. In the 21st century, there is no excuse for not running blinded experiments.

5. Ignoring intent to treat – Have you ever heard of ‘intention to treat analysis’? Many papers contain standard boilerplate discussions about how the animals were treated humanely according to international standards, but all too often these same papers lack any data stating how many animals ‘dropped out’ of the control and experimental groups before the analysis. This is ‘intention to treat analysis’. Unless you know how many animals completed an experiment, you can generate false results.

6. Using faulty scales – No, I do not mean weighing your subjects with inaccurate scales. I mean that sometimes even randomized, blinded data can be faulty because the measurement scales are invalid. For example, somebody studying flap necrosis may arbitrarily assign flap viability to a numerical scale with one being total necrosis, two being major flap loss, three being minor flap loss and four being complete flap survival. The researcher analyzes data on dozens of flaps and determines the difference in flap survival between his control and experimental groups. He calculates a pseudoscientific survival index using his 4-point scale. Sadly, he compares his results by calculating the mean for each group. This is nonsense because it assumes that a completely surviving flap is worth four times as much as a dead flap, but only twice as much as a flap with major loss. In the real world, either necrotic flap would probably require a second operation. (A brief sketch of a better analysis follows point 8 below.)

Similarly, sometimes data has been collected logarithmically, such as with Semmes-Weinstein monofilament testing or logs of bacterial counts, yet the researchers analyse the data arithmetically by taking the mean of the data points.

7. Shabby statistics – Mark Twain gave credit to Disraeli for having said, ‘There are lies, damned lies and statistics’. The shabbiest statistics are no statistics at all: nowhere in the paper do you see that the data had been subjected to a statistical test when it should have been. Frequently, the use of statistics is inaccurate.

Examples:

a. Pseudo-percentages – Converting subjects to percentages obscures a deficiency of data; the best example of this fallacy is the statement: ‘80% of the subjects lived and the other ran away’.

b. Not enough power – This does not mean that an experiment was flawed because the subjects were mice rather than elephants. It means that the authors did not use enough subjects and may have produced a type II error or false negative result.

c. Silent P’s – P’s have been silent at least since Ptolemaic times. You will see P values in the paper, but the authors appear to be sworn to silence because they have not disclosed what test they used to get their P values. Without knowing what test the authors used, one cannot know if there is any meaning to the P value and one cannot verify the data independently.

d. A tale of too many tails – Having one makes me an expert on tails. Many things that you measure do not follow a normal distribution. Using Student’s t-test on such data, no matter how many tails you choose, is wrong. Rather, non-parametric tests are the best way to analyse the data.

e. 007 is James Bond’s codename. He has a license to kill. Sadly, we do not know the fate of the rhesus monkeys subjected to an experiment to compare epineurial and fascicular nerve repairs. The authors concluded that their results were significantly greater at the 0.07 level of confidence – 0.07 – that was not a typographical error! Not only did the authors reach this conclusion but also the senior author won the first prize in the 1969 Educational Foundation Scholarship Contest of the ASPRS!1

8. Leveraging results – One of the causes of the current economic crisis is too much financial leverage. Archimedes is purported to have said, ‘Give me a lever long enough and a fulcrum on which to place it, and I shall move the world’. To paraphrase Archimedes, ‘Give me enough time and money to do the experiment and I can make any experiment have statistically significant results’. Leveraging an experiment by using larger numbers of subjects can produce results that are statistically significant. It is not enough for results to be statistically significant. Common sense trumps statistics. A quote attributed to Gertrude Stein is worth noting: ‘A difference, to be a difference, must make a difference’.2
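To make points 6 and 7 concrete, here is a small hypothetical sketch comparing flap outcomes recorded on the 4-point ordinal scale described above. The scores, group sizes and the choice of the Mann-Whitney U test are illustrative assumptions rather than a prescription; the essential habits are to avoid averaging ordinal codes as if they were measurements, to summarise logarithmic data appropriately, and to name the test alongside the P value so that readers can verify it.

```python
from scipy.stats import mannwhitneyu, gmean

# Hypothetical flap-survival scores on the 4-point ordinal scale:
# 1 = total necrosis, 2 = major loss, 3 = minor loss, 4 = complete survival.
control      = [1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
experimental = [2, 3, 3, 3, 4, 4, 4, 4, 4, 4]

# Misleading: treating ordinal codes as measurements implies that a
# completely surviving flap is 'worth' exactly four times a dead one.
print("Means:", sum(control) / len(control), sum(experimental) / len(experimental))

# Better: a rank-based (non-parametric) comparison, reported together with
# the name of the test so the P value can be checked independently.
result = mannwhitneyu(control, experimental, alternative="two-sided")
print(f"Mann-Whitney U = {result.statistic:.1f}, P = {result.pvalue:.3f} (two-sided)")

# Log-scale data such as bacterial counts should be summarised with a
# geometric mean rather than an arithmetic mean of the raw counts.
counts = [1e3, 1e4, 1e5, 1e6]
print("Geometric mean of counts:", gmean(counts))
```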



Poor presentation

9. When most of you think of presentation and mammals, you think of how they are presented on plates at restaurants. Although for generations my relatives have enjoyed sampling the fare at fine dining establishments, I have difficulty relating to being presented on a plate. As far as food is concerned, I refer you to my distant cousin, Remy, who starred in the animated film Ratatouille, and who is a master at preparing food. However, I am not talking about food – that would be too cheesy. This is much more important. I am talking about the presentation of illustrations that are included with a manuscript. I believe that it is important that none of the illustrations capture us in compromising positions. By compromising positions I do not mean photos that you find in tabloid newspapers; rather, I mean photos that add little of value to the gist of the scientific message. A recent example was a photo of one of my distant relatives who was a control in a microsurgery experiment. The photo depicted him lying on an operating table missing his hindquarter. Obviously, his family was not pleased. Always ask yourself the question: does the photo add anything of value to the paper, or does it merely add fuel to the fire for antivivisectionists who may find it on the Internet?

10. Finally, let us talk about sacrifice. How often have you read in papers that subjects were sacrificed? Were they sacrificed on the altar of science? No, this is science, not an Indiana Jones movie. I do hope, as I am sure you do, that the subjects were killed humanely. Please avoid pretension by claiming that subjects were sacrificed.

I am disturbed and distraught to read papers that are replete with flaws in their design, execution and analysis. Hopefully, if you follow my advice, in the future I shall no longer have to say that my brethren have indeed been sacrificed, that their sacrifices have been for naught, and that the entire scientific venture in which they played a role was Mickey Mouse.3

References

1. Grabb WC, Bemente SL, Koepke GH, et al. Comparison of methods of peripheral nerve suturing in monkeys. Plast Reconstr Surg 1970;46:31-8. The accepted cutoff for P values is 0.05.

2. Although curiosity may have killed the cat, you may be curious enough to learn more by reading Lang TA, Secic M. How to report statistics in medicine: annotated guidelines for authors, editors, and reviewers. 2nd ed. Philadelphia: American College of Physicians Press; 2006.

3. Available from: http://en.wikipedia.org/wiki/Mickey_mouse#Pejorative_use_of_Mickey.27s_name [Accessibility verified 12 June 2009].

M. Felix Freshwater
Department of Surgery,
University of Miami School of Medicine,
9100 S Dadeland Blvd Ste 502,
Miami, FL 33156-7815, USA

E-mail address: [email protected]