STATISTICS DIGEST
The Newsletter of the ASQ Statistics Division

Vol. 34, No. 1    February 2015

IN THIS ISSUE

Chair’s Message ............................1
Past Chair’s Message .....................5
Editor’s Corner ..............................6

YOUDEN ADDRESS
Quality and Statistics: Now THAT’S Entertainment! ..................7

MINI-PAPER
Statistical Engineering and Tearing Down the Silos of Quality Engineering ...10

COLUMNS
Design of Experiments ..................13
Statistical Process Control .............16
Statistics for Quality Improvement ...18
Stats 101 ....................................22
Testing and Evaluation ..................27
Standards InSide-Out ....................29

FEATURE
Approaching Statistics as a Language ...31

Upcoming Conference Calendar ......34
Statistics Division Committee Roster 2015 ...35

Message from the Chair
by Adam Pintar

Hello Statistics Division. It is unlikely that too many of you know me, so I will introduce myself first. My name is Adam Pintar. I have been a volunteer leader with the division since I first joined in 2010. I was the treasurer for two and a half years and then I transitioned into the chair-elect role (and now chair). You may wonder how I served half of a year. It was due to the ASQ transition from a July-to-June business year to following the calendar year. With respect to my education and professional life, I will follow Joel’s pattern from last year and say that I have a LinkedIn profile and an employee webpage, so please look me up in either place if you like (LinkedIn and NIST). I will just mention briefly that my graduate work in statistics was done at Iowa State University and I currently work in the Statistical Engineering Division at the National Institute of Standards and Technology.

I am honored to be the Chair of the Statistics Division. And as the Chair, I welcome any member to contact me. I know, that sounds cliché. However, I sincerely mean it and I hope many of you take me up on the offer. I am not as big of a football fan as Joel, but if you would like to talk tennis, I am your guy.

It is an exciting year to come for the Statistics Division. The biggest change is the document currently occupying your attention. We have decided to switch from the traditional newsletter to a publication with more statistics content. This is the first ever Statistics Digest. I hope that you enjoy it, and I want to thank the editor, Matt Barsalou, for all of his hard work. The motivation for the switch was feedback in the form of survey responses regarding the previous version of the newsletter. The feedback told us that the mini-paper was the most popular item, which implied to us that statistics content was what our members valued most. Who would have thought? All of the traditional content omitted here (e.g., the treasurer’s report) will continue to be produced, but it will be disseminated through other channels, e.g., e-zines and the website.

Other exciting news in the area of statistics content is our intention to host a foreign-language webinar and to release in video form the narrated slide shows that were once available from the website. As in previous years, we plan to host 10 to 12 webinars, but unlike previous years, we hope to host one webinar in Spanish. This of course aligns well with the ASQ goal of becoming a global society, but it also reflects the desire of the leadership committee to engage and provide benefits to all division members, worldwide. Some division members may recall the narrated slide shows on basic statistics that were available in the past for download from the Statistics Division website for a small fee. Conditional on overcoming a few technical hurdles, which seems likely, these will be available for viewing again soon, and they will be free of charge.


    Submission Guidelines

Mini-Paper
Interesting topics pertaining to the field of statistics; should be understandable by non-statisticians with some statistical knowledge. Length: 1,500-4,000 words.

Basic-Tools
Application oriented and less technical than a Mini-Paper. The focus should be on how to apply basic statistical concepts, with an emphasis on industry or services. The use of examples to illustrate points is highly recommended. Length: 1,000-3,000 words.

Feature
Focus should be on a statistical concept; can either be of a practical nature or a topic that would be of interest to practitioners who apply statistics. Length: 1,000-3,000 words.

Case Study
Focus should be on an actual example of the application of a statistical method in industry or services. The names of the companies involved do not need to be mentioned. Length: 1,000-2,500 words.

General Information
Authors should have a conceptual understanding of the topic and should be willing to answer questions relating to the article through the newsletter. Authors do not have to be members of the Statistics Division. Submissions may be made at any time to [email protected]

All articles will be reviewed. The editor reserves discretionary right in determining which articles are published. Submissions should not be overly controversial. Confirmation of receipt will be provided within one week of receipt of the email. Authors will receive feedback within two months. Acceptance of articles does not imply any agreement that a given article will be published.

Vision, Mission, and Strategies of the ASQ Statistics Division

Vision
The ASQ Statistics Division promotes innovation and excellence in the application and evolution of statistics to improve quality and performance.

Mission
The ASQ Statistics Division supports members in fulfilling their professional needs and aspirations in the application of statistics and development of techniques to improve quality and performance.

Strategies
1. Address core educational needs of members
   • Assess member needs
   • Develop a “base-level knowledge of statistics” curriculum
   • Promote statistical engineering
   • Publish featured articles, special publications, and webinars

2. Build community and increase awareness by using diverse and effective communications
   • Webinars
   • Newsletters
   • Body of Knowledge
   • Web site
   • Blog
   • Social media (LinkedIn and Twitter)
   • Conference presentations (Fall Technical Conference, WCQI, etc.)
   • Short courses
   • Mailings

3. Foster leadership opportunities throughout our membership and recognize leaders
   • Advertise leadership opportunities/positions
   • Invitations to participate in upcoming activities
   • Student grants and scholarships
   • Awards (e.g., Youden, Nelson, and Hunter)
   • Recruit, retain, and advance members (e.g., Senior and Fellow status)

4. Establish and leverage alliances
   • ASQ Sections and other Divisions
   • Non-ASQ (e.g., ASA)
   • CQE Certification
   • Standards
   • Outreach (professional and social)

    Updated October 19, 2013

    Disclaimer

The technical content of material published in the ASQ Statistics Division Newsletter may not have been refereed to the same extent as the rigorous refereeing that is undergone for publication in Technometrics or J.Q.T. The objective of this newsletter is to be a forum for new ideas and to be open to differing points of view. The editor will strive to review all articles and to ask other statistics professionals to provide reviews of all content of this newsletter. We encourage readers with differing points of view to write to the editor; they will be given an opportunity to present their views via a letter to the editor. The views expressed in material published in this newsletter represent the views of the author of the material, and may or may not represent the official views of the Statistics Division of ASQ.


Message from the Chair (continued)

There are many other activities planned for this year, so keep your eyes open for e-zines and have a look at the http://www.asq.org/statistics website from time to time. Since this is the first Statistics Digest, I am going to spend the rest of the message discussing a statistical topic. I thought quite a lot about a topic. I wanted it to be simple but still useful. I finally decided on simulation because I use it often and it can range from very simple to staggeringly complex, depending upon the problem.

I will not discuss computer simulation in general. That is a subject for books. My discussion will be about a simple example that popped up in my day-to-day work. Since the example is related to my work and it is not perfectly clear what details I may or may not release about it, I will disguise the details while still conveying the underlying structure. Imagine manufacturing widgets in many different factories (yes, I know, cliché again). Many physically distinct factories produce nominally identical widgets, but in reality the widgets from different factories are not identical. The goal of the analysis is to quantify (point estimate and uncertainty) the average value of a certain physical property of all widgets. To do this, factories are randomly sampled, two production days within each factory are randomly sampled (probably consecutive, though), and two widgets within each production day are randomly sampled. The data table looks like this:

[Table: one row per sampled widget, indexed by factory, production day (1 or 2), and widget (1 or 2), with the measured property value.]

In the interest of keeping things simple and short, I will not provide the full details of the model I used for these data. But it is important to note for this discussion that I wanted to assume that the four measurements from a single factory were independent observations from a normal distribution with mean μ and standard deviation σ, denoted N(μ, σ²). However, while looking at the raw table of numbers I noticed that for a few factories both of the day 1 observations were less than the day 2 observations, or vice versa. I began to worry that days within a factory may be systematically different from one another. I started to think about how likely this ordering would be given the assumption I desired to make. Clearly, P(A12 < A21) and other similar quantities under the N(μ, σ²) assumption are 0.5. However, to complete the calculation, we must also know quantities such as P(A12 < A21 | A11 < A21), which are not as easy to infer. The vertical bar notation means conditional probability.

At this point I decided to do a quick simulation. I will discuss other possible solutions shortly. I initially did the simulation in R, but while writing this note, I checked that it could also be done with Excel. The R code is posted at the end in Figure 1. Please send me an email if you would like to know how I did the calculations in Excel. I am also happy to hear critiques of my R code, as I am sure it could be improved. The general (separate from computing environment) steps of the simulation are as follows:

1. Generate 4n deviates from a N(0,1) distribution and arrange them into a rectangular array with n rows and 4 columns. It is sufficient to use the N(0,1) distribution because if X follows a N(μ, σ²) distribution, then Z = (X − μ)/σ follows a N(0,1) distribution, and the shift and scale preserve ordering.
2. Considering each row to be a single trial, count the number of trials for which the first two observations are both less than the second two; call it C1.
3. Count the number of trials for which the second two observations are both less than the first two; call it C2.
4. The approximate probability of the event of concern is (C1 + C2)/n.

The value turns out to be about 0.33, which eased my fear about considering the four observations from a single factory to be independent and identically distributed N(μ, σ²) deviates.


Now, many of you may be thinking: why not conduct a t-test? I admit it; that is a good idea. I did not take that approach for three reasons. Before I state them, note that the subject matter expert with whom I was working screened the data for obviously problematic observations before giving them to me. So even in factories where the ordering issue occurred (both day one observations below both day two observations, or vice versa), all the values within a factory still seemed close together. The first reason is that the t-test is a comparison of population means. When my curiosity was initially raised, the issue was the ordering, so I latched onto that particular question and I simply did not think about comparing the means. Second, the ordering issue occurred in multiple factories. What if the t-test rejected the null hypothesis of equal means for one factory with the ordering issue but not another? How would that be resolved? Lastly, the probability of the ordering issue occurring in any one factory provides a second sanity check for the N(μ, σ²) assumption. Specifically, since there were nƒ factories, I expect the ordering issue to occur in about 0.33nƒ of them, so I can compare that value to the number of factories in which it actually occurred. The data passed that sanity check too.
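One way to calibrate that comparison: under the assumption, the number of factories showing the ordering pattern is roughly Binomial(nƒ, 0.33). The R sketch below illustrates the check; the counts in it are hypothetical stand-ins, not numbers from the actual study.

    # Hypothetical counts, for illustration only: say 12 factories were
    # sampled and the ordering pattern appeared in 5 of them.
    nf <- 12
    observed <- 5
    0.33 * nf  # expected count: about 4 factories
    # P(X >= observed) under Binomial(nf, 0.33); a sizable probability
    # means the observed count is unremarkable under the assumption.
    pbinom(observed - 1, nf, 0.33, lower.tail = FALSE)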

I am compelled now to mention that my approach does not conclusively prove that the N(μ, σ²) assumption is valid (the t-test could not either). It only fails to provide conclusive evidence that the assumption is invalid. That is an important distinction to keep in mind.

Some of you may also think that I should have leveraged the multivariate normal distribution to calculate the probability exactly. That is a good point too. While writing this message I started wondering if the exact probability is 1/3 (I have only had the patience to run the simulation long enough to get three significant figures: 0.333). Please email me if this is obviously true or if you actually do the calculation with the multivariate normal distribution.
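One way to see that the answer is indeed 1/3, sketched here as an aside: no multivariate normal integration is needed, because under the iid N(μ, σ²) assumption all 4! = 24 orderings of (A11, A12, A21, A22) are equally likely, so the event is purely combinatorial. In LaTeX notation,

    P\bigl(\max(A_{11}, A_{12}) < \min(A_{21}, A_{22})\bigr) = \frac{2! \cdot 2!}{4!} = \frac{1}{6},
    \qquad
    P(\text{either day dominates}) = 2 \cdot \frac{1}{6} = \frac{1}{3},

since the day 1 pair must occupy the two lowest ranks and each day’s pair can be internally ordered in 2! ways.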

I am not trying to argue here that I solved my own problem in the best way possible. I am only trying to impress upon you that when you are faced with a question (a statistics question in particular) and the path forward is not clear, simulation may work well. In complicated sample size and power calculations, my experience has been that simulation can work very well (t-tests have formulas for such calculations). It has also been my experience that the process of constructing a simulation to do a complicated calculation can add greatly to your understanding of the problem on which you are working.

    I look forward to serving as chair of the division this year and to hearing from you personally if you so choose.

    Figure 1: R Code
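The figure’s code listing is not reproduced here; the following is a minimal R sketch of the four simulation steps described above, offered as an illustration rather than as the original figure’s code:

    set.seed(42)
    n <- 1e6  # number of simulated factories (trials)

    # Step 1: generate 4n N(0,1) deviates in an n x 4 array; columns 1-2
    # play the role of day 1, columns 3-4 the role of day 2.
    x <- matrix(rnorm(4 * n), nrow = n, ncol = 4)

    # Step 2: trials where both day 1 values fall below both day 2 values.
    c1 <- sum(pmax(x[, 1], x[, 2]) < pmin(x[, 3], x[, 4]))

    # Step 3: trials where both day 2 values fall below both day 1 values.
    c2 <- sum(pmax(x[, 3], x[, 4]) < pmin(x[, 1], x[, 2]))

    # Step 4: approximate probability of the event of concern.
    (c1 + c2) / n  # about 0.333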

The mention of products, commercial or otherwise, does not imply endorsement by the author’s institution, nor are they necessarily the best available for the purpose.


Past Chair’s Message
by Joel Smith

I started typing what felt like a really generic message; however, that’s exactly what anyone who ever reads these kinds of things would expect: “Wow, time flies…”, “I really want to thank…”, “It’s been a really great year…”, etc. But then I deleted it. No one cares whether I think my year as chair passed quickly or not. A newsletter – ahem, digest – is a really impersonal way to thank people whose work has meant so much, and whether I feel it’s been a good year or not is irrelevant to division members, who can form their own opinions.

Instead, I want to talk a little bit about the state of statistics. You’re probably aware that roughly half of all people know less than average about statistics, which is a real shame. I’d like to see everyone above average. Schools aren’t teaching the fundamentals effectively and keep changing their curriculum, as is evidenced by the constantly changing percentage of students passing the AP Statistics exam: down a little one year and up a little the next. It’s as though once something moves the percentage lower they fix it, but once things improve, they can’t seem to leave it alone and let us keep on climbing.

Maybe educators should relate the content to something students understand, like sports – tell a kid that Mike Trout has a .333 batting average and he knows that Mike will hit the ball once every three times he has an at bat – unless Mike has a hot or cold streak.

Or maybe, if we let kids gamble at a younger age, they’d understand statistics a little better. I’ve been to Las Vegas a few times, and the really good gamblers watch the slot machines like hawks. As soon as someone gets up from a machine in frustration after losing several spins in a row, a good gambler knows that machine is due to pay and has much better odds than the others. Likewise, the good blackjack players know that when they are really down, all they really need is to play enough hands for things to balance out and make back their money. This fact is so simple that I know hardly anyone who’s ever told me about losing money playing blackjack in a casino. Most end up a little bit ahead, but just can’t remember exactly how much.

I guess it’s not feasible to get kids to Las Vegas or Atlantic City, so maybe we should relate it to the lottery, since almost every state has one. Of course everyone knows the odds of winning the lottery are really low, but if you win, the payout is so big it makes up for it! There are a couple of tricks to making money in the lottery, and the best players know them: First, you can’t win if you don’t play, so it’s important to always play. And buying multiple tickets makes it much more likely you’ll win. Buy only one and you’ll probably never win. Finally, don’t pick an obvious pattern like 1, 2, 3, 4, 5, and 6 – that never comes up; instead, go for more random-looking numbers like birthdays or ages. Kids can understand that if you teach them.

I just don’t see why students can’t be more prepared for statistics when they already understand the concepts before taking a class. Ask a kid what the odds are of neither of two independent events occurring if each has probability 30% and he struggles to recall the formula and solve the problem. But tell him there’s only a 30% chance of rain Saturday and again Sunday and he knows he can safely plan a trip to the beach without worrying about bad weather ruining it. Unless the weatherman was wrong about that 30%, of course…
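For the record, the formula the kid struggles to recall gives

    P(\text{no rain on either day}) = (1 - 0.3)^2 = 0.49,

so there is a 51% chance of rain on at least one of the two days – hardly a safe beach plan.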

By nature of your being a member of the Statistics Division, at this point you’re probably either thinking I’m a terrible statistician or realizing all of this was written in jest. There is a point though: In the world, and even in the organizations most of us work for, there are far more people who would not have realized something was wrong than who had your reaction. Let’s all do what we can to turn that ratio the other way.

Thanks for having me as your chair. You’ll certainly be in good hands in 2015, and I hope we have made and continue to make some difference in your professional life.


Editor’s Corner – Outgoing Editor
by Mindy Hotchkiss

As this is my final issue as official Newsletter editor, I would like to express my great appreciation for all those on the Statistics Division Council who have worked with me over the last few years to put each issue together. They have provided the material, tolerated my revisions and re-writes, and reviewed 40-page drafts looking for wayward typos and other errors. I am now officially handing over the editor position to Matthew Barsalou, who I know has spent countless hours developing the content for the new Statistics Digest.

I am personally very pleased to see this new direction moving forward and am thrilled to welcome all our new regular contributors! They will be focusing on key technical statistics topics of particular interest to our member community. Meanwhile, information about Division affairs will be migrating to your monthly E-Zine, which will be getting longer. Look there for grant and scholarship info, award nominations, and conference announcements and Calls for Papers.

We encourage you to provide feedback on the new Statistics Digest via our LinkedIn discussion group (group 2115190 if you are not a member yet). We will be looking at your comments as we go forward.

I will continue to be behind the scenes here at the Statistics Division as the new Vice Chair of Member Development, and will still take a proprietary interest in the new Newsletter.

Best wishes to all!

Editor’s Corner – Incoming Editor
by Matt Barsalou

Welcome to the first issue of Statistics Digest! It may be a bit odd to start with volume 34, no. 1; however, this is because Statistics Digest is a continuation of the Statistics Division Newsletter. Statistics Digest will run less division news and more technical content.

We will be featuring regular columns, with Design of Experiments covered by two of the leading researchers in the field: Dr. Bradley Jones, Principal Research Fellow at SAS’s JMP Division, and Arizona State University’s Dr. Douglas C. Montgomery. Statistical Process Control will be covered by the past editor of Quality Engineering, Dr. Connie Borror, who has authored over 75 journal articles and is a professor at Arizona State University.

Statistics for quality improvement will be covered by Dr. Gordon Clark, who is the principal consultant at Clark Solutions, Inc. and a professor at The Ohio State University. Dr. Jack B. ReVelle of ReVelle Solutions, LLC will be explaining basic statistics concepts in Stats 101. We also have Dr. Laura J. Freeman, Assistant Director of the Operational Evaluation Division and Test Science Task Leader at the Institute for Defense Analyses, who will give us an overview of testing and evaluation using statistics. University of Central Florida’s Dr. Mark Johnson will discuss statistics in international standards in his column “Standards InSide-Out.”

We will continue to run a Mini-Paper in every issue and will be adding at least one feature to each issue. The content will range from basic to cutting-edge, so there will be something for everybody with an interest in statistics. This issue includes the Youden Address by our own statistical process control columnist, Connie Borror. The Youden Address is named after Dr. Jack Youden and is awarded to those who exemplify Dr. Youden’s approach to experimentation and communicating statistical concepts. If you were not fortunate enough to attend Geoff Vining’s 2014 Technical Communities Conference talk “Statistical Engineering and Tearing Down the Silos of Quality Engineering,” you can read the resulting Mini-Paper in this issue.

Mindy Hotchkiss will be handing Statistics Digest over to me, and I would like to thank her for her years of dedication to the Newsletter. The first ASQ Statistics Division Newsletter was published on 21 February 1980, and it is a privilege for me to be the editor of a “new” publication with 35 years of history behind it. Please contact me at [email protected] to discuss a possible Mini-Paper, Basic-Tools, Feature, or Case Study if you would like to be a part of this continuing history.



YOUDEN ADDRESS
Quality and Statistics: Now THAT’S Entertainment!
by Connie M. Borror
School of Mathematical and Natural Sciences, Arizona State University

William “Jack” Youden is one of my heroes. I never met him, but his work was nothing short of brilliant. In researching his life and work in preparation for this talk, I learned about his contributions to the war effort during World War II. I found it fascinating. Also being a fan of classic films, I wondered if Youden and others in his division during the war (which I will discuss in more detail later) had ever appeared or been featured in short film clips from that period. Newsreels were often played at theaters before and between movies to update filmgoers about the war effort, sell war bonds, or remind people that “loose lips sink ships!” There were many of these films made during World War II, so perhaps Youden could be seen somewhere in one of them! But, unfortunately, he was not. So, then I thought perhaps NIST or the Film Institute of the Academy of Motion Picture Arts and Sciences might have some clips of Youden (if not video, then audio) that I could use for this talk. Unfortunately, again, no.

But then, thinking about Youden, World War II, short film clips, and of course quality and statistics, I turned my attention to classic films and how the field of statistics has been portrayed or used in these films. As a result, my Youden talk is focused on quality and statistics in film – a fun look at how they are used, directly or indirectly. I limited my search on these topics to films made prior to 1970. In this presentation, only a few select films are discussed, but they present a wide range of uses of statistics, statistical thinking, and quality (here motivated by a film discussing efficiency rather than just quality).

Statistics on Screen
Although the films featured here are pre-1970, I should at least mention how we have seen statistics used in popular culture in film and television. The most obvious example is the 2011 film Moneyball, where statistical analysis is used in baseball. On television, we have seen statistics used or mentioned in one way or another in forensics (Numb3rs), politics (West Wing), and medicine (House), to name a few. On the comedy Friends, a running joke was that no one knew what the character Chandler Bing did at his job. It wasn’t until near the end of the show’s 10-year run that it was discovered (note: his job was “statistical analysis and data reconfiguration”). Even the 1980s TV series Miami Vice got in on the statistics act in an episode where a character proclaimed, “I’m a sergeant in statistical analysis.”

It just seems like there is a place for statistics (and quality) in all genres, big screen and small.

Make Up Your Own Statistics, Everyone Else Does
As statisticians and engineers, we often hear the proclamation that statistics can be used to show or prove anything we want. Sometimes it seems as though people just make numbers up while in conversation. One good example of this in film is 1948’s An Apartment for Peggy. A less than desirable name for a movie, I thought, but it deals with two of my favorite subjects: 1) life in the United States following World War II and 2) universities. It takes place right after World War II, when veterans were returning home, which resulted in an incredibly large number of people enrolling in college thanks to the G.I. Bill. In this film, a veteran and his pregnant wife face the same dilemma many veterans faced during this time: finding married housing on a university campus. It was a real problem at the time, with families living in makeshift housing such as travel trailers or university buildings turned into sleeping quarters until more permanent housing could be obtained. The husband in the film is becoming disillusioned with the situation (going to school, a small stipend from the government, a pregnant wife, and no real place to live). He tells his wife that a fellow veteran is working as a car salesman making $150 a week and could get him a job selling cars as well. The wife objects and spouts off some statistics she just made up in her head, which is what she does throughout the entire film. She tells him “a recent survey said that 64% of all used car salesmen wish they had gone into some other field.” Of course, she made that up, and her husband calls her on it, to which she replies “of course I made it up, someone’s always making up statistics, so it might as well be me. You’d be surprised how many arguments I win with my statistics… no one ever bothers to check.” So true. One of the problems we encounter in society is that no one bothers to check.


Sampling – What Could Go Wrong?
Survey results and the statistics they produce are often thrown around in film, but rarely are they the theme for an entire film. One exception is the film Magic Town from 1947. This film is all about conducting surveys and being able to identify public opinion on a myriad of topics. Gallup polls are mentioned quite a bit in this film, but the premise is that a pollster has decided that there must be a little town out there that is a cross-section of America. One that, when polled on any topic, would give results representing the opinion of the entire nation. It would be “Utopia” – surveying one small segment of the country instead of thousands of people at great expense and getting the same results. As statisticians, we would love that!

Early in the film, the pollster happens to find a small city whose survey results on a topic turn out to be identical to results obtained by the big polling companies, but at a fraction of the cost (note: the example given in the film shows that both surveys give the exact same percentage, down to the decimal. No error in that!). The pollster moves to this city to begin surveying the townspeople on a myriad of topics. Thankfully, he knew enough to make sure the citizens did not know he was polling them – no bias, you know. But, over time, the citizens find out and come to the conclusion that they as a town could capitalize on their new status. Thus, the beginning of the end. They begin selling their opinions. In one scene, the citizens discuss opinions as an “export” and we hear “we’re tired of giving them away, from now on, we’re selling them.” Of course it doesn’t end well for the new public opinion capital of the U.S.!

Two-Sample Comparison – with n = 1
Now, I have to say it was difficult to find any film which discussed hypothesis testing or quality control. That’s not too surprising. But I did remember the film Cheaper by the Dozen (1950). The film is based on the lives of Frank and Lillian Gilbreth, a husband-and-wife team of industrial engineers who are now celebrated for their groundbreaking work in the early part of the 20th century on efficiency (in particular, time studies). The title of the film comes from the fact that they had 12 children. They would often carry out studies together, and one particular scene in the film came to mind one day when I was preparing notes on two-sample testing for a basic statistics class. I wanted to emphasize the problem of anecdotal evidence and how to avoid it, and in particular how a sample size of n = 1 in each group was something to avoid! A single observation for each group was not a good test. The scene in the film was very appropriate for this purpose. In it, Lillian Gilbreth has a stopwatch and is timing Frank as he buttons his vest. He first buttoned the vest from top to bottom with Lillian timing him. He then buttoned the vest from bottom to top – again being timed. So now we have a single time for top-to-bottom and a single time for bottom-to-top, with the bottom-to-top taking less time. To which Frank Gilbreth exclaims “bottom to top, that’s the answer!” A single run and the answer is known! Interestingly, if you time it yourself (of course with today’s technology) during this scene, you will find different times than Lillian did! The movie is full of studies big and small. At one time, the children were to have their tonsils removed, so Frank decided to have the surgeries conducted in his house while he filmed them. The films were to be used to identify where the doctors might be wasting time or effort during an operation! There is also a scene depicting efficiency in which Frank Gilbreth, enrolling his children in a new school, demonstrates to the principal how to take a bath “in less time than it takes to play a record.”

Forensics and “Early Predictive Analytics”
There is no lack of films involving forensics. Even though the word “forensics” is rarely used in older films, it is there nonetheless. Every murder mystery revolves around some type of forensics, and statistics and probability are right behind it, in the background. You will often hear that “the probability of…” or “the likelihood of…” is this or that number. Estimates of insurance fraud and probabilities of certain types of death are found in many films. Even analytics can be found, if we look hard enough. Of course the word “analytics” is never used, but it’s there.

My favorite example of “early predictive analytics” can be found in the 1928 silent movie The Lodger. The film was one of only a handful of silent films directed by the famous director Alfred Hitchcock. But that’s not why I like it. In the film, a serial killer is on the loose in London. The police are obviously having a difficult time catching the killer, even though he always strikes on a Tuesday evening in different locations around the city. One detective finally decides to do a little crude cluster analysis – not what it was called in the film, of course – and plots each incident on a map of the city.


A triangular pattern began to emerge, and they predicted where he would strike next based on the pattern and location of the past murders. Of course, they were right about the location of where he would strike next and caught him. It is interesting to watch this scene and the use of predictive analytics to identify the next strike, given all the advances we have today in technology.

“You(den) Ought to be in Pictures”
“You Ought to be in Pictures” is a famous song from the 1930s. After researching Youden for this talk, trying to find any film or audio of him and being unsuccessful, I decided he would make a great film subject. Youden was assigned to the Bombing Accuracy Subsection of the Eighth Air Force Operations Analysis Section during World War II. Although I could not find film of Youden himself, there are films available on the internet which document the work of the Eighth Air Force Division. Youden developed and applied novel statistical methods (e.g., the Youden chart) to improve bombing accuracy, first in Britain and then in the Pacific, during World War II. He was later awarded the Medal of Freedom. I refer the reader to Miser (1992) for more detail on Youden’s contributions during the war and the significant impact they had on the war effort. I think his work and that of the Eighth Air Force during World War II is some of the most innovative and interesting. It would make a great movie.

Conclusions
If you enjoy watching films or television programs where probability, statistics, or statistical analysis are mentioned as much as I do, you don’t have to look too hard to find them. In this talk, I have only highlighted a few films that, at least briefly, touch on statistics. There are so many more that have appeared over the last three decades. I was disappointed in not being able to find some newsreel or audio clip of Youden. If one is out there, I hope we find it eventually. Thank you for this opportunity to talk about three of my favorite topics: film, statistics, and Dr. William “Jack” Youden.

Acknowledgments
I would like to thank the following people for their help in preparing this presentation and paper:

◦ Will Guthrie, NIST
◦ Sarah E. Burke, Arizona State University
◦ Cassie Blake, Film Institute of AMPAS

References
◦ Briscoe Center for American History, University of Texas at Austin, Digital Collections, http://www.cah.utexas.edu/db/dmr/
◦ MacArthur, C. (1991). Operations Analysis in the United States Army Eighth Air Force in World War II (History of Mathematics Series: Book 4). American Mathematical Society, London Mathematical Society.
◦ Miser, H. J. (1992). “Craft in Operations Research.” Operations Research 40(4): 633-639.
◦ www.subzin.com

About the Author
Dr. Connie M. Borror is a Professor in the School of Mathematical and Natural Sciences at Arizona State University West. She earned her Ph.D. in Industrial Engineering from Arizona State University in 1998 and joined the School of Mathematical and Natural Sciences in 2005. Her research interests include experimental design, measurement systems analysis, response surface methods, and statistical process control and monitoring. She has co-authored two books and over 75 journal articles in these areas. Dr. Borror is a Fellow of the American Statistical Association and the American Society for Quality and is past editor of the journal Quality Engineering.


MINI-PAPER
Statistical Engineering and Tearing Down the Silos of Quality Engineering
by Geoff Vining
Virginia Tech

Introduction
The nature of quality engineering is changing. Quality engineers/industrial statisticians can no longer be simply data analysts. Organizations are finding it too easy to ship analysis overseas. A manager or engineer in North America can send data via the internet to an analyst in India just before leaving work and receive the full analysis the next morning. Of course, the analyst in India works for much less money than quality engineers in North America.

Our survival as a profession depends on being able to add value. The future requires us to be able to solve large, unstructured, complex problems that are of crucial importance to the organization. These problems require multidisciplinary teams working together. Quality engineers/industrial statisticians must adapt to the new environment. For many people, this transition will not be easy. The skill set required for success is quite different from what most quality engineers/industrial statisticians learned in the university.

Role of Six Sigma
Six Sigma programs are an important part of the foundation for much of this transformation. Ultimately, Six Sigma is a structured, strategic approach for addressing important problems for organizations. Six Sigma projects do not address truly large, unstructured, complex problems; however, such problems typically need Six Sigma or something similar to address important sub-problems. Six Sigma provides insights into the skill set necessary for the quality engineer/industrial statistician in the new environment.

The three basic components of the Six Sigma skill set are organizational psychology, industrial statistics, and project management. All three components are vital.

Too many industrial statisticians in the 1990s dismissed Six Sigma as just another repackaging of the tools we have used for many years. Unfortunately, very few industrial statisticians at that time had the necessary people and project management skills to drive the success of Six Sigma. Industrial statisticians were well-trained in the analytic tools from university. However, their university curricula provided no exposure to organizational psychology and project management.

Organizational psychology, especially change management, is essential for success. Getting people to work together as a well-oiled machine is not trivial. It requires solid training and practice in team building and team dynamics. It takes quite a bit of skill to get teams through the “forming, storming, norming, and performing” phases. Organizational psychology puts a great deal of emphasis on communication, which is crucial for managing change.

Industrial statistics provides a large arsenal of analytical tools, ranging from the “seven basic tools” to extremely sophisticated statistical methodologies. The traditional areas of industrial statistics are statistical process control, experimental design, response surface methodology, time series, and modeling.

Six Sigma places heavy emphasis on software for analysis, which is both good and bad. The software allows the engineer/manager to do lower-level analyses for which the standard Six Sigma training provides more than adequate preparation. However, the software also allows the engineer/manager to misapply more sophisticated tools in which they have little or no training.

Proper project management is essential for generating a timely solution within budget, especially for high-impact, complex projects. Success depends on proper time and resource management, which requires the application of sound project management practices. Good organizational psychology and properly applied analytical tools are not enough to ensure timely success within budgetary constraints.

There are several lessons learned from Six Sigma. It is important to train subject matter experts in simple tools, involving both “soft” and “hard” skills. Such training allows the subject matter expert to solve a large variety of problems and frees up the quality engineer/industrial statistician for more technically challenging tasks. It is vital for the subject matter experts to understand the limitations of their training. Subject matter experts need to know when to bring a statistical tool expert into the process.


An important role for quality engineers is to be an integral part of problem-solving teams as true colleagues. Success requires true collaboration. Six Sigma emphasizes the need for top leadership involvement and commitment. Such leadership ensures that the teams focus on larger results that are meaningful to the bottom line, as measured by true dollar savings.

Statistical Engineering
It is clear that quality engineers/industrial statisticians must learn how to contribute the most value possible to their organizations. As a result, our future focus must be large, unstructured, complex problems whose solutions require collaboration among high-profile interdisciplinary teams. These problems cut across the organization, and management views the solutions as vital for the entity’s health and well-being. Success requires new tools and a new mindset.

Roger Hoerl and Ron Snee propose a new field that they call statistical engineering. They are building upon the lessons learned from Six Sigma. Six Sigma provides a good strategic structure and a demonstrated structured problem-solving approach. What is missing is something more tactical that guides how we should deploy our tools. We need to ask how we can generalize solution tactics to solve future problems. The goal of statistical engineering is the development of appropriate theory for applying known statistical principles and tools to solve high-impact problems for the benefit of humanity. Ideally, such theory minimizes “one-off” solutions. The April 2012 issue of Quality Engineering discusses statistical engineering in detail.

The heart of statistical engineering is the scientific method, which is an inductive/deductive problem-solving process. The basic steps are:

• Understand the real problem at hand
• Define the problem
• Discover solutions
• Abstract from the concrete
• Develop a theory
• Test the theory using data
• Modify the theory as necessary

    For complex problems, the scientific method requires true interdisciplinary collaboration.

Every successful application of the scientific method requires data. Essential elements are data collection, data analysis, and data interpretation. These tasks go to the heart of what quality engineers/industrial statisticians do very well. Quality engineering/industrial statistics is the handmaiden to the scientific method. The analytical tools by themselves cannot solve any real problem. However, they can assist the subject matter expert to see better solutions more quickly. The ability to collect, to analyze, and to interpret data properly plays a very important role in solving large, unstructured, complex problems. Such problems require very good subject matter expertise combined with an equally strong understanding of the analytical tools.

Silos
“Silos” are a major impediment to success in finding good solutions to large, unstructured, complex problems. What do we mean by silos? Silos often exist within disciplines. For example, engineers speak only with other engineers, and statisticians speak only with other statisticians. Silos also exist within industries. For example, automotive people speak only with automotive people, and aerospace people speak only with aerospace people. People who live only in their specific silo often “reinvent the wheel” because they are not aware that people in other disciplines or other industries have encountered similar problems and have developed good solutions. Unfortunately, too many of us live only in our silo!

Solutions to large, unstructured, complex problems require new approaches and new insights. These new approaches and insights in turn require cross-fertilization across disciplines and industries. For example, people now apply quality engineering tools/methods to finance, risk management, and healthcare, all of which are fields very different from the manufacturing environment where these tools were developed.

How do we tear down the silos? First, we must recognize the problem. We must understand that interdisciplinary teams are the future. Such teams are created for a specific problem and then move on to other tasks. Each team member’s subject matter expertise is essential. We must get better at applying solutions from other problems to address the problem at hand. We must stop thinking “my only tool is a hammer; so, every problem is a nail.”


Boundaries between disciplines must become more amorphous and less rigid. It is important to stretch the overlap across disciplines. Each discipline and industry has its own language. As a result, we must learn other languages as we function more in an interdisciplinary fashion. In the process, we learn new approaches to problems and we minimize re-inventing the wheel.

ASQ’s Technical Communities Council
Professional societies have a large role to play in breaking down silos. These societies need to create structures that foster:

• Communication
• Cooperation
• Collaboration

We need to create more opportunities to interact across disciplines. ASQ’s Technical Communities Council (TCC) is one effort along these lines.

The TCC is an umbrella organization within ASQ that provides oversight and guidance to the society’s technical communities: the groups whose primary purpose is to generate and/or disseminate the quality body of knowledge (QBoK) and that are designated by the ASQ Board of Directors as technical communities. Current technical communities are the divisions, the interest groups, and the Publications Management Board. The purpose of the TCC is to foster greater cooperation, communication, and collaboration across the technical communities. The TCC is less than two years old and is the successor to the Division Affairs Council (DAC). Properly managed, the TCC represents a realignment within ASQ to help people address large, unstructured, complex problems.

One way to view the TCC is as ASQ’s “steward” of the QBoK. It ensures that member needs are met over the spectrum of ASQ members: “newbie,” beginner, intermediate, advanced, and thought-leader. The TCC must guide the technical communities as they generate timely new QBoK, especially for emerging areas. It must nurture new communities based upon these emerging areas.

The TCC must break down the silos among the existing communities. For example, imagine collaboration among the Automotive, Statistics, and Reliability Divisions (all strong divisions) to develop an expanded QBoK specifically for the automotive industry, bringing together true interdisciplinary teams of thought leaders and strong subject matter experts. Then imagine forming other interdisciplinary teams to modify that new QBoK to help address the needs of aerospace or other industries.

Realizing such a vision requires new thinking and new structures. Some of the resulting changes may cause some level of discomfort, even pain. However, achieving such a vision truly drives ASQ closer to being the “global voice of quality.”

Concluding Remarks
Quality engineering/industrial statistics stands at a major crossroads. These disciplines must evolve to remain relevant. Statistical engineering provides an interesting pathway, particularly since it requires working in truly interdisciplinary teams as real colleagues. For many of us, this transition is not easy; however, as a profession it is necessary, helping us to make larger, more meaningful contributions to our organizations and to society.

About the Author: Geoff Vining is a Professor of Statistics at Virginia Tech and an internationally recognized expert in quality engineering and industrial statistics. He is currently a member of the ASQ Board of Directors and Past Chair of the TCC. He is a Fellow of the ASQ. Dr. Vining served as Editor of the Journal of Quality Technology, Editor-in-Chief of Quality Engineering, Chair of ASQ’s Publications Management Board, and Chair of the Statistics Division. Dr. Vining has received ASQ’s Shewhart Medal and Brumbaugh Award, as well as the Statistics Division’s William G. Hunter and Lloyd Nelson Awards.


COLUMN
Design of Experiments
by Bradley Jones, PhD, JMP Division of SAS
and Douglas C. Montgomery, PhD, School of Computing, Informatics and Decision Systems Engineering, Arizona State University

Evaluating Screening Designs

Jones and Montgomery (2010) proposed a new graphical diagnostic tool for evaluating designed experiments. The suggested graph is a cell plot showing the pairwise correlation between two potential model terms as a colored square. If, as in Figure 1, there are 45 terms of interest, then the cell plot shows 45 × 45 = 2025 colored cells.


What does this plot tell me?
Absolute values of the correlations are between zero (white) and one (black). The diagonal line of cells is black for any design, but it is most desirable for all the off-diagonal cells to be white, or at least on the white end of the color spectrum. Every cell in Figure 1 is either pure white or pure black. Multiple cells in a row that are colored pure black indicate that the associated model terms are indistinguishable from one another. In such a case, we say that the two effects are confounded. Looking at the first row of cells, we can see that the main effect of factor A is confounded with the DG two-factor interaction.

The design shown in Table 1 and graphed in Figure 1 is a regular fractional factorial design for nine factors in 16 runs. Montgomery (2013) shows how to construct these designs. We start with a full factorial design that has the desired number of runs – in this case, one having four factors and 16 runs. Then, factor E is generated by the element-wise multiplication of the columns for factors C and D. Similarly, F = BD, G = AD, H = ABC, and I = ABCD. The result of this procedure is a design that is orthogonal for the main effects but which confounds 12 of the 36 two-factor interactions with a main effect. Suppose you ran this experiment and observed large effects associated with factors A, D, and G. You would like to claim with certainty that these three factors are all active. However, an equally explanatory model is the one with the terms D, G, and the DG interaction. So, the confounding in the design produces ambiguous analytical results. The only data-driven way to resolve such ambiguity is to do further experimentation.
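To make the construction concrete, here is a brief R sketch (an illustration, not code from the column) that builds the design from the generators above and computes the correlations the cell plot displays:

    # Full 2^4 factorial in A, B, C, D, then E = CD, F = BD, G = AD,
    # H = ABC, and I = ABCD, giving the 2^(9-5) design described above.
    d <- expand.grid(A = c(-1, 1), B = c(-1, 1), C = c(-1, 1), D = c(-1, 1))
    d$E <- d$C * d$D
    d$F <- d$B * d$D
    d$G <- d$A * d$D
    d$H <- d$A * d$B * d$C
    d$I <- d$A * d$B * d$C * d$D

    # Model matrix with all 9 main effects and 36 two-factor
    # interactions: the 45 terms of interest.
    X <- model.matrix(~ .^2, data = d)[, -1]
    r <- abs(cor(X))

    # G = AD implies DG = A, so the main effect of A is perfectly
    # correlated (confounded) with the D:G interaction:
    r["A", "D:G"]  # 1
    image(r)       # a crude rendering of the cell plot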

    Figure 1. Correlation plot for a textbook design



Is there some way to avoid confounding main effects and two-factor interactions?
Figure 2 shows our diagnostic plot for one of the 87 nonisomorphic orthogonal fractional factorial designs for nine factors in 16 runs (see Table 2). Unlike Figure 1, the upper right section of Figure 2 has no black squares. Thus, no main effect is confounded with any two-factor interaction. Moreover, the largest absolute correlation of any main effect with any two-factor interaction is only one-half. On the negative side, this design has more interactions correlated with every main effect, even though the correlations are not as large.

Table 1. Minimum aberration 2^(9-5) fractional factorial design.

    Table 2. Alternative orthogonal two-level fractional factorial design.



Let us go back to the scenario we considered before. Can we distinguish between the model having the main effects A, D, and G and the alternative model having the main effects D and G plus the DG two-factor interaction? Absolutely! This is true as long as the observed effects are sufficiently larger than their standard errors.

What is the bottom line?
Comparing Figures 1 and 2 side by side provides a quick way of assessing the relative merits of the two designs. We prefer designs where the diagnostic plot contains fewer pure black squares. For more information on these alternative non-regular fractional factorial designs for 6 to as many as 14 factors in 16 runs see Jones and Montgomery (2010), Shinde, Montgomery, and Jones (2014), and Jones, Shinde, and Montgomery (2015).

References
Jones, B. and Montgomery, D.C. (2010), "Alternatives to Resolution IV Screening Designs in 16 Runs", International Journal of Experimental Design and Process Optimisation, Vol. 1, No. 4, pp. 285-295.

Jones, B., Shinde, S.M., and Montgomery, D.C. (2015), "Alternatives to Resolution III Regular Fractional Factorial Designs for 9–14 Factors in 16 Runs", Applied Stochastic Models in Business and Industry, in press.

Montgomery, D.C. (2013), Design and Analysis of Experiments, 8th Edition, Wiley, Hoboken, NJ.

Shinde, S.M., Montgomery, D.C., and Jones, B. (2014), "Projection Properties of No-Confounding Designs for Six, Seven, and Eight Factors in 16 Runs", International Journal of Experimental Design and Process Optimisation, Vol. 4, No. 1, pp. 1-26.

    Figure 2. Another orthogonal 16 run fractional factorial


COLUMN: Statistical Process Control
by Connie Borror, PhD, Professor, School of Mathematical and Natural Sciences, Arizona State University

Topics in Statistical Process Control

As part of the ASQ Statistics Division Newsletter (Statistics Digest) focus on more statistical and technical content, topics related to statistical process control (SPC) will be one of the areas of emphasis in this new column. The purpose will be to discuss new methods, new applications of standard methods, as well as illustrations of both basic and advanced concepts involving SPC.

Statistical process control is a field covering a wide range of methods and applications and involves many important tools useful for solving problems encountered in a process under study. The reader may already be familiar with the term "the magnificent seven," which references seven major SPC tools: histogram (or stem-and-leaf plot), Pareto chart, check sheet, cause-and-effect diagram, scatter diagram, defect concentration diagram, and control chart.

Over the years, additional tools have been included in the SPC "toolkit" in order to include important information from, for example, brainstorming activities among team members or groups involved in process improvement (i.e., the affinity diagram). Tools that allow for the inclusion of relationships among ideas (i.e., the interrelationship digraph) brought forth from team members are also important.

The control chart is the tool most commonly associated with SPC. In fact, the control chart and the acronym SPC are often used interchangeably even though SPC involves much more than just the use of control charts. It is an important graphical tool to aid the user in determining if the variability present in the process of interest is natural (common cause) or influenced by some factor(s) (assignable cause).

Process control has two central purposes: monitoring and improvement. Control charts are excellent tools for monitoring a process, but they do not identify improvement opportunities or tell the user what the problem (opportunity) may be if an out-of-control signal occurs. Determining the cause(s) of an out-of-control process can be accomplished using the remaining "magnificent seven" tools as well as many others, such as designed experiments, affinity diagrams, and interrelationship digraphs.
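As a small illustration of the monitoring side, the sketch below (illustrative data and code, not from the column) computes individuals-chart limits from a baseline sample using the conventional 2.66 × average-moving-range estimate of the 3-sigma limits (2.66 = 3/d2 with d2 = 1.128 for moving ranges of size 2), then checks new observations against them, echoing the Phase I/Phase II distinction raised below.

```python
import numpy as np

# Phase I: estimate individuals (I) chart limits from baseline data
# believed to reflect only common-cause variation.
baseline = np.array([10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.2])
mr_bar = np.mean(np.abs(np.diff(baseline)))   # average moving range
cl = baseline.mean()
ucl, lcl = cl + 2.66 * mr_bar, cl - 2.66 * mr_bar

# Phase II: monitor new observations against the Phase I limits;
# points outside the limits suggest an assignable cause worth investigating.
new_obs = [10.1, 9.9, 11.4, 10.0]
for i, x in enumerate(new_obs, start=1):
    flag = "possible assignable cause" if (x > ucl or x < lcl) else "in control"
    print(f"obs {i}: {x:5.1f}  limits [{lcl:.2f}, {ucl:.2f}]  {flag}")
```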

Decisions and Considerations
Before applying any tool in the SPC toolkit, there are many decisions to be made and options the user should consider. It is very easy to simply apply a control chart, for example, to data collected over time for some process – simply because the data is readily available. A sort of 'if we have it, why not throw a control chart on it' type of approach. Many of us have done this (I know I have) without perhaps fully considering the process as a whole, only to backtrack and reconfigure our approach later. This is not always a bad approach. We can learn a great deal about a process this way. However, we can also minimize some of the inefficiency that often accompanies this approach by carefully and thoughtfully studying a process with some key questions in mind (not an exhaustive list):

    • What is the purpose of this study?

    • For what variable(s) do we (can we) collect data?

    • What type of data is collected?





    • How is the data collected?

    • What is the quality characteristic of interest?

    o Is there more than one quality characteristic of interest?

    • How often should samples be collected?

    • How large of a sample is necessary for each time period?

    • What is the maturity of the process and/or monitoring method?

o Has the process been analyzed to determine if only common cause variation exists? Is the process unstable? Has it been monitored before? What phase are we working in, Phase I or Phase II?

    • If we implement a control chart for monitoring purposes, what is the action plan if the process signals out of control?

In future issues of this column, we will discuss different aspects of statistical process control and provide some guidance for addressing questions such as those listed above. Examples and illustrations of SPC in a variety of fields such as traditional manufacturing, healthcare, service industries, and forensics, to name a few, will be presented. We will highlight SPC in both univariate and multivariate applications for new processes as well as established processes.

Our goal is to provide the reader with discussions of tools and techniques that are immediately applicable in practice or that provide ideas for future research topics. We will cover basic methods as well as more advanced or new methods in statistical process control. To that end, we encourage the reader to contact us with questions to be addressed in future articles and suggestions for possible topics in order to make the column one of greatest possible use and impact. Suggestions and questions can be sent to [email protected].


    2015 Fall Technical Conference

The 59th annual Fall Technical Conference (FTC) will be held in Houston on October 8-9, 2015. FTC has long been a forum for both statistics and quality and is co-sponsored by ASQ (Chemical & Process Industries Division and the Statistics Division) and the American Statistical Association (Section on Physical and Engineering Sciences and Section on Quality and Productivity).

If you are interested in presenting an applied or expository paper in any of the three parallel sessions (Statistics, Quality Control, or Tutorial/Case Studies), please contact one of the program committee members below.

● ASQ CPID: Marc Banghart ([email protected])
● ASQ STAT: Mindy Hotchkiss ([email protected])
● ASA SPES: Zhen Wang ([email protected])
● ASA Q&P: Alix Ann Robertson ([email protected])

The deadline for abstract submissions is February 28, 2015. Additional information about the abstract submission format is available at the conference website (http://asq.org/conferences/fall-technical/).


COLUMN: Statistics for Quality Improvement
by Gordon Clark, PhD, Professor Emeritus of the Department of Integrated Systems at The Ohio State University and Principal Consultant at Clark-Solutions, Inc.

Use of the Value Stream Map in a Lean Six Sigma Project: Case Study Involving Design of Experiments

    Introduction

Can we achieve the same benefits as Theory of Constraints Lean Six Sigma (TLS), an integrated Theory of Constraints (TOC) and Lean Six Sigma (LSS) improvement process, by simply using value stream maps in the define or measure phases of LSS? The motivation for using the Value Stream Map (VSM) is to focus the Lean Six Sigma project on the constraint limiting overall system performance. Experimental results described by Pirasteh and Fox (2011) indicate TLS projects outperformed both Lean and Six Sigma projects since TLS focuses on areas that could result in larger overall system benefits. Both Pirasteh and Fox (2011) and Sproull (2009) highlight the VSM as a key tool in identifying the system constraint. Sproull defines the value stream as all those things one does to produce a product that creates value.

Chen, Li, and Shady (2010) describe a case study where a VSM was used in specifying the focus of a LSS project. The VSM identified the system constraint, and the LSS team used an experimental design to determine the levels of system factors to relieve the constraint. The company in question is a small electrical manufacturing business in the Midwestern United States. The company makes industrial switch gears and switchboards. The company had not completed a LSS project, but employees wanted to transform the company using LSS. They called their approach Lean/Sigma, but we will view it as a version of LSS. They formed a LSS team and started with the switchboard unit since it was the major manufacturing section and used the greatest amount of personnel and equipment.

    Value Stream Map

The first step was to create a VSM of the current system. The team made two walkthroughs with the manufacturing manager. The first one traced the product flow from the raw material receiving dock to the finished product shipping dock. Then the team walked from the shipping dock upstream to the raw material dock and collected detailed process information. The resulting VSM appears in Figure 1. The fabrication operation produces 6 parts which are welded together in the welding operation and then sent to the finishing operation. The final two operations are assembly and wiring. Figure 1 shows a generic VSM where the abbreviation C/T is the average cycle time, or the average time a part is in the operation. The abbreviation P/T is the average processing time and C/O is the average changeover time. Chen, Li, and Shady (2010) present the data values in Figure 1.





    Observations

The VSM showed that the fabrication operation was the bottleneck operation since its cycle time of 140.5 minutes is the longest cycle time of the five operations. Finishing is the operation with the next longest cycle time, 128 minutes. The production process begins with fabrication, so based on cycle time, downstream operations would have to wait for fabrication to complete parts. The fabrication operation consists of four processes: shearing, plasma cutting, de-burring, and braking.

    Figure 1: Value Stream Map



Two of the six work pieces must be processed by the plasma cutter. The plasma cutter has an average machine time of 42 minutes plus a loading time of 3 minutes and an unloading time of 2 minutes, so each of the two pieces requires 42 + 3 + 2 = 47 minutes. Clearly the plasma cutter was the bottleneck process in the fabrication operation, with 94 minutes of the operation's 140.5-minute cycle time due to the plasma cutter (Chen, Li, and Shady 2010).

    Kaizen event: Reduce cycle time for the plasma cutting machine.

The LSS team created a kaizen event to improve the fabrication operation. It used the "5 Whys" approach to identify root causes for poor performance. For example, the plasma cutter created defects that needed to be reworked, and time was required for inspection due to the defects. The defects were created because the plasma cutter was not operating at ideal parameter settings, so the team designed experiments to identify better parameter settings (Chen, Li, and Shady 2010).

    Experimental Design

The experimental design and results are presented by Chen, Li, and Shady (2010), but Chen, Li, and Cox (2009) give a more in-depth presentation of the design and the analysis of results.

    Performance Measures

The plasma cutting machine produces holes on work pieces for installing hardware. Holes having excessive beveled edges and poor circularity cannot be used. A beveled edge is one where the hole is not perpendicular to the face of the switchboard. Figure 2 shows the bevel magnitude, which is the bevel performance measure. Figure 3 shows the circularity measure, which is the smallest diameter deviation, |Dtarget – Dsmallest|. Figures 2 and 3 are similar to Figures 5 and 6 in Chen, Li, and Cox (2009).

    Figure 2: Bevel Magnitude

    Figure 3: Circularity Measure



    Factors

The LSS team identified four controllable factors, which were tip size, feed rate, voltage, and amperage, as well as two uncontrollable noise factors, which were air pressure and pierce time. The manufacturer was unable to control the noise factors, so the team decided to run the experiments with three levels for the controllable factors and two levels for the uncontrollable (noise) factors.



    Taguchi Parameter Design

The LSS team chose a Taguchi parameter design because they believed it allowed for a reduction in the time and money needed to conduct the experiment compared to a factorial design. That is, the Taguchi design would require fewer experimental runs than a full factorial design. Taguchi parameter design is an example of a robust parameter design (Montgomery 2012). Taguchi developed his experimental designs to:
• Develop products that are robust to external variability sources.
• Minimize variation about target values rather than merely conform to specification limits.

The complete Taguchi design consists of two arrays: an inner array containing the controllable factors and an outer array containing the uncontrollable factors. The inner array is an L9 orthogonal array, and the outer array is an L4 orthogonal array. Note that a complete factorial design for the inner array (four factors at three levels) would have 3^4 = 81 runs instead of 9. Each of the 9 runs in the inner array was tested across the 4 runs in the outer array by the LSS team. That gave a total sample size of 9 × 4 = 36 runs. One can estimate interactions between controllable factors and uncontrollable factors, but no information is provided to estimate interactions among controllable factors. This type of design is called a crossed array design (Montgomery 2012).

    Parameter Design Performance Measures

Taguchi recommends analyzing both average values and variation. Let y_ij be an experimental result where i specifies the inner array row and j specifies the outer array row. For each inner array row, an average value of y_ij is calculated over the 4 values of j. Taguchi recommends analyzing variation using a signal-to-noise ratio (S/N). When small values of the performance measure are best ("smaller is better"), the S/N for inner array row i is

S/N_i = -10 \log_{10}\left(\frac{1}{n}\sum_{j=1}^{n} y_{ij}^{2}\right),

where n = 4 is the number of outer array runs here.

For both the bevel magnitude and circularity measures the target value T is zero and smaller values of y_ij are preferred. A value of S/N is computed for each inner array row, and large values of the signal-to-noise ratio (S/N) are preferred.
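The calculation is easy to script. The sketch below uses made-up numbers (the case-study data appear in Chen, Li, and Cox (2009)) simply to show how the row means and smaller-the-better S/N ratios are computed for a 9 × 4 crossed array.

```python
import numpy as np

# Hypothetical 9 x 4 crossed-array results: rows = L9 inner-array runs
# (controllable settings), columns = L4 outer-array noise runs.
y = np.array([
    [1.2, 1.5, 1.1, 1.4],
    [0.8, 0.9, 1.0, 0.7],
    [1.6, 1.3, 1.8, 1.5],
    [0.5, 0.6, 0.4, 0.7],
    [1.1, 1.0, 1.2, 0.9],
    [0.9, 1.1, 0.8, 1.0],
    [1.4, 1.7, 1.3, 1.6],
    [0.6, 0.5, 0.8, 0.6],
    [1.0, 0.9, 1.1, 1.2],
])

# Smaller-the-better S/N per inner-array row:
# S/N_i = -10 * log10( mean of y_ij^2 over the outer array ).
sn = -10 * np.log10(np.mean(y**2, axis=1))
ybar = y.mean(axis=1)

for i, (m, s) in enumerate(zip(ybar, sn), start=1):
    print(f"run {i}: mean = {m:.3f}   S/N = {s:.2f} dB")
# Prefer inner-array runs with the largest S/N (least sensitive to noise).
```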

Experimental Results
The LSS team decided to select the factor levels that had the highest number of preferred occurrences to resolve the conflicts among the results based on the two performance measures. The bevel magnitude results in Figure 4 and the circularity results in Figure 5 were created based on data provided by Chen, Li, and Cox (2009).

Figure 4: Bevel magnitude results



    Figure 5: Circularity results

    Project Effectiveness

The effectiveness of the preferred settings for the four factors was verified with 30 work pieces, each of which met the quality requirements for subsequent assemblies (Chen, Li, and Shady 2010). The cycle time of the plasma cutter was reduced from 47 minutes to 30 minutes, which reduced the fabrication operation cycle time to 106.5 minutes, so it was no longer the bottleneck operation. In addition, the output quality of the product was improved.

    Improved Approach and Controversy

This case study clearly shows that robust parameter design can improve system efficiency and effectiveness by improving the mean output and reducing variability. Montgomery (2012) notes that Taguchi parameter design was used in the 1980s by large corporations such as AT&T Bell Labs, Ford Motor Company, and Xerox. However, later studies showed that the experimental procedures and data analysis methods advocated by Taguchi could be significantly improved. The next issue of this column will feature an improved approach to robust parameter design.

    References

1. Chen, J. C., Y. Li, and B. D. Shady (2010). "From Value Stream Mapping Toward a Lean/Sigma Continuous Improvement Process: An Industrial Case Study." International Journal of Production Research 48(4): 1069-1086.

2. Chen, J. C., Y. Li, and R. A. Cox (2009). "Taguchi-based Six Sigma Approach to Optimize Plasma Cutting Process: An Industrial Case Study." International Journal of Advanced Manufacturing Technology 41: 760-769.

    3. Montgomery, Douglas (1997). Design and Analysis of Experiments, Fourth Edition, John Wiley & Sons, New York, 622-641.

    4. Montgomery, Douglas (2012). Design and Analysis of Experiments, Eighth Edition, John Wiley & Sons, New York, Chapter 12.

5. Pirasteh, R. M. and R. E. Fox (2011). Profitability with No Boundaries: Optimizing TOC, Six Sigma and Lean Results. Milwaukee, WI: ASQ Quality Press.

6. Sproull, Bob (2009). The Ultimate Improvement Cycle: Maximizing Profits through the Integration of Lean, Six Sigma, and the Theory of Constraints. CRC Press, Taylor & Francis Group, Boca Raton, FL.


COLUMN: Stats 101
by Jack B. ReVelle, PhD, Consulting Statistician at ReVelle Solutions, LLC

Measures of Central Tendency and Dispersion
The three most widely used measures of central tendency are the mean (also known as the arithmetic mean or average), the median (the center point), and the mode (the value that occurs most frequently). Each measure has its own symbol or notation.



Let's use a common data pool to help better understand how to determine each measure of central tendency. Our data pool has five values (n = 5): 7, 3, 5, 1, and 9.

Mean
Sum all the values (data points) in the data pool and divide the sum by the number of values (the sample size "n"). Here the mean is (7 + 3 + 5 + 1 + 9)/5 = 25/5 = 5.

Median
Arrange the values in the data pool into an array of numbers from the largest value to the smallest or vice versa. The center point is the median. It has as many numbers above it as it does below. This is true with an odd number of values in the data pool. For our data pool the array is 1, 3, 5, 7, 9, so the median is 5.

If there is an even number of values in the data pool, identify the two values in the middle of the array and calculate their average value. This average of the two centermost values is the median. For example, the median of 1, 3, 7, 9 is (3 + 7)/2 = 5.


Mode
Arrange the values in the data pool into an array of numbers from the largest value to the smallest or vice versa. The value that occurs most frequently is the mode (unimodal, or one mode). If two values occur most frequently, the data pool is bimodal. If three or more values occur most frequently, the data pool is multimodal.
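Python's standard library can verify these hand calculations. In the sketch below, the mode call uses a small made-up data pool with repeats, since every value in our n = 5 pool occurs exactly once.

```python
import statistics

data = [7, 3, 5, 1, 9]                  # the article's data pool (n = 5)
print(statistics.mean(data))            # (7+3+5+1+9)/5 = 5
print(statistics.median(data))          # middle of 1, 3, 5, 7, 9 -> 5
print(statistics.median([1, 3, 7, 9]))  # even n: (3+7)/2 = 5.0

# Every value in `data` occurs once, so no single mode exists; with
# repeats, multimode returns all most-frequent values (two -> bimodal).
print(statistics.multimode([1, 2, 2, 3, 3]))   # [2, 3]
```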

    Reprinted with permission from Quality Press © 2004 ASQ; www.asq.org. No further distribution allowed without permission.



Measures of Dispersion
The six most widely used measures of dispersion are the range, the average deviation, the standard deviation, the quartile, the decile, and the percentile. Data dispersion results from inconsistent, unpredictable performance. A predictable process produces results that are consistent from one time period to the next. An unpredictable process produces results that are widely dispersed.

Range
Identify the largest and the smallest values in a data pool. The range is the absolute difference (without regard to the mathematical sign) between these two values. It is standard practice to subtract the smallest value from the largest value. As a result, the range is always either zero (if the two numbers are the same) or positive. For our data pool, the range is 9 - 1 = 8.

Average Deviation
The average deviation (AD) is calculated using the formula

AD = \frac{1}{n}\sum_{i=1}^{n} |x_i - \bar{x}|


    The notation “| |” indicates the absolute value of the difference without regard to the mathematical sign.



The average deviation (AD), which is also known as the mean deviation, is the arithmetic mean of the absolute deviations of a set of data about the data's average (mean). The AD provides a single value representing the typical distance that data points are from the average. It does not, by itself, provide any idea about the spread, scatter, or density of data around the average. Consider a comparison of two processes.

Suppose all we know about the two processes is the data, the average, and the average deviation. If the mean in both cases is the same but the AD in Process 1 is much greater than the AD in Process 2, then clearly Process 2 has better control than Process 1 with respect to consistency of results.

However, we have no idea about the concentration of the data with respect to the average, and so we need a better way to evaluate data dispersion around this measure of central tendency. This brings us to a discussion of the standard deviation (SD).

Standard Deviation
The sample standard deviation (SD) is calculated using this formula:

SD = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^{2}}{n - 1}}

In this example our data set has five values (n = 5); they are, in no particular order: 7, 3, 5, 1, and 9. We can see from the formula that we will need to know their average, x̄. Thus, x̄ = (7 + 3 + 5 + 1 + 9)/5 = 5.


We can also see from the formula that we will need to know the value of the term (x_i − x̄) for each value of x. For our data these deviations are 2, −2, 0, −4, and 4, so the squared terms (x_i − x̄)² are 4, 4, 0, 16, and 16. Furthermore, we will need the sum of all these (x_i − x̄)² terms: Σ(x_i − x̄)² = 4 + 4 + 0 + 16 + 16 = 40.



Next, we divide Σ(x_i − x̄)² by n − 1. In this example n is the size of our data pool, i.e., n = 5. Therefore, n − 1 = 5 − 1 = 4, and 40/4 = 10.

Since we have finished our calculations under the square root sign, we can complete the calculation of the sample standard deviation: SD = √10 ≈ 3.162.
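The whole calculation takes a few lines of Python, with statistics.stdev as a check (it uses the same n − 1 divisor):

```python
import math
import statistics

data = [7, 3, 5, 1, 9]
xbar = sum(data) / len(data)                     # 5.0

sum_sq = sum((x - xbar) ** 2 for x in data)      # 4 + 4 + 0 + 16 + 16 = 40
sd = math.sqrt(sum_sq / (len(data) - 1))         # sqrt(40 / 4) = sqrt(10)

print(round(sd, 3))                              # 3.162
print(round(statistics.stdev(data), 3))          # library check: 3.162
```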


Average Deviation vs. Standard Deviation
Using the SD as a measure of dispersion, we can easily evaluate process consistency. Although the AD is much easier to calculate, the SD is far more useful than the AD. There are still some situations where the AD is used instead of the SD in complex model fitting. ADs are less sensitive to extreme outliers (values far from the average or trend line) than SDs because they do not square the distances before adding them to the contributions of the other data points.

Since model fitting methods aim to reduce the total deviation from a trend line, methods using SDs can end up creating a trend line that diverges from the majority of points so as to be closer to an outlier. Using the AD reduces this distortion, but at the cost of complicating the calculation of the trend line. The SD possesses mathematical properties and relationships which, generally speaking, make it more useful in statistics. However, "useful" should never be confused with perfect.
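A quick made-up illustration of that sensitivity: adding one extreme value inflates the SD noticeably more than the AD, because the SD squares each deviation.

```python
import statistics

def avg_dev(data):
    """Average (mean absolute) deviation about the mean."""
    xbar = statistics.mean(data)
    return sum(abs(x - xbar) for x in data) / len(data)

clean = [10, 11, 9, 10, 10, 11, 9]
dirty = clean + [25]                     # one extreme outlier

for label, d in (("clean", clean), ("with outlier", dirty)):
    print(f"{label:13s}  AD = {avg_dev(d):5.2f}   SD = {statistics.stdev(d):5.2f}")
```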

Quartiles
A quartile is 25% of whatever is being evaluated. There are four quartiles used to describe a data set. The first (top) quartile and the fourth (bottom) quartile represent the top 25% of all the values and the bottom 25%, respectively. The second and third quartiles are immediately above and below the median, respectively.




Deciles
A decile is 10% of whatever is being evaluated. There are 10 deciles used to describe a data set.

Percentiles
A percentile is 1% of whatever is being evaluated. There are 100 percentiles used to describe a data set. Thirty percentiles are 30% of whatever is being observed; e.g., 30% of n = 20 units is (0.30)(20) = 6 units.
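These cut points are easy to compute. Here is a short numpy sketch (illustrative data) that returns the quartile cut points and the 30th percentile for 20 ordered units, echoing the 30%-of-20 example above.

```python
import numpy as np

data = np.arange(1, 21)          # 20 units, as in the example above

q1, median, q3 = np.percentile(data, [25, 50, 75])   # quartile cut points
p30 = np.percentile(data, 30)    # about 30% of the values fall below this
print(q1, median, q3, p30)
```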

Reprinted with permission from Quality Press © 2004 ASQ; www.asq.org. No further distribution allowed without permission.

Professional Networking on LinkedIn
by Amy Ste. Croix, Social Media Manager
Looking for a place to connect with professionals who apply statistical thinking in their careers? Check out and join the Division's LinkedIn group, where you can find webinar and conference announcements as well as a series of discussions on various statistics-related topics. Participate in existing discussions or start your own! To get involved or find out more information, visit the Division's LinkedIn page: https://www.linkedin.com/groups/ASQ-Statistics-Division-2115190

    ASQ Certification Exams Coming! by Brian Sersion, Certification Chair

Just a quick reminder to members about upcoming certification exam opportunities – most ASQ certification exams are being offered in June 2015. Since the application deadline is normally 3 weeks in advance of an exam, be sure to begin planning now.

The top 4 certifications held by our members are Certified Quality Engineer and Certified Quality Auditor (both offered on June 6, 2015), and Certified Six Sigma Black Belt and Certified Reliability Engineer (both offered March 7, 2015).

For those of you who attend the Fall Technical Conference, the Division has begun offering the CQE, SSBB, and SSGB exams at FTC. There are also special administrations of many exams scheduled by ASQ Sections. For more information, check out the ASQ Certification home page: http://asq.org/cert.



COLUMN: Testing and Evaluation
by Laura Freeman, PhD, Assistant Director of the Operational Evaluation Division and Test Science Task Leader at the Institute for Defense Analyses


Defining the Science of Testing
Testing and evaluation are essential aspects of the systems engineering process. From system design to the consumer's decision to purchase the product, testing and the resulting analyses can provide insight and inform decisions. Conducting tests and evaluating the results are essential elements of developing quality products that deliver the performance required by the end user.

A scientific approach to testing provides a structured and defensible process that ensures the right amount of testing is conducted to answer key questions. A scientifically planned test requires the input and expertise of all stakeholders, including project management, engineering and scientific expertise, and statistical expertise.

As every system is different, the engineering and scientific expertise will vary based on the system or process being tested. However, the need for statistical expertise is universal. Why? I like to think of statistics as the operationalization of the Scientific Method. Statistics provides us with the concrete methods and tools for generating the new knowledge sought in testing. Additionally, these statistical methods are often independent of the application (although the appropriate methods are influenced by the application). The same statistical techniques that are used to compare a generic drug to a brand-name counterpart can be used to compare the accuracy of a new guidance system in an air-to-ground missile to the original guidance system.

    So, what is the (statistical) science of testing?

In the test planning phase, the field of Design of Experiments (DOE), or experimental design, provides the science of testing. DOE is a structured and purposeful methodology for test planning. Unfortunately, many have construed the term DOE or experimental design to be synonymous with a narrow set of named test designs (e.g., factorial designs, D-optimal designs). DOE is not just a listing of design types. The essential elements of an experimental design process are (see Figure 1):

(1) Identify the questions to be answered, also known as the goals or objectives of the test.

(2) Identify the quantitative metrics, also known as response variables or dependent variables, that will be measured to address these key questions.

(3) Identify the factors that affect the response variables. Also known as independent variables, these factors frame the broad categories of test conditions that affect the outcome of the test.

(4) Identify the levels for each factor. The levels represent various subcategories between which analysts and engineers expect test outcomes to vary significantly. When performance is expected to vary linearly, two levels are used. Nonlinear performance typically results in three or more levels.

(5) Identify applicable test design techniques. Examples include factorial designs, response surface methodology, and combinatorial designs. The applicable test design method depends on the question, the metrics, the types of factors (numeric or categorical), and available test resources.

(6) Identify which combinations of factors and levels will be addressed in each test period. If the test is to be a "one-shot" test with no follow-up planned, then a more robust test may be required. If testing can be sequential in nature, then smaller screening experiments aimed at determining the most important factors should precede more in-depth investigations.

(7) Determine how much testing is enough by using relevant statistical measures (e.g., power, prediction variance, correlation in the factor space); a minimal power calculation is sketched below as one example.
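As one concrete illustration of step (7), the sketch below (our illustration, not from the column) uses statsmodels' power calculator for a two-sample t-test with illustrative inputs; real test planning would use power measures matched to the chosen design.

```python
from statsmodels.stats.power import TTestIndPower

# How many observations per group give 80% power to detect a shift of one
# standard deviation (effect size 1.0) at a 5% significance level?
n = TTestIndPower().solve_power(effect_size=1.0, alpha=0.05, power=0.80)
print(f"about {n:.1f} observations per group")   # roughly 17 per group
```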




Employing DOE in this holistic sense ensures efficient and adequate testing and also aids in determining how much testing is sufficient. As a result, we can determine the optimal allocation of constrained test resources and provide an analytical trade-space for test planning.

Statistical analysis methods provide the science of testing for the analysis of data. These empirical models allow for objective conclusions based on the observed data. Parametric regression methods allow us to maximize the information gained from test data, while non-parametric methods can provide a robust assessment of the data free from model assumptions. Bayesian methods provide avenues for integrating additional sources of information.

It is important to clarify that the analysis model should reflect the observed data and not the planning process. In designing tests we often assume a statistical model for the analysis; however, there is no limitation requiring the use of that model in the analysis. Often qualities of the observed data (e.g., skewness, lurking variables) lead us to employ different analysis methods than originally planned; this is completely acceptable and can better inform test design on similar products/processes in the future.

In the Department of Defense, where I spend my time working on testing and evaluation, large scale efforts are underway to replace current test strategies with a scientifically rigorous approach using design of experiments. In some industries this may seem well past due. However, these methods are far from common practice in testing as a whole. In fact, many systems engineering and operations research programs only provide brief introductions to experimental design and the benefits it brings to testing and evaluation.

This column on testing and evaluation will discuss the statistical methods that provide the science of testing. This inaugural column is focused on DOE. In future columns, I look forward to sharing my thoughts on techniques, methods, and philosophies we can all use to improve testing and evaluation, whatever your field.

Figure 1. The essential elements of an experimental design process



COLUMN: Standards InSide-Out
by Mark Johnson, PhD, University of Central Florida, Standards Representative for the Statistics Division

    Reflections on Life in International Standards

Two events occurred in 1989 which had a major influence on my professional life in International Standards. One was very sad, while the other was historic. This initial column on standards covers these two events.

    On July 23, 1989, Richard A. Freund passed away. Dick was a long time member of the StatisticsDivision whose legacy in the q