Monitoring the Future: Questionnaire Responses from the Nation's High School Seniors, 2012

Jerald G. Bachman, Lloyd D. Johnston, and Patrick M. O'Malley



    Survey Research Center, Institute for Social Research
    The University of Michigan, Ann Arbor, Michigan

  • ISSN 0190-9185
    ISBN 978-0-87944-001-5
    Library of Congress Catalog Card Number 79-640937
    Copyright © 2014 by the University of Michigan. All rights reserved.
    Published in 2014 by: Institute for Social Research, The University of Michigan, Ann Arbor, Michigan
    Printed in the United States of America

  • Changes to the 2012 Question Index

    In this 2012 volume of Monitoring the Future: Questionnaire Responses from the Nation's High School Seniors, the 2011 Question Index has been included. The 2012 version of the index is in process and will replace the 2011 index when it is completed. To assist users of the 2012 volume, a list of new and changed questions in 2012 (by form) is included below.

    Core (Forms 1-6)
    New: B21l. Annual use of bath salts added.
    Changes: 2011 B21j (annual use of Provigil) deleted in 2012. 2011 B21k (annual use of synthetic marijuana) renumbered B21j in 2012. 2011 B21l reworded as: ". . . had an alcoholic beverage mixed with an energy drink (like Red Bull)" and renumbered B21k in 2012.

    Form 1
    New: B006a. In recent months, have you noticed the health warnings on cigarette packs?
    B021. Where did you get the marijuana you used during the last year?

    a. Took from a friend without asking
    b. Took from a relative without asking
    c. Given for free by a friend
    d. Given for free by a relative
    e. Bought from a friend
    f. Bought from a relative
    g. From my own medical marijuana prescription
    h. Bought from a drug dealer/stranger
    i. Other method

    B021a. Did you get any of the marijuana you used during the last year from someone else's medical marijuana prescription?

    Changes: 2011 B021 (using marijuana with alcohol) renumbered B020k in 2012.

    Form 2
    New:

    A04. The next questions ask for your opinions on the effects of using certain drugs and other substances. How much do you think people risk harming themselves (physically or in other ways), if they . . .

    A04e. Try Salvia once or twice?
    A04f. Take Salvia occasionally?
    A04g. Try synthetic marijuana (K2, Spice) once or twice?
    A04h. Take synthetic marijuana occasionally?
    A04i. Try bath salts (synthetic stimulants) once or twice to get high?
    A04j. Take bath salts occasionally to get high?

  • Changes: 2011 A02q question text changed from "Visit social networking websites like Myspace or Facebook" to "Visit social networking websites (like Facebook)" in 2012. 2011 questions A04e-h (perceived risk of various levels of alcohol use) deleted in 2012. 2011 question E01n (perceived risk of trying Salvia 1-2 times) deleted in 2012.

    Form 3
    New: D04. During the LAST 12 MONTHS, on how many occasions (if any) have you . . .
    D04k. . . . taken bath salts (synthetic stimulants) to get high

    Form 4
    New:

    A17. During a typical week, how often do you have dinner with one or both of your parents?

    Changes: 2011 A17-A23 were renumbered as A18-A24 in 2012. 2011 A021: in 2012 (now Q.A22) the second sentence was changed to read: "Not counting medical marijuana (with a doctor's prescription), which of the following policies would you favor?" 2011 A022: in 2012 (now Q.A23) the following phrase was added to the beginning of the question: "Aside from medical marijuana, if it were legal for people to USE marijuana, should it also be legal to SELL marijuana?" 2011 A023: in 2012 (now Q.A24) the following phrase was added to the beginning of the question: "Aside from medical marijuana, if it were legal to use and legally available, which of the following would you be most likely to do?"

    Form 5
    New: E06. In recent months, have you noticed the health warnings on cigarette packs?
    Changes: 2011 E06-E15 renumbered as E07-E16 in 2012. 2011 E11e (annual use of Provigil) deleted in 2012. 2011 E11f-E11i renumbered as E12e-h in 2012. 2011 E11i (now E12h in 2012) reworded as: ". . . had an alcoholic beverage mixed with an energy drink (like Red Bull)".

    Form 6
    New: B05a. Are the cigarettes you usually smoke menthol? D07. In recent months, have you noticed the health warnings on cigarette packs? During the LAST 12 MONTHS, on how many occasions (if any) have you . . . D20n. . . . taken bath salts (synthetic stimulants) to get high
    Changes: 2011 D07 renumbered D06a in 2012. 2011 D20m (annual use of Provigil) deleted in 2012. 2011 D20n (synthetic marijuana) renumbered as D20m in 2012. 2011 D20o reworded as: ". . . had an alcoholic beverage mixed with an energy drink (like Red Bull)" in 2012.


    Table of Contents

    Preface v
    Purposes of This Series of Publications v
    A Guide for the Reader vi
    Availability of Archived Data vii
    Other Publications from the Study vii
    Acknowledgments viii

    Introduction 1

    The Need for Social Indicators 2
    Research Design and Procedures 2
    Reasons for Focusing on High School Seniors 2
    Sampling Procedures 3
    School Recruiting Procedures 4
    Advance Contact with Teachers and Students 4
    Questionnaire Administrations 5
    Procedures for Protecting Confidentiality 5
    Content Areas and Questionnaire Design 5
    Representativeness and Validity 10
    School Participation 10
    Student Participation 10
    Validity of Self-Report Data 10
    Accuracy of the Sample 11
    Consistency and the Measurements of Trends 11
    Caveats to the Users of the Data 11
    Estimation of Sampling Errors 11
    Subgroup Definitions 11
    Missing Data Notes 12
    Characterizing Heroin Users 12
    Interpreting Racial Differences 12
    A Note on the Questions about Amphetamine Use 15
    A Note on the Questions about Alcohol Use 16
    A Note on the Questions about the Use of Hallucinogens Other Than LSD and Tranquilizers 16
    A Note on the Questions about the Use of Narcotics Other Than Heroin 16

    Descriptive Results 17

    Introduction to the Table Format and Conventions 17
    Definitions of Column Headings 17
    Table Contents 18
    Questionnaire Forms 1 through 6 21
    Section C (All Forms): Demographic Variables, Family Background, etc. 21
    Section B (All Forms): Drug Use 29
    Questionnaire Form 1 46
    Section A 46

  • iv MONITORING THE FUTURE

    Section B (Detailed Drug Use) 56
    Section D 95

    Questionnaire Form 2 105
    Section A 105
    Section D 120
    Section E 126

    Questionnaire Form 3 137
    Section A 137
    Section D 152
    Section E 156

    Questionnaire Form 4 166
    Section A 166
    Section D 179
    Section E 187

    Questionnaire Form 5 192
    Section A 192
    Section D 207
    Section E 213

    Questionnaire Form 6 221
    Section A 221
    Section B 232
    Section D 238

    Cross-Time Index of Questionnaire Items, 1976-2012 255
    Introduction to the Indexing Conventions 255
    Definitions of Column Headings 255
    Question Index 264

    Appendices 353
    Appendix A: Sampling Error Estimates and Tables 353
    Appendix B: Procedures Used to Derive Design Effects and Sampling Errors 363
    Appendix C: Questionnaire Covers, Instructions, and Sample Page 377

    References 381

    Tables

    Table 1: Sample Sizes and Student Response Rates 4
    Table 2: Measurement Content Areas 6
    Table 3: Prevalence of Drug Use Question Numbers by Questionnaire Form, 2012 7
    Table 4: Subject Area Key 258
    Table A-1: Confidence Intervals (95% Level) around Percentage Values 357
    Table A-2: Confidence Intervals for Differences between Two Percentages (95% Confidence Level) 359
    Table B-1: Sample Sizes (Unweighted and Weighted) in Subgroups by Year 371
    Table B-2: Guidelines for Computing Design Effects (DEFFs) for Percentages and Means Derived from Monitoring the Future Samples 376

    Figures

    Figure 1: Guide to Table Format 19
    Figure 2: Guide to Cross-Time Index 257
    Figure B-1: Design Effects for Single Percentages 368
    Figure B-2: Design Effects for Differences between Percentages 369

  • *The 1975 volume does not include comparison data for African American and white subgroups, nor does it include data for the subgroups who did and did not plan to complete four years of college. Also, somewhat fewer questionnaire items appear than in the other volumes. These restrictions are the result primarily of missing data problems that occurred only in the 1975 survey. In most respects, however, those seeking to compare different classes of high school seniors will find the 1975 data useful.


    Preface

    This volume is the latest in a series that presents descriptive statistical results from a national survey of young people entitled Monitoring the Future: A Continuing Study of the Lifestyles and Values of Youth. Each year, the study surveys a large sample of high school seniors, located in approximately 130 schools nationwide and drawn to be representative of all seniors in the coterminous United States.

    The first four volumes in this series, dealing with the high school classes of 1975, 1976, 1977, and 1978, were published simultaneously early in 1980. Additional volumes appear on an annual basis. The most important contribution of this series is likely to be the opportunity it provides for the exploration of trends, a process that obviously involves the use of several different volumes. With that in mind, we have kept the format highly consistent from one volume to another, and we have developed an index (described below) that should facilitate the process of matching data across years. It should be noted that the volume presenting 1975 data is different from the others in several respects.*

    Although we designed the volumes to fit together as a series, we also saw distinct advantages in having each volume able to stand alone, that is, to be usable without reference to any of the others. For that reason, the introductory material on research design and questionnaire content, the instructions for accessing the data tables, and the appendices on sampling errors are all included in each volume. Thus, the small price we pay for making each volume self-contained is that much of the text portion is repeated each year.

    Purposes of This Series of Publications

    Monitoring the Future, which is conducted by the University of Michigan's Institute for Social Research and receives its core funding from the National Institute on Drug Abuse, is an unusually comprehensive research project in several respects: surveys are conducted annually on an ongoing basis; the samples are large and nationally representative; and the subject matter is very broad, encompassing some 1,400 variables per year. Many people (scholars, policy makers, educators, and so on) will have an interest in the results.

    The principal investigators are already writing for a number of these audiences and will continue to do so. Because of the limitations in our own time, interests, and expertise, however, there is an enormous amount of valuable information that may never be utilized if the initiative to digest and disseminate all of it must come from us alone. Further, since the project is in part a social indicator series, its value rests to a considerable degree on timely dissemination of the results. For these reasons, we wish to share the study's results with others in a number of fields on a rapid and regular basis. We have chosen to provide two major and complementary vehicles for doing this: (a) these annual reference volumes of descriptive results and (b) machine-readable data archives for direct analysis by others.

    We believe that many potential users who would not mount the effort required to conduct computer analyses of archived data will be able to get most of what they want from these volumes of descriptive results. The user of any one of these volumes can estimate, usually with considerable accuracy, a wide variety of characteristics of the high school class described in that volume. To take just one example, the user of the 1994 volume can ascertain that 30.7 percent of seniors in the high school class of 1994 reported using marijuana at least once during the preceding year. Further, by utilizing


    two or more of these volumes, the reader can describe recent trends in any such characteristic. To continue with the same example, the volume describing the high school class of 1975 indicates that 40.0 percent of these seniors reported marijuana use during the preceding year; and when that is combined with the data from the intervening years, the reader is able to discern a steady increase between 1975 and 1979 (50.8 percent of that class reported having used marijuana at least once in the preceding year) and a subsequent steady decrease, through 1992, to below the 1975 level. Between 1992 and 1997, there was an increase in marijuana use; from 1997 to 2006, a decline, with little further change in 2007, and some increase beginning in 2008. The figures for the years 1975 to 2012 are:

    Year   %    Year   %    Year   %    Year   %
    1975  40.0  1985  40.6  1995  34.7  2005  33.6
    1976  44.5  1986  38.8  1996  35.8  2006  31.5
    1977  47.6  1987  36.3  1997  38.5  2007  31.7
    1978  50.2  1988  33.1  1998  37.5  2008  32.4
    1979  50.8  1989  29.6  1999  37.8  2009  32.8
    1980  48.8  1990  27.0  2000  36.5  2010  34.8
    1981  46.1  1991  23.9  2001  37.0  2011  36.4
    1982  44.3  1992  21.9  2002  36.2  2012  36.4
    1983  42.3  1993  26.0  2003  34.9
    1984  40.0  1994  30.7  2004  34.3

    A similar examination of trends can be carried out for about 1,400 different variables dealing with a wide variety of subject areas (listed in Table 2 of the Introduction). A simple indexing system (described below) has been developed that makes it quite straightforward for the reader wishing to do trend analyses to locate the results for a given question for any year of the study. The index in the present volume lists the location of all items included in the surveys from 1976 onward.
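As a purely illustrative sketch (ours, not part of the original volume; the variable names are invented for the example), the annual marijuana-use percentages tabulated above can be dropped into a few lines of code to compute the kind of trend summaries described here:

```python
# Annual prevalence of marijuana use among seniors (percent reporting use
# in the preceding 12 months), transcribed from the table above.
annual_use = {
    1975: 40.0, 1976: 44.5, 1977: 47.6, 1978: 50.2, 1979: 50.8,
    1980: 48.8, 1981: 46.1, 1982: 44.3, 1983: 42.3, 1984: 40.0,
    1985: 40.6, 1986: 38.8, 1987: 36.3, 1988: 33.1, 1989: 29.6,
    1990: 27.0, 1991: 23.9, 1992: 21.9, 1993: 26.0, 1994: 30.7,
    1995: 34.7, 1996: 35.8, 1997: 38.5, 1998: 37.5, 1999: 37.8,
    2000: 36.5, 2001: 37.0, 2002: 36.2, 2003: 34.9, 2004: 34.3,
    2005: 33.6, 2006: 31.5, 2007: 31.7, 2008: 32.4, 2009: 32.8,
    2010: 34.8, 2011: 36.4, 2012: 36.4,
}

peak_year = max(annual_use, key=annual_use.get)         # 1979, the 50.8% high point
trough_year = min(annual_use, key=annual_use.get)       # 1992, the 21.9% low point
change_1975_2012 = annual_use[2012] - annual_use[1975]  # net change in percentage points
```

The same pattern extends to any of the roughly 1,400 variables once their year-by-year values have been looked up through the cross-time index.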

    Analyses such as those illustrated above need not be limited to the total sample. The data provided in these volumes also permit examination of a variety of subgroups (e.g., males, females, those from different regions, and those with different levels of drug use experience).

    We recognize that providing ready access to essentially undigested data carries certain risks, particularly the risk of mistaken interpretations based on inadequate controls, lack of awareness of sample limitations, etc. More than offsetting the risks, in our view, are the advantages of prompt and widespread dissemination of these nationally representative survey indicator data. Although we are unable to eliminate the risks, we hope to reduce them by providing the reader with an extensive list of caveats presented at the end of the Introduction section. We ask all who use the data to read these caveats carefully and take them seriously.

    A Guide for the Reader

    This volume is divided into three main parts: (1) the introductory section, (2) the section giving the descriptive results from the current year's survey, and (3) the cross-time item reference index.

    Introductory Section. The introduction contains a description of the study: its purposes, major content areas, design, field procedures, response rates, and methods of publication and data dissemination. Since most of this material remains unchanged from year to year, anyone having read another of the volumes may wish to skim this section. We do, however, suggest reviewing the caveats, located at the end of the introductory section, which deal with interpreting and extrapolating from these data.

    Descriptive Results Section. In this section, all questions contained in this year's survey are presented along with percentagized frequency distributions of answers for the entire sample and for selected subgroups. The questions in this section are organized according to the number of the questionnaire form in which they were contained (Forms 1 through 5 in 1975-88 and Forms 1 through 6 from 1989 on) and according to the order in which the questions occurred. There is one important exception, however. A major segment of questions common to all forms of the questionnaire (dealing with drug use, background characteristics, and some other subjects) is presented only once, at the beginning of the Descriptive Results section, using the data from respondents on all forms. Table 3 (Prevalence of Drug Use Question Numbers by Questionnaire Form, 2012), in the Introduction to this volume, lists the form and question numbers of all prevalence-of-drug-use questions appearing in the Descriptive Results section.

    The first portion of the Descriptive Results section contains detailed definitions of all subgroups for whom data are presented. It also describes the statistical conventions used in generating the tables.

    Cross-Time Index of Questionnaire Items. Beginning with the 1982 volume, this index is ordered by subject area. (It was ordered by item reference number prior to 1982.) The index is intended to serve two purposes. First, it should be useful in locating items dealing with a subject area of interest. The subject area key at the beginning of the cross-time item index shows the alphanumeric code assigned to various subject

  • PREFACE

    *Single copies of the reports with a publication date prior to 2010 may be ordered without cost from NIDA Drug Publications Research Dissemination Center at 877-NIDA-NIH (877-643-2644); or by e-mail at [email protected]. More recent reports may be viewed and downloaded from the MTF website, www.monitoringthefuture.org.


    areas (e.g., politics or work and leisure). Having selected a subject area of interest, one can then go to that section of the index to locate all questions in any of the surveys that deal with that subject. (The procedure is slightly different in volumes prior to 1982, as described in those volumes.) To locate the results for any given question in this volume, simply look in column two of the item index for its page location.

    The cross-time index can also be used to determine in which other years an item was used and where to locate it. If, in reviewing the Descriptive Results section, one locates an interesting item and would like to know in which of the other annual surveys of seniors it has appeared, it may be found in the cross-time index by the unique item reference number within the relevant subject area. The index will indicate all years in which that same item appeared and show the questionnaire form, section, and item number for each year. This information thus can be used to decide which other volumes in this series might be of relevance; and the same item may be readily located by its reference number in the index of any other volume prior to the 1982 volume and by its subject area and reference number in any volume from 1982 on.

    Sampling Appendices. Appendix A is provided for those wishing to determine the confidence limits around a percentage estimate, to test the statistical significance of an observed difference between two groups, or to test the significance of a change from one year to another. It gives the necessary procedures and tables for such tests and provides the appropriate factors by which to correct for the fact that the study makes use of a multi-stage probability sample clustered in a limited number of schools (usually about 130). Appendix B provides further detail on sample design effects and the procedures that were used to derive the sampling error estimates.
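To make the appendix arithmetic concrete, the following sketch (ours, not the procedure printed in Appendix A; the function name, sample size, and DEFF value are invented placeholders, and real design-effect factors must be taken from the appendix tables) shows how a design effect inflates a simple-random-sampling confidence interval for a percentage:

```python
import math

def clustered_ci(p, n, deff, z=1.96):
    """Approximate 95% confidence interval for a percentage p (0-100)
    based on n respondents, with the simple-random-sampling variance
    inflated by the design effect (DEFF) to reflect the clustering of
    the sample within a limited number of schools."""
    se = math.sqrt(deff * p * (100.0 - p) / n)  # standard error, in percentage points
    return p - z * se, p + z * se

# Illustrative numbers only: 36.4% prevalence, n = 14,000, assumed DEFF = 2.5.
low, high = clustered_ci(36.4, 14000, deff=2.5)
```

With a DEFF of 1.0 the function reduces to the familiar simple-random-sampling interval, which is why the appendix tables amount to correction factors applied to that baseline.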

    Availability of Archived Data

    As was mentioned earlier, this series of volumes is complemented by an archiving of the raw data set for direct analysis by others not associated with the project. The present volumes should be a considerable help to those wishing to access the archived data since they provide a verbatim statement of items and answer categories and, also, because they include basic univariate and bivariate frequency distributions for all items, as well as the missing data rates. The cross-time index of items contained in each of these volumes also provides an efficient way to identify the relevant variables in various content areas, and to determine when they were measured and their questionnaire locations. In fact, we hope that the availability of this series will not only facilitate the use of the archived data but actually stimulate interest in their use.

    The individual-level data that are summarized in this and previous volumes are available from:

    National Addiction and HIV Data Archive Program (NAHDAP)
    The Inter-university Consortium for Political and Social Research (ICPSR)
    Institute for Social Research
    P.O. Box 1248
    Ann Arbor, MI 48106-1248

    The ICPSR archive makes individual-level data available to interested investigators for analysis. A few variables have been modified or deleted to ensure that the answers of an individual respondent cannot be located through pattern recognition (i.e., using various descriptor variables in combination to make a unique identification). The changed or deleted variables include month of birth, race, religious preference, and the sampling weight, which adjusts for selection probability. To access the data, see the NAHDAP website at:

    http://www.ICPSR.umich.edu/NAHDAP

    Other Publications from the Study

    A number of different forms of publications based on Monitoring the Future data have been developed or are being planned. One series of reports, dealing with trends in drug use and drug-related attitudes and beliefs, was published by the National Institute on Drug Abuse from 1977 to 2009 and online by the Institute for Social Research beginning in 2010. The first report in that series, Drug Use among American High School Students, 1975-1977 (Johnston, Bachman, and O'Malley), was published in late 1977. Subsequent reports have been issued yearly.* The most recent two-volume report in this series, available only online, is Monitoring the Future National Survey Results on Drug Use, 1975-2012, Volume I: Secondary School Students (Johnston, O'Malley, Bachman, and Schulenberg, 2013b). As a supplement to the series, an occasional paper (number varies), available online only, presents subgroup trends in substance use graphically.

    An occasional paper series has been launched by the project. It includes papers on design and methodological issues as well as special substantive analyses and early, and sometimes more detailed, drafts of articles planned for journal publication. Occasional Paper No. 76, The Monitoring the Future Project after Thirty-seven Years: Design and Procedures (Bachman, Johnston, O'Malley, and Schulenberg, 2011), provides a detailed description of the study and its underlying rationale; Occasional Paper No. 65, The Aims and Objectives of the Monitoring the Future Study and Progress toward Fulfilling Them as of 2006 (Johnston, O'Malley, Schulenberg, and Bachman, 2006), provides a description of the study's objectives.

    The present series of reference volumes is being published by the Institute for Social Research. A new volume will be added to this series each year. All volumes beginning with the 1996 volume are also available as PDF files on a CD. Beginning with the 2009 volume, these volumes will be available online and as PDF files on a CD.

    To request particular papers, reprints, or periodic notification of publications available from the study, write to:

    Monitoring the Future, Room 2311
    Institute for Social Research
    The University of Michigan
    P.O. Box 1248
    Ann Arbor, Michigan 48106-1248
    www.monitoringthefuture.org

    Acknowledgments

    A great many people have contributed to the launching and development of this research effort. We are indebted to a number of officials of the National Institute on Drug Abuse and the Special Action Office for Drug Abuse Prevention for their encouragement and advice at the outset, in particular Richard Bucher, Robert DuPont, William Pollin, and Louise Richards.

    Our colleagues at the Institute for Social Research and elsewhere, who shared their insights, and often their most cherished instrumentation, are too numerous to mention; but their contributions are greatly appreciated. To the former director of the Survey Research Center's Sampling Section, Irene Hess, and to her colleagues and her successor, Steven Heeringa, we owe a particular debt for the creativity of the sampling design for the study. The contributions of the SRC Field Section were also very valuable.

    Major contributions to the production of this particular volume were made by staff members Ginger Maggio and Adam Burke.

    The present and former members of the project staff (listed on the next page) have, of course, contributed greatly to the building of this large and complex research series.

    Finally, we would like to acknowledge the tens of thousands of high school seniors, their teachers, and their principals, whose cooperation and generous contributions of time make the Monitoring the Future project possible.

    Jerald G. Bachman Lloyd D. Johnston

    Patrick M. OMalley



    Present and Former Members of the Project Staff

    Bridget Ammon Donna Ando Margaret Bailey Dawn Bare Ellen Berger Patricia Berglund Stephen Berry Mary Lou Bewley Katherine Blatt Mary Lea Bonucchi Katheryn Boris Patty Bradley Jonathon Brenner Philip Brenner Marcy Breslow Alison Bryant Joyce Buchanan Susan Burek Adam Burke Carola Carlier Amy Corning Elaine Cousino William Cross Mary Danneffel-Mashruwala Mary Lou Davis Laura Decker Mary Dempsy William Diedrich Robert Doljanac Karen Donahue Amanda Donovan Ernest Dopp Richard Erickson Leslie Eveland

    Halford Fairchild Maureen Ferrell Marjorie Fisher Michael Fisher Tyrone Forman Randy Franzoi Peter Freedman-Doan Henry Freeman Walter Gruhn John Haeussler Lana Harrison Tanya Hart James Hersey A. Regula Herzog Jerome Hiniker Renee Hinkle-De Groot Ann Impens Justin Jager Jane Jibson Kathryn Johnson Pamela Kittel Deborah Kloska Mark Krell Zenon Kuzmyn Karl Landis Carol Larsen Sally Lawson Sharon Leech W. Christopher Lenhardt Margaret Libsch Jeannette Lim Ludmilla Litus Jinyun Liu A. Katheryn Loker Regina Lumbard

    Mary Lutz Elaine Maggio Ginger Maggio Jennifer Maggs Oksana Malanchuk Gloria Martinez Meghan Martz Julie Maslowsky Wayne McCulloch Timothy McDaniel Joan McGraw Alicia Merline Emily Messersmith Patti Meyer John Miller Joy Miskowski Caroline Mohai Marion Morse Jordana Muroff Lisa Nagy Amanda Nephew Jim Neveaux Jeffrey Nicholls Brian O'Keefe David Outcalt Karl Palachuk Kathleen Palmer Megan Patrick Dorothy Paulette Susan Pauls Timothy Perry Crystal Pickett Sharon Pietila Joseph Pleck

    Deborah Poinier Nicholas Prieur Alisa Rastelli Judith Redmond Barbara Renaud Nicole Ridenour Nancy Robinson Willard Rodgers Don Rubinstein Susan Saavedra Deborah Safron Maria Sanchez John Schulenberg Amy Schulz Victor Sellinger Tina Smith Jane Stanton Scott Tarry Ann Taylor Richard Taylor Yvonne Terry-McElrath Lynda Tolen Lorel Vidlund Katherine Wadsworth John Wallace Chuck Westrin Marla Weyher Kris Wiggins Deborah Willis Sandra Wronski Thelma Wurzelbacher Sheri Wynn Shelly Yee Daniel Zahs



    Introduction

    The Monitoring the Future project is designed to explore changes in many important values, behaviors, and attitudes of contemporary American youth. Two general types of tasks may be distinguished. The first is to provide a systematic and accurate description of the youth population of interest in a given year and to quantify the direction and rate of the changes taking place among them over time. The second task, more analytic than descriptive, involves the explanation of the relationships and trends observed to exist.

    The task of description may be subdivided into three parts: (1) the description of static conditions (which may be accomplished through a single cross-sectional study); (2) the description of cross-cohort or, in longer time-spans, generational changes (usually accomplished using repeated cross-sections of the same age groups); and (3) the description of changes with aging (usually accomplished by means of longitudinal studies). The Monitoring the Future project has the capacity to provide each of these three types of descriptions, since it incorporates all three types of research design into a single study. (This complex design, in which multiple cohorts are followed longitudinally, has been referred to as a cohort-sequential design.)

    The content of this volume exemplifies the first of these types of description in that it contains cross-sectional data on a single high school class. When the data in this volume are used in combination with data from other such volumes in the series, the nature, direction, and rates of change from one cohort of seniors to another also may be described; this type of reporting is often labeled the social indicator approach. The third class of description, dealing with the maturational changes that occur during the years after high school, is not dealt with in this series of volumes but is reported in other publications from the study.*

    The accurate characterization of American young people and the description of important changes occurring among them is but one class of objectives for the study. The second class of objectives involves the development of a greater understanding of what causes these young people to be as they are and why the changes being documented are occurring. An explanation will be sought through the analysis of change at both the aggregate and individual levels. The results of these explorations will be disseminated via the various publications discussed in the Preface.

    The merging of the social indicator approach and the analysis of relationships (particularly longitudinal relationships) into a single cohort-sequential design is, in our view, synergistic: the effectiveness of the two approaches combined is greater than if the two were undertaken separately. Because the present series of volumes deals with senior-year data only, our remarks here focus primarily on the project's social indicator contributions: the accurate description of high school seniors and the documentation of trends over time. It is worth noting, however, that even the limited amount of bivariate data included in the present volumes will suggest promising directions for relational analyses, many of which can be carried out using the archived data tapes from the project.

    *Panels of students from each graduating class are randomly selected and followed longitudinally by mail. Individuals are surveyed biennially through age 30 and then at five-year intervals. These data allow the characterization of the changes that occur during the early adult years and on through age 50. In addition, the data allow the cross-sectional characterization each year of most Americans (excluding high-school dropouts) in those age groups and the changes occurring in those characteristics.


    *A more extensive description of the research design and procedures may be found in Bachman and Johnston (1978), Bachman, Johnston, and O'Malley (2011), or Johnston, O'Malley, Bachman, and Schulenberg (2013b).


    The Need for Social Indicators

    The fundamental argument for developing social indicators is that there are considerable scientific and practical benefits derived from having accurate information about current social realities and the way those realities have been changing. For the scientific community, such indicators provide a rich set of information against which existing theory may be tested and from which new theory may be suggested. For the community at large, social indicators permit the reality testing of common perceptions and beliefs about the nation and its people. Past experience has demonstrated that widely held conceptions, which themselves may serve to create or exacerbate problems, can be most inaccurate. One example is the exaggerated popular conception of the size and nature of the gap between generations (see Adelson, 1970, 1979); another is the mistaken assumption that most young people condone the use of a variety of illegal drugs (Johnston, 1973; Johnston, O'Malley, Bachman, and Schulenberg, 2013b).

    While deflating exaggerated perceptions, a continual monitoring of indicators on youth also can reveal the beginning of new problems and thus give rise to corrective efforts at a relatively early stage. For instance, most would view with concern the emergence of new types of drug use. Monitoring the Future documented and reported a sharp increase in regular marijuana use among high school seniors in the late 1970s, and, as a result, more attention was paid to the issue by policy makers, the media, and the general public. The discovery of such trends may trigger more intense efforts to discover the causes, consequences, and (when appropriate) preventatives of such behaviors.

    Although social indicators seldom translate directly into specific policy decisions, the availability of descriptive information can substantially enhance the decision-making capabilities of policy makers. These data provide considerable insight into the size and nature of problems, the rate of change occurring nationally and in subgroups, and, in some instances, information about related factors and even the likely impacts of major social interventions (such as changed drug laws or new incentives for recruitment into the military, to name but two). They also provide the possibility of feedback on the consequences of some social interventions as well as major historical events.

    The particular topics included in our monitoring system are summarized later in this introduction and, of course, they are presented fully in the complete set of data tables. For present purposes, it is enough to indicate that the range of topics is very broad. This breadth is partly due to the wide interests of the principal investigators, but it also reflects our conviction that the concurrent examination of trends in a variety of areas is not only much more cost efficient but also more scientifically productive.

    Research Design and Procedures*

    The basic research design involves annual data collections from high school seniors during the spring of each year, beginning with the class of 1975. Each data collection takes place in approximately 130 public and private high schools selected to provide an accurate cross-section of high school seniors throughout the coterminous United States. The design also provides for the longitudinal study of a subsample from each class of participating seniors; but since the focus of this series of volumes is exclusively on the data collected annually from seniors, the follow-up procedures will not be discussed here.

    Reasons for Focusing on High School Seniors. There are several reasons for choosing the senior year of high school as an optimal point for monitoring the behaviors and attitudes of youth. One is that the completion of high school represents the end of an important developmental stage in this society, since it demarcates both the end of universal public education and, for many, the end of living in the parental home. Therefore, it is a logical point at which to take stock of the accumulated influences of these two environments on American youth.

    Further, the completion of high school represents the jumping-off point from which young people diverge into widely differing social environments, including college, business firms, military service, and homemaking. But these environmental transitions are not the only important changes that coincide with the end of high school. Most young men and women now reach the formal age of adulthood shortly before or after graduation; more significantly, they begin to assume adult roles, including financial self-support, marriage, and parenthood.

    Finally, there are some important practical advantages to building a system of data collections around samples of high school seniors. The last year of high school constitutes the final point at which a reasonably good national sample of an age-specific cohort can be drawn and studied economically. The need for systematically repeated, large-scale samples from which to make reliable estimates of change requires that considerable stress be laid on efficiency and feasibility; the present design meets those requirements.

    One limitation in the design is that it does not include in the target population those young men and women who drop out of high school before graduation (or, more precisely, before the last few months of the senior year). This excludes a relatively small proportion of each age cohort (between 11 and 15 percent; U.S. Bureau of the Census, various years), though not an unimportant segment, since we know that certain behaviors such as illicit drug use (Johnston, 1973) and delinquency (Bachman, O'Malley, and Johnston, 1978) tend to be higher than average in this group. However, the addition of a representative sample of dropouts would increase the cost of the present research enormously, because of their dispersion and generally higher level of resistance to being located and interviewed.

    For the purposes of estimating characteristics of the entire age group, the omission of high school dropouts does introduce certain biases; however, their small proportion sets outer limits on the bias (Johnston, O'Malley, Bachman, and Schulenberg, 2013b, Appendix). For the purposes of estimating changes from one cohort of high school seniors to another, the omission of dropouts represents a problem only if different cohorts have considerably different proportions who drop out. We have no reason to expect dramatic changes in those rates for the foreseeable future, and recently published government statistics indicate a great deal of stability in dropout rates since 1970 (U.S. Bureau of the Census, various years).

    Some may use our high school data to draw conclusions about changes for the entire age group. While we do not encourage such extrapolation, we suspect that the conclusions reached would often be valid, since over 80 percent of the age group is in the surveyed segment of the population, and we expect that changes among those not in school are very likely to parallel the changes among those who are.

    Sampling Procedures. The procedure for securing a nationwide sample of high school seniors is a multi-stage one: Stage 1 is the selection of particular geographic areas, Stage 2 is the selection of one or more high schools in each area, and Stage 3 is the selection of seniors within each high school.

    Stage 1: Geographic Areas. The geographic areas used in this study are the primary sampling units (PSUs) developed by the Sampling Section of the Survey Research Center for use in the center's nationwide interview studies. Because these same PSUs are used for personal interview studies by the Survey Research Center (SRC), local field representatives can be assigned to administer the data collections in practically all schools.

    Stage 2: Schools. In the major metropolitan areas more than one high school is often included in the sampling design; in most other sampling areas a single high school is sampled. In all cases, the selections of high schools are made such that the probability of drawing a school is proportionate to the size of its senior class. The larger the senior class (according to recent records), the higher the selection probability assigned to the high school. When a sampled school is unwilling to participate, a replacement school as similar to it as possible is selected from the same geographic area.
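    The school-selection step just described is a draw with probability proportionate to size (PPS). A minimal sketch of systematic PPS selection is shown below; the school names, senior-class sizes, and the `pps_systematic_sample` helper are illustrative assumptions, not the project's actual sampling code.

```python
import random

def pps_systematic_sample(schools, n_draws, seed=None):
    """Select schools with probability proportionate to senior-class size,
    using systematic PPS sampling (illustrative sketch only)."""
    rng = random.Random(seed)
    total = sum(size for _, size in schools)
    step = total / n_draws                    # sampling interval on the size scale
    start = rng.uniform(0, step)              # random start within the first interval
    targets = [start + i * step for i in range(n_draws)]

    chosen, cum = [], 0
    it = iter(targets)
    t = next(it)
    for name, size in schools:
        cum += size                           # cumulative size defines each school's zone
        while t is not None and t < cum:      # any target falling in the zone selects it
            chosen.append(name)
            t = next(it, None)
    return chosen

# Larger senior classes are proportionally more likely to be drawn:
schools = [("A", 350), ("B", 120), ("C", 500), ("D", 60)]
print(pps_systematic_sample(schools, 2, seed=1))  # → ['A', 'C']
```

    A school's selection probability here is its senior-class size divided by the sampling interval, which is what makes later per-student weighting straightforward.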

    Stage 3: Students. Within each selected school, up to about 350 seniors may be included in the data collection. In schools with fewer than 350 seniors, the usual procedure is to include all of them in the data collection. In larger schools, a subset of seniors is selected either by randomly sampling classrooms or by some other random method that is convenient for the school and judged to be unbiased. Sample weights are assigned to each respondent so as to take into account variations in the sizes of samples from one school to another, as well as (smaller) variations in selection probabilities occurring at the earlier stages of sampling.
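    The weighting logic described above can be illustrated with a small sketch: the overall inclusion probability is the product of the stage probabilities, and the weight is its inverse. The stage probabilities and the `selection_weight` helper below are hypothetical, not the project's actual weighting procedure.

```python
def selection_weight(p_area, p_school, p_student):
    """Sample weight as the inverse of the overall (multi-stage) selection
    probability (illustrative; real weights involve further adjustments)."""
    p = p_area * p_school * p_student
    return 1.0 / p

# A student sampled 1-in-2 within a school selected with probability 0.5,
# inside an area drawn with certainty, gets weight 4:
w = selection_weight(p_area=1.0, p_school=0.5, p_student=0.5)
print(w)  # 4.0
```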

    The three-stage sampling procedure described above yielded the number of participating schools and students indicated in Table 1.


    Table 1
    Sample Sizes and Student Response Rates

    Year    Public Schools    Private Schools    Total Schools    Students*    Response Rate**
    1975         111                14                125           15,791          78%
    1976         108                15                123           16,678          77%
    1977         108                16                124           18,436          79%
    1978         111                20                131           18,924          83%
    1979         111                20                131           16,662          82%
    1980         107                20                127           16,524          82%
    1981         109                19                128           18,267          81%
    1982         116                21                137           18,348          83%
    1983         112                22                134           16,947          84%
    1984         117                17                134           16,499          83%
    1985         115                17                132           16,502          84%
    1986         113                16                129           15,713          83%
    1987         117                18                135           16,843          84%
    1988         113                19                132           16,795          83%
    1989         111                22                133           17,142          86%
    1990         114                23                137           15,676          86%
    1991         117                19                136           15,483          83%
    1992         120                18                138           16,251          84%
    1993         121                18                139           16,763          84%
    1994         119                20                139           15,929          84%
    1995         120                24                144           15,876          84%
    1996         118                21                139           14,824          83%
    1997         125                21                146           15,963          83%
    1998         124                20                144           15,780          82%
    1999         124                19                143           14,056          83%
    2000         116                18                134           13,286          83%
    2001         117                17                134           13,304          82%
    2002         102                18                120           13,544          83%
    2003         103                19                122           15,200          83%
    2004         109                19                128           15,222          82%
    2005         108                21                129           15,378          82%
    2006         116                20                136           14,814          83%
    2007         111                21                132           15,132          81%
    2008         103                17                120           14,577          79%
    2009         106                19                125           14,268          82%
    2010         104                22                126           15,127          85%
    2011         110                19                129           14,855          83%
    2012         107                20                127           14,343          83%

    *Sample weights are assigned to each respondent to correct for unequal probabilities of selection that arise in the multi-stage sampling procedure.
    **The student response rate is derived by dividing the attained sample by the target sample (both based on weighted numbers of cases). The target sample is based upon listings provided by schools. Since such listings may fail to take account of recent student attrition, the actual response rate may be slightly underestimated.
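    The response-rate derivation given in the second table note (weighted attained sample divided by weighted target sample) can be sketched as follows; the weights shown are hypothetical.

```python
def weighted_response_rate(attained_weights, target_weights):
    """Student response rate: sum of weights for students who completed
    questionnaires, divided by the sum of weights for all listed students
    (per the note to Table 1; the numbers below are made up)."""
    return sum(attained_weights) / sum(target_weights)

# 83 of 100 equally weighted listed seniors completed questionnaires:
rate = weighted_response_rate([1.0] * 83, [1.0] * 100)
print(f"{rate:.0%}")  # 83%
```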

    One other important feature of the base-year sampling procedures should be noted here. All schools (except for half of the initial 1975 sample) are asked to participate in two data collections, thereby permitting replacement of half of the total sample of schools each year. One motivation for requesting that schools participate for two years is administrative efficiency: securing the cooperation of schools is a costly and time-consuming procedure, and a two-year period of participation cuts that effort substantially. Another important advantage is that whenever an appreciable shift in scores from one graduating class to the next is observed, it is possible to check whether the shift might be attributable to some difference in the newly sampled schools. This is done simply by repeating the analysis using only the 60 or so schools that participated in both years. Thus far, the half-sample approach has worked quite well; examination of drug prevalence data from the matched half-samples showed that the repeat schools yielded trends virtually identical to those based on all schools.
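    A minimal sketch of this half-sample check is given below. The school identifiers and prevalence figures are hypothetical, and the project's actual estimates are computed from weighted student-level data rather than unweighted school means.

```python
def mean_change(results, y1, y2, schools):
    """Average within-school change in a prevalence estimate between two years
    (illustrative sketch of the matched half-sample comparison)."""
    return sum(results[s][y2] - results[s][y1] for s in schools) / len(schools)

# Hypothetical prevalence estimates, keyed by school and year; S3 and S4
# illustrate the rotation (one school entering the sample, one leaving).
results = {
    "S1": {2011: 0.22, 2012: 0.24},
    "S2": {2011: 0.30, 2012: 0.31},
    "S3": {2012: 0.18},
    "S4": {2011: 0.26},
}
repeats = [s for s, yrs in results.items() if 2011 in yrs and 2012 in yrs]
print(round(mean_change(results, 2011, 2012, repeats), 4))  # → 0.015
```

    If the change computed from the repeat schools alone matches the change computed from the full sample, the observed shift is unlikely to be an artifact of school replacement.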

    School Recruiting Procedures. Early during the fall semester, an initial contact is made with each sampled school. First, a letter is sent to the principal describing the study and requesting permission to survey seniors. The letter is followed by a telephone call from a project staff member, who attempts to deal with any questions or problems and (when necessary) makes arrangements to contact and seek permission from other school district officials. Basically the same procedures are followed for schools asked to participate for a second year.

    Once the school's agreement to participate is obtained, arrangements are made by phone for administering the questionnaires. A specific date for the survey is mutually agreed upon, and a local SRC representative is assigned to carry out the administration.

    Advance Contact with Teachers and Students. The local SRC representative is instructed to visit the school two weeks ahead of the actual date of administration. This visit serves as an occasion to meet the teachers whose classes will be affected and to provide them with a brochure describing the study, a brief set of guidelines about the questionnaire administration, and a supply of flyers to be distributed to the students a week to 10 days in advance of the questionnaire administration. The guidelines to the teachers include a suggested announcement to students at the time the flyers are distributed.


    From the students' standpoint, the first information about the study usually consists of the teacher's announcement and the short descriptive flyer. In announcing the study, the teachers are asked to stress that the questionnaires used in the survey are not tests and that there are no right or wrong answers. The flyer tells students that they will be invited to participate in the study, points out that their participation is strictly voluntary, and stresses confidentiality (including a reference to the fact that the Monitoring the Future project has a special government grant of confidentiality that allows their answers to be protected). The flyer also serves as an informative document that the students can show to their parents.

    Questionnaire Administrations. The questionnaire administration in each school is carried out by the local SRC representatives and their assistants, following standardized procedures detailed in a project instruction manual. The questionnaires are administered in classrooms during normal class periods whenever possible, although circumstances in some schools require the use of larger group administrations. Teachers are not asked to do anything more than introduce the SRC staff members and (in most cases) remain in the classroom to help guarantee an orderly atmosphere for the survey. Teachers are urged to avoid walking around the room, so that students may feel free to write their answers without fear of being observed.

    The actual process of completing the questionnaire is quite straightforward. Respondents are given sharpened pencils and asked to use them because the questionnaires are designed for automatic scanning. Most respondents can finish within a 45-minute class period; for those who cannot, an effort is made to provide a few minutes of additional time.

    Procedures for Protecting Confidentiality. In any study that relies on the voluntary reporting of drug use or other illegal acts, it is essential to develop procedures that guarantee the confidentiality of such reports. It is also desirable that these procedures be described adequately to respondents so that they are comfortable about providing honest answers.

    We noted that the first information given to students about the survey consists of a descriptive flyer stressing confidentiality and voluntary participation. This theme is repeated at the start of the questionnaire administration. Each participating student is instructed to read the message on the cover of the questionnaire, which stresses the importance and value of the study, notes that answers will be kept strictly confidential, states that the study is completely voluntary, and tells the student, "If there is any question you or your parents would find objectionable for any reason, just leave it blank." The instructions then point out that in a few months a summary of nationwide results will be mailed to all participants and also that a follow-up questionnaire will be sent to some students after a year. The cover message explains that these are the reasons for asking that name and address be written on a special form that will be removed from the questionnaire and handed in separately. The message also points out that the two different code numbers (one on the questionnaire and one on the tear-out form) cannot be matched except by a special computer file at the University of Michigan. (The content of the inside and outside of the standard questionnaire cover, including the tear-out form, may be found in Appendix C.)

    Content Areas and Questionnaire Design

    Drug use and related attitudes are the topics that receive the most extensive coverage in the Monitoring the Future project, but the questionnaires also deal with a wide range of other subject areas, including attitudes about government, social institutions, race relations, changing roles for women, educational aspirations, occupational aims, and marital and family plans, as well as a variety of background and demographic factors. (Table 2 provides an outline of the 20 general subject areas into which all items are categorized in the Question Index toward the end of this volume.) Given this breadth of content, the study is not presented to respondents as a drug use study, nor do they tend to view it as such.

    Because many questions are needed to cover all of these topic areas, much of the questionnaire content is divided among different questionnaire forms that are distributed to participants in an ordered sequence. (Five forms were used in 1975-88; a sixth form was added in 1989.) This sequence produces five or six virtually identical subsamples. About one-third of each questionnaire form consists of key or core variables that are common to all forms. All demographic variables and some


    Table 2
    Measurement Content Areas (Abbreviated Version*)

    A. Drugs. Drug use and related attitudes and beliefs, drug availability and exposure, surrounding conditions and social meanings of drug use. Views of significant others regarding drugs. Use of non-alcoholic beverages.

    B. Education. Educational lifestyle, values, experiences, and environments.

    C. Work and Leisure. Vocational values, meaning of work and leisure, work and leisure activities, preferences regarding occupational characteristics, type of work setting, and cell phone use.

    D. Sex Roles and Family. Values, attitudes, and expectations about marriage, family structure, sex roles, and sex discrimination.

    E. Population Concerns. Values and attitudes about overpopulation and birth control.

    F. Conservation, Materialism, Equity, etc. Values, attitudes, and expectations related to conservation, pollution, materialism, equity, and the sharing of resources. Preferences regarding type of dwelling and urbanicity.

    G. Religion. Religious preferences, activities, and views.

    H. Politics. Political affiliation, activities, and views.

    I. Social Change. Values, attitudes, and expectations about social change.

    J. Social Problems. Concern with various social problems facing the nation and the world.

    K. Major Social Institutions. Confidence in and commitment to various major social institutions (business, unions, branches of government, press, organized religion, military, etc.).

    L. Military. Views about the armed services and the use of military force. Personal plans for military service.

    M. Interpersonal Relationships. Qualitative and quantitative characteristics of cross-age and peer relationships. Interpersonal conflict.

    N. Race Relations. Attitudes toward and experiences with other racial groups.

    O. Concern for Others. Concern for others, voluntary and charitable activities.

    P. Happiness. Happiness and life satisfaction, overall and in specific life domains.

    Q. Other Personality Variables. Attitudes about self (including self-esteem), locus of control, loneliness, risk-taking, trust in others, importance placed on various life goals, counterculture orientation, hostility, boredom.

    R. Background. Demographic and family background characteristics, living arrangements.

    S. Deviance and Victimization. Delinquent behaviors, driving violations and accidents (including those under the influence of drugs), victimization experiences.

    T. Health. Health habits, somatic symptoms, medical treatment.

    *For a more detailed listing of measurement content areas, see Table 4.


    Table 3
    Prevalence of Drug Use Question Numbers by Questionnaire Form, 2012

    Data for question numbers appearing in boldface type are published in this volume. Data for question numbers not in boldface type are not presented in this volume but, where appropriate, are included in the combined Forms 1-6 data. For marijuana, cocaine, and heroin, the combined Forms 1-6 data comprise data from the general use question in some forms and component data in other forms. For example, the combined Forms 1-6 data for cocaine are a combination of data from the general cocaine use question in Forms 2 and 5 and data from the component crack and other forms of cocaine questions in Forms 1, 3, 4, and 6. Form 1 marijuana data and hashish data are combined to create the marijuana/hashish data for Form 1 and are then used with the general marijuana/hashish data in Forms 2-6 to create the combined Forms 1-6 data. Data on heroin use with and without a needle are combined to create the heroin data for Forms 2, 5, and 6 and are then used with the general heroin use questions in Forms 1, 3, and 4 to create the combined Forms 1-6 heroin data.

    Question Numbers: Combined Forms 1-6 / Form 1 / Form 2 / Form 3 / Form 4 / Form 5 / Form 6

    Cigarettes  B01-2  B001-2  B01-2  B01-2  B01-2  B01-2  B01-2
    Smokeless Tobacco  B19-20
    Kreteks  D20c
    Tobacco Using a Hookah  D04h
    Small Cigars  D04i
    Dissolvable Tobacco  E12f
    Snus  E12g
    Alcohol  B04a-c  B007a-c  B4a-c  B4a-c  B4a-c  B4a-c  B22a-c
    Beer  D13-14
    Wine Coolers  D15-16
    Wine  D17-18
    Liquor  D19-20
    Flavored Alcoholic Beverages  E08a-c
    Alcohol with Caffeine  B21k  E12h  D20o
    Been Drunk  B04d-f  B016a-c  D11a-c
    5+ Drinks in Last 2 Weeks  B06  B013  B06  B06  B06  B06  B24
    Marijuana/Hashish (combined)  B07a-c  B07a-c  B07a-c  B07a-c  B07a-c  B25a-c
    Hashish  B018a-c
    Marijuana  B019a-c
    Synthetic Marijuana  B21j  D04j  D20m
    LSD  B08a-c  B033a-c  B08a-c  B08a-c  B08a-c  B08a-c  B26a-c
    Hallucinogens Other than LSD  B09a-c  B042a-c  B09a-c  B09a-c  B09a-c  B09a-c  B27a-c
    Specific Hallucinogens Other than LSD (a)
      Mescaline  B045a
      Peyote  B045b
      "Shrooms" or Psilocybin  B045c
      PCP  B045d
      Concentrated THC  B045e
    PCP  E03a-c
    MDMA ("Ecstasy")  B19a-c  B18a-c  B18a-c
    Salvia  B21h  E12d  D20l
    Cocaine  B10a-c  B10a-c  B10a-c
    Crack  B10d-f  B076a-c  E02a-c  B11a-c  B11a-c  E09a-c  B29a-c
    Other Cocaine  B10g-i  B077a-c  B12a-c  B12a-c  B30a-c
    Amphetamines  B11a-c  B050a-c  B11a-c  B10a-c  B10a-c  B11a-c  B28a-c
    Ritalin  B21c  D04c  D20f
    Adderall  B21i  D04d  D20g
    Bath Salts (synthetic stimulants)  B21l  D04k  D20n
    Methamphetamine  B11d-f  B17a-c  B36a-c
    Crystal Meth ("Ice")  B11g-i  B12a-c  B12a-c
    Specific Amphetamines (b)
      Dexedrine  B056a
      Ritalin  B056b
      Adderall  B056c
      Concerta  B056d
      Methamphetamine  B056e
      Crystal Meth  B056f
    Quaaludes  B12a-c  B060a-c
    Sedatives (Barbiturates)  B13a-c  B062a-c  B13a-c  B13a-c  B13a-c  B13a-c  B31a-c
    Specific Sedatives (c)
      Phenobarbital  B063a
      Seconal  B063b
      Tuinal  B063c
      Ambien  B063d
      Lunesta  B063e
      Sonata  B063f
    Tranquilizers  B14a-c  B066a-c  B14a-c  B14a-c  B14a-c  B14a-c  B32a-c
    Specific Tranquilizers (d)
      Librium  B072a
      Valium  B072b
      Xanax  B072c
      Soma  B072d
      Serax  B072e
      Ativan  B072f
      Klonopin  B072g
    Cough and Cold Medicines  B20  D04e  D20k
    Rohypnol  B21a  D20j
    GHB  D20a
    Ketamine  B21b  E12a  D20b
    Heroin  B15a-c  B087a-c  B15a-c  B15a-c
    Heroin with a Needle  B15d-f  B15a-c  B15a-c  B33a-c
    Heroin without a Needle  B15g-i  B16a-c  B16a-c  B34a-c
    Other Narcotics  B16a-c  B089a-c  B17a-c  B16a-c  B16a-c  B17a-c  B35a-c
    OxyContin  B21f  D04f  E12b  D20h
    Vicodin  B21g  D04g  E12c  D20i
    Specific Other Narcotics (e)
      Methadone  B096a
      Opium  B096b
      Morphine  B096c
      Codeine  B096d
      Demerol  B096e
      Vicodin  B096f
      OxyContin  B096g
      Percocet  B096h
      Percodan  B096i
      Ultram  B096j
    Inhalants  B17a-c  B18a-c  B17a-c  B18a-c
    Steroids  B18a-c  E04a-c  E1a-c  D12a-c
    Androstenedione  B21d  D04a  D20d
    Creatine  B21e  D04b  D20e
    Stimulant-type drugs for ADHD  B22  D08  D03
    Non-stimulant-type drugs for ADHD  B23  D11  D06
    Energy Drinks  E01-E02
    Diet Pills  B046a-c
    Stay-Awake Pills  B047a-c
    "Look-Alikes"  B048a-c

    (a) Asked in a set after a branching question about any use of hallucinogens other than LSD.
    (b) Asked in a set after a branching question about any use of amphetamines.
    (c) Asked in a set after a branching question about any use of sedatives.
    (d) Asked in a set after a branching question about any use of tranquilizers.
    (e) Asked in a set after a branching question about any use of narcotics other than heroin.


    *These core variables are the first ones on which data are presented in the Descriptive Results section.


    measures of drug use are included in this core set of measures. This use of the full sample for drug and demographic measures provides a more accurate estimation on these dimensions and also makes it possible to link them statistically to all of the other measures that are included in a single form only.*

    Representativeness and Validity

    The samples for this study are intended to be representative of high school seniors throughout the 48 coterminous states. We have already discussed the fact that this definition of the sample excludes one important portion of the age cohort: those who have dropped out of high school before nearing the end of the senior year. But given the aim of representing high school seniors, it will now be useful to consider the extent to which the obtained samples of schools and students are likely to be representative of all seniors and the degree to which the data obtained are likely to be valid.

    We can distinguish at least four ways in which survey data of this sort might fall short of being fully accurate. First, some sampled schools refuse to participate, which could introduce some bias. Second, the failure to obtain questionnaire data from 100 percent of the students sampled in participating schools would also introduce bias. Third, the answers provided by participating students are open to both conscious and unconscious distortions that could reduce validity. Finally, limitations in sample size and/or design could place limits on the accuracy of estimates. Each of these issues is treated extensively elsewhere (Johnston, O'Malley, and Bachman, 1984, especially Appendices A and B), so in this section we will present only the highlights of each of those discussions.

    School Participation. As noted in the description of the sampling design, schools are invited to participate in the study for a two-year period. With very few exceptions, each school that has participated for one data collection has agreed to participate for a second. Thus far, an average of about 61 percent of the schools initially invited to participate have agreed to do so each year; for each school refusal, a similar school (in terms of size, geographic area, urbanicity, etc.) was recruited as a replacement. The selection of replacement schools almost entirely removes problems of bias in region, urbanicity, and the like that might result from certain schools refusing to participate. Other potential biases are more subtle, however. For example, if it turned out that most schools with drug problems refused to participate, that could seriously bias the drug estimates derived from the sample. And if any other single factor were dominant in most refusals, that also might suggest a source of serious bias. In fact, however, the reasons for schools' refusals to participate are varied and largely a function of happenstance events of the particular year. Thus, we feel fairly confident that school refusals have not seriously biased the surveys.

    Student Participation. Completed questionnaires are obtained from three-fourths to four-fifths of all students sampled. The single most important reason that students are missed is that they are absent from class at the time of data collection, and in most cases it is not workable to schedule a special follow-up data collection for them. Students with fairly high rates of absenteeism also report above-average rates of drug use; therefore, there is some degree of bias introduced by missing the absentees. That bias could be largely corrected through the use of special weighting; however, this course was not chosen because the bias in estimates (for drug use, where the potential effect was hypothesized to be the largest) was determined to be quite small and because the necessary weighting procedures would have introduced undesirable complications.

    In addition to absenteeism, student nonparticipation occurs because of schedule conflicts with school trips and other activities that tend to be more frequent than usual during the final months of the senior year. Of course, some students refuse to complete or turn in the questionnaire; however, the SRC representatives in the field estimate this proportion to be only about 1 percent.

    Validity of Self-Report Data. Survey measures of delinquency and of drug use depend upon respondents' reporting what are, in many cases, illegal acts. Thus, a critical question is whether such self-reports are likely to be valid.

    Like most studies dealing with these areas, we have no direct, objective validation of the present measures; however, the considerable amount of inferential evidence that exists strongly suggests that the self-report questions produce largely valid data. A number of factors have given us reasonable confidence about the validity of the responses to what are presumably among the most sensitive questions in the study: a low nonresponse rate on the drug questions; a large proportion admitting to some illicit drug use; the consistency of findings across several years of the present study; strong evidence of construct validity (based on relationships observed between variables); a close match between our data and the findings from other studies using other methods; and the findings from several methodological studies that have used objective validation methods.*

    *A more detailed discussion of the evidence for validity of the measures of illicit drug use may be found in Johnston et al. (2013b). An analysis of inconsistencies in students' reports of monthly vs. yearly drug use is provided by Bachman and O'Malley (1981).

    As for the other measures, a few have a long and venerable history (as scholars of the relevant literatures will recognize), though some of these measures have been modified to fit the present questionnaire format. Many (probably most) questions, however, have been developed specifically for this project through a careful process of question writing, pilot testing, pretesting, and question revision or elimination. Some already have been included in other publications from the study, but many have not; therefore, there currently exists little empirical evidence of their validity and reliability. On the other hand, the fact that virtually all results presented in this volume are based on individual items, rather than indexes or abstract concepts, should make it easy for the reader to judge the degree to which each item has face validity and also the degree to which there is some evidence of construct validity in the bivariate results.

    Accuracy of the Sample. A sample survey never can provide the same level of accuracy as would be obtained if the entire target population were to participate in the survey; in the case of the present study, that would be about 2.5–4.0 million seniors per year. However, perfect accuracy of this sort would be extremely expensive and certainly not worthwhile, considering the fact that a high level of accuracy can be provided by a carefully designed probability sample. The accuracy of the sample in this study is affected both by the size of the student sample and by the number of schools in which they are clustered. Appendix B presents a discussion of the ways in which this clustering and other aspects of the sampling design are taken into account in computing the precision or accuracy of the samples. For the purposes of this introduction, it is sufficient to note that virtually all estimates based on the total sample have confidence intervals of 1.5 percentage points or smaller, sometimes considerably smaller. This means that, had we been able to invite all schools and all seniors in the 48 coterminous states to participate, the results from such a massive survey would be within an estimated 1.4 percentage points of our present sample findings at least 95 times out of 100. We consider this to be a quite high level of accuracy, one that permits the detection of fairly small trends from one year to the next.
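The confidence-interval logic described above can be sketched in a few lines. The sample size and design-effect values below are purely illustrative assumptions, not the study's actual figures; the real clustering adjustments are those described in Appendix B.

```python
import math

def ci95_half_width(p, n, design_effect=1.0):
    """Half-width, in percentage points, of the 95% confidence interval
    for a proportion p estimated from n respondents. The design effect
    inflates the simple-random-sample variance to reflect clustering;
    the value used below is an illustrative assumption."""
    se = math.sqrt(design_effect * p * (1 - p) / n)
    return 100 * 1.96 * se

# A hypothetical 30% prevalence estimated from 15,000 clustered cases:
margin = ci95_half_width(0.30, 15_000, design_effect=2.0)
# margin is roughly 1.0 percentage point, inside the "1.5 percentage
# points or smaller" bound quoted in the text.
```

Note how the clustering penalty enters only through the design effect: halving the effective sample size (design effect of 2.0) widens the interval by a factor of about 1.4 relative to a simple random sample.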

    Consistency and the Measurement of Trends. One other point is worth noting in a discussion of the validity of our findings. The Monitoring the Future project is, by intention, a study designed to be sensitive to changes from one time to another. Accordingly, the measures and procedures have been standardized and applied consistently across each data collection. To the extent that any biases remain because of limits in school and/or student participation, and to the extent that there are distortions (lack of validity) in the responses of some students, it seems very likely that such problems will exist in much the same way from one year to the next. In other words, biases in the survey estimates should tend to be consistent from one year to another, which means that the measurement of trends should be affected very little by any such biases.

    Caveats to Users of the Data

    In attempting to understand and interpret the statistical results contained in this volume, the reader should keep in mind the several considerations discussed below.

    Estimation of Sampling Errors. Appendix A provides figures that allow the reader to determine the 95 percent confidence interval around any observed percentage, both for the total sample and for the several subgroups for which statistics are given. Appendix A also provides tables and guidelines for testing the statistical significance of observed differences between subgroups (e.g., males vs. females) and the significance of year-to-year changes for the entire sample and for the various subgroups on any given variable. Estimates based on subgroups containing relatively few cases (e.g., those who have used heroin) will have substantially larger margins of error than those based on subgroups with large numbers of cases.

    Subgroup Definitions. At the beginning of the Descriptive Results section there is a careful definition of each of the subgroups analyzed in this series of volumes. It is important for the reader to read these definitions so as not to be misled by the necessarily abbreviated descriptions given in the column headings of the data tables. For example,




    the column labeled "4-Year-College Plans" includes data only for seniors who expect to graduate from a four-year college, which is only a subset of all who expect to receive some college training. To take another example, several column headings in the lifetime illicit drug use index use the term "pills"; however, these categories contain people who indicate that they used any of a number of drugs, including some that usually are not taken in pill form (e.g., cocaine and narcotics other than heroin).

    Missing Data Notes. Each column of percentages for a given variable adds to 100 percent (with slight variations due to rounding), whether based on the entire sample or any one of the subgroups being described. Missing data were not included in computing these percentages, so it is up to the reader to determine whether an exceptional level of missing data exists on any given question. This can be done by comparing the weighted number of cases on which those percentages were based (given in italics under each percentage distribution) with the total number of weighted cases who took the questionnaire (given at the top of the same column). As would be expected, the missing data rate tends to rise modestly toward the end of each questionnaire. Occasionally there is an exceptional level of nonresponse for some other reason, the most common being that the respondent has been branched around a question or set of questions that are inappropriate for him or her. In this case, the percentages are based only on those for whom the question was appropriate and who answered. A footnote has been entered to help call attention to the fact that completing a particular question is contingent upon the answer on a prior question. Most such cases are contained in Form 1 of the questionnaire, which has several sets of detailed questions about drug use that are answered only by those respondents who have used a particular class of drug during the past year.
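The missing-data check described above amounts to a one-line computation; the weighted counts in this sketch are invented for illustration, not actual table values.

```python
def missing_data_rate(weighted_n_answering, weighted_n_total):
    """Fraction of questionnaire-takers without a valid answer to an
    item, computed from the weighted Ns printed in the tables (the
    italicized N under the distribution vs. the N at the top of the
    column). The counts used below are hypothetical examples."""
    return 1 - weighted_n_answering / weighted_n_total

# Suppose 2,450 weighted cases answered an item on a form taken by
# 2,600 weighted cases:
rate = missing_data_rate(2450, 2600)
```

A rate well above those of neighboring items would signal either respondent unwillingness or, as the text notes, a branching instruction that made the question inapplicable to part of the sample.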

    Characterizing Heroin Users. Because this project places a special emphasis on the study of drug use, one set of columns distinguishes respondents according to the extent to which they have been involved with illicit drugs. The five drug use categories are based on an index of seriousness of involvement. The index has Guttman-like properties; that is, anyone classified at a given level is also very likely to have exhibited any drug-using behaviors that are associated with the lower levels in the index.

    The most extreme category in the index is defined in terms of the respondent's ever having used heroin during his or her lifetime. However, any interpretations made about heroin users from these data must be tempered with extreme caution, not only because the small number of cases makes estimation in a given year relatively unreliable, but also because the heroin users are probably the most selectively represented of all of the subgroups presented here. Heroin users, especially continuing users, are among those most likely to drop out of high school (Johnston, 1973). Therefore, we feel heroin users are particularly underrepresented in a sample of high school seniors. Furthermore, most of those included in the study report only very occasional heroin use. Frequent users (addicts, if you will) are not well represented in these data.

    Given these problems with the representation of heroin users, readers may wonder why it was decided to present them as a separate category of drug users. There are two reasons for doing so. First, because even occasional heroin use represents such an extreme point on the continuum of drug involvement, it seems preferable that the heroin users not be mixed in with the next highest category of use (the "more pills" category). It remains quite possible, of course, for anyone to combine any two or more categories in the drug use continuum (by computing means of percentages weighted by the number of cases in each category); therefore, those wishing to incorporate heroin users with the next category can do so whenever they wish. The second reason for keeping them separate is that we suspect the data based on high school seniors provide some indication of the characteristics that might be found among heavier heroin users who dropped out of school.
Some variables in the study are strongly correlated with the drug use continuum, and the present heroin user category provides both an end point for that continuum and an indication of the direction (further out on the continuum) in which heavier heroin users are likely to lie. In sum, the reporting of separate percentages for heroin users among high school seniors permits greater clarity in analyzing other levels of drug use and provides data that may be suggestive about the larger population of young heroin users. But the particular data are clearly limited and should be interpreted with these cautions in mind.
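The combining rule mentioned above, a mean of percentages weighted by the number of cases in each category, can be sketched as follows; the percentages and weighted Ns are invented for illustration.

```python
def combine_categories(percentages, weighted_ns):
    """Merge two or more adjacent drug-use categories by averaging
    their percentages, weighted by the (weighted) number of cases in
    each category, as the text describes. Inputs are illustrative."""
    total = sum(weighted_ns)
    return sum(p * n for p, n in zip(percentages, weighted_ns)) / total

# Fold a hypothetical heroin category (12.0%, weighted n = 50) into a
# hypothetical "more pills" category (8.0%, weighted n = 450):
combined = combine_categories([12.0, 8.0], [50, 450])
```

Because the heroin group is small, the combined percentage sits close to the larger category's value, which is exactly why keeping heroin users separate preserves information about the extreme end of the continuum.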

    Interpreting Racial Differences. Data are given for the two largest racial/ethnic subgroups in the population: those who identify themselves as white (Caucasian) and those who identify themselves as black or African American. Data are not given for the other ethnic categories (American Indian or Alaska Native, Asian American, Mexican American or Chicano, Puerto Rican, Cuban American, other Hispanic or Latino, or Native Hawaiian or other Pacific Islander), since each of these groups constitutes a small proportion of the sample in any given year, which means that their small Ns (in combination with their clustered groupings in a limited number of schools) would yield estimates that would be too unreliable. In fact, even African Americans, who constitute approximately 12 percent of each year's sample, are represented by only 210 to 425 respondents per year on any single questionnaire form. Further, because our sample is a stratified clustered sample, it yields less accuracy than would be yielded by a pure random sample of equal size (see Appendix B for details). Therefore, because of the limited number of cases, the margin of sampling error around any statistic describing African Americans is larger than for most other subgroups described in this volume.

    There exists, however, a quick way to determine the replicability of any finding involving racial comparisons. Since most questions are repeated from year to year, one can readily establish the degree to which a finding is replicated by looking at the results for the same question in prior or subsequent years. Given the relatively small Ns for African Americans, the reader is urged to seek such replication before putting much faith in the reliability of any particular racial comparison.

    There are factors in addition to unreliability, however, that could be misleading in the interpretation of racial differences. Given the social importance that has been placed on various racial differences reported in the social science literature, we would like to caution the reader to consider the various factors that could account for differences. These factors fall into three categories: differential representation in the sample, differential response tendencies, and the confounding of race with a number of other background and demographic characteristics.

    Differential Representation. Census data characterizing American young people in the approximate age range of those in our sample show that somewhat lower proportions of African Americans than whites remain in school through the end of twelfth grade, though these differences have diminished appreciably in recent years. Therefore, a slightly different segment of the African American population than the white population resides in our target population of high school seniors. Further, our samples appear to under-represent slightly those African American males who, according to census figures, are in high school at the twelfth-grade level. Typically, African American males constituted about 6 percent of our sample, whereas census data suggested they should constitute about 7 percent. Therefore, it appears that we are losing more African American males from our target population than white males or females of either race. This may be due to generally poorer attendance rates on the part of some African American males and/or an unwillingness on the part of some to participate in data collections of this sort.

    In sum, a smaller segment of the African American population than the white population of high school age is represented by the data contained here. Insofar as any characteristic is associated with being a school dropout or absentee, it is likely to be somewhat disproportionately underrepresented among African Americans in our sample.

    Differential Response Tendencies. In examining our full range of variables, we have noted certain racial differences in response tendencies. First, the tendency to state agreement in response to agree-disagree questions is generally somewhat greater among African Americans than among whites. For example, African Americans tend to agree more with the positively worded items in our index of self-esteem, but they also tend to agree more with the negatively worded items. As it happens, that particular index has an equal number of positively and negatively worded items, so that any overall agreement bias should be self-canceling when the index score is computed. Since all data in this volume are based on single items, however, group differences in agreement bias are likely to affect results on questions employing the agree-disagree format. Fortunately, most of our questions are not of that type.

    We have also observed a somewhat greater than average tendency for African American respondents to select extreme answer categories on attitudinal scales. For example, even if the same proportion of African Americans as whites feel positively (or negatively) about some subject, fewer of the whites are likely to say they feel "very" positively (or "very" negatively). We do not have any ready explanations for these differences in response styles, but what seems important for our present purposes is that, in the process of interpreting racial differences, the reader should be aware that differences in response to particular questions may be related to these more general tendencies.
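The self-canceling property of a balanced agree-disagree index can be illustrated with a small sketch. The item keys, response scale, and values below are hypothetical, not the study's actual self-esteem items.

```python
def index_score(responses, reverse_keyed):
    """Mean score on a hypothetical 1-5 agree-disagree index in which
    negatively worded items are reverse-coded (6 - x). With equal
    numbers of positive and negative items, a uniform pull toward
    "agree" inflates half the items and deflates the other half, so
    the two shifts cancel in the mean."""
    coded = [6 - r if rev else r
             for r, rev in zip(responses, reverse_keyed)]
    return sum(coded) / len(coded)

keys = [False, False, True, True]  # two positive, two negative items
baseline = index_score([4, 4, 2, 2], keys)
# Shift every answer one point toward "agree" (acquiescence bias):
acquiescent = index_score([5, 5, 3, 3], keys)
# Both calls yield the same index score: the bias cancels out.
```

On a single item, by contrast, the same one-point acquiescent shift moves the reported percentage directly, which is why the text cautions that item-level agree-disagree results remain exposed to this bias.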

    A somewhat separate issue in response tendency is a respondent's willingness to answer particular questions. The missing data rate, which can be determined by dividing the number responding to a question by the number taking the questionnaire (given at the top of each page), may reflect willingness to answer particular questions. If a particular question or set of questions has a missing data rate higher than is true for the prior or subsequent questions, then presumably more respondents than usual were unwilling (or perhaps unable) to answer it. We have observed such an exaggerated missing data rate for African American males on the set of questions dealing with the respondent's own use of illicit drugs. Clearly a respondent's willingness to be candid on such questions depends on his or her trust of the research process and of the researchers themselves. We interpret the exaggerated missing data rates for African American males in these sections as possibly reflecting, at least in part, less trust. The reader is advised to check for exceptional levels of missing data when making comparisons on any variable in which candor is likely to be reduced by lower trust in the research process. One bit of additional evidence related to trust in the research process is that higher proportions of African Americans than whites indicated that if they had used marijuana or heroin, they would not have been willing to report it in the survey.

    Covariance with Other Factors. Some characteristics such as race are highly confounded (correlated) with other variables, variables that may in fact explain some observed racial differences. Put another way, at the aggregate level we might observe a considerable racial difference on some characteristic, but once we control for some background characteristics, such as socioeconomic level or region of the country (that is, once we compare the African American respondents with whites who come from similar backgrounds), there may be no racial difference at all.

    Race happens to be correlated with important background and demographic variables much more than some of the other variables dealt with here (such as the respondent's sex). When we compare males and females in this sample, we know that observed differences between them are not likely to be explained by differences in their family backgrounds (such as the socioeconomic level of their families, the region of the country, or the size of the cities in which they grew up), because males and females on the average come from extremely similar backgrounds. A perusal of Section C of Questionnaire Forms 1 through 6, however, will illustrate that on the average the African American subgroup in this sample is quite different from the white subgroup in a number of such respects. A higher proportion of African Americans live in the South, a higher proportion grew up in families with the father and/or mother absent, and more had mothers who worked while they were growing up. A substantially higher proportion of African Americans are Baptists, and African Americans tend to attribute more importance to religion in their lives than do whites. A higher proportion of African American respondents have children, and, on the average, they are slightly older than the white sample. As was mentioned earlier, African American males are more underrepresented in our sample than African American females.

    We note these differences in demographic, background, and ascriptive characteristics because, in any attempt to understand why a racial difference exists, one would want to be able to examine the role of these covarying characteristics. It is not possible, given the amount of data contained in this volume, to factor out the contribution of most of these covarying characteristics when interpreting racial differences. Therefore, while the data here may provide a relatively accurate description of racial differences (keeping in mind our earlier cautions about representativeness and response style), they do not by themselves yield a great deal by way of explanation.

    However, since the full data sets now reside in an archive, described in the Preface, it is possible for the reader to conduct a more in-depth exploration of observed differences (or nondifferences, for that matter). We particularly encourage readers interested in racial comparisons to make further explorations of the data in order to determine the possible role of the other variables discussed here. Users of the data are also encouraged to contact the principal investigators for information on the most recent analyses of racial differences, some of which have been published.*



    Change in Race/Ethnicity Question. Since the beginning of the study, respondents have been presented a list of various racial/ethnic categories and instructed to select the one category that best describes them. In order to be more consistent with Office of Management and Budget guidelines for assessing race/ethnicity, we revised our race/ethnicity measure. In 2005, the measure was changed in three of the six questionnaire forms. In these changed forms, respondents were presented with a list of racial/ethnic categories and instructed to select one or more responses. An examination of the data showed that relatively few respondents (about 6% in 2005) selected more than one racial/ethnic category.

    For the purposes of the race/ethnicity banner variable used in the 2005 volume, the data from the original race/ethnicity question were combined with data from the changed race/ethnicity question in the following manner: For the original question, respondents were assigned to the racial/ethnic group specified in their response. For the revised question, those checking only "White" and no other racial/ethnic group were categorized as White; those checking "Black or African American" and no other racial/ethnic group were categorized as Black. Note that in the combined-forms section of the 2005 volume, because some drug use questions occurred in only one or a few forms, there is some variation in the version of the race/ethnicity question upon which the data are based. For the individual form data, the race/ethnicity banner variable is based on the version of the race/ethnicity question used in that particular form. Thus, in 2005, Forms 2, 4, and 6 contain data for the original race/ethnicity question, and Forms 1, 3, and 5 contain data for the changed race/ethnicity question. Based on the analyses that we have examined, we do not believe these different permutations make an appreciable difference in the results. Beginning with the 2006 volume, all questionnaire forms used the revised question on race/ethnicity.
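The categorization rule for the revised question can be expressed as a short function; the category label strings below are illustrative, not the questionnaire's exact wording.

```python
def banner_category(selected):
    """Collapse a select-one-or-more race/ethnicity response (modeled
    as a set of checked categories) into the volume's two-category
    banner variable, following the rule described above. The label
    strings are illustrative assumptions."""
    if selected == {"White"}:
        return "White"
    if selected == {"Black or African American"}:
        return "Black"
    return None  # any other pattern is excluded from the banner

only_white = banner_category({"White"})
mixed = banner_category({"White", "Asian American"})  # excluded
```

The design choice here mirrors the text: a respondent enters the banner only when exactly one of the two qualifying categories is checked, which keeps the revised question's multi-select responses comparable with the original select-one data.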

    Given the long list of caveats about interpreting racial differences, one might ask why we chose to publish the statistics for racial subgroups at all. After considerable deliberation among ourselves as well as with colleagues more knowledgeable in the field of race relations, we concluded that the expected benefits of making these data available (along with this caution