Measuring Violence-Related Attitudes, Beliefs, and Behaviors Among Youths: A Compendium of Assessment Tools

U.S. Department of Health & Human Services
CENTERS FOR DISEASE CONTROL AND PREVENTION



Measuring Violence-Related Attitudes, Beliefs, and Behaviors Among Youths: A Compendium of Assessment Tools

Compiled and Edited by

Linda L. Dahlberg, PhD
Susan B. Toal, MPH
Christopher B. Behrens, MD

Division of Violence Prevention

National Center for Injury Prevention and Control

Centers for Disease Control and Prevention

Atlanta, Georgia

1998


Acknowledgments

In 1992 and 1993, the Centers for Disease Control and Prevention funded 15 evaluation projects whose primary goal was to identify interventions that change violence-related attitudes, beliefs, and behaviors among children and youths. The following investigators and program staff from these projects have made numerous contributions to the field of violence prevention and have been instrumental to the development of this document. We thank them for making this document possible and for helping to move the field of violence prevention forward.

J. Lawrence Aber, Henry (Hank) Atha, Kris Bosworth, Edward DeVos, Dennis D. Embry, Leonard Eron, Dorothy Espelage, Albert D. Farrell, Daniel J. Flannery, Robert L. Flewelling, Vangie A. Foshee, Roy M. Gabriel, Nancy G. Guerra, Marshall Haskins, Tony Hopson, Cynthia Hudley, L. Rowell Huesmann, Kenneth W. Jackson, Russell H. Jackson, Steven H. Kelder, Molly Laird, Gerry Landsberg, Linda Lausell-Bryant, Fletcher Linder, Christopher Maxwell, Aleta L. Meyer, Helen Nadel, Pamela Orpinas, Mallie J. Paschall, Christopher L. Ringwalt, Tom Roderick, Faith Samples, Mark Spellmann, David A. Stone, Patrick Tolan, Rick VanAcker, Alexander T. Vazsonyi, William H. Wiist


Contents

Acknowledgments
How To Use This Compendium
  How This Compendium Is Organized
  Choosing the Right Instrument
Introduction
  Why Outcome Evaluations Are So Important
  Components of Comprehensive Evaluations
  Ten Steps for Conducting Outcome Evaluations
  Future Considerations
Section I: Attitude and Belief Assessments
Section II: Psychosocial and Cognitive Assessments
Section III: Behavior Assessments
Section IV: Environmental Assessments


How To Use This Compendium

This compendium provides researchers and prevention specialists with a set of tools to evaluate programs to prevent youth violence. This compendium is only a first step, however. New measurement tools must be developed, existing tools must be improved, and all such measures must be made available to those of you working in the field of youth violence prevention so that the vast array of prevention programs now being used can be critically reviewed and evaluated. If you are new to the field of youth violence prevention and unfamiliar with available measures, you may find this compendium to be particularly useful. If you are an experienced researcher, this compendium may serve as a resource to identify additional measures to assess the factors associated with violence-related behavior, injuries, and deaths among youths.

Although this compendium contains more than 100 measures, it is not an exhaustive listing of available measures. Some of the more widely used measures to assess aggression in children, for example, are copyrighted and could not be included here. Other measures being used in the field, but not known to the authors, are also not included. You will find that many of the measures included in this compendium focus on individuals' violence-related attitudes, beliefs, and behaviors. These measures will be particularly useful if you are evaluating a school-based program or a community-based program designed to reduce violence among youths. Few scales and assessments address family, economic, or other community factors. It is our goal to include such measures in future editions of this compendium.

Most of the measures in this compendium are intended for use with youths between the ages of 11 and 20 years, to assess factors such as attitudes toward violence, aggressive behavior, conflict resolution strategies, self-esteem, self-efficacy, and exposure to violence. The compendium also contains a number of scales and assessments developed for use with children between the ages of 5 and 10 years, to measure factors such as aggressive fantasies, beliefs supportive of aggression, attributional biases, prosocial behavior, and aggressive behavior. When parent and teacher versions of assessments are available, they are included as well.

How This Compendium Is Organized

The Introduction provides information about why outcome evaluations are so important and includes some guidance on how to conduct such evaluations. Following the Introduction, you will find four sections, each focusing on a different category of assessments. Each section contains the following components:

• Description of Measures. This table summarizes key information about all of the assessments included in the section. Each assessment is given an alphanumeric identifier (e.g., A1, A2, A3) that is used repeatedly throughout the section, to guide you through the array of assessments provided. The table identifies the constructs being measured (appearing in alphabetical order down the left-hand column), provides details about the characteristics of the scale or assessment, identifies target groups that the assessment has been tested with, provides reliability and validity information where known, and identifies the persons responsible for developing the scale or assessment. When reviewing the Target Group information, keep in mind that we have included only those target groups we know of and that the reliability information pertains specifically to these groups and may not apply to other groups. When reviewing the Reliability/Validity information, you will notice that several measures are highly reliable (e.g., internal consistency > .80) whereas others are minimally reliable (e.g., internal consistency < .60). We included measures with minimal reliability because the reliability information is based, in some cases, on only one target group from one study; these measures may be more appropriate for a different target group. We also included measures with limited reliability with the hope that researchers will try to improve and refine them. Evidence of validity is available for only a few of the measures included in this compendium.

• Scales and Assessments. The items that make up each assessment are provided, along with response categories and some guidance to assist you with scoring and analysis. In the few instances where scales have been adapted, the most recent (modified) version is presented. We also have provided information on how to obtain permission to use copyrighted materials. In most cases, we have presented individual scales rather than the complete instruments, because instruments generally are composed of several scales. This approach increases the likelihood that the scales' test properties will be altered. Nonetheless, we did this because the field is very new and thus has produced few standardized instruments with established population norms for a range of target audiences.

• References. This list includes citations for published and unpublished materials pertaining to original developments as well as any recent adaptations, modifications, or validations. In the few instances where scales have been adapted, references for the most recent (modified) version are provided. To obtain information about the original versions, please contact the developers and refer to any relevant references cited.

Choosing the Right Instrument

Developing instruments that are highly reliable, valid, and free of any bias is not always possible. Carefully choose among the measures included in this document. The criteria in the table below may assist you in making this selection. As with any research effort, consider conducting a pilot test to minimize problems and to refine the instrument.


General Rating Criteria for Evaluating Scales

Exemplary
  Inter-item correlation: average of .30 or better
  Alpha coefficient: .80 or better
  Test-retest reliability: scores correlate more than .50 across a period of at least 1 year
  Convergent validity: highly significant correlations with more than two related measures
  Discriminant validity: significantly different from four or more unrelated measures

Extensive
  Inter-item correlation: average of .20 to .29
  Alpha coefficient: .70 to .79
  Test-retest reliability: scores correlate more than .40 across a period of 3-12 months
  Convergent validity: significant correlations with more than two related measures
  Discriminant validity: significantly different from two or three unrelated measures

Moderate
  Inter-item correlation: average of .10 to .19
  Alpha coefficient: .60 to .69
  Test-retest reliability: scores correlate more than .30 across a period of 1-3 months
  Convergent validity: significant correlations with two related measures
  Discriminant validity: significantly different from one unrelated measure

Minimal
  Inter-item correlation: average below .10
  Alpha coefficient: < .60
  Test-retest reliability: scores correlate more than .20 across less than a 1-month period
  Convergent validity: significant correlations with one related measure
  Discriminant validity: different from one correlated measure

Source: Robinson JP, Shaver PR, Wrightsman LS. Measures of personality and social psychological attitudes. San Diego, CA: Academic Press, Inc., 1991.
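For readers who want to apply the first two criteria to their own data, the following minimal sketch shows one way to compute them. It is illustrative only: the compendium does not prescribe software, the sketch assumes Python with the numpy library, and the respondent data are fabricated.

import numpy as np

def average_interitem_correlation(items):
    # items: one row per respondent, one column per scale item.
    r = np.corrcoef(items, rowvar=False)           # item-by-item correlations
    return r[np.triu_indices_from(r, k=1)].mean()  # mean of the off-diagonal entries

def cronbach_alpha(items):
    # Alpha coefficient: k/(k-1) * (1 - sum of item variances / variance of total score).
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Fabricated responses: 200 youths answering a hypothetical 6-item scale (1-5).
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = np.clip(np.round(3 + true_score + rng.normal(size=(200, 6))), 1, 5)

print(f"average inter-item correlation: {average_interitem_correlation(items):.2f}")
print(f"alpha coefficient:              {cronbach_alpha(items):.2f}")

The two results can then be read against the Exemplary through Minimal cutoffs in the table above.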


This compendium of assessment tools is a publication of the National Center for Injury Prevention and Control of the Centers for Disease Control and Prevention.

Centers for Disease Control and Prevention
Jeffrey P. Koplan, MD, MPH, Director

National Center for Injury Prevention and Control
Mark L. Rosenberg, MD, MPP, Director

Division of Violence Prevention
W. Rodney Hammond, PhD, Director

Editing services were provided by the Office of Communication Resources, National Center for Injury Prevention and Control.

Editing: Valerie R. Johnson

Production services were provided by Parallax Digital.

Graphic Design and Layout: Jeff Justice and Caroline Kish

Cover Design: Jeff Justice

Cover Photography: A Kid's World, James Carroll—Artville, LLC, 1997

Suggested Citation: Dahlberg LL, Toal SB, Behrens CB. Measuring Violence-Related Attitudes, Beliefs, and Behaviors Among Youths: A Compendium of Assessment Tools. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Injury Prevention and Control, 1998.


Introduction

Youth violence is a serious public health problem in America. Despite a recent decline in homicide rates across the United States,1 homicide continues to claim the lives of many young people. The human and economic toll of violence on young people, their families, and society is high. Homicide is the second leading cause of death for persons 15-24 years of age and has been the leading cause of death for African-Americans in this age group for over a decade.2 The economic cost to society associated with violence-related illness, disability, and premature death is estimated to be in the billions of dollars each year.3

Researchers and prevention specialists are under intense pressure to identify the factors that place young people at risk for violence, to find out which interventions are working, and to design more effective prevention programs. Across the country, primary prevention efforts involving families, schools, neighborhoods, and communities appear to be essential to stemming the tide of violence, but we must have solid evidence of their effectiveness. To find out what works, we need reliable and valid measures to assess change in violence-related attitudes, beliefs, behaviors, and community factors. Monitoring and documenting proven strategies will go a long way toward reducing youth violence and creating peaceful, healthier communities.

Why Outcome Evaluations Are So Important

Despite the proliferation of programs to prevent youth violence, we have yet to determine the most effective strategies for reducing aggression and violent behavior. We know that promising programs exist, but evaluations to confirm positive effects are lacking.4-6 In their desire to be responsive to constituents' concerns about violence, schools and communities often are so involved with prevention activities that they rarely make outcome evaluations a priority. Such evaluations, however, are necessary if we want to know what works in preventing aggression and violence. In the area of youth violence, it is not enough to simply examine how a program is being implemented or delivered, or to provide testimonials about the success of an intervention or program. Programs must be able to show measurable change in behavioral patterns or change in some of the attitudinal or psychosocial factors associated with aggression and violence. To demonstrate these changes or to show that a program made a difference, researchers and prevention specialists must conduct an outcome evaluation.

Components of Comprehensive Evaluations

Evaluation is a dynamic process. It is useful for developing, modifying, and redesigning programs; monitoring the delivery of program components to participants; and assessing program outcomes. Each of these activities represents a type of evaluation. Together, these activities compose the key components of a comprehensive evaluation.

• Formative Evaluation activities are those undertaken during the design and pretesting of programs.7 Such activities are useful if you want to develop a program or pilot test all or part of an intervention program prior to implementing it routinely. You can also use formative evaluation to structure or tailor an intervention to a particular target group or use it to help you anticipate possible problems and identify ways to overcome them.

• Process Evaluation activities are those undertaken to monitor program implementation and coverage.7 Such activities are useful if you want to assess whether the program is being delivered in a manner consistent with program objectives; for determining dose, or the extent to which your target population participates in the program; and for determining whether the delivery of the program has been uniform or variable across participants. Process or monitoring data can provide you with important information for improving programs and are also critical for later program diffusion and replication.

• Outcome Evaluation activities are those undertaken to assess the impact of a program or intervention on participants.7 Such activities are useful if you want to determine if the program achieved its objectives or intended effects—in other words, if the program worked. Outcome evaluations can also help you decide whether a program should be continued, implemented on a wider scale, or replicated in other sites.

Ten Steps for Conducting Outcome Evaluations

Outcome evaluations are not simple to conduct and require a considerable amount of resources and expertise. If you are interested in conducting an outcome evaluation, you will need to incorporate both formative and process evaluation activities and take the following steps:

• Clearly define the problem being addressed by your program.

• Specify the outcomes your program is designed to achieve.

• Specify the research questions you want the evaluation to answer.

• Select an appropriate evaluation design and carefully consider sample selection, size, and equivalency between groups.

• Select reliable and valid measures to assess changes in program outcomes.

• Address issues related to human subjects, such as informed consent and confidentiality.

• Collect relevant process, outcome, and record data.

• Analyze and interpret the data.

• Disseminate your findings, using an effective format and reaching the right audience.

• Anticipate and prepare for obstacles.

Define the problem. What problem is your program trying to address? Who is the target population? What are the key risk factors to be addressed? Youth violence is a complex problem with many causes. Begin by focusing on a specific target group and defining the key risk factors your program is expected to address within this group. Draw evidence from the research literature showing the potential benefit of addressing the identified risk factors. Given the complexity of the problem of youth violence, no program by itself can reasonably be expected to change the larger problem.

Specify the outcomes. What outcome is your program trying to achieve? For example, are you trying to reduce aggression, improve parenting skills, or increase awareness of violence in the community? Determine which outcomes are desired and ensure that the desired outcomes match your program objectives. A program designed to improve conflict resolution skills among youths is not likely to lead to an increased awareness of violence in the community. Likewise, a program designed to improve parenting skills probably will not change the interactions of peer groups from negative to prosocial. When specifying outcomes, make sure you indicate both the nature and the level of desired change. Is your program expected to increase awareness or skills? Do you expect your program to decrease negative behaviors and increase prosocial behaviors? What level of change can you reasonably expect to achieve? If possible, use evidence from the literature for similar programs and target groups to help you determine reasonable expectations of change.


Specify the questions to be answered. Research questions are useful for guiding the evaluation. When conducting an outcome evaluation of a youth violence prevention program, you may want to determine the answers to three questions: Has the program reduced aggressive or violent behavior among participants? Has the program reduced some of the intermediate outcomes or mediating factors associated with violence? Has the program been equally effective for all participants, or has it worked better for some participants than for others? If multiple components of a program are being evaluated, then you also may want to ask: Have all components of the program been equally effective in achieving desired outcomes, or has one component been more effective than another?

Select an appropriate evaluation design. Choose an evaluation design that addresses your evaluation questions. Your choice in design will determine the inferences you can make about your program's effects on participants and the effectiveness of the evaluation's various components. Evaluation designs range from simple one-group pretest/posttest comparisons to nonequivalent control/comparison group designs to complex multifactorial designs. Learn about the various designs used in evaluation research and know their strengths and weaknesses.

Special consideration should be given to sample selection, size, and equivalency between groups as part of your evaluation plan. Outcome evaluations are, by definition, comparative. Determining the impact of a program requires comparing persons who have participated in a program with equivalent persons who have experienced no program or an alternative program.7 The manner in which participants are selected is important for the interpretation and generalizability of the results. Sample size is important for detecting group differences. When estimating the sample size, ensure the sample is large enough to be able to detect group differences, and anticipate a certain level of attrition, which will vary depending on the length of the program and the evaluation. Before the program is implemented, make sure that the treatment and control/comparison groups are similar in terms of demographic characteristics and outcome measures of interest. Establishing equivalency at baseline is important because it helps you attribute change directly to the program rather than to an extraneous factor.
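As a rough illustration of the sample-size point, the sketch below uses the standard normal-approximation formula for comparing two group means and then inflates the result for expected attrition. The effect size, power, and attrition figures are assumptions chosen for illustration, not recommendations from this compendium.

from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80, attrition=0.20):
    # Normal approximation: n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2,
    # then enroll extra participants to offset expected dropout.
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n = 2 * (z_a + z_b) ** 2 / effect_size ** 2
    return round(n / (1 - attrition))

# Detecting a standardized difference of 0.30 with 20% expected attrition:
print(n_per_group(0.30))   # about 220 participants per group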

Choose reliable and valid measures to assess program outcomes. Selecting appropriate measurement instruments—ones that you know how to administer and that will produce findings that you will be able to analyze and interpret—is an important step in any research effort. When selecting measures and developing instruments, consider the developmental and cultural appropriateness of the measure as well as the reading level, native language, and attention span of respondents. Make sure that the response burden is not too great, because you want respondents to be able to complete the assessment with ease. Questions or items that are difficult to comprehend or offensive to participants will lead to guessing or nonresponses. Subjects with a short attention span or an inability to concentrate will have difficulty completing a lengthy questionnaire.

Also consider the reliability and validity of the instrument. Reliable measures are those that have stability and consistency. The higher the correlation coefficient (i.e., closeness to 1.00), the better the reliability. A measure that is highly reliable may not be valid. An instrument is considered valid if it measures what it is intended to measure. Evidence of validity, according to most measurement specialists, is the most important consideration in judging the adequacy of measurement instruments.


Address issues related to human subjects. Before data collection begins, take steps to ensure that participants understand the nature of their involvement in the project and any potential risks associated with participation. Obtaining informed consent is necessary to protect participants and researchers. Obtaining permission from participants eliminates the possibility that individuals will unknowingly serve as subjects in an evaluation. You may choose to use active informed consent, in which case you would obtain a written statement from each participant indicating their willingness to participate in the project. In some cases, you may decide to use passive informed consent, in which case you would ask individuals to return permission forms only if they are not willing to participate in the project. Become familiar with the advantages and disadvantages of both approaches. Once you have secured informed consent, you also must take steps to ensure participants' anonymity and confidentiality during data collection, management, and analysis.

Collect relevant data. Various types of data can be collected to assess your program's effects. The outcome battery may be used to assess attitudinal, psychosocial, or behavioral changes associated with participation in an intervention or program. Administering an outcome battery alone, however, will not allow you to make conclusions about the effectiveness of your program. You also must collect process data (i.e., information about the materials and activities of the intervention or program). For example, if a curriculum is being implemented, you may want to track the number of sessions offered to participants and the number of sessions attended by participants, as well as monitor the extent to which program objectives were covered and the manner in which information was delivered. Process data allow you to determine how well a particular intervention is being implemented as well as interpret outcome findings. Interventions that are poorly delivered or implemented are not likely to have an effect on participants.
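A minimal sketch of the dose calculation described above, with fabricated attendance records (the participant IDs and the one-half threshold are invented for illustration):

# Fabricated process data: sessions attended, out of 12 offered.
sessions_offered = 12
attendance = {"p01": 12, "p02": 9, "p03": 4, "p04": 11}

# Dose: the share of the program each participant actually received.
dose = {pid: n / sessions_offered for pid, n in attendance.items()}

# Participants who received less than half the program; weak outcomes here
# may reflect under-delivery rather than an ineffective program.
low_dose = [pid for pid, d in dose.items() if d < 0.5]
print(dose, low_dose)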

In addition to collecting data from participants, you may want to obtain data from parents, teachers, other program officials, or records. Multiple sources of data are useful for determining your program's effects and strengthening assertions that the program worked. The use of multiple sources of data, however, also presents a challenge if conflicting information is obtained. Data from records (i.e., hospital, school, or police reports), for example, are usually collected for purposes other than the evaluation. Thus, they are subject to variable record-keeping procedures that, in turn, may produce inconsistencies in the data. Take advantage of multiple data sources, but keep in mind that these sources have limitations.

Analyze and interpret the data. You can use both descriptive and inferential statistical techniques to analyze evaluation data. Use descriptive analyses to tabulate, average, or summarize results. Such analyses would be useful, for example, if you want to indicate the percentage of students in the treatment and comparison groups who engaged in physical fighting in the previous 30 days or the percentage of students who reported carrying a weapon for self-defense. You also could use descriptive analyses to compute gain scores or change scores in knowledge or attitudes by subtracting the score on the pretest from the score on the posttest. You could extend the descriptive analyses to examine the relationship between variables by using cross-tabulations or correlations. For example, you might want to determine what percentage of students with beliefs supportive of violence also report engaging in physical fights.
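These descriptive computations are straightforward in any statistical package. Here is one hedged example using Python's pandas library on fabricated records; the column names are invented for illustration:

import pandas as pd

# Fabricated records: group assignment, pretest/posttest scores, and
# self-reports of fighting and beliefs supportive of violence.
df = pd.DataFrame({
    "group":    ["treatment", "treatment", "comparison", "comparison"],
    "pretest":  [10, 12, 11, 9],
    "posttest": [15, 14, 11, 10],
    "fought":   [0, 1, 1, 1],   # physical fight in previous 30 days
    "beliefs":  [0, 1, 1, 0],   # beliefs supportive of violence
})

# Change scores: posttest minus pretest, averaged by group.
df["change"] = df["posttest"] - df["pretest"]
print(df.groupby("group")["change"].mean())

# Percentage reporting a physical fight, by group.
print(df.groupby("group")["fought"].mean() * 100)

# Cross-tabulation: what share of students with beliefs supportive
# of violence also report fighting?
print(pd.crosstab(df["beliefs"], df["fought"], normalize="index"))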

Inferential analyses are more difficult to conduct than descriptive analyses, but they yield more information about program effects. For example, you could use an inferential analysis to show whether differences in outcomes between treatment and comparison groups are statistically significant or whether the differences are likely due to chance. Knowing the change scores of the treatment or comparison groups is not as useful as knowing if the change scores are statistically different. With inferential statistical techniques, evaluators can also take into account (i.e., statistically control for or hold constant) background characteristics or other factors (e.g., attrition, program dose, pretest score) between the treatment and comparison groups when assessing changes in behavior or other program outcomes. Regardless of the statistical technique you use, always keep in mind that statistical significance does not always equate with practical significance. Use caution and common sense when interpreting results.

Many statistical techniques used by researchers to assess program effects (e.g., analysis of variance or covariance, structural equation modeling, or hierarchical linear modeling) require a considerable amount of knowledge in statistics and measurement. You should have a good understanding of statistics and choose techniques that are appropriate for the evaluation design, research questions, and available data sources.
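To make the inferential step concrete, here is a sketch that assumes the scipy and statsmodels libraries and uses simulated data. It first tests the group difference in posttest scores, then holds the pretest score constant with an analysis of covariance, as described above:

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Simulated outcomes: the treatment lowers aggression scores by 2 points.
rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "group": np.repeat(["treatment", "comparison"], n // 2),
    "pretest": rng.normal(20, 4, n),
})
effect = np.where(df["group"] == "treatment", -2.0, 0.0)
df["posttest"] = 0.7 * df["pretest"] + effect + rng.normal(0, 3, n)

# Is the difference in posttest means statistically significant?
t, p = stats.ttest_ind(df.loc[df["group"] == "treatment", "posttest"],
                       df.loc[df["group"] == "comparison", "posttest"])
print(f"t = {t:.2f}, p = {p:.3f}")

# Analysis of covariance: the group effect, holding the pretest constant.
model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(model.params)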

Disseminate your findings. This is one of the most important steps in the evaluation process. You must always keep program officials abreast of the evaluation findings, because such information is vitally important for improving intervention programs or services. Also communicate your findings to research and prevention specialists working in the field. Keep in mind that the traditional avenues for disseminating information, such as journal articles, are known and accessible to researchers but not always to prevention specialists working in community-based organizations or schools.

When preparing reports, be sure to present the results in a manner that is understandable to the target audience. School, community, and policy officials are not likely to understand complex statistical presentations. Reports should be brief and written with clarity and objectivity. They should summarize the program, evaluation methods, key findings, limitations, conclusions, and recommendations.

Anticipate obstacles. Evaluation studies rarely proceed as planned. Be prepared to encounter a number of obstacles—some related to resources and project staffing and others related to the field investigation itself (e.g., tension between scientific and programmatic interests, enrollment of control groups, subject mobility, analytic complexities, and unforeseeable and disruptive external events).8 Multiple collaborating organizations with competing interests may result in struggles over resources, goals, and strategies that are likely to complicate evaluation efforts. Tension also may exist between scientists, who must rigorously document intervention activities, and program staff, who must be flexible in providing services or implementing intervention activities. During the planning phases of the evaluation, scientific and program staffers must have clear communication and consensus about the evaluation goals and objectives, and throughout the evaluation, they must have mechanisms to maintain this open communication.

Future Considerations

The field of violence prevention needs reliable, valid measurement tools in the quest to determine the effectiveness of interventions. In past years, researchers in violence prevention have looked to the literature for established measures and have modified them accordingly to assess violence-related attitudes and behaviors. These adaptations have sometimes yielded satisfactory results, but in other cases, the measures have not yet proven to be very reliable. Researchers have also tried to develop new measures to gauge skill and behavior changes resulting from violence prevention interventions. For example, a number of researchers have developed measures to assess conflict resolution skills. Many of these measures also require further refinement and validation.

We also need to develop measures that go beyond assessing individuals' attitudes, beliefs, and behaviors. Research findings indicate that a number of complex factors—related not only to individuals but to the broader social environment as well—increase the probability of problem behavior during adolescence and young adulthood. Peers, families, schools, and neighborhoods are all involved in shaping attitudes and behaviors related to youth violence. Measures to assess how these factors are related to violence need to be developed, refined, and made available to researchers and prevention specialists.

To ensure that the instruments we use are culturally appropriate, we must involve a wide range of target groups. Violence cuts across all racial and ethnic groups and is especially prevalent among African-American and Hispanic youths. Some of the more standardized instruments that have been adapted for use in violence prevention efforts, however, were not developed specifically for use with minority populations. Thus, the items contained in some of the more standardized instruments may not be culturally or linguistically appropriate for minority populations.

One final problem we must address is the lack of time-framed measures that can be used for evaluation research. To assess the effectiveness of an intervention, we must be able to assess how a particular construct (e.g., attitudes toward violence or aggressive behavior) changes from one point in time to another point in time following an intervention. Instruments that instruct respondents to indicate "usual behavior," or to "describe or characterize the behavior of a child or teenager," are not likely to precisely measure behavior change. Instruments that instruct respondents to consider behavior "now or in the last six months" are also not precise enough to measure behavior change.

The field of violence prevention is still very new, and we must make progress on several fronts. New tools must be developed, and existing tools need to be improved. More importantly, researchers and specialists dedicated to the prevention of youth violence must have access to the many measurement tools that have been developed. We hope that increased use of and experience with these measures will help to validate them and will expand our knowledge about effective strategies to prevent youth violence.


References

1. Centers for Disease Control and Prevention. National summary of injury mortality data, 1985-1995. Atlanta, GA: National Center for Injury Prevention and Control, 1997.

2. Anderson RN, Kochanek KD, Murphy SL. Report of final mortality statistics, 1995. Monthly Vital Statistics Report 1997;45,11(2 Suppl).

3. Miller TR, Cohen MA, Rossman SB. Victim costs of violent crime and resulting injuries. Health Affairs 1993;12(4):186-197.

4. Altman E. Violence prevention curricula: summary of evaluations. Springfield, IL: Illinois Council for the Prevention of Violence, 1994.

5. Cohen S, Wilson-Brewer R. Violence prevention for young adolescents: the state of the art of program evaluation. Washington, DC: The Carnegie Council on Adolescent Development, 1991.

6. Tolan PH, Guerra NG. What works in reducing adolescent violence: an empirical review of the field. Boulder, CO: The Center for the Study and Prevention of Violence, Institute for Behavioral Sciences, University of Colorado, 1994:1-94.

7. Rossi PH, Freeman HE. Evaluation: a systematic approach. 5th edition. Newbury Park, CA: Sage Publications, 1993.

8. Powell KE, Hawkins DF. Youth violence prevention: descriptions and baseline data from 13 evaluation projects. American Journal of Preventive Medicine 1996;12(5 Suppl).