Concerns, Considerations, And New Ideas for Data Collection and Research in Educational Technology Studies



JRTE | Vol. 43, No. 1, pp. 29–52 | © 2010 ISTE | www.iste.org

Concerns, Considerations, and New Ideas for Data Collection and Research in Educational Technology Studies

Damian Bebell, Laura M. O'Dwyer, Michael Russell, Tom Hoffmann

Boston College

    Abstract

In the following pages, we examine some common methodological challenges in educational technology research and highlight new data collection approaches using examples from the literature and our own work. Given that surveys and questionnaires remain widespread and dominant tools across nearly all studies of educational technology, we first discuss the background and limitations of how researchers have traditionally used surveys to define and measure technology use (as well as other variables and outcomes).

Through this discussion, we introduce our own work with a visual analog sliding scale as an example of a new approach to survey design and data collection that capitalizes on the technology resources increasingly available in schools. Next, we highlight other challenges and opportunities inherent in the study of educational technology, including the potential for computer adaptive surveying, and discuss the critical importance of aligning outcome measures with the technological innovation, concerns with computer-based versus paper-based measures of achievement, and the need to consider the hierarchical structure of educational data in the analysis of data for evaluating the impact of technology interventions. (Keywords: research methodology, survey design, measurement, educational technology research)

This paper examines some common methodological issues facing educational technology research and provides suggestions for new data collection approaches using examples from the literature and the authors' own experience. Given that surveys and questionnaires remain widespread and dominant tools across nearly all studies of educational technology, we first discuss the background and limitations of how researchers have traditionally used surveys to define and measure technology use (as well as other variables and outcomes).


Following the rise of educational technology resources, hundreds of studies have sought to examine instructional uses of technology across a wide variety of educational settings. Despite the large number of studies, many researchers and decision makers have found past and current research efforts unsatisfactory. Specifically, criticisms of educational technology research have focused on both the lack of guiding theory as well as the failure to provide adequate empirical evidence on many salient outcome measures (Roblyer & Knezek, 2003; Strudler, 2003; Weston & Bain, 2010). For example, in Roblyer and Knezek's (2003) call for a national educational technology research agenda, they declare that the next generation of scholarship must be "more comprehensive and informative about the methods and materials used, conditions under which studies take place, data sources and instruments, and subjects being studied; and they must emphasize coherence between their methods, findings, and conclusions" (p. 69).

Although there has been more examination and discussion about general research shortcomings, many critics and authors have examined and highlighted specific weaknesses across the published literature. Baker and Herman (2000); Waxman, Lin, and Michko (2003); Goldberg, Russell, and Cook (2003); and O'Dwyer, Russell, Bebell, and Tucker-Seeley (2005, 2008) have all suggested that many educational technology studies suffer from a variety of specific methodological shortcomings. Among other deficits, past reviews of educational technology research found it was often limited by the way student and teacher technology use was measured, a poor selection/alignment of measurement tools, and the failure to account for the hierarchical nature of data collected from teachers and students in schools (Baker & Herman, 2000; Goldberg, Russell, & Cook, 2003; O'Dwyer, Russell, Bebell, & Tucker-Seeley, 2005, 2008; Waxman, Lin, & Michko, 2003).

The collective weaknesses of educational technology research have created a challenging situation for educational leaders and policy makers who must use flawed or limited research evidence to make policy and funding decisions. Even today, little empirical research exists to support many of the most cited claims on the effects of educational technology. For example, despite a generation of students being educated with technology, there has yet to be a definitive study that examines the causal impacts of computer use in school on standardized measures of student achievement. It is a growing problem that, as an educational community, research and evaluation efforts have not adequately elucidated the short- and long-term effects of technology use in the classroom. This situation forces decision makers to rely on weak sources of evidence, if any at all, when allocating budgets and shaping future policy around educational technology.


Not surprisingly, in today's zeitgeist of educational accountability, the call for empirical, research-based evidence that these massive investments are affecting the lives of teachers and students has only intensified. It is our hope that the current paper will serve to further illuminate a small number of methodological limitations and concerns that affect much of the educational technology research literature, and will provide real-world examples from our own efforts on promising approaches and techniques to improve future inquiries.

Defining and Measuring Technology Use with Surveys

Since the earliest adoption of computer-based technology resources in education, there has been a desire to collect empirical evidence on the impact of technology on student achievement and other outcome variables. The impacts on learning, however, must be placed in the context of technology use. Before the impact of technology integration can be studied, there must be solid empirical evidence of how teachers and students are using technology. As such, sound research on the impact of technology integration is predicated on the development and application of valid and reliable measures of technology use.

To measure technology use appropriately (as well as any other variable or indicator), researchers must invest time and effort to develop instruments that are both reliable and valid for the inferences that are made. Whether collected via paper or computer, survey instruments remain one of the most widely employed tools for measuring program indicators. However, the development of survey items poses particular challenges for research that focuses on new and novel uses of technology. Because the ways a given technology tool is used can vary widely among teachers, students, and classrooms, the survey developer must consider a large number of potential uses for a given technology-based tool to fully evaluate its effectiveness.

For decades, paper-and-pencil administrations of questionnaires or survey instruments dominated research and evaluation efforts, but in recent years an increasing number of researchers are finding distinct advantages to using Internet-based tools for collecting their data (Bebell & Kay, 2010; Shapley, 2008; Silvernail, 2008; Weston & Bain, 2010). Web-based surveys are particularly advantageous in settings where technology is easily accessible, as is increasingly the case in schools. In addition, data collected from computer-based surveys can be accessed easily and analyzed nearly instantly, streamlining the entire data collection process. However, the constraints and limitations of paper-based surveys have rarely been improved upon in their evolution to computer-based administration; typically, technology-related surveys fail to capitalize on the affordances of technology-based data collection.


technology literature, (b) demonstrate the limitations and considerations when quantifying the frequency of technology use with a traditional survey design, and (c) introduce our visual analog sliding scale, which capitalizes on the availability of technology resources in schools to improve the accuracy and validity of traditional survey design.

Defining Technology Use

A historical review of the literature on educational technology reveals that the definition of technology use varies widely across research studies. The first large-scale investigation of modern educational technology occurred in 1986 when Congress asked the federal Office of Technology Assessment (OTA) to compile an assessment of technology use in U.S. schools. Through a series of reports, OTA (1988, 1989, 1995) documented national patterns of technology integration and use. Ten years later, Congress requested that OTA revisit the issue of teachers and technology in K–12 schools in depth (OTA, 1995, p. 5). In a 1995 OTA report, the authors noted that previous research on teachers' use of technology employed different definitions of what constituted technology use. In turn, these different definitions led to confusing and sometimes contradictory findings regarding teachers' use of technology. By way of another example, a 1992 International Association for the Evaluation of Educational Achievement (IEA) survey defined a computer-using teacher as a teacher who "sometimes" used computers with students. A year later, Becker (1994) employed a more explicit definition of a computer-using teacher, for which at least 90% of the teacher's students were required to use a computer in their class in some way during the year. Thus, the IEA defined use of technology in terms of the teacher's use for instructional delivery, whereas Becker defined use in terms of the students' use of technology during class time. It is no surprise that these two different definitions of a computer-using teacher yielded different impressions of the extent of technology use. In 1992, the IEA study classified 75% of U.S. teachers as computer-using teachers, whereas Becker's criteria yielded about one third of that (approximately 25%) (OTA, 1995). This confusion and inconsistency led OTA to remark: "Thus, the percentage of teachers classified as computer-using teachers is quite variable and becomes smaller as definitions of use become more stringent" (p. 103).

In the decade(s) since these original research efforts, teachers' use of technology has increased in complexity as technology has become more advanced, varied, and pervasive in schools, further complicating researcher efforts to define and measure use. Too often, however, studies focus on technology access instead of measuring the myriad ways that technology is being used. Such research assumes that teachers' and students' access to technology is an


did not measure student or teacher practices with technology, but compared levels of academic achievement among students classified as receiving instruction in either high- or low-technology environments. In other words, the research had no measures of actual technology use, but instead classified students based on their access to technology. Although access to technology has been shown to be an important predictor of technology use (Bebell, Russell, & O'Dwyer, 2004; Ravitz, Wong, & Becker, 1999), a wide variety of studies conducted in educational environments where technology access is robust, yet use is not, suggest that the assumption is inadequate for research that is used to inform important educational and policy decisions around educational technology (Bebell, Russell, & O'Dwyer, 2004; Weston & Bain, 2010). Clearly, measuring access to computers is a poor substitute for the measurement of actual use in empirical research, a point that is further highlighted when readers learn that Angrist and Lavy's well-publicized 2002 study defined and classified settings where 10 students shared a single computer (i.e., a 10:1 ratio) as the high-access schools.

Today, several researchers and organizations have developed their own definitions and measures of technology use to examine the extent of technology use and to assess the impact of technology use on teaching and learning. Frequently these instruments collect information on a variety of types of technology use and then collapse the data into a single generic technology use variable. Unfortunately, the amalgamated measure may be inadequate both for understanding the extent to which technology is being used by teachers and students, and for assessing the impact of technology on learning outcomes (Bebell, Russell, & O'Dwyer, 2004). Ultimately, decision makers who rely on different measures of technology use will likely come to different conclusions about the prevalence of use and its relationship with student learning outcomes. For example, some may interpret one measure of teachers' technology use solely as teachers' use of technology for delivering instruction, whereas others may view it as a generic measure of a teacher's collected technology skills and uses. Although defining technology use as a single dimension may simplify analyses, it complicates efforts by researchers and school leaders to provide valid and reliable evidence of how technology is being used and how use might relate to improved educational outcomes.

Recognizing the Variety of Ways Teachers Use Technology

One approach to defining and measuring technology use that we have found effective has concentrated on developing multiple measures that focus on specific ways that teachers use technology. This approach was employed by Mathews (1996) and Becker (1999) in demonstrating the


learning, we examined survey responses from more than 2,500 K–12 public school teachers who participated in the federally funded USEIT Study (Russell, O'Dwyer, Bebell, & Miranda, 2003). Analyzing these results using factor analytic techniques, we developed seven distinct scales that measure teachers' technology use:

Teachers' use of technology for class preparation
Teachers' professional e-mail use
Teacher-directed student use of technology during class time
Teachers' use of technology for grading
Teachers' use of technology for delivering instruction
Teachers' use of technology for providing accommodations
Teacher-directed student use of technology to create products

Analyses that focused on these seven teacher technology use scales revealed that the frequency with which teachers employed technology for each of these purposes varied widely (Bebell, Russell, & O'Dwyer, 2004). For example, teachers' use of technology for class preparation was strongly negatively skewed (skewness = -1.12), indicating that a majority of surveyed teachers frequently used technology for planning, whereas only a small number of teachers did not. Conversely, the use of technology for delivering instruction was strongly positively skewed (1.09), meaning that only a small number of surveyed teachers frequently used technology to deliver instruction, whereas most reported never or only rarely using technology to deliver instruction. Distributions for teacher-directed student use of technology to create products (1.15) and teachers' use of technology for providing accommodations (1.04) were also positively skewed. Using technology for grading had a weak positive skew (0.60), whereas teacher-directed student use of technology during class time (0.11) was relatively normally distributed. Teachers' use of e-mail, however, presented a bimodal distribution, with a large percentage of teachers reporting frequent use and a large portion of the sample reporting no use. Interestingly, when these individual scales were combined into a generic technology use scale (as is often done with technology use surveys), the distribution closely approximated a normal distribution. Thus, the generic technology use measure obscured all of the unique and divergent patterns observed in the specific technology use scales (Bebell, Russell, & O'Dwyer, 2004).
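To make this masking effect concrete, the short sketch below simulates a few subscales with shapes similar to those described above and compares their skewness statistics with the skewness of a collapsed composite. The distributions, sample size, and variable names are hypothetical illustrations rather than the actual USEIT data.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(42)
n_teachers = 2500

# Hypothetical subscale scores on a 0-4 frequency metric; the shapes
# (negatively skewed preparation use, positively skewed instructional use,
# bimodal e-mail use) mirror the patterns described above, not real data.
preparation = np.clip(rng.beta(5, 1.5, n_teachers) * 4, 0, 4)   # most teachers high
instruction = np.clip(rng.beta(1.5, 5, n_teachers) * 4, 0, 4)   # most teachers low
email = np.where(rng.random(n_teachers) < 0.5,                  # bimodal: frequent vs. none
                 rng.normal(3.5, 0.3, n_teachers),
                 rng.normal(0.3, 0.2, n_teachers)).clip(0, 4)

for name, scale in [("preparation", preparation),
                    ("instruction", instruction),
                    ("e-mail", email)]:
    print(f"{name:12s} skewness = {skew(scale):5.2f}")

# Collapsing the subscales into one generic "technology use" score
generic = (preparation + instruction + email) / 3
print(f"{'generic':12s} skewness = {skew(generic):5.2f}  (divergent patterns obscured)")
```

The composite's near-symmetric distribution hides the sharply skewed and bimodal subscales that produced it, which is the pattern reported for the generic USEIT scale.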

Clearly, when compared to a single generic measure of technology use, using multiple measures of specific technology use offers a more nuanced understanding of how teachers use technology and how these uses vary among teachers. Research studies that have utilized this multifaceted approach


1999). For example, when we examined teachers' use of technology using a generic measure that comprised a wide variety of types of technology use, it appeared that the frequency with which teachers use technology did not vary noticeably across the number of years they had been in the profession. In other words, teachers who were brand new to the profession appeared to use technology as frequently as teachers who had been in the profession for 11 or more years. However, when distinct individual types of technology use were examined, newer teachers reported higher levels of technology use for preparation and slightly higher levels of use for accommodating students' special needs than did more experienced teachers. Conversely, new teachers reported less frequent use of technology for instructional use and having their students use technology during class time than their more experienced colleagues (Bebell, Russell, & O'Dwyer, 2004). These examples convey the importance of fully articulating and measuring technology use and how different measures of technology use (even with the same data set) can lead to substantially varied results.

How technology use is defined and measured (if measured at all) plays a substantial, but often overlooked, role in educational technology research. For example, using NAEP data, Wenglinsky (1998) employed two measures of technology use in a study on the effects of educational technology on student learning. The first measure focused specifically on use of technology for simulation and higher-order problem solving and found a positive relationship between use and achievement. The second measure employed a broader definition of technology use and found a negative relationship between use and achievement. Thus, depending on how one measures use, the relationship between technology use and achievement appeared to differ.

Similarly, O'Dwyer, Russell, Bebell, and Tucker-Seeley (2005) examined the relationship between various measures of computer use and students' English/language arts test scores across 55 intact upper elementary classrooms. Their investigation found that, while controlling for both prior achievement and socioeconomic status, students who reported greater frequency of using technology in school to edit their papers also exhibited higher total English/language arts test scores and higher writing scores. However, other measures of teachers' and students' use of technology, such as students' use of technology to create presentations and recreational use of technology at home, were not associated with increased English/language arts outcome measures. Again, different findings related to how technology use was associated with student test performance resulted depending on how the researchers chose to define and measure technology use. These examples typify the complex and


    Four Approaches for Representing the Frequency of Technology Use

Below, we present an extended example of how teachers' technology use is typically measured via a survey instrument, including clear limitations to traditional approaches and recommendations for capitalizing on the affordances provided by technology for improving overall accuracy and validity. Traditionally, surveys present respondents with a set of fixed, closed-ended response options from which they must select their response. For example, when measuring the frequency of technology use, teachers may be asked to select from a discrete number of responses for a given item. As an example, the survey question below (adapted from the 2001 USEIT teacher survey) asks a teacher the frequency with which they used a computer to deliver instruction:

During the last school year, how often did you use a computer to deliver instruction to your class?

    Never

    Once or twice a year

    Several times a year

    Once a month

    Several times a month

    Once a week

Several times a week

Everyday

    Table 1. Assigning Linear Values to Represent Use

    Response Option Assigned Value

    Never 0

    Once or twice a year 1

    Several times a year 2

    Once a month 3

    Several times a month 4

    Once a week 5

    Several times a week 6

    Everyday 7


In this example, a respondent selects the response option that best represents the frequency with which s/he uses a computer to deliver instruction. To enable the statistical analyses of the results, the researcher must assign a numeric value to each of the potential response options. Using the current example, the number assigned to each response option would correspond linearly with increasingly frequent technology use (for example, Never = 0 to Everyday = 7). This 8-point scale (0–7) differentiates how frequently each teacher uses technology for instruction over the course of a given year. By quantifying the responses numerically, a variety of arithmetic and statistical analyses may be performed.

In measurement theory, a greater number of response options provides greater mathematical differentiation of a given phenomenon, which in this case is the frequency of technology use. However, requiring respondents to select a single response from a long list of options can become tedious and overwhelming. Conversely, using fewer response options provides less differentiation among respondents and less information about the studied phenomenon. As a compromise, survey developers have typically employed 5- to 7-point scales to provide a balance between the detail of measurement and the ease of administration (Dillman, 2000; Nunnally, 1978).

However, this widely employed approach has an important limitation. Using the current example, the response options are assigned using linear one-step values, whereas the original response options describe nonlinear frequencies. Linear one-step values result in an ordinal measurement scale, where values do not indicate absolute qualities, nor do they indicate that the intervals between the numbers are equal (Kerlinger, 1986, p. 400). From a measurement point of view, the values assigned in the preceding example are actually arbitrary (with the exception of 0, which indicates that a teacher never uses technology). Although this type of scale serves to differentiate

Table 2. Assigning Real Values to Represent Use

    Response Option Assigned Value

    Never 0

    Once or twice a year 2

    Several times a year 6

    Once a month 9

    Several times a month 27

    Once a week 36

    Several times a week 108

    Everyday 180


teachers at the beginning and again near the end of the school year. The average value calculated across all teachers during the first administration was 2.5, indicating that, on average, teachers used technology for instruction between several times a year and once a month. The average value calculated across the teachers during the second administration was 5.1, or about once per week. Arithmetically, it appears that the frequency with which teachers use technology has doubled. This doubling, however, is an artifact of the scale assigned to the response options and does not accurately reflect the actual change in the frequency of use.
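A small sketch of this artifact, using the linear codes from Table 1 and the real-frequency codes from Table 2 above: the fall and spring response lists are hypothetical, but they show how the same shift in responses can look like a doubling under linear coding while implying a much larger change in the estimated number of uses per year.

```python
# Linear one-step codes (Table 1) versus estimated real frequencies per
# 180-day school year (Table 2) for the same response options.
linear = {"Never": 0, "Once or twice a year": 1, "Several times a year": 2,
          "Once a month": 3, "Several times a month": 4, "Once a week": 5,
          "Several times a week": 6, "Everyday": 7}
real = {"Never": 0, "Once or twice a year": 2, "Several times a year": 6,
        "Once a month": 9, "Several times a month": 27, "Once a week": 36,
        "Several times a week": 108, "Everyday": 180}

# Hypothetical responses from the same teachers at the start and end of a year.
fall = ["Several times a year", "Once a month", "Several times a year",
        "Once a month", "Never", "Several times a month"]
spring = ["Once a week", "Several times a week", "Once a week",
          "Several times a month", "Once a week", "Several times a week"]

def mean_score(responses, coding):
    return sum(coding[r] for r in responses) / len(responses)

for label, coding in [("linear (Table 1)", linear), ("real (Table 2)", real)]:
    pre, post = mean_score(fall, coding), mean_score(spring, coding)
    print(f"{label:18s} fall = {pre:6.1f}  spring = {post:6.1f}  ratio = {post / pre:4.1f}x")
# The linear codes suggest roughly a doubling; the real-frequency codes imply a
# much larger change in the estimated number of instructional uses per year.
```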

Table 2 displays an alternate coding system in which the assigned values for each response option are designed to reflect the actual frequency with which teachers could use technology to deliver instruction over the course of a 180-day school year.

In this example, the same survey question and response options are presented; however, the researcher assigns values to each response choice that represent real values. Assuming the school year equals 180 days (or nine months, 36 weeks), the analyst assigns values to each response option that reflect the estimated frequency of use. This approach results in a 180-point scale, where 0 represents a teacher never using technology and 180 represents everyday use of technology. This approach provides easier interpretation and presentation of summary data, because the difference between the numbers actually reflects an equal difference in the amount of attribute measured (Glass & Hopkins, 1996).

In the current example, the resulting survey data takes on qualities of an interval measurement scale, whereby equal differences in the numbers correspond to equal differences in the amounts the attributes measure (Glass & Hopkins, 1996, p. 8). In other words, rather than the 8-step scale presented in the first example, the 181-step scale offers a clearer and more tangible interpretation of teachers' technology use. The number of times a teacher may use technology can occur at any interval on a scale between 0 and 180; however, in the current example, teachers responding to the item were still provided with only eight discrete response options in the original survey question. The small number of response options typically employed in survey research forces survey respondents to choose a response-option answer that best approximates their situation. For example, a teacher may use technology somewhat more than once a week but not quite several times per week. Faced with inadequate response options, the teacher must choose between the two options. In this scenario, the survey respondent is forced to choose one of the two available options, both of which yield imprecise, and ultimately inaccurate, data. If the teacher selects both options, the analyst typically must discard the data or be forced to subjectively assign a value to the response. Thus, whenever a survey


Recognizing the measurement limitations of fixed response options in traditional survey design, as well as the increasing presence of technology in educational settings, we have experimented with ways of improving the accuracy of our data collection efforts through the use of new technology-enabled tools to improve traditional survey data collection. Specifically, across our recent studies, we have developed and applied an online survey presentation method where survey items are presented with continuous scales that allow the respondent to select a value that accurately reflects their technology use rather than relying on a limited number of fixed, closed-ended response options (Bebell & Russell, 2006; Bebell, O'Dwyer, Russell, & Hoffmann, 2007; Tucker-Seeley, 2008). Through the use of a Macromedia Flash visual analog scale, survey respondents are presented with a full, but not overwhelming, range of response options. This advancement in data collection technology allows the same survey item to be measured using a ratio scale that presents the entire range of potential use (with every available increment present) to teachers. In the following example, teachers are presented the technology use survey question with the visual analog scale in Figure 1.

To complete the survey item, each respondent uses a mouse/trackpad to select the response on the sliding scale. Although the interactive nature of the visual analog scale is challenging to demonstrate on paper, the program is designed to help respondents quickly and accurately place themselves on the scale. In the current example, the teacher's response is displayed for them (in red) under the heading "approximate number of times per year." As a survey respondent moves the sliding scale across the response options on the horizontal line, the "approximate number of times per year" field displays their response in real time. Thus, a teacher can move the slider to any response option between 0 (never) and 180 (daily). In addition, the descriptions above the horizontal slider provide some familiar parameters for teachers so they can quickly select the appropriate response. By solving many of the limitations of traditional categorical survey response options, the visual analog scale provides one example of how digital technologies can be applied to improve traditional data collection efforts in educational technology research.
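The original item was implemented as a Macromedia Flash slider; the sketch below is a hypothetical re-creation of the same interaction using a Python notebook widget, included only to make the continuous 0–180 response and live readout concrete.

```python
import ipywidgets as widgets
from IPython.display import display

# Hypothetical re-creation of the 0-180 visual analog item; the original
# instrument was a Macromedia Flash slider, not this widget.
slider = widgets.IntSlider(value=0, min=0, max=180, step=1,
                           description="Uses/year:", continuous_update=True)
readout = widgets.Label(value="Approximate number of times per year: 0")

def show_value(change):
    # Mirror the slider position in real time, as the Flash readout did.
    readout.value = f"Approximate number of times per year: {change['new']}"

slider.observe(show_value, names="value")
display(widgets.VBox([slider, readout]))
```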

The Potential for Computer Adaptive Surveying

The application of new technologies in survey research and other data collection efforts can provide many possibilities for improving the quality of educational technology research. Similarly, computer adaptive surveying (CAS) represents the state of the art in the development of survey design. In contrast to the current Web-based surveys used to collect data, which


design of computer adaptive achievement tests, which have been found to be more efficient and accurate than comparable paper-based tests for providing cognitive ability estimates (Wainer, 1990). Similarly, a CAS can tailor the survey questions presented to a given student or teacher to probe the specific details of a general phenomenon.

Take, for example, a recent study we conducted examining how middle school teachers and students use computers in a multischool one-to-one (1:1) laptop program (Bebell & Kay, 2009). Past research and theory suggested that teachers and students across the multiple study settings would likely use computers in very different and distinct ways. So a computer adaptive survey enabled our research team to probe the specific ways teachers and students used technology without requiring them to respond to sets of questions that were unrelated to the ways they personally used computers. Thus, if a student reported that she had never used a computer in mathematics class, the survey automatically skipped ahead to other questions in other subject areas. However, if a student reported that he had used a computer in mathematics class, he would be presented with a series of more detailed and nuanced questions regarding this particular type of technology use (including their frequency of using spreadsheets, modeling functions, etc.).
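A minimal sketch of this kind of skip logic follows, with a console prompt standing in for the web survey; the subject areas and follow-up items are illustrative and are not the actual Bebell and Kay (2009) instrument.

```python
def ask(prompt):
    """Collect a yes/no answer from the respondent (console stand-in for a web form)."""
    return input(f"{prompt} (y/n): ").strip().lower() == "y"

# Hypothetical adaptive block: detailed follow-ups are shown only to students
# who report using a computer in that subject, mirroring the skip logic above.
subjects = {
    "mathematics": ["How often did you use spreadsheets?",
                    "How often did you use graphing or modeling tools?"],
    "English/language arts": ["How often did you use a computer to draft essays?",
                              "How often did you use a computer to edit your writing?"],
}

responses = {}
for subject, follow_ups in subjects.items():
    if ask(f"During the last school year, did you use a computer in {subject} class?"):
        # Branch into the detailed, subject-specific items.
        responses[subject] = [input(f"{q} ") for q in follow_ups]
    else:
        # Skip ahead to the next subject area, as the adaptive survey did.
        responses[subject] = None

print(responses)
```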

In another recent pilot study, researchers collaborating with the New Hampshire Department of Education created a Web-based school capacity index to estimate the extent to which a given school will have the technological capacity to administer standardized assessments via computer (Fedorchak, 2008). For this instrument, respondents are first asked about the location and/or type of computers that can be used for testing (labs/

[Figure 1. Flash visual analog sliding scale. Item text: "During the last school year, how often did you use a computer to deliver instruction to your class?" Instruction: "Use the arrow/mouse to pull the slider to your response."]


are uniquely nuanced and specific to their original descriptions of technology access.

Through such adaptive surveys, a more complete and accurate descriptive understanding of a given phenomenon can be acquired. Moreover, due to the adaptive nature of the survey, students and teachers are no longer presented with sets of unrelated survey items, thus decreasing the time required to complete the survey, decreasing fatigue, and increasing the accuracy of information collected. Although the use of computer adaptive testing has revolutionized the speed and accuracy of such widespread international assessments as the Graduate Record Exam (GRE) and the Graduate Management Admission Test (GMAT), few examples outside of psychological surveys employ such an approach for data collection in research and evaluation studies. Given the scarcity of time for data collection in most educational settings and the wide variety of technology uses and applications often under review, CAS presents a particularly promising direction for educational technology research.

Use and Alignment of Standardized Tests as Outcome Measures

Thus far, this paper has largely focused on the data collection aspects of educational technology research and ways that data collection may be improved through better-designed surveys. However, survey data collection and measurement represent only one aspect of the overall research or evaluation undertaking. In many instances, data collected through surveys are not alone sufficient to address the outcomes of an educational technology study. More typically, studies of educational technology seek to document the impacts of educational technology on measures of student learning, such as classroom or standardized tests.

To adequately estimate any potential impact of educational technology on student learning, all measures of educational outcomes must first be carefully defined and aligned with the specific uses and intended effects of a given educational technology. In other words, when examining the impact of educational technology on student learning, it is critical that the outcome measures assess the types of learning that may occur as a result of technology use and that those measures are sensitive enough to detect potential changes in learning that may occur. By federal law, all states currently administer grade-level tests to students in grades 3–8, in addition to state assessments across different high school grade levels and/or end-of-course tests for high school students. So, for many observers of educational technology programs, such state test results provide easily accessible educational outcomes. However, because most standardized tests attempt to measure a domain broadly, standardized test scores often do not provide measures that are aligned with the


provide valid measures of the types of learning that may likely occur when students and/or their teachers use computers.

For example, imagine a pilot setting where computers were used extensively in mathematics classes to develop students' understanding of graphing and spatial relationships but infrequently for other concepts. Although the state mathematics test may contain some items relating specifically to graphing and spatial relationships, it is likely that these two concepts will represent only a small portion of the assessment and would be tested using only a very limited number of items, if at all. As a result, researchers using the total math test score would be unlikely to observe any effects of computer use on these two concepts. However, our own research suggests that it may be possible to focus on those subsets of test items that specifically relate to the concepts in question.

In a recent study of the relationship between students' use of technology and their mathematics achievement, we used the state's mandatory Massachusetts Comprehensive Assessment System (MCAS) test as our primary outcome measure (O'Dwyer, Russell, Bebell, & Tucker-Seeley, 2005, 2008). Recognizing that the MCAS mathematics test assesses several different mathematics subdomains, we examined students' overall mathematics test score as well as their performance within five specific subdomains comprising the test:

Number sense and operations
Patterns, relationships, and algebra
Geometry
Measurement
Data analysis, statistics, and probability

Through these analyses, we discovered that the statistical models we constructed for each subdomain accounted for only a relatively small percentage of the total variance that was observed across students' test scores. Specifically, the largest percentage of total variance explained by any of our models occurred for the total test score (16%), whereas each subdomain score accounted for even less variance, ranging from 5% to 12% (O'Dwyer, Russell, Bebell, & Tucker-Seeley, 2008). In part, the low amount of variance accounted for by these models likely resulted from the relatively poor reliability of the subtest scores on the MCAS, as each subdomain was composed of a relatively small number of test items; the subdomain measures on the mathematics portion of the fourth grade MCAS test had lower reliability estimates than the test in total. Specifically, the Cronbach's alpha for the fourth grade MCAS total mathematics score was high at 0.86, but the reliabilities of the subdomain scores were generally lower, particularly for those subdomains measured with the fewest number of items. For example, the


of the reliabilities have important implications for this research, because the unreliability in the outcome variable likely makes it more difficult to isolate statistically significant relationships. In other words, despite our best efforts to examine specific types of impacts of educational technology using subsets of the total state assessment, we observed that serious psychometric limitations could result from insufficient numbers of test items in any one particular area.
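For readers who want to examine this issue in their own data, the sketch below computes Cronbach's alpha from a students-by-items score matrix and contrasts a long total test with a short subdomain. The simulated item responses are hypothetical; the alpha formula itself is standard.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_students, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
n_students = 500
ability = rng.normal(size=n_students)

def simulate_items(n_items, noise=1.0):
    # Hypothetical dichotomous items driven by a single ability factor.
    raw = ability[:, None] + rng.normal(scale=noise, size=(n_students, n_items))
    return (raw > 0).astype(int)

full_test = simulate_items(40)   # long total test
subdomain = simulate_items(5)    # short subdomain built from equally good items
print(f"alpha, 40-item total test: {cronbach_alpha(full_test):.2f}")
print(f"alpha,  5-item subdomain : {cronbach_alpha(subdomain):.2f}")
```

Even when the individual items are of identical quality, the short subdomain yields a noticeably lower alpha, which is the psychometric limitation described above.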

Thus, there are many challenges and considerations when measuring student achievement using state assessment scores, even when subdomains of the total test are aligned with practices. Rather than employing state test results, one alternate strategy is to develop customized tests that contain a larger number of items specifically aligned to the types of learning that the educational technology is designed to affect. Although it can be difficult to convince teachers and/or schools to administer an additional test, well-developed aligned assessments will likely result in more reliable scores and provide increased validity for inferences about the impacts of technology use on these concepts.

Paper versus Computer-Based Assessments

In addition to aligning achievement measures with the knowledge and skills students are believed to develop through the use of a given technology, it is also important to align the method used to measure student learning with the methods students are accustomed to using to develop and demonstrate their learning in the classroom. As an example, a series of experimental studies by Russell and colleagues provides evidence that most states' paper-based standardized achievement tests are likely to underestimate the performance of students who are accustomed to working with technology, simply because they do not allow students to use these technologies when being tested (Bebell & Kay, 2009; Russell, 1999; Russell & Haney, 1997; Russell & Plati, 2001). Through a series of randomized experiments, Russell and his colleagues provide empirical evidence that students who are accustomed to writing with computers in the classroom perform between 0.4 and 1.1 standard deviations higher when they are allowed to use a computer to perform tests that require them to compose written responses (Russell, 1999; Russell & Haney, 1997; Russell & Plati, 2001).

Other studies replicate similar results, further demonstrating the importance of aligning the mode of measurement with the tools students use (Horkay, Bennett, Allen, Kaplan, & Yan, 2006). One of our more recent studies focused on the impact of a pilot 1:1 laptop program across five middle schools on a variety of outcome measures, including students' writing skills (Bebell & Kay, 2010). Following two years of participation in technology-rich classrooms, seventh grade students were randomly selected to complete


essays composed on paper were collected before a team of trained readers transcribed and scored them. The results of this study found that students who used their laptops wrote longer essays (388 words compared to 302) and that these essays received higher scores than those of students responding to the same prompt and assessment using traditional paper and pencil (Bebell & Kay, 2009). These differences were found to be statistically significant, even after controlling for achievement using students' writing scores on the state test that was completed in a traditional testing environment. These results highlight the importance of the mode of measurement in studies looking to explore the impact of educational technology. Specifically, the mode of administration effect suggests that researchers studying the impact of educational technology are particularly at risk for underestimating the ability of technology-savvy students when they rely on paper-based assessment instruments as their outcome measures.
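The sketch below illustrates the general form of such an adjusted comparison, regressing an essay score on testing mode while controlling for a prior state writing score; the data frame, column names, and values are hypothetical, and the model is a simple OLS adjustment rather than the exact analysis reported in Bebell and Kay (2009).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level data: essay score, testing mode, and a prior
# state writing score used as a covariate.
data = pd.DataFrame({
    "essay_score":   [10, 12, 11, 14, 13, 9, 8, 10, 9, 11, 7, 12],
    "laptop":        [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],   # 1 = composed on laptop
    "state_writing": [48, 55, 50, 60, 57, 45, 47, 52, 44, 58, 40, 54],
})

# Does the laptop group score higher after adjusting for prior writing achievement?
model = smf.ols("essay_score ~ laptop + state_writing", data=data).fit()
print(model.params)    # adjusted laptop effect
print(model.pvalues)
```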

    The Hierarchical Nature of Educational Data

A final and related challenge to evaluating the effects of educational technology programs on teaching and learning is the inherent hierarchical nature of data collected from teachers and students in schools. Researchers, evaluators, and school leaders frequently overlook the clustering of students within teachers' classes and of teachers within schools as they evaluate the impact of technology programs. As a consequence, many studies of educational technology fail to properly account, both statistically and substantively, for the organizational characteristics and processes that mediate and moderate the relationship between technology use and student outcomes. At each level in an educational system's hierarchy, events take place and decisions are made that potentially impede or assist the events that occur at the next level. For example, decisions made at the district or school levels may have profound effects on the technology resources available for teaching and learning in the classroom. As such, researchers and evaluators of educational technology initiatives must consider the statistical and substantive implications of the inherent nesting of technology-related behaviors and practices within the school context.

From a statistical point of view, researchers have become increasingly aware of the problems associated with examining educational data using traditional analyses such as ordinary least-squares regression analysis or analysis of variance. Because educational systems are typically organized in a hierarchical fashion, with students nested in classrooms, classrooms nested in schools, and schools nested within districts, a hierarchical or multilevel approach to data analysis is often required (Burstein, 1980; Cronbach, 1976; Haney, 1980; Kreft & de Leeuw, 1998; Raudenbush & Bryk, 2002; Robinson, 1950). A hierarchical


advantages over traditional analyses. First, the approach allows for the examination of the relationship between technology use and the outcome variable as a function of classroom, teacher, school, and district characteristics. Second, the approach allows the relationship between technology use and the outcome to vary across schools and permits modeling of the variability in the relationship. Third, differences among students in a classroom and differences among teachers can be explored at the same time, therefore producing a more accurate representation of the ways in which technology use may be related to improved educational outcomes (Goldstein, 1995; Kreft & de Leeuw, 1998; Raudenbush & Bryk, 2002).
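As a concrete (if simplified) example of such an approach, the sketch below fits a two-level mixed-effects model with students nested in classrooms and a random slope for technology use; the simulated data and column names are hypothetical, and a full multilevel analysis would typically add teacher-, school-, and district-level predictors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical two-level data: 20 classrooms x 25 students per classroom.
rng = np.random.default_rng(1)
n_class, n_per = 20, 25
classroom = np.repeat(np.arange(n_class), n_per)
class_effect = rng.normal(scale=3.0, size=n_class)[classroom]   # classroom-level variation
tech_use = rng.uniform(0, 180, size=n_class * n_per)            # instructional uses per year
prior_ach = rng.normal(50, 10, size=n_class * n_per)
score = (200 + 0.05 * tech_use + 1.5 * prior_ach + class_effect
         + rng.normal(scale=8, size=n_class * n_per))

students = pd.DataFrame({"classroom": classroom, "tech_use": tech_use,
                         "prior_ach": prior_ach, "score": score})

# Random intercept for classroom, plus a random slope so the technology use /
# achievement relationship is allowed to vary across classrooms.
model = smf.mixedlm("score ~ tech_use + prior_ach", data=students,
                    groups=students["classroom"], re_formula="~tech_use")
result = model.fit()
print(result.summary())
```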

To date, only a handful of published studies in educational technology research have applied a hierarchical data analysis approach. For example, using data collected from both teachers and students in 55 intact fourth grade classrooms, O'Dwyer and colleagues published the findings from studies they conducted to examine the impacts of educational technology (O'Dwyer, Russell, & Bebell, 2004; O'Dwyer, Russell, Bebell, & Tucker-Seeley, 2005, 2008). Capitalizing on the hierarchical structure of the data, the authors were able to disentangle the student, teacher, and school correlates of technology use and achievement. For example, the authors found that when teachers perceived pressure from their administration to use technology and had access to a variety of technology-related professional development opportunities, they were more likely to use technology for a variety of purposes. Conversely, when schools or districts enforced restrictive policies around using technology, teachers were less likely to integrate technology into students' learning experiences (O'Dwyer, Russell, & Bebell, 2004). Looking at the relationship between student achievement on a state test and technology use, the authors found weak relationships between school and district technology-related policies and students' scores on the ELA and mathematics assessments (O'Dwyer, Russell, Bebell, & Tucker-Seeley, 2005, 2008). Of course, as discussed previously, the lack of an observed relationship may be due, in this case, to the misalignment and broad nature of the state test compared to the specific skills affected by technology use.

More recently, a large-scale quasi-experimental study of Texas' 1:1 laptop immersion pilot program employed a three-level hierarchical model to determine the impacts of 1:1 technology immersion across three cohorts of middle school students on the annual Texas Assessment of Knowledge and Skills (TAKS) assessment (Shapley, Sheehan, Maloney, & Caranikas-Walker, 2010). Using this approach, the authors found that teachers' technology implementation practices were unrelated to students' test scores, whereas students' use of technology outside of school for homework was a positive predictor. In sum, researchers and evaluators must pay close attention


nuanced and realistic representation of how technology use is related to important educational outcomes.

    Discussion/Conclusions

This paper explores some of the common methodological limitations that can pose significant challenges in the field of educational technology research. Individually, each of these concerns and limitations could undermine a study or investigation. Collectively, these limitations can severely limit the extent to which research and evaluation efforts can inform the development and refinement of educational technology programs. The overall lack of methodological precision and validity is of particular concern, given the considerable federal, state, and local investments in school-based technologies as well as the current emphasis on quantitative student outcomes. Many of these limitations contribute to the shortage of high-quality empirical research studies addressing the impacts of technology in schools. Currently, decision makers contemplating the merits of educational technology are often forced to make decisions about the expenditure of millions of dollars with only weak and limited evidence on the effects of such expenditures on instructional practices and student learning.

With the rising interest in expanding educational technology access, particularly through 1:1 laptop initiatives, the psychometric and methodological weaknesses inherent in the current generation of research result in studies that (a) fail to capture the nuanced ways laptops are being used in schools and (b) fail to align outcome measures with the types of student learning the technology is intended to affect. Beyond documenting that use of technology increases when laptops are provided at a 1:1 ratio, the current research tools used to study such programs often provide inadequate information about the extent to which technology is used across the curriculum and how these uses may affect student learning.

Although this paper outlines a number of common methodological weaknesses in educational technology research, the current lack of high-quality research is undoubtedly a reflection of the general lack of support provided for researching and evaluating technology in schools. Producing high-quality research is an expensive and time-consuming undertaking that is often beyond the resources of most schools and individual school districts. At the state and federal level, vast amounts of funds are expended annually on educational technology and related professional development, yet few, if any, funds are earmarked to research the effects of these massive investments. For example, the State of Maine originally used a $37.2 million budget surplus to provide all seventh and eighth grade students and teachers


was allocated for research and evaluation. A surprising number of educational technology investments of similar stature have had even fewer funds devoted to their study.

Recognizing that collecting research in educational settings will always involve compromises and limitations imposed by scarce resources, we suggest that extensive opportunities currently exist to improve data collection and analysis within the structure of existing research designs. Just as technology has transformed the efficiency of commerce and communication, we feel that technology can provide many opportunities to advance the art and science of educational research and measurement. Given that educational technology research typically occurs in educational settings with enhanced technology access and capacity, there is a conspicuously untapped opportunity to employ technology-based tools to enhance research conducted in these high-tech settings. In other words, the educational technology research community is uniquely situated to pioneer technology-enhanced research. However, given the budget limitations and real-world constraints associated with any educational technology research or evaluation study, it is not surprising that so few have capitalized on technology-rich settings. For example, although Web-based surveys have become commonplace over the past decade, few represent anything more than a computer-based representation of a traditional paper-and-pencil survey.

In our own work, we have devised new solutions to overcome the obstacles encountered while conducting research in schools by capitalizing on those technologies increasingly available in schools. In this article, we have specifically shared some of the techniques and approaches that we have developed over the course of numerous studies in a wide variety of educational settings. For example, we have found the visual analog scale to be an improvement over our past efforts to quantify the frequency of technology use via survey. Similarly, we have shared other examples of our struggles and successes in measuring the impact of educational technology practices on student achievement. The examples from the literature and our own examples both serve to underscore how quickly things can change when examining technology in education. For example, the ways that teachers use technology to support their teaching have evolved rapidly, as has student computer access in school and at home. In the coming decades, educators will undoubtedly continue to explore new ways digital-age technologies may benefit teaching and learning, potentially even faster than we have previously witnessed, as the relative costs of hardware continue to decrease while features and applications increase. Similarly, the field of assessment continues to evolve as schools


future, new opportunities will exist for researchers and evaluators to provide improved services and reflective results to educators and policy makers.

In closing, it is our hope that the issues this article raises and the specific examples it includes spur critical reflection on some of the details important to data collection and educational technology research. In addition, we hope our own examples reported here also serve to encourage others to proactively develop and share what will be the next generation of research tools. Indeed, as technology resources continue to expand and as digital data collection grows increasingly mainstream, we look forward to welcoming a host of new applications of technology resources for improving educational research and measurement.

Acknowledgments

Some of the research summarized in this paper was supported and conducted under the Field Initiated Study Grant Program, PR/Award Number R305T010065, as administered by the Office of Educational Research and Improvement, U.S. Department of Education. The findings and opinions expressed in this report do not reflect the positions or policies of the Office of Educational Research and Improvement or the U.S. Department of Education.

Author Notes

Damian Bebell is an assistant research professor at Boston College's Lynch School of Education and a research associate at the Technology and Assessment Study Collaborative. He is currently directing multiple research studies investigating the effects of 1:1 technology programs on teaching and learning, including collaborative research with the Boston Public Schools and the Newton Public Schools. His research interests include the development and refinement of methodological tools to document the impacts of educational technology on learning, education reform, testing, and 1:1 computing. Correspondence regarding this article should be addressed to Damian Bebell, 332 Campion Hall, Boston College, Chestnut Hill, MA 02467. E-mail: [email protected]

Laura M. O'Dwyer is an assistant professor in the Lynch School of Education at Boston College and has contributed to numerous studies that examined issues such as the relationship between tracking practices and mathematics achievement, the impact of a technology-infused professional development program on student and teacher outcomes, the effects of a capacity-building online professional development program on teacher practice, and the relationship between the organizational characteristics of schools and teachers' use of technology as a teaching and learning tool. Correspondence regarding this article should be addressed to Laura O'Dwyer, 332 Campion Hall, Boston College, Chestnut Hill, MA 02467. E-mail: [email protected]

Michael Russell is an associate professor in Boston College's Lynch School of Education, a senior research associate for the Center for the Study of Testing, Evaluation, and Educational Policy, and the director of the Technology and Assessment Study Collaborative. He directs several projects


technology, learning, and assessment and include applications of technology to testing and impacts of technology on students and their learning. Correspondence regarding this article should be addressed to Michael Russell, 332 Campion Hall, Boston College, Chestnut Hill, MA 02467. E-mail: [email protected]

Tom Hoffmann is a research associate at Boston College interested in interface design with a focus on Universal Design principles and usability. He oversees the interface design and production of inTASC research projects involving laptop and Internet data collection. Correspondence regarding this article should be addressed to Tom Hoffmann, 332 Campion Hall, Boston College, Chestnut Hill, MA 02467. E-mail: [email protected]


