
Experimental Research Overview


DESCRIPTION

PowerPoint presentation created for a graduate course in Research Methodologies. Very wordy and not my usual style, but there was too much information to include to do much style-wise.


Page 1: Experimental Research Overview

Experimental Research
by Mary Macin


Page 2: Experimental Research Overview

Experimental Research vs. Other Methods

❖ Can test for cause/effect relationships
❖ Manipulation of independent variable(s)

Simply put: decisions about the forms and values of the IV, as well as about which group receives which treatment, are at the sole discretion of the researcher.


Page 3: Experimental Research Overview

Variables in Experimental Research

❖ Independent Variable:
❖ Experimental variable, cause, or treatment
❖ The activity or characteristic the researcher believes makes a difference

❖ Dependent Variable:
❖ Criterion variable, effect, or posttest
❖ Outcome of the study
❖ Difference in group(s) that occurs as a result of the manipulation of the IV
❖ Only constraint: must represent a measurable outcome


Page 4: Experimental Research Overview

Characteristics of Experimental Research

❖ Demanding & productive, but...
❖ Produces the soundest evidence of hypothesized cause-effect relationships

❖ Difference between correlational & experimental research:
❖ Correlational research can be used to predict a specific score for a specific individual
❖ Experimental research predicts more global results*


Page 5: Experimental Research Overview

Steps in Experimental Research Study

1. Select and define problem.

2. Select subjects and [measurement] instruments.

3. Select design.

4. Execute procedures.

5. Analyze data.

6. Formulate conclusions.


Page 6: Experimental Research Overview

Role of the Researcher

❖ Forms or selects groups
❖ Decides what will happen to each group
❖ Attempts to control all variables and factors
❖ Observes and measures the effect on the groups

Every effort is made to make sure the 2 groups have equivalent variables—except for the independent variable.


Page 7: Experimental Research Overview

Two Groups

❖ Experimental Group
❖ Receives the new treatment being investigated

❖ Control Group
❖ Receives a different treatment; or
❖ Receives the same treatment as usual (i.e., is left alone)

The Control Group is needed in order to identify/measure any differences observed as a result of the differing treatments


Page 8: Experimental Research Overview

Potential Issues in Experimental Research

❖ Experimental treatment not given adequate time to take effect
❖ The experimental group should be exposed to the treatment for a long enough period of time for the treatment to work

❖ Treatments received by the 2 groups are not “different enough”
❖ No difference between the groups will be found if the experimental treatment and the control treatment are too similar


Page 9: Experimental Research Overview

Experimental Validity

❖ Experiments are considered valid if:
❖ The results obtained are due only to the manipulation of the independent variable

❖ Two conditions must be met:
❖ The experiment has internal validity
❖ The experiment has external validity


Page 10: Experimental Research Overview

Internal Validity

❖ Observed differences on the dependent variable are the direct result of the researcher’s manipulation of the independent variable.

❖ Campbell & Stanley (1971) identified 8 threats to internal validity:
❖ History - becomes more likely the longer a study lasts; caused by external events.

❖ Maturation - physical/mental changes occurring in subjects over time; more likely to occur when study is extended over a long period of time.

❖ Testing (pretest sensitization) - result of higher scores on a posttest due to participants having taken a pretest; unlike above, more likely to occur when there are short intervals between testing.

❖ Instrumentation - lack of consistency in measuring instruments; unreliable data collection leads to invalid results.

❖ Statistical Regression - tendency for some scores to move towards the mean score; participants who score the highest and lowest on a pretest are more likely to score lower and higher (respectively) on a posttest.

❖ Differential Selection of Subjects - differences already present between two pre-formed groups could account for differences in posttest results.

❖ Mortality (attrition) - occurs most often in long-term studies; refers to participants who drop out of a group potentially sharing some characteristic that affects the significance of the study.*

❖ Selection-Maturation Interaction, Etc. - if pre-formed groups are used, one group may be at an advantage or disadvantage due to maturation factors; the “etc.” refers to the fact that selection can also interact in this way with other factors such as history, testing, and instrumentation.


Page 11: Experimental Research Overview

External Validity

❖ Results of the experiment are generalizable to groups and environments outside of the experiment; results of the study can be reconfirmed with other groups, in other settings, and at other times (if the conditions are similar to those present in the experiment).

❖ Bracht & Glass (1968) identified 6 threats to external validity:
❖ Pretest-Treatment Interaction - participants react differently to a treatment because they have been pretested; the pretest may alert participants to the make-up of the treatment; therefore, results can only be generalized to other pretested groups.

❖ Multiple-Treatment Interference - the same participants receive more than one treatment in succession; effects carry over from the first treatment, making it hard to determine the effectiveness of the second treatment.

❖ Selection-Treatment Interaction - occurs when participants are not randomly selected for the treatments they receive; can occur when participants are a pre-formed group or an individual; limits the generalizability of the results.

❖ Specificity of Variables - does not depend on the experimental design chosen; threatens validity when a study is conducted:
❖ with a specific kind of subject;
❖ based on a particular definition of the independent variable;
❖ using specific measuring instruments;
❖ at a specific time; and
❖ under a specific set of circumstances.

❖ Experimenter Effects - experimenter unintentionally affects the implementation of the study’s procedures, the behavior of the participants, or the assessment of participant behavior, thereby affecting the results of the study.

❖ Reactive Arrangements - factors associated with how a study is conducted effectively influence the feelings and attitudes of the participants; affects generalizability of the results.


Page 12: Experimental Research Overview

Extraneous Variables

❖ The control of extraneous variables is vital to the success of an experiment.

❖ Extraneous variables can be controlled through:
❖ Randomization - subjects should be randomly selected for participation and randomly assigned to groups; random selection should be attempted whenever possible (see the sketch after this list)

❖ Matching - the researcher pairs up participants with matching (similar) scores or characteristics (gender, IQ, location), then randomly assigns the members of each pair to different groups; this ensures that two participants with matching IQ scores are not in the same group

❖ Comparing homogeneous groups or subgroups - group participants according to their similarity/fit within a variable subgroup (IQ, SAT score); randomly assign half of the subgroup to the experimental group and the other half to the control group

❖ Using subjects as their own controls - the same participants get both treatments (one treatment at a time); controls for participant differences; can result (negatively) in carry-over effects between the treatments

❖ Analysis of covariance - statistically equate randomly formed groups on a particular variable; can be used to adjust for large differences in pretest scores between groups
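To make the first two control techniques concrete, here is a minimal sketch in Python (not from the original slides; the participant IDs and IQ scores are made up) showing simple random assignment and matched-pair assignment:

# Illustrative sketch (not from the slides): random assignment and matched-pair
# assignment using only the Python standard library. Participant data are invented.
import random

participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]
iq_scores = {"P01": 98, "P02": 101, "P03": 112, "P04": 95,
             "P05": 110, "P06": 99, "P07": 103, "P08": 96}

# Randomization: shuffle the participants, then split the list in half.
random.seed(42)  # fixed seed so the example is reproducible
shuffled = participants[:]
random.shuffle(shuffled)
experimental = shuffled[: len(shuffled) // 2]
control = shuffled[len(shuffled) // 2 :]
print("Random assignment:", experimental, control)

# Matching: sort by IQ, pair adjacent scores, then randomly send one member of
# each pair to each group so matched participants never end up in the same group.
by_iq = sorted(participants, key=iq_scores.get)
exp_matched, ctrl_matched = [], []
for a, b in zip(by_iq[0::2], by_iq[1::2]):
    first, second = random.sample([a, b], 2)
    exp_matched.append(first)
    ctrl_matched.append(second)
print("Matched-pair assignment:", exp_matched, ctrl_matched)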


Page 13: Experimental Research Overview

Group Designs

❖ Two classes of experimental designs:
❖ Single-Variable: one independent variable; the IV is manipulated
❖ Three types—
❖ Pre-experimental
❖ True experimental*
❖ Quasi-experimental
❖ Factorial: two or more independent variables; at least one IV is manipulated
❖ An elaboration of single-variable designs;
❖ Investigates each variable independently and in interaction with other variables;
❖ Sky’s the limit**


Page 14: Experimental Research Overview

Pre-Experimental Designs

❖ One-Shot Case Study —
❖ One group exposed to one treatment, then given a posttest
❖ Don’t know the group’s level of knowledge before the treatment!
❖ Sources of invalidity are not controlled!

❖ One-Group Pretest-Posttest Design —
❖ One group pretested, exposed to one treatment, then posttested
❖ Still a number of factors affecting validity that are not controlled!
❖ Other factors may influence any differences observed between the pretest and posttest

❖ Static-Group Comparison —
❖ At least two groups; the first receives the new treatment, the second receives the usual treatment; both are posttested
❖ Purpose of the control group is to show how the experimental (first) group would have performed had they not received the new treatment
❖ Effective only to the degree that the two groups are equal to each other


Page 15: Experimental Research Overview

True Experimental Designs

❖ Pretest-Posttest Control Group Design —
❖ At least two randomly assigned groups; both are pretested on the dependent variable; one group then receives the new treatment; then both groups are posttested
❖ Internal invalidity fully controlled by: random assignment, pretesting, & inclusion of a control group
❖ Potential risk of interaction between the pretest and the treatment*

❖ Posttest-Only Control Group Design —
❖ Same as the pretest-posttest design, except there is no pretest
❖ Subjects randomly assigned; exposed to the independent variable; then posttested
❖ Mortality is not controlled for (no pretest), but may not be a problem anyway

❖ Solomon Four-Group Design — (a layout sketch follows this list)
❖ Random assignment of participants to one of four groups
❖ Two groups are pretested; two groups are not pretested
❖ One pretested group & one unpretested group receive the experimental treatment
❖ All four groups are posttested
❖ A combination of the two designs above - eliminates both sources of internal invalidity!
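For illustration only, the following Python sketch (not from the original slides; the participant IDs and group sizes are hypothetical) lays out the Solomon four-group structure and randomly assigns participants to the four conditions:

# Illustrative sketch (not from the slides): the Solomon four-group layout, with
# participants randomly assigned to one of four conditions.
import random

layout = {
    "Group 1": {"pretest": True,  "treatment": True},    # pretest, treatment, posttest
    "Group 2": {"pretest": True,  "treatment": False},   # pretest, no treatment, posttest
    "Group 3": {"pretest": False, "treatment": True},    # treatment, posttest
    "Group 4": {"pretest": False, "treatment": False},   # posttest only
}

participants = [f"P{i:02d}" for i in range(1, 21)]        # 20 hypothetical subjects
random.shuffle(participants)
groups = {name: participants[i::4] for i, name in enumerate(layout)}

for name, members in groups.items():
    plan = layout[name]
    steps = ["pretest" if plan["pretest"] else "(no pretest)",
             "treatment" if plan["treatment"] else "(no treatment)",
             "posttest"]
    print(f"{name}: {members} -> {', '.join(steps)}")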


Page 16: Experimental Research Overview

Quasi-Experimental Designs

❖ Nonequivalent Control Group Design —
❖ Two or more existing groups are pretested, administered a treatment, and posttested
❖ Participants’ assignment to groups is not random; assignment of treatments to groups is random
❖ Invalidity sources include: regression and selection-treatment interactions (maturation, history, and testing)

❖ Time-Series Design —
❖ One group is repeatedly pretested, administered the treatment, then repeatedly posttested
❖ An elaboration of the one-group pretest-posttest design; involves testing (pre- and post-) more than once
❖ Advantage lies in the confidence gained through significant improvement of group scores between pretests and posttests

❖ Counterbalanced Designs — (see the sketch after this list)
❖ All groups receive all treatments; each group receives the treatments in a different order than the others
❖ Any number of groups can be involved, limited only by the number of treatments; # of groups = # of treatments
❖ The order in which each group receives the treatments is determined randomly; each group is posttested following each treatment
❖ A pretest is usually not possible and/or feasible; often used on existing groups
❖ Weakness lies in the potential for multiple-treatment interference; thus, should only be used when this is not a concern
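A minimal sketch of counterbalancing, assuming hypothetical treatment labels A-D (this example is not from the original slides): a cyclic rotation yields as many orders as treatments, with every treatment appearing once in each position, and the orders are then shuffled across groups.

# Illustrative sketch (not from the slides): generating counterbalanced treatment
# orders. Cyclic rotations give as many orders as there are treatments, with every
# treatment appearing once in every position; the rows are then shuffled before
# being handed to the groups.
import random

treatments = ["A", "B", "C", "D"]          # hypothetical treatment labels
n = len(treatments)                        # number of groups == number of treatments

orders = [treatments[i:] + treatments[:i] for i in range(n)]  # cyclic rotations
random.shuffle(orders)                     # randomize which group gets which order

for group, order in enumerate(orders, start=1):
    print(f"Group {group} receives treatments in order: {' -> '.join(order)}")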


Page 17: Experimental Research Overview

Factorial Designs

❖ Two or more independent variables; at least one is manipulated by researcher

❖ Term “factorial” comes from the use of multiple variables with multiple levels

❖ 2 x 2 factorial design* (a minimal analysis sketch follows this list)
❖ Can get very complicated (2 x 3, 3 x 2, etc.)!

❖ Often employed after using a single-variable design;
❖ “Variables do not operate in isolation”

❖ Studies how variables behave at different levels**
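As a hedged illustration of a 2 x 2 factorial analysis (not from the original slides; it assumes pandas and statsmodels are installed, and the variable names and scores are invented), a two-way ANOVA tests both main effects and their interaction:

# Illustrative sketch (not from the slides): a 2 x 2 factorial analysis on made-up
# data. Two independent variables (method: lecture/online, pacing: self/fixed) and
# one dependent variable (score); the interaction term shows whether the variables
# behave differently in combination.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "method": ["lecture"] * 4 + ["online"] * 4,
    "pacing": ["self", "self", "fixed", "fixed"] * 2,
    "score":  [78, 82, 70, 74, 88, 91, 72, 69],   # hypothetical posttest scores
})

model = ols("score ~ C(method) * C(pacing)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # F tests for each main effect + interaction

A significant interaction term would indicate that the effect of one variable depends on the level of the other, which is exactly what a single-variable design cannot reveal.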


Page 18: Experimental Research Overview

Single-Subject Experimental Designs

❖ Also referred to as “single-case experimental designs”
❖ Used when the sample size = 1, or for multiple individuals considered as one group
❖ A variation of the time-series design

❖ Typically used as a study of behavioral change in an individual

❖ Participant is own control; exposed to both nontreatment & treatment phases;

❖ Individual’s performance measured repeatedly during all phases

❖ Nontreatment phase = A; Treatment phase = B


Page 19: Experimental Research Overview

Validity in Single-Subject Experiments

❖ External Validity
❖ Frequent criticism due to lack of generalizability
❖ Can be counteracted through replication

❖ Internal Validity
❖ Repeated and Reliable Measurement
❖ If results are to be trusted, the treatment must follow the exact same procedures every time
❖ Baseline Stability
❖ Provides the basis for assessing the effectiveness of the treatment; must take enough baseline measurements to establish a pattern*
❖ The Single-Variable Rule
❖ Only one variable should be manipulated at any one time!


Page 20: Experimental Research Overview

Types of Single-Subject Designs

❖ A-B-A Withdrawal Designs -- (a small data sketch follows this list)
❖ The A-B* Design
❖ Baseline stability is established; the treatment is then given
❖ Improvement during treatment = effectiveness of treatment
❖ The A-B-A Design
❖ Adds a second baseline phase to the A-B design
❖ Improves validity IF behavior improves during the B phase, and subsequently deteriorates during the second A phase
❖ The A-B-A-B Design
❖ Adds a second treatment phase to the A-B-A design
❖ Could add strength to the experiment IF behavior improves during treatment twice!
❖ Eliminates the ethical concern of the A-B-A design (ending with the participant not receiving a potentially effective treatment)
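A small sketch of how A-B-A data might be summarized (not from the original slides; the frequency counts are invented): behavior is measured repeatedly in each phase, and a treatment effect is suggested when the level rises during B and returns toward baseline during the second A phase.

# Illustrative sketch (not from the slides): summarizing made-up A-B-A data.
# Repeated measurements are taken in each phase; the phase means hint at whether
# behavior changed with the treatment and reverted when it was withdrawn.
from statistics import mean

phases = {
    "A  (baseline)":   [4, 5, 4, 5, 4],     # hypothetical frequency counts
    "B  (treatment)":  [8, 9, 10, 9, 11],
    "A2 (withdrawal)": [5, 4, 5, 4, 4],
}

for phase, scores in phases.items():
    print(f"{phase}: mean = {mean(scores):.1f}  observations = {scores}")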


Page 21: Experimental Research Overview

Types of Single-Subject Designs (cont’d)

❖ Multiple-Baseline Designs
❖ An alternative to the A-B design
❖ Used when the treatment cannot be withdrawn, or when it would be unethical to do so
❖ Three basic types: across behaviors, across subjects, and across settings*

❖ Alternating Treatments Design
❖ The only valid design for assessing the effectiveness of 2+ treatments in a single-subject context
❖ Rapid alternation of treatments for a single subject
❖ Treatments are alternated randomly
❖ Notice: no withdrawal phase, no baseline phase
❖ Allows for the study of multiple treatments quickly and efficiently
❖ Could introduce multiple-treatment interference


Page 22: Experimental Research Overview

Data Analysis/Interpretation

❖ Typically involves graphically-represented results

❖ Design must be evaluated for adequacy; then treatment effectiveness is assessed

❖ Clinical Significance vs. Statistical Significance

❖ t and F tests can be used to test for statistical significance
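As a minimal illustration of the t test mentioned above (not from the original slides; it assumes SciPy is installed and uses invented posttest scores), an independent-samples t test compares the experimental and control groups:

# Illustrative sketch (not from the slides): an independent-samples t test on
# made-up posttest scores. Statistical significance alone is not clinical
# significance, so the raw mean difference is printed as well.
from statistics import mean
from scipy import stats

experimental = [82, 88, 75, 91, 84, 79, 86]   # hypothetical posttest scores
control      = [74, 70, 78, 69, 80, 72, 71]

t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"Mean difference = {mean(experimental) - mean(control):.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")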


Page 23: Experimental Research Overview

Replicating Results

❖ As results are replicated, confidence in the procedures used grows
❖ Direct replication
❖ Replication by the same investigator in the same setting
❖ [Note] the same or different participants may be used
❖ Simultaneous replication
❖ Same problem, same location, and same time
❖ Systematic replication
❖ Direct replication with different investigators, behaviors, or settings
❖ Clinical replication
❖ A treatment package with 2+ treatments*
❖ Designed for participants with complex behavior disorders


Page 24: Experimental Research Overview

Example of Experimental Research

❖ Brain-Computer Interface Project
❖ University of Illinois at Urbana-Champaign
❖ Collected brain signals through EEG
❖ Used one group of 9 individuals
❖ Allowed a “practice” session before testing, but no pretest was conducted


Page 25: Experimental Research Overview

Infamous Cases of Unethical Research

❖ Tuskegee Syphilis Study (1932-1972)
❖ Nearly 400 African-American men with syphilis were studied without being treated
❖ Study conducted by the Public Health Service
❖ Led to the 1979 Belmont Report (the modern foundation for ethical research on human subjects)

❖ Milgram Obedience to Authority Study (began 1961; made public 1963)
❖ Residents of New Haven, CT were recruited to participate in a study of “memory and learning”
❖ Participants were asked to inflict electric shocks of increasing voltage based on a “learner’s” incorrect answers (maximum of 450 volts)
❖ Study conducted at Yale University; intended to determine whether ordinary people would follow orders they considered immoral (i.e., the Nazi Holocaust/Adolf Eichmann)

❖ Stanford Prison Experiment (1971)
❖ 24 students were selected and assigned to the roles of “prisoner” or “guard”; 9 “guards” were assigned to 3 shifts
❖ Shut down after 6 days (originally intended to run 2 weeks) due to a deterioration of the experiment’s conditions and structure
❖ Both prisoners and guards adapted to their given roles--guards becoming authoritarian and prisoners becoming passive


Page 26: Experimental Research Overview

References

Gay, L. R. (1996). Educational research: Competencies for analysis and application (5th ed.). Englewood Cliffs, NJ: Merrill.

Milgram experiment. (2011, February 7). In Wikipedia, The Free Encyclopedia. Retrieved from http://en.wikipedia.org/w/index.php?title=Milgram_experiment&oldid=412574744.

Stanford prison experiment. (2011, February 11). In Wikipedia, The Free Encyclopedia. Retrieved from http://en.wikipedia.org/w/index.php?title=Stanford_prison_experiment&oldid=413232983.

Omar, C., Akce, A., Johnson, M., Bretl, T., Rui, M., & Maclin, E. (2011). A feedback information-theoretic approach to the design of brain-computer interfaces. International Journal of Human-Computer Interaction, 27(1), 5-23. doi:10.1080/10447318.2011.535749

Tuskegee syphilis experiment. (2011, February 3). In Wikipedia, The Free Encyclopedia. Retrieved from http://en.wikipedia.org/w/index.php?title=Tuskegee_syphilis_experiment&oldid=411791432.
