Early Childhood Outcomes Center

Do My Data Count? Questions and Methods for Monitoring and Improving our Accountability Systems
Dale Walker, Sara Gould, Charles Greenwood and Tina Yang
University of Kansas, Early Childhood Outcomes Center (ECO)
Marguerite Hornback, Kansas Leadership Project, 619 Liaison
Marybeth Wells, Idaho 619 Coordinator
Acknowledgement: Thanks are due to our Kansas colleagues who assisted with the development, administration and analysis of the COSF Survey and team process videos, and to the Kansas Part C and Kansas and Idaho Part B professionals who participated in the COSF process.
Appreciation is also extended to our ECO and Kansas colleagues for always posing the next question.
Purpose of this Presentation
- Explore a range of questions to assist states in establishing the validity of their accountability systems
- Illustrate with state examples how outcome data may be analyzed
- Discuss ways to gather, interpret, and use evidence to improve accountability systems
- Information synthesized from the Guidance Document on Child Outcomes Validation, to be distributed soon!
Validity of an Accountability System
An accountability system is valid when evidence is strong enough to conclude:
- The system is accomplishing what it was intended to accomplish and not leading to unintended results
- System components are working together toward accomplishing the purpose
What is Required to Validate our Accountability Systems?
- Validity requires answering a number of logical questions demonstrating that the parts of the system are working as planned
- Validity is improved by ensuring the quality and integrity of the parts of the system
- Validity requires continued monitoring, maintenance, and improvement
Some Important Questions for Establishing the Validity of an Accountability System
- Is fidelity of implementation of measures high?
- Are measures sensitive to individual child differences and characteristics?
- Are the outcomes related to measures?
- What are the differences between entry and exit data?
- Are outcomes sensitive to change over time?
- Are those participating in the process adequately trained?
What Methods can be used to Assess System Fidelity?
- COSF ratings and rating process (including types of evidence used, e.g., parent input)
- Team characteristics of those determining ratings
- Meeting characteristics or format
- Child characteristics
- Demographics of programs or regions
- Decision-making processes
- Training information
- Comparing ratings over time
Fidelity: Analysis of Process to Collect Outcomes Data: Video Analysis
- Video observation
  - 55 volunteer teams in KS submitted team meeting videos and matching COSF forms for review
  - Sample was intended to be representative of the state
- Videos coded for:
  - Team characteristics
  - Meeting characteristics
  - Evidence used
  - Tools used (e.g., ECO decision tree)
Fidelity: Analysis of Process to Collect Data Using Surveys
- Staff surveys
  - Presented and completed online using Survey Monkey
  - 279 were completed
  - Analyzed by research partners
- May be summarized using Survey Monkey or another online data system
Fidelity: Analysis of Process to Collect Data Using State Databases
- Kansas provided Part C and Part B data
- Idaho provided Part B data
- Data included: COSF ratings, OSEP categories, child characteristics
Fidelity: Types of Evidence Used in COSF Rating Meetings (videos only)
- Child strengths (67-73% across outcome ratings)
- Child areas to improve (64-80%)
- Observations by professionals (51-73%)
Fidelity: Types of Evidence Used in COSF Rating Meetings (videos and surveys)
- Assessment tools
  - Video: 55% used for all 3 ratings
  - Survey: 53% used one of Kansas' most common assessments
- Parent input incorporated
  - Video: 47%
  - Survey: 76%
    - 39% contribute prior to the meeting
    - 9% rate separately
    - 22% attend the COSF rating meeting
Fidelity: How can we interpret this information?
- Assessment use
  - About half are consistently using a formal set of questions to assess child functioning
- Parent involvement
  - Know how much to emphasize in training
  - Help teams problem-solve to improve parent involvement
Fidelity: Connection between COSF and Discussion (Video)
- 67% documented assessment information but did not discuss results during meetings
- 44% discussed observations during meetings but did not document them in paperwork
How information about the Process has informed QA activities
Used to improve the quality of the process:
- Refine the web-based application fields
- Improve training and technical assistance
- Refine research questions
- Provide valid data for accountability and program improvement
Are Measures Sensitive to Individual and Group Differences and Characteristics?
- An essential feature of measurement is sensitivity to individual differences in child performance
- Child characteristics: principal exceptionality, gender
- Program or regional differences
Frequency Distribution for one state's three OSEP Outcomes for Part B Entry
[Bar chart: percentage of children (0-25%) at each COSF rating (1-7) for each of the three outcomes]
Frequency Distribution for one state's three OSEP Outcomes for Part C Entry
[Bar chart: percentage of children (0-25%) at each COSF rating (1-7) for each of the three outcomes]
Interpreting Entry Rating Distributions
Entry rating distributions:
- If sensitive to differences in child functioning, should have children in every category
- Should have more kids in the middle than at the extremes (1s and 7s)
- 1s should reflect very severe exceptionalities
- 7s are kids functioning at age level with no concerns; there shouldn't be many receiving services
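This check is easy to run on a list of entry ratings: tabulate the percentage of children at each rating and confirm that every category is populated, with most children in the middle. A minimal Python sketch; the sample ratings below are invented for illustration, not state data.

```python
from collections import Counter

def rating_distribution(ratings):
    """Percentage of children at each COSF rating, 1 through 7."""
    counts = Counter(ratings)
    n = len(ratings)
    return {r: round(100.0 * counts.get(r, 0) / n, 1) for r in range(1, 8)}

# Invented entry ratings, not state data
entry = [3, 4, 4, 5, 2, 6, 4, 3, 5, 1]
dist = rating_distribution(entry)
# A sensitive measure should show children in every category,
# with more mass in the middle than at the extremes (1s and 7s).
```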
Social Entry Rating by State

[Bar chart: Social Entry Rating Percentage by State; percentage (0-30%) at each social entry rating (1-7), KS vs. ID]
Interpreting Exit Ratings
Exit ratings:
- If the distribution stays the same as at entry, children are gaining at the same rate as typical peers, but not catching up
- If the distribution moves "up" (numbers get higher), children are closing the gap with typical peers
- If ratings are still sensitive to differences in functioning, there should still be variability across ratings
Interpreting Social Exit Ratings
[Bar chart: Social Exit Rating Percentage by State; percentage (0-40%) at each social exit rating (1-7), KS vs. ID]
How can we interpret changes in ratings over time?
- Difference = 0: not gaining on typical peers, but still gaining skills
- Difference > 0: gaining on typical peers
- Difference < 0: falling farther behind typical peers
- Would expect to see more of the first two categories than the last if the system is effectively serving children
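The exit-minus-entry logic above is simple enough to sketch directly. The function name and the entry/exit pairs below are hypothetical, not part of any ECO tool.

```python
def classify_progress(entry_rating, exit_rating):
    """Classify a child's change in COSF rating relative to typical peers."""
    diff = exit_rating - entry_rating
    if diff > 0:
        return "gaining on typical peers"
    if diff == 0:
        return "gaining skills at the same rate, but not catching up"
    return "falling farther behind typical peers"

# Tally the three categories over a cohort (invented entry/exit pairs)
pairs = [(3, 5), (4, 4), (2, 5), (5, 4), (6, 6)]
tally = {}
for e, x in pairs:
    label = classify_progress(e, x)
    tally[label] = tally.get(label, 0) + 1
# An effective system should show the first two categories outnumbering the last.
```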
Social Rating Differences by State

[Bar chart: Social Rating Difference by State; percentage (0-40%) at each exit-minus-entry difference (-3 to 5), KS vs. ID]
Are a State's OSEP Outcome Scores Sensitive to Progress Over Time? Examples from 2 States
[Chart: Distributions Across Knowledge and Skills Outcome at Entry and Exit]
[Chart: Distributions Across Social Outcome at Entry and Exit]
[Chart: Comparison of State Entry Outcome Data from 2007 and 2008]
Importance of Looking at Exceptionality Related to Outcome
- Ratings should reflect child exceptionality, because an exceptionality affects functioning
- DD ratings should generally be lower than SL ratings, because DD is a more pervasive exceptionality
Meets Needs by Principal Exceptionality and COSF Rating
[Bar chart: Meets Needs Entry Ratings Percentage by Principal Exceptionality; percentage (0-50%) at each COSF rating (1-7) for DD and SL, KS and ID]
Meets Needs by Principal Exceptionality and OSEP Category
[Bar chart: Percentage of OSEP Category for Outcome 3 (Meets Needs) by Principal Exceptionality; percentage (0-80%) in OSEP categories A-E for DD and SL, KS and ID]
Interpreting Exceptionality Results
- Different exceptionalities should lead to different OSEP categories
- More SL in E (rated higher to start with; less pervasive and easier to achieve gains)
- More DD in D (gaining, but still some concerns; more pervasive and harder to achieve gains)
Gender Differences
- Ratings should generally be consistent across gender. If not, ratings or criteria might be biased.
- Need to ensure that gender differences aren't really exceptionality differences.
- Some diagnoses are more common in one gender than the other.
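A first screen for gender effects is to compare median ratings by group, as the following slide does. A minimal sketch, with invented records rather than state data:

```python
from collections import defaultdict
from statistics import median

def median_by_group(records, group_key, value_key):
    """Median of value_key within each level of group_key."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[group_key]].append(rec[value_key])
    return {g: median(vals) for g, vals in groups.items()}

# Invented records, not state data
records = [
    {"gender": "M", "social_entry": 4},
    {"gender": "M", "social_entry": 5},
    {"gender": "F", "social_entry": 5},
    {"gender": "F", "social_entry": 6},
]
medians = median_by_group(records, "gender", "social_entry")
# Large gaps between groups would prompt a check for bias,
# or for exceptionality differences masquerading as gender differences.
```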
Entry Outcome Ratings by Gender
[Bar chart: Median Entry Ratings by Gender; median rating (1-7) for males and females on the Social, Knowledge, and Meets Needs entry outcomes, Kansas vs. Idaho]
[Chart: Mean Differences and Ranges in the 3 Outcomes by Gender]
Gender and Exceptionality
Exceptionality   Male (KS)   Male (ID)   Female (KS)   Female (ID)
DD               50.9%       60.9%       50.7%         62.5%
SL               46.2%       33.7%       46.3%         31.3%
Importance of Exploring Gender Differences by Exceptionality
Because roughly the same percentage of boys and girls are classified as DD, and as SL, rating differences are not the result of exceptionality differences.
Program or Regional Differences in Distribution of Outcome Scores
- If programs in different parts of the state are serving similar children, then ratings should be similar across programs
- If ratings differ across programs with similar children, check assessment tools, training, and meeting/team characteristics
[Chart: Program or Regional Differences in Distribution of Outcome Scores]
Are the 3 Outcomes Related?
Expect there to be patterns of relationships across the functional outcomes, as compared with developmental domain scores.
Correlations Across Outcomes at Entry
Pair               ID (Part B)   KS (Part B)   KS (Part C)
Know vs Meets      .726          .732          .633
Social vs Meets    .799          .743          .620
Know vs Social     .782          .774          .758
N children         1003          1280          1108
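Correlations like those above can be computed with any statistics package (the deck used SPSS). A minimal Pearson sketch in Python, using invented ratings rather than the state samples of 1003-1280 children:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented entry ratings for two outcomes, not state data
knowledge = [2, 3, 4, 5, 6, 4, 3]
social    = [3, 3, 5, 5, 6, 4, 2]
r = pearson(knowledge, social)
# Moderate-to-strong positive correlations across the three functional
# outcomes are what the table above would lead us to expect.
```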
Correlations Between Assessment Outcomes on the BDI and COSF Ratings
Mean correlation between COSF outcome ratings and BDI domain scores:
- Social vs. Personal-Social = .65
- Knowledge vs. Cognitive = .62
- Meets Needs vs. Adaptive = .61
Outcome Rating Differences by Measure
- Use of different measures may be associated with different ratings, because the measures provide different information
- Different measures may also be associated with different exceptionalities
[Chart: Mean Knowledge and Skills Outcome Differences as a Function of Measure]
Interpreting Team and Meeting Characteristics
- Team characteristics
  - Team size and composition
- Meeting characteristics
  - How teams meet
  - How parents are included
Team Composition
- Video: 93% of teams had 2-4 professionals (35% SLP, 30% ECE)
- Survey: 85% had 2-4 professionals (95% SLP, 70% ECE)
How Do Teams Complete Outcome Information?

- Do teams meet to determine ratings? (survey)
  - 41% always meet as a team
  - 42% sometimes meet as a team
  - 22% members contribute, but one person rates
  - 5% one person gathers all information and makes ratings
- How teams meet at least sometimes (survey)
  - In person: 92%
  - Phone: 35%
  - Email: 33%
What Does Team Information Provide that is Helpful for Quality Assurance?
- The COSF process is intended to involve teams; this happens some of the time
- Teams are creative in how they meet, likely due to logistical constraints
- Checks the fidelity of the system (whether it is being used as planned)
- If we know how teams are meeting, training can be modified to accommodate them
Decision-Making Process Followed by Teams
Decision-making process:
- Standardized steps
- Consensus reached by teams
- Deferring to a leader
What Steps Did Teams Use to Make Decisions?
- Use of crosswalks (survey)
  - 59% reported that their team used one
  - 94% reported using it to map items and sections to COSF outcomes
- ECO decision tree use
  - Video: 95% used it
    - 6% without discussing evidence (yes/no at each step)
    - Others discussed evidence at each step, then rated and documented, or discussed and documented at each step
  - Survey: 81% used it
What Does this Indicate About the Team Decision-making Process?
Use of the decision tree and crosswalks:
- Indicates teams are using similar processes to determine ratings across the state
- Important because the steps taken will affect results
- Even when using the same tools, must check that teams are using them correctly
- The decision tree is intended to be used WITH evidence of child functioning, not by itself
Did Teams Always Come to a Consensus?
- Consensus
  - Video: 86% reached consensus
  - Survey: 96% found consensus easy or somewhat easy to reach
- Deferral
  - 11% had a team member defer to another on one rating
Conclusions About How Teams Make Decisions
Consensus and deferral:
- Teams are typically making rating decisions as a team, not letting one or two individuals decide ratings.
- This is important because the COSF was intended to be used by a team, not by individuals.
Collaboration Between Part C and Part B in Decision-Making
- Collaboration between Part C and Part B professionals
  - Video: 56% had at least one professional at both the Part C and Part B meetings
  - Survey: 49% collaborate at least sometimes
- When Part C and Part B teams collaborate, information and effort are shared
- Transition is made easier for families and more effective for children
What is Reported about Training?
- 68% felt adequately trained to complete the COSF process
- Perceived proficiency
  - 25% proficient
  - 52% somewhat proficient
  - 23% would like to feel more proficient
Conclusions About Training
Training:
- Most professionals were satisfied
- If they had not been, training methods would have needed to be re-evaluated
- Still room for improvement
- A constant battle due to high rates of staff turnover
How do we apply what we learned about training?
Training considerations:
- Use of crosswalks
- Use of the ECO decision tree
- Whether professionals feel adequately trained
- If the COSF is not reflecting differences in child functioning, training may need to be modified
Some Additional Questions to Ask
Are OSEP outcomes affected by variable conditions in a state's accountability processes?
- Resources
  - Ability to establish a standard platform for data collection and analysis
  - Not all states have access to resources, research partners, etc.
- Rates of staff turnover
  - Outcomes depend on informed, well-trained staff with access to training and TA
- Uses of technology to support data collection, training, and management
  - Websites to make information readily available statewide for data entry, analysis, and reporting
Summary and Future Directions
- All states are responsible for establishing the validity of their systems, and thereby the power of the decisions made based on the data
- States can begin building the case for the validity of accountability systems through analyses of outcome data and internal studies of quality and fidelity of implementation
- Data tables, charts, and graphs were produced with SPSS, Microsoft Word, and Microsoft Excel
Some of these data are published in: Greenwood, C. R., Walker, D., Hornback, M., Hebbeler, K., & Spiker, D. (2007). Progress developing the Kansas Early Childhood Special Education Accountability System: Initial findings using the ECO Child Outcome Summary Form (COSF). Topics in Early Childhood Special Education, 27(1), 2-18.
This work was supported by grants from the U.S. Office of Special Education Programs to SRI and collaborating partners (ECO Center- H327L030002; General Supervision Enhancement Grant- H326X040018). We extend our appreciation for this support.
For more information see: http://www.fpg.unc.edu/~ECO/