The Role of SACSCOC Recommendations in Changing Community College Practices

for Institutional Effectiveness: A Quantitative Analysis

By

Kara Larkan-Skinner, M.A.

A Dissertation

In

Higher Education Administration

Submitted to the Graduate Faculty Of Texas Tech University in

Partial Fulfillment of the Requirements for

the Degree of

DOCTOR OF EDUCATION

Approved

Dr. Stephanie J. Jones Chair of Committee

Dr. Dimitra Jackson Smith

Dr. Paul Matney

Dr. Mark Sheridan Dean of the Graduate School

December 2015

Copyright 2015, Kara Larkan-Skinner

ACKNOWLEDGMENTS

Like many who have walked this path before me, I am overwhelmed and humbled

by the enormous support system that helped me achieve this lifelong goal. I owe an

immense amount of gratitude to all who have supported me along this journey.

To my committee members, Dr. Stephanie Jones, Dr. Dimitra Jackson Smith and

Dr. Paul Matney, thank you for devoting your time and energy to serve on my committee.

Most of all, thank you for your support and for challenging me to achieve high standards.

Dr. Matney, thank you for encouraging me to embark on this doctoral program path. I

will always be thankful to you, for your leadership at Amarillo College and for your

encouragement and belief in my ability to finish this program. Dr. Jones, thank you for

your unwavering faith in me; for challenging and supporting me throughout this program;

for setting high standards and holding me to those standards; and for being my beacon in

the night when I was lost. Your encouragement helped me through many challenging

times and pushed me to complete this program.

To Cohort 3, when I decided to embark on a dissertation journey, I had no idea

that I would have 13 of the most wonderful people in the world beside me. Thank you

for your support, assistance, and friendship. Your support helped me through difficult

times. I know we were a little “discombobulated” at times, but we made it; keep at it, and

I will see you all on the other side.

A special thanks to my current team (Frances, Michael, Mary Jo, Liz and

Howard), for reviewing and editing my dissertation, for challenging me to think through

my research questions, and for being so all-around amazing. Thank you to my current

and former supervisors for supporting and encouraging me through this program (Drs.

Dwayne Banks and Jeff Kantor and Ms. Danita McAnally). Thank you to all of my

current and former colleagues at Our Lady of the Lake University and Amarillo College

who encouraged and supported me through this journey. Dr. Russell Lowery-Hart, thank

you for your encouragement, your faith in me, and for the opportunities that you provided

to me at Amarillo College. I learned so much from you and will always be indebted to

you. Also, thank you to my colleagues across the southern region who took the time to

complete my survey and to those who served as content experts and provided me

feedback on my study. Joe and Jason, thank you both for your encouragement and for

your assistance with my research study.

To my neighbors who have become an extended family, Donna and Marketta,

thank you for your friendship and for becoming second mothers to my children. I am

forever indebted to you both and am so thankful to have you in my life. Anne Kathleen,

thank you for your friendship and for being my personal APA and grammar advisor.

Thank you to my new and old friends alike for supporting me and being the best

cheerleaders. Cara, my BFC, thanks for your friendship and support over the years. I

think of you and miss you every day.

My family has been my rock, and my constant source of inspiration and

encouragement. “When I was a boy of fourteen, my father was so ignorant I could hardly

stand to have the old man around. But when I got to be twenty-one, I was astonished at

how much the old man had learned in seven years” (Mark Twain). Dad and Mom, thanks

for not killing me during my teenage years. I know that I am personally responsible for

shortening your life span by a few years, so the least that I could do is work extra hard as

an adult to make up for all the grief. In all seriousness, Mom and Dad, thank you for your

love, patience, support, and advice; you are the most wonderful parents. You have

encouraged me to always do my best, sacrificed so that I could have more, and been

amazing grandparents to my children. I love you both so much and I am so proud to call

you my parents. To Aunt Gail, Kelly, Aunt Phyllis, Kathy, Walt, Torie, John, Macie and

Allie, and all of my extended family, thank you for everything, especially your love and

encouragement.

To my children, who have a serious misunderstanding of the type of “doctor” that

I will be after completing this journey: no, I will not be the kind of doctor that drives a

Bugatti; yes, I will be the type of doctor that does not make a lot of money. Aidan and

Sophie, you two are the most amazing children any mom could ask for. You are my

inspiration for living. Thank you for your patience, your love, your hugs, and for always

believing in me. I hope that I will be better at remembering to pack your lunches now

that I have finished school; on second thought, you should just learn to do that for

yourself.

Walt, thank you for supporting my dreams; moving across the state so that I could

launch my career; attending parent-teacher conferences so that I could meet a dissertation

or work deadline; and for being my biggest fan. If not for your love, support,

encouragement, and nagging at me to finish, I would not have achieved this goal. Jerry

Seinfeld once said, “There is no such thing as fun for the whole family.” It may not have

been fun for the whole family (or for any of us), but it definitely brought us closer and

showed us that together we can accomplish anything we set our minds to. Thank you for

being by my side all the way.

Last, thank you to all of the amazing educators from River Road Independent

School District, Amarillo College, University of Louisville, and Texas Tech University

for providing me with an excellent educational foundation that helped me find my career

path and passion in life.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

ABSTRACT

LIST OF TABLES

CHAPTER ONE INTRODUCTION
    Purpose of the Study
    Research Questions
    Significance of the Study
    Summary of Conceptual Framework
    Summary of Methodology
    Assumptions of the Study
    Limitations to the Study
    Delimitations to the Study
    Definition of Terms
    Summary
    Organization of the Remainder of the Study

CHAPTER TWO LITERATURE REVIEW
    Overview of Higher Education Accreditation
        History of Accreditation
        Future of Accreditation
        Types of Accreditation
        SACSCOC Regional Accreditation
        Accreditation Benefits
        Accreditation Challenges
    Practices Leading to Reaffirmation of Accreditation
        Tangible Resources
        Intangible Resources
        Leadership
    Conceptual Framework
        Tangible Resources
        Intangible Resources
        Leadership

CHAPTER THREE METHODOLOGY
    Restatement of Purpose of the Study
    Restatement of Research Questions
    Research Design
        Study Institutions
        Participants
    Data Collection
        Instrumentation
    Reliability and Validity
    Data Analysis
    Summary

CHAPTER FOUR RESULTS
    Summary of Research Design
        Data Collection
        Creation of New Variables
    Findings
        Characteristics of the Sample
        Differences in Perceived Change or Improvement by Recommendations Received
        Differences in Perceived Change and Improvements by Recommendations
        Institutional Change and Improvement Predictors
        Predictors of Severity of Recommendations
    Summary

CHAPTER FIVE CONCLUSIONS AND RECOMMENDATIONS
    Overview of the Study
    Discussion of the Findings
        Differences in Perceived Change or Improvement by Recommendations Received
        Differences in Perceived Change and Improvement by Recommendation Severity
        Predictors of Severity of Recommendations
    Implications for Higher Education Practice
    Recommendations for Higher Education Practice
    Recommendations for Future Research
    Conclusion

REFERENCES

APPENDIX A
    Texas Tech University Institutional Review Board Approval

APPENDIX B
    Email to Study Participants

APPENDIX C
    Survey Questionnaire Introductory Text

APPENDIX D
    SACSCOC Recommendations and Improvements Survey

APPENDIX E
    Email Reminder to Study Participants

APPENDIX F
    Email Reminder to Study Participants

ABSTRACT

Institutions of higher education undergo regional accreditation in order to

ensure academic quality and to ensure that students attending the institution can receive

federal financial aid. The process of undergoing regional accreditation is a rigorous

task that many institutions find challenging, especially within the institutional

effectiveness arena. The purpose of this study was to analyze the role of SACSCOC

recommendations in institutional change, based on the perceptions of the SACSCOC

liaison or primary institutional effectiveness personnel at community colleges in the

SACSCOC region that underwent reaffirmation of accreditation between 2011 and

2015.

The study utilized a researcher-developed, web-based survey instrument to collect

self-reported data from the SACSCOC liaison or chief institutional effectiveness officer

from 69 institutions. A non-experimental, group-comparison quantitative research design

was used to examine statistically significant differences and relationships between the

SACSCOC recommendations received during the reaffirmation of accreditation process,

and perceived levels of institutional change or improvement within the institutional

effectiveness domain. Research analyses included descriptive, inferential, and predictive

statistics. Specifically, an independent samples t-test, an analysis of variance, and

multiple regression analyses were utilized.

Descriptive statistics revealed the common institutional characteristics,

institutional SACSCOC characteristics, institutional effectiveness practices, types of

changes and improvements made, SACSCOC recommendations received, and the

phase of accreditation in which the recommendations occurred. A number of statistically

significant differences were found between institutions that received recommendations

and those that did not for changes within intangible resources, tangible resources, and

leadership. Overall, results indicated that recommendations influence many types of

institutional changes. Further, predictors were discovered for the total amount of

institutional change experienced, the total amount of institutional improvement

experienced, and the severity of recommendations received by institutions. A series of

implications and recommendations are provided that are intended to aid community

colleges in preparation for regional accreditation, and specifically in

institutionalizing processes that are known to enhance institutional effectiveness.

LIST OF TABLES

Table 1.1 SACSCOC Institutions by State
Table 1.2 Principles Comprising Institutional Effectiveness
Table 2.1 Principles of Accreditation Related to Institutional Effectiveness
Table 3.1 SACSCOC Level I Institutions by Carnegie Categories
Table 3.2 State Representation of Sample
Table 3.3 Previous Year of Reaffirmation of Sample
Table 3.4 Differences between SCA and SRIS
Table 4.1 Survey Response Rate by Year of Accreditation
Table 4.2 New Variables Creation Process
Table 4.3 Institutional Characteristics
Table 4.4 Institutional SACSCOC Characteristics
Table 4.5 Institutional Effectiveness Characteristics
Table 4.6 Offsite Recommendations Received
Table 4.7 Onsite Recommendations Received
Table 4.8 C&R Recommendations Received
Table 4.9 Principles Leading to Monitoring Status
Table 4.10 Rank Order Difficult Principles of Accreditation
Table 4.11 Sources of Difficulty in Demonstrating Compliance
Table 4.12 Amount of Change in Institutional Effectiveness by Category
Table 4.13 Amount of Improvement in Institutional Effectiveness by Category
Table 4.14 SACSCOC Compliance Today
Table 4.15 t-test Results for Changes and Improvements by Offsite Recommendations
Table 4.16 t-test Results for Changes and Improvements by C&R Review Recommendations
Table 4.17 t-test Results for Changes and Improvements by Monitoring Status
Table 4.18 ANOVA Results for Changes and Improvements by Severity of Recommendations
Table 4.19 Tukey HSD Results for Changes and Improvements by Severity of Recommendations
Table 4.20 Summary of Multiple Regression Analysis for Total Change Score
Table 4.21 Summary of Multiple Regression Analysis for Total Improvement Score
Table 4.22 Summary of Multiple Regression Analysis for Severity of Recommendations
Table 5.1 Differences in Significant Findings by Accreditation Phases

CHAPTER I

INTRODUCTION

U.S. higher education “accreditation is a process of external quality review

created and used . . . to scrutinize colleges, universities and programs for quality

assurance and quality improvement” (Eaton, 2012, p. 1). Accreditation serves four

primary roles: 1) ensures a standard of quality (Eaton, 2009; Head & Johnson, 2011); 2)

provides access to federal and state funds (Eaton, 2009; Jackson, Davis, & Jackson,

2010); 3) instills private sector confidence in higher education (Eaton, 2009); and 4)

allows transferability of student courses or programs between institutions (Eaton, 2009;

Head & Johnson, 2011). Accreditation ensures that institutions of higher education meet

the guidelines established by the Department of Education and is the primary quality

control mechanism in higher education (Brittingham, 2009; Jackson et al., 2010).

Although the accreditation process is time-consuming and arduous, “it confers a number

of benefits to an institution” (Head & Johnson, 2011, p. 37), such as positive changes in

educational programs (Murray, 2002), continuous institutional improvement (Murray,

2002), increased community engagement (Sandmann, Williams, & Abrams, 2009),

opportunities for professional development, and conditions that enable student

mobility (Brittingham, 2009).

Regional accreditation agencies are responsible for accrediting “public and

private, mainly nonprofit and degree-granting, two- and four-year institutions” (Eaton,

2012, p. 2) within each agency’s respective geographical region (Jackson et al., 2010).

There are six regional accrediting agencies responsible for accrediting U.S. higher

education institutions: 1) New England Association of Schools and Colleges, 2) Middle

States Association of Colleges and Schools, 3) North Central Association of Colleges and

Schools, 4) Southern Association of Colleges and Schools, 5) Northwest Commission on

Colleges and Universities, and 6) Western Association of Schools and Colleges

(Brittingham, 2009; Jackson et al., 2010). The Southern Association of Colleges and

Schools Commission on Colleges (SACSCOC) is the regional accrediting body that is the

focus of this research study and is the regional accreditor responsible for accrediting

colleges and universities in Alabama, Florida, Georgia, Kentucky, Louisiana, Mississippi,

North Carolina, South Carolina, Tennessee, Texas, Virginia, Latin America, and

international institutions that meet SACSCOC qualifications (SACSCOC, 2014a). The

distribution of institutions within each state is provided in Table 1.1.

Table 1.1 SACSCOC Institutions by State

State      N      %
AL         54     6.7%
FL         77     9.6%
GA         85     10.5%
KY         51     6.3%
LA         39     4.8%
MS         32     4.0%
NC         112    13.9%
SC         50     6.2%
TN         63     7.8%
TX         165    20.5%
VA         72     8.9%
Foreign    6      0.7%
Total      806    100%

Note. Information current as of November 2014.

Each regional accrediting agency develops its own criteria for accreditation

(Brittingham, 2009), but all of the regional accrediting agencies require institutions to

demonstrate effectiveness of the organization (Head & Johnson, 2011).

Institutional effectiveness is a systematic review of the institutional mission,

which results in continuous quality improvement and demonstration that the institution

is accomplishing its mission (SACSCOC, 2011b). Head and Johnson (2011) argued

that every standard covered by the SACSCOC encompasses an institutional

effectiveness standard. The importance of institutional effectiveness is demonstrated

by the fact that all regional accrediting bodies require some level of evidence of

institutional effectiveness (Head & Johnson, 2011). Further, the SACSCOC process

of undergoing accreditation, known as reaffirmation, illustrates the importance of

institutional effectiveness; SACSCOC institutions undergo a reaffirmation process

every 10 years (SACSCOC, 2011b) and an interim review every five years

(SACSCOC, n.d.b). Although the fifth-year review is less comprehensive, it requires

institutions to respond to issues related to institutional effectiveness (SACSCOC,

n.d.b), thus ensuring that the institutional effectiveness domain of accreditation is

reviewed every five years. Institutional effectiveness is a major component of the

SACSCOC reaffirmation of accreditation process, specifically comprising 11

principles (SACSCOC, 2012a) and permeating the entire accreditation process (Head,

2011). The 11 principles comprising institutional effectiveness are provided in Table

1.2.

Table 1.2 Principles Comprising Institutional Effectiveness

Principle
2.4      institutional mission
2.5      institutional effectiveness
3.1.1    mission
3.3.1.1  institutional effectiveness: educational programs
3.3.1.2  institutional effectiveness: administrative units
3.3.1.3  institutional effectiveness: educational support
3.3.1.4  institutional effectiveness: research
3.3.1.5  institutional effectiveness: community or public service
3.4.7    consortial relationships and contractual agreements
3.5.1    general education competence
4.1      student achievement

Statement of the Problem

The challenges in regional accreditation are well known within the higher

education community, and the act of going through accreditation is a source of angst

for many institutions (Ewell, 2011; Head & Johnson, 2011; Oden, 2009). Regional

accreditation expectations have historically increased (Ewell, 2011) and have become

more complex (Head, 2011), subsequently increasing the amount of time (Ewell,

2011), energy (Murray, 2002), financial cost (Bardo, 2009; Cooper & Terrell, 2013),

and human capacity needed to undertake accreditation (Oden, 2009). The institutional

effectiveness domain of accreditation is the most problematic area for colleges and

universities (Ewell, 2011; Manning, 2011). According to the Director of Training and

Research from the SACSCOC, institutional effectiveness principles were the

most frequently cited out-of-compliance areas of accreditation for SACSCOC institutions in

2013 (A. G. Matveev, personal communication, September 11, 2014). Further, these

institutional effectiveness challenges are heightened for community colleges that are

often resource limited (Alfred, 2012), and expected to demonstrate accomplishment of

multiple missions (Ewell, 2011).

Institutions experience a number of challenges during regional accreditation.

These challenges include financial costs (Bardo, 2009; Cooper & Terrell, 2013; Hartle,

2012; Hulon, 2000), time needed to sustain accreditation activities (Murray, 2002;

Oden, 2009), complexity of institutional effectiveness (Baker, 2002; Ewell, 2011;

Manning, 2011), and the complexity of the accreditation process (Chapman, 2007;

Hulon, 2000; Young, Chambers, & Kells, 1983; Young, 2013). Some of these

challenges have contributed to a negative view of accreditation (Hulon, 2000). The

resources required to sustain regional accreditation are identified as a primary reason

that higher education institutions view accreditation negatively (Hulon, 2000).

The combination of substantial resource requirements (Hulon, 2000) and a complex

accreditation process that lacks prescribed definitions of expectations

(Baker, 2002) has contributed to negative perceptions of

accreditation. Further, some argued that the overall resources required for

accreditation have become unsustainable (Hartle, 2012; Neal, 2008).

Institutions of higher education are under increased scrutiny to demonstrate

quality educational programs (Ewell, 2011), which has led to increased expectations

for higher education accountability from the public and governmental agencies (Head,

2011). Consequently, the regional accrediting bodies have increased their

expectations for higher education institutions, which has translated into increasingly

rigorous accreditation standards (Allen & Kazis, 2007; Jackson et al., 2010).

Changes within the process and expectations of regional accreditation in the

SACSCOC (SACSCOC, n.d.b; SACSCOC, 2012a) occurred after governmental calls

for increased oversight of higher education institutions in 2006 (U.S. Department of

Education [USDoE], 2006). The changes implemented by the SACSCOC included

a fifth-year interim review and increased expectations within

institutional effectiveness (SACSCOC, n.d.b).

The concept of institutional effectiveness was introduced by the SACSCOC in

1984 as a way for institutions to prove that they were accomplishing their intended

mission; however, the SACSCOC did not provide a definition of institutional

effectiveness until 2005 (Head, 2011). The lack of a prescribed definition from the

SACSCOC and the difficulty in defining institutional effectiveness (Alfred, 2011;

Head, 2011; Ewell, 2011; Manning, 2011) contributes to the difficulty in proving

institutional effectiveness (Head, 2011), because institutions interpret institutional

effectiveness differently (Head, 2011; Ewell, 2011; Manning, 2011). Whatever the

root cause may be, the institutional effectiveness domain of regional accreditation is

one of the most challenging aspects of accreditation for institutions of higher

education (Ewell, 2011; Manning, 2011).

During 2013, 64% of the 75 SACSCOC institutions that went through

reaffirmation of accreditation were out-of-compliance in at least one institutional

effectiveness area during some point in the reaffirmation of accreditation process (A.

G. Matveev, personal communication, September 11, 2014). Further, 23% of

institutions received a recommendation, an out-of-compliance finding (SACSCOC,

2012a), related to institutional effectiveness during the final stage of reaffirmation of

accreditation. This finding indicates that 23% of institutions were at-risk to receive a

negative sanction from the SACSCOC due to being out-of-compliance within

institutional effectiveness at the conclusion of reaffirmation of accreditation (A. G.

Matveev, personal communication, September 11, 2014). These results indicate that

demonstrating institutional effectiveness is challenging for institutions, regardless of

institutional type. However, regional accreditation requires each institution to evaluate

how effectively it has accomplished its mission (Head, 2011;

Head & Johnson, 2011). This presents a unique challenge for community colleges,

which generally have multi-faceted missions (e.g., general education, workforce,

college preparation, non-credit instruction, contract training, continuing education,

public service) (Cohen, Brawer, & Kisker, 2013; Ewell, 2011). Demonstration of

accomplishment of multiple missions further complicates the reaffirmation of

accreditation process for community colleges (Ewell, 2011).

Purpose of the Study

The purpose of this study was to analyze the role of SACSCOC

recommendations in institutional change, based on the perceptions of the SACSCOC

liaisons or institutional effectiveness personnel at community colleges in the

SACSCOC region that underwent reaffirmation between 2011 and

2015. This study sought to examine the positive changes within

institutional effectiveness that occurred because of the reaffirmation of accreditation

process. The results of the study are intended to aid community colleges in

preparation for regional accreditation, and specifically in institutionalizing

processes that are known to enhance institutional effectiveness. Further, the study’s

results are applicable to the work of the SACSCOC because it has a stake in

understanding how recommendations change institutional behaviors, particularly the

behaviors of community colleges.

Research Questions

The study was guided by the following research questions:

1) What is the statistically significant relationship between the independent

variable of community colleges that receive SACSCOC recommendations and

the dependent variable of overall (or total) level of perceived change or

improvement?

2) What is the statistically significant relationship between the independent

variable of ‘severity of recommendations’ group membership and the

dependent variable of levels of perceived changes or improvements?

3) Which factors (independent variables) best predict the overall level of

institutional change or improvement (dependent variables)?

4) Which factors (independent variables) best predict the severity of

recommendations received by the institutions (dependent variables)?

Significance of the Study

This study on the role of the SACSCOC recommendations in perceived

changes within institutional effectiveness in community colleges is significant for the

field of higher education for several reasons: 1) higher education accountability has

increased and is expected to further increase (Bardo, 2009; Eaton, 2012; Ewell, 2011);

2) institutions are challenged with demonstration of effectively accomplishing their

missions (Cooper & Terrell, 2013; Hartle, 2012; Oden, 2009; Powell, 2013); 3)

accreditation is occurring more frequently (SACSCOC, n.d.b); and 4) the cost of

undergoing accreditation has increased (Bardo, 2009; Cooper & Terrell, 2013).

Federal legislation and public demands for accountability have led to increased

regional accreditation expectations for institutions (Bardo, 2009; Eaton, 2012; Ewell,

2011). The expectations for colleges to demonstrate quality academic programs and

effective operations are anticipated to further increase, which may place greater

demands on colleges (Bardo, 2009).

Many institutions find reaffirmation of accreditation challenging (Cooper &

Terrell, 2013; Hartle, 2012; Oden, 2009; Powell, 2013). Specifically, institutions in

the SACSCOC are challenged within the institutional effectiveness domain of regional

accreditation (Ewell, 2011; Manning, 2011; A. G. Matveev, personal communication,

September 11, 2014). Institutions are expected to demonstrate accomplishment of

their missions and the standards requiring institutions to provide evidence of this have

expanded (Bardo, 2009; Ewell, 2011). The increase in expectations underscores the

importance of institutional effectiveness, which is integral to the entire SACSCOC

accreditation process (Head & Johnson, 2011; Head, 2011).

Further, colleges are expected to undergo SACSCOC review every five years

(SACSCOC, n.d.b), which means that institutions must be able to demonstrate quality

academic programs and effective operations on a continual basis. In order to

successfully navigate regional accreditation every five years, institutions must remain

in a steady state of compliance. Last, increased accreditation expectations have

raised the cost of undergoing regional accreditation (Bardo, 2009; Cooper &

Terrell, 2013), which underscores the importance of institutions discovering ways to

remain in a steady state of compliance with accreditation standards in an efficient and

cost-effective manner.

This study is significant as its results may aid institutions in the institutional

effectiveness domain by providing information on how other community colleges used

SACSCOC recommendations to improve their overall approach to demonstrating

institutional effectiveness. This information is useful for two reasons: 1) all community

colleges, regardless of reaffirmation status, can use the results of the study to improve

their own approach to institutional effectiveness; and 2) institutions that receive

recommendations in the future will have empirical evidence to assist in making

decisions that will aid the institution in becoming compliant. The study can be used to

assist institutions in achieving a steady state of compliance with institutional

effectiveness standards.

Summary of Conceptual Framework

The conceptual framework for this research study is based on the “framework

for institutional capacity” developed by Alfred, Shults, Jaquette, and Strickland (2009,

p. 77). Within this framework, capacity is defined as “how well a college performs”

(Alfred et al., 2009, p. 77). According to Alfred et al., institutional capacity is

comprised of three components: 1) tangible resources, 2) intangible resources, and 3)

leadership.

Tangible resources are material resources that a college utilizes to achieve its

goals (Alfred et al., 2009). The tangible resources that are relevant to the study are

financial resources, human resources, and technology, which are necessary for colleges

to achieve their intended goals and missions. In comparison, intangible resources are

the non-material resources that aid or hinder a college from achieving desired goals.

Intangible resources include culture, processes, and staff capabilities. The final

component, leadership, determines how tangible and intangible resources are used

(Alfred et al., 2009).

According to the framework for institutional capacity, institutional capacity is

influenced by leaders’ abilities to leverage tangible and intangible resources (Alfred et

al., 2009). Where tangible and intangible resources reflect the overall capacity of an

organization, the decisions made on how to utilize those resources determine the

institution’s overall effectiveness (Alfred et al., 2009). This conceptual framework is

appropriate for the study because it provides the context necessary to understand

important components within institutional effectiveness.
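
Because the study groups perceived changes and improvements under these same three components, a brief illustrative sketch may help make the framework concrete. The following Python fragment is a minimal, hypothetical sketch, not the study's instrument: the item names and scores are invented for illustration, and only the three component categories come from Alfred et al. (2009).

# Hypothetical grouping of perceived-change items under the three capacity
# components of Alfred et al.'s (2009) framework; item names and scores are
# invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class InstitutionalCapacity:
    tangible: dict = field(default_factory=dict)    # financial, human resources, technology
    intangible: dict = field(default_factory=dict)  # culture, processes, staff capabilities
    leadership: dict = field(default_factory=dict)  # decisions on how resources are used

    def total_change_score(self) -> int:
        """Sum all item scores to form a composite 'total change' score."""
        return (sum(self.tangible.values())
                + sum(self.intangible.values())
                + sum(self.leadership.values()))

college = InstitutionalCapacity(
    tangible={"financial": 3, "human_resources": 4, "technology": 2},
    intangible={"culture": 4, "processes": 5, "staff_capabilities": 3},
    leadership={"resource_decisions": 4},
)
print(college.total_change_score())  # prints 25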

Summary of Methodology

A non-experimental, group-comparison quantitative research design was used

to examine statistically significant differences and relationships between the

SACSCOC recommendations received during the reaffirmation of accreditation

process, and perceived levels of institutional change within the institutional

effectiveness domain by the SACSCOC liaisons of the study institutions. The study

utilized a researcher-developed, web-based survey instrument to collect self-reported

data from the accreditation liaisons or institutional effectiveness personnel at 135

SACSCOC community colleges. Reliability of the instrument was ensured through

the use of Cronbach’s alpha analysis for Likert scale questions (Huck & Cormier,

1996). Validity of the instrument occurred through face and content validity assurance

by experts in the field of institutional effectiveness (Huck & Cormier, 1996).
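
For reference, the standard formula for Cronbach's alpha (a general property of the statistic, not specific to this instrument) is

\alpha = \frac{k}{k - 1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}} \right)

where k is the number of Likert-scale items, \sigma^{2}_{Y_i} is the variance of item i, and \sigma^{2}_{X} is the variance of the total scale score; values closer to 1 indicate greater internal consistency.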

Research analyses included descriptive, inferential, and predictive statistics (Fitzgerald

& Fitzgerald, 2013). Specifically, independent samples t-tests (Fitzgerald &

Fitzgerald, 2013), analysis of variance tests (ANOVA) (Fitzgerald & Fitzgerald,

2013), and multiple regression analyses (Creswell, 2014) were utilized.
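
As a minimal sketch of how these three analyses fit together, the following Python fragment runs an independent samples t-test, a one-way ANOVA, and a multiple regression with scipy and statsmodels on invented data; all variable and column names are hypothetical and are not drawn from the study's dataset.

# Hypothetical sketch of the study's three analysis types; the data frame and
# its column names are invented for illustration only.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "total_change":    [12, 15, 9, 20, 14, 11, 18, 16],  # composite change score
    "got_offsite_rec": [1, 1, 0, 1, 0, 0, 1, 1],         # received an offsite recommendation?
    "severity_group":  ["low", "high", "low", "high",
                        "mid", "low", "mid", "high"],    # severity-of-recommendations group
    "n_recs":          [2, 5, 0, 7, 3, 1, 4, 6],         # number of recommendations received
})

# Independent samples t-test: change scores by whether a recommendation was received
t, p_t = stats.ttest_ind(df.loc[df.got_offsite_rec == 1, "total_change"],
                         df.loc[df.got_offsite_rec == 0, "total_change"])

# One-way ANOVA: change scores across severity-of-recommendations groups
f, p_f = stats.f_oneway(*[g["total_change"] for _, g in df.groupby("severity_group")])

# Multiple regression: which factors predict the total change score?
model = smf.ols("total_change ~ n_recs + got_offsite_rec", data=df).fit()

print(t, p_t, f, p_f)
print(model.params)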

Assumptions of the Study

The assumptions made for this study include:

1) The SACSCOC liaison (or primary institutional effectiveness staff member) is

the appropriate institutional authority to provide both historical information

and perceptions of current compliance and institutional change.

2) The SACSCOC liaison or institutional representative will provide accurate and

truthful information.

Limitations to the Study

The limitations to this study include:

1) It is limited to self-reported data, because the actual recommendations received

by institutions are not publicly available.

2) The institutions that were asked to participate in this study are community

colleges in the SACSCOC region of accreditation; therefore, generalizability of

the findings will only apply to community colleges in the SACSCOC region.

3) The data collection instrument for this study was delimited in scope, length,

and question type in order to balance gathering appropriate information with

ensuring a robust sample size. The survey was limited to multiple-answer

options on similar scales in order to aid the survey respondent and reduce the

total amount of time required to complete the survey (Day, 1989).

Delimitations to the Study

The study had one delimitation. Only community colleges in the SACSCOC

region that underwent reaffirmation of accreditation in the years 2011 through 2015

were invited to participate in the study.

Definition of Terms

The following terms are used throughout the study and are defined below:

Committee on Compliance and Reports Review. The Committee on

Compliance and Reports (C&R) review is the final phase of the reaffirmation of

accreditation process (SACSCOC, 2011a). This review happens immediately prior to

the executive meeting of the SACSCOC (SACSCOC, n.d.a). The executive meeting is

when the final decisions regarding reaffirmation of accreditation occur (SACSCOC

n.d.a).

Institutional effectiveness. Institutional effectiveness is defined as a

systematic review of the mission, resulting in continuous quality improvement, and

demonstration that the institution is accomplishing its mission (SACSCOC, 2011b).

Further, institutional effectiveness includes the 11 related principles of accreditation:

• 2.4 – institutional mission

• 2.5 – institutional effectiveness

• 3.1.1 – mission

• 3.3.1.1 – institutional effectiveness, educational programs

• 3.3.1.2 – institutional effectiveness, administrative units

• 3.3.1.3 – institutional effectiveness, educational support

• 3.3.1.4 – institutional effectiveness, research

• 3.3.1.5 – institutional effectiveness, community or public service

• 3.4.7 – consortial relationships and contractual agreements

• 3.5.1 – general education competence

• 4.1 – student achievement.

Principles of accreditation. The principles of accreditation represent the

specific statements with which institutions must be in compliance in order to become

reaffirmed (SACSCOC, 2012a).

Reaffirmation of accreditation. Reaffirmation of accreditation is the process

that institutions undergo to become re-accredited every 10 years (SACSCOC, 2012a).

The process involves the completion of a self-study, offsite peer review, onsite peer

review, and SACSCOC committee review. Reaffirmation of accreditation is defined

as the entire process of becoming reaffirmed.

Reaffirmed. The term reaffirmed represents the end-product of reaffirmation

of accreditation. Once an institution has successfully completed reaffirmation of

accreditation, the institution becomes reaffirmed (SACSCOC, 2012a).

Recommendations. Recommendations are the issues that were identified as

non-compliant during some phase of the reaffirmation of accreditation process

(SACSCOC, 2012b). The recommendations are areas that a committee determined the

institution needed to address. Institutions are required to address the recommendations

in order to become reaffirmed.

Regional accreditation. Regional accreditation is the process of accrediting

an entire institution and is the primary higher education quality assurance mechanism

(Eaton, 2012). In regional accreditation, the process initiates with a self-study review,

and is followed by peer review and a subsequent site visit (Eaton, 2012).

Standards. The principles of accreditation represent the specific statements

that institutions must be in compliance with in order to become reaffirmed

(SACSCOC, 2012a). These statements are referred to as principles or standards

throughout the research study.

Summary

In summary, the reaffirmation of accreditation process is both rewarding and

challenging for institutions of higher education. Institutional effectiveness represents

one of the most important and challenging areas of the reaffirmation of accreditation

process. Community colleges have special difficulty in defining and demonstrating

institutional effectiveness due to the multiple missions of the colleges. The conceptual

framework used to construct the research study is the framework for institutional

capacity, which states that the interaction among tangible and intangible resources, and

leadership, results in an institution’s overall capacity or performance (Alfred et al.,

2009). The non-experimental, quantitative study seeks to understand how institutions

utilized recommendations to improve institutional effectiveness. Results of the study

may aid practitioners in using recommendations to drive improvement in

demonstrating institutional effectiveness and provide practitioners with specific

behaviors that may improve compliance with the 11 institutional effectiveness

principles.

Organization of the Remainder of the Study

Chapter II comprises a review of the literature on higher education

accreditation. The study methodology and research design are presented in Chapter

III. Chapter IV provides the findings of the study; and Chapter V presents a

discussion of the study’s findings, implications, and recommendations for higher

education practice, and recommendations for future research.

CHAPTER II

LITERATURE REVIEW

Chapter II encompasses a review of the literature on higher education

accreditation. This chapter is organized into four sections: 1) overview of higher

education accreditation, 2) SACSCOC regional accreditation, 3) practices leading to

reaffirmation of accreditation, and 4) conceptual framework. The purpose of this

study was to analyze the role of SACSCOC recommendations in institutional

change, based on the perceptions of the SACSCOC liaison or primary institutional

effectiveness personnel at community colleges in the SACSCOC region that

underwent reaffirmation of accreditation between 2011 and 2015.

Overview of Higher Education Accreditation

Accreditation is grounded in American higher education values and has existed

for nearly 200 years (Brittingham, 2009). Although accreditation has a long-standing

history, it has come under increased scrutiny and pressure that is expected to continue

(e.g., Bardo, 2009; Eaton, 2012; Ewell, 2011). Accreditation acts as a type of quality

assurance for institutions of higher education, the public at-large, governmental

bodies, and students (Eaton, 2012; Ewell, 2011). Two types of accreditation, regional

and program, are the primary quality assurance mechanisms in higher education

(Eaton, 2012). Regional and program accrediting agencies examine different aspects

of a college or university, but both serve an overall purpose of ensuring high

standards. Regional accreditation allows institutions of higher learning to disburse

federal financial aid to students, a role that is a root cause of the scrutiny directed at

accrediting bodies.

History of Accreditation

Higher education regional accreditation began in the late 1800s

(Brittingham, 2009). The American system of accreditation is unique compared to

other countries in that it is non-governmental, based on peer review, and reliant on

higher education institutions to self-evaluate honestly (Brittingham, 2009;

Eaton, 2012). The self-evaluation and peer review process has been a founding

principle of accreditation since its inception (Brittingham, 2009), and accreditation

continues to be conducted in this manner to date (SACSCOC, n.d.a). Accreditation

standards mirror the values held in esteem in American culture, including self-

improvement, volunteerism, and the ability to achieve goals (Brittingham, 2009; Eaton,

2009).

In 1965, the first Higher Education Act (HEA) was passed, which greatly

increased the availability of federal financial aid and subsequently college enrollments

(Brittingham, 2009). The expansion of financial aid and enrollments further

heightened the need for college oversight, due to the significant amount of federal

funding allocated to higher education (Brittingham, 2009). Evidence of the regional

accreditation oversight expansion occurred when the SACSCOC became the first

regional accreditor to adopt institutional effectiveness as a standard in the 1980s (Ewell,

2011). The SACSCOC defined institutional effectiveness as the systematic review of

the institution’s mission, resulting in continuous quality improvement, and

demonstration that the institution is accomplishing its mission (SACSCOC, 2011b).

Another increase in oversight expectations occurred in 2006, when regional

accreditation and higher education received criticism from the Secretary’s

Commission on the Future of Higher Education, also known as the Spellings

Commission (USDoE, 2006). The Commission called for greater accountability and

transparency for colleges and universities by stating that “accreditation agencies

should make performance outcomes, including completion rates and student learning,

the core of their assessment as a priority over inputs or processes” (USDoE, 2006, p.

26). The recommendations from this report targeted nearly every area of higher

education operations, but the recommendation for institutions to collect and report on

meaningful student learning outcomes had great repercussions for the regional

accrediting bodies and post-secondary institutions (USDoE, 2006).

Future of Accreditation

Researchers studying the history of higher education accreditation have found

that federal legislation and public outcry for accountability have placed increased

standards on regional accrediting bodies (e.g., Bardo, 2009; Eaton, 2012; Ewell, 2011).

These increased standards on accrediting bodies translate to increased expectations,

oversight, and requirements of evidence for post-secondary institutions (Bardo, 2009).

According to Bardo (2009), college administrators should expect accreditation

demands to increase and should increase college operating expenses accordingly. For

example, one increased expectation facing colleges and universities is the requirement

of assessing student learning outcomes (Ewell, 2011), which has caused increased

expenses for institutions (Bardo, 2009; Cooper & Terrell, 2013).

Types of Accreditation

Three principal types of accreditation exist (Eaton, 2012; Sibolski, 2012). These

accrediting agencies are national, professional, and regional accreditors (Eaton, 2012;

Sibolski, 2012). National accreditors may be either faith or career based (Eaton, 2012),

or accredit “single purpose institutions” (Sibolski, 2012, p. 23). However, professional

and regional accreditations represent the primary types of accreditation (Baker, 2002;

Head & Johnson, 2011). Professional or program accreditors’ major interest lies within

one specific program area, whereas regional accreditors accredit an entire institution

(Eaton, 2012).

Professional accreditation. An accrediting body that focuses on one area of an

institution, typically a single program, performs professional or program accreditation

(Eaton, 2012). Program accreditation is common in professional program areas because

in order to practice in a number of career fields, respective applicants must have

graduated from a program-accredited school (Council for Higher Education Accreditation

[CHEA], 2010). Program accreditation is often a requirement to obtain a license to

practice in the respective field (CHEA, 2010). According to Eaton (2012), 62 program

accreditors exist and accredit over 22,000 programs. Programs commonly accredited in

this manner are law, medicine, nursing, teaching, and engineering (Eaton, 2012).

Although program accreditation and regional accreditation examine post-secondary

institutions differently, they have complementary goals (Miller, 2000).

Regional accreditation. Regional accreditation is the process of accrediting an

entire institution and is considered the “gold standard of higher education institutional

quality” (Jackson et al., 2010, p. 9). The roles of accreditation agencies are to assure

quality, provide access to federal and state funds, instill public and private sector

confidence in higher education, and ease student transfer between institutions (Eaton,

2012). U.S. higher education institutions are represented by six regional accrediting

agencies, with each accrediting body responsible for oversight of higher education

institutions within the respective jurisdiction (Jackson et al., 2010). The six regional

accrediting agencies are the Middle States Commission on Higher Education, New

England Association of Schools and Colleges, North Central Association of

Colleges and Schools, Northwest Commission on Colleges and Universities, Southern

Association of Colleges and Schools Commission on Colleges, and the Western

Association of Schools and Colleges (Jackson et al., 2010).

Regional accreditation bodies act as intermediaries between the USDoE and

the institutions of higher education due to “constitutional limitations” (Jackson et al.,

2010, p. 10). Accreditation stands in for the federal government to ensure that

institutions that receive federal funding are capable of properly administering the

funds and have basic quality control (Ewell, 2011). Brittingham (2009, p. 22)

described the oversight of higher education quality as threefold:

…viewed through the lens of federal financial aid, institutions were overseen

by the triad: states for purposes of licensure and basic consumer protection, the

federal government for purposes of effective oversight of financial aid funds,

and recognized accreditors to ensure sufficient educational quality.

Post-secondary education is primarily under state regulation; however, the

federal government provides Title IV funds for students to attend institutions of higher

education (Brittingham, 2009). Because federal funding is a major funding source for

many institutions, federal regulations are applied to higher education organizations

(Brittingham, 2009; Ewell, 2011). Regional accrediting bodies are the primary

oversight bodies for the federal government, although the federal government does not


employ them, and accrediting agencies do not view themselves as federal law

enforcers (Brittingham, 2009). Regional accreditors are voluntary organizations,

composed of member institutions, which receive no federal funding (Donahoo & Lee,

2008).

The values that accrediting agencies embrace ensure that: a) higher education

institutions are the primary leaders for academic quality (Eaton, 2012), b) the

institutional mission is central to judgments of quality (Brittingham, 2009; Eaton,

2012; Sibolski, 2012), c) autonomy is essential to enhancing quality (Brittingham,

2009), d) academic freedom is not stifled (Eaton, 2012), and e) diversity of

institutional purpose and mission is upheld (Brittingham, 2009; Eaton, 2012).

Accrediting bodies are held accountable to the organizations that they represent, the

public, and governmental agencies (Brittingham, 2009; Eaton, 2012). Accreditors also

go through a periodic external review by CHEA or the USDoE (Eaton, 2012; Sibolski, 2012).

For higher education institutions, the process of regional accreditation involves a series of steps identified by the accrediting body: 1) the process begins with a self-study review, 2) continues with a peer review and subsequent site visit, and 3) concludes with a decision made by the accrediting agency (Brittingham,

2009; Eaton, 2012). Additionally, all regional accreditation is ongoing, meaning that

once an institution becomes accredited, the institution must undergo periodic review

(Brittingham, 2009; Eaton, 2012).

In summary, accreditation is the “public seal of approval” (Ewell, 2011, p. 26),

focuses on quality assurance and improvement (Jackson et al., 2010), and is a


voluntary, peer-review process (Brittingham, 2009). Regional accreditation allows institutions to prove that they are quality organizations worthy of educating students who receive federal financial aid (e.g., Eaton, 2012; Ewell, 2011; Jackson et

al., 2010). Regardless of the accrediting region or state a school is located in, regional

accreditation provides a basic level of quality assurance (Eaton, 2012). Regional

accreditation remains true to the organizations that it represents by ensuring those in

higher education are involved in the process of accreditation, and that all regional

accreditation reviews are conducted through the lens of the institution’s mission (Eaton,

2012).

SACSCOC Regional Accreditation

The SACSCOC is the regional accrediting agency responsible for oversight of colleges and universities in Alabama, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Texas, and Virginia, as well as institutions in Latin America and other international institutions that meet SACSCOC qualifications (SACSCOC, 2014a). SACSCOC institutions undergo reaffirmation of accreditation every 10 years (COCSACS, 2012) and an interim review every five years (SACSCOC,

2014a). Additionally, institutions are required to complete a special project aimed at

increasing student learning, known as a Quality Enhancement Plan (COCSACS,

2012).

SACSCOC Reaffirmation Process

The SACSCOC holds institutions accountable to 109 standards for each 10-

year review, known as reaffirmation of accreditation (COCSACS, 2012). This

reaffirmation of accreditation is initiated with a self-study, subsequent offsite peer


review, followed by an onsite peer review, and concluded with a decision from the

SACSCOC (COCSACS, 2012). The fifth-year interim review requires institutions to

respond to only 17 standards (SACSCOC, n.d.b). This review operates similarly to the

reaffirmation of accreditation report, with one exception: no onsite visit occurs during the fifth-year interim review.

This research study focuses on reaffirmation of accreditation, which occurs on a

decennial basis (COCSACS, 2012). Institutions begin the process of reaffirmation

with a self-study review of 109 principles or standards (COCSACS, 2012). Upon

completion of the self-review, known as the compliance certification, the institution

submits the review to an offsite committee, which determines whether the institution

complies with the standards (SACSCOC, n.d.a.). The offsite review committee

informs an onsite committee regarding the institution’s compliance with each standard

(SACSCOC, n.d.a). The next step of the reaffirmation of accreditation process is an

onsite SACSCOC committee visit, comprised of volunteers from similar institutions

(SACSCOC, n.d.a). At the conclusion of the onsite visit, the institution is given an exit

report, which may include a series of recommendations that the institution must

address in order to become reaffirmed (SACSCOC, n.d.a). The institution then has an

opportunity to address any recommendations prior to the Committee on Compliance

and Reports (C&R) review (SACSCOC, n.d.a). The C&R is a standing committee of

the SACSCOC and is responsible for recommending actions regarding reaffirmation

of accreditation to the Executive Council of the Commission (SACSCOC, n.d.a). The

SACSCOC Executive Council then recommends official action to the SACSCOC,

which then votes to reaffirm or sanction an institution (SACSCOC, n.d.a).


The SACSCOC reaffirmation of accreditation standards are divided into three

areas: 1) federal requirements, 2) core requirements, and 3) comprehensive standards

(COCSACS, 2012). The federal requirements encompass the federally mandated

criteria established by the USDoE (COCSACS, 2012). The core requirements reflect

broad-based, basic expectations that an institution must demonstrate in order to be

accredited with the SACSCOC (COCSACS, 2012). If an institution is found deficient

in any core requirement, the SACSCOC will order a negative sanction against the

institution (COCSACS, 2012). The comprehensive standards reflect standards that

focus on the operations of the institution and generally represent good practices in the

field (COCSACS, 2012). Any institution that is found to be deficient in a

comprehensive standard may be able to correct the deficiency prior to a negative

sanction (COCSACS, 2012). However, institutions can receive negative sanctions for

non-compliance with a comprehensive standard. Institutions are expected to

demonstrate compliance in all areas in order to become reaffirmed (COCSACS, 2012).

The SACSCOC was the first regional accrediting body to introduce institutional

effectiveness as an accreditation standard (Ewell, 2011). As of 2015, 11 principles of

accreditation are considered to comprise the institutional effectiveness standards (A. G. Matveev, personal communication, September 11, 2014). Institutional

effectiveness standards are found within core requirements, comprehensive standards,

and federal requirements sections of the Principles of Accreditation (SACSCOC,

2011b). These institutional effectiveness principles and their description are provided

in Table 1.3 and reflect the language of the SACSCOC (2011b).

Table 1.3


Principles of Accreditation Related to Institutional Effectiveness (SACSCOC, 2011b)

Core Requirements: basic, broad-based, foundational requirements that an institution must meet to be accredited with the Commission on Colleges.

2.4 The institution has a clearly defined, comprehensive, and published mission statement that is specific to the institution and appropriate for higher education. The mission addresses teaching and learning and, where applicable, research and public service. (Institutional Mission)

2.5 The institution engages in ongoing, integrated, and institution-wide research-based planning and evaluation processes that (1) incorporate a systematic review of institutional mission, goals, and outcomes; (2) result in continuing improvement in institutional quality; and (3) demonstrate the institution is effectively accomplishing its mission. (Institutional Effectiveness)

Comprehensive Standards: set forth requirements in the following four areas: (1) institutional mission, governance, and effectiveness; (2) programs; (3) resources; and (4) institutional responsibility for Commission policies. The Comprehensive Standards are more specific to the operations of the institution, represent good practice in higher education, and establish a level of accomplishment expected of all member institutions.

3.1.1 The mission statement is current and comprehensive, accurately guides the institution’s operations, is periodically reviewed and updated, is approved by the governing board, and is communicated to the institution’s constituencies. (Mission)

3.3.1 The institution identifies expected outcomes, assesses the extent to which it achieves these outcomes, and provides evidence of improvement based on analysis of the results in each of the following areas: (Institutional Effectiveness)
3.3.1.1 educational programs, to include student learning outcomes
3.3.1.2 administrative support services
3.3.1.3 academic and student support services
3.3.1.4 research within its mission, if appropriate
3.3.1.5 community/public service within its mission, if appropriate

3.4.7 The institution ensures the quality of educational programs and courses offered through consortial relationships or contractual agreements, ensures ongoing compliance with the Principles, and periodically evaluates the consortial relationship and/or agreement against the mission of the institution. (See Commission policy “Collaborative Academic Arrangements.”) (Consortial relationships/contractual agreements)

3.5.1 The institution identifies college-level general education competencies and the extent to which students have attained them. (General education competencies)

Federal Requirements: The U.S. Secretary of Education recognizes accreditation by SACS Commission on Colleges in establishing the eligibility of higher education institutions to participate in programs authorized under Title IV of the Higher Education Act, as amended, and other federal programs. Through its periodic review of institutions of higher education, the Commission assures the public that it is a reliable authority on the quality of education provided by its member institutions. The federal statute includes mandates that the Commission review an institution in accordance with criteria outlined in the federal regulations developed by the U.S. Department of Education. As part of the review process, institutions are required to document compliance with those criteria and the Commission is obligated to consider such compliance when the institution is reviewed for initial membership or continued accreditation.

4.1 The institution evaluates success with respect to student achievement consistent with its mission. Criteria may include: enrollment data; retention, graduation, course completion, and job placement rates; state licensing examinations; student portfolios; or other means of demonstrating achievement of goals. (Student achievement)

Note: (SACSCOC, 2011b).

After undergoing reaffirmation of accreditation, an institution may experience

a variety of outcomes. If the institution has successfully demonstrated compliance and

has no recommendations following the Committee on Compliance and Reports review, the

institution is likely to receive a status of reaffirmed. Institutions that have not demonstrated compliance with all accreditation standards may receive one of several negative consequences, which increase in order of severity (SACSCOC, 2013a). The least severe consequence is that the institution may be


placed on non-public monitoring status (SACSCOC, 2013a). The next step, increasing in severity, is for an institution to be placed on a public sanction (SACSCOC, 2013a). Possible public sanctions are warning status, probation status, or removal from accreditation membership (SACSCOC, 2013a).

An institution can move between these statuses if it does not make significant progress toward compliance. For example, an institution may be

placed initially on monitoring status, but if the institution does not demonstrate

compliance within two years, the institution could be moved to probation status

(SACSCOC, 2013a).

SACSCOC Problem Areas

According to the Director of Training and Research at the SACSCOC, principles frequently found out-of-compliance in the SACSCOC region include

standards related to faculty qualifications, institutional effectiveness, academic

program coordination, number of full-time faculty, intellectual property rights,

financial resources, general education competencies, quality enhancement plan,

financial stability, control of finances, and corporate structure (A. G. Matveev, personal communication, September 11, 2014). According to Manning (2011), “one

common strand across all six agencies is the high proportion of colleges receiving

recommendations in the institutional effectiveness areas” (p. 13). In 2013,

institutional effectiveness of educational programs was the second most cited principle

during the offsite review, with 64% of 75 institutions out-of-compliance (A. G. Matveev, personal communication, September 11, 2014). Further, institutional

effectiveness of educational programs was the most common principle found out-of-


compliance during the final phase of the reaffirmation process, the C&R review

(SACSCOC, 2011a, p. 1). The C&R review occurs after institutions have had time to

respond to any findings during the offsite or onsite visits (SACSCOC, n.d.a). Of the

most commonly cited principles in the C&R review, institutional effectiveness

principles represented six of the top 10 cited principles (A. G. Matveev, personal communication, September 11, 2014).
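Expressed as a count rather than a percentage (a derivation from the figures reported above, not a separately reported statistic), the 2013 offsite-review finding corresponds to

\[
0.64 \times 75 = 48
\]

institutions cited as out-of-compliance on institutional effectiveness of educational programs.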

Accreditation Benefits

Although accreditation has been identified as challenging, it offers a number of

benefits to post-secondary education (Brittingham, 2009; Head & Johnson, 2011;

Murray, 2002; Oden, 2009). Barbara Brittingham (2009), president of the Commission on Institutions of Higher Education of the New England Association of Schools and Colleges, stated that accreditation is cost-effective, offers professional development, works better than government regulation, and provides conditions for student mobility between institutions. Regional accreditation ensures a minimum threshold of

quality, which is a protective mechanism for students. Additionally, accreditation

allows students to receive federal financial aid, which is important because $238.5

billion in federal financial aid was disbursed to higher education students in the

2012-2013 academic year (CollegeBoard, 2013). Although some benefits of

accreditation are obvious, one of the less obvious benefits of accreditation is how the

process of going through reaffirmation of accreditation could influence and advance

the institution.

Accreditation provides the opportunity to examine the totality of the institution,

in order to determine if the college is on track to accomplish its mission (Oden, 2009).


According to Oden (2009), accreditation is unique because “it is among the only,

indeed perhaps the sole, opportunity we have to inquire together and in depth about the

entirety of what we aim to do” (p. 38). Accreditation requires institutional staff,

faculty, and administration to work together toward one common goal. According to

Head and Johnson (2011, p. 44), “At the heart of the Commission’s philosophy of

accreditation, the concept of quality enhancement presumes each member institution to

be engaged in an ongoing program of improvement and be able to demonstrate how

well it fulfills its stated mission.” The foundation of regional accreditation is to

embrace continuous improvement. Regional accreditation allows institutions to

examine the organizational culture, processes, policies, and services (Head & Johnson,

2011). This deep examination of the institution contributes to better functioning

organizations, provided those involved engage in the process in a meaningful way

(Head & Johnson, 2011).

Murray (2004) surveyed 198 SACSCOC community college presidents; the study revealed that most survey participants found that going through the accreditation process resulted in positive changes in educational programs. The survey further revealed that institutional effectiveness was the area most

affected by accreditation; institutional effectiveness yielded the most positive benefits

for the colleges; and the accreditation process led to continuous improvement (Murray,

2004).

Sandmann, Williams, and Abrams (2009) conducted a case study analysis on

how two universities across two different accrediting regions used accreditation to

increase institutional engagement. The researchers used constant comparative analysis


to identify themes and patterns, which indicated that accreditation was the impetus for

greater engagement for both internal and external constituents (Sandmann et al.,

2009). Further, study findings indicated that “intentionally linking engagement and

accreditation can lead to organizational improvement” (Sandmann et al., 2009, p.

25).

Accreditation Challenges

A review of the literature revealed a number of challenges with respect to

accreditation (e.g., Cooper & Terrell, 2013; Hartle, 2012; Hulon, 2000; Murray, 2002;

Oden, 2009; Powell, 2013). Some have challenged whether regional accreditation is

capable of ensuring quality in higher education (e.g., Hartle, 2012; Neal, 2008). The

cost of accreditation has been described as unsustainable (Hartle, 2012), encompassing

financial expenses (Cooper & Terrell, 2013; Hartle, 2012; Hulon, 2000), the time

spent on accreditation activities (Cooper & Terrell, 2013; Hartle, 2012; Hulon, 2000),

and human resources (Murray, 2002; Oden, 2009). Further, a lack of appropriate resources (Chapman, 2007; Young, 2013; Young et al., 1983), fear of the results of accreditation (Young, 2013; Young et al., 1983), and a lack of clarity from accrediting bodies (Baker, 2002; Ewell, 2011; Manning, 2011; Powell, 2013; Young, 2013; Young et al., 1983) are barriers to successful accreditation experiences.

Due to the importance of college accessibility and the significant amount of

federal funds allocated to higher education, regional accreditation has landed under

intense scrutiny (Brittingham, 2009). The occurrence of a “few bad actors” in higher

education led to a rise in student loan default rates, allegations of fraud and abuse, and

concerns over regional accreditors by the federal government (Brittingham, 2009, p.


22). According to Brittingham (2009), under state watch, some states were unable to

set a “reasonable minimum bar” for higher education quality, thus allowing low

quality or “in some cases, degree mills” to operate (Brittingham, 2009, p. 22). In

2006, the U.S. Secretary of Education’s Commission criticized accreditors for

remaining secretive, having insufficient accountability mechanisms, and impeding

innovation (U.S. Department of Education, 2006). According to the president of the

American Council of Trustees and Alumni, Anne Neal (2008), “on the accreditors’

watch, the quality of higher education is slipping” (p. 26). Neal described accreditation as part of the problem, citing higher education institutions lacking rigor within general education, a decline in prose literacy, grade inflation, a lawsuit against a professionally accredited institution, successful non-accredited programs, and other negative experiences that institutions had with regional accrediting bodies. According to Neal (2008), “the accreditation process

suffers from structural problems: secrecy, low standards, and little interest in learning

outcomes” (p. 27). Although the aforementioned issues highlight the national issues

that regional accreditors experience, institutions of higher education experience

challenges when undergoing accreditation as well.

One of the challenges that institutions face when undergoing regional

accreditation is the financial aspect of accreditation (e.g., Bardo, 2009; Cooper &

Terrell, 2013; Hartle, 2012; Wood, 2006; Woolston, 2012). Hartle (2012) argued that

the overall cost of accreditation has reached an unsustainable level. Similarly, Bardo

(2009) recommended that institutions increase the amount of funding allocated to

institutional effectiveness in order to meet the demands of accreditation. For some


institutions, the financial expenses associated with accreditation, and specifically with demonstrating institutional effectiveness, are substantial (Cooper & Terrell, 2013;

Hartle, 2012). For example, Cooper and Terrell (2013) surveyed 2,348 institutional research professionals across all regional accrediting regions, resulting in a 12.5% response rate, with 248 participants completing the survey in its entirety. The results of the study indicated that in 2012-2013 the nationwide average amount that institutions spent on assessment of student learning outcomes was $160,000, including $8,400 on assessment software (Cooper & Terrell, 2013).
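As a brief arithmetic check (an inference from the figures above; the source does not report the numbers in this breakdown), the 12.5% response rate appears to count partial as well as complete responses, since complete responses alone would yield a lower rate:

\[
0.125 \times 2{,}348 \approx 294 \text{ total respondents}, \qquad \frac{248}{2{,}348} \approx 10.6\%\ \text{complete responses}
\]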

Beyond the financial cost, regional accreditation costs institutions the dedicated time needed to sustain accreditation activities. Woolston (2012) conducted a mixed-methods dissertation research study on the direct and indirect costs of

institutional accreditation as perceived by institutional accreditation liaisons and

determined that “the cost of accreditation to institutions is significant but is more

exacting in terms of time than money” (Woolston, 2012, p. 83). Wood (2006) conducted a case study that used interviews and observations to examine three stages of planning for accreditation. Wood (2006) found both indirect costs of accreditation, including release time necessary to prepare for the accreditation review, and direct costs, including professional development, support staff, supplies, and the accreditation team’s onsite visit. Accreditation is often viewed as unproductive

and overly “onerous” (Oden, 2009, p. 40). Further, the process often requires

significant time, energy, and labor (Murray, 2004; Oden, 2009). The time from

preparation of the self-study to the decision from the accrediting body can take several


years (Murray, 2004; Oden, 2009). This time away from regular job duties and the intensely sustained focus make the process quite cumbersome (Murray, 2004).

Another challenge identified in the literature involves barriers to completion of

the self-study during reaffirmation (Chapman, 2007; Hulon, 2000; Young, 2013;

Young et al., 1983). In 1983, researchers identified barriers to successful completion

of the accreditation self-study (Young et al., 1983). The barriers identified were

overly complex information required by the accrediting body, a lack of documentation

on the part of the institution, and a lack of training on accreditation expectations

(Young et al., 1983). The fear of the end results of accreditation hampered the ability to do a thorough and thoughtful review of the institution, and the usefulness of the accreditation process was questioned (Young et al., 1983). According to Young (2013), many of these barriers still existed as recently as 2013. Chapman (2007) conducted a

mixed-methods research study on faculty and administrators at five community

colleges in the SACSCOC region, and the results indicated that institutions did not

provide adequate time, resources, or training for those involved in the self-study

process. The study further revealed that limited support existed for those responsible

for completing the self-study (Chapman, 2007). In a case study that utilized document

analysis and interviews of 14 current and former employees at a rural SACSCOC

community college, the primary drivers of negative views of accreditation were the

increased workload and financial expenses associated with going through the

accreditation process (Hulon, 2000).

Complicating the accreditation process is the lack of a common definition and

prescribed standards on demonstration of institutional effectiveness (Ewell, 2011;


Manning, 2011). Institutions of higher education are incredibly diverse in their

missions. This is especially true for community colleges, which serve multiple

missions (e.g., academic, technical, workforce). Regional accreditation can be

complicated due to a lack of prescribed definitions of expectations (Baker, 2002).

Because regional accrediting bodies leave room for institutions to determine their own

definitions of institutional effectiveness through the lens of their mission, challenges in

meeting accreditation expectations can occur (Baker, 2002). Each college is

responsible for demonstrating that the college is accomplishing its unique mission

(Baker, 2002). Defining effectiveness for community colleges is difficult due to the

multiple and unique missions of each college and the different interpretations

encountered when applying institutional effectiveness to community colleges (Ewell,

2011).

Powell (2013) conducted 21 interviews with a diverse participant group, which

included 11 national and regional accrediting agency senior staff, two oversight board

member staff, two higher education lobbyists, two presidents of non-profit institutions,

three higher education administrators, and an assessment leader at a large public

university. Powell’s (2013) findings indicated that, overwhelmingly, a lack of understanding of accreditation and accrediting bodies existed. This point was

further illustrated by one respondent, who stated:

One of the challenges is that people don’t understand accreditation. It is seen

as somewhat obscure and opaque. Another challenge is that institutions


have more moving parts now than they used to. Just figuring out how to do

assessment is a challenge; the technology alone is a piece that is not very

highly developed. (Powell, 2013, p. 62)

Accreditation and Institutional Effectiveness

Institutional effectiveness and accreditation have a closely intertwined

relationship. According to Head and Johnson (2011), the relationship is such that “it is

fair to say that the concept of institutional effectiveness permeates the entire

accreditation process” (p. 44). Accrediting bodies review institutions based on the

institution’s mission (Eckel, 2008). Although many definitions of institutional

effectiveness exist, in terms of accreditation in the SACSCOC region, institutional

effectiveness is a systematic review of the mission, resulting in continuous

improvement in institutional quality, and demonstration that the institution is

accomplishing its mission (SACSCOC, 2011b). Because of the mission-centric focus,

the term institutional effectiveness was introduced by the SACSCOC in 1984 as a way

for institutions to prove that they were accomplishing their intended mission (Head,

2011). For the SACSCOC region, institutional effectiveness is covered under both a

core requirement and a comprehensive standard (COCSACS, 2012). Head and

Johnson (2011) argued that every standard covered by the SACSCOC could be viewed

as an institutional effectiveness standard.

Despite the importance of institutional effectiveness, the definition is seldom

consistent between organizations (Head, 2011). Although the SACSCOC began

requiring demonstration of institutional effectiveness in 1984, the SACSCOC did not

provide a definition of institutional effectiveness until 2005, when it was defined in a


resource guide for institutions (Head, 2011). The lack of a clear definition contributed

to the difficulty in proving institutional effectiveness, because institutions interpret the

concept of institutional effectiveness differently.

Researchers have identified difficulties in defining institutional effectiveness,

but in general described institutional effectiveness as a comprehensive, umbrella term

encompassing assessment, evaluation, and institutional research (e.g., Head, 2011;

Manning, 2011). Further, these researchers agree that institutional effectiveness is

shaped by the context of the regional accreditor (e.g., Head, 2011; Manning, 2011).

According to the SACSCOC Principles of Accreditation (COCSACS, 2012, p. 18):

The institution engages in ongoing, integrated, and institution-wide research-

based planning and evaluation processes that (1) incorporate a systematic

review of institutional mission, goals, and outcomes; (2) result in continuing

improvement in institutional quality; and (3) demonstrate the institution is

effectively accomplishing its mission.

Although defining institutional effectiveness is challenging for many higher

education institutions, it is even more challenging for community colleges due to

multi-faceted missions (Ewell, 2011). Community college missions often focus on

general education, workforce, college preparation, non-credit instruction, contract

training, continuing education, and public service (Cohen et al., 2014; Ewell, 2011).

Additionally, because colleges operate independently from each other, and missions

have expanded, defining institutional effectiveness for community colleges is further

complicated (Eckel, 2008; Ewell, 2011; Floyd & Antczak, 2009).


In addition to the complex missions and difficulty in defining institutional

effectiveness, research has shown that the level of knowledge, support, participation,

and perceived usefulness of institutional effectiveness was lacking among some

stakeholder groups (Skolits & Graybeal, 2007). Skolits and Graybeal (2007) conducted a mixed-methods case study of 138 full-time employees at one comprehensive community college in order to assess institutional effectiveness knowledge across different stakeholder groups. The study further revealed that great

disparities existed among leadership, faculty, and staff groups with respect to

institutional effectiveness (Skolits & Graybeal, 2007). Insufficient time dedicated to

institutional effectiveness activities was the greatest barrier to increased knowledge

and expertise (Skolits & Graybeal, 2007). The results of the study also suggested that

the expectations were unsustainable, and that stakeholders needed more analytical and

data support (Skolits & Graybeal, 2007).

Despite the challenges with demonstrating institutional effectiveness,

community colleges have made gains in this arena (Ewell, 2011). A number of

community colleges have recognized the importance of institutional effectiveness and

have subsequently increased institutional research capacity (Allen & Kazis, 2007;

Ewell, 2011). Allen and Kazis (2007) examined four high-performing community colleges and found that these colleges ensured that relevant data were provided to a broad range of stakeholders, beyond top-level leadership.

Institutions have been expected to prove institutional effectiveness for nearly

30 years, but are still challenged within this area (Ewell, 2011; Manning, 2011).

Institutions are frequently found out-of-compliance within the institutional


effectiveness areas, even after several decades of attempting to demonstrate

institutional effectiveness (Manning, 2011). The lack of a clear definition between

institutions, the complex mission of community colleges, and the lack of

understanding within the organization contribute to the challenges in demonstrating

institutional effectiveness for community colleges (Ewell, 2011; Manning, 2011).

Although a number of challenges exist regarding defining and proving institutional

effectiveness, colleges and the SACSCOC are making improvements in this area

(Ewell, 2011).

Practices Leading to Reaffirmation of Accreditation

The act of going through regional accreditation is a rigorous and time-

consuming activity, requiring institutional capacity (Oden, 2009). Researchers

studying standards within accreditation have identified tangible resources, intangible

resources, and leadership as necessary to institutionalizing accreditation standards

(e.g., Allen & Kazis, 2007; Kezar, 2013; Oden, 2009; Young, 2013). Tangible

resources identified in the literature include money, technology, and human resources

(Young, 2013). Intangible resources identified in the literature include people and

their respective roles, culture, organizational structure, policies and procedures, and

time (Allen & Kazis, 2007; Kezar, 2013; Oden, 2009). Leadership has been found to influence the outcomes of accreditation (Oden, 2009; Young, 2013), the institutionalization of accreditation activities, institutional capacity (Allen & Kazis, 2007), and culture (Kezar, 2013).


Tangible Resources

Tangible resources relevant to institutionalizing accreditation are financial,

human, and technological resources (Alfred et al., 2009). These resources heavily

influence institutional leaders’ decision-making ability. Each of these tangible

resources was identified in the literature as important to reaffirmation of accreditation

(Alfred et al., 2009; Nguyen, 2005; Oden, 2009; Young, 2013).

Financial. Alfred et al. (2009) indicated that financial resources are a concern

for community colleges due to declines in state funding. The literature on practices leading to successful accreditation indicated that financial investment was necessary (Bardo, 2009; Young, 2013). Young (2013) found that the accreditation process was enhanced by adequately resourcing the team conducting the initial self-study; a proper budget and resources allowed the team to function at a high level.

Human resources. According to Alfred et al. (2009), community colleges

must hire, develop, and retain staff with the appropriate skills and experience in a fast-

paced economy. Overall, community colleges rely heavily on part-time faculty and

staff to meet their current demands (Alfred et al., 2009). The literature on

accreditation has identified knowledge of the institution and the SACSCOC as critical

components of accreditation leadership (Young, 2013). Reliance on part-time staff

may indicate a future challenge for accreditation within community colleges (Alfred et

al., 2009). The over-reliance on part-time staff leads to a transient staffing population,

which would pose a significant challenge when conducting a rigorous self-study for

reaffirmation if institutional knowledge were not retained (Alfred et al., 2009; Young,


2013). Further, the over-reliance on adjunct faculty could make it difficult for institutions to maintain compliance with standard “2.8 - The number of full-time

faculty members is adequate to support the mission of the institution and to ensure the

quality and integrity of each of its academic programs” (COCSACS, 2012, p. 20).

Technological resources. Alfred et al. (2009) described technology as an

important resource for community colleges and stated that community colleges should

improve their use of technology. Further, the researchers questioned whether

community colleges had the ability to invest in technology at the necessary level

(Alfred et al., 2009). This finding is consistent with other researchers who have

identified the use of technology as important for institutionalizing accreditation

(Nguyen, 2005; Oden, 2009; Powell, 2013).

Oden (2009) recommended taking advantage of enhanced technology during

the reaffirmation of accreditation process for both the institutions putting together a

self-study and for the peer review teams gathered by the regional accrediting bodies.

Nguyen (2005) found that institutions with a web-based management system for the

self-study were able to disseminate knowledge across the university, and the website

aided in the process of reaffirmation. Nguyen (2005) also recommended that

institutions build a historical database with digital documents to build institutional

capacity. Further, the literature indicated that institutions should utilize technology to aid

communication and collaboration during the self-study (Nguyen, 2005; Young, 2013).

Intangible Resources

Alfred et al. (2009) described intangible resources as the non-quantifiable

resources that reflect the behind-the-scenes aspect of the organization. These


resources help organizations, but are not direct material resources. Though indirect, intangible resources were identified as important for successful accreditation (Young, 2013). Young found that successful SACSCOC

preparation teams were “directly linked to members’ ability to leverage the intangible

resources” (Young, 2013, p. 76). The intangible resources identified in the

accreditation literature include people, culture, organizational structure, professional

development, processes and policies, and time (Alfred et al., 2009; Kezar, 2013;

Tharp, 2012; Young, 2013).

Culture. Culture is defined as “the values and beliefs that are shared by most

of the people at an institution” (Alfred et al., 2009, p. 86). Interestingly, Kezar (2013) noted that although leadership is an often-identified factor in the literature, leadership and culture overlap considerably. Further, Kezar (2013) found that studies that identified leadership as a central issue might have been mislabeling culture as leadership. Culture is especially important because it influences staff and

leadership behavior (Alfred et al., 2009).

Young (2013) found that the community college president heavily influenced

the culture of the institution and the SACSCOC preparation team. In Young’s (2013)

study, the president took accreditation seriously and set a standard of excellence.

Subsequently, the SACSCOC preparation team embraced high standards.

Additionally, Young (2013) found that the culture of the team led to high performance. Young (2013) described the culture of the team as one that emphasized the importance of advance planning and ensured decisions were made based on data.


Sandmann et al. (2009) recommended viewing accreditation as a means to

improve the future welfare of the organization. This suggests a culture change for

institutions that view accreditation as an act of compliance. Institutions in which the culture focused on accreditation as a series of best practices rather than compliance experienced better accreditation outcomes (Head & Johnson, 2011; Sandmann et al., 2009).

As institutions consider shifting to a culture of continuous improvement, colleges need

to find ways to involve the right people at the institution (Alfred et al., 2009; Tharp,

2012). Using a qualitative comparative case study, Tharp (2012) studied four community colleges in California that had received a negative regional accreditation finding. The study results indicated that getting the right people, in the right numbers, was necessary for sustaining motivation on accreditation activities and obtaining a culture shift (Tharp, 2012). Further, Tharp (2012) found that schools that emphasized

accreditation for compliance purposes did not consistently enforce processes and

policies and had negative accreditation findings.

Data-informed culture. Skolits and Graybeal (2007) found that insufficient

time, analytical support, and data support were barriers to accomplishing and proving

institutional effectiveness. Further, Skolits and Graybeal (2007) suggested that

colleges need to embrace a culture of data-based decision-making, which provides

faculty and staff with the resources, including time, to dedicate to improving the

organization. In Young’s (2013) study, a high-performing team described data-informed decision-making as central to the team’s culture of excellence. Allen and

Kazis (2007) found that inclusion of the institutional research office in the center of

planning and budgeting leads to a culture change and organizational improvement.


Embracing a data-informed decision-making culture may aid in institutionalizing

accreditation practices (Allen & Kazis, 2007; Skolits & Graybeal, 2007; Young,

2013).

Organizational structure. Allen and Kazis (2007) studied four community

colleges that effectively used data for decision-making purposes. The results of the

study indicated that a strong institutional research office and inclusion of the office in decision-making bodies were critical to creating and sustaining a culture of data-based decision-making (Allen & Kazis, 2007). Offices of institutional research are

responsible for data collection and reporting for institutions of higher education (Allen

& Kazis, 2007). Studies further found that connecting institutional research to strategy

and planning in relation to budget decisions increased organizational effectiveness

(Allen & Kazis, 2007; Lattimore, D’Amico, & Hancock, 2012). Researchers found

that institutional research offices could be used for compliance and for strengthening

institutional use of data for improvement (Allen & Kazis, 2007). Further, the study

indicated that top-level community college leaders must show continuous support for

the institutional research office (Allen & Kazis, 2007).

Accountability should occur at all levels of the organizational structure. Lattimore, D’Amico, and Hancock (2012) found that data-based decision-making must occur at the departmental level and should align with strategic goals. Further, placing accountability for financial strategic goals on departmental units leads to more data-driven organizations. Lattimore et al. (2012) stated that accreditation goals should be

incorporated into the college and that accountability should occur at all organizational

levels.


Role definition. Institutions should consistently and clearly define organizational roles (Alfred et al., 2012; Tharp, 2012; Young, 2013). Discrepancy

between expectations and reality was one of the most frequently cited causes of

employee dissatisfaction in community college hiring practices (Alfred et al., 2009).

The notion of consistent role definition applies to accreditation practices as well.

Tharp (2012) found that schools with clearly defined labor roles and less overall

conflict had better accreditation outcomes. These results were consistent with the

conclusions from Young (2013) that high-performing accreditation teams were likely

to have clearly defined roles.

Human capacity. Human capacity involves personnel and the knowledge,

skills, and abilities that each employee brings to an organization (Alfred et al., 2009).

According to Alfred et al. (2009), hiring the right people is one of the most important

intangible resources. Young (2013) found that team effectiveness was enhanced by

having the right people with the right knowledge and skills in the right positions.

Community colleges should provide development to their employees in order to ensure

that faculty and staff have the necessary abilities to contribute to the institution’s

overall capacity (Alfred et al., 2009).

Professional development. Professional development is an organization’s

way of communicating to its employees that they are valuable and worth investing in

(Alfred et al., 2009). Despite the many positive findings associated with professional

development, it is often the first item cut during times of limited budgets in

enrollment-based models of education funding (Wallin & Smith, 2005).


Murray (2002) surveyed 236 chief academic officers at community colleges in the SACSCOC region to determine the use of professional development in SACSCOC schools. Murray (2002) found that financial support for attending

professional conferences was the only professional development activity utilized by all

of the responding institutions. Young (2013) found that internally developed

professional development contributed to a successful self-study team at one institution.

Areas in need of professional development. Wallin and Smith (2005) surveyed

faculty regarding their involvement in college-level decisions. Faculty felt that

serving on committees dedicated to improvement of the college was less relevant to

their work as faculty. Overall, faculty did not see administrative work as critical to the

teaching mission (Wallin & Smith, 2005). Faculty indicated that participating in

innovative program development was an area of interest, but also an area where

faculty had low confidence in their ability (Wallin & Smith, 2005). Kuh and

Ikenberry (2009) surveyed post-secondary institutions and found that 62% of institutions reported that more expertise was needed in order to improve assessment

practices. These findings indicate areas where professional growth is needed in higher

education. The findings also support the need for additional professional development

in creating institutional effectiveness capacity.

Serving on peer review teams. Peer review has been a fundamental value embraced by the higher education field since its inception (Brittingham, 2009). The founding of

regional accreditation was based on this fundamental value (Brittingham, 2009). As

such, accreditation is a volunteer-based and peer-reviewed process (Brittingham,


2009). Accrediting agencies need volunteers from higher education institutions to

serve on peer review teams.

Serving on peer review teams has a dual purpose, benefitting both the individual on the peer review committee and the institution with which the individual is affiliated (McGuire, 2009; Oden, 2009). The individual benefits from an increased

knowledge of the diversity of higher education, cross-institutional collaboration, a

behind-the-scenes look at accreditation, new organizational structures, new models,

and knowledge of other institutions’ practices in the field (McGuire, 2009; Oden,

2009). The higher education institution benefits because the individual learns about

many best practices, which are subsequently brought back to the institution (McGuire,

2009). Further, serving on a peer review committee is one way for individuals to give

back to the practice of education (Oden, 2009).

In summary, the literature reveals a number of tangible resources, intangible

resources, and leadership qualities that enhance institutional effectiveness and

accreditation. A larger body of literature exists regarding leadership (Alfred, 2012;

Kezar, 2013; Oden, 2009; Young, 2013), and tangible resources (Alfred et al., 2009;

Bardo, 2009; Nguyen, 2005; Oden, 2009; Powell, 2013; Young, 2013) compared to

intangible resources (Allen & Kazis, 2007; Kezar, 2013; Oden, 2009). However, the

literature on intangible resources revealed that these resources are critical to

organizational performance (Allen & Kazis, 2007; Kezar, 2013; Oden, 2009).

Leadership

As described by the framework for institutional capacity, an institution’s ability

to accomplish tasks is related to its overall capacity (Alfred et al., 2009). This overall


capacity is influenced by decisions made regarding the allocation of resources (Alfred

et al., 2009). Leadership is an area repeatedly identified in the literature review as a

significant factor in accreditation (Alfred, 2012; Kezar, 2013; Oden, 2009; Young,

2013). Alfred et al. (2009) defined a leader as “virtually anyone in a position to make

a decision about and deploy resources” (p. 100). This implies that leadership exists

across an institution at many levels, including executive officers, deans, department

chairs, staff unit leaders, faculty, and support staff (Alfred et al., 2009).

Reaffirmation of accreditation provides the opportunity for an organization to

develop future leaders (Alfred et al., 2009; Young, 2013). According to Alfred

(2012), community colleges must rethink the overall approach to leadership and

should consider ways in which to develop future leaders from inside the organization.

Current leaders can embrace a “lead from behind” approach by empowering

individuals, regardless of leadership level, inside the organization to take on leadership

roles (Alfred, 2012, p. 119). Accreditation provides an opportunity for institutions to

embrace the lead from behind approach.

Young (2013) conducted a case study analysis of a successful accreditation self-study team at a community college in the SACSCOC region. The findings from

Young’s (2013) study revealed that the accreditation preparation team perceived that

strong, involved leadership was a critical component of completing the accreditation

process. Although the college president’s leadership was identified as important, the

self-study team leader was identified as the most important leader (Young,

2013). Young (2013) further identified team member abilities and behaviors as critical to the overall team performance and found that the team was


influenced by decisions made at various levels of leadership. The findings from

Young (2013) and Alfred et al. (2009) indicated that leadership is important at all

levels of the organization.

Leadership qualities. Researchers interested in understanding leadership

styles leading to successful reaffirmation have identified communication,

transparency, organizational skills, knowledge, and resource management as critical

abilities of leaders (Oden, 2009; Young, 2013). Oden (2009) found that embracing the

reaffirmation of accreditation process as a chance to improve the organization by using

open and honest communication was important. Young (2013) found that the qualities that the SACSCOC preparation team felt made for a good leader were organizational skills, detail-orientation, knowledge of the institution, and knowledge of the SACSCOC requirements. Further, Young (2013) found that leaders who were able to increase resources and build a culture of trust contributed to successful completion of the accreditation process.

In summary, the literature indicates that in order to institutionalize accreditation

requirements, institutions should consider leadership at all levels of the organization

(Alfred et al., 2009; Young, 2013). Further, community colleges could use the

accreditation process as an opportunity to build future leaders. Future leaders

interested in improving accreditation should have strong organizational skills,

institutional knowledge, knowledge of the SACSCOC accreditation, and should

adequately resource accreditation activities (Young, 2013). Last, institutions should recognize that leadership is a critical aspect of managing resources and influencing accreditation outcomes (Alfred et al., 2009; Young, 2013).


Conceptual Framework of the Study

The conceptual framework for this study is based on the “framework for

institutional capacity” developed by Alfred et al. (2009, p. 77). Capacity is defined as “how well a college performs” (p. 77) and comprises three components: 1) tangible resources, 2) intangible resources, and 3) leadership (Alfred et al., 2009).

Tangible resources are material resources that a college utilizes to achieve goals

(Alfred et al., 2009). Intangible resources are the non-material resources that aid or

hinder a college from achieving desired goals (Alfred et al., 2009). Intangible

resources include culture, processes, and staff capabilities. Leadership determines

how the resources are developed or utilized to achieve goals (Alfred et al., 2009).

Tangible Resources

The tangible resources that are relevant to the study include money, human

resources, and technology. Financial resources are necessary in order to achieve

specific goals (Alfred et al., 2009; Bardo, 2009). Colleges’ expenditures have

increased due to the rising costs of technology and personnel (Alfred et al., 2009;

Bergeron, Baylor, & Flores, 2014). Community colleges’ financial resources are in a continuous state of decline, while calls for accountability are on the rise (Alfred et al., 2009; Bergeron et al., 2014). The rising costs associated with technology (Cooper & Terrell, 2013; Powell, 2013), personnel (Oden, 2009), and accountability (Bardo, 2009; Cooper & Terrell, 2013; Hartle, 2012; Hulon, 2000) are sources of concern, which jeopardize community colleges’ institutional capacity (Alfred et al., 2009).

Investment in human resources is one of the most important investments that

an institution can make (Alfred et al., 2009). According to Alfred et al. (2009), the

demand for talented administrators and staff will exceed the supply, and community

colleges will have to compete for talent inside and outside of higher education. In

order to thrive, colleges will need to recruit, develop, and retain staff with the ability to succeed in a resource-limited community college environment (Alfred et al., 2009). According to Alfred et al. (2009), community college staff must work “quickly

and collectively with current information” (Alfred et al., 2009, p. 83). The use of

“rapid learning technology” and continuous development of personnel is necessary for

community college capacity (Alfred et al., 2009, p. 83).

Technology is another tangible resource believed to play an important role in college functioning (e.g., Alfred et al., 2009; Nguyen, 2005; Oden, 2009; Powell,

2013). Community colleges are lagging behind in investing in technology and in

ensuring staff have the technological skills needed to thrive in a digitally driven

society (Alfred et al., 2009). However, in order for colleges to reach their intended

goals, colleges must invest in and embrace current technological infrastructure (Alfred

et al., 2009; Powell, 2013). According to Alfred et al. (2009), the solution to

technology investment challenges is “to invest strategically, with a constant focus on

institution-wide goals” (p. 85). Technology is no longer a luxury; rather, it is as much a necessity to organizational functioning as electricity (Alfred et al., 2009).

Intangible Resources

Alfred et al. (2009) described people, culture, organizational structure, time,

and systems as intangible resources that determine an institution’s overall capacity.

These intangible resources are the non-quantifiable resources that reflect the

background of the organization. These resources assist organizational functioning but are not direct resources.

People comprise one of the major elements of intangible resources, but are

different from the human resource element described as a tangible resource. The

people in an organization represent the overall knowledge the institution holds (Alfred

et al., 2009). This intangible resource reflects the “nature of the workforce in a

college, including competencies, work experience and skills, tacit knowledge, needs

and expectations, perceptions, diversity, and satisfaction” (Alfred et al., 2009, p. 86).

The overall knowledge, abilities, and competencies of the people within an

organization contribute to one aspect of the institution’s overall capacity.

Another intangible and not easily measurable resource is the organizational

culture of the college. Culture is defined as “the values and beliefs that are shared by

most of the people at an institution” (Alfred et al., 2009, p. 86). Culture is especially

important because it influences staff and leadership behavior (Alfred et al., 2009;

Kezar, 2013). Changing organizational culture is well documented as a difficult task (Alfred et al., 2009; Kezar, 2013). Regardless, the culture of the college plays an

important role in the overall effectiveness and capacity of an organization.

The organizational structure is another intangible resource that influences the

overall capacity of an organization (Kezar, 2013). Alfred et al. (2009) argued that

community colleges have become more bureaucratic in nature, which has led to

increased layers and complexity. Layering is intended to aid organizations in response

times, but is not effective until all institutional systems and processes are updated and

responsive (Alfred et al., 2009).

Time is an intangible resource that “provides the opportunity to disengage from

operations and work with the big picture” (Alfred et al., 2009, p. 95). Time is a

limited resource for community college leaders and staff because of a nearly constant

flow of information due to technology. Time is even more restrictive in educational

systems due to the collaborative nature of colleges and the variety of stakeholders

(Alfred et al., 2009). Quick and efficient systems and employees contribute to the overall capacity of an institution; ensuring time is available to contemplate the big picture is an equally important intangible resource.

The systems, policies, and procedures of colleges comprise the final segment

of intangible resources. These systems and policies are necessary to ensure consistency

across the organization, but can cause problems when they are not adequately updated

or enforced (Tharp, 2012). The inconsistent application and updating of policies can

cause barriers to organizational performance (Alfred et al., 2009).

Leadership

According to the framework for institutional capacity, institutional capacity is

influenced by leaders’ abilities to leverage tangible and intangible resources (Alfred et

al., 2009). Where tangible and intangible resources reflect the overall capacity of an

organization, the decisions made on how to utilize those resources determine the

institution’s overall effectiveness (Alfred et al., 2009). Kezar (2013) has proposed that

understanding how leadership is leveraged is critical to understanding “dynamics that

lead to change” (p. 191). According to Alfred et al. (2009), a leader is an institutional decision maker who determines how resources are utilized. Although leadership is

often viewed from a top-down approach, leadership exists across organizations at a

variety of levels (Alfred et al., 2009; Kezar, 2013; Lattimore et al., 2012).

The conceptual framework serves as the foundation for the study. The

framework is guided by three components that shape college capacity: tangible

resources, intangible resources, and leadership. The conceptual framework is appropriate for the study because it provides the context for understanding the elements that contribute to effective organizations.

Summary

This chapter reviewed the history and process of accreditation in higher

education, the connection between accreditation and institutional effectiveness, the

conceptual framework, and practices leading to reaffirmation of accreditation.

Further, the chapter related the framework for capacity to issues identified in

demonstrating institutional effectiveness and successful completion of the

reaffirmation of accreditation process. Previous research has identified a number of practices related to successful reaffirmation of accreditation. The research uncovered in the literature is consistent with the framework for institutional capacity.

Institutions of higher education undergo reaffirmation of accreditation in order to demonstrate quality operations, to allow attending students to receive federal financial aid, and to continuously improve. The research study was designed to aid

institutions in using reaffirmation of accreditation to continuously improve and remain

in good standing with the accrediting body. The study sought to provide institutions

with a series of potential improvements, which may aid institutions in decision-making

within the institutional effectiveness realm of accreditation in the SACSCOC region.

The following chapter, Chapter III, presents the research methods and design

for the quantitative study. The chapter includes the specific details and methodology

used to conduct the study.

CHAPTER III

METHODOLOGY

Chapter III describes the methodology and research design for this quantitative

study. This chapter is organized into the following sections: 1) restatement of purpose

of the study, 2) restatement of research questions, 3) research design, 4) data

collection, 5) data analysis, and 6) reliability and validity of the instrument.

Restatement of Purpose of the Study

The purpose of this study was to analyze the role of the SACSCOC

recommendations and institutional changes based on the perceptions of the SACSCOC

liaisons or institutional effectiveness personnel at community colleges in the SACSCOC region that underwent reaffirmation from 2011 through 2015. This study sought to understand and examine the positive changes within

institutional effectiveness that occurred because of the reaffirmation of accreditation

process. The results of the study are intended to aid community colleges in preparing for regional accreditation, and specifically in institutionalizing processes that are known to enhance institutional effectiveness. Further, the study’s

results are applicable to the work of the SACSCOC because it has a stake in

understanding how recommendations change institutional behaviors, particularly the

behaviors of community colleges.

Restatement of Research Questions

The study was guided by the following research questions:

1) What is the statistically significant relationship between the independent

variable of community colleges that receive SACSCOC recommendations and

the dependent variable of overall (or total) level of perceived change or

improvement?

2) What is the statistically significant relationship between the independent

variable of ‘severity of recommendations’ group membership and the

dependent variable of levels of perceived changes or improvements?

3) Which factors (independent variables) best predict the overall level of

institutional change or improvement (dependent variables)?

4) Which factors (independent variables) best predict the severity of

recommendations received by the institutions (dependent variables)?

Research Design

This quantitative research study used a group-comparison, non-experimental

design. According to Creswell (2014), quantitative research is appropriate for

examining relationships among variables and for generalizing results. Further,

quantitative research is used to explore causal-comparative relationships, where

groups are compared against an independent variable that has already occurred

(Creswell, 2014). In group-comparison studies, either groups are compared at

approximately the same time, or the history of the groups is compared based on a

particular group outcome (Day, 1989). Quantitative studies are also used for

correlational design studies where “investigators use the correlational statistic to

describe and measure the degree or association (or relationship) between two or more

variables or sets of scores” (Creswell, 2014, p. 12). The correlational design approach

can be used to explore complex relationships among variables, where the correlational

statistic is expanded into regression techniques (Creswell, 2014).

This research study used a causal-comparative approach to compare “two or

more groups in terms of a cause (independent variable) that has already happened”

(Creswell, 2014, p. 12). For instance, an objective of the study was to examine levels

of change that have already happened, depending upon specific group membership.

This type of research is consistent with non-experimental, causal-comparative,

quantitative research (Creswell, 2014). Further, the study examined predictive

relationships between the recommendations received and changes made by the

institutions, which is consistent with a correlational design further expanded into a

predictive analysis (Creswell, 2014).

The study employed a survey research approach in order to collect data.

Survey research is a popular method for collecting information due to the ease of use,

cost effectiveness, and ability to reach a number of individuals across a large

geographical area (Creswell, 2009, 2014; Gay, Mills, & Airasian, 2009). This study

attempted to understand regional accreditation practices across the SACSCOC

membership region (Alabama, Florida, Georgia, Kentucky, Louisiana, Mississippi,

North Carolina, South Carolina, Tennessee, Texas, Virginia, Latin America, and

certain international institutions), thus a large geographical area is involved in the

study. Survey research is also useful for studying a sample with intention to

generalize to a population (Creswell, 2014), which was consistent with one of the

purposes of the study.

Study Institutions

The study included community colleges in the SACSCOC region. The study was limited to public, Level I community colleges as identified by the SACSCOC

member and candidate database (SACSCOC, 2014b). Community colleges that are

accredited by the SACSCOC, located in the following states, are included in the study:

Alabama, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South

Carolina, Tennessee, Texas, and Virginia. Community colleges are listed below by state and Carnegie Classification: Basic 2010 (i.e., size, institutional control, and classification) in Table 3.1 (NCESIPEDS, n.d.).

Table 3.1
SACSCOC Level I Institutions by Carnegie Categories

                  AL  FL  GA  KY  LA  MS  NC  SC  TN  TX  VA  Total
Size
  Very small       1   0   1   0   0   0   3   1   1   1   0      8
  Small            8   2   6   1   5   1  29   4   0  12   7     75
  Medium          11   0  15  12   4   9  21   9   8  26  13    128
  Large            4   1   2   3   1   5   6   3   5  14   2     46
  Very large       0   1   1   0   1   0   1   0   0  10   2     16
  Total                                                         273
Control
  Private          0   0   1   0   0   0   1   1   1   1   0      5
  Public          24   4  24  16  11  15  59  16  13  62  24    268
  Total                                                         273
Classification
  Rural Serving   19   3  16  14   4  15  47  10   9  39  18    194
  Suburban         1   0   6   1   2   0   7   2   1   7   5     68
  Urban            4   1   1   1   3   0   4   4   3  15   0     36
  Other            0   0   3   0   2   0   2   1   1   2   1     11
  Total                                                         273

Study criteria. As of 2014, 273 Level I institutions existed in the SACSCOC

region (SACSCOC, 2014b). Of the 273 institutions, 135 went through reaffirmation of accreditation from 2011 through 2015, thus meeting the study criteria. Descriptive data regarding the state composition and the

year of reaffirmation of accreditation for the sample are illustrated in Table 3.2 and

Table 3.3, respectively. One institution had a negative public sanction; the

membership list indicated that the institution was on warning status as of November

2014 (SACSCOC, 2014b). The remaining 134 institutions were in good standing with

the SACSCOC (SACSCOC, 2014b).

Table 3.2
State Representation of Sample

State    n      %
AL      11    8.2%
FL       3    2.2%
GA      15   11.1%
KY       4    3.0%
LA       3    2.2%
MS       5    3.7%
NC      35   25.9%
SC       6    4.4%
TN       6    4.4%
TX      41   30.4%
VA       6    4.4%
N = 135

Table 3.3

Previous Year of Reaffirmation of Sample

Year of Reaffirmation    n     %
2015                    28   21%
2014                    30   22%
2013                    36   27%
2012                    23   17%
2011                    18   13%

N = 135

The researcher attempted to conduct a census of the institutions that met the study criteria. A census occurs when the entirety of the population is included in a study

(Day, 1989). Because the study criteria limited the population to 135 institutions, the researcher offered all 135 institutions the opportunity to participate in the study.

Participants

SACSCOC liaisons at each community college were the primary survey

participants. Each community college in the SACSCOC region has a liaison who serves as the primary communicator between the SACSCOC and the community

college, and the liaison is responsible for preparation of the accreditation report

(SACSCOC, 2012c). Because the liaison’s role is to prepare the accreditation report,

among other duties related to the SACSCOC compliance, the SACSCOC liaison is

likely to possess an intimate knowledge of the SACSCOC findings during the last

SACSCOC visit.

The option to complete the study survey was available to the appropriate

institutional representative at each of the Level I community colleges that went through reaffirmation of accreditation in the SACSCOC region from 2011 through 2015. The contact information for each SACSCOC liaison was located through each

institution’s website. For institutions where this information could not be located, the researcher gathered the contact information for the primary institutional effectiveness personnel.

Data Collection

Quantitative research is the process of explaining occurrences through the

collection of numerical data in order to conduct analysis using mathematical models,

specifically statistics (Aliaga & Gunderson, 2000). In quantitative research, data are

collected via primary or secondary data sources (Sapsford & Jupp, 2006). According

to Sapsford and Jupp (2006), primary sources are “the basic and original material for

providing the researcher’s raw evidence” and secondary sources are “those that discuss

the period studied but are brought into being at some time after it, or otherwise

somewhat removed from the actual events” (p. 142). The research study used primary

data to address the four research questions. In quantitative research, data are collected

via the data source and then prepared for analysis (Sapsford & Jupp, 2006). In the

research study, the primary data source was a researcher-developed web-based survey

instrument.

Instrumentation

The researcher-developed study instrument, entitled the SACSCOC

Recommendations and Improvements Survey (SRIS), was designed to collect primary

data. According to Day (1989), survey questions should align with the study’s

objectives and provide data that will answer the research questions. In order to align

with the study’s research questions, the researcher created the SRIS instrument. Some

of the content in Q3.2 and Q3.3 of the survey is similar to the work of Murray (2004).

The Likert scale included in Q3.2 and Q3.3 of the SRIS has similarities to the Likert

scale from the Survey of COC Accreditation (SCA) developed by Murray (2004):

“Please identify the extent of change your college experienced . . . because of going

through the reaffirmation of accreditation process” (p. 111). Further, one question on the SRIS that inquired about the amount of change the college experienced closely mirrors one of the questions on Murray’s (2004) SCA instrument. Similarities and

differences between the SRIS and the SCA are demonstrated in Table 3.4 below.

Table 3.4
Differences between SCA and SRIS

SCA Likert Scale               SRIS Likert Scale Q3.2       SRIS Likert Scale Q3.3
(Murray, 2004, p. 111)         (see Appendix D)             (see Appendix D)
(1) Great positive change      (1) no change or decrease    (1) no change or decrease
(2) Moderate positive change   (2) slight increase          (2) slight improvement
(3) No change                  (3) moderate increase        (3) moderate improvement
(4) Slight positive change     (4) major increase           (4) major improvement
(5) No positive change

Survey. The study utilized a researcher-developed survey, designed around

four sections: 1) institutional demographics (i.e., institutional characteristics,

institutional effectiveness characteristics, and accreditation characteristics), 2)

SACSCOC recommendations that institutions received during the previous

reaffirmation of accreditation, 3) changes and improvements that occurred because of

the reaffirmation of accreditation process, and 4) perceived state of accreditation

compliance as of the date of the survey administration in Spring 2015.

According to Day (1989), opening a survey with an easy and engaging

question increases response rates, but opening with a filter question may be necessary

depending on the survey design and research goals. The survey’s opening question was a filter question that inquired as to the year in which the participating school underwent reaffirmation of accreditation by the SACSCOC. The remainder of this section

included a series of questions related to institutional characteristics, institutional

effectiveness characteristics, and accreditation characteristics.

The second section of the survey included questions regarding the institution’s

previous SACSCOC reaffirmation of accreditation results within the domain of

institutional effectiveness. These questions were developed based on the SACSCOC

process of reaffirmation of accreditation (SACSCOC, 2012a), and inquired as to whether

the institution received any recommendations on any of the 11 principles related to

institutional effectiveness during various stages of the reaffirmation of accreditation

(SACSCOC, 2011a; SACSCOC, 2011b). Additionally, one question asked participants

to rank order the standards based on the most difficult standard to demonstrate

compliance during accreditation. Further, participants were asked to identify potential

causes of the difficulty in demonstrating compliance. The questions in this section of the survey were in chronological order according to the SACSCOC accreditation process.

The third area of the survey included questions related to the level of changes and

improvements implemented by the institution because of undergoing reaffirmation of

accreditation. This series of questions inquired about the participants’ perceptions of

changes and improvements that occurred at the institution. The questions in this domain

align with the framework for institutional capacity developed by Alfred et al. (2009).

The questions were primarily designed around whether the item is a tangible resource,

intangible resource, or leadership item. Questions in this domain were split into two

sections, Q3.1 and Q3.2. Each section included questions on 4-point Likert scales.

Likert scales are popular for survey research, provide the intensity and direction of a

response, and can be measured in terms of reliability and validity (Day, 1989).

Arranging survey questions in an easy-to-answer format, such as a repetitive Likert scale, reduces the overall cognitive burden on the survey participant (Day, 1989). Section Q3.1

asked participants to rate the extent of change that their institution experienced due to

going through reaffirmation of accreditation on 4-point Likert scales (1 = no change or

decrease, 2 = slight increase, 3 = moderate increase, and 4 = major increase). Section

Q3.2 asked survey participants to rate the extent of improvement that their institution experienced due to going through reaffirmation of accreditation on a 4-point Likert scale (1 = no change or decrease, 2 = slight improvement, 3 = moderate improvement, and 4 =

major improvement). The questions in these sections ask about processes, technology,

leadership involvement, professional development, human resources, organizational

structure, governance structure, and financial resources (Alfred et al., 2009).

The final section of the SRIS survey, section Q3.3, asked participants to rate the

likelihood that the institution would receive a recommendation from the SACSCOC, if

the SACSCOC were to visit the institution on the date of the survey’s completion. The

rating options were on 5-point Likert scales (1 = not at all possible, 2 = very unlikely, 3 =

slightly possible, 4 = somewhat likely, and 5 = very likely). The survey items included

the 11 SACSCOC institutional effectiveness principles. For a detailed list of survey

questions, see Appendix D.

Survey administration. Data were collected via a survey of study participants after Texas Tech University Institutional Review Board approval (Appendix A) and content

validity assurance. The survey was sent via web-link to the email addresses of

selected participants at each college. The survey was distributed through the Qualtrics

survey software system provided by Texas Tech University (Qualtrics, 2014) to the

SACSCOC liaisons or primary institutional effectiveness staff member at each

community college in the SACSCOC region that underwent reaffirmation of

accreditation from 2011 through 2015.

The SRIS included introductory text that indicated the purpose of the survey,

the anonymous nature of the survey, the Institutional Review Board (IRB) approval,

and contact information for the researcher (Appendix C). According to Salant and

Dillman (1994), a four-phase process increases response rates for survey

administration. The research study followed a modified version of the four-phase

process. Because the survey was sent via email through the Qualtrics system,

participants were sent a link to the survey along with the study introductory text (Appendix C and Appendix D) in the initial email (Appendix B). The second phase

occurred one week after the initial survey email. This contact was an email sent to

invited participants from the Qualtrics system requesting participation and thanking

those who had already participated (Appendix E). The third contact was an email sent

to all non-responders from Qualtrics requesting participation, which occurred

approximately two weeks after the initial survey email was distributed (Appendix F).

The Qualtrics survey system is able to track participation separately from survey responses in order to assure the anonymity of survey responses.

contact was an email sent to non-responders notifying them of the impending closing

of the survey and requesting participation. This contact occurred three weeks after the

initial email notification. The survey was open for 23 days. At the closure of the survey, all participants were thanked for their participation in the study.

Qualtrics allowed the researcher to collect data in a secure environment, and

download the raw data into a statistical software package. After closure of the survey,

the raw data were downloaded into the IBM Statistical Package for the Social Sciences (SPSS) software version 22 for analysis.

Reliability and Validity

Development of an instrument requires ensuring that collected data are

measured accurately and consistently (Ray, 1996). Reliability is the term that

designates that the instrument ensures consistency, whereas validity is the term that

describes the accuracy of the instrument (Ray, 1996). A number of techniques to

assess instrument reliability exist. In this study, reliability was assured using “internal

consistency reliability,” which examines parts of the instrument that belong together

(Huck & Cormier, 1996, p. 78). The purpose of this type of reliability measure is to

ensure internal consistency of the instrument. This type of reliability is utilized in

situations where an instrument is given to one group of individuals on only one

occasion (Huck & Cormier, 1996). The primary purpose of internal consistency

reliability is to measure the “degree to which the same characteristic is being

measured” (Huck & Cormier, 1996, p. 81).

Internal consistency reliability was conducted on Likert scale survey questions

through the statistical procedure Cronbach’s alpha (Huck & Cormier, 1996). This

statistical procedure allows flexibility in the types of data that can be analyzed.

Although some reliability procedures can only be used on dichotomous variables, Cronbach’s alpha can be used on instruments with questions that allow three or more possible answer values (Huck & Cormier, 1996). The complexity of the survey

answer options and the types of variables involved increase the need for a flexible test

of reliability, such as the Cronbach’s alpha test (Huck & Cormier, 1996).
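
To make the calculation concrete, the following is a minimal Python sketch of Cronbach's alpha for a Likert subscale (an illustration only; the study itself computed alpha in SPSS, and the item names and responses below are hypothetical):

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha for a subscale; rows = respondents, columns = items."""
        items = items.dropna()
        k = items.shape[1]                         # number of items in the subscale
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 4-point Likert responses for a three-item subscale
    demo = pd.DataFrame({"q1": [1, 2, 3, 4, 4],
                         "q2": [2, 2, 3, 4, 3],
                         "q3": [1, 3, 3, 4, 4]})
    print(round(cronbach_alpha(demo), 2))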

The validity of the instrument was assured through content validity. No

comparative instrument exists; therefore, achieving criterion-related validity with the researcher-developed, web-based survey was not possible.

validity involves experts reviewing the instrument to ensure that the instrument

measures what it is intended to measure (Huck & Cormier, 1996). In order to achieve

content validity, the instrument was pilot tested with five content experts in the field of

institutional effectiveness and accreditation. During the spring of 2015, three of the

five experts were employed by community colleges and two experts were employed

by four-year universities. The experts averaged 12.8 years of experience in the field. Most of the experts (n = 4, 80%) were the SACSCOC liaison at their

respective institutions.

The individuals who were selected and agreed to participate in the pilot test

completed the survey and served as content experts in establishing content validity.

Following the guidelines set in Huck and Cormier (1996): 1) the experts were asked

to evaluate each question contained in the instrument, 2) the researcher served as a

content guide by reviewing the intended purpose of each question with the experts,

after the experts completed the survey, 3) the experts were asked to identify any

questions that did not address the intended purpose, and 4) the experts were asked to

provide feedback on the questions that appeared problematic. Further, the experts

provided additional suggestions for survey questions that would aid the researcher in

data collection. For instance, the questions related to hiring an institutional

effectiveness consultant, receiving a SACSCOC Vice President visit prior to

accreditation, and the leadership attending the SACSCOC orientation were

suggestions from the experts, and these questions were added to the SRIS instrument.

The original instrument had utilized 5-point Likert scales, but several experts advised reducing to 4-point Likert scales in order to ease survey completion and increase

clarity. Additionally, the version presented to the content experts asked participants to

supply budgetary numbers, and the experts suggested changing budgetary questions to

a range of options for the purpose of easing survey burden. Last, one question

inquired as to whether the survey participant worked at the institution during the

previous reaffirmation, and the experts suggested striking that question. The overall

feedback from the experts was that the survey asked important and relevant questions,

but several experts voiced concerns regarding the length of the survey and suggested

that the length be reduced if possible. The researcher made adjustments to the survey

based on the feedback of the experts to ease the burden on survey participants. The

survey described in the instrumentation section above reflects the version of the survey

completed after pilot testing the instrument with the content experts.

Data Analysis

Data analysis is the process of interpreting data systematically through a series

of steps and by applying statistical techniques (Creswell, 2014). According to

Creswell (2014), providing descriptive analysis for all independent and dependent

variables, including the means, standard deviations, and the range of scores of the

variables, is recommended in order to analyze data. The following types of statistical

analyses were utilized for examination of the research questions: descriptive statistics,

inferential statistics, effect sizes, and predictive statistics. Descriptive statistics were

provided for all survey response items, and for all research questions. Inferential

statistics included independent samples t-tests for research question one and one-way

analysis of variance (ANOVA) for research question two. Cohen’s d effect size was

provided for significant independent samples t-test results and eta-squared effect sizes

were provided for all significant ANOVA results. Predictive statistics were utilized

for research questions three and four and involved multiple regression analyses. All

statistical procedures and analyses, except for effect sizes, were conducted in SPSS

software version 22. The Cohen’s d and eta-squared effect sizes were calculated by

utilizing SPSS output in Microsoft Excel.

Descriptive statistics were collected and analyzed for all survey responses.

Frequency counts and percentages were provided for all institutional characteristics, SACSCOC findings, and difficulties experienced, because the data were categorical or binary (Creswell, 2014). For the rank order question, frequency counts, range of

scores, mean and standard deviation were provided. For the Likert scale sections,

sections Q3.1, Q3.2, and Q3.3, the total number of participants, percentages for each

item selected, mean, and standard deviation were provided.

Research question one. Research question one intended to determine whether

statistically significant differences existed for levels of perceived changes or perceived improvements (dependent variable) between institutions that received SACSCOC recommendations (independent variable) and those that did not. This research question

examined all SRIS items related to institutional change or improvements (i.e., SRIS

sections Q3.1 and Q3.2) at each level of the reaffirmation of accreditation cycle (i.e.,

any recommendations during any level, offsite, onsite, Committee on Compliance and

Reports review, and monitoring).

This research question included descriptive statistics and inferential statistics

(Creswell, 2014; Fitzgerald & Fitzgerald, 2013). The research question involved a

categorical independent variable (category: yes or no), and a continuous dependent

variable, thus an independent samples t-test was utilized (Fitzgerald & Fitzgerald,

2013). In some cases the dependent variable was an ordinal level variable. According

to Zumbo and Zimmerman (1993), “there is no need to replace parametric statistical

tests by nonparametric methods when the scale of measurement is ordinal and not

interval” (p. 390). These researchers concluded that the use of a parametric test, such

as a t-test, is an acceptable type of test to use for ordinal level data (Zumbo &

Zimmerman, 1993). Further, the data were examined for significant outliers,

homogeneity of variance, and approximately normal distribution (Gravetter &

Wallnau, 2013). For independent samples t-tests where homogeneity of variance was violated, the Welch t-test was provided (Welch, 1947). The Welch t-test accommodates unequal variances and provides a valid test result (Gravetter & Wallnau, 2013; Welch, 1947).
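
As an illustration of this decision rule (not the study's actual SPSS procedure), a minimal Python sketch might check homogeneity of variance with Levene's test and fall back to the Welch t-test when the assumption is violated; the group scores below are simulated:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical total-change scores for the two groups (recommendations: yes/no)
    with_recs = rng.normal(50, 8, 40)
    without_recs = rng.normal(46, 12, 29)

    # Levene's test for homogeneity of variance; if violated, use the Welch t-test
    equal_var = stats.levene(with_recs, without_recs).pvalue >= .05
    t, p = stats.ttest_ind(with_recs, without_recs, equal_var=equal_var)
    print(f"equal variances assumed: {equal_var}, t = {t:.2f}, p = {p:.3f}")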

In order to provide a measure of practical significance, Cohen’s (1992)

coefficient d effect size was provided for all independent samples t-tests that did not

violate the homogeneity of variance assumption. Cohen’s d is a commonly used effect

size calculation for independent samples t-tests (Gravetter & Wallnau, 2013). The

effect size was calculated by using the formula for between-subjects designs for t-tests,

specifically the following formula (Lakens, 2013):

$d_s = t\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}$

Cohen (1992) provided guidelines for interpreting effect sizes as small effect sizes (≤ .20), moderate effect sizes (.50), and large effect sizes (≥ .80). Vacha-Haase and

Thompson (2004) suggested that when researchers are conducting research in new or

largely unexplored areas, the use of Cohen’s benchmarks are acceptable, and the

benchmarks should be viewed as guidelines rather than strict rules.
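
A small sketch of the formula above, assuming hypothetical values for the t statistic and group sizes:

    import math

    def cohens_d_from_t(t: float, n1: int, n2: int) -> float:
        """Between-subjects Cohen's d_s computed from a t statistic (Lakens, 2013)."""
        return t * math.sqrt(1 / n1 + 1 / n2)

    # Hypothetical result: t = 2.10 with group sizes of 40 and 29
    print(round(cohens_d_from_t(2.10, 40, 29), 2))  # ~0.51, moderate by Cohen's benchmarks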

Research question two. Research question two sought to determine a

statistically significant difference in levels of perceived change or improvements

(dependent variable) based on the severity of the recommendations (independent

variable). Similar to research question one, this research question utilized all SRIS

items related to institutional change or improvements (i.e., SRIS sections Q3.1 and

Q3.2) as the dependent variables. The independent variable, severity of the

recommendations, was calculated based on participants’ responses to SRIS sections

Q2.1 and Q2.2. Participants who received no recommendations during any phase of

accreditation were placed into the lowest-severity group and given a score of 1.

Participants who indicated receiving a recommendation during the offsite visit were

placed into the low-severity ranking group and given a score of 2. Participants who

indicated receiving a recommendation during the onsite were placed into the

moderate-severity ranking group and given a score of 3. Last, participants who received a recommendation during the Committee on Compliance and Review (C&R) phase, or who were placed on monitoring, warning, or probation status, were scored into the high-severity group with a score of 4.
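
The scoring rule just described could be expressed as follows (an illustrative sketch; the function name and flags are hypothetical, not part of the SRIS):

    def severity_score(offsite: bool, onsite: bool, cr_or_status: bool) -> int:
        """Severity group per the rule above; cr_or_status covers a C&R
        recommendation or any monitoring, warning, or probation status."""
        if cr_or_status:
            return 4  # high severity
        if onsite:
            return 3  # moderate severity
        if offsite:
            return 2  # low severity
        return 1      # no recommendations during any phase

    print(severity_score(offsite=True, onsite=False, cr_or_status=False))  # 2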

This research question involved a continuous dependent variable, which

indicated that an analysis of variance (ANOVA) was an appropriate inferential statistic

to utilize (Institute for Digital Research and Education, 2014; Fitzgerald & Fitzgerald,

2013). An ANOVA is useful for research that seeks to make multiple group

comparisons both within and between groups (Leong & Austin, 1996). An ANOVA

requires verification of three necessary assumptions (Gravetter & Wallnau, 2013).

The assumptions of independent observations, normality, and homogeneity of variance

were met.

In addition to the ANOVA, post hoc testing was utilized to identify the

significant differences between the severity of the recommendations and the perceived

level of changes and improvements. An ANOVA is an omnibus test and is only able

to determine if a statistical difference exists between the entire set of mean differences

(Gravetter & Wallnau, 2013). It is unable to determine the specific means that are

significantly different from one another (Gravetter & Wallnau, 2013). In order to determine the specific significant mean differences, Tukey’s Honestly Significant Difference (HSD) test was utilized. Tukey’s HSD is often used in conjunction with

ANOVA statistical tests in order to determine the groups that statistically differ from

one another (Gravetter & Wallnau, 2013).
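
For illustration (in Python rather than the study's SPSS), a minimal sketch of the omnibus one-way ANOVA followed by Tukey's HSD; the severity groups and change scores below are hypothetical:

    import pandas as pd
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical total-change scores by severity group (1 = none ... 4 = high)
    df = pd.DataFrame({
        "severity": [1] * 8 + [2] * 8 + [3] * 8 + [4] * 8,
        "change": [44, 46, 45, 47, 43, 46, 45, 44, 48, 50, 49, 47, 51, 49, 48, 50,
                   52, 54, 53, 51, 55, 53, 52, 54, 57, 59, 58, 56, 60, 58, 57, 59],
    })

    groups = [g["change"].to_numpy() for _, g in df.groupby("severity")]
    f, p = stats.f_oneway(*groups)  # omnibus test across the four groups
    print(f"F = {f:.2f}, p = {p:.4f}")

    # Post hoc Tukey HSD locates which severity groups differ from one another
    print(pairwise_tukeyhsd(df["change"], df["severity"]))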

Significance testing indicates that the differences observed are unlikely to have occurred by chance, but it does not give an indication of the size of the effect of the mean differences (Gravetter & Wallnau, 2013). In order to understand the size of the

mean differences effect, the eta-squared value (η2) was reported for each significant

ANOVA result (Gravetter & Wallnau, 2013). Eta-squared is a popular and widely

accepted effect size utilized with ANOVA tests (Gravetter & Wallnau, 2013; Levine &

Hullett, 2002). Eta-squared provides the percentage of variance accounted for by the

independent variables, and was calculated by the following formula (Gravetter &

Wallnau, 2013):

$\eta^2 = \frac{SS_{between}}{SS_{total}}$

According to Jaccard and Becker (1997), the interpretation of eta-squared values

varies widely among researchers and across different fields. However, these

researchers interpret eta-squared as a weak effect (near .05), a moderate effect (near

.10), and a strong effect (near .15).
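
A small sketch of the eta-squared calculation above, using hypothetical group scores:

    import numpy as np

    def eta_squared(groups: list) -> float:
        """Eta-squared as SS_between / SS_total for a one-way ANOVA."""
        all_scores = np.concatenate(groups)
        grand_mean = all_scores.mean()
        ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
        ss_total = ((all_scores - grand_mean) ** 2).sum()
        return ss_between / ss_total

    a = np.array([44, 46, 45, 47])
    b = np.array([52, 54, 53, 51])
    print(round(eta_squared([a, b]), 2))  # 0.91, a strong effect by the guidelines above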

Research question three. Research question three examined factors

(independent variables) that best predict institutional change or improvement

(dependent variable) through multiple regression analyses. The first multiple

regression analysis examined the predictors for total amount of institutional changes

experienced as a result of undergoing reaffirmation of accreditation from the

independent variables:

• difficulty due to insufficient time,

• difficulty due to too many committees,

• advance time spent on the SACSCOC compliance certification,

• institution’s leadership team attended the SACSCOC orientation,

• difficulty due to insufficient knowledge of assessment or institutional effectiveness, and

• severity of recommendations.

The second multiple regression analysis examined the predictors of the total amount of institutional improvements experienced as a result of undergoing reaffirmation of accreditation from the following independent variables:

• the amount of change in department or unit-level leadership involvement,

• total annual budget dedicated to institutional effectiveness,

• severity of recommendations,

• institution's leadership team attended the SACSCOC orientation,

• difficulty due to insufficient evidence, and

• amount of change in quality or usefulness of reports produced from the institutional research or institutional effectiveness office.

For both regression models, the independent variables included categorical,

interval, and ordinal levels of data and the dependent variable included continuous

data. Multiple regression analysis was the appropriate test to utilize due to the interval

and categorical variables involved (Institute for Digital Research and Education,

2014). Further, regression models are equipped to handle continuous variables, where

other statistical procedures are not (Leong & Austin, 1996). In addition, multiple regression tests are used to explain differences in variance and aid in understanding predictive relationships between variables (Leong & Austin, 1996). Osborne and

Waters (2002) suggested that four assumptions of regression are not robust to errors

and should be verified to ensure trustworthiness of the results. The four assumptions

are normality, linearity, reliability of measurement, and homoscedasticity (Osborne &

Waters, 2002). The assumptions of normality, linearity, reliability of measurement,

and homoscedasticity were inspected.
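
As an illustration of this kind of model (not the study's actual data; every predictor below is hypothetical and simulated), a minimal Python sketch of a multiple regression with a basic homoscedasticity check:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan

    rng = np.random.default_rng(1)
    n = 69
    # Hypothetical predictors loosely mirroring the variables listed above
    X = pd.DataFrame({
        "insufficient_time": rng.integers(0, 2, n),  # binary difficulty flag
        "too_many_committees": rng.integers(0, 2, n),
        "prep_months": rng.integers(6, 37, n),       # advance time spent
        "severity": rng.integers(1, 5, n),           # severity group (1-4)
    })
    # Simulated total-change outcome
    y = 40 + 2.5 * X["severity"] + 3 * X["insufficient_time"] + rng.normal(0, 4, n)

    model = sm.OLS(y, sm.add_constant(X)).fit()
    print(model.summary())  # coefficients, t values, R-squared
    # Breusch-Pagan LM p-value as one check of the homoscedasticity assumption
    print(het_breuschpagan(model.resid, model.model.exog)[1])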

Research question four. This research question examined factors (independent

variables) that best predict the severity of the recommendations received by institutions

(dependent variable). A multiple regression was run to predict the severity of the

recommendations (dependent variable) from the independent variables:

• difficulty due to insufficient time,

• amount of change in professional development for institutional effectiveness,

• difficulty due to insufficient knowledge of assessment or institutional

effectiveness,

• difficulty due to insufficient technology,

• difficulty due to insufficient evidence,

• difficulty due to insufficient executive-level leadership involvement, and

• amount of change in number of institutional effectiveness processes.

As described previously in research question three, multiple regression analysis was the

appropriate test to utilize due to the interval and categorical variables involved (Institute

for Digital Research and Education, 2014). Last, the assumptions of normality, linearity,

reliability of measurement, and homoscedasticity were inspected.

Summary

The causal-comparative, quantitative research study utilized a group-

comparison, researcher-developed, online survey instrument to collect data. The

survey participants included all SACSCOC liaisons in Level I community colleges.

Statistical analyses were conducted using a series of descriptive, inferential, and

predictive statistics. Research question one was analyzed via an independent samples

t-test. Research question two was analyzed through an ANOVA. Research questions

three and four were analyzed through multiple regression analyses. The results of the

study are intended to aid community colleges in preparing for regional accreditation, and specifically in institutionalizing processes that are known to enhance institutional effectiveness.

CHAPTER IV

RESULTS

Chapter IV comprises the findings of the analyses for this quantitative study.

The data collection process and data analyses procedures are provided. The purpose of

this study was to analyze the role of SACSCOC recommendations and institutional

changes based on the perceptions of the SACSCOC liaison or primary institutional

effectiveness personnel at community colleges in the SACSCOC region that underwent reaffirmation of accreditation from 2011 through 2015. The

following research questions guided this study:

1) What is the statistically significant relationship between the independent

variable of community colleges that receive SACSCOC recommendations and

the dependent variable of overall (or total) level of perceived change or

improvement?

2) What is the statistically significant relationship between the independent

variable of ‘severity of recommendations’ group membership and the

dependent variable of levels of perceived changes or improvements?

3) Which factors (independent variables) best predict the overall level of

institutional change or improvement (dependent variables)?

4) Which factors (independent variables) best predict the severity of

recommendations received by the institutions (dependent variables)?

Summary of Research Design

This quantitative research study examined the role of recommendations in

driving changes or improvements in higher education institutions. The study utilized a

group-comparison, researcher-developed, online survey instrument to collect data.

Descriptive, inferential, and predictive statistical analyses were utilized.

Data Collection

A total of 135 institutions were selected for participation in the SACSCOC

Recommendations and Improvements Survey (SRIS). After IRB approval and pilot testing of the researcher-developed, web-based instrument, each institutional

representative was emailed the instrument through the Qualtrics system. For 10 invalid email addresses, the researcher identified a new institutional representative and sent that person an email invitation to take the survey. Reminders were sent at one, two, and three weeks. Participants were also sent a thank-you email for their participation.

The survey was sent to 135 participants and yielded 77 responses, for an initial response rate of 57%. Table 4.1 below shows the participant response rate by the year of reaffirmation of accreditation. Of the 77 participants, seven institutions did

not fully complete the survey. Additionally, one participant completed the survey in its entirety, but the response pattern indicated that the participant selected the same answer for every item in the final half of the survey. This participant also appeared as an extreme outlier (> 3 standard deviations from the mean) on the variables of total change and total improvement (Osborne & Overbay, 2004).

Due to the survey response pattern and the outlier status, the participant’s scores were

excluded from the analysis. Meade and Craig (2012) studied electronic survey

response patterns and found that around 3-5% of participants report erroneous or

careless responses. These researchers noticed that the careless response rates

increased in longer surveys such as the SRIS (Meade & Craig, 2012). According to

Osborne and Overbay (2004), researchers must rely on preparation and judgment when determining the appropriate method for handling outliers. Further, a general rule of thumb for identifying an outlier is that a data point lies 3 or more standard

deviations from the mean (Osborne & Overbay, 2004). The final count of participants

included in the survey analysis was 69, yielding a final 51.1% response rate. Further,

one participant did not answer the five questions that inquired about the level of

improvements experienced at the participant’s institution. This participant’s responses

to all other survey questions were complete and the results were included in the study.

However, any analysis examining questions relative to the level of improvement did

not include this participant.
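
For illustration, the 3-standard-deviation rule of thumb could be applied as follows (the scores are hypothetical):

    import numpy as np

    def flag_outliers(scores: np.ndarray, cutoff: float = 3.0) -> np.ndarray:
        """Flag scores more than `cutoff` standard deviations from the mean."""
        z = (scores - scores.mean()) / scores.std(ddof=1)
        return np.abs(z) > cutoff

    # Twenty typical total-change scores plus one inflated score
    total_change = np.array([48, 51, 47, 50, 49, 52, 46, 53, 50, 49,
                             51, 48, 52, 47, 50, 49, 51, 48, 50, 52, 120])
    print(np.where(flag_outliers(total_change))[0])  # [20]: only the inflated score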

Table 4.1

Survey Response Rate by Year of Accreditation

Year of         Number of Eligible   Number of Participating   Survey
Accreditation   Institutions         Institutions              Completion Rate
2011            18                    5                        27.8%
2012            23                   10                        43.5%
2013            36                   20                        55.6%
2014            30                   17                        56.7%
2015            28                   17                        60.7%

Reliability of the instrument. The sections of the SACSCOC

Recommendations and Improvements Survey (SRIS) that utilized a Likert scale were

analyzed for reliability. In order to establish internal consistency reliability, the

researcher utilized a Cronbach’s alpha (α) test of reliability (Huck & Cormier, 1996).

SRIS section Q3.1 included an 18-item subscale that inquired about the levels of

change (α = .87), SRIS section Q3.2 included a seven-item subscale that inquired

about the levels of improvement (α =.85), and SRIS section Q3.3 included an 11-item

subscale that inquired about the current level of compliance with SACSCOC

institutional effectiveness standards (α = .94). According to Nunnaly (1978), research

instruments should have a reliability score of .70 or greater. Each of the subscales tested at .85 or above, well beyond that threshold, and therefore indicated a high degree of reliability for the Likert scale questions.

Creation of New Variables

In order to prepare the data for statistical analyses, the researcher created a number of new variables from the original SRIS data. Table 4.2 provides the details of the processes used to create the new variables.

Table 4.2
New Variables Creation Process

IE Staff Total
  Recoding method: recode into the sum of institutional research, institutional effectiveness, and other staff.
  Description: total count of all staff dedicated to institutional research and effectiveness.

Type of IE Technology
  Recoding method: reordered the answer options to 1 = no central system, 2 = homegrown system, 3 = commercially available system.
  Description: type of institutional effectiveness software system owned by the institution.

Offsite Any Recs.
  Recoding method: recode all institutional effectiveness offsite recommendations into a single binary (yes/no) variable.
  Description: the institution received any recommendations during the offsite SACSCOC review.

Onsite Any Recs.
  Recoding method: recode all institutional effectiveness onsite recommendations into a single binary (yes/no) variable.
  Description: the institution received any recommendations during the onsite SACSCOC review.

C&R Any Recs.
  Recoding method: recode all institutional effectiveness C&R committee recommendations into a single binary (yes/no) variable.
  Description: the institution received any recommendations during the SACSCOC C&R committee review.

Monitoring Any
  Recoding method: recode all institutional effectiveness monitoring statuses into a single binary (yes/no) variable.
  Description: the institution had any instances of monitoring status.

Warning Any
  Recoding method: recode all institutional effectiveness warning statuses into a single binary (yes/no) variable.
  Description: the institution had any instances of warning status.

Probation Any
  Recoding method: recode all institutional effectiveness probation statuses into a single binary (yes/no) variable.
  Description: the institution had any instances of probation status.

Any Recs.
  Recoding method: recode all institutional effectiveness recommendations into a single binary (yes/no) variable.
  Description: the institution received any recommendations on any institutional effectiveness principle during any stage of the reaffirmation of accreditation process.

Severity of Recs.
  Recoding method: recode into no recommendations = 0, offsite recommendations = 1, onsite recommendations = 2, C&R and monitoring, warning, or probation = 3.
  Description: each institution was given a numeric value related to the most severe recommendation or status received. Institutions that received no recommendations were at the lowest level of severity. Institutions that received a recommendation from the C&R or beyond were at the highest severity.

Intangible Change Score
  Recoding method: recode into the sum of the following variables: professional development for faculty, professional development for educational support, professional development for administration, professional development for institutional effectiveness staff, number of institutional effectiveness committees, number of institutional effectiveness processes, quality or usefulness of reports produced, stakeholders involved in institutional effectiveness, and institutional effectiveness governance/committees.
  Description: total sum of scores on the questions related to intangible change.

Tangible Change Score
  Recoding method: recode into the sum of the following variables: staff, financial resources, technology financial resources, technology, overall resources, and organizational structure.
  Description: total sum of scores on the questions related to tangible change.

Leadership Change Score
  Recoding method: recode into the sum of the following variables: dean or division-level, executive-level, and department or unit-level.
  Description: total sum of scores on the questions related to leadership change.

Total Change
  Recoding method: sum of all questions related to the extent of changes experienced in SRIS section Q3.1.
  Description: total sum of scores on the questions related to changes experienced by the institution.

Total Improvement
  Recoding method: sum of all questions related to the extent of improvements experienced in SRIS section Q3.2.
  Description: total sum of scores on the questions related to improvements experienced by the institution.

Note. IE = Institutional Effectiveness, Recs. = recommendations.
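
To illustrate the kind of recoding summarized in Table 4.2 (the study performed these steps in SPSS), a minimal Python sketch that collapses hypothetical per-principle flags into single binary variables:

    import pandas as pd

    # Hypothetical raw extract: one yes/no column per principle and review phase
    raw = pd.DataFrame({
        "offsite_principle_a": ["yes", "no", "no"],
        "offsite_principle_b": ["no", "no", "yes"],
        "onsite_principle_a": ["no", "yes", "no"],
    })

    # Collapse per-principle flags into single binary variables, as in Table 4.2
    for phase in ("offsite", "onsite"):
        cols = [c for c in raw.columns if c.startswith(phase)]
        raw[f"{phase}_any_recs"] = (raw[cols] == "yes").any(axis=1).astype(int)

    print(raw[["offsite_any_recs", "onsite_any_recs"]])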

Findings

The data were analyzed via SPSS 22.0. According to Creswell (2014), providing

descriptive analysis for all independent and dependent variables, including the means,

standard deviations, and the range of scores of the variables, is recommended in order to

analyze data. Descriptive and inferential statistics were provided in order to characterize

the sample and address the research questions. Inferential statistics utilized included t-tests, analyses of variance, and multiple regression analyses to determine relationships between the dependent and independent variables.

Characteristics of the Sample

Participants were asked a series of questions related to the demographics of the

institution. Descriptive information included:

• institutional characteristics,

• institutional SACSCOC characteristics,

• institutional effectiveness practices,

• SACSCOC recommendations and institutional effectiveness principles,

• most difficult institutional effectiveness principles,

• sources of difficulty in demonstrating compliance when undergoing

reaffirmation,

• amount of change experienced due to undergoing reaffirmation,

• amount of improvements experienced due to undergoing reaffirmation, and

• the state of compliance with SACSCOC as of Spring 2015.

Institutional characteristics. Participants’ institutions underwent

accreditation from 2011 through 2015, with a majority of institutions representing the

2013 SACSCOC accreditation cohort. Participants’ institutions ranged across all

states of the SACSCOC region, with Texas representing the largest number of

participants (n = 27, 39.1%). All of the participants represented publicly controlled

institutions. A majority of the participants represented institutions that were rural

serving (n = 40, 58%), followed by urban (n = 15, 21.7%), and suburban (n = 13,

18.8%). The largest shares of participants represented institutions classified as medium sized (n = 26, 37.7%) and institutions with 51-100 full-time faculty (n = 21, 30.4%).

Institutional characteristic responses are provided in Table 4.3 below.

Table 4.3 Institutional Characteristics

Category Demographics n %
SACSCOC Year 2011 5 7.2
2012 10 14.5
2013 20 29.0
2014 17 24.6
2015 17 24.6
State AL, FL, KY, LA 9 13.0
GA 8 11.6
MS, SC, TN, VA 11 15.9
NC 14 20.3
TX 27 39.1
Institutional Control Public 69 100
Institutional Classification Rural 40 58.0
Suburban 13 18.8
Urban 15 21.7
Other 1 1.4
Institutional Size Very Small 3 4.3
Small 17 24.6
Medium 26 37.7
Large 17 24.6
Very Large 6 8.7
Number of Full-time Faculty 0-50 6 8.7
51-100 21 30.4
101-150 18 26.1
151-200 4 5.8
200-250 6 8.7
251-300 5 7.2
Greater than 300 9 13.0
N = 69
Note. States were combined when fewer than five respondents participated.

Institutional SACSCOC characteristics. Most participants indicated that the

respective institution spent 18-24 months preparing for the SACSCOC reaffirmation of

accreditation (n = 21, 30.4%). According to participants, only about one quarter of institutions hired an accreditation consultant (n = 17, 24.6%). Participants indicated that a majority of

institutions had a Vice President from the SACSCOC visit the institution prior to

reaffirmation of accreditation (n = 49, 71%), and the institution’s leadership team

attended the SACSCOC orientation (n = 51, 73.9%). Further, participants indicated that

the SACSCOC liaison served on the President's Cabinet at most institutions (n = 55, 79.7%) and the primary institutional effectiveness staff was one layer away from the

President (n = 38, 55.1%). Institutional SACSCOC characteristic responses are provided

in Table 4.4 below.


Table 4.4 Institutional SACSCOC Characteristics

Category Demographics n %
Time Spent on the SACSCOC Compliance Certification 6-12 months 4 5.8
12-18 months 13 18.8
18-24 months 21 30.4
24-30 months 19 27.5
30-36 months 6 8.7
Greater than 36 months 6 8.7
Institution Hired Accreditation Consultant Yes 17 24.6
No 52 75.4
SACSCOC Vice President Advance Visit Yes 49 71.0
No 20 29.0
Institution's Leadership Team Attended SACSCOC Orientation Yes 51 73.9
Some but not all 15 21.7
No 3 4.3
Layers from President, Primary Institutional Effectiveness Staff 1 38 55.1
2 26 37.7
3 2 2.9
4 3 4.3
Liaison on President's Cabinet Yes 55 79.7
No 14 20.3

N = 69

Institutional effectiveness practices. The following demographics reflect the

institutional effectiveness characteristics of the study institutions. Most institutions

had one full-time equivalent staff member dedicated to institutional effectiveness (n = 39, 56.5%) and one full-time equivalent staff member dedicated to institutional research (n = 35,

50.7%). Most frequently, institutions spent $150,001-$200,000 on institutional

effectiveness (n = 12, 17.4%) and $1-$9,999 on institutional effectiveness technology


(n = 25, 36.2%) in the 2014-2015 academic year. Further, most institutions own

commercially available institutional effectiveness software (n = 40, 58%).

Institutional effectiveness practices responses are provided in Table 4.5 below.

Table 4.5 Institutional Effectiveness Characteristics

Category Demographics n %
FTE Institutional Effectiveness or Assessment Staff 1 39 56.5
2 16 23.2
3 11 15.9
10 or more 3 4.3
FTE Institutional Research Staff 0 10 14.5
1 35 50.7
2 11 15.9
3 8 11.6
4 1 1.4
5 1 1.4
10 or more 3 4.3
Annual Budget Dedicated to Institutional Effectiveness $1-$9,999 4 5.8
$10,000-$25,000 9 13.0
$25,001-$50,000 6 8.7
$50,001-$75,000 8 11.6
$75,001-$100,000 8 11.6
$100,001-$150,000 9 13.0
$150,001-$200,000 12 17.4
$200,001-$300,000 6 8.7
$300,001-$400,000 3 4.3
$400,001-$500,000 1 1.4
Greater than $500,000 3 4.3
Total Annual Budget Dedicated to Institutional Effectiveness Technology $0 3 4.3
$1-$9,999 25 36.2
$10,000-$25,000 21 30.4
$25,001-$50,000 8 11.6
$50,001-$75,000 5 7.2
$75,001-$100,000 1 1.4
$100,001-$150,000 3 4.3
$150,001-$200,000 2 2.9
$200,001-$300,000 1 1.4
Type of Institutional Effectiveness Technology No central software system 15 21.7
Homegrown system 14 20.3
Commercially available software system 40 58.0
N = 69

SACSCOC recommendations and institutional effectiveness principles.

The following section provides descriptive data regarding the SACSCOC

recommendations received by the institutions during the various phases of the

reaffirmation process. The section entitled any recommendations offers an overall

examination of recommendations received at an aggregate level. This section is

followed by the offsite recommendations, which is the first round of recommendations

that an institution may receive. The next phase of recommendations may occur during

the onsite visit, followed by the committee on compliance and reports (C&R) review.

After the C&R review, an institution may be placed on monitoring, warning, or probation status, listed in increasing order of severity. The following sections are presented in chronological order, following the SACSCOC reaffirmation of accreditation process.

Any recommendations. Most institutions received a recommendation during

the reaffirmation of accreditation process (n = 61, 88.4%). Some institutions indicated

receiving a recommendation during the onsite visit, but not during the offsite review (n =

7, 10.1%). Although this occurrence is out of sequence with the accreditation process,

it is possible for an institution to receive a recommendation during an onsite visit even

if the institution did not receive a recommendation during the offsite visit (SACSCOC,

2011b).


Offsite recommendations. A majority of institutions received a

recommendation during the first phase of the reaffirmation of accreditation, the offsite

review (n = 54, 78.3%). The SACSCOC principles that institutions were cited for

most frequently included 3.3.1.1 Educational Programs (n = 34, 49.3%), 3.3.1.5

Community/Public Service (n = 25, 36.2%), 3.3.1.3 Academic and Student Support

Services (n = 22, 31.9%), and 3.5.1 General Education Competencies (n = 21, 30.4%).

The mean number of recommendations received during the offsite review was 2.48 (n

= 69, SD = 2.41). Of the 54 institutions that received a recommendation, the mean

number of recommendations received was 3.17 (SD = 2.29). Recommendations

received during the offsite phase of accreditation are provided in Table 4.6 below.

Table 4.6 Offsite Recommendations Received

Received Recommendations on Principles: n %
3.3.1.1 IE Educational Programs 34 49.3
3.3.1.5 IE Community/Public Service 25 36.2
3.3.1.3 IE Academic & Student Support Services 22 31.9
3.5.1 General Education Competencies 21 30.4
3.3.1.2 IE Administrative Support Services 19 27.5
2.5 Institutional Effectiveness 17 24.6
3.4.7 Consortial/Contractual Agreements 12 17.4
4.1 Student Achievement 9 13.0
2.4 Institutional Mission 5 7.2
3.1.1 Mission 4 5.8
3.3.1.4 IE Research 3 4.3

Onsite recommendations. Half of the participants indicated that their

institution received a recommendation during the onsite portion of reaffirmation of

accreditation (n = 35, 50.7%). The mean number of recommendations received by all institutions was 1.08 (SD = 1.61). Of the 34 institutions that received


recommendations during the onsite review, the mean number of recommendations

received was 2.21 (SD = 1.67). Recommendations received during the onsite phase

of accreditation are provided in Table 4.7 below.

Table 4.7 Onsite Recommendations Received

Received Recommendations on Principles: n %
Onsite 3.3.1.1 IE Educational Programs 22 31.9
Onsite 3.3.1.2 IE Administrative Support Services 11 15.9
Onsite 3.3.1.5 IE Community/Public Service 11 15.9
Onsite 3.5.1 General Education Competencies 11 15.9
Onsite 3.3.1.3 IE Academic & Student Support Services 10 14.5
Onsite 3.4.7 Consortial/Contractual Agreements 6 8.7
Onsite 2.5 Institutional Effectiveness 4 5.8
Onsite 4.1 Student Achievement 2 2.9
Onsite 3.1.1 Mission 1 1.4
Onsite 3.3.1.4 IE Research 1 1.4
Onsite 2.4 Institutional Mission 0 0

C&R review. Approximately one-quarter of institutions received a

recommendation at the C&R review (n = 18, 26.1%). The mean number of non-

compliant principles for all institutions was .59 (SD = 1.32). Of the 18 institutions that

received a recommendation from the C&R Review, the mean number of

recommendations received was 2.28 (SD = 1.71). The most common

recommendations were received on principles 3.3.1.1 Educational Programs (n = 13,

18.8%), and 3.5.1 General Education Competencies (n = 7, 10.1%).

Table 4.8 C&R Recommendations Received

Received Recommendations on Principles: n %
CR 3.3.1.1 IE Educational Programs 13 18.8
CR 3.5.1 General Education Competencies 7 10.1
CR 3.3.1.2 IE Administrative Support Services 5 7.2
CR 3.3.1.3 IE Academic & Student Support Services 5 7.2
CR 3.3.1.5 IE Community/Public Service 5 7.2
CR 2.5 Institutional Effectiveness 3 4.3
CR 3.4.7 Consortial/Contractual Agreements 2 2.9
CR 4.1 Student Achievement 1 1.4
CR 2.4 Institutional Mission 0 0
CR 3.1.1 Mission 0 0
CR 3.3.1.4 IE Research 0 0

Monitoring status. Approximately 20% of participants indicated that their institutions were placed onto monitoring status for non-compliance with an institutional effectiveness principle (n = 15, 21.7%). The mean number of principles cited across all institutions was .44 (SD = 1.04). Of the 15 institutions placed onto monitoring status, the

mean number of principles cited was 2.0 (SD = 1.36). The principles most frequently

leading to monitoring status were 3.3.1.1 Educational Programs (n = 10, 14.5%) and

3.5.1 General Education Competencies (n = 6, 8.7%).

Table 4.9 Principles Leading to Monitoring Status

Received Recommendations on Principles: n %
Monitoring 3.3.1.1 IE Educational Programs 10 14.5
Monitoring 3.5.1 General Education Competencies 6 8.7
Monitoring 3.3.1.5 IE Community/Public Service 5 7.2
Monitoring 3.3.1.3 IE Academic & Student Support Services 4 5.8
Monitoring 3.3.1.2 IE Administrative Support Services 3 4.3
Monitoring 2.5 Institutional Effectiveness 1 1.4
Monitoring 3.4.7 Consortial/Contractual Agreements 1 1.4

Warning status. Warning status is the least severe public negative sanction (SACSCOC, 2013a). A total of four institutions were placed onto warning status (5.8%). Of the four institutions, three were placed onto warning status for one institutional effectiveness principle, and one was placed onto warning status for two institutional effectiveness principles. The principles for which institutions were placed onto warning status due to an inability to demonstrate compliance were 3.3.1.1 Institutional Effectiveness of Educational Programs (n = 2), 3.3.1.2 Administrative Support Services (n = 1), 3.3.1.5 Community/Public Service (n = 1), and 3.4.7 Consortial/Contractual Agreements (n = 1).

Probation status. The most severe negative public sanction prior to removal

from accredited status is probation status (SACSCOC, 2013a). Only two of the

participants indicated that their institutions were placed onto probation status (2.9%). The two institutions were placed onto probation due to an inability to demonstrate compliance with principles 3.3.1.1 Institutional Effectiveness of Educational Programs and 3.3.1.5 Community/Public Service.

Most difficult institutional effectiveness principles. Participants were asked

to rank order the institutional effectiveness principles in order from most to least

difficult to demonstrate compliance with (1 reflects the most difficulty, 10 reflects the

least difficulty). The rank order question presented inconsistent survey response

patterns. Many participants did not fully complete the ranking question (n = 27,

39.13%). Some participants ranked no more than three items (n = 10). Of these 10 participants, five ranked only the number one issue, one ranked two items as the number one issue, and one ranked the first issue and then double-ranked second place. Some participants stopped ranking after the first and second items (n = 2), some stopped after the first three items (n = 2), and one respondent did not complete any of the ranking options.


Due to the inconsistent survey response pattern, the number of participants

differs for each item ranked. Additionally, the two participants that duplicated their

responses are reflected in the frequency counts. The items with the lowest ranking

means (M), which indicates a higher degree of difficulty for the institutions, are 3.3.1.1

Educational Programs (M = 2.58, n = 67), 3.5.1 General Education Competencies (M = 3.74, n = 62), and 3.3.1.3 Academic and Student Support Services (M = 4.37, n = 60). Results are shown in Table 4.10.

Table 4.10 Rank Order Difficult Principles of Accreditation

Difficult Principles: n Min. Max. M SD
3.3.1.1 IE Educational Programs 67 1 11 2.58 2.33
3.5.1 General Education Competencies 62 1 10 3.74 2.68
3.3.1.3 IE Academic & Student Support Srvs. 60 1 8 4.37 1.56
3.3.1.2 IE Administrative Support Services 60 1 8 4.42 2.04
2.5 Institutional Effectiveness 57 1 10 4.79 3.04
4.1 Student Achievement 56 1 11 5.30 2.75
3.3.1.5 IE Community/Public Service 59 1 11 5.41 2.60
3.4.7 Consortial/Contractual Agreements 55 0 11 7.64 2.95
2.4 Institutional Mission 57 2 11 8.28 1.67
3.1.1 Mission 57 3 11 8.37 1.55
3.3.1.4 IE Research 41 0 11 8.44 3.41

Sources of difficulty in demonstrating compliance. Participants were asked

to identify themes that contributed to the most difficult institutional effectiveness

SACSCOC principles to demonstrate compliance with. The majority of participants indicated that insufficient evidence (n = 48, 69.6%) and insufficient buy-in from faculty (n = 38, 55.1%) contributed to the difficulties experienced during reaffirmation of accreditation. Further, approximately half of the participants indicated that insufficient institutional processes or procedures (n = 35, 50.7%) and insufficient knowledge of assessment or institutional effectiveness (n = 34, 49.3%) contributed to difficulties experienced during reaffirmation of accreditation. Results are shown in Table 4.11 below.

Table 4.11 Sources of Difficulty in Demonstrating Compliance

Sources of Difficulty: n %
Insufficient Evidence 48 69.6
Insufficient Institutional Buy-in from Faculty 38 55.1
Insufficient Institutional Processes or Procedures 35 50.7
Insufficient Knowledge of Assessment or IE 34 49.3
Insufficient Staff 30 43.5
Insufficient Executive Level Leadership Involvement 23 33.3
Insufficient Institutional Buy-in from Administration 22 31.9
Insufficient Appropriate Decision-making 19 27.5
Insufficient Organizational Structure 19 27.5
Insufficient Time 18 26.1
Insufficient Technology 17 24.6
Insufficient Knowledge of Accreditation 17 24.6
Insufficient Institutional Buy-in from Staff 12 17.4
Too Few Committees 9 13.0
Insufficient Financial Resources 9 13.0
Too Many Committees 6 8.7

Changes experienced. When questions related to total change were

aggregated together, nearly all institutions indicated experiencing at least a slight

increase in at least one area (n = 63, 91.3%). Survey results indicated that the highest

mean scores included the amount of change in professional development for faculty related to institutional effectiveness (M = 2.28, SD = 1.01), the quality or usefulness of reports produced by the institutional research or institutional effectiveness office (M = 2.26, SD = 1.05), and professional development for institutional effectiveness staff (M = 2.06, SD = 0.92). Further, results indicated that the lowest mean scores included the amount of change related to the number of institutional effectiveness committees (M = 1.38, SD = 0.73), the institutional effectiveness governance/committee structure (M = 1.39, SD = 0.67), and the financial resources related to institutional effectiveness technology (M = 1.46, SD = 0.72). Results are shown in Table 4.12 below.

Table 4.12 Amount of Change in Institutional Effectiveness by Category

Category (1) No change or decrease % (2) Slight increase % (3) Moderate increase % (4) Major increase % M SD
Professional Development for Faculty 27.5 30.4 29.0 13.0 2.28 1.01
Quality or Usefulness of Reports 29.0 31.9 23.2 15.9 2.26 1.05
Professional Development for Inst. Effectiveness Staff 30.4 42.0 18.8 8.7 2.06 0.92
Stakeholders Involved 33.3 40.6 21.7 4.3 1.97 0.86
Professional Development for Educational Support 34.8 36.2 24.6 4.3 1.99 0.88
Staff Dedicated to Inst. Effectiveness 43.5 31.9 13.0 11.6 1.93 1.02
Professional Development for Admin. 39.1 37.7 18.8 4.3 1.88 0.87
Dean or Division-level Leadership 39.1 40.6 15.9 4.3 1.86 0.85
Inst. Effectiveness Processes 47.8 26.1 21.7 4.3 1.83 0.92
Executive-level Leadership 44.9 33.3 17.4 4.3 1.81 0.88
Department or Unit-level Leadership 40.6 39.1 15.9 4.3 1.84 0.85
Overall Resources 44.9 31.9 18.8 4.3 1.83 0.89
Organizational Structure 55.1 27.5 14.5 2.9 1.65 0.84
Technology 59.4 24.6 8.7 7.2 1.64 0.92
Financial Resources 58.0 24.6 13.0 4.3 1.64 0.87
Financial Resources to Technology 65.2 24.6 8.7 1.4 1.46 0.72
Governance or Committee Structure 69.6 23.2 5.8 1.4 1.39 0.67
Committees 75.4 13.0 10.1 1.4 1.38 0.73
N = 69

Improvements experienced. When questions related to total improvements

were aggregated together, nearly all institutions indicated experiencing at least a slight

improvement in at least one area (n = 65, 94.2%). The highest mean scores were related to improvement in recurring assessment of student learning outcomes (M = 2.71, SD = 1.01), improvement in preparation or readiness for the SACSCOC Fifth Year Interim Review (M = 2.60, SD = 0.95), and the institution remaining in a continued state of compliance (M = 2.40, SD = 1.04). The lowest mean scores were related to improvement in the overall effectiveness of the organization (M = 2.24, SD = 0.78) and recurring assessment of nonacademic areas (M = 2.26, SD = 0.91). Results are shown in Table 4.13 below.


Table 4.13 Amount of Improvement in Institutional Effectiveness by Category

Category (1) No change or decrease % (2) Slight improvement % (3) Moderate improvement % (4) Major improvement % M SD
Recurring Assessment of Student Learning Outcomes 17.4 15.9 43.5 21.7 2.71 1.01
Preparation for SACSCOC 5th Year Review 14.5 27.5 39.1 17.4 2.60 0.95
Institution Remains in a Continued State of Compliance 24.6 26.1 31.9 15.9 2.40 1.04
Recurring Assessment of Student Support Services 23.2 31.9 31.9 11.6 2.32 0.97
Recurring Assessment of Nonacademic Areas 21.7 37.7 30.4 8.7 2.26 0.91
Overall Effectiveness of the Organization 15.9 47.8 30.4 4.3 2.24 0.78
N = 68

SACSCOC compliance today. When questions related to perceived SACSCOC compliance were aggregated together, approximately half of the participants indicated that it was at least slightly possible that the institution would receive a recommendation on at least one of the institutional effectiveness standards if the SACSCOC were to visit the institution as of the date of the survey's completion (n = 37, 53.6%). The highest mean scores, indicating a higher possibility of a recommendation, were for compliance with standard 3.3.1.1 Educational Programs (M = 2.39, SD = 1.20), 3.5.1 General Education Competencies (M = 2.14, SD = 0.94), and 3.3.1.2 Administrative Support Services (M = 2.01, SD = 0.96). The lowest mean scores were related to 3.1.1 Mission (M = 1.57, SD = 0.78), 2.4 Institutional Mission (M = 1.52, SD = 0.74), and compliance with standard 3.3.1.4 Research within Mission (M = 1.35, SD = 0.73). Results are shown in Table 4.14 below.

Table 4.14 SACSCOC Compliance Today

Principle (1) Not at all possible % (2) Very unlikely % (3) Slightly possible % (4) Somewhat likely % (5) Very likely % M SD
Compliance with Standard 3.3.1.1 (IE Educational Programs) 24.6 39.1 15.9 13.0 7.2 2.39 1.20
Compliance with Standard 3.5.1 (General Education Competencies) 26.1 42.0 26.1 2.9 2.9 2.14 0.94
Compliance with Standard 3.3.1.2 (IE Administrative Support Services) 34.8 37.7 20.3 5.8 1.4 2.01 0.96
Compliance with Standard 3.3.1.3 (IE Academic & Student Support Services) 33.3 37.7 24.6 2.9 1.4 2.01 0.92
Compliance with Standard 4.1 (Student Achievement) 33.3 47.8 11.6 4.3 2.9 1.96 0.95
Compliance with Standard 2.5 (Institutional Effectiveness) 33.3 44.9 17.4 2.9 1.4 1.94 0.87
Compliance with Standard 3.3.1.5 (IE Community/Public Service) 40.6 37.7 13.0 5.8 2.9 1.93 1.02
Compliance with Standard 3.4.7 (Consortial/Contractual Agreements) 49.3 37.7 7.2 2.9 2.9 1.72 0.94
Compliance with Standard 3.1.1 (Mission) 55.1 37.7 4.3 1.4 1.4 1.57 0.78
Compliance with Standard 2.4 (Institutional Mission) 56.5 39.1 1.4 1.4 1.4 1.52 0.74
Compliance with Standard 3.3.1.4 (IE Research within Mission) 72.5 21.7 1.4 1.4 1.4 1.35 0.73
N = 69

Differences in Perceived Change or Improvement by Recommendations Received

Research question one intended to determine whether a statistically significant

difference existed for levels of perceived change or improvement (dependent variable)

between institutions that received recommendations (independent variable) and those that

did not. In order to examine differences in institutions that received any

recommendations during any phase of accreditation and those that did not, the researcher

conducted an independent samples t-test. Results indicated that no significant differences

were found for institutions that received any recommendations (n = 61, 88.4%) and those

that did not (n = 8, 11.6%) for any of the items related to perceived change or perceived

improvements.
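As an illustration of this comparison (the study ran the test in SPSS 22), an independent samples t-test could be scripted as in the sketch below; the file and column names are hypothetical.

# Illustrative sketch only; the study conducted this test in SPSS 22.
# "any_rec" (1 = received, 0 = did not) and "total_change" are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("sris_scored.csv")
received = df.loc[df["any_rec"] == 1, "total_change"]
not_received = df.loc[df["any_rec"] == 0, "total_change"]

# Independent samples t-test, equal variances assumed.
t, p = stats.ttest_ind(received, not_received)
print(f"t({len(df) - 2}) = {t:.2f}, p = {p:.3f}")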

Offsite recommendations. In order to examine differences in institutions that

received recommendations and those that did not during the offsite phase of the


reaffirmation of accreditation, the researcher conducted an independent samples t-test.

Significant differences were found regarding the

• leadership change score (p = .017),

• change in financial resources dedicated to institutional effectiveness (p = .026),

• number of institutional effectiveness committees (p = .001),

• amount of change in stakeholders involved in institutional effectiveness (p =

.030), and

• the amount of change in executive-level leadership involvement in institutional

effectiveness (p = .036).

Results are shown in Table 4.15 below.

Table 4.15 t-test Results for Changes and Improvements by Offsite Recommendations

Recommendation Received: Yes / No
Category M SD n (Yes) M SD n (No) 95% CI LL, UL t(67) d
Executive-level Leadership 1.9 0.9 54 1.4 0.63 15 -1.026, -.026 -2.1 0.61
Leadership Change Score 7.9 2.9 54 6 2.07 15 -3.99, .390 -2.44 0.71
Stakeholders Involved 2.1 0.9 54 1.53 0.64 15 -1.043, -.075 -2.31 0.67
Note. p < .05. M = mean, SD = standard deviation, CI = confidence interval of the difference, LL = lower limit, UL = upper limit, degrees of freedom are in parentheses, d = Cohen's d.

Given a violation of homogeneous variances, F(1,67) = 11.06, p = .001, a t-test

not assuming homogeneous variances was calculated for changes in the number of

institutional effectiveness committees. Results indicated that significant differences

between groups existed, t(53.33) = -2.21, p = .032. The results of this test suggest that

institutions that received recommendations (M = 1.44, SD = 0.79) experienced more


changes in the number of committees than institutions that did not receive any

recommendations (M = 1.13, SD = 0.35).

Given a violation of homogeneous variances, F(1,67) = 6.24, p = .015, a t-test not

assuming homogeneous variances was calculated for changes in the financial resources

dedicated to institutional effectiveness. Results indicated that significant differences

between groups existed, t(34.60) = -2.40, p = .022. The results of this test suggest that

institutions that received recommendations (M = 1.74, SD = 0.92) experienced more

changes in the financial resources dedicated to institutional effectiveness than institutions

that did not receive any recommendations (M = 1.27, SD = 0.59).
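The procedure described above, Levene's test for homogeneity of variances followed by a Welch t-test when the assumption is violated, could be sketched as follows; this is an illustration with hypothetical column names, not the study's SPSS syntax.

# Illustrative sketch of Levene's test followed by a Welch t-test.
import pandas as pd
from scipy import stats

df = pd.read_csv("sris_scored.csv")  # hypothetical scored data set
yes = df.loc[df["offsite_rec"] == 1, "fin_resources_change"]
no = df.loc[df["offsite_rec"] == 0, "fin_resources_change"]

levene_stat, levene_p = stats.levene(yes, no)
if levene_p < .05:
    # Variances differ; Welch's test does not assume homogeneous
    # variances and adjusts the degrees of freedom accordingly.
    t, p = stats.ttest_ind(yes, no, equal_var=False)
else:
    t, p = stats.ttest_ind(yes, no)
print(f"Levene p = {levene_p:.3f}; t = {t:.2f}, p = {p:.3f}")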

Onsite recommendations. In order to examine differences in institutions that

received recommendations during the onsite phase of accreditation and those that did not,

the researcher conducted an independent samples t-test. Results indicated that no

significant differences were found for institutions that received any recommendations (n

= 34) and those that did not (n = 35) for any perceived change or perceived

improvements.

C&R review. In order to examine differences in institutions that received

recommendations and those that did not during the C&R portion of the reaffirmation of

accreditation, the researcher conducted an independent samples t-test. Results shown in

Table 4.16 below. Significant differences were found regarding:

• total change related to institutional effectiveness (p = .003),

• change in executive-level leadership involvement (p = .001),

• intangible change score (p = .012),

• change in financial resources dedicated to institutional effectiveness (p = .007),


• faculty professional development (p = .007),

• administration professional development (p = .028),

• institutional effectiveness staff professional development (p = .021),

• processes (p = .040),

• overall resources (p = .003),

• department leadership (p = .019),

• tangible change score (p = .002),

• leadership change score (p = .004), and

• staff dedicated to institutional effectiveness (p = .038).

Table 4.16 t-test Results for Changes and Improvements by C&R Review Recommendations

Recommendation Received: Yes (n = 18) / No (n = 51)
Category M SD M SD 95% CI LL, UL t(67) d
Total Change 36.56 9.28 28.71 8.81 -12.74, -2.96 -3.21 0.88
Executive-level Leadership 2.39 0.92 1.61 0.77 -1.23, -0.34 -3.50 0.96
Intangible Change Score 19.78 5.17 16.06 5.26 -6.59, -0.85 -2.59 0.71
Financial Resources 2.11 1.02 1.47 0.76 -1.10, -0.19 -0.64 0.18
Faculty Professional Development 2.83 0.86 2.08 1.00 -1.28, -0.23 -2.86 0.78
Administration Professional Development 2.28 0.90 1.75 0.81 -.99, -0.07 -2.31 0.63
IE Staff Professional Development 2.50 0.79 1.90 0.92 -1.09, -0.11 -2.45 0.67
Processes 2.22 1.06 1.69 0.84 -1.03, -0.04 -2.18 0.60
Overall Resources 2.39 0.98 1.63 0.77 -1.22, -0.31 -3.35 0.82
Department Leadership 2.28 0.96 1.69 0.76 -1.04, -0.15 -2.65 0.65
Tangible Change Score 12.44 3.75 9.33 3.53 -5.07, -1.15 -3.17 0.87
Leadership Change Score 9.06 2.80 6.92 2.53 -3.56, -0.71 -3.00 0.73
Note. p < .05. M = mean, SD = standard deviation, CI = confidence interval of the difference, LL = lower limit, UL = upper limit, degrees of freedom are in parentheses, d = Cohen's d.

Given a violation of homogeneous variances, F(1,67) = 7.96, p = .006, a t-test not

assuming homogeneous variances was calculated for changes in the amount of staff

dedicated to institutional effectiveness. Results indicated significant differences between

groups, t(23.08) = -2.20, p = .038. The results of this test suggest that institutions that

received recommendations (M = 2.44, SD = 1.25) experienced more change in the

amount of staff dedicated to institutional effectiveness than institutions that did not

receive any recommendations (M = 1.75, SD = 0.87).

Monitoring status. In order to examine differences in institutions that were

placed onto monitoring status and those that were not, the researcher conducted an

independent samples t-test. Significant differences were found regarding the

• overall resources (p = .001),

• total amount of change related to institutional effectiveness (p = .004),

• the amount of change in executive-level leadership involvement in institutional

effectiveness (p = .001),

• intangible change score (p = .014),

• change in financial resources dedicated to institutional effectiveness (p = .013),

• faculty professional development (p = .005),

• administration professional development (p = .003),


• institutional effectiveness staff professional development (p = .031),

• department leadership (p = .002),

• staff dedicated to institutional effectiveness (p = .033),

• sum of leadership change (p = .001), and

• tangible change score (p = .014).

Results are shown in Table 4.17 below.

Table 4.17 t-test Results for Changes and Improvements by Monitoring Status

Recommendation Received: Yes (n = 15) / No (n = 54)
Category M SD M SD 95% CI LL, UL t(67) d
Overall Resources 2.47 0.99 1.65 0.78 -1.30, -.34 -3.38 0.99
Total Change 37.07 9.35 29.00 8.88 -13.30, -2.84 -3.08 0.90
Executive-level Leadership 2.53 0.83 1.61 0.79 -1.39, -.46 -3.96 1.16
Intangible Change Score 20.07 5.26 16.19 5.25 -6.94, -.82 -2.53 0.74
Financial Resources 2.13 1.06 1.50 0.77 -1.12, -.14 -2.58 0.75
Faculty Professional Development 2.93 0.88 2.09 0.98 -1.40, -.28 -3.01 0.88
Administration Professional Development 2.47 0.83 1.72 0.81 -1.22, -.27 -3.13 0.91
IE Staff Professional Development 2.50 0.83 1.90 0.91 -1.13, -.09 -2.33 0.68
Department Leadership 2.47 0.92 1.67 0.75 -1.26, -.34 -3.47 1.01
Staff Dedicated to IE 2.47 1.25 1.78 0.90 -1.26, -.12 -2.40 0.70
Leadership Change Score 9.47 2.67 6.93 2.52 -4.03, -1.05 -3.41 1.00
Tangible Change Score 12.27 3.92 9.56 3.60 -4.85, -.57 -2.53 0.74
Note. p < .05. M = mean, SD = standard deviation, CI = confidence interval of the difference, LL = lower limit, UL = upper limit, degrees of freedom are in parentheses, d = Cohen's d.

Warning and probation status. Due to the small number of institutions placed

onto warning status (n = 4) and probation status (n = 2), no inferential statistical tests

were run for these phases of the accreditation process.

Differences in Perceived Change and Improvements by Severity of Recommendations

Research question two sought to determine statistically significant differences in

levels of perceived change or improvements (dependent variable) based on the severity of

the recommendations (independent variable). A one-way analysis of variance (ANOVA)

was used to compare the mean of the total change score based on the severity of the

recommendations received (α = .05). This test was found to be statistically significant,

F(3, 65) = 3.437, p = .022. The strength of the relationship, as indexed by eta-squared

(η2) was .14, considered a strong effect size. A Tukey HSD test indicated a significant

difference (p = .03) between institutions that received recommendations after the onsite visit

(M = 36.56, SD = 9.28) and institutions that received recommendations at the offsite

review (M = 28.48, SD = 7.97). These results indicated that institutions that received

recommendations after the onsite phase of accreditation made greater levels of change

than institutions that received recommendations at the offsite review.
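A sketch of this analysis pipeline, a one-way ANOVA with an eta-squared effect size and Tukey HSD post hoc contrasts, appears below; the study performed these steps in SPSS and Excel, so the Python code and its column names are illustrative assumptions.

# Illustrative sketch; the study used SPSS (ANOVA, Tukey HSD) and Excel
# (effect sizes). "severity" (0-3) and "total_change" are hypothetical names.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("sris_scored.csv")
groups = [g["total_change"] for _, g in df.groupby("severity")]

# Omnibus one-way ANOVA across the four severity groups.
f, p = stats.f_oneway(*groups)
print(f"F = {f:.3f}, p = {p:.3f}")

# Eta-squared: between-groups sum of squares over total sum of squares.
grand_mean = df["total_change"].mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((df["total_change"] - grand_mean) ** 2).sum()
print(f"eta-squared = {ss_between / ss_total:.2f}")

# Tukey HSD on all pairwise severity contrasts.
print(pairwise_tukeyhsd(df["total_change"], df["severity"]))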


A one-way ANOVA compared the mean of the total improvement score based on the severity of the recommendations received, and no significant results were found. Further analysis of specific types of improvements based on the severity of the recommendations was conducted via ANOVAs, and again no significant results were indicated; thus, no further post hoc testing occurred (Gravetter & Wallnau, 2013).

Further analyses of specific types of changes (dependent variables) based on the severity of the recommendations (independent variable) were conducted via ANOVAs. Significant findings occurred for a number of institutional changes:

• professional development for faculty,

• professional development for staff,

• departmental leadership,

• financial resources,

• number of institutional effectiveness committees,

• overall resources,

• executive-level leadership,

• intangible change score,

• tangible change score, and

• leadership change score.

Eta-squared effect size results indicated that professional development for faculty,

number of institutional effectiveness committees, overall resources, executive-level

leadership, tangible change score, and leadership change score all had strong effect sizes,


near .15, accounting for around 15% of the variance in the dependent variable (Jaccard

& Becker, 1997). All significant ANOVA results, including effect sizes, are illustrated in

Table 4.18 below.

Table 4.18 ANOVA Results for Changes and Improvements by Severity of Recommendations

Amount of Change Groups SS df MS F p η2
Faculty PD Between 8.82 3 2.94 3.13 .031 .13
Within 60.95 65 0.94
Total 69.77 68
IE Staff PD Between 6.49 3 2.16 2.74 .050 .11
Within 51.28 65 0.79
Total 57.77 68
Dpt. Leadership Between 5.51 3 1.84 2.73 .051 .11
Within 43.74 65 0.67
Total 49.25 68
Financial Resources to IE Between 6.26 3 2.09 2.97 .038 .12
Within 45.68 65 0.70
Total 51.94 68
Number of IE Committees Between 4.80 3 1.60 3.31 .025 .13
Within 31.41 65 0.48
Total 36.20 68
IE Overall Resources Between 7.74 3 2.58 3.63 .02 .14
Within 46.17 65 0.71
Total 53.91 68
IE Executive Leadership Between 9.69 3 3.23 4.90 .00 .18
Within 42.87 65 0.66
Total 52.55 68
Intangible Change Score Between 234.89 3 78.30 2.85 .044 .12
Within 1787.06 65 27.50
Total 2021.94 68
Tangible Change Score Between 129.38 3 43.13 3.26 .027 .13
Within 859.17 65 13.22
Total 988.55 68
Leadership Change Score Between 69.85 3 23.28 3.41 .022 .14
Within 443.37 65 6.82
Total 513.22 68
Note. SS = sum of squares, df = degrees of freedom, MS = mean square, F = F statistic, p = probability, η2 = eta-squared.

Tukey HSD post hoc analyses were conducted on all possible pairwise contrasts,

given the statistically significant omnibus ANOVA F test. Findings indicated that all

significant results involved the post onsite phase of accreditation. The majority of

significant findings occurred between the post onsite and onsite phases. The statistically

significant findings of analyses are presented in Table 4.19 below. The table illustrates

where significant findings occurred. For example, the amount of change for faculty


professional development was significantly different between the mean scores at the

offsite phase of accreditation and the post onsite phases of accreditation.

Table 4.19 Tukey HSD Results for Changes and Improvements by Severity of Recommendations

Category M SD (comparison group) M SD (Post Onsite group)
Faculty PD 1.91 0.95 2.83 0.86
Inst. Effct Staff PD 1.50 0.54 2.50 0.79
Dept Leadership 1.55 0.69 2.28 0.96
Financial Resources 1.35 0.81 2.11 1.02
Inst. Effct Committees 1.04 0.21 1.67 0.84
Overall Resources 1.60 0.75 2.39 0.98
Executive-level Leadership 1.40 0.68 2.39 0.92
Intangible 15.30 4.72 19.78 5.17
Tangible 9.30 3.81 12.44 3.75
Leadership 6.55 2.48 9.06 2.80
Note. p < .05. M = mean, SD = standard deviation, PD = professional development. In each row, the first pair of values is the lower-severity comparison group and the second pair is the post onsite group; all significant contrasts involved the post onsite phase.

Institutional Change and Improvement Predictors

The third research question examined factors (independent variables) that best

predicted institutional change or improvement (dependent variable) through multiple

regression analyses. The first multiple regression analysis examined the predictors for

total amount of institutional changes experienced as a result of undergoing reaffirmation

of accreditation. The second multiple regression analysis examined the predictors for


total amount of institutional improvements experienced as a result of undergoing

reaffirmation of accreditation.

A multiple regression was run to predict the total amount of change occurring as a

result of the reaffirmation of accreditation process from the independent variables:

• difficulty due to insufficient time,

• difficulty due to too many committees,

• advance time spent on the SACSCOC compliance certification,

• institution’s leadership team attended the SACSCOC orientation,

• difficulty due to insufficient knowledge of assessment or institutional

effectiveness, and

• severity of recommendations.

These variables statistically significantly predicted the total amount of change, F(6,62) =

7.904, p < .01, adj. R2 = .379. All variables included in the model were statistically

significant, p < .05. The assumptions of linearity, independence of errors,

homoscedasticity, unusual points, and normality of residuals were met. Regression

coefficients and standard errors can be found in Table 4.20 below.

Table 4.20 Summary of Multiple Regression Analysis for Total Change Score

Predictors B SE(B) β t p
Constant 13.65 4.61 2.96 .004
Severity of Recommendations 2.24 1.07 .23 2.10 .040
Advance Time Working on Compliance Certification 2.75 .74 .38 3.71 .000
Leadership Team Attended Orientation -3.73 1.74 -.22 -2.15 .036
Difficulty: Too Many Committees -7.22 3.31 -.22 -2.18 .033
Difficulty: Insufficient Knowledge of Assessment or Institutional Effectiveness 4.96 2.08 .26 2.39 .020
Difficulty: Insufficient Time 7.64 2.28 .36 3.35 .001
Note. Adjusted R2 = .379 proportion of variance explained, B = unstandardized regression coefficient, SE(B) = standard error of B, β = standardized regression coefficient, t = obtained t-value, p = probability.

Results of the regression model indicated that severity of recommendations

received, time spent on compliance certification, insufficient knowledge of assessment or

institutional effectiveness, and insufficient time dedicated to accreditation were positively

and significantly correlated with the criterion. Leadership attended the orientation

(reverse coded) and too many committees were significantly and negatively correlated

with the amount of change implemented by the institution.
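A comparable multiple regression could be fit outside of SPSS as sketched below; the predictor names are hypothetical stand-ins for the six variables listed above.

# Illustrative sketch; the study fit this model in SPSS 22.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sris_scored.csv")  # hypothetical scored data set
model = smf.ols(
    "total_change ~ severity + prep_time + orientation"
    " + too_many_committees + insufficient_knowledge + insufficient_time",
    data=df,
).fit()
print(model.summary())      # coefficients, t-values, and p-values
print(model.rsquared_adj)   # adjusted R-squared (reported as .379)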

The second multiple regression analysis examined the predictor for total amount

of institutional improvements experienced as a result of undergoing reaffirmation of

accreditation. A multiple regression was run to predict the total amount of improvement

from the following variables:

• amount of change in department or unit-level leadership involvement,

• total annual budget dedicated to institutional effectiveness,

• severity of recommendations,

• institution's leadership team attended the SACSCOC orientation,

• difficulty due to insufficient evidence, and

• amount of change in quality or usefulness of reports produced from institutional

research office.

The assumptions of linearity, independence of errors, homoscedasticity, unusual points

and normality of residuals were met. These variables statistically significantly predicted

the total amount of improvement, F(6, 61) = 9.233, p < .01, adj. R2 = .424. All variables


added statistically significantly to the prediction, p < .05. Regression coefficients and

standard errors can be found in Table 4.21 below.

Table 4.21 Summary of Multiple Regression Analysis for Total Improvement Score

Predictors B SE(B) β t p
Constant 8.47 2.23 3.79 .000
Leadership Team Attended SACSCOC Orientation -2.19 0.80 -.26 -2.75 .008
Severity of Recommendations 1.00 0.45 .21 2.20 .032
Budget Dedicated to Institutional Effectiveness 0.45 0.17 .26 2.70 .009
Change: Quality of Reports from Institutional Research Office 1.42 0.47 .32 3.06 .003
Difficulty: Insufficient Evidence -2.70 0.97 -.27 -2.77 .007
Change: Unit-level Leadership Involvement 1.33 0.59 .245 2.26 .027
Note. Adjusted R2 = .424 proportion of variance explained, B = unstandardized regression coefficient, SE(B) = standard error of B, β = standardized regression coefficient, t = obtained t-value, p = probability.

Results indicated that severity of recommendations, leadership attending the

SACSCOC orientation, total institutional effectiveness budget, usefulness of reports from

the institutional research office, difficulty due to insufficient evidence, and change in

departmental or unit level leadership involvement were predictors for the total amount of

improvements made by institutions. Further, results indicated that as severity of

recommendations, leadership attending the SACSCOC orientation, total institutional

effectiveness budget, usefulness of reports from the institutional research or institutional

effectiveness office, and change in departmental or unit-level leadership involvement

increased, the total amount of improvement also increased. Conversely, as difficulty due to insufficient evidence increased, the total amount of improvement decreased.
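The assumption checks reported for both regression models (independence of errors, normality of residuals, unusual points) can be approximated outside of SPSS, as in the hedged sketch below with hypothetical variable names.

# Illustrative residual diagnostics; the study checked assumptions in SPSS.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.stattools import durbin_watson
from scipy import stats

df = pd.read_csv("sris_scored.csv")  # hypothetical scored data set
model = smf.ols(
    "total_improvement ~ severity + ie_budget + report_quality"
    " + orientation + insufficient_evidence + unit_leadership",
    data=df,
).fit()

resid = model.resid
print("Durbin-Watson (independence of errors):", durbin_watson(resid))
print("Shapiro-Wilk (normality of residuals):", stats.shapiro(resid))
# Unusual points: standardized residuals beyond +/-3 flag potential outliers.
std_resid = model.get_influence().resid_studentized_internal
print("Potential outliers:", int((abs(std_resid) > 3).sum()))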


Predictors of Severity of Recommendations

Research question four examined factors (independent variables) that best predict

the severity of the recommendations received by institutions (dependent variable). A

multiple regression was run to predict the severity of the recommendations from the following variables: difficulty due to insufficient time, amount of change in professional development for institutional effectiveness, difficulty due to insufficient knowledge of assessment or institutional effectiveness, difficulty due to insufficient executive level leadership involvement, and amount of change in the number of institutional effectiveness processes. The assumptions

of linearity, independence of errors, homoscedasticity, unusual points, and normality of

residuals were met. These variables statistically significantly predicted the total amount

of severity of recommendations, F(5, 63) = 11.115, p < .01, adj. R2 = .427. All variables

added statistically significantly to the prediction, p < .05. Regression coefficients and

standard errors can be found in Table 4.22 below.

Table 4.22 Summary of Multiple Regression Analysis for Severity of Recommendations

Predictors B SE(B) β t p
Constant 1.31 0.28 4.73 .000
Change: Professional Development for Institutional Effectiveness 0.28 0.10 0.26 2.74 .008
Change: Institutional Effectiveness Processes 0.33 0.11 0.30 3.10 .003
Difficulty: Insufficient Knowledge of Assessment or IE 0.56 0.19 0.29 3.03 .004
Difficulty: Insufficient Executive Level Leadership Involvement 0.60 0.20 0.29 3.06 .003
Difficulty: Insufficient Time -0.97 0.22 -0.44 -4.52 .000
Note. Adjusted R2 = .427 proportion of variance explained, B = unstandardized regression coefficient, SE(B) = standard error of B, β = standardized regression coefficient, t = obtained t-value, p = probability.

Results indicated that as the amount of change in professional development for

institutional effectiveness, difficulty due to insufficient knowledge of assessment or

institutional effectiveness, difficulty due to insufficient executive level leadership

involvement, and amount of change in number of institutional effectiveness processes

increased, the severity of recommendations increased. Conversely, as difficulty due to insufficient time increased, the severity of recommendations decreased.

Summary

Chapter IV described the descriptive, inferential, and predictive statistical

analyses used to answer the four research questions. Research question one was analyzed

via independent samples t-tests. Results indicated that significant differences occurred

for institutions that received recommendations compared to those that did not receive

recommendations for many areas of institutional changes and at different phases of

accreditation. Research question two was analyzed through ANOVAs and Tukey HSD

tests. Results indicated that significant differences existed for several types of changes

that institutions implemented based on the severity of the recommendations. Research

questions three and four were analyzed through multiple regression analyses. Predictors

were obtained for the total amount of change implemented by institutions, the total

amount of improvements experienced by institutions, and the severity of

recommendations received by the institutions. Chapter V provides an overview of the

study, discussion of the findings, implications and recommendations for higher education

institutions, and suggestions for future research.


CHAPTER V

CONCLUSIONS AND RECOMMENDATIONS

Chapter V provides a discussion of the findings of the study. This chapter

provides an overview of the study, discussion of the findings, implications and

recommendations for higher education practice, and suggestions for future research.

Overview of the Study

The purpose of this study was to analyze the role of the Southern Association of

Colleges and Schools Commission on Colleges (SACSCOC) recommendations and

institutional changes based on the perceptions of the SACSCOC liaisons or primary

institutional effectiveness personnel at Level I community colleges in the SACSCOC

region that underwent reaffirmation between 2011 and 2015. This

study sought to understand the types of changes that institutions make, based on

recommendations received from the regional accrediting body.

The framework for institutional capacity provided the conceptual foundation

for this study (Alfred, Shults, Jaquette, & Strickland, 2009). According to Alfred et al.

(2009), institutional capacity comprises three components: 1) tangible resources,

2) intangible resources, and 3) leadership, and is influenced by leaders’ abilities to

leverage tangible and intangible resources. Where tangible and intangible resources

reflect the overall capacity of an organization, the decisions made on how to utilize

those resources determine the institution’s overall effectiveness (Alfred et al., 2009).

The researcher-developed web-based survey was distributed to 135 individuals,

yielding a 51.1% response rate (n = 69). Participants for the study were the SACSCOC

liaison or institutional effectiveness personnel from 69 institutions that went through

accreditation between 2011 and 2015.


The study was guided by the following research questions:

1) What is the statistically significant relationship between the independent

variable of community colleges that receive SACSCOC recommendations and

the dependent variable of overall (or total) level of perceived change or

improvement?

2) What is the statistically significant relationship between the independent

variable of ‘severity of recommendations’ group membership and the

dependent variable of levels of perceived changes or improvements?

3) Which factors (independent variables) best predict the overall level of

institutional change or improvement (dependent variables)?

4) Which factors (independent variables) best predict the severity of

recommendations received by the institutions (dependent variables)?

The following types of statistical analyses were utilized for examination of the

research questions: descriptive statistics, inferential statistics, and predictive statistics.

Descriptive statistics were provided for all survey response items, and for all four

research questions. Inferential statistics included independent samples t-tests for

research question one and a one-way analysis of variance (ANOVA) for research

question two. Cohen’s d effect size was provided for significant independent samples

t-test results and eta-squared effect sizes were provided for all significant ANOVA

results. Predictive statistics were utilized for research questions three and four and

involved multiple regression analyses. All statistical procedures and analyses, except

for effect sizes, were conducted in SPSS software version 22. The Cohen’s d and eta-

squared effect sizes were calculated by importing SPSS output into Microsoft Excel.
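For reference, the two effect sizes follow the standard formulas below; this is a sketch of the usual definitions, as the exact Excel computations are not documented here.

d = \frac{M_1 - M_2}{s_p}, \qquad s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}

\eta^2 = \frac{SS_{\text{between}}}{SS_{\text{total}}}

where M, s, and n denote the group means, standard deviations, and sizes, and SS denotes the ANOVA sums of squares.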


Discussion of the Findings

At a broad level, the purpose of this research was to analyze whether

recommendations from the SACSCOC drive institutional changes and improvements.

Broadly, the results of this study indicated that recommendations serve an important

function in driving institutional change during the reaffirmation of accreditation process.

The findings of this study enhance findings from Woolston (2012), that “institutions are

clearly using accreditation for the quality enhancement purpose for which it was

originally intended over a century ago: Accreditation provides an honest, institution-wide

self-assessment with the intention of leading to university improvement” (p. 171). The

following discussion of the findings is organized by the research questions: 1) differences

in perceived change or improvement by recommendations received, 2) differences in

perceived change or improvement by severity of recommendations, 3) institutional

change and improvement predictors, and 4) predictors of severity of recommendations.

Differences in Perceived Change or Improvement by Recommendations Received

Research question one intended to determine whether a statistically significant

difference exists for levels of perceived change or improvement (dependent variable)

between institutions that received recommendations (independent variable) and those that

did not. A number of statistically significant results were found during the various stages

of the reaffirmation of accreditation cycle and are shown in Table 5.1.


Table 5.1 Differences in Significant Findings by Accreditation Phases

Significant Finding: Phases with Significant Differences
Total Change: C&R, Monitoring
Leadership Change Score: Offsite, C&R, Monitoring
Department Leadership: C&R, Monitoring
Executive-level Leadership: Offsite, C&R, Monitoring
Intangible Change Score: C&R, Monitoring
Stakeholders Involved: Offsite
Faculty Professional Development: C&R, Monitoring
Administration Professional Development: C&R, Monitoring
IE Staff Professional Development: C&R, Monitoring
Processes: C&R
Number of Committees: Offsite
Tangible Change Score: C&R, Monitoring
Financial Resources: Offsite, C&R, Monitoring
Staff: C&R, Monitoring
Overall Resources: C&R, Monitoring
Note. No significant differences occurred at the onsite phase.

Overall, results indicated that institutions that received recommendations

experienced greater levels of change. Additionally, analysis of the data indicated that the

accreditation process contributed to institutional changes in the three areas that comprise

the framework for institutional capacity: 1) intangible resources, 2) tangible resources,

and 3) leadership involvement (Alfred et al., 2009). Although changes occurred at each

phase of accreditation, greater levels of change were experienced during the latter phases

of accreditation (i.e., Committee on Compliance and Reports [C&R] review and

monitoring status) when the institution was at risk for receiving (or had received) a


negative sanction. As shown in Table 5.1, leadership changes occurred throughout the

accreditation phases, where change in intangible and tangible resources occurred at

greater levels when the institution was at risk of receiving (or had received) a negative

sanction. Previous researchers found that accreditation is valued by senior administration

(Asgill, 1976; Oden, 2009; Woolston, 2012), which may explain why leadership

differences occurred during the offsite phase as well as the latter phases of accreditation.

These findings emphasize the role of recommendations and the importance of each phase

of accreditation in influencing changes at community colleges. Further, the results

support the literature that accreditation influences institutions toward continuous

improvement (Head & Johnson, 2011; Murray, 2002; Oden, 2009) and provides an

opportunity for professional development (Brittingham, 2009). Last, the results imply

that accreditation influences institutions toward positive changes within institutional

effectiveness. Considering that institutional effectiveness is synonymous with the degree to which an institution is accomplishing its mission (Head & Johnson, 2011; SACSCOC, 2011b), accreditation arguably assists institutions in accomplishing their mission by

influencing positive changes within institutional effectiveness areas.

Differences in Perceived Change and Improvement by Recommendation Severity

Research question two sought to determine statistically significant differences in

levels of perceived change and improvement (dependent variable) based on the severity

of the recommendations received (independent variable). Although a number of

significant differences in levels of perceived change based on the severity of the

recommendations were found, eta-squared effect size results indicated that professional

development for faculty, number of institutional effectiveness committees, overall


resources, executive-level leadership, tangible change score, and leadership change score

had the strongest effect sizes. These results may indicate that the severity of

recommendations received by the institutions has a greater effect on certain types of

perceived changes.
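
For readers less familiar with this effect size, a minimal sketch of how eta-squared might be computed for a one-way design follows; the function name, group labels, and perceived-change scores are illustrative assumptions, not the study's data or analysis code.

```python
# A minimal sketch of an eta-squared effect size for a one-way design.
# Group labels and scores are illustrative, not the study's data.
import numpy as np

def eta_squared(groups):
    """Return SS_between / SS_total for a list of group samples."""
    all_values = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand_mean = all_values.mean()
    ss_total = ((all_values - grand_mean) ** 2).sum()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    return ss_between / ss_total

# Hypothetical perceived-change scores grouped by recommendation severity
mild = [2.1, 2.4, 2.0, 2.6]
moderate = [2.9, 3.1, 2.7, 3.3]
severe = [3.8, 4.0, 3.6, 4.2]
print(f"eta-squared = {eta_squared([mild, moderate, severe]):.2f}")
```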

Analysis of all of the findings for research question two indicated that greater

levels of total change scores were experienced for institutions with more severe

recommendations. Additionally, similar to the results of research question one, every

significant finding in research question two occurred after the onsite review (i.e., C&R

review, monitoring status, warning status, or probation status), which indicates that

institutions made greater levels of change once the institution was at risk of receiving (or

had received) a negative sanction. Examination of these results, in conjunction with the

findings from research question one where no significant differences occurred for

institutions that received recommendations during the onsite phase, reveals that

institutions increased accreditation engagement after the onsite phase of accreditation due

to the potential negative ramifications for the institutions.

One possible explanation for institutions implementing changes late into the

accreditation process is that institutions may procrastinate due to experiencing

accreditation fatigue. Preparation for accreditation requires significant time, energy, and

planning (Murray, 2002; Oden, 2009; Wood, 2006; Woolston, 2012). The SACSCOC

accreditation process is lengthy, with a majority of the study institutions spending over 18

months preparing for accreditation. The entire SACSCOC reaffirmation process is three

years long and the time from submission of the self-study to the onsite visit is six to nine

months (SACSCOC, 2011b). The accreditation preparation team has likely returned to

regular job duties after the compliance certification is submitted to the SACSCOC. One

of the drivers of negative feelings toward accreditation is the time away from regular job

duties in order to undergo accreditation (Hulon, 2000; Skolits & Graybeal, 2007). After

years of effort and energy, institutions may feel that the job is done once the self-study is completed and therefore remove themselves from the accreditation process in

order to go back to regular job duties. It may require the impending threat of a negative

sanction to get institutions engaged in the accreditation process again. Further,

preparation for accreditation requires gathering input from numerous stakeholders; after the

accreditors are gone, a much smaller group of individuals may be involved in the final

phases of accreditation. The institution may not have as much momentum for

implementing changes after the onsite visit concludes.

Alternatively, another possible explanation for implementing changes late into the

accreditation process is that institutions may not have adequately prepared for

accreditation. Institutions may not have set aside the necessary amount of time required

to successfully study the daily practices of the college. Institutions that did not use accreditation as an opportunity to improve may have spent less time preparing and may not have deeply examined the functioning of the college.

Additionally, a possible explanation for institutions making changes later into the

accreditation process is prior experiences with accreditation in which no

consequences occurred. Tharp (2012) found that institutions in the Western Association

of Schools and Colleges region had taken accreditation less seriously due to previous

experiences with accreditors where accreditors had not enforced sanctions on institutions.

One of the participants in Tharp’s (2012) study indicated that his or her institution had

been placed on warning status for 12 years, prior to finally being placed on

probation after the accreditor received pressure from the federal government. The move

from warning to probation came as a surprise to the institution because the accreditor had

not enforced consequences in previous reaffirmation of accreditation cycles (Tharp,

2012). Administrators, staff, and faculty members with long tenure may recall a time when accreditation consequences were not as serious or as strictly enforced as in the current

approach to accreditation, which could add an additional layer to the already complicated

process of reaffirmation of accreditation.

Institutional Change and Improvement Predictors

The third research question examined factors (independent variables) that best

predict institutional change or institutional improvement (dependent variable). A

primary purpose of the current study was to determine the role of recommendations in

changing institutional effectiveness practices. Findings indicated that as the severity

of the recommendations received by institutions increased, so did the amount of

improvements and the amount of changes implemented by institutions. Other

researchers have found that institutions are using accreditation for the purpose of

quality improvement (e.g., Eaton, 2012; Head & Johnson, 2011; Murray, 2002; Oden,

2009; Woolston, 2012; Young, 2013). The current study supports these findings and

provides evidence that institutions are engaging in self-regulation. Results

support the idea that “accreditors encourage self-scrutiny for change and needed

improvement through ongoing self-examination and programs” (Eaton, 2012, p. 7).

According to Hartle (2012, p. 20), “accreditation is expected to convincingly

demonstrate its value and credibility.” The current study findings provide evidence

that accreditation recommendations influence changes and improvements at

community colleges, which suggests that accreditation is demonstrating its value and

credibility.

Financial resources. The current study also found that as the amount of financial

resources increased, the total amount of improvements experienced by the institutions

increased. Researchers have indicated that the financial aspect of accreditation and

institutional effectiveness is substantial for institutions (e.g., Cooper & Terrell, 2013;

Hartle, 2012). Cooper and Terrell (2013) found that in 2012-2013, the nationwide

average that institutions spent on assessment of student learning outcomes was $160,000.

In the current study, most institutions spent $150,001 - $200,000 on institutional

effectiveness in the 2014-2015 academic year, similar to the findings in Cooper and

Terrell’s (2013) study. Financial resources are an important needed tangible resource in

both institutional effectiveness and accreditation. Young (2013) found that the

accreditation process was enhanced by providing adequate financial resources during

accreditation preparation, and Bardo (2009) recommended that institutions increase the

amount of funding allocated to institutional effectiveness in order to meet the demands of

accreditation.

Further, the results may indicate that institutions are not allocating sufficient

financial resources toward institutional effectiveness or accreditation prior to

accreditation or prior to receiving recommendations. However, financial or tangible

resources alone are not enough to sustain an effective organization (Alfred et al., 2009).

Leaders must leverage financial resources effectively; because many of the questions

in the improvements section are related to assessment issues, institutions may not

adequately allocate or leverage the financial resources within the assessment area.

According to Suskie (2009), inadequate resourcing of assessment is a contributing factor

to organizational resistance to assessment, which further supports Bardo’s (2009)

findings that institutions must increase financial resources to this area. Results of the

current study suggest that increasing financial resources leads to increased levels of

institutional improvement and subsequent preparation for future accreditation.

Leadership. Institutions where the leadership team attended the SACSCOC

orientation were more likely to experience both greater levels of improvements and

greater levels of change. Leadership is an area repeatedly identified in the literature as a

significant factor in influencing accreditation outcomes and engagement (Alfred, 2012;

Kezar, 2013; Oden, 2009; Young, 2013). Young’s (2013) study found that the college

president heavily influenced the culture of the institution and the SACSCOC preparation

team by taking accreditation seriously and setting a standard of excellence, which led to

positive accreditation outcomes. Attendance at the SACSCOC orientation by

institutional leadership is an opportunity for leadership to demonstrate that accreditation

is important. Additionally, the orientation serves as an opportunity to increase

knowledge of accreditation and institutional effectiveness standards. Arguably, the

SACSCOC orientation serves three major functions: 1) as a visible indicator of the

importance of accreditation, 2) as an opportunity to increase knowledge of accreditation

expectations for the leadership team, and 3) as an opportunity to engage with peer

institutions undergoing accreditation during the same time period (SACSCOC, n.d.c).

The current study found that the amount of change in department or unit-level

leadership was another predictor for the total amount of improvements experienced by

institutions, which indicated that as the level of change in departmental leadership

increased, so did the amount of improvements experienced by the institution. At the

department or unit-level, over half of the study institutions experienced some degree of

change in leadership involvement, which may indicate that departmental leaders were not

as involved or engaged in institutional effectiveness practices as needed. Leadership

involvement influences accreditation outcomes (Oden, 2009; Young, 2013). Alfred et al.

(2009) define a leader as “virtually anyone in a position to make a decision about and

deploy resources” (p. 100). The results from the current study and from the literature on

departmental-level leadership highlight the importance of leadership at levels beyond the

top-level and may add additional support for Alfred’s (2012) finding that community

colleges should consider opportunities to develop future leaders from inside the

organization.

Knowledge of assessment. In the current study, many participants indicated that

their institution experienced difficulty during reaffirmation of accreditation due to

insufficient knowledge of assessment or institutional effectiveness (n = 34, 49.3%) and

insufficient knowledge of accreditation (n = 17, 24.6%). Insufficient knowledge of

assessment or institutional effectiveness was a predictor for both the severity of

recommendations received by the study institutions and for the total amount of change

experienced by institutions. Despite nearly half of the study participants indicating

insufficient knowledge of assessment as a difficulty experienced by their institution, only

13% (n = 9) of institutions experienced a major increase in professional development for

faculty, 8.7% (n = 6) for institutional effectiveness staff, 4.3% (n = 3) for educational

support staff, and 4.3% (n = 3) for administration. Additionally, 17.4% of participants

indicated that the institution made no improvement regarding recurring assessment of

student learning, followed by 15.9% experiencing only a slight improvement. Despite one-third of institutions experiencing little to no improvement within

recurring assessment of student learning outcomes, this variable was the area that

experienced the greatest amount of improvement. Essentially, the findings indicated that

for most institutions, accreditation served as an impetus to improve and institutionalize

recurring assessment of student learning outcomes; however, one-third of institutions still

made little to no improvement in this area.

Insufficient training on accreditation or assessment expectations (Chapman,

2007; Hulon, 2000; Powell, 2013; Young et al., 1983; Young, 2013), and lack of

clarity on institutional effectiveness and assessment of student learning (Baker, 2003;

Ewell, 2011; Manning, 2011) are major criticisms of accreditation. The study results

confirmed that insufficient knowledge of assessment or institutional effectiveness

remains a problematic area for institutions. However, the relationship between

insufficient knowledge and increased institutional change may indicate that institutions

recognize the need to change, or it may be an effect of the recommendations received,

or an interaction may have occurred between the insufficient knowledge of assessment

or institutional effectiveness and the recommendations received.

Although the SACSCOC began requiring demonstration of institutional

effectiveness in 1984, the SACSCOC did not provide a definition of institutional

effectiveness until 2005, when it was defined in a resource guide for institutions

(Head, 2011). The lack of a clear definition contributed to the difficulty in proving

institutional effectiveness because institutions interpret the concept of institutional

effectiveness differently. Because institutions undergo reaffirmation of accreditation

every 10 years, many institutions in the current study were undergoing reaffirmation of accreditation for only the second time since institutional effectiveness was defined in 2005.

Despite the institutional effectiveness standard having existed for thirty years, this is still a

developing area for many institutions and for the SACSCOC.

Data for decisions. The current study found that the quality or usefulness of

reports from the institutional research or effectiveness office experienced the second greatest amount of change due to the reaffirmation of accreditation process, with

over two-thirds of institutions experiencing some level of increase. Additionally, the

quality or usefulness of reports from the institutional research or effectiveness office was

a predictor for the total amount of improvements experienced by institutions. Most

institutions had only one full-time equivalent staff member dedicated to assessment or

institutional effectiveness (56.5%, n = 39), and one full-time equivalent staff member

dedicated to institutional research (50.7%, n = 35). Further, many institutions

experienced difficulty demonstrating compliance due to insufficient staff dedicated to

accreditation or institutional effectiveness (43.5%, n = 30), and 56.5% (n = 39) of

institutions experienced some level of changes in the amount of staff dedicated to

institutional effectiveness due to reaffirmation of accreditation.

Skolits and Graybeal (2007) found that insufficient analytical and data support

were barriers to demonstrating institutional effectiveness and that colleges need to

embrace a culture of data-based decision-making to improve the organization. Research has shown that embracing a data-informed decision-making culture aids in institutional

effectiveness (e.g., Allen & Kazis, 2007; Lattimore et al., 2012; Skolits & Graybeal,

2007; Young, 2013). The current study results and the literature indicate that a reciprocal

relationship exists between the quality of reports from the institutional research and

effectiveness office and accreditation. Where the literature indicates that having quality

data and reports enhances institutional effectiveness, the current study results indicate

that undergoing accreditation enhances the quality of the reports from these offices.

Community colleges need to invest in adequate staff with the appropriate skills and

experience (Alfred et al., 2009), and staff involved with accreditation need a thorough

knowledge of the institution and the SACSCOC (Young, 2013). Community colleges

may need to rethink the approach to the institutional research or effectiveness offices,

including ensuring adequate staff with appropriate capabilities. Further, colleges should

ensure that the offices of institutional effectiveness or research are connected to

organizational priorities, including accreditation, in order to benefit the college.

Insufficient evidence. Insufficient evidence was the greatest

source of difficulty experienced by the study institutions undergoing accreditation, with

over two-thirds of institutions reporting this as a source of difficulty in demonstrating

compliance. Further, this difficulty due to insufficient evidence was a negative predictor

for the total amount of improvements experienced by institutions, indicating that as the

difficulty increased, the total improvements decreased. Inadequate documentation on the

part of institutions has been identified as a barrier to successful accreditation (Young et

al., 1983; Young, 2013). Despite increases in the use of technology for accreditation and

historical record keeping, the current study findings indicated that institutions still

struggled with demonstrating compliance through collected evidence. Nguyen’s (2005)

study discovered that web-based accreditation management systems aided institutions

during the accreditation process.

Approximately one-quarter of the study institutions experienced difficulty

demonstrating compliance due to insufficient technology, and slightly over half of the

institutions owned commercially available institutional effectiveness technology (58%, n

= 40). Last, most institutions did not experience any change in institutional effectiveness

technology or in the amount of financial resources dedicated to institutional effectiveness

technology. Although many colleges in the current study owned commercially available

software, it is unclear whether the institutions were utilizing the software to its full

capacity. Only one-quarter of participants indicated difficulty due to technology.

However, Alfred et al. (2009) implied that community colleges may not have the ability

to properly invest in technology, and researchers have indicated that the technology

aspect of accreditation is not being fully realized (Alfred et al., 2009; Nguyen, 2005;

Oden, 2009; Powell, 2013). The current study findings may support Powell’s (2013)

findings that the use of technology for accreditation is not highly developed. Institutions

may be unaware of the potential uses of technology to enhance accreditation and

institutional effectiveness. Although most institutions owned technology for institutional

effectiveness or accreditation purposes, the institutions may not have utilized the software

or other technology applications to support the collection of evidence needed for

accreditation.

Decision-making bodies. Over three-quarters of participants indicated that their institutions experienced no changes to the number of committees because of undergoing

accreditation, and few experienced difficulty demonstrating compliance due to committee

structure. However, institutions that experienced difficulty due to too many committees

were less likely to experience greater levels of total amount of change. This finding may

indicate that changes are less likely to occur for institutions that have too many decision-

making bodies. According to Suskie (2015), “too many layers of committee review can

bog a college down and make it unresponsive to stakeholders’ needs” (p. 71) and “work

expands to fill the allotted time” (p. 71). The literature suggests that institutions should

ensure that stakeholder voices are included in assessment (Beno, 2004; Suskie, 2009;

Suskie, 2015). This is especially true regarding the faculty stakeholder group in relation

to assessment of student learning (Beno, 2004; Suskie, 2009; Suskie, 2015). In the quest

to have stakeholder input, some institutions may have established too many committees, or the committees may have had too many individuals serving on them to be effective.

Dedicated time. Researchers have found that accreditation and institutional

effectiveness require significant time dedication on behalf of institutions (e.g., Murray, 2004; Oden, 2009; Skolits & Graybeal, 2007; Wood, 2006; Woolston, 2012). The

time from preparation of the self-study to the decision from the accrediting body can

take several years (Murray, 2004; Oden, 2009). The current research study indicated

that most institutions spent 18-24 months preparing for accreditation. This time away

from regular job duties and the intensely sustained focus makes the process quite

cumbersome (Murray, 2004). Although institutions often do not provide adequate

release time from regular job duties to dedicate to accreditation preparation (Chapman,

2007; Skolits & Graybeal, 2007) and some of the current study participants indicated

difficulty demonstrating compliance due to insufficient time, the study results

indicated that institutions that spent longer preparing the compliance certification

experienced greater levels of total change scores. These findings suggest that

institutions that spent longer preparing the compliance certification were more

prepared for accreditation in general. A plausible explanation is that institutions that

spent more time preparing for accreditation were more engaged in the process and

took the time to truly study the institution for improvement purposes. By contrast, institutions that did not spend as much time preparing were consequently less prepared and less able to make changes prior to accreditation. Further, these institutions may have approached accreditation as an act of compliance rather than an opportunity for true self-reflection and improvement.

Interestingly, institutions that experienced difficulty due to insufficient time

were also likely to experience greater levels of change. Results indicated that greater

difficulties due to insufficient time dedicated to institutional effectiveness led to

increased levels of total amount of changes experienced by institutions. Skolits and

Graybeal (2007) found that a perceived lack of time to dedicate to institutional

effectiveness activities was a barrier to increased knowledge and expertise in

assessment and institutional effectiveness. Participants in Skolits and Graybeal’s

(2007) study felt the challenge of needing additional time to thoughtfully reflect on

what assessment findings meant, and felt that deeper analysis of the information was

necessary. Although the current study does not examine whether increased knowledge

in assessment occurred, the results do suggest that greater levels of institutional

changes occurred for institutions that had a perceived lack of time to dedicate to

institutional effectiveness. Although this finding is somewhat peculiar, it may suggest

that institutions are making changes within institutional effectiveness, regardless of

having the time to thoughtfully reflect on what the information means.

The findings from the time-related predictors for the total amount of changes implemented by institutions seem to represent two ends of a spectrum. On one end of the spectrum exist institutions that spent significant amounts of time preparing for accreditation; on the other end exist institutions that perceived insufficient time to dedicate to institutional effectiveness. Although there appear to be two distinct groups of institutions, both experienced increased levels of change. This finding may suggest that a U-shaped curve of total change exists, with the two groups above representing the endpoints of the U. The current study did not examine the effectiveness of the changes implemented at each institution, which may reflect an area for future research.
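
If future researchers wished to test this speculated U shape directly, one hedged approach would be to add a quadratic term to a regression of total change on a time measure, as in the sketch below; the data are simulated under that assumption and do not represent the study's analysis.

```python
# Sketch: probing for a U-shaped relationship with a quadratic term.
# All values are simulated for illustration; this is not the study's analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 69  # consistent with the counts and percentages reported in this chapter
prep_months = rng.uniform(6, 36, n)          # hypothetical time measure
# Simulate change that is highest at both extremes of the time axis
total_change = 0.02 * (prep_months - 21) ** 2 + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([prep_months, prep_months ** 2]))
fit = sm.OLS(total_change, X).fit()
# A significantly positive squared-term coefficient would suggest a U shape
print(fit.params, fit.pvalues)
```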

Predictors of Severity of Recommendations

Research question four examined factors (independent variables) that best

predicted the severity of the recommendations received by institutions (dependent

variable). A multiple regression indicated that the severity of recommendations increased

as the following increased: a) amount of change in professional development for

institutional effectiveness staff, b) difficulty due to insufficient knowledge of assessment

or institutional effectiveness, c) difficulty due to insufficient executive-level leadership

involvement, and d) amount of change in the number of institutional effectiveness processes. As the difficulty due to insufficient time decreased, the severity of

recommendations increased.
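
A hedged sketch of how such a multiple regression might be structured follows; the variable names, coding, and simulated values are illustrative assumptions rather than the study's instrument or data.

```python
# Sketch: regressing recommendation severity on the predictors named above.
# Variable names, coding, and values are illustrative, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 69  # consistent with the counts and percentages reported in this chapter
X = pd.DataFrame({
    "ie_staff_pd_change":   rng.integers(0, 5, n),  # change in IE staff prof. dev.
    "knowledge_difficulty": rng.integers(0, 2, n),  # insufficient knowledge (0/1)
    "exec_lead_difficulty": rng.integers(0, 2, n),  # insufficient exec. involvement
    "process_change":       rng.integers(0, 5, n),  # change in number of IE processes
    "time_difficulty":      rng.integers(0, 2, n),  # insufficient time (0/1)
})
# Simulated outcome loosely mirroring the reported signs
# (time_difficulty enters negatively; the other predictors positively)
severity = (X.ie_staff_pd_change + X.knowledge_difficulty + X.exec_lead_difficulty
            + X.process_change - X.time_difficulty + rng.normal(0, 1, n))
model = sm.OLS(severity, sm.add_constant(X)).fit()
print(model.params)
```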

Institutions that experienced greater levels of difficulty due to insufficient time

dedicated to institutional effectiveness were likely to experience less severe

recommendations, and also greater amounts of change. Over one-quarter of study

institutions (n = 18, 26.1%) indicated difficulty demonstrating compliance due to

insufficient time. Researchers have found that often institutions do not provide

adequate release time from regular job duties to dedicate to accreditation preparation

(Chapman, 2007; Skolits & Graybeal, 2007). As described in research question three,

the time needed to dedicate to accreditation and to institutional effectiveness is

substantial and is one of the primary criticisms of accreditation (Murray, 2004; Oden, 2009; Skolits & Graybeal, 2007; Wood, 2006; Woolston, 2012). However, the

current results indicated that institutions that experienced difficulty due to insufficient

time were less likely to receive a greater severity of recommendations and were more

likely to experience greater levels of change. These findings may be due to a

perception issue on behalf of high-achieving institutions. The findings may suggest that even well-prepared institutions feel pressed for time to dedicate to institutional

effectiveness or accreditation. Alternatively, institutions that had difficulty due to time

may have needed to make greater levels of change and subsequently did not have enough time to dedicate to the number of changes needed.

Nearly half of study participants indicated that their respective institutions

experienced difficulty demonstrating compliance due to insufficient knowledge of

assessment or institutional effectiveness. Further, insufficient knowledge of

assessment or institutional effectiveness was a predictor for the severity of

recommendations received by the institutions and for the total change score

experienced by institutions. Despite nearly half of study institutions indicating

insufficient knowledge of assessment as a difficulty experienced, only 13% (n = 9) of

institutions experienced a major increase in professional development for faculty,

8.7% (n = 6) for institutional effectiveness staff, 4.3% (n = 3) for educational support

staff, and 4.3% (n = 3) for administration. Additionally, one-third of participants

indicated that their respective institutions made little to no improvement in recurring

assessment of student learning. Several reasons contribute to resistance to assessment:

1) the value and importance are not understood, 2) assessment activities are not

adequately resourced, and 3) fear of and resistance to change (Suskie, 2009).

Insufficient training on accreditation or assessment expectations (Chapman, 2007;

Hulon, 2000; Powell, 2013; Young et al., 1983; Young, 2013), and lack of clarity on

institutional effectiveness and assessment of student learning (Baker, 2003; Ewell,

2011; Manning, 2011) are major criticisms of accreditation. The study results

confirmed that insufficient knowledge of assessment or institutional effectiveness

remains a problematic area for institutions, and that this insufficient knowledge

jeopardizes institutions’ accreditation status.

Further, the current study’s findings may suggest that institutions are not

engaging in enough professional development prior to accreditation, but after

receiving recommendations later in the accreditation process, institutions increase the

amount of professional development. According to Suskie (2009), resistance to assessment occurs in part because the value and importance of assessment are not understood and in part because institutions may not adequately resource

assessment. The findings of the current study indicated that as institutions increased

the amount of professional development for institutional effectiveness staff, the

severity of recommendations increased. These findings may imply that institutions

that increased professional development for institutional effectiveness staff and

received more severe recommendations may have had insufficient knowledge of

institutional effectiveness or accreditation prior to reaffirmation of accreditation.

Further, the findings may suggest that institutions that experienced greater severity of

recommendations did not adequately resource assessment practices prior to

accreditation. Despite the many positive findings associated with professional

development, it is often the first item cut during times of limited budgets in education

(Wallin & Smith, 2005). Given the financial challenges facing community colleges (Alfred et al., 2009), professional development may be an area that was reduced in

order to save money.

Difficulty experienced due to insufficient executive-level leadership involvement

was also a predictor for the severity of recommendations received, which indicates that

executive-level leadership plays an important role in accreditation. Further, institutions

that sent their leadership team to the SACSCOC orientation experienced greater levels of

improvements and changes, whereas institutions that experienced difficulty due to a

lack of executive leadership involvement received more severe levels of

recommendations. Leadership involvement is an area repeatedly identified in the

literature as a significant factor in accreditation (e.g., Alfred, 2012; Kezar, 2013; Oden,

2009; Young, 2013). Young (2013) found that top-level leaders determined the level of

engagement in accreditation by the institution. Leaders that emphasized the importance

of accreditation in improving the institution established a positive institutional culture

that led to positive accreditation outcomes (Young, 2013). The current study results

confirm the importance of leadership involvement in accreditation and institutional

effectiveness.

Over half of participants in the current study indicated that insufficient

institutional processes or procedures contributed to difficulties during accreditation.

However, just under half of the institutions indicated that no change in

institutional effectiveness processes and procedures occurred due to undergoing

reaffirmation of accreditation. Further, institutions that experienced greater levels of

changes in institutional effectiveness processes and procedures also experienced greater

levels of severity of recommendations. This finding may indicate that processes play an

important role in accreditation. Arguably, institutions that have adequate processes in

place may receive less severe recommendations than institutions that do not have

adequate processes in place. Institutions that need to make changes to processes and

procedures during accreditation may not discover this until after receiving a

recommendation, which would explain why institutions that make changes to processes

are likely to receive greater levels of severity of recommendations. Tharp (2012) found

that schools that emphasized accreditation for compliance purposes did not consistently

enforce processes and policies and had negative accreditation findings. Further, Tharp (2012) discovered that institutions that viewed accreditation as an opportunity to thoroughly study the institution, and that viewed accreditation more favorably, were more apt to consistently enforce and follow policies. The current study results may imply that

some institutions are not consistently enforcing policies and procedures due to a

misplaced accreditation focus on compliance rather than institutional improvement.

Implications for Higher Education Practice

The results of this research study provide a number of implications for higher

education practice. The first implication for higher education practice is that by waiting

to make changes until later in the accreditation phases, institutions place themselves at

risk of receiving negative sanctions and jeopardizing their accreditation status. The

second implication for higher education practice is that institutionalization of institutional

effectiveness may not be occurring at many institutions due to insufficient knowledge of

assessment or institutional effectiveness, disengaged faculty, and deficient evidence. The

final implication for higher education practice is that the offices of institutional research

and effectiveness serve an important, but possibly overlooked, function in accreditation

and in institutional improvement.

By waiting to make changes until later in the accreditation phases, institutions

place themselves at risk of receiving negative sanctions and jeopardizing their

accreditation status. Although this implication is a largely unexplored area in the higher

education accreditation literature, the current study findings indicated that greater levels

of change were identified as the severity of the recommendations increased, thus

suggesting that institutions may not make enough changes prior to or during early phases

of accreditation. Institutions may make changes later in the accreditation process due to

prior experiences with accreditation where no consequences occurred, due to

experiencing accreditation fatigue, or because institutions were not adequately prepared

for accreditation. Tharp (2012) found that institutions in the Western Association of

Schools and Colleges region had taken accreditation less seriously due to previous

experiences with accreditors where the accreditors had failed to enforce sanctions on the

institutions.

A primary driver of the negative perspective of accreditation is the time required

to dedicate to accreditation and the increased workload of institutions undergoing

accreditation (Hulon, 2000; Skolits & Graybeal, 2007). In the current study, the onsite

phase of accreditation was the only phase that had no significant findings. Further, post

hoc testing revealed that significant differences within all leadership and tangible resources occurred between the onsite phase and the phases after the onsite review (i.e., the C & R, monitoring, warning, or probation phases), which may indicate that institutions become more engaged after receiving post-onsite recommendations. One possible explanation may be that

institutions become more engaged post onsite review (i.e., C & R, monitoring, warning,

or probation phase), due to the potential negative ramifications for the institutions.

However, another possible explanation may be that institutions breathe a sigh of relief

after the onsite committee leaves the institution. Considering the time and energy that

goes into the preparation for the onsite visit, institutions may suffer accreditation fatigue

(Gaston, 2014; Smith & Finney, 2008). Regarding accreditation, Woolston (2012) found

that “the cost in time is much more of a burden than the financial cost” (p. 145). The

current study revealed that over two-thirds of institutions spent 18 months or greater

preparing the compliance certification. These results indicate that the time spent on

accreditation is substantial, which may also indicate that institutions are experiencing a

sense of accreditation fatigue. Gaston (2014) noted that the SACSCOC is unique because

the accreditor has two peer review committees, the offsite and the onsite committees.

Each additional layer of review is an additional burden on institutions. Gaston (2014)

also found that one regional accrediting body made changes to the accreditation process

due to accreditation fatigue. Some of the changes implemented by the accreditor

included a reduction in the number of onsite visits from two to one, thus allowing

institutions to use “existing committees, program review procedures, and established

assessment and institutional research offices” (Gaston, 2014, p. 130). The results of the

literature and the current study may imply that institutions are experiencing accreditation

fatigue and need to explore options to utilize already existing structures in order to

institutionalize accreditation practices.

Last, findings from the current study indicated that institutions that spent more

time preparing for accreditation experienced greater levels of overall change, and were

less likely to receive greater severity of recommendations. The current study findings

related to the time spent preparing for accreditation may imply that institutions that spend

more time preparing for accreditation have better outcomes than institutions that make

changes during the accreditation process. The time spent on preparation may be an

indicator of the institution’s preparation level, where institutions that spent more time

preparing are subsequently more prepared for accreditation. Oden (2009) observed that

the self-study serves as a catalyst to move colleges to improve “things that are working

just fine but could be better” (p. 39). In the current study, institutions that spent greater

amounts of time preparing for accreditation were likely able to implement necessary

changes prior to undergoing accreditation, compared to institutions that implemented

changes after receiving a recommendation, thus potentially putting the institution’s

accreditation status at risk. The study finding on time spent preparing for accreditation

highlights the importance of institutionalizing accreditation and institutional

effectiveness practices and for adequately preparing for accreditation in order to

implement changes prior to accreditation.

A second implication for higher education practice is that institutionalization of

institutional effectiveness and accreditation practices may be inadequate at many

institutions. A disconnect appears to exist between institutions believing they remain in a

steady state of compliance and the likelihood of receiving a recommendation from the

SACSCOC. Although two-thirds of institutions believed that the accreditation process aids the institution in remaining in a steady state of compliance, half of the institutions indicated that they were at risk of receiving a recommendation, which indicates that the

institution may be out of compliance. The following areas may contribute to the

perception of non-compliance and to the implication that institutions are inadequately

institutionalizing accreditation practices: a) insufficient knowledge of assessment, b)

disengaged faculty, and c) deficient evidence.

Many institutions of higher education find assessment challenging (Baker, 2003;

Ewell, 2011; Manning, 2011), so much so that assessment is often the area most frequently found out of compliance during accreditation (Manning, 2011). The current study revealed that

insufficient knowledge of assessment or institutional effectiveness was experienced by

many institutions, was a predictor for the severity of recommendations received, and was

a predictor for the total amount of changes experienced by institutions. These findings

imply that institutions may not be engaging in enough quality or sustainable assessment

practices. Assessment practices should be ongoing; otherwise, the institution is likely to

receive greater severity of recommendations. Additionally, Gaston (2014) described calls

for increased regional accreditation focus on student learning outcomes, indicating that

accreditation will remain focused on this area. Gaston (2014) also noted that, “no

evolution in higher education has influenced accreditation more than the academy’s shift

in emphasis from what is taught to what is learned” (pp. 128-129). In other words,

assessment, specifically assessment of student learning, is not likely to be abandoned by

accreditors, but rather accreditation is likely to apply even greater emphasis to the

importance of assessment. Gaston’s points further illustrate the importance of institutions

engaging in ongoing and sustained institutional effectiveness practices, especially

assessment of student learning.

Assessment of student learning requires faculty involvement and engagement

(Suskie, 2015). In the current study, most participants indicated that institutions

experienced difficulty due to insufficient faculty support or buy-in; such support is an important component of successful assessment and accreditation experiences (Suskie, 2015).

Accreditation is unique because it is one of the only opportunities for all stakeholders to

inquire together about the overall functioning of the institution (Oden, 2009). The

current study found that the greatest amount of change experienced by institutions was

within professional development for faculty and the second greatest level of difficulty

experienced was due to insufficient faculty engagement within assessment. Further, the

study indicated that institutions appeared to increase professional development for faculty

after receiving recommendations. The need for faculty support and knowledge was

indicated, but the significant differences for increased faculty professional development

did not occur until the institution was at risk of receiving a negative sanction, which may

imply that faculty are not involved adequately until later in the accreditation process.

Suskie (2009) has described three reasons why resistance to assessment exists: 1)

the value and importance are not understood, 2) assessment activities are not adequately

resourced, and 3) fear of and resistance to change. Additionally, researchers have found

that faculty often do not have the necessary knowledge of assessment (Kuh & Ikenberry,

2009) or may be resistant to assessment (Chapman, 2007; Wallin & Smith, 2005).

Further, community colleges often rely on part-time faculty (Alfred et al., 2009), who may be even further removed from program-level assessment of student learning. All of

these factors may contribute to the difficulties experienced by the institutions involved in

the current study. The current study indicated that faculty knowledge of assessment and

accreditation prior to reaffirmation was insufficient for many institutions. This finding

implies that some institutions are not adopting sustainable practices concerning

assessment of student learning outcomes.

Another potentially contributing factor to insufficient institutionalization of

accreditation practices is that a majority of participants indicated that their institution

experienced difficulty due to inadequate evidence. Institutions that experienced difficulty

due to inadequate evidence were more likely to receive greater severity of

recommendations. Further, institutions that experienced less difficulty due to inadequate

evidence experienced greater levels of total improvement. Institutions spend significant

time preparing for accreditation. Institutions that have difficulty in this area may spend

more time attempting to look for evidence and documentation rather than focusing on

making improvements and building a sustainable infrastructure for institutional

effectiveness. Approximately one-quarter of the study institutions indicated a difficulty

due to insufficient technology. However, literature indicates that technology is not a

well-developed area of accreditation (e.g., Alfred et al., 2009; Nguyen, 2005; Oden,

2009; Powell, 2013). Nguyen (2005) recommended that institutions build a historical

database with digital documents. Of interest is whether the difficulties due to insufficient

evidence occurred because institutions were not engaging in a required practice, or

whether the difficulties were due to a lack of documentation that could have been remedied with the aid of technology.

A final implication for higher education practice is that the offices of institutional

research or effectiveness serve an important function in accreditation and in institutional

improvement. Offices of institutional research are responsible for data collection,

reporting, and analysis for institutions of higher education (Allen & Kazis, 2007). The

current study found that as improvements in the quality of the reports from the

institutional research or effectiveness office increased, so did the total amount of

improvements experienced by institutions. This finding may imply a) that accreditation

helps to enhance the quality of reporting or b) that institutional research and effectiveness

offices were not operating at full capacity prior to accreditation. If the latter is the case,

institutions may not be engaged in a data informed decision-making culture (Allen &

Kazis, 2007; Skolits & Graybeal, 2007; Young, 2013). A number of community colleges

have recognized the importance of institutional effectiveness and have subsequently

increased institutional research capacity (Allen & Kazis, 2007; Ewell, 2011) to address

the growing needs within demonstration of institutional effectiveness. However, many of

the current study’s participants indicated staffing two or fewer institutional research or

effectiveness staff members. Additionally, most participants indicated a need to increase

the amount of staff dedicated to institutional effectiveness during accreditation and just

under half of institutions experienced difficulty during accreditation due to insufficient

staff, which may suggest that institutions do not have the necessary capacity. Allen and

Kazis (2007) found that inclusion of the institutional research office in the center of

planning and budgeting leads to a culture change and organizational improvement, and

that a strong institutional research office and inclusion of the office in decision-making

bodies were critical to creating and sustaining a culture of data-based decision-making. Embracing a data-informed decision-making culture may aid in

institutionalizing accreditation practices (Allen & Kazis, 2007; Skolits & Graybeal, 2007;

Young, 2013).

Recommendations for Higher Education Practice

Based on the findings of this research study, several recommendations for practice and policy can be made for higher education institutions and the SACSCOC, including: a) institutions need to address the issues that may be causing them to implement

changes later in the accreditation process, b) institutions need to adopt institutional

effectiveness and accreditation practices that are embedded into the day-to-day operations

of the institution, and c) institutions need to evaluate and ensure the capacity of the

institutional research and effectiveness offices.

The first recommendation for practice is for institutions to address the issues that

may be contributing to institutions implementing changes late into the accreditation

process. These issues include peer comparison opportunities, shifting the impetus of

accreditation preparation from compliance to improvement, and adequately preparing for

accreditation.

In order to assist institutions in implementing changes and improvements earlier in, or prior to, the accreditation process, higher education leaders must shift the perception of

accreditation. Accreditation represents a dichotomy between compliance and

improvement. However, a theme that emerges from both the study results and the

literature is that institutions should view accreditation as an opportunity to engage in

best practices for the purpose of improving the organization (Head & Johnson, 2011;

Oden, 2009; Sandmann et al., 2009; Woolston, 2012; Young, 2013). Accreditation

should be viewed as an opportunity to share best practices, learn from peer institutions,

provide direction for institutions seeking to improve, provide important benchmarks,

self-evaluate and improve, plan for the future, and examine measures of institutional

health (Driscoll & de Noriega, 2006; Head & Johnson, 2011; Sandmann et al., 2009;

Woolston, 2012; Young, 2013).

As Woolston (2012) stated, “it is problematic when accreditation is considered a

chore to be accomplished as quickly and painlessly as possible rather than an opportunity

for genuine self-reflection for improvement” (p. 54). This suggests a culture change for

institutions that view accreditation as an act of compliance. Culture is defined as “the

values and beliefs that are shared by most of the people at an institution” (Alfred et al., 2009, p. 86) and is especially important because it influences staff and leadership behavior. Institutions where the culture focused on accreditation as a series of best practices rather than compliance experienced more positive accreditation outcomes (Head &

Johnson, 2011; Sandmann et al., 2009; Tharp, 2012).

As institutions develop a culture focused on accreditation as a series of best

practices, institutions may be more likely to implement changes and improvements earlier

in the accreditation process, thus avoiding potential sanctions from the SACSCOC. In

the current study, it appears that institutions tended to procrastinate when it came to

making changes. In light of the findings from the current study and the literature, in

addition to changing the perspective of accreditation, institutions should carve out enough

time to spend on the accreditation process. The amount of time needed to dedicate to

accreditation is one of the challenges associated with accreditation (Murray, 2002; Oden,

2009; Woolston, 2012); however, the results of this research indicated that institutions that

spent more time preparing for accreditation also made more changes within institutional

effectiveness and were less likely to receive greater severity of recommendations.

Knowing the most common amount of time spent in preparation for accreditation is

important for institutions, because time is a challenge for a number of institutions. Many

community colleges in the SACSCOC region spent 18-24 months preparing for

accreditation; in the future, institutions can use the results of this study as a planning tool.

Additionally, if colleges are finding that they are spending greater amounts of time

preparing for accreditation, this may serve as an indicator that challenges are ahead.

Further, leaders must prioritize accreditation and institutional effectiveness activities by

providing release time from normal job duties for staff and faculty involved.

Peer Comparison

One approach that institutions could utilize to aid in making changes earlier, or

prior to accreditation, is to benchmark against peer institutions. In order to adequately

compare institutional performance on accreditation, institutions need to be aware of peer performance and use peer-group comparisons to improve practices, and the

SACSCOC needs to support institutions by making certain accreditation findings publicly

available.

Accreditation provides the opportunity for institutions to share information and

best practices across college campus boundaries. Woolston (2012) found this to be

one of the primary advantages of accreditation. In order to influence decision-making

(Bender, 2002) and prepare for accreditation, institutions should benchmark

themselves against peer institutions in accreditation and institutional effectiveness

practices. According to Dew and Nearing (2004), benchmarking is often the first step

in the continuous improvement process and is accomplished by determining

institutions that perform well in a specific area, studying how these institutions

operate, and adopting the best practices that would work at the respective campus. A

recommendation from the current study is for institutions to utilize the study results to

compare their institution to the survey participants in the following areas: institutional

effectiveness practices, accreditation practices, recommendations received, difficulties

experienced, and changes and improvements implemented. Institutions should keep in

mind that benchmarking is only a starting point (Dew & Nearing, 2004), and should

use this information to form conversations on their respective campuses centered on

continuous improvement.
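
As a concrete illustration of this benchmarking step, the sketch below compares a hypothetical institution against peer figures reported in this chapter; the metric names and the institution's own values are assumptions for illustration only.

```python
# Sketch: benchmarking an institution against peer figures from this study.
# Peer values reflect results reported in this chapter; the metric names and
# the institution's own values are hypothetical.
peer_benchmarks = {
    "months_preparing_compliance_certification": 18,      # modal range: 18-24 months
    "annual_ie_spending_usd":                    150001,  # modal range: $150,001-$200,000
    "fte_assessment_staff":                      1,       # most common staffing level
    "fte_institutional_research_staff":          1,
}
my_institution = {
    "months_preparing_compliance_certification": 9,
    "annual_ie_spending_usd":                    90000,
    "fte_assessment_staff":                      1,
    "fte_institutional_research_staff":          2,
}
for metric, peer_value in peer_benchmarks.items():
    own = my_institution[metric]
    status = "below peers" if own < peer_value else "at or above peers"
    print(f"{metric}: {own} vs. peer {peer_value} ({status})")
```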

Publish findings. In order for institutions to benchmark themselves in a

consistent manner for the purpose of improvement, the SACSCOC should consider sharing the results of accreditation with its member organizations. Although the SACSCOC shares results at an aggregate level, it should disaggregate and publish its findings at the institutional level.

According to Gaston (2014):

…in an ideal world, accreditation organizations would publish detailed results

of every institutional or programmatic accreditation review so that everyone

could understand their implications, and everyone involved in accreditation

reviews, from accreditor staff members to volunteer consultant/evaluators,

would frame their recommendations with no concern for personal or

organizational liability. (p. 75)

Accreditors have been criticized due to a general lack of transparency (Gaston, 2014; Neal, 2008). According to Neal (2008), “the accreditation process suffers from

structural problems: secrecy, low standards, and little interest in learning outcomes” (p.

27). The current study found that the SACSCOC standard related to student learning

outcomes was the institutional effectiveness standard most frequently cited, with nearly

half of institutions receiving a recommendation on this standard during the offsite review.

Publication of the common areas in which institutions receive recommendations would be

beneficial to both institutions and the SACSCOC. At a minimum, the SACSCOC should

publish the results on the top 10 standards that institutions received recommendations for

at each phase of accreditation by institution type and make this information widely

available to its members. The data that the SACSCOC collected and provided to this researcher were aggregated across all institutions that underwent accreditation during the same year. It is possible that community colleges and universities face the same

issues during accreditation, but it would be valuable to see the recommendations received

by institution-type in order to better understand the issues specific to community college

accreditation.


Additionally, the SACSCOC should develop and publish findings from thematic scoring approaches that identify why institutions received recommendations, so that other institutions can use the information to improve their own approaches to accreditation. The published information could remain anonymous so that institutions are not individually identified. Anonymity would reduce potential legal ramifications for the institutions, the evaluating committees, and the SACSCOC (Gaston, 2014), while still contributing to the continuous improvement of the SACSCOC institutions. Further, publication of this information would benefit not only the SACSCOC institutions, but also the other regional accrediting bodies. Another criticism of accreditation is the lack of focus on

student learning outcomes (Neal, 2008; USDoE, 2006); however, the current study

demonstrated that the SACSCOC is focused on student learning outcomes.

Publication of this information stands to benefit the SACSCOC and the other regional accrediting agencies, because these organizations could use it to compare themselves against one another and subsequently encourage continuous improvement among the accreditors and their member institutions.

A second recommendation for higher education practice is that institutions need

to adopt institutional effectiveness and accreditation practices that are embedded into the

day-to-day operations of the institution. Tharp (2012) found that institutions that viewed

accreditation as an opportunity to improve the organization had better accreditation

outcomes and were more likely to develop and follow institutional processes and policies.

An implication for higher education was that study institutions showed signs of

accreditation fatigue. An approach to overcoming accreditation fatigue is for institutions


to embed processes, policies, and procedures into the daily routine of higher education.

Processes, policies, and procedures should be integrated into the everyday operations of

the college so that accreditation is not a “brief burst of activity shortly before an

accreditation review” (Suskie, 2009, p. 73), but rather an outgrowth of processes embedded in the organization’s operations that naturally lead to successful reaffirmation of accreditation.

Many participants indicated that a lack of institutional processes or procedures

contributed to difficulties during accreditation. Further, institutions that experienced

greater levels of changes in institutional effectiveness process and procedures also

experienced greater levels of severity of recommendations. Institutions should conduct

an accreditation readiness audit and examine the current institutional policies, processes,

and procedures, and necessary changes should be made prior to preparation of

accreditation to avoid potential negative accreditation findings. Considering the

frequency of accreditation, this is a necessity for colleges. Accreditation once occurred

every 10 years (SACSCOC, n.d.b). The cycle of preparing for accreditation and then not

thinking about accreditation for several years may have been a sustainable practice in that

scenario. However, with the addition of the fifth-year interim report, SACSCOC review now occurs every five years (SACSCOC, n.d.b), and the evidence indicates that institutions are in a perpetual cycle of

accreditation; consequently, institutions must find ways to embed accreditation practices

into the day-to-day college operations.

Similar to embedding policies, processes, and procedures into daily routines,

institutions must find ways to increase the knowledge of assessment or institutional

effectiveness prior to reaffirmation of accreditation. Institutions that experienced the

greatest amount of difficulty due to insufficient knowledge of assessment or institutional


effectiveness experienced greater levels of overall change and were more likely to receive recommendations of greater severity. Colleges should examine the role of all

stakeholders in assessment or institutional effectiveness processes, ensure resources are

allocated appropriately, and increase knowledge in these areas. Professional

development for institutional effectiveness staff, faculty, and administration was an area where institutions experienced a number of changes. Institutions should consider a proactive approach to increasing knowledge of assessment or institutional effectiveness,

should allocate professional development toward this area prior to accreditation, and

should be cautious in reducing professional development as a cost saving measure.

Regarding the self-study component of accreditation, Woolston (2012) stated that

“self-assessment is ineffectual when there is faculty resistance and a lack of

administrative incentive” (p. 54). Further, because over half of institutions indicated that

difficulty was experienced due to inadequate faculty support or buy-in, and previous

researchers have found insufficient faculty knowledge and engagement in assessment

(e.g., Chapman, 2007; Kuh & Ikenberry, 2009; Skolits & Graybeal, 2007), institutions

should closely examine the role of faculty in assessment or institutional effectiveness

processes. Suskie (2015) recommends finding ways to involve both faculty and students

when designing approaches to assessment. Further, Suskie (2015) recommends

increasing transparency, trusting faculty to conduct assessment, and forming a

collaborative culture to increase involvement and knowledge in the assessment area.

Institutions should conduct an assessment or institutional effectiveness climate survey to

determine the level of engagement of all stakeholders. Further, institutions should ensure

that faculty and administration define student learning outcomes in the same way.


Chapman (2007) found that “college faculty and administrators define student learning

outcomes in a variety of ways, and some struggle with articulating it” (p. 63). In

addition, Chapman observed that “faculty must make the connection between

successful student learning and outcomes assessment before they will provide the support

necessary for successful implementation of course, program, or institution learning

outcomes assessment” (p. 30). Last, institutions must provide adequate release time from

normal job duties in order for faculty to increase assessment or accreditation knowledge

and expertise. Inadequate time dedicated to institutional effectiveness activities was the

greatest barrier to increased knowledge and expertise (Skolits & Graybeal, 2007).

As institutions are examining approaches to institutionalizing accreditation

practices and creating a collaborative culture surrounding assessment, they are

cautioned to closely examine the overall functioning of committee structure and

effectiveness. Institutions should examine the current committee or governance structure

and determine if decision-making is sluggish or absent due to the structures in place.

Institutions that experienced difficulty due to too many committees were less likely to

have higher total change scores. This finding indicated that changes were

less likely to occur due to having too many decision-making bodies. According to Suskie

(2015), “too many layers of committee review can bog a college down and make it

unresponsive to stakeholders needs” (p. 71). Suskie (2015) further recommends that

institutions either place committees on hold for one year to determine the necessity of the

committee or merge multiple committees together to increase efficiency. Institutions that

have trouble making decisions due to governance structures or too many committees

should consider adopting Suskie’s (2015) advice and place the committees on hold to


determine better approaches to decision-making, in order to assist the institution in

effectively institutionalizing accreditation practices.

As institutions increase the knowledge of institutional effectiveness and

accreditation, review committee structures, and engage faculty, they are likely to improve at embedding necessary policies into daily routines and collecting

necessary evidence so that the act of undergoing accreditation is not a momentous

occasion, but rather an act of compiling information that already exists. Another area

where institutions need to improve is the collection of supporting documentation.

Over half of institutions experienced difficulties demonstrating compliance due to

insufficient documentation. Institutions that lack documentation must spend time

searching for or creating documents rather than spending time conducting deep

institutional analysis, which may explain why institutions that experienced difficulty in

this area were more likely to receive a recommendation greater in severity. Another

perspective is that such institutions must focus on the minutiae of locating documentation

rather than the big picture of how well the institution is performing. Regional

accreditation allows institutions to examine the organizational culture, processes,

policies, and services (Head & Johnson, 2011). This deep examination of the

institution contributes to better functioning organizations, provided those involved

engage in the process in a meaningful way (Head & Johnson, 2011). Insufficient

documentation may prevent institutions from engaging in the process in a meaningful

way.

The use of technology for institutional effectiveness and accreditation is still

rather undeveloped (Alfred et al., 2009; Powell, 2013), but provides an opportunity for


institutions to improve the collection of student learning outcomes, to utilize a repository

for documents that may support future accreditation, to communicate with stakeholders

about assessment, to improve effectiveness and accreditation, and to assist with the

preparation for accreditation (Nguyen, 2005; Oden, 2009). Institutions should examine

the current use of technology for accreditation to determine if technology is supporting

the institution adequately. Further, institutions may need to consider advice from outside

of the organization in order to determine the overall capability of the current technology

and to determine technology that could assist the college in achieving its goals, including

improved documentation for accreditation. As institutions enhance the use of technology

in assisting with accreditation, they should ensure that the institution has the resources necessary to sustain that technology (Alfred et al., 2009), including training for staff members and the staff capacity to maintain the systems. Improvement in technology may aid institutions in engaging in the

accreditation process in a meaningful way.

A third recommendation for higher education practice is that institutions need to

evaluate and ensure the capacity of the institutional research and effectiveness offices.

The offices of institutional research and effectiveness may be overlooked and

underutilized areas of colleges. These offices have been shown to influence and change

institutional culture and to improve organizational effectiveness (Allen & Kazis, 2007;

Lattimore et al., 2012). The current study demonstrated that accreditation has a positive

impact on the quality of reporting from the institutional research and effectiveness areas,

and this increased reporting quality has a positive effect on institutions. Perhaps, as

institutions have access to higher quality information, the quality of organizational


decision-making would be enhanced. Researchers have found that a strong institutional

research office and inclusion of the office in decision-making bodies was critical to

creating and sustaining a culture of data-based decision-making (Allen & Kazis, 2007).

Further, connecting institutional research to strategy and planning in relation to budget

decisions has been shown to increase organizational effectiveness (Allen & Kazis, 2007;

Lattimore et al., 2012). Institutions should ensure that they have adequate numbers of

institutional effectiveness and research staff, that the staff have adequate capacity and

resources, and that the institutional research office is included in the accreditation

process. Further, institutions should examine the current use of the institutional research

and effectiveness offices to ensure the offices are represented adequately on institutional-

level committees. Last, senior-level community college leaders should also show

continuous support for offices that supply decision-support (Allen & Kazis, 2007).

Recommendations for Future Research

The current study is limited because the level of changes or improvements may be

affected by the amount of time that has passed since the institution went through

reaffirmation of accreditation. Further, the study does not indicate whether a particular

institutional effectiveness or accreditation practice is likely to lead to a negative finding

during reaffirmation of accreditation. A longitudinal research study that examines

institutional effectiveness practices prior to reaffirmation of accreditation would reduce

this limitation. The study could examine current practices and compare them to the

reaffirmation of accreditation findings, once the institution has undergone reaffirmation

of accreditation. However, the study design would need to be longitudinal in nature in


order to accurately reflect whether specific practices prior to reaffirmation affect

reaffirmation of accreditation findings.

Additionally, data from this research study could be utilized to examine

relationships between the difficulties experienced by the institutions and the

recommendations received during various phases of accreditation. The study could

examine whether institutions that experienced a certain type of difficulty were more

likely to receive a recommendation than institutions that did not experience the same

difficulty.

Also, the current study indicated a need for research into the significant differences found between institutional changes and institutional improvements. As previously mentioned, prior researchers (e.g., Head & Johnson, 2011; Oden, 2009; Sandmann et al., 2009; Woolston, 2012; Young, 2013) found that institutional improvements are one of the greatest benefits of accreditation. However, the

current study found no significant differences in institutional improvements due to receiving recommendations. Further study into the relationship

between institutional improvements and the SACSCOC recommendations would provide

insight into this area.

Last, further research into the perceived compliance with SACSCOC standards as

of the date of the survey may be warranted. The relationship between the perceived state of compliance and the recommendations received would contribute to further

understanding of accreditation compliance. This study would add significant value to

institutional preparation for accreditation and to sustaining ongoing compliance with

accreditation standards.


Conclusion

Institutions of higher education undergo regional accreditation in order to

ensure academic quality and to ensure that students attending the institution remain eligible for federal financial aid. The process of undergoing regional accreditation is a rigorous

task that many institutions find challenging, especially within the institutional

effectiveness arena. With the increasing standards of regional accreditation,

institutions must find ways to demonstrate compliance with accreditation standards in

resource-limited situations. Community colleges are especially challenged to

demonstrate accomplishment of multi-faceted missions in a constrained resource

environment. The purpose of this study was to analyze the role of the SACSCOC

recommendations and institutional changes based on the perceptions of the SACSCOC

liaison or primary institutional effectiveness personnel at community colleges in the

SACSCOC region that underwent reaffirmation of accreditation between the years

2011 through 2015. Specifically, the study sought to determine any positive changes within institutional effectiveness that occurred because of the reaffirmation of accreditation process. Findings of this study should aid community colleges in regional accreditation practices and in institutionalizing practices that enhance institutional effectiveness.

The study utilized a researcher-developed, web-based survey instrument to collect

self-reported data from the SACSCOC liaison or chief institutional effectiveness officer

from 69 institutions. A non-experimental, group-comparison quantitative research design

was used to examine statistically significant differences and relationships between the

SACSCOC recommendations received during the reaffirmation of accreditation process


and the perceived levels of institutional change or improvement within the institutional

effectiveness domain. Research analyses included descriptive, inferential, and predictive

statistics. Specifically, an independent samples t-test, an analysis of variance, and

multiple regression analyses were utilized.
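
For readers less familiar with these procedures, the following Python sketch shows the general form of each analysis on synthetic data; the group sizes, score distributions, and predictor names are invented for illustration and do not reproduce the study's data or results.

    # Illustrative only: synthetic data standing in for the survey responses.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Independent-samples t-test: hypothetical total change scores for
    # institutions with vs. without a recommendation (Welch's form,
    # which does not assume equal variances).
    with_rec = rng.normal(30, 6, 40)
    without_rec = rng.normal(25, 6, 29)
    t, p = stats.ttest_ind(with_rec, without_rec, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.4f}")

    # One-way ANOVA: change scores across three hypothetical
    # severity-of-recommendation groups.
    low, med, high = np.split(rng.normal(28, 6, 60), 3)
    F, p_anova = stats.f_oneway(low, med, high)
    print(f"F = {F:.2f}, p = {p_anova:.4f}")

    # Multiple regression: total change regressed on three hypothetical
    # 0/1 difficulty indicators via ordinary least squares.
    X = rng.integers(0, 2, (69, 3)).astype(float)
    y = 20 + X @ np.array([4.0, 2.0, 1.0]) + rng.normal(0, 3, 69)
    X1 = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column
    coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)
    print("intercept and slopes:", np.round(coefs, 2))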

Descriptive statistics revealed the common institutional characteristics,

institutional SACSCOC characteristics, institutional effectiveness practices, types of

changes and improvements made, and the SACSCOC recommendations received and

during which phase of accreditation the recommendations occurred. A number of

statistically significant differences were found between institutions that received

recommendations and those that did not for changes within intangible resources, tangible

resources, and leadership. Overall, results indicated that recommendations influenced

many types of institutional changes. Further, predictors were discovered for the total

amount of institutional change experienced, the total amount of institutional improvement

experienced, and the severity of recommendations received by institutions. A series of

implications and recommendations were provided to aid community colleges in

preparation for regional accreditation, and specifically in institutionalizing processes that are known to enhance institutional effectiveness.


REFERENCES

Alfred, R. L. (2011). The future of institutional effectiveness. New Directions for

Community Colleges, 2011(153), 103–113.

Alfred, R. L. (2012). Leaders, leveraging, and abundance: Competencies for the Future.

New Directions for Community Colleges, 2012(159), 109–120.

Alfred, R. L., Shults, C., Jaquette, O., & Strickland, S. (2009). Community colleges on

the horizon: Challenge, choice, or abundance. Lanham, MD: Rowman &

Littlefield Education.

Aliaga, M., & Gunderson, B. (2000). Interactive statistics. Saddle River, NJ: Prentice

Hall.

Allen, L., & Kazis, R. (2007). Building a culture of evidence in community colleges:

Lessons from exemplary institutions. Retrieved from Jobs for the Future website:

http://inpathways.net/cultureofevidence-communitycollege.pdf

Asgill, A. (1976). The importance of accreditation: Perceptions of Black and White

college presidents. Journal of Negro Education, 45(3), 284-294.

Baker, R. L. (2002). Evaluating quality and effectiveness: Regional accreditation

principles and practices. The Journal of Academic Librarianship, 28(1-2), 3–7.

Bardo, J. W. (2009). The impact of the changing climate for accreditation on the

individual college or university: Five trends and their implications. New

Directions for Higher Education, 2009(145), 47–58.

Bender, B.E. (2002). Benchmarking as an administrative tool for institutional

leaders. New Directions for Higher Education, 2002(118), 113-120.


Beno, B. A. (2004). The role of student learning outcomes in accreditation quality

review. New Directions for Community Colleges, 2004(126), 65–72.

http://doi.org/10.1002/cc.155

Bergeron, D., Baylor, E., & Flores, A. (2014). A great recession, a great retreat.

Retrieved from Center for American Progress website:

http://cdn.americanprogress.org/wp-content/uploads/2014/10/PublicCollege-

report.pdf

Brittingham, B. (2009). Accreditation in the United States: How did we get to where we

are? New Directions for Higher Education, 2009(145), 7–27.

Chapman, L. M. (2007). Community college instructors’ and administrators’ beliefs

regarding student learning outcomes assessment and the reaccreditation process

(Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses

database. UMI Number 3283497.

Cohen, A. M., Brawer, F. B., & Kisker, C. B. (2013). The American community college

(6th ed.). San Francisco, CA: Jossey-Bass.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155-159.

CollegeBoard. (2013). Trends in student aid 2013. Retrieved from

http://trends.collegeboard.org/sites/default/files/student-aid-2013-full-report.pdf

Commission on Colleges Southern Association of Colleges and Schools [COCSACS].

(2012). Principles of accreditation: Foundations for quality enhancement.

Retrieved from http://www.sacscoc.org/pdf/2012PrinciplesOfAcreditation.pdf

Cooper, T., & Terrell, T. (2013). What are institutions spending on assessment? Is it

worth the cost? (Occasional Paper No. 18). Retrieved from National Institute for


Learning Outcomes Assessment website:

http://learningoutcomesassessment.org/documents/What%20are%20institutions%

20spending%20on%20assessment%20Final.pdf

Council for Higher Education Accreditation [CHEA]. (2010). The value of accreditation.

Retrieved from http://www.chea.org/pdf/Value%20of%20US%20

Accreditation%2006.29.2010_buttons.pdf

Creswell, J. W. (2009). Research design: Qualitative, quantitative and mixed methods

approaches. Thousand Oaks, CA: Sage.

Creswell, J. W. (2014). Research design. Qualitative, quantitative and mixed methods

approaches. Thousand Oaks, CA: Sage.

Day, L. A. (1989). Designing and conducting health surveys: A comprehensive guide.

San Francisco, CA: Jossey-Bass.

Dew, J., & Nearing, M. (2004). Continuous quality improvement in higher education.

Westport, CT: Praeger Publishers.

Donahoo, S., & Lee, W. Y. (2008). Serving two masters: Quality and conflict in the

accreditation of religious institutions. Christian Higher Education, 7(4), 319–338.

Driscoll, A., & de Noriega, D. C. (2006). Taking ownership of accreditation: Assessment

processes that promote institutional improvement and faculty engagement. Stylus

Publishing, LLC.

Eaton, J. S. (2009). Accreditation in the United States. New Directions for Higher

Education, 2009(145), 79–86.

Eaton, J. S. (2012). An overview of US accreditation--Revised. Council for Higher

Education Accreditation, 1–9.


Eckel, P. D. (2008). Mission diversity and the tension between prestige and effectiveness:

An overview of US higher education. Higher Education Policy, 21(2), 175–192.

Ewell, P. T. (2011). Accountability and institutional effectiveness in the community

college. New Directions for Community Colleges, 2011(153), 23–36.

Fitzgerald, J., & Fitzgerald, J. (2013). Statistics for criminal justice and criminology in

practice and research. (Appendix A). Retrieved from

http://www.sagepub.com/fitzgerald/study/materials/appendices/app_a.pdf

Floyd, D. L., & Antczak, L. (2009). Reflections on community college research.

Community College Journal of Research and Practice, 34(1-2), 1–6.

Gaston, P. L. (2014). Higher education accreditation: How it's changing, why it must.

Stylus Publishing, LLC.

Gay, L. R., Mills, G. E., & Airasian, P. (2009). Educational research. Competencies

for analysis and applications. Upper Saddle River, NJ: Pearson.

Gravetter, F., & Wallnau, L. (2013). Essentials of statistics for the behavioral

sciences. Cengage Learning.

Hartle, T. W. (2012). Accreditation and the public interest: Can accreditors continue to

play a central role in public policy? The Journal of the Society for College and

University Planning, 16–21.

Head, R. B. (2011). The evolution of institutional effectiveness in the community college.

New Directions for Community Colleges, 2011(153), 5–11.

Head, R. B., & Johnson, M. S. (2011). Accreditation and its influence on institutional

effectiveness. New Directions for Community Colleges, 2011(153), 37–52.


Huck, S. W., & Cormier, W. H. (1996). Reading statistics and research (2nd ed.). New

York, NY: HarperCollins.

Hulon, J. G. (2000). The impact of regional accreditation on Copiah-Lincoln Community

College (Doctoral dissertation). Retrieved from Proquest Dissertations and Theses

database. UMI Number 9991323.

Institute for Digital Research and Education. (2014). What statistical analysis should I

use? UCLA: Statistical Consulting Group. Retrieved from

http://www.ats.ucla.edu/stat/mult_pkg/whatstat/

Jaccard, J., & Becker, M. A. (1997). Statistics for the behavioral sciences. (3rd ed.).

Pacific Grove, CA: Brooks/Cole.

Jackson, R. S., Davis, J. H., & Jackson, F. R. (2010). Redesigning regional

accreditation: The impact on institutional planning regional accrediting bodies

continue to sharpen their focus on student learning, with implications for

planners. The Journal of the Society for College and University Planning, 38(4),

9–19.

Kezar, A. (2013). Institutionalizing student outcomes assessment: The need for better

research to inform practice. Innovative Higher Education, 38(3), 189–206.

Kuh, G. D., & Ikenberry, S. O. (2009). More than you think, less than we need: Learning

outcomes assessment in American higher education. National Institute for

Learning Outcomes Assessment. Retrieved from http://

www.learningoutcomeassessment.org/documents/niloafullreportfinal2.pdf

Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science:

a practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863.


Lattimore, J. B., D’Amico, M. M., & Hancock, D. R. (2012). Strategic responses to

accountability demands: A case study of three community colleges. Community

College Journal of Research and Practice, 36(12), 928–940.

Leong, F. T., & Austin, J. T. (1996). The psychology research handbook. A guide for

graduate students and research assistants. London, UK: Sage.

Levine, T. R., & Hullett, C. R. (2002). Eta squared, partial eta squared, and misreporting

of effect size in communication research. Human Communication Research,

28(4), 612-625.

Manning, T. M. (2011). Institutional effectiveness as process and practice in the

American community college. New Directions for Community Colleges,

2011(153), 13–21.

McGuire, P. A. (2009). Accreditation’s benefits for individuals and institutions. New

Directions for Higher Education, 2009(145), 29–36.

Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey

data. Psychological methods, 17(3), 437-455.

Miller, V. D. (2000). The specific criteria cited most often by visiting committees to Level

I institutions (Doctoral dissertation). Retrieved from

http://files.eric.ed.gov/fulltext/ED457930.pdf

Murray, C.H. (2004). A study of two-year college administrators regarding the impact of

the accreditation process on institutional change (Doctoral dissertation).

Retrieved from ProQuest Digital Dissertations. (UMI No. 3157914)

Murray, J. P. (2002). Faculty development in SACS-accredited community colleges.

Community College Review, 29(4), 50–66.


Neal, A. D. (2008). Seeking higher-ed accountability: Ending federal accreditation.

Change: The Magazine of Higher Learning, 40(5), 24–31.

Nguyen, P. T. T. (2005). Reaffirmation of accreditation and quality improvement as a

journey: A case study (Doctoral dissertation). Retrieved from

https://repositories.tdl.org/ttu-

ir/bitstream/handle/2346/16029/Nguyen_Phuong_Diss.pdf?sequence=1

Nunnally, J. (1978). Psychometric theory. New York, NY: McGraw-Hill.

Oden, R. A. (2009). A college president’s defense of accreditation. New Directions for

Higher Education, 2009(145), 37–45.

Osborne, J.W. & Overbay, A. (2004). The power of outliers (and why researchers should

always check for them). Practical Assessment, Research & Evaluation, 9(6).

Retrieved September 14, 2015 from http://PAREonline.net/getvn.asp?v=9&n=6

Osborne, J.W., & Waters, E. (2002). Multiple Regression Assumptions. ERIC Digest,

1-6. Retrieved from http://files.eric.ed.gov/fulltext/ED470205.pdf

Powell, C. (2013). Accreditation, assessment, and compliance: Addressing the cyclical

challenges of public confidence in American education. Journal of Assessment

and Institutional Effectiveness, 3(1), 54–74.

Qualtrics. (2014). Qualtrics research suite. Retrieved from http://www.qualtrics.com/research-suite/

Ray, W. J. (1996). Methods toward a science of behavior and experience. Pacific Grove, CA: ITP.

Salant, P., & Dillman, D. A. (1994). How to conduct your own survey. New York, NY:

John Willey and Sons.


Sandmann, L. R., Williams, J. E., & Abrams, E. D. (2009). Higher education community

engagement and accreditation: Activating engagement through innovative

accreditation strategies. Strategies Planning for Higher Education, 37(3), 15–26.

Sapsford, R. & Jupp, V. (2006). Data collection and analysis. Thousand Oaks, CA:

Sage.

Sibolski. (2012). What’s an Accrediting Agency Supposed to Do? Retrieved from

Society for College and University Planning website:

http://www.scup.org/page/phe/read/article?data_id=31442&view=article

Skolits, G. J., & Graybeal, S. (2007). Community college institutional effectiveness:

perspectives of campus stakeholders. Community College Review, 34(4), 302–

323.

Smith, V. B., & Finney, J. E. (2008). Redesigning regional accreditation: An interview

with Ralph A. Wolff. Change: The Magazine of Higher Learning, 40(3), 18-24.

Retrieved from http://www.changemag.org/Archives/Back%20Issues/May-

June%202008/full-regional-accreditation.html

Southern Association of Colleges and Schools Commission on Colleges [SACSCOC].

(n.d.a). General information on the reaffirmation process. Retrieved from

http://www.sacscoc.org/genaccproc.asp

Southern Association of Colleges and Schools Commission on Colleges [SACSCOC].

(n.d.b). The fifth-year interim report process: An overview. Retrieved from

http://www.sacscoc.org/fifth%20year/Summary.The%20Fifth%20Year%20Interi

m%20Report.pdf


Southern Association of Colleges and Schools Commission on Colleges [SACSCOC]

(n.d.c). 2016 Track A Orientation. Retrieved from

http://www.sacscoc.org/2016TrackAOrientation.asp

Southern Association of Colleges and Schools Commission on Colleges [SACSCOC].

(2011a). Administrative procedures for the meetings of the committees on

compliance and reports. Retrieved from http://www.sacscoc.org/pdf/

081705/interview %20policy%20fin.pdf

Southern Association of Colleges and Schools Commission on Colleges [SACSCOC].

(2011b). Handbook for institutions seeking reaffirmation. Retrieved from

http://www.sacscoc.org/pdf/081705/Handbook%20for%20Institutions%

20seeking%20reaffirmation.pdf

Southern Association of Colleges and Schools Commission on Colleges [SACSCOC].

(2012a). Principles of accreditation: Foundations for quality enhancement.

Retrieved from http://www.sacscoc.org/pdf/2012PrinciplesOfAcreditation.pdf

Southern Association of Colleges and Schools Commission on Colleges [SACSCOC].

(2012b). Reports submitted for committee or commission review. Retrieved from

http://www.sacscoc.org/pdf/

Reports%20requested%20for%20COC%20review.pdf

Southern Association of Colleges and Schools Commission on Colleges [SACSCOC].

(2012c). The accreditation liaison. Retrieved from http://www.sacscoc.org/pdf/

081705/accreditation%20liaison.pdf


Southern Association of Colleges and Schools Commission on Colleges [SACSCOC].

(2013a). Sanctions, denial of reaffirmation, and removal from membership.

Retrieved from http://www.sacscoc.org/pdf/081705/sanction%20policy.pdf

Southern Association of Colleges and Schools Commission on Colleges [SACSCOC].

(2013b). Strategic plan. Retrieved from http://www.sacscoc.org/pdf/

2012sacscocsrtategic.pdf

Southern Association of Colleges and Schools Commission on Colleges [SACSCOC].

(2014a). Frequently Asked Questions. Retrieved from http://www.sacscoc.org/

FAQsanswers.asp#q4

Southern Association of Colleges and Schools Commission on Colleges [SACSCOC].

(2014b). Member, Candidate and Applicant List. Retrieved from

http://www.sacscoc.org/ pdf/webmemlist.pdf

Suskie, L. (2009). Assessing student learning (2nd. ed.). San Francisco, CA: Jossey-Bass.

Suskie, L. (2015). Five dimensions of quality: A common sense guide to accreditation

and accountability. San Francisco, CA: Jossey-Bass.

Tharp, N. M. (2012). Accreditation in the California community colleges: Influential

cultural practices (Doctoral dissertation). Retrieved from

http://www.asccc.org/sites/default/files/dissertation.pdf

U.S. Department of Education. (2006). A test of leadership: Charting the future of U.S.

higher education (Government). Retrieved from

http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf


U.S. Department of Education Institute of Education Sciences National Center for

Education Statistics [NCESIPEDS]. (n.d.). IPEDS data center. Retrieved from

http://nces.ed.gov/ipeds/datacenter/

Vacha-Haase, T., & Thompson, B. (2004). How to estimate and interpret various effect

sizes. Journal of Counseling Psychology, 51(4), 473–481.

Wallin, D. L., & Smith, C. L. (2005). Professional development needs of full-time faculty

in technical colleges. Community College Journal of Research and Practice,

29(2), 87–108.

Welch, B. L. (1947). The generalization of “Student’s” problem when several different

population variances are involved. Biometrika, 34(1-2), 28–35.

Wood, A. L. (2006). Demystifying accreditation: Action plans for a national or regional

accreditation. Innovative Higher Education, 31(1), 43–62.

Woolston, P. J. (2012). The costs of institutional accreditation: A study of direct and

indirect costs (Doctoral dissertation, University of Southern California). Retrieved

from http://search.proquest.com/docview/1152182950?accountid=14709.

(1152182950).

Young, A. L. (2013). The regional accreditation process at community colleges: A case

study of effectiveness (Doctoral dissertation). Theses and Dissertations

Educational Policy Studies and Evaluation. Paper 12. Retrieved from

http://uknowledge.uky.edu/epe_etds/12/

Young, C. M., Chambers, C. M., & Kells, H. R. (1983). Improving institutional

performance through self-study. San Francisco, CA: Jossey-Bass.


Zumbo, B. D., & Zimmerman, D. W. (1993). Is the selection of statistical methods

governed by level of measurement? Canadian Psychology/Psychologie

Canadienne, 34(4), 390-400.


APPENDIX A

Texas Tech University Institutional Review Board Approval

April 17, 2015

Dr. Stephanie Jones Ed Psychology & Leadership Mail Stop: 1071

Regarding: 505164 The Role of SACSCOC Recommendations on Changing Community College Practices in Institutional Effectiveness: A Quantitative Analysis

Dr. Stephanie Jones:

The Texas Tech University Protection of Human Subjects Committee approved your claim for an exemption for the protocol referenced above on April 17, 2015.

Exempt research is not subject to continuing review. However, any modifications that (a) change the research in a substantial way, (b) might change the basis for exemption, or (c) might introduce any additional risk to subjects must be reported to the Human Research Protection Program (HRPP) before they are implemented.

To report such changes, you must send a new claim for exemption or a proposal for expedited or full board review to the HRPP. Extension of exempt status for exempt protocols that have not changed is automatic.

The HRPP staff will send annual reminders that ask you to update the status of your research protocol. Once you have completed your research, you must inform the HRPP office by responding to the annual reminder so that the protocol file can be closed.

Sincerely,


Rosemary Cogan, Ph.D., ABPP Protection of Human Subjects Committee

Box 41075 | Lubbock, Texas 79409-1075 | T 806.742.3905 | F 806.742.3947 | www.vpr.ttu.edu An EEO/Affirmative Action Institution


APPENDIX B

Email to Study Participants

Dear ________________:

My name is Kara Larkan-Skinner and I am a doctoral candidate at Texas Tech University in the higher education program. I am conducting a study to analyze the role of SACSCOC recommendations and institutional changes at community colleges in the SACSCOC region. The purpose of the study is to understand the types of changes within the institutional effectiveness area that institutions make, based on undergoing SACSCOC reaffirmation of accreditation. The results of the study are intended to aid community colleges in regional accreditation practices, and specifically in improving processes that are known to enhance institutional effectiveness.

Because your institution recently went through reaffirmation of accreditation, your institution is eligible for participation in this study. It is my hope that you will participate in the study in order to assist in the advancement of the field of higher education accreditation. The study will take no more than 30 minutes of your time, and the survey is structured so that you are able to save your answers and work on the survey at times that are convenient for you. The survey will remain open for three weeks, and you will receive reminders during the survey period.

Your survey responses are completely anonymous and cannot be traced back to you or your institution. No personal or institution identifying information will be captured. Additionally, your responses are combined with all other responses and will only be reported in an aggregate format.

If you are willing to participate, please proceed to the survey at:

Follow this link to the Survey: ${l://SurveyLink?d=Take the Survey}

Or copy and paste the URL below into your internet browser: ${l://SurveyURL}

If you are not the appropriate institutional representative to complete the survey, please forward this request for participation in this study to the appropriate person.

If you have any questions regarding this research study, please contact me at [email protected] or call 806-584-0089. This study is being supervised by Dr. Stephanie Jones who also will be glad to answer any questions you may have. Dr. Jones can be reached via email at [email protected] or by phone at (806) 834-1380.


I truly appreciate your time and consideration in allowing me to conduct this study at your institution.

Kind regards,

Kara Larkan-Skinner Doctoral Candidate, Higher Education Administration Texas Tech University

Follow the link to opt out of future emails: ${l://OptOutLink?d=Click here to unsubscribe}


APPENDIX C

Survey Questionnaire Introductory Text

The purpose of the study is to understand the types of changes that institutions make, based on recommendations received from the SACSCOC regional accrediting body. Specifically, the purpose of the study is to examine the amount of changes, within institutional effectiveness, that occurred because of the reaffirmation of accreditation process. The results of the study are intended to aid community colleges in regional accreditation practices, and specifically in improving processes that are known to enhance institutional effectiveness.

The information collected in this survey will be kept confidential and will only be seen by the researchers. No personal or institution identifying information is collected in this survey. The responses you provide will only be reported in the aggregate.

There is no compensation for your participation in this study. However, you will benefit by contributing knowledge to a study that aims to help colleges better prepare for regional accreditation. This survey should take no more than 30 minutes to complete. Your participation is strictly voluntary. You can refuse to participate and you may choose not to answer any question. You are able to save the survey and complete the survey at your own pace.

In order to assist you with completion of the survey, you may need to review the SACSCOC Principles of Accreditation found here: http://www.sacscoc.org/pdf/2012PrinciplesOfAcreditation.pdf

Thank you for your time and participation. If you have questions about this study, please contact Kara Larkan-Skinner at [email protected] or call 806-584-0089. This study is being supervised by Dr. Stephanie Jones who also will be glad to answer any questions you may have. Dr. Jones can be reached via email at [email protected] or by phone at (806) 834-1380.

Texas Tech University also has a Board, the Institutional Review Board, which protects the rights of people who participate in research. You may contact them with questions by calling (806) 742-2064 or email them at [email protected]. You may also contact them by mail at Institutional Review Board for the Protection of Human Subjects, Office of the Vice President for Research, Texas Tech University, Lubbock, Texas 79409.


APPENDIX D

SACSCOC Recommendations and Improvements Survey

Q1.1 What year did your institution undergo SACSCOC Reaffirmation of Accreditation?

2011 (1) 2012 (2) 2013 (3) 2014 (4) 2015 (5)

Q1.2 Please identify your institution’s Carnegie category for institutional size, control and classification. Link to Carnegie website, if needed: http://carnegieclassifications.iu.edu/lookup_listings/institution.php

Institutional Control: Private (1); Public (2)
Classification: Rural Serving (1); Suburban (2); Urban (3); Other (4)
Institutional Size: Very Small (1); Small (2); Medium (3); Large (4); Very Large (5); Unknown Institutional Size (please enter your fall 2014 headcount) (6)


Q1.3 Which best represents the number of full-time faculty employed at your institution during the Fall 2014 semester?

0-50 (1) 51-100 (2) 101-150 (3) 151-200 (4) 200-250 (5) 251-300 (6) Greater than 300 (7)

Q1.4 How far in advance did your institution start working on the SACSCOC compliance certification in number of months?

0-6 months (1) 6-12 months (2) 12-18 months (3) 18-24 months (4) 24-30 months (5) 30-36 months (6) >36 months (7)


Q1.5 Number of full-time equivalent (FTE) staff primarily dedicated to each of the following (response options: 0 (1) through 9 (10), and 10 or more (11)):

Institutional effectiveness or assessment (1)
Institutional research (2)
Other (Please explain) (3)


Q1.6 In academic year 2014-2015, which category best reflects the total annual budget dedicated to institutional effectiveness (exclude Quality Enhancement Plan budget)?

$0 (1) $1-$9,999 (2) $10,000-$25,000 (3) $25,001-$50,000 (4) $50,001-$75,000 (5) $75,001-$100,000 (6) $100,001-$150,000 (7) $150,001-$200,000 (8) $200,001-$300,000 (9) $300,001-$400,000 (10) $400,001- $500,000 (11) Greater than $500,000 (12)

Q1.7 In academic year 2014-2015, which category best reflects the total annual technology budget dedicated to any institutional effectiveness technology?

$0 (1) $1-$9,999 (2) $10,000-$25,000 (3) $25,001-$50,000 (4) $50,001-$75,000 (5) $75,001-$100,000 (6) $100,001-$150,000 (7) $150,001-$200,000 (8) $200,001-$300,000 (9) $300,001-$400,000 (10) $400,001-$500,000 (11) Greater than $500,000 (12)


Q1.8 How many layers from the President is the primary institutional effectiveness staff member? (Ex: President to VPAA/Provost =1 layer)

1 (1) 2 (2) 3 (3) 4 (4) 5 or more (5)

Q1.9 Does the accreditation liaison sit on the President’s Cabinet (or respective council/committee)?

Yes (1) No (2)

Q1.10 Did your institution hire an institutional effectiveness or accreditation consultant in preparation for (or during) reaffirmation of accreditation?

Yes (1) No (2)

Q1.11 Did your institution receive a SACSCOC Vice President advance visit prior to reaffirmation of accreditation?

Yes (1) No (2)

Q1.12 Did your institution’s leadership team attend the SACSCOC orientation?

Yes (1) Some but not all leaders attended (Please explain) (2) ____________________ No (3)


Q1.13 Which of the following does your institution have?

No central software system for storing institutional effectiveness information (1)
Commercially available institutional effectiveness software system (e.g., TracDat, Weave, Tk20, SPOL) (2)
Homegrown system (e.g., Microsoft Access, Excel, IT developed system) (3)
Other option (Please explain) (4) ____________________


Q2.1 Identify any of the following institutional effectiveness principles that your institution received follow up questions or recommendations on during the Offsite review (1), Onsite review (2), or C & R review (3). The same list of principles was presented for each review phase:

2.4 (Institutional Mission) (1)
2.5 (Institutional Effectiveness) (2)
3.1.1 (Mission) (3)
3.3.1.1 (IE- Educational Programs) (4)
3.3.1.2 (IE- Administrative Support Services) (5)
3.3.1.3 (IE- Academic & Student Support Services) (6)
3.3.1.4 (IE- Research within Mission) (7)
3.3.1.5 (IE- Community/Public Service) (8)
3.4.7 (Consortial/Contractual Agreements) (9)
3.5.1 (General Education Competencies) (10)
4.1 (Student Achievement) (11)
None (12)


Q2.2 If your institution was placed on to monitoring, warning, or probation status with the SACSCOC, please identify any of the following institutional effectiveness principles involved. The same list of principles was presented for each status: Monitoring status (1), Warning status (2), Probation status (3).

2.4 (Institutional Mission) (1)
2.5 (Institutional Effectiveness) (2)
3.1.1 (Mission) (3)
3.3.1.1 (IE- Educational Programs) (4)
3.3.1.2 (IE- Administrative Support Services) (5)
3.3.1.3 (IE- Academic & Student Support Services) (6)
3.3.1.4 (IE- Research within Mission) (7)
3.3.1.5 (IE- Community/Public Service) (8)
3.4.7 (Consortial/Contractual Agreements) (9)
3.5.1 (General Education Competencies) (10)
4.1 (Student Achievement) (11)
None (12)


Q2.3 Rank order the following institutional effectiveness principles from the most difficult to the least difficult for your institution to demonstrate compliance with (1 reflects the most difficulty).

______ 2.4 (Institutional Mission) (1)

______ 2.5 (Institutional Effectiveness) (2)

______ 3.1.1 (Mission) (3)

______ 3.3.1.1 (IE- Educational Programs) (4)

______ 3.3.1.2 (IE- Administrative Support Services) (5)

______ 3.3.1.3 (IE- Academic & Student Support Services) (6)

______ 3.3.1.4 (IE- Research within Mission) (7)

______ 3.3.1.5 (IE - Community/Public Service) (8)

______ 3.4.7 (Consortial/Contractual Agreements) (9)

______ 3.5.1 (General Education Competencies) (10)

______ 4.1 (Student Achievement) (11)


Q2.4 In the most difficult cases identified in the previous question, did any of the following broad themes contribute to the challenges?

Yes (1) No (2)

Insufficient financial resources (1)

Insufficient staff (2)

Insufficient technology (3)

Insufficient appropriate decision-making or governance structure (e.g., committees) (4)

Too many committees (5)

Too few committees (6)

Insufficient organizational structure (7)

Insufficient evidence (8)

Insufficient knowledge of accreditation (9)

Insufficient knowledge of assessment or IE (10)

Insufficient executive-level leadership involvement (11)

Insufficient time (12)

Insufficient institutional processes or procedures (13)

Insufficient institutional buy-in from faculty (14)

Insufficient institutional buy-in from administration (15)

Insufficient institutional buy-in from staff (16)

Other (Please explain) (17)


Q3.1 Please identify the extent of change related to institutional effectiveness that your college experienced as a result of going through the reaffirmation of accreditation process (exclude the Quality Enhancement Plan from consideration when answering the questions below).

Response scale: No change or decrease (1); Slight increase (2); Moderate increase (3); Major increase (4)

Items:
Staff dedicated to institutional effectiveness, institutional research, or accreditation (1)
Financial resources dedicated to institutional effectiveness, institutional research, or accreditation (2)
Financial resources dedicated to technology for institutional effectiveness (3)
Technology dedicated to institutional effectiveness or assessment (4)
Professional development for faculty related to institutional effectiveness or assessment (5)
Professional development for educational support staff related to institutional effectiveness or assessment (6)
Professional development for administration related to institutional effectiveness or assessment (7)
Professional development for institutional effectiveness or institutional research staff related to institutional effectiveness or assessment (8)
Number of institutional effectiveness committees (9)
Number of institutional effectiveness processes (10)
Overall resources dedicated to institutional effectiveness (11)
Dean or division-level leadership involvement in institutional effectiveness (12)
Executive level leadership involvement in institutional effectiveness (13)
Department or unit-level leadership involvement in institutional effectiveness (14)
Quality or usefulness of reports produced from the institutional research or institutional effectiveness office (15)
Stakeholders involved in institutional effectiveness across the university (16)
Institutional effectiveness organizational structure (17)
Institutional effectiveness governance/committee structure (18)
Other (Please explain) (19)


Q3.2 Please identify the extent of improvement related to institutional effectiveness that your college experienced as a result of going through the reaffirmation of accreditation process (exclude the Quality Enhancement Plan from consideration when answering the questions below).

Response scale: No change or decrease (1); Slight improvement (2); Moderate improvement (3); Major improvement (4)

Items:
Overall effectiveness of the organization (1)
Recurring assessment of student learning outcomes after reaffirmation of accreditation (2)
Recurring assessment of non-academic areas after reaffirmation of accreditation (3)
Recurring assessment of student support services areas after reaffirmation of accreditation (4)
Institution remains in a continued state of compliance (5)
Preparation (or readiness) for SACSCOC 5th Year Review (6)
Other (Please explain) (7)


Q3.3 If SACSCOC visited your institution today, what is the likelihood that your institution would receive a recommendation in any of the following area(s)?

Not at all possible (1)

Very unlikely (2)

Slightly possible (3)

Somewhat likely (4)

Very likely (5)

2.4 (Institutional Mission) (1)

2.5 (Institutional Effectiveness) (2)

3.1.1 (Mission) (3)

3.3.1.1 (IE- Educational Programs) (4)

3.3.1.2 (IE- Administrative Support Services) (5)

3.3.1.3 (IE- Academic & Student Support Services) (6)

3.3.1.4 (IE- Research within Mission) (7)

3.3.1.5 (IE - Community/Public Service) (8)

3.4.7 (Consortial/Contractual Agreements) (9)

3.5.1 (General Education Competencies) (10)

4.1 (Student Achievement) (11)


APPENDIX E

Email Reminder to Study Participants

Dear ________________:

Last week, I notified you regarding a study that I am conducting about analyzing the role of SACSCOC recommendations and institutional effectiveness changes at community colleges in the SACSCOC region that have undergone reaffirmation between the years 2012 through 2015. This email is to thank those participants who have completed the survey and to remind those who have not participated yet that there is still time to complete the survey. The survey will remain open for 2 more weeks and will close on <insert date>.

Your participation is greatly appreciated. The results of this study will be used to advance the practice of higher education accreditation. Your survey responses are completely anonymous and cannot be traced back to you or your institution.

Follow this link to the Survey: ${l://SurveyLink?d=Take the Survey}

Or copy and paste the URL below into your internet browser: ${l://SurveyURL}

If you are not the appropriate institutional representative to complete the survey, please forward this request to the appropriate person.

If you have any questions regarding this research study, please contact me at [email protected] or call 806-584-0089. This study is being supervised by Dr. Stephanie Jones who also will be glad to answer any questions you may have. Dr. Jones can be reached via email at [email protected] or by phone at (806) 834-1380.

I truly appreciate your time and consideration in allowing me to conduct this study at your institution.

Kind regards,

Kara Larkan-Skinner
Doctoral Candidate, Higher Education Administration
Texas Tech University

Follow the link to opt out of future emails: ${l://OptOutLink?d=Click here to unsubscribe}


APPENDIX F

Email Reminder to Study Participants

Dear ________________:

I hope that this email finds you well. I just wanted to remind you that there’s still time to complete the survey on the role of SACSCOC recommendations and institutional effectiveness changes at community colleges in the SACSCOC region. The survey is set to close on May 13, 2015.

Your participation is greatly appreciated. The results of this study will be used to advance the practice of higher education accreditation, and the survey will take no more than 30 minutes of your time. The survey can be found at:

Follow this link to the Survey: ${l://SurveyLink?d=Take the Survey}

Or copy and paste the URL below into your internet browser: ${l://SurveyURL}

If you are not the appropriate institutional representative to complete the survey, please forward this request to the appropriate person.

If you have any questions regarding this research study, please contact me at [email protected] or call 806-584-0089. This study is being supervised by Dr. Stephanie Jones who also will be glad to answer any questions you may have. Dr. Jones can be reached via email at [email protected] or by phone at (806) 834-1380.

Thank you for your time.

Kind regards,

Kara Larkan-Skinner
Doctoral Candidate, Higher Education Administration
Texas Tech University

Follow the link to opt out of future emails: ${l://OptOutLink?d=Click here to unsubscribe}