
What’s In a Name? Measuring the Impact of the Maclean’s Reputational Survey




Presented at the 2011 Conference of the Canadian Institutional Research and Planning Association
Fredericton, NB, October 24, 2011

What's In a Name? Measuring the Impact of the Maclean's Reputational Survey
Garry Hansen
Director of Institutional Research and Planning
St. Thomas University

The Maclean's Reputation Indicator
- Reputation has a greater impact on overall rank than any other single indicator
- Currently 20% of overall rank
- Prone to bias and gaming
- Least transparent indicator: no raw data published

Topics
- The reputational survey instrument and methodology
- How to calculate actual pre-reputation indicator scores
- How to estimate reputational scores from ordinal ranks
- The relationship between reputation and ranking
- Actual magnitude of the differences between the rank intervals
- Exploring the relationship of reputation and ranking through longitudinal ordinal data

Limitations
- Not a rigorous statistical analysis (cf. Bastedo and Bowman)
- Based solely on the Primarily Undergraduate category. Previous research (cf. Michael and Drewes 2006) has determined that institution size and type have an impact on how rankings affect choice of university.

Description of the Survey
In 2010, more than 11,000 surveys were distributed to:
- 24 university officials at each ranked institution
- High school principals
- High school guidance counsellors
- Heads of a wide variety of national and regional organizations
- CEOs
- Corporate recruiters

Maclean's Reputational Survey Response Rates

Inferring the Impact of University Officials
- In 2010, 24 mailouts per university x 49 ranked universities = 1,176 mailouts.
- The response rate for university officials was 27%, so about 318 surveys were returned by university officials.
- The overall response rate was 7.1% of more than 11,000 (say, 11,500), so approximately 817 total responses.
- Responses from university officials therefore make up approximately 39% of all responses.

The Survey Instrument
- Two surveys of three questions each: a national survey and regional surveys (Atlantic, Quebec, Ontario, Western Provinces).
- Everyone completes the national survey, but only university officials, high school principals, and guidance counsellors complete a regional survey.
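The back-of-the-envelope inference above can be sketched in Python; the 11,500 figure is the deck's own rough stand-in for "more than 11,000":

```python
# Estimate what share of 2010 reputational-survey responses came from
# university officials, using the figures quoted on the slides.
mailouts_to_officials = 24 * 49                  # 24 per university x 49 ranked universities
official_responses = round(0.27 * mailouts_to_officials)  # 27% response rate -> 318
total_responses = 0.071 * 11_500                 # 7.1% overall -> approximately 817
official_share = official_responses / total_responses

print(mailouts_to_officials)        # 1176
print(official_responses)           # 318
print(round(official_share * 100))  # roughly 39 (percent)
```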

The survey questions:
1. In terms of overall quality, please place the following universities in one of four quartiles, circling the appropriate figure.
2. In terms of innovation, both institutional and social, please place the following universities in one of four quartiles.
3. As well, please choose three universities that, in your opinion, are emerging as the leaders of tomorrow. These three should be checked off in the final column.

Estimating the Magnitude of the Reputational Score
- Ordinal scores obscure the actual magnitude of difference.
- We can estimate a reputational score with reasonable confidence because:
  - We know the range.
  - We can calculate a pre-reputation score using the raw data provided in the magazine.
  - We know the overall rank and the reputational rank, which serve as tests for an estimated total score composed of the calculated pre-reputation score and the estimated reputational score.

Calculating the Actual Pre-reputation Score
Mean & SD -> Z-scores -> Percentages -> Weighted Scores
Based on Shale, Doug and Yolanda Liu, "Maclean's Arithmetic and Its Effect on Universities' Ratings" (2004)

Step 1: Mean and Standard Deviation
For each indicator, use the raw values provided in the magazine to calculate a mean and standard deviation. Excel formulae: AVERAGE() and STDEVP()
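As a sketch, Step 1 in Python rather than Excel: statistics.pstdev is the population standard deviation, which is what STDEVP() computes. The values are the Student Awards column from the raw-data slide.

```python
from statistics import mean, pstdev

# Student Awards raw values for the 22 Primarily Undergraduate universities
student_awards = [6.1, 2.1, 1.4, 1.6, 1.1, 2.1, 1.7, 2.0, 1.6, 6.2, 1.2,
                  1.3, 0.8, 2.5, 2.2, 0.4, 2.3, 3.5, 0.9, 2.9, 1.4, 2.2]

print(round(mean(student_awards), 3))    # 2.159, as on the slide
print(round(pstdev(student_awards), 3))  # 1.441, as on the slide
```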

RAW DATA
Institution            Student Awards   Stud/Fac Ratio   Faculty Awards
Acadia                      6.1              15.4             1.9
Bishop's                    2.1              16.9             1.7
Brandon                     1.4              16.0             1.2
Brock                       1.6              24.6             0.7
Cape Breton                 1.1              20.3             0
Lakehead                    2.1              23.4             1.4
Laurentian                  1.7              15.3             0.9
Lethbridge                  2.0              15.6             1.9
Moncton                     1.6              14.0             0
Mount Allison               6.2              15.7             4.4
Mount Saint Vincent         1.2              18.5             1.4
Nipissing                   1.3              28.2             0
Ryerson                     0.8              31.5             0.3
Saint Mary's                2.5              25.4             2.5
St. Francis Xavier          2.2              17.8             1.5
St. Thomas                  0.4              23.2             1.9
Trent                       2.3              26.1             6.2
UNBC                        3.5              15.1             3.2
UOIT                        0.9              38.8             0
UPEI                        2.9              16.0             0.9
Wilfrid Laurier             1.4              27.7             2.9
Winnipeg                    2.2              22.6             0
Mean                        2.159            21.277           1.586
StdDev                      1.441            6.333            1.521

Step 2: Z-scores
For each indicator and institution, calculate the z-score by subtracting the mean from the raw value and dividing by the standard deviation. Excel formula: STANDARDIZE()
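Step 2 as a Python sketch. Excel's STANDARDIZE(x, mean, sd) computes (x - mean) / sd; the deck reverses the sign for Student/Faculty Ratio because smaller is better there.

```python
# z-score, equivalent to Excel's STANDARDIZE(), with an optional sign
# reversal for indicators where a smaller raw value is better.
def z_score(x, mu, sd, smaller_is_better=False):
    return (mu - x) / sd if smaller_is_better else (x - mu) / sd

# Acadia, using the mean and population-SD rows of the raw-data table
print(round(z_score(6.1, 2.159, 1.441), 3))   # Student Awards: 2.735 on the slide
print(round(z_score(15.4, 21.277, 6.333, smaller_is_better=True), 3))  # 0.928
```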

Z-SCORES
Institution            Student Awards   Stud/Fac Ratio   Faculty Awards
Acadia                      2.735            0.928            0.206
Bishop's                   -0.041            0.691            0.075
Brandon                    -0.527            0.833           -0.254
Brock                      -0.388           -0.525           -0.583
Cape Breton                -0.735            0.154           -1.043
Lakehead                   -0.041           -0.335           -0.123
Laurentian                 -0.319            0.944           -0.451
Lethbridge                 -0.110            0.896            0.206
Moncton                    -0.388            1.149           -1.043
Mount Allison               2.805            0.881            1.850
Mount Saint Vincent        -0.666            0.439           -0.123
Nipissing                  -0.596           -1.093           -1.043
Ryerson                    -0.943           -1.614           -0.846
Saint Mary's                0.237           -0.651            0.601
St. Francis Xavier          0.028            0.549           -0.057
St. Thomas                 -1.221           -0.304            0.206
Trent                       0.098           -0.762            3.034
UNBC                        0.931            0.975            1.061
UOIT                       -0.874           -2.767           -1.043
UPEI                        0.514            0.833           -0.451
Wilfrid Laurier            -0.527           -1.014            0.864
Winnipeg                    0.028           -0.209           -1.043

NOTE: Since smaller is better for the Student/Faculty Ratio indicator, reverse the formula: (mean - raw score) / standard deviation.

Step 3: Calculate Percentages
For each institution and indicator, calculate a percentage from the z-score based on the normal distribution curve. Excel function: NORMSDIST()
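Step 3 in Python: NORMSDIST(z) is the standard normal cumulative distribution function, which can be written with math.erf (a sketch of the computation, not the deck's actual spreadsheet):

```python
from math import erf, sqrt

def normsdist(z):
    """Standard normal CDF, equivalent to Excel's NORMSDIST()."""
    return (1 + erf(z / sqrt(2))) / 2

# Acadia's first two z-scores become the percentages 0.997 and 0.823
print(round(normsdist(2.735), 3))
print(round(normsdist(0.928), 3))
```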

PERCENTAGES
Institution            Student Awards   Stud/Fac Ratio   Faculty Awards
Acadia                      0.997            0.823            0.582
Bishop's                    0.484            0.755            0.530
Brandon                     0.299            0.798            0.400
Brock                       0.349            0.300            0.280
Cape Breton                 0.231            0.561            0.148
Lakehead                    0.484            0.369            0.451
Laurentian                  0.375            0.827            0.326
Lethbridge                  0.456            0.815            0.582
Moncton                     0.349            0.875            0.148
Mount Allison               0.997            0.811            0.968
Mount Saint Vincent         0.253            0.670            0.451
Nipissing                   0.276            0.137            0.148
Ryerson                     0.173            0.053            0.199
Saint Mary's                0.594            0.258            0.726
St. Francis Xavier          0.511            0.709            0.477
St. Thomas                  0.111            0.381            0.582
Trent                       0.539            0.223            0.999
UNBC                        0.824            0.835            0.856
UOIT                        0.191            0.003            0.148
UPEI                        0.696            0.798            0.326
Wilfrid Laurier             0.299            0.155            0.806
Winnipeg                    0.511            0.417            0.148

Steps 4 and 5: Weighted Scores
Step 4: Calculate the actual indicator score by multiplying the weighted value of the indicator by the percentage calculated in Step 3.
Step 5: For each institution, sum the indicator scores to calculate a total pre-reputation score out of 80. (Note: the score for St. Thomas University is actually out of 74 because it is excluded from the Medical/Science Research indicator.)

WEIGHTED SCORES
Institution            Student Awards   Stud/Fac Ratio   Faculty Awards
Acadia                      9.97             8.23             4.65
Bishop's                    4.84             7.55             4.24
Brandon                     2.99             7.98             3.20
Brock                       3.49             3.00             2.24
Cape Breton                 2.31             5.61             1.19
Lakehead                    4.84             3.69             3.61
Laurentian                  3.75             8.27             2.61
Lethbridge                  4.56             8.15             4.65
Moncton                     3.49             8.75             1.19
Mount Allison               9.97             8.11             7.74
Mount Saint Vincent         2.53             6.70             3.61
Nipissing                   2.76             1.37             1.19
Ryerson                     1.73             0.53             1.59
Saint Mary's                5.94             2.58             5.81
St. Francis Xavier          5.11             7.09             3.82
St. Thomas                  1.11             3.81             4.65
Trent                       5.39             2.23             7.99
UNBC                        8.24             8.35             6.85
UOIT                        1.91             0.03             1.19
UPEI                        6.96             7.98             2.61
Wilfrid Laurier             2.99             1.55             6.45
Winnipeg                    5.11             4.17             1.19
Weight                      10               10               8

This is what we now know:
Institution            Total w/o Reputation (/80)   Rank w/o Reputation   Reputation Rank   Overall Rank
Acadia                      49.99                         3                     6                 2
Bishop's                    41.93                         8                    10                 8*
Brandon                     36.11                        15                    18                17*
Brock                       35.20                        16                    11                14*
Cape Breton                 23.16                        20                    22                21
Lakehead                    43.37                         7                    17                12
Laurentian                  40.12                        10                    19                14*
Lethbridge                  44.15                         6                     7                 4
Moncton                     36.41                        14                    20                20
Mount Allison               56.97                         1                     2                 1
Mount Saint Vincent         34.08                        17                    14                19
Nipissing                   15.90                        22                    21                22
Ryerson                     21.46                        21                     1                17*
Saint Mary's                41.36                         9                    13                11
St. Francis Xavier          38.13                        12                     4                 7
St. Thomas                  37.39                        13                    16                16
Trent                       48.19                         4                    12                 6
UNBC                        54.36                         2                     9                 3
UOIT                        31.97                        19                     8                13
UPEI                        45.10                         5                    15                 8*
Wilfrid Laurier             39.78                        11                     3                 5
Winnipeg                    33.69                        18                     5                10

Estimating Reputational Scores
- As a starting point, equally distribute the known range (0-20 points in 2010) across the rank intervals (22 in 2010).
- Add these estimated reputation scores to the previously calculated pre-reputation score to produce an estimated overall score.
- From these estimated scores, calculate a rank for reputation and a rank for overall, and compare each estimated rank with the actual rank.
- If the estimated overall rank is not equal to the actual overall rank, add or subtract as necessary from the reputation score estimates.
- Repeat until all estimated ranks match the actual ranks.
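The fitting loop described above can be sketched as follows. The fixed 1-point adjustment increment and the three-university dataset are illustrative assumptions: the talk only says "add or subtract as necessary", and it also checks the reputation rank, which is omitted here for brevity.

```python
def ranks(scores):
    """Map name -> rank, where rank 1 is the highest score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: i + 1 for i, name in enumerate(ordered)}

def estimate_reputation(pre, rep_rank, actual_overall, lo=0.0, hi=20.0):
    # Starting point: spread the known range evenly across the rank intervals.
    step = (hi - lo) / (len(pre) - 1)
    rep = {u: hi - (rep_rank[u] - 1) * step for u in pre}
    for _ in range(100):                        # safety cap on iterations
        est = ranks({u: pre[u] + rep[u] for u in pre})
        if est == actual_overall:               # all estimated ranks match
            break
        for u in pre:                           # nudge the misplaced estimates
            if est[u] > actual_overall[u]:      # ranked too low: add points
                rep[u] = min(hi, rep[u] + 1)
            elif est[u] < actual_overall[u]:    # ranked too high: subtract
                rep[u] = max(lo, rep[u] - 1)
    return rep

# Hypothetical mini-example (NOT the 2010 data): three universities
pre = {"A": 50.0, "B": 48.0, "C": 40.0}         # calculated pre-reputation scores
rep_rank = {"A": 2, "B": 1, "C": 3}             # published reputation ranks
overall = {"A": 1, "B": 2, "C": 3}              # published overall ranks
rep = estimate_reputation(pre, rep_rank, overall)
print(ranks({u: pre[u] + rep[u] for u in pre}))  # matches the published overall ranks
```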

Example

Exploring the Data

Pre-Reputation and Estimated Reputation Scores (Maclean's 2010)

Positive or Neutral Impact (2010)
Institution            Rank w/o Rep   Overall Rank   Effect of Rep
Winnipeg                    18             10             +8
Wilfrid Laurier             11              5             +6
UOIT                        19             13             +6
St. Francis Xavier          12              7             +5
Ryerson                     21             17             +4
Lethbridge                   6              4             +2
Brock                       16             14             +2
Acadia                       3              2             +1
Mount Allison                1              1              0
Bishop's                     8              8              0
Nipissing                   22             22              0

Negative Impact (2010)
Institution            Rank w/o Rep   Overall Rank   Effect of Rep
Moncton                     14             20             -6
Lakehead                     7             12             -5
Laurentian                  10             14             -4
St. Thomas                  13             16             -3
UPEI                         5              8             -3
Brandon                     15             17             -2
Mount Saint Vincent         17             19             -2
Saint Mary's                 9             11             -2
Trent                        4              6             -2
Cape Breton                 20             21             -1
UNBC                         2              3             -1

Is Reputation a Self-fulfilling Prophecy?
- If the reputational indicator is truly independent of the rankings, then the reputational indicator should affect the rankings (since it comprises 20% of the overall rank), but the rankings shouldn't affect the reputational indicator.
- Research on the U.S. News and World Report and Times Higher Education Supplement rankings has established that overall rankings have a causal impact on future peer assessments of reputation (Bastedo and Bowman 2010).

Bastedo and Bowman
"[I]t is difficult to maintain the fantasy that reputational scores are independent from the rankings themselves. It would take a massive, discontinuous change in academic quality to notably influence reputation scores in any given year. Nearly always, the causal chain is that rankings change in response to shifts in their particular indicators (e.g. faculty-student ratio), and reputations shift in response to rankings. But clearly, rankings drive reputation, not the other way around." (Bastedo and Bowman 2011)

The Lag Effect
- Possible evidence of the overall Maclean's rank determining reputation is found in a lag effect: after a sudden change in overall rank, reputation changes after a year or more.

What's Happening Here?

Final Thoughts
- The reputational survey has a disproportionate impact on overall rankings.
- The various types of respondents to the reputational survey may have vastly different conceptions of quality and innovation. Some institutions, by virtue of location and programme offerings, will have a higher profile among certain types of respondents (e.g. corporate or labour market).
- There is some evidence of circularity: for some institutions, the reputation score may be largely driven by the very rankings they inform.

Questions?
Garry Hansen
St. Thomas University
[email protected]