




Social Science Research 33 (2004) 183–186
www.elsevier.com/locate/ssresearch

☆ Corrigendum to "Ascription or productivity? The determinants of departmental success in the NRC quality ratings" [Social Science Research 28, 228–239; DOI of original article: 10.1006/ssre.1999.0646]. All data used in this analysis and findings mentioned in the text but not shown are available on request.
E-mail address: [email protected].
0049-089X/$ - see front matter © 2003 Elsevier Inc. All rights reserved.
doi:10.1016/j.ssresearch.2003.09.006


Corrigendum

Ascription and departmental rankings revisited: A correction and a reanalysis☆

David Jacobs

Ohio State University, USA

Received September 30, 2003

In an article published in this journal (Jacobs, 1999) I found ascription in the rankings of sociology departments produced by the NRC's reputational method (Goldberger et al., 1995). After controlling for department research productivity, departments handicapped by ascription, gauged in part by a dummy coded 1 for location in a university with state, A&M, or a direction in the institution's name, did worse in these ratings. But there was a regrettable omission. I labeled the primary ascriptive variable "state, A&M, direction in name," yet I coded the three State University of New York schools (Albany, Binghamton, and Buffalo) 0 instead of 1, and I did not think to justify this decision. There are good reasons for this choice, as these New York schools have a liberal arts rather than an applied focus. The typical university with state in the last part of its name is a land grant institution with an agriculture school, but these New York institutions do not have an agriculture school, and they have now eliminated or de-emphasized the word state in their names.
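To make the coding rule concrete, the sketch below shows how the two versions of the name dummy could be constructed. This is an illustrative reconstruction, not the code used in Jacobs (1999); the institution list and column names are hypothetical.

```python
import pandas as pd

# Hypothetical department-level data; the real analysis used the NRC ratings file.
df = pd.DataFrame({"university": [
    "Michigan State University", "Texas A&M University",
    "University of Washington", "SUNY Albany", "Harvard University"]})

DIRECTIONS = ("Northern", "Southern", "Eastern", "Western",
              "Northwestern", "Southwestern", "Northeastern", "Southeastern")

def stigmatized_name(name: str) -> int:
    """1 if state, A&M, or a direction appears in the institution's name.

    Simple substring check for illustration only; the article restricts
    'state' to the last part of the name."""
    return int("State" in name or "A&M" in name
               or any(d in name for d in DIRECTIONS))

# Original coding: SUNY schools scored 0 despite the word state in their names.
df["name_stigma_suny0"] = df["university"].apply(stigmatized_name)
df.loc[df["university"].str.contains("SUNY"), "name_stigma_suny0"] = 0

# Alternative, more inclusive coding examined here: SUNY schools scored 1.
df["name_stigma_suny1"] = df["name_stigma_suny0"]
df.loc[df["university"].str.contains("SUNY"), "name_stigma_suny1"] = 1
```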

Yet without a justification for this decision, the article was misleading because the name variable with the SUNY schools coded 1 is significant at the .056 rather than at the .05 level when it is used in models otherwise identical to those reported. I correct this omission here by showing what happens when the more inclusive name measure is entered in a slightly different but more plausible model and by reporting tests suggesting that the original coding is most appropriate. Brief justifications for the hypotheses are presented first, followed by a discussion of the findings.



Hypotheses

If department ratings are partly due to ascription, university attributes that should be irrelevant may matter. In addition to the name stigmatization measure, I entered a dummy scored 1 for predominantly urban commuter schools and hypothesized that both coefficients would be negative. Another dummy, scored 1 to assess a halo effect if a department was located in the most prestigious Ivy League universities (Harvard, Princeton, Yale), should be positive, but departments with more female graduate students, who often specialize in less highly regarded subdisciplines such as marriage and the family or applied sociology, may get worse ratings. If raters reward departments for gender diversity, a higher percentage of female faculty should increase department prestige scores, so this new variable is included in this reanalysis.

Models that test for such ascriptive effects should include the best available measures of faculty productivity. The department's per-faculty counts of articles in the three most prestigious sociology journals, of citations, and of books measure this productivity. And larger departments should have more faculty visible to raters. Scholarly recognition takes time, so departments with more full professors should get higher ratings. Finally, department disparities in faculty recognition may matter, so I entered a threshold dummy based on a Gini index that assessed intradepartment faculty differences in citation counts.
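A minimal sketch of such a threshold measure follows, assuming hypothetical per-faculty citation counts; the .3 cutoff is the median reported in the note to Table 1, but this is not the original computation.

```python
import numpy as np

def gini(x):
    """Gini index of inequality for a vector of nonnegative counts."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    # Standard formula: (2*sum(i*x_i) - (n+1)*sum(x)) / (n*sum(x)).
    return float((2 * ranks - n - 1) @ x / (n * x.sum()))

# Hypothetical per-faculty citation counts for two departments.
dept_citations = {"Dept A": [5, 7, 6, 8], "Dept B": [0, 1, 2, 40]}

for dept, cites in dept_citations.items():
    g = gini(cites)
    unequal = int(g > 0.3)  # threshold dummy of the kind used in Models 1-4
    print(dept, round(g, 3), unequal)
```

On these toy numbers the evenly cited department scores near .10 (dummy 0) while the department dominated by one heavily cited member scores near .70 (dummy 1).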

Results and conclusions

Model 1 in Table 1 shows the best model, first reported as Model 2 of Table 3 in Jacobs (1999), while Model 2 in Table 1 shows the results with the university name variable recoded by scoring the SUNY schools 1 and with female faculty added (refer to Jacobs, 1999, for methodological aspects not discussed here). The recoded state, direction, A&M variable and the female faculty measure are both significant in the reanalysis reported in Model 2. The only contrast between these results and those in Jacobs (1999) concerns the Ivy League halo effect, which is no longer significant in this model.

Recall, however, that there are good reasons to think that the three SUNY schools differ from the more applied institutions with these stigmatizing names. This hypothesis can be tested by entering two name variables. One is coded 1 only for the SUNY schools while the other is coded exactly as it was in Jacobs (1999). If there are significant differences between the coefficients on these two variables, the evidence will suggest that these two effects should not be assessed with the same variable. This is so because combining two variables by addition requires an assumption that the coefficients on each are the same.
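The sketch below illustrates this kind of equality test with statsmodels; the data, variable names, and the single control are hypothetical stand-ins, not the original NRC data or estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame with the rating and the two separate name dummies;
# ln_faculty stands in for the productivity and size controls of Table 1.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "rating": rng.normal(3, 1, 95),
    "name_stigma": rng.integers(0, 2, 95),  # state/A&M/direction, SUNY coded 0
    "suny": rng.integers(0, 2, 95),         # 1 only for the SUNY schools
    "ln_faculty": rng.normal(3, 0.3, 95),
})

model = smf.ols("rating ~ name_stigma + suny + ln_faculty", data=df).fit()

# H0: the two name coefficients are equal. Rejecting H0 implies the two
# effects should not be merged into a single additive dummy.
print(model.t_test("name_stigma - suny = 0"))
```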

Model 3 presents a specification otherwise identical to Model 2 but with the two separate name dummies included. The test for equal coefficients on these variables shows that they differ, and this result persists in the next two models.

Table 1
The determinants of departmental quality ratings (bolded coefficient pairs indicate significant differences in the point estimates)

                                     Model 1    Model 2    Model 3    Model 4    Model 5^a
Ln faculty size                      .2826***   .2931***   .3067***   .2893***   .2914***
                                     (8.30)     (8.24)     (8.68)     (8.38)     (8.70)
% full professors                    .0018*     .0016*     .0019*     .0019*     .0020**
                                     (2.16)     (1.93)     (2.28)     (2.29)     (2.40)
Ln books per faculty                 .3480***   .3380***   .3135***   .3476***   .3501***
                                     (5.46)     (5.06)     (4.73)     (5.46)     (5.52)
Ln articles in prestigious
  journals per faculty               .2871***   .3189***   .3231***   .2918***   .2995***
                                     (5.61)     (5.84)     (6.05)     (5.69)     (6.05)
Ln citations per faculty             .0400**    .0402**    .0374**    .0388**    .0322*
                                     (2.64)     (2.58)     (2.44)     (2.55)     (2.07)
Unequal citations per faculty^a      -.0559*    -.0574*    -.0457     -.0485     -.0013*
                                     (-1.84)    (-1.84)    (-1.48)    (-1.56)    (-1.69)
% female graduate students           -.0030*    -.0038**   -.0033**   -.0027*    -.0027*
                                     (-2.34)    (-2.82)    (-2.51)    (-2.07)    (-2.06)
1 if prestigious Ivy League dept.    .1354*     .1188      .1265*     .1403*     .1302*
                                     (1.89)     (1.63)     (1.77)     (1.96)     (1.84)
1 if urban public commuter dept.     -.1057*    -.0954*    -.0978*    -.1019*    -.1127*
                                     (-1.97)    (-1.76)    (-1.84)    (-1.90)    (-2.08)
Ln % female faculty                  —          .0438*     .0444*     —          —
                                                (1.76)     (1.83)
1 if state, A&M, direction in
  name (SUNY depts. = 1)             —          -.0592*    —          —          —
                                                (-1.88)
1 if state, A&M, direction in
  name (SUNY depts. = 0)             -.0807**   —          -.0870**   -.0766*    -.0761*
                                     (-2.58)               (-2.61)    (-2.36)    (-2.35)
1 if SUNY department                 —          —          .0550      .0672      .0737
                                                           (0.90)     (1.10)     (1.23)
Intercept                            .5121***   .3893*     .3172      .4621**    .4748***
                                     (3.68)     (2.36)     (1.93)     (3.16)     (3.22)
R² (corrected)                       .874***    .869***    .875***    .875***    .875***
N                                    95         94         94         95         95

Significance: * ≤ .05, ** ≤ .01, *** ≤ .001 (t values beneath coefficients; one-tailed tests except for the intercept).
^a Following Jacobs (1999), in Models 1 to 4 inequality in faculty citations is measured with a dummy coded 1 if Gini is greater than .3 (its median); in Model 5 this effect is measured with the actual Gini index computed on citations.


In results not reported here, the coefficients on three dummies that separately assess the components of the original variable, A&M, direction, and state (with the SUNY schools coded 0), do not differ significantly when these three separate variables are entered together in the last three models. Such findings suggest that the SUNY schools should not be included in the stigmatizing name indicator, but A&M, direction, and state (in the last part of a university name) can be combined in one measure. To show that these results are not due to a missing value (including female faculty removes a case), I drop the female faculty variable in Model 4 and find results with identical implications. To improve the specification in Jacobs (1999), in the last model I enter the interval measure of department inequality in faculty citations (Gini) in place of the threshold-coded dummy, but all ascriptive results again persist.

The three tests that show significant coefficient differences on the two name measures reported in Models 3, 4, and 5 suggest that the original measurement decision is superior to the new coding. While I prefer the latter models because I believe that the SUNY schools are not the same as institutions with state in the last part of their name, either coding choice replicates the primary findings reported in Jacobs (1999). The corrected results continue to suggest that name-based stigmatization and other forms of ascription that should not matter help explain these NRC rankings of sociology departments.

Acknowledgment

I am grateful to the editor for letting me publish this correction.

References

Goldberger, M. L., Maher, B. A., & Flattau, P. E. (Eds.) (1995). Research-Doctorate Programs in the United States. Washington, DC: National Academy Press.

Jacobs, D. (1999). Ascription or productivity? The determinants of departmental success in the NRC quality ratings. Social Science Research, 28, 228–239.