I wonder whether it is really possible to rate something as complicated as a college or university, or to rank things as different as individual graduate programs. Surveys will always amount to little more than popularity contests, but maybe those tell us something.
Then again, maybe not.
The NRC rankings in music were discussed at length on the American Musicological Society's Internet mailing list (AMSList, subscribe through listproc@ucdavis.edu). If I can get permission from the list, I will post the discussion here.
The general consensus there was that the NRC rankings had little meaning, but could be used by the lucky top ten to build public awareness of their programs, and possibly attract more and better students. Most seemed to agree that those rated at the top would benefit while the others would probably not be affected at all.
It remains to be seen whether this will be true.
There are other, more widely known rankings: in particular, the ratings published by U.S. News & World Report and those of Business Week for business schools come to mind.
I read something in the New York Times over the weekend (a letter to the editor that I cut out and promptly lost) mentioning that schools had been found to have deliberately misrepresented themselves in supplying information to USN&WR in order to improve their standings in the ratings. I had not heard about this, and I have been unable to locate any information on the subject (please write to me if you know anything about this, or some place on the Internet with information about it).
Until I read that, it had never occurred to me that the information in the NRC report might have been intentionally misreported. The thought is frightening.
At my own institution, it seems clear that the numbers for my department are wrong simply because of the incompetence of the person who filled out the forms: NYU certainly isn't likely to improve its image by reporting only two graduate students in music!
However, given the correlation supposedly found between larger faculty size and higher rank, I am prompted to wonder whether a list of faculty that stretches down the page is likely to sway a survey respondent into thinking that such a large department must, by virtue of its size, offer a decent education.
It seems clear to me that no human being, no matter how long they've been around in the field of music, would truly be competent to pass judgement on the faculty quality and teaching effectiveness of fifty different music programs. In fact, I was stunned when I actually read what the survey asked the respondents to consider in evaluating "effectiveness":
"Please consider the accessibility of the faculty, the curricula, the instructional and research facilities, the quality of graduate students, the performance of graduates, the clarity of stated program objectives and expectations, the appropr iateness of program requirements and timetables, the adequacy of graduate advising and mentorship, the commitment of the program in assuring access and promoting success of students historically underrepresented in graduate education, the quality of assoc iated personnel (post-doctorates, research scientists, et. al.) and other factors that contribute to the effectiveness of the research-doctorate program." [even though these personnel are not listed?] (App. F, p. 124)
For "Change in Program Quality in the Last Five Years" the survey respondents were instructed:
"Please consider both the scholarly quality of the program faculty and the effectiveness of the program in educating research scholars/scientists. Compare the quality of the program today with its quality five years ago not the change i n the program's relative standing among other programs in the field." (App. F, p. 124)
Rating "Effectiveness" according to these criteria is surely a superhuman task who would know this information for anything other than the programs of which they have been a part? Respondents were allowed to check "Don't know well enough to eval uate" for both "quality" and "effectiveness," but I'm afraid that I would have to check that last choice more often than not. This seems to me to be an insurmountable obstacle to any survey that doesn't limit itself to a very small number of programs.
More interesting still would be a survey in which no list of faculty was provided. My bet is that few respondents would be able to rate more than twenty programs, and that most of those ratings would be based not on first-hand knowledge so much as on general reputation.
I think the authors of the NRC study are just fooling themselves if they think that a list of faculty really gets around that problem in any meaningful way.
Below are a couple of links to some information about the more widely known college rankings.
Building reputations: How the game is played
Side-by-side comparison of Business Week and USNR ratings