This introduction is quite long; if you are interested in getting directly to my critique, click here.
Otherwise, please read this document first. It provides a lot of the context of how I came to delve into this issue.
Here's a brief outline of this document:
For an overview of all the materials I have assembled on the subject here at my home page, click here.
On September 13, 1995, the New York Times published an article reporting on a new survey that purported to rate "doctoral-research" programs in 41 different fields at universities across the country ("New Ranking of Graduate Programs Serves Up Familiar Names and a Few Surprises").
In the field of music, this evoked quite a bit of discussion. In my corner of the universe, the rankings raised many eyebrows, not so much for which programs were rated well, as for which programs were not mentioned. However, discussion on the Internet mailing list of the American Musicological Society (AMSList, subscribe at listproc@ucdavis.edu) mostly concerned not the rankings themselves, but the degree to which such ratings were ever meaningful in the first place.
Given the peculiarities of the rankings in Music and my curiosity over the rank of my own program, I searched on the Worldwide Web to see if any of the report text was available for browsing. I discovered that the organization which produced the report, the National Research Council, is associated with the National Academy of Sciences, and that portions of the report were available for viewing from the Web page of the National Academy Press, the publishing arm of the NAS.
I brought that news back to the AMSList, and also downloaded the data tables for Music. I was in for quite a shock. I immediately noticed all sorts of peculiarities in the faculty numbers reported for the ranked institutions. I showed the table around in my own department, and the general consensus was that the numbers were inconsistent. The figure everyone noticed was for Indiana University, which we all know to be the largest music school in the country, yet which reported only six faculty. This would not have seemed quite so strange were it not for the fact that Illinois reported 74 and Eastman 50, while Michigan reported 22. Something was wrong with these numbers: the music schools are of comparable size (each has several hundred students), yet the reported figures varied widely, and certainly not in proportion to the actual number of students.
So, I pulled out the College Music Society Directory and started counting listed faculty. It soon became apparent that some of the numbers represented the entire faculty, regardless of discipline or involvement in doctoral teaching and advising (Illinois); others represented only Musicology/Music History (Indiana); and others (Eastman) some unidentifiable middle-ground number.
As it turns out, the NRC's numbers were derived from data about the individual programs collected by designated "Coordinators" at each of the ranked institutions (referred to as "Institutional Coordinators" in the report, or "ICs"). The NRC depended on these data for the material they sent out to survey respondents, each of whom received a list of the faculty in each program being ranked.
By this time there were distant rumblings throughout the University about which programs made it into the top ten, which ones had not, and which ones had fallen in their rankings. So, at the request of my department, I wrote up my findings for internal use.
The report you find here is not that report; instead, it is a report based on my "research" (counting, really), aimed at a broad audience and intended for distribution across the Worldwide Web. I am no statistician, so I have probably misinterpreted some of the things in the report. Also, what you see here was originally written on the basis of the materials available from the NRC on the WWW, before I had the opportunity to read the enormous printed report. In light of what I have now read in the printed version, I have altered a few conclusions and a portion of the text.
However, my central criticism of the NRC report remains: the data upon which the NRC based their report are inaccurate and inconsistent. Any survey, and any conclusions drawn from those data, must therefore be received with great skepticism: since the report mostly measures reputation, there is a strong likelihood that the results would have been different had the data been more accurate.
Included here is my revised version of the report. It is no doubt filled with errors and misstatements, for which I am entirely responsible. But whatever inaccuracies there may be in the details, from the broadest perspective my comparison to another data source undoubtedly calls into question the validity of the results for the field of Music.
If you slog through my entire report, please be sure to keep track of any inaccuracies, misstatements, erroneous conclusions, or outright errors. I would be most grateful if you would forward these to me using the MAILTO link at the end of the report (David Fenton).
I welcome any and all comments and corrections. In particular, I seek information from anyone who participated in the data collection, or anyone who was asked to fill out the survey. I would like to hear about your experiences in being confronted with faculty lists for 50 programs and being asked to rank them.
Click here to read the report now.
In the process of preparing this report, I corresponded at some length with a representative of the NRC, expressing my reservations about the report and giving him the opportunity to examine what I had written.
This individual's response (which may or may not represent the position of the NRC) was that it would have been impossible for the authors of the survey to have verified the accuracy of all the data collected from the "Institutional Coordinators." This amounts to an admission that the data were not tested for accuracy at any point in the preparation of the report: the data from the ICs were simply taken on faith and used as the basis for the survey.
I have repeatedly asked whether any sampling was done to at least attempt to verify the accuracy of the data, or whether anyone familiar with the field of Music was ever asked to "eyeball" the collected data. The NRC representative has answered each time with the same response: there were just too many programs.
Based on these responses, it seems safe to assume that the NRC did not at any point make any effort to check whether the data they collected bore any resemblance whatsoever to reality. From the comparison to the CMS Directory, it seems painfully clear that the data for Music are so inconsistent as to defy comparison between programs. If this proves true, it would surely invalidate any rankings in a survey based on those inaccurate data.
In light of this serious criticism of mine, it is interesting that the NRC did not respond to my questions until after I advised them that I intended to distribute my report on the Worldwide Web. And it was only after they had read my full report that they made a very peculiar request. They wanted me to seek their permission for two things: to quote from their report and reproduce its data, and to establish a WWW link to their page.
The first seemed to me to be an odd request, because they had seen my report and would therefore know that I had cited their documents with every quotation and had fully credited their report as the source of the data I had used. I also included a statement intended to acknowledge any copyright they might hold in the data.
Had I been publishing my report in an academic journal, I would have needed no permission, because all of my quotations and all of my usage of their data were necessary for me to be able to explain my critique of their report, and all of them fully acknowledged the original source. My use of their report clearly falls under the concept of "Fair Use" in copyright law.
The second issue seemed even stranger: permission to establish a WWW link? How could they ask such a thing? I had found the link through the use of a WWW search engine, so anyone else could do the same. Their Web page is freely available to anyone who can find it, so it seemed unfathomable to me that they could try to limit access to it. Unquestionably, the right to grant permission to link implies the right to refuse it.
I immediately saw this demand for permission for both usages as a veiled copyright threat: they were telegraphing to me that they might seek to quash the publication of my critique. They had a lot at stake, you see, for my report points out weaknesses at the very heart of this ambitious and extensive survey.
My response was to seek information on my rights, which is where I found out about "Fair Use" (isn't the Internet wonderful?). But, finally, to demonstrate my good faith, I petitioned for the permission they demanded while registering my belief that I was under no legal obligation to do so.
They granted the permission I had requested without delay, but the issues raised are nonetheless very important to the health of the Worldwide Web. What would have happened if I had refused to seek permission, on the grounds of my right to "Fair Use"? Would they have sought to block my distribution of the URL for my WWW report?
Ultimately, we will never know. However, my experience should be taken as a cautionary tale for all those embarking on WWW publishing.
I have assembled quite a bit of documentation about this whole issue, including some of my correspondence with the NRC and some of the discussion which ensued in Usenet groups.
Click here to read more about DWF's adventures in copyright.
My experiences with Copyright Issues includes links to copyright information on the Net.
DWF's ruminations on the significance of these kinds of ratings includes links to some other WWW info. on the subject.
So what should we do about it?
[ To Overview of NRC Report Critique. . . ]
[ Back to DWF's home page. . . ]
[ Back to previous page. . . ]