Apparent Anomalies in Reported Data for the Field of Music in the Report of the National Research Council, "Research-Doctorate Programs in the United States: Continuity and Change"

By David W. Fenton

HTML 3 Version (with Tables)


COPYRIGHT NOTICES
With the exception of data and quotations from the NRC report, all material in this document and the documents reached by the links below (except those linked to the National Academy Press's Home Page) is ©1995 by the author, David W. Fenton. This text and that of all of these documents may not be re-distributed or re-used in any form without including this notice.

The quotations and data from the NRC study are reprinted with permission from RESEARCH-DOCTORATE PROGRAMS IN THE UNITED STATES. ©1995 by the National Academy of Sciences. Courtesy of the National Academy Press, Washington, D.C.


Contents

  I. Comparison Data
  II. Implications of the Comparison
  III. Numbers of Students, Ph.D.'s Awarded and Assistantships Reported


I. Comparison Data

This discussion is chiefly concerned with a comparison between the reported numbers of faculty in the NRC's report(1) and a count of music faculties made from listings in the College Music Society's Directory of Music Faculties in Colleges and Universities, U.S. and Canada for the period 1992-94 (the NRC's report was for 1993).(2)

The NRC tabulated its faculty counts from "Institutional Coordinator Response Data" provided to the NRC by representatives of each of the institutions surveyed (see note 4). The numbers to which the "Institutional Coordinator" data are compared come from a manual count of members of music faculties listed in the Directory using restricted definitions of both "Research-Doctoral Programs" and "faculty" — only faculty members listed in the Directory as holding full-time appointments in the following research-oriented fields were counted:

  Music History and Musicology
  Music Theory
  Ethnomusicology

To mitigate any overly restrictive effects of this narrow definition, the fields of Music Education and Composition are listed separately in Table 1 if the programs award the Ph.D. Instructors in doctoral programs not awarding the Ph.D. (including the D.M.A. and D.Mus.) have been classified as "Other" (Table 1, column 5).(3)

In the tabulation of data from the CMS Directory, "faculty" members were defined as those holding the ranks of associate, assistant and full professor. Part-time instructors, adjunct instructors and emeritus professors were excluded on the assumption that they would not be teaching Ph.D. candidates. All of these restrictions are consistent with the stated scope of the NRC report.(4)
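The counting rule can be stated concretely. The following sketch is illustrative only (the record layout and entries are hypothetical, not actual Directory data), but it applies the same three tests used in the tally: full-time status, professorial rank, and a research-oriented field.

```python
# Illustrative sketch of the counting rule applied to Directory-style
# listings. The records below are hypothetical; only the criteria
# (full-time, professorial rank, research-oriented field) come from the text.

COUNTED_RANKS = {"professor", "associate professor", "assistant professor"}
RESEARCH_FIELDS = {"musicology", "music history", "music theory", "ethnomusicology"}

faculty = [
    {"rank": "professor",           "field": "musicology",   "full_time": True},
    {"rank": "adjunct instructor",  "field": "music theory", "full_time": False},
    {"rank": "associate professor", "field": "performance",  "full_time": True},
    {"rank": "professor emeritus",  "field": "musicology",   "full_time": True},
]

def counted(member):
    """True only for full-time assistant/associate/full professors
    in the research-oriented fields."""
    return (member["full_time"]
            and member["rank"] in COUNTED_RANKS
            and member["field"] in RESEARCH_FIELDS)

print(sum(1 for m in faculty if counted(m)))   # -> 1: only the first record passes
```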

Table 1: Significant Discrepancies in Numbers of Faculty Reported

Institution               | 1 Reported | 2 Actual | 3 Mus. Ed. | 4 Composition | 5 Other Fields | 6 Adjuncts, etc.
University of Rochester   |     50     |    23    |     4      |      7        |       52       |       38
University of Illinois    |     74     |    21    |    10      |     --        |       46       |        6
SUNY-Stony Brook          |     32     |    12    |    --      |      0        |        6       |       17
U. Texas, Austin          |     21     |    16    |     7      |      3*       |       47       |       10
Indiana University        |      6     |    18+   |     7      |      6**      |      100       |       14
U. North Texas            |     83     |    15    |    10      |      4        |       53       |       12
Northwestern              |     35     |    19    |     8      |     --        |       32       |       48
U. California, San Diego  |     24     |    18    |    --      |      1        |        1       |        7
Florida State             |     47     |    20    |    13      |      2**      |       39       |        7
Ohio State                |     50     |    14    |    15      |      2*       |       27       |       26
University of Washington  |     28     |    14    |    --      |     --        |       21       |       47
University of Cincinnati  |     22     |    11    |    --      |      1*       |       81       |       41
U. Maryland, College Park |     45     |    14    |     4      |     --        |       25       |       12
USC                       |     71     |     4    |    --      |     --        |       44       |       71
Temple                    |     38     |    11    |     6      |      1*       |       25       |       63
Wesleyan University       |     10     |     5    |    --      |     --        |        1       |       11

Notes on numbers of Actual Faculty (column 2)

Notes on Composition Faculty (column 4)

For composers not counted in column 2:

II. Implications of the Comparison

The conclusion to be drawn from this comparison is quite obvious: the numbers in the NRC report are inaccurate because Institutional Coordinators did not use consistent criteria for reporting the numbers of faculty. Even were one to abandon the "narrow" limitations of my tally and count all professors involved in the instruction of Ph.D. candidates, the data as reported would still be inaccurate, as can be seen by totaling columns 2 through 5 of Table 1 and comparing the result to column 1. Whereas the NRC report explicitly limited itself to Ph.D.-granting programs, it is clear from the Directory data that some ICs must have reported all faculty, regardless of specialization, and regardless of the proportion of students involved in Ph.D. programs,(5) while others reported only faculty in a limited number of fields and programs. Others apparently included faculty for all doctoral programs, including D.M.A., D.Mus. and E.D.D. programs.
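The column comparison just described can be sketched in a few lines. The values below are taken from three rows of Table 1 as reconstructed above, with all the caveats already registered; the point is only that no single definition of "faculty" reproduces the reported figures across institutions.

```python
# Sketch of the column comparison described above, using three rows from
# Table 1 as reconstructed here ("--" entries treated as 0; Indiana's "18+"
# taken as 18). Tuple order: reported, actual, mus. ed., composition,
# other fields, adjuncts.

table1 = {
    "SUNY-Stony Brook":   (32, 12,  0, 0,   6, 17),
    "U. North Texas":     (83, 15, 10, 4,  53, 12),
    "Indiana University": ( 6, 18,  7, 6, 100, 14),
}

for school, (reported, *counts) in table1.items():
    broad = sum(counts[:-1])              # columns 2 through 5
    with_adjuncts = broad + counts[-1]    # add column 6
    print(f"{school}: reported {reported}, "
          f"columns 2-5 total {broad}, with adjuncts {with_adjuncts}")

# Stony Brook's 32 is only approximated by counting everyone (35), while
# Indiana's 6 falls far below even the narrow count: no single definition
# fits both institutions at once.
```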

In short, there is no consistent definition of "faculty members in Research-Doctoral programs in Music" that could produce the data on faculty numbers collected by the ICs. That such inconsistent data were collected would seem to suggest that the instructions from the authors of the study to the "Institutional Coordinators" were not sufficiently detailed to insure consistency of reporting from institution to institution. However, the data collection forms printed in Appendix D (pp. 91-98) show that the wording of the instructions to the ICs was explicit and unambiguous. Unquestionably, the errors must have been made on the part of ICs. This leads to two unpleasant alternative explanations:

  1. Some of the "Institutional Coordinators" were incompetent; or
  2. Some institutions deliberately misrepresented their faculty.

Regardless of the question of the intended scope of the survey, and regardless of the ultimate reason for the errors, the fact of this inconsistency alone wholly invalidates any comparisons based on that data, and consequently invalidates any conclusions about apparent correlations between so-called "quality" and faculty size.(6)

Specific examples of these fatal inconsistencies are numerous. The most obvious discrepancy is in the numbers for the largest music school in the country, Indiana University, which is listed as having only 6 faculty members. Comparable large music schools with several hundred students list a wide variety of numbers — the University of Illinois lists 74; Michigan 22; and the Eastman School of Music 50. At the other extreme, a smaller department of music at the State University of New York at Stony Brook reports 32 faculty members. In the case of Indiana University, it is clear that only professors of Music History and Musicology were reported, while Music Theory & Analysis and the Ethnomusicology program (which is part of the Anthropology Department) were omitted. On the other hand, SUNY-Stony Brook appears to have counted the entire faculty (including some emeritus or part-time faculty). And, even worse, Temple University, which has a Ph.D. program only in Music Education with only six faculty members specializing in the field, reported 38 faculty members. The numbers from these three institutions alone are surely not comparable, for they represent entirely different things.

Additionally, not all the separate programs in all the ranked institutions were accounted for. While the figures for UCLA encompass the faculty of three different departments within the University (Dept. of Music, Dept. of Ethnomusicology and Systematic Musicology, and Dept. of Musicology), and those for the Eastman School of Music cover an institution with many departments offering nine different doctoral degrees (D.M.A. in Composition, Conducting, Music Education, Performance, Accompanying; Ph.D. in Composition, Historical Musicology, Music Education, Music Theory), other programs are not so fully reported. Yale University's Department of Music, ranked number 5, awards two Ph.D.'s in academic fields (Historical Musicology and Music Theory), but the Yale School of Music, which awards D.M.A.'s in both Performance and Composition, is absent from the rankings. Similarly, New York University's Department of Music (three Ph.D.'s, in Musicology, Ethnomusicology and Composition & Theory) is included, but New York University's Department of Music and Music Education in the School of Education is omitted entirely (two D.A.'s, in Music or Music Therapy; the E.D.D.; and the Ph.D. in Applied Music, Composition, or Music Education). Since the data collection forms gave specific instructions for precisely this situation, the ICs must once again be held responsible for these omissions.(7)

Indeed, the absence of several major music schools in the report seems to suggest that D.M.A.'s were not supposed to have been counted. Three major institutions awarding only the D.M.A. (New England Conservatory of Music, The Juilliard School and the Manhattan School of Music) are omitted from the survey entirely, suggesting that the study's authors truly intended to include only Ph.D. programs and not all doctoral programs after all.(8)

But the flaws in the data are not limited to the numbers alone. Far worse is the fact that comprehensive schools of music are compared as a whole to small-scale academic departments. This "problem of comparability" manifests itself most strikingly in cases where the survey compares small "boutique" programs such as Wesleyan's Ethnomusicology program, which awards one Ph.D. in one specialized field, to sprawling, comprehensive music schools such as Indiana University, which offers doctoral programs in at least seven fields other than Ethnomusicology, including the D.Mus. in four fields (Composition, Conducting, Music Literature and Performance, Music Literature and Pedagogy), and the Ph.D. in three (Musicology, Music Education and Music Theory). Attempting to rank these two programs on a single scale is futile, regardless of how accurately the programs are described — the very area in which Wesleyan exclusively specializes is not even present in the data on the program at Indiana to which it is compared.

For any ranking of "Music Programs" to be at all meaningful, similar programs must be compared. In the field of music, this would undoubtedly mean that doctoral programs in performance should be compared to other performance programs, while academic programs (Musicology, Theory and Ethnomusicology) should be compared to other academic departments. Quite clearly, music schools should not be ranked in comparison to music departments, except in the case of departments which aspire to the same comprehensiveness as full-fledged schools of music. In this light, the reported numbers suggest quite strongly that:

  1. comprehensive schools of music and small academic departments were in fact ranked together on a single scale; and
  2. the Institutional Coordinators had no common understanding of which programs, and which faculty members, were to be reported.

This in turn leads to the third conclusion — that the inconsistencies in the data went unnoticed suggests that:

  3. no one familiar with the field of music examined the collated data before the rankings were published.

The heart of the study was the National Survey of Graduate Faculty in Spring 1993. The data collected in this survey produced the rankings of all the evaluated doctoral programs. However, in all cases, the lists of faculty and the lists of respondents were acquired from the ICs:

Survey forms were sent to a sample of faculty raters chosen from lists provided by ICs in all 41 fields included in the study. Each rater received a questionnaire with approximately 50 programs in their field selected at random from the roster of participating programs. For each institution they were asked to rate, raters were given a faculty roster provided by the ICs. . . .
(from the "Executive Summary," Selected Findings — The National Survey of Graduate Faculty, paragraph 1; p. 2 in the printed report; emphasis added)

The demonstrated inconsistency in the faculty numbers reported by Institutional Coordinators casts doubt on the accuracy of all data collected by Institutional Coordinators — if the numbers of "Doctoral-Research" faculty in Music show inconsistency, it follows that the faculty rosters must have been correspondingly inaccurate as well, since it is highly unlikely that the NRC would extract an incorrect count of those instructors from accurate faculty lists submitted by the ICs. Furthermore, since survey respondents were chosen from the same flawed data source, the respondents themselves must also have been inconsistently (and very possibly inappropriately) chosen. Since the data about Music programs from the "Institutional Coordinators" appear to have been used without testing their accuracy or validity, and since the whole structure of the study is built upon the data inconsistently and inaccurately collected by the ICs:

  the rankings produced by the National Survey of Graduate Faculty must be discarded.

If the base survey data and its rankings must be discarded, any proposed correlations with data drawn from the "Doctorate Records File," Federal Agencies, and from "Associations and Organizations Administrating [sic] Prestigious Awards and Honors" are also invalidated. Therefore:

  none of the report's conclusions concerning the field of music can be allowed to stand.

At the very least, the comparison to the numbers from the CMS Directory raises sufficient doubt that the rankings of music programs must be set aside until such time as the apparent anomalies in the NRC's data are explained.


The report is not limited to the faculty rankings and numbers. Substantial additional data, collected from various other sources, are presented as the basis for comparison with the faculty rankings. These data are not verifiable in the same fashion as the faculty numbers, but significant anomalies appear nonetheless, particularly in the data on students. These anomalies are detailed in Section III.

Since correlations between data from these outside sources and program rankings in the survey are adduced, should these outside data prove inaccurate, the correlations would need to be discarded even if the survey rankings themselves were to prove valid.

III. Numbers of Students, Ph.D.'s Awarded and Assistantships Reported

A. Students and Ph.D.'s Awarded

The numbers of students reported seem to exhibit the same inaccuracies as the numbers of faculty reported. Since these numbers also originated with the ICs, the criticisms registered above would also apply here. It would appear that some programs reported the total number of graduate students, some reported the total number of doctoral candidates, and others limited their report to Ph.D. candidates alone:

Table 2: Discrepancies in Reported Number of Students and Ph.D.'s awarded (from "Institutional Coordinators")

Institution            | 1 Faculty | 2 Students | 3 Reported Ph.D.'s
CUNY Graduate Center   |    38     |    145     |         26
Eastman                |    50     |    119     |        137
University of Michigan |    22     |     21     |         25
University of Illinois |    74     |    288     |         99
Columbia University    |    18     |    107     |         14
SUNY-Stony Brook       |    32     |    153     |         71
U. Texas at Austin     |    21     |    103     |         25
New York University    |    10     |      2     |         13
Indiana University     |     6     |      6     |          8
Temple University      |    38     |     47     |         22

The data in Table 2 were collected from ICs. The numbers for students are for Fall 1992 (the forms were due December 24th, 1992), while the number of reported Ph.D.'s is for the academic years 1987-88 to 1991-92. As was the case with the faculty, these numbers were unambiguously requested on the program description forms (Appendix D, p. 95).(10)

Although it is not possible to test the accuracy of these numbers, it is difficult to imagine that every program reported comparable populations of students. The contrast between the large number reported for Illinois and the small number for Indiana suggests that the same loose and inappropriate definition of "Research-Doctoral" programs obtains in the reporting of student populations as in the reporting of faculty. Once again, it seems likely that the numbers reported by many of the programs include all students, in both master's and doctoral programs, as well as students pursuing the D.M.A., D.Mus. and E.D.D., while others counted only those already admitted to their Ph.D. programs. Since the data collection form requested the numbers of all graduate students who "intended" to get a Ph.D., it seems clear that non-Ph.D. programs should have been excluded, while master's candidates intending to proceed to the Ph.D. should have been counted.(11)

Yet, the reported numbers suggest that this was not uniformly the case. The number of students reported for New York University is clearly a case of misunderstanding, for the actual number of graduate students at the time of the survey was much closer to 30 or 40 than the reported "2." The only possible explanation is that the word "intend" on the form was interpreted to mean "intend to get the Ph.D. in the current academic year," a fanciful interpretation at best.

The ratios of these one-year student population numbers to the numbers of Ph.D.'s awarded also appear suspect. One would expect some reasonable proportion of students to earn the doctorate each year. Numbers such as those reported for CUNY, Illinois, Columbia and U. Texas suggest either that large numbers of students never complete their degrees, or that the reported number of current students includes large numbers of students who will either leave the program upon earning the master's degree, or are seeking some doctoral degree other than the Ph.D. Given the apparent lack of discrimination between the various doctoral degrees seen in the faculty and student numbers, it seems probable that these numbers for reported Ph.D.'s also include some recipients of other doctoral degrees.
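The arithmetic behind this suspicion is simple, and can be sketched as follows. Since the reported Ph.D.'s cover five academic years, dividing by five gives an annual output to set against the one-year student population; the "implied residence" figure is a naive steady-state estimate, used here only to show how odd the proportions are, not as a claim about any actual program.

```python
# Rough plausibility check of the Table 2 proportions: five-year Ph.D.
# totals divided by five, set against the one-year student population.

table2 = {
    "CUNY Graduate Center":   (145, 26),
    "University of Illinois": (288, 99),
    "Columbia University":    (107, 14),
    "U. Texas at Austin":     (103, 25),
}

for school, (students, phds_five_years) in table2.items():
    per_year = phds_five_years / 5
    implied_years = students / per_year    # naive steady-state residence time
    print(f"{school}: {per_year:.1f} Ph.D.'s per year, {students} students, "
          f"implied residence of {implied_years:.0f} years")

# CUNY works out to roughly 28 years per degree and Columbia to roughly 38:
# either completion rates are implausibly low, or the student figures count
# far more than the Ph.D. candidates alone.
```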

Such mixed numbers are not only not comparable, but wholly inconsistent with the stated scope of the report. Therefore, any conclusions based upon them must be discarded.(12)

B. Student Assistantships.

The data presented on student financial support are derived from the "Doctorate Records File" ("DRF"), which is compiled from data collected in the Survey of Earned Doctorates, conducted by the National Academy of Sciences/National Research Council. The survey is conducted annually and, through forms filled out by the recipients of the degree, collects data on sex, race/ethnicity, marital status, citizenship, disabilities, parents' education, dependents, field, schools attended, time spent in completing the Ph.D., financial support, educational debt and post-graduation plans. Supposedly 95% of Ph.D. recipients nationwide respond to the survey (Appendix G of the printed report, p. 144).

Table 3 presents certain notable numbers drawn from the DRF and presented in the NRC's report:

Table 3: Discrepancies in Student Assistantships (*IC Data; **DRF Data)

Institution            | 1 Reported Students* | 2 Reported Ph.D.'s* | 3 RA%** | 4 TA%**
Harvard                |          44          |         29          |    0    |   61
CUNY Graduate Center   |         145          |         26          |    0    |    8
Eastman                |         119          |        137          |    0    |   12
University of Illinois |         288          |         99          |    3    |   32
Columbia University    |         107          |         14          |    0    |   34
SUNY-Stony Brook       |         153          |         71          |    0    |   57
U. Texas at Austin     |         103          |         25          |    0    |   15
New York University    |           2          |         13          |    5    |    8

Several interesting points are raised by the data shown in this table. For example, the assistantship percentages for New York University are completely inconsistent with the numbers of students and Ph.D.'s reported: the 5% figure for Research Assistantships would require either less than one whole student receiving support or more Ph.D. recipients than were reported. One need know nothing about New York University's program to see that one of these two sets of numbers must be inaccurate.(13)
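The arithmetic is easily checked. The sketch below is a feasibility test, not a reconstruction of the DRF's actual method: it asks which whole numbers of supported students, out of the pool of reported Ph.D.'s, would round to the published percentages. The pool sizes of 13 and 14 follow note 13.

```python
# Feasibility test for the New York University percentages (see note 13):
# which whole numbers of recipients out of the reported Ph.D. pool would
# round to the published percentage?

def recipients_matching(percent, pool):
    """Whole numbers of recipients out of `pool` that round to `percent`%."""
    return [n for n in range(pool + 1) if round(100 * n / pool) == percent]

print(recipients_matching(8, 13))   # -> [1]   TA 8% is exactly 1 of 13
print(recipients_matching(8, 14))   # -> []    1 of 14 rounds to 7%, not 8%
print(recipients_matching(5, 13))   # -> []    RA 5% fits no whole student
print(recipients_matching(5, 14))   # -> []

# A 5% figure first becomes possible with a pool of 20 (1 of 20), so with
# only 13 or 14 reported Ph.D.'s it corresponds to less than one whole
# student receiving support.
```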

Further, it is not at all clear from the NRC's printed report whether these percentages from the DRF represent the support of the Ph.D. candidates claimed to be the subject of the NRC study, or if the percentages apply to all graduate or all doctoral programs at the institutions in question. And, finally, no data are reported on fellowships, which generally carry neither research nor teaching duties. Since the study reports correlations between student financial support and time to completion of the degree as well as between degree of dependence on teaching assistantships and "quality" ranking, this omission appears significant.(14)

It is also open to question whether the kind of data collected in this survey is equally valid for the Sciences and the Humanities. The study is silent on whether there has ever been any test of the accuracy of the data collected in this survey, or whether there has ever been an attempt to compare different fields to see if the data collection instrument might be better suited to capturing profiles of graduates in some fields than in others.

The report also fails to mention whether the NRC has considered what the DRF data actually mean. The data are apparently used without questioning whether the categories of data requested in the survey actually fit the way programs in various disciplines really work. The DRF data may very well report what graduates put on their forms, but the lack of a "fellowship" category suggests that the categories may not include all significant sources of student financial support.

The potential inaccuracy of these data together with the failure to account for all methods of student support cast serious doubt on both of these correlations.


Appendix A gives the data on faculty for all 65 rated programs.
Appendix B gives the corresponding data for students.
Additional notes on the printed report.
DWF's ruminations on the meaning(s) of these kinds of rankings
Suggestions for responses to the NRC report.


NOTES:

  1. The full text of the report is now available on the Worldwide Web (http://www.nas.edu/nap/online/researchdoc). The following two Appendices downloaded from the NAP's Web Page were the original basis for the present discussion:


    The "Quality" (HTML) and "Effectiveness" (HTML) data for the Humanities in general were also consulted (Appendix Tables H (Excel 5), pp. 148-57; and I (Excel 5), pp. 198-207).

    Subsequently, I have also examined the full printed report, Research-Doctorate Programs in the United States: Continuity and Change, Marvin L. Goldberger, Brendan A. Maher and Pamela Ebert Flattau, Editors (National Academy Press: Washington, D.C., 1995). All quotations and data from this report and from the Worldwide Web Page are used by permission of the National Academy Press.

  2. There are several caveats about the data tabulated from the Directory. First, these numbers represent a manual count of the printed faculty lists, so that any single data point is subject to normal human error. Second, since each of these lists was provided by the departments involved, there is some variation in the terminology used to report faculty rank. Additionally, the accuracy of the reported fields of specialization is unknown, although I noticed no errors in the faculties known to me. Since the Directory is widely available, these data are subject to verification and correction. However, notwithstanding these potential weaknesses in the compiling of the data from the Directory, the discrepancies with the NRC's reported numbers are too numerous and too large to originate solely from incidental inaccuracies in the numbers collected from the Directory.

  3. Composers on the faculties where Ph.D.'s in Composition are awarded are listed separately only when they are not already included in the narrowly-defined research-doctoral fields by virtue of also teaching Music Theory, Musicology or some other field included in column 2 (see notes to Table 1).

  4. The "General Instructions to the Institutional Coordinator" read in part (Appendix D, p. 93):

    ". . .[The Program Response Form for each field] has been designed to collect descriptive information about a program as well as to generate a roster of faculty members who participate significantly in doctoral education in that field." (emphasis added)

    The criteria for inclusion on the faculty list are also very clearly delineated (Appendix D, p. 96):

    "(a) . . . members of the regular academic faculty (typically holding the rank of assistant, associate, or full professor, and
    (b) regularly teach doctoral students and/or serve on doctoral committees. Include members of the faculty who are currently on leave of absence but meet the above criteria.
    Exclude visiting faculty members or emeritus or adjunct faculty. . . unless they currently participate significantly in doctoral education.
    Members of the faculty who participate significantly in doctoral education in more than one program should be listed on every form that lists a program in which they participate."

    In my tally of the CMS listings, I have actually counted faculty listed as "visiting" on the assumption that they would be teaching graduate seminars.

  5. Temple has no Ph.D. programs in any field except Music Education. Therefore, the number reported for Temple is not for faculty in "Research-Doctoral" programs as it is defined here.

  6. "A strong positive correlation between the number of faculty and its reputational standing has been demonstrated in the past but has not been explored thoroughly. From data collected by the committee, the size-"quality grouping" relationship was found to be the strongest in the Biological Sciences and weakest in the Arts and Humanities. By and large, however, top-rated programs in most fields tended to have a larger number of faculty and more graduate students than lower-rated programs." (From the "Executive Summary," Program Characteristics Associated with "Quality," paragraph 1; p. 3 in the printed report).

  7. The "General Instructions to the Institutional Coordinator" read in part (Appendix D, p. 93):

    ". . .If your institution offers more than one program [in a designated field], please use the extra [form] and provide information separately for that program.
    "For example, if your university offers one doctoral program in statistics and another in biostatistics, include both. Do not, however, consider different specialty areas within the same department to be separate programs."

  8. The printed report makes it clear that Juilliard was found to have met the eligibility requirements (p. 17), but that Juilliard itself declined to participate (p. 28, n. 3). Given that both of the doctoral degrees awarded by Juilliard are D.M.A.'s (in Composition and Performance), it would seem that, in this case, the "Institutional Coordinator" understood the scope of the survey better than the study authors understood Juilliard's program.

  9. A representative of the NRC has argued that it is unreasonable to expect the NRC to have tested the data collected from the ICs, first because officials at the individual institutions should be the best informants about programs at that institution, and, second, because checking directly with the 3,600 programs surveyed would have been a herculean task. Although both of these statements are unquestionably reasonable, in a survey of this nature, which is so dependent on the data collected from so many sources, it seems to me that the designers of the report should have had some desire to confirm that their data collection method produced accurate data. At the very least, there should have been some sampling of a small percentage of the IC reports (5%, for example). Furthermore, a single knowledgeable individual from each of the disciplines surveyed could have "eyeballed" the collated data for any obvious anomalies. Surely someone from the field of Music would immediately have noticed the Indiana University number. If this had been picked up before the survey was prepared, these anomalies could have been addressed at that point, thus insuring the most valid possible survey result.

  10. Question 2 reads:

    "How many Ph.D.'s (or equivalent research-doctorates) have been awarded in the program in each of the last five academic years?"

    Blanks are given for 1987-88, 1988-89, 1989-90, 1990-91 and 1991-92.

    Question 3 on the same form reads:

    "Approximately how many full-time and part-time graduate students enrolled in the program at the present time (Fall 1992) intend to earn doctorates?"

    The numbers are requested for total full-time and part-time students, as well as with female students broken out separately in both of these categories.

    There is no ambiguity in the wording of this form.

  11. The data collection forms are not entirely clear as to the applicability of the distinction between graduate students intending to get a Ph.D. and students already admitted into a Ph.D. program. Undoubtedly this distinction would have more meaning in some programs than in others.

  12. ". . . By and large, however, top-rated programs in most fields tended to have a larger number of faculty and more graduate students than lower-rated programs." (From the "Executive Summary," Program Characteristics Associated with "Quality," paragraph 1, p. 3 in the printed report).

  13. There were actually 14 Ph.D.'s awarded from 1986 to 1992. Apparently the DRF data for the period omits one student, for to have 8% of students receive TA support requires 1 out of 13 students, since 1 of 14 rounds to 7%. This may be due to a difference in the period considered. The 5% figure for RAs is unsupportable, for at least 20 degrees would be necessary to produce this figure.

  14. ". . . Another factor [in causing graduates in the Arts and Humanities to take longer than graduates in other fields to complete their degrees] is thought to be differences in patterns of student support, in which greater dependence on teaching assistantships (TAs) than on research assistantships (RAs) may account for the time it takes a student to earn a degree. From data collected by the committee it was observed that:
    — Graduates from lower-rated programs in many fields tended to utilize TAs as a primary source of student support at a greater rate than graduates of higher-rated programs."

    (From the "Executive Summary," Selected Information About Program Graduates, paragraphs 2-3; p. 5).




Contact David Fenton
©1995-96, David W. Fenton (last modified Wednesday, April 3, 1996)