Apparent Anomalies in Reported Data for the Field of Music in the Report of the National Research Council,"Research-Doctorate Programs in the United States: Continuity and Change"

By David W. Fenton


NOTICE: All material in this document and the documents reached by the links below is ©1995 by the author, David W. Fenton. This text and that of all of these documents may not be re-distributed or re-used in any form without including this notice.

It is my duty to repeat, in all fairness to the authors of the NRC report, that I have not had the opportunity to examine the full report, only the selected data made available by the NRC on the Worldwide Web.(1) The conclusions made here are offered subject to verification that the available data have been correctly interpreted.

I. Comparison Data

This discussion is chiefly concerned with a comparison between the reported numbers of faculty in the report and a count of music faculties made from listings in the College Music Society's Directory of Music Faculties in Colleges and Universities, U.S. and Canada for the period 1992-94 (the NRC's report was for 1993). Table 1 lists both the NRC's numbers (column 1) and those gleaned from the Directory (columns 2-6).(2)

The NRC tabulated its faculty counts from "Institutional Coordinator Response Data" provided to the NRC by representatives of each of the institutions surveyed. The numbers to which the "Institutional Coordinator" data are compared come from a manual count of members of music faculties listed in the Directory using restricted definitions of both "Research-Doctoral Programs" and "faculty" — only faculty members listed in the Directory as holding full-time appointments in the following research-oriented fields were counted:

In order to ameliorate any negative effects of this narrow definition, the fields of Music Education and Composition are listed separately in Table 1 if the programs award the Ph.D. Instructors in doctoral programs not awarding the Ph.D. (including the DMA and DMus) have been classified in "Other" (Table 1, column 5).(3)

In the tabulation of data from the CMS Directory, "faculty" members were defined as those holding the ranks of associate, assistant and full professor. Part-time instructors, adjunct instructors and emeritus professors were excluded on the assumption that they would not be teaching Ph.D. candidates.

Table 1: Significant Discrepancies in Numbers of Faculty Reported

                              1        2      3        4        5         6
                                                Mus.    Compo-   Other    Adjuncts,
Institution               Reported  Actual    Ed.    sition   Faculty    Etc.
University of Rochester      50       23       4        7       56        33
University of Illinois       74       20      10       --       47         6
SUNY-Stony Brook             32       12      --        0        6        17
U. Texas, Austin             21       17       7       *3       46        10
Indiana University            6      16+       7      **5      103        14
U. North Texas               83       18      10        3       51        12
Northwestern                 35       19       7       --       34        47
U. California, San Diego     24       19      --       *1        0         7
Florida State                47       20      13      **1       40         7
Ohio State                   50       14      15       *2       27        26
University of Washington     28       13      --       --       23        47
University of Cincinnati     22       13      --       *1       80        40
U. Maryland, College Park    45       14       4       --       26        11
USC                           7       15      --       --       41        73
Temple                       38       11       6       --       26        63
Wesleyan University          10        5      --       --        1        11

Notes on numbers of Actual Faculty (column 2)

Notes on Composition Faculty (column 4)

For composers not counted in column 2:

Implications of the Comparison

The conclusion to be drawn from this comparison is quite obvious: the numbers in the NRC report are inaccurate because Institutional Coordinators did not use consistent criteria for reporting the numbers of faculty. Even were one to abandon these "narrow" limitations and count all professors involved in the instruction of Ph.D. candidates, the data as reported would still be inaccurate, as can be seen by totaling columns 2 through 5 of Table 1 and comparing the result to column 1. Whereas the NRC report explicitly limited itself to Ph.D.-granting programs, it is clear from the Directory data that some ICs must have reported all faculty, regardless of specialization, and regardless of the proportion of students involved in Ph.D. programs,(4) while others reported only faculty in a limited number of fields and programs. Others apparently included faculty for all doctoral programs, including DMA, DMus and EDD programs.(5)
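The totaling exercise described above can be sketched for a few rows of Table 1. This is only a minimal illustration of the arithmetic: "--" entries are treated as zero, and Indiana's "16+" is taken as 16:

```python
# Summing columns 2-5 of Table 1 (actual research faculty, Music Ed.,
# Composition, Other) gives the broadest plausible faculty count,
# for comparison with column 1 (the figure the ICs reported).
rows = {
    # institution: (reported, actual, mus_ed, composition, other)
    "University of Rochester": (50, 23, 4, 7, 56),
    "Indiana University": (6, 16, 7, 5, 103),
    "Temple": (38, 11, 6, 0, 26),
}
for name, (reported, *parts) in rows.items():
    broadest = sum(parts)
    print(f"{name}: reported {reported}, broadest possible count {broadest}")
```

For Rochester the reported figure (50) falls between the narrow count (23) and the broad count (90); for Indiana the reported figure (6) is below even the narrow count (16); neither definition reproduces the reported numbers.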

Regardless of the question of the intended scope of the survey, the fact of this inconsistency alone wholly invalidates any comparisons based on that data, and consequently invalidates any conclusions about apparent correlations between so-called "quality" and faculty size.(6)

In short, there is no consistent definition of "faculty members in Research-Doctoral programs in Music" that could produce the data on faculty numbers collected by the ICs. That such inconsistent data were collected suggests that:

An individual in the field of music would neither neglect to distinguish between research and performing faculty, nor fail to notice the peculiarities in the numbers reported. For instance, the largest music school in the country, Indiana University, is listed as having only 6 faculty members, while the State University of New York at Stony Brook (a much smaller program) reports 32 faculty members. It is clear that Indiana University reported only professors of Music History and Musicology, while Music Theory & Analysis and the Ethnomusicology program (which is part of the Anthropology Department) were omitted. On the other hand, SUNY-Stony Brook appears to have counted the entire faculty (including some emeritus or part-time faculty). And, even worse, Temple University, which has a Ph.D. program only in Music Education with only six faculty members specializing in the field, reported 38 faculty members. The numbers from these three institutions alone are surely not comparable, for they represent entirely different things.

Additionally, not all the programs in all the ranked institutions were accounted for. While the figures for UCLA encompass the faculty of three different departments within the University (Dept. of Music, Dept. of Ethnomusicology and Systematic Musicology, and Dept. of Musicology), and those for the Eastman School of Music cover an institution with nine different doctoral degrees (DMA in Composition, Conducting, Music Education, Performance, Accompanying; Ph.D. in Composition, Historical Musicology, Music Education, Music Theory), other programs are not so fully reported. Yale University's Department of Music, ranked number 5, awards two Ph.D.'s in academic fields (Historical Musicology and Music Theory), but the Yale School of Music, which awards DMAs in both Performance and Composition, is absent from the rankings. Similarly, New York University's Department of Music (three Ph.D.'s, in Musicology, Ethnomusicology and Composition & Theory) is included, but New York University's Department of Music and Music Education in the School of Education is omitted entirely (two DAs, in Music or Music Therapy; the EDD; and the Ph.D. in Applied Music, Composition, or Music Education).

But the flaws in the data are not limited to the numbers alone. Far worse is the fact that comprehensive schools of music are compared as a whole to small-scale academic departments. This "problem of comparability" manifests itself most strikingly in cases where the survey compares small "boutique" programs such as Wesleyan's Ethnomusicology program, which awards one Ph.D. in one specialized field, to sprawling, comprehensive music schools such as Indiana University, which offers doctoral programs in at least seven fields other than Ethnomusicology, including the DMus in four fields (Composition, Conducting, Music Literature and Performance, Music Literature and Pedagogy), and the Ph.D. in three (Musicology, Music Education and Music Theory). Attempting to rank these two programs on a single scale is futile — regardless of how accurately the programs are described, the very area in which Wesleyan exclusively specializes is not even present in the data on the program at Indiana to which it is compared.

For any ranking of "Music Programs" to be at all meaningful, similar programs must be compared. In the field of music, this would undoubtedly mean that doctoral programs in performance should be compared to other performance programs, while academic programs (Musicology, Theory and Ethnomusicology) should be compared to other academic departments. Quite clearly, music schools should not be ranked in comparison to music departments, except in the case of departments which aspire to the same comprehensiveness as full-fledged schools of music. In this light, the reported numbers suggest quite strongly that:

This in turn leads to the third conclusion — that the inconsistencies in the data went unnoticed suggests that:

The heart of the study was the National Survey of Graduate Faculty in Spring 1993. The data collected in this survey produced the rankings of all the evaluated doctoral programs. However, in all cases, the lists of faculty and the lists of respondents were acquired from the ICs:

Survey forms were sent to a sample of faculty raters chosen from lists provided by ICs in all 41 fields included in the study. Each rater received a questionnaire with approximately 50 programs in their field selected at random from the roster of participating programs. For each institution they were asked to rate, raters were given a faculty roster provided by the ICs. . . .
(from the "Executive Summary," Selected Findings — The National Survey of Graduate Faculty, paragraph 1; emphasis added)

The demonstrated inconsistency in the faculty numbers reported by Institutional Coordinators casts doubt on the accuracy of all data collected by Institutional Coordinators — if the numbers of "Doctoral-Research" faculty in Music were inconsistently reported by the ICs, it follows that the faculty rosters must have been correspondingly inaccurate, as well, since it is highly unlikely that ICs would report an accurate faculty list while simultaneously reporting an incorrect count of those instructors. Furthermore, since survey respondents were chosen from data from the same flawed source, the respondents themselves must also have been inconsistently chosen. Since the data about Music programs from the "Institutional Coordinators" appear to have been used without testing their accuracy or validity, and since the whole structure of the study is built upon the data inconsistently and inaccurately collected by the ICs:

If the base survey data and its rankings must be discarded, any proposed correlations with data drawn from the "Doctorate Records File," Federal Agencies, and from "Associations and Organizations Administrating [sic] Prestigious Awards and Honors" are also invalidated. Therefore:

At the very least, the comparison to the numbers from the CMS Directory raises sufficient doubt that the rankings of music programs must be set aside until such time as the apparent anomalies in the NRC's data are addressed.


The report is not limited to the faculty rankings and numbers. A substantial amount of additional data collected from various other sources is presented as the basis for comparison with the faculty rankings. These data are not verifiable in the same fashion as the faculty numbers, but significant anomalies appear nonetheless, particularly in the data on faculty awards and on student populations. These anomalies are detailed in Sections II and III.

Since correlations between data from these outside sources and program rankings in the survey are adduced, should these outside data prove inaccurate, the correlations would need to be discarded even if the survey rankings themselves were to prove valid.

II. Numbers and Proportions of Faculty Awards

In Appendix Table J, data on "prestigious" awards to faculty are reported. Also reported is a ratio of awards to faculty, expressed as a percentage. In the following cases, the percentage reported does not correspond to the actual percentage:

Table 2: Discrepancies in Reported Ratios of Faculty Awards

                              1         2        3          4
                                             Reported    Actual
  Institution                Faculty   Awards  Ratio (%)  Ratio(%)
University of Chicago        12         9       58         75
University of Pennsylvania   15         7       40         47
Eastman School of Music      50         7       12         14
University of Illinois       74         4        4          5
Stanford                     14         5       29         36
University of Colorado       12         2        8         17

The number of errors is not great, but it is puzzling. It may be that the discrepancy originates in some factor not clear from the downloadable data tables.
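The "actual" figures in column 4 are simple percentages, awards divided by faculty count. The recomputation can be sketched as follows (data transcribed from Table 2; rounding to the nearest whole percent is an assumption about the report's convention):

```python
# Recomputing the award-to-faculty ratios reported in Appendix Table J.
# Each tuple: (faculty count, awards, ratio as reported by the NRC).
programs = [
    ("University of Chicago",       12, 9, 58),
    ("University of Pennsylvania",  15, 7, 40),
    ("Eastman School of Music",     50, 7, 12),
    ("University of Illinois",      74, 4,  4),
    ("Stanford",                    14, 5, 29),
    ("University of Colorado",      12, 2,  8),
]
for name, faculty, awards, reported in programs:
    actual = round(100 * awards / faculty)
    print(f"{name}: reported {reported}%, recomputed {actual}%")
```

In every row the recomputed percentage exceeds the reported one, which is consistent with the hypothesis that some factor outside the downloadable tables enters the report's calculation.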

More significant anomalies appear in the actual numbers of awards reported, which, unlike the data on faculty size, cannot be independently confirmed. However, the numbers suggest the possibility that, once again, data were inconsistently collected, specifically:

  1. What "prestigious" awards were counted?
  2. How long was the period under consideration?

In the absence of Appendix G, which details the data sources (but was not made available for download), the data may be evaluated only for their general sense. If the period covered is the year 1993 only, then it seems remarkable that the faculty of the University of Chicago held prestigious awards at a ratio of three awards to every four faculty members, and Harvard at nearly two for every three. A ten-year period (1983-1993) would make more sense. However, in the case of New York University at least, the stated number of awards (1) would be understated by at least one for 1993 alone, and by a factor from five to ten between 1983 and 1993 for NEH grants, Guggenheims, ACLS and I Tatti Harvard Fellowships alone, not to mention numerous other awards which are prestigious within the field of music, if not well known to the general public. Regardless of whether the period covered is one year or ten, the numbers from institution to institution appear to be irregular.

Once again, inconsistencies probably reflect inadequacies in the data sources upon which the report relies. If so, any correlations between faculty awards and "quality" ranking must be discarded on the grounds of invalid data.(7)

III. Numbers of Students, Ph.D.'s Awarded and Assistantships Reported

A. Students and Ph.D.'s Awarded

The numbers of students reported seem to exhibit the same inaccuracies as the numbers of faculty reported. Since these numbers also originated with the ICs, the criticisms registered above would also apply here. It would appear that some programs reported the total number of graduate students, some reported the total number of doctoral candidates, and others limited their report to Ph.D. candidates alone:

Table 3: Discrepancies in Reported Number of Students and Ph.D.'s awarded

                        1         2        3
                                          Reported
 Institution           Faculty  Students    Ph.D.'s
CUNY Graduate Center    38        145        26
Eastman                 50        119       137
University of Illinois  74        288        99
Columbia University     18        107        14
SUNY-Stony Brook        32        153        71
U. Texas at Austin      21        103        25
New York University     10          2        13
Temple University       38         47        22

The data in Table 3 were collected from ICs. Presumably, the numbers for students are for the year 1993, while the number of reported Ph.D.'s is for the cumulative period since the previous report (1983-1993).

Although it is not possible to test the accuracy of these numbers, it is difficult to imagine that the large numbers of students reported for the departments listed in Table 3 truly represent students in "Research-Doctoral" programs only. In fact, it seems likely that the numbers reported by many of the programs include all students, in both masters and doctoral programs, as well as students in DMA, DMus and EDD programs. Even in cases where only students working towards a degree in "Research-Doctoral" fields were reported, it is probable that graduate students not yet admitted to the Ph.D. program were counted in some instances.

The numbers of Ph.D.'s awarded also appear suspect, particularly in comparison to the numbers of students reported for one year. One would expect some reasonable proportion of students to earn the doctorate each year. Numbers such as those reported for CUNY, Illinois, Columbia and U. Texas suggest either that large numbers of students never complete their degrees, or that the reported number of current students includes large numbers of students who are not working toward the Ph.D.
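The implausibility can be made concrete with a back-of-the-envelope steady-state check. Assuming, as suggested above, that the Ph.D. counts cover the ten-year period 1983-1993, one can compute the implied annual output and the number of years a student population of the reported size would take to turn over:

```python
# Steady-state sketch: students / (Ph.D.'s per year) approximates the
# time a student would need to complete the degree if the reported
# population really consisted only of Ph.D. candidates.
rows = {
    # institution: (reported students, Ph.D.'s awarded 1983-1993)
    "CUNY Graduate Center":   (145, 26),
    "University of Illinois": (288, 99),
    "Columbia University":    (107, 14),
    "U. Texas at Austin":     (103, 25),
}
for name, (students, phds) in rows.items():
    per_year = phds / 10
    implied_years = students / per_year
    print(f"{name}: {per_year:.1f} Ph.D.'s/year, "
          f"implied {implied_years:.0f} years per degree")
```

Implied completion times on the order of 40 to 75 years are absurd, which supports the inference that the reported student populations include many students not working toward the Ph.D.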

Such mixed numbers are not only not comparable, but wholly inconsistent with the stated scope of the report. Therefore, any conclusions based upon them must be discarded.(8)

B. Student Assistantships.

The data presented on student financial support are said to be derived from the "Doctorate Records File." In the absence of Appendix G, which explains the data sources (not available for download), the source of these numbers is not clear. However, certain hypotheses can be constructed based on the data presented in Table 4:

Table 4: Discrepancies in Student Assistantships

                            1         2       3     4
                         Reported  Reported
 Institution             Students  Ph.D.'s   RA%   TA%
Harvard                   44        29      0    61
CUNY Graduate Center     145        26      0     8
Eastman                  119       137      0    12
University of Illinois   288        99      3    32
Columbia University      107        14      0    34
SUNY-Stony Brook         153        71      0    57
U. Texas at Austin       103        25      0    15
New York University        2        13      5     8

Several interesting points are raised by the data shown in this table. The numbers of students and Ph.D.'s reported by New York University are completely inconsistent with the assistantship percentages, because the percentages would require less than one whole student receiving support. Either the numbers of students reported by the ICs are incorrect or the assistantship percentages are inaccurate.
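The NYU inconsistency is a matter of simple arithmetic, sketched here from the Table 4 figures:

```python
# NYU as reported in Table 4: 2 students, 5% on research assistantships,
# 8% on teaching assistantships.
students = 2
ra_pct, ta_pct = 5, 8

ra_students = students * ra_pct / 100  # number of students implied by RA%
ta_students = students * ta_pct / 100  # number of students implied by TA%

# Both values fall below 1: the percentages cannot describe whole
# students drawn from a population of only 2.
print(ra_students, ta_students)
```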

Further, it is not at all clear whether these percentages represent the support of the Ph.D. candidates claimed to be the subject of the NRC study, or if the percentages apply to all graduate or all doctoral programs at the institutions in question. And, finally, no data are reported on fellowships, which generally carry neither research nor teaching duties. Since the study reports correlations between student financial support and time to completion of the degree as well as between degree of dependence on teaching assistantships and "quality" ranking, this omission appears significant.(9)

The potential inaccuracy of these data together with the failure to account for all methods of student support cast serious doubt on both of these correlations.


Appendix A gives the data on faculty for all 65 rated programs.
Appendix B gives the corresponding data for students.



NOTES:

  1. The full text of the report is not available on the Worldwide Web (http://www.nas.edu/nap/online/researchdoc). The following two Appendices are the basis for the present discussion:
    The "Quality" (HTML) and "Effectiveness" (HTML) data for the Humanities in general were also consulted (Appendix Tables H (Excel 5) and I (Excel 5)).

  2. There are several caveats about the data tabulated from the Directory. First, these numbers represent a manual count of the printed faculty lists, so that any single data point is subject to normal human error. Second, since each of these lists was provided by the departments involved, there is some variation in the terminology used to report faculty rank. Additionally, the accuracy of the reported fields of specialization is unknown, although I noticed no errors in the faculties known to me. Since the Directory is widely available, these data are subject to verification and correction. However, notwithstanding these potential weaknesses in the compiling of the data from the Directory, the discrepancies with the NRC's reported numbers are too numerous and too large to originate solely from incidental inaccuracies in the numbers collected from the Directory.

  3. Composers on the faculties where Ph.D.'s in Composition are awarded are listed separately only when they are not already included in the narrowly-defined research-doctoral fields by virtue of also teaching Music Theory, Musicology or some other field included in column 2 (see notes to Table 1).

  4. Temple has no Ph.D. programs in any field except Music Education. Therefore, the number reported for Temple is not for faculty in "Research-Doctoral" programs as it is defined here.

  5. It is interesting to note that at least three large music schools awarding only DMAs at the doctoral level (New England Conservatory of Music, The Juilliard School and the Manhattan School of Music) are omitted from the survey entirely. This suggests that the study's authors truly intended to include only Ph.D. programs and not all doctoral programs after all.

  6. "A strong positive correlation between the number of faculty and its reputational standing has been demonstrated in the past but has not been explored thoroughly. From data collected by the committee, the size-"quality grouping" relationship was found to be the strongest in the Biological Sciences and weakest in the Arts and Humanities. By and large, however, top-rated programs in most fields tended to have a larger number of faculty and more graduate students than lower-rated programs." (From the "Executive Summary," Program Characteristics Associated with "Quality," paragraph 1).

  7. "To explore the relationship between "quality" scores in the Arts and Humanities and "scholarship," the committee compiled a list of awards and honors using a variety of sources. The list was matched against a list of faculty members provided by the ICs. From this analysis, the committee observed that a larger share of faculty in top-rated programs in the Arts and Humanities were likely to have received a prestigious award than faculty in lower-rated programs. This relationship was most evident in the fields of Classics, Comparative Literature, Philosophy, and English Language and Literature, reflecting in part the sources of information that were used to compile this listing of awards and honors. . . ."
    (From the "Executive Summary," Program Characteristics Associated with "Quality," paragraph 5; emphasis added.)

    Note once again the reliance on faculty lists supplied by ICs.

  8. ". . . By and large, however, top-rated programs in most fields tended to have a larger number of faculty and more graduate students than lower-rated programs." (From the "Executive Summary," Program Characteristics Associated with "Quality," paragraph 1.)

  9. ". . . Another factor [in causing graduates in the Arts and Humanities to take longer than graduates in other fields to complete their degrees] is thought to be differences in patterns of student support, in which greater dependence on teaching assistantships (TAs) than on research assistantships (RAs) may account for the time it takes a student to earn a degree. From data collected by the committee it was observed that:
    — Graduates from lower-rated programs in many fields tended to utilize TAs as a primary source of student support at a greater rate than graduates of higher-rated programs."

    (From the "Executive Summary," Selected Information About Program Graduates, paragraphs 2-3.)


©1995-96, David W. Fenton (last modified Wednesday, April 3, 1996)