Digest of Significant Points Gleaned from the Full NRC Report


These are just the notes I typed up while I was reading the report. They are not in any real order, unfortunately. I include them because there are quite a few interesting details here that I haven't mentioned anywhere else.

Enjoy!


General

  1. Apparently, "research-doctoral" simply means "awards a Ph.D." There seems to be nothing in Ch. 2 "Study Design" which explicitly suggests that any other degree is relevant. Likewise, there is no consideration of the possibility that not all Ph.D.'s are awarded in "research" fields.

  2. Juilliard declined to participate in the study (p. 28, n. 3), even though it met the eligibility requirements (p. 17). This despite the fact that Juilliard awards no doctoral degrees except the DMA.

  3. The faculty lists were due in by Dec. 24, 1992 (App. D, p. 90).

  4. The "General Instructions to the Institutional Coordinator" read (Appendix D, p. 93):
    ". . .[The Program Response Form for each field] has been designed to collect descriptive information about a program as well as to generate a roster of faculty members who participate significantly in doctoral education in that field."
    ". . .If your institution offers more than one program [in a designated field], please use the extra [form] and provide information separately for that program."
    "For example, if your university offers one doctoral program in statistics and another in biostatistics, include both. Do not, however, consider different specialty areas within the same department to be separate programs."

  5. The program response form asked for the number of "Ph.D.'s (or equivalent research-doctorates)" which were awarded in each of the years from 1987-88 to 1991-92. The form also specifically asked "Approximately how many full-time and part-time graduate students enrolled in the program at the present time (Fall 1992) intend to earn doctorates?" This is broken down by total and female, and by full- and part-time. (App. D., p. 95) Perhaps NYU erroneously read this as asking for students who would earn the doctorate in the present year? "Intend" seems to be the key word that was misinterpreted.

  6. Faculty member list criteria are very clearly delineated (App. D, p. 96):
    "(a) . . . members of the regular academic faculty (typically holding the rank of assistant, associate, or full professor, and
    (b) regularly teach doctoral students and/or serve on doctoral committees. Include members of the faculty who are currently on leave of absence but meet the above criteria.
    Exclude visiting faculty members or emeritus or adjunct faculty. . . unless they currently participate significantly in doctoral education.
    Members of the faculty who participate significantly in doctoral education in more than one program should be listed on every form that lists a program in which they participate."

  7. The ICs were also asked to check the names of at least 2 of each academic rank (i.e., 6 per program?) who would be available and well-qualified to be survey respondents. (App. D, p. 96)

  8. They apparently checked the faculty list for those holding Ph.D.'s against the Doctorate Records File, because they asked that "those who do not hold a Ph.D. or equivalent research-doctorate from a university in the United States" be explicitly identified. The results of this matching were not released with the faculty lists. (App. D, p. 96).

  9. Appendix Table C-1 shows production of Ph.D.'s in the Arts & Humanities, 1986-90 (p. 81). It shows 2,573 Music Ph.D.'s!! Surely this figure counts all doctoral degrees, not just Ph.D.'s.

  10. The table that shows which schools produce 80%, 90% and 100% of the A & H Ph.D.'s (C-2, p. 81) has an error in the 100% column. It reads "2,57" when it must be something like 2,57X (given that 90% is 2,337).

  11. The respondents to the survey were entirely selected from the faculty lists submitted by the ICs (App. F, p. 115). The number of raters was 4x the number of programs in all fields except the Biological Sciences, and no fewer than 200 for fields with fewer than 50 programs rated. Thus, the minimum number of questionnaires was 200. Each respondent's questionnaire listed 50 randomly selected programs. (App. F, p. 115)
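
    A minimal sketch of this sampling rule, in Python (my reading: the 4x multiplier and the 200-questionnaire floor are the only constraints; the report may have applied further adjustments):

        # Rater-sampling rule per App. F, p. 115, as I understand it.
        def raters_needed(num_programs):
            """Raters sampled for a field: 4 per program, floor of 200."""
            return max(4 * num_programs, 200)

        print(raters_needed(65))  # Music: 65 programs -> 260 raters (cf. #17)
        print(raters_needed(40))  # a small field: the 200 floor applies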

  12. The description of the survey is somewhat ambiguous in respect to who evaluated whom:
    "Owing to the fact that the sample was drawn from faculty lists provided by Institutional Coordinators crossing departmental boundaries, respondents occasionally indicated that they did not consider themselves qualified to rate programs in a c ertain disciplinary area." (App. F, p. 116)

    Does this mean related but different fields? Or different fields entirely? The letter to raters on p. 118 suggests that it was within a field, and that the "interdisciplinary" problem occurred in fields with well-delineated specializations. Music would fall into this class, which must be why #14 was necessary.

  13. Although 200 surveys were sent out, the minimum number of responses was 100. (App. F, p. 116)

  14. Music was one of four fields which required followup because of the interdisciplinary problem alluded to in #12. (App. F, p. 116)

  15. Survey respondents had to report their highest degree, and where, when, and in what field they received it. They also had to indicate their one area of specialization (Physics lists 16 plus "Other"). What were the specializations for Music? (App. F, p. 123).

  16. Respondents considered Scholarly Quality with these instructions:
    "Please consider only the scholarly competence and achievements of the faculty. It is suggested that no more than five programs be designated "distinguished."

    For "Effectiveness of Program in Educating Research Scholars/Scientists:"

    "Please consider the accessibility of the faculty, the curricula, the instructional and research facilities, the quality of graduate students, the performance of graduates, the clarity of stated program objectives and expectations, the appropr iateness of program requirements and timetables, the adequacy of graduate advising and mentorship, the commitment of the program in assuring access and promoting success of students historically underrepresented in graduate education, the quality of assoc iated personnel (post-doctorates, research scientists, et. al.) and other factors that contribute to the effectiveness of the research-doctorate program." [even though these personnel are not listed?] (App. F, p. 124)

    For "Change in Program Quality in the Last Five Years:"

    "Please consider both the scholarly quality of the program faculty and the effectiveness of the program in educating research scholars/scientists. Compare the quality of the program today with its quality five years ago -- not the cha nge in the program's relative standing among other programs in the field." (App. F, p. 124)

    Rating "Effectiveness" according to these criteria is surely a superhuman task — who would know this info. for anything other than the programs of which they have been a part? Respondents were allowed to check "Don't know well enough to evaluate" for both "quality" and effectiveness." In fact the first question for each factor rated familiarity from 1 to 3. (App. F, p. 125) There were basically the five questions: for Q & E, familiarity and ranking; for change, 1 to 3 or "don't know."

  17. In Music, 260 raters were selected to rank 65 programs. The sample was adjusted to 255 who were considered qualified. Of the mere 107 completed questionnaires received, one was completed by someone other than the originally selected rater, giving a usable rater percentage of 42%. There were 129 non-respondents who gave no reason; 19 actually gave reasons. Music had the lowest response rate in the A & H. Only the Biological Sciences tended to the same range as Music (App. F, Table 1, p. 134), and that is an area identified in the study as one in which there were a number of cross-disciplinary problems, leading to large numbers of respondents feeling unqualified to respond. This suggests the possibility that the same may be true of the field of Music.
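
    As a consistency check (my arithmetic, not the report's), these figures line up with the sampling rule sketched under #11:

        # Music figures from App. F, Table 1, checked against the #11 rule.
        programs  = 65
        selected  = 4 * programs   # 260, matching the number of raters selected
        qualified = 255            # after dropping those deemed unqualified
        completed = 107
        print(f"usable rater percentage: {completed / qualified:.0%}")  # -> 42%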

  18. In Music, Asst. Profs. returned surveys at a very low rate, so a large number of Asst. Profs. were chosen randomly for the survey (as opposed to having been nominated by the ICs). This is consistent with all the other disciplines. (App. F, Table 2, p. 136).

Outside Data Sources (Appendix G)

  1. Citation data was not used for the Humanities because it was not relevant. (p. 143).

  2. The only federal research money relevant to the Humanities was from the NEH. The names were matched by computer, with the institutional/field matching done by hand. These data included amount, duration and agency. (p. 144)

  3. The "Doctorate Records File" is created from data collected in the Survey of Earned Doctorates, conducted by the NAS, NRC. It is annual, and covers sex. race/ethnicity, marital status, citizenship, disabilities, dependents, field, schools attende d, time spent in completing the Ph.D., financial support, educational debt, post grad. plans, and parents' education. Supposedly 95% of Ph.D. recipients respond to the survey. (p. 144)

  4. In the Humanities, Honors and Awards were substituted for citations. The organizations included: Nobel, Guggenheim, MacArthur, Humboldt, Fulbright, NEH, ACLS, Am. Antiquarian Soc., Huntington Library, Newberry Library, Am. School of Classical Studies in Athens, Folger Library Post-Docs, Residency at the Center for Advance[d?] Study in Behavioral Sciences, Res. at the Inst. for Advance[d?] Study in the Visual Arts, Res. at the Getty Center, Woodrow Wilson Scholars, Am. Academy of Arts and Sciences, Am. Philosophical Society and American Academy at Rome [sic]. (p. 145)

Keys to Downloaded Data Tables

Table J:

Table P:

Table Q:
Confidence Intervals — programs whose ranges do not overlap are considered (at a .05 level of significance) to be essentially different. The wide range for Indiana University compared to the others should have set off warning bells. It seems that in most programs, there were more respondents for the better programs, with a correspondingly narrower range for the confidence interval. Music tends not to vary as much as other disciplines from high- to low-rated programs. Also, the median for only 4 programs falls below 2, and none below 0. This is similar to Anthropology, with a very different distribution at the top end, and relatively narrower confidence ranges than Music. It's pretty similar to Art History, given only 38 rated programs. Rather different from English, but there are twice as many programs. Spanish & Portuguese programs are very strange, with only one above 4 (non-overlapping with others) and only two below 2.
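
The overlap test can be made concrete with a sketch. I am assuming ordinary normal-approximation 95% confidence intervals on a program's mean rating (the report's exact interval construction is not given here, and the numbers below are invented):

        import math

        def ci95(mean, sd, n):
            # Normal-approximation 95% confidence interval for a mean
            # rating based on n respondents.
            half = 1.96 * sd / math.sqrt(n)
            return (mean - half, mean + half)

        def intervals_overlap(a, b):
            return a[0] <= b[1] and b[0] <= a[1]

        # Invented programs: more respondents means a narrower interval,
        # the pattern noted above for the better-rated programs.
        strong = ci95(4.2, 0.8, 90)
        weak   = ci95(2.1, 0.9, 35)
        if not intervals_overlap(strong, weak):
            print("essentially different at the .05 level")
        else:
            print("no distinction can be claimed")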

Interesting Conclusions from Table R, Changes Since 1982

  1. The top three quarters increased faculty size, but the top quarter the least and the second tier the most. Bottom-quarter faculties decreased in size by more than the top quarter increased.

  2. The number of Ph.D.'s produced increased in the top and third quarters, while decreasing sharply in the second quarter (even accounting for the relatively larger number of Ph.D.'s produced in the second-quarter programs).

  3. Median Years to Degree increased in all quarters, least in the middle two quarters, most in the bottom quarter.

  4. Quality ratings rose in all quarters, but rose more in the lower-rated quarters.

Selected Findings (Ch. 2)

  1. The two areas they single out for the strongest correlation with "quality" rankings are the two areas in which the data for the field of Music are most flawed: size (faculty, students, graduates) and faculty awards. (p. 34)

  2. They identify modern European language programs as the ones tending to have the fewest faculty (10 to 15, generally; p. 34). Had they correctly classified Music programs (counting only the true research programs), they would have found something similar, I think. In fact, here are the average numbers of faculty reported:
           Spanish/Portuguese       13.35
           French                   12.98
           German                   11.26
           Music                    22.36
    

    For Music, my calculation of the average of their reported faculty sizes comes out to 21.84 (which still rounds to 22). However, for the programs where I re-counted the faculty, the average is 14.62. If you put back all the schools I did not investigate, the average comes to 16.71.
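
    Structurally, all three figures are the same simple mean; they differ only in which counts are averaged. A sketch with invented counts (only the shape of the calculation is mine; none of these numbers are real):

        def mean(values):
            return sum(values) / len(values)

        nrc_reported = {"Prog A": 30, "Prog B": 25, "Prog C": 12, "Prog D": 20}
        my_recounts  = {"Prog A": 14, "Prog B": 13}  # programs I re-counted

        print(mean(nrc_reported.values()))           # cf. the 21.84 figure
        print(mean(my_recounts.values()))            # cf. the 14.62 figure
        blended = dict(nrc_reported, **my_recounts)  # recounts replace reported
        print(mean(blended.values()))                # cf. the 16.71 figure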

  3. Music, Philosophy and Spanish/Portuguese are the only Humanities programs where the top quarter does not have the largest average faculty size; no other fields in the study show this pattern. There also appear to be oddities in the ratio of faculty members to students in these other programs (the same as in Music). (p. 35)

  4. Regarding Grants & Awards, they admit that for the Humanities the data are very weak (p. 37), and say more work is needed before "conclusions can be drawn about faculty activity along this dimension." (p. 37)

  5. German & Spanish/Portuguese are the only Humanities where the top-rated programs had fewer students than those in the second quarter (Table 3-4, p. 38). Only one other program (Biological Sciences, Molecular and General Genetics) also exhibits th is pattern. It would be interesting to see how Music would look if the masking effect of incorrect reporting were removed.

  6. A point is made of the fact that Faculty awards cluster disproportionately in programs in the top quarter (p. 41). Music is singled out as being one of the fields in which this is most plainly the case — in fact, Music has a greater drop-off from the first quarter to the lower quarters than any other field in the Humanities. But, they also suggest that this pattern may be indicative of "limitations of the data base. . . ." (p. 41)

  7. The "change" measures are completely baffling, because there is actually no general correspondence between the respondents' opinions of the change in quality and the actual changes in rank. (p. 41-2). Maybe there shouldn't be, since the rankings are relative, not absolute.

  8. Patterns of change over time (App. R, described p. 44) can have no validity if the numbers being compared are inaccurate.

  9. Music was one of the fields identified as having experienced the most growth in average number of faculty (p. 44). This is wholly counter to what we know to be the case, is it not? The CMS Directories run like this (recomputed in the sketch at the end of this item):
         Year          Faculty    Institutions   Average
         1980-82        23,508       1,398       16.815450
         1982-84        24,796       1,536       16.143229
         1984-86        25,392       1,538       16.509752
         1986-88        26,108       1,541       16.942245
         1988-90        26,666       1,545       17.259546
         1990-92        29,663       1,745       16.998853
         1992-94        30,582       1,808       16.914823
         1993-94        31,138       1,783       17.463825
         1994-95        32,124       1,830       17.554098
    

    (Apparently, the CMS Directory went from biennial to annual in 1993.)

    It would be interesting to know if any of the faculties that experienced the greatest increases were those which appear to have reported faculty members outside the scope of the report.
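
    The Average column is just Faculty divided by Institutions; recomputing it from the table above reproduces the printed figures to within last-digit rounding:

        # CMS Directory totals as transcribed in the table above.
        cms = [
            ("1980-82", 23508, 1398), ("1982-84", 24796, 1536),
            ("1984-86", 25392, 1538), ("1986-88", 26108, 1541),
            ("1988-90", 26666, 1545), ("1990-92", 29663, 1745),
            ("1992-94", 30582, 1808), ("1993-94", 31138, 1783),
            ("1994-95", 32124, 1830),
        ]
        for years, faculty, institutions in cms:
            print(f"{years}  {faculty / institutions:.6f}")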

  10. Interesting: overall, in A&H, the top programs produced fewer degrees in the period compared to the 1982 report. Linguistics is pointed out as one field where "degree production" increased, but it also increased for some quarters in Art History (lower two), Music (top and third) and Spanish and Portuguese (bottom two) (Figure 3-6, p. 45). Why is Music so anomalous on this? Doesn't it suggest that the numbers of Ph.D. recipients may be wrong? Or that a number of programs in the top quarter for Music would be in a different quarter if they had been ranked on the basis of accurate faculty lists?

  11. The study finds that "Degree recipients in the Sciences and Engineering are more likely to have received research assistantship (RA) support. . . than students in the Arts and Humanities. . . ." (p. 51). They then go on to describe differences in how "research assistantships" relate to doctoral research. However, they clearly do not understand the role played in the Humanities by full-support fellowships, which require neither teaching nor research duties and allow the student to concentrate on her course-work and research. Apparently, the DRF data do not include any category for this.

    They also consider the TA/RA juxtaposition to be one between teaching and no teaching. This is false in the Humanities, I believe. Nor do they seem to delve into what the DRF data actually mean. How well do the requested categories of data fit the way programs in various disciplines actually work? The DRF data may very well report what graduates put on their forms, but what if the categories don't fit a graduate's situation? What do the reported numbers mean then?

