These are just the notes I typed up while I was reading the report. They are not in any real order, unfortunately. I include them because there are quite a few interesting details here that I haven't mentioned anywhere else.
Enjoy!
". . .[The Program Response Form for each field] has been designed to collect descriptive information about a program as well as to generate a roster of faculty members who participate significantly in doctoral education in that field."". . .If your institution offers more than one program [in a designated field], please use the extra [form] and provide information separately for that program.""For example, if your university offers one doctoral program in statistics and another in biostatistics, include both. Do not, however, consider different specialty areas within the same department to be separate programs."- The program response form asked for the number of "Ph.D.'s (or equivalent research-doctorates)" which were awarded in each of the years from 1987-88 to 1991-92. The form also specifically asked "Approximately how many full-time and part-time grad uate students enrolled in the program at the present time (Fall 1992) intend to earn doctorates?" This is broken down as total and females, and full- and part-time. (App. D., p. 95) Maybe this was erroneously interpreted at NYU to mean students who would earn it in the present year? Apparently "intend" is the key word that must have been misinterpreted.
- Faculty member list criteria are very clearly delineated (App. D, p. 96):
"(a) . . . members of the regular academic faculty (typically holding the rank of assistant, associate, or full professor, and(b) regularly teach doctoral students and/or serve on doctoral committees. Include members of the faculty who are currently on leave of absence but meet the above criteria.Exclude visiting faculty members or emeritus or adjunct faculty. . . unless they currently participate significantly in doctoral education.Members of the faculty who participate significantly in doctoral education in more than one program should be listed on every form that lists a program in which they participate."- The ICs were also asked to check the names of at least 2 of each academic rank (i.e., 6 each?) who would be available and well-qualified to be survey respondents. (App. D, p. 96)
- They apparently checked the faculty list for those holding Ph.D.'s against the Doctorate Records File, because they asked that "those who do not hold a Ph.D. or equivalent research-doctorate from a university in the United States" be explicitly identified. The results of this matching were not released with the faculty lists. (App. D, p. 96)
- Appendix Table C-1 shows production of Ph.D.'s in the Arts & Humanities, 1986-90 (p. 81). It shows 2,573 Music Ph.D.'s!! Surely this figure counts all doctoral degrees, not research Ph.D.'s alone.
- The table that shows which schools produce 80%, 90%, and 100% of the A & H Ph.D.'s (C-2, p. 81) has an error in the 100% column. It reads "2,57" when it must be something like 2,57X (given that the 90% figure is 2,337).
- The respondents to the survey were selected entirely from the faculty lists submitted by the ICs (App. F, p. 115). The number of raters was four times the number of programs in all fields except the Biological Sciences, and no fewer than 200 in fields with fewer than 50 rated programs; thus the minimum number of questionnaires per field was 200. Each respondent's questionnaire listed 50 randomly selected programs. (App. F, p. 115)
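Taken literally, that rule reproduces the rater counts that turn up later in these notes. A minimal sketch in Python, my own restatement of the rule rather than anything printed in the report:

    # The sampling rule as I read App. F, p. 115: four raters per rated
    # program, with a floor of 200 raters per field (Biological Sciences
    # excepted). My restatement, not code from the report.

    def raters_needed(num_programs: int, per_program: int = 4, floor: int = 200) -> int:
        """Number of raters sampled for a field."""
        return max(per_program * num_programs, floor)

    # Music: 65 rated programs -> max(260, 200) = 260, the figure reported below.
    print(raters_needed(65))   # 260
    # Art History: 38 rated programs -> the 200-rater floor kicks in.
    print(raters_needed(38))   # 200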
- The description of the survey is somewhat ambiguous with respect to who evaluated whom:
"Owing to the fact that the sample was drawn from faculty lists provided by Institutional Coordinators crossing departmental boundaries, respondents occasionally indicated that they did not consider themselves qualified to rate programs in a c ertain disciplinary area." (App. F, p. 116)Does this mean related but different fields? Or different fields entirely? The letter to raters on p. 118 suggests that it was within a field, and that the "interdisciplinary" problem occurred in fields with well-delineated specializations. Music would fall into this class, which must be why #14 was necessary.
- Although 200 surveys were sent out, the minimum number of responses required was only 100. (App. F, p. 116)
- Music was one of four fields that required followup because of the interdisciplinary problem alluded to above. (App. F, p. 116)
- Survey respondents had to report their highest degree, where and when they received it, and in what field. They also had to indicate their one area of specialization (Physics lists 16, plus "Other"). What were the specializations for Music? (App. F, p. 123)
- For "Scholarly Quality of Program Faculty," respondents were given these instructions:
"Please consider only the scholarly competence and achievements of the faculty. It is suggested that no more than five programs be designated "distinguished."For "Effectiveness of Program in Educating Research Scholars/Scientists:"
"Please consider the accessibility of the faculty, the curricula, the instructional and research facilities, the quality of graduate students, the performance of graduates, the clarity of stated program objectives and expectations, the appropr iateness of program requirements and timetables, the adequacy of graduate advising and mentorship, the commitment of the program in assuring access and promoting success of students historically underrepresented in graduate education, the quality of assoc iated personnel (post-doctorates, research scientists, et. al.) and other factors that contribute to the effectiveness of the research-doctorate program." [even though these personnel are not listed?] (App. F, p. 124)For "Change in Program Quality in the Last Five Years:"
"Please consider both the scholarly quality of the program faculty and the effectiveness of the program in educating research scholars/scientists. Compare the quality of the program today with its quality five years ago -- not the cha nge in the program's relative standing among other programs in the field." (App. F, p. 124)Rating "Effectiveness" according to these criteria is surely a superhuman task who would know this info. for anything other than the programs of which they have been a part? Respondents were allowed to check "Don't know well enough to evaluate" for both "quality" and effectiveness." In fact the first question for each factor rated familiarity from 1 to 3. (App. F, p. 125) There were basically the five questions: for Q & E, familiarity and ranking; for change, 1 to 3 or "don't know."
- In Music, 260 raters were selected to rank 65 programs. The sample was adjusted to 255 who were considered qualified. Of the mere 107 completed questionnaires received, one was completed by someone other than the originally selected rater, giving a usable-rater percentage of 42%. There were 129 non-respondents who gave no reason; 19 actually gave reasons. Music had the lowest response rate in the Arts & Humanities. Only the Biological Sciences tended to the same range as Music (App. F, Table 1, p. 134), and that is an area identified in the study as one in which there were a number of cross-disciplinary problems, leading to large numbers of respondents feeling unqualified to respond. This suggests the possibility that the same may be true of the field of Music.
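These figures are internally consistent, as a quick check shows (my arithmetic, not the report's):

    # Checking the Music response figures from App. F, Table 1, p. 134.
    qualified = 255                     # sample after removing unqualified raters
    completed = 107                     # completed questionnaires received
    substituted = 1                     # completed by someone other than the selected rater
    no_reason, gave_reason = 129, 19    # non-respondents

    # The sample decomposes exactly: 107 + 129 + 19 = 255.
    assert completed + no_reason + gave_reason == qualified

    usable = completed - substituted    # 106 usable raters
    print(f"{usable / qualified:.1%}")  # 41.6%, i.e. the 42% quoted above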
- In Music, assistant professors returned surveys at a very low rate; a large number of the assistant professors had been chosen randomly for the survey (as opposed to having been nominated by the ICs). This is consistent with all the other disciplines. (App. F, Table 2, p. 136)
Table P: [not reproduced in this copy]
Table Q: [not reproduced in this copy]
Confidence intervals: programs whose ranges do not overlap are considered (at a .05 level of significance) to be essentially different. The wide range for Indiana University compared to the others should have set off warning bells. It seems that, in most fields, there were more respondents for the better programs, with a correspondingly narrower confidence interval. Music tends not to vary as much as other disciplines from high- to low-rated programs. Also, the median for only four programs falls below 2, and none falls below 0. This is similar to Anthropology, which has a very different distribution at the top end and relatively narrower confidence ranges than Music. Music is pretty similar to Art History, given that field's mere 38 rated programs. It is rather different from English, but English has twice as many programs. The Spanish & Portuguese programs are very strange, with only one above 4 (non-overlapping with the others) and only two below 2.
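The non-overlap test the report uses is easy to state in code. A minimal sketch, with made-up ratings (not report data) just to show why a program with few respondents, like Indiana, gets a wide interval that is hard to distinguish from anything:

    # Compute a 95% confidence interval for each program's mean rating and
    # call two programs "essentially different" only when the intervals do
    # not overlap. Illustrative data only; my reconstruction of the test.
    import math

    def mean_ci(ratings, z=1.96):
        """95% CI for the mean, using a normal approximation."""
        n = len(ratings)
        mean = sum(ratings) / n
        var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
        half = z * math.sqrt(var / n)
        return mean - half, mean + half

    def essentially_different(a, b):
        lo_a, hi_a = mean_ci(a)
        lo_b, hi_b = mean_ci(b)
        return hi_a < lo_b or hi_b < lo_a   # intervals do not overlap

    # Fewer raters -> wider interval, the warning sign mentioned above.
    many = [4.0, 4.2, 3.8, 4.1, 3.9, 4.0, 4.3, 3.7, 4.1, 4.0]
    few = [3.0, 4.5, 2.5]
    print(mean_ci(many))                    # narrow interval
    print(mean_ci(few))                     # wide interval
    print(essentially_different(many, few)) # False: can't tell them apart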
Average reported faculty size per program, by field:

    Spanish/Portuguese   13.35
    French               12.98
    German               11.26
    Music                22.36
For Music, my calculation of the average of their reported faculty sizes comes out to 21.84 (which still rounds the same way). However, for the programs where I re-counted the faculty, the average is 14.62. If you put back all the schools that I did not investigate, the average comes to 16.71.
Year      Faculty   Institutions   Average
1980-82   23,508    1,398          16.815450
1982-84   24,796    1,536          16.143229
1984-86   25,392    1,538          16.509752
1986-88   26,108    1,541          16.942245
1988-90   26,666    1,545          17.259546
1990-92   29,663    1,745          16.998853
1992-94   30,582    1,808          16.914823
1993-94   31,138    1,783          17.463825
1994-95   32,124    1,830          17.554098
(Apparently, the CMS directory went from biennial to annual in 1993.)
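The Average column is simply Faculty divided by Institutions; recomputing it from the transcribed figures reproduces the table:

    # Recomputing the CMS directory averages (faculty per institution)
    # from the figures transcribed above.
    cms = [
        ("1980-82", 23508, 1398),
        ("1982-84", 24796, 1536),
        ("1984-86", 25392, 1538),
        ("1986-88", 26108, 1541),
        ("1988-90", 26666, 1545),
        ("1990-92", 29663, 1745),
        ("1992-94", 30582, 1808),
        ("1993-94", 31138, 1783),
        ("1994-95", 32124, 1830),
    ]
    for years, faculty, institutions in cms:
        print(f"{years}  {faculty / institutions:.6f}")
    # The table appears to truncate rather than round, so the last printed
    # digit occasionally differs by one (e.g. 16.815451 vs. 16.815450).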
It would be interesting to know whether any of the faculties that experienced the greatest increases were those that appear to have reported faculty members outside the scope of the report.
They also consider the TA/RA juxtaposition to be one between teaching and not teaching. This is false in the Humanities, I believe. They also don't seem to delve into what the DRF data actually mean. How well do the requested categories of data fit the way programs in various disciplines actually work? The DRF data may well report what graduates put on their forms, but what if the categories don't fit a graduate's situation? What do the reported numbers mean then?