
Now I Know My ABC’s: Demythologizing Grade Inflation

Tomorrow's Teaching and Learning

Message Number: 
1708


 

Folks:

The posting below, a bit longer than most, looks at some myths about grade inflation. While written 20 years ago, it is still very much relevant today. It is from Chapter 15, “Now I Know My ABC’s: Demythologizing Grade Inflation,” by Jeremy Freese, Julie E. Artis, and Brian Powell, in the book The Social Worlds of Higher Education: Handbook for Teaching in a New Century, edited by Bernice A. Pescosolido (Indiana University) and Ronald Aminzade (University of Minnesota). Published by Pine Forge Press, A Sage Publications Company, 2455 Teller Road, Thousand Oaks, California 91320. www.sagepublishing.com Copyright © 1999 by Pine Forge Press. All rights reserved. Reprinted with permission.

Regards,

 

Rick Reis

reis@stanford.edu

UP NEXT: Ready to Go Expat?

 

 

Tomorrow’s Teaching and Learning

----------4543 words ----------

Now I Know My ABC’s: Demythologizing Grade Inflation

When traditional grading standards are relaxed, students learn that success can be accomplished without work, thereby undermining the purpose of education and denying themselves the education they purport to seek…. But the consequences of grade inflation are even more severe. Tacitly condoning grade inflation sends the message that it is all right to manipulate a system for personal gain or nonacademic interests. Education was supposed to be above such corruption by valuing ethics and responsibility, challenge and diversity, and above all else, justice and knowledge…. This is not the legacy Aristotle or Thucydides had in mind.

-Suzanne E. Fry,

San Diego Union-Tribune (August 6, 1995)

In a culture where artificial measures of class performance have come to matter more than learning itself, no one should be surprised that college professors are rolling out A students at the rate rabbits roll out bunnies…. The problem isn’t that college students are too smart, it’s that our culture is too dumb. Old notions of firm standards by which to measure accomplishment and failure – standards the implicit assumption of which was that few achieve and many fail – have been abandoned in favor of the “I’m okay, you’re okay” syndrome, the essence of which is that high grades, measureless popularity, and “self-esteem” are part of the American birthright, to be bestowed by entitlement rather than earned by achievement.

-Jonathan Yardley,

The Washington Post (June 16, 1997)

Recently, there has been a flurry of stories in the media decrying “grade inflation” as the newest crisis in higher education. As exemplified by the opening quotes, the prevailing belief is that professors are much more generous in dispensing A’s now than they were in the past and that these rising grades reflect an academy that has forgotten its ideals and adopted a “consumer-driven” ethos where student satisfaction is sought not by providing a rewarding education but by providing the most vacuous of educational rewards – the easy A.  Where professors once stood accused of being aloof and indifferent to students’ needs, professors now are accused of pandering to students by giving them the grades they want but do not deserve. Some have attributed grade inflation to the increasing number of professors who are children of the liberal 1960s and 1970s and who now lack the hard edge needed to provide students with a “bell-shaped” grade distribution.  More insidiously, some have tied grade inflation to the increasing importance of student evaluations and recommendations in determining faculty promotions and pay raises. In this scenario, faculty seek to ingratiate themselves with students and “buy” good evaluations with high grades.

Whatever the cause, individual universities have responded to these reports by taking a variety of measures to show that they are fighting grade inflation. Stanford University has revived the F. Following Dartmouth College’s lead, Duke University has proposed coupling its grades with an “achievement index” that takes into account the overall distribution of grades in the students’ classes. Among the recommendations to fight grade inflation at Indiana University are grade indexing, greater scrutiny of professors’ grades in tenure and promotion decisions, required discussions of the “rigor of department’s grading patterns” at annual budget conferences between the dean and departmental chairs, and even contests sponsored by the university teaching center offering a free lunch to the instructor who submitted the best tip for keeping grades down. [Note 1]

Whether conducted in the media, faculty meetings, or the classroom, debates about grade inflation usually focus either on who is to blame for the problem or how best to solve it. Less attention is devoted to discerning how widespread grade inflation is or even to verifying that grade inflation is widespread. In his classic Invitation to Sociology, Berger (1963) describes the “debunking motif” of the discipline, where sociologists often are lonely skeptics who demand evidence for what everyone else takes for granted. As teachers, we feel a professional obligation to participate in the ongoing discussion about grade inflation; however, as sociologists, we feel that the first step in this discussion should be determining what the facts about grade inflation are. Here, we cannot rely on watercooler assessments of how things are or recollections of how they used to be but instead must consider the extant data on average grades.

We believe that an inspection of these data shows that most of the claims about grade inflation in higher education are considerably overstated. There is evidence of grade inflation in high schools, among undergraduates at “elite” universities, and among undergraduates generally back in the Vietnam War era. For most colleges and universities, however, evidence that grades have risen in the past 20 years is scant. Moreover, any increase in average grades can likely be accounted for by the demographic and institutional changes in higher education over the past two decades. Indeed, we argue here that the entire debate about grade inflation has been confused by the propagation of a series of myths that contradict the available evidence. By debunking these myths, we seek to inject facts into an issue that has been marred by posturing and nostalgia and to contribute toward a more reasoned dialogue on grading practices in the academy.

Myth 1: Grade Inflation Is an Increasingly Significant Problem at Most Colleges and Universities

To examine the question of whether undergraduate grades are increasingly inflated, we must first clarify what is meant by grade inflation.  Grade inflation is a separate issue from whether the requirements of undergraduate classes have become easier or whether the expectations of professors have changed.  Instead, the question is simply whether the grades given by universities are higher now than they were in the past.  If class requirements or professors’ expectations have slackened without a concurrent change in the distribution of grades, then one would think this is a matter better addressed by recommending that professors assign more extensive or difficult work rather than by revamping universities’ grading policies.

When our home university, Indiana University, first began alerting faculty to the supposed problem of grade inflation, the senior author (Powell) dutifully checked whether or not he was part of the problem by examining his grade distributions from the first sociology class he had ever taught (in 1980) through the first class he had taught at Indiana (in 1985) and all classes since.  He found that his grades had remained essentially the same over this period.  His reaction to this was ambivalent.  He was pleased that he had been consistent over time, as this meant that the grade inflation that was supposedly rampant within the university could not be his fault.  At the same time, he also had thought that he had become a better teacher since the first time he taught and that students were getting more out of his classes, and so he would have thought that students should be getting higher grades in his class now than when he first started teaching.  One of our colleagues has remarked that his goal is to have everyone receive an A (although this never has come close to being realized) because for everyone to earn an A would imply that he had imparted mastery of the material to all students.

Satisfied that he was not to blame for grade inflation, the senior author became suspicious when he discussed the problem with his colleagues in the sociology department because all of those who had looked back at their grades also reported little change.  Obviously, if grade inflation was indeed rampant at Indiana, then someone had to be responsible for it.  We checked the department’s records and found that our colleagues had been telling the truth; the average grades given within our department have remained consistently between a 2.7 and 2.9 over the past two decades.

Perhaps instructors within the sociology department were impervious to the pressures that had led to grade inflation elsewhere in the university. Yet, we found that average grades throughout the university have been remarkably consistent over the past two decades.  In the fall semester of the 1973-1974 academic year, undergraduates earned an average grade of 2.86.  In the fall semester of the 1996-1997 year, the average grade was 2.90 – hardly a difference worth considering as a crisis of standards or pedagogical integrity. [Note 2]

Media reports of grade inflation have focused on elite schools, for example, Harvard, Stanford, and Duke universities. These reports are not inaccurate; there is clear evidence of an increase in the average grades at these schools, although, as we discuss later, possible explanations of this change extend far beyond the frequent lament that professors are lowering their grading standards. The evidence of grade inflation at public universities and “non-elite” private colleges, however, is much more suspect. The best evidence to confirm or disconfirm claims about grade inflation at the national level would seem to come from the large surveys by the National Center for Education Statistics that include college transcripts as part of their data. Using these data, Adelman (1995) finds that over the past two decades, the mean grade point average (GPA) for all college students who earned bachelor’s degrees actually declined from 2.98 to 2.89. In short, when the nation’s undergraduates are considered as a whole, there is not only no such thing as grade inflation but quite possibly a slight grade deflation. The absence of rising grades is not a feature of just one department or one university but rather of undergraduate life in general. The most prominent exception is at the nation’s most elite schools, which have received the majority of attention from the media on this issue but still house only a small minority of America’s college students.

Even in schools where there is evidence that grades have increased, however slightly, we have no reason to believe that this increase is attributable to the actions of individual professors grading too generously, despite the claims of a professor who wrote in The Washington Post, “The younger members of the [faculty] have never even known what a C was all about – let alone what a Gentleman’s C was” (Twitchell 1997: C23). As sociologists, we try to teach students how to distinguish between individualistic and social structural explanations of behavior. And yet, discussions of grade inflation often deteriorate into individualistic attributions of certain professors giving higher grades. Indeed, an apparent assumption of grade inflation is that nothing has changed structurally within certain universities or within higher education that can explain increasing average grades. On closer inspection, we suggest that claims about grade inflation and its origins must take into account a number of demographic and institutional factors.

Changing Gender, Racial, and Age Composition of Students

Over the past several decades, virtually every college and university has experienced increases in the number of female, Asian American, and “nontraditional” students. Evidence shows that female students work harder, have a greater commitment to academic performance, and do better in college than their male counterparts.  Indeed, because the sharpest rise in coeducation occurred in the late 1960s and early 1970s, the undisputed grade changes in this era may have been largely due to the rise of coeducation. [Note 3] Similarly, for a variety of reasons, Asian Americans earn higher average standardized test scores (especially in mathematics and the sciences) and high school GPAs on matriculating in college and tend to do better than members of other racial/ethnic groups while in college.  At Indiana University, the number of Asian Americans has increased sixfold over the past 20 years.  Among nontraditional students, a large number are returning women, who, as a group, have been highly successful in the classroom.  Any one of these compositional changes could explain fluctuations in a school’s overall average grades; all should be taken into account in any examination of why some schools’ grades might be changing.

Improving Student Credentials

For elite institutions at least, today’s incoming freshmen have better credentials (e.g., standardized test scores, high school grades) than their counterparts of just 20 or 30 years ago. Competition for admission to elite universities is extremely keen, more so than in the past. Although we do not argue that incoming freshmen at all colleges and universities have better credentials than they did 20 years ago, those schools with the greatest increases in grades are precisely those with the most dramatic rises in the quality of their incoming students.

Changes in the Classroom

Since the 1970s, most colleges and universities have made extensive changes in curricula. Schools now require fewer courses, especially in the sciences, mathematics, and foreign languages. Schools have given more latitude to students in course selection and have encouraged independent studies, tutorials, and internships. A byproduct of these changes may have been slight increases in grades. Students historically have performed better in their elective courses than in required ones, especially science and mathematics. Indeed, some have argued that humanities and social science departments are primarily responsible for grade inflation because the average grades in these departments generally are higher than those in natural sciences and mathematics. This argument, however, ignores the fact that these disciplinary differences occurred before the alleged rise in grades. Rather, the minor increments in grades might be a function of changes in the distribution of courses, and not of shifts in professors’ grading strategies.

The Rise in Professional Schools

As students and parents increasingly seem to equate a college education with occupational training, more schools have expanded their professional programs.  More students at Indiana University, for example, are graduating with professional degrees in business, public administration, education, nursing, health sciences, and recreation, while fewer students are earning degrees in the liberal arts. This shift offers yet another explanation for the minor increases in overall grades. Grades typically are higher in professional schools than in liberal arts programs.  In 1995-1996, the average course grade at Indiana’s College of Arts and Sciences was 2.81, as compared to 2.90 in the business school, 3.21 in the optometry school, 3.36 in the education school, and 3.43 in social work.  Differences in grades among these schools have not varied appreciably over time, but the changing distribution of students within these schools has, accounting for an overall slight increment in grades.
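
To see how this compositional shift alone can nudge the campus-wide mean, consider a stylized two-unit illustration (the enrollment shares here are hypothetical; the school averages are those reported above). If the share of grades earned in the College of Arts and Sciences (2.81) versus the education school (3.36) moves from 80/20 to 60/40, the overall average rises even though no instructor in either unit changes a single grade:

\[
0.80 \times 2.81 + 0.20 \times 3.36 = 2.92
\qquad \text{versus} \qquad
0.60 \times 2.81 + 0.40 \times 3.36 = 3.03 .
\]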

Withdrawal Inflation

Although there is equivocal evidence of grade inflation, there is persuasive evidence of withdrawal inflation. More students are exercising their option to withdraw from a course. If more students withdraw, then grades may fluctuate even if professors maintain the same grading standards. As an illustration, 4.8 percent of all students registered in classes at Indiana University in 1978-1979 withdrew, compared to 7.4 percent in 1995-1996. Although we cannot know with certainty all students’ reasons for withdrawing, our experiences in the classroom suggest that students who withdraw often are faring poorly. If we assume that the average grade of students who withdraw is a D, then the 2.6 percentage-point increase in withdrawals at Indiana should translate into an increase of approximately 0.05 (on a 4-point scale) in the average grade, which is actually greater than the observed increase in grades in that period (0.04). Thus, although universities and colleges may wish to reexamine policies regarding withdrawals, we should not confuse problems resulting from such policies with those resulting from changing faculty grading standards.
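
The back-of-the-envelope arithmetic behind that figure is easy to reconstruct (this is our own illustration, using the university-wide mean of roughly 2.86 reported earlier and the assumption, stated above, that the marginal withdrawers would have averaged a D). Removing an additional 2.6 percent of registrations that would have averaged 1.0 from a pool averaging 2.86 leaves

\[
\frac{2.86 - 0.026 \times 1.0}{1 - 0.026} \approx 2.91 ,
\]

an increase of roughly 0.05 grade points, with no change at all in how the remaining students are graded.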

Myth 2: A Grade of C Did, and Should, Indicate “Average” Performance

Perhaps one reason for the unquestioning adherence to the myth of grade inflation is a corollary myth: that C was and should be average. In the debate over grade inflation, some critics decry that the average grade of college students is not a C but rather between a B and a B-. These critics remain nostalgic for a bell curve in which C, or the “Gentleman’s C” that allegedly served as the normative standard for “average” students decades ago, is the modal grade. They dismiss the B/B- average as yet another example of the Lake Wobegon syndrome, in which “all the children are above average.”

The Gentleman’s C was and is more myth than fact. It is mathematically impossible in most grading systems employed in modern colleges and universities. At Indiana University, for example, a student must maintain a C (2.0) average to avoid being placed on probation; students who consistently earn less than a 2.0 average cannot remain on campus. By definition, then, the only way for the mean grade of college graduates, and of college students in good standing, to be a C would be if there were no variation in grades whatsoever – a violation of the statistical assumptions of normal distributions and of virtually every professor’s experience with students. If grades are normally distributed, then the tail ends of this distribution for college graduates, and other successful college students, must be C (2.0) and A (4.0), not F (0.0) and A, and the median should be approximately a 3.0, not a 2.0. Because the 2.0 minimum requirement has long been standard at Indiana and at most other universities and colleges, it should not be surprising that B (or B-) has long been regarded as average, or that grade distributions have remained fairly constant for so long.
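
Put in simple terms (a rough illustration, not a claim about the actual shape of any grade distribution): let X denote the GPA of a student in good standing, so the probation rule forces X ≥ 2.0 while the scale caps X at 4.0. A mean of exactly 2.0 would then require every student to have X = 2.0 – zero variation – whereas GPAs spread roughly symmetrically across the permissible range would center near the midpoint,

\[
E[X] \approx \frac{2.0 + 4.0}{2} = 3.0 .
\]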

Before unreflectively equating C with “average,” we must ask “average compared to whom?” To students in the course? To students in the university? To students in all universities and colleges? To all adults, regardless of whether they attend colleges or universities? Another way in which to demonstrate the social construction of the meaning of “average,” and why it does not necessarily translate into a C, is to examine grades among graduate students. If we really believe that C indicates “average,” then should we be alarmed that the GPA of graduate students typically is between 3.5 and 3.7 (e.g., 3.56 in 1973-1974 and 3.66 in 1995-1996 at Indiana University)? Of course not. Most professors understand that graduate students must maintain a 3.0 average (and, in some cases, a 3.3 average) to continue their studies. Consequently, any grade lower than a B is, in effect, equivalent to a subpar performance. It is, therefore, no surprise that B+ or A- is considered “average” and that the GPA of graduate students is approximately halfway between the highest possible (A) and the lowest acceptable (B) grade.

Myth 3: One Can Buy Good Teaching Evaluations with Good Grades

In seeking to explain the allegedly significant and alarming rise in student grades, some have offered yet another myth: that grade inflation is the result of universities’ and colleges’ overly solicitous focus on student needs and demands. In the eyes of proponents of this myth, grades have increased primarily because postsecondary institutions and their professors are pandering to students. Advocates of this position attribute the rise in student influence and, in turn, the rise in grades to two factors: student activism in the 1960s and 1970s and the advancement of a business-centered and profit-driven university system that bases its policies and decisions on consumer (i.e., student) demands.

Student Activism

Certainly, increasing student activism effected many fundamental changes in the curriculum, the role of students, and the expectations of professors. Activism called into question the infallibility of professors, on the one hand, and the relative lack of power and knowledge of students, on the other, and it encouraged students to take a more active role in their education. Students also insisted that they should have the right to evaluate the very people who evaluated them – professors. Thus, although there were student evaluations of professors prior to the 1960s and 1970s, they were used far more frequently thereafter.

Business-Centered University / College Policies

Given the large number of baby boomers in academia, it is not surprising that many view the rise of student activism of the 1960s and 1970s with almost nostalgic fondness but see as insidious the rise of a more profit-motivated (tuition-driven) university philosophy.  As operating costs have increased, universities have become more attuned to their client base – students – and to demands of legislators for state schools to become more accountable to the public.  Such accountability has led to a renewed emphasis on teaching, “relevance,” and “student needs” as well as, correspondingly, greater reliance on student input (i.e., student evaluations).  In turn, student evaluations have become increasingly influential in tenure and promotion decisions and even in annual pay raises, especially at schools that have explicitly allocated some proportion of annual pay increments to reward teaching excellence.

This reliance on student evaluations has not been uniformly praised. Critics question whether students can fairly and knowledgeably evaluate teaching and contend that giving students indirect sway over faculty promotions and raises inevitably compromises faculty grading by, in effect, pressuring professors to confer higher grades on students than they deserve. Such reasoning implies that students can be bought with higher grades and that faculty can be bought with high student evaluations.

Such reasoning has not been borne out by experiences in our department (sociology) at Indiana University. Examining the grading practices of professors in our department since 1990, we found that the professors who have won departmental or university teaching awards gave slightly lower grades than did their colleagues who have not won such awards. Moreover, our graduate student instructors who have received the highest student evaluations were, on average, no more lenient than those who have received the poorest evaluations. Furthermore, looking at evaluations across courses, we find that student reports of the grades they are receiving in a class correlate poorly with their overall evaluations of the instructor, [Note 4] whereas overall evaluations correlate strongly with students’ assessments of instructors’ clarity, enthusiasm, fairness, and impartiality. [Note 5]

Our findings are consistent with the bulk of literature on this topic, which indicates that students’ evaluations are at most weakly correlated with students’ grades (Doyle 1983; d’Apollonia and Abrami 1997; Felder 1992; Lowman 1990; Marsh 1987; Marsh and Roche 1997; McKeachie 1990). [Note 6] Contradicting this general pattern of findings is the recent and highly publicized study by Greenwald and Gillmore (1997), which finds student evaluations to be positively correlated with grading leniency; however, Marsh and Roche (1997), d’Apollonia and Abrami (1997), and McKeachie (1990) all raise cogent criticisms of this study’s methodology. It is perhaps telling that far more press has been given to Greenwald and Gillmore’s (1997) claims about the connection between evaluations and grading leniency than to the preponderance of evidence that has failed to find such a connection. Moreover, student evaluations correspond surprisingly closely with professors’ evaluations of one another’s teaching (Aleamoni 1978; Felder 1992; Lowman 1990; Marsh 1987). Consequently, although it might be convenient for some to ignore the overall literature and argue that high evaluations are a response to high and undeserved grades, such reasoning is at best ill-informed and at worst self-serving.

Conclusion

Part of teaching sociology is showing how many claims about social phenomena that are widely circulated in the media cannot stand up to close, empirical examination. In this chapter, we have shown that the much-hyped crisis of grade inflation within undergraduate education does not reflect actual trends in grading over the past 20 years, which in reality have remained mostly stable. Moreover, even in those institutions where there is clear evidence of grade inflation, the most common explanation for it is unlikely to be correct. Rather than attributing grade inflation to the increasing softness of individual professors or even of a whole generation of new professors, we described a variety of compositional and institutional changes in universities that may lead to increases in average grades.

Whereas we have questioned the myths surrounding grade inflation in this chapter, sociologists typically are not content simply to debunk myths; they also seek to explain the persistence of those myths. If we are correct that widespread cries about grade inflation are misguided, then the inevitable next question is why so many in the media, the academy, and the general public have been quick to embrace the idea that grades are inappropriately rising, especially given the ready availability of alternative explanations and the relative lack of hard evidence. Perhaps most obviously, grade inflation is consonant with the belief that America has undergone a wholesale deterioration of values and cultural standards. In The Way We Never Were, Coontz (1992) suggests that contemporary debates about the family are hopelessly clouded by public nostalgia for the days when every family was a wholesome “traditional family,” a nostalgia that Coontz argues rests largely on selective memory and wishful thinking. Discussion of higher education might be similarly clouded by a self-enhancing tendency of past college graduates to believe that today’s college students are being graded much more easily than they themselves were.

At the same time, although the popularity of the grade inflation myth might stem partially from a memory-distorting nostalgia, it should be pointed out that the widespread belief in grade inflation also serves the interests of some parties.  The idea of grade inflation provides a shield for those in the academy who resist student-centered learning and forgo opportunities to make their course content as accessible as possible to students.  Grade inflation allows these professors to claim that they are refusing to pamper today’s spoiled students.  To hear some tell it, the professors who give the lowest grades and who receive the worst evaluations are actually the best teachers because they refuse to “buy” their students’ evaluations by handing out high grades to all.  In this way, the myth of grade inflation can make the bad teacher accountable to no one because it demeans the credibility of student evaluations, one of the few systematic means of monitoring college teaching.  It is thus perhaps perverse that some universities are experimenting with the idea of offering rewards to professors who do the most to “combat” grade inflation.  Such systems might reward some instructors who make high demands for excellence but also would reward those instructors whose students perform poorly according to classroom standards because they are not learning anything.

In addition, claims of grade inflation also provide fodder for those who have made a cottage industry of attacking the academy.  Over the past two decades, various villains have been held up as representing the ills of higher education – the deadbeat professor, the professor too focused on research to have any time for students, the foreign professor who cannot communicate with students, and the hopelessly incompetent graduate student instructor.  To this list, one can add the professor popular with students because he or she gives A’s to nearly everyone in class. In all of these cases, one might be able to locate isolated examples, but charges that these ills are rampant within the academy are consistently refuted by the available evidence.

A recurring theme of critiques of higher education is that universities have acquired political, research, or administrative agendas that are at cross-purposes with their supposed goal of educating students in “the basics” needed for successful careers. This theme is evident in the argument that rewarding high student evaluations gives professors a strong incentive to loosen their standards and that this degradation of standards has caused an unjustified increase in average grades. By showing that extant claims of grade inflation are overstated, we have provided evidence that grading practices do not seem to have been affected by the increased emphasis on student evaluations. Consequently, the alleged conflict between students evaluating teachers and teachers being able to demand excellence from their students should itself be considered more a matter of myth than truth.

Notes

1. The only entry the center received was from one of the authors of this chapter (Powell); it was titled “There is no such thing as a free lunch … or grade inflation.” The center decided not to award a winner.

2. These figures are based on semester grade point averages of students. Using an alternative measure, average grades of courses, we find the same pattern. Of course, there has been some fluctuation in average grades between 1973 and 1996, with an initial decrease between 1973 and 1984 and an upswing since then, but these changes have been trivial.

3. This, of course, is not to deny the influence of the Vietnam War on rising grades in the late 1960s and early 1970s. One frequently used explanation for rising grades in the Vietnam War era is that professors graded students more generously to help them meet the criteria necessary for keeping their draft deferments (or that students worked harder to keep their deferments).

4. For example, in our analysis of courses taught by graduate student instructors this year, we found a slightly negative correlation (-0.09) between average grades and student evaluations of the instructor.

5. In discussing the link between course evaluations and grades, we compare the average evaluations for a class to that class’s average grade. This comparison is appropriate for examining the claim that higher average evaluations are obtained by those instructors with more generous average grades.  Within individual classes, students who are performing better in the course tend to evaluate the instructor more positively.  This might be for a variety of reasons; for example, students who enjoy a class more than do their peers might put more effort into their studies. Yet, if higher grades per se caused higher student evaluations, then we would expect average class grades to be correlated with average class evaluations, which research has shown is not the case.
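
The distinction drawn in this note can be made concrete with a small simulation (entirely hypothetical data, not our departmental records; the parameter values below are arbitrary). In this sketch, each student’s grade and evaluation both reflect that student’s engagement, so the two are positively correlated within a class, while instructors’ grading leniency is generated independently of their teaching quality, so class-average grades and class-average evaluations are essentially uncorrelated:

# Hypothetical simulation: within-class vs. between-class grade-evaluation correlations.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_students = 200, 40

within_r, grade_means, eval_means = [], [], []
for _ in range(n_classes):
    quality = rng.normal(0, 1)                   # instructor quality drives evaluations
    leniency = rng.normal(0, 1)                  # grading leniency, independent of quality
    engagement = rng.normal(0, 1, n_students)    # varies from student to student
    grades = 2.9 + 0.3 * leniency + 0.3 * engagement + rng.normal(0, 0.2, n_students)
    evals = 3.5 + 0.5 * quality + 0.3 * engagement + rng.normal(0, 0.3, n_students)
    within_r.append(np.corrcoef(grades, evals)[0, 1])
    grade_means.append(grades.mean())
    eval_means.append(evals.mean())

print("mean within-class correlation: %.2f" % np.mean(within_r))                          # clearly positive
print("between-class correlation:     %.2f" % np.corrcoef(grade_means, eval_means)[0, 1]) # near zero

Running the sketch prints a clearly positive average within-class correlation and a between-class correlation near zero – precisely the pattern this note describes.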

6. Research has shown that student evaluations are correlated with class size, course level, academic discipline, and (perhaps most tellingly) students’ sense of fairness in the evaluation process (Lowman 1990; McKeachie 1990).

Acknowledgement

We gratefully acknowledge Robert Fulk and Kathryn Henderson for their helpful suggestions and input.

References

Adelman, Cliff. 1995. “A’s Aren’t That Easy.” The New York Times, May 17.

Aleamoni, L.M. 1978. “Development and Factorial Validation of the Arizona Course/Instructor Evaluation Questionnaire.” Educational and Psychological Measurement 38:1063-67.

Berger, Peter L. 1963. Invitation to Sociology: A Humanistic Perspective. Garden City, NY: Doubleday.

Coontz, Stephanie. 1992. The Way We Never Were: American Families and the Nostalgia Trap. New York: Basic Books.

d’Apollonia, Sylvia and Philip C. Abrami. 1997. “Navigating Student Ratings of Instruction.” American Psychologist 52:1198-208.

Doyle, Kenneth O., Jr. 1983. Evaluating Teaching. New York: Free Press.

Felder, Richard M. 1992. “What Do They Know, Anyway?” Chemical Engineering Education 26:134-35.

Greenwald, Anthony G. and Gerald Gillmore. 1997. “Grading Leniency Is a Removable Contaminant of Student Ratings.” American Psychologist 52:1209-17.

Lowman, Joseph. 1990. Mastering the Techniques of Teaching. San Francisco: Jossey-Bass.

Marsh, Herbert W. 1987. Students’ Evaluations of University Teaching: Research Findings, Methodological Issues and Directions for Future Research.  New York: Pergamon.

Marsh, Herbert W. and Lawrence A. Roche. 1997. “Making Students’ Evaluations of Teaching Effectiveness Effective: The Critical Issues of Validity, Bias, and Utility.” American Psychologist 52:1187-97.

McKeachie, Wilbert J. 1990. Teaching Tips: A Guidebook for the Beginning College Teacher. Boston: Houghton Mifflin.

Twitchell, James B. 1997. “Stop Me before I Give Your Kid Another ‘A’.” The Washington Post, June 4, p. C23.