This month's Carnegie Perspectives looks at some of the cautions to keep in mind when launching assessment and accountability initiatives. It is by Alex C. McCormick, senior scholar at the Foundation. The posting is #34 in the monthly series Carnegie Foundation Perspectives, short commentaries exploring various educational issues produced by the Carnegie Foundation for the Advancement of Teaching. The Foundation invites your response at: CarnegiePresident@carnegiefoundation.org. © 2007 The Carnegie Foundation for the Advancement of Teaching, 51 Vista Lane, Stanford, CA 94305. Reprinted with permission.
First, Do No Harm
Introduction by Lee Shulman
Alex McCormick's timely essay brings to our attention one of the most intriguing paradoxes associated with high-stakes measurement of educational outcomes. The more importance we place on going public with the results of an assessment, the higher the likelihood that the assessment itself will become corrupted, undermined and ultimately of limited value. Some policy scholars refer to the phenomenon as a variant of "Campbell's Law," named for the late Donald Campbell, an esteemed social psychologist and methodologist. Campbell stated his principle in 1976: "The more any quantitative social indicator is used for social decision making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."
In the specific case of the Spellings Commission report, Alex points out that the Secretary's insistence that information be made public on the qualities of higher education institutions will place ever higher stakes on the underlying measurements, and that very visibility will attenuate their effectiveness as accountability indices. How are we to balance the public's right to know with an institution's need for the most reliable and valid information? Alex McCormick's analysis offers us another way to think about the issue.
Article by Alexander C. McCormick
Accountability is in the air, and in the news, these days. In response to various "common-sense" proposals to fix problems in education, a friend of mine used to say, "If you think there's a simple solution, you don't understand the problem." Recent accountability proposals show how true this is. In any accountability regime, it is not sufficient simply to select a set of performance measures. It is equally important to consider how the system will affect behavior. A well-designed accountability system motivates substantive change, not merely gaming the system. And the last thing you want is a system that undermines useful diagnostic tools in the name of accountability.
In January, the National Center for Education Statistics proposed some additions to the mass of data that it gathers annually from colleges and universities. Normally an arcane subject, to be sure. But buried in the proposal was a provision, clearly motivated by the Secretary of Education's Commission on the Future of Higher Education, that could seriously hamper current efforts to improve college quality. Russ Whitehurst, director of the Department of Education's Institute for Education Sciences (which houses the statistics agency), subsequently offered vague assurance that the most damaging of these proposals would probably not be implemented. Let's hope he's right.
The last twenty years have seen calls for greater accountability by higher education, accompanied by growing influence of college rankings by U.S. News and World Report. College officials complain that the rankings, which purport to measure college quality, improperly emphasize inputs and resources rather than what happens on campus. But in response to accountability demands, they argue that the work of their institutions is too complex, too varied, and too ephemeral to be reduced to simple output measures. Although there is merit to both claims, the quest to improve college quality is far from hopeless.
Several relatively new college-quality initiatives show such promise that they were named by the Secretary's Commission. Colleges and universities participating in these projects have access to sophisticated assessments of effective educational practices (from the National Survey of Student Engagement, or NSSE, and its community college counterpart, CCSSE) and of their students' critical thinking, analytic and writing skills (from the Collegiate Learning Assessment, or CLA). NSSE and CLA send participating institutions confidential reports showing how they perform relative to their peers; CCSSE posts results on its website. This is valuable information that presidents, deans, department chairs, and faculty members can, and do, use to improve the quality of college education.
But the Commission and the Secretary want more information that students and parents can use to compare institutions. The Secretary often complains that she has access to more comparative information when buying a car than when investing in her children's college education.
So the statistics agency proposed adding an "accountability" section to its annual compilation of college and university data. In the first phase, colleges would be asked which assessments they participate in, whether they post the results online, and the corresponding Web address. So far, so good: many institutions post this information, and this would make it easier to find. The mischief begins in the second phase, wherein institutions report the assessments they participate in and their "score" on each one. Knowing which assessments a college uses is a good idea, but reporting scores to the government will do far more harm than good.
Why? Let's set aside the problem of reducing complex assessments to a single institution-wide score. (If you had one score for every auto maker, would that help you choose the best station wagon?) The real danger is transforming a diagnostic exercise into grading and ranking. It's one thing for college officials to have a confidential report from a sophisticated assessment identifying where improvement is needed. It's quite another when that information is made public; the emphasis shifts quickly from diagnosis to damage control (although CCSSE results are public, community colleges do not compete in national and regional markets the way four-year institutions do). And recall that these are voluntary assessments that institutions pay to participate in. If your doctor and financial planner posted your physical and fiscal health on the Web, would you see them more often? Would you see them at all?
It doesn't take much in the way of critical thinking skills to see where this leads. If the Department doesn't produce rankings, others will. In NSSE's case, students' survey responses will determine their college's standing, and by extension, the value of their degree. So they will act in their own self-interest to make their college look good, compromising the fundamental requirement for useful information: candor. More likely, though, colleges will simply opt out, as they surely will for performance-based assessments like CLA, because participation would risk too severe a public-relations penalty. Thus would an ill-conceived push for consumer information drive colleges away from the most promising assessment and improvement initiatives in decades.
Higher education institutions must systematically assess and improve their performance. But not all diagnostic information is suitable for accountability and consumer information, and a ham-fisted approach like this could sabotage important efforts to diagnose and improve colleges and universities.