I have been writing about the ways in which our governance cultures have been moving away from a rule-command structure toward a behavior management structure (Backer, Larry Catá, "Surveillance and Control: Privatizing and Nationalizing Corporate Monitoring after Sarbanes-Oxley," Law Review of Michigan State University, 2004). I have suggested that while we continue to produce rules and law, we increasingly seek to govern conduct not through rules but through measurement and assessment tools (Backer, Larry Catá, "Global Panopticism: States, Corporations and the Governance Effects of Monitoring Regimes," Indiana Journal of Global Legal Studies, Vol. 15, 2008).
Measurement and assessment tools have substantial advantages over law--they provide a means of totalizing regulation in ways that law is incapable of doing. They can focus on specific behaviors to be emphasized and on those in which the regulator is uninterested, and they provide a veil of neutrality and measurement for what are quite pointed choices about directing behavior. Best of all, this form of regulation makes it easier to internalize standards within the governed classes--law, in effect, moves from being an external command that must be obeyed to an internalized "understanding" of what is "right" (e.g., Elements of Law 3.0 Notes of Readings: I-E (What is Law? Law Beyond Law--Social Norms, Contract Communities, and Disclosure Regimes)).
There are further advantages--governing through the technical rules of assessment makes it possible to avoid the transaction costs of rule making: time, effort, and transparency. In addition, within the university setting, rule making through the manipulation of assessment techniques also avoids the need to subject managerial decisions to shared governance. One can cut the faculty out of governance by making them the object of assessment rather than a partner in developing substantive rules. It advances a project of celebrating the (empty) forms of shared governance while abandoning effective shared governance as a functional matter.
Many times, the university is not the driving force of this movement; it is merely complicit in its development by outside evaluating agencies. Nowhere is this better evidenced than in the growing market for outside reputation and quality evaluations of universities. This post includes a report from the American Educational Research Association, Randall Reback and Molly Alter, True for Your School? How Changing Reputations Alter Demand for Selective U.S. Colleges, published in Educational Evaluation and Policy Analysis 36(1), March 2014, and described in Eric Hoover, Your College's Reputation Matters in Measurable Ways, Chronicle of Higher Education, Jan. 16, 2014. Together they speak to the effect of measurement toolkits on university behavior, to the power of stakeholders (in this case, consumers of education) to affect the internal governance and development of universities, and to the power of outside assessment agencies to shape the educational agenda.
Randall Reback and Molly Alter
True for Your School? How Changing Reputations Alter Demand for Selective U.S. Colleges, Educational Evaluation and Policy Analysis
March 2014, vol. 36 no. 1
Abstract
There is a comprehensive literature documenting how colleges’ tuition, financial aid packages, and academic reputations influence students’ application and enrollment decisions. Far less is known about how quality-of-life reputations and peer institutions’ reputations affect these decisions. This paper investigates these issues using data from two prominent college guidebook series to measure changes in reputations. We use information published annually by the Princeton Review—the best-selling college guidebook that formally categorizes colleges based on both academic and quality-of-life indicators—and the U.S. News and World Report—the most famous rankings of U.S. undergraduate programs. Our findings suggest that changes in academic and quality-of-life reputations affect the number of applications received by a college and the academic competitiveness and geographic diversity of the ensuing incoming freshman class. Colleges receive fewer applications when peer universities earn high academic ratings. On the other hand, unfavorable quality-of-life ratings for peers are followed by decreases in the college’s own application pool and the academic competitiveness of its incoming class. This suggests that potential applicants often begin their search process by shopping for groups of colleges where non-pecuniary benefits may be relatively high.
________________________________________
January 16, 2014
Your College's Reputation Matters in Measurable Ways
Chronicle of Higher Education
By Eric Hoover
A college's reputation shapes its short-term enrollment fortunes in measurable ways, according to a new study.
Where a selective college stands in annual rankings compiled by Princeton Review and U.S. News & World Report affects the number of applications it receives as well as the competitiveness and geographic diversity of its freshman class. That's one finding in a report published this month in Educational Evaluation and Policy Analysis, a journal of the American Educational Research Association.
Shifts in the ratings of a given college's academic strength and students' quality of life predict significant changes in demand, the researchers conclude. Their findings also suggest that changes in a college's ratings can either help or hinder its competitors' efforts to recruit and enroll students.
Randall Reback, an associate professor of economics at Barnard College, and Molly Alter, a research analyst for the Research Alliance for New York City Schools at New York University, examined the qualitative data in Princeton Review's annual college guides, which include top-20 lists based on students' evaluations of various aspects of their colleges ("Happy Students," "Most Beautiful Campus"). They also looked at the U.S. News rankings and the National Center for Education Statistics' Integrated Postsecondary Education Data System.
In short, commercial measures of colleges' quality and desirability influence enrollment metrics, even when controlling for other variables—such as costs—that might explain students' choices.
. . . . .
A college receives more applications, the study found, after landing on Princeton Review's lists of colleges with the best overall academic experience, happiest students, and most beautiful campuses. Yet it receives fewer applications after appearing in a list of ugliest campuses or when described in the guide as having "unhappy students."
Changes in ratings of a competitor's reputation can affect a college in different ways. An institution sees more applications after a rival college receives lower marks for academic quality, according to the study. But an institution sees fewer applications and less-competitive applicants after peer colleges receive unfavorable quality-of-life ratings.
That finding suggests that applicants often begin their searches by looking at groups of colleges that rate highly in specific categories.
. . . .
As for the power of academic ratings, the researchers said that being listed among the top 25 colleges on a U.S. News list is associated with a 6-to-10-percent increase in applications. Whether a college is ranked 10th or 20th doesn't seem to matter; merely appearing on the list explains the increase, the study suggests.
'Unscientific' and Idiosyncratic
Colleges also saw a slight increase in applications after making the Princeton Review list for best academic experience. "Front-of-book advertising may be important in the initial phases of the college-search process," the report says.
. . . . .
Given the sway college guides have among some students, the authors suggest that a review of ratings systems by an independent organization might serve the public interest. "Reputations matter," Mr. Reback said. "Whether rankings accurately measure reputation is another question."