Monday, September 7, 2015

The Power of Assessment and the Production of Knowledge: An Example and a Warning for Faculty Complacent About the Mechanics of Value

(Pix © Larry Catá Backer 2015)


I have been speaking to the issue of assessment--not so much as a set of techniques designed to extract data that can be evaluated, but rather as a mechanics for the management of behavior through the instrumental use of data extraction (here, here, and here; for the theory see Backer, Larry Catá, Global Panopticism: States, Corporations and the Governance Effects of Monitoring Regimes, Indiana Journal of Global Legal Studies, Vol. 15, 2007).

But theory tends to bore.  And in our culture theory has become detached from the way in which knowledge is produced and understood--and employed.  One teaches now by example, and perhaps clusters examples around a theory that may help provide context to example, or fashions example into an instrument used not so much to explain or understand as to change the world around us.

Consider the issue of publication.  Publication has multiple purposes.  Of course it is employed as a traditionally important means of transmitting knowledge, of conversing with communities. But it is also an integral part of prestige markets for status within knowledge communities and has substantial repercussions for status within the profession and advancement within the institutional labor markets in which academics labor for the most part. (Discussed in Larry Catá Backer, Defining, Measuring and Judging Scholarly Productivity: Working Toward a Rigorous and Flexible Approach, 52(3) Journal of Legal Education 317-341 (2002)).

Measuring the relational place of academics within disciplinary prestige markets depends in large part on managing a hierarchy of publication venues (and publication forms) from which prestige-determining data may be mined. Each venue for the display of knowledge production must be assessed and placed within a vertically arranged hierarchy that can serve as a basis for producing data on "value" by aggregating the "value points" for the appearance of writings within these publication venues. The great battles recently over the place of blogs and other "unconventional" forms of publication are not so much about the value of these forms of knowledge production but rather about their susceptibility to placement within well-defined hierarchies of "value."

The core objective, then, on which this system is grounded, is a set of premises for measuring value.  Some of these go to the form of knowledge production to be valued--it must be in writing, it must find its way to a specific form of publication, and the form of publication must itself embrace certain characteristics--it must be published in physical and tangible form (a journal or book, etc.; though now electronic publication is slowly becoming more acceptable without loss of "value"), it must be published by a reputable publication house with certain characteristics, and the process for evaluation of the object of publication must undergo certain ritualized tests (peer review, editing, etc.). In the absence of these minimum criteria, publication venues have no value (thus the problems of blogs, self-publication and "unusual" venues and forms of publication--video, multi-media, etc.).

But beyond the minimum requirements, publication venues must be assessed in order to arrange them in a rank order.  This ranking is essential to make meaningful, to be able to purposefully count, publications by academics, and to value that data harvesting exercise (the counting of publications) in a way that appears neutral (the criteria are uniformly applied) and that is certain and predictable (and thus, from a Western perspective, fair). That is, to make it possible to assess publication, it is necessary first to develop criteria to assess (and rank order) the basis of assessment (the universe of publications). Notice the irony--the value of a publication is not inherently bound up in itself.  It is instead bound up heavily in the publication venue within which it is presented to the community of scholars and, more importantly, to the cadres of administrators who use the harvesting of this value--not in the publication itself but in its placement--to assess the quality of the work of the academic who produces the work for publication. Scholars and administrators approach a work not in itself but from the value derived from the place of publication within which it is embedded. The value of knowledge, and its assessment, is then contingent on the value of the publication venue.
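
A minimal sketch may make the mechanics concrete. Everything in it is invented for illustration--the venue names, the point values, the scoring function--but it captures the logic: the academic's "value" is computed entirely from where each piece appeared, never from what it says.

```python
# Hypothetical illustration: an academic's "value" is derived from the rank of
# the venues in which the work appears, not from the work itself.

# Invented venue hierarchy: venue name -> "value points" assigned by assessors.
VENUE_POINTS = {
    "Journal of Hard Science Studies": 10,
    "Regional Hard Science Bulletin": 4,
    "Personal blog": 0,   # fails the minimum criteria; counts for nothing
}

def score_academic(publications):
    """Sum the value points of the venue in which each publication appeared.

    `publications` is a list of (title, venue) pairs; the title -- the content
    itself -- plays no role in the score.
    """
    return sum(VENUE_POINTS.get(venue, 0) for _title, venue in publications)

publications = [
    ("A genuinely important result", "Personal blog"),
    ("A modest incremental study", "Journal of Hard Science Studies"),
]
print(score_academic(publications))  # -> 10: the important result counts for nothing
```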

And the value of a publication venue is based in large part on the aggregation of data, the choice of which is not necessarily inherent in any absolute value of the venue but rather reflects a blended choice of data, produced by or from the publication venue, that is privileged in assessment.  Control of the choice of data to be measured, then, determines the way that publication venue hierarchies are constructed. But these data are not fixed and immutable.  Each data point can be affected by the behaviors of the publication venue.  As such, the choice of data to privilege effectively controls the behavior of publication venues, where behavior changes can have a net positive effect on the "quality" of the data produced and increase the assessment value of the publication venue. And knowledge of which data are privileged and which marginalized in turn has a tremendous effect on the way in which publication venues organize themselves--and determines what they may emphasize or marginalize.
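
A second sketch, again with invented venues, metrics and numbers, suggests how the choice of which data to privilege--rather than anything intrinsic to the venues themselves--produces the hierarchy:

```python
# Hypothetical illustration: the same two venues, ranked under two different
# choices of which data to privilege.  All figures are invented.

venues = {
    "Journal A": {"downloads": 9000, "citations": 150, "peer_review_rigor": 2},
    "Journal B": {"downloads": 2000, "citations": 60,  "peer_review_rigor": 9},
}

def rank(venues, weights):
    """Order venues by a weighted sum of whichever metrics the weights privilege."""
    def score(data):
        return sum(weights.get(metric, 0) * value for metric, value in data.items())
    return sorted(venues, key=lambda name: score(venues[name]), reverse=True)

# Privilege raw circulation data and Journal A sits at the top of the hierarchy.
print(rank(venues, {"downloads": 1, "citations": 1}))
# Privilege an (invented) rigor score instead and the hierarchy inverts.
print(rank(venues, {"peer_review_rigor": 1}))
```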

Let us consider the hypothetical Journal of Hard Science Studies (JHSS).   JHSS is one of the journals published by High Reputation University Press (HRUP).  To some extent it can leverage the reputation of High Reputation University (HRU) in building its own.  HRU's reputation, in turn, has been built on its ability to dominate in the production of data used to measure the "value" and "reputation" of universities (size of library, entering class GPA and standardized test scores, etc.).  The JHSS publishes two sorts of items.  The first, and most important, are the results of complex and leading edge studies in the "Hard Science" field.  It is well known within the Hard Science discipline as a venue where new and important work is published.  It also publishes reviews of current important work in the Hard Science field. These reviews do not advance knowledge, but they tend to be useful in summarizing progress in the field and tend to be popular.  They do not add so much as popularize knowledge among high level Hard Science academics.  Reviews are fairly easy to produce and are a favorite means of increasing publication numbers for scientists approaching the end of their academic careers.  Articles of new work are much more difficult to produce.  In economic terms, reviews are less costly than articles, but for the discipline, articles are more valuable for advancing knowledge than reviews.

The "value" of JHSS for academic seeking high value publication venues is based almost entirely (for our example--the reality is often more complex but I am holding all other variables for purposes of this illustration) on "impact." Impact is currently measured solely by two sets of data--download rates and citation rates.  This data is amassed without regard to what is downloaded or cited, or for what purposes there are downloads or citations.  Each download or citation adds positive value to "impact" (even if for example, there were strong downloads and citations to prove that an article was laughable for its content, methods, etc.). There is no distinction made, for purposes of calculating "impact", between reviews and articles.   In economic terms, though articles are much more costly to produce and more valuable than reviews--they are both valued equally for purposes of the data extracted to assess journal impact. But there is a difference: reviews tend to be downloaded and accessed at a far higher rate than articles.  Reviews also tend to be cited with greater frequencies, especially in the introduction to articles.

JHSS has traditionally had an impact value of .48.  That is quite respectable.  And that impact value was sufficient to keep a healthy flow of articles coming for peer review and publication.  But recently, China and Australia changed the criteria for assessing faculty in the discipline of Hard Science.  Going forward, Hard Science academics will be assessed on the "value" only of publications that are published in journals with impact values of greater than .50.  Publications in journals with impact values of .50 or less will be treated as non-academic publications, whatever the inherent quality of the publication.   For JHSS, the resulting change in academic assessment is disastrous.  First, it changes the assessment of the quality of its publication solely by operation of a change in data assessment.  Second, it threatens to reduce the number of articles submitted to it from two states with large numbers of subscribers to and contributors of JHSS content.

But the solution is readily in hand.  Immediately after the announcement by China and Australia, JHSS commissioned a larger number of reviews and reduced the number of published articles.  As a result, the number of downloads of and citations to JHSS content increased substantially, enough to raise its impact value to .53.  As a consequence of that change, the number of articles submitted increased dramatically as well. Assessment metrics have both defined and been defined by their parameters.  They have served as both tool and instrument.  The problem with assessment here is that it does not measure against a constant--the act of measuring itself changes the field being measured and affects the legitimacy of the measure originally chosen. The assessment machine, then, is a data generating device in constant motion; it does not measure "facts" as much as responses to the structures of measuring itself.
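
The behavioral response can be sketched with the same invented index: shifting the mix from articles toward reviews, which attract more downloads and citations per item, lifts the measured impact above the .50 threshold without any change in the quality of the underlying science.

```python
# Hypothetical continuation of the sketch above: commission more reviews,
# publish fewer articles, and recompute the same impact index.  Figures invented.

ARTICLE = {"downloads": 430, "citations": 10}   # costly to produce, modest traffic
REVIEW  = {"downloads": 610, "citations": 30}   # cheap to produce, heavy traffic

def impact(items, scale=1000):
    """Average (downloads + citations) per published item, scaled to a small index."""
    total = sum(i["downloads"] + i["citations"] for i in items)
    return total / len(items) / scale

old_mix = [ARTICLE] * 16 + [REVIEW] * 4    # the traditional mix of content
new_mix = [ARTICLE] * 11 + [REVIEW] * 9    # more reviews commissioned, fewer articles

print(round(impact(old_mix), 2))  # 0.48 -- below the new .50 cut-off
print(round(impact(new_mix), 2))  # 0.53 -- above it, with no change in the underlying science
```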

To most of us in academic departments, this hypothetical suggests nothing more than the self evident.  It describes the ordinary course of events that most take for granted.  But a careful consideration suggests that this is precisely what ought not to be taken for granted. It suggests that quality has been subsumed within proxies, embedded in data, the importance of which is no longer controlled or managed by those most knowledgeable about a field.  In effect, a disciplinary community has effectively lost control over the valuation and assessment of quality, and ceded its structuring to administrators, governments, publishing houses and the like, whose interests may not be to protect the quality of knowledge or to ensure its robust advancement. It suggests that assessment may not be as much concerned with the inherent quality of knowledge produced as with knowledge as gesture, as a means of generating interest--as a publicity device. Publication houses are interested in their own prestige, in generating readers and contributors.  Universities are interested in the revenue generating power of academic work--through grants and prestige markers.  These are important--no doubt.  But a discipline might be well advised to be more interested in the control of the factors used to measure the value of knowledge production.  It is the only way that a discipline can control itself--and retain its own integrity.  To do otherwise is to become a factor in the production of something else for someone else--and the legitimacy of the discipline will sooner or later erode and dissipate.
