# What Counts --- #### [Kathleen Fitzpatrick](http://kfitz.msu.domains) [@kfitz](http://twitter.com/kfitz) // [kfitz@msu.edu](mailto:kfitz@msu.edu) Note: I want to start by thanking Martin for inviting me to talk with you today, as well as thanking Jennifer for everything involved in getting me here. ## agenda 1. problems 2. principles Note: First, a brief overview of what I'm going to talk about this morning. We're experiencing some problems in our promotion and tenure processes and policies that are being surfaced in many cases by the digital work that many scholars are producing today. I'm going to talk through a few of those problems, and then present a highly opinionated set of principles that I'd like to see applied to our processes and policies in order to make them do the work we actually intend them to do. ## contradictions Note: The largest, and in fact most central, of the problems that I am seeing in our tenure and promotion processes and policies today is that they are riven with internal contradictions, contradictions that are in some cases inevitable but that in many cases are engineered into the process for what seem to us like good reasons. ## small / large Note: Among the inevitable contradictions is the fact that they apply to a vanishingly small population, and thus should represent a similarly small problem, and yet for that small population they can support or disrupt an entire career, and so are of the highest possible stakes, creating a seemingly insurmountable obstacle to institutional change. ## individual / institutional Note: The enormity of this obstacle to institutional change is heightened by the contradiction between these policies' institutional ownership and their individual implementation; despite the fact that tenure and promotion reviews are conducted and overseen by people with agency within our institutions, those people often feel themselves to be powerless before institutional policy. ## personal / impersonal Note: And like so many institutional policies and processes, they are intended to be neutral, systemic, and impersonal, all in the name of being fair, and yet they are always deeply personal in their application, especially for the scholar undergoing the evaluation. ## subjective / objective Note: As a result, where the tenure and promotion process should at least in theory require scholars and administrators to exercise the greatest, most careful possible judgment -- an inevitably subjective judgment -- we end up, for many highly important reasons, creating and implementing processes that are as objective as possible. ## quality / quantity Note: And in order for those processes to be objective in their design and their implementation, we wind up turning away from assessments of the work's quality to assessments of its quantity. ## counting Note: In other words, we wind up counting things. And we obsess about what counts, what it counts for, what it counts as. ## digital Note: One thing you might note about all of those contradictions and concerns in promotion and tenure policies and processes is that none of them have much to do with the changing nature of scholarship today, or the growing role of digital scholarship. In other words, promotion and tenure is always already problematic, as the French theorists might suggest, and the digital transitions we're seeing today are likely only rendering those contradictions visible.
## first Note: Given that, it would probably be a good idea for us to start by considering what it is we're trying to do in the promotion and tenure process in the first place, so that we might find ways to best surface those purposes and principles in whatever our processes become. ## threshold Note: We've long treated that review as a threshold exercise: an assessment of whether the candidate has done enough to qualify. The result, in too many cases, is burnout and unhappiness in the associate rank. ![The Onion headline: Newly Tenured Professor Now Inspired to Work Harder Than Ever](http://kfitz.msu.domains/presentations/images/onion.png) Note: There’s a reason, after all, why The Onion found this funny, and it’s not just about the privileges of lifetime tenure producing entitled slackers. Assistant professors run the pre-tenure period as a race and, making it over the final hurdle, too often collapse, finding themselves exhausted, without focus or direction, depressed to discover that what is ahead of them is only more of the same. The problem is not the height of the hurdles or the length of the track; it’s the notion that the pre-tenure period should be thought of as a race at all, something with a finish line at which one will either have won or lost, but will anyway be done. ## milestone Note: I believe that we can find a better way of supporting and assessing the careers of junior faculty if we start by approaching the tenure review in a different way entirely, thinking of it not as a threshold exercise but instead as a milestone, a moment of checking in with the progress of a much longer, more sustained and sustainable career. ## intellectual leadership Note: This notion of the milestone comes to me in part from the ways my dean, Chris Long, and my associate dean, Bill Hart-Davidson, have recently talked with faculty from the College of Arts & Letters at MSU about charting their path to intellectual leadership. Charting this path requires first understanding what "intellectual leadership" is (and how it might change over the course of a career), and then establishing some steps to get there. Early in a career, for instance, intellectual leadership might be about establishing a voice of one's own; later, it might look more like helping other scholars establish their own voices. And in every case, depending on the scholar's long-term goals, the steps along the path are going to be different. Some of these steps are smaller -- stepping stones, things the scholar can control: writing an article, submitting a grant proposal, and so forth. Larger steps toward the long-term goal -- steps over which the scholar might not have full control -- are milestones: publishing a book, getting tenure. ## goals vs steps Note: However much these milestones might look like end goals before we get to them, they really are steps along the way. The goal of intellectual leadership remains farther out, on the horizon. Milestones like the tenure review, however, provide moments of checking in to ensure that things are on course. ## career Note: Taking this view requires us to stop and think about the shape of a career overall. The promise of a distinguished career is what we hire junior faculty for, after all -- the promise that they will engage with their material and their colleagues and their students over the long term, that they will use those engagements to come to some kind of prominence in their fields.
The tenure review, at the end of the first six years of those careers, should ideally not be a moment of determining whether those candidates have thus far done X quantity of work (where X is enough to earn tenure, allowing the candidate to safely rest). Rather, in an ideal universe, we should use the tenure review to ask whether the promise with which those candidates arrived is beginning to bear out. ## beginning Note: I should say that again: _beginning_ to bear out. The most productive question we can ask in the promotion and tenure review is not whether the full potential of a candidate has been achieved, but rather whether what has been done to this early point gives us sufficient confidence in what will happen over the long haul that we want the candidate to remain a colleague, and do that work with us, for as long as possible. ## quality / quantity Note: In order to figure that out, the questions we ask about the work cannot focus solely on whether there has been enough of it, but rather must focus on its importance, its potential for impact, its quality. We already reach out, in the vast majority of cases, to experts in the candidate's field, requesting their careful evaluation of the work and its significance. Reframing our own assessment practices to foreground not the quantity of work produced but the ways we see it beginning to have an impact on its field can help us transform the exercise into one that supports our most important scholarly goals and values. ## digital, but not solely Note: And, not incidentally, such a foregrounding of the potential for impact might help us more fairly evaluate the newer kinds of digital projects in which many scholars today engage. But it might also push us to reassess a range of forms of work that go undercredited, encouraging us to acknowledge and properly value forms of intellectual labor that too often get shoved under the category of "service to the field." In my own area of the humanities, such work includes translation, or the production of scholarly editions, or the editing of scholarly journals. None of these forms of work carries the same weight in most review processes as the scholarly monograph, and yet -- just to pick up one of those examples -- what more powerful position in shaping the direction of a field is there than that of the journal editor? ## what counts Note: This is just one of the kinds of problems that we need to confront. But again, I want to emphasize that it’s not enough simply to add “digital work” or “journal editing” to the list of kinds of work that we accept for tenure and promotion, not least because the impulse then is to apply currently understood standards to those objects: are there kinds of journals that “count,” and kinds that don’t? Does the journal have to have a specified impact factor? And even where we're more enlightened about our metrics for impact, and we employ a broader range of what get referred to as "alternative metrics," we run the risk of creating new modes of assessment that lead us toward increasing objectivity, perhaps, but also increasing impersonality, increasing utilitarianism, and increasing rigidity. ## what do we value Note: Instead, I want to approach the problem from a different direction, thinking less about better ways of conducting ostensibly neutral assessment and more about ways of focusing on the things we really care about.
This different mode of approach may require us to give up our reliance on some relatively easy, objective, quantitative measures, in favor of seeking out more complex, more subjective qualitative judgments -- but I would suggest that these kinds of complex judgments about research in our fields are the core of our job as scholars, and that we have a particular ethical obligation to take our responsibility for such judgments seriously. This different direction will also require us to think as flexibly as we can about how our practices should not only change now, but continue to evolve as the work that junior scholars produce changes. So what follows are a few principles -- as I said at the outset, some admittedly highly opinionated principles -- that we might consider in thinking about the policies and procedures that will enable us to focus less on what counts and more on what we genuinely value in scholarly work. ## (1) Don't let “but we don’t know how to evaluate this kind of work” stand as a reason not to evaluate it. Note: The first of these is that we simply have to get past the "but I don't know how to evaluate that kind of work" stage of the process. Many disciplinary organizations have developed statements about and guidelines for the evaluation of new kinds of scholarly work. For instance, the MLA’s Committee on Information Technology put forward its first such set of best practices back in 2000, and then updated those guidelines in 2012. The organization has also held numerous workshops on evaluation processes for digital work at its annual convention. And the AHA, CAA, and CCCC all have similar documents and support available. There are also excellent university policies available that can be emulated, including at Emory and at the University of Nebraska at Lincoln. ## (2) Support evaluator learning. Note: Despite the existence of these excellent criteria and models for evaluating new work, however, many faculty, especially those who have long worked in exclusively traditional forms, need support in beginning to read, interpret, and engage with digital projects and other new forms of scholarly work. This need is of course what led to the MLA's workshops; similar kinds of workshops have been held at the summer seminars of the Association of Departments of English and the Association of Departments of Foreign Languages, and at NEH-funded summer workshops. On the local level, my own College of Arts & Letters at MSU has begun holding regular workshops for both candidates and chairs on the review process, surfacing questions and concerns and supporting faculty in producing the best possible environment for evaluation. ## (3) Engage with the work on its own terms, and in its own medium. Note: Supporting evaluators in the process of learning how to engage with new kinds of work is crucial precisely because the work under review must be dealt with as it is, as itself. More or less every year I hear reports from scholars whose work is web-based but who have been asked to print out and three-hole-punch that work in order to have it considered as part of their dossiers -- or the contemporary version thereof: they've been asked to turn a web-based project into a PDF so that it can be submitted through the dossier system. Needless to say, eliminating the interaction involved in web-based projects undermines the very thing that makes them work. As the MLA guidelines frame it, “respect medium specificity” -- engage with new work in the ways its form requires. ## (4) Dance with the one you brought.
Note: In the same way that the work demands to be dealt with on its own terms, it’s crucial that tenure review processes engage with the candidates we’ve actually hired, rather than trying to transform them into someone else. While it’s tempting to advise junior scholars to take the safer road to tenure by adhering to traditional standards and practices in their work, such advice runs the risk of derailing genuinely transformative projects. In all cases, but perhaps especially when candidates have been hired into positions focused on new forms of research and teaching, or when they have been hired because of their innovative work thus far, they need to be supported in charting their own paths toward intellectual leadership. In creating that support, it’s particularly important to guard against doubling the workload on the candidate by requiring them both to complete the project and to do traditional work. This is a recipe for exhaustion and frustration; candidates should be encouraged to focus on the forms of their work that present the greatest promise for impact in their fields. ## (5) Prepare and support junior faculty as they “mentor up.” Note: My emphasis on supporting the candidates that you have doesn’t mean those candidates shouldn't have to persuade their senior colleagues of the importance of their work. Scholars working in innovative modes and formats must be able to articulate the reasons for and the significance of their work to a range of traditional audiences -- and not least, their own campus mentors. In theory, at least, this is the case for all scholars; it’s the purpose that the “personal statement” in the tenure dossier is meant to serve. For scholars working in non-traditional formats, however, there is additional need to explain the work to others, and to give them the context for understanding it. That process cannot begin with, but rather must culminate in, the personal statement. Throughout the pre-tenure period, candidates should be given opportunities to present their work to their colleagues, such that they have lots of experience explaining their work -- and ample responses to their work -- by the time the tenure review begins. They also need champions -- mentors who, having examined the work and come to understand its value, will help them continue to “mentor up” by arguing on behalf of that work among their colleagues. ## (6) Use field-appropriate metrics. Note: Every field has its own ways of measuring impact, and the measures used in one field will not automatically translate to another. A colleague of mine whose PhD is in literature, and who began her career as a digital humanist, now holds a position that is half situated in an English department and half in an information science department. Her information science colleagues, in beginning her tenure review, calculated her h-index -- and it was abysmal. The good news is that her colleagues then went on to calculate the h-indexes of the top figures in the digital humanities, and discovered that they were all equally terrible. Metrics like the h-index or citation counts simply do not apply across all fields. It’s absolutely necessary that we recognize the distinctive measures of impact used in specific fields and assess work in those fields accordingly. ## (7) Maybe be a little suspicious of counting as a method. Note: As those metrics indicate, we tend to like numbers in our assessment processes. They feel concrete and objective, and some of them are demonstrably bigger than others.
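To see what one of those numbers is actually doing, here is a minimal sketch of the h-index calculation in Python; the citation counts below are invented purely for illustration, not drawn from any real case. The definition -- the largest h such that at least h works have been cited at least h times each -- structurally favors fields that produce many moderately cited articles over fields organized around a few deeply influential books, which is exactly why the digital humanists' numbers looked so terrible.

```python
# Illustrative sketch only: computing an h-index from a list of
# per-publication citation counts. All numbers here are invented.

def h_index(citations):
    """Return the largest h such that at least h publications
    have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A book-centered humanist: a few deeply influential works.
print(h_index([90, 40, 3, 1]))  # -> 3

# An article-centered scientist: many moderately cited papers.
print(h_index([25] * 30))       # -> 25
```

Both profiles can represent genuine standing in their own fields; only one of them produces a number that looks healthy.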
The problem is that we tend only to count those things that are countable, and too often, if it can’t be counted, it doesn’t count. But there is an enormous range of significant data that cannot be captured or understood quantitatively. Citation counts, for instance: such metrics can tell us how often an article has been referred to in the subsequent literature, but they can’t tell us whether the article is being praised or buried through those citations, whether it’s being built upon or whether it’s being debunked. So while I’m glad that problematic metrics like journal impact factor are gradually being replaced with a more sophisticated range of article-level metrics, I still want us to be a bit cautious about how we use those numbers. This includes web-based metrics: hits and downloads can be really affirming for scholars, but they don’t necessarily indicate how closely the work is being attended to, and they aren’t comparable across fields and subfields of different sizes. If we’re going to use quantitative metrics in the review process, they need careful interpretation and analysis -- and even better, they should be accompanied by a range of qualitative data that captures the reception of and engagement with the candidate’s work. ## (8) Engage appropriate experts in the field to evaluate the work. Note: It is, by and large, the external reviewers that we have relied upon to produce the qualitative assessment of the tenure dossier. These experts are generally well-placed, more senior members of the candidate’s subfield who are asked to evaluate the quality of the work on its own terms, as well as the place that work has within the current discourses of the subfield. Where candidates present dossiers that include non-traditional work, however, we must seek out external reviewers who are able to evaluate not just the work’s content -- as if it were the equivalent of a series of journal articles or a monograph -- but also its formal aspects, accounting for the technical value of the work and the significance that it has for the field. This kind of medium-specific review is, I would argue, necessary for all forms of nontraditional work: a candidate whose dossier includes translation should have at least one qualified external reviewer asked to focus on the significance of the translation; a candidate whose dossier includes journal editing should have at least one qualified external reviewer asked to focus on the significance of that editorial work for the field. ## (9) But do not overvalue the judgments of those experts. Note: The external reviewers that are engaged by a department or a college to assess the work of a candidate are often in the best place to evaluate the quality of that work, its place within the subfield, its significance and reception, and the like. But all too often these reviewers are called upon -- or take it upon themselves -- to make judgments that are outside the scope of their expertise. It would be best for us to refrain from asking reviewers to indicate -- or even specifically to enjoin them from indicating -- whether a candidate’s work would merit tenure at their institution, or whether a candidate is among the “top” scholars in their field. Such comparisons rely on false equivalences among institutions and among scholars, and they are invidious at best. // Even more importantly, departments must use the judgments of those experts to inform their own judgment, not to supplant it. Departments know the internal circumstances and values of the institution in ways that external reviewers cannot.
And while the members of a departmental tenure review body might not be experts in a candidate’s specific area of interest, bringing in such experts cannot be used to absolve them of responsibility for exercising their own judgments, including engaging directly with the candidate’s work themselves. ## (10) Be a little suspicious of objectivity. Note: The desire to externalize judgment -- whether by relying upon quantitative metrics or on the assessments of external reviewers -- is understandable: we want our processes to be as uncontroversial, as scrupulous, and therefore as objective as possible. And there are certain subjective judgments -- such as those around questions of “collegiality” or “fit” -- that should not have any place in our review processes. But aside from those issues, we must recognize that all judgment is inherently subjective. It is only by surfacing, acknowledging, and questioning our own presuppositions that we can find our way to a position that is both subjective and fair. This is a kind of work that scholars -- especially those in the qualitative social sciences and the humanities -- should be well equipped to do, as it’s precisely the kind of inquiry that we bring to our own subject matter. // Moreover -- and this is something that really demands a whole talk of its own -- we need to acknowledge that “peer review” is not itself a singular, objective marker of quality research. And there isn’t just one appropriate way for peer review to be conducted. Many publications and projects are experimenting with modes of review that are providing richer feedback and interaction than can the standard double-blind process; it’s crucial that those new modes of review be assessed on their own merits, according to the evidence of quality work that they produce, and not dismissed as providing insufficiently objective criteria for evaluation. ## (11) Reward -- or at least don’t punish -- collaboration. Note: Along those lines: I have been told by members of university promotion and tenure committees that an open peer review process, or other forms of openly commentable work, would doom a tenure candidate because anyone who participated in that process would be excluded as a potential external reviewer. The intent again is objectivity: any scholar who has had any contact with the candidate’s work, or has engaged in any communication with the candidate, or has participated in any projects with the candidate, could not possibly be at the arms-length distance required to evaluate the work. // This is not only a pretty dubious form of the insistence on objectivity but also a highly destructive misunderstanding of the nature of collaboration in highly networked fields today. I understand the impulse: to ensure that the judgment provided by an external reviewer is as focused on the work as possible, without being colored by a personal relationship. But there are degrees, and we need to be able to make distinctions among them. At my own prior institution, the line was drawn at personal benefit: if potential external reviewers stand to gain in their own careers from a positive outcome in the review process -- a dissertation director who becomes more highly esteemed the more highly placed his former advisees are; a co-author whose work gains greater visibility the more her partner’s career advances; and so forth -- such reviewers should obviously not be engaged.
But other levels of interaction should not disqualify reviewers, including co-participants in conference sessions, commenters on online projects, members of advisory boards on which the candidate also serves, and so forth. In fact, a key component of impact on a field lies in exactly those kinds of connections: we should want tenure candidates to be developing active relationships with other key members of their fields, to be working with them in a wide variety of ways. Such relationships should be disclosed in the review process, but they should not be used to eliminate the reviewers who might in fact be the best placed to assess the candidate’s work. ## in the end Note: The key thing, in the end, is that the tenure review should be focused on assessing the impact that the candidate’s work is beginning to have on its field, and the confidence that impact to this point gives you about the importance of the work to come. And the ways that we understand and assess impact need to be lifted out of the contradictions in which they've become mired. We need to understand and appreciate that the tenure review process is and ought to be individual, personal, and subjective, and we need to seek ways to be equitable in our practices without trying to impose artificial, and impossible, impersonality and objectivity. We need to reorient our thinking away from what counts ## what do we value Note: and more toward what we value, and why. Each aspect of the standards and processes that we bring to the tenure review process should be reconsidered in that light: the measures we use, the evaluators we engage, the ways the work is being read or experienced -- are all of these aspects producing the best possible way of thinking about how our scholarly values are being manifested in a career in process, and are they guiding us to the most responsible way of considering its future? ## these slides http://kfitz.msu.domains/presentations/whatcounts.html ## resources - [MLA Guidelines for Evaluating Work in Digital Humanities and Digital Media](https://www.mla.org/About-Us/Governance/Committees/Committee-Listings/Professional-Issues/Committee-on-Information-Technology/Guidelines-for-Evaluating-Work-in-Digital-Humanities-and-Digital-Media) - [AHA Guidelines for the Professional Evaluation of Digital Scholarship by Historians](https://www.historians.org/teaching-and-learning/digital-history-resources/evaluation-of-digital-scholarship-in-history/guidelines-for-the-professional-evaluation-of-digital-scholarship-by-historians) - [CAA Guidelines for the Evaluation of Digital Scholarship in Art and Architectural History](http://www.collegeart.org/pdf/evaluating-digital-scholarship-in-art-and-architectural-history.pdf) - [CCCC Promotion and Tenure Guidelines for Work with Technology](http://cccc.ncte.org/cccc/resources/positions/promotionandtenure) ## resources - [Emory College Principles and Procedures for Tenure and Promotion](http://college.emory.edu/faculty/documents/faculty-advancement/tenure-track/tenure-and-promotion-principles-and-procedures_11_16.pdf) (see esp. Appendix D) - [University of Nebraska at Lincoln Promotion & Tenure Criteria for Assessing Digital Research in the Humanities](https://cdrh.unl.edu/articles/promotion) ## thank you [Kathleen Fitzpatrick](http://kfitz.msu.domains) // [@kfitz](http://twitter.com/kfitz) // [kfitz@msu.edu](mailto:kfitz@msu.edu)