Assessment and Power in the University

Universities have been assessing students by grading their work since the Middle Ages. Sometimes students complained that the professor wasn’t fair, but nobody thought the system was fundamentally flawed.

Then, about three decades ago, a new idea arose in American universities—that campus bureaucrats needed to assess student learning outcomes. It grew out of a legitimate effort to distinguish the effect of admitting only top-quality students (the selection effect) from the effect of actually teaching students, perhaps of indifferent quality, new skills and knowledge (the training effect). Mid- and lower-tier colleges, anxious to show that they were adding value for their graduates even if they did not carry the prestige of the top schools, were quick to embrace assessment.

The “assessment” craze did little harm until the 1990s, when its advocates began to argue that while grading measured student performance, it did not measure student learning. That argument has led to the growth of a whole industry that purports to measure student learning.

When I started teaching in the mid-1990s, I had my history students write weekly essays and take a couple of essay-based exams. From that I could determine whether they understood the content I was teaching and I could see their writing develop as the semester progressed. That gave me the information I needed to award grades but also to see if there were any consistent problems that I needed to address. I still do this and any substantive changes I make to my courses come from this type of feedback.

In the assessment era, however, this is considered insufficient. For about a decade now, university assessment directors, who are often ex officio members of curriculum committees (even though few of them are faculty members), have been instructing professors to develop program- and course-level statements called Student Learning Outcomes (SLOs). Typically, professors have three or more of these per program or course.

These statements have to use verbs from an approved list. SLOs have to be concrete and measurable. You shouldn’t use a word like “understand” in formulating an SLO because understanding is not something a student does and it’s hard to measure. Likewise, one ought to avoid the term “ability” because it suggests something inherent to the student rather than a demonstration of what he has learned. Worse, it suggests that some students might have more ability than others.

Above all, “critical thinking” is not a desirable outcome because it’s too vague and mushy.

So we end up with goals (sorry, that’s a forbidden term too, I should say “outcomes”) for our courses like: “Describe the origins of the institution of slavery in the US.” “Compose an essay that distinguishes between the nature of European Imperialism before and after 1850.” “Distinguish between primary and secondary source material.”

Those are worthy skills for a student to have, but they don’t capture the more important, if vaguer, abilities that most professors would consider the real goals of their courses: becoming a critical reader and writer, or making valid inferences from historical material.

Then you need to develop a mechanism for finding out whether your students have met the SLOs. You will be forgiven for thinking that the students’ grades in the course might answer this question. Not so. You need a separate process, usually involving a test inserted into your course that is easy to grade and yields a quantifiable result, reports to several committees, and meetings to determine whether the SLOs have been met.

Most universities don’t require that material used for assessment be graded, but students usually don’t put much effort into work that does not affect their grades, so in practice assessment material becomes part of the curriculum.

That’s not the end. Next you have to identify some aspect of your course that is not going as well as you would like and change something about your course or program to remedy this “problem.” This is called “closing the loop” and it has to happen every year.

So, each year we find that although our students have met the SLOs (and they always do), there is nonetheless some contrived problem that we respond to with an equally contrived but heavily documented and carefully reported change. Loop closed.

Virtually every university in the country now has an assessment office devoted to overseeing and directing this byzantine process. Those offices have steadily been gaining staff, power, authority, and resources.

Amazingly, there is no evidence that learning outcomes assessment has improved student learning or led to any improvement in what universities do.

On a practical level, no one even pretends that what distinguishes good schools from bad schools is their commitment to or execution of learning outcomes assessment.

Nor does anyone seriously propose that people who were educated before the age of assessment received inferior educations because bureaucrats did not assess their learning.

A couple of years ago I wrote a piece in the Chronicle of Higher Education pointing out the lack of evidence that learning outcomes assessment improves student learning. In researching the article I looked high and low for a study that showed assessment has improved student learning or that a robust assessment program might make one college better than another. I found nothing. No one has bothered to assess the effects of assessment.

The response to my piece from the assessment world was anecdotes and panel discussions about how to deal with assessment doubters, but no evidence to support their claims that students learn more when we follow their formula.

And because assessment employs none of the basic principles of research design, there is no reason to believe that further investment in assessment (as opposed to actual scholarly research on student learning) will yield meaningful results.

For example, assessors never use the control groups that are a standard feature of real research. So even if you can show that the students in a particular course do 15 percent better at something at the end of the course than they did at the beginning, there is no way to say that it was something intrinsic to the course that caused the change.
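To make that concrete, here is a minimal sketch in Python, with entirely invented numbers (a background gain every student picks up over a semester, plus a separate effect from the course itself), of what a control group buys you. The naive pre/post comparison that assessors report mixes the two together; only subtracting a control group’s gain isolates the course’s own contribution:

```python
# Hypothetical illustration: why a pre/post gain without a control group
# cannot isolate a course's effect. All numbers below are invented.
import random

random.seed(0)
N = 200  # students per group

def semester_gain(took_course: bool) -> float:
    maturation = random.gauss(5.0, 2.0)  # growth every student gets anyway
    course = random.gauss(10.0, 2.0) if took_course else 0.0  # the course's own effect
    return maturation + course

treated = [semester_gain(True) for _ in range(N)]   # students in the course
control = [semester_gain(False) for _ in range(N)]  # comparable students not in it

naive = sum(treated) / N             # what a pre/post-only assessment reports
isolated = naive - sum(control) / N  # gain attributable to the course itself
print(f"naive pre/post gain: {naive:.1f}")
print(f"gain net of the control group: {isolated:.1f}")
```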

Worse, the entire assessment process is designed and executed by people with a strong interest in the outcome of the process.

For those and other reasons, I concluded in this Inside Higher Ed essay last November that assessment is just an “empty ritual” that wastes time and money.

So Why Do Universities Continue to Invest Money in Assessment Offices?

The proximate answer is that they do so because the accreditors tell them they have to. That is, accrediting standards require schools to set learning outcomes for courses and to quantify the results.

If a college wants federal student aid money, it must remain accredited. Therefore, a big component of the accreditation process is to heap quantities of data on the accreditors to show them that your students are meeting the learning outcome objectives.

Ask an assessment bureaucrat whether assessment works or not and inevitably the response will be, “Do you want to lose X million in federal money?” and never, “Here is concrete evidence that assessment works.”

It’s the quality control equivalent of the TSA’s security theater. Just as taking off your shoes and not carrying liquids provides little more than the appearance of security, demanding that universities provide “evidence” that they are doing a good job of teaching their students provides the appearance of accountability.

The consequences of this are far reaching. Universities have to pay directly to support a new class of assessment bureaucrats, but they also pay indirectly because faculty have to deal with assessment both in their classes and in the many committees that have grown up around the assessment imperative. All of that time and effort represent resources that are not spent on other more productive things.

If This Is So Useless and Costly, Why Don’t Universities Fight Back?

I contend that universities don’t resist assessment because it serves a purpose. Although it has nothing to do with student learning, assessment strengthens the parts of the university administrators can most easily control—the staff and the most staff-like members of the faculty—and weakens the more independent-minded elements of the faculty by forcing them to comply with this empty ritual.

Submitting to the assessment agenda is actually attractive to some faculty members. For academics who are uninterested in or unable to thrive in the traditional faculty roles of teaching and scholarship, assessment offers a place where the ability to master the ever-changing jargon of curriculum maps, to police the baroque language of Student Learning Outcomes, and to devise rubrics creates a type of expertise.

Active scholars avoid assessment committees like the plague, so people who are not busy in their labs or the archives are the ones who end up on these committees.

On one level, this might seem like a good allocation of human resources; it lets the researchers do their research and shifts the burden of the bureaucratic busywork to the less scholarly. Unfortunately, these committees have real power and are taking control of the curriculum in an increasingly centralized, top-down process.

The result is that American public universities, once the envy of the world, are increasingly being run in the centralized and bureaucratized manner of our secondary schools, which are global laggards.

  • DrOfnothing

    Yes, the “assessment structure” is a complete boondoggle and needs to be scrapped. Unfortunately, that change has to start at the top (i.e. with the accreditation mechanism). Otherwise, universities cannot risk running afoul of their strictures and being financially penalised as a consequence.

    • FC

      Exactly…and I trust that won’t happen until the dollars available begin to dry up. The University of Missouri is having to implement change because their incoming freshman enrollment is down 30%. Furthermore, the new Chancellor has announced they are now going to promote more open dialogue vs. the shout-down of those with whom one may not agree. (Granted, the MU issue has nothing to do with assessments, but surely there are some administrators the university can no longer afford.)

  • Glen_S_McGhee_FHEAP

    As a matter of historical accuracy, “grading” in America dates from the eighteenth century. Perhaps the author is thinking of something else.

    “The history of grading in American colleges was eloquently detailed by Mary Lovett Smallwood (1935). She related that marking, or grading, to differentiate students was first used at Yale. The scale was made up of descriptive adjectives and was included as a footnote to Stiles’s 1785 diary.”

    The following historical outline is very instructive (I thought):
    http://www.indiana.edu/~educy520/sec6342/week_07/durm93.pdf

    On the contrary, assessment is not an “empty ritual,” but the engine that drives the entire apparatus of stratification, of producing “fails” and “passes” (Luhmann) when viewed from the social and cultural perspective. It is the basis of awarding credentials, which may, in turn, improve life-chances. Maybe.

    Above and beyond problems with assigning nominal variables (per measurement theory, error theory, item response theory, etc.), there is always the imposed blindness of assessment — the idea that measured changes are the result of schooling, to the exclusion of all other social and cultural influences. No one questions this assumption; doing so would undermine “assessment” itself.

    Lastly, accreditors are unfairly targeted — student learning outcomes do not, in fact, originate with accreditation associations, but are a response by member schools to the federal requirement that accrediting standards assess institutional “success with respect to student achievement in relation to the institution’s mission.” (HEA Sec. 496(a)(5)(A))

    Congress wrote these criteria into the statute, and then delegated the interpretation and implementation to membership associations recognized by the Secretary of Education. If there is a problem here, it started with Congress; and it does not rest with the accreditation associations, but with the student learning standards mandated by the Higher Education Act of 1965.

  • bdavi52

    Why is this surprising?

    The purpose of bureaucracy is bureaucracy. It is the means and the end. It is the outcome and the tool used to generate the outcome. Bureaucrats are birthed by other bureaucrats, splitting apart in rapid expansion, one cell becoming two becoming four, becoming eventually an uncountable Kafkaesque Kastle of bureaucratic delight. So here in Xanadu U. we find cubicles contained in clusters becoming departments becoming entire self-contained, self-replicating organizations of Bartlebys, each with their own buildings & engraved letterheads and websites!

    They do nothing; they add nothing; they contribute nothing to the actual/real output of any system (be it a ‘good’, a ‘service’, an education, a degree) — rather they encumber & obstruct. They are the brake, the roundabout, the interruption in what otherwise would be the shortest distance between two points, and they expand that interruption exponentially, given the slightest opportunity.

    They stand, by design, between the actual worker (be it a teacher, a laborer, a mason, a builder) and the tasks the worker, by his or her nature, actually performs. They produce professional interference while other bureaucrats gleefully measure and report on the interference process: “Please see below the list of faculty members who have yet to complete the SLOs for this semester’s programs. Your department is currently performing at a 73% SLOEffLevel, and has been so noted in the College’s Semester Strategy & Outcome Summation (SSOS). Please forward your official Action Response Reports (ARRs) regarding your current SLOEffLevel Rating ASAP, to allow the school’s Performance to Target Measures (PTTMs) to be accurately updated.”

    Change the background just slightly (the dialogue can remain constant) and Office Space’s TPS report scene could be AnyPolyState University anywhere, juggling SLOs: https://www.youtube.com/watch?v=jsLUidiYm0w

    • Glen_S_McGhee_FHEAP

      I loved the first paragraph. Max Weber called it the “iron cage.”

      Kafka once wrote, “Every revolution evaporates and leaves behind only the slime of a new bureaucracy.” This is my favorite Kafka quote.

      And I loved Melville’s ‘Bartleby, the Scrivener,’ which shows better than anyone else the effect of bureaucracies (such as the Dead Letter Office) on those trapped in them, including the narrator of the story.

      But here’s one reference that tops them all: Zygmunt Bauman, Modernity and the Holocaust — describes the precise mechanisms by which bureaucracies produce, among other things, social indifference on the level of the Holocaust. http://www.faculty.umb.edu/lawrence_blum/courses/290h_09/readings/bauman_intro.pdf