There was a disturbing little article in the Times a couple weeks ago about the increasing share of college and university budgets going to athletic facilities, sports teams, student clubs, fancy dormitories, and other non-instructional ends. The article quotes Ohio University Professor Richard K. Vedder giving the obvious explanation: "In the zeal to get students, [schools] are going after them on the basis of recreational amenities." Professor Vedder studies the economics of higher education, but it doesn't take a specialist to see how this is playing out.
I was riding the Q train over the Manhattan Bridge a couple days ago, and the whole car was plastered with an ad campaign for Monroe College; the ads touted the school's sports teams, its student clubs, its athletic facilities; they compared the dormitories to luxury condos; and they said very little about academics. I think it's pretty clear that what we're seeing is a higher-education system that's responding to market forces.
This is something that we want to take a close look at, because a lot of people think that the model for public primary and secondary schooling ought to be a lot more market-based. In an article from 2000, for example, U. of C. professor of economics and Nobel laureate James Heckman makes the argument in no uncertain terms: "Once it is recognized that public schools, especially inner-city public schools, are a virtual monopoly, while the U.S. university system is highly competitive, the mystery of the poor performance of the former, and the great success of the latter vanishes."
On the surface, the decreasing emphasis on instruction in college and university budgets seems like an argument against Heckman. In fact, I think that decreasing emphasis tells a different story, not about the merits of competition in education, but about the tools by which education is measured.
Heckman's thinking on this issue is, like that of so many academics, distorted by the assumptions and paradigms of his field. Economists are used to thinking about goods whose quality is relatively easy to determine. An education is vastly more complicated than your average widget, and its quality is extremely difficult to assess. It is the tools and structures by which we evaluate it, far more than the competition or lack thereof, that will ultimately determine the quality of education. Only what is measurable can be rewarded or punished by the market. In other words, it's all about incentives.
What Are Those Universities Up To?
The first question is why colleges and universities want so damn many applicants. Monroe is actually an extreme example, because it's a for-profit organization: the more students they can draw at a given tuition, the more money they're making. Fine, but what about schools like CUNY or MCNY, which I've also caught advertising on the subway? True, their ads were more substantive and instructionally oriented, but they're still ads—they're still trying to pull in more applicants. They must somehow be incentivized to increase their applicant pools.
The first thing to understand is that the incentives in a non-profit school aren't much different from those in a for-profit. The job of the president of a college or university is to bring in money, and he is evaluated by the board of trustees on how much money he brings in. Now, beneath the president are a number of provosts, deans, vice presidents and so on, who are evaluated on other, more specific, hopefully more academic grounds, but all of these are evaluated by the president, whose sole economic incentive is to get the school more money. So, from a purely economic point of view, the school operates as a funding-accumulator. Now, we know there's more going on than that, but on some level, quality of education, prestige of school, and every other attribute of an institution can be viewed as ways that the school tries to draw donations.
There are two ways in which I suspect drawing applicants becomes a goal for the school: first, the ratio of applicant pool to class size is taken as a measure of the exclusivity of an institution; second, the larger the applicant pool, the more selective a school can be in picking its incoming classes—thus, a larger applicant pool means a more accomplished student body, which means an alumni body with more earning power and more donations down the line.
This second benefit of a larger applicant pool cannot be overstated. After all, by the age of 18, personality is already well developed: a good student at 18 is likely to remain a good student, and a bad one a bad one; IQ, likewise, is already set, and has been for ages; a good attitude, an ambitious or hardworking nature, a creative or adaptive mind—these are qualities that have likely already developed. If one wishes to improve the intelligence, creativity, earning potential, and so on of one's graduates, by far the easiest way to do that must be to admit more intelligent, creative, employable freshmen.
Not surprisingly, the dean of admissions at most schools is evaluated based on his ability to increase the size and quality of the school's applicant pool.
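To make the exclusivity arithmetic concrete, here's a minimal sketch; the class size and applicant counts are invented for illustration, not drawn from any real school.

```python
# Toy arithmetic behind the exclusivity ratio; all figures invented.
class_size = 1_000  # seats in the incoming class

for applicants in (4_000, 8_000, 16_000):
    admit_rate = class_size / applicants
    print(f"{applicants:>6} applicants -> {admit_rate:.1%} admit rate")

# Doubling the applicant pool halves the admit rate without the
# school changing anything at all about its instruction.
```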
Why Do Those Kids Want Those Fancy Gyms So Damn Bad?
That, by my reasoning, is the story from the institutions' point of view, but what about the kids—why are fancy gyms and dormitories the best way to draw applicants? On the surface, it doesn't make a lot of sense. Ostensibly, people go on to higher education to increase their employability and to improve their intellect; dormitories and gyms serve neither of those purposes. After all, a college tuition is a stunningly high price for a gym membership and a bedroom in a giant industrial building with shared bath and kitchens. So, what the heck are these kids up to?
Partly, of course, this is another case of that great downfall of classical economics, people behaving irrationally and against their economic interests (damn them!). Indeed, it's easy to blame the kids: they don't know what's good for them, they just want to have a good time and party a lot, and so on. There's probably some truth to that, and no wonder—after all, we wouldn't expect a bunch of fifth graders, left to their own devices, to pick a tough, academically rigorous junior high over an easy-going school with great sports teams and a nice student lounge. As with the fifth graders, though, parents are closely involved in selecting colleges. What's more, given the heavy burden of debt most kids take on as undergrads, it's hard to believe that many of them are only in it for the luxurious dorms. We might expect rich kids whose tuitions are paid outright to take their education more lightly, but in fact, spending on student services and athletics is increasing faster among low-tuition schools than among high-tuition ones.
Another possibility here is that universities are simply bad marketers—that they're operating under the false premise that kids choose colleges based largely on the quality of the dorms and the gyms—but this seems like an awfully far-fetched premise to have simply invented, and it's particularly suspicious that universities and colleges of all types across the country would all have adopted it without evidence.
I think there must be subtler forces at work here—and as you might have guessed, I have some thoughts on what they are. I suspect the problem stems from something that I have believed since early in my undergraduate education—namely, that college applicants and their parents have very poor information about the schools to which they're applying. There follows a long discussion of why this is the case, but if you want to take my word on this, feel free to skip it.
Why College Applicants and Their Parents Know So Little
When parents are choosing a primary, secondary, or even pre-school for their kids, they have a lot of very concrete information. First of all, there are standardized test scores: SATs, APs, and Regents exams for a high school, state tests for a public grade school, ERBs and other standardized tests for a private one. Even more important are exmissions, a word I actually learned earlier today for the placement of graduates in schools at the next level of education—for example, a preschool's exmissions are the elementary schools to which it sends its graduates. Parents and prospective students also have access to useful qualitative data on a school: they can observe classes, tour the cafeteria and hallways, even talk to current students about their school experience.
College and university applicants have much foggier data. There is no standard exit exam for colleges; some graduates take the GRE or other grad school exams, but they often do so years after finishing college, and many of them don't take such exams at all; even when they are taken, GRE scores rarely get reported to undergraduate alma maters. Colleges have their version of exmissions—placement in jobs or graduate schools—but again, these data are the private information of the graduate and are not consistently kept on record by schools. Even were those data available, they would be hard to interpret, because the number and type of jobs, graduate schools, internships, and fellowships which a college graduate might pursue are so vast that evaluating them becomes a daunting if not impossible task for any prospective student or parent.
There are numerous other quantitative measures of school quality in higher education—the number of alumni defaulting on college loans, the percentage of students or faculty who have received national awards and fellowships, the frequency of publication among professors—but these are so numerous and difficult to interpret that only a preternaturally avid and well-informed college applicant could be expected to put them to any use.
It might seem that the qualitative data on colleges is more accessible. After all, observing classes, touring campuses, and talking to current students is a standard part of the college selection process. In this case, the problem is one of scale. In all but the largest high schools and middle schools, it's possible to stand in the cafeteria at lunchtime or the hallway between classes and learn a lot about school culture; in a day or two of classroom observations, one can see a representative sample of teachers and classes; and in moderately sized schools it's even possible to gauge the general student attitude towards the school by talking to a handful of randomly chosen students.
At your average university, the case is very different. Classes are numerous and various; they differ greatly from professor to professor, department to department, and lecture to seminar. Student culture does not reveal itself in hallways and quads between classes, nor can it be gleaned from a handful of student interviews; it is rich and complex, occurs on and off campus, in dormitories, classrooms, libraries, apartments, and around town; different aspects of it will reveal themselves during the day and night, on weekends and weekdays, in clement and inclement weather; and it is highly heterogeneous, consisting of numerous sub-cultures, many of which don't have much contact with one another.
So what sources of information does the college applicant have? There are several, and they're all fairly bad: first, there are the national college rankings that are published by news organizations; second, there are brochure materials and advertising; third, the advice of college counselors; fourth, campus tours and the Q & A sessions that follow; fifth, the overnight visit; and finally, the college interview and conversations with other current students and alumni.
At all but the smallest colleges, the last two of these (the overnight visit and conversations with current students and alumni) will provide inconsistent and unrepresentative information because of the cultural heterogeneity that I've already discussed. Let's take the other four one at a time. A college ranking is a one-dimensional list of schools, based on a pastiche of questionable data points, and provides no detail whatsoever. Brochure material and advertisements are printed by the school and only contain statistics that show the school in a favorable light. College counselors are a bit of a black box: where does their information come from? Do they have any special knowledge beyond what's available to anyone else? The campus tour is a bizarre little ritual, equal parts irrelevant historical trivia and—aha!—showcase of the school's physical plant: gymnasiums, dormitories, computer labs, lecture halls and so on. Q & A sessions might be useful to a really investigative applicant, but even with a good deal of expertise, it's hard to come up with really useful questions—and who knows, if someone did, whether the dean running the Q & A would have the answer.
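To illustrate how a one-dimensional ranking destroys detail, here's a toy sketch; the factor names, weights, and school profiles are all invented, and this is not any real publication's formula (Forbes's actual factors are described in the notes at the end of this post).

```python
# Toy model of a composite ranking: a weighted sum collapses five
# data points into one number. Everything here is invented.
weights = {"reviews": 0.25, "notable_alumni": 0.25, "awards": 0.20,
           "four_year_grad": 0.15, "low_debt": 0.15}

schools = {
    "Alpha U": {"reviews": 0.90, "notable_alumni": 0.20, "awards": 0.30,
                "four_year_grad": 0.95, "low_debt": 0.80},
    "Beta C":  {"reviews": 0.50, "notable_alumni": 0.80, "awards": 0.80,
                "four_year_grad": 0.50, "low_debt": 0.25},
}

for name, factors in schools.items():
    score = sum(weights[k] * factors[k] for k in weights)
    print(f"{name}: {score:.4f}")

# Both print 0.5975: two radically different schools land on an
# identical composite score, and the single number says nothing
# about how they differ.
```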
What You Get When You Don't Assess
I could offer some conjectures as to what precisely happens in the absence of good measures of educational quality at the tertiary level. Chief among them would be the cultural fetish of the "college experience," wherein, actual educational value being difficult to assess, the emphasis is placed instead on the life and lifestyle of a college student as a class-defining ritual. Such an emphasis causes applicants to consider not where they will be best educated—since this is unknowable—but where they will have the best time, the most elite and upper-class college experience. Additionally, physical amenities and impressive sports teams may become proxies for educational quality. They are big, tangible demonstrations of institutional wealth and property; they can be easily photographed and their images printed in brochures; and their function is easily understood. State-of-the-art facilities, the applicant assumes, imply state-of-the-art education.
Such conjectures, however, are tangential, because my point is that you cannot incentivize what you do not measure. If the quality of the actual education at colleges and universities cannot, without great effort and expertise, be known to the consumer—the prospective student—then we must expect quality of actual education to be neglected in favor of more measurable attributes. True, there will be individual institutions that independently maintain sight of the original goals of higher education, but they will be a shrinking minority.
There's a very broad point I want to make here, though it strays outside the scope of this blog, and it is this: a market is a wonderfully efficient and effective motivator of behavior, but it's very difficult to be sure precisely what behavior it will motivate; in an unregulated market, incentives will tend to drift towards the cheapest and most obvious proxy for quality. An unregulated market for food produces the saltiest, fattiest, most brightly colored, and most quickly prepared food; an unregulated market for films produces the schlockiest, most emotionally manipulative, least challenging films, peopled with the handsomest actors, enhanced by the newest technology.
If we believe that true quality—in education, food, art, etc.—does not quickly reveal itself, then we can only entrust the maintenance of quality to market competition if we provide external measures of quality and make those measures readily and unambiguously available to consumers. The FDA has done this very successfully in the food market, by requiring packaging to include nutritional information. The DoE has tried to do this, with less success, in primary and secondary education, through state exams.
Those of us who hate state exams—and I'm surely one—should remember where we'd be without any kind of assessment: we'd be at the mercy of market forces or, in the absence of competition, sheer human laziness.
For more—much more—on the role of measurement in producing educational quality, check out my next post.
 Thanks to Josh Jackson for information on how the performance of university presidents and other administrators is assessed.
 Over the past decade, spending on student services at private research universities increased at a rate about 64% faster than that of instructional spending; at public research universities, spending on student services increased about 100% faster; and at community colleges, that difference was about 180%. This is all according to the article.
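For a sense of what "X% faster" means in annual terms, here's some back-of-the-envelope arithmetic; the 5% baseline growth rate for instructional spending is invented purely for illustration, since the article's underlying figures aren't reproduced here.

```python
# "64% faster" means the student-services growth rate is 1.64x the
# instructional-spending growth rate. The baseline is hypothetical.
baseline = 0.05  # invented annual growth of instructional spending

for sector, premium in [("private research", 0.64),
                        ("public research", 1.00),
                        ("community college", 1.80)]:
    services = baseline * (1 + premium)
    print(f"{sector:>17}: instruction {baseline:.1%}/yr, "
          f"student services {services:.1%}/yr")

# With a 5% baseline, student services grow at roughly 8.2%, 10%,
# and 14% per year in the three sectors respectively.
```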
Forbes, for example, determined its 2008 list of college rankings based on the following five factors: professor reviews from RateMyProfessors.com; percentage of alumni "listed among the notable people in Who's Who in America"; percentage of students and faculty winning "nationally competitive awards like Rhodes Scholarships or Nobel Prizes"; percentage of students graduating in four years; and average student debt upon graduation.
The last two appear irrelevant—as opposed to merely trivial or arbitrary. They're included because Forbes compiles its report in conjunction with the Center for College Affordability and Productivity, which attempts to "put itself in a student's shoes." This student's perspective, evidently, leads to a number of questions, including the following: "If I have to borrow to pay for college, how deeply will I go into debt? What are the chances I will graduate in four years?" What I don't understand is why you would use average student debt upon graduation—which must depend heavily on factors like average parental income, and so can hardly be relevant to an individual student assessing her own risk of falling deeply into debt—rather than, say, tuition; or, if you want to be a little more in-depth, total scholarships awarded by the school per year, divided by tuition. Using average student debt seems almost willfully foolish.
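Here's a minimal sketch of the ratio I'm proposing; the function name and all figures are hypothetical, chosen only to show the arithmetic.

```python
# Sketch of the metric proposed above: total scholarships awarded
# per year divided by tuition. All names and figures are invented.
def full_tuition_equivalents(total_scholarships: float, tuition: float) -> float:
    """Annual aid budget expressed as equivalent full-tuition scholarships."""
    return total_scholarships / tuition

# Imaginary school: $30M in annual aid, $40k sticker price.
aid_slots = full_tuition_equivalents(30_000_000, 40_000)
print(f"{aid_slots:.0f} full-tuition equivalents per year")  # 750

# Dividing by enrollment gives a rough per-student measure of how
# much of the sticker price the school itself gives back.
print(f"{aid_slots / 5_000:.0%} of a hypothetical 5,000 students")  # 15%
```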
By sheer coincidence—I found the Forbes methodology in a Google search of college rankings—the gentleman in charge of developing this whole ranking system is the very same Dr. Vedder who's quoted in the Times article with which I began this post. Interesting, I think, that Dr. Vedder, who admonishes colleges for their excessive zeal in soliciting applicants and speaks with haughty academic remove of the "country-clubization of the American university," participates in the very obfuscation and misdirection that leads to misplaced incentives in American higher education.
(All of the above information about Forbes's college rankings comes from the Forbes website.)