Monday, July 19, 2010

Higher Education: Not Really About Education Anymore


There was a disturbing little article in the Times a couple weeks ago about the increasing share of college and university budgets going to athletic facilities, sports teams, student clubs, fancy dormitories, and other non-instructional ends. The article quotes Ohio University Professor Richard K. Vedder giving the obvious explanation: "In the zeal to get students, [schools] are going after them on the basis of recreational amenities." Professor Vedder studies the economics of higher education, but it doesn't take a specialist to see how this is playing out.

I was riding the Q train over the Manhattan Bridge a couple days ago, and the whole car was plastered with an ad campaign for Monroe College; the ads touted the school's sports teams, its student clubs, its athletic facilities; they compared the dormitories to luxury condos; and they said very little about academics. I think it's pretty clear that what we're seeing is a higher-education system that's responding to market forces.

This is something that we want to take a close look at, because a lot of people think that the model for public primary and secondary schooling ought to be a lot more market-based. In an article from 2000, for example, U. of C. professor of economics and Nobel laureate James Heckman makes the argument in no uncertain terms: "Once it is recognized that public schools, especially inner-city public schools, are a virtual monopoly, while the U.S. university system is highly competitive, the mystery of the poor performance of the former, and the great success of the latter vanishes."[1]

On the surface, the decreasing emphasis on instruction in college and university budgets seems like an argument against Heckman. In fact, I think that decreasing emphasis tells a different story, not about the merits of competition in education, but about the tools by which education is measured.

Heckman's thinking on this issue is, like that of so many academics, distorted by the assumptions and paradigms of his field. Economists are used to thinking about goods whose quality is relatively easy to determine. An education is vastly more complicated than your average widget, and its quality is extremely difficult to assess. It is the tools and structures by which we evaluate education, far more than the presence or absence of competition, that will ultimately determine its quality. Only what is measurable can be rewarded or punished by the market. In other words, it's all about incentives.


What Are Those Universities Up To?

The first question is why colleges and universities want so damn many applicants. Monroe is actually an extreme example, because it's a for-profit organization: the more students it can draw at a given tuition, the more money it makes. Fine, but what about schools like CUNY or MCNY, which I've also caught advertising on the subway? True, their ads were more substantive and instructionally oriented, but they're still ads—they're still trying to pull in more applicants. They must somehow be incentivized to increase their applicant pools.

The first thing to understand is that the incentives in a non-profit school aren't much different from those in a for-profit. The job of the president of a college or university is to bring in money, and he is evaluated by the board of trustees on how much money he brings in. Now, beneath the president are a number of provosts, deans, vice presidents and so on, who are evaluated on other, more specific, hopefully more academic grounds, but all of these are evaluated by the president, whose sole economic incentive is to get the school more money. So, from a purely economic point of view, the school operates as a funding-accumulator. Now, we know there's more going on than that, but on some level, quality of education, prestige of school, and every other attribute of an institution can be viewed as ways that the school tries to draw donations.

There are two ways in which I suspect drawing applicants becomes a goal for the school: first, the ratio of applicant pool to class size is taken as a measure of an institution's exclusivity; second, the larger the applicant pool, the more selective a school can be in picking its incoming classes—thus, a larger applicant pool means a more accomplished student body, which means an alumni body with more earning power and more donations down the line.

This second benefit of a larger applicant pool cannot be overstated. After all, by the age of 18, personality is already well developed: a good student at 18 is likely to remain a good student, and a bad one a bad one; IQ, likewise, is already set, and has been for ages; a good attitude, an ambitious or hardworking nature, a creative or adaptive mind—these are qualities that have likely already developed. If one wishes to improve the intelligence, creativity, earning potential, and so on of one's graduates, by far the easiest way to do that must be to admit more intelligent, creative, employable freshmen.

Not surprisingly, the dean of admissions at most schools is evaluated based on his ability to increase the size and quality of the school's applicant pool.[2]


Why Do Those Kids Want Those Fancy Gyms So Damn Bad?

That, by my reasoning, is the story from the institutions' point of view, but what about the kids—why are fancy gyms and dormitories the best way to draw applicants? On the surface, it doesn't make a lot of sense. Ostensibly, people go on to higher education to increase their employability and to improve their intellect; dormitories and gyms serve neither of those purposes. After all, college tuition is a stunningly high price for a gym membership and a bedroom in a giant industrial building with shared baths and kitchens. So, what the heck are these kids up to?

Partly, of course, this is another case of that great downfall of classical economics: people behaving irrationally and against their economic interests (damn them!). Indeed, it's easy to blame the kids: they don't know what's good for them, they just want to have a good time and party a lot, and so on. There's probably some truth to that, and no wonder—after all, we wouldn't expect a bunch of fifth graders, left to their own devices, to pick a tough, academically rigorous junior high over an easy-going school with great sports teams and a nice student lounge. As with the fifth graders, though, parents are closely involved in selecting colleges. What's more, given the heavy burden of debt most kids take on as undergrads, it's hard to believe that many of them are only in it for the luxurious dorms. We might expect rich kids whose tuitions are paid outright to take their education more lightly, but in fact, spending on student services and athletics is increasing faster among low-tuition schools than among high-tuition ones.[3]

Another possibility here is that universities are simply bad marketers—that they're operating under the false premise that kids choose colleges based largely on the quality of the dorms and the gyms—but this seems like an awfully far-fetched premise to have simply invented; and it's particularly suspicious that colleges and universities of all types across the country adopted it without evidence.

I think there must be subtler forces at work here—and as you might have guessed, I have some thoughts on what they are. I suspect the problem stems from something that I have believed since early in my undergraduate education—namely, that college applicants and their parents have very poor information about the schools to which they're applying. There follows a long discussion of why this is the case, but if you want to take my word on this, feel free to skip it.


Why College Applicants and Their Parents Know So Little

When parents are choosing a primary school, a secondary school, or even a preschool for their kids, they have a lot of very concrete information. First of all, there are standardized test scores: SATs, APs, and Regents exams for a high school; state tests for a public grade school; ERBs and other standardized tests for a private one. Even more important are exmissions, a word I actually learned earlier today for the placement of graduates in schools at the next level of education—for example, a preschool's exmissions are the elementary schools to which it sends its graduates. Parents and prospective students also have access to useful qualitative data on a school: they can observe classes, tour the cafeteria and hallways, even talk to current students about their school experience.

College and university applicants have much foggier data. There is no standard exit exam for colleges; some graduates take the GRE or other grad school exams, but they often do so years after finishing college, and many of them don't take such exams at all; even when they are taken, GRE scores rarely get reported to undergraduate alma maters. Colleges have their version of exmissions—placement in jobs or graduate schools—but again, these data are the private information of the graduate and are not consistently kept on record by the school. Even were those data available, they would be hard to interpret, because the jobs, graduate schools, internships, and fellowships that a college graduate might pursue are so numerous and varied that evaluating them becomes a daunting if not impossible task for any prospective student or parent.

There are numerous other quantitative measures of school quality in higher education—the number of alumni defaulting on college loans, the percentage of students or faculty who have received national awards and fellowships, the frequency of publication among professors—but these are so numerous and so difficult to interpret that only a preternaturally avid and well-informed college applicant could be expected to put them to any use.

It might seem that the qualitative data on colleges is more accessible. After all, observing classes, touring campuses, and talking to current students is a standard part of the college selection process. In this case, the problem is one of scale. In all but the largest high schools and middle schools, it's possible to stand in the cafeteria at lunchtime or the hallway between classes and learn a lot about school culture; in a day or two of classroom observations, one can see a representative sample of teachers and classes; and in moderately sized schools it's even possible to gauge the general student attitude towards the school by talking to a handful of randomly chosen students.

At your average university, the case is very different. Classes are numerous and various; they differ greatly from professor to professor, department to department, and lecture to seminar. Student culture does not reveal itself in hallways and quads between classes, nor can it be gleaned from a handful of student interviews; it is rich and complex, occurs on and off campus, in dormitories, classrooms, libraries, apartments, and around town; different aspects of it will reveal themselves during the day and night, on weekends and weekdays, in clement and inclement weather; and it is highly heterogeneous, consisting of numerous sub-cultures, many of which don't have much contact with one another.

So what sources of information does the college applicant have? There are several, and they're all fairly bad: first, there are the national college rankings that are published by news organizations; second, there are brochure materials and advertising; third, the advice of college counselors; fourth, campus tours and the Q & A sessions that follow; fifth, the overnight visit; and finally, the college interview and conversations with other current students and alumni.

At all but the smallest colleges, the last two of these will provide inconsistent and unrepresentative information because of the cultural heterogeneity that I've already discussed. Let's take the other four one at a time. A college ranking is a one-dimensional list of schools, based on a pastiche of questionable data points,[4] and provides no detail whatsoever. Brochure material and advertisements are printed by the school and contain only statistics that show the school in a favorable light. College counselors are a bit of a black box: where does their information come from? Do they have any special knowledge beyond what's available to anyone else? The campus tour is a bizarre little ritual, equal parts irrelevant historical trivia and—aha!—showcase of the school's physical plant: gymnasiums, dormitories, computer labs, lecture halls, and so on. Q & A sessions might be useful to a really investigative applicant, but even with a good deal of expertise, it's hard to come up with really useful questions—and who knows, if someone did, whether the dean running the Q & A would have the answer.


What You Get When You Don't Assess

I could offer some conjectures as to what precisely happens in the absence of good measures of educational quality at the tertiary level. Chief among them would be the cultural fetish of the "college experience," wherein, actual educational value being difficult to assess, the emphasis is placed instead on the life and lifestyle of a college student as a class-defining ritual. Such an emphasis causes applicants to consider not where they will be best educated—since this is unknowable—but where they will have the best time, the most elite and upper-class college experience. Additionally, physical amenities and impressive sports teams may become proxies for educational quality. They are big, tangible demonstrations of institutional wealth and property; they can be easily photographed and their images printed in brochures; and their function is easily understood. State-of-the-art facilities, the applicant assumes, imply state-of-the-art education.

Such conjectures, however, are tangential, because my point is that you cannot incentivize what you do not measure. If the quality of the actual education at colleges and universities cannot, without great effort and expertise, be known to the consumer—the prospective student—then we must expect quality of actual education to be neglected in favor of more measurable attributes. True, there will be individual institutions that independently maintain sight of the original goals of higher education, but they will be a shrinking minority.

There's a very broad point I want to make here, though it strays outside the scope of this blog, and it is this: a market is a wonderfully efficient and effective motivator of behavior, but it's very difficult to be sure precisely what behavior it will motivate; in an unregulated market, incentives will tend to drift towards the cheapest and most obvious proxy for quality. An unregulated market for food produces the saltiest, fattiest, most brightly colored, and most quickly prepared food; an unregulated market for films produces the schlockiest, most emotionally manipulative, least challenging films, peopled with the handsomest actors, enhanced by the newest technology.

If we believe that true quality—in education, food, art, etc.—does not quickly reveal itself, then we can only entrust the maintenance of quality to market competition if we provide external measures of quality and make those measures readily and unambiguously available to consumers. The FDA has done this very successfully in the food market by requiring packaging to include nutritional information. The DoE has tried to do this, with less success, in primary and secondary education, through state exams.

Those of us who hate state exams—and I'm surely one—should remember where we'd be without any kind of assessment: we'd be at the mercy of market forces or, in the absence of competition, sheer human laziness.

For more—much more—on the role of measurement in producing educational quality, check out my next post.




[1]   J. Heckman, "Policies to Foster Human Capital," Research in Economics 54, no. 1 (2000): 3–56.

[2]   Thanks to Josh Jackson for information on how the performance of university presidents and other administrators is assessed.

[3]   Over the past decade, spending on student services at private research universities increased at a rate about 64% faster than that of instructional spending; at public research universities, spending on student services increased about 100% faster; and at community colleges, that difference was about 180%. This is all according to the Times article.

[4]   Forbes, for example, determined its 2008 list of college rankings based on the following five factors: professor reviews from RateMyProfessors.com; percentage of alumni "listed among the notable people in Who's Who in America"; percentage of students and faculty winning "nationally competitive awards like Rhodes Scholarships or Nobel Prizes;" percentage of students graduating in four years; and average student debt upon graduation.
     The last two appear irrelevant—as opposed to merely trivial or arbitrary. They're included because Forbes compiles its report in conjunction with the Center for College Affordability and Productivity, which attempts to "put itself in a student's shoes." This student's perspective, evidently, leads to a number of questions, including the following: "If I have to borrow to pay for college, how deeply will I go into debt? What are the chances I will graduate in four years?" What I don't understand is why you would use average student debt upon graduation—which must depend heavily on factors like average parental income, and which could not be relevant to an individual student assessing her own risk of falling deeply into debt—rather than, say, tuition; or, if you want to be a little more in-depth, total scholarships awarded by the school per year, divided by tuition. Using average student debt seems almost willfully foolish.
      By sheer coincidence—I found the Forbes methodology in a Google search of college rankings—the gentleman in charge of developing this whole ranking system is the very same Dr. Vedder who's quoted in the Times article with which I began this post. Interesting, I think, that Dr. Vedder, who admonishes colleges for their excessive zeal in soliciting applicants and speaks with haughty academic remove of the "country-clubization of the American university," participates in the very obfuscation and misdirection that leads to misplaced incentives in American higher education.
     (All of the above information about Forbes's college rankings comes from the Forbes website.)

8 comments:

  1. "....you cannot incentivize what you do not measure..." Depends what you mean by measure. The administration of a college can incentivize serious intellectual activity by rewarding teachers who provide it and hiring teachers who have provided it elsewhere. They may not know how to measure (quanitify) it, but often they can recognize it, just the way we recognize traveling (in basketball) or singing off-key or what constitutes "serious art." All of us recognize those teachers who were "good" or taught as a lot, even if we can't exactly measure their goodness or what they taught.

    The end of this blog (when you talk about market motivations) gets very interesting and, as you say, suggests applications far beyond education. But I think if we ask a certain number of people who came out of any school at any level, "Was it a good school? Did you learn a lot?" we'll learn a lot from their answers, whether or not we know what those answers are "based on." That would tell me more about the quality of the school than test score results would.

  2. This is in response to the first paragraph of the above comment.

    You're right, it does depend on what you mean by "measure." I have to write a whole section of this blog about measurement and the cult of measurement in modern education, but I'll try to respond to your comments on this point more succinctly here.

    College admin can reward teachers who provide "serious intellectual activity" only to the extent that they can figure out who provides it. In other words, they must be able to measure it-- but that does NOT necessarily mean quantify it.

    Measurement, in the social sciences, is generally understood to be either quantitative or qualitative. Thus, in my discussion of measurements of school quality, I included school observations and conversations with students, which are clearly not quantitative measures.

    Measuring "serious intellectual activity" is a very tricky problem. You may say that you know it when you see it, but it's very difficult to create a system of incentives based on that. A few reasons:

    1. A measurement that's based purely on intuition is very difficult to defend or justify. A dean who rewards teachers based on his intuitive belief that they are fostering "serious intellectual activity" is sure to be seen as playing favorites-- in fact, he may well end up playing favorites; or, he may have his head up his ass. When you leave it in the hands of various persons' intuition, you're assuming that they all have good intuition; you're leaving yourself no means of determining whether their judgement is sound.

    2. To gain an intuitive sense of a teacher's quality, you have to spend a significant amount of time observing her classes and lesson plans-- and, if you're going to do a decent job of it, you'll need to do some of the reading on her syllabi, read a random sampling of her students' papers, and so on. Given the number of professors at your average university, that's a staggering undertaking. It would require dozens of academic deans whose time is devoted principally or wholly to that endeavor. With enough time and resources, you can measure almost anything-- but you may end up spending many times as much time and money measuring something as you did producing it.


    Now, in fact, there is a group of people already doing the very job that I just envisioned for that army of academic deans, and doing it in far more depth than the deans probably would; I'm talking, of course, about the students. This leads to an interesting question: why not base teacher hiring and teacher bonuses on student evaluations?

    I think that’s actually a good question, and I have a few thoughts on it, but it’s a little off topic. The point is that if you used student evaluations to measure teacher quality, you would create a metric by which to determine an average rating for teachers, based on all the student data collected, and that too would constitute a system of measurement.
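
    Purely as an illustration of what such a metric might look like (a minimal sketch with invented teachers and ratings, not anything a school actually uses), here is how collected evaluation scores could be averaged into a single rating per teacher, written in Python:

        from collections import defaultdict

        # Hypothetical evaluation records: (teacher, rating on a 1-to-5 scale).
        evaluations = [
            ("Prof. Ames", 4), ("Prof. Ames", 5), ("Prof. Ames", 3),
            ("Prof. Bell", 2), ("Prof. Bell", 4),
        ]

        # Accumulate the sum and count of ratings for each teacher.
        totals = defaultdict(lambda: [0, 0])
        for teacher, rating in evaluations:
            totals[teacher][0] += rating
            totals[teacher][1] += 1

        # The "metric": each teacher's average rating across all collected evaluations.
        for teacher, (total, count) in totals.items():
            print(f"{teacher}: average rating {total / count:.2f} from {count} evaluations")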

    So, again, no matter how you go about measuring, you have to measure if you want to incentivize. Anything unmeasured cannot be rewarded or punished.

  3. This is in response to the second paragraph of Henry's comment above.

    What you're saying-- that you can determine a lot about a school by talking to a handful of its graduates-- seems like it must be true, but I think it's not.

    Taking my own alma mater, Brown University, as an example, I know plenty of people who would say they liked it and plenty who would say they did not. Of the first group, my impression from talking to its members is that most of them would probably have enjoyed any high-end liberal arts school-- Harvard, Princeton, Yale, etc. Of the second group, my guess is that most of them would have disliked almost any school they went to, because they (we) were bothered by very broad problems with academia and higher education.

    Of course, some Brown-haters might have hated another school less or more, and some Brown-lovers might have loved another school less or more, but the point is there's really no way of knowing, because very few of them went to more than one school. What you learn, when you talk to graduates, is really their attitude towards life, school, academia, privilege, and so on, not their attitude towards the particular institution they attended.

    Even those who attended multiple schools don't provide very useful data, because A. almost none attended more than three colleges, and B. they are self-selected to be people who disliked the first school or two that they went to.

  4. This is in response to Max's first comment on measurement:

    When I arrived at college, in 1963, there were two chief measures of a teacher. One was his or her publications, the quantity and quality (as determined by reviews, estimation of peers, citations, influence, awards – in short the word of mouth of his colleagues) and the other was her or his reputation as a teacher, which was generally a synthesis of the word of mouth of the students. No doubt both yardsticks were imperfect, fools were occasionally lionized and geniuses neglected; charismatic lecturers probably got more points than stern taskmasters who, in the end, may have imparted more knowledge or wisdom or even passion. But overall, I would bet, these “vague” methodologies were at least as accurate (i.e. effective in guiding both university hirings and student course selection) as whatever scientific methods we might come up with today in an effort to exclude prejudice and provide objectivity. Or, rather, try to come up with and fail.

    What happened to those “simple” methods? They were demystified and politicized. We discovered the cultural prejudices behind the professional reputations, and if we haven’t yet similarly deconstructed student opinion, we’re intuitively suspicious of its prejudices -- even if we can’t quite name them. We’ve also complicated the notion of quality through concerns for diversity, the culture wars, political infighting within departments and in the college or university as a whole. The very notion of objective standards now seems quaint. Whose objective standards? we shrewdly ask.

    Yet I suspect that much of that demystifying analysis, though perhaps “true” and even interesting, is beside the point and chiefly helps to make methodologically simple judgments difficult or impossible. (Sometimes one even imagines that that is the purpose.) So I propose the following system: ask people in a position to know what they think of a particular teacher or department or whole school. There will be mistakes, but I suspect it will give us as good an answer as we’re likely to find. And the questionnaire is simple to design.

  5. Quantity and quality of publications is still probably the most important factor in determining which professors get hired and tenured. That’s actually a big part of the problem we're seeing, because quantity of publications should, if anything, be negatively correlated with teaching quality. Good teaching requires a lot of time and thought, and those who focus on research and publication don't have that time or that mental energy. Quality of publication is more of a mixed bag, but just as the best sports players rarely make good coaches and the best writers rarely make good teachers, we should not expect the best researchers necessarily to make good teachers. I would expect no correlation between quality of publication and quality of teaching.

    Now as for a professor’s reputation among the students, there’s no call to bemoan the abandonment of that as a measure of professor quality. Surely, a professor’s reputation is still the primary measure used by students when selecting classes, just as it always has been—but as such, it is an entirely informal, implicit measure. It is not, and to my knowledge never has been, used as a formal tool by the administration to, say, help determine bonuses, salaries, hiring, firing, or tenuring.

    Were it used in that way, students would quickly become aware of their power over their instructors’ financial futures. Aside from the potential corruptions that would introduce into both grades and survey data—bribes, bargains, retributions, etc.—it might do significant damage to the psychological relationship between teacher and student. Let me be clear: I’m not arguing that these problems necessarily outweigh the benefits of using student questionnaires as a formal measure of teacher quality, but I am saying that you need to be careful. If you’re nostalgic (as, indeed, I am) for a time when intuitive measurements were better trusted, you should consider whether, in the past, explicit economic stakes were ever attached to such measures.


    The writer of the preceding comment (my father) suggests the following system for measuring school quality: “ask people in a position to know what they think of a particular... whole school.” But who are these people in a position to know? I have spent a blog post and a long comment arguing that they don't exist—that no one's in a position to know what they think of a whole school. A student is in a position to know what she thinks of a professor, maybe even an entire department; but no one is in a position to know what he or she thinks of an entire school. In reacting against the cult of quantitative measurement, we must be careful not to romanticize—to over-mystify—intuitive measures. We must consider the actual experiences of the intuiting individual, in order to determine whether her perspective is useful.

  6. I'm not suggesting that one student is "in a position to know" very much. But ask enough students, and you begin to get a picture. How many is enough? Depends what you want to know. And that's a good question: what do we want to know?

    But, meanwhile, what are the alternatives to these intuitive methods? Specifically.

  7. A reader sent me a link to a blogger arguing vociferously against the use of student surveys to evaluate teachers. It’s a short and entirely anecdotal post—but anecdotal arguments should appeal to my quantitative-data-wary interlocutor in this little debate. The writer’s argument is essentially that the true value that one derives from a class is not appreciated until years later—that fluffy, entertaining classes may at first appear dynamic and exciting, while baffling, frustrating classes may ultimately lead to deeper insights. I don’t wholly agree, but it’s so relevant, I thought I’d pass it on: http://opinionator.blogs.nytimes.com/2010/06/21/deep-in-the-heart-of-texas/

  8. I'm not arguing that an individual student only has a piece of the picture (in which case asking more students would give a more complete picture). I'm arguing that an individual student has no real useful information on the quality of her college. Having been to only one—or, in rare cases, two or three or four—colleges, she can tell us a lot about her own disposition, her own attitude towards school and academia, but very little about the particular school she went to. It’s like asking a bunch of people who, in all their lives, have only ever driven or ridden in 1995 Honda Accords to rate the performance of the Honda Accord. The problem isn’t that you need to ask more people, it’s that the people you’re asking have no perspective whatsoever.
    Please do not think that I’m making a purely logical argument here. The above argument is, in fact, my logical explanation for an intuition that I have every time I talk to a peer about their undergraduate experience.

    As for specific alternatives, I don't have really specific ones, but I'm planning to address this issue more directly, if not quite specifically, in an upcoming post.
