
Clay Shirky


The End of Higher Education’s Golden Age

Clay Shirky - Wed, 01/29/2014 - 23:20

Interest in using the internet to slash the price of higher education is being driven in part by hope for new methods of teaching, but also by frustration with the existing system. The biggest threat those of us working in colleges and universities face isn’t video lectures or online tests. It’s the fact that we live in institutions perfectly adapted to an environment that no longer exists.

In the first half of the 20th century, higher education was a luxury and a rarity in the U.S. Only 5% or so of adults, overwhelmingly drawn from well-off families, had attended college. That changed with the end of WWII. Waves of discharged soldiers subsidized by the GI Bill, joined by the children of the expanding middle class, wanted or needed a college degree. From 1945 to 1975, the number of undergraduates increased five-fold, and graduate students nine-fold. PhDs graduating one year got jobs teaching the ever-larger cohort of freshmen arriving the next.

This growth was enthusiastically subsidized. Between 1960 and 1975, states more than doubled their rate of appropriations for higher education, from four dollars per thousand dollars of state revenue to ten. Post-secondary education extended its previous mission—liberal arts education for elites—to include both more basic research from faculty and more job-specific training for students. Federal research grants quadrupled; at the same time, a Bachelor’s degree became an entry-level certificate for an increasing number of jobs.

This expansion created tensions among the goals of open-ended exploration, training for the workplace, and research, but these tensions were masked by new income. Decades of rising revenue meant we could simultaneously become the research arm of government and industry, the training ground for a rapidly professionalizing workforce, and the preservers of the liberal arts tradition. Even better, we could do all of this while increasing faculty ranks and reducing the time senior professors spent in the classroom. This was the Golden Age of American academia.

As long as the income was incoming, we were happy to trade funding our institutions with our own money (tuition and endowment) for funding them with other people’s money (loans and grants). And so long as college remained a source of cheap and effective job credentials, our new sources of support—students with loans, governments with research agendas—were happy to let us regard ourselves as priests instead of service workers.

Then the 1970s happened. The Vietnam war ended, removing “not getting shot at” as a reason to enroll. The draft ended too, reducing the ranks of future GIs, while the GI Bill was altered to shift new costs onto former soldiers. During the oil shock and subsequent recession, demand for education shrank for the first time since 1945, and states began persistently reducing the proportion of tax dollars going to higher education, eventually cutting the previous increase in half. Rising costs and falling subsidies have driven average tuition up over 1000% since the 1970s.

Golden Age economics ended. Golden Age assumptions did not. For 30 wonderful years, we had been unusually flush, and we got used to it, re-designing our institutions to assume unending increases in subsidized demand. This did not happen. The year it started not happening was 1975. Every year since, we tweaked our finances, hiking tuition a bit, taking in a few more students, making large lectures a little larger, hiring a few more adjuncts.

Each of these changes looked small and reversible at the time. Over the decades, though, we’ve behaved like an embezzler who starts by taking only what he means to replace, but ends up extracting so much that embezzlement becomes the system. There is no longer enough income to support a full-time faculty and provide students a reasonably priced education of acceptable quality at most colleges or universities in this country.

Our current difficulties are not the result of current problems. They are the bill coming due for 40 years of trying to preserve a set of practices that have outlived the economics that made them possible.

* * *

Part of the reason this change is so disorienting is that the public conversation focuses, obsessively, on a few elite institutions. The persistent identification of higher education with institutions like Swarthmore and Stanford creates a collective delusion about the realities of education after high school; the collapse of Antioch College in 2008 was more widely reported than the threatened loss of accreditation for City College of San Francisco last year, even though CCSF has 85,000 students, and Antioch had fewer than 400 when it closed. Those 400, though, were attractive and well-off young people living together, which made for the better story. Life in the college dorm and on the grassy quad is a rarity discussed as a norm.

The students enrolled in places like CCSF (or Houston Community College, or Miami Dade) are sometimes called non-traditional, but this label is itself a holdover from another era, when residential colleges for teenage learners were still the norm. After the massive expansion of higher education into job training, the promising 18-year-old who goes straight to a residential college is now the odd one out.

Of the twenty million or so students in the US, only about one in ten lives on a campus. The remaining eighteen million—the ones who don’t have the grades for Swarthmore, or tens of thousands of dollars in free cash flow, or four years free of adult responsibility—are relying on education after high school not as a voyage of self-discovery but as a way to acquire training and a certificate of hireability.

Though the landscape of higher education in the U.S., spread across forty-six hundred institutions, hosts considerable variation, a few commonalities emerge: the bulk of students today are in their mid-20s or older, enrolled at a community or commuter school, and working towards a degree they will take too long to complete. One in three won’t complete, ever. Of the rest, two in three will leave in debt. The median member of this new student majority is just keeping her head above water financially. The bottom quintile is drowning.

One obvious way to improve life for the new student majority is to raise the quality of the education without raising the price. This is clearly the ideal, whose principal obstacle is not conceptual but practical: no one knows how. The value of our core product—the Bachelor’s degree—has fallen in every year since 2000, while tuition continues to increase faster than inflation.

The other way to help these students would be to dramatically reduce the price or time required to get an education of acceptable quality (and for acceptable read “enabling the student to get a better job,” their commonest goal). This is a worse option in every respect except one, which is that it may be possible.

* * *

Running parallel to the obsession with elite institutions and students is the hollowing out of the academic job market. When the economic support from the Golden Age began to crack, we tenured faculty couldn’t be forced to share much of the pain. Our jobs were secure, so rather than forgo raises or return to our old teaching loads, we either allowed or encouraged those short-term fixes—rising tuition, larger student bodies, huge introductory lectures.

All that was minor, though, compared to our willingness to rely on contingent hires, including our own graduate students, ideal cheap labor. The proportion of part-time and non-tenure track teachers went from less than half of total faculty, before 1975, to over two-thirds now. In the same period, the proportion of jobs that might someday lead to tenure collapsed, from one in five to one in ten. The result is the bifurcation we have today: People who have tenure can’t lose it. People who don’t mostly can’t get it. The faculty has stopped being a guild, divided into junior and senior members, and become a caste system, divided into haves and have-nots.

Caste systems are notoriously hard to change. Though tenured professors often imagine we could somehow pay our non-tenured colleagues more, charge our students less, and keep our own salaries and benefits the same, the economics of our institutions remain as they have always been: our major expense is compensation (much of it for healthcare and pensions) distributed unequally between tenured and contingent faculty, and our major income is tuition.

I recently saw this pattern in my home institution. Last fall, NYU’s chapter of the American Association of University Professors proposed reducing senior administrative salaries by 25%, alongside a ‘steady conversion’ of non-tenure-track jobs to tenure-track ones ‘at every NYU location’. The former move would save us about $5 million a year. The latter would cost us $250 million.

Now NYU is relatively well off, but we do not have a spare quarter of a billion dollars per annum, not even for a good cause, not even if we sold the mineral rights under Greenwich Village. As at most institutions, even savage cuts in administrative compensation would not allow for hiring contingent faculty full time while also preserving tenured faculty’s benefits. (After these two proposals, the AAUP also advocated reducing ‘the student debt burden by expanding needs-based financial aid’. No new sources of revenue were suggested.)
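To make the scale of that mismatch concrete, here is a back-of-the-envelope comparison using only the two figures cited above. The dollar amounts are the rough annual estimates from the text, not audited budget lines, and the calculation is purely illustrative.

```python
# Back-of-the-envelope comparison of the two AAUP proposals, using only
# the rough annual figures cited above (estimates, not budget lines).
admin_salary_savings = 5_000_000    # ~25% cut to senior administrative pay, per year
conversion_cost = 250_000_000       # converting contingent jobs to tenure track, per year

shortfall = conversion_cost - admin_salary_savings
multiple = conversion_cost / admin_salary_savings

print(f"Annual shortfall: ${shortfall:,}")                           # $245,000,000
print(f"The conversion costs ~{multiple:.0f}x what the cut saves")   # ~50x
```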

* * *

Many of my colleagues believe that if we just explain our plight clearly enough, legislators will come to their senses and give us enough money to save us from painful restructuring. I’ve never seen anyone explain why this argument will be persuasive, and we are nearing the 40th year in which similar pleas have failed, but “Someday the government will give us lots of money” remains in circulation, largely because contemplating our future without that faith is so bleak. If we can’t keep raising costs for students (we can’t) and if no one is coming to save us (they aren’t), then the only remaining way to help these students is to make a cheaper version of higher education for the new student majority.

The number of high-school graduates underserved or unserved by higher education today dwarfs the number of people for whom that system works well. The reason to bet on the spread of large-scale low-cost education isn’t the increased supply of new technologies. It’s the massive demand for education, which our existing institutions are increasingly unable to handle. That demand will go somewhere.

Those of us in the traditional academy could have a hand in shaping that future, but doing so will require us to relax our obsessive focus on elite students, institutions, and faculty. It will require us to stop regarding ourselves as irreplaceable occupiers of sacred roles, and start regarding ourselves as people who do several jobs society needs done, only one of which is creating new knowledge.

It will also require us to abandon any hope of restoring the Golden Age. It was a nice time, but it wasn’t stable, and it didn’t last, and it’s not coming back. It’s been gone ten years more than it lasted, in fact, and in the time since it ended, we’ve done more damage to our institutions, and our students, and our junior colleagues, by trying to preserve it than we would have by trying to adapt. Arguing that we need to keep the current system going just long enough to get the subsidy the world owes us is really just a way of preserving an arrangement that works well for elites—tenured professors, rich students, endowed institutions—but increasingly badly for everyone else.


Healthcare.gov and the Gulf Between Planning and Reality

Clay Shirky - Tue, 11/19/2013 - 21:40

Back in the mid-1990s, I did a lot of web work for traditional media. That often meant figuring out what the client was already doing on the web, and how it was going, so I’d find the techies in the company, and ask them what they were doing, and how it was going. Then I’d tell management what I’d learned. This always struck me as a waste of my time and their money; I was like an overpaid bike messenger, moving information from one part of the firm to another. I didn’t understand the job I was doing until one meeting at a magazine company.

The thing that made this meeting unusual was that one of their programmers had been invited to attend, so management could outline their web strategy to him. After the executives thanked me for explaining what I’d learned from log files given me by their own employees just days before, the programmer leaned forward and said “You know, we have all that information downstairs, but nobody’s ever asked us for it.”

I remember thinking “Oh, finally!” I figured the executives would be relieved this information was in-house, delighted that their own people were on it, maybe even mad at me for charging an exorbitant markup on local knowledge. Then I saw the look on their faces as they considered the programmer’s offer. The look wasn’t delight, or even relief, but contempt. The situation suddenly came clear: I was getting paid to save management from the distasteful act of listening to their own employees.

In the early days of print, you had to understand the tech to run the organization. (Ben Franklin, the man who made America a media hothouse, called himself Printer.) But in the 19th century, the printing press became domesticated. Printers were no longer senior figures — they became blue-collar workers. And the executive suite no longer interacted with them much, except during contract negotiations.

This might have been nothing more than a previously hard job becoming easier, Hallelujah. But most print companies took it further. Talking to the people who understood the technology became demeaning, something to be avoided. Information was to move from management to workers, not vice-versa (a pattern that later came to other kinds of media businesses as well). By the time the web came around and understanding the technology mattered again, many media executives hadn’t just lost the habit of talking with their own technically adept employees, they’d actively suppressed it.

I’d long forgotten about that meeting and those looks of contempt (I stopped building websites before most people started) until the launch of Healthcare.gov.

* * *

For the first couple of weeks after the launch, I assumed any difficulties in the Federal insurance market were caused by unexpected early interest, and that once the initial crush ebbed, all would be well. The sinking feeling that all would not be well started with this disillusioning paragraph about what had happened when a staff member at the Centers for Medicare & Medicaid Services, the agency responsible for Healthcare.gov, warned about difficulties with the site back in March. In response, his superiors told him…

[...] in effect, that failure was not an option, according to people who have spoken with him. Nor was rolling out the system in stages or on a smaller scale, as companies like Google typically do so that problems can more easily and quietly be fixed. Former government officials say the White House, which was calling the shots, feared that any backtracking would further embolden Republican critics who were trying to repeal the health care law.

The idea that “failure is not an option” is a fantasy version of how non-engineers should motivate engineers. That sentiment was invented by a screenwriter, riffing on an after-the-fact observation about Apollo 13; no one said it at the time. (If you ever say it, wash your mouth out with soap. If anyone ever says it to you, run.) Even NASA’s vaunted moonshot, so often referred to as the best of government innovation, tested with dozens of unmanned missions first, several of which failed outright.

Failure is always an option. Engineers work as hard as they do because they understand the risk of failure. And for anything it might have meant in its screenplay version, here that sentiment means the opposite; the unnamed executives were saying “Addressing the possibility of failure is not an option.”

* * *

The management question, when trying anything new, is “When does reality trump planning?” For the officials overseeing Healthcare.gov, the preferred answer was “Never.” Every time there was a chance to create some sort of public experimentation, or even just some clarity about its methods and goals, the imperative was to avoid giving the opposition anything to criticize.

At the time, this probably seemed like a way of avoiding early failures. But the project’s managers weren’t avoiding those failures. They were saving them up. The actual site is worse—far worse—for not having early and aggressive testing. Even accepting the crassest possible political rationale for denying opponents a target, avoiding all public review before launch has given those opponents more to complain about than any amount of ongoing trial and error would have.

In his most recent press conference about the problems with the site, the President ruefully compared his campaigns’ use of technology with Healthcare.gov:

And I think it’s fair to say that we have a pretty good track record of working with folks on technology and IT from our campaign, where, both in 2008 and 2012, we did a pretty darn good job on that. [...] If you’re doing it at the federal government level, you know, you’re going through, you know, 40 pages of specs and this and that and the other and there’s all kinds of law involved. And it makes it more difficult — it’s part of the reason why chronically federal IT programs are over budget, behind schedule.

It’s certainly true that Federal IT is chronically challenged by its own processes. But the biggest problem with Healthcare.gov was not timeline or budget. The biggest problem was that the site did not work, and the administration decided to launch it anyway.

This is not just a hiring problem, or a procurement problem. This is a management problem, and a cultural problem. The preferred method for implementing large technology projects in Washington is to write the plans up front, break them into increasingly detailed specifications, then build what the specifications call for. It’s often called the waterfall method, because on a timeline the project cascades from planning, at the top left of the chart, down to implementation, on the bottom right.

Like all organizational models, waterfall is mainly a theory of collaboration. By putting the most serious planning at the beginning, with subsequent work derived from the plan, the waterfall method amounts to a pledge by all parties not to learn anything while doing the actual work. Instead, waterfall insists that the participants will understand best how things should work before accumulating any real-world experience, and that planners will always know more than workers.

This is a perfect fit for a culture that communicates in the deontic language of legislation. It is also a dreadful way to make new technology. If there is no room for learning by doing, early mistakes will resist correction. If the people with real technical knowledge can’t deliver bad news up the chain, potential failures get embedded rather than uprooted as the work goes on.
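To make the contrast concrete, here is a toy sketch in Python, not a description of any real build process: the “system” is just a list of features, some of which hide defects, and the only difference between the two functions is whether anything is tested and fixed before launch. The feature names and failure rate are invented for illustration.

```python
import random
random.seed(42)

def build_feature(name):
    """Toy model: each feature has some chance of shipping with a hidden defect."""
    return {"name": name, "broken": random.random() < 0.3}

def waterfall(spec):
    """Plan everything up front, build the whole spec, then launch.
    Nothing learned during the build feeds back before launch day."""
    system = [build_feature(f) for f in spec]
    return [f["name"] for f in system if f["broken"]]    # all defects surface at launch

def iterative(spec, batch_size=2):
    """Build in small batches, test each batch, and fix what breaks before moving on."""
    system = []
    for i in range(0, len(spec), batch_size):
        batch = [build_feature(f) for f in spec[i:i + batch_size]]
        for feature in batch:
            if feature["broken"]:           # testing surfaces the defect...
                feature["broken"] = False   # ...so it is fixed before going further
        system.extend(batch)
    return [f["name"] for f in system if f["broken"]]

spec = ["signup", "login", "plan-browser", "eligibility", "enrollment", "payment"]
print("Broken at launch, waterfall:", waterfall(spec))   # several features
print("Broken at launch, iterative:", iterative(spec))   # none, in this toy model
```

The point is not that iteration is free; it is that the iterative loop discovers and handles defects while the work is still cheap to change, instead of letting them all arrive at once, in public.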

At the same press conference, the President also noted the degree to which he had been kept in the dark:

OK. On the website, I was not informed directly that the website would not be working the way it was supposed to. Had I been informed, I wouldn’t be going out saying “Boy, this is going to be great.” You know, I’m accused of a lot of things, but I don’t think I’m stupid enough to go around saying, this is going to be like shopping on Amazon or Travelocity, a week before the website opens, if I thought that it wasn’t going to work.

Healthcare.gov is a half-billion dollar site that was unable to complete even a thousand enrollments a day at launch, and for weeks afterwards. As we now know, programmers, stakeholders, and testers all expressed reservations about Healthcare.gov’s ability to do what it was supposed to do. Yet no one who understood the problems was able to tell the President. Worse, every senior political figure—every one—who could have bridged the gap between knowledgeable employees and the President decided not to.

And so it was that, even on launch day, the President was allowed to make things worse for himself and his signature program by bragging about the already-failing site and inviting people to log in and use something that mostly wouldn’t work. Whatever happens to government procurement or hiring (and we should all hope those things get better), a culture that prefers deluding the boss over delivering bad news isn’t well equipped to try new things.

* * *

With a site this complex, things were never going to work perfectly the first day, whatever management thought they were procuring. Yet none of the engineers with a grasp of this particular reality could successfully convince the political appointees to adopt the obvious response: “Since the site won’t work for everyone anyway, let’s decide what tests to run on the initial uses we can support, and use what we learn to improve.”

In this context, testing does not just mean “Checking to see what works and what doesn’t.” Even the Healthcare.gov team did some testing; it was late and desultory, but at least it was there. (The testers recommended delaying launch until the problems were fixed. This did not happen.) Testing means seeing what works and what doesn’t, and acting on that knowledge, even if that means contradicting management’s deeply held assumptions or goals. In well-run organizations, information runs from the top down and from the bottom up.
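As a minimal sketch of what “testing that is acted on” looks like, assume a pilot run that yields a failure count and a threshold agreed on in advance. The function name and the 1% threshold are invented for illustration, not drawn from the actual project.

```python
# A minimal launch gate: the decision comes from the measured result,
# not from anyone's confidence that things must work.

def launch_decision(failures: int, total_attempts: int, max_error_rate: float = 0.01) -> str:
    observed = failures / total_attempts
    if observed > max_error_rate:
        return (f"delay: observed error rate {observed:.1%} "
                f"exceeds the agreed maximum of {max_error_rate:.0%}")
    return "launch"

# A made-up pilot in which 180 of 1,000 enrollment attempts fail:
print(launch_decision(failures=180, total_attempts=1000))
# -> delay: observed error rate 18.0% exceeds the agreed maximum of 1%
```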

One of the great descriptions of what real testing looks like comes from Valve Software, in a piece detailing the making of its game Half-Life. After designing a game that was only sort of good, the team at Valve revamped its process, including constant testing:

This [testing] was also a sure way to settle any design arguments. It became obvious that any personal opinion you had given really didn’t mean anything, at least not until the next test. Just because you were sure something was going to be fun didn’t make it so; the testers could still show up and demonstrate just how wrong you really were.

“Any personal opinion you had given really didn’t mean anything.” So it is in the government; any insistence that something must work is worthless if it actually doesn’t.

An effective test is an exercise in humility; it’s only useful in a culture where desirability is not confused with likelihood. For a test to change things, everyone has to understand that their opinion, and their boss’s opinion, matters less than what actually works and what doesn’t. (An organization that isn’t learning from its users has decided it doesn’t want to learn from its users.)

Given comparisons with technological success from private organizations, a common response is that the government has special constraints, and thus cannot develop projects piecemeal, test with citizens, or learn from its mistakes in public. I was up at the Kennedy School a month after the launch, talking about technical leadership and Healthcare.gov, when one of the audience members made just this point, proposing that the difficult launch was unavoidable, because the government simply couldn’t have tested bits of the project over time.

That observation illustrates the gulf between planning and reality in political circles. It is hard for policy people to imagine that Healthcare.gov could have had a phased rollout, even while it is having one.

At launch, on October 1, only a tiny fraction of potential users could actually try the service. They generated concrete errors. Those errors were handed to a team whose job was to improve the site, already public but only partially working. The resulting improvements are incremental, and put in place over a period of months. That is a phased rollout, just one conducted in the worst possible way.
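A deliberate version of that process is easy to describe. The sketch below, with invented function names and illustrative numbers, widens exposure in stages and only moves on once the error rate observed in the current cohort is acceptable; this is roughly the loop the site ended up running by accident.

```python
# A sketch of a deliberate phased rollout: expose a small cohort, act on
# what that cohort surfaces, and only then widen access. Stage sizes,
# thresholds, and the toy monitoring functions are illustrative.

def phased_rollout(measure_error_rate, fix_top_issues,
                   stages=(0.01, 0.05, 0.25, 1.0), acceptable=0.02):
    exposure = 0.0
    for target in stages:
        exposure = target                            # widen the cohort
        while measure_error_rate(exposure) > acceptable:
            fix_top_issues()                         # act on what this cohort surfaced
    return exposure                                  # 1.0 means fully public

# Toy demonstration: each round of fixes halves the underlying error rate.
state = {"error_rate": 0.10}
def toy_measure(exposure): return state["error_rate"] * (1 + exposure)
def toy_fix(): state["error_rate"] *= 0.5

print(phased_rollout(toy_measure, toy_fix))   # -> 1.0, once errors are under control
```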

The vision of “technology” as something you can buy according to a plan, then have delivered as if it were coming off a truck, flatters and relieves managers who have no idea and no interest in how this stuff works, but it’s also a breeding ground for disaster. The mismatch between technical competence and executive authority is at least as bad in government now as it was in media companies in the 1990s, but with much more at stake.

* * *

Tom Steinberg, in his remembrance of his brilliant colleague Chris Lightfoot, said this about Lightfoot’s view of government and technology:

[W]hat he fundamentally had right was the understanding that you could no longer run a country properly if the elites don’t understand technology in the same way they grasp economics or ideology or propaganda. His analysis and predictions about what would happen if elites couldn’t learn were savage and depressingly accurate.

Now, and from now on, government will interact with its citizens via the internet, in increasingly important ways. This is a non-partisan issue; whichever party is in the White House will build and launch new forms of public service online. Unfortunately for us, our senior political figures have little habit of talking to their own technically adept employees.

If I had to design a litmus test for whether our political class grasps the internet, I would look for just one signal: Can anyone with authority over a new project articulate the tradeoff between features, quality, and time?

When a project cannot meet all three goals—a situation Healthcare.gov was clearly in by March—something will give. If you want certain features at a certain level of quality, you’d better be able to move the deadline. If you want overall quality by a certain deadline, you’d better be able to simplify, delay, or drop features. And if you have a fixed feature list and deadline, quality will suffer.
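Stated as a decision rule rather than prose, the tradeoff looks something like the sketch below. It is an illustration of the argument, not a scheduling tool, and the printed case is the one Healthcare.gov was in: full feature list, immovable deadline.

```python
# The features/quality/time tradeoff as a decision rule. Purely illustrative.

def what_gives(features_fixed: bool, quality_fixed: bool, deadline_fixed: bool) -> str:
    if features_fixed and quality_fixed and deadline_fixed:
        return "something gives anyway, usually quality, by default"
    if features_fixed and quality_fixed:
        return "the deadline has to move"
    if quality_fixed and deadline_fixed:
        return "features must be simplified, delayed, or dropped"
    if features_fixed and deadline_fixed:
        return "quality will suffer"
    return "you still have room to choose"

print(what_gives(features_fixed=True, quality_fixed=False, deadline_fixed=True))
# -> quality will suffer
```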

Intoning “Failure is not an option” will be at best useless, and at worst harmful. There is no “Suddenly Go Faster” button, no way you can throw in money or additional developers as a late-stage accelerant; money is not directly tradable for either quality or speed, and adding more programmers to a late project makes it later. You can slip deadlines, reduce features, or, as a last resort, just launch and see what breaks.

Denying this tradeoff doesn’t prevent it from happening. If no one with authority over the project understands that, the tradeoff is likely to mean sacrificing quality by default. That just happened to this administration’s signature policy goal. It will happen again, as long as politicians are allowed to imagine that if you just plan hard enough, you can ignore reality. It will happen again, as long as department heads imagine that complex technology can be procured like pencils. It will happen again as long as management regards listening to the people who understand the technology as a distasteful act.
