Business competition is always interesting, in part because smart companies figure out how to avoid competition by specializing and differentiating their product or service. When Sun Tzu admonished his generals against assaulting walled fortresses, he understood that head-to-head competition is a sure path to a headache.
Many US universities have not read their Sun Tzu: they compete head-to-head for the same students. In normal markets, schools would specialize. Some would seek students with strong quantitative skills; others would focus on training people who are especially empathic. Some might cater to students who write well, or are poor, male, female, interested in fashion or language studies, or born in another country. This happens, of course (except mostly for the male part: the US has only a handful of men’s colleges left and a few dozen all-women’s colleges), but most colleges recruit for the same student profile: high grades, high test scores, compelling outside activities. They are assaulting a walled fortress. What accounts for their failure to differentiate?
One problem is that universities are lazy: they compete for those who need them least. No university seeks out or even really wants people who most need education. They seek students who will be successful even if the university is not. I wrote earlier about selection effects: the tendency of elite universities to compete for students with traits that strongly predict future success regardless of education. When those young people proceed to be professionally and often economically successful, their alma mater is always there, hand outstretched, with a gentle reminder of their formative influence. Employers, desperate for a shorthand method of segmenting talent markets, reinforce these effects by preferentially hiring graduates of “good” colleges. Pretty soon, your college becomes a critical part of your personal brand. It’s a racket, and one in which I enthusiastically participate, benefit from, and perpetuate as a parent, student, employer, and advisor to university leaders.
Selection effects lead to a second market failure: universities don’t scale. What other business deliberately limits access to a compelling service? Can you imagine a law firm declaring that they would only accommodate the first 100 clients? Unthinkable — they will grow to meet demand and maybe a bit more. For top universities, being selective is not a necessity, it’s a choice. Most elite schools admit about the same number of students today as they did 100 years ago — that’s what makes them elite schools.
As my younger son starts to think about college, I have begun to pay attention to how colleges are thinking about him. He will be a major catch (translation: they will compete for him because he shows every sign of being a kid who will do just fine in life with or without their help). So how do colleges compete for talent? In particular, how do colleges compete for the students that they all think they want?
When competing for talented high school students, universities worry about either their selectivity or their yield. Selectivity is admissions divided by applicants (lower means more selective). Yield is enrollment divided by admissions (higher is better). To boost selectivity, you do more marketing to increase the number of applicants. But to boost yield, you actually have to improve your school. Boosting yield is really hard in a competitive market, which is why yield drives college rankings (which in turn improve yield — a longer story). US News, the FT, the Economist, and others that have jumped into the college ranking game realize that yield is a very strong market indicator of quality. After all, a school that could admit 1,000 students and then enroll all of them would have to be seen by every one of those students as their very best choice.
Except that most schools cheat. To avoid head-to-head competition, most universities offer students the following deal: don’t force us to compete and we will give you a leg up. They call it early decision, but they should call it yield improvement. They tell students that if they apply early, and to their school only, the school will lower the admissions bar. Careful research suggests that students who apply for early decision receive an advantage equal to an extra 150 points on the SAT. (Universities deny this, but the numbers are unequivocal.) Colleges enforce severe penalties against students who are admitted early and do not enroll by colluding to blacklist the offending student — a practice that should arguably be challenged in court.
Early decision is marketed as a way to reduce the stress of applying to a dozen colleges — and it does that. But it has a benefit that few seem to have noticed: it boosts the school’s yield. Every student admitted under early decision programs will attend: that’s the deal. The yield on early decision admissions is 100% — small wonder that they are growing as a share of total admissions. Nor is there anything wrong with this: it allows applicants into better schools than they would on average get into if they applied later. Businesses do similar things all the time, ranging from no-shop agreements during M&A discussions to exclusive distribution deals in exchange for preferential pricing. Negotiated exclusivity is a battle-tested element of many walled fortresses.
In the last decade however, the most selective schools have started to rethink early decision. They have decided to compete on selectivity by removing exclusivity. They say to students: “apply early, get an early decision and you are not bound by our offer”. Today Harvard, Princeton, Yale, Chicago, Stanford, and MIT offer nonbinding “early action” programs and a handful of other schools do as well. These schools realized that they were the top choice for the overwhelming majority of those they admit — they already had great yields. So they decided to increase their selectivity by signaling that they want every strong student to apply. With no reason not to take a shot at it and no obligation to attend, applications skyrocketed and schools that were already preposterously selective became even more so.
The result of these two strategies is exactly what you would expect: applications have skyrocketed, driving admission rates down. Acceptances went out today, and this year Yale accepted fewer than 7% of its applicants, the lowest acceptance rate in its history. It offered 1,991 seats to 29,610 applicants for an entering class of about 1,300. Harvard admitted 5.8%, Princeton 7.3%.
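The two metrics defined earlier can be checked against Yale’s reported numbers. A minimal sketch (the figures are those quoted above; the enrollment number is the approximate class size, so the yield here is an estimate):

```python
# Selectivity (admit rate) = admissions / applicants; yield = enrollment / admissions.

def admit_rate(admitted: int, applicants: int) -> float:
    """Share of applicants who are admitted; lower means more selective."""
    return admitted / applicants

def yield_rate(enrolled: int, admitted: int) -> float:
    """Share of admitted students who actually enroll; higher is better."""
    return enrolled / admitted

applicants, admitted, enrolled = 29_610, 1_991, 1_300  # Yale's reported figures

print(f"Selectivity: {admit_rate(admitted, applicants):.1%}")  # ~6.7%, i.e. "fewer than 7%"
print(f"Yield:       {yield_rate(enrolled, admitted):.1%}")    # ~65%
```

The arithmetic confirms the “fewer than 7%” claim: 1,991 of 29,610 is about 6.7%.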
There are actually three reasons that Ivy League applications are up. First is early action. Second is the Common Application, an online form that makes it easier for students who do not apply for early decision to apply to many more schools than they used to. When Harvard or Stanford say that they are twice as selective as they used to be, remember that each student they consider is also applying to twice as many schools. When I was seventeen, I applied to four schools and hand typed each application. Few kids today who are serious about college apply to fewer than eight and many apply to more. When the music stops, most kids who have prepared themselves for college still end up with a chair.
The third reason that Ivies are attracting interest is more mundane: they pay better. Harvard, Yale, Princeton and Columbia are “need blind” schools, where the ability of a student’s family to pay isn’t considered by the admissions office. Harvard has announced that it will boost its financial-aid budget to $182 million, a 5.8% increase. For many students, it actually costs much less to attend an expensive, elite school than it does a local state university. The trick, of course, is getting in.
Over the next decade, early decision will not solve the core problem facing most non-Ivy universities: they are a lousy investment. Universities operate medieval business models designed for a core purpose that has disappeared. As late as the 1970s, universities accounted for a huge share of knowledge creation, storage, and transmission. They earned the right to certify who was smart and talented because they held a near monopoly on skill and knowledge. This monopoly has now vanished as knowledge creation has diffused and fragmented, information storage has become free and ubiquitous, and skill transmission has taken many forms — some teacherless. The main privilege that colleges cling to is certification — and that too is increasingly under challenge.
The assault comes not mainly from online education, which has been widely discussed and over-hyped, but from enterprises that treat education as a business opportunity. For example, the MBA with the fastest payback now comes not from a college but from Hult International, which was tiny five years ago and is now the largest producer of MBAs in the world. 2U now builds large-scale, fully credentialed online degrees that produce thousands of graduates each year in partnership with major universities. The Mozilla Open Badge Initiative and dozens of social startups want to supplant traditional degrees with more specific and timely credentials. There are literally hundreds of enterprises devoted to disrupting higher education — whose walled fortresses will not withstand the siege for long.
Education mattered to me, it matters more to my kids, and will matter even more to my grandkids. But thanks to competition and innovation, it will cost less, deliver more, and signal actual capabilities much more precisely than today’s universities do.
Religion seems to help a lot of people. It reinforces admirable values and provides community, kinship, charity, music, a moral compass, and stories. But to me, religious faith is stone soup: like the fable’s soldiers, whose pot of boiling stones induced villagers to donate carrots, potatoes, and meat until, once the stones were removed, there was a plentiful stew. We could worship an old bicycle chain instead of an almighty deity if it brought us together to do useful work and help the least fortunate among us. Come to think of it, there are plenty of reasons to prefer the greasy chain over dead guys in white robes.
My distrust of superstition has always meant that God and I are not on speaking terms. He presumably finds my lack of gratitude amusing. So we leave each other alone. I also think that despite its good works, the cost of religion usually exceeds its benefits. For millennia, religious intolerance has been a global scourge that has led millions to a life of hatred and violent death. Churches exploit the poor more often than they help them, just as they abuse the emotionally vulnerable more often than they comfort them. Religious leaders happily frighten children into believing that they will burn in hell forever if they don’t submit to church authority and doctrine. This is designed to terrify the weak into compliance — it is nothing like parents fibbing about Santa Claus.
Most religions oppress women, perpetuate outrageous sexual myths, and demand sexual conformity, but if hell exists, it surely has an especially warm corner reserved for the modern Catholic church, which abuses children on a horrific scale. That thousands of priests swore an oath to chastity before sodomizing more than ten thousand Catholic boys and (20% of the time anyway) raping Catholic girls is both horrifying and outrageous.
As with all great crimes, we will never be able to fully count the victims. The governing body of the church admits that more than 3,000 priests have been accused of sex abuse during the past 50 years. In the US, more than 3,000 Catholics decided to speak out, lawyer up, and file lawsuits as they fled the church. The church has paid out between $2 billion and $3 billion to victims to settle these claims, depending on who is keeping score. Eight dioceses have gone bankrupt due to an inability to pay these settlements. One widely cited count documents 6,115 priests who stand accused of sexually assaulting 16,324 minors. The actual count may be a fraction of this, or it may be a multiple. What we know for sure, and the Vatican concedes, is that child abuse is simply not reported in much of the third world. If it were, one doubts that the church would still be growing there. (Watch the Philippines, where people are finding their voice and beginning to accuse predatory priests; the spread of Catholicism there has been stopped cold.)
It was thus with a jaded eye that I watched the cardinals assemble this week at the Sistine Chapel. Like everyone else, I was surprised to hear of white smoke on the second day of the conclave. I was moved to see that the man who emerged wore not the traditional large, ornate cross of gold, but a simple cross of wood. The cardinals had chosen a man who as cardinal had refused his palace and limo and who, upon his elevation, refused to ascend the papal throne. Instead, he greeted the cardinals standing up, as brothers. I was stunned to hear that Bergoglio chose the name Francis, after Francis of Assisi, who renounced his wealth, lived with the poor, founded the Franciscans, and spent a lot of time protesting outside the Vatican. Francis of Assisi was never ordained as a priest, much less a pope, and as Robert Francis Kennedy noted, it is a name associated with the quest for social justice, not the papacy. In his first talk, Francis spoke simply and generously. He asked that people pray for him, not to him, as most popes do. My antipathy to the church aside, this had to be good news.
The world was stunned that an Argentine had ascended to the papacy. This is not really surprising, since nearly half of all Catholics now live in Latin America. Shocking to me is that the cardinals chose a Jesuit, the order that has been a centuries-old pain in the Vatican’s ass. Jesuits are an intellectual order that values study and critical debate. They are big in the US and founded some of our best high schools and colleges, including Santa Clara, Boston College, and Georgetown. Jesuit priests take vows of poverty, which most gold-bedecked cardinals think is beneath them. They are famously disrespectful of authority, challenging the Vatican on contraception, abortion, gay marriage, the role of women, the need for political revolution, and all manner of causes (to be sure, not all Jesuits dissent on all of these issues; Francis appears to conform to Vatican thinking on all of them). Throughout the ages the Vatican has kept its distance from the Jesuits. Despite their size and influence, no order has been cast out further from the center of Vatican power. The cardinals of Rome are about as likely to name a Jesuit pope as the United Nations is to name a North Korean Secretary-General. Indeed, the Vegas oddsmakers did not even have Bergoglio on their list. When, in his first words, Francis noted that the cardinals had “reached a very long way” to find a new pope, most people assumed he was referring to the distance from Rome to Buenos Aires. He could as easily have been describing the reach it took for his fellow cardinals to support a Jesuit pope.
I confess to a soft spot in my heart for Jesuits because so many of my best teachers were of that order. In high school and in college, every single teacher who challenged me to think deeply and critically and to live a moral life was either Jewish or a fallen Jesuit. Three were former priests who decided that a vow of chastity was nuts.
Will Francis liberate the church? Nope. He is theologically conservative even if he modernized the Argentinian church. He made some dubious compromises with the military junta that will become public knowledge in the coming months. More fundamentally, he faces an unimaginable turnaround challenge rooted in both sex and money. The ongoing financial corruption of the Vatican’s Curia is deep and entrenched. Organized pedophilia and its cover up could kill the church, and arguably should. The average age of priests increases by about 10 months per year. Still, I wish Francis well, even if I would not miss his church if it vanished. I sympathize with those who have trouble kicking the Catholic habit, and for them, and for children who are forced into religious life before they can make a choice, I truly hope that Pope Francis fulfills the promise of his first glorious hours.
We now collect extraordinary amounts of data from weblogs, wifi sessions, phone calls, sensors, and transactions of all kinds. The quantities are hard to imagine: we created about five exabytes of data in all of human history until 2003. We now create that much data every two days.
Deriving insights from this mountain of data is a big challenge for most organizations. People who are good at this are highly valued and in desperately short supply (career tip: study statistics). But mining data for insights is just the beginning: explaining what you have learned to non-statisticians is often an even bigger challenge. Visualizing data and forming it into a coherent story is a completely separate skill, and one that is evolving quickly. For my money, the pioneers in the field are McKinsey’s Gene Zelazny and the always impressive Edward Tufte. Modern masters include Hans Rosling and Garr Reynolds.
It’s not easy to visualize data effectively and it is even harder to weave it into a compelling story. Statisticians make lousy novelists – and vice versa (career tip: study fiction). It often takes a team to do a great job of analyzing and creatively presenting information. Done well, the results are artistic, for me anyway. Here are two good examples.
The first is on wealth and inequality in America — not an easy topic to portray.
The second is Hans Rosling’s classic TED Talk on global demographics.
Talk to a book author, publisher, or retailer about the future of their business and the denials begin. “Never before have there been so many good books to read”. “Books are the backbone of civilization”. “Life without books is unimaginable”. As a cultural argument, this may be true, but as an economic one, this is a view only of the supply side.
An alleged ancestor of mine once observed that “facts are stubborn things”. Forget for a moment about our beloved books. Imagine a product you care little about: chemicals perhaps, motorcycles, or furniture. Imagine the industry that produces this product: the designers, manufacturers, marketers, distributors, and retailers. You can determine the health of this industry with a few key measures. We can apply these same vital signs to the publishing business to determine whether our loved one has long to live.
Sales. Not every industry with declining sales is in trouble, but most are. According to BookScan, adult nonfiction print unit book sales peaked in 2007 and have declined each year since. Retail bookstore sales peaked the same year and have also fallen each year according to the U.S. Census Bureau. I sold an online book business I had started at about that time for a simple reason: I couldn’t figure out how to keep growing it.
eBooks are growing fast but do not close the gap. Print sales dropped 17% from 2010 to 2011, while e-books grew 117% (I have borrowed liberally from a nice fact set gathered here). The result was a 5.8% decline in total book sales, according to the Association of American Publishers. Combined print and e-book sales of adult trade books fell by 14 million units in 2010, according to the Book Industry Study Group.
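Those three percentages are mutually consistent only for a particular print/e-book split, which we can back out. A sketch, assuming print and e-books are the only two channels and that all three rates apply to the same dollar or unit base:

```python
# Year-over-year multipliers implied by the quoted rates.
print_factor = 1 - 0.17    # print sales fell 17%
ebook_factor = 1 + 1.17    # e-book sales grew 117%
total_factor = 1 - 0.058   # total sales fell 5.8%

# If p is print's share of the base year, then:
#   total_factor = p * print_factor + (1 - p) * ebook_factor
# Solving for p:
p = (total_factor - ebook_factor) / (print_factor - ebook_factor)

print(f"Implied print share of the base year: {p:.0%}")   # roughly 92%
print(f"Implied e-book share: {1 - p:.0%}")               # roughly 8%
```

In other words, even 117% growth cannot close the gap when e-books start from roughly 8% of the market: the small channel’s surge is swamped by the large channel’s decline.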
Unit economics. OK, but you can make good money in shrinking industries. You may sell fewer items, but make more on each item you sell. So long as you add more value than cost, customers will happily pay you for your product, even in a shrinking market. So what is happening to the unit economics of books?
Start with prices. Book prices have gone up every year for more than ten years – a good sign, right? Not necessarily, because the number of books sold continues to shrink as noted above. But the number of books published is exploding. Bowker reports that over three million books were published in the U.S. in 2010. 316,480 of these were new traditional titles – meaning that publishers introduce 867 new books every day.
But that is the traditional tip of the publishing iceberg: 90%, or more than 2.7 million, of the books published in 2010 were “non-traditional” titles. These are mainly self-published books, reprints of books in the public domain, and resurrected out-of-print books. They vary enormously. Some become best-selling light porn for housewives. Others are spam created by software that pirates an existing title in a few hours (one professor wrote a program that has produced 800,000 specialized “books” for sale on Amazon). Others are highly specialized books not attractive to a publisher. When Baker & Taylor reports that book prices are going up, they are describing traditional, not “non-traditional,” publishing.
What about costs? The cost of bringing a traditional book to market is high and largely fixed, so the declining unit sales of the average book is a huge problem. According to BookScan, the best count we have, Americans bought only 263 million adult nonfiction books in 2011, meaning that the average U.S. nonfiction book now sells fewer than 250 copies per year (collectors of first editions take note: it is second editions that turn out to be rare!). Only a few titles become big sellers. In an analysis of 1,000 business books released in 2009, only 62 sold more than 5,000 copies, according to the New York Times.
So we have an industry that is shrinking and being divided among many more products and players, few of which can make money. Not a good sign.
Marketing and distribution. Well, maybe better marketing and more rational distribution can target new customers and grow demand while reducing costs. It happens in hard goods and industrial businesses all the time. Can we fix the book market?
Investments in marketing are very tough to justify. A publisher needs to acquire the book (pay an advance to the author), then develop (edit) it, design, name, print, launch, distribute, warehouse, sell, and handle returns (about a quarter of all books flow backwards from retailer to distributor or publisher — a huge cost avoided by eBooks). After all of this, the average conventional book generates only $50,000 to $200,000 in sales, which radically limits how much publishers can invest in marketing. Increasingly, publishers operate like venture capitalists, putting small amounts of money to work to see what catches on and justifies additional investment. Only proven (or outrageous) authors attract large marketing budgets.
Increasingly, book marketing is done by authors, not publishers. But how does an author market a book? The only way she can: to her friends and community. With too much to read, we read what our friends advise (women especially read with their friends. Publishers are intensely interested in what reading groups choose to read in an era where there is no general audience for nonfiction and fiction is highly segmented). Some products catch on and a few become blockbusters — even some books that begin as unconventional titles without publishers.
So marketing is tough to fix — how about distribution? It’s a disaster. Retail is hopeless: your chances of finding any given book in a bookstore are less than 1%. For example, there are about 250,000 business titles in print. A small bookstore can carry 100 of these titles; a superstore perhaps 1,500. Stores are for best-sellers – if there are any. Online can carry every title, but really, who needs 250,000 business books? Online selling solves this problem, but enjoys such massive returns to scale that concentration is unavoidable. In the latest count, Amazon had a 27% share of all book sales (including, I estimate, about two thirds of all eBook sales — meaning that they monopolize the only part of the book business that is growing).
Underlying demand. OK, fine — the industry is broken. But music was busted too: retailers evaporated, product proliferated and went digital, the label’s value-added shrank, and piracy was a much bigger issue than in books. And despite it all, music is making a comeback: sales are up if you count concerts and ring tones. A few people make a living at it and a few become stars. Is this the future of books?
It doesn’t look that way for reasons articulated by Steve Jobs: although people still listen to music, many people have simply stopped reading books. Speaking about the Amazon Kindle, he argued:
“It doesn’t matter how good or bad the product is, the fact is that people don’t read anymore. Forty percent of the people in the U.S. read one book or less last year. The whole conception is flawed at the top because people don’t read anymore.”
Ouch. Setting aside Jobs’s known tendency to dismiss technologies that he later pursued, is demand for books actually dropping? Are we even reading the books we buy?
The evidence is overwhelming that we read fewer books than we used to. As summarized nicely in the New Yorker, the National Endowment for the Arts has since 1982 teamed with the Census Bureau to survey thousands of Americans about our reading. When they began, 57% of Americans surveyed claimed to have read a work of creative literature (poems, plays, narrative fiction) in the previous twelve months. The share fell to 54% in 1992, and to 47% in 2002. Whether you look at men or women, kids, teenagers, young adults, or the middle-aged, we all read less literature, and far fewer books.
This is not a small problem, nor is it confined to the book business. The N.E.A. found that active book readers are more likely to play sports, exercise, visit art museums, attend theatre, paint, go to music events, take photographs, volunteer, and vote. Neurologists have demonstrated that the book habit builds a wide range of cognitive abilities. Reading grows powerful and important neural pathways that not only make reading easier as we do more of it, but enable us to analyze, comprehend, and connect information.
But for the first time in human history, people all over the world are reading fewer books than they used to. Faced with compelling media alternatives, humans everywhere are abandoning the book. We read, but we are losing the habit of reading deeply. Having conquered illiteracy, we are now threatened by aliteracy. Reviving the book industry is only possible if we can revive the book itself.
More women than men now graduate from college and they earn better grades. But at every level of educational attainment, men still earn more money and the gap grows larger with time. Are we systematically underpaying women? If so, why would labor markets behave that way?
To some, the question is laughably simple: greedy male capitalists exploit women. This is true, but greedy capitalists exploit men too. Greed, not fairness, should lead to some rough market price for equivalent skill and experience unless talented women are somehow different from anything else that is bought and sold in quantity. Asserting that bosses are greedy and biased doesn’t explain much. After all, greedy bosses cheerfully bid up the price of copper. At the risk of commodifying half the population, it is useful to ask why they are not bidding up the price of talented women.
There are, it turns out, many reasons that women are paid less than men. Most raise issues worth addressing, whether or not they conform neatly to the “glass ceiling” narrative suggested by the infographic on the right. The reasons for pay inequality include:
- Deliberate discrimination. Don’t kid yourself — this happens a lot. Goodyear Tire famously paid women less than men who did the same work and took pains to conceal the policy. The Lilly Ledbetter Fair Pay Act resulted when Ledbetter discovered the facts and sued Goodyear. It is interesting to ask how the story would have been different if the fact of discriminatory pay had been known all along. Would women have demanded raises? Or would they have changed jobs and, if they did, would the result have been higher or lower aggregate pay for women?
- Motherhood-induced time off and reductions in work hours. Part of the reason that women earn less is that they take more time off to raise kids. When women first graduate, pay differences are smaller than they will ever be again, for reasons we will touch on later. By age 30, however, significant differences emerge. Harvard’s Larry Katz and Claudia Goldin joined with Chicago’s Marianne Bertrand to follow nearly 3,000 M.B.A.’s over 15 years. The women started off making 88 percent as much as men, but 10 to 15 years later, they made only 55 cents for every dollar of men’s pay.
The scholars accounted for differences in grades, course choices, and previous experience. Their conclusion: kids kill careers. They found that the women’s pay deficit was almost entirely because women interrupted their careers more often and tended to work fewer hours. The rest was mostly explained by career choices: for instance, more women worked at nonprofits, which pay less. A subsequent study by scholars at CUNY, also published by NBER, largely confirmed this finding.
This explanation dodges the underlying question of why the financial penalties for taking time off are so high. After all, if between age 30 and 35 I take a year of maternity leave and then work 4 days a week for six or seven years, I might sacrifice two years of work experience between age 30 and 40, meaning that as a 40-year-old woman, I have the same experience as a 38-year-old man who contributed zero to raising his kids. Is there really something so magical about the fourth decade of life that missing some work justifies a permanent economic penalty? A Labor Department study completed in 1992 concluded that time off for career interruptions explains only about 12% of the gender gap (not counting part-time work and experience effects; and, unlike Goldin et al., it did not focus only on MBAs).
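The “two years of experience” figure in the hypothetical above follows from simple arithmetic, sketched here (the 6.5 splits the difference on “six or seven years”; a 4-day week is assumed to forgo one day in five):

```python
# Experience sacrificed = one full year of leave, plus the fraction of a
# work week given up across the part-time years.
full_year_off = 1.0
years_part_time = 6.5          # "six or seven years" at 4 days a week
fraction_lost_per_week = 1 / 5 # a 4-day week forgoes one day in five

experience_lost = full_year_off + years_part_time * fraction_lost_per_week
print(f"Experience sacrificed: about {experience_lost:.1f} years")  # ~2.3 years
```

So the scenario costs roughly 2.3 years of experience over a decade, which is what makes the permanence of the pay penalty the puzzle: a 40-year-old in that scenario has about a 38-year-old’s experience, not a junior employee’s.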
There remains, of course, the threshold question of whether women should be the default caretakers and disproportionately bear the professional cost associated with raising children. In many households they do not — but this is still the exception.
- Experience gaps. People with more experience are paid more if experience translates into superior productivity. Women who leave the workforce to have children pay twice: once because they work fewer hours and a second time because they accumulate less experience. This is reflected in their earnings when they return. This gap should be erased when a woman has been back at work awhile and research indicates that it takes about four years if a woman returns to work full-time, which many do not.
- Occupational differences. To the extent that some occupations are more heavily male and higher paid while others are more heavily female and lower paid, earnings gaps may indicate a problem of occupational mix. As women enter traditional male occupations, as has occurred very widely during the past 40 years, these effects lessen. Some scholars figure that most of the reduction of the gender gap during the last four decades is due to women entering traditionally male occupations; others determine that only a third of the gap was closed this way.
Most studies do not ask why a profession earned less money to start with. After all, pay in many professions (including teaching) declined as they became more female and pay in some current professions (including law) appears to be going through something similar. Scholars who study these differences often have trouble sorting out historic patterns of gender discrimination from productivity or skill related pay differences.
- Attraction to less successful industries or firms. Some industries and some companies pay both men and women less than do other industries. If women choose these industries or companies disproportionately, their average earnings will be lower compared to average men’s earnings, even if they receive equal pay for equal work. Lower-productivity industries, notably service-intensive retail, education, and some health care, pay both women and men less than do higher-productivity industries. The problem, of course, is that pay gaps persist within industries, not simply between them — but it is the averages that make for enticing infographics. A Labor Department study completed in 1992 concluded that 22% of the gap between men’s and women’s earnings could be explained by variation in industries.
In some cases, women also seem to choose firms within an industry that pay both men and women less (perhaps because they offer more flexible work arrangements). Janice Madden studied women stockbrokers, for example, whose pay is strictly performance-driven. She documented that although women were assigned inferior accounts (and performed as well as men when they were not), a relatively small share of the total pay gap was the result of this unequal treatment. Although the industry paid women quite well, women were more likely than men to work in smaller, less successful brokerages.
- Lower expectations and inferior bargaining. Most women do not bargain for their salaries as aggressively as do most men. I once had to explain to a VP I had hired that I had expected her to counter my initial compensation offer, not simply to accept it. As a result, I was underpaying her relative to her colleagues and industry norms. I was mildly annoyed as I explained that I would pay her more than she agreed to but expected her to take a more aggressive view of her economic value in the future.
There is plenty of evidence that this example was not unusual, even though my response probably was. The problem begins with expectations: women expect to be paid less than men do. A 2012 survey of 5,730 students at 80 universities found that women expected starting salaries that were nearly $11,000 lower than those of their male classmates. Women veterinarians, who bill their own clients at rates they set, were found to set their prices lower than their male colleagues and to “relationship price” more frequently, meaning they did not charge friends or clients for small amounts of work. A similar effect occurs in law firms, where a lucrative partnership often depends on billed hours. The most prominent scholarly work in this area is by Linda Babcock at Carnegie Mellon, whose book title captured her major finding: Women Don’t Ask. Babcock realized the problem when she noticed that the plum teaching assistant positions at her university had gone to men who had bothered to ask about them, not to women, who had expected them to be posted somewhere.
The effect on women of not negotiating is huge. According to Babcock, women are more pessimistic about how much is available and so they typically ask for, and get, less when they do negotiate — on average, 30 percent less than men. She cites evidence from Carnegie Mellon masters degree holders that eight times more men than women negotiated their starting salaries. These men were able to increase their starting salaries by an average of 7.4 percent, or about $4,000. In the same study, men’s starting salaries were about $4,000 higher than the women’s on average, suggesting that the gender gap in starting salaries might have been closed had more of the women negotiated. Over a professional lifetime, the cost to women of not negotiating was more than $1 million.
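Babcock's lifetime figure is easy to sanity-check with a back-of-the-envelope sketch. The raise and return rates below are my own illustrative assumptions (the study reports only the roughly $4,000 starting gap and the $1 million lifetime cost); the point is simply that a modest starting difference compounds over a career:

```python
def career_gap(initial_gap=4000.0, years=40, raise_rate=0.03, return_rate=0.05):
    """Value at retirement of a starting-salary gap that grows with annual
    raises, assuming each year's shortfall could have been invested.
    All rates here are illustrative assumptions, not figures from the study."""
    total = 0.0
    for t in range(years):
        yearly_gap = initial_gap * (1 + raise_rate) ** t       # gap grows with raises
        total += yearly_gap * (1 + return_rate) ** (years - t)  # shortfall then compounds
    return total

print(round(career_gap()))                 # roughly $790,000 with invested shortfalls
print(round(career_gap(return_rate=0.0)))  # roughly $300,000 in forgone pay alone
```

With slightly more generous raise or return assumptions, the total easily clears Babcock's $1 million.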
Fortunately, this is pretty easy to fix. Women can learn quickly that everything is negotiable. The JamKid pointed me to a recent investigation by his teacher John List at the University of Chicago, showing that given an indication that bargaining is appropriate, women are just as willing as men to negotiate for more pay. List finds that men remain more likely than women to ask for more money when there is no explicit statement in a job description that wages are negotiable.
Although legislation and litigation will surely be useful to discourage and penalize employers who systematically discriminate against women at scale, as WalMart is alleged to have done, most of the forces that contribute to inappropriately low pay for women will not be remedied in court. Two policy remedies, however, could make a large difference and are politically achievable.
- Paid parental leave. Paid leave for new parents is essentially a tax on non-parents (we already tax non-parents by allowing parents to deduct children as dependents). This makes sense — we want to reduce the economic penalty associated with having children. Every modern country except the United States grants mothers and fathers the right to take time off work with pay following the birth or adoption of a child. European countries provide an initial period of 14 to 20 weeks of parental leave, with 70 to 100% of wages replaced. These countries also provide additional paid parental leave beyond this, although the duration of job protection and leave payment differs substantially across nations. The total duration of paid leave exceeds nine months in the majority of advanced countries. Canada, for example, currently provides at least one year of paid leave, with around 55% of wages replaced, up to a ceiling.
California is the only state that currently requires paid parental leave. Initial evidence suggests that the act has doubled maternity leave from three to seven weeks (barbaric by European standards) and raised the wages of new mothers by 6-9%. It’s a start, but we should join the modern world, and perhaps follow Denmark, which last I checked required that husbands take equal time away from work on the birth of a child in order to minimize the long-term impact on women’s earnings. To those who worry that paid leave subsidizes overpopulation on a capacity-constrained planet, I would point to declining birth rates throughout Europe and the realization, slowly beginning to dawn on the world, that we currently face far more threats from low birth rates than from high ones.
- Structured disclosure. The second step we could take is to force employers to disclose wage disparities among people who do the same work. This requirement needs a careful touch and FASB/SEC rulemaking, but if a public company had to disclose the difference in pay between men and women by occupational category, it would quickly become a benchmark that everyone from board members to managers would need to look at and live with. Scholars and consultants would quickly compare businesses. Websites like Glassdoor would post comparisons. Companies would feel pressure either to justify disparities with additional data (showing, for example, that men had more experience or more training within the same occupational category) or to face demands to explain themselves. Women would be emboldened by data to ask for their due. These metrics would not be susceptible to industry or occupation differences, nor would they disclose salaries — only the percentage difference between men’s pay and women’s.
It might also begin a deeper, more fact-based, discussion about the sources of economic inequality. And such a disclosure would quickly expose the most embarrassing economic fact of all: some men — but relatively few women — are shockingly overpaid.
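A disclosure metric like this is simple to compute. The sketch below is purely illustrative (the categories, salaries, and rounding convention are my assumptions, not anything proposed by FASB or the SEC); it shows how a company could report only percentage gaps by occupational category, without revealing any individual salary:

```python
from collections import defaultdict

def pay_gap_by_category(records):
    """records: iterable of (category, gender, pay) with gender "M" or "F".
    Returns {category: percent by which mean men's pay exceeds mean women's},
    reported only where both genders are present in the category."""
    totals = defaultdict(lambda: {"M": [0.0, 0], "F": [0.0, 0]})
    for category, gender, pay in records:
        bucket = totals[category][gender]
        bucket[0] += pay   # running sum of pay
        bucket[1] += 1     # head count
    gaps = {}
    for category, by_gender in totals.items():
        if by_gender["M"][1] and by_gender["F"][1]:
            men = by_gender["M"][0] / by_gender["M"][1]
            women = by_gender["F"][0] / by_gender["F"][1]
            gaps[category] = round(100.0 * (men - women) / men, 1)
    return gaps

# Hypothetical payroll records, for illustration only.
records = [
    ("engineer", "M", 100000), ("engineer", "M", 110000),
    ("engineer", "F", 95000),
    ("analyst", "M", 80000), ("analyst", "F", 80000),
]
print(pay_gap_by_category(records))  # {'engineer': 9.5, 'analyst': 0.0}
```

Because only within-category percentages are published, the metric sidesteps industry and occupational mix while still exposing unequal pay for the same work.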
The transition from fields to factories always mixes agony with hope. Families abandon land and traditions that often go back generations, move to cities and reset their lives from sunlight to time clocks. Mass production industries flourish for a few generations. Life is hardly a bed of roses, but it is nearly always better or people would return to the farm — and nobody returns. Factory work means educated kids, savings, medical care, and consumer goods like refrigerators and cars. Manufacturing jobs may be dangerous or tedious, but they also deliver opportunity and hope to millions of people.
Eventually, of course, this all changes. Service industries like hospitality, health care, education, and banking grow faster than manufacturing. Consumers buy stuff made elsewhere. In the US at least, income disparity has increased and life experience has stratified until many people — including many with low incomes — have no understanding of manufacturing or factory work. Factories seem somehow dirty, Dickensian, and something to be avoided.
The result in the US is a schizophrenic attitude towards manufacturing. We are divided between those who see no future for factories and those who believe that manufacturing is vital to our economy. It’s Palo Alto vs. Pittsburgh. Sunny Silicon Valley typically sees the future in online technologies, clean tech, or biotech and associates manufacturing with an economic time and place that is as far gone as the family farm. Pittsburgh’s workers, managers, policymakers, and professors argue passionately that the decline of the middle class and the decline of manufacturing employment are inextricably linked and urge government action to restore our competitive position.
As both a confirmed Silicon Valley technologist and a former machinist, union man, and factory worker, I understand both world views. Perhaps more importantly, I have studied a recent report by my former colleagues at the McKinsey Global Institute that details the role of manufacturing in the US and global economies (click here to download the 170 page report or here for the summary; I highly recommend it and relied on it for most of the data and charts that follow). The punch line: Palo Alto and Pittsburgh both have it wrong, even when their prevailing myths contain elements of past truths. Manufacturing still matters, but for different reasons than either group believes.
Let’s start with Silicon Valley. Palo Alto sees America through a prism coated in software and web services, with an economic future built on service and information businesses. With the quaint and unprofitable exceptions of Tesla and the odd 3D printer, and notwithstanding the atonal musings of Andy Grove, we haven’t made anything in Silicon Valley since we drove out disk drives and semiconductors a generation ago. We view manufacturing as a relic of the industrial age, not as an engine of innovation. This belief is held in place by several myths, including:
1. Companies that make things have a lot in common.
Manufacturing is not a sector: companies that make things vary enormously in the nature of their products, operations, and economics.
Some, like steel or aluminum plants, are incredibly energy-intensive and heavy; these manufacturers need to be near water (for transport), raw materials, and cheap power (Alcoa chairman and former Treasury Secretary Paul O’Neill once described aluminum to me as “congealed electricity”). Labor costs are completely secondary.
Pharmaceuticals, in contrast, live or die on product development. They need access to capital, technology, and skilled researchers. A furniture maker needs semi-skilled workers and access to distribution.
It is hardly useful to talk about manufacturing as a single thing — it really isn’t. The McKinsey report tries to segment manufacturers into five groups and describe the requirements and challenges of each, illustrated on the right. The scheme captures fundamental differences between manufacturing segments, although there remains enormous variation even within them.
These groups require vastly different skills and have fared quite differently in advanced countries, with the final group, the so-called labor-intensive tradables, not surprisingly accounting for the biggest share of job losses.
2. Manufacturing is a commodity that contributes little to the US standard of living.
Nope: manufacturing matters, just not like it used to. McKinsey found that throughout the developed world, manufacturing is declining in its share of economic activity but contributes disproportionately to a nation’s exports, productivity growth, R&D, and innovation.
As the chart on the right illustrates, manufacturing contributes to productivity growth (the basis for all increases in living standards) at about double the rate that it contributes to employment. It also produces spillover effects that are frequently not captured in data about manufacturing.
Manufacturing adds economic value, much of which is transferred to consumers in the form of lower prices (which are economically indistinguishable from a pay increase). On a value-added basis, manufacturing represents about 16% of global GDP, but accounted for 20% of the growth of global GDP in the first decade of this century.
Finally, manufacturing accounts for 77% of private sector R&D, which drives a huge share of technology innovation. It is far from clear that Silicon Valley would exist without it.
3. Our future is in knowledge-intensive services, not manufacturing.
Once again, our traditional categories are not helpful. Manufacturing frequently is a knowledge-intensive business. (It surprises many people to learn, for example, that there are more dollars of information than dollars of labor in a ton of US-made steel.)
Manufacturing is increasingly data intensive. Big Data is revolutionizing manufacturing products and processes, no less than services. Data enables manufacturers to target products to very specific markets. The “Internet of Things” relies on sensors, social data, and intelligent devices to rapidly inform how products are designed, built, and used. Huge data sets have also enabled new ways for manufacturers to gather customer insights, optimize inventory, price accurately, and manage supply chains.
This is not your father’s factory. Most US manufacturing jobs are not even in production. As the accompanying chart shows, they are service jobs linked to manufacturing or inside manufacturing companies.
4. Manufacturing depends on low cost labor, which is why it has fled overseas.
This particular conceit is endemic in Palo Alto. McKinsey documents one possible reason: no sector, not even textiles, has shifted production overseas as fast as computers and electronics. Indeed, as the chart at the right illustrates, some manufacturing sectors have actually added jobs during the past ten years.
There is a second dimension to this myth, however: that manufacturing jobs are factory jobs. As illustrated above, many jobs in manufacturing companies are service-like jobs, including R&D, procurement, distribution, sales and marketing, post-sales service, back office support, and management. These jobs make up between 30 and 55 percent of manufacturing employment in the US. Much of the work of manufacturing does not involve direct product fabrication, assembly, warehousing, or transportation.
The final misunderstanding is that most factory jobs are unskilled or low paying. In fact, manufacturers worldwide are currently experiencing chronic skill shortages. McKinsey projects a potential shortage of more than 40 million high-skilled workers around the world by 2020 — especially in China.
In short, the standard Silicon Valley view is much too narrow: manufacturing is and will remain a high value industry that contributes meaningfully to our standard of living. Manufacturing (some of it, anyway) is a competitive asset.
Move east to Pittsburgh, and you will quickly discover that a completely different manufacturing mythology prevails, focused mainly on job-creation. In these parts, the loss of manufacturing jobs is understandably considered a crisis for the US. Politicians pay homage to “good-paying manufacturing jobs” and blame the inability of a high school grad to get a factory job that supports a family, a home, and a motorboat on cheatin’ Chinese and union-bustin’ outsourcers. Dig a bit deeper, and you will discover that these beliefs are also grounded in economic myths, such as:
1. Manufacturing jobs pay more than service sector jobs.
This view often reflects the wishes of people with a history in “rust belt” manufacturing. In fact, manufacturing jobs pay very much like service jobs do — except at the very low end, where manufacturing creates far fewer of the minimum-wage jobs that are common in hospitality and retail.
Part of the reason for this, of course, is that manufacturers can have low-value work performed overseas — not an option for McDonald’s, Walmart, or others who deliver services face-to-face.
As shown on the right, manufacturing creates about the same number of jobs in each pay band as do service sector jobs, except that there are fewer low-paying jobs and a few more high paying ones. An important caveat is that manufacturing company jobs may be more likely to include benefits, which are excluded from this calculation.
That all said, it is no longer a given that manufacturing is a source of better-paying jobs.
2. We should look to manufacturing for the jobs we need.
OK, but at least manufacturing creates decent jobs. Why not promote manufacturing to create jobs — even if they pay the same as service sector jobs?
The answer depends on your country’s stage of development, on domestic demand for manufactured goods, and on how robust your service sector is. For the US, the case for public policies favoring manufacturing is weak.
McKinsey documents what many have observed: manufacturing jobs decline once a country reaches about $7,000-$10,000 GDP per person, as illustrated on the right. This pattern holds both across and within countries. As a result, manufacturing jobs are declining everywhere except in the very poorest countries (even China is losing manufacturing jobs).
But not all low-cost-labor countries enjoy equivalent manufacturing sectors. More important even than stage of development is the level of domestic demand for manufactured goods and the robustness of the domestic service sector. The US and the UK have such large service sectors that we derive a smaller share of our GDP from manufacturing, even though in absolute terms both countries have robust manufacturing sectors.
3. US jobs are flowing overseas.

There are typically two parts to the belief that US jobs are flowing overseas. First is the underlying view that jobs are a zero-sum asset to be fought over like territory. This idea has political salience, but it is economic nonsense. Jobs are the complex result of many things, including the availability of public or private capital, legal and regulatory systems, local demand conditions, and managerial competence. Cheap Chinese labor is typically the least of it.
The other idea, however, is that we can somehow return to 1950, when unionized manufacturing jobs dominated the US economy. This is no more likely than a return to small family farming (and like those who romanticize what Marx aptly termed “the isolation of rural life”, those who idealize factory work often have suspiciously clean fingernails).
As the accompanying chart shows, manufacturing as a share of economic activity is in long term secular decline in all high and middle income countries worldwide — including China. It is only growing as a share of the economy in very poor countries. As the UN has pointed out, Haiti is in desperate need of sweatshops. Vietnam and Burma are growing manufacturing’s share of economic output — often at China’s expense.
Manufacturing matters enormously, just like agriculture does. But it is not growing as a share of economic output. (McKinsey highlights one interesting exception to this rule. Sweden has maintained manufacturing as a share of its economy by targeting high growth sectors and especially by investing twice as much in training as other EU countries. Most importantly, however, it devalued the krona against the euro to make exports competitive — effectively taxing imports.)
4. Companies build plants overseas in search of cheap labor.
There was a time when labor costs were a determining factor in locating production facilities. This is much less true today, when location decisions are driven by many factors other than labor costs, as the chart on the right illustrates.
Depending on how a company competes and whether it is locating research, development, process development, or production facilities, its location criteria may or may not turn on factor costs such as labor. Proximity to consumers or to talent may matter more. In some cases taxes matter; in others, access to suppliers matters.
The rising cost of commodity inputs and transportation during the past two decades has altered this calculation. Steel, for example, was about 8% iron ore cost and 81% production cost as recently as 1995. Today ore is more than 40% of the cost of a ton of steel and production costs are only 26%. Steel companies care much more about the cost of ore than the cost of labor.
Likewise, transportation costs have skyrocketed with energy prices and infrastructure demands (the US grows highway use by about 3% per year and highways by about 1% per year; anyone living here knows the result). Producers from P&G to Ikea and Emerson are now forced to locate plants near customers to minimize transportation costs. As a strategy for plant location decisions, labor arbitrage looks very 1980s.
5. Buying American-made goods helps American workers.

Politicians and union leaders say this all the time, and it is sheer idiocy. Most would not be caught dead in my German car, which was designed and made in Tennessee, but beam proudly at the sight of a Buick van imported from China.
High productivity manufacturing benefits consumers, as companies pass on savings to Americans in the form of lower product costs. As illustrated by the chart to the right, most consumer durables cost today about what they did in the 1980s — and quality is much higher. Economists have estimated that Walmart, Target, and Costco reduce retail prices by 1-3% each year because they pass to consumers savings extracted from manufacturers (this, by the way, is a big reason that manufacturing continues to shrink as a share of our economy. We pay less for our stuff and more for services like education and health care.)
Americans say we believe in “Made in USA” campaigns, but as consumers, we are famously delusional. When surveyed, we profess to favor locally produced merchandise. But our wallets don’t lie: we buy high quality, low cost stuff regardless of where it comes from.
So how do we grow US manufacturing? Same as always: by creating innovative materials, processes, and products. McKinsey sees “a robust pipeline of technological innovations that suggest that this trend will continue to fuel productivity and growth in the coming decades”. Of course, most innovations are hard to foresee. One reliable source of innovation turns out to be anything that reduces weight, such as nanomaterials, some biotech, lightweight steels, aluminum, and carbon fiber. It turns out that although we buy more stuff each year, the total weight of our purchases actually declines, because nearly everything we buy, including cars and airplanes, weighs less than it used to.
Manufacturers have come to appreciate the power and the necessity of innovation. During the Clinton administration debates over CAFE standards, car company engineers soberly advised us that the theoretical limit of internal combustion engines was a 10-15% improvement over the current average of 17 miles per gallon. Today these companies have already doubled that efficiency and speak openly about doubling it again, even as they invest in non-combustion solutions that are even more efficient.
In short, manufacturing matters for different reasons than it used to. It used to be a plentiful source of unskilled jobs; today its value is as a driver of innovation, productivity improvement, and consumer value. It’s an exciting part of the economy, even if it cannot solve every problem we face related to job creation and economic growth.
I got the iPhone 5 because it was free. The loathsome AT&T charged me $300 and I sold the old iPhone 4 on eBay for that amount. I hate AT&T, but their 4G LTE is really fast in the Bay Area, and factory unlocking a phone under contract to enable overseas SIMs and free tethering is trivially easy. The new phone is like the old one but bigger, faster and thinner — reinforcing my view that Apple post-Steve is an incremental innovator, not a disruptive one.
Most of the changes are the result of a new operating system, not new hardware. But one feature is blowing me away — totally changing how I use my phone. The new feature is keyboard dictation, which appears on all iOS 6 keyboards, whether you have the new iPhone or not.
By dictation, I emphatically do not mean Siri. Siri is a dog that performs a few well-chosen show tricks and inspired at least one hysterical advertising spoof. Siri is very useful for directions, reminders, OpenTable reservations, and a good laugh. Siri entertains — but dictation delights.
Dictation has been around for a decade, and on iOS since the 4S and third generation iPad, but it was always more trouble than it was worth. Suddenly, dictation not only works, it works shockingly well. For text messages, emails, tweets, and even first drafts of longer documents, it is massively faster to dictate than to type (unfortunately, I still need to type blog posts the old way; maybe that explains the 60-day hiatus…). I have a hard time understanding why Apple is not using its ad dollars to promote dictation rather than Siri — unless the processing costs are huge and they are losing money on the feature.
What changed? In a word: Nuance, plus a massive investment in cloud infrastructure. Nuance Communications is the public company behind Dragon Dictate, which has been the market leader in desktop speech recognition for at least the past 15 years (the company was founded in 1992 out of SRI as Visioneer, known mainly for early OCR software). Neither Apple nor Nuance talks about it, but it looks to many people like Apple has licensed its dictation software, including Siri’s front-end interpreter, from Nuance. One sign: before Apple bought Siri, Siri carried a “speech recognition by Dragon” label (earlier, Siri had used Vlingo, which apparently did not work as well). Not only that, but Nuance has built several speech recognition apps for the iPhone and iPad that work exactly like the speech recognition built into the iPad and iPhone 5.
This is interesting in part because Apple never licenses critical technology for long. It insists on controlling its core technology from soup to nuts, so many people assume that Apple has considered buying Nuance. The problem is that Nuance holds licenses with many Apple competitors — business that would disappear if Apple bought the company. Apple would need to massively overpay for the asset, something it never does. More likely, Apple will hire talented speech recognition people and build its own proprietary competing product, just as it did with maps when it declared independence from Google. In that case, figure that dictation will regress for a year or two, just as maps have done, because real-time, accurate speech recognition makes maps look simple. Plus, Nuance protects its patents aggressively, and those patents are, according to some writers, not easy to avoid.
Google, though, is avoiding them nicely; Android speech recognition is also outstanding. How do they do it? The Google way: throw talent at it. Google hired more PhD linguists than any other company, and then it hired Mike Cohen. Cohen is an original co-founder of Nuance, and if anyone can build voice recognition without tripping on the Nuance patents, he can. Apple appears likely to pursue a similar course.
Mobile dictation works by capturing your words, compressing them into an audio file, sending it to a cloud server, processing it using Nuance software, converting it to text, and sending the text back to your device, where it appears on your screen. Like all good advanced technology, it passes Arthur C. Clarke’s third law: it is indistinguishable from magic. The tricky bit is the software processing, which has to apply a rich set of rules based on context. The software decides on the meaning of each word based not only on the sound pattern, but on the words it heard before and after the word it is deciding upon.
This is highly recursive logic and nontrivial to execute in real time. Try saying “I went to the capital to see the Capitol”, “I picked a flower and bought some flour”, or “I wore new clothes as I closed the door” and you begin to understand the problem that vexes not only software, but English learners everywhere. Apple dictation handles these ambiguities perfectly — meaning that it either gets the answer right, or it realizes that there are multiple possible answers, takes a guess, and offers the alternative so that you can correct it with a quick touch.
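The flour/flower problem can be sketched in a few lines. This is my own toy illustration with made-up counts, not Nuance's or Apple's actual models (which use far richer context and real statistics): score each candidate spelling by how often it appears next to the words heard around it, and pick the likeliest.

```python
# Made-up bigram counts standing in for a real statistical language model.
BIGRAMS = {
    ("the", "capital"): 50, ("capital", "to"): 40,
    ("the", "capitol"): 30,
    ("some", "flour"): 25, ("a", "flower"): 45,
    ("some", "flower"): 2, ("a", "flour"): 1,
}

def pick_homophone(prev_word, candidates, next_word):
    """Choose among identically-sounding spellings using the words
    heard immediately before and after them."""
    def score(word):
        return (BIGRAMS.get((prev_word, word), 0) +
                BIGRAMS.get((word, next_word), 0))
    return max(candidates, key=score)

print(pick_homophone("some", ["flower", "flour"], "and"))  # flour
print(pick_homophone("a", ["flower", "flour"], "and"))     # flower
```

Real systems consider much longer contexts in both directions, which is why the processing happens in the cloud and why the logic is recursive: each choice changes the context for its neighbors.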
It takes a little bit of practice to use dictation well. It helps to enunciate like a fifth grade English teacher and to learn how to embed punctuation. The iOS 6 User Guide has a list of available commands. Four are all you need: “comma”, “period”, “question mark”, and “new paragraph” (or “next paragraph”). You can also insert emoticons: “smiley”, “frowny”, and “winky”. For anything else, speaking the punctuation usually works: “exclamation point”, “all caps”, “no caps”, “dash”, “semicolon”, “dollar sign”, “copyright sign”, “quote”, etc.
Overall, the experience of accurate mobile dictation is a magic moment — like the first time you use a word processor or a spreadsheet (for those who recall typewriters and calculators), or the first browser or email (yeah, we didn’t used to have those, either). Give it a try. Apple has done something amazing and for once, actually under-hyped it.
When George McGovern ran for president, I was the age the JamKid is now. He was then, and remained, a remarkable and vastly underappreciated American. He was a decorated war hero who had seen gruesome combat and calmly led a massive crusade against the Vietnam war. He was a professor with a PhD in History who would never have dreamed of calling himself “Dr. McGovern” (unlike, say, in Germany, where most politicians are eager to run as Herr Doktor). He was a democratic Democrat, whose commission reset the party rules and stripped the insiders of much of their power. More than anyone, McGovern closed the smoke-filled rooms (and frankly, made the Party more difficult to govern and more dependent on large donors). He directed the Food for Peace program and believed profoundly in helping the poor and desperate, even in the face of evidence that foreign aid did not promote economic self-sufficiency. He was an early proponent of dietary guidelines and as early as 1973 warned of the growing amount of sugar in the US diet.
I met McGovern a few times and had dinner with him once. He was a modest, self-effacing guy, who knew a surprising amount about labor history. I learned that his dissertation was on the 1913 Colorado coal strikes. He also knew a lot about farming, not simply because he was from South Dakota, but because he had a lifelong aversion to hunger after seeing Italians starving during his wartime service. I learned that he was probably the last person to ever speak to Bobby Kennedy — whose assassination shook him, and me, even more deeply than the loss of JFK.
McGovern was widely reviled. During his run for the presidency in 1972, the New York Post referred to him as “George S. (for surrender) McGovern” in virtually everything it wrote. He was not a great campaigner, although he brought hundreds of people into politics and many of them stayed — including Bill Clinton. His hastily considered choice of Missouri Senator Tom Eagleton as his running mate ranks with McCain’s choice of Sarah Palin among textbook examples of disastrously poor vetting. Despite an Obama-like grassroots campaign led by campaign manager and future Senator Gary Hart, McGovern lost 49 states to Richard Nixon, the worst landslide in modern US history. Although he later joked that “for many years, I wanted to run for the Presidency in the worst possible way and last year, I did”, it had to hurt to lose an election to a man he knew to be deeply dishonest and corrupt.
In later years, the former minister, professor, Congressman, global food program director, Senator, and presidential candidate ran a 150-bed inn in Stratford, Connecticut. After the business went bankrupt, he reflected often and publicly on the role of government regulations and lawsuits in constraining small business. At one point, he surprised conservatives when he wrote in the Wall Street Journal that “I … wish that during the years I was in public office I had had this firsthand experience about the difficulties business people face every day. That knowledge would have made me a better U.S. senator and a more understanding presidential contender.”
Part of why politics in the US works is that people as courageous and talented as George McGovern are drawn to public service. I worry that this is becoming less true. In part due to reforms McGovern championed, parties are weaker and the path to public office is now less dependent on political parties and more dependent on large financial backers than ever before. We are at risk of drawing more heat than light to the national stage. It will be ironic and unfortunate if the result of George McGovern’s wonderful career is that we see fewer like him in the future.
In 1997, I had an idea: if I could aggregate millions of used, rare, and out of print books from around the world on a single website, I could enable people to find and buy books that were otherwise impossible to locate. Like hundreds of others with similar ideas for selling things online, I started an ecommerce company.
As Atlantic writer Derek Thompson points out, that was the year that the US enjoyed an odd service sector convergence: 14 million Americans worked in retail, 14 million in health and education, and 14 million in professional & business services.
Fifteen years later, the landscape has changed. “Books You Thought You’d Never Find” is a silly idea. Book retailers are dying. The company I founded has made an impressive effort to transition from retail to services.
The employment picture reflects these changes. Health care jobs have grown by almost 50% and professional/business services by almost 30%, but, as the chart below illustrates, retail grew less than 3%, adding only 26,000 jobs a year. There is mounting evidence that retail employment is now about to decline sharply. Fifteen years from now, these may be the good old days for brick and mortar stores.
Retail revolutions are nothing new. Boutiques challenged general stores throughout the nineteenth century. Department stores arose, starting with Wanamaker’s in 1896, and challenged boutiques. Starting in the 1920s, car-friendly strip malls challenged main streets. In 1962, Walmart, Target, Kmart, and Kohl’s each opened their first store, initiating the era of big box retail. In 1995, Jeff Bezos incorporated Cadabra — but changed the name to Amazon at the last minute, in part because it started with an “A” and most internet search results were alphabetical.
Today, e-commerce is not just killing some stores – it is killing almost all of them. Very few brick and mortar retailers remain successful. Consider the obvious losers in recent years — none much lamented.
- Department stores are dying. Sears, like Kmart before it, is struggling to survive. Credit default swaps that insure investors against a Sears default on its debt obligations are trading at record highs because markets believe it won’t make it. JC Penney is undergoing a major makeover under the leadership of Ron Johnson, who created the Apple Stores. Johnson has changed the look, the targeted demographic, the colors, the brands, the formats, the shopping experience, and the name of the stores (now JCP). The results have impressed nobody, least of all customers.
- Specialty retailers are not exempt from the onslaught. Online retail is relentlessly taking share in many specialty retail categories, leaving the total dollars available to physical retailers stagnant or even declining. This is starting to put intense pressure on their top lines. According to comScore, online retail rose to a record $35.3 billion during the last holiday season — a 15 percent increase from the previous year.
- Media retailers are dead or dying. Music stores are a fading memory; Borders is gone; Blockbuster, Barnes & Noble, and your “independent” bookstore are all next (Barnes & Noble admitted as much when it spun off the Nook reader as a separate entity rather than tie its fate to dying retail stores). Media retailers are dying because of digitization, of course — but Amazon offers better prices and more selection even on printed books, CDs, and DVDs.
- Big box “category killers” like Best Buy and Staples are being hammered by online sites that translate dramatically lower debt and operating costs into lower prices. Best Buy posted five straight quarters of profit decline before reporting a $2.6 billion loss on March 29.
- Big box general retailers like Wal-Mart, Target, and Costco are seeing declining sales. Until its third fiscal quarter last year, Wal-Mart had posted eight consecutive quarters of declining sales at stores open more than 12 months. Analysts forecast declining same-store sales and profit for Target this year.
Demographic changes are also putting pressure on stores. Urbanization hurts strip malls. Baby Boomers no longer have kids at home. Their kids are marrying later and delaying having their own children, meaning fewer are buying houses that need to be updated and furnished. As these Millennials hit their peak spending years, they are completely accustomed to shopping online. For many Millennials, shopping malls were a teenage social venue — not a place to buy stuff. It is no accident that shopping malls have yet to emerge from the recent recession.
There is good reason to expect this change to accelerate. Physical retailers are typically very highly leveraged and operate on narrow profit margins. Material declines in their top lines make them quickly unprofitable. As stores close or reduce selection, more customers become accustomed to shopping online, which accelerates the trend. E-commerce maven turned VC Jeff Jordan recently cited the example of Circuit City, which was “preceded by just six quarters of declining comp store sales. They essentially broke even in their fiscal year ending in February 2007; they declared bankruptcy in November 2008 and started liquidating in January 2009”. Nor, Jordan notes, did the bankruptcy of Circuit City help out Best Buy any more than the loss of Borders helped Barnes & Noble. “Not even the elimination of the largest competitor provides material reprieve from brutal market headwinds.”
There are a few bright spots in the wasteland of retail stores. The need for fresh food and last-minute purchases means that 7-11, Trader Joe’s, and the corner produce grocer have enduring customer demand. Customers may want to touch some high ticket items before buying them, which gives high margin retailers like Williams Sonoma and Apple an opportunity to offer fun, informative, hands-on retail experiences (although both companies do a growing share of their business online, and Apple lets you pay for items under $50 on your phone and walk out of the store without ever talking to a salesperson or cashier). Stores like Home Depot, which do a significant share of business with contractors who are time-sensitive, not price-sensitive, and need a large number of items quickly, have sustainable value propositions.
Online commerce enjoys enormous advantages, from vastly larger selection and much lower fixed costs and debt to a more customized shopping experience and 24/7 operations. Small wonder that revenue per employee at Amazon is nearing a million dollars, whereas at Wal-Mart — once a paragon of retailing efficiency — it is under $200,000. Hundreds of websites, not simply Amazon, have benefited from the explosion of online retail — and tens of thousands of small retailers use Amazon or eBay’s commerce infrastructure to power specialized businesses. Of course UPS and FedEx benefit as well, in part because they make money on both the initial sale and on the subsequent return of wrong sizes and unwanted gifts.
Since the dawn of commerce in the Nile delta, humans have purchased goods in physical markets. No doubt we will continue to purchase, or at least preview, stuff in stores. But if e-commerce achieves a fraction of the opportunity it currently has in front of it, retail stores as we currently know them will become a thing of the past. It is hard to imagine that we will miss them for long.
Dear Newly Appointed Berkeley Chancellor:
Congratulations! Even though as of this writing, you have not yet been named, you take over the leadership of UC Berkeley at a critical time. At the end of your tenure, the world’s premier public university will either have found a sustainable path forward or will have entered a period of long-term decline. Do us a favor – do not screw this up.
You arrive at a moment when higher education is in wonderful and overdue ferment. Online education is challenging your traditional business model and unsustainable tuition increases. Badges and other alternative credentials threaten your historic right to certify talent. Most of all, Berkeley, like other public universities that serve as engines of knowledge creation and social mobility, is under unprecedented financial pressure.
Berkeley, in particular, has a lot at stake. It is, as I noted here, an amazing public institution, despite its bottomless capacity for self-parody (as you know, my wife is a dean at Cal). 48 out of 52 Berkeley doctoral programs rank in the top 10 of their fields nationally — the highest share of any university in the world. By any measure: NSF Graduate Research Fellowships (#1), National Academy of Sciences members on the faculty (#2 behind Harvard), members of the National Academy of Engineering (#2 behind MIT), membership in the American Philosophical Society, the American Academy of Arts and Sciences, or winners of National Medal of Science — Berkeley excels. It is by a considerable distance earth’s finest public university.
And it serves a public mission. Berkeley’s single proudest claim, ahead even of its 24 national rugby championships, is that it enrolls more students on Pell Grants than all of the Ivy League schools put together. A Pell Grant is a scholarship based on financial need. By serving academically qualified students on Pell Grants, Berkeley ensures that smart, hard-working kids from low-income families have access to a top-flight education.
You may regret the flow of private funds into a public university, but you cannot and should not try to prevent it. Actually, you will devote a great deal of time to encouraging private donations so that Berkeley can remain accessible to middle income students who are not eligible for Pell Grants. This requires building organizational muscles that atrophied when Berkeley, like most public universities, avoided the intellectually distasteful but indispensable work of raising private funds. Berkeley is still building the endowment required to sustain these efforts. The endowment matters because over time, money buys quality — just ask Stanford. It is no accident that although many state universities undertake serious research and offer outstanding educations, only three of the “Public Ivies”, Texas, Michigan, and California, make the list of America’s best-endowed universities.
You realize, of course, that the endowment data shown here are misleading in one important respect: the University of California is less a university than a federation of ten highly autonomous campuses ranging from prestigious broad spectrum research institutions like Berkeley, UCLA, and San Francisco to campuses with pockets of excellence like San Diego, Irvine, and Davis, to schools like Riverside and Merced that are not easily distinguished from the State University.
These data also illustrate why the Regents hired your boss. Of the great public universities, only the University of Texas took endowment-building seriously, making former UT President Mark Yudof irresistible as the current president of UC.
But Berkeley, along with San Francisco and UCLA, has begun to focus on endowment building. It is no surprise that taken together, these three campuses now hold three-quarters of all UC endowment funds. Faced with the choice of compromising academic excellence, raising tuition to levels that reduce access to higher education for many students, or undertaking a covert privatization to maintain the finances of their institution, all three of these schools have raised tuition and quietly sought private funds. Your job is to continue this course.
Soft privatization is not without its management challenges however, especially at Berkeley. First, Cal is a state-owned enterprise. The barely functional California government retains full and largely unwelcome control over your budget and governance, even though it contributes less each year to your operating revenue. Second, your boss happily taxes richer campuses like yours to support poorer ones, so to raise a dollar of endowment, you will often have to attract more than a dollar of donations. Third, strong faculty governance provisions, while occasionally improving decision quality, mostly serve to protect the comfortably and correctly tenured and prevent needed program rationalization. Your biggest risk is not privatization — it is paralysis.
To get Berkeley fully upright and sailing, you need to mend both a broken income statement (you lose money every year and must stop the bleeding, even if the state makes good on a new round of cuts) and a broken balance sheet (your endowment may be larger than other UC campuses, but it is still pathetic. Look at the data.) Nothing else you do will matter unless you set audacious goals to fix your core economics.
I respectfully suggest two:
- Raise the endowment to $10 billion. This is preposterous, but probably achievable over 7-10 years. It requires that you aggressively take the case for Berkeley to the public, to alumni, and to Silicon Valley. It is an incredibly compelling case, but you need to personify it and champion it consistently, loudly, and effectively. The impressive “Campaign for Berkeley” is a great start, committed to adding $3 billion to the endowment by 2013. This is no time to stop. Committing the campus to a $10 billion endowment would ensure access to undergraduate education for all students, the ability to attract and retain outstanding faculty, and a credible case for increased operating autonomy. (And really — if Texas can do it, how hard can it be?)
- Move to a three-semester academic year. You waste a beautiful, expensive campus by using it fully for only 30 weeks each year. Adding a third semester to the academic year earns you $150 million annually, relieves a great deal of pressure to raise tuition, and dwarfs all other growth or cost-saving opportunities, even if you eliminate Summer Session and make no changes to faculty teaching loads. The move would be rapidly copied by other UC campuses and other top-flight universities. This is low-risk, no-brainer money because you fully control both the revenue and the costs of the initiative. It would enable you to admit more students, significantly increase the size of your faculty (which won’t happen any other way), and offer each professor the option to earn additional income by teaching the third semester or to use it for research, as many do now. You are not dumbing down any degree: students would do just as much work, but those who wish to complete an eight-semester undergraduate degree in 2.7 years instead of four could do so. Relative to budget cuts or tuition increases, improving the utilization of your largest and most expensive asset is far less painful and politically controversial than any other economic opportunity you face.
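The utilization arithmetic behind the proposal is easy to sketch. The 30-week figure comes from the letter itself; the 15-week semester length is an assumption used only to make the numbers concrete:

```python
# Back-of-the-envelope utilization math for a three-semester year.
# weeks_per_semester is an assumption; the rest follows from the letter.
weeks_per_semester = 15
current_semesters_per_year = 2
proposed_semesters_per_year = 3

campus_weeks_now = weeks_per_semester * current_semesters_per_year       # 30 weeks of full use
campus_weeks_proposed = weeks_per_semester * proposed_semesters_per_year # 45 weeks

utilization_gain = campus_weeks_proposed / campus_weeks_now - 1
print(f"Teaching capacity rises by {utilization_gain:.0%}")

# An eight-semester degree taken at three semesters per year:
degree_semesters = 8
fastest_degree_years = degree_semesters / proposed_semesters_per_year
print(f"Fastest possible degree: {fastest_degree_years:.1f} years")
```

On these assumptions the campus gains 50% more teaching capacity from the same buildings, and a student in a hurry finishes an eight-semester degree in about 2.7 years.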
Granted, $150 million each year does not plug your entire budget shortfall — but it is a serious start that would be noticed by alumni and other donors. You will need to continue to rationalize campus operations and consolidate weaker units or athletic programs (remember Berkeley had a Mining program until a brave chancellor decided to face reality).
With everyone else in California, I wish you the very best of luck in your new position. Just don’t mess with the rugby team. Some cows really are sacred.
Coaches are often known as masters of technique, but really they teach life skills: teamwork and competition, ferocity and empathy, solidarity and resistance. Great coaches inspire performance that players cannot produce without them. They model leadership by keeping their players’ interests first and by knowing when to admonish, encourage, or reprimand. Many of us recall our best coaches as fondly as we do our best teachers. The best video example of this is here.
Robbie plays junior varsity rugby. Rugby is by some measures the fastest growing youth sport in the United States, fueled in part by growing concerns about the long term costs of its distant relative, gridiron football. US football branched off of rugby a century ago (the games were once so similar that in 1924 a US football team won the Olympic gold medal in rugby — something that is inconceivable today). The sport originated at the Rugby School in England and remains popular throughout the British Commonwealth. The movie Invictus drew the famous contrast with soccer, which it termed “a gentleman’s sport played by hooligans”, whereas rugby is “a hooligan’s sport played by gentlemen”. When you see a large, aggressive guy covered in mud refer to the referee as “sir”, you grasp the point.
Coaches have made Northern California the center of youth rugby in the United States. Rugby is the oldest sport on the University of California Berkeley campus where, starting in the 1970s, three coaches began producing world-class rugby teams (without ever resorting to recruiting scholarships). Doc Hudson was the first Cal rugby coach to take the game to a new level by building a globally competitive Cal side. Hudson recruited and coached a team that in 1967 and again in 1971 toured Australia and New Zealand and won more games than they lost (roughly the equivalent of a New Zealand baseball team doing that in the US today). The captain of the team was a tall kid named Ned Anderson, a lineman who played lock for Cal. Anderson also represented Cal on the all-University of California side that achieved a winning record on a similar international tour during the summer of 1970.
In 1975, at the age of 29, Anderson succeeded Doc Hudson as the fifth head rugby coach in Cal history. Anderson’s teams continued Cal’s winning rugby tradition through the ’70s and, in 1980, won the first official national collegiate championship, beating Air Force, 15-9. Under his leadership, the Bears also won three more titles from 1981 to 1983. In 1982, Anderson hired Jack Clark, one of his former players, as an assistant coach and in 1984, swapped jobs with him. Clark became the head coach and Anderson his assistant until 1985 and again from 2003-2008. Clark won an astonishing 22 more national titles, for a lifetime coaching record of 523-68-5 – an 88% win rate. For more than three decades, Cal has dominated rugby more than any team has ever dominated a US collegiate men’s sport, as the graph below illustrates.
Because many Cal alumni settle in Northern California and because rugby players are especially passionate about their sport, Northern California is blessed with outstanding rugby coaches. Ned Anderson is now in his sixties and walks with a slight stoop. When he sauntered onto the rugby pitch and quietly began coaching kids earlier this year, parents had no idea that the soft spoken gentleman in charge of our kids was a legendary coach and a member of the Cal Rugby Hall of Fame.
But the players figured it out immediately. Robbie would get in the car after practice and shake his head, saying “this new coach totally knows what he is doing. I don’t know who he is, but he is really good.” Anderson is not only coaching 16 year olds, but doing it so modestly that we had to Google him to learn anything about his past.
The results have been impressive. Robbie’s team was undefeated going into the Northern California playoffs this weekend in Sacramento. It takes nothing away from the players to note that they give Anderson a lot of credit for their success.
On Saturday, they played a very tough quarterfinal game. It was breezy but hot on the pitch and the opposing team was good. They scored first by crossing the goal line and touching the ball down (no longer required in football, but the origin of “touchdown”). Robbie’s team recovered and decisively won a hard-fought game. Unfortunately, Ned Anderson missed the victory due to an unmovable scheduling conflict. He would have been proud to see his boys come back to win and disappointed to see them lose the championship match the next day to a very athletic (and well-coached) team from Sierra Foothill. It was the team’s only loss the entire season. Had he been there, Anderson would likely have congratulated his boys on being the #2 team in Northern California and begun work right away to improve their record.
John Boyd was a legendary US fighter pilot during the Korean War who later became a fighter pilot instructor. He had a standing bet with his students: he would meet you in the air at 30,000 feet and you would get on his tail. He would reverse the positions and get you in his guns in 40 seconds or he would give you 40 dollars — about $375 today and a lot of money for an Air Force captain. Boyd challenged anyone and everyone including students, other instructors, and the best fighter pilots from around the world. Many took the challenge, but Boyd never lost. He was the best fighter pilot in the world and many believe the best ever.
As a Colonel, John Boyd developed a framework to help train combat fighter pilots that became known as the OODA Loop (for observe, orient, decide, and act). He argued that the key to tactical success in combat is to obscure your intentions from your opponent while you simultaneously clarify and anticipate his intentions. By operating at a faster tempo in rapidly changing conditions, you both inhibit your opponent from adapting or reacting to changes and suppress his awareness of your actions. You cause an opponent to over- or under-react to uncertainty, ambiguity, or confusion. In military parlance, adopted by many technology strategists, you get inside their OODA Loop.
As an example, Barack Obama has been well inside Mitt Romney’s OODA Loop for the past month on issues of gender equality. His statements have frequently caused Romney to react in ways that Obama has clearly anticipated and exploited. But that’s another post. Today’s question is, has Amazon penetrated Apple’s OODA Loop with respect to eBooks? It sure looks like it.
The story begins in 2001, when Amazon observes Apple’s iTunes business model. Amazon CEO Jeff Bezos must have been awed watching Steve Jobs turn digital music, which was free and widely pirated, into a money machine. Jobs integrated a device (iPods), a store (iTunes), and a wholesale deal with music labels for content under which they agreed to let him set the retail price of tracks (it would be $0.99). Within a few years, Apple was the world’s largest music retailer and record stores were a distant memory (although a very fond one). Steve Jobs had figured out how to compete with free — the first but not last technology leader to perform this trick.
Amazon copied Apple in building its book market. It built a device (Kindles), tied to its store and it bargained with book publishers for a wholesale deal for content. Like Jobs, Bezos insisted that publishers let him set the retail price, which he targeted at $9.99 per book. It is likely that publishers in some cases set wholesale prices higher than that and Bezos lost money on early book sales, but as the market grew, his pricing power grew with it and the full cost of each eBook declined as well. Bezos knew that when Apple entered the book market late, they would be forced to either a) stick to their traditional wholesale model, where he had a significant first mover advantage, knew more about online retailing, and held a brand advantage (do you really think “book” when you think iTunes?) or b) try to compete by attracting publishers and letting them control the product price. Bezos knew he would win either way.
Bezos also knew that “talent copies, genius steals” did not apply to Steve Jobs, who never copied anybody. He had a pretty good idea that Apple would try to convince publishers to adopt “agency pricing”, which, in contrast to wholesale pricing, gives the publisher the right to set the retail price and pays the retailer a commission. Jobs knew that agency pricing would attract publishers who resented price pressure from Amazon and that publishers backed by Apple would force Amazon to raise ebook prices. But only the largest publishers were strong enough to threaten to withdraw content from Amazon — most stuck with their wholesale pricing deals. Bezos raised prices reluctantly and selectively to keep large publishers from defecting. That’s why some ebooks now cost $14.99 on Amazon, while most cost $9.99.
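The per-copy economics of the two models can be sketched with illustrative numbers. The dollar figures below are assumptions for the sake of the example (though the $9.99 and $14.99 price points appear above), and the 30% commission matches Apple's widely reported agency terms:

```python
# Wholesale vs. agency pricing for a single ebook sale.
# All dollar figures are illustrative, not actual contract terms.

def wholesale_margin(wholesale_price: float, retail_price: float) -> float:
    """Wholesale model: the retailer pays the publisher a fixed
    wholesale price and sets the retail price itself -- even below cost."""
    return retail_price - wholesale_price

def agency_margin(retail_price: float, commission: float = 0.30) -> float:
    """Agency model: the publisher sets the retail price; the retailer
    keeps a fixed commission on every sale."""
    return retail_price * commission

# Amazon-style wholesale deal: pay (say) $12 wholesale, sell at $9.99.
print(round(wholesale_margin(12.00, 9.99), 2))   # -2.01: a loss taken to build share

# Apple-style agency deal on a $14.99 book at a 30% commission.
print(round(agency_margin(14.99), 2))            # 4.5: guaranteed margin, higher consumer price
```

The sketch shows why each side preferred its model: the wholesale retailer can subsidize prices to win the market, while the agency retailer never loses money on a sale but hands pricing power (and higher prices) to publishers.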
Better yet, Bezos also knew that the manner of Apple’s entry into the book market looked a lot like price-fixing. Price fixing rarely gets you into trouble when, as in Apple’s music or Amazon’s book terms, you force retail prices lower, but collaborative arrangements that lead to higher prices to consumers frequently incur the wrath of the Department of Justice Antitrust Division. Bezos also understood that Apple could fall afoul of laws against price-fixing, even though Amazon, not Apple, has an effective eBook monopoly. A monopoly is generally not illegal unless you use it to jack up prices.
So what does Amazon do the day the Department of Justice discloses its investigation into Apple’s alleged price fixing? It lowers eBook prices. Apple has an estimated 15% share of the eBook market (courtesy, one suspects, of simple iPad users who don’t know any better). That share is heading nowhere but down under the agency model, which is why Apple should give it up as part of a quick settlement with the DOJ. I would not want to be eBook strategist Eddy Cue at Apple this week.
But Apple’s is not the only OODA loop in Bezos’ crosshairs. He is also deeply inside the heads of publishers, whose cockpits are blaring with enemy radar lock-on sirens — the last sound many fighter pilots ever hear. As he often does, Clay Shirky said it best:
Publishing is not evolving. Publishing is going away. Because the word “publishing” means a cadre of professionals who are taking on the incredible difficulty and complexity and expense of making something public. That’s not a job anymore. That’s a button. There’s a button that says “publish,” and when you press it, it’s done.
Amazon has demonstrated a much greater ability than Apple to observe, orient, decide, and act to dominate the eBook market. This is the second sign of peak Apple in as many weeks and another indication that Jeff Bezos has taken over from Steve Jobs as the reigning strategist of the technology world. That said, the eBook market is not the most important one where these two companies will go head to head. That would be payments, because nobody else has 100 million credit cards on file. Bezos should think very hard about this one. Apple owns a big piece of mobile and has the moves to be on his tail with guns blazing in about 40 seconds.
Lenin famously bragged that “Capitalists will sell us the rope with which we will hang them.” It would surely gall him to learn that the art of destroying capitalists with their own products has been mastered not by a militant, vanguard-led proletariat but by entrepreneurial capitalists. It appears that even universities, finally, are getting the hang of it and learning to sow the seeds of their own destruction.
As an earlier post detailed, universities rarely go out of business. This is thanks to the magic of a three-part lock that secures their position and protects them from institutional challenge. For centuries, universities have enjoyed the exclusive right to allocate valuable social capital.
- Select talent. There is no evidence at all that Stanford, Harvard, or Berkeley do a better job of training undergraduates than Ohio State, Texas A&M, or the University of Florida. But they select far stronger students. If colleges were assigned students randomly, the value of “elite” degrees would plummet overnight. Harvard delivers 90% of its value the day it admits a student, although the market recognizes the value only when the student graduates. In a previous post, I described an experiment I once proposed to compare students admitted to Harvard Business School who attended with those admitted who did not attend. Others have since confirmed what we all know: Berkeley selects strong students, it does not create them. You aren’t smart because you went to Berkeley; you went to Berkeley because you were a certain kind of smart.
- Credential talent. College degrees confer professional access and mobility. Since mobility is “path dependent” (your current options are constrained by past decisions, even if past circumstances are no longer relevant), it matters enormously what choices a credential opens up for you. Take it from a factory worker who went to Harvard Business School.
- Signal social standing. Signaling is a cousin of credentialing. A credential is a specific signal to the labor market that a person completed a course of study and mastered a body of knowledge. But it is relevant mainly early in a career. The broader social and economic signal conferred by a university degree extends well beyond the time when the details of the course work are forgotten. An honors degree from the University of Maryland confers standing, especially in Baltimore, that extends well beyond the knowledge gained from a degree in European History. There are very few signals of social standing as powerful as a college degree, even though very little evidence suggests that this should be the case. Powerful alumni affiliations reinforce this effect.
It takes decades for universities to establish these privileged positions, which is why, with rare exceptions, the top-decile universities of fifty years ago are the top-decile universities today. This is partly due to the place university degrees have come to hold in our culture. It is an unquestioned (but economically threatened) article of faith among middle class families, including mine, that providing children access to higher education is essential to giving them a full range of life choices. Most people are disinclined to risk their kid’s future on education institutions with highly plausible training programs but unproven power to select, credential, and signal. (Yeah, I’m looking at you, Minerva Project.)
The paradox is that universities clearly add value (after all, college degree holders earn a million dollars more over their lifetimes than non degree holders and many economists declare it our single most competitive industry) but much of this “value” has nothing to do with learning, which is what employers presumably value. And if the credential cannot communicate what you know, then its signaling effects diminish. More accurate and effective approaches to credentialing and signaling become plausible. As detailed earlier, there are dozens of startups ramping up high quality educational programs that are either free or very low cost. But without credentials accepted by employers, all of the free online courses in the world will not translate into increased economic opportunity for graduates. To make these programs viable, they need a portable credential that is widely accepted by employers but not controlled by universities. Who would devise such a thing?
Universities, of course. As Kevin Carey describes in the current Chronicle of Higher Education, the future is full of badges, not unlike the ones you earned as a scout. UC Davis, together with the Mozilla Foundation and the MacArthur Foundation, is prototyping the development of digital “open badges” that validate “skill, quality, or interest”. Badges would be online and would allow a potential employer to access details of a student’s written work, test results, videos, etc. Open badges would communicate a great deal more than “BA in Economics from Sonoma State”, which is what employers get today. The article failed to record any sense of irony among the rope makers at the University of California.
Under the Mozilla Open Badge framework, a badge “is a symbol or indicator of an accomplishment, skill, quality or interest”. Badges support a wide variety of learning beyond traditional classrooms, including online courses, after-school programs, and work and life experiences. Badges not only signal achievement to peers, potential employers, educational institutions, and others; they also recognize and document informal learning. Fully developed, badges should help people transfer learning across jobs, industries, and places, and portray a richer, more complete profile of an individual’s professional strengths.
Mozilla expects there to be many types of badges. Some capture specific skills, something traditional degrees do quite poorly. Badges can support specialized and emerging fields that do not yet credential learners. They can document a much larger diversity of skills, social habits, and motivations. Badges potentially offer an alternative to traditional degrees: a way to enhance identity and reputation among peers, find peers and mentors with similar interests, and formalize the camaraderie, teams, and communities of practice that today often form around universities or professional associations.
Open digital badges, unlike the scouting ones, are valuable because of their metadata. They link to videos, documents, or testimonials demonstrating the work that led to earning the badge. They link to the issuing authority, which can be a school, a professional body, an international credentialing agency, a community of professional practice, a course, or a company. The supporting metadata reduces the risk of gaming and builds in a system of formal or implicit validation. In this system, a digital badge is backed by metadata that explains the badge, the issuer, the issue date, the criteria for earning the badge, the earner’s work or evidence behind the badge, and the current validity of the badge, which, unlike a college degree, can be set to expire.
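To make this concrete, the metadata behind a single badge might be sketched as follows. This is a minimal illustration of the model described above, not the actual Mozilla schema: the field names, issuer, and URLs are my assumptions for the sake of the example.

```python
from datetime import date

# Illustrative badge metadata, modeled on the description above.
# Issuer and URLs are hypothetical placeholders, not real endpoints.
badge = {
    "name": "Cloud-Based SQL Database Administration",
    "issuer": "example-professional-body.org",       # hypothetical issuer
    "issued_on": date(2012, 1, 15),
    "criteria_url": "https://example.org/criteria",  # what it took to earn
    "evidence_url": "https://example.org/evidence",  # the earner's actual work
    "expires": date(2014, 1, 15),  # unlike a degree, a badge can lapse
}

def is_valid(badge, today):
    """A badge remains valid until its (optional) expiry date."""
    expires = badge.get("expires")
    return expires is None or today <= expires
```

The expiry field is the interesting departure from a diploma: an employer checking `is_valid(badge, date(2013, 6, 1))` gets `True`, but the same check two years after expiry returns `False`, something no framed degree can express.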
Mozilla is creating an Open Badge Infrastructure to serve as the core technical scaffolding for a badge ecosystem that supports a multitude of issuers, badge earners, and badge displayers. This infrastructure includes the core repositories and management interfaces (each user’s Badge Backpack), as well as the specifications required to issue or display badges. Users can build a “Badge Backpack”, which serves as a repository for their digital badge data, accessible only to them, where they can view badges, set privacy controls, create groups, and share badges. Startups like BadgeStack, which gamifies badges, can build OBI-compliant sites and award apps.
Open badges are a promising idea and one deserving of investigation by companies, entrepreneurs, universities, and investors. They threaten traditional university credentials because they are:
- Granular. Employers care what you can do; they care relatively little about what you study, except as an indicator of what you can probably do. Badges are likely to reflect specific skills (“architecting social media databases” or “PHP”). Some may complement licensure (“palliative care nursing”) others may document skills in areas where little certification is available today (“Thai cooking” or “cloud-based SQL database administration”).
- Open. To work, badges need an approval process and an ontology that reflects a hierarchy of skills. A licensed vocational nurse may be able to earn a badge in discontinuing intravenous drips, but I’d prefer that the Thai cook obtain his or her LVN certification before tackling this skill. Once these structures and privacy controls are established, the technology for making badges machine readable, searchable, embeddable, and portable is relatively trivial.
- Able to evolve. The structure of badges itself needs to be open. Today “Thai Cooking” may be a sensible badge. Tomorrow it may be “Kitchen safety and peppers” (I worked with a cook who accidentally sent 50 diners choking and gasping out the door, hospitalizing two of them for lack of this knowledge). Badges that are ten years old will frequently fade in value as others rise. Badges create a market in skill certification — precisely what should replace university degrees.
- Cumulative. A single badge may or may not signal a great deal, but a sash full of badges accumulated over many years of effort makes you an Eagle Scout. Employers are very likely to value particular combinations of badges for specific jobs. Today, resumes or transcripts do a notoriously poor job of communicating these capabilities.
- Essential to reputation markets. Badges form core elements of emerging reputation marketplaces, where professionals collect, curate, and disseminate information that reflects their professional skills and achievements much as Fair-Isaac today distributes information about your credit history. For some positions (VP Marketing for a startup, for example), leadership history may matter more than a documented set of specific skills, but badges will still contribute to the overall picture.
Badges are not a sure thing. At first they will complement university degrees, not substitute for them. Badges face nontrivial privacy and trust issues — many of which Mozilla is addressing quite well. Still, they are an essential foundation for a portfolio that documents a range of professional skills, achievements, experiences, and relationships.
One of the largest challenges facing open badges is the cold-start problem: early adopters will have very few badges, and employers will be unfamiliar with them. These are the sort of market-development problems that entrepreneurs are good at conquering, although that makes them no less formidable. Mozilla may crack this market wide open.
Henry Blodget is the former head of Internet research at Merrill Lynch. (Background: once upon a time there was something called Internet research. And once upon a time there was something called Merrill Lynch.) NY Attorney General Eliot Spitzer prosecuted Blodget for touting stocks in public while sending emails disparaging those same securities. Spitzer was later thrown out of office after his foes revealed him to be “Client 9” in an expensive Wall St. prostitution ring. Both men are now exiled from their former professions and both have become media entrepreneurs.
Blodget now leads Business Insider and last night presented a very good piece of research on the growth of mobile to a conference in San Francisco. Towards the end, he described Draw Something, a game app that has created a buzz around here. Blodget argued that Draw Something, by a startup called OMGPOP, reveals just how explosive the combination of mobile, social, and games can be. He reminded us that Draw Something launched just six weeks ago.
- It has since been downloaded 20 million+ times. It is the #1 app in 79 countries. It has 12m daily users and generates $100,000 of revenue daily for a small team in New York.
- Users are highly engaged. They drew 3 drawings per second on Feb 12. Two weeks later, they drew 100 times that: 333 drawings per second. 10 days later, it was up to 3,000 drawings per second. Users bring in other users, who bring in even more users. This is what viral growth now looks like on global, Internet scale — and stories like this are about to become fairly common.
- The resulting growth rate is unhinged. It took AOL nine years to acquire 1 million users. It took Facebook nine months to earn its first million users. Draw Something did it in nine days.
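The drawings-per-second figures above imply a staggering compound growth rate. A quick back-of-the-envelope check, using only the numbers quoted above:

```python
# Implied constant daily growth factor between two traffic measurements.
def daily_growth(start, end, days):
    return (end / start) ** (1.0 / days)

# 3 drawings/sec on Feb 12 -> 333/sec fourteen days later:
early = daily_growth(3, 333, 14)     # about 1.40, i.e. roughly 40% per day
# 333/sec -> 3,000/sec over the next ten days:
later = daily_growth(333, 3000, 10)  # about 1.25, i.e. roughly 25% per day
```

Even the “slower” later stretch compounds to roughly 9x every ten days, which is how an app can go from launch to a million users in nine days.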
On March 16, TechCrunch ran a headline: “Zynga No Longer Has The Biggest Game On Facebook By Daily Users. OMGPOP Does.” Five days later, Zynga bought the startup (which had been trying to mix games, mobile, and social since 2007) for $180+ million. Early backers include Y Combinator, Marc Andreessen, Kevin Rose, and a bunch of other folks whose periscopes are slightly longer than yours or mine. Note that Zynga may have made a really smart acquisition or, quite likely, overpaid for a game with a short life span. We’ll know soon enough.
Here is the full Blodget presentation. You cannot make this stuff up…
Apple has quickly raised worker wages to address the highly publicized problems with working conditions in its supplier network. The decision protects Apple’s pristine brand and costs the company next to nothing. It cleverly exploits the high-minded principles and low-level economic literacy of those of us who are its devoted customers.
A series of well-researched articles by Charles Duhigg in the New York Times that included a long article on sweatshop subcontractors put Apple on the defensive. It appears that hard working people risk their lives to make sure that our iPads are shiny. Apple responded by asking the Fair Labor Association to investigate working conditions at its Chinese suppliers. Back home, Mike Daisey’s professional self-immolation magnified the controversy by forcing NPR to retract a series of assertions about Apple’s Chinese suppliers. This week, Apple CEO Tim Cook visited a huge Foxconn assembly plant in China as the FLA issued its report. Cook knows a lot about manufacturing both as a global supply chain expert and as a former factory worker.
Cook played the event perfectly. When the FLA reported that, shockingly, Chinese factory workers endure long hours for low pay, he promptly gave workers a raise by pledging to cut hours without cutting pay. The audience applauded, the curtain dropped, and the world returned to its apps.
The story displays a confidence and an ability to turn crisis into yet another advantage that makes me wonder whether we are approaching peak Apple. Apple raised Chinese wages not simply because it cares so much, but because it can afford to care so little. They know that their move causes bigger problems for their competitors than it does for them. Apple cares less about Chinese labor costs than Dell, HP, Google, and many others who produce lower margin products that use more Chinese labor. Apple spends about $8.25 per iPhone on Chinese labor – a completely irrelevant number in the lifetime economics of an iPhone. Had Chinese workers targeted Apple for a campaign to increase their wages, they would have chosen well.
Is Apple’s move good for Chinese workers? Sure — for some of them anyway. Apple’s decision does not mean that Chinese workers will necessarily take home more money — just that they will work fewer hours. This may not sit well with workers at Foxconn and other subcontractors, most of whom move from the countryside, live in company housing at the factory, and want to maximize their earnings, not minimize their working hours. Duhigg’s excellent reporting cited a factory where workers rioted when hours were reduced under pressure from a western customer, acknowledging:
“The other (workers) we talked to all seemed to regard it as a plus that the factory allowed them to work long hours. Indeed, some had sought out this factory precisely because it offered them the chance to work more.”
Does China benefit from this decision? Not necessarily. Manufacturing jobs are declining in China in favor of Vietnam and Cambodia (the great promise of the campaign this week by Nobel Peace Prize winner Aung San Suu Kyi is that Burma will attract urban factories to relieve the punishing life of rural peasants). It surprises many Americans to learn that manufacturing employment in China is actually declining. With the Apple settlement raising labor costs, peasants in adjacent countries can cheer: soon they too can trade in their hoes and hats for white coats and the opportunity to polish iPads. Nobody said economic progress was beautiful.
Apple’s decision to polish its “Think Different” brand, built on images of Gandhi and Cesar Chavez, is a tribute both to the company’s high moral tone and to its willingness to indulge the low economic literacy of its Western customers. Apple sells products to people who prefer a world in which every kid can go to college and work eight-hour days. Apple customers hate sweatshops, even those that are demonstrable vehicles of economic progress. We have a hard time acknowledging that countries in South Asia, sub-Saharan Africa, or Haiti demonstrably need more sweatshops. We commit what economist Harold Demsetz memorably called the Nirvana Fallacy: we compare the choices facing overseas workers to the alternatives we have, instead of to the alternatives they have.
As economist Eric Crampton notes:
Harold Demsetz warned in a beautiful piece of economic writing back in 1969 against what he called Nirvana Theorizing. He said there that we can’t say markets fail just because they deliver outcomes that we don’t like; rather, we have to compare the outcomes of markets to real-world achievable alternatives. We can’t just assume Nirvana on the other side of the scale. And, most of the arguments against sweatshops effectively assume Nirvana on the other side: if only we were to ban sweatshops or, more realistically, impose bans on the import of products produced by sweatshop labour, the employees would suddenly be freed to pursue fulfilling careers or to go and get that Bachelor’s in Cultural Studies that they’ve always wanted….. It’s only the evil sweatshops that are keeping them from achieving their dreams.
If only it were that easy. For proper comparative institutional analysis, we really have to look at how working in a sweatshop compares with what else these workers could be doing.
Inconveniently for the Nirvana view, thousands of people voluntarily line up outside Foxconn’s gates when factory jobs open up. Those clamoring to work at Foxconn know that factory work is tough and sometimes dangerous. But, like factory workers everywhere, they know that farm work is worse. The Times documented a horrific aluminum dust explosion in a Foxconn plant. This is not something to take lightly (my grandfather, uncle, and kid brother all died on the job or from occupational illness; occupational safety has never been an abstract problem to me), but the risks of factory work are nothing compared to the risks of illness (especially malaria), injury, or poisoning faced by Chinese peasants. Just about everyone who has tried both farm and factory prefers the latter. I worked in several factories; most of the jobs were boring and some were wildly unsafe (I thought for a while that Westinghouse had a “hire the handicapped” policy because so many of my co-workers were missing fingers or limbs. D’oh). But two days spent harvesting hay under idyllic conditions hurt me worse than any factory job I ever did. Former paper and aluminum mill worker Tim Cook also understands this extremely well.
Crampton cites recent work by Benjamin Powell on standards of living associated with sweatshop work showing that in most of the countries he studied, the average wages were equal to or better than the national average. In poor countries like Cambodia, Haiti, Nicaragua and Honduras, sweatshops paid twice the national average. This is why countries like Bangladesh, where 80% of the population lives on less than $2 per day, need more sweatshops, not fewer. Crampton reminds us of Nick Kristof’s reporting on workers in a garbage dump in Phnom Penh. Kristof gets it:
Another woman, Vath Sam Oeun, hopes her 10-year-old boy, scavenging beside her, grows up to get a factory job, partly because she has seen other children run over by garbage trucks. Her boy has never been to a doctor or a dentist, and last bathed when he was 2, so a sweatshop job by comparison would be far more pleasant and less dangerous.
“I’m glad that many Americans are repulsed by the idea of importing products made by barely paid, barely legal workers in dangerous factories. Yet sweatshops are only a symptom of poverty, not a cause, and banning them closes off one route out of poverty. At a time of tremendous economic distress and protectionist pressures, there’s a special danger that tighter labor standards will be used as an excuse to curb trade.
“When I defend sweatshops, people always ask me: But would you want to work in a sweatshop? No, of course not. But I would want even less to pull a rickshaw. In the hierarchy of jobs in poor countries, sweltering at a sewing machine isn’t the bottom.”
Tom Harkin, a progressive, pro-labor Senator from Iowa, introduced a bill in Congress in 1992 that understandably sought to prohibit the import of products made by children under age 15. In 1997, UNICEF investigated the effects of the Harkin Bill and found that even though the legislation had not taken effect, the mere threat had
“…panicked the garment industry of Bangladesh, 60 per cent of whose products — some $900 million in value — were exported to the US in 1994. Child workers, most of them girls, were summarily dismissed from the garment factories. A study sponsored by international organizations took the unusual step of tracing some of these children to see what happened to them after their dismissal. Some were found working in more hazardous situations, in unsafe workshops where they were paid less, or in prostitution.”
Once again, sweatshops are hardly the bottom of the heap — indeed the export shops targeted by Harkin are on average the better places to work. Most child labor is local production or rag picking, so if you ban exports, you may push some of the world’s most vulnerable children into the garbage dump, begging, child prostitution and starvation. This is not an argument for unfettered child labor or dangerous factories — just a note that exploitation is relative not absolute, protection is never free, and economic progress proceeds in steps not leaps.
Apple understands that paying slightly higher wages simultaneously pressures its competitors, appeals to western decency, and exploits an economically ill-considered aversion to sweatshop labor. But in technology, companies with more competitive advantages than they can possibly exploit should worry about hitting their peak. Having watched first Microsoft and now Google decline after amassing what once seemed to be insurmountable advantages, I think it is time to ask whether peak Apple is now in sight.
Imagine a market with incumbents whose core processes are unchanged since medieval times that is held together by huge federal subsidies and protected by a system of self-accreditation designed to exclude rivals. Imagine that the resulting enterprises exploited their monopoly power by overcharging customers and wasting the revenue that resulted on guaranteeing senior employees lifetime employment and discretionary funds, on massively expensive professional sports teams, and on protecting an overstaffed and comically inefficient bureaucracy worthy of the Indian railroads. Who would put up with such a mess?
Welcome to American colleges and universities, which are both the envy of the world and ripe for disruption. It’s a big business (about $350 billion in the US alone) and a really soft target. It is, after all, run by tenured scholars whose idea of competition is a snarky jibe in the faculty lounge. The dons have allowed their costs to rise not only faster than family incomes, but faster than health care costs, which ain’t easy. That they have lasted this long is due to the monopoly they enjoy on certifying talent. As Kevin Carey noted in a recent New Republic article,
The historic stability of higher education is remarkable. As former University of California President Clark Kerr once observed, the 85 human institutions that have survived in recognizable form for the last 500 years include the Catholic Church, a few Swiss cantons, the Parliaments of Iceland and the Isle of Man, and about 70 universities. The occasional small liberal arts school goes under, and many public universities are suffering budget cuts, but as a rule, colleges are forever.
Small wonder that thousands of startups are now focusing on the market for higher education. Even the guy who discovered disruption, Clayton Christensen, has declared that online technologies will thoroughly disrupt education at all levels, predicting that half of all K-12 classes will be taught online by 2019.
During the past five years, online higher education has gone mainstream. The Sloan Foundation estimates that more than 30% of all enrolled college students, some six million people, participated in online learning at accredited U.S. colleges and universities in 2011, and that the U.S. market for online higher education grew 12-14 percent annually between 2004 and 2009.
Many educators are realizing that the explosion of online education is not simply due to its lower cost; it is often higher quality as well. Sometimes this is because of dramatically higher investment in course and instructor development. Christensen notes that the largely online University of Phoenix spends about $200 million each year developing online teachers and highlights a key difference with traditional universities: “…Harvard defines research as creating new knowledge, while The University of Phoenix defines it as finding new ways to provide knowledge. It blows the socks off of us in their ability to teach so well.”
Online education is quickly killing the in-class lecture, since recorded lectures have obvious advantages. Students can watch them when they are ready — after they are off work or when the kids are asleep. They can replay the confusing bits or skip the obvious parts. Most important, however, is that the lectures themselves are more likely to be delivered by world-class teachers like Norman Nemrow, whose online accounting course has been taken by several hundred thousand students, or by Walter Lewin, the MIT physicist whose lectures are shown on television. Supposedly over five million people have taken his intro to physics course. (Watch his promo reel below to see why. What? Your professor did not have a promo reel?)
It is not only lectures that fare better online. Instructors in online classes can measure outcomes and tailor the course to the needs of each student. Modern learning management systems provide live seminars with multi-location live video, backchat, social media, and many other capabilities not available in a classroom. Quizzes can be graded instantly so that both faculty and students get feedback fast enough to change course. Algorithms distill questions from thousands of students so that they can be answered either live or off-line. Students can undertake projects online with “classmates” who have never been on the same campus — or even in the same country.
This is a time of vast experimentation with online education technologies. Two years ago, the Khan Academy began to attract huge notice as a self-tutoring tool based on the brief lectures of one talented teacher. A year ago, 2Tor closed a large Series C and got very serious about providing major universities with technology, marketing, and course development assistance. A month ago, Google’s self-driving car maven Sebastian Thrun gave the talk at DLD in Munich that launched Udacity after 160,000 students from around the world completed his Stanford-based online computer science course (268 students achieved perfect scores on all the quizzes). In October, Knewton, an education technology startup, raised $33 million in its 4th round of funding to roll out its adaptive online learning platform. Earlier this year, Apple launched a suite of authoring and course scheduling tools to allow universities to move content to iTunes University. Only yesterday ShowMe launched its 2.0 platform that takes the Khan Academy model and makes it social — anyone can use the platform to teach anything.
Universities are developing their own online education initiatives, often plagued by a terrifying thought: what if online education is just another form of digital media? They know full well that as books, movies, and music moved online, few incumbents survived. In each case:
- Content was disaggregated and mashed. Just as record albums were broken into songs, ringtones, and clips, educational content is unlikely to remain entirely within current disciplines or courses. Literature will not remain separated from history, nor calculus from chemistry. As technology makes it easier to recombine and repurpose courseware, it may become possible for two students to complete the same course without confronting the same content in the same sequence or manner. New forms of learning will produce certifications not limited to degrees, concentrations, or even courses.
- Engagement became social. Digital movies benefitted Hollywood much less than YouTube and Netflix. It should not surprise us to see more learning become self-paced, socially certified, and delivered outside of colleges and universities. Startups may increase the demand for formal education, but they could also substitute for it just as many of the needs once filled by campus fraternities or alumni associations are now met by online social networks.
- Value shifted from content creators to aggregators. Book publishers and music labels learned that aggregators of content (Amazon and iTunes) hold a lot of cards. Will universities aggregate and distribute high quality educational content regardless of its origin? Or will universities, like film studios, attempt to remain relevant by offering exclusive, premium-priced, high-quality, proprietary content protected through careful online distribution and syndication? Top universities are betting on the Hollywood model, which is not only under sustained attack, but presumes producers who control their IP. Universities, in contrast, rarely limit the ability of their faculty to sell lectures and other courseware to the highest bidder, even though the university paid the professor to produce the content. In no other industry is such theft conceivable — a fact that Udacity will not be the last to exploit.
- The product went global. Books, movies, and music are licensed or sold in tightly controlled, nationally bounded markets, but digital media is naturally global because there are far fewer natural distribution barriers. This means more customers, which is why universities are now lusting after talented and wealthy Indian and Chinese students who are (at the moment anyway) willing to pay US-type tuition for a degree from a globally prestigious institution.
- Prices fell as comparison shopping became easier. It appears that the revenue-optimal price for eBooks is between $2 and $5, depending on the author and in some cases the publisher. For songs it is between $1 and $2, forcing record labels and publishers to seek entirely different business models to monetize their content. As a result, many media markets actually shrank as they went online — if you only measure product sales. (In music, for example, the market is about the same size, because concerts and merchandise make up for losses in record sales.)
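The idea of a revenue-optimal price is easy to illustrate. A hedged sketch with a hypothetical linear demand curve (the numbers here are invented for illustration, not drawn from actual eBook or music data):

```python
# Hypothetical demand curve: units sold falls linearly as price rises.
def revenue(price, intercept=10_000, slope=1_000):
    units = max(0, intercept - slope * price)
    return price * units

# Scanning whole-dollar prices shows revenue peaking in the middle of
# the curve: a high price sells too few copies, a low price gives too
# many away. With these assumed numbers, the peak is at $5.
best_price = max(range(0, 11), key=revenue)
```

When digital distribution makes demand this price-sensitive, the revenue-maximizing price can land far below the old physical price point, which is exactly why sellers found themselves optimizing toward a few dollars per unit.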
The response of universities to the rise of online education is like the response of Barnes and Noble to online bookselling. Faced with the rise of Amazon.com in the 1990s, the chain store simply created barnesandnoble.com. When Amazon launched the Kindle, it launched the Nook and merchandised it in its increasingly irrelevant bookstores. But the winner of this contest will of course be the company that is not forced to carry the cost of several hundred bookstores. Open Yale, MIT’s OpenCourseWare and MITx courses, Stanford’s Massive Open Online Courses including Coursera, and many others like them all share the Barnes and Noble problem: they need to price their offerings to pay for extraordinarily high fixed-cost institutions. Their disruptors do not.
Barnes and Noble charges customers for a wide range of activities unrelated to book purchases. It designed many of its stores as community centers where authors could meet readers. It built fun sections for kids to discover books. It integrated Starbucks into many locations. But books are simply not going to be sold in stores much longer, so these activities added more cost than value and ended up making the problem worse. Likewise, many institutions of higher education support multiple activities with tuition: research, sports, socialization, teaching, and credentialing. Online education exposes the fault lines between these different businesses, just as Amazon did with Barnes and Noble.
- Research. Top schools recruit faculty based on their ability to contribute new knowledge to their field, not on their ability to teach. This is terrific for graduate students, who apprentice and occasionally indenture themselves to senior faculty, but suboptimal for undergraduates, because the correlation between insightful research and capable undergraduate teaching is somewhere between weak and negative. Once undergraduates can receive a higher quality education at a lower cost by studying online, many will do so. Once Amazon made books cheaper, nobody wanted to pay for those kids’ play areas — not even people who liked them.
- Sports. That giant sucking sound is money draining from university budgets to support massively wasteful professional sports programs — while managing to abuse college athletes in the process. Intercollegiate sports are fine. Division 1 football and basketball are a scandal — and both universities and the NCAA know it.
- Teaching. Teaching and learning are rapidly becoming another form of online, interactive social media. Some online learning will doubtless be indistinguishable from games. This part of what a university does will be rapidly mashed, commodified, and redistributed, just as books and movies have been. Universities often claim that they make use of these online technologies in “hybrid” classrooms. This is like selling Nooks in bookstores: the customers who buy them will never come back.
- Socialization. Residential undergraduate programs deliver to young people a group of peers and the experience of learning independently with them. Some of what the university provides is in loco parentis — a structured environment for 18-22 year olds to transition to self-sufficiency as they learn. The question is how much families will pay for this service. As high quality online education becomes universally available, middle-class families will be very tempted to forgo residential colleges for their kids. Now that families cannot enhance their incomes by working longer hours, sending a second adult to work, or borrowing easy money against overvalued homes, they will be willing to cut back on college expenses if doing so does not compromise the quality of their children’s education.
- Credentialing. Credentials are necessary for employers and future educational institutions to distinguish between similar candidates. Many markets with this problem rely on brands or other signaling effects (watch how you select wine next time you are confronted with dozens of plausible choices). University degrees emerged long ago as a critical signal of professional capability independent of what the degree holder knows. Part of this is because of selection effects, as Malcolm Gladwell explained some years ago:
Social scientists distinguish between…treatment effects and selection effects. The Marine Corps, for instance, is largely a treatment-effect institution. It doesn’t have an enormous admissions office, grading applicants along four separate dimensions of toughness and intelligence. It’s confident that the experience of undergoing Marine Corps basic training will turn you into a formidable soldier. A modeling agency, by contrast, is a selection-effect institution. You don’t become beautiful by signing up with an agency. You get signed up by an agency because you’re beautiful.
Top-tier universities produce top graduates by accepting applicants who are very likely to succeed — they trade heavily on selection effects. I once published a proposal in the campus newspaper challenging the Dean of the Harvard Business School to compare people who were admitted to HBS but did not attend with those who were admitted and did attend, to see whether the school was adding value or simply selecting people who were going to succeed anyway. He showed little enthusiasm for my research proposal, although other scholars (including Alan Krueger, who now chairs Obama’s Council of Economic Advisers) have since documented these selection effects.
Treatment effects also create signals, whether anybody learns anything or not. Imagine that you have two job candidates who 25 years earlier attended the same school and took the same courses. One candidate failed every course and did not graduate. The other got straight As in the courses and graduated with honors, but has forgotten 100% of the material. Neither currently knows anything that they learned in college. But if this is all the information you had, you would hire the successful student — you’d be crazy not to. You have a signal that this person is capable of hard work and learning, even if they don’t retain it 25 years later. In labor markets, signaling matters a lot and university degrees are powerful signals. Online education will not quickly change this — although the creation of alternative credentialing mechanisms may.
Who decides what signal a degree sends? Employers do. If Google or Goldman begin hiring software engineers or managers who received their professional degrees online, the value of elite professional degrees will come into question. As a future post will detail, this is very likely to happen, since the knowledge and skill imparted by most professional degree programs can more easily be standardized, sequenced, and captured on standardized tests than undergraduate education can. Universities are rushing to offer professional degrees online because students are willing to pay high tuition to finance a degree that will significantly increase their earnings. If competition from online professional degree programs pressures schools to reduce either tuition or admissions requirements, universities will see their professional degree cash cow led to slaughter. For this reason, better known universities hope fervently that dozens of competing online degree programs will emerge, saturate the market, and preserve the signaling value of the premium degree they offer.
As high quality education moves online, it will kill the weakest first: those schools that charge more and deliver less. Elite research universities will be forced to trade heavily on their brand and the signaling value of their credential, which may become easier as online programs proliferate and education markets become even more global. The experience of going to college may never be reducible to interactive social media — but classroom teaching and learning surely is.
Perhaps the only thing more depressing than the sight of hundreds of students and faculty on a “99 Mile March” to defend California’s system of higher education from budget cuts is the failure of the state legislature to vigorously defend the engine of the state’s wealth and economic mobility. The protests now underway in Sacramento demonstrate why California’s system of higher education is far too important to entrust to faculty, students, or legislators.
Any group wishing to challenge cuts in public spending that benefit it directly has a political problem if taxpayers see costs without benefits. So a group wishing to protect itself from the cuts needs to answer two questions: Who else benefits from the spending? and How can we increase our public contributions? If the marchers trying to channel their inner Cesar Chavez had taken the first question seriously, they would have built a coalition. Had they been courageous in their answer to the second, they would have built a movement. Chavez, not incidentally, understood this and knew that those who ask only what their country can do for them produce the political equivalent of a large yawn.
- The California Community Colleges system is the largest higher education system in the nation. Its 112 colleges provide more than 2.9 million students with basic skills education, workforce training, and preparation for four-year universities. They attract working students and ambitious immigrants in very large numbers. I used my local community college to learn blueprint reading and geometric dimensioning and tolerancing to become a machinist; I also took classes in physics because they were exceptionally good. Almost 60% of CSU graduates and 30% of UC graduates originally transferred from a California community college.
- California State University has 23 campuses, some 427,000 students, and 44,000 faculty and staff. It is the largest, most diverse, and one of the most affordable university systems in the country. CSU graduates 44% of the life sciences college graduates in California, more than 60% of all of the state's teachers, including 9 out of 10 of California's public school educators, and 45% of the state's computer and electronic engineers. CSU is an outstanding resource for underserved populations, awarding more than half of the bachelor's degrees earned by African American, Latino, and Native American students in California.
- The University of California is a top-tier research university and an economic catalyst for the state. UC’s ten campuses enroll more than 220,000 students and employ more than 170,000 faculty and staff. Its three national labs manage hundreds of millions of dollars of state and federal research. A recent study estimated that UC generates $46.3 billion in annual economic activity for California, not counting benefits such as technology startups that grow directly out of university research.
Berkeley remains the premier UC campus and an amazing public institution, despite its bottomless capacity for self-parody (disclosure: my wife is a dean at Cal).
- A National Research Council analysis of U.S. universities concluded that UC Berkeley has the largest number of highly ranked graduate programs in the country. The analysis ranked doctoral programs within a range (such as between 1st and 5th), and found that 48 out of the 52 Berkeley programs assessed ranked within the top 10 nationally.
- Over the past decade (2000-2009), the National Science Foundation awarded more Graduate Research Fellowships to UC Berkeley students than to those of any other university (MIT was 2nd; Stanford 3rd; Harvard 4th).
- 135 Berkeley faculty are members of the prestigious National Academy of Sciences, exceeded only by Harvard with 150. 91 are members of the National Academy of Engineering, exceeded only by MIT with 105. Members of the American Philosophical Society and the American Academy of Arts and Sciences, and winners of the National Medal of Science, teach overwhelmingly at three schools — Berkeley, Harvard, and Stanford. Only one of the three is a public institution.
- Berkeley's single proudest claim, however, ahead even of its 24 national rugby championships, is that it enrolls more students on Pell Grants than all of the Ivy League schools put together. A Pell Grant is a scholarship based on financial need. By serving academically qualified students on Pell Grants, Berkeley ensures that smart, hard-working kids from low-income families can get a top-flight education.
The combined effects of this system on the California economy are astonishing. Community college students who earned a vocational degree or certificate in 2003-2004 saw their wages jump from $25,856 to $57,594 three years after earning their degree, an increase of over 100 percent. Census data indicate that an average college graduate earns a million dollars more during their working lifetime than a high school graduate. A masters degree adds another $400,000 and a doctorate another million on top of that. A professional degree adds another million on top of that (average lifetime earnings are $1.2 million for those with only a high school diploma, and $2.1, $2.5, $3.4, and $4.4 million for BA, masters, PhD, and professional degrees, respectively). Depending on what you count and how you count it, a dollar invested in California higher education returns $3, $5, or $14, but it doesn't really matter — it's a great public investment. And because many of the benefits accrue to individuals, not all of the investment needs to be public; students can and should bear some of the cost, so long as financing is available with repayment schemes adjusted for students who pursue lower paying occupations.
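For readers who like to check the arithmetic, the degree premiums and the community-college wage jump quoted above can be reproduced directly from the figures in the paragraph (a quick back-of-envelope sketch; every number comes from the paragraph, none is new data):

```python
# Average lifetime earnings by highest degree, in millions of dollars
# (figures as quoted in the paragraph above).
lifetime = {
    "high school": 1.2,
    "bachelor's": 2.1,
    "master's": 2.5,
    "doctorate": 3.4,
    "professional": 4.4,
}

# Marginal premium of each degree over the one before it
levels = list(lifetime)
for prev, cur in zip(levels, levels[1:]):
    premium = lifetime[cur] - lifetime[prev]
    print(f"{cur} over {prev}: ${premium:.1f}M")

# The vocational-degree wage jump for community college students
before, after = 25_856, 57_594
print(f"wage gain three years out: {(after - before) / before:.0%}")
```

The marginal premiums ($0.9M, $0.4M, $0.9M, $1.0M) match the "another $400,000 … another million" sentence, and the wage jump works out to well over the "100 percent" the text claims.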
This system of higher education is the goose whose eggs make California the Golden State. Playing the part of Aesop's short-sighted fool, who slaughters the creature for short-term gain, is California's legendary and dysfunctional state legislature. This august body last year cut $500 million from UC, $500 million from CSU, and $400 million from the California Community Colleges to close the state's 2011-12 budget deficit. California is not alone in this trend: between 2002 and 2010, states cut funds for public research universities by 20 percent in constant dollars, according to a report issued by the National Science Foundation.
Every Californian should be alarmed at the proposed "trigger cuts," which would slash another $300 million unless taxpayers pass a special ballot initiative to increase state taxes — which are already the nation's highest. Where does all the money go? Pension spending aside (and it cannot be put aside for long), it goes to health care and to prisons. California is the only state that spends more on prisons ($10.7 billion in the current budget) than on higher education ($9.8 billion). Both are dwarfed by the state's $42 billion Health and Human Services budget.
How Can We Contribute?
On these facts alone, students and faculty should be able to rally public support, not just each other. Serious and sustained public support, however, requires more. What, precisely, are students and faculty offering to contribute to make increased public investment even more compelling? Two ideas could completely change the conversation.
1. Use campuses year round.
The University of California and CSU teach either three ten-week quarters or two fifteen-week semesters each year, meaning that 33 large and expensive campuses are fully utilized less than 60% of the time. Adding a fourth quarter or a third semester would produce revenue that would go a very long way toward paying for fixed assets, including facilities, administration, admissions, counseling, health care, technology, and similar services that are staffed year round, even though paying students attend only part of the year.
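The utilization figure is simple calendar arithmetic, and it is worth seeing how much a fourth quarter or third semester would move it (a sketch using only the calendar figures in the paragraph, not official enrollment data):

```python
# Both academic calendars put students on campus 30 weeks a year.
weeks_per_year = 52
quarter_weeks = 3 * 10   # three ten-week quarters
semester_weeks = 2 * 15  # two fifteen-week semesters

print(f"quarter calendar:  {quarter_weeks / weeks_per_year:.0%} of the year")
print(f"semester calendar: {semester_weeks / weeks_per_year:.0%} of the year")

# Utilization after adding a fourth quarter or a third semester
print(f"four quarters:   {4 * 10 / weeks_per_year:.0%}")
print(f"three semesters: {3 * 15 / weeks_per_year:.0%}")
```

Thirty weeks out of fifty-two is about 58% — the "less than 60%" in the text — and a fourth quarter or third semester pushes utilization to roughly 77% or 87% respectively.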
Even if campuses completely eliminated so-called "Summer Session" revenue and did not alter faculty teaching loads and research expectations (meaning that new faculty would be hired or current ones paid more in order to meet increased teaching demand), year-round operations would generate hundreds of millions of dollars for the system as a whole (and based on limited public data on UC, would more than make up for the loss of state funding). Without reducing degree requirements or creating a cut-rate three-year undergraduate degree, this approach would enable a student who undertook full-time studies to complete the work for an undergraduate degree in three years or less and would substantially relieve pressure for additional tuition increases.
Faculty should embrace year-round operations, which need not reduce time for research at UC and would offer all faculty the option to earn additional teaching income. They should rally in support of an approach that would enable most departments to add new faculty positions, which will simply not happen otherwise. Year-round campus operations would be very quickly copied by other top-flight universities if UC and Cal State took the lead. As financial stewards of state educational resources, university leaders should admit that operating campuses 30 weeks per year (a schedule originally designed to make sure students made it home to help with the harvest) is economically wasteful, socially indulgent, and politically untenable.
2. Repay a public investment with public service.
As a candidate, Barack Obama proposed a national service program that would serve as "a central cause of my presidency." It hasn't, but it should. He proposed to spend $3.5 billion to expand AmeriCorps to 250,000 volunteers and to double the size of the Peace Corps. He envisioned tuition tax credits for college students who performed community service while in school. His plan supported promising nonprofit community startups, expanded the GI Bill, and created a Classroom Corps to help teachers and students in high-need and underserved schools. Obama proposed a Health Corps to improve public health information and outreach to areas with inadequate health systems, such as rural areas and inner cities, and a Clean Energy Corps to promote weatherization, renewable energy projects, pollution cleanup, tree planting, and park maintenance. Other volunteers would serve veterans or help communities with disaster preparation.
Not only is national service a good idea, but the politics of enacting it are not nearly as awful as they are on many other issues (recall that the GI Bill, one of the most popular and successful social programs in US history, was initiated by the American Legion together with FDR). In this spirit, the federal government could, for a modest amount of money, guarantee loans and offer tuition tax credits proportional to the public service performed. By accepting only freshman applicants who have performed at least one year of community service, California could set an example for the rest of the country and begin the long overdue process of rebuilding public support for its activities.
One of the faculty spokespersons for the current band of protesters is a leftist professor, a tenured friend who for decades has rarely missed a good demonstration. Not long ago, I asked him what he believed today that he did not believe in 1968, when his political habits were formed. He thought for a moment before declaring that "1968 was a very good year." But he and his campus comrades appear to have forgotten the lessons of '68: that lasting political change requires public education, coalition building, and a commitment to public service. Why do I suspect that the last people to figure this out will be those closest to the problem?
On New Year’s Day, a friend mentioned that Frank Bardacke had published his long-anticipated history of the rise and fall of Cesar Chavez and the United Farmworkers. It was worth the wait, he assured me and “completely stunning. Just get it and read it. You won’t put it down.”
He was right.
Bardacke, a respected labor activist and educator based in Watsonville, California, was first mentioned in this blog six years ago in connection with his research on Cesar Chavez. Like Bill Gates and Mark Zuckerberg, he dropped out of Harvard after his freshman year and moved west to change the world. Unlike them, he joined the Berkeley Free Speech Movement and has had an abiding interest in radical politics ever since. In the early 70s, I traveled to China with Bardacke to get a first-hand look at Mao's proletarian dictatorship. Frank admired all things proletarian; I feared the dictators. Bardacke often views the world through a different template than I do, but I have learned a lot from him and continue to have enormous respect for his views.
Bardacke became a farmworker – one of a handful of Anglos and surely the only former Harvard student to work the celery fields. He became fluent in Spanish and formed friendships with many of the union staff and farmworkers who appear in his book. He spent more than a decade interviewing every major participant in the drama, reading every known book on the farmworkers and scouring every archive. He received help in managing this massive project from faculty in history and politics at nearby UC Santa Cruz.
The result, Trampling Out the Vintage: Cesar Chavez and the Two Souls of the United Farmworkers, is the most complete account yet of the rise and fall of the UFW. It is also an epic, Shakespearean drama with all of the elements of a Hollywood blockbuster. The pitch meeting would be surreal:
"OK, picture this: we have a conservative Catholic who fasts and marches like he's Gandhi. He courts progressive clerics and hires liberal Jews and alienated Anglos to mobilize immigrant Mexicans and Filipinos to fight Slavic and Italian growers. At first David slays Goliath, but then he morphs into King Lear and destroys his newly built kingdom amidst slaughter and recrimination. We've got side plot romances between devotees who work for $5/week and bad food trying to raise farmworker pay. We've got violent Teamster, UFW, and grower thugs straight out of the Sopranos. We've got a certifiably batshit human potential guru who wreaks havoc getting everyone to criticize everyone else. And under the carpet here somewhere, we may even have communists trying to advance a proletarian revolution without a proletariat. How can we miss?"
Astonishingly, it is a true story, and Bardacke delivers it with intelligence and compassion. Unique among labor historians, he grounds his analysis in "the work itself", with brilliant, memorable descriptions of how different stages of production for different crops in different regions of California all affect the ability and willingness of different crews to self-organize. He describes clearly why organizing was often sustained by the tight-knit, highly skilled lechugueros or the celery cutters, not the garlic or asparagus workers or those in ladder crops. He describes the skill and endurance that the work requires, introduces leaders who arise from various crews, and captures in fine detail how they interact with a union that was built on a very different set of principles from farm work. In a decade spent organizing waiters, housekeepers, nurses, bartenders, machinists, cannery workers, and assembly workers, I observed precisely these differences. The work itself shapes our propensity to organize. Bardacke is the first writer to apply this principle to the fields, and he does so with a deep understanding and compassion for the work.
Bringing an existing union into a workplace is an act of industrial combat not for the faint of heart — but starting a new union from scratch is a herculean task that almost always fails. I started a company that has lasted more than a decade, a public agency that lasted three years, and a union (United Espresso Workers – I was a bit early) that lasted all of three weeks. With the proud exception of the United Farmworkers, I cannot think of a single independent union formed in the United States in the past 50 years that was not sponsored and controlled by an incumbent union (I can think of several that tried and died – but none who made it).
This was not always true — new unions once spawned regularly in the US. There are many reasons for the change, but the lack of competition between unions has positioned them nicely for extinction. Organizations evolve through the mutation, variation, and selection that competition always produces. The labor movement stopped growing the instant the AFL joined with the CIO and prohibited unions from competing with each other. When two teachers unions competed, both grew. The instant the Teamsters stopped raiding the UFW, growth stopped. I hated the Teamsters (who were kicked out of the AFL-CIO for corruption and are not subject to the noncompete provisions), and I took a nasty beating from them once, but like sharks or wolves, they have their place in the ecosystem. (I am aware of no union leader who agrees with this view, by the way. Most feel that they have all the competition they can handle from employers.)
But for a brief moment following the civil rights movement in the 1960s, a new labor union arose in the United States and in the least likely place. If you had asked in 1960 where in the economy a new union might appear, you would never have selected the farmworkers of California. Organizers prefer workers who are tied to one place and to one employer, not workers who are seasonal and often itinerant. Probably wrongly, organizers prefer workers who are covered by labor laws, which had always exempted farmworkers. Organizers like English-speaking Americans, not Tagalog or Spanish-speaking immigrants or Braceros who are tolerated for a season then ushered back to Mexico. A dozen or so failed efforts by farmworkers to form agricultural unions seemed to validate Marx and Lenin’s belief that workers would organize once they were forced into factories and worked for a single employer.
Bardacke demonstrates that Cesar Chavez succeeded in organizing farmworkers because he was, at heart, a brilliant and hard-working Alinsky-trained community organizer. As a community organizer, Chavez pioneered an enormous innovation that had the potential to transform labor organizing: he mastered the secondary boycott (illegal for most workers under federal labor law, which thoughtfully excludes farmworkers). Chavez tirelessly organized enormous boycott operations against grapes, lettuce, and major retailers including Safeway.
Farmworker boycotts were the Occupy movement of the 70s and 80s – a way for college students, community activists, and middle class young people to participate directly in the tough work of social change. And to the credit of Chavez's brilliant leadership, it worked magnificently: faced with effective boycotts, growers raised wages and improved working conditions, and politicians begged the army of grass-roots Chavistas to help register voters and turn them out on election day. The UFW became a powerful force for social change.
But the UFW was only briefly a powerful labor union. Bardacke correctly diagnoses the boycott as creating a formidable tension within the UFW. He frames the tension between labor and boycott organizing as a struggle between the "two souls" of the UFW. The metaphor is fraught. As Bardacke demonstrates, the UFW collapses not because it has two souls, but because none of its activities were organized, financed, or led in a manner that enabled them to grow. The problem is not that community organizing is a distraction – most American labor unions lack a community service organization and are much the weaker for it. This is tragic: having discovered and refined one of the few recent innovations in union organizing, Chavez cannot let it grow. Instead, he strangles his own child.
One of the heroes of Bardacke's book is Marshall Ganz, one of America's most innovative labor organizers. Ganz also dropped out of Harvard, but moved south to organize for civil rights before heading west. After his exile from the UFW, Ganz helped the Silicon Valley Central Labor Council build a powerful neighborhood-based political organization for the 1984 elections. He was terrific at posing fundamental questions – and at directing me and others to writers and thinkers who helped answer them. In 1984 he urged me to read, of all things, a business book, In Search of Excellence. I quickly developed an appetite for business writing, decided to get trained in it, and ended up working with the book's authors. Marshall returned to Harvard, got his degree after a 28-year hiatus, and now teaches at the Kennedy School. (His version of the UFW story, told in Why David Sometimes Wins, is a fine companion volume. It suffers for being his PhD dissertation and dwells more deeply on theories of organizing and less on the dynamics of local struggles.)
So let’s ask a Marshall Ganz-like question: what does it take for an organization to grow successfully? Venture capitalists, a group not deeply concerned with the welfare of those who produce their salads, obsess about this question. There are at least as many answers as there are VCs, but common elements include:
- A big market. If there is not substantial demand for the product or service an organization produces, the organization cannot get very big.
- Positive unit economics. If serving one more person imposes more cost on the organization than it generates in revenue, then growth makes no economic sense and the organization will depend for growth on funding from charity or government. Anyone can sell a dime for a nickel; selling a nickel for a dime means that an organization has to add at least a nickel’s worth of value if it wants to grow.
- Customer or member acquisition costs that scale. Every organization has a cost of acquiring a customer that must be repaid over the lifetime of that customer or member. Smart organizations exhibit declining COA: the cost of acquiring each incremental customer declines with scale. Very smart organizations (and effective social movements) are viral: COA approaches zero as current participants recruit new ones. See Facebook, Google, or Arab Spring.
- Leadership. Growth is very, very demanding on an organization. Everyone in a fast-growing organization has to grow with it: jobs change radically every few months. Not everyone grows at the same pace, so leaders must recruit furiously, communicate direction and values continually, promote and replace people regularly, and test what works all the time. It is stressful and a lot of fun – ask anyone who has been involved in a fast-growing company, boycott, strike, or organizing campaign.
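The first three tests in the list reduce to one inequality: a new member's lifetime margin must exceed the cost of acquiring them. A minimal sketch, with purely illustrative numbers (none of these figures come from the post, the UFW, or any real organization):

```python
def can_grow(revenue_per_member, cost_to_serve, acquisition_cost,
             expected_membership_years):
    """An organization can grow on its own economics only if each new
    member repays both the cost of serving them and the cost of
    acquiring them over their expected lifetime."""
    lifetime_margin = (revenue_per_member - cost_to_serve) * expected_membership_years
    return lifetime_margin > acquisition_cost

# Selling a dime for a nickel: every new member deepens the hole,
# no matter how cheap acquisition is.
assert not can_grow(revenue_per_member=5, cost_to_serve=10,
                    acquisition_cost=1, expected_membership_years=3)

# Selling a nickel for a dime: growth pays for itself, provided the
# acquisition cost stays below the lifetime margin.
assert can_grow(revenue_per_member=10, cost_to_serve=5,
                acquisition_cost=12, expected_membership_years=3)
```

Declining or viral acquisition costs shift the inequality further in growth's favor: as `acquisition_cost` approaches zero, almost any positive margin sustains expansion, which is the boycott's secret as described below.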
Back to the fields. Boycotts have completely different economics than labor organizations. Boycotts have huge markets: liberals eager to shop their conscience. Churches and colleges do the recruiting at very low cost to the boycott sponsors. Every convert adds more value (the grapes they don’t buy) than cost (the very low cost of volunteers leafleting).
Shakespeare's immortal eulogy, delivered by Mark Antony for Julius Caesar, resonates this week: "The evil that men do lives after them; The good is oft interred with their bones." We lost five remarkable men from different parts of the world. Four of them made the planet an immeasurably better place. One devoted his life to evil that survives his death.
George Whitman, 1913-2011
I have known hundreds of booksellers; the most memorable by far was George Whitman, proprietor of Shakespeare and Company, across from Notre Dame at point zero in Paris.
His store, like its namesake run by Sylvia Beach during the 1930s, became point zero for two generations of writers and wanderers. I am one of tens of thousands of people who was taken in by George, absorbed into his literary world, made part of his little “Rag and Bone shop of the heart”. George never cared about money, food, or finery — he cared about people, literature, and travelers. He was especially drawn to young people, to whom his generosity was legendary.
I last saw George four years ago. My tribute to him at the time reads nicely today. I recalled my days living in Shakespeare in January of 1976, decades after Jackie Onassis had come through as a student and around the time that a young Hungarian immigrant named George Soros hung his hat at Shakespeare & Co. for several days.
The New York Times ran a wonderful obituary about George, who had written his own eulogy years earlier. Inscribed over a doorway that led to the upstairs of Shakespeare was a motto: “Be not inhospitable to strangers,” it counseled, “for they may be angels in disguise”. George did not, in fact, treat every visitor like an angel in disguise. But he gave visitors a place to discover their literary angels, and more than a few rose to the challenge.
Christopher Hitchens, 1949-2011
"A life that partakes even a little of friendship, love, irony, humor, parenthood, literature, and music, and the chance to take part in battles for the liberation of others cannot be called 'meaningless' except if the person living it is also an existentialist and elects to call it so."
"Beware the irrational, however seductive. Shun the 'transcendent' and all who invite you to subordinate or annihilate yourself. Distrust compassion; prefer dignity for yourself and others. Don't be afraid to be thought arrogant or selfish. Picture all experts as if they were mammals. Never be a spectator of unfairness or stupidity. Seek out argument and disputation for their own sake; the grave will supply plenty of time for silence. Suspect your own motives, and all excuses. Do not live for others any more than you would expect others to live for you."
Hitchens was always provocative, occasionally irritating, and frequently funny. I will miss his voice enormously.
Vaclav Havel, 1936-2011
Imagine a political upheaval so profound as to be accurately called a revolution, so bloodless and smooth as to be called velvet, and so artistic that its leader was a playwright who conducted the insurrection from, and I am not making this up, the Magic Lantern Theatre in Prague. Vaclav Havel was the Nelson Mandela of Eastern Europe, and his personal role as catalyst of the communist collapse is hard to overstate. From the Times:
In 1977, Havel was one of three leading organizers of Charter 77, a group of 242 artists and activists who called for basic human rights in Czechoslovakia. Havel was arrested and imprisoned. He spent five years in and out of Communist prisons, lived for decades under daily police surveillance and suffered the suppression of his literary works.
He later served 14 years as president, resigning the Czechoslovak presidency in 1992 rather than preside over his country's separation, and then leading the new Czech Republic. He was the author of 19 plays and dozens of essays, including "The Power of the Powerless", which influenced a generation of activists much as King's "Letter from a Birmingham Jail" had done in the United States. By the time he became President of Czechoslovakia, Havel had written more serious fiction than most heads of state had read.
Timothy Garton Ash, then a British graduate student, witnessed the remarkable Havel in action during the Velvet Revolution. Havel's moral standing, his poetic use of language, and his patience made him the dominant figure in resistance politics in Prague in 1989. Garton Ash reports in his indispensable first-hand account of events that year in Prague, Budapest, and Berlin that Havel served as the chief behind-the-scenes negotiator who brought about the end of more than 40 years of Communist rule and the peaceful transfer of power. The revolt was so smooth that it took just weeks to complete, and not a single shot was fired.
Warren Hellman 1934-2011
In business school, I became friends with Marco, the kid in the next seat everyone called Mick. I recall the day when a classmate told me "his father is Hurricane Hellman — the youngest partner in the history of Lehman Brothers. He ran the place before he turned 40." Although I only met Warren Hellman a handful of times, I came to respect him as an icon of a group of prominent postwar Bay Area business Republicans: deeply civic, secular Jews whose contribution to life in these parts is rarely noted. Architect Art Gensler and Gap founder Don Fisher are others, as, excepting the Republican bit, are banker Bill Hambrecht and Levi's heir Robert Haas.
If you live in the Bay Area, it is hard to overstate the impact of Warren Hellman. He saved San Francisco over a billion dollars by financing a ballot measure to reform the city's tottering pension system. He built the parking garage beneath the de Young Museum in Golden Gate Park. He chaired the Board of Trustees at Mills College and reversed the decision to admit men (still a very popular decision, although I have argued a dubious one). He funded the San Francisco Free Clinic and endowed aquatic sports at UC Berkeley, where he had played water polo as a student. And in 2001, Hellman launched the Hardly Strictly Bluegrass festival, an annual three-day event in Golden Gate Park that draws more than 300,000 people and is put on for free. Hellman paid the musicians, who usually included Emmylou Harris and the late Hazel Dickens. Hellman himself was a serious amateur banjo player and toured with his group, the Wronglers, until quite recently.
Hellman was not only born into a remarkable family, but he created one as well. He was the great-grandson of Isaias Hellman, California's first banker, who created what became Wells Fargo Bank and helped build the University of Southern California. His kids are high achievers who share his passion for athletics. Warren competed in extreme sports, once finishing a 100-mile high-altitude race in the Sierra after falling and breaking a rib at mile 25. His kids have won championships in mountain bike racing, skiing, and other sports.
Hellman was the sort of one percenter that the Bay Area loves: a guy who took much more pleasure from giving his money away than he did from making it; who walked away from Wall Street to build an investment firm as “the opposite of Lehman Brothers”; who rarely wore a tie and never seemed to take himself terribly seriously; and who was disarmingly candid about his many failures. He has much to teach the pashas of Silicon Valley; I sincerely hope that they are up to the task.
Kim Jong Il, ??-2011
Those looking for evidence that God has a sense of humor had a fine week. Not only did the Iraq war and the life of Christopher Hitchens end on the same day, but the loss of four of our finest was followed by the unmourned death of perhaps the worst human alive.
History will struggle to find a single kind word to say about Kim Jong Il. He built a hermetic garrison state, imprisoned and starved millions of his people, sponsored untold terrorist activities including the downing of a civilian airliner, and undertook military provocations and kidnappings against Japan and South Korea. He developed and tested nuclear weapons and sold weapons technology to some of the most unstable governments in the world, including Pakistan. He refined his doctrine of Juche into a personality cult that represents the precise opposite of everything George Whitman, Christopher Hitchens, Vaclav Havel, or Warren Hellman stood for.
As Shakespeare predicted, the evil that Kim did will survive him. Kim’s sudden death is a problem for South Korea but an even larger problem for China. China has tended to treat North Korea as its pain-in-the-ass psychotic kid brother who refuses his meds but performs a useful service by keeping the neighbors on their guard. But an unstable North Korea is not a good thing for China. There is a strong argument that China will need to take over North Korea as a client state — effectively a new province. In a generation or two, Korea would either unify in a Chinese economic sphere or the North would be forcibly absorbed, Tibet-like, into Han culture. It ain’t Jeffersonian democracy, but it is hard to argue that this would be a worse outcome for the people of North Korea than the continued demented rule of the last standing communist dynasty.
I have taken up running and, like boomers everywhere, I worry about hurting myself. Data suggest that between a third and half of runners are injured every year, making running a surprisingly high-risk exercise. Why is this?
Journalist Chris McDougall wondered why he was getting hurt when humans have been running for two million years. His best-selling book, Born to Run, is a well-told tale of people who run barefoot without getting hurt and of researchers who discover a paradox: support can make you weaker, not stronger. The more support a running shoe gives you, the more it weakens your foot, ankle, and calf muscles and the more prone you become to injury.
McDougall presents the stories that led to the science and the science that has led to a resurgence of barefoot or minimal shoe running. He visits the Tarahumara, an impoverished clan of long distance runners living in the very remote Copper Canyons of Mexico. McDougall romanticizes their lives, describing men and women of all ages routinely running for dozens of miles in sandals over hot, steep mountains.
Scientists have studied the Tarahumara for years because their isolation makes them good subjects. As roads arrive, the Tarahumara embrace modernity: their diet goes from corn meal and long runs to pickup trucks and Ho Hos. Epidemiologists have documented the diabetes, cancer, and heart disease that result. McDougall looks past this, focusing instead on the propensity of the canyon-dwelling Tarahumara and some of their more crazed gringo brethren to race ridiculous distances wearing huaraches cut from old tires.
Back home, McDougall consults a Stanford track coach who refuses to let his athletes wear expensive running shoes and discovers data suggesting that both the extent and severity of injuries go up with the price of shoes. He interviews Daniel Lieberman, a Harvard biomechanics professor, who explains precisely how the support of a running shoe makes most runners overstride and heel strike, which delivers a much sharper blow than the midfoot landing of a barefoot runner. A good video of Lieberman explaining his research is below. The peer-reviewed work is here in Nature.
Lots of testing and learning is still being done both by individuals and by researchers, but nobody these days takes for granted that running shoes are always helpful. Shoe companies are trying to shift their designs and their message to promote “minimalist” shoes, some of which are now best-sellers.
Is this just a fad? Of course any shoe can become a fad if well marketed. On the other hand, humans have run barefoot for two million years but have worn running shoes for only about 30. I would not bet against barefoot running, given the injury rates that shod runners experience.
Protection turns out to be deceptive. It seems completely normal to me that as a runner, I would prefer a protective shoe. I want lots of cushioning. I want to avoid pronation, which must be awful because it sounds so bad. It would be simple to sell me orthotics — hey, my knees hurt sometimes. Although some people surely do fine in running shoes, for many people, highly protective shoes are like a cast. They reduce your mobility and your foot gets continually weaker as a result.
Economists, of course, know that protection often makes competitors weaker. They believe instinctively that competition strengthens those who face it, be they muscles, individuals, teams, companies, or regions. I have even argued that those who want stronger labor unions need to force unions to compete. Economists left and right can show that trade protection weakens both parties, although this knowledge never stops companies, communities, or workers who are hurt by trade from seeking it. Doubtless some similar principle applies to parenting: too much protection weakens your kids. Fine, now buckle your damned seat belt.
To evaluate social programs or parenting, we need the equivalent of the Tarahumara — a group isolated from extraneous influences that can test whether social protections produce more benefits than costs. Fortunately, an impressive young economist has shown that many of our protective programs are testable. Esther Duflo is an MIT professor, a MacArthur genius grant winner, and the winner of the 2010 John Bates Clark Medal for the best economist under the age of forty. Watch her fascinating TED talk on how she tests programs to fight malaria, educate kids, and immunize children. This is barefoot economics at its best.
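The logic behind Duflo's randomized evaluations is simple enough to sketch: flip a coin for each village, give the program to the treated half, and compare average outcomes; randomization ensures the two groups differ only by the program. Here is a minimal Python simulation of that idea. All numbers (baseline rate, effect size, noise) are invented purely for illustration, not drawn from her studies:

```python
import random
import statistics

def run_rct(n=1000, base=0.30, effect=0.15, seed=42):
    """Simulate a randomized evaluation of a social program.

    Each of n villages is assigned to treatment or control by a coin
    flip; treated villages get `effect` added to a baseline outcome
    rate (say, immunization coverage). Returns the estimated
    treatment effect: the difference in mean outcomes.
    """
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        noise = rng.gauss(0, 0.05)          # village-level variation
        if rng.random() < 0.5:              # coin-flip assignment
            treated.append(base + effect + noise)
        else:
            control.append(base + noise)
    return statistics.mean(treated) - statistics.mean(control)

print(round(run_rct(), 3))  # close to the true effect of 0.15
```

Because assignment is random, the noise averages out across both groups and the simple difference in means recovers the true effect; no God Complex required, just enough villages.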
Testing of this sort requires an appetite for failure. Politicians, business people, and scientists each approach tests differently, depending on how failure affects them.
- Politicians pay a huge price for failure. This forces them to simplify problems and promise sound-bite solutions. If they do not, they won’t be elected and they won’t be politicians. Politicians cannot say “wow, this is a tough problem. Let’s try a bunch of things, fail at most of them, and learn what works.” Most politicians suffer from what Tim Harford calls the “God Complex”. Harford writes the Undercover Economist column for the Financial Times and has published a terrific book called Adapt: Why Success Always Starts with Failure. You can get a flavor of his thinking at his fantastic TED talk. The God Complex is the equivalent of intelligent design: certainty that complex systems can best be managed centrally and that complex questions can be answered without the painful process of trial and error. Parents, CEOs, physicians, gods, and anyone else who pays a high price for failure are especially vulnerable.
- Business people embrace trial and error mainly because markets force them to. Harford notes that ten percent of all businesses fail every year. A market economy can be looked at as a huge, ongoing experiment that evolves, like every complex system, through variation and selection. The best leaders of complex systems acknowledge that leading-edge problems don’t have obvious solutions and encourage a structured process of trial and error. Harford’s book discusses the value of lots of small, low-cost trials, decoupled so that they don’t spill over, and of carefully documenting and interpreting results. An important and highly recommended read.
- Scientists love failure. It’s how they learn. They understand that humans have evolved as complex systems through millions of years of variation and selection. They reason inductively from data and deductively from theory to ask: have we evolved to run? Evolutionary biologists have long noted that the unique way we sweat for thermoregulation, our hairlessness, our odd bipedal design (more energy-efficient than any quadruped), our unusual ability to breathe multiple times per step, and our highly engineered feet, ankles, and hips all suggest anatomy designed to run.
But until the 1980s, researchers were stymied by one big problem: we are slow. Why on earth would running matter, when every mammal worth eating can outrun us?
It fell to David Carrier, a graduate student at the University of Utah, to notice something that had escaped other scientists: we are built for endurance, not for speed. The case that humans are designed for endurance running is now widely accepted. This is partly because we have discovered a story that backs the data. Hunter-gatherers in the central Kalahari Desert in Southern Africa still practice persistence hunting: they run their prey to death. (One other group practices persistence hunting, or at least remembers it: our pals the Tarahumara.) Running down a large mammal takes as little as an hour or as long as eight, but if a human can keep a mammal galloping so that it cannot catch its breath, cool down, or rejoin its herd, it will collapse of exhaustion before the human does. It appears that before we invented spears, humans survived by high-endurance persistence hunting. Barefoot.
The BBC managed to film a group of men in the Kalahari hunting a kudu this way. Despite the drums and the breathless narration, it is a stunning film. Notice that the runners are shod in cheap shoes that do not let them heel strike. They look a lot like the sneakers we all wore as kids.