Top Ten Books of 2014

My reading for 2014 was more focused for the same reason my blogging has been reduced — I have a day job. Also, two of the books were real projects, entailing runs to the sources, blogosphere, and Wikipedia for background, context, and understanding. These books were demanding (long and in places technical) and singular — representing a new peak for their genre. I enjoyed both Thomas Piketty’s 696-page Capital in the Twenty-First Century and Andrew Roberts’s 976-page Napoleon: A Life enough to work through them both.

Piketty is of course the French economist who appears to be famous everywhere except France (a problem he corrected this morning by refusing the Légion d’honneur, declaring that “it is not the job of government to determine who has honor.” Spoken like a French historian). Piketty has wrestled more deeply and seriously with the problem of inequality than any previous economic historian. He has actually produced three books in one: an amazing history of inequality based on European census data rarely deployed for these purposes; an economic analysis of the causes of inequality; and a set of policy prescriptions. In the analysis, he argues that if r, the rate of return on assets, exceeds g, the overall rate of economic growth, family wealth will grow faster than the economy, and can become gigantic. It is a widely derided formulation, but well argued and defended. His third book, the recommendations for taxing wealth, is largely laughable, as he all but concedes.
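The r-versus-g argument is, at bottom, compound interest. A minimal sketch of the mechanism (the rates below are hypothetical, chosen only to illustrate; Piketty’s own historical estimates vary by period):

```python
def wealth_to_income_ratio(r, g, years, initial_ratio=1.0):
    """Size of a fortune growing at rate r, relative to an economy
    growing at rate g, after a given number of years."""
    return initial_ratio * ((1 + r) / (1 + g)) ** years

# Hypothetical rates: a 5% return on capital vs. 1.5% economic growth.
ratio = wealth_to_income_ratio(r=0.05, g=0.015, years=100)
```

With those rates, a fortune’s share of the economy grows roughly thirtyfold over a century — the “gigantic” dynamic Piketty worries about. When r equals g, the ratio never moves at all.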

Roberts’s book doubles as a history of 19th-century France and, for that matter, Europe. Napoleon led not only a remarkable and consequential life, but one that shaped a great deal of modern Europe. He wrote some 30,000 letters during his lifetime, and Roberts is the first scholar to have access to the full pile, lucky guy. The book is compelling and very well-written. If anything, it needed to be longer. Both books are worthwhile investments, even if both will occasionally leave you screaming — Piketty for what he fails to understand about business and technology, Roberts because his subject can be so simultaneously brilliant and boneheaded. (Moscow? Really?)

I read from several other categories: books on startups, business and economic history, math and science books, and pulp detective fiction, especially noir. Here are the winners:


Entrepreneurship is in the air — often comically. Books that fanned the startup flames this year included Brad Feld’s Venture Deals: Be Smarter Than Your Lawyer and Venture Capitalist, the remarkable Ben Horowitz’s The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers, and Sean Ellis’ Startup Growth Engines: Case Studies of How Today’s Most Successful Startups Unlock Extraordinary Growth. All three of these writers are deeply experienced at the trials of early stage companies and all have useful lessons to teach. Feld offers a field guide to the legal aspects of starting a company, to which I would say simply: read his book, then hire a competent lawyer. Horowitz writes excellent columns, which are less compelling when assembled into a book. Sean Ellis has put together a useful set of lessons around specific startups he has worked with.

Top startup book of 2014 was Ash Maurya’s Running Lean: Iterate from Plan A to a Plan That Works. This, finally, is the book that Steve Blank should have written in his groundbreaking The Four Steps to the Epiphany — which instead emerged as one of the worst expressions of great thinking ever put to print. In most cases bad writing is the soft underbelly of bad thinking — but Blank proves that sometimes you just need an editor. Eric Ries, who has built a franchise around “Lean Startups” by rewriting Blank and adding ideas like the minimum viable product, does not come close to producing the how-to manual for a startup team that Ash Maurya has written. This really is the go-to book for early stage technology entrepreneurs.

You won’t get much practical advice from Zero to One: Notes on Startups, or How to Build the Future by the estimable Peter Thiel. Parts of this book will delight you, and some of it should outrage you. The book derives from Thiel’s widely followed Stanford course, which was capably blogged by his coauthor, Blake Masters. It has moments of brilliance, however — notably his description of how companies actually compete and his relentless search for companies that go from zero to one (bringing something altogether new into the world) as opposed to from one to n (scaling up derivative businesses). Marc Andreessen, who needs to write a book and is smart enough to write a very good one, likes to say that Peter Thiel is always half right. Seems correct — and the half that is right is also highly (Zero to One) original.

Business History

Nominations include the aforementioned Dr. Piketty, Anita Raghavan’s The Billionaire’s Apprentice: The Rise of the Indian-American Elite and the Fall of the Galleon Hedge Fund, Bryce Hoffman’s well-told American Icon: Alan Mulally and the Fight to Save Ford Motor Company, John Brooks’ Business Adventures: Twelve Classic Tales from the World of Wall Street, Martin Wolf’s The Shifts and the Shocks: What We’ve Learned — and Have Still to Learn — from the Financial Crisis, Robert Litan’s Trillion Dollar Economists: How Economists and Their Ideas Have Transformed Business, William Rosen’s The Most Powerful Idea in the World: A Story of Steam, Industry, and Invention, and Vaclav Smil’s Making the Modern World: Materials and Dematerialization.

Raghavan, Hoffman, and Brooks give us well-told tales. The disgusting and criminal behavior of my former McKinsey colleagues, the turnaround of Ford Motor, and the Bill Gates-endorsed collection of New Yorker business essays all make for great and educational reading. Martin Wolf and Robert Litan wrote exceptional books for those interested in financial economics and another post-mortem on recent financial crises — although I had my fill of those in 2012-13. And I have an obvious soft spot for the sort of well-researched, highly readable economic and technology histories that Rosen and Smil have written.

In a competitive year, I pick Marc Levinson’s The Great A&P and the Struggle for Small Business in America, Brad Stone’s The Everything Store: Jeff Bezos and the Age of Amazon, and Steven Johnson’s How We Got to Now: Six Innovations That Made the Modern World as the year’s best books of business history. Levinson’s book does not quite rise to the level of The Box, but is nonetheless a really well-told tale of retail innovation — a story that Brad Stone picks up with the history of Amazon. Based on my own front-row seat at some of the events he describes, Stone gets a lot of things right about Amazon’s culture and history. It is an amazing company, even if not always an attractive one. Steven Johnson exceeded my low expectations for a book made into a PBS show and structured around seemingly random innovations. But the book works and brings with it more delightful insights per page than any in recent memory. You cannot know in advance how the sack of Constantinople will lead directly to telescopes, but Johnson traces the path with confidence without inferring causality where none exists. Great read.

Math and Science

This year’s contenders are Alan Lightman’s The Accidental Universe: The World You Thought You Knew, John Brockman’s Thinking: The New Science of Decision-Making, Jordan Ellenberg’s How Not to Be Wrong: The Power of Mathematical Thinking, and Joshua D. Angrist’s Mastering ‘Metrics: The Path from Cause to Effect.

Lightman is an unusual writer, as the first professor to receive appointments from MIT in both the sciences (he is a physicist) and the humanities. He offers looks at the universe from several perspectives — not all equally successful. His opening chapter, entitled “The Accidental Universe,” is the strongest and by itself a remarkable read. Brockman is closely associated with the Edge, a foundation that brings together thinkers from a wide range of disciplines. His book touches on developments in neuroscience, decision theory, linguistics, problem solving, and more, but consists mainly of unedited transcripts of informal discussions or presentations, presumably at Edge conferences he has hosted. This gives the book a stream-of-consciousness feel, and leaves it vulnerable to rambling, repetition, and superficiality. Ellenberg writes well and not especially technically. Fine book overall but, as with Lightman, the first chapter (on lessons by Abraham Wald on the value of constantly looking for reasons why you could be wrong) has the strongest material. Angrist’s book on econometrics was wonderfully organized and well written — but the higher math got away from me. I liked it even if I did not understand it all.

Best book in this category goes to Nate Silver for The Signal and the Noise: Why So Many Predictions Fail-but Some Don’t. The book is not good because Silver famously called the Presidential election correctly — it is a genuinely good summary of the science of prediction, which is a vital part of science and business, not just politics. But prediction is difficult, “especially concerning the future” as Niels Bohr famously noted. We fall prey to cognitive biases and often mistake noise for signal. Silver does an excellent job of walking a general reader through the swamp.

Social Criticism

I am a sucker for books on causes or those written to expound a strong point of view. This year’s pile included Adam Minter’s Junkyard Planet: Travels in the Billion-Dollar Trash Trade, two books on the economics of higher education — Elizabeth Armstrong’s Paying for the Party and Joel Best’s The Student Loan Mess: How Good Intentions Created a Trillion Dollar Problem — Megan McArdle’s The Up Side of Down: Why Failing Well Is the Key to Success, Michael Pollan’s Second Nature: A Gardener’s Education, Jonathan Safran Foer’s Eating Animals, and Steve Levitt’s Think Like a Freak: The Authors of Freakonomics Offer to Retrain Your Brain.

Minter is a great writer, as any New Yorker reader knows. Trash matters — but ultimately not enough to keep me interested. The higher-ed books both fail to segment the problem. Some higher education is really valuable and essentially self-financing, and much is not. Average data tells you little, except that the debt problem is unsustainably large. McArdle knows her Dylan: “she knows there is no success like failure, but that failure is no success at all.” She fails to deliver a book’s worth of insights — although she is a fine writer and terrific blogger. Pollan’s book is great, period. Safran Foer somewhat crudely attempts to moralize factory farming — a topic that others, notably Pollan, have addressed far more effectively as an omnivore’s dilemma instead of a vegetarian manifesto. Levitt’s book is Freakonomics II — good, clean behaviorist fun, just like the first one.

The winners in this category are James Fallows’s China Airborne and Tyler Cowen’s Average Is Over: Powering America Beyond the Age of the Great Stagnation. Fallows is a great essayist and social thinker — you can learn from almost anything he writes, and when he combines his love of China with his love of airplanes, run, don’t walk to read this book. Tyler Cowen, whose Marginal Revolution blog is indispensable to economic thinkers, has exposed forces that underlie as much of the inequality problem as r>g — that in many fields, a huge share of the income goes to the top talent, and that technology appears to make this problem worse, not better.

Pulp Fiction

Now to the fun stuff. I binge-read police procedural and detective fiction, especially noir. In earlier years, I pigged out on the complete Raymond Chandler, Lee Child, and Michael Connelly. This year I happily read the complete Barry Eisler, whose ten or so John Rain novels, mostly set in Tokyo, are terrific escapism and helpfully move noir fiction out of Los Angeles. Rain is an attractive character (evidently to be played by Keanu Reeves in forthcoming movies) who combines the mandatory brooding nature, love of scotch, jazz, and beautiful women with a wonderful introspection and deftness at the killer’s craft. Eisler is, of all things, an accomplished Silicon Valley attorney with an obvious love of Japan and the genre. Highly recommended. (Note that Eisler decided unhelpfully to rename all of his novels, so they all have new pub dates and you will need to search a bit to figure out the sequence, which matters. Pro tip to Barry: next time, number the titles.)

Books, Culture

On a City Bike, all Traffic Lights are Yellow

In both the US and in Europe, the use of bicycles in cities has shot up. According to the League of American Bicyclists (which endorses none of what follows), bike use has gone up 39 percent nationally since 2001. In the seventy largest US cities, commuter bike use is up 63 percent. Leading the pack is San Francisco, where I bike to work most days; Chicago, New York, and Washington have also seen huge increases. European cities, which generally had a head start, have also seen an increase in bike commuting. The growing ubiquity of City Bikes (public rental bikes designed for short urban commutes; in London, “Boris Bikes,” after the mayor who sponsored them) has accelerated this trend.

Cycling is safe. Mile for mile, your odds of dying while walking or cycling are essentially the same. Surprisingly, even with more cyclists on the road, fewer cyclists are getting killed by cars. From 1995 to 1997, an average of 804 cyclists in the United States died every year in motor-vehicle crashes. During an equivalent three-year period from 2008 to 2010, that average fell to 655. (The number rose again in 2011; it is not clear why.) The credit does not appear to go to bike helmets, which continue to generate serious debates. On balance, they seem to prevent death from skull fractures but do little to prevent brain injury from concussion.

Traffic laws that slow cars down make a big difference. According to the Economist, dying while cycling is three to five times more likely in America than in Denmark, Germany, or the Netherlands, mainly because cars travel at more than 30 mph. Europe frequently has traffic “calming” laws to slow cars down when bikes are nearby. Slowing cars also helps pedestrians: a pedestrian hit by a car moving at 30 mph has a 45% chance of dying; at 40 mph, the chance of death is 85%, according to Britain’s Department for Transport.

The British seem to gather better national data on cycling accidents than anyone else, although they appear to be far worse statisticians (they unhelpfully conclude that most bike accidents occur during those times when people ride bikes, for example). Nonetheless, they document a finding that will surprise no experienced urban cyclist: “Almost two thirds of cyclists killed or seriously injured were involved in collisions at, or near, a road junction…” In other words, cars kill cyclists at intersections.

Knowing this, the single largest safety priority of every urban cyclist must be to avoid cars where possible and yield to them where not. Making this your number one safety priority brings with it some surprising implications.

In short, when biking in a city, all traffic lights are yellow. Avoiding cars means stopping on green when you must and going on red when you can.


Culture, Cycling


In Blue Jasmine, Cate Blanchett delivers what may be her strongest performance yet. Her frenetic, scheming, recast Blanche DuBois tops even her jaw-dropping performance in I’m Not There — the 2007 movie where she plays a highly plausible Bob Dylan.

Jasmine is married to Hal (Alec Baldwin), a sleazeball Wall Streeter who delivers the requisite mansion, Hamptons home, and yacht. When Hal is exposed as a Bernie Madoff-style crook, he goes to prison and Jasmine goes to live with her working-class sister in San Francisco. In a clear homage to Tennessee Williams’s A Streetcar Named Desire, nothing goes especially well after that.

The movie is in most respects brilliant. Allen is an active director, his camera everywhere, his cuts clean and well-considered. It is hard to argue with the verdict of The New Yorker’s David Denby, who pronounced Blue Jasmine “the strongest, most resonant movie Woody Allen has made in years.” Sally Hawkins and Andrew Dice Clay deliver solid performances as Jasmine’s adopted sister and her ex-husband, who never forgives Hal for costing them the only money they ever had.

At one level, Blue Jasmine is the latest of Allen’s attempts to get out of Manhattan, which started with Midnight in Paris and continued with To Rome, With Love. I hope that Parisians and Romans did not cringe at the complete mess Allen made of their cities the way most Bay Area residents will at Blue Jasmine. Woody Allen appears not to have progressed in his view of California since Annie Hall in 1977, where he famously puts down Los Angeles by declaring to his best friend Marty that “I don’t want to move to a city where the only cultural advantage is being able to make a right turn on a red light.”

The working-class heroes of Blue Jasmine are not San Franciscans — they are from Jersey. In the Bay Area, car mechanics have foreign accents (mine is Yemeni, but you can just about pick your country). Nobody here debates where the best clams are — that happens in New York and Boston. Even in New York, State Department officials have not had $10 million homes since the 1930s (Dwight Westlake, played by Peter Sarsgaard, should have been a venture capitalist). Dr. Flicker would have been a Cal grad and second-generation Chinese. There is no Post Street entrance to the jeweler Shreve & Co. — Dwight was walking into a wall (or was that the point?). The produce even in a working-class grocery store would have been ostentatiously organic — although having an Indian owner was a nice touch. The custom Karl Lagerfeld-designed Chanel clothing would have been made to look even more out of place (in truth, Blanchett wore it magnificently). Allen would not have closed the film in South Park with a shot that could have been Central Park.

Or maybe he would have. Maybe a bit of Manhattan has crept into San Francisco and in some places more than a bit. But then why would Jasmine move west? She coulda stayed in Brooklyn. Or better, New Orleans.

Artists, Culture, Film, People

Early Warning Signs


Which one is the applicant?


Business competition is always interesting, in part because smart companies figure out how to avoid competition by specializing and differentiating their product or service. When Sun Tzu admonished his generals against assaulting walled fortresses, he understood that head-to-head competition is a sure path to a headache.

Many US universities have not read their Sun Tzu: they compete head-to-head for the same students. In normal markets, schools would specialize. Some would seek students with strong quantitative skills, others would focus on training people who are especially empathic. Some might cater to students who write well, or are poor, male, female, interested in fashion or language studies, or born in another country. This happens of course (except for the male part: the US has no men’s colleges left and only nine all women’s colleges), but most colleges recruit for the same student profile: high grades, high test scores, compelling outside activities. They are assaulting a walled fortress. What accounts for their failure to differentiate?

One problem is that universities are lazy: they compete for those who need them least. No university seeks out or even really wants people who most need education. They seek students who will be successful even if the university is not. I wrote earlier about selection effects: the tendency of elite universities to compete for students with traits that strongly predict future success regardless of education. When those young people proceed to be professionally and often economically successful, their alma mater is always there, hand outstretched, with a gentle reminder of their formative influence. Employers, desperate for a shorthand method of segmenting talent markets, reinforce these effects by preferentially hiring graduates of “good” colleges. Pretty soon, your college becomes a critical part of your personal brand. It’s a racket, and one in which I enthusiastically participate, benefit from, and perpetuate as a parent, student, employer, and advisor to university leaders.

Selection effects lead to a second market failure: universities don’t scale. What other business deliberately limits access to a compelling service? Can you imagine a law firm declaring that they would only accommodate the first 100 clients? Unthinkable — they will grow to meet demand and maybe a bit more. For top universities, being selective is not a necessity, it’s a choice. Most elite schools admit about the same number of students today as they did 100 years ago — that’s what makes them elite schools.

As my younger son starts to think about college, I have begun to pay attention to how colleges are thinking about him. He will be a major catch (translation: they will compete for him because he shows every sign of being a kid who will do just fine in life with or without their help). So how do colleges compete for talent? In particular, how do colleges compete for the students that they all think they want? 

When competing for talented high school students, universities worry either about their selectivity or their yield. Selectivity is admissions/applicants. Yield is enrollment/admissions. To boost selectivity, you do more marketing to increase the number of applicants. But to boost yield, you actually have to improve your school. Boosting yield is really hard in a competitive market, which is why yield drives college rankings (which improve yield — a longer story). US News, the FT, the Economist and others that have jumped into the college ranking game realize that yield is a very strong market indicator of quality. After all, a school that could admit 1,000 students and then enroll all of them would have to be seen by every student as their very best choice.

Except that most schools cheat. To avoid head-to-head competition, most universities offer students the following deal: don’t force us to compete and we will give you a leg up. They call it early decision, but they should call it yield improvement. They tell students that if they apply early to their school only, the school will lower the admissions bar. Careful research suggests that students who apply for early decision receive an advantage equal to an extra 150 points on their SAT score. (Universities deny this, but the numbers are unequivocal.) Colleges enforce severe penalties against students who are admitted early and do not enroll by colluding to blacklist the offending student — a practice that should arguably be challenged in court.

Early decision is marketed as a way to reduce the stress of applying to a dozen colleges — and it does that. But it has a benefit that few seem to have noticed: it boosts the school’s yield. Every student admitted under early decision programs will attend: that’s the deal. The yield on early decision admissions is 100% — small wonder that they are growing as a share of total admissions. Nor is there anything wrong with this: it allows applicants into better schools than they would on average get into if they applied later. Businesses do similar things all the time, ranging from no-shop agreements during M&A discussions to exclusive distribution deals in exchange for preferential pricing. Negotiated exclusivity is a battle-tested element of many walled fortresses. 

In the last decade, however, the most selective schools have started to rethink early decision. They have decided to compete on selectivity by removing exclusivity. They say to students: “apply early, get an early decision, and you are not bound by our offer.” Today Harvard, Princeton, Yale, Chicago, Stanford, and MIT offer nonbinding “early action” programs, and a handful of other schools do as well. These schools realized that they were the top choice for the overwhelming majority of those they admit — they already had great yields. So they decided to increase their selectivity by signaling that they want every strong student to apply. With no reason not to take a shot at it and no obligation to attend, applications skyrocketed, and schools that were already preposterously selective became even more so.

The result of these two strategies is exactly what you would expect: applications have skyrocketed, driving admission rates down. Acceptances went out today, and this year Yale accepted fewer than 7% of its applicants, the lowest acceptance rate in its history. It offered 1,991 seats to 29,610 applicants for an entering class of about 1,300. Harvard admitted 5.8%, Princeton 7.3%.
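Plugging Yale’s numbers into the two ratios defined above (the entering class figure is approximate, so the yield is too):

```python
# Yale's numbers from this year's cycle.
applicants = 29_610
admits = 1_991
enrolled = 1_300  # approximate entering class

selectivity = admits / applicants  # share of applicants admitted
yield_rate = enrolled / admits     # share of admits who enroll

print(f"selectivity: {selectivity:.1%}")  # roughly 6.7%
print(f"yield: {yield_rate:.1%}")         # roughly 65%
```

The asymmetry is visible in the arithmetic: a school that wants to look more selective only needs more applicants in the denominator, while a school that wants a better yield has to change the minds of the students it admits.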

There are actually three reasons that Ivy League applications are up. First is early action. Second is the Common Application, an online form that makes it easier for students who do not apply for early decision to apply to many more schools than they used to. When Harvard or Stanford say that they are twice as selective as they used to be, remember that each student they consider is also applying to twice as many schools. When I was seventeen, I applied to four schools and hand typed each application. Few kids today who are serious about college apply to fewer than eight and many apply to more. When the music stops, most kids who have prepared themselves for college still end up with a chair.

The third reason that Ivies are attracting interest is more mundane: they pay better. Harvard, Yale, Princeton and Columbia are “need blind” schools, where the ability of a student’s family to pay isn’t considered by the admissions office. Harvard has announced that it will boost its financial-aid budget to $182 million, a 5.8% increase. For many students, it actually costs much less to attend an expensive, elite school than it does a local state university. The trick, of course, is getting in.

Over the next decade, early decision will not solve the core problem facing most non-Ivy universities: they are a lousy investment. Universities operate medieval business models designed for a core purpose that has disappeared. As late as the 1970s, universities accounted for a huge share of knowledge creation, storage, and transmission. They earned the right to certify who was smart and talented because they held a near monopoly on skill and knowledge. This monopoly has now vanished as knowledge creation has diffused and fragmented, information storage has become free and ubiquitous, and skill transmission has taken many forms — some teacherless. The main privilege that colleges cling to is certification — and that too is increasingly under challenge.

The assault comes not mainly from online education, which has been widely discussed and over-hyped, but from enterprises that treat education as a business opportunity. For example, the MBA with the fastest payback now comes not from a college but from Hult International, which was tiny five years ago and is now the largest producer of MBAs in the world. 2U now builds large-scale, fully credentialed online degrees that produce thousands of graduates each year in partnership with major universities. The Mozilla Open Badge Initiative and dozens of social startups want to supplant traditional degrees with more specific and timely credentials. There are literally hundreds of enterprises devoted to disrupting higher education — whose walled fortresses will not withstand the siege for long.

Education mattered to me, it matters more to my kids, and will matter even more to my grandkids. But thanks to competition and innovation, it will cost less, deliver more, and signal actual capabilities much more precisely than today’s universities do.

Competition, Economics, Education


Religion seems to help a lot of people. It reinforces admirable values and provides community, kinship, charity, music, a moral compass, and stories. But to me, religious faith is stone soup: like the soldiers whose boiling rocks induced villagers to donate carrots, potatoes, and meat until there was a plentiful stew after the stones were removed. We could worship an old bicycle chain instead of an almighty deity if it brings us together to do useful work and help the least fortunate among us. Come to think of it, there are plenty of reasons to prefer the greasy chain over dead guys in white robes.

My distrust of superstition has always meant that God and I are not on speaking terms. He presumably finds my lack of gratitude amusing. So we leave each other alone. I also think that despite its good works, the cost of religion usually exceeds its benefits. For millennia, religious intolerance has been a global scourge that has led millions to a life of hatred and violent death. Churches exploit the poor more often than they help them, just as they abuse the emotionally vulnerable more often than they comfort them. Religious leaders happily frighten children into believing that they will burn in hell forever if they don’t submit to church authority and doctrine. This is designed to terrify the weak into compliance — it is nothing like parents fibbing about Santa Claus.

Most religions oppress women, perpetuate outrageous sexual myths, and demand sexual conformity, but if hell exists, it surely has an especially warm corner reserved for the modern Catholic church, which abuses children on a horrific scale. That thousands of priests swore an oath of chastity before sodomizing more than ten thousand Catholic boys and (20% of the time, anyway) raping Catholic girls is both horrifying and outrageous.

As with all great crimes, we will never be able to fully count all of the victims. The governing body of the church admits that more than 3,000 priests have been accused of sex abuse during the past 50 years. In the US, more than 3,000 Catholics decided to speak out, lawyer up, and file lawsuits as they fled the church. The church has paid out between $2 billion and $3 billion to victims to settle these claims, depending on who is keeping score. Eight dioceses have gone bankrupt due to an inability to pay these settlements. One widely cited count documents 6,115 priests who stand accused of sexually assaulting 16,324 minors. The actual count may be a fraction of this or it may be a multiple. What we know for sure, and the Vatican concedes, is that child abuse is simply not reported in much of the third world. If it were, one doubts that the church would still be growing there. (Watch the Philippines, where people are finding their voice and beginning to accuse predatory priests; the spread of Catholicism there has been stopped cold.)

It was thus with a jaded eye that I watched the cardinals assemble this week at the Sistine Chapel. Like everyone else, I was surprised to hear of white smoke on the second day of the conclave. I was moved to see that the man who emerged wore not the traditional large ornate cross of gold, but a simple cross of wood. The cardinals had chosen a man who as a cardinal had refused his palace and limo and who, upon his elevation, refused to ascend the papal throne. Instead, he greeted the cardinals standing up, as brothers. I was stunned to hear that Bergoglio chose the name of Francis after Assisi, who renounced his wealth, lived with the poor, founded the Franciscans, and spent a lot of time protesting outside the Vatican. Francis of Assisi was never ordained as a priest, much less a pope, and, as Robert Francis Kennedy noted, his is a name associated with the quest for social justice, not the papacy. In his first talk, Francis spoke simply and generously. He asked that people pray for him, not to him, as most popes do. My antipathy to the church aside, this had to be good news.

The world was stunned that an Argentinian had ascended to the papacy. This should not really be surprising, since half of all Catholics now live in Latin America. Shocking to me is that the cardinals chose a Jesuit, the order that has been a centuries-old pain in the Vatican ass. Jesuits are an intellectual order that values study and critical debate. They are big in the US and founded some of our best high schools and colleges, including Santa Clara, Boston College, and Georgetown. Jesuit priests take vows of poverty, which most gold-bedecked cardinals think is beneath them. They are famously disrespectful of authority, challenging the Vatican on contraception, abortion, gay marriage, the role of women, the need for political revolution, and all manner of causes (to be sure, not all Jesuits dissent on all of these issues; Francis appears to conform to Vatican thinking on all of them). Throughout the ages the Vatican has kept its distance from the Jesuits. Despite their size and influence, no order has been cast further from the center of Vatican power. The cardinals of Rome are about as likely to name a Jesuit pope as the United Nations is to name a North Korean Secretary General. Indeed, the Vegas oddsmakers did not even have Bergoglio on their list. When in his first words Francis noted that the cardinals had “reached a very long way” to find a new pope, most people assumed he was referring to the distance from Rome to Buenos Aires. He could as easily have been describing the reach it took for his fellow cardinals to support a Jesuit pope.

I confess to a soft spot in my heart for Jesuits because so many of my best teachers were of that order. In high school and in college, every single teacher who challenged me to think deeply and critically and to live a moral life was either Jewish or a fallen Jesuit. Three were former priests who decided that a vow of chastity was nuts.

Will Francis liberate the church? Nope. He is theologically conservative even if he modernized the Argentinian church. He made some dubious compromises with the military junta that will become public knowledge in the coming months. More fundamentally, he faces an unimaginable turnaround challenge rooted in both sex and money. The ongoing financial corruption of the Vatican’s Curia is deep and entrenched. Organized pedophilia and its cover up could kill the church, and arguably should. The average age of priests increases by about 10 months per year. Still, I wish Francis well, even if I would not miss his church if it vanished. I sympathize with those who have trouble kicking the Catholic habit, and for them, and for children who are forced into religious life before they can make a choice, I truly hope that Pope Francis fulfills the promise of his first glorious hours.



Culture, People, Political leaders

Statistical Storytellers

We now collect extraordinary amounts of data from weblogs, wifi sessions, phone calls, sensors, and transactions of all kinds. The quantities are hard to imagine: we created about five exabytes of data in all of human history until 2003. We now create that much data every two days.
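To make the scale concrete, the two-day figure can be restated as a sustained data rate. A quick back-of-the-envelope sketch (my arithmetic, using the figures above and decimal units, 1 exabyte = 10^18 bytes):

```python
# Restate "5 exabytes every two days" as a sustained data rate.
# Decimal units assumed: 1 EB = 1e18 bytes, 1 TB = 1e12 bytes.
EXABYTE = 1e18
SECONDS_PER_DAY = 86_400

rate_bytes_per_sec = 5 * EXABYTE / (2 * SECONDS_PER_DAY)
rate_tb_per_sec = rate_bytes_per_sec / 1e12

print(f"{rate_tb_per_sec:.0f} TB/s")  # about 29 terabytes per second
```

Every second, in other words, we now produce tens of terabytes of new data, around the clock.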

Deriving insights from this mountain of data is a big challenge for most organizations. People who are good at this are highly valued and in desperately short supply (career tip: study statistics). But mining data for insights is just the beginning: explaining what you have learned to non-statisticians is often an even bigger challenge. Visualizing data and forming it into a coherent story is a completely separate skill, and one that is evolving quickly. For my money, the pioneers in the field are McKinsey’s Gene Zelazny and the always impressive Edward Tufte. Modern masters include Hans Rosling and Garr Reynolds.

It’s not easy to visualize data effectively and it is even harder to weave it into a compelling story. Statisticians make lousy novelists – and vice versa (career tip: study fiction). It often takes a team to do a great job of analyzing and creatively presenting information. Done well, the results are artistic, for me anyway. Here are two good examples.

The first is on wealth and inequality in America — not an easy topic to portray.

The second is Hans Rosling’s classic TED Talk on global demographics.

Artists, Business, Business people, Economics, People, Technology

Denial is Bigger than Amazon

Talk to a book author, publisher, or retailer about the future of their business and the denials begin. “Never before have there been so many good books to read”. “Books are the backbone of civilization”. “Life without books is unimaginable”. As a cultural argument, this may be true, but as an economic one, this is a view only of the supply side.

An alleged ancestor of mine once observed that “facts are stubborn things”. Forget for a moment about our beloved books. Imagine a product you care little about: chemicals perhaps, motorcycles, or furniture. Imagine the industry that produces this product: the designers, manufacturers, marketers, distributors, and retailers. You can determine the health of this industry with a few key measures. We can apply these same vital signs to the publishing business to determine whether our loved one has long to live.

Sales. Not every industry with declining sales is in trouble, but most are. According to BookScan, adult nonfiction print unit book sales peaked in 2007 and have declined each year since. Retail bookstore sales peaked the same year and have also fallen each year according to the U.S. Census Bureau. I sold an online book business I had started at about that time for a simple reason: I couldn’t figure out how to keep growing it.

eBooks are growing fast but do not close the gap. Print sales dropped 17% from 2010 to 2011, while e-book sales grew 117% (I have borrowed liberally from a nice fact set gathered here). The result was a 5.8% decline in total book sales, according to the Association of American Publishers. Combined print and e-book sales of adult trade books fell by 14 million units in 2010, according to the Book Industry Study Group.
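Those three growth rates also pin down the market mix: given print down 17%, e-books up 117%, and the total down 5.8%, you can solve for the share of unit sales e-books must have held in the base year. A quick consistency check (my arithmetic, not a figure from the AAP):

```python
# With print share p and e-book share (1 - p) of base-year unit sales,
# a -17% print change and a +117% e-book change blending to -5.8% means:
#   0.83 * p + 2.17 * (1 - p) = 0.942
p = (2.17 - 0.942) / (2.17 - 0.83)  # implied print share of base-year units
e = 1 - p                           # implied e-book share of base-year units

print(f"implied e-book share of units: {e:.1%}")  # about 8%
```

The three percentages are mutually consistent only if e-books were still a small slice of the market, which matches the period described.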

Unit economics. OK, but you can make good money in shrinking industries. You may sell fewer items, but make more on each item you sell. So long as you add more value than cost, customers will happily pay you for your product, even in a shrinking market. So what is happening to the unit economics of books?

Start with prices. Book prices have gone up every year for more than ten years — a good sign, right? Not necessarily, because the number of books sold continues to shrink, as noted above, while the number of books published is exploding. Bowker reports that over three million books were published in the U.S. in 2010. 316,480 of these were new traditional titles – meaning that publishers introduce 867 new books every day.

But that is only the traditional tip of the publishing iceberg: more than 90% of the books published in 2010, over 2.7 million, were “non-traditional” titles. These are mainly self-published books, reprints of books in the public domain, and resurrected out-of-print books. They vary enormously. Some become best-selling light porn for housewives. Others are spam created by software that pirates an existing title in a few hours (one professor wrote a program that has produced 800,000 specialized “books” for sale on Amazon). Still others are highly specialized books not attractive to a publisher. When Baker & Taylor reports that book prices are going up, they are describing traditional, not “non-traditional”, publishing.


What about costs? The cost of bringing a traditional book to market is high and largely fixed, so the declining unit sales of the average book are a huge problem. According to BookScan, the best count we have, Americans bought only 263 million adult nonfiction books in 2011, meaning that the average U.S. nonfiction book now sells fewer than 250 copies per year (collectors of first editions take note: it is second editions that turn out to be rare!). Only a few titles become big sellers. In a random sample of 1,000 business books released in 2009, only 62 sold more than 5,000 copies, according to the New York Times.
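The “fewer than 250 copies” average can be turned around to show how many titles must be splitting those sales. A quick sketch (the 263 million is BookScan’s number from above; the title count is implied, not reported):

```python
units_sold = 263_000_000   # adult nonfiction units sold in the US, 2011 (BookScan)
avg_copies = 250           # "fewer than 250 copies per year" on average

# For the average to fall below 250, at least this many titles must be selling:
implied_titles = units_sold // avg_copies
print(f"{implied_titles:,} titles")  # over a million nonfiction titles in play
```

In other words, a fixed pool of readers is being divided among an ever-larger pile of titles, which is exactly the cost problem described above.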

So we have an industry that is shrinking and being divided among many more products and players, few of which can make money. Not a good sign.

Marketing and distribution. Well, maybe better marketing and more rational distribution can target new customers and grow demand while reducing costs. It happens in hard goods and industrial businesses all the time. Can we fix the book market?

Investments in marketing are very tough to justify. A publisher needs to acquire the book (pay an advance to the author), then develop (edit) it, design, name, print, launch, distribute, warehouse, sell, and handle returns (about a quarter of all books flow backwards from retailer to distributor or publisher — a huge cost avoided by eBooks). After all of this, the average conventional book generates only $50,000 to $200,000 in sales, which radically limits how much publishers can invest in marketing. Increasingly, publishers operate like venture capitalists, putting small amounts of money to work to see what catches on and justifies additional investment. Only proven (or outrageous) authors attract large marketing budgets.

Increasingly, book marketing is done by authors, not publishers. But how does an author market a book? The only way she can: to her friends and community. With too much to read, we read what our friends advise (women especially read with their friends; publishers are intensely interested in what reading groups choose in an era when there is no general audience for nonfiction and fiction is highly segmented). Some products catch on and a few become blockbusters — even some books that begin as unconventional titles without publishers.

So marketing is tough to fix — how about distribution? It’s a disaster. Retail is hopeless: your chances of finding any given book in a bookstore are less than 1%. For example, there are about 250,000 business titles in print. A small bookstore can carry 100 of these titles; a superstore perhaps 1,500. Stores are for best-sellers – if there are any. An online store can carry every title (but really, who needs 250,000 business books?). Online selling solves the availability problem, but enjoys such massive returns to scale that concentration is unavoidable. At last count, Amazon had a 27% share of all book sales (including, I estimate, about two thirds of all eBook sales — meaning that it monopolizes the only part of the book business that is growing).
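The “less than 1%” claim follows directly from shelf capacity. A quick sketch using the business-book figures above:

```python
titles_in_print = 250_000  # business titles in print (per the post)
small_store = 100          # titles a small bookstore can carry
superstore = 1_500         # titles a superstore might carry

# Odds that any one title is on the shelf:
print(f"small store: {small_store / titles_in_print:.2%}")  # 0.04%
print(f"superstore:  {superstore / titles_in_print:.2%}")   # 0.60%
```

Even the best-stocked physical store carries well under one percent of the category, which is why shelves end up reserved for best-sellers.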

Underlying demand. OK, fine — the industry is broken. But music was busted too: retailers evaporated, product proliferated and went digital, the labels’ value-added shrank, and piracy was a much bigger issue than it is in books. And despite it all, music is making a comeback: sales are up if you count concerts and ring tones. A few people make a living at it and a few become stars. Is this the future of books?

It doesn’t look that way for reasons articulated by Steve Jobs: although people still listen to music, many people have simply stopped reading books. Speaking about the Amazon Kindle, he argued:

“It doesn’t matter how good or bad the product is, the fact is that people don’t read anymore. Forty percent of the people in the U.S. read one book or less last year. The whole conception is flawed at the top because people don’t read anymore.”

Ouch. Setting aside Jobs’s known tendency to dismiss technologies that he later pursued, is demand for books actually dropping? Are we even reading the books we buy?

The evidence is overwhelming that we read fewer books than we used to. As summarized nicely in the New Yorker, the National Endowment for the Arts has since 1982 teamed with the Census Bureau to survey thousands of Americans about our reading. When they began, 57% of Americans surveyed claimed to have read a work of creative literature (poems, plays, narrative fiction) in the previous twelve months. The share fell to 54% in 1992, and to 47% in 2002. Whether you look at men or women, kids, teenagers, young adults, or the middle-aged, we all read less literature, and far fewer books.

This is not a small problem, nor is it confined to the book business. The N.E.A. found that active book readers are more likely to play sports, exercise, visit art museums, attend theatre, paint, go to music events, take photographs, volunteer, and vote. Neurologists have demonstrated that the book habit builds a wide range of cognitive abilities. Reading grows powerful and important neural pathways that not only make reading easier as we do more of it, but enable us to analyze, comprehend, and connect information.

But for the first time in human history, people all over the world are reading fewer books than they used to. Faced with compelling media alternatives, humans everywhere are abandoning the book. We read, but we are losing the habit of reading deeply. Having conquered illiteracy, we are now threatened by aliteracy. Reviving the book industry is only possible if we can revive the book itself.

Book Wars, Books, Classic Jam, Competition, Culture, e-Books, Economics, Technology

Are Men Overpaid?


More women than men now graduate from college and they earn better grades. But at every level of educational attainment, men still earn more money and the gap grows larger with time. Are we systematically underpaying women? If so, why would labor markets behave that way?

To some, the question is laughably simple: greedy male capitalists exploit women. This is true, but greedy capitalists exploit men too. Greed, not fairness, should lead to some rough market price for equivalent skill and experience, unless talented women are somehow different from anything else that is bought and sold in quantity. Asserting that bosses are greedy and biased doesn’t explain much. After all, greedy bosses cheerfully bid up the price of copper. At the risk of commodifying half the population, it is useful to ask why they are not bidding up the price of talented women.

There are, it turns out, many reasons that women are paid less than men. Most raise issues worth addressing, whether or not they conform neatly to the “glass ceiling” narrative suggested by the infographic on the right. The reasons for the pay inequality include:

In an influential NBER study of MBA graduates, Claudia Goldin and her co-authors accounted for differences in grades, course choices, and previous experience. Their conclusion: kids kill careers. They found that the women’s pay deficit arose almost entirely because women interrupted their careers more often and tended to work fewer hours. The rest was mostly explained by career choices: for instance, more women worked at nonprofits, which pay less. A subsequent study by scholars at CUNY, also published by NBER, largely confirmed this finding.

This explanation dodges the underlying question of why the financial penalties for taking time off are so high. After all, if between ages 30 and 35 I take a year of maternity leave and then work four days a week for six or seven years, I might sacrifice two years of work experience between ages 30 and 40, meaning that as a 40-year-old woman I have the same experience as a 38-year-old man who contributed zero to raising his kids. Is there really something so magical about the fourth decade of life that missing some work justifies a permanent economic penalty? A Labor Department study completed in 1992 concluded that time off for career interruptions explains only about 12% of the gender gap (not counting part-time work and experience effects, and, unlike Goldin et al., it did not focus only on MBAs).

There remains, of course, the threshold question of whether women should be the default caretakers and disproportionately bear the professional cost of raising children. In many households, of course, they do not — but this is still the exception.

Most studies do not ask why a profession earned less money to start with. After all, pay in many professions (including teaching) declined as they became more female and pay in some current professions (including law) appears to be going through something similar. Scholars who study these differences often have trouble sorting out historic patterns of gender discrimination from productivity or skill related pay differences.

In some cases, women also seem to choose firms within an industry that pay both men and women less (perhaps because they offer more flexible work arrangements). Janice Madden studied women stockbrokers, for example, whose pay is strictly performance-driven. She documented that women were assigned inferior accounts, and that when they were not, they performed as well as men; even so, only a relatively small share of the total pay gap was the result of this unequal treatment. Although the industry paid women quite well, women were more likely than men to work in smaller, less successful brokerages.

There is plenty of evidence that this example was not unusual, even though my response probably was. The problem begins with expectations: women expect to be paid less than men do. A 2012 survey of 5,730 students at 80 universities found that women expected starting salaries nearly $11,000 lower than their male classmates did. Women veterinarians, who bill their own clients at rates they set, were found to set their prices lower than their male colleagues and to “relationship price” more frequently — meaning they do not charge friends or clients for small amounts of work. A similar effect occurs in law firms, where a lucrative partnership often depends on billed hours. The most prominent scholarly work in this area is by Linda Babcock at Carnegie Mellon, whose book title captured her major finding: Women Don’t Ask. Babcock realized the problem when she noticed that the plum teaching assistant positions at her university had gone to men who had bothered to ask about them, not to women, who expected them to be posted somewhere.

The effect on women of not negotiating is huge. According to Babcock, women are more pessimistic about how much is available when they negotiate, and so they typically ask for, and get, less — on average, 30 percent less than men. She cites evidence from Carnegie Mellon master’s degree holders that eight times more men than women negotiated their starting salaries. These men were able to increase their starting salaries by an average of 7.4 percent, or about $4,000. In the same study, men’s starting salaries were about $4,000 higher than the women’s on average, suggesting that the gender gap in starting salaries might have been closed had more of the women negotiated. Over a professional lifetime, the cost to women of not negotiating was more than $1 million.
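Babcock’s percentages imply a base salary, and the one-time gap compounds over a career. A rough sketch (the 7.4% and $4,000 come from her figures above; the 3% annual raise and 38-year career are my illustrative assumptions, and this counts only the initial gap, so her $1 million lifetime figure, which includes later negotiations, sits well above it):

```python
# Implied starting salary: a 7.4% negotiated bump worth about $4,000
gap = 4_000
implied_base = gap / 0.074
print(f"implied starting salary: ${implied_base:,.0f}")  # about $54,000

# Floor on the lifetime cost: carry only the initial $4,000 gap forward
# with 3% annual raises over a 38-year career (illustrative assumptions).
total_gap = sum(gap * 1.03**year for year in range(38))
print(f"cumulative gap: ${total_gap:,.0f}")  # roughly $275,000
```

Even under these conservative assumptions, a single skipped negotiation costs a quarter of a million dollars before raises, bonuses, and later negotiations multiply it further.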

Fortunately, this is pretty easy to fix. Women can learn quickly that everything is negotiable. The Jamkid pointed me to a recent investigation by his teacher John List at the University of Chicago, showing that given an indication that bargaining is appropriate, women are just as willing as men to negotiate for more pay. List finds that men remain more likely than women to ask for more money when there is no explicit statement in a job description that wages are negotiable.

Although legislation and litigation will surely be useful to discourage and penalize employers who systematically discriminate against women at scale, as WalMart is alleged to have done, most of the forces that contribute to inappropriately low pay for women will not be remedied in court. Two policy remedies however, could make a large difference and are politically achievable.

California is the only state that currently requires paid parental leave. Initial evidence suggests that the act has increased average maternity leave from three to seven weeks (still barbaric by European standards) and raised the wages of new mothers by 6-9%. It’s a start, but we should join the modern world, and perhaps follow Denmark, which last I checked required that husbands take equal time away from work on the birth of a child in order to minimize the long-term impact on women’s earnings. To those who worry that this subsidizes overpopulation on a capacity-constrained planet, I would point to declining birth rates throughout Europe and the realization, slowly beginning to dawn on the world, that at the moment we face far more threats from low birth rates than from high ones.

It might also begin a deeper, more fact-based, discussion about the sources of economic inequality. And such a disclosure would quickly expose the most embarrassing economic fact of all: some men — but relatively few women — are shockingly overpaid.



Business, Economics, Labor, Politics

Manufacturing Myths in Palo Alto and Pittsburgh

The transition from fields to factories always mixes agony with hope. Families abandon land and traditions that often go back generations, move to cities, and reset their lives from sunlight to time clocks. Mass production industries flourish for a few generations. Life is hardly a bed of roses, but it is nearly always better or people would return to the farm — and nobody returns. Factory work means educated kids, savings, medical care, and consumer goods like refrigerators and cars. Manufacturing jobs may be dangerous or tedious, but they also deliver opportunity and hope to millions of people.

Eventually, of course, this all changes. Service industries like hospitality, health care, education, and banking grow faster than manufacturing. Consumers buy stuff made elsewhere. In the US at least, income disparity has increased and life experience has stratified until many people — including many with low incomes — have no understanding of manufacturing or factory work. Factories seem somehow dirty, Dickensian, and something to be avoided.

The result in the US is a schizophrenic attitude toward manufacturing. We are divided between those who see no future for factories and those who believe that manufacturing is vital to our economy. It’s Palo Alto vs. Pittsburgh. Sunny Silicon Valley typically sees the future in online technologies, clean tech, or biotech, and associates manufacturing with an economic time and place as far gone as the family farm. Pittsburgh’s workers, managers, policymakers, and professors argue passionately that the decline of the middle class and the decline of manufacturing employment are inextricably linked, and urge government action to restore our competitive position.

As both a confirmed Silicon Valley technologist and a former machinist, union man, and factory worker, I understand both world views. Perhaps more importantly, I have studied a recent report by my former colleagues at the McKinsey Global Institute that details the role of manufacturing in the US and global economies (click here to download the 170 page report or here for the summary. I highly recommend it and relied on it for most of the data and charts that follow). The punch line: Palo Alto and Pittsburgh both have it wrong, even when their prevailing myths contain elements of past truths. Manufacturing still matters, but for different reasons than either group believes.

Let’s start with Silicon Valley. Palo Alto sees America through a prism coated in software and web services, with an economic future built on service and information businesses. With the quaint and unprofitable exceptions of Tesla and the odd 3D printer, and notwithstanding the atonal musings of Andy Grove, we haven’t made anything in Silicon Valley since we drove out disk drives and semiconductors a generation ago. We view manufacturing as a relic of the industrial age, not as an engine of innovation. This belief is held in place by several myths, including:

1. Companies that make things have a lot in common.  

Manufacturing is not a sector: companies that make things vary enormously in the nature of their products, operations, and economics.

Some, like steel or aluminum plants, are incredibly energy-intensive and their products heavy. These manufacturers need to be near water (for transport), raw materials, and cheap power (Alcoa chairman and former Treasury Secretary Paul O’Neill once described aluminum to me as “congealed electricity”). Labor costs are completely secondary.

Pharmaceuticals, in contrast, live or die on product development. They need access to capital, technology, and skilled researchers. A furniture maker needs semi-skilled workers and access to distribution.

It is hardly useful to talk about manufacturing as a single thing — it really isn’t. The McKinsey report segments manufacturers into five groups and describes the requirements and challenges of each, illustrated on the right. The scheme illustrates fundamental differences between manufacturing sectors, although enormous variation remains even within segments.

These groups require vastly different skills and have fared quite differently in advanced countries, with the final group, the so-called labor-intensive tradables, not surprisingly accounting for the biggest share of job losses.


2.  Manufacturing is a commodity that contributes little to the US standard of living

Nope: manufacturing matters, just not like it used to. McKinsey found that throughout the developed world, manufacturing is declining in its share of economic activity but contributes disproportionately to a nation’s exports, productivity growth, R&D, and innovation.

As the chart on the right illustrates, manufacturing contributes to productivity growth (the basis for all increases in living standards) at about double the rate that it contributes to employment. It also produces spillover effects that are frequently not captured in data about manufacturing.

Manufacturing adds economic value, much of which is transferred to consumers in the form of lower prices (which are economically indistinguishable from a pay increase). On a value-added basis, manufacturing represents about 16% of global GDP, but accounted for 20% of the growth of global GDP in the first decade of this century.

Finally, manufacturing accounts for 77% of private sector R&D, which drives a huge share of technology innovation. It is far from clear that Silicon Valley would exist without it.


3. Our future is in knowledge-intensive services, not manufacturing.

Once again, our traditional categories are not helpful. Manufacturing frequently is a knowledge-intensive business. (It surprises many people to learn, for example, that there are more dollars of information than dollars of labor in a ton of US-made steel.)

Manufacturing is increasingly data intensive. Big Data is revolutionizing manufacturing products and processes, no less than services. Data enables manufacturers to target products to very specific markets. The “Internet of Things” relies on sensors, social data, and intelligent devices to rapidly inform how products are designed, built, and used. Huge data sets have also enabled new ways for manufacturers to gather customer insights, optimize inventory, price accurately, and manage supply chains.

This is not your father’s factory. Most US manufacturing jobs are not even in production. As the accompanying chart shows, they are service jobs linked to manufacturing or inside manufacturing companies.


4. Manufacturing depends on low cost labor, which is why it has fled overseas. 

This particular conceit is endemic in Palo Alto. McKinsey documents one possible reason: no sector, not even textiles, has shifted production overseas as fast as computers and electronics. Yet, as the chart at the right illustrates, some manufacturing sectors have actually added jobs during the past ten years.

There is a second dimension to this myth, however: that manufacturing jobs are factory jobs. As illustrated above, many jobs in manufacturing companies are service-like jobs, including R&D, procurement, distribution, sales and marketing, post-sales service, back-office support, and management. These jobs make up between 30 and 55 percent of manufacturing employment in the US. Much of the work of manufacturing does not involve direct product fabrication, assembly, warehousing, or transportation.

The final misunderstanding is that most factory jobs are unskilled or low-paying. In fact, manufacturers worldwide are currently experiencing chronic skill shortages. McKinsey projects a potential global shortage of more than 40 million high-skilled workers by 2020 — especially in China.

In short, the standard Silicon Valley view is much too narrow: manufacturing is and will remain a high value industry that contributes meaningfully to our standard of living. Manufacturing (some of it, anyway), is a competitive asset.

Move east to Pittsburgh, and you will quickly discover that a completely different manufacturing mythology prevails, focused mainly on job-creation. In these parts, the loss of manufacturing jobs is understandably considered a crisis for the US. Politicians pay homage to “good-paying manufacturing jobs” and blame the inability of a high school grad to get a factory job that supports a family, a home, and a motorboat on cheatin’ Chinese and union-bustin’ outsourcers. Dig a bit deeper, and you will discover that these beliefs are also grounded in economic myths, such as:

1. Manufacturing jobs pay more than service sector jobs.

This view often reflects the wishes of people with a history in “rust belt” manufacturing. In fact, manufacturing jobs pay very much like service jobs do — except at the very low end: manufacturing creates far fewer of the minimum-wage jobs that are common in hospitality and retail.

Part of the reason, of course, is that manufacturers can have low-value work performed overseas — not an option for McDonalds, Walmart, or others who deliver services face-to-face.

As shown on the right, manufacturing creates about the same number of jobs in each pay band as the service sector does, except that there are fewer low-paying jobs and a few more high-paying ones. An important caveat is that manufacturing jobs may be more likely to include benefits, which are excluded from this calculation.

That said, it is no longer a given that manufacturing is a source of better-paying jobs.


2. We should look to manufacturing for the jobs we need.

OK, but at least manufacturing creates decent jobs. Why not promote manufacturing to create jobs — even if they pay the same as service sector jobs?

The answer depends on your country’s stage of development, on domestic demand for manufactured goods, and on how robust your service sector is. For the US, the case for public policies favoring manufacturing is weak.

McKinsey documents what many have observed: manufacturing employment declines once a country reaches about $7,000-$10,000 of GDP per person, as illustrated on the right. This pattern holds both across and within countries. As a result, manufacturing jobs are declining everywhere except in the very poorest countries (even China is losing manufacturing jobs).

But not all low-cost-labor countries enjoy equivalent manufacturing sectors. More important even than stage of development are the level of domestic demand for manufactured goods and the robustness of the domestic service sector. The US and the UK have such large service sectors that we derive a smaller share of our GDP from manufacturing, even though in absolute terms both countries have robust manufacturing sectors.

3. Low wage nations like China are stealing our manufacturing jobs.

There are typically two parts to the belief that US jobs are flowing overseas. First is the underlying view that jobs are a zero-sum asset to be fought over like territory. This idea has political salience, but is economic nonsense. Jobs are the complex result of many things including the availability of public or private capital, legal and regulatory systems, local demand conditions, and managerial competence. Cheap Chinese labor is typically the least of it.

The other idea, however, is that we can somehow return to 1950, when unionized manufacturing jobs dominated the US economy. This is no more likely than a return to small family farming (and like those who romanticize what Marx aptly termed “the isolation of rural life”, those who idealize factory work often have suspiciously clean fingernails).

As the accompanying chart shows, manufacturing as a share of economic activity is in long term secular decline in all high and middle income countries worldwide — including China. It is only growing as a share of the economy in very poor countries. As the UN has pointed out, Haiti is in desperate need of sweatshops. Vietnam and Burma are growing manufacturing’s share of economic output — often at China’s expense.

Manufacturing matters enormously, just like agriculture does. But it is not growing as a share of economic output. (McKinsey highlights one interesting exception to this rule. Sweden has maintained manufacturing as a share of its economy by targeting high growth sectors and especially by investing twice as much in training as other EU countries. Most importantly however, they devalued the krona against the Euro to make exports competitive — effectively taxing imports).

4. Companies build plants overseas in search of cheap labor

There was a time when labor costs were a determining factor in locating production facilities. This is much less true today, when location decisions are driven by many factors other than labor costs, as the chart on the right illustrates.

Depending on how a company competes and whether it is locating research, development, process development, or production facilities, its location criteria may or may not turn on factor costs such as labor. Proximity to consumers or to talent may matter more. In some cases taxes matter; in others, access to suppliers matters.

The rising cost of commodity inputs and transportation during the past two decades has altered this calculation. Steel, for example, was about 8% iron ore cost and 81% production cost as recently as 1995. Today ore is more than 40% of the cost of a ton of steel and production costs are only 26%. Steel companies care much more about the cost of ore than the cost of labor.

Likewise transportation costs have skyrocketed with energy prices and infrastructure demands (the US grows highway use by about 3%/year and grows highways by about 1%/year. Anyone living here knows the result). Producers from P&G to Ikea and Emerson now are forced to locate plants near customers to minimize transportation costs. As a strategy for plant location decisions, labor arbitrage looks very 1980s.
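The congestion arithmetic compounds quickly. Here is a back-of-the-envelope sketch using the rough growth rates above (illustrative figures only, not official highway statistics):

```python
# Illustrative only: compound the post's rough growth rates for highway
# use (~3%/year) vs. highway capacity (~1%/year) over two decades.
use, capacity = 1.0, 1.0          # normalized to today's levels
for year in range(20):
    use *= 1.03
    capacity *= 1.01

print(round(use, 2))              # traffic: ~1.81x today's level
print(round(capacity, 2))         # lane-miles: ~1.22x
print(round(use / capacity, 2))   # ~1.48x more traffic per lane-mile
```

A two-point gap in annual growth rates leaves each lane-mile carrying nearly half again as much traffic after twenty years, which is why producers increasingly locate plants near their customers.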

5. If consumers would only buy local, we could restore our manufacturing base. 

Politicians and union leaders say this all the time and it is sheer idiocy. Most would not be caught dead in my German car, which was designed and made in Tennessee, but beam proudly at the sight of a Buick van imported from China.

High productivity manufacturing benefits consumers, as companies pass on savings to Americans in the form of lower product costs. As illustrated by the chart to the right, most consumer durables cost today about what they did in the 1980s — and quality is much higher. Economists have estimated that Walmart, Target, and Costco reduce retail prices by 1-3% each year because they pass to consumers savings extracted from manufacturers (this, by the way, is a big reason that manufacturing continues to shrink as a share of our economy. We pay less for our stuff  and more for services like education and health care.)

Americans say we believe in “Made in USA” campaigns, but as consumers, we are famously delusional. When surveyed, we profess to favor locally produced merchandise. But our wallets don’t lie: we buy high quality, low cost stuff regardless of where it comes from.

So how do we grow US manufacturing? Same as always: by creating innovative materials, processes, and products. McKinsey sees “a robust pipeline of technological innovations that suggest that this trend will continue to fuel productivity and growth in the coming decades”. Of course most innovations are hard to foresee. One reliable source of innovation turns out to be anything that reduces weight, such as nanomaterials, some biotech, lightweight steels, aluminum, and carbon fiber. It turns out that although we buy more stuff each year, the total weight of our purchases actually declines, because nearly everything we buy, including cars and airplanes, weighs less than it used to.

Manufacturers have come to appreciate the power and the necessity of innovation. During the Clinton administration debates over CAFE standards, car company engineers soberly advised us that the theoretical limit of internal combustion engines was a 10-15% improvement over the current average of 17 miles per gallon. Today these companies have already doubled that efficiency and speak openly about doubling it again, even as they invest in non-combustion solutions that are even more efficient.

In short, manufacturing matters for different reasons than it used to. It used to be a plentiful source of unskilled jobs; today its value is as a driver of innovation, productivity improvement, and consumer value. It’s an exciting part of the economy, even if it cannot solve every problem we face related to job creation and economic growth.

7 Reasons, Business, Economics, Education, Labor, Politics, Technology

The iPhone Gets Nuanced

iPhone 5 Keyboard

I got the iPhone 5 because it was free. The loathsome AT&T charged me $300 and I sold the old iPhone 4 on eBay for that amount. I hate AT&T, but their 4G LTE is really fast in the Bay Area and factory unlocking a phone under contract to enable overseas SIMs and free tethering is trivially easy. The new phone is like the old one but bigger, faster and thinner — reinforcing my view that Apple post-Steve is an incremental innovator, not a disruptive one.

Most of the changes are the result of a new operating system, not new hardware. But one feature is blowing me away — totally changing how I use my phone. The new feature is keyboard dictation, which appears on all iOS 6 keyboards, whether you have the new iPhone or not.

By dictation, I emphatically do not mean Siri. Siri is a dog that performs a few well-chosen show tricks and inspired at least one hysterical advertising spoof. Siri is very useful for directions, reminders, OpenTable reservations, and a good laugh. Siri entertains — but dictation delights.

Dictation has been around for a decade and on iOS since the 4S and third generation iPad, but it was always more trouble than it was worth. But suddenly, dictation not only works, it works shockingly well. For text messages, emails, tweets, and even first drafts of longer documents it is massively faster to dictate than to type (unfortunately I still need to type blog posts the old way. Maybe that explains the 60 day hiatus…).  I have a hard time understanding why Apple is not using its ad dollars to promote dictation, not Siri — unless the processing costs are huge and they are losing money on the feature.

What changed? In a word: Nuance, plus a massive investment in cloud infrastructure. Nuance Communications is the public company behind Dragon Dictate — which has been the market leader in desktop speech recognition for at least the past 15 years (the company was founded in 1992 out of SRI as Visioneer, known mainly for early OCR software). Neither Apple nor Nuance talks about it, but it looks to many people like Apple has licensed its dictation software, including Siri’s front-end interpreter, from Nuance. One sign: before Apple bought Siri, it used to carry a “speech recognition by Dragon” label (earlier, Siri had used Vlingo, which apparently did not work as well). Not only that, but Nuance has built several speech recognition apps for the iPhone and iPad that work exactly like the speech recognition built into the iPad and iPhone 5.

This is interesting in part because Apple never licenses critical technology for long. It insists on controlling its core technology from soup to nuts, so many people assume that Apple has considered buying Nuance. The problem is that Nuance holds licenses with many Apple competitors who would disappear if Apple bought the company. Apple would need to massively overpay for the asset — something they never do. More likely, Apple will hire talented speech recognition people and build its own proprietary competing product, just like it did with maps when it declared independence from Google. In this case, figure that dictation will regress for a year or two, just as maps have done, because real time, accurate speech recognition makes maps look simple. Plus Nuance protects its patents aggressively and these patents are, according to some writers, not easy to avoid.

Although Google is avoiding them nicely; Android speech recognition is also outstanding. How do they do it? The Google way: throw talent at it. Google hired more PhD linguists than any other company and then they hired Mike Cohen. Cohen is an original co-founder of Nuance and if anyone can build voice recognition without tripping on the Nuance patents, he can. Apple appears likely to pursue a similar course.

Mobile dictation works by capturing your words, compressing them into a wave file, sending it to a cloud server, processing it using Nuance software, converting it to text, and sending it back to your device, where it appears on your screen. Like all good advanced technology, it passes Arthur C. Clarke’s third law: it is indistinguishable from magic. The tricky bit is the software processing, which has to have a rich set of rules based on context. The software decides on the meaning of each word based not only on the sound pattern, but on the words it heard before and after the word it is deciding upon.
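The round trip is easy to sketch. The function names below are hypothetical stand-ins (neither Apple nor Nuance publishes this API), but the shape of the pipeline is as described above:

```python
# A minimal sketch of the dictation round trip. All function names are
# hypothetical stand-ins, not Apple's or Nuance's actual APIs.

def capture_audio() -> bytes:
    """Record from the microphone (stubbed here as canned bytes)."""
    return b"...pcm samples..."

def compress(audio: bytes) -> bytes:
    """Compress the waveform before upload to save bandwidth."""
    return audio  # a real client would use a speech codec

def recognize_in_cloud(payload: bytes) -> str:
    """Server side: run the speech model and return text (stubbed)."""
    return "I went to the capital to see the Capitol"

def dictate() -> str:
    # capture -> compress -> upload -> recognize -> display
    return recognize_in_cloud(compress(capture_audio()))

print(dictate())
```

The expensive step is the middle one: every utterance from every user hits the server farm, which may explain why Apple is quiet about promoting the feature.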

This is highly recursive logic and nontrivial to execute in real time. Try saying “I went to the capital to see the Capitol”, “I picked a flower and bought some flour”, or “I wore new clothes as I closed the door” and you begin to understand the problem that vexes not only software, but English learners everywhere. Apple dictation handles these ambiguities perfectly — meaning that it either gets the answer right, or it realizes that there are multiple possible answers, takes a guess, and hovers the alternative so that you can correct it with a quick touch.
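To see the idea, here is a toy version of context-based disambiguation using invented bigram counts. It is vastly simpler than what the real software does, but the principle of scoring a homophone against its neighbors is the same:

```python
# Toy illustration of context-based disambiguation: pick between
# homophones by scoring each candidate against the word before it.
# These "counts" are invented for the example, not real model data.

BIGRAMS = {
    ("a", "flower"): 9, ("some", "flour"): 8,
    ("a", "flour"): 1,  ("some", "flower"): 1,
}

def choose(prev_word: str, candidates: list[str]) -> str:
    # Pick the candidate that co-occurs most often with the prior word.
    return max(candidates, key=lambda w: BIGRAMS.get((prev_word, w), 0))

print(choose("a", ["flower", "flour"]))     # flower
print(choose("some", ["flower", "flour"]))  # flour
```

A production recognizer conditions on words both before and after the ambiguity, which is what makes the logic recursive: each decision can reopen its neighbors.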

It takes a little bit of practice to use dictation well. It helps to enunciate like a fifth grade English teacher and to learn how to embed punctuation. The iPhone iOS 6 User Guide has a list of available commands. Four are all you need: “comma”, “period”, “question mark”, and “new paragraph” (or “next paragraph”). You can also insert emoticons: “smiley” :-), “frowny” :-(, and “winky” ;-). For anything else, speaking the punctuation usually works: “exclamation point”, “all caps”, “no caps”, “dash”, “semicolon”, “dollar sign”, “copyright sign”, “quote”, etc.

Overall, the experience of accurate mobile dictation is a magic moment — like the first time you use a word processor or a spreadsheet (for those who recall typewriters and calculators), or the first browser or email (yeah, we didn’t used to have those, either). Give it a try. Apple has done something amazing and for once, actually under-hyped it. 

Mobile, Technology

George McGovern 1922-2012

George McGovern. All photos (c) NYT

When George McGovern ran for president, I was the age the JamKid is now. He was then, and remained, a remarkable and vastly underappreciated American. He was a decorated war hero who had seen gruesome combat and calmly led a massive crusade against the Vietnam war. He was a professor with a PhD in History who would never have dreamed of calling himself “Dr. McGovern” (unlike, say, Germany, where most politicians are eager to run as Herr Doktor). He was a democratic Democrat, whose commission reset the party rules and stripped the insiders of much of their power. More than anyone, McGovern closed the smoke-filled rooms (and frankly, made the Party more difficult to govern and more dependent on large donors). He was the first director of the Food for Peace program and believed profoundly in helping the poor and desperate, even in the face of evidence that foreign aid did not promote economic self-sufficiency. He was an early proponent of dietary guidelines and as early as 1973 warned of the growing amount of sugar in the US diet.

I met McGovern a few times and had dinner with him once. He was a modest, self-effacing guy, who knew a surprising amount about labor history. I learned that his dissertation was on the 1913 Colorado coal strikes. He also knew a lot about farming, not simply because he was from South Dakota, but because he had a lifelong aversion to hunger after seeing Italians starving during his wartime service. I learned that he was probably the last person to ever speak to Bobby Kennedy — whose assassination shook him, and me, even more deeply than the loss of JFK.

Fighting sugar in 1975

McGovern was widely reviled. During his run for the presidency in 1972, the New York Post referred to him as “George S. (for surrender) McGovern” in virtually everything it wrote. He was not a great campaigner, although he brought hundreds of people into politics and many of them stayed — including Bill Clinton. His hastily-considered choice of Missouri Senator Tom Eagleton as his running mate ranks with McCain’s choice of Sarah Palin among textbook examples of disastrously poor vetting. Despite an Obama-like grassroots campaign led by campaign manager and future Senator Gary Hart, McGovern lost 49 states to Richard Nixon, the worst landslide in modern US history. Although he later joked that “for many years, I wanted to run for the Presidency in the worst possible way and last year, I did”, it had to hurt to lose an election to a man he knew to be deeply dishonest and corrupt.

With campaign staffer, Bill Clinton

In later years, the former minister, professor, Congressman, global food program director, Senator, and presidential candidate ran a 150 bed inn in Stratford, Connecticut. After the business went bankrupt, he reflected often and publicly on the role of government regulations and lawsuits in constraining small business. At one point, he surprised conservatives when he wrote in the Wall St. Journal that “I … wish that during the years I was in public office I had had this firsthand experience about the difficulties business people face every day. That knowledge would have made me a better U.S. senator and a more understanding presidential contender.”

Part of why politics in the US works is that people as courageous and talented as George McGovern are drawn to public service. I worry that this is becoming less true. In part due to reforms McGovern championed, parties are weaker and the path to public office is now less dependent on political parties and more dependent on large financial backers than ever before. We are at risk of drawing more heat than light to the national stage. It will be ironic and unfortunate if the result of George McGovern’s wonderful career is that we see fewer like him in the future.

Elections, People, Political leaders, Politics, Reform

Store Closing: the Death of Brick and Mortar Retail

In 1997, I had an idea: if I could aggregate millions of used, rare, and out of print books from around the world on a single website, I could enable people to find and buy books that were otherwise impossible to locate. Like hundreds of others with similar ideas for selling things online, I started an ecommerce company.

As Atlantic writer Derek Thompson points out, that was the year that the US enjoyed an odd service sector convergence: 14 million Americans worked in retail, 14 million in health and education, and 14 million in professional & business services.

Fifteen years later, the landscape has changed. “Books You Thought You’d Never Find” is a silly idea. Book retailers are dying. The company I founded has made an impressive effort to transition from retail to services.

The employment picture reflects these changes. Health care jobs have grown by almost 50%, professional/business services grew almost 30%, but, as the chart below illustrates, retail grew less than 3%, adding only 26,000 jobs a year. There is mounting evidence that retail employment is about to decline sharply. Fifteen years from now, these may be the good old days for brick and mortar stores.

Retail revolutions are nothing new. Boutiques challenged general stores throughout the nineteenth century. Department stores arose, starting with Wanamaker’s in 1896, and challenged boutiques. Starting in the 1920s, car-friendly strip malls challenged main streets. In 1962, Walmart, Target, Kmart, and Kohl’s each opened their first store and initiated the era of big box retail. In 1995, Jeff Bezos incorporated Cadabra — but changed the name to Amazon at the last minute, in part because it started with an “A” and most internet search results were alphabetical.

Today, e-commerce is not just killing some stores – it is killing almost all stores. There are very few successful brick and mortar retailers left. Consider the obvious losers of recent years — none much lamented.

Demographic changes are also putting pressure on stores. Urbanization hurts strip malls. Baby Boomers no longer have kids at home. Their kids are marrying later and delaying having their own children, meaning fewer are buying houses that need to be updated and furnished. As these Millennials hit their peak spending years, they are completely accustomed to shopping online. For many Millennials, shopping malls were a teenage social venue — not a place to buy stuff. It is no accident that shopping malls have yet to emerge from the recent recession.

There is good reason to expect this change to accelerate. Physical retailers are typically very highly leveraged and operate on narrow profit margins.  Material declines in their top lines make them quickly unprofitable. As stores close or reduce selection, more customers become accustomed to shopping online, which accelerates the trend. E-commerce maven turned VC Jeff Jordan recently cited the example of Circuit City, which was “preceded by just six quarters of declining comp store sales.  They essentially broke even in their fiscal year ending in February 2007; they declared bankruptcy in November 2008 and started liquidating in January 2009″.  Nor, Jordan notes, did the bankruptcy of Circuit City help out Best Buy any more than the loss of Borders helped Barnes & Noble. “Not even the elimination of the largest competitor provides material reprieve from brutal market headwinds.”
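To see why the decline is so abrupt, consider a toy income statement with invented numbers (not Circuit City's or any real chain's financials): when fixed costs are high, a small revenue decline wipes out a thin margin entirely.

```python
# Illustrative only: why thin margins plus high fixed costs make
# physical retail fragile. All numbers are invented for the example.

variable_cost = 0.70   # cost of goods etc., scales with sales
fixed_cost = 28.0      # rent, debt service, staffing: doesn't scale

def profit(rev: float) -> float:
    return round(rev * (1 - variable_cost) - fixed_cost, 2)

print(profit(100.0))   # 2.0  -> a 2% margin in a normal year
print(profit(94.0))    # 0.2  -> a few quarters of declining comps
print(profit(90.0))    # -1.0 -> now losing money
```

A 6% top-line decline erases nine-tenths of the profit; a 10% decline puts the chain in the red, and liquidation follows quickly from there.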

There are a few bright spots in the wasteland of retail stores. The need for fresh food and last-minute purchases means that 7-11, Trader Joe’s, and the corner produce grocer have enduring customer demand. Customers may want to touch some high ticket items before buying them, which gives high margin retailers like Williams Sonoma and Apple an opportunity to offer fun, informative, hands-on retail experiences (although both of these companies do a growing share of their business online, and Apple lets you pay for items under $50 on your phone and walk out of the store without ever talking to a salesperson or cashier). Stores like Home Depot that do a significant share of business with contractors, who are time-sensitive, not price-sensitive, and need a large number of items quickly, have sustainable value propositions.

Online commerce enjoys enormous advantages, from vastly larger selection, much lower fixed costs and debt, to a more customized shopping experience and 24/7 operations. Small wonder that revenue per employee at Amazon is nearing a million dollars, whereas at Wal-Mart — once a paragon of retailing efficiency — it is under $200,000. Hundreds of websites, not simply Amazon, have benefitted from the explosion of online retail — and tens of thousands of small retailers use Amazon or eBay’s commerce infrastructure to power specialized businesses. Of course UPS and Fedex benefit as well, in part because they make money on both the initial sale and on the subsequent return of wrong sizes and unwanted gifts.

Since the dawn of commerce in the Nile delta, humans have purchased goods in physical markets. No doubt we will continue to purchase, or at least preview, stuff in stores. But if e-commerce achieves a fraction of the opportunity it currently has in front of it, retail stores as we currently know them will become a thing of the past. It is hard to imagine that we will miss them for long.

Book Wars, Business, Competition, e-Books, eCommerce, Economics, Technology

Memo to the New Chancellor: Saving UC Berkeley

Dear Newly Appointed Berkeley Chancellor:

Congratulations! Even though as of this writing, you have not yet been named, you take over the leadership of UC Berkeley at a critical time. At the end of your tenure, the world’s premier public university will either have found a sustainable path forward or will have entered a period of long-term decline. Do us a favor — do not screw this up.

You arrive at a moment when higher education is in wonderful and overdue ferment. Online education is challenging your traditional business model and unsustainable tuition increases. Badges and other alternative credentials threaten your historic right to certify talent. Most of all, Berkeley, like other public universities that serve as engines of knowledge creation and social mobility, is under unprecedented financial pressure.

Berkeley, in particular, has a lot at stake. It is, as I noted here, an amazing public institution, despite its bottomless capacity for self-parody (as you know, my wife is a dean at Cal). 48 out of 52 Berkeley doctoral programs rank in the top 10 of their fields nationally — the highest share of any university in the world. By any measure: NSF Graduate Research Fellowships (#1), National Academy of Sciences members on the faculty (#2 behind Harvard), members of the National Academy of Engineering (#2 behind MIT), membership in the American Philosophical Society, the American Academy of Arts and Sciences, or winners of National Medal of Science — Berkeley excels. It is by a considerable distance earth’s finest public university.

And it serves a public mission. Berkeley’s single proudest claim, ahead even of its 24 national rugby championships, is that it enrolls more students on Pell Grants than all of the Ivy League schools put together. A Pell Grant is a scholarship based on financial need. By serving academically qualified students on Pell Grants, Berkeley ensures that smart, hard-working kids from low income families have access to a top-flight education.

You may regret the flow of private funds into a public university, but you cannot and should not try to prevent it. Actually, you will devote a great deal of time to encouraging private donations so that Berkeley can remain accessible to middle income students who are not eligible for Pell Grants. This requires building organizational muscles that atrophied when Berkeley, like most public universities, avoided the intellectually distasteful but indispensable work of raising private funds. Berkeley is still building the endowment required to sustain these efforts. The endowment matters because over time, money buys quality — just ask Stanford. It is no accident that although many state universities undertake serious research and offer outstanding educations, only three of the “Public Ivies”, Texas, Michigan, and California, make the list of America’s best-endowed universities.

You realize, of course, that the endowment data shown here are misleading in one important respect: the University of California is less a university than a federation of ten highly autonomous campuses ranging from prestigious broad spectrum research institutions like Berkeley, UCLA, and San Francisco to campuses with pockets of excellence like San Diego, Irvine, and Davis, to schools like Riverside and Merced that are not easily distinguished from the State University.

These data also illustrate why the Regents hired your boss. Of the great public universities, only the University of Texas took endowment-building seriously, making former UT President Mark Yudof irresistible as the current president of UC.

But Berkeley, along with San Francisco and UCLA, has begun to focus on endowment building. It is no surprise that taken together, these three campuses now hold three-quarters of all UC endowment funds. Faced with the choice of compromising academic excellence, raising tuition to levels that reduce access to higher education for many students, or undertaking a covert privatization to maintain the finances of their institution, all three of these schools have raised tuition and quietly sought private funds. Your job is to continue this course.

Soft privatization is not without its management challenges, however, especially at Berkeley. First, Cal is a state-owned enterprise. The barely functional California government retains full and largely unwelcome control over your budget and governance, even though it contributes less each year to your operating revenue. Second, your boss happily taxes richer campuses like yours to support poorer ones, so to raise a dollar of endowment, you will often have to attract more than a dollar of donations. Third, strong faculty governance provisions, while occasionally improving decision quality, mostly serve to protect the comfortably and correctly tenured and prevent needed program rationalization. Your biggest risk is not privatization — it is paralysis.

To get Berkeley fully upright and sailing, you need to mend both a broken income statement (you lose money every year and must stop the bleeding, even if the state makes good on a new round of cuts) and a broken balance sheet (your endowment may be larger than other UC campuses, but it is still pathetic. Look at the data.) Nothing else you do will matter unless you set audacious goals to fix your core economics.

I respectfully suggest two:

Granted, $150 million each year does not plug your entire budget shortfall — but it is a serious start that would be noticed by alumni and other donors. You will need to continue to rationalize campus operations and consolidate weaker units or athletic programs (remember Berkeley had a Mining program until a brave chancellor decided to face reality).

With everyone else in California, I wish you the very best of luck in your new position. Just don’t mess with the rugby team. Some cows really are sacred.


Classic Jam, Competition, Economics, Education


Coaches are often known as masters of technique, but really they teach life skills: teamwork and competition, ferocity and empathy, solidarity and resistance. Great coaches inspire performance that players cannot produce without them. They model leadership by keeping their players’ interests first and by knowing when to admonish, encourage, or reprimand. Many of us recall our best coaches as fondly as we do our best teachers. The best video example of this is here.

        Ned Anderson, UC Berkeley 1971

Robbie plays junior varsity rugby. Rugby is by some measures the fastest growing youth sport in the United States, fueled in part by growing concerns about the long term costs of its distant relative, gridiron football. US football branched off of rugby a century ago (the games were once so similar that in 1924 a US football team won the Olympic gold medal in rugby — something that is inconceivable today). The sport originated at the Rugby School in England and remains popular throughout the British Commonwealth. The movie Invictus drew a famous contrast with soccer, which it termed “a gentleman’s sport played by hooligans”, whereas rugby is “a hooligan’s sport played by gentlemen”. When you see a large, aggressive guy covered in mud refer to the referee as “sir”, you grasp the point.

Coaches have made Northern California the center of youth rugby in the United States. Rugby is the oldest sport on the University of California Berkeley campus where, starting in the 1970s, three coaches began producing world-class rugby teams (without ever resorting to recruiting scholarships). Doc Hudson was the first Cal rugby coach to take the game to a new level by building a globally competitive Cal side. Hudson recruited and coached a team that in 1967 and again in 1971 toured Australia and New Zealand and won more games than they lost (roughly the equivalent of a New Zealand baseball team doing that in the US today). The captain of the team was a tall kid named Ned Anderson, a lineman who played lock for Cal. Anderson also represented Cal on the all-University of California side that achieved a winning record on a similar international tour during the summer of 1970.

In 1975, at the age of 29, Anderson succeeded Doc Hudson as the fifth head rugby coach in Cal history. Anderson’s teams continued Cal’s winning rugby tradition through the ’70s and, in 1980, won the first official national collegiate championship, beating Air Force, 15-9. Under his leadership, the Bears also won three more titles from 1981 to 1983. In 1982, Anderson hired Jack Clark, one of his former players, as an assistant coach and in 1984 swapped jobs with him. Clark became the head coach and Anderson his assistant until 1985 and again from 2003-2008. Clark won an astonishing 22 more national titles, for a lifetime coaching record of 523-68-5 — an 88% win rate. For more than three decades, Cal has dominated rugby more than any team has ever dominated a US collegiate men’s sport, as the graph below illustrates.

Because many Cal alumni settle in Northern California and because rugby players are especially passionate about their sport, Northern California is blessed with outstanding rugby coaches. Ned Anderson is now in his sixties and walks with a slight stoop. When he sauntered onto the rugby pitch and quietly began coaching kids earlier this year, parents had no idea that the soft spoken gentleman in charge of our kids was a legendary coach and a member of the Cal Rugby Hall of Fame.

But the players figured it out immediately. Robbie would get in the car after practice and shake his head, saying “this new coach totally knows what he is doing. I don’t know who he is, but he is really good.” Anderson is not only coaching 16 year olds, but doing it so modestly that we had to Google him to learn anything about his past.

The results have been impressive. Robbie’s team was undefeated going into the Northern California playoffs this weekend in Sacramento. It takes nothing away from the players to note that they give Anderson a lot of credit for their success.

On Saturday, they played a very tough quarterfinal game. It was breezy but hot on the pitch and the opposing team was good. They scored first by crossing the goal line and touching the ball down (no longer required in football, but the origin of  “touchdown”). Robbie’s team recovered and decisively won a hard-fought game. Unfortunately, Ned Anderson missed the victory due to an unmovable scheduling conflict. He would have been proud to see his boys come back to win and disappointed to see them lose the championship match the next day to a very athletic (and well-coached) team from Sierra Foothill. It was the team’s only loss the entire season. Had he been there, Anderson would likely have congratulated his boys on being the #2 team in Northern California and begun work right away to improve their record.

Culture, People, Sports

Is Amazon Inside Apple’s OODA Loop?

John Boyd was a legendary US fighter pilot during the Korean War who later became a fighter pilot instructor. He had a standing bet with his students: he would meet you in the air at 30,000 feet and you would get on his tail. He would reverse the positions and get you in his guns in 40 seconds or he would give you 40 dollars — about $375 today and a lot of money for an Air Force captain. Boyd challenged anyone and everyone including students, other instructors, and the best fighter pilots from around the world. Many took the challenge, but Boyd never lost. He was the best fighter pilot in the world and many believe the best ever.

As a Colonel, John Boyd developed a framework to help train combat fighter pilots that became known as the OODA Loop (for observe, orient, decide, and act). He argued that the key to tactical success in combat is to obscure your intentions from your opponent while you simultaneously clarify and anticipate his intentions. By operating at a faster tempo in rapidly changing conditions, you both inhibit your opponent from adapting or reacting to changes and suppress his awareness of your actions. You cause an opponent to over- or under-react to uncertainty, ambiguity, or confusion. In military parlance, adopted by many technology strategists, you get inside their OODA Loop.

As an example, Barack Obama has been well inside Mitt Romney’s OODA Loop for the past month on issues of gender equality. His statements have frequently caused Romney to react in ways that Obama has clearly anticipated and exploited. But that’s another post. Today’s question is, has Amazon penetrated Apple’s OODA Loop with respect to eBooks? It sure looks like it.

The story begins in 2001, when Amazon observes Apple’s iTunes business model. Amazon CEO Jeff Bezos must have been awed watching Steve Jobs turn digital music, which was free and widely pirated, into a money machine. Jobs integrated a device (iPods), a store (iTunes), and a wholesale deal with music labels for content under which they agreed to let him set the retail price of tracks (it would be $.99). Within a few years, Apple was the world’s largest music retailer and record stores were a distant memory (although a very fond one). Steve Jobs had figured out how to compete with free — the first but not last technology leader to perform this trick.

Amazon copied Apple in building its book market. It built a device (the Kindle) tied to its store, and it bargained with book publishers for a wholesale deal for content. Like Jobs, Bezos insisted that publishers let him set the retail price, which he targeted at $9.99 per book. It is likely that publishers in some cases set wholesale prices higher than that and Bezos lost money on early book sales, but as the market grew, his pricing power grew with it and the full cost of each eBook declined as well. Bezos knew that when Apple entered the book market late, it would be forced either to a) stick to its traditional wholesale model, where he had a significant first mover advantage, knew more about online retailing, and held a brand advantage (do you really think “book” when you think iTunes?) or b) try to compete by attracting publishers and letting them control the product price. Bezos knew he would win either way.

Bezos also knew that “talent copies, genius steals” did not apply to Steve Jobs, who never copied anybody. He had a pretty good idea that Apple would try to convince publishers to adopt “agency pricing”, which, in contrast to wholesale pricing, gives the publisher the right to set the retail price and pays the retailer a commission. Jobs knew that agency pricing would attract publishers who resented price pressure from Amazon and that publishers backed by Apple would force Amazon to raise ebook prices. But only the largest publishers were strong enough to threaten to withdraw content from Amazon — most stuck with their wholesale pricing deals. Bezos raised prices reluctantly and selectively to keep large publishers from defecting. That’s why some ebooks now cost $14.99 on Amazon, while most cost $9.99.
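The arithmetic behind the two models is worth making concrete. Here is a minimal sketch; the $13.00 wholesale cost and the 30% agency commission are illustrative assumptions, not figures from the record:

```python
def wholesale_margin(retail_price, wholesale_cost):
    """Wholesale model: the retailer sets the retail price and keeps
    (or eats) the spread over the publisher's wholesale price."""
    return retail_price - wholesale_cost

def agency_margin(retail_price, commission=0.30):
    """Agency model: the publisher sets the retail price and pays the
    retailer a commission (30% assumed here for illustration)."""
    return retail_price * commission

# Illustrative numbers only: a bestseller wholesaled at $13.00
print(round(wholesale_margin(9.99, 13.00), 2))  # -3.01: the retailer loses money at $9.99
print(round(agency_margin(14.99), 2))           # 4.5: the retailer earns a risk-free commission
```

The asymmetry is the point: under wholesale, the retailer can buy share with loss-leader pricing; under agency, the retailer can never lose money, but it also can never undercut a rival on price.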

Better yet, Bezos also knew that the manner of Apple’s entry into the book market looked a lot like price-fixing. Price fixing rarely gets you into trouble when, as in Apple’s music or Amazon’s book terms, you force retail prices lower, but collaborative arrangements that lead to higher prices to consumers frequently incur the wrath of the Department of Justice Antitrust Division. Bezos also understood that Apple could fall afoul of laws against price-fixing, even though Amazon, not Apple, has an effective eBook monopoly. A monopoly is generally not illegal unless you use it to jack up prices.

So what does Amazon do the day the Department of Justice discloses its investigation into Apple’s alleged price fixing? It lowers eBook prices. Apple has an estimated 15% share of the eBook market (courtesy, one suspects, of simple iPad users who don’t know any better). That share is heading nowhere but down under the agency model, which is why Apple should give it up as part of a quick settlement with the DOJ. I would not want to be eBook strategist Eddy Cue at Apple this week.

But Apple’s is not the only OODA loop in Bezos’ crosshairs. He is also deeply inside the heads of publishers, whose cockpits are blaring with enemy radar lock-in sirens — the last sound many fighter pilots ever hear. As he often does, Clay Shirky said it best:

Publishing is not evolving. Publishing is going away. Because the word “publishing” means a cadre of professionals who are taking on the incredible difficulty and complexity and expense of making something public. That’s not a job anymore. That’s a button. There’s a button that says “publish,” and when you press it, it’s done.

Amazon has demonstrated a much greater ability than Apple to observe, orient, decide, and act to dominate the eBook market. This is the second sign of peak Apple in as many weeks and another indication that Jeff Bezos has taken over from Steve Jobs as the reigning strategist of the technology world. That said, eBooks is not the most important market where these two companies will go head to head. That would be payments, because nobody else has 100 million credit cards on file. Bezos should think very hard about this one. Apple owns a big piece of mobile and has the moves to be on his tail with guns blazing in about 40 seconds.

Book Wars, Books, Business, Competition, Culture, e-Books, eCommerce, Economics, People, Technologists, Technology

Lenin’s Rope: Universities Help Disrupt Universities

Lenin famously bragged that “Capitalists will sell us the rope with which we will hang them.” It would surely gall him to learn that the art of destroying capitalists with their own products has been mastered not by a militant, vanguard-led proletariat but by entrepreneurial capitalists. It appears that even universities, finally, are getting the hang of it and learning to sow the seeds of their own destruction.

As an earlier post detailed, universities rarely go out of business. This is thanks to the magic of a three-part lock (the power to select, credential, and signal talent) that secures their position and protects them from institutional challenge. For centuries, universities have enjoyed the exclusive right to allocate valuable social capital.

It takes decades for universities to establish these privileged positions, which is why, with rare exception, the top decile universities of fifty years ago are the top decile universities today. This is partly due to the place university degrees have come to hold in our culture. It is an unquestioned (but economically threatened) article of faith among middle class families, including mine, that providing children access to higher education is essential to giving them a full range of life choices. Most people are disinclined to risk their kid’s future on educational institutions with highly plausible training programs but unproven power to select, credential, and signal. (Yeah, I’m looking at you, Minerva Project).

The paradox is that universities clearly add value (after all, college degree holders earn a million dollars more over their lifetimes than non degree holders and many economists declare it our single most competitive industry) but much of this “value” has nothing to do with learning, which is what employers presumably value. And if the credential cannot communicate what you know, then its signaling effects diminish. More accurate and effective approaches to credentialing and signaling become plausible. As detailed earlier, there are dozens of startups ramping up high quality educational programs that are either free or very low cost. But without credentials accepted by employers, all of the free online courses in the world will not translate into increased economic opportunity for graduates. To make these programs viable, they need a portable credential that is widely accepted by employers but not controlled by universities. Who would devise such a thing?

Universities, of course. As Kevin Carey describes in the current Chronicle of Higher Education, the future is full of badges, not unlike the ones you earned as a scout. UC Davis, together with the Mozilla Foundation and the MacArthur Foundation, is prototyping the development of digital “open badges” that validate “skill, quality, or interest”. Badges would be online and would allow a potential employer to access details of a student’s written work, test results, videos, etc. Open badges would communicate a great deal more than “BA in Economics from Sonoma State”, which is what employers get today. The article failed to record any sense of irony among the rope makers at the University of California.

Under the Mozilla Open Badge framework, a badge “is a symbol or indicator of an accomplishment, skill, quality or interest” used to represent skill or achievement. Badges support a wide variety of learning beyond traditional classrooms including online courses, after-school programs, as well as work and life experiences. Badges not only signal achievement to peers, potential employers, educational institutions and others, but they are a way to recognize and document informal learning as well. Fully developed, badges should help people transfer learning across jobs, industries, and places and portray a richer, more complete profile of an individual’s professional strengths.

Mozilla expects there to be many types of badges. Some capture specific skills, something traditional degrees do quite poorly. Badges can support specialized and emerging fields that do not yet credential learners. They can document a much larger diversity of skills, social habits, motivations, etc. Badges potentially represent an alternative to traditional degrees: a way to enhance identity and reputation among peers, to find peers and mentors with similar interests, and to formalize the camaraderie, teams, and communities of practice that today often form around universities or professional associations.

Open digital badges, unlike the scouting ones, are valuable because of their metadata. They link to videos, documents, or testimonials demonstrating the work that led to earning the badge. They link to the issuing authority, which can be a school, a professional body, an international credentialing agency, a community of professional practice, a course, or a company. The supporting metadata reduces the risk of gaming and builds in a system of formal or implicit validation. In this system, a digital badge is backed by metadata that explain the badge, the issuer, the issue date, criteria for earning the badge, the earner’s work or evidence behind the badge, and the current validity of the badge, which, unlike a college degree, can be set to expire.
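The metadata just described can be sketched as a simple structure. This is a hypothetical example; the field names and URLs are illustrative, not the exact Open Badges schema:

```python
from datetime import date

# A hypothetical badge assertion mirroring the metadata described above.
# Field names and URLs are illustrative, not the official Open Badges schema.
badge = {
    "name": "Data Analysis in Python",
    "issuer": "",           # the issuing authority
    "issued_on": "2012-04-01",
    "criteria": "",   # what it took to earn the badge
    "evidence": "",    # the earner's work behind the badge
    "expires": "2014-04-01",                # unlike a degree, a badge can expire
}

def is_current(badge, today):
    """A badge is current if it has no expiry or the expiry is still ahead.
    ISO date strings compare correctly as plain strings."""
    expires = badge.get("expires")
    return expires is None or today.isoformat() < expires

# e.g. is_current(badge, date(2013, 1, 1)) is True; after expiry it is False
```

The evidence and criteria links are what separate a badge from a line on a résumé: a verifier can follow them back to the issuer and the underlying work.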

Mozilla is creating an Open Badge Infrastructure (OBI) to serve as the core technical scaffolding for a badge ecosystem that supports a multitude of issuers, badge earners, and badge displayers. The infrastructure includes the specifications required to issue or display badges, as well as each user’s “Badge Backpack”: a repository for badge data, accessible only to the user, where they can view badges, set privacy controls, create groups, and share badges. Startups like BadgeStack, which gamifies badges, can build OBI-compliant sites and award apps.

Open badges are a promising idea and one deserving of investigation by companies, entrepreneurs, universities, and investors, because they threaten traditional university credentials.

Badges are not a sure thing. At first they will complement university degrees, not substitute for them. Badges face nontrivial privacy and trust issues — many of which Mozilla is addressing quite well. They are an essential foundation for a portfolio that documents a range of professional skills, achievements, experiences, and relationships.

One of the largest challenges facing open badges is the cold-start problem: early adopters will have very few badges and employers will be unfamiliar with them. These are the sort of market development problems that entrepreneurs are good at conquering, although that makes them no less formidable. Mozilla may crack this market wide open.

Business, Competition, Economics, Education, Labor, Social, Technology

Draw This…

Henry Blodget is the former head of Internet research at Merrill Lynch. (Background: once upon a time there was something called Internet research. And once upon a time there was something called Merrill Lynch.) NY Attorney General Eliot Spitzer charged Blodget with touting stocks in public while sending emails disparaging those same securities. Spitzer was later thrown out of office after his foes revealed him to be “Client 9” in an expensive Wall St. prostitution ring. Both men are now exiled from their former professions and both have become media entrepreneurs.

Blodget now leads Business Insider and last night presented a very good piece of research on the growth of mobile at a conference in San Francisco. Towards the end, he describes Draw Something, a game app that has created a buzz around here. Blodget argued that Draw Something, by a startup called OMGPOP, reveals just how explosive the combination of mobile, social, and games can be. He reminded us that Draw Something launched just six weeks ago.

On March 16, TechCrunch ran a headline: “Zynga No Longer Has The Biggest Game On Facebook By Daily Users. OMGPOP Does.” Five days later, Zynga bought the startup (which had been trying to mix games, mobile, and social since 2007) for $180+ million. Early backers include Y-Combinator, Marc Andreessen, Kevin Rose, and a bunch of other folks whose periscopes are slightly longer than yours or mine. Note that Zynga may have made a really smart acquisition or, quite likely, overpaid for a game with a short life span. We’ll know soon enough.

Here is the full Blodget presentation. You cannot make this stuff up…

Business, Competition, Economics, Mobile, Social, Startups, Technology

Peak Apple: Understanding the Foxconn Deal

Apple has quickly raised worker wages to address the highly publicized problems with working conditions in its supplier network. The decision protects Apple’s pristine brand and costs the company next to nothing. It cleverly exploits the high-minded principles and low-level economic literacy of those of us who are its devoted customers.

A series of well-researched articles by Charles Duhigg in the New York Times that included a long article on sweatshop subcontractors put Apple on the defensive. It appears that hard working people risk their lives to make sure that our iPads are shiny. Apple responded by asking the Fair Labor Association to investigate working conditions at its Chinese suppliers. Back home, Mike Daisey’s professional self-immolation magnified the controversy by forcing NPR to retract a series of assertions about Apple’s Chinese suppliers. This week, Apple CEO Tim Cook visited a huge Foxconn assembly plant in China as the FLA issued its report. Cook knows a lot about manufacturing both as a global supply chain expert and as a former factory worker.

Cook played the event perfectly. When the FLA reported that, shockingly, Chinese factory workers endure long hours for low pay, he promptly gave workers a raise by pledging to cut hours without cutting pay. The audience applauded, the curtain dropped, and the world returned to its apps.

The story displays a confidence and an ability to turn crisis into yet another advantage that makes me wonder whether we are approaching peak Apple. Apple raised Chinese wages not simply because it cares so much, but because it can afford to care so little. They know that their move causes bigger problems for their competitors than it does for them. Apple cares less about Chinese labor costs than Dell, HP, Google, and many others who produce lower margin products that use more Chinese labor. Apple spends about $8.25 per iPhone on Chinese labor — a completely irrelevant number in the lifetime economics of an iPhone. Had Chinese workers targeted Apple for a campaign to increase their wages, they would have chosen well.

Is Apple’s move good for Chinese workers? Sure — for some of them anyway. Apple’s decision does not mean that Chinese workers will necessarily take home more money — just that they will work fewer hours. This may not sit well with workers at Foxconn and other subcontractors, most of whom move from the countryside, live in company housing at the factory, and want to maximize their earnings, not minimize their working hours. Duhigg’s excellent reporting cited a factory where workers rioted when hours were reduced under pressure from a western customer, and acknowledged:

The other (workers) we talked to all seemed to regard it as a plus that the factory allowed them to work long hours. Indeed, some had sought out this factory precisely because it offered them the chance to work more.

Does China benefit from this decision? Not necessarily. It surprises many Americans to learn that manufacturing employment in China is actually declining, as jobs move to Vietnam and Cambodia (the great promise of the campaign this week by Nobel Peace Prize winner Aung San Suu Kyi is that Burma will attract urban factories to relieve the punishing life of rural peasants). With the Apple settlement raising labor costs, peasants in adjacent countries can cheer: soon they too can trade in their hoes and hats for white coats and the opportunity to polish iPads. Nobody said economic progress was beautiful.

Apple’s decision to polish its “Think Different” brand built on images of Gandhi and Cesar Chavez is tribute to both the company’s high moral tone and to its willingness to indulge the low economic literacy of its Western customers. Apple sells products to people who prefer a world in which every kid can go to college and work eight hour days. Apple customers hate sweatshops, even those that are demonstrable vehicles of economic progress. We have a hard time acknowledging that countries in South Asia, sub-Saharan Africa, or Haiti demonstrably need more sweatshops. We commit what economist Harold Demsetz memorably called the Nirvana Fallacy: we compare the choices facing overseas workers to the alternatives we have, instead of to the alternatives they have.

As economist Eric Crampton notes:

Harold Demsetz warned in a beautiful piece of economic writing back in 1969 against what he called Nirvana Theorizing. He said there that we can’t say markets fail just because they deliver outcomes that we don’t like; rather, we have to compare the outcomes of markets to real-world achievable alternatives. We can’t just assume Nirvana on the other side of the scale. And, most of the arguments against sweatshops effectively assume Nirvana on the other side: if only we were to ban sweatshops or, more realistically, impose bans on the import of products produced by sweatshop labour, the employees would suddenly be freed to pursue fulfilling careers or to go and get that Bachelor’s in Cultural Studies that they’ve always wanted… It’s only the evil sweatshops that are keeping them from achieving their dreams.

If only it were that easy. For proper comparative institutional analysis, we really have to look at how working in a sweatshop compares with what else these workers could be doing.

Inconveniently for the Nirvana view, thousands of people voluntarily line up outside of Foxconn’s gates when factory jobs open up. Those clamoring to work at Foxconn know that factory work is tough and sometimes dangerous. But, like factory workers everywhere, they know that farm work is worse. The Times documented a horrific aluminum dust explosion in a Foxconn plant. This is not something to take lightly (my grandfather, uncle, and kid brother all died on the job or from occupational illness; occupational safety has never been an abstract problem to me), but the risks of factory work are nothing compared to the risk of illness (especially malaria), injury, or poisoning faced by Chinese peasants. Just about everyone who has tried both farm and factory prefers the latter. I worked in several factories; most of the jobs were boring and some were wildly unsafe (I thought for a while that Westinghouse had a “hire the handicapped” policy because so many of my co-workers were missing fingers or limbs. D’oh). But two days spent harvesting hay under idyllic conditions hurt me worse than any factory job I ever did. Former paper and aluminum mill worker Tim Cook also understands this extremely well.

Crampton cites recent work by Benjamin Powell on standards of living associated with sweatshop work showing that in most of the countries he studied, the average wages were equal to or better than the national average. In poor countries like Cambodia, Haiti, Nicaragua and Honduras, sweatshops paid twice the national average. This is why countries like Bangladesh, where 80% of the population lives on less than $2 per day, need more sweatshops, not fewer. Crampton reminds us of Nick Kristof’s reporting on workers in a garbage dump in Phnom Penh. Kristof gets it:

Another woman, Vath Sam Oeun, hopes her 10-year-old boy, scavenging beside her, grows up to get a factory job, partly because she has seen other children run over by garbage trucks. Her boy has never been to a doctor or a dentist, and last bathed when he was 2, so a sweatshop job by comparison would be far more pleasant and less dangerous.

I’m glad that many Americans are repulsed by the idea of importing products made by barely paid, barely legal workers in dangerous factories. Yet sweatshops are only a symptom of poverty, not a cause, and banning them closes off one route out of poverty. At a time of tremendous economic distress and protectionist pressures, there’s a special danger that tighter labor standards will be used as an excuse to curb trade.

When I defend sweatshops, people always ask me: But would you want to work in a sweatshop? No, of course not. But I would want even less to pull a rickshaw. In the hierarchy of jobs in poor countries, sweltering at a sewing machine isn’t the bottom.

Tom Harkin, a progressive, pro-labor Senator from Iowa, introduced a bill in Congress in 1992 that understandably prohibited the import of products made by children under age 15. In 1997, UNICEF investigated the effects of the Harkin Bill and found that even though the legislation had not taken effect, the mere threat had

“…panicked the garment industry of Bangladesh, 60 per cent of whose products — some $900 million in value — were exported to the US in 1994. Child workers, most of them girls, were summarily dismissed from the garment factories. A study sponsored by international organizations took the unusual step of tracing some of these children to see what happened to them after their dismissal. Some were found working in more hazardous situations, in unsafe workshops where they were paid less, or in prostitution.”

Once again, sweatshops are hardly the bottom of the heap — indeed the export shops targeted by Harkin are on average the better places to work. Most child labor is local production or rag picking, so if you ban exports, you may push some of the world’s most vulnerable children into the garbage dump, begging, child prostitution and starvation. This is not an argument for unfettered child labor or dangerous factories — just a note that exploitation is relative not absolute, protection is never free, and economic progress proceeds in steps not leaps.

Apple understands that paying slightly higher wages simultaneously pressures their competitors, appeals to western decency, and exploits economically ill-considered aversion to sweatshop labor. But in technology, companies with more competitive advantages than they can possibly exploit should worry about hitting their peak. Having watched first Microsoft and now Google decline after amassing what once seemed to be insurmountable advantages, it is time to ask whether peak Apple is now in sight.

Business, China, Competition, Economics, Labor, Mobile, Politics, Technology

Will Technology Burst Higher Education’s Bubble?

Imagine a market with incumbents whose core processes are unchanged since medieval times, held together by huge federal subsidies and protected by a system of self-accreditation designed to exclude rivals. Imagine that the resulting enterprises exploited their monopoly power by overcharging customers and wasting the resulting revenue on guaranteed lifetime employment and discretionary funds for senior employees, on massively expensive professional sports teams, and on protecting an overstaffed and comically inefficient bureaucracy worthy of the Indian railroads. Who would put up with such a mess?

Welcome to American colleges and universities, which are both the envy of the world and ripe for disruption. It’s a big business (about $350 billion in the US alone) and a really soft target. It is, after all, run by tenured scholars whose idea of competition is a snarky jibe in the faculty lounge. The dons have allowed their costs to rise not only faster than family incomes, but faster than health care costs, which ain’t easy. That they have lasted this long is due to the monopoly they enjoy on certifying talent. As Kevin Carey noted in a recent New Republic article,

The historic stability of higher education is remarkable. As former University of California President Clark Kerr once observed, the 85 human institutions that have survived in recognizable form for the last 500 years include the Catholic Church, a few Swiss cantons, the Parliaments of Iceland and the Isle of Man, and about 70 universities. The occasional small liberal arts school goes under, and many public universities are suffering budget cuts, but as a rule, colleges are forever.

Small wonder that thousands of startups are now focusing on the market for higher education. Even the guy who discovered disruption, Clayton Christensen, has declared that online technologies will thoroughly disrupt education at all levels, predicting that half of all K-12 classes will be taught online by 2019.

During the past five years, online higher education has gone mainstream. The Sloan Foundation estimates that more than 30% of all enrolled college students, some six million people, participated in online learning at accredited U.S. colleges and universities in 2011 and that the U.S. market for online higher education grew 12-14 percent annually between 2004 and 2009.

Many educators are realizing that the explosion of online education is not simply due to its lower cost; it is often higher quality as well. Sometimes this is because of dramatically higher investment in course and instructor development. Christensen notes that the largely online University of Phoenix spends about $200 million each year developing online teachers and highlights a key difference with traditional universities: “…Harvard defines research as creating new knowledge, while The University of Phoenix defines it as finding new ways to provide knowledge. It blows the socks off of us in their ability to teach so well.”

Online education is quickly killing the in-class lecture, since recorded lectures have obvious advantages. Students can watch them when they are ready — after they are off work or when the kids are asleep. They can replay the confusing bits or skip the obvious parts. Most important, however, is that the lectures themselves are more likely to be delivered by world class teachers like Norman Nemrow, whose online accounting course has been taken by several hundred thousand students, or by Walter Lewin, the MIT physicist whose lectures are shown on television. Supposedly over five million people have taken his intro to physics course (watch his promo reel below to see why. What? Your professor did not have a promo reel?)

It is not only lectures that fare better online. Instructors in online classes can measure outcomes and tailor the course to the needs of each student. Modern learning management systems provide live seminars with multi-location live video, backchat, social media, and many other capabilities not available in a classroom. Quizzes can be graded instantly so that both faculty and students get feedback fast enough to change course. Algorithms distill questions from thousands of students so that they can be answered either live or off-line. Students can undertake projects online with “classmates” who have never been on the same campus — or even the same country.

This is a time of vast experimentation with online education technologies. Two years ago, the Khan Academy began to attract huge notice as a self-tutoring tool based on the brief lectures of one talented teacher. A year ago, 2Tor closed a large Series C and got very serious about providing major universities with technology, marketing, and course development assistance. A month ago, Google’s self-driving car maven Sebastian Thrun gave the talk at DLD in Munich that launched Udacity after 160,000 students from around the world completed his Stanford-based online computer science course (268 students achieved perfect scores on all the quizzes). In October, Knewton, an education technology startup, raised $33 million in its 4th round of funding to roll out its adaptive online learning platform. Earlier this year, Apple launched a suite of authoring and course scheduling tools to allow universities to move content to iTunes University. Only yesterday ShowMe launched its 2.0 platform that takes the Khan Academy model and makes it social — anyone can use the platform to teach anything.

Universities are developing their own online education initiatives, often plagued by a terrifying thought: what if online education is just another form of digital media? They know full well that as books, movies, and music moved online, few incumbents survived.

The response of universities to the rise of online education is like the response of Barnes and Noble to online bookselling. Faced with the rise of Amazon in the 1990s, the chain store simply created When Amazon launched the Kindle, it launched the Nook and merchandised it in its increasingly irrelevant bookstores. But the winner of this contest will of course be the company that is not forced to carry the cost of several hundred bookstores. Open Yale, MIT’s OpenCourseWare and MITx courses, Stanford’s massively open online courses including Coursera, and many others like them all share the Barnes and Noble problem: they need to price their offerings to pay for extraordinarily high fixed cost institutions. Their disruptors do not.

Barnes and Noble charges customers for a wide range of activities unrelated to book purchases. It designed many of its stores as community centers where authors could meet readers. It built fun sections for kids to discover books. It integrated Starbucks in many locations. But books are simply not going to be sold in stores much longer, so these activities added more cost than value and ended up making the problem worse. Likewise, many institutions of higher education support multiple activities with tuition: research, sport, socialization, teaching, and credentialing. Online education exposes the fault lines between these different businesses, just as Amazon did with Barnes and Noble.

As Malcolm Gladwell once put it:

Social scientists distinguish between…treatment effects and selection effects. The Marine Corps, for instance, is largely a treatment-effect institution. It doesn’t have an enormous admissions office, grading applicants along four separate dimensions of toughness and intelligence. It’s confident that the experience of undergoing Marine Corps basic training will turn you into a formidable soldier. A modeling agency, by contrast, is a selection-effect institution. You don’t become beautiful by signing up with an agency. You get signed up by an agency because you’re beautiful.

Top-tier universities produce top graduates by accepting applicants who are very likely to succeed — they trade heavily on selection effects. I once published a proposal in the campus newspaper challenging the Dean of the Harvard Business School to compare people who were admitted to HBS but did not attend with those who were admitted and did attend, to see if the school was adding value or simply selecting people who were going to succeed anyway. He showed little enthusiasm for my research proposal, although other scholars (including Alan Krueger, who now chairs Obama’s Council of Economic Advisers) have since documented these selection effects.

Degrees also create signals, whether anybody learns anything or not. Imagine two job candidates who 25 years earlier attended the same school and took the same courses. One failed every course and did not graduate. The other got straight As and graduated with honors, but has forgotten 100% of the material. Neither currently remembers anything learned in college. But if this were all the information you had, you would hire the successful student — you’d be crazy not to. You have a signal that this person is capable of hard work and learning, even if none of it is retained 25 years later. In labor markets, signaling matters a lot, and university degrees are powerful signals. Online education will not quickly change this — although the creation of alternative credentialing mechanisms may.

Who decides what signal a degree sends? Employers do. If Google or Goldman begin hiring software engineers or managers who received their professional degrees online, the value of elite professional degrees will come into question. As a future post will detail, this is very likely to happen, since the knowledge and skill imparted by most professional degree programs can more easily be standardized, sequenced, and captured on standardized tests than undergraduate education can. Universities are rushing to offer professional degrees online because students are willing to pay high tuition for a degree that will significantly increase their earnings. If competition from online professional degree programs pressures schools to reduce either tuition or admissions requirements, universities will see their professional degree cash cow led to slaughter. For this reason, better-known universities hope fervently that dozens of competing online degree programs will emerge, saturate the market, and preserve the signaling value of the premium degrees they offer.

As high-quality education moves online, it will kill the weakest first: those schools that charge more and deliver less. Elite research universities will be forced to trade heavily on their brand and the signaling value of their credential, which may become easier as online programs proliferate and education markets become even more global. The experience of going to college may never be reducible to interactive social media — but classroom teaching and learning surely is.

Best of JamSideDown, Books, Competition, Culture, e-Books, Economics, Education, Politics, Reform, Social, Startups, Technology

Two Questions for California’s College Students

Perhaps the only thing more depressing than the sight of hundreds of students and faculty on a “99 Mile March” to defend California’s system of higher education from budget cuts is the failure of the state legislature to vigorously defend the engine of the state’s wealth and economic mobility. The protests now underway in Sacramento demonstrate why California’s system of higher education is far too important to entrust to faculty, students, or legislators.

Any group challenging cuts in public spending from which it benefits directly has a political problem if taxpayers see costs without benefits. So the group wishing to protect itself from the cuts needs to answer two questions: Who else benefits from the spending? And how can we increase our public contributions? If the marchers trying to channel their inner Cesar Chavez had taken the first question seriously, they would have built a coalition. Had they been courageous in their answer to the second, they would have built a movement. Chavez, not incidentally, understood this and knew that those who ask only what their country can do for them produce the political equivalent of a large yawn.

Who Else Benefits?
California’s three-tiered system of higher education is justifiably the envy of 49 states. If Hollywood and Silicon Valley are the symbols of California’s economic prowess, our colleges and universities are the engines that make them possible. Consider:

Berkeley remains the premier UC campus and an amazing public institution, despite its bottomless capacity for self-parody (disclosure: my wife is a dean at Cal).

The combined effects of this system on the California economy are astonishing. Community college students who earned a vocational degree or certificate in 2003-2004 saw their wages jump from $25,856 to $57,594 three years after earning their degree, an increase of over 100 percent. Census data indicate that the average college graduate earns a million dollars more during their working lifetime than a high school graduate. A master’s degree adds another $400,000, a doctorate another million on top of that, and a professional degree another million still (average lifetime earnings are $1.2 million for those with only a high school diploma, and $2.1, $2.5, $3.4, and $4.4 million for BA, master’s, PhD, and professional degrees, respectively). Depending on what you count and how you count it, a dollar invested in California higher education returns $3, $5, or $14, but it doesn’t really matter which — it’s a great public investment. And because many of the benefits accrue to individuals, not all of the investment needs to be public; students can and should bear some of the cost, so long as financing is available with repayment schemes adjusted for students who pursue lower-paying occupations.
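For readers who want to check the arithmetic, the credential-by-credential increments follow directly from the lifetime-earnings figures quoted above. A back-of-the-envelope sketch (the dictionary simply restates the numbers in the text):

```python
# Average lifetime earnings by highest credential, in millions of dollars,
# as quoted in the text above.
lifetime_earnings = {
    "high school": 1.2,
    "bachelor's": 2.1,
    "master's": 2.5,
    "doctorate": 3.4,
    "professional": 4.4,
}

# Marginal value of each credential over the one below it.
levels = list(lifetime_earnings)
for lower, higher in zip(levels, levels[1:]):
    delta = lifetime_earnings[higher] - lifetime_earnings[lower]
    print(f"{higher} over {lower}: ${delta:.1f} million")
```

Running this recovers the increments in the text: roughly $0.9 million for a BA over a high school diploma, $0.4 million for a master’s, $0.9 million for a doctorate, and $1.0 million for a professional degree.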

This system of higher education is the goose whose eggs make California the Golden State. And playing the part of Aesop’s short-sighted fool who slaughters the creature for short-term gain is California’s legendary and dysfunctional state legislature. This august body last year cut $500 million from UC, $500 million from CSU, and $400 million from the California Community Colleges to close the 2011-12 state budget deficit. California is not alone in this trend: between 2002 and 2010, states cut funds for public research universities by 20 percent in constant dollars, according to a report issued by the National Science Foundation.

Every Californian should be alarmed at the proposed “trigger cuts”, which would slash another $300 million, unless taxpayers pass a special ballot initiative to increase state taxes — which are already the nation’s highest. Where does all the money go? Pension spending aside (and it cannot be put aside for long), it goes to health care and to prisons. California is the only state that spends more on prisons ($10.7 billion in the current budget) than on higher education ($9.8 billion). Both are dwarfed by the state’s $42 billion Health and Human Services budget.

How Can We Contribute?
On these facts alone, students and faculty should be able to rally public support, not just each other. Serious and sustained public support, however, requires more. What, precisely, are students and faculty offering to contribute to make increased public investment even more compelling? Two ideas could completely change the conversation.

1. Use campuses year round.
The University of California and CSU teach either three ten-week quarters or two fifteen-week semesters each year, meaning that 33 large and expensive campuses are fully utilized less than 60% of the time. Adding a fourth quarter or a third semester would produce revenue that would go a very long way toward paying for fixed assets, including facilities, administration, admissions, counseling, health care, technology, and similar services that are staffed year round, even though paying students attend only part of the year.
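The utilization figure is simple calendar arithmetic. A sketch, assuming a 52-week year and the teaching calendars described above:

```python
WEEKS_PER_YEAR = 52

# Teaching weeks under current and expanded calendars.
calendars = {
    "three quarters (current)": 3 * 10,
    "two semesters (current)": 2 * 15,
    "four quarters (proposed)": 4 * 10,
    "three semesters (proposed)": 3 * 15,
}

for name, weeks in calendars.items():
    utilization = weeks / WEEKS_PER_YEAR
    print(f"{name}: {weeks} teaching weeks, {utilization:.0%} of the year")
```

Both current calendars put campuses in session 30 weeks, about 58% of the year; a fourth quarter or a third semester raises that to roughly 77% or 87%.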

Even if campuses completely eliminated so-called “Summer Session” revenue and did not alter faculty teaching loads and research expectations (meaning that new faculty would be hired, or current ones paid more, to meet increased teaching demand), year-round operations would generate hundreds of millions of dollars for the system as a whole (and, based on limited public data on UC, would more than make up for the loss of state funding). Without reducing degree requirements or creating a cut-rate three-year undergraduate degree, this approach would enable a student who undertook full-time studies to complete the work for an undergraduate degree in three years or less, and it would substantially relieve pressure for additional tuition increases.

Faculty should embrace year-round operations, which would not need to reduce time for research at UC but would offer all faculty the option to earn additional teaching income. They should rally in support of an approach that would enable most departments to add new faculty positions, which will simply not happen otherwise. Year-round campus operations would be copied very quickly by other top-flight universities if UC and Cal State took the lead. As financial stewards of state educational resources, university leaders should admit that operating campuses 30 weeks per year (a schedule originally designed to make sure students made it home to help with the harvest) is economically wasteful, socially indulgent, and politically untenable.

2. Repay a public investment with public service.
As a candidate, Barack Obama proposed a national service program that would serve as “a central cause of my presidency”. It hasn’t, but it should. He proposed to spend $3.5 billion to expand AmeriCorps to 250,000 volunteers and to double the size of the Peace Corps. He envisioned tuition tax credits for college students who performed community service while in school. His plan supported promising nonprofit community startups, expanded the GI Bill, and created a Classroom Corps to help teachers and students in high-need and underserved schools. Obama proposed a Health Corps to improve public health information and outreach in areas with inadequate health systems, such as rural areas and inner cities, and a Clean Energy Corps to promote weatherization, renewable energy projects, pollution cleanup, tree planting, and park maintenance. Other volunteers would serve veterans or help communities with disaster preparation.

Not only is national service a good idea, but the politics of enacting it are not nearly as awful as they are on many other issues (recall that the GI Bill, one of the most popular and successful social programs in US history, was initiated by the American Legion together with FDR). In this spirit, the federal government could, for a modest amount of money, guarantee loans and offer tuition tax credits proportional to the public service performed. By accepting only freshman applicants who have performed at least one year of community service, California could set an example for the rest of the country and begin the long overdue process of rebuilding public support for its activities.

One of the faculty spokespersons for the current band of protesters is a leftist professor, a tenured friend who for decades has rarely missed a good demonstration. Not long ago, I asked him what he believed today that he did not believe in 1968, when his political habits were formed. He thought for a moment before declaring that “1968 was a very good year”.  But he and his campus comrades appear to have forgotten the lessons of ’68: that lasting political change requires public education, coalition building, and a commitment to public service. Why do I suspect that the last people to figure this out will be those closest to the problem?

Classic Jam, Education