Written by John Goodman
How did humankind ever get out of what Thomas Hobbes called the state of nature? That's the place where life was "solitary, poor, nasty, brutish, and short."
Suppose you and I meet in some primitive place where there is no rule of law, no property rights and no common ethical code. We sheathe our swords and temporarily cooperate to achieve some goal - say, gathering fruit from a tree. The task having been accomplished, how do you know that when your back is turned, I won't draw my sword, slay you and grab the bounty all for myself? How do I know you won't do the same to me?
If this is a one-time-never-to-be-repeated encounter, isn't it in our rational self-interest (narrowly construed) to grab all we can get when opportunity arises? Shouldn't we assume that others will do the same? And, if so, doesn't self-interested behavior imply periodic warfare? Even if we agree to temporary truces, how could anybody be trusted to keep the peace if some advantageous opportunity to break it were to arise?
In Hobbes' version, the problem is solved by a Leviathan (or state) that imposes order by force on everyone else. But that surely is not how the problem was historically solved. For one thing, no one person is powerful enough to impose his will on everyone else. Of course, a coalition of people conceivably could overpower everyone else. But that coalition would be no more stable or lasting than any other voluntary agreement in the Hobbesian world. (More about that in Part II.)
The answer to the problem is that our ancestors probably never were in a Hobbesian jungle. And even if some were, evolution favored those genes and those cultures that produced behavior very different from what I described above. Most modern humans act differently - not because we choose to act differently, but because we are "hard-wired" to do so.
How do we know this? Because of the fascinating discoveries in a not-very-well-known field called "experimental economics." Most of the experiments are conducted with college students and the results have been confirmed again and again with many different students on many different campuses, with varied backgrounds and cultures.
One of the experiments is called the "Ultimatum Game," and it works like this. You and I are sitting in front of computer terminals in different rooms. We do not know each other's identities and we never will - so no reason to worry that we'll meet the next day and you will have to defend your actions.
To begin, you are given the opportunity to divide $100 between the two of us. After you propose a division I can either accept or reject. If I accept, we both get to keep the amount of money you propose. But if I reject your offer, we both walk away with nothing.
Before going on, let's stop and note that the Ultimatum Game has many of the characteristics of a Hobbesian jungle. There are no property rights, no rule of law, no agreed-upon ethical rules. There is simply an opportunity for you to get some part of $100, but you need my cooperation to get it.
Now if we both were purely self-interested, this problem has a straightforward solution. You should offer me $1 and propose to keep $99 for yourself. After all, you're giving me the opportunity to have a dollar I would not otherwise have. How could I refuse?
But it turns out, I do refuse. And just about everyone else who plays this game would refuse as well. In fact, any split worse than $60/$40 tends to be rejected routinely.
Why is that? Apparently, people have an underlying sense of fairness, and an offer of, say, $61/$39 is perceived as so unfair that the responder will give up the chance to keep $39 in order to punish the proposer and prevent him from getting $61. Perhaps anticipating that kind of response, most initial offers tend to fall within the $60/$40 range - far from the $99/$1 offer.
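The rules of the game, together with the fairness behavior just described, can be sketched in a few lines of code. This is a minimal illustration, not a model from the experimental literature: the 40% rejection threshold is an assumption based on the rough $60/$40 cutoff reported above, and real responders are not this mechanical.

```python
POT = 100
REJECTION_THRESHOLD = 0.40  # assumed: responder's minimum acceptable share of the pot

def responder_accepts(offer_to_responder: int) -> bool:
    """The responder accepts only if the offered share meets the fairness threshold."""
    return offer_to_responder >= REJECTION_THRESHOLD * POT

def play_ultimatum(proposer_keeps: int) -> tuple:
    """Return (proposer payoff, responder payoff) for a proposed split of the pot."""
    offer = POT - proposer_keeps
    if responder_accepts(offer):
        return (proposer_keeps, offer)
    return (0, 0)  # rejection: both players walk away with nothing

print(play_ultimatum(99))  # the "rational" $99/$1 offer -> (0, 0)
print(play_ultimatum(60))  # a $60/$40 split -> (60, 40)
```

Note that the purely self-interested prediction - accept any positive offer - corresponds to setting the threshold to zero; the observed behavior corresponds to a threshold well above it.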
There are many other variations on this theme. In some games, A can walk away with a sum of money - no questions asked. Or he can trust B (an offer to cooperate) and let B make a decision that either makes both A and B better off (reciprocation) or gives almost all the spoils to B and leaves A with almost nothing (a predatory response that rejects A's offer to cooperate). In some games, A is then allowed to respond to B's predatory choice by selecting an option that leaves both players with very little payoff (thus punishing B for his predatory behavior).
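This trust-and-punishment variation can be sketched the same way. The payoff numbers here are hypothetical stand-ins chosen only to preserve the ordering the text describes (cooperation beats the outside option, predation beats cooperation for B, punishment hurts both); they are not taken from any actual experiment.

```python
# Hypothetical payoffs as (A's payoff, B's payoff) - assumed for illustration only.
OUTSIDE = (10, 10)      # A walks away with a sum of money, no questions asked
RECIPROCATE = (25, 25)  # B reciprocates: both A and B are better off
DEFECT = (2, 40)        # B preys: almost all the spoils go to B
PUNISH = (1, 1)         # A punishes B's predation: both get very little

def play_trust_game(a_trusts: bool, b_reciprocates: bool, a_punishes: bool) -> tuple:
    """Return (A's payoff, B's payoff) for one play of the sequential game."""
    if not a_trusts:
        return OUTSIDE
    if b_reciprocates:
        return RECIPROCATE
    return PUNISH if a_punishes else DEFECT

print(play_trust_game(True, True, False))   # mutual cooperation -> (25, 25)
print(play_trust_game(True, False, True))   # predation met with punishment -> (1, 1)
```

The punishment option is what makes predation unattractive for B: if B expects A to punish, defecting yields 1 rather than 40, so reciprocating (25) becomes B's best reply.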
Here's the bottom line: Most people have a basic sense of fairness and they act on that sense in how they play these games. In particular, people seem to be preprogrammed to (1) offer to cooperate with others to achieve mutually beneficial goals, even when the offer entails some risk; (2) reciprocate the offers of others, even when reciprocation entails some risk; (3) not take undue advantage of the vulnerability of others in such trust relationships; and (4) be willing to punish those who betray their trust.
Moreover, these ingrained inclinations are not limited to college students. Similar behavior has been observed among primitive tribes around the world - where people's lifestyles are thought to be closest to those of our distant ancestors.
Note: These are the very characteristics that also are needed in order for people to engage in any kind of trading relationships beyond the most simple form of exchange. Matt Ridley wrote in The Wall Street Journal the other day that the reason our ancestors succeeded, whereas the Neanderthals did not, was specialization and exchange. But in an environment with no world government, no international law, no courts, etc., complex trading relationships among diverse people over considerable periods of time require trust, cooperation and reciprocation - the very type of behavior we are describing here.
Much of the original work discovering those principles was done by NCPA author and Nobel Laureate Vernon Smith, who has a good review of the literature in Rationality in Economics. (See also Robert Axelrod, The Evolution of Cooperation; Robert Frank, Passions Within Reason; and Michael Taylor, The Possibility of Cooperation.) The four inclinations listed above are my summary of Smith's description of "fairness principles" that seem to be hard-wired in most people.
Note that the notion of hard-wired ethics contradicts the Aristotelian principle of tabula rasa, but it is consistent with our previous report that even babies seem to have an innate ethical sense. Note also that not everyone is ingrained to the same extent, and some people seem not to be ingrained at all. Machiavellians - sociopaths and psychopaths (called "Machs" in the trade), who are identified by psychological tests (and, I suspect, by MRI scans) - do not share these ethics and do not act on them. Some of the Machs will no doubt end up in Congress some day.
While I think this analysis is vitally important, I don't think it's the whole of the story. We've said nothing here about attachments to family and extended family and "irrational" attachments to our own group when it comes in conflict with other groups - leading to tribal warfare. This behavior is also probably the product of genetic and cultural evolution.
John C. Goodman is president and CEO of the National Center for Policy Analysis. The Wall Street Journal and the National Journal, among other publications, have called him the "Father of Health Savings Accounts," and the Media Research Center credits him, along with former Sen. Phil Gramm and columnist Bill Kristol, with playing the pivotal role in the defeat of the Clinton Administration's plan to overhaul the U.S. health care system. He is also the Kellye Wright Fellow in health care. The mission of the Wright Fellowship is to promote a more patient-centered, consumer-driven health care system.
Dr. Goodman's health policy blog is the only right-of-center health care blog on the Internet. It is the only place where pro-free enterprise, private sector solutions to health care problems are routinely examined and debated by top health policy experts throughout the country - conservative, moderate and liberal.