Feb. 21st, 2015

Nearly 50 years ago, Garrett Hardin coined one of those rare phrases that becomes part of our linguistic heritage: "the tragedy of the commons". His article, though occasionally difficult reading, remains a classic work of explication that has never been more important and relevant than it is today.

Hardin's basic notion revolves around two key principles. First, there is the perceived inevitability of negative consequences (the "tragedy" part of the phrase). Second, there is the concept of a "commons": a resource shared by all members of a community. The tragedy results from each individual's desire (all else being equal) to maximize their own enjoyment of the commons, particularly if there are no perceived consequences and if others are seen to be benefiting more than we are. This can degrade into a kind of mutually assured destruction, in which the goal is to consume more than anyone else -- like watching two competitive eaters meet at an all-you-can-eat buffet.

The problem with any resource, whether held in common or privately, is that it is finite. The resource may seem effectively unlimited, at least so long as few people are trying to utilize it, but it is nonetheless limited. The limit on utilization is commonly described as the resource's "carrying capacity" -- the maximum utility the resource can provide to all of its users without degrading its ability to continue supporting them. When the carrying capacity is exceeded, the resource degrades, producing progressively less benefit, and everyone suffers.
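That dynamic can be sketched with a toy calculation. The model below is purely illustrative -- the function name, the numbers, and the degradation rule are all my own invention, not anything from Hardin -- but it captures the shape of the problem: below carrying capacity, each unit of use yields full benefit; above it, the yield for everyone falls with the overuse.

```python
def per_user_benefit(users, use_per_user, capacity):
    """Benefit each user receives from a shared resource.

    Below carrying capacity, each unit of use yields one unit of
    benefit. Above capacity, the resource degrades, and the yield
    for every user falls in proportion to the overuse.
    """
    total_use = users * use_per_user
    if total_use <= capacity:
        return use_per_user
    # Degraded commons: only a fraction of the former yield remains,
    # and that fraction shrinks further the more the capacity is exceeded.
    degradation = capacity / total_use
    return use_per_user * degradation * degradation

# Ten users sharing a commons whose carrying capacity is 100 units.
print(per_user_benefit(10, 10, 100))  # at capacity: each gets 10
print(per_user_benefit(10, 20, 100))  # 2x overuse: each consumes more, gets 5.0
```

Each user doubles their consumption, yet each ends up with half the benefit -- the "everyone suffers" outcome in miniature.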

Unfortunately, in the absence of some regulatory body or self-policing mechanism, there appears to be no constraint that prevents individuals from increasing their use of the resource until it begins to degrade. We have countless historical examples of this, many of which Jared Diamond covers in his superb book, Collapse: How Societies Choose to Fail or Succeed.

Hardin describes the problem of finite resources in the context of limiting the population, something explored long before him by Thomas Malthus. It's certainly true that any resource can more easily sustain a small population than a large one; there's ample theoretical and empirical support for this. In Hardin's context, the problem is that each living being, human or other, requires a certain minimum amount of energy to survive -- and more than that if it wants to reproduce, remain healthy, and enjoy its life. Because the amount of energy available is finite, this imposes a maximum limit on the carrying capacity of an environment. Although clever science and technology can bring us closer to exploiting that maximum, basic thermodynamics tells us that we cannot surpass it. Of greater concern is that the negative consequences of trying to reach that maximum are often severe -- witness the greenhouse effect being created by excessive combustion of fossil fuels to generate the energy that lets us enjoy our current standard of living.

One of the more important points raised in Hardin's article is often forgotten -- even though Hardin starts the article with this point. This is the realization that not all problems have scientific or technological solutions. Problems whose roots lie in human behavior require human-centric solutions: you can't solve the problem without first understanding the human side of the problem. Sadly, the modern debate over how to solve the problem of global warming continues to focus largely on scientific and technological solutions -- worse yet, on economic solutions that rely on a faulty understanding of both science and human behavior. As in the case of energy, science and technology can help us mitigate the problem, but the problem itself won't go away unless we solve the human aspects of the problem.

Arthur Hlavaty, a colleague and deep thinker, described this using a phrase that I hope will become equally well known: "An environment is a massively multiplayer prisoner's dilemma." The prisoner's dilemma is a fundamental thought experiment from the often abstruse field of game theory. Put simply, it shows that people sometimes refuse to cooperate even when cooperation would be in their best interests. There are various reasons for this, but possibly the biggest one is the perception that the risk of an unfavorable outcome of a decision (the downside potential) is less than the potential benefit (the upside potential) to be achieved by acting selfishly. Though the selfish choice might work in the case of two prisoners, it won't work when everyone is in the same lifeboat -- sharing the same commons.
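Hlavaty's "massively multiplayer" version can be made concrete with another toy payoff calculation. Again, the payoffs here are invented for illustration (not drawn from Hardin or Hlavaty): cooperators sustain a shared pool that everyone draws on, while defectors skim an extra point for themselves -- and the pool shrinks as defectors multiply.

```python
def payoff(i_defect, num_defectors, num_players):
    """Payoff for one player in a many-player prisoner's dilemma.

    Cooperators sustain a commons worth 2 points per cooperator,
    split evenly among all players; a defector grabs 1 extra point
    for themself. The commons shrinks with every defector.
    """
    cooperators = num_players - num_defectors
    shared = 2 * cooperators / num_players  # everyone's cut of the commons
    return shared + (1 if i_defect else 0)

n = 10
# Defecting is always individually better than cooperating...
assert payoff(True, 1, n) > payoff(False, 0, n)
# ...yet universal defection leaves everyone worse off than universal cooperation.
assert payoff(True, n, n) < payoff(False, 0, n)
```

This is exactly the trap: each player's selfish choice is locally rational, but when every player makes it, the commons collapses and all the payoffs fall.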

I'd like to tell you that I have a solution to this problem that will save us. Sadly, I don't. What I can tell you is that the solution won't be found if we close our eyes, cross our fingers, and hope that science and technology will save us. Most likely, they won't, and betting that they will turns our current predicament into a dismayingly real instantiation of the theoretical prisoner's dilemma: betting that the personal benefits are so high that they outweigh the large risk that everyone will suffer if we guess wrong. That's not a bet I'm prepared to take, and in the absence of evidence that we, as a society, will voluntarily step back from that bet and choose cooperation, we're going to need strong and insightful leaders who are capable of finding ways to motivate us to cooperate.
