The benevolence assumption and unintended consequences

Two of what I consider the most useful economic lines of reasoning are the Austrian and Public Choice traditions. (Hence my going to George Mason for graduate school.) Though they are built on different foundations, they are not necessarily antagonistic to each other. The Austrian school makes different assumptions on many points from the mainstream general-equilibrium position and therefore challenges it from the outside, while Public Choice largely takes those assumptions for granted and pokes holes in them from the inside. The one-two punch of knowledge problems and incentive problems in analyzing policies combines both and should, in my view, be a standard challenge to any policy proposal.

One thing that continues to trouble me in Austrian Political Economy (APE) is the benevolence assumption—that policy makers have the best intentions in mind. I’m not sure this is a necessary part of APE, but it is a standard part. The reasoning typically goes that even with pure intentions, policy makers can pursue bad policies that almost everybody, the policy makers included, can agree have undesirable consequences. “Unintended consequences” is the typical phrase, and while these could conceivably be positive, they are almost always negative.

Public Choice (PC), on the other hand, uses the symmetry assumption: if we take people to act mainly in their own (perceived) interests in the private sphere, we ought to maintain that assumption when we analyze their behavior in the public sphere. There may still be a role for ideology, but it is one of many inputs into public decision making. PC is the go-to framework when we consider the role of elected officials, bureaucrats, and interest groups in policy.

This isn’t to say that APE is wrong or misguided; I believe it’s true as far as it goes. It’s just to say that any time I look at how actual policies get implemented, the self-interest of some entity with influence over the process is an important part of the story. The hinge idea for me is that APE implicitly assumes either an overly simplified policy-making system or that the actors in a more complex, realistic system are not very bright. There are a lot of bright people in the world, and it’s not only economists outside the policy world who have heard of unintended consequences. In any concrete case where a policy’s consequences diverge from its stated objectives, surely at least some of those consequences were raised as possibilities beforehand.

This leads me to be even more cynical about the policy-making process than the average APE/PC fusionist. If it’s true that some of these outcomes were deliberated beforehand—and how could this be otherwise?—and the policy was pursued anyway, that must give us some information about the policy makers’ mindsets. Either they 1) considered these outcomes sufficiently unlikely, 2) didn’t really care about them one way or another, 3) willfully set them aside, or 4) actually intended for these outcomes to happen. (In a world of uncertainty, the complete set of effects of a decision can never be foreseen, but at least some can be reasonably relied on to happen or not.)

Case 1 is easy to consider, so I won’t go into it here. Case 4 is the bread and butter of interest group analysis. Here, an interest group convinces a policy maker or a set of policy makers to enact some rule from which it will benefit, and in turn the interest group supports the policy maker(s), whether through campaign contributions, the coordinated votes of its members, a cushy “consulting” gig later on, or some combination of these. If the cause is not popular enough on its own, a legislator in this position can offer his support for issues pushed by other legislators in exchange for their support of his, the familiar logrolling dynamic. An important point about case 4 is that describing this outcome as an unintended consequence is entirely wrong.

Cases 2 and 3 are extremely interesting. While APE/PC usually focus on economic issues, these cases can be more easily demonstrated through a theme that ought to be familiar to all of us by now: military interventions. It would be difficult to believe that all of the smart people who work in the armed forces or the intelligence services failed to foresee many of the possible consequences of decades of violent intervention, such as anti-American sentiment among many Muslims or blowback. Thus, case 1 does not seem to apply. Case 4 may be overly cynical, so I won’t get into that here. But cases 2 and 3 could account for a lot. We can’t necessarily sort decisions into case 2 or case 3 without being somehow privy to the decision-making process, but we can make educated guesses. Dissidents from the groups mentioned above have written since the invasion of Iraq, for instance, that the Bush administration knew, or at least had strong reason to believe, that there were no weapons of mass destruction to be found but decided to invade anyway. This seems to fall into case 3. That opens up another line of thinking: there must have been some overriding goal behind the willful decision to ignore this evidence. Again, we can’t confidently say what it was without more information, but evidently there was one (or more).

All this is to say that in analyzing real-world policy decisions, we should subdivide unintended consequences into foreseen/reasonably foreseeable and unforeseen/reasonably unforeseeable consequences. (We could further subdivide in the next obvious way, but this post is long enough already.)

Again, I am a proponent of the Austrian point of view, but it may not be the most useful tool in the toolkit for many analyses of policy decisions.
