Bias in scholarship: Greek grammar edition

In a previous post I gave some excerpts of Jason BeDuhn’s excellent book Truth in Translation. I’ve continued to think about one of the sections. In his discussion of why John 1:1c is usually translated incorrectly (e.g. in every example here) BeDuhn refers to a common defense of the traditional rendering, an appeal to “Colwell’s Rule” about article use and definiteness. He writes:

Yet another argument made in defense of the traditional English translation of John 1:1 is based on something called “Colwell’s Rule.” This is a supposed rule of Greek grammar discovered by the great biblical scholar E.C. Colwell. Colwell introduced his rule in the article “A Definite Rule for the Use of the Article in the Greek New Testament.” Based on a sampling of New Testament passages, Colwell formulated his rule as follows: “A definite predicate nominative has the article when it follows the verb; it does not have the article when it precedes the verb” (Colwell, page 13). There are two problems with using “Colwell’s Rule” to argue for the traditional translation of John 1:1. The first problem is that the rule does nothing to establish the definiteness of a noun. The second problem is that the rule is wrong.

. . .

Colwell’s mistake, as so often is the case in research, is rooted in a misguided method. He began by collecting all of the predicate nouns in the New Testament that he considered to be definite in meaning, and then, when some of them turned out to look indefinite in Greek, he refused to reconsider his view that they were definite, but instead made up a rule to explain why his subjective understanding of them remained true, even though the known rules of Greek grammar suggested otherwise. Notice that he had already decided that the predicate nouns he was looking at were definite, based on his interpretation of their meaning rather than on the presence or absence of the one sure marker of definiteness in Greek: the article. His predetermination of definiteness made his whole study circular from the start.

Colwell decided that the nouns he was looking at were definite before he even started his research. He was not prepared to change his mind about that. So when nouns he thought were definite showed up without the definite article, he assumed some rule of grammar must cause the article to be dropped. He never even considered the possibility that the article wasn’t there because the noun was not definite. It seems that Colwell was misled by how we might say something in English. If a certain expression is definite in English, he assumed it was definite in Greek, regardless of what the grammar suggested. Of course, Colwell knew perfectly well that Greek communicates meaning in different ways than English does. It was an unconscious habit of mind that interfered with his usually capable scholarship in this instance. It was a bias derived from his everyday use of English.

As flawed as the original “Colwell’s Rule” is, it has been made worse by misrepresentation down through the years. Notice that, according to Colwell, his “rule” allows him to explain why a noun that you already know (somehow) to be definite turns up sometimes without the definite article. The “rule” does nothing to allow you to determine that a noun is, or is not, definite. Even if “Colwell’s Rule” were true, it would at most allow the possibility that an article-less predicate nominative before a verb is definite. It could never prove that the word is definite. But since the rule leaves no way to distinguish between a definite and indefinite predicate nominative before a verb, many have mistaken it as making all pre-verb predicate nominatives definite.

Most people couldn’t be less interested in the minutiae of Greek grammar and its implications for Christian theology, and frankly I’m not terribly enthused about it either. But if BeDuhn and others are correct—which I don’t know for certain, not being an expert in Koine Greek, though he continues after the excerpt to make a good case—it illustrates a point about scholarship: many intelligent, conscientious scholars in a field can be wrong because of bias. If they all share the same perspective, they don’t check each other. In this example, at least several hundred thousand people have read the original Greek passage countless times, and most have failed to see it correctly.

I think about this a lot when I see articles from Heterodox Academy. I think this is less of a problem in economics than in other social science fields, although that too could be bias. But it ought to be keeping scholars awake at night, at least for a little while.


Excerpts from BeDuhn’s Truth in Translation

Here are a few passages I found interesting in Jason BeDuhn’s Truth in Translation. Overall I recommend it highly, especially for Christians but also for people who aren’t Christian but who are still interested in what the Bible says, e.g. people interested in the Western intellectual tradition, of which the Bible is an essential text.

I. The fundamental problem of Biblical translation:

Since the passages of the Bible can be fit together to form many different interpretations and theologies, we must be aware of how easy it is to reverse the process, and read those interpretations and theologies back into the individual passages. It is perfectly legitimate for those various interpretations to be made and maintained on the basis of a biblical text that does not preclude them. What is not legitimate is changing the Bible so that it agrees with only one interpretation, that is, changing it from the basis of interpretation into a product of interpretation. (pp. 61-62)

II. Sola scriptura has pros and cons:

Although a few Protestant biblical scholars participated in the [New American Bible] translation, it is largely the work of Catholic scholars and received the sanction of the Catholic church. One might assume a distinctly Catholic bias in the finished product. But ideologically the Catholic church is under less pressure to find all of its doctrines in the Bible than is the case with Protestant denominations, and this fact, combined with the vast resources of Catholic biblical scholarship, seems to have worked to the NAB’s advantage. (p. 34)

III. Hebrews 1:8 is rendered in the King James Version as “But unto the Son he saith, Thy throne, O God, is for ever and ever: a sceptre of righteousness is the sceptre of thy kingdom.” Many other translations follow this pattern. BeDuhn says it should rightly be translated with “…God is your throne, for ever and ever…” These two footnotes were interesting:

1. It should be noted that the author of Hebrews is familiar with, and does use, vocative forms of nouns, such as kurie, “O Lord,” just two verses later, in 1:10. So he or she could have used a vocative form of “God” in 1:8 to make direct address perfectly clear, if that is what was intended.

2. Rolf Furuli, in his book The Role of Theology and Bias in Bible Translation, reaches the same conclusion: “Thus, in this passage the theology of the translator is the decisive factor in the translation” (Furuli, page 47). (p. 101)

Note: “he or she” in footnote 1 is not mere courtesy. There is some inconclusive scholarly speculation that the unnamed author of the Letter to the Hebrews was a woman.

IV. The most controversial chapter of Truth in Translation concerns John 1:1, rendered in the KJV as “In the beginning was the Word, and the Word was with God, and the Word was God.” BeDuhn credits the New World Translation (the official translation of the Jehovah’s Witnesses) as the only one under review to translate it accurately: “In the beginning was the Word, and the Word was with God, and the Word was a god.” [Emphasis is mine.] I’m not a Greek scholar so I am not equipped to judge the grammatical part of his argument, but his case is reasonable overall.

If John had wanted to say “the Word was God,” as so many English translations have it, he could have very easily done so by simply adding the definite article “the” (ho) to the word “god” (theos), making it “the god” and therefore “God.” He could simply have written ho logos ēn ho theos (word-for-word: “the word was the god”), or ho logos ho theos ēn (word-for-word: “the word the god was”). But he didn’t. If John didn’t, why do the translators?

The culprit appears to be the King James translators. As I said before, these translators were much more familiar and comfortable with their Latin Vulgate than they were with the Greek New Testament. They were used to understanding passages based on reading them in Latin, and this worked its way into their reading of the same passages in Greek. Latin has no articles, either definite or indefinite. So the definite noun “God” and the indefinite noun “god” look precisely the same in Latin, and in John 1:1-2 one would see three occurrences of what appeared to be the same word, rather than the two distinct forms used in Greek. Whether a Latin noun is definite or indefinite is determined solely by context, and that means it is open to interpretation. The interpretation of John 1:1-2 that is now found in most English translations was well entrenched in the thinking of the King James translators based on a millennium of reading only the Latin, and overpowered their close attention to the more subtle wording of the Greek. After the fact — after the King James translation was the dominant version and etched in the minds of English-speaking Bible readers — various arguments were put forward to support the KJV translation of John 1:1c as “the Word was God,” and to justify its repetition in more recent, and presumably more accurate translations. But none of these arguments withstands close scrutiny. (pp. 115-116)

BeDuhn later writes that he thinks the best translation would be “…and the Word was divine.” Perhaps this phrase will find its way onto the page at some point. Another sample from the same section:

The translators of the KJV, NRSV, NIV, NAB, NASB, AB, TEV, and LB all approached the text of John 1:1 already believing certain things about the Word, certain creedal simplifications of John’s characterization of the Word, and made sure that the translation came out in accordance with their beliefs. Their bias was strengthened by the cultural dominance of the familiar KJV translation which, ringing in their ears, caused them to see “God” where John was speaking more subtly of “a god” or “a divine being.” Ironically, some of these same scholars are quick to charge the NW translation with “doctrinal bias” for translating the verse literally, free of KJV influence, following the most obvious sense of the Greek. It may very well be that the NW translators came to the task of translating John 1:1 with as much bias as the other translators did. It just so happens that their bias corresponds in this case to a more accurate translation of the Greek. (pp. 124-125)

One last sample from this chapter, and possibly the only time in the book where BeDuhn waxes interpretive:

When one says “the Word was divine” a qualitative statement is being made, as Harner suggests. The Word has the character appropriate to a divine being, in other words, it is assigned to the god category. Of course, once you make the move of saying the Word belongs to that category, you have to count up how many gods Christians are willing to have, and start to do some philosophical hair-splitting about what exactly you mean by “god.” As Christians chewed on this problem in the decades and centuries after John, some of them developed the idea of the Trinity, and you can see how a line can be drawn from John 1:1 to the later Trinity explanation as a logical development. But John himself has not formulated a Trinity concept in his gospel. Instead, he uses more fluid, ambiguous, mystical language of oneness, without letting himself get held down to technical definitions. (p. 130)

V. In the chapter on “the Holy Spirit”, which he cautions should in several places be translated “a holy spirit”, he writes:

…Some things that would be handled with “which” in English, because they are not persons, are referred to with the equivalent of “who/whom” in Greek because the nouns that name them are either “masculine” or “feminine.” But even though the “personal” category is larger in Greek than in English, the “Holy Spirit” is referred to by a “neuter” noun in Greek. Consequently, it is never spoken of with personal pronouns in Greek. It is a “which,” not a “who.” It is an “it,” not a “he.”

This is a case, then, where the importance of the principle of following the primary, ordinary, generally recognized meaning of the Greek when translating becomes clear. To take a word that everywhere else would be translated “which” or “that,” and arbitrarily change it to “who” or “whom” when it happens to be used of “the holy spirit,” is a kind of special pleading. In other words, it is a biased way to translate. And because this arbitrary change cannot be justified linguistically, it is also inaccurate. (p. 140)

And further:

…Since the KJV program followed by most modern translations capitalizes “Spirit” only when a reference to the “Holy Spirit” is understood, any appearance of a capitalized “Spirit” implies “Holy Spirit.” An issue of accuracy, therefore, is whether the original Greek suggests that the “Holy Spirit” is meant when the word “spirit” appears. The decision to capitalize “Spirit” when the reference is thought to be to the “Holy Spirit” gives license to the biased insertion of the “Holy Spirit” into dozens of passages of the Bible where it does not belong. (pp. 143-144)

VI. After commending the New World Translation and the New American Bible as the most accurate of the translations compared:

I have pondered why these two translations, of all those considered, turned out to be the least biased. … [A]t the risk of greatly oversimplifying things, I think one common element the two denominations behind these translations share is their freedom from what I call the Protestant’s Burden. By coining this phrase, I don’t mean to be critical of Protestantism. … I use this expression simply to make an observation about one aspect of Protestantism that puts added pressure on translators from its ranks.

You see, Protestant forms of Christianity, following the motto of sola scriptura, insist that all legitimate Christian beliefs (and practices) must be found in, or at least based on, the Bible. That’s a very clear and admirable principle. The problem is that Protestant Christianity was not born in a historical vacuum, and does not go back directly to the time that the Bible was written. Protestantism was and is a reformation of an already fully developed form of Christianity: Catholicism. When the Protestant Reformation occurred just five hundred years ago, it did not reinvent Christianity from scratch, but carried over many of the doctrines that had developed within Catholicism over the course of the previous thousand years and more. In this sense, one might argue that the Protestant Reformation is incomplete, that it did not fully realize the high ideals that were set for it.

For the doctrines that Protestantism inherited to be considered true, they had to be found in the Bible. And precisely because they were considered true already, there was and is tremendous pressure to read those truths back into the Bible, whether or not they are actually there. Translation and interpretation are seen as working hand in hand, and as practically indistinguishable, because Protestant Christians don’t like to imagine themselves building too much beyond what the Bible spells out for itself. So even if most if not all of the ideas and concepts held by modern Protestant Christians can be found, at least implied, somewhere in the Bible, there is a pressure (conscious or unconscious) to build up those ideas and concepts within the biblical text, to paraphrase or expand on what the Bible does say in the direction of what modern readers want and need it to say. (pp. 163-164)

Catholicism avoids this pressure by accepting church tradition as legitimate, and the Jehovah’s Witnesses avoid it by representing a more radical break from previous traditions, allowing them to take “a fresh approach to the text, with far less presumption than that found in many of the Protestant translations” (p. 165). There is, of course, their use of “Jehovah” 237 times in the New Testament—where the Greek has it zero times—but that at least is jarringly obvious to the reader.

VII. Only at the end does BeDuhn explain why he chose the passages he did:

I could only consider a small number of samples in this book. Another set of samples might yield some different configuration of results. But the selection of passages has not been arbitrary. It has been driven mostly by an idea of where one is most likely to find bias, namely, those passages which are frequently cited as having great theological importance, the verses that are claimed as key foundations for the commitments of belief held by the very people making the translations. Choosing precisely those passages where theology has most at stake might seem deliberately provocative and controversial. But that is exactly where bias is most likely to interfere with translation. Biblical passages that make statements about the nature and character of Jesus or the Holy Spirit are much more likely to have beliefs read into them than are the passages that mention what Jesus and his disciples had for lunch. (p. 166)

Frankfurt’s “On Bullshit” pt. 1: politics and social media

Harry Frankfurt’s great essay “On Bullshit” was originally published in 1986 but has aged incredibly well. Briefly, for background, a lie depends on the truth, as the speaker of a lie intends to misrepresent something that is not true as something that is. In contrast, bullshit isn’t the misrepresentation of something false as something true; truth and falsity don’t really enter into the equation. Here’s a sample that is especially relevant today:

Why is there so much bullshit? Of course it is impossible to be sure that there is relatively more of it nowadays than at other times. There is more communication of all kinds in our time than ever before, but the proportion that is bullshit may not have increased. Without assuming that the incidence of bullshit is actually greater now, I will mention a few considerations that help to account for the fact that it is currently so great.

Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about. Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic are more excessive than his knowledge of the facts that are relevant to that topic. This discrepancy is common in public life, where people are frequently impelled—whether by their own propensities or by the demands of others—to speak extensively about matters of which they are to some degree ignorant. Closely related instances arise from the widespread conviction that it is the responsibility of a citizen in a democracy to have opinions about everything, or at least everything that pertains to the conduct of his country’s affairs. The lack of any significant connection between a person’s opinions and his apprehension of reality will be even more severe, needless to say, for someone who believes it his responsibility, as a conscientious moral agent, to evaluate events and conditions in all parts of the world.

As the scope of government has increased over time, politicians are led to articulate positions about more and more things they don’t really know or care about. Most people can recognize these as bullshit at least some of the time, and many dislike it at least some of the time, but as Frankfurt says it is probably inevitable given the circumstances.

Social media is in large part about expressing the image of yourself that you want other people to have: how good/caring/special/smart/patriotic/etc. you are and what socio-political tribe you’re part of. Since it’s so low-cost to broadcast these messages to the world, people broadcast them constantly. But of course one can’t be expert in everything, and can’t deeply care about everything. Which leads to mountains of bullshit.

So here’s the tricky part: is there any end in sight to all the bullshit? I expect some adjustment to social media bullshit as people learn how meaningless it really is, but political bullshit seems unstoppable.

Wednesday nexus

New Mexico’s legislature passed a law ending civil forfeiture. Some commentary. Will the (Republican) governor sign it? I’m a little out of the loop of NM politics but I would be surprised since she has a law-n-order background and obviously wants to run for president soon.

“Everything is problematic”: an insider’s critique of the student activist social justice movement. Very insightful.

Are economics majors anti-social? A study that, to me as an economist, seems remarkably silly even though one of the co-authors is an economist. (He’s also an activist though.) Economics classes make students less likely to donate to left-leaning or generic but ineffectual organizations? Is this a Bad Thing?

The early history of British (free?) trade. Commerce with out-groups is a very old phenomenon.

A program that guesses an author’s gender based on a writing sample. Sort of fun/interesting but not very useful, not for me anyway.

Virginia DMV survey on ride-sharing regulations

The Virginia DMV currently has a survey available asking for feedback on for-profit ridesharing services like Uber, Lyft, etc. I doubt it will do much good, but it only takes a few minutes. I’m posting the questions (in bold) and my answers here.


1. Insurance requirements protect injured customers and third parties. Current insurance requirements are based on the nature of the transportation service and the number of passengers in the vehicle. Insurance companies generally distinguish commercial and personal insurance policies—all passenger carriers licensed in Virginia must have a commercial policy.

What are your views on TNCs and other for-hire passenger transportation companies being required to maintain certain levels of insurance?

There is no reason why this will not emerge as a standard absent state direction, and at more market-appropriate rates than those set by diktat. It’s hard for me to believe that VA DMV thinks riders are not concerned about safety and responsibility enough to make the companies respond to market pressure. The first lawsuit would fix this overnight, and knowing this in advance, the services will get in shape without having to learn the hard way.

2. Under current law, DMV reviews company owners applying for authority to operate a for-hire transportation service in Virginia. Local governments have authority to conduct background checks for companies and drivers within their jurisdiction. These checks serve two goals. First, they ensure that company owners have a record of providing honest and reliable service. Second, they ensure that taxi drivers have not been convicted of certain crimes or driving offenses.

What are your views on companies and/or drivers being subject to criminal history and background checks? Do you believe this is something best performed by a government entity or by the individual companies?

I think it’s a good thing, but it’s best performed by individual companies. An equilibrium will emerge privately, as dangerous drivers are bad for business unless enabled by a government-supported cartel system. On top of this, there are plenty of offenses one could have committed in the past that have no bearing on one’s present capacity to drive safely, quickly, and conveniently, yet would almost certainly be a bar to employment if DMV were running the show.

This model makes little economic sense in an age when smartphones can let riders provide instant feedback both to the services and to other potential riders.

3. Local governments can set standards for vehicles used for hire in their jurisdictions. Some localities have age and mileage restrictions for taxis, which further limit the vehicles that can be used. Together, these requirements are designed to ensure that all vehicles, including those carrying passengers for-hire, are in sound working order and do not pose a risk to public safety.

What are your views on for-hire vehicles being subject to higher standards than other vehicles registered in Virginia; what should those standards be?

As if poorly-working vehicles were not bad for business in a very immediate way? Not only would the riders themselves be angry, but it would be bad advertising. If anything, standards could very feasibly be lower for these vehicles than for ordinary vehicles, as regular Joes like me do not see their business reputations suffer if their vehicles break down in traffic. For me and the people around me in traffic it’s an inconvenience; for Uber et al. it’s a huge business problem.

As far as what the standards should be, the more minimally-specified, the better. Consumers face quality vs. price tradeoffs every day, and there’s not one answer that satisfies everybody. If all cars were held to the standards of brand new Cadillacs, quality would be very high but few could afford it. Fortunately, there is a large enough market for consumer automobiles that lets some people drive brand new Cadillacs but doesn’t ban me from driving an old but affordable Kia.

4. State laws require that certain carriers publish their fares. Localities have authority to regulate fares within their jurisdiction.

What are your views on governments setting requirements relating to fares; what should those requirements be, if any?

Though I’ve met a great many very smart people working in government positions, they have no more wisdom or appreciation for the vast array of consumer preferences in this area of the economy than they do in any other area of the economy. By what logic does it make sense to regulate fares for carriers but not for restaurants or clothing stores? If fares are too high, potential riders will find other means of transportation, pushing fares back down. Fare regulation makes perfect sense in the context of enforcing cartels, but it does not make sense in a competitive market the likes of which we’re going to see eventually, regulations or not, due to technological progress.

5. Certain passenger carriers must show that the service is necessary in the community in which they seek to operate and localities have authority to control the number of taxis in the community. This local authority stems from the community’s interest in controlling traffic and pollution that may result from roaming or idling taxis.

What are your views on standards relating to public necessity for transportation services and on potential limits to the number of operators allowed in an area?

As these services are substitutes for traditional taxi services, more of them on the road means fewer taxis on the road, and indeed a vehicle that is only in operation when asked for seems like it should cause less traffic and pollution than a taxi driving around in circles. If regulatory agencies had to pick winners and losers, it seems that they would do better to mandate the newer style of organization than to allow idling taxis in high-traffic areas. Of course, there is probably room for both models of organization, as not everybody has or uses smartphones for transportation purposes. It’s simply not credible to think that allowing more vehicles from Uber et al. will simply add to the current total. Traffic is already nightmarish in Northern Virginia, but in the long run it should get better instead of worse by allowing ride-sharing services.

The “necessary in the community” clause is the most transparently cartel-serving requirement there is. If it were obvious that a service is necessary in the community, why wouldn’t the regulatory agencies have issued more permits in the first place? Nobody knows in advance what services the community demands; they have to try and see. Services are organized and offered on a guess that there is an unfulfilled demand out there. Most of the time they are wrong, which is why most businesses fail. But the success of ride-sharing services in the places where they have been offered is a demonstration that sufficient demand exists.

6. Licenses and permits show the traveling public that the companies, drivers, and vehicles meet minimum standards. Customers and the public can contact the licensing authorities with complaints and concerns; and the authority can follow up by investigating and, if needed, suspending the passenger carrier.

What verification methods would you design to confirm that standards are being met?

The standard mandated vehicle inspections that all Virginia drivers must pass are probably sufficient as a legal minimum. Firms with reputations to uphold and no cartel protections have incentives to have internal controls to make sure their vehicles exceed the standards.


I don’t know how seriously anybody will take these comments—not very seriously is my guess—but it works for a post.

Dialects and aggregation

One of the many interesting things I learned in Jesse Byock’s Medieval Iceland: Society, Sagas, and Power is that in medieval Iceland there were no regional accents. Iceland is about the size of the state of Kentucky, with its population mainly along the coast, and in this era travel was extremely slow. Horses and boats were the fastest means of getting from one place to another. However, the settlers shared a common cultural background, and many of the men would travel to local councils several times a year and the national council once a year. They would settle disputes and determine legal rules, but also share news and generally maintain cultural ties.

Compare this to the United States, a much larger country, where there persist to this day many different regional accents. The important point to consider here, as detailed in David Hackett Fischer’s Albion’s Seed: Four British Folkways in America, is that the cultures that made up the US were already distinct in the British Isles before they transplanted themselves here, in terms of accents but also in terms of many other things. Even in the early Anglo-Saxon period there were different branches of the Anglo-Saxon language spoken and written in various parts of England.

Beyond the scope of Byock’s book and of my own knowledge are possible regional accents in Norway, where the bulk of the settlers and the culture of Iceland were from. But regardless, these coalesced in Iceland into one truly national culture. (I don’t know but doubt that they have since fragmented.) The cultures of the United States have been slowly coalescing since the 1600s, but they still retain enough regional identity to be distinguished.

The point here is that this would be a very interesting analytical tool to apply to other countries and nations in the classical sense. Germany, for instance, is a partially artificial amalgamation of different smaller cultures, as we know from history and as the maps here show. China’s linguistic variety is well-documented, and reflects known historical facts. The pre-Columbian inhabitants of the Western Hemisphere had an astonishing degree of linguistic variation. (About indigenous languages, linguist Edward Sapir famously wrote “We may say, quite literally and safely, that in the state of California alone there are greater and more numerous linguistic extremes than can be illustrated in all the length and breadth of Europe.”) I know less about other countries, but clearly the same process is at work worldwide.

All in all this should make us careful in aggregating people together as “Indians”, “Chinese”, “Germans”, or whoever else. It may be the case that some countries really do represent nations in a 1:1 relationship, such as Iceland. But frequently this is not the case, and thinking otherwise may conceal important information when we analyze the world. We need to be very cautious about data aggregated on “national” levels.

On Property

It’s long been one of my main guiding ideas that a great deal of intellectual confusion results from the misuse of language. “Property” is a term so poorly defined that it’s worth talking about for a moment.

Example 1: One of the constant themes in Robert Kee's The Green Flag: A History of Irish Nationalism is how various reform-minded leaders were unable to find a solution to the widespread poverty in Ireland that respected "the rights of property". This meant mainly the rights of the landed class, whose titles dated back to the Anglo-Norman conquest. On any theory of just land ownership this fails, as this was land taken by conquest and parceled out by royal favor. The serfs who worked the land lived in desperate poverty, as had many generations before them, and this poverty was a cause of social instability for hundreds of years. The difference between the classes was more obvious in the beginning, when the landowners were Anglo-Normans and the peasants Irish, but over the centuries the landowners were absorbed into Irish culture, and it became a distinction only of class.

By conflating Anglo-Norman titles acquired by force with legitimate property, the Irish and British elites of later centuries were unable to conceive of a solution that respected property titles while alleviating poverty. The problem was not that private property ownership per se leads to poverty; the problem was that, according to a libertarian theory of property, the wrong people held title to the land. Breaking up the estates into smaller plots, each owned by the peasants who worked them, would have greatly reduced poverty very quickly, but this would have required a better understanding of legitimate property rights, and it would of course have conflicted with the interests of the powerful.

This is just one real-world example. Almost anywhere in the third world today you could find others.

Example 2: The use of the phrase "intellectual property" is another conflation. On one hand there is property, physical stuff that can be mixed with labor; on the other there are ideas and extensions of ideas, i.e. sounds, pictures, words. I favor the term "intellectual monopoly" myself, as it makes clearer that the government is behind what is really an artificial right.

Ideas are non-rival: my "consumption" of an idea does not prevent your simultaneous "consumption" of the same idea. By contrast, you and I could not both enjoy the same Cuban sandwich at the same time. Rivalry is one of the reasons why humans devised rules and theories concerning physical property, but it is entirely absent from books, music, etc. The now-classic example: if I borrow a CD of yours and copy it, I return the CD to you good as new, and now you and I can both enjoy the music.

Supporters of intellectual property have several lines of reasoning on their side. One is that it is a reward for creating things, which the creators deserve. I think this fails on the non-rivalry argument, and moreover the concept of "owning an idea" is absurd. Nobody owns language, for instance, yet under intellectual property law a person or corporation can own certain combinations of words.

Second, this reward is an incentive to further creation, without which creative output would slow. This is rarely addressed as an empirical question, though it should be. It's not at all clear that intellectual property rights stimulate creativity on net, and I suspect it's actually the opposite. A lot of intellectual property is held defensively: the holder does nothing with it, but prevents competitors from using it as well. Stephan Kinsella has a lot more on this point (and on opposition to intellectual property generally).

Third, the cynical reasoning is that current owners of said property have a lot at stake and fight tooth and nail not to lose it. Disney pushes for the extension of copyrights every time Steamboat Willie approaches public domain, and they always get it. People have a tendency to prefer the status quo over change, and so a lot of support for the current IP regime is not really ideological at all.

Example 3: There are differences when considering private property, public property, and common property. Your house is private property. City sidewalks are public property. Common property is not encountered so much anymore, but still includes things like air. Before the current property regime took shape, there were trails, for instance, that were considered common property. The trail was made by many people, and no one person could claim to own it, but at the same time the government was not considered its owner either. If a person were to erect a toll booth in the middle, he’d be laughed out of town by the other users of the trail. If the government were to try the same, it too would be laughed out of town.

Many, possibly most, major thoroughfares in the eastern part of the United States started out this way. Though they are now considered government property, this was not always the case. These paths, trails, and roads were the products of human action by many actors, and the part X "made" could not be separated from the part Y "made". This was also a common form of property among American Indians before the conquest. Land could be held in common by a tribe, without anybody in particular owning any particular part of it, and other tribes knew and mostly respected this.

It may be that common property is rarely applicable in the modern day, but where it does apply we'd do well to keep it in mind.


A lot of confusion about property rights stems from linguistic abuse. I don’t mean to suggest that we can resolve these issues quickly or simply, just that we should keep these different concepts in mind when we refer to “property”.

Calling ourselves “American”

It is something of a shame that the English language, for all its adaptability, does not have a better demonym for citizens of the United States of America. In English we simply say "Americans", which works fine in day-to-day usage in the US, but in Spanish this is no good: an "americano" is someone from anywhere in the Western Hemisphere, since "América" is the Spanish equivalent of "the Americas" in English. When I studied in Uruguay and said things like "Americans have a different way of approaching that topic [than the local custom]", people would always point out that my statement was nonsensical.

In Spanish there are two demonyms for people from the United States of America: "norteamericano" and "estadounidense". The first suffers from the flaw that its literal sense, "North American", takes in Canadians and Mexicans as well, so it is not specific enough. The second suffers from the fact that bringing it into English would produce such gross linguistic awkwardness that it simply wouldn't be done.

There are two other reasons the usage persists. When the United States of America were first brought together as a distinct unit, there were no other sovereign states to consider in the Western Hemisphere; the name sufficed to specify exactly what was meant. In the modern day, "Americans" form the dominant country in the world and feel no urgency to be more specific because, damn it, you know exactly what we mean!

I see this persisting for a long, long time. Perhaps as demographic, cultural, and economic changes add up over hundreds of years the standard English word will more closely resemble the standard Spanish word. I guess nobody alive right now will ever know.

UPDATE: In response to a Facebook comment, I’m adding this part.

“Gringo” frequently works pretty well. Though a Mexican term, it's well understood in other parts of Latin America, and I have no particular objection to it, but it is mildly pejorative and I don't see it becoming standard on that account. It also doesn't really apply to anyone but white people in the USA, and many Americans are not white (by the standard conception of "white" and by more inclusive conceptions). President Obama, for instance, is not a gringo, but he's still the most powerful man in the country. As the Hispanic population grows, the term will become less and less applicable.

“Yankee” refers to a specific cultural group dating to the colonial period, before the USA existed as a country. Descendants of Yankees are still referred to by this name, but it doesn't apply to most of the country. Certain regions may not care if the term becomes general, but certain other regions will oppose its adoption. Still no dice.

Languages, languages, especially English

I’ve often wondered how exactly English came to be so dominant as a world language. The British deserve the credit for getting this started, but after that it's a little fuzzy. Other countries, most notably Spain and France, had empires as well. The French language is spoken, at least by elites, in many countries, but most of these are former colonies. Spanish is spoken by a large number of people as well, but mostly as a first language. Yet English has by far the largest number of non-native speakers, over a billion.

The long history of the British empire, as I said, got this started. But before the British were dominant there was the Spanish empire, whose linguistic influence now pales by comparison. The relevant difference may be that Spain's empire was (relatively) geographically concentrated, while Britain's possessions were far-flung across the world.

One key component in the 20th century is the fact that two great-power nations used it. There are other great powers, but no other two with a language in common. In a contest of 2 vs. 1 vs. 1…, it looks like the language shared by two powers wins. (Austria-Hungary and Germany shared a language, but I have trouble placing Austria-Hungary on the same level as the US and UK, and even within Austria-Hungary most people would have preferred speaking a language other than German.)

I still don’t have a great answer, but a study mentioned in Die Welt today says that English is the most efficient language, so perhaps that’s another reason.