

28.1.09

I guess "more brazen" is, technically, a form of change

So Barack Obama takes office and, with big fanfare, introduces new ethics rules banning lobbyists from working for his administration. Then he immediately turns around and tries to hire a Raytheon lobbyist to work at the Pentagon and a Goldman Sachs lobbyist to oversee the financial industry bailout.

At least Bush could say that the rules he broke were made by someone else, and that he therefore never agreed with them in the first place.

23.1.09

Non-Kantian Autonomy

In the comments to my post on abortion, laurenhat asks how a sentience-based theory such as the one I propose deals with the fact that sentience comes in degrees:

I agree that sentience is a good basis on which to accord autonomy and rights. But you talk about sentience like it's a binary thing -- while I can see that it basically is when it comes to woman vs. fetus, there's no clear line in my head. Birth seems to me to be a good place to draw the line for killing living beings, because (a) we have to draw the line somewhere, and (b) the baby's health and mother's health are no longer closely tied after birth. But it's not like I think that birth is the moment of sentience. I also don't think living creatures are all equally sentient, and I'm curious about your views there and how it affects your decisions. Is it as important to you not to exploit honeybees as fish, or fish as pigs? Why/how does sentience play into that, or not?


I think the key to answering this lies in getting away from the Kantian model of moral considerability. In the Kantian model (which echoes Christian theories of soul-possession), morality begins by assigning intrinsic value to various entities on the basis of some characteristic, such as membership in Homo sapiens or sentience. Then moral action is that which respects the intrinsic value of those entities which hold it. To respect an entity's intrinsic value is generally held to consist, at least in part, of granting it autonomy.

This model becomes tricky when you are using a characteristic such as sentience which admits of degrees (i.e., pretty much every criterion of considerability that has been proposed except raw speciesism). You're faced with two choices: 1) you can set a threshold such that anything that's even a tiny bit more sentient than the threshold gets full intrinsic value and anything that's even a tiny bit less gets no intrinsic value, or 2) you can give things greater or lesser levels of intrinsic value. Neither of these is very satisfying.

To me, the solution is to drop the step of assigning intrinsic value. Morality then works as follows. Sentience, as I use the term, simply means the ability to care what happens to you. Autonomy means the condition in which an entity's caring about its fate makes a difference in what actually happens to it (that is, the entity "gets its way," or is presented with some justification as to why it didn't get its way in this case)*. Moral action consists in granting as much autonomy as possible to all entities concerned in an action. Thus, it's meaningless to talk about granting autonomy to things beyond the bounds of their sentience -- you can't get your way on an issue you don't care about. Autonomy thus scales automatically to track differences in sentience levels between entities and over time. So in questions like fish versus pigs, what matters is the degree to which the fish and pigs care about whatever it is I'm proposing to do to them -- there can be no principle like "pigs are more important than fish" except as a rough empirical generalization about the species' typical ability to enjoy autonomy.


*Note the difference between "autonomy" as I'm using it and the common notion of "independence." Independence is the condition in which an entity enjoys its autonomy through its own unaided exertions. The value of independence is parasitic on autonomy -- independence is only good insofar as the independent entity values not just some outcome, but also the fact of getting that outcome through its own work. Modern Western cultures tend to have an exaggerated view of how possible independence is (given our embeddedness in social structures).

22.1.09

The collective action problem of job search advice

Being back on the academic job market, I've read a fair number of articles -- on blogs, personal websites, university websites, and publications like the Chronicle of Higher Education -- giving advice to job-seekers. While I read and try to take to heart as much advice as I can find, I also find the presence of so much advice offered to the general public a bit disconcerting.

It's understandable that I want such advice -- I want to get a job. And it makes sense that my advisors and close colleagues would offer such advice specifically to me, since they have both personal and person-regarding-altruistic motives for wanting me in particular to get a job. To offer such advice to the world at large, then, would seem a natural extension -- you want everyone, not just yourself and your personal friends, to get a job.

This kind of extension makes sense for non-competitive goods. It's sensible to offer teaching advice to the world at large, because every professor could become a better teacher. The same thing goes for advice on doing good research -- it's sensible and desirable for everyone in academia to do better research.

But job-hunting is different in the crucial respect of being a zero-sum game. There are X jobs available each cycle, and each job will be filled with exactly one candidate. Thus, offering advice to all job-seekers doesn't increase the number of people with jobs. If we assume, for the sake of simplicity, that all job-seekers read the advice, all it does is intensify the level of competition.

The question, then, is whether intensifying the level of competition is a good thing. There are some situations in which more intense competition is intrinsically good -- after all, there's a reason that most players and fans prefer major-league baseball over little league, even though both leagues have one winner and one loser per game. This seems unlikely to be the case in the job market, since I've never heard of either a job-seeker or a selection committee member who loves the process itself.

Another possibility is that the competition creates better candidates than would have existed otherwise. The model here is Adam Smith's idea of the market, in which competition drives businesses to improve their products. Tougher competition in the academic job market may, for example, drive potential candidates to work more on improving their teaching skills, which will make them better at the job if they get it. I doubt that this is a primary reason for most academics to improve their teaching and research, but it's not insignificant (it was, for example, one element of my decision to take my current adjunct job).

It's also possible that more intense competition could improve the committee's ability to pick the best candidate, by making the competition more informative. Some of the advice seems well-suited to this -- for example, advice on writing a better CV will lead to CVs that more clearly explain candidates' qualifications, thus enabling the committee to choose more effectively. Something similar could be said for advice that helps candidates stay calm during interviews, thus enabling them to show their full qualifications.

However, much of the advice -- and, it seems, the most actively sought advice -- does not fit any of the above categories of advice that's beneficial at the social level. I'm talking here about the "tips and tricks" genre. This kind of advice aims simply to help the candidate advance their own personal prospects -- to game the system, in a sense. For example, one article (I've lost the links to all the things I've read) talked about choosing the right outfit to send a certain message to the committee about yourself. This particular piece of advice is interesting because it goes beyond being merely futile when offered to all candidates (as opposed to just to one candidate whose victory you favor) into being self-defeating. The committee is (consciously or unconsciously) judging candidates' clothing because they believe it reveals something important and informative about them. But after the advice is given, all that wearing the right outfit actually reveals is whether you've read the advice article.

Then again, perhaps I'm looking at it wrong by assuming that the writers of such articles are primarily motivated by making some social-level improvement in the job search process. They may be writing mostly to drive traffic to their website (which will increase as the job competition becomes more intense and thus candidates need to keep up on the best advice) and/or to increase their prestige as someone who gives out advice.

Privacy is the conclusion, not the premise

In his otherwise welcome proclamation* on the anniversary of Roe v. Wade, Barack Obama says something that is common in liberals' discussions of "cultural issues" but which I find problematic:

this decision ... stands for a broader principle: that government should not intrude on our most private family matters.


It's common to frame the right to abortion as a matter of privacy -- the government shouldn't be nosing in on women's decisions about their reproductive choices. A similar framing is used in opposing anti-sodomy laws -- the government shouldn't be peering into our bedrooms to see who we're having sex with. This is an appealing framing because it seems to settle the issue procedurally (via limits to the scope of legitimate government action) rather than having to engage in -- and hence hash out one correct answer to -- substantive questions like "is abortion wrong?" or "is sodomy wrong?" There's a tantalizing potential here for securing broader agreement, so that people could say (like John Kerry) "I personally find abortion wrong, but since it's a private decision it's none of the government's business."

But I think resting our arguments on privacy is ultimately question-begging and gets the direction of inference backwards. What things are private cannot be defined in an objective, value-free manner. Rather, whether something is private depends on whether it's the type of potential wrongdoing that the government ought to be interfering with. Nobody can disagree that government should not interfere with private matters, because by definition "private" is just "that with which government shouldn't interfere."

Many progressive victories have come from arguing in this "wrong, therefore not private" manner. Most relevant in this context is the fact that spousal rape is now considered a crime -- that is, a subject fit for government interference -- rather than a "most private family matter," as it had once been deemed. Spousal rape wasn't turned into a crime by some sort of analysis revealing that it wasn't really private after all and was therefore subject to government control. Instead, it was decided that it was a serious enough problem to merit government interference, and therefore it was ruled no longer private.

To my mind, the broader principle that encompasses abortion rights is the principle of defending the autonomy of sentient beings (such as women) and not according independent moral status to non-sentient beings (such as fetuses). We can establish who's sentient or not on the basis of external factual information, then deduce which situations do not implicate potential serious violations of sentient beings' autonomy (or in which one sentient party's autonomy rights would always trump any others), and then declare those situations off-limits to government interference. To say the choice to get an abortion is a "private" matter is to summarize or restate the conclusion, not to justify it.

Putting the "no government interference" and "private" ideas the right way around does not necessarily force the John Kerrys of the world to become pro-lifers. They could argue that the wrong doesn't rise to the level of seriousness to warrant government action, or that government action is counterproductive in this case, or that government action would have worse side-effects of various sorts. But they have to argue that, not just presume it by using the label "private."

*My enthusiasm for his statement is tempered by the fact that -- despite the gushing of various bloggy types about the spiffy new website -- I can't find this proclamation anywhere on WhiteHouse.gov.

19.1.09

Plus ça change ...

I'm reading Jane Addams' The Second Twenty Years At Hull-House, which has a long chapter on "Immigrants Under the Quota." The book was published in 1930, but aside from the details of the national quota systems that were in place, it could have been written this year. The nativist bigotry, the pitting of immigrant and native workers against each other, and the indiscriminate raids are all familiar -- and her proposed solutions, like expanded legal immigration, aid to sender countries, unionization, and preventing employer exploitation of immigrants, are in line with typical 21st-century progressive ideas.

This all has to be balanced against the unfortunate previous chapter of the book, "A Decade of Prohibition." Despite detailing the many social ills that arose from the prohibition of alcohol -- ills which again parallel those of the modern prohibition on other drugs -- she remains a steadfast proponent of the 18th Amendment. She even cites as reasons for her optimism the successful prohibition of other drugs and the abolition of slavery (the latter especially grating given the racial dimension of the current war on drugs).

18.1.09

Stay or Go in California

I keep forgetting to link to the news that California is considering adopting the Australian "stay or go" approach to wildfire. "Stay or go" is short for "prepare, stay, and defend, or leave early." The evidence from Australia is pretty strong that:
  1. Houses rarely burn down in the initial pass of the fire front. Rather, they burn down later as embers land on eaves, porches, etc. and set the house on fire. So an able-bodied person can shelter in the house while the fire passes, then go around extinguishing embers, saving both their own life and their house.

  2. Most fire deaths occur when people evacuate at the last minute and are overtaken in the open or in their vehicles.

These two points in combination create a tragic irony when chivalrous families send the women and children fleeing for supposed safety at the last minute, while the men stay behind to face the alleged greater danger of the fire at home. Thus, most fire agencies in Australia recommend that homeowners prepare their homes -- installing fire-resistant roofing, putting screens over eaves and places embers could enter, clearing vegetation near the house. If their preparations are good, and someone in the household is physically able to do the "defend" part, they should stay behind in the event of a fire. If not, they should evacuate early.

Implemented properly -- with extensive education and aid in the "preparation" phase -- the Australian strategy both empowers residents of fire-prone areas and places responsibility on them. This is in contrast to the typical U.S. policy, in which residents are implicitly treated as self-centered and incompetent (which they may well be, in the absence of the explicit or implicit training from a "stay or go" policy!), and are hustled out of the area so as not to interfere with the work of firefighters.

The spread of "stay or go" fits with a general tendency toward personal/household responsibility in fire policy. The emerging conventional wisdom (captured, for example, in Roger Kennedy's Wildfire and Americans) has a libertarian bent -- people in fire-prone areas, especially well-off new migrants to the wildland-urban interface, are portrayed as reliant on subsidies and bringing a sense of entitlement to have the government save them from things. But they must now be made -- through things like differential insurance rates and stay-or-go policies -- to take responsibility for their own safety if they choose to live in these areas.

This libertarian shift is good as far as it goes, but it doesn't go far enough. On the one hand, it's based on an implicit default model of the homeowner as an independent and autonomous self (a model which has been extensively critiqued when it shows up in Rawls' and other political philosophy). On the other hand, it places too much emphasis on surface-level individual choice, rather than looking deeper into the structural reasons people end up in fire-prone living situations -- reasons which also affect how well a policy like stay-or-go can be implemented. For example, leaving early involves a lot of uncertainty about whether the danger will ever actually arrive, and hence the potential for either unnecessary evacuation or waiting too long; evacuating may be emotionally and cognitively burdensome, as well as logistically costly with respect to work and family responsibilities.

There's an interesting paper to be written somewhere in here about the implicit political philosophies of these policies, and their connection to macro-political-economic changes (for example, "stay or go" seems to mesh nicely with the neoliberal devolution of responsibility seen elsewhere in environmental and social policy).

16.1.09

A few quick links

As I head out the door to work -- "the environment is complicated" edition.

Climate change may be offsetting some aspects of other recent land-use changes in the southwest. Invasive species can have major consequences for ecosystems, but so can efforts to eradicate them (warning: cute bunny picture). So I agree that the new administration and Congress should put more money into earth-monitoring systems.

Thoughts on "Sea Kittens"

PeTA's latest performance art piece -- trying to rename fish "sea kittens" -- is a breath of fresh air, since it's merely silly, not offensive and progressive-coalition-fracturing. I doubt it will work, because the strategy of analogizing fish to kittens (as opposed to trying to get people to appreciate fish's sentience in their own right) is too big a leap for most people to make on the Great Chain of Being, and thus will lead to counterproductive focus on the points of disanalogy.

Some of the responses to PeTA are lacking as well. Gwen at Sociological Images points out that the sea kitten campaign is aimed at children. She keeps enough analytical distance in her post that she may deny intending that as a criticism, but it's hard to believe many readers wouldn't interpret it as one, and agree -- those wacky activists are targeting our kids! But Gwen also links to an NPR piece that quotes two Alaskan pre-teen girls who are skeptical of the "sea kitten" idea (validating my point above about disanalogies). Chastity and Harmony are not in some neutral, un-indoctrinated state -- they've been quite explicitly taught by their parents, community, media, and government that fish are food. Indeed, fish-as-food is even part of their identity, as evidenced by their praise of the quality of Alaskan fish. People have a tendency to take what's normal in their society as neutral and non-indoctrinated, leading to the idea that kids ought to be raised in a "normal" way (meat-eating, church-going, etc.) and then allowed to make their own choices as adults. You may think the substance of PeTA's message is incorrect, but there's nothing seedy, or different from their opponents, about the fact that they're aiming it at kids.

The issue of aiming a message at kids does, however, raise some questions for veganism. I understand the appeal of trying to reach the young, before their habits are too deeply ingrained. But it's easy to get trapped into thinking of veganism as purely a matter of personal moral choice -- killing animals for food is wrong, so don't do it. Yet choice is only as good as the environment in which you're trying to make it. Kids in particular have limited ability to exercise choice, since their lives are run to a large degree by parents and other authority figures. And veganism is a tougher choice for kids to push than past successes like "buy me that toy!" and "put in CFL bulbs!" because food choices are more deeply structurally embedded (though there are success stories -- including the then-girlfriend who influenced me to stop eating meat -- of kids whose parents were receptive to them becoming plant-eaters). So instead of motivation-side efforts like the one "sea kittens" purports to be, I'm more intrigued by opportunity-side proposals like getting schools to offer vegan lunch options. This would be particularly effective if the vegan options were presented as just another food alongside the meat, available to anyone who thinks it looks tasty, rather than as special food for kids with special requirements. That would serve to normalize animal-free meals in kids' minds without putting them on a collision course with their parents.

Also in the NPR article, fisheries observer Mary Powers declares that opposition to fish-eating is "unpatriotic," which I guess is Alaska's version of "What's good for GM is good for America." Her claim is silly enough not to need detailed deconstruction, but it does remind me of a thought I had the other day about the limits of veganism's boycott effect in making major change in animal agriculture. For all the hyperventilating about the bank and automaker bailouts, the agricultural sector is already heavily reliant on Uncle Sam (and in fact on the subsidies). Were veganism ever to get popular enough to put a major dent in Big Ag's profits, it seems inevitable that the feds would step in to prop them up (further), so that we would continue to produce lots of meat even if fewer people were interested in eating it.

This post might make me #1 now

Speaking of unfortunate search results, this blog is #2 on UK Google for "utilitarianism is stupid," despite the fact that I think utilitarianism has a lot going for it as a moral theory.

12.1.09

A Vegetarian Diet

In theory, the English word "diet" means "the set of things someone eats"* (sort of the climate to the weather of each day's menu). It functions fine in that capacity when we stick a modifier on it -- "low-sodium diet," "liquid diet," "standard American diet." But left unadorned, "diet" is assumed to mean "weight-loss diet." This is symptomatic of how our culture understands food. The primary question that springs up whenever the choice of what food to eat is raised is whether that food is slimming or fattening.

We can see some of this tendency at work in the AP's article** on a new survey of child vegetarians and vegans. A substantial part of the story is taken up with the question of plant-based diets and weight. The article says the right things about these topics -- that few people go veg as a way to lose weight, and that giving up animal foods is no guarantee of weight loss***. What interests me is that this topic had to be addressed at such length. For example, the issue of motivation is introduced not simply with the statement that most do it for ethical reasons, but with the denial of the presumably-otherwise-assumed idea that it's done for health (which, as is typical, is conflated with weight).

*It also means "decision-making assembly," which was a disappointing discovery when, as a young person, I went to find out more about Martin Luther and the Diet of Worms.

**If anyone wonders why I post about AP articles so much, it's because I get paid to read the AP wire and pick which stories should go in the newspaper.

***A college friend recently expressed shock at my current eating choices in light of how scrawny she remembered me being back in my undergrad days. I replied "French fries are vegan." I was actually skinnier as an omnivore, though I attribute the change to aging and a lack of opportunities to walk for my errands in Arizona.

Natural Lactose Tolerance

I find it interesting how people can come to the same conclusions as I do, yet be swayed by arguments I find quite lacking. Today's example comes from Alicia, who says the argument against consuming milk that she found most compelling, and which tipped her over the edge, was the idea that adult milk consumption is unnatural.

There are a number of things right about Alicia's post. She's right to say lactose intolerance should be understood as a mismatch between a person's biology and the expectations of their society, not a deficiency in the lactose intolerant person. And the statistics she cites about the prevalence of lactose intolerance worldwide, and its particular concentration among people of color, are a good corrective to the idea that adult milk consumption is a human universal. So "everyone should drink milk!" has both ableist and ethnocentric components.

Where I disagree is with her further claim -- the one she says tipped the balance for her in going vegan -- that milk consumption is unnatural. In general, I don't find naturalness arguments compelling on their own, except as a rough indicator of potential problems when we lack other, more direct information (which is not the case here -- we have centuries of direct information on the effects of consuming milk). Further, I would argue that it's perfectly natural for European-descended people to be lactose-tolerant. European lactose tolerance isn't just a coincidence -- it's a product of evolution, a quintessentially natural process if ever there was one, fitting Europeans to better take advantage of the foods available in their cultural-economic environment (i.e., one with dairy cows). So absent further arguments about the harm to cows and the environment (which I think Alicia also accepts), those of us who are perfectly able to digest lactose as adults should be free to continue consuming milk.

11.1.09

débitage de l'aubergine

I'm amused, given my plant-eating ways, that when you type "debitage" into Google, it suggests "debitage blog," "debitage archaeology," and "débitage du chevreuil" -- which is apparently a French term for some sort of animal butchering technique. And for all the people that will probably now be finding this post in their searches for "débitage du chevreuil," may I suggest this link.

Assignment Desk

It's not a formal New Year's Resolution, but I would like to get back in the habit of posting more regularly this year. I realize I don't have even one percent of the readers of someone like Ezra Klein or Matt Yglesias, but perhaps someone out there still has an idea of something they'd like me to write about. Comments are open.

Pessimistic About Prisons

The AP says that some prison reformers see the current economic crisis as an opportunity to get governments to rethink their harsh and ineffective prison policies. I'm rather more pessimistic.

I understand the promotional value of framing prison reform as a natural response to tightening budgets, and I agree that a reformed system would be cheaper, particularly in the big picture when reduced costs from crime and community disruption are taken into account. But I also think that for prison reform to really work, we need a shift in thinking from "crack down on criminals" to "prevent crime."* And I don't think changes made primarily for fiscal reasons will bring about such a shift, and thus they won't be sustainable beyond the end of this recession.

I think it's more likely that prison conditions will get worse. In a crisis, surrounded by economic uncertainty, people feel a strong need for imposed order and for insider-outsider dynamics. The first things to go will be rehabilitative programs like job training and GED classes, and budgets for things like health care and food will tighten (though Sheriff Joe and Sheriff Greg are way ahead of the game here). Early release and reduced sentences for non-violent offenders will have some beneficial effects, though those folks will mostly just be dumped back on the street without any re-integration programs. When jobs are scarce, how popular will it be to take steps to open them up to people with criminal histories? The likely fate of social service and early-intervention programs that try to help people before they end up committing crimes seems obvious. Prison overcrowding will be rampant (the pictures with the AP story are frightening, although my wife assures me that's not normal -- yet). Overcrowding and a lack of outlets will fuel conflict between groups of prisoners and between prisoners and guards (if you've got nothing else to do all day, why not start a race riot?), in turn justifying harsher treatment. And there will be a strong temptation to turn more prisoners over to private companies with minimal oversight. All in all, not a pretty picture.

*It's interesting that a lot of tough moral dilemmas are of the form "Would you do X beneficial thing, even if it hurt someone?" whereas prison reform asks "Would you do Y beneficial thing, even if it was nice to prisoners?"

10.1.09

When Scientists Assume

Here's a nice illustration of the importance of lay knowledge in risk management:

The FDA detected melamine and its byproduct cyanuric acid separately in four of 89 containers of infant formula tested in the fall, but never at the same time. A can of milk-based liquid Nestle Good Start Supreme Infant Formula with Iron contained traces of melamine while three different cans of Mead Johnson's Enfamil LIPIL with Iron had traces of cyanuric acid.

The FDA says studies show potentially dangerous health effects from the industrial chemicals only when both are present. The lack of dual contamination is key, say agency officials, and thus there have been no recalls of the tainted formula.

In a letter Friday, consumer advocates told FDA commissioner Andrew C. von Eschenbach and U.S. Department of Health and Human Services Secretary-Designate Tom Daschle that they were concerned the FDA was assuming parents would never feed their babies more than one type of formula. They said they had heard from a concerned mother who routinely fed her baby two different formulas because "one caused constipation, and one caused loose bowels, but together the baby's digestion seemed just right."


It's easy to think -- especially in the case of industrial chemicals like melamine -- that we should rely strictly on science to tell us how dangerous something is. The alternative is typically conceptualized as laypeople doing their own risk assessments, which run the danger of relying heavily on anecdotal information or unfounded assumptions (which is not at all to say that lay epidemiology is necessarily invalid).

Harder to dismiss is lay knowledge that exposes unwarranted assumptions being made by scientific risk assessors. In order to assess the risk of an activity or product, we must know both the mechanisms by which it causes harm (the chemistry of melamine and the physiology of its ingestion, in this case) and the social practices by which the activity is carried out and safety measures may be implemented. These latter areas are not ones in which scientists can claim any particular expertise. Too often, they complete their risk analyses using unfounded assumptions about the social side -- typically by assuming that products are used as the manufacturer intended. Then scientists' expertise in the first half of the risk assessment is taken to justify the whole package.

7.1.09

Defining Bravery Down

I think some American liberals are a little too quick to stick the label "brave" on the expression of minority-but-mainstream opinions. For example, Jeff Fecke thinks Jon Stewart is "very brave" for doing a bit on the Daily Show criticizing the fact that politicians on TV are all taking Israel's side in the current war. It reminds me of all the people who pat themselves on the back for being so brave about opposing the Iraq war back in 2003. I admit my perception of the Iraq debate may be a little skewed, since at the time I was at Clark University, where the student body is divided between the Marxists and the people who think the Marxists are right-wing reactionaries. But really, what risks is Stewart running by making some mildly-more-pro-Palestinian commentary? Maybe he'll be criticized by some pundits -- most of whom are right-wingers who didn't like him to begin with. If he's still got his show after all the vicious stuff he's said about the Republicans over the past 8 years, I can't see how this will be the thing that leads to some sort of actual retaliation. I doubt anyone will so much as key his car over this.

(Note: Comments addressing the substance of the Israel-Palestine conflict will be considered off-topic. There are millions of other places on the internet to rehash that.)

2.1.09

Incentives Require Opportunities

Dave Roberts makes a good point -- gas prices have to get punishingly high to make a serious dent in people's driving habits because most people don't have the option of driving less. There's a pervasive "Ten things YOU can do to save the Earth!" mentality that makes us over-focus on pushing individuals to make different decisions. But our decisions are highly constrained by collective, infrastructural conditions like the presence or absence of public transit, or bad zoning laws and the sprawling development they cause. I suppose if you incentivize people enough they'll vote for government action to change those collective conditions, but if government policy is your incentivizing lever, why not just cut to the chase?

In related news, here's a case of incentives gone awry: I'm intending sometime in the next week or two to drive an hour to Mesa in order to ride the new Phoenix light rail just for the sake of riding it. I doubt I'll ever actually use it for getting around because it doesn't help me get anyplace that I need to go.

1.1.09

Jane Addams meets Friedrich Nietzsche

Brian Leiter makes an interesting argument*, derived from Nietzsche, that universal moral truths (i.e. those of the "you ought to do X" form, not "it would be good for you to do X") do not exist. It turns on the relative lack of progress in moral philosophy as compared to other disciplines, like natural science or math. In the natural sciences and math, questions have been progressively answered and theories demonstrated to be sound, to the satisfaction of scholars from around the world. In moral philosophy, on the other hand, careful thought by a tradition of philosophers stretching back thousands of years has failed to produce the same convergence -- the foundational disputes between deontologists and utilitarians, virtue ethicists and care theorists, remain as serious as ever, and many scholars have simply turned to within-paradigm talk rather than trying to build a unified theory that can gain the assent of all. Leiter-Nietzsche argue (by Occam's Razor) that the best explanation for the condition of moral philosophy is not that everyone is stupid (or worse yet, that, say, everyone but the utilitarians is stupid), but rather that there's nothing there to discover -- no moral truths exist** to compel convergence and agreement in the way physical truths compel the triumph of oxygen over phlogiston and relativity over fixed frames of reference. That the non-existence of moral truths has not itself compelled a realization of their discipline's futility can be explained as the result of the great social and individual investment in moral debate, which makes practitioners reluctant to admit they're engaged in mere sophistry***.

My first thought was to ask how social science fits into Leiter-Nietzsche's contrast between progressive natural science and sophistic moral philosophy. Social science seems to have made little more progress than moral philosophy -- foundational debates between Marxist and neoclassical economics, for example, persist even if disguised by the disciplinary barrier between economics and sociology. Indeed, social scientists have it worse than moral philosophers in that they can't even agree on what would count as progress in their field (the basic positivist-interpretivist divide). But surely it would be absurd to say that this must be because the facts social scientists are pursuing do not exist, that there is no such thing as "how a society works."

Then Jane Addams comes in. A crucial assumption in Leiter's paper is that the activities of philosophers thus far -- mostly "armchair" conceptual analysis, deductive theorizing, and intuition-probing -- are methods that would lead to the discovery of moral truths if such things existed. Their failure is then a telling indictment of the whole project. But the philosophical school of Pragmatism -- represented best in this context by Addams and John Dewey -- would hold that such theorizing is the wrong way to go about advancement in the moral field. The natural sciences, pragmatists would note, were stuck in a rut much like moral philosophy's current one up until the Enlightenment. During that period they worked on the armchair model, deriving theories of physical laws from deductive reasoning and interpretation of ancient authorities. Hence, for example, the pre-Kepler inability to consider that planets might orbit in imperfect ellipses rather than perfect circles. If the theories of that era, moral as well as scientific, were not as diverse as modern moral philosophy's, it's only because of the existence of the Church as an institution with a vested interest in maintaining the appearance of unified truth. What allowed the natural sciences to become genuinely progressive was the adoption of the experimental method, developing theories through engagement and experience with the world being studied. Pragmatists propose that moral progress can only come by incorporating the experimental method into moral study****. Indeed, Addams says she learned the importance of the experimental method experimentally, as the first lesson that came out of her experience running the Hull-House charity.

We can point to a few general advances in moral philosophy. Though the specifics remain hotly contested, it is generally accepted now that any acceptable moral theory must, for example, condemn slavery, reject divine natural law, and be democratic in some form. These advances were arrived at not by Cartesian cogitation but by the give-and-take of reflection and practice in actual societies, which found the contrary doctrines, popular as they once were, to be unworkable responses to the real problems we encounter.

A variety of reasons may be proposed for why the natural sciences could seize on the experimental method and be carried so far by it while moral philosophy has lagged. The most obvious is difficulty. Compared to human activity, physical processes are relatively tractable in scale, controllability, and manipulability. It's easy to put a plant in a box and pour fertilizer on it to see what happens, but it's difficult to change the system of cooperation in a neighborhood to see what happens. Truly applying the experimental method to moral study requires much broader cooperation and much more time and resources (making it ironic that while cutting-edge science is typically practiced in teams numbering in the dozens, philosophy papers are still mostly the product of a single author).

Another important factor is the historical origin of the fields. The era of productive and progressive natural science is closely linked to its marriage with technology. The desire for machines and processes with which to manipulate nature exerted strong pressure on natural science to "get it right" in response to specific practical problems, while suiting the competitive work of individuals or small teams, which, as mentioned above, is more feasible in the natural sciences than in the moral ones. Moral philosophy, on the other hand, is a child of theology. While submission to scripture and deductively asking what a perfect being would require have been found wanting as routes to moral advancement, the image of the authoritative, foundational macro-theory has continued to hang over moral philosophy.

None of this is to say that experimental moral learning hasn't occurred -- talk to any activist involved in a serious moral struggle. But the learning still happens mostly piecemeal, as a personal or particular-group by-product. It hasn't been institutionalized, systematized, and ratified by the academy the way scientific advances have been.

It should be said that the distance between Addams and Nietzsche may not be quite so great as I make it out to be*****. I'm no Nietzsche scholar, nor have I read Leiter's other work that may clarify this, but Leiter does at times refer to Nietzsche's critique as being directed against "morality in the pejorative sense" or a "Platonic" conception of moral truths. This leaves open the question of exactly how much of the normative field Leiter-Nietzsche intend to debunk. The idea of normative statements in the sense of "good for" -- which Leiter insists Nietzsche accepts -- has a pragmatist flavor to it. But Nietzsche is commonly understood, in his concern to free great men to exercise their will to power, to have a far more egoistic interpretation of what kind of normativity is left than the explicitly social and democratic ideals of the pragmatists.



*The link takes you to a draft paper -- I'm responding to the Dec. 10, 2008 version. Doubtless elements will change before it's officially published.

**One might say that it's simply that no moral truths are knowable -- though in practice that amounts to the same thing unless one can give a clear specification of the conditions under which the epistemic barrier might someday be surmounted.

***Though if this is true, it seems unlikely that Leiter's paper would make much headway, seeing as it merely states an argument rather than changing the social context.

****I would point out here that the pragmatist experimental method -- of trying out the implications of various moral theories and seeing what they amount to in practice -- is different from the field of "experimental philosophy," which attempts to disprove certain armchair philosophical claims by showing that their presuppositions are not in fact shared by ordinary people.

*****I was interested to note recently that Pierre Bourdieu -- who has a basically pragmatist orientation despite rarely having the term applied to him -- makes frequent, approving references to Nietzsche.