

Blaming Environmentalists For Fires

The idea that environmentalism is responsible for bad wildfires has become one of the central rhetorical devices of contemporary green-bashing. Usually there's at least a semblance of an explanation -- environmentalism is said to have led to abandoning controlled burning and other fire control measures, so that fires easily got out of control. I don't entirely agree with that argument*, but at least it's an argument.

However, it seems that some people have gotten to the point where saying anything about wildfires constitutes proof that environmentalism is wrong. Take this article by Miranda Devine. She attempts to use the 2003 Canberra fires as proof that environmentalists are wrong (as well as self-righteous and pushy) on the uranium mining issue. But reading her description of the Canberra fires, it's clear that the blame for deaths in those fires lies with a flawed public communication effort by the firefighting authorities (a conclusion I think is right, though I haven't followed the investigation into these fires as closely as I could have). Last I heard, poor public communication is not a central tenet of environmentalism.

* It's more true of Australia than the US (since in the US environmentalists were among the people fighting against the militarized "all fires are bad" ideology), but in both countries the picture is far more complex. I think the biggest factor in both countries is the expansion of exurban settlement, which is missed by the "blame greenies" storyline. Exurbanization 1) puts more people in danger zones, 2) fragments ecosystems and exposes them to increased ignition sources, 3) expands and spreads out the assets to be defended, complicating fire prevention and firefighting, 4) puts heavy reliance on homeowners to keep their own property in order -- but through ignorance, naivete, entitlement, and differing values they don't, and 5) leads to meddling by homeowners in fire control projects on adjacent lands because of concern over smoke, the risk of escaped controlled burns, and the aesthetic disamenities of mechanical treatment or controlled burning.

I also suspect -- though I don't have clear evidence of this -- that there are a pair of vicious cycles going on among fire authorities. On the one hand, they assume that environmentalists are against them, so they don't bother to create fire control plans that meet safety objectives as well as satisfying environmental concerns, thus provoking environmentalists to oppose those plans. On the other hand, the "environmentalists cause bad fires" storyline creates a powerful temptation to slap a fire control justification on projects that primarily serve other purposes, creating additional points of seeming fire safety vs environmentalism conflict.


The Unfairness Of Yucca Mountain

The proposal for a national nuclear waste repository at Yucca Mountain is back in the news, as the Department of Energy moves forward with plans, people turn their attention to nuclear power as an alternative to increasingly expensive oil, and a proposal to make Nevada the second contest of the 2008 Democratic presidential primary gains steam. I don't have a strong view about the substantive merits of centralized versus dispersed storage of nuclear waste, or the suitability of the Yucca Mountain site on engineering grounds. What I do have an opinion on is whether the current approach to establishing a centralized repository at that site is a good one.

To some degree, the dispute over Yucca Mountain is a technical dispute over the real level of risk. But it also goes deeper, so that a purely technical debate about millirems and geological stability will not resolve the issue. The deeper dispute arises from the fact that there are two ways of looking at what makes a risk acceptable, which I'll call the "economic paradigm" and the "social paradigm." Each paradigm can be treated as a descriptive theory (how people actually do think about risks) or as a normative theory (how people should think about risks).

The economic paradigm says that the acceptability of a risk is entirely a function of its (perceived) level of harm. In this way of thinking there are some levels of risk that are de minimis -- so unlikely, and/or of such small magnitude, that they effectively don't count. For risks above the de minimis level, we can apply some sort of cost-benefit criterion, so that for a given level of benefit, we would put up with a certain level of risk. There is much room for debate about how the de minimis level and the exchange rate between risks and benefits should be set.
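Stripped to its bones, the economic paradigm amounts to a two-step decision rule, which can be sketched in a few lines (the threshold and exchange rate here are purely illustrative values I've made up, not anything empirical):

```python
# A minimal sketch of the economic paradigm's decision rule.
# DE_MINIMIS and EXCHANGE_RATE are illustrative placeholders;
# where to set them is exactly what the paradigm leaves open to debate.

DE_MINIMIS = 1e-6    # risks below this level "effectively don't count"
EXCHANGE_RATE = 2.0  # units of benefit demanded per unit of risk

def risk_is_acceptable(risk, benefit):
    """Return True if a risk is acceptable under the economic paradigm.

    risk and benefit are on arbitrary common scales; the point is only
    the shape of the rule: a de minimis floor, then a cost-benefit
    comparison above it.
    """
    if risk < DE_MINIMIS:
        return True  # de minimis: too small to count at all
    return benefit >= EXCHANGE_RATE * risk
```

The social paradigm's complaint, put in these terms, is that no fairness variable appears anywhere in the function.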

Proponents of Yucca Mountain typically work within the economic paradigm. Their primary arguments focus on establishing that the harms from the waste repository fall below the de minimis level. Secondarily, they point to benefits -- either to society at large, or specifically to those who will bear the risk -- that outweigh the risk.

The social paradigm doesn't deny that the level of harm plays a role in shaping risk acceptability. But it points out that social factors -- the why and how of imposing and mitigating risks -- can play as large a role, or an even larger one. An unfair risk can be just as unacceptable as a harmful risk. Research on risk perception consistently shows that people will accept a certain level of risk if it is imposed through a democratic, participatory process in which the affected people have a say -- but the same risky project would be greeted with howls of outrage if it were implemented through the "DAD" ("Decide, Announce, Defend") approach. Think, as an analogy, of the way you might be angry if your housemate just went and used some of your milk, even though you would gladly have given them that same milk if they had asked permission first.

Opponents of Yucca Mountain are thinking in the social paradigm. In one sense, putting all of the country's waste in one state seems intrinsically unfair -- why should Nevadans have to bear the risks (however small they may be) for the rest of the country's energy choices? This prima facie distributional unfairness can, however, be overcome through a properly democratic approach to decision-making. If the people of Nevada were to feel that they had been given a real say in how the nation's nuclear waste would be handled, and that Yucca Mountain was not a foregone conclusion, they would be much more likely to support Yucca Mountain. And if they still said "no thanks," such a participatory process would be able to identify a solution that would be acceptable to whoever ended up living next to the waste. (There is an excellent case study of how this all can work out based on a landfill siting process in Canton Aargau, Switzerland*.) Further, there are concerns about politically motivated intervention in the supposedly benevolent dictatorship of the bureaucrats and scientists who chose the current plan -- notably Congress's 1987 order to the DOE to only consider the feasibility of Yucca Mountain.

Because the prevailing institutions accept only economic-paradigm arguments, people who oppose risks for social reasons will often have to recast their arguments in economic terms, creating a frustrating proxy battle. But social-paradigmers' larger assessments of the harms are not just a strategic move -- there's understandable spillover between knowledge of the fairness of a process, and skepticism about the data on the harms. There's enough uncertainty in technical risk assessment that it's quite reasonable to be concerned that if someone is proposing to impose a risk in an unfair way, they may have (consciously or unconsciously) resolved those uncertainties in ways that make the outcome more favorable to them, and hence unfavorable to the people who will have to directly bear the risk.

So trying to defend the project with strictly economic-paradigm arguments misses the point. Even if you believe that the economic paradigm is normatively correct, your arguments will fall on deaf ears unless you can either win Nevadans over to that paradigm first, or satisfy their fairness concerns. Resolving the question of fairness is critical. At sites across the country, nuclear waste sits in temporary storage, produced over the past few decades pursuant to the DOE's now-broken promise that it would find an acceptable place to permanently store it.

* Full disclosure: One of the authors, Tom Webler, is one of my bosses on a different research project.


Climate Change Is Morally Repugnant

I'm a pretty sorry excuse for a blogger, since I'm only just now getting around to commenting on a much-blogged article by Daniel Gilbert about climate change. Gilbert argues that people aren't concerned about climate change because we all share certain evolved cognitive biases.

The problem is that it's not "people" in general who are unconcerned about climate change -- many people are quite concerned. Any psychologically worthwhile theory of risk perception must be able to recognize this diversity of views and account for both the skeptics and the alarmists.

Gilbert is right to point out that risks will attract more attention if (among other things) they're blameable on humans, morally repugnant, immediate, and quick. But he writes as if these four criteria are objective features of various potentially risky activities. That assumption may be close enough in the case of immediacy and speed of onset. (Though we should note the big debates over whether particular events, such as Hurricane Katrina, are immediate impacts of climate change. Depending on where and how you live, the impacts may be much closer than they are for other, more sheltered, people.) But Gilbert's first two criteria are clearly not objective. Whether a risk is human-caused or morally repugnant depends on your worldview -- which is why some people get worked up about climate change while others brush it off.

Let's start with whether climate change is human-caused, since that one is easy to dispose of. Gilbert frames it as a matter of impersonal atmospheric chemistry. To see that an alternative frame is possible, all you need to do is mention climate change to an environmentalist (or even just a run-of-the-mill Democrat). You'll quickly learn that climate change has a few definite faces behind it -- President Bush, oil company CEOs, and SUV drivers in particular.

Then there's the most talked-about aspect of the article: Gilbert (following Mary Douglas) says a risk must be "morally repugnant" to generate concern. This is as culturally-relative a criterion as you could ask for. Take his example of the "risk" of homosexuality. For people who incorrectly think that homosexuality is morally repugnant, it's easy to see it as posing a major risk, conjuring up scenarios of the breakdown of family bonds and plagues of STDs. But for those of us who don't find homosexuality to be morally repugnant, such scare stories sound ridiculous. Right or wrong, moral views shape risk perception.

There are two mechanisms leading from assessing an activity as morally repugnant to seeing it as having risky consequences. On the one hand, there's a functionalist route -- sounding the alarm about a risk will justify implementing policies that you liked anyway. On the other hand, there's the role of avoiding cognitive dissonance. We like to think about things as being either good or bad, so if we already think something is bad, we'll be more open to believing additional bad things about it than additional good things, and vice-versa.

Now let's turn to climate change. Gilbert declares that it is not morally repugnant -- but morally repugnant to whom? It should be no surprise that concern over climate change has found a comfortable home among those of us who think, on independent grounds, that the modern capitalist system is in need of an overhaul (whether reformist or radical). The modern economy produces inequality, unhappiness, unfreedom, and anomie*. So it's not a big leap to see that system as also producing risks such as climate change (and pollution and deforestation and so on). Perhaps more importantly, as David Roberts and the Bishop of London point out, the consequences of climate change are morally repugnant to those of us who have well-tuned moral senses.

On the other hand, those whose moral compasses are calibrated to approve of the modern lifestyle are going to be disinclined to worry about climate change. After all, action to avert or mitigate climate change would require things like regulation and changes in consumption patterns, which such people regard as morally repugnant.

We won't understand why there isn't more concern about climate change if we treat people in general as an undifferentiated mass.

*The point here is that it produces too much of these things, regardless of whether it produces less of them than some other economic system that has been tried.


Chimaeras and Environmentalism

David Barash thinks that creating human-ape hybrids would be a great way to strike a blow for truth and reason. His main motivation is to disprove creationism -- though how designing a new creature will prove evolution escapes me. More interesting to me was his secondary claim that such hybrids would also promote a stronger environmental ethic:

Moreover, the benefits of such a physical demonstration of human-nonhuman unity would go beyond simply discomfiting the naysayers, beyond merely bolstering a "reality based" as opposed to a bogus "faith based" worldview. I am thinking of the powerful payoff that would come from puncturing the most hurtful myth of all time, that of discontinuity between human beings and other life forms. This myth is at the root of our environmental destruction — and our possible self-destruction.

Four decades ago, historian Lynn White wrote a now-classic article in the journal Science making the point that much of the damaging disconnect derives from the Judeo-Christian proclamation of radical discontinuity between people and the rest of "creation." White argued that the Western world took its marching orders from a literal reading of Genesis: not only to go forth and multiply but also to dominate and, whenever inclined, to destroy the animate world, which, lacking our unique spiritual essence, existed only for human use and abuse. Whereas "we" are special, chips off the old divine block, "they" (all other life forms) are wholly different, made merely of matter. Hence, they don't really matter.

I think Barash is conflating two senses in which there can be "discontinuity" between humans and other life. There can be discontinuity due to a lack of sameness, or discontinuity due to a lack of interdependence. The question of sameness is the territory of animal rights philosophy, while the question of interdependence is addressed in environmental ethics. A "proof" of continuity in one sense doesn't necessarily entail anything about the other.

The ability to create a human-animal hybrid speaks to the question of sameness. It would show that humans and apes aren't all that different from each other. (I don't think it would be an especially powerful "proof" -- believers in the existence of souls could easily invoke some sort of "one drop rule" to classify the hybrids, just as creationists dismiss "missing link" fossils as all either obviously ape or obviously human.) So perhaps having a bunch of hybrids running around would motivate people to give more moral consideration to apes.

But our environmental crisis is not, at root, a result of not caring enough about apes. It's not even just about not caring about any individual life form. After all, environmental problems put humans (including even rich white male humans) at risk. Insofar as our environmental crisis has a philosophical basis -- and I think it's as much a result of technology and of social structure as of philosophy -- the problem is that we don't recognize the interdependence of humans and other life forms, as well as nonliving elements of the ecosystem. (Note that the mere mystical recognition that everything is connected is not enough -- we also have to understand how the connections work.)

Environmentalism demands that we see how the fortunes of each member of the ecological community (including humans) are dependent on each other and on the community, and how the actions of each member (especially humans) can affect the community. This has nothing to do with whether one of those species is genetically related to another. An alien species who evolved on a completely different planet, or a group of angels created from scratch by God, could quite justifiably see themselves as "wholly different" from Earth's life forms. But they would, upon settling on the Earth, have just as much need for an environmental ethic as humans do.

If anything, creating human-ape hybrids would reinforce the environmentally damaging ideology of separateness-as-lack-of-interdependence. It would be one more encouragement to see nature, including human biology, as something we can manipulate at will. Human and animal genes (and the lives created with them) become just resources and tools for proving points in ideological disputes.



In (belated) honor of this blog's fifth birthday, I'm making a long-overdue update to the kiosk. People who point out that "conservation" and "conservative" or "ecology" and "economics" have the same etymological root are now in the kiosk. I agree that one can make a conservative argument for conservation, and that economics and ecology should be integrated. But pointing out the origins of the words doesn't prove those things. It's just a clichéd attempt at a "hook."


Let DC Vote

This is one of the world's comparatively minor injustices, but nevertheless one that it's useful to be reminded of from time to time, since there's no excuse for it: Residents of Washington, DC have no representation in Congress. All they have is a non-voting delegate in the House (though since 1961 they have had three electoral votes for President).

At a bare minimum, DC needs a Representative with status equal to that of the other Reps. The Senate is a tougher question, since I think the two-Senators-per-state system is wrong (I'd rather change the Senate to nationwide Proportional Representation, in which DC would of course vote). So I'm undecided between giving DC two Senators of its own, or the alternative (and more politically feasible) suggestion of letting it vote in Maryland's Senate elections. Indeed, I would be happy to retrocede DC into Maryland, just as Arlington long ago returned to Virginia.

There are two basic arguments advanced against giving DC representation: the "vested interest" argument and the "non-favoritism" argument. I don't think either holds water, particularly when matched up against the competing claims of political equality for all citizens.

The "vested interest" argument says that DC residents are all federal employees, so if they were able to vote they'd vote for higher taxes and bigger government to benefit themselves at the expense of people working in the private sector. This argument is wrong at three levels -- principle, sociology, and efficiency. At the level of principle, it fails because the right to have a say in how a community (such as the nation) is run flows from membership in that community, as defined by one's actual entanglement with the lives and fortunes of others*. It's not contingent upon whether you will vote the right way.

On the sociological level, the vested interest argument fails because it assumes that federal employees are uniquely inclined and able to vote in their self-interest at the expense of others.

But even if we accept the vested interest argument at the level of principle and sociology, it fails at the level of efficiency because it makes an unjustified equation between "federal employee" and "DC resident." If you want to disenfranchise federal employees, then disenfranchise federal employees. While the federal government is the biggest employer in DC, it only provides 27% of the jobs, so there are plenty of people in DC -- from Georgetown professors to taxi drivers -- who do not work for the feds. They shouldn't lose the vote based on their neighbors' jobs. What's more, there are loads of federal employees who don't live in DC. The Washington metro area has expanded well beyond the boundaries of the District. Why should a Commerce Department number-cruncher get representation in Congress just because she happens to live across the border in Montgomery County or Alexandria? And of course there are all the federal employees in regional offices scattered across the country. A Park Ranger at Yellowstone gets two Senators and a Representative, while his colleague at the Smithsonian gets nothing. I won't even go into the millions of Americans who are effectively federal employees because they work for defense contractors, or agribusinesses that benefit from the federal push for ethanol, or are otherwise beneficiaries of pork.

The "non-favoritism" argument refers back to the original purpose of creating the District of Columbia, rather than putting our capital in Philadelphia or New York. Some people simply assert that DC wasn't supposed to be in a state, period. But since there's no reason to accept that as a fundamental axiom, we have to look at the reasons why DC shouldn't be in a state.

I think the real reason DC was created from scratch was to avoid showing favoritism to already-existing states and cities. That's a very worthy goal, especially since the union was so fragile at the time of DC's founding. But giving the current residents of DC the vote wouldn't undermine that goal. The White House isn't going to up and move to Philly if we give Eleanor Holmes Norton some real power.

A more updated version of the non-favoritism argument is that if the people of DC had any power, they would meddle in the federal government's business, either accidentally or maliciously impairing its ability to do its job and be fair to all Americans.

But as it stands today, the big problem is just the opposite -- DC has no power to defend itself against the meddling of the federal government. The city can't even cast a single "nay" vote when the legislators from the rest of the country gang up to overrule DC citizens' democratically-chosen laws. One prominent example is the feds' resistance to a commuter tax. So instead of being able to recoup some money from the rich suburbanites who work in the city and use its services, DC has to jack up the taxes and fees on its own disproportionately poor and minority population. Reasonable limits on the city's jurisdiction over federal property (with a high burden of proof on the federal government to show it has been harmed) are a much better solution than the disenfranchisement of an entire city.

Empirical evidence is helpful here too. The governments of the UK and France seem to be doing just fine without taking the vote away from the citizens of London and Paris. State governments likewise haven't needed to kick the people of Harrisburg or Albany out of their legislatures. And the federal government has plenty of offices outside of the bounds of DC (including the Pentagon) which haven't been crippled by meddling voters.

*I'm happy to follow this principle to the logical conclusion that immigrants should be able to vote.


Australia To Be World's Top Horse And Buggy Exporter

I guess you have to give John Howard credit for being honest. The Aussie PM is excited about the prospect of Australia becoming an "energy superpower" by expanding its share of the fossil fuel market. Howard rejects not only the Kyoto Protocol but also any alternative (such as a carbon tax) other than end-of-the-pipe carbon cleanup technology. Burning fossil fuels comes first, because that's what will make Australia rich. Protecting the environment can't be allowed to interfere.

Australia is well placed to be an innovator in clean energy, with its cloudless skies and wide-open spaces ready for solar and wind power. But those kinds of innovations won't make money right away for established mining companies, and Howard is clear on whose back he's watching.

Howard repeatedly cites "pragmatism" as a reason to focus on older forms of energy. It's a common rhetorical trick, portraying older energy technologies as known quantities while renewable energy is speculative and risky. The problem is, if we demand that our energy source be clean -- which Howard gives lip service to -- the plausibility of that claim goes out the window. Is it really "pragmatic" to aim for a massive engineering fix that will turn dirty energy technologies into clean ones, but not "pragmatic" to expand the use of already-existing technologies that are intrinsically clean?


Can *Beta* Males Be (Pro)Feminists?

Over at Pandagon, there's a good discussion of whether "alpha males" -- men who strongly exhibit stereotypically masculine characteristics, like assertiveness, self-control, extroversion, leadership, risk-taking, not-taking-shit-from-anyone -- can be (pro)feminists. The conclusion, with which I agree, seems to be a unanimous "yes." (I will note, in a probably futile attempt to forestall semantic debate, that I realize that the alpha and beta categories are generalized and fuzzy and not mutually exclusive. In any event, we can talk about the alpha-male characteristics without necessarily packaging them under that term.)

What interests me is the implication that whether beta males (unassuming, conciliatory, tolerant, behind-the-scenes, risk-averse) can be (pro)feminist is unproblematic. The issue is raised by, and hence focuses on, alpha males, who are trying to do away with the particular patriarchal expressions of alpha maleness in their lives. Confusion between these two levels -- and hence improper generalization of feminists' criticisms of patriarchal forms of alpha maleness into criticism of alpha maleness tout court -- seems to be the core of the problem. Anti-feminists often make this implication explicit, when they charge that feminism wants to turn all men into beta-males, and cite the inevitability of alpha males as a reason why feminism will never succeed.

Looking at the feminist and (pro)feminist responses to the alpha male question, though, it seems that it's alpha male (pro)feminists whose existence is unproblematic. Indeed, the paradigm case of (pro)feminist action -- boldly calling out another man on his sexist behavior -- is also a classically alpha male act. So perhaps we should be asking whether it's possible for beta males to be (pro)feminists.

I must make clear that there's one jump of logic I'm not willing to make yet. It would be easy enough to end this post by saying "pity the poor beta male, who is left out of (pro)feminism! We must reassure him that he's OK, that he can be (pro)feminist in his own way." (A very beta-male sort of argument, incidentally.) Instead we need to entertain the possibility that a certain degree of alpha maleness is a requirement for being a (pro)feminist, at least in a world where injustice and privilege must be actively fought, and where men have no mitigating circumstances or excuses.


Some Animal Research Snark

Study shows mice do feel empathy, researchers do not.

A Canadian research team in the Pain Genetics Lab at McGill University discovered that a mouse's response to pain is intensified in the presence of another mouse that is also in pain. In addition, according to a study published in the June 30th issue of Science, the mice appear to synchronize their pain responses.

"Both of those things, ultimately, are suggestive of empathy," said Jeffrey Mogil, a psychology professor and one of the study's lead authors.

... According to Mogil, scientists may be able to use the mouse model to more closely study the mechanism of empathy, particularly the genes, neurochemicals, and brain areas involved. The finding could be useful for studying human conditions such as autism, which is associated with a reduced ability to empathize.

"Empathy is a very hot topic with humans, but the problem with humans is that we can't really do any experiments on them," Mogil said. "You can stick them under an imager and see what parts of the brain lights up, and that's about it."

We've discovered that mice are even more like humans in the way they think and experience the world. It seems like the appropriate reaction would be to be more cautious about causing them pain, since we're cautious about causing humans pain. But instead the reaction is to get excited about the prospect of causing them more pain, since they work just like humans but for whatever reason their pain doesn't count morally.

(I should be cautious about attributing this viewpoint to the researchers themselves based on just a few paragraphs of quotes. The unempathetic storyline may well be an effect of the "exciting scientific breakthrough" frame that the author of the story used, in which case it's the writer who's at fault for not thinking about how the content of the story reflects on the form of its presentation.)


Can an animal rights activist accept medical treatment invented through animal testing?

In the comments to a recent post, Jenn asks

And I'm still curious to know how many animal rights activsts refuse medical treatment for themselves or someone they love based on its history in animal experimentation?

I have no idea what actual animal rights activists think about this question. I can only speak for myself -- and as animal rights activists go, I'm a pretty sorry excuse for one, since I still occasionally eat meat when traveling or visiting. Nevertheless, I don't think it's necessarily hypocritical for someone committed to animal rights to accept the use of a medical treatment whose development required animal experimentation.

The philosophical basis of animal rights is generally consequentialist -- that is, what makes an act right or wrong is its effects on the welfare of humans and other animals*. Medical animal experiments offer a tradeoff: the decrease in the welfare of the animals used in the experiments developing the treatment, versus the decrease in the welfare of the sick humans (and animals) who could have been cured had the experiments been done. Animal rights activists typically conclude that the welfare reductions from refraining from the experiments are morally preferable to those from doing them (after all, if they didn't, Jenn would have no argument with them)**.

The key reason why it makes sense to accept a medical treatment that has already been invented, while opposing the use of animals in the invention of new treatments, is that the harm done to animals in creating it is a sunk cost. The animals have already been harmed, regardless of how many or how few people later benefit from the treatment. Once the experiments have unfortunately been done, the tradeoff is no longer between harming animals to help people and letting people suffer to save animals. It's strictly a matter of helping people versus letting them suffer.

Imagine, as an analogy, that I have a car that you want. You offer me $10,000 for it, which is the accepted market value of the car. I refuse to sell it, because it's worth more than ten thousand dollars. My refusal makes you angry, so out of spite you drive your truck full speed into my driveway, totalling the car in question. We go to court, and the judge orders you to pay restitution of $10,000 (assume that filing an appeal to try to get a better settlement is out of the question). Now, should I refuse to accept any restitution, since I didn't think the loss of my car was worth $10,000 when you offered to buy it? Does my acceptance of the restitution entail that $10,000 for the car was a fair deal after all, and thus I was wrong not to have sold it to you? Of course not. The car is lost no matter what, so I might as well get whatever benefits I can out of the situation.

Another way to think of it is this: I imagine a situation in which medical experiments were being done on unwilling humans, and I was one of the unlucky victims. In that situation, I would hope that the research being done on me discovered a cure for a deadly disease, and that that cure would be widely used by people with that disease. And I can hold that hope while still believing that it would have been better, all things considered, for me and my fellow victims to have not been forced into the experiment, and therefore for the cure never to have been developed. If I'm going to die anyway, I'd rather give my life for some benefit (however small) to others, rather than giving it for nothing.

In a later comment, Jenn raises a contrast with buying leather goods. After all, they don't wait for you to order the shoes before they go and kill the cow to make them. One important factor that distinguishes the two cases, of course, is the size of the benefits that come from using the animal product -- a life-saving medical treatment certainly creates a greater and more important welfare increase than getting to wear leather instead of some other material.

The benefits of abstaining from leather come through a deterrence or boycott mechanism. The more shoes sit unsold on the shelf, the less incentive Nike has to produce additional pairs, and hence the fewer cows it will have killed. With medical treatments, the connection between taking what exists and producing more is not so close, and so the deterrent effect of a boycott is reduced. In the production of leather, the purchase of one good spurs the producer to produce another just like it, using the same animal-harming process. But accepting a medical treatment most directly incentivizes the provision of the same treatment to additional patients -- which we've already established wouldn't directly harm any animals. The incentive given to the development of a new product line is more diffuse. Things are further complicated because the leather industry is, from farm to retailer, purely a creature of the profit motive, closely integrated by market forces. The medical sector, on the other hand, includes institutions (hospitals and university research labs) which at least claim to serve higher goals (health and knowledge) alongside profit. For this reason the incentive that use of a treatment provides for development of additional treatments will not be as strong. The different institutions are also often less closely linked than in the case of leather. Much medical research is funded by the NIH, whose funding decisions are not connected directly to the rates of use of existing treatments, but rather are based on the review boards' conceptions of what are important avenues of further research.

Thus the cost-benefit ratio for accepting a life-saving medical treatment invented through animal experimentation is more favorable than that for buying leather, so it's not prima facie implausible that someone could say yes to the medical treatment but no to leather. Obviously one could dispute someone else's decision of where to draw the line, or challenge the facts they've used to decide which side of their line a particular act falls on. But then we're into the territory of analyzing specific people's particular versions of animal rights, not animal rights activists in general. (My own unresolved feelings on this issue probably aren't a particularly good test case, since I recently purchased a pair of leather shoes.)

* Though there's disagreement about whether we should focus on the overall welfare of all beings taken together (utilitarianism) or on securing a certain basic level of welfare to each individual being (a true "rights" view).

** Some do hold that certain experiments would be morally justifiable -- if the amount of suffering caused in the experiment is small enough, if the experiment is fairly certain to contribute directly to the finding of a cure, and if the disease being cured is sufficiently painful and widespread (and for some, if the disease being cured substantially afflicts the species being used in the experiment). Utilitarians would typically be more likely to accept a wider set of experiments.


If a tree burns in the woods and there's nobody to hear it ...

Let's go back to writing about something I actually know something about.

The headlines say climate change causes wildfires. And indeed, a new study (pdf) found a strong correlation between the increase during the 1980s in the number and length of wildfires in the western US and increased temperatures.

But before we rush off to base our wildfire policy on these findings, two grains of salt are in order: 1) explaining a phenomenon is not the same as explaining the problem associated with that phenomenon, and 2) the solution to a problem is not simply the cause applied in reverse. This post will deal only with the first issue; hopefully I'll be able to post on the second tomorrow.

By almost any measure you care to use, the most wildfire-prone state in the US is Alaska. But you hardly ever hear about Alaska in the news (I should know -- I have a Google News alert set up for "wildfire"). Why? Because hardly anybody lives in Alaska, especially in the interior where most of the fires are. Alaska has lots of wildfires, but it doesn't have much of a wildfire problem. By itself, a phenomenon in nature is morally and politically neutral. It becomes a problem, an issue to be concerned about, when it intersects with humans and the things we value*.

This becomes important in the climate change study because the authors note an important regional difference. The correlation between climate and fire is much stronger in the northwest than in the southwest. In the southwest, "land use" -- the conventional wisdom of fire suppression leading to overgrown forests -- is declared by default to be the major factor in that region's fires. The study comes to an overall conclusion similar to the northwest regional conclusion because its data is drawn from federal forest lands, of which there are far more in the northwest. (The southwest has more non-forest but still fire-prone lands such as shrublands, as well as more land under other tenure, such as Indian reservations).

Let's grant, then, that the basic regional dichotomy of fire causes -- climate in the northwest, land use in the southwest -- is accurate. This is useful in itself, since fire policy ought to be tailored to the local situation rather than being based on overbroad generalizations. But we might still wonder what we can say about the overall fire problem. To do that, we have to take our understanding of the phenomenon and couple it with an understanding of the people at risk.

The biggest buzzword in fire policy today is "urban-wildland interface" or "wildland-urban interface," abbreviated UWI or WUI**. The UWI is the landscape formed when residential settlement abuts "wild" areas such as forests. There has been a great expansion in UWI in the First World over the past few decades, and most of our major recent fires (such as Southern California in 2003) have been UWI fires. So one quick and dirty measure of how many people are at risk is how many people live in the UWI in each region.

I went to this document (pdf) for estimates of the number of houses in the UWI in the northwest (WA, OR, ID, MT, and WY) and the southwest (CA, AZ, NM, NV, UT, and CO). Adding it up, we find that there are nearly 2 million UWI houses in the northwest, but nearly 7 million in the southwest. Multiplying by the Census's figures for average household size, we get a very rough estimate of 5 million people at risk in the northwest, and 9 million in the southwest. Measured by houses in the UWI, the causes of fire in the southwest are three to four times as important to the fire problem as the causes of fire in the northwest (roughly twice as important if we use my rough population estimates instead). Put another way, there would have to be three to four times as many fires in the northwest in order for that region's fire causes to be as important to the national-level fire problem.
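The comparison is simple enough to check in a few lines. Here's a sketch using only the rounded totals quoted above (the per-state figures in the cited PDF will differ; these are the post's own aggregates and rough population estimates):

```python
# Back-of-envelope replication of the regional UWI comparison.
# House counts are the rounded totals from the cited document;
# populations are the rough estimates (houses x average household size).
uwi_houses = {"northwest": 2_000_000, "southwest": 7_000_000}
population_at_risk = {"northwest": 5_000_000, "southwest": 9_000_000}

# Ratio of southwest to northwest exposure, by each metric.
house_ratio = uwi_houses["southwest"] / uwi_houses["northwest"]
pop_ratio = population_at_risk["southwest"] / population_at_risk["northwest"]

print(f"SW/NW ratio by UWI houses: {house_ratio:.1f}")        # 3.5
print(f"SW/NW ratio by population at risk: {pop_ratio:.1f}")  # 1.8
```

Note that the two metrics disagree (3.5 versus 1.8), so how you weight the regions' fire causes depends on whether houses or people are taken as the measure of exposure.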

Of course, sheer number of people in the UWI is a very crude proxy for vulnerability. You'd then have to factor in things like poverty (a quick glance at some data suggests it might be a wash in terms of regional comparisons) and race (I suspect the southwest is more diverse).

*I'm setting aside possible detrimental effects of changed fire regimes on animals and ecosystems, because most of the discussion on this topic has been very anthropocentric in this regard.

**"WUI" seems to be gaining popularity in the US, but I find "UWI" to be more euphonious both as a full phrase and as an acronym.


The Real You vs A New Creation

In describing his efforts at recognizing privilege and becoming a better (pro)feminist, Malachi at Feminist Allies writes:

A major part of the problem is that I *do* have a lot (some say an excess) of self-confidence, a forceful personality, and some take-charge instincts. Thanks, patriarchy. But disentangling what’s really me from what’s the patriarchy’s influence, what’s self-confidence and what’s self-aggrandizement, what’s inspiring leadership and what’s privileged domination is no mean feat.

What interests me about this bit is the implied distinction between the "real" self and the constructed self, which is a common one in thinking about the effects of oppression systems on individuals. The model here is that there's some inherent pre-social and morally neutral real personality. Then patriarchy came along and added some stuff on top of that, stuff that is bad because it leads to harming others. Malachi's task is then to strip away this fake addition to reveal the real egalitarian person underneath.

I don't want to criticize the substantive changes Malachi is making in how he lives his life, since as far as I can tell from his few posts so far he's on the right track. (I don't mean to pick on him personally, he's just the latest person to raise a common idea.) But I do want to raise some questions about the model of identity -- call it "real core with fake trappings," or RCFT -- that he uses to explain himself. I think the RCFT model points us in the wrong direction (at least as far as understanding the identity of those in positions of privilege -- not being oppressed in any way myself, I won't presume to speak for what models are accurate representations of that experience).

The first problem with the RCFT is that it locates the criterion of value in the wrong place. Self-confidence is good because it benefits people, not because it's a feature of a real underlying self. And self-aggrandizement is bad because it hurts people, not because it's a fake accretion slapped on by social forces.

But perhaps more importantly, I don't think you can separate a "real," presocial core identity from a distorted or less real identity built up by social forces. Your socially constructed identity is your real identity. What exists independently of social construction is at best a set of underdetermined potentials and constraints, not a fully formed identity. If you're a man living in a modern Western country, being patriarchal is part of who you really are.

In thinking about a better model of how to conceptualize becoming less of a patriarch, I thought of a line from one of history's greatest sources of sexism*, St. Paul. In 2 Corinthians 5:17, he wrote:

Therefore, if anyone is in Christ, he is a new creation; the old has gone, the new has come!

Replace "in Christ" with "a (pro)feminist,"** and you have a better way of looking at the process people like Malachi (and myself) are engaged in. Whatever you think of the moral value of the conversions that Paul achieved in building the early church, he had seen quite a few people adopt new outlooks on life by the time he wrote his letter to the Corinthians, so he had some insight into the psychology of it. Becoming a Christian was not a matter of stripping away some sinful trappings that had been added by the devil or the world in order to reveal the real godly core. It was a matter of taking someone who had a real identity as a pagan and remaking them so that their real identity became Christian. So men who want to become better (pro)feminists have to recognize that we have a real patriarchally-constructed identity, and then replace it with an equally real "new creation" -- or better, "new construction" -- along feminist lines. It's a matter of remaking, not stripping away.

*Lynn Gazis-Sax has some interesting thoughts on whether Paul himself was actually that sexist, but in any case it's undeniable that his words became fuel for generations of later sexists.

**My UU side would argue that the two are equivalent, because the Bible should be read such that to be "in Christ" has nothing to do with holding factual theological-historical beliefs about some carpenter from Nazareth -- rather it means nothing more or less than adopting an attitude of love toward all persons, which is achieved (in the realm of gender) through feminism. But these theological issues are beside the point of this post, especially since I presume people not from a Christian background aren't going to be too keen on being told "you should be 'in Christ,' except that that what I mean by that is totally different from what it sounds like or what most people mean by it."


Us vs Them and the Ad Hominem Defense

I mentioned this in a comment to Amp's post about the "ad hominem defense," but then I decided it was worth a full post. An ad hominem defense is when a liberal rattles off their lefty credentials in response to some specific criticism from the left (or mutatis mutandis for any other ideology). There's a particularly egregious example at the end of this Hank Fox post. Fox came in for a lot of criticism for saying that he wouldn't eat at an Arby's where one of the employees had a facial piercing, because he finds piercings disgusting. (In the linked post, he tries to defend himself by claiming that his problem is that people with piercings care too much about what others think, as if the clean-cut and wholesome look isn't just as much a show put on for others, and as if his vocal boycott of Arby's isn't essentially a demand that other people should care what others think about their appearance.) After his painfully self-righteous rationalization, Fox hauls out his liberal bona fides to prove that his anti-piercing views couldn't possibly be a case of bigotry.

Fox's post is interesting to me because it makes so clear one important element of the ad hominem defense: its use of the Us vs Them frame. He asks us to imagine a room full of people, and reminds us that if Rush Limbaugh and his ilk were on one side of the room, he and his critics would end up together on the opposite side. This is a vision of politics in which there are only two camps. Criticism may only be made against the other camp. If someone's liberal enough to get into the liberal camp, then they're one of Us. If you criticize someone, you must be implicitly seeing them as one of Them, an enemy on the same level as Rush. The choice is between total solidarity and total animosity. The only debate is over where to draw the line -- do we, like the users of the ad hominem defense, draw a magnanimously wide tent in order to focus on our real enemies on the far right? Or do we, as ad-hom-defenders' critics are assumed to, draw the line narrowly to include only a pure in-group on the "Us" side?

But of course this is not how politics works. So far as I know, nobody who criticised Fox's views of pierced people thinks that he's therefore wholly in Rush Limbaugh's camp. As I said to Hugo Schwyzer a while back,

... a person's membership in the cause is never all-or-nothing. Your sins don't wipe out the other good work you've done, but the other good work you've done doesn't earn you indulgences.

I think the mentality behind the ad hominem defense goes some way toward explaining why white people are reluctant to engage in deep discussions of race (and men in discussions of feminism, etc.) -- and I don't claim that I'm immune to this. There's a fear of discovering that while you thought you were one of Us, you are actually one of Them. It's easier to pretend that race doesn't exist than to risk feeling lumped in with the KKK because you said or did something racially insensitive. Strategies like the "don't you have bigger fish to fry" argument that Amp discussed serve to keep the fundamental line between Us and Them in a comfortable spot. (Note that this is a problem with the assumptions privileged people make, not with anything that their critics are doing.)


Blaming Bush For Natural Disasters

John McGrath makes an offhand remark citing Hurricane Katrina as evidence that Bush's climate change policies have led to disaster (analogous to the way his WMD policy led to the disaster in Iraq). I agree that Bush's policies on climate change are deplorable, and that Bush's deplorable policies bear a fair bit of responsibility for the Katrina disaster. But the share of the Bush-blame that can be attributed specifically to his action on climate change is very small. Climatologists remain divided on the question of how much climate change will alter the frequency of severe weather events, and how much of that alteration is already visible.

Blaming Katrina on Bush's climate change policies may be politically convenient as a way of generating pressure to change those policies. But it's politically inconvenient in a broader sense, because it reinforces the "natural disaster" frame for understanding what went wrong with Katrina (and what continues to go wrong in many other hazard events).

The "natural disaster" frame envisions society as moving along innocently, minding its own business, when wham! it gets hit by an extreme geophysical event that causes destruction and death. Causal responsibility, and hence blame, lie on the side of the geophysical event. So therefore interventions to prevent or mitigate disasters focus on controlling the event, a "hazard-side" strategy.

Over half a century ago Gilbert White -- the father of natural hazards research, and hardly a political radical -- pointed out that "natural disasters" are actually the result of the intersection of natural and social conditions. Whether there is a disaster, and what kind of damage it does, depends on how social practices and individual choices put human values at risk of being undercut by changes in the natural environment. Later more radical thinkers elaborated the idea of "vulnerability," with the slogan "there's no such thing as a [purely] natural disaster." We have to focus on the reasons why humans become vulnerable to extreme geophysical events.

Framing Bush's responsibility for Katrina as a matter of his climate change policy places our focus on the hazard event. The problem becomes the fact that there was a Category 5 hurricane, and the change we need is to control greenhouse gas emissions so as not to increase the frequency of Category 5 hurricanes. This focus ignores the central role in the disaster played by New Orleanians' (and our whole economy's) vulnerability to hurricanes. This vulnerability is the product of an economic system dependent on oil and the creation of economic inequalities, a system of racial oppression, and a hubristic attitude to the environment. Across a broad range of issues, Bush's policies have served to maintain this system (though he is of course far from the sole creator or sustainer of it).

The "blame climate change" redirection of attention is especially unfortunate given that the sources of vulnerability in the case of Katrina are so fundamental to what's wrong in so many other facets of modern America. Big events like natural disasters are powerful political-rhetorical resources. They need to be used wisely, to cut at the most fundamental problems.


Guest Blogging

For reasons beyond my ken, Ampersand has invited me to be a guest blogger on Alas for the next month. Everything I post there will be cross-posted here, but I imagine the comments threads will be more lively over at Alas.

Backward Ethics

Eric Schwitzgebel asks why ethics professors don't live their lives any more ethically than the rest of us. Assuming that the empirical claim (for which he offers only anecdotal evidence) is true, and assuming that it makes any sense to speak of a single scale of the ethicalness of behavior independent of any particular ethical theory (to which most ethics professors wouldn't adhere), the answer seems simple. The philosophical discipline of ethics is not about changing your behavior, it's about justifying it. Ethicists begin with their intuitions about which behaviors are ethical, and then work out some explanation that justifies and systematizes them.

Admittedly I've only read philosophical treatises on ethics as a hobby, but I have yet to encounter an ethicist (Jeremy Bentham and Peter Singer being partial exceptions) who said "while I and many other people assume that X is morally right, the basic principles that I have proposed entail that X is morally wrong, so I will now assert that we should not do X." I recently read some of R.M. Hare's work, and though I generally like his system of basic principles, I was embarrassed by his constant protestations that nothing in his system could possibly ever produce a counter-intuitive conclusion.

So the difference between ethicists and regular people is not the content of what we believe is right or wrong, it's how sophisticated our justifications for those beliefs are.


Lay Off Lieberman

I admit I haven't been following this, or any other, race very closely, but I don't really get the vitriol directed at Joe Lieberman over his decision to run as an independent if he loses the Democratic primary. I'm no Lieberman fan -- aside from a few environmental bills he's sponsored, he's done little but give aid and comfort to the conservative agenda, and I'll be thrilled if Ned Lamont manages to oust him. So on the merits, I don't want Lieberman to be Senator, and am therefore unhappy about any turn of events (such as running as an independent) that increases his chances.

But much of the criticism paints his decision as wrong in itself (and so presumably it would be equally wrong for Lamont to run as an independent). There seem to be two lines of reasoning here: loyalty and democracy. The loyalty argument is easiest for me to dismiss, because I see loyalty to the party as a fairly minor virtue, if indeed it is one at all. Given that the party appears poised to reject him, I see no obligation on Lieberman's part to place the interests of the party institution above his obligations to fight for what's best for the people of Connecticut, America, and the world (though of course I think Lieberman is deeply mistaken about what's best for the people of Connecticut, America, and the world).

The democracy argument is that it's somehow undemocratic for Lieberman to continue running after losing a vote. This would be true if he were to continue to insist on being the Democratic Party's nominee. But he's running for Senator of all of Connecticut. The goal is to have the Senator with the broadest support among all the people of the state. If a candidate is solidly on one side of the political fence (as Lamont is), then the primary can serve as a useful test of popularity. But with a centrist like Lieberman, the views of the most liberal third of Connecticut's voters (the ones who would participate in the primary) say little about the will of Connecticutians as a whole. It's quite plausible that 50% of the people of Connecticut want Lieberman for their senator, but that because those voters are spread out across both parties as well as the independents, he wouldn't get 50% of the votes in the Democratic primary. An independent run is the only way that a coalition like Lieberman's supporters, who don't sit neatly within the ideological ambit of either major party, would be able to make their will known.

Related is the idea that Lieberman's move somehow undermines the purpose of the primary. If you see the primaries as basically ways to reduce the number of candidates on the final ballot, this is true. Too many candidates on the ballot is confusing and -- in a system without IRV -- can lead to vote-splitting between closely allied candidates that ends up putting a person with minority support into office. But this justification works best when the primary system is open -- anyone can vote in either primary. This gives the system the flexibility to focus on the matchups that most need to be settled before the final ballot is printed. It's important to note in this regard that Lieberman vs Lamont is not a classic vote-splitting scenario, given Lieberman's centrism (as described above).

A closed primary system -- like Connecticut's -- serves a different purpose (albeit one also served by an open primary). A closed primary is an instrument of the party. It acts as a screening tool for deciding which candidate the party should throw its endorsement and resources behind. But this purpose is in no way jeopardized by Lieberman's independent run. He will make his run without the expectation of any support from the institutional apparatus of the Democratic Party. But there isn't, and shouldn't be, any rule that says only people who have a party institution behind them can run for office. In this sense, Lieberman sticking it out after he fails to get the Democratic Party's endorsement isn't much different from Lamont sticking it out after he failed to get the AFL-CIO's endorsement.


Taboo vs Sin

Hafidha Sofia mentions having been frequently asked about the consequences of accidentally violating Muslim laws -- e.g. eating food that you didn't know contained pork. She says that Islam is quite clear that you aren't held responsible for violations of the law that result from ignorance or necessity.

On the one hand, the question about accidentally eating pork isn't so off the wall -- it's just getting at the difference between sin and taboo. With taboos, the connection is causal, so all that matters is whether you do something -- just like the way you'd still be dead if you ate some cyanide by accident. (I recently read an interesting article about how indigenous people in the Andes will sometimes see misfortunes as punishments for having stepped in a holy place that nobody knew existed.) Sin, on the other hand, is a justice system, so the punishment is withheld if you have a good excuse, like ignorance or necessity.

But one has to wonder why the sin/taboo distinction is such an important issue to so many non-Muslims. Part of it may simply be that our culture has set these questions up as a standard "imponderable," which people fall back on in trying to make conversation (much like the supposedly deep and problematic question of whether vegetarians can engage in fellatio).

It may also be due to the connection between taboo and primitiveness*. To the Western mind, Islam's developmental status is ambiguous. On the one hand it has various intellectual trappings like a holy book, and it claims to be the next step in the Judeo-Christian tradition. On the other hand, it's stereotypically associated with cruel barbarian hordes from the East (including terrorists). Some people will seek confirmation of Islam's primitiveness (as evidenced by its use of taboos) so that they will be able to dismiss it as an intellectual threat and so that they can look down on its followers. Other people (and I suspect this motivation is more common) seek confirmation of its non-primitiveness (as evidenced by its use of a sin framework) so that they feel less threatened by their Muslim neighbors.

* Obviously I'm speaking here just of how our culture perceives things. Modern Western culture has its share of taboos -- just think of the pervasive anxiety among men that certain acts may "make me gay."