Progress Report: Independent Cost-Benefit Analysis of Proposed Regulations

On September 1st, 2014, four members of the EA Society of DC met to do a test run of our project to comment on proposed regulations. One of us had heard of a proposed EPA regulation through industry contacts and expressed interest in commenting on it, so we decided to take that one as our test case. We met at a University of Maryland computer lab for a day, and spent a few more hours doing follow-up over email. We submitted our comment, which can be read here, before the deadline.

Cost-Benefit Analysis of Cost-Benefit Analysis

Our comment recommended accelerating the regulatory deadline, on the basis of our analysis, which suggested that this would save the world about $1.5 billion in economic costs due to global warming. We spent something like 40 person-hours on this, which – if our recommendation turns out to be solely responsible for the change – amounts to about $37.5 million per person-hour invested, fairly impressive even if you discount by a large factor for the probability that we actually affected the outcome.

What We Did

The proposed regulation was to ban a set of refrigerants with a high impact on global warming, now that feasible alternatives with a lower global warming impact had been invented. There were about seven comments posted, of which we were able to read three. Two were from a manufacturer of frozen margarita-makers, asking if their company would be affected by the regulation, and one was a strongly worded emotional comment from an engineer asking the EPA to ban a different refrigerant as well. We were optimistic that we could contribute significantly to the discourse by performing a competent cost-benefit analysis.

The first couple of hours were spent getting a picture of exactly which effects of the regulation were most important. This turned out to involve a lot of digging, because the rule covered refrigerants used across a lot of industries. It turned out that some of the biggest global warming savings came from banning a few refrigerants used in automobiles, so we decided to focus on just this aspect of the regulation.

Once we had a basic framework of analysis, we set about gathering sources to estimate the annual impact of the regulation on global warming and building our model of the effect, noting our data sources in a Google Doc kept separate from the Excel file containing the model. Our initial estimate was that the effect of the regulation was strongly positive, and that the deadline should be accelerated.

After we wrote up a first draft of the comment, we started to cite our sources, and couldn’t find the source for one of our numbers. When we tried to re-estimate this part, the numbers we found were different, and the estimated impact of the regulation was negative. By this point it was getting late and we were all tired, so we called it a night.

Later, communicating over email, we realized that when counting the social cost of carbon, we hadn’t correctly adjusted for time: damages occurring in different years have to be discounted back to a common base year before they can be compared. Once we corrected this, the numbers came out strongly positive again. We finished the write-up, satisfied ourselves that the reasoning was correct and backed up by facts, and submitted the comment.
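For readers wondering what “adjusting for time” looks like in practice, here’s a minimal sketch in Python. Every number in it is made up for illustration – these are not the figures from our model.

```python
# Illustrative only: made-up numbers, not the figures from our comment.
# The social cost of carbon (SCC) is quoted per tonne for a given emission
# year, and damages must be discounted back to a common base year.

DISCOUNT_RATE = 0.03  # assumed annual discount rate
SCC_2020 = 46.0       # assumed SCC for year-2020 emissions, $/tonne CO2e
SCC_GROWTH = 0.02     # assumed annual growth of the SCC itself

def present_value(tonnes_by_year, base_year=2014):
    """Discounted social cost of an emissions stream {year: tonnes}."""
    total = 0.0
    for year, tonnes in tonnes_by_year.items():
        scc = SCC_2020 * (1 + SCC_GROWTH) ** (year - 2020)    # SCC in that year
        discount = (1 + DISCOUNT_RATE) ** (year - base_year)  # back to base year
        total += tonnes * scc / discount
    return total

# Accelerating the deadline moves the avoided emissions earlier:
accelerated = present_value({2017: 1e6, 2018: 1e6})
original = present_value({2019: 1e6, 2020: 1e6})
print(f"extra benefit from accelerating: ${accelerated - original:,.0f}")
```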

Lessons Learned

  1. Spend a while figuring out the most important few effects of the regulation. This is something we did, which saved us a lot of time.
  2. Make sure someone with access to the venue commits to get there in advance. Two of us were waiting outside the locked computer lab until someone with a key card got there.
  3. List sources and reasoning as we build the model, writing the whole thing down in one place, rather than building the model first and then going back to cite sources and evidence later. This would have helped us notice problems earlier.
  4. Expect to have to go back and change our minds later; don’t spend a lot of time making it perfect; give people a chance to sleep on it.

Aftermath

As of now, there appear to have been 194 comments submitted. Many appear to be from people in affected industries, but a quick skim of a few of them suggests that most are not related to the auto industry, which is the aspect we focused on.

We don’t have a good way to estimate how influential our comment was yet, but the results of this attempt were encouraging. Further iterations of this project seem high-value.


Orders of Doom

The Ant and the Grasshopper

The Ant knew that food would be hard to come by in the winter, so one hot summer day, as the Grasshopper frittered away the day leaping and dancing and making merry, the Ant thought of nothing but gathering food for the colony. It even hoped to save enough food for the Grasshopper, who was not responsible for an upbringing and genetic makeup that gave it insufficient Conscientiousness.

The Ant focused so completely on gathering food – making its route more efficient, carrying the most efficient load possible – that it missed the shadow that fell over it mid-afternoon. The Anteater’s tongue sprang out to carry the Ant to a waiting, hungry mouth. The Ant was delicious.

Meanwhile, a Black Swan ate the Grasshopper. The Grasshopper was even more delicious, having feasted on a variety of treats during its short but pleasant life.

-Not Aesop’s Fables

I’m trying to figure out what the important problems are in the world so that I can figure out what I should do about it. But there are a few very different ways the world could be. Depending on which is the case, I might want to do very different things to save the world. This is an enumeration of the cases I’ve thought of so far.

I’ll start with existential risks, because they have the potential to affect the largest number of people.



Is Veganism Correct?

I’m hearing lots about how it’s an obvious result that Effective Altruists should be vegans. This seems like a possible result to me, but not an obvious one. Here’s why.



Saving the Whole Planet via Ancestor Simulations

Abstract: The question of whether or not we should create ancestor simulations is important, not just ethically but also for purposes of predicting whether or not we are in one. This post discusses reasons to think that we should influence the future to create ancestor simulations. One reason in particular is quite interesting: By influencing the future to simulate the past, we increase the likelihood that we are ourselves being simulated right now. And if we influence the future to simulate the past with a special algorithm that preserves the minds of people as they die and leads them into a happy afterlife….

The Simulation Argument proposes that at least one of these has to be true:

  1. The human species is very likely to go extinct before reaching a “posthuman” stage.
  2. Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof).
  3. We are almost certainly living in a computer simulation.

One of the variables in the Simulation Argument is (let’s say) P(S), or the probability that the future will contain ancestor simulations. We can influence this variable! For example, we can enact constitutional amendments or construct AIs that will, when the time is ripe millions of years from now, create a bunch of ancestor-simulations. They don’t have to be perfectly accurate, so long as they are good enough we can’t tell whether we are in one of them or not.
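(A quick formal aside, hedging on my memory of the paper’s notation: Bostrom expresses the fraction of observers with human-type experiences who are simulated roughly as

f_sim = f_P · N / (f_P · N + 1)

where f_P is the fraction of human-level civilizations that reach a posthuman stage and N is the average number of ancestor-simulations such a civilization runs. Anything we do now that raises N – constitutional amendments, AI goal structures – pushes f_sim toward 1.)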

But why would we want to increase this variable? Well, get this: Suppose we specify in the constitutional amendment or the AI’s goal structure or whatever that the ancestor-simulations created ought to have an afterlife. As in, an algorithm that preserves the minds of people as they die in the simulation, and then moves them to another environment where they are punished for their crimes, rewarded for their good deeds, and generally treated to a higher standard of living and fully informed about the state of the world as a whole (perhaps they are accepted as citizens into the future society).

Supposing that, then we ought not only suspect that we are in a simulation, but that we have a wonderful afterlife to look forward to in which we will be rewarded for our good deeds and punished for our misdeeds.

Is this something we want? Hell yeah! So let’s start lobbying congress!

Objection: This is very well if you are selfish, but from an unselfish perspective it makes no sense. The resources used to create ancestor simulations could be used to make an equivalent number of happy future people. So really what you are doing is making future people less happy and giving them a slight chance of being in the past, so that the past people get a large chance of being in the future. But the chances all balance out; it’s a zero-sum game because there is always one original planet full of people with no afterlife.

Reply: Okay, fair enough. But maybe there are other reasons for doing this. Maybe we want there to be “equality of expectation”: we know that one planetful of people will die with no afterlife, but we want there to be many people with an equal chance of getting unlucky, rather than just a few people who know they are screwed. Another reason: if we punish the evil and reward the good, we give even the original people an incentive to be good, which is important since the original people have such control over the future. Finally: we might want to populate our future society with people who have been raised in diverse environments, and historical environments may be an important part of that. For example, we might want our society to be partially inhabited by the kind of people who grew up thinking that they would die soon and that their virtue would not be rewarded.

Objection: The diversity thing could probably be achieved without creating an environment full of suffering. Also, there is something intrinsically wrong with withholding the truth from people, even if you tell them the truth eventually.

Issues relating to obscure ideas in the philosophy of consciousness:

If consciousness in a world depends only on the implementation of an algorithm, rather than on the number of said implementations, (and if various other background assumptions like psychological continuity theory hold) then by simulating a person who died and then saving the simulated person, you are literally saving the person who died. Thus, if it turns out to be possible to accurately recover the brain-states of our ancestors, then ancestor-simulations are practically a moral necessity, to right the wrongs of the past at essentially no cost.

It’s possible that a brain or something that simulates a brain isn’t sufficient for consciousness. (Maybe there’s some other component we haven’t found yet that’s necessary.) And if so, it’s at least possible that some other construction might be sufficient for consciousness. In principle, we should be able to figure out what this extra thing is and construct a corresponding simulation.

But we should favor simpler theories over more complicated ones. If the complexity of the rule-according-to-which-systems-are-conscious is part of the complexity of a theory, then (since we can probably make simpler implementations of our own minds than brains) by creating ancestor-simulations with said techniques in the future, we could ensure that we have a 100% chance of afterlife, i.e. that nobody (except for p-zombies) has to really die. (If every theory is correct, i.e. all conceivable worlds are real, then this would be no better from an unselfish perspective than creating normal future people using that technique.)

Warning: The above arguments depend on various implausible philosophy of consciousness premises. That’s OK, but if it turns out that each argument has an evil twin with the same amount of plausibility that argues for the opposite conclusion…


Give Smart, Help More

This post is about helping people more effectively. I’m not going to try to pitch you on giving more. I’m going to try to convince you to give smarter.

There’s a summary at the bottom if you don’t feel like reading the whole thing.

Do you want to help people? At least a little bit?

Imagine that there is a switch in front of you, in the middle position. It can only be flipped once. Flip it up, and one person somewhere on the other side of the world is cured of a deadly disease. Flip it down, and ten people are cured. You don’t know any of these people personally, they are randomly selected. You will never see their faces or hear their stories. And this isn’t a trick question – they’re not all secretly Pol Pot or something.

What do you do? Do you flip it up, flip it down, or leave it as it is? Make sure you think of the answer before you look ahead.

I’m going to assume you flipped the switch down. If you didn’t, this post is not for you.

If you did, then why did you do that? Not because down is easier or more pleasant. Because it helps more people, and costs you nothing more. So if you made that choice and did it for that reason, you want to help people. Even people you don’t know and will never meet. This might not be a preference that is particularly salient or relevant in your life right now, but when you chose between a world where more people are helped and a world where fewer people are helped, you chose the one where more people are helped. To summarize:

We agree that it is good to help people, and better to help more people, even if they’re strangers or foreigners.

You probably already donate to charity. Why?

Most Americans give at least some money to charity. In 2010, in a Pew Research Center study, 95% of Americans said that they gave to a charitable organization specifically to help with the earthquake in Haiti. So when you add people who give, but didn’t give for that, you end up with nearly everyone. And if you look at tax returns, the IRS reports that in 2011, out of 46 million people who itemized deductions, 38 million listed charitable contributions. That’s 82%. So either way, most people give. Which means you probably do. (I’m assuming that most of my readers are in countries sufficiently similar to America for the conclusion to transfer.)

Why do you give to charity? That’s actually a complicated question. People give for lots of reasons. You might be motivated by the simple fact that people will be helped, yes. But there are lots of other valid reasons to give to charity. You could want to support a cause that someone you care about is involved in, like sponsoring someone’s charitable walk. You could want to express your solidarity with and membership in an institution like a church or community center. You could just value the warm fuzzy feeling that comes along with the stories you hear about what the charity does.

So here are some reasons why we give:

  • Warm fuzzies
  • Group affiliation
  • Supporting friends
  • The simple preference for people to be helped

All of these things are okay reasons to give, and I’m going to repeat that later for emphasis. I’m going to say some things that sound like I’m hating on warm fuzzies, but I’m really not. To be clear: Warm fuzzies are nice! They feel good! You should do things that feel good! They just shouldn’t be confused with other things that are good for different reasons.

The Power of Smarter Giving

I’m going to make up some numbers here.

Imagine three people: Kelsey, Meade, and Shun. They have the same job, which they all enjoy, and make $50,000 per year. They each give $1,000 per year to charity – 2% of their income. But they want to help people more.

Let’s say that they give to a charity that tries to save lives by providing health care to people who can’t access it. Each of their $1,000 donations purchases interventions that collectively add one year to one person’s life, on average. That’s actually a pretty good deal already – I’d certainly buy a year of extra life for myself, for that kind of money. I’m going to call that “helping one person,” though we understand that it’s just an average.

But now they each want to help more people. Kelsey decides to just give more, by cutting back on other expenses. Less savings, more meals at home, shorter vacations. Kelsey’s able to scrape together an extra $1,000, so Kelsey’s now giving $2,000, adding a year to two people’s lives on average. On the other hand, Kelsey has fewer of other enjoyable things.

Meade decides, instead of cutting back on expenses, to put in extra hours to get promoted to a job that’s more stressful but pays better. After six months of this, let’s say Meade is successful and gets a 10% pay bump. Then Meade gives all that extra money to charity. That’s $6,000 now that Meade is giving, adding on average a year of life each to 6 people.

Now how about Shun? Shun is lazy, like me. Shun decides that they don’t want to work hard to help people. But Shun is willing to do 3 hours of research online to find the best way to save lives. Shun finds a charity where outside researchers agree that a $1,000 donation on average adds a year of life to each of 10 people. Maybe because they focus on the cheapest treatments, like vaccines. Maybe because they operate in poor countries where expenses are lower, and there’s more low-hanging health care fruit. Either way, Shun spent 3 hours doing research, and now Shun’s $1,000 per year adds a year of life to each of 10 people.

To summarize: Kelsey is scraping by to give $2,000, giving 2 people an extra year of life. Meade put in six months’ extra hours at work – and has a more stressful job – and their $6,000 gives 6 people an extra year of life. Shun spent just 3 hours doing research on the internet, still has the job Shun loves, gets to live the way Shun likes, and their $1,000 now gives 10 people an extra year of life each.

Kelsey – 2

Meade – 6

Shun – 10
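If you want to check the arithmetic, here it is as a few lines of Python, using the numbers I made up above:

```python
# Made-up numbers from the story: dollars given, and the assumed cost of
# adding one year to one person's life at each person's chosen charity.
donors = {
    "Kelsey": {"donation": 2000, "cost_per_life_year": 1000},  # gave more
    "Meade":  {"donation": 6000, "cost_per_life_year": 1000},  # earned more
    "Shun":   {"donation": 1000, "cost_per_life_year": 100},   # researched more
}

for name, d in donors.items():
    print(f'{name}: {d["donation"] / d["cost_per_life_year"]:.0f} extra life-years')
```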

Who would you rather be?

I don’t want to deprecate any of these strategies. Sometimes your situation is different. Kelsey’s a great person for trying to help people. There are a lot of reasons that Meade’s strategy could be better than it sounds. But Shun went for the low-hanging fruit – and was able to help the most people while suffering the least for it.

If my numbers are realistic, then researching different charities’ effectiveness is an incredibly cheap way to help more people.

Why is this the case? Because in the numbers I made up, there was an order of magnitude effectiveness difference between two charities. One charity helped ten times as many people per dollar as another did.

This is sometimes true in the real world. Some charitable activities work better than others.

GiveWell, an organization that evaluates how effectively charities produce positive outcomes, thinks that there is a difference in effectiveness between two of their top-rated charities by a factor of between 2 and 3.

To repeat: one of GiveWell’s top-rated charities is 2-3 times as effective as another. GiveWell only has three top-rated charities.

Then think about how different these numbers must be, on average, from the non-top-rated charities – or unrateable ones that don’t try to measure outcomes at all. So a factor of 10 isn’t unrealistic – but even if it’s a factor of 2, that’s a better return on time invested than Meade got – they might have worked more than three extra hours every week!

How do I do the research?

Was Shun’s three hours of research a realistic estimate? It wouldn’t be if nobody were already out there helping you – but fortunately there are now several organizations designed to help you figure out where your money does the most good.

The most famous one is probably still Charity Navigator. Charity Navigator basically reports on charities’ finances, which is helpful in figuring out whether your money is going toward the programs you think it is, or whether it is going toward executives’ paychecks and fancy gala fundraisers. Charity Navigator is a good first step, if all you want to do is weed out charities that are literally scams.

But we should be more ambitious. Remember, we don’t just want to be not cheated. We’re happiest if people actually get helped. And to know that, we don’t just need to know how much program your money buys – we need to know if that program works.

GiveWell, AidGrade, Giving What We Can, and The Life You Can Save are all organizations that try to evaluate charities not just by how much work they do, but by whether they can show that their work improves outcomes in some measurable way. They all seem to have mutual respect for one another, and I know there have been some friendly debates between GWWC and GiveWell on methodology.

If you want to search for more stuff on this, a good internet search term is “Effective Altruism”.

If you really, really don’t feel like spending a few hours doing research, you’ll do fine giving to one of GiveWell’s top 3.

Existential Risk: A Special Case

I want to put in a special plug here for a category of charity that gets neglected, where I think you can get a lot of bang for your buck in terms of results, and that’s charities that try to mitigate existential risk.

An existential risk is something that might be unlikely – or hard to estimate – but if it happens, it would wipe out humanity. Even a small reduction in the chance of an extinction event could help a lot of people – because you’d be saving not only people at the time, but future generations. Giving What We Can has recently acknowledged this as a promising area for high-impact giving, and GiveWell’s shown some interest as well.
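To see why, here’s a toy expected-value calculation. Every number in it is an invented placeholder, not an estimate I’d defend:

```python
# Toy expected-value illustration; all numbers are invented placeholders.
p_reduction = 1e-9        # assumed reduction in extinction probability
current_population = 7e9  # roughly the 2014 world population
future_people = 1e12      # assumed number of potential future people

expected_lives = p_reduction * (current_population + future_people)
print(f"~{expected_lives:,.0f} expected lives saved")  # ~1,007 with these numbers
```

Even a one-in-a-billion shift looks substantial once future generations are counted; the hard part is estimating whether a given donation buys any shift at all.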

Examples of existential risk are:

  • Nuclear Weapons
  • Biotechnology
  • Nanotechnology
  • Asteroids
  • Artificial Intelligence

Organizations that focus on existential risk include:

  • The Future of Humanity Institute (FHI) takes an academic approach, mostly focused on raising awareness and assessing risks.
  • The Lifeboat Foundation – I actually used to give to them, but I’m not sure quite what they really do, so I put that on hold – I may pick it up later if I learn something encouraging.
  • The Machine Intelligence Research Institute (MIRI) is working on the specific problem of avoiding an unfriendly intelligence explosion – by building friendly artificial intelligence. They believe this will also help solve many other existential risks.

In particular, MIRI is holding a fundraiser where new large donors (someone who has not yet given a total of $5,000 to MIRI) who make a donation of $5,000 or more, are matched 3:1 on the whole donation. Please consider it if you think MIRI’s work is important. [UPDATE: This was a success and is now over.]

But Didn’t You Say Meade Had a Good Strategy Too?

Yes. If you are super serious about helping people a lot, you might want to consider making career choices partly on that basis. I don’t have a lot to say about this personally, but 80,000 Hours specializes in helping people with this kind of thing.

One thing I can add is that it’s easy to get intimidated by the difficulty of the optimal career choice for helping and thereby avoid making a knowably better choice. Don’t do that. Better is better. Don’t worry about making the perfect choice – you can always change your mind later when you think things through more.

Leveraged Giving and Meta-Charity

When we talk about leverage in giving, people usually take it literally and think about matched donations. Matched donations are fine, they double effectiveness and that’s great, but a factor of 5-10 from research will be more important than a factor of 2 from matched giving.

But there’s another kind of leverage – giving in ways that increase the effectiveness or quality of others’ giving. For example, you could give to GWWC, AidGrade, or GiveWell, and this would mean that everyone else who gives based on their recommendations makes a slightly more effective choice – or that they’re able to convince more people to give at all. You could probably do a quick back-of-the-envelope Fermi estimate to figure out what the impact is – whether there’s a multiplier effect or not. Giving What We Can actually gives some numbers themselves – and I know that if GiveWell thinks they can’t use the money, they’ll just pass it along to their top-rated charities.
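As a sketch of what such a Fermi estimate might look like (every number below is invented purely for illustration):

```python
# Invented numbers, for illustration only: a Fermi estimate of the
# "multiplier" on a donation to a charity evaluator.
money_moved = 20e6        # assumed: annual donations following the evaluator's advice
effectiveness_gain = 5.0  # assumed: evaluator's picks vs. the typical alternative
evaluator_budget = 2e6    # assumed: evaluator's annual operating cost

# Extra impact created by redirecting that money to better charities,
# per dollar of the evaluator's budget:
extra_impact = money_moved * (1 - 1 / effectiveness_gain)
print(f"~${extra_impact / evaluator_budget:.0f} of direct-giving value per $1")
```

With these invented numbers the multiplier is about 8x – but it only applies if your marginal dollar actually changes what the evaluator gets done.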

There’s also a special case of leverage, and that’s the Center For Applied Rationality, or CFAR. CFAR is trying to help people think better and more clearly, and act more effectively to accomplish their goals. A large part of their motivation for this is to create a large community of people interested in effective altruism, with the skills to recognize the high-impact causes and the personal effectiveness to actually do something to help. If your lifetime donations just create one such highly motivated person, then you’ve “broken even” – in other words, you’ve helped at least as many people as you would have by giving directly. But right now it’s a much more leveraged opportunity: CFAR plans to eventually become self-sustaining, but for the next few years they’ll still probably depend on donations to supplement any fees they can charge for their training.

This year I’m part of the group matching donations for CFAR’s end-of-year fundraiser. If you want to spend some of my money to try to build a community of true guardians of humanity, please do! [UPDATE: This fundraiser also concluded, successfully.]

So I should give all my charity budget to the one most effective charity?

Probably not.

Now, that’s not because of “diversification”. The National Center for Charitable Statistics (NCCS) estimates that there are about half a million charities in the US alone. That’s plenty of diversity – I don’t think anything’s at risk of being neglected just because you give your whole charity budget to the best one.

The reason you don’t want to give everything to the charity you think helps the most is that list of reasons people give:

  • Warm fuzzies
  • Group affiliation
  • Supporting friends
  • The simple preference for people to be helped

There are probably lots of others too, but for brevity I’ll lump everything except that last one together as “warm fuzzies.”

If you force yourself to pretend that you only care about helping, you’ll feel bad about missing out on your warm fuzzies, and eventually you’ll find an excuse to abandon the strategy.

I want to be clear that all of these are okay reasons to give! Some people, when they hear this argument, assume that it means, “Some of my donations are motivated by my selfish desire for warm fuzzies. This is wrong! I should just give to charity to help people. I shouldn’t spend any charity money on feeling good about myself.”

You are a human being and you deserve to be happy. Also you probably won’t stick with a strategy that reliably makes you feel bad. So unfortunately, the exact optimal helping-strategy is unlikely to work for you (though if it does, that’s fine too).

Fortunately, you can get most of the way to a maximum-help strategy without giving up on your other motivations, because of:

One Weird Trick to Get Warm Fuzzies on the Cheap

The human brain has a defect called scope insensitivity (but don’t click through until you read this section, there’s a spoiler). It basically means that the part of us that has feelings doesn’t understand about very large or very small quantities. So while you intellectually might have a preference for helping more people over fewer, you’ll get the same feel-good hit from helping one person and hearing their touching story, as you would from helping a group of ten.

In a classic experiment, researchers told people, randomly assigned to three groups, about an ecological problem that was going to kill some birds but could be fixed. They asked participants how much they would personally be willing to pay to fix the problem. The only thing they changed from group to group was how many birds would be affected.

One group was told 2,000 birds were affected, and they were willing to pay on average $80 each. The other two groups were told 20,000 and 200,000 birds were affected, respectively. How much do you think they were willing to pay? Try to actually guess before you look at the answer.

Here’s how much the average person in each group was willing to pay:

2,000 birds: $80

20,000 birds: $78

200,000 birds: $88

So, basically the same, with some random variation.

Why do we care about this? Because it suggests that you should be able to get your warm fuzzies with a very small donation. Your emotions don’t care how much you helped – they care whether you helped at all.

So you should consider setting aside a small portion of your charity budget for the year, and spreading it equally among everything that seems like a good idea to give to. It probably wouldn’t cost you much to literally not say no to anything – just give every cause you like a dollar! You might even get more good vibes this way than before, when you were trying to accomplish helping and warm fuzzies with the exact same donations.

Then give the rest to the charity you think is most effective.

Summary:

You probably already want to help people you don’t know, and give to charity. Researching charities’ effectiveness in producing outcomes is a cheap way of making your donation help more people.

These organizations can help with your research: GiveWell, AidGrade, Giving What We Can, and The Life You Can Save.

Because of scope insensitivity, you should try to get your warm fuzzies and your effective helping done separately: designate a small portion of your charity budget for warm fuzzies, and give a tiny bit to every cause you’ll feel good about.

You may also be interested in some higher-leverage options. CFAR is trying to create more people who care about effective altruism and are effective enough to make a difference, and they have a matched donations fundraiser going on right now, which I’m one of the matchers for. [UPDATE: This fundraiser was successful, and is now over.]

Existential risk is another field where the beneficial effects of giving are underestimated; you should consider giving there too, especially to FHI or MIRI.

MIRI in particular has a matched donations fundraiser going on now, where new large donors (>$5,000) will be matched at a 3:1 rate. [UPDATE: This fundraiser was successful, and is now over.]

Cross-posted at my personal blog.


Whatever Is Not Best Is Forbidden

At this year’s CFAR Alumni Reunion, Leah Libresco hosted a series of short talks on Effective Altruism. She now has a post up on an issue Anna Salamon brought up, the disorienting nature of some EA ideas:

For some people, getting involved in effective altruism is morally disorienting — once you start translating the objects and purchases around you into bednets, should you really have any of them? Should you skip a gruel diet so you can keep your strength up, work as an I-banker, and “earn to give” — funneling your salary into good causes? Ruminating on these questions can lead to analysis paralysis — plus a hefty serving of guilt.

In the midst of our discussion, I came up with a speculative hypothesis about what might drive this kind of reaction to Effective Altruism. While people were sharing stories about their friends, some of their anxious behaviors and thoughts sounded akin to Catholic scrupulosity. One of the more exaggerated examples of scrupulosity is a Catholic who gets into the confessional, lists her sins, receives absolution, and then immediately gets back into line, worried that she did something wrong in her confession, and should now confess that error.

Both of these obviously bear some resemblance to anxiety/OCD, period, but I was interested in speculating a little about why. In The Righteous Mind, Jonathan Haidt lays out a kind of factor analysis of what drives people’s moral intuitions. In his research, some moral foundations (e.g. care/harm) are pretty common to everyone, but some (sanctity/degradation, or “purity”) are more predictive in some groups than others.

My weak hypothesis is that effective altruism can feel more like a “purity” decision than other modes of thought people have used to date. You can be inoculated against moral culture shock by previous exposure to other purity-flavored kinds of reasoning (deontology, religion, etc.), but if not (and maybe if you’re also predisposed to anxiety), the sudden clarity about a best mode of action – one that is both very important and very unlikely for you to pull off every day – may trigger scrupulosity.

EAs sometimes seem to think of the merit of an action as a binary quality, where either it is obligatory because it has the “bestness” attribute and outweighs the opportunity cost, or it is forbidden because it doesn’t. You’re allowed to take care of yourself, and do the best known thing given imperfect information, but only if it’s “best.” This framing is exhausting and paralyzing because you’re never doing anything positively good: everything is either obligatory or forbidden.

It doesn’t have to be that way; we can distinguish between intrapersonal and interpersonal opportunity cost.

I’m not a public utility, I’m a person. If I help others in an inefficient way, or with less of my resources than I could have employed, then I’ve helped others. If last year I gave to a very efficient charity, but this year I switched to a less efficient charity, then I helped others last year, and helped others again this year. Those are things to celebrate.

But if I pressure or convince someone else to divert their giving from a more efficient to a less efficient charity, or support a cause that itself diverts resources from more efficient causes, then I have actually harmed others on net.

Cross-posted at my personal blog.


Project: Comment on Proposed Regulations

In the US, there’s a mandatory comment period for new regulations, and regulators are required to review and consider every comment. Regulators I’ve talked to have said that most comments are of low quality (either by cranks or interested parties), and that a clearly argued analysis that pointed out a flaw or unintended consequence of the regulation would have a good chance of affecting the outcome. I think that developing expertise in this kind of thing could potentially be a high-leverage way to affect policy outcomes – my central estimate is that the long-run value per hour invested is quite high. But the variance on my estimate is high too – so we should experiment!

The Effective Altruism Society of DC is meeting this Labor Day, Monday, September 1st, to actually comment on some regulations (or try), in order to assess the feasibility of this project.

Estimate of Value

I assume that regulations have two kinds of defects. Minor defects constitute about 5% of the regulation’s gross impact in excessive costs or forgone benefits. Major defects constitute about 50%. Once we know what we’re doing, how often will we produce these changes?

No matter how well targeted our efforts were, I would be surprised if we made a minor improvement more than once every two times we wrote a comment (50%), but I’d also be surprised if we could on average make a difference less often than once every two hundred times (0.5%) – with a central estimate of once every twenty times, or 5%.

Similarly, my range for the rate at which we cause a major improvement in the regulation is 0.1% – 1% – 10% (pessimistic – central – optimistic). So in the pessimistic case, for a regulation with a given impact, the expected value of reviewing it is 5% * 0.5% + 50% * 0.1% = 0.075% of the regulation’s gross impact.

Since the FDA regulator we consulted estimated that it would take one person 10 hours to research and write a comment on one regulation, this means that the value per hour is 0.0075% of a regulation’s total impact under the pessimistic assumptions. In the optimistic case, the expected value per hour spent reviewing a regulation is (5% * 50% + 50% * 10%) / 10 = 0.75% of the regulation’s total impact. For my central estimate, the expected value per hour spent is (5% * 5% + 50% * 1%) / 10 = 0.075% of the regulation’s total impact.
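Here’s the whole calculation in one place, as a quick Python check of the arithmetic above:

```python
# Defect sizes are fractions of a regulation's gross impact;
# rates are how often one of our comments fixes such a defect.
MINOR_SIZE, MAJOR_SIZE = 0.05, 0.50
HOURS_PER_COMMENT = 10  # the FDA regulator's estimate

scenarios = {
    "pessimistic": {"minor": 0.005, "major": 0.001},
    "central":     {"minor": 0.05,  "major": 0.01},
    "optimistic":  {"minor": 0.50,  "major": 0.10},
}

for name, s in scenarios.items():
    ev = MINOR_SIZE * s["minor"] + MAJOR_SIZE * s["major"]  # per comment
    print(f"{name}: {ev / HOURS_PER_COMMENT:.4%} of gross impact per hour")
# pessimistic: 0.0075%, central: 0.0750%, optimistic: 0.7500%
```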

Now, how do we measure the total impact of the regulations we’ll be reviewing?

Model 1 – Review Regulations Regardless of Impact:

The Competitive Enterprise Institute (CEI) estimated that regulatory compliance costs the economy about $1.8 trillion per year. They’re a conservative or libertarian think tank (I couldn’t find left-liberal estimates easily; let me know if you can), so I’ll round down and assume that the gross impact (including foregone benefits of better regs) is about $1,000,000,000,000. There are about a million regulations (this is a high estimate, counting every sentence with an instruction as a separate regulation), so the average cost of a regulation is $1,000,000.

If we reviewed new regulations regardless of impact, the economic value produced per person-hour for a group specializing in regulatory review would be:

  • Pessimistic: $1,000,000 * 0.0075% = $75/hr
  • Central: $1,000,000 * 0.075% = $750/hr
  • Optimistic: $1,000,000 * 0.75% = $7,500/hr

Model 2 – Review Highest Impact Regulations:

The first estimate assumed that while we will learn how to write effective comments and select regulations that need them, we don’t select regulations on the basis of their gross impact. What if we target the highest-impact regulations? If regulators estimate that a regulation will cost $100M or more, the regulation is designated as major, and regulators are required to perform a Regulatory Impact Analysis. We could target only those regulations designated as major. This would limit the scope of the project, but there are something on the order of ten major regulations issued per year, so this could be a valuable part-time project or component of a larger project. Assuming conservatively that the gross impact of each major regulation is exactly $100M, the economic value produced per person-hour for a group specializing in regulatory review would be:

  • Pessimistic: $100,000,000 * 0.0075% = $7,500/hr
  • Central: $100,000,000 * 0.075% = $75,000/hr
  • Optimistic: $100,000,000 * 0.75% = $750,000/hr

I’ll take the geometric mean of the two central estimates, for a final estimate of $7,500 in economic value produced per hour invested in the project, in the long run.
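The dollar figures and the final geometric mean, continuing the check above:

```python
from math import sqrt

# Per-hour expected value as a fraction of gross impact (from the sketch above).
ev_per_hour = {"pessimistic": 0.000075, "central": 0.00075, "optimistic": 0.0075}
MODEL_1_IMPACT = 1_000_000    # average regulation
MODEL_2_IMPACT = 100_000_000  # conservative floor for a "major" regulation

for name, ev in ev_per_hour.items():
    print(f"{name}: ${MODEL_1_IMPACT * ev:,.0f}/hr vs ${MODEL_2_IMPACT * ev:,.0f}/hr")

print(f"geometric mean of central estimates: ${sqrt(750 * 75_000):,.0f}/hr")  # $7,500
```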

In real life, the per-hour value of the initial test is much higher than the value of the project as a whole, because it helps us figure out whether we live in a world where the problem is more tractable than I imagined (in which case we go ahead) or way less tractable than I thought (in which case we give up and don’t incur any more costs).

Weaknesses in the Analysis

I used estimates of cost in place of estimates of a regulation’s total impact. This means that I may be undercounting the benefits of this project, as we may find ways to increase the benefits of regulations as well as reducing their costs.

Economic impact is not obviously convertible into QALYs; the US is a rich country, and insofar as the economic savings end up consumed by Americans, the impact of this intervention may be much lower than that of an intervention with a similar monetary impact in a very poor country. GiveWell staff informally estimate something on the order of a $5,000 cost (this is near the lower bound) per African life saved for their top recommended charities, while US regulators use something on the order of a $5,000,000 value of life in cost-benefit analysis. Since we can’t take the social value produced home with us and mail it to Africa, this is a serious disadvantage of commenting on regulations relative to, say, earning to give.

Pessimistically, if this means that it actually costs $5M to save an extra American life, but $5k to save a life in a developing economy, and all the gains to regulatory improvements accrue only to Americans, then we should divide the estimates by 1,000 to be able to compare the benefits in terms of life outcomes. Using this discount rate, my central estimate is now that each hour we spend on the project in the long run will produce an improvement in life outcomes equivalent to giving $7.50 to one of GiveWell’s top-rated charities. That’s pretty disappointing. Shouldn’t we just give up?

The variation in those estimates was pretty high. In the optimistic scenario where we focus on major regulations, that’s the equivalent of giving $750 to an efficient charity for each hour invested – so if trying this out can give us info about whether we’re in the optimistic, central, or pessimistic case, then an initial experiment is tremendously valuable.

Second, this isn’t an isolated project – it resembles other possible interventions strongly enough that, in trying this out, we’re likely to get other ideas for how to improve the world by influencing policy for the better, and develop transferable expertise. Some interventions that would be nearby:

  • Comment on state- or municipal-level regulations or other decisions
  • Talk to policymakers and regulators directly
  • Survey existing and proposed laws and regulations to find potentially high-leverage changes

The Plan

Effective Altruists at the University of Maryland have graciously offered to host our test run of this project. We will convene between 11AM and 11:30 in the front foyer of Glenn L Martin Hall (the eastern foyer, closer to Route 1) at the University of Maryland, College Park, MD 20740. (You may need to call me to get in – if you don’t have my number, you can leave a comment on this post asking for it.) Then we plan to go to one of the computer labs, where we will talk through our plan for what to do and how to do it. From 4-5PM we’ll wrap up, review each other’s drafts, and hopefully submit comments and debrief.

Since this is a test, documenting what we’ve done is crucial. Here are things we’ll try to keep track of:

• Who showed up
• How much time we spent on each of selecting/researching/writing/reviewing the first draft & who wrote it
• The reviewed/proofread draft and who reviewed it
• When we submitted the comments
• What our recommendations were (change vs keep vs just don’t do it)
• Expected value of recommended action (estimated)

Eventually, if our comments draw responses:
• The responses
• The nature of the responses (no action, change, drop the reg)
• Expected value of actual change (estimated)

Of course, six hours is a long time; we’ll get hungry, and food will be ordered. Hopefully we’ll also get to know each other better, improving our ability to cooperate in the future; working on a project is a great way to become better friends! Here’s the meetup.com page for the event. If you need any more info, you can ask in the comments here, or there.
