In the US, there’s a mandatory comment period for new regulations, and regulators are required to review and consider every comment. Regulators I’ve talked to have said that most comments are of low quality (either by cranks or interested parties), and that a clearly argued analysis that pointed out a flaw or unintended consequence of the regulation would have a good chance of affecting the outcome. I think that developing expertise in this kind of thing could potentially be a high-leverage way to affect policy outcomes – my central estimate is that the long-run value per hour invested is quite high. But the variance on my estimate is high too – so we should experiment!
The Effective Altruism Society of DC is meeting this Labor Day, Monday, September 1st, to actually comment on some regulations (or try to), in order to assess the feasibility of this project.
Estimate of Value
I assume that regulations have two kinds of defects. Minor defects constitute about 5% of the regulation’s gross impact in excessive costs or forgone benefits. Major defects constitute about 50%. Once we know what we’re doing, how often will we produce these changes?
No matter how well targeted our efforts were, I would be surprised if we made a minor improvement more often than once every two comments (50%), but I’d also be surprised if we averaged less than once every two hundred comments (0.5%) – with a central estimate of once every twenty, or 5%.
Similarly, my pessimistic, central, and optimistic rates for causing a major improvement in the regulation are 0.1%, 1%, and 10%. So in the pessimistic case, for a regulation with a given impact, the expected value of reviewing it is 5% * 0.5% + 50% * 0.1% = 0.075% of the regulation’s gross impact.
Since the FDA regulator we consulted estimated that it would take one person 10 hours to research and write a comment on one regulation, this means that the value per hour is 0.0075% of a regulation’s total impact under the pessimistic assumptions. In the optimistic case, the expected value per hour spent reviewing a regulation is (5% * 50% + 50% * 10%) / 10 = 0.75% of the regulation’s total impact. For my central estimate, the expected value per hour spent is (5% * 5% + 50% * 1%) / 10 = 0.075% of the regulation’s total impact.
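The arithmetic above can be sketched as a quick calculation; all of the probability and impact figures are the estimates from this section, not measured quantities:

```python
# Expected value per hour of commenting, as a fraction of a
# regulation's gross impact. All figures are the rough estimates above.
MINOR_IMPACT = 0.05     # minor defect ~5% of the regulation's gross impact
MAJOR_IMPACT = 0.50     # major defect ~50% of the regulation's gross impact
HOURS_PER_COMMENT = 10  # FDA regulator's estimate of hours per comment

def value_per_hour(p_minor, p_major):
    """Expected fraction of a regulation's gross impact captured per hour."""
    return (MINOR_IMPACT * p_minor + MAJOR_IMPACT * p_major) / HOURS_PER_COMMENT

pessimistic = value_per_hour(0.005, 0.001)  # 0.0075% of gross impact
central     = value_per_hour(0.05,  0.01)   # 0.075%
optimistic  = value_per_hour(0.50,  0.10)   # 0.75%
```

Note that the minor-defect term barely matters in any scenario: the major-defect term dominates each sum, so the estimate is most sensitive to the 0.1%–10% range on major improvements.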
Now, how do we measure the total impact of the regulations we’ll be reviewing?
Model 1 – Review Regulations Regardless of Impact:
The Competitive Enterprise Institute (CEI) estimated that regulatory compliance costs the economy about $1.8 trillion per year. They’re a conservative or libertarian think tank (I couldn’t find left-liberal estimates easily; let me know if you can), so I’ll round down and assume that the gross impact (including forgone benefits of better regs) is about $1,000,000,000,000. There are about a million regulations (this is a high estimate counting every sentence with an instruction as a separate regulation), so the average cost of a regulation is $1,000,000.
If we reviewed new regulations regardless of impact, the pessimistic, central, and optimistic economic value produced per person-hour for a group specializing in regulatory review would be:
- $1,000,000 * 0.0075% = $75/hr (pessimistic)
- $1,000,000 * 0.075% = $750/hr (central)
- $1,000,000 * 0.75% = $7,500/hr (optimistic)
Model 2 – Review Highest Impact Regulations:
The first estimate assumed that while we will learn how to write effective comments and select regulations that need them, we don’t select regulations on the basis of their gross impact. What if we target the highest-impact regulations? If regulators estimate that a regulation will cost $100M or more, it is designated as major and regulators are required to perform a Regulatory Impact Analysis. We could target only those regulations designated as major. This would limit the scope of the project, but something on the order of ten major regulations are issued per year, so this could be a valuable part-time project or component of a larger project.

Assuming conservatively that the gross impact of each major regulation is exactly $100M, the pessimistic, central, and optimistic economic value produced per person-hour for a group specializing in regulatory review would be:
- $100,000,000 * 0.0075% = $7,500/hr (pessimistic)
- $100,000,000 * 0.075% = $75,000/hr (central)
- $100,000,000 * 0.75% = $750,000/hr (optimistic)

I’ll take the geometric mean of the two central estimates ($750/hr and $75,000/hr), for a final estimate of $7,500 in economic value produced per hour invested in the project, in the long run.
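For the record, the two models and the geometric mean of their central estimates work out like this; the dollar figures are the assumptions stated above:

```python
import math

# Expected fraction of a regulation's gross impact captured per hour,
# from the earlier section (pessimistic, central, optimistic).
PESSIMISTIC, CENTRAL, OPTIMISTIC = 0.000075, 0.00075, 0.0075

AVG_REG_IMPACT = 1_000_000      # Model 1: $1T gross impact / ~1M regulations
MAJOR_REG_IMPACT = 100_000_000  # Model 2: "major" regulation threshold

model1_central = AVG_REG_IMPACT * CENTRAL    # $750/hr
model2_central = MAJOR_REG_IMPACT * CENTRAL  # $75,000/hr

# Geometric mean of the two central estimates, since the two models
# differ by orders of magnitude.
final_estimate = math.sqrt(model1_central * model2_central)  # ~$7,500/hr
```

The geometric mean is the natural way to split the difference here: the two models disagree by a factor of 100, and averaging in log space gives a figure a factor of 10 from each.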
In real life, the per-hour value of the initial test is much higher than the value of the project as a whole, because it helps us figure out whether we live in a world where the problem is more tractable than I imagined (in which case we go ahead) or much less tractable than I thought (in which case we give up and don’t incur any more costs).
Weaknesses in the Analysis
I used estimates of cost in place of estimates of a regulation’s total impact. This means that I may be undercounting the benefits of this project, as we may find ways to increase the benefits of regulations as well as to reduce their costs.
Economic impact is not obviously convertible into QALYs; the US is a rich country, and insofar as the economic savings end up being consumed by Americans, the impact of this intervention may be much lower than that of an intervention with similar monetary impact in a very poor country. GiveWell staff informally estimate something on the order of a $5,000 cost (this is near the lower bound) per African life saved for their top recommended charities, while US regulators use something on the order of a $5,000,000 value of life in cost-benefit analysis. Since we can’t take the social value produced home with us and mail it to Africa, this is a serious disadvantage of commenting on regulations relative to, say, earning to give.
Pessimistically, if this means that it actually costs $5M to save an extra American life, but $5k to save a life in a developing economy, and all the gains to regulatory improvements accrue only to Americans, then we should divide the estimates by 1,000 to be able to compare the benefits in terms of life outcomes. Using this discount rate, my central estimate is now that each hour we spend on the project in the long run will produce an improvement in life outcomes equivalent to giving $7.50 to one of GiveWell’s top-rated charities. That’s pretty disappointing. Shouldn’t we just give up?
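Concretely, the discount works like this; the $5k and $5M figures are the rough value-of-life numbers from the previous paragraph:

```python
# Converting economic value to "GiveWell-equivalent" dollars, using the
# rough 1,000x ratio between the two value-of-life figures above.
US_VALUE_OF_LIFE = 5_000_000    # order-of-magnitude US regulatory figure
GIVEWELL_COST_PER_LIFE = 5_000  # rough lower-bound cost per life saved
DISCOUNT = US_VALUE_OF_LIFE / GIVEWELL_COST_PER_LIFE  # 1,000x

central_per_hour = 7_500  # final central estimate, dollars of economic value
equivalent_donation = central_per_hour / DISCOUNT  # $7.50/hr
```

This is the most pessimistic possible conversion, since it assumes none of the gains spill over to anyone outside the US.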
First, the variation in those estimates was pretty high. In the optimistic scenario where we focus on major regulations, each hour invested is equivalent to giving $750 to an efficient charity – so if trying this out can tell us whether we’re in the optimistic, central, or pessimistic case, then an initial experiment is tremendously valuable.
Second, this isn’t an isolated project – it resembles other possible interventions strongly enough that, in trying this out, we’re likely to get other ideas for how to improve the world by influencing policy for the better, and develop transferable expertise. Some interventions that would be nearby:
- Comment on state- or municipal-level regulations or other decisions
- Talk to policymakers and regulators directly
- Survey existing and proposed laws and regulations to find potentially high-leverage changes
Effective Altruists at the University of Maryland have graciously offered to host our test run of this project. We will convene between 11AM and 11:30 in the front foyer of Glenn L Martin Hall (the eastern foyer, closer to Route 1) at the University of Maryland, College Park, MD 20740. (You may need to call me to get in – if you don’t have my number, you can leave a comment on this post asking for it.) Then we plan to go to one of the computer labs, where we will talk through our plan for what to do and how to do it. From 4-5PM we’ll wrap up, review each other’s drafts, and hopefully submit comments and debrief.
Since this is a test, documenting what we’ve done is crucial. Here are things we’ll try to keep track of:
• Who showed up
• How much time we spent on each of selecting/researching/writing/reviewing the first draft & who wrote it
• The reviewed/proofread draft and who reviewed it
• When we submitted the comments
• What our recommendations were (change vs keep vs just don’t do it)
• Expected value of recommended action (estimated)
Eventually, if our comments draw responses:
• The responses
• The nature of the responses (no action, change, drop the reg)
• Expected value of actual change (estimated)
Of course, six hours is a long time; we’ll get hungry, and food will be ordered. Hopefully we’ll also get to know each other better, improving our ability to cooperate in the future; working on a project is a great way to become better friends! Here’s the meetup.com page for the event. If you need any more info, you can ask in the comments here, or there.