Saving the Whole Planet via Ancestor Simulations

Abstract: Whether we should create ancestor simulations matters not just ethically but also for predicting whether we are in one. This post discusses reasons to think that we should influence the future to create ancestor simulations. One reason in particular is quite interesting: by influencing the future to simulate the past, we increase the likelihood that we are ourselves being simulated right now. And if we influence the future to simulate the past with a special algorithm that preserves the minds of people as they die and leads them into a happy afterlife….

The Simulation Argument proposes that at least one of these has to be true:

  1. The human species is very likely to go extinct before reaching a “posthuman” stage.
  2. Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof).
  3. We are almost certainly living in a computer simulation.

One of the variables in the Simulation Argument is (let’s say) P(S), the probability that the future will contain ancestor simulations. We can influence this variable! For example, we can enact constitutional amendments or construct AIs that will, when the time is ripe millions of years from now, create a bunch of ancestor-simulations. They don’t have to be perfectly accurate, so long as they are good enough that we can’t tell whether we are in one of them or not.
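
For concreteness, here is a rough numerical sketch of why P(S) matters. It uses the fraction-of-observers formula from Bostrom’s original paper; the variable names and example numbers are my own illustration, not part of this post’s argument.

    # Sketch of the Simulation Argument arithmetic (Bostrom 2003).
    #   f_p:   fraction of civilizations that reach a posthuman stage
    #   f_i:   fraction of posthuman civilizations that run ancestor-simulations
    #   n_sim: average number of ancestor-simulations run by those that do
    # P(S) in the text corresponds roughly to the product f_p * f_i.

    def fraction_simulated(f_p: float, f_i: float, n_sim: float) -> float:
        """Fraction of observers with human-type experiences who are simulated."""
        simulated = f_p * f_i * n_sim        # simulated histories per original one
        return simulated / (simulated + 1)   # the +1 is the single original history

    print(fraction_simulated(0.01, 0.1, 1_000_000))  # ~0.999: simulations dominate
    print(fraction_simulated(0.01, 0.0, 1_000_000))  # 0.0: no simulations, no chance

Even a small chance that posthuman civilizations run many simulations pushes the fraction of simulated observers close to 1, which is why nudging P(S) upward matters so much.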

But why would we want to increase this variable? Well, get this: suppose we specify in the constitutional amendment or the AI’s goal structure or whatever that the ancestor-simulations created ought to have an afterlife. As in, an algorithm that preserves the minds of people as they die in the simulation and then moves them to another environment, where they are punished for their crimes, rewarded for their good deeds, generally treated to a higher standard of living, and fully informed about the state of the world as a whole (perhaps they are even accepted as citizens of the future society).
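
To make the proposal concrete, here is a purely hypothetical sketch of that afterlife hook. Every name in it (SimulatedMind, on_death, the moral_record field) is invented for illustration; this is not a real system or design, just the shape of the idea.

    # Hypothetical sketch only: what an afterlife-preserving death hook might
    # look like. All names and types here are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class SimulatedMind:
        identity: str
        state: bytes                # final mind-state snapshot
        moral_record: float         # net score of deeds, however it gets measured
        fully_informed: bool = False

    def on_death(mind: SimulatedMind, afterlife: list) -> None:
        """Called when a person dies in-simulation: preserve, judge, relocate."""
        # Preserve: keep the mind-state rather than letting it be destroyed.
        # Judge: moral_record would determine reward or punishment (left abstract).
        # Relocate and inform: admit them to the afterlife environment.
        mind.fully_informed = True
        afterlife.append(mind)

    # Usage: the afterlife accumulates everyone who dies in the simulation.
    afterlife: list = []
    on_death(SimulatedMind("Alice", b"...", moral_record=3.0), afterlife)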

Supposing that, we ought to suspect not only that we are in a simulation, but also that we have a wonderful afterlife to look forward to, in which we will be rewarded for our good deeds and punished for our misdeeds.

Is this something we want? Hell yeah! So let’s start lobbying Congress!

Objection: This is all very well if you are selfish, but from an unselfish perspective it makes no sense. The resources used to create ancestor simulations could instead be used to make an equivalent number of happy future people. So really what you are doing is making future people less happy and giving them a slight chance of being in the past, so that the past people get a large chance of being in the future. But the chances all balance out; it’s a zero-sum game, because there is always exactly one original planetful of people with no afterlife.

Reply: Okay, fair enough. But maybe there are other reasons for doing this. Maybe we want there to be “equality of expectation”: we know that one planetful of people will die with no afterlife, but we want there to be many people with an equal chance of getting unlucky rather than just a few people who know they are doomed. Another reason: if we punish the evil and reward the good, we give even the original people an incentive to be good, which is important since the original people have so much control over the future. Finally: we might want to populate our future society with people who have been raised in diverse environments, and historical environments may be an important part of that. For example, we might want our society to be partially inhabited by the kind of people who grew up thinking that they would die soon and that their virtue would not be rewarded.
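
Here is a quick sketch of the arithmetic behind this objection and reply. The setup (one original population plus some number of simulated copies) is my own toy model of the post’s scenario, nothing more rigorous.

    # Toy model: 1 original planetful of people plus n_simulations simulated
    # copies. Only the original population lacks an afterlife.

    def no_afterlife_stats(n_simulations: int):
        total_populations = 1 + n_simulations
        doomed_populations = 1  # the total amount of "no afterlife" is fixed
        per_person_risk = doomed_populations / total_populations
        return doomed_populations, per_person_risk

    print(no_afterlife_stats(0))   # (1, 1.0)  the originals know they are doomed
    print(no_afterlife_stats(99))  # (1, 0.01) same total, risk spread evenly

The total comes out the same either way, which is the objection’s zero-sum point; what changes is that each individual’s risk shrinks, which is what “equality of expectation” buys.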

Objection: The diversity thing could probably be achieved without creating an environment full of suffering. Also, there is something intrinsically wrong with withholding the truth from people, even if you tell them the truth eventually.

Issues relating to obscure ideas in the philosophy of consciousness:

If consciousness in a world depends only on whether an algorithm is implemented, rather than on how many times it is implemented (and if various other background assumptions, like psychological continuity theory, hold), then by simulating a person who died and then saving the simulated person, you are literally saving the person who died. Thus, if it turns out to be possible to accurately recover the brain-states of our ancestors, then ancestor-simulations are practically a moral necessity: a way to right the wrongs of the past at essentially no cost.

It’s possible that a brain or something that simulates a brain isn’t sufficient for consciousness. (Maybe there’s some other component we haven’t found yet that’s necessary.) And if so, it’s at least possible that some other construction might be sufficient for consciousness. In principle, we should be able to figure out what this extra thing is and construct a corresponding simulation.

But we should favor simpler theories over more complicated ones. If the complexity of the rule-according-to-which-systems-are-conscious counts as part of the complexity of a theory, then (since we can probably build simpler implementations of our own minds than brains) by creating ancestor-simulations with such techniques in the future, we could ensure that we have a 100% chance of an afterlife, i.e. that nobody (except for p-zombies) has to really die. (If every theory is correct, i.e. all conceivable worlds are real, then this would be no better from an unselfish perspective than creating normal future people using that technique.)

Warning: The above arguments depend on various implausible premises in the philosophy of consciousness. That’s OK, but if it turns out that each argument has an evil twin of equal plausibility arguing for the opposite conclusion…
