Some of my replies on YouTube

Sometimes it seems like I follow the opposite of Thumper’s Rule (“if you can’t say something nice, don’t say nothing at all”): even when I mostly like something, I’m prone to staying silent until I spot a flaw to point out!

Recently I’ve been less like that, but here I wanted to share two comments I made that admittedly followed that trend.

  1. Julia Galef on Newcomb’s problem

    In her video, she explains how two different theories of decision making (“state of the world” causal decision theory vs. evidential decision theory) seem to come to opposite conclusions in this paradox.

    Now that I’m revisiting this, I think the “state of the world” approach (as stated in the video) fails to be what it’s claimed to be: causal decision theory. It’s stated as if the person’s choice will not cause the contents of the box, yet that’s precisely the line of causation the thought experiment tells you will take place. So any true causal decision theory should accept that.

    Because of that mixup, my comment below might look like it isn’t totally addressing the position as stated in the video. Instead, I addressed causality, which is what really matters: by definition, causality determines what outcome (such as the contents of the box) will be caused.

    I said:

    The two decision theories don’t predict different actions if done correctly. The state of the world isn’t just the contents of the box. It is also the decision maker, the mind reading, the fact that the future is physically pre-determined, etc. Given THAT state of the world, which to pick? You get the same answer as the evidential approach. Your idea of “the state of the world” has to depend on evidence anyways, and so does the “which action causes the best expected outcome” part.

    It only seems that they give different answers because you are, essentially, approximating the answers, rather than computing them exactly. Or because you disagree on the state of the world. Maybe you disagree that the “state of the world” includes the fact that the future is pre-determined in some way (or at least functionally identical to determinism). But then you are simply rejecting the thought experiment, which dictates that this is indeed the state of the world.

    Also, the correct answer can change (and both decision theories will agree) once the situation is not idealized. Once there is uncertainty. Then you need to use Bayesian decision making based on the values and probabilities. But also note that the values in this calculation don’t merely depend on the absolute value of the dollars, but also how important each outcome is in your life. For some people, it might be more important to guarantee that they get more than zero dollars (maybe they are starving poor) so taking the $1000 might be the only rational choice. For others, $1000 might not be enough (maybe they need ransom money immediately), so increasing the probability of the million might be the only rational choice.

    (I also replied to several other commenters.)
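    To make that last point of my comment concrete, here’s a minimal sketch of the Bayesian calculation. It’s my own toy model, not anything from the video: “accuracy” is the predictor’s reliability, and the made-up utility functions stand in for the starving person and the ransom payer from my comment.

        def expected_utility(one_box, accuracy, utility):
            # 'accuracy' = probability the predictor anticipated your choice correctly.
            # 'utility' maps dollars to how much they matter to this particular person.
            if one_box:
                # Prediction right: the opaque box holds $1,000,000; wrong: it's empty.
                return accuracy * utility(1_000_000) + (1 - accuracy) * utility(0)
            # Prediction right: the opaque box is empty and you keep the $1,000;
            # prediction wrong: you walk away with both boxes.
            return accuracy * utility(1_000) + (1 - accuracy) * utility(1_001_000)

        linear = lambda d: d                             # dollars at face value
        starving = lambda d: 1 if d > 0 else 0           # only "more than zero" matters
        ransom = lambda d: 1 if d >= 1_000_000 else 0    # only the full million matters

        for name, u in [("linear", linear), ("starving", starving), ("ransom", ransom)]:
            for acc in (1.0, 0.9):
                one = expected_utility(True, acc, u)
                two = expected_utility(False, acc, u)
                print(f"{name} utility, accuracy {acc}: one-box {one}, two-box {two}")

    With the linear utility, the calculation favors one-boxing at any accuracy above about 50.05%; with the “starving” utility and an imperfect predictor, two-boxing wins because only it guarantees more than zero dollars; and with the “ransom” utility, one-boxing wins because it maximizes the probability of getting the million.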

    Check out her channel! One of her videos even has advice for solving “paradoxes”, and when I watched that one I got the feeling she might even agree with my comment above.

  2. Isaac Arthur on ecumenopolises

    He argued something to the effect of “since we value people more than we value heaps of unused raw material, it’s better (all else being equal) if all the unused material in the universe is turned into more people”. Hmmm. Maybe, if it increases our chances of finding people who are closer to perfect soul mates, and similar “lottery” situations where you need to “buy more tickets” to ensure “winning”?

    Anyways, at the time, I said:

    I have to disagree that more people is good, since you said “all things being equal”. I take that to mean that a lesser number of people have nothing to gain by choosing one option over the other. That means more people would be entirely neutral: not good, but not bad either.

    You say intelligent life is more valuable than inanimate asteroids and dead planets. Of course that is true in at least two ways. First, the good that they can bring to other people. But eventually, if you have enough, I think more people will make no difference to anyone (except for those people who want to have children, but plenty of advanced nations have very low birth rates, so who knows). Second, intrinsic value. We are intrinsically valuable because we value ourselves. But that is different from valuing a state of the world where there are more people.

    Moral arguments, such as the trolley problem, work because people already value their own futures and such. In the trolley solution, it’s not because more people will exist, it’s because there is statistically less disaster. And we want to live in a world where there is statistically less disaster. People want us to make that world, and we want other people to make that world.
