Some Musing About Different Concept Styles

This is a bit speculative, and maybe not as precise as I’d like it to be, but I think there’s really something here to be aware of.  For one thing, what I’m going to say here might help people avoid needless conflict.

There seem to be these two major styles of thinking and conceptualizing and such.  Different ways of looking at things?

  • One is to see things sort of all together, at once, as a big continuous whole.  “Holistic”.  Certainly, this is how reality is, and I think people who use this method believe this to be a strength.  “Reality is complicated!” they will shout (I think I’ve seen PZ Myers and/or others say such things over at Pharyngula).  See also this advocacy for “lawless science”.
  • The other is to identify precise variables.  “Analytic”.  This method is nicer for analysis, and it in no way fails to account for the complicatedness of reality.

I think a lot of difficulty arises when there is a dispute between parties that each use a different one of these styles.  First, I think, because the two sides have difficulty understanding each other.  Second, I think, because each believes only its own style of thinking is correct, so surely the other must be wrong.

Yet I suspect that both can be used to describe reality with equal accuracy.  And it might be advantageous to use both, rather than just one.  Obviously I’m sold on the utility of the more analytic approach.  But I’m also writing a blog post which (in retrospect?) looks strikingly like advocacy for the holistic way of thinking.  If I had to guess right now, I’d say each is more efficient at different tasks, even though theoretically both will end up with equally accurate descriptions of reality.

Other possibly related things to check out:

Some of my replies on YouTube

Sometimes it seems like I follow the opposite of Thumper’s Rule.  Even when I mostly like stuff, I’m prone to being silent until I spot a flaw to point out!

Recently I’ve been less like that, but here I wanted to share two comments I made that admittedly followed that trend.

  1. Julia Galef on Newcomb’s problem

    In her video, she explained how two different theories of decision making (“state of the world” causal decision theory vs. evidential decision theory) seem to come to opposite conclusions in this paradox.

    Now that I’m revisiting this, I think the “state of the world” approach (as stated in the video) fails to actually be what it’s claimed to be: causal decision theory.  It’s stated as if the person’s choice does not cause the contents of the box.  Yet that’s precisely the line of causation that the thought experiment tells you will take place.  So any true causal decision theory should accept that.

    Because of that mixup, my comment below might look like it isn’t totally addressing the position as stated in the video.  I addressed causality instead, which is what really matters: by definition, causality determines what outcome (such as the contents of the box) will be brought about.

    I said:

    The two decision theories don’t predict different actions if done correctly. The state of the world isn’t just the contents of the box. It is also the decision maker, and the mind reading, the fact that the future is physically pre-determined, etc. Given THAT state of the world, which to pick? You get the same answer as the evidential approach. Your idea of “the state of the world” has to depend on evidence anyways, and so does the “which action causes the best expected outcome” part.

    It only seems that they give different answers because you are, essentially, approximating the answers, rather than computing them exactly. Or because you disagree on the state of the world. Maybe you disagree that the “state of the world” includes the fact that the future is pre-determined in some way (or at least functionally identical to determinism). But then you are simply rejecting the thought experiment, which dictates that this is indeed the state of the world.

    Also, the correct answer can change (and both decision theories will agree) once the situation is not idealized. Once there is uncertainty. Then you need to use Bayesian decision making based on the values and probabilities. But also note that the values in this calculation don’t merely depend on the absolute value of the dollars, but also how important each outcome is in your life. For some people, it might be more important to guarantee that they get more than zero dollars (maybe they are starving poor) so taking the $1000 might be the only rational choice. For others, $1000 might not be enough (maybe they need ransom money immediately), so increasing the probability of the million might be the only rational choice.

    (I also replied to several other commenters)

    Check out her channel! One of her videos even had advice for solving “paradoxes”, and when I watched that one I got the feeling maybe she’d even agree with my above comment.

  2. Also, Isaac Arthur on ecumenopolises

    He argued something to the effect of “since we value people more than we value heaps of unused raw material, it’s better (all else being equal) if all the unused material in the universe is turned into more people”.  Hmmm.  Maybe, if it increases our chances of finding people who are more perfect soul mates, etc.?  And in similar “lottery” situations, where you need to “buy more tickets” to ensure “winning”.

    Anyways, at the time, I said:

    I have to disagree that more people is good since you said “all things being equal”. I take that to mean that a lesser number of people have nothing to gain by choosing one option over the other. That means more people would be entirely neutral, not good, but not bad either. You say intelligent life is more valuable than inanimate asteroids and dead planets. Of course that is true in at least two ways: first, the good that they can bring to other people. But eventually, if you have enough, I think more people will make no difference to anyone (except for those people who want to have children, but plenty of advanced nations have very low birth rates, so who knows). Second, intrinsic value. We are intrinsically valuable because we value ourselves. But that is different from valuing a state of the world where there are more people. Moral arguments, such as the trolley problem, work because people already value their own futures and such. In the trolley solution, it’s not because more people will exist, it’s because there is statistically less disaster. And we want to live in a world where there is statistically less disaster. People want us to make that world, and we want other people to make that world.
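Back on the Newcomb’s problem comment: the Bayesian calculation I gestured at (“values and probabilities”, where value need not be linear in dollars) can be sketched in a few lines.  Everything here — the 90% predictor accuracy and the two utility functions — is a made-up illustration, not anything from the video:

```python
# Sketch of expected-utility decision making for a non-idealized
# (uncertain-predictor) Newcomb's problem. Accuracy and utilities
# are illustrative assumptions.

def expected_utility(action, accuracy, utility):
    """Expected utility of one-boxing or two-boxing, given how often
    the predictor correctly foresees the choice."""
    if action == "one-box":
        # If the predictor foresaw one-boxing, the opaque box holds $1,000,000.
        return accuracy * utility(1_000_000) + (1 - accuracy) * utility(0)
    else:  # "two-box"
        # If the predictor foresaw two-boxing, the opaque box is empty.
        return accuracy * utility(1_000) + (1 - accuracy) * utility(1_001_000)

# Linear utility: dollars valued at face value.
linear = lambda dollars: dollars

print(expected_utility("one-box", 0.9, linear))   # ≈ 900000
print(expected_utility("two-box", 0.9, linear))   # ≈ 101000

# Someone who above all must not walk away empty-handed (the "starving
# poor" case from my comment): any guaranteed money is what matters.
needs_cash_now = lambda dollars: 1.0 if dollars > 0 else 0.0

# Now the guaranteed $1,000 makes two-boxing the better choice.
print(expected_utility("one-box", 0.9, needs_cash_now))  # 0.9
print(expected_utility("two-box", 0.9, needs_cash_now))  # 1.0
```

The same formula, fed different utility functions, flips the rational answer — which is the point: the disagreement lives in the values and probabilities, not in the decision theory.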

What are things?

There are different ways to say what something “is”. Take a hammer, for example. You can talk about:

For hammers, like for many things, an intensional definition can be developed. In the case of a hammer, I think it would mostly have to do with its function: what we use it to do and how it works.


Though usually, a ton of these ways of looking at the thing come together into a concept or “construct”.  Even degree or probability might go into the concept.  An ice cream sandwich might be “a sandwich” in some sense, but when someone says they are bringing you “a sandwich” you’d be rather confused if it turned out to be an ice cream sandwich.  And yet clearly the two have enough similarity in form and purpose that it makes sense to use the name “ice cream sandwich” rather than “ice cream mystery object”, as if it were a wholly unfamiliar shape.
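That idea — that degree or probability goes into the concept — can be sketched as a graded membership score instead of a yes/no test.  The features and weights below are invented purely for illustration:

```python
# Toy sketch of graded concept membership: score how well a candidate
# matches the "sandwich" construct instead of asking yes/no.
# Features and weights are made up for illustration.

SANDWICH_FEATURES = {
    "filling_between_layers": 0.5,  # form: something held between two layers
    "bread_layers": 0.3,            # material: the layers are bread
    "eaten_as_savory_meal": 0.2,    # purpose: served as meal food
}

def sandwich_score(candidate_features):
    """Weighted share of sandwich-like features the candidate has."""
    return sum(weight for feature, weight in SANDWICH_FEATURES.items()
               if feature in candidate_features)

blt = {"filling_between_layers", "bread_layers", "eaten_as_savory_meal"}
ice_cream_sandwich = {"filling_between_layers"}  # right form, wrong material and role

print(sandwich_score(blt))                 # ≈ 1.0 -> clearly "a sandwich"
print(sandwich_score(ice_cream_sandwich))  # 0.5 -> sandwich-ish, hence the name
```

A partial score like 0.5 captures exactly the ice cream sandwich situation: similar enough in form to borrow the name, but not what you expect when someone promises you “a sandwich”.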
