Well, what has been up is that Y has been L-ing a lot over the last few months, and every time I sit down to write another WYL, another “urgent” project/travel/thing flares up. Good for productivity but bad for consistency. For those of you who have bugged me in the meantime, thanks for your interest, and sorry if you dislike such sporadic writing!
Idea Weaving: Emotions and Rationality as Computation
- Before you think I’m regurgitating Kahneman: I should admit I’ve barely read the book (I think I’m 2 chapters in). It would be pleasant to see overlap, though.
- main idea: you can think of beliefs as the fundamental unit of thinking and cognition, and feelings as heuristics/layers on top of beliefs that offer quick computation that is often wrong. OR, you can think of feelings as the fundamental unit of thinking and cognition, and beliefs as tools/layers on top of feelings that offer deep computation that is often sour. These are very parallel frameworks, and it isn’t immediately clear which one is right or wrong, or whether “right/wrong” is even the right pair of words to use. I think “resonant/sour” (as in music) is closer to what the “feeling” computation feels like, and forcing ourselves to use “right/wrong” will bias us toward computation in those terms. It seems more empowering to be able to think in both ways, to understand people who primarily think in the other way, and to realize that they are both just forms of computation.
- a fairly strong anti-intellectual who happens to be really intelligent said the following to me the other day, which is among the most honest things I’ve heard any human being say. The conversation went (expanded to include context at the cost of accuracy, not that I remember the exact words anyway):
- him: “I hate group X.”
- me: “oh why?” [I happen to belong to group X and I was bracing myself for a biting argument.]
- him: “what you mean why?”
- me: “like, what’s bad about group X?”
- him: “I don’t think group X does anything wrong, just that I had a negative experience with group X and I feel bad when I think of them. I’m not sure I have some universal logical argument that proves that they are bad and should be hated.”
- and after this, the above idea really clicked with me. For him the emotions are the only fundamental part of human experience (some existentialists think this way as well), and we all seek to rationalize what we feel in other layers of mental software that sometimes serve only to take up extra disk space. While I don’t agree with him 100%, I do agree that this is a very powerful way of thinking that is frequently a blind spot for “educated” folk like me. In particular, his thinking framework allowed him to make such an honest admission, whereas in a similar situation I would have felt compelled, by debate/writing/rationality, to wrap my answer in some form of universal policy even if I were honest about my emotions, something like “I hate X since I hate things Y that satisfy some Z, and X belongs to Y because of Z,” which really, really obfuscates the reality, which is “I hate X and I need a reason.”
- here’s a great heartwarming parallel between the two computation systems when it comes to self-improvement/debugging:
- In math I was taught (thanks J. for reminding me) that confusion is a good thing, since it lets you zoom in on the actual conflicting beliefs so you get an opportunity to learn.
- Rilke said that (paraphrased) sadness is a good thing, since it lets you zoom in on the conflicting emotions so you get an opportunity to sort them out.
- here’s a parallel of bad ways to treat problems. When we are confused but refuse to address it (as an emotion-computer may caution, this frequently happens when we try to protect our emotional insecurity underneath the belief overhead), we can rationalize, which avoids and numbs the main problem under layers of belief fluff. Similarly, when we have an emotional issue and refuse to address it (as a belief-computer may caution, this frequently happens when we fail to actually find the source of the real problem and analyze it under the emotional overhead), we drown ourselves in dance/sex/TV, which avoids and numbs the main problem under layers of emotional fluff.
Some Direct Applications of Above:
- recurring bias: when faced with something annoying (like considering a last-minute flight), a person “computing emotionally” will just have an emotional aversion and not do it. A person “computing rationally” may write down some sort of cost/benefit analysis that assigns some fairly arbitrary number to the utility of the flight and compares it to the cost of the flight. Both of these decisions may involve about the same amount of “computation” (and may even have been just wrappers around the same neurons firing!!), but each person may find the other’s decision process dumb.
- the two have different forms of data storage that we may call “understanding.” For example, you can “know” that a strong chess player will destroy you 95% of the time and you can “feel” that a strong chess player will destroy you 95% of the time, and those feel very different (try, say, Switching) but are really just equivalent concepts in the two frameworks. And frequently when someone says something sagely like “you see but you don’t understand,” they mean something like knowing/feeling one (usually the latter) but not the other. If you like Kahneman, maybe you can call this something like “system 1 understanding” vs. “system 2 understanding.” What experts across domains seem to agree on is that having both understandings simultaneously is somewhat “deeper” (using your visual cortex to see math while knowing how to crunch through math with reasoning, etc.). The moments where you understand in both ways seem to feel the closest to mastery, like when you both instinctively feel that some shape in Go is weak and clearly know that you can find a sequence of moves to attack it.
- W. asked (in response to a previous WYL about how I “should” be wrong some percentage of the time): “I do own it pretty well when it comes down to a question of empirical fact, and someone just shows me I’m wrong. It’s a lot harder when it’s a bunch of theoretical arguments. Usually I find most arguments boil down to value differences (which can be discussed and empathized with, but it’s rare to see change), empirical facts (which can then be verified and resolved), or they dissolve once they are recognized as semantic. It seems rare that someone presents me with a set of a priori logical statements that completely floors me and I surrender to their superior viewpoint. But maybe this shouldn’t be so rare? Maybe I’m not noticing opportunities to be wrong?” My answer-in-progress, to vibe off my framework, is: one can admit that a set of values (or some consequence of a set of values) “feels sour” or “feels bad” and agree that this is comparable in degree (if not in dimension) to being “proven wrong.”
- I have a million more things to say about this, but I think I’ll stop for now. The main idea is that it is really important to think about both “emotional thinking” and “rational thinking” as different ways to do computation. And it seems annoying for either side when side A attempts to usurp the other way of doing computation (or this meta-concept) as side A’s constituents, or worse, second-class citizens.
- Thanks a lot to J. (and many others) for forcing me to talk about this so I can flesh it out a bit, but really thanks to A. for sending me an email titled “okay, you are right about feelings being ‘fundamental.’ ” (tongue-in-cheek interpretation: this admission just “proves” that the main computation I did in deciding to write this was driven by feelings of personal pride, as opposed to me believing that there is a deep idea =D)
Mathematics and Elephant Questions
- (purposefully meandering intro) I’m not going to somehow distill everything I’ve learned from doing mathematics, but I’ve been doing a lot of it on a math pilgrimage (one reason for the late WYLs) that spanned 3 weeks across 3 different cities: Snowbird, Boston, and Chicago. The intensity has awakened some ideas, starting with the meta-idea in this line: a burst of intensity can burn in some big lessons even for things you’ve been learning for a long time, so tempo changes sound good.
- Anyway, I’ve decided to coin the idea of “elephant questions/statements” to describe questions that *directly address the elephant in the room.* The relation to math is that the math community/culture is just full of situations where there is a huge elephant in the room and nobody talks about it. This isn’t particularly surprising (and I have no argument that math does it more or less than other disciplines), but it does make me think about the skill of deciding to pop the elephant question/statement, and how well it has served me despite my not previously having a name for it.
- in a relationship: “I think you’re unhappy about X. Am I right?”
- in a general math talk for (normal!) undergrads where the word “algebraic curve” comes up with no explanation: “what’s an algebraic curve?”
- in a business meeting: “hey Bob, I know we’re trying to get this thing done, but I think this meeting is not going to be productive since we’re still emotionally sensitive since you just had that shouting match with Carol yesterday and people are still preoccupied with that mental image. Can we cool off for a bit and talk business later when everyone’s unwound a bit?”
- this is even more important when someone involved DOES NOT SEE THE ELEPHANT, since now you can make him/her see it, and you’ve just surfaced even more information.
- failure to ask elephant questions usually seems to stem from a confusion between “common knowledge” and “mutual knowledge”: we don’t know whether everyone else knows that everyone else sees the elephant, even though we can be pretty sure everyone has seen the elephant. I’m not sure how useful this is, but it is a cute theoretical point.
- I’m forcing my own frameworks here, but I do think a nontrivial contribution to the failure-to-ask-elephants situation is that we “know” everyone knows it and would be thankful when we ask the question, but we don’t “feel” that people would find it okay (we have some stupid mental image of people yelling at us for asking elephant questions). Mastery in this comes when we feel, not just know, that asking is actually okay.
- … well, outside of situations where asking does have bad consequences, obviously. But my experience is that people are actually pretty cool if your asking the elephant question actually does BOTH of the following:
- A) addresses and makes progress toward fixing the problem; nobody cares if you just point out sensitive issues with no solution
- B) doesn’t make anyone present look stupid. People are sensitive, and there will be a collective sigh of relief when people see that they (or other people in the room) are not being antagonized.
- I visited Taiwan for about a week with LW about a month ago for the first time and hopefully not the last. My tea and food knowledge have leveled up a bit.
- Food-wise, one of my highlights was Fourplay, which features bartenders who only make custom drinks based on whatever clues the customer gives. (if you’ve visited my home bar, they basically do a legit version of what I pretend to do) I’ve learned a couple of tricks, and wish to apply them when I do more bartending. I’m surprised so few *bars* do this kind of thing as their main shtick, though from personal experience many *bartenders* will do some version of it upon request.
- My mom has access to something that streams mainland China TV over the internet, and on it there’s one Taiwanese channel in particular that looks like it is sponsored by the Mainland, as opposed to the Taiwanese TV that I watched in Taiwan. Juxtaposing the two has been really interesting, not least because one is much more Blue and the other is much more balanced between Blue/Green (these are just party colors, but I’m sure you can guess which channel is which without knowing which color is which).
- the first (?) subway stabbing incident in Taiwanese history happened when I was there (in fact, I was 2 stops away in the subway). It is amazing to see the reactions, which mirrored U.S. responses to school shootings and 9/11.
- first, people find anything they can to blame: the person, the school, violent video games, parents in general, the police for showing up late
- second, people look for policy: we should put a cop in every subway car, we should ban video games…
- third, the media and internet: sympathizers, conspiracy theorists, people calling each other dumb, every single channel playing the same footage over and over again…
- I was fairly numb, though I’m not implying that these reactions are silly (though I do find them excessive). I think that in any harmonious place like Taiwan, such things are bound to be extremely surprising. This is more a reflection of my own days in America desensitizing me to stories of random acts of violence. I think I’ve learned more about myself than about Taiwan through this incident, and I somehow wish we all lived in societies that would overreact like this, since it would at least mean they were peaceful.
- Thanks to S. for the following hack that applies visualization to the idea of “clearing mental ram” from the skill hacking discussion: say you are doing something frustrating. You can imagine picking up the frustration and putting it to one side while you work. This doesn’t make the work more fun but makes it more effective since you are purposefully not paying attention to the frustration.
- T. and several others have resonated with my comment that “an unflattering interpretation is that I’m trying so hard to not look judgmental to other people that I may have just become worse at judging” (on me not being harsh enough when judging candidates from “Reasonability Tests”). It seems to be a common problem, and I particularly like A.’s thoughts. I may write a part II for this.