From: M. Taylor Saotome-Westlake
Date: Mon, 1 Aug 2022 01:11:49 +0000 (-0700)
Subject: memoir: email review up to 30 March 2019
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=a8b07bd83ab3ee09b4f25a8ed39a35f7c7d8e9ea;p=Ultimately_Untrue_Thought.git

memoir: email review up to 30 March 2019

A lot of content to still turn into prose. Going over this, I'm struck by—the amount of time we spent on this is absurd. I think confronting the paper trail is helping me see this project as "telling a story that actually happened" rather than being in the vengeful mode of thinking that the purpose of this post is to have receipts for denouncing Yudkowsky as Bad and Dishonest. I think it's higher-integrity and better writing if it's just a story rather than an instrument of revenge, although the story can talk about instrument-of-revenge temptations.
---

diff --git a/notes/a-hill-email-review.md b/notes/a-hill-email-review.md
index cd8b299..41ee895 100644
--- a/notes/a-hill-email-review.md
+++ b/notes/a-hill-email-review.md
@@ -69,9 +69,28 @@ then I say, maybe don't reply before Friday
2:47-3:55 a.m.: some replies from Michael
4:01 a.m.: I got the unit test passing; why do I keep lying about email hiatus?
4:15 a.m.: "predictably bad ideas" email to Anna/Michael/Ben/Sarah/Zvi/Scott/Alicorn/Mike
-
-
-> When I look at the world, it looks like [Scott](http://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) and [Eliezer](https://twitter.com/ESYudkowsky/status/1067183500216811521) and [Kelsey](https://theunitofcaring.tumblr.com/post/171986501376/your-post-on-definition-of-gender-and-woman-and) and [Robby Bensinger](https://www.facebook.com/robbensinger/posts/10158073223040447?comment_id=10158073685825447&reply_comment_id=10158074093570447&comment_tracking=%7B%22tn%22%3A%22R2%22%7D) seem to think that some variation on ["I can define a word any way I want"]() is sufficient to end debates on transgender identity.
+5:26 a.m.: five impulsive points
+5:58 a.m.: everyone's first priority should be to make sure Zack can sleep
+6:14 a.m.: I'm really sorry about this. I don't need Scott's help and I think Michael is being a little aggressive about that, but I guess that's also kind of Michael's style?
+6:18 a.m.: Michael's response: What the FUCK Zack!?!
+7:27 a.m.: Home now. Going to get in bed. I could say more, but that would be high-variance, and we don't want variance right now
+19 Mar: Michael seems disappointed with me for what he perceived as me escalating and then deescalating just after he came to help, but which from my perspective felt like me just trying to communicate that I don't want to impose too many costs on my friends just because I felt upset today. (Subject: "yet another strategy huddle (III)")
+19 Mar: Ben writes off Scott and ignores a boundary; nominates Jessica for helping me
+19 Mar: maybe there's a reason to be attached to the "rationalist" brand name/social identity, because my vocabulary has been trained really hard on this subculture; theistic evolution Church analogy
+20 Mar: planning to meet Zvi in person for first time
+20 Mar: Ben asks Scott to alter the beacon (I think this became https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/ )
+20 Mar: super-proton concession (why now? 
maybe he saw my "tools shattered" Tweet, or maybe the Quillette article just happened to be timely)
+21 Mar: I think of the proton criticism; nominate Ben as mission commander
+21 Mar: I suggest to Anna that chiming in would be a movement towards peace with Michael's gang
+21 Mar: Ben and Michael don't think the technical quibble makes any progress
+22 Mar: I literally ran into Scott on the train
+22 Mar: Scott does understand the gerrymandering/unnaturalness problem, but it sort of seems like he doesn't understand that this is about cognitive algorithms rather than verbal behavior ("someone tells you that classifying a burrito as a sandwich will actually save ten million lives and ensure friendly AI in our lifetime, would you still refuse to classify a burrito as a sandwich?"—well, it doesn't matter what I say), whereas Eliezer definitely understands
+24 Mar: Michael to me on Anna as cult leader
+24 Mar: I tell Michael that I might be better at taking his feedback if he's gentler
+30 Mar: hang out with Jessica (previous week was Zvi and Nick and anti-fascist Purim)
+
+
+> When I look at the world, it looks like [Scott](http://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) and [Eliezer](https://twitter.com/ESYudkowsky/status/1067183500216811521) and [Kelsey](https://theunitofcaring.tumblr.com/post/171986501376/your-post-on-definition-of-gender-and-woman-and) and [Robby Bensinger](https://www.facebook.com/robbensinger/posts/10158073223040447?comment_id=10158073685825447&reply_comment_id=10158074093570447&comment_tracking=%7B%22tn%22%3A%22R2%22%7D) seem to think that some variation on ["I can define a word any way I want"]() is sufficient to end debates on transgender identity.

> And ... I want to be nice to my trans friends, too, but that can't possibly be the definitive end-of-conversation correct argument. Not _really_. Not if you're being serious.

@@ -93,9 +112,6 @@ The Craft Is Not The Community, (https://srconstantin.wordpress.com/2017/08/08/t

Anyway, meanwhile, other conversations were happening.

-
-
-
Michael—
> Ben once told me that conversation with me works like this. I try to say things that are literally true and people bend over backwards to pretend to agree but to think that I am being metaphorical.

@@ -320,3 +336,78 @@ Date: Tue Mar 19 00:27:53 2019 -0700

It's sharing code with the internal resïssue-quote endpoint, because that's convenient.

+
+
+
+
+Scott, like, you sent me that email piously arguing that it's OK to gerrymander category boundaries because it's important for trans people's mental health. And there's a couple things that I don't understand.

(1) If creatively reinterpreting the meanings of words because the natural interpretation would make people sad is OK ... why doesn't that just generalize to an argument in favor of outright lying when the truth would make people sad? I don't understand why semantic mind games are OK, but lying is wrong. (Compare: "If you’re going to ignore a probabilistic counterargument, why not ignore a proof, too?") From my perspective the motte-and-bailey definition-hacking mind games are far more cruel than a simple lie. Speaking of which----

(2) If "mental health benefits for trans people" matter so much, then, like, why doesn't my mental health matter? Aren't I trans, sort of? Like, what do I have to do in order to qualify, was my being on hormones for only 5 months not good enough? Or do I have to be part of the political coalition in order for my feelings to count? 
We had an entire Sequence whose specific moral was that words can be wrong. And of the 37 Ways that Words Can Be Wrong, number 30 is "Your definition draws a boundary around things that don't really belong together," and number 31 is, "You use a short word for something that you won't need to describe often, or a long word for something you'll need to describe often." And this is the thing I've been trying to tell everyone about. And when I try to talk to you about it, you're like, "Doesn't matter, mental health benefits for trans people are more important according to the utilitarian calculus! Also, you're wrong." And like ... getting shut down by appeal-to-utilitarianism (!?!?) when I'm trying to use reason to make sense of the world is observably really bad for my mental health! Does that matter at all? Also, if I'm philosophically wrong, then Eliezer was wrong in 2009, because everything I'm doing now, I learned from him. Do you really think "c'mon, parsimony doesn't matter" is an improvement on the Sequences?!

(3) You write, "wouldn't it be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and you had rejected it because of gender stuff?"

But the original reason it seemed plausible that we would create Utopia forever wasn't "because we're us", it was because we were trying to do systematically correct reasoning, with the hope of using the systematically correct reasoning to build aligned superintelligence. If we're not going to do systematically correct reasoning because that would make people sad, then that undermines the reason that it was plausible that we would create Utopia forever. You can't just forfeit the mandate of Heaven like that and still expect to rule China.

(4) You write, "I realize you are probably thinking something like 'This isn't just me getting triggered by some culture war thing evolved to trigger me, it actually really *is* that important that rationalists get transgender right!' [...] All I can say is that I beseech you, in the bowels of Christ, think it possible that you may be mistaken."

I don't know how much social proof influences you, but does the fact that I managed to round up Michael and Ben and Sarah and [name redacted] into a coalition to help me argue with Eliezer shift your estimate at all of whether my concerns have some objective merit? It can simultaneously be the case that I have the culture-war PTSD that you propose, and that my concerns have merit.

(5) I don't know how other people's confidentiality intuitions work. Am I being an asshole by cc'ing in my coalition members (plus Anna, who is my friend even though she hasn't been helping me with this particular quest) in an email that references things you said in an email that was addressed only to me? I think this should be OK; I don't think we're talking about anything particularly secret.

----


I'm really sorry about this. I don't need Scott's help and I think Michael is being a little aggressive about that, but I guess that's also kind of Michael's style? I'm really sorry, because I still think that even very emotionally disturbed people are clearly incentivizable, and so the narrative of, "Oh, poor Zack, he's crazy, he can't help it" is not attributing me enough agency. Like, if I get really upset literally once every two years and impose a lot of care costs on other people in the process, I should be held accountable for that somehow. 
Last time, I ended up giving people some money (I didn't end up doing the public-praise part that I proposed in the blog post about it, because it turned out that that would have been too socially weird) but I don't know if even the money was a good social experiment or not. + +> No. +> We don't have punishment. +> We don't have morality. +> I'm trying to change that and you are undermining the attempt! +> We won't be friends. +> We're not friends anymore already! Once again, I am trying to change that and you are making it impossible. +In terms of making intellectual progress, I'm still most excited about the idea I was talking myself into all day in the "strategy huddle II" thread (I'll forward that to Jessica): "If Scott and Eliezer don't want to talk, whatever, I can just write those guys off (why did I think they were so important anyway?) and just focus on writing up the mathematically rigorous version of the thing I'm trying to say about how categories can be value-dependent because your values determine which dimensions you pay attention to (like 'birds' vs. 'flying things'), but that's really different from 'people will be sad if I use this category', and we should be able to demonstrate mathematically that the 'people will be sad if I think this' categories are less useful for making inferences. Not only will I learn lots of stuff in the process of figuring out the details, but when it's finished, our coalition can hype it up as interesting new 'Human's Guide to Words'-like content, and then at least the mathy people in our alleged community will see the parsimonious-categories-are-important thing in a context where the math will convince them and politics won't distract them." + +So ... I really should have just stuck to that plan and not freaked out: forget this stupid "appeal to local celebrities" game and just do cool math for two months. But when it was 4 a.m. (because I'd been obsessing all day, and I really wanted to meet my manager's deadline for the code that we want for our business, and I did write some code intermittently between 11 p.m. and 4 a.m.), I was just really upset at the whole situation, and I sent some more hasty emails, I think because at some psychological level I wanted Scott to know, "I'm so frustrated about your use of 'mental health for trans people' as an Absolute Denial Macro that I'm losing sleep over it even though we know sleep is really important for me." But then when Michael started advocating on my behalf (Michael seems to have a theory that people will only change their bad behavior when they see a victim who is being harmed), I started to minimize my claims because I have a generalized attitude of not wanting to portray/sell myself as a victim, even if it is technically literally true that I'm really hurt in a way that is causally related to other people behaving in ways that are bad. And I feel really guilty and wanting-to-appease about disappointing Michael (who has done so much for me), even though I know that Michael doesn't want me to care about disappointing him in particular, because I should just care about doing the Right Thing. I think that Michael thinks that aggression is more honest than passive-aggression. I think that's obviously true, but I just can't be too aggressive to my friends and people I admire (e.g., Anna, Scott, Eliezer). I just won't do it. 
I can passively let my metaphorical lawyers like Ben do it while accurately claiming to represent my interests, but I won't do it myself because I'm scared of damaging relationships. (Or offending people I worship as demigods.)

So, I don't know. I think having been on the edge of a bad outcome but stepping away (I got 2ish hours worth of napping; I'll be just fine) will help me move on, and dissolve my unhealthy attachment to the "rationalist" brand name/social identity? (At the meeting on Sunday, it was pointed out that forging a new brand-name/Schelling-point-for-interesting-people-to-gather is easier than recapturing an old one.) I just ... we had a whole Sequence specifically about how you can't define a word any way you want, because you need words to map to the statistical structure in the world. I keep trying to point out, "Look, isn't that inconsistent with this newly fashionable 'categories were made for man'/'hill of meaning in defense of validity' moral? Right? Right? You have to see it? It can't possibly just be me who sees it?!" and Scott and Eliezer seem to keep rounding me off to, "Zack is upset about trans stuff which doesn't matter" even though I keep explicitly disclaiming that (while it's true that I'm upset about trans stuff that doesn't matter), that's not the part I expect them to care about. And I've just been really freaking out for three and a half months on end, because I learned everything I know from Eliezer, and the other half of everything I know from Scott, and like ... they can't really be that stupid, right? They have to just be fucking with me, right??

That's really unhealthy. (We were warned that it's unhealthy: "nervously seeking reassurance is not the best frame of mind in which to evaluate questions of rationality".)

The tension between "I don't want to do too much social aggression" and "losing the Category War within the rationalist community is absolutely unacceptable; if I have to do social aggression and lose friends and burn bridges if there's any hope of getting people to see the hidden-Bayesian-structure-of-language-and-cognition thing again, then I'll do it" has been causing me a lot of pain and making me make wildly inconsistent decisions. (Emailing Scott at 4 a.m., and then calling Michael "aggressive" when he came to defend me is just crazy.)

If there isn't actually any such thing in the world as a "rationalist community" (it's just a category boundary in my head that I keep attaching importance to, even though Sarah clearly explained to me why this is dumb two years ago, and I just haven't made the corresponding mental adjustments), then maybe losing the Category War within it is acceptable? Maybe that's the answer? If so, I wish I had been able to find the answer without leaving so much of a mess.

Wait, maybe there is a reason to be attached to the "rationalist" brand name/social identity that isn't just me being stupid, and if we can find what the non-stupid reason is, then we can make better decisions.

I think a non-stupid reason is that the way I talk has actually been trained really hard on this subculture for ten years: most of my emails during this whole campaign have contained multiple Sequences or Slate Star Codex links that I can just expect people to have read. I can spontaneously use the phrase "Absolute Denial Macro" in conversation and expect to be understood. That's a massive "home field advantage." 
If I just give up on "rationalists" being as sane as we were in 2009 (when we knew that men can't become women by means of saying so), and go out in the world to make intellectual friends elsewhere (by making friends with Quillette readers or arbitrary University of Chicago graduates), then I lose all that accumulated capital. The language I speak is mostly educated American English, but I rely on subculture dialect for a lot. My sister has a chemistry doctorate from MIT (so speaks the language of STEM intellectuals generally), and when I showed her "... To Make Predictions", she reported finding it somewhat hard to read, probably because I casually use phrases like "thus, an excellent motte", and expect to be understood without the reader taking 10 minutes to read the link. This essay, which was me writing from the heart in the words that came most naturally to me, could not be published in Quillette. The links and phraseology are just too context-bound. + +Maybe that's why I feel like I have to stand my ground and fight a culture war (even though I really don't want to be socially aggressive, and the contradiction between the war effort and my general submissiveness makes me make crazy decisions). + +I feel like I have to be careful phrasing my complaints about Berkeley culture, because if I'm not super-careful, people will round me off as saying, "Help! Censorship! I'm being repressed!" and then counterargue that I'm observably not being censored (which is true, modulo Ben's concerns about distributed silencing, which I feel motivated to downplay because I don't want to sell myself as a victim). The problem is much subtler than that. Berkeley "rationalists" are very good at free speech norms and deserve credit for that! But it still feels like a liberal church where you can say "I believe in evolution" without getting socially punished. Like, it's good that you can do that. But I had a sense that more is possible: a place where you can not just not-get-punished for being an evolutionist, but a place where you can say, "Wait! Given all this evidence for natural selection as the origin of design in the biological world, we don't need this 'God' hypothesis anymore. And now that we know that, we can work out whatever psychological needs we were trying to fulfil with this 'church' organization, and use that knowledge to design something that does an even better job at fulfilling those needs!" and have everyone just get it, at least on the meta level. + +I can accept a church community that disagrees on whether evolution is true. (Er, on the terms of this allegory.) I can accept a church community that disagrees on what the implications are conditional on the hypothesis that evolution is true. I cannot accept a church in which the canonical response to "Evolution is true! God isn't real!" is "Well, it depends on how you choose to draw the 'God' category boundary." I mean, I agree that words can be used in many ways, and that the answer to questions about God does depend on how the asker and answerer are choosing to draw the category boundary corresponding to the English language word 'God'. That observation can legitimately be part of the counterargument to "God isn't real!" But if the entire counterargument is just, "Well, it depends on how you define the word 'God', and a lot of people would be very sad if we defined 'God' in a way such that it turned out to not exist" ... unacceptable! Absolutely unacceptable! 
If this is the peak of publicly acceptable intellectual discourse in Berkeley, CA, and our AI alignment research group is based out of Berkeley (where they will inevitably be shaped by the local culture), and we can't even notice that there is a problem, then we're dead! We're just fucking dead! Right? Right?? I can't be the only one who sees this, am I? What is Toronto??????

Ben—
> Just wrote to Scott explicitly asking him to alter the beacon so that people like Zack don't think that's the place to go for literally doing the thing anymore, and ideally to redirect them to us (but that's a stretch). If he actually does this, that would be a really good time to start the new group blog.

> This seems like the sort of thing where it's actually in his local interest to help us, as it reduces our incentive to ask him to do hard things.


Michael—
> This all seems sound, but also not worth the digression. Zack, do you feel comfortable with generalizing the sort of things that Scott said, and the things that others here have said about fear of talking openly, and assuming that something similar is probably happening with Eliezer too?

> If so, now that we have common knowledge, there is really no point in letting technical quibbles distract us. We need to deal with the actual crisis, which is that dread is tearing apart old friendships and causing fanatics to betray everything that they ever stood for while its existence is still being denied.


He said that he wasn't sure why the grand moral of "A Human's Guide to Words" was "You can't define a word any way you want" rather than "You can define a word any way you want, but then you have to deal with the consequences."

Ultimately, I think this is a pedagogy decision that Eliezer got right. If you write your summary slogan in relativist language, people predictably take that as license to believe whatever they want without having to defend it. Whereas if you write your summary slogan in objectivist language—so that people know they don't have social permission to say that "it's subjective so I can't be wrong"—then you have some hope of sparking a useful discussion about the exact, precise ways that specific, definite things are, in fact, relative to other specific, definite things.


You texted last week, "Am I agitating you in a manner that isn't a good idea? If so, please tell me."

I'm tempted to speculate that I might be better at taking your feedback about how to behave reasonably rather than doing the "cowering and submission to whoever I'm currently most afraid of losing relationship-points with" thing, if you try to be gentler sometimes, hopefully without sacrificing clarity? (Concrete examples: statements like, "What the FUCK Zack!?!" are really hard on me, and I similarly took "We're not friends anymore already!" badly at the time because I was reading it as you damning me, but that one is mostly my poor reading comprehension under sleep deprivation, because in context you were almost certainly responding to my "everyone will still be friends" babbling.)

But, if you think that gentleness actually sacrifices clarity in practice, then I guess I could just do a better job of sucking it up.
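
A working note on the "strategy huddle II" idea quoted above, that category boundaries drawn for reasons unrelated to the statistical structure (because "people will be sad" otherwise) should be demonstrably less useful for making inferences. What follows is a minimal toy sketch of that claim, not the rigorous writeup contemplated in the email: it assumes NumPy, made-up cluster parameters, and the Sequences' "blegg"/"rube" example, and just compares how much residual uncertainty each way of drawing the boundary leaves about a feature you would want to predict.

```python
# Toy sketch (made-up numbers, not the rigorous writeup contemplated above):
# compare a category boundary drawn along the dimension that tracks the
# underlying clusters against one that reassigns some members by fiat, and see
# which one leaves you better able to infer a held-out feature.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two latent kinds ("rubes" and "bleggs", borrowing the Sequences' toy example),
# each generating two observable features.
kind = rng.integers(0, 2, size=n)                  # 0 = rube, 1 = blegg
color = rng.normal(loc=2.0 * kind, scale=0.5)      # the feature you categorize on
vanadium = rng.normal(loc=3.0 * kind, scale=0.5)   # the feature you want to infer

# Natural boundary: threshold the informative feature halfway between the means.
natural = (color > 1.0).astype(int)

# Gerrymandered boundary: same rule, except 30% of objects get declared bleggs
# by fiat, for reasons unrelated to their features.
fiat = rng.random(n) < 0.3
gerrymandered = np.where(fiat, 1, natural)

def residual_error(category: np.ndarray, target: np.ndarray) -> float:
    """Mean squared error when predicting `target` by each category's mean.

    A boundary that tracks the statistical structure leaves less residual
    variance: knowing the category tells you more about the target feature.
    """
    sse = 0.0
    for c in (0, 1):
        members = target[category == c]
        sse += np.sum((members - members.mean()) ** 2)
    return sse / len(target)

print("natural boundary:      ", round(residual_error(natural, vanadium), 3))
print("gerrymandered boundary:", round(residual_error(gerrymandered, vanadium), 3))
# Expected: the gerrymandered boundary yields a noticeably larger error, because
# membership no longer carries as much information about the cluster structure.
```

A real writeup would presumably state this information-theoretically (as reduced mutual information between category membership and the features) rather than as a squared-error comparison; the toy version is only meant to make the direction of the effect concrete.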