From: Zack M. Davis Date: Tue, 28 Nov 2023 23:59:06 +0000 (-0800) Subject: memoir: pt. 4 contiguous draft X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=ee1626dbdf6ad8d3645a6f4ba0e8a10e21da9ca8;p=Ultimately_Untrue_Thought.git memoir: pt. 4 contiguous draft I have a few more potential TODO items, but I think I should have my editors take a swing at this now? --- diff --git a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md index 79ed97e..47663e2 100644 --- a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md +++ b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md @@ -153,7 +153,7 @@ But the post seems to suggest that the motive isn't simply to avoid ambiguity. Y What does the "tossed into a bucket" metaphor refer to, though? I can think of many things that might be summarized that way, and my sympathy for the one who does not like to be tossed into a bucket depends on exactly what real-world situation is being mapped to the bucket. -If we're talking about overt gender role enforcement attempts—things like, "You're a girl, therefore you need to learn to keep house for your future husband", or "You're a man, therefore you need to toughen up"—then indeed, I strongly support people who don't want to be tossed into that kind of bucket. +If we're talking about overt gender role enforcement attempts—things like, "You're a girl, therefore you need to learn to keep house for your future husband," or "You're a man, therefore you need to toughen up"—then indeed, I strongly support people who don't want to be tossed into that kind of bucket. (There are [historical reasons for the buckets to exist](/2020/Jan/book-review-the-origins-of-unfairness/), but I'm eager to bet on modern Society being rich enough and smart enough to either forgo the buckets, or at least let people opt out of the default buckets without causing too much trouble.) @@ -179,7 +179,7 @@ But extending that to the "would get hair surgery if it were safer" case is absu Unless, at some level, Eliezer Yudkowsky doesn't expect his followers to deal with facts? -Maybe the problem is easier to see in the context of a non-gender example. [My previous hopeless ideological war—before this one—was against the conflation of _schooling_ and _education_](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/): I hated being tossed into the Student Bucket, as it would be assigned by my school course transcript, or perhaps at all. +Maybe the problem is easier to see in the context of a non-gender example. My previous [hopeless ideological war](/2020/Feb/if-in-some-smothering-dreams-you-too-could-pace/)—before this one—was [against the conflation of _schooling_ and _education_](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/): I hated being tossed into the Student Bucket, as it would be assigned by my school course transcript, or perhaps at all. I sometimes describe myself as mildly "gender dysphoric", because our culture doesn't have better widely-understood vocabulary for my [beautiful pure sacred self-identity thing](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#beautiful-pure-sacred-self-identity), but if we're talking about suffering and emotional distress, my "student dysphoria" was vastly worse than any "gender dysphoria" I've ever felt. 
@@ -346,7 +346,7 @@ To his credit, he will admit that he's only willing to address a selected subset Counterarguments aren't completely causally inert: if you can make an extremely strong case that Biological Sex Is Sometimes More Relevant Than Subjective Gender Identity (Such That Some People Perceive an Interest in Using Language Accordingly), Yudkowsky will put some effort into coming up with some ingenious excuse for why he _technically_ never said otherwise, in ways that exhibit generally rationalist principles. But at the end of the day, Yudkowsky is going to say what he needs to say in order to protect his reputation with progressives, as is sometimes personally prudent. -Even if one were to agree with this description of Yudkowsky's behavior, it doesn't immediately follow that Yudkowsky is making the wrong decision. Again, "bad faith" is meant as a literal description that makes predictions about behavior, not a contentless attack—maybe there are circumstances in which engaging some amount of bad faith is the right thing to do, given the constraints one faces! For example, when talking to people on Twitter with a very different ideological background from me, I sometimes anticipate that if my interlocutor knew what I was actually thinking, they wouldn't want to talk to me, so I occasionally engage in a bit of what could be called ["concern trolling"](https://geekfeminism.fandom.com/wiki/Concern_troll): I take care to word my replies in a way that makes it look like I'm more ideologically aligned with my interlocutor than I actually am. (For example, I [never say "assigned female/male at birth" in my own voice on my own platform](/2019/Sep/terminology-proposal-developmental-sex/), but I'll do it in an effort to speak my interlocutor's language.) I think of this as the minimal amount of strategic bad faith needed to keep the conversation going, to get my interlocutor to evaluate my argument on its own merits, rather than rejecting it for coming from an ideological enemy. In cases such as these, I'm willing to defend my behavio. There _is_ a sense in which I'm being deceptive by optimizing my language choice to make my interlocutor make bad guesses about my ideological alignment, but I'm comfortable with that amount and scope of deception in the service of correcting the distortion where I don't think my interlocutor _should_ be paying attention to my personal alignment. +Even if one were to agree with this description of Yudkowsky's behavior, it doesn't immediately follow that Yudkowsky is making the wrong decision. Again, "bad faith" is meant as a literal description that makes predictions about behavior, not a contentless attack—maybe there are circumstances in which engaging some amount of bad faith is the right thing to do, given the constraints one faces! For example, when talking to people on Twitter with a very different ideological background from me, I sometimes anticipate that if my interlocutor knew what I was actually thinking, they wouldn't want to talk to me, so I occasionally engage in a bit of what could be called ["concern trolling"](https://geekfeminism.fandom.com/wiki/Concern_troll): I take care to word my replies in a way that makes it look like I'm more ideologically aligned with my interlocutor than I actually am. (For example, I [never say "assigned female/male at birth" in my own voice on my own platform](/2019/Sep/terminology-proposal-developmental-sex/), but I'll do it in an effort to speak my interlocutor's language.) 
I think of this as the minimal amount of strategic bad faith needed to keep the conversation going, to get my interlocutor to evaluate my argument on its own merits, rather than rejecting it for coming from an ideological enemy. In cases such as these, I'm willing to defend my behavior. There _is_ a sense in which I'm being deceptive by optimizing my language choice to make my interlocutor make bad guesses about my ideological alignment, but I'm comfortable with that amount and scope of deception in the service of correcting the distortion where I don't think my interlocutor _should_ be paying attention to my personal alignment. That is, my bad faith concern-trolling gambit of deceiving people about my ideological alignment in the hopes of improving the discussion seems like something that improves the accuracy of our collective beliefs about the topic being argued about. (And the topic is presumably of greater collective interest than which "side" I personally happen to be on.) @@ -372,7 +372,7 @@ Similarly, when Yudkowsky [wrote in June 2021](https://twitter.com/ESYudkowsky/s And yet it seems worth noticing that shortly after proclaiming in March 2016 that he was "over 50% probability at this point that at least 20% of the ones with penises are actually women", he made [a followup post gloating about causing someone's transition](https://www.facebook.com/yudkowsky/posts/10154110278349228): -> Just checked my filtered messages on Facebook and saw, "Your post last night was kind of the final thing I needed to realize that I'm a girl." +> Just checked my filtered messages on Facebook and saw, "Your post last night was kind of the final thing I needed to realize that I'm a girl." > ==DOES ALL OF THE HAPPY DANCE FOREVER== In the comments, he added: @@ -515,37 +515,44 @@ Because, I did, actually, trust him. Back in 2009 when _Less Wrong_ was new, we ["Never go in against Eliezer Yudkowsky when anything is on the line"](https://www.greaterwrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts/comment/Aq9eWJmK6Liivn8ND), said one of the facts—and back then, I didn't think I would _need_ to. -What made him so trustworthy back then was that he wasn't asking for trust. He clearly _did_ think it was [unvirtuous to just shut up and listen to him](https://www.lesswrong.com/posts/t6Fe2PsEwb3HhcBEr/the-litany-against-gurus): "I'm not sure that human beings realistically _can_ trust and think at the same time," [he wrote](https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science). He was always arrogant, but it was tempered by the expectation of being held to account by arguments; he wrote about ["avoid[ing] sending [his] brain signals which tell it that [he was] high-status, just in case that cause[d his] brain to decide it [was] no longer necessary."](https://www.lesswrong.com/posts/cgrvvp9QzjiFuYwLi/high-status-and-stupidity-why). +Part of what made him so trustworthy back then was that he wasn't asking for trust. He clearly _did_ think it was [unvirtuous to just shut up and listen to him](https://www.lesswrong.com/posts/t6Fe2PsEwb3HhcBEr/the-litany-against-gurus): "I'm not sure that human beings realistically _can_ trust and think at the same time," [he wrote](https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science). 
He was always arrogant, but it was an arrogance tempered by the expectation of being held to account by arguments; he wrote about ["avoid[ing] sending [his] brain signals which tell it that [he was] high-status, just in case that cause[d his] brain to decide it [was] no longer necessary."](https://www.lesswrong.com/posts/cgrvvp9QzjiFuYwLi/high-status-and-stupidity-why).

-He visibly [cared about other people being in touch with reality](https://www.lesswrong.com/posts/anCubLdggTWjnEvBS/your-rationality-is-my-business). "I've informed a number of male college students that they have large, clearly detectable body odors. In every single case so far, they say nobody has ever told them that before," [he wrote](https://www.greaterwrong.com/posts/kLR5H4pbaBjzZxLv6/polyhacking/comment/rYKwptdgLgD2dBnHY). (I can testify that this is true: while sharing a car ride with Anna Salamon in 2011, he told me I had B.O.)
+He visibly [cared about other people being in touch with reality](https://www.lesswrong.com/posts/anCubLdggTWjnEvBS/your-rationality-is-my-business). Not just by writing the Sequences, but also in things like how he [reported](https://www.greaterwrong.com/posts/kLR5H4pbaBjzZxLv6/polyhacking/comment/rYKwptdgLgD2dBnHY), "I've informed a number of male college students that they have large, clearly detectable body odors. In every single case so far, they say nobody has ever told them that before." (I can testify that this is true: while sharing a car ride with Anna Salamon in 2011, he told me I had B.O.)

-Informing people about their body odor is above-and-beyond truth-telling behavior: it's an area where people would benefit from feedback (if you know, you can invest in deodorant), but aren't getting that feedback by default (because no one wants to be so rude as to tell people they smell bad).
+Telling people about their body odor represents an above-and-beyond devotion to truth-telling: it's an area where people would benefit from feedback (if you know, you can invest in deodorant), but aren't getting that feedback by default (because no one wants to be so rude as to tell people they smell bad).

-[TODO—
- * There's an obvious analogy to telling people they have B.O., to telling trans people that they don't pass.
- * It's not that I expect him to do it. (I don't do that, either.) But I'd expect him to _notice_ it as pro-Truth action. His pattern of public statements suggests he doesn't even notice!!
- * The EY who care about people being in touch with reality is dead now.
+Really, a lot of the epistemic heroism here is just in [noticing](https://www.lesswrong.com/posts/SA79JMXKWke32A3hG/original-seeing) the conflict between Feelings and Truth, between Politeness and Truth, rather than necessarily acting on it. If telling someone they smell bad would predictably meet harsh social punishment, I couldn't blame someone for choosing silence and safety over telling the truth, with the awareness that they were so choosing.

-In a Discord discussion, [he remarked](yudkowsky-i_might_have_made_a_fundamental_mistake.png):
+What I can and do blame someone for is actively fighting for Feelings while misrepresenting oneself as a soldier for Truth. There are a lot of trans people who would benefit from feedback that they don't pass, but aren't getting that feedback by default. I wouldn't necessarily expect Yudkowsky to provide it. (I don't, either.)
+
+I _would_ expect the person who wrote the Sequences not to insist that the important thing is the feelings of human beings who are people, describing reasons someone does not like to be tossed into a Smells Bad bucket, which don't bear on the factual question of whether someone smells bad.
+
+That person is dead now, even if his body is still breathing.
+
+I think he knows it. In a November 2022 Discord discussion, [he remarked](yudkowsky-i_might_have_made_a_fundamental_mistake.png):

> I might have made a fundamental mistake when I decided, long ago, that I was going to try to teach people how to reason so that they'd be able to process my arguments about AGI and AGI alignment through a mechanism that would discriminate true from false statements.
>
> maybe I should've just learned to persuade people of things instead

- * I got offended; I complained in Discord that this amounted to giving up on the concept of intellectual honesty ; davis-amounts-to-giving-up-on-the-concept-of-intellectual-honesty.png
- * He put a checkmark on it.
-]
+I got offended. I said that I felt like a devout Catholic watching the Pope say, "Jesus sucks; I hate God; I never should have told people about God."
+
+Later, I felt the need to write another message clarifying exactly what I found offensive. The problem wasn't the condescension of the suggestion that other people couldn't reason. People being annoyed at the condescension was fine. The _problem_ was that just learning to persuade people of things instead was giving up on the deep hidden-structure-of-normative-reasoning principle that the arguments you use to convince others should be the same as the ones you used to decide which conclusion to argue for. Giving up on that amounted to giving up on the _concept_ of intellectual honesty, choosing instead to become a propaganda AI that calculates what signals to output in order to manipulate an agentless world.
+
+[He put a check-mark emoji on it](davis-amounts-to-giving-up-on-the-concept-of-intellectual-honesty.png), indicating agreement or approval.
+
+If the caliph has lost his faith in the power of intellectual honesty, I can't necessarily say he's wrong on the empirical merits. It is written that our world is [beyond the reach of God](https://www.lesswrong.com/posts/sYgv4eYH82JEsTD34/beyond-the-reach-of-god); there's no law of physics that says honesty must yield better results than propaganda.

-The modern [Yudkowsky writes](https://twitter.com/ESYudkowsky/status/1096769579362115584):
+But since I haven't relinquished my faith, I have the responsibility to point it out when he attempts to wield his priestly authority as the author of the Sequences while not being consistently candid in his communications with his followers, hindering their ability to exercise their responsibilities. The modern Yudkowsky [writes](https://twitter.com/ESYudkowsky/status/1096769579362115584):

> When an epistemic hero seems to believe something crazy, you are often better off questioning "seems to believe" before questioning "crazy", and both should be questioned before shaking your head sadly about the mortal frailty of your heroes.

I notice that this advice leaves out a possibility: that the "seems to believe" is a deliberate show (judged to be personally prudent and not community-harmful), rather than a misperception on your part.
I am left shaking my head in a [weighted average of](https://www.lesswrong.com/posts/y4bkJTtG3s5d6v36k/stupidity-and-dishonesty-explain-each-other-away) sadness about the mortal frailty of my former hero, and disgust at his craven duplicity. **If Eliezer Yudkowsky can't _unambiguously_ choose Truth over Feelings, _then Eliezer Yudkowsky is a fraud_.**

-A few clarifications are in order here. First, as with "bad faith", this usage of "fraud" isn't a meaningless [boo light](https://www.lesswrong.com/posts/dLbkrPu5STNCBLRjr/applause-lights). I specifically and literally mean it in [_Merriam-Webster_'s sense 2.a., "a person who is not what he or she pretends to be"](https://www.merriam-webster.com/dictionary/fraud)—and I think I've made my case. Someone who disagrees with my assessment needs to argue that I've gotten some specific thing wrong, [rather than objecting on procedural grounds](https://www.lesswrong.com/posts/pkaagE6LAsGummWNv/contra-yudkowsky-on-epistemic-conduct-for-author-criticism).
+A few clarifications are in order here. First, as with "bad faith", this usage of "fraud" isn't a meaningless [boo light](https://www.lesswrong.com/posts/dLbkrPu5STNCBLRjr/applause-lights). I specifically and literally mean it in [_Merriam-Webster_'s sense 2.a., "a person who is not what he or she pretends to be"](https://www.merriam-webster.com/dictionary/fraud)—and I think I've made my case. Someone who disagrees with my assessment needs to argue that I've gotten some specific thing wrong, [rather than objecting to character attacks on procedural grounds](https://www.lesswrong.com/posts/pkaagE6LAsGummWNv/contra-yudkowsky-on-epistemic-conduct-for-author-criticism).

Second, it's a conditional: _if_ Yudkowsky can't unambiguously choose Truth over Feelings, _then_ he's a fraud. If he wanted to come clean—if he decided after all that he wanted it to be common knowledge in his Caliphate that gender-dysphoric people can stand what is true, because we are already enduring it—he could do so at any time. He probably won't. We've already seen from his behavior that he doesn't give a shit what people like me think of his intellectual integrity. Why would that change?

-Third, given that "fraud" is a semantically meaningful description and not just a emotive negative evaluation, I should stress that the evaluation is a separate step. If being a fraud were instrumentally useful for saving the world, maybe being a fraud would be the right thing to do? More on this in the next post. (To be continued.)
+Third, given that "fraud" is a semantically meaningful description and not just an emotive negative evaluation, I should stress that the evaluation is a separate step. If being a fraud were necessary for saving the world, maybe being a fraud would be the right thing to do? More on this in the next post. (To be continued.)

diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md
index a1e0786..d4d4f11 100644
--- a/notes/memoir-sections.md
+++ b/notes/memoir-sections.md
@@ -68,6 +68,7 @@ _ better context on "scam" &c. earlier (editor might catch?)
 _ cut words from descriptions of other posts! (editor might catch?)
 _ try to clarify Abram's categories view (Michael didn't get it) (but it still seems clear to me on re-read?)
+
 pt. 4 edit tier—
 ✓ "Ideology Is Not the Movement" mentions not misgendering
 ✓ mention Nick Bostrom email scandal (and his not appearing on the one-sentence CAIS statement)
@@ -77,12 +78,11 @@ pt.
4 edit tier— ✓ GreaterWrong over Less Wrong for comment links ✓ ending qualifications on "fraud" and whether it might be a good idea ✓ selective argumentation that's clearly labeled as such would be fine - -- if you only say good things about Republican candidates / Stalin's five year plan -- he used to be worthy of trust -_ the mailing list post noted it as a "common sexual fantasy" - +✓ if you only say good things about Republican candidates +✓ he used to be worthy of trust --- +_ the mailing list post noted it as a "common sexual fantasy" +_ Sept. 2020 clarification noted that a distinction should be made between _ emphasize that 2018 thread was policing TERF-like pronoun usage, not just disapproving of gender-based pronouns _ https://cognition.cafe/p/on-lies-and-liars _ cite more sneers; use a footnote to pack in as many as possible