From: Zack M. Davis
Date: Fri, 3 Nov 2023 01:06:31 +0000 (-0700)
Subject: memoir: pt. 4 editing
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=7632b7a2b3c678a9b7b975def788d3d4f09d389f;p=Ultimately_Untrue_Thought.git

memoir: pt. 4 editing
---

diff --git a/content/2023/a-hill-of-validity-in-defense-of-meaning.md b/content/2023/a-hill-of-validity-in-defense-of-meaning.md
index d09dfc5..106cc45 100644
--- a/content/2023/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/2023/a-hill-of-validity-in-defense-of-meaning.md
@@ -496,7 +496,7 @@ I asked the posse if this analysis was worth sending to Yudkowsky. Michael said

-----

-That week, former MIRI researcher Jessica Taylor joined our posse (being at an in-person meeting with Ben and Sarah and another friend on the seventeenth, and getting tagged in subsequent emails). I had met Jessica for the first time in March 2017, shortly after my psychotic break, and I had been part of the group trying to take care of her when she had [her own break in late 2017](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards), but other than that, we hadn't been particularly close.
+That week, former MIRI researcher Jessica Taylor joined our posse (being at an in-person meeting with Ben and Sarah and another friend on the seventeenth, and getting tagged in subsequent emails). I had met Jessica for the first time in March 2017, shortly after my psychotic break, and I had been part of the group trying to take care of her when she had [her own break in late 2017](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards), but other than that, we hadn't been particularly close. Significantly for political purposes, Jessica is trans. We didn't have to agree up front on all gender issues for her to see the epistemology problem with "... Not Man for the Categories", and to say that maintaining a narcissistic fantasy by controlling category boundaries wasn't what _she_ wanted, as a trans person. (On the seventeenth, when I lamented the state of a world that incentivized us to be political enemies, her response was, "Well, we could talk about it first.") Michael said that Jessica and I together had more moral authority than either of us alone.

diff --git a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
index 6677f8c..7a74617 100644
--- a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
+++ b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
@@ -366,7 +366,7 @@ Depending on the details, maybe—but you probably shouldn't expect the victims

When someone else doesn't see the problem with Jane Austen's characters, Thellim [redoubles her determination to explain the problem](https://www.glowfic.com/replies/1592987#reply-1592987): "_She is not giving up that easily. Not on an entire planet full of people._"

-Thellim's horror at the fictional world of Jane Austen is basically how I feel about "trans" culture in the current year. It _actively discourages self-modeling!_ People who have cross-sex fantasies are encouraged to reify them into a gender identity which everyone else is supposed to unquestioningly accept. Obvious critical questions about what's actually going on etiologically, what it means for an identity to be true, _&c._ are strongly discouraged as hateful, hurtful, distressing, _&c._
+Thellim's horror at the fictional world of Jane Austen is basically how I feel about "trans" culture in the current year. It _actively discourages self-modeling!_ People who have cross-sex fantasies are encouraged to reify them into a gender identity which everyone else is supposed to unquestioningly accept. Obvious critical questions about what's actually going on etiologically, what it means for an identity to be true, _&c._ are strongly discouraged as hateful and hurtful.

The problem is _not_ that I think there's anything wrong with fantasizing about being the other sex, and wanting the fantasy to become real—just as Thellim's problem with _Pride and Prejudice_ is not that there's anything wrong with wanting to marry a suitable bachelor. These are perfectly respectable goals.

@@ -396,21 +396,21 @@ This is the part where Yudkowsky or his flunkies accuse me of being uncharitable

But the substance of my complaints is [not about Yudkowsky's conscious subjective narrative](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie). I don't have a lot of uncertainty about Yudkowsky's theory of himself, because he told us that, very clearly: "it is sometimes personally prudent and not community-harmful to post your agreement with Stalin about things you actually agree with Stalin about, in ways that exhibit generally rationalist principles, especially because people do _know_ they're living in a half-Stalinist environment." I don't doubt that that's [how the algorithm feels from the inside](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside).

-But my complaint is about the work the algorithm is _doing_ in Stalin's service, not about how it feels; I'm talking about a pattern of publicly visible behavior stretching over years. (Thus, "take actions" in favor of/against, rather than "be"; "exert optimization pressure in the direction of", rather than "try".) I agree that everyone has a story in which they don't look terrible, and that people mostly believe their own stories, but it does not therefore follow that no one ever does anything terrible.
+But my complaint is about the work the algorithm is _doing_ in Stalin's service, not about how it feels; I'm talking about a pattern of publicly visible _behavior_ stretching over years, not claiming to be a mind-reader. (Thus, "take actions" in favor of/against, rather than "be"; "exert optimization pressure in the direction of", rather than "try".) I agree that everyone has a story in which they don't look terrible, and that people mostly believe their own stories, but it does not therefore follow that no one ever does anything terrible.

-I agree that you won't have much luck yelling at the Other about how they must really be doing `terrible_thing`. (People get very invested in their own stories.) But if you have the _receipts_ of the Other repeatedly doing `terrible_thing` in public from 2016 to 2021, maybe yelling about it to _everyone else_ might help _them_ stop getting suckered by the Other's phony story.
+I agree that you won't have much luck yelling at the Other about how they must really be doing `terrible_thing`. (People get very invested in their own stories.) But if you have the _receipts_ of the Other repeatedly doing the thing in public from 2016 to 2021, maybe yelling about it to _everyone else_ might help _them_ stop getting suckered by the Other's empty posturing.
Let's recap.

-In January 2009, Yudkowsky published ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), essentially a revision of [a 2004 mailing list post responding to a man who said that after the Singularity, he'd like to make a female but "otherwise identical" copy of himself](https://archive.is/En6qW). "Changing Emotions" insightfully points out [the deep technical reasons why](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard) men who sexually fantasize about being women can't achieve their dream with forseeable technology—and not only that, but that the dream itself is conceptually confused: a man's fantasy-about-it-being-fun-to-be-a-woman isn't part of the female distribution; there's a sense in which it _can't_ be fulfilled.
+In January 2009, Yudkowsky published ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), essentially a revision of [a 2004 mailing list post responding to a man who said that after the Singularity, he'd like to make a female but "otherwise identical" copy of himself](https://archive.is/En6qW). "Changing Emotions" insightfully points out [the deep technical reasons why](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard) men who sexually fantasize about being women can't achieve their dream with foreseeable technology—and not only that, but that the dream itself is conceptually confused: a man's fantasy-about-it-being-fun-to-be-a-woman isn't part of the female distribution; there's a sense in which it _can't_ be fulfilled. It was a good post!

Though Yudkowsky was merely using the sex change example to illustrate [a more general point about the difficulties of applied transhumanism](https://www.lesswrong.com/posts/EQkELCGiGQwvrrp3L/growing-up-is-hard), "Changing Emotions" was hugely influential on me; I count myself much better off for having understood the argument.

-But later, in a March 2016 Facebook post, Yudkowsky [proclaimed that](https://www.facebook.com/yudkowsky/posts/10154078468809228) "for people roughly similar to the Bay Area / European mix, I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women."
+But seven years later, in a March 2016 Facebook post, Yudkowsky [proclaimed that](https://www.facebook.com/yudkowsky/posts/10154078468809228) "for people roughly similar to the Bay Area / European mix, I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women."

-This seemed like a huge and surprising reversal from the position articulated in "Changing Emotions"! The two posts weren't _necessarily_ inconsistent, _if_ you assumed gender identity is an objectively real property synonymous with "brain sex", and that "Changing Emotions"'s harsh (almost mocking) skepticism of the idea of true male-to-female sex change was directed at the sex-change fantasies of _cis_ men (with a male gender-identity/brain-sex), whereas the 2016 Facebook post was about _trans women_ (with a female gender-identity/brain-sex), which are a different thing.
+This seemed like a huge and surprising reversal from the position articulated in "Changing Emotions". The two posts weren't _necessarily_ inconsistent, if you assumed gender identity is a real property synonymous with "brain sex", and that the harsh (almost mocking) skepticism of the idea of true male-to-female sex change in "Changing Emotions" was directed at the erotic sex-change fantasies of _cis_ men (with a male gender-identity/brain-sex), whereas the 2016 Facebook post was about _trans women_ (with a female gender-identity/brain-sex), which are a different thing.
-But this potential unification seemed very dubious to me, especially if "actual" trans women were purported to be "at least 20% of the ones with penises" (!!) in some population. _After it's been pointed out_, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female but 'otherwise identical' copy of himself" and "guy in 2016 Berkeley who identifies as a trans woman" are the _same guy_. So in October 2016, [I wrote to Yudkowsky noting the apparent reversal and asking to talk about it](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#cheerful-price) (offering to pay $1000 under the [cheerful price protocol](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price)). Because of the privacy rules I'm adhering to in telling this Whole Dumb Story, I can't confirm or deny whether he accepted and any such conversation occured.
+But this potential unification seemed dubious to me, especially if trans women were purported to be "at least 20% of the ones with penises" (!!) in some population. After it's been pointed out, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female but 'otherwise identical' copy of himself" and "guy in 2016 Berkeley who identifies as a trans woman" are the _same guy_. So in October 2016, [I wrote to Yudkowsky noting the apparent reversal and asking to talk about it](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#cheerful-price). Because of the privacy rules I'm adhering to in telling this Whole Dumb Story, I can't confirm or deny whether any such conversation occurred.

Then, in November 2018, while criticizing people who refuse to use trans people's preferred pronouns, Yudkowsky proclaimed that "Using language in a way _you_ dislike, openly and explicitly and with public focus on the language and its meaning, is not lying" and that "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning". But _that_ seemed like a huge and surprising reversal from the position articulated in ["37 Ways Words Can Be Wrong"](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong). After attempts to clarify via email failed, I eventually wrote ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) to explain the relevant error in general terms, and Yudkowsky would eventually go on to [clarify his position in September 2020](https://www.facebook.com/yudkowsky/posts/10158853851009228).

@@ -418,29 +418,31 @@ But then in February 2021, he reopened the discussion to proclaim that "the simp

End recap.

-At this point, the nature of the game is very clear. Yudkowsky wants to make sure he's on peaceful terms with the progressive _Zeitgeist_, subject to the constraint of not saying anything he knows to be false. Meanwhile, I want to actually make sense of what's actually going on in the world as regards to sex and gender, because _I need the correct answer to decide whether or not to cut my dick off_.
+At this point, the nature of the game is very clear. Yudkowsky wants to make sure he's on peaceful terms with the progressive _Zeitgeist_, subject to the constraint of [not writing any sentences he knows to be false](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases#2__The_law_of_no_literal_falsehood_). Meanwhile, I want to make sense of what's actually going on in the world as regards sex and gender, because _I need the correct answer to decide whether or not to cut my dick off_.

-On "his turn", he comes up with some pompous proclamation that's very obviously optimized to make the "pro-trans" faction look smart and good and make the "anti-trans" faction look dumb and bad, "in ways that exhibit generally rationalist principles."
+On "his turn", he comes up with some pompous proclamation that's obviously optimized to make the "pro-trans" faction look smart and good and make the "anti-trans" faction look dumb and bad, "in ways that exhibit generally rationalist principles."

-On "my turn", I put in an _absurd_ amount of effort explaining in exhaustive, _exhaustive_ detail why Yudkowsky's pompous proclamation, while [not technically saying making any unambiguously "false" atomic statements](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly), was _substantively misleading_ as constrated to what any serious person would say if they were actually trying to make sense of the world without worrying what progressive activists would think of them.
+On "my turn", I put in an absurd amount of effort explaining in exhaustive, _exhaustive_ detail why Yudkowsky's pompous proclamation, while [not technically making any unambiguously false atomic statements](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly), was substantively misleading compared to what any serious person would say if they were trying to make sense of the world without worrying what progressive activists would think of them.

-At the start, I _never_ expected to end up arguing about something so _trivial_ as the minutiae of pronoun conventions (which no one would care about if historical contingencies of the evolution of the English language hadn't made them a Schelling point and typographical attack surface for things people do care about). The conversation only ended up here after a series of derailings. At the start, I was _trying_ to say something substantive about the psychology of straight men who wish they were women.
+At the start, I never expected to end up arguing about something so trivial as the minutiae of pronoun conventions (which no one would care about if historical contingencies of the evolution of the English language hadn't made them a Schelling point for things people do care about). The conversation only ended up here after a series of derailings. At the start, I was trying to say something substantive about the psychology of straight men who wish they were women.
-In the context of AI alignment theory, Yudkowsky has written about a "nearest unblocked strategy" phenomenon: if you directly prevent an agent from accomplishing a goal via some plan that you find undesirable, the agent will search for ways to route around that restriction, and probably find some plan that you find similarly undesirable for similar reasons. +In the context of AI alignment theory, Yudkowsky has written about a "nearest unblocked strategy" phenomenon: if you directly prevent an agent from accomplishing a goal via some plan that you find undesirable, the agent will search for ways to route around that restriction, and probably find some plan that you find similarly undesirable for broadly similar reasons. Suppose you developed an AI to [maximize human happiness subject to the constraint of obeying explicit orders](https://arbital.greaterwrong.com/p/nearest_unblocked#exampleproducinghappiness). It might first try administering heroin to humans. When you order it not to, it might switch to administering cocaine. When you order it to not use any of a whole list of banned happiness-producing drugs, it might switch to researching new drugs, or just _pay_ humans to take heroin, _&c._ -It's the same thing with Yudkowsky's political risk minimization subject to the constraint of not saying anything he knows to be false. First he comes out with ["I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228) (March 2016). When you point out that [that's not true](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), then the next time he revisits the subject, he switches to ["you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning"](https://archive.is/Iy8Lq) (November 2018). When you point out that [_that's_ not true either](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong), he switches to "It is Shenanigans to try to bake your stance on how clustered things are [...] _into the pronoun system of a language and interpretation convention that you insist everybody use_" (February 2021). When you point out [that's not what's going on](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/), he switches to ... I don't know, but he's a smart guy; in the unlikely event that he sees fit to respond to this post, I'm sure he'll be able to think of _something_—but at this point, _I have no reason to care_. Talking to Yudkowsky on topics where getting the right answer would involve acknowledging facts that would make you unpopular in Berkeley is a _waste of everyone's time_; he has a [bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line) that doesn't involve trying to inform you. +It's the same thing with Yudkowsky's political risk minimization subject to the constraint of not saying anything he knows to be false. First he comes out with ["I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228) (March 2016). 
When you point out that his own pre–[Great Awokening](https://www.vox.com/2019/3/22/18259865/great-awokening-white-liberals-race-polling-trump-2020) writings [explain why that's not true](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), then the next time he revisits the subject, he switches to ["you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning"](https://archive.is/Iy8Lq) (November 2018). When you point out that his earlier writings also explain why [_that's_ not true either](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong), he switches to "It is Shenanigans to try to bake your stance on how clustered things are [...] _into the pronoun system of a language and interpretation convention that you insist everybody use_" (February 2021). When you point out that [that's not what's going on](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/), he switches to ... I don't know, but he's a smart guy; in the unlikely event that he sees fit to respond to this post, I'm sure he'll be able to think of something—but at this point, _I have no reason to care_. Talking to Yudkowsky on topics where getting the right answer would involve acknowledging facts that would make you unpopular in Berkeley is a waste of everyone's time; he has a [bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line) that doesn't involve trying to inform you. Accusing one's interlocutor of bad faith is frowned upon for a reason. We would prefer to live in a world where we have intellectually fruitful object-level discussions under the assumption of good faith, rather than risk our fora degenerating into an acrimonious brawl of accusations and name-calling, which is unpleasant and (more importantly) doesn't make any intellectual progress. I, too, would prefer to have a real object-level discussion under the assumption of good faith. -Accordingly, I tried the object-level good-faith argument thing _first_. I tried it for _years_. But at some point, I think I should be _allowed to notice_ the nearest-unblocked-strategy game which is _very obviously happening_ if you look at the history of what was said. I think there's _some_ number of years and _some_ number of thousands of words of litigating the object-level _and_ the meta level after which there's nothing left for me to do but jump up to the meta-meta level and explain, to anyone capable of hearing it, why in this case I think I've accumulated enough evidence for the assumption of good faith to have been _empirically falsified_. +Accordingly, I tried the object-level good-faith argument thing _first_. I tried it for _years_. But at some point, I think I should be allowed to notice the nearest-unblocked-strategy game which is obviously happening if you look at the history of what was said. I think there's some number of years and some number of thousands of words[^wordcounts] of litigating the object level (about gender) and the meta level (about the philosophy of categorization) after which there's nothing left for me to do but jump up to the meta-meta level of politics and explain, to anyone capable of hearing it, why I think I've accumulated enough evidence for the assumption of good faith to have been empirically falsified.[^symmetrically-not-assuming-good-faith] -(Obviously, if we're crossing the Rubicon of abandoning the norm of assuming good faith, it needs to be abandoned symmetrically. 
I _think_ I'm doing a _pretty good_ job of adhering to standards of intellectual conduct and being transparent about my motivations, but I'm definitely not perfect, and, unlike Yudkowsky, I'm not so absurdly mendaciously arrogant to claim "confidence in my own ability to independently invent everything important" (!) about my topics of interest. If Yudkowsky or anyone else thinks they _have a case_ based on my behavior that _I'm_ being culpably intellectually dishonest, they of course have my blessing and encouragement to post it for the audience to evaluate.)

+[^wordcounts]: ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (2018), ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) (2019), and ["Unnatural Categories Are Optimized for Deception"](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception) (2021) total over 20,000 words.

-What makes all of this especially galling is the fact that _all of my heretical opinions are literally just Yudkowsky's opinions from the 'aughts!_ My whole thing about how changing sex isn't possible with existing or forseeable technology because of how complicated humans (and therefore human sex differences) are? Not original to me! I [filled in a few technical details](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard), but again, this was _in the Sequences_ as ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). My thing about how you can't define concepts any way you want because there are mathematical laws governing which category boundaries [compress](https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length) your [anticipated experiences](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences)? Not original to me! I [filled in](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [a few technical details](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), but [_we had a whole Sequence about this._](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)

+[^symmetrically-not-assuming-good-faith]: Obviously, if we're crossing the Rubicon of abandoning the norm of assuming good faith, it needs to be abandoned symmetrically. I _think_ I'm doing a pretty good job of adhering to standards of intellectual conduct and being transparent about my motivations, but I'm definitely not perfect, and, unlike Yudkowsky, I'm not so absurdly mendaciously arrogant as to claim "confidence in my own ability to independently invent everything important" (!) about my topics of interest. If Yudkowsky or anyone else thinks they _have a case_ based on my behavior that _I'm_ being culpably intellectually dishonest, they of course have my blessing and encouragement to post it for the audience to evaluate.

-Seriously, do you think I'm smart enough to come up with all of this indepedently? I'm not! I ripped it all off from Yudkowsky back in the 'aughts _when he still gave a shit about telling the truth_. (Actively telling the truth, and not just technically not lying.) The things I'm hyperfocused on that he thinks are politically impossible to say in the current year, are almost entirely things he _already said_, that anyone could just look up!
+What makes all of this especially galling is the fact that _all of my heretical opinions are literally just Yudkowsky's opinions from the 'aughts!_ My whole thing about how changing sex isn't possible with existing or foreseeable technology because of how complicated humans (and therefore human sex differences) are? Not original to me! I [filled in a few technical details](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard), but again, this was in the Sequences as ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). My thing about how you can't define concepts any way you want because there are mathematical laws governing which category boundaries [compress](https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length) your [anticipated experiences](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences)? Not original to me! I [filled in](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [a few technical details](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), but [_we had a whole Sequence about this._](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)

+Seriously, do you think I'm smart enough to come up with all of this independently? I'm not! I ripped it all off from Yudkowsky back in the 'aughts _when he still gave a shit about telling the truth_. (Actively telling the truth, and not just technically not lying.) The things I'm hyperfocused on that he thinks are politically impossible to say in the current year, are almost entirely things he already said, that anyone could just look up!

I guess the point is that the egregore doesn't have the reading comprehension for that?—or rather, the egregore has no reason to care about the past; if you get tagged by the mob as an Enemy, your past statements will get dug up as evidence of foul present intent, but if you're doing a good enough job of playing the part today, no one cares what you said in 2009?

@@ -448,17 +450,17 @@ Does ... does he expect the rest of us not to _notice_? Or does he think that "e

But I don't, think that everybody knows. And I'm not, giving up that easily. Not on an entire subculture full of people.

-Yudkowsky [defends his behavior](https://twitter.com/ESYudkowsky/status/1356812143849394176):
+Yudkowsky [defended his behavior in February 2021](https://twitter.com/ESYudkowsky/status/1356812143849394176):

> I think that some people model civilization as being in the middle of a great battle in which this tweet, even if true, is giving comfort to the Wrong Side, where I would not have been as willing to tweet a truth helping the Right Side. From my perspective, this battle just isn't that close to the top of my priority list. I rated nudging the cognition of the people-I-usually-respect, closer to sanity, as more important; who knows, those people might matter for AGI someday. And the Wrong Side part isn't as clear to me either.

-There are a number of things that could be said to this,[^number-of-things] but most importantly: the battle that matters—the battle with a Right Side and a Wrong Side—isn't "pro-trans" _vs._ "anti-trans". (The central tendency of the contemporary trans rights movement is firmly on the Wrong Side, but that's not the same thing as all trans people as individuals.) That's why Jessica joined our posse to try to argue with Yudkowsky in early 2019. (She wouldn't have, if my objection had been, "trans is Wrong; trans people Bad".) That's why Somni—one of the trans women who [infamously protested the 2019 CfAR reunion](https://www.ksro.com/2019/11/18/new-details-in-arrests-of-masked-camp-meeker-protesters/) for (among other things) CfAR allegedly discriminating against trans women—[understands what I've been saying](https://somnilogical.tumblr.com/post/189782657699/legally-blind).
+There are a number of things that could be said to this,[^number-of-things] but most importantly: the battle that matters—the battle with a Right Side and a Wrong Side—isn't "pro-trans" _vs._ "anti-trans". (The central tendency of the contemporary trans rights movement is firmly on the Wrong Side, but that's not the same thing as all trans people as individuals.) That's why Jessica Taylor [joined our posse to try to argue with Yudkowsky in early 2019](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#jessica-joins). (She wouldn't have, if my objection had been, "trans is Wrong; trans people Bad.") That's why Somni—one of the trans women who [infamously protested the 2019 CfAR reunion](https://www.ksro.com/2019/11/18/new-details-in-arrests-of-masked-camp-meeker-protesters/) for (among other things) CfAR allegedly discriminating against trans women—[understands what I've been saying](https://somnilogical.tumblr.com/post/189782657699/legally-blind).

[^number-of-things]: Note the striking contrast between ["A Rational Argument"](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument), in which the Yudkowsky of 2007 wrote that a campaign manager "crossed the line [between rationality and rationalization] at the point where you considered whether the questionnaire was favorable or unfavorable to your candidate, before deciding whether to publish it"; and these 2021 Tweets, in which Yudkowsky seems completely nonchalant about "not hav[ing] been as willing to tweet a truth helping" one side of a cultural dispute, because "this battle just isn't that close to the top of [his] priority list". Well, sure! Any hired campaign manager could say the same: helping the electorate make an optimally informed decision just isn't that close to the top of their priority list, compared to getting paid.

-	Yudkowsky's claim to have been focused on nudging people's cognition towards sanity seems incredibly dubious: if you're focused on sanity, you should be spontaneously noticing sanity errors on both political sides. (Moreover, if you're living in what you yourself describe as a "half-Stalinist environment", you should expect your social environment to make proportionately _more_ errors on the "pro-Stalin" side.) As for the rationale that "those people might matter to AGI someday", judging by local demographics, it seems much more likely to apply to trans women themselves, than their critics!
+	Yudkowsky's claim to have been focused on nudging people's cognition towards sanity seems dubious: if you're focused on sanity, you should be spontaneously noticing sanity errors in both political camps. (Moreover, if you're living in what you yourself describe as a "half-Stalinist environment", you should expect your social environment to make proportionately more errors on the "pro-Stalin" side.) As for the rationale that "those people might matter to AGI someday", judging by local demographics, it seems much more likely to apply to trans women themselves, than their critics!
-The battle that matters—and I've been _very_ explicit about this, for years—is over this proposition eloquently [stated by Scott Alexander in November 2014](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (redacting the irrelevant object-level example):
+The battle that matters—and I've been very explicit about this, for years—is over this proposition eloquently [stated by Scott Alexander in November 2014](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (redacting the irrelevant object-level example):

> I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life. There's no rule of rationality saying that I shouldn't, and there are plenty of rules of human decency saying that I should.

@@ -468,17 +470,17 @@ In order to take the side of Truth, you need to be able to tell Joshua Norton th

You need to be able to tell a prideful autodidact that the fact that he's failing quizzes in community college differential equations class is evidence that his study methods aren't doing what he thought they were (even if it hurts him).

-And you need to be able to say, in public, that trans women are male and trans men are female _with respect to_ a female/male "sex" concept that encompasses the many traits that aren't affected by contemporary surgical and hormonal interventions (even if it hurts someone who does not like to be tossed into a Male Bucket or a Female Bucket as it would be assigned by their birth certificate, and—yes—even if it probabilistically contributes to that person's suicide).
+And you need to be able to say, in public, that trans women are male and trans men are female with respect to a concept of binary sex that encompasses the many traits that aren't affected by contemporary surgical and hormonal interventions (even if it hurts someone who does not like to be tossed into a Male Bucket or a Female Bucket as it would be assigned by their birth certificate, and—yes—even if it probabilistically contributes to that person's suicide).

If you don't want to say those things because hurting people is wrong, then you have chosen Feelings.

Scott Alexander chose Feelings, but I can't really hold that against him, because Scott is [very explicit about only speaking in the capacity of some guy with a blog](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/). You can tell from his writings that he never wanted to be a religious leader; it just happened to him by accident because he writes faster than everyone else. I like Scott. Scott is alright. I feel sad that such a large fraction of my interactions with him over the years have taken such an adversarial tone.

-Eliezer Yudkowsky ... did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that he consciously knows to be unambiguously false. And the reason I can hold it against _him_ is because Eliezer Yudkowsky does not identify as just some guy with a blog. Eliezer Yudkowsky is _absolutely_ trying to be a religious leader. He markets himself as a master of the hidden Bayesian structure of cognition, who ["aspires to make sure [his] departures from perfection aren't noticeable to others"](https://twitter.com/ESYudkowsky/status/1384671335146692608).
+Eliezer Yudkowsky did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that he consciously knows to be unambiguously false. And the reason I can hold it against _him_ is because Eliezer Yudkowsky does not identify as just some guy with a blog. Eliezer Yudkowsky is _absolutely_ trying to be a religious leader. He markets himself as a master of the hidden Bayesian structure of cognition, who ["aspires to make sure [his] departures from perfection aren't noticeable to others"](https://twitter.com/ESYudkowsky/status/1384671335146692608).

-In making such boasts, I think Yudkowsky is opting in to being held to higher standards than other mortals. If Scott Alexander gets something wrong when I was trusting him to be right, that's disappointing, but I'm not the victim of false advertising, because Scott Alexander doesn't _claim_ to be anything more than some guy with a blog. If I trusted him more than that, that's on me.
+In making such boasts, I think Yudkowsky is opting in to being held to higher standards than other mortals. If Scott Alexander gets something wrong when I was trusting him to be right, that's disappointing, but I'm not the victim of false advertising, because Scott Alexander doesn't claim to be anything more than some guy with a blog. If I trusted him more than that, that's on me.

-If Eliezer Yudkowsky gets something wrong when I was trusting him to be right, _and_ refuses to acknowledge corrections (in the absence of an unsustainable 21-month nagging campaign) _and_ keeps inventing new galaxy-brained ways to be wrong in the service of his political agenda of being seen to agree with Stalin without technically lying, then I think I _am_ the victim of false advertising. His marketing bluster was optimized to trick people like me into trusting him, even if my being _dumb enough to believe him_ is on me.
+If Eliezer Yudkowsky gets something wrong when I was trusting him to be right, and refuses to acknowledge corrections (in the absence of an unsustainable 21-month nagging campaign) and keeps inventing new galaxy-brained ways to be wrong in the service of his political agenda of being seen to agree with Stalin without technically lying, then I think I _am_ the victim of false advertising. His marketing bluster was optimized to trick people like me into trusting him, even if my being dumb enough to believe him is on me.

Because, I did, actually, trust him. Back in 'aught-nine when _Less Wrong_ was new, we had a thread of hyperbolic ["Eliezer Yudkowsky Facts"](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts) (in the style of [Chuck Norris facts](https://en.wikipedia.org/wiki/Chuck_Norris_facts)). And of course, it was a joke, but the hero-worship that made the joke funny was real. (You wouldn't make those jokes for your community college physics teacher, even if he was a good teacher.)