From: M. Taylor Saotome-Westlake
Date: Sun, 24 Jul 2022 00:55:01 +0000 (-0700)
Subject: long confrontation 12: back to Austin, vent marketing anxiety
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=a0c980c871e502ed329305af3d38e2434eeff71a;p=Ultimately_Untrue_Thought.git

long confrontation 12: back to Austin, vent marketing anxiety

"DIMND"
---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index 7c46406..1971233 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -8,7 +8,7 @@ Status: draft

 >
 > —Zora Neale Hurston

-Recapping our story so far—in a previous post, ["Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems"](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/), I told the part about how I've "always" (since puberty) had this obsessive sexual fantasy about being magically transformed into a woman and also thought it was immoral to believe in psychological sex differences, until I got set straight by these really great Sequences of blog posts by Eliezer Yudkowsky, which taught me (incidentally, among many other things) how absurdly unrealistic my obsessive sexual fantasy was given merely human-level technology, and that it's actually immoral _not_ to believe in psychological sex differences given that psychological sex differences are actually real. 
In a subsequent post, ["Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer"](/2022/TODO/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/), I told the part about how, in 2016, everyone in my systematically-correct-reasoning community up to and including Eliezer Yudkowsky suddenly started claiming that guys like me might actually be women in some unspecified metaphysical sense, and insisted on playing dumb when confronted with alternative explanations of the relevant phenomena or even just asked what that means, until I eventually had a stress- and sleep-deprivation-induced delusional nervous breakdown, got sent to psychiatric prison once, and then went crazy again a couple months later.
+Recapping our story so far—in a previous post, ["Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems"](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/), I told the part about how I've "always" (since puberty) had this obsessive sexual fantasy about being magically transformed into a woman and also thought it was immoral to believe in psychological sex differences, until I got set straight by these really great Sequences of blog posts by Eliezer Yudkowsky, which taught me (incidentally, among many other things) how absurdly unrealistic my obsessive sexual fantasy was given merely human-level technology, and that it's actually immoral _not_ to believe in psychological sex differences given that psychological sex differences are actually real. 
In a subsequent post, ["Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer"](/2022/TODO/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/), I told the part about how, in 2016, everyone in my systematically-correct-reasoning community up to and including Eliezer Yudkowsky suddenly started claiming that guys like me might actually be women in some unspecified metaphysical sense, and insisted on playing dumb when confronted with alternative explanations of the relevant phenomena or even just asked what that means, until I eventually had a stress- and sleep-deprivation-induced delusional nervous breakdown, got sent to psychiatric jail once, and then went crazy again a couple months later.

That's not the really egregious part of the story. The thing is, psychology is a complicated empirical science: no matter how "obvious" I might think something is, I have to admit that I could be wrong—[not just as an obligatory profession of humility, but _actually_ wrong in the real world](https://www.lesswrong.com/posts/GrDqnMjhqoxiqpQPw/the-proper-use-of-humility). If my fellow rationalists merely weren't sold on the autogynephilia and transgender thing, I would certainly be disappointed, but it's definitely not grounds to denounce the entire community as a failure or a fraud. And indeed, I _did_ [end up moderating my views](/2022/Jul/the-two-type-taxonomy-is-a-useful-approximation-for-a-more-detailed-causal-model/) compared to the extent to which my thinking in 2016–7 took Blanchard–Bailey–Lawrence as received truth. At the same time, I don't particularly regret saying what I said in 2016–7, because Blanchard–Bailey–Lawrence is still very obviously _directionally_ correct compared to the nonsense everyone else was telling me. 
@@ -56,11 +56,9 @@ Some of the replies tried to explain the problem—and Yudkowsky kept refusing to u

 >
 > But even ignoring that, you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning.

-Dear reader, this is the moment where I _flipped the fuck out_. If the "rationalists" didn't [click](https://www.lesswrong.com/posts/R3ATEWWmBhMhbY2AL/that-magical-click) on the autogynephilia thing, that was disappointing, but forgivable. If the "rationalists", on Scott Alexander's authority, were furthermore going to get our own philosophy of language wrong over this, that was—I don't want to say _forgivable_ exactly, but it was—tolerable. I had learned from my misadventures the previous year that I had been wrong to trust "the community" as a reified collective and put it on a pedestal—that had never been a reasonable mental stance in the first place.
+Dear reader, this is the moment where I _flipped the fuck out_. Let me explain.

-But trusting Eliezer Yudkowsky—whose writings, more than any other single influence, had made me who I am—_did_ seem reasonable. If I put him on a pedestal, it was because he had earned the pedestal, for supplying me with my criteria for how to think—including, as a trivial special case, how to think about what things to put on pedestals.
-
-So if the rationalists were going to get our own philosophy of language wrong over this _and Eliezer Yudkowsky was in on it_ (!!!), that was intolerable. 
This "hill of meaning in defense of validity" proclamation was just such a striking contrast to the Eliezer Yudkowsky I remembered—the Eliezer Yudkowsky whom I had variously described as having "taught me everything I know" and "rewritten my personality over the internet"—who didn't hesitate to criticize uses of language that he thought were failing to carve reality at the joints, even going so far as to [call them "wrong"](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong): +This "hill of meaning in defense of validity" proclamation was just such a striking contrast to the Eliezer Yudkowsky I remembered—the Eliezer Yudkowsky whom I had variously described as having "taught me everything I know" and "rewritten my personality over the internet"—who didn't hesitate to criticize uses of language that he thought were failing to carve reality at the joints, even going so far as to [call them "wrong"](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong): > [S]aying "There's no way my choice of X can be 'wrong'" is nearly always an error in practice, whatever the theory. You can always be wrong. Even when it's theoretically impossible to be wrong, you can still be wrong. There is never a Get-Out-Of-Jail-Free card for anything you do. That's life. @@ -160,11 +158,31 @@ If you were Alice, and a _solid supermajority_ of your incredibly smart, incredi It makes sense that Yudkowsky might perceive political constraints on what he might want to say in public. (Despite my misgivings, and the fact that it's basically a running joke at this point, this blog is still published under a pseudonym; it would be hypocritical of me to accuse someone of cowardice about what they're willing to attach their real name to, especially when you look at what happened to the _other_ Harry Potter author.) 
-But if Yudkowsky didn't want to get into a distracting political fight about a topic, then maybe the responsible thing to do would have been to just not say anything about the topic, rather than engaging with the _stupid_ version of the opposition and stonewalling with "That's a policy question" when people try to point out the problem?
+But if Yudkowsky didn't want to get into a distracting political fight about a topic, then maybe the responsible thing to do would have been to just not say anything about the topic, rather than engaging with the _stupid_ version of the opposition and stonewalling with "That's a policy question" when people try to point out the problem?!
+
+------
+
+... I didn't have all of that criticism collected so legibly on 28 November 2018. But that, basically, is why I _flipped the fuck out_ when I saw that Twitter thread. If the "rationalists" didn't [click](https://www.lesswrong.com/posts/R3ATEWWmBhMhbY2AL/that-magical-click) on the autogynephilia thing, that was disappointing, but forgivable. If the "rationalists", on Scott Alexander's authority, were furthermore going to get our own philosophy of language wrong over this, that was—I don't want to say _forgivable_ exactly, but it was—tolerable. I had learned from my misadventures the previous year that I had been wrong to trust "the community" as a reified collective and put it on a pedestal—that had never been a reasonable mental stance in the first place.
+
+But trusting Eliezer Yudkowsky—whose writings, more than any other single influence, had made me who I am—_did_ seem reasonable. If I put him on a pedestal, it was because he had earned the pedestal, for supplying me with my criteria for how to think—including, as a trivial special case, how to think about what things to put on pedestals.
+
+So if the rationalists were going to get our own philosophy of language wrong over this _and Eliezer Yudkowsky was in on it_ (!!!), that was intolerable. 
I remember going downstairs to confide in a senior engineer about the situation—not just the immediate impetus of this Twitter thread, but this whole thing + + + +a masculine guy, who by his manner + + + +not just the immediate impetus of + + + + + --------- -I was physically shaking. I remember going downstairs to confide in a senior engineer about the situation. But if Yudkowsky was _already_ stonewalling his Twitter followers, entering the thread myself didn't seem likely to help. (And I hadn't intended to talk about gender on that account yet, although that seemed less important given the present crisis.) +But if Yudkowsky was _already_ stonewalling his Twitter followers, entering the thread myself didn't seem likely to help. (And I hadn't intended to talk about gender on that account yet, although that seemed less important given the present crisis.) It seemed better to try to clear this up in private. I still had Yudkowsky's email address. I felt bad bidding for his attention over my gender thing _again_—but I had to do _something_. Hands trembling, I sent him an email asking him to read my ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), suggesting that it may qualify as an answer to his question about ["a page [he] could read to find a non-confused exclamation of how there's scientific truth at stake"](https://twitter.com/ESYudkowsky/status/1067482047126495232), and that, because I cared very much about correcting what I claim are confusions in my rationalist subculture, that I would be happy to pay up to $1000 for his time, and that, if he liked the post, he might consider Tweeting a link. 
diff --git a/notes/a-hill-marketing.txt b/notes/a-hill-marketing.txt
index b080f85..5a063d8 100644
--- a/notes/a-hill-marketing.txt
+++ b/notes/a-hill-marketing.txt
@@ -1,3 +1,22 @@
+I could use some politics/etiquette advice
+
+I want to publish (as a comment on the _Less Wrong_ linkpost for my forthcoming memoir) a summary of Why I Don't Trust Eliezer Yudkowsky's Intellectual Honesty, and promote the comment on social media
+
+and I'm kind of facing conflicting desiderata
+
+Right now, I have Yudkowsky blocked on Twitter. It seems wrong to leave him blocked while Tweet-promoting my comment, because that's attacking his reputation _behind his back_, which is dishonorable. ("A true friend stabs you in the front.")
+
+But if I unblock and tag him, then I'm effectively bidding for his attention, which I had said that I wasn't going to do anymore, which is the reason I have him blocked.
+
+(Really, that's _also_ why I'm publishing the comment. I tried talking to the guy _first_, a lot, before eventually concluding (after using up what I assume is my lifetime supply of Eliezer-bandwidth) that it's a waste of time. My final email to him said, "I think the efficient outcome here is that I tell the Whole Dumb Story on my blog and never bother you again", and even if that wasn't a _promise_, it seems healthier to basically stick to that—I mean, like, I said a few words to him about _Planecrash_ at the anti-smallpox party, but my position is that that doesn't count as "bothering him" in the relevant sense.)
+
+If he _wants_ to engage to defend his reputation, that's _great_—I would actually prefer that—but I don't want it to look like I'm trying to demand that, or threaten him into more engagement: rather, the function of the comment is to explain to _everyone else_ why I, in fact, don't trust him anymore, which is something I selfishly want them to know. 
+
+probably the right move is to unblock and tag, but then explain the "stabs you in the front" rationale in a followup Tweet?
+
+-----
+
 
 I'm attacking his reputation selfishly; he's _welcome_ to defend himself, if he wants, but I'm trying to do this in a way where I'm not _asking_ for that; I'm addressing the audience rather than him (because I said the efficient outcome is that I never bother him again)

Definitely in-bounds—

diff --git a/notes/a-hill-twitter-reply.md b/notes/a-hill-twitter-reply.md
index abc292a..088f07c 100644
--- a/notes/a-hill-twitter-reply.md
+++ b/notes/a-hill-twitter-reply.md
@@ -1,128 +1,8 @@
-So, now having a Twitter account, I was browsing Twitter in the bedroom at the rental house for the dayjob retreat, when I happened to come across [this thread by @ESYudkowsky](https://twitter.com/ESYudkowsky/status/1067183500216811521):
+28 November 2018

-> Some people I usually respect for their willingness to publicly die on a hill of facts, now seem to be talking as if pronouns are facts, or as if who uses what bathroom is necessarily a factual statement about chromosomes. Come on, you know the distinction better than that!
->
-> _Even if_ somebody went around saying, "I demand you call me 'she' and furthermore I claim to have two X chromosomes!", which none of my trans colleagues have ever said to me by the way, it still isn't a question-of-empirical-fact whether she should be called "she". It's an act.
->
-> In saying this, I am not taking a stand for or against any Twitter policies. I am making a stand on a hill of meaning in defense of validity, about the distinction between what is and isn't a stand on a hill of facts in defense of truth.
->
-> I will never stand against those who stand against lies. But changing your name, asking people to address you by a different pronoun, and getting sex reassignment surgery, Is. Not. Lying. You are _ontologically_ confused if you think those acts are false assertions. 
-Some of the replies tried to explain the problem—and Yudkowsky kept refusing to understand—
-> Using language in a way _you_ dislike, openly and explicitly and with public focus on the language and its meaning, is not lying. The proposition you claim false (chromosomes?) is not what the speech is meant to convey—and this is known to everyone involved, it is not a secret.
-—repeatedly:
-
-> You're mistaken about what the word means to you, I demonstrate thus: https://en.wikipedia.org/wiki/XX_male_syndrome
->
-> But even ignoring that, you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning.
-
-Dear reader, this is the moment where I _flipped the fuck out_. If the "rationalists" didn't [click](https://www.lesswrong.com/posts/R3ATEWWmBhMhbY2AL/that-magical-click) on the autogynephilia thing, that was disappointing, but forgivable. If the "rationalists", on Scott Alexander's authority, were furthermore going to get our own philosophy of language wrong over this, that was—I don't want to say _forgivable_ exactly, but it was—tolerable. I had learned from my misadventures the previous year that I had been wrong to trust "the community" as a reified collective and put it on a pedestal—that had never been a reasonable mental stance in the first place.
-
-But trusting Eliezer Yudkowsky—whose writings, more than any other single influence, had made me who I am—_did_ seem reasonable. If I put him on a pedestal, it was because he had earned the pedestal, for supplying me with my criteria for how to think—including, as a trivial special case, how to think about what things to put on pedestals.
-
-So if the rationalists were going to get our own philosophy of language wrong over this _and Eliezer Yudkowsky was in on it_ (!!!), that was intolerable. 
This "hill of meaning in defense of validity" proclamation was just such a striking contrast to the Eliezer Yudkowsky I remembered—the Eliezer Yudkowsky whom I had variously described as having "taught me everything I know" and "rewritten my personality over the internet"—who didn't hesitate to criticize uses of language that he thought were failing to carve reality at the joints, even going so far as to [call them "wrong"](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong): - -> [S]aying "There's no way my choice of X can be 'wrong'" is nearly always an error in practice, whatever the theory. You can always be wrong. Even when it's theoretically impossible to be wrong, you can still be wrong. There is never a Get-Out-Of-Jail-Free card for anything you do. That's life. - -[Similarly](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary): - -> Once upon a time it was thought that the word "fish" included dolphins. Now you could play the oh-so-clever arguer, and say, "The list: {Salmon, guppies, sharks, dolphins, trout} is just a list—you can't say that a list is _wrong_. I can prove in set theory that this list exists. So my definition of _fish_, which is simply this extensional list, cannot possibly be 'wrong' as you claim." -> -> Or you could stop playing nitwit games and admit that dolphins don't belong on the fish list. -> -> You come up with a list of things that feel similar, and take a guess at why this is so. But when you finally discover what they really have in common, it may turn out that your guess was wrong. It may even turn out that your list was wrong. -> -> You cannot hide behind a comforting shield of correct-by-definition. Both extensional definitions and intensional definitions can be wrong, can fail to carve reality at the joints. 
-
-One could argue the "Words can be wrong when your definition draws a boundary around things that don't really belong together" moral doesn't apply to Yudkowsky's new Tweets, which only mentioned pronouns and bathroom policies, not the [extensions of common nouns](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions).
-
-But this seems pretty unsatisfying in the context of his claim to ["not [be] taking a stand for or against any Twitter policies"](https://twitter.com/ESYudkowsky/status/1067185907843756032). One of the Tweets that had recently led to radical feminist Meghan Murphy getting [kicked off the platform](https://quillette.com/2018/11/28/twitters-trans-activist-decree/) read simply, ["Men aren't women tho."](https://archive.is/ppV86)
-
-If the extension of common words like 'woman' and 'man' is an issue of epistemic importance that rationalists should care about, then presumably so is Twitter's anti-misgendering policy—and if it _isn't_ (because you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning) then I'm not sure what's _left_ of the "Human's Guide to Words" sequence if the [37-part grand moral](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) needs to be retracted.
-
-I think it _is_ standing in defense of truth if I have an _argument_ for why my preferred word usage does a better job at "carving reality at the joints", and the one bringing my usage explicitly into question doesn't have such an argument. As such, I didn't see the _practical_ difference between "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning", and "I can define a word any way I want." About which, again, a previous Eliezer Yudkowsky had written:
-
-> ["It is a common misconception that you can define a word any way you like. [...] 
If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences) -> -> ["So that's another reason you can't 'define a word any way you like': You can't directly program concepts into someone else's brain."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions) -> -> ["When you take into account the way the human mind actually, pragmatically works, the notion 'I can define a word any way I like' soon becomes 'I can believe anything I want about a fixed set of objects' or 'I can move any object I want in or out of a fixed membership test'."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions) -> -> ["There's an idea, which you may have noticed I hate, that 'you can define a word any way you like'."](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels) -> -> ["And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying 'I can define a word any way I like'."](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression) -> -> ["Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences) -> -> ["And people are lazy. They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations) -> -> ["And this suggests another—yes, yet another—reason to be suspicious of the claim that 'you can define a word any way you like'. 
When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power."](https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words) -> -> ["I say all this, because the idea that 'You can X any way you like' is a huge obstacle to learning how to X wisely. 'It's a free country; I have a right to my own opinion' obstructs the art of finding truth. 'I can define a word any way I like' obstructs the art of carving reality at its joints. And even the sensible-sounding 'The labels we attach to words are arbitrary' obstructs awareness of compactness."](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes) -> -> ["One may even consider the act of defining a word as a promise to \[the\] effect [...] \[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace) - -One could argue that this is unfairly interpreting Yudkowsky's Tweets as having a broader scope than was intended—that Yudkowsky _only_ meant to slap down the specific false claim that using 'he' for someone with a Y chromosome is lying, without intending any broader implications about trans issues or the philosophy of language. It wouldn't be realistic or fair to expect every public figure to host a truly exhaustive debate on all related issues every time a fallacy they encounter in the wild annoys them enough for them to Tweet about that specific fallacy. - -However, I don't think this "narrow" reading is the most natural one. 
Yudkowsky had previously written of what he called [the fourth virtue of evenness](http://yudkowsky.net/rational/virtues/): "If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider." He had likewise written [of reversed stupidity](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence) (bolding mine): - -> **To argue against an idea honestly, you should argue against the best arguments of the strongest advocates**. Arguing against weaker advocates proves _nothing_, because even the strongest idea will attract weak advocates. - -Relatedly, Scott Alexander had written about how ["weak men are superweapons"](https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/): speakers often selectively draw attention to the worst arguments in favor of a position, in an attempt to socially discredit people who have better arguments for the position (which the speaker ignores). In the same way, by _just_ slapping down a weak man from the "anti-trans" political coalition without saying anything else in a similarly prominent location, Yudkowsky was liable to mislead his readers (who trusted him to argue against ideas honestly) into thinking that there were no better arguments from the "anti-trans" side. - -To be sure, it imposes a cost on speakers to not be able to Tweet about one specific annoying fallacy and then move on with their lives without the need for [endless disclaimers](http://www.overcomingbias.com/2008/06/against-disclai.html) about related but stronger arguments that they're _not_ addressing. But the fact that [Yudkowsky disclaimed that](https://twitter.com/ESYudkowsky/status/1067185907843756032) he wasn't taking a stand for or against Twitter's anti-misgendering policy demonstrates that he _didn't_ have an aversion to spending a few extra words to prevent the most common misunderstandings. 
-
-Given that, I have trouble reading the Tweets Yudkowsky published as anything other than an attempt to intimidate and delegitimize people who want to use language to reason about sex rather than gender identity. [For example](https://twitter.com/ESYudkowsky/status/1067490362225156096), deeper in the thread, Yudkowsky wrote:
-
-> The more technology advances, the further we can move people towards where they say they want to be in sexspace. Having said this we've said all the facts. Who competes in sports segregated around an Aristotelian binary is a policy question (that I personally find very humorous).
-
-Sure, _in the limit of arbitrarily advanced technology_, everyone could be exactly where they wanted to be in sexspace. Having said this, we have _not_ said all the facts relevant to decisionmaking in our world, where _we do not have arbitrarily advanced technology_. As Yudkowsky [acknowledges in the previous Tweet](https://twitter.com/ESYudkowsky/status/1067488844122021888), "Hormone therapy changes some things and leaves others constant." The existence of HRT does not itself take us into the Glorious Transhumanist Future where everyone is the sex they say they are.
-
-The _reason_ for having sex-segregated sports leagues is because the sport-relevant multivariate trait distributions of female bodies and male bodies are quite different. If you just had one integrated league, females wouldn't be competitive (in most sports, with some exceptions [like ultra-distance swimming](https://www.swimmingworldmagazine.com/news/why-women-have-beaten-men-in-marathon-swimming/)).
-
-It's not that females and males are exactly the same except males are 10% stronger on average (in which case, you might just shrug and accept unequal outcomes, the way we shrug and accept it that some competitors have better genes). 
Different traits have different relevance to different sports: women do better in ultraswimming _because_ that competition is sampling a corner of sportspace where body fat is an advantage. It really is an apples-to-oranges comparison, rather than "two populations of apples with different mean weight".
-
-Given the empirical reality of the different multivariate trait distributions, "Who are the best athletes _among females_" is a natural question for people to be interested in, and want separate sports leagues to determine. Including male people in female sports leagues undermines the point of having a separate female league, and [_empirically_, hormone replacement therapy after puberty](https://link.springer.com/article/10.1007/s40279-020-01389-3) [doesn't substantially change the picture here](https://bjsm.bmj.com/content/55/15/865).
-
-(Similarly, when conducting [automobile races](https://en.wikipedia.org/wiki/Auto_racing), you want there to be rules enforcing that all competitors have the same type of car for some common-sense-reasonable operationalization of "the same type", because a race between a sports car and a [moped](https://en.wikipedia.org/wiki/Moped) would be mostly measuring who has the sports car, rather than who's the better racer.)
-
-In light of these _empirical_ observations, Yudkowsky's suggestion that an ignorant commitment to an "Aristotelian binary" is the main reason someone might care about the integrity of women's sports, is revealed as an absurd strawman. This just isn't something any scientifically-literate person would write if they had actually thought about the issue _at all_, as contrasted to having _first_ decided (consciously or not) to bolster one's reputation among progressives by dunking on transphobes on Twitter, and wielding one's philosophy knowledge in the service of that political goal. 
The relevant empirical facts are _not subtle_, even if most people don't have the fancy vocabulary to talk about them in terms of "multivariate trait distributions".
-
-I spend a few paragraphs picking on the "sports segregated around an Aristotelian binary" remark because sports is a case where the relevant effect sizes are _so_ large as to make the point [hard for all but the most ardent gender-identity partisans to deny](/2017/Jun/questions-such-as-wtf-is-wrong-with-you-people/), but the point is very general. For example, the _function_ of sex-segregated bathrooms is to _protect females from males_, where "females" and "males" are natural clusters in configuration space that it makes sense to want words to refer to.
-
-Yudkowsky's claim to merely have been standing up for the distinction between facts and policy questions doesn't seem credible. It is, of course, true that pronoun and bathroom conventions are policy decisions rather than a matter of fact, but it's _bizarre_ to condescendingly point this out _as if it were the crux of contemporary trans-rights debates_. Conservatives and gender-critical feminists _know_ that trans-rights advocates aren't falsely claiming that trans women have XX chromosomes. If you _just_ wanted to point out that the organization of sports leagues is a policy question rather than a fact (as if anyone had doubted this), why would you throw in the "Aristotelian binary" strawman and belittle the matter as "humorous"? There are a lot of issues that I don't _personally_ care much about, but I don't see anything funny about the fact that other people _do_ care.
-
-If any concrete negative consequence of gender self-identity categories is going to be waved away with, "Oh, but that's a mere _policy_ decision that can be dealt with on some basis other than gender, and therefore doesn't count as an objection to the new definition of gender words", then it's not clear what the new definition is _for_. 
The policymaking categories we use to make decisions are _closely related_ to the epistemic categories we use to make predictions, and people need to be able to talk about them.

An illustrative example: like many gender-dysphoric males, I [cosplay](/2016/Dec/joined/) [female](/2017/Oct/a-leaf-in-the-crosswind/) [characters](/2019/Aug/a-love-that-is-out-of-anyones-control/) at fandom conventions sometimes. And, unfortunately, like many gender-dysphoric males, I'm _not very good at it_. I think someone looking at some of my cosplay photos and trying to describe their content in clear language—not trying to be nice to anyone or make a point, but just trying to use language as a map that reflects the territory—would say something like, "This is a photo of a man and he's wearing a dress." The word _man_ in that sentence is expressing _cognitive work_: it's a summary of the [lawful cause-and-effect evidential entanglement](https://www.lesswrong.com/posts/6s3xABaXKPdFwA3FS/what-is-evidence) whereby the photons reflecting off the photograph are correlated with photons reflecting off my body at the time the photo was taken, which are correlated with my externally-observable secondary sex characteristics (facial structure, beard shadow, _&c._), from which evidence an agent using an [efficient naïve-Bayes-like model](http://lesswrong.com/lw/o8/conditional_independence_and_naive_bayes/) can assign me to its "man" category and thereby make probabilistic predictions about some of my traits that aren't directly observable from the photo, and achieve a better [score on those predictions](http://yudkowsky.net/rational/technical/) than if the agent had assigned me to its "woman" category, where by "traits" I mean not (just) particularly sex chromosomes ([as Yudkowsky suggested on Twitter](https://twitter.com/ESYudkowsky/status/1067291243728650243)), but the _conjunction_ of dozens or hundreds of observable measurements that are [_causally downstream_ of sex
chromosomes](/2021/Sep/link-blood-is-thicker-than-water/): reproductive organs _and_ muscle mass (sex difference effect size of [Cohen's _d_](https://en.wikipedia.org/wiki/Effect_size#Cohen's_d)≈2.6) _and_ Big Five Agreeableness (_d_≈0.5) _and_ Big Five Neuroticism (_d_≈0.4) _and_ short-term memory (_d_≈0.2, favoring women) _and_ white-to-gray-matter ratios in the brain _and_ probable socialization history _and_ [any number of other things](https://en.wikipedia.org/wiki/Sex_differences_in_human_physiology)—including differences we might not necessarily currently know about, but have prior reasons to suspect exist: no one _knew_ about sex chromosomes before 1905, but given all the other systematic differences between women and men, it would have been a reasonable guess (that turned out to be correct!) to suspect the existence of some sort of molecular mechanism of sex determination.

Forcing a speaker to say "trans woman" instead of "man" in that sentence depending on my verbally self-reported self-identity may not be forcing them to _lie_, exactly. (Because it's understood, "openly and explicitly and with public focus on the language and its meaning", what _trans women_ are; no one is making a false-to-fact claim about them having ovaries, for example.) But it _is_ forcing the speaker to obfuscate the probabilistic inference they were trying to communicate with the original sentence (about modeling the person in the photograph as being sampled from the "men" [cluster in configuration space](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace)), and instead use language that suggests a different cluster-structure ("trans women", two words, are presumably a subcluster within the "women" cluster). Crowing in the public square about how people who object to being forced to "lie" must be ontologically confused is _ignoring the interesting part of the problem_.
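The "efficient naïve-Bayes-like model" point can be sketched in a few lines of code. Assuming, as a toy model rather than real psychometrics, that each trait is normally distributed within each sex with unit variance and means separated by the Cohen's _d_ figures above (sign flipped where the difference favors women), the classifier's log-odds are just a sum of per-trait log-likelihood ratios:

```python
import math

# Toy model: each trait ~ Normal(+d/2, 1) in the "man" cluster and
# Normal(-d/2, 1) in the "woman" cluster, where d is the standardized
# male-minus-female difference. Figures follow the text; negative values
# mark the traits where women score higher.
EFFECT_SIZES = {
    "muscle_mass": 2.6,
    "agreeableness": -0.5,
    "neuroticism": -0.4,
    "short_term_memory": -0.2,
}

def normal_logpdf(x: float, mean: float) -> float:
    """Log-density of Normal(mean, 1) at x."""
    return -0.5 * math.log(2.0 * math.pi) - 0.5 * (x - mean) ** 2

def log_odds_man(observations: dict[str, float]) -> float:
    """Naive-Bayes log-odds (flat prior) that the observed person was
    sampled from the "man" cluster rather than the "woman" cluster.
    Observations are in within-sex standard-deviation units."""
    return sum(
        normal_logpdf(x, +EFFECT_SIZES[trait] / 2)
        - normal_logpdf(x, -EFFECT_SIZES[trait] / 2)
        for trait, x in observations.items()
    )

# A male-typical observation vector pushes the log-odds positive,
# with the high-d trait (muscle mass) dominating the sum.
male_typical = {t: d / 2 for t, d in EFFECT_SIZES.items()}
print(log_odds_man(male_typical))  # positive
```

The point of the sketch is that cluster membership licenses probabilistic predictions across the whole _conjunction_ of traits at once; no single trait (chromosomes included) is doing the work by itself.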
Gender identity's [claim to be non-disprovable](https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable) mostly functions as a way to [avoid the belief's real weak points](https://www.lesswrong.com/posts/dHQkDNMhj692ayx78/avoiding-your-belief-s-real-weak-points).

To this one might reply that I'm giving too much credit to the "anti-trans" coalition for how stupid they're not being: that _my_ careful dissection of the hidden probabilistic inferences implied by pronoun choices is all well and good, but that calling pronouns "lies" is not something you do when you know how to use words.

But I'm _not_ giving them credit for _understanding the lessons of "A Human's Guide to Words"_; I just think there's a useful sense of "know how to use words" that embodies a lower standard of philosophical rigor. If a person-in-the-street says of my cosplay photos, "That's a man! I _have eyes_ and I can _see_ that that's a man! Men aren't women!"—well, I _probably_ wouldn't want to invite such a person-in-the-street to a _Less Wrong_ meetup. But I do think the person-in-the-street is _performing useful cognitive work_. Because _I_ have the hidden-Bayesian-structure-of-language-and-cognition-sight (thanks to Yudkowsky's writings back in the 'aughts), _I_ know how to sketch out the reduction of "Men aren't women" to something more like "This [cognitive algorithm](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms) detects secondary sex characteristics and uses them as a classifier for a binary female/male 'sex' category, which it uses to make predictions about not-yet-observed features ..."

But having _done_ the reduction-to-cognitive-algorithms, it still looks like the person-in-the-street _has a point_ that I shouldn't be allowed to ignore just because I have 30 more IQ points and better philosophy-of-language skills?
As it is written: "intelligence, to be useful, must be used for something other than defeating itself."

I bring up my bad cosplay photos as an edge case that helps illustrate the problem I'm trying to point out, much like how people love to bring up [complete androgen insensitivity syndrome](https://en.wikipedia.org/wiki/Complete_androgen_insensitivity_syndrome) to illustrate why "But chromosomes!" isn't the correct reduction of sex classification. But to differentiate what I'm saying from mere blind transphobia, let me note that I predict that most people-in-the-street would be comfortable using feminine pronouns for someone like [Blaire White](http://msblairewhite.com/). That's evidence about the kind of cognitive work people's brains are doing when they use English language singular third-person pronouns! Certainly, English is not the only language; ours is not the only culture; maybe there is a way to do gender categories that would be more accurate and better for everyone! But to _find_ what that better way is, I think we need to be able to _talk_ about these kinds of details in public. And _in practice_, the attitude evinced in Yudkowsky's Tweets seemed to function as a [semantic stopsign](https://www.lesswrong.com/posts/FWMfQKG3RpZx6irjm/semantic-stopsigns) to get people to stop talking about the details.

If you were actually interested in having a real discussion (instead of a fake discussion that makes you look good to progressives), why would you slap down the "But, but, chromosomes" idiocy and then not engage with the _drop-dead obvious_ "But, but, clusters in high-dimensional configuration space that [aren't actually changeable with contemporary technology](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions)" steelman, [which was, in fact, brought up in the replies](https://twitter.com/EnyeWord/status/1068983389716385792)?
Satire is a very weak form of argument: the one who wishes to doubt will always be able to find some aspect in which the obviously-absurd satirical situation differs from the real-world situation being satirized, and claim that that difference destroys the relevance of the joke. But on the off-chance that it might help _illustrate_ my concern, imagine you lived in a so-called "rationalist" subculture where conversations like this happened—
Bob: "Look at this [adorable cat picture](https://twitter.com/mydogiscutest/status/1079125652282822656)!"

Alice: "Um, that looks like a dog to me, actually."

Bob: "[You're not standing](https://twitter.com/ESYudkowsky/status/1067198993485058048) in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. [Now, maybe as a matter of policy](https://twitter.com/ESYudkowsky/status/1067294823000887297), you want to make a case for language being used a certain way. Well, that's a separate debate then."
If you were Alice, and a _solid supermajority_ of your incredibly smart, incredibly philosophically sophisticated friend group _including Eliezer Yudkowsky_ (!!!) seemed to behave like Bob (and reaped microhedonic social rewards for it in the form of, _e.g._, hundreds of Twitter likes), that would be a _pretty worrying_ sign about your friends' ability to accomplish intellectually hard things (_e.g._, AI alignment), right? Even if there isn't any pressing practical need to discriminate between dogs and cats, the _problem_ is that Bob is [_selectively_](http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) using his sophisticated philosophy-of-language insight to try to _undermine Alice's ability to use language to make sense of the world_, even though Bob obviously knows goddamned well what Alice was trying to say; it's _incredibly_ obfuscatory in a way that people would not tolerate in almost _any_ other context.

It makes sense that Yudkowsky might perceive political constraints on what he might want to say in public. (Despite my misgivings, and the fact that it's basically a running joke at this point, this blog is still published under a pseudonym; it would be hypocritical of me to accuse someone of cowardice about what they're willing to attach their real name to, especially when you look at what happened to the _other_ Harry Potter author.)

But if Yudkowsky didn't want to get into a distracting political fight about a topic, then maybe the responsible thing to do would have been to just not say anything about the topic, rather than engaging with the _stupid_ version of the opposition and stonewalling with "That's a policy question" when people tried to point out the problem?

[TODO: "not ontologically confused" concession]