From: M. Taylor Saotome-Westlake Date: Sat, 23 Jul 2022 21:27:13 +0000 (-0700) Subject: long confrontation 9: consolidate thread reply scraps X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=bcd99a97623fdffcc594e5786f20cfb65a41d2ff;p=Ultimately_Untrue_Thought.git long confrontation 9: consolidate thread reply scraps I have all the prewritten material in the separate buffer; once I finish editing it into connected prose, I can paste it back into the main ms. and continue working on the main ms. The next scratcher box was a "CROWN". --- diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md index b206e72..f69bd6d 100644 --- a/notes/a-hill-of-validity-sections.md +++ b/notes/a-hill-of-validity-sections.md @@ -1039,3 +1039,38 @@ The HEXACO personality model considers "honesty" and "humility" a single factor I'm not usually—at least, not always—so much of a scrub as to play chess with a pigeon (which shits on the board and then struts around like it's won), or wrestle with a pig (which gets you both dirty, and the pig likes it), or dispute what the Tortoise said to Achilles (You might group things together _on the grounds_ of their similarly positive consequences—that's what words like _good_ do—but that's distinct from choosing _the categorization itself_ because of its consequences.) + +—and would be unforgivable if it weren't so _inexplicable_. + +... not _actually_ inexplicable. There was, in fact, an obvious explanation: that Yudkowsky was trying to bolster his reputation amongst progressives by positioning himself on the right side of history, and was tailoring a fake rationality lesson to suit that goal. But _Eliezer Yudkowsky wouldn't do that_. I had to assume this was an honest mistake. + +At least, a _pedagogy_ mistake. 
If Yudkowsky _just_ wanted to make a politically neutral technical point about the difference between fact-claims and policy claims _without_ "picking a side" in the broader culture-war dispute, these Tweets did a very poor job of it. I of course agree that pronoun usage conventions, and conventions about who uses what bathroom, are not, themselves, factual assertions about sex chromosomes in particular. I'm not saying that Yudkowsky made a false statement there. Rather, I'm saying that it's + + +Rather, previously sexspace had two main clusters (normal females and males) plus an assortment of tiny clusters corresponding to various [disorders of sex development](https://en.wikipedia.org/wiki/Disorders_of_sex_development), and now it has two additional tiny clusters: females-on-masculinizing-HRT and males-on-feminizing-HRT. Certainly, there are situations where you would want to use "gender" categories that use the grouping {females, males-on-feminizing-HRT} and {males, females-on-masculinizing-HRT}. + +[TODO: relevance of multivariate— + +(And in this case, the empirical facts are _so_ lopsided, that if we must find humor in the matter, it really goes the other way. Lia Thomas trounces the entire field by _4.2 standard deviations_ (!!), and Eliezer Yudkowsky feels obligated to _pretend not to see the problem?_ You've got to admit, that's a _little_ bit funny.) 
+ +https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy +https://www.lesswrong.com/posts/vhp2sW6iBhNJwqcwP/blood-is-thicker-than-water + +] + +[TODO: sentences about studies showing that HRT doesn't erase male advantage +https://twitter.com/FondOfBeetles/status/1368176581965930501 +] + +[TODO sentences about Lia Thomas and Cece Telfer] https://twitter.com/FondOfBeetles/status/1466044767561830405 (Thomas and Telfer's —cite South Park) +https://www.dailymail.co.uk/news/article-10445679/Lia-Thomas-UPenn-teammate-says-trans-swimmer-doesnt-cover-genitals-locker-room.html +https://twitter.com/sharrond62/status/1495802345380356103 Lia Thomas event coverage +https://www.realityslaststand.com/p/weekly-recap-lia-thomas-birth-certificates Zippy inv. cluster graph! + +] + +Writing out this criticism now, the situation doesn't feel _confusing_, anymore. Yudkowsky was very obviously being intellectually dishonest in response to very obvious political incentives. That's a thing that public intellectuals do. And, again, I agree that the distinction between facts and policy decisions _is_ a valid one, even if I thought it was being selectively invoked here as an [isolated demand for rigor](http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) because of the political context. Coming from _anyone else in the world_, I would have considered the thread fine—a solidly above-average performance, really. I wouldn't have felt confused or betrayed at all. Coming from Eliezer Yudkowsky, it was—confusing. + +Because of my hero worship, "he's being intellectually dishonest in response to very obvious political incentives" wasn't in my hypothesis space; I _had_ to assume the thread was an "honest mistake" in his rationality lessons, rather than (what it actually was, what it _obviously_ actually was) hostile political action. 
+ +(I _want_ to confidently predict that everything I've just said is completely obvious to you, because I learned it all specifically from you! A 130 IQ _nobody_ like me shouldn't have to say _any_ of this to the _author_ of "A Human's Guide to Words"! But then I don't know how to reconcile that with your recent public statement about [not seeing "how there's scientific truth at stake"](https://twitter.com/ESYudkowsky/status/1067482047126495232). Hence this desperate and [_confused_](https://www.lesswrong.com/posts/5JDkW4MYXit2CquLs/your-strength-as-a-rationalist) email plea.) diff --git a/notes/a-hill-twitter-reply.md b/notes/a-hill-twitter-reply.md index 7148210..4907c5c 100644 --- a/notes/a-hill-twitter-reply.md +++ b/notes/a-hill-twitter-reply.md @@ -98,168 +98,16 @@ But the point is general. If _any_ concrete negative consequence of gender self- An illustrative example: like many gender-dysphoric males, I [cosplay](/2016/Dec/joined/) [female](/2017/Oct/a-leaf-in-the-crosswind/) [characters](/2019/Aug/a-love-that-is-out-of-anyones-control/) at fandom conventions sometimes. And, unfortunately, like many gender-dysphoric males, I'm _not very good at it_. I think someone looking at some of my cosplay photos and trying to describe their content in clear language—not trying to be nice to anyone or make a point, but just trying to use language as a map that reflects the territory—would say something like, "This is a photo of a man and he's wearing a dress." 
The word _man_ in that sentence is expressing _cognitive work_: it's a summary of the [lawful cause-and-effect evidential entanglement](https://www.lesswrong.com/posts/6s3xABaXKPdFwA3FS/what-is-evidence) whereby the photons reflecting off the photograph are correlated with photons reflecting off my body at the time the photo was taken, which are correlated with my externally-observable secondary sex characteristics (facial structure, beard shadow, _&c._), from which evidence an agent using an [efficient naïve-Bayes-like model](http://lesswrong.com/lw/o8/conditional_independence_and_naive_bayes/) can assign me to its "man" category and thereby make probabilistic predictions about some of my traits that aren't directly observable from the photo, and achieve a better [score on those predictions](http://yudkowsky.net/rational/technical/) than if the agent had assigned me to its "woman" category, where by "traits" I mean not (just) particularly sex chromosomes ([as Yudkowsky suggested on Twitter](https://twitter.com/ESYudkowsky/status/1067291243728650243)), but the _conjunction_ of dozens or hundreds of observable measurements that are [_causally downstream_ of sex chromosomes](/2021/Sep/link-blood-is-thicker-than-water/): reproductive organs _and_ muscle mass (sex difference effect size of [Cohen's _d_](https://en.wikipedia.org/wiki/Effect_size#Cohen's_d)≈2.6) _and_ Big Five Agreeableness (_d_≈0.5) _and_ Big Five Neuroticism (_d_≈0.4) _and_ short-term memory (_d_≈0.2, favoring women) _and_ white-to-gray-matter ratios in the brain _and_ probable socialization history _and_ [any number of other things](https://en.wikipedia.org/wiki/Sex_differences_in_human_physiology)—including differences we might not necessarily currently know about, but have prior reasons to suspect exist: no one _knew_ about sex chromosomes before 1905, but given all the other systematic differences between women and men, it would have been a reasonable guess (that turned out to be correct!) 
to suspect the existence of some sort of molecular mechanism of sex determination. -Making someone say "trans woman" instead of "man" in that sentence depending on my verbally self-reported self-identity may not be forcing them to _lie_, exactly. +Forcing a speaker to say "trans woman" instead of "man" in that sentence depending on my verbally self-reported self-identity may not be forcing them to _lie_, exactly. (Because it's understood, "openly and explicitly and with public focus on the language and its meaning", what _trans women_ are; no one is making a false-to-fact claim about them having ovaries, for example.) But it _is_ forcing the speaker to obfuscate the probabilistic inference they were trying to communicate with the original sentence (about modeling the person in the photograph as being sampled from the "men" [cluster in configuration space](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace)), and instead use language that suggests a different cluster-structure ("trans women", two words, are presumably a subcluster within the "women" cluster). Crowing in the public square about how people who object to being forced to "lie" must be ontologically confused is _ignoring the interesting part of the problem_. Gender identity's [claim to be non-disprovable](https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable) mostly functions as a way to [avoid the belief's real weak points](https://www.lesswrong.com/posts/dHQkDNMhj692ayx78/avoiding-your-belief-s-real-weak-points). 
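The cluster-structure argument above is quantitative at heart: many individually weak sex differences combine into a very separable joint distribution. As a purely illustrative sketch (my addition, not part of the original notes—the trait selection, the 50/50 prior, and the unit-variance Gaussian clusters are all simplifying assumptions), here is a naive-Bayes classifier using the Cohen's _d_ effect sizes cited earlier:

```python
import math
import random

# Illustrative assumption: along each trait, two unit-variance Gaussian
# clusters separated by Cohen's d (positive d = male mean higher). The d
# values are the ones cited in the text; the trait list is a simplification.
EFFECT_SIZES = {
    "muscle_mass": 2.6,
    "agreeableness": -0.5,      # women average higher
    "neuroticism": -0.4,        # women average higher
    "short_term_memory": -0.2,  # women average higher
}

def p_male(observations):
    """Posterior P(male) from z-scored traits, 50/50 prior, naive Bayes."""
    log_m = log_f = math.log(0.5)
    for trait, z in observations.items():
        d = EFFECT_SIZES[trait]
        log_m += -0.5 * (z - d / 2) ** 2  # Gaussian log-density, male cluster
        log_f += -0.5 * (z + d / 2) ** 2  # Gaussian log-density, female cluster
    return 1 / (1 + math.exp(log_f - log_m))

def accuracy(traits, n=20_000, seed=0):
    """Monte Carlo classification accuracy using only the given traits."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        is_male = rng.random() < 0.5
        sign = 1 if is_male else -1
        obs = {t: rng.gauss(sign * EFFECT_SIZES[t] / 2, 1) for t in traits}
        correct += (p_male(obs) > 0.5) == is_male
    return correct / n

# One weak trait is barely better than a coin flip; the conjunction of
# traits does far better—the "univariate fallacy" point, in numbers.
print(accuracy(["short_term_memory"]))  # barely above chance
print(accuracy(list(EFFECT_SIZES)))     # much higher
```

(Analytically, a single trait with _d_ = 0.2 classifies at about 54%, while the four-trait conjunction—combined _d_ ≈ 2.7—classifies at about 91%; the simulated numbers vary slightly with the seed.)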
-(Because it's understood, "openly and explicitly and with public focus on the language and its meaning", what _trans women_ are: +[not entitled to ignore when dumb people have a point] -But it _is_ forcing them to obfuscate the probabilistic inference they were trying to communicate with the original sentence (about modeling the person in the photograph as being sampled from the "men" [cluster in configuration space](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace)), and instead use language that suggests a different cluster-structure ("trans women", two words, are presumably a subcluster within the "women" cluster). +This philosophical point is distinct from my earlier claims supporting Blanchard's two-type ta +think you _already_ have enough evidence—if [used efficiently](https://www.lesswrong.com/posts/MwQRucYo6BZZwjKE7/einstein-s-arrogance)—to see that the distribution of actual trans people we know is such that the categories-are-not-abritrary point is relevant in practice. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -—and would be unforgivable if it weren't so _inexplicable_. - -... not _actually_ inexplicable. There was, in fact, an obvious explanation: that Yudkowsky was trying to bolster his reputation amongst progressives by positioning himself on the right side of history, and was tailoring a fake rationality lesson to suit that goal. But _Eliezer Yudkowsky wouldn't do that_. I had to assume this was a honest mistake. - -At least, a _pedagogy_ mistake. If Yudkowsky _just_ wanted to make a politically neutral technical point about the difference between fact-claims and policy claims _without_ "picking a side" in the broader cultural war dispute, these Tweets did a very poor job of it. I of course agree that pronoun usage conventions, and conventions about who uses what bathroom, are not, themselves, factual assertions about sex chromosomes in particular. 
I'm not saying that Yudkowsky made a false statement there. Rather, I'm saying that it's _bizarre_ to condescendingly point this out _as if it were the crux of contemporary trans-rights debates_. Conservatives and gender-critical feminists _know_ that trans-rights advocates aren't falsely claiming that trans women have XX chromosomes. But the question of what categories epistemically "carve reality at the joints", is _not unrelated_ to the question of which categories to use in policy decisions: the _function_ of sex-segrated bathrooms is to protect females from males, where "females" and "males" are natural clusters in configuration space that it makes sense to want words to refer to. - -Even if the thread only explicitly mentioned pronouns and not the noun "woman", in practice, and in the context of elite intellectual American culture in which "trans women are women" is dogma, I don't see any _meaningful_ difference between "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning" and "I can define the word 'woman' any way I want." (About which, the Yudkowsky of 2008 had some harsh things to say, as excerpted above.) - - - - - - -Rather, previously sexspace had two main clusters (normal females and males) plus an assortment of tiny clusters corresponding to various [disorders of sex development](https://en.wikipedia.org/wiki/Disorders_of_sex_development), and now it has two additional tiny clusters: females-on-masculinizing-HRT and males-on-feminizing-HRT. Certainly, there are situations where you would want to use "gender" categories that use the grouping {females, males-on-feminizing-HRT} and {males, females-on-masculinizing-HRT}. - - - - - - - - -[TODO: relevance of multivariate— - - - - - - - - - - -(And in this case, the empirical facts are _so_ lopsided, that if we must find humor in the matter, it really goes the other way. 
Lia Thomas trounces the entire field by _4.2 standard deviations_ (!!), and Eliezer Yudkowsky feels obligated to _pretend not to see the problem?_ You've got to admit, that's a _little_ bit funny.) - - -https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy -https://www.lesswrong.com/posts/vhp2sW6iBhNJwqcwP/blood-is-thicker-than-water - - -] - -[TODO: sentences about studies showing that HRT doesn't erase male advantage -https://twitter.com/FondOfBeetles/status/1368176581965930501 - - - - -] - -[TODO sentences about Lia Thomas and Cece Tefler] https://twitter.com/FondOfBeetles/status/1466044767561830405 (Thomas and Tefler's —cite South Park) -https://www.dailymail.co.uk/news/article-10445679/Lia-Thomas-UPenn-teammate-says-trans-swimmer-doesnt-cover-genitals-locker-room.html -https://twitter.com/sharrond62/status/1495802345380356103 Lia Thomas event coverage -https://www.realityslaststand.com/p/weekly-recap-lia-thomas-birth-certificates Zippy inv. cluster graph! -https://www.swimmingworldmagazine.com/news/a-look-at-the-numbers-and-times-no-denying-the-advantages-of-lia-thomas/ -] - - - -Writing out this criticism now, the situation doesn't feel _confusing_, anymore. Yudkowsky was very obviously being intellectually dishonest in response to very obvious political incentives. That's a thing that public intellectuals do. And, again, I agree that the distinction between facts and policy decisions _is_ a valid one, even if I thought it was being selectively invoked here as an [isolated demand for rigor](http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) because of the political context. Coming from _anyone else in the world_, I would have considered the thread fine—a solidly above-average performance, really. I wouldn't have felt confused or betrayed at all. Coming from Eliezer Yudkowsky, it was—confusing. 
- -Because of my hero worship, "he's being intellectually dishonest in response to very obvious political incentives" wasn't in my hypothesis space; I _had_ to assume the thread was an "honest mistake" in his rationality lessons, rather than (what it actually was, what it _obviously_ actually was) hostile political action. - - - - - - - - - -, then you're knowably, predictably making your _readers_ that much stupider. - -, which has negative consequences for his "advancing the art of human rationality" project, even if you haven't said anything false—particularly because people look up to you as the one who taught them to aspire to a _[higher](https://www.lesswrong.com/posts/DoLQN5ryZ9XkZjq5h/tsuyoku-naritai-i-want-to-become-stronger) [standard](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible)_ [than](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline) [merely not-lying](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly). - - - - - - - - - - -It's true that the same word can be used in many ways depending on context. But you're _not done_ dissolving the question just by making that observation. - - -And the one who triumphantly shouts in the public square, "And *therefore*, people who object to my preferred use of language are ontologically confused!" is _ignoring the interesting part of the problem_. Gender identity's [claim to be non-disprovable](https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable) mostly functions as a way to [avoid the belief's real weak points](https://www.lesswrong.com/posts/dHQkDNMhj692ayx78/avoiding-your-belief-s-real-weak-points). 
- - - - -This encoding might not confuse a well-designed AI into making any bad predictions, but [as you explained very clearly, it probably will confuse humans](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences): - -> You can see this in terms of similarity clusters: once you draw a boundary around a group, the mind starts trying to harvest similarities from the group. And unfortunately the human pattern-detectors seem to operate in such overdrive that we see patterns whether they're there or not; a weakly negative correlation can be mistaken for a strong positive one with a bit of selective memory. - - - - - - - - - -(I _want_ to confidently predict that everything I've just said is completely obvious to you, because I learned it all specifically from you! A 130 IQ _nobody_ like me shouldn't have to say _any_ of this to the _author_ of "A Human's Guide to Words"! But then I don't know how to reconcile that with your recent public statement about [not seeing "how there's scientific truth at stake"](https://twitter.com/ESYudkowsky/status/1067482047126495232). Hence this desperate and [_confused_](https://www.lesswrong.com/posts/5JDkW4MYXit2CquLs/your-strength-as-a-rationalist) email plea.) - -In your email of 29 November, you wrote, "I hope I would have noticed if I had tweeted anything asserting [Zack's] factual statements to be factually false, since that would imply knowledge I don't claim to have," and in your Twitter [reply link](https://twitter.com/ESYudkowsky/status/1068071036732694529) to my post (thanks!!), you wrote, "[w]ithout yet judging its empirical content." 
However, as Michael emphasized ("That's not what Zach is talking about at all and not what the debate is about and you know this"), the _main_ point I'm trying to make is a philosophical one, not an empirical one: that category boundaries and associated language are not arbitrary (if you care about human intelligence being useful), and that sex (or "gender") is no exception. - -I _also_ made some empirical claims favoring Blanchard's two-type homosexual/autogynephilic model of MtF (as we discussed in 2016). But as I _tried_ to make clear in the post (I fear I am not as good of a writer as you, and perhaps should have put in more effort to make it clearer, but see [footnote 10](http://unremediatedgender.space/2018/Feb/the-categories-were-made-for-man-to-make-predictions/#note-10)), I don't think you _need_ the full autogynephilia theory to show that the categories-are-not-arbitrary point has implications for discourse on transgender issues, and I think you _already_ have enough evidence—if [used efficiently](https://www.lesswrong.com/posts/MwQRucYo6BZZwjKE7/einstein-s-arrogance)—to see that the distribution of actual trans people we know is such that the categories-are-not-abritrary point is relevant in practice. - -Consider again the 6.7:1 (!!) cis-woman-to-trans-woman ratio among 2018 _Slate Star Codex_ survey respondents. The ratio in the general population is going to be more like 86:1 (estimate derived from dividing 50% (female share of population according to [Fisher's principle](https://en.wikipedia.org/wiki/Fisher%27s_principle)) by 0.58% (trans share of U.S. population according to a [2016 report](http://williamsinstitute.law.ucla.edu/wp-content/uploads/How-Many-Adults-Identify-as-Transgender-in-the-United-States.pdf))). +Consider again the 6.7:1 (!!) cis-woman-to-trans-woman ratio among 2018 _Slate Star Codex_ survey respondents. 
A curious rationalist, having been raised to believe that trans women are women, and considering observations like this, might ask the question: "Gee, I wonder _why_ women-who-happen-to-be-trans are _so much_ more likely to read _Slate Star Codex_, and be attracted to women, and, um, have penises, than women-who-happen-to-be-cis?" @@ -267,48 +115,49 @@ If you're _very careful_, I'm sure it's possible to give a truthful answer to th Maybe we'd _usually_ prefer not to phrase it like that, both for reasons of politeness, and because we can be more precise at the cost of using more words ("Interests and sexual orientation may be better predicted by natal sex rather than social gender in this population; also, not all trans women have had sex reassignment surgery and so retain their natal-sex anatomy"?). But I think the short version needs to be _sayable_, because if it's not _sayable_, then that's artificially restricting the hypothesis spaces that people use to think with, which is bad (if you care about human intelligence being useful). -You see the problem, right? I'm kind of at my wits' end here, because I _thought_ the point of this whole "rationality" project was to carve out _one_ place in the entire world where good arguments would _eventually_ triumph over bad arguments, even when the good arguments happen to be mildly politically inconvenient. And yet I keep encountering people who seem to be regarded as aspiring-rationalists-in-[good-standing](https://srconstantin.wordpress.com/2018/12/24/contrite-strategies-and-the-need-for-standards/) who try to get away with this obfuscatory "We can define the word 'woman' any way we want, and you _need_ to define it by self-identification because otherwise you're _hurting trans people_" maneuver as if that were the end of the discussion! 
- -A [few](https://en.wikipedia.org/wiki/Rule_of_three_(writing)) linkable examples of this sort of thing— - - * The immortal Scott Alexander [wrote](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/), "An alternative categorization system is not an error, and borders are not objectively true or false." (You read my reply.) +Satire is a very weak form of argument: the one who wishes to doubt will always be able to find some aspect in which the obviously-absurd satirical situation differs from the real-world situation being satirized, and claim that that difference destroys the relevance of the joke. But on the off-chance that it might help _illustrate_ my concern, imagine you lived in a so-called "rationalist" subculture where conversations like this happened— - * _Vox_ journalist and author of the popular Tumblr blog _The Unit of Caring_ Kelsey Piper [wrote](https://theunitofcaring.tumblr.com/post/171986501376/your-post-on-definition-of-gender-and-woman-and)— +**Bob**: "Look at this [adorable cat picture](https://twitter.com/mydogiscutest/status/1079125652282822656)!" +**Alice**: "Um, that looks like a dog to me, actually." +**Bob**: "[You're not standing](https://twitter.com/ESYudkowsky/status/1067198993485058048) in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. [Now, maybe as a matter of policy](https://twitter.com/ESYudkowsky/status/1067294823000887297), you want to make a case for language being used a certain way. Well, that's a separate debate then." -> "If you can find anyone explaining why [_woman_ as 'adult human biological female'] is a good definition, **or even explaining what good properties it has** [!?!! bolding mine—ZMD], I'd appreciate it because I did sincerely put in the effort and—uncharitably, it's as if there's just 'matches historical use' and 'doesn’t involve any people I consider icky being in my category'." 
+If you were Alice, and a _solid supermajority_ of your incredibly smart, incredibly philosophically sophisticated friend group _including Eliezer Yudkowsky_ (!!!) seemed to behave like Bob (and reaped microhedonic social rewards for it in the form of, _e.g._, hundreds of Twitter likes), that would be a _pretty worrying_ sign about your friends' ability to accomplish intellectually hard things (_e.g._, AI alignment), right? Even if there isn't any pressing practical need to discriminate between dogs and cats, the _problem_ is that Bob is [_selectively_](http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) using his sophisticated (and correct!) philosophy-of-language insight to try to _undermine Alice's ability to use language to make sense of the world_, even though Bob obviously knows goddamned well what Alice was trying to say. -[(My reply.)](http://unremediatedgender.space/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/) + _incredibly_ obfuscatory in a way that people would not tolerate in almost _any_ other context. - * On Facebook, MIRI communications director Rob Bensinger [told me](https://www.facebook.com/robbensinger/posts/10158073223040447?comment_id=10158073685825447&reply_comment_id=10158074093570447&comment_tracking=%7B%22tn%22%3A%22R2%22%7D)— +With respect to transgender issues, this certainly _can_ go both ways: somewhere on Twitter, there are members of the "anti-trans" political coalition insisting, "No, that's _really_ a man because chromosomes" even though they know that that's not what members of the "pro-trans" coalition mean—although as stated earlier, I don't think Eric Weinstein is guilty of this. 
But given the likely distribution of your Twitter followers and what they need to hear, I'm very worried about the _consequences_ (again, remaining agnostic about your private intent) of slapping down the "But, but, chromosomes" idiocy and then not engaging with the _drop-dead obvious_ "But, but, clusters in high-dimensional configuration space that [aren't actually changeable with contemporary technology](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions)" steelman. -> Zack, "woman" doesn't unambiguously refer to the thing you're trying to point at; even if no one were socially punishing you for using the term that way, and even if we were ignoring any psychological harm to people whose dysphoria is triggered by that word usage, there'd be the problem regardless that these terms are already used in lots of different ways by different groups. The most common existing gender terms are a semantic minefield at the same time they're a dysphoric and political minefield, and everyone adopting the policy of objecting when anyone uses man/woman/male/female/etc. in any way other than the way they prefer is not going to solve the problem at all. +It makes sense that (I speculate) you might perceive political constraints on what you want to say in public. (I still work under a pseudonym myself; it would be wildly hypocritical of me to accuse anyone else of cowardice!) But I suspect that if you want to not get into a distracting political fight about topic X, then maybe the responsible thing to do is just not say anything about topic X, rather than engaging with the _stupid_ version of anti-X, and then [stonewalling](https://www.lesswrong.com/posts/wqmmv6NraYv4Xoeyj/conversation-halters) with "That's a policy question" when people [try to point out the problem](https://twitter.com/samsaragon/status/1067238063816945664)? 
-In context, I argue that this was an attempted [conversation-halter of the appeal-to-arbitrariness type](https://www.lesswrong.com/posts/wqmmv6NraYv4Xoeyj/conversation-halters): if you scroll up and read my comments, I think it should be very clear that I understood that words can be used in many ways, and that I was objecting to another commenter's word usage for a _specific stated reason_ about the expressive power of language. ("To say that someone _already is_ a woman simply by virtue of having the same underlying psychological condition that motivates people to actually take the steps of transitioning (and thereby _become_ a trans woman) kind of makes it hard to have a balanced discussion of the costs and benefits of transitioning.") Rob didn't even acknowledge my argument! (Although the other commenter, to her credit, did!) +_bizarre_ to condescendingly point this out _as if it were the crux of contemporary trans-rights debates_. Conservatives and gender-critical feminists _know_ that trans-rights advocates aren't falsely claiming that trans women have XX chromosomes. But the question of what categories epistemically "carve reality at the joints", is _not unrelated_ to the question of which categories to use in policy decisions: the _function_ of sex-segrated bathrooms is to protect females from males, where "females" and "males" are natural clusters in configuration space that it makes sense to want words to refer to. -Now, to be fair to Rob, it's certainly possible that he was criticizing me specifically because I was the "aggressor" objecting to someone else's word usage, and that he would have stuck up for me just the same if someone had "aggressed" against me using the word _woman_ in a sense that excluded non-socially-transitioned gender-dysphoric males, for the same reason ("adopting the policy of objecting when anyone uses man/woman/male/female/etc. in any way other than the way they prefer is not going to solve the problem at all"). 
But given my other experiences trying to argue this with many people, I feel justified in my suspicions that that wasn't actually his algorithm? If socially-liberal people in the Current Year _selectively_ drag out the "It's pointless to object to someone else's terminology" argument _specifically_ when someone wants to talk about biological sex (or even _socially perceived_ sex!) rather than self-identified gender identity—but objecting on the grounds of "psychological harm to people whose dysphoria is triggered by that word usage" (!!) is implied to be potentially kosher—that has a pretty stark distortionary effect on our discussions! +I agree that this is the only reason you should care. -(_Speaking_ of psychological harm, it may not be a coincidence that ten days after this exchange, I lost a lot of sleep, had a nervous breakdown, and ended up being involuntarily committed to the psychiatric ward for three days. Maybe in some people's book, that makes me a less reliable narrator, because I'm "crazy." But when everyone I [trusted](https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science) to help keep me sane seemed intent on applying social pressure and clever definition-hacking mind games to get me to accept that men like me can _literally_ be women _on the basis of our saying so_, perhaps it is _understandable_ that I was pretty upset?) + -I'm a transhumanist like everyone else; I want to support my trans friends like everyone else (and might end up being one of those friends if the technology gets better or I grow a spine) but Scott's, Kelsey's, and Rob's performance above _can't possibly_ be the way to have the discussion if we're going to be intellectually honest about it! 
+I'm not giving them credit for understanding the lessons of "A Human's Guide to Words", but I think there's a useful sense of "know how to use words" that embodies a lower standard of philosophical rigor than that: people did, in fact, use language to build this entire technological civilization even though (unfortunately) the vast majority of them have read neither you nor S. I. Hayakawa.
-(_Needless to say_, but I'm saying it anyway because I've been trained to perceive the need because I live in a trash fire: Scott, Kelsey, and Rob are all great and smart people who I love; I'm attacking a _flawed argument pattern_, not people.)
+If a person-in-the-street says of my cosplay photos, or self-identified trans woman Danielle Muscato, "That's a man! I have eyes and I can see that that's a man! Men aren't women!"—well, I probably wouldn't want to invite such a person-in-the-street to a Less Wrong meetup. But I do think the person-in-the-street is performing useful cognitive work. (A rock couldn't do that!) Because I have the hidden-Bayesian-structure-of-language-and-cognition-sight (thanks!!), I know how to sketch out the reduction of "Men aren't women" to something more like "This cognitive algorithm detects secondary sex characteristics and uses them as a classifier for a binary female/male 'sex' category, which it uses to make predictions about not-yet-observed features ..."
-You replied to me (in part):
+But having done the reduction-to-cognitive-algorithms, it still looks like the person-in-the-street has a point that I shouldn't be allowed to ignore just because I have 30 more IQ points and better philosophy-of-language skills? As it is written: "intelligence, to be useful, must be used for something other than defeating itself."
-> "Lying" is for per se direct falsehoods. That's all I was policing—people who do claim to be aspiring toward truth, messing up on an easy problem of elementary epistemological typing and the nature of words, but who might still listen maybe if somebody they respect nopes the fallacy. The rest of human civilization is a trash fire so whatever.
+I bring up me and Danielle Muscato as examples because I think those are edge cases that help illustrate the problem I'm trying to point out, much like how people love to bring up complete androgen insensitivity syndrome to illustrate why "But chromosomes!" isn't the correct reduction of sex classification. But to differentiate what I'm saying from mere blind transphobia, let me note that I predict that most people-in-the-street would be comfortable using feminine pronouns for someone like Blaire White (who is also trans). That's evidence about the kind of cognitive work people's brains are doing when they use English-language singular third-person pronouns! Certainly, English is not the only language; ours is not the only culture; maybe there is a way to do gender categories that would be more accurate and better for everyone! But to find what that better way is, I think we need to be able to talk about these kinds of details in public. And I think statements like "Calling pronouns lies is not what you do when you know how to use words" hinder that discussion rather than helping it, by functioning as semantic stopsigns.
-Some of us still have to live in this trash fire! And if you don't care about that—if you don't care about that, the next generation of AI researchers (assuming we have time for another generation) is growing up in this trash fire and a lot of them probably follow you on Twitter! Judging by local demographics, a _surprising number_ of them are gender-dysphoric males. If, as I'm claiming, the political push for trans rights is seducing them into adopting [_generalized_ bad patterns of reasoning](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) (_e.g._, "Trans women are women, [_by definition_](https://www.lesswrong.com/posts/cFzC996D7Jjds3vS9/arguing-by-definition)"), surely _that_ matters?
+Again, satire is a very weak form of argument, but if it helps at all, I feel like Alice in the following dialogue.
-The clever definition-hacking mind games are _not technically false_—there is almost never any occasion on which I can catch anyone in an _explicit lie_ like "Benya Fallenstein and Jessica Taylor have XX karyotypes." But I still think the clever definition-hacking games are _incredibly_ obfuscatory in a way that people would not tolerate in almost _any_ other context.
+Bob (loudly, in the public square): When people say "Now let us bow our heads and praise the Lord our God", they're not lying, because "Now let us bow our heads" is a speech act, not a statement of fact.
+Alice (via private email): I agree that it's a speech act rather than a factual assertion, but isn't that observation pretty misleading in isolation? I don't understand why you would say that and only that, unless you were deliberately trying to get your readers to believe in God without actually having to say "You should believe in God."
+Bob: Calling speech acts "lies" is not what you do when you know how to use words. But mostly, I think this is not very important.
-Satire is a very weak form of argument: the one who wishes to doubt will always be able to find some aspect in which the obviously-absurd satirical situation differs from the real-world situation being satirized, and claim that that difference destroys the relevance of the joke. But on the off-chance that it might help _illustrate_ my concern, imagine you lived in a so-called "rationalist" subculture where conversations like this happened—
+As with all satire, you can point out differences between this satirical dialogue and the real-world situation that it's trying to satirize. But are they relevant differences? To be sure, "Does God exist?" is a much more straightforward question than "Are trans women women?" because existence questions in general are easier than parsimonious-categorization-that-carves-nature-at-the-joints questions. But I think that "when you take a step back, feel the flow of debate, observe the cognitive traffic signals", the satirical dialogue is exhibiting the same structural problems as the conversation we're actually having.
-**Bob**: "Look at this [adorable cat picture](https://twitter.com/mydogiscutest/status/1079125652282822656)!"
-**Alice**: "Um, that looks like a dog to me, actually."
-**Bob**: "[You're not standing](https://twitter.com/ESYudkowsky/status/1067198993485058048) in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. [Now, maybe as a matter of policy](https://twitter.com/ESYudkowsky/status/1067294823000887297), you want to make a case for language being used a certain way. Well, that's a separate debate then."
+Can you think of any other context where "Now, maybe as a matter of policy, you want to make a case for language being used a certain way. Well, that's a separate debate then" would seem like a smart thing to say? That's not a rhetorical question. This actually seems like a broken conversation pattern for any X, Y, and Z:
-If you were Alice, and a _solid supermajority_ of your incredibly smart, incredibly philosophically sophisticated friend group _including Eliezer Yudkowsky_ (!!!) seemed to behave like Bob (and reaped microhedonic social rewards for it in the form of, _e.g._, hundreds of Twitter likes), that would be a _pretty worrying_ sign about your friends' ability to accomplish intellectually hard things (_e.g._, AI alignment), right? Even if there isn't any pressing practical need to discriminate between dogs and cats, the _problem_ is that Bob is [_selectively_](http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) using his sophisticated (and correct!) philosophy-of-language insight to try to _undermine Alice's ability to use language to make sense of the world_, even though Bob obviously knows goddamned well what Alice was trying to say.
+Alice: It's not true that X is an instance of Y, because of reason Z!
+Bob: Using language in a way you dislike, openly and explicitly and with public focus on the language and its meaning, is not lying. Now, maybe as a matter of policy, you want to make a case for language being used a certain way. Well, that's a separate debate then.
+Alice: Fine, have it your way. As a matter of policy, I argue that we should use language such that we would say that X is not an instance of Y. And the reason that's a good policy decision is Z.
+Bob: ... um, sorry, out of time, gotta go.
-With respect to transgender issues, this certainly _can_ go both ways: somewhere on Twitter, there are members of the "anti-trans" political coalition insisting, "No, that's _really_ a man because chromosomes" even though they know that that's not what members of the "pro-trans" coalition mean—although as stated earlier, I don't think Eric Weinstein is guilty of this.
-But given the likely distribution of your Twitter followers and what they need to hear, I'm very worried about the _consequences_ (again, remaining agnostic about your private intent) of slapping down the "But, but, chromosomes" idiocy and then not engaging with the _drop-dead obvious_ "But, but, clusters in high-dimensional configuration space that [aren't actually changeable with contemporary technology](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions)" steelman.
+And I'm still really confused, because I still feel like everything I'm saying here is a trivial application of Sequences-lore. If I'm getting something wrong, I should be overjoyed to be harshly corrected by the Great Teacher! A simple person like me is but a mere worm in the presence of the great Eliezer Yudkowsky! But if it looks like the Great Teacher is getting something wrong (wrong with respect to the balanced flow of arguments and evidence in which every "step is precise and has no room in it for your whims", although not wrong in the sense of making a factually incorrect statement) and the Great Teacher neither corrects me nor says "OK, you're right and I was wrong, well done, my student", what am I supposed to conclude? Is this a prank—a test? Am I like Brennan in "Initiation Ceremony", being evaluated to see if I have the guts to stand by my vision of the Way in the face of social pressure? (If so, I'm not doing a very good job, because I definitely wouldn't be writing this if I hadn't gotten social proof from Michael, Ben, and Sarah.) Did I pass??
-It makes sense that (I speculate) you might perceive political constraints on what you want to say in public. (I still work under a pseudonym myself; it would be wildly hypocritical of me to accuse anyone else of cowardice!) But I suspect that if you want to not get into a distracting political fight about topic X, then maybe the responsible thing to do is just not say anything about topic X, rather than engaging with the _stupid_ version of anti-X, and then [stonewalling](https://www.lesswrong.com/posts/wqmmv6NraYv4Xoeyj/conversation-halters) with "That's a policy question" when people [try to point out the problem](https://twitter.com/samsaragon/status/1067238063816945664)? Having already done so, is it at all possible that you might want to provide your readers a clarification that [category boundaries are not arbitrary](http://lesswrong.com/lw/o0/where_to_draw_the_boundary/) and that [actually changing sex is a very hard technical problem](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions)? Or if you don't want to do that publicly, maybe an internal ["Halt, Melt, and Catch Fire event"](https://www.lesswrong.com/rationality/fighting-a-rearguard-action-against-the-truth) is in order on whatever processes led to the present absurd situation (where I feel like I need to explain "A Human's Guide to Words" to you, and Michael, Sarah, and Ben all seem to think I have a point)? Alternatively, if _I'm_ philosophically in the wrong here, is there a particular argument I'm overlooking? I should be overjoyed to be corrected if I am in error, but "The rest of human civilization is a trash fire so whatever" is _not a counterargument!_
+In a functioning rationalist community, there should never be any occasion in which "appeal to Eliezer Yudkowsky's personal authority" seems like a good strategy: the way this is supposed to work is that I should just make my arguments with the understanding that good arguments will be accepted and bad arguments will be rejected. But I've been trying that, and it's mostly not working. On any other topic, I probably would have just given up and accepted the social consensus by now: "Sure, OK, whatever, trans women are women by definition; who am I to think I've seen into the Bayes-structure?" I still think this from time to time, and feel really guilty about arguing for the Bad Guys (because in my native Blue Tribe culture, only Bad people want to talk about sexual dimorphism). But then I can't stop seeing the Bayes-structure that says that biological sex continues to be a predictively-useful concept even when it's ideologically unfashionable—and I've got Something to Protect. What am I supposed to do?