From: M. Taylor Saotome-Westlake Date: Sat, 23 Jul 2022 22:43:25 +0000 (-0700) Subject: long confrontation 10 X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=74ecb11d08d1ae8f176bc07bdb04f709f933a986;p=Ultimately_Untrue_Thought.git long confrontation 10 HRSHOE --- diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md index f69bd6d..014bdfc 100644 --- a/notes/a-hill-of-validity-sections.md +++ b/notes/a-hill-of-validity-sections.md @@ -1074,3 +1074,9 @@ Writing out this criticism now, the situation doesn't feel _confusing_, anymore. Because of my hero worship, "he's being intellectually dishonest in response to very obvious political incentives" wasn't in my hypothesis space; I _had_ to assume the thread was an "honest mistake" in his rationality lessons, rather than (what it actually was, what it _obviously_ actually was) hostile political action. (I _want_ to confidently predict that everything I've just said is completely obvious to you, because I learned it all specifically from you! A 130 IQ _nobody_ like me shouldn't have to say _any_ of this to the _author_ of "A Human's Guide to Words"! But then I don't know how to reconcile that with your recent public statement about [not seeing "how there's scientific truth at stake"](https://twitter.com/ESYudkowsky/status/1067482047126495232). Hence this desperate and [_confused_](https://www.lesswrong.com/posts/5JDkW4MYXit2CquLs/your-strength-as-a-rationalist) email plea.) + +And I'm still really confused, because I still feel like everything I'm saying here is a trivial application of Sequences-lore. If I'm getting something wrong, I should be overjoyed to be harshly corrected by the Great Teacher! A simple person like me is as but a mere worm in the presence of the great Eliezer Yudkowsky! 
But if it looks like the Great Teacher is getting something wrong (wrong with respect to the balanced flow of arguments and evidence in which every "step is precise and has no room in it for your whims", although not wrong in the sense of making a factually incorrect statement) and the Great Teacher neither corrects me nor says "OK, you're right and I was wrong, well done, my student", what am I supposed to conclude? Is this a prank—a test? Am I like Brennan in "Initiation Ceremony", being evaluated to see if I have the guts to stand by my vision of the Way in the face of social pressure? (If so, I'm not doing a very good job, because I definitely wouldn't be writing this if I hadn't gotten social proof from Michael, Ben, and Sarah.) Did I pass?? + +In a functioning rationalist community, there should never be any occasion in which "appeal to Eliezer Yudkowsky's personal authority" seems like a good strategy: the way this is supposed to work is that I should just make my arguments with the understanding that good arguments will be accepted and bad arguments will be rejected. But I've been trying that, and it's mostly not working. On any other topic, I probably would have just given up and accepted the social consensus by now: "Sure, OK, whatever, trans women are women by definition; who am I to think I've seen into the Bayes-structure?" I still think this from time to time, and feel really guilty about arguing for the Bad Guys (because in my native Blue Tribe culture, only Bad people want to talk about sexual dimorphism). But then I can't stop seeing the Bayes-structure that says that biological sex continues to be a predictively-useful concept even when it's ideologically unfashionable—and I've got Something to Protect. What am I supposed to do? + +I agree that this is the only reason you should care. 
diff --git a/notes/a-hill-twitter-reply.md b/notes/a-hill-twitter-reply.md index 4907c5c..c1eef74 100644 --- a/notes/a-hill-twitter-reply.md +++ b/notes/a-hill-twitter-reply.md @@ -100,64 +100,47 @@ An illustrative example: like many gender-dysphoric males, I [cosplay](/2016/Dec Forcing a speaker to say "trans woman" instead of "man" in that sentence depending on my verbally self-reported self-identity may not be forcing them to _lie_, exactly. (Because it's understood, "openly and explicitly and with public focus on the language and its meaning", what _trans women_ are; no one is making a false-to-fact claim about them having ovaries, for example.) But it _is_ forcing the speaker to obfuscate the probabilistic inference they were trying to communicate with the original sentence (about modeling the person in the photograph as being sampled from the "men" [cluster in configuration space](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace)), and instead use language that suggests a different cluster-structure ("trans women", two words, are presumably a subcluster within the "women" cluster). Crowing in the public square about how people who object to be forced to "lie" must be ontologically confused is _ignoring the interesting part of the problem_. Gender identity's [claim to be non-disprovable](https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable) mostly functions as a way to [avoid the belief's real weak points](https://www.lesswrong.com/posts/dHQkDNMhj692ayx78/avoiding-your-belief-s-real-weak-points). +To this one might reply that I'm giving too much credit to the "anti-trans" coalition for how stupid they're not being: that _my_ careful dissection of the hidden probabilistic inferences implied by pronoun choices is all well and good, but that calling pronouns "lies" is not something you do when you know how to use words. 
-[not entitled to ignore when dumb people have a point] +But I'm _not_ giving them credit _for understanding the lessons of "A Human's Guide to Words"_; I just think there's a useful sense of "know how to use words" that embodies a lower standard of philosophical rigor. If a person-in-the-street says of my cosplay photos, "That's a man! I _have eyes_ and I can _see_ that that's a man! Men aren't women!"—well, I _probably_ wouldn't want to invite such a person-in-the-street to a _Less Wrong_ meetup. But I do think the person-in-the-street is _performing useful cognitive work_. Because _I_ have the hidden-Bayesian-structure-of-language-and-cognition-sight (thanks to Yudkowsky's writings back in the 'aughts), _I_ know how to sketch out the reduction of "Men aren't women" to something more like "This [cognitive algorithm](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms) detects secondary sex characteristics and uses it as a classifier for a binary female/male 'sex' category, which it uses to make predictions about not-yet-observed features ..." -This philosophical point is distinct from my earlier claims supporting Blanchard's two-type ta - -think you _already_ have enough evidence—if [used efficiently](https://www.lesswrong.com/posts/MwQRucYo6BZZwjKE7/einstein-s-arrogance)—to see that the distribution of actual trans people we know is such that the categories-are-not-abritrary point is relevant in practice. - -Consider again the 6.7:1 (!!) cis-woman-to-trans-woman ratio among 2018 _Slate Star Codex_ survey respondents. +But having _done_ the reduction-to-cognitive-algorithms, it still looks like the person-in-the-street _has a point_ that I shouldn't be allowed to ignore just because I have 30 more IQ points and better philosophy-of-language skills? As it is written: "intelligence, to be useful, must be used for something other than defeating itself."
-A curious rationalist, having been raised to believe that trans women are women, and considering observations like this, might ask the question: "Gee, I wonder _why_ women-who-happen-to-be-trans are _so much_ more likely to read _Slate Star Codex_, and be attracted to women, and, um, have penises, than women-who-happen-to-be-cis?" +I bring up my bad cosplay photos as an edge case that helps illustrate the problem I'm trying to point out, much like how people love to bring up [complete androgen insensitivity syndrome](https://en.wikipedia.org/wiki/Complete_androgen_insensitivity_syndrome) to illustrate why "But chromosomes!" isn't the correct reduction of sex classification. But to differentiate what I'm saying from mere blind transphobia, let me note that I predict that most people-in-the-street would be comfortable using feminine pronouns for someone like [Blaire White](http://msblairewhite.com/). That's evidence about the kind of cognitive work people's brains are doing when they use English language singular third-person pronouns! Certainly, English is not the only language; ours is not the only culture; maybe there is a way to do gender categories that would be more accurate and better for everyone! But to _find_ what that better way is, I think we need to be able to _talk_ about these kinds of details in public. And _in practice_, the attitude evinced in Yudkowsky's Tweets seemed to function as a [semantic stopsign](https://www.lesswrong.com/posts/FWMfQKG3RpZx6irjm/semantic-stopsigns) to get people to stop talking about the details. -If you're _very careful_, I'm sure it's possible to give a truthful answer to that question without misgendering anyone. 
But if you want to give a _concise_ answer—perhaps not a _maximally rigorous_ answer, but an answer that usefully [points](https://www.lesswrong.com/posts/YF9HB6cWCJrDK5pBM/words-as-mental-paintbrush-handles) to the true causal-structure-in-the-world while still fitting in a Tweet—I think you _need_ to be able to say something like, "Because trans women are men." (At least as a _live hypothesis_, even if you prefer an intersex-brain etiology for the people we know.) - -Maybe we'd _usually_ prefer not to phrase it like that, both for reasons of politeness, and because we can be more precise at the cost of using more words ("Interests and sexual orientation may better predicted by natal sex rather than social gender in this population; also, not all trans women have had sex reassignment surgery and so retain their natal-sex anatomy"?). But I think the short version needs to be _sayable_, because if it's not _sayable_, then that's artificially restricting the hypothesis spaces that people use to think with, which is bad (if you care about human intelligence being useful). +If you were actually interested in having a real discussion (instead of a fake discussion that makes you look good to progressives), why would you slap down the "But, but, chromosomes" idiocy and then not engage with the _drop-dead obvious_ "But, but, clusters in high-dimensional configuration space that [aren't actually changeable with contemporary technology](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions)" steelman, [which was, in fact, brought up in the replies](https://twitter.com/EnyeWord/status/1068983389716385792)? Satire is a very weak form of argument: the one who wishes to doubt will always be able to find some aspect in which the obviously-absurd satirical situation differs from the real-world situation being satirized, and claim that that difference destroys the relevance of the joke.
But on the off-chance that it might help _illustrate_ my concern, imagine you lived in a so-called "rationalist" subculture where conversations like this happened— -**Bob**: "Look at this [adorable cat picture](https://twitter.com/mydogiscutest/status/1079125652282822656)!" -**Alice**: "Um, that looks like a dog to me, actually." -**Bob**: "[You're not standing](https://twitter.com/ESYudkowsky/status/1067198993485058048) in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. [Now, maybe as a matter of policy](https://twitter.com/ESYudkowsky/status/1067294823000887297), you want to make a case for language being used a certain way. Well, that's a separate debate then." - -If you were Alice, and a _solid supermajority_ of your incredibly smart, incredibly philosophically sophisticated friend group _including Eliezer Yudkowsky_ (!!!) seemed to behave like Bob (and reaped microhedonic social rewards for it in the form of, _e.g._, hundreds of Twitter likes), that would be a _pretty worrying_ sign about your friends' ability to accomplish intellectually hard things (_e.g._, AI alignment), right? Even if there isn't any pressing practical need to discriminate between dogs and cats, the _problem_ is that Bob is [_selectively_](http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) using his sophisticated (and correct!) philosophy-of-language insight to try to _undermine Alice's ability to use language to make sense of the world_, even though Bob obviously knows goddamned well what Alice was trying to say. +
+
+Bob: "Look at this [adorable cat picture](https://twitter.com/mydogiscutest/status/1079125652282822656)!"
+
+Alice: "Um, that looks like a dog to me, actually."
+
+Bob: "[You're not standing](https://twitter.com/ESYudkowsky/status/1067198993485058048) in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. [Now, maybe as a matter of policy](https://twitter.com/ESYudkowsky/status/1067294823000887297), you want to make a case for language being used a certain way. Well, that's a separate debate then."
+
- _incredibly_ obfuscatory in a way that people would not tolerate in almost _any_ other context. +If you were Alice, and a _solid supermajority_ of your incredibly smart, incredibly philosophically sophisticated friend group _including Eliezer Yudkowsky_ (!!!) seemed to behave like Bob (and reaped microhedonic social rewards for it in the form of, _e.g._, hundreds of Twitter likes), that would be a _pretty worrying_ sign about your friends' ability to accomplish intellectually hard things (_e.g._, AI alignment), right? Even if there isn't any pressing practical need to discriminate between dogs and cats, the _problem_ is that Bob is [_selectively_](http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) using his sophisticated philosophy-of-language insight to try to _undermine Alice's ability to use language to make sense of the world_, even though Bob obviously knows goddamned well what Alice was trying to say; it's _incredibly_ obfuscatory in a way that people would not tolerate in almost _any_ other context. -With respect to transgender issues, this certainly _can_ go both ways: somewhere on Twitter, there are members of the "anti-trans" political coalition insisting, "No, that's _really_ a man because chromosomes" even though they know that that's not what members of the "pro-trans" coalition mean—although as stated earlier, I don't think Eric Weinstein is guilty of this. But given the likely distribution of your Twitter followers and what they need to hear, I'm very worried about the _consequences_ (again, remaining agnostic about your private intent) of slapping down the "But, but, chromosomes" idiocy and then not engaging with the _drop-dead obvious_ "But, but, clusters in high-dimensional configuration space that [aren't actually changeable with contemporary technology](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions)" steelman. 
-It makes sense that (I speculate) you might perceive political constraints on what you want to say in public. (I still work under a pseudonym myself; it would be wildly hypocritical of me to accuse anyone else of cowardice!) But I suspect that if you want to not get into a distracting political fight about topic X, then maybe the responsible thing to do is just not say anything about topic X, rather than engaging with the _stupid_ version of anti-X, and then [stonewalling](https://www.lesswrong.com/posts/wqmmv6NraYv4Xoeyj/conversation-halters) with "That's a policy question" when people [try to point out the problem](https://twitter.com/samsaragon/status/1067238063816945664)? -_bizarre_ to condescendingly point this out _as if it were the crux of contemporary trans-rights debates_. Conservatives and gender-critical feminists _know_ that trans-rights advocates aren't falsely claiming that trans women have XX chromosomes. But the question of what categories epistemically "carve reality at the joints", is _not unrelated_ to the question of which categories to use in policy decisions: the _function_ of sex-segrated bathrooms is to protect females from males, where "females" and "males" are natural clusters in configuration space that it makes sense to want words to refer to. -I agree that this is the only reason you should care. - +This philosophical point is distinct from my earlier claims supporting Blanchard's two-type ta -I'm not giving them credit for understanding the lessons of "A Human's Guide to Words", but I think there's a useful sense of "know how to use words" that embodies a lower standard of philosophical rigor than that: people did, in fact, use languge to build this entire technological civilization even though (unfortunately) the vast majority of them have read neither you nor S. I. Hayakawa. 
+think you _already_ have enough evidence—if [used efficiently](https://www.lesswrong.com/posts/MwQRucYo6BZZwjKE7/einstein-s-arrogance)—to see that the distribution of actual trans people we know is such that the categories-are-not-arbitrary point is relevant in practice. -If a person-in-the-street says of my cosplay photos, or self-identified trans woman Danielle Muscato, "That's a man! I have eyes and I can see that that's a man! Men aren't women!"—well, I probably wouldn't want to invite such a person-in-the-street to a Less Wrong meetup. But I do think the person-in-the-street is performing useful cognitive work. (A rock couldn't do that!) Because I have the hidden-Bayesian-structure-of-language-and-cognition-sight (thanks!!), I know how to sketch out the reduction of "Men aren't women" to something more like "This cognitive algorithm detects secondary sex characteristics and uses it as a classifier for a binary female/male 'sex' category, which it uses to make predictions about not-yet-observed features ..." +Consider again the 6.7:1 (!!) cis-woman-to-trans-woman ratio among 2018 _Slate Star Codex_ survey respondents. -But having done the reduction-to-cognitive-algorithms, it still looks like the person-in-the-street has a point that I shouldn't be allowed to ignore just because I have 30 more IQ points and better philosophy-of-language skills? As it is written: "intelligence, to be useful, must be used for something other than defeating itself." +A curious rationalist, having been raised to believe that trans women are women, and considering observations like this, might ask the question: "Gee, I wonder _why_ women-who-happen-to-be-trans are _so much_ more likely to read _Slate Star Codex_, and be attracted to women, and, um, have penises, than women-who-happen-to-be-cis?"
-I bring up me and Danielle Muscato as examples because I think those are edge cases that help illustrate the problem I'm trying to point out, much like how people love to bring up complete androgen insensitivity syndrome to illustrate why "But chromosomes!" isn't the correct reduction of sex classification. But to differentiate what I'm saying from mere blind transphobia, let me note that I predict that most people-in-the-street would be comfortable using feminine pronouns for someone like Blaire White (who is also trans). That's evidence about the kind of cognitive work people's brains are doing when they use English language singular third-person pronouns! Certainly, English is not the only language; ours is not the only culture; maybe there is a way to do gender categories that would be more accurate and better for everyone! But to find what that better way is, I think we need to be able to talk about these kinds of details in public. And I think statements like "Calling pronouns lies is not what you do when you know how to use words" hinder that discussion rather than helping it, by functioning as semantic stopsigns. +If you're _very careful_, I'm sure it's possible to give a truthful answer to that question without misgendering anyone. But if you want to give a _concise_ answer—perhaps not a _maximally rigorous_ answer, but an answer that usefully [points](https://www.lesswrong.com/posts/YF9HB6cWCJrDK5pBM/words-as-mental-paintbrush-handles) to the true causal-structure-in-the-world while still fitting in a Tweet—I think you _need_ to be able to say something like, "Because trans women are men." (At least as a _live hypothesis_, even if you prefer an intersex-brain etiology for the people we know.) -Again, satire is a very weak form of argument, but if it helps at all, I feel like Alice in the following dialogue. 
+Maybe we'd _usually_ prefer not to phrase it like that, both for reasons of politeness, and because we can be more precise at the cost of using more words ("Interests and sexual orientation may be better predicted by natal sex rather than social gender in this population; also, not all trans women have had sex reassignment surgery and so retain their natal-sex anatomy"?). But I think the short version needs to be _sayable_, because if it's not _sayable_, then that's artificially restricting the hypothesis spaces that people use to think with, which is bad (if you care about human intelligence being useful). -Bob (loudly, in the public square): When people say "Now let us bow our heads and praise the Lord our God", they're not lying, because "Now let us bow our heads" is a speech act, not a statement of fact. -Alice (via private email): I agree that it's a speech act rather than a factual assertion, but isn't that observation pretty misleading in isolation? I don't understand why you would say that and only that, unless you were deliberately trying to get your readers to believe in God without actually having to say "You should believe in God." -Bob: Calling speech acts "lies" is not what you do when you know how to use words. But mostly, I think this is not very important. +With respect to transgender issues, this certainly _can_ go both ways: somewhere on Twitter, there are members of the "anti-trans" political coalition insisting, "No, that's _really_ a man because chromosomes" even though they know that that's not what members of the "pro-trans" coalition mean—although as stated earlier, I don't think Eric Weinstein is guilty of this. But given the likely distribution of your Twitter followers and what they need to hear, I'm very worried about the _consequences_ (again, remaining agnostic about your private intent) of
But are they relevant differences? To be sure, "Does God exist?" is a much more straightforward question than "Are trans women women?" because existence questions in general are easier than parismonious-categorization-that-carves-nature-at-the-joints questions. But I think that "when you take a step back, feel the flow of debate, observe the cognitive traffic signals", the satirical dialogue is exhibiting the same structural problems as the conversation we're actually having. -Can you think of any other context where "Now, maybe as a matter of policy, you want to make a case for language being used a certain way. Well, that's a separate debate then" would seem like a smart thing to say? That's not a rhetorical question. This actually seems like a broken conversation pattern for any X, Y, and Z: +It makes sense that (I speculate) you might perceive political constraints on what you want to say in public. (I still work under a pseudonym myself; it would be wildly hypocritical of me to accuse anyone else of cowardice!) But I suspect that if you want to not get into a distracting political fight about topic X, then maybe the responsible thing to do is just not say anything about topic X, rather than engaging with the _stupid_ version of anti-X, and then [stonewalling](https://www.lesswrong.com/posts/wqmmv6NraYv4Xoeyj/conversation-halters) with "That's a policy question" when people [try to point out the problem](https://twitter.com/samsaragon/status/1067238063816945664)? -Alice: It's not true that X is an instance of Y, because of reason Z! -Bob: Using language in a way you dislike, openly and explicitly and with public focus on the language and its meaning, is not lying. Now, maybe as a matter of policy, you want to make a case for language being used a certain way. Well, that's a separate debate then. -Alice: Fine, have it your way. As a matter of policy, I argue that we should use language such that we would say that X is not an instance of Y. 
And the reason that's a good policy decision is Z. -Bob: ... um, sorry, out of time, gotta go. +_bizarre_ to condescendingly point this out _as if it were the crux of contemporary trans-rights debates_. Conservatives and gender-critical feminists _know_ that trans-rights advocates aren't falsely claiming that trans women have XX chromosomes. But the question of what categories epistemically "carve reality at the joints" is _not unrelated_ to the question of which categories to use in policy decisions: the _function_ of sex-segregated bathrooms is to protect females from males, where "females" and "males" are natural clusters in configuration space that it makes sense to want words to refer to. -And I'm still really confused, because I still feel like everything I'm saying here is a trivial application of Sequences-lore. If I'm getting something wrong, I should be overjoyed to be harshly corrected by the Great Teacher! A simple person like me is as but a mere worm in the presence of the great Eliezer Yudkowsky! But if it looks like the Great Teacher is getting something wrong (wrong with respect to the balanced flow of arguments and evidence in which every "step is precise and has no room in it for your whims", although not wrong in the sense of making a factually incorrect statement) and the Great Teacher neither corrects me nor says "OK, you're right and I was wrong, well done, my student", what am I supposed to conclude? Is this a prank—a test? Am I like Brennan in "Initiation Ceremony", being evaluated to see if I have the guts to stand by my vision of the Way in the face of social pressure? (If so, I'm not doing a very good job, because I definitely wouldn't be writing this if I hadn't gotten social proof from Michael, Ben, and Sarah.) Did I pass??
-In a functioning rationalist community, there should never be any occasion in which "appeal to Eliezer Yudkowsky's personal authority" seems like a good strategy: the way this is supposed to work is that I should just make my arguments with the understanding that good arguments will be accepted and bad arguments will be rejected. But I've been trying that, and it's mostly not working. On any other topic, I probably would have just given up and accepted the social consensus by now: "Sure, OK, whatever, trans women are women by definition; who am I to think I've seen into the Bayes-structure?" I still think this from time to time, and feel really guilty about arguing for the Bad Guys (because in my native Blue Tribe culture, only Bad people want to talk about sexual dimorphism). But then I can't stop seeing the Bayes-structure that says that biological sex continues to be a predictively-useful concept even when it's ideologically unfashionable—and I've got Something to Protect. What am I supposed to do?