From: M. Taylor Saotome-Westlake Date: Sun, 21 Aug 2022 19:00:19 +0000 (-0700) Subject: memoir: some reading/editing in late 2018 region ... X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=c9bd45636c91ff82ed9481a0fdba66278717b3b8;p=Ultimately_Untrue_Thought.git memoir: some reading/editing in late 2018 region ... Reading existing prose and making a few tweaks isn't the most productive possible motion (it definitely needs to be done at the end, but it doesn't obviously need to be done now, with so many gaps in the ms.), but it is motion. Fire and motion! --- diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md index 42a19d1..f444644 100644 --- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md @@ -26,7 +26,7 @@ In the case of Alexander's bogus argument about gender categories, the relevant Importantly, this is a very general point about how language itself works _that has nothing to do with gender_. No matter what you believe about politically-controversial empirical questions, intellectually honest people should be able to agree that "I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if [positive consequence]" is not the correct philosophy of language, _independently of the particular values of X and Y_. -Also, this ... really wasn't what I was trying to talk about. _I_ thought I was trying to talk about autogynephilia as an _empirical_ theory of psychology, the truth or falsity of which obviously cannot be altered by changing the meanings of words. 
But at this point I still trusted people in my robot cult to be basically intellectually honest, rather than fucking with me because of their political incentives, so I endeavored to respond to the category-boundary argument as if it were a serious argument: when I quit my dayjob in March 2017 in order to have more time to study and work on this blog, the capstone of my sabbatical was an exhaustive response to Alexander, ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (which Alexander [graciously included in his next links post](https://archive.ph/irpfd#selection-1625.53-1629.55)). A few months later, I followed it up with ["Reply to _The Unit of Caring_ on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/), responding to a similar argument. I'm proud of those posts: I think Alexander's and _Unit of Caring_'s arguments were incredibly dumb, and with a lot of effort, I think I did a pretty good job of explaining exactly why to anyone who was interested and didn't, at some level, prefer not to understand. +Also, this ... really wasn't what I was trying to talk about. _I_ thought I was trying to talk about autogynephilia as an _empirical_ theory of the psychology of late-onset gender dysphoria in males, the truth or falsity of which obviously cannot be altered by changing the meanings of words. 
But at this point I still trusted people in my robot cult to be basically intellectually honest, rather than fucking with me because of their political incentives, so I endeavored to respond to the category-boundary argument as if it were a serious argument: when I quit my dayjob in March 2017 in order to have more time to study and work on this blog, the capstone of my sabbatical was an exhaustive response to Alexander, ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (which Alexander [graciously included in his next links post](https://archive.ph/irpfd#selection-1625.53-1629.55)). A few months later, I followed it up with ["Reply to _The Unit of Caring_ on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/), responding to a similar argument. I'm proud of those posts: I think Alexander's and _Unit of Caring_'s arguments were incredibly dumb, and with a lot of effort, I think I did a pretty good job of explaining exactly why to anyone who was interested and didn't, at some level, prefer not to understand. Of course, a pretty good job of explaining by one niche blogger wasn't going to put much of a dent in the culture, which is the sum of everyone's blogposts; despite the mild boost from the _Slate Star Codex_ links post, my megaphone just wasn't very big. At this point, I was _disappointed_ with the limited impact of my work, but not to the point of bearing much hostility to "the community". People had made their arguments, and I had made mine; I didn't think I was _entitled_ to anything more than that. 
@@ -74,9 +74,9 @@ This "hill of meaning in defense of validity" proclamation was just such a strik One could argue the "Words can be wrong when your definition draws a boundary around things that don't really belong together" moral doesn't apply to Yudkowsky's new Tweets, which only mentioned pronouns and bathroom policies, not the [extensions](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions) of common nouns. -But this seems pretty unsatifying in the context of his claim to ["not [be] taking a stand for or against any Twitter policies"](https://twitter.com/ESYudkowsky/status/1067185907843756032). One of the Tweets that had recently led to radical feminist Meghan Murphy getting [kicked off the platform](https://quillette.com/2018/11/28/twitters-trans-activist-decree/) read simply, ["Men aren't women tho."](https://archive.is/ppV86) This doesn't seem like a policy claim; rather, Murphy was using common language to express the fact-claim that members of the natural category of adult human males, are not, in fact, members of the natural category of adult human females. +But this seems pretty unsatisfying in the context of his claim to ["not [be] taking a stand for or against any Twitter policies"](https://twitter.com/ESYudkowsky/status/1067185907843756032). One of the Tweets that had recently led to radical feminist Meghan Murphy getting [kicked off the platform](https://quillette.com/2018/11/28/twitters-trans-activist-decree/) read simply, ["Men aren't women tho."](https://archive.is/ppV86) This doesn't seem like a policy claim; rather, Murphy was using common language to express the fact-claim that members of the natural category of adult human males are not, in fact, members of the natural category of adult human females. 
-If the extension of common words like 'woman' and 'man' is an issue of epistemic importance that rationalists should care about, then presumably so is Twitter's anti-misgendering policy—and if it _isn't_ (because you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning) then I wasn't sure what's _left_ of the "Human's Guide to Words" sequence if the [37-part grand moral](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) needs to be retracted. +If the extension of common words like 'woman' and 'man' is an issue of epistemic importance that rationalists should care about, then presumably so was Twitter's anti-misgendering policy—and if it _wasn't_ (because you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning) then I wasn't sure what was _left_ of the "Human's Guide to Words" sequence if the [37-part grand moral](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) needed to be retracted. I think I _am_ standing in defense of truth if I have an _argument_ for _why_ my preferred word usage does a better job at "carving reality at the joints", and the one bringing my usage explicitly into question doesn't have such an argument. As such, I didn't see the _practical_ difference between "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning", and "I can define a word any way I want." About which, again, a previous Eliezer Yudkowsky had written: @@ -100,7 +100,7 @@ I think I _am_ standing in defense of truth if have an _argument_ for _why_ my p > > ["One may even consider the act of defining a word as a promise to \[the\] effect [...] 
\[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace) -One could argue that this is unfairly interpreting Yudkowsky's Tweets as having a broader scope than was intended—that Yudkowsky _only_ meant to slap down the specific false claim that using 'he' for someone with a Y chromosome is lying, without intending any broader implications about trans issues or the philosophy of language. It wouldn't be realistic or fair to expect every public figure to host a truly exhaustive debate on all related issues every time a fallacy they encounter in the wild annoys them enough for them to Tweet about that specific fallacy. +One could argue that this is unfairly interpreting Yudkowsky's Tweets as having a broader scope than was intended—that Yudkowsky _only_ meant to slap down the specific false claim that using "he" for someone with a Y chromosome is "lying", without intending any broader implications about trans issues or the philosophy of language. It wouldn't be realistic or fair to expect every public figure to host a truly exhaustive debate on all related issues every time a fallacy they encounter in the wild annoys them enough for them to Tweet about that specific fallacy. However, I don't think this "narrow" reading is the most natural one. Yudkowsky had previously written of what he called [the fourth virtue of evenness](http://yudkowsky.net/rational/virtues/): "If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider." 
He had likewise written [of reversed stupidity](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence) (bolding mine): @@ -128,13 +128,13 @@ I'm picking on the "sports segregated around an Aristotelian binary" remark beca Yudkowsky's claim to merely have been standing up for the distinction between facts and policy questions doesn't seem credible. It is, of course, true that pronoun and bathroom conventions are policy decisions rather than a matter of fact, but it's _bizarre_ to condescendingly point this out _as if it were the crux of contemporary trans-rights debates_. Conservatives and gender-critical feminists _know_ that trans-rights advocates aren't falsely claiming that trans women have XX chromosomes. If you _just_ wanted to point out that the organization of sports leagues is a policy question rather than a fact (as if anyone had doubted this), why would you throw in the "Aristotelian binary" strawman and belittle the matter as "humorous"? There are a lot of issues that I don't _personally_ care much about, but I don't see anything funny about the fact that other people _do_ care. -If any concrete negative consequence of gender self-identity categories is going to be waved away with, "Oh, but that's a mere _policy_ decision that can be dealt with on some basis other than gender, and therefore doesn't count as an objection to the new definition of gender words", then it's not clear what the new definition is _for_. The policymaking categories we use to make decisions are _closely related_ to the epistemic categories we use to make predictions, and people need to be able to talk about them. +If any concrete negative consequence of gender self-identity categories is going to be waved away with, "Oh, but that's a mere _policy_ decision that can be dealt with on some basis other than gender, and therefore doesn't count as an objection to the new definition of gender words", then it's not clear what the new definition is _for_. 
-An illustration: like many gender-dysphoric males, I [cosplay](/2016/Dec/joined/) [female](/2017/Oct/a-leaf-in-the-crosswind/) [characters](/2019/Aug/a-love-that-is-out-of-anyones-control/) at fandom conventions sometimes. And, unfortunately, like many gender-dysphoric males, I'm _not very good at it_. I think someone looking at some of my cosplay photos and trying to describe their content in clear language—not trying to be nice to anyone or make a point, but just trying to use language as a map that reflects the territory—would say something like, "This is a photo of a man and he's wearing a dress." The word _man_ in that sentence is expressing _cognitive work_: it's a summary of the [lawful cause-and-effect evidential entanglement](https://www.lesswrong.com/posts/6s3xABaXKPdFwA3FS/what-is-evidence) whereby the photons reflecting off the photograph are correlated with photons reflecting off my body at the time the photo was taken, which are correlated with my externally-observable secondary sex characteristics (facial structure, beard shadow, _&c._), from which evidence an agent using an [efficient naïve-Bayes-like model](https://www.lesswrong.com/posts/gDWvLicHhcMfGmwaK/conditional-independence-and-naive-bayes) can assign me to its "man" category and thereby make probabilistic predictions about some of my traits that aren't directly observable from the photo, and achieve a better [score on those predictions](http://yudkowsky.net/rational/technical/) than if the agent had assigned me to its "adult human female" category, where by "traits" I mean not (just) particularly sex chromosomes ([as Yudkowsky suggested on Twitter](https://twitter.com/ESYudkowsky/status/1067291243728650243)), but the _conjunction_ of dozens or hundreds of measurements that are [_causally downstream_ of sex chromosomes](/2021/Sep/link-blood-is-thicker-than-water/): reproductive organs _and_ muscle mass (sex difference effect size of [Cohen's 
_d_](https://en.wikipedia.org/wiki/Effect_size#Cohen's_d)≈2.6) _and_ Big Five Agreeableness (_d_≈0.5) _and_ Big Five Neuroticism (_d_≈0.4) _and_ short-term memory (_d_≈0.2, favoring women) _and_ white-to-gray-matter ratios in the brain _and_ probable socialization history _and_ [any number of other things](https://en.wikipedia.org/wiki/Sex_differences_in_human_physiology)—including differences we might not necessarily currently know about, but have prior reasons to suspect exist: no one _knew_ about sex chromosomes before 1905, but given all the other systematic differences between women and men, it would have been a reasonable guess (that turned out to be correct!) to suspect the existence of some sort of molecular mechanism of sex determination. +An illustration: like many gender-dysphoric males, I [cosplay](/2016/Dec/joined/) [female](/2017/Oct/a-leaf-in-the-crosswind/) [characters](/2019/Aug/a-love-that-is-out-of-anyones-control/) at fandom conventions sometimes. And, unfortunately, like many gender-dysphoric males, I'm _not very good at it_. I think someone looking at some of my cosplay photos and trying to describe their content in clear language—not trying to be nice to anyone or make a point, but just trying to use language as a map that reflects the territory—would say something like, "This is a photo of a man and he's wearing a dress." 
The word _man_ in that sentence is expressing _cognitive work_: it's a summary of the [lawful cause-and-effect evidential entanglement](https://www.lesswrong.com/posts/6s3xABaXKPdFwA3FS/what-is-evidence) whereby the photons reflecting off the photograph are correlated with photons reflecting off my body at the time the photo was taken, which are correlated with my externally-observable secondary sex characteristics (facial structure, beard shadow, _&c._), from which evidence an agent using an [efficient naïve-Bayes-like model](https://www.lesswrong.com/posts/gDWvLicHhcMfGmwaK/conditional-independence-and-naive-bayes) can assign me to its "man" category and thereby make probabilistic predictions about some of my traits that aren't directly observable from the photo, and achieve a better [score on those predictions](http://yudkowsky.net/rational/technical/) than if the agent had assigned me to its "adult human female" category, where by "traits" I mean not (just) particularly sex chromosomes ([as Yudkowsky suggested on Twitter](https://twitter.com/ESYudkowsky/status/1067291243728650243)), but the _conjunction_ of dozens or hundreds of measurements that are [_causally downstream_ of sex chromosomes](/2021/Sep/link-blood-is-thicker-than-water/): reproductive organs _and_ muscle mass (sex difference effect size of [Cohen's _d_](https://en.wikipedia.org/wiki/Effect_size#Cohen's_d) ≈ 2.6) _and_ Big Five Agreeableness (_d_ ≈ 0.5) _and_ Big Five Neuroticism (_d_ ≈ 0.4) _and_ short-term memory (_d_ ≈ 0.2, favoring women) _and_ white-to-gray-matter ratios in the brain _and_ probable socialization history _and_ [any number of other things](https://en.wikipedia.org/wiki/Sex_differences_in_human_physiology)—including differences we might not necessarily currently know about, but have prior reasons to suspect exist: no one _knew_ about sex chromosomes before 1905, but given all the other systematic differences between women and men, it would have been a reasonable guess (that 
turned out to be correct!) to suspect the existence of some sort of molecular mechanism of sex determination. -Forcing a speaker to say "trans woman" instead of "man" in that sentence depending on my verbally self-reported self-identity may not be forcing them to _lie_, exactly. (Because it's understood, "openly and explicitly and with public focus on the language and its meaning", what _trans women_ are; no one is making a false-to-fact claim about them having ovaries, for example.) But it _is_ forcing the speaker to obfuscate the probabilistic inference they were trying to communicate with the original sentence (about modeling the person in the photograph as being sampled from the "man" [cluster in configuration space](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace)), and instead use language that suggests a different cluster-structure ("trans women", two words, are presumably a subcluster within the "women" cluster). Crowing in the public square about how people who object to be forced to "lie" must be ontologically confused is _ignoring the interesting part of the problem_. Gender identity's [claim to be non-disprovable](https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable) mostly functions as a way to [avoid the belief's real weak points](https://www.lesswrong.com/posts/dHQkDNMhj692ayx78/avoiding-your-belief-s-real-weak-points). +Forcing a speaker to say "trans woman" instead of "man" in that sentence depending on my verbally self-reported self-identity may not be forcing them to _lie_, exactly. (Because it's understood, "openly and explicitly and with public focus on the language and its meaning", what _trans women_ are; no one is making a false-to-fact claim about them having ovaries, for example.) 
But it _is_ forcing the speaker to obfuscate the probabilistic inference they were trying to communicate with the original sentence (about modeling the person in the photograph as being sampled from the "man" [cluster in configuration space](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace)), and instead use language that suggests a different cluster-structure ("trans women", two words, are presumably a subcluster within the "women" cluster). Crowing in the public square about how people who object to being forced to "lie" must be ontologically confused is _ignoring the interesting part of the problem_. Gender identity's [claim to be non-disprovable](https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable) mostly functions as a way to [avoid the belief's real weak points](https://www.lesswrong.com/posts/dHQkDNMhj692ayx78/avoiding-your-belief-s-real-weak-points). -To this one might reply that I'm giving too much credit to the "anti-trans" coalition for how stupid they're not being: that _my_ careful dissection of the hidden probabilistic inferences implied by words (including pronoun choices) is all well and good, but that calling pronouns "lies" is not something you do when you know how to use words. +To this one might reply that I'm giving too much credit to the "anti-trans" faction for how stupid they're not being: that _my_ careful dissection of the hidden probabilistic inferences implied by words (including pronoun choices) is all well and good, but that calling pronouns "lies" is not something you do when you know how to use words. But I'm _not_ giving them credit _for understanding the lessons of "A Human's Guide to Words"_; I just think there's a useful sense of "know how to use words" that embodies a lower standard of philosophical rigor. If a person-in-the-street says of my cosplay photos, "That's a man! I _have eyes_ and I can _see_ that that's a man! 
Men aren't women!"—well, I _probably_ wouldn't want to invite such a person-in-the-street to a _Less Wrong_ meetup. But I do think the person-in-the-street is _performing useful cognitive work_. Because _I_ have the hidden-Bayesian-structure-of-language-and-cognition-sight (thanks to Yudkowsky's writings back in the 'aughts), _I_ know how to sketch out the reduction of "Men aren't women" to something more like "This [cognitive algorithm](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms) detects secondary sex characteristics and uses it as a classifier for a binary female/male 'sex' category, which it uses to make predictions about not-yet-observed features ..." @@ -156,13 +156,13 @@ If you were Alice, and a _solid supermajority_ of your incredibly smart, incredi Imagine an Islamic theocracy in which one Meghan Murphee had recently gotten kicked off the dominant microblogging platform for speaking disrespectfully about the prophet Muhammad. Suppose that [Yudkowsky's analogue in that world](/2020/Aug/yarvin-on-less-wrong/) then posted that Murphee's supporters were ontologically confused to object on free inquiry grounds: saying "peace be unto him" after the name of the prophet Muhammad is a _speech act_, not a statement of fact: Murphee wasn't being forced to lie. -I think the atheists of our world, including Yudkowsky, would not have any trouble seeing the problem with this scenario, nor hesitate to agree that it _is_ a problem for that Society's rationality. It is, of course, true as an isolated linguistics fact that saying "peace be unto him" is a speech act rather than a statement of fact, but it's _bizarre_ to condescendingly point this out _as if it were the crux of debates about religious speech codes_. The _function_ of the speech act is to signal the speaker's affirmation of Muhammad's divinity. 
That's _why_ the Islamic theocrats want to mandate that everyone says it: it's a lot harder to atheism to get any traction if no one is allowed to _talk_ like an atheist. +I think the atheists of our world, including Yudkowsky, would not have any trouble seeing the problem with this scenario, nor hesitate to agree that it _is_ a problem for that Society's rationality. It is, of course, true as an isolated linguistics fact that saying "peace be unto him" is a speech act rather than a statement of fact, but it's _bizarre_ to condescendingly point this out _as if it were the crux of debates about religious speech codes_. The _function_ of the speech act is to signal the speaker's affirmation of Muhammad's divinity. That's _why_ the Islamic theocrats want to mandate that everyone says it: it's a lot harder for atheism to get any traction if no one is allowed to _talk_ like an atheist. And that's exactly why trans advocates want to mandate against misgendering people on social media: it's harder for trans-exclusionary ideologies to get any traction if no one is allowed to _talk_ like someone who believes that sex (sometimes) matters and gender identity does not. Of course, such speech restrictions aren't necessarily "irrational", depending on your goals! If you just don't think "free speech" should go that far—if you _want_ to suppress atheism or gender-critical feminism—speech codes are a perfectly fine way to do it! And _to their credit_, I think most theocrats and trans advocates are intellectually honest about the fact that this is what they're doing: atheists or transphobes are _bad people_, and we want to make it harder for them to spread their lies or their hate. 
-In contrast, by claiming to be "not taking a stand for or against any Twitter policies" while insinuating that people who oppose the policy are ontologically confused, Yudkowsky was being either (somewhat implausibly) stupid or (more plausibly) intellectually dishonest: of _course_ the point of speech codes is suppress ideas! Given that the distinction between facts and policies is so obviously _not anyone's crux_—the smarter people in the "anti-trans" coalition already know that, and the dumber people in the coalition wouldn't change their alignment if they were taught—it's hard to see what the _point_ of harping on the fact/policy distiction would be, _except_ to be seen as implicitly taking a stand for the "pro-trans" coalition, while putting on a show of being politically "neutral." +In contrast, by claiming to be "not taking a stand for or against any Twitter policies" while insinuating that people who oppose the policy are ontologically confused, Yudkowsky was being either (somewhat implausibly) stupid or (more plausibly) intellectually dishonest: of _course_ the point of speech codes is to suppress ideas! Given that the distinction between facts and policies is so obviously _not anyone's crux_—the smarter people in the "anti-trans" faction already know that, and the dumber people in the faction wouldn't change their alignment if they were taught—it's hard to see what the _point_ of harping on the fact/policy distinction would be, _except_ to be seen as implicitly taking a stand for the "pro-trans" faction, while [putting on a show of being politically "neutral."](https://www.lesswrong.com/posts/jeyvzALDbjdjjv5RW/pretending-to-be-wise) It makes sense that Yudkowsky might perceive political constraints on what he might want to say in public—especially when you look at what happened to the _other_ Harry Potter author. 
(Despite my misgivings—and the fact that at this point it's more of a genre convention or a running joke, rather than any attempt at all to conceal my identity—this blog _is_ still published under a pseudonym; it would be hypocritical of me to accuse someone of cowardice about what they're willing to attach their real name to.) @@ -176,7 +176,7 @@ But trusting Eliezer Yudkowsky—whose writings, more than any other single infl So if the rationalists were going to get our own philosophy of language wrong over this _and Eliezer Yudkowsky was in on it_ (!!!), that was intolerable, inexplicable, incomprehensible—like there _wasn't a real world anymore_. -But if Yudkowsky was _already_ stonewalling his Twitter followers, entering the thread myself didn't seem likely to help. (And I hadn't intended to talk about gender on that account yet, although that seemed unimportant in light of the present cause for _flipping the fuck out_.) +But if Yudkowsky was _already_ stonewalling his Twitter followers, entering the thread myself didn't seem likely to help. (And I hadn't intended to talk about gender on that account yet, although that seemed unimportant in light of the present cause for flipping out.) It seemed better to try to clear this up in private. I still had Yudkowsky's email address. I felt bad bidding for his attention over my gender thing _again_—but I had to do _something_. 
Hands trembling, I sent him an email asking him to read my ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), suggesting that it may qualify as an answer to his question about ["a page [he] could read to find a non-confused exclamation of how there's scientific truth at stake"](https://twitter.com/ESYudkowsky/status/1067482047126495232)—and that, because I cared very much about correcting what I claimed were confusions in my rationalist subculture, I would be happy to pay up to $1000 for his time—and that, if he liked the post, he might consider Tweeting a link—and that I was cc'ing my friends Anna Salamon and Michael Vassar as a character reference (Subject: "another offer, $1000 to read a ~6500 word blog post about (was: Re: Happy Price offer for a 2 hour conversation)"). Then I texted Anna and Michael begging them to chime in and vouch for my credibility. @@ -186,13 +186,15 @@ Again, I realize this must seem weird and cultish to any normal people reading t Anna didn't reply, but I apparently did interest Michael, who chimed in on the email thread to Yudkowsky. We had a long phone conversation the next day lamenting how the "rationalists" were dead as an intellectual community. +[TODO: section about the policy I'm following here respecting Yudkowsky's privacy] + As for the attempt to intervene on Yudkowsky—well, [again](/2022/TODO/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#cheerful-price-privacy-constraint), I don't think I should say whether he replied to Michael's and my emails, or whether he accepted the money, because any conversation that may or may not have occurred would have been private. 
But what I _can_ say, because it was public, is we saw [this addition to the Twitter thread](https://twitter.com/ESYudkowsky/status/1068071036732694529): > I was sent this (by a third party) as a possible example of the sort of argument I was looking to read: [http://unremediatedgender.space/2018/Feb/the-categories-were-made-for-man-to-make-predictions/](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/). Without yet judging its empirical content, I agree that it is not ontologically confused. It's not going "But this is a MAN so using 'she' is LYING." -Look at that! The great _Eliezer Yudkowsky_ said that my position is not ontologically confused. That's _probably_ high praise coming from him! You might think that should be the end of the matter. Yudkowsky denounced a particular philosophical confusion; I already had a related objection written up; and he acknowledged my objection as not being the confusion he was trying to police. I _should_ be satisfied, right? +Look at that! The great Eliezer Yudkowsky said that my position is not ontologically confused. That's _probably_ high praise coming from him! You might think that should be the end of the matter. Yudkowsky denounced a particular philosophical confusion, I already had a related objection written up, and he acknowledged my objection as not being the confusion he was trying to police. I _should_ be satisfied, right? -I wasn't, in fact, satisfied. This little "not ontologically confused" clarification buried in the replies was _much less visible_ than the bombastic, arrogant top level pronouncement insinuating that resistance to gender-identity claims _was_ confused. (1 Like on this reply, _vs._ 140 Likes/21 Retweets on start of thread.) 
I expected that the typical reader who had gotten the impression from the initial thread that the great Eliezer Yudkowsky thought that gender-identity skeptics didn't have a leg to stand on, would not, actually, be disabused of this impression by the existence of this little follow-up. Was it greedy of me to want something _louder_? +I wasn't, in fact, satisfied. This little "not ontologically confused" clarification buried in the replies was _much less visible_ than the bombastic, arrogant top level pronouncement insinuating that resistance to gender-identity claims _was_ confused. (1 Like on this reply, _vs._ 140 Likes/21 Retweets on start of thread.) I expected that the typical reader who had gotten the impression from the initial thread that Yudkowsky thought that gender-identity skeptics didn't have a leg to stand on, would not, actually, be disabused of this impression by the existence of this little follow-up. Was it greedy of me to want something _louder_? Greedy or not, I wasn't done flipping out. On 1 December, I wrote to Scott Alexander (cc'ing a few other people), asking if there was any chance of an _explicit_ and _loud_ clarification or partial-retraction of ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (Subject: "super-presumptuous mail about categorization and the influence graph"). _Forget_ my boring whining about the autogynephilia/two-types thing, I said—that's a complicated empirical claim, and _not_ the key issue. @@ -394,7 +396,6 @@ Michael said that me and Jess together have more moral authority] [TODO section: wrapping up with Scott; Kelsey; high and low Church https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/] - [SECTION: treachery, faith, and the great river I concluded that further email prosecution was not useful at this time. 
My revised Category War to-do list was: @@ -443,6 +444,16 @@ I would be sympathetic to "rationalist" leaders like Anna or Yudkowsky playing t ] +[TODO: after some bouncing off the posse, what was originally an email draft became a public _Less Wrong_ post, "Where to Draw the Boundaries?" (note, plural) + * Wasn't the math overkill? + * math is important for appeal to principle—and as intimidation https://slatestarcodex.com/2014/08/10/getting-eulered/ + * four simulacra levels got kicked off here + * I could see that I'm including subtext and expecting people to only engage with the text, but if we're not going to get into full-on gender-politics on Less Wrong, but gender politics is motivating an epistemology error, I'm not sure what else I'm supposed to do! I'm pretty constrained here! + * I had already poisoned the well with "Blegg Mode" the other month, bad decision + ] + + + [TODO small section: concern about bad faith nitpicking— One reason someone might be reluctant to correct mistakes when pointed out, is the fear that such a policy could be abused by motivated nitpickers. It would be pretty annoying to be obligated to churn out an endless stream of trivial corrections by someone motivated to comb through your entire portfolio and point out every little thing you did imperfectly, ever. @@ -450,18 +461,11 @@ One reason someone might be reluctant to correct mistakes when pointed out, is t I wondered if maybe, in Scott or Eliezer's mental universe, I was a blameworthy (or pitiably mentally ill) nitpicker for flipping out over a blog post from 2014 (!) and some Tweets (!!) from November. Like, really? I, too, had probably said things that were wrong _five years ago_. But, well, I thought I had made a pretty convincing case that a lot of people are making a correctable and important rationality mistake, such that the cost of a correction (about the philosophy of language specifically, not any possible implications for gender politics) would actually be justified here. 
If someone had put _this much_ effort into pointing out an error I had made four months or five years ago and making careful arguments for why it was important to get the right answer, I think I _would_ put some serious thought into it rather than brushing them off. - ] -[TODO: after some bouncing off the posse, what was originally an email draft became a public _Less Wrong_ post, "Where to Draw the Boundaries?" (note, plural) - * Wasn't the math overkill? - * math is important for appeal to principle—and as intimidation https://slatestarcodex.com/2014/08/10/getting-eulered/ - * four simulacra levels got kicked off here - * I could see that I'm including subtext and expecting people to only engage with the text, but if we're not going to get into full-on gender-politics on Less Wrong, but gender politics is motivating an epistemology error, I'm not sure what else I'm supposed to do! I'm pretty constrained here! - * I had already poisoned the well with "Blegg Mode" the other month, bad decision - * We lost?! How could we lose??!!?!? -] + +* We lost?! How could we lose??!!?!? 
[TODO: my reluctance to write a memoir, displacement behavior diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md index 9caa716..be4f7bd 100644 --- a/notes/a-hill-of-validity-sections.md +++ b/notes/a-hill-of-validity-sections.md @@ -1,11 +1,22 @@ -editing tier— +near editing tier— _ Anna thought badmouthing Michael was OK by Michael's standards, trying to undo + +with internet available— +_ quote from "Kolmogorov complicity" about everything being connected +_ citation/explanation for saying "Peace be unto him" +_ link "other Harry Potter author" +_ address the "maybe it's good to be called names" point from "Hill" thread +_ quote part of the "Hill" thread emphasizing "it's a policy decision", not just "it's not lying", if there is one besides the "Aristotelian binary" Tweet +_ quote "maybe as a matter of policy" secondary Tweet earlier before quote _ 2019 Discord discourse with Alicorner + + +far editing tier— +_ the right way to explain how I'm respecting Yudkowsky's privacy +_ Nov. 2018 continues thread from Oct. 2016 conversation _ better explanation of posse formation -_ address the "maybe it's good to be called names" point from "Hill" thread _ maybe quote Michael's Nov 2018 texts? -_ the right way to explain how I'm respecting Yudkowsky's privacy _ clarify sequence of outreach attempts _ clarify existence of a shadow posse member _ mention Nov. 2018 conversation with Ian somehow @@ -19,6 +30,7 @@ _ explain the adversarial pressure on privacy norms _ first EY contact was asking for public clarification or "I am being silenced" (so Glomarizing over "unsatisfying response" or no response isn't leaking anything Yudkowsky cares about) _ mention the fact that Anna had always taken a "What You Can't Say" strategy + people to consult before publishing, for feedback or right of objection— _ Iceman _ Ben/Jessica