From: M. Taylor Saotome-Westlake Date: Fri, 16 Jun 2023 04:56:27 +0000 (-0700) Subject: memoir: "A Hill" editing sweep to end X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=1be0cbdb87758fc25e8667eb428ae5d971f5a978;p=Ultimately_Untrue_Thought.git memoir: "A Hill" editing sweep to end --- diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md index 187ffd0..7b702d2 100644 --- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md @@ -160,7 +160,7 @@ Satire is a very weak form of argument: the one who wishes to doubt will always

Bob: Look at this adorable cat picture!

Alice: Um, that looks like a dog to me, actually.

-

Bob: You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. Now, maybe as a matter of policy, you want to make a case for language being used a certain way. Well, that's a separate debate then.

+

Bob: You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. Now, maybe as a matter of policy, you want to make a case for language being used a certain way. Well, that's a separate debate then.

⁕ ⁕ ⁕

@@ -465,70 +465,74 @@ The language I spoke was _mostly_ educated American English, but I relied on sub Maybe that's why I felt like I had to stand my ground and fight for the world I was made in, even though the contradiction between the war effort and my general submissiveness was having me making crazy decisions. -Michael said that a reason to make a stand here in "the community" was that if we didn't, the beacon of "rationalism" would continue to lure and mislead others, but that more importantly, we needed to figure out how to win this kind of argument decisively, as a group; we couldn't afford to accept a _status quo_ of accepting defeat when faced with bad faith arguments _in general_. Ben reported writing to Scott to ask him to alter the beacon so that people like me wouldn't think "the community" was the place to go for literally doing the rationality thing anymore. +Michael said that a reason to make a stand here in "the community" was that if we didn't, the [beacon](http://benjaminrosshoffman.com/construction-beacons/) of "rationalism" would continue to lure and mislead others, but that more importantly, we needed to figure out how to win this kind of argument decisively, as a group; we couldn't afford to accept a _status quo_ of accepting defeat when faced with bad faith arguments _in general_. Ben reported writing to Scott to ask him to alter the beacon so that people like me wouldn't think "the community" was the place to go for literally doing the rationality thing anymore. As it happened, the next day, Wednesday, we saw these Tweets from @ESYudkowsky, linking to a _Quillette_ article interviewing Lisa Littman on her work on rapid onset gender dysphoria: > [Everything more complicated than](https://twitter.com/ESYudkowsky/status/1108277090577600512) protons tends to come in varieties. Hydrogen, for example, has isotopes. Gender dysphoria involves more than one proton and will probably have varieties. https://quillette.com/2019/03/19/an-interview-with-lisa-littman-who-coined-the-term-rapid-onset-gender-dysphoria/ - +> > [To be clear, I don't](https://twitter.com/ESYudkowsky/status/1108280619014905857) know much about gender dysphoria. There's an allegation that people are reluctant to speciate more than one kind of gender dysphoria. To the extent that's not a strawman, I would say only in a generic way that GD seems liable to have more than one species. (Why now? Maybe he saw the tag in my "tools have shattered" Tweet on Monday, or maybe the _Quillette_ article was just timely?) -The most obvious reading of these Tweets was as a "concession" to my general political agenda. The two-type taxonomy of MtF was the thing I was _originally_ trying to talk about, back in 2016–2017, before getting derailed onto the present philosophy-of-language war, and here Yudkowsky was backing up "my side" on that by publicly offering an argument that there's probably a more-than-one-type typology. +The most obvious reading of these Tweets was as a "concession" to my political agenda. The two-type taxonomy of MtF was the thing I was _originally_ trying to talk about, back in 2016–2017, before getting derailed onto the present philosophy-of-language war, and here Yudkowsky was backing up "my side" on that by publicly offering an argument that there's probably a more-than-one-type typology. -At this point, some readers might think that should have been the end of the matter, that I should have been satisfied. 
I had started the recent drama flare-up because Yudkowsky had Tweeted something unfavorable to my agenda. But now, Yudkowsky was Tweeting something _favorable_ to my agenda! Wouldn't it be greedy and ungrateful for me to keep criticizing him about the pronouns and language thing, given that he'd thrown me a bone here? Shouldn't I "call it even"? +At this point, some readers might think that that should have been the end of the matter, that I should have been satisfied. I had started the recent drama flare-up because Yudkowsky had Tweeted something unfavorable to my agenda. But now, Yudkowsky was Tweeting something _favorable_ to my agenda! Wouldn't it be greedy and ungrateful for me to keep criticizing him about the pronouns and language thing, given that he'd thrown me a bone here? Shouldn't I "call it even"? That's not how it works. The entire concept of there being "sides" to which one can make "concessions" is an artifact of human coalitional instincts; it's not something that _actually makes sense_ as a process for constructing a map that reflects the territory. My posse and I were trying to get a clarification about a philosophy-of-language claim Yudkowsky had made a few months prior ("you're not standing in defense of truth if [...]"), which I claimed was substantively misleading. Why would we stop prosecuting that, because of this _unrelated_ Tweet about the etiology of gender dysphoria? That wasn't the thing we were trying to clarify! Moreover—and I'm embarrassed that it took me another day to realize this—this new argument from Yudkowsky about the etiology of gender dysphoria was actually _wrong_. As I would later get around to explaining in ["On the Argumentative Form 'Super-Proton Things Tend to Come in Varieties'"](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/), when people claim that some psychological or medical condition "comes in varieties", they're making a substantive _empirical_ claim that the [causal or statistical structure](/2021/Feb/you-are-right-and-i-was-wrong-reply-to-tailcalled-on-causality/) of the condition is usefully modeled as distinct clusters, not merely making the trivial observation that instances of the condition are not identical down to the subatomic level. -As such, we _shouldn't_ think that there are probably multiple kinds of gender dysphoria _because things are made of protons_ (?!?). If anything, _a priori_ reasoning about the cognitive function of categorization should actually cut in the other direction, (mildly) _against_ rather than in favor of multi-type theories: you only want to add more categories to your theory [if they can pay for their additional complexity with better predictions](https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length). If you believe in Blanchard–Bailey–Lawrence's two-type taxonomy of MtF, or Littman's proposed rapid-onset type, it should be on the _empirical_ merits, not because multi-type theories are especially more likely to be true. +As such, we _shouldn't_ think that there are probably multiple kinds of gender dysphoria _because things are made of protons_ (?!?). If anything, _a priori_ reasoning about the cognitive function of categorization should actually cut in the other direction, (mildly) _against_ rather than in favor of multi-type theories: you only want to add more categories to your theory [if they can pay for their additional complexity with better predictions](https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length). 
If you believe in Blanchard–Bailey–Lawrence's two-type taxonomy of MtF, or Littman's proposed rapid-onset type, it should be on the empirical merits, not because multi-type theories are _a priori_ more likely to be true (which they aren't). Had Yudkowsky been thinking that maybe if he Tweeted something favorable to my agenda, then me and the rest of Michael's gang would be satisfied and leave him alone? But ... if there's some _other_ reason you suspect there might be multiple species of dysphoria, but you _tell_ people your suspicion is because dysphoria has more than one proton, you're still misinforming people for political reasons, which was the _general_ problem we were trying to alert Yudkowsky to. (Someone who trusted you as a source of wisdom about rationality might try to apply your _fake_ "everything more complicated than protons tends to come in varieties" rationality lesson in some other context, and get the wrong answer.) Inventing fake rationality lessons in response to political pressure is _not okay_, and the fact that in this case the political pressure happened to be coming from _me_, didn't make it okay. -I asked the posse if this analysis was worth sending to Yudkowsky. Michael said it wasn't worth the digression. He asked if I was comfortable generalizing from Scott's behavior, and what others had said about fear of speaking openly, to assuming that something similar was going on with Eliezer? If so, then now that we had common knowledge, we needed to confront the actual crisis, which was that dread was tearing apart old friendships and causing fanatics to betray everything that they ever stood for while its existence was still being denied. +I asked the posse if this analysis was worth sending to Yudkowsky. Michael said it wasn't worth the digression. He asked if I was comfortable generalizing from Scott's behavior, and what others had said about fear of speaking openly, to assuming that something similar was going on with Eliezer? If so, then now that we had common knowledge, we needed to confront the actual crisis, "that dread is tearing apart old friendships and causing fanatics to betray everything that they ever stood for while it's existence is still being denied." + +----- -Another thing that happened that week was that former MIRI researcher Jessica Taylor joined our posse (being at an in-person meeting with Ben and Sarah and another friend on the seventeenth, and getting tagged in subsequent emails). I had met Jessica for the first time in March 2017, shortly after my psychotic break, and I had been part of the group trying to take care of her when she had her own severe psychological problems in late 2017, but other than that, we hadn't been particularly close. +Another thing that happened that week was that former MIRI researcher Jessica Taylor joined our posse (being at an in-person meeting with Ben and Sarah and another friend on the seventeenth, and getting tagged in subsequent emails). I had met Jessica for the first time in March 2017, shortly after my psychotic break, and I had been part of the group trying to take care of her when she had [her own break in late 2017](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards), but other than that, we hadn't been particularly close. Significantly for political purposes, Jessica is trans. We didn't have to agree up front on all gender issues for her to see the epistemology problem with "... 
Not Man for the Categories", and to say that maintaining a narcissistic fantasy by controlling category boundaries wasn't what _she_ wanted, as a trans person. (On the seventeenth, when I lamented the state of a world that incentivized us to be political enemies, her response was, "Well, we could talk about it first.") Michael said that me and Jessica together had more moral authority than either of us alone. -As it happened, I ran into Scott on the train that Friday, the twenty-second. He said that he wasn't sure why the oft-repeated moral of "A Human's Guide to Words" had been "You can't define a word any way you want" rather than "You _can_ define a word any way you want, but then you have to deal with the consequences." +As it happened, I ran into Scott on the [BART](https://en.wikipedia.org/wiki/Bay_Area_Rapid_Transit) train that Friday, the twenty-second. He said that he wasn't sure why the oft-repeated moral of "A Human's Guide to Words" had been "You can't define a word any way you want" rather than "You _can_ define a word any way you want, but then you have to deal with the consequences." -Ultimately, I think this was a pedagogy decision that Yudkowsky had gotten right back in 'aught-eight. If you write your summary slogan in relativist language, people predictably take that as license to believe whatever they want without having to defend it. Whereas if you write your summary slogan in objectivist language—so that people know they don't have social permission to say that "it's subjective so I can't be wrong"—then you have some hope of sparking useful thought about the _exact, precise_ ways that _specific, definite_ things are _in fact_ relative to other specific, definite things. +Ultimately, I thought this was a pedagogy decision that Yudkowsky had gotten right back in 'aught-eight. If you write your summary slogan in relativist language, people predictably take that as license to believe whatever they want without having to defend it. Whereas if you write your summary slogan in objectivist language—so that people know they don't have social permission to say that "it's subjective so I can't be wrong"—then you have some hope of sparking useful thought about the _exact, precise_ ways that _specific, definite_ things are _in fact_ relative to other specific, definite things. I told Scott I would send him one more email with a piece of evidence about how other "rationalists" were thinking about the categories issue, and give my commentary on the parable about orcs, and then the present thread would probably drop there. -On Discord in January, Kelsey Piper had told me that everyone else experienced their disagreement with me as being about where the joints are and which joints are important, where usability for humans was a legitimate criterion for importance, and it was annoying that I thought they didn't believe in carving reality at the joints at all and that categories should be whatever makes people happy. +Concerning what others were thinking: on Discord in January, Kelsey Piper had told me that everyone else experienced their disagreement with me as being about where the joints are and which joints are important, where usability for humans was a legitimate criterion of importance, and it was annoying that I thought they didn't believe in carving reality at the joints at all and that categories should be whatever makes people happy. 
I [didn't want to bring it up at the time because](https://twitter.com/zackmdavis/status/1088459797962215429) I was so overjoyed that the discussion was actually making progress on the core philosophy-of-language issue, but ... Scott _did_ seem to be pretty explicit that his position was about happiness rather than usability? If Kelsey _thought_ she agreed with Scott, but actually didn't, that was kind of bad for our collective sanity, wasn't it? -As for the parable about orcs, I thought it was significant that Scott chose to tell the story from the standpoint of non-orcs deciding what [verbal behaviors](https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password) to perform while orcs are around, rather than the standpoint of the _orcs themselves_. For one thing, how do you _know_ that serving evil-Melkior is a life of constant torture? Is it at all possible, in the bowels of Christ, that someone has given you _misleading information_ about that? Moreover, you _can't_ just give an orc a clever misinterpretation of an oath and have them believe it. First you have to [cripple their _general_ ability](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) to correctly interpret oaths, for the same reason that you can't get someone to believe that 2+2=5 without crippling their _general_ ability to do arithmetic. We weren't not talking about a little "white lie" that the listener will never get to see falsified (like telling someone their dead dog is in heaven); the orcs _already know_ the text of the oath, and you have to break their ability to _understand_ it. Are you willing to permanently damage an orc's ability to reason, in order to save them pain? For some sufficiently large amount of pain, surely. But this isn't a choice to make lightly—and the choices people make to satisfy their own consciences, don't always line up with the volition of their alleged beneficiaries. We think we can lie to save others from pain, without ourselves _wanting to be lied to_. But behind the veil of ignorance, it's the same choice! +As for the parable about orcs, I thought it was significant that Scott chose to tell the story from the standpoint of non-orcs deciding what [verbal behaviors](https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password) to perform while orcs are around, rather than the standpoint of the _orcs themselves_. For one thing, how do you _know_ that serving evil-Melkor is a life of constant torture? Is it at all possible, in the bowels of Christ, that someone has given you _misleading information_ about that? -I _also_ had more to say about philosophy of categories: I thought I could be more rigorous about the difference between "caring about predicting different variables" and "caring about consequences", in a way that Eliezer would _have_ to understand even if Scott didn't. (Scott had claimed that he could use gerrymandered categories and still be just as good at making predictions—but that's just not true if we're talking about the _internal_ use of categories as a [cognitive algorithm](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms), rather than mere verbal behavior: it's always easy to _say_ "_X_ is a _Y_" for arbitrary _X_ and _Y_ if the stakes demand it, but if you're _actually_ using that concept of _Y_ internally, that does have effects on your world-model.) +Moreover, you _can't_ just give an orc a clever misinterpretation of an oath and have them believe it. 
First you have to [cripple their _general_ ability](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) to correctly interpret oaths, for the same reason that you can't get someone to believe that 2+2=5 without crippling their _general_ ability to do arithmetic. We weren't talking about a little "white lie" that the listener will never get to see falsified (like telling someone their dead dog is in heaven); the orcs _already know_ the text of the oath, and you have to break their ability to _understand_ it. Are you willing to permanently damage an orc's ability to reason, in order to save them pain? For some sufficiently large amount of pain, surely. But this isn't a choice to make lightly—and the choices people make to satisfy their own consciences, don't always line up with the volition of their alleged beneficiaries. We think we can lie to save others from pain, without ourselves _wanting to be lied to_. But behind the veil of ignorance, it's the same choice! + +I _also_ had more to say about philosophy of categories: I thought I could be more rigorous about the difference between "caring about predicting different variables" and "caring about consequences", in a way that Eliezer would _have_ to understand even if Scott didn't. (Scott had claimed that he could use gerrymandered categories and still be just as good at making predictions—but that's just not true if we're talking about the _internal_ use of categories as a [cognitive algorithm](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms), rather than mere verbal behavior: it's always easy to _say_ "_X_ is a _Y_" for arbitrary _X_ and _Y_ if the stakes demand it, but that's not the same thing as using that concept of _Y_ internally as part of your world-model.) But after consultation with the posse, I concluded that further email prosecution was not useful at this time; the philosophy argument would work better as a public _Less Wrong_ post. So my revised Category War to-do list was: - * Send the brief wrapping-up/end-of-conversation email to Scott (with the Discord anecdote with Kelsey and commentary on the orc story). + * Send the brief wrapping-up/end-of-conversation email to Scott (with the Discord anecdote about Kelsey and commentary on the orc story). * Mentally write-off Scott, Eliezer, and the so-called "rationalist" community as a loss so that I wouldn't be in horrible emotional pain from cognitive dissonance all the time. * Write up the mathy version of the categories argument for _Less Wrong_ (which I thought might take a few months—I had a dayjob, and write slowly, and might need to learn some new math, which I'm also slow at). * _Then_ email the link to Scott and Eliezer asking for a signal-boost and/or court ruling. -Ben didn't think the mathematically precise categories argument was the most important thing for _Less Wrong_ readers to know about: a similarly careful explanation of why I've written off Scott, Eliezer, and the "rationalists" would be way more valuable. +Ben didn't think the mathematically precise categories argument was the most important thing for _Less Wrong_ readers to know about: a similarly careful explanation of why I'd written off Scott, Eliezer, and the "rationalists" would be way more valuable. 
I could see the value he was pointing at, but something in me balked at the idea of _attacking my friends in public_ (Subject: "treachery, faith, and the great river (was: Re: DRAFTS: 'wrapping up; or, Orc-ham's razor' and 'on the power and efficacy of categories')"). -Ben had previously written (in the context of the effective altruism movement) about how [holding criticism to a higher standard than praise distorts our collective map](http://benjaminrosshoffman.com/honesty-and-perjury/#A_tax_on_criticism). - -He was obviously correct that this was a distortionary force relative to what ideal Bayesian agents would do, but I was worried that when we're talking about criticism of _people_ rather than ideas, the removal of the distortionary force would just result in an ugly war (and not more truth). Criticism of institutions and social systems _should_ be filed under "ideas" rather than "people", but the smaller-scale you get, the harder this distinction is to maintain: criticizing, say, "the Center for Effective Altruism", somehow feels more like criticizing Will MacAskill personally than criticizing "the United States" does, even though neither CEA nor the U.S. is a person. +Ben had previously written (in the context of the effective altruism movement) about how [holding criticism to a higher standard than praise distorts our collective map](http://benjaminrosshoffman.com/honesty-and-perjury/#A_tax_on_criticism). He was obviously correct that this was a distortionary force relative to what ideal Bayesian agents would do, but I was worried that when we're talking about criticism of _people_ rather than ideas, the removal of the distortionary force would just result in an ugly war (and not more truth). Criticism of institutions and social systems _should_ be filed under "ideas" rather than "people", but the smaller-scale you get, the harder this distinction is to maintain: criticizing, say, "the Center for Effective Altruism", somehow feels more like criticizing Will MacAskill personally than criticizing "the United States" does, even though neither CEA nor the U.S. is a person. -This is why I felt like I couldn't give up faith that [honest discourse _eventually_ wins](https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/). Under my current strategy and consensus social norms, I could criticize Scott or Kelsey or Ozy's _ideas_ without my social life dissolving into a war of all against all, whereas if I were to give in to the temptation to flip a table and say, "Okay, now I _know_ you guys are just messing with me," then I didn't see how that led anywhere good, even if they really _were_ just messing with me. +That was why I felt like I couldn't give up faith that [honest discourse _eventually_ wins](https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/). Under my current strategy and consensus social norms, I could criticize Scott or Kelsey or Ozy's _ideas_ without my social life dissolving into a war of all against all, whereas if I were to give in to the temptation to flip a table and say, "Okay, now I _know_ you guys are just messing with me," then I didn't see how that led anywhere good, even if they really _were_ just messing with me. Jessica explained what she saw as the problem with this. What Ben was proposing was _creating clarity about behavioral patterns_. I was saying that I was afraid that creating such clarity is an attack on someone. But if so, then my blog was an attack on trans people. What was going on here? 
-Socially, creating clarity about behavioral patterns _is_ construed as an attack and _can_ make things worse for someone: for example, if your livelihood is based on telling a story about you and your flunkies being the only sane truthseeking people in the world, then me demonstrating that you don't care about the truth when it's politically inconvenient for you is a threat to your marketing story and therefore a threat to your livelihood. As a result, it's easier to create clarity down power gradients than up power gradients: it was easy for me to blow the whistle on trans people's narcissistic delusions, but hard to blow the whistle on Yudkowsky's narcissistic delusions. +Socially, creating clarity about behavioral patterns _is_ construed as an attack and _can_ make things worse for someone: for example, if your livelihood is based on telling a story about you and your flunkies being the only sane truthseeking people in the world, then me demonstrating that you don't care about the truth when it's politically inconvenient for you is a threat to your marketing story, and therefore a threat to your livelihood. As a result, it's easier to create clarity down power gradients than up power gradients: it was easy for me to blow the whistle on trans people's narcissistic delusions, but hard to blow the whistle on Yudkowsky's narcissistic delusions.[^trans-power-gradient] + +[^trans-power-gradient]: Probably a lot of _other_ people who lived in Berkeley would find it harder to criticize trans people than to criticize some privileged white guy named Yudkowski or whatever. But those weren't the relevant power gradients in _my_ social world. But _selectively_ creating clarity down but not up power gradients just reinforces existing power relations—just like how selectively criticizing arguments with politically unfavorable conclusions only reinforces your current political beliefs. I shouldn't be able to get away with claiming that [calling non-exclusively-androphilic trans women delusional perverts](/2017/Mar/smart/) is okay on the grounds that that which can be destroyed by the truth should be, but that calling out Alexander and Yudkowsky would be unjustified on the grounds of starting a war or whatever. If I was being cowardly or otherwise unprincipled, I should own that instead of generating spurious justifications. Jessica was on board with a project to tear down narcissistic fantasies in general, but not on board with a project that starts by tearing down trans people's narcissistic fantasies, but then emits spurious excuses for not following that effort where it leads. @@ -540,7 +544,7 @@ Somewhat apologetically, I replied that the distinction between truthfully, publ Polishing the advanced categories argument from earlier email drafts into a solid _Less Wrong_ post didn't take that long: by 6 April 2019, I had an almost-complete draft of the new post, ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries), that I was pretty happy with. -The title (note: "boundaries", plural) was a play off of ["Where to the Draw the Boundary?"](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) (note: "boundary", singular), a post from Yudkowsky's [original Sequence](https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb) on the [37 ways in which words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong). In "... 
Boundary?", Yudkowsky asserts (without argument, as something that all educated people already know) that dolphins don't form a natural category with fish ("Once upon a time it was thought that the word 'fish' included dolphins [...] Or you could stop playing nitwit games and admit that dolphins don't belong on the fish list"). But Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) directly contradicts this, asserting that there's nothing wrong with with biblical Hebrew word _dagim_ encompassing both fish and cetaceans (dolphins and whales). So who's right, Yudkowsky (2008) or Alexander (2014)? Is there a problem with dolphins being "fish", or not? +The title (note: "boundaries", plural) was a play off of ["Where to the Draw the Boundary?"](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) (note: "boundary", singular), a post from Yudkowsky's [original Sequence](https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb) on the [37 ways in which words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong). In "... Boundary?", Yudkowsky asserts (without argument, as something that all educated people already know) that dolphins don't form a natural category with fish ("Once upon a time it was thought that the word 'fish' included dolphins [...] Or you could stop playing nitwit games and admit that dolphins don't belong on the fish list"). But Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) directly contradicts this, asserting that there's nothing wrong with the biblical Hebrew word _dagim_ encompassing both fish and cetaceans (dolphins and whales). So who's right, Yudkowsky (2008) or Alexander (2014)? Is there a problem with dolphins being "fish", or not? In "... Boundaries?", I unify the two positions and explain how both Yudkowsky and Alexander have a point: in high-dimensional configuration space, there's a cluster of finned water-dwelling animals in the subspace of the dimensions along which finned water-dwelling animals are similar to each other, and a cluster of mammals in the subspace of the dimensions along which mammals are similar to each other, and dolphins belong to _both_ of them. _Which_ subspace you pay attention to can legitimately depend on your values: if you don't care about predicting or controlling some particular variable, you have no reason to look for clusters along that dimension. 
@@ -548,11 +552,13 @@ But _given_ a subspace of interest, the _technical_ criterion of drawing categor (Jessica and Ben's [discussion of the job title example in relation to the _Wikipedia_ summary of Jean Baudrillard's _Simulacra and Simulation_ got published separately](http://benjaminrosshoffman.com/excerpts-from-a-larger-discussion-about-simulacra/), and ended up taking on a life of its own [in](http://benjaminrosshoffman.com/blame-games/) [future](http://benjaminrosshoffman.com/blatant-lies-best-kind/) [posts](http://benjaminrosshoffman.com/simulacra-subjectivity/), [including](https://www.lesswrong.com/posts/Z5wF8mdonsM2AuGgt/negative-feedback-and-simulacra) [a](https://www.lesswrong.com/posts/NiTW5uNtXTwBsFkd4/signalling-and-simulacra-level-3) [number](https://www.lesswrong.com/posts/tF8z9HBoBn783Cirz/simulacrum-3-as-stag-hunt-strategy) [of](https://www.lesswrong.com/tag/simulacrum-levels) [posts](https://thezvi.wordpress.com/2020/05/03/on-negative-feedback-and-simulacra/) [by](https://thezvi.wordpress.com/2020/06/15/simulacra-and-covid-19/) [other](https://thezvi.wordpress.com/2020/08/03/unifying-the-simulacra-definitions/) [authors](https://thezvi.wordpress.com/2020/09/07/the-four-children-of-the-seder-as-the-simulacra-levels/).) -Sarah asked if the math wasn't a bit overkill: were the calculations really necessary to make the basic point that good definitions should be about classifying the world, rather than about what's pleasant or politically expedient to say? I thought the math was _really important_ as an appeal to principle—and [as intimidation](https://slatestarcodex.com/2014/08/10/getting-eulered/). (As it was written, [_the tenth virtue is precision!_](http://yudkowsky.net/rational/virtues/) Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.) +Sarah asked if the math wasn't a bit overkill: were the calculations really necessary to make the basic point that good definitions should be about classifying the world, rather than about what's pleasant or politically expedient to say? + +I thought the math was _really important_ as an appeal to principle—and [as intimidation](https://slatestarcodex.com/2014/08/10/getting-eulered/). (As it was written, [_the tenth virtue is precision!_](http://yudkowsky.net/rational/virtues/) Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.) "... Boundaries?" explains all this in the form of discourse with a hypothetical interlocutor arguing for the I-can-define-a-word-any-way-I-want position. In the hypothetical interlocutor's parts, I wove in verbatim quotes (without attribution) from Alexander ("an alternative categorization system is not an error, and borders are not objectively true or false") and Yudkowsky ("You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning", "Using language in a way _you_ dislike is not lying. The propositions you claim false [...] is not what the [...] is meant to convey, and this is known to everyone involved; it is not a secret"), and Bensinger ("doesn't unambiguously refer to the thing you're trying to point at"). 
-My thinking here was that the posse's previous email campaigns had been doomed to failure by being too closely linked to the politically-contentious object-level topic which reputable people had strong incentives not to touch with a ten-meter pole. So if I wrote this post _just_ explaining what was wrong with the claims Yudkowsky and Alexander had made about the philosophy of language, with perfectly innocent examples about dolphins and job titles, that would remove the political barrier and [leave a line of retreat](https://www.lesswrong.com/posts/3XgYbghWruBMrPTAL/leave-a-line-of-retreat) for Yudkowsky to correct the philosophy of language error. And then if someone with a threatening social-justicey aura were to say, "Wait, doesn't this contradict what you said about trans people earlier?", the reputable people could stonewall them. (Stonewall _them_ and not _me_!) +My thinking here was that the posse's previous email campaigns had been doomed to failure by being too closely linked to the politically-contentious object-level topic, which reputable people had strong incentives not to touch with a ten-meter pole. So if I wrote this post _just_ explaining what was wrong with the claims Yudkowsky and Alexander had made about the philosophy of language, with perfectly innocent examples about dolphins and job titles, that would remove the political barrier to Yudkowsky correcting the philosophy of language error. If someone with a threatening social-justicey aura were to say, "Wait, doesn't this contradict what you said about trans people earlier?", the reputable people could stonewall them. (Stonewall _them_ and not _me_!) Another reason someone might be reluctant to correct mistakes when pointed out, is the fear that such a policy could be abused by motivated nitpickers. It would be pretty annoying to be obligated to churn out an endless stream of trivial corrections by someone motivated to comb through your entire portfolio and point out every little thing you did imperfectly, ever. @@ -564,7 +570,7 @@ I could see a case that it was unfair of me to include political subtext and the (I did regret having accidentally "poisoned the well" the previous month by impulsively sharing the previous year's ["Blegg Mode"](/2018/Feb/blegg-mode/) [as a _Less Wrong_ linkpost](https://www.lesswrong.com/posts/GEJzPwY8JedcNX2qz/blegg-mode). "Blegg Mode" had originally been drafted as part of "... To Make Predictions" before getting spun off as a separate post. Frustrated in March at our failing email campaign, I thought it was politically "clean" enough to belatedly share, but it proved to be insufficiently [deniably allegorical](/tag/deniably-allegorical/), as evidenced by the 60-plus-entry trainwreck of a comments section. It's plausible that some portion of the _Less Wrong_ audience would have been more receptive to "... Boundaries?" as not-politically-threatening philosophy, if they hadn't been alerted to the political context by the comments on the "Blegg Mode" linkpost.) -On 13 April 2019, I pulled the trigger on publishing "... Boundaries?", and wrote to Yudkowsky again, a fourth time (!), asking if he could _either_ publicly endorse the post, _or_ publicly comment on what he thought the post got right and what he thought it got wrong; and, that if engaging on this level was too expensive for him in terms of spoons, if there was any action I could take to somehow make it less expensive? 
The reason I thought this was important, I explained, was that if rationalists in [good standing](https://srconstantin.github.io/2018/12/24/contrite-strategies.html) find themselves in a persistent disagreement _about rationality itself_—in this case, my disagreement with Scott Alexander and others about the cognitive function of categories—that seemed like a major concern for [our common interest](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes), something we should be very eager to _definitively settle in public_ (or at least _clarify_ the current state of the disagreement). In the absence of an established "rationality court of last resort", I feared the closest thing we had was an appeal to Eliezer Yudkowsky's personal judgement. Despite the context in which the dispute arose, _this wasn't a political issue_. The post I was asking for his comment on was _just_ about the [_mathematical laws_](https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can-exempt-you-from-rationality-s-laws) governing how to talk about, _e.g._, dolphins. We had _nothing to be afraid of_ here. (Subject: "movement to clarity; or, rationality court filing"). +On 13 April 2019, I pulled the trigger on publishing "... Boundaries?", and wrote to Yudkowsky again, a fourth time (!), asking if he could _either_ publicly endorse the post, _or_ publicly comment on what he thought the post got right and what he thought it got wrong; and, that if engaging on this level was too expensive for him in terms of [spoons](https://en.wikipedia.org/wiki/Spoon_theory), if there was any action I could take to somehow make it less expensive? The reason I thought this was important, I explained, was that if rationalists in [good standing](https://srconstantin.github.io/2018/12/24/contrite-strategies.html) find themselves in a persistent disagreement _about rationality itself_—in this case, my disagreement with Scott Alexander and others about the cognitive function of categories—that seemed like a major concern for [our common interest](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes), something we should be very eager to _definitively settle in public_ (or at least _clarify_ the current state of the disagreement). In the absence of an established "rationality court of last resort", I feared the closest thing we had was an appeal to Eliezer Yudkowsky's personal judgement. Despite the context in which the dispute arose, _this wasn't a political issue_. The post I was asking for his comment on was _just_ about the [_mathematical laws_](https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can-exempt-you-from-rationality-s-laws) governing how to talk about, _e.g._, dolphins. We had _nothing to be afraid of_ here. (Subject: "movement to clarity; or, rationality court filing"). I got some pushback from Ben and Jessica about claiming that this wasn't "political". What I meant by that was to emphasize (again) that I didn't expect Yudkowsky or "the community" to take a public stance _on gender politics_; I was trying to get "us" to take a stance in favor of the kind of _epistemology_ that we were doing in 2008. It turns out that epistemology has implications for gender politics which are unsafe, but that's _more inferential steps_, and ... I guess I just didn't expect the sort of people who would punish good epistemology to follow the inferential steps? 
@@ -580,9 +586,9 @@ But the only reason for my post to exist was because it would be even _more_ ina [^schelling]: _Strategy of Conflict_, Ch. 2, "An Essay on Bargaining" -Maybe that's not how politics works? Could it be that, somehow, the mob-punishment mechanisms that weren't smart enough to understand the concept of "bad argument (categories are arbitrary) for a true conclusion (trans people are OK)", _were_ smart enough to connect the dots between my broader agenda and my (correct) abstract philosophy argument, such that VIPs didn't think they could endorse my _correct_ philosophy argument, without it being _construed as_ an endorsement of me and my detailed heresies? +Maybe that's not how politics works? Could it be that, somehow, the mob-punishment mechanisms that weren't smart enough to understand the concept of "bad argument (categories are arbitrary) for a true conclusion (trans people are OK)", _were_ smart enough to connect the dots between my broader agenda and my (correct) abstract philosophy argument, such that VIPs didn't think they could endorse my correct philosophy argument, without it being _construed as_ an endorsement of me and my detailed heresies? -Jessica mentioned talking with someone about me writing to Yudkowsky and Alexander requesting that they clarify the category boundary thing. This person described having a sense that I should have known that wouldn't work—because of the politics involved, not because I wasn't right. I thought Jessica's takeaway was very poignant: +Jessica mentioned talking with someone about me writing to Yudkowsky and Alexander requesting that they clarify the category boundary thing. This person described having a sense that I should have known that that wouldn't work—because of the politics involved, not because I wasn't right. I thought Jessica's takeaway was very poignant: > Those who are savvy in high-corruption equilibria maintain the delusion that high corruption is common knowledge, to justify expropriating those who naively don't play along, by narratizing them as already knowing and therefore intentionally attacking people, rather than being lied to and confused. @@ -592,7 +598,7 @@ I guess in retrospect, the outcome does seem kind of "obvious"—that it should But ... it's only "obvious" if you _take as a given_ that Yudkowsky is playing a savvy Kolmogorov complicity strategy like any other public intellectual in the current year.[^any-other-public-intellectual] -[^any-other-public-intellectual]: And really, that's the _charitable_ interpretation. The extent to which I still have trouble entertaining the idea that Yudkowsky _actually_ drunk the gender ideology Kool-Aid, rather than merely having pretended to, is a testament to the thoroughness of my indoctrination. +[^any-other-public-intellectual]: And really, that's the _charitable_ interpretation. The extent to which I still had trouble entertaining the idea that Yudkowsky had _actually_ drunk the gender ideology Kool-Aid, rather than merely having pretended to, is a testament to the thoroughness of my indoctrination. Maybe this seems banal if you haven't spent your entire adult life in his robot cult? Coming from _anyone else in the world_, I wouldn't have had a problem with the "hill of validity in defense of meaning" thread—I would have respected it as a solidly above-average philosophy performance, before [setting the bozo bit](https://en.wikipedia.org/wiki/Bozo_bit#Dismissing_a_person_as_not_worth_listening_to) on the author and getting on with my day. 
But since I _did_ spend my entire adult life in Yudkowsky's robot cult, trusting him the way a Catholic trusts the Pope, I _had_ to assume that it was an "honest mistake" in his rationality lessons, and that honest mistakes could be honestly corrected if someone put in the effort to explain the problem. The idea that Eliezer Yudkowsky was going to behave just as badly as any other public intellectual in the current year, was not really in my hypothesis space. It took some _very large_ likelihood ratios to beat it into my head that the thing that was obviously happening, was actually happening.

@@ -602,7 +608,7 @@ One might wonder why this was such a big deal to us. Okay, so Yudkowsky had prev

Ben explained: Yudkowsky had set in motion a marketing machine (the "rationalist community") that was continuing to raise funds and demand work from people for below-market rates based on the claim that while nearly everyone else was criminally insane (causing huge amounts of damage due to disconnect from reality, in a way that would be criminal if done knowingly), he, almost uniquely, was not. "Work for me or the world ends badly," basically. If the claim was _true_, it was important to make, and to actually extract that labor.

-But we had just falsified to our satisfaction the claim that Yudkowsky was currently sane in the relevant way (which was a _extremely high_ standard, and not a special flaw of Yudkowsky in the current environment). If Yudkowsky couldn't be bothered to live up to his own stated standards or withdraw his validation from the machine he built after we had _tried_ to talk to him privately, then we had a right to talk in public about what we thought was going on.
+But we had just falsified to our satisfaction the claim that Yudkowsky was currently sane in the relevant way (which was an _extremely high_ standard, and not a special flaw of Yudkowsky in the current environment). If Yudkowsky couldn't be bothered to live up to his own stated standards or withdraw his validation from the machine he built after we had _tried_ to talk to him privately, then we had a right to talk about what we thought was going on.

This wasn't about direct benefit _vs._ harm. This was about what, substantively, the machine and its operators were doing. They claimed to be cultivating an epistemically rational community, while in fact building an army of loyalists.