From: M. Taylor Saotome-Westlake
Date: Sat, 27 Aug 2022 22:20:55 +0000 (-0700)
Subject: memoir: building towards "... Boundaries?"
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=b2cb7a77defec83dd488fc7e99344307673781c9;p=Ultimately_Untrue_Thought.git

memoir: building towards "... Boundaries?"
---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index 60bef3d..698df83 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -372,7 +372,7 @@ Anyway, I did successfully get to my apartment and get a few hours of sleep. One
 I think at some level, I wanted Scott to know how frustrated I was about his use of "mental health for trans people" as an Absolute Denial Macro. But then when Michael started advocating on my behalf, I started to minimize my claims because I had a generalized attitude of not wanting to sell myself as a victim. Ben pointed out that [making oneself mentally ill in order to extract political concessions](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) only works if you have a lot of people doing it in a visibly coordinated way. And even if it did work, getting into a dysphoria contest with trans people didn't seem like it led anywhere good.
 
-I supposed that, in Michael's worldview, aggression is more honest than passive-aggression. That seemed obviously true, but I was psychologically limited in how much overt aggression I was willing to deploy against my friends. (And particularly Yudkowsky, who I still hero-worshipped.) But clearly, the tension between "I don't want to do too much social aggression" and "losing the Category War within the rationalist community is _absolutely unacceptable_" was causing me to make wildly inconsistent decisions. (Emailing Scott at 4 a.m., and then calling Michael "aggressive" when he came to defend me was just crazy: either one of those things could make sense, but not _both_.)
+I supposed that, in Michael's worldview, aggression is more honest than passive-aggression. That seemed obviously true, but I was psychologically limited in how much overt aggression I was willing to deploy against my friends. (And particularly Yudkowsky, who I still hero-worshipped.) But clearly, the tension between "I don't want to do too much social aggression" and "losing the Category War within the rationalist community is _absolutely unacceptable_" was causing me to make wildly inconsistent decisions. (Emailing Scott at 4 _a.m._, and then calling Michael "aggressive" when he came to defend me was just crazy: either one of those things could make sense, but not _both_.)
 
 Was the answer just that I needed to accept that there wasn't such a thing in the world as a "rationalist community"? (Sarah had told me as much two years ago, at BABSCon, and I just hadn't made the corresponding mental adjustments.)
 
@@ -420,18 +420,13 @@ On Discord in January, Kelsey Piper had told me that everyone else experienced t
 I [didn't want to bring it up at the time because](https://twitter.com/zackmdavis/status/1088459797962215429) I was so overjoyed that the discussion was actually making progress on the core philosophy-of-language issue, but ... Scott _did_ seem to be pretty explicit that his position was about happiness rather than usability? If Kelsey thought she agreed with Scott, but actually didn't, that was kind of bad for our collective sanity, wasn't it?
 
-As for the parable about orcs, I thought it was significant that Scott chose to tell the story from the standpoint of non-orcs deciding what [verbal behaviors](https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password) to perform while orcs are around, rather than the standpoint of the _orcs themselves_. For one thing, how do you _know_ that serving evil-Melkior is a life of constant torture? Is it at all possible, in the bowels of Christ, that someone has given you _misleading information_ about that? Moreover, you _can't_ just give an orc a clever misinterpretation of an oath and have them believe it. First you have to [cripple their _general_ ability](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) to correctly interpret oaths, for the same reason that you can't get someone to believe that 2+2=5 without crippling their general ability to do arithmetic. We're not talking about a little "white lie" that the listener will never get to see falsified; the orcs _already know_ the text of the oath, and you have to break their ability to _understand_ it. Are you willing to permanently damage an orc's ability to reason, in order to save them pain? For some sufficiently large amount of pain, surely. But this isn't a choice to make lightly—and the choices people make to satisfy their own consciences, don't always line up with the volition of their alleged beneficiaries. We think we can lie to save others from pain, without ourselves _wanting to be lied to_. But behind the veil of ignorance, it's the same choice!
+As for the parable about orcs, I thought it was significant that Scott chose to tell the story from the standpoint of non-orcs deciding what [verbal behaviors](https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password) to perform while orcs are around, rather than the standpoint of the _orcs themselves_. For one thing, how do you _know_ that serving evil-Melkior is a life of constant torture? Is it at all possible, in the bowels of Christ, that someone has given you _misleading information_ about that? Moreover, you _can't_ just give an orc a clever misinterpretation of an oath and have them believe it. First you have to [cripple their _general_ ability](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) to correctly interpret oaths, for the same reason that you can't get someone to believe that 2+2=5 without crippling their general ability to do arithmetic. We weren't talking about a little "white lie" that the listener will never get to see falsified (like telling someone their dead dog is in heaven); the orcs _already know_ the text of the oath, and you have to break their ability to _understand_ it. Are you willing to permanently damage an orc's ability to reason, in order to save them pain? For some sufficiently large amount of pain, surely. But this isn't a choice to make lightly—and the choices people make to satisfy their own consciences, don't always line up with the volition of their alleged beneficiaries. We think we can lie to save others from pain, without ourselves _wanting to be lied to_. But behind the veil of ignorance, it's the same choice!
+I _also_ had more to say about philosophy of categories: I thought I could be more rigorous about the difference between "caring about predicting different variables" and "caring about consequences", in a way that Yudkowsky would _have_ to understand even if Scott didn't.
 
 But after consultation with the posse, I concluded that further email prosecution was not useful at this time; the philosophy argument would work better as a public _Less Wrong_ post. So my revised Category War to-do list was:
-
-
-[SECTION: treachery, faith, and the great river
-
-I concluded that further email prosecution was not useful at this time. My revised Category War to-do list was:
-
- * Send a _brief_ wrapping-up/end-of-conversation email to Scott (with the anecdote from Discord and commentary on his orc story).
+ * Send the brief wrapping-up/end-of-conversation email to Scott (with the Discord anecdote and commentary on the orc story).
  * Mentally write-off Scott, Eliezer, and the so-called "rationalist" community as a loss so that I wouldn't be in horrible emotional pain from cognitive dissonance all the time.
- * Write up the long, engaging, depoliticized mathy version of the categories argument for _Less Wrong_ (which I thought might take a few months—I had a dayjob, and write slowly, and might need to learn some new math, which I'm also slow at).
+ * Write up the mathy version of the categories argument for _Less Wrong_ (which I thought might take a few months—I had a dayjob, and write slowly, and might need to learn some new math, which I'm also slow at).
  * _Then_ email the link to Scott and Eliezer asking for a signal-boost and/or court ruling.
 
 Ben didn't think the mathematically precise categories argument was the most important thing for _Less Wrong_ readers to know about: a similarly careful explanation of why I've written off Scott, Eliezer, and the "rationalists" would be way more valuable.
 
@@ -454,23 +449,10 @@ Somewhat apologetically, I replied that the distinction between truthfully, publ
 
 Michael said this was importantly backwards: less precise targeting is more violent. If someone said, "Michael Vassar is a terrible person", he would try to be curious, but if they don't have an argument, he would tend to worry more "for" them and less "about" them, whereas if someone said, "The Jews are terrible people", he saw that as a more serious threat to his safety. (And rationalists and trans women are exactly the sort of people that get targeted by the same people who target Jews.)
 
-]
-
-
-[SECTION about monastaries (with Ben and Anna in April 2019)
 
+Polishing the advanced categories argument from earlier email drafts into a solid _Less Wrong_ post didn't take that long: by 6 April, I had an almost-complete draft of the new post, ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries), that I was pretty happy with.
 
-I complained to Anna: "Getting the right answer in public on topic _X_ would be too expensive, so we won't do it" is _less damaging_ when the set of such Xes is _small_. It looked to me like we added a new forbidden topic in the last ten years, without rolling back any of the old ones.
 
-"Reasoning in public is too expensive; reasoning in private is good enough" is _less damaging_ when there's some sort of _recruiting pipeline_ from the public into the monasteries: lure young smart people in with entertaining writing and shiny math, _then_ gradually undo their political brainwashing once they've already joined your cult. (It had [worked on me](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/)!)
-
-I would be sympathetic to "rationalist" leaders like Anna or Yudkowsky playing that strategy if there were some sort of indication that they had _thought_, at all, about the pipeline problem—or even an indication that there _was_ an intact monastery somewhere.
-
-]
-
-[TODO: Jessica on corruption—
-
-> I am reminded of someone who I talked with about Zack writing to you and Scott to request that you clarify the category boundary thing. This person had an emotional reaction described as a sense that "Zack should have known that wouldn't work" (because of the politics involved, not because Zack wasn't right). Those who are savvy in high-corruption equilibria maintain the delusion that high corruption is common knowledge, to justify expropriating those who naively don't play along, by narratizing them as already knowing and therefore intentionally attacking people, rather than being lied to and confused.
-]
 
 [TODO: after some bouncing off the posse, what was originally an email draft became a public _Less Wrong_ post, "Where to Draw the Boundaries?" (note, plural)
 
@@ -481,6 +463,9 @@
 
 * I had already poisoned the well with "Blegg Mode" the other month, bad decision
 ]
 
+[TODO: Jessica on corruption—
+> I am reminded of someone who I talked with about Zack writing to you and Scott to request that you clarify the category boundary thing. This person had an emotional reaction described as a sense that "Zack should have known that wouldn't work" (because of the politics involved, not because Zack wasn't right). Those who are savvy in high-corruption equilibria maintain the delusion that high corruption is common knowledge, to justify expropriating those who naively don't play along, by narratizing them as already knowing and therefore intentionally attacking people, rather than being lied to and confused.
+]
 
 [TODO small section: concern about bad faith nitpicking—
 
@@ -495,8 +480,6 @@ But, well, I thought I had made a pretty convincing that a lot of people are mak
 
 * We lost?! How could we lose??!!?!?
 
-[TODO: Michael on Anna as cult leader
-Jessica told me about her time at MIRI (link to Zoe-piggyback and Occupational Infohazards)]
 
 https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/
 
@@ -513,13 +496,16 @@ curation hopes ... 22 Jun: I'm expressing a little bit of bitterness that a mole
 
 [TODO scuffle on LessWrong FAQ https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for#iqEEme6M2JmZEXYAk ]
 
 [TODO section on factional conflict:
-Michael on Anna as an enemy
+Michael on Anna as cult leader
+Jessica told me about her time at MIRI (link to Zoe-piggyback and Occupational Infohazards)
 24 Aug: I had told Anna about Michael's "enemy combatants" metaphor, and how I originally misunderstood
 me being regarded as Michael's pawn
 assortment of agendas
 mutualist pattern where Michael by himself isn't very useful for scholarship (he just says a lot of crazy-sounding things and refuses to explain them), but people like Sarah and me can write intelligible things that secretly benefited from much less legible conversations with Michael.
 ]
+
+
 [TODO: Yudkowsky throwing NRx under the bus; tragedy of recursive silencing
 15 Sep Glen Weyl apology
 ]
diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md
index df2cec0..9c2ee15 100644
--- a/notes/a-hill-of-validity-sections.md
+++ b/notes/a-hill-of-validity-sections.md
@@ -6,6 +6,7 @@ _ the other Twitter conversation incl. Tail where Yudkowsky says he's not taking
 _ the exchange on old OB where I said to acknowledge that not all men _want_ to be masculine
 _ screenshot Rob's Facebook comment which I link
 _ compile Categories references from the Dolphin War
+_ dates of subsequent philosophy-of-language posts
 
 far editing tier—
 _ clarify why Michael thought Scott was "gaslighting" me, include "beseech bowels of Christ"
@@ -987,3 +988,12 @@ Thoughts on your proposed cruxes: 1 (harmful inferences) is an unworkable AI des
 
 https://twitter.com/ESYudkowsky/status/1436025983522381827
 > Well, Zack hopefully shouldn't see this, but I do happen to endorse everything you just said, for your own personal information.
+
+
+[SECTION about monasteries (with Ben and Anna in April 2019)
+I complained to Anna: "Getting the right answer in public on topic _X_ would be too expensive, so we won't do it" is _less damaging_ when the set of such Xes is _small_. It looked to me like we added a new forbidden topic in the last ten years, without rolling back any of the old ones.
+
+"Reasoning in public is too expensive; reasoning in private is good enough" is _less damaging_ when there's some sort of _recruiting pipeline_ from the public into the monasteries: lure young smart people in with entertaining writing and shiny math, _then_ gradually undo their political brainwashing once they've already joined your cult. (It had [worked on me](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/)!)
+
+I would be sympathetic to "rationalist" leaders like Anna or Yudkowsky playing that strategy if there were some sort of indication that they had _thought_, at all, about the pipeline problem—or even an indication that there _was_ an intact monastery somewhere.
+]