From: M. Taylor Saotome-Westlake
Date: Fri, 7 Oct 2022 23:23:47 +0000 (-0700)
Subject: memoir: I need a better outline of what happened in Apr.–Nov. 2019
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=ae3c8205a2024411860995bee1794f25f812f334;p=Ultimately_Untrue_Thought.git

memoir: I need a better outline of what happened in Apr.–Nov. 2019

I had a bunch of individual scraps and sentences, but I wasn't sure how to
organize them; I don't know how to explain the factional schism. Reading
through the email review log chronologically and just trying to tell the
Dumb Story chronologically probably can't hurt?
---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index de329fd..02f8ac9 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -563,66 +563,15 @@ One might wonder why this was such a big deal to us. Okay, so Yudkowsky had prev
 
 Ben explained that Yudkowsky wasn't a private person who might plausibly have the right to be wrong on the internet in peace. Yudkowsky was a public figure whose claim to legitimacy really did amount to a claim that while nearly everyone else was criminally insane (causing huge amounts of damage due to disconnect from reality, in a way that would be criminal if done knowingly), he almost uniquely was not—and he had set in motion a machine (the "rationalist community") that was continuing to raise funds and demand work from people for below-market rates based on that claim—"work for me or the world ends badly", basically.
 
-If the claim was _true_, it was important to make, and to actually extract that labor. But we had falsified to our satisfaction the claim that Yudkowsky was currently sane in the relevant way (which was an _extremely high_ standard, and not a special flaw of Yudkowsky in the current environment). If Yudkowsky couldn't be bothered to live up to his own stated standards or withdraw his validation from the machine he built, then we had a right to talk about what we thought was going on.
+If the claim was _true_, it was important to make, and to actually extract that labor. But we had falsified to our satisfaction the claim that Yudkowsky was currently sane in the relevant way (which was an _extremely high_ standard, and not a special flaw of Yudkowsky in the current environment). If Yudkowsky couldn't be bothered to live up to his own stated standards or withdraw his validation from the machine he built after we had _tried_ to bring it up in private with him, then we had a right to talk about what we thought was going on.
 
 Ben further compared Yudkowsky (as the most plausible individual representative of the "rationalists") to Eliza the spambot therapist in my story ["Blame Me for Trying"](/2018/Jan/blame-me-for-trying/): regardless of the initial intent, scrupulous rationalists were paying rent to something claiming moral authority, which had no concrete specific plan to do anything other than run out the clock, maintaining a facsimile of dialogue in ways well-calibrated to continue to generate revenue. Minds like mine wouldn't survive long-run in this ecosystem. If we wanted minds that do "naïve" inquiry (instead of playing savvy Kolmogorov games) to survive, we needed an interior that justified that level of trust.
 
--------
-
-curation hopes ...
-22 Jun: I'm expressing a little bit of bitterness that a mole rats post got curated https://www.lesswrong.com/posts/fDKZZtTMTcGqvHnXd/naked-mole-rats-a-case-study-in-biological-weirdness
-
-"Univariate fallacy" also a concession
-(which I got to cite in https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy which I cited in "Schelling Categories")
-
-https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/
-
-"Yes Requires the Possibility of No" 19 May https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019
-
-scuffle on LessWrong FAQ 31 May https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for#iqEEme6M2JmZEXYAk
-
-"epistemic defense" meeting
-
-[TODO section on factional conflict:
-Michael on Anna as cult leader
-Jessica told me about her time at MIRI (link to Zoe-piggyback and Occupational Infohazards)
-24 Aug: I had told Anna about Michael's "enemy combatants" metaphor, and how I originally misunderstood
-me being regarded as Michael's pawn
-assortment of agendas
-mutualist pattern where Michael by himself isn't very useful for scholarship (he just says a lot of crazy-sounding things and refuses to explain them), but people like Sarah and me can write intelligible things that secretly benefited from much less legible conversations with Michael.
-]
-
-8 Jun: I think I subconsciously did an interesting political thing in appealing to my price for joining
-
-REACH panel
-
-(Subject: "Michael Vassar and the theory of optimal gossip")
-
-
-Since arguing at the object level had failed (["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), ["Reply on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/)), and arguing at the strictly meta level had failed (["... Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries)), the obvious thing to do next was to jump up to the meta-meta level and tell the story about why the "rationalists" were Dead To Me now, that [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) was not being met. (Just like Ben had suggested in December and in April.)
+[TODO: rewrite Ben's account of the problem above, including 15 April Signal conversation]
-
-I found it hard to make progress on. I felt—constrained. I didn't know how to tell the story without (as I perceived it) escalating personal conflicts or leaking info from private conversations. So instead, I mostly turned to a combination of writing bitter and insulting comments whenever I saw someone praise "the rationalists" collectively, and—more philosophy-of-language blogging!
-
-In August's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passing mention in "... Boundaries?": sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was of course generalized from a "pro-trans" argument that had occurred to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).)
-
-[TODO— more blogging 2019
-
-"Algorithms of Deception!" Oct 2019
-
-"Maybe Lying Doesn't Exist" Oct 2019
-
-I was _furious_ at "Against Lie Inflation"—oh, so _now_ you agree that making language less useful is a problem?! But then I realized Scott actually was being consistent in his own frame: he's counting "everyone is angrier" (because of more frequent lying-accusations) as a cost; but, if everyone _is_ lying, maybe they should be angry!
-
-"Heads I Win" Sep 2019: I was surprised by how well this did (high karma, later included in the best-of-2019 collection); Ben and Jessica had discouraged me from bothering after I
-
-"Firming Up ..." Dec 2019: combatting Yudkowsky's not-technically-lying shenanigans
-
-]
-
-
-[TODO: Yudkowsky throwing NRx under the bus; tragedy of recursive silencing
-15 Sep Glen Weyl apology
-]
+-------
+[TODO: better outline 2019]
 
 In November, I received an interesting reply on my philosophy-of-categorization thesis from MIRI researcher Abram Demski. Abram asked: ideally, shouldn't all conceptual boundaries be drawn with appeal-to-consequences? Wasn't the problem just with bad (motivated, shortsighted) appeals to consequences? Agents categorize in order to make decisions. The best classifier for an application depends on the costs and benefits. As a classic example, it's very important for evolved prey animals to avoid predators, so it makes sense for their predator-detection classifiers to be configured such that they jump away from every rustling in the bushes, even if it's usually not a predator.

diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md
index b07acd1..1c5ec53 100644
--- a/notes/a-hill-of-validity-sections.md
+++ b/notes/a-hill-of-validity-sections.md
@@ -1232,3 +1232,65 @@ I didn't have a simple, [mistake-theoretic](https://slatestarcodex.com/2018/01/2
 http://archive.is/SXmol
 > "don't lie to someone if you wouldn't slash their tires" is actually a paraphrase of Steven Kaas.
 > ... ugh, I forgot that that was from the same Black Belt Bayesian post where one of the examples of bad behavior is from me that time when I aggro'd against Phil Goetz to the point where Michael threatened to get me banned. I was young and grew up in the feminist blogosphere, but as I remarked to Zvi recently, in 2008, we had a way to correct that. (Getting slapped down by Michael's ostracism threat was really painful for me at the time, but in retrospect, it needed to be done.) In the current year, we don't.
+
+
+_Less Wrong_ had recently been rebooted with a new codebase and a new dev/admin team. New-_Less Wrong_ had a system for posts to be "Curated". Begging Yudkowsky and Anna to legitimize "... Boundaries?" with a comment hadn't worked, but maybe the mods would. (They did end up curating [a post about mole rats](https://www.lesswrong.com/posts/fDKZZtTMTcGqvHnXd/naked-mole-rats-a-case-study-in-biological-weirdness).)
+
+
+
+
+Yudkowsky did [quote-Tweet Colin Wright on the univariate fallacy](https://twitter.com/ESYudkowsky/status/1124757043997372416)
+
+(which I got to [cite in a _Less Wrong_ post](https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy))
+
+
+"Univariate fallacy" also a concession
+(which I got to cite in https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy, which I cited in "Schelling Categories")
+
+https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/
+
+"Yes Requires the Possibility of No" 19 May https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019
+
+scuffle on LessWrong FAQ 31 May https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for#iqEEme6M2JmZEXYAk
+
+"epistemic defense" meeting
+
+[TODO section on factional conflict:
+Michael on Anna as cult leader
+Jessica told me about her time at MIRI (link to Zoe-piggyback and Occupational Infohazards)
+24 Aug: I had told Anna about Michael's "enemy combatants" metaphor, and how I originally misunderstood
+me being regarded as Michael's pawn
+assortment of agendas
+mutualist pattern where Michael by himself isn't very useful for scholarship (he just says a lot of crazy-sounding things and refuses to explain them), but people like Sarah and me can write intelligible things that secretly benefited from much less legible conversations with Michael.
+]
+
+8 Jun: I think I subconsciously did an interesting political thing in appealing to my price for joining
+
+REACH panel
+
+(Subject: "Michael Vassar and the theory of optimal gossip")
+
+
+Since arguing at the object level had failed (["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), ["Reply on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/)), and arguing at the strictly meta level had failed (["... Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries)), the obvious thing to do next was to jump up to the meta-meta level and tell the story about why the "rationalists" were Dead To Me now, that [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) was not being met. (Just like Ben had suggested in December and in April.)
+
+I found it hard to make progress on. I felt—constrained. I didn't know how to tell the story without (as I perceived it) escalating personal conflicts or leaking info from private conversations. So instead, I mostly turned to a combination of writing bitter and insulting comments whenever I saw someone praise "the rationalists" collectively, and—more philosophy-of-language blogging!
+
+In August's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passing mention in "... Boundaries?": sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was of course generalized from a "pro-trans" argument that had occurred to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).)
+
+[TODO— more blogging 2019
+
+"Algorithms of Deception!" Oct 2019
+
+"Maybe Lying Doesn't Exist" Oct 2019
+
+I was _furious_ at "Against Lie Inflation"—oh, so _now_ you agree that making language less useful is a problem?! But then I realized Scott actually was being consistent in his own frame: he's counting "everyone is angrier" (because of more frequent lying-accusations) as a cost; but, if everyone _is_ lying, maybe they should be angry!
+
+"Heads I Win" Sep 2019: I was surprised by how well this did (high karma, later included in the best-of-2019 collection); Ben and Jessica had discouraged me from bothering after I
+
+"Firming Up ..." Dec 2019: combatting Yudkowsky's not-technically-lying shenanigans
+
+]
+
+[TODO: Yudkowsky throwing NRx under the bus; tragedy of recursive silencing
+15 Sep Glen Weyl apology
+]
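+
+(scrap for the Abram Demski reply in the draft above: his predator-detection point reads as ordinary cost-sensitive classification; a minimal sketch of why the expected-cost-minimizing classifier jumps at almost every rustle, with the function name and numbers being mine rather than Abram's)
+
+```python
+# Asymmetric error costs move the optimal decision threshold: when a miss
+# (getting eaten) is far costlier than a false alarm (a wasted jump), the
+# cheapest policy is to react even at low predator-probabilities.
+
+def should_flee(p_predator: float, cost_false_alarm: float, cost_getting_eaten: float) -> bool:
+    """Flee iff the expected cost of staying exceeds the expected cost of fleeing."""
+    expected_cost_staying = p_predator * cost_getting_eaten       # eaten if a predator is there
+    expected_cost_fleeing = (1 - p_predator) * cost_false_alarm   # wasted jump if it isn't
+    return expected_cost_staying > expected_cost_fleeing
+
+# A rustle in the bushes is usually not a predator (say, 2% of the time it is),
+# but being eaten is a thousand times worse than a wasted jump:
+print(should_flee(p_predator=0.02, cost_false_alarm=1.0, cost_getting_eaten=1000.0))  # True
+
+# Solving p * C_eaten > (1 - p) * C_alarm gives the threshold
+# p > C_alarm / (C_alarm + C_eaten) = 1/1001 ≈ 0.001: the "best" classifier
+# fires on almost every rustle, trading statistical "fit" (many false
+# positives) for decision-theoretic value.
+```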