From: M. Taylor Saotome-Westlake
Date: Sun, 30 Oct 2022 19:54:08 +0000 (-0700)
Subject: memoir: bullet outline remainder of reducing-negativity §
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=d71d1e4f3c687c7d88d7372fb4d6029efe00d25b;p=Ultimately_Untrue_Thought.git

memoir: bullet outline remainder of reducing-negativity §

Having trouble getting started? Nothing wrong with incremental progress and tiny commits. The chain-of-thought prompting approach: if you spend a large enough amount of computation turning vague notes into less-vague bullet outlines and less-vague bullet outlines into paragraphs, you can eventually diffuse a completed ms. My limiting factor is really the "large enough amount of computation", which is rectified by keeping my butt in this chair and keeping the timer running.

---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index 677dda1..d2bba00 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -849,28 +849,42 @@ The question was, what had specifically happened in the last six years to shift

[^them-supporting-me]: Humans with very different views on politics nevertheless have a common interest in not being transformed into paperclips!

-Did Yudkowsky get _new information_ about neoreaction's hidden Badness parameter, or did moral coercion on him from the left intensify (because Trump and [because Berkeley](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/))? My bet was on the latter.
+Did Yudkowsky get _new information_ about neoreaction's hidden Badness parameter sometime between 2013 and 2019, or did moral coercion on him from the left intensify (because Trump and [because Berkeley](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/))? My bet was on the latter.

-However it happened, I didn't think the brain damage was limited to "political" topics. In November, we saw an example of Yudkowsky engaging in more destruction of language for the sake of politeness, but in the non-Culture-War context of him [trying to wirehead his fiction subreddit by suppressing criticism-in-general](https://www.reddit.com/r/rational/comments/dvkv41/meta_reducing_negativity_on_rrational/).
+However it happened, it didn't seem like the brain damage was limited to "political" topics, either. In November, we saw another example of Yudkowsky destroying language for the sake of politeness, this time in the non-Culture-War context of him [_trying to wirehead his fiction subreddit by suppressing criticism-in-general_](https://www.reddit.com/r/rational/comments/dvkv41/meta_reducing_negativity_on_rrational/).

-That's _my_ characterization, of course: the post itself talks about "reducing negativity". [In a comment, Yudkowsky wrote](https://www.reddit.com/r/rational/comments/dvkv41/meta_reducing_negativity_on_rrational/f7fs88l/) (bolding mine):
+That's _my_ characterization, of course: the post itself is about "reducing negativity". [In a comment, Yudkowsky wrote](https://www.reddit.com/r/rational/comments/dvkv41/meta_reducing_negativity_on_rrational/f7fs88l/) (bolding mine):

> On discussion threads for a work's particular chapter, people may debate the well-executedness of some particular feature of that work's particular chapter. Comments saying that nobody should enjoy this whole work are still verboten.
**Replies here should still follow the etiquette of saying "Mileage varied: I thought character X seemed stupid to me" rather than saying "No, character X was actually quite stupid."**

-But ... "I thought X seemed Y to me" and "X is Y" _do not mean the same thing_. [The map is not the territory](https://www.lesswrong.com/posts/KJ9MFBPwXGwNpadf2/skill-the-map-is-not-the-territory). [The quotation is not the referent](https://www.lesswrong.com/posts/np3tP49caG4uFLRbS/the-quotation-is-not-the-referent). [The planning algorithm that maximizes the probability of doing a thing is different from an algorithm that maximizes the probability of having "tried" to do the thing](https://www.lesswrong.com/posts/WLJwTJ7uGPA5Qphbp/trying-to-try). [If my character is actually quite stupid, I want to believe that my character is actually quite stupid.](https://www.lesswrong.com/tag/litany-of-tarski)
+But ... "I thought X seemed Y to me"[^pleonasm] and "X is Y" _do not mean the same thing_. [The map is not the territory](https://www.lesswrong.com/posts/KJ9MFBPwXGwNpadf2/skill-the-map-is-not-the-territory). [The quotation is not the referent](https://www.lesswrong.com/posts/np3tP49caG4uFLRbS/the-quotation-is-not-the-referent). [The planning algorithm that maximizes the probability of doing a thing is different from the algorithm that maximizes the probability of having "tried" to do the thing](https://www.lesswrong.com/posts/WLJwTJ7uGPA5Qphbp/trying-to-try). [If my character is actually quite stupid, I want to believe that my character is actually quite stupid.](https://www.lesswrong.com/tag/litany-of-tarski)

-It might seem like a little thing (requiring "I" statements is commonplace in therapy groups and corporate sensitivity training), but this little thing _coming from Eliezer Yudwkowsky setting guidelines for an explicitly "rationalist" space_ made a pattern click. If everyone is forced to only make narcissistic claims about their map ("_I_ think", "_I_ feel"), and not make claims about the territory (which could be construed to call other people's maps into question and thereby "threaten" them, because [disagreement is disrespect](http://www.overcomingbias.com/2008/09/disagreement-is.html)), that's great for reducing social conflict, but it's not great for the kind of collective information processing that actually accomplishes cognitive work, like good literary criticism. A rationalist space _needs to be able to talk about the territory_.
+[^pleonasm]: The pleonasm here ("to me" being redundant with "I thought") is especially galling coming from someone who's usually a good writer!

-I understand that Yudkowsky wouldn't agree with
+It might seem like a little thing of no significance—requiring "I" statements is commonplace in therapy groups and corporate sensitivity training—but this little thing _coming from Eliezer Yudkowsky setting guidelines for an explicitly "rationalist" space_ made a pattern click. If everyone is forced to only make narcissistic claims about their map ("_I_ think", "_I_ feel"), and not make claims about the territory (which could be construed to call other people's maps into question and thereby threaten them, because [disagreement is disrespect](http://www.overcomingbias.com/2008/09/disagreement-is.html)), that's great for reducing social conflict, but it's not great for the kind of collective information processing that actually accomplishes cognitive work, like good literary criticism. A rationalist space _needs to be able to talk about the territory_.
-[the comment claims that "Being able to consider and optimize literary qualities" is one of the major considerations to be balanced, but this is lip service; Ruby also paid lip service]
+I understand that Yudkowsky wouldn't agree with that characterization: to be fair, the same comment I quoted also lists "Being able to consider and optimize literary qualities" as one of the major considerations to be balanced. But I think (_I_ think) it's also fair to note that (as we had seen on _Less Wrong_ earlier that year) lip service is cheap. It's easy to _say_, "Of course I don't think politeness is more important than truth," while systematically behaving as if you did.

-"Broadcast criticism is adversely selected for critic errors"
+[TODO—

-"Credibly helpful unsolicited criticism should be delivered in private" (I agree that the purpose of public criticism is not solely to help the authors)
+"Broadcast criticism is adversely selected for critic errors", Yudkowsky says in the post on reducing negativity, correctly pointing out that if a work's true level of [finish math]
+
+ * I can imagine some young person who really liked _Harry Potter and the Methods_ being intimidated by the math notation,
+ * But a somewhat less young person
+ * I would expect a real rationality teacher to teach the general lesson, "model selection effects"
+
+"Credibly helpful unsolicited criticism should be delivered in private", says Yudkowsky.
+
+ * I agree that public criticism isn't meant to solely help the author (because if it were, there would be no reason for anyone but the author to read it)
+ * But other readers also benefit!
+ * And if you're going to talk about incentives, you _want_ people to be rewarded for making good criticism
+
+Crocker's rules
+
+ * it's true and important that Crocker's rules were meant to be declared by the speaker; it's not a license to be mean to other people who might not want that
+ * But there's still something special about a culture that has "Crocker's rules" as an available concept, that's completely absent from modern Yudkowsky
+
+]
 -----