From: M. Taylor Saotome-Westlake
Date: Mon, 3 Oct 2022 00:41:14 +0000 (-0700)
Subject: memoir tap
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=11aa5f9aaea409d046abc4c51eb9afc6f697aacd;p=Ultimately_Untrue_Thought.git

memoir tap
---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index 4335d23..71a62e6 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -551,25 +551,27 @@ _Should_ I have known that it wouldn't work? _Didn't_ I "already know", at some
 
 But ... it's only "obvious" if you _take as a given_ that Yudkowsky is playing a savvy Kolmogorov complicity strategy like any other public intellectual in the current year. Maybe this seems banal if you haven't spent your entire life in this robot cult? But the guy doesn't _market_ himself as being like any other public intellectual in the current year. As Ben put it, Yudkowsky's "claim to legitimacy really did amount to a claim that while nearly everyone else was criminally insane (causing huge amounts of damage due to disconnect from reality, in a way that would be criminal if done knowingly), he almost uniquely was not."
 
 Call me a sucker, but ... I _actually believed_ Yudkowsky's marketing story. The Sequences _really were just that good_. That's why it took so much fuss and wasted time to generate a likelihood ratio large enough to falsify that story.
 
-Ben further compared Yukowsky to Eliza the spambot therapist in my story ["Blame Me for Trying"](/2018/Jan/blame-me-for-trying/). Scrupulous rationalists were paying rent to something claiming moral authority, which had no concrete specific plan to do anything other than run out the clock. Minds like mine don't surive long-run in this ecosystem. If we wanted minds that do "naïve" inquiry instead of playing savvy Kolmogorov games to survive, we needed an interior that justified that level of trust.
-
-------
+Ben compared Yudkowsky to Eliza the spambot therapist in my story ["Blame Me for Trying"](/2018/Jan/blame-me-for-trying/). Scrupulous rationalists were paying rent to something claiming moral authority, which had no concrete specific plan to do anything other than run out the clock. Minds like mine don't survive long-term in this ecosystem. If we wanted minds that do "naïve" inquiry instead of playing savvy Kolmogorov games to survive, we needed an interior that justified that level of trust.
+[TODO: weave in "set in motion a machine" 19 Apr?]
+[TODO Jack—
+> Zack sacrificed all hope of success in favor of maintaining his own sanity by CC'ing you guys (which I think he was correct to do conditional on email happening at all).]
 
-
- [TODO: asking Anna to weigh in] (I figured that spamming people with hysterical and somewhat demanding physical postcards was more polite (and funnier) than my recent habit of spamming people with hysterical and somewhat demanding emails.)
+-------
 
 curation hopes ...
 22 Jun: I'm expressing a little bit of bitterness that a mole rats post got curated https://www.lesswrong.com/posts/fDKZZtTMTcGqvHnXd/naked-mole-rats-a-case-study-in-biological-weirdness
 "Univariate fallacy" also a concession
+(which I got to cite in https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy which I cited in "Schelling Categories")
+
 https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/
 "Yes Requires the Possibility of No" 19 May https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019
 scuffle on LessWrong FAQ 31 May https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for#iqEEme6M2JmZEXYAk
-]
+"epistemic defense" meeting
 
 [TODO section on factional conflict:
 Michael on Anna as cult leader
@@ -580,22 +582,32 @@ assortment of agendas
 mutualist pattern where Michael by himself isn't very useful for scholarship (he just says a lot of crazy-sounding things and refuses to explain them), but people like Sarah and me can write intelligible things that secretly benefited from much less legible conversations with Michael.
 ]
+8 Jun: I think I subconsciously did an interesting political thing in appealing to my price for joining
+
+REACH panel
+
+(Subject: "Michael Vassar and the theory of optimal gossip")
+
+
 Since arguing at the object level had failed (["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), ["Reply on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/)), and arguing at the strictly meta level had failed (["... Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries)), the obvious thing to do next was to jump up to the meta-meta level and tell the story about why the "rationalists" were Dead To Me now, that [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) was not being met. (Just like Ben had suggested in December and in April.)
 
 I found it hard to make progress on. I felt—constrained. I didn't know how to tell the story without (as I perceived it) escalating personal conflicts or leaking info from private conversations. So instead, I mostly turned to a combination of writing bitter and insulting comments whenever I saw someone praise "the rationalists" collectively, and—more philosophy-of-language blogging!
 
-[TODO 2019 activities—
+In August's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passing mention in "... Boundaries?": sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was of course generalized from a "pro-trans" argument that had occurred to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).)
 
-"epistemic defense" meeting
+[TODO— more blogging 2019
 
-"Schelling Categories" Aug 2019
-"Heads I Win" Sep 2019
 "Algorithms of Deception!" Oct 2019
-"Firming Up ..." Dec 2019
-"Against Lie Inflation"/"Maybe Lying Doesn't Exist" Oct 2019
+"Maybe Lying Doesn't Exist" Oct 2019
+I was _furious_ at "Against Lie Inflation"—oh, so _now_ you agree that making language less useful is a problem?!
+But then I realized Scott actually was being consistent in his own frame: he's counting "everyone is angrier" (because of more frequent lying-accusations) as a cost; but, if everyone _is_ lying, maybe they should be angry!
+
+"Heads I Win" Sep 2019: I was surprised by how well this did (high karma, later included in the best-of-2019 collection); Ben and Jessica had discouraged me from bothering after I
+
+"Firming Up ..." Dec 2019: combatting Yudkowsky's not-technically-lying shenanigans
+
+]
 
 [TODO: Yudkowsky throwing NRx under the bus; tragedy of recursive silencing
diff --git a/notes/a-hill-email-review.md b/notes/a-hill-email-review.md
index cd5894e..590b822 100644
--- a/notes/a-hill-email-review.md
+++ b/notes/a-hill-email-review.md
@@ -1297,3 +1297,16 @@ If our _actual_ problem is "Genuinely consistent rationalism is realistically al
 
 Being transparent about the game theory I see: intuitively, it seems like I have a selfish incentive to "support" the bullies (by publicly pointing out that they have a point, as above) insofar as I'm directly personally harmed by my social network following a Kolmogorov Option strategy rather than an open-dissidence Free Speech for Shared Maps strategy, and more bullying might cause the network to switch strategies on "may as well be hung for a sheep as a lamb" grounds? Maybe I should explain this so people have a chance to talk me out of it?
 ... hm, actually, when I try to formalize this with the simplest possible toy model, it doesn't work (the "may as well be hung ..." effect doesn't happen given the modeling assumptions I just made up). I was going to say: our team chooses a self-censorship parameter c from 0 to 10, and faces a bullying level b from 0 to 10. b is actually b(c, p), a function of self-censorship and publicity p (also from 0 to 10). The team leaders' utility function is U(c, b) := -(c + b) (bullying and self-censorship are both bad). Suppose the bullying level is b := 10 - c + p (self-censorship decreases bullying, and publicity increases it). My thought was: a disgruntled team-member might want to increase p in order to induce the leaders to choose a smaller value of c. But when I do the algebra, -(c + b) = -(c + (10 - c + p)) = -c - 10 + c - p = -10 - p. (Which doesn't depend on c, seemingly implying that more publicity is just bad for the leaders without changing their choice of c? But I should really be doing my dayjob now instead of figuring out if I made a mistake in this Facebook comment.) (The algebra does check out; see the numerical sketch at the end of this patch.)
+
+
+
+
+> Eliezer is not a private person - he's a public figure. He set in motion a machine that continues to raise funds and demand work from people for below-market rates based on moral authority claims centered around his ability to be almost uniquely sane and therefore benevolent. (In some cases indirectly through his ability to cause others to be the same.) "Work for me or the world ends badly," basically.
+
+> If this is TRUE (and also not a threat to destroy the world), then it's important to say, and to actually extract that work. But if not, then it's abuse! (Even if we want to be cautious about using emotionally loaded terms like that in public.)
+
+> We've falsified to our satisfaction the hypothesis that Eliezer is currently sane in the relevant way (which is an extremely high standard, and not a special flaw of Eliezer in the current environment). This should also falsify the hypothesis that the sanity-maintenance mechanisms Eliezer set up work as advertised.
+
+> The machine he built to extract money, attention, and labor is still working, though, and claiming to be sane in part based on his prior advertisements, which it continues to promote. If Eliezer can't be bothered to withdraw his validation, then we get to talk about what we think is going on, clearly, in ways that aren't considerate of his feelings. He doesn't get to draw a boundary that prevents us from telling other people things about MIRI and him that we rationally and sincerely believe to be true.
+
+> The fact that we magnanimously offered to settle this via private discussions with Eliezer doesn't give him an extra right to draw boundaries afterwards. We didn't agree to that. Attempting to settle doesn't forfeit the right to sue. Attempting to work out your differences with someone 1:1 doesn't forfeit your right to complain later if you were unable to arrive at a satisfactory deal (so long as you didn't pretend to do so).
\ No newline at end of file
diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md
index 932fa61..09106e5 100644
--- a/notes/a-hill-of-validity-sections.md
+++ b/notes/a-hill-of-validity-sections.md
@@ -1215,3 +1215,5 @@ It's totally understandable to not want to get involved in a political scuffle b
 An analogy: racist jokes are also just jokes. Alice says, "What's the difference between a black dad and a boomerang? A boomerang comes back." Bob says, "That's super racist! Tons of African-American fathers are devoted parents!!" Alice says, "Chill out, it was just a joke." In a way, Alice is right. It was just a joke; no sane person could think that Alice was literally claiming that all black men are deadbeat dads. But, the joke only makes sense in the first place in the context of a culture where the black-father-abandonment stereotype is operative. If you thought the stereotype was false, or if you were worried about it being a self-fulfilling prophecy, you would find it tempting to be a humorless scold and get angry at the joke-teller.
 
 Similarly, the "Caliphate" humor only makes sense in the first place in the context of a celebrity culture where deferring to Scott and Eliezer is expected behavior. (In a way that deferring to Julia Galef or John S. Wentworth is not expected behavior, even if Galef and Wentworth also have a track record as good thinkers.) I think this culture is bad. _Nullius in verba_.
+
+ [TODO: asking Anna to weigh in] (I figured that spamming people with hysterical and somewhat demanding physical postcards was more polite (and funnier) than my recent habit of spamming people with hysterical and somewhat demanding emails.)
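
The self-censorship toy model in the notes hunk above can be checked numerically. Here is a minimal Python sketch; the function names are mine, and the functional forms b = 10 - c + p and U = -(c + b) are exactly the modeling assumptions stated in the note, not anything more general:

```python
# Toy model from notes/a-hill-email-review.md: team leaders choose a
# self-censorship level c in [0, 10], facing bullying b(c, p) = 10 - c + p,
# where p in [0, 10] is publicity; leader utility is U = -(c + b).

def bullying(c, p):
    # Modeling assumption from the note: self-censorship decreases
    # bullying, publicity increases it.
    return 10 - c + p

def leader_utility(c, p):
    # Modeling assumption from the note: bullying and self-censorship
    # are both bad.
    return -(c + bullying(c, p))

for p in (0, 5, 10):
    utilities = [leader_utility(c, p) for c in range(11)]
    # The c terms cancel: U = -(c + 10 - c + p) = -10 - p for every c
    # (matching the note's algebra), so raising publicity hurts the
    # leaders without moving their optimal choice of c.
    assert all(u == -10 - p for u in utilities), utilities
    print(f"p={p}: U = {utilities[0]} for every c in 0..10")
```

Running it prints a constant utility for each publicity level, confirming the note's conclusion: under these assumptions, the "may as well be hung for a sheep as a lamb" effect doesn't appear.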