24 Mar: Michael to me on Anna as cult leader
24 Mar: I tell Michael that I might be better at taking his feedback if he's gentler
30 Mar: hang out with Jessica (previous week was Zvi and Nick and anti-fascist Purim)
+30 Mar: Michael—we need to figure out how to win against bad faith arguments
+30 Mar: wrapping up drafts
+me to Scott: _what_ post about transgender
+conceptual gerrymandering is not unique
+it's fine if you don't help; you've suffered enough
+Kelsey agreeing on the object level for a different reason
+crippling the orcs' general ability to reason
+early draft of what became "Where to Draw the Boundaries?"
+31 Mar: the real issue is whether we want to destroy everyone's
+31 Mar: Sarah—isn't the math overkill?
+31 Mar: Ben thinks I'm reading Kelsey wrong, that she's counting happiness as part of usability
+31 Mar: I think that math is important for principle—and intimidation (https://slatestarcodex.com/2014/08/10/getting-eulered/)
+31 Mar: "a very vague understanding of what you're trying to do with the second email but it seems kind of sketchy to me, like you're writing an email to Scott meant to look persuasive to Eliezer"
+31 Mar: I want a court ruling; I'm betting that it's psychologically harder for someone who knows the Rules to continue being dishonest against someone who has the correct answer and isn't willing to give up on good-faith argument.
+31 Mar: Ben—this just seemed like the kind of bank shot where, given the track record of Zack's intuitions, I expect it to fail unless he's explicitly modeling what he's doing
+31 Mar: email prosecution not useful, explicitly asking for a writ of certiorari is better than sending "cc in case you're interested" mail and delusionally hoping for it to function as an appeal.
+31 Mar: I recount to Michael Jessica's account of her time at MIRI
+31 Mar: Michael—"since there is absolutely no plausible narrative in which I was bringing them into a cult, there is no symmetry in our narratives." this didn't age well
+31 Mar: Ben—the most important thing for LessWrong readers to know about - a similarly careful explanation of why you've written off Scott, Eliezer, and the "Rationalists"
+31 Mar: I'm reluctant, because I don't want to attack my friends in public (and don't want to leak info from private conversations)
+31 Mar: Ben—self-promotion imposes a cost on others
+1 Apr: fantasizing about the Susan the Senior Sorter parable turning violent
+1 Apr: Avoiding politically-expensive topics is fine! Fabricating bad epistemology to support a politically-convenient position and then not retracting it after someone puts a lot of effort into explaining the problem is not OK. Some of the things you said on the phone call made it sound like you didn't understand that I'm making that distinction?
+2 Apr: me to Anna cc Ben—"Reasoning in public is too expensive; reasoning in private is good enough" is less damaging when there's some sort of recruiting pipeline from the public into the monasteries
+2 Apr: Ben: I do not consider the claim that intact monasteries exist to be plausible at all right now.
+2 Apr: Twitter poll about dropping pseudonymity
+3 Apr: I don't see any advantage to speaking non-anonymously when people are persecuting you. I also find it REALLY DIFFICULT to deal with the fact that you are cooperating with people who are openly defecting against you and against me.
+6 Apr: "Where to Draw the Boundaries?" draft, Putting It Together epigraph; plan to beg Anna and Eliezer for endorsement afterwards
+8 Apr: Jessica—creating clarity about behavioral patterns, and specifically ones that people claim not to have. What you're saying here implies that you believe creating such clarity is an attack on someone. If so, your blog is attacking trans people.
+8 Apr: the distinction between truthfully, publicly criticizing group identities and named individuals still seems very significant to me
+8 Apr: Michael—less precise is more violent
+"Boundaries" discussion ...
+9 Apr: "VP" is also political
+10 Apr: Ben and Jessica coin four simulacra levels
+10 Apr: Ben—Eliezer's in denial about whether he's making decisions out of fear. / Scott's only unevenly in denial about this.
+me—Anna claims to be calculating rather than fearful. I don't have any particular evidence to dispute her self-report, but I would asphyxiate under that level of risk aversion.
+12 Apr: Ben again, on trying to explain "why Eliezer seems to you like the court of last resort, what he's done that leads you to expect him to play this role, etc."; more money seems wrong
+12 Apr: I'm still too cowardly
+Sarah explaining how to ask things of people in general
+Ben reiterating, "You don't want either an overtly paid endorsement, or to ask Eliezer to lie about whether it's a paid endorsement."
+12 Apr: Sarah—what would you do if Eliezer never follows up?
+13 Apr: me—I don't know, what would there be left to do?—I don't have any particular claim over Eliezer.
+13 Apr: pull the trigger on "Boundaries?" (for this reason it is written: don't hate the player, hate the game-theoretic nature of all life.)
+ask Anna
+direct marketing to Nick T.
+direct marketing and/or conspiracy invitation
+court filing with Yudkowsky (Subj: "movement to clarity; or, rationality court filing")
+Ben: So, um, this _is_ politics, and we _aren't_ safe, so how about not lying?
+Michael: We are safe to understand arguments and logic, and it is Not politics to try to pin understanding down. It is politics to try to pin down the understanding of people as part of credit allocation, and we should be doing politics to try to motivate engagement here, but so far, Zack has been doing a mix of friendship, supplication and economics.
+me: Is ... is there a good place for me to _practice_ politics against someone who isn't Eliezer Yudkowsky? The guy rewrote my personality over the internet. I want more credit-allocation for how relatively well I'm performing under constrained cognition!
+me: Like, okay, I see the case that I'm doing a sneaky evil deceptive thing (bad faith is a disposition, not a feeling!) by including subtext and expecting people to only engage with the text, but if we're not going to get into full-on gender-politics on Less Wrong, but gender politics is motivating an epistemology error, I'm not sure what else I'm supposed to do! I'm pretty constrained here!
+Michael: Once people think they know what the political sides are, the political thing to do is (trivially, I am annoyed at you pretending not to know this) to be completely unprincipled and partisan in any anonymous situation.
+15 Apr: me—concern about bad faith nitpicking, Jessica says I'm not
+15 Apr: Jessica's comment, the lesson of Funk-tunul
+15 Apr: trying to recruit Steven Kaas
+16 Apr: I feel sad and gaslighted
+16 Apr: Jessica sparks engagement by appealing to anti-lying
+Ben— I think a commitment to lying is always wrong just straightforwardly leads to deciding that anything power demands you say is true.
+16 Apr: If it were just a matter of my own pain, I wouldn't bother making a fuss about this. But there's a principle here that's way bigger than any of us: that's why Michael and Ben and Sarah and Zvi and Jessica are here with me.
+we're in favor of Jessica's existence
+17 Apr: plaintiff/pawn/puppydog
+17 Apr: Jessica— This person had an emotional reaction described as a sense that "Zack should have known that wouldn't work" (because of the politics involved, not because Zack wasn't right). Those who are savvy in high-corruption equilibria maintain the delusion that high corruption is common knowledge, to justify expropriating those who naively don't play along, by narratizing them as already knowing and therefore intentionally attacking people, rather than being lied to and confused.
+18 Apr: Ben on our epistemic state
+18 Apr: me—I'd rather say "Actually, I don't think I'm saner than Jessica, because ..." rather than "You owe Jessica an apology."
+18 Apr: Ben to me—"It’s sketchy to accept insults leveled at you, but it’s actually a betrayal of trust if you ask friends to back you up and then don’t back them up"
+me— You're right; I hereby retract the phrase "or my friends" from the prior email. I was trying to convey a "turn the other cheek"/"social reality is fake, anyway" attitude, but I agree that everyone can only turn her own cheek.
+18 Apr: Ben—STAY IN FORMATION
+me— I've been struggling with how to "chime in" for hours, but it looks like Eliezer just shut the door on us at 10:12 p.m. / Mission failed?
+This closure actually makes me feel a little bit less heartbroken and cognitively-dissonant about the "rationalists" already. If Scott doesn't want to talk, and Eliezer doesn't want to talk, and I'm unambiguously right ... then I guess that's that?
+Michael—common knowledge achieved
+19 Apr: Ben—Zack, that's not what a war looks like. That was ... trying to educate people who were trying not to be educated, by arguing in good faith.
+me: Ben, I mean, you're right, but if my "Category War" metaphorical name is bad, does that also mean we have to give up on the "guided by the beauty of our weapons" metaphorical catchphrase? (Good-faith arguments aren't weapons!)
+20 Apr: my closing thoughts
+20 Apr: Michael on Anna as an enemy
+30 Apr: me—I don't know how to tell the story without (as I perceive it) escalating personal conflicts or leaking info from private conversations.
+30 Apr: If we're in a war, can I apply for psych leave for like, at least a month? (Michael implied that this was OK earlier.) Or does asking get me shot for desertion? It's fair for Ben to ask for an accounting of why I'm stuck, but the fact that I'm sitting at my desk crying at 5:45 p.m. suggests a reordering of priorities for my own health.
+30 Apr: No urgency on that timescale. Just don't buy a nerf gun and make shooting noises to pretend you're fighting. On leave means on leave.
+3 May: dayjob workplace dynamics
+6 May: multivariate clarification https://twitter.com/ESYudkowsky/status/1124751630937681922
+7 May: angry at Anna for thinking I was delusional for expecting the basics
+8 May: Anna doesn't want money from me—it'll appear in some Ben Hoffman essay as "using you" or some similar such thing
+17 May: Jessica points out that Yudkowsky's pro-infanticide position isn't really edgier than Peter Singer's
+21 May:
+If someone claims, as someone did, that saying that words having meanings matters is politically loaded and is code for something, that can really be seen as [...] their problem.
+I mean, ordinarily yes, but when the person saying that is a MIRI Research Associate, then it's the light cone's problem.
+22 May: to Anna cc Sarah Steven—Viaweb and CfAR have importantly different missions (Sarah is Nice, unlike people like Ben or Michael or Jessica, who are Mean)
+31 May: "Categories Were Made" LessWrong FAQ https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for#iqEEme6M2JmZEXYAk
+8 Jun: I think I subconsciously did an interesting political thing in appealing to my price for joining
+8 Jun: maybe "EA Has a Lying Problem" seemed like a similarly pointless hit piece to you? And you have as much invested in the "Effective Altruism" brand name as I have in the "Michael Vassar" brand name?
+17 Jun: "It was Dave Mitchum who now belonged on this railroad and he, Bill Brent, who did not." basically sums up how I feel about Ozy
+22 Jun: I'm expressing a little bit of bitterness that a naked mole rats post got curated https://www.lesswrong.com/posts/fDKZZtTMTcGqvHnXd/naked-mole-rats-a-case-study-in-biological-weirdness
+23 Jun: Schelling Categories, and Simple Membership Tests
+25 Jun: scuffle with Ruby
+26 Jun: Ben trying to talk to the Less Wrong team
+29 Jun: I talked with Ray from 7 to 10:30 tonight. My intellectual performance was not so great and I did a lot of yelling and crying.
+29 Jun: talked to Buck (who's working at MIRI these days) at the Asgard (Will and Divia's house) party tonight. He was broadly sympathetic to my plight, thinks EA leaders have it together (saying people like Holden had thought through more plies than him when he talks to them about the Issues), thinks public reason is hopeless, doesn't think much/well of Ben or Michael or Zvi.
+1 Jul: I think we can avoid getting unnecessarily frustrated/angry at these people if we can be clear to ourselves and each other that they're MOPs, and could not possibly succeed in being geeks even if they tried
+2 Jul: "Everyone Knows", pinging Anna about it
+4 Jul: https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/ published
+11 Jul: AI timelines scam
+
> When I look at the world, it looks like [Scott](http://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) and [Eliezer](https://twitter.com/ESYudkowsky/status/1067183500216811521) and [Kelsey](https://theunitofcaring.tumblr.com/post/171986501376/your-post-on-definition-of-gender-and-woman-and) and [Robby Bensinger](https://www.facebook.com/robbensinger/posts/10158073223040447?comment_id=10158073685825447&reply_comment_id=10158074093570447&comment_tracking=%7B%22tn%22%3A%22R2%22%7D) seem to think that some variation on ["I can define a word any way I want"]() is sufficient to end debates on transgender identity.

I'm tempted to speculate that I might be better at taking your feedback about how to behave reasonably rather than doing the "cowering and submission to whoever I'm currently most afraid of losing relationship-points with" thing, if you try to be gentler sometimes, hopefully without sacrificing clarity? (Concrete examples: statements like, "What the FUCK Zack!?!" are really hard on me, and I similarly took "We're not friends anymore already!" badly at the time because I was reading it as you damning me, but that one is mostly my poor reading comprehension under sleep deprivation, because in context you were almost certainly responding to my "everyone will still be friends" babbling.)
But, if you think that gentleness actually sacrifices clarity in practice, then I guess I could just do a better job of sucking it up.
+
+Michael—
+> I think that the strongest argument for continuing to press on is that we need to actually be able to press on into opposition to clarity if we are going to succeed regardless of where we go. We actually have to figure out, either here or elsewhere, how to win this sort of argument decisively, as a group, against people who are acting in bad faith, and can't afford to simply accept a status quo that says that we accept defeat when faced with bad faith arguments in general.
+
+
+Wait, you lost me. What post about transgender? I've been objecting to the post about how category boundaries are arbitrary and we can redraw them for utilitarian benefits. Sure, the post used transgenderism as an illustrative example, but the main point should succeed or fail independently of the choice of illustrative example. (Obviously a person of your brilliance and integrity wouldn't invent a fake epistemology lesson just to get a convenient answer on one particular question!)
+
+> I realize that allowing one thing violates a Schelling fence that potentially keeps other people from making me do awkward things, but as far as I can tell gender dysphoria is a pretty unique problem
+
+"Conceptual gerrymandering to sneak in connotations" is _not_ a unique problem! I know you know this, because you've written about it extensively.
+
+"Conceptual gerrymandering to sneak in connotations" is not a unique problem! I know you know this, because you've written about it extensively. That's why my "twelve short stories about language" email gave "(repeat for the other examples from "The noncentral fallacy—the worst argument in the world?")" as entries II.–VII.
+
+> Zack I think - everyone else experiences their disagreement with you as a disagreement about where the joints are and which joints are important, including Scott
+> and parses the important part of Scott's post as 'the usability of the categories for humans is one thing we should use to decide which joints are important'
+> and it is annoying to be told that actually we all believe the categories should be whatever makes people happy and that this is an inexcusable rationalist sin
+> when no one believes that
+
+I didn't want to bring it up at the time because I was so overjoyed that the discussion was actually making progress on the core philosophy-of-language issue, but ... you seem to be pretty explicit that your position is about happiness rather than usability? If Kelsey thinks she agrees with you, but actually doesn't, that's kind of bad for our collective sanity, right?
+
+
+
+(The tenth virtue is precision! Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.)
+
+ "Naïvely" hauling out the most exhaustively rigorous analytical machinery I have is the only way to shatter the surface-level symmetry between "How much do you care about this? C'mon, it isn't worth it" versus "Well, I think it is worth it."
+
+Perhaps it is "overkill" to transform the debate into "How much do you care about this? C'mon, it isn't worth it" versus "No! Fuck you! I have math, and I have a pretty decent heuristic argument about how the math translates into human practice, and I'm willing to put in more blood and pain and sweat and money and hours to make it even more rigorous if there's any hope left of getting you savvy lying bastards to stop fucking with my head."
+
+I want the people I love and trust and respect to stop fucking with my head. That's more important to me than whatever proportionality-intuition generates the word "overkill."
+
+
+Ben, I know that you've previously written about how holding criticism to a higher standard than praise distorts our collective map (in the context of effective altruism). You're obviously correct that this is a distortionary force relative to what Bayesian agents would do, but I'm worried that when we're talking about criticism of people (as opposed to ideas), the removal of the distortionary force would just result in an ugly war (and not more truth). Criticism of institutions and social systems should be filed under "ideas" rather than "people", but the smaller-scale you get, the harder this distinction is to maintain: criticizing, say, "the Center for Effective Altruism", somehow feels more like criticizing Will MacAskill personally than criticizing "the United States" does, even though neither CEA nor the U.S. is a person.
+
+This is why I feel like I can't give up faith that good-faith discourse eventually wins (even though I'm starting to suspect that Michael is right to suggest that it doesn't in our world). Under my current strategy and consensus social norms, I can criticize Scott or Kelsey or Ozy's ideas without my social life dissolving into a war of all against all, whereas if I give in to the temptation to flip a table and say, "Okay, now I know you guys are just fucking with me," then I don't see how that leads anywhere good, even if they really are just fucking with me.
+
+I think Anna's input could potentially be really valuable here, because this kind of issue (whether to cooperate with Society and have peace, or give it the lie at risk of war) seems like one of the core tensions between her and Michael's crew (between whom I've been saying for a while that I would really like to see a peace treaty).
+
+> Self-promotion is imposing a cost on others. Eliezer, to a lesser extent Scott, and formerly Will, have all collected actual revenue from their self-promotion, on top of the less legible social capital they've acquired. It sounds like you think that this somehow creates a reciprocal obligation on our part to avoid talking about ways in which they've disappointed us or people like us. ?!?!?!?!?!
+
+> Unilateral surrender to someone strong enough to hold and defend territory, with a low enough time preference not to consume everything in it, can be a good way to avoid the war of all against all. Do you think Eliezer or Scott is meeting that standard? It seems to me like Eliezer's not interested in having you as a subject, and Scott will just pass any threats from the outside right through to you.
+
+> Separately there are good pragmatic reasons to be careful about this and I don't want to ignore them - I'm really specifically arguing against what I perceive as a misapplied moralism around not criticizing people who definitely aggressed first, in a way that's much more clearly objectively aggression than criticizing someone's conduct, accurately described.
+
+
+I'd like to elaborate on what seems off to me about this.
+
+> Um. I see the value you're pointing at, but I also really don't want to attack my friends in public.
+
+What is being proposed here is creating clarity about behavioral patterns, and specifically ones that people claim not to have. What you're saying here implies that you believe creating such clarity is an attack on someone. If so, your blog is attacking trans people.
+
+What's going on here? Here's a hypothesis. Creating clarity about behavioral patterns is interpreted as an attack by the social field, and can sometimes (but not always) result in concretely worse outcomes for someone. As such, it is much easier to create clarity about behavior patterns down power gradients than up power gradients. So it's easy for you to create clarity about trans people's narcissistic behavior but hard for you to create clarity about Eliezer and Scott's narcissistic behavior (where they pretend to be pro-truth/rational/agenty and in fact aren't, and even optimize against those things).
+
+In effect, creating clarity about behavior patterns down power gradients but not up power gradients is just going to reinforce the scapegoating system. In a world with high amounts of hypocrisy about drug use, it's rather perverse to only investigate and report on the drug use of majority-black neighborhoods; it ends up maintaining the impression (to privileged people) that it's other people's problem, that someone else can be blamed, and so it does nothing to reform the drug laws.
+
+You don't get to claim that creating clarity about trans people's behavior patterns is justified because that which can be destroyed by the truth should be, and also that creating clarity about Eliezer/Scott's behavioral patterns is unjustified because it would be an attack and start a war or whatever. If you're being cowardly here or otherwise unprincipled, own that instead of producing spurious justifications.
+
+> You're obviously correct that this is a distortionary force relative to what Bayesian agents would do, but I'm worried that when we're talking about criticism of people (as opposed to ideas), the removal of the distortionary force would just result in an ugly war (and not more truth).
+
+Oh come on, calling all non-androphilic trans women delusional perverts (which you've done) is totally a criticism of people. We're actual people, you know. If you call all non-androphilic trans women delusional perverts, then you are calling me a delusional pervert, and having corresponding effects on e.g. my chances of being scapegoated. (Which is not to say you shouldn't do this, but you should be honest about what you are doing!)
+
+To conclude, I'm on board with a project to tear down narcissistic fantasies in general, but not on board with a project that starts by tearing down trans people's narcissistic fantasies and then doesn't continue that effort where it leads (in this case, to Eliezer and Scott's narcissistic fantasies), and especially not if it produces spurious justifications for this behavior.
+
+
+I would be way more comfortable writing a scathing blog post about the behavior of "rationalists" (e.g.), than "Scott Alexander and Eliezer Yudkowsky didn't use good discourse norms in an email conversation that they had good reason to expect to be private." I think I'm consistent about this: contrast my writing to the way that some anti-trans-activism writers name-and-shame named individuals. (The closest I've come is mentioning Danielle Muscato as someone who doesn't pass, and even there, I admitted it was "unclassy" and done in desperation of other ways to make the point having failed.)
+
+Am I wrong here?
+
+Like, I acknowledge that criticism of non-androphilic trans women in general implies criticism of Jessica Taylor, and criticism of "rationalists" in general implies criticism of Eliezer Yudkowsky and Scott Alexander and Zack Davis, but the extra inferential step and "fog of probability" seems really useful for maximizing the information-conveying component of the speech and minimizing the social-warfare component?
+
+
+If someone says "the Jews are OK, but they aren't Real Americans" I feel similarly concerned, and if they say "Michael is OK but he isn't a Real American" likewise not.
+
+Targeting that is Less Precise is always more violent.
+
+
+I think that is really, really importantly backwards. If someone says "Michael Vassar is a terrible person" I hope they are OK, and I try to be empathically curious, but if they don't have an argument, I don't worry much, and I worry more For them than About them. If someone says "The Jews are Terrible People" I see that as a much more serious threat to my safety. Rationalists and trans-girls very clearly appear to me to be the exact categories of people that get targeted by the sorts of people who would also target Jews, so this analogy is *particularly* apt.
+
+This actually interacts with the good-faith kind of social constructionism! (Which mostly isn't done, but could be done.) Speakers of my generation are more likely to say "person" when sex isn't relevant, whereas older works (pre-second wave feminism?) would actually say "man" (or "woman"—see how anachronistic it feels to put the female option second and in parentheses?), presumably not because human nature has changed that much, but because of shifting cultural priorities about when sex is "relevant." (Modern speakers similarly frown on mentioning race when it's not relevant.)
+
+
+I want to be careful not to fall into the trap of being a generic social conservative. My gender crusade becomes much less interesting insofar as it can be explained simply by me being born too early and not getting "kids these days." It's not that cultural change is necessarily bad; it's that I want culture to innovate in the direction of "achieving godlike understanding of objective reality and optimizing the hell out of it" rather than "playing mind games on each other to make it artificially harder to notice that we don't actually live in a Total Morphological Freedom tech regime."
+
+
+cut from the first draft—
+> You scream and punch him in the face. Clutching his bloody nose, Bob seems more shocked than angry. "You—you punched me."
+
+> "I wouldn't say that," you say. "It depends on how you choose to draw the category boundaries of the word 'punch.'"
+
+Um, I guess it's important to notice that the only reason I think the "Vice President" example is "depoliticized" is because I live in a bubble that's obsessed with identity (gender, &c.) politics and doesn't care about class politics. It still works if my audience lives in the same bubble as me, but it would be pretty poor performance on my part to not notice the bubble.
+
+
+
+One reason someone might hypothetically be reluctant to correct mistakes when pointed out, is the fear that such a policy could be abused by bad-faith nitpickers. It would be pretty annoying to be obligated to churn out an endless stream of trivial corrections by someone motivated to comb through your entire portfolio and point out every little thing you did imperfectly, ever.
+
+This is why I found myself wondering if, in Scott or Eliezer's mental universe, I'm a blameworthy (or pitiably mentally ill) nitpicker for freaking the fuck out over a blog post from 2014 (!!) and a Tweet (!!!) from November. Like, really? I, too, have probably said things that were wrong five years ago that I would be annoyed about being pressured to issue a correction for now!
+
+But ... I don't know, I thought I put in a lot of interpretive labor making a pretty convincing case that a lot of people are making a correctable and important rationality mistake, such that the cost of a correction would actually be justified in this case, and that it should be possible to take a hard stance on "categories aren't arbitrary, you idiots, just like Eliezer said in 2008" without taking a stance on what implications that might have for other more contentious questions?? Right? Right?? Like, there are noticeable differences between me and the hypothetical bad-faith nitpicker, right????
+
+My mental health is continuing to deteriorate today, so I may need to focus on that (and therefore staying employable and having money, &c.) for a while rather than whatever we're trying to accomplish here.
+
+
+I still feel sad and upset and gaslighted and I haven't done any dayjob work so far this week even though it's Tuesday afternoon. So, I feel motivated to send this email to communicate that I feel sad, even though this message is of questionable value to the recipients.
+
+
+"We ... we had a whole Sequence about this. Didn't we? And, and ... [_you_ were there](https://tvtropes.org/pmwiki/pmwiki.php/Main/AndYouWereThere), and _you_ were there ... It—really happened, right? I didn't just imagine it? The [hyperlinks](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) [still](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) [work](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace) ..."
+
+
+Eliezer, I've spent my entire adult life in the subculture you created. You created me. Everything I'm doing now, I learned from you. I've put in a lot of effort—along with Ben and Michael and Sarah and Zvi and now Jessica—trying to explain that there's a collective sanity problem that we could really use your help with. (You should see the strategy email threads we've been having behind your back!) I don't understand how repeating that people live in different mental universes is a relevant response to our thousands of words of careful arguments?
+
+I'm pretty sure everyone here is strongly in favor of Jessica's existence. But maybe, if someone thinks really carefully about it, there could be some way to protect Jessica's interests without destroying the faculty of language?! (Especially given that Jessica herself may have interests in being able to talk.)
+
+Michael—
+> My guess is that at this point Zack and Ben should both sign off actually, and maybe Sarah as well, and that this should become a conversation between Eliezer and Jessica, Zvi, myself, and other people who have or have historically had strong MIRI affiliations and have paid serious attention to logical decision theories. Maybe we should bring in Wei Dai, since along with Eliezer, he created logical decision theory, so in so far as there's a natural heir to the Caliphate other than Eliezer it's him. At this point, I think that if this doesn't resolve with Eliezer backing down I am pretty much obligated to involve everyone who I believe has historically taken seriously Eliezer's claims of strong principles against lying. Among other things, I think that it's clear that without standing on such a moral bedrock, Eliezer is no longer claiming to be the spiritual leader of the Caliphate (intuitively, that title passes to me?), and that the secular leader of the AI Risk Caliphate has been Demis for many years now.
+
+> Sarah, don't worry, this definitely doesn't imply involving shit-heads who want to carpet-bomb us. It might involve explicitly discussing the pros and cons of involving leftist shit-heads as part of motivating Eliezer (and/or Scott) and I think that we should try to become psychologically immune to their carpet-bombing in any event, but there may be good reasons for Sarah to remain economically coupled to their good-will, since of all of us she's the only one who may stand to receive resources from the Corporate Center-Left Coalition or to direct their resources. It does seem to me that technically, in so far as their attacks are harmless to those not seeking privilege, leftist shit-heads are Guided by the Goodness of their Weapons, and working out whether this is in fact true may be related to the logical decision theory that will enable us to integrate political threats with compassionate rational engagement and bring Eliezer and Scott back from the team of deception and privilege.
+
+In retrospect, it was totally backwards for me to say that I was worried about you using up our bandwidth! My message was probably pretty predictable given everything else I've said, whereas you actually made progress, even if I didn't quite understand where you were going in the moment.
+
+... I'm kind of perversely enjoying my position as plaintiff/pawn/puppydog.
+
+But—wait. Eliezer is accusing you guys of driving me crazy?? I think I might be obligated to respond to that. But, on a weekend.
+
+
+Ben on our epistemic state—
+
+> Zack started out operating on object-level trust that "Rationalists" would generically be trying to build true maps, and that while there might be errors, even quite bad errors, in individuals, coordination would be towards truth on net, on the meta-level and all higher levels of recursion. This hypothesis was decisively falsified - political factors caused it to be false. Zack's level of trust did not pay rent, except insofar as he ended up helping catalyze the formation of a dissident circle who are sympathetic to coordinating towards truth, and not just pretending to do so.
+
+> Eliezer, as the best plausible individual representative of the "Rationalists" as currently constituted, is Eliza the bot therapist in Zack's story, letting the clock run out on the interaction with Zack instead of trying to inform him. He even initially accepted in principle financial compensation for doing this, and more generally is getting paid by MIRI donors. We should consider the whole system structurally similar to Zack's Eliza story. Regardless of the initial intent, anxious scrupulous Rationalists are paying rent to something claiming moral authority, which has no concrete specific plan to do anything other than run out the clock, but remains in a facsimile of dialogue with them in ways well-calibrated to continue to generate revenue, at least insofar as bigger revenue streams aren't available yet.
+
+> Zacks don't survive long-run in this ecosystem. If we want minds like Zack's to survive somewhere, we need an interior that justifies that level of trust. In the short run, Zack just isn't useful until he self-mods to not be vulnerable to this sort of trolling again, and give up on people exhibiting behaviors like Eliezer's (or Scott's) as not worth his time. This means modeling the politics of the situation proactively, not just defensively as a horrible mess "we" want to fix.
+
+> Jessica started out thinking that things like Eliezer's code of meta-honesty were worth parsing in detail, instead of just dismissing them as the sort of thing someone formerly scrupulous who wanted to narrate their transition to a deceptive strategy as technically okay would do. This hypothesis has been falsified as well - Jessica's attention was wasted by Eliezer on the object level, and was only valuable - like Zack's - because it clarified that people like Eliezer are not operating on the level of fidelity that justifies that attention to detail.
+
+> We should expect others who are modeling politics, including the good guys who would like to build robust networks of truth-oriented trust, to ignore clever explanations we give for our behavior that substantially resemble what Eliezer is doing, and to rationally predict that we have intent to expropriate via clever rationalizations if we behave like someone with intent to appropriate via clever rationalizations. If we were mistaken to trust Eliezer, then others would be mistaken to trust us if we externally seem to behave like Eliezer.
+
+> I'm not sure what this means on the object level except that SOME leftists and anarchists are probably much better allies than they seem to us right now. Possibly some of them have analogous confusions with respect to their faction's internal coordination mechanisms, and haven't figured out how to look trustworthy to people like us. (And also, there are way more than two or three factions.)
+
+
+Michael—
+> I don't think that we should be permanently giving up on people, but I do think that we should be regarding them as temporarily without the appeal to rights in relationship to ourselves. Varelse in Orson Scott Card speak.
+
+> I think that it was extremely worth our time to engage at this level of detail, and at an even greater level of detail in order to achieve common knowledge, as I don't think that we have achieved common knowledge *yet* if Zvi still thinks that we shouldn't confidently act on the assumption that Eliezer is not doing useful things unobserved.
+
+> I would defend engagement with Eliezer for as long as we did in the past, and in several future cases if we had several future relationships that seemed comparably valuable to the one we were hoping for, e.g. with a reformed Eliezer Yudkowsky, but I think that it's important that we be clear in our own minds that as of now there should be no presumptive loyalty to Eliezer over, for instance, Demis, who is the Schelling Point for leader of an AI Risk coalition not Guided by the Goodness of its Weapons.
+
+> In fact, in all seriousness, we should see Eliezer as less trustworthy than Demis by default, in light of the fact that Eliezer claimed to be following a nuanced strategy but defected, while Demis never stood on any claims of morality/trustworthiness and is thus entitled to the standing of any person who seems to have managed to be successful while remaining sane enough to advance technology and to turn down large monetary gains for strategic advantage.
+
+Ben—
+> Zack, I'm not asking you not to say that publicly if that's how you honestly feel and you want to report it, but as a political matter I don't expect it to help Eliezer understand; I just expect it to make him feel sad.
+
+Jessica—
+> Worth making explicit why Eliezer's response to my criticism is sufficient reason to write him off until something majorly changes (e.g. apology and serious signals of intent to recognize and act against corruption):
+
+> - Instead of responding to my criticisms, he does a "tu quoque" (in this case accusing me of making a conflation I didn't make), which is particularly illegitimate (and thus, amusing) because I'm holding him to his standards, not mine.
+
+> - He implies I'm insane for modeling psychology. This is classic play in high-corruption gaslighting equilibria: act in bad faith while pretending to act in good faith, and label those who see that it is pretend (and talk about this) as insane, to discredit them. (It's rather obvious that psychological modeling is actually a sign of sanity, not of insanity.)
+
+> - He praises Zack relative to me (saying he is comparably sane because he doesn't psychologize... which, Zack does too, that's a lie). Another classic political tactic: praise one member of an alliance while blaming another, to split cracks in the alliance, turning former allies against each other.
+
+> Eliezer shifted from bullshitting in ways that plausibly defend his reputation, to actively trying to hurt me in ways that do not plausibly defend his reputation. (Him initiating a political slap fight with me actually looks worse for him than it does for me)
+
+
+> I think this is what happens when one plays politics too long without sufficiently robust defenses, and develops habits and modes of thinking in different ways. Thinking of such action as being goal-oriented with a detailed map is the wrong way to model it - as you put it, Eliezer's thinking isn't worth this level of attention to its details, as Jessica/Zack observed with its meta-honesty.
+
+> He also engaged in rhetorical actions that the Eliezer we know and value would never do, so he can't be that person at present. He is 'insane' in the sense that he has lost his mind, but it's his mind that he has lost rather than a mind in general, as such views are alas normal round these parts.
+
+Michael—
+> Common knowledge achieved.
+> Intention to transition to politics achieved in so far as everyone agrees regarding 'stay in formation, this is not a drill'.
+
+> Can everyone please let me know what you think about the value of achieving common knowledge. To my mind, we don't need Eliezer, but we absolutely need to be able to count that as value when using our 'board evaluators' or we are Totally Fucking Dead!
+
+> Now that we are doing politics at all, and conveniently, against the least skilled opponent around, it's time to start practicing offense. We have an extremely large, clear-cut weapon here. Eliezer has publicly stated his intention to commit fraud. He has publicly defrauded a number of us. If the guns that we can point at his head aren't larger than the guns already pointed at his head, that means that he has forgotten how to see guns.
+
+Ben—
+> Eliezer's claim to legitimacy really did amount to a claim that while nearly everyone else was criminally insane (causing huge amounts of damage due to disconnect from reality, in a way that would be criminal if done knowingly), he almost uniquely was not.
+
+> "Eliezer is criminally insane" and "Eliezer's behavior is a normal failure of rationality" are not competing hypotheses from the perspective of the Sequences - they're the same hypothesis.
+
+> WE CAN'T GRADE OBJECTIVE CATEGORY DESCRIPTORS ON A CURVE.
+
+I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?
+
+In general, if someone writes to me with a criticism of the form of "That was really misleading in context; I really think it's worth clarifying to your readers", I think I would either say something like "Actually, I argue that it's not misleading because ..." or "You're right, I'll clarify." Promising to never speak about the topic again just seems weird?
+
+Michael—
+> From my perspective, she is an enemy combatant and can claim no rights in regards to either me or you. We should be lying to her because we should also be trying, when convenient, to slash her tires. I do think that it's pretty important that you absorb that point. Like, we know her to be actively acting against me in a manner that I think is unambiguously unrighteous, unprovoked and not self-serving, and we have established that Eliezer is actively breaking his oaths despite everything he has ever said about how 'you can break your principles once. After that, you don't have principles to break' and the like. Do we at least agree that he is unambiguously either a criminal or an enemy combatant?
+
+> The latter frame is more accurate both because criminals have rights and because enemy combatants aren't particularly blameworthy. They exist under a blameworthy moral order, and for you to act in their interests implies acting against their current efforts, at least temporarily, but you probably would like to execute on a Marshall Plan later.
+
+> I don't really need this information, so don't bring it if it makes you uncomfortable, and it's totally fine to be 'friends' with enemy combatants, just not to feel obligations towards them mediated by lawful intuitions. We should work out the logic of this with Ben, as it's important, but basically one can have an obligation of beneficence towards a small child, a tame animal, or many other things, but to attribute rights to those things rather than entitlement to beneficence is actually to give up on the belief in rights.
+
+Um. I can imagine that you might find my attitude here frustrating for reasons analogous to how I find it frustrating when people reach for the fake precision of talking about hormone levels and genitalia as separate dimensions in a context where I think the concept they actually want is "biological sex." But that's just the objection I come up with when I run my "How could a hypothetical adversary argue that I'm being hypocritical?" heuristic; it doesn't actually make me feel more eager to talk about criminality myself.
+
+
+I was _just_ trying to publicly settle a very straightforward philosophy thing that seemed really solid to me
+
+if, in the process, I accidentally ended up being an unusually useful pawn in Michael Vassar's deranged four-dimensional hyperchess political scheming
+
+that's ... _arguably_ not my fault
+
+
+
+"Harry, were you involved in the breakout from Azkaban?" "Involved doesn't have a simple truth condition ..."
+
+
+
+There's this thing where some people are way more productive than others and everyone knows it, but no one wants to make it common knowledge (like with the blue-eyed islanders), which is really awkward for the people who are simultaneously (a) known but not commonly-known to be underperforming (such that the culture of common-knowledge-prevention is to my self-interest because I get to collect the status and money rents of being a $150K/yr software engineer without actually performing at that level, and my coworkers and even managers don't want to call attention to it because that would be mean—and it helps that they know that I already feel guilty about it) but also (b) temperamentally unsuited and ideologically opposed to subsisting on unjustly-acquired rents rather than value creation
+
+(where I'm fond of the Ayn Rand æsthetic of "I earn my keep, and if the market were to decide that I don't deserve to live anymore, I guess it would be right and I should accept my fate with dignity" and I think the æsthetic is serving a useful function in my psychology even though it's also important to model how I would change my tune if the market actually decided that I don't deserve to live)
+
+(also, I almost certainly don't have a coherent view of what "unjustly-acquired rents" are)
+
+but the "Everyone knows that Zack feels guilty about underperforming, so they don't punish him, because he's already doing enough internalized-domination to punish himself" dynamic is unsustainable if it evolves (evolves is definitely the right word here) into a loop of "feeling gulit in exchange for not doing work" rather than the intended function of "feeling guilt in order to successfully incentivize work"
+
+I'm not supposed to be sending email during Math and Wellness Month, but I'm sending/writing this anyway because it's Wellness-relevant
+
+You've got to be strong to survive in the O-ring sector
+
+(I can actually see the multiplicative "tasks are intertwined and have to all happen at high reliability in order to create value" thing playing out in the form of "if I had fixed this bug earlier, then I would have less manual cleanup work", in contrast to the "making a bad latte with not enough foam, that doesn't ruin any of the other customers' lattes" from my Safeway-Starbucks-kiosk days)
+
+This is genuinely pretty bad
+
+I should do some of the logging-package prototyping stuff on the weekend (it looks fun/interesting enough such that I might be able to psych myself into thinking of it as a "fun" project)
+
+30 April saying, essentially (and sincerely), "Oh man oh jeez, Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're the only ones backing me up on this incredibly basic philosophy thing and I don't feel like I have anywhere else to go."
+
+(In a darkly adorable twist, Mike and Alicorn's two-year-old son Merlin was reportedly saying the next day that Kelsey doesn't like his daddy, which was confusing until it was figured out he had heard Kelsey talking about why she doesn't like Michael Vassar.)
+
+Anyway, all of this is leading up to a psychological hypothesis (maybe this is "obvious", but I hadn't thought about it before): when people see someone wavering between their coalition and a rival coalition, they're motivated to offer a few concessions to keep the wavering person on their side. Kelsey could afford to not-understand (pace Upton Sinclair) the "words need to carve reality at the joints" thing when it was just me freaking out alone, but "got it" almost as soon as I could credibly threaten to walk (defect to a coalition of people she dislikes)—and maybe my "closing thoughts" email had a similar effect on Eliezer (assuming he otherwise wouldn't have spontaneously tweeted something about the univariate fallacy two weeks later)??
+
+This probably doesn't work if you repeat it (or if you try to do it consciously)?
+
+> I keep scratching my head here Zack. Weaker language on this would just be weaker. We aren't dealing with a problem of pedagogy here. We are dealing with a group of people who literally got together and built a movement around the need to do charity non-fraudulently but who were eventually shaped by the feedback systems they were in into doing exactly the thing they built their movement in opposition to while claiming that "anyone else would have done it and anyway some of us are vegans". Nobody is endorsing punishing people for doing things that anyone else would have done, but we are asking them to stop preventing others from trying to do actual charity and actual science, which they are currently actually opposing in order to defend their egos.
+
+Yes, this is a lame rationalization: if everyone actually knew, what would the act be for? Present-day senile!Eliezer wrote of people being "pseudo-fooled in part of themselves even when they are not really-fooled" as if it were no big deal, but 2007!Eliezer identified people being pseudo-fooled as a bad thing. I'll add this to my post-ideas list (working title: "Fake Common Knowledge").
+
+he seems to be conflating transhumanist optimism with "causal reality", and then tone-policing me when I try to model good behavior of what means-end reasoning about causal reality actually looks like. This ... seems pretty cultish to me?? Like, it's fine and expected for this grade of confusion to be on the website, but it's more worrisome when it's coming from the mod team.
+
+I talked with Ray from 7 to 10:30 tonight. My intellectual performance was not so great and I did a lot of yelling and crying. Ray thinks that people need to feel safe before they can seek truth, because otherwise they'll distort stuff until they can feel safe. I think this is mostly true: it's not a coincidence that I put out a poor intellectual performance during the same conversations that I do a lot of yelling and crying. But then Ray goes on to say something about stag hunts and coordination, and I'm like, "In slogan form, 'Telling the truth is not a coordination problem!' You can just unilaterally tell the truth!" And ... it doesn't seem to land?
+
+I wish I hadn't done so much yelling and crying with Ray (it's undignified, and it makes me stupid), but when people seem to be pretty explicitly denying the possibility of actually reasoning with them, it's as if I don't perceive any alternative but to express myself in a language they might have a better hope of understanding
+
+Also, I think it's pretty ironic that this happened on a post that was explicitly about causal reality vs. social reality! It's entirely possible that I wouldn't feel inclined to be such a hardass about "Whether I respect you is off-topic" if it weren't for that prompt!