From: M. Taylor Saotome-Westlake
Date: Thu, 15 Jun 2023 21:42:28 +0000 (-0700)
Subject: there's definitely not going to be a pt. 7, dummy!!
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=a4f88e90034507fc8d1e5a794492fcca8aa396e5;p=Ultimately_Untrue_Thought.git

there's definitely not going to be a pt. 7, dummy!!

I was so angry about the mistreatment of Said that I was thinking of making a new part about it, but I'm over my drama budget and over my navel-gazing budget: there are only so many tens of thousands of words I can spend on this. I can mention the early-2023 moderation drama in a few paragraphs in a conclusion, but it's obviously not worth the effort of a play-by-play.
---

diff --git a/content/drafts/the-last-indictment.md b/content/drafts/the-last-indictment.md
deleted file mode 100644
index f84b540..0000000
--- a/content/drafts/the-last-indictment.md
+++ /dev/null
@@ -1,44 +0,0 @@
-Title: The Last Indictment
-Author: Zack M. Davis
-Date: 2023-07-01 11:00
-Category: commentary
-Tags: autogynephilia, bullet-biting, cathartic, Eliezer Yudkowsky, Scott Alexander, epistemic horror, my robot cult, personal, sex differences, two-type taxonomy, whale metaphors
-Status: draft
-
-> Would you smile to see him dead? Would you say, "We are rid of this obscenist"? Fools! The corpse would laugh at you from its cold eyelids! The motionless lips would mock, and the solemn hands, the pulseless, folded hands, in their quietness would write the last indictment, which neither Time nor you can efface. Kill him! And you write his glory and your shame! Said Achmiz in his felon stripes stands far above you now, and Said Achmiz _dead_ will live on, immortal in the race he died to free! Kill him!
->
-> —[Voltairine de Cleyre](https://praxeology.net/VC-SS.htm) (paraphrased)
-
-[TODO—early 2023 moderation drama
- * In early 2023, I was trying to finish up this memoir, but while procrastinating on that, I ended up writing a few other posts for _Less Wrong_; I thought the story of my war with the "rationalists" was relevantly "over"; I didn't anticipate things getting any "worse"
- * I happened to see that Duncan Sabien's "Basics of Rationalist Discourse" was published
- * Backstory: Sabien is a former CfAR employee whose Facebook posts I used to comment on. He had a history of getting offended over things that I didn't think were important—all the way back to our very first interaction in 2017 (I remember being in Portland using Facebook/Messenger on my phone)
-
-    ...
-
-
- * I was reluctant to ping Oli (the way I pung Babcock and Pace) because I still "owed" him for the comment on "Challenges", but ultimately ended up sending a Twitter DM just after the verdict (when I saw that he had very-recent reply Tweets and was thus online); I felt a little bit worse about that one (the "FYI I'm at war"), but I think I de-escalated OK and he didn't seem to take it personally
-
-    ...
-
- * Said is braver than me along some dimensions; the reason he's in trouble and I'm not, even though we were both fighting with Duncan, is that I was more "dovish"—when Duncan attacked, I focused on defense and withheld my "offensive" thoughts; Said's points about Duncan's blocking psychology were "offensive"
-
-    ...
-
- * I'm proud of the keeping-my-cool performance when Duncan was mad at me, less proud of my performance fighting for Said so far
-
-    ...
-
- * In the Ruby slapfight, I was explicit about "You shouldn't be making moderation decisions based on seniority"—this time, I've moved on to just making decisions based on seniority; if we're doing consequentialism based on how to attract people to the website, it's clear that there are no purer standards left to appeal to
-]
-
-After this, the AI situation is looking worrying enough that I'm thinking I should try to do some more direct xrisk-reduction work, although I haven't definitely selected any particular job or project. (It probably won't matter, but it will be dignified.) Now that the shape of the threat is on the horizon, I think I'm less afraid of being directly involved. Something about having large language models to study in the 'twenties is—grounding, compared to the superstitious fears of the paperclip boogeyman of my nightmares in the 'teens.
-
-Like all intellectuals, as a teenager I imagined that I would write a book. It was always going to be about gender, but I was vaguely imagining a novel, which never got beyond vague imaginings. That was before the Sequences. I'm 35 years old now. I think my intellectual life has succeeded in ways I didn't know how to imagine, before. I think my past self would be proud of this blog—140,000 words of blog posts stapled together is _morally_ a book—once he got over the shock of heresy.
-
-[TODO conclusion, cont'd—
- * Do I have regrets about this Whole Dumb Story? A lot, surely—it's been a lot of wasted time. But it's also hard to say what I should have done differently; I could have listened to Ben more and lost faith in Yudkowsky earlier, but he had earned a lot of benefit of the doubt?
- * even young smart AGPs who can appreciate my work have still gotten pinkpilled
- * Jonah had told me that my planning horizon was too short—like the future past a year wasn't real to me. (This plausibly also explains my impatience with college.) My horizon is starting to broaden as AI timelines shorten
- * less drama (in my youth, I would have been proud that at least this vice was a feminine trait; now, I prefer to be good even if that means being a good man)
-]
diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md
index a1dc8d9..ce6b38d 100644
--- a/notes/memoir-sections.md
+++ b/notes/memoir-sections.md
@@ -2638,3 +2638,40 @@ https://twitter.com/ESYudkowsky/status/1668419201101615105
 > As usual, I got there first and solved the relatively easy philosophy problems, so the sensible people have nothing to talk about, and the unsensible ones can't just use my answer sheet.
 
 (I thought about apologizing if some of the content was "weird" or offensive, but I figured if you've been a professional editor for 15 years and list memoirs as a specialty, you've probably seen everything.)
+
+After this, the AI situation is looking worrying enough that I'm thinking I should try to do some more direct xrisk-reduction work, although I haven't definitely selected any particular job or project. (It probably won't matter, but it will be dignified.) Now that the shape of the threat is on the horizon, I think I'm less afraid of being directly involved. Something about having large language models to study in the 'twenties is—grounding, compared to the superstitious fears of the paperclip boogeyman of my nightmares in the 'teens.
+
+Like all intellectuals, as a teenager I imagined that I would write a book. It was always going to be about gender, but I was vaguely imagining a novel, which never got beyond vague imaginings. That was before the Sequences. I'm 35 years old now. I think my intellectual life has succeeded in ways I didn't know how to imagine, before. I think my past self would be proud of this blog—140,000 words of blog posts stapled together is _morally_ a book—once he got over the shock of heresy.
+
+[TODO conclusion, cont'd—
+ * Do I have regrets about this Whole Dumb Story? A lot, surely—it's been a lot of wasted time. But it's also hard to say what I should have done differently; I could have listened to Ben more and lost faith in Yudkowsky earlier, but he had earned a lot of benefit of the doubt?
+ * even young smart AGPs who can appreciate my work have still gotten pinkpilled
+ * Jonah had told me that my planning horizon was too short—like the future past a year wasn't real to me. (This plausibly also explains my impatience with college.) My horizon is starting to broaden as AI timelines shorten
+ * less drama (in my youth, I would have been proud that at least this vice was a feminine trait; now, I prefer to be good even if that means being a good man)
+]
+
+> Would you smile to see him dead? Would you say, "We are rid of this obscenist"? Fools! The corpse would laugh at you from its cold eyelids! The motionless lips would mock, and the solemn hands, the pulseless, folded hands, in their quietness would write the last indictment, which neither Time nor you can efface. Kill him! And you write his glory and your shame! Said Achmiz in his felon stripes stands far above you now, and Said Achmiz _dead_ will live on, immortal in the race he died to free! Kill him!
+>
+> —[Voltairine de Cleyre](https://praxeology.net/VC-SS.htm) (paraphrased)
+
+[TODO—early 2023 moderation drama
+ * In early 2023, I was trying to finish up this memoir, but while procrastinating on that, I ended up writing a few other posts for _Less Wrong_; I thought the story of my war with the "rationalists" was relevantly "over"; I didn't anticipate things getting any "worse"
+ * I happened to see that Duncan Sabien's "Basics of Rationalist Discourse" was published
+ * Backstory: Sabien is a former CfAR employee whose Facebook posts I used to comment on. He had a history of getting offended over things that I didn't think were important—all the way back to our very first interaction in 2017 (I remember being in Portland using Facebook/Messenger on my phone)
+
+    ...
+ + * I was reluctant to ping Oli (the way I pung Babcock and Pace) because I still "owed" him for the comment on "Challenges", but ultimately ended up sending a Twitter DM just after the verdict (when I saw that he had very-recent reply Tweets and was thus online); I felt a little bit worse about that one (the "FYI I'm at war"), but I think I de-escalated OK and he didn't seem to take it personally + + ... + + * Said is braver than me along some dimensions; the reason he's in trouble and I'm not, even though we were both fighting with Duncan, is that I was more "dovish"—when Duncan attacked, I focused on defense and withheld my "offensive" thoughts; Said's points about Duncan's blocking psychology were "offensive" + + ... + + * I'm proud of the keeping-my-cool performance when Duncan was mad at me, less proud of my performance fighting for Said so far + + ... + + * In the Ruby slapfight, I was explicit about "You shouldn't be making moderation decisions based on seniority"—this time, I've moved on to just making decisions based on seniority; if we're doing consequentialism based on how to attract people to the website, it's clear that there are no purer standards left to appeal to +]