From: M. Taylor Saotome-Westlake Date: Sun, 15 Jan 2023 04:20:33 +0000 (-0800) Subject: memoir: filling in TODO blocks ... X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=8378c21056f3a786c8698bc17daeb82dfa13262d;p=Ultimately_Untrue_Thought.git memoir: filling in TODO blocks ... --- diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md index 7d8fc0a..cee83f5 100644 --- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md @@ -9,9 +9,9 @@ Status: draft > > —Zora Neale Hurston -Recapping our story so far—in a previous post, ["Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems"](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/), I told the the part about how I've "always" (since puberty) had this obsessive sexual fantasy about being magically transformed into a woman and also thought it was immoral to believe in psychological sex differences, until I got set straight by these really great Sequences of blog posts by Eliezer Yudkowsky, which taught me (incidentally, among many other things) [how absurdly unrealistic my obsessive sexual fantasy was given merely human-level technology](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), and that it's actually immoral _not_ to believe in psychological sex differences [given that](https://www.lesswrong.com/tag/litany-of-tarski) psychological sex differences are actually real. 
In a subsequent post, ["Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer"](/2022/TODO/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/), I told the part about how, in 2016, everyone in my systematically-correct-reasoning community up to and including Eliezer Yudkowsky suddenly started claiming that guys like me might actually be women in some unspecified metaphysical sense, and insisted on playing dumb when confronted with alternative explanations of the relevant phenomena, until I eventually had a stress- and sleep-deprivation-induced delusional nervous breakdown, got sent to psychiatric jail once, and then went crazy again a couple months later. +Recapping our Whole Dumb Story so far—in a previous post, ["Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems"](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/), I told the part about how I've "always" (since puberty) had this obsessive sexual fantasy about being magically transformed into a woman and also thought it was immoral to believe in psychological sex differences, until I got set straight by these really great Sequences of blog posts by Eliezer Yudkowsky, which taught me (incidentally, among many other things) [how absurdly unrealistic my obsessive sexual fantasy was given merely human-level technology](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), and that it's actually immoral _not_ to believe in psychological sex differences [given that](https://www.lesswrong.com/tag/litany-of-tarski) psychological sex differences are actually real.
In a subsequent post, "Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer", I told the part about how, in 2016, everyone in my systematically-correct-reasoning community up to and including Eliezer Yudkowsky suddenly started claiming that guys like me might actually be women in some unspecified metaphysical sense, and insisted on playing dumb when confronted with alternative explanations of the relevant phenomena, until I eventually had a stress- and sleep-deprivation-induced delusional nervous breakdown, got sent to psychiatric jail once, and then went crazy again a couple months later. -That's not the really egregious part of the story. The thing is, psychology is a complicated empirical science: no matter how "obvious" I might think something is, I have to admit that I could be wrong—[not just as an obligatory profession of humility, but _actually_ wrong in the real world](https://www.lesswrong.com/posts/GrDqnMjhqoxiqpQPw/the-proper-use-of-humility). If my fellow rationalists merely weren't sold on the autogynephilia and transgender thing, I would certainly be disappointed, but it's definitely not grounds to denounce the entire community as a failure or a fraud. And indeed, I _did_ [end up moderating my views](/2022/Jul/the-two-type-taxonomy-is-a-useful-approximation-for-a-more-detailed-causal-model/) compared to the extent to which my thinking in 2016–7 took Blanchard–Bailey–Lawrence as received truth. (At the same time, I don't particularly regret saying what I said in 2016–7, because Blanchard–Bailey–Lawrence is still very obviously [_directionally_ correct](/2022/Jul/the-two-type-taxonomy-is-a-useful-approximation-for-a-more-detailed-causal-model/) compared to the nonsense everyone else was telling me.) +That's not the really egregious part of the story. 
The thing is, psychology is a complicated empirical science: no matter how "obvious" I might think something is, I have to admit that I could be wrong—[not just as an obligatory profession of humility, but _actually_ wrong in the real world](https://www.lesswrong.com/posts/GrDqnMjhqoxiqpQPw/the-proper-use-of-humility). If my fellow rationalists merely weren't sold on the autogynephilia and transgender thing, I would certainly be disappointed, but it's definitely not grounds to denounce the entire community as a failure or a fraud. And indeed, I _did_ [end up moderating my views quite a bit](/2022/Jul/the-two-type-taxonomy-is-a-useful-approximation-for-a-more-detailed-causal-model/) compared to the extent to which my thinking in 2016–7 took Blanchard–Bailey–Lawrence as received truth. (At the same time, I don't particularly regret saying what I said in 2016–7, because Blanchard–Bailey–Lawrence is still very obviously [_directionally_ correct](/2022/Jul/the-two-type-taxonomy-is-a-useful-approximation-for-a-more-detailed-causal-model/) compared to the nonsense everyone else was telling me.) But a striking pattern in my attempts to argue with people about the two-type taxonomy in late 2016 and early 2017 was the tendency for the conversation to get _derailed_ on some variation of, "Well, the word _woman_ doesn't necessarily mean that," often with a link to ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/), a November 2014 post by Scott Alexander arguing that because categories exist in our model of the world rather than the world itself, there's nothing wrong with simply _defining_ trans people to be their preferred gender in order to alleviate their dysphoria. @@ -195,7 +195,7 @@ It seemed better to try to clear this up in private. 
I still had Yudkowsky's ema The monetary offer, admittedly, was awkward: I included another paragraph clarifying that any payment was only to get his attention, and not _quid pro quo_ advertising, and that if he didn't trust his brain circuitry not to be corrupted by money, then he might want to reject the offer on those grounds and only read the post if he expected it to be genuinely interesting. -Again, I realize this must seem weird and cultish to any normal people reading this. (Paying some blogger you follow one grand just to _read_ one of your posts? What? Why? Who _does_ that?) To this, I again refer to [the reasons justifying my 2016 cheerful price offer](/2022/TODO/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#cheerful-price-reasons)—and that, along with tagging in Anna and Michael, who I thought Yudkowsky respected, it was a way to signal that I _really really really didn't want to be ignored_, which I assumed was the default outcome. Surely an ordinary programmer such as me was as a mere _worm_ in the presence of the great Eliezer Yudkowsky. I wouldn't have had the audacity to contact him at _all_, about _anything_, if I didn't have Something to Protect. +Again, I realize this must seem weird and cultish to any normal people reading this. (Paying some blogger you follow one grand just to _read_ one of your posts? What? Why? Who _does_ that?) To this, I again refer to the reasons justifying my 2016 cheerful price offer—and that, along with tagging in Anna and Michael, who I thought Yudkowsky respected, it was a way to signal that I _really really really didn't want to be ignored_, which I assumed was the default outcome. Surely an ordinary programmer such as me was as a mere _worm_ in the presence of the great Eliezer Yudkowsky. I wouldn't have had the audacity to contact him at _all_, about _anything_, if I didn't have Something to Protect.
Anna didn't reply, but I apparently did interest Michael, who chimed in on the email thread to Yudkowsky. We had a long phone conversation the next day lamenting how the "rationalists" were dead as an intellectual community. @@ -217,7 +217,7 @@ In part of the Dumb Story that follows, I'm going to describe several times when It seems particularly important to lay out these principles of adherence to privacy norms in connection to my attempts to contact Yudkowsky, because part of what I'm trying to accomplish in telling this Whole Dumb Story is to deal reputational damage to Yudkowsky, which I claim is deserved. (We want reputations to track reality. If you see Carol exhibiting a pattern of intellectual dishonesty, and she keeps doing it even after you try talking to her about it privately, you might want to write a blog post describing the pattern in detail—not to _hurt_ Carol, particularly, but so that everyone _else_ can make higher-quality decisions about whether they should believe the things that Carol says.) Given that motivation of mine, it seems important that I only try to hang Yudkowsky with the rope of what he said in public, where you can click the links and read the context for yourself. In the Dumb Story that follows, I _also_ describe some of my correspondence with Scott Alexander, but that doesn't seem sensitive in the same way, because I'm not particularly trying to deal reputational damage to Alexander in the same way. (Not because Scott performed well, but because one wouldn't really have _expected_ Scott to perform well in this situation; Alexander's reputation isn't so direly in need of correction.) 
-In accordance with the privacy-norm-adherence policy just described, I don't think I should say whether Yudkowsky replied to Michael's and my emails, nor ([again](/2022/TODO/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#cheerful-price-privacy-constraint)) whether he accepted the cheerful price money, because any conversation that may or may not have occured would have been private. But what I _can_ say, because it was public, is that we saw [this addition to the Twitter thread](https://twitter.com/ESYudkowsky/status/1068071036732694529): +In accordance with the privacy-norm-adherence policy just described, I don't think I should say whether Yudkowsky replied to Michael's and my emails, nor (again) whether he accepted the cheerful price money, because any conversation that may or may not have occurred would have been private. But what I _can_ say, because it was public, is that we saw [this addition to the Twitter thread](https://twitter.com/ESYudkowsky/status/1068071036732694529): > I was sent this (by a third party) as a possible example of the sort of argument I was looking to read: [http://unremediatedgender.space/2018/Feb/the-categories-were-made-for-man-to-make-predictions/](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/). Without yet judging its empirical content, I agree that it is not ontologically confused. It's not going "But this is a MAN so using 'she' is LYING." @@ -413,7 +413,7 @@ This is the part where I began to ... overheat. I tried ("tried") to focus on my My dayjob boss made it clear that he was expecting me to have code for my current Jira tickets by noon the next day, so I deceived myself into thinking I could successfully accomplish that by staying at the office late. -(Maybe I could have caught up, if it was just a matter of the task being slightly harder than anticipated and I weren't psychologically impaired. The problem was that focus is worth 30 IQ points, and an IQ 100 person _can't do my job_.)
+(Maybe I could have caught up, if it was just a matter of the task being slightly harder than anticipated and I weren't psychologically impaired from being hyper-focused on the religious war. The problem was that focus is worth 30 IQ points, and an IQ 100 person _can't do my job_.) I was in so much (psychological) pain. Or at least—as I noted in one of a series of emails to my posse that night—I felt motivated to type the sentence, "I'm in so much (psychological) pain." I'm never sure how to interpret my own self-reports, because even when I'm really emotionally trashed (crying, shaking, randomly yelling, _&c_.), I think I'm still noticeably _incentivizable_: if someone were to present a credible threat (like slapping me and telling me to snap out of it), then I would be able to calm down: there's some sort of game-theory algorithm in the brain that subjectively feels genuine distress (like crying or sending people too many hysterical emails) but only when it can predict that it will be either rewarded with sympathy or at least tolerated. (Kevin Simler: [tears are a discount on friendship](https://meltingasphalt.com/tears/).) @@ -455,7 +455,7 @@ The language I spoke was _mostly_ educated American English, but I relied on sub Maybe that's why I felt like I had to stand my ground and fight for the world I was made in, even though the contradiction between the war effort and my general submissiveness was having me making crazy decisions. -Michael said that a reason to make a stand here in "the community" was that if we didn't, the beacon of "rationalism" would continue to lure and mislead others, but that more importantly, we needed to figure out how to win this kind of argument decisively, as a group; we couldn't afford to accept a _status quo_ of accepting defeat when faced with bad faith arguments _in general_.
Ben reported writing to Scott to ask him to alter the beacon so that people like me wouldn't think "the community" was the place to go for literally doing the rationality thing anymore. +Michael said that a reason to make a stand here in "the community" was that if we didn't, the beacon of "rationalism" would continue to lure and mislead others, but that more importantly, we needed to figure out how to win this kind of argument decisively, as a group; we couldn't afford to accept a _status quo_ of accepting defeat when faced with bad faith arguments _in general_. Ben reported writing to Scott to ask him to alter the beacon so that people like me wouldn't think "the community" was the place to go for literally doing the rationality thing anymore. As it happened, the next day, Wednesday, we saw these Tweets from @ESYudkowsky, linking to a _Quillette_ article interviewing Lisa Littman on her work on rapid onset gender dysphoria: diff --git a/content/drafts/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer.md b/content/drafts/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer.md index b80219b..8acf0f6 100644 --- a/content/drafts/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer.md +++ b/content/drafts/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer.md @@ -231,7 +231,7 @@ Another wrote a comment in one discussion condemning "autogynephilia discourse" Was it rude of me to confront her on the contradiction in her PMs? Yes, it was extremely rude; all else being equal, I would prefer _not_ to probe into other people's private lives and suggest that they're lying to themselves. But when they lie to the public, that affects _me_, and my attempts to figure out _my_ life. Is it a conscious political ploy, I asked her, or are people _really_ unable to entertain the hypothesis that their beautiful pure self-identity feelings are causally related to the fetish? 
If it's a conscious political ploy, [I wished someone would just say, "Congratulations, you figured out the secret, now keep quiet about it or else,"](/2016/new-clothes/) rather than trying to _undermine my connection to reality_; I wasn't trying to hurt anyone, but this was _really personally disturbing_. -She said that she had to deal with enough invalidation already, that she had her own doubts and concerns but would only discuss them with people who shared her views. Fair enough—I'm not entitled to talk to anyone who doesn't want to talk to me, even if I personally find it pathetic that grown adults in the so-called "rationalist" community need to be protect themselves from "invalidation". +She said that she had to deal with enough invalidation already, that she had her own doubts and concerns but would only discuss them with people who shared her views. Fair enough—I'm not entitled to talk to anyone who doesn't want to talk to me, even if I personally find it pathetic that grown adults need to protect themselves from "invalidation". The privately-sane responses were more interesting. "People are crazy about metaphysics," one trans woman told me. "That's not new. Compare with transubstantiation and how much scholarly work went in to trying to square it with natural materialism. As for causality, I think it's likely that the true explanation will not take the shape of an easily understood narrative." @@ -435,13 +435,15 @@ reply— I met Jessica in March -[On my last day at SwiftStack, I said that I was taking a sabbatical from my software engineering career to become a leading intellectual figure of the alternative right. That was a joke, but not one that I would have made after Charlottesville. August 2017 https://en.wikipedia.org/wiki/Unite_the_Right_rally -http://benjaminrosshoffman.com/guilt-shame-and-depravity/ -] + + +I decided to quit my dayjob. I had more than enough savings to take some months to learn some more math and work on this blog.
(Recent experiences had made me more skeptical of earning-to-give as an altruistic intervention. If I didn't trust institutions to do what they claimed to do, there was less reason not to spend my San Francisco software engineering fortune on buying free time for myself.) + +At standup meeting on my last day, I told my coworkers that I was taking a sabbatical from my software engineering career to become a leading intellectual figure of the alternative right. That was a joke (ironically using the label "alt-right" to point to my break with liberal orthodoxy), although after the [Charlottesville incident](https://en.wikipedia.org/wiki/Unite_the_Right_rally) later that year, I would look back at that moment with a little bit of [shame](http://benjaminrosshoffman.com/guilt-shame-and-depravity/) at how the joke hits differently in retrospect. + /2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/ [TODO: ... continue harvesting email to see what happened in April] [TODO: credit assignment ritual ($18200 credit-assignment ritual): $5K to Michael, $1200 each to trans widow friend, 3 care team members (Alicorn Sarah Anna), Ziz, Olivia, and Sophia, $400 each to Steve, A.M., Watson, "Wilhelm", Jonah, James, Ben, Kevin, Alexei (declined), Andrew, Divia, Lex, Devi] - diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md index 14bbd27..a6136bc 100644 --- a/content/drafts/if-clarity-seems-like-death-to-them.md +++ b/content/drafts/if-clarity-seems-like-death-to-them.md @@ -13,9 +13,13 @@ Status: draft [^egan-paraphrasing]: The original quote says "one hundred thousand straights" ... "gay community" ... "gay and lesbian" ... "franchise rights on homosexuality" ... "unauthorized queer."
-[TODO: recap previous posts] +Recapping our Whole Dumb Story so far: in a previous post, "Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems", I told you about how I've always (since puberty) had this obsessive erotic fantasy about being magically transformed into a woman and how I used to think it was immoral to believe in psychological sex differences, until I read these really great Sequences of blog posts by Eliezer Yudkowsky which incidentally pointed out how absurdly impossible my obsessive fantasy was— -Given that the "rationalists" were fake and that we needed something better, there remained the question of what to do about that, and how to relate to the old thing, and the operators of the marketing machine for the old thing. +—none of which gooey private psychological minutiæ would be at all in the public interest to blog about _except that_, as I explained in a subsequent post, "Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer", in 2016, everyone in the community that formed around the Sequences suddenly decided for political reasons that guys like me might actually be women in some unspecified metaphysical sense, and the cognitive dissonance of having to rebut all this nonsense coming from everyone I used to trust drove me temporarily insane from stress and sleep deprivation— + +—which would have been the end of the story, _except that_, as I explained in a subsequent–subsequent post, "A Hill of Validity in Defense of Meaning", in late 2018, Eliezer Yudkowsky prevaricated about his own philosophy of language for the same political reasons, and my unsuccessful attempts to get him to clarify led me and allies to conclude that Yudkowsky and his "rationalists" were corrupt. + +Anyway, given that the "rationalists" were fake and that we needed something better, there remained the question of what to do about that, and how to relate to the old thing, and the operators of the marketing machine for the old thing.
_I_ had been hyperfocused on prosecuting my Category War, but the reason Michael and Ben and Jessica were willing to help me out on that, was not because they particularly cared about the gender and categories example, but because it seemed like a manifestation of a _more general_ problem of epistemic rot in "the community". @@ -131,25 +135,17 @@ MIRI researcher Scott Garrabrant wrote a post about how ["Yes Requires the Possi On 31 May 2019, a [draft of a new _Less Wrong_ FAQ](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for) included a link to "... Not Man for the Categories" as one of Scott Alexander's best essays. I argued that it would be better to cite _almost literally_ any other _Slate Star Codex_ post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: _either_ Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, _or_ one had no grounds to criticize me for calling it a blatant lie, because there's no rule of rationality that says I shouldn't draw the category boundaries of "blatant lie" that way. The mod [was persuaded on reflection](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for?commentId=oBDjhXgY5XtugvtLT), and "... Not Man for the Categories" was not included in the final FAQ. Another "victory." -But winning "victories" wasn't particularly comforting when I resented this becoming a political slapfight at all. +But winning "victories" wasn't particularly comforting when I resented this becoming a political slapfight at all. I thought a lot of the objections I faced in the derailed "Possibility of No" thread were insane. 
-[TODO: -a lot of the objections in the Vanessa thread were utterly insane -I wrote to Anna and Steven Kaas (who I was trying to "recruit" onto our side of the civil war)] +I wrote to Anna and Steven Kaas (who I was trying to "recruit" onto our side of the civil war). In ["What You Can't Say"](http://www.paulgraham.com/say.html), Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what _is_ one's real work. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except for this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason, should probably be doing _more_ fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improving our collective ability to reason", in a way that they don't hurt the mission of "make a lot of money writing software." -In "What You Can't Say", Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what _is_ one's real work. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except for this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason, should probably be doing _more_ fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improving our collective ability to reason", in a way that they don't hurt the mission of "make a lot of money writing software." 
+I said, I didn't know if either of them had caught the recent trainwreck on _Less Wrong_, but wasn't it _terrifying_ that the person who objected was a goddamned _MIRI research associate_? Not to demonize Kosoy, because [I was just as bad (if not worse) in 2008](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#hair-trigger-antisexism). The difference was that in 2008, we had a culture that could _beat it out of me_. -[TODO: I don't know if you caught the shitshow on Less Wrong, but isn't it terrifying that the person who objected was a goddamned _MIRI research associate_ ... not to demonize Vanessa because I was just as bad (if not worse) in 2008 (/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#hair-trigger-antisexism), but in 2008 we had a culture that could _beat it out of me_] +Steven objected that tractability and side effects matter, not just effect on the mission considered in isolation. For example, the Earth's gravitational field directly impedes NASA's mission, and doesn't hurt Paul Graham, but both NASA and Paul Graham should spend the same amount of effort (_viz._, zero) trying to reduce the Earth's gravity. -[TODO: Steven's objection: > the Earth's gravitational field directly hurts NASA's mission and doesn't hurt Paul Graham's mission, but NASA shouldn't spend any more effort on reducing the Earth's gravitational field than Paul Graham. +I agreed that tractability needs to be addressed, but I felt like—we were in a coal mine, and my favorite one of our canaries just died, and I was freaking out about this, and representatives of the Caliphate (Yudkowsky, Alexander, Anna, Steven) were like, Sorry, I know you were really attached to that canary, but it's just a bird; you'll get over it; it's not really that important to the coal-mining mission. -I agreed that tractability needs to be addressed, but 
-] - -I felt like—we were in a coal-mine, and my favorite one of our canaries just died, and I was freaking out about this, and represenatives of the Caliphate (Yudkowsky, Alexander, Anna, Steven) were like, Sorry, I know you were really attached to that canary, but it's just a bird; you'll get over it; it's not really that important to the coal-mining mission. - -And I was like, I agree that I was unreasonably emotionally attached to that particular bird, which is the direct cause of why I-in-particular am freaking out, but that's not why I expect _you_ to care. The problem is not the dead bird; the problem is what the bird is _evidence_ of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) Ben and Michael and Jessica claim to have spotted their own dead canaries. I feel like the old-timer Rationality Elders should be able to get on the same page about the canary-count issue? +And I was like, I agree that I was unreasonably emotionally attached to that particular bird, which was the direct cause of why I-in-particular was freaking out, but that's not why I expected _them_ to care. The problem was not the dead bird; the problem was what the bird was _evidence_ of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) Ben and Michael and Jessica claim to have spotted their own dead canaries. I felt like the old-timer Rationality Elders should be able to get on the same page about the canary-count issue? 
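(For readers who want the causal-graph claim made concrete: the fork "canary-death ← mine-gas → human-danger" can be checked with a few lines of Python. This is my own illustrative sketch, not code from any of the posts under discussion, and the probabilities are invented numbers chosen only to exhibit the structure.)

```python
# Toy Bayes net for the fork "canary-death <- mine-gas -> human-danger".
# The probabilities are hypothetical, chosen only to illustrate the structure.
import random

random.seed(0)

def sample():
    gas = random.random() < 0.10                        # common cause
    canary_dead = random.random() < (0.90 if gas else 0.05)
    danger = random.random() < (0.80 if gas else 0.02)  # no direct canary -> danger link
    return gas, canary_dead, danger

trials = [sample() for _ in range(100_000)]

p_danger = sum(d for _, _, d in trials) / len(trials)
dead = [(g, c, d) for g, c, d in trials if c]
p_danger_given_dead = sum(d for _, _, d in dead) / len(dead)

# Observing the dead canary raises the probability of human danger,
# purely by way of the unobserved common cause (the gas).
print(f"P(danger) = {p_danger:.3f}")
print(f"P(danger | canary dead) = {p_danger_given_dead:.3f}")
```

(Conditioning on the mine-gas variable would screen off the correlation, which is the sense in which the bird itself "doesn't matter": the canary is evidence of danger only through the common cause.)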
Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was [a fragment of group theory](http://zackmdavis.net/blog/2019/05/group-theory-for-wellness-i/), and [some probability/information theory](http://zackmdavis.net/blog/2019/05/the-typical-set/) that [actually turned out to super-relevant to understanding sex differences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point). So much for taking a break. @@ -184,7 +180,7 @@ Math and Wellness Month ended up being mostly a failure: the only math I ended u * secret posse member: level of social-justice talk makes me not want to interact with this post in any way ] -[TODO: https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/] +On 4 July, Scott Alexander published ["Some Clarifications on Rationalist Blogging"](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/), disclaiming any authority as a "rationalist" leader. ("I don't want to claim this blog is doing any kind of special 'rationality' work beyond showing people interesting problems [...] Insofar as [_Slate Star Codex_] makes any pretensions to being 'rationalist', it's a rationalist picnic and not a rationalist monastery.") I assumed this was inspired by Ben's request back in March that Scott "alter the beacon" so as to not confuse people about what the current-year community was. I appreciated it. [TODO: "AI Timelines Scam" * I still sympathize with the "mainstream" pushback against the scam/fraud/&c. language being used to include Elephant-in-the-Brain-like distortions @@ -214,15 +210,17 @@ Math and Wellness Month ended up being mostly a failure: the only math I ended u [TODO: State of Steven] -I still wanted to finish the memoir-post mourning the "rationalists", but I still felt psychologically constraint; I was still bound by internal silencing-chains. 
So instead, I mostly turned to a combination of writing bitter and insulting comments whenever I saw someone praise the "rationalists" collectively, and—more philosophy-of-language blogging! +I still wanted to finish the memoir-post mourning the "rationalists", but I still felt psychologically constrained; I was still bound by internal silencing-chains. So instead, I mostly turned to a combination of writing bitter and insulting comments whenever I saw someone praise the "rationalists" collectively, and—more philosophy blogging! In August 2019's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passing mention in "... Boundaries?": sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was of course generalized from a "pro-trans" argument that had occurred to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).) -In September 2019's "Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists" [TODO: ... I was surprised by how well this did (high karma, later included in the best-of-2019 collection); Ben and Jessica had discouraged me from bothering] +In September 2019's ["Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists"](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting), I presented a toy mathematical model of how censorship distorts group beliefs.
I was surprised by how well-received it was (high karma, Curated within a few days, later included in the Best-of-2019 collection), especially given that it was explicitly about politics (albeit at a meta level, of course). Ben and Jessica had discouraged me from bothering when I sent them a draft. + +In October 2019's ["Algorithms of Deception!"](https://www.lesswrong.com/posts/fmA2GJwZzYtkrAKYJ/algorithms-of-deception), I exhibited some toy Python code modeling different kinds of deception. A function that faithfully passes the observations it sees as input to another function lets the second function construct a well-calibrated probability distribution. But if the first function outright fabricates evidence, or selectively omits some evidence, or gerrymanders the categories by which it interprets its observations as evidence, the second function comes up with a worse (less accurate) probability distribution. -In October 2019's "Algorithms of Deception!", I explained [TODO: ...] +Also in October 2019, in ["Maybe Lying Doesn't Exist"](https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist), I replied to Scott Alexander's ["Against Lie Inflation"](https://slatestarcodex.com/2019/07/16/against-lie-inflation/), which was itself a generalized rebuke of Jessica's "The AI Timelines Scam". Scott thought Jessica was wrong to use language like "lie", "scam", _&c._ to describe someone being (purportedly) motivatedly wrong, but not necessarily _consciously_ lying. -Also in October 2019, in "Maybe Lying Doesn't Exist" [TODO: ... I was _furious_ at "Against Lie Inflation"—oh, so _now_ you agree that making language less useful is a problem?! But then I realized Scott actually was being consistent in his own frame: he's counting "everyone is angrier" (because of more frequent lying-accusations) as a cost; but, if everyone _is_ lying, maybe they should be angry!] +I was _furious_ when "Against Lie Inflation" came out.
(Furious at what I perceived as hypocrisy, not because I particularly cared about defending Jessica's usage.) Oh, so _now_ Scott agreed that making language less useful is a problem?! But on further consideration, I realized Alexander actually was being consistent in admitting appeals-to-consequences as legitimate. In objecting to the expanded definition of "lying", Alexander was counting "everyone is angrier" (because of more frequent lying-accusations) as a cost. Whereas on my philosophy, that wasn't a legitimate cost. (If everyone _is_ lying, maybe people _should_ be angry!) ------ diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md index 07a4bb9..448ce36 100644 --- a/notes/memoir-sections.md +++ b/notes/memoir-sections.md @@ -2,20 +2,20 @@ marked TODO blocks— ✓ Slate Star Codex went down [pt. 4] ✓ revise pseudonymity graf [pt. 2] ✓ Transgender Map mis-doxxed me [pt. 2] -_ "If Clarity" recap intro [pt. 4] +✓ last day at SwiftStack, alt-right quip [pt. 2] +✓ "If Clarity" recap intro [pt. 4] +✓ Steven's gravity objection [pt. 4] +✓ "Some Clarifications on Rationalist Blogging" [pt. 4] +✓ more philosophy of language blogging [pt. 4] _ "Agreeing with Stalin" recap intro [pt. 5] _ confronting Olivia [pt. 2] -_ last day at SwiftStack, alt-right quip [pt. 2] _ scuffle on "Yes Requires the Possibility" [pt. 4] -_ Steven's gravity objection [pt. 4] _ "Lesswrong.com is dead to me" [pt. 4] -_ "Some Clarifications on Rationalist Blogging" [pt. 4] _ AI timelines scam [pt. 4] _ secret thread with Ruby [pt. 4] _ progress towards discussing the real thing [pt. 4] _ epistemic defense meeting [pt. 4] _ State of Steven [pt. 4] -_ more philosophy of language blogging [pt. 4] _ Somni [pt. 4] _ rude maps [pt. 4] _ culture off the rails; my warning points to Vaniver [pt. 4] @@ -35,6 +35,10 @@ _ the Death With Dignity era [pt. 5] _ regrets, wasted time, conclusion [pt. 5] _ include Wilhelm "Gender Czar" conversation? [pt.
2] +not even blocked— +_ Re: on legitimacy and the entrepreneur; or, continuing the attempt to spread my sociopathic awakening onto Scott [pt. 2 somewhere] +_ running away to Burlingame; Hamilton tickets + bigger blocks— _ dath ilan and Eliezerfic fight _ Dolphin War finish @@ -50,10 +54,11 @@ New (bad) time estimate: 35 smaller TODO blocks × 3/day = 12 days 7 bigger blocks × day/2.5 = 17.5 days = 29.5 workdays × 7 sidereal days / 6 workdays = 34.4 sidereal days -= gaplass draft on 17 February?? += gapless draft on 17 February?? With internet available— +_ HEXACO model _ archive.is misdoxxing _ negative utilitarian Planecrash tag _ my Tweet about upgrading to a block @@ -92,6 +97,7 @@ _ Anna's claim that Scott was a target specifically because he was good, my coun _ Yudkowsky's LW moderation policy far editing tier— +_ D. also acknowledged AGP _ "no one else would have spoken" should have been a call-to-action to read more widely _ explain who Kay Brown is _ mention Will MacAskill's gimped "longtermism" somehow @@ -156,11 +162,18 @@ _ notice the symmetry where _both_ E and I want to partition the discussion with _ contract-drafting em, SSC blogroll is most of my traffic _ "Common Interest of Many Causes" and "Kolmogorov Complicity" offer directly contradictory strategies _ Vassar's about-face on gender +_ better introduction of S.K. _ risk of people bouncing off progressivism _ an AGP teen boy could at least consent to transition, and make plans based on knowing what the thing is (you'd actually want to go through a little bit of male puberty) _ figure out full timeline of which of my Less Wrong posts to mention _ update "80,000 words" refs with the near-final wordcount - +_ less afraid of hurting O. 
+_ better explain "lie inflation" ambiguity +_ backlink "I've decided to drop the pseudonym" to pen name drop post +_ backlink (/2022/TODO/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/) +_ backlink "I again refer to" Happy Price +_ backlink "(again) whether he accepted the Cheerful Price" +_ backlink "alter the beacon" terms to explain on first mention— _ Valinor @@ -2014,7 +2027,7 @@ Bostrom's apology for an old email—who is this written for?? Why get ahead, wh https://twitter.com/ESYudkowsky/status/1404697716689489921 > I have never in my own life tried to persuade anyone to go trans (or not go trans)—I don't imagine myself to understand others that much. -If you think it "sometimes personally prudent and not community-harmful" to got out of your way to say positive things about Republican candidates and never, ever say positive things about Democratic candidates (because you "don't see what the alternative is besides getting shot"), you can see why eople might regard you as a _Republican shill_—even if all the things you said were true, and even if you never told any specific individual, "You should vote Republican." +If you think it "sometimes personally prudent and not community-harmful" to go out of your way to say positive things about Republican candidates and never, ever say positive things about Democratic candidates (because you "don't see what the alternative is besides getting shot"), you can see why people might regard you as a _Republican shill_—even if all the things you said were true, and even if you never told any specific individual, "You should vote Republican." https://www.facebook.com/yudkowsky/posts/10154110278349228 > Just checked my filtered messages on Facebook and saw, "Your post last night was kind of the final thing I needed to realize that I'm a girl." 
@@ -2023,3 +2036,8 @@ https://www.facebook.com/yudkowsky/posts/10154110278349228 https://twitter.com/ESYudkowsky/status/1404821285276774403 > It is not trans-specific. When people tell me I helped them, I mostly believe them and am happy. ] + +the rats were supposed to be an alternative to academic orthodoxy (such that we could just jump to the correct decision theory without the political fighting needing to dethrone CDT), but we're still under the authority of the egregore + +(from October 2016 email to Scott) +This is not an advanced rationalist skill! This is the "distinguishing fantasy from reality" skill! People will quote your "Categories Were Made for the Man" in defense of the idea that someone can be biologically male, think of themselves as a boy, be thought of by others as a boy, and yet still actually have been a girl at the time by virtue of deciding to transition years later. I've been told that "Gender is a floating tag which has one substantial consequence, which is comfort of the people being addressed"! \ No newline at end of file