From: Zack M. Davis Date: Sun, 22 Oct 2023 05:37:59 +0000 (-0700) Subject: memoir: pt. 3 edit pass underway X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=fa2659be23b3ee69a6978315a5a346c5a9a943ef;p=Ultimately_Untrue_Thought.git memoir: pt. 3 edit pass underway --- diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md index de75d6e..33f7020 100644 --- a/content/drafts/if-clarity-seems-like-death-to-them.md +++ b/content/drafts/if-clarity-seems-like-death-to-them.md @@ -13,19 +13,21 @@ Status: draft [^egan-paraphrasing]: The original quote says "one hundred thousand straights" ... "gay community" ... "gay and lesbian" ... "franchise rights on homosexuality" ... "unauthorized queer." -Recapping our Whole Dumb Story so far: in a previous post, ["Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems"](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/), I told you about how I've always (since puberty) had this obsessive erotic fantasy about being magically transformed into a woman and how I used to think it was immoral to believe in psychological sex differences, until I read these really great Sequences of blog posts by Eliezer Yudkowsky which [incidentally pointed out how absurdly impossible my obsessive fantasy was](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) ... +Recapping our Whole Dumb Story so far: in a previous post, ["Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems"](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/), I told you about how I've always (since puberty) had this obsessive erotic fantasy about being magically transformed into a woman and how I used to think it was immoral to believe in psychological sex differences, until I read these great Sequences of blog posts by Eliezer Yudkowsky which [incidentally pointed out how absurdly impossible my obsessive fantasy was](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) ... —none of which gooey private psychological minutiæ would be in the public interest to blog about _except that_, as I explained in a subsequent post, ["Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer"](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/), around 2016, everyone in the community that formed around the Sequences suddenly decided that guys like me might actually be women in some unspecified metaphysical sense, and the cognitive dissonance from having to rebut all this nonsense coming from everyone I used to trust drove me [temporarily](/2017/Mar/fresh-princess/) [insane](/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/) from stress and sleep deprivation ... -—which would have been the end of the story, _except that_, as I explained in a subsequent–subsequent post, ["A Hill of Validity in Defense of Meaning"](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/), in late 2018, Eliezer Yudkowsky prevaricated about his own philosophy of language in the service of the gender-identity coalition, and my unsuccessful attempts to get him to clarify led me and allies to conclude that Yudkowsky and his "rationalists" were corrupt. 
+—which would have been the end of the story, _except that_, as I explained in a subsequent–subsequent post, ["A Hill of Validity in Defense of Meaning"](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/), in late 2018, Eliezer Yudkowsky prevaricated about his own philosophy of language in a way that looked optimized to suggest that people who disputed that men could be women in some unspecified metaphysical sense were philosophically confused, and my unsuccessful attempts to get him to clarify led me and my allies to conclude that Yudkowsky and his "rationalists" were corrupt. Anyway, given that the "rationalists" were fake and that we needed something better, there remained the question of what to do about that, and how to relate to the old thing. -_I_ had been hyperfocused on prosecuting my Category War, but the reason Michael Vassar and Ben Hoffman and Jessica Taylor were willing to help me out on that was not because they particularly cared about the gender and categories example, but because it seemed like a manifestation of a more general problem of epistemic rot in "the community". +I had been hyperfocused on prosecuting my Category War, but the reason Michael Vassar and Ben Hoffman and Jessica Taylor[^posse-boundary] were willing to help me out on that was not because they particularly cared about the gender and categories example, but because it seemed like a manifestation of a more general problem of epistemic rot in "the community". -Ben had [previously](http://benjaminrosshoffman.com/givewell-and-partial-funding/) [written](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/) a lot [about](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/) [problems](http://benjaminrosshoffman.com/against-responsibility/) [with](http://benjaminrosshoffman.com/against-neglectedness/) Effective Altruism. Jessica had had a bad time at MIRI, as she had told me back in March, and would [later](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam) [write](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe) [about](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards). To what extent were my thing, and Ben's thing, and Jessica's thing, manifestations of "the same" underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit? +[^posse-boundary]: Sarah Constantin and "Riley" had also been involved in reaching out to Yudkowsky, and were included in many subsequent discussions, but seemed like more marginal members of the group that was forming. -If there _was_ a real problem, I didn't feel like I had a good grasp on what it was specifically. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people, and get a correction on that specific point. (Actually, as we had just discovered, even that might be too much to hope for.) But _culture_ is the sum of lots and lots of little micro-actions by lots and lots of people. If your _entire culture_ has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who, to all appearances, are acting like they don't remember the old Way, or that they don't think anything has changed, or that they notice some changes but think the new way is better? 
It's not as simple as shouting, "Hey guys, Truth matters!"—any ideologue or religious person would agree with _that_. It's not feasible to litigate every petty epistemic crime in something someone said, and if you tried, someone who thought the culture was basically on track could accuse you of cherry-picking. If "culture" is a real thing at all—and it certainly seems to be—we are condemned to grasp it unclearly, relying on the brain's pattern-matching faculties to sum over thousands of little micro-actions as a [_gestalt_](https://en.wiktionary.org/wiki/gestalt), rather than having the kind of robust, precise representation a well-designed AI could compute plans with. +Ben had previously worked at GiveWell and had written a lot about problems with the effective altruism movement, in particular, EA-branded institutions making [incoherent](http://benjaminrosshoffman.com/givewell-and-partial-funding/) [decisions](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/) under the influence of incentives to [distort](http://benjaminrosshoffman.com/humility-argument-honesty/) [information](http://benjaminrosshoffman.com/honesty-and-perjury/) [in order to](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/) [seek](http://benjaminrosshoffman.com/against-neglectedness/) [control](http://benjaminrosshoffman.com/against-responsibility/). Jessica had previously worked at MIRI, where she was unnerved by under-evidenced paranoia about secrecy and [short AI timelines](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), and would later [write](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe) [about](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards) her experiences there. To what extent were my thing, and Ben's thing, and Jessica's thing, manifestations of "the same" underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit? + +If there was a real problem, I didn't feel like I had a good grasp on what it was specifically. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people, and get a correction on that specific point. (Actually, as we had just discovered, even that might be too much to hope for.) But culture is the sum of lots and lots of little micro-actions by lots and lots of people. If your entire culture has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who, to all appearances, are acting like they don't remember the old Way, or that they don't think anything has changed, or that they notice some changes but think the new way is better? It's not as simple as shouting, "Hey guys, Truth matters!" Any ideologue or religious person would agree with _that_. It's not feasible to litigate every petty epistemic crime in something someone said, and if you tried, someone who thought the culture was basically on track could accuse you of cherry-picking. 
If "culture" is a real thing at all—and it certainly seems to be—we are condemned to grasp it unclearly, relying on the brain's pattern-matching faculties to sum over thousands of little micro-actions as a [_gestalt_](https://en.wiktionary.org/wiki/gestalt), rather than having the kind of robust, precise representation a well-designed AI could compute plans with. Ben called the _gestalt_ he saw the Blight, after the rogue superintelligence in Vernor Vinge's _A Fire Upon the Deep_: the problem wasn't that people were getting dumber; it was that there was locally coherent coordination away from clarity and truth and towards coalition-building, which was validated by the official narrative in ways that gave it a huge tactical advantage; people were increasingly making decisions that were better explained by their political incentives rather than acting on coherent beliefs about the world—using and construing claims about facts as moves in a power game, albeit sometimes subject to genre constraints under which only true facts were admissible moves in the game. @@ -33,13 +35,13 @@ When I asked him for specific examples of MIRI or CfAR leaders behaving badly, h This seemed to me like the sort of thing where a particularly principled (naïve?) person might say, "That's _lying for political reasons!_ That's _contrary to the moral law!_" and most ordinary grown-ups would say, "Why are you so upset about this? That sort of strategic phrasing in press releases is just how the world works, and things could not possibly be otherwise." -I thought explaining the Blight to an ordinary grown-up was going to need _either_ lots of specific examples that were way more egregious than this (and more egregious than the examples in ["EA Has a Lying Problem"](https://srconstantin.github.io/2017/01/17/ea-has-a-lying-problem.html) or ["Effective Altruism Is Self-Recommending"](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/)), or somehow convincing the ordinary grown-up why "just how the world works" isn't good enough, and why we needed one goddamned place in the entire goddamned world (perhaps a private place) with _unusually high standards_. +I thought explaining the Blight to an ordinary grown-up was going to need either lots of specific examples that were way more egregious than this (and more egregious than the examples in Sarah Constantin's ["EA Has a Lying Problem"](https://srconstantin.github.io/2017/01/17/ea-has-a-lying-problem.html) or Ben's ["Effective Altruism Is Self-Recommending"](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/)), or somehow convincing the ordinary grown-up why "just how the world works" isn't good enough, and why we needed one goddamned place in the entire goddamned world (perhaps a private place) with _unusually high standards_. -The schism introduced new pressures on my social life. On 20 April 2019, I told Michael that I still wanted to be friends with people on both sides of the factional schism (in the frame where recent events were construed as a factional schism), even though I was on this side. Michael said that we should unambiguously regard Anna and Eliezer as criminals or enemy combatants (!!), that could claim no rights in regards to me or him. +The schism introduced new pressures on my social life. On 20 April 2019, I told Michael that I still wanted to be friends with people on both sides of the factional schism, even though I was on this side. 
Michael said that we should unambiguously regard Yudkowsky and CfAR president (and my personal friend of ten years) Anna Salamon as criminals or enemy combatants, who could claim no rights in regards to me or him.
 
-I don't think I "got" the framing at this time. War metaphors sounded Scary and Mean: I didn't want to shoot my friends! But the point of the analogy (which Michael explained, but I wasn't ready to hear until I did a few more weeks of emotional processing) was specifically that soliders on the other side of a war _aren't_ necessarily morally blameworthy as individuals:[^soldiers] their actions are being directed by the Power they're embedded in.
 
+I don't think I "got" the framing at this time. War metaphors sounded scary and mean: I didn't want to shoot my friends! But the point of the analogy (which Michael explained, but I wasn't ready to hear until I did a few more weeks of emotional processing) was specifically that soldiers on the other side of a war _aren't_ necessarily morally blameworthy as individuals:[^soldiers] their actions are being directed by the Power they're embedded in.
 
-[^soldiers]: At least, not blameworthy _in the same way_ as someone who committed the same violence as an individual.
 
+[^soldiers]: At least, not blameworthy in the same way as someone who committed the same violence as an individual.
 
 I wrote to Anna (Subject: "Re: the end of the Category War (we lost?!?!?!)"):
 
@@ -51,13 +53,13 @@ I wrote to Anna (Subject: "Re: the end of the Category War (we lost?!?!?!)"):
 
 -----
 
-I may have subconsciously pulled off an interesting political thing. In my final email to Yudkowsky on 20 April 2019 (Subject: "closing thoughts from me"), I had written—
 
+I may have subconsciously pulled off an interesting political maneuver. In my final email to Yudkowsky on 20 April 2019 (Subject: "closing thoughts from me"), I had written—
 
> If we can't even get a public consensus from our _de facto_ leadership on something _so basic_ as "concepts need to carve reality at the joints in order to make probabilistic predictions about reality", then, in my view, there's _no point in pretending to have a rationalist community_, and I need to leave and go find something else to do (perhaps whatever Michael's newest scheme turns out to be). I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?
 
-And as it happened, on 4 May 2019, Yudkowsky [re-Tweeted Colin Wright on the "univariate fallacy"](https://twitter.com/ESYudkowsky/status/1124751630937681922)—the point that group differences aren't a matter of any single variable—which was _sort of_ like the clarification I had been asking for. (Empirically, it made me feel a lot less personally aggrieved.) Was I wrong to interpet this as [another "concession" to me](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#proton-concession)? (Again, notwithstanding that the whole mindset of extracting "concessions" was corrupt and not what our posse was trying to do.)
 
+And as it happened, on 4 May 2019, Yudkowsky [re-Tweeted Colin Wright on the "univariate fallacy"](https://twitter.com/ESYudkowsky/status/1124751630937681922)—the point that group differences aren't a matter of any single variable—which was _sort of_ like the clarification I had been asking for. (Empirically, it made me feel less aggrieved.) Was I wrong to interpret this as [another "concession" to me](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#proton-concession)? 
(Again, notwithstanding that the whole mindset of extracting "concessions" was corrupt and not what our posse was trying to do.) -Separately, on 30 April 2019, I visited the house where "Meredith" and her husband Mike and Kelsey Piper and a few other people lived, which I'll call "Arcadia",[^named-houses] saying, essentially (and sincerely), Oh man oh jeez, Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're _the only ones backing me up_ on this _incredibly basic philosophy thing_ and I don't feel like I have anywhere else to _go_. The ensuing group conversation made some progress, but was mostly pretty horrifying. +Separately, on 30 April 2019, I visited the house where "Meredith" and her husband Mike and Kelsey Piper and some other people lived, which I'll call "Arcadia".[^named-houses] I said, essentially, [Oh man oh jeez](https://www.youtube.com/watch?v=q_eMvgNrQQE), Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're _the only ones backing me up_ on this _incredibly basic philosophy thing_ and I don't feel like I have anywhere else to _go_. This culminated in a group conversation with the entire house, which I found disturbing insofar as everyone else seemed to agree on things that I thought were clearly contrary to the spirit of the Sequences. [^named-houses]: It was common practice in our subculture to name group houses. My apartment was "We'll Name It Later." @@ -207,7 +209,7 @@ On 4 July 2019, Scott Alexander published ["Some Clarifications on Rationalist B Also in early July 2019, Jessica published ["The AI Timelines Scam"](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), arguing that the recent popularity of "short" (_e.g._, 2030) AI timelines was better explained by political factors, rather than any technical arguments: just as in previous decades, people had incentives to bluff and exaggerate about the imminence of AGI in order to attract resources to their own project. -(Remember, this was 2019. After seeing what GPT-3/PaLM/DALL-E/_&c._ could do during the ["long May 2020"](https://twitter.com/MichaelTrazzi/status/1635871679133130752), it's now looking to me like the short-timelines people had better intuitions than Jessica gave them credit for.) +(Remember, this was 2019. After seeing what GPT-3/PaLM/DALL-E/_&c._ could do during the "long May 2020", it's now looking to me like the short-timelines people had better intuitions than Jessica gave them credit for.) I still sympathized with the "mainstream" pushback against using "scam"/"fraud"/"lie"/_&c._ language to include motivated [elephant-in-the-brain](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain)-like distortions. I conceded that this was a _boring_ semantic argument, but I feared that until we invented better linguistic technology, the _boring_ semantic argument was going to _continue_ sucking up discussion bandwidth with others when it didn't need to. 
diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md index 8aa8be2..22fda45 100644 --- a/notes/memoir-sections.md +++ b/notes/memoir-sections.md @@ -9,15 +9,16 @@ _ the hill he wants to die on (conclusion for "Zevi's Choice"??) _ Tail vs. Bailey / Davis vs. Yudkowsky analogy (new block somewhere) _ mention that "Not Man for the Categories" keeps getting cited +first edit pass bookmark: "In an adorable twist" + pt. 3 edit tier— -_ fullname Taylor and Hoffman at start of pt. 3 -_ footnote clarifying that "Riley" and Sarah weren't core members of the group, despite being included on some emails? -_ be more specific about Ben's anti-EA and Jessica's anti-MIRI things, perhaps in footnotes +✓ fullname Taylor and Hoffman at start of pt. 3 +✓ footnote clarifying that "Riley" and Sarah weren't core members of the group, despite being included on some emails? +✓ be more specific about Ben's anti-EA and Jessica's anti-MIRI things _ Ben on "locally coherent coordination": use direct quotes for Ben's language—maybe rewrite in my own language (footnote?) as an understanding test -_ set context for "EA Has a Lying Problem" (written by Sarah, likely with Michael's influence—maybe ask Sarah) -_ clarify schism (me and Vassar bros leaving the EA/rat borg?) -_ set context for Anna on first mention in the post -_ more specific on "mostly pretty horrifying" and group conversation with the whole house +_ ask Sarah about context for "EA Has a Lying Problem"? +✓ set context for Anna on first mention in the post +✓ more specific on "mostly pretty horrifying" and group conversation with the whole house _ paragraph to explain the cheerful price bit _ cut words from the "Yes Requires" slapfight? _ better introduction of Steven Kaas diff --git a/notes/memoir_wordcounts.csv b/notes/memoir_wordcounts.csv index 6ac901f..f3c5bc9 100644 --- a/notes/memoir_wordcounts.csv +++ b/notes/memoir_wordcounts.csv @@ -550,4 +550,6 @@ 10/18/2023,118932,0 10/19/2023,118990,58 10/20/2023,118990,0 -10/21/2023,, +10/21/2023,119115,125 +10/22/2023,,0 +10/23/2023,,