From: M. Taylor Saotome-Westlake
Date: Wed, 24 May 2023 02:12:45 +0000 (-0700)
Subject: memoir: Vassar discourse
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=77cdcd644df9756a6767572a15b8f85c41f20264;p=Ultimately_Untrue_Thought.git

memoir: Vassar discourse
---
diff --git a/content/drafts/people-evolved-social-control-mechanisms-and-rocks.md b/content/drafts/people-evolved-social-control-mechanisms-and-rocks.md
index 8b3e56d..536cbcb 100644
--- a/content/drafts/people-evolved-social-control-mechanisms-and-rocks.md
+++ b/content/drafts/people-evolved-social-control-mechanisms-and-rocks.md
@@ -580,7 +580,17 @@ The intake assessment describes me as "retreat[ing] into highly intellectualized
 
 -----
 
-I started talking more with Michael Vassar. He sent me an email asking for my phone number: "Thinking much more about how you can help me to meet my needs than about how I can help you though, and feel guilty about it given the situation, so feel free to tell me 'no, not now'." I replied, "I like helping people meet their needs! It's prosocial!"
+I started talking more with Michael Vassar. I don't think I've ever really understood Michael well enough to summarize him. Everyone else writes blog posts. If you want to know what someone's intellectual agenda is about, you can point to the blog posts. Michael never writes anything. He just has these free-wheeling conversations where he makes all sorts of crazy-sounding assertions ... which were suddenly starting to make sense to me now.
+
+On 22 February 2017 (two days after my release from the psych ward), he asked for my phone number (Subject: "Can I have your phone number?"). "I'd really like to talk soon," he wrote. "Thinking much more about how you can help me to meet my needs than about how I can help you though, and feel guilty about it given the situation, so feel free to tell me 'no, not now'."
+
+I replied, "I like helping people meet their needs! It's prosocial!"
+
+When I asked how I could help him meet his needs, he said that he thought my fight was ground zero in a war against words. If I had the mental composure to hold up, knowing that I had allies, he really thought that full documentation of my experiences would be the maximum leverage of my time. Otherwise, he was all but unable to ask for money for himself, even if he honestly thought he was the best use of it, but he was able to ask for nonprofit funding. What about starting a nonprofit, with me as executive director and him as fundraiser?—the Society for the Preservation of Generative Grammar and for Defense Against Structural Violence, providing legal defense for people whose rights or livelihood are threatened by political correctness. (Subject: "Re: You're really bad at communicating!")
+
+Regarding the suggestion to document my experiences, I replied, "Too narcissistic!" This is incredibly ironic in hindsight, given the absurd amount of effort I've ended up spending since 2019 writing up this Whole Dumb Story. But you see, I had to try making object-level arguments _first_. It was only after that conclusively failed that I went to the narcissistic extreme of full documentation of my experiences as a last resort. (Or as therapy.)
+
+I didn't want to start a nonprofit, either. I thought our kind of people were smart enough to function without the taboo against giving money to individuals instead of faceless institutions. I had $97,000 saved up from being a San Francisco software engineer who doesn't live in San Francisco. Besides keeping most of it as savings, and spending some of it to take a sabbatical from my career, I was thinking it made sense to spend some of it just giving unconditional gifts to Michael and others who had helped me, as a kind of credit-assignment ritual, although I wanted to think carefully about the details before doing anything rash.
 
 On a separate email thread, I ended up mentioning something to Michael that I shouldn't have, due to a previous confidentiality promise I had made. (I can tell you _that_ I screwed up on a secrecy obligation, without revealing what it was.) I felt particularly bad about this given that I had been told that Michael was notoriously bad at keeping secrets, and asked him to keep this one secret as a favor to me.
 
@@ -588,18 +598,29 @@ Michael replied:
 
 > Happy to not talk about it. Just freaking ask. I can easily honor commitments, just not optimize for general secrecy. The latter thing is always wrong. I'm not being sloppy and accidentally defecting against the generalized optimization for secrecy, I'm actively at war with it. We need to discuss this soon.
 
-I asked what were the ways he thought I could help him meet his needs. He said that he thought my fight was ground zero in a war against words. If I had the mental composure to hold up, knowing that I had allies, he really thought that full documentation of my experiences would be the maximum leverage of my time. Otherwise, he was all but unable to ask for money for himself, even if he honestly thought he was the best use of it, but he was able to ask for nonprofit funding. What about starting a nonprofit, with me as executive director and him as fundraiser?—the Society for the Preservation of Generative Grammar and for Defense Against Structural Violence, providing legal defense for people whose rights or livelihood are threatened by political correctness. (Subject: "Re: You're really bad at communicating!")
+-----
 
-Regarding the suggestion to document my experiences, I replied, "Too narcissistic!" This is incredibly ironic in hindsight, given the absurd amount of effort I've ended up spending since 2019 writing up this Whole Dumb Story. But you see, I had to try making object-level arguments _first_. It was only after that conclusively failed, that I've gone to the narcissistic extreme of full documentation of my experiences as a last resort. (Or as therapy.)
+On 2 March 2017, I wrote to Michael about how "the community" was performing (Subject: "rationalist community health check?? asking for one bit of advice"). Michael had claimed that it was obvious that AI was far away. (This wasn't obvious to me.) In contrast, a lot of people in the rationalist community seemed to have very short AI timelines. "Chaya" had recently asked me, "What would you do differently if AI was 5 years off?"
 
-I didn't want to start a nonprofit, either. I thought our kind of people were smart enough to function without the taboo against giving money to individuals instead of faceless institutions. I had $97,000 saved up from being a San Francisco software engineer who doesn't live in San Francisco. Besides keeping most of it as savings, and spending some of it to take a sabbatical from my career, I was thinking it made sense to spend some of it just giving unconditional gifts to Michael and others who had helped me, although I wanted to think carefully about the details before doing anything rash.
+(Remember, this was 2017. Five years later in March 2022, we were in fact still alive, but the short-timelines people were starting to look more prescient than Michael gave them credit for.)
+
+If we—my sense of the general culture of "we"—were obviously getting gender wrong, plausibly got the election wrong, plausibly were getting AI timelines wrong, and I thought Moldbug and his neoreactionary friends were pointing to some genuinely valuable Bayes-structure ... it seemed like we were doing a _really poor_ job of [pumping against cultishness](https://www.lesswrong.com/posts/yEjaj7PWacno5EvWa/every-cause-wants-to-be-a-cult). Was it maybe worth bidding for a cheerful-price conversation with Yudkowsky again to discuss this? (I wasn't important enough for him to spontaneously answer my emails, and I was too submissive to just do it without asking Michael first.)
+
+Michael said there were better ways to turn dollars into opposition to cultishness. Then I realized that I had been asking Michael for permission, not advice. (Of _course_ Michael was going to say No, there's a better way to turn dollars into anti-cultishness, which would turn out to be apophenic Vassarian moonspeak that would maybe later prove correct in ways that I wouldn't understand for eight years; I shouldn't have asked.) I went ahead and emailed Yudkowsky. (Again, I won't confirm or deny whether a conversation actually happened.)
+
+-----
+
+"As far as I can tell, you are [the] maximally concrete and articulate case of a person [harmed] by PC in a context where an impartial summary would call it attempted genocide"[^vassar-typos]
+
+[^vassar-typos]: The bracketed substitutions mark where the original said "three" and "harness". I assume these were erroneous autocorrections.
+
+
+ * my Julia Serano vs. Robert Heinlein story
+ * nice people doing their jobs
+]
-[TODO I interact more with Vassar
- * "rationalist community health check?? asking for one bit of advice"
- * going through a "Having a nervous breakdown, suddenly understanding all the things Michael has been trying to tell me for eight years that I didn't understand at the time, and subsequently panicking and running around yelling at everyone because I'm terrified of the rationalist community degenerating into just another arbitrary Bay Area humanist cult when we were supposed to be the Second Scientific Revolution" phase of my intellectual development
- * nice people doing their jobs
- 7 March—
 
 > As I recall, at the time, I was thinking that people may know far less or far more than I might have previously assumed by taking their verbal behavior literally with respect to what I think words mean: people have to gently test each other before really being able to speak the horrible truth that might break someone's self-narrative (thereby destroying their current personality and driving them insane, or provoking violence). I thought that you and Anna might be representatives of the "next level" of scientists guarding the human utility function by trying to produce epistemic technology within our totalitarian-state simulation world, and that I was "waking up" into that level by decoding messages (_e.g._, from the Mike Judge films that you recommended) and inferring things that most humans couldn't.
 
 Michael replied: