From: Zack M. Davis
Date: Fri, 22 Sep 2023 00:39:08 +0000 (-0700)
Subject: memoir: finish debate saga recap
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=b16b96a0c42a73e75985ac03eb7bab92b8501d3f;p=Ultimately_Untrue_Thought.git

memoir: finish debate saga recap
---

diff --git a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
index be5d31d..67838f9 100644
--- a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
+++ b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
@@ -414,26 +414,17 @@ In January 2009, Yudkowsky published ["Changing Emotions"](https://www.lesswrong
 
 It was a good post! Though Yudkowsky was merely using the sex change example to illustrate [a more general point about the difficulties of applied transhumanism](https://www.lesswrong.com/posts/EQkELCGiGQwvrrp3L/growing-up-is-hard), "Changing Emotions" was hugely influential on me; I count myself much better off for having understood the argument.
 
-But then, in a March 2016 Facebook post, Yudkowsky [proclaimed that](https://www.facebook.com/yudkowsky/posts/10154078468809228) "for people roughly similar to the Bay Area / European mix, I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women."
+But later, in a March 2016 Facebook post, Yudkowsky [proclaimed that](https://www.facebook.com/yudkowsky/posts/10154078468809228) "for people roughly similar to the Bay Area / European mix, I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women."
 
 This seemed like a huge and surprising reversal from the position articulated in "Changing Emotions"!
 The two posts weren't _necessarily_ inconsistent, _if_ you assumed gender identity is an objectively real property synonymous with "brain sex", and that "Changing Emotions"'s harsh (almost mocking) skepticism of the idea of true male-to-female sex change was directed at the sex-change fantasies of _cis_ men (with a male gender-identity/brain-sex), whereas the 2016 Facebook post was about _trans women_ (with a female gender-identity/brain-sex), which are a different thing.
 
-But this potential unification seemed very dubious to me, especially if "actual" trans women were purported to be "at least 20% of the ones with penises" (!!) in some population. _After it's been pointed out_, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female but 'otherwise identical' copy of himself" and "guy in 2016 Berkeley who identifies as a trans woman" are the _same guy_.
+But this potential unification seemed very dubious to me, especially if "actual" trans women were purported to be "at least 20% of the ones with penises" (!!) in some population. _After it's been pointed out_, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female but 'otherwise identical' copy of himself" and "guy in 2016 Berkeley who identifies as a trans woman" are the _same guy_. So in October 2016, [I wrote to Yudkowsky noting the apparent reversal and asking to talk about it](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#cheerful-price) (offering to pay $1000 under the [cheerful price protocol](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price)). Because of the privacy rules I'm adhering to in telling this Whole Dumb Story, I can't confirm or deny whether he accepted and whether any such conversation occurred.
 
-[TODO recap cont'd—
+Then, in November 2018, while criticizing people who refuse to use trans people's preferred pronouns, Yudkowsky proclaimed that "Using language in a way _you_ dislike, openly and explicitly and with public focus on the language and its meaning, is not lying" and that "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning". But _that_ seemed like a huge and surprising reversal from the position articulated in ["37 Ways Words Can Be Wrong"](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong). After attempts to clarify via email failed, I eventually wrote ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) to explain the relevant error in general terms, and Yudkowsky would eventually go on to [clarify his position in September 2020](https://www.facebook.com/yudkowsky/posts/10158853851009228).
- * October 2016, I wrote to Yudkowsky noting that he seemed to have made an a massive update and asked to talk about it (for $1000, under the [cheerful price protocol](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price))
- * because of the privacy rules I'm following under this document, can't confirm or deny whether he accepted
- * November 2018, "hill of validity" Twitter thread
- * with the help of Michael/Sarah/Ben/Jessica, I wrote to him multiple times trying to clarify
- * I eventually wrote "Where to Draw the Boundaries?", which includes a verbatim quotes explaining what's wrong with the "it is not a secret"
- * we eventually got a clarification in September 2020
- * I was satisfied, and then ...
- * February 2021, "simplest and best proposal"
- * But this is _still_ wrong, as explained in "Challenges"
-]
+But then in February 2021, he reopened the discussion to proclaim that "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he", with a default for those-who-haven't-asked that goes by gamete size' and to say that this just _is_ the normative definition", the problems with which I explained in March 2022's ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/) and above.
 
-At the start, I _never_ expected to end up arguing about something so _trivial_ as the minutiae of pronoun conventions (which no one would care about if historical contingencies of the evolution of the English language hadn't made them a Schelling point and typographical attack surface for things people do care about). The conversation only ended up here after a series of derailings. At the start, I was _trying_ to say something substantive about the psychology of straight men who wish they were women.
+End recap. At this point, the nature of the game is very clear. Yudkowsky wants to make sure he's on peaceful terms with the progressive _Zeitgeist_, subject to the constraint of not saying anything he knows to be false. Meanwhile, I want to make sense of what's actually going on in the world as regards sex and gender, because _I need the correct answer to decide whether or not to cut my dick off_.
@@ -441,6 +432,8 @@ On "his turn", he comes up with some pompous proclamation that's very obviously
 
 On "my turn", I put in an _absurd_ amount of effort explaining in exhaustive, _exhaustive_ detail why Yudkowsky's pompous proclamation, while [not technically making any unambiguously "false" atomic statements](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly), was _substantively misleading_ as contrasted to what any serious person would say if they were actually trying to make sense of the world without worrying what progressive activists would think of them.
 
+At the start, I _never_ expected to end up arguing about something so _trivial_ as the minutiae of pronoun conventions (which no one would care about if historical contingencies of the evolution of the English language hadn't made them a Schelling point and typographical attack surface for things people do care about). The conversation only ended up here after a series of derailings. At the start, I was _trying_ to say something substantive about the psychology of straight men who wish they were women.
+
 In the context of AI alignment theory, Yudkowsky has written about a "nearest unblocked strategy" phenomenon: if you directly prevent an agent from accomplishing a goal via some plan that you find undesirable, the agent will search for ways to route around that restriction, and probably find some plan that you find similarly undesirable for similar reasons. Suppose you developed an AI to [maximize human happiness subject to the constraint of obeying explicit orders](https://arbital.greaterwrong.com/p/nearest_unblocked#exampleproducinghappiness). It might first try administering heroin to humans. When you order it not to, it might switch to administering cocaine.
 When you order it to not use any of a whole list of banned happiness-producing drugs, it might switch to researching new drugs, or just _pay_ humans to take heroin, _&c._

diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md
index 2488c78..be9cbd8 100644
--- a/notes/memoir-sections.md
+++ b/notes/memoir-sections.md
@@ -1,7 +1,7 @@
 slotted TODO blocks—
 ✓ psychiatric disaster
 ✓ "Agreeing With Stalin" intro recap
-_ recap of crimes, cont'd
+✓ recap of crimes, cont'd
 _ Dolphin War finish
 _ lead-in to Sept. 2021 Twitter altercation
 _ Michael Vassar and the Theory of Optimal Gossip