poke
author M. Taylor Saotome-Westlake <[email protected]>
Fri, 4 Dec 2020 07:40:46 +0000 (23:40 -0800)
committer M. Taylor Saotome-Westlake <[email protected]>
Fri, 4 Dec 2020 07:40:46 +0000 (23:40 -0800)
content/drafts/sen-kelly-loeffler-is-mostly-living-in-a-simulation-now.md
content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md
notes/sexual-dimorphism-in-the-sequences-notes.md
notes/trans-kids-on-the-margin-notes.md

index f319004..4309a4b 100644 (file)
@@ -24,6 +24,7 @@ Coinbase retaliation!!
 https://twitter.com/0x49fa98/status/1333502028975403009
 https://votepatternanalysis.substack.com/p/voting-anomalies-2020
 https://spectator.us/reasons-why-the-2020-presidential-election-is-deeply-puzzling/
+https://twitter.com/KanekoaTheGreat/status/1334620436487761921
 
 You don't actually control the policy node
 
index de6c520..cabe0a2 100644 (file)
@@ -19,7 +19,7 @@ Well. That's a _long story_—for another time, perhaps. For _now_, I want to ex
 
 It all started in summer 2007 (I was nineteen years old), when I came across _Overcoming Bias_, a blog on the theme of how to achieve more accurate beliefs. (I don't remember exactly how I was referred, but I think it was likely to have been [a link from Megan McArdle](https://web.archive.org/web/20071129181942/http://www.janegalt.net/archives/009783.html), then writing as "Jane Galt" at _Asymmetrical Information_.)
 
-[Although](http://www.overcomingbias.com/author/hal-finney) [technically](http://www.overcomingbias.com/author/james-miller) [a](http://www.overcomingbias.com/author/david-j-balan) [group](http://www.overcomingbias.com/author/andrew) [blog](http://www.overcomingbias.com/author/anders-sandberg), the vast majority of posts on _Overcoming Bias_ were by Robin Hanson or Eliezer Yudkowsky. I was previously acquainted in passing with Yudkowsky's [writing about future superintelligence](https://web.archive.org/web/20200217171258/https://yudkowsky.net/obsolete/tmol-faq.html). (I had [mentioned him in my Diary once in 2005](/ancillary/diary/42/), albeit without spelling his name correctly.) Yudkowsky was now using _Overcoming Bias_ and the medium of blogging [to generate material for a future book about rationality](https://www.lesswrong.com/posts/vHPrTLnhrgAHA96ko/why-i-m-blooking). Hanson's posts I could take or leave, but Yudkowsky's sequences of posts about rationality (coming out almost-daily through early 2009, eventually totaling hundreds of thousands of words) were _amazingly great_, [drawing on fields](https://www.lesswrong.com/posts/tSgcorrgBnrCH8nL3/don-t-revere-the-bearer-of-good-info) from [cognitive](https://www.lesswrong.com/posts/2ftJ38y9SRBCBsCzy/scope-insensitivity) [psychology](https://www.lesswrong.com/posts/R8cpqD3NA4rZxRdQ4/availability) to [evolutionary biology](https://www.lesswrong.com/s/MH2b8NfWv22dBtrs8) to explain the [mathematical](https://www.readthesequences.com/An-Intuitive-Explanation-Of-Bayess-Theorem) [principles](https://www.readthesequences.com/A-Technical-Explanation-Of-Technical-Explanation) [governing](https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can-exempt-you-from-rationality-s-laws) _how intelligence works_—[the reduction of "thought"](https://www.lesswrong.com/posts/p7ftQ6acRkgo6hqHb/dreams-of-ai-design) to [_cognitive algorithms_](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms). Intelligent systems that use [evidence](https://www.lesswrong.com/posts/6s3xABaXKPdFwA3FS/what-is-evidence) to construct [predictive](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences) models of the world around them—that have "true" "beliefs"—can _use_ those models to compute which actions will best achieve their goals. You simply [won't believe how much this blog](https://www.lesswrong.com/posts/DXcezGmnBcAYL2Y2u/yes-a-blog) will change your life; I would later frequently [joke](https://en.wiktionary.org/wiki/ha_ha_only_serious) that Yudkowsky rewrote my personality over the internet.
+[Although](http://www.overcomingbias.com/author/hal-finney) [technically](http://www.overcomingbias.com/author/james-miller) [a](http://www.overcomingbias.com/author/david-j-balan) [group](http://www.overcomingbias.com/author/andrew) [blog](http://www.overcomingbias.com/author/anders-sandberg), the vast majority of posts on _Overcoming Bias_ were by Robin Hanson or Eliezer Yudkowsky. I was previously acquainted in passing with Yudkowsky's [writing about future superintelligence](https://web.archive.org/web/20200217171258/https://yudkowsky.net/obsolete/tmol-faq.html). (I had [mentioned him in my Diary once in 2005](/ancillary/diary/42/), albeit without spelling his name correctly.) Yudkowsky was now using _Overcoming Bias_ and the medium of blogging [to generate material for a future book about rationality](https://www.lesswrong.com/posts/vHPrTLnhrgAHA96ko/why-i-m-blooking). Hanson's posts I could take or leave, but Yudkowsky's sequences of posts about rationality (coming out almost-daily through early 2009, eventually totaling hundreds of thousands of words) were _amazingly great_, [drawing on fields](https://www.lesswrong.com/posts/tSgcorrgBnrCH8nL3/don-t-revere-the-bearer-of-good-info) from [cognitive](https://www.lesswrong.com/posts/2ftJ38y9SRBCBsCzy/scope-insensitivity) [psychology](https://www.lesswrong.com/posts/R8cpqD3NA4rZxRdQ4/availability) to [evolutionary biology](https://www.lesswrong.com/s/MH2b8NfWv22dBtrs8) to explain the [mathematical](https://www.readthesequences.com/An-Intuitive-Explanation-Of-Bayess-Theorem) [principles](https://www.readthesequences.com/A-Technical-Explanation-Of-Technical-Explanation) [governing](https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can-exempt-you-from-rationality-s-laws) _how intelligence works_—[the reduction of "thought"](https://www.lesswrong.com/posts/p7ftQ6acRkgo6hqHb/dreams-of-ai-design) to [_cognitive algorithms_](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms). Intelligent systems [that use](https://arbital.greaterwrong.com/p/executable_philosophy) [evidence](https://www.lesswrong.com/posts/6s3xABaXKPdFwA3FS/what-is-evidence) to construct [predictive](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences) models of the world around them—that have "true" "beliefs"—can _use_ those models to compute which actions will best achieve their goals. You simply [won't believe how much this blog](https://www.lesswrong.com/posts/DXcezGmnBcAYL2Y2u/yes-a-blog) will change your life; I would later frequently [joke](https://en.wiktionary.org/wiki/ha_ha_only_serious) that Yudkowsky rewrote my personality over the internet.
 
 (The blog posts later got edited and collected into a book, [_Rationality: From AI to Zombies_](https://www.amazon.com/Rationality-AI-Zombies-Eliezer-Yudkowsky-ebook/dp/B00ULP6EW2), but I continue to say "the Sequences" because I _hate_ the gimmicky "AI to Zombies" subtitle—it makes it sound like a commercial book optimized to sell copies, rather than something to corrupt the youth, competing for the same niche as the Bible or the Koran—_the book_ that explains what your life should be about.)
 
@@ -31,13 +31,13 @@ The first thing—the chronologically first thing. Ever since I was thirteen or
 
 (I _still_ don't want to be blogging about this, but unfortunately, it actually turns out to be central to the intellectual–political project I've been singlemindedly focused on for the past four years because [somebody has to and no one else will](https://unsongbook.com/chapter-6-till-we-have-built-jerusalem/).)
 
-—my _favorite_—and basically only—masturbation fantasy has always been some variation on me getting magically transformed into a woman. I ... need to write more about the phenomenology of this. In the meantime, just so you know what I'm talking about, the relevant TVTrope is ["Man, I Feel Like a Woman."](https://tvtropes.org/pmwiki/pmwiki.php/Main/ManIFeelLikeAWoman) Or search "body swap" on PornHub. Or check out my few, circumspect contributions to [the popular genre of](/2016/Oct/exactly-what-it-says-on-the-tin/) captioned-photo female transformation erotica: [1](/ancillary/captions/dr-equality-and-the-great-shift/) [2](/ancillary/captions/the-other-side-of-me/) [3](/ancillary/captions/the-impossible-box/) [4](/ancillary/captions/de-gustibus-non-est/).
+—my _favorite_—and basically only—masturbation fantasy has always been some variation on me getting magically transformed into a woman. I ... need to write more about the phenomenology of this. In the meantime, just so you know what I'm talking about, the relevant TVTrope is ["Man, I Feel Like a Woman."](https://tvtropes.org/pmwiki/pmwiki.php/Main/ManIFeelLikeAWoman) Or search "body swap" on PornHub. Or check out my few, circumspect contributions to [the popular genre of](/2016/Oct/exactly-what-it-says-on-the-tin/) captioned-photo female transformation erotica (everyone is wearing clothes, so these might be "safe for work" in a narrow technical sense, if not a moral one): [1](/ancillary/captions/dr-equality-and-the-great-shift/) [2](/ancillary/captions/the-other-side-of-me/) [3](/ancillary/captions/the-impossible-box/) [4](/ancillary/captions/de-gustibus-non-est/).
 
-(The first segment of my pen surname is a legacy of middle-school friends letting me borrow some of the [Ranma ½](https://en.wikipedia.org/wiki/Ranma_%C2%BD) graphic novels (about a young man cursed to transform into a woman on exposure to cold water) just _before_ puberty kicked in, but I have no way of computing the counterfactual to know whether that had a causal influence.)
+(The first segment of my pen surname is a legacy of middle-school friends letting me borrow some of the [Ranma ½](https://en.wikipedia.org/wiki/Ranma_%C2%BD) graphic novels, about a young man named Ranma Saotome cursed ("cursed"??) to transform into a woman on exposure to cold water. This was just _before_ puberty kicked in for me, but I have no way of computing the counterfactual to know whether that had a causal influence.)
 
 So, there was that erotic thing, which I was pretty ashamed of at the time, and _of course_ knew that I must never, ever tell a single soul about. (It would have been about three years since the fantasy started that I even worked up the bravery to [tell my Diary about it](/ancillary/diary/53/#first-agp-confession).)
 
-But within a couple years, I also developed this beautiful pure sacred self-identity thing that would persist for years, where I started having a lot of _non_-sexual thoughts about being female. Just—little day-to-day thoughts, little symbolic gestures.
+But within a couple years, I also developed this beautiful pure sacred self-identity thing that would persist indefinitely, where I started having a lot of _non_-sexual thoughts about being female. Just—little day-to-day thoughts, little symbolic gestures.
 
 Like when I would [write in my pocket notebook in the persona of my female analogue](/images/crossdreaming_notebook_samples.png).
 
@@ -47,7 +47,7 @@ Or the time when track and field practice split up into boys and girls, and I ir
 
 Or when it was time to order sheets to fit on the dorm beds at the University in Santa Cruz, and I deliberately picked out the pink-with-flowers design on principle.
 
-Or how I was proud to be the kind of guy who bought Julia Serano's _Whipping Girl: A Transsexual Woman on Sexism and the Scapegoating of Femininity_ when it was new in 2007.
+Or how I was proud to be the kind of guy who bought Julia Serano's _Whipping Girl: A Transsexual Woman on Sexism and the Scapegoating of Femininity_ when it was new in 2007, and [who would rather read from Evelyn Fox Keller's _Reflections on Gender and Science_ than](http://zackmdavis.net/blog/2013/03/tradition/) watch [Super Bowl XLII](https://en.wikipedia.org/wiki/Super_Bowl_XLII).
 
 Or how, at University, I tried to go by my [first-and-middle-initials](https://en.wikipedia.org/wiki/List_of_literary_initials) because I wanted a gender-neutral [byline](https://en.wikipedia.org/wiki/Byline), and I wanted what people called me in real life to be the same as my byline—even if, obviously, I didn't expect people to not-notice which sex I am in real life because _that would be crazy_.
 
@@ -144,16 +144,18 @@ Sex differences would come up a couple more times in one of the last Sequences,
 
 According to Yudkowsky, one of the ways in which people's thinking about artificial intelligence usually goes wrong is [anthropomorphism](https://www.lesswrong.com/posts/RcZeZt8cPk48xxiQ8/anthropomorphic-optimism)—expecting arbitrary AIs to behave like humans, when really "AI" corresponds to [a much larger space of algorithms](https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general). As a social animal, predicting other humans is one of the things we've evolved to be good at, and the way that works is probably via "empathic inference": [I predict your behavior by imagining what _I_ would do in your situation](https://www.lesswrong.com/posts/Zkzzjg3h7hW5Z36hK/humans-in-funny-suits). Since all humans are very similar, [this appeal-to-black-box](https://www.lesswrong.com/posts/9fpWoXpNv83BAHJdc/the-comedy-of-behaviorism) works pretty well. And from this empathy, evolution also coughed up the [moral miracle](https://www.lesswrong.com/posts/pGvyqAQw6yqTjpKf4/the-gift-we-give-to-tomorrow) of [_sympathy_, intrinsically caring about what others feel](https://www.lesswrong.com/posts/NLMo5FZWFFq652MNe/sympathetic-minds).
 
-In ["Interpersonal Entanglement"](https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement), Yudkowsky appeals to the complex moral value of sympathy as an objection to the possibility of nonsentient sex partners (_catgirls_ being the technical term). Being emotionally entangled with another actual person is one of the things that makes life valuable, that would be lost if people just got their needs met by soulless holodeck characters.
+In ["Interpersonal Entanglement"](https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement), Yudkowsky appeals to the complex moral value of sympathy as an argument against the desirability of nonsentient sex partners (_catgirls_ being the technical term). Being emotionally intertwined with another actual person is one of the things that makes life valuable, that would be lost if people just had their needs met by soulless holodeck characters.
 
+Women and men aren't 
 
 
 [TODO WORKING: ... rewrite/expand description here—"Sympathetic Minds" first, then bridge to "Failed Utopia #4-2"]
 [TODO: mention that I'm responsible for the protagonist's name getting changed]
 
-The short story ["Failed Utopia #4-2"](https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2) portrays an almost-aligned superintelligence constructing a happiness-maximizing utopia for humans—except that because [evolution didn't design women and men to be optimal partners for each other](https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement), and the AI is prohibited from editing people's minds, the happiness-maximizing solution ends up splitting up the human species by sex and giving women and men their own _separate_ utopias, complete with artificially-synthesized romantic partners.
 
-At the time, [I expressed horror](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/PhiGnX7qKzzgn2aKb) at the idea in the comments section, because my quasi-religious psychological-sex-differences denialism required that I be horrified. But looking back eleven years later, the _argument makes sense_ (though you need an additional [handwave](https://tvtropes.org/pmwiki/pmwiki.php/Main/HandWave) to explain why the AI doesn't give every _individual_ their separate utopia—if existing women and men aren't optimal partners for each other, so too are individual men not optimal same-sex friends for each other).
+At the time, [I expressed horror](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/PhiGnX7qKzzgn2aKb) at the idea in the comments section, because my quasi-religious psychological-sex-differences denialism required that I be horrified. But looking back a dozen years later, the _argument makes sense_.
+
+You need an additional [handwave](https://tvtropes.org/pmwiki/pmwiki.php/Main/HandWave) to explain why the AI doesn't give every _individual_ their separate utopia—if existing women and men aren't optimal partners for each other, then so too are individual men not optimal same-sex friends for each other.
 
 On my reading of the text, it is _significant_ that the AI-synthesized complements for men are given their own name, the _verthandi_ (presumably after [the Norse deity](https://en.wikipedia.org/wiki/Ver%C3%B0andi)), rather than just being referred to as women. The _verthandi_ may _look like_ women, they may be _approximately_ psychologically human, but the _detailed_ psychology of "superintelligently-engineered optimal romantic partner for a human male" is not going to come out of the distribution of actual human females, and judicious exercise of the [tenth virtue of precision](http://yudkowsky.net/rational/virtues/) demands that a _different word_ be coined for this hypothetical science-fictional type of person. Calling the _verthandi_ "women" would be _worse writing_; it would _fail to communicate_ the impact of what has taken place in the story.
 
index b96ecfe..27a6b48 100644 (file)
@@ -52,14 +52,22 @@ https://www.lesswrong.com/posts/9fpWoXpNv83BAHJdc/the-comedy-of-behaviorism
 * AGPs dating each other is the analogue of "Failed Utopia 4-2"!!—the guys in "Conservative Men in Conservative Dresses" are doing better in some ways
 > The vast majority of men are not what the vast majority of women would most prefer, or vice versa. I don’t know if anyone has ever actually done this study, but I bet that both gay and lesbian couples are happier on average with their relationship than heterosexual couples. (Googles… yep, looks like it.) <https://news.softpedia.com/news/Gay-and-Lesbian-Families-Are-Happier-than-Heterosexual-Ones-77094.shtml>
 
-* wipe culturally defined values
 * finding things in the refrigerator
 *  https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way https://www.lesswrong.com/posts/xsyG7PkMekHud2DMK/of-gender-and-rationality
 * make sure the late-onset/AGP terminology is introduced in a coherent order rather than being inserted willy-nilly
 * the message length of my existence (https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length)
 [TODO: "expecting women to be defective men"]
+* aren't all those transwomen going to be _embarrassed_ after the Singularity, when telepathy tech makes everything obvious
+* stress fracture vs. sprain: psychology is more complicated, but the basic moral holds
+* "I don't care" "It sounds like you do care"
+* pronouns do have truth conditions
+ * The text of this blog post is not something a woman could have written
+ * as a programmer, I have learned to fear dependencies
+* wipe culturally defined values: https://www.greaterwrong.com/posts/BkkwXtaTf5LvbA6HB/moral-error-and-moral-disagreement (this might have to go after Failed-Utopia #4-2)
 
-https://arbital.greaterwrong.com/p/executable_philosophy
+
+https://www.lesswrong.com/posts/r3NHPD3dLFNk9QE2Y/search-versus-design-1
+https://www.lesswrong.com/posts/JBFHzfPkXHB2XfDGj/evolution-of-modularity
 
 -----
 
@@ -325,3 +333,8 @@ The Eliezer Yudkowsky I remember wrote about [how facts are tightly-woven togeth
 A culture where there are huge catastrophic consequences for [questioning religion](https://www.lesswrong.com/posts/u6JzcFtPGiznFgDxP/excluding-the-supernatural), is a culture where it's harder to train alignment researchers that genuinely understand Occam's razor on a _deep_ level, when [the intelligent social web](https://www.lesswrong.com/posts/AqbWna2S85pFTsHH4/the-intelligent-social-web) around them will do anything to prevent them from applying the parsimony skill to the God hypothesis. 
 
 A culture where there are huge catastrophic consequences for questioning gender identity, is a culture where it's harder to train alignment researchers that genuinely understand the hidden-Bayesian-structure-of-language-and-cognition on a _deep_ level, when the social web around them will do anything to prevent them from [invalidating someone's identity](http://unremediatedgender.space/2016/Sep/psychology-is-about-invalidating-peoples-identities/).
+
+
+
+
+The short story ["Failed Utopia #4-2"](https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2) portrays an almost-aligned superintelligence constructing a happiness-maximizing utopia for humans—except that because [evolution didn't design women and men to be optimal partners for each other](https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement), and the AI is prohibited from editing people's minds, the happiness-maximizing solution ends up splitting up the human species by sex and giving women and men their own _separate_ utopias, complete with artificially-synthesized romantic partners.
index 0ba2286..a4c4884 100644 (file)
@@ -19,3 +19,6 @@ https://pubmed.ncbi.nlm.nih.gov/2016237/
 
 Research notes—
 https://femalesexualinversion.blogspot.com/2020/12/the-problem-with-puberty-blockers-part.html
+
+There was that time when Merlin wanted to see the medicines on the shelf and I was like, "Aw, why do you need to know this anyway" and Elena was like, "He's curious"—people don't want to be blamed for hurting the child, and if you're living in an ideological bubble where it's presumed that telling the child the truth about what sex they are
+