That's pretty much always where I've been at. I _never_ wanted to save the world. I got sucked into this robot cult because Yudkowsky's philosophy-of-science writing was just that good. I did do a little bit of work for the Singularity Institute back in the day (an informal internship in 'aught-nine, some data-entry-like work manually adding Previous/Next links to the Sequences, designing several PowerPoint presentations for Anna, writing some Python scripts to organize their donor database), but that was because it was my social tribe and I had connections. To the extent that I took at all seriously the whole save/destroy/take-over the world part (about how we needed to encode all of human morality into a recursively self-improving artificial intelligence to determine our entire future light cone until the end of time), I was scared rather than enthusiastic.
-Okay, being scared was entirely appropriate, but what I mean is that I was scared, and concluded that shaping the Singularity was _not my problem_, as contrasted to being scared, then facing up to the responsibility anyway. After a 2013 sleep-deprivation-induced psychotic episode which featured futurist-themed delusions, I wrote to Anna, Michael, and some MIRI employees who had been in my contacts for occasional contract work, that "my current plan [was] to just try to forget about _Less Wrong_/MIRI for a long while, maybe at least a year, not because it isn't technically the most important thing in the world, but because I'm not emotionally stable enough think about this stuff anymore" (Subject: "to whom it may concern"). When I got a real programming job and established an income for myself, I [donated to CfAR rather than MIRI](http://zackmdavis.net/blog/2016/12/philanthropy-scorecard-through-2016/), because public rationality was something I could be unambiguously enthusiastic about, and doing anything about AI was not.
+Okay, being scared was entirely appropriate, but what I mean is that I was scared, and concluded that shaping the Singularity was _not my problem_, as contrasted to being scared, then facing up to the responsibility anyway. After a 2013 sleep-deprivation-induced psychotic episode which [featured](http://zackmdavis.net/blog/2013/03/religious/) [futurist](http://zackmdavis.net/blog/2013/04/prodrome/)-[themed](http://zackmdavis.net/blog/2013/05/relativity/) [delusions](http://zackmdavis.net/blog/2013/05/relevance/), I wrote to Anna, Michael, and some MIRI employees who had been in my contacts for occasional contract work, that "my current plan [was] to just try to forget about _Less Wrong_/MIRI for a long while, maybe at least a year, not because it isn't technically the most important thing in the world, but because I'm not emotionally stable enough [to] think about this stuff anymore" (Subject: "to whom it may concern"). When I got a real programming job and established an income for myself, I [donated to CfAR rather than MIRI](http://zackmdavis.net/blog/2016/12/philanthropy-scorecard-through-2016/), because public rationality was something I could be unambiguously enthusiastic about, and doing anything about AI was not.
At the time, it seemed fine for the altruistically-focused fraction of my efforts to focus on rationality, and to leave the save/destroy/take-over the world stuff to other, less crazy people, in accordance with the principle of comparative advantage. Yudkowsky had written his Sequences as a dependency for explaining [the need for friendly AI](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile), ["gambl[ing] only upon the portion of the activism that would flow to [his] own cause"](https://www.lesswrong.com/posts/9jF4zbZqz6DydJ5En/the-end-of-sequences), but rationality was supposed to be the [common interest of many causes](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes). Even if I wasn't working for or donating to MIRI, I was still _helping_, a good citizen according to the morality of my tribe.
My AlphaGo moment was 5 January 2021, when OpenAI released [DALL-E](https://openai.com/blog/dall-e/) (by far the most significant news story of that week in January 2021). Previous AI milestones, like GANs for a _fixed_ image class, were easier to dismiss as clever statistical tricks. If you have thousands and thousands of photographs of people's faces, it's not surprising that some clever algorithm can "learn the distribution" and spit out another sample; I don't know the _details_, but it doesn't seem like scary "understanding." DALL-E's ability to _combine_ concepts—responding to "an armchair in the shape of an avocado" as a novel text prompt, rather than already having thousands of avocado-chairs and just spitting out another one of those—viscerally seemed more like "real" creativity to me, something qualitatively new and scary.
-[As recently as 2020, I had been daydreaming about](/2020/Aug/memento-mori/#if-we-even-have-enough-time) working at an embryo selection company (if they needed programmers—but everyone needs programmers, these days), and having that be my altruistic[^eugenics-altruism] contribution to the great common task. Existing companies working on embryo selection boringly market their services as being about avoiding genetic diseases, but polygenic scores should work just as well for IQ as they do for cancer risk.[^polygenic-score] Making smarter people would be a transhumanist good in its own right, and [having smarter biological humans around at the time of our civilization's AI transition](https://www.lesswrong.com/posts/2KNN9WPcyto7QH9pi/this-failing-earth) would give us a better shot at having it go well.[^ai-transition-go-well]
+[As recently as 2020, I had been daydreaming about](/2020/Aug/memento-mori/#if-we-even-have-enough-time) working at an embryo selection company (if they needed programmers—but everyone needs programmers, these days), and having that be my altruistic[^eugenics-altruism] contribution to the great common task. Existing companies working on embryo selection [boringly](https://archive.is/tXNbU) [market](https://archive.is/HwokV) their services as being about promoting health, but [polygenic scores should work as well for maximizing IQ as they do for minimizing cancer risk](https://www.gwern.net/Embryo-selection).[^polygenic-score] Making smarter people would be a transhumanist good in its own right, and [having smarter biological humans around at the time of our civilization's AI transition](https://www.lesswrong.com/posts/2KNN9WPcyto7QH9pi/this-failing-earth) would give us a better shot at having it go well.[^ai-transition-go-well]
[^eugenics-altruism]: If it seems odd to frame _eugenics_ as "altruistic", translate it as a term of art referring to the component of my actions dedicated to optimizing the world at large, as contrasted to "selfishly" optimizing my own experiences.
-[^polygenic-score]: Better, actually.
+[^polygenic-score]: Better, actually: [the heritability of IQ is around 0.65](https://en.wikipedia.org/wiki/Heritability_of_IQ), as contrasted to [about 0.33 for cancer risk](https://pubmed.ncbi.nlm.nih.gov/26746459/).
[^ai-transition-go-well]: Natural selection eventually developed intelligent creatures, but evolution didn't know what it was doing and was not foresightfully steering the outcome in any particular direction. The more humans know what we're doing, the more our will determines the fate of the cosmos; the less we know what we're doing, the more our civilization is just another primordial soup for the next evolutionary transition.
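+
+(For concreteness, a minimal Monte Carlo sketch of the selection logic, using made-up round numbers rather than anyone's published estimates: suppose a polygenic score captures 10% of the IQ variance among a couple's embryos, and you implant the best-scoring of ten. Gwern's analysis linked above is the serious version; this is just to exhibit the mechanism.)
+
+```python
+# Toy model (illustrative numbers, not a forecast): each embryo's genetic
+# potential is standard-normal; the polygenic score observes the component
+# explaining R2 of the variance. Picking the best-scoring of N embryos
+# buys about E[max of N draws] * sqrt(R2) standard deviations.
+import random
+import statistics
+
+R2 = 0.10        # assumed variance captured by the score (hypothetical)
+SD_IQ = 15       # IQ points per standard deviation
+N_EMBRYOS = 10
+TRIALS = 20_000
+
+gains = []
+for _ in range(TRIALS):
+    embryos = [
+        (random.gauss(0, R2 ** 0.5), random.gauss(0, (1 - R2) ** 0.5))
+        for _ in range(N_EMBRYOS)
+    ]
+    # Select on the score (first element); realize score + residual.
+    score, residual = max(embryos, key=lambda e: e[0])
+    gains.append((score + residual) * SD_IQ)
+
+print(f"mean gain from best-of-{N_EMBRYOS}: {statistics.mean(gains):.1f} IQ points")
+# With these made-up numbers, roughly 7 points over expectation—and the
+# gain scales with sqrt(R2), which is why better predictors matter.
+```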
But pushing on embryo selection only makes sense as an intervention for optimizing the future if AI timelines are sufficiently long, and the breathtaking pace (or too-fast-to-even-take-a-breath pace) of the deep learning revolution is making it look less likely that we'll get that much time.
-If our racially superior children need at least twenty years to grow up to be productive alignment researchers,
+If our genetically uplifted children need at least twenty years to grow up to be productive alignment researchers, then embryo selection only helps in worlds where we have at least that long.
[TODO—
For background, "Wilhelm" was a fellow old-time _Less Wrong_-er who I had met in the South Bay back in 'aught-nine, while I was doing an "internship"[^internship] in Santa Clara for what was then still the Singularity Institute for Artificial Intelligence.[^siai]
-Relevantly, "Wilhelm" was also autogynephilic (and aware of it, under that name). The first time I had ever gone crossdressing in public was at a drag event with him in 2010.
+Relevantly, "Wilhelm" [was also autogynephilic](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions?commentId=93u4sgeARSHfPkqPN) (and aware of it, under that name). The first time I had ever gone crossdressing in public was at a drag event with him in 2010.
As it happened, I had just messaged him a few days earlier, on 22 March 2016, for the first time in four and a half years. (My opening message: "Ray Blanchard's Twitter feed is kind of disappointing, but I'm not sure what I was expecting".) I confided to him that I was seeing an escort on Saturday the twenty-sixth[^twenty-sixth] because the dating market was looking pretty hopeless, I had more money than I knew what to do with, and three female friends agreed that it was not unethical.[^unethical]
[^internship]: "Internship" is in scare quotes, because the Singularity Institute at the time was not the kind of organization that offered formal _internships_; what I mean is that there was a house in Santa Clara where a handful of people were trying to do Singularity-relevant work, and I was allowed to sleep in the garage and also try to do work, without being paid.
-[^siai]: The "for Artificial Intelligence" part was a holdover the organization's founding, from before Yudkowsky [decided that AI would kill everyone by default (and that this was a bad thing)](https://www.lesswrong.com/s/SXurf2mWFw8LX2mkG). People soon started using "SingInst" as an abbreviation more than "SIAI", until the organization was eventually rebranded as the Machine Intelligence Research Institute.
+[^siai]: The "for Artificial Intelligence" part was a holdover from the organization's founding, from before Yudkowsky [decided that AI would kill everyone by default (and that this was a bad thing)](https://www.lesswrong.com/s/SXurf2mWFw8LX2mkG). People soon started using "SingInst" as an abbreviation more than "SIAI", until the organization was eventually rebranded as the Machine Intelligence Research Institute in 2013.
[^twenty-sixth]: Writing this up years later, I was surprised to see from the dates (26 March 2016) that my date with the escort was the same day as the "20% of the ones with penises" post (and my comment thereon and following conversation with "Wilhelm"). They hadn't been stored in my long-term episodic memory as "the same day", likely because the Facebook post only seems overwhelmingly significant in retrospect; at the time, I did not realize what I would be spending the next seven years of my life on.
(Incidentally, the quotation in "Meditations on Moloch" of a poem I wrote in the _Less Wrong_ comments in 2011 is probably the most exposure my writing has ever gotten, and will ever get.)
-We chatted for a few more minutes. I noted Samo Burja's comment on Yudkowsky's post as a "terrible thought" that had also occurred to me: Burja had written that the predicted moral panic may not be along the expected lines, if an explosion of MtFs were to result in trans women dominating previously sex-reserved spheres of social competition. "[F]or signaling reasons, I will not give [the comment] a Like", I added parenthetically.[^signaling-reasons]
+We chatted for a few more minutes. I noted [Samo Burja's comment](/images/burja-shape_of_the_moral_panic.png) on Yudkowsky's post as a "terrible thought" that had also occurred to me: Burja had written that the predicted moral panic may not be along the expected lines, if an explosion of MtFs were to result in trans women dominating previously sex-reserved spheres of social competition. "[F]or signaling reasons, I will not give [the comment] a Like", I added parenthetically.[^signaling-reasons]
-[^signaling-reasons]: This brazen cowardice makes for a stark contrast to my current habits of thought. Today, I would notice that that if "for signaling reasons", people don't Like comments that make _insightful and accurate predictions_ about contemporary social trends, then subscribers to our collective discourse will be _less prepared_ for the social world of tomorrow! And that's terrible.
+[^signaling-reasons]: This brazen cowardice makes for a stark contrast to my current habits of thought. Today, I would notice that if "for signaling reasons", people don't Like comments that make _insightful and accurate predictions_ about contemporary social trends, then subscribers to our collective discourse will be _less prepared_ for a world in which those trends have progressed further.
A few weeks later, I moved out of my mom's house in [Walnut Creek](https://en.wikipedia.org/wiki/Walnut_Creek,_California) to go live with a new roommate in an apartment on the correct side of the [Caldecott tunnel](https://en.wikipedia.org/wiki/Caldecott_Tunnel), in [Berkeley](https://en.wikipedia.org/wiki/Berkeley,_California), closer to other people in the robot-cult scene and with a shorter train ride to my coding dayjob in San Francisco.
So, I realize this is an inflammatory and (far more importantly) _surprising_ claim. Obviously, I don't have introspective access into other people's minds. If someone claims to have an internal sense of her own gender that doesn't match her assigned sex at birth, on what evidence could I possibly, _possibly_ have the astounding arrogance to reply, "No, I think you're really just a perverted male like me"?
-Actually, lots. To arbitrarily pick one particularly vivid exhibit, in April 2018, the [/r/MtF subreddit](https://www.reddit.com/r/MtF/) (which currently has 100,000 subscribers) [posted a link to a poll: "Did you have a gender/body swap/transformation "fetish" (or similar) before you realised you were trans?"](https://archive.is/uswsz). The [results](https://strawpoll.com/5p7y96x2/r): [_82%_ said Yes](/images/did_you_have-reddit_poll.png). [Top comment in the thread](https://archive.is/c7YFG), with over 230 karma: "I spent a long time in the 'it's probably just a fetish' camp".
+Actually, lots. To arbitrarily pick one particularly vivid exhibit, in April 2018, the [/r/MtF subreddit](https://www.reddit.com/r/MtF/) (which had over 28,000 subscribers at the time) [posted a link to a poll: "Did you have a gender/body swap/transformation "fetish" (or similar) before you realised you were trans?"](https://archive.is/uswsz). The [results](https://archive.is/lm4ro): [_82%_ of over 2000 respondents said Yes](/images/did_you_have-reddit_poll.png). [Top comment in the thread](https://archive.is/c7YFG), with over 230 karma: "I spent a long time in the 'it's probably just a fetish' camp".
Certainly, 82% is not 100%! (But 82% is evidence for my claim that a _substantial majority_ of trans women under modern conditions in Western countries are essentially guys like me.) Certainly, you could argue that Reddit has a sampling bias such that poll results and karma scores from /r/MtF fail to match the distribution of opinion among real-world MtFs. But if you don't take the gender-identity story as an _axiom_ and [_actually look_](https://www.lesswrong.com/posts/SA79JMXKWke32A3hG/original-seeing) at the _details_ of what people say and do, these kinds of observations are _not hard to find_. You could [fill an entire subreddit with them](https://archive.is/ezENv) (and then move it to [independent](https://ovarit.com/o/ItsAFetish/) [platforms](https://saidit.net/s/itsafetish/) when the original gets [banned for "promoting hate"](https://www.reddit.com/r/itsafetish/)).
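+
+(Quick sanity check, assuming the poll's roughly two thousand respondents: the sampling _error_ on that 82% is only about ±1.7 percentage points at 95% confidence, so sampling _bias_ really is the only statistical objection with teeth.)
+
+```python
+# Normal-approximation 95% confidence interval for the poll result
+# (n is approximate; the point is just that the interval is narrow).
+import math
+
+n, p = 2000, 0.82
+margin = 1.96 * math.sqrt(p * (1 - p) / n)
+print(f"82% +/- {100 * margin:.1f} percentage points")  # +/- 1.7
+```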
"It became obvious that explanation could not account." I don't doubt Serano's reporting of her own phenomenal experiences, but "that explanation could not account" is _not an experience_; it's a _hypothesis_ about psychology, about the _causes_ of the experience. I [don't _expect_ anyone to be able to get that sort of thing right from introspection alone!](/2016/Sep/psychology-is-about-invalidating-peoples-identities/).
-Or consider _Nevada_. This was a popular book! Part 2, Chapter 23 is our protagonist Maria's rant about the self-evident falsehood and injustice of autogynephilia theory. And she starts out by ... acknowledging the phenomenon which the theory is meant to explain:
+Or consider _Nevada_. This was a popular book! Part 2, Chapter 23 is our protagonist Maria's rant about the self-evident falsehood and injustice of autogynephilia theory. And she starts out by ... acknowledging the phenomenon which the theory is meant to explain:
> But the only time I couldn't lie to myself about who I wanted to be, and how I wanted to be, and like, the way I needed to exist in the world if I was going to actually exist in the world, is when I was jacking off.
>
... the pseudonymity is kind of a joke at this point. It turned out that my need for openness and a unified identity was far stronger than my grasp of what my very smart and cowardly friends think is prudence, such that I ended up frequently linking to and claiming ownership of the blog from my real name, _and_ otherwise [leaking](/2019/Apr/link-where-to-draw-the-boundaries/) [entropy](/2021/Jan/link-unnatural-categories-are-optimized-for-deception/) [through](/2021/Sep/link-blood-is-thicker-than-water/) a sieve on this side. Given the world of the current year (such that this blog was even _necessary_), it's _probably_ a smarter play if the _first_ page of my real-name Google search results isn't my gender [and worse](/2020/Apr/book-review-human-diversity/) heterodoxy blog?—so I _guess_ I'll keep the Saotome-Westlake byline on this site, even though it's definitely a mere differential-visibility market-segmentation pen name, like how everyone knows that Robert Galbraith is actually J. K. Rowling, and not an Actually Secret pen name. Plus, after having made the mistake (?) of listening to my very smart and cowardly friends at the start, I'd face a backwards-compatibility problem if I wanted to unwind the pseudonym: there are _already_ a lot of references to this blog being written by Saotome-Westlake, and I don't want to throw away or rewrite that history—this is also one of several reasons I'm not transitioning.
-[TODO: the Transgender Roadmap website mis-identified my pseudonym, so I guess it worked?!]
+[TODO: the Transgender Map website mis-identified my pseudonym, so I guess it worked?! https://www.transgendermap.com/community/michael-mcclure/ ]
(Separately, I'm not entirely without misgivings about the exact naming choices I made, although I don't actively regret them, the way I regret [my attempted nickname switch in the late 'aughts](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#literary-initials). For the pen name: a hyphenated last name (a feminist tradition), abbreviated-first-initial + gender-neutral middle name (as if suggesting a male ineffectually trying to avoid having an identifiably male byline), "Saotome" from [a thematically-relevant Japanese graphic novel series](https://en.wikipedia.org/wiki/Ranma_%C2%BD), "West" (+ an extra syllable) after a character in a serial novel whose catchphrase is ["Somebody has to and no one else will"](https://unsongbook.com/chapter-6-till-we-have-built-jerusalem/). For the blog name: I had already imagined that if I ever did stoop to the depravity of starting one of those [transformation/bodyswap captioned-photo erotica blogs](/2016/Oct/exactly-what-it-says-on-the-tin/) of my own, I would call it _The Titillating But Ultimately Untrue Thought_, and in fact had already claimed
[email protected]_ in 2014, to participate in [a captioning contest](http://celebbodyswap.blogspot.com/2014/02/magic-remote-caption-contest.html), but since this was to be a serious autogynephilia _science_ blog, rather than tawdry _object-level_ autogynephilia blogging, I picked "Scintillating" as a more wholesome adjective. In retrospect, it may have been a mistake to choose a URL different from the blog's title—people seem to remember the URL more than the title, and as far as the URL goes, to be led by the dot before the TLD to interpret "space" as a separate word, rather than my intent of "genderspace" being analogous to "configuration space"—but it doesn't bother me that much.)
[TODO: credit assignment ritual ($18200 credit-assignment ritual): $5K to Michael, $1200 each to trans widow friend, 3 care team members (Alicorn Sarah Anna), Ziz, Olivia, and Sophia, $400 each to Steve, A.M., Watson, "Wilhelm", Jonah, James, Ben, Kevin, Alexei (declined), Andrew, Divia, Lex, Devi]
-[On my last day at SwiftStack, I said that I was taking a sabbatical from my software engineering career to become a leading intellectual figure of the alternative right. That was a joke, but not one that I would have made after Charlottesville.]
+[On my last day at SwiftStack, I said that I was taking a sabbatical from my software engineering career to become a leading intellectual figure of the alternative right. That was a joke, but not one that I would have made after Charlottesville. August 2017 https://en.wikipedia.org/wiki/Unite_the_Right_rally ]
✓ anti-correlated variables [pt. 2]
- previous AI milestones [pt. 4]
_ short timelines and politics [pt. 4]
-_ Daria's ending/standing under the same sky? [pt. 4]
_ social justice and defying threats [pt. 4]
_ "victories" weren't comforting [pt. 3]
_ vaccine joke [pt. 4]
_ psychiatric disaster
With internet available—
-_ IQ vs. cancer risk polygenic score cite
-_ footnote embryo selection companies?
-_ footnote Tailcalled on gayGP
-_ "Qualitative Strategies of Friendliness": discussion of truth vs. happiness in Friendly AI
-_ "Radical Honesty": discussion of Crocker's rules, "whosever lies _in a journal article_ is guilty of utter heresy", lying to others
_ "not hard to find": link to more /r/itsafetish-like anecdotes
-_ /r/MtF subscriber count, and poll reply numbers
-_ Charlottesville
-_ TSRoadmap mis-identified my pseudonym
-_ did _Nevada_ win a Lambda award or anything like that?
-_ when did I order _Nevada_? when did I order _MTiMB_?
-_ Samo's comment, and what did the TERF phenomenon look like in 2016?
-_ "Tiresias" comment
-_ year of MIRI rebrand; think it was 2013; check before inserting (for ^siai footnote)
_ stats of SIAI vs. SingInst hits (for ^siai footnote)
-_ retrieve own-blog links for "futurist-themed delusions"
-_ double-check that "Changing Emotions" was in January
-_ No such thing as a tree
_ Yudkowsky on AlphaGo
+_ larger Extropy quote than "otherwise identical"
_ quote other Eliezer Yudkowsky facts
_ footnote about Scott writing six times faster than me
_ include Eric Weinstein in archive.is spree
_ 13th century word meanings
_ weirdly hostile comments on "... Boundaries?"
_ Anna's claim that Scott was a target specifically because he was good, my counterclaim that payment can't be impunity
-_ larger Extropy quote than "otherwise identical"
_ Yudkowsky's LW moderation policy
far editing tier—
people to consult before publishing, for feedback or right of objection—
+_ Tail (pt. 2 AGP discussion)
_ Iceman
_ Ben/Jessica (Michael)
+_ "Wilhelm"
_ Scott
_ Anna
_ secret posse member
_ Katie (pseudonym choice)
_ Alicorn: about privacy, and for Melkor Glowfic reference link
_ hostile prereader (April, J. Beshir, Swimmer, someone else from Alicorner #drama)
-_ maybe Kelsey (very briefly, just about her name)?
+_ Kelsey (briefly)
_ maybe SK (briefly about his name)? (the memoir might have the opposite problem (too long) from my hostile-shorthand Twitter snipes)
marketing—
https://www.lesswrong.com/posts/pC74aJyCRgns6atzu/meta-discussion-from-circling-as-cousin-to-rationality?commentId=kS4BfYJuZ8ZcwuwfB
https://www.lesswrong.com/posts/pC74aJyCRgns6atzu/meta-discussion-from-circling-as-cousin-to-rationality?commentId=4kLTSanNyhn5H8bHv
+> What I'm saying is that, in the discussion as a whole, which is constituted by the post itself, plus comments thereon, plus related posts and comments, etc., an author has an obligation to respond to reader inquiries of this sort.
+>
+> As for where said obligation comes from—why, from the same place as the obligation to provide evidence for your claims, or the obligation to cite your sources, or the obligation not to be logically rude, or the obligation to write comprehensibly, or the obligation to acknowledge and correct factual errors, etc., etc.—namely, from the fact that acknowledging and satisfying this obligation reliably leads to truth, and rejecting this obligation reliably leads to error. In short: it is _epistemically rational_.
+
Namespace on standing and warrant—
https://www.lesswrong.com/posts/pC74aJyCRgns6atzu/meta-discussion-from-circling-as-cousin-to-rationality?commentId=c7Xt5AnHwhfgYY67K
explainxkcd.com/wiki/index.php/1425:_Tasks "I'll need a research team and five years" September 2014
-May 2019: David MacIver is in on it, too! https://www.drmaciver.com/2019/05/the-inner-sense-of-gender/#comment-366840
\ No newline at end of file
+May 2019: David MacIver is in on it, too! https://www.drmaciver.com/2019/05/the-inner-sense-of-gender/#comment-366840
+
+"whosever lies _in a journal article_ is guilty of utter heresy"
+
+first I ordered _Nevada_ 24 March 2016; first ordered _MTiMB_ on 6 August 2016
+ordered additional copies of MTiMB 14 September 2016 and 19 December 2016
+
+> We passed word to the Fake Conspiracy section of Exception Handling, and they've spent the last few hours quickly planting evidence consistent with how Civilization should look if the Sparashki are real. The notion being that their apparent fictional status and licensing is just a cover, so Sparashki can walk around if they have to and just get compliments on their incredible cosplay. Since this event is medium-secret, the CEO of Yattel's Less Expensive Tunneling Machines has been photographed by surprise through a window, looking like a Sparashki, to explain why conspiracy-theoretic research is suddenly focusing there and turning up the evidence we've planted."
+https://glowfic.com/replies/1860952#reply-1860952