From: M. Taylor Saotome-Westlake
Date: Sun, 11 Sep 2022 01:51:26 +0000 (-0700)
Subject: memoir: novices like me and Michelle Alleva ...
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=fc7a29a0e263c6860e6e6e75b3db8ad02038d1e6;p=Ultimately_Untrue_Thought.git

memoir: novices like me and Michelle Alleva ...
---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index e50839f..e63baaa 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -469,7 +469,7 @@ Sarah asked if the math wasn't a bit overkill: were the calculations really nece

 My thinking here was that the posse's previous email campaigns had been doomed to failure by being too closely linked to the politically-contentious object-level topic which reputable people had strong incentives not to touch with a ten-foot pole. So if I wrote this post _just_ explaining what was wrong with the claims Yudkowsky and Alexander had made about the philosophy of language, with perfectly innocent examples about dolphins and job titles, that would remove the political barrier and [leave a line of retreat](https://www.lesswrong.com/posts/3XgYbghWruBMrPTAL/leave-a-line-of-retreat) for Yudkowsky to correct the philosophy of language error. And then if someone with a threatening social-justicey aura were to say, "Wait, doesn't this contradict what you said about trans people earlier?", stonewall them. (Stonewall _them_ and not _me_!)

-I could see a case that it was unfair of me to include subtext and then expect people to engage with the text, but if we weren't going to get into full-on gender-politics on _Less Wrong_ (which seemed like a bad idea), but gender politics _was_ motivating an epistemology error, I wasn't sure what else I'm supposed to do! I was pretty constrained here!
+I could see a case that it was unfair of me to include subtext and then expect people to engage with the text, but if we weren't going to get into full-on gender-politics on _Less Wrong_ (which seemed like a bad idea), but gender politics _was_ motivating an epistemology error, I wasn't sure what else I was supposed to do! I was pretty constrained here!

 (I did regret having accidentally "poisoned the well" the previous month by impulsively sharing the previous year's ["Blegg Mode"](/2018/Feb/blegg-mode/) [as a _Less Wrong_ linkpost](https://www.lesswrong.com/posts/GEJzPwY8JedcNX2qz/blegg-mode). "Blegg Mode" had originally been drafted as part of "... To Make Predictions" before getting spun off as a separate post. Frustrated in March at our failing email campaign, I thought it was politically "clean" enough to belatedly share, but it proved to be insufficiently [deniably allegorical](/tag/deniably-allegorical/). It's plausible that some portion of the _Less Wrong_ audience would have been more receptive to "... Boundaries?" as not-politically-threatening philosophy, if they hadn't been alerted to the political context by the 60+-comment trainwreck on the "Blegg Mode" linkpost.)
@@ -533,9 +533,9 @@ In November, I received an interesting reply on my philosophy-of-categorization

 I had thought of the "false-positives are better than false-negatives when detecting predators" example as being about the limitations of evolution as an AI designer: messy evolved animal brains don't bother to track probability and utility separately the way a cleanly-designed AI could. As I had explained in "... Boundaries?", it made sense for _what_ variables you paid attention to, to be motivated by consequences. But _given_ the subspace that's relevant to your interests, you want to run an epistemically legitimate clustering algorithm on the data you see there, which depends on the data, not your values. The only reason value-dependent gerrymandered category boundaries seem like a good idea if you're not careful about philosophy is because it's _wireheading_. Ideal probabilistic beliefs shouldn't depend on consequences.

-Abram didn't think the issue was so clear-cut. Where do "probabilities" come from, in the first place? The reason we expect something like Bayesianism to be an attractor among self-improving agents is _because_ probabilistic reasoning is broadly useful: epistemology can be _derived_ from instrumental concerns. He agreed that severe wireheading issues potentially arise if you allow consequentialist concerns to affect your epistemics.
+Abram didn't think the issue was so clear-cut. Where do "probabilities" come from, in the first place? The reason we expect something like Bayesianism to be an attractor among self-improving agents is _because_ probabilistic reasoning is broadly useful: epistemology can be _derived_ from instrumental concerns. He agreed that severe wireheading issues _potentially_ arise if you allow consequentialist concerns to affect your epistemics—

-But the alternative view had its own problems. If your AI consists of a consequentialist module that optimizes for utility in the world, and an epistemic module that optimizes for the accuracy of its beliefs, that's _two_ agents, not one: how could that be reflectively coherent? You could, perhaps, bite the bullet here, for fear that consequentialism doesn't tile and that wireheading was inevitable. On this view, Abram explained, "Agency is an illusion which can only be maintained by crippling agents and giving them a split-brain architecture where an instrumental task-monkey does all the important stuff while an epistemic overseer supervises." Whether this view was ultimately tenable or not, this did show that trying to forbid appeals-to-consequences entirely led to strange places. I didn't immediately have an answer for Abram, but I was grateful for the engagement. (Abram was clearly addressing the real philosophical issues, and not just trying to mess with me the way almost everyone else in Berkeley was trying to mess with me.)
+But the alternative view had its own problems. If your AI consists of a consequentialist module that optimizes for utility in the world, and an epistemic module that optimizes for the accuracy of its beliefs, that's _two_ agents, not one: how could that be reflectively coherent? You could, perhaps, bite the bullet here, for fear that consequentialism doesn't tile and that wireheading was inevitable. On this view, Abram explained, "Agency is an illusion which can only be maintained by crippling agents and giving them a split-brain architecture where an instrumental task-monkey does all the important stuff while an epistemic overseer supervises." Whether this view was ultimately tenable or not, this did show that trying to forbid appeals-to-consequences entirely led to strange places. I didn't immediately have an answer for Abram, but I was grateful for the engagement. (Abram was clearly addressing the real philosophical issues, and not just trying to mess with me the way almost everyone else in Berkeley, up to and including Eliezer Yudkowsky, was trying to mess with me.)

 Also in November, I wrote to Ben about how I was still stuck on writing the grief-memoir. My _plan_ had been that it should have been possible to tell the story of the Category War while glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-[frenemies](https://en.wikipedia.org/wiki/Frenemy) warning when you're about to publicly characterize them as having been intellectually dishonest), then share it to _Less Wrong_ and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water) and not be a mentally-dominated cultist.

@@ -826,14 +826,27 @@ I knew better than to behave like that—and to the extent that I was tempted, I

 Someone who uncritically validated my not liking to be tossed into the Student Bucket, instead of assessing my _reasons_ for not liking to be tossed into the Bucket and whether those reasons had merit, would be hurting me, not helping me—because in order to navigate the real world, I need a map that reflects the territory, rather than my narcissistic fantasies. I'm a better person for straightforwardly facing the shame of getting a _C_ in community college differential equations, rather than trying to deny it or run away from it or claim that it didn't mean anything. Part of updating myself incrementally was that I would get _other_ chances to prove that my autodidacticism could match the standard set by schools. (I've had a professional and open-source programming career without finishing college; when I audited honors analysis at UC Berkeley "for fun" in 2017, I did fine on the midterm; when applying for a new dayjob in 2018, the interviewer, noting my lack of a degree, said he was going to give a version of the interview without a computer science theory question. I insisted on being given the "college" version of the interview, solved a dynamic programming problem, and got the job. And so on.)

-[TODO SECTION just crazy
-she thought "I'm trans" was an explanation, but then found a better theory that explains the same data—that's what "rationalism" should be—including "That wasn't entirely true!!!!"
-https://somenuanceplease.substack.com/p/actually-i-was-just-crazy-the-whole
-]
+If you can see why uncritically affirming people's current self-image isn't the right solution to "student dysphoria", it should be obvious why the same is true of gender dysphoria. The principle that _truth matters_ is very general!
+
+In an article titled ["Actually, I Was Just Crazy the Whole Time"](https://somenuanceplease.substack.com/p/actually-i-was-just-crazy-the-whole), detransitioner Michelle Alleva contrasts her beliefs at the time of deciding to transition, with her current beliefs. While transitioning, she accounted for many pieces of evidence about herself ("dislike attention as a female", "obsessive thinking about gender", "didn't fit in with the girls", _&c_.) in terms of the theory "It's because I'm trans." But now, Alleva writes, she thinks she has a variety of better explanations that, all together, cover everything on the original list: "It's because I'm autistic", "It's because I have unresolved trauma", "It's because women are often treated poorly" ... including "That wasn't entirely true" (!!).
+
+This is a _rationality_ skill. Alleva had a theory about herself, and then she _revised her theory upon further consideration of the evidence_. Beliefs about one's self aren't special and can be updated using the _same_ methods that you would use for anything else—[just as a recursively self-improving AI would reason the same about transistors "inside" the AI and transistors in "the environment."](https://www.lesswrong.com/posts/TynBiYt6zg42StRbb/my-kind-of-reflection)
+
+[TODO: I'm praising the form of the inference; not the conclusion; homosexual transsexuals who update to "born in the wrong body" at least have a case; for people like me, and separately people like Alleva, it's just not true; if you coddle "Female Bucket" sentiments, you're outlawing updates]
+
+This also isn't a particularly _advanced_ rationality skill. This is very basic—something novices grasp during their early steps along the Way.
+
+There was an exchange in the comment section between me and Yudkowsky back during the early days of _Less Wrong_, when I still hadn't grown out of [my teenage religion of psychological sex differences denialism](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism). Yudkowsky had claimed that he had ["never known a man with a true female side, and I have never known a woman with a true male side, either as authors or in real life."](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/K8YXbJEhyDwSusoY2) Offended at our leader's sexism (but sensing no socially acceptable way to express it), I timidly [asked him to elaborate](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way?commentId=AEZaakdcqySmKMJYj), and as part of [his response](https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/W4TAp4LuW3Ev6QWSF), he mentioned that he "sometimes wish[ed] that certain women would appreciate that being a man is at least as complicated and hard to grasp and a lifetime's work to integrate, as the corresponding fact of feminity [_sic_]."
+
+[I replied](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/7ZwECTPFTLBpytj7b) (bolding added):
+
+> I sometimes wish that certain men would appreciate that not all men are like them—**or at least, that not all men _want_ to be like them—that the fact of masculinity is [not _necessarily_ something to integrate](https://www.lesswrong.com/posts/vjmw8tW6wZAtNJMKo/which-parts-are-me).**
+
+_I knew_. Even then, _I knew_.
+
+

-[TODO SECTION "duly appreciated"
-]

 [TODO section Feelings vs. Truth
 This is a conflict between Feelings and Truth, between Politics and Truth.

diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md
index e57bef9..a9628a1 100644
--- a/notes/a-hill-of-validity-sections.md
+++ b/notes/a-hill-of-validity-sections.md
@@ -1,7 +1,7 @@
 noncontiguous on deck—

 ✓ being put in a bucket (school)
-_ "duly appreciated"
-_ "Actually, I was just crazy the whole time"
+- "Actually, I was just crazy the whole time"
+- "duly appreciated"
 _ Doublethink (Choosing to be Biased)
 _ the reason he got pushback
@@ -1043,7 +1043,7 @@ https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/W4TAp4LuW3Ev

 > Okay. I’ve never seen a male author write a female character with the same depth as Phedre no Delaunay, nor have I seen any male person display a feminine personality with the same sort of depth and internal integrity, nor have I seen any male person convincingly give the appearance of having thought out the nature of feminity to that depth. Likewise and in a mirror for women and men. I sometimes wish that certain women would appreciate that being a man is at least as complicated and hard to grasp and a lifetime’s work to integrate, as the corresponding fact of feminity. I am skeptical that either sex can ever really model and predict the other’s deep internal life, short of computer-assisted telepathy. These are different brain designs we’re talking about here.

 https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/7ZwECTPFTLBpytj7b
-> I sometimes wish that certain men would appreciate that not all men are like them—or at least, that not all men _want_ to be like them—that the fact of masculinity is not necessarily something to integrate.
+

 > Duly appreciated.
@@ -1093,3 +1093,5 @@ If we're going to die either way, wouldn't it be _less dignified_ to die with St

 https://twitter.com/ESYudkowsky/status/1568338672499687425
 > I'm not interested in lying to the man in the street. It won't actually save the world, and is not part of a reasonable and probable plan for saving the world; so I'm not willing to cast aside my deontology for it; nor would the elites be immune from the epistemic ruin.
+
+The problem with uncritically validating an autodidact's ego is that a _successful_ autodidact needs to have an accurate model of how their studying process is working, and that's a lot harder when people are "benevolently" trying to _wirehead_ you.
\ No newline at end of file