From: M. Taylor Saotome-Westlake
Date: Sat, 17 Sep 2022 00:59:04 +0000 (-0700)
Subject: memoir: a battle between Feelings and Truth
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=c5f1c1d8694a7aeb56883779cbce1c5ec83020d9;p=Ultimately_Untrue_Thought.git

memoir: a battle between Feelings and Truth
---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index d5c3d11..a1a8ae5 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -130,6 +130,8 @@ I'm picking on the "sports segregated around an Aristotelian binary" remark beca
Thus, Yudkowsky's claim to merely have been standing up for the distinction between facts and policy questions doesn't seem credible. It is, of course, true that pronoun and bathroom conventions are policy decisions rather than a matter of fact, but it's _bizarre_ to condescendingly point this out _as if it were the crux of contemporary trans-rights debates_. Conservatives and gender-critical feminists _know_ that trans-rights advocates aren't falsely claiming that trans women have XX chromosomes. If you _just_ wanted to point out that the organization of sports leagues is a policy question rather than a fact (as if anyone had doubted this), why would you throw in the "Aristotelian binary" strawman and belittle the matter as "humorous"? There are a lot of issues that I don't _personally_ care much about, but I don't see anything funny about the fact that other people _do_ care.

+(And in the case of sports, the empirical facts are just _so_ lopsided that if we must find humor in the matter, it really goes the other way. Just a few years later, [Lia Thomas](https://en.wikipedia.org/wiki/Lia_Thomas) would be dominating NCAA women's swim meets by finishing [_4.2 standard deviations_](https://twitter.com/FondOfBeetles/status/1466044767561830405) (!!) faster than the median competitor, and Eliezer Yudkowsky feels obligated to _pretend not to see the problem?_ You've got to admit, that's a _little_ bit funny.)
+
If any concrete negative consequence of gender self-identity categories is going to be waved away with, "Oh, but that's a mere _policy_ decision that can be dealt with on some basis other than gender, and therefore doesn't count as an objection to the new definition of gender words", then it's not clear what the new definition is _for_.

Like many gender-dysphoric males, I [cosplay](/2016/Dec/joined/) [female](/2017/Oct/a-leaf-in-the-crosswind/) [characters](/2019/Aug/a-love-that-is-out-of-anyones-control/) at fandom conventions sometimes. And, unfortunately, like many gender-dysphoric males, I'm _not very good at it_. I think someone looking at some of my cosplay photos and trying to describe their content in clear language—not trying to be nice to anyone or make a point, but just trying to use language as a map that reflects the territory—would say something like, "This is a photo of a man and he's wearing a dress."
The word _man_ in that sentence is expressing _cognitive work_: it's a summary of the [lawful cause-and-effect evidential entanglement](https://www.lesswrong.com/posts/6s3xABaXKPdFwA3FS/what-is-evidence) whereby the photons reflecting off the photograph are correlated with photons reflecting off my body at the time the photo was taken, which are correlated with my externally-observable secondary sex characteristics (facial structure, beard shadow, _&c._), from which evidence an agent using an [efficient naïve-Bayes-like model](https://www.lesswrong.com/posts/gDWvLicHhcMfGmwaK/conditional-independence-and-naive-bayes) can assign me to its "man" category and thereby make probabilistic predictions about some of my traits that aren't directly observable from the photo, and achieve a better [score on those predictions](http://yudkowsky.net/rational/technical/) than if the agent had assigned me to its "adult human female" category, where by "traits" I mean not (just) particularly sex chromosomes ([as Yudkowsky suggested on Twitter](https://twitter.com/ESYudkowsky/status/1067291243728650243)), but the _conjunction_ of dozens or hundreds of measurements that are [_causally downstream_ of sex chromosomes](/2021/Sep/link-blood-is-thicker-than-water/): reproductive organs _and_ muscle mass (sex difference effect size of [Cohen's _d_](https://en.wikipedia.org/wiki/Effect_size#Cohen's_d) ≈ 2.6) _and_ Big Five Agreeableness (_d_ ≈ 0.5) _and_ Big Five Neuroticism (_d_ ≈ 0.4) _and_ short-term memory (_d_ ≈ 0.2, favoring women) _and_ white-to-gray-matter ratios in the brain _and_ probable socialization history _and_ [any number of other things](/papers/archer-the_reality_and_evolutionary_significance_of_human_psychological_sex_differences.pdf)—including differences we might not necessarily currently know about, but have prior reasons to suspect exist: no one _knew_ about sex chromosomes before 1905, but given all the other systematic differences between women and men, it would have been a reasonable guess (that turned out to be correct!) to suspect the existence of some sort of molecular mechanism of sex determination.

@@ -368,7 +370,7 @@ That seemed a little harsh on Scott to me. At 6:14 _a.m._ and 6:21 _a.m._, I wro
Michael was _furious_ with me, and he emailed and called me to say so. He seemed to have a theory that people who are behaving badly, as Scott was, will only change when they see a victim who is being harmed. Me escalating and then deescalating just after he came to help was undermining the attempt to force an honest confrontation, such that we could _get_ to the point of having a Society with morality or punishment.

-Anyway, I did successfully get to my apartment and get a few hours of sleep. One of the other friends I had cc'd on some of the emails came to visit me later than morning with her young son—I mean, her son at the time.
+Anyway, I did successfully get to my apartment and get a few hours of sleep. One of the other friends I had cc'd on some of the emails came to visit me later that morning with her young son—I mean, her son at the time.

(Incidentally, the code that I wrote intermittently between 11 _p.m._ and 4 _a.m._ was a horrible bug-prone mess, and the company has been paying for it ever since, every time someone needs to modify that function and finds it harder to make sense of than it would be if I had been less emotionally overwhelmed in March 2019 and written something sane instead.)
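(A concrete aside on the "conjunction of dozens or hundreds of measurements" point above: here is a minimal sketch, in Python, of the kind of calculation such a naïve-Bayes-like agent would be doing. It's my own illustration, not code from any of the linked posts; the four-trait list, the unit-variance normal model with cluster means at ±_d_/2, the sign conventions, and the function names are all simplifications chosen to make the arithmetic legible, not a claim about the real joint distribution. Under those assumptions, per-trait log-likelihood ratios simply add.)

```python
import math

# Illustrative simplification (not from the linked posts): each standardized
# trait is modeled as N(+d/2, 1) in the male cluster and N(-d/2, 1) in the
# female cluster, with d values as quoted above; negative d marks traits on
# which women score higher. Independence of traits given sex is the
# naive-Bayes assumption.
TRAITS = [
    ("muscle mass", 2.6),
    ("Big Five Agreeableness", -0.5),   # higher in women
    ("Big Five Neuroticism", -0.4),     # higher in women
    ("short-term memory", -0.2),        # higher in women
]

def log_likelihood_ratio(x, d):
    """log P(x | male) - log P(x | female) for N(+d/2, 1) vs. N(-d/2, 1).

    The quadratic terms cancel, leaving x * d: evidence is linear in how far
    the observation sits toward the male side, scaled by the effect size.
    """
    return x * d

def posterior_male(observations, prior_male=0.5):
    """Pool independent standardized observations, naive-Bayes style."""
    llr = sum(log_likelihood_ratio(x, d) for x, (_, d) in zip(observations, TRAITS))
    odds = (prior_male / (1 - prior_male)) * math.exp(llr)
    return odds / (1 + odds)

# Someone sitting exactly at the male mean on all four traits:
print(posterior_male([d / 2 for _, d in TRAITS]))  # ≈ 0.97

# For scale: a 4.2-standard-deviation gap (as in the swimming example above)
# corresponds to a standard-normal tail probability of about 1.3e-05.
print(0.5 * math.erfc(4.2 / math.sqrt(2)))  # ≈ 1.33e-05
```

(The design point of the sketch is that evidence accumulates additively in log-odds, so even traits with small _d_ nudge the posterior; that's why a category tracking the whole multivariate cluster does more predictive work than any single measurement considered alone.)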
@@ -537,9 +539,9 @@ I had thought of the "false-positives are better than false-negatives when detec
Abram didn't think the issue was so clear-cut. Where do "probabilities" come from, in the first place? The reason we expect something like Bayesianism to be an attractor among self-improving agents is _because_ probabilistic reasoning is broadly useful: epistemology can be _derived_ from instrumental concerns. He agreed that severe wireheading issues _potentially_ arise if you allow consequentialist concerns to affect your epistemics—

-But the alternative view had its own problems. If your AI consists of a consequentialist module that optimizes for utility in the world, and an epistemic module that optimizes for the accuracy of its beliefs, that's _two_ agents, not one: how could that be reflectively coherent? You could, perhaps, bite the bullet here, for fear that consequentialism doesn't tile and that wireheading was inevitable. On this view, Abram explained, "Agency is an illusion which can only be maintained by crippling agents and giving them a split-brain architecture where an instrumental task-monkey does all the important stuff while an epistemic overseer supervises." Whether this view was ultimately tenable or not, this did show that trying to forbid appeals-to-consequences entirely led to strange places. I didn't immediately have an answer for Abram, but I was grateful for the engagement. (Abram was clearly addressing the real philosophical issues, and not just trying to mess with me the way almost everyone else in Berkeley including up to and including Eliezer Yudkowsky was trying to mess with me.)
+But the alternative view had its own problems. If your AI consists of a consequentialist module that optimizes for utility in the world, and an epistemic module that optimizes for the accuracy of its beliefs, that's _two_ agents, not one: how could that be reflectively coherent? You could, perhaps, bite the bullet here, for fear that consequentialism doesn't tile and that wireheading was inevitable. On this view, Abram explained, "Agency is an illusion which can only be maintained by crippling agents and giving them a split-brain architecture where an instrumental task-monkey does all the important stuff while an epistemic overseer supervises." Whether this view was ultimately tenable or not, this did show that trying to forbid appeals-to-consequences entirely led to strange places. I didn't immediately have an answer for Abram, but I was grateful for the engagement. (Abram was clearly addressing the real philosophical issues, and not just trying to mess with me the way almost everyone else in Berkeley was trying to mess with me.)

-Also in November, I wrote to Ben about how I was still stuck on writing the grief-memoir. My _plan_ had been that it should have been possibly to tell the story of the Category War while glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-[frenemies](https://en.wikipedia.org/wiki/Frenemy) warning when you're about to publicly characterize them as having been intellectually dishonest), then share it to _Less Wrong_ and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water) and not be a mentally-dominated cultist.
+Also in November, I wrote to Ben about how I was still stuck on writing the grief-memoir.
My _plan_ had been that it should have been possible to tell the story of the Category War while glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-[frenemies](https://en.wikipedia.org/wiki/Frenemy) warning when you're about to publicly call them intellectually dishonest), then share it to _Less Wrong_ and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water) and not be a mentally-dominated cultist.

The reason it _should_ be safe to write is that Explaining Things is Good. It should be possible to say, "This is not a social attack; I'm not saying 'rationalists Bad, Yudkowsky Bad'; I'm just trying to carefully _tell the true story_ about why, as a matter of cause-and-effect, I've been upset this year, including addressing counterarguments for why some would argue that I shouldn't be upset, why other people could be said to be behaving 'reasonably' given their incentives, why I nevertheless wish they'd be braver and adhere to principle rather than 'reasonably' following incentives, _&c_."

@@ -565,7 +567,7 @@ Scott replies on 21 December https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/m
I snapped https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist?commentId=xEan6oCQFDzWKApt7

Christmas party
-
+playing on a different chessboard
people reading funny GPT-2 quotes
Tsvi said it would be sad if I had to leave the Bay Area
motivation deflates after Christmas victory

@@ -800,7 +802,7 @@ To his credit, he _will_ admit that he's only willing to address a selected subs
Counterarguments aren't completely causally _inert_: if you can make an extremely strong case that Biological Sex Is Sometimes More Relevant Than Self-Declared Gender Identity, Yudkowsky will put some effort into coming up with some ingenious excuse for why he _technically_ never said otherwise, in ways that exhibit generally rationalist principles. But at the end of the day, Yudkowsky is going to say what he needs to say in order to protect his reputation, as is sometimes personally prudent.

-Even if one were to agree with this description of Yudkowsky's behavior, it doesn't immediately follow that Yudkowsky is making the wrong decision. Again, "bad faith" is meant as a literal description that makes predictions about behavior, not a contentless attack—maybe there are some circumstances in which engaging some amount of bad faith is the right thing to do, given the constraints one faces! For example, when talking to people on Twitter with a very different ideological background from me, I sometimes anticipate that if my interlocutor knew what I was actually thinking, they wouldn't want to talk to me, so I occasionally engage in a bit of what could be called ["concern trolling"](https://geekfeminism.fandom.com/wiki/Concern_troll): I take care to word my replies in a way that makes it look like I'm more ideologically aligned with them than I actually am. (For example, I [never say "assigned female/male at birth" in my own voice on my own platform](/2019/Sep/terminology-proposal-developmental-sex/), but I'll do it in an effort to speak my interlocutor's language.) I think of this as the _minimal_ amount of strategic bad faith needed to keep the conversation going, to get my interlocutor to evaluate my argument on its own merits, rather than rejecting it for coming from an ideological enemy.
In cases such as these, I'm willing to defend my behavior as acceptable—there _is_ a sense in which I'm being deceptive by optimizing my language choice to make my interlocutor make bad guesses about my ideological alignment, but I'm comfortable with that amount and scope of deception in the service of correcting the distortion where I don't think my interlocutor _should_ be paying attention to my personal alignment.
+Even if one were to agree with this description of Yudkowsky's behavior, it doesn't immediately follow that Yudkowsky is making the wrong decision. Again, "bad faith" is meant as a literal description that makes predictions about behavior, not a contentless attack—maybe there are some circumstances in which engaging some amount of bad faith is the right thing to do, given the constraints one faces! For example, when talking to people on Twitter with a very different ideological background from me, I sometimes anticipate that if my interlocutor knew what I was actually thinking, they wouldn't want to talk to me, so I occasionally engage in a bit of what could be called ["concern trolling"](https://geekfeminism.fandom.com/wiki/Concern_troll): I take care to word my replies in a way that makes it look like I'm more ideologically aligned with my interlocutor than I actually am. (For example, I [never say "assigned female/male at birth" in my own voice on my own platform](/2019/Sep/terminology-proposal-developmental-sex/), but I'll do it in an effort to speak my interlocutor's language.) I think of this as the _minimal_ amount of strategic bad faith needed to keep the conversation going, to get my interlocutor to evaluate my argument on its own merits, rather than rejecting it for coming from an ideological enemy. In cases such as these, I'm willing to defend my behavior as acceptable—there _is_ a sense in which I'm being deceptive by optimizing my language choice to make my interlocutor make bad guesses about my ideological alignment, but I'm comfortable with that amount and scope of deception in the service of correcting the distortion where I don't think my interlocutor _should_ be paying attention to my personal alignment.

That is, my bad faith concern-trolling gambit of deceiving people about my ideological alignment in the hopes of improving the discussion seems like something that makes our collective beliefs about the topic-being-argued-about _more_ accurate. (And the topic-being-argued-about is presumably of greater collective interest than which "side" I personally happen to be on.)

@@ -872,23 +874,6 @@ Let's recap.

-[TODO:
-
-https://twitter.com/ESYudkowsky/status/1404697716689489921
-> I have never in my own life tried to persuade anyone to go trans (or not go trans)—I don't imagine myself to understand others that much.
-
-If you think it "sometimes personally prudent and not community-harmful" to got out of your way to say positive things about Republican candidates and never, ever say positive things about Democratic candidates (because you "don't see what the alternative is besides getting shot"), you can see why people might regard you as a _Republican shill_—even if all the things you said were true, and even if you never told any specific individual, "You should vote Republican."
-
-https://www.facebook.com/yudkowsky/posts/10154110278349228
-> Just checked my filtered messages on Facebook and saw, "Your post last night was kind of the final thing I needed to realize that I'm a girl."
-> ==DOES ALL OF THE HAPPY DANCE FOREVER==
-https://twitter.com/ESYudkowsky/status/1404821285276774403
-> It is not trans-specific. When people tell me I helped them, I mostly believe them and am happy.
-]
-
-
-
I _never_ expected to end up arguing about something so _trivial_ as the minutiae of pronoun conventions (which no one would care about if historical contingencies of the evolution of the English language hadn't made them a Schelling point and typographical attack surface for things people do care about). The conversation only ended up here after a series of derailings. At the start, I was _trying_ to say something substantive about the psychology of straight men who wish they were women.

_After it's been pointed out_, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female counterpart" and "guy in 2016 Berkeley who identifies as a trans woman" are the _same guy_.

@@ -917,21 +902,37 @@ Seriously, you think I'm _smart enough_ to come up with all of this independently
Does ... does he expect us not to _notice_? Or does he think that "everybody knows"?

-But I don't, think that everybody knows.
+But I don't think that everybody knows. And I'm not giving up that easily. Not on an entire subculture full of people.
+
+
-[TODO: conflict between Feelings and Truth: you need to be able to tell Norton he's not Emperor, that the delusional autodidact that her study methods aren't working, that AGP is male]
+Yudkowsky [defends his behavior](https://twitter.com/ESYudkowsky/status/1356812143849394176):
-[TODO section Feelings vs. Truth
-This is a conflict between Feelings and Truth, between Politics and Truth.
+> I think that some people model civilization as being in the middle of a great battle in which this tweet, even if true, is giving comfort to the Wrong Side, where I would not have been as willing to tweet a truth helping the Right Side. From my perspective, this battle...
+>
+> ...just isn't that close to the top of my priority list. I rated nudging the cognition of the people-I-usually-respect, closer to sanity, as more important; who knows, those people might matter for AGI someday. And the Wrong Side part isn't as clear to me either.
+
+But the battle that matters—the battle with a Right Side and a Wrong Side—isn't "pro-trans" _vs._ "anti-trans". That's why Jessica joined our posse to try to argue with Yudkowsky in early 2019. (She wouldn't have, if my objection had been, "trans is fake; trans people Bad".) That's why Somni—one of the trans women who [infamously protested the 2019 CfAR reunion](https://www.ksro.com/2019/11/18/new-details-in-arrests-of-masked-camp-meeker-protesters/) for (among other things) CfAR allegedly discriminating against trans women—[understands what I've been saying](https://somnilogical.tumblr.com/post/189782657699/legally-blind).
+
+The battle that matters—and I've been _very_ explicit about this, for years—is over this proposition eloquently stated by Scott Alexander (redacting the irrelevant object-level example):
+
+> I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life. There's no rule of rationality saying that I shouldn't, and there are plenty of rules of human decency saying that I should.
+
+This is a battle between Feelings and Truth, between Politics and Truth.
+
In order to take the side of Truth, you need to be able to tell Joshua Norton that he's not actually Emperor of the United States (even if it hurts him). You need to be able to tell a prideful autodidact that the fact that he's failing quizzes in community college differential equations class is evidence that his study methods aren't doing what he thought they were (even if it hurts him). And you need to be able to say that trans women are male and trans men are female _with respect to_ a female/male "sex" concept that encompasses the many traits that aren't affected by contemporary surgical and hormonal interventions (even if it offends someone who doesn't like being tossed into a Male Bucket or a Female Bucket, as it would be assigned by their birth certificate, and—yes—even if it probabilistically contributes to someone's suicide).
+
+If you don't want to say those things because hurting people is wrong, then you have chosen Feelings.

-Scott Alexander chose Feelings, but I can't really hold that against him, because Scott is very explicit about only acting in the capacity of some guy with a blog. You can tell that he never wanted to be a religious leader; it just happened to him on accident because he writes faster than everyone else. I like Scott. Scott is great. I feel bad that such a large fraction of my interactions with him over the years have taken such an adversarial tone.
+Scott Alexander chose Feelings, but I can't really hold that against him, because Scott is very explicit about only acting in the capacity of some guy with a blog. You can tell from his writings that he never wanted to be a religious leader; it just happened to him by accident because he writes faster than everyone else. I like Scott. Scott is great. I feel bad that such a large fraction of my interactions with him over the years have taken such an adversarial tone.

Eliezer Yudkowsky ... did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that he knows to be unambiguously false.
+
Eliezer Yudkowsky is _absolutely_ trying to be a religious leader.

If Eliezer Yudkowsky can't _unambiguously_ choose Truth over Feelings, _then Eliezer Yudkowsky is a fraud_.

@@ -940,7 +941,6 @@ If Eliezer Yudkowsky can't _unambiguously_ choose Truth over Feelings, _then Elie
-
[TODO section stakes, cooperation
> [_Perhaps_, replied the cold logic](https://www.yudkowsky.net/other/fiction/the-sword-of-good). _If the world were at stake._

@@ -958,17 +958,13 @@ But if he's _then_ going to take a shit on c3 of my chessboard (["the simplest a
The turd on c3 is a pretty big likelihood ratio!

-
-As the traditional rationalist saying goes: once is happenstance. Twice is coincidence. _Three times is enemy optimization_.
+I like to imagine that they have a saying out of dath ilan: once is happenstance; twice is coincidence; _three times is enemy optimization_.
]
-
-
-
[TODO: the dolphin war, our thoughts about dolphins are literally downstream from Scott's political incentives in 2014; this is a sign that we're a cult
https://twitter.com/ESYudkowsky/status/1404700330927923206

diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md
index 910dd78..994d890 100644
--- a/notes/a-hill-of-validity-sections.md
+++ b/notes/a-hill-of-validity-sections.md
@@ -10,6 +10,7 @@ _ weirdly hostile comments on "... Boundaries?"
far editing tier—
_ rephrase "gamete size" discussion to make it clearer that Yudkowsky's proposal also implicitly requires people to agree about the clustering thing
_ smoother transition between "deliberately ambiguous" and "was playing dumb"; I'm not being paranoid for attributing political motives to him, because he told us that he's doing it
+_ it's worth an extra two sentences to explain context of Robert Stadler comparison better
_ I'm sure Eliezer Yudkowsky could think of some relevant differences
_ clarify why Michael thought Scott was "gaslighting" me, include "beseech bowels of Christ"
_ conversation with Ben about physical injuries (this is important because it explains where the "cut my dick off rhetoric" came from)

@@ -293,8 +294,6 @@ I'm worried about the failure mode where bright young minds [lured in](http://be
It's weird that he thinks telling the truth is politically impossible, because the specific truths I'm focused on are things he _already said_, that anyone could just look up. I guess the point is that the egregore doesn't have the logical or reading comprehension for that?—or rather (a reader points out) the egregore has no reason to care about the past; if you get tagged as an enemy, your past statements will get dug up as evidence of foul present intent, but if you're doing a good enough job of playing the part today, no one cares what you said in 2009

-Somni gets it! https://somnilogical.tumblr.com/post/189782657699/legally-blind
E.Y. thinks postrats are emitting "epistemic smog", but the fact that Eigenrobot can retweet my Murray review makes me respect him more than E.Y. https://twitter.com/eigenrobot/status/1397383979720839175

The robot cult is "only" "trying" to trick me into cutting my dick off in the sense that a paperclip maximizer is trying to kill us: an instrumental rather than a terminal value.

@@ -850,15 +849,9 @@ At least, a _pedagogy_ mistake. If Yudkowsky _just_ wanted to make a politically
Rather, previously sexspace had two main clusters (normal females and males) plus an assortment of tiny clusters corresponding to various [disorders of sex development](https://en.wikipedia.org/wiki/Disorders_of_sex_development), and now it has two additional tiny clusters: females-on-masculinizing-HRT and males-on-feminizing-HRT. Certainly, there are situations where you would want to use "gender" categories that use the grouping {females, males-on-feminizing-HRT} and {males, females-on-masculinizing-HRT}.

-[TODO: relevance of multivariate—
-
-(And in this case, the empirical facts are _so_ lopsided, that if we must find humor in the matter, it really goes the other way. Lia Thomas trounces the entire field by _4.2 standard deviations_ (!!), and Eliezer Yudkowsky feels obligated to _pretend not to see the problem?_ You've got to admit, that's a _little_ bit funny.)
-
https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy
https://www.lesswrong.com/posts/vhp2sW6iBhNJwqcwP/blood-is-thicker-than-water
-]
-
[TODO: sentences about studies showing that HRT doesn't erase male advantage
https://twitter.com/FondOfBeetles/status/1368176581965930501
]

@@ -1106,3 +1099,28 @@ https://www.facebook.com/yudkowsky/posts/10154690145854228
lack of trust as a reason nothing works: https://danluu.com/nothing-works/
shouldn't the rats trust each other?
+
+https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way
+> I think there will not be a _proper_ Art until _many_ people have progressed to the point of remaking the Art in their own image, and then radioed back to describe their paths.
+
+
+
+
+
+
+
+[TODO:
+
+https://twitter.com/ESYudkowsky/status/1404697716689489921
+> I have never in my own life tried to persuade anyone to go trans (or not go trans)—I don't imagine myself to understand others that much.
+
+If you think it "sometimes personally prudent and not community-harmful" to go out of your way to say positive things about Republican candidates and never, ever say positive things about Democratic candidates (because you "don't see what the alternative is besides getting shot"), you can see why people might regard you as a _Republican shill_—even if all the things you said were true, and even if you never told any specific individual, "You should vote Republican."
+
+https://www.facebook.com/yudkowsky/posts/10154110278349228
+> Just checked my filtered messages on Facebook and saw, "Your post last night was kind of the final thing I needed to realize that I'm a girl."
+> ==DOES ALL OF THE HAPPY DANCE FOREVER==
+
+https://twitter.com/ESYudkowsky/status/1404821285276774403
+> It is not trans-specific. When people tell me I helped them, I mostly believe them and am happy.
+]