From: M. Taylor Saotome-Westlake Date: Sun, 6 Feb 2022 23:35:14 +0000 (-0800) Subject: Sunday drafting "Challenges": protons X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=1449679a38752b8fe439d59e1bd456d11cbecc62;p=Ultimately_Untrue_Thought.git Sunday drafting "Challenges": protons --- diff --git a/content/drafts/challenges-to-yudkowskys-pronoun-reform-proposal.md b/content/drafts/challenges-to-yudkowskys-pronoun-reform-proposal.md index 037120f..790c07a 100644 --- a/content/drafts/challenges-to-yudkowskys-pronoun-reform-proposal.md +++ b/content/drafts/challenges-to-yudkowskys-pronoun-reform-proposal.md @@ -77,7 +77,7 @@ There are a couple of problems with this. First of all, the "that you insist eve If you _actually_ believed it was Shenanigans to bake a stance on how clustered things are into a pronoun system and insist that everyone else use it, then it should be _equally_ Shenanigans independently of whether the insisted-on clusters are those of sex or those of gender identity—if you're going to be consistent, you should condemn them _both_. And yet _somehow_, the people who insist on sex-based pronouns are the target of Yudkowsky's condescension, whereas the people who insist on gender-identity-based pronouns get both a free pass, _and_ endorsement of their preferred convention (albeit for a different stated reason)? The one-sidedness here is pretty shameless! -Perhaps more importantly, however, in discussing how to reform English, we're not actually in the position of defining a language from scratch. Even if you think the [cultural evolution](/2020/Jan/book-review-the-origins-of-unfairness/) of English involved Shenanigans, it's not fair to attribute the Shenanigans to native speakers accurately describing their native language. Certainly, language can evolve; words can change meaning over time; if you can get the people in some community to start using language differently, then you have _ipso facto_ changed their language. 
But when we consider language as an information-processing system that we can reason about using our standard tools of probability and game theory, we see that in order to change the meaning associated with a word, you actually _do_ have to somehow get people to change their usage. You can _advocate_ for your new meaning and use it in your own speech, but you can't just _declare_ your preferred new meaning and claim that it applies to the language as actually spoken, without speakers actually changing their behavior. As a result, Yudkowsky's proposal "to say that this just _is_ the normative definition" doesn't work. +Perhaps more importantly, however, in discussing how to reform English, we're not actually in the position of defining a language from scratch. Even if you think the [cultural evolution](/2020/Jan/book-review-the-origins-of-unfairness/) of English involved Shenanigans, it's not fair to attribute the Shenanigans to native speakers accurately describing their native language. Certainly, language can evolve; words can change meaning over time; if you can get the people in some community to start using language differently, then you have _ipso facto_ changed their language. But when we consider language as an information-processing system, we see that in order to change the meaning associated with a word, you actually _do_ have to somehow get people to change their usage. You can _advocate_ for your new meaning and use it in your own speech, but you can't just _declare_ your preferred new meaning and claim that it applies to the language as actually spoken, without speakers actually changing their behavior. As a result, Yudkowsky's proposal "to say that this just _is_ the normative definition" doesn't work. To be clear, when I say that the proposal doesn't work, I'm not even saying I disagree with it. I mean that it literally, _factually_ doesn't work! Let me explain. @@ -428,7 +428,7 @@ Ah, _prudence_! 
He continues:

> I don't see what the alternative is besides getting shot, or utter silence about everything Stalin has expressed an opinion on including "2 + 2 = 4" because if that logically counterfactually were wrong you would not be able to express an opposing opinion.

-The problem with trying to "exhibit rationalist principles" in a line of argument that you're constructing in order to be prudent and not community-harmful, is that you're thereby necessarily _not_ exhibiting the central rationalist principle that what matters is the process that _determines_ your conclusion, not the reasoning you present to _reach_ your presented conclusion, after the fact.
+The problem with trying to "exhibit generally rationalist principles" in a line of argument that you're constructing in order to be prudent and not community-harmful, is that you're thereby necessarily _not_ exhibiting the central rationalist principle that what matters is the process that _determines_ your conclusion, not the reasoning you present to _reach_ your presented conclusion, after the fact.

The best explanation of this I know was authored by Yudkowsky himself in 2007, in a post titled ["A Rational Argument"](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument). It's worth quoting at length. The Yudkowsky of 2007 invites us to consider the plight of a political campaign manager:

@@ -485,13 +485,13 @@ To his credit, he _will_ admit that he's only willing to address a selected subs

Counterarguments aren't completely causally _inert_: if you can make an extremely strong case that Biological Sex Is Sometimes More Relevant Than Self-Declared Gender Identity, Yudkowsky will put some effort into coming up with some ingenious excuse for why he _technically_ never said otherwise, in ways that exhibit generally rationalist principles. But at the end of the day, Yudkowsky is going to say what he needs to say in order to protect his reputation, as is personally prudent.
-Even if one were to agree with this description of Yudkowsky's behavior, it doesn't immediately follow that Yudkowsky is making the wrong decision. Again, "bad faith" is meant as a literal description, not a contentless attack—maybe there are some circumstances in which engaging in some amount of bad faith is the right thing to do, given the constraints one faces. For example, when talking to people on Twitter with a very different ideological background from me, I sometimes anticipate that if my interlocutor knew what I was actually thinking, they wouldn't want to talk to me, so I take care to word my replies in a way that makes it look like I'm more ideologically aligned with them than I actually am. (For example, I [never say "assigned female/male at birth" in my own voice on my own platform](/2019/Sep/terminology-proposal-developmental-sex/), but I'll do it in an effort to speak my interlocutor's language.) I think of this as the _minimal_ amount of strategic bad faith needed to keep the conversation going, to get my interlocutor to evaluate my argument on its own merits, rather than rejecting it for coming from an ideological enemy. In cases such as these, I'm willing to defend my behavior as acceptable—there _is_ a sense in which I'm being deceptive by optimizing my language choice to make my interlocutor make bad guesses about my ideological alignment, but I'm comfortable with that amount and scope of deception because I don't think my interlocutor _should_ be paying attention to my personal alignment.
+Even if one were to agree with this description of Yudkowsky's behavior, it doesn't immediately follow that Yudkowsky is making the wrong decision. Again, "bad faith" is meant as a literal description that makes predictions about behavior, not a contentless attack—maybe there are some circumstances in which engaging in some amount of bad faith is the right thing to do, given the constraints one faces!
For example, when talking to people on Twitter with a very different ideological background from me, I sometimes anticipate that if my interlocutor knew what I was actually thinking, they wouldn't want to talk to me, so I take care to word my replies in a way that makes it look like I'm more ideologically aligned with them than I actually am. (For example, I [never say "assigned female/male at birth" in my own voice on my own platform](/2019/Sep/terminology-proposal-developmental-sex/), but I'll do it in an effort to speak my interlocutor's language.) I think of this as the _minimal_ amount of strategic bad faith needed to keep the conversation going, to get my interlocutor to evaluate my argument on its own merits, rather than rejecting it for coming from an ideological enemy. In cases such as these, I'm willing to defend my behavior as acceptable—there _is_ a sense in which I'm being deceptive by optimizing my language choice to make my interlocutor make bad guesses about my ideological alignment, but I'm comfortable with that amount and scope of deception because I don't think my interlocutor _should_ be paying attention to my personal alignment. [TODO: the term is "concern trolling"; speak of trying to correct a distortion] That is, my bad faith Twitter gambit of deceiving people about my ideological alignment in the hopes of improving the discussion seems like something that makes our collective beliefs about the topic-being-argued-about _more_ accurate. (And the topic-being-argued-about is presumably of greater collective interest than which "side" I personally happen to be on.) -In contrast, Yudkowsky's bad faith gambit is the exact reverse: he's making the discussion worse in the hopes of correcting people's beliefs about his own ideological alignment. (He's not a right-wing Bad Guy, but people would tar him as a right-wing Bad Guy if he ever said anything negative about trans people.) 
This doesn't improve our collective beliefs about the topic-being-argued-about; it's a _pure_ ass-covering move.
+In contrast, the "it is sometimes personally prudent [...] to post your agreement with Stalin" gambit is the exact reverse: it's _introducing_ a distortion into the discussion in the hopes of correcting people's beliefs about the speaker's ideological alignment. (Yudkowsky is not a right-wing Bad Guy, but people would tar him as a right-wing Bad Guy if he ever said anything negative about trans people.) This doesn't improve our collective beliefs about the topic-being-argued-about; it's a _pure_ ass-covering move.

Yudkowsky names the alleged fact that "people do _know_ they're living in a half-Stalinist environment" as a mitigating factor. But the _reason_ censorship is such an effective tool in the hands of dictators like Stalin is that it ensures that many people _don't_ know—and that those who know (or suspect) don't have [game-theoretic common knowledge](https://www.lesswrong.com/posts/9QxnfMYccz9QRgZ5z/the-costly-coordination-mechanism-of-common-knowledge#Dictators_and_freedom_of_speech) that others do too.

@@ -504,52 +504,85 @@ acceding to Power's demands (at the cost of deceiving their readers) and informi

Policy debates should not appear one-sided. Faced with this kind of dilemma, I can't say that defying Power is necessarily the right choice: if there really _were_ no other options between deceiving your readers with a bad faith performance, and incurring Power's wrath, and Power's wrath would be too terrible to bear, then maybe deceiving your readers with a bad faith performance is the right thing to do.

-But if you actually _cared_ about not deceiving your readers, you would want to be _really sure_ that those _really were_ the only two options.
You'd [spend five minutes by the clock looking for third alternatives](https://www.lesswrong.com/posts/erGipespbbzdG5zYb/the-third-alternative)—including, possibly, not issuing proclamations on your honor as leader of the so-called "rationalist" community on topics where you _explicitly intend to ignore counterarguments on grounds of their being politically unfavorable_. Yudkowsky rejects this alternative on the grounds that it allegedly implies "utter silence about everything Stalin has expressed an opinion on including '2 + 2 = 4' because if that logically counterfactually were wrong you would not be able to express an opposing opinion", but this seems like yet another instance of Yudkowsky motivatedly playing dumb: if he _wanted_ to, I'm sure Eliezer Yudkowsky could think of _some relevant differences_ between "2 + 2 = 4" (a trivial fact of arithmetic) and "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he"'" (a complex policy proposal whose flaws I have analyzed in detail above).
+But if you actually _cared_ about not deceiving your readers, you would want to be _really sure_ that those _really were_ the only two options. You'd [spend five minutes by the clock looking for third alternatives](https://www.lesswrong.com/posts/erGipespbbzdG5zYb/the-third-alternative)—including, possibly, not issuing proclamations on your honor as leader of the so-called "rationalist" community on topics where you _explicitly intend to ignore counterarguments on grounds of their being politically unfavorable_.
Yudkowsky rejects this alternative on the grounds that it allegedly implies "utter silence about everything Stalin has expressed an opinion on including '2 + 2 = 4' because if that logically counterfactually were wrong you would not be able to express an opposing opinion", but this seems like yet another instance of Yudkowsky motivatedly playing dumb: if he _wanted_ to, I'm sure Eliezer Yudkowsky could think of _some relevant differences_ between "2 + 2 = 4" (a trivial fact of arithmetic) and "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he"'" (a complex policy proposal whose numerous flaws I have analyzed in detail above).

-"I think people are better off at the end of that," Yudkowsky writes of the consequences of agreeing-with-Stalin-in-ways-that-exhibit-generally-rationalist-principles policies. But here I think we need a more conflict-theoretic analysis that looks at a more detailed level than "people." _Who_ is better off, specifically?
+"I think people are better off at the end of that," Yudkowsky writes of the consequences of posting-your-agreement-with-Stalin policies. But here I think we need a more conflict-theoretic analysis that looks at a more detailed level than "people." _Who_ is better off, specifically?

... and, I had been planning to save the Whole Dumb Story about my alienation from Yudkowsky's so-called "rationalists" for a _different_ multi-thousand-word blog post, because _this_ multi-thousand-word blog post was supposed to be narrowly scoped to _just_ exhaustively replying to Yudkowsky's February 2021 Facebook post about pronoun conventions.
But in order to explain the problems with "people do _know_ they're living in a half-Stalinist environment" and "people are better off at the end of that", I may need to _briefly_ recap some of the history leading to the present discussion, which explains why _I_ didn't know and _I'm_ not better off, with the understanding that it's only a summary and I might still need to tell the long version in a separate post, if it still feels necessary relative to everything else I need to get around to writing. (It's not actually a very interesting story; I just need to get it out of my system so I can stop grieving and move on with my life.)

I _never_ expected to end up arguing about something so _trivial_ as the minutiae of pronoun conventions (which no one would care about if historical contingencies of the evolution of the English language hadn't made them a Schelling point and typographical attack surface for things people do care about). The conversation only ended up here after a series of derailings. At the start, I was _trying_ to say something substantive about the psychology of straight men who wish they were women.

-You see, back in the 'aughts when Yudkowsky was writing his Sequences, he occasionally said some things about sex differences that I often found offensive at the time, but which ended up being hugely influential on me, especially in the context of my ideological affinity towards feminism and my secret lifelong-since-puberty erotic fantasy about being magically transformed into a woman. I wrote about this at length in a previous post, ["Sexual Dimorphism in Yudkowsky's Sequences, in Relation to my Gender Problems"](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/).
+You see, back in the 'aughts when Yudkowsky was writing his Sequences, he occasionally said some things about sex differences that I often found offensive at the time, but which ended up being hugely influential on me, especially in the context of my ideological denial of psychological sex differences and my secret lifelong-since-puberty erotic fantasy about being magically transformed into a woman. I wrote about this at length in a previous post, ["Sexual Dimorphism in Yudkowsky's Sequences, in Relation to my Gender Problems"](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/).

In particular, in ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) (and its precursor in a [2004 Extropians mailing list post](https://archive.is/En6qW)), Yudkowsky explains that "changing sex" is vastly easier said than done—

+[[[ TODO summarize the Whole Dumb Story (does there need to be a separate post? I'm still not sure)
+
[TODO: But that was all about me—I assumed "trans" was a different thing. My first clue that I might not be living in that world came from—Eliezer Yudkowsky, with the "at least 20% of the ones with penises are actually women" thing]

_After it's been pointed out_, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female counterpart" and "guy in 2016 Berkeley who identifies as a trans woman" are the _same guy_.

[So I ended up arguing with people about the two-type taxonomy, and I noticed that those discussions kept getting _derailed_ on some variation of "The word woman doesn't actually mean that". So I took the bait, and started arguing against that, and then Yudkowsky comes back to the subject with his "Hill of Validity in Defense of Meaning"—and I go on a philosophy of language crusade, and Yudkowsky eventually clarifies, and _then_ he comes back _again_ in Feb.
2022 with his "simplest and best protocol"]

-At this point, the nature of the game is very clear. Yudkowsky wants to mood-affiliate with being on the right side of history (as ascertained by the current year's progressive _Zeitgeist_), subject to the constraint of not saying anything he knows to be false. Meanwhile, I want to actually make sense of what's actually going on in the world as regards sex and gender, because _I need the correct answer to decide whether or not to cut my dick off_.
+]]]
+
+At this point, the nature of the game is very clear. Yudkowsky wants to make sure he's on peaceful terms with the progressive _Zeitgeist_, subject to the constraint of not saying anything he knows to be false. Meanwhile, I want to actually make sense of what's actually going on in the world as regards sex and gender, because _I need the correct answer to decide whether or not to cut my dick off_.

On "his turn", he comes up with some pompous proclamation that's very obviously optimized to make the "pro-trans" faction look smart and good and make the "anti-trans" faction look dumb and bad, "in ways that exhibit generally rationalist principles."

-On "my turn", I put in an _absurd_ amount of effort explaining in exhaustive, _exhaustive_ detail why Yudkowsky's pompous proclamation, while [not technically saying anything definitively "false"](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly), was _substantively misleading_ as contrasted to what any serious person would say if they were actually trying to make sense of the world without worrying what progressive activists would think of them.
+On "my turn", I put in an _absurd_ amount of effort explaining in exhaustive, _exhaustive_ detail why Yudkowsky's pompous proclamation, while [not technically making any unambiguously "false" atomic statements](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly), was _substantively misleading_ as contrasted to what any serious person would say if they were actually trying to make sense of the world without worrying what progressive activists would think of them.
+
+Note: being _on peaceful terms_ with the progressive _Zeitgeist_ isn't the same as kowtowing to it entirely. So, for example, Yudkowsky is _also_ on the record claiming that—
+
+> [Everything more complicated than](https://twitter.com/ESYudkowsky/status/1108277090577600512) protons tends to come in varieties. Hydrogen, for example, has isotopes. Gender dysphoria involves more than one proton and will probably have varieties.
+
+> [To be clear, I don't](https://twitter.com/ESYudkowsky/status/1108280619014905857) know much about gender dysphoria. There's an allegation that people are reluctant to speciate more than one kind of gender dysphoria. To the extent that's not a strawman, I would say only in a generic way that GD seems liable to have more than one species.
+
+There's a sense in which this could be read as a "concession" to my agenda. The two-type taxonomy of MtF _was_ the thing I was originally trying to talk about, before the philosophy-of-language derailing, and here Yudkowsky is backing up "my side" on that by publicly offering an argument that there's probably a more-than-one-type typology. So there's an intuition that I should be grateful for and satisfied with this concession—that it would be _greedy_ for me to keep criticizing him about the pronouns and language thing, given that he's throwing me a bone here.
+
+But that intuition is _wrong_.
The perception that there are "sides" to which one can make "concessions" is an _illusion_ of the human cognitive architecture; it's not something that any sane cognitive process would think in the course of constructing a map that reflects the territory.
+
+As I explained in ["On the Argumentative Form 'Super-Proton Things Tend to Come In Varieties'"](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/), this argument that "gender dysphoria involves more than one proton and will probably have varieties" is actually _wrong_. The _reason_ I believe in the two-type taxonomy of MtF is [the _empirical_ case that androphilic and non-exclusively-androphilic MtF transsexualism actually look like different things](https://sillyolme.wordpress.com/faq-on-the-science/), enough so for the two-type clustering to [pay the rent](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences) [for its complexity](https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length).
+
+The key lesson here that I wish Yudkowsky would understand is that when you invent rationality lessons in response to political pressure, you probably end up with _fake rationality lessons_ (because the reasoning that _generated_ the lesson differs from the reasoning that the lesson presents). I think this is bad, and that it's _equally_ bad even in cases like this where the political pressure is coming from _me_.
+
+If you "project" my work into the "subspace" of contemporary political conflicts, it usually _codes as_ favoring the "anti-trans" faction more often than not, but [that's really not what I'm trying to do](/2021/Sep/i-dont-do-policy/). From my perspective, it's just that the "pro-trans" faction happens to be very wrong about a lot of stuff that I care about.
But being wrong about a lot of stuff isn't the same thing as being wrong about everything; it's _important_ that I spontaneously invent and publish pieces like ["On the Argumentative Form"](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/) and ["Self-Identity Is a Schelling Point"](/2019/Oct/self-identity-is-a-schelling-point/) that "favor" the "pro-trans" faction. That's how you know (and how I know) that I'm not a _partisan hack_. In the context of AI alignment theory, Yudkowsky has written about a "nearest unblocked strategy" phenomenon: if you directly prevent an agent from accomplishing a goal via some plan that you find undesirable, the agent will search for ways to route around that restriction, and probably find some plan that you find similarly undesirable for similar reasons. Suppose you developed an AI to [maximize human happiness subject to the constraint of obeying explicit orders](https://arbital.greaterwrong.com/p/nearest_unblocked#exampleproducinghappiness). It might first try administering heroin to humans. When you order it not to, it might switch to administering cocaine. When you order it to not use any of a whole list of banned happiness-producing drugs, it might switch to researching new drugs, or just _pay_ humans to take heroin, _&c._ -It's the same thing with Yudkowsky's political-risk minimization subject to the constraint of not saying anything he knows to be false. First he tries ["I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228) (March 2016). When you point out that [that's not true](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), he switches to ["you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning"](https://archive.is/Iy8Lq) (November 2018). 
When you point out that [_that's_ not true either](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong), he switches to "It is Shenanigans to try to bake your stance on how clustered things are [...] _into the pronoun system of a language and interpretation convention that you insist everybody use_" (February 2021). When you point out that's not what's going on, he switches to ... I don't know, but he's a smart guy; in the unlikely event that he sees fit to respond to this post, I'm sure he'll be able to think of _something_—but at this point, I have no reason to care. Talking to Yudkowsky on topics where getting the right answer would involve acknowledging facts that would make you unpopular in Berkeley is a _waste of everyone's time_; trying to inform you just isn't [his bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line). +It's the same thing with Yudkowsky's political-risk minimization subject to the constraint of not saying anything he knows to be false. First he comes out with ["I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228) (March 2016). When you point out that [that's not true](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), then the next time he revisits the subject, he switches to ["you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning"](https://archive.is/Iy8Lq) (November 2018). When you point out that [_that's_ not true either](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong), he switches to "It is Shenanigans to try to bake your stance on how clustered things are [...] _into the pronoun system of a language and interpretation convention that you insist everybody use_" (February 2021). 
When you point out that's not what's going on, he switches to ... I don't know, but he's a smart guy; in the unlikely event that he sees fit to respond to this post, I'm sure he'll be able to think of _something_—but at this point, I have no reason to care. Talking to Yudkowsky on topics where getting the right answer would involve acknowledging facts that would make you unpopular in Berkeley is a _waste of everyone's time_; trying to inform you isn't [his bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line).

Accusing one's interlocutor of bad faith is frowned upon for a reason. We would prefer to live in a world where we have intellectually fruitful object-level discussions under the assumption of good faith, rather than risk our fora degenerating into an acrimonious brawl of accusations and name-calling, which is unpleasant and (more importantly) doesn't make any intellectual progress. I, too, would prefer to have a real object-level discussion under the assumption of good faith.

I tried the object-level good-faith argument thing _first_. I tried it for _years_. But at some point, I think I should be _allowed to notice_ the nearest-unblocked-strategy game, which is _very obviously happening_ if you look at the history of what was said. I think there's _some_ number of years and _some_ number of thousands of words of litigating the object-level after which there's nothing left for me to do but jump up a meta level and explain, to anyone capable of hearing it, why in this case I think I've accumulated enough evidence for the assumption of good faith to have been _empirically falsified_.

-Accusing one's interlocutor of bad faith is frowned upon for a reason.
We would prefer to live in a world where we have intellectually fruitful object-level discussions under the assumption of good faith, rather than risk our fora degenerating into an acrimonious brawl of accusations and name-calling, which is unpleasant and doesn't make any intellectual progress. I, too, would prefer to have a real object-level discussion under the assumption of good faith. That's why I _tried_ the object-level good-faith argument thing _first_. I tried it for _years_. But at some point, I think I should be allowed to notice the nearest-unblocked-strategy game. I think there's _some_ number of years and _some_ number of thousands of words of litigating the object-level after which there's nothing left for me to do but jump up a meta level and explain, to anyone capable of hearing it, why in this case I think I've accumulated enough evidence for the assumption of good faith to have been _empirically falsified_.
+(Of course, I realize that if we're crossing the Rubicon of abandoning the norm of assuming good faith, it needs to be abandoned symmetrically. I _think_ I'm doing a _pretty good_ job of adhering to standards of intellectual conduct and being transparent about my motivations, but I'm definitely not perfect, and, unlike Yudkowsky, I'm not so absurdly miscalibratedly arrogant as to claim "confidence in my own ability to independently invent everything important" (!) about my topics of interest. If Yudkowsky or anyone else thinks they _have a case_ based on my behavior that _I'm_ being culpably intellectually dishonest, they of course have my blessing and encouragement to post it for the audience to evaluate.)

-What makes all this especially galling is the fact that _all of my heretical opinions are literally just his opinions from the 'aughts!_ My whole thing about how changing sex isn't possible with existing technology because the category encompasses so many high-dimensional details? Not original to me!
Again, this was _in the Sequences_ as ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). My thing about how you can't define concepts any way you want, because there are mathematical laws governing which category boundaries compress your anticipated experiences? [_We had a whole Sequence about this._](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) Seriously, you think I'm _smart enough_ to come up with all of this independently? I'm not! I ripped it all off from Yudkowsky back in the 'aughts when he still gave a shit about telling the truth in this domain. Does ... does he expect us not to _notice_?

+What makes all of this especially galling is the fact that _all of my heretical opinions are literally just Yudkowsky's opinions from the 'aughts!_ My whole thing about how changing sex isn't possible with existing technology because the category encompasses so many high-dimensional details? Not original to me! I [filled in a few trivial technical details](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard), but again, this was _in the Sequences_ as ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). My thing about how you can't define concepts any way you want, because there are mathematical laws governing which category boundaries compress your anticipated experiences? Not original to me!
I [filled in](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [a few technical details](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), but [_we had a whole Sequence about this._](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)

-[If someone wants to accuse me of bad faith, fine!—I think I'm doing better: I can point to places where I argue "the other side", because I know that sides are fake]

Seriously, you think I'm _smart enough_ to come up with all of this independently? I'm not! I ripped it all off from Yudkowsky back in the 'aughts _when he still gave a shit about telling the truth_ in this domain. (More precisely, when he thought he could afford to give a shit, before the political environment and the growing stature of his so-called "rationalist" movement changed his incentives.)

-[I can win concessions, like "On the Argumentative Form", but I don't want concessions; I want to _actually get the goddamned right answer_ (maybe move this earlier and tie it into the centrism pose, which is not the same as the map that reflects the territory?)]

Does ... does he expect us not to _notice_? Or does he think that "everybody knows"?

But I don't think that everybody knows. So I'm telling you.

-------

+Why does this matter?

[Why does this matter? It would be dishonest for me to claim that this is _directly_ relevant to xrisk, because that's not my real bottom line]

a rationality community that can't think about _practical_ issues that affect our day to day lives, but can get existential risk stuff right, is like asking for self-driving car software that can drive red cars but not blue cars

It's a _problem_ if public intellectuals in the current year need to pretend to be dumber than seven-year-olds in 2016

+> _Perhaps_, replied the cold logic.
_If the world were at stake._ +> +> _Perhaps_, echoed the other part of himself, _but that is not what was actually happening._ +https://www.yudkowsky.net/other/fiction/the-sword-of-good + https://www.readthesequences.com/ > Because it is all, in the end, one thing. I talked about big important distant problems and neglected immediate life, but the laws governing them aren't actually different. diff --git a/notes/notes.txt b/notes/notes.txt index aef2a60..ecc2fcd 100644 --- a/notes/notes.txt +++ b/notes/notes.txt @@ -2966,3 +2966,5 @@ https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0261438 https://ovarit.com/o/GenderCritical/61883/wear-what-you-want-why-i-ve-changed-my-mind > They only wear our clothing because they haven't found a way to wear our skin yet. + +https://mrwinstonmarshall.medium.com/why-im-leaving-mumford-sons-e6e731bbc255