From: M. Taylor Saotome-Westlake Date: Fri, 9 Sep 2022 14:48:19 +0000 (-0700) Subject: check in X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=28197496a8f263fa98921e3698ea78a2aaa5ebed;p=Ultimately_Untrue_Thought.git check in --- diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md index 4ab05b6..8b4b298 100644 --- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md @@ -108,7 +108,7 @@ However, I don't think this "narrow" reading is the most natural one. Yudkowsky

> **To argue against an idea honestly, you should argue against the best arguments of the strongest advocates**. Arguing against weaker advocates proves _nothing_, because even the strongest idea will attract weak advocates.

-Relatedly, Scott Alexander had written about how ["weak men are superweapons"](https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/): speakers often selectively draw attention to the worst arguments in favor of a position, in an attempt to socially discredit people who have better arguments for the position (which the speaker ignores). In the same way, by _just_ slapping down a weak man from the "anti-trans" political coalition without saying anything else in a similarly prominent location, Yudkowsky was liable to mislead his readers (who trusted him to argue against ideas honestly) into thinking that there were no better arguments from the "anti-trans" side.
+Relatedly, Scott Alexander had written about how ["weak men are superweapons"](https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/): speakers often selectively draw attention to the worst arguments in favor of a position, in an attempt to socially discredit people who have better arguments for the position (which the speaker ignores). In the same way, by _just_ slapping down a weak man from the "anti-trans" political coalition without saying anything else in a similarly prominent location, Yudkowsky was liable to mislead his faithful students (who trusted him to argue against ideas honestly) into thinking that there were no better arguments from the "anti-trans" side.

To be sure, it imposes a cost on speakers to not be able to Tweet about one specific annoying fallacy and then move on with their lives without the need for [endless disclaimers](http://www.overcomingbias.com/2008/06/against-disclai.html) about related but stronger arguments that they're _not_ addressing. But the fact that [Yudkowsky disclaimed that](https://twitter.com/ESYudkowsky/status/1067185907843756032) he wasn't taking a stand for or against Twitter's anti-misgendering policy demonstrates that he _didn't_ have an aversion to spending a few extra words to prevent the most common misunderstandings.

@@ -126,7 +126,7 @@ Given the empirical reality of the different trait distributions, "Who are the b

In light of these empirical observations, Yudkowsky's suggestion that an ignorant commitment to an "Aristotelian binary" is the main reason someone might care about the integrity of women's sports, is revealed as an absurd strawman. This just isn't something any scientifically-literate person would write if they had actually thought about the issue _at all_, as contrasted to having _first_ decided (consciously or not) to bolster one's reputation among progressives by dunking on transphobes on Twitter, and wielding one's philosophy knowledge in the service of that political goal. 
The relevant empirical facts are _not subtle_, even if most people don't have the fancy vocabulary to talk about them in terms of "multivariate trait distributions." -I'm picking on the "sports segregated around an Aristotelian binary" remark because sports is a case where the relevant effect sizes are _so_ large as to make the point [hard for all but the most ardent gender-identity partisans to deny](/2017/Jun/questions-such-as-wtf-is-wrong-with-you-people/). (For example, what the [Cohen's _d_](https://en.wikipedia.org/wiki/Effect_size#Cohen's_d) ≈ [2.6 effect size difference in muscle mass](/papers/janssen_et_al-skeletal_muscle_mass_and_distribution.pdf) means is that a woman as strong as the _average_ man is _at the 99.5th percentile_ for women.) But the point is very general: biological sex actually exists and is sometimes decision-relevant. People who want to be able to talk about sex and make policy decisions on the basis of sex are not making an ontology error, because the ontology in which sex "actually" "exists" continues to make very good predictions in our current tech regime. +I'm picking on the "sports segregated around an Aristotelian binary" remark because sports is a case where the relevant effect sizes are _so_ large as to make the point [hard for all but the most ardent gender-identity partisans to deny](/2017/Jun/questions-such-as-wtf-is-wrong-with-you-people/). (For example, what the [Cohen's _d_](https://en.wikipedia.org/wiki/Effect_size#Cohen's_d) ≈ [2.6 effect size difference in muscle mass](/papers/janssen_et_al-skeletal_muscle_mass_and_distribution.pdf) means is that a woman as strong as the _average_ man is _at the 99.5th percentile_ for women.) But the point is very general: biological sex actually exists and is sometimes decision-relevant. People who want to be able to talk about sex and make policy decisions on the basis of sex are not making an ontology error, because the ontology in which sex "actually" "exists" continues to make very good predictions in our current tech regime. It would be an absurdly [isolated demand for rigor](http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) to expect someone to pass a graduate exam about the cognitive function of categorization before they can talk about sex. Yudkowsky's claim to merely have been standing up for the distinction between facts and policy questions doesn't seem credible. It is, of course, true that pronoun and bathroom conventions are policy decisions rather than a matter of fact, but it's _bizarre_ to condescendingly point this out _as if it were the crux of contemporary trans-rights debates_. Conservatives and gender-critical feminists _know_ that trans-rights advocates aren't falsely claiming that trans women have XX chromosomes. If you _just_ wanted to point out that the organization of sports leagues is a policy question rather than a fact (as if anyone had doubted this), why would you throw in the "Aristotelian binary" strawman and belittle the matter as "humorous"? There are a lot of issues that I don't _personally_ care much about, but I don't see anything funny about the fact that other people _do_ care. @@ -404,7 +404,7 @@ As such, we _shouldn't_ think that there are probably multiple kinds of gender d Had Yudkowsky been thinking that maybe if he Tweeted something favorable to my agenda, then me and the rest of Michael's gang would be satisfied and leave him alone? -But ... 
if there's some _other_ reason you suspect there might be multiple species of dysphoria, but you _tell_ people your suspicion is because dysphoria has more than one proton, you're still misinforming people for political reasons, which was the _general_ problem we were trying to alert Yudkowsky to. (Someone who trusted you as a source of wisdom about rationality might try to apply your _fake_ "everything more complicated than protons tends to come in varieties" rationality lesson in some other context, and get the wrong answer.) Inventing fake rationality lessons in response to political pressure is _not okay_, and it still wasn't okay in this case just because in this case the political pressure happened to be coming from _me_.
+But ... if there's some _other_ reason you suspect there might be multiple species of dysphoria, but you _tell_ people your suspicion is because dysphoria has more than one proton, you're still misinforming people for political reasons, which was the _general_ problem we were trying to alert Yudkowsky to. (Someone who trusted you as a source of wisdom about rationality might try to apply your _fake_ "everything more complicated than protons tends to come in varieties" rationality lesson in some other context, and get the wrong answer.) Inventing fake rationality lessons in response to political pressure is _not okay_, and the fact that in this case the political pressure happened to be coming from _me_, didn't make it okay.

I asked the posse if this analysis was worth sending to Yudkowsky. Michael said it wasn't worth the digression. He asked if I was comfortable generalizing from Scott's behavior, and what others had said about fear of speaking openly, to assuming that something similar was going on with Eliezer? If so, then now that we had common knowledge, we needed to confront the actual crisis, which was that dread was tearing apart old friendships and causing fanatics to betray everything that they ever stood for while its existence was still being denied.

@@ -422,7 +422,7 @@ I [didn't want to bring it up at the time because](https://twitter.com/zackmdavi

As for the parable about orcs, I thought it was significant that Scott chose to tell the story from the standpoint of non-orcs deciding what [verbal behaviors](https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password) to perform while orcs are around, rather than the standpoint of the _orcs themselves_. For one thing, how do you _know_ that serving evil-Melkior is a life of constant torture? Is it at all possible, in the bowels of Christ, that someone has given you _misleading information_ about that? Moreover, you _can't_ just give an orc a clever misinterpretation of an oath and have them believe it. First you have to [cripple their _general_ ability](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) to correctly interpret oaths, for the same reason that you can't get someone to believe that 2+2=5 without crippling their _general_ ability to do arithmetic. We're not talking about a little "white lie" that the listener will never get to see falsified (like telling someone their dead dog is in heaven); the orcs _already know_ the text of the oath, and you have to break their ability to _understand_ it. Are you willing to permanently damage an orc's ability to reason, in order to save them pain? For some sufficiently large amount of pain, surely. 
But this isn't a choice to make lightly—and the choices people make to satisfy their own consciences, don't always line up with the volition of their alleged beneficiaries. We think we can lie to save others from pain, without ourselves _wanting to be lied to_. But behind the veil of ignorance, it's the same choice! -I _also_ had more to say about philosophy of categories: I thought I could be more rigorous about the difference between "caring about predicting different variables" and "caring about consequences", in a way that Eliezer would _have_ to understand even if Scott didn't. (Scott had claimed that he could use gerrymandered categories and still be just as good at making predictions—but that's not true if we're talking about the _internal_ use of categories as a [cognitive algorithm](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms), rather than mere verbal behavior: it's always easy to _say_ "_X_ is a _Y_" for arbitrary _X_ and _Y_ if the stakes demand it.) +I _also_ had more to say about philosophy of categories: I thought I could be more rigorous about the difference between "caring about predicting different variables" and "caring about consequences", in a way that Eliezer would _have_ to understand even if Scott didn't. (Scott had claimed that he could use gerrymandered categories and still be just as good at making predictions—but that's just not true if we're talking about the _internal_ use of categories as a [cognitive algorithm](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms), rather than mere verbal behavior: it's always easy to _say_ "_X_ is a _Y_" for arbitrary _X_ and _Y_ if the stakes demand it, but if you're _actually_ using that concept of _Y_ internally, that does have effects on your world-model.) But after consultation with the posse, I concluded that further email prosecution was not useful at this time; the philosophy argument would work better as a public _Less Wrong_ post. So my revised Category War to-do list was: @@ -644,7 +644,7 @@ Again, as discussed in "Challenges to Yudkowsky's Pronoun Reform Proposal", a co It's quite another thing altogether to _simultaneously_ try to prevent a speaker from using _tú_ to indicate disrespect towards a social superior (on the stated rationale that the _tú_/_usted_ distinction is dumb and shouldn't exist), while _also_ refusing to entertain or address the speaker's arguments explaining _why_ they think their interlocutor is unworthy of the deference that would be implied by _usted_ (because such arguments are "unspeakable" for political reasons). That's just psychologically abusive. -If Yudkowsky _actually_ possessed (and felt motivated to use) the "ability to independently invent everything important that would be on the other side of the filter and check it [himself] before speaking", it would be _obvious_ to him that "Gendered Pronouns For Everyone and Asking To Leave The System Is Lying" isn't the hill anyone would care about dying on if it weren't a Schelling point. A lot of TERF-adjacent folk would be _overjoyed_ to concede the (boring, insubstantial) matter of pronouns as a trivial courtesy if it meant getting to _actually_ address their real concerns of "Biological Sex Actually Exists", and ["Biological Sex Cannot Be Changed With Existing or Foreseeable Technology"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) and "Biological Sex Is Sometimes More Relevant Than Gender Identity." 
The reason so many of them are inclined to stand their ground and not even offer the trivial courtesy is because they suspect, correctly, that the matter of pronouns is being used as a rhetorical wedge to try to prevent people from talking or thinking about sex.
+If Yudkowsky _actually_ possessed (and felt motivated to use) the "ability to independently invent everything important that would be on the other side of the filter and check it [himself] before speaking", it would be _obvious_ to him that "Gendered Pronouns For Everyone and Asking To Leave The System Is Lying" isn't the hill anyone would care about dying on if it weren't a Schelling point. A lot of TERF-adjacent folk would be _overjoyed_ to concede the (boring, insubstantial) matter of pronouns as a trivial courtesy if it meant getting to _actually_ address their real concerns of "Biological Sex Actually Exists", and ["Biological Sex Cannot Be Changed With Existing or Foreseeable Technology"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) and "Biological Sex Is Sometimes More Relevant Than Subjective Gender Identity." The reason so many of them are inclined to stand their ground and not even offer the trivial courtesy is that they suspect, correctly, that the matter of pronouns is being used as a rhetorical wedge to try to prevent people from talking or thinking about sex.

Having analyzed the _ways_ in which Yudkowsky is playing dumb here, what's still not entirely clear is _why_. Presumably he cares about maintaining his credibility as an insightful and fair-minded thinker. Why tarnish that by putting on this haughty performance?

@@ -769,9 +769,9 @@ This is the part where Yudkowsky or his flunkies accuse me of being uncharitable

But the substance of my accusations is not about Yudkowsky's _conscious subjective narrative_. I don't have a lot of uncertainty about Yudkowsky's _theory of himself_, because he told us that, very clearly: "it is sometimes personally prudent and not community-harmful to post your agreement with Stalin about things you actually agree with Stalin about, in ways that exhibit generally rationalist principles, especially because people do _know_ they're living in a half-Stalinist environment." I don't doubt that that's [how the algorithm feels from the inside](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside).

-But my complaint is about the work the algorithm is _doing_ in Stalin's service, not about how it _feels_; I'm talking about a pattern of _publicly visible behavior_ stretching over years. (Thus, "take actions" in favor of/against, rather than "be"; "exert optimization pressure in the direction of", rather than "try".) I agree that everyone has a story in which they don't look terrible, and that people mostly believe their own stories, but _it does not therefore follow_ that no one ever looks terrible.
+But my complaint is about the work the algorithm is _doing_ in Stalin's service, not about how it _feels_; I'm talking about a pattern of _publicly visible behavior_ stretching over years. (Thus, "take actions" in favor of/against, rather than "be"; "exert optimization pressure in the direction of", rather than "try".) I agree that everyone has a story in which they don't look terrible, and that people mostly believe their own stories, but _it does not therefore follow_ that no one ever does anything terrible.

-I agree that you won't have much luck yelling at the Other about how they must really be doing `terrible_thing`. 
(People get very invested in their own stories.) But if you have the _receipts_ of the Other repeatedly doing `terrible_thing` in public over a period of years, maybe yelling about it to _everyone else_ might help _them_ stop getting defrauded by the Other's bogus story. +I agree that you won't have much luck yelling at the Other about how they must really be doing `terrible_thing`. (People get very invested in their own stories.) But if you have the _receipts_ of the Other repeatedly doing `terrible_thing` in public over a period of years, maybe yelling about it to _everyone else_ might help _them_ stop getting suckered by the Other's fraudulent story. Let's recap. @@ -792,44 +792,51 @@ What does the "tossed into a bucket" metaphor refer to, though? I can think of m If we're talking about overt _gender role enforcement attempts_—things like, "You're a girl, therefore you need to learn to keep house for your future husband", or "You're a man, therefore you need to toughen up"—then indeed, I strongly support people who don't want to be tossed into that kind of bucket. -(There are [historical reasons for the buckets to exist](/2020/Jan/book-review-the-origins-of-unfairness/), but I'm betting on modern Society being rich enough and smart enough to either forgo the buckets, or at least let people opt-out of the default buckets, without causing too much trouble.) +(There are [historical reasons for the buckets to exist](/2020/Jan/book-review-the-origins-of-unfairness/), but I'm eager to bet on modern Society being rich enough and smart enough to either forgo the buckets, or at least let people opt-out of the default buckets, without causing too much trouble.) -But importantly, my support for people not wanting to be tossed into gender role buckets is predicated on their reasons for not wanting that _having genuine merit_—things like "The fact that I'm a juvenile female human doesn't mean I'll have a husband; I'm actually planning to become a nun", or "The sex difference in Big Five Neuroticism is only _d_ ≈ 0.5; your expectation that I toughen up is not reasonable given the information you have about me in particular, even if most adult human males are tougher than me". I _don't_ think people have a _general_ right to prevent others from using sex categories to make inferences or decisions about them, _because that would be crazy_. If a doctor were to tell me, "As a male, you're risk for prostate cancer," it would be _bonkers_ for me to reply that I don't like being tossed into a Male Bucket like that. +But importantly, my support for people not wanting to be tossed into gender role buckets is predicated on their reasons for not wanting that _having genuine merit_—things like "The fact that I'm a juvenile female human doesn't mean I'll have a husband; I'm actually planning to become a nun", or "The sex difference in Big Five Neuroticism is only _d_ ≈ 0.5; your expectation that I toughen up is not reasonable given the information you have about me in particular, even if most adult human males are tougher than me". I _don't_ think people have a _general_ right to prevent others from using sex categories to make inferences or decisions about them, _because that would be crazy_. If a doctor were to tell me, "As a male, you're at risk for prostate cancer," it would be _bonkers_ for me to reply that I don't like being tossed into a Male Bucket like that. 
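(To make the arithmetic behind these effect sizes explicit: the following is a back-of-the-envelope sketch, assuming equal-variance normal distributions for both sexes; the `scipy` usage is my own illustration, not something taken from the cited papers.)

```python
# Under an equal-variance normal model, a member of the lower-scoring sex
# who matches the higher-scoring sex's average sits d standard deviations
# above their own sex's mean, i.e., at the Phi(d) quantile of their own sex.
from scipy.stats import norm

for trait, d in [("muscle mass", 2.6), ("Big Five Neuroticism", 0.5)]:
    print(f"{trait}: d = {d} -> {norm.cdf(d):.1%} within-sex percentile")

# muscle mass: d = 2.6 -> 99.5% within-sex percentile
# Big Five Neuroticism: d = 0.5 -> 69.1% within-sex percentile
```

On those assumptions, a woman as muscular as the average man is roughly a 1-in-200 outlier among women, whereas a man as neurotic as the average woman is only around the 69th percentile for men: the first gap is decision-relevant almost everywhere, while the second licenses only weak expectations about any particular individual.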
While piously appealing to the feelings of people describing reasons they do not want to be tossed into a Male Bucket or a Female Bucket, Yudkowsky does not seem to be distinguishing between reasons that have merit, and reasons that do not have merit.

The post continues (bolding mine):

> In a wide variety of cases, sure, ["he" and "she"] can clearly communicate the unambiguous sex and gender of something that has an unambiguous sex and gender, much as a different language might have pronouns that sometimes clearly communicated hair color to the extent that hair color often fell into unambiguous clusters.
>
> But if somebody's hair color is halfway between two central points? If their civilization has developed stereotypes about hair color they're not comfortable with, such that they feel that the pronoun corresponding to their outward hair color is something they're not comfortable with because they don't fit key aspects of the rest of the stereotype and they feel strongly about that? If they have dyed their hair because of that, or **plan to get hair surgery, or would get hair surgery if it were safer but for now are afraid to do so?** Then it's stupid to try to force people to take complicated positions about those social topics _before they are allowed to utter grammatical sentences_.
+> But if somebody's hair color is halfway between two central points? If their civilization has developed stereotypes about hair color they're not comfortable with, such that they feel that the pronoun corresponding to their outward hair color is something they're not comfortable with because they don't fit key aspects of the rest of the stereotype and they feel strongly about that? If they have dyed their hair because of that, or **plan to get hair surgery, or would get hair surgery if it were safer but for now are afraid to do so?** Then it's stupid to try to force people to take complicated positions about those social topics _before they are allowed to utter grammatical sentences_.

-So, I agree that a language convention in which pronouns map to hair color doesn't seem great, and that the people in this world should probably coordinate on switching to a better convention.
+So, I agree that a language convention in which pronouns map to hair color doesn't seem great, and that the people in this world should probably coordinate on switching to a better convention, if they can figure out how.

-But _given_ the existence of a convention in which pronouns refer to hair color, a demand to be refered to as having a hair color _that one does not in fact have_ seems pretty outrageous to me!
+But _given_ the existence of a convention in which pronouns refer to hair color, a demand to be referred to as having a hair color _that one does not in fact have_ seems pretty outrageous to me! 

-It makes sense to object to the convention forcing a binary choice in the "halfway between two central points" case. That's a case of _genuine_ nuance brought on by a _genuine_ complication and challenge to a system that assumes discrete hair colors.
+It makes sense to object to the convention forcing a binary choice in the "halfway between two central points" case. That's an example of _genuine_ nuance brought on by a _genuine_ complication and challenge to a system that falsely assumes discrete hair colors.

-But ... "plan to get hair surgery"? "Would get hair surgery if it were safer but for now are afraid to do so"? 
In what sense do these cases present a challenge to the discrete system and therefore call for complication and nuance? The decision to get hair surgery does not _propagate backwards in time_. The decision to get hair surgery cannot be _imported from a counterfactual universe in which it is safer_. People who, today, do not have the hair color that they would prefer, are, today, going to have to deal with that fact _as a fact_.
+But ... "plan to get hair surgery"? "Would get hair surgery if it were safer but for now are afraid to do so"? In what sense do these cases present a challenge to the discrete system and therefore call for complication and nuance? There's nothing ambiguous about these cases: if you haven't, in fact, changed your hair color, then your hair is, in fact, its original color. The decision to get hair surgery does not _propagate backwards in time_. The decision to get hair surgery cannot be _imported from a counterfactual universe in which it is safer_. People who, today, do not have the hair color that they would prefer, are, today, going to have to deal with that fact _as a fact_.

Is the idea that we want to use the same pronouns for the same person over time, so that if we know someone is planning to get hair surgery—that is, they have an appointment with the hair surgeon at this-and-such date—we should go ahead and switch their pronouns in advance? Okay, I can buy that. But extending that to the "would get hair surgery if it were safer" case is _absurd_. No one treats _conditional plans assuming speculative future advances in medical technology_ the same as actual plans. I don't think this case calls for any complicated nuanced position, and I don't see why Eliezer Yudkowsky would suggest that it would, unless—

-Unless, at some level, Eliezer Yudkowsky doesn't expect his followers to deal with facts?
-
+Unless the real motive for insisting on complication and nuance in language is to obfuscate, rather than to reflect genuine complication and nuance in the territory.
+Unless, at some level, Eliezer Yudkowsky doesn't expect his followers to deal with facts?

[TODO: student dysphoria—I hated being put in the box as student /2022/Apr/student-dysphoria-and-a-previous-lifes-war/ ]

[
+she thought "I'm trans" was an explanation, but then found a better theory that explains the same data—that's what "rationalism" should be—including "That wasn't entirely true!!!!"
+https://somenuanceplease.substack.com/p/actually-i-was-just-crazy-the-whole
]

[TODO section Feelings vs. Truth

This is a conflict between Feelings and Truth, between Politics and Truth.

Scott Alexander chose Feelings, but I can't really hold that against him, because Scott is very explicit about only acting in the capacity of some guy with a blog. You can tell that he never wanted to be a religious leader; it just happened to him by accident because he writes faster than everyone else. I like Scott. Scott is great. I feel bad that such a large fraction of my interactions with him over the years have taken such an adversarial tone.

-Eliezer Yudkowsky ... did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that can be unambigously proven false.
+Eliezer Yudkowsky ... did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that he knows to be unambiguously false. 
+
+
+
Eliezer Yudkowsky is _absolutely_ trying to be a religious leader.

@@ -855,6 +862,9 @@ But if he's _then_ going to take a shit on c3 of my chessboard (["the simplest a

The turd on c3 is a pretty big likelihood ratio!

+
+As the traditional rationalist saying goes: once is happenstance. Twice is coincidence. _Three times is enemy optimization_.
+

]

@@ -894,7 +904,7 @@ It's the same thing with Yudkowsky's political-risk minimization subject to the

Accusing one's interlocutor of bad faith is frowned upon for a reason. We would prefer to live in a world where we have intellectually fruitful object-level discussions under the assumption of good faith, rather than risk our fora degenerating into an acrimonious brawl of accusations and name-calling, which is unpleasant and (more importantly) doesn't make any intellectual progress. I, too, would prefer to have a real object-level discussion under the assumption of good faith.

-Accordingly, I tried the object-level good-faith argument thing _first_. I tried it for _years_. But at some point, I think I should be _allowed to notice_ the nearest-unblocked-strategy game which is _very obviously happening_ if you look at the history of what was said. I think there's _some_ number of years and _some_ number of thousands of words of litigating the object-level _and_ the meta level after which there's nothing left for me to do but jump up to the meta-meta level and explain, to anyone capable of hearing it, why in this case I think I've accumulated enough evidence in this case for the assumption of good faith to have been _empirically falsified_.
+Accordingly, I tried the object-level good-faith argument thing _first_. I tried it for _years_. But at some point, I think I should be _allowed to notice_ the nearest-unblocked-strategy game which is _very obviously happening_ if you look at the history of what was said. I think there's _some_ number of years and _some_ number of thousands of words of litigating the object-level _and_ the meta level after which there's nothing left for me to do but jump up to the meta-meta level and explain, to anyone capable of hearing it, why in this case I think I've accumulated enough evidence for the assumption of good faith to have been _empirically falsified_.

(Obviously, if we're crossing the Rubicon of abandoning the norm of assuming good faith, it needs to be abandoned symmetrically. I _think_ I'm doing a _pretty good_ job of adhering to standards of intellectual conduct and being transparent about my motivations, but I'm definitely not perfect, and, unlike Yudkowsky, I'm not so absurdly miscalibratedly arrogant as to claim "confidence in my own ability to independently invent everything important" (!) about my topics of interest. If Yudkowsky or anyone else thinks they _have a case_ based on my behavior that _I'm_ being culpably intellectually dishonest, they of course have my blessing and encouragement to post it for the audience to evaluate.) 
@@ -927,7 +937,7 @@ https://twitter.com/davidxu90/status/1435106339550740482

David Xu writes (with Yudkowsky ["endors[ing] everything [Xu] just said"](https://twitter.com/ESYudkowsky/status/1436025983522381827)):

-> I'm curious what might count for you as a crux about this; candidate cruxes I could imagine include: whether some categories facilitate inferences that _do_, on the whole, cause more harm than benefit, and if so, whether it is "rational" to rule that such inferences should be avoided when possible, and if so, whether the best way to disallow a large set of potential inferences is the proscribe the use of the categories that facilitate them—and if _not_, whether proscribing the use of a category in _public communication_ constitutes "proscribing" it more generally, in a way that interferes with one's ability to perform "rational" thinking in the privacy of one's own mind.
+> I'm curious what might count for you as a crux about this; candidate cruxes I could imagine include: whether some categories facilitate inferences that _do_, on the whole, cause more harm than benefit, and if so, whether it is "rational" to rule that such inferences should be avoided when possible, and if so, whether the best way to disallow a large set of potential inferences is [to] proscribe the use of the categories that facilitate them—and if _not_, whether proscribing the use of a category in _public communication_ constitutes "proscribing" it more generally, in a way that interferes with one's ability to perform "rational" thinking in the privacy of one's own mind.
>
> That's four possible (serial) cruxes I listed, one corresponding to each "whether".

Xu continues:

>
> This is the sense in which I suspect you are coming across as failing to properly Other-model.

-At this point, I'm inclined to say it's not a "disagreement" at all. It's a _conflict_. I think what's actually at issue is that, at least in this domain, I want people to tell the truth, and the Caliphate wants people to not tell the truth. This isn't a disagreement about rationality, because telling the truth _isn't_ rational _if you don't want people to know things_.
+After everything I've been through, I'm inclined to think it's not a "disagreement" at all.

-At this point, I imagine defenders of the Caliphate are shaking their heads in disappointment at how I'm doubling down on refusing to Other-model. But—_am_ I? Isn't this just a re-statement of Xu's first proposed crux, except reframed as a "values difference" rather than a "disagreement"?
+It's a _conflict_. I think what's actually at issue is that, at least in this domain, I want people to tell the truth, and the Caliphate wants people to not tell the truth. This isn't a disagreement about rationality, because telling the truth _isn't_ rational _if you don't want people to know things_.

-Is the problem that my use of the phrase "tell the truth", which has positive valence in our culture, functions to sneak in normative connotations favoring my side?
+At this point, I imagine defenders of the Caliphate are shaking their heads in disappointment at how I'm doubling down on refusing to Other-model. But—_am_ I? Isn't this just a re-statement of Xu's first proposed crux, except reframed as a "values difference" rather than a "disagreement"? Is the problem that my use of the phrase "tell the truth" (which has positive valence in our culture) functions to sneak in normative connotations favoring "my side"?

Fine. Objection sustained. I'm happy to use Xu's language. 
I think what's actually at issue is that, at least in this domain, I want to facilitate people making inferences (full stop), and the Caliphate wants to _not_ facilitate people making inferences that, on the whole, cause more harm than benefit. This isn't a disagreement about rationality, because facilitating inferences _isn't_ rational _if you don't want people to make inferences_.

diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md index adca918..9e2ee2b 100644 --- a/notes/a-hill-of-validity-sections.md +++ b/notes/a-hill-of-validity-sections.md @@ -1,10 +1,10 @@ noncontiguous on deck—
-_ being put in a box (hair)
_ being put in a box (school)
_ "duly appreciated"
-_ let's recap
+_ "Actually, I was just crazy the whole time"
_ if he's reading this
_ tie off reply to Xu
+_ let's recap
_ help from Jessica for "Unnatural Categories"
_ bridge to "Challenges"
_ Christmas party 2019 and state of Church leadership

@@ -947,19 +947,15 @@ https://unstableontology.com/2021/04/12/on-commitments-to-anti-normativity/

Sucking up to the Blue Egregore would make sense if you _knew_ that was the critical resource

https://www.lesswrong.com/posts/mmHctwkKjpvaQdC3c/what-should-you-change-in-response-to-an-emergency-and-ai

-I don't think I can use Ben's "Eliza the spambot therapist" analogy because it relies on the "running out the clock" behavior, and I'm Glomarizing
+I don't think I can use Ben's "Eliza the spambot therapist" analogy because it relies on the "running out the clock" behavior, and I'm Glomarizing—actually I think it's OK

This should be common sense, though

https://forum.effectivealtruism.org/posts/3szWd8HwWccJb9z5L/the-ea-community-might-be-neglecting-the-value-of

-she thought "I'm trans" was an explanation, but then found a better theory that explains the same data—that's what "rationalism" should be—including "That wasn't entirely true!!!!"
-https://somenuanceplease.substack.com/p/actually-i-was-just-crazy-the-whole

sorrow at putting on a bad performance with respect to the discourse norms of the people I'm trying to rescue/convert; I think my hostile shorthand (saying that censorship costs nothing implies some combination of "speech isn't useful" and "other people aren't real" is pointing at real patterns, but people who aren't already on my side are not going to be sympathetic)

-
-
https://twitter.com/ESYudkowsky/status/1067300728572600320
> You could argue that a wise policy is that we should all be called by terms and pronouns we don't like, now and then, and that to do otherwise is coddling. You could argue that Twitter shouldn't try to enforce courtesy. You could accuse, that's not what Twitter is really doing.

@@ -1085,3 +1081,9 @@ I have a _selfish_ interest in people making and sharing accurate probabilistic

[TODO: in the context of elite Anglosphere culture in 2016–2022; it should be clear that defenders of reason need to be able to push back and assert that biological sex is real; other science communicators like [Dawkins can see it.](https://www.theguardian.com/books/2021/apr/20/richard-dawkins-loses-humanist-of-the-year-trans-comments) [Jerry Coyne can see it.](https://whyevolutionistrue.com/2018/12/11/once-again-why-sex-is-binary/)]
+
+when I was near death from that salivary stone, I mumbled something to my father about "our people"
+
+If we're going to die either way, wouldn't it be _less dignified_ to die with Stalin's dick in his mouth?
+
+[Is this the hill he wants to die on? 
The pronouns post mentions "while you can still get away with disclaimers", referring to sanction from the outside world, as if he won't receive any sanction from his people, because he owns us. That's wrong. Yudkowsky as a person doesn't own me; the Sequences-algorithm does]

diff --git a/notes/notes.txt b/notes/notes.txt index 9699850..c218cc3 100644 --- a/notes/notes.txt +++ b/notes/notes.txt @@ -3211,3 +3211,5 @@ https://robkhenderson.substack.com/p/let-a-hundred-flowers-bloom

https://www.menshealth.com/sex-women/a41018711/sexplain-it-vagina-fetish-sex-bisexual/

(stability_unsafe-0gZ54e7b) zmd@ReflectiveCoherence:~/Code/Misc/stability_unsafe$ python stability_sdk/src/stability_sdk/client.py "25-year-old Nana Visitor in the shower in 1996, full body shot, 4K digital photo" -n 4
+
+https://afterellen.com/tasmania-rules-against-women-only-spaces/

diff --git a/notes/post_ideas.txt b/notes/post_ideas.txt index d422959..66fda87 100644 --- a/notes/post_ideas.txt +++ b/notes/post_ideas.txt @@ -18,21 +18,22 @@ _ Trans Kids on the Margin, and Harms From Misleading Training Data

_ Book Review: Charles Murray's Facing Reality: Two Truths About Race in America

Minor—
+_ Review of AGP Erotica Automation Tools, September 2022
_ Happy Meal
_ Subspatial Distribution Overlap and Cancellable Stereotypes
_ Elision _vs_. Choice
_ ASL Is Not a Language
_ Book Review: Johnny the Walrus
_ "But I'm Not Quite Sure What That Means"
+_ Beckett Mariner Is Trans https://www.reddit.com/r/DaystromInstitute/comments/in3g92/was_mariner_a_teenager_on_the_enterprised/
+_ Link: "On Transitions, Freedom of Form, [...]"
_ Gaussian Gender Issues
_ Timelines
-_ Xpression Camera Is the Uniquely Best Piece of Software in the World
_ Hrunkner Unnerby and the Shallowness of Progress
_ reinterpreting all of Hannah Montana album lyrics as an AGP narrative
_ my medianworld: https://www.glowfic.com/replies/1619639#reply-1619639
_ Rebecca Romijn
-_ Link: "On Transitions, Freedom of Form, [...]"
_ Excerpt from _Redefining Realness_
_ "Reducing" Bias and Improving "Safety"
_ Unicode adopt-a-character?? (would it be wrong to adopt "♀"?)
@@ -112,7 +113,6 @@ _ reductionist rebuttal to "so you think lesbians aren't women"
_ Principles (transhumanism, and sanity)
_ reply to https://azdoine.tumblr.com/post/173995599942/a-reply-to-unremediatedgenderspace-on-reply-to http://archive.is/JSSNi
_ reply to https://deathisbadblog.com/the-real-transphobes-are-the-ones-we-made-along-the-way/
-_ Beckett Mariner Is Trans https://www.reddit.com/r/DaystromInstitute/comments/in3g92/was_mariner_a_teenager_on_the_enterprised/
_ Cynical Theories review
_ Persongen/univariate fallacy/typical set
_ reply on psych prison https://www.reddit.com/r/TheMotte/comments/io1iih/culture_war_roundup_for_the_week_of_september_07/g4uvgb4/?context=3