From: M. Taylor Saotome-Westlake
Date: Sun, 28 Aug 2022 20:18:41 +0000 (-0700)
Subject: memoir: replying to David Xu
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=cfdc07319c7f98bb8cc7a64228174d0456411360;p=Ultimately_Untrue_Thought.git

memoir: replying to David Xu
---

diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md
index 9e3f25c..c7ae98b 100644
--- a/notes/a-hill-of-validity-sections.md
+++ b/notes/a-hill-of-validity-sections.md
@@ -8,8 +8,10 @@ _ screenshot Rob's Facebook comment which I link
_ compile Categories references from the Dolphin War
_ dates of subsequent philosophy-of-language posts
+ far editing tier—
_ clarify why Michael thought Scott was "gaslighting" me, include "beseech bowels of Christ"
+_ conversation with Ben about physical injuries (this is important because it explains where the "cut my dick off" rhetoric came from)
_ address the "maybe it's good to be called names" point from "Hill" thread
_ 2019 Discord discourse with Alicorner
_ edit discussion of "anti-trans" side given that I later emphasize that "sides" shouldn't be a thing
@@ -952,27 +954,39 @@ https://twitter.com/ESYudkowsky/status/1067187363544059905

https://twitter.com/davidxu90/status/1436007025545125896

-I'm curious what might count for you as a crux about this; candidate cruxes I could imagine include: whether some categories facilitate inferences that *do*, on the whole, cause more harm than benefit, and if so, whether it is "rational" to rule that such inferences...
-...should be avoided when possible, and if so, whether the best way to disallow a large set of potential inferences is the proscribe the use of the categories that facilitate them--and if *not*, whether proscribing the use of a category in *public communication* constitutes...
-..."proscribing" it more generally, in a way that interferes with one's ability to perform "rational" thinking in the privacy of one's own mind.
+David Xu writes (with Yudkowsky ["endors[ing] everything [he] just said"](https://twitter.com/ESYudkowsky/status/1436025983522381827)): + +> I'm curious what might count for you as a crux about this; candidate cruxes I could imagine include: whether some categories facilitate inferences that _do_, on the whole, cause more harm than benefit, and if so, whether it is "rational" to rule that such inferences should be avoided when possible, and if so, whether the best way to disallow a large set of potential inferences is the proscribe the use of the categories that facilitate them—and if _not_, whether proscribing the use of a category in _public communication_ constitutes "proscribing" it more generally, in a way that interferes with one's ability to perform "rational" thinking in the privacy of one's own mind. +> +> That's four possible (serial) cruxes I listed, one corresponding to each "whether". I could have included a fifth and final crux about whether, even _if_ The Thing In Question interfered with rational thinking, that might be worth it; but this I suspect you would not concede, and (being a rationalist) it's not something I'm willing to concede myself, so it's not a crux in a meaningful sense between us (or any two self-proclaimed "rationalists"). +> +> My sense is that you have (thus far, in the parts of the public discussion I've had the opportunity to witness) been behaving as though the _one and only crux in play_—that is, the True Source of Disagreement—has been the fifth crux, the thing I refused to include with the others of its kind. 
Your accusations against the caliphate _only make sense_ if you believe the dividing line between your behavior and theirs is caused by a disagreement as to whether "rational" thinking is "worth it"; as opposed to, say, what kind of prescriptions "rational" thinking entails, and which (if any) of those prescriptions are violated by using a notion of gender (in public, where you do not know in advance who will receive your communications) that does not cause massive psychological damage to some subset of people. +> +> Perhaps it is your argument that all four of the initial cruxes I listed are false; but even if you believe that, it should be within your set of ponderable hypotheses that people might disagree with you about that, and that they might perceive the disagreement to be _about_ that, rather than (say) about whether subscribing to the Blue Tribe view of gender makes them a Bad Rationalist, but That's Okay because it's Politically Convenient. +> +> This is the sense in which I suspect you are coming across as failing to properly Other-model. -That's four possible (serial) cruxes I listed, one corresponding to each "whether". I could have included a fifth and final crux about whether, even *if* The Thing In Question interfered with rational thinking, that might be worth it; but this I suspect you would... +I reply: I'd like to [taboo](https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your-words) the word "rational"; I think I can do a much better job of explaining what's going on without appealing to what is or is not "rational." (As it is written of a virtue which is nameless, if you speak overmuch of the Way, you will not attain it.) -...not concede, and (being a rationalist) it's not something I'm willing to concede myself, so it's not a crux in a meaningful sense between us (or any two self-proclaimed "rationalists"). 
+On the first and second cruxes, concerning whether some categories facilitate inferences that cause more harm than benefit on the whole and whether they should be avoided when possible, I ask: harm _to whom?_ Not all agents have the same utility function! If some people are harmed by other people making certain probabilistic inferences, then it would seem that there's a _conflict_ between the people harmed (who prefer that such inferences be avoided if possible), and people who want to make and share probabilistic inferences about reality (who think that that which can be destroyed by the truth, should be).

-My sense is that you have (thus far, in the parts of the public discussion I've had the opportunity to witness) been behaving as though the *one and only crux in play*--that is, the True Source of Disagreement--has been the fifth crux, the thing I refused to include with the...

+On the third crux, whether the best way to disallow a large set of potential inferences is to proscribe the use of the categories that facilitate them: well, it's hard to be sure whether it's the _best_ way; no doubt a more powerful intelligence could search over a larger space of possible strategies than me. But yeah, if your goal is to _prevent people from noticing facts about reality_, then preventing them from using words that refer to those facts seems like a pretty effective way to do it!

-...others of its kind. Your accusations against the caliphate *only make sense* if you believe the dividing line between your behavior and theirs is caused by a disagreement as to whether "rational" thinking is "worth it"; as opposed to, say, what kind of prescriptions...

+On the fourth crux, whether proscribing the use of a category in public communication constitutes "proscribing" in a way that interferes with one's ability to think in the privacy of one's own mind: I think this is true (for humans). We're social animals.
To the extent that we can do higher-grade cognition at all, we do it (even when alone) using our language faculties that are designed for communicating with others. How are you supposed to think about things that you don't have words for?

-..."rational" thinking entails, and which (if any) of those prescriptions are violated by using a notion of gender (in public, where you do not know in advance who will receive your communications) that does not cause massive psychological damage to some subset of people.

+Thus, bearing in mind that we don't all need to count harms and benefits the same way, and that it is futile to contest what kind of prescriptions "rational" thinking entails, on the question of whether the dividing line between my behavior and the Caliphate's is caused by a disagreement as to whether "rational" thinking is "worth it", I'm inclined to say—

-Perhaps it is your argument that all four of the initial cruxes I listed are false; but even if you believe that, it should be within your set of ponderable hypotheses that people might disagree with you about that, and that they might perceive the disagreement to be...

+It's not a "disagreement" at all. It's a _conflict_.

-...*about* that, rather than (say) about whether subscribing to the Blue Tribe view of gender makes them a Bad Rationalist, but That's Okay because it's Politically Convenient.

+I have a _selfish_ interest in people making and sharing accurate probabilistic inferences about how sex and gender and transgenderedness work in reality, for many reasons, but in part because _I need the correct answer in order to decide whether or not to cut my dick off_.

-This is the sense in which I suspect you are coming across as failing to properly Other-model.

+[TODO:
+"massive psychological damage to some subset of people",
+that's _not my problem_. I _don't give a shit_.
+
+Berkeley people may say that I'm doubling down on failing to Other-model, but I don't think so; it's more honest to notice the conflict and analyze the conflict, than to pretend that we all want the same thing; I can empathize with "playing on a different chessboard", and I would be more inclined to cooperate with it if it weren't accompanied by sneering about how he and his flunkies are the only sane and good people in the world]

· Sep 9, 2021

@@ -983,9 +997,6 @@ Zack M. Davis
Sep 9, 2021
Thoughts on your proposed cruxes: 1 (harmful inferences) is an unworkable AI design: you need correct beliefs first, in order to correctly tell which beliefs are harmful. 4 (non-public concepts) is unworkable for humans: how do you think about things you're not allowed words for?

-https://twitter.com/ESYudkowsky/status/1436025983522381827
-> Well, Zack hopefully shouldn't see this, but I do happen to endorse everything you just said, for your own personal information.
-

[SECTION about monasteries (with Ben and Anna in April 2019)

I complained to Anna: "Getting the right answer in public on topic _X_ would be too expensive, so we won't do it" is _less damaging_ when the set of such Xes is _small_. It looked to me like we added a new forbidden topic in the last ten years, without rolling back any of the old ones.
diff --git a/notes/post_ideas.txt b/notes/post_ideas.txt index 9050daf..5ce75d2 100644 --- a/notes/post_ideas.txt +++ b/notes/post_ideas.txt @@ -7,10 +7,9 @@ Comment: The Dunbar reference on the definition of "personality" is actually rea (stability_unsafe-0gZ54e7b) zmd@ReflectiveCoherence:~/Code/Misc/stability_unsafe$ python stability_sdk/src/stability_sdk/client.py "25-year-old Nana Visitor in the shower in 1996, full body shot, 4K digital photo" -n 4 - Urgent/needed for healing— -_ I Am Not Great With Secrets (aAL) _ Reply to Scott Alexander on Autogenderphilia +_ Pseudonym Misgivings _ Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer _ A Hill of Validity in Defense of Meaning