From: M. Taylor Saotome-Westlake
Date: Sat, 10 Sep 2022 18:52:34 +0000 (-0700)
Subject: memoir: internet-enabled references, next waypoints
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=eb325b4ce19863d3295af8c4ad7b0f2be1630693;p=Ultimately_Untrue_Thought.git

memoir: internet-enabled references, next waypoints
---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index 8b4b298..f121f2c 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -461,7 +461,7 @@ In "... Boundaries?", I unify the two positions and explain how both Yudkowsky a

But _given_ a subspace of interest, the _technical_ criterion of drawing category boundaries around [regions of high density in configuration space](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace) still applies. There is Law governing which uses of communication signals transmit which information, and the Law can't be brushed off with, "whatever, it's a pragmatic choice, just be nice." I demonstrate the Law with a couple of simple mathematical examples: if you redefine a codeword that originally pointed to one cluster, to also include another, that changes the quantitative predictions you make about an unobserved coordinate given the codeword; if an employer starts giving the title "Vice President" to line workers, that decreases the mutual information between the job title and properties of the job.

-(Jessica and Ben's [discussion of the job title example in relation to the _Wikipedia_ summary of Jean Baudrillard's _Simulacra and Simulation_ ended up getting published separately](http://benjaminrosshoffman.com/excerpts-from-a-larger-discussion-about-simulacra/), and ended up taking on a life of its own in [future posts](http://benjaminrosshoffman.com/simulacra-subjectivity/), [including](https://thezvi.wordpress.com/2020/06/15/simulacra-and-covid-19/) by [other authors](https://thezvi.wordpress.com/2020/08/03/unifying-the-simulacra-definitions/).)
+(Jessica and Ben's [discussion of the job title example in relation to the _Wikipedia_ summary of Jean Baudrillard's _Simulacra and Simulation_ ended up getting published separately](http://benjaminrosshoffman.com/excerpts-from-a-larger-discussion-about-simulacra/), and took on a life of its own [in](http://benjaminrosshoffman.com/blame-games/) [future](http://benjaminrosshoffman.com/blatant-lies-best-kind/) [posts](http://benjaminrosshoffman.com/simulacra-subjectivity/), [including](https://www.lesswrong.com/posts/Z5wF8mdonsM2AuGgt/negative-feedback-and-simulacra) [a](https://www.lesswrong.com/posts/NiTW5uNtXTwBsFkd4/signalling-and-simulacra-level-3) [number](https://www.lesswrong.com/posts/tF8z9HBoBn783Cirz/simulacrum-3-as-stag-hunt-strategy) [of](https://www.lesswrong.com/tag/simulacrum-levels) [posts](https://thezvi.wordpress.com/2020/05/03/on-negative-feedback-and-simulacra/) [by](https://thezvi.wordpress.com/2020/06/15/simulacra-and-covid-19/) [other](https://thezvi.wordpress.com/2020/08/03/unifying-the-simulacra-definitions/) [authors](https://thezvi.wordpress.com/2020/09/07/the-four-children-of-the-seder-as-the-simulacra-levels/).)
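(Not from the post itself; just a minimal illustrative sketch of the kind of calculation at issue, using a made-up ten-person firm: hand the "Vice President" title out indiscriminately and the mutual information between title and job properties drops to zero.)

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(T;P) in bits, estimated from a list of (title, property) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    titles = Counter(t for t, _ in pairs)
    props = Counter(p for _, p in pairs)
    return sum(
        (c / n) * log2((c / n) / ((titles[t] / n) * (props[p] / n)))
        for (t, p), c in joint.items()
    )

# Hypothetical ten-person firm: one executive who sets strategy, nine line workers who don't.
executive = [("VP", "sets strategy")]
line_workers = [("line worker", "doesn't")] * 9

# Old convention: only the executive is titled "VP".
print(mutual_information(executive + line_workers))  # ≈ 0.47 bits

# New convention: the line workers get the "VP" title too.
everyone_is_vp = [("VP", prop) for _, prop in executive + line_workers]
print(mutual_information(everyone_is_vp))  # 0.0 bits: the title no longer tells you anything about the job
```

(Re-title only some of the line workers and the number lands somewhere between the two extremes; the point is that the loss of information is a calculable quantity, not a matter of taste.)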
Sarah asked if the math wasn't a bit overkill: were the calculations really necessary to make the basic point that good definitions should be about classifying the world, rather than about what's pleasant or politically expedient to say? I thought the math was _really important_ as an appeal to principle—and [as intimidation](https://slatestarcodex.com/2014/08/10/getting-eulered/). (As it is written, [_the tenth virtue is precision!_](http://yudkowsky.net/rational/virtues/) Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.)

@@ -471,7 +471,7 @@ My thinking here was that the posse's previous email campaigns had been doomed t

I could see a case that it was unfair of me to include subtext and then expect people to engage with the text, but if we weren't going to get into full-on gender politics on _Less Wrong_ (which seemed like a bad idea), but gender politics _was_ motivating an epistemology error, I wasn't sure what else I was supposed to do! I was pretty constrained here!

-(I did regret having accidentally "poisoned the well" the previous month by impulsively sharing the previous year's ["Blegg Mode"](/2018/Feb/blegg-mode/) [as a _Less Wrong_ linkpost](https://www.lesswrong.com/posts/GEJzPwY8JedcNX2qz/blegg-mode). "Blegg Mode" had originally been drafted as part of "... To Make Predictions" before getting spun off as a separate post. Frustrated in March at our failing email campaign, I thought it was politically "clean" enough to belatedly share, but it proved to be insufficiently [deniably allegorical](/tag/deniably-allegorical/). It's plausible that some portion of the _Less Wrong_ audience would have been more receptive to "... Boundaries?" as not-politically-threatening philosophy, if they hadn't been alerted to the political context by the trainwreck in the comments on the "Blegg Mode" linkpost.)
+(I did regret having accidentally "poisoned the well" the previous month by impulsively sharing the previous year's ["Blegg Mode"](/2018/Feb/blegg-mode/) [as a _Less Wrong_ linkpost](https://www.lesswrong.com/posts/GEJzPwY8JedcNX2qz/blegg-mode). "Blegg Mode" had originally been drafted as part of "... To Make Predictions" before getting spun off as a separate post. Frustrated in March at our failing email campaign, I thought it was politically "clean" enough to belatedly share, but it proved to be insufficiently [deniably allegorical](/tag/deniably-allegorical/). It's plausible that some portion of the _Less Wrong_ audience would have been more receptive to "... Boundaries?" as not-politically-threatening philosophy, if they hadn't been alerted to the political context by the 60+-comment trainwreck on the "Blegg Mode" linkpost.)

-----

@@ -541,7 +541,7 @@ Also in November, I wrote to Ben about how I was still stuck on writing the grie

The reason it _should_ be safe to write is because Explaining Things is Good. It should be possible to say, "This is not a social attack; I'm not saying 'rationalists Bad, Yudkowsky Bad'; I'm just trying to carefully _tell the true story_ about why, as a matter of cause-and-effect, I've been upset this year, including addressing counterarguments for why some would argue that I shouldn't be upset, why other people could be said to be behaving 'reasonably' given their incentives, why I nevertheless wish they'd be braver and adhere to principle rather than 'reasonably' following incentives, _&c_."

-So why couldn't I write? Was it that I didn't know how to make "This is not a social attack" credible? Maybe because it's wasn't true?? I was afraid that telling a story about our leader being intellectually dishonest was "the nuclear option" in a way that I couldn't credibly cancel with "But I'm just telling a true story about a thing that was important to me that actually happened" disclaimers. If you're slowly-but-surely gaining territory in a conventional war, _suddenly_ escalating to nukes seems pointlessly destructive. This metaphor is horribly non-normative ([arguing is not a punishment!](https://srconstantin.wordpress.com/2018/12/15/argue-politics-with-your-best-friends/) carefully telling a true story _about_ an argument is not a nuke!), but I didn't know how to make it stably go away.
+So why couldn't I write? Was it that I didn't know how to make "This is not a social attack" credible? Maybe because it wasn't true?? I was afraid that telling a story about our leader being intellectually dishonest was "the nuclear option" in a way that I couldn't credibly cancel with "But I'm just telling a true story about a thing that was important to me that actually happened" disclaimers. If you're slowly-but-surely gaining territory in a conventional war, _suddenly_ escalating to nukes seems pointlessly destructive. This metaphor is horribly non-normative ([arguing is not a punishment!](https://srconstantin.github.io/2018/12/15/argue-politics-with-your-best-friends.html) carefully telling a true story _about_ an argument is not a nuke!), but I didn't know how to make it stably go away.

A more motivationally-stable compromise would be to try to split off whatever _generalizable insights_ would have been part of the story into their own posts that don't make it personal. ["Heads I Win, Tails?—Never Heard of Her"](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) had been a huge success as far as I was concerned, and I could do more of that kind of thing, analyzing the social stuff I was worried about, without making it personal, even if, secretly, it actually was personal.

@@ -818,12 +818,16 @@ Unless the real motive for insisting on complication and nuance in language is t

Unless, at some level, Eliezer Yudkowsky doesn't expect his followers to deal with facts?

-[TODO: student dysphoria—I hated being put in the box as student
/2022/Apr/student-dysphoria-and-a-previous-lifes-war/
]

-[

+[TODO SECTION: student dysphoria—I hated being put in the box as student
/2022/Apr/student-dysphoria-and-a-previous-lifes-war/
]

+[TODO SECTION "duly appreciated"
+
+]
+
+[TODO SECTION just crazy she thought "I'm trans" was an explanation, but then found a better theory that explains the same data—that's what "rationalism" should be—including "That wasn't entirely true!!!!"
https://somenuanceplease.substack.com/p/actually-i-was-just-crazy-the-whole
]

@@ -965,6 +969,8 @@ At this point, I imagine defenders of the Caliphate are shaking their heads in d

Fine. Objection sustained. I'm happy to use Xu's language. I think what's actually at issue is that, at least in this domain, I want to facilitate people making inferences (full stop), and the Caliphate wants to _not_ facilitate people making inferences that, on the whole, cause more harm than benefit. This isn't a disagreement about rationality, because facilitating inferences _isn't_ rational _if you don't want people to make inferences_.

+[TODO: quote "Doublethink (Choosing to Be Biased)", note that despite Yudkowsky's doubt, the situation is actually worse that Orwell depicted—you don't even have to burn the offending material, if you can just get people to ignore it] + [TODO: "massive psychological damage to some subset of people", diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md index 9e2ee2b..c989639 100644 --- a/notes/a-hill-of-validity-sections.md +++ b/notes/a-hill-of-validity-sections.md @@ -1,7 +1,10 @@ noncontiguous on deck— -_ being put in a box (school) +_ being put in a bucket (school) _ "duly appreciated" _ "Actually, I was just crazy the whole time" +_ Doublethink (Choosing to be Biased) +_ the reason he got pushback + _ if he's reading this _ tie off reply to Xu _ let's recap @@ -13,20 +16,13 @@ _ Anna vs. Michael factional conflict _ "fraud" as deception that moves resources - with internet available— -_ woke filter bubble thinking, stopped talking to Michael when he went that way https://twitter.com/ESYudkowsky/status/1435619618052214787 -_ "praise Ba'al" language from "Rationalist Blogging" (both on first ref to that post, and Feelings vs. Truth speech) -_ disclaimer on "Categories Were Made" -_ update "Argue Politics" link to Sarah's static site -_ link simulacrum posts: Zvi (he has a category), Elizabeth, at least one more from Ben _ examples of snarky comments about "the rationalists" _ Discord logs before Austin retreat _ screenshot Rob's Facebook comment which I link _ 13th century word meanings _ compile Categories references from the Dolphin War Twitter thread _ weirdly hostile comments on "... Boundaries?" -_ report comment count "Blegg Mode" trainwreck far editing tier— @@ -441,12 +437,19 @@ back in 'aught-nine, Anna commented that no one in our circle was that old, as i Really, self-respecting trans people who care about logical consistency should abhor Scott and Eliezer's opinions—you should want people to use the right pronouns _because_ of your gender soul or _because_ your transition actually worked, not because categories are flexible and pronouns shouldn't imply gender +> Because it was you, I tried to read this when it came out. But you do not know how to come to a point, because you are too horrified by the thought that a reader might disagree with you if you don't write even more first; so then I started skimming, and then I gave up. +https://twitter.com/ESYudkowsky/status/1435605868758765568 + https://twitter.com/ESYudkowsky/status/1435605868758765568 > Because it was you, I tried to read this when it came out. But you do not know how to come to a point, because you are too horrified by the thought that a reader might disagree with you if you don't write even more first; so then I started skimming, and then I gave up. > If you think you can win a battle about 2 + 3 = 5, then it can feel like victory or self-justification to write a huge long article hammering on that; but it doesn't feel as good to engage with how the Other does not think they are arguing 2 + 3 = 6, they're talking about 2 * 3. https://twitter.com/ESYudkowsky/status/1435618825198731270 +> The Other's theory of themselves usually does not make them look terrible. And you will not have much luck just yelling at them about how they must *really* be doing terrible_thing instead. That's woke filter bubble thinking. I stopped talking to Michael when he went that way. 
+https://twitter.com/ESYudkowsky/status/1435619618052214787
+
+
But I think Eliezer and I _agree_ on what he's doing; he just doesn't see it's bad

Speaking of narcissism and perspective-taking, "deception" isn't about whether you personally "lied" according to your own re-definitions; it's about whether you predictably made others update in the wrong direction

@@ -1087,3 +1090,6 @@ when I was near death from that salivary stone, I mumbled something to my father
If we're going to die either way, wouldn't it be _less dignified_ to die with Stalin's dick in his mouth?

[Is this the hill he wants to die on? The pronouns post mentions "while you can still get away with disclaimers", referring to sanction from the outside world, as if he won't receive any sanction from his people, because he owns us. That's wrong. Yudkowsky as a person doesn't own me; the Sequences-algorithm does]
+
+https://twitter.com/ESYudkowsky/status/1568338672499687425
+> I'm not interested in lying to the man in the street. It won't actually save the world, and is not part of a reasonable and probable plan for saving the world; so I'm not willing to cast aside my deontology for it; nor would the elites be immune from the epistemic ruin.