From: M. Taylor Saotome-Westlake Date: Fri, 20 Jan 2023 05:52:27 +0000 (-0800) Subject: memoir: Challenges to Yudkowsky's Personality Cult X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=aa1f9e0208500c4ebfd8dea163dba22b6ec1522f;p=Ultimately_Untrue_Thought.git memoir: Challenges to Yudkowsky's Personality Cult See—by exerting a little bit of discipline over the course of a day, I can grow the ms. by 1000 words. If I just do that six days a week, I can finish this project in a reasonable amount of time. --- diff --git a/content/2018/the-categories-were-made-for-man-to-make-predictions.md b/content/2018/the-categories-were-made-for-man-to-make-predictions.md index 8035f46..65a686c 100644 --- a/content/2018/the-categories-were-made-for-man-to-make-predictions.md +++ b/content/2018/the-categories-were-made-for-man-to-make-predictions.md @@ -33,7 +33,7 @@ There's no objective answer to the question as to whether we should pay more att This works because, empirically, mammals have lots of things in common with each other and water-dwellers have lots of things in common with each other. If we [imagine entities as existing in a high-dimensional configuration space](http://lesswrong.com/lw/nl/the_cluster_structure_of_thingspace/), there would be a _mammals_ cluster (in the subspace of the dimensions that mammals are similar on), and a _water-dwellers_ cluster (in the subspace of the dimensions that water-dwellers are similar on), and whales would happen to belong to _both_ of them, in the way that the vector *x⃗* = [3.1, 4.2, −10.3, −9.1] ∈ ℝ⁴ is close to [3, 4, 2, 3] in the _x₁-x₂_ plane, but also close to [−8, −9, −10, −9] in the _x₃-x₄_ plane. -If different political factions are engaged in conflict over how to define the extension of some common word—common words being a scarce and valuable resource both culturally and [information-theoretically](http://lesswrong.com/lw/o1/entropy_and_short_codes/)—rationalists may not be able to say that one side is simply right and the other is simply wrong, but we can at least strive for objectivity in _describing the conflict_. Before shrugging and saying, "Well, this is a difference in values; nothing more to be said about it," we can talk about the detailed consequences of what is gained or lost by paying attention to some differences and ignoring others. That there exists an element of subjectivity in what you choose to pay attention to, doesn't negate the fact that there _is_ a structured empirical reality to be described—and not all descriptions of it are equally compact. +If different political factions are engaged in conflict over how to define the extension of some common word—common words being a scarce and valuable resource both culturally and [information-theoretically](http://lesswrong.com/lw/o1/entropy_and_short_codes/)—rationalists may not be able to say that one side is simply right and the other is simply wrong, but we can at least strive for objectivity in _describing the conflict_. Before shrugging and saying, "Well, this is a difference in values; nothing more to be said about it," we can talk about the detailed consequences of what is gained or lost by paying attention to some differences and ignoring others. That there exists an element of subjectivity in what you choose to pay attention to, doesn't negate the fact that there _is_ a structured empirical reality to be described—and not all descriptions of it are equally compact. 
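(A minimal sketch of the configuration-space point above—in Python with numpy, using the made-up whale-like point and cluster prototypes from the example, not anything from the original post—showing how the same point can be close to one cluster in one subspace of dimensions and close to a different cluster in another:)

```python
# Illustrative only: distances restricted to different subspaces of dimensions,
# using the hypothetical example vectors from the paragraph above.
import numpy as np

x = np.array([3.1, 4.2, -10.3, -9.1])                         # the example point in R^4
mammal_prototype = np.array([3.0, 4.0, 2.0, 3.0])             # near x in the x1–x2 plane
water_dweller_prototype = np.array([-8.0, -9.0, -10.0, -9.0]) # near x in the x3–x4 plane

def subspace_distance(a, b, dims):
    """Euclidean distance between a and b, looking only at the coordinates in `dims`."""
    idx = list(dims)
    return np.linalg.norm(a[idx] - b[idx])

print(subspace_distance(x, mammal_prototype, (0, 1)))         # ~0.22: close on the mammal-relevant dimensions
print(subspace_distance(x, water_dweller_prototype, (2, 3)))  # ~0.32: close on the water-dweller-relevant dimensions
print(np.linalg.norm(x - mammal_prototype))                   # ~17.3: far, if you weight all dimensions equally
```

(Which cluster the point "belongs to" depends on which dimensions you attend to—but the distances themselves are facts about the structured empirical reality, not matters of opinion.)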
In terms of the Lincoln riddle: you _can_ call a tail a leg, but you can't stop people from _noticing_ that out of a dog's five legs, one of them is different from the others. You can't stop people from inferring decision-relevant implications from what they notice. (_Most_ of a dog's legs touch the ground, such that you'd have to carry the dog to the vet if one of them got injured, but the dog can still walk without the other, different leg.) And if people who live and work with dogs every day find themselves habitually distinguishing between the bottom-walking-legs and the back-wagging-leg, they _just might_ want _different words_ in order to concisely _talk_ about what everyone is thinking _anyway_.

diff --git a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
index abc4d40..c102d08 100644
--- a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
+++ b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
@@ -133,9 +133,9 @@ And I think I _would_ have been over it, except—

... except that Yudkowsky _reopened the conversation_ four days later on 22 February 2021, with [a new Facebook post](https://www.facebook.com/yudkowsky/posts/10159421750419228) explaining the origins of his intuitions about pronoun conventions, and concluding that, "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he", with a default for those-who-haven't-asked that goes by gamete size' and to say that this just _is_ the normative definition. Because it is _logically rude_, not just socially rude, to try to bake any other more complicated and controversial definition _into the very language protocol we are using to communicate_."

-(_Why!?_ Why reopen the conversation, from the perspective of his chessboard? Wouldn't it be easier to just stop digging? I guess my highly-Liked Facebook comment and Twitter barb about him lying-by-implicature temporarily brought me and my concerns to the top of his attention?)
+(_Why!?_ Why reopen the conversation, from the perspective of his chessboard? Wouldn't it be easier to just stop digging? I guess my highly-Liked Facebook comment and Twitter barb about him lying-by-implicature temporarily brought me and my concerns to the top of his attention, despite the fact that I'm generally not that important?)

-I eventually explained what was wrong with Yudkowsky's new arguments at the length of 12,000 words in March 2022's ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/),[^challenges-title] but I find myself still having more left to analyze. The February 2021 post on pronouns is a _fascinating_ document, in its own way—a penetrating case study on the effects of politics on a formerly great mind.
+I eventually explained what was wrong with Yudkowsky's new arguments at the length of 12,000 words in March 2022's ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/),[^challenges-title] but that post focused on the object-level arguments; I have more to say here (that I decided to cut from "Challenges") about the meta-level political context. The February 2021 post on pronouns is a _fascinating_ document, in its own way—a penetrating case study on the effects of politics on a formerly great mind.
[^challenges-title]: The form of the title is an allusion to Yudkowsky's ["Challenges to Christiano's Capability Amplification Proposal"](https://www.lesswrong.com/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal).

@@ -529,7 +529,7 @@ But fighting for public epistemology is a long battle; it makes more sense if yo

Now it looks like we have—less time? Not just tautologically because time has passed (the 21st century is one-fifth over—closer to a quarter over), but because of new information from the visible results of the deep learning revolution.[^second-half] Yudkowsky seemed particularly [spooked by AlphaGo](https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-and-yudkowsky-on-ai-progress?commentId=gQzA8a989ZyGvhWv2) [and AlphaZero](https://intelligence.org/2017/10/20/alphago/) in 2016–2017, not because superhuman board game players were dangerous, but because of what it implied about the universe of algorithms.

-There had been a post in the Sequences that made fun of "the people who just want to build a really big neural net." These days, it's increasingly looking like just building a really big neural net ... [actually works](https://www.gwern.net/Scaling-hypothesis)?—which is bad news; if it's "easy" for non-scientific-genius engineering talent to shovel large amounts of compute into the birth of powerful minds that we don't understand and don't know how to control, then it would seem that the world is soon to pass outside of our understanding and control.
+There had been a post in the Sequences that made fun of "the people who just want to build a really big neural net." These days, it's increasingly looking like just building a really big neural net ... [actually works](https://www.gwern.net/Scaling-hypothesis)?—which seems like bad news; if it's "easy" for non-scientific-genius engineering talent to shovel large amounts of compute into the birth of powerful minds that we don't understand and don't know how to control, then it would seem that the world is soon to pass outside of our understanding and control.

[^second-half]: In an unfinished slice-of-life short story I started writing _circa_ 2010, my protagonist (a supermarket employee resenting his job while thinking high-minded thoughts about rationality and the universe) speculates about "a threshold of economic efficiency beyond which nothing human could survive" being a tighter bound on future history than physical limits (like the heat death of the universe), and comments that "it imposes a sense of urgency to suddenly be faced with the fabric of your existence coming apart in ninety years rather than 10⁹⁰."

@@ -555,7 +555,7 @@ But if you think the only hope for there _being_ a future flows through maintain

(I remarked to "Wilhelm" in June 2022 that DeepMind changing its Twitter avatar to a rainbow variant of their logo for Pride month was a bad sign.)

-So isn't there a story here where I'm the villain, willfully damaging humanity's chances of survival by picking unimportant culture-war fights in the xrisk-reduction social sphere, when _I know_ that the sphere needs to keep its nose clean in the eyes of the progressive egregore? _That's_ why Yudkowsky said the arguably-technically-misleading things he said about my Something to Protect: he _had_ to, to keep our nose clean. The people paying attention to contemporary politics don't know what I know, and can't usefully be told. Isn't it better for humanity if my meager talents are allocated to making AI go well? 
Don't I have a responsibility to fall in line and take one for the team—if the world is at stake? +So isn't there a story here where I'm the villain, willfully damaging humanity's chances of survival by picking unimportant culture-war fights in the xrisk-reduction social sphere, when _I know_ that the sphere needs to keep its nose clean in the eyes of the progressive egregore? _That's_ why Yudkowsky said the arguably-technically-misleading things he said about my Something to Protect: he _had_ to, to keep our nose clean. The people paying attention to contemporary politics don't know what I know, and can't usefully be told. Isn't it better for humanity if my meager talents are allocated to making AI go well? Don't I have a responsibility to fall in line and take one for the team? If the world is at stake. As usual, the Yudkowsky of 2009 has me covered. In his short story ["The Sword of Good"](https://www.yudkowsky.net/other/fiction/the-sword-of-good), our protagonist Hirou wonders why the powerful wizard Dolf lets other party members risk themselves fighting, when Dolf could have protected them: @@ -842,24 +842,33 @@ Is this the hill _he_ wants to die on? If the world is ending either way, wouldn * Maybe not? If "dignity" is a term of art for log-odds of survival, maybe self-censoring to maintain influence over what big state-backed corporations are doing is "dignified" in that sense ] -After the September 2021 Twitter altercation, I upgraded my "mute" of @ESYudkowsky to a "block", to avoid the temptation to pick more fights. +At the end of the September 2021 Twitter altercation, I [said that I was upgrading my "mute" of @ESYudkowsky to a "block"](https://twitter.com/zackmdavis/status/1435468183268331525). Better to just leave, rather than continue to hang around in his mentions trying (consciously or otherwise) to pick fights, like a crazy ex-girlfriend. (["I have no underlying issues to address; I'm certifiably cute, and adorably obsessed"](https://www.youtube.com/watch?v=UMHz6FiRzS8) ...) -I still had more things to say—a reply to the February 2021 post on pronoun reform, and the present memoir telling this Whole Dumb Story—but those could be written and published unilaterally. Given that we clearly weren't going to get to clarity and resolution, I didn't need to bid for any more of my ex-hero's attention and waste more of his time; I owed him that much. +I still had more things to say—a reply to the February 2021 post on pronoun reform, and the present memoir telling this Whole Dumb Story—but those could be written and published unilaterally. Given that we clearly weren't going to get to clarity and resolution, I didn't need to bid for any more of my ex-hero's attention and waste more of his time (valuable time, _limited_ time); I owed him that much. + +Leaving a personality cult is hard. As I struggled to write, I noticed that I was wasting a lot of cycles worrying about what he'd think of me, rather than saying the things I needed to say. I knew it was pathetic that my religion was so bottlenecked on _one guy_—particularly since the holy texts themselves (written by that one guy) [explicitly said not to do that](https://www.lesswrong.com/posts/t6Fe2PsEwb3HhcBEr/the-litany-against-gurus)—but unwinding those psychological patterns was still a challenge. 
+
+An illustration of the psychological dynamics at play: on an EA forum post about demandingness objections to longtermism, Yudkowsky [commented that](https://forum.effectivealtruism.org/posts/fStCX6RXmgxkTBe73/towards-a-weaker-longtermism?commentId=Kga3KGx6WAhkNM3qY) he was "broadly fine with people devoting 50%, 25% or 75% of themselves to longtermism, in that case, as opposed to tearing themselves apart with guilt and ending up doing nothing much, which seems to be the main alternative."
+
+I found the comment reassuring regarding the extent (or lack thereof) of my own contributions to the great common task—and that's the problem: I found the _comment_ reassuring, not the _argument_. It would make sense to be reassured by the claim (if true) that human psychology is such that I don't realistically have the option of devoting more than 25% of myself to the great common task. It does _not_ make sense to be reassured that _Eliezer Yudkowsky said he's broadly fine with it_. That's just being a personality-cultist.

[TODO last email and not bothering him—
- * Although, as I struggled to write, I noticed I was wasting cycles
+ * Although, as I struggled to write, I noticed I was wasting cycles worrying about what he'd think of me
 * January 2022, I wrote to him asking if he cared if I said negative things about him, that it would be easier if he wouldn't hold it against me, and explained my understanding of the privacy norm (Subject: "blessing to speak freely, and privacy norms?")
 * in retrospect, I was wrong to ask that. I _do_ hold it against him. And if I'm entitled to my feelings, isn't he entitled to his?
]

-[TODO "Challenges"
- * the essential objections: you can't have it both ways; we should _model the conflict_ instead of taking a side in it while pretending to be neutral
- * eventually shoved out the door in March
- * I flip-flopped back and forth a lot about whether to include the coda about the political metagame, or to save it for the present memoir; I eventually decided to keep the post object-level
- * I felt a lot of trepidation publishing a post that said, "I'm better off because I don't trust Eliezer Yudkowsky to tell the truth"
- * Critical success! Oli's comment
- * I hoped he saw it (but I wasn't going to email or Tweet at him about it, in keeping with my intent not to bother the guy anymore)
-]
+In February 2022, I finally managed to finish a draft of ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/). (A year after the post it replies to! I did other things that year, probably.) It's long (12,000 words), because I wanted to be thorough and cover all the angles. (To paraphrase Ralph Waldo Emerson, when you strike at Eliezer Yudkowsky, _you must kill him._)
+
+If I had to compress it by a factor of 200 (down to 60 words), I'd say my main point was that, given a conflict over pronoun conventions, there's no "right answer", but we can at least be objective in _describing what the conflict is about_, and Yudkowsky wasn't doing that; his "simplest and best proposal" favored the interests of some parties to the dispute (as was seemingly inevitable), _without admitting he was doing so_ (which was not inevitable).[^describing-the-conflict]
+
+[^describing-the-conflict]: I had been making this point for four years. 
[As I wrote in February 2018's "The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/#describing-the-conflict), "we can at least strive for objectivity in _describing the conflict_."
+
+In addition to prosecuting the object-level (about pronouns) and the meta level (about acknowledging the conflict) for 12,000 words, I also had _another_ several thousand words at the meta-meta level, about the political context of the argument, and Yudkowsky's comments about what is "sometimes personally prudent and not community-harmful", but I wasn't sure whether to include it in the post itself, or save it for the memoir. I was worried about it being too aggressive, dissing Yudkowsky too much. I wasn't sure how to be aggressive and explain _why_ I wanted to be so aggressive without the Whole Dumb Story of the previous six years leaking in.
+
+I asked secret posse member for political advice. I thought my argument was very strong, but that the object-level argument about pronoun conventions just wasn't very interesting; what I _actually_ wanted people to see was the thing where the Big Yud of the current year _just can't stop lying for political convenience_. How could I possibly pull that off in a way that the median _Less Wrong_-er would hear? Was it a good idea to "go for the throat" with the "I'm better off because I don't trust Eliezer Yudkowsky to tell the truth in this domain" line?
+
+Secret posse member said the post was boring. ("Yes. I'm bored, too," I replied.) They said that I was optimizing [... TODO continue]

[TODO background on Planecrash, medianworlds, dath ilan, Keepers, masochism coverup—
 * Yudkowsky's new fiction project is about Keltham out of dath ilan dying in a plane crash and waking up in the world of _Pathfinder_, a Dungeons-and-Dragons-alike setting.

diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md
index 180eb83..fc5d71c 100644
--- a/notes/memoir-sections.md
+++ b/notes/memoir-sections.md
@@ -2,6 +2,7 @@ marked TODO blocks—
✓ AlphaGo seemed deeper [pt. 5]
- "Agreeing with Stalin" recap intro [pt. 5]
- social justice and defying threats [pt. 5]
+_ last email and not bothering him [pt. 5]
_ scuffle on "Yes Requires the Possibility" [pt. 4]
_ confronting Olivia [pt. 2]
_ "Lesswrong.com is dead to me" [pt. 4]
@@ -50,6 +51,7 @@ New (bad) time estimate:

With internet available—
+_ "When you strike at a king"
_ real-name blog post: jr. member of save/destroy/take-over the world conspiracy
_ Sequences post making fun of "just make a really big neural net"
_ DeepMind June 2022 Twitter archive?
@@ -81,6 +83,7 @@ _ explain the "if the world were at stake" Sword of Good reference better
_ D. 
also acknowledged AGP _ "no one else would have spoken" should have been a call-to-action to read more widely _ explain who Kay Brown is +_ "Great Common Task" is probably capitalized _ mention Will MacAskill's gimped "longtermism" somehow _ re-read a DALL-E explanation and decide if I think it's less scary now _ Scott Aaronson on the blockchain of science https://scottaaronson.blog/?p=6821 @@ -1637,19 +1640,11 @@ https://www.lesswrong.com/posts/BbM47qBPzdSRruY4z/instead-of-technical-research- I hate that my religion is bottlenecked on one guy -https://forum.effectivealtruism.org/posts/fStCX6RXmgxkTBe73/towards-a-weaker-longtermism?commentId=Kga3KGx6WAhkNM3qY -> I am broadly fine with people devoting 50%, 25% or 75% of themselves to longtermism, in that case, as opposed to tearing themselves apart with guilt and ending up doing nothing much, which seems to be the main alternative. - https://twitter.com/zackmdavis/status/1405032189708816385 > Egregore psychology is much easier and more knowable than individual human psychology, for the same reason macroscopic matter is more predictable than individual particles. But trying to tell people what the egregore is doing doesn't work because they don't believe in egregores!! -https://glowfic.com/replies/1882395#reply-1882395 -> the stranger from dath ilan never pretended to be anyone's friend after he stopped being their friend. -Similarly, you should stop pretending to be a rationality teacher if you're going to be corrupted by politics - 20 June 2021, "The egregore doesn't care about the past", thematic moments at Valinor - You don't want to have a reputation that isn't true; I've screwed up confidentiality before, so I don't want a "good at keeping secrets" reputation; if Yudkowsky doesn't want to live up to the standard of "not being a partisan hack", then ... Extended analogy between "Scott Alexander is always right" and "Trying to trick me into cutting my dick off"—in neither case would any sane person take it literally, but it's pointing at something important (Scott and EY are trusted intellectual authorities, rats are shameless about transition cheerleading) diff --git a/notes/memoir_wordcounts.csv b/notes/memoir_wordcounts.csv index 2e4814c..0003a33 100644 --- a/notes/memoir_wordcounts.csv +++ b/notes/memoir_wordcounts.csv @@ -297,4 +297,5 @@ 01/16/2023,78041 01/17/2023,78041 01/18/2023,78303 -01/19/2023, \ No newline at end of file +01/19/2023,79364 +01/20/2023, \ No newline at end of file