From: M. Taylor Saotome-Westlake
Date: Sun, 9 Oct 2022 23:36:58 +0000 (-0700)
Subject: memoir: TODO outlining in 2019 and to close
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=48e72f58d89b7b152bed0ab0b14d9beeb59d0547;p=Ultimately_Untrue_Thought.git

memoir: TODO outlining in 2019 and to close

Okay, after some more email review, these TODO blocks now feel sufficiently specific that just filling them in one by one actually seems viable as a plan to get a continuous first draft? I think there are like 37 TODO blocks? (Then there are edit passes, inserting good content from the sections file that didn't make it in, finishing "Blanchard's Dangerous Idea", finishing the reply to Scott on autogenderphilia, finishing the supplement about pseudonyms, getting feedback from friendly and hostile prereaders, asking privacy-mongers if it's OK to name them ... think I can do it all in three months??)
---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index 4a225f5..2d3c84f 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -627,20 +627,32 @@ My dayjob performance had been suffering terribly for months. The psychology of

My "intent" to take a break from the religious war didn't take.

-[TODO: tussle with Anna]
+[TODO: tussle with Anna, was thinking of writing a public reply to her comment against Michael]

-[TODO: tussle on "Yes Implies the Possibility of No"]
+[TODO: tussle on "Yes Implies the Possibility of No" https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019 ]

-[TODO: tussle on new _Less Wrong_ FAQ]
+[TODO: tussle on new _Less Wrong_ FAQ 31 May https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for#iqEEme6M2JmZEXYAk ]

+[TODO: more philosophy-of-language blogging! and bitter grief comments
+https://www.greaterwrong.com/posts/tkuknrjYCbaDoZEh5/could-we-solve-this-email-mess-if-we-all-moved-to-paid/comment/ZkreTspP599RBKsi7
+https://www.greaterwrong.com/posts/FT9Lkoyd5DcCoPMYQ/partial-summary-of-debate-with-benquo-and-jessicata-pt-1/comment/vPekZcouSruiCco3c
+]

+[TODO: 17– Jun, "LessWrong.com is dead to me" in response to "It's Not the Incentives", comment on Ray's behavior, "If clarity seems like death to them and like life to us"; Bill Brent, "Causal vs. Social Reality", I met with Ray 29 Jun; https://www.greaterwrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself ; calling out the abstract pattern]

-https://twitter.com/ESYudkowsky/status/1124751630937681922
-> ("sort of like" in the sense that, empirically, it made me feel much less personally aggrieved, but of course my feelings aren't the point)

+[TODO: https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/]

+[TODO: "AI Timelines Scam", within-group debate on what is a "scam" or "fraud", Pope]

[TODO: epistemic defense meeting; the first morning where "rationalists ... them" felt more natural than "rationalists ... us"]

+[TODO: Michael Vassar and the theory of optimal gossip; make sure to include the part about Michael threatening to sue]
+
+[TODO: various tussling with Steven Kaas]
+
+[TODO: Yudkowsky throwing NRx under the bus; tragedy of recursive silencing
+15 Sep Glen Weyl apology
+]

In November, I received an interesting reply on my philosophy-of-categorization thesis from MIRI researcher Abram Demski.
Abram asked: ideally, shouldn't all conceptual boundaries be drawn with appeal-to-consequences? Wasn't the problem just with bad (motivated, shortsighted) appeals to consequences? Agents categorize in order to make decisions. The best classifier for an application depends on the costs and benefits. As a classic example, it's very important for evolved prey animals to avoid predators, so it makes sense for their predator-detection classifiers to be configured such that they jump away from every rustling in the bushes, even if it's usually not a predator.
@@ -670,7 +682,7 @@ I said I would bite that bullet: yes! Yes, I was trying to figure out whether I

[TODO: plan to reach out to Rick]

-[TODO:
+[TODO: December tussle with Scott, and, a Christmas party—

Scott replies on 21 December https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist?commentId=LJp2PYh3XvmoCgS6E

> since these are not about factual states of the world (eg what the definition of "lie" REALLY is, in God's dictionary) we have nothing to make those decisions on except consequences
@@ -683,10 +695,8 @@ people reading funny GPT-2 quotes

A MIRI researcher sympathetically told me that it would be sad if I had to leave the Bay Area, which I thought was nice. There was nothing about the immediate conversational context to suggest that I might have to leave the Bay, but I guess by this point, my existence had become a context.

-motivation deflates after Christmas victory
-5 Jan memoir as nuke
-]
-
+memoir motivation deflates after Christmas victory
+5 Jan memoir as nuke]

-------

@@ -740,8 +750,7 @@ Given that I spent so many hours on this little research/writing project in earl

https://slatestarcodex.com/2020/09/11/update-on-my-situation/
]

-[TODO: "out of patience" email
-
+[TODO: "out of patience" email]

> To: Eliezer Yudkowsky <[redacted]>
> Cc: Anna Salamon <[redacted]>

is make this simple thing established "rationalist" knowledge:

@@ -801,10 +810,9 @@

[TODO: Sep 2020 categories clarification from EY—victory?!
https://www.facebook.com/yudkowsky/posts/10158853851009228
_ex cathedra_ statement that gender categories are not an exception to the rule, only 1 year and 8 months after asking for it
-
]

-[TODO: briefly mention breakup with Vassar group]
+[TODO: Sasha disaster, breakup with Vassar group]

[TODO: "Unnatural Categories Are Optimized for Deception"

@@ -817,7 +825,6 @@ Embedded agency means that the AI shouldn't have to fundamentally reason differe

somehow accuracy seems more fundamental than power or resources ... could that be formalized?
]
-
And really, that _should_ have been the end of the story. At the trifling cost of two years of my life, we finally got a clarification from Yudkowsky that you can't define the word _woman_ any way you like. I didn't think I was entitled to anything more than that. I was satisfied. I still published "Unnatural Categories Are Optimized for Deception" in January 2021, but if I hadn't been further provoked, I wouldn't have had occasion to continue waging the robot-cult religious civil war.

[TODO: NYT affair and Brennan link

@@ -1100,8 +1107,6 @@ Let's recap.

* ...
]
-
-
I _never_ expected to end up arguing about something so _trivial_ as the minutiae of pronoun conventions (which no one would care about if historical contingencies of the evolution of the English language hadn't made them a Schelling point and typographical attack surface for things people do care about). The conversation only ended up here after a series of derailings.
At the start, I was _trying_ to say something substantive about the psychology of straight men who wish they were women. _After it's been pointed out_, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female counterpart" and "guy in 2016 Berkeley who identifies as a trans woman" are the _same guy_.
@@ -1157,25 +1162,23 @@ Scott Alexander chose Feelings, but I can't really hold that against him, becaus

Eliezer Yudkowsky ... did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that he knows to be unambiguously false.

-
-
-
+[TODO— finish Yudkowsky trying to be a religious leader

Eliezer Yudkowsky is _absolutely_ trying to be a religious leader.

If Eliezer Yudkowsky can't _unambiguously_ choose Truth over Feelings, _then Eliezer Yudkowsky is a fraud_.
]

-
-[TODO section stakes, cooperation
-
-at least Sabbatai Zevi had an excuse: his choices were to convert to Islam or be impaled https://en.wikipedia.org/wiki/Sabbatai_Zevi#Conversion_to_Islam
+[TODO section existential stakes, cooperation]

> [_Perhaps_, replied the cold logic](https://www.yudkowsky.net/other/fiction/the-sword-of-good). _If the world were at stake._
>
> _Perhaps_, echoed the other part of himself, _but that is not what was actually happening._

+[TODO: social justice and defying threats
+
+at least Sabbatai Zevi had an excuse: his choices were to convert to Islam or be impaled https://en.wikipedia.org/wiki/Sabbatai_Zevi#Conversion_to_Islam
+]

I like to imagine that they have a saying out of dath ilan: once is happenstance; twice is coincidence; _three times is hostile optimization_.
@@ -1292,23 +1295,16 @@ I don't doubt Yudkowsky could come up with some clever casuistry why, _technical

[TODO: elaborate on how 2007!Yudkowsky and 2021!Xu are saying the opposite things if you just take a plain-language reading and consider, not whether individual sentences can be interpreted as "true", but what kind of _optimization_ the text is doing to the behavior of receptive readers]

+[TODO: body odor anecdote]
+
[TODO: if he's reading this, win back respect— reply, motherfucker]

[TODO: the Death With Dignity era

"Death With Dignity" isn't really an update; he used to refuse to give a probability, and now he says the probability is ~0

-https://twitter.com/esyudkowsky/status/1164332124712738821
-> I unfortunately have had a policy for over a decade of not putting numbers on a few things, one of which is AGI timelines and one of which is *non-relative* doom probabilities. Among the reasons is that my estimates of those have been extremely unstable.
-
-
-
/2017/Jan/from-what-ive-tasted-of-desire/
]

-I don't, actually, know how to prevent the world from ending. Probably we were never going to survive. (The cis-human era of Earth-originating intelligent life wasn't going to last forever, and it's hard to exert detailed control over what comes next.) But if we're going to die either way, I think it would be _more dignified_ if Eliezer Yudkowsky were to behave as if he wanted his faithful students to be informed. Since it doesn't look like we're going to get that, I think it's _more dignified_ if his faithful students _know_ that he's not behaving like he wants us to be informed. And so one of my goals in telling you this long story about how I spent (wasted?)
the last six years of my life, is to communicate the moral that
-
-and that this is a _problem_ for the future of humanity, to the extent that there is a future of humanity.
-
-Is that a mean thing to say about someone to whom I owe so much? Probably. But he didn't create me to not say mean things. If it helps—as far as _I_ can tell, I'm only doing what he taught me to do in 2007–9: [carve reality at the joints](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries), [speak the truth even if your voice trembles](https://www.lesswrong.com/posts/pZSpbxPrftSndTdSf/honesty-beyond-internal-truth), and [make an extraordinary effort](https://www.lesswrong.com/posts/GuEsfTpSDSbXFiseH/make-an-extraordinary-effort) when you've got [Something to Protect](https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect).
+[TODO: regrets and wasted time]

diff --git a/notes/a-hill-email-review.md b/notes/a-hill-email-review.md
index ad4d4af..ecb1a00 100644
--- a/notes/a-hill-email-review.md
+++ b/notes/a-hill-email-review.md
@@ -236,6 +236,7 @@ me—call with Michael, the Pope surely knows that he doesn't really have a dir
18 Jul: my accusation of mis-citing Ozy was wrong
20 Jul: me to Anna and Steven about LW mods colluding to protect feelings; "basically uninterested in the mounting evidence that your entire life's work is a critical failure?"
20 Jul: Michael—Court language is the language that we have for "you don't have the ethical option of non-engagement with the complaints that are being made"
+> We *also* need to develop skill in the use of SJW style blamey language, like the blamey language about feelings that was being used *on* us harder and to a great extent first, while we were acting under mistake theory assumptions.
23 Jul: "epistemic defense" meeting
24-25 Jul: Michael Vassar and the theory of optimal gossip
Kelsey thinks the problem with "threatening schism" is that it's fundamentally about Vassar threatening to stir shit up until people stop socially excluding him for his bad behavior

diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md
index 7de2d76..64b1680 100644
--- a/notes/a-hill-of-validity-sections.md
+++ b/notes/a-hill-of-validity-sections.md
@@ -1,15 +1,7 @@
-on deck—
-_ Let's recap
-_ If he's reading this ...
-_ Perhaps if the world were at stake
-_ ¶ about social justice and defying threats
-_ ¶ about body odors
-_ regrets and wasted time
-_ talk about the 2019 Christmas party
-_ excerpt 2nd "out of patience" email
-
with internet available—
+_ https://www.lesswrong.com/posts/QB9eXzzQWBhq9YuB8/rationalizing-and-sitting-bolt-upright-in-alarm#YQBvyWeKT8eSxPCmz
+_ Ben on "community": http://benjaminrosshoffman.com/on-purpose-alone/
+_ check date of univariate fallacy Tweet and Kelsey Facebook comment
_ Soares "excited" example
_ EA Has a Lying Problem
_ when did I ask Leon about getting easier tasks?
@@ -37,9 +29,11 @@
_ retrieve comment on pseudo-lies post in which he says it's OK for me to comment

far editing tier—
+_ tie off Anna's plot arc?
_ quote one more "Hill of Meaning" Tweet emphasizing fact/policy distinction
_ conversation with Ben about physical injuries (this is important because it explains where the "cut my dick off" rhetoric came from)
_ context of his claim to not be taking a stand
+_ clarify "Merlin didn't like Vassar" example about Mike's name
_ rephrase "gamete size" discussion to make it clearer that Yudkowsky's proposal also implicitly requires people to agree about the clustering thing
_ smoother transition between "deliberately ambiguous" and "was playing dumb"; I'm not being paranoid for attributing political motives to him, because he told us that he's doing it
_ when I'm too close to verbatim-quoting someone's email, actually use a verbatim quote and put it in quotes
@@ -1253,11 +1247,11 @@ Yudkowsky did [quote-Tweet Colin Wright on the univariate fallacy](https://twitt

"Univariate fallacy" also a concession (which I got to cite in "Schelling Categories")

-https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/
+

"Yes Requires the Possibility of No" 19 May https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019

-scuffle on LessWrong FAQ 31 May https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for#iqEEme6M2JmZEXYAk
+scuffle on LessWrong FAQ 31 May

"epistemic defense" meeting
@@ -1297,11 +1291,19 @@ I was _furious_ at "Against Lie Inflation"—oh, so _now_ you agree that making

]

-[TODO: Yudkowsky throwing NRx under the bus; tragedy of recursive silencing
-15 Sep Glen Weyl apology
-]
Scott said he liked "monastic rationalism _vs_. lay rationalism" as a frame for the schism Ben was proposing. (I wish I could use this line)

-I really really want to maintain my friendship with Anna despite the fact that we're de facto political enemies now. (And similarly with, e.g., Kelsey, who is like a sister-in-law to me (because she's Merlin Blume's third parent, and I'm Merlin's crazy racist uncle).) \ No newline at end of file
+I really really want to maintain my friendship with Anna despite the fact that we're de facto political enemies now. (And similarly with, e.g., Kelsey, who is like a sister-in-law to me (because she's Merlin Blume's third parent, and I'm Merlin's crazy racist uncle).)
+
+
+https://twitter.com/esyudkowsky/status/1164332124712738821
+> I unfortunately have had a policy for over a decade of not putting numbers on a few things, one of which is AGI timelines and one of which is *non-relative* doom probabilities. Among the reasons is that my estimates of those have been extremely unstable.
+
+
+I don't, actually, know how to prevent the world from ending. Probably we were never going to survive. (The cis-human era of Earth-originating intelligent life wasn't going to last forever, and it's hard to exert detailed control over what comes next.) But if we're going to die either way, I think it would be _more dignified_ if Eliezer Yudkowsky were to behave as if he wanted his faithful students to be informed. Since it doesn't look like we're going to get that, I think it's _more dignified_ if his faithful students _know_ that he's not behaving like he wants us to be informed. And so one of my goals in telling you this long story about how I spent (wasted?) the last six years of my life, is to communicate the moral that
+
+and that this is a _problem_ for the future of humanity, to the extent that there is a future of humanity.
+ +Is that a mean thing to say about someone to whom I owe so much? Probably. But he didn't create me to not say mean things. If it helps—as far as _I_ can tell, I'm only doing what he taught me to do in 2007–9: [carve reality at the joints](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries), [speak the truth even if your voice trembles](https://www.lesswrong.com/posts/pZSpbxPrftSndTdSf/honesty-beyond-internal-truth), and [make an extraordinary effort](https://www.lesswrong.com/posts/GuEsfTpSDSbXFiseH/make-an-extraordinary-effort) when you've got [Something to Protect](https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect).