From: M. Taylor Saotome-Westlake
Date: Sat, 11 Mar 2023 05:40:25 +0000 (-0800)
Subject: check in
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=3abc0f4e2febaf97fe5afe042377067350011dd5;p=Ultimately_Untrue_Thought.git
check in
---
diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md
index 97eeaec..ea1f0c2 100644
--- a/content/drafts/if-clarity-seems-like-death-to-them.md
+++ b/content/drafts/if-clarity-seems-like-death-to-them.md
@@ -173,6 +173,7 @@ Secret posse member expressed sadness about how the discussion on "The Incentive
+
[TODO—
* Jessica: scorched-earth campaign should mostly be in meatspace social reality
* my comment on emotive conjugation (https://www.lesswrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare#GaoyhEbzPJvv6sfZX)
@@ -194,7 +195,7 @@ Secret posse member expressed sadness about how the discussion on "The Incentive
* secret posse member: level of social-justice talk makes me not want to interact with this post in any way
]
-On 4 July, Scott Alexander published ["Some Clarifications on Rationalist Blogging"](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/), disclaiming any authority as a "rationalist" leader. ("I don't want to claim this blog is doing any kind of special 'rationality' work beyond showing people interesting problems [...] Insofar as [_Slate Star Codex_] makes any pretensions to being 'rationalist', it's a rationalist picnic and not a rationalist monastery.") I assumed this was inspired by Ben's request back in March that Scott "alter the beacon" so as to not confuse people about what the current-year community was. I appreciated it.
+On 4 July 2019, Scott Alexander published ["Some Clarifications on Rationalist Blogging"](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/), disclaiming any authority as a "rationalist" leader. ("I don't want to claim this blog is doing any kind of special 'rationality' work beyond showing people interesting problems [...] Insofar as [_Slate Star Codex_] makes any pretensions to being 'rationalist', it's a rationalist picnic and not a rationalist monastery.") I assumed this was inspired by Ben's request back in March that Scott "alter the beacon" so as to not confuse people about what the current-year community was. I appreciated it.
[TODO: "AI Timelines Scam"
* I still sympathize with the "mainstream" pushback against the scam/fraud/&c. language being used to include Elephant-in-the-Brain-like distortions
@@ -228,7 +229,7 @@ I still wanted to finish the memoir-post mourning the "rationalists", but I stil
In August 2019's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passing mention in "... Boundaries?": sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was of course generalized from a "pro-trans" argument that had occurred to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).)
-In September 2019's ["Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists"](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting), I presented a toy mathematical model of how censorship distorts group beliefs. I was surprised by how well-received it was (high karma, Curated within a few days, later included in the Best-of-2019 collection), especially given that it was explicitly about politics (albeit at a meta level, of course). Ben and Jessica had discouraged me from bothering when I sent them a draft.
+In September 2019's ["Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists"](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting), I presented a toy mathematical model of how censorship distorts group beliefs. I was surprised by how well-received it was (high karma, Curated within a few days, later included in the Best-of-2019 collection), especially given that it was explicitly about politics (albeit at a meta level, of course). Ben and Jessica had discouraged me from bothering when I sent them a draft. (Jessica said that it was obvious even to 10-year-olds that partisan politics distorts impressions by filtering evidence. "[D]o you think we could get a ten-year-old to explain it to Eliezer Yudkowsky?" I asked.)
In October 2019's ["Algorithms of Deception!"](https://www.lesswrong.com/posts/fmA2GJwZzYtkrAKYJ/algorithms-of-deception), I exhibited some toy Python code modeling different kinds of deception. A function that faithfully passes observations it sees as input to another function lets the second function construct a well-calibrated probability distribution. But if the first function outright fabricates evidence, or selectively omits some evidence, or gerrymanders the categories by which it interprets its observations as evidence, the second function comes up with a worse (less accurate) probability distribution.
diff --git a/content/drafts/standing-under-the-same-sky.md b/content/drafts/standing-under-the-same-sky.md
index a44acad..c0c5a16 100644
--- a/content/drafts/standing-under-the-same-sky.md
+++ b/content/drafts/standing-under-the-same-sky.md
@@ -550,7 +550,7 @@ Is that ... _not_ evidence of harm to the community? If that's not community-har
On 1 April 2022, Yudkowsky published ["MIRI Announces New 'Death With Dignity' Strategy"](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy), a cry of despair in the guise of an April Fool's Day post. MIRI didn't know how to align a superintelligence, no one else did either, but AI capabilities work was continuing apace. With no credible plan to avert almost-certain doom, the most we could do now was to strive to give the human race a more dignified death, as measured in log-odds of survival: an alignment effort that doubled the probability of a valuable future from 0.0001 to 0.0002 was worth one information-theoretic bit of dignity.
-In a way, "Death With Dignity" isn't really an update. Yudkowsky had always refused to give a probability of success, while maintaining that Friendly AI was ["impossible"](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible). Now, he says the probability is approximately zero.
+In a way, "Death With Dignity" isn't really an update.
Yudkowsky had always refused to name a "win" probability, while maintaining that Friendly AI was ["impossible"](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible). Now, he says the probability is approximately zero.
Paul Christiano, who has a much more optimistic picture of humanity's chances, nevertheless said that he liked the "dignity" heuristic. I like it, too. It—takes some of the pressure off.
I [made an analogy](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy?commentId=R59aLxyj3rvjBLbHg): your plane crashed in the ocean. To survive, you must swim to shore. You know that the shore is west, but you don't know how far. The optimist thinks the shore is just over the horizon; we only need to swim a few miles and we'll probably make it. The pessimist thinks the shore is a thousand miles away and we will surely die. But the optimist and pessimist can both agree on how far we've swum up to this point, and that the most dignified course of action is "Swim west as far as you can."
diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md
index a68fdc3..0f8acda 100644
--- a/notes/memoir-sections.md
+++ b/notes/memoir-sections.md
@@ -8,16 +8,21 @@ marked TODO blocks—
✓ New York [pt. 6]
✓ scuffle on "Yes Requires the Possibility" [pt. 4]
✓ "Unnatural Categories Are Optimized for Deception" [pt. 4]
+✓ Eliezerfic fight: will-to-Truth vs. will-to-happiness [pt. 6]
+- regrets, wasted time, conclusion [pt. 6]
- "Lesswrong.com is dead to me" [pt. 4]
+_ Eliezerfic fight: Ayn Rand and children's morals [pt. 6]
_ AI timelines scam [pt. 4]
_ secret thread with Ruby [pt. 4]
_ progress towards discussing the real thing [pt. 4]
_ epistemic defense meeting [pt. 4]
+_ Eliezerfic fight: Big Yud tests me [pt. 6]
+_ Eliezerfic fight: derail with lintamande [pt. 6]
+_ Eliezerfic fight: knives, and showing myself out [pt. 6]
_ reaction to Ziz [pt. 4]
_ confronting Olivia [pt. 2]
_ State of Steven [pt. 4]
_ Somni [pt. 4]
-_ rude maps [pt. 4]
_ culture off the rails; my warning points to Vaniver [pt. 4]
_ December 2019 winter blogging vacation [pt. 4]
_ plan to reach out to Rick [pt. 4]
@@ -26,9 +31,16 @@ _ out of patience email [pt. 4]
_ the hill he wants to die on [pt. 6?]
_ recap of crimes, cont'd [pt. 6]
_ lead-in to Sept. 2021 Twitter altercation [pt. 6]
-_ regrets, wasted time, conclusion [pt. 6]
+
+bigger blocks—
+_ Dolphin War finish
+_ Michael Vassar and the Theory of Optimal Gossip
+_ psychiatric disaster
+_ the story of my Feb. 2017 Facebook crusade [pt. 2]
+_ the story of my Feb./Apr. 2017 recent madness [pt. 2]
not even blocked—
+_ A/a alumna consult? [pt. 2]
_ "Even our pollution is beneficial" [pt. 6]
_ Scott Aaronson on the blockchain of science [pt. 6]
_ Re: on legitimacy and the entrepreneur; or, continuing the attempt to spread my sociopathic awakening onto Scott [pt. 2 somewhere]
@@ -37,20 +49,18 @@ _ include Wilhelm "Gender Czar" conversation? [pt. 2]
_ "EA" brand ate the "rationalism" brand—even visible in MIRI dialogues
_ Anna's heel–face turn
-bigger blocks—
-_ dath ilan and Eliezerfic fight
-_ Dolphin War finish
-_ Michael Vassar and the Theory of Optimal Gossip
-_ psychiatric disaster
-_ the story of my Feb. 2017 Facebook crusade [pt. 2]
-_ the story of my Feb./Apr. 2017 recent madness [pt. 2]
- it was actually "wander onto the AGI mailing list wanting to build a really big semantic net" (https://www.lesswrong.com/posts/9HGR5qatMGoz4GhKj/above-average-ai-scientists)
With internet available—
+_ space opera TVTrope?
+_ Word of God TVTropes page
+_ March 2017 Blanchard Tweeting my blog?
+_ bug emoji
+_ what was I replying to, re: "why you actually don't want to be a happier but less accurate predictor"?
_ Meta-Honesty critique well-received: cite 2019 review guide
_ https://www.greaterwrong.com/posts/2Ses9aB8jSDZtyRnW/duncan-sabien-on-moderating-lesswrong#comment-aoqWNe6aHcDiDh8dr
_ https://www.greaterwrong.com/posts/trvFowBfiKiYi7spb/open-thread-july-2019#comment-RYhKrKAxiQxY3FcHa
+_ relevant screenshots for Eliezerfic play-by-play
_ correct italics in quoted Eliezerfic back-and-forth
_ lc on elves and Sparashki
_ Nate would later admit that this was a mistake (or ask Jessica where)
@@ -89,6 +99,8 @@ _ Anna's claim that Scott was a target specifically because he was good, my coun
_ Yudkowsky's LW moderation policy
far editing tier—
+_ maybe current-year LW would be better if more marginal cases _had_ bounced off because of e.g. sexism
+_ footnote to explain that when I'm summarizing a long Discord conversation to taste, I might move things around into "logical" time rather than "real time"; e.g. Yudkowsky's "powerfully relevant" and "but Superman" comments were actually one right after the other; and, e.g., I'm filling in more details that didn't make it into the chat, like innate kung fu
_ re "EY is a fraud": it's a _conditional_ that he can modus tollens if he wants
_ NRx point about HBD being more than IQ, ties in with how I think the focus on IQ is distasteful, but I have political incentives to bring it up
_ "arguing for a duty to self-censorship"—contrast to my "closing thoughts" email
@@ -196,12 +208,13 @@ _ backlink only seen an escort once before (#confided-to-wilhelm)
terms to explain on first mention—
_ Civilization (context of dath ilan)
-_ Valinor
+_ Valinor (probably don't name it, actually)
_ "Caliphate"
_ "rationalist"
_ Center for Applied Rationality
_ MIRI
_ "egregore"
+_ eliezera
people to consult before publishing, for feedback or right of objection—
@@ -215,11 +228,12 @@ _ secret posse member
_ Katie (pseudonym choice)
_ Alicorn: about privacy, and for Melkor Glowfic reference link
_ hostile prereader (April, J. Beshir, Swimmer, someone else from Alicorner #drama)
-_ Kelsey (briefly)
+_ Kelsey
_ NRx Twitter bro
_ maybe SK (briefly about his name)? (the memoir might have the opposite problem (too long) from my hostile-shorthand Twitter snipes)
_ Megan (that poem could easily be about some other entomologist named Megan) ... I'm probably going to cut that §, though
_ David Xu? (Is it OK to name him in his LW account?)
+_ afford various medical procedures
marketing—
_ Twitter
@@ -2181,7 +2195,7 @@ https://www.lesswrong.com/posts/4pov2tL6SEC23wrkq/epilogue-atonement-8-8
* Maybe not? If "dignity" is a term of art for log-odds of survival, maybe self-censoring to maintain influence over what big state-backed corporations are doing is "dignified" in that sense
]
-The old vision was nine men in a brain in a box in a basement. (He didn't say _men_.)
+The old vision was nine men and a brain in a box in a basement. (He didn't say _men_.)
Subject: "I give up, I think" 28 January 2013 > You know, I'm starting to suspect I should just "assume" (choose actions conditional on the hypothesis that) that our species is "already" dead, and we're "mostly" just here because Friendly AI is humanly impossible and we're living in an unFriendly AI's ancestor simulation and/or some form of the anthropic doomsday argument goes through. This, because the only other alternatives I can think of right now are (A) arbitrarily rejecting some part of the "superintelligence is plausible and human values are arbitrary" thesis even though there seem to be extremely strong arguments for it, or (B) embracing a style of thought that caused me an unsustainable amount of emotional distress the other day: specifically, I lost most of a night's sleep being mildly terrified of "near-miss attempted Friendly AIs" that pay attention to humans but aren't actually nice, wondering under what conditions it would be appropriate to commit suicide in advance of being captured by one. Of course, the mere fact that I can't contemplate a hypothesis while remaining emotionally stable shouldn't make it less likely to be true out there in the real world, but in this kind of circumstance, one really must consider the outside view, which insists: "When a human with a history of mental illness invents a seemingly plausible argument in favor of suicide, it is far more likely that they've made a disastrous mistake somewhere, then that committing suicide is actually the right thing to do." @@ -2227,4 +2241,9 @@ https://www.goodreads.com/quotes/38764-what-are-the-facts-again-and-again-and-ag "content": "I'm afraid to even think that in the privacy of my own head, but I agree with you that is way more reasonable", "type": "Generic" -"but the ideological environment is such that a Harvard biologist/psychologist is afraid to notice blatantly obvious things in the privacy of her own thoughts, that's a really scary situation to be in (insofar as we want society's decisionmakers to be able to notice things so that they can make decisions)", \ No newline at end of file +"but the ideological environment is such that a Harvard biologist/psychologist is afraid to notice blatantly obvious things in the privacy of her own thoughts, that's a really scary situation to be in (insofar as we want society's decisionmakers to be able to notice things so that they can make decisions)", + + + +In October 2016, I messaged an alumna of my App Academy class of November 2013 (back when App Academy was still cool and let you sleep on the floor if you wanted), effectively asking to consult her expertise on feminism. "Maybe you don't want people like me in your bathroom for the same reason you're annoyed by men's behavior on trains?" + diff --git a/notes/wordcounts.txt b/notes/wordcounts.txt index 5f3ab10..1833d21 100644 --- a/notes/wordcounts.txt +++ b/notes/wordcounts.txt @@ -1,4 +1,4 @@ -wc -w 2022/* 2021/* 2020/* 2019/* 2018/* 2017/* 2016/* | sort -n -s -k1,1 +wc -w 2023/* 2022/* 2021/* 2020/* 2019/* 2018/* 2017/* 2016/* | sort -n -s -k1,1 1005 2017/the-line-in-the-sand-or-my-slippery-slope-anchoring-action-plan.md 1044 2017/lesser-known-demand-curves.md