From: M. Taylor Saotome-Westlake Date: Fri, 19 Aug 2022 18:27:18 +0000 (-0700) Subject: check in ... X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=9b6d3ad114a993a13fe4424ffbb0c6476560699a;p=Ultimately_Untrue_Thought.git check in ... --- diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md index e18172d..f392d89 100644 --- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md @@ -286,7 +286,7 @@ Similarly, once someone is known to [vary](https://slatestarcodex.com/2014/08/14 Well, you're still _somewhat_ better off listening to them than the whistling of the wind, because the wind in various possible worlds is presumably uncorrelated with most of the things you want to know about, whereas [clever arguers](https://www.lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence) who [don't tell explicit lies](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/) are constrained in how much they can mislead you. But it seems plausible that you might as well listen to any other arbitrary smart person with a bluecheck and 20K followers. I know you're very busy; I know your work's important—but it might be a useful exercise, for Yudkowsky to think of what he would _actually say_ if someone with social power _actually did this to him_ when he was trying to use language to reason about Something he had to Protect? -(Note, my claim here is _not_ that "Pronouns aren't lies" and "Scott Alexander is not a racist" are similarly misinformative. I'm saying that whether "You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning" makes sense _as a response to_ "X isn't a Y" shouldn't depend on the specific values of X and Y. 
Yudkowsky's behavior the other month made it look like he thought that "You're not standing in defense of truth if ..." _was_ a valid response when, say, X = "Caitlyn Jenner" and Y = "woman." I was saying that, whether or not it's a valid response, we should, as a matter of local validity, apply the _same_ standard when X = "Scott Alexander" and Y = "racist.") +(Note, my claim here is _not_ that "Pronouns aren't lies" and "Scott Alexander is not a racist" are similarly misinformative. Rather, I'm saying that, as a matter of [local validity](https://www.lesswrong.com/posts/WQFioaudEH8R7fyhm/local-validity-as-a-key-to-sanity-and-civilization), whether "You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning" makes sense _as a response to_ "X isn't a Y" shouldn't depend on the specific values of X and Y. Yudkowsky's behavior the other month made it look like he thought that "You're not standing in defense of truth if ..." _was_ a valid response when, say, X = "Caitlyn Jenner" and Y = "woman." I was saying that, whether or not it's a valid response, we should apply the _same_ standard when X = "Scott Alexander" and Y = "racist.") Anyway, without disclosing any _specific content_ from private conversations with Yudkowsky that may or may not have happened, I think I _am_ allowed to say that our posse did not get the kind of engagement from Yudkowsky that we were hoping for. (That is, I'm Glomarizing over whether Yudkowsky just didn't reply, or whether he did reply and our posse was not satisfied with the response.) @@ -298,7 +298,7 @@ One of Alexander's [most popular _Less Wrong_ posts ever had been about the nonc _Even if_ you're opposed to abortion, or have negative views about the historical legacy of Dr. King, this isn't the right way to argue.
If you call Janie a _murderer_, that causes me to form a whole bunch of implicit probabilistic expectations—about Janie's moral character, about the suffering of the victim whose hopes and dreams were cut short, about Janie's relationship with the law, _&c._—most of which get violated when you subsequently reveal that the murder victim was a four-week-old fetus. -Thus, we see that Alexander's own "The Worst Argument in the World" is really complaining about the _same_ category-gerrymandering move that his "... Not Man for the Categories" comes out in favor of. We would not let someone get away with declaring, "I ought to accept an unexpected abortion or two deep inside the conceptual boundaries of what would normally not be considered murder if it'll save someone's life." Maybe abortion _is_ wrong, but you need to make that case _on the merits_, not by linguistic fiat. +Thus, we see that Alexander's own "The Worst Argument in the World" is really complaining about the _same_ category-gerrymandering move that his "... Not Man for the Categories" comes out in favor of. We would not let someone get away with declaring, "I ought to accept an unexpected abortion or two deep inside the conceptual boundaries of what would normally not be considered murder if it'll save someone's life." Maybe abortion _is_ wrong and relevantly similar to the central sense of "murder", but you need to make that case _on the merits_, not by linguistic fiat. ... Scott still didn't get it. He said that he didn't see why he shouldn't accept one unit of categorizational awkwardness in exchange for sufficiently large utilitarian benefits. I started drafting a long reply—but then I remembered that in recent discussion with my posse about what we might have done wrong in our attempted outreach to Yudkowsky, the idea had come up that in-person meetings are better for updateful disagreement-resolution. Would Scott be up for meeting in person some weekend? Non-urgent.
Ben would be willing to moderate, unless Scott wanted to suggest someone else, or no moderator. @@ -310,7 +310,7 @@ My dayjob boss made it clear that he was expecting me to have code for my curren But I was just in so much (psychological) pain. Or at least—as I noted in one of a series of emails to my posse that night—I felt motivated to type the sentence, "I'm in so much (psychological) pain." I'm never sure how to interpret my own self-reports, because even when I'm really emotionally trashed (crying, shaking, randomly yelling, _&c_.), I think I'm still noticeably _incentivizable_: if someone were to present a credible threat (like slapping me and telling me to snap out of it), then I would be able to calm down: there's some sort of game-theory algorithm in the brain that subjectively feels genuine distress (like crying or sending people too many hysterical emails) but only when it can predict that it will be either rewarded with sympathy or at least tolerated. (Kevin Simler: [tears are a discount on friendship](https://meltingasphalt.com/tears/).)
-I [tweeted a Sequences quote](https://twitter.com/zackmdavis/status/1107874587822297089) to summarize how I felt (the mention of @ESYudkowsky being to attribute credit; I figured Yudkowsky had enough followers that he probably wouldn't see a notification): +I [tweeted a Sequences quote](https://twitter.com/zackmdavis/status/1107874587822297089) to summarize how I felt (the mention of @ESYudkowsky being to attribute credit, I told myself; I figured Yudkowsky had enough followers that he probably wouldn't see a notification): > "—and if you still have something to protect, so that you MUST keep going, and CANNOT resign and wisely acknowledge the limitations of rationality— [1/3] > @@ -356,7 +356,8 @@ Maybe that's why I felt like I had to stand my ground and fight a culture war to * We need to figure out how to win against bad faith arguments ] -[TODO: Jessica joins the coalition; she tell me about her time at MIRI (link to Zoe-piggyback and Occupational Infohazards); Michael said that me and Jess together have more moral authority] +[TODO: Jessica joins the coalition; she tells me about her time at MIRI (link to Zoe-piggyback and Occupational Infohazards); +Michael said that me and Jess together have more moral authority] [TODO: wrapping up with Scott; Kelsey; high and low Church https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/] diff --git a/notes/a-hill-email-review.md b/notes/a-hill-email-review.md index 1c9da00..993cec3 100644 --- a/notes/a-hill-email-review.md +++ b/notes/a-hill-email-review.md @@ -252,6 +252,7 @@ me—That's what I would have thought, too! I consider this falsified [...]
twen 24 Sep: "Heads I Win" was really successful; reasons why you might think this is good besides measuring idea quality democratically 30 Sep: the third time that someone has responded to my "help help, everyone is being stupid about the philosophy of language for transparently political reasons, and Michael Vassar's gang are the only people backing me up on this; what the fuck is going on?!" sob story with (paraphrasing), "Your philosophy hobbyhorse is OK, but Michael's gang is crazy." [...] Jessica's assessment from earlier: "Another classic political tactic: praise one member of an alliance while blaming another, to split cracks in the alliance, turning former allies against each other." / Where the three incidents seemed more coherent on the "praise Zack, diss his new friends" aspect, than on the specific content of the disses, whereas in the worlds where Michael's gang is just crazy, I would expect the content craziness allegations to correlate more Oct: model sync with Jessica/Alyssa/Lex/Sarah +19 Oct: https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist 3 Nov: actually good criticism from Abram at MIRI!!! Isn't the problem with bad (shortsighted, motivated) appeal-to-consequences, rather than appeal-to-consequences in general? 
example: predator-avoidance diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md index 59c6b63..d1f4fc3 100644 --- a/notes/a-hill-of-validity-sections.md +++ b/notes/a-hill-of-validity-sections.md @@ -13,6 +13,7 @@ _ screenshot Rob's Facebook comment which I link _ explain first use of Center for Applied Rationality _ erasing agency of Michael's friends, construed as a pawn _ Anna thought badmouthing Michael was OK by Michael's standards +_ chat with "Wilhelm" during March 2019 minor psych episode people to consult before publishing, for feedback or right of objection— _ Iceman @@ -1163,3 +1164,6 @@ https://twitter.com/ESYudkowsky/status/1356493440041684993 We don't believe in privacy > Privacy-related social norms are optimized for obscuring behavior that could be punished if widely known [...] an example of a paradoxical norm that is opposed to enforcement of norms-in-general"). https://unstableontology.com/2021/04/12/on-commitments-to-anti-normativity/ + +Sucking up to the Blue Egregore would make sense if you _knew_ that was the critical resource +https://www.lesswrong.com/posts/mmHctwkKjpvaQdC3c/what-should-you-change-in-response-to-an-emergency-and-ai