From: M. Taylor Saotome-Westlake
Date: Thu, 16 Feb 2023 03:23:48 +0000 (-0800)
Subject: memoir: pie-splitting contest
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=32197bef6192a6912b6099b3378a2dc4878db474;p=Ultimately_Untrue_Thought.git

memoir: pie-splitting contest
---

diff --git a/content/drafts/standing-under-the-same-sky.md b/content/drafts/standing-under-the-same-sky.md
index edddfcb..3b08f94 100644
--- a/content/drafts/standing-under-the-same-sky.md
+++ b/content/drafts/standing-under-the-same-sky.md
@@ -99,19 +99,34 @@ So, naïvely, doesn't Yudkowsky's "personally prudent to post your agreement wit
 
 I can think of two reasons why the naïve objection might fail. (And who can say but that a neutral expert witness on decision theory wouldn't think of more?)
 
-First, the true decision theory is subtler than "defy anything that you can commonsensically pattern-match as looking like 'extortion'"; the case for resisting extortion specifically rests on there existing a subjunctive dependence between your decision and the extortionist's decision (they threaten _because_ you'll give in, or don't bother _because_ you won't), and the relevant subjunctive dependence doesn't obviously pertain in the real-life science intellectual _vs._ social justice mob match-up. If the mob has been trained from past experience to predict that their targets will give in, should you defy them now in order to somehow make your current situation "less real"?[^emerson] Depending on the correct theory of logical counterfactuals, the right stance might be ["We don't negotiate with terrorists, but we do appease bears"](/2019/Dec/political-science-epigrams/) (because the bear's response isn't calculated based on our response), and the forces of political orthodoxy might be relevantly bear-like.
+First, the true decision theory is subtler than "defy anything that you can commonsensically pattern-match as looking like 'extortion'"; the case for resisting extortion specifically rests on there existing a subjunctive dependence between your decision and the extortionist's decision: they threaten _because_ you'll give in, or don't bother _because_ you won't.
 
-[^emerson]: I remember back in 'aught-nine, Tyler Emerson was caught embezzling funds from the Singularity Institute, and SingInst made a point of prosecuting him on decision-theoretic grounds, when a lot of other nonprofits would have quietly covered it up to spare themselves the embarrassment.
+Okay, but then how do I compute this "subjunctive dependence" thing? Presumably it has something to do with the extortionist's decisionmaking process including a model of the target. How good does that model have to be for it to "count"?
 
-On the other hand, the relevant subjunctive dependence doesn't obviously _not_ pertain, either! Parsing social justice as an agentic "threat" rather than a non-agentic obstacle like an avalanche, does seem to line up with the fact that people punish heretics (who dissent from an ideological group) more than infidels (who were never part of the group to begin with), _because_ heretics are more extortable—more vulnerable to social punishment from the original group.
+I don't know—and if I don't know, I can't say that the relevant subjunctive dependence obviously pertains in the real-life science intellectual _vs._ social justice mob match-up. If the mob has been trained from past experience to predict that their targets will give in, should you defy them now in order to somehow make your current predicament "less real"? Depending on the correct theory of logical counterfactuals, the right stance might be "We don't negotiate with terrorists, but [we do appease bears](/2019/Dec/political-science-epigrams/) and avoid avalanches" (because neither the bear's nor the avalanche's behavior is calculated based on our response), and the forces of political orthodoxy might be relevantly bear- or avalanche-like.
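+
+To make that concrete, here's a toy payoff model in Python (my illustration, with made-up numbers; the open question above is precisely whether the `threatener_predicts_you` flag should be True for the mob): if the threatener's behavior depends on a prediction of your policy, defiance pays, and if it doesn't, defiance just eats the damage.
+
+```python
+# Toy expected-payoff model of "give in" vs. "never give in".
+# (Illustrative numbers: capitulating is bad, being punished is worse.)
+CAVE_COST, FIGHT_COST = -10, -50
+
+def payoff(policy_caves: bool, threatener_predicts_you: bool) -> int:
+    """Return the target's payoff under a fixed policy.
+
+    A predictor-threatener only bothers threatening targets it predicts
+    will cave (the subjunctive dependence); a bear or avalanche
+    "threatens" regardless of the target's policy.
+    """
+    threatened = policy_caves if threatener_predicts_you else True
+    if not threatened:
+        return 0  # a defiant target of a predictor never gets threatened
+    return CAVE_COST if policy_caves else FIGHT_COST
+
+# Against a predictor, the defiant policy wins by never being targeted:
+assert payoff(False, True) > payoff(True, True)    # 0 > -10
+# Against a bear or avalanche, defiance just eats the damage:
+assert payoff(False, False) < payoff(True, False)  # -50 < -10
+```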
 
+On the other hand, the relevant subjunctive dependence doesn't obviously _not_ pertain, either! Yudkowsky does seem to endorse commonsense pattern-matching to "extortion" in contexts like nuclear diplomacy. Or I remember back in 'aught-nine, Tyler Emerson was caught embezzling funds from the Singularity Institute, and SingInst made it a point of pride to prosecute on decision-theoretic grounds, when a lot of other nonprofits would have quietly and causal-decision-theoretically covered it up to spare themselves the embarrassment. Parsing social justice as an agentic "threat" rather than a non-agentic obstacle like an avalanche, does seem to line up with the fact that people punish heretics (who dissent from an ideological group) more than infidels (who were never part of the group to begin with), _because_ heretics are more extortable—more vulnerable to social punishment from the original group.
 
+Which brings me to the second reason the naïve anti-extortion argument might fail: [what counts as "extortion" depends on the relevant "property rights", what the "default" action is](https://www.lesswrong.com/posts/Qjaaux3XnLBwomuNK/countess-and-baron-attempt-to-define-blackmail-fail). If having free speech is the default, being excluded from the dominant coalition for defying the orthodoxy could be construed as extortion. But if _being excluded from the coalition_ is the default, maybe toeing the line of orthodoxy is the price you need to pay in order to be included.
+
+Yudkowsky has [a proposal for how bargaining should work between agents with different notions of "fairness"](https://www.lesswrong.com/posts/z2YwmzuT7nWx62Kfh/cooperating-with-agents-with-different-ideas-of-fairness).
+
+Suppose Edgar and Fiona are splitting a pie, and if they can't agree on how to split it, they have to fight over it, destroying some of the pie in the process. Edgar thinks the fair outcome is that they each get half the pie. Fiona claims that she contributed more ingredients to the baking process and that it's therefore fair that she gets 75% of the pie, pledging to fight if offered anything less.
 
- * Yudkowsky has an algorithm for bargaining between agents with different notions of "fairness": you'd prefer a fair split on the Pareto boundary, but you should be willing to except an unfair split, as long as the other guy also does worse—all the way to the Nash equilibrium https://www.lesswrong.com/posts/z2YwmzuT7nWx62Kfh/cooperating-with-agents-with-different-ideas-of-fairness
+If Edgar were a causal decision theorist, he would agree to the 75/25 split, if 25% of the pie is better than fighting. Yudkowsky argues that this is irrational: if Edgar is willing to agree to a 75/25 split, then Fiona has no incentive not to adopt such a self-favoring definition of "fairness". (And _vice versa_ if Fiona's concept of fairness is the "correct" one.)
+
+Instead, Yudkowsky argues, Edgar should behave so as to only do worse than the fair outcome if Fiona _also_ does worse: for example, by accepting a 32/48 split (where 100−(32+48) = 20% of the pie has been destroyed by the costs of fighting) or an 18/42 split (where 40% of the pie has been destroyed).
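+
+As a minimal sketch of that acceptance rule in Python (mine, not Yudkowsky's formalization; the function name, the 50/50 fair point, and representing shares as fractions of one pie are all illustrative assumptions):
+
+```python
+def edgar_accepts(edgar_share: float, fiona_share: float, fair: float = 0.5) -> bool:
+    """Refuse to let a self-favoring "fairness" claim pay.
+
+    Edgar takes the fair outcome (or better) whenever it's offered, and
+    agrees to do worse than fair only if Fiona also ends up below *her*
+    fair share (say, because fighting destroyed part of the pie).
+    """
+    if edgar_share >= fair:
+        return True  # at least fair to Edgar: no objection
+    # Unfair to Edgar: accept only if Fiona's overreach isn't profiting.
+    return fiona_share < 1.0 - fair
+
+# The worked examples from the text, with a 50/50 fair split:
+assert not edgar_accepts(0.25, 0.75)  # refuse Fiona's 75/25 demand
+assert edgar_accepts(0.32, 0.48)      # 20% of the pie destroyed: acceptable
+assert edgar_accepts(0.18, 0.42)      # 40% destroyed: still acceptable
+```
+
+The point of agreeing to the destructive 32/48 and 18/42 outcomes is that they leave Fiona with less than the 50% she could have had by settling for the fair split, so her 75% "fairness" claim doesn't pay.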
+
+[TODO: defying threats, cont'd—
+ * How does this map onto the present situation, though? Does he think he's playing Nash, or does he think he's getting gains-from-trade? (Either figure this out, or write some smart sentences about my confusion)
+
+https://twitter.com/zackmdavis/status/1206718983115698176
+> 1940s war criminal defense: "I was only following orders!"
+> 2020s war criminal defense: "I was only participating in a bad Nash equilibrium that no single actor can defy unilaterally!"
+
+
  * I asked him why he changed his mind about voting
  * "Vote when you're part of a decision-theoretic logical cohort large enough to change things, or when you're worried about your reputation and want to be honest about whether you voted."
  * So maybe he doesn't think he's part of a decision-theoretic logical cohort large enough to resist the egregore, and he's also not worried about his reputation for resisting the egregore
@@ -124,6 +139,8 @@ Curtis Yarvin [likes to compare](/2020/Aug/yarvin-on-less-wrong/) Yudkowsky to [
 
 ]
 
+-----
+
 I like to imagine that they have a saying out of dath ilan: once is happenstance; twice is coincidence; _three times is hostile optimization_.
 
 I could forgive him for taking a shit on d4 of my chessboard (["at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228)).