From: M. Taylor Saotome-Westlake Date: Sun, 4 Sep 2022 02:09:55 +0000 (-0700) Subject: memoir: Abram Demski on wireheading X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=5a9c465cdbf6f41b93f578f1710dcc5e6c89b77d;p=Ultimately_Untrue_Thought.git memoir: Abram Demski on wireheading --- diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md index 3285209..5bea889 100644 --- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md @@ -527,13 +527,17 @@ mutualist pattern where Michael by himself isn't very useful for scholarship (he 15 Sep Glen Weyl apology ] -[TODO: actually good criticism from Abram] -[TODO: Ziz incident; more upset about gender validation than the felony charges, which were equally ridiculous and more obviously linked to physical violence complicity with injustice "Ziz isn't going to be a problem for you anymore"] +In November, I received an interesting reply on my philosophy-of-categorization thesis from MIRI researcher Abram Demski. Abram asked: ideally, shouldn't all conceptual boundaries be drawn with appeal-to-consequences? Wasn't the problem just with bad (motivated, shortsighted) appeals to consequences? Agents categorize in order to make decisions. The best classifier for an application depends on the costs and benefits. As a classic example, it's very important for evolved prey animals to avoid predators, so it makes sense for their predator-detection classifiers to be configured such that they jump away from every rustling in the bushes, even if it's usually not a predator. + +I had thought of the "false-positives are better than false-negatives when detecting predators" example as being about the limitations of evolution as an AI designer: messy evolved animal brains don't bother to track probability and utility separately the way a cleanly-designed AI could.
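To make the cost-benefit point concrete, here's a toy expected-cost calculation (the particular cost numbers are made up for illustration, not taken from any real model):

```python
def flee_threshold(cost_false_alarm, cost_missed_predator):
    """Probability of 'predator' above which fleeing minimizes expected cost.

    Fleeing needlessly costs `cost_false_alarm`; staying put when there is
    a predator costs `cost_missed_predator`. Fleeing beats staying exactly
    when p * cost_missed_predator > (1 - p) * cost_false_alarm, which
    rearranges to the threshold returned here.
    """
    return cost_false_alarm / (cost_false_alarm + cost_missed_predator)

# If a needless jump costs 1 unit of energy but a missed predator costs 100,
# the prey should jump whenever p(predator) exceeds about 0.0099: at nearly
# every rustling in the bushes, even though it's usually not a predator.
print(flee_threshold(1, 100))
```

With symmetric costs the threshold is the familiar 0.5; the asymmetry, not any defect in the probabilities themselves, is what pushes the jump-decision so far below it.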
As I had explained in "... Boundaries?", it made sense for _what_ variables you paid attention to, to be motivated by consequences. But _given_ the subspace that's relevant to your interests, you want to run an epistemically legitimate clustering algorithm on the data you see there, which depends on the data, not your values. The only reason value-dependent gerrymandered category boundaries seem like a good idea if you're not careful about philosophy is that drawing them amounts to _wireheading_. Ideal probabilistic beliefs shouldn't depend on consequences. + +Abram didn't think the issue was so clear-cut. Where do "probabilities" come from, in the first place? The reason we expect something like Bayesianism to be an attractor among self-improving agents is _because_ probabilistic reasoning is broadly useful: epistemology can be _derived_ from instrumental concerns. He agreed that severe wireheading issues potentially arise if you allow consequentialist concerns to affect your epistemics. + +But the alternative view had its own problems. If your AI consists of a consequentialist module that optimizes for utility in the world, and an epistemic module that optimizes for the accuracy of its beliefs, that's _two_ agents, not one: how could that be reflectively coherent? You could, perhaps, bite the bullet here, for fear that consequentialism doesn't tile and that wireheading is inevitable. On this view, Abram explained, "Agency is an illusion which can only be maintained by crippling agents and giving them a split-brain architecture where an instrumental task-monkey does all the important stuff while an epistemic overseer supervises." Whether this view was ultimately tenable or not, it did show that trying to forbid appeals-to-consequences entirely led to strange places. I didn't immediately have an answer for Abram, but I was grateful for the engagement.
(Abram was clearly addressing the real philosophical issues, and not just trying to mess with me the way almost everyone else in Berkeley was trying to mess with me.) -In November, I wrote to Ben about how I was still stuck on writing the grief-memoir. My _plan_ had been that it should have been possibly to tell the story of the Category War while glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-[frenemies](https://en.wikipedia.org/wiki/Frenemy) warning when you're about to publicly characterize them as having been intellectually dishonest), then share it to _Less Wrong_ and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water) and not be a mentally-dominated cultist. +Also in November, I wrote to Ben about how I was still stuck on writing the grief-memoir. My _plan_ had been that it should have been possible to tell the story of the Category War while glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-[frenemies](https://en.wikipedia.org/wiki/Frenemy) warning when you're about to publicly characterize them as having been intellectually dishonest), then share it to _Less Wrong_ and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water) and not be a mentally-dominated cultist. The reason it _should_ be safe to write is that Explaining Things is Good.
It should be possible to say, "This is not a social attack; I'm not saying 'rationalists Bad, Yudkowsky Bad'; I'm just trying to carefully _tell the true story_ about why, as a matter of cause-and-effect, I've been upset this year, including addressing counterarguments for why some would argue that I shouldn't be upset, why other people could be said to be behaving 'reasonably' given their incentives, why I nevertheless wish they'd be braver and adhere to principle rather than 'reasonably' following incentives, _&c_." @@ -547,6 +551,9 @@ I said I would bite that bullet: yes! Yes, I was trying to figure out whether I (This seemed correlated with the recurring stalemated disagreement within our coordination group, where Michael/Ben/Jessica would say, "Fraud, if that word _ever_ meant anything", and while I agreed that they were pointing to an important way in which things were messed up, I was still sympathetic to the Caliphate defender's reply that the Vassarite usage of "fraud" was motte-and-baileying between vastly different senses of _fraud_; I wanted to do _more work_ to formulate a _more precise theory_ of the psychology of deception to describe exactly how things are messed up in a way that wouldn't be susceptible to the motte-and-bailey charge.) +[TODO: Ziz incident; more upset about gender validation than the felony charges, which were equally ridiculous and more obviously linked to physical violence complicity with injustice "Ziz isn't going to be a problem for you anymore"] + [TODO: a culture that has gone off the rails; my warning points to Vaniver] [TODO: plan to reach out to Rick] @@ -573,6 +580,21 @@ There's another very important part of the story that would fit around here chro [TODO: "out of patience" email] [TODO: Sep 2020 categories clarification from EY—victory?!]
+[TODO: briefly mention breakup with Vassar group] + +[TODO: "Unnatural Categories Are Optimized for Deception" + +Abram was right + +the fact that it didn't means that not tracking it can be an effective AI design! Just because evolution takes shortcuts that human engineers wouldn't doesn't mean shortcuts are "wrong" (instead, there are laws governing which kinds of shortcuts work). + +Embedded agency means that the AI shouldn't have to fundamentally reason differently about "rewriting code in some 'external' program" and "rewriting 'my own' code." In that light, it makes sense to regard "have accurate beliefs" as merely a convergent instrumental subgoal, rather than what rationality is about + +somehow accuracy seems more fundamental than power or resources ... could that be formalized? + +] + + [TODO: That should have been the end of the story, but then—he revisited the pronouns issue!!!] [TODO: based on the timing, the Feb. 2021 pronouns post was likely causally downstream of me being temporarily more salient to EY because of my highly-Liked response to his "anyone at this point that anybody who openly hates on this community generally or me personally is probably also a bad person inside" from 17 February; it wasn't gratuitously out of the blue] diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md index f0bde42..84b1a69 100644 --- a/notes/a-hill-of-validity-sections.md +++ b/notes/a-hill-of-validity-sections.md @@ -1,7 +1,8 @@ noncontiguous on deck— X reluctance to write a memoir during 2019 X pinging Ben about memoir reluctance -_ actually good criticism from Abram +X actually good criticism from Abram +_ help from Jessica for "Unnatural Categories" _ let's recap / being put in a box _ "duly appreciated" _ if he's reading this