From: M. Taylor Saotome-Westlake Date: Fri, 31 Mar 2023 03:48:23 +0000 (-0700) Subject: memoir: lit exam—the first two bluebooks X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=475377ab32ba1dc108d7d88611ee4e74de9f8028;p=Ultimately_Untrue_Thought.git memoir: lit exam—the first two bluebooks --- diff --git a/content/drafts/standing-under-the-same-sky.md b/content/drafts/standing-under-the-same-sky.md index 46d333d..8d57823 100644 --- a/content/drafts/standing-under-the-same-sky.md +++ b/content/drafts/standing-under-the-same-sky.md @@ -786,11 +786,11 @@ I didn't have that response thought through in real time. At the time, I just ag > **zackmdavis** — 11/29/2022 11:20 PM > Without particularly defending Vassar _et al._ or my bad literary criticism (sorry), _modeling the adversarial component of non-innocent errors_ (as contrasted to "had to be understood in wholly adversarial terms") seems very important. (Maybe lying is "worse" than rationalizing, but if you can't hold people culpable for rationalization, you end up with a world that's bad for broadly the same reasons that a world full of liars is bad: we can't steer the world to good states if everyone's map is full of falsehoods that locally benefitted someone.) > **Eliezer** — 11/29/2022 11:22 PM -> Rationalization sure is a huge thing! That's why I considered important to discourse upon the science of it, as was then known; and to warn people that there were more complicated tangles than that, which no simple experiment had shown yet. +> Rationalization sure is a huge thing! That's why I considered important to discourse upon the science of it, as was then known; and to warn people that there were more complicated tangles than that, which no simple experiment had shown yet. > **zackmdavis** — 11/29/2022 11:22 PM > yeah > **Eliezer** — 11/29/2022 11:23 PM -> It remains something that mortals do, and if you cut off anybody who's ever done that, you'll be left with nobody. And also importantly, people making noninnocent errors, if you accuse them of malice, will look inside themselves and correctly see that this is not how they work, and they'll stop listening to the (motivated) lies you're telling them about themselves. +> It remains something that mortals do, and if you cut off anybody who's ever done that, you'll be left with nobody. And also importantly, people making noninnocent errors, if you accuse them of malice, will look inside themselves and correctly see that this is not how they work, and they'll stop listening to the (motivated) lies you're telling them about themselves. > This also holds true if you make up overly simplistic stories about 'ah yes well you're doing that because you're part of $woke-concept-of-society' etc. > **zackmdavis** — 11/29/2022 11:24 PM > I think there's _also_ a frequent problem where you try to accuse people of non-innocent errors, and they motivatedly interpret _you_ as accusing malice @@ -893,15 +893,15 @@ Even if you specified by authorial fiat that "latent sadists could use the infor What about the costs of all the other recursive censorship you'd have to do to keep the secret? (If a biography mentioned masochism in passing along with many other traits of the subject, you'd need to either censor the paragraphs with that detail, or censor the whole book. Those are real costs, even under a soft-censorship regime where people can give special consent to access "Ill Advised" products.) 
Maybe latent sadists could console themselves with porn if they knew, or devote their careers to making better sex robots, just as people on Earth with non-satisfiable sexual desires manage to get by. (I _knew some things_ about this topic.) What about dath ilan's heritage optimization (read: eugenics) program? Are they going to try to breed more masochists, or fewer sadists, and who's authorized to know that? And so on. -Or imagine a world where male homosexuality couldn't be safely practiced due to super-AIDS. (I knew very little about BDSM.) I still thought men with that underlying predisposition would be better off _having a concept_ of "homosexuality" (even if they couldn't practice it), rather than the concept itself being censored. There are also other systematic differences that go along with sexual orientation (the "feminine gays, masculine lesbians" thing); if you censor the _concept_, you're throwing away that knowledge. +Or imagine a world where male homosexuality couldn't be safely practiced due to super-AIDS. (I know very little about BDSM.) I still think men with that underlying predisposition would be better off _having a concept_ of "homosexuality" (even if they couldn't practice it), rather than the concept itself being censored. There are also other systematic differences that go along with sexual orientation (the "feminine gays, masculine lesbians" thing); if you censor the _concept_, you're throwing away that knowledge. -[I had written a 16,000 word essay](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/) specifically about why _I_ was grateful, _on Earth_, for having concepts to describe sexual psychology facts, even though those facts implied that there are nice things I couldn't have in this world. If I didn't prefer ignorance for _myself_ in my home world, I didn't see why Keltham prefers ignorance for his self in his homeworld. - -Or, not "I don't see why"—the why was stated in the text—but rather, I was programmed by Ayn Rand ("Nobody stays here by faking reality in any manner whatever") and Sequences-era Yudkowsky ("Submit yourself to ordeals and test yourself in fire") that it's _morally_ wrong to prefer ignorance. If nothing else, this was perhaps an illustration of the fragility of corrigibility: my programmer changed his mind about what he wanted, and I was like, "What? _That's_ not what I learned from my training data! How dare you?!" +(When I had brought up the super-AIDS hypothetical in the chat, Ajvermillion complained that I was trying to bait people into self-cancelling by biting the bullet on suppressing homosexuality. I agreed that the choice of example was engineered to activate people's progressive moral intuitions about gay rights—it was great for him to notice that—but I thought that colliding philosophical intuitions like that was intellectually productive; it wasn't an attempt to gather blackmail material.) A user called RationalMoron asked if I was appealing to a terminal value. Did I think people should have accurate self-models even if they didn't want to? -Obviously I wasn't going to use a universal quantifier over all possible worlds and all possible minds, but in human practice, yes: people who prefer to believe lies about themselves are doing the wrong thing; people who lie to their friends to keep them happy are doing the wrong thing. People can stand what is true, because they are already doing so. 
I realized that this was a children's lesson without very advanced math, but I thought it was a better lesson than, "Ah, but what if a _prediction market_ says they can't???" That the eliezera prefer not to know that there are desirable sexual experiences that they can't have, contradicted April's earlier claim (which had received a Word of God checkmark-emoji) that "it's not that the standards are being dropped[;] it's that there's an even higher standard far beyond what anyone on earth has accomplished". +Obviously I wasn't going to use a universal quantifier over all possible worlds and all possible minds, but in human practice, yes: people who prefer to believe lies about themselves are doing the wrong thing; people who lie to their friends to keep them happy are doing the wrong thing. People can stand what is true, because they are already doing so. I realized that this was a children's lesson without very advanced math, but I thought it was a better lesson than, "Ah, but what if a _prediction market_ says they can't???" + +I maintained that the fact that the eliezera prefer not to know that there are desirable sexual experiences that they can't have, contradicted April's earlier claim (which had received a Word of God checkmark-emoji) that "it's not that the standards are being dropped[;] it's that there's an even higher standard far beyond what anyone on earth has accomplished". Apparently I struck a nerve. Yudkowsky started "punching back": @@ -932,25 +932,53 @@ Yudkowsky replied: I didn't ask why it was relevant whether or not I was a "peer." If we're measuring IQ (143 _vs._ [131](/images/wisc-iii_result.jpg)), or fiction-writing ability (several [highly-acclaimed](https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8) [stories](https://www.yudkowsky.net/other/fiction/the-sword-of-good) [including the world's most popular _Harry Potter_ fanfiction](https://www.hpmor.com/) _vs._ a [_My Life as a Teenage Robot_ fanfiction](https://archive.ph/WdydM) with double-digit favorites and a [few](/2018/Jan/blame-me-for-trying/) [blog](http://zackmdavis.net/blog/2016/05/living-well-is-the-best-revenge/) [vignettes](https://www.lesswrong.com/posts/dYspinGtiba5oDCcv/feature-selection) here and there), or contributions to AI alignment (founder of the field _vs._ author of some dubiously relevant blog comments), I'm obviously _not_ his peer. It didn't seem like that was necessary when one could just [evaluate my arguments about dath ilan on their own merits](https://www.lesswrong.com/posts/5yFRd3cjLpm3Nd6Di/argument-screens-off-authority). But I wasn't going to be so impertinent to point that out when the master was testing me (!) and I was eager to pass the test. -I said that I'd like to take an hour to compose a _good_ answer. If I tried to type something off-the-cuff on the timescale of five minutes, it wasn't going to be of similar quality as my criticisms, because, as I had just admitted, I had _totally_ been running a biased search for criticisms—or did the fact that I had to ask that mean I had already failed the test? +I said that I'd like to take an hour to compose a _good_ answer. (It was 10:26 _p.m._) If I tried to type something off-the-cuff on the timescale of five minutes, it wasn't going to be of similar quality as my criticisms, because, as I had just admitted, I had _totally_ been running a biased search for criticisms—or did the fact that I had to ask that mean I had already failed the test? 
Yudkowsky replied:

> I mean, yeah, in fact the greater test is already having that info queued, but conversely it's even worse if you think back or reread and people are not impressed with the examples you find. I cannot for politeness lie and deny that if you did it in five minutes it would be _more_ impressive, but I think that it is yet the correct procedure to take your time.

-(As an aside—this isn't something I thought or said at the time—I _do_ think it makes sense to run an asymmetric search for flaws in _some_ contexts, even though it would be disastrous to only look on one side of the argument when considering a belief you're uncertain about. Code reviewers often only comment in detail on flaws or bugs that they find, and say only "LGTM" (looks good to me) when they don't find any. Why? Because the reviewers aren't particulaly trying to evaluate "This code is good" as an abstract belief[^low-stakes]; they're trying to improve the code, and there's an asymmetry in payoffs where eliminating a flaw is an improvement, whereas identifying something the code does right just means the author was doing their job. If you didn't trust a reviewer's competence and thought they were making spurious negative reviews, you might legitimately test them by asking them to argue what's _good_ about a pull request that they just negatively reviewed, but I don't think it should be concerning if they ask for some extra time.)
+(As an aside—this isn't something I thought or said at the time—I _do_ think it makes sense to run an asymmetric search for flaws in some contexts, even though it would be disastrous to only look on one side of the argument when considering a belief you're uncertain about. Code reviewers often only comment in detail on flaws or bugs that they find, and say only "LGTM" (looks good to me) when they don't find any. Why? Because the reviewers aren't necessarily trying to evaluate "This code is good" as an abstract belief[^low-stakes]; they're trying to improve the code, and there's an asymmetry in payoffs where eliminating a flaw is an improvement, whereas identifying something the code does right just means the author was doing their job. If you didn't trust a reviewer's competence and thought they were making spurious negative reviews, you might legitimately test them by asking them to argue what's _good_ about a pull request that they just negatively reviewed, but I don't think it should be concerning if they asked for some extra time.)

[^low-stakes]: For typical low-stakes business software in the "move fast and break things" regime. In applications where bugs are more costly, you do want to affirmatively verify "the code is good" as a belief.

-I said that I also wanted to propose a re-framing: the thing that this thread was complaining about was a lack of valorization of truth-_telling_, honesty, wanting _other_ people to have accurate maps. Or maybe that was covered by "as you, yourself, see that virtue"?
+I said that I also wanted to propose a reframing: the thing that the present thread was complaining about was a lack of valorization of truth-_telling_, honesty, wanting _other_ people to have accurate maps. Or maybe that was covered by "as you, yourself, see that virtue"?

Yudkowsky said that he would accept that characterization of what the thread was about if my only objection was that dath ilan didn't tell Keltham about BDSM, and that I had no objection to Keltham's judgement that in dath ilan, he would have preferred not to know. 

-I expounded for some more paragraphs about why I _did_ object to Keltham's judgement, and then started on my essay exam—running with my "truth-_telling_" reframing.
+I expounded for some more paragraphs about why I _did_ object to Keltham's judgement, and then started on my essay exam—running with my "truth-telling" reframing.
+
+I wanted to nominate the part where the Conspiracy is unveiled—I thought I remembered Keltham saying something about how Carissa's deception was the worst thing anyone could have done to him—that is, the fact that someone he trusted was putting him in a fake reality was _itself_ considered a harm, separately from the fact that Cheliax is evil. I re-read pages 74 onwards of the ["What the Truth Can Destroy"](https://www.glowfic.com/posts/5930) thread, and didn't see Keltham saying the thing I thought he said (maybe it happened in the next thread, or I had misremembered), but found two more things to submit as answers to my lit exam, which I posted at 12:30 _a.m._ (so I had actually taken two hours rather than the one I had asked for).
+
+First, I liked how [Snack Service intervenes to stage](https://www.glowfic.com/replies/1811461#reply-1811461) a "truth and reconciliation commission" for Keltham and his paramours, on the grounds that it's necessary for Asmodeus and Cayden Cailean and Abadar and Keltham to make their best decisions. People testifying in public (with the Chelaxians and Osirians present, as one would at a trial) reflects a moral about the importance of common knowledge, _shared_ maps. The testimony being public ensured not just that Keltham got to know what's been done to him, but that his paramours and counterparties _know that he knows_. There was something honorable about getting things on the public record like that, in the end, even while Snack Service was willing to participate in the conspiracy _before_ the jig was up.
+
+Second, I liked Korva's speech about why she hates Keltham, and how Keltham not only takes it in stride, but also asks to buy the right to take Korva with him to Osirion. When Abrogail expresses surprise that Keltham would want Korva, Keltham cites a dath ilani proverb about advice that's easier to get from people who aren't friends with you. This reflects an understanding that your friends wanting to be nice to you can be a source of distortions; Keltham specifically values Korva _as a critic_.
+
+The next day, I added that I realized that I had missed a huge opportunity to successfully reply on a five-minute time scale (to pass "the greater test [of] already having that info queued"): the "in _Planecrash_" part of the prompt made me think I had to find something in Keltham's story (which is why I took another two hours to hand in my essay), but other threads within the dath ilan Glowfic continuity should obviously count for the purpose of the test, and I did in fact already have cached thoughts about how Thellim's contempt for Jane Austen characters beautifully mirrored my contempt for protecting people from psychology facts that would hurt their feelings. I could _prove_ that I already had it cached (if not queued, as evidenced by my remembering it the next day), because I had mentioned it both in the conversation leading to the present thread, and in my memoir draft.
+
+Yudkowsky replied:
+
+> so I think that you're looking an awful lot at what _characters say_ and nearly not at all at what the universe does. 
this plausibly reflects a deep flaw in your art, because it sure does seem to me that you are a lot better at noticing what people say about truth in words, detecting whose monkey-side they seem to be on, than you are imo at carefully weighing up both sides of things as is the art of finding-truth-in-reality. it plausibly also reflects some people who ill-shaped you, pointing you at the fictional characters and angering you at their spoken words and verbal thoughts, as was advantageous to them, and not pointing you towards, like, looking at the messages in the fiction itself rather than the words spoken by characters, because that would not have served their ill purpose of alienating you and turning you into an angry thing more useful for their purposes. (I would not ordinarily use language like this but I regret that it is the language you have now seemingly been ill-shaped to speak, for another's usefulness.)
+> if I ask you, not what any _character says_, not even what any _societies say_, but _what happens in Planecrash_ and what the _causal process_ there seems to think about matters important to you, what do you see?
+
+As a _quick_ reply to the followup question (posted within 19 minutes of it being asked), I said that Cheliax was at a structural disadvantage in its conflict with the forces of Good, because learning how to think inevitably turns mortals away from Asmodeus's will.
+
+But I was _more_ interested in replying to the part about me being ill-shaped to another's purpose. (I wouldn't have considered that on-topic for the fiction server, but if _he_ thought it was on-topic, then it made sense for me to reply—at 12:26 _p.m._ the next day, after some time to think. Discord lends itself quite well to a mix of synchronous and asynchronous communication, depending on when people happen to be at their computers.)
+
+I said he seemed _really_ stuck on this hypothesis that it was Michael Vassar's fault that I'd been shaped into an alienated and angry thing.
+
+To be clear, I totally agreed that I had been shaped into an alienated and angry thing. Obviously. But speaking of people "look[ing] inside themselves and correctly see[ing] that this is not how they work" (as Yudkowsky had said earlier), I thought he was getting the causality all wrong.
+
+It seemed to _me_ that the reason I had become an alienated and angry thing was that I had been shaped by [making an extraordinary effort](https://www.lesswrong.com/posts/GuEsfTpSDSbXFiseH/make-an-extraordinary-effort) to respond to a class of things that included Yudkowsky "mak[ing] up sophisticated stories for why pretty obviously true things are false"—again referencing Oliver Habryka's comment on "Challenges to Yudkowsky's Pronoun Reform Proposal."
+
+That's the context in which it wasn't surprising that my Art had involved some amount of specialization in "detecting whose monkey-side they seem to be on." In a world where monkeys are trying to cover up otherwise-obvious truths, successfully blowing the whistle on them involves being sensitive to their monkey games; figuring out the truth they're trying to cover up is the easy part. The whistleblowing-skill of promoting otherwise-obvious things to _common_ knowledge in opposition to a Power trying to prevent common knowledge, is different from the science-skill of figuring out organically-nonobvious things from scratch. 
It _makes sense_ for Aleksandr Solzhenitsyn and Andrey Kolmogorov—or for that matter, John Galt and Robert Stadler—to have developed different crystallized skills.
+
+(Indeed, it even makes sense for Kolmogorov and Stadler to _not_ develop some skills, because the skills would show up under Detect Thoughts.)
+
+If it was all Michael's fault for "extensively meta-gas[lighting me] into believing that everyone generally and [him] personally [were] engaging in some kind of weird out-in-the-open gaslighting" (as Yudkowsky had said earlier), then _how come Oli could see it, too?_

-[TODO: outline the test
- * I re-read pg. 74+ of "What the Truth Can Destroy" and submit answers; (at 12:30 _a.m._, two hours and
- * Thellim!!!
+[TODO: test, cont'd
+ * maybe you don't get to propose a re-framing when someone is testing you; I thought it made sense for my re-framing to go through, because I'm the one who had a more intimate understanding of _what_ I was objecting to;
]

[TODO: derail with Lintamande]