From: M. Taylor Saotome-Westlake
Date: Fri, 20 May 2022 23:29:42 +0000 (-0700)
Subject: Friday redemption block 4: capstone
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=42b71e3d8f99c4099e18be49303a58375eb4c7ec;p=Ultimately_Untrue_Thought.git

Friday redemption block 4: capstone
---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index 71e8694..97a2b7e 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -248,19 +248,7 @@ To be clear, it's _true_ that categories exist in our model of the world, rather

> I ought to accept an unexpected man or two deep inside the conceptual boundaries of what would normally be considered female if it'll save someone's life. There's no rule of rationality saying that I shouldn't, and there are plenty of rules of human decency saying that I should.

-But this is just wrong. Categories exist in our model of the world _in order to_ capture empirical regularities in the world itself: the map is supposed to _reflect_ the territory, and there _are_ "rules of rationality" governing what kinds of word and category usages correspond to correct probabilistic inferences.
-
-
-
-[We had a whole Sequence about this](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) back in 'aught-eight.
-
-Alexander cites [a post](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside) from that Sequence in support of the point about how categories are "in the map."
-
-
-But if you actually read the Sequence,
-
-
-in which Yudkowsky pounded home this _exact_ point _over and over and over again_, that word and category definitions are _not_ arbitrary, because there are criteria that make some definitions _perform better_ than others as "cognitive technology"—
+But this is just wrong. Categories exist in our model of the world _in order to_ capture empirical regularities in the world itself: the map is supposed to _reflect_ the territory, and there _are_ "rules of rationality" governing what kinds of word and category usages correspond to correct probabilistic inferences. [We had a whole Sequence about this](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) back in 'aught-eight. Alexander cites [a post](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside) from that Sequence in support of the (true) point about how categories are "in the map" ... but if you actually read the Sequence, another point that Yudkowsky pounds home _over and over and over again_ is that word and category definitions are nevertheless _not_ arbitrary, because there are criteria that make some definitions _perform better_ than others as "cognitive technology"—

> ["It is a common misconception that you can define a word any way you like. [...] If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences)

@@ -274,7 +262,7 @@ in which Yudkowsky pounded home this _exact_ point _over and over and over again_

> ["Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences)

-> ["And people are lazy. They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations)
+> ["And people are lazy. They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations)

> ["And this suggests another—yes, yet another—reason to be suspicious of the claim that 'you can define a word any way you like'. When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power."](https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words)

@@ -282,20 +270,11 @@ in which Yudkowsky pounded home this _exact_ point _over and over and over again_

> ["One may even consider the act of defining a word as a promise to \[the\] effect [...] \[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace)

+So when I quit my dayjob in order to have more time to study and work on this blog, the capstone of my sabbatical was an exhaustive response to Alexander, ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (which Alexander [graciously included in his next linkpost](https://archive.ph/irpfd#selection-1625.53-1629.55)). A few months later (having started a new dayjob), I followed it up with ["Reply to _The Unit of Caring_ on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/), responding to a similar argument. I'm proud of those posts: I think Alexander's and _Unit of Caring_'s arguments were incredibly dumb, and I think I did a pretty good job of explaining exactly why.

+At this point, I was certainly disappointed with my impact, but not to the point of bearing any hostility to "the community". People had made their arguments, and I had made mine; I didn't think I was _entitled_ to anything more than that.

-
-
-
-
-
-
-
-
-
-[TODO: so when I quit my job in order to write, the capstone of my sabbatical was to be "The Categories Were Made for Man to Make Predictions", which I later followed up with the "Reply on Adult Human Females" ... and mostly, things were fine—I was disappointed with my impact, but it wasn't grounds to declare the whole community a fraud]
-
-[TODO: I was at the company offsite browsing Twitter (which I had recently joined with fantasies of self-cancelling) when I saw the "Hill of Validity in Defense of Meaning", and I _flipped the fuck out_—exhaustive breakdown of exactly what's wrong]
+[TODO: I was at the company offsite browsing Twitter (which I had recently joined with fantasies of self-cancelling) when I saw the "Hill of Validity in Defense of Meaning", and I _flipped the fuck out_—exhaustive breakdown of exactly what's wrong; I trusted Yudkowsky and I _did_ think I was entitled to more]

[TODO: getting support from Michael + Ben + Sarah, harassing Scott and Eliezer]

diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md
index b925203..bb280b8 100644
--- a/notes/a-hill-of-validity-sections.md
+++ b/notes/a-hill-of-validity-sections.md
@@ -874,4 +874,7 @@ https://www.lesswrong.com/posts/ZEgQGAjQm5rTAnGuM/beware-boasting-about-non-exis

In a discussion on criticism of EA by outsiders, Lorelei spontaneously (not prompted by me) mentioned the difference between when fellow trans women called themselves AGP, vs. actual Blanchardians. This is a conspiracy!! (The ingroup is allowed to notice things, but when other people notice, deny everything. Compare Michael Anton on "celebration parallax.")

-https://www.lesswrong.com/tag/criticisms-of-the-rationalist-movement
\ No newline at end of file
+https://www.lesswrong.com/tag/criticisms-of-the-rationalist-movement
+
+> possible that 2022 is the year where we start Final Descent and by 2024 it's over
+https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-and-yudkowsky-on-ai-progress?commentId=iKEuFQg7HZatoebps