From: M. Taylor Saotome-Westlake
Date: Mon, 12 Sep 2022 01:50:57 +0000 (-0700)
Subject: memoir: quick note on the optimization lens?!
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=fae0291827e40b063fd62643d4a62f9b385010ac;p=Ultimately_Untrue_Thought.git

memoir: quick note on the optimization lens?!
---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index e1f7e87..d193338 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -1038,7 +1038,7 @@ I guess not! ["Doublethink (Choosing to be Biased)"](https://www.lesswrong.com/p
 
 I don't doubt Yudkowsky could come up with some clever casuistry why, _technically_, the text he wrote in 2007 and the text he endorsed in 2021 don't contradict each other. But _realistically_ ... again, no.
 
-[TODO: elaborate on how 2007!Yudkowsky and 2021!Xu are saying the opposite things if you just take a plain-language reading]
+[TODO: elaborate on how 2007!Yudkowsky and 2021!Xu are saying the opposite things if you just take a plain-language reading and consider, not whether individual sentences can be interpreted as "true", but what kind of _optimization_ the text is doing to the behavior of receptive readers]
 
 [TODO: if he's reading this, win back respect— reply, motherfucker]