From: M. Taylor Saotome-Westlake
Date: Mon, 23 May 2022 00:19:47 +0000 (-0700)
Subject: check in of sadness ...
X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=4bdfb818a3f830bf167db44dd23c329115047942;p=Ultimately_Untrue_Thought.git

check in of sadness ...
---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index 57ad252..4aa3bdc 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -272,7 +272,7 @@ But this is just wrong. Categories exist in our model of the world _in order to_

So when I quit my dayjob in order to have more time to study and work on this blog, the capstone of my sabbatical was an exhaustive response to Alexander, ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (which Alexander [graciously included in his next linkpost](https://archive.ph/irpfd#selection-1625.53-1629.55)). A few months later (having started a new dayjob), I followed it up with ["Reply to _The Unit of Caring_ on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/), responding to a similar argument. I'm proud of those posts: I think Alexander's and _Unit of Caring_'s arguments were incredibly dumb, and I think I did a pretty good job of explaining exactly why.

-At this point, I was certainly disappointed with my impact, but not to the point of bearing any hostility to "the community". People had made their arguments, and I had made mine; I didn't think I was _entitled_ to anything more than that.
+At this point, I was certainly _disappointed_ with my impact, but not to the point of bearing much hostility to "the community". People had made their arguments, and I had made mine; I didn't think I was _entitled_ to anything more than that.

[TODO: I was at the company offsite browsing Twitter (which I had recently joined with fantasies of self-cancelling) when I saw the "Hill of Validity in Defense of Meaning", and I _flipped the fuck out_—exhaustive breakdown of exactly what's wrong; I trusted Yudkowsky and I _did_ think I was entitled to more]

@@ -291,6 +291,8 @@ At this point, I was certainly disappointed with my impact, but not to the point

[TODO: "simplest and best" pronoun proposal, sometimes personally prudent; support from Oli]

+[TODO: the dolphin war]
+
[TODO: David Xu's postrat]

[TODO: why you should care; no one should like Scott and Eliezer's proposals; knowledge should go forward, not back — what I would have hoped for, what you can do; hating that my religion is bottlenecked on one guy; the Church is _still there_ sucking up adherents; this is unambiguously a betrayal rather than a mistake]

diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md
index 386d74d..6d16a80 100644
--- a/notes/a-hill-of-validity-sections.md
+++ b/notes/a-hill-of-validity-sections.md
@@ -371,9 +371,15 @@ In a world where surgery is expensive, but some people desperately want to chang

But I would have expected people with the barest inkling of self-awareness and honesty to ... notice the incentives, and notice the problems being created by the incentives, and to talk about the problems in public so that we can coordinate on the best solution?
-And if that's
+And if that's too much to expect of the general public—
+
+And if it's too much to expect garden-variety "rationalists" to figure it out on their own—
+
+Then I would have at least hoped that Eliezer Yudkowsky would be _in favor of_, rather than _against_, his faithful students having these very basic capabilities for reflection, self-observation, and ... _speech_?
+
+
+
-And I would have hoped Eliezer Yudkowsky would be _in favor_ of his faithful students having these very basic capabilities for reflection, self-observation, and ... _speech_?


@@ -903,3 +909,6 @@ https://www.lesswrong.com/tag/criticisms-of-the-rationalist-movement

> possible that 2022 is the year where we start Final Descent and by 2024 it's over
https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-and-yudkowsky-on-ai-progress?commentId=iKEuFQg7HZatoebps
+
+> and yeah, when Joanna came out on Facebook Zack messaged her to have a 3-hour debate about it
+> I think no matter how pure his pursuit of knowledge this is actually bad behavior and he should not
\ No newline at end of file