From: M. Taylor Saotome-Westlake Date: Mon, 7 Feb 2022 18:24:06 +0000 (-0800) Subject: "Challenges": can't afford to ack real empirical reason X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=1dd38f85b0f848ae7423d3acb5082c61f6edf3db;p=Ultimately_Untrue_Thought.git "Challenges": can't afford to ack real empirical reason Today I'm again flip-flop leaning towards chopping off "Challenges" at "can't say I wasn't warned", and putting the heavy bad-faith stuff into "Hill", or possibly a third post? But again, it's too early to throw in the towel of fatigue just yet; I can keep drafting to try to make this the best post it can be as currently envisioned, and then beg counsel for advice on how to split it up and polish it for public consumption. Maybe timebox it? What can I come up with before the end of February? --- diff --git a/content/drafts/challenges-to-yudkowskys-pronoun-reform-proposal.md b/content/drafts/challenges-to-yudkowskys-pronoun-reform-proposal.md index 790c07a..c5b0eb9 100644 --- a/content/drafts/challenges-to-yudkowskys-pronoun-reform-proposal.md +++ b/content/drafts/challenges-to-yudkowskys-pronoun-reform-proposal.md @@ -499,8 +499,7 @@ Zvi Mowshowitz has [written about how the false assertion that "everybody knows" But if it were _actually_ the case that everybody knew (and everybody knew that everybody knew), then what would be the point of the censorship? It's not coherent to claim that no one is being harmed by censorship because everyone knows about it, because the entire appeal and purpose of censorship is precisely that _not_ everybody knows and that someone with power wants to _keep_ it that way. -For the savvy people in the know, it would certainly be _convenient_ if everyone secretly knew: then the savvy people wouldn't have to face the tough choice between -acceding to Power's demands (at the cost of deceiving their readers) and informing their readers (at the cost of incurring Power's wrath). 
+For the savvy people in the know, it would certainly be _convenient_ if everyone secretly knew: then the savvy people wouldn't have to face the tough choice between acceding to Power's demands (at the cost of deceiving their readers) and informing their readers (at the cost of incurring Power's wrath). Policy debates should not appear one-sided. Faced with this kind of dilemma, I can't say that defying Power is necessarily the right choice: if there really _were_ no other options between deceiving your readers with a bad-faith performance, and incurring Power's wrath, and Power's wrath would be too terrible to bear, then maybe deceiving your readers with a bad-faith performance is the right thing to do. @@ -544,7 +543,9 @@ But that intuition is _wrong_. The perception that there are "sides" to which on As I explained in ["On the Argumentative Form 'Super-Proton Things Tend to Come In Varieties'"](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/), this argument that "gender dysphoria involves more than one proton and will probably have varieties" is actually _wrong_. The _reason_ I believe in the two-type taxonomy of MtF is because of [the _empirical_ case that androphilic and non-exclusively-androphilic MtF transsexualism actually look like different things](https://sillyolme.wordpress.com/faq-on-the-science/), enough so for the two-type clustering to [pay the rent](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences) [for its complexity](https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length). -The key lesson here that I wish Yudkowsky would understand is that when you invent rationality lessons in response to political pressure, you probably end up with _fake rationality lessons_ (because the reasoning that _generated_ the lesson differs from the reasoning that the lesson presents). 
I think this is bad, and that it's _equally_ bad even in cases like this where the political pressure is coming from _me_. +But Yudkowsky can't afford to acknowledge the empirical case for the two-type taxonomy—that really _would_ get him in trouble with progressives. So in order to throw me a bone while maintaining his above-it-all [pretending to be wise](https://www.lesswrong.com/posts/jeyvzALDbjdjjv5RW/pretending-to-be-wise) centrist pose, he needs to come up with some other excuse that "exhibit[s] generally rationalist principles". + +The lesson here that I wish Yudkowsky would understand is that when you invent rationality lessons in response to political pressure, you probably end up with _fake rationality lessons_ (because the reasoning that _generated_ the lesson differs from the reasoning that the lesson presents). I think this is bad, and that it's _equally_ bad even when the political pressure is coming from _me_. If you "project" my work into the "subspace" of contemporary political conflicts, it _codes as_ favoring the "anti-trans" faction more often than not, but [that's really not what I'm trying to do](/2021/Sep/i-dont-do-policy/). From my perspective, it's just that the "pro-trans" faction happens to be very wrong about a lot of stuff that I care about. But being wrong about a lot of stuff isn't the same thing as being wrong about everything; it's _important_ that I spontaneously invent and publish pieces like ["On the Argumentative Form"](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/) and ["Self-Identity Is a Schelling Point"](/2019/Oct/self-identity-is-a-schelling-point/) that "favor" the "pro-trans" faction. That's how you know (and how I know) that I'm not a _partisan hack_.