From: M. Taylor Saotome-Westlake Date: Sat, 25 Mar 2023 03:08:17 +0000 (-0700) Subject: memoir: the tragedy of scaling X-Git-Url: http://534655.efjtl6rk.asia/source?a=commitdiff_plain;h=9d1d316279f380a82fcdf3a0698ef9d2d42cde2d;p=Ultimately_Untrue_Thought.git memoir: the tragedy of scaling --- diff --git a/content/drafts/standing-under-the-same-sky.md b/content/drafts/standing-under-the-same-sky.md index 6b8829d..32fd51b 100644 --- a/content/drafts/standing-under-the-same-sky.md +++ b/content/drafts/standing-under-the-same-sky.md @@ -23,7 +23,9 @@ But fighting for public epistemology is a long battle; it makes more sense if yo Now it looks like we have—less time? Not just tautologically because time has passed (the 21st century is one-fifth over—closer to a quarter over), but because of new information from the visible results of the deep learning revolution.[^second-half] Yudkowsky seemed particularly [spooked by AlphaGo](https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-and-yudkowsky-on-ai-progress?commentId=gQzA8a989ZyGvhWv2) [and AlphaZero](https://intelligence.org/2017/10/20/alphago/) in 2016–2017, not because superhuman board game players were dangerous, but because of what it implied about the universe of algorithms. -There had been a post in the Sequences that made fun of "the people who just want to build a really big neural net." These days, it's increasingly looking like just building a really big neural net ... [actually works](https://www.gwern.net/Scaling-hypothesis)?—which seems like bad news; if it's "easy" for non-scientific-genius engineering talent to shovel large amounts of compute into the birth of powerful minds that we don't understand and don't know how to control, then it would seem that the world is soon to pass outside of our understanding and control. 
+In part of the Sequences, Yudkowsky had been [dismissive of people who aspired to build AI without understanding how intelligence works](https://www.lesswrong.com/posts/fKofLyepu446zRgPP/artificial-mysterious-intelligence)—for example, by being overly impressed by the [surface analogy](https://www.lesswrong.com/posts/6ByPxcGDhmx74gPSm/surface-analogies-and-deep-causes) between artificial neural networks and the brain. He conceded the possibility of brute-forcing AI (if natural selection had eventually gotten there with no deeper insight, so could we) but didn't consider it a default and especially not a desirable path. (["If you don't know how your AI works, that is not good. It is bad."](https://www.lesswrong.com/posts/fKofLyepu446zRgPP/artificial-mysterious-intelligence)) + +These days, it's increasingly looking like making really large neural nets ... [actually works](https://www.gwern.net/Scaling-hypothesis)?—which seems like bad news; if it's "easy" for non-scientific-genius engineering talent to shovel large amounts of compute into the birth of powerful minds that we don't understand and don't know how to control, then it would seem that the world is soon to pass outside of our understanding and control. [^second-half]: In an unfinished slice-of-life short story I started writing _circa_ 2010, my protagonist (a supermarket employee resenting his job while thinking high-minded thoughts about rationality and the universe) speculates about "a threshold of economic efficiency beyond which nothing human could survive" being a tighter bound on future history than physical limits (like the heat death of the universe), and comments that "it imposes a sense of urgency to suddenly be faced with the fabric of your existence coming apart in ninety years rather than 10⁹⁰."