Anthropic CEO's essay Machines of Loving Grace: a summarized translation of the full text
Original: https://darioamodei.com/machines-of-loving-grace
This is an essay by Anthropic CEO Dario Amodei on how AI could change the world for the better. As Amodei emphasizes, it discusses the world AI will bring in a way that is at once radical and detailed. Since AI rose to prominence, the future it brings has too often been treated only in radical terms, expressed "sci-fi style" rather than analyzed seriously. Amodei warns against this and argues that from now on we should discuss the future AI will bring in greater detail, with concrete technical goals and visions. He says he hopes this essay serves as a starting point for that discussion.
Machines of Loving Grace
Machines endowed with loving grace (a symbolic image of machines joined with human emotion and love)
How AI Could Transform the World for the Better
Original text
I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.
In this essay I try to sketch out what that upside might look like—what a world with powerful AI might look like if everything goes right. Of course no one can know the future with any certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one.
First, however, I wanted to briefly explain why I and Anthropic haven’t talked that much about powerful AI’s upsides, and why we’ll probably continue, overall, to talk a lot about risks. In particular, I’ve made this choice out of a desire to:
- Maximize leverage. The basic development of AI technology and many (not all) of its benefits seems inevitable (unless the risks derail everything) and is fundamentally driven by powerful market forces. On the other hand, the risks are not predetermined and our actions can greatly change their likelihood.
- Avoid perception of propaganda. AI companies talking about all the amazing benefits of AI can come off like propagandists, or as if they’re attempting to distract from downsides. I also think that as a matter of principle it’s bad for your soul to spend too much of your time “talking your book”.
- Avoid grandiosity. I am often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation. I think it’s dangerous to view companies as unilaterally shaping the world, and dangerous to view practical technological goals in essentially religious terms.
- Avoid “sci-fi” baggage. Although I think most people underestimate the upside of powerful AI, the small community of people who do discuss radical AI futures often does so in an excessively “sci-fi” tone (featuring e.g. uploaded minds, space exploration, or general cyberpunk vibes). I think this causes people to take the claims less seriously, and to imbue them with a sort of unreality. To be clear, the issue isn’t whether the technologies described are possible or likely (the main essay discusses this in granular detail)—it’s more that the “vibe” connotatively smuggles in a bunch of cultural baggage and unstated assumptions about what kind of future is desirable, how various societal issues will play out, etc. The result often ends up reading like a fantasy for a narrow subculture, while being off-putting to most people.
Yet despite all of the concerns above, I really do think it’s important to discuss what a good world with powerful AI could look like, while doing our best to avoid the above pitfalls. In fact I think it is critical to have a genuinely inspiring vision of the future, and not just a plan to fight fires. Many of the implications of powerful AI are adversarial or dangerous, but at the end of it all, there has to be something we’re fighting for, some positive-sum outcome where everyone is better off, something to rally people to rise above their squabbles and confront the challenges ahead. Fear is one kind of motivator, but it’s not enough: we need hope as well.
The list of positive applications of powerful AI is extremely long (and includes robotics, manufacturing, energy, and much more), but I’m going to focus on a small number of areas that seem to me to have the greatest potential to directly improve the quality of human life. The five categories I am most excited about are:
- Biology and physical health
- Neuroscience and mental health
- Economic development and poverty
- Peace and governance
- Work and meaning
My predictions are going to be radical as judged by most standards (other than sci-fi “singularity” visions), but I mean them earnestly and sincerely. Everything I’m saying could very easily be wrong (to repeat my point from above), but I’ve at least attempted to ground my views in a semi-analytical assessment of how much progress in various fields might speed up and what that might mean in practice. I am fortunate to have professional experience in both biology and neuroscience, and I am an informed amateur in the field of economic development, but I am sure I will get plenty of things wrong. One thing writing this essay has made me realize is that it would be valuable to bring together a group of domain experts (in biology, economics, international relations, and other areas) to write a much better and more informed version of what I’ve produced here. It’s probably best to view my efforts here as a starting prompt for that group.
I think and talk a lot about the risks of powerful AI, and Anthropic does a lot of research on reducing those risks. Because of this, people sometimes conclude that I am a pessimist or a "doomer" who thinks AI will be mostly bad or dangerous. I do not think that at all; I focus on the risks because I see them as the only obstacle standing between us and a fundamentally positive future. I also think most people underestimate just how radical AI's upside could be, just as they underestimate how bad its risks could be.
In this essay I try to sketch what that upside might look like: what a world with powerful AI could be if everything goes right. Of course, no one can predict this with certainty. I am aiming for educated, useful guesses that capture the essence of what will happen even if most details turn out wrong. I deliberately include many details, because I believe a concrete vision advances discussion more than an abstract, heavily hedged one.
Before that, however, I want to briefly explain why Anthropic and I have talked, and will continue to talk, much more about AI's risks than its upsides.
- Maximize leverage: The development of AI and many of its benefits seem inevitable, driven fundamentally by powerful market forces. The risks, on the other hand, are not predetermined, and our actions can greatly change their likelihood.
- Avoid perception of propaganda: AI companies talking up AI's amazing benefits can come across as propagandists, or as trying to distract from the downsides. I also think that spending too much of your time "talking your book" is bad for the soul.
- Avoid grandiosity: I am often put off by how many public figures on AI risk (including AI company leaders) talk about the post-AGI world, as if it were their mission to single-handedly bring it about, like a prophet leading their people to salvation. It is dangerous to see companies as unilaterally shaping the world, and dangerous to view practical technological goals in essentially religious terms.
- Avoid 'sci-fi' baggage: Although most people underestimate the upside of powerful AI, the small community that does discuss radical AI futures often does so in an excessively "sci-fi" tone (e.g. uploaded minds, space exploration, cyberpunk vibes). The issue is not whether those technologies are possible; it is that this "vibe" smuggles in cultural baggage and unstated assumptions about what kind of future is desirable and how societal issues will play out. The result reads like a fantasy for a narrow subculture while putting most people off.
The list of positive applications of powerful AI is very long (including robotics, manufacturing, energy, and much more), but I will focus on a few areas that seem to have the greatest potential to directly improve the quality of human life. The five categories I am most excited about are:
- Biology and physical health
- Neuroscience and mental health
- Economic development and poverty
- Peace and governance
- Work and meaning
My predictions will be radical by most standards (short of "sci-fi singularity" visions), but I mean them earnestly and sincerely. Everything in them could easily be wrong, but I have at least tried to ground them in a semi-analytical assessment of how much progress in various fields might speed up and what that would mean in practice. I have professional experience in biology and neuroscience and am an informed amateur in economic development. One thing writing this essay made me realize is that it would be valuable for a group of domain experts (in biology, economics, international relations, and other areas) to write a better, more informed version of what I have produced here. It is probably best to view my effort as a starting point for that group.
Original text
Basic assumptions and framework
To make this whole essay more precise and grounded, it’s helpful to specify clearly what we mean by powerful AI (i.e. the threshold at which the 5-10 year clock starts counting), as well as laying out a framework for thinking about the effects of such AI once it’s present.
What powerful AI (I dislike the term AGI) will look like, and when (or if) it will arrive, is a huge topic in itself. It’s one I’ve discussed publicly and could write a completely separate essay on (I probably will at some point). Obviously, many people are skeptical that powerful AI will be built soon and some are skeptical that it will ever be built at all. I think it could come as early as 2026, though there are also ways it could take much longer. But for the purposes of this essay, I’d like to put these issues aside, assume it will come reasonably soon, and focus on what happens in the 5-10 years after that. I also want to assume a definition of what such a system will look like, what its capabilities are and how it interacts, even though there is room for disagreement on this.
By powerful AI, I have in mind an AI model—likely similar to today’s LLM’s in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
- In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
- In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
- It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
- It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.
- The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with.
- Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
We could summarize this as a “country of geniuses in a datacenter”.
Clearly such an entity would be capable of solving very difficult problems, very fast, but it is not trivial to figure out how fast. Two “extreme” positions both seem false to me. First, you might think that the world would be instantly transformed on the scale of seconds or days (“the Singularity”), as superior intelligence builds on itself and solves every possible scientific, engineering, and operational task almost immediately. The problem with this is that there are real physical and practical limits, for example around building hardware or conducting biological experiments. Even a new country of geniuses would hit up against these limits. Intelligence may be very powerful, but it isn’t magic fairy dust.
Second, and conversely, you might believe that technological progress is saturated or rate-limited by real world data or by social factors, and that better-than-human intelligence will add very little. This seems equally implausible to me—I can think of hundreds of scientific or even social problems where a large group of really smart people would drastically speed up progress, especially if they aren’t limited to analysis and can make things happen in the real world (which our postulated country of geniuses can, including by directing or assisting teams of humans).
I think the truth is likely to be some messy admixture of these two extreme pictures, something that varies by task and field and is very subtle in its details. I believe we need new frameworks to think about these details in a productive way.
Economists often talk about “factors of production”: things like labor, land, and capital. The phrase “marginal returns to labor/land/capital” captures the idea that in a given situation, a given factor may or may not be the limiting one – for example, an air force needs both planes and pilots, and hiring more pilots doesn’t help much if you’re out of planes. I believe that in the AI age, we should be talking about the marginal returns to intelligence, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high. We are not used to thinking in this way—to asking “how much does being smarter help with this task, and on what timescale?”—but it seems like the right way to conceptualize a world with very powerful AI.
My guess at a list of factors that limit or are complementary to intelligence includes:
- Speed of the outside world. Intelligent agents need to operate interactively in the world in order to accomplish things and also to learn. But the world only moves so fast. Cells and animals run at a fixed speed so experiments on them take a certain amount of time which may be irreducible. The same is true of hardware, materials science, anything involving communicating with people, and even our existing software infrastructure. Furthermore, in science many experiments are often needed in sequence, each learning from or building on the last. All of this means that the speed at which a major project—for example developing a cancer cure—can be completed may have an irreducible minimum that cannot be decreased further even as intelligence continues to increase.
- Need for data. Sometimes raw data is lacking and in its absence more intelligence does not help. Today’s particle physicists are very ingenious and have developed a wide range of theories, but lack the data to choose between them because particle accelerator data is so limited. It is not clear that they would do drastically better if they were superintelligent—other than perhaps by speeding up the construction of a bigger accelerator.
- Intrinsic complexity. Some things are inherently unpredictable or chaotic and even the most powerful AI cannot predict or untangle them substantially better than a human or a computer today. For example, even incredibly powerful AI could predict only marginally further ahead in a chaotic system (such as the three-body problem) in the general case, as compared to today’s humans and computers.
- Constraints from humans. Many things cannot be done without breaking laws, harming humans, or messing up society. An aligned AI would not want to do these things (and if we have an unaligned AI, we’re back to talking about risks). Many human societal structures are inefficient or even actively harmful, but are hard to change while respecting constraints like legal requirements on clinical trials, people’s willingness to change their habits, or the behavior of governments. Examples of advances that work well in a technical sense, but whose impact has been substantially reduced by regulations or misplaced fears, include nuclear power, supersonic flight, and even elevators.
- Physical laws. This is a starker version of the first point. There are certain physical laws that appear to be unbreakable. It’s not possible to travel faster than light. Pudding does not unstir. Chips can only have so many transistors per square centimeter before they become unreliable. Computation requires a certain minimum energy per bit erased, limiting the density of computation in the world.
There is a further distinction based on timescales. Things that are hard constraints in the short run may become more malleable to intelligence in the long run. For example, intelligence might be used to develop a new experimental paradigm that allows us to learn in vitro what used to require live animal experiments, or to build the tools needed to collect new data (e.g. the bigger particle accelerator), or to (within ethical limits) find ways around human-based constraints (e.g. helping to improve the clinical trial system, helping to create new jurisdictions where clinical trials have less bureaucracy, or improving the science itself to make human clinical trials less necessary or cheaper).
Thus, we should imagine a picture where intelligence is initially heavily bottlenecked by the other factors of production, but over time intelligence itself increasingly routes around the other factors, even if they never fully dissolve (and some things like physical laws are absolute). The key question is how fast it all happens and in what order.
With the above framework in mind, I’ll try to answer that question for the five areas mentioned in the introduction.
Basic assumptions and framework
The question of what powerful AI (I dislike the term AGI) will look like and when it will arrive is a huge topic in itself. Some people think it will never be built; I think it could come as early as 2026, though it could also take much longer. For the purposes of this essay I will set that aside, assume powerful AI arrives reasonably soon, and focus on what happens in the 5-10 years after that. I also want to define what such a system will look like and how it interacts.
By powerful AI, I mean:
A model likely similar in form to today's LLMs, though it might use a different architecture or training method and might involve several interacting models, with the following properties.
- In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields: biology, programming, math, engineering, writing, and so on.
- It is more than just a "smart thing you talk to". It has all the interfaces of a human working virtually: text, audio, video, mouse and keyboard control, and internet access. Through these interfaces it can perform any task or communication they allow, including working on the internet, giving directions to or taking them from humans, ordering materials, and directing experiments.
- Rather than just answering questions, it can be given tasks that take hours, days, or weeks and carries them out autonomously, like a smart employee, asking for clarification as needed.
- It has no physical embodiment, but it can control physical tools, robots, and laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.
- The resources used to train the model can be repurposed to run millions of instances of it, and it can absorb information and take actions at roughly 10-100x human speed. It may, however, be limited by the response time of the physical world or of the software it interacts with.
- Each of these millions of copies can act independently on unrelated tasks or, when needed, collaborate the way humans do, perhaps with subpopulations fine-tuned to excel at particular tasks.
We can summarize this as a "country of geniuses in a datacenter".
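To make that scale concrete, here is a rough back-of-the-envelope sketch. The instance count and speed multiplier come from the assumptions above; the specific values (one million instances, a 50x midpoint) are illustrative choices, not figures from the essay:

```python
# Rough scale of a "country of geniuses in a datacenter".
# 1M instances and 50x speed are illustrative picks within the essay's
# stated assumptions (millions of copies, 10x-100x human speed).
instances = 1_000_000
speedup = 50  # midpoint of the 10x-100x range

effective_genius_years_per_year = instances * speedup
print(f"{effective_genius_years_per_year:,} genius-years of work per calendar year")
# -> 50,000,000 genius-years of work per calendar year
```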
Such an entity could solve very hard problems very fast, but just how fast is not simple to figure out. I think the following two "extreme" positions are both wrong.
- (Overly optimistic) Believing, as in "the Singularity", that superior intelligence will solve every scientific, engineering, and operational problem almost immediately and transform the world within seconds or days. There are real physical and practical limits: building hardware and running biological experiments take time, and intelligence, however powerful, is not magic fairy dust.
- (Overly pessimistic) Believing that technological progress is saturated or rate-limited by real-world data and social factors, so that better-than-human intelligence will add very little. I can think of hundreds of scientific and social problems where a large group of really smart people would drastically speed up progress, all the more so if they are not limited to analysis and can actually make things happen in the real world (which our imagined country of geniuses can, including by directing or assisting human teams).
The truth is likely some messy mixture of these two extremes. Economists talk about the "marginal returns to labor/land/capital": just as hiring more pilots helps little when you are out of planes, a given factor may or may not be the limiting one. In the AI age, I think we should likewise talk about the "marginal returns to intelligence" and work out which factors are complementary to intelligence and which become limiting when intelligence is very high. We are not used to asking how much being smarter helps with a given task, and on what timescale. My guess at the factors that limit or complement intelligence is as follows:
- Speed of the outside world
The world only moves so fast. Cells and animals run at a fixed speed, so experiments on them take a certain amount of time that often cannot be reduced. The same is true of software infrastructure. In science, many experiments must be run in sequence, each learning from or building on the last. So a major project, such as developing a cancer cure, may have an irreducible minimum time that cannot be shortened even as intelligence keeps increasing.
- Need for data
Today's particle physicists have developed a wide range of theories, but particle accelerator data is so limited that they cannot choose between them. When raw data is lacking, more intelligence does not help much, superintelligent or not.
- Intrinsic complexity
Even powerful AI cannot predict or untangle things that are inherently unpredictable or chaotic (e.g. the three-body problem) substantially better than today's humans or computers (a short numeric illustration follows this list).
- Constraints from humans
Many things cannot be done without breaking laws, harming humans, or messing up society, and an aligned AI would not want to do them. Many human social structures are inefficient or even harmful, yet hard to change while respecting constraints such as legal requirements on clinical trials, people's willingness to change their habits, or the behavior of governments. Advances that work well technically but whose impact was substantially reduced by regulation or misplaced fears include nuclear power, supersonic flight, and even elevators.
- Physical laws
A starker version of the first point: some physical laws appear unbreakable. You cannot travel faster than light, and pudding does not unstir. Chips can hold only so many transistors per square centimeter before becoming unreliable, and computation requires a minimum energy per bit erased, which limits the density of computation in the world.
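As a toy illustration of the intrinsic-complexity point (my example, not the essay's), the logistic map shows how chaos defeats prediction: an initial error of 1e-12 grows to order one within a few dozen steps, so even a far better predictor gains only marginally more forecast horizon:

```python
# Logistic map x' = r*x*(1-x) at r=4 is chaotic: two trajectories that
# start 1e-12 apart diverge to a macroscopic difference in ~40 steps.
r = 4.0
x, y = 0.3, 0.3 + 1e-12
for step in range(60):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if abs(x - y) > 0.1:
        print(f"trajectories diverged after {step + 1} steps")
        break
```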
These may be hard constraints in the short run, but in the long run they can become more malleable to intelligence. For example, intelligence might develop a new experimental paradigm that lets us learn in vitro what used to require live animal experiments, or build the tools needed to collect new data (e.g. a bigger particle accelerator).
So we should imagine a picture in which intelligence is at first heavily bottlenecked by the other factors of production, but over time increasingly routes around them. These limits never fully dissolve (things like physical laws are absolute), but intelligence will progressively find ways around the rest. The toy model below sketches this "limiting factor" idea in code.
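A minimal sketch of that framing, assuming (as an illustration, not anything from the essay) a Leontief-style production function in which output is capped by the scarcest factor:

```python
# Toy "factors of production" model: output is limited by the scarcest
# factor, so raising intelligence past the binding bottleneck yields
# zero marginal return until the other factors catch up.
def research_output(intelligence: float, experiment_speed: float, data: float) -> float:
    return min(intelligence, experiment_speed, data)

print(research_output(1, 1, 1))      # 1: balanced factors
print(research_output(100, 1, 1))    # 1: 100x smarter, same bottlenecks
print(research_output(100, 10, 10))  # 10: faster once intelligence routes
                                     #     around the other factors
```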
The key questions are how fast all of this happens and in what order. With this framework in mind, I will try to answer them for the five areas mentioned in the introduction.
Original text
1. Biology and health
Biology is probably the area where scientific progress has the greatest potential to directly and unambiguously improve the quality of human life. In the last century some of the most ancient human afflictions (such as smallpox) have finally been vanquished, but many more still remain, and defeating them would be an enormous humanitarian accomplishment. Beyond even curing disease, biological science can in principle improve the baseline quality of human health, by extending the healthy human lifespan, increasing control and freedom over our own biological processes, and addressing everyday problems that we currently think of as immutable parts of the human condition.
In the “limiting factors” language of the previous section, the main challenges with directly applying intelligence to biology are data, the speed of the physical world, and intrinsic complexity (in fact, all three are related to each other). Human constraints also play a role at a later stage, when clinical trials are involved. Let’s take these one by one.
Experiments on cells, animals, and even chemical processes are limited by the speed of the physical world: many biological protocols involve culturing bacteria or other cells, or simply waiting for chemical reactions to occur, and this can sometimes take days or even weeks, with no obvious way to speed it up. Animal experiments can take months (or more) and human experiments often take years (or even decades for long-term outcome studies). Somewhat related to this, data is often lacking—not so much in quantity, but quality: there is always a dearth of clear, unambiguous data that isolates a biological effect of interest from the other 10,000 confounding things that are going on, or that intervenes causally in a given process, or that directly measures some effect (as opposed to inferring its consequences in some indirect or noisy way). Even massive, quantitative molecular data, like the proteomics data that I collected while working on mass spectrometry techniques, is noisy and misses a lot (which types of cells were these proteins in? Which part of the cell? At what phase in the cell cycle?).
In part responsible for these problems with data is intrinsic complexity: if you’ve ever seen a diagram showing the biochemistry of human metabolism, you’ll know that it’s very hard to isolate the effect of any part of this complex system, and even harder to intervene on the system in a precise or predictable way. And finally, beyond just the intrinsic time that it takes to run an experiment on humans, actual clinical trials involve a lot of bureaucracy and regulatory requirements that (in the opinion of many people, including me) add unnecessary additional time and delay progress.
Given all this, many biologists have long been skeptical of the value of AI and “big data” more generally in biology. Historically, mathematicians, computer scientists, and physicists who have applied their skills to biology over the last 30 years have been quite successful, but have not had the truly transformative impact initially hoped for. Some of the skepticism has been reduced by major and revolutionary breakthroughs like AlphaFold (which has just deservedly won its creators the Nobel Prize in Chemistry) and AlphaProteo, but there’s still a perception that AI is (and will continue to be) useful in only a limited set of circumstances. A common formulation is “AI can do a better job analyzing your data, but it can’t produce more data or improve the quality of the data. Garbage in, garbage out”.
But I think that pessimistic perspective is thinking about AI in the wrong way. If our core hypothesis about AI progress is correct, then the right way to think of AI is not as a method of data analysis, but as a virtual biologist who performs all the tasks biologists do, including designing and running experiments in the real world (by controlling lab robots or simply telling humans which experiments to run – as a Principal Investigator would to their graduate students), inventing new biological methods or measurement techniques, and so on. It is by speeding up the whole research process that AI can truly accelerate biology. I want to repeat this because it’s the most common misconception that comes up when I talk about AI’s ability to transform biology: I am not talking about AI as merely a tool to analyze data. In line with the definition of powerful AI at the beginning of this essay, I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do.
To get more specific on where I think acceleration is likely to come from, a surprisingly large fraction of the progress in biology has come from a truly tiny number of discoveries, often related to broad measurement tools or techniques that allow precise but generalized or programmable intervention in biological systems. There’s perhaps ~1 of these major discoveries per year and collectively they arguably drive >50% of progress in biology. These discoveries are so powerful precisely because they cut through intrinsic complexity and data limitations, directly increasing our understanding and control over biological processes. A few discoveries per decade have enabled both the bulk of our basic scientific understanding of biology, and have driven many of the most powerful medical treatments.
Some examples include:
- CRISPR: a technique that allows live editing of any gene in living organisms (replacement of any arbitrary gene sequence with any other arbitrary sequence). Since the original technique was developed, there have been constant improvements to target specific cell types, increasing accuracy, and reducing edits of the wrong gene—all of which are needed for safe use in humans.
- Various kinds of microscopy for watching what is going on at a precise level: advanced light microscopes (with various kinds of fluorescent techniques, special optics, etc), electron microscopes, atomic force microscopes, etc.
- Genome sequencing and synthesis, which has dropped in cost by several orders of magnitude in the last couple decades.
- Optogenetic techniques that allow you to get a neuron to fire by shining a light on it.
- mRNA vaccines that, in principle, allow us to design a vaccine against anything and then quickly adapt it (mRNA vaccines of course became famous during COVID).
- Cell therapies such as CAR-T that allow immune cells to be taken out of the body and “reprogrammed” to attack, in principle, anything.
- Conceptual insights like the germ theory of disease or the realization of a link between the immune system and cancer.
I’m going to the trouble of listing all these technologies because I want to make a crucial claim about them: I think their rate of discovery could be increased by 10x or more if there were a lot more talented, creative researchers. Or, put another way, I think the returns to intelligence are high for these discoveries, and that everything else in biology and medicine mostly follows from them.
Why do I think this? Because of the answers to some questions that we should get in the habit of asking when we’re trying to determine “returns to intelligence”. First, these discoveries are generally made by a tiny number of researchers, often the same people repeatedly, suggesting skill and not random search (the latter might suggest lengthy experiments are the limiting factor). Second, they often “could have been made” years earlier than they were: for example, CRISPR was a naturally occurring component of the immune system in bacteria that’s been known since the 80’s, but it took another 25 years for people to realize it could be repurposed for general gene editing. They also are often delayed many years by lack of support from the scientific community for promising directions (see this profile on the inventor of mRNA vaccines; similar stories abound). Third, successful projects are often scrappy or were afterthoughts that people didn’t initially think were promising, rather than massively funded efforts. This suggests that it’s not just massive resource concentration that drives discoveries, but ingenuity.
Finally, although some of these discoveries have “serial dependence” (you need to make discovery A first in order to have the tools or knowledge to make discovery B)—which again might create experimental delays—many, perhaps most, are independent, meaning many at once can be worked on in parallel. Both these facts, and my general experience as a biologist, strongly suggest to me that there are hundreds of these discoveries waiting to be made if scientists were smarter and better at making connections between the vast amount of biological knowledge humanity possesses (again consider the CRISPR example). The success of AlphaFold/AlphaProteo at solving important problems much more effectively than humans, despite decades of carefully designed physics modeling, provides a proof of principle (albeit with a narrow tool in a narrow domain) that should point the way forward.
Thus, it’s my guess that powerful AI could at least 10x the rate of these discoveries, giving us the next 50-100 years of biological progress in 5-10 years. Why not 100x? Perhaps it is possible, but here both serial dependence and experiment times become important: getting 100 years of progress in 1 year requires a lot of things to go right the first time, including animal experiments and things like designing microscopes or expensive lab facilities. I’m actually open to the (perhaps absurd-sounding) idea that we could get 1000 years of progress in 5-10 years, but very skeptical that we can get 100 years in 1 year. Another way to put it is I think there’s an unavoidable constant delay: experiments and hardware design have a certain “latency” and need to be iterated upon a certain “irreducible” number of times in order to learn things that can’t be deduced logically. But massive parallelism may be possible on top of that.
What about clinical trials? Although there is a lot of bureaucracy and slowdown associated with them, the truth is that a lot (though by no means all!) of their slowness ultimately derives from the need to rigorously evaluate drugs that barely work or ambiguously work. This is sadly true of most therapies today: the average cancer drug increases survival by a few months while having significant side effects that need to be carefully measured (there’s a similar story for Alzheimer’s drugs). This leads to huge studies (in order to achieve statistical power) and difficult tradeoffs which regulatory agencies generally aren’t great at making, again because of bureaucracy and the complexity of competing interests.
When something works really well, it goes much faster: there’s an accelerated approval track and the ease of approval is much greater when effect sizes are larger. mRNA vaccines for COVID were approved in 9 months—much faster than the usual pace. That said, even under these conditions clinical trials are still too slow—mRNA vaccines arguably should have been approved in ~2 months. But these kinds of delays (~1 year end-to-end for a drug) combined with massive parallelization and the need for some but not too much iteration (“a few tries”) are very compatible with radical transformation in 5-10 years. Even more optimistically, it is possible that AI-enabled biological science will reduce the need for iteration in clinical trials by developing better animal and cell experimental models (or even simulations) that are more accurate in predicting what will happen in humans. This will be particularly important in developing drugs against the aging process, which plays out over decades and where we need a faster iteration loop.
Finally, on the topic of clinical trials and societal barriers, it is worth pointing out explicitly that in some ways biomedical innovations have an unusually strong track record of being successfully deployed, in contrast to some other technologies. As mentioned in the introduction, many technologies are hampered by societal factors despite working well technically. This might suggest a pessimistic perspective on what AI can accomplish. But biomedicine is unique in that although the process of developing drugs is overly cumbersome, once developed they generally are successfully deployed and used.
To summarize the above, my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the “compressed 21st century”: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.
Although predicting what powerful AI can do in a few years remains inherently difficult and speculative, there is some concreteness to asking “what could humans do unaided in the next 100 years?”. Simply looking at what we’ve accomplished in the 20th century, or extrapolating from the first 2 decades of the 21st, or asking what “10 CRISPR’s and 50 CAR-T’s” would get us, all offer practical, grounded ways to estimate the general level of progress we might expect from powerful AI.
Below I try to make a list of what we might expect. This is not based on any rigorous methodology, and will almost certainly prove wrong in the details, but it’s trying to get across the general level of radicalism we should expect:
- Reliable prevention and treatment of nearly all natural infectious disease. Given the enormous advances against infectious disease in the 20th century, it is not radical to imagine that we could more or less “finish the job” in a compressed 21st. mRNA vaccines and similar technology already point the way towards “vaccines for anything”. Whether infectious disease is fully eradicated from the world (as opposed to just in some places) depends on questions about poverty and inequality, which are discussed in Section 3.
- Elimination of most cancer. Death rates from cancer have been dropping ~2% per year for the last few decades; thus we are on track to eliminate most cancer in the 21st century at the current pace of human science. Some subtypes have already been largely cured (for example some types of leukemia with CAR-T therapy), and I’m perhaps even more excited for very selective drugs that target cancer in its infancy and prevent it from ever growing. AI will also make possible treatment regimens very finely adapted to the individualized genome of the cancer—these are possible today, but hugely expensive in time and human expertise, which AI should allow us to scale. Reductions of 95% or more in both mortality and incidence seem possible. That said, cancer is extremely varied and adaptive, and is likely the hardest of these diseases to fully destroy. It would not be surprising if an assortment of rare, difficult malignancies persists.
- Very effective prevention and effective cures for genetic disease. Greatly improved embryo screening will likely make it possible to prevent most genetic disease, and some safer, more reliable descendant of CRISPR may cure most genetic disease in existing people. Whole-body afflictions that affect a large fraction of cells may be the last holdouts, however.
- Prevention of Alzheimer’s. We’ve had a very hard time figuring out what causes Alzheimer’s (it is somehow related to beta-amyloid protein, but the actual details seem to be very complex). It seems like exactly the type of problem that can be solved with better measurement tools that isolate biological effects; thus I am bullish about AI’s ability to solve it. There is a good chance it can eventually be prevented with relatively simple interventions, once we actually understand what is going on. That said, damage from already-existing Alzheimer’s may be very difficult to reverse.
- Improved treatment of most other ailments. This is a catch-all category for other ailments including diabetes, obesity, heart disease, autoimmune diseases, and more. Most of these seem “easier” to solve than cancer and Alzheimer’s and in many cases are already in steep decline. For example, deaths from heart disease have already declined over 50%, and simple interventions like GLP-1 agonists have already made huge progress against obesity and diabetes.
- Biological freedom. The last 70 years featured advances in birth control, fertility, management of weight, and much more. But I suspect AI-accelerated biology will greatly expand what is possible: weight, physical appearance, reproduction, and other biological processes will be fully under people’s control. We’ll refer to these under the heading of biological freedom: the idea that everyone should be empowered to choose what they want to become and live their lives in the way that most appeals to them. There will of course be important questions about global equality of access; see Section 3 for these.
- Doubling of the human lifespan. This might seem radical, but life expectancy increased almost 2x in the 20th century (from ~40 years to ~75), so it’s “on trend” that the “compressed 21st” would double it again to 150. Obviously the interventions involved in slowing the actual aging process will be different from those that were needed in the last century to prevent (mostly childhood) premature deaths from disease, but the magnitude of change is not unprecedented. Concretely, there already exist drugs that increase maximum lifespan in rats by 25-50% with limited ill-effects. And some animals (e.g. some types of turtle) already live 200 years, so humans are manifestly not at some theoretical upper limit. At a guess, the most important thing that is needed might be reliable, non-Goodhart-able biomarkers of human aging, as that will allow fast iteration on experiments and clinical trials. Once human lifespan is 150, we may be able to reach “escape velocity”, buying enough time that most of those currently alive today will be able to live as long as they want, although there’s certainly no guarantee this is biologically possible.
It is worth looking at this list and reflecting on how different the world will be if all of it is achieved 7-12 years from now (which would be in line with an aggressive AI timeline). It goes without saying that it would be an unimaginable humanitarian triumph, the elimination all at once of most of the scourges that have haunted humanity for millennia. Many of my friends and colleagues are raising children, and when those children grow up, I hope that any mention of disease will sound to them the way scurvy, smallpox, or bubonic plague sounds to us. That generation will also benefit from increased biological freedom and self-expression, and with luck may also be able to live as long as they want.
It’s hard to overestimate how surprising these changes will be to everyone except the small community of people who expected powerful AI. For example, thousands of economists and policy experts in the US currently debate how to keep Social Security and Medicare solvent, and more broadly how to keep down the cost of healthcare (which is mostly consumed by those over 70 and especially those with terminal illnesses such as cancer). The situation for these programs is likely to be radically improved if all this comes to pass, as the ratio of working age to retired population will change drastically. No doubt these challenges will be replaced with others, such as how to ensure widespread access to the new technologies, but it is worth reflecting on how much the world will change even if biology is the only area to be successfully accelerated by AI.
1. Biology and health
Biology is probably the area where scientific progress has the greatest potential to directly and unambiguously improve the quality of human life. In the "limiting factors" language above, the main challenges in applying intelligence directly to biology are data, the speed of the physical world, and intrinsic complexity (the three are in fact interrelated).
Data and the speed of the physical world. Experiments on cells, animals, and even chemical processes are limited by the speed of the physical world: many biological protocols involve culturing bacteria or other cells, or simply waiting for chemical reactions, and there is no obvious way to speed this up. Animal experiments take months; human experiments often take years, sometimes decades. The data is lacking not only in quantity but in quality: there is always a dearth of clear, unambiguous data that isolates a biological effect of interest from confounding factors, intervenes causally in a process, or directly measures an effect.
Intrinsic complexity. Part of the data problem stems from intrinsic complexity. Anyone who has seen a diagram of the biochemistry of human metabolism knows how hard it is to isolate the effect of any one part of this complex system, and harder still to intervene in it in a precise or predictable way.
Beyond this, actual clinical trials involve a great deal of bureaucracy and regulatory requirements that (in the opinion of many people, including me) add unnecessary time and delay progress. On trials and societal barriers, it is worth noting explicitly that biomedical innovations have an unusually strong track record of successful deployment compared with other technologies. Many technologies work well technically yet struggle because of societal factors, which might suggest pessimism about what AI can accomplish. But biomedicine is distinctive: although the drug development process is overly cumbersome, once drugs are developed they are generally deployed and used successfully.
Given all this, many biologists have been skeptical of the value of AI and "big data" in biology. The mathematicians, computer scientists, and physicists who applied their skills to biology over the last 30 years were quite successful, yet did not have the truly transformative impact initially hoped for. Revolutionary breakthroughs like AlphaFold and AlphaProteo have reduced some of that skepticism, but the perception remains that AI is (and will continue to be) useful only in a limited set of circumstances. A common formulation: "AI can do a better job analyzing your data, but it can't produce more data or improve the quality of the data. Garbage in, garbage out."
But I think that pessimistic view misunderstands AI. If our core hypothesis about AI progress is correct, AI should be thought of not as a mere data-analysis tool but as a "virtual biologist who performs all the tasks biologists do".
A surprisingly large fraction of progress in biology has come from a truly tiny number of discoveries, often related to tools or techniques that allow precise but generalized or programmable intervention in biological systems. There is roughly one such major discovery per year, and collectively they arguably drive more than 50% of progress in biology. They are powerful precisely because they cut through intrinsic complexity and data limitations, directly increasing our understanding of and control over biological processes. Examples include CRISPR, various kinds of precise microscopy, genome sequencing and synthesis, optogenetics, and mRNA vaccines. I think the rate of such discoveries could increase 10x or more, because they are generally made by a small number of researchers, often the same people repeatedly: the result of skill and creativity, not random search.
Examples of advances AI could accelerate: prevention and treatment of infectious disease, prevention and cures for genetic disease, biological freedom (appearance, reproduction, weight, and so on), a doubling of the human lifespan, elimination of most cancer, prevention of Alzheimer's, and improved treatment of most other ailments (diabetes, obesity, heart disease, etc.), most of which seem easier to solve than cancer or Alzheimer's. I expect AI to accelerate this progress 10x or more, compressing 50-100 years of biology into 5-10 years; the sketch below makes the arithmetic concrete.
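The arithmetic behind that compression is simple. The 10x speedup, the ~2%-per-year decline in cancer mortality, and the 40-to-75-year life-expectancy jump are all figures cited in the essay; combining them this way is just an illustration:

```python
# "Compressed 21st century": a 10x rate of discovery turns a century of
# biology into a decade of calendar time.
progress_years, speedup = 100, 10
print(progress_years / speedup)   # -> 10.0 calendar years

# Cancer mortality falling ~2%/year compounds to ~87% lower in a century:
print(1 - 0.98 ** 100)            # -> ~0.87

# Lifespan trend: ~40 -> ~75 in the 20th century (~1.9x); doubling again
# from 75 gives the essay's 150.
print(75 / 40, 75 * 2)            # -> 1.875, 150
```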
It is hard to overstate how surprising these changes will be to everyone except the small community that expected powerful AI. For example, thousands of economists and policy experts in the US currently debate how to keep Social Security and Medicare solvent and how to bring down healthcare costs (most of which are consumed by those over 70, especially patients with terminal illnesses such as cancer). If the advances above come to pass, the situation for these programs is likely to improve radically, since the ratio of working-age to retired population will change drastically. These challenges will no doubt be replaced by others, such as ensuring broad access to the new technologies, but it is worth reflecting on how much the world would change even if biology were the only area AI successfully accelerated.
Original text
2. Neuroscience and mind
In the previous section I focused on physical diseases and biology in general, and didn’t cover neuroscience or mental health. But neuroscience is a subdiscipline of biology and mental health is just as important as physical health. In fact, if anything, mental health affects human well-being even more directly than physical health. Hundreds of millions of people have very low quality of life due to problems like addiction, depression, schizophrenia, low-functioning autism, PTSD, psychopathy, or intellectual disabilities. Billions more struggle with everyday problems that can often be interpreted as much milder versions of one of these severe clinical disorders. And as with general biology, it may be possible to go beyond addressing problems to improving the baseline quality of human experience.
The basic framework that I laid out for biology applies equally to neuroscience. The field is propelled forward by a small number of discoveries often related to tools for measurement or precise intervention – in the list of those above, optogenetics was a neuroscience discovery, and more recently CLARITY and expansion microscopy are advances in the same vein, in addition to many of the general cell biology methods directly carrying over to neuroscience. I think the rate of these advances will be similarly accelerated by AI and therefore that the framework of “100 years of progress in 5-10 years” applies to neuroscience in the same way it does to biology and for the same reasons. As in biology, the progress in 20th century neuroscience was enormous – for example we didn’t even understand how or why neurons fired until the 1950’s. Thus, it seems reasonable to expect AI-accelerated neuroscience to produce rapid progress over a few years.
There is one thing we should add to this basic picture, which is that some of the things we’ve learned (or are learning) about AI itself in the last few years are likely to help advance neuroscience, even if it continues to be done only by humans. Interpretability is an obvious example: although biological neurons superficially operate in a completely different manner from artificial neurons (they communicate via spikes and often spike rates, so there is a time element not present in artificial neurons, and a bunch of details relating to cell physiology and neurotransmitters modifies their operation substantially), the basic question of “how do distributed, trained networks of simple units that perform combined linear/non-linear operations work together to perform important computations” is the same, and I strongly suspect the details of individual neuron communication will be abstracted away in most of the interesting questions about computation and circuits. As just one example of this, a computational mechanism discovered by interpretability researchers in AI systems was recently rediscovered in the brains of mice.
It is much easier to do experiments on artificial neural networks than on real ones (the latter often requires cutting into animal brains), so interpretability may well become a tool for improving our understanding of neuroscience. Furthermore, powerful AI’s will themselves probably be able to develop and apply this tool better than humans can.
Beyond just interpretability though, what we have learned from AI about how intelligent systems are trained should (though I am not sure it has yet) cause a revolution in neuroscience. When I was working in neuroscience, a lot of people focused on what I would now consider the wrong questions about learning, because the concept of the scaling hypothesis / bitter lesson didn’t exist yet. The idea that a simple objective function plus a lot of data can drive incredibly complex behaviors makes it more interesting to understand the objective functions and architectural biases and less interesting to understand the details of the emergent computations. I have not followed the field closely in recent years, but I have a vague sense that computational neuroscientists have still not fully absorbed the lesson. My attitude to the scaling hypothesis has always been “aha – this is an explanation, at a high level, of how intelligence works and how it so easily evolved”, but I don’t think that’s the average neuroscientist’s view, in part because the scaling hypothesis as “the secret to intelligence” isn’t fully accepted even within AI.
I think that neuroscientists should be trying to combine this basic insight with the particularities of the human brain (biophysical limitations, evolutionary history, topology, details of motor and sensory inputs/outputs) to try to figure out some of neuroscience’s key puzzles. Some likely are, but I suspect it’s not enough yet, and that AI neuroscientists will be able to more effectively leverage this angle to accelerate progress.
I expect AI to accelerate neuroscientific progress along four distinct routes, all of which can hopefully work together to cure mental illness and improve function:
- Traditional molecular biology, chemistry, and genetics. This is essentially the same story as general biology in section 1, and AI can likely speed it up via the same mechanisms. There are many drugs that modulate neurotransmitters in order to alter brain function, affect alertness or perception, change mood, etc., and AI can help us invent many more. AI can probably also accelerate research on the genetic basis of mental illness.
- Fine-grained neural measurement and intervention. This is the ability to measure what a lot of individual neurons or neuronal circuits are doing, and intervene to change their behavior. Optogenetics and neural probes are technologies capable of both measurement and intervention in live organisms, and a number of very advanced methods (such as molecular ticker tapes to read out the firing patterns of large numbers of individual neurons) have also been proposed and seem possible in principle.
- Advanced computational neuroscience. As noted above, both the specific insights and the gestalt of modern AI can probably be applied fruitfully to questions in systems neuroscience, including perhaps uncovering the real causes and dynamics of complex diseases like psychosis or mood disorders.
- Behavioral interventions. I haven’t much mentioned it given the focus on the biological side of neuroscience, but psychiatry and psychology have of course developed a wide repertoire of behavioral interventions over the 20th century; it stands to reason that AI could accelerate these as well, both the development of new methods and helping patients to adhere to existing methods. More broadly, the idea of an “AI coach” who always helps you to be the best version of yourself, who studies your interactions and helps you learn to be more effective, seems very promising.
It’s my guess that these four routes of progress working together would, as with physical disease, be on track to lead to the cure or prevention of most mental illness in the next 100 years even if AI was not involved – and thus might reasonably be completed in 5-10 AI-accelerated years. Concretely my guess at what will happen is something like:
- Most mental illness can probably be cured. I’m not an expert in psychiatric disease (my time in neuroscience was spent building probes to study small groups of neurons) but it’s my guess that diseases like PTSD, depression, schizophrenia, addiction, etc. can be figured out and very effectively treated via some combination of the four directions above. The answer is likely to be some combination of “something went wrong biochemically” (although it could be very complex) and “something went wrong with the neural network, at a high level”. That is, it’s a systems neuroscience question—though that doesn’t gainsay the impact of the behavioral interventions discussed above. Tools for measurement and intervention, especially in live humans, seem likely to lead to rapid iteration and progress.
- Conditions that are very “structural” may be more difficult, but not impossible. There’s some evidence that psychopathy is associated with obvious neuroanatomical differences – that some brain regions are simply smaller or less developed in psychopaths. Psychopaths are also believed to lack empathy from a young age; whatever is different about their brain, it was probably always that way. The same may be true of some intellectual disabilities, and perhaps other conditions. Restructuring the brain sounds hard, but it also seems like a task with high returns to intelligence. Perhaps there is some way to coax the adult brain into an earlier or more plastic state where it can be reshaped. I’m very uncertain how possible this is, but my instinct is to be optimistic about what AI can invent here.
- Effective genetic prevention of mental illness seems possible. Most mental illness is partially heritable, and genome-wide association studies are starting to gain traction on identifying the relevant factors, which are often many in number. It will probably be possible to prevent most of these diseases via embryo screening, similar to the story with physical disease. One difference is that psychiatric disease is more likely to be polygenic (many genes contribute), so due to complexity there’s an increased risk of unknowingly selecting against positive traits that are correlated with disease. Oddly however, in recent years GWAS studies seem to suggest that these correlations might have been overstated. In any case, AI-accelerated neuroscience may help us to figure these things out. Of course, embryo screening for complex traits raises a number of societal issues and will be controversial, though I would guess that most people would support screening for severe or debilitating mental illness.
- Everyday problems that we don’t think of as clinical disease will also be solved. Most of us have everyday psychological problems that are not ordinarily thought of as rising to the level of clinical disease. Some people are quick to anger, others have trouble focusing or are often drowsy, some are fearful or anxious, or react badly to change. Today, drugs already exist to help with e.g. alertness or focus (caffeine, modafinil, ritalin) but as with many other previous areas, much more is likely to be possible. Probably many more such drugs exist and have not been discovered, and there may also be totally new modalities of intervention, such as targeted light stimulation (see optogenetics above) or magnetic fields. Given how many drugs we’ve developed in the 20th century that tune cognitive function and emotional state, I’m very optimistic about the “compressed 21st” where everyone can get their brain to behave a bit better and have a more fulfilling day-to-day experience.
- Human baseline experience can be much better. Taking one step further, many people have experienced extraordinary moments of revelation, creative inspiration, compassion, fulfillment, transcendence, love, beauty, or meditative peace. The character and frequency of these experiences differs greatly from person to person and within the same person at different times, and can also sometimes be triggered by various drugs (though often with side effects). All of this suggests that the “space of what is possible to experience” is very broad and that a larger fraction of people’s lives could consist of these extraordinary moments. It is probably also possible to improve various cognitive functions across the board. This is perhaps the neuroscience version of “biological freedom” or “extended lifespans”.
One topic that often comes up in sci-fi depictions of AI, but that I intentionally haven’t discussed here, is “mind uploading”, the idea of capturing the pattern and dynamics of a human brain and instantiating them in software. This topic could be the subject of an essay all by itself, but suffice it to say that while I think uploading is almost certainly possible in principle, in practice it faces significant technological and societal challenges, even with powerful AI, that likely put it outside the 5-10 year window we are discussing.
In summary, AI-accelerated neuroscience is likely to vastly improve treatments for, or even cure, most mental illness as well as greatly expand “cognitive and mental freedom” and human cognitive and emotional abilities. It will be every bit as radical as the improvements in physical health described in the previous section. Perhaps the world will not be visibly different on the outside, but the world as experienced by humans will be a much better and more humane place, as well as a place that offers greater opportunities for self-actualization. I also suspect that improved mental health will ameliorate a lot of other societal problems, including ones that seem political or economic.
2. Neuroscience and mind
신경과학은 생물학의 하위 분류이고 정신 건강은 신체 건강만큼 중요하다. 수억 명의 사람들이 중독, 우울증, 조현병, 자폐증, PTSD, 사이코패스, 지적 장애 등의 문제로 낮은 삶의 질을 겪고 있다. 생물학과 마찬가지로 인간 경험의 기본적 질을 향상시키는 것이 AI로 가능할 수 있다. 내가 생물학에서 제시한 기본 프레임워크는 신경과학에도 동일하게 적용된다. 신경과학 분야는 측정 도구나 정밀한 개입과 관련된 몇 가지 발견에 의해 추진된다. optogenetics는 신경과학의 발견이었고 최근에는 CLARITY와 Expansion Microscopy와 같은 방법들이 비슷한 방향으로 발전하고 있다. 또한 많은 일반 세포 생물학 방법들이 신경과학에도 직접적으로 적용되고 있다. 따라서 가속화는 신경과학에도 동일하게 적용된다고 본다.
One thing to add to this basic picture: what we have learned about AI in recent years is likely to help advance neuroscience, even if neuroscience continues to be done by humans. Interpretability is the obvious example. Biological cells (neurons) operate in a superficially very different way from artificial neural networks (biological neurons communicate through spikes and spike rates, which adds a temporal element absent from artificial networks, and details like cell physiology and neurotransmitters strongly affect their behavior). Nevertheless, the basic question is the same: how do distributed, trained networks of simple units that perform combined linear/non-linear operations work together to perform important computations? I strongly suspect that the details of individual neuron communication will abstract away from most of the interesting questions about computation and circuits. For example, a computational mechanism discovered by AI interpretability researchers was recently rediscovered in the brains of mice.
Experiments on artificial neural networks are far easier than experiments on real brains, so interpretability can become a tool for improving our understanding of neuroscience, and powerful AI will probably develop and apply that tool better than humans can.
Beyond interpretability, what AI has taught us about how intelligent systems are trained is likely to revolutionize neuroscience. When I worked in neuroscience, many people focused on the wrong questions, because the scaling hypothesis (the idea that a system's performance grows as data and model scale grow) and the Bitter Lesson did not exist yet. The idea that a simple objective plus vast data can drive extremely complex behavior makes understanding objective functions and architectural biases more interesting than understanding emergent computations. I have always thought of the scaling hypothesis as an explanation of how high-level intelligence works and how it evolved so easily, but that is not the average neuroscientist's view, and the scaling hypothesis is not fully accepted even within AI. I think neuroscientists should combine this basic insight with the particulars of the human brain (biophysical limitations, evolutionary history, topology, details of motor and sensory inputs/outputs) to attack the field's key puzzles.
This was one of the most striking passages for me. Viewed through neuroscience, the scaling hypothesis suggests that intelligence becomes more sophisticated not through the fine details of interactions between neurons (the analogue of emergent computations) but at the macroscopic scale, through data (for human intelligence, everything we see and hear) and model scale (the size and connectivity of the brain). In other words, the way we understand and improve model performance is worth applying to, or at least analyzing against, human intelligence and neuroscience. I had never considered this perspective, and I found myself nodding along. Applied to humans, the scaling hypothesis would say that the more experience and learning a person accumulates, and the larger the brain or the more complex its neural connections, the more elaborate and precise that person's thinking and information processing can be (a minimal sketch of a scaling-law fit follows below).
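To make the scaling hypothesis concrete, here is a minimal, purely illustrative sketch in Python: synthetic "loss versus scale" data generated from a made-up power law, then recovered with a log-log linear fit. None of the constants are real measurements.

```python
import numpy as np

# Purely illustrative: synthetic "loss vs. scale" data from a made-up power
# law, loss(N) = a * N**(-alpha) * noise. None of the constants are measured.
rng = np.random.default_rng(0)
N = np.logspace(6, 10, 20)                    # model sizes (parameters)
true_a, true_alpha = 400.0, 0.076             # hypothetical constants
loss = true_a * N ** (-true_alpha) * rng.lognormal(0.0, 0.01, N.size)

# A power law is a straight line in log-log space, so a linear fit recovers it.
slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
print(f"fitted alpha = {-slope:.3f} (true {true_alpha})")

# The scaling-hypothesis claim, in miniature: performance improves smoothly
# and predictably with scale, independent of circuit-level details.
for n in (1e8, 1e10, 1e12):
    print(f"N = {n:.0e} -> predicted loss = {np.exp(intercept) * n ** slope:.3f}")
```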
I expect AI to accelerate progress in neuroscience through the four routes below, and I hope they work together to help cure mental illness and improve human function.
- Traditional molecular biology, chemistry, and genetics
Many existing drugs modulate neurotransmitters to alter brain function, alertness, cognition, and mood. Through the same mechanisms described in Section 1, AI should help invent many more such drugs and accelerate research into the genetic basis of mental illness.
- Fine-grained neural measurement and intervention
Being able to measure what individual neurons and neural circuits are doing, and to intervene to change them.
- Advanced computational neuroscience
The insights AI gives us, and the overall structure of AI systems themselves, can be usefully applied to questions in neuroscience.
- Behavioral interventions
Beyond developing new methods, something like an "AI coach" could help patients actually stick with existing ones.
I think that if these four routes work together, most mental illness could be cured or prevented within 100 years even without AI, just as with physical disease, and that with AI-accelerated progress it could happen within 5-10 years. Concretely, I expect the following:
- Most mental illness can probably be cured.
- Very "structural" conditions may be difficult, but not impossible.
Psychopathy, for example, involves brain regions that are smaller or developed differently. If those brains are different, they were likely that way from the start, and rewiring a brain looks hard; but there may be ways to return an adult brain to an earlier, more flexible state in which it can be reorganized.
- Effective genetic prevention of mental illness seems possible (a minimal sketch of the polygenic scoring involved appears at the end of this section).
- Everyday problems that we don't think of as clinical disease will also be solved.
Given how many drugs we developed in the 20th century to tune cognitive function and emotional state, I am optimistic that in the accelerated "compressed 21st" everyone will be able to get their brain to behave a bit better and enjoy a more satisfying day-to-day experience.
- Human baseline experience can be made much better.
Many people have experienced moments of revelation, creative inspiration, compassion, fulfillment, transcendence, love, beauty, or meditative peace. The character and frequency of these experiences differ from person to person, and within the same person over time. More people could have these extraordinary moments in their lives, and various cognitive functions could be improved across the board. This is perhaps the neuroscience version of biological freedom or extended lifespans.
"Mind uploading"은 인간 뇌의 패턴과 다이나믹스를 포착해 그것을 SW로 구현하는 것이다. 나는 이 업로딩이 원칙적으로는 거의 확실히 가능하다고 생각하지만 실현에는 강력한 AI가 있다 하더라도 상당한 기술적, 사회적 도전 과제가 존재한다고 본다. 이는 5-10년 안에 이루어지기 어려운 일일 가능성이 높다.
In summary, AI-accelerated neuroscience is likely to improve or cure most mental illness, and to greatly expand cognitive and mental freedom and human cognitive and emotional abilities. Even if the outside world does not look different, the world as humans experience it will be a far better and more humane place. I also suspect that improved mental health will ease many other societal problems, including political and economic ones.
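As a companion to the genetic-prevention bullet above: embryo screening for polygenic conditions amounts to comparing weighted sums of many small genetic effects. The sketch below is hypothetical end to end; the effect sizes and genotypes are random stand-ins rather than real GWAS output.

```python
import numpy as np

# A minimal polygenic score: a weighted sum of risk-allele counts. The effect
# sizes and genotypes are random stand-ins, not real GWAS output.
rng = np.random.default_rng(42)
n_variants, n_embryos = 1_000, 5

effect_sizes = rng.normal(0.0, 0.02, n_variants)          # hypothetical betas
genotypes = rng.integers(0, 3, (n_embryos, n_variants))   # 0/1/2 risk alleles

scores = genotypes @ effect_sizes                          # one score per embryo
for i, s in enumerate(scores):
    print(f"embryo {i}: polygenic score = {s:+.3f}")
print("lowest-score embryo:", int(np.argmin(scores)))

# The complication the essay flags: scores summing thousands of tiny effects
# can also track traits merely *correlated* with the disease, so naive
# selection may unknowingly select against positive traits.
```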
Original text
3. Economic development and poverty
The previous two sections are about developing new technologies that cure disease and improve the quality of human life. However an obvious question, from a humanitarian perspective, is: “will everyone have access to these technologies?”
It is one thing to develop a cure for a disease, it is another thing to eradicate the disease from the world. More broadly, many existing health interventions have not yet been applied everywhere in the world, and for that matter the same is true of (non-health) technological improvements in general. Another way to say this is that living standards in many parts of the world are still desperately poor: GDP per capita is ~$2,000 in Sub-Saharan Africa as compared to ~$75,000 in the United States. If AI further increases economic growth and quality of life in the developed world, while doing little to help the developing world, we should view that as a terrible moral failure and a blemish on the genuine humanitarian victories in the previous two sections. Ideally, powerful AI should help the developing world catch up to the developed world, even as it revolutionizes the latter.
I am not as confident that AI can address inequality and economic growth as I am that it can invent fundamental technologies, because technology has such obvious high returns to intelligence (including the ability to route around complexities and lack of data) whereas the economy involves a lot of constraints from humans, as well as a large dose of intrinsic complexity. I am somewhat skeptical that an AI could solve the famous “socialist calculation problem”23 and I don’t think governments will (or should) turn over their economic policy to such an entity, even if it could do so. There are also problems like how to convince people to take treatments that are effective but that they may be suspicious of.
The challenges facing the developing world are made even more complicated by pervasive corruption in both private and public sectors. Corruption creates a vicious cycle: it exacerbates poverty, and poverty in turn breeds more corruption. AI-driven plans for economic development need to reckon with corruption, weak institutions, and other very human challenges.
Nevertheless, I do see significant reasons for optimism. Diseases have been eradicated and many countries have gone from poor to rich, and it is clear that the decisions involved in these tasks exhibit high returns to intelligence (despite human constraints and complexity). Therefore, AI can likely do them better than they are currently being done. There may also be targeted interventions that get around the human constraints and that AI could focus on. More importantly though, we have to try. Both AI companies and developed world policymakers will need to do their part to ensure that the developing world is not left out; the moral imperative is too great. So in this section, I’ll continue to make the optimistic case, but keep in mind everywhere that success is not guaranteed and depends on our collective efforts.
Below I make some guesses about how I think things may go in the developing world over the 5-10 years after powerful AI is developed:
- Distribution of health interventions. The area where I am perhaps most optimistic is distributing health interventions throughout the world. Diseases have actually been eradicated by top-down campaigns: smallpox was fully eliminated in the 1970’s, and polio and guinea worm are nearly eradicated with less than 100 cases per year. Mathematically sophisticated epidemiological modeling plays an active role in disease eradication campaigns, and it seems very likely that there is room for smarter-than-human AI systems to do a better job of it than humans are. The logistics of distribution can probably also be greatly optimized. One thing I learned as an early donor to GiveWell is that some health charities are way more effective than others; the hope is that AI-accelerated efforts would be more effective still. Additionally, some biological advances actually make the logistics of distribution much easier: for example, malaria has been difficult to eradicate because it requires treatment each time the disease is contracted; a vaccine that only needs to be administered once makes the logistics much simpler (and such vaccines for malaria are in fact currently being developed). Even simpler distribution mechanisms are possible: some diseases could in principle be eradicated by targeting their animal carriers, for example releasing mosquitoes infected with a bacterium that blocks their ability to carry a disease (who then infect all the other mosquitos) or simply using gene drives to wipe out the mosquitos. This requires one or a few centralized actions, rather than a coordinated campaign that must individually treat millions. Overall, I think 5-10 years is a reasonable timeline for a good fraction (maybe 50%) of AI-driven health benefits to propagate to even the poorest countries in the world. A good goal might be for the developing world 5-10 years after powerful AI to at least be substantially healthier than the developed world is today, even if it continues to lag behind the developed world. Accomplishing this will of course require a huge effort in global health, philanthropy, political advocacy, and many other efforts, which both AI developers and policymakers should help with.
- Economic growth. Can the developing world quickly catch up to the developed world, not just in health, but across the board economically? There is some precedent for this: in the final decades of the 20th century, several East Asian economies achieved sustained ~10% annual real GDP growth rates, allowing them to catch up with the developed world. Human economic planners made the decisions that led to this success, not by directly controlling entire economies but by pulling a few key levers (such as an industrial policy of export-led growth, and resisting the temptation to rely on natural resource wealth); it’s plausible that “AI finance ministers and central bankers” could replicate or exceed this 10% accomplishment. An important question is how to get developing world governments to adopt them while respecting the principle of self-determination—some may be enthusiastic about it, but others are likely to be skeptical. On the optimistic side, many of the health interventions in the previous bullet point are likely to organically increase economic growth: eradicating AIDS/malaria/parasitic worms would have a transformative effect on productivity, not to mention the economic benefits that some of the neuroscience interventions (such as improved mood and focus) would have in developed and developing world alike. Finally, non-health AI-accelerated technology (such as energy technology, transport drones, improved building materials, better logistics and distribution, and so on) may simply permeate the world naturally; for example, even cell phones quickly permeated sub-Saharan Africa via market mechanisms, without needing philanthropic efforts. On the more negative side, while AI and automation have many potential benefits, they also pose challenges for economic development, particularly for countries that haven't yet industrialized. Finding ways to ensure these countries can still develop and improve their economies in an age of increasing automation is an important challenge for economists and policymakers to address. Overall, a dream scenario—perhaps a goal to aim for—would be 20% annual GDP growth rate in the developing world, with 10% each coming from AI-enabled economic decisions and the natural spread of AI-accelerated technologies, including but not limited to health. If achieved, this would bring sub-Saharan Africa to the current per-capita GDP of China in 5-10 years, while raising much of the rest of the developing world to levels higher than the current US GDP. Again, this is a dream scenario, not what happens by default: it’s something all of us must work together to make more likely.
- Food security 24. Advances in crop technology like better fertilizers and pesticides, more automation, and more efficient land use drastically increased crop yields across the 20th Century, saving millions of people from hunger. Genetic engineering is currently improving many crops even further. Finding even more ways to do this—as well as to make agricultural supply chains even more efficient—could give us an AI-driven second Green Revolution, helping close the gap between the developing and developed world.
- Mitigating climate change. Climate change will be felt much more strongly in the developing world, hampering its development. We can expect that AI will lead to improvements in technologies that slow or prevent climate change, from atmospheric carbon-removal and clean energy technology to lab-grown meat that reduces our reliance on carbon-intensive factory farming. Of course, as discussed above, technology isn’t the only thing restricting progress on climate change—as with all of the other issues discussed in this essay, human societal factors are important. But there’s good reason to think that AI-enhanced research will give us the means to make mitigating climate change far less costly and disruptive, rendering many of the objections moot and freeing up developing countries to make more economic progress.
- Inequality within countries. I’ve mostly talked about inequality as a global phenomenon (which I do think is its most important manifestation), but of course inequality also exists within countries. With advanced health interventions and especially radical increases in lifespan or cognitive enhancement drugs, there will certainly be valid worries that these technologies are “only for the rich”. I am more optimistic about within-country inequality especially in the developed world, for two reasons. First, markets function better in the developed world, and markets are typically good at bringing down the cost of high-value technologies over time25. Second, developed world political institutions are more responsive to their citizens and have greater state capacity to execute universal access programs—and I expect citizens to demand access to technologies that so radically improve quality of life. Of course it’s not predetermined that such demands succeed—and here is another place where we collectively have to do all we can to ensure a fair society. There is a separate problem in inequality of wealth (as opposed to inequality of access to life-saving and life-enhancing technologies), which seems harder and which I discuss in Section 5.
- The opt-out problem. One concern in both developed and developing world alike is people opting out of AI-enabled benefits (similar to the anti-vaccine movement, or Luddite movements more generally). There could end up being bad feedback cycles where, for example, the people who are least able to make good decisions opt out of the very technologies that improve their decision-making abilities, leading to an ever-increasing gap and even creating a dystopian underclass (some researchers have argued that this will undermine democracy, a topic I discuss further in the next section). This would, once again, place a moral blemish on AI’s positive advances. This is a difficult problem to solve as I don’t think it is ethically okay to coerce people, but we can at least try to increase people’s scientific understanding—and perhaps AI itself can help us with this. One hopeful sign is that historically anti-technology movements have been more bark than bite: railing against modern technology is popular, but most people adopt it in the end, at least when it’s a matter of individual choice. Individuals tend to adopt most health and consumer technologies, while technologies that are truly hampered, like nuclear power, tend to be collective political decisions.
Overall, I am optimistic about quickly bringing AI’s biological advances to people in the developing world. I am hopeful, though not confident, that AI can also enable unprecedented economic growth rates and allow the developing world to at least surpass where the developed world is now. I am concerned about the “opt out” problem in both the developed and developing world, but suspect that it will peter out over time and that AI can help accelerate this process. It won’t be a perfect world, and those who are behind won’t fully catch up, at least not in the first few years. But with strong efforts on our part, we may be able to get things moving in the right direction—and fast. If we do, we can make at least a downpayment on the promises of dignity and equality that we owe to every human being on earth.
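A quick sanity check on the "dream scenario" arithmetic above, using the essay's own rough figures (per-capita GDP near $2,000 and 20% annual growth); exact GDP values vary by source:

```python
# Sanity-checking the "dream scenario": per-capita GDP of roughly $2,000
# growing at 20% per year (10% from AI-enabled economic decisions, 10% from
# technology diffusion). Starting values are rough figures, not precise data.
start = 2_000          # approx. per-capita GDP, sub-Saharan Africa (USD)
rate = 0.20            # 20% annual growth

for years in (5, 10):
    print(f"after {years:2d} years: ${start * (1 + rate) ** years:,.0f}")
# after  5 years: $4,977
# after 10 years: $12,383  -- roughly China's current per-capita GDP
```

Ten years at 20% compounds to about a 6.2x multiple, which is what places sub-Saharan Africa near China's current per-capita GDP in this scenario.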
3. Economic development and poverty
The previous sections were about technologies that cure disease and improve quality of life. But an obvious question follows: will these technologies be available to everyone?
If AI boosts economic growth and quality of life in the developed world while doing little for the developing world, that would be a terrible moral failure, a blemish on the genuine humanitarian victories above. Because the economy involves human constraints and a large dose of intrinsic complexity, I am not as confident that AI can solve economic growth and inequality. I am also skeptical that AI could solve the socialist calculation problem, and even if it could, I don't think governments would (or should) hand their economic policy over to such an entity. There is also the problem of convincing people to accept treatments that are effective but that they are suspicious of.
The challenges facing the developing world are made more complicated by pervasive corruption in both the private and public sectors; corruption creates a vicious cycle, worsening poverty while poverty breeds more corruption. Nevertheless, I see significant reasons for optimism. Diseases have been eradicated, many countries have gone from poor to rich, and the decisions involved clearly show high returns to intelligence. AI can therefore likely do these things better than they are currently being done, and it could focus on targeted interventions that get around human constraints. More importantly, we have to try; the moral imperative is too great.
Below are my guesses about how things may go in the developing world over the 5-10 years after powerful AI is developed.
- Distribution of health interventions
Diseases have been eradicated before through top-down campaigns (smallpox, polio, guinea worm, and so on). Mathematically sophisticated epidemiological modeling plays an important role in eradication campaigns, and AI is very likely to handle it better than humans do; a toy model in this spirit appears at the end of this section. For example, malaria has been hard to eradicate because it requires treatment every time the disease is contracted, whereas a vaccine that only needs to be administered once makes the logistics far simpler. Even simpler approaches are possible: releasing mosquitoes infected with a bacterium that blocks their ability to carry the disease, or wiping out mosquitoes with gene drives. These require one or a few centralized actions instead of individually treating millions of people.
- Economic growth
In the closing decades of the 20th century, several East Asian economies caught up with the developed world. They succeeded not by directly controlling their entire economies but by pulling a few key levers (an industrial policy of export-led growth, resisting the temptation to rely on natural resource wealth, and so on). "AI finance ministers and central bankers" might replicate or exceed that achievement. An important question is how to get developing-world governments to adopt them while respecting the principle of self-determination. On the optimistic side, the health interventions above are likely to organically increase economic growth.
- Food security
Across the 20th century, better fertilizers and pesticides, automation, and more efficient land use drastically increased crop yields. Finding even more such methods with AI could enable a second Green Revolution and help close the gap between the developed and developing world.
- Mitigating climate change
There is good reason to think AI will make mitigating climate change far less costly and disruptive, which could render many of the objections moot and free developing countries to make more economic progress.
- Inequality within countries
There will be valid worries that these technologies are "only for the rich." But markets work better in the developed world and tend to bring down the cost of high-value technologies over time, and developed-world political institutions are more responsive to their citizens and have greater state capacity to run universal access programs, so I am relatively optimistic about within-country inequality.
- The opt-out problem
As with the anti-vaccine movement, or Luddite movements more generally, some people will opt out of AI-enabled benefits. The people least able to make good decisions may reject the very technologies that would serve their interests, widening the gap and even creating a dystopian underclass; some researchers argue this could undermine democracy. Coercing people is not ethically acceptable, so we should instead work to raise scientific understanding, and AI itself may help with that. Encouragingly, most people do adopt technology in the end, at least where it is a matter of individual choice; technologies that are truly held back, like nuclear power, tend to be the result of collective political decisions.
Even if the opt-out problem lingers in both the developed and developing world for the first few years, with strong effort on all our parts we can get things moving in the right direction, and fast. If we do, we can make at least a down payment on the promise of dignity and equality that we owe every human being on earth.
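As promised in the health-interventions bullet above, here is a toy version of the epidemiological modeling used in eradication campaigns: a bare-bones SIR simulation with invented parameters, showing the kind of question such models answer, namely how much intervention pushes an outbreak below the self-sustaining threshold.

```python
# A bare-bones SIR epidemic model with invented parameters -- a toy version
# of the modeling used in eradication campaigns, not a real disease model.
beta, gamma = 0.3, 0.1      # hypothetical transmission / recovery rates per day
R0 = beta / gamma           # basic reproduction number
print(f"R0 = {R0:.1f}, herd-immunity threshold = {1 - 1 / R0:.0%}")

def attack_rate(vaccinated: float, days: int = 1000) -> float:
    """Fraction of the total population ever infected."""
    s, i = (1.0 - vaccinated) - 1e-4, 1e-4   # susceptible, infected fractions
    for _ in range(days):
        new_inf = beta * s * i
        s, i = s - new_inf, i + new_inf - gamma * i
    return (1.0 - vaccinated) - s

# Above the herd-immunity threshold, the outbreak fizzles instead of spreading.
for v in (0.0, 0.5, 0.7):
    print(f"vaccination {v:.0%} -> ever infected {attack_rate(v):.1%}")
```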
Original text
4. Peace and governance
Suppose that everything in the first three sections goes well: disease, poverty, and inequality are significantly reduced and the baseline of human experience is raised substantially. It does not follow that all major causes of human suffering are solved. Humans are still a threat to each other. Although there is a trend of technological improvement and economic development leading to democracy and peace, it is a very loose trend, with frequent (and recent) backsliding. At the dawn of the 20th Century, people thought they had put war behind them; then came the two world wars. Thirty years ago Francis Fukuyama wrote about “the End of History” and a final triumph of liberal democracy; that hasn’t happened yet. Twenty years ago US policymakers believed that free trade with China would cause it to liberalize as it became richer; that very much didn’t happen, and we now seem headed for a second cold war with a resurgent authoritarian bloc. And plausible theories suggest that internet technology may actually advantage authoritarianism, not democracy as initially believed (e.g. in the “Arab Spring” period). It seems important to try to understand how powerful AI will intersect with these issues of peace, democracy, and freedom.
Unfortunately, I see no strong reason to believe AI will preferentially or structurally advance democracy and peace, in the same way that I think it will structurally advance human health and alleviate poverty. Human conflict is adversarial and AI can in principle help both the “good guys” and the “bad guys”. If anything, some structural factors seem worrying: AI seems likely to enable much better propaganda and surveillance, both major tools in the autocrat’s toolkit. It’s therefore up to us as individual actors to tilt things in the right direction: if we want AI to favor democracy and individual rights, we are going to have to fight for that outcome. I feel even more strongly about this than I do about international inequality: the triumph of liberal democracy and political stability is not guaranteed, perhaps not even likely, and will require great sacrifice and commitment on all of our parts, as it often has in the past.
I think of the issue as having two parts: international conflict, and the internal structure of nations. On the international side, it seems very important that democracies have the upper hand on the world stage when powerful AI is created. AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms by which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries.
My current guess at the best way to do this is via an “entente strategy”26, in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy (this would be a bit analogous to “Atoms for Peace”). The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe.
If we can do all this, we will have a world in which democracies lead on the world stage and have the economic and military strength to avoid being undermined, conquered, or sabotaged by autocracies, and may be able to parlay their AI superiority into a durable advantage. This could optimistically lead to an “eternal 1991”—a world where democracies have the upper hand and Fukuyama’s dreams are realized. Again, this will be very difficult to achieve, and will in particular require close cooperation between private AI companies and democratic governments, as well as extraordinarily wise decisions about the balance between carrot and stick.
Even if all that goes well, it leaves the question of the fight between democracy and autocracy within each country. It is obviously hard to predict what will happen here, but I do have some optimism that given a global environment in which democracies control the most powerful AI, then AI may actually structurally favor democracy everywhere. In particular, in this environment democratic governments can use their superior AI to win the information war: they can counter influence and propaganda operations by autocracies and may even be able to create a globally free information environment by providing channels of information and AI services in a way that autocracies lack the technical ability to block or monitor. It probably isn’t necessary to deliver propaganda, only to counter malicious attacks and unblock the free flow of information. Although not immediate, a level playing field like this stands a good chance of gradually tilting global governance towards democracy, for several reasons.
First, the increases in quality of life in Sections 1-3 should, all things equal, promote democracy: historically they have, to at least some extent. In particular I expect improvements in mental health, well-being, and education to increase democracy, as all three are negatively correlated with support for authoritarian leaders. In general people want more self-expression when their other needs are met, and democracy is among other things a form of self-expression. Conversely, authoritarianism thrives on fear and resentment.
Second, there is a good chance free information really does undermine authoritarianism, as long as the authoritarians can’t censor it. And uncensored AI can also bring individuals powerful tools for undermining repressive governments. Repressive governments survive by denying people a certain kind of common knowledge, keeping them from realizing that “the emperor has no clothes”. For example Srđa Popović, who helped to topple the Milošević government in Serbia, has written extensively about techniques for psychologically robbing authoritarians of their power, for breaking the spell and rallying support against a dictator. A superhumanly effective AI version of Popović (whose skills seem like they have high returns to intelligence) in everyone’s pocket, one that dictators are powerless to block or censor, could create a wind at the backs of dissidents and reformers across the world. To say it again, this will be a long and protracted fight, one where victory is not assured, but if we design and build AI in the right way, it may at least be a fight where the advocates of freedom everywhere have an advantage.
As with neuroscience and biology, we can also ask how things could be “better than normal”—not just how to avoid autocracy, but how to make democracies better than they are today. Even within democracies, injustices happen all the time. Rule-of-law societies make a promise to their citizens that everyone will be equal under the law and everyone is entitled to basic human rights, but obviously people do not always receive those rights in practice. That this promise is even partially fulfilled makes it something to be proud of, but can AI help us do better?
For example, could AI improve our legal and judicial system by making decisions and processes more impartial? Today people mostly worry in legal or judicial contexts that AI systems will be a cause of discrimination, and these worries are important and need to be defended against. At the same time, the vitality of democracy depends on harnessing new technologies to improve democratic institutions, not just responding to risks. A truly mature and successful implementation of AI has the potential to reduce bias and be fairer for everyone.
For centuries, legal systems have faced the dilemma that the law aims to be impartial, but is inherently subjective and thus must be interpreted by biased humans. Trying to make the law fully mechanical hasn’t worked because the real world is messy and can’t always be captured in mathematical formulas. Instead legal systems rely on notoriously imprecise criteria like “cruel and unusual punishment” or “utterly without redeeming social importance”, which humans then interpret—and often do so in a manner that displays bias, favoritism, or arbitrariness. “Smart contracts” in cryptocurrencies haven’t revolutionized law because ordinary code isn’t smart enough to adjudicate all that much of interest. But AI might be smart enough for this: it is the first technology capable of making broad, fuzzy judgements in a repeatable and mechanical way.
I am not suggesting that we literally replace judges with AI systems, but the combination of impartiality with the ability to understand and process messy, real world situations feels like it should have some serious positive applications to law and justice. At the very least, such systems could work alongside humans as an aid to decision-making. Transparency would be important in any such system, and a mature science of AI could conceivably provide it: the training process for such systems could be extensively studied, and advanced interpretability techniques could be used to see inside the final model and assess it for hidden biases, in a way that is simply not possible with humans. Such AI tools could also be used to monitor for violations of fundamental rights in a judicial or police context, making constitutions more self-enforcing.
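One way to make the bias-assessment idea concrete: even before interpretability, a simple black-box audit can quantify group-level disparities in a model's decisions. The sketch below is entirely hypothetical; `toy_model`, the records, and the field names are stand-ins, not any real system or dataset.

```python
from collections import defaultdict

# Hypothetical black-box audit: measure a decision model's favorable-outcome
# rate per group. The model, records, and field names are all stand-ins.
def favorable_rates(model, records, group_key):
    counts = defaultdict(lambda: [0, 0])         # group -> [favorable, total]
    for rec in records:
        counts[rec[group_key]][0] += model(rec)
        counts[rec[group_key]][1] += 1
    return {g: fav / tot for g, (fav, tot) in counts.items()}

# A "model" that keys on a proxy feature correlated with group membership --
# exactly the kind of hidden bias an audit (or interpretability) should catch.
toy_model = lambda rec: int(rec["zip_code"] in {"A1", "A2"})
records = [
    {"group": "x", "zip_code": "A1"}, {"group": "x", "zip_code": "A2"},
    {"group": "y", "zip_code": "B1"}, {"group": "y", "zip_code": "A1"},
]
print(favorable_rates(toy_model, records, "group"))  # {'x': 1.0, 'y': 0.5}
```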
In a similar vein, AI could be used to both aggregate opinions and drive consensus among citizens, resolving conflict, finding common ground, and seeking compromise. Some early ideas in this direction have been undertaken by the computational democracy project, including collaborations with Anthropic. A more informed and thoughtful citizenry would obviously strengthen democratic institutions.
There is also a clear opportunity for AI to be used to help provision government services—such as health benefits or social services—that are in principle available to everyone but in practice often severely lacking, and worse in some places than others. This includes health services, the DMV, taxes, social security, building code enforcement, and so on. Having a very thoughtful and informed AI whose job is to give you everything you’re legally entitled to by the government in a way you can understand—and who also helps you comply with often confusing government rules—would be a big deal. Increasing state capacity both helps to deliver on the promise of equality under the law, and strengthens respect for democratic governance. Poorly implemented services are currently a major driver of cynicism about government27.
All of these are somewhat vague ideas, and as I said at the beginning of this section, I am not nearly as confident in their feasibility as I am in the advances in biology, neuroscience, and poverty alleviation. They may be unrealistically utopian. But the important thing is to have an ambitious vision, to be willing to dream big and try things out. The vision of AI as a guarantor of liberty, individual rights, and equality under the law is too powerful a vision not to fight for. A 21st century, AI-enabled polity could be both a stronger protector of individual freedom, and a beacon of hope that helps make liberal democracy the form of government that the whole world wants to adopt.
4. Peace and governance
Even if all of this goes well, it does not mean that every major cause of human suffering is solved; humans are still a threat to one another. There is a trend of technological progress and economic growth promoting democracy and peace, but it is a very loose trend with frequent (and recent) backsliding. People once thought they had put war behind them, and then the two world wars came early in the 20th century. US policymakers believed free trade with China would make it richer and lead it to liberalize; that very much did not happen, and we now seem headed for a second cold war with a resurgent authoritarian bloc. Plausible theories also suggest that internet technology may actually favor authoritarianism rather than democracy (e.g., during the "Arab Spring" period).
AI is likely to enable much better propaganda and surveillance, both major tools in the autocrat's toolkit. The triumph of liberal democracy and political stability is not guaranteed, perhaps not even likely, and will require great sacrifice and commitment from all of us, as it often has in the past.
The best approach I can think of is an entente strategy: a coalition of democracies seeks a clear advantage on powerful AI by securing its supply chain and scaling quickly while blocking or delaying adversaries' access to key resources like chips and semiconductor equipment. The coalition would use AI to secure robust military superiority (the stick) while offering to distribute the benefits of powerful AI to a growing set of countries (the carrot).
A level playing field like this stands a good chance of gradually tilting global governance toward democracy, for several reasons.
First, improvements in quality of life will promote democracy; historically they have, at least to some extent. Improvements in mental health, well-being, and education in particular can be expected to strengthen democracy. People generally want more self-expression once their other needs are met, and democracy is, among other things, a form of self-expression. Authoritarianism, by contrast, thrives on fear and resentment.
Second, free information has a good chance of genuinely undermining authoritarianism, as long as authoritarians cannot censor it. If AI reaches everyone's pocket in a form that dictators cannot block or censor, it could put a wind at the backs of dissidents and reformers across the world.
Again, this will be a long fight and victory is not assured, but if we design and build AI the right way, it may at least be a fight in which the advocates of freedom have the advantage.
As with neuroscience and biology, we can ask how things could be "better than normal": not just avoiding autocracy, but making democracy better than it is today. Rule-of-law societies promise their citizens that everyone is equal under the law and entitled to basic human rights, but in practice those rights are not always delivered. Can AI lead us somewhere better? For example, could AI improve our legal and judicial systems by making decisions fairer and less biased?
A truly mature implementation of AI has the potential to reduce bias and produce fairer outcomes for everyone. Legal systems have long faced a dilemma: the law aims to be impartial, but it is inherently subjective and so must be interpreted by biased humans. Because the real world is messy and cannot be fully captured in mathematical formulas, the law could never be made fully mechanical; humans have had to interpret it, and those interpretations often display bias, favoritism, and arbitrariness. Smart contracts in cryptocurrencies did not revolutionize law, because ordinary code is not smart enough to adjudicate much of interest. AI, however, might be smart enough: it is the first technology capable of making broad, fuzzy judgments in a repeatable, mechanical way.
I am not arguing for replacing judges with AI, but such systems could serve as aids to human decision-making. Their training process could be studied exhaustively, and advanced interpretability techniques could be used to look inside the final model and check it for hidden biases, something that is simply impossible with humans. AI could also be used to monitor for violations of fundamental rights in judicial or police contexts, making constitutions more self-enforcing.
A thoughtful, well-informed AI whose job is to give you everything you are legally entitled to from the government, in a way you can understand, and to help you comply with often-confusing rules, would be a big deal. Increasing state capacity helps deliver on the promise of equality under the law and strengthens respect for democratic governance. I am less confident in all of this than in the biology and neuroscience advances; the ideas are vague and may be unrealistically utopian. But what matters is having an ambitious vision, dreaming big, and trying things out.
Original text
5. Work and meaning
Even if everything in the preceding four sections goes well—not only do we alleviate disease, poverty, and inequality, but liberal democracy becomes the dominant form of government, and existing liberal democracies become better versions of themselves—at least one important question still remains. “It’s great we live in such a technologically advanced world as well as a fair and decent one”, someone might object, “but with AI’s doing everything, how will humans have meaning? For that matter, how will they survive economically?”.
I think this question is more difficult than the others. I don’t mean that I am necessarily more pessimistic about it than I am about the other questions (although I do see challenges). I mean that it is fuzzier and harder to predict in advance, because it relates to macroscopic questions about how society is organized that tend to resolve themselves only over time and in a decentralized manner. For example, historical hunter-gatherer societies might have imagined that life is meaningless without hunting and various kinds of hunting-related religious rituals, and would have imagined that our well-fed technological society is devoid of purpose. They might also not have understood how our economy can provide for everyone, or what function people can usefully serve in a mechanized society.
Nevertheless, it’s worth saying at least a few words, while keeping in mind that the brevity of this section is not at all to be taken as a sign that I don’t take these issues seriously—on the contrary, it is a sign of a lack of clear answers.
On the question of meaning, I think it is very likely a mistake to believe that tasks you undertake are meaningless simply because an AI could do them better. Most people are not the best in the world at anything, and it doesn’t seem to bother them particularly much. Of course today they can still contribute through comparative advantage, and may derive meaning from the economic value they produce, but people also greatly enjoy activities that produce no economic value. I spend plenty of time playing video games, swimming, walking around outside, and talking to friends, all of which generates zero economic value. I might spend a day trying to get better at a video game, or faster at biking up a mountain, and it doesn’t really matter to me that someone somewhere is much better at those things. In any case I think meaning comes mostly from human relationships and connection, not from economic labor. People do want a sense of accomplishment, even a sense of competition, and in a post-AI world it will be perfectly possible to spend years attempting some very difficult task with a complex strategy, similar to what people do today when they embark on research projects, try to become Hollywood actors, or found companies28. The facts that (a) an AI somewhere could in principle do this task better, and (b) this task is no longer an economically rewarded element of a global economy, don’t seem to me to matter very much.
The economic piece actually seems more difficult to me than the meaning piece. By “economic” in this section I mean the possible problem that most or all humans may not be able to contribute meaningfully to a sufficiently advanced AI-driven economy. This is a more macro problem than the separate problem of inequality, especially inequality in access to the new technologies, which I discussed in Section 3.
First of all, in the short term I agree with arguments that comparative advantage will continue to keep humans relevant and in fact increase their productivity, and may even in some ways level the playing field between humans. As long as AI is only better at 90% of a given job, the other 10% will cause humans to become highly leveraged, increasing compensation and in fact creating a bunch of new human jobs complementing and amplifying what AI is good at, such that the “10%” expands to continue to employ almost everyone. In fact, even if AI can do 100% of things better than humans, but it remains inefficient or expensive at some tasks, or if the resource inputs to humans and AI’s are meaningfully different, then the logic of comparative advantage continues to apply. One area humans are likely to maintain a relative (or even absolute) advantage for a significant time is the physical world. Thus, I think that the human economy may continue to make sense even a little past the point where we reach “a country of geniuses in a datacenter”.
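The comparative-advantage argument above can be made concrete with a toy Ricardian calculation. All the productivity numbers below are invented: the AI is assumed to be absolutely better at both tasks, yet the human still adds value by taking the task where their relative disadvantage is smallest.

```python
# Toy Ricardian arithmetic with invented numbers: the AI is absolutely better
# at BOTH tasks, yet total output still rises when the human specializes in
# the task where their relative disadvantage is smallest.
ai    = {"research": 10.0, "paperwork": 8.0}   # output per hour
human = {"research":  1.0, "paperwork": 4.0}
hours = 8                                      # hours worked per day, each

# Opportunity cost of one unit of paperwork, measured in research forgone:
print("AI   :", ai["research"] / ai["paperwork"])        # 1.25
print("human:", human["research"] / human["paperwork"])  # 0.25 <- human "wins"

# Valuing both outputs equally for simplicity:
even_split  = sum((ai[t] + human[t]) * hours / 2 for t in ai)       # 92.0
specialized = ai["research"] * hours + human["paperwork"] * hours   # 112.0
print(f"even split: {even_split}, specialized: {specialized}")
```

Valuing both outputs equally, specialization produces 112 units against 92 for an even split, even though the AI outproduces the human at everything; this is the sense in which the remaining "10%" keeps humans leveraged.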
However, I do think in the long run AI will become so broadly effective and so cheap that this will no longer apply. At that point our current economic setup will no longer make sense, and there will be a need for a broader societal conversation about how the economy should be organized.
While that might sound crazy, the fact is that civilization has successfully navigated major economic shifts in the past: from hunter-gathering to farming, farming to feudalism, and feudalism to industrialism. I suspect that some new and stranger thing will be needed, and that it’s something no one today has done a good job of envisioning. It could be as simple as a large universal basic income for everyone, although I suspect that will only be a small part of a solution. It could be a capitalist economy of AI systems, which then give out resources (huge amounts of them, since the overall economic pie will be gigantic) to humans based on some secondary economy of what the AI systems think makes sense to reward in humans (based on some judgment ultimately derived from human values). Perhaps the economy runs on Whuffie points. Or perhaps humans will continue to be economically valuable after all, in some way not anticipated by the usual economic models. All of these solutions have tons of possible problems, and it’s not possible to know whether they will make sense without lots of iteration and experimentation. And as with some of the other challenges, we will likely have to fight to get a good outcome here: exploitative or dystopian directions are clearly also possible and have to be prevented. Much more could be written about these questions and I hope to do so at some later time.
5. Work and meaning
Even if everything goes well, one important question remains: "It's great that we live in such a technologically advanced and decent world, but with AIs doing everything, what meaning is left for humans? For that matter, how will they survive economically?"
This question is harder than the others (which does not mean I am more pessimistic about it). Historical hunter-gatherer societies might have thought life meaningless without hunting and its associated religious rituals, and might have imagined our technological society as devoid of purpose. The question is tied to macroscopic issues of how society is organized, which tend to resolve only over time and in a decentralized way, so it is fuzzy and hard to predict. The brevity of this section reflects not a lack of seriousness but a lack of clear answers.
I think the belief that what you do is meaningless simply because an AI could do it better is mistaken. Most people are not the best in the world at anything, and that fact does not particularly bother them; people also take great enjoyment from activities that produce no economic value.
The economic problem seems harder than the meaning problem. I think AI will eventually become so effective and so cheap that there will no longer be situations where human work is more efficient than AI. That may sound crazy, but civilization has successfully navigated major economic transitions before: from hunting and gathering to farming, from farming to feudalism, from feudalism to industrialism. I suspect that, as then, something new and stranger will be needed, something no one today has envisioned well. A universal basic income for everyone would likely be only a small part of the solution. It could be a capitalist economy of AI systems that hand out resources to humans based on what the AI systems judge worth rewarding in humans; perhaps the economy would run on Whuffie points. Or humans may remain economically valuable after all. All of these options have countless potential problems, and only lots of iteration and experimentation will show whether they work. In the end we will have to fight for a good outcome, preventing exploitative or dystopian directions.
Original text
Taking stock
Through the varied topics above, I’ve tried to lay out a vision of a world that is both plausible if everything goes right with AI, and much better than the world today. I don’t know if this world is realistic, and even if it is, it will not be achieved without a huge amount of effort and struggle by many brave and dedicated people. Everyone (including AI companies!) will need to do their part both to prevent risks and to fully realize the benefits.
But it is a world worth fighting for. If all of this really does happen over 5 to 10 years—the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights—I suspect everyone watching it will be surprised by the effect it has on them. I don’t mean the experience of personally benefiting from all the new technologies, although that will certainly be amazing. I mean the experience of watching a long-held set of ideals materialize in front of us all at once. I think many will be literally moved to tears by it.
Throughout writing this essay I noticed an interesting tension. In one sense the vision laid out here is extremely radical: it is not what almost anyone expects to happen in the next decade, and will likely strike many as an absurd fantasy. Some may not even consider it desirable; it embodies values and political choices that not everyone will agree with. But at the same time there is something blindingly obvious—something overdetermined—about it, as if many different attempts to envision a good world inevitably lead roughly here.
In Iain M. Banks’ The Player of Games29, the protagonist—a member of a society called the Culture, which is based on principles not unlike those I’ve laid out here—travels to a repressive, militaristic empire in which leadership is determined by competition in an intricate battle game. The game, however, is complex enough that a player’s strategy within it tends to reflect their own political and philosophical outlook. The protagonist manages to defeat the emperor in the game, showing that his values (the Culture’s values) represent a winning strategy even in a game designed by a society based on ruthless competition and survival of the fittest. A well-known post by Scott Alexander has the same thesis—that competition is self-defeating and tends to lead to a society based on compassion and cooperation. The “arc of the moral universe” is another similar concept.
I think the Culture’s values are a winning strategy because they’re the sum of a million small decisions that have clear moral force and that tend to pull everyone together onto the same side. Basic human intuitions of fairness, cooperation, curiosity, and autonomy are hard to argue with, and are cumulative in a way that our more destructive impulses often aren’t. It is easy to argue that children shouldn’t die of disease if we can prevent it, and easy from there to argue that everyone’s children deserve that right equally. From there it is not hard to argue that we should all band together and apply our intellects to achieve this outcome. Few disagree that people should be punished for attacking or hurting others unnecessarily, and from there it’s not much of a leap to the idea that punishments should be consistent and systematic across people. It is similarly intuitive that people should have autonomy and responsibility over their own lives and choices. These simple intuitions, if taken to their logical conclusion, lead eventually to rule of law, democracy, and Enlightenment values. If not inevitably, then at least as a statistical tendency, this is where humanity was already headed. AI simply offers an opportunity to get us there more quickly—to make the logic starker and the destination clearer.
Nevertheless, it is a thing of transcendent beauty. We have the opportunity to play some small role in making it real.
Taking stock
Through these topics I have tried to lay out a vision of a world that, if AI goes right, is much better than today's. It will not be achieved without enormous effort and struggle by many brave and dedicated people. But it is a world worth fighting for. If all of this really happens, I think everyone watching will be surprised by the effect it has on them. I do not mean only the personal benefit from new technologies: watching a long-held set of ideals materialize before us all at once is, I think, something that will move many people to tears.
The vision presented here is extremely radical: almost no one expects it to happen within the next decade, and to many it will look like an absurd fantasy. Yet there is something blindingly obvious about it, as if the many different attempts to envision a good world inevitably lead roughly here.
I think the Culture's values are a winning strategy because they are the sum of millions of small decisions that carry clear moral force and tend to pull everyone onto the same side. Basic human intuitions of fairness, cooperation, curiosity, and autonomy are hard to argue with and, unlike our more destructive impulses, tend to be cumulative. It is easy to argue that children should not die of disease if we can prevent it; from there it is not hard to argue that every child deserves that right equally, and then that we should all band together and apply our intellects to achieve it. Likewise, it is intuitive that people should have autonomy and responsibility over their own lives and choices. Taken to their logical conclusion, these simple intuitions lead to the rule of law, democracy, and Enlightenment values. At least as a statistical tendency, this is where humanity was already headed; AI simply offers a chance to get there faster, making the logic starker and the destination clearer.
Nevertheless, it is a thing of transcendent beauty, and we have the opportunity to play some small role in making it real.