In the Beginning, There Was Computation

Life is code, and code is life, in nature as it is in technology.

By Blaise Agüera y Arcas August 19, 2024

I. Abiogenesis

How did life on Earth first arise? Despite his clear articulation of the principle of evolution, Charles Darwin didn’t have a clue. In 1863, he wrote to his close friend Joseph Dalton Hooker that “it is mere rubbish, thinking, at present, of origin of life; one might as well think of origin of matter.”

Today, we have more of a clue, although the details are lost to deep time. Biologists and chemists working in the field of abiogenesis—the study of the moment when, 3 or 4 billion years ago, chemistry became life—have developed multiple plausible origin stories. In one, proto-organisms in an ancient “RNA world” were made of RNA molecules, which both replicated and folded into 3-D structures that could act like primitive enzymes.1 In a competing “metabolism first” account, chemical reaction networks sputtered to life in the porous rock chimneys of “black smokers” on the ocean floor, powered by geothermal energy; RNA and DNA came later.2

Either way, even bacteria—the simplest life forms surviving today—are a product of many subsequent evolutionary steps. The most important of these steps may have been large and sudden, not the everyday, incremental mutation and selection theorized by Darwin. These “major evolutionary transitions” involve simpler, less complex replicating entities becoming interdependent to form a larger, more complex, more capable replicator.3

We are made out of functions, and those functions are made out of functions, all the way down.

As maverick biologist Lynn Margulis discovered in the 1960s, eukaryotic cells are the result of such a symbiotic event, when the ancient bacteria that became our mitochondria were engulfed by another single-celled life form, related to today’s archaea. At moments like these, the tree of life doesn’t just branch; it also entangles with itself, its branches merging to produce radically new forms. Margulis was an early champion of the idea that these events are what drive evolution’s leaps forward.

It’s likely that bacteria are themselves the product of such symbiotic events—for instance, between RNA and proteins.4 Even the feebly replicating chemical reaction networks in those black smokers can be understood as such an alliance, a set of reactions which, by virtue of catalyzing each other, formed a more robust, self-sustaining whole.

So in a sense, Darwin may have been right to say that “it is mere rubbish” to think about the origin of life, for life may have had no single origin, but rather, have woven itself together from many separate strands, the oldest of which look like ordinary chemistry. Intelligent design isn’t required for that weaving to take place; only the incontrovertible logic that sometimes, an alliance creates something enduring, and that whatever is enduring … endures.

Often, enduring means both occupying and creating entirely new niches. Hence eukaryotes did not replace bacteria; indeed, they ultimately created many new niches for them. Likewise, the symbiotic emergence of multicellular life—another major evolutionary transition—did not supplant single-celled life. Our planet is a palimpsest, with much of its past still visible in the present. Even the black smokers are still bubbling away. The self-catalyzing chemistry of proto-life may still be brewing down there, slowly, on the ocean floor.

II. Computation

While most biochemists have focused on understanding the particular history and workings of life on Earth, a more general understanding of life as a phenomenon has come from an unexpected quarter: computer science. The theoretical foundations of this connection date back to two of the field’s founding figures, Alan Turing and John von Neumann.

After earning a degree in mathematics at Cambridge University in 1935, Turing focused on one of the fundamental outstanding problems of the day: the Entscheidungsproblem (German for “decision problem”), which asked whether there exists an algorithm for determining the validity of an arbitrary mathematical statement. The answer turned out to be “no,” but the way Turing went about proving it ended up being far more important than the result itself.5

Turing’s proof required that he define a general procedure for computation. He did so by inventing an imaginary gadget we now call the “Turing Machine.” The Turing Machine consists of a read/write head, which can move left or right along an infinite tape, reading and writing symbols on the tape according to a set of rules specified by a built-in table.

First, Turing showed that any calculation or computation that can be done by hand could also be done by such a machine, given an appropriate table of rules, enough time, and enough tape. He then showed that there exist certain tables of rules that define universal machines, such that the tape itself can specify not only any input data, but also the desired table, encoded as a sequence of symbols. This is a general-purpose computer: a single machine that can be programmed to compute anything.
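To make this concrete, below is a minimal sketch of such a machine in Python. The rule-table format and the little binary-increment program are illustrative inventions, not anything from Turing's paper; a universal machine would simply be one whose table interprets other tables encoded on the tape.

```python
# A minimal Turing machine: the table maps (state, symbol) to
# (symbol_to_write, head_move, next_state). The example rules below
# increment a binary number; the head starts on its rightmost bit.
def run_turing_machine(rules, tape, state="start", head=0, blank="_"):
    cells = dict(enumerate(tape))  # sparse stand-in for an infinite tape
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": +1}[move]
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

rules = {
    ("start", "1"): ("0", "L", "start"),  # carry: 1 becomes 0, keep moving
    ("start", "0"): ("1", "L", "done"),   # absorb the carry
    ("start", "_"): ("1", "L", "done"),   # carry past the leftmost digit
    ("done", "0"): ("0", "L", "done"),
    ("done", "1"): ("1", "L", "done"),
    ("done", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(rules, "1011", head=3))  # 11 + 1 -> "1100"
```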

COMPLEXITY RISING: A video excitedly taken by the author of digital life emerging in one of the first runs of bff, a programming language, on his laptop. Imperfect replicators arise almost immediately, with a sharp transition to whole-tape replication after approximately 6 million interactions, followed by several further symbiotic “complexifications.” As these transitions take place, the number of computations (“ops”) per interaction rises from a few to thousands.

In the early 1940s, von Neumann, a Hungarian-American polymath who had already made major contributions to physics and mathematics, turned his attention to computing. He became a key figure in the design of the ENIAC and EDVAC—among the world’s first real-life Universal Turing Machines, now known as “computers.”

Over the years, a great deal of thought and creativity has gone into figuring out how simple a Universal Turing Machine can get. Only a few instructions are needed. Esoteric language nerds have even figured out how to compute with just a single instruction (a so-called OISC or “one instruction set computer”).

There are irreducible requirements, though: The instruction, or instructions, must change the environment in some way that subsequent instructions are able to “see,” and there must be conditional branching, meaning that depending on the state of the environment, either one thing or another will happen. In most programming languages, this is expressed using “if/then” statements. When there’s only a single instruction, it must serve both purposes, as with the SUBLEQ language, whose only instruction is “subtract and branch if the result is less than or equal to zero.”
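As a sketch of how small that can be, here is a toy SUBLEQ interpreter in Python. The halting convention (a jump to a negative address) and the example program are illustrative assumptions; real one-instruction machines differ in such details.

```python
def subleq(mem, pc=0):
    # One instruction, three addresses a, b, c:
    # mem[b] -= mem[a]; if the result is <= 0, jump to c, else fall through.
    while pc >= 0:
        a, b, c = mem[pc:pc + 3]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Clear cell 9 (anything minus itself is 0, and 0 <= 0), then halt by
# jumping to the negative address -1.
mem = [9, 9, 3,    # 0: mem[9] -= mem[9]; jump to 3
       0, 0, -1,   # 3: mem[0] -= mem[0]; jump to -1 (halt)
       0, 0, 0, 42]
print(subleq(mem)[9])  # -> 0
```

Note how the single instruction both changes the environment (the subtraction) and branches on it (the conditional jump), meeting both irreducible requirements at once.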

III. Functionalism

Both Turing and von Neumann were keenly aware of the parallels between computers and brains, developing many ideas that would become foundational to neuroscience and AI. Von Neumann’s report on the EDVAC explicitly described the machine’s logic gates as electronic neurons.6 Whether or not that analogy held (it did not; neurons are more complex than logic gates), the key insight here was that both brains and computers are defined not by their mechanisms, but by what they do—their function, both in the colloquial and in the mathematical sense.

A thought experiment can illustrate the distinction. While we still have much to learn about the brain, biophysicists have thoroughly characterized the electrical behavior of individual neurons. Hence, we can write computer code that accurately models how they respond to electrical and chemical inputs. If we were somehow able to replace one of the neurons in your brain with a computer running such a model, plugging its inputs and outputs as appropriate into neighboring neurons, would the rest of your brain—or “you”—be able to tell the difference?

If the model is faithful, the answer is “no.” That answer remains the same if one were to replace a million neurons … or all of them. What matters, whether at the scale of an individual neuron or a whole brain, is function. We are made out of functions, and those functions are made out of functions, all the way down.

It’s not a metaphor to call DNA a “program”—that is literally the case.

In 1950s popular culture, computers were often thought to be “like” brains for superficial reasons, like the fact that they both rely on electricity. For Turing, such details were irrelevant, and attaching importance to them was mere superstition. A computer could just as well be made out of cogs and gears, like the steampunk “Analytical Engine” Charles Babbage and Ada Lovelace dreamed of (but sadly, never built) in the 19th century. The deeper point was that a sufficiently powerful general-purpose computer, suitably programmed, can compute whatever the brain computes.

AI was the search for that program, and the point of Turing’s Imitation Game, a thought experiment known nowadays as the Turing Test, was that when such a program can behave functionally like an intelligent human being, we should conclude that the computer (or the running program) is likewise intelligent.

In its usual form, the Turing Test simplifies things by restricting interaction to a chat window, but when one zooms out to consider a whole living body, not just a brain in a vat, this simplification no longer seems adequate. Evolutionarily speaking, the most basic function of an organism is not to send and receive text messages, but to reproduce. That is, its output is not just information, but a real-life copy of something like itself. How, von Neumann wondered, could a machine (in the broadest possible sense) reproduce? How, in other words, is life possible?

IV. Reproduction

Von Neumann imagined a machine made out of standardized parts, like LEGO bricks, paddling around on a reservoir where those parts could be found bobbing on the water.7 The machine’s job is to gather all the needed parts and construct another machine like itself. Of course, that’s exactly what a bacterium has to do in order to reproduce; in fact it’s what every cell must do in order to divide, and what every mother must do in order to give birth.

On the face of it, making something as complex as you yourself are has a whiff of paradox, like lifting yourself up by your own bootstraps. However, von Neumann showed that it is not only possible, but straightforward, using a generalization of the Universal Turing Machine.

EVOLUTION THROUGH SYMBIOSIS: This animation shows, for a random selection of tapes in a particular soup of programming language bff, the provenance of each of the tape’s 64 bytes, beginning at interaction 10,000 and ending at interaction 10,000,000. Vertical lines in the beginning show bytes tracing their lineages to the original (random) values; diagonal lines show bytes increasingly copying themselves from one location to another. Around 2 million interactions, imperfect replicators begin competing, chaotically overwriting each other to create short-lived chimeras; then, at about 5.6 million interactions, a symbiotic whole-tape replicator suddenly emerges out of the chaos like a cat’s cradle, subsequently undergoing further evolutionary changes; but it will conserve elements of its original architecture indefinitely.

He envisioned a “machine A” that would read a tape containing sequential assembly instructions based on a limited catalog of parts, and carry them out, step by step. Then, there would be a “machine B” whose function is to copy the tape—assuming the tape itself is also made out of available parts. If instructions for building machines A and B are themselves encoded on the tape, then voilà—you have a replicator.
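In software, the closest familiar analogue is a quine: a program whose source text plays the role of the tape, used once as instructions to execute and once as data to copy. The classic two-line Python example below is offered as an analogy for the A-and-B construction, not as von Neumann's actual automaton.

```python
# The string s is the "tape": it is executed as a template (machine A)
# and also copied in as data (machine B), so the program prints its own
# source. (The two code lines reproduce themselves; comments aside.)
s = 's = %r\nprint(s %% s)'
print(s % s)
```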

Instructions for building any additional non-reproductive machinery can also be encoded on the tape, so it’s even possible for a replicator to build something more complex than itself. A seed, or a fertilized egg, illustrates the point.

Remarkably, von Neumann described these requirements for a self-replicating machine before the discovery of DNA’s structure and function. Nonetheless, he got it exactly right. For life on Earth, DNA is the tape; DNA polymerase, which copies DNA, is “machine B”; and ribosomes, which build proteins by following the sequentially encoded instructions on DNA, are “machine A.” Ribosomes and DNA polymerase are made out of proteins whose sequences are, in turn, encoded in our DNA and manufactured by ribosomes. That is how life lifts itself up by its own bootstraps.

V. Equivalence

Although this is seldom fully appreciated, von Neumann’s insight established a profound link between life and computation. Machines A and B are Turing machines. They must execute instructions that affect their environment, and those instructions must run in a loop, starting at the beginning and finishing at the end. That requires branching, such as “if the next instruction is the codon CGA, then add an arginine to the protein under construction,” and “if the next instruction is UAG, then STOP.” It’s not a metaphor to call DNA a “program”—that is literally the case.

There are meaningful differences between biological computing and the kind of digital computing done by the ENIAC, or your smartphone. DNA is subtle and multilayered, including phenomena like epigenetics and gene proximity effects. Cellular DNA is nowhere near the whole story, either. Our bodies contain (and continually swap) countless bacteria and viruses, each running their own code. Biological computing is massively parallel; your cells have somewhere in the neighborhood of 300 quintillion ribosomes. All this biological computing is also noisy; every chemical reaction and self-assembly step is stochastic.

It’s computing, nonetheless. There are, in fact, many classic algorithms in computer science that require randomness, which is why Turing insisted that the Ferranti Mark I, an early computer he helped to design in 1951, include a random number instruction. Randomness is thus a small but important extension to the original Turing Machine, though any computer can simulate it by computing deterministic but random-looking or “pseudorandom” numbers.
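The phrase "deterministic but random-looking" is easy to make concrete with a linear congruential generator, one of the oldest pseudorandom recipes. The multiplier and increment below are the standard Numerical Recipes constants, chosen here purely for illustration.

```python
# x_{n+1} = (a * x_n + c) mod m: fully deterministic, yet the scaled
# outputs look random. Same seed, same sequence, every time.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # scale into [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 3) for _ in range(5)])
```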

Parallelism, too, is increasingly fundamental to computer science. Modern AI, for instance, depends on both massive parallelism and randomness—as in the “stochastic gradient descent” algorithm, used for training most of today’s neural nets, and the “temperature” setting used in virtually all chatbots to introduce a degree of randomness into their output.
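For the temperature setting, a minimal sketch, assuming only a list of raw next-token scores (logits): divide by the temperature, normalize with a softmax, and sample. This is the generic recipe, not any particular chatbot's implementation.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    # Low temperature approaches greedy choice of the top-scoring token;
    # higher temperature flattens the distribution, adding randomness.
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    exps = [math.exp(x - peak) for x in scaled]  # subtract max for stability
    probs = [e / sum(exps) for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens
print(sample_with_temperature(logits, temperature=0.7))
```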

Randomness, massive parallelism, and subtle feedback effects all conspire to make it very, very hard to reason about, “program,” or “debug” biological computation by hand. (We’ll need AI help.) Still, we should keep in mind that Turing’s fundamental contribution was not the invention of any specific machine for computing, but a general theory of computation. Computing is computing, and all computers are, at bottom, equivalent.

Any function that can be computed by a biological system can be computed by a Turing Machine with a random number generator, and vice versa. Anything that can be done in parallel can also be done in series, though it might take a very long time. Indeed, much of the inefficiency in today’s artificial neural net-based AI lies in the fact that we’re still programming serial processors to loop sequentially over operations that brains do in parallel.

VI. Artificial Life

Von Neumann’s insight shows that life depends on computation. Thus, in a universe whose physical laws did not allow for computation, it would be impossible for life to arise. Luckily, the physics of our universe do allow for computation, as proven by the fact that we can build computers—and that we’re here at all.

Now we’re in a position to ask: In a universe capable of computation, how often will life arise? Clearly, it happened here. Was it a miracle, an inevitability, or somewhere in between? A few collaborators and I set out to explore this question in late 2023.

Our first experiments used an esoteric programming language called (apologies) Brainfuck.8 While not as minimal as SUBLEQ, Brainfuck is both very simple and very similar to the original Turing Machine. Like a Turing Machine, it involves a read/write head that can step left or right along a tape.

In our version, which we call “bff,” there’s a “soup” containing thousands of tapes, each of which includes both code and data. The tapes are of fixed length—64 bytes—and start off filled with random bytes. Then, they interact at random, over and over. In an interaction, two randomly selected tapes are stuck end to end, creating a 128-byte-long string, and this combined tape is run, potentially modifying itself. The 64-byte-long halves are then pulled back apart and dropped back into the soup. Once in a while, a byte value is randomized, as cosmic rays do to DNA.
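A simplified sketch of that soup dynamic, with the interpreter itself left as a stub; the soup size, mutation rate, and names below are placeholders rather than the parameters of the published bff experiments (reference 8 has the real details).

```python
import random

TAPE_LEN = 64
SOUP_SIZE = 8192        # placeholder; the runs use thousands of tapes
MUTATION_RATE = 0.0064  # placeholder per-interaction chance of a mutation

def run_tape(tape):
    """Stub for the bff interpreter: execute the 128-byte combined tape
    as code, letting it modify itself; invalid bytes are skipped."""
    return tape  # a real implementation would interpret the instructions

soup = [bytearray(random.randbytes(TAPE_LEN)) for _ in range(SOUP_SIZE)]

for interaction in range(2_000_000):         # "a few million interactions"
    i, j = random.sample(range(SOUP_SIZE), 2)
    combined = run_tape(soup[i] + soup[j])    # stick end to end and run
    soup[i] = bytearray(combined[:TAPE_LEN])  # pull the halves back apart
    soup[j] = bytearray(combined[TAPE_LEN:])
    if random.random() < MUTATION_RATE:       # the occasional cosmic ray
        tape = soup[random.randrange(SOUP_SIZE)]
        tape[random.randrange(TAPE_LEN)] = random.randrange(256)
```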

After a few million interactions, something magical happens: The tapes begin to reproduce.

Since bff has only seven instructions, represented by the characters “< > + - , [ ]”, and there are 256 possible byte values, following random initialization only 2.7 percent of the bytes in a given tape will contain valid instructions; any non-instructions are skipped over. Thus, at first, not much comes of interactions between tapes. Once in a while, a valid instruction will modify a byte, and this modification will persist in the soup. On average, though, only a couple of computational operations take place per interaction, and usually, they have no effect. In other words, while computation is possible in this toy universe, very little of it actually takes place. When a byte is altered, it’s likely due to random mutation, and even when it’s caused by the execution of a valid instruction, the alteration is arbitrary and purposeless.

But after a few million interactions, something magical happens: The tapes begin to reproduce. As they spawn copies of themselves and each other, randomness gives way to complex order. The amount of computation taking place in each interaction skyrockets, since—remember—reproduction requires computation. Two of Brainfuck’s seven instructions, “[” and “],” are dedicated to conditional branching, and define loops in the code; reproduction requires at least one such loop (“copy bytes until done”), causing the number of instructions executed in an interaction to climb into the hundreds, at minimum.

The code is no longer random, but obviously purposive, in the sense that its function can be analyzed and reverse-engineered. An unlucky mutation can break it, rendering it unable to reproduce. Over time, the code evolves clever strategies to increase its robustness to such damage. This emergence of function and purpose is just like what we see in organic life at every scale; it’s why, for instance, we’re able to talk about the function of the circulatory system, a kidney, or a mitochondrion, and how they can “fail”—even though nobody designed these systems.

We reproduced our basic result with a variety of other programming languages and environments. In one especially beautiful visualization, my colleague Alex Mordvintsev created a two-dimensional bff-like environment where each of a 200×200 array of “pixels” contains a tape, and interactions occur only between neighboring tapes on the grid. The tapes are interpreted as instructions for the iconic Zilog Z80 microprocessor, launched in 1976 and used in many 8-bit computers over the years (including the Sinclair ZX Spectrum, Osborne 1, and TRS-80). Here, too, complex replicators soon emerge out of the random interactions, evolving and spreading across the grid in successive waves.

VII. Thermodynamics

We don’t yet have an elegant mathematical proof of the sort Turing would have wanted, but our simulations suggest that, in general, life arises spontaneously whenever conditions permit. Those conditions seem quite minimal: a physical environment capable of supporting computation, a noise source, and enough time.

Replicators arise because an entity that reproduces is more dynamically stable than one that doesn’t. In other words, if we start with one tape that can reproduce and one that can’t, then at some later time, we’re likely to find many copies of the one that can reproduce, but we’re unlikely to find the other at all, because it will either have been degraded by noise or overwritten.
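That statistical claim is easy to watch play out in a toy simulation with made-up rates: replicators copy themselves over random slots, noise occasionally corrupts a slot, and inert tapes can only ever be overwritten.

```python
import random

random.seed(0)  # reproducible toy run
soup = ["replicator"] * 5 + ["inert"] * 5
NOISE = 0.001   # made-up per-step corruption rate

for step in range(50_000):
    k = random.randrange(len(soup))
    if soup[k] == "replicator":      # a replicator copies itself
        soup[random.randrange(len(soup))] = "replicator"
    if random.random() < NOISE:      # noise degrades a random slot
        soup[random.randrange(len(soup))] = "corrupted"

print({kind: soup.count(kind) for kind in set(soup)})
# Typical outcome: almost entirely "replicator"; "inert" is long gone.
```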

This implies an important generalization of thermodynamics, the branch of physics concerned with the statistical behavior of matter subject to random thermal fluctuations—that is, of all matter, since, above absolute zero, everything is subject to such randomness. The famous second law of thermodynamics tells us that, in a closed system, entropy will increase over time; that’s why, if you leave a shiny new push mower outside, its blades will gradually dull and oxidize, its paint will start to peel off, and in a few years, all that will be left is a high-entropy pile of rust.

To a physicist, life is weird, because it seems to run counter to the second law. Living things endure, grow, and can even become more complex over time, rather than degrading. There is no strict violation of thermodynamics here, for life can’t exist in a closed system—it requires an input of free energy—but the seemingly spontaneous emergence and complexification of living systems has seemed beyond the purview of physics.

It now seems clear, though, that by unifying thermodynamics with the theory of computation, we ought to be able to understand life as the predictable outcome of a statistical process, rather than regarding it uneasily as technically permitted, yet mysterious. Our artificial life experiments suggest that, when computation is possible, it will be a “dynamical attractor,” because replicating entities are more dynamically stable than non-replicating ones, and, as von Neumann showed, computation is required for replication.

In our universe, that requires an energy source. This is because, in general, computation involves irreversible steps, and these consume free energy. Hence, the chips in our computers draw power and generate heat when they run. Life must draw power and generate heat for the same reason: because it is inherently computational.

VIII. Complexification

When we pick a tape out of the bff soup after a few million interactions, when replicators have taken over, we often see a level of complexity in the program on that tape that seems unnecessarily—even implausibly—high. A working replicator could consist of just a handful of instructions in a single loop, requiring a couple of hundred operations to run. Instead, we often see instructions filling up a majority of the 64 bytes, multiple and complex nested loops, and thousands of operations per interaction.

Where did all this complexity come from? It certainly doesn’t look like the result of simple Darwinian selection operating on the random text generated by a proverbial million monkeys typing on a million typewriters. In fact, such complexity emerges even with zero random mutation—that is, using only the initial randomness in the soup, which works out to a novella’s worth of gibberish. Hardly a million monkeys—and far too little to contain more than a few consecutive characters of working code.

Computers and cellphones are certainly purposive, or we wouldn’t talk about them as being buggy.

The answer recalls Margulis’ insight: the central role of symbiosis, rather than mere random mutation and selection, in evolution. When we look carefully at the quiescent period before tapes begin replicating, we notice a gradual, steady rise in the amount of computation taking place. This is due to the rapid emergence of imperfect replicators—very short bits of code that, in one way or another, have some nonzero probability of generating more code. Even if the code produced isn’t like the original, it’s still code, and only code can produce more code; non-code can’t produce anything!

Thus, there’s a selection process at work from the very beginning, wherein code begets code. This inherently creative, self-catalyzing process is far more important than random mutation in generating novelty. When bits of proliferating code combine to form a replicator, it’s a symbiotic event: By working together, these bits of code generate more code than they could separately, and the code they generate will in turn produce more code that does the same, eventually resulting in an exponential takeoff.

Moreover, after the takeoff of a fully functional tape replicator, we see further symbiotic events. Additional replicators can arise within a replicating tape, sometimes producing multiple copies of themselves with each interaction. In the presence of mutation, these extra replicators can even enter into symbiotic relationships with their “host,” conferring resistance to mutational damage.

IX. Ecology

Fundamentally, life is code, and code is life. More precisely, individual computational instructions are the irreducible quanta of life—the minimal replicating set of entities, however immaterial and abstract they may seem, that come together to form bigger, more stable, and more complex replicators, in ever-ascending symbiotic cascades.

In the toy universe of bff, the elementary instructions are the seven special characters “< > + - , [ ]”. On the primordial sea floor, geothermally driven chemical reactions that could catalyze further chemical reactions may have played the same role. Our growing understanding of life as a self-reinforcing dynamical process boils down not to things, but to networks of mutually beneficial relationships. At every scale, life is an ecology.

Nowadays, we interact with computers constantly: the phones in our pockets and purses, our laptops and tablets, data centers, and AI models. Are they, too, alive?

They are certainly purposive, or we couldn’t talk about them being broken or buggy. But hardware and software are, in general, unable to reproduce, grow, heal, or evolve on their own, because engineers learned long ago that self-modifying code (like bff or DNA) is hard to understand and debug. Thus, phones don’t make baby phones, and apps don’t spontaneously generate new versions of themselves.

And yet: There are more phones in the world this year than last year; apps acquire new features, become obsolete, and eventually reach end-of-life, replaced by new ones; and AI models are improving from month to month. It certainly looks as if technology is reproducing and evolving!

If we zoom out, putting technology and humans in the frame together, we can see that this larger, symbiotic “us” is certainly reproducing, growing, and evolving. The emergence of technology, and the mutually beneficial (if, sometimes, fraught) relationship between people and tech, is nothing more or less than our own most recent major evolutionary transition. Technology, then, is not distinct from nature or biology, but merely its most recent evolutionary layer.

Lead image: Bruce Rolff / Shutterstock

References

1. Cech, T.R. The RNA worlds in context. Cold Spring Harbor Perspectives in Biology 4, a006742 (2012).

2. Russell, M.J. & Martin, W. The rocky roots of the acetyl-CoA pathway. Trends in Biochemical Sciences 29, 358-363 (2004).

3. Szathmáry, E. & Maynard Smith, J. The major evolutionary transitions. Nature 374, 227-232 (1995).

4. Woese, C.R. On the evolution of cells. Proceedings of the National Academy of Sciences 99, 8742-8747 (2002).

5. Turing, A.M. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society s2-42, 230-265 (1937).

6. Von Neumann, J. First draft of a report on the EDVAC. University of Pennsylvania (1945).

7. Von Neumann, J. Theory of Self-Reproducing Automata. University of Illinois Press, Urbana, IL (1966).

8. Agüera y Arcas, B., et al. Computational life: How well-formed, self-replicating programs emerge from simple interaction. arXiv 2406.19108 (2024).
