The Economist | What if artificial intelligence is just a “normal” technology?

Is AI a doomsday threat or a salvation? New research by two Princeton scholars makes a counterintuitive argument: artificial intelligence may be just a “normal technology”. Stepping outside the extremes of utopian and dystopian narratives, this article starts from the historical trajectory of technological revolutions to examine the real pace of AI adoption, its gradual effect on jobs, and why today’s “alignment anxiety” may be misguided. Amid the global frenzy of debate over AI, a “dull but sober” middle-ground paper is all the more worth a careful read by anyone thinking about the future.

The Economist | Free exchange

What if artificial intelligence is just a “normal” technology?

Its rise might yet follow the path of previous technological revolutions

Opinions about artificial intelligence tend to fall on a wide spectrum. At one extreme is the utopian view that AI will cause runaway economic growth, accelerate scientific research and perhaps make humans immortal. At the other extreme is the dystopian view that AI will cause abrupt, widespread job losses and economic disruption, and perhaps go rogue and wipe out humanity. So a paper published earlier this year by Arvind Narayanan and Sayash Kapoor, two computer scientists at Princeton University, is notable for the unfashionably sober manner in which it treats AI: as “normal technology”. The work has prompted much debate among AI researchers and economists.

Both utopian and dystopian views, the authors write, treat AI as an unprecedented intelligence with agency to determine its own future, meaning analogies with previous inventions fail. Messrs Narayanan and Kapoor reject this, and map out what they see as a more likely scenario: that AI will follow the trajectory of past technological revolutions. They then consider what this would mean for AI adoption, jobs, risks and policy. “Viewing AI as normal technology leads to fundamentally different conclusions about mitigations compared to viewing AI as being humanlike,” they note.

The pace of AI adoption, the authors argue, has been slower than that of innovation. Many people use AI tools occasionally, but at an intensity in America (in hours of usage per day) that is still low as a fraction of overall working hours. For adoption to lag behind innovation is not surprising, because it takes time for people and companies to adapt habits and workflows to new technologies. Adoption is also hampered by the fact that much knowledge is tacit and organisation-specific, data may not be in the right format and its use may be constrained by regulation. Similar constraints were in place a century ago, when factories were electrified: doing so took decades, because it needed a total rethink of floor layouts, processes and organisational structures.

Moreover, constraints on the pace of AI innovation itself may be more significant than they seem, argues the paper, because many applications (such as drug development, self-driving cars or even just booking a holiday) require extensive real-world testing. This can be slow and costly, particularly in safety-critical fields that are tightly regulated. As a result, economic impacts “are likely to be gradual”, the authors conclude, rather than involving the abrupt automation of a big chunk of the economy.

Even a slow spread of AI would change the nature of work. As more tasks become amenable to automation, “an increasing percentage of human jobs and tasks will be related to AI control.” There is an analogy here with the Industrial Revolution, in which workers went from performing manual tasks, such as weaving, to supervising machines doing those tasks—and handling situations machines could not (like intervening when they get stuck). Rather than AI stealing jobs wholesale, jobs might increasingly involve configuring, monitoring and controlling AI-based systems. Without human oversight, Messrs Narayanan and Kapoor speculate, AI may be “too error-prone to make business sense”.

That, in turn, has implications for AI risk. Strikingly, the authors criticise the emphasis on “alignment” of AI models, meaning efforts to ensure outputs align with their human creators’ goals. Whether a given output is harmful often depends on context that humans may understand, but the model lacks, they argue. A model asked to write a persuasive email, for example, cannot tell if that message will be used for legitimate marketing or nefarious phishing. Trying to make an AI model that cannot be misused “is like trying to make a computer that cannot be used for bad things”, the authors write. Instead, they suggest, defences against the misuse of AI, for example to create computer malware or bioweapons, should focus further downstream, by strengthening existing protective measures in cyber-security and biosafety. This also increases resilience to forms of these threats not involving AI.

Terminator is fictional

Such thinking suggests a range of policies to reduce risk and increase resilience. These include whistleblower protection (as seen in many other industries), compulsory disclosure of AI usage (as happens with data protection), registration to track deployment (as with cars and drones) and mandatory incident-reporting (as with cyber-attacks). In sum, the paper concludes that lessons from previous technologies can be fruitfully applied to AI—and treating the technology as “normal” leads to more sensible policies than treating it as imminent superintelligence.

The paper is not without its flaws. At times it reads like a polemic against AI hype in general. It is rambling in places, states beliefs as facts and not all of its arguments are convincing—though the same is true of utopian and dystopian screeds. Even AI-pragmatists may feel the authors are too blasé about the potential for labour-market disruption, underestimate the speed of AI adoption, are too dismissive of the risks of misalignment and deception, and complacent about regulation. Their prediction that AI will not be able to “meaningfully outperform trained humans” at forecasting or persuasion seems oddly overconfident. And even if the utopian and dystopian scenarios are wrong, AI could still be far more transformative than the authors describe.

But many people, on reading this rejection of AI exceptionalism, will nod in agreement. The middle-ground view is less dramatic than predictions of an imminent “fast take-off” or apocalypse, so tends not to receive much attention. That is why the authors think it worthwhile to articulate this position: because they believe that “some version of our worldview is widely held”. Amid current worries about the sustainability of AI investment, their paper makes for a refreshingly dull alternative to AI hysteria.■
