Restraining the robots

Autonomous weapons and the new laws of war

A technology that may prove hard to restrain



The Harop, a kamikaze drone, bolts from its launcher like a horse out of the gates. But it is not built for speed, nor for a jockey. Instead it just loiters, unsupervised, too high for those on the battlefield below to hear the thin, old-fashioned whine of its propeller, waiting for its chance.

If the Harop is left alone, it will eventually fly back to a pre-assigned airbase, land itself and wait for its next job. Should an air-defence radar lock on to it with malicious intent, though, the drone will follow the radar signal to its source and the warhead nestled in its bulbous nose will blow the drone, the radar and any radar operators in the vicinity to kingdom come.
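The behaviour described is, at bottom, a small state machine: loiter until either a hostile radar locks on or fuel runs low, then home on the emitter or return to base accordingly. Below is a minimal sketch in Python of that loop as the article describes it; the states, thresholds and names are hypothetical illustrations, not IAI's actual control logic.

```python
from enum import Enum, auto

class Mode(Enum):
    LOITER = auto()   # circle on station, waiting for a chance
    HOME = auto()     # follow a hostile radar emission to its source
    RETURN = auto()   # nothing found and fuel low: fly back and land

def next_mode(mode: Mode, radar_lock: bool, fuel: float) -> Mode:
    """One tick of the loiter-then-home loop the article describes.

    radar_lock -- True if an air-defence radar has locked on to the drone
    fuel       -- remaining fuel fraction, 0.0 to 1.0 (threshold hypothetical)
    """
    if mode is Mode.LOITER:
        if radar_lock:
            return Mode.HOME      # ride the beam down to the emitter
        if fuel < 0.2:
            return Mode.RETURN    # give up and recover at the airbase
    return mode

# A toy run: quiet skies, then a radar lock on the fourth tick.
mode = Mode.LOITER
for tick, locked in enumerate([False, False, False, True, True]):
    mode = next_mode(mode, radar_lock=locked, fuel=0.9)
    print(tick, mode.name)
```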



Acknowledging the long, unpleasant history of devices which kill indiscriminately, or without direct human command, is crucial to any discussion of the risks, and morality, of autonomous weapons. It should not mask the fact that their capabilities are increasing quickly—and that although agreements to limit their use might be desirable, they will be very difficult to enforce. It is not that hard to decide if a landmine fits the criteria that ban such weapons under the Ottawa treaty. But whether a Harop is an autonomous robot or a remote-controlled weapon depends on the software it is running at the time.

Weapons have been able to track their prey unsupervised since the first acoustic-homing torpedoes were used in the second world war. Most modern weapons used against fast-moving machines home in on their sound, their radar reflections or their heat signatures. But, for the most part, the choice about what to home in on—which aircraft’s hot jets, which ship’s screws—is made by a person.

An exception is in defensive systems, such as the Phalanx guns used by the navies of America and its allies. Once switched on, the Phalanx will fire on anything it sees heading towards the ship it is mounted on. And in the case of a ship at sea that knows itself to be under attack by missiles too fast for any human trigger finger, that seems fair enough. Similar arguments can be made for the robot sentry guns in the demilitarised zone (DMZ) between North and South Korea.
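The rule the article describes reduces to a single geometric test: is a track inside the engagement envelope and closing on the ship? A minimal sketch follows, under assumptions of our own; the range and speed thresholds are hypothetical placeholders, not real (and classified) fire-control parameters.

```python
import math

def is_closing(rel_pos, rel_vel, max_range_m=5_500.0, min_closing_ms=50.0):
    """Return True if a track is inside the engagement envelope and closing.

    rel_pos -- (x, y) metres from own ship to the contact
    rel_vel -- (vx, vy) contact velocity relative to own ship, m/s
    Both thresholds are hypothetical stand-ins, not Phalanx parameters.
    """
    rng = math.hypot(*rel_pos)
    if rng == 0.0 or rng > max_range_m:
        return False
    # Radial speed: positive means the contact is getting closer.
    closing = -(rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]) / rng
    return closing >= min_closing_ms

# An inbound sea-skimmer trips the rule; a crossing helicopter does not.
print(is_closing((4_000.0, 0.0), (-300.0, 0.0)))   # True
print(is_closing((4_000.0, 0.0), (0.0, 60.0)))     # False
```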

Rise of the machines

The challenge that modern armed forces, and armsmakers like IAI, are working on is the ability to pick the target out from a field of non-targets. There are two technological developments that make the challenge a timely one. One is that computers are far more powerful than they used to be and, thanks to “machine learning”, getting much more sophisticated in their ability to distinguish between objects. If an iPhone can welcome your face but reject your sibling’s, why shouldn’t a missile be able to distinguish a tank from a school bus?
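The analogy is to a two-class image classifier. Here is a toy sketch in PyTorch of what “distinguishing a tank from a school bus” amounts to computationally; the network is untrained and deliberately tiny, an illustration of the shape of the problem rather than anything fielded.

```python
import torch
import torch.nn as nn

# A toy two-class image classifier of the kind the analogy implies.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                  # two logits: [tank, school_bus]
)

image = torch.randn(1, 3, 64, 64)      # stand-in for a sensor frame
probs = model(image).softmax(dim=1)    # class probabilities
print(dict(zip(["tank", "school_bus"], probs.squeeze().tolist())))
```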



Cost is also a factor in armies where trained personnel are pricey. “The thing about robots is that they don’t have pensions,” says General Sir Richard Barrons, one of Britain’s most senior commanders until 2016. Nor do they have dependents. The loss of a robot is measured in money and capability, not human potential.

If keeping a human in the loop was merely a matter of spending more, it might be deemed worthwhile regardless. But human control creates vulnerabilities. It means that you must pump a lot of encrypted data back and forth. What if the necessary data links are attacked physically—for example with anti-satellite weapons—jammed electronically or subverted through cyberwarfare? Future wars are likely to be fought in what America’s armed forces call “contested electromagnetic environments”. The Royal Air Force is confident that encrypted data links would survive such environments. But air forces have an interest in making sure there are still jobs for pilots; this may leave them prey to unconscious bias.

The vulnerability of communication links to interference is an argument for greater autonomy. But autonomous systems can be interfered with, too. The sensors for weapons like Brimstone need to be a lot more fly than those required by, say, self-driving cars, not just because battlefields are chaotic, but also because the other side will be trying to disorient them. Just as some activists use asymmetric make-up to try to confuse face-recognition systems, so military targets will try to distort the signatures which autonomous weapons seek to discern. Paul Scharre, author of “Army of None: Autonomous Weapons and the Future of War”, warns that the neural networks used in machine learning are intrinsically vulnerable to spoofing.
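Mr Scharre’s point about spoofing has a standard laboratory demonstration: the fast gradient sign method (FGSM) of Goodfellow et al., in which a small perturbation, invisible to a human, flips a network’s decision. A minimal sketch against a toy stand-in model; the epsilon is exaggerated so the flip shows up reliably, and nothing here is specific to any military system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, image, label, eps):
    """Fast gradient sign method: shift every input value by +/-eps in
    whichever direction most increases the classification loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).detach()

# A toy linear "classifier" stands in for a target-recognition network.
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
image = torch.randn(1, 3, 32, 32)
label = model(image).argmax(dim=1)         # whatever the model now believes

adv = fgsm(model, image, label, eps=0.1)   # eps exaggerated for the demo
print("clean prediction:    ", model(image).argmax(dim=1).item())
print("perturbed prediction:", model(adv).argmax(dim=1).item())
print("max pixel change:    ", (adv - image).abs().max().item())
```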

Judgment day

New capabilities, reduced costs, resistance to countermeasures and the possibility of new export markets are all encouraging R&D in autonomous weapons. To nip this in the bud, the Campaign to Stop Killer Robots is calling for a pre-emptive ban on “fully autonomous” weapons. The trouble is that there is little agreement on where the line is crossed. Switzerland, for instance, says that autonomous weapons are those able to act “in partial or full replacement of a human in the use of force, notably in the targeting cycle”, thus encompassing Harop and Brimstone, among many others. Britain, by contrast, says autonomous weapons are only those “capable of understanding higher level intent and direction”. That excludes everything in today’s arsenals, or for that matter on today’s drawing boards.

Partly in order to sort these things out, in 2017 the UN’s Convention on Certain Conventional Weapons formalised its earlier discussions of the issues by creating a group of governmental experts (GGE) to study the finer points of autonomy. As well as trying to develop a common understanding of what weapons should be considered fully autonomous, it is considering both a blanket ban and other options for dealing with the humanitarian and security challenges that they create.

Most states involved in the convention’s discussions agree on the importance of human control. But they differ on what this actually means. In a paper for Article 36, an advocacy group named after a provision of the Geneva conventions that calls for legal reviews on new methods of warfare, Heather Roff and Richard Moyes argue that “a human simply pressing a ‘fire’ button in response to indications from a computer, without cognitive clarity or awareness” is not really in control. “Meaningful control”, they say, requires an understanding of the context in which the weapon is being used as well as capacity for timely and reasoned intervention. It also requires accountability.



The two dozen states that want a legally binding ban on fully autonomous weapons are mostly military minnows like Djibouti and Peru, but some members, such as Austria, have diplomatic sway. None of them has the sort of arms industry that stands to profit from autonomous weapons. They ground their argument in part on International Humanitarian Law (IHL), a corpus built around the rules of war laid down in the Hague and Geneva conventions. This demands that armies distinguish between combatants and civilians, refrain from attacks where the risk to civilians outweighs the military advantage, use no more force than is proportional to the objective and avoid unnecessary suffering.

When it comes to making distinctions, Vincent Boulanin and Maaike Verbruggen, experts at SIPRI, the Stockholm International Peace Research Institute, note that existing target-recognition systems, for all their recent improvement, remain “rudimentary”, often vulnerable to bad weather or cluttered backgrounds. Those that detect humans are “very crude”. And this is before wily enemies try to dupe the robots into attacking the wrong things.

Necessity and proportionality, which require weighing human lives against military aims, are even more difficult. “However sophisticated new machines may be, that is beyond their scope,” says Major Kathleen McKendrick of the British army. An army that uses autonomous weapons needs to be set up so as to be able to make proportionality decisions before anything is fired.

Salvation?

More broadly, IHL is shaped by the “Martens clause”, originally adopted in the Hague convention of 1899. This says that new weapons must comply with “the principles of humanity” and “dictates of public conscience”. Bonnie Docherty of Human Rights Watch, the NGO which co-ordinates the anti-robot campaign, argues that, “As autonomous machines, fully autonomous weapons could not appreciate the value of human life and the significance of its loss...They would thus fail to respect human dignity.” A strong argument, but hardly legally watertight; other philosophies are available. As for the dictates of public conscience, research and history show that they are more flexible than a humanitarian would wish.

Leaving aside law and ethics, autonomous weapons could pose new destabilising risks. Automatic systems can interact in seemingly unpredictable ways, as when trading algorithms cause “flash crashes” on stockmarkets. Mr Scharre raises the possibility of a flash war caused by “a cascade of escalating engagements”. “If we are open to the idea that humans make bad decisions”, says Peter Roberts, director of military sciences at the Royal United Services Institute, a think-tank, “we should also be open to the idea that ai systems will make bad decisions—just faster.”
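The flash-crash analogy can be made concrete with a toy simulation: two automated postures, each programmed to respond one notch harder than whatever it last observed, escalate from a standing start with no human decision anywhere in the loop. The numbers below are entirely hypothetical; the point is the feedback structure, not a model of any real doctrine.

```python
# Two automated rules of engagement, each one notch more forceful than
# the last thing it observed. Neither side "decides" to go to war; the
# coupled loop does. All numbers hypothetical.
def respond(observed_level: int) -> int:
    return min(observed_level + 1, 10)   # 0 = quiet ... 10 = open conflict

a = b = 0        # both sides start quiet
for tick in range(6):
    a, b = respond(b), respond(a)        # both react simultaneously
    print(f"tick {tick}: A={a} B={b}")
# Six ticks later both sides sit at level 6, without a human decision.
```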

Beyond the core group advocating a ban there is a range of opinions. China has indicated that it supports a ban in principle; but on use, not development. France and Germany oppose a ban, for now; but they want states to agree a code of conduct with wriggle room “for national interpretations”. India, which chaired the GGE, is reserving its position. It is eager to avoid a repeat of nuclear history, in which technological have-nots were locked out of game-changing weaponry by a discriminatory treaty.



The urge to restrict the technology before it is widely fielded, and used, is understandable. If granting weapons ever more autonomy turns out, in practice, to yield a military advantage, and if developed countries see themselves in wars of national survival, rather than the wars of choice they have waged recently, past practice suggests that today’s legal and ethical restraints may fall away. States are likely to sacrifice human control for self-preservation, says General Barrons. “You can send your children to fight this war and do terrible things, or you can send machines and hang on to your children.” Other people’s children are other people’s concern.
