A new commentary by tech editor Reed Albergotti argues that as artificial-intelligence systems rapidly improve, autonomous weapons are becoming a very real concern, and that the United States and China must institute far tougher guardrails. The piece highlights how both nations are investing heavily in drones, missiles and other strike platforms capable of identifying, tracking and destroying targets with limited human intervention. Without meaningful human oversight and clear rules for deployment and accountability, the article contends, these systems risk destabilising global security and weakening the ethical foundations of warfare. Additional reporting shows how swiftly autonomous military capabilities are advancing: Reuters reports that China's armed forces are deploying AI-powered drones supported by U.S.-designed Nvidia chips despite export controls, and that U.S. defence contractor Lockheed Martin is partnering with Saildrone to arm sea-drones with Tomahawk missiles, a move that underscores how autonomous platforms are being weaponised in real time.
Sources: Semafor, Tom’s Hardware
Key Takeaways
– The rapid proliferation of AI-enabled autonomous weapons is outpacing existing regulatory frameworks and international norms.
– Major powers such as the U.S. and China are investing aggressively in autonomous strike platforms, raising the risks of escalation and miscalculation and lowering the threshold for conflict.
– Without robust human-in-the-loop oversight, transparency and accountability structures, the deployment of autonomous weapons could undermine legal, ethical and strategic stability.
In-Depth
Weapon systems that incorporate artificial intelligence are evolving at a pace that raises urgent questions for national security, ethics and global stability. In a recent commentary, Reed Albergotti points out that autonomous weapons are no longer theoretical: drones, missiles and other robotic platforms are being built by both the United States and China with the capacity to detect, track and destroy targets with minimal human oversight. The article emphasises that this shift demands serious guardrails, meaning policies, oversight and standards that ensure humans remain meaningfully in control of decisions about lethal force.
The broader context bolsters these concerns. Investigative reporting shows that Chinese defence firms are leveraging cutting-edge AI systems, with reports indicating continued use of U.S. Nvidia chips even under export restrictions, to fuel autonomous combat-drone development. At the same time, U.S. companies such as Lockheed Martin are actively advancing uncrewed sea-drone strike platforms equipped with long-range missiles, signalling that the autonomous-weapons era is already underway. The technology is not simply on the horizon; it is being deployed and proliferating now.
Yet regulation and oversight remain inadequate. No binding international treaty mandates that a human decision-maker be involved in every use of force, and many military AI systems occupy grey zones: "human-on-the-loop" configurations, in which a person supervises the system and can veto an engagement, or even "human-out-of-the-loop" configurations, in which the system engages without any human involvement at all. The risk is not simply that machines will misfire or malfunction (though that is a concern) but that the speed, autonomy and scale of these systems will shrink the time available for human deliberation, blur the lines of accountability and raise the prospect of unintended escalation. In short, a future in which wars are fought by machines without meaningful human judgement is not science fiction; it is increasingly plausible.
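To make the distinction between those oversight configurations concrete, the following minimal Python sketch is offered purely as an illustration; every name in it is invented, and it models only the policy logic of the three configurations, not any real weapon system or vendor API.

```python
# Hypothetical sketch: how the three oversight configurations gate an
# engagement decision. Names and structure are invented for illustration.
from enum import Enum


class Oversight(Enum):
    IN_THE_LOOP = "in"       # a human must positively authorise each engagement
    ON_THE_LOOP = "on"       # the system acts unless a human vetoes in time
    OUT_OF_THE_LOOP = "out"  # the system acts with no human involvement


def may_engage(mode: Oversight, human_approved: bool, human_vetoed: bool) -> bool:
    """Return whether an engagement may proceed under a given oversight mode."""
    if mode is Oversight.IN_THE_LOOP:
        # Positive human authorisation is required before any action.
        return human_approved
    if mode is Oversight.ON_THE_LOOP:
        # Action proceeds by default; a human can only stop it, and the
        # veto window shrinks as system speed increases.
        return not human_vetoed
    # OUT_OF_THE_LOOP: no human judgement enters the decision at all.
    return True


if __name__ == "__main__":
    # With no human input of any kind, only the in-the-loop mode blocks action.
    for mode in Oversight:
        print(mode.name, "->", may_engage(mode, human_approved=False, human_vetoed=False))
```

Run with no human input, the sketch blocks action only in the in-the-loop mode, which is precisely the accountability gap the grey-zone configurations create: once positive authorisation is no longer required, silence from the human operator reads as consent.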
From a conservative standpoint, the implications are profound. National defence is predicated on deterrence, clear chains of command and moral clarity in the use of force. If autonomous weapons erode human responsibility and oversight, they could undermine the very principles that give democracies their legitimacy in war. Additionally, an arms race in autonomous systems, not just in traditional weapons, would raise the prospect of strategic instability: if adversaries believe they can gain an advantage by deploying uncrewed lethal systems, the incentive to rush development and cut corners on safety grows.
Therefore, the policy takeaway is clear: the U.S. and its allies must lead in developing enforceable standards, transparent testing and certification regimes, and international agreements that make human oversight non-negotiable. Moreover, defence investment should not simply mirror adversary capabilities, but be paired with governance frameworks that preserve accountability, moral authority and democratic oversight. Failing to act could result in a world where machines take ever-growing roles in war—making decisions that should remain human, and in doing so, eroding the foundations of Western military ethics and strategic stability.