At DrupalCon Chicago, we attended the keynote The Security Implications of AI, delivered by Alexandra Bell, president and CEO of the Bulletin of the Atomic Scientists and a former diplomat with decades of experience in arms control and nuclear non-proliferation.
Bell presented a deep and balanced vision of how AI is transforming global security. Her central message: humanity still has the agency to decide how AI will shape our future, but only if we act with responsibility, preparation, and cooperation.
Two narratives, one decision: where will AI land?
Bell opened by acknowledging the two dominant narratives. The optimistic view holds that AI will make work more efficient, drive advances in science, medicine, and education, and could even help reverse the climate crisis. The pessimistic view, on the other hand, warns that it will make us dependent and less capable, destroy jobs en masse and cause social instability, amplify disinformation, and threaten privacy, escalating to extreme scenarios like autonomous military control or dystopian thought experiments like the "paperclip maximizer."
Her conclusion was clear: reality will land somewhere in between, but the direction depends on the decisions we make now.
The nuclear dilemma of artificial intelligence
This was the most critical axis of the talk. Bell explained that AI could both facilitate the construction of nuclear weapons — through its greater capacity for analysis and design — and help prevent it, by improving the monitoring of fissile materials and making arms control agreements more efficient through better verification tools.
But the most serious risks emerge when AI is integrated into military and nuclear command systems. To illustrate this, Bell recalled the Stanislav Petrov incident of 1983, when a false alarm in a Soviet early-warning system nearly triggered a nuclear war. Petrov averted disaster because he could draw on human intuition and weigh context the system had not anticipated.
An AI system, by design, cannot do that: it doesn't go off-script or weigh information it wasn't built to process, making it more vulnerable to errors and systemic failures. For Bell, the uncontrolled integration of AI into nuclear decision-making is one of the greatest existential risks of the century.
Climate tool or environmental burden?
AI can be a key tool for optimizing power grids, improving climate models, designing resilience technologies, and supporting the global energy transition. But Bell also warned of the other side: the exponential energy consumption of data centers, dependence on fossil fuels or unregulated nuclear energy, pressure on water resources, destructive mining for chips and hardware, and the rise of electronic waste.
Her message was straightforward: AI can help the climate, but right now it's also making it worse.
AI and pathogens: the risk that keeps Bell up at night
Bell highlighted that AI can detect pandemics early, accelerate diagnostics, improve vaccine production, and strengthen biological surveillance systems. However, it could also facilitate the creation of extremely dangerous engineered pathogens, increase the risk of laboratory accidents if used without oversight, and open the door to forms of synthetic biology with existential risks still difficult to fully grasp. It's no coincidence that Bell identified biosecurity as the issue that worries her most.
Disinformation on an industrial scale
AI multiplies disinformation at unprecedented speed and erodes the collective ability to distinguish the real from the fabricated. That has concrete consequences: it weakens democracies and breaks the chain between expert knowledge and the public. Bell illustrated this with a close-to-home example: analyses from the Bulletin of the Atomic Scientists itself end up being misrepresented by low-quality LLMs, losing nuance and precision along the way.
If we can't trust the information we consume, we can't organize to face any global threat.
Norms, cooperation, and a clock that can still turn back
Bell was clear: we still have time. Not out of easy optimism, but from an honest reading of what's at stake. Governments need real international agreements, especially among the U.S., Russia, and China. Industry needs more ethics and less unchecked consumption. And society needs digital literacy, responsible journalism, and critical skepticism toward everything AI generates.
The Doomsday Clock, which the Bulletin has maintained since 1947, is today closer to midnight than at any other point in its history. But Bell didn't present it as a verdict; she presented it as a warning. It has moved back before. It can move back again. And that, ultimately, depends on the decisions we make now.