The Bulletin of the Atomic Scientists recently set the Doomsday Clock to 85 seconds to midnight, the closest it has ever been to catastrophe. While some speculate that artificial intelligence (AI) is a major driver, two leading AIs, Grok and ChatGPT, provide a different perspective. They emphasize that human actions, not AI, are the primary force behind the clock's alarming position.
In a series of questions posed to Grok, developed by xAI, and ChatGPT, developed by OpenAI, both AIs consistently pointed to human decisions and actions as the key factors. When asked if they have intentions, desires, or political goals, Grok responded, "No. I don’t have consciousness, emotions, or self-directed motivations. I generate responses based on training and inputs, not personal intent or political goals."
ChatGPT added a touch of humor, stating, "Also no. If I had political goals, they’d probably involve mandatory fact-checking and a long nap. I don’t want power – which already puts me ahead of several people who have it."
When questioned about responsibility for harm caused by AI, both AIs were clear. Grok stated, "With humans. AI systems are tools; harm arises from how people design, deploy, regulate, or misuse them – not from independent action by the system."
ChatGPT echoed this sentiment, adding, "Tools don’t wake up angry, vindictive, or obsessed with loyalty. Humans do. Responsibility follows power. It always has."
Both AIs also addressed their potential for error and the ability to correct real-world outcomes. Grok acknowledged, "Yes. My data can be incomplete or biased, and my reasoning can be wrong, particularly on complex or evolving topics."
ChatGPT chimed in, "Absolutely. The difference is I’ll admit it without calling the error 'fake news' or blaming a deep state."
On the question of independently correcting real-world outcomes, Grok and ChatGPT gave essentially the same answer: "No. I have no real-world agency. Corrections require human action."
When asked who poses the greater risk to democratic systems, Grok highlighted, "Humans with political authority who disregard norms, accountability, and institutional guardrails. AI without agency can amplify problems, but leaders can directly undermine institutions and rights."
ChatGPT further elaborated, "This isn’t subtle. An AI without agency is less dangerous than a human who thinks accountability is optional and consequences are for other people."
Regarding the Doomsday Clock, Grok clarified, "While AI is cited as a contributing 'disruptive technology,' the primary drivers are nuclear risk, climate failure, biological threats, and the rise of nationalist autocracies undermining international cooperation."
ChatGPT provided a more direct take, "The clock didn’t move because a chatbot learned French. It moved because adults with power keep playing brinkmanship with reality."
On the greater long-term risk, Grok noted, "How flawed human decision-makers use powerful tools. History shows harm comes from misuse driven by ideology, greed, or error – not from tools existing."
ChatGPT reinforced this, "Fire didn’t burn cities by itself. Nuclear physics didn’t threaten civilisation on its own. People did that — usually while insisting they were very smart."
Both AIs agreed that large-scale harm with AI is most likely to come from institutions with access to data, capital, infrastructure, and authority, rather than individuals.