A bipartisan coalition of experts, former officials, and public figures has released the Pro-Human Declaration, a comprehensive framework for responsible artificial intelligence (AI) development. This comes in the wake of Washington's contentious relationship with AI firm Anthropic, highlighting the urgent need for coherent AI regulations.
The Pro-Human Declaration, signed by hundreds of experts, outlines a path to ensure AI enhances human potential rather than supplants it. The document emphasizes five key pillars: keeping humans in charge, avoiding power concentration, protecting the human experience, preserving individual liberty, and holding AI companies legally accountable.
Max Tegmark, an MIT physicist and AI researcher who helped organize the effort, notes, “Polling suddenly [is showing] that 95% of all Americans oppose an unregulated race to superintelligence.”
The declaration includes several robust provisions, among them mandatory pre-deployment testing of AI products and legal accountability for AI companies.
The release of the Pro-Human Declaration coincides with recent events that underscore the need for AI regulation. On the last Friday in February, Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk” after the company refused to grant the Pentagon unlimited use of its technology. Hours later, OpenAI struck a deal with the Defense Department, raising concerns about enforceability.
Dean Ball, a senior fellow at the Foundation for American Innovation, commented, “This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems.”
The Pro-Human Declaration aims to address the regulatory vacuum in AI, which has become increasingly apparent. Tegmark uses an analogy to highlight the importance of regulation: “You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe, because the FDA won’t allow them to release anything until it’s safe enough.”
Child safety is seen as a critical pressure point that could drive legislative action. The declaration calls for mandatory pre-deployment testing of AI products, particularly those that could impact children.