On Friday September 22nd 2023, the Future of Life Institute (FLI) will mark six months since they released their open letter calling for a six-month pause on giant AI experiments, which kicked off the global conversation about AI risk. It was signed by more than 30,000 experts, researchers, industry figures and other leaders. Since then, the EU strengthened its draft AI law, the U.S. Congress has held hearings on the large-scale risks, emergency White House meetings have been convened, and polls show widespread public concern about the technology's catastrophic potential - and Americans' preference for a slowdown. Yet much remains to be done to prevent the harms that could be caused by uncontrolled and unchecked AI development.

"AI corporations are recklessly rushing to build more and more powerful systems, with no robust solutions to make them safe. They acknowledge massive risks, safety concerns, and the potential need for a pause, yet they are unable or unwilling to say when or even how such a slowdown might occur," said Anthony Aguirre, FLI's Executive Director.

Critical Questions

FLI has created a list of questions that must be answered by AI companies in order to inform the public about the risks they represent, the limitations of existing safeguards, and their steps to guarantee safety. It also includes quotes from AI corporations about the risks, and polling data that reveals widespread concern. We urge policymakers, press, and members of the public to consider these - and address them to AI corporations wherever possible.

Policy Recommendations

FLI has published policy recommendations to steer AI toward benefiting humanity and away from extreme risks. They include: requiring registration for large accumulations of computational resources, establishing a rigorous process for auditing risks and biases of powerful AI systems, and requiring licenses for the deployment of these systems that would be contingent upon developers proving their systems are safe, secure, and ethical.

"Our letter wasn't just a warning; it proposed policies to help develop AI safely and responsibly. 80% of Americans don't trust AI corporations to self-regulate, and a bipartisan majority support the creation of a federal agency for oversight," said Aguirre. "We need our leaders to have the technical and legal capability to steer and halt development when it becomes dangerous. The steering wheel and brakes don't even exist right now."

Later this year, global leaders will convene in the United Kingdom to discuss the safety implications of advanced AI development. FLI has also released a set of recommendations for leaders leading up to and after the event.

Signatory Statements

"Addressing the safety risks of advanced AI should be a global effort. At the upcoming UK summit, every concerned party should have a seat at the table, with no 'second-tier' participants," said Max Tegmark, President of FLI. "The ongoing arms race risks global disaster and undermines any chance of realizing the amazing futures possible with AI. Effective coordination will require meaningful participation from all of us."