
With its convening at the 80th United Nations General Assembly (UNGA), AI Safety Connect is taking a decisive next step. What began as an initiative to spotlight AI safety during the AI Action Summit in Paris now brings together scientists, frontier labs, and policymakers during the UN high-level week, signaling that AI safety is rising to the same level of urgency as climate change and nuclear security.
From initiative to international platform
AI Safety Connect (AISC) is where the world meets to make AI safe. Launched in early 2025 during the Paris AI Action Summit, it has evolved in just a few months into a recognized international platform for AI safety and governance. AISC provides a neutral convening space where global stakeholders can address the risks of an unchecked race toward superintelligence.
AISC UNGA is organized by the AI Governance Coordination Project, FAR.AI, MILA, and The Future Society, and is co-hosted by the Permanent Missions of the Republic of Singapore, Canada, and Brazil to the United Nations, as well as the United Nations Development Programme (UNDP).
AI Safety: a global challenge, a shared responsibility
Every quarter brings new AI breakthroughs. Systems are becoming more powerful, agentic, opaque, and difficult to control, heightening the risk of unintended consequences and deliberate misuse. No single frontier lab or country can, on its own, harness the promise or mitigate the harms and risks posed by advanced AI systems.
Effective global and multilateral coordination requires deep, trusted, and consistent engagement with stakeholders. AI Safety Connect provides a place to unite stakeholders around pressing global AI safety issues and to discuss practical governance mechanisms that enable responsible innovation and sustainable development. By embedding AI safety into multilateral discussions at the UN, the initiative ensures that governance frameworks evolve in step with technology and that cooperation does not stop at borders.
“AI safety is no longer a technical issue—it is a diplomatic one. With AI systems advancing at unprecedented speed, we need the same kind of international cooperation that has governed nuclear security or climate change. AI Safety Connect exists to make sure this conversation happens at the highest level and continues beyond single events.” — Cyrus Hodes, Co-Founder, AI Safety Connect
UNGA 2025: a milestone gathering
Around the 80th United Nations General Assembly, AI Safety Connect will convene roughly 100 high-level participants from UN agencies, governments, leading frontier labs, academia, and civil society to:
- Coordinate policymakers’ responses to advanced AI development by frontier labs across borders and industries.
- Highlight how advanced AI affects the role of AI Safety Institutes, and of governments more broadly.
- Introduce the Global Call for AI Red Lines, a campaign calling for international action to define unacceptable uses and behaviors of AI systems. As of today, it has been endorsed by over 200 prominent figures, including former heads of state and Nobel laureates, and over 70 organizations working on AI governance. See red-lines.ai for more information.
“Our mission with AI Safety Connect is to build lasting channels of trust and cooperation between policymakers, researchers, and frontier labs. This is not about a one-day forum, but about creating the infrastructure for collective action so that when AI reaches dangerous thresholds, the world is prepared to respond together.” — Nicolas Miailhe, Co-Founder, AI Safety Connect
Learn more about AISC here: aisafetyconnect.org