Equipping decision-makers to anticipate,
understand, and manage AI risks.
The AI Risk Explorer is an online platform monitoring the emergence and management of large-scale AI risks.
We currently focus on four prominent threats.
AI could enhance cyberattacks across all stages, from early reconnaissance to active exploitation.
AI could enable the creation and misuse of biological agents, increasing risk from concept to deployment.
Advanced AI systems could slip beyond human control, making it difficult to oversee them, intervene, correct their behavior, or shut them down.
AI could amplify the effectiveness and scale of influence operations and other forms of harmful manipulation.
Access our comprehensive databases of model evaluations, benchmarks, and real-world threats
These resources support evidence-based AI risk research and analysis:
Comprehensive database of reports assessing capabilities and propensities across different risk categories.
Real-world AI incidents and notable operations, focusing on cyber and manipulation risks.
Collection of AI risk benchmarks to support systematic risk assessment and comparison.

The Challenge: As AI capabilities advance, so do the risks they pose. Staying up to date is often impossible due to information overload, time constraints, and technical barriers.
Our Solution: We cut through the noise by providing curated, timely, and accessible information on AI risks.

Explore in-depth insights into major AI risks
Monitor emerging evidence through evaluations, benchmarks, and incidents
Review curated policies developed to address them

Anyone who wants to keep up with emerging AI risks can benefit from the AI Risk Explorer, including:
Subscribe to our newsletter for the latest insights on AI risk