Work in AI safety includes:
- AI alignment: Technical and conceptual research focused on getting AI systems to do what we want them to do
- AI policy and governance: Building institutions and mechanisms that lead major actors (such as AI labs and national governments) to adopt good AI safety practices
- AI strategy and forecasting: Building models of how AI is likely to develop and how our actions can improve outcomes
- Supporting efforts: Building the systems and resources that enable the work above, such as outreach, community building, and education