OpenAI Looks to Strengthen AI Safety Leadership with New Preparedness Head

OpenAI is searching for a senior executive to lead its Preparedness function as the company intensifies efforts to identify and manage risks linked to increasingly advanced artificial intelligence systems. The move highlights OpenAI’s growing focus on safety as AI capabilities expand across sensitive and high-impact areas.

The role, titled Head of Preparedness, will be responsible for tracking emerging risks related to computer security, biological applications, and the potential mental health impact of AI models. The executive will also oversee the execution of OpenAI’s Preparedness Framework, which guides how the organisation evaluates, mitigates, and responds to high-risk AI capabilities before deployment.

OpenAI CEO Sam Altman recently acknowledged that newer AI systems are beginning to present complex challenges. In a public post, he pointed to models that are increasingly effective at identifying software vulnerabilities, alongside concerns about how advanced AI interactions may affect users’ mental wellbeing. He emphasised the importance of enabling defenders and researchers while preventing misuse by malicious actors.

According to the job listing, the role carries a compensation package of $555,000, excluding equity. OpenAI originally introduced its Preparedness team in 2023 to study potentially catastrophic AI risks, ranging from cyber threats such as phishing to more speculative dangers involving large-scale security failures.

In 2024, OpenAI reassigned its former head of Preparedness to a role focused on AI reasoning, while several other safety-focused leaders exited or shifted away from preparedness functions. The company has since updated its Preparedness Framework, noting it may revise safety thresholds if competitors release high-risk models without similar safeguards.

The hiring comes amid heightened scrutiny of generative AI tools, particularly around mental health concerns. OpenAI has stated it is actively improving systems to recognise distress signals and guide users toward appropriate real-world support.

All Rights Reserved © 2025 ViralVault