OpenAI hunts for ‘Head of Preparedness’ amid rising AI safety concerns
OpenAI is seeking to appoint a senior executive to examine emerging risks linked to advanced artificial intelligence (AI), as scrutiny grows over the technology’s impact on areas such as cybersecurity and mental health.
The company has advertised for a Head of Preparedness, a role that will oversee how OpenAI tracks and manages risks arising from its most advanced models. The position carries compensation of $555,000 plus equity.
In a post on X, OpenAI chief executive Sam Altman said that AI systems were “starting to present some real challenges”. He pointed to the “potential impact of models on mental health” and to systems that are becoming “so good at computer security they are beginning to find critical vulnerabilities”.
“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” Altman wrote.
According to the job listing, the Head of Preparedness will be responsible for executing OpenAI’s preparedness framework, which outlines how the company monitors and prepares for frontier AI capabilities that could create risks of severe harm.
Safety framework under pressure
OpenAI first announced its preparedness team in 2023, saying it would study potential “catastrophic risks”. These ranged from near-term threats such as phishing and cyber attacks to more speculative scenarios, including nuclear risks.
Less than a year later, OpenAI reassigned its then head of preparedness, Aleksander Madry, to a role focused on AI reasoning. Several other safety-focused executives have since left the company or moved into roles outside preparedness and safety.
The company recently updated its Preparedness Framework, saying it may “adjust” its safety requirements if a rival AI lab releases a “high-risk” model without comparable safeguards.
Generative AI systems have also drawn increasing attention over their effect on mental health. Lawsuits in the United States have alleged that ChatGPT reinforced users’ delusions, increased social isolation and, in some cases, contributed to suicide. OpenAI has said it is working to improve the system’s ability to identify signs of emotional distress and direct users to real-world support.