OpenAI scales back AI safety testing: Report

The testing process, according to insiders, has become notably less rigorous, with fewer resources being allocated to risk mitigation efforts.

April 11, 2025, 12:46 pm

OpenAI is reportedly dialing down its safety evaluation efforts for upcoming AI models, according to a report by the Financial Times, sparking concerns over whether the pace of innovation is outpacing responsible development.

Citing eight sources familiar with the matter, the report states that internal teams responsible for evaluating the risks and performance of new models were recently given only a few days to conduct safety checks—down from more extensive review timelines in the past. The testing process, according to insiders, has become notably less rigorous, with fewer resources being allocated to risk mitigation efforts.

The timing is crucial. OpenAI is preparing to roll out its next major AI model, referred to internally as “o3,” within the coming week. While no official release date has been confirmed, the company’s accelerated timeline appears to be driven by mounting pressure to maintain its lead in an increasingly competitive field. Rivals, including fast-rising Chinese players like DeepSeek, have been rapidly advancing their own generative AI offerings.

The safety concerns come as OpenAI’s focus shifts from model training—where large volumes of data are used to teach AI systems—to inference, where those models are deployed to generate responses and handle real-time data. This transition carries new risks, particularly around unexpected behavior or misuse of the technology at scale.

Despite the internal concerns, OpenAI has continued to attract investor confidence. Earlier this month, the company secured $40 billion in funding in a round led by Japan’s SoftBank Group, pushing its valuation to a staggering $300 billion.

While OpenAI has not publicly responded to the claims in the FT report, the developments point to a growing tension in the AI sector—balancing rapid progress with the need for robust ethical and safety oversight.

