Technology Daily Brief · Vendor Claim

OpenAI Safety Fellowship Opens as New Yorker Investigation Reports Internal Safety Team Dissolution

2 min read · The Next Web · Partial
OpenAI announced a pilot fellowship funding external researchers on AI safety and alignment, hours after a New Yorker investigation reportedly found the company had dissolved its internal safety teams. The juxtaposition defines the story.

OpenAI launched the Safety Fellowship on April 6, 2026, a pilot program placing external researchers inside the company to work independently on AI safety and alignment. Fellows receive a monthly stipend, computing resources, and mentorship from OpenAI researchers. The program runs September 14, 2026 through February 5, 2027. Applications close May 3; successful applicants are notified by July 25.

The fellowship covers seven priority research areas: safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. Fellows can work remotely or from Constellation, OpenAI’s Berkeley workspace. The program is explicitly described as a pilot, not a standing commitment.

The timing is hard to read as coincidental. According to TNW, the fellowship was announced hours after a New Yorker investigation by Ronan Farrow reported that OpenAI had dissolved its superalignment and AGI-readiness teams and removed safety from its IRS filings. The Filter was unable to verify the New Yorker article directly from the primary source, so those specific claims should be understood as reported by TNW, not independently confirmed here. What is confirmed: the fellowship and the investigation landed in the same news cycle.

That context matters for anyone evaluating the program at face value. An external fellowship is a real resource commitment: a stipend, compute, mentorship, and a five-month runway for independent safety research are not props. But “external” and “independent” are doing a lot of work in the announcement language. External fellows working under OpenAI mentorship, with access contingent on OpenAI’s approval, are not the same as genuinely arm’s-length safety researchers.

The agentic oversight priority area deserves specific attention. It’s listed alongside ethics and robustness, but it sits differently for practitioners building on OpenAI’s stack. Agentic systems (multi-step, tool-using, autonomous workflows) are where the near-term safety risks are most concrete and least well understood. That OpenAI is explicitly recruiting external researchers into this area suggests internal capacity may be limited, redirected, or both.

For researchers considering the program: the application window is short. May 3 is five weeks out. The fellowship page on OpenAI’s alignment site carries the authoritative details on what the program expects and what it offers. Whether the fellowship represents a genuine opening for independent safety work or a well-resourced signal depends, in part, on what fellows are actually permitted to publish and who controls that decision. The announcement doesn’t address that question.

What to watch: whether the Farrow investigation’s specific claims (dissolved teams, IRS filing changes) are confirmed or contested by OpenAI directly; whether the fellowship’s “independent” framing holds when fellows attempt to publish findings that conflict with OpenAI’s public positions; and whether other frontier labs respond with similar external safety programs, which would convert this from a company-specific announcement into an industry pattern.

The deeper question isn’t whether OpenAI cares about safety. It’s whether external fellowships and internal safety teams are complements or substitutes. Right now, the evidence points in different directions depending on which source you trust.
