OpenAI wants to be part of writing the rules. On April 23, the company published a 13-page policy document, “Industrial Policy for the Intelligence Age: Ideas to Keep People First”, that lays out a set of economic and governance proposals spanning taxation, worker benefits, and research infrastructure. No government has adopted any of these proposals. They are OpenAI’s stated policy preferences.
The most concrete element in the package is also the least speculative: OpenAI announced $100,000 fellowships and $1 million in API credits for selected public-interest research grantees. That program exists now, has defined parameters, and gives academic and civic institutions a path to access OpenAI infrastructure for policy-relevant research. For organizations working in AI governance, safety research, or economic analysis of AI impacts, the program is worth examining on its own terms, separate from the broader policy agenda.
The document’s larger proposals are more ambitious and less certain. OpenAI proposes a public wealth fund to distribute returns from AI-driven productivity gains, according to Forbes’ coverage of the document. The company’s blueprint proposes taxing capital rather than labor to offset automation’s economic effects, according to Yahoo Finance’s analysis. These are OpenAI’s proposals, not enacted policy. The company frames them explicitly as responses to AI-driven economic disruption; the document positions automation displacement as the problem these mechanisms would address.
Two additional elements in the document should be treated with appropriate caution. The blueprint reportedly calls for accelerated investment in electric grid infrastructure, and reportedly includes references to what one source describes as “model-containment playbooks” for dangerous systems. Both claims come from a single lower-tier source and could not be independently verified; they should be noted but not relied upon without review of the primary document.
Why does a policy wishlist from a private company belong in a regulation brief? Because of what it signals about where AI governance is headed. OpenAI is not the first tech company to publish policy recommendations, but this document is unusually specific. It takes positions on economic questions (wealth distribution, labor versus capital taxation, federal investment architecture) that go well beyond product safety or content moderation. That specificity makes it a data point about how major AI labs intend to engage with the regulatory process.
Policy professionals and compliance teams should track this document not for what it will immediately change (nothing), but for whether it generates a legislative response. If any of these proposals surface in congressional bills, White House frameworks, or international policy discussions over the next several months, this document is the origin point. The fellowship program is also worth monitoring as a mechanism through which OpenAI builds relationships with the research community that conducts independent AI evaluation.
OpenAI is trying to define the terms of the economic debate around AI before that debate is fully underway. Whether that amounts to responsible corporate citizenship or strategic lobbying depends on your vantage point. What is clear is that the document exists, the proposals are specific, and the fellowship program is real. For the policy and compliance community, that is enough to warrant attention.