OpenAI’s President Gave Millions to Trump. He Says It’s for Humanity
OpenAI's president, Greg Brockman, made headlines after it was revealed he was a major donor to Donald Trump in 2025, a move that has ignited debate over AI regulation and political influence. Brockman told WIRED that supporting politicians who back AI is "bigger than the people that I happen to be employed with." The episode underscores growing tension within the AI sector over the role of regulation and the industry's political sway.
AI Leaders Enter the Political Arena
The intersection of AI development and political influence is becoming increasingly visible. Brockman's donations illustrate how AI leaders are attempting to shape the political landscape to support their interests. This has sparked concern, considering the potential for AI to reshape society.
Dueling Super PACs
The AI sector is witnessing the rise of competing super PACs (political action committees), reflecting sharply different views on AI regulation. Anthropic, a safety-focused AI company founded by former OpenAI executives, is putting $20 million into Public First Action, a group that opposes the super PACs backed by OpenAI leaders and investors and aims to ensure OpenAI does not accumulate excessive political power [1].
Anthropic's Counter-Move
Anthropic's $20 million donation to Public First Action signals the company's commitment to AI transparency and safeguards [2]. The bipartisan advocacy group will back candidates who favor such safeguards, with priorities that include giving the public more visibility into AI companies and regulating high-risk AI applications [2]. Anthropic has positioned itself as a proponent of AI regulation more broadly, advocating limits on exports of sensitive technology and proposing an AI transparency framework [2].
Diverging Views on AI Regulation
The contrasting approaches of OpenAI and Anthropic reveal a fundamental disagreement over AI regulation. While OpenAI's leaders are backing super PACs that resist tighter oversight, Anthropic is funding efforts to police AI safety more strictly [2, 3]. This division raises questions about the future of AI governance and its impact on technological development.
What's Next
The 2026 elections will likely become a battleground for AI policy, with significant financial resources deployed to influence voters and candidates. How these competing super PACs shape the regulatory debate, and whether they succeed in swaying political outcomes, will be worth watching, as will the continuing evolution of the discussion around AI safety and transparency.
Why It Matters
Political Influence: Brockman's donations to President Trump underscore the growing attempts by AI leaders to influence political outcomes, raising questions about transparency and accountability.
Regulatory Divide: Anthropic's $20 million investment in a counter-PAC highlights a significant split within the AI industry regarding the necessity and direction of AI regulation [1].
Campaign Flashpoint: AI policy is emerging as a key issue in upcoming elections, with super PACs raising millions to sway voters, indicating the high stakes involved in shaping the future of AI governance [2].
Transparency Push: Public First Action's focus on AI transparency and safeguards suggests a growing demand for greater visibility into AI companies and their practices, potentially leading to increased scrutiny and oversight [2].
Electoral Impact: With Anthropic backing candidates who support AI guardrails, the 2026 elections could determine the extent to which AI development is guided by ethical considerations and safety measures [2].