a16z Super PAC Targets Alex Bores Over AI Regulation Bill

The political battle over artificial intelligence regulation has found a dramatic new front in New York, where a powerful pro-AI Super PAC – a type of political action committee that can raise and spend unlimited amounts of money to influence elections, as long as it does not directly coordinate with candidates – has launched its first major offensive. Leading the Future, backed by venture capital giant Andreessen Horowitz and OpenAI President Greg Brockman, has chosen New York Assembly member Alex Bores and his congressional campaign as its primary target [1]. At stake is Bores’ bipartisan RAISE Act, which would impose significant AI safety requirements in New York, making this clash a critical test case for whether states can establish meaningful AI guardrails against Silicon Valley’s preference for minimal regulation.

The RAISE Act: A New Approach to AI Safety

At the heart of the political confrontation lies the RAISE Act, bipartisan legislation that represents a pragmatic approach to AI safety. The bill requires large AI labs operating in New York to implement comprehensive safety plans, adhere to established safety protocols, and disclose critical incidents, such as model theft by malicious actors. It explicitly prohibits AI firms from releasing models that pose unreasonable risks of critical harm and imposes civil penalties of up to $30 million on companies that fail to meet these standards [2]. The bill’s cross-party support underscores the perceived urgency of establishing foundational guardrails for powerful AI systems. Its provisions are designed to ensure that AI systems operate without causing harm to people, the environment, or society. For technology firms based in or operating within New York, the legislation creates a clear compliance landscape with significant financial consequences for negligence, positioning the state as a potential model for other jurisdictions grappling with similar regulatory challenges.

The Super PAC’s Strategy: Targeting Bores

Targeting Bores represents a calculated escalation in the political battle over artificial intelligence governance. Leading the Future has made the New York Assembly member its primary political target, and the organization’s leadership has been remarkably transparent about its intentions: Zac Moffatt and Josh Vlasto told Politico that they would work on a multibillion-dollar effort to sink Bores’ campaign [3]. This declaration marks one of the most direct confrontations yet between Silicon Valley interests and a state-level policymaker advocating for AI safety requirements.

The strategic choice to target Bores specifically reveals much about the super PAC’s broader political calculus. As the chief sponsor of New York’s bipartisan RAISE Act, Bores has positioned himself at the forefront of state-level AI regulation. His legislation requires large AI labs to implement safety plans and disclose critical safety incidents, and prohibits the release of models with unreasonable risks of critical harm – all provisions that directly challenge the industry’s preference for self-regulation. The super PAC’s targeting of Bores may be seen as an attempt to stifle this regulatory push and protect the interests of tech giants who prefer minimal government oversight.

This confrontation extends beyond a single congressional race to encompass fundamental questions about how AI should be governed in America. Leading the Future’s leaders have framed their opposition in nationalistic terms, arguing that bills like the RAISE Act threaten American competitiveness and could cede AI leadership to China. However, this framing overlooks the reality that effective regulation often enhances public trust and long-term innovation rather than hindering it. The super PAC’s massive financial commitment against Bores suggests they view his campaign as a critical test case – if they can defeat a prominent regulator at the state level, they may deter other politicians from pursuing similar AI safety requirements.

The implications for New York’s political landscape are profound. A multibillion-dollar effort against a single congressional candidate represents an unprecedented level of outside spending in what would typically be a local race. This signals that tech interests are willing to deploy enormous resources to shape not just federal policy but state-level governance as well. For voters in New York’s 12th Congressional District, the race has transformed from a local contest into a national referendum on AI regulation.

Bores’ response to this targeting has been notably defiant, framing the super PAC’s opposition as validation of his regulatory approach. By characterizing their spending as opposition to basic guardrails on AI, he turns their financial advantage into a political liability – positioning himself as defending constituents against corporate overreach. This dynamic creates an intriguing political theater where massive spending could potentially backfire by reinforcing Bores’ narrative about unchecked tech power.

The stakes extend well beyond one election cycle. If Leading the Future succeeds in defeating Bores, it could establish a powerful deterrent against other state legislators considering AI safety requirements. Conversely, if Bores prevails despite the super PAC’s intervention, it would demonstrate that even well-funded industry opposition cannot override constituent concerns about AI risks. The outcome will likely influence whether other states follow New York’s lead in pursuing comprehensive AI regulation or retreat in fear of similar political retaliation.

The Debate: Innovation vs. Regulation

The RAISE Act has become a lightning rod in the broader debate over AI regulation, pitting innovation advocates against those prioritizing safety and accountability. Leading the Future, the super PAC targeting Assembly member Alex Bores, frames the legislation as “ideological and politically motivated” [1], arguing it would “handcuff not only New York’s, but the entire country’s ability to lead on AI jobs and innovation.” They contend that state-level regulations create a problematic patchwork that undermines American competitiveness and could cede AI leadership to China. This perspective reflects a common industry position favoring a single national framework over what they see as fragmented state interventions.

Proponents counter that the RAISE Act represents a balanced approach that promotes innovation while ensuring safety and accountability. The legislation’s requirements for safety plans and incident disclosure are framed as basic guardrails rather than innovation-stifling burdens. Bores himself emphasizes that “having basic rules of the road… is actually a very pro-innovation stance if done well,” suggesting that trustworthy AI will ultimately win in the marketplace.

The debate extends beyond New York, touching on fundamental questions about regulatory philosophy. State-level AI regulation can serve as a testing ground for effective policies, which could inform federal legislation – an approach Bores describes as states functioning like “policy laboratories.” This experimentation model allows for iterative refinement before scaling solutions nationally. Meanwhile, industry leaders continue pushing for federal preemption of state laws through provisions like those Senator Ted Cruz has sought to resurrect, setting the stage for continued tension between competing visions of how best to govern transformative technology.

Consequences and Risks: The Broader Implications

The consequences of the super PAC’s intervention extend far beyond a single congressional race, posing systemic risks across multiple domains. Politically, the multibillion-dollar effort to sink Bores’ campaign could significantly impact his chances of winning, potentially silencing a voice advocating for balanced AI governance, while the push for a national regulatory framework – however logical it may seem – could undermine state-level initiatives and delay necessary protections, creating a regulatory vacuum at a critical moment. Socially, public concern over AI’s impact on jobs, mental health, and climate change may intensify if federal action remains slow, eroding public trust in both technology and governance. Environmentally, the unchecked proliferation of data centers and escalating energy consumption could exacerbate climate change while driving up utility costs for communities. With Leading the Future’s leadership telling Politico that they would work on a multibillion-dollar effort to sink Bores’ campaign [1], these interconnected risks highlight how corporate influence in political processes could shape AI’s trajectory in ways that prioritize industry interests over broader societal welfare.

The Future of AI Regulation in America

The battle over the RAISE Act and the super PAC’s intervention against Alex Bores represents a critical inflection point for AI governance in America. The arguments are starkly drawn: proponents argue that basic safety requirements and incident disclosures are essential to build public trust and prevent catastrophic harms, while opponents contend such regulation would stifle innovation and cede technological leadership to China. Looking ahead, three distinct scenarios emerge. In a positive outcome, Bores wins his congressional bid and the RAISE Act becomes law, setting a precedent for responsible AI regulation. A neutral scenario sees Bores’ campaign heavily contested, leading to compromise where some aspects of the RAISE Act are adopted at the state level. The negative outcome involves the super PAC’s efforts succeeding in defeating Bores, stalling state-level AI regulation and leaving the public vulnerable to AI risks. As Leading the Future’s leaders argued in their statement to Politico, “bills like the RAISE Act threaten American competitiveness” [1]. Ultimately, this New York contest may determine whether states can continue serving as “policy laboratories” for AI governance or if federal preemption becomes the dominant approach.

Frequently Asked Questions

What is the RAISE Act and what does it require?

The RAISE Act is bipartisan legislation that imposes significant AI safety requirements in New York. It requires large AI labs to implement comprehensive safety plans, adhere to established protocols, and disclose critical incidents, and it prohibits the release of models with unreasonable risks of harm, with civil penalties of up to $30 million for non-compliance.

Who is Alex Bores and why is he targeted by the Leading the Future Super PAC?

Alex Bores is a New York Assembly member and the chief sponsor of the bipartisan RAISE Act. He is targeted by the Leading the Future Super PAC, which is backed by Andreessen Horowitz and OpenAI President Greg Brockman, because his legislation challenges the industry’s preference for minimal regulation by enforcing AI safety measures.

What is the primary goal of the Leading the Future Super PAC?

The Leading the Future Super PAC aims to defeat Alex Bores’ congressional campaign through a multibillion-dollar expenditure, framing the RAISE Act as ideologically motivated and arguing that it threatens American competitiveness in AI by hindering innovation and creating regulatory burdens.

What are the potential consequences of the political clash over the RAISE Act?

The clash could lead to significant outcomes: if Bores wins, the RAISE Act might establish a model for state-level AI regulation; if he loses, it could deter other states from implementing similar safety requirements, potentially delaying comprehensive AI governance and influencing the debate on federal versus state approaches.
