    Silicon Valley spooks the AI safety advocates

    This week, prominent figures in Silicon Valley, including David Sacks, the White House AI and Crypto Czar, and Jason Kwon, Chief Strategy Officer at OpenAI, ignited a significant discussion online regarding AI safety advocacy. In their comments, they suggested that some AI safety proponents may not be as altruistic as they seem, possibly serving their own interests or those of wealthy backers.

    AI safety organizations that spoke with TechCrunch claim that these allegations represent Silicon Valley’s ongoing efforts to intimidate its critics. This is not the first instance of such tactics; in 2024, various venture capital firms spread rumors that a controversial California AI safety bill, SB 1047, could lead to imprisonment for startup founders. The Brookings Institution labeled these rumors as one of many “misrepresentations” concerning the bill, which Governor Gavin Newsom ultimately chose to veto.

    Regardless of whether Sacks and OpenAI aimed to intimidate critics, their comments have instilled fear among several AI safety advocates. Many leaders from nonprofit organizations, when contacted by TechCrunch, requested anonymity to protect their groups from potential backlash.

    This controversy highlights the escalating tension in Silicon Valley between the responsible development of AI and the push for its commercialization—a topic my colleagues Kirsten Korosec, Anthony Ha, and I delve into in this week’s Equity podcast. We also discuss California’s newly enacted AI safety law regulating chatbots, along with OpenAI’s stance on adult content within ChatGPT.

    On Tuesday, Sacks shared a post on X, claiming that Anthropic—a company actively voicing concerns about AI’s potential to cause unemployment, cyberattacks, and societal harm—was engaging in fearmongering to push laws that benefit its own agenda while inundating smaller startups with red tape. Notably, Anthropic was the only major AI lab to support California’s Senate Bill 53 (SB 53), which mandates safety reporting for large AI firms and was recently signed into law.

    Sacks’s comments were made in response to a viral essay by Anthropic co-founder Jack Clark expressing his concerns about AI technologies. Clark presented these thoughts at the Curve AI safety conference in Berkeley, where many attendees took them as a developer’s genuine worries about his own creations. Sacks, however, interpreted them differently.

    While Sacks accused Anthropic of employing a “sophisticated regulatory capture strategy,” one might argue that a truly sophisticated strategy wouldn’t involve alienating the federal government. In a follow-up on X, Sacks noted that Anthropic has continuously positioned itself as an adversary to the Trump administration.

    In another development, OpenAI’s Jason Kwon published a post on X explaining the company’s decision to issue subpoenas to AI safety nonprofits, including Encode, which advocates for responsible AI policies. Kwon stated that after Elon Musk’s lawsuit against OpenAI—raising concerns about the company’s deviation from its nonprofit roots—OpenAI found it suspicious that various organizations opposed its restructuring. Encode had filed an amicus brief supporting Musk’s lawsuit, while other nonprofits publicly criticized OpenAI’s changes.

    Kwon raised transparency issues concerning the funding of these nonprofits and whether there was any organizational coordination.

    NBC News reported that OpenAI issued expansive subpoenas to Encode and six other nonprofits that had criticized the company, requesting communications related to its two most prominent detractors, Musk and Meta CEO Mark Zuckerberg. OpenAI also sought communications from Encode pertaining to its support of SB 53.

    A senior figure in the AI safety community informed TechCrunch that a divide appears to be forming within OpenAI, between its government relations team and its research division. While the safety researchers consistently publish reports on AI risks, the policy team has actively lobbied against SB 53, opting for uniform federal regulations instead.

    Joshua Achiam, OpenAI’s head of mission alignment, expressed his thoughts on the subpoenas directed towards nonprofits in a post on X.

    “At what is possibly a risk to my whole career I will say: this doesn’t seem great,” Achiam remarked.

    Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not received a subpoena), told TechCrunch that OpenAI appears to believe its critics are part of a conspiracy orchestrated by Musk. He insisted this is not the case; much of the AI safety community is simply critical of xAI’s safety practices, or lack thereof.

    “OpenAI’s actions seem designed to silence critics, intimidate them, and deter other nonprofits from taking similar stands,” Steinhauser stated. “Sacks appears anxious that the AI safety movement is gaining traction and that people want accountability from these companies.”

    In a related comment, Sriram Krishnan, the White House’s senior policy advisor for AI and a former general partner at a16z, contributed his perspective via social media, accusing AI safety advocates of being disconnected from reality. He urged these organizations to engage with individuals who are actively utilizing, selling, and adopting AI technologies in their daily lives.

    A recent Pew study found that roughly half of Americans are more concerned than excited about AI, though the specific nature of their concerns remains unclear. Another study found that American voters worry more about job losses and deepfakes than about the catastrophic risks that are the primary focus of the AI safety movement.

    Addressing these safety concerns could risk slowing the AI industry’s rapid growth—a worry shared by many in Silicon Valley. Given that AI investments underpin a significant portion of the American economy, fears of excessive regulation are understandable.

    After years of largely unchecked AI advancement, however, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley’s efforts to push back against safety-focused organizations may be the clearest sign of their growing influence.
