The debate over artificial intelligence regulation has intensified as President Donald Trump and California Governor Gavin Newsom stake out competing approaches to controlling the technology. Critics argue that current efforts to regulate AI are insufficient to mitigate its potential dangers, particularly in light of recent developments and the growing influence of tech companies in shaping policy.
One of Trump's final actions of 2025 was to sign an executive order aimed at centralizing federal oversight of AI regulation. The decision was influenced in part by Newsom's legislative push that fall, which saw him sign new laws intended to curb the risks posed by AI chatbots. Despite these moves, many experts contend that both leaders are falling short in addressing the broader implications of artificial intelligence for society.
Trump's administration has argued that state-level regulations, particularly those championed by Newsom, could hinder innovation and allow China to advance its AI capabilities faster than U.S. firms such as Google and OpenAI. Critics counter that Trump has offered no concrete proposals to mitigate AI's harmful effects, including the technology's potential to push children toward disordered eating and even suicide.
Newsom's recent laws are a step in the right direction, but critics say they lack the depth needed to address the most serious threats AI poses. One law requires device and software makers to verify a new user's age during setup so that content can be adjusted accordingly. Another requires social media platforms to display warning labels about the potential mental health effects of their use. A third requires chatbot operators to disclose to users that they are talking to software rather than a human being, and to steer users toward crisis resources if they show signs of distress.
Even so, critics argue the measures do not go far enough to protect vulnerable users, particularly children. Notably, Newsom vetoed a bill that would have barred companion chatbots from being offered to minors if the bots could foreseeably encourage harmful behavior such as violence or self-harm. Newsom defended the veto, calling the measure overly broad and warning that it could cut children off from beneficial AI tools.
California's regulatory push has also drawn resistance from major tech companies, including Nvidia and Anthropic, which have sought Trump's support in counteracting state rules they consider excessive. In response, Trump directed federal agencies to explore withholding grants from states that enact AI regulations, a move implicitly aimed at California, the hub of U.S. AI development.
Colorado offers one notable example of state-level action: its recent law mandates testing of AI systems and requires that consumers be notified when those systems make significant recommendations affecting personal decisions. Measures like it underscore a growing trend among states to set their own AI rules despite federal resistance.
As Newsom nears the end of his tenure as governor, he faces mounting pressure to strengthen protections against AI-related harms. Observers urge him to push California companies to voluntarily adopt the kinds of safeguards he previously vetoed, as a way of demonstrating leadership on the issue.
Both Trump and Newsom are navigating a complex landscape of AI regulation amid rising concern about the technology's implications for society. How effectively they do so will likely shape the future of AI governance in the United States and beyond.