
Analyzing the Trump Administration’s National Policy Framework for AI

Kevin T. Frazier

Since the first day of this administration, President Trump has made clear that America’s success in the AI space requires a uniform, national strategy. His Day One Executive Order on AI called for the removal of “existing AI policies and directives that act as barriers to American AI innovation.” The AI Action Plan further clarified that “AI is far too important to smother in bureaucracy at this early stage, whether at the state or Federal level.” In December 2025, the Executive Order on a National Policy Framework prioritized a national approach in light of increasing concerns about an emerging state patchwork while still allowing “states to continue to enforce existing, generally applicable law.” This December EO also outlined the need for a legislative framework that would ensure AI companies are “free to innovate without cumbersome regulation.” 

Today, the Trump administration released its official proposal for a single national AI framework.

Emphasizing a Light Touch Approach to Allow AI Innovation to Flourish

Unlike the most recent legislative AI proposal offered by Senator Blackburn, the framework is not a call for a heavy-handed, all-encompassing federal AI statute but rather an allocation of regulatory authority in line with the Constitution’s intended distribution of powers between the states and the federal government. AI regulation is often framed as an all-or-nothing proposition, which fuels the mistaken perception that the White House is seeking to infringe on states’ authority to exercise their police powers. Yet, AI governance is best thought of as a five-layer cake involving energy, chips, infrastructure, models, and applications. Each layer may require a different mix of federal and state engagement based on the extent to which regulation at that layer has nationwide implications. The framework intends to ensure that the federal government leads on issues related to AI development because training, fine-tuning, and deploying models “is an inherently interstate phenomenon with key foreign policy and national security implications.” 

The direction to Congress provided by the framework is to “preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones.” That said, Congress is instructed not to interfere with state laws that seek to “protect children, prevent fraud, and protect consumers,” nor “state zoning laws, including state authorities to determine the placement of AI infrastructure.”

Beyond a call for a constitutionally sound approach to AI governance, the framework sets forth several specific policy recommendations in six areas:

  • Protecting Children and Empowering Parents 
  • Safeguarding and Strengthening American Communities
  • Respecting Intellectual Property Rights and Supporting Creators
  • Preventing Censorship and Protecting Free Speech
  • Enabling Innovation and Ensuring American AI Dominance
  • Educating Americans and Developing an AI-Ready Workforce

Cato has a deep bench of scholars, from Jennifer Huddleston to David Inserra and many more in between, who have explored these AI policy domains. Expect more in-depth analysis as everyone has a chance to dive into the framework’s provisions. For now, it’s worth highlighting some key aspects of each of these domains. 

Protecting Children and Empowering Parents

Unlike state laws that effectively invite labs to surveil users and might create greater privacy risks for both children and adults, the framework urges Congress to prioritize laws that put parents in the driver’s seat of how, when, and to what ends their child uses an AI tool. This approach carries the benefit of leaving sensitive decisions about which tools are appropriate to parents, not the federal government. Each child and each family is unique. Parents and other trusted adults, not policymakers, are in the best position to help kids and teens use technology in positive ways and respond to crises.

The framework also calls for “commercially reasonable, privacy protective, age-assurance requirements.” This recommendation can raise significant privacy and speech concerns for all users, as my Cato colleague Jennifer Huddleston has discussed. As made clear by a recent letter signed by hundreds of computer scientists around the world, there’s reason to believe that “privacy protective” age verification is an oxymoron. We will remain attentive to how Congress interprets and acts on this aspect of the framework.

Another key recommendation here is that Congress “avoid setting ambiguous standards about permissible content.” While many people want AI to be “moral,” the fact is that most Americans do not share the same morals. It’s not the role of the government to dictate who is “right” on sensitive questions and to compel labs to train their models in a manner that aligns with the preferences of some Americans over others.

Safeguarding and Strengthening American Communities

Leading on AI is a whole-of-nation endeavor. This section recognizes that AI success requires bringing small towns and small businesses along for the ride, while keeping an eye out for negative consequences on everyday Americans.

There are a few recommendations worth highlighting. First, there’s a reminder that there’s no AI exception to existing law. State AGs from New Jersey to California have recognized that expansive state laws on fraud, discrimination, and consumer protection can already be used to penalize bad actors. Rather than drafting new laws, the framework instructs Congress to consider “augment[ing] existing law enforcement efforts to combat AI-enabled impersonation scams and fraud that target vulnerable populations such as seniors.”

Second, the framework rightly notes that effective AI policy will require extensive technical expertise within the government. That’s why Congress should “ensure that the appropriate agencies within the national security enterprise possess sufficient technical capacity to understand frontier AI model capabilities and any associated national security considerations and establish plans to mitigate potential concerns.”

Finally, following the lead of states like Oklahoma, the framework embraces the idea of easing regulatory burdens on behind-the-meter power generation. This will play a key role in expanding the nation’s power supply by allowing companies greater autonomy to produce their own energy. It will also allow us to better compete with China as it races ahead in developing its own AI infrastructure.

Respecting Intellectual Property Rights and Supporting Creators

This section aims to strike a balance between lawful innovation and maintaining America’s leadership in a robust cultural economy. 

On the key issue of whether training on copyrighted data qualifies as fair use, the Trump administration largely punts by asking Congress not to interfere with judicial resolution of the question. As I will explain in more detail later, I think this is a flawed approach. Access to data is essential for AI innovation and research. The largest labs have built a data moat that lets them train new models faster than entrants. Until this question is resolved, AI startups will struggle to compete. Congress, relying on the first principles set out in the IP Clause, should clarify that training is a fair use. Waiting for courts to figure this out will only make it harder for new firms to get into this key market. 

On how best to support and sustain creators in this new AI economy, the framework directs Congress to exercise a little creativity itself. It encourages Congress to study “enabling licensing frameworks or collective rights systems for rights holders to collectively negotiate compensation from AI providers, without incurring antitrust liability.”

Preventing Censorship and Protecting Free Speech

This is the shortest section, but it carries huge ramifications, particularly in the wake of the dispute between Anthropic and the Department of War. As Cato’s work and brief have noted, such actions can raise significant First Amendment concerns. There is a live conversation about how the federal government balances its particular procurement needs against the constitutionally protected ability of private companies to develop tools that align with their own values and mission. 

It will be important to keep a close eye on how Congress responds to the two recommendations spelled out below: 

“Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas.

Congress should provide an effective means for Americans to seek redress from the Federal Government for agency efforts to censor expression on AI platforms or dictate the information provided by an AI platform.”

It will be interesting to see how Congress chooses to interpret these recommendations. Given the Trump administration’s ongoing dispute with Anthropic, as well as its high-profile uses of the FCC, DHS, and FTC to influence the speech of private actors, these recommendations demonstrate an apparent hypocrisy. Cato scholars Jennifer Huddleston and David Inserra have written extensively on the worrying trend under both parties to weaponize government agencies against disfavored speech. Congress should take action to protect against government coercion of speech. But whether the Trump administration actually wants these protections to have teeth remains to be seen.

Enabling Innovation and Ensuring American AI Dominance

As noted at the outset, the Trump administration has been explicit about its desire for the US to be a global leader in AI. Pursuant to that goal, this section offers a few simple yet transformative policy recommendations. First, lean into regulatory sandboxes to foster a “try-first” mentality. This iterative and evidence-based approach to governance acknowledges that Americans will need to test and deploy AI tools to discover their risks and benefits. Second, recognizing the aforementioned need for more data, make more federal datasets available to innovators and researchers.

Educating Americans and Developing an AI-Ready Workforce

Finally, this section embraces a vision of the future in which all Americans thrive in the Age of AI. This future follows from adherence to recommendations on improving educational opportunities and retraining programs. As I testified before Congress, this is a pivotal and immediate issue. Congress must resist the temptation to replicate the paternalistic architecture of past workforce programs — the approved-provider lists, restricted vouchers, and compliance-heavy pipelines that move at the pace of bureaucracy rather than the pace of displacement. The better approach is to trust the worker. A displaced machinist in Tulsa knows her own barriers better than a program administrator in Washington ever will. That is why any AI-era workforce initiative worth its name should include direct, fast, unrestricted reemployment support — modest grants delivered within weeks, not months, with accountability tied to outcomes rather than process. Congress should also resist the instinct to define “AI readiness” narrowly. 

The goal is not to produce a nation of prompt engineers; it is to produce workers with the adaptability, foundational digital literacy, and economic runway to meet this moment on their own terms.

Conclusion

The framework is a starting point, not a final answer. Other Cato scholars and I will continue to dig into each of these domains, scrutinizing the details, flagging the tradeoffs, and identifying where Congress should push further and where it should pull back. In general, however, this approach signals that the administration remains committed to the idea that the light-touch regulatory approach that allowed America to lead in the internet age is also the best path to continued leadership in the AI era.
