
A couple of weeks ago we covered the White House’s national cyber strategy, with its six pages, six pillars, and agentic-AI everything. Next up, the administration released a national policy framework for artificial intelligence (PDF): four pages of legislative recommendations it wants Congress to turn into law this year.
The framework has seven sections, covering child safety, energy and infrastructure, intellectual property, free speech, innovation, workforce, and federal preemption of state laws. The child safety and workforce sections are what you’d expect. The rest is where it gets…interesting.
Copyright and training data:
“the Administration believes that training of AI models on copyrighted material does not violate copyright laws, it acknowledges arguments to the contrary exist and therefore supports allowing the Courts to resolve this issue.”
They’re telling Congress to stay out of the copyright question and let the courts sort it out. They also want rights holders to be able to collectively negotiate with AI companies without antitrust liability, while saying legislation “should not address when or whether such licensing is required.” So… optional licensing that nobody has to use?
On state regulation:
“States should not be permitted to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications.”
“States should not be permitted to penalize AI developers for a third party’s unlawful conduct involving their models.”
The second quote is a liability shield for model developers: if someone uses an AI model to do something illegal, the company that built the model could not be held responsible under state law. Colorado, California, Utah, and Texas have already passed their own AI rules; this framework aims to override all of that, and it says Congress should not create any new federal body to regulate AI.
Instead, the White House wants existing regulators like the FTC, FDA, and FAA to handle AI applications in their own sectors, leaning on industry-led standards: regulatory sandboxes for AI applications, for example, and federal datasets opened up in “AI-ready formats” for training.
Fifty Republicans sent a letter to the administration in March 2026 calling the state-preemption push “an effort to prevent the passage of measures holding the tech industry accountable.” What’s next? This may or may not become law before the midterms, and if a tanked economy and job losses get blamed on AI, it could turn into a wedge issue.
from Adafruit Industries – Makers, hackers, artists, designers and engineers! https://ift.tt/k0QL62F