Following on the heels of OpenAI, Google has published a policy proposal in response to the Trump administration's call for a national "AI Action Plan." The tech giant endorses "balanced" export controls as well as weak copyright restrictions on AI training, which it says would protect national security while enabling US exports and global business operations.
"The US needs to pursue an active international economic policy to advocate for American values and support AI innovation internationally," Google wrote in the document. "For too long, AI policymaking has paid disproportionate attention to the risks, often ignoring the costs that misguided regulation can have on innovation, national competitiveness, and scientific leadership, a dynamic that is beginning to shift under the new administration."
One of Google's more controversial recommendations concerns the use of IP-protected material.
Google argues that "fair use and text-and-data mining exceptions" are "critical" to AI development and AI-related scientific innovation. Like OpenAI, the company seeks to codify the right for itself and rivals to train on publicly available data, including copyrighted data, largely without restriction.
"These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders," Google wrote, "and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation."
Google, which has reportedly trained a number of models on public, copyrighted data, is battling lawsuits from data owners who accuse the company of failing to notify or compensate them before doing so. US courts have yet to decide whether the fair use doctrine effectively shields AI developers from IP litigation.
In its AI policy proposal, Google also takes issue with certain export controls imposed under the Biden administration, which it says "may undermine economic competitiveness goals" by "imposing disproportionate burdens on US cloud service providers." That contrasts with statements from Google competitors like Microsoft, which in January said it was "confident" it could "comply fully" with the rules.
Importantly, the export rules, which seek to limit the availability of advanced AI chips in disfavored countries, carve out exemptions for trusted businesses seeking large clusters of chips.
Elsewhere in its proposal, Google calls for "long-term, sustained" investments in foundational domestic R&D, pushing back against recent federal efforts to reduce spending and eliminate grant awards. The company said the government should release datasets that might be helpful for commercial AI training, and allocate funding to "early-market R&D" while ensuring that computing and models are "widely available" to scientists and institutions.
Pointing to the chaotic regulatory environment created by the US's patchwork of state AI laws, Google urged the government to pass federal legislation on AI, including a comprehensive privacy and security framework. Just over two months into 2025, the number of pending AI bills in the US has grown to 781, according to an online tracking tool.
Google cautions the US government against imposing what it sees as onerous obligations around AI systems, such as usage liability requirements. In many cases, Google argues, the developer of a model has "little to no visibility or control" over how that model is being used and thus should not bear responsibility for misuse.
Google has historically opposed laws that spell out the precautions an AI developer should take before releasing a model and the cases in which developers might be held liable for model-induced harms.
"Even in cases where a developer provides a model directly to deployers, deployers will often be best placed to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging," Google wrote.
In its proposal, Google also called disclosure requirements like those being contemplated by the EU "overly broad," and said the US government should oppose transparency rules that require "divulging trade secrets, allow competitors to duplicate products, or compromise national security by providing a roadmap to adversaries on how to circumvent protections or jailbreak models."
A growing number of countries and states have passed laws requiring AI developers to reveal more about how their systems work. California's AB 2013 mandates that companies developing AI systems publish a high-level summary of the datasets used to train them. In the EU, once the AI Act comes into full force, companies will have to supply model deployers with detailed instructions on the operation, limitations, and risks associated with the model.