Democrats Demand Answers on DOGE’s Use of AI


Democrats on the House Oversight Committee fired off two dozen requests on Wednesday morning pressing federal agency leaders for information about plans to install AI software across federal agencies amid the ongoing cuts to the government's workforce.

The barrage of inquiries follows recent reporting by WIRED and The Washington Post on efforts by Elon Musk's so-called Department of Government Efficiency (DOGE) to automate tasks and access sensitive data using a variety of proprietary AI tools.

"The American people entrust the federal government with sensitive personal information about their health, finances, and other biographical details on the understanding that this information will not be disclosed or improperly used without their consent," the requests state, "including through the use of an unapproved third-party AI software."

The requests, first obtained by WIRED, are signed by Gerald Connolly, a Democratic congressman from Virginia.

A central purpose of the requests is to press the agencies into demonstrating that any potential use of AI is legal and that steps are being taken to safeguard Americans' personal information. The Democrats also want to know whether any use of AI will financially benefit Musk, who founded xAI and whose company Tesla is working to pivot toward robotics and AI. The Democrats are further concerned, Connolly says, that Musk could use his access to sensitive government data for personal enrichment, leveraging that data to "supercharge" his own proprietary AI model, known as Grok.

In the requests, Connolly notes that federal agencies are "bound by multiple statutory requirements in their use of AI software," pointing chiefly to the Federal Risk and Authorization Management Program, which ensures that the government's adoption of cloud services and AI-based tools is properly assessed for security risks. He also points to the Advancing American AI Act, which requires federal agencies "to prepare and maintain an inventory of the artificial intelligence use cases of the agency" as well as to "make agency inventories available to the public."

Documents obtained by WIRED last week showed that DOGE operatives have deployed a proprietary chatbot called GSAi to roughly 1,500 federal workers at the General Services Administration (GSA). The GSA oversees federal government property and provides information technology services to many agencies.

A memo obtained by WIRED reporters shows that the software's users have been warned against feeding it any controlled unclassified information. Other agencies, including the Treasury and the Department of Health and Human Services, have considered using a chatbot, though not necessarily GSAi, according to documents viewed by WIRED.

WIRED has also reported that the United States Army is currently using software dubbed CamoGPT to scan its records systems for any references to diversity, equity, inclusion, and accessibility. An Army spokesperson confirmed the existence of the tool but declined to provide further information about how the Army plans to use it.

In the requests, Connolly writes that the federal student aid program at the Department of Education holds the personally identifiable information of more than 43 million people. "Given the opaque and frenetic pace at which DOGE seems to be operating," he writes, "I am deeply concerned that students', parents', spouses', and family members' sensitive information, along with that of all other borrowers, is being handled for vague purposes and with no safeguards against disclosure or improper use." The Washington Post has reported that DOGE began feeding sensitive federal data obtained from the Department of Education's record systems into AI software to analyze its spending.

Education Secretary Linda McMahon said on Tuesday that she was moving forward with plans to fire more than a thousand workers at the department, joining several hundred others who took "buyouts" last month. The Department of Education has lost roughly half of its staff, the first step, McMahon says, toward fully abolishing the agency.

"The use of AI to evaluate sensitive information is fraught with serious hazards beyond improper disclosure," Connolly warns, adding that "the inputs used and the parameters selected for analysis may be flawed, errors may be introduced through the design of the AI software, and staff may misinterpret the AI's recommendations."

He adds: "Without a clear purpose behind the use of AI, guardrails to ensure appropriate handling of data, and adequate oversight and transparency, the application of AI is dangerous and potentially violates federal law."
