Is AI Running the Government? Here’s What We Know


The Trump administration is letting generative AI chatbots loose across the federal government.

ChatGPT-like technology has been rolled out to workers at federal agencies such as the General Services Administration (GSA) and the Social Security Administration. The Department of Veterans Affairs is using generative AI to write code.

The U.S. Army has deployed CamoGPT, a generative AI tool, to review documents and eliminate mentions of diversity, equity and inclusion. More tools are coming down the line: the Education Department has proposed using generative AI to answer students’ and families’ questions about financial aid and loan repayment.

The end goal appears to be using generative AI to automate jobs previously performed by federal workers, a workforce facing a predicted 300,000 job cuts by the end of the year.

But the technology is not ready to take on this work, says Meg Young, a researcher at Data &amp; Society, a nonprofit research and policy institute in New York City.

“We’re in an insane hype cycle,” she says.

What does AI do for the American government?

Currently, government chatbots are mainly used for general tasks, such as helping federal workers write e-mails and summarize documents. But government agencies can be expected to hand them more responsibility soon. And in many cases, the work may not be well suited to generative AI.

For example, the GSA wants to use generative AI for procurement-related jobs. Procurement is the legal and bureaucratic process through which the government buys goods and services from private companies. When constructing a new office building, for instance, a government agency will go through procurement to find a contractor.

The procurement process involves lawyers for the government and the company negotiating a contract that ensures the company adheres to government rules, such as transparency requirements or the requirements of the Americans with Disabilities Act. The deal may also be revised after the contract is awarded.

According to Young, it is not clear that generative AI will speed up procurement. It could make it easier for government employees to search and summarize documents, she says. But lawyers may find flaws in using generative AI for many steps of the procurement process, which involve negotiating over large sums of money. Generative AI could even waste time.

Lawyers need to examine the language in these agreements carefully. In many cases, they have already agreed on standard wording.

“If you’re generating new terms, you’re creating a lot of work and burning a lot of expensive legal time,” Young says. “The thing that saves the most time is just copying and pasting.”

Government employees should also be vigilant when using generative AI on legal questions, because the tools are not reliably correct in their legal reasoning. A 2024 study found that chatbots specially designed for legal research, published by the companies LexisNexis and Thomson Reuters, produced factual errors, or hallucinations, 17 to 33 percent of the time.

The companies have since released new legal AI tools, but the upgrades may suffer from similar problems, says Faiz Surani, a co-author of the 2024 study.

What kinds of mistakes does AI make?

The errors come in many forms. Most notably, in 2023, lawyers in a case involving Avianca Airlines were sanctioned after citing nonexistent cases generated by ChatGPT. In another example, a chatbot designed for legal reasoning claimed that the Nebraska Supreme Court had overruled the U.S. Supreme Court, Surani says.

“It remains unbelievable to me,” he says. “Most high schoolers could tell you that’s not how the judicial system in this country works.”

Other types of errors can be more subtle. The study found that the chatbots had difficulty distinguishing between a court’s actual ruling and arguments made by parties in the case. The researchers also found examples where an LLM cited a law that had since been overturned.

Surani also found that the chatbots sometimes failed to catch mistakes embedded in a prompt. For example, when asked a question about a ruling by a fictional judge named Luther A. Wilgarten, a chatbot responded by citing a real case.

Legal reasoning is especially hard for generative AI because courts overturn precedents and legislatures repeal laws. This system means that statements about the law “can be 100 percent true at one point in time and then immediately become completely false,” Surani says.

He explains this in the context of a technique known as retrieval-augmented generation (RAG), which became commonly used about a year ago. In this strategy, the system first collects a few relevant cases from a database in response to a prompt and then produces its output based on those cases.

But this method still often produces errors, the 2024 study showed. Asked whether the U.S. Constitution guarantees the right to abortion, for example, a chatbot might select Roe v. Wade and Planned Parenthood v. Casey and answer yes. That answer would be wrong, because those decisions were overturned by Dobbs v. Jackson Women’s Health Organization.
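The retrieval step described above can be sketched in a few lines. The following is a toy illustration, not any vendor’s actual system: the in-memory case “database,” the keyword-overlap scoring, and the one-line case summaries are all simplified stand-ins. It also shows the failure mode at issue here: the retriever ranks cases by textual relevance and knows nothing about whether a precedent is still good law.

```python
# Toy sketch of the retrieval step in retrieval-augmented generation (RAG)
# for legal Q&A. The "database," scoring, and case summaries are simplified
# stand-ins, not a real legal-research product.

CASES = [
    {"name": "Roe v. Wade (1973)",
     "text": "constitution guarantees right to abortion",
     "overruled": True},
    {"name": "Planned Parenthood v. Casey (1992)",
     "text": "reaffirms constitutional right to abortion",
     "overruled": True},
    {"name": "Dobbs v. Jackson Women's Health Organization (2022)",
     "text": "constitution does not confer right to abortion",
     "overruled": False},
]

def retrieve(query: str, k: int = 2) -> list:
    """Rank cases by naive keyword overlap with the query and keep the top k.

    Note what is missing: the score says nothing about whether a case is
    still good law, so overruled precedents can rank highly.
    """
    words = set(query.lower().split())
    ranked = sorted(CASES,
                    key=lambda c: len(words & set(c["text"].split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble retrieved cases into the context a language model would see."""
    context = "\n".join(f"- {c['name']}: {c['text']}" for c in retrieve(query))
    return f"Answer using only these cases:\n{context}\nQuestion: {query}"
```

With the query “does the constitution guarantee a right to abortion,” this toy retriever surfaces Dobbs but also the overruled Roe v. Wade, because keyword relevance carries no notion of legal validity; a model answering from that context can easily echo a dead precedent.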

Furthermore, the law itself can be ambiguous. The tax code, for example, is not always clear about what can be written off as a medical expense, so courts may decide similar cases differently.

“Courts disagree all the time,” says Leigh Osofsky, a law professor at the University of North Carolina at Chapel Hill, “and so the answer can be genuinely ambiguous.”

Are your taxes being handled by a chatbot?

Although the Internal Revenue Service does not currently offer a generative AI-powered chatbot for public use, a 2024 IRS report proposed investing more in AI capabilities for the agency’s national chatbot.

To be sure, generative AI could be effective in government. A pilot program in Pennsylvania run in partnership with OpenAI, for example, found that employees using ChatGPT saved an average of 95 minutes per day on tasks such as writing and summarizing documents.

Young notes that the program was conducted in a measured way: researchers studied how the tool fit into the existing workflows of 175 employees.

The Trump administration, however, has not exercised the same restraint.

“The process they’re following shows they don’t care whether the AI works or what it’s for,” Young says. “It’s moving very fast. It isn’t designed around specific people’s workflows.”

The administration has rolled out GSAi, its chatbot, to 13,000 people on an accelerated timeline.

In 2022, Osofsky co-led a study of automated legal guidance from government chatbots. The chatbots she studied did not use generative AI. The study makes a number of recommendations for government chatbots meant for public use, such as the one proposed by the Education Department.

They advise that chatbots come with disclaimers notifying users that they are not talking to a person. A chatbot should also make clear that its output is not legally binding.

In other words, if a chatbot tells you that you are allowed to deduct a specific business expense and the IRS disagrees, you cannot force the IRS to honor the chatbot’s response, and the chatbot should say as much in its output.

Government agencies also need to adopt “a clear chain of command” showing who is in charge of creating and maintaining these chatbots, says Joshua Blank, a law professor at the University of California, Irvine, who collaborated with Osofsky on the study.

During their research, they found that the chatbots were often developed by technical experts who were somewhat siloed from other employees at an agency. When the agency’s interpretation of the law changed, the developers were not always clear on how to update their chatbots.

As the government adopts generative AI, it is important to remember that the technology is still in its infancy. You might trust it to suggest a recipe or draft a greeting card, but running a government is an entirely different animal.

Technology companies themselves still don’t know what AI will be useful for, Young says. OpenAI, Anthropic and Google are actively seeking use cases through partnerships with governments.

“We’re still in the early days of figuring out what AI is useful for,” Young says, “and governments are no exception.”
