The House Committee on Oversight and Accountability Subcommittee on Cybersecurity, Information Technology, and Government Innovation met on Wednesday, December 6, to discuss the White House’s policy on artificial intelligence (AI). To supporters of AI technology advancement, this is a step in the right direction. But for those concerned about the dangers of such developments, nightmares from the 1980s movies WarGames and Terminator flash before their eyes. While the committee agreed there need to be specific regulations for this progress, witnesses testified that the government itself needs more AI.
Artificial Intelligence in the Government
It’s common knowledge that the government takes forever to get things done, but is this a good enough reason to start inserting more artificial intelligence into the very offices that control our country? The witnesses testifying before the committee seemed to think so. Samuel Hammond, senior economist at the Foundation for American Innovation, had this to say:
“In every case, managing these growing throughput demands will require the federal government to not only adopt AI aggressively, but should force Congress and the executive branch to rethink the configuration of our administrative and regulatory agencies from the ground up. From broken procurement policies to the bureaucratic sclerosis engendered by slow and outdated administrative procedures, incremental reform is unlikely to suffice. We must modernize government at the firmware-level, or risk ubiquitous system failure and government becoming the primary bottleneck to AI’s enormous potential upside.”
Hammond used several examples to emphasize his belief that increasing artificial intelligence is a good thing. Google DeepMind, he explained, published an AI model that discovered 2.2 million new crystals and 380,000 new stable materials that could power future technologies – nearly 800 years’ worth of knowledge gained virtually overnight.
One of the growing fears is that AI will replace humans in the workforce. This concern was highlighted during the recent strike by writers who feared being replaced by the technology. According to Hammond, OpenAI claimed that Large Language Models (LLMs) will have an impact on the labor market, and that about 80% of the US workforce could see 10% of its tasks affected, “with jobs like Accountants, Auditors, and Legal Secretaries facing an exposure rate of 100%.” Hammond added: “Many large companies have already begun downsizing or have plans to downsize, in anticipation of the enormous efficiency gains unlocked by emerging AI tools and agents.”
This is especially true in the government, where the wheels of production churn slowly. AI tools can be used for congressional oversight, Hammond pointed out. With the implementation of more artificial intelligence, agencies could easily track staff performance, expedite reports to Congress, and enjoy near real-time monitoring of agencies’ activities. Hammond suggested such a tool could be called “General-GPT.” He explained:
“Much of the work performed in government bureaucracies is especially low-hanging fruit for AI. With just under 3 million employees in the federal workforce, Congress should demand the White House and OMB undertake an analogous survey to discover which federal jobs are most exposed to AI, and to what extent legislation is needed to expedite new, AI-enabled models of governance. The goal should not be to downsize the federal bureaucracy per se, but rather to augment employee productivity and free up human resources for higher value uses, reducing waste and enhancing capacity simultaneously.”
AI Race and China Concerns
President Joe Biden issued an executive order in October on safe, secure, and trustworthy artificial intelligence. Part of the order speaks to how America already leads in AI innovation – “more AI startups raised first-time capital in the United States last year than in the next seven countries combined.” But the race is on, and China is a big concern. The question is: How fast is too fast when it comes to developing such dangerous technology, and how much should be regulated by the government?
Dr. Daniel E. Ho of Stanford University in California thinks there needs to be less regulation, saying, “government innovation should not be trapped in red tape.” He added:
“For example, the memo’s proposal that agencies allow everyone to opt out of AI for human review does not always make sense, given the sheer variety of programs and uses of AI. The U.S. Postal Service, for example, uses AI to read handwritten zip codes on envelopes. Opting out of this system would mean hiring thousands of employees just to read digits.”
Committee member Nancy Mace (R-SC) opined: “We have to be very careful before we even think about regulating AI. We have to first figure out if our own existing laws today already apply. You can’t create bioweapons as it is today, why would AI be any different? AI obviously wouldn’t be helpful in that either.”
She mentioned how the US needs to be a world leader in the technology: “We don’t want China to catch up with us, and in order for that to happen, we have to keep innovating.” When it comes to the government, Mace said it “moves like just the slowest dinosaur,” still using mainframe computers and legacy systems. “How in the hell do we think we could make advances in AI via the government? I mean, that’s just never gonna happen.”
Dr. Ho said China wants to be the world leader in artificial intelligence by 2030, adding: “There are indeed foreign adversaries who have used AI to oppress populations.”
Committee member Gerald Connolly (D-VA) said the federal government has had some stunning successes in its own research and that without 100% funding by the government, we wouldn’t have the internet, GPS, radar, and more. “We have to be concerned about the pace because we’re in a race,” he added, “not only with the natural evolution with the technology, but with competitors who are accelerating or exploiting that pace.”
To emphasize just how rapidly artificial intelligence is growing, Connolly mentioned a recent podcast in which Elon Musk was asked when he thought AI would overtake human intelligence. “His answer was ‘within three years,’ not 30, three.”
From actual issues to conspiracy theories, the use of artificial intelligence is concerning to many. Liberty Nation has covered a few, from AI toys that collect sensitive data from children to the CBP One cellphone app that helps illegal immigrants get into the country faster. We might not yet be on the level of sending Terminators from the future or starting an actual war with a computer game, but there are certainly serious concerns – the privacy and security of citizens, for example. How do we make sure our personal information, such as bank and credit card accounts, is secure?
Dr. Rumman Chowdhury, CEO and Co-Founder of Humane Intelligence, said in her testimony to the committee:
“Artificial intelligence is not inherently neutral, trustworthy, nor beneficial. This technology is not a mystery, it is not magic, and it is not alive. While it has immense capability, like many other high-potential technologies, it can also be used for harm by both malicious and well-intentioned actors. Concerted and directed effort is needed to ensure this technology is used to support and advance human interests.”
Do the American people want the government to rely more on artificial intelligence given the dangers involved?