Congressional Oversight According to Luddites

The Fermi Paradox goes like this: if intelligent life other than our own exists out in the universe, why hasn’t it contacted us? Among the countless explanations, one stands out as particularly applicable to our current situation on Earth: intelligent life is bound to destroy itself. The technological advancements of any given organism eventually outpace its ability to regulate them, and those advancements spiral so far out of control that they erase any trace of the organism.

While the word “regulate” could mean many different things in this explanation, let’s take the literal definition: a government passing laws that control conduct within a certain field. In this case, technology. Despite countless flashy hearings in which legislators “grill” tech executives on the conduct of their corporations, there is an alarming regulatory gap when it comes to technology developed years ago, including 8-10-year-old social media platforms. Senator Blumenthal’s “will you commit to ending Finsta?” question comes to mind as an especially cringeworthy reminder of how our country’s leaders are still struggling to regulate platforms as established as Facebook and Instagram.

Perhaps raising alarm bells over this regulatory gap is unnecessary. Is it possible that this pattern of regulation (or lack thereof) is normal when a major new technology comes on the scene? Let’s take a brief look back at some major technological advancements in U.S. history. The U.S. government introduced robust regulations on television in the Communications Act of 1934. Approximately fourteen years later, when one of the first “major” television shows, Texaco Star Theater, began airing on NBC, less than 2% of households in the United States had a television. The United States passed its first radio regulations in 1912, focused on the radios used on ships; for reference, in 1913 there were only 322 licensed amateur radio operators in the entire country. Now compare this to social media: approximately 70% of U.S. adults report using Facebook, yet regulation of the company remains relatively barren compared to other modes of communication. It is clear that the lack of regulation on technologies released in the last 10-15 years is a historical anomaly, and a new frontier for our country.

The lack of regulation of social media companies is old news, though. We’ve already watched the consequences of this inaction damage an entire generation of youth and, according to many experts, contribute significantly to the youth mental health crisis. Even though the full consequences of social media on this generation are not yet understood, a new technological development has already taken center stage. Tumblr reached 1 million users in 27 months. Instagram reached 1 million accounts in 2.5 months. ChatGPT reached 1 million users in 5 days. On November 6th, less than a year after its launch, OpenAI executives announced that the chatbot had reached 100 million users.

Although AI had been a topic of public discussion before, it had never been available to the public in such an open and powerful way. New applications for the technology exploded overnight, and without any warning: 30% of college students reported using ChatGPT for homework in the 2022-2023 academic year, and countless Fortune 500 companies have already integrated ChatGPT-powered services into their operations.

So, what does the U.S. government have on the books in terms of AI regulation? A single executive order requiring AI companies to report on risks such as deepfakes, considered the “most ambitious regulation of AI yet.” Nothing about research into the potential risks that further AI development could bring. Nothing about how it could be integrated into advertising, how it could be fed people’s personal information, or how it could be connected to the open internet. Not a single word establishing guidelines for development, or creating the kind of oversight that nearly every other technology sector in the United States is subject to. As the AI revolution has come into full swing, legislators on the congressional Committee on Science, Space, and Technology have held numerous hearings questioning why the U.S. government has adopted new targets for mitigating carbon in the atmosphere. It seems that any meaningful conversation about these new developments is, much like any relevant conversation in Congress, MIA.

It’s hard to tell whether this lack of regulation stems from a lack of concern or from a lack of the knowledge and ability to draft relevant policy. Either way, it is shocking how far behind our Congress has fallen in the race to regulate technologies like AI. The EU has passed significant regulation making AI development more transparent and restricting how AI can be used by governments and in certain high-risk situations. Although the United States’ legislative culture is often different from Europe’s, growing feedback from experts suggests that a lack of regulation of artificial intelligence could be an existential hazard. A letter signed by hundreds of AI experts called for a pause on the development of advanced AI systems, claiming that they could pose a threat to humanity. Then, of course, there is the notorious 2015 letter, signed by Stephen Hawking, which claims that artificial intelligence, if overdeveloped, could potentially destroy humanity.

When it comes to climate change, the vast majority of scientists agree that further warming of our planet will have disastrous effects for humankind, including increased famines, storms, and mass extinction events. All-out nuclear war has been modeled in similar detail, and we know that soot released into the atmosphere by nuclear blasts would cause global temperatures to drop on a scale similar to the last ice age, making the event globally catastrophic. The difference between those two events and the future of artificial intelligence is that the consequences of further development are completely unknown. Although experts both for and against continued development claim to know what will happen as these technologies advance, it is simply impossible to have a definitive grasp on what building ever more human-like neural networks and algorithms will mean for our world.

An impending apocalypse brought on by a runaway, all-powerful AI might seem pretty low on Congress’s list of priorities right now. Perhaps the hypothesis that AI could become sentient and disobey humans is utterly ridiculous. Although we don’t know whether these threats are credible, we do know one thing for sure: technology has outpaced our understanding of it, and our ability to regulate it. Our inability to predict what happens next is the best possible reason for increased regulation, and the most terrifying fact of all.