- On Oct. 30, U.S. President Joe Biden signed an Executive Order laying out plans for governing the development and use of AI safely and responsibly.
- The Executive Order runs 111 pages, yet it is unclear what will happen if tech companies don’t comply, and overall, it seems to rely on the continued voluntary compliance of AI companies.
- A lack of clear, concise regulations often leads to consumer harm, stifles American competition, and forces regulatory bodies to handle issues on a case-by-case basis.
The discussion surrounding the safe development of artificial intelligence has been going on for years but has heated up in recent months due to the popularization of OpenAI’s ChatGPT and several other tech companies’ public forays into AI.
Most of these discussions relate to ethical concerns about using algorithms to make decisions that affect individuals’ lives, particularly in areas like housing, hiring, and criminal justice. Also in question are algorithmic bias (AI systems inheriting biases from the data they were trained on) and the issues that could arise from the widespread use of AI-generated images, or deepfakes.
For months, AI companies asked lawmakers for regulations that would establish clear strictures on the development of artificial intelligence, but until President Joe Biden’s October 30 Executive Order on the Safe, Secure, and Trustworthy Development of Artificial Intelligence, they received mostly radio silence.
The 111-page Executive Order is monolithic.
While the document is comprehensive (it covers the development of AI safety standards, the risk of AI being used to engineer dangerous biological materials, the establishment of cybersecurity programs and a National Security Memorandum, protections against deepfaked images, and privacy concerns, and it even invokes the Defense Production Act of 1950), the order fails to create a regulatory framework that businesses can actually use.
Some of this lack of clarity can be attributed to the tentative nature of Executive Orders, but the larger issue is that the United States still seems unsure of how to connect with AI companies, where its responsibilities to American citizens begin and end, and how to create a foundation that incentivizes American innovation.
We don’t have time to focus on creating the perfect framework
A lack of clear, concise regulations often leads to consumer harm, stifles competition, and forces regulatory bodies to handle problems on a case-by-case basis.
The Executive Order speaks at length about creating regulatory frameworks for various subsections of artificial intelligence, but these are all initiatives to be completed later.
The United States has dealt with the lack of a clear regulatory framework several times in recent history, and every time, American consumers have borne the brunt of the fallout.
In the early days of social media, the federal government failed to regulate data collection (meaningful privacy rules arrived only with measures like Europe’s GDPR and California’s Consumer Privacy Act in 2018), and modern consumers are still dealing with the widespread dissemination of misinformation that the government continues to try to curtail.
With cryptocurrency and its swath of scams and fraudulent ICO listings, the federal government is still pursuing most of these bad actors on a case-by-case basis rather than creating a framework that would protect investors and creators alike. Despite this lack of a well-defined system, the government has made a point of insisting that such transactions are taxable.
In both of these situations, the problem was a lack of action. Rather than trying to create the perfect regulatory framework that protects private businesses, consumers, and American security without any unforeseen issues, the U.S. government should acknowledge that this is a rapidly advancing technology that demands a malleable, always-evolving framework.
Comply or…?
Despite the countless directives and new responsibilities doled out to various agencies, the Executive Order is unclear about who will enforce these initiatives, how enforcement will work, or whether they will be enforced at all.
For instance, the Executive Order calls on the Department of Commerce to create guidance on watermarks, labeling, and metadata tools that will be used to help consumers identify whether a piece of content was developed with AI.
This is a great concept, but the Order fails to mention that this technology simply does not exist. We have no way of consistently identifying whether an image was created with AI, nor do we have a way to guarantee that every AI tool utilizes the proposed watermarks.
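To make that gap concrete, here is a minimal, hypothetical sketch in Python using the Pillow imaging library. It writes an “AI-generated” note into a PNG’s metadata and shows how an ordinary re-save drops it; the ai_provenance tag and model name are invented for illustration and do not reflect any actual Department of Commerce guidance.

```python
# A minimal sketch, assuming Pillow is installed; the "ai_provenance" tag and
# model name are hypothetical placeholders, not any agency's real labeling scheme.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src: str, dst: str) -> None:
    """Embed a provenance note as a PNG text chunk."""
    info = PngInfo()
    info.add_text("ai_provenance", "generated-by:example-model")  # hypothetical tag
    Image.open(src).save(dst, pnginfo=info)


def read_label(path: str) -> str | None:
    """Return the provenance note if the file still carries it, else None."""
    return Image.open(path).text.get("ai_provenance")


def resave(src: str, dst: str) -> None:
    """Re-save the image; PNG text chunks are typically not carried over
    unless a tool explicitly preserves them, so the label quietly vanishes."""
    Image.open(src).save(dst)
```

Labels like this survive only as long as every tool in the chain chooses to preserve them, which is exactly the guarantee the Order asks for but cannot yet deliver.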
In addition, the Order discusses both the dangers of unchecked AI finding potential exploits in matters of national security and the importance of establishing a cybersecurity program to help improve critical software, before directing the Director of the Cybersecurity and Infrastructure Security Agency (a component of the Department of Homeland Security) to take 90 days to identify “potential risks related to the use of AI in critical infrastructure.”
This is another example of direction without detail: the Order calls for risks to be identified but says nothing about what happens once they are.
Is the government exerting too much power, or is this necessary?
Before the release of this Executive Order, top AI companies met at the White House to take a voluntary pledge to create safe and secure AI and to test their technology before releasing it to the public.
Although each of the companies agreed, the Executive Order invoked the Defense Production Act and stated that tech companies building large-scale, powerful artificial intelligence would have to share their testing results with the U.S. government, as these AI models could pose a threat to national security.
As it stands, though, the Order is unclear on who would enforce this requirement or what would happen if tech companies didn’t comply, and overall, it seems to rely on the continued voluntary compliance of AI companies.
Although some AI thought leaders, like Microsoft’s Brad Smith, applaud the order, others, like the Atlantic Council’s Steven Tiell, aren’t as convinced.
While the Executive Order seems to come from a place of goodwill, it’s difficult to tell whether tech companies are pleased with the lack of real change or are simply saving face to help ensure that when real regulation comes, it isn’t debilitating.
AI regulation’s impact on the future of work
Any new regulation issued by the government can have a profound influence on businesses and future strategic planning. The impact of this particular Executive Order will be amplified further because nearly all industries have begun incorporating AI into everyday operations.
Although the stated purpose of the Executive Order is to ensure that “AI systems can earn American people’s trust and trust from people around the world,” the truth is that the order may stifle American innovation through a continued lack of clarity, overregulation, or confusing enforcement policies.
A lack of clarity isn’t as big of a hindrance for established tech companies, as they have the capital and manpower to deal with misunderstandings and court cases. For smaller companies, this isn’t the case: it makes better fiscal sense to build a business somewhere you aren’t living in fear that a future, clearer regulatory environment will eliminate your operations.
Similarly, because we still haven’t found a way to remove bias from training data, the idea that the government will have the final say on whether the high-powered AI you’re creating is spreading the “wrong” kind of data is almost guaranteed to deter individuals from trying to build foundation models in America.
Although the order is a net positive in the sense that we are at least discussing AI regulation, the United States still has a long way to go before establishing itself as the home of future artificial intelligence advancements.