Hello again and welcome back to the Data Connections Blog! If you read my last entry, you know how much I struggled to make writing about “Artificial Intelligence (AI) Basics” interesting and relatable. However, I think it was worth it as AI is certainly an important building block in understanding data connections.
You will not be surprised to learn that this entry, on AI Regulation, posed a similar and potentially even greater challenge. I mean, who hasn’t had an in-depth conversation about regulating technology around the dinner table? (Grandma, I find your concerns about the impact of deepfakes on voter turnout to be implausible and unconvincing… please pass the mashed potatoes.) If, after reading this entry, you start having those conversations, let me know!
There are those who will debate whether there is any value in government regulation. But I think most can agree that some regulation can be a good thing when done correctly. In the US, we have rules administered by the Food and Drug Administration to help ensure the safety of our food, for example. We also have regulations from the Occupational Safety and Health Administration to help keep our workplaces safe. Prior to these regulations, life in the United States was a riskier proposition. It makes sense to regulate food and workplaces, which typically evolve at a steady and manageable pace, but can the same benefits be realized with a technology like AI?
Technology regulation is challenging, to say the least. Generally, technology development moves so fast that by the time a legislative body has addressed what appears to be a pressing issue, the technology has moved on to new and potentially bigger challenges.
Case in point: as recently as January of this year, Congress heard testimony from Facebook (Meta) founder Mark Zuckerberg on the harms of social media [1]. Since Facebook was founded in 2004, perhaps it is finally time for Congress to deal with it! Likewise, Congress continues to struggle with TikTok -- which was released in the US in 2018 [2]. This is not to say that regulating a technology like AI is unlikely or without value, but it is worth considering the type and extent of regulation that would work. So, let’s dive into that.
We’ll start with a threshold question I got from the students in my Big Data class. Though we know that AI has good and beneficial uses, should we totally ban AI because of all of the scary things that it might do in the future (e.g., its Terminator-like potential to destroy us all)? In other words, can we stuff this AI Genie back into the magic bottle [3]?
On a basic level, AI can be used to perform time-intensive (and potentially mind-numbing) tasks previously performed by people, and with little risk. For example, as I mentioned in my last Blog entry, AI can sort unwanted and bulk emails into a Junk folder, and it can sift through enormous volumes of data to find relevant documents in discovery associated with litigation. There is little controversy over the acceptability of delegating such routine tasks to AI, preferably with some human oversight.
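If you like to see ideas in code, here is a deliberately simplified sketch of that kind of routine sorting with a human-oversight step. Real junk filters rely on statistical models trained on labeled mail; this toy, rule-based version (the phrase list, thresholds, and function names are all my own illustrative inventions) only shows the routing idea.

# Toy junk-mail router: score a message against a few "spammy" phrases,
# send obvious junk to the Junk folder, and flag borderline messages
# for a person to review -- the human-oversight step.
SPAMMY_PHRASES = {"winner", "free prize", "act now", "limited offer", "unsubscribe"}

def junk_score(subject: str, body: str) -> float:
    """Fraction of known spammy phrases appearing in the message."""
    text = f"{subject} {body}".lower()
    hits = sum(1 for phrase in SPAMMY_PHRASES if phrase in text)
    return hits / len(SPAMMY_PHRASES)

def route(subject: str, body: str) -> str:
    score = junk_score(subject, body)
    if score >= 0.4:
        return "Junk"
    if score >= 0.2:
        return "Human review"
    return "Inbox"

print(route("You are a WINNER", "Act now to claim your free prize"))  # Junk
print(route("Team meeting", "Agenda attached for Thursday"))          # Inbox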
For all the legitimate concerns over the more advanced uses of AI, there are also significant potential benefits. As we know, access to professional resources like doctors can be difficult (“Um… no, the reason for my visit with Dr. Jones is not urgent… what, 3 months? Really? Can I change my answer to urgent?”) as well as costly, especially in remote geographic areas.
The promise of AI, done correctly, is to provide greater access to these types of resources to more people and at a lower cost [4]. AI may also be able to assist professionals in doing their jobs better and more effectively. While we cannot know how well AI will fill these gaps, it would be short-sighted to dismiss its promise prematurely with catch-all AI legislation that eliminates or prohibitively regulates all AI with advanced capabilities.
So, we don’t want to stuff that AI Genie back in the bottle entirely. Then how do we make it “behave”? Who makes those rules?
Given the glacial pace at which government enacts regulations, one option is to consider self-regulation of the AI industry by the AI creators or industry groups. At a recent AI conference I attended, Gary Marchant, Regents Professor at Arizona State University, discussed the importance of self-regulation in technology law. Because technology moves fast and the companies creating it want to avoid intrusive regulation, they often voluntarily agree to comply with industry standards. These standards are set by organizations like the National Institute of Standards and Technology (NIST) or the International Organization for Standardization (ISO) [5]. Standards are useful and can work up to a point, though it is notable that even the standards organizations have not yet set meaningful standards for such things as data quality. The problem is that standards generally require consensus among all the industry participants and, as we discussed in Blog entry #7, the motives of industry participants in creating standards may not always be pure or self-evident.
How then can regulations be crafted to replace or supplement self-regulation and to preserve the potential for good or even neutral uses of AI? At the same time, how can these regulations also pay close attention to uses of AI that could be controversial or even ripe for problems? What rules should be followed to help minimize risk? That’s a lot to ask!
The good news is that the concept of regulating technology based on end use or risk level is not new. It has been applied to technology in various forms. General-purpose computers, for example, are not banned or restricted even though they can be used for high-risk purposes, including operating nuclear power plants and guiding missile launch operations. However, once general-purpose computers are equipped and used for these higher-risk purposes, additional regulations typically apply [6].
Okay, but how do we translate this approach to AI? The European Union’s Artificial Intelligence Act may provide a starting point. The EU AI Act takes a risk-based approach to AI regulation using the pyramid illustrated below [7].
The AI categories are determined through self-identification by the AI’s creator. AI systems intended for use in applications such as facial recognition or social scoring (i.e., the controversial use of an AI system to evaluate an individual's trustworthiness based on behaviors or other personal characteristics) are designated as unacceptable risk and are generally prohibited, with limited exceptions [8]. AI systems intended for use in areas like border control and law enforcement are designated as high risk; they are allowed but subject to strict regulation. AI systems identified as limited risk, such as general purpose AI (GPAI) models, are generally permitted subject to transparency requirements (e.g., requiring disclosure of the algorithms and data used). Of course, minimal risk systems, like the Junk filter we discussed or an Ad blocker, are permissible [9]. These categories are not mutually exclusive, and it is possible for an AI system to be covered by more than one.
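For those who like to see structure in code, here is a toy sketch of that tiered idea. It is not the Act's actual legal test; the use-to-tier mapping, the names, and the "strictest tier governs" rule below are simply my own paraphrase of the categories described above.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "generally prohibited (limited exceptions)"
    HIGH = "allowed, subject to strict requirements"
    LIMITED = "allowed, subject to transparency requirements"
    MINIMAL = "allowed"

# Illustrative mapping from a declared intended use to a tier --
# a paraphrase of the categories above, not the Act's legal criteria.
INTENDED_USE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "facial recognition": RiskTier.UNACCEPTABLE,
    "border control": RiskTier.HIGH,
    "law enforcement": RiskTier.HIGH,
    "general purpose ai": RiskTier.LIMITED,
    "junk filter": RiskTier.MINIMAL,
    "ad blocker": RiskTier.MINIMAL,
}

def classify(intended_uses):
    """Return the strictest tier triggered by any declared use,
    since a system can fall into more than one category."""
    order = [RiskTier.UNACCEPTABLE, RiskTier.HIGH, RiskTier.LIMITED, RiskTier.MINIMAL]
    tiers = [INTENDED_USE_TIERS.get(use.lower(), RiskTier.MINIMAL) for use in intended_uses]
    return min(tiers, key=order.index)

print(classify(["junk filter"]))                            # RiskTier.MINIMAL
print(classify(["border control", "facial recognition"]))   # RiskTier.UNACCEPTABLE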
While it does have benefits, the EU AI Act is not a perfect formulation of technology legislation. It has been criticized by all sides. Some in the AI industry fear that any regulation, including the EU AI Act, will hurt innovation in the EU -- where there are already relatively few AI companies.
On the other end of the spectrum, critics of AI were looking for a full ban on the use of AI for controversial purposes like facial recognition, which they did not get in the Act [10]. One of my concerns with the EU AI Act is its heavy reliance on the AI creator to ensure the use of only “quality data” (particularly in AI systems determined to be higher risk), with no details provided beyond a statement that standards will be developed to define what this means [11].
The EU regulatory approach to AI is currently more comprehensive than that in the US. The US has a patchwork of laws on AI. In some ways, this may be worse than a complete ban on AI and could lead to confusing and inconsistent rules being enacted by the states and issued by federal government agencies.
At the federal level, the Biden Administration has used Executive Orders to shape the discussion [12]. The Biden Executive Order on AI Safeguards included: 1) requiring that developers of the most powerful AI systems share their safety test results (among other things) with the US government; 2) ordering NIST to develop consistent tools, tests, and standards for AI [13]; 3) developing ways to detect AI-generated content and to help authenticate real content; 4) addressing cybersecurity risks to AI (through tools and fixing known vulnerabilities in software); and 5) developing a national security memorandum to address and direct additional actions needed on AI and security [14].
While an executive order does not have the force of law, it guides the procurement of goods by the US government. Since the US government is among the largest purchasers of goods and services in the world, the requirements that it imposes on vendors have a significant impact on the economy and on most large suppliers [15].
Sixteen states currently have AI laws on the books in the US [16]. Most of these laws address the use of AI to profile people. Even more states are proposing AI legislation in response to concerns over identifying and avoiding deepfakes and other uses of AI in which it poses as real people. Examples include Ohio’s proposed law requiring that AI products be watermarked (so they are easily identified) as well as Tennessee’s Ensuring Likeness, Voice, and Image Security (ELVIS) Act, which I have referenced in a previous post.
The ELVIS Act creates a property right in an individual’s name, image, voice, and likeness (addressing deepfakes particularly, Grandma) and provides a cause of action for anyone whose rights are violated. Other states are looking at implementing their own versions of the ELVIS Act. The No AI Fraud Act has been proposed (but not enacted) at the federal level to establish a national framework protecting an individual’s voice and likeness against such things as deepfakes [17].
In the US we are currently left with, at best, a partial solution to concerns over AI regulation. There is no coordinated movement to generally manage the AI Genie by way of a comprehensive use/risk-based approach. Given the high level of concern over the future of AI, one would hope some bipartisan agreement could be achieved in Congress; but this has not happened to date.
So we are left with ELVIS, a Genie, and what sounds like the beginning of a bad joke (Elvis and a Genie walk into a bar [fill in your own punch line here!])… rather than a meaningful solution to growing concerns over the potential threat of AI. This is all the more reason to continue to educate ourselves on the real issues with data and AI so we can add meaningful input to the discussion of what needs to be done to tame the AI Genie instead of trying to completely bottle it up.
I hope you’ll be back next time as we continue to explore data connections!
If you have questions about your data and your legal compliance programs for data, Mortinger & Mortinger LLC can help! Contact me directly at: steve@mortingerlaw.com
Mortinger & Mortinger LLC: when experience is important and cost matters