We are back with another Data Connections blog! My last blog post explored how data is connected in what I refer to as the four-legged data stool. If you have not already read it, you may want to start here before reading this entry. In this entry, we’ll take a look at what seems to be a rampant fear of AI and try to better understand where it comes from and how much of that fear is real.
A panel at a data seminar I attended recently discussed how much fear has been generated by Artificial Intelligence (AI), for any number of reasons. That got me thinking about AI and why it seems to cause these fears. So, I’ll ask: does AI frighten you?
I directed this question to my class of mostly third-year law students, all of whom are tech savvy and were raised in a world where cell phones were commonplace for as long as they can remember.
It surprised me when well over half the class acknowledged that they are afraid of AI.
As I asked follow-up questions, I learned that their concerns about AI were based on the information - or lack of information - about AI generated by tech leaders and the press. Let’s explore this a bit.
On the surface, it would not seem rational to be afraid of programming instructions that are trained by data. Most of the AI systems that we are familiar with, including ChatGPT, are large language models (LLMs). A Large Language Model, as defined by Wikipedia, is:
"… a language model notable for its ability to achieve general-purpose language generation and other natural language processing tasks such as classification. LLMs acquire these abilities by learning statistical relationships from text documents during a computationally intensive self-supervised and semi-supervised training process. LLMs can be used for text generation, a form of generative AI, by taking an input text and repeatedly predicting the next token or word."[2]
At least in the near future, it does not seem likely that an AI system, even a generative AI system trained to learn, could simply jump from answering questions and creative endeavors like writing novels to something like dominating the human race.
Over the years my clients have included AI experts, programmers, and data scientists. I asked each about their level of concern over AI pulling off an attack on the human race (like in the classic Terminator movies). After an obligatory polite chuckle, everyone I asked said that while it is not theoretically impossible, it is not likely to occur in any timeframe they could imagine.[3]
We have seen LLMs fail at far less challenging functions that were not within the design of, or included in the training data for, a particular LLM. While AI/LLMs can perform masterfully in some contexts, they cannot transcend the limits of their design or programming.
For example, Amazon’s Rekognition AI, and other similar systems, have struggled and even failed at identifying race and gender in the context of employment and law enforcement. This is due to a lack of diversity in the data used to train the AI… effectively imposing a ceiling on the AI’s development and use.[4]
Likewise, while IBM’s Watson computer dominated humans in playing the game show Jeopardy! in 2011, IBM was not able to parlay this success into unrelated applications such as treating cancer or addressing climate change.[5]
The assumption that ChatGPT will dominate our world simply because it performs some tasks for which it has been trained impressively well (e.g., writing a compelling story emulating the style of Stephen King) is far from inevitable. This is at least true for any time frame likely to impact most of us. If this is the case, though, why are people with some pretty impressive credentials seemingly so worried about AI?
In March of 2023, The Future of Life Institute published a call to action: “Pause Giant AI Experiments: An Open Letter.” This letter called for a six-month pause in the training of all AI systems more powerful than GPT-4 for fear of their ultimate risk to humanity. During this pause, the request was that labs and AI experts work to “jointly develop and implement a set of shared safety protocols” for advanced AI design and development. This letter, as of my last check, was signed by 33,708 people including Elon Musk, Steve Wozniak (co-founder of Apple), former Presidential candidate Andrew Yang, and many others in education and the high-tech industry.[6]
Later in 2023, some of these same leaders attended a private forum in Washington, DC hosted by U.S. Senate Majority Leader Chuck Schumer, where they endorsed the idea that government should have a role in the oversight and regulation of AI. Interestingly, some of those in attendance, like Elon Musk, “…voiced dire concerns evoking popular science fiction about the possibility of humanity losing control to advanced AI systems if the right safeguards are not in place.” [7]
Am I just wrong to downplay the imminent existential risk to humans in the development of AI? Certainly, these luminaries in the AI and Tech world know better, right?
One potentially reasonable explanation is that it is in the self-interest of certain tech companies to make AI seem powerful, awe-inspiring, and even a bit dangerous. Through this lens, the image of AI becomes so compelling that AI systems seem as transformational to human development as the discovery of fire, electricity, or nuclear fusion, which likely compels the attention of big investors. Additionally, it provides a convenient excuse for AI companies, who can later, in the face of some possibly tragic event, argue that they had literally begged regulators for guardrails… which never appeared (or appeared only in some limited form).[8]
One other rationale may be the most compelling of all. Just like a magician who focuses your attention on her right hand while the real work happens behind the scenes with her left, the focus on the existential threat of AI may well be an intentional distraction.
So long as we are all focused on the seemingly intractable issue of protecting humanity from a takeover by AI in the (distant?) future, we might forget to take the developers of AI to task for repeated failures to deal with more immediately pressing issues that actually can be addressed. We know that the failure to obtain adequate and representative data has made it virtually impossible to use AI equitably for important tasks like mortgage assessments; criminal sentencing recommendations; facial recognition/photo identification; and medical diagnosis. Additionally, the use of AI to create synthetic images and dialogue allows for deep fakes such as pornographic images of Taylor Swift, the creation of royalty-free “sound alike” songs using the synthetic voices of popular artists, and, nearer to home for many, the creation of harassing images of minors by their peers.[9]
None of these real and present AI threats are in the distant future. We are seeing them now. Moreover, resolution of these issues is not beyond the reach of current technology. In an upcoming blog I will discuss some relatively straightforward steps that can be taken to address some of these issues.
There may be legitimate reasons to be worried about or even scared of AI. As I discussed in Blog entry #4, AI can lead to personally harmful results when bad or historically biased data is used, without context, to make important and impactful decisions. However, the more we are distracted by a generalized fear of a distant Terminator apocalypse, the more likely these more immediate issues will go unresolved.
Is there any legitimate reason to be concerned about, and even afraid of, AI? Yes, but not based on some distant apocalyptic threat. Rather, you should be concerned because you know it could and should be better.
Now that we have an idea of why data is important, how it can be used for good (or not), where data comes from, how data is connected, and how we should look at “scary” AI, we are ready to dive even deeper into the four legs of the data stool.
I hope you’ll be back next time as we continue to explore data connections!
_____________________________________________
[1] This image was generated by Microsoft’s AI Image Generator in response to the prompt “Image of scary Artificial Intelligence”. Any resemblance to the Terminator or the Borg from Star Trek is unintentional.
[2] https://en.wikipedia.org/wiki/Large_language_model
[3] Though I did have general understanding with my Research clients that if someone named “Sarah Connor” ever appeared at the front desk, they were NOT to let her in the building (https://en.wikipedia.org/wiki/Terminator:_The_Sarah_Connor_Chronicles).
[4] https://research.aimultiple.com/ai-fail/
[5] https://www.nytimes.com/2021/07/16/technology/what-happened-ibm-watson.html
[6] https://futureoflife.org/open-letter/pause-giant-ai-experiments/
[8] AI Doomerism is a Decoy: https://www.theatlantic.com/technology/archive/2023/06/ai-regulation-sam-altman-bill-gates/674278/
[9] For a start at addressing these issues see: First-of-Its-Kind AI Law Addresses Deep Fakes and Voice Clones @ https://www.lexology.com/r/U9ZZGRC/9f59e7e108
If you have questions about your data and your legal compliance programs for data, Mortinger & Mortinger LLC can help! Contact me directly at: steve@mortingerlaw.com
Mortinger & Mortinger LLC when experience is important and cost matters