Updated: Apr 12
At this point, if you have not heard of ChatGPT or other general-purpose generative "AI" tools, you have probably been living on Mars. ChatGPT has captured the interest of the world, and tools like it appear to be here to stay. ChatGPT can answer most questions, write formal business documents, pass an MBA exam, solve math problems, pass the Bar, pass the U.S. Medical Licensing Exam, and even generate okay starter code. All of these things are great. But after reading numerous articles, listening to friends extol the benefits of leveraging this wild new tool to crank out everything from PowerShell to Python, and spending some time thinking about this new wave of tools that the biggest tech giants are now battling over, it has become clear that we as a species should probably take a step back and answer some basic questions about where we want to go in the future and how tools like this should or could be used.

"Could we develop a thing?" is always the easiest technical question, because invariably the answer is yes. Human ingenuity is astounding on its own; throw an accelerator like technology into the mix, and the answer to "could" is always yes. The more important question that needs to be thought about, researched, and studied is "should we?" Michael Crichton's character Ian Malcolm had it right! Except that was only with dinosaurs on an island. We are dealing with something of far greater power.
Is there a need for humans to create, operate, and maintain a general-purpose generative artificial intelligence (GPGAI)? It is a serious question. Do we need this tool? Needing is very different from wanting. Needing a thing lands it on Maslow's hierarchy of needs, and this is where the "should" part comes into play. Once you need a thing, you immediately must consider whether you should do it. So: do we need GPGAI? If your answer is yes, I urge you to consider the implications of moving a non-human intelligence to a place on par with the need for water or food or sleep. Or take a slight step back and think about how deep your need for GPGAI goes. What are you comfortable pushing aside to use a tool like ChatGPT?

Based on what I have read and heard from friends and co-workers in the tech industry, GPGAI immediately starts at the top of Maslow's hierarchy by replacing creativity, problem-solving, and acceptance of facts! Think about that for a little bit. Seriously, I will give you a few minutes away from your phone, or from ChatGPT, to consider that. You have just replaced the top tier of the human triangle of needs with a tool! Now let's go deeper down the triangle, to Esteem. Why use a GPGAI tool? I would argue that some would use the tool to gain self-esteem, achievement, and possibly the respect of others. Meaning: if you can get your work done faster and more consistently with a single tool, you may gain the respect of others who are not aware you are using a tool to do all of this great work. So, in a few minutes of critical thinking, we have taken a new tool whose inner workings hardly anyone actually understands, and it has made its way into the second tier of human need! That is a bit scary. But let's really think about it and go deeper. If more GPGAI tools are adopted, we quickly arrive at the level of Safety! Safety is only one level away from food and water at the bottom of the pyramid of human needs.
Why Safety? Well, most people I talk to are using ChatGPT for work or school. With work comes employment, which allows you to obtain resources, health (insurance), and property. If you have a need at the level of Safety, you have created a dependency for your safety and security in life, and with that dependency come some very serious implications. Now, of course, other tools throughout human history have worked their way down Maslow's hierarchy, but they all had one major thing in common: they were self-limiting, because each tool required raw earth materials, then refinement, and finally production. This self-limiting aspect of all human technology, up until roughly the 1970s with the advent of the personal computer coupled with massive increases to the global supply chain, allowed societies to keep relative pace with the technologies being used.

The issue with GPGAI tools is that they are not self-limiting. The problem is speed. Society has barely had time to adapt to virtualization and all of the increases in productivity and speed that came from shrinking hardware resources into a large-scale and widely adopted virtualization layer. Then, before we even knew what we had, and before we could develop standards, we virtualized the virtualization with the advent of Kubernetes. If you thought virtualization was tough, just imagine for a moment shrinking all datacenter operations into a single layer, then stacking some virtualization on top of that layer, and then adding some logic that allows it to operate things on its own. And now we have GPGAI tools that can get a C+ in all fields and pass most exams! All of this happened in 23 years.
I had to throw in this GIF of Ryan Reynolds looking confused, because I just realized I said a lot in a short period. It also works really well as he looks up at everything I wrote. Hilariously enough, it is also the look I get when explaining Kubernetes to people. Thanks, Ryan, for working on so many levels, as always. Let's take a breath and continue. :)
What Else Do We Need to Consider and Be Concerned About?
Look, there are dozens of things to consider when thinking about GPGAI strategically as a species, and honestly a blog is not the appropriate forum to discuss all of these topics at length. That said, I wanted to quickly touch on a couple of additional points that were on my mind before I check out for the day.
Here is a fun thought experiment: what is consciousness? If you know what it is, can strictly define it, and can prove it, I advise you to write up that paper or book ASAP. You will win the Nobel Prize. Look, the objective reality is that humans do NOT know what consciousness is or how it works. We think we know some of its symptoms, but even one of the greatest philosophical thinkers in human history, Descartes, arrived at his best argument for consciousness, "I think, therefore I am," in Discourse on the Method (1637).
That is the pinnacle of the definitions of consciousness that humans, in all of our infinite knowledge and intelligence, have arrived at! That is pretty silly. I do not mean to demean Descartes; my intention is to show that we have no idea what consciousness is. With all of that said, let's now think critically about GPGAI and whether it is conscious. Again, I recognize many will read that and immediately write me off, and that is okay. This is, however, a legitimate question that needs to be thought about. We designed one (or many) machines that mimic human consciousness, and we boldly and arrogantly claim that they are not conscious. If we cannot determine what consciousness is, how can we claim ChatGPT is not conscious? Again, I fully recognize that someone could read only that statement and claim I am a crazy person, but that argument is silly. Think past the immediate benefits of ChatGPT and GPGAI tools, and leave your human arrogance at the door. Then actually think about what we have created, without any bias. You will quickly understand what I am talking about.
Tactics + Strategy Please!
I have read a number of incredibly short-sighted arguments about how these GPGAI tools can replace white-collar workers and, interestingly and conversely, about how humans will always be needed?!?! Both of these arguments sacrifice our future for short-term gains and a feeling of safety. On the one hand, we have high-powered CEOs making statements like "GPGAI tools will and should replace white-collar workers."
Okay, interestingly, the person who made that statement may not have realized that they are, in fact, a white-collar worker. And this same person presupposes that his or her white-collar job can never be replaced because it is so important and far too complicated to be replicated. Both of these arguments are false if you take a few minutes and apply some very simplistic critical thinking to the logical conclusions of our current rate of technological progress, given the fact that there are NO enforced controls, structure, or ethics. Without controls, structure, and ethics, we have a runaway train of artificial intelligence that will continually improve and advance, dropped into a socioeconomic system that is constantly seeking ways to increase profit margins. This is the equivalent of dumping an ocean of gasoline onto a raging fire.

And again, if you are interested in arguing this point, think back to my earlier point about how all human technology prior to approximately 1970 was self-limiting! Older tech was self-limited because it required physical work to mine material, process that material, and put it into production. GPGAI does not require any of that. With physical constraints removed, combined with a system designed to always look for ways to increase profit margins, we arrive at the only logical conclusion: humans will eventually not be necessary, as everything we can do will be performed by an artificial intelligence. Look, I fully recognize that last statement sounds like a conspiracy theory. It sounds crazy, but it is rooted in simple logic. Given a thing without physical constraints that can train itself to continually improve, you rapidly arrive at something we do not understand and an intelligence we cannot control. Just saying: maybe we temper the general excitement surrounding GPGAI with this simple, cold logic.
The End is the Beginning is the End
I completely understand that some will conclude from this blog that I am arguing against the development of GPGAI, AI, and possibly ML tools. I want to be very clear: I am NOT arguing against them. Mentat develops its own AI/ML tools. However, they are fit for use and fit for purpose. They are not intended to answer all questions, solve all problems, or replace people. This is my broader point: GPGAI and tools like it should be carefully considered, not rapidly implemented and adopted. My argument is really that technology in general has reached sufficient maturity to warrant an independent technical body that sets standards, not guidelines; that licenses, not certifies; and that defines and enforces ethics broadly across the industry. A body of independent technologists, similar to the American Bar Association, the American Medical Association, or the National Council of Architectural Registration Boards. I know, I know, that will slow things down! Yes, it will. And I think that is a good thing, you know, so humans can continue to be the dominant intelligence on Earth. Just saying.
Check out John Oliver's take on AI. He raises a lot of excellent points and will make you laugh while doing it.
Steve and Stuart must have read our blog post...