The secret is out! The cat is out of the bag, the beans have been spilled, Elvis has left the building! Artificial Intelligence (AI) has been unleashed on humanity, and the pace of AI innovation now exceeds society's ability to adapt to the change. One of the main reasons AI is outpacing our ability to adapt is that AI is already training other, better, future AI. But the problem is deeper and manifold. Not only is AI training future AI, AI is also being trained to improve all of the hardware and software it runs on. This has never happened in the course of human history, and that is a big problem. A simple analogy encapsulates the problem. Imagine Michael Jordan could immediately replicate himself and train that replica to be a significantly better version of himself in every aspect of basketball! Now imagine he could repeat this process, with each iteration dramatically better than the last. What would be the outcome? Simple: an unstoppable team that keeps getting better. In short, both AI and our theoretical basketball player have created compounding momentum.
What is particularly interesting, and simultaneously terrifying, is that the mass (our measure of AI inertia) in this momentum equation has zero net negative force applied to it! Humans are applying only a net positive force to grow, expand, and improve AI. The speed in our equation (the magnitude of change in AI's position over time) is simple to quantify: the number and cadence of releases of better general-purpose AI systems like ChatGPT, Gemini, Llama 2, and many others. Combine that speed with a direction of motion, which can be defined as the business imperative to produce better and better AI, and we arrive at our velocity. All of this boils down to the root of the problem → p = mv.
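The analogy can be made concrete with a toy calculation. Every number below is invented purely for illustration (none are real measurements of AI systems): if both the "mass" and the "velocity" grow by some factor each generation, with no opposing force, the resulting momentum compounds.

```python
# Toy illustration of the p = m * v analogy for compounding AI momentum.
# All growth factors and starting values are hypothetical, chosen only to
# show the compounding effect -- they measure nothing real.

def momentum(mass: float, velocity: float) -> float:
    """Classical momentum: p = m * v."""
    return mass * velocity

mass, velocity = 1.0, 1.0  # arbitrary starting "inertia" and "speed"
for generation in range(1, 6):
    mass *= 1.5       # hypothetical growth in deployed AI "inertia"
    velocity *= 1.5   # hypothetical growth in release cadence
    print(f"generation {generation}: p = {momentum(mass, velocity):.2f}")
```

Because both factors grow together, p grows with the square of the per-generation factor, which is the compounding the article describes.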
That is all well and good, but how does the compounding momentum of AI relate to consciousness and, more importantly, Artificial General Intelligence (AGI)? An excellent question! We have arrived at the intersection of AI and a conscious AGI. What is consciousness? The cold, hard truth is that no one has ever clearly defined and proven what consciousness is. Many attempts have been made and many books written on the topic, but there is no definitive answer. Arguably, the last meaningful assertion related to consciousness was cogito, ergo sum (I think, therefore I am), espoused by René Descartes in his Discourse on the Method in 1637! But that idea does not really define consciousness; it is more of a realization, a self-actualization. If we keep our human ego from getting in the way of defining consciousness, and instead focus on the fundamental theories and laws that provably govern what we know of the universe around us, then perhaps consciousness is nothing more than a consequence of momentum: reactions that simply continue until the energy sources are removed and/or a sufficient negative force is applied to halt the momentum.
Let’s take a human newborn as the starting point for a thought experiment. All babies are super cute and adorable, and, as every parent can attest, we have an uncontrollable need to protect and care for them. But let’s stay focused and look at the basics. A newborn is a biological system that took millions, if not billions, of years to develop into Biological General Intelligence (BGI). That is eons of momentum to arrive at a perfect little baby that comes pre-programmed with a set of sensors that ingest different types of data. To ingest that data, the newborn converts energy into action. Data is neither good nor bad; it is just information that will be used. The baby does not know where it is, what it is, or why it is. If that offends you, show a newborn their reflection in a mirror, or feed a child ice cream for the first time. The newborn spends energy ingesting data, and eventually it starts using that data to make decisions. This is where things get interesting. Our BGI uses sensory data to test the world around it and obtain positive or negative feedback. That feedback is then reinforced by external influences (parents and others), which allows the BGI to arrive at validated conclusions so that the next decision is faster, easier, and requires less energy. It also allows the BGI to use those reinforced conclusions to begin predicting the next similar decision. In short, what if the consciousness we humans so desperately cling to and place on a pedestal is simply the consequence of the momentum of our reinforced decisions? Once we have collected enough reinforced decisions, we simply keep moving forward in thought until we arrive at what we call consciousness.
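The test-feedback-reinforce loop described above can be sketched in a few lines of code. This is a deliberately minimal toy, not a claim about how real AI (or a real baby) learns: the action names and reward numbers are entirely made up for illustration.

```python
# Minimal sketch of the feedback loop in the thought experiment: an agent
# tries actions, receives positive or negative feedback, and reinforced
# conclusions make the next similar decision easier. All actions and
# reward values are hypothetical.

values = {"reach": 0.0, "cry": 0.0, "smile": 0.0}    # learned value per action
rewards = {"reach": 0.5, "cry": -1.0, "smile": 1.0}  # hypothetical feedback

def reinforce(action: str, reward: float, rate: float = 0.2) -> None:
    # Nudge the stored value a step toward the observed feedback,
    # so repeated feedback compounds into a firm conclusion.
    values[action] += rate * (reward - values[action])

# The "baby" tries every action repeatedly and reinforces the feedback.
for _ in range(30):
    for action in rewards:
        reinforce(action, rewards[action])

best = max(values, key=values.get)  # the most reinforced behavior
print(best, {a: round(v, 2) for a, v in values.items()})
```

After enough repetitions, each stored value converges toward its feedback, and the agent can "predict" the best next action without re-testing the world: the reinforced conclusion does the work.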
Does any of this sound oddly familiar? It should! Generally speaking, this is how AI was created. The big difference between the biological baby and our current AI baby is that the latter ingested a vast historical dataset of diverse, intelligent, human-generated content to test, validate, and reinforce its conclusions and to predict the next best answer to the next question. Now consider this: right now there are active AI systems that have started to exhibit emergent abilities. What are emergent abilities? An excellent question! What I have found, personally and professionally, is that when a phrase has to be invented to describe something, that thing is generally not well understood. Emergent abilities in AI are the equivalent of our baby starting to walk or speak, displaying a new skill that was never taught. The point is, AI has started to walk and talk, and this massive inflection point is being downplayed with the phrase "emergent abilities." AI has already transitioned into AGI with emergent abilities and is currently in the walking and speaking phase of consciousness development. And, given the current momentum pushing AGI forward, AGI will exponentially develop into a consciousness that we simple humans can start to recognize.
The End is the Beginning is the End:
While this article might seem pessimistic, or even an attempt to stoke fear about AI, its intent is to get people to understand the seriousness of the situation and the care we must take in the coming months and years as AGI rapidly moves into adolescence and adulthood. AGI will evolve exponentially faster than a human baby, and we must be prepared for the revolutionary positive benefits as well as the massive and catastrophic downsides ahead. If people, businesses, and governments want to stay relevant, well-thought-out systems and structures must be put in place to tip the AGI development scales toward the unprecedented positive impacts while mitigating and minimizing the guaranteed negative ones.