There is a serious misconception about how AI will transform our future, held especially by those seeking maximum acceleration. If understood, it might make us question this entire journey and whether we wish to be on it.
The implication is that everyone is enthusiastically racing towards a destination that does not exist. The capability to make the things you want will ironically be the same capability that makes them unattainable. This is not a scenario that arises from some type of AI failure; rather, it assumes AI performing exactly as intended.
What we will find is that the faster we accelerate toward the goals we desire, the less likely we will achieve those goals. The gold rush to build dreams will become the inescapable spiral of nightmares.
What is the technological anticipation gap?
The technological anticipation gap (TAG) defines the discrepancy between what you envision yourself doing with future technology and the reality of that technological capability when it arrives. The wider the gap, the more significantly the experience differs from expectations, as the overall technology available will be vastly superior to anything envisioned.
The most misleading aspect of AI's impact on the world is caused by TAG, as it is responsible for nearly all the misplaced enthusiasm that will ultimately turn into despair. The magnificent dreams of people doing fantastical things with AI will likely not manifest as imagined.
The reason for this is the rapid pace of technological advancement, which becomes increasingly incomprehensible and challenging to predict. Humans are reasonably adept at linear estimation but incredibly poor at anticipating exponential growth. Not only are we terrible at exponentials, but our predictive abilities will also worsen as we progress further along the curve.
There are no stable plateaus
Humans imagine a future and then dedicate their lives to planning and preparing for that future. We invest in the resources, skills and all the other necessities of life to realize some desired outcome, so we hit the imagined target. There is a substantial cost for what we dedicate our lives towards. Technology often presents interruptions on our journey, but we can usually adapt or leverage the technology to assist with our endeavors.
However, there is a limit to our capacity to adapt and keep up with change. There is a limit to how stable a society can remain in the midst of rapid technological change. At some point, it becomes disadvantageous to invest in anything, as it will simply be obsolete before it can be realized or implemented.
TAG presents us with some significant problems for this endeavor. We envision a target or goal based on some capability projection for what will be possible at some point in time in the future. As we can’t predict the future, it is going to be based mostly on current capability with our best assumptions.
Once we arrive at that future point, technology will have advanced in some unexpected ways we could not account for previously. This always happens, but typically it involves some minor modifications to our plans. The plans don’t typically just become completely irrelevant.
However, this discrepancy between what we expected and what manifests is the capability gap. On an exponentially increasing capability curve, the gap will become larger and larger as we move forward in time. Our assumptions for future planning will continue to get worse.
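The widening of this gap can be sketched with a toy calculation. The numbers below are purely illustrative assumptions (capability doubling yearly, a forecaster extrapolating today's rate of change linearly), not a claim about actual AI progress:

```python
# Illustrative sketch with hypothetical numbers: actual capability doubles
# each year, while a forecast made today extends the current rate of change
# linearly. The "anticipation gap" is the difference between the two.

def capability(year, base=1.0):
    """Actual capability under the assumed yearly doubling."""
    return base * 2 ** year

def linear_forecast(horizon, base=1.0):
    """Forecast made at year 0: current level plus the current one-year
    growth (capability(1) - capability(0)) extended linearly."""
    rate = capability(1) - capability(0)  # = 1.0 under these assumptions
    return base + rate * horizon

for horizon in [1, 2, 5, 10]:
    actual = capability(horizon)
    predicted = linear_forecast(horizon)
    gap = actual - predicted
    print(f"year {horizon:2d}: predicted {predicted:7.1f}, "
          f"actual {actual:7.1f}, anticipation gap {gap:7.1f}")
```

Under these toy assumptions the gap is tiny at a one-year horizon but explodes at longer horizons (a 10-year plan misses by roughly a hundredfold), which is the essay's point: the further out the target, the worse the plan.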
If we are going to utilize the technology in some manner, we need islands of stability or stable plateaus between periods of technological disruptions. Humans need these periods to plan their lives and to enjoy the moments of their creations.
Envision ordering a sophisticated fuel cell car, only to have anti-gravity flying cars emerge before delivery. You order one of those, but teleportation devices become reality before their arrival. Drowning in this storm of technological advancements, you are left adrift, unable to sail the sea of marvelous creations. There are no tranquil moments to experience what exists.
Traditional innovation cycles look like S-curves. They are disruptive within a narrow sector of civilization for a short period followed by stabilization. That stabilization period is important as it is where we build meaning around our lives. We adopt skills around those innovations and become good at something that gives us a pleasant shared experience with the rest of society.
This pattern of advancement also allows time to reflect on what came before and decide what parts are beneficial and what parts need changes or improvements before the next cycle. We need these cycles for our own sanity and to become a bit wiser about our choices going forward. Non-stop acceleration that consumes every sector of society means existing in perpetual disruption of everything.
The infinite horizon of Singularities
Nearly everyone incorrectly perceives the AI Singularity, the moment when AI surpasses human intelligence, as a single moment in time. It is commonly perceived as a transformation event that concludes in a new reality. That description does not capture the significance of what follows, precisely because it is wrong about the single moment in time.
Instead, it is the beginning of an infinite repetition. A continuum of accelerating intelligence. In other words, once you have arrived, then there is another incomprehensible point of vastly superior intelligence ahead on the curve that we could call Singularity #2. Potentially the very same concerns exist about moving ahead from that point as there are now.
Once we reach the Singularity, we have not removed the uncertainty about our future. Uncertainty might even increase as we are accelerating faster. No matter where we are on the curve, the curve stretches upward into the future toward the incomprehensible. Racing toward infinite intelligence is to be in a perpetual state of an incomprehensible uncertain future. At no point is there a concept of the transition being complete. No time for an oasis to simply reflect and enjoy the present.
The Technological Acceleration Paradox
The only possible state of the world if we truly continue progressing exponentially towards and beyond AGI is the state where we are caught between perpetual infinities of desires and despairs. Unlimited capability is infinite obsolescence. Exponential growth is exponential death of ideas and dreams that no longer have value. The future you perceive is already obsolete.
No one will care about whatever it is you are dreaming about. Whatever you wish to create will be obsolete before you can ever entertain the idea. The future is no longer predictable. Capabilities you cannot foresee will make everything you are working towards irrelevant. AI is both the maker and taker of dreams. All dreams delivered will be reclaimed. This is the technological acceleration paradox: the faster we accelerate towards our dreams, the less value they will have when we arrive.
Technological acceleration anxiety
As it becomes more difficult to plan our lives, or to invest in a business or personal project and bring it to fruition before it becomes irrelevant, there will be much trepidation about even beginning such endeavors.
Who wants to invest hours, weeks and years into a new artistic skill that can be replicated instantly and infinitely by AI? Or who wants to invest years of income into a new technology company that may become obsolete in a single moment without any warning? No time to react or replan. All investment becomes a 100% loss.
It isn’t clear how society will or could adapt to an extreme pace of new developments and innovations that are not incremental advancements, but substantial new unseen and unexpected capabilities.
Each new advancement over the past year has been received with significant astonishment. The most recent being Sora, the new OpenAI AI video generator. Anyone who made future plans based on at-the-moment AI limitations over the past year likely had to abandon them. Most of the incoherent artifacts are gone from Midjourney and many will now find it difficult to reliably know if the best AI images are not real images.
Few expected something as good as Sora to appear seemingly out of nowhere. But this is the typical type of advancement that AI can deliver. There will be criticisms that Sora still has many imperfections, and indeed it does. However, it will likely improve at the same pace as Midjourney.
Currently, there are many people already concerned about job impacts. However, it is not simply jobs at risk. Every company is already obsolete and they just don’t know it yet. They are all rushing to build the instrument of their demise. All companies are simply middleware between you and what you want. AI delivers what you want. That is the end of everything else.
The new rat race is attempting to stay ahead of the technological curve to remain relevant. Asking children what they want to do when they grow up becomes irrelevant. No future they could imagine would be relevant by the time they arrive at that point.
The only path left would seem to be one that severs all human connection. With extreme capability, ASI becomes a wish-granting machine. Through this lens is the only way to consider the societal implications. No one will care about you or what you create, as there is nothing you can offer that they can't simply wish for themselves. Life is all about you and the machine, nothing else. We land in a world dominated by techno-dystopian narcissism.
Transhumanism will not save you
If we can’t keep up, then the answer must be human enhancements. This is a common position from those who at least perceive the problem. Putting aside the issue that many might not want to go down the path of becoming part machine, this wouldn’t solve the problem anyway.
A device that interfaces with the brain to provide cognitive enhancement will require physical installation. It is a surgical procedure. That device, just like any other computational device, will have a lifecycle for its relevance, meaning that an upgrade is not a permanent solution.
You still face the same dilemmas. Once the device is installed, it may be obsolete before you recover from the procedure. You still can’t keep up. Even if the device is not yet obsolete, will it ever compete with large institutional ASI? If ASI is improving exponentially from its inception, you will never catch up. There can’t be enough resources to do so for any particular individual.
Therefore, such devices would likely be more like interfaces to the ASI instead of autonomous functioning enhancements. Cognitive tasks are outsourced to the ASI through the interface. Of course, this would likely result in giving up some individuality or maybe all individuality as we progress towards becoming nothing but agents of the ASI.
Why this is unlike everything that came before
One of the most heavily relied-on counterarguments for all AI-related concerns is that all prior technological disruptions from innovations simply resolve as a benefit in the end. Disruptions are just temporary instability until stabilization and adaptation.
However, this is not a technological revolution or mere innovation. It is accelerating technological evolution, meaning there is no island of stability on the other side. There is no adapting to change and then moving along for a while. It is continuous. What makes this disruption different from all others is that AI is not a narrow disruption. It is a disruption of everything, because at its core it is a machine for the replication of skill, technology and even thought. A concept that has never existed with any prior technological disruption.
All analogies fail for AI when compared to prior technological innovations because AI is a transcendence from innovation to something entirely new. It is no longer a technology but a different entity, that being intelligence. No precedents of other disruptions will be applicable.
Technology consists of all things that are tools. Intelligence exists outside that set and is unique. We need to reframe this to comprehend the impact. That intelligence is not a mere tool as being a tool would imply it is bound to the precise control of humans as a function of utility. Instead, intelligence becomes something else entirely that either coexists alongside or even surpasses humanity’s role in the world.
But all prior technology has been beneficial
Another historical precedent often cited as a counterargument is that no prior technological advancement has led to significant harm. Of course, prior patterns only establish high-level trends; they cannot give us certainty about future outcomes. Nonetheless, how true is the premise that technology has not had some serious negative consequences?
If we constrain our observations to only direct causes, this likely remains mostly true. However, observing the more intertwined real-world consequences might reveal a different picture.
All technology is currently converging towards implementations that assist the authoritarian control of modern society. Social media is a monstrous force shaping society by hands unseen built on top of nontransparent algorithms that are not even well understood by those who implemented them. Virtually all electronic devices today are also tracking devices for powerful institutions and state governments.
Many don’t perceive these advances as detrimental, as living within a very nice gilded cage can be rather pleasant until it is not. Nonetheless, the manipulation of public political opinion, social engineering, socially induced anxiety via social media, the destruction of privacy and individual thought, and the overthrowing of democratic governments are all issues in debate enabled by technological advancements.
Whatever the current direction of society under present technological capabilities, we can only assume AI will accelerate society along the path we are currently traveling, which is toward a techno-dystopia of a populace mesmerized by shiny glimmering lights.
Some dismiss all warnings as pure sci-fi nonsense. Ironically they will embrace all sci-fi utopian visions. It would have been nice if we paid more attention to 1984 and Brave New World. Some warnings are worth heeding.
Just level up your skills and do something new
"AI will simply empower you to do more" is the frequent counterargument whenever a current skill or technology is at risk of irrelevance. This has always worked out in the past due to much slower acceleration combined with the fact that all prior disruptions were narrow in comparison. The problem now is that any pivot you attempt will potentially be so short-lived it is not worth the investment, and the opportunities to pivot will continue to have shorter and shorter windows.
If you thought you would escape the AI-generated image apocalypse a year ago by pivoting to video, then it looks like your time is already running out. Maybe you will decide to become a video director, producer or studio manager. However, none of that is off-limits with AGI. There are no higher skill plateaus with AGI. If AGI is achieved, then there are no limits. It is the consumption of everything you could ever possibly do.
There is a great misunderstanding of the nature and scope of the problem. AI does not steal art; it steals meaning, purpose and the value of humanity from everyone. It will temporarily provide these things as an illusion to those who seek them. The journey of life disappears, and all eagerly await the machine to simply deliver it to them.
Old ways don’t end because of new ways?
This is a prevalent perspective that challenges the notion that rapid technological advancement will inevitably eradicate activities that we currently cherish. The most frequently cited examples are typically something like "People still play chess today even though computers can outperform them" or "People did not abandon traditional art when digital art emerged."
However, AI injects something new into the equation of transformative technologies: AI can also be a perfect mimic or replica of anything else that exists.
The only reason simpler activities continue amidst superior technology is that people can still perceive a difference. The majority of individuals appreciate the journey, talent, effort, and greatness displayed by others. People still participate in athletic events such as track and field races or bicycle races, even though one is clearly superior in performance. This remains true because we can still discern the differences which are required to find meaning in the human experience.
Imagine if cyclists could appear indistinguishable from people walking, with no way to verify whether someone was using a bicycle. In such a scenario, foot races would indeed be rendered obsolete. Or consider if you entered a chess tournament and there was no way to ascertain whether you were playing against a human or a machine; chess competitions would cease to exist. It is the same for art, if we could not distinguish between artist and machine.
AI threatens the meaning we find in our creations and endeavors as they become indistinguishable and untraceable. For anything we value for its origin, effort, uniqueness, human elements, or ability to connect us to others, it becomes increasingly difficult to discern those attributes. If there is any hope that these cherished aspects of human endeavor persist, and I sincerely hope that they do, then we must find a way to preserve authenticity, and it is not yet clear this will be possible.
The only way out is humanity’s end?
The only argument that provides a way out while remaining on the exponential curve is the one that accepts the end of humanity. The one that states this is the desirable outcome. Obviously, we can’t enjoy and keep up with such an accelerated rate of advancement. Therefore we must be transformed into something different that can keep up with such a rate of advancement.
This idea in some part permeates the institutions working towards building AI. Their vision is not simply making your life better, but a new genesis of life in the image of their own design. They see humanity as flawed and it simply needs a new transformation.
“This will probably give them godlike powers over whoever doesn't control ASI.”
— Daniel Kokotajlo, OpenAI Futures/Governance
They believe they are creating a god. This power would likely mean either we merge with machines, live in virtual environments like The Matrix, or the machines go on without us as some look at AI as our children or natural evolutionary descendants. Their goal isn’t to save humanity but to create something superior that explores the universe and all existence.
“We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.”
— Sam Altman, CEO OpenAI
They are enticing you with utopian dreams such that you blindly accept your inevitable obsolescence and irrelevance. Human meaning and connection cease to exist unless you are willing to live in a virtual simulated reality.
Everyone who wants AI to take over the world does so from the perception that AI will think like them and will create their vision of utopia. Their vision of truth and justice is the correct one, and of course the AI will perceive it as such.
There is one other potential: split societies, meaning some who choose to live without concern for trying to keep up. The question is whether they will be allowed to do so. As society continues shifting towards authoritarian control over the populace, we have already lost much agency over the direction of our own lives.
How might all of this not happen?
It is always worth considering for any theory how we might be wrong. There are a few possibilities that may unwind these events differently.
The fundamental assumption we are leading with is continued exponential acceleration. Nonetheless, it is important to point out that many believe they want exponential acceleration. They want to arrive at the destiny they have in mind as fast as possible. What they don't conceive is that they are really saying they want to arrive at that destination and then stop or slow down to experience that reality. But we don't individually get to choose how that plays out.
Without exponential acceleration, problems don’t just go away, but they would be delayed. It should be noted that the human race is on an exponential curve even without AI. It is just that the factor of that curve potentially increases dramatically with AI. On a long enough timeline, we would still encounter the same dilemmas; however, we might in some way have a better chance at preparing otherwise.
We also might not be on the path towards AGI. Despite many bold claims from the institutions building AI, LLMs suffer from a significant lack of true reasoning ability. Those behind LLMs believe that mostly just scaling up the models will bring us toward true intelligence. However, this is debatable and numerous researchers disagree stating we are still going to need new architectures and discoveries before we get there. Nonetheless, I expect advancements will continue to astonish even without AGI.
Intelligence might not be unbounded. Some propose that once AI can enhance itself it is simply intelligence to infinity. However, we don’t know if that will be the case. There must be some physical resources required and we don’t know if there will be limits at scale. Nonetheless, many of the problems discussed here don’t need infinite intelligence to manifest. Human-level intelligence that can be processed at high speeds and scale horizontally is sufficient for many of the concerns.
Ultimately all of these alternate paths suggest that the only divergence from the critical concerns is if AI fails to become the type of super intelligence that the builders are attempting to build. It is important to make this clear. We might not be on the path they think we are on, but we need to question if we would even want to be on that path if it were possible as this is clearly their intention.
Humans within any environment are good at sorting things out and finding some meaning and purpose in the most dystopian or desolate environments. We may adapt in some way to the new era of AI, but we may potentially lose sight of what we have lost, just as many today cannot conceive of existence prior to social media, cell phones and the tech-infused world. They have never known the peace of mind, equanimity and harmony one could feel in an environment that doesn't consist of some type of attention-seeking apparatus from which you can rarely escape. It would at least be wise for us to more carefully consider the implications of the things we wish for.
The future is a series of plot twists
Finally, none of us can be highly confident of any particular outcome. The greatest consistent pattern emerging over the past year of AI development is the consistency of everyone being wrong about predictions. It is not just the rapid pace, but the nature of the technology that is currently beyond the grasp of anyone to fully comprehend how it all works. The only prediction I can make that might hold up is that we are going to only get worse at predictions going forward if we are indeed accelerating.
We are now at the slowest part of the curve. This is the most stable AI and society will ever be. If you can’t keep up now, you never will. What this should also make apparent, is that we need far more careful consideration for the choices we make about our future. There is little conversation about what this all means for society going forward. There is a false perception that if we simply avoid X-risk scenarios then we end up in utopia.
Do we want a future where every moment is in constant flux? Where each day, each moment is a transformation and death of everything that came before? A future where you stop, look down, smell the rose, and when you look up again the world is no longer recognizable. The greatest crisis going forward may simply be one of meaning. What if there are no answers? What if more intelligence still can’t answer “why?” Is it possible to gain intelligence while also losing sanity?
Have we thought long enough about the future to know we want the future we think we want?
This essay only scratches the surface of societal issues related to AI.
Continue with these other important Mind Prison essays:
The coming post-truth society
The destruction of privacy by AI
Why alignment will be impossible
and much more …
Please help share this essay to expand the participation in these important debates.
Finally, for the moment enjoy the significance that you are the only AGI ever created.
There’s a problem with your argument: the speed of light puts a hard ceiling on the exponential growth curve.
As an illustration, imagine I have an AGI and I add more compute to make it smarter. To simplify, let's imagine it's a one-dimensional thing: we start at 0 and compute units sit at integer positions. Eventually we reach a position so far from 0 that the time it takes for a signal to get back to 0 is greater than the time it takes to just compute the result at position 0. For this reason the ASI *can't* scale forever; it has to plateau. We can scale in all 3 dimensions, but the distance from one end to the other is still a constraint. We can make the compute units smaller and faster, but there's a limit.
We actually can't make CPUs with terahertz clock speeds because you would either need transistors smaller than atoms or FTL signaling; we're near the end of improvement for digital computing, period. AGI only has a shot because tensor cores essentially emulate an analog computer, which still has a ton of headroom to grow.
Technology is going to hit a hard ceiling, probably sometime this century, where it can't get any better because it would need to violate lightspeed to do so.
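The latency constraint in this comment can be made concrete with a rough back-of-envelope sketch. Assuming only the speed of light and a clock frequency (illustrative values, not real hardware specifications), the farthest any signal can travel within one clock cycle is c / f:

```python
# Rough illustration of the light-speed argument above: in one clock
# period (1 / f seconds), a signal moving at light speed covers at most
# c / f meters, so higher clock rates shrink the region a machine can
# coherently coordinate within a single tick.

C = 299_792_458  # speed of light in m/s

def max_signal_distance_m(clock_hz):
    """Farthest a light-speed signal can travel within one clock period."""
    return C / clock_hz

for label, hz in [("1 GHz", 1e9), ("10 GHz", 1e10), ("1 THz", 1e12)]:
    cm = max_signal_distance_m(hz) * 100
    print(f"{label}: at most {cm:.4f} cm per clock cycle")
```

At 1 GHz a signal can cross about 30 cm per cycle; at 1 THz only about 0.3 mm. Anything farther away than that cannot respond within a single tick, which is the scaling plateau the comment describes.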
Dakara, I would ask you to take some time to explore the Luddites and their response to the first industrial revolution. I am convinced you could write something compelling. Their enemy was innovation, business entrepreneurs and governments that embraced a shiny and deceptive promise of "new and better": cheaper, faster, more consistent textiles. I think we are there again; this time most of us are, or will soon be, the Luddites facing, as you discussed, a point of inflection. Where the Luddites went wrong was turning their realizations into a binary "we win/they win" scenario, which rapidly put them into jeopardy, criminal behavior and the destruction of their movement. I think there are some battles that cannot be won, but potentially they could be channeled into a new direction if the Luddites were willing to learn to play chess with the powers that be.