Notes From the Desk are periodic informal posts that summarize recent topics of interest or other brief notable commentary.
The Destruction of Meaning by AI
The real threat of AI may not be the “end of the world” scenarios, but rather the end of meaning and purpose. We are using technology to push the limits of capturing human attention until we can no longer give attention to our own thoughts.
Technology is the new socially accepted drug. We ignore its effects simply because it is not a chemical we put into our bodies. Yet all modern tech is evolving toward a single purpose: to create a surge in your dopamine. Your attention is the payment for the drug. The attention economy is turning everyone into addicts.
AI optimizes this trajectory. What are the consequences of ever-increasing saturation of our senses? Dagan Shani has a short documentary that explores this continuous, never-ending stimulation.
This wild explosion of ultra seductive, highly saturated, and overly complex visuals on the internet made me realize that we're constantly overdosing on imagery that is actively f’ing up our brains.
…
Looking at Tik Tok's latest trend of AI generated videos, we can see the pattern. Pushing things to greater complexity, extremes, and absurdity just to squeeze out another hit of dwindling dopamine so we can feel something.
…
When you can't be sure, especially when something is pretending to be real, you will keep your emotion locked behind the closed gates of your doubt. You will not be tempted to invest your real emotions in a possibly fake smile or a prompted tear.
We Are Building Sterile Prisons for Human Minds
What are we doing to our world and the things that hold meaning? We have an obsession with chasing cold, algorithmic precision that is void of the warmth of imperfection, which is part of the natural world. We are trying to escape our natural environment and build optimally sterile prisons: a perfect emptiness without disorder.
It is all a trap made of alluring illusions of perfection that entice us to leave behind our human elements for something that pretends to be better. However, everything that leads to less genuine human connection is likely a net negative for civilization. Authentic human connection is a foundation for a sane and healthy society.
Scrolling on social media all day is like drinking all day. Technology has simply provided a more socially accepted and efficient method for destroying your life.
The Most Comprehensive Refutation of AI Alignment on The Internet
I recently updated and extended my original essay on the impossibility of successfully aligning AI. It is the most comprehensive argument you will find on the internet.
“Alignment, which we cannot define, will be solved by rules on which none of us agree, based on values that exist in conflict, for a future technology that we do not know how to build, which we could never fully understand, must be provably perfect to prevent unpredictable and untestable scenarios for failure, of a machine whose entire purpose is to outsmart all of us and think of all possibilities that we did not.”
The arguments in the essay refute any possible alignment, from the LLMs we have today to the mythical ASI that AI labs claim they wish to build. I expose the fallacies in all of it. This is your essential reference to inject into any alignment discussion.
The Dystopia of Technological Acceleration That Few Perceive
But what if we could build it - the superintelligence? I also recently updated my essay on this topic with more content, updated illustrations, and edits for better readability.
The AI pro-acceleration faction, those who want to build ASI (the wish-granting machine), believe they will bring utopia to the world by creating a superintelligence. However, even if we were to ignore the alignment problem, an “aligned” powerful AI would still result in the complete destruction of humanity - the total loss of meaning and purpose.
Most importantly, all the things people dream about doing or having, the very incentives that are driving them to want a superintelligence, will paradoxically not be achievable.
This is the argument for why we would never want to build AGI or ASI, even if we could figure out how. This is not an argument about losing control or the end of the world. It is the argument for why powerful AI, working exactly as we would wish, still leads to dystopian ends. This is the essential reference for conversations about maximum acceleration.
Randomly Correct Answers Aren’t Enough
Today’s LLMs certainly aren’t a superintelligence. Although sometimes useful, they don’t appear to have any intelligence at all. It is becoming apparent to businesses that randomly correct answers aren’t the miracle that AI labs have claimed. More signs that the hype is fading and the bubble cannot be sustained.
Data Shows That AI Use Is Now Declining at Large Companies
The survey, which compiles data from over 1.2 million firms throughout the US, shows usage of AI tools among companies with over 250 employees dropping from nearly 14 percent in mid-June to under 12 percent in August.
…
It's a particularly distressing sign for tech investors and CEOs, whose unfettered spending on AI is now literally holding up the US economy. For the last few years, they've held that enterprise AI — stuff that would prop up powerful companies in tech, finance, and beyond — was the key to building a sustainable business model off of AI development.
— Data Shows That AI Use Is Now Declining at Large Companies - Futurism
Mind Prison is an oasis for human thought on topics of technology, AI, and philosophy, attempting to survive amidst the dead internet. I typically spend hours to days on articles, including creating the illustrations for each.
If you find them valuable and still appreciate creations from human beings, I hope you will consider subscribing. Thank you!
No compass through the dark exists without hope of reaching the other side and the belief that it matters …