Notes From the Desk are periodic informal posts that summarize recent topics of interest or other brief notable commentary.
The Destruction of Meaning by AI
The threat of AI may not be the “end of the world” scenarios, but rather the end of meaning and purpose. We are using technology to push the limits of capturing human attention until we can no longer give attention to our own thoughts.
Technology is the new socially accepted drug. We ignore its effects simply because it is not a chemical we put into our bodies. Yet all modern tech is evolving toward a single purpose: to trigger a surge of dopamine. Your attention is the payment for the drug. The attention economy is turning everyone into addicts.
AI optimizes this trajectory. What are the consequences of the ever-increasing saturation of our senses? Dagan Shani has a short documentary that explores this continuous, never-ending stimulation. It is very well done, and I hope you can take a moment to watch the video below.
This wild explosion of ultra seductive, highly saturated, and overly complex visuals on the internet made me realize that we're constantly overdosing on imagery that is actively f’ing up our brains.
…
Looking at TikTok's latest trend of AI-generated videos, we can see the pattern. Pushing things to greater complexity, extremes, and absurdity just to squeeze out another hit of dwindling dopamine so we can feel something.
…
When you can't be sure, especially when something is pretending to be real, you will keep your emotion locked behind the closed gates of your doubt. You will not be tempted to invest your real emotions in a possibly fake smile or a prompted tear.
We Are Building Sterile Prisons for Human Minds
What are we doing to our world and the things that hold meaning? We are obsessed with chasing cold, algorithmic precision devoid of the warmth of imperfection that is part of the natural world. We are trying to escape our natural environment and build optimally sterile prisons: a perfect emptiness without disorder.
It is all a trap made of alluring illusions of perfection that entice us to leave behind our human elements for something that pretends to be better. However, everything that leads to less genuine human connection is likely a net negative for civilization. Authentic human connection is a foundation for a sane and healthy society.
Scrolling on social media all day is like drinking all day. Technology has simply provided a more socially accepted and efficient method for destroying your life.
The Most Comprehensive Refutation of AI Alignment on the Internet
I recently updated and extended my original essay on the impossibility of successfully aligning AI. It is the most comprehensive argument you will find on the internet.
“Alignment, which we cannot define, will be solved by rules on which none of us agree, based on values that exist in conflict, for a future technology that we do not know how to build, which we could never fully understand, must be provably perfect to prevent unpredictable and untestable scenarios for failure, of a machine whose entire purpose is to outsmart all of us and think of all possibilities that we did not.”
The arguments in the essay refute any possible alignment, from the LLMs we have today to the mythical ASI that AI labs claim they wish to build. I expose the fallacies in all of it. This is your essential reference to inject into any alignment discussion.
The Dystopia of Technological Acceleration That Few Perceive
But what if we could build it, the superintelligence? I also recently updated my essay on this topic with new content, updated illustrations, and edits for better readability.
The AI pro-acceleration faction, those who want to build ASI (the wish-granting machine), believe they will bring utopia to the world by creating a superintelligence. However, even if we were to ignore the alignment problem, an “aligned” powerful AI would still result in the destruction of humanity: the complete loss of meaning and purpose.
Most importantly, all the things people dream about doing or having, the very incentives that are driving them to want a superintelligence, will paradoxically not be achievable.
This is the argument for why we would never want to build AGI or ASI, even if we could figure out how. It is not an argument about losing control or the end of the world. It is the argument for why powerful AI, working exactly as we would wish, still leads to dystopian ends. This is the essential reference for conversations about maximum acceleration.
Randomly Correct Answers Aren’t Enough
Today’s LLMs certainly aren’t a superintelligence. Although sometimes useful, they don’t appear to have any intelligence at all. It is becoming apparent to businesses that randomly correct answers aren’t the miracle that AI labs have claimed. These are more signs that the hype is fading and the bubble cannot be sustained.
Data Shows That AI Use Is Now Declining at Large Companies
The survey, which compiles data from over 1.2 million firms throughout the US, shows usage of AI tools among companies with over 250 employees dropping from nearly 14 percent in mid-June to under 12 percent in August.
…
It's a particularly distressing sign for tech investors and CEOs, whose unfettered spending on AI is now literally holding up the US economy. For the last few years, they've held that enterprise AI — stuff that would prop up powerful companies in tech, finance, and beyond — was the key to building a sustainable business model off of AI development.
— Data Shows That AI Use Is Now Declining at Large Companies - Futurism
Mind Prison is an oasis for human thought on topics of technology, AI, and philosophy, attempting to survive amidst the dead internet. I typically spend hours to days on each article, including creating its illustrations.
If you find them valuable and still appreciate creations from human beings, I hope you will consider subscribing. Thank you!
No compass through the dark exists without hope of reaching the other side and the belief that it matters …

Haven't read the whole article, but watched the video. It's interesting, as it echoes something I recognised earlier, in 2019 (below), prior to the Stable Diffusion launch. Stable Diffusion engaged me for a few months; then LLMs became the more interesting diversion.
"This morning my phone, which I'd switched back to a live Tumblr feed for the desktop image, was showing a naked girl in what appeared to be a rainforest swamp. The incongruity of it struck me immediately as it looked like the girl was expertly coiffured and made up, but who would go to all the trouble of doing a naked photoshoot in a swamp, given the logistics.
It was at that point that I began to wonder if the image was real or CG. The girl was passably human, albeit with slightly larger breasts, and she was very trim and toned. The shot did look "real" in that it didn't look oddly posed, and the model did appear to be part of the background.
I guess this is going to be a feature of "reality" going forward: questioning whether what we see with our eyes (when not in VR/AR) is actually what it appears to be. At what point do we stop second-guessing it, I wonder?"
11/06/2019 17:31
Though I do think you are right about suspension of disbelief, knowing it's fake. This is how I approach AI. I find it strange that so many people, when you tell them about this, freak out: "it's not real!" Of course it's not real. I think neurotypical people are in a far worse place, epistemologically speaking.