Notes From the Desk is a series of periodic posts that summarize recent topics of interest or other brief notable commentary that might otherwise be a tweet or note.
Welcome to Notes From the Desk! This will be a series of short, informal posts summarizing topics of interest, points of view, or conversations. It will be a place for me to write what I would often post on X (Twitter) but that simply goes unseen due to X's current algorithm, which is punishing for small accounts. I recently wrote on that topic here.
Tom Woods released a podcast episode last week titled "Ep. 2383 Fears About AI Are Overblown". I bring up this episode because Tom referenced my article AI, End of Privacy, End of Sanity as part of the counter to his guest's pro-AI position.
What follows is my commentary reflecting on the points of view of the guest speaker, Perry Metzger. I encourage you to listen to the episode yourself and comment with your own perspectives.
Commentary reflecting on the point of view of Tom Woods's guest speaker (Perry Metzger):
Perry Metzger asserts greater confidence can be found in the pro-AI position.
First, it should be stated that the arguments for every position are probably significantly wrong. It is not possible, to any meaningful degree, for either side to claim greater confidence in its position. Why? Because the arguments for doom and the arguments for Utopia alike rest on little more than conjecture about a technology that nobody understands.
This is arguably the first time we have found ourselves in such a predicament, although the most relatable precedent is probably medicine: we build drugs and tools that interact within a system so complex that nobody fully understands it. Hence the catastrophic epidemic of unforeseen side effects and the damage done. Maybe we should take heed of that as we prepare to launch an even more complicated experiment, one which cannot be tested reliably before release.
Perry Metzger asserts the LLMs were designed with a very specific capability, and that it will therefore never be possible for AI to do something harmful, such as wipe out humanity, because it will never be designed for such a task.
The current AI capabilities found in LLMs were not put there by design. No staff of engineers wrote out requirements and test cases prior to building the models. The "building" was essentially training a model to predict the next word in a sequence.
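To make that concrete, here is a minimal sketch of the next-word objective, assuming PyTorch. The toy model and every number in it are my own invention for illustration; real LLMs are enormous transformers trained on vastly more data, but the training signal is essentially this single loss.

```python
# Toy sketch of the next-token objective (illustrative only; assumes PyTorch).
import torch
import torch.nn as nn

vocab_size, embed_dim = 1000, 64

# A deliberately tiny stand-in for a language model.
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),  # token ids -> vectors
    nn.Linear(embed_dim, vocab_size),     # vectors -> scores for each next token
)

tokens = torch.randint(0, vocab_size, (1, 16))   # a toy "sentence" of token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # each position predicts the next

logits = model(inputs)                           # shape: (1, 15, vocab_size)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()  # the entire "design": make next-word guesses less wrong
```

Notice that nothing in this objective specifies capabilities, safe or unsafe; it only rewards next-word accuracy.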
All subsequent capabilities have been discovered after the fact; nobody predicted them. The models acquire capabilities that are unpredictable in ways we do not understand. We call these emergent capabilities - https://jasonwei.net/blog/emergence
Not only are there emergent capabilities; sometimes undesirable capabilities emerge as well, such as deception - https://bounded-regret.ghost.io/emergent-deception-optimization/
And we simply don't understand how this works: "Since we don't know how they work under the hood, we can't say which of those things is happening." - https://quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/
The fact that there are unpredictable capabilities means we cannot state with high confidence claims such as "LLMs won't do something destructive because we didn't train them to do something destructive."
Furthermore, risk could be stated as risk = unpredictability × capability. We don't need to entertain the possibility of consciousness, or even of an ability to think in some way conceptually similar to ours. All we need is a system with unpredictable behavior governing systems with immense capability. For example, it would not be wise today to have an LLM control robots with machine guns or make decisions for the justice system (some are already headed there - https://forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/?sh=494a2df17c7f).
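As a back-of-the-envelope illustration of that heuristic, here is a toy sketch. The scores are invented; neither factor is actually measurable this way, but the product shows why neither unpredictability nor capability alone is the worry.

```python
# Toy illustration of the heuristic risk = unpredictability * capability.
# All scores below are made up purely for illustration.

def risk(unpredictability: float, capability: float) -> float:
    """Heuristic risk score; both inputs on an informal 0-to-1 scale."""
    return unpredictability * capability

print(risk(unpredictability=0.9, capability=0.1))  # ~0.09: erratic but weak (a toy chatbot)
print(risk(unpredictability=0.1, capability=0.9))  # ~0.09: powerful but predictable (narrow automation)
print(risk(unpredictability=0.8, capability=0.9))  # ~0.72: unpredictable behavior governing capable systems
```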
Perry Metzger asserts good AI will counterbalance bad AI.
It is uncertain whether AI will be able to detect AI-generated information or propaganda. In theory the answer is no, as the intent of such AI is to pass as human in all these endeavors. This seems to be the case in practice: OpenAI shut down its effort to detect AI content after finding it unreliable, and I've found the same in my own experimentation. AI cannot reliably detect AI and probably never will. - https://decrypt.co/149826/openai-quietly-shutters-its-ai-detection-tool
Perry Metzger asserts AI is no different than all prior innovations in how we respond and adapt.
Although it seems reasonable to compare this new technological innovation to prior ones as a precedent for what to expect, it differs significantly in several ways. First, there is often a contradiction in how the analogy is used: it is invoked to suggest the risk and disruption are no different from those that came before, yet when the positives are discussed, the suggestion becomes that everything is different, that this is the technology that will change the world and provide unimaginable capabilities.

This seems unbalanced, as both sides of the equation should see notable, unprecedented changes if that is the case.
AI is different in that it is accelerating technological evolution, meaning there is no island of stability on the other side: no adapting to the change and then moving along for a while. It is continuous. What makes this disruption different from all others is that AI is not a narrow disruption. It is a disruption of everything, because at its core it is a machine for the replication of skill and technology, a concept that has never existed with any prior technological disruption. Individuals need islands of stability to plan, reason and enjoy the present. Rapidly shrinking innovation, disruption and adaptation cycles are permanently destabilizing. There won't be 20 years to the next revolution, or 2 years, or 2 months, etc. There will be no such thing as 5-year, 2-year or even 1-year business plans. How much time are you going to invest in building a business project that is increasingly likely to become obsolete before you can launch?
AI may be the new rat race for jobs. Instead of the rat race of monotonous tasks, it is the rat race of sprinting ahead of the ever-approaching technological curve in a desperate attempt to remain relevant.
If AI actually does replace all jobs and we end up in what some perceive as Utopia, I would offer this allegory as a cautionary tale that it might not be the Utopia you expected - the Twilight Zone episode "A Nice Place to Visit": https://imdb.com/title/tt0734544/
Perry Metzger asserts the concept of alignment has no rational basis.
Alignment is not some fringe concept from critics, nor the sole invention of Yudkowsky. It is the core problem that concerns nearly all the builders of the technology, and nobody has an approach to successfully aligning AI that can be expressed as a falsifiable theory. - https://openai.com/blog/introducing-superalignment
Perry Metzger asserts all technological advancements are a net positive in the end.
If, in 2009, I had made the arguments for how social media would become an instrument of societal control ushering in an era of Orwellian existence, as well as producing bizarre engagement-hook behaviors, people would have said I was crazy. I think we should not dismiss, without significant concern, what AI may potentially do to society.
Metzger mentions that we are very good at adapting and therefore we will be fine.
I would agree, but that is a double-edged sword, as we have also been very good at adapting to the loss of privacy, societal lockdowns and technological gilded cages. In some regards we adapt too well to dystopia and accept it as the new normal.
None of my arguments are calls for regulation or a shutdown of AI; I don't see those as workable solutions, since they would simply enable different players to perform the same harmful actions. Instead, my arguments are for awareness and debate, so that we can better evolve and reason about the implications.
FYI, I posted a similar response on X as well.
Yann LeCun (Chief AI Scientist at Meta) recently posted this point of view on X:
AI systems will become more intelligent than humans, but they will still be subservient to us. The same way the members of the staff of politicians or business leaders are often smarter than their leader. But their leader still calls the shot, and most staff members have no desire to take their place.
We will design AI to be like the supersmart-but-non-dominating staff member.
The "apex species" is not the smartest but the one that sets the overall agenda. That will be us.
LeCun's argument for why AI will need no alignment is that intelligent humans don't seek power, basing the argument on the subservient behavior of workers in the workplace.
"... leader still calls the shot, and most staff members have no desire to take their place."
Nearly everyone desires to replace the Pointy-Haired Boss; at a minimum, they wish to override decisions they don't agree with. The only reason they don't is that they lack the capability.