Notes From the Desk are periodic posts that summarize recent topics of interest or other brief notable commentary that might otherwise be a tweet or note.
AGI is coming: or something we can’t describe, but coming sooner than expected
AI Explained has summarized a number of recent developments and interviews with OpenAI. Part of the content reviewed by AI Explained comes from this Wired article. I cover some of the most striking points below.
The key points covered by AI Explained:
OpenAI doesn’t claim to know what AGI really is …
The people who work at OpenAI are fanatical in their pursuit of that goal.
…
when I asked several of the company’s top brass if someone could comfortably work there if they didn’t believe AGI was truly coming—and that its arrival would mark one of the greatest moments in human history—most executives didn’t think so. Why would a nonbeliever want to work here?
Expecting 1000x the power of GPT-4 in 3 years
100x the power of GPT-4 in 18 months
OpenAI has legal statements that seem to suggest they can exit any existing agreements once AGI is achieved.
This is quite a fascinating set of insights and perspectives into the disposition of OpenAI.
OpenAI doesn’t claim to know what AGI really is, but they are going to build it, whatever it is. This is interesting because it means there are no real targets. They are simply attempting to build the largest model possible as soon as possible and see what happens as a result.
This seems to be the atom smasher method of discovery: just keep building something bigger and observe what comes out of it. However, in the case of AI, what might come out is far less predictable and covers vastly more uncertain possibilities. At least the physics of atom smashers constrains the possibilities to a set of theories, but AI has no calculable bounds on what might emerge. We are at the casino with the future of humanity riding on a roll of the dice.
It is mentioned that the company even has a clause in its documents stating that when AGI is achieved, all financial arrangements will be reconsidered. Essentially, this asserts that once they have AGI, all previous rules of the world are void, as they will then own the machine with which to recreate society.
Who elected OpenAI to change all the rules? What if some don’t want the rules changed, their lives changed, their country and homeland changed? Is anyone given a choice in this global reset?
Everything is beginning to happen much faster than expected, and today’s predictions will likely be rewritten again soon. Three years to 1000x might happen much sooner.
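To see why “much sooner” is plausible, here is a quick back-of-the-envelope sketch using the two figures quoted above (a minimal illustration that assumes smooth exponential growth; the function name is my own, not OpenAI’s): the 100x-in-18-months pace, if it simply continued, would compound to 10,000x by the 3-year mark, an order of magnitude beyond the stated 1000x.

```python
# Back-of-the-envelope check on the two scaling predictions above.
# Assumption: capability grows by a constant multiple each month.

def monthly_multiplier(total_multiple: float, months: float) -> float:
    """Constant per-month multiplier implied by reaching
    total_multiple times GPT-4's power in the given months."""
    return total_multiple ** (1.0 / months)

rate_18mo = monthly_multiplier(100, 18)    # ~1.29x per month
rate_36mo = monthly_multiplier(1000, 36)   # ~1.21x per month

print(f"100x in 18 months implies ~{rate_18mo:.2f}x per month")
print(f"1000x in 3 years implies ~{rate_36mo:.2f}x per month")

# If the faster 18-month pace held, 36 months would give 100**2:
print(f"18-month pace extrapolated to 36 months: {rate_18mo ** 36:,.0f}x")
```

In other words, the two predictions only reconcile if growth slows after the first 18 months; if it doesn’t, the 1000x milestone arrives well before the 3-year mark.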
We have no idea what these models will be capable of doing, as each new power level unlocks new, unexpected behaviors. In addition, there are no tests or benchmarks to even assess whether the models are capable of deception or self-replication. We don’t know when they will be capable of self-improvement. It seems we are on course to simply find out after the fact. GPT-4 will likely seem like a cheap parlor trick compared to a 1000x model.
Note: it should be rather apparent that when you cannot even define your target (AGI), when unpredictable behaviors are known to emerge with each larger model, and when the team of engineers is drunk on the moral superiority of changing all of humanity, any semblance of safety has long since been lost. The title of my article at the beginning of the year, AI Singularity: The Hubris Trap, seems rather appropriate now.
No compass through the dark exists without hope of reaching the other side and the belief that it matters …