5 Comments

It's not clear to me what you mean by your conclusion.

"A path forward will likely need to consider something other than how to align a superintelligence. Most likely, the very nature of what we are attempting to build. A new paradigm that looks at the problems we want to solve and reimagines a different path to get there."

What does this "different path" involve concretely, and how would we achieve it?

author

There is no concrete alternative path, because nobody is currently even looking for one. However, it would begin by examining the problems we need to solve and focusing on what is required to solve them, rather than on a machine that solves everything.

Potentially that means AI optimized for specific tasks, but not some conscious entity that oversees the entirety of society. Or maybe the path is figuring out how to enhance our own cognitive abilities instead of building machines. I don't claim to have the answer here; I am simply observing that we aren't currently exploring any alternative with substantial interest.

Likely we are somewhat blinded to alternatives as long as we remain fixated on the potential creation of a wish-granting machine.


I agree that creating a wish-granting machine is deleterious and that alternatives should be explored. As to the specific option of creating AI optimized for specific tasks, I don't know that it's a stable option in the long run. For many of the most difficult problems we would want it to solve, it seems we would basically need AGI to solve them. We could have a norm, policy, or software limitation that says "only use it for specific purposes and not to control society as a whole," but that gets back to the problem of power-seeking humans: someone is going to want to remove that policy, or combine several domain-specific AIs into an AGI.

author

Yes, power-seeking humans: what we should do and what we will do likely will not align. Human curiosity, ambition, etc. will ultimately drive us to build ultimate power if it is possible.

At present, the only way out I can foresee is if it turns out that building it is not possible. Not everyone agrees we can reach AGI or beyond by simply scaling up the types of systems we have today, and some are uncertain whether it is even possible to create human-like intelligence out of silicon.

Nonetheless, extremely powerful systems that merely simulate intelligence can still prove high-risk in numerous ways.


Humanity's values look good at first glance. But what about pride, cunning, and cruelty? Those are values of humanity too; they're in accordance with the way some people behave, anyway. So we need some external standard: something congruent with somebody's theory of reality. There are quite a few of those, popularly known as religions. Unfortunately, they fall into two groups: those coming from the Powers of Darkness (paganism), and those coming from God. What's the hapless AI to do? I think I know. It will be invested by an evil spirit; thus the antichrist will appear. Be careful out there.
