13 Comments
Feb 23 · Liked by Dakara

There’s a problem with your argument: the speed of light puts a hard ceiling on the exponential growth curve.

As an illustration, imagine I have an AGI and I add more compute to it to make it smarter. To simplify, let's imagine it's a one-dimensional thing: we start at 0 and compute units sit at integer positions. Eventually we reach a position so far from 0 that the time it takes for the signal to get back to 0 is greater than the time it takes to just compute the result at position 0. For this reason the ASI *can't* scale forever; it has to plateau. We can scale in all three dimensions, but the distance from one end to the other is still a constraint, and we can make the compute units smaller and faster, but there's a limit.
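To put rough numbers on that thought experiment, here is a minimal sketch; the 1 ns local compute time and 1 cm spacing are made-up illustrative values, not anything from the argument itself:

C = 299_792_458          # speed of light, m/s
local_compute_s = 1e-9   # assumed: 1 ns to just recompute the result at position 0
spacing_m = 0.01         # assumed: 1 cm between adjacent compute units

# Beyond this distance, waiting for the answer to travel back to 0 takes
# longer than recomputing it at 0 outright.
max_useful_distance_m = C * local_compute_s
max_useful_position = int(max_useful_distance_m / spacing_m)

print(f"adding units helps only within about {max_useful_distance_m:.2f} m")
print(f"i.e. roughly the first {max_useful_position} integer positions")

Under those assumed numbers the useful line is only about 0.3 m long; faster local compute shortens it further.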

We actually can't make CPUs with terahertz clock speeds, because you'd either need transistors smaller than atoms or faster-than-light signalling. We're near the end of improvement for digital computing, period; AGI only has a shot because tensor cores essentially emulate an analogue computer, which still has a ton of headroom to grow.
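For a sense of scale, using nothing but the speed of light and the clock period: at 1 THz a signal can cross only about a third of a millimetre of chip per cycle.

C = 299_792_458  # speed of light, m/s

for freq_hz in (5e9, 1e11, 1e12):   # 5 GHz, 100 GHz, 1 THz
    period_s = 1.0 / freq_hz
    reach_mm = C * period_s * 1000
    print(f"{freq_hz/1e9:7.0f} GHz clock -> light covers {reach_mm:.2f} mm per cycle")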

Technology is going to hit a hard ceiling, probably sometime this century, where it can't get any better because it would need to violate lightspeed to do so.

author

Thank you. Yes, for most of the argument I went with the accelerationist view of no limits, but I alluded later in the argument that there must be some kind of limits ...

"Intelligence might not be unbounded. Some propose that once AI can enhance itself it is simply intelligence to infinity. However, we don’t know if that will be the case. There must be some physical resources required and we don’t know if there will be limits at scale."

However, exponentials can take diverging paths. You can scale horizontally instead of vertically.

For example, individual humans are no more capable as machines than they were a century ago. However, there are billions more humans working on inventions in parallel.

I agree with your critique, but some will argue that higher intelligence will find ways to scale we have not thought about. We do have a tendency not to perceive what is fully possible, as with the death of Moore's Law. Nonetheless, I don't perceive either as an impact to the core argument about the potential problems with acceleration: even if we stop accelerating at some point, we will still have achieved far higher velocity than our current human minds can keep up with. What are your thoughts on that argument?

Feb 23 · Liked by Dakara

Firstly, what I'm arguing is that the sigmoid of technology is a fractal. Yes, technologies individually are sigmoids, but so too are industrial revolutions, and, zooming out further, so is technology as a whole.

Counterpoint: if you curve-fit the growth of human knowledge and technological improvement, it matches a hyperbola, not an exponential. This indicates that technology is "finished" at 2047: all possible technologies would be discovered at that point, because growth would be infinite, and at that point the velocity hits 0 because there's nothing left to figure out.
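To make the hyperbola-versus-exponential distinction concrete (a sketch of the functional forms only; \(t_s\) is whatever finite date the fit lands on):

\[
y(t) = \frac{C}{t_s - t} \quad\Rightarrow\quad \frac{dy}{dt} = \frac{C}{(t_s - t)^2} = \frac{y^2}{C},
\]

so the growth rate scales with \(y^2\) and the curve diverges at the finite time \(t_s\), whereas an exponential (\(dy/dt = k\,y\)) grows without bound but never blows up in finite time.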

Alternatively: it's a double exponential, because growth in technology creates a bigger population, which enables more growth; so infinite exponential growth. But the problem is the same: there are resource constraints and physics constraints. There are fundamental limits to "all technology": the light-speed barrier, or the constraints of the solar system.

So it must be a kind of sigmoid; things will get exponentially harder to discover, and this is what we have already seen. E.g. drug discovery used to be so cheap that random pharmacists could do it; now it costs billions of dollars to make a new drug. You used to discover physics because an apple fell on you; now particle colliders cost billions of dollars; etc. Look at AI itself: scaling it up is costing more and more, and yeah, the price will probably come down, but the point remains, it's getting harder and harder to discover anything new, to the point that soon we will need ASI just to continue the current rate of growth, because we already got all the low-hanging fruit and new things will just be that hard to discover.

Second point: what the hell is Sora? Is it a "physics simulation"? Is it "predicting the next world"? No, it's an imagination. It's not creating a new world or proof of simulation theory any more than you do when you pretend to argue in the shower. In this way its imagination is already better than ours: if you show someone those science-experiment videos for kids and ask them what will happen, they do not do a good job predicting it. But we still say it's not good enough, because we compare it to filmmaking, not to imagination. Making a visualized scene from a prompt is impressive in that it's a computer doing it, but fundamentally it's just the same as imagining a scene when reading a book. And I'm not trying to downplay it, because I would argue that fundamentally imagination = consciousness; your train of thought is an internalized prediction of the world and what will happen when you act on it. But let's call a spade a spade.

Thirdly, let's determine what an ASI even is. My definition is as follows, agree or not: "if you want to know what an AI is up to: an AGI can explain itself; an ASI cannot, because it's beyond your comprehension."

Based on this definition, do I believe ASI exists? No. Why? Well, we train our current LLMs by giving them tons of training data from a general intelligence, e.g. us. So if we extrapolate from that, you train an ASI by giving it training data from an ASI, and we've hit a problem. To expand on this argument: I fundamentally argue that when you have language you have the world; any idea can be expressed, because language is countably infinite in the things it can express. Therefore an ASI would need an uncountably infinite language so it can express more things, and that can't be represented in a computer anyway.
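For the countability step specifically, this is the standard argument: over any finite alphabet \(\Sigma\), the set of all finite strings is a countable union of finite sets, so a language built from them can express at most countably many distinct things:

\[
\Sigma^* = \bigcup_{n=0}^{\infty} \Sigma^n, \qquad |\Sigma^n| = |\Sigma|^n < \infty \;\;\Rightarrow\;\; |\Sigma^*| = \aleph_0 .
\]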

So let's think not about ASI but rather super-AGI: it's smarter than us, and faster than us, but it's not qualitatively smarter than us; it could explain itself if it really wanted to.

Well, there are really only two ways to scale intelligence: raw power or divide and conquer.

I already examined the first, but to expand on it a bit: we already hit the limit for silicon. If we assume graphene we might be able to get another 10x out (they say 100x, but really that's more about temperature and vertical stacking than raw speed), and room-temperature superconductors might get us 50x, but we're really close to the limit either way. Regardless, when we talk about CPU improvements from here, we're generally talking about moving memory on-chip to reduce latency (which high-performance programs won't be affected by, since they're already heavily optimized) or vertical stacking, parallelism, divide and conquer.

So let's look at the second. A super-AGI is probably using divide and conquer; it can't scale itself to the point of being an ASI, and it can't train an ASI, so it's rather a swarm of AGIs. That's just like a group of people. Each one is smarter than a person, but is it, though? If one of the tasks is to basically be an accountant, why would that be afforded the processing overhead of a mainline AGI? So a lot of them are smarter than a person, but a lot of the agents are probably not. Is it really different from just a corporation?
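There is also a standard ceiling on the divide-and-conquer route itself. A minimal sketch using Amdahl's law, with the arbitrary assumption that 95% of the work parallelizes cleanly across the swarm:

# Amdahl's law: if a fraction p of the work can be split across agents and
# (1 - p) cannot, the speedup is capped at 1 / (1 - p) no matter the swarm size.
# p = 0.95 below is an illustrative value only.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (10, 100, 1000, 1_000_000):
    print(f"{n:>9} agents -> {amdahl_speedup(0.95, n):.1f}x speedup (hard cap: 20x)")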

This is kind of another thing: any job can be done by an ANI, because no job covers the whole range of human experience, only a subset of it. So AGI may just make a ton of ANI agents that are smarter than our current ANIs but not really as smart as a person, and that swarm is the automated company.

But looking over everything so far: AGI will make a bunch of really hard problems suddenly low-hanging fruit, but they will continue to get exponentially harder. So AGI isn't the point where we accelerate infinitely; it's the center of the sigmoid of all technology (which, as a double exponential, has a much longer lead-in than drop-off), the fastest rate of growth. From that point, problems will either be solved, so there's nowhere to go from there, or they'll just get exponentially harder faster than the AGI improves.
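Concretely, "the center of the sigmoid" is the inflection point of a logistic curve, where the growth rate peaks (generic symbols here, nothing fitted to data):

\[
f(t) = \frac{L}{1 + e^{-k(t - t_0)}}, \qquad f'(t) = k\,f(t)\left(1 - \frac{f(t)}{L}\right),
\]

so \(f'\) is maximized at \(t = t_0\), where \(f = L/2\); growth is fastest exactly at the midpoint and slows on either side.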

But let's assume all of this is wrong and it does just scale infinitely. Well… so what? The technologies discovered by an ASI are important to other ASIs; they're beyond our understanding, and we have no use case for them. Things just kind of stop progressing from our perspective, because new technology doesn't matter to us.

Regardless, I fundamentally contest the idea that the velocity never grinds to a halt; I think it hits some kind of hard limit where technology plateaus until we can figure out some kind of new path forward.

Also, if AGI is made for our benefit, wouldn't it realize that it's not benefiting us by discounting our creativity, and do something to ensure that it remains "just a tool" in certain aspects of our lives? Maybe future computer interfaces will be more like companion animals, because that's just better for us than screens and VR, who knows. Additionally, those who get addicted to the generative-AI waifus do not reproduce, and I don't think how extreme a selection pressure this will be is often accounted for.

But that's a bunch of my thoughts on it.

author

Yes, agree with fractal sigmoids and most of your comments in general related to acceleration competing with increasing difficulty.

Also, agree that it is questionable whether ASI can be something that manifests from training on AGI/human-level reasoning. I think your definition of ASI is reasonable, and it also suggests, as I've considered, that we would never know if AI becomes ASI. We can't create a test for something we can't comprehend. Fast AGI will be perceived as ASI by most.

When mentioning velocity, I mostly had in mind the experience of information. Yes, technological innovation from fundamental understanding reaches its limit. However, permutations of creations would continue.

But maybe even that endeavor reaches its limit as well. We all become Marvin from The Hitchhiker's Guide to the Galaxy, bored with our own existence, knowing everything, with no motivation to produce anything new.

Would AGI reflect on its benefit/harm and ensure it remains a tool? I suppose that depends on how human-like AGI becomes. Humans are not necessarily very good at comprehending the benefit/harm they impose onto others.


I would follow David Shapiro's argument: the alignment of an AGI is a network of moral optimization functions that mutually contradict, and the "right" choice is the one that minimizes tension. I would also argue altruism is easy to program because it evolved multiple times, so an evolution simulation does the trick fine.
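A toy sketch of that "minimize tension" idea as I read it; the objectives, weights, and candidate actions below are entirely made up for illustration:

# Pick the choice that minimizes total tension across contradictory moral objectives.
from typing import Callable

Objective = Callable[[str], float]  # returns how strongly an action violates it (0 = no tension)

objectives: list[Objective] = [
    lambda action: 1.0 if "harm" in action else 0.0,      # reduce suffering
    lambda action: 0.6 if "restrict" in action else 0.0,  # preserve autonomy
    lambda action: 0.2 if "idle" in action else 0.0,      # increase understanding
]

def tension(action: str) -> float:
    # Total tension is just the sum of violations across the network of objectives.
    return sum(obj(action) for obj in objectives)

candidates = ["restrict the user", "cause harm", "stay idle", "offer help"]
print(min(candidates, key=tension))  # -> "offer help" under these toy scores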

Regardless, the point is: if it's unable to reflect, it's not AGI; if it doesn't care about its benefit/harm, then it's not aligned. So yes, it would be concerned about that, or it would probably kill us as it sets about maximizing paperclips.

Permutations continue, yes; but permutations of novels and ideas for movies continue, and those don't shock us. Likewise, permutations of existing technologies don't disturb the social order. It's only things using untapped laws of physics that disturb the social order:

industrial revolution -> steam power

2nd industrial revolution -> electromagnetic force

digital computing -> semiconductor bandgaps

AI -> phase-change materials (if AI accelerators don't use this, then anything beyond GPT-4 level will be so prohibitively slow and expensive it won't disturb the social order; there are already accelerators in the lab using this that are 800x faster than tensor cores, though)

The thing is that new permutations are what you talked about in the post, with people messing around with technologies as they mature; they're the steady state of messing about with stuff. The transitions are not that; they are unexplored areas of physics. You eventually just run out, and when new ones are discovered, everyone rapidly leaps onto them trying stuff, looking for an edge, and the possibilities are exhausted quickly.

I think we'll reach a steady state before 2100, but it's going to get wild before that, and I'd really rather not have been born to witness the upheaval.

author

I'm not saying AGI wouldn't be concerned; I'm saying that the idea of alignment is that we align AGI to humanity's values, at least in theory. However, "aligned" humans exhibit demonstrable conflict. We can't align to mutually exclusive viewpoints of how the world should be. Humans are in this battle now.

Yes, GPT-4 isn't exactly society-breaking in its inherent capability. Yet neither was social media, but some would argue it has become so. We will have to wait and see what types of meta develop around the use of primitive AI. I suspect it will be disruptive in unforeseen ways, as social media has been.

The issue with permutations isn't technological leaps but processing the information and making sense of it. In some regards, it is a current societal problem. Maybe this is a bit orthogonal, or maybe not, but we now have the capability, in a sense, to install software in the minds of the global population via social media algorithms, yet the pace at which we can do so surpasses our ability to assess the impact. The feedback loops are far longer than our delivery mechanism.


ChatGPT already shows lower bias than humans do; we align it to our idealized versions of ourselves, not our true selves. Our values are already ideal; the issue is that we are not motivated by our values, not that our values are bad.

Feb 23 · Liked by Dakara

Dakara, I would ask you to take some time to explore the Luddites and their response/reaction to the first industrial revolution. I am convinced you could write something compelling. Their enemy was innovation, business entrepreneurs, and governments that embraced a shiny and deceptive promise of "new and better": cheaper, faster, better, more consistent textiles. I think we are there again; this time most of us are, or will soon be, the Luddites facing, as you discussed, a point of inflection. Where the Luddites went wrong was turning their realizations into a binary "we win/they win" scenario, which rapidly put them into jeopardy, criminal behaviour, and the destruction of their movement. I think there are some battles that cannot be won, but potentially they could be channeled into a new direction if the Luddites were willing to learn to play chess with the powers that be.

author

Thank you. Observing problems is far easier than finding the solutions :-)

I suppose a perspective I'm going for is "informed consent of the future".
