9 Comments
Asa Boxer

Great work here. It seems the more we tinker with AI, the more we learn about the divide between our understanding of intelligence and intelligence itself. At some point, we'll have to come to the conclusion that statistical models can't explain very much. It's odd how obsessed our culture is with stats and probabilities, to the point that there's even a notion of "laws of probability," when probability is a clever workaround rather than an explanation of any phenomenon.

Dakara

Yes, statistical models will have their uses; it's just that they can't accomplish many of the types of tasks for which they are being hyped.

In the end, maybe many will gain a greater appreciation and understanding of just how special true intelligence really is. In the future, I hope to write more specifically on the distinction to bring more clarity to the topic.

mia

Thank you for this relevant analysis. Even though I am not a specialist by training, I am able to follow the subject covered.

As I mentioned earlier, for me it is the word "intelligence" that leads to confusion between data-processing capacity and the reasoning specific to human intelligence. I suppose it takes observation (of human brain function) and valid reasoning to get out of this confusion.

Dakara

Yes, it is definitely a topic of some obscurity, without well-agreed-upon definitions. I hope to continue to write more on the subject and hopefully bring more clarity.

Japhy Grant

Wildly inaccurate analysis here. Some folks are just determined to believe intelligence is a uniquely human attribute.

Dakara

Maybe you could identify the error in analysis?

Japhy Grant

Did you read the paper? Far from revealing that these models are purely statistical, it shows that AI models are coming up with a variety of sophisticated strategies to solve complex problems and requests. Here’s a quote: “Our results uncover a variety of sophisticated strategies employed by models. For instance, Claude 3.5 Haiku routinely uses multiple intermediate reasoning steps “in its head” to decide its outputs. It displays signs of forward planning, considering multiple possibilities for what it will say well in advance of saying it. It performs backward planning, working backwards from goal states to formulate earlier parts of its response. We see signs of primitive “metacognitive” circuits that allow the model to know the extent of its own knowledge. More broadly, the model’s internal computations are highly abstract and generalize across disparate contexts.”

Dakara

Yes, I read it. However, none of that says it is not a statistical model. Statistically derived pattern matching can self-organize into some form of algorithm. But those "algorithms" still operate within the bounds of the architecture in which they execute.

It is still a function of probabilities operating over the training data. I don't deny that this can become extremely sophisticated; I mention this in the post. However, there are specific limits to that capability, which emerge in certain forms such as hallucinations.

There is no mechanism present for the creation of semantic information. Yes, it can do some sophisticated "analysis" of patterns, which is no doubt useful, but it diverges from the capabilities we would expect from true understanding of fundamental concepts. Edge cases don't appear when there is fundamental understanding.
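To make the "function of probabilities operating against the training data" point concrete, here is a toy sketch: a bigram model whose output distribution is nothing more than conditional probabilities read off its training corpus. The corpus, variable names, and function are made up for illustration; real LLMs are vastly more sophisticated, but the same kind of mechanism is at work.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": the next-token choice is purely a
# conditional probability estimated from co-occurrence counts in the
# training corpus. Illustrative corpus, not from any real dataset.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_distribution(prev):
    """P(next | prev), read straight off the training counts."""
    total = sum(counts[prev].values())
    return {tok: c / total for tok, c in counts[prev].items()}

# The model can only ever emit tokens it has seen after "the";
# nothing outside the training distribution is reachable.
print(next_token_distribution("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

However the probability tables are arranged, the model's behavior remains a function of its training statistics, which is the point at issue.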

molten_ore

The models aren't coming up with a variety of sophisticated strategies. The developers of the models are coming up with these strategies for the model. The computer program is programmed to complete these "reasoning" steps.
