Discussion about this post

Craig

I don't even like using the word "hallucination."

It seems wrong to anthropomorphize computer errors.

I also like saying "regurgitative" rather than "generative."

Rene Bruentrup

Very interesting read, thank you for sharing. A couple of questions if I may:

1) So, when AI companies talk about "reasoning models," are they flat-out lying? Because AI doesn't reason; it only extrapolates and interpolates data. If that's the case, how does Apple's paper fit in, which says models "break down at a certain level of complexity"? If there is no reasoning, then reasoning cannot break down.

2) Could data quality and quantity in certain areas become so high that the problem no longer matters and the promises can be kept? As you said, any sufficiently advanced pattern matching is indistinguishable from intelligence. I am wondering whether this fundamental flaw will ultimately crash the party or whether we will just steamroll over it.

3) You said the model doesn't know the probability. But shouldn't it be possible to derive it from the underlying statistical process? Couldn't a confidence metric be assigned to the output, indicating where the output lies on the distribution, similar to the R-squared of a simple regression?
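The idea in question 3 can be sketched concretely. A language model does produce per-token probabilities at sampling time, and one could aggregate them into a crude score. The sketch below uses made-up toy probabilities, not real model output; the caveat is that such a score measures how unsurprised the model was by its own text, not whether the text is factually correct, which is why it is not the confidence metric the question hopes for.

```python
import math

def sequence_confidence(token_probs):
    """Geometric mean of per-token probabilities.

    This is the inverse of the sequence's perplexity: high values mean
    the model strongly preferred each token it emitted. It says nothing
    about factual accuracy.
    """
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(log_sum / len(token_probs))

# Toy values for illustration only (hypothetical, not from a real model):
confident = [0.9, 0.8, 0.95, 0.85]   # model strongly preferred each token
uncertain = [0.3, 0.2, 0.4, 0.25]    # many near-equally likely alternatives

print(round(sequence_confidence(confident), 3))  # high score
print(round(sequence_confidence(uncertain), 3))  # low score
```

A fluent but fabricated sentence can score just as high as a true one, since both can consist of tokens the model found very likely.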

