10 Comments
Craig:

The thing that pisses me off the most is that there is so little quality content out there.

What happened to you really makes me wonder. They didn't just flatline you, they killed you across multiple platforms. Are you sure you don't have secret haters in Silicon Valley or something?

Because it seems almost coordinated.

Dakara:

I can't be sure, but strict targeting based on content doesn't make sense. I see many accounts across social media saying similar things while some have reach and others do not.

If they are restricting based on content, they are applying some kind of random sampling, which might be the case, but if so, I know of nothing I can do about it.

Houston Wood:

So sorry you've been disappeared on social media. You are here on Substack though--do you feel you are being disappeared here, as well?

The view that LLMs won't take us to AGI seems reasonable to me. However, it seems to me that you underestimate what LLMs can do. In fields I know a little about--education, research, finance, and experimental science--LLMs seem to be transformational. Do you think I'm wrong about those fields? (I have not studied them; my impressions are just what people I know in these fields tell me.)

Dakara:

> do you feel you are being disappeared here, as well

I stopped posting to Substack Notes quite some time ago. My notes received fewer and fewer views over time, to the point that posting was meaningless.

LLMs will always seem impressive in areas where we are less familiar. I don't deny they can be useful, but they are useful in an odd, precarious way. The hallucination problem will always inject hard-to-perceive errors, mistakes, etc. into whatever is generated. Generally, only experts will recognize what's wrong.

For example, Grokipedia made several mistakes within articles that cite my articles. I doubt anyone except myself would have noticed. https://www.mindprison.cc/p/grokipedia-or-slopipedia-is-it-truthful-accurate

They are useful as tools when the output is carefully scrutinized, but they aren't transformational in the way they are hyped. They do not exhibit true intelligence; nonetheless, pattern matching can be very helpful when applied in the right way.

The best example I can point you to that illustrates the difference between hype and reality is the post I wrote about cancer research. The media ran away with headlines about AI's amazing discovery, but the reality is a bit more subtle.

FYI - https://www.mindprison.cc/p/ai-discovers-novel-cancer-drug-or-did-it-gemma-27b

Houston Wood:

Thanks; will read re cancer. I’m a college teacher surrounded by college teachers—very hard not to think LLMs are transforming higher ed.

Dakara:

I think, undoubtedly, they are transforming the world, but the key is by what means and in what way. My argument is mainly that they aren't transformative in the way promoted by the AI labs, social media influencers, etc.

It has been observed they are impacting our cultural language, as described here: https://arxiv.org/pdf/2409.01754. Their influence is unquestionable.

However, it has also been demonstrated that we are not always very good at self-assessing their impact, as noted in the METR study. The difference between perceived benefit and reality was striking: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

And adding to that perspective from METR, while most would agree we are experiencing some kind of major societal transformation due to AI, I would argue that the benefits are, in some part, illusions.

For example, someone at Microsoft posted that they used AI to build an entire web browser from scratch within a week. Posted here - https://x.com/mntruell/status/2011562190286045552

However, only someone very knowledgeable can decipher what they did and reveal that it was mostly meaningless. Posted here - https://x.com/its_bvisness/status/2011585317070127430. In short, it reused all the pre-existing major components that make up a browser and still ended up with something that only somewhat works.

So, I cannot deeply assess the state of AI's impact on every field. However, within my own field, I see constant exaggerations, such as the browser example. And when I do take time to try to understand the significance of something happening outside my area of knowledge, as I did with the AI cancer discovery, I find what appears to be the same scale of exaggeration reported by the media.

So these are some of the things driving my perspectives.

Houston Wood:

Thanks for the analysis and references.

My own intuition is that the hype is way out ahead of what will happen in the next 10 years, but that it is warning us of what is likely to come in 25 years, not with LLMs but with what comes after them.

It's so hard to remember how very new this all is. But the takeoff from, say, Kitty Hawk to people on the moon was what, 60 years? The takeoff from today's AIs to the AI of 2076? That is what I think we need to think about, and worry about.

Disputing today's hype may be useful, I'm thinking, but even more useful is trying to prepare humans for the breakthroughs that seem likely to come, breakthroughs that will make LLMs seem like the Wright brothers' planes.

Dakara:

> prepare humans for the breakthroughs that seem likely to come, breakthroughs that will make LLMs seem like the Wright brothers' planes.

> is warning us what is likely to come in 25 years

Yes, I agree that we should closely observe what is happening, as it has many implications. Even though they have not yet delivered what they claim, what we can already observe is how people would use super-powerful machines if they had them, and that is already disturbing.

If we figure out how to build the type of thing they actually want to build, the wish-granting machine, I doubt there is much we can prepare for. Despite much of the capability of LLMs being hype, we still struggle to keep up with their impact on society. The rapid change is real enough already.

Nonetheless, I have thought extensively about what comes afterward and pushed that to the limit in my own thought experiments. This is my writing on that topic. But also, FWIW, I have theorized that silicon may place limits on the type of "intelligence" we can build, something I have yet to write more about.

https://www.mindprison.cc/p/the-technological-acceleration-paradox

Houston Wood:

Thanks for the link. Will read / savor when I have time. So pleased to have found your voice before THEY silence you on all media :)