14 Comments
Steve Martin

As a paying subscriber to OpenAI's ChatGPT, I was excited about taking GPT-4o out for a spin.

I live in Japan and have spent decades (and way too much money) offshore fishing around the Izu Islands and peninsula just south and west of Tokyo, so I asked for a photographic-quality image, taken in 2024, of an ocean sunset as shot from Matsuzaki overlooking Suruga Bay. That bay is big enough that even on a good day, one can barely see Shimizu and points south on the opposite side.

OpenAI promptly cranked out what looked like an Edo-era woodblock print of a tiny, cove-laden harbor.

Even if this were a "photograph", it would in no way resemble the architecture of 2024 or the nature of Suruga Bay. I can see the A.I. did not even bother to look at a map or triangulate latitude and longitude. If this is the best the latest and greatest can do with visuals, I shudder to imagine a similar level of accuracy when prompting for a text-only "answer".

Unfortunately, the developers are just clever enough to realize that most people are either stupid or lazy enough to prefer a quick-and-dirty "good enough" for business or pleasure over quality or accuracy. Combine that with an ominous prophecy from the late Stephen Hawking: "Greed and stupidity will mark the end of the human race."

"Nefarious uses" is the key word in your well thought out essay. As you pointed out with the changing group dynamics of A.I. personnel, humanity has never successfully aligned itself with its own worst nature, much less with its tools.

Despite it all, cheers from Japan.

Dakara

Stephen Hawking may have gotten it right. Over the past year it has become clearer that the AI hallucination problem is a fundamental flaw in the architecture. This current AI tech does not reason or think; it presents a very convincing illusion of reason.

It is unfortunate that it is so easily used for activities that are detrimental to society, yet so difficult to apply to those that are beneficial.

Unless a surprising breakthrough comes along, I don't expect the billions to trillions of investment dollars to keep pouring into this technology. It has valid uses, but they don't justify numbers that large. People are still betting on the creation of a wish-granting machine.

Steve Martin

Hi Dakara (LOL ... is that the Japanese meaning?)

Regarding your comment: BINGO! And part of that circles back to Hawking's quote.

Maybe most humans are also only capable of the illusion of reason.

That being said, I don't even believe reason is the be-all and end-all. Without the more fundamental empathy that binds social primates, reason is just another weaponizable tool.

Cheers from Japan!

Dakara

"Maybe most humans are also only capable of the illusion of reason."

To some degree, yes. We desire to be logical; however, that desire is ironically itself an emotional need. We often use logic to justify the outcome we desire. At the core, we are all emotional beings first and logical second.

Somewhat related: a section in my recent essay on alignment questions whether alignment is impossible in silicon and exists only in biological form.

https://www.mindprison.cc/i/142155375/human-values-are-innate-biological

David Vandervort

The "Users prefer wrong answers" thing seems to be a faulty analysis unless the users knew that the answers were wrong before selecting them. And in that case, I would say that normal human perversity was in play. THAT is something AI will likely never beat us on.

Dakara

They don't knowingly prefer wrong answers. From the study: "users make occasional mistakes by preferring incorrect ChatGPT answers based on ChatGPT’s articulated language styles, as well as seemingly correct logic that is presented with positive assertions."

It is a statement about the power of LLMs' persuasive language: wrong answers may seem more convincing than right ones.

anzabannanna

Humans in 2024 have a strong aversion to strict epistemology. There are almost no exceptions. Belief >>> Truth.

Steve Martin

Oooo! Nicely put.

My only hedge would be to strike 'in 2024', since that aversion nails part of what it means to be a social (sometimes) primate ... more likely a swarming one.

name12345

Maybe the silver lining is that GenAI will get a significant chunk of people to question everything? Now that I can't trust anything on the internet (AI critiques included; I personally verified the pizza glue thing before touting it to friends), maybe more people will question previously unquestionable status quo concepts, too.

Dakara

That's the hopeful outlook. However, the first paper challenges that notion: when it is not completely apparent that an answer is wrong, convincing language often seems to win in the end.

name12345

Right, midwits and below will be wooed by GenAI, but they are generally followers anyway. Maybe this will be an intellectual revolution for the higher-IQ leaders of society (though they will obviously face fierce competition from the existing mafias/oligarchies).

Steve Martin

Hi name ... a bit of a hedge on that. Some research shows that those scoring above average on I.Q. tests (the professional/manager class) are also more likely to slip into groupthink, implementation of the plandemic being one example.

I suspect the biggest confounding variable is the elephant in the room ... the hubris that comes with caste, and the illusion of meritocracy.

Although merely anecdotal, during an academic career at a Japanese college, I found the cleaning lady to be far more intelligent, articulate, and wise than my colleagues. Shared many a cold mug of Japan's finest with her. JMHO.

Cheers from Japan.

Francis Turner

The other problem with LLMs and wrong answers is that sometimes the wrong answer is actively dangerous. Glue in pizza is probably not that deadly, though not good, but it is easy to think of cases where things go much, much more wrong, as I wrote here: https://ombreolivier.substack.com/p/llm-considered-harmful?sd=pf

Dakara

Thank you! That is well written and perfectly on point. I inserted a link to your article within this post.

I agree with your conclusion that this is unsolvable; FYI, I recently wrote a related analysis here: https://www.mindprison.cc/p/the-question-that-no-llm-can-answer
