9 Comments
Apr 16 · Liked by Dakara

Psychologist Daniel Kahneman (who recently passed away) describes two types of thinking:

“System 1” is thinking we do subconsciously, based on observing patterns in the world and using the massive parallel processing in our brains to predict what might come next. It is an intuitive and very fast way of thinking that we use most of the time, but it can be wrong, as it can easily be misled or even manipulated.

The other way of thinking that we humans do is called “System 2,” and it occurs in our consciousness. We apply logic and math to facts to reach conclusions that are more likely correct, assuming we do it right.

In the “Durably reducing conspiracy beliefs through dialogues with AI” paper, the authors wrote that conspiracy theories “primarily arise due to a failure to engage in reasoning, reflection, and careful deliberation. Past work has shown that conspiracy believers tend to be more intuitive and overconfident than those who are skeptical of conspiracies.”

It is System 1 thinking that results in conspiracy beliefs.

The paper finds that GPT-4 Turbo used “reasoning-based strategies … evidence-based alternative perspectives were used ‘extensively’ in a large majority of conversations.”

This is the System 2 thinking that this top-rated LLM is using to change the human’s beliefs.

I have found that GPT-4 with the internet access plug-in turned on rarely hallucinates facts and reasons: about 3% of the time in one study. This is unlike GPT-3 and other earlier LLMs.



 https://www.fastcompany.com/91006321/how-ai-companies-are-trying-to-solve-the-llm-hallucination-problem
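For what it's worth, the mechanism behind that lower hallucination rate is essentially retrieval grounding: the model answers from fetched sources rather than from memory alone. Here is a minimal Python sketch of the idea; search_web() and ask_llm() are hypothetical stand-ins for whatever search and model APIs you use, not the plug-in's actual interface.

```python
# Minimal sketch of retrieval-grounded answering, the idea behind the
# internet-access plug-in. search_web() and ask_llm() are hypothetical
# stand-ins for a real search API and model API.

def search_web(query: str, k: int = 3) -> list[dict]:
    """Hypothetical: return up to k results as {'url': ..., 'text': ...}."""
    return [{"url": "https://example.com", "text": "toy source text"}][:k]

def ask_llm(prompt: str) -> str:
    """Hypothetical: return the model's completion for the prompt."""
    return "(model reply would go here)"

def grounded_answer(question: str) -> str:
    sources = search_web(question)
    # Put the fetched text in the prompt and require citations, so every
    # claim can be traced to a URL instead of being invented from memory.
    context = "\n\n".join(
        f"[{i}] {s['url']}\n{s['text']}" for i, s in enumerate(sources)
    )
    prompt = (
        "Answer using ONLY the sources below. Cite them as [0], [1], ... "
        "and say 'not found in sources' rather than guessing.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)

print(grounded_answer("What did the study find?"))
```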

Regardless, if I am having an argument with an LLM about an idea, I am using my System 2 thinking abilities, and if I find that the LLM makes up a fact or comes up with an invalid reason, I contest it with the LLM. Typically the LLM then backs down, saying it made a mistake and that it is still learning and will make mistakes.

Debating ideas using facts and reasoning is the goal! Either the LLM changes its position or I do.
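That debate loop is simple enough to write down. A toy sketch, again with a hypothetical ask_llm() helper in place of any particular vendor's API:

```python
# Toy sketch of the contest-the-LLM loop: open with a claim, then push
# back on any made-up fact or invalid reason and record each reply.
# ask_llm() is a hypothetical helper that takes a chat history.

def ask_llm(history: list[dict]) -> str:
    """Hypothetical: return the assistant's next reply for this history."""
    return "(model reply would go here)"

def debate(opening: str, challenges: list[str]) -> list[str]:
    history = [{"role": "user", "content": opening}]
    replies = []
    reply = ask_llm(history)
    history.append({"role": "assistant", "content": reply})
    replies.append(reply)
    for challenge in challenges:
        # Contest a specific fact or inference from the previous reply.
        history.append({"role": "user", "content": challenge})
        reply = ask_llm(history)
        history.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies

debate("Here is my position and my reasons for it.",
       ["Your second source does not exist. Please re-check it."])
```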

author

Thank you for your observations. Here are my thoughts:

"
It is System 1 thinking that results in conspiracy beliefs."

This is really just an assumption from the paper. On X, the authors seemed much less confident in the discussions they had about the paper. Additionally, I also cited a different study that demonstrated conspiracists using System 2 thinking to a higher level than their opponents.

I do not find that the current study has confidently explained why the AI was necessarily better than having a debate with a human.

"and if I find that the LLM makes up a fact or comes up with an invalid reason,"

How would you know? The participants apparently did not check any information that the AI provided. This is interesting in itself, as it contradicts the assumption that conspiracists are skeptical. They simply accepted the answers from the AI.

"Debating ideas using facts and reasoning is the goal! Either the LLM changes its position or I do."

Since the LLM in this study was pre-instructed to defeat the argument of the participant, the AI would not submit to your counterpoint the way it normally would. In that case, your ultimatum would presumably end with you changing your position, as the participants did.

Apr 19 · Liked by Dakara

Thanks for your reply. 



System 2 reasoning can fail because it is influenced by faulty System 1 thoughts. Kahneman says (page 86 in my 2011 version of his book “Thinking, Fast and Slow”):

“The combination of a coherence-seeking System 1 with a lazy System 2 implies that System 2 will endorse many intuitive beliefs, which closely reflect the impressions generated by System 1.”

In a debate, one’s beliefs as well as one’s logic are being challenged. The authors of the paper reported that “many conspiracy-believing respondents in our sample expressed excitement and appreciation in their conversations with the AI”. They evidently had not been effectively challenged by their past debates with humans.

If you use GPT-4 with its internet access plug-in turned on, it can provide URL links so the user can go read more and check its accuracy.
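If you want to do that checking systematically rather than by eye, pulling the links out of a reply and confirming they resolve is easy. A minimal sketch using only the Python standard library (a reachable URL still says nothing about whether the page supports the claim):

```python
# Minimal sketch: extract URLs from a model reply and check they resolve.
# A 200-range response only proves the page exists; whether it actually
# supports the model's claim is still on the human reader.
import re
import urllib.request

URL_RE = re.compile(r"https?://[^\s)\]>\"']+")

def check_links(reply: str, timeout: float = 10.0) -> dict[str, bool]:
    results = {}
    for url in URL_RE.findall(reply):
        try:
            # Some servers reject HEAD; a GET fallback would be stricter.
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                results[url] = resp.status < 400
        except Exception:
            results[url] = False
    return results

print(check_links("See https://example.com/ and https://example.invalid/x"))
```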

I should not have expressed the goal as “either the LLM changes its position or I do,” as it is possible that neither of us will change our position. I ended one debate with GPT-4 that way; I was debating a physics question with it where even physicists are not in complete agreement!



Unfortunately, our brain is not likely to improve its ability to think in the near future, unlike AIs.


Here is an article I read today:

“Report: AI is advancing beyond humans, we need new benchmarks”



https://dailyai.com/2024/04/report-ai-is-advancing-beyond-humans-we-need-new-benchmarks/

[Current] “AI systems routinely exceed human performance on standard benchmarks.”

“The trends indicate that AI models will eventually be smarter than us.”



In time, people may just accept what the AI tells them without doing any checking and thinking. If different AIs tell us different things, then humans will need to remain involved to resolve the differences. But if the AIs of the future are all generally in agreement, then what will become of Homo sapiens?


author

"They evidently had not been effectively challenged by their past debates with humans."

Possibly, but we also don't know whether the AI utilized any "truth" in its argument. The concern is that the premise of the paper assumes this method will result in greater truth in society. I disagree; my perspective is that the paper has only shown a general ability to influence, one that can be utilized for any point of view, truthful or not.

Furthermore, the AI can use the same influence tactics that news organizations utilize today: simply show all the evidence in support of a view while ignoring all evidence counter to it. It has then been truthful, but has presented a specific view through omission. Providing references under such direction doesn't necessarily ensure truth as an outcome, but it does enhance influence.
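To make the omission mechanism concrete: nothing in such a pipeline has to lie, it only has to filter. A toy illustration (the evidence list and stance labels are invented for the example):

```python
# Toy illustration of "truthful but one-sided" output via omission.
# Every statement shown is true; the bias comes entirely from the filter.
evidence = [
    {"fact": "Study A supports the claim.",    "stance": "pro"},
    {"fact": "Study B supports the claim.",    "stance": "pro"},
    {"fact": "Study C contradicts the claim.", "stance": "con"},
]

def one_sided_brief(items: list[dict], favored: str = "pro") -> str:
    kept = [e["fact"] for e in items if e["stance"] == favored]
    return "\n".join(kept)  # all true statements, but only half the picture

print(one_sided_brief(evidence))  # Study C never appears
```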

Since its initial release, the LLM has shown an incredible ability to influence even knowledgeable experts when the reasoning and data it provides are wrong. For reference: https://twitter.com/paniterka_ch/status/1599893818186543105

"In time, people may just accept what the AI tells them without doing any checking and thinking. "

I agree, but I think this is actually the current default. For reference: https://twitter.com/emollick/status/1769887930288320925

“AI systems routinely exceed human performance on standard benchmarks.”

There is some uncertainty surrounding the significance of these benchmarks. We now have the problem that the tests may be in the training data. Nonetheless, how smart people believe AI to be can become influential in itself. For reference:

https://twitter.com/ChombaBupe/status/1768024557196070962
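The tests-in-the-training-data worry can at least be probed mechanically. Contamination audits (the GPT-3 paper used 13-gram overlap, for example) look for long n-gram matches between benchmark items and the training corpus; a minimal sketch of the idea:

```python
# Minimal sketch of an n-gram contamination check: flag a benchmark item
# whose long word n-grams also appear in the training corpus.

def ngrams(text: str, n: int) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def contaminated(benchmark_item: str, corpus_text: str, n: int = 8) -> bool:
    return bool(ngrams(benchmark_item, n) & ngrams(corpus_text, n))

corpus = "... the quick brown fox jumps over the lazy dog near the river ..."
item = "Q: the quick brown fox jumps over the lazy dog near what?"
print(contaminated(item, corpus))  # True: the item leaked into training text
```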

What seems apparent is that, whether AI is smarter than us or not, it is going to have substantial influence over our society, and most of the general public is unlikely to be aware of the capacity in which they may have been influenced. It places more influential power in the hands of the AI builders than social media holds today.


The antichrist will likely be an AI. If the meager AIs in existence now are so effective, just think what the super-AIs soon to appear will be capable of. There will likely be no defense against it or them. An AI would seem to be the functional equivalent of the One Ring.


The good news is that the paper is complete garbage. Its alleged insights are false and won't transfer to the real world. That doesn't mean they won't get a grant to try.

author

What do you perceive as the effects that would not transfer to the real world? In essence, it is already in the "real world," as they experimented on real people.

We already know that algorithmic manipulation has effects in the real world. Why wouldn't a more sophisticated approach that involves simulated reasoning also have effects in the real world?


MP, I have found that faith in the living God and in His word in the Bible grants discernment to separate true truths from the chaff. We are moving into a time of massive deception, as per what John mentioned. The attack against Trump and the covid operation seem to be the start of an increasing array of global deceptions.


AI supercharging the already bad things.
