The public trusts AI as a credible source
Notes From the Desk: No. 27 - 2024.02.26
Notes From the Desk are periodic posts that summarize recent topics of interest or other brief notable commentary that might otherwise be a tweet or note.
Public trusts AI as a credible source
A recent study offers some surprising results for those who spend a lot of time following developments in AI. It suggests that the general public perceives no credibility issues with AI-generated text.
Perhaps this is simply because the public actually knows very little about AI, its potential for bias, and so on. Nonetheless, the implications are disturbing, as we know how easily this can be used to shift public opinion without people's awareness.
This is an intriguing result: LLM-generated texts seem to outperform human-generated text with regard to clarity and engagement, showing even a higher message credibility than human generated text. At the same time, people do not seem to differentiate between human and LLM-generated texts with regards to source credibility: They perceive the texts as equally competent and trustworthy. The UI has no influence on credibility perceptions at all. So even when people know the origin of the texts they do not doubt the credibility of ChatGPT generated texts.
Furthermore, public opinion and ongoing media coverage is increasingly recognizing generative AI’s proficiency in handling and conveying information. Undoubtedly, LLMs have demonstrated impressive progress in generating high-quality textual content. Nonetheless, it remains a fact that these systems carry an inherent potential for errors, misunderstandings, and even instances of generating content that deviates from reality. Surprisingly, these risks appear to be either underestimated by users or are not substantially influencing their perceptions of credibility.
It would be interesting to see whether this changes at all after Gemini's recent public exposure, but then again, perhaps that awareness still will not permeate the general public.
AI simulation of events led to nuclear escalation
Add this to the list of evidence that continues to call into question the ability to “align” models for consistent and expected behaviors. From the abstract of the study Escalation Risks from Language Models in Military and Diplomatic Decision-Making:
We find that all five studied off-the-shelf LLMs show forms of escalation and difficult-to-predict escalation patterns. We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons. Qualitatively, we also collect the models' reported reasonings for chosen actions and observe worrying justifications based on deterrence and first-strike tactics. Given the high stakes of military and foreign-policy contexts, we recommend further examination and cautious consideration before deploying autonomous language model agents for strategic military or diplomatic decision-making.
It is important to note that risk does not begin only with AGI; it is simply unpredictable behavior × capability.
Militarization of AI advances
The fact that AI continues to demonstrate unpredictable behavior is precisely the root of these risk scenarios, which should make the following all the more concerning: advocacy for allowing AI to make the decision to take lethal action.
In a speech in August, US Deputy Secretary of Defense, Kathleen Hicks, said technology like AI-controlled drone swarms would enable the US to offset China's People's Liberation Army's (PLA) numerical advantage in weapons and people.
Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.
From the article “The Pentagon is moving toward letting AI weapons autonomously decide to kill humans.” How we use AI on ourselves remains a paramount issue before we ever get to what AI will do on its own.
Technological acceleration paradox
Mind Prison’s new essay explores what might happen if we actually achieve safe AGI and exponential acceleration. It examines a concept that receives almost no discussion or debate in AI circles: the faster we accelerate, the less likely we are to achieve the results people envision.
“Holy shit. That was a good article. I don’t agree with everything you said(I have some minor issues), but you cut down the issues to some of the most important parts.” — Ecstatic-Nework-917, Reddit
“Fantastic article … Really great insights and interesting arguments.” — Geahk, Reddit
Read the full essay - The Technological Acceleration Paradox and let me know your perspective in the comments.
No compass through the dark exists without hope of reaching the other side and the belief that it matters …
In the quest for a perfect system where mistakes of judgment will be eliminated, we will actually wind up starting WWIII by a creation of man that pretends to take all the responsibility away from man having to actually make difficult decisions. It's the ultimate passing of the buck.