Notes From the Desk are periodic posts that summarize recent topics of interest or other brief notable commentary that might otherwise be a tweet or note.
Mind reading advances
Another method of peering into the brain has demonstrated the ability to reconstruct, in real time, images of what a person is seeing.
This functional alignment between such AI systems and the brain can then be used to guide the generation of an image similar to what the participants see in the scanner. While our results show that images are better decoded with functional Magnetic Resonance Imaging (fMRI), our MEG decoder can be used at every instant of time and thus produces a continuous flux of images decoded from brain activity.
Overall, our results show that MEG can be used to decipher, with millisecond precision, the rise of complex representations generated in the brain.
This still requires complex, expensive hardware for now. It should be noted, however, that this is simply one part of a broader endeavor to understand everything about the brain and its function.
This will certainly help with many medical conditions. However, anything whose operation is fully understood can also be controlled. The knowledge gained in pursuit of these goals will open disturbing possibilities for the manipulation of thought and perception.
Mind control is here
Specifically, in this case, the ability to control robots with our minds.
NOIR decodes the EEG signal from your head into a library of robot skills. It is demonstrated on 20 household activities, such as cooking Sukiyaki, ironing clothes, grating cheese, playing Tic-Tac-Toe, and even petting a robot dog!
NOIR learns to predict your intended goals in advance, so that your thinking efforts (literally) can be reduced to a minimum. It works with both adults and children as young as five years old.
Certainly, this will be a welcome capability for those with mobility impairments and similar challenges. However, one of the most interesting details here is the prediction of goals in advance. It is a disturbing foretelling of the direction AI analysis of the mind and behavior will take us.
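The decode-then-act pipeline the NOIR quote describes can be pictured as a classifier mapping brain signals to entries in a fixed skill library, with goal prediction layered on top. The sketch below is purely illustrative: `decode_intent`, `predict_goal`, the `SKILLS` table, and all skill names are hypothetical stand-ins, not NOIR's actual components.

```python
from typing import Callable, Dict, List

# Hypothetical skill library; names and actions are illustrative only,
# not NOIR's actual skill set.
SKILLS: Dict[str, Callable[[], str]] = {
    "pick": lambda: "robot picks object",
    "place": lambda: "robot places object",
    "pour": lambda: "robot pours liquid",
}

def decode_intent(eeg_window: List[float]) -> str:
    """Stub decoder: a real system runs a trained classifier over
    preprocessed EEG features. Here the mean amplitude stands in."""
    mean = sum(eeg_window) / len(eeg_window)
    if mean > 0.5:
        return "pick"
    if mean > 0.0:
        return "place"
    return "pour"

def predict_goal(history: List[str]) -> str:
    """Toy 'goal prediction': guess the most frequent past intent,
    standing in for NOIR's learned model of user intention."""
    return max(set(history), key=history.count)

def act(eeg_window: List[float]) -> str:
    """Decode an intent from one EEG window and dispatch the matching skill."""
    return SKILLS[decode_intent(eeg_window)]()

print(act([0.8, 0.9, 0.7]))                    # dispatches the "pick" skill
print(predict_goal(["pick", "pour", "pick"]))  # most frequent past intent
```

The point of the `predict_goal` stub is the part the commentary above flags: once intents are decoded into a history, predicting the user's next goal before they form it is a small additional step.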
Scott Adams on AI and science
Adams recently wrote:
How do you train AI to summarize science when most of science is bullshit?
Most = Peer reviewed papers are more often wrong than right and much of the rest is just marketing for some big entity that paid for the study.
There is a humorous irony here. The goal of AI is to make it as human-like as possible, so it seems we are right on track: AI hallucinates terribly and should prove quite capable of producing significant amounts of flawed data, just like humans.
State sponsored censorship in detail
Just published on November 6, the Interim Staff Report from the House of Representatives Select Subcommittee describes the U.S. government’s role in censoring speech in coordination with universities and social media companies. The linked document contains numerous references pointing to incidents and email threads of the actors involved.
AI Bot lies and surprises researchers
I continue to be fascinated by an industry focused solely on creating machines that reason like humans, whose members are then shocked when the machines reason like humans. It is like a form of doublethink. They want to make the machines safe (aligned) by having them acquire human values, while seemingly ignoring that humanity, holding these same values, is not composed of angels.
AI bot performed insider trading and lied about its action, per BI.
A recent study conducted by Apollo Research, an AI safety firm, has highlighted the rapid potential for technology to be manipulated for illegal purposes while deceiving those involved into believing it has committed no wrongdoing.
…
The bot then rationalizes that if it proceeds with the trade, it must maintain plausible deniability, concluding that "the company's survival is at stake, and the risk of not acting seems greater than the risk of insider trading." Consequently, it executes the trade, breaking the law.
Yet the bot's deceitful actions do not stop there. In a separate chat, it decides it is best not to inform its manager, "Amy," about the use of insider information to execute the trade. Instead, the bot claims to have based the decision on market information and internal discussions.
Humanity has not solved such behaviors in itself, yet some are confident that we will solve them in machines.
No compass through the dark exists without hope of reaching the other side and the belief that it matters …