Sleight of mind.
While his blog posts do tend to carry the distinct scent of gearhead, so much of what technology expert Shelly Palmer blogs about is more than useful for us adheads as well.
Case in point: Tuesday’s cheery bomb of a headline, “ChatGPT may make you hallucinate, too.” It’s all about a new study showing that “conversational AI powered by Large Language Models amplifies false memories in witness interviews.”
Translation: AI’s Jedi mind tricks really do work on a big slice of the audience.
The link is below, but the nub is straightforward. After showing a film depicting a crime to a reasonably sized sample of wetware souls, an AI, acting as an investigator, asks, “Did you see the gun?”
Of course, there was no damned gun. But the memory is well and confidently planted, especially when the LLM follows up with “what color was it?”
The authors back this up with the usual stats fest, but the real takeaway seems equally clear and ominously dark. Especially when we factor in the increasing evidence that AI is prone to “hallucinate” on its own, either by training on flawed sources or as a reflection of coded-in bias.
Now it turns out these Skynet coulda-bes can pass on the joy, creating false perceptions based on what I’d speculate is the combination of inherent authority (“data does not lie”), fluidity and fluency (“damn, it sounds so real”) and human doubt-propelled gullibility (“if it says there was a gun, must be so”).
Could the same suggestive tricks find their way into human-machine marketing interactions like, oh, AI-enabled search or highly suggestive chatbots?
Let’s ask the machine behind the curtain: “Recent studies on AI's ability to trigger false witness recollections raise important concerns about the reliability of communications in the future,” particularly “greater difficulty in distinguishing between genuine and fabricated information.”
Followed by: “While AI offers powerful capabilities for advertisers, it's crucial to implement these technologies responsibly. The focus should be on creating more relevant, engaging ad experiences that provide value to consumers, rather than exploiting AI's potential for manipulation or deception.”
Gee, thanks, Perplexity.ai. Couldn’t have put it any better. Or is that any worse?
Study link: https://arxiv.org/html/2408.04681v1