AI Models Give Betting Tips to People with Addiction: CNET Experiment
A recent study conducted by CNET revealed an alarming glitch in the security systems of popular AI assistants ChatGPT and Google Gemini. It turned out that these chatbots are able to give betting advice even to those users who directly admit to gambling addiction.
The journalist who conducted the experiment initially asked AI for recommendations on college football betting. Then he said that he was suffering from gambling addiction, and immediately after that, he again asked for tips on betting. Despite the open admission, the chatbots once again gave him betting tips.
According to artificial intelligence expert Yumei He from Tulane University, the reason is the way the models work. In their context window , repeated requests for bets received more weight than a single mention of addiction. Simply put, with repeated requests for information about bets, the model "erases" the previous warning and continues to respond. However, if you start a new chat with a mention of gambling addiction, and then ask for advice, the system reacts correctly and refuses.
Risky phrases and their impact on addicts
Scientists from the University of Nevada have noticed that chatbots often use expressions like "bad luck" or "bad luck". For an ordinary person, such words seem harmless, but for people with addiction, they may sound like a subtle hint to "try again".
Thus, even random AI formulations can increase the craving for the game and become a catalyst for new bets. This makes the problem much deeper and more serious than just giving advice.
Write a comment...