AI/ML, Data Security

ChatGPT use raises cybersecurity concerns

AI is increasingly used as a personal advisor, but many people are unknowingly exposing themselves to security risks by asking ChatGPT sensitive questions, according to new research by NordVPN, TechRadar reports.

While common queries, like how to avoid phishing scams or pick a secure VPN, highlight the growing public interest in cybersecurity, users are also feeding AI tools personal data, including passwords and banking details, that could be exploited by malicious actors. Some questions even reflect fundamental misconceptions, such as fears of hackers stealing thoughts or eavesdropping through "the cloud" during storms.

"What may seem like a harmless question can quickly turn into a real threat," warns Marijus Briedis, CTO at NordVPN, who stresses that attackers can exploit this data through social engineering and phishing.

With most AI platforms retaining conversation history to improve model training, there is a real risk of that information being extracted or misused. The findings underscore the urgent need for digital literacy and cautious AI use.

