SiliconAngle reports that threat actors have been leveraging interest in OpenAI's ChatGPT chatbot in new sophisticated investment scams.
Threat actors are sending phishing emails with fake OpenAI and ChatGPT graphics to lure targets into opening a link that redirects to a ChatGPT imitation, which offers fraudulent opportunities to earn up to $10,000 monthly using the platform, according to a report from Bitdefender.
After introducing its purported role in financial market analysis, the fake ChatGPT chatbot asks about targets' current income and other financial details, promises daily earnings of $420, and then requests targets' email addresses. Victims are also asked to transfer $266 during the interaction with the fake chatbot.
"Scammers using new viral internet tools or trends to defraud users is nothing new. If you're looking to test out the official ChatGPT and its AI-powered text-generating abilities, do so only using the official website," the researchers said.
Aside from featuring over 40 million signals from the DNS Research Federation's data platform and the Global Anti-Scam Alliance's comprehensive stakeholder network, the Global Signal Exchange will also contain more than 100,000 bad merchant URLs and one million scam signals from Google.
While some threat actors set up fraudulent disaster relief websites as part of phishing attacks aimed at exfiltrating financial details and Social Security numbers from individuals seeking aid, others impersonated Federal Emergency Management Agency assistance providers and filed fraudulent claims used to steal relief funds and personal data.
Malicious GitHub pages and YouTube videos containing links to purported cracked office software, automated trading bots, and game cheats have been leveraged to facilitate the download of self-extracting, password-protected archives.