
Criminals Exploit Artificial Intelligence to Target Victims, Warns Security Expert

Leading security expert Paul Bischoff has issued a warning about criminals' growing use of artificial intelligence (AI) to target unsuspecting victims. Bischoff stresses that the threat is not some distant, future concern: criminals are already using AI to perpetrate scams, with deepfake audio proving especially hard to counter. With AI, a criminal can generate a fake voice that sounds remarkably like a loved one and use it to deceive victims into handing over money or information.

The ease and speed of AI voice cloning is alarming: in a matter of seconds, a fake voice can be generated that is virtually indistinguishable from the real one, leaving many people unable to tell whether a caller is genuine. As precautions, avoid answering calls from unknown numbers, agree on a safe word to verify a caller's identity (a simple shared-secret check, sketched below), and stay alert for signs of a scam, such as urgent requests for money or personal information.
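A safe word is simply a shared secret agreed in person, offline. Purely for illustration (the article describes a verbal habit, not software, and the safe word below is a made-up example), the verification amounts to a secret comparison:

```python
import hmac

# Hypothetical example: the family's agreed safe word, shared only offline.
EXPECTED_SAFE_WORD = "blue-harbor"

def caller_is_verified(spoken_word: str) -> bool:
    """Compare the caller's stated safe word against the agreed one.

    hmac.compare_digest performs a constant-time comparison; overkill
    for a family safe word, but idiomatic for any secret check.
    """
    return hmac.compare_digest(spoken_word.strip().lower(), EXPECTED_SAFE_WORD)

print(caller_is_verified("Blue-Harbor "))  # True: normalization handles case and spacing
```

The point of the sketch is the asymmetry it captures: a voice can be cloned from public audio, but a secret agreed offline cannot be guessed from a recording.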

Deepfake voices are not the only AI threat to watch for, however. Bischoff also warns that criminals can hijack AI chatbots to extract private information or deceive unsuspecting victims. A chatbot can be turned to phishing, coaxing people into handing over sensitive data such as passwords, credit card numbers, and Social Security numbers. And because AI often conceals where its information comes from, spotting a potential scam becomes even harder.
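To make that warning concrete, here is a minimal, hypothetical sketch of a keyword filter that flags chat messages asking for sensitive data. The article contains no code; the pattern list below is an illustrative assumption, not a vetted ruleset:

```python
import re

# Hypothetical red-flag patterns for requests targeting sensitive data.
SENSITIVE_PATTERNS = [
    r"\bpassword\b",
    r"\bcredit\s*card\b",
    r"\bsocial\s*security\b",
    r"\bssn\b",
    r"\bverification\s*code\b",
    r"\bone[-\s]?time\s*(code|password)\b",
]

def flag_sensitive_request(message: str) -> list[str]:
    """Return the red-flag patterns a chat message matches, if any."""
    lowered = message.lower()
    return [p for p in SENSITIVE_PATTERNS if re.search(p, lowered)]

if __name__ == "__main__":
    msg = "To continue, please confirm your password and credit card number."
    hits = flag_sensitive_request(msg)
    if hits:
        print(f"Red flags: {hits}")  # Any match warrants extra scrutiny.
```

No keyword list catches every phishing attempt, of course; the sketch only shows that the warning signs Bischoff describes are mechanical enough to check for.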

The U.S. Sun recently shed light on the dangers of AI romance scam bots, which prey on people seeking romantic connections online. These chatbots are designed to mimic human conversation and can be hard to detect, but there are warning signs: quick, generic responses and attempts to move the conversation to an external platform. Any request for personal information or money should be treated as a red flag.
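Purely as an illustration (the article describes judgment calls, not software), one way to reason about these signals is as a simple weighted score. The flag names, weights, and threshold below are assumptions invented for the sketch:

```python
# Illustrative weights for the warning signs described above.
RED_FLAGS = {
    "asks_for_money": 3,            # any request for money or gifts
    "requests_personal_info": 3,    # addresses, IDs, financial details
    "pushes_external_platform": 2,  # "let's move to another app"
    "instant_generic_replies": 1,   # fast, templated-sounding answers
}

def romance_scam_score(observed: set[str]) -> int:
    """Sum the weights of the warning signs observed in a conversation."""
    return sum(weight for flag, weight in RED_FLAGS.items() if flag in observed)

signals = {"asks_for_money", "pushes_external_platform"}
score = romance_scam_score(signals)
print(f"Scam risk score: {score}")  # e.g. treat >= 3 as a strong red flag
```

The uneven weights reflect the article's own emphasis: a single strong signal, such as a request for money, should outweigh any number of merely suspicious ones.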

The ubiquity of AI is a growing concern for internet users. It already powers chatbots used by millions of people, and it will only become more common across apps and products. Google's Gemini and Microsoft's Copilot are already built into products, while Apple's forthcoming iPhones will ship with Apple Intelligence and integration with OpenAI's ChatGPT. It is therefore crucial for individuals to understand how to stay safe while using AI.

Sean Keach, Head of Technology and Science at The Sun and The U.S. Sun, emphasizes that deepfakes are a rising threat to online security. While deepfake quality keeps improving, awareness is growing too, and investment in software that detects AI-generated content is increasing. Social media platforms are working to flag such content for users, but personal vigilance remains essential: scrutinize online videos, ask whether what they show is plausible, and consider who might benefit from creating them.