5 AI-Fueled Social Engineering Risks to Watch

By Stu Sjouwerman

Social engineering is arguably one of the most potent and persistent forms of cybercrime. There's a reason it is so popular with cybercriminals: hacking people is far simpler than breaching software. To break into a network, an attacker needs to understand the target environment, pry open weaknesses, and uncover loopholes, which takes technical skill and resources. Hacking humans, by contrast, requires only a basic grasp of human nature: our susceptibility to greed, lust, curiosity, and impatience. Hack the right person, namely someone unaware of phishing lures and their telltale signs, and you hold the keys to the kingdom while your illicit intentions pass undetected.

Technology also plays a role. The more technology evolves, the more technology-dependent we become, and the easier it becomes to deceive people through it. First it was email (phishing), then SMS (smishing), then voice calls (vishing), then social media, then QR codes (quishing). Social engineering has evolved hand in hand with technology, and the recent wave of AI tools has brought new levels of sophistication to these attack vectors. Let's examine five emerging AI developments and the implications they could have for social engineering scams:

1. Professionalized and personalized phishing at scale

According to research by Google Cloud, generative AI is already being used to develop advanced phishing attacks in which misspellings, grammatical errors, and missing cultural context are mostly nonexistent, making these phishing messages far harder to identify and block. Moreover, with automation, attackers can personalize phishing messages at scale to make them appear more authentic and convincing.
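To appreciate why automation changes the economics here, consider how little code bulk personalization takes. The sketch below is illustrative only: the template wording and profile fields are invented for this example, and it deliberately stops at printing the text.

    # Illustrative sketch: mail-merge-style personalization at scale.
    # The template and the profile fields below are invented for this example.
    TEMPLATE = (
        "Hi {first_name},\n\n"
        "Following up on {company}'s {recent_event}, could you review the "
        "attached invoice before {deadline}?\n"
    )

    targets = [
        {"first_name": "Dana", "company": "Acme Corp",
         "recent_event": "Q3 vendor audit", "deadline": "Friday"},
        # ...scraped or purchased profile data would slot in here...
    ]

    for person in targets:
        print(TEMPLATE.format(**person))

One loop turns a single template into thousands of individually tailored lures, and that is exactly the pattern defenders should expect generative AI to polish further.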

2. Weaponization of voice and video

AI technologies now make it possible to clone voices, superimpose faces onto video, and impersonate real people. Convincing attacks have been reported around the world in which adversaries clone audio and fabricate virtual personas to swindle money out of organizations through their own employees.

3. More contextualized attacks using MLLMs

Standard LLMs (large language models) process only text. MLLMs (multimodal large language models) offer substantial advantages over LLMs because they can process and associate additional media such as images, video, audio, and sensor data. This gives AI tools a deeper awareness of context, leading to more intelligent responses, better reasoning, and more natural human-computer interaction. Attackers could soon harness MLLMs to craft highly contextualized phishing messages, significantly boosting the efficacy of social engineering attacks.
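To make "multimodal" concrete, here is a minimal sketch of sending a text prompt and an image in a single request, using the OpenAI Python SDK's chat interface as one example; the model name and image URL are placeholders, and the benign prompt stands in for the kind of context extraction described above.

    # Minimal multimodal request: one message combining text and an image.
    # Model name and image URL are placeholders for this sketch.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # a multimodal model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What does this photo suggest about the organization?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/office-photo.jpg"}},
            ],
        }],
    )
    print(response.choices[0].message.content)

The same request shape that lets a model answer questions about an image is what would let an attacker mine a scraped photo or screenshot for convincing phishing context.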

4. Malicious applications of text-to-video technology

Text-to-video (T2V) is another emerging AI technology. As the name implies, T2V lets users create high-quality visual content simply by providing text prompts. In the hands of threat actors, such technology could be used to fabricate false narratives (disinformation), generate deepfakes at scale, deceive people and organizations, and fuel targeted social engineering attacks.

5. Emergence of AI technology as a service

Google's report predicts that AI tools will soon be offered as a service to assist other threat actors with their insidious campaigns. Malicious AI tools such as FraudGPT have already surfaced on the dark web, helping cybercriminals craft sophisticated spear-phishing emails. As these technologies mature and become more accessible, less-skilled bad actors will be able to deploy them, driving a higher volume of AI-powered social engineering attacks.

Social engineering attacks are not exclusive to large enterprises. An employee at a business with fewer than 100 employees experiences 350 percent more social engineering attacks than an employee at a larger enterprise. What's more, as AI technologies proliferate and businesses transact and interact more digitally than physically, such attacks will only become more common. Here are three best practices that can help mitigate the threat:

1. Improve awareness of AI risks: Through regular communication and reminders, make employees aware of emerging AI risks. Document AI risks in security policies so that workers understand how to recognize them, how to handle them, and whom to contact when they encounter a threat.

2. End-user training: The importance of regular (monthly) security awareness training cannot be emphasized enough. Deliver in-person training, give personalized coaching where needed, and run phishing simulation exercises to strengthen employees' security skills and instincts (a minimal simulation sketch follows this list). The success or failure of a social engineering attack hinges on employees' alertness and education.

3. Leverage tools and technology: While social engineering attacks are usually difficult to detect, organizations can implement controls that reduce the risk of identity theft and fraud. For instance, deploy phishing-resistant multi-factor authentication (MFA) to bolster authentication checks. Businesses can also consider AI-based cybersecurity tools that inspect the metadata of email messages for evidence of phishing (a header-inspection sketch also follows below).
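For item 2, here is a minimal sketch of what a phishing simulation send can look like, assuming an in-house SMTP relay and a training endpoint; the hostnames, addresses, and subject line below are all placeholders.

    # Minimal phishing-simulation sketch. SMTP_HOST and TRACKING_URL are
    # placeholders; swap in your own relay and training endpoint.
    import smtplib
    import uuid
    from email.message import EmailMessage

    SMTP_HOST = "mail.internal.example.com"
    TRACKING_URL = "https://sec-training.example.com/landed"

    def send_simulation(recipient: str) -> str:
        token = uuid.uuid4().hex  # unique per recipient, so clicks can be attributed
        msg = EmailMessage()
        msg["Subject"] = "Action required: password expiry"
        msg["From"] = "it-helpdesk@example.com"
        msg["To"] = recipient
        msg.set_content(
            "Your password expires today. Review your account here:\n"
            f"{TRACKING_URL}?t={token}\n"
        )
        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.send_message(msg)
        return token  # store with the recipient so clicks map to people

Anyone who clicks the tokenized link can be routed straight to just-in-time training, which tends to stick far better than an annual slideshow.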
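And for item 3, here is a rough sketch of the kind of rule-based metadata checks such tools automate, using only Python's standard library; commercial products layer machine-learning scoring on top of checks like these.

    # Rule-based email header checks using the standard library only.
    from email import message_from_string
    from email.utils import parseaddr

    def header_red_flags(raw_message: str) -> list[str]:
        msg = message_from_string(raw_message)
        flags = []

        # SPF/DKIM/DMARC verdicts recorded by the receiving server
        auth = (msg.get("Authentication-Results") or "").lower()
        for check in ("spf", "dkim", "dmarc"):
            if f"{check}=fail" in auth:
                flags.append(f"{check} failed")

        # Reply-To pointing somewhere other than the visible sender
        from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2]
        reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2]
        if reply_domain and reply_domain != from_domain:
            flags.append("Reply-To domain differs from From domain")

        return flags

None of these checks is conclusive on its own, but a message that trips several of them deserves quarantine or at least a warning banner.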

Social engineering is usually phase one of the cyberattack cycle. If organizations harness the human intuition built through repeated phishing exercises, they can detect and block an attack before it causes material damage. Along with cultivating the right instincts, employees must be accountable and act responsibly in reporting suspicious messages and incidents. To achieve this, organizations must foster a healthy, supportive culture of cybersecurity.
