
When AI Goes Rogue: How Agentic AI Will Reshape Social Engineering Attacks

9 min read · May 21, 2025

Cybercriminals are rarely late to the game when it comes to new technologies. In fact, they’re often among the first to experiment with emerging technologies. They do not have the limitations that govern legitimate organizations: they can move quickly, take risks, and operate without concern for ethics or legal frameworks.

There is currently a significant shift in artificial intelligence: Agentic AI.

Agentic AI describes systems that go beyond offering suggestions or supporting tasks based on human input. Instead, they are capable of planning and executing tasks on their own.

Within this ecosystem, autonomous AI agents collectively plan, make decisions, and act in collaboration with one another. They receive feedback, learn, adapt, and try again, all without major human involvement.

In simple terms:

Agentic AI refers to the full system of AI agents working together. These systems can make decisions, adapt from feedback, and complete goal-oriented tasks.

AI agents are the individual components within that system. Each agent is responsible for specific tasks.

Agentic workflows are the structured processes that guide AI agents on how to collaborate. The workflow enables multiple steps of a task to take place from start to finish with limited to no human intervention.
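
To make these three terms concrete, here is a minimal, hypothetical sketch in Python of how a planner, workers, and a critic could form one agentic workflow. All class names are invented for illustration, the example goal is a benign customer-service task, and the “reasoning” is stubbed out; a real system would back each agent with a language model and tools.

```python
from dataclasses import dataclass


@dataclass
class Task:
    description: str
    result: str | None = None
    attempts: int = 0


class PlannerAgent:
    """Breaks a high-level goal into concrete sub-tasks (stubbed here)."""

    def plan(self, goal: str) -> list[Task]:
        return [
            Task(f"Classify the request: {goal}"),
            Task("Look up the relevant knowledge-base article"),
            Task("Draft a reply to the customer"),
        ]


class WorkerAgent:
    """Executes one sub-task; a real LLM or tool call would go here."""

    def execute(self, task: Task) -> str:
        task.attempts += 1
        return f"[done after {task.attempts} attempt(s)] {task.description}"


class CriticAgent:
    """Reviews a result and decides whether the worker should retry."""

    def approve(self, result: str) -> bool:
        return result.startswith("[done")  # trivial placeholder check


def run_workflow(goal: str) -> list[str]:
    """The agentic workflow: plan, execute, check, retry, repeat."""
    planner, worker, critic = PlannerAgent(), WorkerAgent(), CriticAgent()
    results: list[str] = []
    for task in planner.plan(goal):
        for _ in range(3):  # feedback loop, capped at three attempts
            task.result = worker.execute(task)
            if critic.approve(task.result):
                break
        results.append(task.result)
    return results


if __name__ == "__main__":
    for line in run_workflow("Customer reports a billing error"):
        print(line)
```

In this sketch, run_workflow is the agentic workflow, each class is an individual AI agent, and the retry loop is the feedback mechanism that lets the system adapt without a human in the loop.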

Isn’t it interesting? Our world is changing — fast.

Last month, Gartner published a report predicting that by 2029, agentic AI will be capable of resolving 80% of routine customer service interactions with no human intervention.

This automatically raises the question: how long until malicious actors follow the same logic and adopt agentic AI to create autonomous, effective, and legitimate-looking digital interactions for malicious purposes?

This challenge is already starting to take shape.

As most digital interactions now begin online, attackers have more opportunities than ever. As artificial intelligence evolves from a supportive tool to autonomous agents capable of long-term planning and adaptive decision-making, the cybersecurity industry faces a rapidly shifting threat landscape. The rise of agentic AI has opened new frontiers for adversaries looking to automate and scale social engineering attacks.

Let’s look at the first developments we can expect.

Agentic AI And The Future of Social Engineering: Key Developments

Automated Reconnaissance.

AI agents can be tasked to perform online reconnaissance on targets in several ways. This can include:

  • Identifying vulnerabilities in physical infrastructure (e.g., building layouts, entry points, security controls)
  • Mapping organizational structures, policies, and hierarchies (based on public data like employer review websites, interviews, leaked documents)
  • Monitoring upcoming travel plans & public appearances (relevant for executive protection)

But also:

  • Behavioral profiling: analyzing digital footprints to understand personal habits, preferences, motives, and weaknesses
  • Tailoring a social engineering approach: crafting custom attack strategies based on a target’s behavioral patterns, known weaknesses, and corporate dynamics

Voice and Video Deepfakes.

Coupled with generative AI, agentic systems will be able to convincingly simulate executives, business partners, clients, or trusted colleagues in phone calls or voice messages in near real time. This will reduce the lag often associated with deepfakes (a known detection cue), making malicious communications appear more authentic and significantly harder to detect.

Chain Interactions & Spear Phishing Attempts.

Lengthier online interactions are also expected to become more automated and more convincing.

So far, when it comes to LLM-supported digital interactions (e.g., through social media, smishing, or spear phishing attempts), traditional chatbots have struggled to maintain conversational context for long without human intervention. Rule-based systems follow fixed scripts, so during longer interactions they often start producing irrelevant or fragmented responses.

Agentic AI’s memory and orchestration capabilities, on the other hand, make it great at keeping track of long conversations: maintaining the conversational flow and staying contextually relevant and coherent throughout an interaction, without human supervision. This is expected to increase the quality, scalability, and effectiveness of social engineering attacks that target specific individuals in lengthier attack scenarios.
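
To see why statefulness matters, here is a minimal, deliberately benign sketch of a generic conversational agent. The class and its fields are hypothetical and the model call is stubbed out; the point is only that the agent carries the whole dialogue, plus extracted notes, into every new turn, which is exactly what a fixed script cannot do.

```python
class ConversationAgent:
    """Generic stateful dialogue agent; the LLM call itself is stubbed."""

    def __init__(self, goal: str):
        self.goal = goal
        self.history: list[tuple[str, str]] = []  # full (speaker, message) log
        self.notes: dict[str, str] = {}           # long-term facts, e.g. "prefers email"

    def remember(self, key: str, value: str) -> None:
        self.notes[key] = value

    def build_prompt(self, incoming: str) -> str:
        """Condition every reply on goal + notes + the entire transcript."""
        transcript = "\n".join(f"{who}: {msg}" for who, msg in self.history)
        notes = "; ".join(f"{k}: {v}" for k, v in self.notes.items()) or "none"
        return (f"Goal: {self.goal}\nNotes: {notes}\n"
                f"Conversation so far:\n{transcript}\nThem: {incoming}\nYou:")

    def reply(self, incoming: str) -> str:
        prompt = self.build_prompt(incoming)
        # A real system would send `prompt` to a language model here.
        answer = f"(reply conditioned on {len(self.history)} prior turns)"
        self.history.append(("Them", incoming))
        self.history.append(("You", answer))
        return answer
```

Because the transcript and notes persist, turn fifty is generated with the same context as turn five, while a rule-based script sees each message in isolation. The same property that keeps a helpdesk bot coherent is what would keep a malicious persona coherent.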

Multi-Vector Approach Adaptability.

Another key development will be the system’s ability to learn and refine its interactions based on how the targets it engages respond. The system will gather data on the types of approaches that work best and on the target profiles, demographics, or even company departments that are most susceptible, making each subsequent attack more compelling and effective. This can include a multi-vector approach: if phishing emails do not work, the system may move on to spear phishing or phone texts/audio messages.

Example:

Let’s say that a state-sponsored threat actor deploys an agentic AI system with a single high-level objective: impersonate a conference organizer and use that persona to lure a high-value individual into a phishing trap. The process is to convince the target, over a series of personalized emails, to accept an invitation to speak at a prestigious event abroad. The ultimate goal is to have the target register for the conference by *clicking a link*, then *logging into the fraudulent conference platform through their Microsoft/Google/other account* (which is set up to harvest data), and *downloading a -malicious- registration form*. What looks to the target like an administrative task is, in fact, a well-disguised breach attempt.

Behind the scenes, the agentic AI breaks this objective into a coordinated web of sub-goals, and each AI agent is deployed for a different component of the operation. One starts by collecting open-source intelligence on the target, analyzing their interests, past speaking engagements, and digital footprint. Another drafts the initial outreach email, tailoring tone, topic, and details in a way that resonates with the recipient. A third monitors replies, analyzes sentiment and engagement patterns, and adjusts the tone or content of future messages. Yet another agent may oversee the whole orchestration: tracking progress, managing dependencies, and ensuring coherence across interactions.

What makes this system powerful isn’t just its structure and autonomy. It’s also its adaptability. If the target ignores the first email, the system doesn’t stop. It analyzes why. Was the message too vague? Too generic? Was the communication channel the wrong one? Feedback loops kick in. The campaign evolves. A revised email, with better hooks or a more enticing context, goes out. The system keeps learning and iterating. If one path fails, it adjusts course. If one tactic works, it’s saved and reused across other targets with similar profiles or backgrounds.

The system keeps working on it. There is no need for sleep, no delay for team coordination, no hesitation. Just continuous optimization in pursuit of the goal. Every interaction is data. Every data point is a lesson. And every lesson is fed back into the system to make the next attempt smarter, faster, and more convincing.

Reality Check

How well can we expect Agentic AI to *actually* perform all these tasks? After all, large language models and AI cloning tools are not perfect, and neither are the systems built on top of them.

Well, it remains to be seen. What is undoubtedly the case, however, is that we have been seeing exponential improvements across AI systems, even though their first models had significant flaws. We can expect significant developments in the capabilities of agentic AI systems as well.

If history is any guide, what feels like science fiction today may quietly become part of an attacker’s toolkit tomorrow.

Sooo…What is the Situation RIGHT NOW?

Today’s social engineering scams no longer come with typos, strange accents, blurry (or non-existent) videos, and obvious red flags. They come with perfect grammar, fairly realistic -cloned- voices, and videos of people who’ve never existed, or of real people cloned to look like your manager, executive, or coworker.

We are already seeing a clear increase in both the quality and volume of phishing and spear phishing attempts. Targeted attacks have also become better, more personalized, and harder to detect.

This is not only what we have been seeing through our clients’ cases. These findings are also reflected in industry reports such as ENISA’s Threat Landscape Report 2024 and Verizon’s Data Breach Investigations Report 2025.


While voice and video-based attacks are still evolving, they’re improving quickly. Right now, it still takes time and effort to create convincing voice and video clones, especially for dynamic, real-time conversations. Deepfake videos remain relatively rare, but voice cloning keeps gaining popularity among threat actors, particularly in the form of pre-recorded voice messages on platforms like WhatsApp.

Surprisingly, some of those audio-based attacks are proving effective, even when they sound “robotic” or have a different speaking style than the person who is supposedly sending them.

Here’s why:

We live at a time when workers are juggling multiple responsibilities at once. They face increased expectations from work and family, and more pressure and stress than probably ever before.

Not the best combination of variables for clarity and proper cognitive filtering.

Now couple that with insufficient, passive training (often 15-minute videos a few times per year).

That’s the dangerous cocktail. The perfect storm.

Last Few Thoughts

Despite all developments, some organizations still say things like:

“Our employees keep falling for basic phishing emails, and we have given up on trying to improve that. They just do a video-recorded training, we tick the compliance box, and that’s it”.

“Oh, we can take on a couple of security blows. We will recover, pay something, done.”

“Social engineering is simple, it’s just some phishing emails, and we cover this in training; we are doing well.”

This mentality has nothing to do with working towards cyber resilience. It is denial. It is a state of blissful ignorance that breeds dangerous overconfidence. And when you mix that attitude with increasingly capable threat actors and autonomous AI agents, it’s not just a storm, it’s a hurricane.

But there is also another side.

Thankfully, we also get to see security leaders with a much more responsible mentality: the ones who understand that cyber resilience is a process and take active steps to continuously improve each cybersecurity layer. It is not an easy task, but from what I have seen, their organizations tend to have a much better cyber posture (at least in the domains I get to check).

It is both encouraging and rewarding when I teach in-person classes on social engineering defense within companies that invest in their human perimeter. Their security leaders take the necessary steps: they talk with employees, offer them in-person training (and thus the chance to ask questions), and organize initiatives that help employees gain an in-depth understanding of why security matters. In turn, I have seen those employees demonstrate this understanding by better detecting and deterring social engineering attempts.

I believe in the effectiveness of being proactive. And when you are not proactive, I believe in the domino effect. In social engineering defense, it looks like this: if an organization is not adequately prepared for standard social engineering attacks, it is not prepared for AI-enhanced social engineering. And if it was not proactive enough to prepare for AI-enhanced social engineering, the next wave will hit hard.

Social engineering will not get easier to detect. We won’t go back in time. For the next few months, threat actors will continue to optimize their attacks and leverage AI. Agentic AI systems are likely already being tested by some cybercriminal groups. We still have some time until these types of agents become the new norm in social engineering, but still, it is a matter of time.

Organizations need to strengthen their defenses in anticipation of these developments by using AI-powered cybersecurity tools, investing in better-quality security training programs, and fostering a culture of shared responsibility for cybersecurity across their teams.
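
On the tooling side, even simple automated screening adds a defensive layer. The toy sketch below scores a reported email against a few classic social engineering signals: urgency language, a mismatched Reply-To, and a lookalike sender domain. The phrases, weights, and threshold are illustrative assumptions, not a production detector; real tools combine such heuristics with ML classifiers and full header analysis.

```python
import re

# Illustrative pressure phrases; a real list would be far larger and localized.
URGENCY_PHRASES = ("urgent", "immediately", "within 24 hours", "act now")


def is_lookalike(domain: str, trusted: set[str]) -> bool:
    """Very rough check: same length as a trusted domain, one character off."""
    for t in trusted:
        if domain != t and len(domain) == len(t):
            if sum(a != b for a, b in zip(domain, t)) == 1:
                return True
    return False


def triage_score(sender: str, reply_to: str, body: str,
                 trusted_domains: set[str]) -> int:
    """Score a reported email; higher means escalate to the security team."""
    score = 0
    body_low = body.lower()
    score += 2 * sum(p in body_low for p in URGENCY_PHRASES)  # pressure cues
    if reply_to and reply_to.split("@")[-1] != sender.split("@")[-1]:
        score += 3  # replies silently diverted to another mailbox
    if is_lookalike(sender.split("@")[-1], trusted_domains):
        score += 4  # sender domain imitates a trusted one
    if re.search(r"https?://\S+", body_low):
        score += 1  # contains a link at all; weak, context-dependent signal
    return score


if __name__ == "__main__":
    s = triage_score(
        sender="ceo@examp1e.com",          # note the digit "1" in the domain
        reply_to="payments@webmail.example",
        body="Urgent: wire the payment immediately. https://examp1e.com/pay",
        trusted_domains={"example.com"},
    )
    print("triage score:", s)  # e.g., escalate anything scoring 4 or higher
```

None of these signals is conclusive on its own; the value is in layering cheap automated checks in front of (not instead of) trained, alert people.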

The ones that understand this shift and prepare for it will be the ones that keep their heads above water. Those that continue to rely on outdated assumptions and checkbox compliance will find themselves dangerously exposed.

Want to know more?

If you have questions or work with a team that needs training on how to recognize and defend against AI-powered social engineering, you can contact me on LinkedIn, X, or Bluesky.

We also offer specialized, tailored briefings for executive management, the C-suite, and the board of directors. For details, visit: https://www.cyber-risk-gmbh.com/Board.html

Thank you for reading, and stay safe!



Written by Christina Lekati

Practicing and interconnecting my big passions: Social Engineering, Psychology, HUMINT & OSINT, for the sake of better cybersecurity & to help keep others safe.
