
What Does Agentic AI Mean for Cybersecurity?


Johannesburg, 29 April 2025: If you’ve ever felt frustrated booking trips, digging for troubleshooting information, coordinating complex supply chains, or managing any elaborate environment, you’ll be interested in what AI agents offer.

AI agents are poised to reshape how we get things done. They stand in for us, managing and delegating numerous tasks towards an overarching outcome, giving people more time to focus on higher-value tasks.

But they also pose security risks. What should you know about AI agents and how to keep them secure from malicious employees and cybercriminals?

The Artificial Helper

In the late 1980s, Francesco Cirillo needed to remedy his poor time management. He saw a kitchen timer shaped like a tomato and used it to break his tasks into smaller work intervals. He named his technique after the Italian word for tomato: Pomodoro.

Cirillo created an elegant model for an age-old productivity technique: split projects into smaller tasks. Now, artificial intelligence is evolving this concept with AI agents. Also called agentic AI, these AIs act on behalf of humans and systems, coordinating multiple tasks. But instead of following fixed sets of instructions, AI agents learn to adapt their behaviours.

An AI agent could handle a simple, direct task, such as scheduling meetings on your behalf. Or it could do something far more complex, such as managing supply chain items or planning an elaborate trip, orchestrated through scripts and specialised “narrow” AIs.

It’s a technology that promises to “transform many aspects of human-machine collaboration,” wrote the Harvard Business Review. It’s no wonder Deloitte predicts that half of the companies using generative AI will adopt agentic AI by 2027.

“Agentic AI is very exciting and could be the biggest productivity boost yet that everyone has been expecting from AI. It’s more than automation—it will help us manage complex and dynamic objectives at high speed, increasing the impact machine systems have for us. But with that comes new security risks,” says Gerhard Swart, Chief Technology Officer at cybersecurity company Performanta.

Risky AI

An executive trusts their personal assistant. A patient trusts their doctor. An inventory controller trusts their logistics manager. That trust is crucial because those people are exposed to the inner workings of processes and information.

“The same applies to AI agents, which immediately flags the biggest security concern,” says Swart.

“For agentic AI to work, it needs access to secure systems—often several different systems. That kind of access is very dangerous if it falls under the influence of malicious employees or cybercriminals.”

Unless AI agents are sufficiently secure and managed, they pose significant risks due to their extensive reach and the degree to which we’ll come to rely on them. Already, major vendors such as Salesforce, which has an agentic AI service called Agentforce, run bug bounty programs to help find security gaps affecting AI agents.

Protecting AI Agents

“From a risk perspective, we can consider AI agents as similar to human users with access to sensitive systems. This comparison helps define what can keep AI agents safe,” says Swart. “An AI agent can have access to important business data, scheduling and email applications, SaaS services, and such things. There is much we can determine from that vantage, such as ensuring we understand and limit the agent’s access to systems.”

Methods to secure AI agents include:

  • Strong identity management and authentication systems.
  • Policies to encourage and guide usage, and avoid shadow AI agents.
  • Regular human-in-the-loop evaluations of AI agents’ output and behaviour.
  • Limiting data exposure and using encryption smartly.
  • Continuous security monitoring.
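Several of these measures can be combined in practice: treat the agent like any other identity, grant it only an explicit allowlist of tools, and log every attempt for human review. The sketch below is a minimal, hypothetical illustration of that pattern (the agent name, tool names, and `AgentPolicy` class are illustrative, not any vendor’s API):

```python
# Minimal sketch (hypothetical names): gating an AI agent's tool calls
# behind an explicit allowlist, with an audit trail of every attempt.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentPolicy:
    agent_id: str
    allowed_tools: set[str]
    audit_log: list[str] = field(default_factory=list)

    def invoke(self, tool: str, action: Callable[[], object]) -> object:
        """Run `action` only if `tool` is on the allowlist; log either way."""
        if tool not in self.allowed_tools:
            self.audit_log.append(f"DENY {self.agent_id} -> {tool}")
            raise PermissionError(f"{self.agent_id} may not use {tool}")
        self.audit_log.append(f"ALLOW {self.agent_id} -> {tool}")
        return action()

# A scheduling agent gets calendar access and nothing else.
policy = AgentPolicy("scheduler-agent", {"calendar.read", "calendar.write"})
policy.invoke("calendar.read", lambda: "3 meetings today")  # permitted
try:
    policy.invoke("payroll.read", lambda: None)  # outside the agent's remit
except PermissionError:
    pass  # denied and recorded in the audit log
```

The audit log feeds the human-in-the-loop reviews and continuous monitoring mentioned above: denied calls surface misconfigured or compromised agents early, before they touch sensitive systems.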

Continuous monitoring is one of the most important measures. The best managed security service providers (MSSPs) employ sophisticated monitoring and remediation systems that include AI. Moreover, globally top-rated MSSPs such as Performanta take a risk-first approach to identify, monitor, and protect a customer’s most crucial areas, lowering the associated risks and costs.

“Partnering with an MSSP is particularly prudent for AI agents. MSSPs invest in active monitoring and remediation tools, including AI agents, that can act as fast as the attackers,” says Swart.

“Active monitoring and remediation are crucial and need to be very fast. They should be as fast as machines can act. MSSPs invest in this speed and accuracy. If you want to get the most from AI agents while keeping risks and costs down, especially as you start to deploy agents, partnering with a top MSSP is the best option,” concludes Swart.
