The use of unapproved apps and systems inside an organisation, known as Shadow IT, is often viewed as a rebellious act by employees looking to break the rules. The reality is far more nuanced. Shadow IT is rarely malicious; it is usually a rational decision made by your most dedicated employees.
In game theory, a ‘rational agent’ is expected to act in a way that maximises their own utility. When an employee is faced with a deadline and a clunky, slow official process, the rational choice is to bypass it in favour of a faster, consumer-grade tool.
“People just want to get their work done, and they will pick the path of least resistance,” says Anna Collard, SVP of content strategy & CISO advisor at KnowBe4 Africa. “If the official secure file-transfer system takes twenty minutes to navigate, and a free online tool takes two minutes, the diligent employee – focused on efficiency – will choose the free tool. They aren’t trying to be insecure; they are trying to be productive.”

However, this individual rationality leads to collective risk. Recent studies show that over half of the average organisation’s applications are unauthorised by IT departments, while 80% of employees admit to using Shadow IT.
The invisible cost of ‘efficiency’
While the employee gains speed (the immediate payoff), the organisation absorbs a hidden, long-term cost: invisible risk.
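The payoff asymmetry described above can be sketched as a toy expected-utility comparison. All figures below are hypothetical illustrations (only the twenty-minute vs two-minute tool times come from the quote); the function names and probabilities are assumptions, not data from any study cited here.

```python
# Toy model: why the individually rational choice diverges from the
# organisation's interest. Every number here is illustrative.

def employee_payoff(minutes_saved, perceived_risk, personal_cost):
    # The employee values time saved and discounts a breach risk
    # they rarely perceive and rarely pay for personally.
    return minutes_saved - perceived_risk * personal_cost

def org_payoff(minutes_saved, actual_risk, org_cost):
    # The organisation keeps the same time saving but bears the
    # full expected cost of a data leak.
    return minutes_saved - actual_risk * org_cost

# Official tool: 20 minutes; free online tool: 2 minutes.
minutes_saved = 18

employee = employee_payoff(minutes_saved, perceived_risk=0.001,
                           personal_cost=100)
org = org_payoff(minutes_saved, actual_risk=0.01,
                 org_cost=50_000)

print(employee)  # positive: the shortcut looks rational to the individual
print(org)       # negative: the same choice is a net loss for the organisation
```

Under these assumed numbers the employee's payoff is positive while the organisation's is sharply negative, which is the whole Shadow IT dilemma in miniature: each side is responding rationally to the costs it actually sees.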
The landscape of this risk has shifted dramatically with the explosion of Generative AI. Shadow IT is no longer just personal Dropbox accounts or WhatsApp groups. It now involves employees pasting proprietary code, financial forecasts, or sensitive strategy documents into public chatbots like ChatGPT to speed up their workflow.
“A recent KnowBe4 study found that while most employees are using AI for their work, they aren’t aware of their organisation’s policy governing the use of it,” Collard notes.
The 2025 KnowBe4 Africa Human Risk Management report found that Southern Africa has the continent’s highest rate (23%) of organisations with no formal AI policies in place. Shadow AI is very much a governance blind spot in numerous African organisations, with similar research published in the Data and Policy journal indicating most African nations lag in global AI governance readiness.
“These technologies offer massive productivity gains, which makes the incentive to use them incredibly high,” states Collard. “But because they sidestep security controls, sensitive information can be leaked into public models. The employee finishes their report faster, but the organisation is left with a dangerous gap in its cyber-defence posture.”
The friction-security trade-off
To manage this, we must look at human risk management (HRM) not just as a behaviour problem, but as a design problem. Shadow IT is a classic example of employees bypassing official protocols because the sanctioned tools create too much friction.
“If you treat security as a blockade, employees will tunnel under it,” explains Collard. “Shadow IT is effectively a vote of no confidence in the organisation’s provided tools.”
Real-world consequences of this friction are severe. In 2023, a breach at identity giant Okta was traced back to an employee accessing a personal Google account on a company device – a classic Shadow IT vector. The employee likely used the personal account to bridge a gap in their workflow, inadvertently compromising more than 100 global customers.
Solving the Shadow IT dilemma requires organisations to change the incentives. If you want employees to use secure tools, those tools must be as frictionless as the insecure ones. Collard advises against a purely punitive approach, which often drives Shadow IT further underground. Instead, she outlines a three-step approach to realigning incentives:
- Reduce process friction: Create a fast-track process for employees to request and vet new tools. “If the approval process takes three months, the project will be over before the tool is approved,” says Collard.
- Compete on UX: Security teams need to vet tools for usability, not just safety. Sanctioned tools must compete with consumer apps on user experience (UX). If the secure option is painful to use, it will lose.
- Train with transparency: “Employees often don’t understand why a tool is banned,” notes Collard. “When you explain that a free PDF converter might own the rights to the data uploaded to it, their cost-benefit analysis will change.”
“You cannot police your way to productivity,” Collard concludes. “Bring employees into the decision-making process. When the secure way becomes the easy way, Shadow IT disappears all by itself.”
