AI is no longer a side project or a future innovation agenda item; in most firms, it is already embedded in day-to-day tasks, from drafting client emails and summarising meetings to enabling research and analysis, product development, and operational reporting. The board-level question has moved on from “should we invest?” to something more like, “are we scaling AI safely, responsibly, and profitably?” AI isn’t a technology story; it’s a leadership story, and it only creates value when someone is willing to rethink the work, set sensible guardrails, and bring people with them.
Why AI literacy matters in South African financial services
Financial services carry a unique set of pressures: high trust, regulated decision making, and outcomes that affect people’s savings, claims, and credit. That reality raises the bar. A ‘move fast and break things’ mindset rarely survives contact with auditors, compliance teams, and clients, and it certainly won’t hold when AI begins influencing decisions that matter.
In practice, a lot of early AI activity is sensible and low-risk: summarising documents, tightening communication, speeding up research, and reducing admin. The problem starts when boards assume that tool adoption equals organisational capability. It doesn’t. Capability is only truly evident when AI begins to influence client outcomes, operational decisions, and the compliance environment. Boards don’t need leaders who can talk about AI; they need leaders who can turn spend into measurable performance, without compromising trust, culture, or control.
From tool adoption to operating change
While the temptation to chase quick wins through pilots or licence arrangements is real, these rarely produce sustained value. The difference between activity and impact is whether leadership is willing to change how work moves from beginning to end.
In banking, that redesign could mean reworking a credit journey from application to outcome, with clearer hand-offs, fewer manual checks, and better escalation. In insurance, it might mean modernising the claims journey, so clients get speed and transparency, without reducing oversight. In wealth management, it could mean improving research and client reporting, so advisers spend more time advising and less time on the physical act of typing up reports.
AI doesn’t fix inefficient workflows; it exposes them. Leaders who can redesign the work are the ones who unlock the maximum value and leave their competitors in the dust.
Where culture and risk collide
Another contributor to ‘AI immaturity’ is a two-speed organisation. Executives and managers tend to adopt quickly. The operational teams, however, either adopt, get left behind, or find their own workarounds, using tools in inconsistent ways. The result is disparity in the quality of work, little clarity on how it’s being produced, and frustration between teams and leaders.
This is where maturity either accelerates or collapses. If guardrails are unclear, people work around them. If workflows don’t change, AI becomes one more layer of work. If training is vague or optional, adoption becomes inconsistent and unpredictable. When AI scales badly, it doesn’t just create inefficiency; it creates mistrust. And once trust drops, adoption becomes harder.
How boards should hire for 2026
When boards recruit CEOs, COOs, CIOs, or divisional leaders this year, the question shouldn’t be ‘do you know AI?’ It should be ‘how would you scale AI responsibly in this business and what would you change first?’
Below is a practical executive search lens to apply across interviews, referencing, and assessment:
AI judgement
Strong candidates show prioritisation coupled with restraint: they can explain which use cases matter, why they matter, what they would stop, and what must remain human-led. They also define value clearly, including what success looks like within a given timeframe.
Risk governance
In financial services, governance considerations are often seen as the pacesetter for the business. The right leader can put practical protections in place: data handling rules, approved tools, human review thresholds, escalation paths, and auditability. Set up correctly, governance facilitates scale rather than stalling it.
Change leadership
AI transformation is behavioural change at scale. Leaders need to redesign workflows, build adoption discipline, and measure impact. It’s relatively easy to pilot an initiative; true leadership shows in the move from pilot to scale. Strong change leaders handle resistance without losing momentum.
The ability to understand the data
This isn’t about coding. It’s about understanding the data reality of the business: quality, access, ownership, permissions, and measurement. Many AI programmes fail when implementation comes up against data that is less than optimal. Strong leaders fix the data foundations before scaling.
Upskilling mindset
AI maturity becomes durable when AI becomes a company-wide capability. Leaders need to build confidence and competence across roles, not create dependency on a few power users. Enablement should be role-based and tied to everyday work, not a once-off training session.
What this means for executive recruitment
Your hiring process is a preview of how the organisation will run its AI agenda. If the brief is vague, decision-making is slow, or accountability is unclear, AI scaling will follow the same pattern. Instead of asking for ‘AI exposure’, assess for evidence: outcomes delivered, adoption achieved, and risk managed.
Ask what changed in the operating model and what impact it had on performance. In regulated environments, test how candidates handled policy, oversight, and human review when the stakes were high. Hiring for AI-literate leadership is not about finding a ‘tech executive’. It is about appointing leaders who modernise how work gets done.
