Is Your Job at Risk? — AI in Financial Crime Operations (Part 2)

By Erkin Oksel  |  CEO, EyesClear



The Current State of Financial Crime Operations

For medium-to-large financial institutions, financial crime functions are distributed between operations and compliance teams. In large-scale banks, high-effort but low-complexity tasks have already been outsourced to third parties or heavily automated. The economics made sense long before AI entered the picture.

For smaller financial institutions — especially in emerging markets — the reality is quite different. Teams are still growing, still catching up with the regulatory complexity that has compounded over the past decade. These institutions are building the house while the building code keeps changing.

Executive AI vs. Institutional AI: A Critical Distinction

Before we discuss AI’s impact on your role, we need to separate two fundamentally different categories of AI adoption.

Executive AI is what most of us use daily — interacting with tools like Claude, Grok, or ChatGPT. You ask a question, you get a response. It operates under your control, in a request–response pattern. It doesn’t act by itself. We use Claude ourselves, and it has become an indispensable part of how we work.

Institutional AI is fundamentally different. This is when AI capabilities are embedded into the overall operational flow — not as a tool you query, but as a participant in the process. It acts within parameters, makes preliminary determinations, and escalates when it cannot resolve.

Why “Agentic AI” Changes the Game

The first generation of AI — with its well-documented hallucination problem — was simply not fit for purpose in mission-critical operations. You cannot have an AI confidently fabricating transaction details in a Suspicious Activity Report.

But the current generation is different. The breakthrough is not just better models — it is agentic flows. In simple terms: two or more AI models working successively, one as the maker, the other as the checker, before producing the final output. Unlike “Reasoning AI” (where a single model reasons internally), agentic flows can use different models with different prompts, each optimised for its role in the chain.

This maker–checker pattern mirrors how compliance teams already work. And that is precisely why it works.
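In code, the maker–checker flow reduces to two independent steps, each of which could be backed by a different model with a different prompt. Below is a minimal Python sketch; the functions here are stand-ins for real model calls, and all names, fields, and thresholds are illustrative assumptions, not EyesClear's implementation:

```python
from dataclasses import dataclass


@dataclass
class Draft:
    text: str
    approved: bool = False
    notes: str = ""


def maker(alert: dict) -> Draft:
    # Stand-in for the "maker" model: produces a preliminary determination.
    verdict = "benign" if alert["amount"] < 10_000 else "escalate"
    return Draft(text=f"Alert {alert['id']}: preliminary verdict '{verdict}'")


def checker(draft: Draft, alert: dict) -> Draft:
    # Stand-in for the "checker" model: validates the maker's output against
    # the source record before anything leaves the pipeline. A real checker
    # would verify every cited fact, not just the alert ID.
    consistent = str(alert["id"]) in draft.text
    draft.approved = consistent
    draft.notes = ("verified against source record" if consistent
                   else "mismatch: escalate to human review")
    return draft


def agentic_review(alert: dict) -> Draft:
    # Maker and checker run successively; only checked output is emitted.
    return checker(maker(alert), alert)


result = agentic_review({"id": 42, "amount": 3_500})
```

The point of the pattern is that the checker sees the source data independently, so a hallucinated detail from the maker fails verification instead of reaching the final output.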

The Trust Curve: From 4 Eyes to 2 Eyes

We believe the adoption will follow a predictable trust curve:

Phase 1 — 1 Silicon + 4 Human Eyes: AI produces the initial output. Two human reviewers validate. Trust is being built through evidence.

Phase 2 — 1 Silicon + 2 Human Eyes: As institutional confidence grows, one human reviewer becomes sufficient. The AI side is further strengthened through additional layers orchestrated via agentic flows.
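The two phases can be expressed as a simple routing rule: the phase sets the baseline number of human reviewers, and low-confidence AI output earns an extra pair of eyes. A sketch, where the function name and the 0.9 threshold are illustrative assumptions:

```python
def route_for_review(ai_confidence: float, phase: int) -> dict:
    """Decide how many human reviewers an AI determination needs.

    Reviewer counts mirror the trust curve: phase 1 pairs the AI with
    two human reviewers (4 eyes), phase 2 with one (2 eyes).
    """
    reviewers = {1: 2, 2: 1}[phase]
    if ai_confidence < 0.9:
        # Low-confidence output gets an additional reviewer regardless of phase.
        reviewers += 1
    return {"human_reviewers": reviewers, "auto_close": False}
```

Note that nothing auto-closes in either phase: the curve moves reviewers out of the loop gradually, as evidence accumulates, rather than removing them outright.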

Where AI Wins First — The Operational Perspective

From an operational standpoint, the clear early winners for AI adoption are:

1)    Data Quality Validation. Without clean data, even a human analyst is helpless. Data validation doesn’t require deep intellectual capacity — which means there’s minimal room for hallucination to cause damage. This is low-hanging fruit.

2)    False Positive Resolution. The majority of operational work in financial crime is resolving obviously benign alerts. These are high-volume, repetitive tasks with clear patterns — exactly where AI excels.

3)    Recommendation Engine for Complex Cases. Once the simpler layers are handled (or in parallel), AI becomes a powerful recommendation engine for complex investigations. The human still decides — but the preparation, the data assembly, the initial assessment? That’s where AI shines.
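To make the first two categories concrete, here is a minimal Python sketch of data quality validation feeding into false-positive triage. The field names, the 1,000 threshold, and the queue names are hypothetical, chosen only to illustrate the shape of the logic:

```python
REQUIRED_FIELDS = {"id", "amount", "currency", "counterparty"}


def validate_record(tx: dict) -> list[str]:
    # Data quality validation: purely mechanical checks with no judgment
    # involved, so there is little room for a wrong answer to cause damage.
    errors = [f"missing field: {field}" for field in REQUIRED_FIELDS - tx.keys()]
    if "amount" in tx and not isinstance(tx["amount"], (int, float)):
        errors.append("amount is not numeric")
    return errors


def triage_alert(tx: dict, watchlist_hit: bool) -> str:
    # False-positive resolution: close the obviously benign, escalate the rest.
    if validate_record(tx):
        return "data_quality_queue"        # fix the data before deciding
    if not watchlist_hit and tx["amount"] < 1_000:
        return "auto_close_false_positive"  # high-volume, clearly benign
    return "analyst_review"                 # anything else goes to a human
```

The third category — the recommendation engine — would sit behind `analyst_review`, assembling the data and drafting an initial assessment for the human who still makes the call.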

The 51% Problem: Why Financial Crime is Different

As I explained in my earlier post, we are not yet at a point where we can delegate complex tasks to AI and forget them. AI can confidently make two conflicting recommendations in the same session.

As a dear friend and EyesClear’s former Director put it regarding trading systems: “You need to be 51% right.” AI can easily achieve that. But from a financial crime and critical operations perspective, the story is fundamentally different. Being wrong 49% of the time is not an option when regulatory penalties, reputational damage, and genuine criminal activity are at stake.

The Bottom Line: Five Questions Answered

1) Is your job at risk? No. Not yet.

2) Will positions in your job area grow? No, unless you are at a start-up financial institution building from scratch.

3) Will the job get easier? Absolutely not. This is a paradigm shift. AI will absorb whatever is easy to resolve, and the complex, contested cases that remain will hit less-resourced FI teams harder than ever.

4) Will it get more fun? I believe yes. Our work will become more relevant, more sensible. We will grapple with philosophical challenges rather than operational checklists. Discussions will shift from tick-box activities to genuine “meet and sense” engagement.

5) Will salaries increase along with efficiencies? Yes, for AI-native professionals. Those who adapt, who learn to work with and through AI, will command higher value. But for those sticking with the legacy approach? No. Frankly, expect raises that lag inflation. The market will increasingly price in AI fluency as a baseline expectation.

A Final Note on the Economics

AI adoption will progress at its own pace. And there is a dangerous deception in the market that AI projects are one-off costs — buy 10 H100s and you’re set.

Unfortunately, no.

The key lies in the DATA assets and the commercial aspects of AI evolution. Without the right data infrastructure, the most powerful AI is just an expensive engine without fuel.

That will be a separate post.


EyesClear is a real-time AML transaction monitoring platform with integrated agentic AI, processing thousands of transactions per second.

www.eyesclear.com