Testing vs. Assumptions: Are Your Fraud Controls Proven or Just Trusted?

March 11, 2026
Todd Arnts, VP of Fraud Red Team

Most fraud programs are built on a set of assumptions.

We assume the controls we implemented last year still work today.
We assume the alerts we tuned are catching what they’re supposed to catch.
We assume the controls that passed a test in QA will behave the same way in production.

Sometimes those assumptions are correct. But increasingly, they aren’t.

Fraud attacks are evolving faster than many control programs can realistically validate. AI-enabled fraud, social engineering scams, and sophisticated account takeover techniques are forcing organizations to rethink a fundamental question: Do we actually know our controls work, or do we just believe they do?

The Assumption Gap

During a recent Neovera fraud webinar, we asked attendees how their organizations validate fraud controls today.

The responses revealed a clear trend:

  • 43% reported their organizations are primarily reactive, relying on alerts, incidents, and losses to determine if controls are working.

  • 20% said they are exploring proactive testing, but have not yet implemented it.

  • Only about 18% reported conducting structured proactive testing such as simulations or targeted production tests.

  • Just 6% have a formal Fraud Red Team program in place.

In other words, most organizations are still learning about control effectiveness after fraud occurs.

That’s not a criticism; it’s simply the reality of how many fraud programs evolved. Historically, fraud controls were evaluated through:

  • Post-incident investigations

  • Loss reporting

  • Model tuning based on alert performance

The problem is that these methods measure outcomes after the fact, not the resilience of controls while fraud activity is actually occurring. And fraudsters don’t politely wait for your quarterly review cycle.

Why Testing Is Harder Than It Sounds

If proactive testing is so valuable, why don’t more organizations do it? We asked that question too.

The most common barriers organizations reported were:

  • Budget constraints or competing priorities (50%)

  • Operational complexity

  • Risk or compliance approval challenges

  • Legal concerns and unclear ownership

All of these are understandable. Testing fraud controls in production can raise legitimate questions:

  • Could testing trigger real customer alerts?

  • Will it disrupt operations?

  • Who owns the testing process — fraud, security, risk, or engineering?

  • How do you safely simulate attacks?

These concerns often lead organizations to delay testing indefinitely. But the alternative is something most leaders are increasingly uncomfortable with: running critical fraud controls that have never been validated against real attack behavior.

The Threat Landscape Is Moving Faster

Another poll question from the webinar asked attendees which fraud threats concern them most over the next 12 months.

The top responses were telling:

  • AI-enabled fraud (deepfakes, automated attacks) — 40%

  • Scams and social engineering — 26%

  • Account takeover (ATO) — 15%

All three threats have something in common: they target the seams between systems, teams, and controls.

Deepfake voice attacks may bypass call center verification processes.
Social engineering scams manipulate customers into authorizing transactions.
ATO attacks exploit gaps between identity, device, and behavioral controls.

These are not static threats. They evolve constantly. Which means static assumptions about control effectiveness quickly become outdated.

The Shift Toward Validation

Leading fraud programs are beginning to shift from assumption-based confidence to evidence-based validation.

Instead of asking: “Do we have the right controls?”

They ask: “Can our controls actually stop real attacks?”

This is where structured fraud control testing comes into play. Testing programs may include:

  • Simulated fraud scenarios

  • Targeted production testing

  • Cross-channel attack simulations

  • Red-team style adversarial testing

The goal isn’t to “break” controls for the sake of it. The goal is to understand how fraud actually moves through your environment, and where defenses succeed or fail. Because once you see how an attack really unfolds across channels, systems, and teams, the gaps become much easier to fix.
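To make the idea of simulated fraud scenarios concrete, here is a minimal sketch of a scenario-based test harness. Everything in it is hypothetical: `evaluate_transaction` is a toy stand-in for whatever decision engine a real fraud stack exposes, and the scoring rules and scenarios are illustrative, not a recommended rule set. The point is the pattern, pairing each simulated attack with the decision the control should produce, so a mismatch surfaces as an explicit gap rather than a future loss.

```python
# Hypothetical sketch of scenario-based fraud control testing.
# `evaluate_transaction` stands in for a real decision engine;
# the scoring rules and scenarios below are illustrative only.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    new_device: bool
    country_mismatch: bool
    velocity_1h: int  # transactions from this account in the last hour

def evaluate_transaction(txn: Transaction) -> str:
    """Toy stand-in for a production fraud control. Returns a decision."""
    score = 0
    if txn.amount > 5000:
        score += 2
    if txn.new_device:
        score += 2
    if txn.country_mismatch:
        score += 1
    if txn.velocity_1h > 5:
        score += 2
    if score >= 4:
        return "block"
    if score >= 2:
        return "review"
    return "allow"

# Each scenario pairs a simulated attack pattern with the decision
# the control *should* produce. A failed expectation is a control gap.
SCENARIOS = [
    ("ATO: new device plus high velocity",
     Transaction(amount=900, new_device=True, country_mismatch=False, velocity_1h=8),
     "block"),
    ("Scam: victim-authorized large transfer from a familiar device",
     Transaction(amount=7500, new_device=False, country_mismatch=False, velocity_1h=1),
     "review"),
    ("Baseline legitimate purchase",
     Transaction(amount=40, new_device=False, country_mismatch=False, velocity_1h=1),
     "allow"),
]

def run_scenarios() -> list[str]:
    """Run every scenario and return the names of any control gaps."""
    gaps = []
    for name, txn, expected in SCENARIOS:
        actual = evaluate_transaction(txn)
        status = "PASS" if actual == expected else "GAP"
        print(f"[{status}] {name}: expected={expected} actual={actual}")
        if actual != expected:
            gaps.append(name)
    return gaps

if __name__ == "__main__":
    gaps = run_scenarios()
    print(f"{len(gaps)} control gap(s) found")
```

In practice the same structure extends naturally: the harness calls a staging or production API instead of a local function, and scenarios are versioned alongside the controls so every tuning change is re-validated against known attack patterns.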

Moving Beyond Assumptions

Fraud programs rarely fail because teams lack skill. They struggle because modern fraud ecosystems are too complex to validate through assumptions alone.

Testing helps bring clarity. It answers questions that dashboards and policies can’t: Will our controls stop this attack? How quickly would we detect it? Where might it succeed?

Because in fraud defense, the gap between confidence and guesswork is often where the next loss occurs.

Want to know if your controls would stop real attacks?
Contact Fraud Red Team to learn how we test real-world impersonation scenarios against both systems and people.