From Paper KYC to Explainable AI: What 20 Years in Compliance Taught Me
For two decades I’ve worked across banking, investments, lending and compliance – from paper KYC files and rule‑based AML systems to today’s AI‑driven monitoring. This essay traces that journey, unpacks what went wrong around 2008, and lays out a practical checklist for “getting AI right” in KYC, AML and CFT under evolving Canadian and FINTRAC expectations.
Pavan Aujla
11/27/2025 · 5 min read
From Paper KYC to Explainable AI: What 20 Years in Compliance Taught Me
My career began in Canadian banking. Two decades ago, my desk was stacked with paper files. The screen in my office still ran on MS‑DOS, and senior management was only just debating whether to move from paper folders to an automated CRM system.
KYC lived in lever‑arch binders.
Photocopied passports were clipped to physical forms.
Transaction logs, printed on continuous paper, sat in boxes under desks, waiting for compliance teams to go through them, line by line, highlighter in hand, searching for anything that felt wrong.
At the time, KYC meant manual identity verification, AML meant sampling a tiny fraction of transactions, and CFT required endless coordination with law enforcement. And every time a scandal broke, regulatory requirements tightened, and the volume of data we were expected to review increased significantly.
We did our best. But we were relying on human eyes, instincts, and spreadsheets in a world that was already accelerating beyond our reach.
From Paper to Rules. And the Limits of Automation
As technology advanced, we upgraded.
First to spreadsheets. Then to rule‑based systems.
It helped. But it didn’t solve the core problem.
Rules could only flag what we already knew to look for.
We still missed subtle risk patterns.
False positives overwhelmed our investigators.
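To make the limitation concrete, here is a deliberately toy sketch in the spirit of those legacy rule-based systems. The $10,000 figure mirrors Canada's large-cash-transaction reporting threshold; everything else is invented for illustration.

```python
from dataclasses import dataclass

# Illustrative only: a toy rule in the spirit of legacy AML systems.
CASH_THRESHOLD = 10_000  # Canada's large-cash reporting threshold (CAD)

@dataclass
class Txn:
    client_id: str
    amount: float

def flag_large_cash(txns):
    """Flag any single cash transaction at or above the threshold."""
    return [t for t in txns if t.amount >= CASH_THRESHOLD]

txns = [
    Txn("A", 12_000),  # flagged: one large deposit
    Txn("B", 9_500), Txn("B", 9_500), Txn("B", 9_500),  # structuring: never flagged
]
flagged = flag_large_cash(txns)
# Client B moves $28,500 in sub-threshold chunks and sails through,
# which is exactly the behaviour the rule was meant to police.
```

The rule catches what it was written to catch and nothing else; a patient counterparty simply plays underneath it.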
And, as we learned in 2008, automation paired with misaligned incentives doesn’t just scale efficiency. It can also scale risk.
The global financial crisis wasn’t caused by automation alone. But model‑driven decisions, opaque securitization, and overconfidence in quantitative tools undeniably amplified it.
It was an early warning.
Technology without ethics, explainability, or governance can be dangerous.
Why I Moved into AI
A few years ago, I made a deliberate decision to shift my focus to artificial intelligence.
Not because it was fashionable.
But because the traditional tools had hit a wall.
Transaction volumes had become too large for sampling
Synthetic and fraudulent documents could now easily slip through legacy systems
Criminal networks were exploiting digital channels and synthetic identities with increasing sophistication
Regulators were expecting stronger, risk‑based approaches and better outcomes
Today, we’re already seeing AI applied across compliance in Canada, from fraud detection and KYC/AML transaction monitoring to chatbots and risk‑based credit and insurance underwriting.
This trend is acknowledged by global bodies like the BIS, FSB, and increasingly in Canadian policy circles.
When implemented well, machine learning can:
Analyze millions of transactions in real time
Detect patterns of layering and structuring that rules alone cannot catch
Dramatically reduce false positives, allowing investigators to focus on genuine threats
Cross‑reference internal and external data to improve KYC accuracy
This is no longer theoretical.
Banks and fintechs in Canada and beyond are already scaling AI‑powered AML and KYC platforms and reporting measurable uplifts in detection quality and double‑digit reductions in false alerts.
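One of those patterns, structuring, can be sketched with a simple behavioural heuristic: instead of judging each transaction in isolation, look at a client's near-threshold cash deposits across a rolling window. This is a minimal illustration, not a production design; real systems use far richer features and learned models, and the window, band, and threshold here are assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)
THRESHOLD = 10_000   # illustrative reporting threshold (CAD)
NEAR_BAND = 0.75     # "near-threshold" = within 25% below the line

def find_structuring(txns):
    """Flag clients whose near-threshold cash deposits, taken together
    inside a 24-hour window, exceed the reporting threshold.

    txns: iterable of (client_id, datetime, amount). Heuristic sketch only.
    """
    by_client = defaultdict(list)
    for client, ts, amount in txns:
        # Keep only sub-threshold deposits sitting suspiciously close to the line.
        if THRESHOLD * NEAR_BAND <= amount < THRESHOLD:
            by_client[client].append((ts, amount))

    flagged = set()
    for client, deposits in by_client.items():
        deposits.sort()
        start, running = 0, 0.0
        for end in range(len(deposits)):       # slide a 24-hour window
            running += deposits[end][1]
            while deposits[end][0] - deposits[start][0] > WINDOW:
                running -= deposits[start][1]
                start += 1
            if running > THRESHOLD:
                flagged.add(client)
                break
    return flagged

t0 = datetime(2025, 1, 6, 9, 0)
activity = [
    ("B", t0, 9_500),
    ("B", t0 + timedelta(hours=3), 9_500),   # two near-threshold deposits in 3h
    ("C", t0, 9_000),
    ("C", t0 + timedelta(days=5), 9_000),    # spread out: not flagged here
]
print(find_structuring(activity))  # {'B'}
```

No single transaction by client B would trip a per-transaction rule, but the aggregated view does, which is the essence of behavioural monitoring.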
But AI in Finance Cannot Be a Black Box
In financial services, we operate under strict expectations.
Reliability. Accountability. Fairness. Ethics. Privacy.
And AI-specific law is now catching up.
In Europe, the EU AI Act classifies certain financial AI use cases, most notably credit scoring and insurance risk assessment, as high-risk. These trigger mandatory requirements around data quality, documentation, human oversight, bias mitigation, and post‑deployment monitoring.
In Canada, the proposed Artificial Intelligence and Data Act (AIDA), along with its companion guidance and the federal Voluntary Code of Conduct for Advanced AI Systems, points clearly in the same direction.
High-impact AI systems will need to sit within robust risk-management frameworks, with clear accountability, impact assessments, incident reporting, and meaningful transparency.
Whatever AIDA’s ultimate path through Parliament, the message is already clear.
In Canadian finance and compliance, “just trust the model” will not fly.
If we’re not careful, we risk recreating a new version of 2008:
A shared dependence on a small group of AI vendors
Herding behaviours, as models converge on similar strategies
Hidden biases in training data
System-wide mispricing of risk discovered only after it's too late
Canada’s central bank, OSFI, and international supervisors are already cautioning that AI‑driven trading, pricing, and risk tools could trigger or amplify crises if left unchecked.
Grounding AI in Real Obligations. The Canadian AML/ATF Example
These aren’t abstract concerns.
In Canada, financial institutions, securities dealers, life insurers, money services businesses, mortgage brokers, accountants, real estate developers and brokers, and others already operate under a comprehensive AML/ATF framework.
FINTRAC’s guidance outlines clear expectations on:
Client identification and ongoing monitoring
Beneficial ownership and risk-based relationship management
Suspicious transaction reporting and terrorist property obligations
Record‑keeping, travel rule compliance, and sector‑specific KYC
Enhanced measures for politically exposed persons (PEPs), heads of international organizations (HIOs), their families, and close associates
If we introduce AI into this environment, it must be to support, not sidestep, these obligations.
An AI system that cannot explain why it flagged a client as high-risk, or how it contributes to a compliant, risk‑based monitoring program, is a liability, not an asset.
What “Getting AI Right” Looks Like in KYC, AML, and CFT
For me, “responsible AI in finance” is not a slogan. It’s a checklist.
Any AI system used in KYC, AML, or CFT should be able to answer these six questions:
Explainability (XAI)
Can the model explain clearly, in plain language, why it flagged or cleared a client or transaction? Would that explanation withstand scrutiny from a regulator or in court?

Bias and Fairness
Have you tested the model for disparate impact across customer segments, regions, and demographics? How often do you retest as customer behavior and data evolve?

Data Governance
Where does your data come from? Is it legally collected, documented, and auditable? How is personal information anonymized, minimized, and protected?

Model Risk Management
Is the model catalogued, versioned, validated, stress‑tested, and continuously monitored, just like any other material risk model?

Human in the Loop
Who is accountable? When and how can a human override the model? Are investigators trained to work with the AI, not around it?

Vendor and Third‑Party Risk
If you’re buying AI, can your vendor demonstrate security, transparency, and robustness to your standards and those of your regulators?
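To show what the explainability question can mean in practice, here is a minimal, hypothetical sketch: for a simple additive risk score, each factor's contribution can be surfaced as a plain-language reason code. The feature names, weights, and wording are invented for illustration; a production system would derive attributions from its actual model (for example, via SHAP values).

```python
# Hypothetical additive risk score with plain-language reason codes.
# Feature names and weights are invented for illustration only.
WEIGHTS = {
    "cash_intensity":      3.0,   # share of activity conducted in cash
    "high_risk_geography": 2.5,   # exposure to higher-risk jurisdictions
    "pep_association":     4.0,   # politically exposed person linkage
    "tenure_years":       -0.5,   # longer relationships reduce the score
}

REASON_TEXT = {
    "cash_intensity":      "unusually cash-intensive activity",
    "high_risk_geography": "exposure to higher-risk jurisdictions",
    "pep_association":     "association with a politically exposed person",
    "tenure_years":        "length of the client relationship",
}

def score_with_reasons(features, top_n=2):
    """Return (score, reasons): the additive risk score plus the top
    positive contributors phrased as plain-language reason codes."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    score = sum(contributions.values())
    positives = sorted(
        (c for c in contributions.items() if c[1] > 0),
        key=lambda c: c[1], reverse=True,
    )
    reasons = [REASON_TEXT[name] for name, _ in positives[:top_n]]
    return score, reasons

client = {"cash_intensity": 0.9, "high_risk_geography": 1.0,
          "pep_association": 1.0, "tenure_years": 8.0}
score, reasons = score_with_reasons(client)
# reasons surfaces the top drivers, e.g. the PEP linkage and the
# cash-intensive activity, in language an investigator can act on.
```

The point is not the arithmetic but the output: every score ships with the reasons behind it, which is what an investigator, a regulator, or a court will actually ask for.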
When these guardrails are in place, AI doesn’t replace human judgment. It amplifies it.
Your compliance team moves from drowning in alerts to truly understanding risk.
Why This Matters for Compliance‑Intensive Industries
These aren’t just questions for global banks.
They apply across high‑impact Canadian financial sectors, including:
Wealth managers
Tax advisors and accountants
Mortgage investment corporations (MICs)
Private lenders
Pension funds and credit unions
Mortgage brokers and underwriters
Real estate developers and brokerages
Fintechs, MSBs, and emerging financial platforms
These sectors are already experiencing:
Stricter expectations from regulators on due diligence, suitability, and conflicts of interest
More complex client structures and cross‑border transactions
Clients demanding transparency around data use, ethics, and ESG
Expanding AML/ATF obligations under FINTRAC, including PEP treatment, beneficial ownership, and real-time suspicious transaction reporting
As this regulatory pressure builds, the firms that embrace auditable, explainable, ethically grounded AI systems will lead, not lag.
They will be able to demonstrate to regulators, clients, and boards not just that they are using AI, but that they are using it responsibly, transparently, and in full alignment with Canadian law.
The Work I Do. And an Invitation to Learn
This is the work I’ve moved into.
Helping compliance‑intensive Canadian organizations adopt AI to strengthen integrity, not compromise it.
Bridging the gap between:
Regulatory obligations
Technical capability
Real‑world risk
If you’re a professional in wealth, tax, MICs, accounting, real estate, fintech, or financial services, and you’re:
Exploring AI for compliance, risk management, or client onboarding
Wrestling with explainability, model governance, or data ethics
Trying to understand what the next 3 to 5 years of AI regulation will mean for your firm
Let’s connect.
No hype.
No black boxes.
Just a chance to think clearly about what responsible, explainable AI could look like in your world before it’s imposed by law.
info@learnova.tech