There is a lot of conversation in the industry about the inefficiencies of “traditional” rules-based monitoring systems, Alert-to-SAR ratios, and the problem of high false positive rates. Let me add to that conversation by throwing out what could be some controversial observations and suggestions …
Current Rules-Based Transaction Monitoring Systems – are they really that inefficient?
For the last few years, AML experts have been stating that the rules-based or typology-driven transaction monitoring strategies deployed over the last 20 years are not effective, with high false positive rates (95% false positives!) and enormous staffing costs to review and disposition all of the alerts. Should these statements be challenged? Is it the fact that transaction monitoring strategies are rules-based or typology-driven that drives inefficiencies, or is it the fear of missing something that drives the tuning of those strategies? Put another way, if we tuned those strategies so that they produced only SARs that law enforcement was interested in, we wouldn’t have high false positive rates and high staffing costs. Graham Bailey, Global Head of Financial Crimes Analytics at Wells Fargo, believes it is a combination of basic rules-based strategies coupled with the fear of missing a case. He writes that some banks have created their staffing and cost problems by failing to tune their strategies, and by “throwing orders of magnitude higher resources at their alerting.” He notes that this has a “double negative impact” because “you then have so many bad alerts in some banks that they then run into investigators’ ‘repetition bias’, where an investigator has had so many bad alerts that they assume the next one is already bad” and they don’t file a SAR. So not only are SAR-to-alert rates low, you also run the risk of missing the good cases.
After 20+ years in the AML/CTF field – designing, building, running, tuning, and revising programs in multiple global banks – I am convinced that rules-based transaction monitoring and customer surveillance systems, running against all of the data and information available to a financial institution, managed and tuned by innovative, creative, courageous financial crimes subject matter experts, can result in an effective, efficient, proactive program that both provides timely, actionable intelligence to law enforcement and meets and exceeds all regulatory obligations. Can cloud-based, cross-institutional, machine learning-based technologies assist in those efforts? Yes – if they are properly deployed, run against all of the data and information available to the institution, and managed and tuned by those same innovative, creative, courageous financial crimes subject matter experts.
For more, see “False Positive Rates”, below …
Alert to SAR Ratios – is that a ratio that we should be focused on?
A recent Mid-Size Bank Coalition of America (MBCA) survey found the average MBCA bank had: 9,648,000 transactions/month being monitored, resulting in 3,908 alerts/month (0.04% of transactions alerted), resulting in 348 cases being opened (8.9% of alerts became a case), resulting in 108 SARs being filed (31% of cases or 2.8% of alerts). Note that the survey didn’t ask whether any of those SARs were of interest or useful to law enforcement. Some of the mega banks indicate that law enforcement shows interest in (through requests for supporting documentation or grand jury subpoenas) 6% – 8% of SARs.
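To make the arithmetic of that funnel explicit, here is a minimal sketch (in Python, hard-coding the survey figures quoted above purely for illustration) that reproduces each conversion rate:

```python
# Illustrative only: reproduces the monitoring funnel implied by the MBCA survey
# figures quoted above (monthly averages for a typical mid-size bank).
transactions = 9_648_000   # transactions monitored per month
alerts = 3_908             # alerts generated per month
cases = 348                # cases opened from those alerts
sars = 108                 # SARs filed from those cases

print(f"Alert rate:     {alerts / transactions:.4%}")  # ~0.04% of transactions alert
print(f"Case-to-alert:  {cases / alerts:.1%}")         # ~8.9% of alerts become cases
print(f"SAR-to-case:    {sars / cases:.1%}")           # ~31% of cases produce a SAR
print(f"SAR-to-alert:   {sars / alerts:.1%}")          # ~2.8% of alerts produce a SAR
```

Every one of those ratios measures internal throughput; none of them says anything about whether the 108 SARs were of any use to law enforcement.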
So I argue that the Alert/SAR and even Case/SAR ratios (in the case of Wells, Package/Case and Package/SAR) are all of interest, but tracking to SARs filed is a little like a car manufacturer tracking how many cars it builds but not how many it sells, how well those cars perform, how long they last, or how popular they are. The better measure for AML programs is “SARs purchased” – SARs that provide value to law enforcement.
How do you determine whether a SAR provides value to Law Enforcement? One way would be to ask Law Enforcement and hope you get an answer; that could prove to be difficult. Can you somehow measure Law Enforcement interest in a SAR? Many banks do that by tracking grand jury subpoenas received on prior SAR suspects, Law Enforcement requests for supporting documentation, and other formal and informal requests for SARs and SAR-related information. As I note above, an Alert-to-SAR rate may not be a good measure of whether an alert is, in fact, “positive”. What may be relevant is an Alert-to-TSV SAR rate (see my previous article for more detail on TSV SARs). What is a “TSV SAR”? A SAR that has Tactical or Strategic Value to Law Enforcement, where the value is determined by Law Enforcement providing a response or feedback to the filing financial institution, within five years of the filing of the SAR, that the SAR provided tactical value (it led to or supported a particular case) or strategic value (it contributed to or confirmed a typology). If the filing financial institution does not receive a TSV SAR response or feedback from law enforcement or FinCEN within five years of filing a SAR, it can conclude that the SAR had no tactical or strategic value to law enforcement or FinCEN, and may factor that into decisions about whether to change or maintain the underlying alerting methodology. Over time, the financial institution could eliminate those alerts that were not providing timely, actionable intelligence to law enforcement, and when that information is shared across the industry, others could also reduce their false positive rates.
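To make that feedback loop concrete, here is a minimal sketch of how a financial institution might track TSV SAR feedback against the alerting rules that generated the SARs. All field and rule names are hypothetical, and the five-year window follows the definition above:

```python
# Hypothetical sketch: flag alerting rules whose SARs have all aged past the
# five-year TSV feedback window without any law enforcement / FinCEN response.
from datetime import date, timedelta

FEEDBACK_WINDOW = timedelta(days=5 * 365)

# SARs keyed by the alerting rule (scenario) that generated them; dates are illustrative
filed_sars = [
    {"rule": "structuring_v2", "filed": date(2017, 3, 1),  "tsv_feedback": date(2019, 6, 10)},
    {"rule": "rapid_movement", "filed": date(2016, 5, 20), "tsv_feedback": None},
    {"rule": "rapid_movement", "filed": date(2017, 9, 2),  "tsv_feedback": None},
]

def rules_without_tsv_value(sars, as_of):
    """Return rules where every SAR is past the five-year window with no TSV feedback."""
    expired_by_rule = {}
    for s in sars:
        expired = s["tsv_feedback"] is None and (as_of - s["filed"]) > FEEDBACK_WINDOW
        expired_by_rule.setdefault(s["rule"], []).append(expired)
    return [rule for rule, flags in expired_by_rule.items() if all(flags)]

print(rules_without_tsv_value(filed_sars, as_of=date(2023, 1, 1)))  # ['rapid_movement']
```

Rules that surface in that list become candidates for tuning or retirement.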
Which leads to …
False Positive Rates – if 95% is bad … what’s good?
There is a lot of lamenting, and a lot of axiomatic statements, about high false positive rates for AML alerts: 95% or even 98% false positive rates. I’d make three points.
First, vendors selling their latest products, touting machine learning and artificial intelligence as the solution to high false positive rates, are doing what they should be doing: convincing consumers that the current product is outdated and ill-equipped for its purpose by touting the next new product. I argue that high false positive rates are not caused by the current rules-based technologies; rather, they are caused by inexperienced AML enthusiasts or overwhelmed AML experts applying rules that are too simple against data that is mislabeled, incomplete, or simply wrong, and erring on the side of over-alerting and over-filing for fear of regulatory criticism and sanctions.
If the regulatory problems with AML transaction monitoring were truly technology problems, then the technology providers would be sanctioned by the regulators and prosecutors. But no AML technology provider has ever been publicly sanctioned by regulators or prosecutors, for the simple reason that any issues with AML technology aren’t technology issues: they are operator issues.
Second, are these actually “false” alerts? Rather, they are alerts that, at the present time and based on the information currently available, either (i) do not rise to the level of requiring a complete investigation, or (ii) if completely investigated, do not meet the definition of “suspicious”. Regardless, they are now valuable data points that go back into your monitoring and case systems, are “hibernated”, and may resurface if that account or customer alerts at a later time, or if there is another internally or externally generated reason to investigate that account or customer.
Third, if 95% or 98% false positive rates are bad … what is good? What should the target rate be? I’ll provide some guidance, taken from a Treasury Office of Inspector General (OIG) Report: OIG-17-055 issued September 18, 2017 titled “FinCEN’s information sharing programs are useful but need FinCEN’s attention.” The OIG looked at 314(a) statistics for three years (fiscal years 2010-2012) and found that there were 711 314(a) requests naming 8,500 subjects of interest sent out by FinCEN to 22,000 financial institutions. Those requests came from 43 Law Enforcement Agencies (LEAs), with 79% of them coming from just six LEAs (DEA, FBI, ICE, IRS-CI, USSS, and US Attorneys’ offices). Those 711 requests resulted in 50,000 “hits” against customer or transaction records by 2,400 financial institutions.
To analogize those 314(a) requests and responses to monitoring alerts, there were 2,400 “alerts” (financial institutions with positive matches) out of 22,000 “transactions” (total financial institutions receiving the 314(a) requests). That is an 11% hit rate or, arguably, an 89% false positive rate. And keep in mind that in order to be included in a 314(a) request, the Law Enforcement Agency must certify to FinCEN that the target “is engaged in, or is reasonably suspected based on credible evidence of engaging in, terrorist activity or money laundering.” So Law Enforcement considered all 8,500 of the targets in the 711 requests to be active terrorists or money launderers, and 11% of the financial institutions positively responded.
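The arithmetic behind that analogy, again as a minimal illustrative sketch using the OIG figures quoted above:

```python
# Reproduces the 314(a) "hit rate" arithmetic from the OIG report figures above.
institutions_receiving = 22_000   # financial institutions receiving the 314(a) requests
institutions_with_hits = 2_400    # institutions reporting at least one positive match

hit_rate = institutions_with_hits / institutions_receiving
print(f"Hit rate:                 {hit_rate:.1%}")      # ~10.9%, rounded to 11% above
print(f"Implied 'false positive': {1 - hit_rate:.1%}")  # ~89.1%
```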
With that, one could argue that a “hit rate” of 10% to 15% could be optimal for any reasonably designed, reasonably effective AML monitoring application.
But a better target rate for machine-generated alerts is the rate generated by humans. Bank employees – whether bank tellers, relationship managers, or back-office personnel – all have the regulatory obligation of reporting unusual activity or transactions to the internal bank team that is responsible for managing the AML program and filing SARs. For the twenty-plus years I was a BSA Officer or head of investigations at large multi-national US financial institutions, I found that those human-generated referrals resulted in a SAR roughly 40% to 50% of the time.
The alert-to-SAR goal for machine-based alert generation systems, then, should be to approach the 40% to 50% referral-to-SAR ratio of human-based referral programs.
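To put the gap in perspective, here is a quick, purely illustrative comparison – reusing the MBCA monthly averages from earlier as the machine-generated figures – showing how far a typical program sits from that human benchmark:

```python
# Illustrative only: compares a machine alert-to-SAR rate (using the MBCA monthly
# averages quoted earlier) against the 40%-50% human referral-to-SAR benchmark.
human_benchmark_low, human_benchmark_high = 0.40, 0.50

machine_alerts, machine_sars = 3_908, 108
machine_rate = machine_sars / machine_alerts          # ~2.8%

shortfall = human_benchmark_low / machine_rate
print(f"Machine alert-to-SAR rate: {machine_rate:.1%}")
print(f"Roughly {shortfall:.0f}x below the low end of the human benchmark")
```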