
OCC Comptroller Talks About AML “False Negatives” and Technology

Whether “False Negatives” or “False Positives”, the Answer May Not Lie Just in New or Improved Technologies, but in an Improved Mix of New Technologies and More Forgiving Regulatory Requirements


On January 24, 2020, Jo Ann Barefoot had Joseph Otting, Comptroller of the Currency, as her guest on her podcast. The link is available at Barefoot Otting Podcast. Among other things, the Comptroller talked about BSA/AML, or as he put it, “AML/BSA”.

At approximately the 12:00 mark of the podcast, the Comptroller had this to say about BSA/AML:

“Are we doing it the most effective way? … what we’re doing, is it helping us catch the bad guys as they’re coming into the banking industry and taking advantage of it?”

In a discussion on technology trends, the Comptroller spoke about how banks are using new technologies to learn about their customers and for risk management. Beginning at the 20:45 mark, he stated:

“Today our AML/BSA relies upon a lot of systems to kick out a lot of data that often has an enormous amount of false negatives associated with it that requires a lot of resources to go through that false negative, and I think if we can get to the point where we have better fine-tuned data with artificial intelligence about tracking information is and the type of activities that are occurring, I think ultimately we’ll have better risk management practices within the institutions as well.”

Having been a guest on Jo Ann’s podcast myself (see Richards Podcast), I know how unforgiving the literal transcript of a podcast can be, so it is fair to write that the Comptroller’s point was that the current systems kick out a lot of false negatives that require a lot of manual investigations; and better data and artificial intelligence could reduce those false negatives, resulting in greater efficiencies and better risk management.

But it is curious that he refers to “false negatives” – which are transactions that do not alert but should have alerted – rather than “false positives” – which are transactions that did alert and, after being investigated, prove not to be suspicious and therefore falsely alerted. The Comptroller has many issues to deal with, and it’s easy to confuse false negatives with false positives. In fairness, his ultimate point was well made: the current regulatory requirements and expectations around AML monitoring, alerting, investigations, and reporting have resulted in a regime that is not efficient (he didn’t address the effectiveness of the SAR regime).

At the 21:30 mark, Jo Ann Barefoot commented on the recent FinTech Hackathon she hosted that looked at using new technology to make suspicious activity monitoring and reporting more efficient and effective, and stated that “we need to get rid of the false flags in the system” (I got the sense that she was uncomfortable with using the Comptroller’s phrase of “false negatives” – Jo Ann is well-versed in BSA and AML and familiar with the issue of high rates of false positives). Comptroller Otting replied:

“If you think just in the SARs space, that 7 percent of transactions kind of hit the tripwire, and then ultimately about 2 percent generally have SARs filed against them, that 5 percent is an enormous amount of resources that organizations are dedicating towards that compliance function that I’m convinced that with new technology we can improve that process.”

Again, podcast transcripts can be unforgiving, and I believe the point that the Comptroller was making was that a small percentage of transactions are alerted on by AML monitoring systems, and an even smaller percentage of those alerts are eventually reported in SARs. His percentages, and math, may not foot back to any verifiable data, but his point is sound: the current AML monitoring, alerting, investigations, and reporting system isn’t as efficient as it should be and could be (again, he didn’t address its effectiveness).

I don’t believe that the inefficiencies in the current AML system are wholly caused by outdated or poorly deployed technology. Rather, financial institutions are (rightfully) deathly afraid of a regulatory sanction for missing a potentially suspicious transaction, and will err on the side of alerting and filing on much more than is truly suspicious. For larger institutions, running at a 95% false positive rate rather than an 85% or 75% rate may cost a few million dollars more (I address what a good false positive rate might be in one of the articles below), but those institutions know that by doing so, they avoid the hundreds of millions of dollars in potential fines for missing that one big case, or series of cases, that their regulator, with hindsight, determines should have been caught.

Running an AML monitoring and surveillance program that produces 95% false positives is not “helping us catch the bad guys that are taking advantage of the banking industry” as the Comptroller noted at the beginning of the podcast. Perhaps a renewed and coordinated, cooperative effort between technologists, bankers, BSA/AML professionals, law enforcement, and the Office of the Comptroller of the Currency can lead us to a monitoring/surveillance regime enhanced with more effective technologies and better feedback on what is providing tactical and strategic value to law enforcement … and, hopefully, tempered by a more forgiving regulatory approach.

Below are two articles I’ve written on monitoring, false positive rates, and the use of artificial intelligence, among other things. Let’s work together to get to a more effective and efficient AML regime.

Rules-Based Monitoring, Alert to SAR Ratios, and False Positive Rates – Are We Having The Right Conversations?

This article was published on December 20, 2018. It is available at RegTech Article – Are We Having the Right Conversations?

There is a lot of conversation in the industry about the inefficiencies of “traditional” rules-based monitoring systems, Alert-to-SAR ratios, and the problem of high false positive rates. Let me add to that conversation by throwing out what could be some controversial observations and suggestions …

Current Rules-Based Transaction Monitoring Systems – are they really that inefficient?

For the last few years, AML experts have been stating that the rules-based or typology-driven transaction monitoring strategies that have been deployed for the last 20 years are not effective, with high false positive rates (95% false positives!) and enormous staffing costs to review and disposition all of the alerts. Should these statements be challenged? Is it the fact that the transaction monitoring strategies are rules-based or typology-driven that drives inefficiencies, or is it the fear of missing something driving the tuning of those strategies? Put another way, if we tuned those strategies so that they only produced SARs that law enforcement was interested in, we wouldn’t have high false positive rates and high staffing costs. Graham Bailey, Global Head of Financial Crimes Analytics at Wells Fargo, believes it is a combination of basic rules-based strategies coupled with the fear of missing a case. He writes that some banks have created their staffing and cost problems by failing to tune their strategies, and by “throwing orders of magnitude higher resources at their alerting.” He notes that this has a “double negative impact” because “you then have so many bad alerts in some banks that they then run into investigators’ ‘repetition bias’, where an investigator has had so many bad alerts that they assume the next one is already bad” and they don’t file a SAR. So not only are the SAR/alert rates low, you also run the risk of missing the good cases.

After 20+ years in the AML/CTF field – designing, building, running, tuning, and revising programs in multiple global banks – I am convinced that rules-based transaction monitoring and customer surveillance systems, running against all of the data and information available to a financial institution, managed and tuned by innovative, creative, courageous financial crimes subject matter experts, can result in an effective, efficient, proactive program that both provides timely, actionable intelligence to law enforcement and meets and exceeds all regulatory obligations. Can cloud-based, cross-institutional, machine learning-based technologies assist in those efforts? Yes! If properly deployed and if running against all of the data and information available to a financial institution, managed and tuned by innovative, creative, courageous financial crimes subject matter experts.

Alert to SAR Ratios – is that a ratio that we should be focused on?

A recent Mid-Size Bank Coalition of America (MBCA) survey found the average MBCA bank had: 9,648,000 transactions/month being monitored, resulting in 3,908 alerts/month (0.04% of transactions alerted), resulting in 348 cases being opened (8.9% of alerts became a case), resulting in 108 SARs being filed (31% of cases or 2.8% of alerts). Note that the survey didn’t ask whether any of those SARs were of interest or useful to law enforcement. Some of the mega banks indicate that law enforcement shows interest in (through requests for supporting documentation or grand jury subpoenas) 6% – 8% of SARs.
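To make the survey’s funnel arithmetic easy to reproduce, here is a minimal sketch (in Python, with the survey averages hard-coded; the variable names are mine) that computes the same ratios:

```python
# Reproduce the alert funnel ratios from the MBCA survey averages cited above.
transactions_per_month = 9_648_000
alerts_per_month = 3_908
cases_per_month = 348
sars_per_month = 108

alert_rate = alerts_per_month / transactions_per_month   # share of transactions that alert
case_rate = cases_per_month / alerts_per_month            # share of alerts that become cases
sar_per_case = sars_per_month / cases_per_month           # share of cases that end in a SAR
sar_per_alert = sars_per_month / alerts_per_month         # share of alerts that end in a SAR

print(f"Alert rate:      {alert_rate:.2%}")    # ~0.04%
print(f"Alerts to cases: {case_rate:.1%}")     # ~8.9%
print(f"Cases to SARs:   {sar_per_case:.0%}")  # ~31%
print(f"Alerts to SARs:  {sar_per_alert:.1%}") # ~2.8%
```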

So I argue that the Alert/SAR and even Case/SAR (in the case of Wells, Package/Case and Package/SAR) ratios are all of interest, but tracking to SARs filed is a little bit like a car manufacturer tracking how many cars it builds but not how many cars it sells, or how well those cars perform, how long they last, and how popular they are. The better measure for AML programs is “SARs purchased”, or SARs that provide value to law enforcement.

How do you determine whether a SAR provides value to Law Enforcement? One way would be to ask Law Enforcement, and hope you get an answer. That could prove to be difficult.  Can you somehow measure Law Enforcement interest in a SAR?  Many banks do that by tracking grand jury subpoenas received to prior SAR suspects, Law Enforcement requests for supporting documentation, and other formal and informal requests for SARs and SAR-related information. As I write above, an Alert-to-SAR rate may not be a good measure of whether an alert is, in fact, “positive”. What may be relevant is an Alert-to-TSV SAR rate (see my previous article for more detail on TSV SARs).  What is a “TSV SAR”? A SAR that has Tactical or Strategic Value to Law Enforcement, where the value is determined by Law Enforcement providing a response or feedback to the filing financial institution within five years of the filing of the SAR that the SAR provided tactical (it led to or supported a particular case) or strategic (it contributed to or confirmed a typology) value. If the filing financial institution does not receive a TSV SAR response or feedback from law enforcement or FinCEN within five years of filing a SAR, it can conclude that the SAR had no tactical or strategic value to law enforcement or FinCEN, and may factor that into decisions whether to change or maintain the underlying alerting methodology. Over time, the financial institution could eliminate those alerts that were not providing timely, actionable intelligence to law enforcement, and when that information is shared across the industry, others could also reduce their false positive rates.
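To make that feedback loop concrete, here is a minimal, hypothetical sketch of how a filing institution might track TSV feedback against the five-year window described above and measure which alerting rules are producing TSV SARs. The record fields, rule names, and dates are illustrative assumptions, not a description of any actual bank system or FinCEN interface.

```python
from datetime import date, timedelta
from collections import defaultdict

# Hypothetical SAR log: filing date, the alerting rule that produced the SAR, and the
# date (if any) that law enforcement / FinCEN reported tactical or strategic value.
sar_log = [
    {"sar_id": "S-001", "filed": date(2014, 3, 1),  "rule": "structuring_v1", "tsv_feedback": date(2017, 6, 9)},
    {"sar_id": "S-002", "filed": date(2014, 5, 7),  "rule": "wire_velocity",  "tsv_feedback": None},
    {"sar_id": "S-003", "filed": date(2015, 1, 15), "rule": "wire_velocity",  "tsv_feedback": None},
]

FEEDBACK_WINDOW = timedelta(days=5 * 365)  # the five-year window proposed above (approximate)

def tsv_rate_by_rule(sar_log, as_of):
    """Per alerting rule, the share of SARs past the feedback window that drew TSV feedback."""
    filed, valuable = defaultdict(int), defaultdict(int)
    for sar in sar_log:
        if as_of - sar["filed"] < FEEDBACK_WINDOW:
            continue  # window still open; too early to judge this SAR
        filed[sar["rule"]] += 1
        if sar["tsv_feedback"] is not None:
            valuable[sar["rule"]] += 1
    return {rule: valuable[rule] / filed[rule] for rule in filed}

# Rules whose SARs never earn TSV feedback become candidates for re-tuning or retirement.
print(tsv_rate_by_rule(sar_log, as_of=date(2020, 1, 24)))
# e.g. {'structuring_v1': 1.0, 'wire_velocity': 0.0}
```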

Which leads to …

False Positive Rates – if 95% is bad … what’s good?

There is a lot of lamenting, and a lot of axiomatic statements, about high false positive rates for AML alerts: 95% or even 98% false positive rates.  I’d make three points.

First, vendors selling their latest products, touting machine learning and artificial intelligence as the solution to high false positive rates, are doing what they should be doing: convincing consumers that their current product is outdated and ill-equipped for its purpose by touting the next, new product. I argue that high false positive rates are not caused by the current rules-based technologies; rather, they’re caused by inexperienced AML enthusiasts or overwhelmed AML experts applying rules that are too simple against data that is mis-labeled, incomplete, or simply wrong, and erring on the side of over-alerting and over-filing for fear of regulatory criticism and sanctions.

If the regulatory problems with AML transaction monitoring were truly technology problems, then the technology providers would be sanctioned by the regulators and prosecutors.  But an AML technology provider has never been publicly sanctioned by regulators or prosecutors … for the simple reason that any issues with AML technology aren’t technology issues: they are operator issues.

Second, are these actually “false” alerts? Arguably not: they are alerts that, at the present time and based on the information currently available, either (i) do not rise to the level of requiring a complete investigation, or (ii) if completely investigated, do not meet the definition of “suspicious”. Regardless, they are now valuable data points that go back into your monitoring and case systems and are “hibernated”, possibly coming back if that account or customer alerts at a later time, or if there is another internally or externally generated reason to investigate that account or customer.
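A small, hypothetical sketch of that “hibernation” idea, assuming a simple per-customer store of closed alerts (the structure and field names are mine, purely for illustration):

```python
from collections import defaultdict

# Hypothetical store of alerts closed without a SAR ("hibernated"), keyed by customer.
hibernated = defaultdict(list)

def close_without_sar(customer_id, alert):
    """An alert that did not warrant a SAR today is hibernated, not discarded."""
    hibernated[customer_id].append(alert)

def open_new_alert(customer_id, new_alert):
    """A new alert resurfaces the customer's hibernated history for the investigator."""
    prior = hibernated.get(customer_id, [])
    return {"alert": new_alert, "prior_hibernated_alerts": prior}

close_without_sar("C-42", {"rule": "cash_structuring", "date": "2019-03-02"})
case_package = open_new_alert("C-42", {"rule": "wire_velocity", "date": "2020-01-10"})
print(case_package["prior_hibernated_alerts"])  # the 2019 alert comes back into scope
```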

Third, if 95% or 98% false positive rates are bad … what is good? What should the target rate be? I’ll provide some guidance, taken from a Treasury Office of Inspector General (OIG) Report: OIG-17-055 issued September 18, 2017 titled “FinCEN’s information sharing programs are useful but need FinCEN’s attention.” The OIG looked at 314(a) statistics for three years (fiscal years 2010-2012) and found that there were 711 314(a) requests naming 8,500 subjects of interest sent out by FinCEN to 22,000 financial institutions. Those requests came from 43 Law Enforcement Agencies (LEAs), with 79% of them coming from just six LEAs (DEA, FBI, ICE, IRS-CI, USSS, and US Attorneys’ offices). Those 711 requests resulted in 50,000 “hits” against customer or transaction records by 2,400 financial institutions.

To analogize those 314(a) requests and responses to monitoring alerts, there were 2,400 “alerts” (financial institutions with positive matches) out of 22,000 “transactions” (total financial institutions receiving the 314(a) requests). That is an 11% hit rate or, arguably, an 89% false positive rate. And keep in mind that in order to be included in a 314(a) request, the Law Enforcement Agency must certify to FinCEN that the target “is engaged in, or is reasonably suspected based on credible evidence of engaging in, terrorist activity or money laundering.” So Law Enforcement considered that all 8,500 of the targets in the 711 requests were active terrorists or money launderers, and 11% of the financial institutions positively responded.
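The arithmetic behind the analogy fits in a few lines; this sketch simply recomputes the hit rate from the OIG figures cited above:

```python
# 314(a) figures from Treasury OIG report OIG-17-055 (FY2010-2012), as cited above.
institutions_receiving_requests = 22_000   # the "transactions" in the analogy
institutions_with_hits = 2_400             # the "alerts" in the analogy

hit_rate = institutions_with_hits / institutions_receiving_requests
print(f"Hit rate: {hit_rate:.0%}  /  'false positive' rate: {1 - hit_rate:.0%}")
# Hit rate: 11%  /  'false positive' rate: 89%
```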

With that, one could argue that a “hit rate” of 10% to 15% could be optimal for any reasonably designed, reasonably effective AML monitoring application.

But a better target rate for machine-generated alerts is the rate generated by humans. Bank employees – whether bank tellers, relationship managers, or back-office personnel – all have the regulatory obligation of reporting unusual activity or transactions to the internal bank team that is responsible for managing the AML program and filing SARs. For the twenty-plus years I was a BSA Officer or head of investigations at large multi-national US financial institutions, I found that those human-generated referrals resulted in a SAR roughly 40% to 50% of the time.

The alert-to-SAR goal for machine-based alert generation systems should be to approach the 40% to 50% referral-to-SAR ratio of human-based referral programs.

Flipping the Three AML Ratios with Machine Learning and Artificial Intelligence (why Bartenders and AML Analysts will survive the AI Apocalypse)

This article was posted on December 14, 2018. It remains the most viewed article on my website. It is available at RegTech Article – Flipping the Ratios

Machine Learning and Artificial Intelligence proponents are convinced – and spend a lot of time trying to convince others – that they will disrupt and revolutionize the current “broken” AML regime. Among other targets within this broken regime is AML alert generation and disposition and reducing the false positive rate (more on false positives in another article!). The result, if we believe the ML/AI community, is a massive reduction in the number of AML analysts that are churning through the hundreds and thousands of alerts, looking for the very few that are “true positives” worthy of being labelled “suspicious” and reported to the government.

But is it that simple? Can the job of AML Analyst be eliminated or dramatically changed – in scope and number of positions – by machine learning and AI? Much has been and continues to be written about the impact of artificial intelligence on jobs. Those writers have categorized jobs along two axes – a Repetitive-to-Creative axis, and an Asocial-to-Social axis – resulting in four “buckets” of jobs, with each bucket of jobs being more or less likely to be disrupted or even eliminated.

A good example is the “Social & Repetitive” job of Bartender: Bartenders spend much of their time doing very routine, repetitive tasks: after taking a drink order, they assemble the correct ingredients in the correct amounts, and put those ingredients in the correct glass, then present the drink to the customer. All of that could be more efficiently and effectively done with an AI-driven machine, with no spillage, no waste, and perfectly poured drinks. So why haven’t we replaced bartenders? Because a good bartender has empathy, compassion, and instinct, and with experience can make sound judgments on what to pour a little differently, when to cut off a customer, when to take more time or less with a customer. A good bartender adds value that a machine simply can’t.

Another example could be the “Asocial & Creative” (or is it “Social & Repetitive”?) job of an AML Analyst: much of an AML Analyst’s time is spent doing very routine, repetitive tasks: reviewing the alert, assembling the data and information needed to determine whether the activity is suspicious, writing the narrative. So why haven’t we replaced AML Analysts? Because a good Analyst, like a good bartender, has empathy, compassion, and instinct, and with experience can make sound judgments on what to investigate a little differently, when to cut off an investigation, when to take more time or less on an investigation. A good Analyst adds value that a machine simply can’t.

Where AI and Machine Learning, and Robot Process Automation, can really help is by flipping the three currently inefficient AML ratios:

  1. The False Positive Ratio – the currently accepted, but highly axiomatic and anecdotal, ratio is that 95% to 98% of alerts do not result in SARs, or are “false positives” … although no one has ever boldly stated what an effective or acceptable false positive rate is (even with ROC curves providing some empirical assistance), perhaps the ML/AI/RPA communities can flip this ratio so that 95% of alerts result in SARs. If they can do this, they can also convince the regulatory community that this new ratio meets regulatory expectations (because as I’ll explain in an upcoming article, the false positive ratio problem may be more of a regulatory problem than a technology problem).
  2. The Forgotten SAR Ratio – like false positive rates, there are anecdotes and some evidence that very few SARs provide tactical or strategic value to law enforcement. Recent Congressional testimony suggests that ~20% of SARs provide TSV (tactical or strategic value) to law enforcement … perhaps the ML/AI/RPA communities can help to flip this ratio so that 80% of SARs are TSV SARs. This also will take some effort from the regulatory and law enforcement communities.
  3. The Analysts’ Time Ratio – 90% of an AML Analyst’s time can be spent simply assembling the data, information, and documents needed to investigate a case, and only 10% of their time thinking and using their empathy, compassion, instinct, judgment, and experience to make good decisions and file TSV SARs … perhaps the ML/AI/RPA communities can help to flip this ratio so that Analysts spend 10% of their time assembling and 90% of their time thinking.

We’ve seen great strides in the AML world in the last 5-10 years when it comes to applying machine learning and creative analytics to the problems of AML monitoring, alerting, triaging, packaging, investigations, and reporting. My good friend and former colleague Graham Bailey at Wells Fargo designed and deployed ML and AI systems for AML as far back as 2008-2009, and the folks at Verafin have deployed cloud-based machine learning tools and techniques to over 1,600 banks and credit unions.

I’ve outlined three rather audacious goals for the machine learning/artificial intelligence/robotic process automation communities:

  1. The False Positive Ratio – flip it from 95% false positives to 5% false positives
  2. The Forgotten SAR Ratio – flip it from 20% TSV SARs to 80% TSV SARs
  3. The Analysts’ Time Ratio – flip it from 90% gathering data to 10% gathering data

Although many new AML-related jobs are being added – data scientist, model validator, etc. – and many existing AML-related jobs are changing, I am convinced that the job of AML Analyst will always be required. Hopefully, it will shift over time from being predominantly that of a gatherer of information and more of a hunter of criminals and terrorists. But it will always exist. If not, I can always fall back on being a Bartender. Maybe …

FinCEN’s BSA Value Project Is a Year Old … How Is It Going?

In January 2019, FinCEN launched its “BSA Value Project” – an effort to “catalogue the value of BSA reporting across the entire value chain of its creation and use” and “result in a comprehensive and quantitative understanding of the broad value of BSA reporting and other BSA information to all types of consumers of that information” (quoting the prepared remarks of FinCEN Director Kenneth A. Blanco delivered at the 12th annual Las Vegas AML Conference for casinos and card clubs, August 13, 2019, available at Director Blanco Remarks 8-13-2019).

FinCEN is now one year into the BSA Value Project … how is that project going?

Again, quoting from Director Blanco’s remarks last August, “so far, the study has confirmed there are extensive and extremely varied uses of BSA information across all stakeholders (including by the private sector) consistent with their missions.”

It appears that there are, indeed, extensive uses of BSA information by the public sector, as Director Blanco has told us that almost one in four FBI and IRS-CI investigations use BSA data. Director Blanco made the following remarks (again, on August 13, 2019) on the usefulness of BSA data:

“All FBI subject names are run against the BSA database. More than 21 percent of FBI investigations use BSA data, and for some types of crime, like organized crime, nearly 60 percent of FBI investigations use BSA data. Roughly 20 percent of FBI international terrorism cases utilize BSA data. The Internal Revenue Service-Criminal Investigation section alone conducts more than 126,000 BSA database inquiries each year. And as much as 24 percent of its investigations involving criminal tax, money laundering, and other BSA violations are directly initiated by, or associated with, a BSA report.

In addition to providing controlled access to the data to law enforcement, FinCEN also proactively pushes certain information to them on critical topics. On a daily basis, FinCEN takes the suspicious activity reports and we run them through several categories of business rules or algorithms to identify reports that merit further review by our analysts. Our terrorist financing-related business rules alone generate over 1,000 matches each month for review and further dissemination to our law enforcement and regulatory partners in what we call a Flash report. These Flash reports enable the FBI, for example, to identify, track, and disrupt the activities of potential terrorist actors. It is incredibly valuable information.”

Four months later, in prepared remarks delivered at the American Bankers Association/American Bar Association Financial Crimes Conference (December 10, 2019, available at Director Blanco at ABA December 10 2019) Director Blanco provided another perspective on the public sector use of BSA data:

“FinCEN grants more than 12,000 agents, analysts, and investigative personnel from over 350 unique federal, state, local, and tribal agencies across the United States with direct access to this critical reporting by financial institutions. There are approximately 30,000 searches of the BSA data each day. Further, there are more than 100 Suspicious Activity Report (SAR) review teams and financial crimes task forces across the country, bringing together prosecutors and investigators from different agencies to review BSA reports. Collectively, these teams reviewed approximately 60% of all SARs filed. Each day, FinCEN, law enforcement, regulators, and others query this data—that equates to an average of 7.4 million queries per year. Those queries identify an average of 18.2 million filings that are responsive or useful to ongoing investigations, examinations, victim identification, analysis and network development, sanctions development, and U.S. national security activities, among many, many other uses that help protect our nation, deter crime, and save lives.”

But Which BSA Filings are Providing Real Value to Law Enforcement?

There is no doubt that the (roughly) 20 million BSA reports that are filed each year provide great value to law enforcement. But questions remain about the utility of those filings, and the costs of preparing them. Some of those questions include: (i) which of those reports provide value? (ii) what kind of value is being provided – tactical and/or strategic? (iii) can financial institutions eliminate the “no value” filings and deploy those resources to higher-value filings? (iv) can financial institutions automate the preparation and filing of the low value filings and deploy those resources to the highest-value filings?

FinCEN’s BSA Value Project, and its “Value Quantification Model”, May Answer Those Questions

In his December 2019 remarks, Director Blanco updated us on the BSA Value Project and revealed the “value quantification model” FinCEN is building:

“FinCEN is using the BSA Value Project to improve how we communicate the value and use of BSA information, and to develop metrics to track and measure the value of its use on an ongoing basis. The project has involved the gathering and review of reams of data, statistics, case studies, and other information, as well as holding detailed interviews with a wide range of government and private-sector stakeholders, including many of the organizations in this room today. That information has informed us about how each stakeholder uses and gains value from BSA reporting and the value-add activities of other stakeholders. This “value chain” of BSA reporting is being developed for each type of stakeholder: FinCEN, law enforcement, industry, regulators, and others.

We are validating these results with the agencies and firms that have contributed to their development, and soon we will be talking with some of you about the value chain that has been developed for financial institutions to ensure it captures every aspect properly.

As of today, the team has identified over 500 different metrics that are being incorporated into the valuation model. We expect the model to show us the relative value of specific forms and even key fields—what is seen as more valuable and what is seen as less valuable.

    • This value quantification model will help us assess how the regulatory and compliance changes we are considering making with our government partners will affect the value of BSA reporting—we want any changes to lead to more effective outcomes and increase the value of BSA reporting, not just provide greater industry efficiency.
    • It will help us provide you better and more targeted feedback on the information you report so you can identify whether it is the automated tools and databases or the more manual work of your internal financial intelligence units and investigators that is driving that value creation in specific instances.
    • The project also is showing us specific challenges that we need to address, particularly in the area of communication and the development of shared AML priorities on which we can focus our efforts.

I also want to make very clear that the value of BSA data is not just confined to FinCEN, law enforcement, or the government. Industry also benefits. Financial institutions and other reporting entities derive important value from their BSA compliance and reporting activities. Throughout the study, industry consistently has confirmed that their BSA obligations, while incurring costs, also help them:

    • Identify and exit bad actors to avoid reputational and financial risks;
    • Manage risks more effectively to permit greater responsible revenue generation;
    • Secure partnerships and investment opportunities domestically and internationally in a responsible, risk-sensitive manner, something particularly important for emerging entrants in the financial services arena; and, of course;
    • Avoid financial, operational, and reputational costs from non-compliance.

I want to stress that we intend to be as transparent and public facing as possible about the results from this project. FinCEN hopes to show the tremendous variety of uses we have for your reporting.”

Conclusion

Kudos to Director Blanco and his FinCEN team for their initiative and efforts around the BSA Value Project. The results of the Project, notably the BSA Value Quantification Model, could be a game-changer for the financial industry’s BSA/AML programs. The industry is being inundated with calls to apply machine learning and artificial intelligence to make their AML programs more effective and efficient. But if those institutions don’t know which of their filings provide value, and arguably only one in four is providing value, they cannot effectively use machine learning or AI.

The entire industry is looking forward to the results of FinCEN’s BSA Value Project!

For other articles on the need for better reporting on the utility of SAR filings, see:

BSA Value Project August 19 2019

SAR Feedback 314(d) – July 30 2019

BSA Reports and Federal Criminal Cases – June 5 2019

The TSV SAR Feedback Loop – June 4 2019

Like Sam Loves Free Fried Chicken, Law Enforcement Loves “Free” Suspicious Activity Reports … But What If Law Enforcement Had to Earn the Right to Use the Private Sector’s “Free” SARs?

“Well, I’m here in the freezing cold gettin’ free chicken sandwiches. Because the food tastes great. I mean, it’s chicken. Fried chicken. I like fried chicken.”

Eleven-year-old Sam Caruana of Buffalo, New York waited outside a Chick-fil-A restaurant in the freezing cold in order to be one of the 100 people given free fried chicken for one year (actually, one chicken sandwich a week for fifty-two weeks). In a video that went viral (Sam Caruana YouTube – Free Chicken), young Sam explained that he simply loved fried chicken, and he’d stand in the cold for free fried chicken.

Just as Sam loves free fried chicken, law enforcement loves free Suspicious Activity Reports, or SARs. In the United States, over 30,000 private sector financial institutions – from banks to credit unions, to money transmitters and check cashers, to casinos and insurance companies, to broker dealers and investment advisers – file more than 2,000,000 SARs every year. And it costs those financial institutions billions of dollars to have the programs, policies, procedures, processes, technology, and people to onboard and risk-rate customers, to monitor for and identify unusual activity, to investigate that unusual activity to determine if it is suspicious, and, if it is, to file a SAR with the Treasury Department’s Financial Crimes Enforcement Network, or FinCEN. From there, hundreds of law enforcement agencies across the country, at every level of government, can access those SARs and use them in possible tax, criminal, or other investigations or proceedings. To law enforcement, those SARs are, essentially, free. And like Sam loves free fried chicken, law enforcement loves free SARs. Who wouldn’t?

But should those private sector SARs, that cost billions of dollars to produce, be “free” to public sector law enforcement agencies? Put another way, should the public sector law enforcement agency consumers of SARs need to provide something in return to the private sector producers of SARs?

I say they should. And here’s what I propose: in return for the privilege of accessing and using private sector SARs, law enforcement should pay not with money, but with effort. The public sector consumers of SARs should let the private sector producers know which of those SARs provide tactical or strategic value.

A recent Mid-Size Bank Coalition of America (MBCA) survey found the average MBCA bank had: 9,648,000 transactions/month being monitored, resulting in 3,908 alerts/month (0.04% of transactions alerted), resulting in 348 cases being opened (8.9% of alerts became a case), resulting in 108 SARs being filed (31% of cases or 2.8% of alerts). Note that the survey didn’t ask whether any of those SARs were of interest or useful to law enforcement. Some of the mega banks indicate that law enforcement shows interest in (through requests for supporting documentation or grand jury subpoenas) 6% – 8% of SARs.

I argue that the Alert/SAR and even Case/SAR ratios are all of interest, but tracking to SARs filed is a little bit like a car manufacturer tracking how many cars it builds but not how many cars it sells, or how well those cars perform, how long they last, and how popular they are. And just like the automobile industry measuring how many cars are purchased, the better measure for AML programs is “SARs purchased”, or SARs that provide value to law enforcement.

Also, there is much being written about how machine learning and artificial intelligence will transform anti-money laundering programs. Indeed, ML and AI proponents are convinced – and spend a lot of time trying to convince others – that they will disrupt and revolutionize the current “broken” AML regime. Among other targets within this broken regime is AML alert generation and disposition and reducing the false positive rate. The result, if we believe the ML/AI community, is a massive reduction in the number of AML analysts that are churning through the hundreds and thousands of alerts, looking for the very few that are “true positives” worthy of being labelled “suspicious” and reported to the government. But the fundamental problem that every one of those ML/AI systems has is that they are using the wrong data to train their algorithms and “teach” their machines: they are looking at the SARs that are filed, not the SARs that have tactical or strategic value to law enforcement.

Tactical or Strategic Value Suspicious Activity Reports – TSV SARs

The best measure of an effective and efficient financial crimes program is how well it is providing timely, effective intelligence to law enforcement. And the best measure of that is whether the SARs that are being filed are providing tactical or strategic value to law enforcement. How do you determine whether a SAR provides value to law enforcement? One way would be to ask law enforcement, and hope you get an answer. That could prove to be difficult.  Can you somehow measure law enforcement interest in a SAR?  Many banks do that by tracking grand jury subpoenas received to prior SAR suspects, law enforcement requests for supporting documentation, and other formal and informal requests for SARs and SAR-related information. As I write above, an Alert-to-SAR rate may not be a good measure of whether an alert is, in fact, “positive”. What may be relevant is an Alert-to-TSV SAR rate.

A TSV SAR is one that has either tactical value – it was used in a particular case – or strategic value – it contributed to understanding a typology or trend. And some SARs can have both tactical and strategic value. That value is determined by law enforcement indicating, within seven years of the filing of the SAR (more on that later), that the SAR provided tactical (it led to or supported a particular case) or strategic (it contributed to or confirmed a typology) value.  That law enforcement response or feedback is provided to FinCEN through the same BSA Database interfaces that exist today – obviously, some coding and training will need to be done (for how FinCEN does it, see below). If the filing financial institution does not receive a TSV SAR response or feedback from law enforcement or FinCEN within seven years of filing a SAR, it can conclude that the SAR had no tactical or strategic value to law enforcement or FinCEN, and may factor that into decisions whether to change or maintain the underlying alerting methodology. Over time, the financial institution could eliminate those alerts that were not providing timely, actionable intelligence to law enforcement. And when FinCEN shares that information across the industry, others could also reduce their false positive rates.

FinCEN’s TSV SAR Feedback Loop

FinCEN is working to provide more feedback to the private sector producers of BSA reports. As FinCEN Director Ken Blanco recently stated:[1]

“Earlier this year, FinCEN began the BSA Value Project, a study and analysis of the value of the BSA information we receive. We are working to provide comprehensive and quantitative understanding of the broad value of BSA reporting and other BSA information in order to make it more effective and its collection more efficient. We already know that BSA data plays a critical role in keeping our country strong, our financial system secure, and our families safe from harm — that is clear. But FinCEN is using the BSA Value Project to improve how we communicate the way BSA information is valued and used, and to develop metrics to track and measure the value of its use on an ongoing basis.”

FinCEN receives every SAR. Indeed, FinCEN receives a number of different BSA-related reporting: SARs, CTRs, CMIRs, and Form 8300s. It’s a daunting amount of information. As FinCEN Director Ken Blanco noted in the same speech:

“FinCEN’s BSA database includes nearly 300 million records — 55,000 new documents are added each day. The reporting contributes critical information that is routinely analyzed, resulting in the identification of suspected criminal and terrorist activity and the initiation of investigations.

“FinCEN grants more than 12,000 agents, analysts, and investigative personnel from over 350 unique federal, state, and local agencies across the United States with direct access to this critical reporting by financial institutions. There are approximately 30,000 searches of the BSA data taking place each day. Further, there are more than 100 Suspicious Activity Report (SAR) review teams and financial crimes task forces across the country, which bring together prosecutors and investigators from different agencies to review BSA reports. Collectively, these teams reviewed approximately 60% of all SARs filed.

Each day, law enforcement, FinCEN, regulators, and others are querying this data:  7.4 million queries per year on average. Those queries identify an average of 18.2 million filings that are responsive or useful to ongoing investigations, examinations, victim identification, analysis and network development, sanctions development, and U.S. national security activities, among many, many other uses that protect our nation from harm, help deter crime, and save lives.”

This doesn’t tell us how many of those 55,000 daily reports are SARs, but we do know that in 2018 there were 2,171,173 SARs filed, or about 8,700 every (business) day. And it appears that FinCEN knows which law enforcement agencies access which SARs, and when. And we now know that there are “18.2 million filings that are responsive or useful to ongoing investigations, examinations, victim identification, analysis and network development, sanctions development, and U.S. national security activities” every year. But which filings?

The law enforcement agencies know which SARs provide tactical or strategic value, or both. So if law enforcement finds value in a SAR, it should acknowledge that, and provide that information back to FinCEN. FinCEN, in turn, could provide an annual report to every financial institution that filed, say, more than 250 SARs a year (that’s one every business day, and is more than three times the number filed by the average bank or credit union). That report would be a simple relational database indicating which SARs had either or both tactical or strategic value. SAR filers would then be able to use that information to actually train or tune their monitoring and surveillance systems, and even eliminate those alerting systems that weren’t providing any value to law enforcement.
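As a thought experiment, that “simple relational database” could look something like the sketch below (Python with the standard-library sqlite3 module); the table and column names are hypothetical assumptions, not anything FinCEN has proposed.

```python
import sqlite3

# A hypothetical shape for an annual TSV feedback report to a filing institution.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tsv_feedback (
        bsa_identifier   TEXT PRIMARY KEY,  -- the BSA ID of the filed SAR
        filed_date       TEXT,
        tactical_value   INTEGER,           -- 1 if the SAR led to or supported a case
        strategic_value  INTEGER            -- 1 if the SAR contributed to or confirmed a typology
    )
""")
conn.executemany(
    "INSERT INTO tsv_feedback VALUES (?, ?, ?, ?)",
    [("BSA-0001", "2019-02-11", 1, 0),
     ("BSA-0002", "2019-06-30", 0, 1),
     ("BSA-0003", "2019-09-17", 0, 0)],
)

# The filer can then tune or retire the alerting logic behind the no-value filings.
for row in conn.execute(
    "SELECT bsa_identifier FROM tsv_feedback WHERE tactical_value = 0 AND strategic_value = 0"
):
    print("No TSV feedback:", row[0])
```

The point is not the technology, which is trivial, but the feedback: once a filer knows which of its SARs drew tactical or strategic value, it can tune or retire the alerting logic behind the rest.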

Why give law enforcement seven years to respond? Criminal cases take years to develop. And sometimes a case may not even be opened for years, and a SAR filing may trigger an investigation. And sometimes a case is developed and the law enforcement agency searches the SAR database and finds SARs that were filed five, six, seven or more years earlier. Between record retention rules and practical value, seven years seems reasonable.

Law enforcement agencies have tremendous responsibilities and obligations, and their resources and budgets are stretched to the breaking point. Adding another obligation – to provide feedback to the banks, credit unions, and other private sector institutions that provide them with reports of suspicious activity – may not be feasible. But the upside of that feedback – that law enforcement may get fewer, but better, reports, and the private sector institutions can focus more on human trafficking, human smuggling, and terrorist financing and less on identifying and reporting activity that isn’t of interest to law enforcement – may far exceed the downside.

Free Suspicious Activity Reports are great. But like Sam being prepared to stand in the freezing cold for his fried chicken, perhaps law enforcement is prepared to let us know whether the reports we’re filing have value.

For more on alert-to-SAR rates, the TSV feedback loop, machine learning and artificial intelligence, see other articles I’ve written:

The TSV SAR Feedback Loop – June 4 2019

AML and Machine Learning – December 14 2018

Rules Based Monitoring – December 20 2018

FinCEN FY2020 Report – June 4 2019

FinCEN BSA Value Project – August 19 2019

BSA Regime – A Classic Fixer-Upper – October 29 2019

[1] November 15, 2019, prepared remarks for the Chainalysis Blockchain Symposium, available at https://www.fincen.gov/news/speeches/prepared-remarks-fincen-director-kenneth-blanco-chainalysis-blockchain-symposium

A Bank’s Bid for Innovative AML Solutions: Innovation Remains A Perilous Endeavor

One Bank Asked the OCC to Have an “Agile Approach to Supervisory Oversight”

On September 27, 2019 the OCC published an Interpretive Letter answering an unknown bank’s request to make some innovative changes to how it files cash structuring SARs. Tacked onto its three technical questions was a request by the bank to do this innovation along with the OCC itself through something the bank called an “agile approach to supervisory oversight.” After qualified “yes” answers to the three technical questions, the OCC’s Senior Deputy Comptroller and Chief Counsel indicated that the OCC was open to “an agile and transparent supervisory approach while the Bank is building this automated solution” but he didn’t actually write that the OCC would, in fact, adopt an agile approach. This decision provides some insight, and perhaps the first public test, of (i) the regulators’ December 2018 statement on using innovative efforts to fight money laundering, and (ii) the OCC’s April 2019 proposal around innovation pilot programs. Whether the OCC passed the test is open to discussion: what appears settled, though, is that AML innovation in the regulated financial sector remains a perilous endeavor.

Regulators’ December 2018 Joint Statement on Innovative AML Efforts

On December 3, 2018 the five main US Bank Secrecy Act (BSA) regulators issued a joint statement titled “Innovative Efforts to Combat Money Laundering and Terrorist Financing”.[1] The intent of the statement was to encourage banks to use modern-era technologies to bolster their BSA/AML compliance programs. The agencies asked banks “to consider, evaluate, and, where appropriate, responsibly implement innovative approaches to meet their Bank Secrecy Act/anti-money laundering (BSA/AML) compliance obligations, in order to further strengthen the financial system against illicit financial activity” and “[t]he Agencies recognize[d] that private sector innovation, including new ways of using existing tools or adopting new technologies, can help banks” to do so.

The statement was a very positive step to encourage private sector innovation in fighting financial crime by testing new ways of using existing tools as well as adopting new technologies.

But it wasn’t the “green light to innovate” that some people have said it is. There was some language in the statement that made it, at best, a cautionary yellow light. And the September 27th OCC letter seems to clarify that banks can innovate, but the usual regulatory oversight and potential sanctions still apply.

The Agencies’ December 2018 statement included five things that bear repeating:

  1. “The Agencies recognize that private sector innovation, including new ways of using existing tools or adopting new technologies, can help banks identify and report money laundering, terrorist financing, and other illicit financial activity by enhancing the effectiveness and efficiency of banks’ BSA/AML compliance programs. To assist banks in this effort, the Agencies are committed to continued engagement with the private sector and other interested parties.”
  2. “The Agencies will not penalize or criticize banks that maintain effective BSA/AML compliance programs commensurate with their risk profiles but choose not to pursue innovative approaches.”
  3. “While banks are expected to maintain effective BSA/AML compliance programs, the Agencies will not advocate a particular method or technology for banks to comply with BSA/AML requirements.”
  4. Where banks test or implement “artificial intelligence-based transaction monitoring systems … [and] identify suspicious activity that would not otherwise have been identified under existing processes, the Agencies will assess the adequacy of banks’ existing suspicious activity monitoring processes independent of the results of the pilot program”
  5. “… the implementation of innovative approaches in banks’ BSA/AML compliance programs will not result in additional regulatory expectations.”

Note the strong, unqualified language: “the Agencies are committed to continued engagement”, “the Agencies will not penalize or criticize”, “the Agencies will not advocate …”, “the Agencies will assess”, and “the implementation of innovative approaches will not result in additional regulatory expectations”.

The qualified “assurances” come in the paragraph about pilot programs (with emphasis added):

“Pilot programs undertaken by banks, in conjunction with existing BSA/AML processes, are an important means of testing and validating the effectiveness of innovative approaches.  While the Agencies may provide feedback, pilot programs in and of themselves should not subject banks to supervisory criticism even if the pilot programs ultimately prove unsuccessful.  Likewise, pilot programs that expose gaps in a BSA/AML compliance program will not necessarily result in supervisory action with respect to that program.  For example, when banks test or implement artificial intelligence-based transaction monitoring systems and identify suspicious activity that would not otherwise have been identified under existing processes, the Agencies will not automatically assume that the banks’ existing processes are deficient.  In these instances, the Agencies will assess the adequacy of banks’ existing suspicious activity monitoring processes independent of the results of the pilot program.  Further, the implementation of innovative approaches in banks’ BSA/AML compliance programs will not result in additional regulatory expectations.”

Here there are the qualified assurances (a qualified assurance is not an assurance, by the way): “should not” is different than “will not”; “will not necessarily” is very different than “will not”; and “not automatically assume” isn’t the same as “not assume”.  These are important distinctions. The agencies could have written something very different:

“… pilot programs in and of themselves will not subject banks to supervisory criticism even if the pilot programs ultimately prove unsuccessful.  Likewise, pilot programs that expose gaps in a BSA/AML compliance program will not result in supervisory action with respect to that program.  For example, when banks test or implement artificial intelligence-based transaction monitoring systems and identify suspicious activity that would not otherwise have been identified under existing processes, the Agencies will not assume that the banks’ existing processes are deficient …”

The OCC’s April 2019 Innovation Pilot Program

On April 30, 2019 the OCC sought public comment on its proposed Innovation Pilot Program, a voluntary program designed to provide fintech providers and financial institutions “with regulatory input early in the testing of innovative activities that could present significant opportunities or benefits to consumers, businesses, financial institutions, and communities.” See OCC Innovation Pilot Program. As the OCC wrote, however, the Innovation Pilot Program would not provide “statutory or regulatory waivers and does not absolve entities participating in the program from complying with applicable laws and regulations.”

Twenty comments were posted to the OCC’s website. A number of them argued that innovators needed some formalized regulatory forbearance to encourage them to innovate. The Bank Policy Institute’s letter (BPI Comment), submitted by Greg Baer (a long-standing and articulate proponent of reasonable and responsible regulation), stated:

“… the OCC should clarify publicly that a bank is not required to seek the review and approval of its examination team prior to developing or implementing a new product, process, or service; that unsuccessful pilots will not warrant an MRA or other sanction unless they constitute an unsafe and unsound practice or a violation of law; and that innovations undertaken without seeking prior OCC approval will not be subject to stricter scrutiny or a ‘strict liability’ regime. We also recommend that the OCC revisit and clarify all existing guidance on innovation to reduce the current uncertainty regarding the development of products, processes and services; outdated or unnecessary supervisory expectations should be rescinded.”

The American Bankers Association’s comment (ABA Comment) also asked for similar guidance:

“For institutions to participate confidently in a pilot, there must be internal agreement that OCC supervision and enforcement will not pursue punitive actions. In other words, the program should produce decisions that have the full support of the OCC and bind the agency to those conclusions going forward … One way for the OCC to accomplish this is to clarify that a participating bank will not be assigned Matters Requiring Attention (MRAs) if it acts in good faith as part of a Pilot Program. The nature of technological innovation means that banks must try new things, experiment, and sometimes make mistakes. The Pilot Program has been designed as a short-term limited-scale test to ensure that any mistakes made are unlikely to have an impact on the safety and soundness of an institution. Clarifying that MRAs will not be issued for mistakes made in good faith may help give banks the certainty they need to participate in a Pilot Program.”

And the Securities Industry and Financial Markets Association’s (SIFMA) comment letter (SIFMA Comment Letter) included the following:

“Relief from strict regulatory compliance is a vital prerequisite to draw firms into the test environment, precisely so that those areas of noncompliance may be identified and remediated and avoid harm to the consumers. Without offering this regulatory relief, the regulatory uncertainty associated with participating in the Pilot Program could, by itself, deter banks from participating. Similarly, the lack of meaningful regulatory relief could limit the opportunity the program provides for firms to experiment and innovate.”

So where did that leave banks that were thinking of innovative approaches to AML?  For those that choose not to pursue innovative pilot programs, it is clear that they will not be penalized or criticized, but for those that try innovative pilot programs that ultimately expose gaps in their BSA/AML compliance program, the agencies will not automatically assume that the banks’ existing processes are deficient. In response to this choice – do not innovate and not be penalized, or innovate and risk being penalized – many banks have chosen the former. As a result, advocates for those banks – the BPI and ABA, for example – have asked the OCC to clarify that it will not pursue punitive actions against banks that unsuccessfully innovate.

How has the OCC replied? It hasn’t yet finalized its Innovation Program, but it has responded to a bank’s request for guidance on some innovative approaches to monitoring for, alerting on, and filing suspicious activity reports on activity and customers that are structuring cash transactions.

A Bank’s Request to Have the OCC Help It Innovate

The OCC published an Interpretive Letter on September 27, 2019 that sheds some light on how it looks at its commitments under the December 2018 innovation statement.[2] According to the Interpretive Letter, on February 22, 2019 an OCC-regulated bank submitted a request to streamline its filing of SARs for potential structuring activity (the Bank also sought the same or a similar ruling from FinCEN; as of this writing, FinCEN has not published a ruling). The bank asked three questions (and the OCC responded):

  1. Whether the Bank could file a structuring SAR based solely on an alert, without performing a manual investigation, and if so, under what circumstances (yes, but with some significant limitations);
  2. Whether the proposed automated generation of SAR narratives for structuring SARs was consistent with the OCC’s SAR regulations (yes, but with some significant limitations; see the sketch after this list);
  3. Whether the proposed automation of SAR filings was consistent with the OCC’s BSA program regulations (yes, but with some significant limitations).
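To give a flavor of what the bank appears to have proposed, here is a purely hypothetical sketch of template-driven narrative generation for a structuring SAR from alert data. It is my illustration of the concept, not the bank’s actual design or anything the OCC reviewed, and the alert fields and amounts are assumptions.

```python
from datetime import date

# Hypothetical alert record produced by a cash-structuring detection rule.
alert = {
    "customer_name": "John Doe",
    "account": "XXXX1234",
    "window_start": date(2019, 8, 1),
    "window_end": date(2019, 8, 5),
    "cash_deposits": [9_500, 9_800, 9_700],   # each just under the $10,000 CTR threshold
}

def structuring_narrative(alert):
    """Assemble a draft SAR narrative from the alert's structured fields."""
    total = sum(alert["cash_deposits"])
    count = len(alert["cash_deposits"])
    return (
        f"Between {alert['window_start']:%m/%d/%Y} and {alert['window_end']:%m/%d/%Y}, "
        f"{alert['customer_name']} (account {alert['account']}) made {count} cash deposits "
        f"totaling ${total:,}, each below the $10,000 currency transaction reporting threshold, "
        f"in a pattern consistent with structuring."
    )

print(structuring_narrative(alert))
```

Even with this kind of automation, the OCC’s qualified answers make clear that the filing bank remains responsible for the accuracy and completeness of each SAR.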

The most interesting request by the Bank, though, was its request that the OCC take an “agile approach to supervisory oversight” for the bank’s “regulatory sandbox” initiative. Pages 6 and 7 of the OCC letter provide the particulars of this request. There, the OCC writes:

“Your letter also requested regulatory relief to conduct this initiative within a “regulatory sandbox.” Your regulatory sandbox request states ‘This relief would be in the form of an agile approach to supervisory oversight, which would include the OCC’s full access, evaluation, and participation in the initiative development, but would not include regulatory outcomes such as matters requiring attention, violations of law or financial penalties. [The Bank] welcomes the OCC to consider ways to participate in reviewing the initiative outcomes outside of its standard examination processes to ensure effectiveness and provide feedback about the initiative development.’”

NOTE: I had to read the key sentence a few times to settle on its intent and meaning. That sentence is “This relief would be in the form of an agile approach to supervisory oversight, which would include the OCC’s full access, evaluation, and participation in the initiative development, but would not include regulatory outcomes such as matters requiring attention, violations of law or financial penalties.”

Was the bank saying the relief sought was an agile approach to supervisory oversight that included the OCC’s full participation in the process and no adverse regulatory outcomes? Or was the bank saying the relief sought was an agile approach to supervisory oversight that included the OCC’s full participation in the process, but did not include anything to do with adverse regulatory outcomes?

I settled on the latter meaning: that the bank was seeking the OCC’s full participation, but did not expect any regulatory forbearance.

The OCC first reiterated its position from the December 2018 joint statement by writing that it “supports responsible innovation in the national banking system that enhances the safety and soundness of the federal banking system, including responsibly implemented innovative approaches to meeting the compliance obligations under the Bank Secrecy Act.” It then wrote that it “is also open to an agile and transparent supervisory approach while the Bank is building this automated solution for filing Structuring SARs and conducting user acceptance testing.” This language is a bit different than what the OCC wrote at the top of page 2 of the letter: “the OCC is open to engaging in regular discussions between the Bank and appropriate OCC personnel, including providing proactive and timely feedback relating to this automation proposal.”

Notably, the OCC wrote that it is “open to an agile and transparent supervisory approach”, and “open to engaging in regular discussions between the Bank and appropriate OCC personnel”, but being open to something doesn’t mean you approve of it or agree to it. In fact, the OCC didn’t appear to grant the bank’s request. In the penultimate sentence the OCC wrote: “The OCC will monitor any such changes through its ordinary supervisory processes.”

How About Forbearance to Innovate Without Fear of Regulatory Sanctions?

As set out above, in June 2019 the BPI and ABA (and eighteen others) commented on the OCC’s proposal for an innovation pilot program. The BPI commented that “the OCC should clarify publicly that … unsuccessful pilots will not warrant an MRA or other sanction unless they constitute an unsafe and unsound practice or a violation of law”, and the ABA commented that the OCC should “clarify that a participating bank will not be assigned Matters Requiring Attention (MRAs) if it acts in good faith as part of a Pilot Program”.

The OCC seems to have obliquely responded to both of those comments. In its September 2019 Interpretive Letter, the OCC took the time to write that it “will not approve a regulatory sandbox that includes forbearance on regulatory issues for the Bank’s initiative for the automation of Structuring SAR filings.” Note that the OCC made this statement even though the bank appears to have specifically indicated that the requested relief did not include forbearance from “regulatory outcomes such as matters requiring attention, violations of law or financial penalties”. And the OCC letter includes a reference to both the Interagency statement on responsible innovation and the OCC’s April 2019 Innovation Pilot Program (see footnote 25 on page 7): “banks must continue to meet their BSA/AML compliance obligations, as well as ensure the ongoing safety and soundness of the bank, when developing pilot programs and other innovative approaches.”

So although the OCC hasn’t formally responded to the June 2019 comments on its Innovation Pilot Program – comments asking that banks be allowed to innovate without fear of regulatory sanction if the innovation doesn’t go well – it has made it clearer that a bank still has the choice to not innovate and not be penalized, or to innovate and risk being penalized.

(In fairness, in its Spring 2019 Semiannual Risk Perspective Report, the OCC noted that a bank’s inability to innovate is “a source of significant strategic risk.” See OCC Semiannual Risk Perspective, 2019-49 (May 20, 2019)).

Timely Feedback – Is Seven Months Timely?

As set out above, the OCC wrote that it “is open to engaging in regular discussions between the Bank and appropriate OCC personnel, including providing proactive and timely feedback …”.  The bank’s request was submitted on February 22, 2019. The OCC’s feedback was sent on September 27, 2019. So it took the OCC seven months to respond to the bank’s request for an interpretive letter. In this age of high-speed fintech disruption, seven months should not be considered “timely.” What would be timely? I would aim for 90 days.

Conclusion

This unnamed OCC-regulated bank appears to have a flashing green or cautionary yellow light from the OCC to deploy some technology and process enhancements to streamline a small percentage of its SAR monitoring, alerting, and filing. The OCC will remain vigilant, however, warning the bank that it “must ensure that it has developed and deployed appropriate risk governance to enable the bank to identify, measure, monitor, and control for the risks associated with the automated process. The bank also has a continuing obligation to employ appropriate oversight of the automated process.”

So the message to the 1,700 or so OCC banks appears to be this: there’s no peril in not innovating, but if you decide to innovate, do so at your peril.

[1] The Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation (FDIC), the Financial Crimes Enforcement Network (FinCEN), the National Credit Union Administration, and the Office of the Comptroller of the Currency. The statement is available at https://www.occ.gov/news-issuances/news-releases/2018/nr-occ-2018-130a.pdf

[2] https://www.occ.gov/topics/charters-and-licensing/interpretations-and-actions/2019/int1166.pdf

The Current BSA/AML Regime is a Classic Fixer-Upper … and Here’s Seven Things to Fix

A 1970 Holden “Belmont” … built the same year as the first BSA-related Act was passed in the United States: the Currency and Foreign Transactions Reporting Act, PL 91-508

There is a lot of media attention around the need for a new way to tackle financial crimes risk management. Apparently the current regime is “broken” (I disagree), or in desperate need of repair (what government-run programs are not in some sort of state of disrepair?), or, at the very least, neither particularly effective nor particularly efficient. And there are a lot of suggestions from the private and public sectors on how to make the regime more effective and more efficient. I’ll offer seven things to consider as we all work towards renovating our BSA/AML regime – taking it from its tired, dated state (the last legislative change to the three statutes we call the Bank Secrecy Act was made in 2004) to something that provides a more balanced, effective, and efficient regime.

I. Transaction Monitoring Systems

Apparently, current customer- and account-based transaction monitoring systems are highly inefficient, because for every 100 alerts they produce, five or fewer actually end up being reported to the government in a Suspicious Activity Report. The transaction monitoring software is often blamed (although bad data is the more likely culprit), and machine learning and artificial intelligence are often touted (by providers of machine learning and artificial intelligence) as the solutions. Consider the following when it comes to transaction monitoring and false positives (a quick worked example of the alert-to-SAR arithmetic follows the list):

  1. If a 95% false positive rate is bad … what is good? Human-generated referrals will result in SARs about 50% of the time: that might be a good standard.
  2. We have to stop tuning our transaction monitoring systems against SARs filed with law enforcement, and start tuning them against SARs used by law enforcement. I’ve written about this on many occasions, and have offered up something called the “TSV” SAR – a SAR that law enforcement indicates has Tactical or Strategic Value.
  3. High false positive rates may not be caused by bad data or poor technology at all, but by regulatory expectations – real or imagined. Financial institutions cannot afford the audit, regulatory, legal, and reputational costs of failing to identify (alert on) something unusual or anomalous that is later found to have been suspicious, so they tune their systems to over-alert.

(I’ve written about this on a few occasions: see, for example, RegTech Consulting Article).
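To make the arithmetic in item 1 concrete, here is a minimal sketch in Python; the numbers are made up to match the 95%-versus-50% comparison above, and the function and field names are purely illustrative.

```python
def alert_to_sar_metrics(alerts_generated: int, sars_filed: int) -> dict:
    """Compute simple efficiency metrics for an AML alerting program.

    'False positive' here means an alert that was investigated but did not
    result in a SAR -- the sense in which the term is used above.
    """
    conversion_rate = sars_filed / alerts_generated   # alerts that became SARs
    return {
        "conversion_rate": conversion_rate,
        "false_positive_rate": 1.0 - conversion_rate,  # alerts that did not
    }

# Illustrative numbers only: 100 system-generated alerts yielding 5 SARs,
# versus 100 human referrals yielding 50 SARs (the benchmark suggested above).
system = alert_to_sar_metrics(alerts_generated=100, sars_filed=5)
referrals = alert_to_sar_metrics(alerts_generated=100, sars_filed=50)

print(f"System alerts:   {system['false_positive_rate']:.0%} false positives")
print(f"Human referrals: {referrals['false_positive_rate']:.0%} false positives")
```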

It may be that transaction monitoring itself is the culprit (and not bad data, outmoded technology, or unreasonable regulatory expectations). My experience is that customer- and account-based transaction monitoring is not nearly as effective as relationship-based interaction surveillance. Let’s parse this out:

  • Customer versus relationship – focusing on a single customer is less efficient than looking at the entire relationship that customer is, or could be, part of. Banks’ marketing departments think in terms of households as the key relationship; credit departments think in terms of parent and subsidiary entities and guarantors as the relationship needed to determine creditworthiness. Financial crimes departments need to think in the same terms. It is simply more encompassing and more efficient.
  • Transaction versus interaction – customers may interact with a bank many times, through a phone call, an online session, a balance inquiry, or a mobile look-up, before they will perform an actual transaction or movement of value. Ignoring those interactions, and only focusing on transactions, doesn’t provide the full picture of that customer’s relationship with the bank.
  • Monitoring versus surveillance – monitoring is not contextual: it is simply looking at specific transaction types, in certain amounts or ranges, performed by certain customers or customer classes. Surveillance, on the other hand, is contextual: it looks at certain activity compared against all activity of that customer over time, and/or certain activity of that customer compared to other customers within its class (whatever that class may be).

So the public sector needs to encourage the private sector to shift from a customer-based transaction monitoring regime to a relationship-based interaction surveillance regime. A minimal sketch of the difference follows.
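Here is that sketch: a deliberately simplified comparison, in Python, of a customer-based transaction monitoring rule and a relationship-based interaction surveillance rule. The Event record, the $10,000 threshold, and the household grouping are all assumptions made for illustration; they are not a description of any particular vendor’s system.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    customer_id: str
    relationship_id: str   # household / corporate family the customer belongs to
    kind: str              # "cash_deposit", "wire", "login", "balance_inquiry", ...
    amount: float = 0.0    # zero for non-monetary interactions

def customer_monitoring(events, threshold=10_000):
    """Customer-based transaction monitoring: one customer, transactions only."""
    totals = defaultdict(float)
    for e in events:
        if e.kind in ("cash_deposit", "wire"):
            totals[e.customer_id] += e.amount
    return {c: t for c, t in totals.items() if t > threshold}

def relationship_surveillance(events, threshold=10_000):
    """Relationship-based interaction surveillance: the whole household,
    with monetary and non-monetary activity considered together for context."""
    money = defaultdict(float)
    interactions = defaultdict(int)
    for e in events:
        if e.kind in ("cash_deposit", "wire"):
            money[e.relationship_id] += e.amount
        else:
            interactions[e.relationship_id] += 1
    # Flag relationships whose combined value movement crosses the threshold,
    # and carry the interaction count as context for the investigator.
    return {
        r: {"total": t, "interactions": interactions[r]}
        for r, t in money.items() if t > threshold
    }

# Three family members each deposit $4,000: no single customer alerts,
# but the household does once the activity is viewed as one relationship.
events = [
    Event("cust-1", "hh-9", "cash_deposit", 4_000),
    Event("cust-2", "hh-9", "cash_deposit", 4_000),
    Event("cust-3", "hh-9", "cash_deposit", 4_000),
    Event("cust-1", "hh-9", "balance_inquiry"),
]
print(customer_monitoring(events))        # {}  -- nothing alerts
print(relationship_surveillance(events))  # flags hh-9 with a 12,000 total
```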

II. Information Sharing

Crime and criminal organizations don’t operate in a single financial institution or even in a single jurisdiction. Yet our BSA/AML regime still encourages single entity SAR filers and doesn’t promote cross-jurisdictional information sharing.  The tools are available to better share information across a financial institution, and between financial institutions. Laws, regulations, and regulatory guidance all need to change to specifically and easily allow a single financial institution operating in multiple jurisdictions to (safely) share more information with itself, to allow multiple institutions in a single and multiple jurisdictions to (safely) share more information between them, and to allow those institutions to jointly investigate and report together. Greater encouragement and use of Section 314(b) associations and joint SAR filings are critical.

III. Classical Music, or Jazz?

Auditors, regulators, and even a lot of FinTech companies, would prefer that AML continue to be like classical music, where every note (risk assessments and policies) is carefully written, the music is perfectly orchestrated (transaction monitoring models are static and documented), and the resulting music (SAR filings) sounds the same time and time again regardless of who plays it. This allows the auditors and regulators to have perfectly-written test scripts to audit and examine the programs, and allows the FinTech companies to produce a “solution” to a defined problem. This approach may work for fraud, where an objective event (a theft or compromise) produces a defined result (a monetary loss). But from a financial institution’s perspective, AML is neither an objective event nor a defined result, but is a subjective feeling that it is more likely than not that something anomalous or different has occurred and needs to be reported. So AML is less like classical music and more like jazz: defining, designing, tuning, and running effective anti-money laundering interaction monitoring and customer surveillance systems is like writing jazz music … the composer/arranger (FinTech) provides the artist (analyst) a foundation to freely improvise (investigate) within established and consistent frameworks, and no two investigations are ever the same, and similar facts can be interpreted a different way by different people … and a SAR may or may not be filed. AML drives auditors and examiners mad, and vexes all but a few FinTechs. So be it. Let’s acknowledge it, and encourage it.

IV. Before Creating New Tools, Let’s Use the Ones We Have

The federal government has lots of AML tools in its arsenal: it simply needs to use them in more courageous and imaginative ways. Tools such as Section 311 Special Measures and Section 314 Information Sharing are grossly under-utilized. Information sharing is discussed above; Section 311 Special Measures are reserved for the most egregious bad actors in the system, and are rarely invoked. The reality is that financial institutions will exit a customer, or decline to (knowingly) provide services to entire classes of customers or in certain jurisdictions, when they cannot economically manage the perceived risk/reward equation of that customer, class, or jurisdiction. But that customer, class, or jurisdiction simply goes to another financial institution in the regulated sector, or to an institution in an un- or under-regulated sector (the notion of “de-risking”). The entire financial system would be better off if, instead of de-risking a suspected bad customer, class of customers, or jurisdiction, financial institutions were encouraged not to exit at all, but to keep that customer or class and monitor for and report any suspicious activity. Then, if the government determined that the customer or class of customers was too systemically risky to be banked at all, it could use Section 311 special measures to effectively cut that customer or class off from the financial system. Imposing “special measures” shouldn’t fall to private sector financial institutions guessing at whether a customer or class of customers is a bad actor: it is, and should remain, the responsibility of the federal government using the tool it already has available to it: Section 311.

V. … and Let’s Restore The Tool We Started With

The reporting of large cash transactions was the first AML tool the US government came up with (in 1970, as part of the Currency and Foreign Transactions Reporting Act). Those reports, called Currency Transaction Reports, or CTRs, started out as reports of single cash transactions of more than $10,000 on behalf of an accountholder. They have since morphed into reports of one or more cash transactions aggregating to more than $10,000 in a 24-hour period, by or on behalf of one or more beneficiaries. More than 18 million CTRs will be filed this year, and apparently law enforcement finds them an effective tool.

But there is nothing more inefficient: simply put, CTRs are now the biggest resource drain in BSA/AML. Because of regulatory drift, CTRs are de facto SAR-lites. We need to get back to basic CTRs and redeploy the resources now used to wrestle with the ever-expanding aggregation and “by or on behalf of” requirements against potential suspicious activity. And forget about increasing the threshold from the current “more than $10,000” standard: $10,000 is roughly 450 times the average cash transaction in the United States today (about $22, according to multiple Federal Reserve reports), and no one can argue that a requirement to report transactions that are hundreds of times the average is unreasonable. It isn’t the amount that causes inefficiencies; it is the requirements to (i) aggregate multiple transactions totaling more than $10,000 in a 24-hour period, (ii) identify and aggregate transactions “by or on behalf of” multiple parties and accountholders, and (iii) exempt, on a bank-by-bank basis, certain entities that can be exempted (but rarely are) from the CTR filing regime. If anything, we could save and redeploy resources if the CTR threshold were the same as the SAR threshold – $5,000.
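To see why the aggregation and “by or on behalf of” requirements consume so many resources, here is a minimal sketch in Python of the aggregation logic described above. It is deliberately simplified: the field names, the same-calendar-day grouping, and the treatment of conductors and beneficiaries are assumptions for illustration, and the actual CTR rules (exemptions, business-day definitions, multiple branches and transactors) are considerably messier.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class CashTransaction:
    txn_date: date
    amount: float
    conducted_by: str    # the person at the teller window
    on_behalf_of: str    # the beneficiary of the transaction

def ctr_candidates(transactions, threshold=10_000):
    """Aggregate same-day cash transactions 'by or on behalf of' the same
    person and flag aggregates over the reporting threshold.

    A simplified illustration of the aggregation rule described above,
    not a compliant implementation of the CTR regulations.
    """
    buckets = defaultdict(list)
    for t in transactions:
        # A transaction counts toward both the conductor and the beneficiary.
        buckets[(t.conducted_by, t.txn_date)].append(t)
        if t.on_behalf_of != t.conducted_by:
            buckets[(t.on_behalf_of, t.txn_date)].append(t)
    return {
        key: sum(t.amount for t in txns)
        for key, txns in buckets.items()
        if sum(t.amount for t in txns) > threshold
    }

# Two $6,000 deposits by different runners for the same beneficiary on the
# same day: neither transaction alone exceeds $10,000, but the aggregate does.
txns = [
    CashTransaction(date(2019, 9, 27), 6_000, "runner-A", "Acme Vending LLC"),
    CashTransaction(date(2019, 9, 27), 6_000, "runner-B", "Acme Vending LLC"),
]
print(ctr_candidates(txns))   # flags Acme Vending LLC with a 12,000 aggregate
```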

VI. The Clash of the Titles

And remember the “Clash of the Titles” … the protect-the-financial-system (filing great SARs) requirements of Title 31 (Money & Finance … the BSA) are trumped by the safety and soundness (program hygiene) requirements of Title 12 (Banks & Banking), and financial institutions act defensively because of the punitive measures in Title 18 (Crimes & Criminal Procedure) and Title 50 (War … OFAC’s statutes and regulations). There is a need to harmonize the Four Titles – or at least Titles 12 and 31 – and how financial institutions are examined against them. BSA/AML people are judged on whether they avoid bad TARP results (from being Tested, Audited, Regulated, and Prosecuted) rather than  on whether they provide actionable, timely intelligence to law enforcement. Today, most BSA Officers live in fear of not being able to balance all their commitments under the four titles: the great Hugh MacLeod was probably thinking of BSA Officers when he wrote: “I do the work for free. I get paid to be afraid …”

VII. A Central Registry for Beneficial Ownership Information

At the root of almost all large money laundering cases are legal entities with opaque ownership, or shell companies, where kleptocrats, fraudsters, tax evaders, and other miscreants can hide, move, and use their assets with near impunity. Greater corporate transparency has long been seen as one of the keys to fighting financial crime (the FATF’s Recommendation 24 on corporate transparency was first published in 1993), and accessible central registries of beneficial ownership information have been proven to be the key to that greater transparency. Yet the United States is one of the few major financial centers that does not have a centralized registry of beneficial ownership information. I’ve written that without such a centralized registry, the current beneficial ownership requirements are ineffective. See Beneficial Ownership Registry Article. Two bills currently before Congress – the Senate’s ILLICIT Cash Act (S2563) and the House’s Corporate Transparency Act (HR2513) – both contemplate a centralized registry of beneficial ownership maintained by FinCEN. But both of those bills – and FATF recommendations and guidance on the same issue – fall short in that they only allow law enforcement (or “competent authorities”, to use the FATF term) to freely access that database. The bills before Congress allow financial institutions to access the database, but only with the consent of the customer they’re asking about and only for the purposes of performing due diligence on that customer. I have proposed that those bills be changed to also allow financial institutions to query the database without the consent of the entity they’re asking about, for the purposes of satisfying their suspicious activity reporting requirements.

Conclusion – Seven Fixer-Upper Projects for the BSA/AML Regime

  1. Shift from customer-centric transaction monitoring systems to relationship-based interaction surveillance systems
  2. Encourage cross-institutional and cross-jurisdictional information sharing
  3. Encourage the private sector to be more creative and innovative in its approach to AML – AML is like jazz music, not classical music
  4. Address de-risking through aggressive use of Section 311 Special Measures
  5. Simplify the CTR regime. Please. And forget about increasing the $10,000 threshold – in fact, reduce it to $5,000
  6. As long as financial institutions are judged on US Code Titles 12, 18, 31, and 50, expect them to be both ineffective and inefficient. Can Titles 12 and 31 try to get along?
  7. A central registry of beneficial ownership information that is freely accessible to financial institutions is a must have

FinCEN’s BSA Value Project – An Effort to Provide Actionable Information for SAR Filers

Two Million SARs are Filed Every Year … But Which Ones Provide Tactical or Strategic Value to Law Enforcement?

Included in FinCEN Director Kenneth Blanco’s August 13, 2019 remarks was some interesting information on an eight-month-old “BSA Value Project” that may have been started because, as Director Blanco put it, FinCEN has “heard during our discussions that there continues to be a desire for more feedback on what FinCEN is seeing in the BSA data in terms of trends [and] we need to do better SAR analysis for wider trends and typologies …”. Director Blanco noted that “We want to provide more feedback, and we will.”

There has not been much public mention of the BSA Value Project: a quick Google search shows that FinCEN’s Associate Director Andrea Sharrin introduced the BSA Value Project at a Florida International Bankers Association (FIBA) conference on March 12, 2019, and then Director Blanco described it in his August 13th remarks:

In January 2019, FinCEN began an ambitious project to catalogue the value of BSA reporting across the entire value chain of its creation and use. The project will result in a comprehensive and quantitative understanding of the broad value of BSA reporting and other BSA information to all types of consumers of that information.

We already know that BSA data plays a critical role in keeping our country strong, our financial system secure, and our families safe from harm. But FinCEN is using the BSA Value Project to improve how we communicate the way BSA information is valued and used, and to develop metrics to track and measure the value of its use on an ongoing basis. The project has included hundreds of interviews with stakeholder groups, including casinos.

So far, the study has confirmed there are extensive and extremely varied uses of BSA information across all stakeholders (including by the private sector) consistent with their missions.

Almost One in Four FBI and IRS-CI Investigations Use BSA Data

Director Blanco made the following remarks on the usefulness of BSA data:

All FBI subject names are run against the BSA database. More than 21 percent of FBI investigations use BSA data, and for some types of crime, like organized crime, nearly 60 percent of FBI investigations use BSA data. Roughly 20 percent of FBI international terrorism cases utilize BSA data.

The Internal Revenue Service-Criminal Investigation section alone conducts more than 126,000 BSA database inquiries each year. And as much as 24 percent of its investigations involving criminal tax, money laundering, and other BSA violations are directly initiated by, or associated with, a BSA report.

In addition to providing controlled access to the data to law enforcement, FinCEN also proactively pushes certain information to them on critical topics. On a daily basis, FinCEN takes the suspicious activity reports and we run them through several categories of business rules or algorithms to identify reports that merit further review by our analysts.

Our terrorist financing-related business rules alone generate over 1,000 matches each month for review and further dissemination to our law enforcement and regulatory partners in what we call a Flash report. These Flash reports enable the FBI, for example, to identify, track, and disrupt the activities of potential terrorist actors. It is incredibly valuable information.
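Director Blanco does not describe FinCEN’s business rules, and they are not public. The general technique he describes – running each filing through a set of rules and routing the matches to analysts – is straightforward, though, and the sketch below is a purely illustrative Python version of it: the rule names, keywords, and placeholder country codes are assumptions, not FinCEN’s actual rules or data fields.

```python
from dataclasses import dataclass, field

@dataclass
class SarFiling:
    narrative: str
    subject_countries: list = field(default_factory=list)

# Purely illustrative rules -- not FinCEN's actual business rules.
RULES = {
    "possible_tf_keywords": lambda s: any(
        kw in s.narrative.lower() for kw in ("hawala", "foreign fighter")
    ),
    "high_risk_jurisdiction": lambda s: bool(
        set(s.subject_countries) & {"XX", "YY"}   # placeholder country codes
    ),
}

def screen(filings):
    """Return (filing, matched_rule_names) pairs that merit analyst review."""
    results = []
    for f in filings:
        hits = [name for name, rule in RULES.items() if rule(f)]
        if hits:
            results.append((f, hits))
    return results

filings = [
    SarFiling("Structured cash deposits followed by hawala-style transfers."),
    SarFiling("Unusual ACH activity, no other indicators."),
]
for f, hits in screen(filings):
    print(hits, "->", f.narrative[:40])
```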

But Which BSA Filings are Providing Real Value to Law Enforcement?

There is no doubt that the (roughly) 20 million BSA reports that are filed each year provide great value to law enforcement. But questions remain about the utility of those filings, and the costs of preparing them. Some of those questions include: (i) which of those reports provide value? (ii) what kind of value is being provided – tactical and/or strategic? (iii) can financial institutions eliminate the “no value” filings and deploy those resources to higher-value filings? (iv) can financial institutions automate the preparation and filing of the low value filings and deploy those resources to the highest-value filings?

I have written a number of articles on the need for better reporting on the utility of SAR filings. Links to three of them are:

SAR Feedback 314(d) – July 30 2019

BSA Reports and Federal Criminal Cases – June 5 2019

The TSV SAR Feedback Loop – June 4 2019

Conclusion

Kudos to Director Blanco and his FinCEN team for their initiative and efforts around the BSA Value Project. The results of the Project could be a game-changer for the financial industry’s BSA/AML programs. The industry is being inundated with calls to apply machine learning and artificial intelligence to make their AML programs more effective and efficient. But if those institutions don’t know which of their filings provide value, and arguably only one in four is providing value, they cannot effectively use machine learning or AI.

The entire industry is looking forward to the results of FinCEN’s BSA Value Project!

The WayBack Machine … and the Marihuana Problem in New York (circa 1944) – updated with the OFAC Fentanyl Drug Trafficking Organization Designation of August 21, 2019

One of the greatest investigative tools available today is the Internet Archive, a “non-profit library of millions of free books, movies, software, music, websites, and more” – https://archive.org/. The best tool in this online library is the WayBack Machine. It is described as follows:

The Internet Archive has been archiving the web for 20 years and has preserved billions of webpages from millions of websites. These webpages are often made up of, and link to, many images, videos, style sheets, scripts and other web objects. Over the years, the Archive has saved over 510 billion such time-stamped web objects, which we term web captures.

We define a webpage as a valid web capture that is an HTML document, a plain text document, or a PDF.

A domain on the web is an owned section of the internet namespace, such as google.com or archive.org or bbc.co.uk. A host on the web is identified by a fully qualified domain name or FQDN that specifies its exact location in the tree hierarchy of the Domain Name System. The FQDN consists of the following parts: hostname and domain name.  As an example, in case of the host blog.archive.org, its hostname is blog and the host is located within the domain archive.org.

We define a website to be a host that has served webpages and has at least one incoming link from a webpage belonging to a different domain.

As of today, the Internet Archive officially holds 273 billion webpages from over 361 million websites, taking up 15 petabytes of storage.

Here’s an example of how the WayBack Machine can be used. In a federal criminal complaint unsealed on August 15, 2019 in the case of United States v Manish Patel (Eastern District of California, case no 19-MJ-0128), the affidavit supporting the complaint stated that the defendant had business cards showing he was the CEO of The Sentient Law Group PC in New York City, but the website for that entity – http://www.sentientlawgroup.com – as accessed on August 5, 2019 did not show him as CEO. By simply typing that URL into the WayBack Machine’s search bar, you can find every capture of that website the WayBack Machine has saved. Viewing the first and last captures (on April 13, 2017 and February 12, 2019) shows the defendant Patel as the CEO and lists his practice focus areas (including cannabis law, which is ironic given that Patel was charged with multiple counts involving possession with intent to distribute marijuana). This tool is particularly helpful in online child pornography cases, where defendants move and change websites, and it was instrumental in a number of post-9/11 cases, where the English-language Al Qaeda website changed dramatically after 9/11 … but its historical web pages remained accessible, thanks to the Internet Archive and its WayBack Machine.
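For investigators who would rather script these lookups than use the search bar, the Internet Archive also offers a simple “availability” API that returns the capture closest to a requested date. The sketch below, in Python using the requests library, queries that API for the website and date from the Patel example above; the error handling and field access are kept deliberately minimal and should be treated as illustrative rather than production-ready.

```python
from typing import Optional
import requests

def closest_capture(url: str, timestamp: str) -> Optional[str]:
    """Return the Wayback Machine capture of `url` closest to `timestamp`
    (YYYYMMDD), or None if the Archive has no capture of that URL."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=30,
    )
    resp.raise_for_status()
    snapshot = resp.json().get("archived_snapshots", {}).get("closest")
    return snapshot["url"] if snapshot and snapshot.get("available") else None

# The website from the Patel complaint, as it looked around its first capture.
print(closest_capture("www.sentientlawgroup.com", "20170413"))
```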

OFAC Designation of the Zheng Drug Trafficking Organization – August 21, 2019

Another great example of the power of the WayBack Machine can be found in a series of federal criminal cases that culminated in OFAC designating the criminal defendants as Foreign Narcotics Kingpins. See the Treasury press release at https://home.treasury.gov/news/press-releases/sm756

One of those designated, Fujing Zheng, was indicted in federal court in Ohio in August 2018 (US v Zheng et al, Northern District of Ohio, case 18CR00474). In that 86-page indictment, the Government alleges that the Zheng organization used a website to market its illegal drugs – www.globalrc.net

What has happened to www.globalrc.net?

If you search for that URL today, you will find that the domain has been seized by the DEA and is no longer accessible. But the WayBack Machine captured and saved that website 65 times between April 8, 2009 and February 15, 2019, and by selecting any of those 65 dates you can view the captured site. The January 6, 2017 capture, for example, shows the actual website used by the Zheng DTO back in 2017. A powerful investigative tool!

But there is more to be found on the Internet Archive. The twenty or so archived collections are incredible sources. Here is an example of a document from the “Journals” collection:

https://archive.org/details/TheMarihuanaProblemInTheCityOfNewYork-19441973Edition/page/n19

In 1944, legendary New York Mayor F.H. LaGuardia commissioned a report to look into “The Marihuana Problem in the City of New York.” The foreword is interesting. It provides:

“As Mayor of New York City, it is my duty to foresee and take steps to prevent the development of hazards to the health, safety, and welfare of our citizens. When rumors were recently circulated concerning the smoking of marihuana by large segments of our population and even by school children, I sought advice from The New York Academy of Medicine, as is my custom when confronted with problems of medical import.”

“The report of the present investigation covers every phase of the problem and is of practical value not only to our own city but to communities throughout the country. It is a basic contribution to medicine and pharmacology.”

“I am glad that the sociological, psychological, and medical ills commonly attributed to marihuana have been found to be exaggerated insofar as the City of New York is concerned. I hasten to point out, though, that the findings are to be interpreted only as a reassuring report of progress and not as encouragement to indulgence, for I shall continue to enforce the laws prohibiting the use of marihuana until and if complete findings may justify an amendment to existing laws. The scientific part of the research will be continued in the hope that the drug may prove to possess therapeutic value for the control of drug addiction.”

Try out the Internet Archive!

BSA Reports and Federal Criminal Cases – What’s the Connection?

54,000 Federal Criminal Cases … and 20,000,000 BSA Reports

If the question is “how many BSA reports are used in federal criminal cases?”, the answer may be “we don’t know.” But in fact, somebody knows whether and which BSA reports were used in, led to, or somehow contributed to each and every criminal case filed in federal district courts across America. Obtaining that information from the thousands of somebodies across 93 US Attorneys’ offices and dozens of federal law enforcement agencies, however, has proven elusive.

If Only We Knew What We Know …

… is the title of a book written by C. Jackson Grayson and Carla O’Dell (Simon and Schuster, 1998) that goes through the problems associated with the transfer of knowledge and best practices within an organization. Those problems are amplified when the transfers occur across organizations, and amplified again when they occur between the public and private sectors. If only the financial services community – the producers and filers of more than 20 million BSA reports every year – knew how many, and which, of those filings were of tactical or strategic value to law enforcement as it brings more than 50,000 new federal criminal cases every year.

US Attorneys Annual Statistical Reports

The Department of Justice publishes annual statistical reports that provide some insight into the numbers and types of criminal and civil cases filed across the 93 US Attorneys’ offices and 94 judicial districts in the United States. They are available at DOJ Annual Statistical Reports

The most recent report covers fiscal year 2017 (October 1, 2016 through September 30, 2017). It shows that there were 53,899 new criminal cases brought in FY2017 and that 53,416 cases were closed. Notably, about 94% of federal criminal cases end in a guilty plea or guilty finding. And which law enforcement agencies are bringing those cases? About 43% of new federal criminal cases originated with either Customs & Border Protection or Immigration & Customs Enforcement.

And what kinds of cases are being opened? The DOJ classifies its cases under “programs”, where the program reflects the primary or leading charge if there are multiple charges in a case or against a defendant. The FY2017 data breaks the new cases out by these leading programs.

A few observations on this data. First, in FY2017 the 93 US Attorneys’ offices brought only 132 federal drug possession cases, charging 163 defendants. A subset of those involved marijuana possession charges. Separate data from prior years suggests that almost all of these cases arise along the southwest border or on military bases.

Second, data from FinCEN indicates that in the three fiscal years prior to FY2017, an average of just over 2.1 million SARs were filed per year, and about 19.2 million BSA reports in total (including SARs) were filed per year. With about 54,000 new criminal cases a year, that works out to more than 350 BSA reports filed for every federal criminal case brought.

But currently there is no means to determine how many of those criminal cases involved BSA reports, or how many of those BSA reports contributed to federal criminal cases.

See my previous article “FinCEN’s FY2020 Report to Congress Reveals its Priorities and Performance: FinCEN Needs More Resources – and a TSV SAR Feedback Loop – To Really Make a Difference in the Fight Against Crime & Corruption” at TSV SAR Feedback Loop

Artificial Intelligence – Who Is On The Hook When Things Go Wrong With Your AI System? You Are!

“Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning”

For all the upstart fintechs out there trumpeting their innovative Artificial Intelligence-based solutions that can solve a financial institution’s financial crimes problems: note that you may be held accountable when that AI system doesn’t quite turn out like your marketing materials suggested. Legal responsibility for something you design, build, and deploy is not a new concept, but how that “something” – in this case, the AI system you developed and installed at a client bank – actually works, reacts, and adapts over time could very well be new ground that hasn’t been explored before. But many smart people are thinking about AI developers’ accountability, and other AI-related issues, and many of them have produced principles to guide us as we develop and implement AI-based systems.

On May 22, 2019 the OECD published a Council Recommendation on Artificial Intelligence. At its core, the recommendation is for the adoption of five complementary “values-based principles for responsible stewardship of trustworthy artificial intelligence.” The link is Artificial intelligence and the actual recommendation is at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449#_ga=2.200835047.853048335.1559167756-681244095.1559167756

What’s the big deal about artificial intelligence?

The OECD recognized a number of things about AI that are worth including:

  • AI has pervasive, far-reaching and global implications that are transforming societies, economic sectors and the world of work, and are likely to increasingly do so in the future;
  • AI has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges;
  • At the same time, these transformations may have disparate effects within, and between societies and economies, notably regarding economic shifts, competition, transitions in the labour market, inequalities, and implications for democracy and human rights, privacy and data protection, and digital security;
  • Trust is a key enabler of digital transformation; that, although the nature of future AI applications and their implications may be hard to foresee, the trustworthiness of AI systems is a key factor for the diffusion and adoption of AI; and that a well-informed whole-of-society public debate is necessary for capturing the beneficial potential of the technology, while limiting the risks associated with it;
  • Given the rapid development and implementation of AI, there is a need for a stable policy environment that promotes a human-centric approach to trustworthy AI, that fosters research, preserves economic incentives to innovate, and that applies to all stakeholders according to their role and the context;
  • Certain existing national and international legal, regulatory and policy frameworks already have relevance to AI, including those related to human rights, consumer and personal data protection, intellectual property rights, responsible business conduct, and competition, while noting that the appropriateness of some frameworks may need to be assessed and new approaches developed; and
  • Embracing the opportunities offered, and addressing the challenges raised, by AI applications, and empowering stakeholders to engage is essential to fostering adoption of trustworthy AI in society, and to turning AI trustworthiness into a competitive parameter in the global marketplace.

What is “Artificial Intelligence”?

The recommendation includes some helpful definitions of the major terms:

Artificial Intelligence System: a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

Artificial Intelligence System Lifecycle: four phases which can be sequential but may be iterative:

(i) design, data and models – a context-dependent sequence encompassing planning and design, data collection and processing, as well as model building;

(ii) verification and validation;

(iii) deployment; and

(iv) operation and monitoring

Artificial Intelligence Actors: AI actors are those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI.

Is an OECD Recommendation binding on a country that has adopted it?

OECD Recommendations are not legally binding, but they are highly influential and have often formed the basis of international standards and helped governments design national legislation. For example, the OECD Privacy Guidelines, adopted in 1980, which state that there should be limits to the collection of personal data, underlie many privacy laws and frameworks in the United States, Europe, and Asia.

So the AI Principles are not binding, but the OECD provided five recommendations to governments:

  1. Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
  2. Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
  3. Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
  4. Empower people with the skills for AI and support workers for a fair transition.
  5. Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.

Who developed the OECD AI Principles?

The OECD set up a 70+ member expert group on AI to scope a set of principles. The group consisted of representatives of 20 governments as well as leaders from the business community (Google, Facebook, Microsoft, and Apple, but no financial institutions), labor, civil society, and the academic and science communities. The experts’ proposals were taken up by the OECD and developed into the OECD AI Principles.

What is the Purpose of the OECD Principles on AI?

The OECD Principles on Artificial Intelligence promote artificial intelligence (AI) that is innovative and trustworthy and that respects human rights and democratic values. The OECD AI Principles set standards for AI that are practical and flexible enough to stand the test of time in a rapidly evolving field. They complement existing OECD standards in areas such as privacy, digital security risk management and responsible business conduct.

What are the OECD AI Principles?

The Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI:

1. Inclusive growth, sustainable development and well-being: AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society. And AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.

The actual text reads: “Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.”

2. Human-centred values and fairness: AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights. To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of art.

3. Transparency and explainability: AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art to foster a general understanding of AI systems, to make stakeholders aware of their interactions with AI systems, including in the workplace, to enable those affected by an AI system to understand the outcome, and to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic, that served as the basis for the prediction, recommendation or decision.

4. Robustness, security and safety: AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk. To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of art. And AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.

5. Accountability: AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of art. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

What countries belong to the OECD?

Australia, Austria, Belgium, Canada, Chile, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Latvia, Lithuania, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom, United States

BigTech, FinTech, and the Battle Over Financial Services

BigTech vs FinTech – Which Will Replace Traditional Banks?

Two recent papers have looked at two different species of technology company – BigTechs and FinTechs – examining their attributes, their relative strengths and weaknesses, and their likelihood of emerging as the main challenger to traditional financial institutions. The two papers are:

  1. Financial Stability Board’s (FSB) February 2019 paper titled “FinTech and Market Structure in Financial Services”, available at https://www.fsb.org/wp-content/uploads/P140219.pdf
  2. Bank for International Settlements’ (BIS) April 2019 Working Paper titled “BigTech and the changing structure of financial intermediation”, available at https://www.bis.org/publ/work779.pdf

The BIS Working Paper makes a pretty compelling argument that BigTech firms have some distinct advantages over FinTechs that make them more likely to usurp traditional financial institutions: an existing customer base (already familiar with a user interface and messaging platform) and access to capital (often without the constraints that financial institutions face). The BIS paper also sets out some of the advantages that BigTech has over traditional financial institutions, such as the financial sector’s current dependence on BigTech’s cloud-based computing and storage (think of Amazon’s AWS); technological advantages such as artificial intelligence, machine learning, and APIs; and regulatory advantages (BigTech isn’t burdened with Dodd-Frank, Basel capital restrictions, model risk regulations, or anti-money laundering program regulations).

But what are the differences between “BigTech” and “FinTech”? Both papers provide definitions for, and examples of, the two terms:

BigTech
  1. FSB: “refers to large technology companies that expand into the direct provision of financial services or of products very similar to financial products”
  2. BIS: “refers to large, existing companies whose primary activity is in the provision of digital services, rather than mainly in financial services … BigTech companies offer financial products only as one part of a much broader set of business lines.”

Both the FSB and BIS cite the same examples of BigTech firms: Facebook, Amazon, Apple, Google, Alibaba, Tencent, and Vodafone, among others.

FinTech
  1. FSB: “technology-enabled innovation in financial services that could result in new business models, applications, processes or products with an associated material effect on the provision of ‘financial services’ … used to describe firms whose business model focuses on these innovations.”
  2. BIS: “refers to technology-enabled innovation in financial services with associated new business models, applications, processes, or products, all of which have a material effect on the provision of financial services.”

Both the FSB and BIS use Quicken Loans and SoFi, among others, as examples of FinTech firms.

BigTech is really … Big

The BIS paper notes that the six largest global BigTech firms each have a market capitalization greater than that of the largest global financial institution, JPMorgan Chase.

Which BigTech Firms are Providing What Financial Services Today?

The BIS paper provides a great summary table of the five main types of financial services that the eleven dominant BigTechs are currently providing. It’s clear from this table that the three Chinese BigTechs – Alibaba, Tencent, and Baidu – have the most comprehensive suite of financial services/products, followed by the US trio of Google, Amazon, and Facebook.

Conclusion

There is no conclusion. Every day brings new entrants and participants, shifts, and changes. The regulatory environments are rapidly changing (although regulators and regulations always lag the regimes they regulate). But these two papers provide some insights into the world of FinTech, BigTech, and financial services, and are worth spending some time on.