Artificial Intelligence – Who Is On The Hook When Things Go Wrong With Your AI System? You Are!

“Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning”

For all the upstart fintechs out there trumpeting innovative Artificial Intelligence-based solutions that can solve a financial institution’s financial crimes problems … note that you may be held accountable when that AI system doesn’t quite turn out the way your marketing materials suggested. Legal responsibility for something you design, build, and deploy is not a new concept, but how that “something” – in this case, the AI system you developed and installed at a client bank – actually works, reacts, and adapts over time could very well be new ground that hasn’t been explored before. But many smart people are thinking about AI developers’ accountability and other AI-related issues, and many of them have produced principles to guide us as we develop and implement AI-based systems.

On May 22, 2019, the OECD published a Council Recommendation on Artificial Intelligence. At its core, the Recommendation calls for the adoption of five complementary “values-based principles for responsible stewardship of trustworthy artificial intelligence.” The Recommendation is available at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

What’s the big deal about artificial intelligence?

The OECD recognized a number of things about AI that are worth including:

  • AI has pervasive, far-reaching and global implications that are transforming societies, economic sectors and the world of work, and are likely to increasingly do so in the future;
  • AI has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges;
  • At the same time, these transformations may have disparate effects within, and between societies and economies, notably regarding economic shifts, competition, transitions in the labour market, inequalities, and implications for democracy and human rights, privacy and data protection, and digital security;
  • Trust is a key enabler of digital transformation; that, although the nature of future AI applications and their implications may be hard to foresee, the trustworthiness of AI systems is a key factor for the diffusion and adoption of AI; and that a well-informed whole-of-society public debate is necessary for capturing the beneficial potential of the technology, while limiting the risks associated with it;
  • Given the rapid development and implementation of AI, there is a need for a stable policy environment that promotes a human-centric approach to trustworthy AI, that fosters research, preserves economic incentives to innovate, and that applies to all stakeholders according to their role and the context;
  • Certain existing national and international legal, regulatory and policy frameworks already have relevance to AI, including those related to human rights, consumer and personal data protection, intellectual property rights, responsible business conduct, and competition, while noting that the appropriateness of some frameworks may need to be assessed and new approaches developed; and
  • Embracing the opportunities offered, and addressing the challenges raised, by AI applications, and empowering stakeholders to engage is essential to fostering adoption of trustworthy AI in society, and to turning AI trustworthiness into a competitive parameter in the global marketplace.

What is “Artificial Intelligence”?

The recommendation includes some helpful definitions of the major terms:

Artificial Intelligence System: a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

Artificial Intelligence System Lifecycle: four phases which can be sequential but may be iterative:

(i) design, data and models – a context-dependent sequence encompassing planning and design, data collection and processing, as well as model building;

(ii) verification and validation;

(iii) deployment; and

(iv) operation and monitoring.

Artificial Intelligence Actors: AI actors are those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI.

Is an OECD Recommendation binding on a country that has adopted it?

OECD Recommendations are not legally binding but they are highly influential and have many times formed the basis of international standards and helped governments design national legislation. For example, the OECD Privacy Guidelines adopted in 1980 and stating that there should be limits to the collection of personal data underlie many privacy laws and frameworks in the United States, Europe and Asia.

So the AI Principles are not binding, but the OECD provided five recommendations to governments:

  1. Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
  2. Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
  3. Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
  4. Empower people with the skills for AI and support workers for a fair transition.
  5. Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.

Who developed the OECD AI Principles?

The OECD set up a 70+ member expert group on AI to scope a set of principles. The group consisted of representatives of 20 governments as well as leaders from the business community (Google, Facebook, Microsoft, Apple – but no financial institutions), labor, civil society, academia and the science community. The experts’ proposals were taken up by the OECD and developed into the OECD AI Principles.

What is the Purpose of the OECD Principles on AI?

The OECD Principles on Artificial Intelligence promote artificial intelligence (AI) that is innovative and trustworthy and that respects human rights and democratic values. The OECD AI Principles set standards for AI that are practical and flexible enough to stand the test of time in a rapidly evolving field. They complement existing OECD standards in areas such as privacy, digital security risk management and responsible business conduct.

What are the OECD AI Principles?

The Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI:

1. Inclusive growth, sustainable development and well-being: AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society. And AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.

The actual text reads: “Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.”

2. Human-centred values and fairness: AI actors should respect the rule of law, human rights and democratic values throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights. To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of the art.

3. Transparency and explainability: AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context and consistent with the state of the art: (i) to foster a general understanding of AI systems; (ii) to make stakeholders aware of their interactions with AI systems, including in the workplace; (iii) to enable those affected by an AI system to understand the outcome; and (iv) to enable those adversely affected by an AI system to challenge its outcome, based on plain and easy-to-understand information on the factors, and the logic, that served as the basis for the prediction, recommendation or decision.

4. Robustness, security and safety: AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk. To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of the art. And AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.

5. Accountability: AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of the art. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

What countries belong to the OECD?

Australia, Austria, Belgium, Canada, Chile, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Latvia, Lithuania, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom, United States

BigTech, FinTech, and the Battle Over Financial Services

BigTech vs FinTech – Which Will Replace Traditional Banks?

Two recent papers have looked at the attributes, relative strengths and weaknesses, and likelihood to emerge as the main challenger to traditional financial institutions, of two different species of technology company: BigTechs and FinTechs. The two papers are:

  1. Financial Stability Board’s (FSB) February 2019 paper titled “FinTech and Market Structure in Financial Services”, available at https://www.fsb.org/wp-content/uploads/P140219.pdf
  2. Bank for International Settlements’ (BIS) April 2019 Working Paper titled “BigTech and the changing structure of financial intermediation”, available at https://www.bis.org/publ/work779.pdf

The BIS Working Paper makes a pretty compelling argument that the BigTech firms have some distinct advantages over FinTechs that make them more likely to usurp traditional financial institutions. Advantages such as an existing customer base (that is familiar with a user interface and messaging platform), and access to capital (often without the constraints that financial institutions have). And the BIS paper also sets out some of the advantages that BigTech has over traditional financial institutions, such as the financial sector’s current dependence on BigTech’s cloud-based computing and storage (think of Amazon’s AWS), technological advantages such as artificial intelligence, machine learning, and APIs, and regulatory advantages (BigTech isn’t burdened with Dodd-Frank, Basel capital restrictions, model risk regulations, and anti-money laundering program regulations).

But what are the differences between “BigTech” and “FinTech”? Both papers provide definitions for, and examples of, the two terms.

BigTech:

  1. FSB: “refers to large technology companies that expand into the direct provision of financial services or of products very similar to financial products”
  2. BIS: “refers to large, existing companies whose primary activity is in the provision of digital services, rather than mainly in financial services … BigTech companies offer financial products only as one part of a much broader set of business lines.”

Both the FSB and BIS cite the same BigTech firms as examples: Facebook, Amazon, Apple, Google, Alibaba, Tencent, and Vodafone, among others.

FinTech:

  1. FSB: “technology-enabled innovation in financial services that could result in new business models, applications, processes or products with an associated material effect on the provision of ‘financial services’ … used to describe firms whose business model focuses on these innovations.”
  2. BIS: “refers to technology-enabled innovation in financial services with associated new business models, applications, processes, or products, all of which have a material effect on the provision of financial services.”

Both the FSB and BIS use Quicken Loans and SoFi, among others, as examples of FinTech firms.

BigTech is really … Big

The BIS paper notes that the six largest global BigTech firms all have market capitalizations greater than the market capitalization of the largest global financial institution, JPMorgan Chase.

Which BigTech Firms are Providing What Financial Services Today?

The BIS paper provides a great summary table of the five main types of financial services that the eleven dominant BigTechs are currently providing. It’s clear from this table that the three Chinese BigTechs – Alibaba, Tencent, and Baidu – have the most comprehensive suite of financial services/products, followed by the US trio of Google, Amazon, and Facebook.


There is no conclusion. Every day brings new entrants and participants, shifts, and changes. The regulatory environments are rapidly changing (although regulators and regulations always lag the regimes they regulate). But these two papers provide some insights into the world of FinTech, BigTech, and financial services, and are worth spending some time on.

The “Ice Cream Social” Bandit – Former Bank Cash Vault Manager Stole $4.3 Million

Former Bank Cash Vault Manager Sentenced to 10 Years in Federal Prison for Stealing over $4 Million – Case Reveals Gaps in “Dual Control” Training

The scheme involved an “ice cream social” as an excuse to stay late, a private jet paid for with stolen bank funds, and training to get around dual controls


According to the US Attorney for Alaska, on April 29, 2019, Gerardo Valenzuela aka Gary Cazarez was sentenced to serve 10 years in prison after pleading guilty to stealing more than $4.3 million from the cash vault of KeyBank in Anchorage, Alaska. The theft occurred on July 29, 2011, and Valenzuela fled to Mexico. He was arrested by Mexican authorities on Aug. 2, 2011, when a random search of his luggage at an internal (Mexico) checkpoint revealed $3.8 million in cash, firearms, and ammunition. Cazarez was charged and convicted in Mexico of criminal offenses analogous to money laundering and illegal possession of firearms for smuggling the cash and firearms into Mexico. After serving a seven-year prison term for his Mexican conviction, Cazarez was extradited to the U.S. to face the Alaska bank theft charges.

The theft was well conceived, well planned, and well executed. It also reveals a few interesting potential gaps that banks could have in their controls and training programs. The US Attorney’s press release tells the story:

According to court documents, on or about July 29, 2011, Valenzuela was the Vault Manager for KeyBank when he stole approximately $4.3 million dollars in U.S. Currency from KeyBank in Anchorage, and then flew in a chartered jet to Washington, bought a car, obtained an AK-47 for protection and drove to Mexico.  He mailed his and his girlfriend’s cell phones to Florida and New York to throw off investigators.  The investigation revealed that Valenzuela’s motive to rob his employer was his concern that Keybank was going to make his position obsolete and he would be out of a job.

Months prior to his theft, Valenzuela told his girlfriend that he could rob the bank noting that the bank had video surveillance, but no physical surveillance at that time.  In June 2011, he started to put his plan into action, which began with requesting that his brother obtain a firearm for him.  On July 8, 2011, Valenzuela falsely trained new employees on vault procedures, effectively removing dual controls over the vault and laying the groundwork for his ability to steal $4.3 million a few weeks later.

Here are the first two potential control gaps. First, the bank had video surveillance but no physical surveillance. Second, Valenzuela was able to falsely train new employees on vault procedures, effectively removing dual controls over the vault.

On July 26, 2011, Valenzuela purchased an airplane ticket for his girlfriend from Anchorage to Seattle.  Two days later, he stole $30,000 from KeyBank, $24,000 of which he used to rent a private jet for himself to make his escape the next day.  On the day of his theft, July 29, 2011, Valenzuela told the branch manager he was going to organize an ice cream social for bank customers, giving him an excuse to stay late as he cleaned up.  Late at night and without dual controls in place, Valenzuela was able to access the vault without another employee present.  He boxed up $4.3 million in cash, rolled it out of the vault to his car in the parking lot, and loaded the money into his car.  Valenzuela drove to where the private jet was waiting for him in Anchorage and he flew to Seattle.

Valenzuela had set the timer on the vault lock for the maximum time allowable, giving him six days to escape to Mexico.  By the time KeyBank discovered his theft, Valenzuela and his girlfriend were already in Mexico; however, Valenzuela was arrested by Mexican authorities on Aug. 2, 2011, when a random search of his luggage at a checkpoint revealed $3.8 million in cash, firearms, and ammunition.

Here is the third control gap: the vault manager was able to set the timer on the vault lock for six days. July 29, 2011 was a Friday, so at most the vault timer should have been set for two days, not six.


Chief Judge Burgess noted that the most important sentencing factors in this case were the “magnitude of the crime” and Valenzuela’s lack of candor with the court.  At the sentencing hearing, evidence was presented that Valenzuela had executed a “fail safe plan” that included stashing $500,000 in Washington before he fled to Mexico so that if he were caught he would still have money when he was released.  That money has still not been recovered.

Appropriate controls on the timers on vaults, and ensuring there is physical surveillance to supplement any video surveillance, are two controls that should be in place for most financial institutions. But the most interesting control breakdown was around training the staff on appropriate dual control procedures. As the very name – dual – suggests, these controls are intended to involve (at least) two people on the theory that it is much harder for two people to conspire and act together than it is for one person to act alone. But if the person doing the training is both corrupt and one of the two people involved in the execution of the dual control, that control is ineffective, and the innocent person that received the fraudulent training is none the wiser.

So … all institutions that have dual controls, check to see who is doing the training: it cannot be one of the people involved in the execution of that control!
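
The separation the author describes can be expressed as a simple invariant. Below is a minimal, illustrative Python sketch (all names are hypothetical, not from any banking system): access under dual control requires two distinct authorized employees to be present, and the person who delivered the control training cannot be one of the two executing the control.

```python
# Illustrative sketch of two dual-control invariants (hypothetical names):
# 1) access requires two *different* authorized employees present, and
# 2) the person who trained staff on the control cannot be an executor.

def dual_control_ok(present, authorized, trainer):
    """Return True only if at least two distinct authorized employees are
    present, neither of whom is the person who ran the control training."""
    executors = {p for p in present if p in authorized and p != trainer}
    return len(executors) >= 2

authorized = {"alice", "bob", "carol"}

# One vault manager alone after the "ice cream social": blocked.
assert not dual_control_ok({"alice"}, authorized, trainer="dave")

# Two authorized employees, independent trainer: allowed.
assert dual_control_ok({"alice", "bob"}, authorized, trainer="dave")

# Two present, but one of them wrote the training: still blocked.
assert not dual_control_ok({"alice", "bob"}, authorized, trainer="alice")
```

The point of the third case is exactly the gap in the KeyBank story: a control that looks "dual" on paper collapses if the trainer is also one of its executors.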

Regulatory Lag & Drag – Are There FinTech Solutions?

The RegTech, SupTech, and FinTech communities are focused on developing new technologies to speed up, simplify, and streamline financial institutions’ ability to implement new rules, regulations, and regulatory guidance. But there are two other stages of the regulatory life cycle that may be longer and more problematic for financial institutions than implementing new regulations: these are the time it takes for new regulations to be written and published (“Regulatory Lag”), and the time it takes to enforce those regulations (“Regulatory Drag”).

Time to Regulate – or “Regulatory Lag”.

This lag occurs where a new risk emerges, or a new product is introduced, or an existing product is used in new ways. There is always a lag between that new risk or product and the resulting legislative and/or regulatory response. In the meantime, institutions have to begin addressing the new risks when they first emerge – they can’t wait for new rules, regulatory guidance, and regulations to begin the multi-year people, process, and technology changes necessary to address the requirements of the regulation. Those early, pre-rule and pre-regulation efforts at building controls to address new risks can be expensive, and institutions run the risk of missing the mark and having to re-do much of what they’ve built. The best example of regulatory lag in the AML space is 9/11, which saw legislation passed in 45 days (October 2001), regulations published two years later (2003), and regulatory guidance in the form of the BSA Exam Manual two years after that (2005). Although financial institutions learned of the new information sharing provisions in section 314 of the USA PATRIOT Act just 45 days after 9/11, it was almost another four years before they knew how their regulators would examine their compliance with those provisions. It was this “regulatory lag” that led to my written statement (in December 2006) that “we’ll be judged tomorrow on what we’re building today, based on regulations that haven’t yet been written and best practices that haven’t been shared.”

Time to Enforce – or “Regulatory Drag”

Public enforcement actions (and prosecutions) drive a lot of compliance-related behavior in financial services. Yet there are multi-year delays between when the impugned behavior occurs and when a public enforcement action (and/or prosecution) makes it known to the industry. FinCEN’s December 2014 action against MoneyGram’s former BSA Officer is a good example: that action was made public in December 2014 and alleged violations of the Bank Secrecy Act that occurred from 2003 through May 2008 – more than 6½ years between the last day of the impugned activity and the public action.

What Can Technology Do To Address Regulatory Lag and Drag?

Regulatory lag and drag have been around for as long as there have been regulators. But with the world speeding up as much as it is, with new products and services, and new providers, being rolled out and created much faster than regulatory bodies can manage, there must be changes made in the entire regulatory life cycle.

FinTech providers and their customers demand a fast revolution. Regulators prefer a slow, deliberate evolution. There has to be a better way to identify new and emerging risks, to draft and communicate regulations to address those risks, and to implement the needed controls to manage those risks.

I’m not sure what can be done from a purely technology perspective to speed up regulators (and prosecutors), but the proponents of FinTech, RegTech, and SupTech solutions shouldn’t just focus on digitizing the implementation of new regulations, but on digitizing the entire regulatory life cycle: the regulatory lag between new risks and new regulations, the regulations themselves, and the regulatory drag from regulatory problem to public resolution.

Posted on LinkedIn on January 28, 2019 https://www.linkedin.com/pulse/regulatory-lag-drag-fintech-solutions-jim-richards/

CFTC Primer on “Smart Contracts” … which apparently aren’t necessarily “smart”

The Commodity Futures Trading Commission (CFTC) recently published an excellent primer on Smart Contracts.

I’ve reproduced most of the primer here: it was a PowerPoint reduced to PDF, so some of the images are not included. But the main gist of it is here.

Notably, the CFTC notes that “a ‘smart contract’ is not necessarily ‘smart.’  The operation is only as smart as the information feed it receives and the machine code that directs it.”  This is a great quote, expressing a sentiment that I have repeatedly stated in the context of machine learning and artificial intelligence applications for financial crimes risk management … they are only as good as the data they receive!



What is a smart contract?

Fundamentally, a “smart contract” is a set of coded computer functions. It may incorporate the elements of a binding contract (e.g., offer, acceptance, and consideration), or may simply execute certain terms of a contract. A smart contract allows self-executing computer code to take actions at specified times and/or based on reference to the occurrence or non-occurrence of an action or event (e.g., delivery of an asset, weather conditions, or change in a reference rate).

A “smart contract” is not necessarily “smart.” The operation is only as smart as the information feed it receives and the machine code that directs it. A “smart contract” may not be a legally binding contract. It may be a gift or some other non-contractual transfer, or it may be only part of a broader contract. To the extent a smart contract violates the law, it would not be binding or enforceable.
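
The CFTC’s definition – self-executing code that acts on an external event feed – can be sketched in a few lines. The following is an illustrative Python toy, not a real blockchain contract; the oracle, payout amount, and party names are all hypothetical. Note how the contract’s behavior is entirely determined by its data feed, which is precisely the CFTC’s “only as smart as the information feed” point.

```python
# Toy "smart contract": self-executing code keyed to an external event
# feed (the "oracle"). Hypothetical illustration, not a real blockchain
# contract -- no cryptography or distributed ledger here.

def make_flight_delay_contract(oracle, payout, insurer, insured):
    """Parametric insurance: pay out automatically if the oracle
    reports that the specified event (a flight delay) occurred."""
    def settle(balances):
        if oracle():  # occurrence / non-occurrence of the event
            balances[insurer] -= payout
            balances[insured] += payout
        return balances
    return settle

balances = {"insurer": 1000, "traveler": 0}
contract = make_flight_delay_contract(
    oracle=lambda: True,   # the data feed reports: flight was delayed
    payout=200, insurer="insurer", insured="traveler")
print(contract(balances))  # {'insurer': 800, 'traveler': 200}
```

If the oracle is wrong – or manipulated – the code still executes faithfully, which is why the quality of the information feed matters as much as the code itself.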

Smart Contracts Leverage Blockchain/DLT

Smart contracts can be stored and executed on a distributed ledger, an electronic record that is updated in real-time and intended to be maintained on geographically dispersed servers or nodes. Through decentralization, evidence of the smart contract is deployed to all nodes on a network, which effectively prevents modifications not authorized or agreed by the parties. Blockchain is a continuously growing database of permanent records, “blocks,” which are linked and secured using cryptography. Note: Distributed ledgers may be public or private/permissioned. See “A CFTC Primer on Virtual Currencies,” October 17, 2017, https://www.cftc.gov/LabCFTC/Primers/index.htm

Smart Contract Origins & Recent Explanations

The concept of a smart contract is not new. More than 20 years ago, computer scientist Nick Szabo stated the following:

“A smart contract is a set of promises, specified in digital form, including protocols within which the parties perform on the other promises…. The basic idea of smart contracts is that many kinds of contractual clauses (such as liens, bonding, delineation of property rights, etc.) can be embedded in the hardware and software we deal with, in such a way as to make breach of contract expensive (if desired, sometimes prohibitively so) for the breacher.” Nick Szabo, Smart Contracts: Building Blocks for Digital Markets, 1996, available at http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart_contracts_2.html

“A smart contract is a mechanism involving digital assets and two or more parties, where some or all of the parties put assets in, and assets are automatically redistributed among those parties according to a formula based on certain data that is not known at the time the contract is initiated.” Vitalik Buterin, Founder of Ethereum, “DAOs, DACs, DAs and More: An Incomplete Terminology Guide,” (May 6, 2014), available at https://blog.ethereum.org/2014/05/06/daos-dacs-das-and-more-an-incomplete-terminology-guide/

“A smart contract is an agreement in digital form that is self-executing and self-enforcing.” Kevin Werbach, Professor of Legal Studies & Business Ethics, University of Pennsylvania, Wharton Business School, “The Promise — and Perils — of ‘Smart’ Contracts,” (May 18, 2017), available at http://knowledge.wharton.upenn.edu/article/what-are-smart-contracts/

“A smart contract is an automatable and enforceable agreement. Automatable by computer, although some parts may require human input and control. Enforceable either by legal enforcement of rights and obligations or via tamper-proof execution of computer code.” ISDA and King and Wood Mallesons, Smart Derivatives Contracts: From Concept to Construction (October 2018), at 5 (citing Clack, C., Bakshi, V., and Braine, L., “Smart Contract Templates: foundations, design landscape and research directions” (August 4, 2016, revised March 15, 2017))

Smart contracts can be viewed as part of an evolution to automate processes with machines and self-executing code. Increasing automation has long been a feature of our financial markets including: for example, Stop Loss (Conditional) Orders (“If the price falls below $X, then sell at market”), and trading algorithms and smart order routers (machines that direct orders for execution).  Increasingly, smart contract-like automation is a feature of everyday life. Common examples include ATMs, automatic bill pay, touch-to-pay systems, and instant money transfer apps.
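
The stop-loss order quoted above is itself a minimal conditional contract, and makes the “self-executing code” idea concrete. A hedged, illustrative Python sketch (the threshold and prices are made up):

```python
# A stop-loss (conditional) order as a minimal self-executing rule:
# "If the price falls below $X, then sell at market."

def stop_loss(threshold):
    """Return a rule that checks each observed price against the stop."""
    def check(price):
        return "SELL" if price < threshold else "HOLD"
    return check

order = stop_loss(threshold=50.0)
assert order(52.10) == "HOLD"   # above the stop: do nothing
assert order(49.95) == "SELL"   # stop triggered: sell at market
```

A smart contract generalizes this pattern: the condition can reference any data feed, and the action can move assets between parties rather than just emit an order.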

Potential Benefits of a Smart Contract

The attributes of a smart contract give rise to potential benefits throughout an economic transaction lifecycle, e.g., formation, execution, settlement.

Examples of a Smart Contract

The primer provides three examples of a smart contract: a self-executing insurance contract, transportation (bicycle rental), and a credit default swap.

Other Potential Smart Contract Use Cases

Smart Contracts may have potential uses in financial market operations, and likewise may be useful in a variety of other areas as well. Examples include:

  • Financial Markets and Participants
    • Derivatives – streamline post-trade processes, real time valuations and margin calls.
    • Securities – simplify capitalization table maintenance (e.g., automate dividends, stock splits).
    • Trade Clearing and Settlement – improve efficiency and speed of settlement, with fewer misunderstandings of terms.
    • Supply Chain/Trade Finance – track product movement, streamline payments, facilitate lending and liquidity.
    • Data Reporting and Recordkeeping – greater standardization and accuracy (e.g., Swaps Data Reporting, regulator nodes for real time risk analysis); automated retention and destruction.
    • Insurance – automatic and automated claims processing based on specified events; Internet of Things (IoT) enabled vehicles/homes/farms could execute claims automatically.
  • Other sample applications:
    • Public property records – maintain a “gold copy” of ownership and interests in real property.
    • Loyalty and rewards – can power travel or other rewards systems.
    • Electronic Medical Records – improve security and accessibility of data, empowering patients to control their own records while improving compliance with regulations (e.g., HIPAA).
    • Clinical Trials – protect patients with timestamped immutable consent forms, securely automate sequences, and increase sharing of anonymized data while ensuring patient privacy.

Potentially Applicable Legal Frameworks

Depending on the facts and circumstances, a Smart Contract can be a binding legal contract. Smart contracts may be subject to a variety of legal frameworks depending on their application or product characterization. Examples include:

  • Commodity Exchange Act and CFTC regulations
  • Federal and state securities laws and regulations
  • Federal, state, and local tax laws and regulations
  • The Uniform Commercial Code (UCC), Uniform Electronic Transactions Act (UETA), and Electronic Signatures in Global and National Commerce Act (ESIGN Act)
  • The Bank Secrecy Act, USA PATRIOT Act, and other Anti-Money Laundering (AML) laws and regulations
  • State and federal money transmission laws.

Existing law and regulation apply equally regardless of what form a contract takes. Contracts, or constituent parts of contracts, that are written in code are subject to otherwise applicable law and regulation.

Smart Contracts: Operational Risk

Smart contracts may not include appropriate or sufficient backup / failover mechanisms in case something goes awry. Smart contracts may depend on other systems to fulfill contract terms. These other systems may have vulnerabilities that could prevent the smart contract from functioning as intended.

Some smart contract platforms may be missing critical system safeguards and customer protections. Where smart contracts are linked to a blockchain, forks in the chain could create operational problems.

In case of an operational failure, recourse may be limited or non-existent – complete loss of a virtual asset is possible. Poor governance is another operational risk: smart contracts may require attention, action, and possible revision subject to appropriate governance and liability mechanisms.

Smart Contracts – Technical Risks

There are a number of technical risks, including:

  • Unintended software vulnerabilities
  • Humans! – make mi$taak3s when K0diNg
  • Technology failures – internet service can go down, user interfaces may become incompatible, or computers/servers can stop working
  • Scaling or bandwidth issues
  • Divergent/Forked Blockchains – such events can create multiple smart contracts where only one existed, or may disrupt the functioning of a smart contract
  • Future proofing – unforeseen or unanticipated future events that shock and/or stress the technology
  • Oracle (the oracle, not Oracle) failure, disruption, or other issues with the external sources used to obtain reference prices, events, or other information.

Rules-Based Monitoring, Alert to SAR Ratios, and False Positive Rates – Are We Having The Right Conversations?

There is a lot of conversation in the industry about the inefficiencies of “traditional” rules-based monitoring systems, Alert-to-SAR ratios, and the problem of high false positive rates. Let me add to that conversation by throwing out what could be some controversial observations and suggestions …

Current Rules-Based Transaction Monitoring Systems – are they really that inefficient?

For the last few years AML experts have been stating that the rules-based or typology-driven transaction monitoring strategies that have been deployed for the last 20 years are not effective, with high false positive rates (95% false positives!) and enormous staffing costs to review and disposition all of the alerts. Should these statements be challenged? Is it the fact that transaction monitoring strategies are rules-based or typology-driven that drives inefficiencies, or is it the fear of missing something driving the tuning of those strategies? Put another way, if we tuned those strategies so that they only produced SARs that law enforcement was interested in, we wouldn't have high false positive rates and high staffing costs. Graham Bailey, Global Head of Financial Crimes Analytics at Wells Fargo, believes it is a combination of basic rules-based strategies coupled with the fear of missing a case. He writes that some banks have created their staffing and cost problems by failing to tune their strategies, and by "throwing orders of magnitude higher resources at their alerting." He notes that this has a "double negative impact" because "you then have so many bad alerts in some banks that they then run into investigators' 'repetition bias', where an investigator has had so many bad alerts that they assume the next one is already bad" and they don't file a SAR. So not only are SAR/alert rates low, but you also run the risk of missing the good cases.

After 20+ years in the AML/CTF field – designing, building, running, tuning, and revising programs in multiple global banks – I am convinced that rules-based interaction monitoring and customer surveillance systems, running against all of the data and information available to a financial institution, managed and tuned by innovative, creative, courageous financial crimes subject matter experts, can result in an effective, efficient, proactive program that both provides timely, actionable intelligence to law enforcement and meets and exceeds all regulatory obligations. Can cloud-based, cross-institutional, machine learning-based technologies assist in those efforts? Yes! If properly deployed and if running against all of the data and information available to a financial institution, managed and tuned by innovative, creative, courageous financial crimes subject matter experts.

For more, see “False Positive Rates”, below …

Alert to SAR Ratios – is that a ratio that we should be focused on?

A recent Mid-Size Bank Coalition of America (MBCA) survey found the average MBCA bank had: 9,648,000 transactions/month being monitored, resulting in 3,908 alerts/month (0.04% of transactions alerted), resulting in 348 cases being opened (8.9% of alerts became a case), resulting in 108 SARs being filed (31% of cases or 2.8% of alerts). Note that the survey didn’t ask whether any of those SARs were of interest or useful to law enforcement. Some of the mega banks indicate that law enforcement shows interest in (through requests for supporting documentation or grand jury subpoenas) 6% – 8% of SARs.
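The funnel in that survey can be verified with simple arithmetic. A minimal Python sketch, using only the figures quoted above (the `pct` helper is my own, for illustration):

```python
# MBCA survey figures as quoted in the text (average bank, per month).
transactions = 9_648_000   # transactions monitored
alerts = 3_908             # alerts generated
cases = 348                # cases opened
sars = 108                 # SARs filed

def pct(numerator, denominator, places=2):
    """Percentage rounded to the given number of decimal places."""
    return round(100 * numerator / denominator, places)

print(pct(alerts, transactions))  # ~0.04 (% of transactions that alert)
print(pct(cases, alerts, 1))      # ~8.9  (% of alerts that become cases)
print(pct(sars, cases, 1))        # ~31.0 (% of cases that produce a SAR)
print(pct(sars, alerts, 1))       # ~2.8  (% of alerts that produce a SAR)
```

The computed percentages match the survey's reported ratios, so the funnel as quoted is internally consistent.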

So I argue that the Alert/SAR and even Case/SAR (in the case of Wells, Package/Case and Package/SAR) ratios are all of interest, but tracking to SARs filed is a little bit like a car manufacturer tracking how many cars it builds but not how many cars it sells, or how well those cars perform, how well they last, and how popular they are.  The better measure for AML programs is “SARs purchased”, or SARs that provide value to law enforcement.

How do you determine whether a SAR provides value to Law Enforcement? One way would be to ask Law Enforcement, and hope you get an answer. That could prove to be difficult.  Can you somehow measure Law Enforcement interest in a SAR?  Many banks do that by tracking grand jury subpoenas received to prior SAR suspects, Law Enforcement requests for supporting documentation, and other formal and informal requests for SARs and SAR-related information. As I write above, an Alert-to-SAR rate may not be a good measure of whether an alert is, in fact, “positive”. What may be relevant is an Alert-to-TSV SAR rate (see my previous article for more detail on TSV SARs).  What is a “TSV SAR”? A SAR that has Tactical or Strategic Value to Law Enforcement, where the value is determined by Law Enforcement providing a response or feedback to the filing financial institution within five years of the filing of the SAR that the SAR provided tactical (it led to or supported a particular case) or strategic (it contributed to or confirmed a typology) value. If the filing financial institution does not receive a TSV SAR response or feedback from law enforcement or FinCEN within five years of filing a SAR, it can conclude that the SAR had no tactical or strategic value to law enforcement or FinCEN, and may factor that into decisions whether to change or maintain the underlying alerting methodology. Over time, the financial institution could eliminate those alerts that were not providing timely, actionable intelligence to law enforcement, and when that information is shared across the industry, others could also reduce their false positive rates.
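The TSV SAR feedback loop described above could be tracked with very little machinery. A hypothetical sketch, assuming each SAR is recorded with a filing date and an optional law enforcement feedback flag (the data model and field names are my own invention, not any institution's system):

```python
from datetime import date, timedelta

# Five-year feedback window, per the TSV SAR definition in the text.
FIVE_YEARS = timedelta(days=5 * 365)

def tsv_sar_rate(sars, today):
    """Among SARs whose five-year feedback window has closed, return the
    fraction that received tactical or strategic value feedback
    (None if no window has closed yet)."""
    closed = [s for s in sars if today - s["filed"] >= FIVE_YEARS]
    if not closed:
        return None
    tsv = [s for s in closed if s["feedback"] in ("tactical", "strategic")]
    return len(tsv) / len(closed)

sars = [
    {"filed": date(2012, 3, 1), "feedback": "tactical", "rule": "R-101"},
    {"filed": date(2012, 6, 1), "feedback": None,       "rule": "R-101"},
    {"filed": date(2018, 1, 1), "feedback": None,       "rule": "R-202"},  # window still open
]
print(tsv_sar_rate(sars, today=date(2019, 6, 1)))  # 0.5
```

Grouping the same calculation by the alerting rule that produced each SAR would identify rules whose closed-window TSV rate is zero — the candidates for elimination that the paragraph above describes.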

Which leads to …

False Positive Rates – if 95% is bad … what’s good?

There is a lot of lamenting, and a lot of axiomatic statements, about high false positive rates for AML alerts: 95% or even 98% false positive rates.  I’d make three points.

First, vendors selling their latest products, touting machine learning and artificial intelligence as the solution to high false positive rates, are doing what they should be doing: convincing consumers that their current product is outdated and ill-equipped for its purpose by touting the next, new product. I argue that high false positive rates are not caused by the current rules-based technologies; rather, they're caused by inexperienced AML enthusiasts or overwhelmed AML experts applying rules that are too simple against data that is mis-labeled, incomplete, or simply wrong, and erring on the side of over-alerting and over-filing for fear of regulatory criticism and sanctions.

If the regulatory problems with AML transaction monitoring were truly technology problems, then the technology providers would be sanctioned by the regulators and prosecutors.  But an AML technology provider has never been publicly sanctioned by regulators or prosecutors … for the simple reason that any issues with AML technology aren’t technology issues: they are operator issues.

Second, are these actually "false" alerts? Rather, they are alerts that, at the present time and based on the information currently available, either do not rise to the level of requiring a complete investigation or, if completely investigated, do not meet the definition of "suspicious". Regardless, they are now valuable data points that go back into your monitoring and case systems, are "hibernated", and may come back if that account or customer alerts at a later time, or if there is another internally- or externally-generated reason to investigate that account or customer.
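The "hibernation" idea can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation; the field names are invented:

```python
from collections import defaultdict

# Dormant alerts keyed by customer: data points retained, not discarded.
hibernated = defaultdict(list)

def hibernate(alert, hibernated):
    """Park an alert that, on current information, does not warrant a case."""
    hibernated[alert["customer_id"]].append(alert)

def handle_alert(alert, hibernated):
    """Return the new alert plus any hibernated alerts for the same customer,
    so prior dormant alerts re-surface as part of one package."""
    prior = hibernated.pop(alert["customer_id"], [])
    return [alert] + prior

a1 = {"id": 1, "customer_id": "C-42", "reason": "structuring threshold"}
hibernate(a1, hibernated)  # not suspicious on its own today

a2 = {"id": 2, "customer_id": "C-42", "reason": "rapid movement of funds"}
package = handle_alert(a2, hibernated)
print([a["id"] for a in package])  # [2, 1] - the old alert comes back with the new one
```

The point of the sketch is simply that a "false" positive is never thrown away: it waits, and it changes the context of the next alert on the same customer.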

Third, if 95% or 98% false positive rates are bad … what is good? What should the target rate be? I’ll provide some guidance, taken from a Treasury Office of Inspector General (OIG) Report: OIG-17-055 issued September 18, 2017 titled “FinCEN’s information sharing programs are useful but need FinCEN’s attention.” The OIG looked at 314(a) statistics for three years (fiscal years 2010-2012) and found that there were 711 314(a) requests naming 8,500 subjects of interest sent out by FinCEN to 22,000 financial institutions. Those requests came from 43 Law Enforcement Agencies (LEAs), with 79% of them coming from just six LEAs (DEA, FBI, ICE, IRS-CI, USSS, and US Attorneys’ offices). Those 711 requests resulted in 50,000 “hits” against customer or transaction records by 2,400 financial institutions.

To analogize those 314(a) requests and responses to monitoring alerts, there were 2,400 "alerts" (financial institutions with positive matches) out of 22,000 "transactions" (total financial institutions receiving the 314(a) requests). That is an 11% hit rate or, arguably, an 89% false positive rate. And keep in mind that in order to be included in a 314(a) request, the Law Enforcement Agency must certify to FinCEN that the target "is engaged in, or is reasonably suspected based on credible evidence of engaging in, terrorist activity or money laundering." So Law Enforcement considered that all 8,500 of the targets in the 711 requests were active terrorists or money launderers, and 11% of the financial institutions positively responded.
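The 314(a) arithmetic above is easy to reproduce from the OIG figures quoted in the text:

```python
# Figures from the OIG report as quoted above.
institutions_queried = 22_000    # financial institutions receiving 314(a) requests
institutions_with_hits = 2_400   # institutions reporting positive matches

hit_rate = institutions_with_hits / institutions_queried
print(f"Hit rate: {hit_rate:.1%}")                    # ~10.9%, i.e. roughly 11%
print(f"'False positive' rate: {1 - hit_rate:.1%}")   # ~89.1%
```

Even with law enforcement certifying every target as an active terrorist or money launderer, the "hit rate" is about 11% — the benchmark the next paragraph builds on.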

With that, one could argue that a “hit rate” of 10% to 15% could be optimal for any reasonably designed, reasonably effective AML monitoring application.

But a better target rate for machine-generated alerts is the rate generated by humans. Bank employees – whether bank tellers, relationship managers, or back-office personnel – all have the regulatory obligation of reporting unusual activity or transactions to the internal bank team that is responsible for managing the AML program and filing SARs. For the twenty-plus years I was a BSA Officer or head of investigations at large multi-national US financial institutions, I found that those human-generated referrals resulted in a SAR roughly 40% to 50% of the time.

The goal for machine-based alert generation systems should be to reach the 40% to 50% referral-to-SAR ratio of human-based referral programs.

FinCrime FinTech Hype, Hubris, and Subject Matter Enthusiasm

Two very recent fincrime fintech start-ups recently published marketing papers – one a self-styled “Report” the other a blog – that should serve as reminders that, although innovation and change are critical to financial institutions’ financial crimes risk management programs, fincrime fintechs are not. Or, put another way, those fincrime fintechs need to understand what they are and what they are not. Most important, they are not “solutions”: they are tools that could be deployed, in whole or in part, by true financial crimes experts who bear the statutory and regulatory responsibility for – and personal liability for – designing, developing, implementing, maintaining, and enhancing their programs. And U.S. banking agencies are embracing the idea of responsibly implementing innovative approaches to financial crimes risk management. The U.S. banking agencies’ December 3rd joint statement is a very positive step to encourage private sector innovation in fighting financial crime. But they don’t limit those innovative approaches to just adopting new technologies: they also encourage “testing new ways of using existing tools”. For those banks that are considering replacing their existing tools with “modern era technologies”, I would caution them to first look at how they are using their existing tools, whether they have the data and in-house expertise to even deploy modern era technologies, and consider whether they are better off improving and augmenting their existing tools.

Let’s take a look at the report and blog.

Feedzai – “the market leader in fighting financial crime fraud with AI”

The first report is from Feedzai, which, according to its website:

Feedzai is AI. We’re coding the future of commerce with a leading platform powered by artificial intelligence and big data. Founded and developed by data scientists and aerospace engineers, Feedzai has one critical mission: make commerce safe. The world’s largest banks, payment providers and retailers use Feedzai’s machine learning technology to manage risks associated with banking and shopping, whether it’s in person, online or via mobile devices.

[and …]

Feedzai is the market leader in fighting financial crime fraud with AI. But even a leader needs partners. To maximize our impact, we partner with top tier financial institutions, consultancies, system integrators, and technology providers to create win/win/win scenarios for the marketplace.

Feedzai’s report is titled “A Guide for Financial Institutions – Augmenting Your AML with AI: See The Risk Signals in the Noise”


This “report” is really a marketing document from Feedzai, used to convince financial institutions that if they’re not deploying machine learning and AI – indeed, if they’re not deploying Feedzai’s machine learning and AI – they’re at risk of what they refer to as “The Six Pains of Money Laundering” which can only be addressed if the buyer “flips the script with Feedzai anti-money laundering.”

Let’s look at those six pains. First, and foremost, none of them are actually pains of money laundering, but of complying with government-imposed legislative and regulatory requirements and expectations.  Some aren’t even pains, or pains related to the technology solutions that Feedzai is selling, but simple observations.

The first “pain” is regulatory fines. Feedzai notes that “In the past decade, compliance fines erased $342 billion in profits for top US and European banks. This figure is expected to exceed $400 billion by 2020.” And then they list what is implied to be AML-related fines for 11 banks and 1 non-bank telecom manufacturer. Going through those, every one of them is solely, or primarily, an OFAC or sanctions-related penalty, with AML either not part of the penalty or, in the case of the hybrid OFAC/AML penalties, a small part.  At best Feedzai’s list is sloppy and incomplete: at worst it is deceptive. If they’re going to write a paper touting their AML capabilities that includes regulatory fines as the first pain point, they could at least use AML-related regulatory fines.

The second “pain” is organizational burden. They write: “Financial institutions might employ upwards of 5,000 employees in sanction screening alone. As transaction volume keeps growing, so do alerts, false positives, and compliance teams, all at unsustainable rates.” Again, they’ve confused AML with sanctions. Economic sanctions programs are related to AML programs, just as fraud programs are related to AML programs. But they are very different disciplines and require very different programs, technologies, staffing, and reporting. And a phrase such as “might employ upwards of 5,000” is weak (the word “might”) and ambiguous (does “upwards of 5,000” mean 4,900? 1,000?).

The third “pain” is that “current transaction monitoring solutions lack context”. Feedzai writes:

A PwC report states that transaction monitoring for AML often generates false positive rates of over 90%. The rule based systems that monitor these transactions do what they were supposed to: point to incidents where money movement exceeded certain thresholds. However, compliance teams cannot go deeper to provide additional context that would substantiate or refute the actual money laundering risk. Current solutions are unable to connect the dots between multiple seemingly unrelated alerts in order to contextualize and visualize suspicious movement patterns that point to broader AML risk.

First, the reason that there are false positives is that compliance teams must, can, and do go deeper than the alert-generating monitoring systems to provide additional context to substantiate (apparently in 10% of the cases) or refute (in 90%). But those teams don't substantiate or refute "the actual money laundering risk" as Feedzai writes. What financial institutions are charged with is making a determination that certain activity is suspicious, not that it is, in fact, money laundering. And as all experienced AML professionals know, it is the job of the analyst or investigator to take the alert or referral and determine whether the activity has no business or apparent lawful purpose, or is not the type of activity that the particular customer would normally be expected to engage in, and to conclude that there is no reasonable explanation for the activity after examining the available facts, including the background and possible purpose of the transactions and activity. It is fair to say, though, that analysts and the entire financial services industry would be better served if AML transaction monitoring, interaction monitoring, and customer surveillance applications could produce alerts that led to SARs in more than 10% of the cases. But as I will write in an upcoming article, addressing the false positive issue is more about, or at least as much about, cleaning up a bank's data and regulatory reform as it is about deploying new technology.

Second, if a bank’s current solution is “unable to connect the dots between multiple seemingly unrelated alerts in order to contextualize and visualize suspicious movement patterns that point to broader AML risk”, then that bank is not using the data it has available to it in any reasonable way. A simple Scenario Analysis tool, such as the one I first developed in 1999 (and the subject of a July 2018 News post on this site), was used to run sophisticated, segmented customer surveillance models using basic relational database tools. That, coupled with a rudimentary case management system that allowed grouping and de-duplicating of related alerts and referrals into consolidated case packages, connected the dots in two different multi-national financial institutions. Connecting AML dots does not require banks to rip-and-replace existing tools: it requires them to creatively use their existing tools.
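The grouping and de-duplication described above requires nothing more exotic than basic relational-style operations. A hypothetical sketch (the schema is invented for illustration; a production system would key on accounts, counterparties, and time windows as well):

```python
from collections import defaultdict

# Alerts from different scenarios, some overlapping on the same customer.
alerts = [
    {"id": 1, "customer_id": "C-7", "scenario": "wire-structuring"},
    {"id": 2, "customer_id": "C-7", "scenario": "cash-intensive"},
    {"id": 3, "customer_id": "C-7", "scenario": "wire-structuring"},  # duplicate scenario
    {"id": 4, "customer_id": "C-9", "scenario": "rapid-movement"},
]

def build_case_packages(alerts):
    """Group alerts by customer, keeping one alert per (customer, scenario),
    so related alerts land in a single consolidated case package."""
    packages = defaultdict(dict)
    for a in alerts:
        packages[a["customer_id"]].setdefault(a["scenario"], a)
    return {cust: list(by_scenario.values()) for cust, by_scenario in packages.items()}

packages = build_case_packages(alerts)
print({c: [a["id"] for a in pkg] for c, pkg in packages.items()})
# {'C-7': [1, 2], 'C-9': [4]} - three C-7 alerts collapse into one two-alert package
```

The design choice is the point: "connecting the dots" here is a group-by and a de-duplication over data the bank already holds, not a new technology stack.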

The fourth “pain of money laundering” that Feedzai identifies is manual SAR reporting. But their description of this manual reporting pain point doesn’t really address the manual nature of the process nor offer a technology solution. They write:

Typically as little as 7% of all filed SARS are deemed by the regulator as worthy of further AML investigation, which means that 95% of the effort of these teams goes to waste. As SAR reporting is still a highly manually intensive task, the end result is that most of the AML resources allocated by FIs and the regulator are busy clearing their own “noise,” created in the first place because they are unable to substantiate true money laundering risk. Today’s compliance-focused systems use limited legacy technologies and reward quantity over quality, sending millions of dollars to waste.

First, regulators (at least US regulators) don't examine banks on whether their SARs are "worthy of further AML investigation". It may be that the 7 percent figure used by Feedzai refers to the largest banks' anecdotal statements that they get some sort of law enforcement response to roughly 7% of their SARs, with responses being a follow-up subpoena, a formal request for supporting documentation, or a national security letter. That doesn't mean that the other 93% of SARs "go to waste". I recently wrote that law enforcement (in the case of the FBI) can conservatively say that at least 20% of BSA filings have tactical or strategic value to law enforcement. We would all like to see that percentage go up, and that is a noble task for Feedzai, other fintechs, the financial sector, regulators, and law enforcement.

Second, I'm not sure what Feedzai means by writing that "most of the AML resources allocated by FIs and the regulator are busy clearing their own 'noise,' created in the first place because they are unable to substantiate true money laundering risk." Including regulators in this statement is confusing (to me): it suggests that regulators are allocating resources (their own resources, or by compelling banks to allocate bank resources?) because regulators cannot substantiate true money laundering risk.

The fifth “pain” is disconnected business units, and Feedzai impugns siloed AML and fraud units, and disconnected investigations and analytics teams. Both are, indeed, pain points for any program, but both are easily overcome without deploying any new technology. They are organizational problems overcome with organizational solutions.

The sixth and last "pain" is the "barrier to digital transformation." Feedzai describes this pain not as a barrier to digital transformation but as a consequence of it: digital transformation across the bank's businesses and operations "can harbor new waves of financial crime with criminals hiding behind large new sets of distributed and disconnected data." The solution? "The magnitude of the detection complexity calls for new technologies to take the helm as legacy systems simply don't scale up to the task."

With these pains, Feedzai concludes that banks must “flip the script with Feedzai anti-money laundering”. They announce “the dawn of machine learning for AML” with Feedzai’s machine learning and advanced automation, etc.

Unfortunately, it's not the dawn of machine learning for AML. It may be the dawn for some banks that have allowed their programs and technologies to stagnate and become obsolete. But for five to ten years there have been banks (Wells Fargo) and fintechs (Verafin) using machine learning, artificial intelligence, and visual (geographical, temporal, relational) analytics to "replace the manually tedious parts of existing AML processes with insights that are specific to money laundering", to "separate meaningful risk signals from noise, ensuring that manual investigation resources are applied using a validated risk-based approach", and to allow FIU analysts to "understand suspicious patterns and more precisely allocate their manual investigation resources". All of this runs on advanced financial crimes-specific case management applications that ingest, triage, de-duplicate, risk-score, package, decision, and route alerts and referrals; triage, risk-score, and make SAR decisions; automate and write narratives; manage and report to external and internal stakeholders; and feed all of this back into the system to learn, tune and adapt models, and revise customer risk ratings.

So Feedzai: if you believe you are the best, or want to be the best, AML systems provider in the industry, your marketing materials such as “A Guide for Financial Institutions – Augmenting Your AML with AI: See The Risk Signals in the Noise” should be the best. They’re not. Your subject matter enthusiasm is to be commended; your subject matter expertise needs work.

Tookitaki – intending to transform the way organizations do predictive modeling

According to its website …

Tookitaki is building an intelligent decision support system (DSS) to help businesses take smarter decisions. Built on an effective AI system, our DSS intends to transform the way organisations do predictive modeling. Most businesses globally use consultants, build ad hoc predictive models on sample data and take decisions. The current process offers neither efficiency nor scale – rather becomes obsolete in the world of big data.

Our DSS will empower businesses go beyond the barriers of existing statistical packages creating one-off solutions by offering production-ready, automated predictive modeling. Clients can call our REST API for live feedback and take actions accordingly.

Tookitaki’s CEO, Abhishek Chatterjee, published a blog on December 14, 2018 titled “Modern Tech to Reshape US AML Compliance with Regulators’ Recent Handshake.” Let’s take a look at that blog.

Mr. Chatterjee begins with his synopsis of the Joint Statement on Innovative Efforts to Combat Money Laundering and Terrorist Financing:

On December 3, The Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation (FDIC), the Financial Crimes Enforcement Network (FinCEN), the National Credit Union Administration, and the Office of the Comptroller of the Currency issued a joint statement encouraging banks to use modern-era technologies to bolster their Bank Secrecy Act/anti-money laundering (BSA/AML) compliance programs. The agencies ask banks "to consider, evaluate, and, where appropriate, responsibly implement innovative approaches to meet their Bank Secrecy Act/anti-money laundering (BSA/AML) compliance obligations, in order to further strengthen the financial system against illicit financial activity."

Actually, the agencies did not issue a statement encouraging banks to use modern-era technologies to bolster their BSA/AML programs. The agencies' statement encouraged banks to "consider, evaluate, and, where appropriate, responsibly implement innovative approaches to meet their BSA/AML compliance obligations." And, in the very next sentence following the quote above, the Joint Statement provides, "[t]he Agencies recognize that private sector innovation, including new ways of using existing tools or adopting new technologies, can help banks …".

Notably, the Agencies are not limiting innovative approaches to the adoption of new (“modern-era”) technologies (and by implication, replacement of not-so-modern-era technologies), but including new ways of using existing tools. This is critically important to those banks that are facing increasing pressure from fincrime fintechs to rip-and-replace existing AML systems with new, and often untested, technologies.

They are of the view that private sector innovation, involving new technologies such as artificial intelligence and machine learning, can help banks identify and report money laundering, terrorist financing and other illicit activities.

The Agencies provide two examples of innovative approaches: the use of innovative Financial Intelligence Units (FIUs) and "artificial intelligence and digital identity technologies". Notably, bank FIUs have been in existence since the late 1990s (I know, I deployed the first large bank FIU at FleetBoston Financial in 1999). The concept of a bank FIU is twenty years old, and almost every large financial institution now has an FIU that is continually implementing innovative approaches to fighting financial crimes. The success of an FIU is equal parts data, technology, tools, courage, imagination, compassion, empathy, cynicism, collaboration, hard work, patience, and luck.

Mr. Chatterjee next describes the “assurances” the agencies give:

In addition, the regulators assured that they will not penalize those firms who are found to have a deficiency in their existing compliance programs as they run pilots employing modern technologies. The statement reads: “While the Agencies may provide feedback, pilot programs in and of themselves should not subject banks to supervisory criticism even if the pilot programs ultimately prove unsuccessful. Likewise, pilot programs that expose gaps in a BSA/AML compliance program will not necessarily result in supervisory action with respect to that program.” They have added that “the implementation of innovative approaches in banks’ BSA/AML compliance programs will not result in additional regulatory expectations.”

This is a reasonably accurate description of the assurances – although I would not use the word "assurances" given the qualifiers attached to them. The first three "assurances", plus two more, are clear cut:

  1. “The Agencies recognize that private sector innovation, including new ways of using existing tools or adopting new technologies, can help banks identify and report money laundering, terrorist financing, and other illicit financial activity by enhancing the effectiveness and efficiency of banks’ BSA/AML compliance programs. To assist banks in this effort, the Agencies are committed to continued engagement with the private sector and other interested parties.”
  2. “The Agencies will not penalize or criticize banks that maintain effective BSA/AML compliance programs commensurate with their risk profiles but choose not to pursue innovative approaches.”
  3. “While banks are expected to maintain effective BSA/AML compliance programs, the Agencies will not advocate a particular method or technology for banks to comply with BSA/AML requirements.”
  4. Where banks test or implement "artificial intelligence-based transaction monitoring systems … [and] identify suspicious activity that would not otherwise have been identified under existing processes, the Agencies will assess the adequacy of banks’ existing suspicious activity monitoring processes independent of the results of the pilot program."
  5. "… the implementation of innovative approaches in banks’ BSA/AML compliance programs will not result in additional regulatory expectations."

Note the strong, unqualified language: “the Agencies are committed to continued engagement”, “the Agencies will not penalize or criticize”, “the Agencies will not advocate …”, “the Agencies will assess”, and “the implementation of innovative approaches will not result in additional regulatory expectations”.

The qualified “assurances” come in the paragraph about pilot programs (with emphasis added):

Pilot programs undertaken by banks, in conjunction with existing BSA/AML processes, are an important means of testing and validating the effectiveness of innovative approaches.  While the Agencies may provide feedback, pilot programs in and of themselves should not subject banks to supervisory criticism even if the pilot programs ultimately prove unsuccessful.  Likewise, pilot programs that expose gaps in a BSA/AML compliance program will not necessarily result in supervisory action with respect to that program.  For example, when banks test or implement artificial intelligence-based transaction monitoring systems and identify suspicious activity that would not otherwise have been identified under existing processes, the Agencies will not automatically assume that the banks’ existing processes are deficient.  In these instances, the Agencies will assess the adequacy of banks’ existing suspicious activity monitoring processes independent of the results of the pilot program.  Further, the implementation of innovative approaches in banks’ BSA/AML compliance programs will not result in additional regulatory expectations.

Here are the qualified assurances (which are not assurances): “should not”, “will not necessarily”, and “not automatically assume”.  These are important distinctions. The Agencies could have written something very different:

“… pilot programs in and of themselves will not subject banks to supervisory criticism even if the pilot programs ultimately prove unsuccessful.  Likewise, pilot programs that expose gaps in a BSA/AML compliance program will not result in supervisory action with respect to that program.  For example, when banks test or implement artificial intelligence-based transaction monitoring systems and identify suspicious activity that would not otherwise have been identified under existing processes, the Agencies will not assume that the banks’ existing processes are deficient …”

But the author of the blog also uses an interesting qualifier in writing that the joint statement “largely clears the air for modern AML solutions, especially those based on artificial intelligence and machine learning”. I agree: the joint statement largely, but not entirely, clears the air and provides some comfort to banks that implement innovative approaches, including machine learning and AI. But as the Agencies remind us, any innovative approach must be pursued responsibly while the bank continues to meet its BSA/AML program obligations; if any gaps in that program are identified along the way, those gaps will not necessarily result in supervisory action, but the Agencies will assess them to determine whether the program is, in fact, meeting regulatory requirements.

Finally, I disagree with Mr. Chatterjee’s statement that we are in an “era of sophisticated financial crimes that are impossible to detect with legacy systems.” I trust that this is simply a marketing phrase, and the use of the absolute word “impossible” is puffery and salesmanship. The statement is false.

Like Mr. Chatterjee and his firm, I also am “both happy and excited at the US regulators’ change of tone with regard to the use of modern technologies by banks and financial institutions to combat financial crimes such as money laundering.” But we need to be as realistic and practical as we are happy and excited, and wary of embracing new technologies without fully utilizing the existing ones. Modern era technologies will be no better than the existing technologies if they are deployed against incomplete, outdated, stale, poorly labeled data by people lacking courage, imagination, and financial crimes expertise.

The U.S. banking agencies’ December 3rd joint statement is a very positive step to encourage private sector innovation in fighting financial crime by testing new ways of using existing tools as well as adopting new technologies. For those banks that are considering replacing their existing tools with “modern era technologies”, I would caution them to first look at how they are using their existing tools, whether they have the data and in-house expertise to even deploy modern era technologies, and consider whether they are better off improving and augmenting their existing tools.  A bank’s data and personnel are the “rails” upon which the AML technology rides: if those rails can’t support the high-speed train of machine learning and AI-based systems, then it’s best to fix and replace the rails before you test and buy the new train.

Flipping the Three AML Ratios with Machine Learning and Artificial Intelligence (why Bartenders and AML Analysts will survive the AI Apocalypse)

Machine Learning and Artificial Intelligence proponents are convinced – and spend a lot of time trying to convince others – that they will disrupt and revolutionize the current “broken” AML regime. Among the targets within this broken regime are AML alert generation and disposition, and the false positive rate (more on false positives in another article!). The result, if we believe the ML/AI community, is a massive reduction in the number of AML analysts churning through the hundreds and thousands of alerts, looking for the very few that are “true positives” worthy of being labelled “suspicious” and reported to the government.

But is it that simple? Can the job of AML Analyst be eliminated or dramatically changed – in scope and number of positions – by machine learning and AI? Much has been and continues to be written about the impact of artificial intelligence on jobs.  Those writers have categorized jobs along two axes – a Repetitive-to-Creative axis, and an Asocial-to-Social axis – resulting in four “buckets” of jobs, with each bucket of jobs being more or less likely to be disrupted or even eliminated.

A good example is the “Social & Repetitive” job of Bartender: Bartenders spend much of their time doing very routine, repetitive tasks: after taking a drink order, they assemble the correct ingredients in the correct amounts, put those ingredients in the correct glass, then present the drink to the customer. All of that could be more efficiently and effectively done with an AI-driven machine, with no spillage, no waste, and perfectly poured drinks. So why haven’t we replaced bartenders? Because a good bartender has empathy, compassion, and instinct, and with experience can make sound judgments on what to pour a little differently, when to cut off a customer, and when to take more or less time with a customer. A good bartender adds value that a machine simply can’t.

Another example could be the “Asocial & Creative” (or is it “Social & Repetitive”?) job of an AML Analyst: much of an AML Analyst’s time is spent doing very routine, repetitive tasks: reviewing the alert, assembling the data and information needed to determine whether the activity is suspicious, writing the narrative. So why haven’t we replaced AML Analysts? Because a good Analyst, like a good bartender, has empathy, compassion, and instinct, and with experience can make sound judgments on what to investigate a little differently, when to cut off an investigation, and when to take more or less time on an investigation. A good Analyst adds value that a machine simply can’t.

Where AI, Machine Learning, and Robotic Process Automation can really help is by flipping the three currently inefficient AML ratios:

  1. The False Positive Ratio – the currently accepted, but highly axiomatic and anecdotal, ratio is that 95% to 98% of alerts do not result in SARs, or are “false positives” … although no one has ever boldly stated what an effective or acceptable false positive rate is (even with ROC curves providing some empirical assistance), perhaps the ML/AI/RPA communities can flip this ratio so that 95% of alerts result in SARs. If they can do this, they can also convince the regulatory community that this new ratio meets regulatory expectations (because as I’ll explain in an upcoming article, the false positive ratio problem may be more of a regulatory problem than a technology problem).
  2. The Forgotten SAR Ratio – like false positive rates, there are anecdotes and some evidence that very few SARs provide tactical or strategic value to law enforcement. Recent Congressional testimony suggests that ~20% of SARs provide TSV (tactical or strategic value) to law enforcement … perhaps the ML/AI/RPA communities can help to flip this ratio so that 80% of SARs are TSV SARs. This also will take some effort from the regulatory and law enforcement communities.
  3. The Analysts’ Time Ratio – 90% of an AML Analyst’s time can be spent simply assembling the data, information, and documents needed to investigate a case, and only 10% of their time thinking and using their empathy, compassion, instinct, judgment, and experience to make good decisions and file TSV SARs … perhaps the ML/AI/RPA communities can help to flip this ratio so that Analysts spend 10% of their time assembling and 90% of their time thinking.
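
As a back-of-the-envelope sketch, the three ratios can be computed from hypothetical case volumes (all figures below are invented for illustration, not drawn from any real program):

```python
# Hypothetical AML workload figures -- illustrative only, not real data.
alerts = 10_000            # alerts generated by the monitoring system
sars_filed = 300           # alerts that became SARs (~3% "true positive" rate)
tsv_sars = 60              # SARs with tactical or strategic value to law enforcement
analyst_hours = 8.0        # analyst hours per case
gathering_share = 0.90     # share of time spent assembling data and documents

false_positive_ratio = 1 - sars_filed / alerts   # ratio 1
tsv_ratio = tsv_sars / sars_filed                # ratio 2
thinking_hours = analyst_hours * (1 - gathering_share)  # ratio 3, in hours

print(f"False positive ratio: {false_positive_ratio:.0%}")  # 97% of alerts go nowhere
print(f"TSV SAR ratio:        {tsv_ratio:.0%}")             # 20% of SARs are useful
print(f"Thinking time/case:   {thinking_hours:.1f} hours")  # 0.8 of 8.0 hours

# "Flipping" the ratios as proposed above would mean targets like:
flipped_fp = 0.05          # only 5% of alerts are false positives
flipped_tsv = 0.80         # 80% of SARs provide tactical/strategic value
flipped_gathering = 0.10   # 10% gathering, 90% thinking
```

Even with invented numbers, the arithmetic makes the scale of the flip concrete: going from a 97% false positive ratio to 5% is not a tuning exercise, it is a different regime.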

We’ve seen great strides in the AML world in the last 5-10 years when it comes to applying machine learning and creative analytics to the problems of AML monitoring, alerting, triaging, packaging, investigations, and reporting. My good friend and former colleague Graham Bailey at Wells Fargo designed and deployed ML and AI systems for AML as far back as 2008-2009, and the folks at Verafin have deployed cloud-based machine learning tools and techniques to over 1,600 banks and credit unions.

I’ve outlined three rather audacious goals for the machine learning/artificial intelligence/robotic process automation communities:

  1. The False Positive Ratio – flip it from 95% false positives to 5% false positives
  2. The Forgotten SAR Ratio – flip it from 20% TSV SARs to 80% TSV SARs
  3. The Analysts’ Time Ratio – flip it from 90% gathering data to 10% gathering data 

Although many new AML-related jobs are being added – data scientist, model validator, etc. – and many existing AML-related jobs are changing, I am convinced that the job of AML Analyst will always be required. Hopefully, the role will shift over time from being predominantly a gatherer of information to being more of a hunter of criminals and terrorists. But it will always exist. If not, I can always fall back on being a Bartender. Maybe …