Artificial Intelligence – Who Is On The Hook When Things Go Wrong With Your AI System? You Are!

“Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning”

For all the upstart fintechs out there trumpeting innovative Artificial Intelligence-based solutions that can solve a financial institution’s financial crimes problems: note that you may be held accountable when that AI system doesn’t quite turn out the way your marketing materials suggested. Legal responsibility for something you design, build, and deploy is not a new concept, but how that “something” – in this case, the AI system you developed and installed at a client bank – actually works, reacts, and adapts over time may well be unexplored ground. Many smart people are thinking about AI developers’ accountability and other AI-related issues, and some of them have produced principles to guide us as we develop and implement AI-based systems.

On May 22, 2019 the OECD adopted its Recommendation of the Council on Artificial Intelligence. At its core, the Recommendation calls for the adoption of five complementary “values-based principles for responsible stewardship of trustworthy artificial intelligence.” The full text of the Recommendation is available at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

What’s the big deal about artificial intelligence?

The OECD recognized a number of things about AI that are worth repeating here:

  • AI has pervasive, far-reaching and global implications that are transforming societies, economic sectors and the world of work, and are likely to increasingly do so in the future;
  • AI has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges;
  • At the same time, these transformations may have disparate effects within, and between societies and economies, notably regarding economic shifts, competition, transitions in the labour market, inequalities, and implications for democracy and human rights, privacy and data protection, and digital security;
  • Trust is a key enabler of digital transformation; although the nature of future AI applications and their implications may be hard to foresee, the trustworthiness of AI systems is a key factor for the diffusion and adoption of AI; and a well-informed, whole-of-society public debate is necessary for capturing the beneficial potential of the technology, while limiting the risks associated with it;
  • Given the rapid development and implementation of AI, there is a need for a stable policy environment that promotes a human-centric approach to trustworthy AI, that fosters research, preserves economic incentives to innovate, and that applies to all stakeholders according to their role and the context;
  • Certain existing national and international legal, regulatory and policy frameworks already have relevance to AI, including those related to human rights, consumer and personal data protection, intellectual property rights, responsible business conduct, and competition, while noting that the appropriateness of some frameworks may need to be assessed and new approaches developed; and
  • Embracing the opportunities offered, and addressing the challenges raised, by AI applications, and empowering stakeholders to engage is essential to fostering adoption of trustworthy AI in society, and to turning AI trustworthiness into a competitive parameter in the global marketplace.

What is “Artificial Intelligence”?

The recommendation includes some helpful definitions of the major terms:

Artificial Intelligence System: a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

Artificial Intelligence System Lifecycle: four phases, which often take place in an iterative manner and are not necessarily sequential:

(i) design, data and models – a context-dependent sequence encompassing planning and design, data collection and processing, as well as model building;

(ii) verification and validation;

(iii) deployment; and

(iv) operation and monitoring.
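
To make the lifecycle concrete, here is a minimal sketch in Python (the function names, the toy threshold “model” and the transaction data are all hypothetical) of how a vendor’s transaction-monitoring system might map onto those four phases. It illustrates the OECD’s phase structure only; it is not an implementation of any particular product.

```python
# Illustrative skeleton only: the function names, the toy threshold "model"
# and the transaction data are hypothetical, chosen to mirror the four OECD
# lifecycle phases, not to describe any real product.

def design_data_and_models(labelled_transactions):
    """Phase (i): planning and design, data collection and processing, model building."""
    # "Model building" here is just choosing a threshold from labelled examples.
    flagged_amounts = [t["amount"] for t in labelled_transactions if t["suspicious"]]
    threshold = min(flagged_amounts) if flagged_amounts else float("inf")
    return {"type": "amount_threshold", "threshold": threshold}


def verify_and_validate(model, holdout_transactions):
    """Phase (ii): verification and validation of the built model."""
    correct = sum(
        (t["amount"] >= model["threshold"]) == t["suspicious"]
        for t in holdout_transactions
    )
    return correct / len(holdout_transactions)


def deploy(model):
    """Phase (iii): deployment, here simply packaging the model as a scoring function."""
    return lambda transaction: transaction["amount"] >= model["threshold"]


def operate_and_monitor(score, live_transactions):
    """Phase (iv): operation and monitoring of the system's outputs over time."""
    alerts = [t for t in live_transactions if score(t)]
    print(f"{len(alerts)} alert(s) out of {len(live_transactions)} live transaction(s)")
    return alerts


if __name__ == "__main__":
    history = [
        {"amount": 50, "suspicious": False},
        {"amount": 9_500, "suspicious": True},
    ]
    model = design_data_and_models(history)         # phase (i)
    accuracy = verify_and_validate(model, history)  # phase (ii); reusing history only to keep the sketch short
    print(f"validation accuracy: {accuracy:.0%}")
    scorer = deploy(model)                          # phase (iii)
    operate_and_monitor(scorer, [{"amount": 12_000}])  # phase (iv)
```

Note that phase (iv) is where a deployed system works, reacts, and adapts over time at the client bank, which is exactly where the accountability question raised above tends to surface.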

Artificial Intelligence Actors: AI actors are those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI.

Is an OECD Recommendation binding on a country that has adopted it?

OECD Recommendations are not legally binding, but they are highly influential and have often formed the basis of international standards and helped governments design national legislation. For example, the OECD Privacy Guidelines, adopted in 1980, which state that there should be limits to the collection of personal data, underlie many privacy laws and frameworks in the United States, Europe and Asia.

So the AI Principles are not binding, but the OECD provided five recommendations to governments:

  1. Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
  2. Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
  3. Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
  4. Empower people with the skills for AI and support workers for a fair transition.
  5. Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.

Who developed the OECD AI Principles?

The OECD set up a 70+ member expert group on AI to scope a set of principles. The group consisted of representatives of 20 governments, as well as leaders from the business community (Google, Facebook, Microsoft and Apple, but no financial institutions), labor, civil society, academia and the science community. The experts’ proposals were taken up by the OECD and developed into the OECD AI Principles.

What is the Purpose of the OECD Principles on AI?

The OECD Principles on Artificial Intelligence promote artificial intelligence (AI) that is innovative and trustworthy and that respects human rights and democratic values. The OECD AI Principles set standards for AI that are practical and flexible enough to stand the test of time in a rapidly evolving field. They complement existing OECD standards in areas such as privacy, digital security risk management and responsible business conduct.

What are the OECD AI Principles?

The Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI:

1. Inclusive growth, sustainable development and well-being: AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being. AI systems should also be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.

The actual text reads: “Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.”

2. Human-centred values and fairness: AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights. To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of the art.
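
As a rough illustration of what a “capacity for human determination” could look like in practice, the sketch below (Python; the confidence cut-off, the decision fields and the review queue are all hypothetical) routes any automated decision that is adverse to a customer, or made with low confidence, to a human reviewer rather than applying it automatically. It is one context-appropriate safeguard among many, not a prescription from the Recommendation.

```python
# Hypothetical human-in-the-loop safeguard: the threshold, the decision
# structure and the review queue are illustrative, not from the Recommendation.
from dataclasses import dataclass, field
from typing import List

CONFIDENCE_CUTOFF = 0.90  # assumed policy value, set by the deploying institution


@dataclass
class Decision:
    customer_id: str
    action: str          # e.g. "close_account", "file_report", "no_action"
    confidence: float    # model's confidence in the recommended action


@dataclass
class ReviewQueue:
    pending: List[Decision] = field(default_factory=list)

    def refer(self, decision: Decision) -> None:
        # A human analyst makes the final determination for referred items.
        self.pending.append(decision)


def apply_with_human_determination(decision: Decision, queue: ReviewQueue) -> str:
    adverse = decision.action != "no_action"
    if adverse or decision.confidence < CONFIDENCE_CUTOFF:
        queue.refer(decision)
        return "referred_to_human"
    return "auto_applied"


queue = ReviewQueue()
print(apply_with_human_determination(Decision("c-001", "close_account", 0.97), queue))  # referred_to_human
print(apply_with_human_determination(Decision("c-002", "no_action", 0.95), queue))      # auto_applied
```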

3. Transparency and explainability: AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context and consistent with the state of the art: to foster a general understanding of AI systems; to make stakeholders aware of their interactions with AI systems, including in the workplace; to enable those affected by an AI system to understand the outcome; and to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.
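
What “plain and easy-to-understand information on the factors, and the logic” might amount to for a simple scoring system is sketched below (Python; the factors, weights and threshold are invented for this example). The idea is that the decision is returned together with each factor’s contribution and the rule that was applied, so an affected party has something concrete to understand and to challenge.

```python
# Illustrative only: a hand-rolled linear scoring model with invented factors
# and weights, used to show factor-level explanations, not a real AML model.

WEIGHTS = {               # assumed, human-readable factors
    "amount_over_10k": 2.0,
    "high_risk_country": 1.5,
    "new_customer": 0.5,
}
ALERT_THRESHOLD = 2.5     # assumed policy threshold


def score_with_explanation(transaction: dict) -> dict:
    # Contribution of each factor = weight * factor value (0 or 1 here), so
    # the "logic" is simply: alert if the contributions sum past the threshold.
    contributions = {
        name: weight * transaction.get(name, 0) for name, weight in WEIGHTS.items()
    }
    total = sum(contributions.values())
    return {
        "alert": total >= ALERT_THRESHOLD,
        "total_score": total,
        "factors": contributions,        # what drove the outcome
        "threshold": ALERT_THRESHOLD,    # the logic applied to the factors
    }


print(score_with_explanation({"amount_over_10k": 1, "high_risk_country": 1}))
# {'alert': True, 'total_score': 3.5, 'factors': {...}, 'threshold': 2.5}
```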

4. Robustness, security and safety: AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk. To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of the art. AI actors should also, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.
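
One way to read the traceability requirement (keeping enough of a record about datasets, processes and decisions to reconstruct how an outcome was produced) is sketched below in Python. The field names, version labels and hashing scheme are assumptions made for illustration; the point is simply that every prediction, recommendation or decision is logged alongside the data and model versions that produced it.

```python
# Illustrative audit-trail sketch: field names, versions and the JSON log
# format are assumptions, not requirements of the OECD Recommendation.
import hashlib
import json
from datetime import datetime, timezone


def fingerprint(obj) -> str:
    """Stable hash of the inputs, so a logged decision can be tied to exact data."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()[:16]


def log_decision(log: list, *, dataset_version: str, model_version: str,
                 inputs: dict, outcome: str) -> None:
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_version": dataset_version,   # which training data was in force
        "model_version": model_version,       # which model produced the outcome
        "input_fingerprint": fingerprint(inputs),
        "outcome": outcome,                   # the prediction/recommendation/decision
    })


audit_log: list = []
log_decision(audit_log,
             dataset_version="txn-data-2019-05",
             model_version="aml-model-1.3",
             inputs={"amount": 12_000, "country": "XY"},
             outcome="alert")
print(json.dumps(audit_log, indent=2))
```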

5. Accountability: AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of the art. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

What countries belong to the OECD?

Australia, Austria, Belgium, Canada, Chile, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Latvia, Lithuania, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom, United States