Brave new world, Part 1: Modern regulatory effects on AI and automation

April 30, 2019
by Paige Bartley


Introduction


From a pessimist's point of view, the global trend toward data privacy and data protection regulation has thrown a wrench into progress and innovation in machine learning (ML) and AI technologies. Restrictions on data collection, access, and processing limit what can be done with data, for what purposes, and by whom. ML and AI thrive on expansive training sets, and any restrictions on usage are (often incorrectly) perceived as a threat by organizations that wish to use this data to develop self-training algorithms geared toward automated use cases.

Uncertainty remains about the regulatory future of ML and AI, but the regulations themselves are in no way meant to suppress progress in these technological fields; they are meant to provide guardrails. Society is only in the beginning stages of identifying potential problems with next-generation automation enabled by ML and AI. In the absence of agreed-upon universal frameworks for ethics, evolving regulations such as the EU's General Data Protection Regulation (GDPR) and similar regional policies provide meaningful guidance on the use of automation, particularly when it affects the public. In Part 1 of this series, we look at the current regulatory landscape and how it potentially affects automation, ML and AI. Part 2 examines potential enterprise strategies for defensible ML and AI use.

The 451 Take

Under current regulatory frameworks such as the EU's GDPR, rules around 'automated decision-making' typically serve as a proxy for AI and ML technology. Regulations themselves must not be technically prescriptive if they are to remain adaptive over time, which leads to some ambiguity in interpretation for technologies such as ML and AI. Many instances of automated technology will soon be litigated in the court system, and it will often be these rulings that establish general principles for fair and ethical use in the absence of agreed-upon national or international frameworks. Given what we know now, transparency and 'explainability' of automation are emerging as key principles in existing regulation.

No organization likes landing in the courtroom, and implementing automated technologies amid proliferating – yet often ambiguous – regulations means assuming this risk, creating the need for cost/benefit analysis. But litigation is not necessarily a bad thing, since it can address new technologies as they evolve, faster than any single regulation could be written or revised. For organizations implementing ML and AI, staying abreast of relevant case law will be key.

GDPR and the guiding principles


The EU's GDPR was flagship legislation in that it successfully set a data-handling standard not only for Europe, but for much of the developed world. Because of economic pressure and extraterritorial reach, GDPR was able to persuade other countries and regions to adopt similar standards so that they could obtain EU 'adequacy decisions' allowing them to exchange data freely with the EU, ensuring business continuity in the digital era. Therefore, many of the regional laws that fell into place after the implementation of GDPR closely mirror its principles, such as Brazil's General Data Protection Law (LGPD). The US is a notable outlier in this regard, and has yet to draft federal policy for data privacy and protection, although state-level legislation such as the California Consumer Privacy Act (CCPA) takes action to protect consumer data interests. However, CCPA does not attempt to regulate automated decision-making, perhaps because of the state's somewhat protectionist stance toward its thriving technology and startup industry.

Because so many data protection and data privacy regulations mirror GDPR's basic tenets, its rules around automated decision-making are particularly notable. Nowhere in the text of the regulation are the terms 'artificial intelligence' or 'machine learning' ever mentioned; to do so would be self-constraining and limit the shelf life of the regulation. For the regulation to be flexible over time and accommodate new technological developments, it cannot be prescriptive, particularly with ill-defined terms such as AI. Instead, it focuses on the broader role of automation and its impact on the lives of consumers and living individuals.

GDPR and automated decision-making: The definitive rules


While GDPR does not use the terms 'artificial intelligence' or 'machine learning,' it does have articles that pertain to rules for automated decision-making, which effectively act as a proxy for governing the use of these technologies in many situations. Since decision support is a leading enterprise use case for ML and AI, understanding the rules and nuances of these requirements is a must for any organization that is crafting strategy around implementation of these algorithms and processes at scale.

Two articles of the regulation carry significant heft with regard to regulation of automated decision-making: Article 15, which gives data subjects a right of access, including the right to meaningful information about the logic involved in any automated decision-making applied to them; and Article 22, which gives them the right not to be subject to a decision based solely on automated processing, including profiling, that produces legal or similarly significant effects.

Collectively, these rules set a gold standard for the protection of consumers and living individuals, so that they may not be subject to automated decisions, such as the rejection of a loan based on automated credit analysis, without the safety valve of human explanation. While Articles 15 and 22 are specific to GDPR, the core ethos of transparency is a common shared theme across many of the proliferating global data privacy and protection regulations.

It is important to note that the regulation governs automated decision-making when it has legal or otherwise significant effects on data subjects. Automated decision-making that has minimal material impact on these individuals is not restricted, creating an opportunity for enterprise AI and ML use cases, which we will discuss in Part 2.
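To make that distinction concrete, the sketch below shows one way a decision pipeline might gate outcomes by their estimated impact on the data subject. It is a minimal illustration, not a prescribed pattern: the names (such as route_decision and enqueue_for_human_review) and the two-tier impact classification are assumptions for the example, and a real implementation would require careful legal assessment of what counts as a 'significant' effect.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Effect(Enum):
    """Rough impact tiers for an automated decision on a data subject."""
    MINIMAL = auto()       # e.g., ordering articles in a newsletter
    SIGNIFICANT = auto()   # e.g., credit denial, employment screening


@dataclass
class Decision:
    subject_id: str
    outcome: str
    effect: Effect
    model_version: str


def route_decision(decision: Decision) -> str:
    """Apply a decision automatically only when its impact is minimal;
    otherwise queue it for human review before it takes effect."""
    if decision.effect is Effect.SIGNIFICANT:
        return enqueue_for_human_review(decision)
    return apply_automatically(decision)


def enqueue_for_human_review(decision: Decision) -> str:
    # Placeholder: hand off to a case-management queue staffed by reviewers.
    print(f"Queued {decision.subject_id} ({decision.outcome}) for review")
    return "pending_review"


def apply_automatically(decision: Decision) -> str:
    print(f"Applied {decision.outcome} for {decision.subject_id}")
    return "applied"


if __name__ == "__main__":
    # A loan rejection has legal/significant effect, so it is never
    # applied without a human in the loop.
    route_decision(Decision("subject-42", "loan_rejected",
                            Effect.SIGNIFICANT, "credit-model-v3"))
```

The design point is simply that human review sits in the critical path for significant decisions, mirroring the human 'safety valve' the regulation contemplates, while minimal-impact decisions flow through unimpeded.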

Courts will play a major role in interpretation


Because existing regulations are not prescriptive about AI and ML, the legal world of these technologies exists in a bewildering spectrum of greys. Automated decision-making is one thing to describe and define, but the technologies underlying it, such as neural networks, are another thing entirely. Inevitably, this means the courts will step in to interpret when the ethics or situational aspects of a particular use of technology butt up against the principles of a regulation without technically violating its written rules. The EU, in particular, is where we can expect to see many of these early landmark court cases unfold, defining acceptable business use cases for AI and ML under regulatory frameworks. While the EU does not traditionally have a culture of class-action lawsuits as the US does, GDPR specifically gives data subjects rights to legal action, and many organizations have already taken root in the EU to assist individuals in exercising their rights.

Litigation, in the realm of AI and ML, should not be viewed as a net threat to business. No organization enjoys going to court, but these cases will be critical in determining working standards for the use of technology given the absence of agreed-upon international frameworks for ethics and fair use of automated technologies. Especially as new, previously unforeseen technologies develop, the courts will be essential in determining how to govern and implement the technology in ways that balance the interests of business and the public.

For any organization that is focused on building out an enterprise-scale ML or AI initiative, particularly with public-facing use cases, this means that diligent monitoring of ongoing case law is necessary. It also warrants defensive practices, such as thorough documentation of the decisions and rationale that went into the implementation of an ML or AI initiative. Strong governance of the entire ML and AI model development process, via a DataOps and MLOps approach, is also a protective mechanism should the organization end up facing regulatory authorities or the court system.
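As an illustration of that kind of documentation (a minimal sketch; the record fields and the model_audit_log.jsonl file are assumptions for the example, not a prescribed format), an MLOps pipeline might append an audit record every time a model version is approved for deployment:

```python
import json
import time


def record_model_decision_rationale(model_id: str, version: str,
                                    training_data_ref: str,
                                    rationale: str,
                                    approvers: list[str]) -> dict:
    """Append an audit record capturing who approved a model version,
    what data it was trained on, and why it was deployed."""
    record = {
        "model_id": model_id,
        "version": version,
        "training_data_ref": training_data_ref,
        "rationale": rationale,
        "approvers": approvers,
        "recorded_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    # Append-only log: one JSON record per line.
    with open("model_audit_log.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record


if __name__ == "__main__":
    record_model_decision_rationale(
        model_id="credit-model",
        version="v3",
        training_data_ref="warehouse://loans/2018Q4-snapshot",
        rationale="Replaces v2; reviewed for disparate impact on age bands.",
        approvers=["risk-officer", "dpo"],
    )
```

A record like this, kept alongside the model artifacts and data lineage, gives the organization something concrete to produce if a regulator or court later asks why a model was deployed and who signed off on it.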

What to do now?


Amid this legal and regulatory climate, organizations face tough decisions when it comes to balancing the risks and potential rewards of automated technologies. Automation, and the associated ML and AI technologies, will be necessary for keeping pace with competitors as organizations become increasingly dependent on data for all strategic business decisions. But organizations that push boundaries with implementation face potential litigation and regulatory wrath. In Part 2 of this series, we will explore a defensible approach to implementing automated technologies, designed to maximize value in low-risk use cases while cautiously crafting strategy for higher-risk ones.