Artificial Intelligence Regulation Under the California Privacy Rights Act

On November 3, 2020, while the rest of the country voted for president, California residents voted to adopt the most sweeping privacy law in the nation, the California Privacy Rights Act (“CPRA”). The new law is principally concerned with privacy rights and protections, expanding those created by the California Consumer Privacy Act (“CCPA”), the 2018 law that became effective this year. However, the CPRA also creates a new agency, the California Privacy Protection Agency (“CPPA”), and sometime in the middle of 2021, the California Attorney General’s office will assign its regulatory authority under the CCPA and CPRA to that agency. The CPRA instructs the CPPA to issue “regulations governing access and opt-out rights with respect to businesses’ use of automated decision-making technology, including profiling and requiring businesses’ response to access requests to include meaningful information about the logic involved in such decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer” (emphasis added).

California borrowed much of this language from the European Union’s General Data Protection Regulation (the “GDPR”), the groundbreaking privacy law that inspired the CCPA and CPRA. Neither the GDPR nor the CPRA defines “automated decision-making,” but it is generally understood to mean “making a decision solely by automated means without any human involvement.” In other words, the CPRA instructs the CPPA to regulate certain forms of artificial intelligence (“AI”). However, this is not a blanket instruction, and it is useful to parse the language the CPRA uses.

First, the type of AI is limited to those forms, applications and uses that make decisions without any human involvement. If the AI application provides analysis for a human decision-maker to consider, like the AI analytical tools adopted by some state criminal justice systems, that’s beyond the terms of the CPRA. Having said that, the CPRA is broad enough that the CPPA could adopt regulations that consider whether the human decision-maker is merely a rubber stamp for the AI’s recommendations and the AI is functioning without human involvement for practical purposes.

Second, the CPRA states that the CPPA should promulgate “regulations governing access and opt-out rights with respect to businesses’ use of” AI that makes decisions without any human involvement. This language means that the agency’s regulations will (a) only apply to businesses’ use of that AI, not personal, governmental, or non-profit use, (b) grant California consumers and households the right to obtain information about how that AI impacts them, and (c) grant California consumers and households the right to require businesses to have a human being involved in any decisions about them, effectively telling businesses to stop using that AI to make decisions about them.

Third, the regulations governing businesses’ responses to requests for information about how the relevant AI impacts consumers or households must require “meaningful information about the logic involved in such decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer.” This requirement has been characterized in other countries as the “Right to an Explanation.” Essentially, the CPRA instructs the CPPA to prepare regulations that require businesses relying on AI to make decisions without human involvement to explain how those decisions are made, what information the AI relies on to make those decisions, and how those decisions impact consumers and households. That poses a significant change for many companies that treat their AI applications as a black box or trade secret.

Finally, the CPRA specifically notes that profiling is within the AI that the regulations will address. Profiling is defined as “any form of automated processing of personal information … to evaluate certain personal aspects relating to a natural person, and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” Because profiling is included in the CPRA’s instructions to regulate AI, the CPPA potentially has a green light to regulate many businesses that monitor online behavior and mobile application usage to construct advertising profiles or to push media to users (including political stories and current events), assuming their AI applications are making decisions without human intervention.

Per the CPRA, the CPPA will adopt final regulations governing automated decision-making by July 1, 2022, although enforcement will not commence until July 1, 2023. Two and a half years might sound like a long time, but given the way many businesses operate their AI, most will need it. Although the CPRA applies only to businesses that meet certain thresholds of revenue and of California consumers’ and households’ personal information, its effects could be broader than that. In the same way that privacy laws like the GDPR and CCPA have influenced market expectations for privacy rights beyond their jurisdictions, the CPRA’s AI-related rights represent the future of AI practices and will likely prove influential beyond California.

Consumers nationwide may soon expect detailed explanations from any company that uses AI applications to analyze personal data. Explanations may soon lead to demands for regular assessments, as individuals want assurances that the applications are not demonstrating inappropriate bias or discrimination. Companies that engage in profiling or that have incorporated AI analysis into their business operations should start thinking about how they are going to honor the right to an explanation and comply with consumer AI expectations. Just like with privacy, organizations that act early will have an advantage over organizations that delay these considerations.

Cameron Shilling

Cameron is the chair of the Cybersecurity and Privacy group at McLane Middleton. In his more than 20 years as a lawyer, Cameron has managed, litigated, and resolved numerous commercial matters involving data security, technology, business, and employment issues in New Hampshire, Massachusetts, New England, and around the country. Data privacy is a focus of Cameron’s practice, including creating and implementing privacy policies, terms of use agreements, and information use and social media policies; advising clients about workplace privacy, social media, and consumer privacy; and handling data privacy claims asserted against companies.