Introduction
Thank you for the opportunity to offer public comment on this important rulemaking. I am the Head of AI Policy at the Abundance Institute, a mission-driven nonprofit dedicated to creating the policy and cultural environment where emerging technologies can develop and thrive in order to perpetually expand widespread human prosperity.
I write to express serious concern about the California Privacy Protection Agency’s (CPPA) proposed regulations governing Automated Decision-Making Technology (ADMT) under the California Consumer Privacy Act (CCPA). These sweeping new rules – which impose broad opt-out and transparency requirements on virtually any algorithmic decision process – are overly burdensome and costly, with minimal demonstrated consumer benefit. They risk exceeding the CPPA’s legal authority, infringing on First Amendment rights, and transforming the CCPA from a privacy law into a de facto AI regulation regime. Moreover, the aggressive implementation timeline magnifies the compliance challenges.
Artificial intelligence and automated systems hold immense promise for innovation and prosperity. Ill-considered regulation at this stage could needlessly stifle that promise. I respectfully urge the CPPA to reconsider and substantially revise the ADMT proposal to avoid harming California’s economy and technological leadership while yielding negligible privacy gains.
Excessive Compliance Costs and Flawed Economic Assumptions
The CPPA’s proposed changes to its regulations do not pass a proper cost-benefit analysis in their current form. The agency’s own analysis of the costs and benefits is flawed and unpersuasive even upon casual review. It is telling that, even while ignoring many of the most important cost effects of the proposed rules, the Agency’s own predictions of costs in dollars and jobs are significant. Still, that estimate is far too low. And the predicted benefits are speculative, unmeasurable, and in some cases, provably mistaken.
The Agency Underestimated Regulatory Costs
The compliance costs of the proposed ADMT regulations are extraordinarily high, far outweighing any tangible benefits to consumers. The CPPA’s own Standardized Regulatory Impact Assessment (SRIA) predicts that the proposed regulatory changes will impose direct costs on California businesses of approximately $3.5 billion in the first year, with average annual costs of around $1.0 billion over the first decade. These costs include the need to implement new data systems, perform mandated risk assessments and audits, respond to ADMT opt-out and access requests, and re-engineer algorithms to accommodate opt-outs.
The SRIA further acknowledges significant macroeconomic harm: job losses peaking at roughly 126,000 positions by 2030 and annual state tax revenue losses of about $6.17 billion by 2030 as a result of the regulations. This drag on employment and tax base would hurt not only businesses but workers and government services statewide.
And still, even these massive projected compliance cost estimates are unrealistically low. According to a Capitol Matrix Consulting review for the California Chamber of Commerce, the SRIA understated compliance expenses by underestimating labor costs and ignoring out-of-state firms that must comply.
Furthermore, the SRIA fails to even consider other significant costs, such as how the regulation will affect consumers and non-covered businesses. This omission is particularly significant for the provisions that would newly regulate “extensive profiling” for behavioral advertising because such advertising is a major proportion of commercial advertising and is viewed by nearly every consumer. A recent survey showed that a whopping 97.3% of businesses use at least some digital advertising. A 2024 economic analysis conservatively estimated that more than 427,000 California businesses purchase digital ads from three prominent platforms. An estimated 69% of U.S. small and medium-sized businesses use digital ads (which are typically targeted) to find new customers, and these ads comprise the bulk of their advertising spending.
Any cost-benefit analysis that ignores obvious impacts on this scale of economic activity is at best incomplete. As a Federal Trade Commission economist has explained,
Personal data collection and targeted advertising can be beneficial or detrimental to consumers depending on many factors. Targeted ads reduce search costs and improve match quality, which in turn may increase price competition; this increases the total value consumers derive from acquiring the products they match with. Targeting could mean fewer ads overall; consumers benefit directly from not having to view ads, but also indirectly from cost-savings passed on by firms. On the other hand, price discrimination by a single firm or market segmentation by previously-competing firms may lead to higher prices for some or all consumers involved, though the overall welfare effect depends on the shape of demand functions. There are also consumer privacy concerns that need to be addressed. Policy decisions in this arena must account for all these various aspects of economic analysis.
The SRIA fails to acknowledge any of these benefits and how their diminishment by regulation imposes costs. This matters because the proposed rules impose new disclosure, opt-out, and access requirements on the use of first-party data that will constrict the use of behavioral advertising. These changes “will reduce income of online publishers and raise costs for businesses to advertise to new consumers.”
For example, the SRIA notes that “these proposed regulations can lead to increased opt-outs from the use of ADMT for profiling for behavioral advertising.” Perhaps consumers making such opt-out choices are fully informed about the tradeoffs they accept in search costs, match quality, and price competition. But such choices also impose other costs. One study showed that ads for users who opt out of behavioral targeting “fetch 52% less revenue on the exchange than comparable ads for users who allow behavioral targeting,” which directly reduces the revenues of publishers that rely on advertising revenue models. Behavioral ads also “increase[] seller and platform revenue” and improve consumer satisfaction, “with 10% lower post-purchase product returns and 2.3% higher repeat purchase probability.” Thus, whatever benefits opt-out mechanisms may provide, there are undeniable costs that must be assessed.
Questionable Benefits
At the same time, the SRIA’s projected benefits for the proposed rules rely almost entirely on the cybersecurity provisions, and even those projected benefits rest on faulty assumptions, including a mathematical error that grossly inflated baseline cybercrime losses and optimistic claims about risk reduction unsupported by the empirical literature. As the California Chamber of Commerce explains, when corrected, the SRIA’s purported benefits evaporate, revealing a regulation that is all cost and little measurable benefit.
The SRIA also failed to evaluate impacts on innovation incentives, despite a statutory requirement to do so. This is a glaring omission given the rule’s likely chilling effect on AI adoption. A Goldman Sachs study estimates AI could boost global GDP by 7% over ten years, which implies a nearly $400 billion increase in California’s GDP by 2036. At that scale, policies that stifle even a small fraction of ADMT usage could cost “tens of billions per year” in lost economic output.
In sum, the economic analysis is fundamentally flawed – overstating nebulous benefits while downplaying the very real and substantial compliance and innovation burdens these rules would impose on California’s businesses, workers, and economy.
Overly Broad Definition of “Automated Decision-Making Technology”
The proposed regulations define “Automated Decision-Making Technology” so broadly that it sweeps in nearly any software or process that uses personal information to aid or replace human decision-making. Under Section 7001(f) of the draft rules, ADMT is “any technology that processes personal information and uses computation to execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking.” This expansive definition explicitly includes any technology used to perform “profiling” and covers tools that provide a score or recommendation used as a primary factor in a human’s decision. In effect, any automated system or algorithmic process involving personal data could be deemed ADMT, from advanced AI models down to basic data sorting, if it influences an outcome. Even commonplace business software like analytics programs or productivity tools might fall under this umbrella whenever they inform decisions.
Industry stakeholders have justifiably objected that this definition is overly broad and ill-defined. As one commenter noted at the CPPA’s hearing, the draft rule would regulate “essentially all computational technology,” a scope so broad it “would be disastrous for California’s AI development.” Rather than targeting genuinely high-risk automated decisions (e.g., those involving important health, finance, or employment decisions), the rule casts an indiscriminate net over countless benign or beneficial applications of data. This not only creates massive compliance uncertainty – businesses cannot easily discern which of their decision-making processes are in scope – but also threatens to chill innovation in AI and data-driven services across the board. Developers may forgo deploying useful automated features (for personalization, fraud detection, efficiency, etc.) for fear of triggering onerous ADMT compliance.
By treating trivial algorithms the same as impactful AI, the proposal departs from a risk-based approach and instead approaches AI with a precautionary, one-size-fits-all restriction. Such overreach goes beyond what voters likely envisioned when empowering the CPPA to regulate automated decision-making technology. Indeed, the CPPA’s own references to the EU AI Act and federal AI initiatives in justifying these rules reveal an intent to regulate AI generally, far afield from the CCPA’s core focus on personal privacy. This transformation of a privacy law into a sweeping AI law via regulation is unwarranted and unwise.
Impacts on Behavioral Advertising and Small Businesses
One of the most problematic extensions of the ADMT proposal is its attempt to regulate behavioral advertising practices, especially so-called first-party targeted advertising. The draft rules would create new consumer rights to opt out of a business’s use of ADMT for “profiling” and personalized ads, even when the advertising uses data the business collected from its own website and customers. This goes beyond the CCPA’s statutory provisions, which grant consumers the right to opt out of the sale or sharing of data (targeting primarily third-party ad tracking) – but do not grant a right to prevent a business from using data it collected to market to its own customers. In other words, the CPPA is venturing beyond the law’s text by trying to curtail a business’s ability to show tailored content or offers to users on its own platform.
Such a restriction would have significant negative impacts on the digital economy, especially for publishers and small businesses that rely on personalized advertising. For media and online services, first-party targeted ads are a critical revenue source that funds free content and services. For small and medium-sized businesses (SMBs), targeted ads are an essential tool to efficiently reach new customers and compete with larger firms. A recent study found 69% of U.S. small businesses use digital ads to find new customers, and 82% credit online ads with helping them grow their revenue in 2023. These businesses also report that personalized ads save them time and money by focusing their marketing spend. If regulations severely limit targeted advertising, the fallout for SMBs could be dire – more than one-third of small advertisers said losing personalized ads would hurt sales, nearly half said they would have to raise prices, and 1 in 5 indicated they might have to close their business entirely. One analysis explains that extending opt-out rights to cover first-party behavioral ads will reduce the income of online publishers and raise advertising costs for businesses, with especially harsh effects on small businesses that depend on targeted ads to grow. In short, these rules threaten to dismantle a key economic engine of the modern internet – the ad-supported model – which could drive up costs for consumers, eliminate free services, and entrench larger incumbents (who can absorb compliance costs that smaller players cannot).
Crucially, it remains unclear what tangible privacy or consumer benefit would offset these harms. The proposal frames personalized advertising as an ADMT issue, yet advertising uses of data are already addressed by the CCPA’s restrictions on cross-context behavioral advertising, which empower consumers to opt out of third-party cross-site tracking. The CCPA does not give consumers an opt-out from first-party personalization that occurs within a single service’s context. From a consumer standpoint, receiving relevant product recommendations or site features based on one’s preferences is often seen as a benefit, not a risk. The ACLU and other privacy advocates have raised concerns about surveillance marketing, but the CPPA has not demonstrated that its heavy-handed approach (effectively an opt-out for all targeted advertising) will meaningfully improve consumer welfare. Many consumers may not exercise this opt-out, and those who do might find their experience degraded (with generic ads or less personalized content) more than their privacy is enhanced. The marginal privacy gains are speculative, while the economic downsides – lost revenue for publishers, weakened small businesses, and potentially less consumer choice in the marketplace – are concrete. The Agency should not stretch the law to regulate in this area, especially not without clear legislative direction or evidence of a net benefit.
First Amendment and Legal Overreach Concerns
Beyond policy overreach, the proposed ADMT regulations raise legal and constitutional issues, particularly under the First Amendment. By intruding into how businesses use information about and communicate with their own consumers, the rules may constitute a form of speech regulation – one that courts have been skeptical of in the context of data and advertising restrictions. Notably, the U.S. Supreme Court in Sorrell v. IMS Health struck down a state law that banned the sale of certain data for specific pharmaceutical marketing uses, finding that it imposed content- and speaker-based burdens on speech and thus violated the First Amendment. The Court made clear that even restrictions on the use of information for marketing purposes can trigger heightened First Amendment scrutiny.
The CPPA’s proposal is, in one way, more restrictive than the Vermont law at issue in Sorrell: it imposes practical limits not just on the sale or sharing of consumer data, but on a company’s ability to speak certain advertising messages to a customer using data that customer provided directly to the company. The state cannot simply label business communications as “ADMT” and thereby evade constitutional scrutiny; if the regulation targets communicative content (like an ad or an algorithmically curated message), First Amendment protections are implicated.
Moreover, the ADMT rules would compel businesses to engage in speech and disclosures that raise further constitutional questions. For example, companies must provide detailed “pre-use” notices explaining their use of ADMT and describing “in plain language” the logic of their algorithms and the consumer’s rights.
They also must, upon consumer request, divulge intimate details about how a decision was made about that person – effectively a form of compelled explanatory speech about the company’s proprietary processes. This requirement applies only to decisions made by or with the support of ADMT, based on an unsupported (indeed, not even explicitly stated) assumption that the outcomes of ADMT are inherently more likely to be harmful than the outcomes of non-automated decision-making processes.
Forcing private entities to disclose how their internal processes operate treads into sensitive territory implicating both trade secret law and free expression. Recently, a federal court enjoined California’s Age-Appropriate Design Code (AADC) in part because its Data Protection Impact Assessment requirements constituted unconstitutional compelled speech. The Ninth Circuit affirmed that decision, agreeing that such mandates were facially unconstitutional because they compel covered businesses to create and disclose content about sensitive, highly subjective topics.
The CPPA’s risk assessment requirements and ADMT explanation duties could be vulnerable to a similar challenge – they force companies to create and potentially submit documents explaining sensitive internal processes, regardless of how legitimate and lawful the ultimate decision may be. A court could view such mandates as a form of compelled speech about the content those algorithms deliver.
In addition to constitutional concerns, there are questions of statutory authority and administrative overreach. The ballot initiative known as the California Privacy Rights Act of 2020 (CPRA) amended the CCPA and empowered the Agency to promulgate regulations on automated decision-making access and opt-out rights. However, that grant of authority must be interpreted reasonably and in line with the statute’s intent. The CPPA’s current proposal goes beyond what the law contemplates: neither the CCPA nor the CPRA authorized limits on first-party advertising. The CPPA appears to be using a privacy law to pursue objectives more akin to AI ethics regulation, a venture that the text of the CCPA/CPRA does not clearly authorize.
At minimum, this overreach invites legal challenge and uncertainty. The better course would be a narrower regulation hewing closely to the CPRA’s text – for instance, focusing on access/opt-out for truly significant automated decisions, rather than trying to regulate every algorithm that touches personal data. Without such retrenchment, the Agency’s rules may not withstand judicial scrutiny.
Unworkable and Rushed Implementation Timeline
Finally, the implementation timeline for these rules is exceedingly aggressive, which will compound costs and confusion. The CPPA began formal rulemaking in late 2024 and initially set written comments due by January 14, 2025 – literally the same day as the first public hearing. This rushed schedule left little time for stakeholders to analyze and provide input on a complex 66-page regulation. It was only after significant criticism and external events like California wildfires that the Agency extended the comment deadline to February 19, 2025. Such haste in the rulemaking process suggests insufficient consideration of practical compliance challenges.
The proposed regulations themselves envision being “fully implemented two years after the effective date” for certain requirements like risk assessments. However, other provisions – including the ADMT opt-out, transparency, and access provisions – may take effect much sooner, potentially within months of final adoption. Given the CPPA’s expected timeline, businesses could be required to comply by late 2025 with novel obligations that have no precedent in U.S. law. By comparison, major privacy regimes (like the GDPR or state privacy laws) often provide two years or more of lead time for businesses to adjust systems and processes. Here, companies would have to scramble to inventory all ADMT systems, build new consumer interfaces for opt-outs, engineer opt-out mechanisms or human-review alternatives for automated processes, draft detailed algorithmic disclosures, and train personnel – all under threat of enforcement by an agency keen to show results. Smaller companies, in particular, will struggle to meet these demands on short notice, as they lack armies of compliance staff and counsel.
The harms of a rushed rollout are significant. Companies may err on the side of removing or disabling beneficial automated features (to avoid non-compliance), leading to degraded products and services. Others might simply fail to meet the deadline, resulting in a wave of violations that neither the CPPA nor businesses are prepared to handle. It is telling that many business representatives at the January 2025 hearing implored the Agency to slow down and reconsider its approach, warning of the unintended consequences of hasty implementation. The CPPA should heed these warnings. Rushing complex, sweeping regulations almost always produces exactly those consequences. The CPPA would be far better served by a measured timeline: allow ample public input, consider a phased or narrowed implementation, and ensure businesses have clarity and time to comply. An onerous rule on paper that is impractical in reality helps no one – not consumers (who get a false sense of security), not businesses (facing chaos), and not the Agency (which will be mired in enforcement difficulties and backlash).
Conclusion
In closing, the proposed ADMT regulations are overly expansive, burdensome, and premature. They impose massive costs and frictions on California’s economy for speculative privacy gains. They use an axe where a scalpel was needed, and venture beyond the CPPA’s legal authority in ways that risk being struck down in court. By trying to do too much too fast – regulating every algorithmic decision and even basic advertising practices – the Agency runs the danger of undermining California’s famed innovation climate and driving businesses (and jobs) out of the state. This is directly contrary to the interests of Californians, who benefit from a thriving digital economy.
To be clear: protecting consumers from genuinely harmful decisions, whether automated or not, is a worthy goal. No one disputes that important decisions about individuals (loans, employment, housing, etc.) should be fair, transparent, and accountable. However, these proposals overshoot that goal by a wide margin. They turn the CCPA – a consumer privacy law – into an omnibus AI regulation without the necessary tailoring or legislative guidance. The result is a framework that would burden legitimate, beneficial uses of data and algorithms far more than it would curb truly harmful conduct.
I urge the CPPA to substantially revise the ADMT regulations before adoption. At minimum, the Agency should:
(1) Narrow the definition of ADMT to focus on high-risk automated decisions (those with significant legal, financial, or health effects), excluding ordinary or low-risk data processing;
(2) Remove the restrictions on first-party advertising and marketing, which are beyond the law’s scope and raise constitutional issues – the focus should remain on third-party cross-context profiling as in the statute;
(3) Conduct a fresh SRIA that fully accounts for all costs (including impacts on innovation and small businesses) and does not rely on speculative, error-prone benefit calculations;
(4) Address the legal authority and First Amendment questions in a thorough manner – if certain provisions (like compelled algorithm disclosures or broad opt-outs) cannot be squared with constitutional limits, they should be dropped or reworked now, rather than in protracted litigation later; and
(5) Extend the implementation timeline significantly, and/or adopt a phased implementation to ensure businesses have adequate time to build compliance programs that actually work.
California must govern technology in a way that is balanced, lawful, and evidence-based. These draft regulations, as written, unfortunately miss that mark. The CPPA should realign its approach so that it truly furthers consumer privacy without inflicting disproportionate economic damage, stifling the very innovations that drive prosperity, or usurping the role of the California Legislature. I appreciate the opportunity to comment and hope the Agency will carefully consider these concerns in the rulemaking process.
Sincerely,
Neil Chilson
Head of AI Policy
Abundance Institute
This comment is designed to assist the agency as it explores the issues of this proceeding. The views expressed in this comment are those of the author(s) and do not necessarily reflect the views of the Abundance Institute. Submitted via email to regulations@cppa.ca.gov on February 19, 2025.