Public Interest Comment on the NTIA Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights Request for Public Input

Comments of The Abundance Institute

Introduction

Thank you for the opportunity to comment on this important topic.

To produce a report required by President Biden’s Artificial Intelligence Executive Order (EO), NTIA seeks comment in a Request for Public Input (“RFPI”) on “potential risks, benefits, other implications, and appropriate policy and regulatory approaches to dual-use foundation models for which the model weights are widely available.”

The RFPI sets out questions intended to help NTIA advise the White House on how to draw difficult definitional lines between amorphous concepts. The questions suggest that NTIA seeks to divide the spectrum on which foundational models lie, from closed to open, into two (or more) categories. The goal seems to be that NTIA or the White House would then assign different qualitative assessments of risk and benefit to these categories and apportion regulatory burdens accordingly.

Drawing such lines in a principled, sustainable way is difficult if not impossible. For such a general purpose technology as AI, we can expect dozens if not hundreds of variations in the relative openness of models, both as a technical matter and as a business proposition. Dividing this ecosystem into a false binary of “closed” or “open” may serve as a descriptive convenience, but it should not trigger differing regulatory obligations.

Fortunately, to deal adequately with risks from AI, such line-drawing is unnecessary. The EO’s request that prompted the RFPI implies that some perceive open, widely available artificial intelligence models as inherently more risky than more closed models. But this perception contradicts the evidence from decades of experience with open source software. 

For such a general purpose technology as AI, the open or closed nature of the tool is not going to be the primary driver of risk. Instead, the proposed use or application of AI will be the primary driver of benefits and risks. Categorizing various models according to their features and openness characteristics could be of use to customers or academics, but it won’t tell us that much about risk. (It may tell us something about the tools the government can and cannot bring to bear, however – see the First Amendment discussion below.) 

Below we first highlight the importance of open source software in general and openness in AI in particular. Then we address some of the specific questions in the RFPI. In particular, we argue:

  • Distinctions between “open” and “closed” models are somewhat arbitrary and fortunately not necessary.

  • Open models offer certain significant advantages over closed or proprietary models, including to non-profit organizations like ours.

  • The First Amendment constrains the ability of government to limit the publication and use of open models.

  • Open models do and will increase competition at the model layer, spurring innovation at that layer and distributing value creation to other layers of the AI stack.

  • Use-specific, harms-based ex post enforcement approaches that focus on deterring and redressing concrete harms caused by the misuse of AI tools, regardless of whether they are open or closed, are likely to be more effective and more adaptable. 

Open Source Software is Widely Used and Its Benefits Have Far Exceeded Any Risks 

This section is responsive to RFPI Question 6(a), but it is so central to this matter that we lead with it.

The history of computer science and software development demonstrates that openness in software has significant benefits that outweigh any costs, and NTIA should begin its analysis of foundational models with widely available open weights from that default position.

Open source is the foundation of modern computer software. It is everywhere. The largest open source repository host, GitHub, reported 420 million total projects in 2023, with 27 percent year-over-year growth. One 2024 study reviewed 1,607 code bases across 17 core industry clusters. The researchers found that 96 percent of the reviewed code bases contained open source code. In aerospace, aviation, automotive, transportation, and logistics – all crucial applications requiring high reliability and security – 100 percent of the code bases reviewed included open source. The study also found that of all lines of code reviewed, a whopping 77 percent were from open source. A separate software supply chain study estimated that approximately 90 percent of all code run has an open source origin.

Open source has also created enormous value. A recent report estimated the total value created by open source software at more than $8 trillion, and found that “firms would need to spend 3.5 times more on software than they currently do if OSS [(open source software)] did not exist.” As deep learning researcher and AI accessibility advocate Jeremy Howard has explained:

“Today, nearly every website you use is running an open source web server (such as Apache), which in turn is installed on an open source operating system (generally Linux). Most programs are compiled with open source compilers, and written with open source editors. Open source documents like Wikipedia have been transformative. Initially, these were seen as crazy ideas that had plenty of skeptics, but in the end, they proved to be right. Quite simply, much of the world of computers and the internet that you use today would not exist without open source.”

The massive value created by and pervasive presence of open source software across a wide range of industries and uses is concrete evidence that the advantages of open source overwhelmingly outweigh the disadvantages – including in critical and sensitive industries.

The general benefits of open source also apply in the context of AI. Many of the fundamental tools used to develop AI are open source, and open source models have been key to significant advances in the state of the art. Indeed, a leaked Google employee memo argued, “Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.”

Models with open weights occupy a different part of the openness spectrum than open source software, but they provide many of the same benefits and even some unique ones. In particular, the high cost of compiling data and purchasing compute to train foundational models is a significant barrier to entry. Sharing model weights eliminates this cost barrier, broadening access and enabling users who would otherwise simply be priced out of building their own AI stack. Foundation models with widely available model weights provide just that: a foundation on which many others can build, as the sketch below illustrates.
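
To make “building on a foundation” concrete, the following minimal sketch shows one common pattern: downloading openly published weights and training a small adapter on top, rather than training a model from scratch. It assumes, for illustration only, the Hugging Face transformers and peft libraries and Mistral’s openly published 7B weights; it sketches the general pattern, not any particular toolchain.

    # Illustrative sketch: adapt an open-weights model with a small LoRA
    # fine-tune instead of training a foundation model from scratch.
    # Assumes the Hugging Face `transformers` and `peft` libraries.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Download openly published weights (no training run required).
    base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

    # Attach small trainable adapter matrices; the billions of base
    # parameters stay frozen, so fine-tuning costs a tiny fraction of
    # the original training run.
    adapter = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
    model = get_peft_model(base, adapter)
    model.print_trainable_parameters()  # typically well under 1% trainable

The same weights file can serve thousands of such downstream projects: the fixed cost of training is paid once, and the benefits are shared broadly.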

Because of the clear history of open source software and early evidence of the benefits of open model weights, NTIA’s analysis of widely available foundation model weights should begin from the default position that open model weights have significant benefits that outweigh any risks.

Specific Questions

Question 1: How should NTIA define “open” or “widely available” when thinking about foundational models and model weights?

There are no clear-cut distinctions between open and closed models. But because there should be little regulatory distinction between fully open and fully closed models, there is no need for such difficult definitions. 

The difference between “open” and “closed” foundation models has been accurately described as a gradient. Openness can apply at different layers of a model’s development and release. For example, some developers have made public, or “open sourced,” the software code used to develop their foundation models. Others have published the model weights that result from running such software. Many others enable access to a model through a chat interface or an API. Finally, some companies develop proprietary AI models for internal use that they do not share with anyone else. All but the last include some benefits of openness. For example, OpenAI’s ChatGPT interface has provided access to millions of users who would otherwise lack the time or technical expertise to experiment with and use a powerful foundational model.
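
To make two points on this gradient concrete, the following minimal sketch contrasts API access with locally run open weights. It assumes, purely for illustration, the openai Python client and the Hugging Face transformers library; the model names are examples, not endorsements.

    # Two points on the openness gradient, side by side.

    # (1) Closed weights, open access: the model runs on the vendor's
    # servers and is reachable only through an API.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(reply.choices[0].message.content)

    # (2) Open weights: the weights themselves are downloaded and run
    # on hardware the user controls, with no vendor in the loop.
    from transformers import pipeline
    generator = pipeline("text-generation", model="mistralai/Mistral-7B-v0.1")
    print(generator("Hello", max_new_tokens=20)[0]["generated_text"])

In the first case the vendor can change, restrict, or retire the model at will; in the second, the user’s local copy of the weights keeps working regardless of any vendor decision.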

Even within these broad categories there are significant variations in “openness.” For example, Meta’s Llama models are released under a license and terms of use that prohibit certain uses, such as spam. Mistral’s models are released under a standard Apache 2.0 license, with no specific use restrictions. Beyond use restrictions, other variations in “openness” include the level of disclosure of the training data, the procedures used to fine-tune, and the documentation provided. The spectrum of openness is broad and fine-grained, and placing a regulatory line that divides one side into “open” and the other into “closed” would be arbitrary in most cases.

For descriptive purposes, it may be useful to categorize as “open” any foundational model with weights that the model developer has publicly posted online in at least one freely accessible repository. But again, this distinction should have little legal or regulatory impact given that such models should not be treated differently from models provided in any other way. 

Question 3: What are the benefits of foundation models with model weights that are widely available as compared to fully closed models? 

Models with widely available weights have several advantages over more closed models. We fully agree with an open letter from the Center for Democracy & Technology, Mozilla, and nearly fifty other civil society organizations and scholars that open models have many clear benefits, including “[a]dvancing innovation, competition and research”; “[p]rotecting civil rights and human rights”; and “[e]nsuring safety and security.” In particular, we agree that open source models, “by lowering the barrier for innovators, startups, and small businesses,” thereby “promote economic growth” and “enabl[e] more AI services to be built by and for diverse communities with different needs.”

We have experienced these benefits ourselves. The Abundance Institute is developing a number of AI tools using Mistral’s Mixtral 8x7B open model. As a non-profit with limited resources, the lower cost of such a model means that we can do far more experiments per donor dollar spent. Furthermore, open models help us maintain independence. We are actively involved in artificial intelligence policy discussions and our positions may conflict with specific company positions. The availability of open models helps us avoid being overly reliant on a single, centralized, company-owned resource. We suspect that many other non-profits see similar benefits from open models.

Question 6: What are legal or business issues or effects related to open foundation models?

First Amendment Implications of Restrictions on Open Models: The RFPI does not specifically ask about First Amendment issues around foundational models with widely available weights, but this is a significant issue that ought to be a component of NTIA’s report to the White House. The interaction between open source software and the First Amendment has a long history, including in the context of regulation intended to promote national security and address safety concerns. The concerns raised in EO Section 4.6 and in the RFPI echo concerns that motivated the so-called “Crypto Wars” in the 1990s, when “the US government designated encryption software as a ‘munition’ to be regulated for national security purposes with intensive export restrictions.” Legal challenges as well as the widespread availability of strong encryption internationally despite the restrictions eventually led President Clinton to remove commercial encryption software from the munitions list. 

Like restrictions on encryption, restrictions on foundational AI models raise First Amendment issues. One of the key court decisions in the Crypto Wars, Bernstein v. U.S. Dep’t of State, held that source code is protected expression. Bernstein also explained that source code converted to functional machine-readable object code is likewise protected expression:

The music inscribed in code on the roll of a player piano is no less protected for being wholly functional. Like source code converted to object code, it "communicates" to and directs the instrument itself, rather than the musician, to produce the music. That does not mean it is not speech. Like music and mathematical equations, computer language is just that, language, and it communicates information either to a computer or to those who can read it.

Model weights are analogous. Like object code, model weights communicate information to a computer – in this case, a computer running an inference engine. The fact that such speech “is essentially functional, [] does not remove it from the realm of speech. Instructions, do-it-yourself manuals, recipes, even technical information about hydrogen bomb construction … are often purely functional; they are also speech.” People and organizations who wish to publish such model weights have a protected speech interest in doing so.
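
The analogy can be made concrete. Mechanically, a model weights file is a collection of named numeric arrays that an inference engine reads and applies, much as a player piano reads a roll. Here is a minimal sketch, assuming PyTorch and a hypothetical checkpoint file named model_weights.pt:

    # Model weights are inert numeric data until an inference engine
    # "reads" them. Assumes PyTorch; the checkpoint filename is hypothetical.
    import torch

    state_dict = torch.load("model_weights.pt", map_location="cpu")
    for name, tensor in list(state_dict.items())[:3]:
        # Each entry is just a named array of numbers.
        print(name, tuple(tensor.shape), tensor.dtype)

Nothing in such a file executes on its own; like the piano roll, it communicates information that a separate instrument, the inference engine, turns into output.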

Writers of open source code for training foundational models and publishers of open model weights are not the only parties with speech interests at stake. Others include the facilities that host such projects, such as GitHub; compute service providers who offer inference engines applying such model weights; developers building apps using open models; and end users who use such applications to generate their own expressive content.

In particular, any government requirement that models be conditioned before release to respond in certain government-preferred ways to user prompts is the kind of content-based restriction that deserves strict First Amendment scrutiny.

Any recommendations that NTIA provides to the White House ought to acknowledge the clear First Amendment constraints on government action in this area.

Competition Impacts of Open Models – Question 3(a): Open model weights offer benefits for competition and innovation, both in the marketplace for AI services and in other areas of the economy, including:

  • Leveling the playing field: Open models reduce the barriers to entry and give smaller players and startups access to cutting-edge AI technology. This could increase competition across the economy as more organizations are able to leverage powerful AI capabilities in their products and services without needing the massive resources to develop the foundational models themselves. This leveling effect is supported by research that demonstrates that using generative AI tools in work settings disproportionately benefits lower-performing workers.

  • Shifting focus to applications and fine-tuning: With shared access to strong open models, competitive differentiation will depend on how well companies can adapt and apply the models to specific domains and use cases. The ability to efficiently fine-tune models and develop powerful applications on top of them could become more important than the capacity to train a foundational model from scratch.

  • Commoditization of foundational models: In the long run, open models could commoditize foundational AI technology. If everyone has access to high-quality open models, the models themselves may not be a sustainable competitive advantage. The real value may migrate to compute, proprietary datasets, customizations, and application-specific IP. This would distribute gains from this technology more broadly across the economy.

  • New business models: Open models could spur new business models and ways of creating value in the AI ecosystem. For example, there may be opportunities to provide compute resources for fine-tuning, offer managed services around open models, or develop proprietary add-ons and extensions.

  • Collaboration and shared standards: Open models could foster greater collaboration and interoperability within the AI community. Shared standards and a common technological substrate could emerge, enabling more vibrant competition in the application layer.

  • Quality and safety assurance: With open models, there may be more inter-firm competition to ensure the quality, safety, and responsible use of foundational models. Expertise in AI alignment, safety, robustness, and ethical deployment could become a key competitive differentiator for both open source and proprietary models.

The primary implication of all of these points is that open models do and will increase competition at the model layer, spurring innovation at that layer and distributing value creation to other layers of the AI stack.

Question 7: What are current or potential voluntary, domestic regulatory, and international mechanisms to manage the risks and maximize the benefits of foundation models with widely available weights? What kind of entities should take a leadership role across which features of governance?

In addition to this question, the RFPI “requests input on any potential regulatory models, either voluntary or mandatory, that could maintain and potentially increase the benefits and/or mitigate the risks of dual use foundation models with widely available model weights” and “seek[s] input as to different kinds of regulatory structures that could deal with not only the large scale of these foundation models, but also the declining level of computing resources needed to fine-tune and retrain them.” We offer the following responses.

Market Process Mechanisms

The base mechanism for managing the risks and maximizing the benefits of any technology is the market process. This is well established in practice, in economics, and in law. As one of us argued in a previous NTIA proceeding on AI:

[T]he norm in competitive marketplaces is that companies expend significant effort to develop, use, and continuously improve the quality of their products and services. They also seek to build consumer trust and to develop their reputation and “marks of quality.” Their ability to satisfy their customers is the difference between business success and failure.

In fact, markets play a crucial role in disciplining companies and holding them accountable to their customers. The mechanisms involved include competition, reputation, customer feedback, pricing, and transparency.

The market process can dynamically accommodate and balance among a wide range of values, risk tolerances, and desired benefits. This ability makes markets the fundamental mechanism for maximizing the benefits of any new technology, including foundation models with widely available weights.

Whether non-market mechanisms can supplement and improve on the results of market-based risk management and benefit maximization will depend on a number of factors, including the precision of the identified market failures, the targeted nature of the remedies, the diligence of the application of those remedies, and the ability of the overall structure to adjust as technologies and applications evolve. 

A Knowledge Problem and a Use-Focused Solution

All of this boils down to a knowledge problem: can the non-market mechanism gather sufficient information (compared to the market process) to ensure that intervening will serve the purpose of the regulatory action better than taking no action at all?

Regulators face a unique challenge in gathering this information for foundational models. All of the risks highlighted by the RFPI would occur as a result of the use of foundational models. Likewise, the benefits of such models will depend on their various uses. And, as the RFPI notes, foundational models have a wide range of existing and potential uses – which is what makes them a general-purpose technology. Exhaustively identifying the specific categories of such uses would be impossible.

For that reason, regulatory efforts should focus where information about harms and potential remedies is most accessible: the use of foundation models. Regulatory efforts at the model level, by contrast, will be at best indirect and at worst speculative and counterproductive.

An Electric Analogy

Perhaps an analogy to another general purpose technology is revealing: the provision of electricity. Obviously, misuse of electricity can cause harm to users. To address such harms we have an assortment of tort law, licensing requirements for electricians, general consumer protection law, recall authority, and industry standards. Which laws and regulations apply depends on the specific use and user of electricity: construction, lighting, EV charging, appliances, and so on.

We also have federal and state regulatory systems in place for electrical generation and transmission. Indeed, this is what most people mean when they refer to “electricity regulation.” In many states, such systems are quite interventionist. But the purpose of such regulation is not to prevent the harmful end uses of electricity by retail consumers, but to regulate the risks and account for the unique economics specifically involved in electrical generation and electrical transmission.

Requiring generation and transmission regulation to guard against any possible harm from the millions of different retail uses of electricity would be unthinkable. We do not limit or regulate electricity generation and transmission in order to keep people from burning their house down with home repairs or to stop people from using it to power TV transmitters broadcasting lies. We use other regulatory mechanisms focused on those specific uses and to mitigate those specific risks.

Likewise with foundational AI models: regulation at the model level should focus on any direct risks of developing models and exclude risks based on model uses. Developing foundational AI models is considerably safer, physically, than generating and transmitting electricity, so we might expect lighter-touch regulation of models even if AI uses raise certain risks.

Three Advantages of a Use-Specific, Harms-Based Approach

Thus, the general purpose nature of foundational models recommends a use-specific, harms-based approach to intervention. This approach has three advantages.

  • First, as already discussed, such approaches have access to more relevant information and therefore can more directly target harms with fewer side effects.

  • Second, existing legal frameworks already address many harms, no matter what technology is used to cause them. For example, we have laws prohibiting discrimination on the basis of protected characteristics. If a lender uses a foundational model to discriminate against racial minorities, that is already illegal.

  • Third, and most importantly for the purposes of this specific proceeding, approaches that focus on model uses rather than model development necessarily treat closed and open models alike. Individuals who misuse a foundation model to cause harm can and should be treated the same, regardless of whether the model used was open or closed. Targeting bad actors, not the tools they use, is far more consistent with long-standing U.S. legal principles and general moral intuitions. Individuals ought to be responsible for acts they commit, not for acts that others commit. This approach also sets proper incentives by penalizing the party that committed the illegal act. Such targeted intervention discourages bad behavior while allowing other, beneficial uses.

Question 8(a): In the face of continually changing technology, and given unforeseen risks and benefits, how can governments, companies, and individuals make decisions or plans today about open foundation models that will be useful in the future? How should these potentially competing interests of innovation, competition, and security be addressed or balanced?

The knowledge problem discussed in the previous question is at its most troublesome when dealing with future developments. No one can gather sufficient information about events that have not happened yet, particularly in fast-changing, technologically complex domains like artificial intelligence. In such environments, it is important for governance structures to anchor themselves to long-standing constants and principles.

This is another reason to focus legal frameworks on ex post redress of harms rather than on ex ante regulation of the tools that might be used to cause them. The types of harms governments address through legal and policy means change slowly. Physical injury, damage to real property or finances, deception and fraud, unjust discrimination, and certain kinds of reputational or dignitary harms have a long history of legal protection. And ex post enforcement approaches that apply general principles to redress past harms and deter future ones are better able to gather relevant data about any one case and to adapt to new developments as they occur.

Conclusion

The open source movement has a long history of delivering immense value and fostering innovation across industries, and the AI field is no exception. Foundational models with widely available weights are already demonstrating significant benefits, including increased access, lower barriers to entry, and a thriving ecosystem of new applications and use cases. These benefits are likely to grow as open models mature and become more widely adopted.

As with any powerful technology, there are potential risks associated with the misuse of foundational models. Policymakers and regulators are rightly concerned about these risks, but it is crucial that any regulatory approach carefully balances the need to mitigate harms with the importance of preserving the benefits of openness and innovation.

Drawing arbitrary lines between "open" and "closed" models is unlikely to be an effective way to manage risks. Instead, a use-specific, harms-based approach that focuses on deterring and redressing concrete harms caused by the misuse of AI tools, regardless of whether they are open or closed, is more likely to be effective and adaptable to changing technologies.

Such an approach would also be more consistent with long-standing legal principles, such as individual responsibility for one's actions and the protection of free speech. Attempts to restrict or condition the publication or use of open source AI models are likely to raise significant First Amendment concerns.

In navigating the rapidly evolving AI landscape, NTIA, the White House, and other policymakers should look to the proven track record of the open source movement and the power of market-based mechanisms to drive innovation and manage risks. While targeted interventions may be necessary to address specific market failures or harms, the default stance should be one of openness and support for the continued development and responsible use of open AI models.

________________________________

Public Interest Comment on the National Telecommunications and Information Administration (NTIA) Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights Request for Public Input 

Regulations.gov Docket No. NTIA–2023–0009

NTIA Docket No. 240216-0052

Submitted: March 27, 2024

The Abundance Institute is a mission-driven nonprofit dedicated to creating an environment where emerging technologies, including artificial intelligence, can germinate, develop, and thrive in order to perpetually expand widespread human prosperity and abundance. The views expressed in this comment are those of the authors and do not necessarily reflect the views of others at The Abundance Institute.
Executive Office of the President, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 88 FR 75191 (Nov. 1, 2023), https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence (EO).
National Telecommunications and Information Administration, Request for Public Input, Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights, NTIA Docket No. 240216-0052, 89 FR 14059 (Feb. 26, 2024) (RFPI).
Kyle Daigle, GitHub, Octoverse: The state of open source and the rise of AI in 2023 (Nov. 8, 2023), https://github.blog/2023-11-08-the-state-of-open-source-and-ai/.
Synopsys, 2024 Open Source Security and Risk Analysis Report at 5 (Feb. 2024), https://www.synopsys.com/software-integrity/engage/ossra/ossra-report.
Id. at 4.
Id. at 5.
Id. at 4.
Sonatype, 9th Annual State of the Software Supply Chain Report at 6 (2023), available at https://www.sonatype.com/state-of-the-software-supply-chain/open-source-supply-and-demand.
Manuel Hoffman et al., The Value of Open Source Software (Jan. 1, 2024), Harvard Business School Strategy Unit Working Paper No. 24-038, available at http://dx.doi.org/10.2139/ssrn.4693148.
Jeremy Howard, AI Safety and the Age of Dislightenment (July 10, 2023), https://www.fast.ai/posts/2023-11-07-dislightenment.html. See also, W3Techs, Web Technology Surveys (Mar. 27, 2024), https://w3techs.com/technologies/overview/web_server (showing open-source webserver software packages Nginx and Apache serving a combined 64.4% of all websites).
Letter from Center for Democracy and Technology, Mozilla, et al., to Secretary Gina Raimondo at n.8 (March 25, 2024) (“Key model architectures like AlexNet, frameworks like PyTorch and TensorFlow, and research on topics like attention mechanisms were all made widely available, fueling significant advances in AI R&D.”), available at https://cdt.org/wp-content/uploads/2024/03/Civil-Society-Letter-on-Openness-for-NTIA-Process-March-25-2024.pdf (“Civil Society Letter”).
Dylan Patel and Afzal Ahmad, Google “We Have No Moat, And Neither Does OpenAI” (May 4, 2023),  https://www.semianalysis.com/p/google-we-have-no-moat-and-neither.
Irene Solaiman, The Gradient of Generative AI Release: Methods and Considerations (2023),  https://arxiv.org/pdf/2302.04844.pdf.
Mistral AI Team, Mistral 7B (Sept. 27, 2023), https://mistral.ai/news/announcing-mistral-7b/.
Meta, Llama 2 Community License Agreement (July 18, 2023), https://ai.meta.com/llama/license/.
See generally, OpenAI, GPT-4, https://openai.com/research/gpt-4.
See, Alfons Futterer, NanoMatriX, AI Models: Choosing the Right Type For Your Business, https://www.nanomatrixsecure.com/choosing-the-right-type-for-your-business/ (discussing different approaches to AI development, including proprietary models).
See supra n.16 and Meta, Llama 2 Acceptable Use Policy, https://ai.meta.com/llama/use-policy/.
See supra n.15.
Andreas Liesenfeld et al., Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators, https://dl.acm.org/doi/10.1145/3571884.3604316. See also, Rick Evans, Why Should Policy Models Be Open Source?, Policy Paper, Abundance Institute (2024, forthcoming) (detailing seven degrees of “opensourcedness”: open-access source code, open-access data, open source programming language platform, documentation, licensing, scalable collaboration, and accessible web applications or executables).
See Civil Society Letter, supra n.12 at 2.
See, e.g., John E. Jones, Open Source Software Is Philanthropy, (Oct. 30, 2017), Stanford Social Innovation Review, available at https://ssir.org/articles/entry/open_source_software_is_philanthropy.
See, Alison Dame-Boyle, EFF at 25: Remembering the Case that Established Code as Speech (Apr. 16, 2015), https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech.
Id.
Executive Office of the President, Administration of Export Controls on Encryption Products, 61 FR 58767 (Nov. 15, 1996), https://www.federalregister.gov/documents/1996/11/19/96-29692/administration-of-export-controls-on-encryption-products.
See, Bernstein v. U.S. Dep’t of State, 922 F.Supp. 1426, 1435 (N.D. Cal. 1996)(holding that online publication of encryption source code is protected by the First Amendment).
Id.
Id.
Brian Eastwood, Workers with less experience gain the most from generative AI (Jun. 26, 2023), https://mitsloan.mit.edu/ideas-made-to-matter/workers-less-experience-gain-most-generative-ai; Erik Brynjolfsson et al., Generative AI at Work (Oct. 9, 2023), Working Paper, https://danielle-li.github.io/assets/docs/GenerativeAIatWork.pdf.
RFPI, 89 FR at 14061.
See, e.g., Executive Order 12866, Regulatory Planning and Review, Section 1(a) (Sept. 30, 1993), as amended (“Federal agencies should promulgate only such regulations as are required by law, are necessary to interpret the law, or are made necessary by compelling need, such as material failures of private markets to protect or improve the health and safety of the public, the environment, or the well-being of the American people.”) (emphasis added), https://www.reginfo.gov/public/jsp/Utilities/EO_12866.pdf.
The Center for Growth and Opportunity at Utah State University, et al., Public Interest Comment on the National Telecommunications and Information Administration (NTIA) AI Accountability Policy Request for Comment at 4 (Jun. 15, 2023), https://www.regulations.gov/comment/NTIA-2023-0005-1364.
The RFPI posits that “[f]oundation models with widely-available model weights could engender substantial harms, such as risks to security, equity, civil rights, other harms due to, for instance, affirmative misuse, failures of effective oversight, or lack of clear accountability mechanisms.” 89 FR at 14061.
RFPI, 89 FR at 14061 (after discussing an academic definition and a definition from the Executive Order, concludes that “[b]oth definitions of ‘foundation model’ and of ‘dual-use foundation model’—highlight the key trait of these models, that they can be used in a number of ways”).
See generally, U.S. Energy Law: Electricity, GW Law Jacob Burns Library Research Guides, https://law.gwu.libguides.com/electricity.
Robert J. Michaels, Electricity and Its Regulation, Econlib, https://www.econlib.org/library/Enc/ElectricityandItsRegulation.html (noting that there has been some evolution from the baseline where “[s]cale economies and reliability concerns left electricity dominated by large, vertically integrated utilities; that is, utilities that generated, transmitted, and distributed power”).
Id. (“Important characteristics of electricity limit the possibilities for markets.”).
See generally, Neil Chilson, Getting Out of Control: Emergent Leadership in a Complex World 172-82 and 191-197 (2021) New Degree Press (discussing four principles for regulation in complex, fast-changing spaces – minimize simplistic legibility; temper ambitious plans with prudence and humility; reduce the planner’s ability to impose a plan; and increase the ability of the participants to resist plans – and demonstrating the advantages of case-by-case ex post enforcement policy approaches in fast-moving environments).
