
Foundation AI models like GPT-3 and DALL-E need release standards

Percy Liang is director of the Center for Research on Foundation Models, a faculty affiliate of the Stanford Institute for Human-Centered AI, and an associate professor of computer science at Stanford University.

Humans aren’t very good at predicting the future, especially when it comes to technology.

Foundation models are a new class of large-scale neural networks that can generate text, audio, video, and images. These models will anchor all kinds of applications and have the power to influence many aspects of society. It is difficult for anyone, even experts, to imagine where this technology will lead in the years to come.

Foundation models are trained on broad data at scale using self-supervision, so that they can be adapted to a wide range of downstream tasks. This revolutionary approach to AI represents a dramatic improvement in accuracy and opens up new possibilities, as organizations no longer need to train a model from scratch for every new AI application. It also poses obvious risks, because the downstream consequences are difficult to predict, let alone control. If not managed effectively, foundation models such as GPT-3, PaLM, and DALL-E 2 could cause significant harm to individuals and society, whether intentionally or unintentionally.

A key element of governance is establishing community norms for releasing foundation models so that a diverse group of scholars has the opportunity to analyze them closely. Currently, companies like Microsoft, Google, OpenAI, Meta, and DeepMind each take a different stance on how to release their models: some release them openly, while others keep them closed or limit access to a small group of researchers.

While we don’t expect consensus, we believe it’s problematic for each foundation model developer to determine its release policy independently. A single actor releasing a dangerous and powerful technology could, knowingly or unknowingly, cause significant harm to individuals and society. Moreover, developers would benefit from sharing best practices rather than incurring the economic and social costs of rediscovering the same harms over and over again.

Fortunately, releasing new foundation models doesn’t have to be an all-or-nothing proposition. A multidimensional policy framework would consider four key questions (sketched as structured data after the list below):

  1. What to release: Papers, models, code, and data can each be released separately; each has a different impact on expanding scientific knowledge and a different potential for harm.
  2. Who gets access: Given the risks associated with releasing models, staged access matters. For example, there might be an inner circle of trusted collaborators, an intermediate circle of researchers who request access, and the general public.
  3. When to release the model: The timing of a release should depend on both intrinsic properties, such as the results of safety assessments, and external conditions, such as what other models already exist and how much time has elapsed.
  4. How to release the model: The release process for new assets should include a two-way channel of communication between developers and researchers, so that the release can be monitored and adjusted over time.
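
To make the four dimensions concrete, here is a minimal sketch of how a single release decision could be recorded as structured data. The class names, enum values, and example values are illustrative assumptions, not part of any existing standard or of the proposal itself.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class Asset(Enum):
    """What to release: each asset can be published separately."""
    PAPER = "paper"
    MODEL_WEIGHTS = "model weights"
    CODE = "code"
    TRAINING_DATA = "training data"


class AccessTier(Enum):
    """Who gets access: from an inner circle outward to the public."""
    TRUSTED_COLLABORATORS = 1
    VETTED_RESEARCHERS = 2
    GENERAL_PUBLIC = 3


@dataclass
class ReleaseDecision:
    """One release decision along the four dimensions (illustrative only)."""
    assets: set[Asset]        # what to release
    access_tier: AccessTier   # who gets access
    release_date: date        # when to release
    feedback_channel: str     # how: two-way contact between developers and researchers


# Example: staged release of the paper and model weights to vetted researchers.
decision = ReleaseDecision(
    assets={Asset.PAPER, Asset.MODEL_WEIGHTS},
    access_tier=AccessTier.VETTED_RESEARCHERS,
    release_date=date(2022, 6, 1),
    feedback_channel="developer contact form (placeholder)",
)
```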

To help developers make more informed decisions with input from the wider community, we at the Center for Research on Foundation Models at the Stanford Institute for Human-Centered AI have proposed creating a Foundation Model Review Board. The board’s role would be to facilitate the release of foundation models by developers to external researchers. This approach would broaden the pool of researchers who can study and improve foundation models, while helping to manage the risks of release.

The Foundation Model Review Board’s workflow would look like this (a minimal state-machine sketch follows the list):

  • A developer publishes a call for proposals describing the foundation model(s) available and what the developer believes are the most critical areas of research on those models.
  • A researcher submits a research proposal outlining the objectives of the research, the type of access needed to achieve those objectives, and a plan for managing ethical and security risks.
  • The board reviews the research proposal and deliberates, possibly with additional input from the researcher.
  • Based on the board’s recommendation, the foundation model developer makes the final decision to approve, reject, or defer the proposal.
  • If the proposal is approved, the foundation model developer releases the desired assets to the researcher.
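
As a rough illustration only, the proposal lifecycle above could be tracked as a simple state machine. The states, transitions, and helper function below are a hedged sketch of the steps described in the list, not an implementation proposed by the authors.

```python
from enum import Enum, auto


class ProposalState(Enum):
    CALL_OPEN = auto()        # developer publishes a call for proposals
    SUBMITTED = auto()        # researcher submits a research proposal
    UNDER_REVIEW = auto()     # board reviews and deliberates
    APPROVED = auto()         # developer approves, following the board's recommendation
    REJECTED = auto()         # developer rejects the proposal
    DEFERRED = auto()         # developer defers; the proposal may be revisited later
    ASSETS_RELEASED = auto()  # developer releases the requested assets


# Allowed transitions, mirroring the bullet points above.
TRANSITIONS = {
    ProposalState.CALL_OPEN: {ProposalState.SUBMITTED},
    ProposalState.SUBMITTED: {ProposalState.UNDER_REVIEW},
    ProposalState.UNDER_REVIEW: {
        ProposalState.APPROVED,
        ProposalState.REJECTED,
        ProposalState.DEFERRED,
    },
    ProposalState.APPROVED: {ProposalState.ASSETS_RELEASED},
    ProposalState.DEFERRED: {ProposalState.UNDER_REVIEW},
}


def advance(state: ProposalState, next_state: ProposalState) -> ProposalState:
    """Move a proposal to the next stage, rejecting transitions the workflow does not allow."""
    if next_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot go from {state.name} to {next_state.name}")
    return next_state
```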

A review board like this would ensure that release decisions are made in a highly contextual way: for a particular researcher, for a particular purpose, for a specific foundation model, with a specific form of access, and at a specific time. This concreteness makes it much easier to reason about the benefits and risks of a given decision. Community norms on model releases would emerge through a series of these decisions.

We must recognize that foundation models are changing rapidly and require governance norms that can keep pace. The models we will see five years from now may be unrecognizable to us today, just as today’s models would have been inconceivable five years ago. Those developing foundation models should engage with the community to develop best practices around releasing new models. Downstream users, including application developers and researchers, should be more aware of the models they use, what data those models were trained on, and how the models were built; if that information is not available, they should ask for it.

An important characteristic of human-centered AI is transparency and openness: collective governance, marked by fair procedures and striving for better outcomes. Given the immense uncertainty and our poor ability to predict the future, we cannot make decisions based solely on anticipated outcomes. We must focus on developing a resilient process that prepares us for whatever lies ahead.

As foundation model research is still in its early stages, feedback and comments are extremely valuable. For those working with foundation models, whether through research or development, we’d love to hear from you at [email protected]

Additional contributions to this report come from Rob Reich, professor of political science and, by courtesy, professor of philosophy at Stanford University; Rishi Bommasani, a Ph.D. student in the computer science department at Stanford; and Kathleen Creel, HAI-EIS Embedded EthiCS Fellow at Stanford.