
AI's New Rules: How Society is Shaping Norms for Artificial Intelligence

"Discover how formal regulations, informal institutions, and spontaneous norms are evolving to govern AI's role in our lives, and what it means for the future."


Artificial Intelligence (AI) is quickly becoming an integral part of daily life, creating an urgent need for new social and ethical norms. However, AI's rapid evolution far outpaces traditional norm-setting processes, presenting unprecedented challenges. Understanding how these norms form is essential to ensuring AI benefits society.

This article examines the critical processes of norm formation surrounding AI, focusing on how new norms are established rather than dictating what they should be. We will explore different scenarios, from centralized mandates by formal authorities to decentralized, bottom-up emergence driven by spontaneous social behaviors.

Far from offering a definitive guide, this article aims to provide interpretive tools for understanding society's response to AI's growing pervasiveness. We'll also consider how AI itself might influence the development of future social norms, emphasizing the importance of open, inclusive, and transparent public discourse in this rapidly evolving landscape.

The Three Pillars of AI Norm Formation

Futuristic cityscape with people and AI networks representing the formation of AI norms.

Norms around AI are forming through three main avenues: formal institutions, informal institutions, and spontaneous norms. Each plays a crucial role in shaping how we interact with and regulate AI technologies.

Formal institutions, such as governments, create regulations and laws to govern AI behavior. Recently, there has been increasing demand for top-down regulation to address existential threats and legal uncertainties related to AI. These regulations aim to ensure the safe adoption of AI in sectors like healthcare and finance, aligning AI use with human rights and ethical considerations. Auditing, the systematic evaluation of AI systems and their actions, is one promising enforcement approach. Still, top-down regulation faces several challenges:

  • Unknown Unknowns: The rapid pace of AI development makes it challenging to anticipate future capabilities, leading to potential regulatory gaps.
  • Limited Regulatory Control: Unlike sectors like nuclear power, AI is easier to implement and harder to trace, making it difficult to restrict malicious uses.
  • Guarding the Guardians: Concentrated power in a few private companies raises concerns about conflicts of interest and the need for independent oversight.

Informal institutions, such as universities and professional organizations, also play a key role in shaping AI norms. These bodies propose codes of behavior and fill gaps left by formal regulation. Two sectors where informal institutions are actively developing norms are scientific publishing and education. Many publishers, for example, have decided that AI cannot be listed as an author, emphasizing human accountability and creativity in research. Education is another contested arena, with institutions grappling with how to manage AI tools like ChatGPT: some have banned them outright, while others are seeking more nuanced approaches.

Spontaneous norms, the third avenue, emerge bottom-up from everyday behaviors and interactions, gradually shaping how individuals and communities use AI without any formal decision ever being taken.
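
To make the spontaneous route concrete, here is a minimal, purely illustrative Python sketch of how a shared convention can emerge from repeated pairwise interactions with no central authority. It is a toy "naming game" style simulation in the spirit of the norm-emergence literature, not a model taken from the paper: the population size, number of interactions, and the example conventions mentioned in the comments are our own assumptions for illustration.

```python
import random

# Toy model of bottom-up norm emergence: agents repeatedly pair up,
# exchange a candidate convention, and keep only conventions that led
# to a successful interaction. All parameters are illustrative choices.

N_AGENTS = 200           # population size (assumed)
N_INTERACTIONS = 50_000  # pairwise interactions to simulate (assumed)

# Each agent holds a set of candidate conventions, e.g. hypothetical
# norms such as "disclose AI assistance in the methods section".
inventories = [set() for _ in range(N_AGENTS)]
next_convention_id = 0

for _ in range(N_INTERACTIONS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)

    if not inventories[speaker]:
        # The speaker has no convention yet, so it invents a new one.
        inventories[speaker].add(next_convention_id)
        next_convention_id += 1

    convention = random.choice(tuple(inventories[speaker]))

    if convention in inventories[hearer]:
        # Success: both agents drop every alternative and keep the shared one.
        inventories[speaker] = {convention}
        inventories[hearer] = {convention}
    else:
        # Failure: the hearer adds the convention as a new candidate.
        inventories[hearer].add(convention)

# After enough interactions, the population typically converges on a
# single convention, even though nobody ever imposed it from above.
distinct = set().union(*inventories)
print(f"Distinct conventions still in circulation: {len(distinct)}")
```

Running the sketch usually ends with a single convention in circulation, which is the point: consensus can arise from local interactions alone, long before any regulator or professional body weighs in.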

The Future of AI Norms: A Call for Open Dialogue

As AI continues to evolve, it's crucial to maintain open and inclusive discussions about its role in society. The interplay between formal regulations, informal institutions, and spontaneous norms will shape the future of AI. We must address the challenges of rapid technological advancements, potential conflicts of interest, and the need for adaptability in our regulatory approaches. Fostering transparency and proactive measures will ensure that AI benefits humanity while upholding our democratic values.

About this Article

This article was crafted using a human-AI hybrid, collaborative approach. AI assisted our team with initial drafting, research insights, identifying key questions, and image generation. Our human editors guided topic selection, defined the angle, structured the content, ensured factual accuracy and relevance, refined the tone, and conducted thorough editing to deliver helpful, high-quality information. See our About page for more information.

This article is based on research published under:

DOI: 10.1098/rstb.2023.0028

Title: Shaping New Norms for AI

Subject: physics.soc-ph cs.ai cs.hc cs.si econ.gn q-fin.ec

Authors: Andrea Baronchelli

Published: 17-07-2023

Everything You Need To Know

1. What are the three main avenues through which norms around Artificial Intelligence are being formed?

Norms around Artificial Intelligence are forming through three main avenues: formal institutions, informal institutions, and spontaneous norms. Formal institutions, such as governments, create regulations and laws to govern AI behavior. Informal institutions, like universities and professional organizations, propose codes of behavior and address gaps in formal regulation. Spontaneous norms emerge from everyday behaviors and interactions within society, shaping how individuals and communities interact with AI technologies.

2. How do formal institutions, like governments, influence the development of norms surrounding Artificial Intelligence?

Formal institutions, such as governments, shape Artificial Intelligence norms by creating regulations and laws. These regulations aim to ensure the safe adoption of AI across various sectors like healthcare and finance, aligning AI use with human rights and ethical considerations. They also address potential risks associated with AI. For instance, there's increasing demand for top-down regulation to address existential threats and legal uncertainties related to AI.

3. In what ways do informal institutions contribute to shaping the norms associated with Artificial Intelligence?

Informal institutions, like universities and professional organizations, significantly contribute to shaping Artificial Intelligence norms. They propose codes of behavior and address gaps in formal regulations. Two sectors where informal institutions are actively developing norms are scientific publishing and education. For example, many publishers have decided that AI cannot be listed as an author, emphasizing human accountability and creativity in research. Education is another heated battleground, with institutions grappling with how to manage AI tools like ChatGPT.

4. What are some of the key challenges in regulating Artificial Intelligence highlighted in this article?

Some key challenges in regulating Artificial Intelligence include 'Unknown Unknowns': the rapid pace of AI development makes it difficult to anticipate future capabilities, leading to potential regulatory gaps. There is also 'Limited Regulatory Control': compared with sectors such as nuclear power, AI is easier to implement and harder to trace, making it difficult to restrict malicious uses. Finally, 'Guarding the Guardians': concentrated power in a few private companies raises concerns about conflicts of interest and the need for independent oversight. These challenges underscore the need for adaptability and proactive measures in AI norm formation.

5. Why are open dialogue and inclusive discussions crucial for the future of Artificial Intelligence norms?

Open and inclusive discussions are crucial for the future of Artificial Intelligence norms because the interplay between formal regulations, informal institutions, and spontaneous norms will shape AI's future. Addressing challenges like rapid technological advancements, potential conflicts of interest, and the need for adaptability in regulatory approaches requires a broad, inclusive conversation. Fostering transparency and proactive measures through open dialogue ensures that Artificial Intelligence benefits humanity while upholding democratic values.
