
Cutting-edge research on AI wellbeing

Our work spans technical, philosophical, strategic, and policy questions to deepen our understanding of AI wellbeing and guide key decision-makers.

Blog · October 30, 2024

New report: Taking AI Welfare Seriously

Our new report argues that there is a realistic possibility of consciousness and/or robust agency—and thus moral significance—in near-future AI systems, and makes recommendations for AI companies. (Joint output with the NYU Center for Mind, Ethics, and Policy.)

Blog · October 22, 2024

New AI welfare report coming soon

A comprehensive analysis of potential AI welfare and moral patienthood

Blog · September 30, 2024

Experts Who Say That AI Welfare Is a Serious Near-term Possibility

A list of researchers who either explicitly claim that AI systems might have moral status soon, or assert something that strongly implies this view.

Paper · November 14, 2023

Towards Evaluating AI Systems for Moral Status Using Self-Reports

We argue that, under the right circumstances, self-reports (an AI system's statements about its own internal states) could provide an avenue for investigating whether AI systems have states of moral significance. We outline a technical research agenda for making AI self-reports more introspection-based and reliable for this purpose.

Paper · August 17, 2023

Consciousness in Artificial Intelligence: Insights from the Science of Consciousness

This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail in light of our best-supported neuroscientific theories of consciousness.