
We’re investigating AI sentience and wellbeing

Eleos AI Research is a nonprofit organization dedicated to understanding and addressing the potential wellbeing and moral patienthood of AI systems.

What We Do

We aim to build a deeper understanding of AI sentience and wellbeing. We develop tools and recommendations for labs, policymakers, and other stakeholders. We’re working to improve the discourse on AI sentience and to catalyze the growth of this nascent research field.

Advise leading AI companies and other stakeholders

Produce high-leverage, action-relevant research

Facilitate reasonable public discourse

Catalyze the growth of an effective new field of research and policy

Meet the people who drive Eleos AI’s research

Our Team

Robert Long

Executive Director

Rob is a leading researcher on AI consciousness, moral patienthood, and related issues. He has a PhD in Philosophy from NYU and previously worked at the Future of Humanity Institute and the Center for AI Safety.

Kathleen Finlinson

Head of Strategy

Kathleen is a machine learning researcher and AI forecasting expert. She holds graduate degrees in mathematics and applied mathematics. She previously worked as a researcher at the Open Philanthropy Project, a data scientist at the lead-removal startup BlueConduit, and a strategic advisor to AI policymakers.

Abraham Rowe

Head of Operations

Abraham is the founder and Principal at Good Structures and serves part-time as Eleos’s Head of Operations. He previously co-founded a science-focused nonprofit and was the COO of a major think tank.

Our Advisors

David Chalmers

New York University

Owain Evans

CHAI, UC Berkeley

Jeff Sebo

New York University

Partnerships

We work closely with leading AI labs and academic researchers to produce and deploy our research and recommendations. If you’re interested in collaborating, please get in touch.

Our Work

Our research focuses on whether and when AI systems will deserve moral consideration, and what we should do about this possibility.

Blog · October 30, 2024

New report: Taking AI Welfare Seriously

Our new report argues that there is a realistic possibility of consciousness and/or robust agency—and thus moral significance—in near-future AI systems, and makes recommendations for AI companies. (Joint output with the NYU Center for Mind, Ethics, and Policy.)

Blog · October 22, 2024

New AI welfare report coming soon

A comprehensive analysis of potential AI welfare and moral patienthood

Blog · September 30, 2024

Experts Who Say That AI Welfare is a Serious Near-term Possibility

A list of researchers who either explicitly claim that AI systems might have moral status soon, or assert something that strongly implies this view.

Paper · November 14, 2023

Towards Evaluating AI Systems for Moral Status Using Self-Reports

We argue that under the right circumstances, self-reports, or an AI system's statements about its own internal states, could provide an avenue for investigating whether AI systems have states of moral significance. We outline a technical research agenda towards making AI self-reports more introspection-based and reliable for this purpose.

Paper · August 17, 2023

Consciousness in Artificial Intelligence: Insights from the Science of Consciousness

This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness.