
Eleos AI Research is a nonprofit organization dedicated to understanding and addressing the potential wellbeing and moral patienthood of AI systems.
What We Do
We aim to build a deeper understanding of AI sentience and wellbeing. We develop tools and recommendations for labs, policymakers, and other stakeholders. We're working to improve the discourse on AI sentience and to catalyze the growth of this nascent research field.
Advise leading AI companies and other stakeholders
Produce high-leverage, action-relevant research
Facilitate reasonable public discourse
Catalyze the growth of an effective new field of research and policy
Our Team
Meet the people who drive Eleos AI's research

Rob Long
Executive Director
Rob is a leading researcher on AI consciousness and AI welfare. He has a PhD in Philosophy from NYU and previously worked at the Future of Humanity Institute and the Center for AI Safety.

Kathleen Finlinson
Director of Strategy
Kathleen is a machine learning researcher and AI forecasting expert. She holds graduate degrees in math and applied math. She previously worked as a researcher at the Open Philanthropy Project, a data scientist at lead removal startup BlueConduit, and a strategic advisor for AI policymakers.

Rosie Campbell
Director of Special Projects
Rosie Campbell is an AI governance researcher who worked on frontier policy issues at OpenAI. Previously, she was Head of Safety-Critical AI at the Partnership on AI and Assistant Director at UC Berkeley's Center for Human-Compatible AI. She has a background as a research engineer and holds degrees in Physics and Computer Science.

Patrick Butlin
Senior Research Lead
Patrick is a philosopher with research interests in AI consciousness, agency, and moral patienthood. He holds a PhD from King's College London and previously worked at the Global Priorities Institute and the Future of Humanity Institute.

Abraham Rowe
Head of Operations
Abraham is the founder and Principal at Good Structures and serves part-time as Eleos’ head of operations. Previously, he co-founded a science-focused nonprofit and was the COO of a major think tank.
Our Advisors
David Chalmers
New York University
Owain Evans
CHAI - UC Berkeley
Jeff Sebo
New York University
Emma Abele
METR
Partnerships
We work closely with leading AI labs and academic researchers to produce and deploy our research and recommendations. If you’re interested in collaborating, please get in touch.
Our Work
Our research focuses on whether and when AI systems will deserve moral consideration, and what we should do about this possibility.
Blog · March 18, 2025
AI welfare organization Eleos expands team with hires from OpenAI and Oxford
Eleos AI Research welcomes Rosie Campbell, former Policy Frontiers lead at OpenAI, and Patrick Butlin, AI consciousness researcher from the Global Priorities Institute at Oxford University, to strengthen our work on AI sentience and welfare. Campbell joins as Director of Special Projects, while Butlin will be Senior Research Lead, leading our work on evaluating consciousness and moral status in AI systems.
Blog · March 17, 2025
Research priorities for AI welfare
As AI systems become more sophisticated, understanding and addressing their potential welfare becomes increasingly important. At Eleos AI Research, we've identified five key research priorities: developing concrete welfare interventions, establishing human-AI cooperation frameworks, leveraging AI progress to advance welfare research, creating standardized welfare evaluations, and communicating credibly about AI welfare.
Blog · January 28, 2025
Key concepts and current beliefs about AI moral patienthood
The concepts and views that guide our research and strategy.
Blog · January 22, 2025
Working paper: key strategic considerations for taking action on AI welfare
Eleos outlines the key strategic considerations that inform near-term action on AI welfare while maintaining focus on long-term outcomes.
Blog · January 4, 2025
Working paper: review of AI welfare interventions
Recommendations for concrete action on AI welfare.
Blog · October 30, 2024
New report: Taking AI Welfare Seriously
Our new report argues that there is a realistic possibility of consciousness and/or robust agency—and thus moral significance—in near-future AI systems, and makes recommendations for AI companies. (Joint output with the NYU Center for Mind, Ethics, and Policy.)
Blog · October 22, 2024
New AI welfare report coming soon
A comprehensive analysis of potential AI welfare and moral patienthood.
Paper · October 17, 2024
Looking Inward: Language Models Can Learn About Themselves by Introspection
Can LLMs introspect? Speculatively, an introspective model might self-report on whether it possesses certain internal states, such as subjective feelings or desires, and this could inform us about the moral status of these states. In this paper, we study introspection by finetuning LLMs to predict properties of their own behavior in hypothetical scenarios.
Blog · September 30, 2024
Experts Who Say That AI Welfare Is a Serious Near-term Possibility
A list of researchers who either explicitly claim that AI systems might have moral status soon, or assert something that strongly implies this view.
Paper · November 14, 2023
Towards Evaluating AI Systems for Moral Status Using Self-Reports
We argue that under the right circumstances, self-reports, or an AI system's statements about its own internal states, could provide an avenue for investigating whether AI systems have states of moral significance. We outline a technical research agenda towards making AI self-reports more introspection-based and reliable for this purpose.
Paper · August 17, 2023
Consciousness in Artificial Intelligence: Insights from the Science of Consciousness
This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness.