
Blog · March 18, 2025
AI welfare organization Eleos expands team with hires from OpenAI and Oxford
Eleos AI Research welcomes Rosie Campbell, former Policy Frontiers lead at OpenAI, and Patrick Butlin, an AI consciousness researcher from the Global Priorities Institute at Oxford University, to strengthen our work on AI sentience and welfare. Campbell joins as Director of Special Projects, while Butlin will serve as Senior Research Lead, directing our work on evaluating consciousness and moral status in AI systems.
Blog · March 17, 2025
Research priorities for AI welfare
As AI systems become more sophisticated, understanding and addressing their potential welfare becomes increasingly important. At Eleos AI Research, we've identified five key research priorities: developing concrete welfare interventions, establishing human-AI cooperation frameworks, leveraging AI progress to advance welfare research, creating standardized welfare evaluations, and communicating credibly about AI welfare.
Blog · January 28, 2025
Key concepts and current beliefs about AI moral patienthood
The concepts and views that guide our research and strategy.
Blog · January 22, 2025
Working paper: key strategic considerations for taking action on AI welfare
Eleos outlines the key strategic considerations that inform our near-term action on AI welfare while maintaining focus on long-term outcomes.
Blog · January 4, 2025
Working paper: review of AI welfare interventions
Recommendations for concrete action on AI welfare.
Blog · October 30, 2024
New report: Taking AI Welfare Seriously
Our new report argues that there is a realistic possibility of consciousness and/or robust agency—and thus moral significance—in near-future AI systems, and makes recommendations for AI companies. (Joint output with the NYU Center for Mind, Ethics, and Policy.)
Blog · October 22, 2024
New AI welfare report coming soon
A comprehensive analysis of potential AI welfare and moral patienthood.
Paper · October 17, 2024
Looking Inward: Language Models Can Learn About Themselves by Introspection
Can LLMs introspect? Speculatively, an introspective model might self-report on whether it possesses certain internal states, such as subjective feelings or desires, and this could inform us about the moral status of these states. In this paper, we study introspection by finetuning LLMs to predict properties of their own behavior in hypothetical scenarios.
Blog · September 30, 2024
Experts Who Say That AI Welfare Is a Serious Near-Term Possibility
A list of researchers who either explicitly claim that AI systems might have moral status soon, or assert something that strongly implies this view.
Paper · November 14, 2023
Towards Evaluating AI Systems for Moral Status Using Self-Reports
We argue that under the right circumstances, self-reports, or an AI system's statements about its own internal states, could provide an avenue for investigating whether AI systems have states of moral significance. We outline a technical research agenda towards making AI self-reports more introspection-based and reliable for this purpose.
Paper · August 17, 2023
Consciousness in Artificial Intelligence: Insights from the Science of Consciousness
This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness.