Ned Cooper
I study how to design, evaluate, and govern AI systems in high-stakes domains, where getting deployment right matters most.
I specialise in designing AI mental health products and policies, evaluating LLM capabilities like theory of mind and persuasion, and participatory approaches to AI governance.
Currently I'm a postdoctoral researcher in the DesignAI lab at Cornell, and an affiliate of the MINT and Social-Centered AI labs.
Previously, I worked at Google Research on speech technologies for underserved language communities.
Before academia, I spent eight years in strategy consulting and law, including managing network strategy and regulatory policy for Australia's national broadband rollout, and criminal defence work with Aboriginal communities.
Email: ned [dot] cooper [at] cornell [dot] edu
RECENT PUBLICATIONS:
Designing AI
- Ned Cooper, Jose A. Guridi, Angel Hsing-Chi Hwang, Beth Kolko, Emma Elizabeth McGinty, Qian Yang. 2026. Framing Responsible Design of AI for Mental Well-Being: AI as Primary Care, Nutritional Supplement, or Yoga Instructor? ACM Conference on Human Factors in Computing Systems (CHI).
- Ned Cooper and Alexandra Zafiroglu. 2025. Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models. ACM Journal on Responsible Computing.
- Ben Hutchinson, Celeste Rodríguez Louro, Glenys Collard, and Ned Cooper. 2025. Designing Speech Technologies for Australian Aboriginal English: Opportunities, Risks and Participation. ACM Conference on Fairness, Accountability, and Transparency (FAccT).
Evaluating AI
- Jared Moore, Rasmus Overmark*, Ned Cooper*, Beba Cibralic, Nick Haber, Cameron R. Jones. 2026. Large Language Models Persuade Without Planning Theory of Mind. arXiv.
- Jared Moore, Ned Cooper*, Rasmus Overmark*, Beba Cibralic, Nick Haber, Cameron R. Jones. 2025. Do Large Language Models Have a Planning Theory of Mind? Evidence from MINDGAMES: a Multi-Step Persuasion Task. Second Conference on Language Modeling (COLM).
*Equal contribution