What makes a system intelligent? The biological brain is our only example of general intelligence, yet modern AI systems are beginning to demonstrate similarly broad capabilities. Our lab approaches intelligence as a scientific problem, seeking the core design principles that enable adaptable, goal‑directed behavior in both brains and machines and studying how these principles scale to impact society.

Because the brain emerged through a complex evolutionary process, we reverse‑engineer neural circuits by simulating evolution in silico, toggling components such as architecture class, objective function, data stream, and learning rule (depicted below). At the same time, we investigate how AI systems interact with human values and economic structures, working toward safety and alignment frameworks that support equitable outcomes. Learn more here.

[Figure: the four components varied in the in‑silico evolutionary search (architecture class, objective function, data stream, and learning rule).]

By evaluating the models generated from these components against neural recordings, behavioral data, and societal outcomes, our long‑term goal is not only to build normative accounts of how intelligent behavior arises but also to guide the design of AI systems that are capable, aligned with human values, and beneficial to society.
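In code, this search can be pictured as a loop over a small design grid, scoring each candidate configuration against held‑out data. The sketch below is a minimal illustration of that idea only: the component names, the random "stimuli" and "recordings", and the stand‑in model and scoring functions are all hypothetical placeholders, not our actual architectures, datasets, or evaluation pipeline.

```python
# Minimal, illustrative sketch of a component-toggling model search.
# Every name and dataset below is a hypothetical placeholder; this is not
# the lab's actual pipeline.
import itertools
import zlib

import numpy as np

# Hypothetical design grid: the four components being "toggled".
DESIGN_GRID = {
    "architecture": ["convnet", "transformer", "recurrent"],
    "objective": ["supervised", "contrastive", "predictive"],
    "data_stream": ["static_images", "egocentric_video"],
    "learning_rule": ["backprop", "local_hebbian"],
}

def simulate_model_features(config, stimuli):
    """Stand-in for running a trained model: returns features whose values
    depend (arbitrarily) on the configuration, via a deterministic seed."""
    seed = zlib.crc32(str(sorted(config.items())).encode())
    local_rng = np.random.default_rng(seed)
    weights = local_rng.normal(size=(stimuli.shape[1], 50))
    return np.tanh(stimuli @ weights)

def neural_predictivity(features, recordings, alpha=1.0):
    """Toy 'neural fit': ridge-regress recordings onto model features on the
    first half of stimuli, then report mean correlation on the held-out half."""
    n = features.shape[0]
    train, test = slice(0, n // 2), slice(n // 2, n)
    X, Y = features[train], recordings[train]
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
    pred = features[test] @ W
    r = [np.corrcoef(pred[:, i], recordings[test][:, i])[0, 1]
         for i in range(recordings.shape[1])]
    return float(np.nanmean(r))

# Synthetic stand-ins for stimuli and neural recordings.
rng = np.random.default_rng(0)
stimuli = rng.normal(size=(200, 64))
recordings = rng.normal(size=(200, 30))

scores = {}
for combo in itertools.product(*DESIGN_GRID.values()):
    config = dict(zip(DESIGN_GRID.keys(), combo))
    features = simulate_model_features(config, stimuli)
    scores[combo] = neural_predictivity(features, recordings)

best = max(scores, key=scores.get)
print("best (toy) configuration:", dict(zip(DESIGN_GRID.keys(), best)))
```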

Below are some representative papers relevant to the questions above. When possible, I link to the freely accessible preprint, which may differ from the final published version. For a full publication list, see my CV. See here for presentations on some of this work, as well as here and here for short‑ and long‑form video overviews, respectively (the long‑form version includes past work).

AI Safety & Society

Our lab is also engaged in research on AI alignment and the societal impact of AI. Representative papers include:

  • A. Nayebi. An AI capability threshold for rent‑funded universal basic income in an AI‑automated economy. In this economic analysis, we derive conditions under which AI‑generated profits could sustainably finance a universal basic income. We show that AI systems must achieve only ~5–6× existing automation productivity to fund an 11%‑of‑GDP UBI, and that raising the public revenue share to about 33% lowers this threshold to ~3×. A back‑of‑envelope illustration of this scaling appears after this list. 2025. [code][summary]

  • A. Nayebi. Intrinsic barriers and practical pathways for human‑AI alignment: an agreement‑based complexity analysis. This paper formalizes AI alignment as a multi‑objective optimization problem and establishes information‑theoretic lower bounds showing that once either the number of objectives or the number of agents is large enough, no amount of interaction or rationality can avoid intrinsic alignment overhead. These results highlight fundamental complexity‑theoretic constraints and provide guidelines for safer, scalable human–AI collaboration. To appear in the 40th Annual AAAI Conference on Artificial Intelligence (AAAI), Special Track on AI Alignment, 2026. (Selected for oral presentation) [summary][talk recording]
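The first paper's headline numbers can be related to a simple back‑of‑envelope identity: if a UBI costs a fraction u of GDP and the public captures a share s of AI‑generated profits, then those profits must reach u/s of GDP. The snippet below illustrates only that inverse relationship; it is not the paper's model (which derives the capability threshold as a productivity multiple over existing automation), and the revenue shares tabulated here are hypothetical placeholders.

```python
# Back-of-envelope illustration of why a larger public revenue share lowers
# the AI funding threshold for a UBI. This is NOT the paper's model; the
# revenue shares below are hypothetical placeholders used only to show the
# inverse scaling between the captured share and the required profit base.

UBI_COST_SHARE_OF_GDP = 0.11  # the 11%-of-GDP UBI discussed in the paper

def required_ai_profit_share(ubi_cost, public_revenue_share):
    """AI profits (as a share of GDP) needed so that taxing them at
    `public_revenue_share` fully covers the UBI: profits * share >= cost."""
    return ubi_cost / public_revenue_share

for share in (0.15, 0.25, 0.33, 0.50):
    needed = required_ai_profit_share(UBI_COST_SHARE_OF_GDP, share)
    print(f"public revenue share {share:.0%}: "
          f"AI profits must reach {needed:.1%} of GDP to fund the UBI")
```

Under this toy identity the required profit base scales as 1/share, so roughly doubling the captured share roughly halves the threshold, which is the qualitative pattern behind the paper's ~5–6× versus ~3× figures.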

NeuroAI (Selected Papers)

(*: joint first author; †: joint senior author)