What makes a system intelligent? The biological brain is our only example of general intelligence, yet modern AI systems are beginning to demonstrate similarly broad capabilities. Our lab approaches intelligence as a scientific problem, seeking the core design principles that enable adaptable, goal‑directed behavior in both brains and machines and studying how these principles scale to impact society.

Because the brain emerged through a complex evolutionary process, we reverse‑engineer neural circuits by simulating evolution in silico, toggling components such as architecture class, objective function, data stream, and learning rule (depicted below). At the same time, we investigate how AI systems interact with human values and economic structures, working toward safety and alignment frameworks that support equitable outcomes.

[Figure: schematic of the in‑silico evolution framework, varying architecture class, objective function, data stream, and learning rule]

By evaluating the models generated from these components against neural recordings, behavioral data, and societal outcomes, our long‑term goal is not only to build normative accounts of how intelligent behavior arises but also to guide the design of AI systems that are capable, aligned with human values, and beneficial to society.

Below are some representative papers relevant to the questions above. When possible, I link to the freely accessible preprint, which may differ from the final published version. For a full publication list, see my CV. See here for presentations on some of this work, and see here and here for short‑ and long‑form video overviews, respectively (the long form includes past work).

AI Safety & Society

Our lab is also engaged in research on AI alignment and the societal impact of AI. Representative papers include:

NeuroAI (Selected Papers)

(*: joint first author; †: joint senior author)