Benjamin Harack
Ben studies the potential for artificial intelligence (AI) to trigger a world war and how to prevent that from happening.
He is a DPhil Affiliate at the Oxford Martin AI Governance Initiative, a research group examining the risks of AI and how those risks can be addressed through governance.
Previously, he was the co-founder of the Vision of Earth project, the primary author of Ruling Ourselves, and one of the engineers behind the Human Diagnosis Project — a worldwide effort led by the global medical community to build an open intelligence system that maps the steps to help any patient.
Ben spent a decade in Silicon Valley as a software engineer and engineering manager. His transition to studying the nexus of AI and international relations was motivated by a growing understanding that a) controlling AI will probably become extraordinarily difficult as it becomes more powerful, and b) the unchecked pursuit of transformative technologies like AI could upend the global order, even triggering nuclear war.
He holds degrees in computer science, mathematics, physics, and psychology, as well as a Master's degree in physics. His honours work in computer science focused on parallel computing, while his honours work in physics focused on nuclear physics. His Master's thesis examined the physics of semiconductors.
Research
Extending the bargaining model of war to examine both the potential for artificial intelligence to trigger great-power war and the potential for rational states to find peaceful alternatives.
Designing international institutions which can govern civilian AI (see International Governance of Civilian AI: A Jurisdictional Certification Approach report) and military AI.
Understanding how “social dilemmas” in game theory (such as the prisoner’s dilemma) are different under existential risk than under other kinds of risk.
Exploring the dynamics of technology races in the Modeling Cooperation project.
Examining whether the advent of truly “existential” concerns during the Cold War (due to the idea of nuclear winter) led to a shift in rhetoric, behaviour, and policy for the superpowers.
Examining verification mechanisms that can be used to prove compliance with international AI governance agreements.
Areas of expertise
Artificial intelligence
Semiconductors
Cryptography
Nuclear science
Formal theory
Quantitative analysis
Publications
Robert Trager, Ben Harack, Anka Reuel, et al. International Governance of Civilian AI: A Jurisdictional Certification Approach.