This interview has been edited for length and clarity.
Berkeley, California. Logan Graham is an artificial intelligence (AI) researcher and former Special Advisor to the British Prime Minister. He has worked at the intersection of biology and machine learning at Google X and Babylon Health, and on science and technology policy in government. He is currently working on a biosecurity startup and on large language models. Interact’s Arjun Ramani caught up with Logan to find out more.
Hey Logan! Where did you grow up? And what type of kid were you?
I grew up between Vancouver, Canada and some very large islands just west of Vancouver. Practically that meant my childhood was full of trees, mountains and oceans. My nickname as a two-year-old was “Buddha boy” because I was always very calm and happy, and I’ve mostly always been that way. But growing up I faced quite crippling health issues. I was born prematurely with a set of muscle and respiratory issues and later developed arthritis and uveitis. That basically meant that for the next three years I had a lot of trouble walking. There was a chance I would be heavily disabled and would even go blind. So I think the contrast between that counterfactual path of my life, which is very different from where I am now, and the fact that I was generally still very happy through it, was a big defining factor of my childhood.
How do you think experiencing those medical difficulties has influenced who you are today?
Knowing there is a thin line between being able to move and see, and not, has motivated me to make the most of what I’ve been given. Every time I pedal on a bike or go for a long hike, I’m almost always thinking about how remarkable it is that I can do that, which is incredibly motivating. It also gave me a sense that life is random and short, and I might not have very much time, so I might as well do as ambitious a thing as I possibly could. Recently, these health issues have been popping up again, which has made me think, “Oh God, if I’m 35, and have a family and am building a company, and then I wake up one day and I can’t walk anymore, what do I do?” But I sort of learned when I was younger how to accept it. So if I am 35, and that does happen, I hope I’ll at least be emotionally prepared. But we’ll see.
You’re clearly making the most of it. You studied economics in undergrad and then you did your PhD in machine learning at Oxford on a Rhodes Scholarship. Tell us about what you worked on and why you think it’s important.
I went into economics because I wanted to understand how the world worked and find a mathematical description of it. But I realized that the world was giving off far more information than human brains could process–so I thought, maybe we should build some more brains, which could figure out how the world worked for us. And that got me super fascinated by the world of machine learning.
I focused on causality because “what causes what, and why” seemed like the most important question. It had this sort of divine quality in economics, the pursuit of which was holy and religious. It’s like the Eigen property of the universe–everything that matters is just causal interactions between things. Practically, if you want to build a good artificial general intelligence (AGI), you need two things. First, you need some kind of verifiability or understanding of what it is doing, and that kind of verifiability seems to pop up with causality. Second, causality is likely extremely helpful for making AGI–it is an extremely efficient compression of how things work.
There are lots of ways to do causality in machine learning. I think that, at least for now, the “stack more layers and train a lot” approach, letting causality emerge from learning, seems to be winning. And that’s why I’m working on [Large Language Models].
Great, we’ll talk more about your current work soon. So after your PhD you ended up at Number 10 as a Special Advisor to the British Prime Minister. How did that happen?
The Prime Minister’s Office was trying to bring in some unusual people — scientists, entrepreneurs, economists. Some of them had been reading similar papers in causality and machine learning and shared similar goals of building tools like a simulation of the economy. That was one of my goals too. So after some calls at 2am my time, they brought me in.
Extremely cool. What did you work on?
A lot of things. The actual first goal was to build an AI research lab in the basement of Number 10. Imagine this world where Number 10 has a rack of GPUs, simulating the economy, predicting wars and identifying new technologies for us to invest in. Pretty quickly, you realize that in government, especially for anything technically complex, you’re really under-resourced. There’s a lack of the right people in the right place at the right time. So I spent the next 1.5 years focused on how we could overhaul science and technology. We believed the thing that mattered at the state level in the 21st century was to provide conditions similar to what the UK had at the start of the Industrial Revolution. The UK was at the founding of AI, computing, genomics, and molecular biology. Why couldn’t it be there again this time?
Concretely, we did a bunch of things. First, we set up a data science team inside Number 10. We also set up ARIA, a non-defence clone of DARPA, as part of one of the largest boosts to R&D funding in a short period of time in the UK. And we oversaw the national AI strategy and a series of other technology strategies. Some of the most meaningful work I did was around big catastrophic risks like biosecurity.
How much of that momentum will continue under future governments?
Some of the R&D-related work now has a lot of bipartisan support. So I suspect R&D levels will stick and hopefully grow. The same thing holds for ARIA, which is legally enshrined.
However, in the broader sense of whether there is somebody inside Number 10 thinking a lot about science and technology, and its implications for growth and foreign policy, I think unfortunately you need the right person in there at the right time. But the forces don’t really conspire to make sure of that. The reason is that political feedback cycles are short-term. Long-term issues like science, technology and security are not things that political leaders wake up and spend a lot of time thinking about.
On the other hand, I’ve become more convinced that for many crises you can actually afford to mostly look past them–long-term delivery is what actually matters to people. As an example, recently the UK faced an extreme increase in energy price bills. That and the cost of the energy price caps dominated headlines, which is not at all unjustified. Yet one day at work I pointed out that had the UK been on the growth trajectory of the US since the point at which British growth leveled off, it could weather nearly 15 energy price cap crises a year. The long-term issue is really the issue.
Switching gears to what you’re working on. You mentioned both biosecurity and large language models. Can you spell that out for us?
I think a radicalizing conclusion of working on science at the nation-state level is that more than ever, a small number of people with access to compute, or some advanced technology, can change the world. So I left thinking that I want to build technology that shapes the world in a positive direction, which involves two things.
The first is biosecurity. It seems plausible that over the next several decades, biosecurity will be any country’s largest security threat. The cost of generating new biological risks is decreasing and the feasibility is increasing, so we need some sort of biodefense infrastructure layer. If we’re successful, in 10 or 20 years that will just be a normal part of daily life, like the internet or roads. I’m specifically working on technical interventions, from detection–how do we find and characterize these pathogens really, really quickly?–to a software layer for governments or enterprises to figure out how concerned they should be and what they should do. And then there are physical interventions in the world. How do you rapidly develop therapeutics within hours or days of a new pathogen being identified? I’m specifically focused on starting a company at the early-detection stage.
The second is large language models and the safety of artificial intelligence systems. ChatGPT is just a herald of what’s coming. In the AI strategy in the UK, we wrote that more general artificial intelligence systems should be considered a top-level national security issue. These technologies are going to be at the doors of generals and presidents, and we need to think far more seriously about what to do about them. In general, powerful, democratized technology can equally be the best thing that’s ever happened to humanity or the worst.
How do you approach the early-detection problem in biosecurity?
We want a world where there are, say, a million boxes everywhere sampling things every hour which would let you know if there’s anything dangerous spreading. There’s some hope that test devices will become 10 times cheaper in the next five years. Maybe every home can have something that you quickly spit on and it will tell you what’s up.
I’m working on a few strands. First, is there a version of that million-box technology that is really cheap and easy to deploy everywhere? We do have some version of the basic technology today; it’s just expensive and takes a long time. Second is software. How does the Department of Defense or a large enterprise get an enterprise layer that takes all this data on pathogens in their physical environment–warehouses, offices, supply chain–and figures out how exposed they are and what steps they need to take?
The difficulty is who pays for it; right now, it’s basically a market failure. For example, COVID probably cost $10 trillion to the US alone. If you expect something like that to happen, say, once every 100 years, then maybe you’d be willing to spend $100 billion a year to avoid it, but we don’t do anything like that. I’m hopeful that in the next few years we’ll build and deploy this “biodefense technology stack” and it’ll be one of the most exciting missions to be on.
How does Interact fit into this winding journey through research, government and startups?
So I came across someone’s response to the Interact application online, and their essays were just extremely thoughtful about technology and their place in the world. That year we had to submit a video teaching something as part of the application. Mine was me teaching this totally useless algorithm. Every day when I went to the lab, the algorithm would generate a maximally random integer from zero to 10–and that was the number of times my best friend and I would shake hands before getting to work. Thankfully my reviewer didn’t think I was an idiot.
The main impact of Interact on me has been to raise my aspirations. The person that I roomed with now runs one of the largest AI companies in the world. I love when people update the email newsletter with the agriculture company they started, the research they published, or the marriage they’ve been deepening. People are just extremely ambitious to wield their skills in a really positive way. In some sense, it was a bit of what I felt was missing from the Rhodes [Scholarship], which is nominally a similar thing. But Interact just seems fit for the 21st century in the sense that it emphasizes technology and the organizations that help scale technology.
Last question: what’s the best thing you’ve read in the last year?
Something that I keep thinking about is “War 101”, which is a practical but narrative essay, written by an anonymous author, on what it’s like on day one of being drafted into the Armed Forces of Ukraine. It goes through what you need to know about war.
Another one that I really liked is “There Is No Antimemetics Division”. I read it on a lake in Canada with Canada geese flying overhead.
_____
Interact is a community of mission-driven technologists. Applications for the 2023 Fellowship are now open at joininteract.com.