Have any feedback about the podcast? You can share your thoughts here.

2:44 What was the intention behind forming Anthropic?
6:28 Do the founders of Anthropic share a similar view on AI?
7:55 What is Anthropic’s focused research bet?
11:10 Does AI existential safety fit into Anthropic’s work and thinking?
14:14 Examples of AI models today that have properties relevant to future AI existential safety
20:02 What does it mean for a model to lie?
22:44 Safety concerns around the open-endedness of large models
29:01 How does safety work fit into race dynamics to more and more powerful AI?
36:16 Anthropic’s mission and how it fits into AI alignment
38:40 Why explore large models for AI safety and scaling to more intelligent systems?
43:24 Is Anthropic’s research strategy a form of prosaic alignment?
46:22 Anthropic’s recent research and papers
49:52 How difficult is it to interpret current AI models?
52:40 Anthropic’s research on alignment and societal impact
55:35 Why did you decide to release tools and videos alongside your interpretability research?
1:01:04 What is it like working with your sibling?
1:05:33 Inspiration around creating Anthropic
1:12:40 Is there an upward bound on capability gains from scaling current models?
1:18:00 Why is it unlikely that continuously increasing the number of parameters on models will lead to AGI?
1:22:26 How does Anthropic see itself as positioned in the AI safety space?
1:25:35 What does being a public benefit corporation mean for Anthropic?
1:30:55 Anthropic’s perspective on windfall profits from powerful AI systems
1:34:07 Issues with current AI systems and their relationship with long-term safety concerns
1:39:30 Anthropic’s plan to communicate its work to technical researchers and policy makers
1:48:30 What it’s like working at Anthropic
1:52:48 Why hire people of a wide variety of technical backgrounds?
1:54:33 What’s a future you’re excited about or hopeful for?
1:59:42 Where to find and follow Anthropic

You can listen to the podcast above or read the transcript below.

Lucas Perry: Welcome to the Future of Life Institute Podcast. Today’s episode is with Daniela and Dario Amodei of Anthropic. For those not familiar, Anthropic is a new AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems. Their view is that large, general AI systems of today can have significant benefits, but can also be unpredictable, unreliable, and opaque. Their goal is to make progress on these issues through research, and, down the road, create value commercially and for public benefit. Daniela and Dario join us to discuss the mission of Anthropic, their perspective on AI safety, their research strategy, as well as what it’s like to work there and the positions they’re currently hiring for.

Daniela Amodei is Co-Founder and President of Anthropic. She was previously at Stripe and OpenAI, and has also served as a congressional staffer. Dario Amodei is CEO and Co-Founder of Anthropic. He was previously at OpenAI, Google, and Baidu. Dario holds a PhD in (Bio)physics from Princeton University.

Before we jump into the interview, we have a few announcements. If you’ve tuned into any of the previous two episodes, you can skip ahead just a bit. The first announcement is that I will be moving on from my role as Host of the FLI Podcast, and this means two things. The first is that FLI is hiring a new host for the podcast. As host, you would be responsible for the guest selection, interviews, production, and publication of the FLI Podcast. If you’re interested in applying for this position, you can head over to the careers tab for more information. We also have four other job openings currently: a Human Resources Manager, an Editorial Manager, an EU Policy Analyst, and an Operations Specialist. You can learn more about those at the careers tab as well.

The second item is that even though I will no longer be the host of the FLI Podcast, I won’t be disappearing from the podcasting space. I’m starting a brand new podcast focused on exploring questions around wisdom, philosophy, science, and technology, where you’ll see some of the same themes we explore here, like existential risk and AI alignment. I’ll have more details about my new podcast soon. If you’d like to stay up to date, you can follow me on Twitter at LucasFMPerry, link in the description. And with that, I’m happy to present this interview with Daniela and Dario Amodei on Anthropic.

It’s really wonderful to have you guys here on the podcast. I’m super excited to be learning all about Anthropic. So we can start off here with a pretty simple question: what was the intention behind forming Anthropic?

Daniela Amodei: Yeah. Well, first of all, Lucas, thanks so much for having us on the show. So I guess maybe I’ll kind of start with this one.