5 questions for Google DeepMind's Helen King

How the next wave of technology is upending the global economy and its power structures
Aug 23, 2024
 

By Derek Robertson

Helen King. | Google DeepMind

Hello, and welcome to this week’s installment of the Future in Five Questions. This week we interviewed Helen King, Google DeepMind’s senior director of responsibility. As one of Google’s top decision-makers on artificial intelligence risk and governance, she discussed how far the AI community has to go in learning how to evaluate its own tools, how AI might influence human decision-making and why everyone needs to get on the same page about what, exactly, “red-teaming” means. An edited and condensed version of the conversation follows:

What’s one underrated big idea?

To build AI systems that are safe, ethical and beneficial for everyone, we need to bring together points of view from fields that may not typically intersect. The first step should happen internally, by assembling multidisciplinary teams. The next step is to engage third parties who can add specialized knowledge to decision-making and introduce positions we might not have considered. This was a core part of how we approached the release of our AI system AlphaFold. Through consultation with experts across biology, pharma, biosecurity and human rights, we built an understanding of how our release strategy could balance benefits and risks. That wouldn’t have been possible if we hadn’t thought beyond the walls of Google DeepMind.

What’s a technology that you think is overhyped? 

Right now, I’d say AI evaluation tools are getting attention, but the attention is somewhat premature. Tools that help researchers identify capabilities and unwanted behaviors in AI systems will be crucial for managing risks and ensuring AI benefits everyone. But it’s important to acknowledge how nascent the field still is, despite the enormous progress made.

There are gaps in our collective understanding of how evaluations should work and what defines a “good” evaluation. One effort I’m particularly proud of is the launch of our Frontier Safety Framework, a set of protocols for proactively identifying future AI capabilities that could cause severe harm and putting in place mechanisms to detect and mitigate them. It’s also positive to see efforts such as MLCommons, Google’s SAIF framework and the Frontier Model Forum’s AI Safety Fund helping fill the funding gap by providing grants to independent researchers exploring some of the most critical safety risks associated with frontier AI systems.

What book most shaped your conception of the future?

A book I read many years ago and often come back to is “Yes!” by Noah J. Goldstein, Steve J. Martin and Robert B. Cialdini. It’s about human psychology and persuasion, and it contains food for thought on how we are influenced and persuaded. I often think about how that will translate to the future as generative AI systems become increasingly advanced and could influence decision-making, and it’s something my colleagues are thinking about too.

What could the government be doing regarding technology that it isn’t?

Consistency in the way that different companies and countries evaluate the capabilities of AI systems is essential. To achieve that, we all need to be on the same page when we talk about specific approaches to evaluations. “Red-teaming” is a very valuable practice, but different parties have different understandings of what it means, so it’s important for us to have a rigorous discussion about that before we jump to over-specifying requirements in legislation.

Governments can play an important role here by working closely with industry, civil society, and academia to agree on what specific terms mean, and help drive consistency in the way we use them. This in turn helps us set the right safety norms.

What has surprised you the most this year?

I’ve been working at the forefront of AI development and research for over a decade, and have been pleased to see so much collaboration on AI safety around the world — especially during the past year. It’s taken a lot of different forms, from the Frontier AI Safety Commitments, which brought together 16 leading AI companies to agree on specific steps for developing AI safely, to the launch of dedicated AI Safety Institutes across 10 countries and the European Union. The Institute for Advanced Study also brought together scientific experts across safety and ethics to align on guidance for evaluation. This type of collaboration is essential to help us develop a shared understanding of the opportunities and risks presented by frontier AI models, and ensure billions of people around the world benefit.

 

trump's de-fiant stand

Former President Donald Trump. | Photo by Michael Vadon, courtesy of Flickr.

Don Jr. and Eric Trump’s much-hyped crypto startup still hasn’t arrived, but their famous dad is already hawking it on the campaign trail.

POLITICO’s Jasper Goodman reported for Pro subscribers on former President Donald Trump’s post on Truth Social hyping the project. Trump linked to a Telegram channel called “The DeFiant Ones” (get it?) and wrote that “For too long, the average American has been squeezed by the big banks and financial elites. It's time we take a stand — together.”

As for what that stand might actually look like… it’s still unclear: there are no details about the project, and confusion remains over who is actually launching it. But the former president’s seal of approval comes as no surprise, given both the familial connection and Trump’s own bear hug of the industry in recent months.

the future of media?

The future of media might have been on display at the Democratic National Convention this week, and not everyone liked what they saw.

POLITICO’s Calder McHugh reported on the influencers invited by the Democratic Party to cover its convention, who flooded Chicago with a decidedly different take on political coverage. Many were there expressly to boost Democratic causes and Vice President Kamala Harris’ campaign, and clashed with members of the traditional media who said the influencers received preferential treatment in terms of access, seats and accommodations for the convention.

“When you’re getting a TikTok from an influencer about what’s happening at the DNC, that is not objective, that is a subjective person, putting on a very specific spin,” Grace Segers, a staff writer at The New Republic, told Calder. “And you can argue that the media has its own spin, but journalists care about fact-checking. We care about making sure that something is accurate. And you can’t say the same about most quote, unquote content creators.”

And then, the other side of the argument: “The [press’] obsolete asses are being replaced and they hate it … The difference between us and you is that y’all are lazy and sensationalist,” influencer Brian Baez said on TikTok and X. “You report on information and spin it to rile up groups of people in hopes to get clicks and views. We combat misinformation and raise awareness … clearly it’s more effective cause we got your motherfucking seats … and fucking good, more of your seats should be gutted until you get with the fucking program.”

TWEET OF THE DAY

Milla Jovovich answers fan questions in an online chat on AOL, 1995.

Stay in touch with the whole team: Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).


 


 
