5 questions for Microsoft’s Sarah Bird

Presented by Instagram: How the next wave of technology is upending the global economy and its power structures
Dec 13, 2024 View in browser
 

By Derek Robertson


Sarah Bird. | Microsoft

Hello and welcome to this week’s edition of the Future in Five Questions. This week I spoke with Sarah Bird, the chief product officer of responsible AI at Microsoft. Bird is responsible for testing the security of Microsoft’s AI tools and making sure they don’t cause harm or replicate bias. We discussed what needs to be done after an AI tool is “red-teamed,” the limits of thinking about AI in terms of whether it’s “aligned” with humanity’s interests, and why industry still needs government to bring the two sides together and work out standards for AI evaluation. An edited and condensed version of the conversation follows:

What’s one underrated big idea?

Testing, but a very specific type of testing.

In the last two years, we've started to see the term “red-teaming” become very popular. Red-teaming is one very important testing tool, but we see people not understanding how critical it is to follow that with what we call measurement, or robust evaluation.

A lot of our investment in responsible AI over the past couple of years has been in building safety evaluation systems that allow us to look across a variety of risks, test a large number of examples against these systems, and then see what rate of defects we're seeing in our systems. We do not ship anything without first looking at the safety evaluation results.

We all know we should test software, but common practice was not to test AI systems for risks like prompt injection attacks, the ability to reproduce copyrighted material, or harmful and stereotyping content.

Hallucination is another example: you need a lot of expertise to even understand the risk. There's still a lot more work we need to do on our systems so that people can customize effectively and probe their applications as deeply as possible. It’s still a very nascent field.

What’s a technology that you think is overhyped?

The big one for me in the responsible AI space is alignment.

It is actually a critical technology, but what we hear from many people talking about this, whether in academia or other organizations, is that they look at alignment as the one-size-fits-all silver bullet. We'll just align the models, and we will have no challenges with safety or responsible AI.

We very much believe you need a defense-in-depth approach. We're never going to have just one technology that solves the whole problem here. The other challenge that we see with alignment is that building safety into the model makes it less useful for certain applications. For example, take the safety evaluation system I mentioned before: we use AI to role-play and generate these tests, but to do that we actually need to generate harmful content. We want these tests to be very robust and we want to use the most sophisticated models possible, but if the safety is entirely built in, we're not able to use them.

What book most shaped your conception of the future?

“Tools and Weapons,” by our president Brad Smith and Carol Ann Browne.

The title points out that a lot of this technology is dual-use. The technology itself is not good or bad; it's how we use it. The book looks at technology and innovation throughout history, the challenges they posed, their impact and how we solved those problems. We're all now figuring out globally how to regulate AI, and the level of coordination that's needed is very different from the past, when technology often stayed in one pocket for much longer as it matured. That’s really influenced my understanding of AI’s impact and how we need to prepare for it.

What could the government be doing regarding technology that it isn’t?

Microsoft's position for a long time has been that we need laws and regulations around AI. For quite a few years now we’ve self-regulated and made this public so people can see us put in place the rules that we're upholding. But not every organization is going to go and do that on their own. So in order to protect people's rights and civil liberties it's important that there’s actually a standard for how this technology should be built and how this technology should be used.

However, for me as a technologist, one of the parts of this I think is really important is that it’s quite complex in practice, and sometimes what we think would work doesn't actually work the way we expected it to. It’s important that the government help convene different stakeholders and make sure they're learning from technologists. I think of what the National Institute of Standards and Technology is doing here around evaluation, bringing together many different experts on the topic to figure out how we should start building standards for evaluating AI systems, and of the Frontier Model Forum, the industry body Microsoft co-founded. We need more of these kinds of conversations.

What has surprised you the most this year?

It’s easy in responsible AI to think of negative surprises, but I have been delighted to see how the field is maturing very quickly. We were talking about the importance of testing and evaluation, and even six months ago that was not something that customers were asking about. People are now really thinking about how they're building AI, and looking for tools to help them do better and make sure they're governing appropriately. That’s been a massive change over the past year.

 

A message from Instagram:

Instagram Teen Accounts: automatic protections for teens

Parents want safer online experiences for their teens. That's why Instagram is introducing Teen Accounts, with automatic protections for who can contact teens and the content they can see.

A key factor: Only parents can approve safety setting changes for teens under 16.

Learn more.

 
INFLUENCERS FLEEING ROMANIA


Demonstrators hold EU and Romanian national flags during a pro-European rally and in support of democracy at Piata Universitatii square in Bucharest on December 5, 2024, a few days before key elections. | Daniel Mihailescu/AFP via Getty Images

Romanians who helped promote Călin Georgescu’s presidential campaign on TikTok are now fleeing the country.

POLITICO’s Andrei Popoviciu reported this morning on the influencers who are leaving after Romanian tax authorities launched a widespread investigation into Georgescu’s campaign. Romania’s Constitutional Court nullified its presidential election after authorities produced evidence of widespread interference and a social media campaign on behalf of the far-right Georgescu.

Romania’s government accused the influencers of receiving illegal payments to promote the campaign. Toni Greblă, the head of Romania’s electoral office, said the investigation would “assess any causal links between those payments and a particular candidate in the Romanian presidential election.”

 


 
NVIDIA CITY

San Jose and Nvidia announced a partnership that will train city workers and help Silicon Valley reach its environmental goals.

POLITICO’s Eric He reported for Pro subscribers on the agreement, which includes job training efforts, collaboration on research, support for startups and shared tech resources. It makes San Jose the first city in the nation to sign such a deal with Nvidia.

Nvidia will give the city access to AI tools and training for city employees and students, and support local AI startups. The city pledged to help clear regulatory hurdles and potentially give financial incentives for AI-driven development projects. It will also offer public facilities to support research, and work with Nvidia to use AI to meet the city’s emissions goals.

“If we can do it right, a potential outcome is students getting hired to support the systems that they built,” Louis Stewart, Nvidia’s head of strategic initiatives for the developer ecosystem, told Eric.

 


 
 
POST OF THE DAY

artificial intelligence features a second-mover advantage in the context of international competition

this is because the societal / economic impacts of adopting advanced systems may be uniquely volatile, so it’s useful to be able to observe another society going through it first

THE FUTURE IN 5 LINKS

Stay in touch with the whole team: Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


 
 


 
 
 

Follow us on Twitter

Daniella Cheslow @DaniellaCheslow

Steve Heuser @sfheuser

Christine Mui @MuiChristine

Derek Robertson @afternoondelete

 

 

