AI vs. Nukes: ‘This is much more dangerous’

How the next wave of technology is upending the global economy and its power structures
May 25, 2023

By Matt Berg and Mohar Chatterjee

With help from Derek Robertson

Rep. Seth Moulton is pictured during a House Armed Services Committee hearing on July 9, 2020 in Washington, D.C. | Greg Nash-Pool/Getty Images

Rep. Seth Moulton believes that killer robots could be here… today.

And not far in the future, they could replace actual soldiers on the battlefield — which is why the Massachusetts Democrat is picking a fight with the Pentagon about it.

In his eyes, the Defense Department hasn’t “done much of anything” to address concerns about the role artificial intelligence could play in the military down the line, he told Digital Future Daily in an interview. The conversation followed his op-ed in the Boston Globe on Wednesday, which warned of the imminent danger the technology poses to humanity.

He likened the threat to nuclear weapons – and warned the United States has been ceding ground to foreign rivals. From his vantage point on the House Armed Services Committee – and as a top Democratic voice on military matters – he argues the Pentagon needs to be doing more, now.

It’s worth noting that the DoD recently updated its autonomous weapons directive to follow the department's AI Ethical Principles policy, and military officials have repeatedly said there will always be a “human in the loop” when it comes to autonomous weapons killing people.

But Moulton thinks that’s not enough. For starters, the Defense Department needs to lead by example and set ethical rules to prevent an automated or unintended atrocity on future battlefields, and capitalize on the military’s huge data reserves to advance artificial intelligence, he said.

We spoke with the lawmaker about his fears and hopes for the future, and how he thinks Congress should respond to ensure that we’re all still living and breathing when killer robots become reality. Our interview has been edited for clarity and brevity.

You’ve said the Pentagon isn't paying enough attention to AI's potential threats in military situations. What has DoD done poorly, and what can they improve upon?

I don't think they've done much of anything. I talked to two service chiefs in the past week who both said the Pentagon is way behind on this. They've got no direction whatsoever, no guidance… The Pentagon is a $760 billion enterprise, and right now, we're spending half as much as China is on AI as a percentage of our defense plan.

You're the first lawmaker I’ve heard directly compare the threat of AI to that of nuclear weapons. What made you draw that conclusion? Why isn't everyone freaking out about this?

Actually, I think there are a lot of people freaking out about it. What sets us apart from the nuclear age is that as soon as we developed nuclear weapons, there was a massive effort to curtail their use.

It was led primarily by many of the scientists who developed this technology in the first place and recognized how dangerous it was to humanity, and it resulted in an international effort to limit nuclear arms and limit their proliferation. I just haven't seen anything comparable to that with AI.

This is much more dangerous. China is investing tremendous resources in AI. Putin has come out and said that whoever wins the AI race will control the world. All of our serious adversaries are in a real competitive race with us on AI, and so we're losing the leverage to help set these international standards.

How plausible is a “Geneva Convention on AI,” like you called for in the op-ed?

I think it's plausible because I think everybody fundamentally recognizes the risk. It's easy to be a cynic here and say, well, we're in a proxy war with Russia and tensions are rising with China, but let's not forget that we had a lot of nuclear arms agreements during the Cold War. The Geneva Conventions were negotiated with a lot of tensions in the world. I think that this is hard, but it's absolutely worth trying.

You also said that we're not far away from seeing killer robots in use. How far away are we? 

I think that we could produce them today, if we really wanted to. The question is just what a killer robot looks like, right? If you're just talking about autonomous drones, we basically already have that technology, and we're quickly developing swarms and even more sophisticated forms. If you're talking about killer robots replacing an infantryman, that's probably still five or 10 years off, but it's not decades away.

Data quality is a big part of moving quickly and safely on AI. What are your thoughts on the military's data classification system?

The United States collects more military sensor data than any other country in the world. And we delete more data than any other country in the world, because we just don't have the place to store it.

So, part of this is quite simple. We just need to build the storage capacity for all the sensor data we're collecting on a regular basis. Thank God, we did have the radar data to help us identify those balloons that had flown over from China. We looked back at Big Data to find that. A lot of that data is being deleted every single day.

Is there anything in particular you're looking out for regarding AI in the National Defense Authorization Act?

Well, there are competing philosophies about how to deal with this. One of the conversations I'll have with [Rep. Rob Wittman, R-Va., vice chairman of the House Armed Services Committee] is that I like the idea of jumpstarting thinking on AI, but I don't want to set up an office that immediately exerts so much control over the services. In a public hearing, I encouraged the chiefs to go fast on AI and figure this out. The Pentagon secretary is not going to lead on this, so you should do it yourself. It's much like how the Marine Corps independently developed a strategy for the Pacific, and now everybody else is trying to catch up.

 

desantis' tech buddies

Elon Musk appears on Fox News. | Fox News via AP

By now you’ve probably heard about what a technical debacle Florida Gov. Ron DeSantis’ presidential campaign launch was.

But there’s a reason DeSantis was in a position to break Twitter’s servers in the first place, and it points to a potential future for the anti-establishment techno-libertarianism that’s been bubbling under the surface of American politics. DFD’s Ben Schreckinger reported yesterday morning on how the Florida governor came to join that camp, bonding with figures like Elon Musk and venture capitalist David Sacks over their shared affinity for the conservative culture war DeSantis has been waging relentlessly.

“By announcing his run with the two moguls on Twitter Spaces, DeSantis is betting that his ultra-wealthy supporters will be useful not just for writing checks, but for framing his campaign for public consumption,” Ben writes — but not everyone is so sanguine that it’ll fly with Republican primary voters.

“Is it going to be his issue or is it going to be the Twitter show?” Rollins asked Ben. “You’ve got to win Iowa. You’ve got to win New Hampshire, and that’s where you should spend a lot of time.” Judging from how the campaign rollout actually went, it’s unsolicited advice DeSantis might find himself begrudgingly taking. — Derek Robertson

tech takes the wheel

While Washington dithers on how to approach AI regulation, private industry is proposing its own rules — and inviting lawmakers to the table to hash them out.

POLITICO's Mohar Chatterjee and Brendan Bordelon report this afternoon on efforts from top tech CEOs to get their own preferred rules in front of lawmakers across the globe. Microsoft's Brad Smith unveiled this morning a five-point plan for regulating AI that focused on security and licensing for models, while Google's Sundar Pichai and OpenAI's Sam Altman are on a similar blitz in Europe.

But as Mohar and Brendan point out, tech leaders are not exactly keen to be seen as leading the process. “I'm not even sure that we're in the car,” said Smith. “But we do offer points of view and suggested directions for those who are actually driving.”

Microsoft's licensing proposal would represent a sea change in how the currently unregulated field operates. Which is exactly what's wrong with it, in the eyes of critics: some leading AI researchers believe that it would effectively pull the ladder away from smaller AI developers now that big tech players have established their pole position. — Derek Robertson

Tweet of the Day

2012: we can tell a blurry beagle apart from a blurry Labrador
2022: we gave the search engine a soul

the future in 5 links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); and Benton Ives (bives@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

