The AI safety fog of war

Presented by Special Competitive Studies Project: How the next wave of technology is upending the global economy and its power structures
May 02, 2024
 

By Derek Robertson



The California state capitol. | Arturo Holmes/Getty Images

The debate over what, exactly, constitutes “safe” artificial intelligence continues to heat up.

If you’re not part of the hyper-online community of AI wonks, engineers and enthusiasts, you might have missed a spat this week on X between Democratic California state Sen. Scott Wiener and AI nonprofit director Brian Chau, who accused the lawmaker in a lengthy thread of attempting to “crush OpenAI’s competitors” and “hurt startups and open source” with his proposed AI legislation.

Chau argued that by requiring licensing and regulation for AI models that meet benchmarks similar to those of OpenAI’s industry-leading models, the state would effectively make it impossible for smaller AI companies to compete.

Wiener shot back, calling the critique “outright fear-mongering” and saying lawmakers have worked with the startup community to make sure the bill is workable for all participants in the market.

It’s not just California lawmakers who are trying to put the regulatory clamps on the most powerful AI models: A bipartisan group of senators introduced legislation in April that would establish federal oversight for them, and the European Union’s AI Act establishes transparency requirements for so-called “foundation models.” President Joe Biden’s executive order on AI set specific computing-power thresholds, not yet reached by any existing model, above which AI systems will merit extra regulatory attention.

A major point of contention in regulators’ ongoing efforts to tackle AI is what, exactly, constitutes “safety.” For some researchers, the issues posed by AI aren’t all that different from those posed by tech platforms more generally; they see the tech carrying largely the same risks of privacy invasion and media manipulation that have plagued the app-dominated social media era.

Others, including Elon Musk and Sam Altman, see AI as a radical new force that could be even more frightening than nuclear weapons, and are devoting enormous resources to the problem of “alignment,” which could mean anything from ensuring AI serves human goals to preventing it from destroying the world. The Bletchley Declaration, signed by representatives of 28 countries and the European Union last year, warned that “Particular safety risks arise at the ‘frontier’ of AI… There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”

The debate can be reduced to two crude, equally Manichean views. In one, a powerful cabal of wealthy Silicon Valley developers is rattling regulators with ghost stories about “misalignment” in an attempt to scare them into locking smaller competitors out of the market with onerous regulation. In the other, reckless engineers bent on ushering in a techno-apocalypse are fighting for a digital anarchism that could lead to bioterrorism or nuclear winter via AI mishap.

Lost in the rhetoric around AI safety is a more robust understanding of what powerful models like those behind ChatGPT actually do, and where they stand in a tradition of computing that goes back to the Turing test. That’s no mere intellectual exercise when Washington lawmakers are scrambling to regulate a field they barely understand, surrounded on all sides by shadowy, well-funded lobbying interests.

With that in mind, I called David McDonald, a decades-long veteran of the AI field and a developer of some of the earliest natural language generation algorithms, to see how the debate looks to someone who has shepherded what we now call “generative AI” from its figurative infancy.

McDonald, who professed to be a fan of the AI critic Gary Marcus, expressed his frustration with what he called a “computer illiterate” Congress. But even more than that, he warned that the current discourse around AI misunderstands something very fundamental about the technology, which he knows all too well from decades of studying semantics and language generation in computer systems: It’s not just that chatbots don’t “think,” but that they cannot think.

“I know enough about the brain to know that everybody is idiosyncratic when it gets down to the nuts and bolts,” McDonald said. “I’m very curious to see what happens, but I don't see any danger… [worry about human-like AI] presumes common sense, and that computers have rich intentions, and at that point it’s a different beast.”

McDonald’s research and design work has focused on semantic algorithms tailored for specific uses, unlike the general-purpose predictive models that have captured the public imagination of late. He cited a 2021 paper, by the linguist Emily Bender and her co-authors, that described those models as “stochastic parrots” whose massive-scale hoovering of data risks directing “resources away from efforts that would facilitate long-term progress towards natural language understanding, without using unfathomable training data.”

“We would always be generating [language] for a purpose, where the thing we're generating has some intent, and does not have much emotion,” he said. “Intent and semantics are alien concepts to a large language model, they don’t make sense. Large language models are just walking the most interesting statistical paths.”
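For readers who want to see what that “statistical paths” claim means in practice, here is a minimal, purely illustrative sketch, not McDonald’s own work or any production system, of a toy bigram model in Python. It counts which words follow which in a tiny invented corpus, then generates text by sampling observed continuations; real LLMs do the same kind of next-token prediction with billions of learned parameters rather than a lookup table.

```python
import random

# Toy bigram "language model": learn which word follows which by
# counting a tiny, invented corpus. (Illustrative only.)
corpus = ("the model predicts the next word and the next "
          "word is whatever usually follows the last one").split()

# Map each word to the list of words observed right after it.
# Duplicates in the list make frequent continuations more likely
# to be sampled -- a crude probability distribution.
transitions: dict[str, list[str]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(prev, []).append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Walk a statistical path: repeatedly sample an observed
    continuation of the current word. No intent, no semantics,
    just frequencies."""
    out = [start]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:  # dead end: word never followed by anything
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

Run it a few times and the output changes each time: the program has no idea what it is saying, only which words tended to follow which in its training text.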

Still, McDonald isn’t a complete LLM skeptic: He acknowledged that their intuitive, (mostly) polished output creates a powerful mystique that makes it all the more important for regulators to truly understand what they can and cannot do. And companies obsessed with “alignment” and the possibility of “artificial general intelligence,” like OpenAI and Meta, generally frame their alignment work — at least in public — as more of a project to help AI help humanity, and not to stop a rogue, “sentient” system from ending it.

Right now, nonprofits linked to those very companies are staffing congressional offices with the goal of fixing the lack of education that McDonald lamented. Lawmakers and regulators face a significant challenge in cutting through the morass of competing interests and ideologies surrounding AI safety to figure out what risk the technology truly poses. They might, ultimately, have no choice but to do what would have occurred first to McDonald’s generation of computer scientists: Open up the machines and take a look inside for themselves.

 

A message from Special Competitive Studies Project:

Innovation is on display at The AI Expo for National Competitiveness. Step into the future with the Special Competitive Studies Project: explore captivating, cutting-edge tech demos and witness insightful discussions on tech policy and global affairs led by a mix of government, academic and industry leaders in DC. Join us for free to forge connections, gain perspectives and be part of charting a course toward a future defined by innovation, collaboration and shared purpose.

 
west coast humor


Alex Karp on Capitol Hill in 2023. | J. Scott Applewhite/AP

The right-leaning politics of some corners of the Silicon Valley elite were on full, unexpected display in Washington yesterday afternoon.

POLITICO’s Brendan Bordelon reported yesterday on remarks made by Palantir CEO Alex Karp at a Capitol Hill event meant to bridge the cultural gap between the Bay Area and Washington, in which Karp expressed his opposition to campus protests against Israel’s war effort, joked about drone-striking his business rivals and warmed up the crowd for a surprise video appearance by former President Donald Trump.

Jacob Helberg, one of Karp’s advisers and a key player in the bill passed last week that could force TikTok’s sale, told Brendan that Karp “believes in working with the U.S. government regardless of who’s in office,” and that “It’s great for both sides to familiarize themselves with a little bit of West Coast humor.”

 


 
copyrights (and wrongs)

The United Kingdom’s government is facing some heat over its lack of action in protecting copyright from AI.

POLITICO’s Joseph Bambridge reported for Pro subscribers today on a critique from Tina Stowell, Baroness Stowell of Beeston, a member of the House of Lords, who submitted a letter arguing the government needs to “go beyond its current position” and “make more explicit commitments” around protecting copyright, as well as competition in the AI market.

“The unintended risks of entrenching incumbent advantages are real and growing,” Stowell wrote. “Even an unfounded perception of close relationships between AI policy and technology leaders risks lasting damage to public trust.”

In response, U.K. Technology Secretary Michelle Donelan called AI copyright a “complex and challenging area” and said it would not be appropriate to comment on specific potential breaches of copyright given ongoing litigation.

 

THE GOLD STANDARD OF TECHNOLOGY POLICY REPORTING & INTELLIGENCE: POLITICO has more than 500 journalists delivering unrivaled reporting and illuminating the policy and regulatory landscape for those who need to know what’s next. Throughout the election and the legislative and regulatory pushes that will follow, POLITICO Pro is indispensable to those who need to make informed decisions fast. The Pro platform dives deeper into critical and quickly evolving sectors and industries, like technology, equipping policymakers and those who shape legislation and regulation with essential news and intelligence from the world’s best politics and policy journalists.

Our newsroom is deeper, more experienced, and better sourced than any other. Our technology reporting team—including Brendan Bordelon, Josh Sisco and John Hendel—is embedded with the market-moving legislative committees and agencies in Washington and across states, delivering unparalleled coverage of technology policy and its impact across industries. We bring subscribers inside the conversations that determine policy outcomes and the future of industries, providing insight that cannot be found anywhere else. Get the premier news and policy intelligence service, SUBSCRIBE TO POLITICO PRO TODAY.

 
 
Tweet of the Day

What’s really happening in the background around AI regulation.

The Future in 5 links
  • Changpeng Zhao received a considerably lighter sentence than other crypto convicts.
  • Fake AI “people” could take the place of focus groups or clinical trials.
  • TikTok is clamping down on AI-generated voice clones of pop stars.
  • NASA is firming up commercial support for potential Mars missions.
  • A nascent drone startup ecosystem is powering Ukraine’s defense against Russia.

Stay in touch with the whole team: Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

A message from Special Competitive Studies Project:

The future is unfolding at The AI Expo for National Competitiveness. Step into the future with the Special Competitive Studies Project: explore captivating, cutting-edge tech demos and witness insightful discussions on tech policy and global affairs led by a mix of government, academic and industry leaders. The AI Expo has over 125 exhibitors on the show floor, over 150 panel speakers, special events, side rooms and networking spaces. Join us on May 7-8 in Washington, D.C. to forge meaningful connections, gain fresh perspectives and be part of charting a course toward a future defined by innovation, collaboration and shared purpose. The AI Expo is free to attend.

 
 

POLITICO IS BACK AT THE 2024 MILKEN INSTITUTE GLOBAL CONFERENCE: POLITICO will again be your eyes and ears at the 27th Annual Milken Institute Global Conference in Los Angeles from May 5-8 with exclusive, daily reporting in our Global Playbook newsletter. Suzanne Lynch will be on the ground covering the biggest moments, behind-the-scenes buzz and on-stage insights from global leaders in health, finance, tech, philanthropy and beyond. Get a front-row seat to where the most interesting minds and top global leaders confront the world’s most pressing and complex challenges — subscribe today.

 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 

 

