5 questions for Gary Marcus

How the next wave of technology is upending the global economy and its power structures
Mar 15, 2024

By Derek Robertson

Gary Marcus at the 2023 TechCrunch Disrupt conference. | Kimberly White/Getty Images for TechCrunch

Hello, and welcome to this week’s edition of The Future In Five Questions. During the South by Southwest festival in Austin, Texas, this week I interviewed entrepreneur, AI researcher, and X power user Gary Marcus, who was in town to speak on a panel about “AI and the Future of the Truth.” Marcus has a reputation as a vocal AI skeptic, but he still believes the technology has the power to disrupt society. As he said to me on Monday in front of the Thompson Austin hotel, “I still think AI is stupid; that’s why it’s dangerous.”

We talked about his fervent belief in neuro-symbolic AI as an alternative to current large language models, the enduring insight of Isaac Asimov, and why the government shouldn’t offer Section 230 protection to AI firms. An edited and condensed version of the conversation follows:

What’s one underrated big idea?

Neuro-symbolic AI is a vastly underrated idea. This is the idea that you put together neural networks with classical symbolic AI, in ways that are not fully specified. Some people are working on this, but not nearly enough; it's not taken seriously enough, and there's not enough funding for it.

You have to remember that deep learning, which drives LLMs, was largely unknown to the public in 2009. Most people had given up on it, and it took a certain potentiating condition, having [powerful enough] GPUs, for it to become useful. It’s easy to give up on an idea too soon, but I think [neuro-symbolic AI] will ultimately be transformative, because the foundational problem we have with large language models is that they're not trustworthy. They hallucinate all the time. People have made various promises that hallucinations will be fixed, but I first saw them in 2001 and I'm still seeing them now.

I often get this: “Gary Marcus used to think AI is stupid; now he thinks it's dangerous.” And I’m like, it’s not that I used to think it's stupid. I still think it's stupid, and that's why it's dangerous. There's no contradiction there. If you have a model that cannot understand the truth, then it's a sitting duck for someone to exploit it to make up garbage.

What’s a technology that you think is overhyped?

Large language models are totally overrated. They’re useful, but how useful is still unclear. I would say the best application is computer programming. Some programmers love them, but there's some data suggesting that the quality of code is going down, and the quality of security along with it. Just because you love something and feel like it's good for you doesn't mean that it really is.

At every company on the planet, the CEO last year said, you know, “Hey, Jones, I want you to work on AI.” So [AI companies] sold a lot of services last year. But a lot of people ended up thinking it was interesting but couldn’t get it to work out of the box, and thought maybe someday they’ll figure it out. It wouldn't surprise me if, to some extent, this whole thing fizzled out. We may, years from now, think: “That was a little like crypto,” where maybe it didn't disappear entirely, but it didn't live up to expectations.

What book most shaped your conception of the future?

I still think we need to wrap our minds around what Isaac Asimov was talking about with his laws of robotics.

Everybody knows they're oversimplified, and the stories kind of hinged on the fact that they didn't fully work. But you can't build an AI system that can calculate whether what it's doing is harming someone or not, so you're asking for trouble. That’s still a fundamental issue that we have not adequately addressed, and a lot of the best thinking about the future probably comes from science fiction writers like Asimov, Stanislaw Lem and Arthur C. Clarke.

Nobody fully predicts the future with accuracy. The best prediction is probably those AT&T ads, where they just made ads around the tech they had, and it pretty much came to pass. I have made a lot of predictions about the limitations of a particular kind of AI architecture that have been, I think, phenomenally accurate. These are still shorter-term predictions, and they're also qualified with respect to a particular architecture and a particular way of building systems. I think I can say, look, this system has this flaw. But what I can't say is when society will put in the money to try to do something differently, and what the effect of that will be. There are limits to what anybody can predict.

What could government be doing regarding technology that it isn’t?

One first step would be an absolutely clear statement that we're not giving Section 230 protection [to the AI industry], and that [AI] companies are going to be liable for the harms they cause. Having a cabinet-level U.S. AI agency would be a really good idea. AI is going to be as important to our future as anything else, and it's cross-cutting: we have something like 20 agencies running around doing the best they can, but with mandates that were not written with AI in mind.

And we should have a law that says you can't run an uncontrolled experiment on 100 million people, the way ChatGPT did, without some kind of risk-benefit analysis involving independent people outside the company.

What surprised you most this year?

There's been a total suspension of disbelief, and I'm not completely shocked by it, but people are building policies and valuations around something that is close to make-believe. The idea has spread through the entire society that we're close to [artificial general intelligence], and that if China had this thing, American society would come to an end. The reality is, if China got GPT-5 faster than we did, they would write more boilerplate letters faster. You're not going to use it for military strategy. There are all kinds of conversations in Washington and across the world about how we have to get this thing first, and I don't think that's the right approach. If you looked at it carefully and saw all the hallucinations and so forth, you'd realize this is still very premature technology. People are rewriting and orienting their policy around this fantasy.

dsa takes on ai (no, not that dsa)

Europe’s digital regulators are flexing their muscles on generative AI.

POLITICO’s Mark Scott reported on an announcement yesterday from European Commission Internal Market Commissioner Thierry Breton, who said on X that his enforcement team is “fully mobilized” to request information about generative AI programs that might violate the Digital Services Act. Mark reports that requests on AI went to Facebook, Snapchat, TikTok, YouTube, X (formerly Twitter), Instagram, Google and Bing.

Most of these companies pledged last month to “combat deceptive use of AI in 2024 elections,” but the European Union wants more information on how they plan to do so. Breton also announced an investigation into the Chinese e-commerce giant AliExpress over whether it breached the DSA.

Tweet of the Day

someone in my mentions just “proved” LLMs are creative by linking me an 8 page arxiv preprint that concludes that LLMs are not creative. we gotta open the schools.


Stay in touch with the whole team: Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

