By Steven Overly | With help from Derek Robertson
The logo of ChatGPT on a smartphone screen and the letters AI on a laptop screen. (AFP via Getty Images)

Strong opinions on artificial intelligence are easy to come by these days — and not just from the techies paid to make it, or the policy wonks paid to regulate it. Even conversations with a film director, school principal, comedy writer or civil rights leader can veer into provocative debates about how this increasingly smart and powerful technology is reshaping day-to-day lives. (Seriously, bring it up over brunch and see what happens.)

New data published today reveal that when it comes to the greatest risks of AI, there’s a disconnect between what experts are worried about and what the public most worries about — a clash of attitudes that could complicate Washington’s efforts to write the rules for a fast-evolving and highly disruptive new technology.

“It's a tug of war in many cases,” Lee Rainie, the researcher who conducted the poll at Elon University in North Carolina, said on today’s POLITICO Tech podcast. “Sometimes the elite interests, in particular people who have skin in the game, get to prevail. But a lot of times it's the public sentiment, certainly the public backlash to things, that can shape whether [regulations] get done or not.”

For the poll, Rainie and his team queried 1,000 members of the public and 250 academics, think tankers, industry reps and policy figures. They found concerns that AI will have a negative impact on issues like employment, elections, wealth inequality and human rights were not evenly shared between experts and the wider public.

When it comes to AI’s impact on jobs, the poll showed that the public was more worried than experts — one of the few areas where public concern was higher. (Fifty-five percent of general public respondents fear AI will have a negative impact on jobs, compared to 43 percent of experts.) Despite the public worry, labor market disruption hasn’t always been high on Washington’s agenda: Instead, risks to national security, elections and public safety are often the drivers of major AI policy.

Other key findings in the Elon University poll include:
- 70 percent of experts said the impact of AI on wealth inequality would be more negative than positive, compared to 37 percent of public respondents.
- Experts were also more concerned than the public about AI’s negative impact on politics and elections (67 percent to 51 percent) and basic human rights (54 percent to 41 percent).
- 61 percent of experts said AI will have a negative impact on war; the public was not polled on that question.
- The loss of privacy was the top concern for both groups, with 79 percent of experts and 66 percent of the public saying AI will have a negative impact.
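To make the expert-public divide easier to eyeball, here is a minimal sketch (in Python, with variable names of our own choosing) that simply tabulates the gaps using only the figures quoted above:

```python
# Illustrative tabulation of the Elon University figures cited above:
# percent in each group saying AI's impact will be more negative than positive.
# The numbers come from the poll results quoted in this newsletter; nothing else
# here is from the survey itself.
poll = {
    # issue: (experts_pct, public_pct)
    "jobs":                (43, 55),
    "wealth inequality":   (70, 37),
    "politics/elections":  (67, 51),
    "basic human rights":  (54, 41),
    "privacy":             (79, 66),
}

for issue, (experts, public) in poll.items():
    gap = experts - public  # positive = experts more worried than the public
    print(f"{issue:20s} experts {experts:2d}%  public {public:2d}%  gap {gap:+d}")
```

Jobs is the one issue where the sign flips: the public is the more worried group.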
Rainie said policymakers risk writing policies that ignite public backlash if they ignore the differences. “There are always cautionary notes that are sounded in polls like the one that we did, where the public isn't yet quite tuned in to all the things that elites are thinking about,” Rainie said. “There's some possibility that elites might run away with things if they don't take full account of how distributed public attitudes are.”
Listen to the full interview and subscribe to POLITICO Tech on Apple, Spotify or Simplecast.

Rainie attributes the difference, in part, to worldview. Members of the public tend to be more focused on AI’s direct risks to their well-being and livelihood, while academics and policy wonks take a broader perspective on society at large.

There may also be a gap in awareness. Generative AI first captured the public interest just over a year ago with the launch of OpenAI’s ChatGPT. So while tech experts have long known of the potential power (and abuse) of these AI systems, Rainie said regular people need time to catch up — and discover the upheaval it could bring to the world around them.

Nevertheless, global regulators are already moving ahead, from Biden’s executive order to the European Union’s AI Act, as the risks posed by AI feel increasingly urgent. Rainie said the trick for policymakers will be not to leave the public behind or downplay their present worries — either of which can erode the already-tenuous faith people have in government.

“The things that experts are worried about now that haven't yet captured the public imagination, soon enough the public is going to be tuned into things that really seem to have big consequences,” Rainie said. “And if you end up with solutions that the public hates, then you're just going to have a worsening situation with trust.”
AI is supposed to disrupt the labor market, but which jobs is it actually taking? Labor analysis website Bloomberry crunched the early numbers, scanning freelance gigs listed on Upwork from November 2022, a month before the release of ChatGPT, to February 14 of this year. They looked at which jobs disappeared in the greatest numbers, which were the least affected, and which ones had the biggest decrease in pay, as well as which AI skills were associated with the most postings.

Their key findings: While most fields actually increased their number of listings during this time period, writing, translation and customer service took hits of 33, 19 and 16 percent respectively. When it comes to pay, translation took a 20 percent hit to its rates, and there were also significant decreases to rates for video editing and market research. The report’s authors say their approach is best suited to track AI’s early economic impact because “if there’s going to be any impact to certain jobs, we’ll probably see it first in the freelance market because large companies will be much slower in adopting AI tools.”
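For a sense of the arithmetic behind those figures, here is a minimal sketch of the kind of before-and-after comparison Bloomberry describes. The listing counts below are hypothetical placeholders chosen only so the percentage drops match the ones quoted above; they are not data from the report.

```python
# Sketch of a per-category percent-change comparison between a pre-ChatGPT
# month and a later snapshot of freelance listings.
# NOTE: the counts are invented placeholders; only the resulting percentage
# drops (-33%, -19%, -16%) match the figures cited in the article.
before = {"writing": 10_000, "translation": 5_000, "customer service": 4_000}
after  = {"writing":  6_700, "translation": 4_050, "customer service": 3_360}

for category in before:
    change = (after[category] - before[category]) / before[category] * 100
    print(f"{category:17s} {change:+.1f}% change in listings")
```

The same calculation applied to average hourly rates, rather than listing counts, would yield the pay figures the report describes, such as translation’s roughly 20 percent drop.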
The United Kingdom’s government is going all-in on AI. Today’s POLITICO Morning Tech U.K. reports that Deputy Prime Minister Oliver Dowden is spending £110 million to deploy AI throughout Whitehall, hoping in vintage Tory fashion to “drive a more efficient government and a smaller state.” Except much of the cash will go to… well, adding new employees, as Dowden told POLITICO. The government plans to hire somewhere between 30 and 70 employees away from Big Tech to help with their mission. Dowden says the goal is to bring AI R&D in-house, building tools to help with bureaucratic work like analyzing and summarizing public comment on policy papers. The U.K. government is also planning a “bespoke Civil Service AI assistant,” a partnership with the country’s National Health Service to improve fraud detection in pharmacies, and a potential chatbot for the official national website.
Stay in touch with the whole team: Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com). If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.