Should we be concerned that the latest AI technology wants to be more than friends?

One thing we know about generative AI is that it’s really, really good at seeming human. We’ve already heard about a lonesome night watchman treating AI as a companion, and a chatbot that tried to break up a reporter’s marriage. One company, Character.AI, lets users create tailored characters and talk to them. For some, this is a huge new opportunity: Character.AI raised $150 million in a March funding round. For others, it’s a blinking red light about the kind of future we’re heading into.

A new report from the nonprofit Public Citizen rails against what it calls the “deceptive anthropomorphism” of AI systems. The venerable consumer-advocacy group has lately taken a big interest in AI; the FEC is considering Public Citizen’s petition to create rules about deepfakes in 2024 election campaign advertising.

The report says companies, in their quest to perfect human-like AI for profit, can use the systems to hijack users’ attention and manipulate their feelings. Author Rick Claypool lays out a set of policy recommendations — ranging from banning “counterfeit humans” in commercial transactions to restricting the very techniques that make AI seem human, like first-person pronouns and human-like avatars. The report also suggests applying extra scrutiny and testing to AI systems intended for children, older people and psychologically vulnerable individuals.

The report is eerie — one of the most complete documentations of the race to create human-like AI I’ve seen yet — but the future it’s trying to prevent feels almost inevitable under current market incentives in the U.S. So as a gut-check, I discussed its findings and recommendations with computer scientist Suresh Venkatasubramanian, who co-authored the White House’s AI Bill of Rights during his stint as a science policy adviser in the Biden administration.

“This idea of using various forms of AI to interact with people and provide assistance in various forms — that’s going to happen, I agree,” Venkatasubramanian said. “The question really is what design choices we’re going to make in building these systems.”

He calls the Public Citizen report “a call to arms” for the researchers designing AI systems. “We’ll be responsible,” he said, “if we don’t think about other ways to design interfaces that are not deceptive, that do create a clear demarcation between an automated system and the person interacting with it.”

But he also cautioned that these systems are going to evolve, and that it may be too soon to build an “entire regulatory apparatus” to rein them in.

Still, there are toeholds for existing regulatory agencies to intervene. “If I were the FDA,” Venkatasubramanian said, “I’d be very worried about the next way to solve telehealth by not interacting with a doctor.” Think of an online bot therapist that ingests medical literature and your health conditions and talks to you about your mental health, he said. We’re already part of the way there — health IT giant Epic recently tapped Microsoft to integrate generative AI into its electronic health record software.

Venkatasubramanian worries that the race to replace humans with human-like AI in customer-facing workflows will deepen the digital divide in access to critical services. “We’ll see more and more rollout of tools in places where we take away human involvement, because it looks like these tools can act like humans. But they really can’t. And they’ll just make everything a lot more difficult to navigate … Those who are more adept at navigating these tools and working with them will succeed. Those who don’t, won’t,” he said.

In the long run, we could also risk losing the loneliest fringes of our population to the rising tide of AI companions. Venkatasubramanian speculated that the future may not look quite like WALL-E, where humans are lost in their own devices. “Humans have shown we are more social than that,” he said.

The growing intimacy between humans and their AI companions isn’t necessarily an effect we can measure in the aggregate. But around the margins of society, “if someone was already a little bit antisocial or was unable to comfortably interact with other people and found this as an alternative, it’s likely to tip them over the edge,” Venkatasubramanian said.