When it comes to regulating artificial intelligence, the action right now is in the states, not Washington. State legislatures, like their counterparts in Europe, are often contrasted favorably with Congress — willing to take action where their politically paralyzed federal counterpart can’t, or won’t. Right now, every state except Alabama and Wyoming is considering some kind of AI legislation.

But simply acting doesn’t guarantee the best outcome. And today, two consumer advocates warn in POLITICO Magazine that most, if not all, state bills contain crucial loopholes that could shield companies from liability for harm caused by AI decisions — or from simply being required to disclose when AI is used in the first place.

Grace Gedye, an AI-focused policy analyst at Consumer Reports, and Matt Scherer, senior policy counsel at the Center for Democracy & Technology, write in an op-ed that while the use of AI systems by employers is screaming out for regulation, many of the efforts in the states are ineffectual at best. Under the most important state laws now under consideration, they write, “Job applicants, patients, renters and consumers would still have a hard time finding out if discriminatory or error-prone AI was used to help make life-altering decisions about them.”

Transparency around how and when AI systems are deployed — whether in the public or private sector — is a key concern of the growing industry’s watchdogs. The Netherlands’ tax authority infamously immiserated tens of thousands of families by falsely accusing them of child care benefits fraud after an algorithm used to detect it went awry.

Gedye and Scherer are concerned with how AI is already being used in the U.S. in decisions around hiring, education, insurance, housing and lending, among other decision-making processes (including even criminal sentencing). They say the current crop of state legislation is toothless when it comes to the details of enforcement, or is even written expressly with human resources tech companies in mind. (Those companies, for their part, assert that working with state legislatures to shape bills around AI is a tried-and-true part of balancing regulation and private-sector innovation.)

One issue: a series of jargon-filled loopholes in many bill texts that say the laws only cover systems “specifically developed” to be “controlling” or “substantial” factors in decision-making. “Cutting through the jargon, this would mean that companies could completely evade the law simply by putting fine print at the bottom of their technical documentation or marketing materials saying that their product wasn’t designed to be the main reason for a decision and should only be used under human supervision,” they explain.

Another problem is one that dogs regulators of industries far more quotidian than AI: the haziness around what is and isn’t a “trade secret” whose disclosure would harm a company. Theranos, Gedye and Scherer point out, reportedly used trade secrets law to threaten whistleblowers.

These flaws, combined with weak enforcement mechanisms, they write, have made a law aimed at curtailing AI in hiring in New York City almost completely ineffectual.

So… what should happen?
Gedye and Scherer lay out an explicit list of recommendations for lawmakers looking to beef up their AI decision-making regulation, including: making definitions of decision-making systems less vague, so they encompass any algorithm used in decisions about consumers; requiring up-front transparency about the decision-making process before algorithms are deployed (and explanations for consumers afterward); eliminating the “trade secrets” loophole; ensuring compliance with existing civil rights law; and strengthening legal recourse for those affected by the systems’ use.

Another big issue for any legislature is the tech industry itself; companies want more freedom to operate as they figure out how and where to deploy powerful AI platforms. The big AI companies have made their own efforts to get out in front of AI-related mayhem, with Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI agreeing in July 2023 to a series of voluntary commitments laid out by the White House. Those do touch on many of the same topics, including public “discussion of the model’s effects on societal risks such as fairness and bias,” as well as “empowering trust and safety teams, advancing AI safety research, advancing privacy, protecting children, and working to proactively manage the risks of AI.” Still, it’s unclear what kind of enforcement they’d actually accept.

The op-ed authors write that there have been a few promising changes to Connecticut’s AI legislation to close loopholes and force companies using AI tools to explain their decisions. But they end on a wary note, pointing out that “Connecticut Gov. Ned Lamont has expressed skepticism of his state’s bill — not because the bill doesn’t do enough to protect consumers, but because he fears it will hurt Connecticut’s standing with the business community.”