Hello, and welcome to this week’s edition of the Future in Five Questions. This week I spoke with Babak Hodjat, an AI researcher and entrepreneur who helped develop the natural language technology that would eventually enable Apple’s Siri. Hodjat is now the chief technology officer for AI at Cognizant, an information technology firm that modernizes IT systems for some of the world’s top companies in pharma, banking and media, among others. (Cognizant is the company that quit the content moderation business after a major scandal over working conditions in 2019.) Hodjat discussed his vision of AI as a “knowledge worker in a box,” the pace of AI development in the past year — and what Singapore is doing right that the U.S. could emulate.

The following has been edited and condensed for clarity:

What’s one underrated big idea?

The idea of treating AI as a knowledge worker in a box. These large language models can do a lot of things, and they can be all things to all people — but let's constrain them, put them in a box and tell them: here are your responsibilities, here's the kind of input you're going to get, here are some tools you can use, and here are some other agents you can talk to.

Culturally, we're still not there. We think of humans in that manner, but we don't think of AI as a black-box entity that we define a job description for and then have operate in a workflow. There are huge implications once we warm up to the idea of using AI in that manner, and it could be hugely disruptive.

What’s a technology that you think is overhyped?

Technology has to be overrated in order to get the investment it requires. AI suffers from the problem of expectation, as far as being able to do everything in one model. The track we're on with large language models is overrated and has some weaknesses that we're glossing over. Not only is it very energy inefficient compared to the human brain, but it's also very rigid, and people are not acknowledging the fact that these systems are trained offline on tons and tons of data, and then fixed. You know, the “P” in “GPT” is “pre-trained.” That's an inherent weakness of where we are with AI right now.

What book most shaped your conception of the future?

Aldous Huxley’s “Brave New World.” I know it's a dystopian book, but man, did he get it right. The world he describes is actually more familiar to us than it was to his [contemporaneous] readers. He wasn't a technologist, but we are living what he described in that book. I've read a lot of Isaac Asimov, and a lot of other science fiction from the 20th century, and back then science fiction was really science fiction. It was easier to predict the future than it is right now. Right now, people just come up short.

What could the government be doing regarding technology that it isn’t?

An AI regulatory sandbox. But before we even get there, [regulators] need to educate themselves on what AI is and start using it personally. What I see based on my interactions is that it's spooky to them. It's scary. The first thing to do is to understand it better, and not from the perspective of a worst-case scenario.

Singapore has attempted a sandbox in which innovators, testers and users, as well as the regulator, can come together and try out new technologies. This can preemptively allow us to get ahead of the implications of that technology. It takes a level of patience and investment, but that would be my suggestion. Rather than putting out regulation blindly without knowing what the implications are, actually try it out in a sandbox, in a safe environment where you can get it to work. Obviously, saying this is easier than actually making it happen.
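A minimal, purely illustrative Python sketch of the “knowledge worker in a box” idea Hodjat describes above: an agent defined by a job description, the inputs it accepts, the tools it may use and the peers it knows. Every class, field and example role here is hypothetical; this is not Cognizant's tooling or any specific framework.

```python
# Hypothetical sketch only: a "knowledge worker in a box" expressed as a
# constrained agent definition rather than an open-ended chatbot.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class BoxedAgent:
    """An LLM-backed worker given a job description, not open-ended freedom."""
    role: str                                   # the "job description"
    responsibilities: List[str]                 # what it is accountable for
    accepted_inputs: List[str]                  # the kinds of requests it handles
    tools: Dict[str, Callable[[str], str]] = field(default_factory=dict)
    peers: List["BoxedAgent"] = field(default_factory=list)

    def system_prompt(self) -> str:
        # The constraints are stated up front, before any task arrives.
        return (
            f"You are a {self.role}. Your responsibilities: "
            f"{'; '.join(self.responsibilities)}. "
            f"You only accept: {', '.join(self.accepted_inputs)}. "
            f"Tools you may use: {', '.join(self.tools) or 'none'}. "
            f"Agents you may consult: {', '.join(p.role for p in self.peers) or 'none'}."
        )


# Hypothetical example: a claims-triage worker that knows its tools and two peers.
auditor = BoxedAgent("policy auditor", ["check coverage rules"], ["claim summaries"])
adjuster = BoxedAgent("claims adjuster", ["estimate payouts"], ["damage reports"])
triage = BoxedAgent(
    role="claims triage worker",
    responsibilities=["route incoming claims", "flag incomplete claims"],
    accepted_inputs=["claim forms"],
    tools={"lookup_policy": lambda claim_id: f"policy record for {claim_id}"},
    peers=[auditor, adjuster],
)
print(triage.system_prompt())
```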
What has surprised you the most this year?

This is in my own research, but in the 1990s we built multi-agent systems. I published a paper, I think in 2000, introducing a multi-agent architecture. With agents back then, we were like, AI is too weak and we don't have enough processing capacity, so let's simplify the domain in which AI operates — that led to the creation of agents. And then the agents worked with each other, and this coincided with the advent of the web, which back then was very simple, just text and hyperlinks. So the agent concept then involved AI navigating the web on your behalf, finding stuff, maybe running into another agent, and maybe there's some sort of collaboration or competition going on.

Now you have multi-agent architectures as a resurgent field, where you have all these agents and they start talking to each other. One thing I tried out was to test whether you can encapsulate the domain of their responsibility. Not to tell every agent what is at their disposal, but to say, “Hey, agent, you know these three others. You don't know what they do, but you're in communication with them. If a task comes in that you can't fully support, ask them to see whether they can support it.” They, in turn, talk to other agents, so now you have this network of agents, like a network of humans, that sorts things out amongst themselves and comes back with what you need.

I built a system like that, and it did things that I simply didn't expect. It resolved complex tasks, and the full knowledge of what this community of agents can do did not reside in any single agent. That was very exciting and surprising to me.
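As a rough illustration of the delegation pattern Hodjat describes, here is a minimal Python sketch in which each agent knows only who its neighbors are, not what they can do, and passes along any task it cannot support. The agent names, skills and capability check are hypothetical stand-ins for what would be LLM calls and tool use in a real system.

```python
# Hypothetical sketch of responsibility-encapsulated agents: each one knows
# its neighbors by name only and delegates tasks it cannot handle itself.
from typing import List, Optional, Set


class Agent:
    def __init__(self, name: str, skills: Set[str]):
        self.name = name
        self.skills = skills                    # stand-in for an LLM plus tools
        self.neighbors: List["Agent"] = []      # known peers; capabilities unknown

    def handle(self, task: str, visited: Optional[Set[str]] = None) -> Optional[str]:
        visited = visited if visited is not None else set()
        visited.add(self.name)
        if task in self.skills:
            return f"{self.name} completed '{task}'"
        # Can't fully support the task: ask neighbors, who in turn ask theirs.
        for peer in self.neighbors:
            if peer.name not in visited:
                result = peer.handle(task, visited)
                if result is not None:
                    return result
        return None                             # nobody reachable could help


# A small network; no single agent holds the community's full capability map.
a = Agent("a", {"summarize"})
b = Agent("b", {"translate"})
c = Agent("c", {"forecast"})
a.neighbors = [b]
b.neighbors = [a, c]
c.neighbors = [b]

print(a.handle("forecast"))  # resolved by c, two hops away from a
```

In this toy version the "can I support this task?" check is a simple set lookup; the point is only that knowledge of what the network can do lives in the topology, not in any single agent.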