Meta and Google bet on AI voice assistants. Will they take off?

An older voice assistant like Siri, which reacts to a database of commands and questions it has been programmed to understand, would fail unless you used specific wording, such as “What’s the weather like in New York?” or “What should I pack for a trip to New York?”

The first conversation seems more fluid, like the way people talk to each other.

One of the main reasons people gave up on voice assistants like Siri and Alexa was that computers couldn’t understand much of what they were asked, and it was difficult to figure out which questions worked.

Dimitra Vergyri, director of voice technology at SRI, the research lab where the initial version of Siri was developed before Apple acquired the assistant, said generative AI addressed many of the problems that researchers had struggled with for years. The technology makes voice assistants able to understand spontaneous speech and respond with helpful answers, she said.

John Burkey, a former Apple engineer who worked on Siri in 2014 and has been an outspoken critic of the assistant, said he believes that because generative AI has made it easier for people to get help from computers, it is likely that many of us will be talking to these assistants soon — and that when enough of us start doing it, it could become the norm.

“Siri was limited in size — it only knew a certain number of words,” he said. “You have better tools now.”

But it could be years before the new wave of AI assistants is widely adopted, because the technology introduces new problems. Chatbots including ChatGPT, Google’s Gemini, and Meta AI are prone to “hallucinations,” in which they make things up because they can’t find the correct answers. They have made mistakes in basic tasks like counting and summarizing information from the web.