The New Update To Google Assistant Is Incredible, And Controversial

Google recently held an artificial intelligence conference, unveiling some of their newest advances in AI. Among the announcements was an update to Google Assistant.

Google Assistant has been given an upgrade that allows it to speak with people over the phone, using natural language, to do things like book appointments and make restaurant reservations. The update has impressed many, though it has worried others.

Making Appointments With AI

At the Google AI event, the presenters showed Google Assistant calling both a restaurant and a hairdresser to book appointments (ostensibly on behalf of its user), and in each case the person on the other end of the line never realized they were speaking to a computer program. The hairdresser even mentioned that no appointment was available at 12 PM, and Google Assistant took it in stride, responding smoothly. The conversation sounded remarkably natural; Google Assistant even added in “ah”s and “um”s to make it sound more like a human speaking.

During the restaurant call, the staff member misheard Google Assistant, yet the system kept the conversation on topic without veering off course. The two clips of the AI making the reservations can be heard here.

Google CEO Sundar Pichai explained that Google Assistant can now understand the nuances of a conversation. The system that drives Google Assistant’s new ability is called Google Duplex, and Pichai said that it “brings together natural language understanding, deep learning, text to speech.”

Google Assistant, and the Google Duplex technology that drives it, uses machine learning to carry out these complex tasks. Machine learning is the application of statistical techniques to give computers the ability to carry out tasks without being explicitly programmed to do so; “learning” here refers to the fact that the program gets progressively better at the task in question as it processes more data. Deep learning is a specific subfield of machine learning that focuses on algorithms inspired by the structure and function of the human brain. These algorithms, often referred to as artificial neural networks, operate through a system of nodes arranged in layers. The nodes are linked by connections analogous to synapses, and each layer transforms the data coming from the layer before it, with the final layer producing a prediction about what should be done next.
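To make the node-and-layer idea concrete, here is a minimal, purely illustrative sketch of a forward pass through a small feedforward network in Python with NumPy. The layer sizes, random weights, and ReLU activation are arbitrary choices for the example and have nothing to do with Google’s actual models.

```python
# Illustrative only: a tiny feedforward network's forward pass, not Duplex.
import numpy as np

def relu(x):
    return np.maximum(0, x)

def forward(x, layers):
    """Pass an input vector through a list of (weights, biases) layers."""
    activation = x
    for weights, biases in layers:
        # Each node sums its weighted inputs (its "connections") plus a bias,
        # then applies a nonlinearity -- a crude analogue of a neuron firing.
        activation = relu(weights @ activation + biases)
    return activation

rng = np.random.default_rng(0)
# Three layers of nodes: 8 inputs -> 16 hidden -> 16 hidden -> 4 outputs.
sizes = [8, 16, 16, 4]
layers = [(rng.standard_normal((n_out, n_in)) * 0.1, np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal(8)   # stand-in for a featurized input
print(forward(x, layers))    # the network's "prediction" for the next action
```

In a real system the weights would be learned from data rather than drawn at random, which is what the “learning” in machine learning refers to.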

Deep learning techniques have many applications in human-computer interaction, from transcribing speech to text to, as the recent Google Assistant presentation shows, handling spoken conversation.

Limitations

A recent blog post from Google clarified what the Google Duplex system is capable of. Google Duplex can only operate within certain narrow, “closed domain” scenarios; it is essentially intended for a handful of specific business tasks. It apparently cannot generalize beyond these pre-defined contexts to arbitrary social situations.

Mark Riedl, an associate professor of AI at Georgia Tech, said that the Duplex system would probably work well, but only in fairly rigid situations. Riedl said in an interview with The Verge:

Handling out-of-context language dialogue is a really hard problem. But there are also a lot of tricks to disguise when the AI doesn’t understand or to bring the conversation back on track.

There are a few important caveats to note here. First, the Duplex functions for Google Assistant aren’t finished and won’t be widely available for some time; a trial run with a limited group of users is reportedly planned for this coming summer. Google has reportedly stated that what was presented at the conference was only an experiment, and that many refinements will be made before the system goes live to the general public. Another crucial fact about the Duplex system is that it has a self-monitoring ability that lets it recognize when a conversation has become more complex than it can handle. When that happens, it signals a human operator, who can pick up the line and complete the task.
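As a thought experiment, that fallback can be imagined as a simple confidence check, sketched below in Python. The dialogue model, the numeric confidence score, and the threshold are all invented for illustration and say nothing about how Duplex actually decides to hand a call off.

```python
# Hypothetical sketch of a confidence-based human handoff; purely illustrative
# and unrelated to Google's actual Duplex implementation.
import random

CONFIDENCE_THRESHOLD = 0.6  # assumed cut-off for "too complex to handle"

class StubDialogueModel:
    """Stand-in for a real dialogue model: returns a reply and a confidence."""
    def respond(self, history, caller_utterance):
        reply = "Sure, how about 11 AM instead?"
        confidence = random.random()  # a real system would estimate this
        return reply, confidence

def handle_turn(model, history, caller_utterance):
    reply, confidence = model.respond(history, caller_utterance)
    if confidence < CONFIDENCE_THRESHOLD:
        # Self-monitoring: the conversation looks too complex, so signal a
        # human operator to pick up the line and finish the task.
        print("Handing off to a human operator")
        return None
    history.append(reply)
    return reply

history = []
print(handle_turn(StubDialogueModel(), history, "We only have 1 PM available."))
```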

Ethical Concerns

While many people have reacted enthusiastically to the announcement of Google Assistant’s new capabilities, others have reacted with concern. Some AI ethics researchers worry that having an AI carry on a conversation with someone without disclosing that it is an AI may be inherently unethical. The worry is that this technology could be used to scam people or carry out hoaxes; for example, a similar AI could be used to impersonate a loved one and extract identifying or private information from people. AI researcher Eliezer Yudkowsky worries that Google Duplex and similar programs could end up making “phone calls between strangers almost impossible. Or between friends, too, if caller ID spoofing is not solved.”

https://mobile.twitter.com/ESYudkowsky/status/994505703053180929

Another worry is that technology like Google Duplex may violate privacy laws concerning the recording of conversations. Eavesdropping laws in many states require that all participants in a conversation give consent before it can be recorded. Since AI programs don’t have ears with which to hear and analyze the other party’s speech, they theoretically need to make some kind of recording in order to analyze the audio.

In what seems to have been a response to the initial controversy over the new Duplex system, Google released a statement saying that they would take the concerns seriously and would always have the system identify itself. A Google spokesperson reportedly said that the final version of the Duplex-backed Assistant will notify people that they are being recorded or are speaking to an AI, and that the feature is being designed with “disclosure built-in” to ensure the system is “appropriately identified”. Pichai said that while Google hopes the technology can be a positive force for people around the world, it’s also clear that the company can’t simply “be wide-eyed about what we create”.

Joanna Bryson, an associate professor of AI ethics at the University of Bath, said that while it’s important that individual companies like Google do the right thing, new laws should also be introduced to protect the public from less scrupulous companies that would use the technology to prey on people. Bryson said that it is ultimately a good thing that Google chose to publicize this kind of technology, since it draws attention to the development of these kinds of services, and Google won’t be the only one building them.

“It’s important that they keep doing demos and videos so people can see this stuff is happening … What we really need is an informed citizenry,” said Bryson.