The Bixby button
What is really at stake here is the entire display industry. As vocal interaction with search functions supplants visual search, the value of displays declines, even though most searches still end with the user looking at whatever was found. Voice commands for computers and AI-driven search algorithms have both been around for many years, but the ability to use natural language has improved steadily over the last few years.
Will it obviate the need for visual displays? Not likely, but it could change the habits of mobile and stationary device users over a generation. Will your kids be satisfied when their smartphone AI tells them a restaurant got 5 ‘poor’ ratings on Yelp, or will they want to read the reviews themselves? Natural language interfaces have been tried before with less than spectacular results, but algorithms get better over time. Does this mean that we not only lose the use of our legs and watch our hands evolve into a single pointy finger, but now also see our ears enlarge as we lose the ability to see? Probably not, but speaking is easier than typing and listening is easier than looking, so we would have to expect some long-term changes to the display space should Bixby, Cortana, Alexa, and others become our children’s companions…