Last week, Apple acquired Xnor.ai, a little-known Seattle startup that specializes in "edge-based" artificial intelligence – meaning AI that runs on users' devices rather than on cloud servers.
While Xnor.ai's association with Wyze home security cameras – its AI technology powered Wyze's relatively recent person-detection features – has led to speculation that Apple acquired the company to improve HomeKit Secure Video, that may be a somewhat superficial analysis, since Xnor.ai's expertise runs far, far deeper than face detection.
In fact, Xnor.ai made Forbes' list of the most promising artificial intelligence companies in America, and with both its expertise and its AI technology so closely aligned with Apple's privacy goals, an acquisition seems to have been all but inevitable.
Apple has already worked hard to run as much of its machine-learning technology as possible on its iOS devices' own silicon, with features such as object and face recognition added to the Photos app in iOS 10 in 2016. That approach differentiated Apple's privacy model from rivals such as Google and Facebook, pushing photo analysis onto its A-series chips rather than relying on cloud servers. When Apple introduced a "Neural Engine" in its A11 chips the following year, it seemed even more obvious that this was the way forward for the company.
The missing piece in Apple's AI
However, there is one area that Apple has yet to bring onto on-device silicon, and that is Siri. While the company makes a good effort to process as much as possible on the device – the "Hey Siri" wake-up command, for example, doesn't rely on Apple's servers at all, even for voice recognition – actual Siri commands still require a round trip to Apple's servers.
Not only does this have potentially serious privacy implications, as we discovered last summer, but it is also remarkably inefficient from a user-experience point of view. Even the simplest Siri commands, such as "Stop," can cause awkward pauses, because Siri has to send the request to Apple's cloud just to figure out what the user is saying.
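The hybrid approach this implies can be sketched in a few lines: trivial commands are matched on the device, and only requests that need real language understanding fall back to a server round trip. This is a minimal illustrative sketch – the command list, intent names, and function are hypothetical, not Apple's actual Siri design.

```python
# Hypothetical on-device fast path for trivial voice commands.
# Anything not in the local table falls back to a cloud round trip.
LOCAL_COMMANDS = {
    "stop": "media.stop",
    "pause": "media.pause",
    "resume": "media.resume",
    "volume up": "media.volume_up",
}

def handle_utterance(text, cloud_fallback):
    """Return (where_it_was_handled, intent) for a transcribed utterance."""
    intent = LOCAL_COMMANDS.get(text.strip().lower())
    if intent is not None:
        return ("on-device", intent)        # no network, no awkward pause
    return ("cloud", cloud_fallback(text))  # complex queries still go out

result = handle_utterance("Stop", cloud_fallback=lambda t: "server-intent")
```

The point of the pattern is that latency for the common, simple cases no longer depends on the network at all, while the cloud path remains available for open-ended questions.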
As Macworld's Michael Simon points out, the Xnor.ai acquisition may be the hope we have all been waiting for to bring Siri's smarts directly onto Apple devices, from the iPhone to the HomePod. As Simon explains, part of the magic of Xnor.ai's detection technology was that it ran entirely on cheap, low-power hardware with privacy intact – which is actually the company's claim to fame: making machine-learning algorithms so efficient that they can run on some of the lowest-end hardware.
In other words, while Apple has been forced to beef up its A-series chips and even throw in a Neural Engine to handle much of its artificial intelligence, Xnor.ai's technology could allow it to do much more with much less. In fact, if you consider the staggering amount of power expected to land in this year's A14 chip and then combine it with Xnor.ai's capabilities, the sky may very well be the limit.
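The efficiency trick behind this is worth a brief sketch. Xnor.ai's founders are best known for XNOR-Net-style binary neural networks, where weights and activations are quantized to ±1 so that an expensive floating-point dot product collapses into an XNOR plus a popcount – operations even very modest chips can do quickly. The sketch below shows the core idea only; the sizes and names are illustrative, not Apple's or Xnor.ai's actual implementation.

```python
# Illustrative sketch of the XNOR-Net idea: binarize values to {-1, +1},
# then compute dot products by counting matching sign bits.
import numpy as np

def binarize(x):
    """Map real-valued weights/activations to {-1, +1}."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dot(a_bin, b_bin):
    """Dot product of two {-1, +1} vectors via XNOR + popcount.
    Encoding -1/+1 as bits 0/1, XNOR counts matching positions,
    so dot = 2 * matches - n."""
    matches = np.count_nonzero((a_bin > 0) == (b_bin > 0))
    return 2 * matches - a_bin.size

rng = np.random.default_rng(0)
w = rng.standard_normal(1024)  # pretend layer weights
x = rng.standard_normal(1024)  # pretend input activations

# XNOR-Net also keeps a per-filter scale alpha = mean(|w|) so the
# binary result approximates the full-precision dot product.
alpha = np.abs(w).mean()
approx = alpha * binary_dot(binarize(w), binarize(x))
exact = float(w @ x)
```

Replacing multiply-accumulate with bitwise operations is what lets this class of model run on hardware as modest as a cheap camera – and, by extension, why the same approach looks attractive for always-on assistants.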
One of Apple's challenges with Siri is that the voice assistant has to be ubiquitous, which means it has to run on everything from the A8 chip in the HomePod to the A13 chip in the iPhone 11 Pro Max – and probably on some even older devices. While Apple could technically use edge AI for some voice-assistant features only on newer iPhones equipped with a Neural Engine, this would likely result in an extremely fragmented and inconsistent user experience – something Apple almost certainly wants to avoid.
However, given that Xnor.ai could perform sophisticated person detection on cameras as basic as Wyze's, it's not hard to imagine what the company's technology could pull off on the comparatively better-equipped A8 chip in the HomePod, not to mention the more powerful chips Apple continues to release year after year.
This could allow Apple to build a version of Siri that doesn't need to talk to the cloud at all – except, of course, when you actually ask for information – and could even improve voice recognition to a significantly higher degree of accuracy. After all, as Simon points out, Xnor.ai could tell the difference between humans and pets on a bare-bones security camera, so it shouldn't have a hard time differentiating voices on a powerful A-series processor.
In fact, Simon argues that Apple's stance on privacy has prevented it from doing more with Siri, especially compared with rivals such as Google Assistant and Alexa, whose makers have no such qualms about collecting and processing user data in the cloud. However, if Siri's voice recognition and even its requests could be handled entirely on the device, without the need to relay them through Apple's servers, Siri could become a much smarter and more useful assistant, since it could monitor your actions and respond contextually without any risk to your privacy.