An exploration of mobile first AI

Artificial Intelligence has come a long way since the 1950s, gaining unprecedented traction with the dawn of the deep learning era. With the availability of big data and advances in heterogeneous computing, deep neural networks have taken machine learning to a whole new level, especially in computer vision and natural language processing.

Today, everyone has a smart voice assistant at their fingertips. Cars have started to drive themselves. Leading facial recognition systems have surpassed humans at recognizing faces. Needless to say, the world’s spotlight is on AI. With AlphaGo and Google Duplex greatly expanding people’s imagination of what AI can do, the technology’s potential seems ever-growing. Reflecting this promising future, Google recently announced its AI First strategy, a notable shift away from Mobile First.

So, has mobile technology become a thing of the past? What role, if any, does mobile play in this AI first world?

The potential of mobile AI

In my opinion, despite the proliferation of smart devices, mobile phones remain the largest platform through which people experience AI. Let’s take a look at some numbers: 63% of US web traffic comes from mobile, 70% of Amazon sales happen on mobile devices, and 85% of the time users spend on Twitter is on mobile. These numbers are already significant, and they’re still growing.

Moreover, phone manufacturers are facilitating AI on mobile devices. Huawei launched its Kirin 970 at IFA 2017, calling it the first chipset with a dedicated neural processing unit (NPU). Then, Apple unveiled the A11 Bionic chip, which powers the iPhone 8 and X. The A11 Bionic features a neural engine that the company says is “purpose-built for machine-learning,” among other things. By 2019, almost every new phone will be equipped with an AI chip, and AI applications on smartphones are very likely to explode in popularity. Mobile First AI could be the next big thing.

The current landscape

AI in mobile apps is already being developed in countless ways by existing and emerging companies. It wasn’t long ago that both Google and Microsoft added neural networks (computer systems inspired by the human brain) to their translation apps. Now, Spotify is challenging Apple Music with its AI-powered recommendations. Among breakthrough AI applications, the app Prisma is worth noting: it employs neural style transfer to help users turn their photos and videos into art. All of these examples were made possible by recent advancements in natural language processing, machine learning, predictive modelling, sensors, and cloud solutions.

The obstacles of machine learning

Traditionally, computationally heavy machine learning tasks run on cloud servers. The major cloud providers, AWS, Google Cloud, and Azure, all offer general-purpose machine learning APIs for developers. The apparent advantages are ease of use and the ability to run large models. While it’s great that mobile developers need little to no machine learning knowledge, the disadvantages are just as obvious: the user experience depends heavily on network latency, and shipping user data to a server raises privacy concerns.
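
To make that trade-off concrete, here is a minimal sketch of the cloud-inference pattern in Swift. The endpoint URL and response format are hypothetical stand-ins, not any particular provider’s API:

```swift
import Foundation

// Hypothetical cloud classification endpoint: a stand-in, not a real provider API.
let endpoint = URL(string: "https://ml.example.com/v1/classify")!

func classifyInCloud(imageData: Data, completion: @escaping (String?) -> Void) {
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
    request.httpBody = imageData

    // Every prediction pays a full network round trip, so perceived latency
    // is bounded by connection quality rather than model speed.
    URLSession.shared.dataTask(with: request) { data, _, error in
        guard error == nil, let data = data else { return completion(nil) }
        completion(String(data: data, encoding: .utf8))
    }.resume()
}
```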

Applications like voice recognition and object detection require real-time processing. A network round-trip time of over 33 ms alone kills any possibility of running a computer vision task at 30 frames per second, so running the algorithm on-device becomes the only viable option. Moreover, with the rollout of the GDPR (General Data Protection Regulation), keeping user data on-device can potentially save you 20 million euros, or 4% of your total worldwide annual turnover (insider joke).
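
The 33 ms figure is simple frame-budget arithmetic, sketched below; the round-trip value is an assumption for illustration:

```swift
// At 30 fps, each frame must be fully processed within 1000/30 ms.
let frameBudgetMs = 1000.0 / 30.0   // ≈ 33.3 ms

// An assumed mobile network round trip; real values vary widely.
let roundTripMs = 40.0

// If the round trip alone exceeds the frame budget, no time is left for
// the inference itself, so the model has to run on-device.
let remainingMs = frameBudgetMs - roundTripMs
print("Frame budget: \(frameBudgetMs) ms, left after network: \(remainingMs) ms")
```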

The complexity of neural networks

Running deep neural networks on mobile devices is not a trivial task. Although almost all neural networks are designed with high-level frameworks like TensorFlow, Torch, or MXNet and trained on GPU/TPU clusters, the inference (prediction) step needs to run on-device. At the heart of all neural network computation is GEMM (General Matrix-Matrix Multiplication), which typically accounts for about 90% of the computing time. Although BLAS (Basic Linear Algebra Subprograms) libraries are available for iOS and Android, hand-writing the dozens of layers in a network on top of them is not something most mobile developers can do.
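
To see why GEMM is so central, note that a fully connected layer, y = Wx + b, is exactly one matrix multiplication. Below is a minimal sketch using the BLAS implementation that ships in Apple’s Accelerate framework; the sizes and weights are toy values chosen for illustration:

```swift
import Accelerate

let inFeatures: Int32 = 4
let outFeatures: Int32 = 3
let W: [Float] = Array(repeating: 0.5, count: Int(outFeatures * inFeatures)) // toy weights
let x: [Float] = [1, 2, 3, 4]                                                // toy input
var y: [Float] = [0.1, 0.2, 0.3]  // starts as the bias b; GEMM accumulates into it

// Computes C = alpha * A * B + beta * C, i.e. y = 1.0 * W * x + 1.0 * y
cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            outFeatures, 1, inFeatures,
            1.0, W, inFeatures,
            x, 1,
            1.0, &y, 1)

print(y) // the layer's three output activations: [5.1, 5.2, 5.3]
```

A full network is dozens of such calls plus activation functions, padding, and memory management, which is exactly the boilerplate the frameworks below take off developers’ hands.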

Due to the lack of general-purpose GPU computing frameworks like CUDA and OpenCL on mobile systems, writing shader programs to run algorithms on the GPU is also very challenging. Much effort has gone into machine learning frameworks for mobile. Apple released its first machine learning framework, Core ML, at WWDC 2017, built on its high-performance Metal Performance Shaders (MPS) engine. Google debuted TensorFlow Lite in 2017 and followed with ML Kit at Google I/O 2018. These frameworks have vastly lowered the complexity of running machine learning models on mobile devices, and this is just the beginning.
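
As a rough illustration of how much these frameworks simplify things, here is a sketch of on-device image classification with Core ML through the Vision framework. The bundled model name is a hypothetical placeholder:

```swift
import CoreML
import Vision

// "Classifier.mlmodelc" is a placeholder for any compiled Core ML
// classification model bundled with the app.
func classify(_ image: CGImage) throws {
    let url = Bundle.main.url(forResource: "Classifier", withExtension: "mlmodelc")!
    let model = try VNCoreMLModel(for: MLModel(contentsOf: url))

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Vision returns ranked labels for classification models.
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print("\(top.identifier): \(top.confidence)")
        }
    }
    // All computation happens on-device; no network round trip is involved.
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```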

Mobile AI at TTT

Introducing AI to an app involves a lot of hard work. At Two Tall Totems, we have a long history of using artificial intelligence to solve business problems. Four years ago, we helped a medical client develop a mobile application that uses computer vision to measure the range of motion of jaw and head rotations. This month, we debuted a facial recognition system at the BC Tech Summit, which attracted a lot of attention and excitement. We’ve also been working on a smart office assistant product that we’re eager to release later this year. Our team clearly has a passion for AI, and we truly believe that mobile AI will transform most businesses in the near future. Got a killer AI application in mind? Talk to TTT.


To learn more about the latest tech, check out our blog on The Five Most Exciting Technologies of 2018.