The latent promise of the Internet of Things will only be realized if machines can detect complex signal patterns across a wide range of data sources, whether those sources sit inside, across or outside existing data silos.

Today, progress is held back by monolithic application silos and data wastage. Where can machine learning have the most impact in IoT technology? Where will we see ML take hold in consumer tech and industrial/enterprise use cases?


Parsing AI, ML and Deep Learning


ML excels at tasks that are too complex for humans to create software programs for directly. Practically speaking, this means that instead of a human explicitly creating software programs, large amounts of data are run through an ML algorithm to see if the machine can find the model or function that a data scientist seeks.

Some of the tasks best solved by ML (with IoT data inputs) include:

Pattern recognition, such as recognizing a concealed weapon, facial expressions or visual objects. IoT sensor inputs include cameras.

Anomaly detection, such as unusual machine or environmental readings. IoT sensor inputs include machines, vibration sensors and temperature sensors.

Predicting future events, answering questions such as when a machine will fail or what happens to one’s overall system performance if a particular part fails. IoT sensors include machines, vibration sensors and temperature sensors.
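To make anomaly detection concrete, here is a minimal sketch in plain Python that flags readings far from the mean of a hypothetical temperature feed; a production system would use a learned model rather than a fixed z-score rule.

```python
from statistics import mean, stdev

def zscore_anomalies(readings, threshold=2.5):
    """Flag readings whose z-score exceeds the threshold."""
    mu, sigma = mean(readings), stdev(readings)
    return [x for x in readings if abs(x - mu) / sigma > threshold]

# Mostly steady temperature readings with one spike.
temps = [21.0, 21.2, 20.9, 21.1, 21.0, 35.0, 21.1, 20.8, 21.0, 21.2]
print(zscore_anomalies(temps))  # [35.0]
```

The same shape generalizes to vibration or machine telemetry: establish what "normal" looks like, then surface deviations.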

The 451 Alliance’s IoT survey reveals that enterprises are beginning to recognize the transformative possibilities of applying ML to the data that runs through and around their business, including, of course, IoT.

That demand is driven in part by the leaders of cloud and IT such as Amazon Web Services (AWS), Google, Microsoft and IBM, which have invested hundreds of millions of dollars into AI and ML to run their own businesses and, importantly, democratize those capabilities so more and more enterprises have access to that computing power.

At Google I/O 2018, the company demoed how its Object Detection API can be used in practice with a Raspberry Pi (inexpensive IoT hardware). The example illustrated how a programmer (not a data scientist) can quickly build a simple system to collect camera data, orchestrate it (via Google Cloud IoT Core) and run ML algorithms via TensorFlow (created by Google, now an open source library for dataflow programming, often used for ML applications such as neural networks).
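The moving parts of that demo can be sketched abstractly. Everything below is a stand-in written as plain Python stubs, not the actual Cloud IoT Core or TensorFlow APIs; it only shows the collect, orchestrate and infer shape of the pipeline.

```python
# Hedged sketch of a camera -> transport -> inference pipeline.
# The camera, queue and model here are hypothetical stand-ins.

def capture_frame(camera):
    return camera()                      # e.g. a Raspberry Pi camera read

def publish(frame, queue):
    queue.append(frame)                  # stand-in for an MQTT publish

def detect_objects(frame, model):
    return model(frame)                  # stand-in for model inference

fake_camera = lambda: {"pixels": [0, 1, 1, 0]}
fake_model = lambda frame: ["person"] if sum(frame["pixels"]) > 1 else []

queue = []
publish(capture_frame(fake_camera), queue)
print([detect_objects(f, fake_model) for f in queue])  # [['person']]
```

The point of the demo was exactly this separation of concerns: the programmer wires up capture and transport, while the trained model supplies the intelligence.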

The 451 Alliance AI/ML survey notes the relative infancy of ML in terms of production deployment. It is clearly on the cusp of wider-scale adoption: 37% of respondents indicate that they are in late-stage development of ML but have not yet deployed it, while only 17% indicate that they already have ML in production.

Does your organization have a machine learning initiative?

Even for those deploying ML, its scope is likely to be confined to a very narrow piece of functionality to automate specific tasks. No organizations of any scale are completely transforming themselves using AI/ML. Education is still required to help enterprises understand how ML can be used as a tool to solve business problems.


The Possibilities: Now and Future

Tasks Enabled by IoT: Human or Machine

The simple quadrant chart above provides a lens through which to view IoT-connected devices and their value possibilities.

On the horizontal axis we present two options answering the question ‘Who is providing the intelligence?’ – either humans or machines (the ML algorithm).

On the vertical axis we categorize tasks as either simple or complex.

While we contend that today humans are still the best stewards of IoT scenarios such as remote surgery, the upper-right quadrant is where computers, and specifically the application of ML, will shine and consistently outperform human beings.

For simple tasks, both humans and machines can remotely monitor, program and control IoT machines from anywhere via digital, speech or AR/VR interfaces equally well.

For complex tasks that require dexterity, specific domain knowledge and flexibility for changing conditions, humans are a much better option than machines as the source of intelligence, although advances in fields such as industrial robotics and autonomous vehicles (AV) dictate that this is not a static situation.

In the upper-right quadrant, analysis and interpretation, humans don't stand a chance against machines.

Machines analyze simple or complex IoT datasets, at massive speed and scale, to find insights that a human couldn't or wouldn't. ML programs can weigh several complex datasets and signal variables together to uncover patterns and signals that no human being could. This upper-right quadrant is where we will focus next.

Building Automation

Building automation is one of the most exciting horizontal segments for the combined powers of ML and IoT data, but also one of the most difficult. AI applied to building automation involves integrating automation with legacy building systems.

Commercial buildings are strongly influenced by variable events such as occupancy, weather, temperature and energy costs that include overlapping cycles (daily, weekly, monthly and seasonal, to name a few).

In general, standard ML and data mining algorithms work poorly on time-series data out of the box. Time-series data, as the name indicates, differs from other types of data in that the temporal ordering of observations matters.

On a positive note, this provides additional information that can be used when building an ML model – not only do the input features contain useful information, but so do the changes in input/output over time.
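One common way to exploit that temporal information is to hand the model lagged values and recent changes as explicit input features. A minimal sketch, assuming a univariate temperature series:

```python
def lag_features(series, lags=(1, 2)):
    """Turn a univariate series into (features, target) rows using
    lagged values and the most recent first difference as inputs."""
    rows = []
    max_lag = max(lags)
    for t in range(max_lag, len(series)):
        feats = [series[t - k] for k in lags]        # lagged values
        feats.append(series[t - 1] - series[t - 2])  # recent change
        rows.append((feats, series[t]))
    return rows

temps = [20.0, 20.5, 21.0, 21.5, 22.0]
for feats, target in lag_features(temps):
    print(feats, "->", target)
```

Each row pairs what the model may look at (past values and their trend) with what it must predict (the next value), which is the standard framing for supervised learning on time series.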

However, while the time component adds information, it also makes time-series problems more difficult to handle than many other prediction tasks.

Buildings outfitted with sensors and/or connected HVAC, security and lighting systems can leverage IoT data to actively adjust HVAC, access, security and lighting controls, and to learn and react to worker movement, external conditions, individual or group preferences, emergency situations and more.

Today, actionable data is typically generated from IoT devices – such as motion detectors, photocells, temperature gauges, and carbon dioxide and smoke detectors – that are used primarily for energy savings and safety.

The ROI of such systems typically comes down to energy savings, as commercial buildings trail only transportation and energy production in overall energy usage. These systems also improve worker experience and overall building functionality and efficiency.

There are several use cases where ML and deep learning techniques can be applied to make better building automation decisions.

Office Space Optimization

In this segment, cameras and/or motion sensors collect data for optimizing the use of office equipment, meeting rooms and lighting, as well as HVAC, in commercial office spaces. This type of solution can help maximize energy efficiency by using heating, cooling and lighting only where they are functionally needed.
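A toy version of the underlying control logic, with illustrative setpoints rather than any vendor's actual rules: condition a zone fully only when occupancy is sensed, and relax the setpoint otherwise.

```python
def hvac_setpoint(occupied, outside_temp, comfort=21.0, setback=4.0):
    """Only condition a zone fully when it is occupied; otherwise
    relax the setpoint toward outside conditions to save energy."""
    if occupied:
        return comfort
    # Unoccupied: let the zone drift toward outside temperature, capped.
    if outside_temp < comfort:
        return comfort - setback      # allow the empty zone to cool
    return comfort + setback          # allow the empty zone to warm

print(hvac_setpoint(True, 5.0))    # 21.0 - occupied, hold comfort temp
print(hvac_setpoint(False, 5.0))   # 17.0 - empty in winter, save heating
print(hvac_setpoint(False, 30.0))  # 25.0 - empty in summer, save cooling
```

In a real deployment the occupancy flag would itself be an ML output (from camera or motion-sensor analytics), and the setback would be tuned per zone.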

PointGrab, a startup backed by ABB, has developed an offering that uses AI to sense and analyze where and how people use a space, while maintaining high standards for privacy and data security. PointGrab claims its solution can save up to 30% of annual expenses for an office space using real-time motion analytics.

The easy win here is that, according to industry estimates, a commercial office typically utilizes only about 40% of its usable space at any given time.

Fleet Management

Another use case segment where AI will have a profound impact in conjunction with IoT connectivity is fleet management. Even without AI, fleet management solutions already create significant value by answering easy questions like ‘When will the delivery be made?’ and ‘Who is driving which vehicle?’ using simple connectivity, GPS, driver apps and management portals. Once that infrastructure is in place, the possibilities for advanced analytics are boundless.

Using predictive analytics, fleet managers will be able to predict vehicle failures more accurately. If enough data is available on past cargo mishaps, such as spoilage of meat and vegetables or damage to physical fleet inventory, then ML algorithms can be leveraged to alert, anticipate and avoid, drawing on historical data and real-time environmental data such as temperature, moisture and vibration. External variables such as weather and road conditions can be used to optimize routes for safety and fuel efficiency.
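As a sketch of the alerting half of that idea, the rule-based check below uses hypothetical per-cargo limits; a deployed system would learn its thresholds from historical mishap data rather than hard-code them.

```python
# Hypothetical per-cargo limits - illustrative values only.
LIMITS = {
    "meat":       {"temp_max": 4.0,  "vibration_max": 2.0},
    "vegetables": {"temp_max": 10.0, "vibration_max": 3.0},
}

def cargo_alerts(cargo, readings):
    """Return alerts for real-time readings that breach the cargo's limits."""
    limits = LIMITS[cargo]
    alerts = []
    for r in readings:
        if r["temp"] > limits["temp_max"]:
            alerts.append(("temperature", r["temp"]))
        if r["vibration"] > limits["vibration_max"]:
            alerts.append(("vibration", r["vibration"]))
    return alerts

readings = [{"temp": 3.5, "vibration": 1.0}, {"temp": 6.2, "vibration": 1.1}]
print(cargo_alerts("meat", readings))  # [('temperature', 6.2)]
```

The same readings that are fine for vegetables trip an alert for refrigerated meat, which is why cargo-aware thresholds matter.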

Autonomous driving systems embedded with AI functionality will help keep drivers and vehicles out of harm’s way. An Israeli startup called Fleetonomy is creating a platform that combines AI and fleet data to deliver insights and allow fleet managers to simulate services before deploying physical vehicles.

Worker Safety and Fatigue Monitoring

One of the most important factors to keeping workers safe is their overall level of fatigue and alertness.

When it comes to the use of biometric signals and fatigue monitoring, the most popular method is ‘percentage of eye closure,’ or PERCLOS. While there is clearly a correlation between PERCLOS and fatigue-based impairment, this method is also susceptible to false positive results due to external factors such as glare, dust and humidity.
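PERCLOS itself is a simple statistic: the fraction of frames in a monitoring window during which the eyes are closed. A minimal computation, assuming an upstream vision system has already labeled each frame (the 15% alert threshold below is illustrative):

```python
def perclos(eye_closed_flags):
    """Percentage of frames in a window in which the eyes are closed."""
    return 100.0 * sum(eye_closed_flags) / len(eye_closed_flags)

# One flag per video frame over a monitoring window: 1 = eyes closed.
window = [0, 0, 1, 1, 0, 0, 0, 1, 0, 0]
score = perclos(window)
print(score)         # 30.0
print(score > 15.0)  # True - exceeds an illustrative alert threshold
```

The false positives mentioned above enter through the per-frame labels: glare or dust can make an open eye look closed, which inflates the score.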

SmartCap, an Australian firm focused on the use of electroencephalography (EEG) to predict fatigue, employs a device similar to a baseball cap and a ‘fatigue algorithm’ that predicts fatigue and drowsiness based on an individual’s EEG readings.

Predictive Maintenance

In the manufacturing industry, the goal is to keep a mechanical system working for as long as is safely and cost-effectively possible, and to predict failure points before they occur.

When a machine breaks in a manufacturing process, it often triggers a chain reaction of problems, either within the machine itself (which can have hundreds or thousands of moving parts) or across the larger manufacturing system, for example by stopping the entire assembly process and taking all machines offline.

Given the costs involved with such failures and the opportunity cost of machine downtime, there is a very high value associated with early prediction of anomalies; the field of predictive maintenance is far and away the leading use case for applying algorithms to machine data.

Of course, the value is easy to understand, but such systems are extremely difficult to deploy in scenarios where no training data exists for anomalous behavior. The excitement in this particular case comes from the ability to bring ‘exact science’ to the inefficient field of planned preventative maintenance, where maintenance is scheduled on machines in working order in an effort to avoid downtime.

In the case of preventative maintenance there is a constant risk of under- or over-maintaining equipment. Predictive maintenance solutions use data from various sources including historical maintenance records, sensors directly affixed to machines or within subsystems, and environmental data such as temperature, vibration and noise level.
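A crude stand-in for such a system: compare recent sensor behavior against the machine's own early baseline and alert on drift. Real deployments use far richer learned models over many signals, but the shape is similar. All values below are illustrative.

```python
from statistics import mean

def drift_alert(vibration, baseline_len=5, factor=1.5):
    """Alert when recent vibration drifts well above the machine's
    own healthy baseline - a toy proxy for a learned anomaly model."""
    baseline = mean(vibration[:baseline_len])   # early "known good" window
    recent = mean(vibration[-baseline_len:])    # latest readings
    return recent > factor * baseline

healthy = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 1.1, 0.95, 1.0, 1.05]
wearing = [1.0, 1.1, 0.9, 1.0, 1.05, 1.4, 1.7, 1.9, 2.1, 2.3]
print(drift_alert(healthy))  # False
print(drift_alert(wearing))  # True
```

Because the baseline is per-machine, the check adapts to each asset's normal operating level instead of relying on a fleet-wide constant, which is the key advantage over calendar-based preventative schedules.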

Companies such as Uptake offer predictive maintenance software that uses ML algorithms to translate massive amounts of raw data into actionable insights for their customers. The goal is to keep mission-critical assets up and running at peak performance levels. Uptake has been able to greatly increase the reliability of wind turbines for customers such as the MidAmerican Intrepid Wind Farm by alerting operators to potentially catastrophic failures within turbines before they occur.

Smart Traffic Management

Today, traffic lights run on fixed sequences and are not designed to react to the vehicles actually passing through them. Traffic congestion is a scourge on everything from worker productivity to pollution and citizen satisfaction, so solutions are desperately needed.

AI solutions have already arrived in applications such as smart traffic light systems designed to ease congestion. These solutions have been trialed for the better part of five years and are largely ready for scaled deployment.

Rapid Flow Technologies, based in Pittsburgh, Pennsylvania, developed a system called Surtrac in conjunction with the Intelligent Coordination and Logistics Laboratory at the Robotics Institute of Carnegie Mellon University as part of the Traffic21 research initiative. The system was initially rolled out in 2012 and has since been expanded significantly after reducing travel times by more than 25% and wait times by 40% on average.
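The core idea behind adaptive signaling can be sketched in a few lines: allocate green time in proportion to sensed demand. This is an illustrative toy, not Surtrac's actual algorithm, which schedules signals over predicted vehicle arrivals.

```python
def green_splits(queues, cycle=60, min_green=5):
    """Split a fixed signal cycle (seconds) among approaches in
    proportion to their sensed queue lengths, with a minimum green."""
    total = sum(queues)
    if total == 0:
        return [cycle // len(queues)] * len(queues)
    spare = cycle - min_green * len(queues)
    return [min_green + round(spare * q / total) for q in queues]

# Four approaches; the first is heavily congested.
print(green_splits([12, 3, 2, 3]))  # [29, 11, 9, 11]
```

Even this naive rule gives the congested approach almost half the cycle, which is the behavior fixed-sequence lights cannot provide.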

In the UK, the city of Milton Keynes is installing smart traffic lights that can detect congestion and alter traffic patterns accordingly. The system, designed by Vivacity Labs, will comprise 2,500 AI-powered camera sensors deployed in traffic lights across the city. The sensors will cover a 50-square-mile area and maintain a constant view of all major traffic junctions and parking spaces around the city. The traffic lights will be able to prioritize the passage of ambulances, buses and cyclists. Innovate UK, part of the government-funded UK Research and Innovation group, has invested US$2m in the project.


IoT and AI Use Cases for Consumers


The phenomenon known as ‘consumerization,’ or consumers beginning to leverage their personal digital tools for work, started with Wi-Fi, smartphones and apps. Consumerization brought with it nasty IT challenges such as rogue WLAN access points and employees insisting on using their smart devices in the work setting.

Similar problems will likely crop up with AI as applied to IoT datasets, as consumer adoption once again leads the way. That consumer-first path makes sense given the mass-market appeal of consumer and home-based solutions for technology and application vendors, the deep AI chops of the leaders in those ecosystems, and the relative simplicity of those computing environments compared with an enterprise.

While we are most excited about the potential for AI in the context of enterprise and industrial applications, it’s worth taking a peek at some of the innovations possible in the consumer segment.

Home Healthcare and Motion Detection

One of the more interesting applications of ML applied within the realm of home healthcare and motion detection comes from a startup vendor called Aerial Technologies.

While it’s not overtly an IoT solution, it has a cool factor and illustrates the power of AI to transform, for all intents and purposes, a regular Wi-Fi router into a motion-sensing and analysis machine.

The software is set to enable value-added applications such as elderly care and motion detection by translating standard Wi-Fi signal data into intelligence about motion. The company claims that its algorithms are so well honed (i.e., trained on Wi-Fi traffic patterns) that they can detect the breathing rate of home occupants, to say nothing of emergency events such as slips and falls.

The company has a cloud-based system that uses ML to understand motion without requiring the expensive and disruptive deployment of wearable devices, cameras or other sensors. All that is required is client software, which can run on industry-standard WLAN routers. Of course, these outcomes can be achieved through traditional sensing technologies, but the appeal here is the existing ubiquity of WLANs in people’s homes.
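The underlying intuition is that human movement perturbs the radio channel between transmitter and receiver. A heavily simplified sketch using received signal strength follows; real systems like Aerial's use much richer channel state information and trained models, and the samples and threshold here are illustrative.

```python
from statistics import pstdev

def motion_detected(rssi_samples, threshold=2.0):
    """Movement between transmitter and receiver perturbs the channel,
    which shows up as variance in received signal strength (dBm)."""
    return pstdev(rssi_samples) > threshold

still   = [-52, -51, -52, -52, -51, -52]   # quiet room: stable RSSI
walking = [-52, -47, -58, -44, -61, -49]   # person moving: noisy RSSI
print(motion_detected(still))    # False
print(motion_detected(walking))  # True
```

No new hardware is involved: the "sensor" is the variance of signals the router already measures, which is exactly the appeal of Wi-Fi sensing.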

Any ISP or telco offering home broadband service could package the software as part of its CPE or vCPE, layering value-added services on top of its traditional offerings without asking consumers to do anything but ‘turn it on,’ which completely eliminates the need for new devices or habits. Aerial has announced a partnership with Quantenna to integrate its software directly with the firm’s high-performance Wi-Fi chipsets.

Wearable Computing

In wearable computing, smart watches and fitness monitors can collect and translate personal heart rate data into lifesaving insights.

A company called Cardiogram has developed an algorithm that uses the Apple Watch to detect atrial fibrillation, the most common heart arrhythmia, with higher accuracy than previously validated methods. Cardiogram users generate massive amounts of unlabeled heart rate data: the company applied 139 million heart rate measurements to pre-train its neural network.
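While Cardiogram's approach is a deep neural network, the reason heart-rate data carries an AF signal at all can be illustrated simply: atrial fibrillation produces irregular beat-to-beat (RR) intervals, which even a crude variability statistic can expose. The intervals and threshold below are illustrative, not a clinical method.

```python
from statistics import mean, pstdev

def rr_irregularity(rr_intervals_ms):
    """Coefficient of variation of beat-to-beat (RR) intervals;
    atrial fibrillation produces an irregularly irregular rhythm."""
    return pstdev(rr_intervals_ms) / mean(rr_intervals_ms)

regular   = [800, 810, 795, 805, 800, 798]    # steady sinus rhythm
irregular = [620, 980, 710, 1100, 560, 900]   # AF-like intervals
print(round(rr_irregularity(regular), 3))     # small
print(round(rr_irregularity(irregular), 3))   # much larger
print(rr_irregularity(irregular) > 0.1)       # True - crude AF flag
```

A trained model learns far subtler patterns than this one statistic, but the example shows why a wrist-worn heart rate sensor contains enough signal to be worth mining.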

Autonomous Driving

Autonomous driving is perhaps the most controversial and globally impactful application of IoT and AI.

Several firms are investing billions of dollars in the field, including giants like Uber, which acquired the startup Otto in 2016 to form Uber Advanced Technologies Group. In Otto’s model, the driver rides within the vehicle and switches the truck to autonomous mode once it reaches the highway.

The choreography between the AV sensors embedded in vehicles and the software that controls driving creates a comprehensive model of the environment surrounding the vehicle. These integrated sensor inputs are combined with 3-D maps as a composite input into an AI model that makes path-planning decisions that are subsequently actuated through steering, braking and acceleration systems.
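A drastically simplified sketch of that fusion step: combine per-sensor obstacle confidences into a single braking decision. The sensors, weights and threshold below are illustrative, not any vendor's actual fusion model.

```python
# Illustrative sensor weights - a real system learns and adapts these.
SENSOR_WEIGHTS = {"lidar": 0.5, "camera": 0.3, "radar": 0.2}

def fused_confidence(detections):
    """Weighted average of per-sensor obstacle confidences (0..1)."""
    return sum(SENSOR_WEIGHTS[s] * c for s, c in detections.items())

def should_brake(detections, threshold=0.5):
    return fused_confidence(detections) >= threshold

clear    = {"lidar": 0.1, "camera": 0.2, "radar": 0.0}
obstacle = {"lidar": 0.9, "camera": 0.8, "radar": 0.7}
print(should_brake(clear))     # False
print(should_brake(obstacle))  # True
```

The value of fusion is robustness: no single sensor has to be trusted alone, so a camera blinded by glare or a radar ghost return is outvoted by the other inputs.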

NVIDIA is a leader in this capability via its robust suite of hardware and software and its broad partnerships with automotive OEMs, tier-one suppliers and emerging startups. Another leader in this space is Google’s AV subsidiary Waymo, which began its AV journey in 2009 and has among the most AV miles driven (10 million).