Recent Advances in Machine Learning with Applications to IoT
Cisco estimates that 2020 will see around 50 billion connected devices, with a market worth about $14.4 trillion by 2022. The World Economic Forum likewise predicts that IoT, AI, and machine learning will rank among the top five drivers of technology adoption. Companies and researchers are working to advance analytics capabilities to meet this global demand.
Adam McElhinney is here to outline recent advances in machine learning in the IoT space, particularly in the Industrial Internet of Things (IIoT), and how these advances could save money and increase efficiency in his talk for ODSC’s 2019 Accelerate AI, “Recent Advances in Machine Learning with Applications to IoT.”
The High Stakes of IIoT
The value of IIoT use cases is much higher than in consumer-facing technologies, mainly because the overall dollar amounts are just higher. There's only so much money to be made from counting steps, for example, but fleet management is worth billions.
IIoT is helping cut costs and build more efficient supply chains by allowing shippers and suppliers to deliver goods within tight windows and in precise quantities. Those stakes create massive incentives for better IIoT, using machine learning and cloud computing to make predictions from truly massive data.
Major Use Case: Failure Prediction
For industrial machines, the cost of maintenance comes in two parts. First, the hard cost of parts and labor, which is the smaller part of the equation. Second, and far larger, the revenue lost while the machine is down for any period.
Machine learning allows companies to identify small changes in performance that indicate possible issues. Detecting failures well before they happen allows companies to prevent unplanned downtime and minimize costs of repair.
The P-F curve charts a machine's condition over time, from the point at which a potential failure first becomes detectable (P) to the point of functional failure (F). Machine learning sharpens the traditional P-F curve, narrowing predictions down to very fine time windows. As the models improve, managing these machines becomes more efficient.
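The idea of catching a machine early on the P-F curve can be sketched as a simple drift detector that flags readings deviating sharply from a rolling baseline. This is a minimal illustration under invented sensor values and thresholds, not the method from the talk:

```python
from statistics import mean, stdev

def flag_drift(readings, window=10, z_thresh=3.0):
    """Flag indices where a reading deviates sharply from the rolling
    mean of the previous `window` readings -- a crude stand-in for
    spotting a machine starting to slide down the P-F curve."""
    flags = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_thresh:
            flags.append(i)
    return flags

# Healthy vibration levels, then a sudden shift upward at index 12.
data = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0,
        1.0, 1.02, 5.0, 5.1, 5.2]
print(flag_drift(data))  # → [12]
```

Note that only the first abnormal reading is flagged: once the shifted values enter the rolling window, they inflate the baseline's spread, which is exactly why production systems use more robust baselines than a plain rolling mean.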
What’s Driving IIoT
There are several main components of IIoT that are driving adoption rates.
- Decreased sensor cost: Cheaper sensors allow us to capture more data because more of our devices carry them. Sensors were originally installed for operation; now we can also use them for prognostics and diagnostics.
- Decreased transmission cost: We no longer need to download information manually. The most popular transmission method is now cellular, and its cost has dropped dramatically.
- Decreased storage costs: We process around 1.2 billion data bits per day. Cloud storage is more efficient and allows companies to store that data for future insight.
- New algorithms: We're reaching human levels of detection with the latest algorithms. In 2015, for example, ImageNet algorithms reached human-level performance on image classification. The same advances translate into impactful monitoring of IIoT.
- Decreased computing costs: We can run powerful algorithms more cheaply because of advances in computing technology.
What Is the Value of Prognostics?
If we could reduce unplanned downtime through better prognostics by even 10%, we may see savings of millions or billions of dollars. For example, according to McElhinney, unplanned downtime in locomotives costs companies $322,000 per locomotive per year. The average Class-I railroad could see savings of around $86 million if that 10% of unplanned downtime were converted to planned maintenance.
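The cited figure can be reproduced with back-of-the-envelope arithmetic. The fleet size below is an assumption chosen to be consistent with the article's numbers, not a figure from the talk:

```python
cost_per_locomotive = 322_000  # unplanned downtime cost per locomotive per year (McElhinney)
reduction = 0.10               # fraction of unplanned downtime converted to planned maintenance
fleet_size = 2_670             # assumed Class-I fleet size (not stated in the article)

savings = cost_per_locomotive * reduction * fleet_size
print(f"${savings:,.0f}")      # roughly the $86 million cited
```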
Challenges of Implementation
The process of machine learning operates a little differently in the industrial use case.
- Connectivity — Cell connectivity can be spotty. In remote and rural environments, coverage is sparser, and in those extreme environments machine failure presents even more significant issues.
- Data transmission issues — Machines transmit data only when the machine itself is turned on. If the machine is off, you don’t know if the lack of data is because of a critical error or because of purposeful shutdown.
- Failure/Repair data is inaccurate and inconsistent — The most crucial source of data is based on humans. Data entry relies solely on human input, and that can’t cover the breadth needed. Also, asset hierarchies must be standardized across companies and equipment types.
- Failure causes are difficult to ascertain — Machine learning algorithms haven't reached human levels of causal understanding. They look for patterns, some of which may be learned incorrectly.
- There are many types of failure — The top failure code in most industries only accounts for around 4% of failure types according to McElhinney. This is another area where the breadth of information can be challenging to overcome.
- High-value failures are rare — The failures that matter most are the rarest in the data, creating a massive class imbalance that makes modeling more difficult.
- Sensor limitations — Fixed sensor placement dictates what information is available. Some events happen too quickly for current sensor technology.
- Machine aging — Within a fleet, you may have radically different ages of the same machine, making it difficult to create consistent rules and models.
- Differences between fleets — Even with controlling for differences in type or age, there could be variances in certain signals between fleets.
- Seasonality in the data — You must account for seasonality data in addition to fleetwide and age differences.
- Difficult to measure value — Operations are complex and nonlinear. For example, one broken-down train also causes delays in other, still-operational trains.
- Out-of-order events — Connectivity issues and latency sometimes deliver data out of order. Humans reorder events intuitively; algorithms find this far harder.
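One of the challenges listed above, the rarity of high-value failures, is commonly attacked with class weighting: rare failure examples are upweighted so a model can't simply learn "never fails." A minimal sketch of the standard inverse-frequency heuristic, with invented label data:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency, so rare failure
    classes contribute as much to the loss as common healthy examples
    (the usual 'balanced' heuristic: n_samples / (n_classes * count))."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# 96 healthy readings, 4 failures: the imbalance described above.
labels = ["healthy"] * 96 + ["failure"] * 4
weights = inverse_frequency_weights(labels)
print(weights)  # failure examples get 24x the weight of healthy ones
```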
The Main Approaches to Overcoming Obstacles for Recent Advances in Machine Learning and IIoT
There are two approaches to overcoming the obstacles of IIoT.
Physics-Based Simulations
Complex computer codes simulate these physical systems, running diagnostics and hypotheses simultaneously in a virtual environment. They operate by solving the partial differential equations of mechanics and thermodynamics.
These simulations are reserved for high-value assets because they are labor-intensive and expensive: typically, a Ph.D.-level engineer builds each model individually. They are suited to low-probability but very high-risk failures, such as those in nuclear plants.
Data-Driven Failure Models
The data from failures are analyzed through sophisticated algorithms to create a series of predictions and rules-based understandings of the target environment. For more frequent failures on lower-risk assets, machine learning is a much better modeling option.
Physics-based simulations aren't possible at scale, but machine learning allows companies to scale and keep learning, and it can draw useful inferences even from previously unseen faults.
Within data-driven models, there are subsets of models:
- knowledge-based models — make inferences from previously observed situations
- life expectancy models — estimate remaining life from degradation and usage
- artificial neural networks — compute predictions and estimations from observation data; transfer learning falls into this area and has massive potential for prognostics
- physical models — compute predictions and estimations from the physical behavior of degradation
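The life-expectancy category above can be illustrated with a toy remaining-useful-life estimate: fit a straight line to a degrading health signal and extrapolate to a failure threshold. This is a deliberately simple sketch with invented numbers; real models handle noise, regime changes, and censored data:

```python
def remaining_useful_life(health, threshold):
    """Fit a line to a degrading health signal by ordinary least
    squares and extrapolate to the failure threshold. Returns the
    estimated time steps remaining, or None if not degrading."""
    n = len(health)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(health) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, health)) / \
            sum((x - x_mean) ** 2 for x in xs)
    if slope >= 0:
        return None  # signal is flat or improving, no failure projected
    intercept = y_mean - slope * x_mean
    t_fail = (threshold - intercept) / slope  # time when the line crosses the threshold
    return max(0.0, t_fail - (n - 1))

# Health index drops ~2 points per step; failure declared at 70.
health = [100, 98, 96, 94, 92, 90]
print(remaining_useful_life(health, threshold=70))  # → 10.0
```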
The Future of IIoT and Analytics
Despite the current obstacles, these sensors and algorithms keep getting better at building more efficient models. The learning curve is steep, but the field is ripe for innovation and a potentially high-value target, with more recent advances in machine learning to come.