How are predictive analytics currently used with regards to asset management? Are there benefits to be gained?

In this article, we will look at predictive analytics, its current application in asset management, possible benefits to be gained and success factors for implementation.

Simplifying the jargon

In all industry sectors and disciplines, there is always a host of terminology (jargon) used that, to the outsider, can be confusing and incomprehensible. We defined some of the key terms in our companion article “Big Data, Predictive Analytics and Maintenance”, but let’s revisit these and add some others here.

In easily understood terms, what does some of the terminology mean?

  • Big Data – lots of data from different sources in different formats
  • Data Mining – determining the relationships between data with respect to a known output
  • Predictive Analytics – using historical relevant data to predict future outcomes based on current data
  • Models and Algorithms – a mathematical version of the relationships between data and outcomes. These are developed as part of Data Mining and Predictive Analytics. A simple example is a model (algorithm, equation) of a straight-line relationship of input and output.

y = ax + b

Where x is the input variable (factor, data), y is the output, a is the slope of the line (the relationship between input and output) and b is the value of the output when the input is zero.
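To make the straight-line model concrete, here is a minimal sketch in plain Python of how the slope a and intercept b can be fitted to historical input/output data by least squares. The function name and the example numbers are illustrative; spreadsheets and statistics libraries perform the same calculation.

```python
# Fit a straight-line model y = a*x + b to historical data by least squares.
# A minimal sketch in plain Python; the example data follows y = 2x + 1 exactly.

def fit_line(xs, ys):
    """Return (a, b) minimising the squared error of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x   # the fitted line passes through the mean point
    return a, b

# Historical input/output pairs:
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # a = 2.0 (slope), b = 1.0 (output when the input is zero)
```

With the fitted a and b in hand, the model can then predict what the output should be for any new input, which is the basis of the anomaly checks described below.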

  • Machine Learning – software that uses historical data to build and progressively improve a model.
  • Advanced Pattern Recognition (APR) – statistical analysis of the difference between the model’s predicted data and the real data.

Predictive analytics

Predictive Analytics uses historical data to determine the relationships of data (factors, variables) with outputs and build models (algorithms, equations) to check against current data. With regard to asset management, the application is mostly in:

  • process optimisation and early warning of process anomalies
  • early warning of asset condition/health anomalies and to some extent a predictive element of asset failure.

How are predictive analytic models used in industry?

For the most part, a model of an asset or a process is created in the software programme that predicts what the value of an output should be based on the current data. If there is a significant difference (anomaly), an alarm or warning is raised.

The aim is to provide much earlier warning than instrument alarm levels or distributed control system alarm levels. An early warning allows corrective action to be planned and scheduled at a convenient time. The aim is not to detect instantaneous failures, as the control systems do this.
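The difference between a fixed alarm level and a model-based early warning can be sketched numerically. All numbers below are invented for illustration: a bearing temperature that normally tracks the model prediction within ±2 °C, and a control-system alarm assumed to be fixed at 90 °C.

```python
# Illustration of why a model-based band can warn much earlier than a fixed
# alarm level. All values are assumptions made for the example.

ALARM_LEVEL = 90.0   # fixed instrument/DCS alarm level (assumed)
MODEL_BAND = 2.0     # normal deviation from the model prediction (assumed)

def classify(predicted, actual):
    if actual >= ALARM_LEVEL:
        return "ALARM"           # the control system acts - too late to plan work
    if abs(actual - predicted) > MODEL_BAND:
        return "EARLY WARNING"   # anomaly - plan and schedule corrective action
    return "OK"

print(classify(70.0, 70.5))  # OK
print(classify(70.0, 75.0))  # EARLY WARNING - still well below the fixed alarm
print(classify(70.0, 91.0))  # ALARM
```

The middle case is the value of the approach: at 75 °C the instrument alarm is silent, but the departure from the predicted value already signals that something has changed.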

The first part of the process is to decide on the critical processes and assets to be monitored. Then connections are created between the required data sources and a historian (database). This can be a site-based historian or a cloud-based historian and will be discussed later. The data sources could include:

  • equipment sensor readings
  • process instrument readings
  • distributed control system data
  • equipment control systems data
  • computerised maintenance management system data
  • any other source of data considered relevant.
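At its core, a historian is a time-series store that many data sources write into. The sketch below uses an in-memory SQLite table purely to show the idea; real site or cloud historian products are far more capable, and the table layout and tag names here are assumptions.

```python
import sqlite3
import time

# Minimal sketch of a historian: a time-series table that different data
# sources (sensors, DCS, CMMS) write into. Table layout and tag names are
# illustrative assumptions, not any particular product's schema.

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE historian (
        ts      REAL,    -- timestamp of the reading
        source  TEXT,    -- e.g. 'pump1.bearing_temp', 'dcs.flow_loop_3'
        value   REAL
    )
""")

def record(source, value, ts=None):
    """One data source pushing a reading into the historian."""
    conn.execute("INSERT INTO historian VALUES (?, ?, ?)",
                 (ts if ts is not None else time.time(), source, value))

record("pump1.bearing_temp", 71.4)
record("dcs.flow_loop_3", 12.8)
rows = conn.execute(
    "SELECT source, value FROM historian ORDER BY source").fetchall()
print(rows)  # [('dcs.flow_loop_3', 12.8), ('pump1.bearing_temp', 71.4)]
```

Whether the table lives on-site or in the cloud changes the connectivity and bandwidth requirements, but not the basic pattern of sources writing readings and models reading them back.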

Models for the processes and assets are created using two broad methods:

  • Using known algorithms that are suitable for the well-understood assets and processes.
  • Using 3 to 12 months of normal operational data (supervised learning) or 3 to 12 months of all data (unsupervised learning) together with machine-learning programmes to develop models.

When these models are considered sufficiently accurate, they are put into production. This means they are set up to sample the real-time data being captured by the historian, at an interval that could be minutes to hours depending on the situation.

The data used is therefore not “Real Time” in the sense of instrument or control system data, but “Near Real Time”. Sampling reduces the volume of data transferred, which is particularly important if data is being sent over the internet: large amounts of data transfer require high-capacity (large-bandwidth) internet connections.
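The reduction from raw readings to near-real-time samples can be sketched simply. The 5-minute interval and the use of a plain average below are assumptions for illustration; actual historians offer several aggregation methods.

```python
# Reduce a stream of raw (timestamp, value) sensor readings to interval
# averages before transfer - an illustrative sketch of near-real-time
# sampling. Interval length and averaging method are assumptions.

def downsample(readings, interval=300):
    """Average (timestamp, value) readings into buckets of `interval` seconds."""
    buckets = {}
    for ts, value in readings:
        bucket = ts - (ts % interval)      # start time of this bucket
        buckets.setdefault(bucket, []).append(value)
    return [(ts, sum(vals) / len(vals)) for ts, vals in sorted(buckets.items())]

# Example: four readings spanning two 5-minute intervals.
samples = downsample([(0, 10.0), (150, 12.0), (300, 20.0), (450, 22.0)])
print(samples)  # [(0, 11.0), (300, 21.0)]
```

Two averaged samples cross the network instead of every raw reading, which is the bandwidth saving the paragraph above describes.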

Anomalies are mostly detected by Advanced Pattern Recognition. In the event a warning is generated, a person with experience of the equipment or process reviews the warning to determine if it is real or within acceptable conditions. This is particularly the case in the early life of a model where Machine Learning has been used to develop the model.
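A residual check in the spirit of Advanced Pattern Recognition can be sketched as follows: the difference between the model’s predicted value and the real value is compared against the spread of residuals recorded during normal operation. The 3-sigma threshold and the example numbers are illustrative assumptions, not any vendor’s actual method.

```python
import statistics

# Sketch of residual-based anomaly detection: a warning is raised when the
# difference between predicted and actual values falls outside the band
# established during normal operation. The 3-sigma band is an assumption.

def make_detector(normal_residuals, n_sigma=3.0):
    """Build an anomaly check from residuals recorded while running normally."""
    mean = statistics.mean(normal_residuals)
    sd = statistics.stdev(normal_residuals)
    def is_anomaly(predicted, actual):
        residual = actual - predicted
        return abs(residual - mean) > n_sigma * sd
    return is_anomaly

# Residuals observed while the asset was healthy (small, centred on zero).
check = make_detector([-0.2, 0.1, -0.1, 0.3, 0.0, -0.3, 0.2])
print(check(100.0, 100.2))  # False - within the normal band
print(check(100.0, 104.0))  # True  - raise an early warning for review
```

As the article notes, a flagged warning still goes to a person with experience of the equipment, especially early in a machine-learned model’s life, since the band reflects only the data the model was trained on.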

Most of the software also provides Dashboards that provide an indication of equipment or process health together with an expected life until failure. The accuracy and effectiveness of calculating and accumulating adverse operating conditions to provide this estimate has not been examined by the author.

There are two broad methods of using Predictive Analytics:

  • Site-based – where all of the additional hardware and software is purchased by the organisation, installed and implemented on-site, usually with assistance from the software supplier.
  • Cloud-based – where a subscription is purchased for a period (e.g. 6 to 12 months). This is for use of the supplier’s hardware, software and, often, technical oversight of the models.

Power generation (including nuclear, fossil fuel, wind and solar) and distribution companies have adopted extensive use of Predictive Analytics, possibly as a result of power generation and distribution OEMs (GE and Schneider Electric) developing their own predictive analytics in conjunction with software companies. Mining equipment OEMs such as CAT, Komatsu and Hitachi, in partnership with various software companies, are working on their own predictive analytics.

Savings reported by several large power generation companies include:

  • EDF (France) – 300 power plants (nuclear, hydro, solar); A$1.5 M saved so far from 150 power plants
  • Exelon (USA) – 10 nuclear power stations and 17 reactors; A$4.6 M savings per year
  • Tata Power (India) – one of the largest integrated power companies in India (thermal, hydro, solar, wind); A$355,000 savings for a single catch
  • SSE (UK and Ireland) – the UK’s broadest-based energy company; 5 to 6 catches per month, A$5.7 M/yr from early detections, A$14.2 M/yr reduction in insurance cost

Note: a “catch” is an early warning of a failure where the consequences of the failure were eliminated or significantly reduced.

Success factors for implementation

Organisations that have successfully implemented predictive analytics solutions have a number of common factors that include:

  • Dedicated resources to monitor and maintain the system
  • Selected critical assets and/or processes to be monitored
  • Recording, quantifying in $ terms and widely reporting catches on a regular basis

The set-up, implementation, resourcing and maintenance of a Predictive Analytics system need to be offset against benefits to the business. Without continual reporting of quantified benefits, senior management are likely to wonder why they are allocating resources to something like this when the plant or equipment is performing well.


There are benefits to be gained from the use of predictive analytics in a wide range of industries, provided the success factors mentioned above are taken into consideration. Avoiding or significantly reducing the consequences of a single failure of critical equipment with prolonged downtime could easily pay for such a system many times over.

I hope this article has provided an understanding of how predictive analytics is currently being used in industry. If you need help implementing predictive analytics in your organisation, contact us now to discuss your needs and how we can help.

For future useful articles sign up to our mailing list now.
