Identifying Automation System Tribbles

By Roy Kok - Aware Technology, Inc.


“Scotty - Status Report…
Aye Captain, All Systems Normal…”

How did Scotty know? How do you know when your systems are normal? Are the Dilithium crystals at their proper temperature and vibrational frequency for this Warp Speed and area of space? Could he be sure he caught every last Tribble, and that none are blocking the cooling ducts to the Anti-Matter Drive? Is he alarming on enough variables to sense a “Disturbance in the Force?” (Just testing you – I know that was Star Wars…)

It seems like an easy question until you explore typical automation systems and find that they are designed to control the process and are NOT designed with troubleshooting as a prime directive. After all, controlling is a hard enough thing to do. How much money is left over for extensive diagnostics, troubleshooting, and adaptive alarming?

Have you considered that an absence of Alarms is not necessarily an indication of “Systems Normal?” Are your systems in the states they are usually in? And, by what percentage? Is this the state your systems are in 90 percent of the time or 10 percent of the time, under these current conditions of load, throughput, and raw materials? When was the last time your systems were in this state?

HMI/SCADA (Human Machine Interface and Supervisory Control and Data Acquisition) systems are our automation eyes into the process. Yet these eyes are typically focused on the items we think are leading indicators, the Alarm Indicators that alert us to extreme conditions. And even then, they are loosely tuned, the main reason being that tightly tuned alarms on all the key variables will flood the user with too much data and not enough information. Hence, alarms are set for the most critical variables, and their management through varying process conditions is loose. Are your system alarms varying with load, flow, seasonal variances, etc.?

So, how do we make improvements?

  • Enhance your HMI/SCADA systems with additional alarms and live with the noise. Not optimal, as this will desensitize your users/operators and will have the added side effect of making them cranky.
  • Hire additional operators to keep an eye on your process. The operators will enjoy the additional camaraderie, and more eyes on the process will manage it more effectively; however, this approach brings additional and significant overhead and tends to make Management cranky.
  • Mathematically model your processes and compare existing conditions to the plant model. This is a technically elegant solution that relies on high-quality data and will give the process engineers a wonderful implementation challenge, but the upfront expense and ongoing maintenance will likely make everyone cranky.

The solutions above would not meet Star Fleet objectives.

Complicating systems is not an answer, even if you have some form of intelligent alarm management to separate the noise from the real alerts. There are some markets where alarms are simply not tolerated. An alarm will require an explanation, and that means paperwork, workflow routing, etc., adding time and complexity to a batch acceptance. In today’s business climate, increased manpower isn’t a solution either. And while modeling is a very elegant and potentially effective solution, it is also complex to install and maintain, and the strength of a model-based solution really resides in its ability to do predictive analytics and Advanced Process Control. How many applications really need Advanced Process Control? How many processes really need a mathematical model to understand variable interactions?

Surely, Scotty had a better solution

There must be a technology that can perform the functions of an operator, learning from day-to-day operations and able to make you aware of what is usual and what is unusual. And, unlike an operator, the tool would never lose interest, get sick, get tired, or leave for a better-paying job. Don’t forget the years of experience that walk out the door, locked in your operator’s head – irreplaceable experience.

Sound impossible? Sound like rocket science? Well, perhaps it is. NASA faces similar problems all the time. They often do things for the first time and need to learn quickly from experiences. They have very complex systems that are extremely difficult to model. How do you detect an anomaly in the tile temperatures during a takeoff or re-entry? What is usual or unusual?

NASA has the Solution

NASA looks at the data from a different perspective. Data isn’t necessarily a set of numbers that need to be mathematically understood. Who really cares that a change in this number will likely cause a .05% change in this other number? Instead, view sets of correlated numbers as patterns. Build an experience database of these patterns and match incoming data to the experience database, on the fly. And, build up metrics on the experiences to tell you that this is a common pattern and this new pattern has never been seen. If data starts moving into new territory, that’s news, and time to alert an operator. But unlike a simple alarm, the experience database will arm you with metrics indicating how different the data is, or, if it is a rare condition, will tell you the last occurrence. This technology is widely used at NASA today, monitoring everything from leading edges to brake systems and fighter aircraft engines.
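The idea of an experience database can be illustrated with a toy sketch. The names here (`ExperienceDB`, `to_pattern`, the bin width) are invented for illustration – this is a simplified rendering of the pattern-matching concept, not NASA's or PDM's actual algorithm. Correlated readings are quantized into a discrete pattern, and the database simply counts how often each pattern recurs, so a count of zero flags never-before-seen territory:

```python
from collections import Counter

def to_pattern(values, bin_width):
    """Quantize a set of correlated readings into a discrete pattern.

    Treating readings as a coarse-grained tuple rather than exact
    numbers lets recurring process states map to the same pattern key.
    """
    return tuple(round(v / bin_width) for v in values)

class ExperienceDB:
    """Toy experience database: counts how often each pattern is seen."""

    def __init__(self, bin_width=1.0):
        self.bin_width = bin_width
        self.counts = Counter()
        self.total = 0

    def observe(self, values):
        """Record one sample; return how many times it was seen before."""
        pattern = to_pattern(values, self.bin_width)
        seen_before = self.counts[pattern]
        self.counts[pattern] += 1
        self.total += 1
        return seen_before  # 0 means a brand-new experience

db = ExperienceDB(bin_width=5.0)
db.observe([72.0, 101.0, 3.1])              # first sample: a new pattern
db.observe([71.0, 99.0, 3.4])               # quantizes to the same pattern
novel = db.observe([95.0, 140.0, 9.9]) == 0  # data moving into new territory
```

The metrics the article describes ("this is a common pattern", "this has never been seen") fall out of the counts directly; a real implementation would add similarity scoring and timestamps of last occurrence.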

Introducing Aware Technology – Productizing NASA Technology

This technology is coming to the automation marketplace in the form of PDM (Process Data Monitor), by a new company, Aware Technology Inc. PDM has been developed under a Patent License from NASA, with additional layered patents making the technology better suited to delivering Data Validation, Adaptive Alarming, and Anomaly Detection. The technology is based on interoperability standards, delivering connectivity to third-party automation products. OPC enables PDM to retrieve data from real-time and Historian data sources – your plant HMI/SCADA and/or Historian. ODBC provides connectivity to data locked in Relational Databases. An XML data structure on the front end enables System Integrators to easily create custom interfaces to legacy data sources.

PDM configuration is also straightforward, as it should be to achieve broad appeal. The user interface is a Web Browser. After connecting PDM to your data sources, simply select a variable (Tag) and define the granularity for analysis. After that, PDM begins to learn. Several variables (Tags) belonging to a process or asset are grouped into a “System.” Systems are where the real “Rocket Science” of “Pattern Recognition” begins. Finally, define your levels of detection and method of notification.

Detection can take many forms – new experiences being developed, or data falling into experiences that have been flagged as bad or questionable in the past. Simple alarms are also available. But more important, perhaps, is PDM’s ability to give feedback that you are experiencing data that is very common in your operations, lending additional validation to metrics you may have based on that data. Notification is handled through email, with links letting the operator drill into results that may highlight the root cause.

The ability to develop an experience database can happen over time, through the monitoring of live and real-time data. However, it can also be developed quickly through a replay of Historian data. The experience database can be run in one of two modes, Open Book or Closed Book. In Open Book mode, incoming data is added to the experience database – the system continues to learn. In Closed Book mode, the incoming data is compared to an existing experience database – no new experiences are added. Actually, PDM can compare incoming data against multiple experience databases, either open or closed, in parallel. These options enable you to replay and develop an experience database that represents a Golden Batch, or perfect scenario. On the other hand, experience databases can be developed for specific product runs, seasonal changes, various equipment configurations, etc.
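The Open Book / Closed Book distinction reduces to one question: does a novel pattern get added to the experience set, or merely reported? A minimal sketch, again with invented names rather than PDM's real API:

```python
class ExperienceBook:
    """Sketch of Open Book vs Closed Book matching (hypothetical API)."""

    def __init__(self, open_book=True):
        self.open_book = open_book
        self.patterns = set()

    def match(self, pattern):
        """Return True if the pattern is already a known experience.

        In Open Book mode, unknown patterns are added (the book keeps
        learning); in Closed Book mode, the experience set is frozen.
        """
        known = pattern in self.patterns
        if self.open_book and not known:
            self.patterns.add(pattern)
        return known

# Replay historian data into an open "golden batch" book, then freeze it.
golden = ExperienceBook(open_book=True)
for p in [("warm", "steady"), ("hot", "steady")]:
    golden.match(p)
golden.open_book = False          # switch to Closed Book for monitoring

golden.match(("warm", "steady"))  # matches the golden batch
golden.match(("hot", "surging"))  # anomaly, and not learned into the book
```

Running several books in parallel, as the article describes, would just mean calling `match` on each book (some open, some closed) for every incoming pattern.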

Most important is PDM’s ability to simply watch data and start learning. This one point makes PDM extremely simple to install and operate. Maintenance is virtually non-existent. One of the simplest operations can be “Let me know if something changes,” a sort of Cruise Control that an operator can set on his process. “Things are running and I want you to notify me when there is something to review…”
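That “Cruise Control” behavior – watch quietly, speak up only when something changes – can be sketched as a simple loop over incoming patterns. The names (`watch`, `baseline`, `notify`) are illustrative, not PDM's interface:

```python
def watch(stream, baseline, notify):
    """'Cruise control' sketch: alert once data leaves familiar territory.

    `baseline` is a set of patterns seen during normal running; `notify`
    is called with the first pattern that falls outside it.
    """
    for pattern in stream:
        if pattern not in baseline:
            notify(pattern)
            return pattern   # hand control back to the operator
    return None              # nothing to review; systems normal

alerts = []
baseline = {("normal", "low"), ("normal", "high")}
stream = [("normal", "low"), ("normal", "high"), ("hot", "high")]
watch(stream, baseline, alerts.append)   # alerts the out-of-baseline pattern
```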

PDM will be delivered in three forms. A Cloud-based service will accept remote data, perform its analysis, and return both alerts and the ability to drill deeper into root causes; it will be offered for applications that want to avoid the costs and maintenance associated with onsite equipment. An Enterprise Appliance will be available for applications where data cannot be sent outside the corporate walls, or where concerns with Internet-related downtime would prohibit a SaaS (Software as a Service) model. Finally, PDM will be delivered as an embedded appliance, able to perform data validation and anomaly detection from within the operations of a piece of equipment.

Applying PDM to today’s Automation Systems

Automation Systems are filled with variability that is difficult to model and account for. Weather variables, system variables, plant operational variables, seasonal variables, batch variables, throughputs, loads, etc. all contribute to what would be a very difficult model to create and maintain over time. Changes in setpoints, equipment, chemicals, and procedures make it very difficult to identify the differences between normal variations from one machine to another, or changes within one production line over time. The ideal solution is one that will recognize these changes, alert on them, and quickly adapt, learning new behaviors in order to begin detecting under the new working conditions.

Automation Systems are often managed by process engineers with the help of System Integrators, and it becomes a very expensive proposition to tune modeling solutions for adaptive alarming and anomaly detection. PDM fits right in, able to deliver system validation and anomaly detection through local data collection, with connectivity to the Cloud, or to an Enterprise Appliance, for data management and notification services. This is a significant benefit to the operators in a plant, as they can now leverage a sophisticated technology (PDM) to make them aware of system changes that may be leading indicators of more catastrophic events to come. Often, the subtle changes in combinations of variables that cannot easily be alarmed on an individual basis will provide a major clue to impending change, enabling operators to run their process either more reliably or more economically.

What’s coming in the future?

The underlying PDM technology has been applied in the field by both NASA and iSagacity (one of the founders of Aware Technology), for many years. PDM is offered as a Hosted Service today, but will quickly morph into an Enterprise solution in early 2011. Aware Technology is looking for early adopters to help tune our solution to vertical markets.

In a future version, PDM may even make judgment calls like “Look Dave, I can see you're really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over.” (Testing you again – 2001: A Space Odyssey.)

About the Author
Roy Kok is the President of Aware Technology, founded in February 2011. He has been involved with the automation marketplace for over 30 years, working with industry leaders such as Intellution, GE, and Kepware, and he also offers Business Development, Sales, and Marketing consulting through AutomationSMX. Roy grew up with Star Trek.
