- By Mike Brooks
- October 25, 2021
Reliability and process engineers need solutions that fit their needs for operational insight and asset management. The valuable skillset of data scientists must be complemented with engineering wisdom and AI applied within the business processes to create role-based applications that will eventually lead to the self-optimizing plant (SOP).
How do companies develop the deep insights to make decisions about plant operations, especially in an era when experienced and highly qualified personnel are retiring and leaving a skills shortage? Unfortunately, workforce and career expectations have changed. New engineers, mentored by veteran staff, used to spend the early years of their careers learning, harvesting knowledge and practicing skills to sharpen the capabilities needed to develop those insights.
But today, comprehension and wisdom cannot be replaced in any organization as fast as they retire. Employees are now unlikely to spend an entire career in one company and are more eager to hop to a new job every few years. Therefore skills, competence and insight cannot develop the way they have in the past. However, all is not lost. Technology has become the de facto propellant of work. Artificial intelligence (AI) now overtakes and advances automation to improve productivity. But it will not continue to happen in the way we see it experimentally executed on the frontiers of data science today.
Living up to data science expectations
The expectations of data science and the reality of implementation are often unrelated. Data science with machine learning was meant to solve "previously unsolvable problems": important work, unleashed by new algorithms, exposing great new value for a business. For a number of reasons, many companies have found it does not happen that way. First, companies underestimate the depth of domain-specific knowledge that must be applied in solutions. It's not a matter of "Just give me the data!" as some might hope.
The reason it is not that simple is that algorithms uncover correlation patterns that may be nothing but coincidental. Algorithms are not yet smart enough to uncover causation: the patterns that indicate the root cause of lost production or unplanned downtime. Some data scientists may not yet have that specific problem-solving expertise, so they often rely on domain advice from seasoned industry veterans.
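The gap between correlation and causation is easy to demonstrate. Two completely independent signals can show a strong correlation simply because both trend over time. A minimal sketch (the signal names and parameters here are illustrative assumptions, not data from any real plant):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 500

# Two unrelated sensor-like signals that each happen to trend upward,
# e.g. cumulative pump runtime and seasonal ambient temperature.
trend = np.linspace(0.0, 10.0, n)
signal_a = trend + rng.normal(0.0, 1.0, n)  # independent noise
signal_b = trend + rng.normal(0.0, 1.0, n)  # independent noise

# Pearson correlation is high purely because both share a time trend...
r_raw = np.corrcoef(signal_a, signal_b)[0, 1]

# ...but after detrending (first differences), the "relationship" vanishes.
r_diff = np.corrcoef(np.diff(signal_a), np.diff(signal_b))[0, 1]

print(f"raw correlation:       {r_raw:.2f}")
print(f"detrended correlation: {r_diff:.2f}")
```

An algorithm handed only the raw signals would flag a strong pattern; it takes engineering judgment (or at least a detrending step chosen by someone who understands the process) to recognize that the pattern is coincidental.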
Nearly a decade ago, it was projected that the US would need hundreds of thousands of data scientists by 2020. It did not happen. It’s unlikely there were even enough data science students in colleges and universities 10 years ago. Furthermore, it’s not realistic to think employing more data scientists is the only way to uncover causation. To appropriately address the issue, the valuable skillset of the industry’s limited data scientists must be complemented with deeply embedded engineering wisdom and AI applied directly within the business processes. The new direction begins in role-based applications that will eventually lead to the self-optimizing plant (SOP) that is self-learning, self-adapting and self-sustaining. Analysis and performance assessments do not appear from generalized AI workbenches and functions.
The promise of role-based applications
Supply and delivery issues aside, the major decisions in a plant are always about asset performance management. This includes the total lifecycle return on assets; when and how hard you run them; when and how they are maintained; and the total expected productivity. Such decisions are intensely domain- and engineering-specific, based on careful periodic assessment of asset operating conditions against expected performance. But decisions often need collaboration to understand real system-level equipment interactions. For example, an apparent throughput constraint may be caused by an upstream feed issue rather than the equipment itself. Added to that, every decision must be underscored by a clear understanding of fundamental risks and costs, especially across equipment lifecycles.
Reliability and process engineers need solutions that fit their direct needs, offering ease of use and domain familiarity. Such solutions are the only way to provide the scalability to solve plant problems efficiently. Engineers doing the data science and engineering work can then rely on built-in guardrails to solve role- and domain-specific problems, guardrails that ensure they find causation and not just simple correlation. Such is the promise of role-based applications.
Accordingly, we see specific applications creating and deploying hundreds or thousands of Autonomous Agents: small software programs that perform repetitive analysis so the end user does not have to. Different Agents help users in specific ways. For mechanical problems, Agents scan signals to predict and prevent degradation and failure. Quality Agents warn of process conditions that reduce quality or yield and advise the changes needed to keep manufacturing on track. Event Agents warn process operators when sensor patterns show errant conditions and advise the steps to prevent them from becoming major issues. More Agents will follow.
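As a hedged illustration of the idea (not a description of any vendor's actual implementation), a mechanical Agent of the kind described above can be sketched as a small program that repeatedly compares new sensor readings against a recent healthy baseline and alerts only on sustained drift. All thresholds and window sizes below are assumptions chosen for the example:

```python
from collections import deque

class DegradationAgent:
    """Toy mechanical Agent: flags sustained drift in one sensor signal.

    Illustrative sketch only -- window, threshold and sustain counts
    are assumptions, not values from any real product.
    """

    def __init__(self, window=50, z_threshold=3.0, sustain=5):
        self.baseline = deque(maxlen=window)  # recent "healthy" readings
        self.z_threshold = z_threshold        # distance from baseline that counts as errant
        self.sustain = sustain                # consecutive errant readings before alerting
        self._errant_count = 0

    def observe(self, reading):
        """Process one reading; return True when an alert should be raised."""
        if len(self.baseline) < self.baseline.maxlen:
            self.baseline.append(reading)     # still learning the baseline
            return False

        mean = sum(self.baseline) / len(self.baseline)
        var = sum((x - mean) ** 2 for x in self.baseline) / len(self.baseline)
        std = var ** 0.5 or 1e-9              # guard against a perfectly flat baseline

        if abs(reading - mean) / std > self.z_threshold:
            self._errant_count += 1           # possible degradation: keep baseline unchanged
        else:
            self._errant_count = 0
            self.baseline.append(reading)     # healthy: fold into the baseline

        return self._errant_count >= self.sustain
```

A platform would run thousands of such Agents, one per signal or asset, around the clock; engineers see only the alerts, not the repetitive scanning.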
Sometimes more complex repetitive patterns occur across multiple similar assets such as repeat failures of the same type in power generating wind farms. Such patterns are then referred to data science teams to search for external influential factors, allowing business leaders to use the output analysis for determining the best path forward.
Agents & APM 4.0
The direction for creating Agents emanates from asset performance management (APM) 4.0, which differs from earlier incarnations of APM. APM 4.0 matures beyond availability-centered maintenance tactics. It transforms overall performance in the overlap between operations, maintenance and the value chain, delivering on availability, production, financial and sustainability goals in volatile, uncertain, complex and ambiguous (VUCA) environments.
Sometimes, from a business perspective, it makes sense to run an asset to failure. But how can an asset-intensive organization gain the insight to make such decisions about its plant during a skills shortage, especially when seasoned veterans are retiring? Role-based applications, fed by Agents, are an appropriate home for that decision-making.
Thousands of Autonomous Agents across every asset and process feed high-precision risk alerts into collaborative assessment and mitigation workflows. Herein, collaborative application processing weighs the risks and rewards of each asset or process decision whether relating to capacity, logistics, design redundancy, raw materials, weather, or market dynamics. This real-time engine uses data rather than opinion, to qualify and quantify decision choices. Such data science augments and elevates plant personnel capabilities, assuring better judgments and decisions that lead to significant gains in reliability, sustainability and value.
A comprehensive approach identifies risks across an entire plant earlier than ever before, providing diagnosis that businesses can apply across their entire operations. The insights from role-based applications impact prescriptive plant interventions. Such capabilities are essential to work toward the more definitive self-optimizing plant that relies on a combination of data science expertise, AI and role-based applications.