Closed-loop AI Enables Autonomous Process Manufacturing

For process manufacturing, the ultimate promise of Industry 4.0 is autonomous manufacturing. Autonomous control of manufacturing processes is required not to eliminate human workers, but to build resilient and highly responsive manufacturing supply chains. Resilience, in turn, enhances both the top and bottom lines of a manufacturing enterprise.

The top-line drivers include the ability to introduce innovative, high-value, and high-margin new products to the market quickly. Consumerism as a trait of society is only going to increase. Our desire to live longer, healthier lives and to consume highly personalized products will continue to rise, making process manufacturing more complex.

Figure 1: The autonomous plant of the future must be infinitely and continuously adaptive to deal with the constant change inherent in flexible and resilient supply chains.


The bottom-line drivers include the higher utilization of production assets for multiple products, waste reduction and recycling, and meeting energy and sustainability goals. This requires process manufacturing to be highly resilient. Resilience comes from flexibility.

Autonomous manufacturing, therefore, will need to be infinitely and continuously adaptive (figure 1). When and why is adaptability needed? Adaptability is needed when things change. So, if the manufacturing processes required for flexible and resilient supply chains will need to deal constantly with change, is the current state of automation sufficient?

Most of the concepts described in this article, which discusses the need for and opportunity of achieving autonomous manufacturing, refer to process manufacturing, with specific focus on batch/hybrid manufacturing. The underlying technology can extend to continuous and discrete manufacturing.


Opportunity versus current state

Process manufacturing has, for the most part, reached a state of high automation; assume here that the manufacturing process is highly automated. This state of automation works well at steady state and can usually deal with change in two forms: (1) transient states like startup, ramp up/down, and shutdown; and (2) variability caused by raw materials and process dynamics. Using advanced control and optimization, this state of automation can also handle changes in volume/capacity demand and from upstream process units.

Adaptive control is the capability of a system to modify its own operation to achieve the best possible operating mode. This requires the system to be able to perform the following functions:

● Observation: provide continuous information about the present system state, or identify the process.
● Interpretation/analysis: compare present system performance to the desired or optimal performance.
● Decisioning and action: decide how to change the system to achieve the defined optimal performance, and act on that decision.
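These functions can be sketched as a minimal control cycle. The Python sketch below is purely illustrative: the linear process response, the gain, and the yield target are assumptions for the example, not part of any real system described in this article.

```python
def observe(system):
    # Observation: sample the present system state (hypothetical sensor read)
    return {"yield": system["yield"]}

def interpret(state, target_yield):
    # Interpretation/analysis: compare present performance to the desired optimum
    return target_yield - state["yield"]  # positive gap means underperforming

def decide(gap, gain=0.5):
    # Decisioning: choose a corrective adjustment proportional to the gap
    return gain * gap

def act(system, adjustment):
    # Action: apply the adjustment (toy linear process response, assumed)
    system["temperature"] += adjustment
    system["yield"] += 0.8 * adjustment
    return system

system = {"temperature": 70.0, "yield": 0.80}
for _ in range(20):  # the adaptive cycle: observe -> interpret -> decide -> act
    state = observe(system)
    gap = interpret(state, target_yield=0.90)
    system = act(system, decide(gap))
```

Each pass through the loop shrinks the gap between the observed and desired performance; the cycle, not any individual function, is what makes the control adaptive.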

Keep the term system in mind for the rest of this article, along with the term system-of-systems. Assume a system to be, at minimum, a unit operation. A system-of-systems can be an entire supply chain consisting of multiple sites or plants—but for this article, assume it is a plant that consists of multiple unit operations.

For autonomous manufacturing at this plant, the system-of-systems will need to operate autonomously—without human intervention—to follow the changing commands from a management operating system (MOS) running the corporate manufacturing strategy execution.

This cannot be done with the current state of automation, even with existing advanced process control/model predictive control (APC/MPC) and optimization. Heuristic-based expert systems have been used, and in some cases, provide limited success, generally at the unit operation level, if maintained and constantly updated.

But artificial intelligence (AI), specifically machine learning (ML), can get us there with closed-loop AI. And this is not in the distant future. We’ll discuss how it is being done now and the rapid advances being made toward its widespread use at scale. Some of the work being done by Quartic.ai is referenced in this article.


Is autonomous control of process manufacturing in sight?

Existing automation can handle regulatory controls, batch orchestration, and steady-state operations. It can also deal with transient states and upsets, and with variability and process dynamics at the individual-loop level and across interacting loops, using techniques like MPC.

But it is not autonomous.

To move from automated to autonomous, the following tasks performed by humans at a system or system-of-systems level need to be automated: observation, interpretation, decisioning, and action. These are cognitive tasks that humans perform in the current state of manufacturing automation. Automating these cognitive tasks is the essence of Industry 4.0 and autonomous manufacturing.

The generalized approach for achieving this can be framed as an optimization problem. If, when given a business command from the MOS, the system-of-systems (the plant) attempts to reach an optimized state as quickly as possible, without causing any waste, off-spec product, cycle time loss, or energy loss, then it establishes the best mode of operation for the underlying systems and automation (figure 2).

It must be assumed that the underlying systems can provide sufficient data (to inform) and are automated enough to be responsive to the commands—the plant must be sufficiently automated before it can become autonomous.

Figure 2: When given a business command from the MOS, the system-of-systems attempts to achieve an optimized state as quickly as possible, without causing waste, off-spec product, cycle time loss, or energy loss. It establishes the best mode of operation for underlying systems and automation.


The path to an autonomous manufacturing system goes through an optimization system. This optimization system will attempt to constantly optimize the objective(s) of the system-of-systems (plant), and in doing so, will generate commands and set points for the underlying systems.


MPC and EMPC

It is well understood that traditional MPC cannot be practically implemented at the system level, let alone the system-of-systems level. It does not directly optimize the end goal (e.g., profit or yield maximization); it only tries to track given set points. In many cases, MPC has simply become a better substitute for proportional-integral-derivative (PID) control.

To overcome the shortcomings of MPC, approaches like economic MPC (EMPC) were developed recently. EMPC removes the separation between optimization and control (e.g., it finds the optimal set points as well as the optimal way of tracking the set points), and can be used as a decision-making tool to achieve high-level goals directly. Could EMPC be used as this master controller to optimize, in real time, the objective function of a system or a system-of-systems?
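The distinction is easy to see in a toy steady-state sketch: tracking MPC drives the process to whatever set point it is handed, while EMPC finds the economically optimal set point itself. The process model, prices, and the brute-force `argmin` helper below are illustrative assumptions, not a real controller.

```python
def conversion(u):
    # Toy steady-state process: conversion rises with feed rate u but saturates
    return u / (1.0 + u)

PRICE, FEED_COST = 4.0, 1.0  # assumed economics, for illustration only

def tracking_cost(u, u_sp):
    # Classical MPC objective: penalize deviation from a set point it is handed
    return (u - u_sp) ** 2

def economic_cost(u):
    # EMPC objective: optimize the economic goal (negative profit) directly
    return -(PRICE * conversion(u) - FEED_COST * u)

def argmin(cost, lo=0.0, hi=5.0, n=5001):
    # Brute-force 1-D minimizer over a grid, to keep the sketch dependency-free
    return min((lo + (hi - lo) * i / (n - 1) for i in range(n)), key=cost)

u_tracking = argmin(lambda u: tracking_cost(u, u_sp=2.0))  # tracks the given set point
u_empc = argmin(economic_cost)  # finds the economically optimal set point itself
# Analytic check: d/du [4u/(1+u) - u] = 0  =>  (1+u)^2 = 4  =>  u* = 1.0
```

The tracking controller dutifully reaches u = 2.0, but the economic objective is higher at u = 1.0; only the EMPC formulation discovers that.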

EMPC has some key fundamental challenges even within the scope of the underlying systems it is being used for:

● A system model is required—whether it is a data-driven state-space model, a mechanistic model, or a combination of the two.
● Online computation load can be high, especially for nonlinear models. Depending on how the optimization is solved, the solver may converge only to a local optimum, or fail to converge at all, which can lead to significant performance degradation (and instability).

To achieve an autonomous state, both of these challenges become highly amplified. Models will need to cover a much larger underlying process—multiple units, multiunit interactions, and combinations of serial and parallel processing units—in a flexible manufacturing realm for agile autonomous manufacturing. In some cases, the computation load can become so high that the compute cost dilutes the resulting benefits.


AI, ML, and closed-loop AI

Machine learning (sometimes in conjunction with underlying MPC) can be used as this system-of-systems optimizer in a closed loop: closed-loop AI. The mention of closed loop sometimes evokes existing mental models of what a loop is, and leads to apprehension and skepticism. The loop, in this context, is not the traditional sensor-PID/MPC-actuator loop, nor is the intention for AI to replace the PID loop. The loop in this context is a system or, ideally, a system-of-systems.

Another mental model evoked is the assumption that this loop must run at the execution speed of PID loops—hence the hype about the use of AI at the edge—as if AI were to replace a flow control loop that executes in milliseconds and is highly synchronous with other loops. That may be the case for an autonomous vehicle, but it is not the case here. If the loop is the system-of-systems (the plant), the execution requirements follow the dynamics of the entire plant and the frequency of the set-point demands from the MOS (figure 3).

Figure 3: If the loop is the system-of-systems, the execution requirements apply according to the dynamics of the entire plant and the frequency of the set-point demands from the MOS.


For manufacturing applications, ML and deep learning are being used successfully for anomaly detection, soft sensors, and forecasting (prediction). Predictive machine learning can be extended to some prescriptive (recommender) uses. However, because all ML algorithms learn from correlations, not causality, they cannot be used for optimization in autonomous manufacturing—to cause a change that achieves an optimal objective or outcome. Causal learning is at too early a stage of research to be considered a viable option. To build highly accurate and responsive data science–based models, large, informative historical training data sets must also be built. ML algorithms need variance in the training data to learn from. This makes valuable training data even scarcer in manufacturing applications, particularly in industries like biomanufacturing, where past data contains very little variance because processes are precisely controlled.

Deep reinforcement learning in conjunction with mechanistic models (hybrid learning) is also having some success, although in a limited way. The compute costs associated with deep reinforcement learning can be extremely high, and high-fidelity mechanistic models are difficult and expensive to build, and in some cases, such as biological processes, near impossible with current techniques.

We need techniques that can learn from little historical data (warm start), learn continuously, and cause changes (generate set points) to optimize.


Bayesian optimization

Rapid progress can be made with Bayesian optimization, which can be used to optimize any black-box function. A black-box function is one whose input-output relationship cannot be easily represented mathematically, but whose effects on the output can be observed. For manufacturing applications where high-fidelity mechanistic models cannot be built, Bayesian optimization builds a surrogate for the objective and quantifies the uncertainty in that surrogate using a Bayesian machine learning technique, Gaussian process regression. It then uses an acquisition function defined from this surrogate to decide where to sample. Bayesian optimization is an ideal approach for optimizing objective functions that take a long time (minutes or hours) to evaluate.
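The loop can be sketched in a few dozen lines: fit a Gaussian process surrogate to the samples collected so far, then use expected improvement as the acquisition function to choose the next set point to try. The one-dimensional black-box "yield" function below is a hypothetical stand-in; this is an illustration of the general technique, not Quartic.ai's implementation.

```python
import math
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel for the Gaussian process surrogate
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xq, noise=1e-6):
    # Gaussian process regression: posterior mean/std of the surrogate at Xq
    K_inv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
    Kq = rbf(X, Xq)
    mu = Kq.T @ K_inv @ y
    var = 1.0 - np.sum(Kq * (K_inv @ Kq), axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    # Acquisition function: expected improvement over the best observed value
    z = (mu - best) / sigma
    pdf = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    cdf = 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2)) for v in z]))
    return (mu - best) * cdf + sigma * pdf

def batch_yield(x):
    # Hypothetical black-box objective: yield response to a normalized set point x
    return np.exp(-((x - 0.7) ** 2) / 0.05)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 3)      # cold start: a few initial set points
y = batch_yield(X)
grid = np.linspace(0, 1, 201)
for _ in range(10):           # BO loop: fit surrogate, sample where EI peaks
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X, y = np.append(X, x_next), np.append(y, batch_yield(x_next))
```

After a handful of iterations, the sampled set points concentrate around the unknown optimum, even though the optimizer never sees a model of the process, only observed outcomes.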

This technique was used successfully for batch process optimization of a fed-batch fermentation bioreactor (figure 4). With only set-point measurements and the final objective function (yield), the Bayesian optimizer achieved a 4 percent average yield increase over 400 batch runs of optimization. No process parameter measurements were used. The optimizer can be used for a cold start (only starting set points from the recipe/work instruction are used), a warm start (known ideal set points are available from previous batch runs), or online learning (the optimizer uses the starting set points and continually learns and optimizes). The continuous/online learning mode is ideal for closed-loop/autonomous control and is being used for a continuous chemical reactor.

Figure 4: Bayesian optimization is an ideal approach to optimizing objective functions that take a long time (minutes or hours) to evaluate. This technique has been used successfully for batch process optimization of a fed-batch fermentation bioreactor.


Further optimization can be achieved when process measurements, including a good online measurement of the objective function, are available and a high-fidelity model (digital twin) is used in conjunction with the ML optimization. Using Raman spectral data for online measurement of the yield, the system was able to achieve approximately a 10 percent average performance gain over 100 batches.

To build an autonomous manufacturing system that can optimize systems or systems-of-systems, the system needs to observe, interpret, and make decisions on a much wider, zoomed-out view of the process: multiple process units and the interactions among those units. This level of analysis cannot be handled by existing control and optimization techniques; it becomes a big data control problem. Machine learning, combined with mechanistic models and MPC, provides a path toward the real-time, continuous optimization with which autonomous manufacturing can be built.

All figures courtesy of Quartic.ai

This article comes from the May 2021 issue of Intech Focus: Process Control and Safety.


About The Author


Rajiv Anand is the cofounder and CEO of Quartic.ai. He is an instrumentation and control engineer with 30 years of experience implementing process control and asset health solutions for power, mining, pharmaceutical, and chemical industries.

