Multivariable control performance [InTech]
[August 22, 2014]

The case for model-less multivariable control

By Allan Kern, P.E.

Multivariable control is usually thought of as a product of the computer age, but multivariable control has always been an integral part of industrial process operation. Before the computer era, the operating team did multivariable control manually, by adjusting the available controllers and valves to keep related process variables within constraint limits and to improve economic performance. This basic approach to managing the multivariable nature of industrial processes remains a prominent aspect of operation today, whether in lieu of, or in conjunction with, modern automated multivariable controllers.



With the advent of computers in process control, it became possible to automate and "close the loop" on multivariable control, with obvious potential to improve the quality of constraint control and optimization. Multivariable control technology that combined mathematical models of process interactions, economic optimization routines, and matrix-based solution techniques soon appeared to accomplish this, and the rest is history. Since the 1980s, model-based predictive multivariable control (MPC) has thoroughly dominated the field of advanced process control (APC). Today, the terms are usually synonymous.

But MPC has not been without difficulties. Although a limited number of applications are delivering high value, and many are delivering partial success, MPC performance levels overall have remained low. "Degraded" MPC performance and MPC applications that have "fallen into disuse" are well-known, if rarely highlighted, industry concerns. Users have assumed this situation would correct itself with time, but today installation costs remain high, a manageable ownership model has not emerged, and performance levels continue to be low. Industry enthusiasm for MPC, once unbridled, has become circumspect, and decision makers are increasingly reluctant to allocate the high levels of financial and human resources that once seemed warranted for MPC.


Industry is thus faced with a question it thought was settled: Is MPC the technology of choice for automated multivariable control going forward, or is a reevaluation indicated at this juncture? This article explores the role of models in traditional MPC, their part in its cost and performance history, the necessity of models going forward, and the viability of an alternative model-less approach to multivariable constraint control and optimization, based on industry's experiences and lessons of the past 20 years.

The role of models in traditional MPC

The incorporation of model-based solutions into multivariable control was natural and ingenious. In an ideally behaved process, such as a simulation, where the models are fed back as the process response, model-based control is essentially perfect, regardless of tuning. The theory of MPC remains sound. But experience has shown that most real processes behave very nonideally, leading to several performance complications.

Models play several roles within MPC. They are used for control, to calculate how to move the directly controlled variables (DCVs) to make the desired changes in the indirectly controlled variables (ICVs). MPC uses model gains for steady-state optimization, to solve for the optimum steady-state target values for the DCVs and ICVs. And MPC uses models for path optimization, to find the optimal series of DCV moves to bring the process from current conditions to target conditions, so that interim suboptimal conditions are minimized, and constraints are not violated along the way.
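To make the "control" role concrete, here is a minimal sketch of using a steady-state gain matrix to back-calculate DCV moves for a desired set of ICV changes. It is an illustration only, not the internals of any particular MPC product; the gain values, the variable roles, and the use of a least-squares solve are assumptions made for the example.

```python
# Minimal sketch (illustrative only): how a gain matrix relates DCV moves
# to predicted ICV changes in a model-based controller.
import numpy as np

# Hypothetical steady-state gain matrix: rows = ICVs, columns = DCVs.
# Entry [i, j] is the predicted change in ICV i per unit move of DCV j.
G = np.array([
    [ 0.8, -0.3],   # ICV 1 (e.g., a column temperature)
    [-0.2,  1.1],   # ICV 2 (e.g., a product impurity)
])

# Desired ICV changes (e.g., to move back inside constraint limits).
delta_icv = np.array([1.5, -0.4])

# The "control" role of the models: solve G @ delta_dcv ~= delta_icv for the
# DCV moves. A least-squares solve stands in for the controller's solver.
delta_dcv, *_ = np.linalg.lstsq(G, delta_icv, rcond=None)
print("DCV moves implied by the models:", delta_dcv)
```

If the true process gains differ from the assumed matrix, the computed moves are off in proportion, which is the vulnerability the rest of this article examines.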

These multiple roles illustrate the heavy dependence of MPC on models, and why reliable performance depends on model accuracy and durability. Early on, this led to the practice of process step testing, to collect process response data in a controlled setting and, upon analysis, to yield "high-fidelity" models. An assumption in this effort is that the resulting process models will remain accurate for a reasonable life-cycle period of two to five years, but experience has shown this to be an inappropriate assumption for many processes.

For example, in modern oil refineries, feed rates, feedstock qualities, and product specifications often change daily. Many process gains and response times are directly related to unit feed rate, and most units have typical turn-down designs of 2:1 (i.e., they may be operated at 100 percent of design feed rate or as low as 50 percent). Feedstock qualities, such as heavy crude oil versus light crude oil, or straight-run gas oil versus olefinic (or "cracked") gas oil, have large effects on unit behaviors and affect feed rates and feedstock qualities to downstream units in turn. When one refinery unit is shut down, process streams are reduced or redirected, which also impacts feed rates and/or feedstock qualities to related units. This illustrates that many process gains change nearly continuously and that achieving ongoing model fidelity is usually a practical impossibility, even in the very short term. This situation may not be the case in all process industries, but varying production rates, feedstock qualities, and product grades characterize many processes, and it is this type of process disturbance and variation that makes multivariable control potentially beneficial in the first place (to automatically manage and compensate for these changes).

Model quality and MPC performance history

What does the inevitability of model error say about the history of MPC performance? The idea that models are predominantly inaccurate goes a long way toward explaining why MPC performance has been predominantly below expectations and why users have responded by adopting various detuning techniques, such as DCV move size limits. Detuning allows DCVs to move in the direction indicated by the control and optimization calculations, but not with the speed or size indicated by those calculations. This results in slower but more reliable performance. There is time for the process to respond and serve as feedback to update the move plan as it unfolds, thereby avoiding excessive and potentially destabilizing DCV movement. This is analogous to detuning a single-loop controller and is an equally fitting solution in the face of unknown or dynamically changing process gain.
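As a rough illustration of move-size detuning, the sketch below caps a calculated DCV move at a preset per-cycle limit, so the move keeps its direction but not necessarily its full calculated size. The function name and numbers are hypothetical.

```python
# Minimal sketch of DCV move-size limiting (detuning): keep the direction of
# the calculated move but cap its magnitude at a preselected per-cycle limit.
def detuned_move(calculated_move: float, max_move: float) -> float:
    """Clip the calculated move to +/- max_move."""
    return max(-max_move, min(max_move, calculated_move))

# Example: the solver asks for a 5.0-unit move, but the per-cycle limit is 0.5,
# so only 0.5 is applied this cycle; process feedback then shapes the next one.
print(detuned_move(5.0, 0.5))   # -> 0.5
```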

Similarly, model error explains the practice of DCV "clamping." Clamping precludes any further DCV movement. It usually occurs after a DCV has been moved too far, too fast, or for the wrong reason, leading to unwanted process conditions that no one wants to risk repeating. Clamping is analogous to placing a single-loop controller in manual mode: any further moves must be made manually. (DCV clamping is also the result of including many inappropriate variables and models in the controller matrix design in the first place.) Together, detuning and clamping produce the condition industry now calls "degraded" MPC performance, which is characterized by little or no DCV movement and frequent operator intervention. When an application no longer has enough control functionality or value to justify the burden of use it places on the operating team, and the cost of reengineering also appears unjustified, the application is switched off and has "fallen into disuse." The extent of disuse and degraded performance in industry is hard to know, because industry has naturally focused on the successes and potential of MPC, rather than on its mistakes and limitations. Some evidence suggests that degradation and disuse affect the majority of DCVs and MPC applications that have been installed over the past two to three decades.

It is informative that the historical performance limitations of both single-loop control and multivariable control can be traced to a common root cause (poorly known and dynamically changing process gains) and that similar work-arounds have emerged in both cases (a degree of detuning, increased reliance on actual feedback, and decreased reliance on feedforward or predictive control). This sheds light on the industry challenges that have persisted in both single-loop and multivariable control performance and draws them into one common picture, which supports this interpretation of events.

Feedforward is the single-loop counterpart of model-based predictive control. Like model-based predictive control, feedforward has well-recognized potential to reject process disturbances seamlessly, but has historically found limited success in practice, due to the same difficulty of depending on an accurate and durable feedforward model, even on a single-loop basis. In retrospect, this makes it easy to see why implementing reliable predictive control on a "wholesale" basis (involving dozens or hundreds of models developed en masse) has presented such a daunting challenge.

Process control conservatism

Industry's long struggle to understand MPC performance has, in the process, revealed a second force that has contributed to the slow pace of progress. Process control and industrial process operation are by nature very conservative practices. Haste is not part of process operation culture. In terms of process control, operation culture almost always prefers more gradual constraint management and optimization, without overshoot or oscillation. This helps explain the tendency to detune controls in the face of an uncertain process response: it is always better to make a conservative move and gauge the actual response before making further moves, than to move too far or too fast in the first place. This principle of conservatism is at odds with MPC in important ways, suggesting that detuning, gradual movement, and greater reliance on feedback might be the order of the day, even if high-fidelity models were attainable.

For example, MPC's path optimization function can be compared to driving a car toward a distant stop sign, and first accelerating and then braking hard, rather than simply coasting to a stop. A path optimizer may well favor the former solution, because it will arrive at the stop sign sooner, but it is totally inappropriate in practice. Similarly, traditional "error minimization" comes at the cost of overshoot and decaying oscillations. However, operations culture abhors overshoot and oscillation, because they indicate potential instability or can mask a developing problem elsewhere in the process. As another example, consider a passenger jet increasing its cruising altitude: Would the appropriate algorithm be minimum error (with rapid change, overshoot, and decaying oscillation) or minimum overshoot (with a smooth ramp or first-order approach)? Obviously, the latter algorithm is preferred in practice, due to the conservative nature of the business, even though the response models are well-known and reliable. Minimizing transient control error, a traditional criterion and benefit of MPC, is almost always of negligible concern in process operation practice and never trumps preserving process stability (figure 1).

High-level controls do not provide process stability; they depend on it. Process stability is the responsibility of base-layer controls. High-level controls should never move set points or outputs in a manner that outpaces or compromises the ability of the base-layer controls to do this job. This is basically a better-known formulation of the conservatism principle, but it has been widely disregarded in MPC practice, on the idea that broadly applied model-based control renders process stability essentially a nonconcern. Experience has now shown that unchecked DCV movement, deriving from inaccurate models and ideal tuning, has often caused process instability, leading to degradation and reminding the process control community that this principle remains both sound and necessary.

These examples illustrate that "simulation-like" performance may actually be a largely inappropriate goal in the process industries, even if high-fidelity models were available. The traditional process control principle of conservatism, the consequent degree of detuning, and greater reliance on actual feedback have largely proven to be more important in practice than the potential of model-based predictive control over the past two to three decades. Among other changes, multivariable control technology needs to reflect these principles more strongly, and not be at odds with them, to move beyond its historical limitations.

Experience points the way forward

The MPC paradigm that has become deeply rooted in industry in past decades can make it difficult to imagine multivariable control without models. But the above discussion has identified several perspectives that suggest model-less multivariable control may be both possible and preferable in many cases. Model-less control already exists in the form of many detuned MPCs that largely ignore model detail, and it has always existed in the form of manual multivariable control.

An initial response to the idea of model-less multivariable control is often to ask: without accurate gain values, how can the combined gain of moving multiple DCVs, and the combined economic effects, be known? In other words, how can the multivariable constraint control and optimization problem be solved? This is a good question when multivariable control is approached as a mathematical problem, without a process operation perspective. However, this question never arises from actual operating teams, because they already know the correct control actions for any given situation, based on their process knowledge, training, experience, and usually common sense. MPC projects often seem to bring new wisdom to process operation, by virtue of a more global solution involving many models, but in almost every application, actual post-deployment controller behavior is bent to the established wisdom of the operating team (through the use of detuning and other improvised practices), not vice versa. A more reliable approach, this experience suggests, would be to design controllers based on proven operating practice in the first place.

Framing multivariable control as a global optimization problem dependent upon dozens (often hundreds) of detailed models, rather than framing it as automating the more commonsense logic and methods already employed by the operating team, may have seemed like an innovative use of new-found computer power in the 1980s. In retrospect, it made the problem much bigger, and the solution much less reliable, than necessary. Several accompanying assumptions that also seemed reasonable at the time, such as the ease of achieving model fidelity, the idea that more models improve the result, and the assumption that ideal tuning is naturally preferable to detuned behavior, unfortunately also turned out to be largely incorrect. Consequently, this path had very limited success and has left industry lacking an appropriately scaled, affordable, and agile tool for the majority of straightforward industrial multivariable process control applications.

A model-less multivariable controller would function similarly to historical manual multivariable control, except in a more timely and reliable manner, thereby capturing the benefits industry expects (if not always achieves) from MPC. This behavior is also similar to that of an appropriately detuned, and otherwise well-designed, MPC controller. The DCVs move persistently but cautiously, based primarily on gain direction, to effect constraint management and optimization, and movement stops based on process feedback as the constraint limits or optimization targets are approached.

This method does not require or depend on detailed models. It depends on only three pieces of process knowledge: the gain direction of the primary interactions, preselected conservative move sizes for each DCV, and optimization priorities for each variable. Importantly, this is all common knowledge among the operating team and can be captured in a meeting, without a plant test or large-scale engineering effort. The "primary" interactions are those that are already proven and employed in operation for constraint management and optimization, i.e., the "small matrix" philosophy. Preferred conservative move rates for key variables are always well-known within operations and are often documented in existing operating procedures. And within most MPC practice, actual stream pricing was abandoned years ago in favor of a simpler, more practical, and more reliable optimization priority scheme.
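As a rough sketch of how these three pieces of knowledge might drive a control cycle, the example below steps each DCV by a preselected conservative amount, in the direction implied by the gain sign, whenever an associated ICV is outside its limits. It is a hypothetical illustration of the concept, not a product design: the tag names, limits, gain signs, and move sizes are invented, and the optimization-priority layer is omitted for brevity.

```python
# Hypothetical sketch of one model-less constraint-control cycle: DCVs step by
# preselected conservative amounts, in the direction given by gain sign, only
# while the associated ICVs are outside their limits.

# Gain directions: sign of the effect of each DCV on each ICV (+1 or -1).
GAIN_DIR = {
    ("reflux_sp", "top_impurity"): -1,
    ("reboil_sp", "top_impurity"): +1,
    ("reboil_sp", "tray_temp"):    +1,
}

MOVE_SIZE = {"reflux_sp": 0.2, "reboil_sp": 0.1}      # conservative per-cycle moves
ICV_LIMITS = {"top_impurity": (None, 1.0), "tray_temp": (150.0, 180.0)}

def control_cycle(icv_values, dcv_values):
    """Return updated DCV set points after one conservative control step."""
    moves = {dcv: 0.0 for dcv in dcv_values}
    for (dcv, icv), sign in GAIN_DIR.items():
        lo, hi = ICV_LIMITS[icv]
        value = icv_values[icv]
        if hi is not None and value > hi:
            # ICV above its high limit: step the DCV so the ICV moves down.
            moves[dcv] -= sign * MOVE_SIZE[dcv]
        elif lo is not None and value < lo:
            # ICV below its low limit: step the DCV so the ICV moves up.
            moves[dcv] += sign * MOVE_SIZE[dcv]
    # Clip the accumulated move for each DCV to its per-cycle limit and apply.
    return {dcv: dcv_values[dcv] + max(-MOVE_SIZE[dcv], min(MOVE_SIZE[dcv], m))
            for dcv, m in moves.items()}

# Example: impurity above its limit -> reflux steps up and reboil steps down
# this cycle; movement stops once feedback shows the ICVs back within limits.
print(control_cycle({"top_impurity": 1.3, "tray_temp": 165.0},
                    {"reflux_sp": 45.0, "reboil_sp": 12.0}))
```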

This concept of model-less multivariable control has yet to surface in industry as an available technology, but its potential efficacy and advantages are not difficult to perceive, and commercial products are sure to follow, especially as the lessons of model-based control become clear. Dispensing with the entire aspect of detailed modeling would be a paradigm shift with the promise to reduce costs and complexity at every life-cycle stage of multivariable control, including procurement, design, deployment, training, operation, maintenance, modification, and performance monitoring. It also has the potential to move multivariable control from the domain of specialists, third parties, and large budgets into the domain of routine operational competency. As a result, design, deployment, and operation can be accomplished by the operating team and in-house control engineers, based on standard DCS capabilities. This would transform multivariable control from a specialized, high-cost, high-maintenance technology into an agile and affordable tool, appropriately scaled in terms of cost and complexity, for the widespread needs of the process industries.

FAST FORWARD

* Users have expected the price and performance of multivariable control to improve as the technology matured, but costs remain high, and overall performance continues to be low.

* An examination of historical multivariable control performance, and of the improvised work practices that have emerged around it, reveals the root causes of the performance limitations and points industry toward a more agile and affordable solution.

* Detailed process models, normally considered the central strength of multivariable control, may actually be unnecessary, as well as being the source of most costs and ownership difficulties.

Multivariable control terminology

The terms direct control variable (DCV), manipulated variable (MV), "handle," and independent variable are largely synonymous. Most often, DCVs are the set points of existing base-layer single-loop controllers. DCVs are directly adjusted by the multivariable controller.

The terms indirect control variable (ICV), controlled variable (CV), constraint limit, and dependent variable are also largely synonymous. ICVs are process variables that are controlled indirectly by the multivariable controller by adjusting the DCVs so that the ICVs remain within prescribed constraint limits and, where degrees of freedom exist, move toward economic optima.

A multivariable controller is said to have its "hands on" the DCVs (think "handles") and its "eyes on" the ICVs (think "eyes"), i.e., it adjusts the DCVs to keep the ICVs within constraint limits.

Each DCV may affect multiple ICVs, and each ICV may be affected by multiple DCVs. This constitutes the multivariable nature of most industrial processes and makes coordinated multivariable control an essential requirement of modern process automation.

An interaction is the effect of one DCV on one ICV. Detailed knowledge of the interaction, such as gain, response time, and interim dynamics, constitutes a model of the interaction. Gain direction refers only to the sign (positive or negative) of the final steady-state gain of the interaction.

Matrix design is the process of selecting the DCVs, ICVs, and models (or gain directions) that will comprise the multivariable controller. Matrix design may follow a "big matrix" or "small matrix" approach.

In the "big matrix" approach, all potentially relevant DCVs, ICVs, and models are included, on the basis that more variables and models results in a more complete solution. This usually leads to a "double-digit" matrix size, such as 20x50, and hundreds of models. (Note: The author believes this approach also leads to frequent unwanted and incorrect control action and is a root source of MPC degradation.) The "small matrix" approach includes primarily the DCVs, ICVs, and models (or gain directions) that the operating team uses to manage constraints and optimization in the first place. The basis is to mimic the existing proven methods and logic of the operating team. This usually leads to much smaller "single-digit" matrix dimensions, such as 5x8, and one or two dozen models. Model-less technology recommends (but does not strictly require) the small matrix approach.

Model-based predictive control (MPC) refers to using detailed models for multivariable control and optimization. Model-based control can be applied on a single-loop control basis, but MPC usually implies a multivariable controller application.

Model-less multivariable control refers to accomplishing multivariable constraint control and optimization without detailed models, based on gain direction, preselected move rates, and an optimization priority scheme.

Multivariable control optimization refers to using DCVs to improve process economic performance, when degrees of freedom remain available to do so after constraint management objectives have been met (constraint management being a higher priority function than optimization).

Global (unit, refinery, or companywide) optimization is a daily or weekly business-side function that includes inputs that are outside the awareness of multivariable control (such as market prices and refinery capabilities) and can result in resetting (either manually or automatically) select multivariable controller limits and targets on one or more units.

View the online version at www.isa.org/intech/20140802.


ABOUT THE AUTHOR

Allan Kern, P.E., ([email protected]) has 35 years of process control experience. He has authored numerous papers on topics ranging from field instrumentation, safety systems, and loop tuning to multivariable control, inferential control, and expert systems. From 2001 to 2008, Kern served as automation leader at a major Middle Eastern refinery, where his responsibilities included deployment and performance of multivariable control systems. Since 2005, Kern has published more than a dozen papers on multivariable control performance. In 2012, he became an independent process control consultant serving clients worldwide.

(c) 2014 International Society of Automation
