
This notebook contains material from cbe30338-2021; content is available on Github.


6.2 Simulation and Open-Loop Optimal Control

This notebook demonstrates the use of CVXPY for the simulation and computation of open-loop optimal control. The notebook includes a lab exercise.

6.2.1 Heater Model

6.2.1.1 Model

We will use the two-state model for a single heater/sensor assembly for the calculations that follow.

\begin{align} C^H_p\frac{dT_{H,1}}{dt} & = U_a(T_{amb} - T_{H,1}) + U_b(T_{S,1} - T_{H,1}) + \alpha P_1 u_1\\ C^S_p\frac{dT_{S,1}}{dt} & = U_b(T_{H,1} - T_{S,1}) \end{align}

The model is recast into linear state space form as

\begin{align} \frac{dx}{dt} & = A x + B_u u + B_d d \\ y & = C x \end{align}

where

$$x = \begin{bmatrix} T_{H,1} \\ T_{S,1} \end{bmatrix} \qquad u = \begin{bmatrix} u_1 \end{bmatrix} \qquad d = \begin{bmatrix} T_{amb} \end{bmatrix} \qquad y = \begin{bmatrix} T_{S,1} \end{bmatrix}$$

and

$$A = \begin{bmatrix} -\frac{U_a+U_b}{C^H_p} & \frac{U_b}{C^H_p} \\ \frac{U_b}{C^S_p} & -\frac{U_b}{C^S_p} \end{bmatrix} \qquad B_u = \begin{bmatrix} \frac{\alpha P_1}{C^H_p} \\ 0 \end{bmatrix} \qquad B_d = \begin{bmatrix} \frac{U_a}{C_p^H} \\ 0 \end{bmatrix} \qquad C = \begin{bmatrix} 0 & 1 \end{bmatrix}$$

6.2.1.2 Parameter Values
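
The original notebook defines the model parameters in a code cell at this point. A minimal sketch is reproduced below; the numerical values are representative estimates for the Temperature Control Laboratory (not the notebook's exact fitted values), and the matrices follow the definitions above.

```python
import numpy as np

# representative parameter estimates (illustrative values; replace with your own fits)
Tamb = 21.0             # ambient temperature [deg C]
alpha = 0.00016         # watts / (unit of P1) / (percent of u1)
P1 = 200                # heater 1 power setting
Ua = 0.05               # heat transfer coefficient, heater to surroundings [W/deg C]
Ub = 0.05               # heat transfer coefficient, heater to sensor [W/deg C]
CpH = 5.0               # heat capacity of the heater [J/deg C]
CpS = 1.0               # heat capacity of the sensor [J/deg C]

# state-space matrices from the definitions above
A = np.array([[-(Ua + Ub)/CpH, Ub/CpH],
              [Ub/CpS, -Ub/CpS]])
Bu = np.array([[alpha*P1/CpH], [0]])
Bd = np.array([[Ua/CpH], [0]])
C = np.array([[0, 1]])
```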

6.2.2 The Control Problem

For the purposes of this notebook, the control problem is to find a control policy $u(t)$ for the interval $0 \leq t \leq t_f$ that causes the output $y(t)$ to track a desired setpoint or reference trajectory $r(t)$.

6.2.2.1 Reference Trajectory

The reference trajectory is a sequence of ramp/soak intervals. The Python function r(t) uses numpy.interp to compute the value of the reference trajectory at any point in time.
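
A minimal sketch of such a function is given below; the ramp/soak breakpoints tp and Tp are illustrative placeholders, not the notebook's exact schedule.

```python
import numpy as np

# illustrative ramp/soak schedule: breakpoints of (time [s], setpoint [deg C])
tp = [0, 300, 1200, 1500, 2400, 3000]
Tp = [25, 55, 55, 40, 40, 25]

def r(t):
    """Reference trajectory: linear interpolation between ramp/soak breakpoints."""
    return np.interp(t, tp, Tp)
```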

6.2.2.2 Guessing a Solution

So what should $u(t)$ be?

The next cell defines the process inputs $d(t)$ and $u(t)$. For this disturbance and the model parameters given above, do you think this control policy will cause $y(t)$ to stay close to the reference trajectory?
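
For reference, a sketch of how these inputs might be defined is shown below; the constant ambient disturbance and the piecewise-constant power schedule are illustrative guesses, not the notebook's exact cell.

```python
import numpy as np

def d(t):
    """Disturbance input: constant ambient temperature (placeholder value)."""
    return Tamb

def u(t):
    """A first guess at a control policy: full power for 10 minutes, then off (placeholder)."""
    return np.interp(t, [0, 600, 601, 3000], [100, 100, 0, 0])
```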

6.2.3 Simulation

Let's see how well our initial guess at a control strategy works, subject to the initial conditions

\begin{align*} T_H(t_0) & = T_{amb} \\ T_S(t_0) & = T_{amb} \end{align*}

and prior specification of inputs $u(t)$ and $d(t)$.
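
A simulation sketch using scipy.integrate.solve_ivp is shown below. It assumes the parameters, matrices, and functions r(t), u(t), and d(t) defined in the preceding cells; the horizon and plotting details are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

t_final = 3000
t_eval = np.linspace(0, t_final, 601)

def deriv(t, x):
    # linear state-space model: dx/dt = A x + Bu u + Bd d
    return A @ x + Bu.flatten()*u(t) + Bd.flatten()*d(t)

x0 = [Tamb, Tamb]                          # both states start at ambient temperature
soln = solve_ivp(deriv, [0, t_final], x0, t_eval=t_eval)
TH, TS = soln.y

plt.plot(t_eval, r(t_eval), '--', label='reference $r(t)$')
plt.plot(t_eval, TH, label='$T_H$')
plt.plot(t_eval, TS, label='$T_S$')
plt.xlabel('time [s]')
plt.ylabel('temperature [deg C]')
plt.legend()
plt.show()
```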


Study Question: Evaluate how well this control policy performed. Keep in mind that the objective is for $T_S$ to track the reference input (i.e., the setpoint) as closely as possible. Did the controller achieve the desired steady-state? What about the prior ramp and soak periods?

Study Question: Edit the cells above to change $u(t)$ to produce a response closer to the target. Make at least 3 attempts. What changes did you make, and were you able to get a better result?

Study Question: What criteria could you use to determine if one control policy was better than another? Give at least three examples of possible criteria.


6.2.4 Feedforward Optimal Control

An optimal control policy minimizes the integrated squared difference between the reference trajectory and the sensor temperature,

\begin{align*} \min_{u} \int_{t_0}^{t_f} \|r(t) - T_S(t)\|^2\,dt \end{align*}

subject to constraints

\begin{align*} C_p^H \frac{dT_H}{dt} & = U_a (T_{amb} - T_H) + U_b (T_S - T_H) + \alpha P_1 u(t) \\ C_p^S \frac{dT_S}{dt} & = U_b (T_H - T_S) \end{align*}

initial conditions

\begin{align*} T_H(t_0) & = T_{amb} \\ T_S(t_0) & = T_{amb} \end{align*}

and prior knowledge of $d(t)$.
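
A discretized CVXPY formulation is sketched below. The energy balances are approximated with a forward-Euler scheme on a uniform time grid; the grid size, horizon, and 0-100% power bounds are illustrative choices rather than the notebook's exact settings, and the parameters and r(t) are assumed from earlier cells.

```python
import numpy as np
import cvxpy as cp

# uniform time grid for the discretized problem (illustrative choices)
t_final = 3000
n = 301
t_grid = np.linspace(0, t_final, n)
dt = t_grid[1] - t_grid[0]

# decision variables: heater temperature, sensor temperature, heater power
TH = cp.Variable(n)
TS = cp.Variable(n)
U = cp.Variable(n)

# objective: track the reference trajectory with the sensor temperature
objective = cp.Minimize(cp.sum_squares(TS - r(t_grid)))

# initial conditions and power limits
constraints = [TH[0] == Tamb, TS[0] == Tamb, U >= 0, U <= 100]

# forward-Euler discretization of the two-state energy balances
for k in range(n - 1):
    constraints += [
        CpH*(TH[k+1] - TH[k])/dt == Ua*(Tamb - TH[k]) + Ub*(TS[k] - TH[k]) + alpha*P1*U[k],
        CpS*(TS[k+1] - TS[k])/dt == Ub*(TH[k] - TS[k]),
    ]

problem = cp.Problem(objective, constraints)
problem.solve()

u_opt = U.value        # open-loop optimal power profile on t_grid
```

The resulting profile u_opt can be wrapped with numpy.interp to obtain a function of time that can be applied in simulation or on the hardware.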


Study Question: The optimal control computed above requires rapid changes in power level. In process systems where control action requires movement of a valve stem position, there are often limits on how fast the manipulated variable can change. Modify the model to include differential inequalities that limit the time rate of change of control.

\begin{align*} \frac{du}{dt} & \leq \dot{u}_{max} \\ \frac{du}{dt} & \geq -\dot{u}_{max} \end{align*}

where $\dot{u}_{max}$ is the maximum rate of change. Add these rate constraints to the problem above, specifying that the power cannot change by more than 1% per second.

How does that change the response?
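
In a discretized formulation such as the CVXPY sketch above, one way to express these rate limits is as constraints on successive values of the power variable. A sketch, assuming the variables U, n, dt, and constraints from that cell and an assumed bound u_dot_max:

```python
u_dot_max = 1.0      # assumed maximum rate of change of power, percent per second

# approximate du/dt by finite differences between successive grid points
for k in range(n - 1):
    constraints += [
        (U[k+1] - U[k])/dt <= u_dot_max,
        (U[k+1] - U[k])/dt >= -u_dot_max,
    ]

# re-create and solve the problem after adding the rate constraints
```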

Study Question: Change the objective so that the goal is to guide the heater (instead of the sensor) temperature to the reference trajectory. How does the control policy change? Explain what you observe.


6.2.5 Lab Assignment 8

The goal of this lab assignment is to extend the calculations shown above to the case of the four-state model with two manipulated inputs and independent setpoint functions for $T_{S,1}$ and $T_{S,2}$.

6.2.5.1 Exercise 1

In a new cell, create reference inputs for sensor temperatures $T_{S,1}$ and $T_{S,2}$. The new reference trajectories should

6.2.5.2 Exercise 2

Set up and solve for the heater control policies that minimize the sum of squared differences between the sensor temperatures and their reference trajectories. Create functions U1(t) and U2(t) that interpolate the solutions for u1(t) and u2(t) for any value of t. Plot the results.

6.2.5.3 Exercise 3

Apply the functions U1(t) and U2(t) to your hardware and compare the measured sensor temperatures to those predicted in Exercise 2. How did you do?
