%========================
\section{Construction of Dataset}
%========================
Consider a five-stand tandem cold rolling mill indexed by $i\in\{1,2,\ldots,5\}$.
Let $t_n$ denote the $n$-th sampling instant and $\delta_n\triangleq t_{n+1}-t_n$ the corresponding sampling interval length.
Define the local within-interval time variable $\tau\triangleq t-t_n\in[0,\delta_n]$.
Let $h_i(t)$ denote the exit thickness of stand $i$ ($i=1,\ldots,5$),
and let $T_i(t)$ denote the inter-stand strip tension between stand $i$ and stand $i+1$ ($i=1,\ldots,4$).
The system is strongly coupled because the inter-stand tensions propagate along the mill line and are affected by neighboring stands' actions.
Let $h_i^{\mathrm{ref}}(t)$ and $T_i^{\mathrm{ref}}(t)$ be the desired references (setpoints) given by process requirements. We define deviation variables
\begin{equation}
\Delta h_i(t)\triangleq h_i(t)-h_i^{\mathrm{ref}}(t),\qquad
\Delta T_i(t)\triangleq T_i(t)-T_i^{\mathrm{ref}}(t).
\label{eq:dev_def}
\end{equation}
Throughout this paper, the symbol ``$\Delta$'' attached to states always means deviation from reference.
For each stand $i$, we choose the local deviation state as
\begin{equation}
\Delta x_i(t)\triangleq
\begin{bmatrix}
\Delta h_i(t)\\
\Delta T_{i-1}(t)\\
\Delta T_i(t)
\end{bmatrix}\in\mathbb{R}^{d},\qquad d=3,
\label{eq:xi_def_clean}
\end{equation}
with the boundary convention $\Delta T_0(t)\equiv 0$ and $\Delta T_5(t)\equiv 0$ to keep a unified dimension for all stands.
For a five-stand tandem mill, the dominant coupling is between adjacent stands, hence we define
\begin{equation}
Z_1=\{2\},\quad
Z_i=\{i-1,\,i+1\}\ (i=2,3,4),\quad
Z_5=\{4\}.
\label{eq:Zi_clean}
\end{equation}
Define the neighbor-state stack
\begin{equation}
\Delta x_{Z_i}(t_n)=\mathrm{col}\{\Delta x_k(t_n)\,\big|\,k\in Z_i\}.
\label{eq:xZi_clean}
\end{equation}
Each stand is actuated by its roll gap $s_i$ and roll speed $v_i$. Define the sample-to-sample input increment
\begin{equation}
\Delta u_i(t_n)\triangleq
\begin{bmatrix}
\Delta s_i(t_n)\\
\Delta v_i(t_n)
\end{bmatrix}.
\label{eq:du_discrete_clean}
\end{equation}
Throughout this paper, the symbol ``$\Delta$'' attached to inputs means sample-to-sample increment.
Thus, $\Delta x_i$ (a deviation) and $\Delta u_i$ (an increment) are conceptually different, and this is fixed by definition.
Let $d_i(t)$ denote exogenous disturbances.
We denote the interval-level equivalent disturbance by $\Delta d_i(t_n)$.
$I$ denotes the identity matrix; $0$ denotes the zero matrix.
Conceptually, the interval evolution of the deviation state admits a linearized discrete-time form
\begin{equation}
\Delta x_i(t_{n+1})=
M_d\,\Delta x_i(t_n)
+
N_d\,\Delta u_i(t_n)
+
F_d\,\Delta d_i(t_n),
\label{eq:linear_form_concept}
\end{equation}
where $M_d$, $N_d$, and $F_d$ represent equivalent discrete-time matrices around operating conditions.
In a practical five-stand cold rolling mill, accurately deriving and identifying these matrices and disturbance models from first principles is difficult,
due to strong coupling, unmodeled nonlinearities, and time-varying operating regimes.
Therefore, this paper aims to learn a high-fidelity approximation of the interval evolution from data and then embed it into distributed MPC.
\begin{remark}
In fact, due to the existence of complex coupling relationships, it is difficult to directly and accurately establish \eqref{eq:linear_form_concept}
based on first principles. Therefore, in this paper, we learn an approximate mapping of \eqref{eq:true_mapping_clean} from data.
\end{remark}
Although decisions are updated at discrete instants $t_n$, the hydraulic gap and drive systems evolve continuously inside each interval,
and abrupt within-interval changes may excite tension oscillations and deteriorate thickness stability.
Thus, parameterizing the within-interval increment trajectory by a low-order polynomial:
(i) yields a compact finite-dimensional decision representation;
(ii) enforces smooth profiles inside the interval;
(iii) enables enforcing increment constraints for all $\tau\in[0,\delta_n]$.
This is appropriate when $\delta_n$ is not excessively large relative to actuator bandwidth and the within-interval evolution is well approximated by a low-order basis.
Specifically, the within-interval input increment is parameterized as the quadratic
\begin{equation}
\Delta u_{i,n}(\tau)=
\Gamma_{i,n0}+\Gamma_{i,n1}\tau+\Gamma_{i,n2}\tau^2,
\qquad \tau\in[0,\delta_n],
\label{eq:du_poly_vec_clean}
\end{equation}
where $\Gamma_{i,n0},\Gamma_{i,n1},\Gamma_{i,n2}\in\mathbb{R}^{n_u}$ are coefficient vectors ($n_u=2$).
Component-wise, \eqref{eq:du_poly_vec_clean} corresponds to
\begin{equation}
\begin{aligned}
\Delta s_{i,n}(\tau) &= \gamma^{(s)}_{i,n0}+\gamma^{(s)}_{i,n1}\tau+\gamma^{(s)}_{i,n2}\tau^2,\\
\Delta v_{i,n}(\tau) &= \gamma^{(v)}_{i,n0}+\gamma^{(v)}_{i,n1}\tau+\gamma^{(v)}_{i,n2}\tau^2.
\end{aligned}
\label{eq:du_components_clean}
\end{equation}
Define the stacked parameter vector
\begin{equation}
\Gamma_{i,n}\triangleq
\big[
(\Gamma_{i,n0})^\top,\,
(\Gamma_{i,n1})^\top,\,
(\Gamma_{i,n2})^\top
\big]^\top
\in\mathbb{R}^{p},
\qquad
p=3n_u=6.
\label{eq:Gamma_clean}
\end{equation}
Here, $\Gamma_{i,n0}$ is the baseline increment at $\tau=0$, while $\Gamma_{i,n1}$ and $\Gamma_{i,n2}$ describe the linear and quadratic variation rates.
Define the interval-averaged equivalent increments as
\begin{equation}
\begin{aligned}
\Delta u_i(t_n) &\triangleq \frac{1}{\delta_n}\int_0^{\delta_n}\Delta u_{i,n}(\tau)\,d\tau,\\
\Delta d_i(t_n) &\triangleq \frac{1}{\delta_n}\int_0^{\delta_n}\Delta d_i(\tau)\,d\tau.
\end{aligned}
\label{eq:avg_def_clean}
\end{equation}
With \eqref{eq:du_poly_vec_clean}, the input average has a closed form:
\begin{equation}
\Delta u_i(t_n)=
\Gamma_{i,n0}
+\Gamma_{i,n1}\frac{\delta_n}{2}
+\Gamma_{i,n2}\frac{\delta_n^2}{3}.
\label{eq:avg_closed_clean}
\end{equation}
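As a quick sanity check, the closed-form average \eqref{eq:avg_closed_clean} can be verified numerically against the definition \eqref{eq:avg_def_clean}; the sketch below uses illustrative coefficient values, not identified mill parameters.

```python
import numpy as np

# Illustrative coefficients for one stand (hypothetical values, n_u = 2).
Gamma0 = np.array([0.02, -0.01])   # baseline increment at tau = 0
Gamma1 = np.array([0.10, 0.05])    # linear variation rate
Gamma2 = np.array([-0.30, 0.20])   # quadratic variation rate
delta = 0.05                       # sampling interval length

def du(tau):
    """Within-interval increment trajectory Delta u_{i,n}(tau)."""
    return Gamma0 + Gamma1 * tau + Gamma2 * tau**2

# Numerical interval average (1/delta) * integral of du over [0, delta],
# via a fine composite trapezoid rule.
taus = np.linspace(0.0, delta, 10001)
vals = np.stack([du(t) for t in taus])
h = taus[1] - taus[0]
avg_numeric = (0.5 * (vals[0] + vals[-1]) + vals[1:-1].sum(axis=0)) * h / delta

# Closed-form average: Gamma0 + Gamma1*delta/2 + Gamma2*delta^2/3.
avg_closed = Gamma0 + Gamma1 * delta / 2 + Gamma2 * delta**2 / 3
```

Since the integrand is a quadratic, the two values agree to quadrature precision.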
The sampling domains of the deviation states $\Delta x_i$, the neighbor stacks $\Delta x_{Z_i}$, and the polynomial parameters $\Gamma_{i,n}$
specify the operating envelope used to generate supervised training data.
Given the above parameterization, one training sample is generated on each interval $[t_n,t_{n+1}]$.
In addition to the local deviation state, the neighbor deviation states are included to represent inter-stand coupling.
The process is summarized in Table~\ref{tab:interval_sample_generation_en}.
\begin{table}[t]
\centering
\small
\renewcommand{\arraystretch}{1.15}
\caption{Procedure for generating one interval-level sample on $[t_n,t_{n+1}]$ (five-stand coupled mill).}
\label{tab:interval_sample_generation_en}
\begin{tabularx}{\linewidth}{>{\centering\arraybackslash}p{0.09\linewidth} X}
\toprule
\textbf{Step} & \textbf{Operation} \\
\midrule
1 & \textbf{State sampling:} sample $\Delta x_i(t_n)$ and $\Delta x_{Z_i}(t_n)$ from the operating envelope. \\
2 & \textbf{Parameter sampling:} draw $\Gamma_{i,n}$ (coefficients for both $\Delta s_{i,n}$ and $\Delta v_{i,n}$). \\
3 & \textbf{Control construction:} compute $\Delta u_{i,n}(\tau)$ via \eqref{eq:du_poly_vec_clean}. \\
4 & \textbf{State propagation:} integrate the \emph{five-stand coupled} mill model on $[t_n,t_{n+1}]$ (e.g., RK4) using the within-interval control trajectory, and record $\Delta x_i(t_{n+1})$. \\
\bottomrule
\end{tabularx}
\end{table}
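The four steps in Table~\ref{tab:interval_sample_generation_en} can be sketched as follows; the coupled linear dynamics, noise scales, and dimensions are illustrative placeholders standing in for the mechanistic five-stand model, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_stands = 3, 5                       # per-stand state dim, number of stands

# Hypothetical linear coupled dynamics standing in for the five-stand mill:
# dx/dt = A x + B du(tau), with weak adjacent-stand coupling.
nx, nu = d * n_stands, 2 * n_stands
A = -0.5 * np.eye(nx) + 0.05 * np.eye(nx, k=d) + 0.05 * np.eye(nx, k=-d)
Bm = rng.normal(scale=0.1, size=(nx, nu))

def du_traj(Gamma, tau):
    """Quadratic within-interval increments for all stands; Gamma: (n_stands, 3, 2)."""
    return (Gamma[:, 0] + Gamma[:, 1] * tau + Gamma[:, 2] * tau**2).ravel()

def rk4(x, Gamma, delta, m=20):
    """Integrate the coupled model over [0, delta] with m fixed RK4 sub-steps."""
    h, t = delta / m, 0.0
    f = lambda tau, x: A @ x + Bm @ du_traj(Gamma, tau)
    for _ in range(m):
        k1 = f(t, x); k2 = f(t + h/2, x + h/2 * k1)
        k3 = f(t + h/2, x + h/2 * k2); k4 = f(t + h, x + h * k3)
        x = x + h/6 * (k1 + 2*k2 + 2*k3 + k4); t += h
    return x

# Table steps 1-4 for stand i = 1 (neighbors Z_1 = {2}).
x0 = rng.normal(scale=0.1, size=nx)                    # step 1: state sampling
Gamma = rng.normal(scale=0.05, size=(n_stands, 3, 2))  # step 2: parameter sampling
delta = 0.05
x1 = rk4(x0, Gamma, delta)                             # steps 3-4: propagate, record
sample = (x0[:d], x0[d:2*d], Gamma[0].ravel(), delta, x1[:d])
```

One such tuple corresponds to a single interval-level sample; repeating the draws yields the dataset.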
Accordingly, an interval sample for subsystem $i$ can be represented as
\begin{equation}
\mathcal{D}_{i,n}=\big\{\Delta x_i(t_n),\ \Delta x_{Z_i}(t_n),\ \Delta u_{i,n}(\tau),\ \Delta x_i(t_{n+1})\big\}.
\label{eq:interval_sample_clean}
\end{equation}
Note that $\Delta u_{i,n}(\tau)$ is fully determined by $(\Gamma_{i,n},\delta_n)$ via \eqref{eq:du_poly_vec_clean},
therefore it is sufficient to store $(\Gamma_{i,n},\delta_n)$ as the learning input.
For each subsystem $i$, by repeating the above procedure across multiple intervals and randomized draws,
the local one-step training dataset is formed as
\begin{equation}
\begin{split}
S_i=\Big\{&
\big(\Delta x_i^{(j)}(t_n),\,\Delta x_{Z_i}^{(j)}(t_n),\,\Delta x_i^{(j)}(t_{n+1});\,
\Gamma_{i,n}^{(j)},\,\delta_n^{(j)}\big)
\ \Big|\ j=1,\ldots,J
\Big\}.
\end{split}
\label{eq:S_i_clean}
\end{equation}
Here $J$ is the number of one-step samples for subsystem $i$.
The overall dataset for the five-stand mill is denoted by $S=\bigcup_{i=1}^{5}S_i$.
The point-cloud visualization of the training dataset is shown in Figure~\ref{2}.
\begin{figure*}[htbp]
\centering
\includegraphics[scale=0.5]{picture/Fig2.pdf}
\caption{Point cloud map of the training dataset.}\label{2}
\end{figure*}
The one-step set $S_i$ is sufficient for one-step regression, but it is not sufficient for training with multi-step rollout loss
and reciprocal-consistency regularization, because these objectives require ground-truth deviation-state trajectories over a horizon of consecutive intervals.
Therefore, without changing the single-interval sampling mechanism above, we additionally organize the offline-simulated samples
into $K$-step trajectory segments.
Specifically, for each starting time $t_n$ we generate a segment of length $K$ by consecutively sampling
$\Gamma_{i,n+s}$ and $\delta_{n+s}$ (and the corresponding inputs/disturbances),
and integrating the five-stand coupled mill model over $[t_{n+s},t_{n+s+1}]$ for $s=0,\ldots,K-1$.
Hence, we obtain the deviation-state sequence $\{\Delta x_i(t_{n+s})\}_{s=0}^{K}$ as well as the neighbor stacks
$\{\Delta x_{Z_i}(t_{n+s})\}_{s=0}^{K-1}$.
Define a $K$-step segment sample for subsystem $i$ as
\begin{equation}
\begin{aligned}
\mathcal{W}_{i,n}=
\Big\{&
\big(\Delta x_i(t_{n+s}),\,\Delta x_{Z_i}(t_{n+s}),\,\Gamma_{i,n+s},\,\delta_{n+s}\big)_{s=0}^{K-1};\\
&\big(\Delta x_i(t_{n+s+1})\big)_{s=0}^{K-1}
\Big\}.
\end{aligned}
\label{eq:segment_clean}
\end{equation}
By repeating the above segment generation, we form the multi-step training set
\begin{equation}
S_i^{(K)}=\Big\{\mathcal{W}_{i,n}^{(j)}\ \Big|\ j=1,\ldots,J_K\Big\},
\label{eq:S_i_K_clean}
\end{equation}
where $J_K$ is the number of $K$-step segment samples.
Note that $S_i$ can be viewed as the marginal one-step projection of $S_i^{(K)}$ (keeping only the $s=0$ tuple),
thus the original dataset design is preserved, and only an additional \emph{segment organization} is introduced for multi-step training.
%========================
\section{Construction of Residual Neural Network}
%========================
\subsection{Residual Neural Network Structure Construction and Training Method}
Given the dataset, the neural network model is trained to learn a stand-wise, control-dependent one-step evolution law of deviation states:
\begin{equation}
\Delta x_i(t_{n+1})
\approx
\Delta x_i(t_n)+
\mathcal{N}_i\!\Big(\Delta x_i(t_n),\,\Delta x_{Z_i}(t_n),\,\Gamma_{i,n},\,\delta_n;\,\Theta_i\Big),
\label{eq:learned_dyn_clean}
\end{equation}
where $\mathcal{N}_i$ outputs the one-step deviation-state change and $\Theta_i$ are trainable parameters.
\begin{remark}
If $\mathcal{N}_i$ does not take control information as input (here $\Gamma_{i,n}$ and $\delta_n$),
the predictor becomes an autoregressive model that only reproduces trajectories under the training input patterns
and cannot answer the counterfactual question: ``what will happen if we choose a different roll gap and speed trajectory?''
Since MPC optimizes over candidate decisions, a control-dependent predictor \eqref{eq:learned_dyn_clean} is necessary
to evaluate the predicted thickness and tension behavior under different candidate actuator trajectories.
\end{remark}
Let $d=3$ (state dimension), let $|Z_i|$ be the number of neighbors of stand $i$ in \eqref{eq:Zi_clean}, and let $p=3n_u=6$ as in \eqref{eq:Gamma_clean}.
Define the input vector
\begin{equation}
X_{i,\text{in}} \triangleq
\big[
\Delta x_i(t_n)^\top,\,
\Delta x_{Z_i}(t_n)^\top,\,
\Gamma_{i,n}^\top,\,
\delta_n
\big]^\top
\in \mathbb{R}^{d(1+|Z_i|)+p+1}.
\label{eq:X_in_clean}
\end{equation}
The network mapping is
\begin{equation}
\mathcal{N}_i:\mathbb{R}^{d(1+|Z_i|)+p+1}\rightarrow\mathbb{R}^{d}.
\end{equation}
To improve training stability and long-horizon rollout robustness, we use a residual form.
Let $\hat{I}_i$ be a selection matrix extracting the local state block:
\begin{equation}
\hat{I}_i = [\,I_d,\ 0_{d\times(d|Z_i|+p+1)}\,].
\label{eq:Ihat_clean}
\end{equation}
Then the one-step predictor is written as
\begin{equation}
X_{i,\text{out}} = \hat{I}_i X_{i,\text{in}} + \mathcal{N}_i(X_{i,\text{in}}; \Theta_i),
\label{eq:res_predict_clean}
\end{equation}
where $X_{i,\text{out}}$ represents the predicted $\Delta x_i(t_{n+1})$.
This structure implements a baseline-plus-correction interpretation:
the shortcut propagates the current deviation state $\Delta x_i(t_n)$, while the network learns the correction capturing
unmodeled nonlinearities and inter-stand coupling (via $\Delta x_{Z_i}$) under varying operating conditions.
To improve robustness when $\delta_n$ varies, we introduce an auxiliary branch inside $\mathcal{N}_i$:
\begin{equation}
\mathcal{N}_i(X_{i,\text{in}};\Theta_i)\triangleq
\psi_i(X_{i,\text{in}};\Theta_{\psi_i}) + \rho_i(X_{i,\text{in}};\theta_i),
\label{eq:aux_clean}
\end{equation}
where $\psi_i$ is a lightweight feedforward branch that captures low-frequency/scale effects strongly related to $\delta_n$,
and $\rho_i$ captures the remaining nonlinear coupling corrections.
When $\psi_i\equiv 0$, the model reduces to a standard residual network.
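A minimal sketch of the residual predictor \eqref{eq:res_predict_clean} with the auxiliary decomposition \eqref{eq:aux_clean}; the two MLP branches carry random placeholder weights (not trained values), and the layer sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, nZ, p = 3, 2, 6                       # local state dim, #neighbors, p = 3*n_u
n_in = d * (1 + nZ) + p + 1              # input dimension of X_in

def mlp_init(sizes, rng):
    """Small tanh MLP; weights are random placeholders, not trained values."""
    return [(rng.normal(scale=0.3, size=(m, n)), np.zeros(m))
            for n, m in zip(sizes[:-1], sizes[1:])]

def mlp_apply(params, x):
    for k, (W, b) in enumerate(params):
        x = W @ x + b
        if k < len(params) - 1:
            x = np.tanh(x)
    return x

psi = mlp_init([n_in, 8, d], rng)        # lightweight delta-related branch psi_i
rho = mlp_init([n_in, 32, d], rng)       # nonlinear coupling-correction branch rho_i
I_hat = np.hstack([np.eye(d), np.zeros((d, n_in - d))])   # selection matrix

def predict(X_in):
    """Residual one-step predictor: shortcut plus learned correction psi + rho."""
    return I_hat @ X_in + mlp_apply(psi, X_in) + mlp_apply(rho, X_in)

X_in = rng.normal(size=n_in)             # stacked [state; neighbors; Gamma; delta]
X_out = predict(X_in)                    # predicted next local deviation state
```

The identity path is exactly the selection $\hat{I}_i X_{i,\text{in}}$, so with zeroed branches the predictor returns the current local state.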
For the $j$-th sample in \eqref{eq:S_i_clean}, define
\begin{equation}
X_{i,\text{in}}^{(j)} =
\big[
\Delta x_i^{(j)}(t_n)^\top,\ \Delta x_{Z_i}^{(j)}(t_n)^\top,\
(\Gamma_{i,n}^{(j)})^\top,\ \delta_n^{(j)}
\big]^{\top},
\end{equation}
and the supervised residual target
\begin{equation}
\Delta r_i^{(j)}=\Delta x_i^{(j)}(t_{n+1})-\Delta x_i^{(j)}(t_n).
\label{eq:target_clean}
\end{equation}
To suppress accumulation drift induced by long-horizon recursion and to improve long-term predictive stability,
we train the forward predictor jointly with an auxiliary backward residual model
and impose a multi-step reciprocal-consistency regularization over a $K$-step segment from $S_i^{(K)}$.
Construct a backward residual network
\begin{equation}
\mathcal{B}_i:\mathbb{R}^{d(1+|Z_i|)+p+1}\rightarrow\mathbb{R}^{d},
\end{equation}
parameterized by $\bar{\Theta}_i$. For the backward step associated with interval $[t_n,t_{n+1}]$, define
\begin{equation}
\begin{aligned}
X_{i,\mathrm{in}}^{b}
&=
\big[
\Delta x_i(t_{n+1})^\top,\ \Delta x_{Z_i}(t_{n+1})^\top,\
\Gamma_{i,n}^\top,\ \delta_n
\big]^{\top},\\
X_{i,\mathrm{out}}^{b}
&=
\hat{I}_i X_{i,\mathrm{in}}^{b} + \mathcal{B}_i(X_{i,\mathrm{in}}^{b};\bar{\Theta}_i),
\end{aligned}
\label{eq:back_clean}
\end{equation}
where $X_{i,\mathrm{out}}^{b}$ represents the backward estimate of $\Delta x_i(t_n)$.
The supervised backward residual target is
\begin{equation}
\Delta r_i^{b}=\Delta x_i(t_n)-\Delta x_i(t_{n+1}).
\end{equation}
Given a segment sample $\mathcal{W}_{i,n}$, initialize
\begin{equation}
\Delta \hat{x}_i(t_n)=\Delta x_i(t_n),
\end{equation}
and recursively apply the forward predictor for $K$ steps:
\begin{equation}
\begin{aligned}
\Delta \hat{x}_i(t_{n+s+1})
&=
\Delta \hat{x}_i(t_{n+s})
+
\mathcal{N}_i\!\Big(
\Delta \hat{x}_i(t_{n+s}),\,\Delta \hat{x}_{Z_i}(t_{n+s}),\,
\Gamma_{i,n+s},\,\delta_{n+s};\,\Theta_i
\Big),\\
&\qquad s=0,\ldots,K-1.
\end{aligned}
\label{eq:fwd_roll_clean}
\end{equation}
Set the terminal condition
\begin{equation}
\Delta \bar{x}_i(t_{n+K})=\Delta \hat{x}_i(t_{n+K}),
\end{equation}
and roll back using $\mathcal{B}_i$:
\begin{equation}
\begin{aligned}
\Delta \bar{x}_i(t_{n+s})
&=
\hat{I}_i X_{i,\mathrm{in}}^{b}(t_{n+s})
+
\mathcal{B}_i\!\Big(X_{i,\mathrm{in}}^{b}(t_{n+s});\,\bar{\Theta}_i\Big),
\quad s=K-1,\ldots,0,
\end{aligned}
\label{eq:bwd_roll_clean}
\end{equation}
where
\begin{equation}
X_{i,\mathrm{in}}^{b}(t_{n+s})=
\big[
\Delta \bar{x}_i(t_{n+s+1})^\top,\ \Delta \hat{x}_{Z_i}(t_{n+s+1})^\top,\
\Gamma_{i,n+s}^\top,\ \delta_{n+s}
\big]^{\top}.
\end{equation}
Define the reciprocal-consistency error over the segment as
\begin{equation}
E_i(t_n)\triangleq
\sum_{s=0}^{K}
\left\|
\Delta \hat{x}_i(t_{n+s})-\Delta \bar{x}_i(t_{n+s})
\right\|^2.
\end{equation}
We jointly minimize:
\begin{equation}
\begin{aligned}
L_{\mathrm{1step}}(\Theta_i)
&= \frac{1}{J_K}\sum_{j=1}^{J_K}\frac{1}{K}\sum_{s=0}^{K-1}
\Big\|
\big(\Delta x_i^{(j)}(t_{n+s+1})-\Delta x_i^{(j)}(t_{n+s})\big)
-\mathcal{N}_i\!\left(
X_{i,\mathrm{in}}^{(j)}(t_{n+s});\Theta_i
\right)
\Big\|^2,\\[2mm]
L_{\mathrm{bwd}}(\bar{\Theta}_i)
&= \frac{1}{J_K}\sum_{j=1}^{J_K}\frac{1}{K}\sum_{s=0}^{K-1}
\Big\|
\big(\Delta x_i^{(j)}(t_{n+s})-\Delta x_i^{(j)}(t_{n+s+1})\big)
-\mathcal{B}_i\!\left(
X_{i,\mathrm{in}}^{b,(j)}(t_{n+s});\bar{\Theta}_i
\right)
\Big\|^2,\\[2mm]
L_{\mathrm{msrp}}(\Theta_i,\bar{\Theta}_i)
&= \frac{1}{J_K}\sum_{j=1}^{J_K} E_i^{(j)}(t_n),\\[2mm]
L_{\mathrm{roll}}(\Theta_i)
&= \frac{1}{J_K}\sum_{j=1}^{J_K}\sum_{s=1}^{K}
\Big\|
\Delta x_i^{(j)}(t_{n+s})-\Delta \hat{x}_i^{(j)}(t_{n+s})
\Big\|^2.
\end{aligned}
\label{eq:loss_clean}
\end{equation}
Here, $L_{\mathrm{1step}}$ enforces one-step accuracy; $L_{\mathrm{roll}}$ explicitly suppresses long-horizon drift under recursion;
$L_{\mathrm{msrp}}$ regularizes the learned dynamics by enforcing reciprocal consistency between forward and backward rollouts;
and $L_{\mathrm{bwd}}$ trains the backward model for the consistency regularization.
In implementation, these terms are combined as
\begin{equation}
L_{\mathrm{total}}=\lambda_1 L_{\mathrm{1step}}+\lambda_2 L_{\mathrm{roll}}+\lambda_3 L_{\mathrm{msrp}}+\lambda_4 L_{\mathrm{bwd}},
\end{equation}
where $\lambda_1,\ldots,\lambda_4\ge 0$ are tuned on a validation set.
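The four loss terms of \eqref{eq:loss_clean} can be sketched on a single toy segment ($J_K=1$); linear maps stand in for the trained networks, and the toy trajectory, dimensions, and $\lambda$ weights are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_in, K = 3, 10, 4                    # toy dimensions (hypothetical)

# Linear stand-ins for the forward/backward residual networks; in the paper
# these are the trained MLPs, not linear maps.
Wf = rng.normal(scale=0.05, size=(d, n_in))
Wb = rng.normal(scale=0.05, size=(d, n_in))
N = lambda X: Wf @ X                     # forward residual correction
B = lambda X: Wb @ X                     # backward residual correction

def x_in(state, extras):
    """Stack [local state; surrogate for neighbors, Gamma, delta]."""
    return np.concatenate([state, extras])

# A toy ground-truth K-step deviation trajectory and per-step extras.
extras = [rng.normal(size=n_in - d) for _ in range(K)]
xs = [rng.normal(size=d)]
for s in range(K):
    xs.append(0.9 * xs[-1] + 0.01 * rng.normal(size=d))

# Forward rollout: x_hat_{s+1} = x_hat_s + N(x_in).
xh = [xs[0]]
for s in range(K):
    xh.append(xh[-1] + N(x_in(xh[-1], extras[s])))

# Backward rollout from the forward terminal state.
xb = [None] * (K + 1)
xb[K] = xh[K]
for s in range(K - 1, -1, -1):
    Xb = x_in(xb[s + 1], extras[s])
    xb[s] = Xb[:d] + B(Xb)               # identity path + backward correction

# The four loss terms for this single segment.
L1step = np.mean([np.sum((xs[s+1] - xs[s] - N(x_in(xs[s], extras[s])))**2)
                  for s in range(K)])
Lbwd = np.mean([np.sum((xs[s] - xs[s+1] - B(x_in(xs[s+1], extras[s])))**2)
                for s in range(K)])
Lmsrp = sum(np.sum((xh[s] - xb[s])**2) for s in range(K + 1))
Lroll = sum(np.sum((xs[s] - xh[s])**2) for s in range(1, K + 1))
L_total = 1.0*L1step + 1.0*Lroll + 0.1*Lmsrp + 1.0*Lbwd  # lambda weights: placeholders
```

Note that the terminal term of the consistency error vanishes by construction, since the backward rollout starts from the forward terminal state.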
After training, with the optimized parameters $\Theta_i^*$, the one-step prediction is
\begin{equation}
\Delta \hat{x}_i(t_{n+1})=
\Delta x_i(t_n)
+
\mathcal{N}_i\!\Big(
\Delta x_i(t_n),\,\Delta x_{Z_i}(t_n),\,
\Gamma_{i,n},\,\delta_n;\,\Theta_i^*
\Big),
\label{eq:pred_clean}
\end{equation}
and multi-step prediction is obtained by recursive rollout of \eqref{eq:pred_clean}.
This learned predictor is the internal model used by the MPC optimizer in the next section.
Finally, network parameters are optimized using Adam:
\begin{equation}
\Theta_{i,t+1} = \Theta_{i,t} - \alpha \frac{\hat{m}_{i,t}}{\sqrt{\hat{v}_{i,t}} + \varepsilon},
\end{equation}
where $\alpha$ is the learning rate (we use $\alpha$ to avoid conflict with other symbols),
$\hat{m}_{i,t}$ and $\hat{v}_{i,t}$ are bias-corrected moment estimates, and $\varepsilon$ is a small constant for numerical stability.
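The Adam recursion above can be sketched as follows; the hyperparameter values are the common defaults, and the quadratic objective serves only as a sanity check of the update rule.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, alpha=1e-2, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update with bias-corrected first/second moment estimates."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)             # bias-corrected first moment
    v_hat = v / (1 - beta2**t)             # bias-corrected second moment
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = ||theta||^2, whose gradient is 2*theta.
theta = np.array([1.0, -2.0])
m = np.zeros(2); v = np.zeros(2)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t)
```

After a few thousand steps the iterate settles near the minimizer, oscillating within roughly one step size of the origin.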
Figure~\ref{fig:rnn_logic} illustrates the overall structure.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.85]{picture/x6.pdf}
\caption{Logic diagram of the residual neural network.}
\label{fig:rnn_logic}
\end{figure}
\subsection{Explainability of the residual network}
Integrating the mechanistic deviation dynamics over one interval yields
\begin{equation}
\Delta x_i(t_{n+1})=
\Delta x_i(t_n)
+
\underbrace{\int_{0}^{\delta_n}
f_i\!\Big(\Delta x_i(t_n+\tau),\,\Delta x_{Z_i}(t_n+\tau),\,u_i(t_n+\tau),\,d_i(t_n+\tau)\Big)\,d\tau}_{\triangleq\ \varphi_{i,n}},
\label{eq:increment_integral}
\end{equation}
where $\varphi_{i,n}$ is the one-interval state increment generated by the mechanistic dynamics $f_i$.
Our learned model \eqref{eq:learned_dyn_clean} adopts the same increment form as \eqref{eq:increment_integral}:
\begin{equation}
\Delta x_i(t_{n+1})
\approx
\Delta x_i(t_n)
+
\mathcal{N}_i\!\Big(\Delta x_i(t_n),\,\Delta x_{Z_i}(t_n),\,\Gamma_{i,n},\,\delta_n;\Theta_i\Big).
\end{equation}
Here, $\mathcal{N}_i$ plays the role of a data-driven approximation of the integral increment $\varphi_{i,n}$,
i.e., it approximates the accumulated effect of the mechanistic dynamics over $[t_n,t_{n+1}]$.
This is consistent with the well-known interpretation that a residual network behaves like a one-step time integrator:
the identity path propagates the current state, while the residual branch represents the increment over the time lag.
Inside each interval, we do not optimize point-wise input values but parameterize the increment trajectory by
$(\Gamma_{i,n},\delta_n)$ as in \eqref{eq:du_poly_vec_clean}.
Hence, the mechanistic increment in \eqref{eq:increment_integral} depends on the \emph{whole} within-interval trajectory.
Feeding $(\Gamma_{i,n},\delta_n)$ into $\mathcal{N}_i$ is therefore a compact way to represent how different candidate
gap/speed trajectories change the integral effect and thus the next thickness--tension state.
When $\delta_n$ varies and is not very small, directly learning $\varphi_{i,n}$ may be harder.
Motivated by the generalized residual idea,
we decompose the increment predictor into two parts in \eqref{eq:aux_clean}:
\begin{equation}
\mathcal{N}_i(\cdot)=\psi_i(\cdot)+\rho_i(\cdot).
\end{equation}
Conceptually, $\psi_i$ captures low-frequency and scale effects strongly related to $\delta_n$,
while captures the remaining nonlinear coupling corrections.
This provides a mechanism-consistent interpretation: a baseline increment plus a residual correction
that compensates unmodeled nonlinearities and inter-stand coupling.
%========================
\section{Nash Equilibrium-Based RNE-DMPC}
%========================
The five-stand tandem cold rolling system is strongly coupled through inter-stand tension propagation.
As a result, changes in control actions (roll gap and stand speed) at one stand can affect both upstream and downstream stands,
making centralized online optimization over all stands' decision variables computationally demanding.
To mitigate this issue, we decompose the global predictive-control problem into local subproblems associated with individual stands.
Each local controller optimizes its own decision variables while accounting for coupling via limited information exchange with neighboring controllers.
Motivated by game-theoretic coordination \citep{rawlings2008coordinating}, we formulate distributed coordination as a Nash-equilibrium-seeking iteration.
Based on the trained residual neural network surrogate model, we construct a Nash-equilibrium-based distributed MPC method (RNE-DMPC)
for coordinated thickness--tension regulation and tracking. The overall control structure is shown in Figure~\ref{4}.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\linewidth]{picture/x2.pdf}
\caption{Schematic diagram of the control architecture for a tandem cold rolling mill.}\label{4}
\end{figure*}
At sampling time $t_n$, stand $i$ chooses the polynomial-parameter sequence
$\mathbf{\Gamma}_i=\mathrm{col}\{\Gamma_{i,n},\ldots,\Gamma_{i,n+N_c-1}\}$, where $N_c$ is the control horizon.
Let $\mathbf{\Gamma}=(\mathbf{\Gamma}_1,\ldots,\mathbf{\Gamma}_5)$
denote the joint strategy profile, and let $\mathbf{\Gamma}_{-i}$ denote the collection of all strategies except stand $i$.
Given the current measured/estimated deviation state $\Delta x_i(t_n)$ and the strategies
$(\mathbf{\Gamma}_i,\mathbf{\Gamma}_{Z_i})$,
the multi-step prediction used by stand $i$ is written explicitly as
\begin{equation}
\begin{aligned}
\Delta \hat{x}_i(t_{n+s+1};\mathbf{\Gamma}_i,\mathbf{\Gamma}_{Z_i})
&=
\Delta \hat{x}_i(t_{n+s};\mathbf{\Gamma}_i,\mathbf{\Gamma}_{Z_i})
+
\mathcal{N}_i\!\Big(
\Delta \hat{x}_i(t_{n+s};\cdot),\,
\Delta \hat{x}_{Z_i}(t_{n+s};\mathbf{\Gamma}_{Z_i}),\\
&\qquad
\Gamma_{i,n+s},\,
\delta_{n+s};\,\Theta_i^*
\Big),
\end{aligned}
\label{eq:rollout_mpc_game}
\end{equation}
for $s=0,\ldots,N_p-1$, with initialization $\Delta \hat{x}_i(t_n;\cdot)=\Delta x_i(t_n)$, where $N_p$ is the prediction horizon.
Here the neighbor stack $\Delta \hat{x}_{Z_i}$ is generated from neighbors' strategies $\mathbf{\Gamma}_{Z_i}$ via the same learned predictors.
Along the horizon, the interval-averaged increment follows \eqref{eq:avg_closed_clean}:
\begin{equation}
\Delta u_i(t_{n+s})=
\Gamma_{i,n+s,0}
+\Gamma_{i,n+s,1}\frac{\delta_{n+s}}{2}
+\Gamma_{i,n+s,2}\frac{\delta_{n+s}^2}{3}.
\label{eq:du_avg_clean}
\end{equation}
\begin{remark}
Because inter-stand tension $T_i$ is jointly affected by the adjacent stands $i$ and $i+1$,
the predicted evolution of $\Delta T_i$ depends on neighbors' future actions,
hence the MPC problems are not independent but form a coupled dynamic game.
\end{remark}
Stacking stand $i$'s decision variables over the control horizon gives
\begin{equation}
\mathbf{\Gamma}_i\triangleq
\mathrm{col}\{\Gamma_{i,n},\Gamma_{i,n+1},\ldots,\Gamma_{i,n+N_c-1}\}
\in \mathbb{R}^{pN_c}.
\end{equation}
In deviation coordinates, the regulation/tracking objective is $\Delta x_i\rightarrow 0$, i.e.
\begin{equation}
\Delta x_{i,\mathrm{ref}}(t_{n+s})\equiv 0\in\mathbb{R}^{d},\qquad d=3.
\end{equation}
Recall $\Delta x_i=[\Delta h_i,\ \Delta T_{i-1},\ \Delta T_i]^\top$ from \eqref{eq:xi_def_clean}.
Define the row selectors
\begin{equation}
C^- \triangleq [0\ \ 1\ \ 0]\in\mathbb{R}^{1\times 3},\qquad
C^+ \triangleq [0\ \ 0\ \ 1]\in\mathbb{R}^{1\times 3},
\end{equation}
so that $C^-\Delta x_i=\Delta T_{i-1}$ (upstream interface) and $C^+\Delta x_i=\Delta T_i$ (downstream interface).
Since the tension $T_i$ is shared by stands $i$ and $i+1$, define the predicted shared-tension interface mismatch
\begin{equation}
e_i(t_{n+s};\mathbf{\Gamma})\triangleq
C^+\Delta \hat{x}_i(t_{n+s};\mathbf{\Gamma})
-
C^-\Delta \hat{x}_{i+1}(t_{n+s};\mathbf{\Gamma}),\qquad i=1,\ldots,4.
\label{eq:shared_tension_mismatch}
\end{equation}
The local cost of stand $i$ is
\begin{equation}
J_i(\mathbf{\Gamma}_i;\mathbf{\Gamma}_{-i})=
\sum_{s=1}^{N_p}
\left\|
\Delta \hat{x}_i(t_{n+s};\mathbf{\Gamma}_i,\mathbf{\Gamma}_{Z_i})
\right\|_{Q_i}^{2}
+
\sum_{s=0}^{N_c-1}
\left\|\Gamma_{i,n+s}\right\|_{R_i}^{2}
+
J_i^{\mathrm{cpl}}(\mathbf{\Gamma}_i;\mathbf{\Gamma}_{-i}),
\label{eq:Ji_game}
\end{equation}
where $Q_i\succeq 0$ weights thickness and tension deviations, and $R_i\succ 0$ penalizes actuation magnitudes.
The coupling term penalizes the predicted shared-tension interface mismatches with the neighbors:
\begin{equation}
\begin{aligned}
J_i^{\mathrm{cpl}}(\mathbf{\Gamma}_i;\mathbf{\Gamma}_{-i})
=\mu_i\sum_{s=1}^{N_p}
\Big(
&\mathbb{1}[i\ge 2]\,\big|
C^-\Delta \hat{x}_i(t_{n+s};\mathbf{\Gamma})
-
C^+\Delta \hat{x}_{i-1}(t_{n+s};\mathbf{\Gamma})
\big|^2\\
+\ &\mathbb{1}[i\le 4]\,\big|
C^+\Delta \hat{x}_i(t_{n+s};\mathbf{\Gamma})
-
C^-\Delta \hat{x}_{i+1}(t_{n+s};\mathbf{\Gamma})
\big|^2
\Big),
\end{aligned}
\label{eq:coupling_cost}
\end{equation}
with weight $\mu_i>0$ and indicator $\mathbb{1}[\cdot]$.
This term makes the coupling conflict explicit: unilateral actions that locally reduce thickness error may worsen shared-tension
compatibility and thus increase $J_i^{\mathrm{cpl}}$, and also affect neighbors' objectives.
We enforce the absolute-input bounds and within-interval increment bounds.
Absolute input bounds (roll gap and speed):
\begin{equation}
u_{i,\min}\le u_i(t_{n+s})\le u_{i,\max},
\qquad s=0,\ldots,N_p-1,
\label{eq:u_abs_game}
\end{equation}
where $u_i=[s_i,\ v_i]^\top$ collects the roll gap and roll speed.
Within-interval increment-trajectory bounds:
\begin{equation}
\Delta u_{i,\min}\le \Delta u_{i,n+s}(\tau;\Gamma_{i,n+s})\le \Delta u_{i,\max},
\qquad \forall\tau\in[0,\delta_{n+s}],\ s=0,\ldots,N_p-1,
\label{eq:du_traj_game}
\end{equation}
where $\Delta u_{i,n+s}(\tau;\Gamma_{i,n+s})$ is given by the polynomial parameterization \eqref{eq:du_poly_vec_clean}.
To evaluate \eqref{eq:u_abs_game}, compute the interval-averaged increment
\begin{equation}
\Delta u_i(t_{n+s})=
\Gamma_{i,n+s,0}
+\Gamma_{i,n+s,1}\frac{\delta_{n+s}}{2}
+\Gamma_{i,n+s,2}\frac{\delta_{n+s}^2}{3},
\label{eq:du_avg_game}
\end{equation}
and propagate the absolute input along the horizon by
\begin{equation}
u_i(t_n)=u_i(t_{n-1})+\Delta u_i(t_n),\qquad
u_i(t_{n+s})=u_i(t_{n+s-1})+\Delta u_i(t_{n+s}),\ s=1,\ldots,N_p-1,
\label{eq:u_prop_game}
\end{equation}
where $u_i(t_{n-1})$ is the applied (measured) input from the previous sampling instant.
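A sketch of checking \eqref{eq:u_abs_game}--\eqref{eq:u_prop_game} for one candidate parameter sequence; all bounds, horizon lengths, and coefficient magnitudes are hypothetical, and the within-interval bound is checked on a $\tau$ grid rather than via the quadratic's exact extrema.

```python
import numpy as np

# Hypothetical bounds and horizon for one stand; u = [roll gap s, speed v].
u_min, u_max = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
du_min, du_max = np.array([-0.05, -0.05]), np.array([0.05, 0.05])
delta, Np = 0.05, 5

def feasible(Gammas, u_prev, n_grid=50):
    """Check increment-trajectory bounds on a tau grid (a grid approximation;
    the quadratic's endpoint/vertex values could also be checked exactly)
    and absolute input bounds under averaged-increment propagation."""
    u = u_prev.copy()
    taus = np.linspace(0.0, delta, n_grid)
    for G in Gammas:                     # G rows: Gamma_{.,0..2}, shape (3, 2)
        traj = G[0] + taus[:, None] * G[1] + (taus**2)[:, None] * G[2]
        if np.any(traj < du_min) or np.any(traj > du_max):
            return False                 # within-interval increment bound violated
        u = u + (G[0] + G[1] * delta / 2 + G[2] * delta**2 / 3)
        if np.any(u < u_min) or np.any(u > u_max):
            return False                 # absolute input bound violated
    return True

rng = np.random.default_rng(3)
Gammas = 0.01 * rng.normal(size=(Np, 3, 2))    # small candidate coefficients
ok = feasible(Gammas, np.array([0.2, -0.1]))
```

A membership test of this form defines the feasible set over which each local best-response problem is solved.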
Compact feasible set:
\begin{equation}
\Omega_i \triangleq
\Big\{\mathbf{\Gamma}_i\ \Big|\
\eqref{eq:rollout_mpc_game}\ \text{holds and}\
\eqref{eq:u_abs_game},\ \eqref{eq:du_traj_game},\ \eqref{eq:u_prop_game}\ \text{are satisfied}
\Big\}.
\label{eq:Omega_i_game}
\end{equation}
Given $\mathbf{\Gamma}_{-i}$, the best response of stand $i$ is
\begin{equation}
\mathbf{\Gamma}_i^{\mathrm{BR}}\in
\arg\min_{\mathbf{\Gamma}_i\in\Omega_i}\
J_i(\mathbf{\Gamma}_i;\mathbf{\Gamma}_{-i}).
\label{eq:local_BR}
\end{equation}
Because the learned surrogate is differentiable, \eqref{eq:local_BR} can be solved by standard gradient-based NLP solvers.
At each sampling time , the distributed MPC coordination induces a finite-horizon dynamic game:
players are stands $i=1,\ldots,5$; strategy sets are $\Omega_i$; and payoff (cost) functions are $J_i$
defined in \eqref{eq:Ji_game}--\eqref{eq:coupling_cost}.
A joint strategy profile $\mathbf{\Gamma}^*=(\mathbf{\Gamma}_1^*,\ldots,\mathbf{\Gamma}_5^*)$
is a Nash equilibrium if
\begin{equation}
\forall i\in\{1,\ldots,5\},\qquad
\mathbf{\Gamma}_i^*\in
\arg\min_{\mathbf{\Gamma}_i\in\Omega_i}
J_i(\mathbf{\Gamma}_i;\mathbf{\Gamma}_{-i}^*).
\label{eq:NE_def}
\end{equation}
This definition explicitly characterizes the strategic coupling:
each player's optimal decision depends on neighbors' decisions through the shared-tension dynamics and the coupling term.
At iteration $l$, each stand solves \eqref{eq:local_BR} against the latest neighbor strategies to obtain $\mathbf{\Gamma}_i^{\mathrm{BR},(l)}$ and applies the relaxed update
\begin{equation}
\mathbf{\Gamma}_i^{(l)}=
(1-\omega)\mathbf{\Gamma}_i^{(l-1)}
+
\omega\,\mathbf{\Gamma}_i^{\mathrm{BR},(l)},
\qquad \omega\in(0,1].
\label{eq:relaxed_BR}
\end{equation}
The relaxation factor $\omega$ mitigates oscillations caused by strong coupling and improves practical convergence.
The Nash equilibrium is computed through distributed best-response iterations, summarized in Table~\ref{tab:nash_iter_en}.
Convergence is monitored by the relative strategy change
\begin{equation}
\eta^{(l)}\triangleq
\max_i
\frac{\left\|
\mathbf{\Gamma}_i^{(l)}-\mathbf{\Gamma}_i^{(l-1)}
\right\|_2}{
\left\|
\mathbf{\Gamma}_i^{(l-1)}
\right\|_2+\epsilon},
\end{equation}
with $\epsilon>0$ small.
\begin{table}[t]
\centering
\small
\renewcommand{\arraystretch}{1.12}
\setlength{\tabcolsep}{3.5pt}
\caption{Relaxed distributed Nash best-response iteration for RNE-DMPC (five-stand).}
\label{tab:nash_iter_en}
\begin{tabularx}{\linewidth}{>{\centering\arraybackslash}p{0.11\linewidth} X}
\toprule
\textbf{Step} & \textbf{Description} \\
\midrule
A &
Initialize $l=0$ and warm-start $\mathbf{\Gamma}_i^{(0)}$ (e.g., from the previous sampling time). \\
B &
Communicate $\mathbf{\Gamma}_i^{(l-1)}$ (or the induced predicted trajectories) among neighbors; form $\mathbf{\Gamma}_{-i}^{(l-1)}$. \\
C &
Given $\mathbf{\Gamma}_{-i}^{(l-1)}$, solve the best-response NLP \eqref{eq:local_BR} to obtain $\mathbf{\Gamma}_i^{\mathrm{BR},(l)}$. \\
D &
Update the relaxed strategy $\mathbf{\Gamma}_i^{(l)}$ using \eqref{eq:relaxed_BR} and compute the induced predictions
via \eqref{eq:rollout_mpc_game}. \\
E &
Broadcast $\mathbf{\Gamma}_i^{(l)}$ and the predicted interface quantities needed in \eqref{eq:coupling_cost}
to neighbors. \\
F &
Compute $\eta^{(l)}$; if $\eta^{(l)}\le\eta_{\mathrm{tol}}$, stop and set $\mathbf{\Gamma}_i^*=\mathbf{\Gamma}_i^{(l)}$; otherwise set $l\leftarrow l+1$ and repeat. \\
\bottomrule
\end{tabularx}
\end{table}
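The relaxed best-response iteration of Table~\ref{tab:nash_iter_en} can be illustrated on a toy two-player quadratic game with a known closed-form equilibrium; the game, its weights, and the tolerance are placeholders for the per-stand NLPs.

```python
import numpy as np

# Toy 2-player quadratic game standing in for the per-stand problems:
# J_i(x_i; x_j) = (x_i - a_i)^2 + c (x_i - x_j)^2,
# with closed-form best response x_i = (a_i + c x_j) / (1 + c).
a = np.array([1.0, -1.0])
c, omega, eta_tol = 0.8, 0.6, 1e-10

x = np.zeros(2)                            # warm start (step A)
for l in range(1, 1001):
    x_prev = x.copy()
    br = (a + c * x_prev[::-1]) / (1 + c)  # simultaneous best responses (step C)
    x = (1 - omega) * x_prev + omega * br  # relaxed update (step D)
    eta = np.max(np.abs(x - x_prev) / (np.abs(x_prev) + 1e-12))
    if eta <= eta_tol:                     # stopping test (step F)
        break

# Nash equilibrium from the stationarity conditions (1+c)x_i - c x_j = a_i.
M = np.array([[1 + c, -c], [-c, 1 + c]])
x_star = np.linalg.solve(M, a)
```

Because the best-response map is a contraction here, the relaxed iteration converges geometrically to the equilibrium; in the full problem convergence is assessed empirically via the same stopping test.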
After convergence, following the receding-horizon principle, only the first optimized increment is applied:
\begin{equation}
\Delta u_i(t_n)=
\Gamma_{i,n,0}^*
+
\Gamma_{i,n,1}^*\frac{\delta_n}{2}
+
\Gamma_{i,n,2}^*\frac{\delta_n^2}{3},
\end{equation}
followed by $u_i(t_n)=u_i(t_{n-1})+\Delta u_i(t_n)$.
The overall control flow chart is shown in Fig.~\ref{liu}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{picture/x5.pdf}
\caption{The overall system control flow chart.}\label{liu}
\end{figure}
\begin{remark}
Equations \eqref{eq:Ji_game}--\eqref{eq:coupling_cost} and the NE definition \eqref{eq:NE_def} explicitly show that each player's objective
depends on others' strategies through the coupled prediction dynamics and the shared-tension interface mismatch penalty,
thus making the multi-stand strategic interaction and coordination mechanism transparent.
\end{remark}