Modeling and Control Overview

Page 1

Dr. Tarek A. Tutunji

Advanced Control Systems

Mechatronics Engineering Department

Philadelphia University

Jordan

Presented at

Hochschule Bochum, Germany

May 19-21, 2015

Modeling and Control Overview

Page 2

Math Modeling Review

Page 3

Math Modeling Categories

Static vs. dynamic

Linear vs. nonlinear

Time-invariant vs. time-variant

SISO vs. MIMO

Continuous vs. discrete

Deterministic vs. stochastic

Page 4

Static vs. Dynamic

Models can be static or dynamic.

Static models produce no motion, fluid flow, or any other changes.

Example: Battery connected to a resistor

Dynamic models have energy transfer which results in power flow. This causes motion, or other phenomena that change in time.

Example: Battery connected to a resistor, inductor, and capacitor

Static example: v = iR

Dynamic example: v = Ri + L di/dt + (1/C) ∫ i dt
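A minimal simulation sketch of the dynamic example, assuming a series RLC loop driven by a constant source with illustrative component values (not from the slides):

```python
# Sketch: numerically integrate the dynamic model v = Ri + L di/dt + (1/C)*integral(i dt)
# for an assumed series RLC loop with illustrative component values.
import numpy as np
from scipy.integrate import solve_ivp

R, L, C, v = 10.0, 0.1, 1e-3, 5.0     # ohms, henries, farads, volts (assumed)

def rlc(t, x):
    q, i = x                          # capacitor charge and loop current
    didt = (v - R * i - q / C) / L    # Kirchhoff voltage law rearranged for di/dt
    return [i, didt]

sol = solve_ivp(rlc, (0.0, 0.2), [0.0, 0.0], max_step=1e-4)
print(round(sol.y[1, -1], 6))         # current decays as the capacitor charges up
```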

Page 5

Linear vs. Nonlinear

Linear models follow the superposition principle: the sum of the outputs from individual inputs equals the output of the combined inputs.

Most systems are nonlinear in nature, but linear models can be used to approximate nonlinear models around a particular operating point.
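A quick numeric check of the superposition principle, using two illustrative functions (not from the slides):

```python
# Quick check of superposition: linear functions satisfy it, nonlinear ones do not.
linear = lambda x: 3.0 * x        # assumed linear example
nonlinear = lambda x: x ** 2      # assumed nonlinear example

a, b = 2.0, 5.0
print(linear(a + b) == linear(a) + linear(b))           # True
print(nonlinear(a + b) == nonlinear(a) + nonlinear(b))  # False
```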

Page 6

Linear vs. Nonlinear Models

[Figure: Linear Systems vs. Nonlinear Systems]

Page 7

Nonlinear Model: Motion of Aircraft

Page 8

Linearization using Taylor Series

Linearization is done at a particular operating point. For example, linearizing the nonlinear function sin(x) at x = 0 using the first two Taylor series terms gives

sin x ≈ sin 0 + cos 0 · (x − 0) = x
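A short sketch comparing sin(x) with its linearization x near the operating point x = 0 (sample points chosen for illustration):

```python
# Sketch: the first-order Taylor approximation sin(x) ≈ x near x = 0.
import numpy as np

x = np.linspace(-0.5, 0.5, 5)
print(np.round(np.sin(x), 4))              # nonlinear function
print(x)                                   # linearized model
print(np.round(np.abs(np.sin(x) - x), 4))  # error grows away from the operating point
```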

Page 9

Time-invariant vs. Time-variant

The model parameters do not change in time-invariant models

The model parameters change in time-variant models. Example: the mass of a rocket varies with time as the fuel is consumed.

If the system parameters change with time, the system is time varying.

Page 10

Time-invariant vs. Time-variant

Time-invariant

F(t) = ma(t) – g

where m is the mass, a is the acceleration, and g is gravity

Time-variant

F = m(t) a(t) – g

Here, the mass varies with time; therefore, the model is time-varying.
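A small sketch that follows the slide's relation F = m(t)a(t) − g with an assumed linearly decreasing mass, to show the parameter varying in time:

```python
# Sketch following the slide's relation F = m(t)*a(t) - g, with an assumed
# linearly decreasing mass m(t) = m0 - r*t (e.g., fuel burn). Values are illustrative.
import numpy as np

m0, r, g, F = 100.0, 0.5, 9.81, 2000.0
t = np.linspace(0.0, 60.0, 7)
m = m0 - r * t                 # time-varying parameter
a = (F + g) / m                # the same thrust gives growing acceleration as mass drops
print(np.round(a, 2))
```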

Page 11

Linear Time-Invariant (LTI)

LTI models are of great use in representing systems in many engineering applications.

Their appeal is their simplicity and mathematical structure.

Although most actual systems are nonlinear and time-varying:

Linear models are used to approximate the nonlinear behavior around an operating point.

Time-invariant models are used to approximate the system's time-varying behavior over short segments.

Page 12

LTI Models: Block Diagram Manipulation
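As an illustration, block-diagram algebra can be carried out numerically on transfer-function polynomials; the example blocks G(s) and H(s) below are assumed, not taken from the slides:

```python
# Sketch: block-diagram algebra on transfer-function polynomial coefficients.
# G(s) = 1/(s+2) and H(s) = 1/s are assumed example blocks.
import numpy as np

numG, denG = [1.0], [1.0, 2.0]
numH, denH = [1.0], [1.0, 0.0]

# Series connection: GH
num_gh = np.polymul(numG, numH)
den_gh = np.polymul(denG, denH)

# Unity negative feedback around GH: T = GH / (1 + GH)
num_cl = num_gh
den_cl = np.polyadd(den_gh, num_gh)
print(num_cl, den_cl)   # here: 1 / (s^2 + 2s + 1)
```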

Page 13

SISO vs. MIMO

Single-Input Single-Output (SISO) models are somewhat easy to use. Transfer functions can be used to relate input to output.

Multiple-Input Multiple-Output (MIMO) models involve combinations of inputs and outputs and are difficult to represent using transfer functions. MIMO models use state-space equations.

Page 14

System States

Transfer Functions

Concentrate on the input-output relationship only.

Relate a single input to a single output (SISO) only.

Hide the details of the inner workings.

State-Space Models

States are introduced to get better insight into the system's behavior. These states are a collection of variables that summarize the present and past of a system.

State-space models can be used for MIMO systems.

Page 15

SISO vs. MIMO Systems

[Figure: block diagram with input U(t), transfer function block, and output Y(t)]

Page 16

Continuous vs. discrete

Continuous models have continuous time as the independent variable; inputs and outputs can therefore take all possible values in a range.

Discrete models have discrete time as the independent variable; inputs and outputs therefore take on values only at specified times in a range.

Page 17

Continuous vs. discrete

Continuous Models

Differential equations

Integration

Laplace transforms

Discrete Models

Difference equations

Summation

Z-transforms
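A minimal sketch of the continuous/discrete distinction: an assumed first-order differential equation discretized (under a zero-order hold) into a difference equation, with illustrative values:

```python
# Sketch: a continuous first-order model dy/dt = -a*y + a*u discretized to a
# difference equation with sample time T (values are illustrative).
import numpy as np

a, T = 2.0, 0.1
alpha = np.exp(-a * T)                   # zero-order-hold discretization
y, u = 0.0, 1.0                          # initial output, constant (step) input
for k in range(5):
    y = alpha * y + (1.0 - alpha) * u    # difference equation y[k+1] = alpha*y[k] + (1-alpha)*u[k]
    print(round(y, 4))                   # samples approach the steady-state value 1
```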

Page 18

Deterministic vs. Stochastic

Deterministic models are uniquely described by mathematical equations. Therefore, all past, present, and future values of the outputs are known precisely

Stochastic models cannot be described mathematically with a high degree of accuracy. These models are based on the theory of probability

Page 19

Control Overview

Page 20

Closed-Loop vs. Open-Loop Control

Page 21

Example: CNC Positional Control

Page 22

Transient and Steady State Response

Page 23

Step Response: First Order System

Page 24

Step Response: Second Order System
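A short sketch computing the step responses of an assumed first-order and second-order system with scipy (the time constant, natural frequency, and damping ratio are illustrative):

```python
# Sketch: step responses of an assumed first-order and second-order system.
import numpy as np
from scipy import signal

first = signal.TransferFunction([1.0], [1.0, 1.0])                   # 1/(s+1), tau = 1 s
wn, zeta = 2.0, 0.3                                                  # underdamped second-order
second = signal.TransferFunction([wn**2], [1.0, 2*zeta*wn, wn**2])

t1, y1 = signal.step(first)
t2, y2 = signal.step(second)
print(y1[-1], y2.max())   # first order settles near 1; second order overshoots before settling at 1
```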

Page 25

Control Techniques / Strategies

Classical Control

Modern Control

Optimal Control

Robust Control

Adaptive Control

Variable Structure Control

Intelligent Control

Page 26

Classical Control

Classical control design is used for SISO systems.

Most popular concepts are:

Root Locus

Bode plots

Nyquist Stability

PID is widely used in feedback systems.

Page 27

Stability

[Figure: stable vs. unstable responses]

Page 28

Classical Control: PID

Proportional-Integral-Derivative (PID) is the most commonly used controller for SISO systems

u(t) = Kp e(t) + KI ∫ e(t) dt + KD de(t)/dt
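A minimal discrete-time sketch of the PID law above; the gains and sample time are illustrative, not tuned values from the slides:

```python
# Sketch: discrete approximation of u(t) = Kp*e + KI*integral(e) + KD*de/dt.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                  # integral term
        derivative = (error - self.prev_error) / self.dt  # derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)   # illustrative gains
print(pid.update(1.0))                        # control effort for a unit error
```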

Page 29

Example

Page 30

Example

Page 31

Pole location effects

Page 32

Root Locus Example

Page 33

Modern Control

Modern control theory utilizes the time-domain state space representation.

A mathematical model of a physical system is written as a set of input, output, and state variables related by first-order differential equations.

The variables are expressed as vectors, and the differential and algebraic equations are written in matrix form.

The state space representation provides a convenient and compact way to model and analyze systems with multiple inputs and outputs.
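A minimal state-space sketch for an assumed mass-spring-damper system, showing the matrix form dx/dt = Ax + Bu, y = Cx + Du:

```python
# Sketch: state-space model of an assumed mass-spring-damper (illustrative values).
import numpy as np
from scipy import signal

m, c, k = 1.0, 0.5, 4.0
A = np.array([[0.0, 1.0], [-k / m, -c / m]])   # states: position, velocity
B = np.array([[0.0], [1.0 / m]])               # input: applied force
C = np.array([[1.0, 0.0]])                     # output: position
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, C, D)
t, y = signal.step(sys)                        # step response of the state-space model
print(y[-1])                                   # steady state near 1/k = 0.25
```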

Page 34

Optimal Control

Optimal control is a set of differential equations describing the paths of the state and control variables that minimize a “cost function”

For example, the jet thrusts of a satellite needed to bring it to a desired trajectory while consuming the least amount of fuel.

Two optimal control design methods have been widely used in industrial applications, as it has been shown that they can guarantee closed-loop stability:

Model Predictive Control (MPC)

Linear-Quadratic-Gaussian control (LQG)
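A minimal LQR sketch (one common optimal-control formulation related to LQG) using the continuous-time algebraic Riccati equation; the double-integrator plant and weighting matrices are assumed for illustration:

```python
# Sketch: LQR gain from the continuous-time algebraic Riccati equation.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # assumed double-integrator plant
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                 # state weighting in the cost function
R = np.array([[1.0]])                    # control-effort weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal state-feedback gain, u = -K x
print(K)
```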

Page 35

Robust Control

Robust control is a branch of control theory that deals with uncertainty in its approach to controller design.

Robust control methods are designed to function properly so long as uncertain parameters or disturbances are within some set.

Page 36

Adaptive Control

Adaptive control involves modifying the control law used by a controller to cope with changes in the system's parameters.

For example, as an aircraft flies, its mass will slowly decrease as a result of fuel consumption; a control law can adapt to such changing conditions.
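A minimal sketch of gain adaptation using the MIT rule; the plant gain, reference-model gain, and adaptation rate are assumed for illustration:

```python
# Sketch: MIT-rule adaptation of a feedforward gain so the plant output tracks a reference model.
k_plant, k_model = 2.0, 1.0        # unknown plant gain, desired model gain (assumed)
theta, gamma, dt = 0.0, 0.5, 0.01  # adjustable gain, adaptation rate, time step

r = 1.0                            # constant reference input
for _ in range(2000):
    u = theta * r                  # control law with adjustable parameter theta
    y = k_plant * u                # plant output
    y_m = k_model * r              # reference-model output
    e = y - y_m                    # model-following error
    theta -= gamma * y_m * e * dt  # MIT rule: d(theta)/dt = -gamma * y_m * e

print(round(theta, 3))             # converges toward k_model / k_plant = 0.5
```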

Page 37

Variable Structure Control

Variable structure control, or VSC, is a form of discontinuous nonlinear control.

The method alters the dynamics of a nonlinear system by application of a high-frequency switching control.

The main mode of VSC operation is sliding mode control (SMC).
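A minimal sliding mode control sketch for an assumed double-integrator plant with a bounded disturbance; the gains and sliding-surface slope are illustrative:

```python
# Sketch: sliding mode control of a double integrator x_ddot = u + d,
# with sliding surface s = x_dot + lam*x and a high-frequency switching term.
import numpy as np

lam, K, dt = 2.0, 5.0, 1e-3
x, v, t = 1.0, 0.0, 0.0               # initial position, velocity, time
for _ in range(5000):
    s = v + lam * x                   # sliding surface
    u = -lam * v - K * np.sign(s)     # equivalent control + switching control
    d = 0.5 * np.sin(5.0 * t)         # bounded matched disturbance (assumed)
    v += (u + d) * dt
    x += v * dt
    t += dt
print(round(x, 3), round(v, 3))       # state driven close to the origin despite d
```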

Page 38

Intelligent Control

Intelligent Control is usually used when the mathematical model for the plant is unavailable or highly complex.

The two most commonly used intelligent controllers are:

Fuzzy Logic

Artificial Neural Networks

Page 39

References

• Advanced Control Engineering (Chapter 8: State Space Methods for Control System Design) by Roland Burns, 2001

• Modern Control Engineering (Chapter 10: State Space Design Methods) by Paraskevopoulos, 2002

• Modern Control Engineering (Chapters 9 and 10: Control System Analysis and Design in State Space) by Ogata, 5th edition, 2010

• Feedback Systems: An Introduction for Scientists and Engineers (Chapter 8: System Models) by Astrom and Murray, 2009

• Modern Control Technology: Components and Systems by Kilian, 2nd edition