
A Vision Based Control System for Autonomous Rendezvous

and Capture of a Mars Orbital Sample

Vishnu Jyothindran, David W. Miller & Alvar Saenz-Otero

August 2013 SSL # 3-13


This work is based on the unaltered text of the thesis by Vishnu Jyothindran submitted to the Department of Aeronautics and Astronautics in partial fulfillment of the requirements for the degree of S.M. in Aerospace Engineering at the Massachusetts Institute of Technology.


A Vision Based Control System for Autonomous Rendezvous

and Capture of a Mars Orbital Sample

by

Vishnu Jyothindran

Submitted to the Department of Aeronautics and Astronautics

on August 2, 2013, in partial fulfillment of the

requirements for the degree of

Master of Science

Abstract

NASA’s Mars Sample Return (MSR) mission involves many challenging operations. The current mission scenario utilizes a small Orbiting Sample (OS) satellite, launched from the surface of Mars, which will rendezvous with an Earth Return Vehicle (ERV) in Martian orbit. One of the highest-risk operations is the guidance of the OS into the capture mechanism on the ERV. Since the OS will most likely be

passive (with no attitude or propulsion control), the ERV must determine the OS's location in Martian orbit and maneuver itself to capture it. The objective of this project is to design and develop a vision-based tracking and capture system using the SPHERES test bed. The proposed Mars Orbital Sample Return (MOSR) system uses a SPHERES satellite to emulate the combined motion of the capture satellite and the OS. The key elements of the system are: (1) a modified SPHERES satellite with a white shell to match the optical properties of the OS; (2) a capture mechanism; and (3) an optical tracking system. The system uses cameras mounted on the capture mechanism to optically track the OS. Software on the capture mechanism computes the likely maneuver commands for a capture satellite, which are then translated into relative motions to be performed by a SPHERES satellite acting as the OS. The focus of this thesis is on the vision-based algorithms and techniques used to ensure accurate 3-DOF ranging of the OS. The requirements of the OS tracking system are stringent, demanding robust tracking performance in challenging illumination conditions without the use of any fiducial markers (on the OS) to serve as a point of reference. A brief literature survey of common machine vision techniques for generic target tracking (in aerospace and other fields) is presented. In the proposed OS tracking system, two different methods are used for tracking and ranging of the OS. A Hough Transform algorithm is used to ensure accurate tracking of the OS in the 'near' field within all possible illumination regimes. A luminosity-based tracking algorithm is used to track the OS in the 'far' and 'near' fields. Results from testing at MIT's Flat Floor Facility are presented to show the performance of these algorithms in an integrated Kalman Filter. Lastly, a new Model Predictive Controller design is proposed for the fuel-optimal capture of the OS. Implementation and testing of the controller on the SPHERES satellite are presented, and comparisons with the SPHERES PD control system are made to highlight its strengths.

Thesis Supervisor: David W. Miller & Alvar Saenz-Otero


Acknowledgements

This work was performed primarily under contract S5.04-9755 with the National

Aeronautics and Space Administration as part of the SPHERES MOSR RDOS

program. The work was also performed under contract AP10-314 with Aurora Flight

Sciences. The author gratefully thanks the sponsors for their generous support that

enabled this research.


Contents

1 Introduction
1.1 Motivation
1.2 Base architecture
1.3 Previous work on vision based rendezvous navigation in space
1.4 The Hough Transform
1.4.1 Hough Transform Theory
1.4.2 Applications of Hough Transform
1.5 Astronomical photometry
1.5.1 Astronomical Photometry theory
1.5.2 Applications of Photometry
1.6 Model Predictive Control (MPC)
1.6.1 Model Predictive Control Theory
1.6.2 Applications of Model Predictive Control
1.7 Outline of thesis
2 The MOSR RDOS System
2.1 Introduction
2.2 Existing capabilities
2.2.1 OS Capture Mechanism
2.2.2 Synchronized Position Hold Engage & Reorient Experimental Satellites (SPHERES) as OS
2.2.3 OS Shell
2.2.4 Camera system
2.3 Software Architecture
3 Vision Algorithms and Kalman Estimator
3.1 Introduction
3.2 Hough Transform
3.2.1 Brief Review
3.2.2 The role of Canny edge detection
3.2.3 The Hough Transform applied to MOSR
3.2.4 Absolute Position estimation using the Hough Transform
3.2.5 Conclusion
3.3 Astronomical Photometry
3.3.1 Review
3.3.2 Astronomical photometry for MOSR
3.3.3 Conclusion
3.4 Kalman Estimator
3.4.1 Review
3.4.2 The Use of the Kalman Filter in MOSR
3.5 Kalman Filter Results
3.6 Conclusion
4 Model Predictive Control
4.1 Overview
4.2 Definition of the Control Strategy
4.3 Controller constraints
4.3.1 Field of view constraint
4.3.2 Limited control authority
4.3.3 Terminal constraints
4.3.4 Attitude constraints
4.4 MPC Software Architecture
4.5 Definition of the Reference frames
4.6 MPC Results
4.6.1 Offline MPC time delay calculation
4.6.2 MPC results in the SPHERES simulator
4.6.3 MPC results using Global Metrology on SPHERES hardware
4.6.4 PD and MPC Control comparison results using Global Metrology on SPHERES Hardware
4.7 Conclusion
5 Conclusions
5.1 Summary of Thesis
5.2 List of contributions
5.3 Future work
References
6 Appendix A
6.1 MPC Simulation results using PD and MPC Control
6.2 PD control test on SPHERES hardware
6.3 MPC Test on SPHERES hardware

List of Figures

Figure 1: Artist's rendition of Mars sample launching from MSR lander; (right) MSR Orbiter performing OS target search and acquisition in Mars orbit
Figure 2: DARPA's Orbital Express showing both spacecraft ASTRO and NEXTSAT[4]
Figure 3: Artist's rendition of the Hayabusa spacecraft landing on the Itokawa asteroid
Figure 4: Polar transform as used by Line Hough transform
Figure 5: Circular Hough Transform
Figure 6: Circular fiducial markers on the ISS
Figure 7: Graphical representation of MPC[18]
Figure 8: SPHERES-MOSR test bed performing OS contact dynamics experiments on reduced gravity flight
Figure 9: Boresight view of SPHERES-MOSR testbed
Figure 10: SPHERES onboard the International Space Station[26]
Figure 11: Communications architecture for SPHERES MOSR system
Figure 12: OS Shell disassembled
Figure 13: OS Shell (with SPHERE for comparison)
Figure 14: OS Shell showing both light and dark configurations
Figure 15: OS on an air bearing support structure
Figure 16: uEye LE camera (lens not shown)
Figure 17: Overall Software Control architecture
Figure 18: Edge Detection using the Canny edge Detection algorithm
Figure 19: Raw Image of OS shell on the flat floor
Figure 20: Thresholded OS image
Figure 21: OS image after Canny Edge detection
Figure 22: Completed Hough Transform OS image
Figure 23: Partially occluded OS image processed with the Hough Transform
Figure 24: Camera reference frame perpendicular to image plane
Figure 25: Camera Reference frame along image plane
Figure 26: Time History plot of Hough Transform for a stationary OS in the 'near' field
Figure 27: Time History plot of Hough Transform for a stationary OS in the 'far' field
Figure 28: Relationship between OS size vs. depth
Figure 29: Number of Pixels on target scales with range
Figure 30: Linear fit plot for astronomical photometry
Figure 31: Inverse Quartic relationship of astronomical photometry
Figure 32: Comparison of Reference frames (Global Metrology and Camera)
Figure 33: Vision Estimator Convergence vs. Raw measurement
Figure 34: Position Convergence Comparison in the GM Reference frame
Figure 35: Velocity Convergence Comparison in the GM Reference frame
Figure 36: MPC Constraints
Figure 37: Online Communication Framework with MPC
Figure 38: Camera and Global Metrology reference frames
Figure 39: Time Delay computation for MPC Computation
Figure 40: X and Y Position Errors for MPC engine
Figure 41: Velocity vs. Time using the SPHERES testbed
Figure 42: Control accelerations vs. Time using the SPHERES testbed
Figure 43: Cumulative Δv requirement in MPC vs. PD on SPHERES testbed
Figure 44: Velocity profile comparison for MPC and PD controller on SPHERES testbed
Figure 45: XY Trajectory comparisons of PD vs. MPC
Figure 46: SPHERES Simulator PD MPC Comparison – Position Error vs. Time
Figure 47: SPHERES Simulator PD MPC Comparison – Control Accelerations vs. Time
Figure 48: SPHERES Simulator PD MPC Comparison – Δv requirements vs. Time
Figure 49: SPHERES Simulator PD MPC Comparison – Control accelerations vs. Time
Figure 50: SPHERES hardware PD Comparison – Control accelerations vs. Time
Figure 51: SPHERES hardware PD Comparison – Cumulative Δv vs. Time
Figure 52: SPHERES hardware PD Comparison – X and Y Position vs. Time
Figure 53: SPHERES hardware PD Comparison – Velocities vs. Time
Figure 54: SPHERES hardware PD Comparison – XY Trajectory
Figure 55: MPC Trajectory using SPHERES Hardware
Figure 56: Cumulative Δv requirement for MPC using SPHERES hardware
Figure 57: XY Trajectory for MPC using SPHERES hardware


Chapter 1

1 Introduction

1.1 Motivation

Ever since the beginning of planetary exploration on Mars, there has been significant

interest in a Sample Return Mission. A Mars Sample Return Mission (MSR) would

be a spaceflight mission to collect rock and dust samples from the Martian surface

and return them to Earth. It is widely considered to be a very powerful form of

exploration since the analysis of geological samples is free from the typical spacecraft

constraints such as time, budget, mass and volume. According to R.B Hargraves[1],

it is considered to be the "holy grail" of robotic planetary missions because of its high

scientific return.

However, due to its high system complexity (and high cost), no MSR-type

mission has progressed past the planning phase. The goal of this thesis is to

demonstrate a control system for the rendezvous and capture of a passive Martian

sample in Martian orbit. This is considered to be a high-risk maneuver since

rendezvous and capture of a completely passive object has not been attempted

beyond Earth orbit.

1.2 Base architecture

For the purposes of this thesis, the base sample return architecture used is that of

Mattingly et al.[2]. This architecture consists of a small passive Orbiting Sample

satellite (OS) and a chaser satellite. The OS is launched from the Martian surface and

contains Martian geological samples recovered by a caching rover. The OS is

completely passive during the entire rendezvous and capture process and it is

possibly fitted with a radio beacon for long-range tracking. The chaser satellite uses a

visual band camera system for final rendezvous and capture and ‘the need for risk

mitigation in this system is critical’[3]. This is the primary focus of this thesis.


Figure 1 shows an artist’s rendition of this architecture.

Figure 1: Artist’s rendition of Mars sample launching from MSR lander; (right)

MSR Orbiter performing OS target search and acquisition in Mars orbit

1.3 Previous work on vision based rendezvous navigation in

space

Vision based navigation for rendezvous and capture has seen limited use on

spacecraft in the past. The first major use of machine vision for rendezvous was

DARPA's Orbital Express[4]. The project aimed to demonstrate several satellite

servicing operations and technologies including rendezvous, proximity operations,

station keeping, capture and docking. The Advanced Video Guidance sensor was

used for docking between two spacecraft. This sensor consisted of a laser diode to

illuminate a reflective visual target that was processed by vision algorithms on board

the spacecraft. A relative state estimate consisting of relative spacecraft attitudes

and positions was computed by the on-board vision system. Figure 2 shows the two

spacecraft used in the Orbital Express mission.


Figure 2: DARPA's Orbital Express showing both spacecraft ASTRO and

NEXTSAT[4]

One of the more recent missions to exhibit vision based spacecraft navigation was

the Hayabusa mission by the Japanese Aerospace Exploration Agency (JAXA), which

performed a sample return mission of the Itokawa asteroid[5]. The Hayabusa

spacecraft attempted multiple vision based landings on the asteroid to gather

samples on the surface. However, due to the unknown local terrain geometry and the

visual appearance of the asteroid, many of the landings were not ideal[5]. The

spacecraft navigated to the landing sites by tracking features on the surface of the

asteroid. Figure 3 shows an artist’s rendition of the Hayabusa satellite attempting to

land on the surface of the asteroid.


Figure 3: Artist’s rendition of the Hayabusa spacecraft landing on the Itokawa

asteroid

A small yet significant application of optical navigation was done by the Mars

Reconnaissance Orbiter (MRO). From 30 days to 2 days prior to Mars Orbit

Insertion, the spacecraft collected a series of images of Mars' moons Phobos and

Deimos. By comparing the observed position of the moons to their predicted

positions relative to the background stars, the mission team was able to accurately

determine the position of the orbiter in relation to Mars. While not needed by Mars

Reconnaissance Orbiter to navigate to Mars, the data from this experiment

demonstrated that the technique could be used by future spacecraft to ensure their

accurate arrival. Adler et al. have also proposed using such a system for an MSR

mission[6].

The common feature among the missions mentioned above was the

tracking of fiducial markers (Orbital Express) or the use of known features

(Hayabusa and MRO). Both of these navigation techniques track individual known

reference points and use this to update state estimates. However, the use of generic

shape detection algorithms, where the only assumption made is that of the target’s

outer shape, has not been demonstrated on spacecraft. This project attempts to show vision


based rendezvous and docking using such generic vision techniques. Two such

techniques used are: (1) The circular Hough Transform and (2) astronomical

photometry. This also allows the use of a single camera instead of using a

stereoscopic (two or more cameras) system.

1.4 The Hough Transform

1.4.1 Hough Transform Theory

The Hough transform is a feature extraction technique used in image analysis and

machine vision. The purpose of the technique is to find imperfect instances of objects

within a certain class of shapes by a voting procedure.

The modern form of the Hough transform was introduced by Richard Duda and

Peter Hart[7]. The algorithm is named after Paul Hough[8], who filed a patent for

an earlier form of the technique.

The simplest case of Hough transform is the linear transform for detecting

straight lines. In the image space, a line can be described as y = mx + b where the

parameter m is the slope of the line, and b is the y-intercept. In the Hough

transform, however, this can be simplified further using polar coordinates, as shown

in Figure 4 and Equation 1 and proposed by Duda[7].

Figure 4: Polar transform as used by Line Hough transform

$r = x\cos\theta + y\sin\theta$ (1)


Extending this to discretized points (xi, yi) on an image, it is easy to see that

each coordinate pair now transforms into a sinusoidal curve in the θ – r plane as

defined by Equation 2:

$r(\theta) = x_i\cos\theta + y_i\sin\theta$ (2)

One can also see that for a collinear set of points (xi, yi) and (xj, yj), the curves

corresponding to these points intersect at a unique point (θo, ro). Therefore, the

problem of finding collinear points is converted to finding concurrent curves. In practice, this is done by discretizing the θ – r plane into an accumulator array and searching for the cells that collect the most votes.

This concept can also be extended to the circle as proposed by Kierkegaard et al.

[9]. Here the parametric representation of the equation of a circle can be written in

the form as given in Equation 3.

$(x - a)^2 + (y - b)^2 = c^2$ (3)

For any arbitrary point (xi, yi), this can be transformed into a surface in three

dimensional space with the dimensions being (a,b,c) as defined in Equation 4:

$(x_i - a)^2 + (y_i - b)^2 = c^2$ (4)

Each sample point (xi, yi) is now converted into a right circular cone in the

(a,b,c) parameter space. If multiple cones intersect at one point (ao,bo,co), then these

sample points now exist on a circle defined by the parameters ao,bo, and co.

The above process is repeated over all the sample points in an image and votes are

gathered for each set of parameters (ao,bo,co). This method is extremely

computationally intensive and has a memory requirement on the order of O(m n c)

where mn is the total number of sample points (i.e., image size) and c is the number

of possible radii. This memory requirement is needed for the accumulator array

where the columns of the accumulator array are the possible radii in the image and

each row is the unique set of (a,b) points for the coordinate of the centers.


However, the computational burden of this method can be significantly lower if the

radius c is known a priori. This reduces the 2-D accumulator array into a single

column vector for a specific co or range of radii (ci – cj). Thus, if the radius is known

a priori, the algorithm's memory requirement reduces to O(mn), since fewer

cells in the accumulator array have to be updated for each sample point. Figure 5

shows a purely graphical representation of the circular Hough transform.

Figure 5: Circular Hough Transform

The process described above only applies to certain simple shapes like lines and

curvilinear shapes like circles and ellipses. However, this can be extended to detect

shapes in any orientation as proposed by Ballard et al.[10]. This key advancement in

the theory shows its possible use in detecting generic objects in a space environment.
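To make the voting procedure concrete, the following sketch accumulates votes for circle centers when the radius is known a priori, the reduced O(mn) memory case described above. This is a minimal illustration written for this text, not the MOSR implementation; the function and parameter names are placeholders.

```python
import numpy as np

def hough_circle_centers(edge_points, radius, img_shape, n_angles=90):
    """Vote for circle centers (a, b) given edge points and a known radius.

    edge_points: iterable of (x, y) pixel coordinates from an edge detector.
    radius:      assumed circle radius in pixels (known a priori).
    Returns the accumulator array and the highest-voted center.
    """
    acc = np.zeros(img_shape, dtype=np.uint32)          # one cell per candidate center
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)

    for (x, y) in edge_points:
        # Each edge point votes for every center lying on a circle of the given
        # radius around it (the cone of Equation 4, sliced at c = radius).
        a = np.round(x - radius * np.cos(thetas)).astype(int)
        b = np.round(y - radius * np.sin(thetas)).astype(int)
        valid = (a >= 0) & (a < img_shape[1]) & (b >= 0) & (b < img_shape[0])
        acc[b[valid], a[valid]] += 1

    b0, a0 = np.unravel_index(np.argmax(acc), acc.shape)
    return acc, (a0, b0)
```

With the radius supplied by a coarse ranging source, the peak of the accumulator directly yields the circle center in pixel coordinates.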

1.4.2 Applications of Hough Transform

Uses of the Hough Transform are spread over a variety of fields, but its use in

aerospace applications is very limited.

One of the most widespread uses of the Hough transform is in the medical industry.

Zana et al.[11] proposed using the Hough transform to detect blood vessels and

lesions in retinal scans. This method is used on multiple retinal scans at different

orientations and common blood vessels and lesions are marked in each so that

physicians can follow the progress of a certain disease or general retinal health over

time. This also improves the identification of some lesions and allows easy

comparison of images gathered from different sources.

Another use of the Hough transform is in traffic monitoring. As suggested by Kamat

et al.[12], the Hough transform is used for the general problem of detection of vehicle


license plates from road scenes for the purpose of vehicle tracking. It describes an

algorithm for detecting a license plate from a road scene acquired by a CCD camera

using image processing techniques such as the Hough transform for line detection

(the shape of the license plates is defined by lines). Here the Hough transform is

more memory efficient since the dimensions of the license plates are known.

Lastly, the use of the Hough transform is also fairly common in the geological

industry for detecting features in terrain. As proposed by Karnieli et al.[13], the

Hough transform is used for detecting geological lineaments in satellite images and

scanned aerial photographs.

As previously mentioned, the use of the Hough transform is severely limited in the

aerospace industry. Only one application is known at this time. Casonato et al.[14]

proposed an automatic trajectory monitoring system designed for the rendezvous

between the Automated Transfer Vehicle (ATV) and the International Space Station

(ISS). During the final approach phase, a TV camera on the ISS currently provides

images of ATV visual targets to be used by ISS crew for visual monitoring. The

proposed monitoring system, based on the Hough transform is applied to these TV

images of the approach, and is intended to autonomously and automatically

determine relative ATV-ISS position and attitude. Artificial intelligence techniques

for edge detection, Hough transform and pattern matching are used to implement a

recognition algorithm able to quickly and accurately indicate the position and

orientation of the ATV visual targets with respect to the ISS. Those values are then processed in

order to calculate the full set of relative ATV navigation data. However, in this case,

the Hough transform is used only in a supervisory role and not as part of a vision

based control system. It is also important to note that the circular Hough transform

is used to detect fiducial markers (in the shape of circles) that already exist on the

outer facade of the ISS (shown in Figure 6).


Figure 6: Circular fiducial markers on the ISS

1.5 Astronomical photometry

1.5.1 Astronomical Photometry theory

Photometry is an astronomical technique concerned with measuring the flux, or

intensity of an astronomical object's electromagnetic radiation. In this case, the

spectrum of electromagnetic radiation is limited to the visible light spectrum.

When using a CCD camera to conduct photometry, there are a number of

possible ways to extract a photometric measurement from the raw CCD image. The

observed signal from an object will typically be smeared (convolved) over many

pixels. When obtaining photometry for a point source the goal is to add up all the

light from the point source and subtract the light due to the background. This means

that the apparent brightness or luminosity (F) of a point source in an image is a

simple sum of the pixel brightness B(xi,yi) of all the pixels across the point source (with

dimensions m, n pixels in each direction) as shown in Equation 5.


$F = \sum_{i=1}^{m}\sum_{j=1}^{n} B(x_i, y_j)$ (5)

The pixel brightness B(xi,yi) is the CCD value for that pixel in a grayscale

(monochrome) camera and is the average across all three red, green, blue channels

for a color camera as shown in Equation 6 and Equation 7.

$B(x_i, y_i) = I(x_i, y_i)$ (6)

$B(x_i, y_i) = \dfrac{I_R(x_i, y_i) + I_G(x_i, y_i) + I_B(x_i, y_i)}{3}$ (7)

The flux density or apparent brightness of a point source can also be given by the

flux density relationship in Equation 8:

$F = \dfrac{L}{4\pi r^2}$ (8)

Here, L is the true luminosity of the source and r is the distance away from the

source.

In this application, the target is not a point source. For Equation 8 to apply, each

pixel is regarded as a point source. Therefore, the apparent luminosity Ftarget is simply

the sum of the apparent luminosity over all points on the target as shown in

Equation 9.

$F_{target} = \sum_{i \in \text{target}} F_i$ (9)

It is interesting to note that one can calculate the distance away from the target

using this method as shown in Equation 10.


$r = \sqrt{\dfrac{L}{4\pi F_{target}}}$ (10)

However, the true luminosity L needs to be known first. In this application, the

true luminosity is pre-calculated from a previous state estimate given to the

algorithm from another ranging algorithm (such as the radio beacon described in

Section 1.2). This algorithm also shows an increased sensitivity to depth

measurements, since the flux density (or apparent brightness) of the target scales

with the inverse square of the distance r to the target.
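As an illustrative sketch of Equations 5–10 (not the thesis or flight code), the snippet below sums the background-subtracted pixel brightness over the target, calibrates the true luminosity L at a known reference range, and then inverts the inverse-square relationship to estimate range; the function names and the background-subtraction step are assumptions.

```python
import numpy as np

def apparent_luminosity(patch, background_level=0.0):
    """Equation 5: sum of background-subtracted pixel brightness over the target."""
    return float(np.sum(patch.astype(np.float64) - background_level))

def calibrate_true_luminosity(reference_patch, reference_range_m, background_level=0.0):
    """Invert Equation 8 at a known reference range to estimate the true luminosity L."""
    F_ref = apparent_luminosity(reference_patch, background_level)
    return F_ref * 4.0 * np.pi * reference_range_m ** 2

def range_from_photometry(patch, L, background_level=0.0):
    """Equation 10: range estimate from the measured apparent luminosity."""
    F = apparent_luminosity(patch, background_level)
    return float(np.sqrt(L / (4.0 * np.pi * F)))
```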

1.5.2 Applications of Photometry

As mentioned before, photometry is limited to the field of astronomy. The

preeminent source on the subject is Astronomical Photometry: A guide by Christiaan

Sterken and Jean Manfroid[15].

The first major application to ranging was by Shapley et al.[16], who used

astronomical photometry to measure the distance to star clusters. This was done by

measuring the apparent brightness of Cepheid stars whose true luminosities are

known. The use of photometry in a spacecraft application is limited to one known

instance: Reidel et al.[17] proposed using such a ranging technique for the Mars CNES Premier Orbiter

mission but this has not been tested in hardware.

1.6 Model Predictive Control (MPC)

An equally important role is played by the control algorithms in this system.

Sections 1.4 to 1.5 presented the vision algorithms used while Section 1.6 will

describe the control system and its applications.

1.6.1 Model Predictive Control Theory

Model predictive controllers rely on dynamic models of the process, most often

linear empirical models obtained by system identification. The main advantage of

MPC is that it allows the current timeslot to be optimized, while keeping future

timeslots into account. This is achieved by optimizing over a finite time horizon, but only


implementing the current timeslot. MPC has the ability to anticipate future events

and can take control actions accordingly. PID controllers do not have this predictive

ability.

MPC is based on an iterative, finite horizon optimization of a plant model. At

time t the current plant state is sampled and a cost minimizing control strategy is

computed (via a numerical minimization algorithm) for a relatively short time

horizon in the future, [t, t + T]. An online or on-the-fly calculation is used to explore

state trajectories that emanate from the current state and find a cost-minimizing

control strategy until time t+T. Only the first step of the control strategy is

implemented; the plant state is then sampled again and the calculations are repeated

starting from the now current state, yielding a new control and new predicted state

path. The prediction horizon keeps being shifted forward. Figure 7 shows a graphical

representation of MPC and the receding horizon control strategy.

Figure 7: Graphical representation of MPC[18]

Receding horizon strategy: only the first one of the computed moves u(t) is

implemented.

MPC also maintains a history of past control actions. This history is used in a

cost optimization to optimize a certain parameter of the control system. In this

application, the cost function relates directly to thruster fuel efficiency. For details

about this system, the reader is advised to refer to [19].
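The receding-horizon idea can be illustrated with a small sketch. The fragment below uses an unconstrained finite-horizon LQR solve (backward Riccati recursion) over a double-integrator model as a stand-in for the constrained optimizer developed in Chapter 4; the dynamics, weights, and horizon length are illustrative assumptions, and the field-of-view, thrust, and terminal constraints of Section 4.3 are omitted.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, N):
    """Backward Riccati recursion: gains K_0..K_{N-1} for an N-step horizon."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]                     # ordered from step 0 to step N-1

def mpc_step(x, A, B, Q, R, N):
    """Receding horizon: optimize over N steps but apply only the first move u(t)."""
    K0 = finite_horizon_lqr(A, B, Q, R, N)[0]
    return -K0 @ x

# Planar double-integrator "chaser" at a 1 Hz update rate (illustrative values only)
dt = 1.0
A = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])
Q = np.diag([10.0, 10.0, 1.0, 1.0])        # penalize position/velocity error
R = 100.0 * np.eye(2)                      # penalize control effort (fuel proxy)

x = np.array([1.0, -0.5, 0.0, 0.0])        # initial relative state
for _ in range(20):
    u = mpc_step(x, A, B, Q, R, N=10)      # plan 10 steps, apply only the first
    x = A @ x + B @ u                      # plant propagates; loop repeats from the new state
```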


1.6.2 Applications of Model Predictive Control

As briefly mentioned in Section 1.6, MPC is commonly used in the chemical and oil

industries.

Though the ideas of receding horizon control and model predictive control can be

traced back to the 1960s[20], interest in this field started to develop in the 1980s

after dynamic matrix control (DMC) [21]. DMC was conceived to tackle the

multivariable constrained control problems typical for the oil and chemical industries

and it had a tremendous impact in those industries.

In spacecraft applications, MPC has been proposed and theorized before.

However, no hardware applications are known at this point. Manikonda et al.[22]

proposed an application of a model predictive control-based approach to the design

of a controller for formation keeping and formation attitude control for the NASA

DS3 mission[23]. Richards et al.[24] presented an MPC controller for a spacecraft

rendezvous approach to a radial separation from a target. Both Richards and

Manikonda used MPC to reduce thruster fuel or power usage. Lavaei et al.[25] further

extended the use of MPC to multiple cooperative spacecraft with communication

constraints.

1.7 Outline of thesis

The work presented in this thesis is part of an overall research initiative to

develop computer vision based rendezvous and docking algorithms for an MSR-type

mission. This thesis presents the three steps in this program: the development of a

research testbed to be tested onboard the ISS, the verification of this research

testbed through the implementation of vision tracking algorithms (based on the

Hough transform and astronomical photometry), and the preliminary analysis of an

MPC based controller for rendezvous and capture.

Chapter 2 presents the design and development of a testbed for computer vision

based navigation onboard the ISS. This testbed utilizes the Synchronized Position

Hold Engage and Reorient Experimental Satellites (SPHERES) along with new

hardware such as cameras and lights. This hardware was developed in a partnership

with Aurora Flight Sciences as part of the Mars Orbital Sample Return Rendezvous

and Docking of Orbital Sample (MOSR) program.


Chapter 3 discusses the performance and analysis of the machine vision

algorithms outlined in Section 1.4 and Section 1.5. Experimental results using these

algorithms are shown along with suggested improvements and integration into a

Kalman filter. An approach to a linearly coupled Kalman filter is proposed and

experimental comparisons with the SPHERES Global Metrology system (used as

ground truth) are presented.

Chapter 4 presents the Model Predictive Controller. Simulation and experimental

results are shown to compare its performance with classic PID controllers.

Chapter 5 summarizes the conclusions and contributions of this thesis and

discusses future work.


Chapter 2

2 The MOSR RDOS System

2.1 Introduction

The current MSR mission scenario utilizes a small Orbiting Sample (OS) satellite,

launched from the surface of Mars, which will rendezvous with a chaser spacecraft.

The guidance of the OS into the capture mechanism on the chaser satellite is critical.

Since the OS will most likely be passive—possibly outfitted with a radio beacon for

long-distance detection, but with no means of active propulsion or attitude control—

the chaser spacecraft must determine the location of the OS in Martian orbit, and

maneuver itself to capture it. As proposed earlier in Chapter 1, the chaser spacecraft

will rely on optical tracking using a single visual-band camera to perform the final

rendezvous and capture operation.

Using the Synchronized Position Hold Engage and Reorient Experimental Satellites

(SPHERES), created at the Massachusetts Institute of Technology, the

existing MOSR testbed is augmented to incorporate optical tracking and control.

This augmentation will facilitate the repeated trial and testing of new control and

vision based algorithms to be used on a final launch MSR mission. The goal of the

MOSR testbed is to provide a research platform where such algorithms and controls

can be tested in a relatively ‘risk-free’ environment onboard the International Space

Station.

This thesis focuses on the Phase II work of the MOSR project. The Phase II effort

focuses on the adaptation of the SPHERES MOSR test bed to address the "last few

meters" GN&C problem of Mars Sample Return Orbiting Sample capture -

integrating and demonstrating the tracking and relative maneuver capabilities with

respect to the passive OS. There are two main technical objectives for the Phase 2

effort: (1) provide visual tracking for the MOSR test bed by integrating the required

cameras and lighting and (2) emulate the combined chaser/OS dynamics using a

SPHERES satellite by integrating the relative motion controls into the MOSR

baseplate and SPHERES software.


2.2 Existing capabilities

2.2.1 OS Capture Mechanism

The existing SPHERES MOSR test bed currently has the capability to analyze

the contact dynamics between the OS and the OS capture mechanism, using a

SPHERES satellite as a surrogate OS and incorporating an instrumented baseplate

to measure the contact loads between the OS and the capture mechanism. The

MOSR test bed is also equipped with boresight and side-view cameras to provide

video data to correlate the contact dynamics data.

Figure 8 shows the capture mechanism being tested onboard a reduced gravity

flight.

Figure 9 shows the boresight camera view which is the primary capture view used

by the vision system.

Figure 8: SPHERES-MOSR test bed performing OS contact dynamics

experiments on reduced gravity flight


Figure 9: Boresight view of SPHERES-MOSR testbed

2.2.2 Synchronized Position Hold Engage & Reorient Experimental

Satellites (SPHERES) as OS

The SPHERES test bed is a facility for the development of multi-satellite

GN&C algorithms. Currently, there are three SPHERES used in 3DOF ground

testing and three SPHERES on the ISS for full 6DOF microgravity testing, shown in

Figure 10. This test bed provides a unique opportunity for researchers to test

algorithms, analyze data, then refine and uplink new algorithms in a relatively short

period of time. This iterative process allows algorithms to be matured in a low-cost,

low-risk environment.

Each satellite is equipped with a processor, two communication transceivers,

battery power, 12 cold-gas thrusters, and 24 ultrasound receivers. These ultrasound

receivers are used in a time-of-flight metrology system analogous to GPS. Using

ultrasound beacons placed in a designated volume around the SPHERES, the

satellites individually measure their respective positions and attitudes with an

accuracy of a few millimeters and 1-2 degrees, respectively (Miller et al.[26]).


Figure 10: SPHERES onboard the International Space Station[26]
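For intuition, the sketch below shows one way beacon ranges of this kind can be converted into a position fix by linearized least-squares multilateration. It is purely illustrative and is not the SPHERES metrology estimator, which is more elaborate and also resolves attitude.

```python
import numpy as np

def position_from_ranges(beacons, ranges):
    """Estimate a 3-D position from time-of-flight ranges to known beacons.

    Linearizes the range equations by differencing against the first beacon,
    a common closed-form multilateration step (requires at least four beacons).
    beacons: (N, 3) array of beacon positions; ranges: (N,) measured distances,
    e.g. time-of-flight multiplied by the speed of sound.
    """
    b0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - b0)
    rhs = (r0**2 - ranges[1:]**2
           + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos
```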

One of these SPHERES will be modified to serve as a surrogate OS. While the

actual OS will not have the same capabilities as SPHERES, many of the SPHERES

subsystems will aid the development, testing and verification of the SPHERES

MOSR test bed. First, the propulsion system will enable the surrogate OS to

maneuver and perform motions equivalent to the chaser being actuated. This

reversed-roles maneuvering avoids the need to develop a larger chaser satellite that

would have to maneuver with the capture cone. Second, the communication system

will allow the OS to receive the control commands necessary to execute the reversed-

roles maneuvering. Third, the ultrasound metrology system provides the OS with the

ability to determine its position and attitude. These independent measurements will

provide a means to validate the visual tracking algorithm’s accuracy. Figure 11

shows a brief communications overview in this reversed-roles model.


Figure 11: Communications architecture for SPHERES MOSR system

2.2.3 OS Shell

As shown in Figure 10, the SPHERES satellites are of different colors. In order to

make the SPHERES MOSR system not dependent on a single satellite, a removable

OS shell prototype was built. The prototype (shown disassembled in Figure 12) is

fabricated from black ABS-M30 plastic using Fused Deposition Modeling (FDM)

rapid prototyping technology. It consists of six parts – two hemispheres that encase

the satellite, connected via four size 4-40 button head cap screws to two disks, one on

each side, that lock the assembly in place. The hemispheres each contain four

extrusions on their interior surfaces that fit snugly over the satellite, preventing the

shell from moving relative to it. Two additional disks rotate into the two

hemispheres with a quarter-lock mechanism to provide access to the satellite’s

battery compartments. Figure 13 shows the assembled shell next to a SPHERES

satellite to illustrate the mounting orientation.



Figure 12: OS Shell disassembled

Figure 13: OS Shell (with SPHERE for comparison)

The color scheme selected for the shell involves a quarter of the sphere being

painted flat black and the rest of the sphere painted flat white, as shown in Figure

14. This allows for testing of the OS in a variety of lighting conditions that simulate

those of on-orbit operations, where the majority of the OS will be illuminated by the

capture spacecraft during rendezvous. Lighting of the OS will range from fully

illuminated (only the white section being visible) to the worst case lighting condition

where the black section of the OS will cover half of the OS surface oriented

towards the tracking cameras so only half of the OS is visible, as seen in Figure 15.


Figure 14: OS Shell showing both light and dark configurations

During flat floor testing, the orbiting sample is mounted on a SPHERES air

bearing support structure that is suspended on a cushion of CO2 gas, allowing it to

move around the polished flat floor surface with minimal friction, thus simulating

motion in a 2-D space environment. Figure 15 shows the OS on the MIT flat floor

facility.

Figure 15: OS on an air bearing support structure


2.2.4 Camera system

The visual tracking of the OS utilizes at least two cameras based on the already

NASA-approved VERTIGO cameras, uEye UI-1225LE-M-HQ. These gray-scale

cameras have a resolution of 752x480 pixels with 8 bits of depth over a USB 2.0

interface. One camera is mounted inside the baseplate looking along the boresight of

the capture cone. This enables video from the final approach and capture of the OS.

The second camera is mounted offset from the capture cone and provides a wider

field of view. This enables visual tracking of the OS tens of meters away from the

capture system and provides the mechanism for the rendezvous and approach of the

chaser satellite with the OS, as emulated by the movements of the SPHERES

module. Table 1 presents the camera’s specifications. Figure 16 shows the camera

with the lens detached. The focal length of the lens used is 5 mm.

Table 1: Specifications of uEye LE Camera

Sensor: 1/2" CMOS with Global Shutter
Camera Resolution: 752 x 480 pixels
Pixel Size: 6.0 µm, square
Lens Mount: CS-Mount
Frame Rate: 87 FPS (max), 10 FPS (typical)
Exposure: 80 µs - 5.5 s
Power Consumption: 0.65 W each
Size: 3.6 cm x 3.6 cm x 2.0 cm
Mass: 12 g

Figure 16: uEye LE camera (lens not shown)
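For reference, the horizontal field of view implied by these specifications can be estimated with a simple pinhole model; this is a back-of-the-envelope sketch under the stated pixel size and focal length, not a quoted specification.

```python
import math

pixel_size_m = 6.0e-6     # 6.0 micron square pixels (Table 1)
h_x = 752                 # horizontal resolution in pixels
focal_length_m = 5.0e-3   # 5 mm lens

sensor_width_m = pixel_size_m * h_x                                           # ~4.5 mm
fov_x_deg = 2.0 * math.degrees(math.atan(sensor_width_m / (2.0 * focal_length_m)))
print(f"Approximate horizontal field of view: {fov_x_deg:.1f} degrees")       # ~48.6 degrees
```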


2.3 Software Architecture

The overall software architecture is shown in Figure 17. The chaser's visual

tracking algorithm provides the relative positions and velocities of the chaser and the

OS. The chaser then, using the controller and model described in Section 1.6 and

Chapter 4, computes its required forces and torques to rendezvous and capture the

OS. For the purposes of the MOSR Chaser/OS emulation, the chaser's baseplate

computer then converts these into the forces required to move the SPHERES

satellite, representing the OS, in order to emulate the combined motions. The

baseplate then transmits these to the SPHERES satellite, whose internal controller

then actuates its respective thrusters to execute the motion. This architecture

expects to transmit the x, y, and z forces (or accelerations) and relative location

components from the chaser's baseplate to the OS at a rate greater than or equal to

1 Hz.

Figure 17: Overall Software Control architecture
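A minimal sketch of this command relay is given below, purely to illustrate the data flow; the message fields, function names, sign convention of the reversed-roles mapping, and the fixed 1 Hz loop are illustrative assumptions rather than the actual SPHERES MOSR interface.

```python
import time
from dataclasses import dataclass

@dataclass
class RelativeMotionCommand:
    """Force (or acceleration) and relative-position components sent each cycle."""
    fx: float
    fy: float
    fz: float
    rel_x: float
    rel_y: float
    rel_z: float

def relay_loop(vision_estimator, controller, radio, rate_hz=1.0):
    """Baseplate-side loop: estimate, control, then command the surrogate OS."""
    period = 1.0 / rate_hz
    while True:
        rel_pos, rel_vel = vision_estimator.relative_state()   # from the visual tracking algorithm
        f_chaser = controller.compute_force(rel_pos, rel_vel)  # chaser force command
        # Reversed roles: the SPHERES satellite acting as the OS performs the
        # relative motion the chaser would have produced (the sign/mass scaling
        # here is a placeholder, not the documented convention).
        cmd = RelativeMotionCommand(-f_chaser[0], -f_chaser[1], -f_chaser[2],
                                    rel_pos[0], rel_pos[1], rel_pos[2])
        radio.send(cmd)                                         # >= 1 Hz per the architecture
        time.sleep(period)
```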


Chapter 3

3 Vision Algorithms and Kalman Estimator

3.1 Introduction

Chapter 2 has described the overall SPHERES MOSR system. This chapter will

describe the algorithms used in developing the software on this testbed. First, the

reader is introduced to the Hough Transform and how it integrates into the MOSR

system. Equations describing the key relationships are presented along with raw data

from the algorithm. The process is repeated for the astronomical

photometry algorithm. Key results and failure modes of both algorithms are discussed,

revealing the need for a Kalman estimator. Finally, the

application of a Kalman filter is shown and the results are presented. Comparisons

are made with the SPHERES global metrology system to gauge the accuracy of a

single camera vision system.

3.2 Hough Transform

3.2.1 Brief Review

Revisiting the Hough Transform shown in Section 1.4.1, the parametric

representation of the equation of a circle can be written in the form as given in

Equation 11.

$(x - a)^2 + (y - b)^2 = c^2$ (11)

For any arbitrary point (xi, yi), this can be transformed into a surface in three

dimensional space with the dimensions being (a,b,c) as defined in Equation 12:

$(x_i - a)^2 + (y_i - b)^2 = c^2$ (12)


Each sample point (xi, yi) is now converted into a right circular cone in the

(a,b,c) parameter space. If multiple cones intersect at one point (ao,bo,co), then these

sample points now exist on a circle defined by the parameters ao, bo, and co.

The above process is repeated over all the sample points in an image and votes

are gathered for each set of parameters (ao,bo,co). This method is of course extremely

computationally intensive and has a memory requirement on the order of O(m n c)

where mn is the total number of sample points (i.e., image size) and c is the number

of possible radii. This memory requirement is needed for the accumulator array

where the columns of the accumulator array are the possible radii in the image and

each row is the unique set of (a,b) points for the coordinate of the centers. However,

the computational burden of this method can be significantly lower if the radius c is

known a priori. This reduces the 2-D accumulator array into a single column

vector for a specific co or range of radii (ci – cj). Thus, if the radius is known a priori,

the algorithm's memory requirement reduces to O(mn), since fewer cells in the

accumulator array have to be updated for each sample point.

3.2.2 The role of Canny edge detection

Based on the above method, a similar algorithm, following Yuen et al.[27], is used.

Here the key difference is that an edge detector such as that proposed by Canny[28] is

used. This eliminates all the sample points in the image that do not lie on an edge.

Canny edge detection[28] works by computing derivatives in the horizontal and

vertical direction. These are performed on slightly blurred images in order to discard

any discontinuities due to CCD sensor noise. These derivatives are the change in

pixel brightness along those directions. Equation 13 shows the derivative G (also

known as the edge gradient), which is the magnitude of Gx and Gy.

$G = \sqrt{G_x^2 + G_y^2}$ (13)

One can also calculate an edge direction θ which is given as:


$\theta = \arctan\!\left(\dfrac{G_y}{G_x}\right)$ (14)

Once these gradient magnitudes are found, a localized search of the area is done

to find if the gradient magnitude is a local maximum. If it is indeed a local

maximum, the sample point lies on an edge and is kept. Any other sample points in

the localized search area are discarded.

This process is repeated until all the edge points are located. The edge points are

then set to max brightness (max pixel value in a monochrome camera). The points

that do not lie on an edge are set to pixel level 0. Figure 18 shows the original

blurred image followed by an image with Canny edge detection performed.

Figure 18: Edge Detection using the Canny edge Detection algorithm
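The gradient computation of Equations 13 and 14 can be reproduced with standard image-processing primitives. The fragment below is an illustrative sketch using OpenCV Sobel derivatives (not the thesis implementation); the blur kernel and thresholds are placeholder values.

```python
import cv2
import numpy as np

def edge_gradient(image_gray):
    """Blur, then compute G = sqrt(Gx^2 + Gy^2) and theta = atan2(Gy, Gx)."""
    blurred = cv2.GaussianBlur(image_gray, (5, 5), 1.4)        # suppress CCD sensor noise
    gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)          # horizontal derivative Gx
    gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)          # vertical derivative Gy
    magnitude = np.hypot(gx, gy)                                # Equation 13
    direction = np.arctan2(gy, gx)                              # Equation 14
    return magnitude, direction

# Non-maximum suppression and hysteresis thresholding (the remaining Canny steps)
# are handled internally by cv2.Canny, e.g.:
# edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
```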


3.2.3 The Hough Transform applied to MOSR

Yuen et al.[27] proposed using an edge detection algorithm to reduce the storage and

computational demands of circle finding. Using edge detection, the problem of circle

finding can now be decomposed into two stages consisting of a 2D Hough Transform

to find the circle centers, followed by a 1D Hough Transform to determine radii.

Since the center of the circle must lie along the gradient direction of each edge point

on the circle, the common intersection point of these gradients identifies the

center of the circle. A 2D array is used to accumulate the center finding transform

and candidate centers are identified by local peak detection. This can be viewed as

an integration along the radius axis of all values of the Hough Transform at a single

value of (a,b). The second stage of the method uses the center of the first stage to

construct a histogram of possible radius values. The radius value with the largest

number of edge points lying on it is considered the radius of the circle.

A potential problem with this modified Hough Transform method is that any 3D

information from the image in the edge detection process is lost. For example, any

circles that do not lie parallel to the camera image plane are completely lost since

their gradients might not lie in the vertical and horizontal directions. This problem is

not a concern for MOSR since the OS is assumed to be a “few meters” away from the

capture mechanism. The projected outline of a spherical object always appears as a

circle parallel to the image plane.

Using the method described above, the algorithm is applied to the MOSR OS in

steps:

Step 1 consists of gathering a raw monochrome image as shown in Figure 19.


Figure 19: Raw Image of OS shell on the flat floor

It is important to note that the air bearing support structure as shown in Figure

15 is covered by grey foam so as not to interfere with the algorithm's accuracy;

spurious circles might otherwise be detected in the air bearing mechanism. In this image, the OS

is 1.5 m away from the camera.

Step 2 consists of a simple thresholding algorithm which sets any pixel brighter

than a certain threshold to full ‘white’ (pixel value 255 in an 8-bit CCD sensor) and

conversely, sets any pixel dimmer than a certain threshold to full ‘black’ (pixel value

0). This can be seen in Figure 20.


Figure 20: Thresholded OS image

It is important to note that in this image, most of the room plumbing in the

background has now disappeared. A bright speck in the top left corner passed the

thresholding algorithm, but this should not be of concern since it is neither circular

nor of the right diameter.

Step 3 consists of the image then being processed by a Canny edge detector as

shown in Figure 21.


Figure 21: OS image after Canny Edge detection

Again, it is important to note that the speck in the top left still exists (albeit with

only its edges intact). The Canny edge detector has now removed all the pixels

internal to the OS and kept the outer edge intact.

It can now be seen that a largely non-zero 752 by 480 pixel image (360,960 pixels in total) is reduced to an image with only several hundred non-zero pixels. This greatly reduces the computation required by the Hough Transform in Step 4.

Step 4 consists of running the image through the Hough transform algorithm with

an initial guess for a diameter. As mentioned in Section 1.4.1, this initial estimate for

the diameter will be given by another ranging algorithm (possibly using the long

range radio beacon) in the real MSR mission.

The result after Step 4 can be seen in Figure 22.


Figure 22: Completed Hough Transform OS image

In the interest of clarity, the solution is superimposed on the original, unprocessed image. The green circle signifies the radius that the transform computed, and the green dot in the center signifies the center of that circle. It is important to note that the smaller circles were not recognized, since their radii were not within acceptable bounds (within ±10% of the initial estimate). The new circle radius is then fed into the following iteration as the new initial ‘guess’. Here, it is assumed that the relative motion of the OS perpendicular to the image plane is not large enough to exceed the 10% tolerance mentioned above.
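A minimal sketch of one iteration of this loop, using OpenCV, is shown below; the threshold value, the Hough parameters, and the function name are illustrative assumptions rather than the values used in the thesis implementation.

import cv2

def track_os(frame_gray, r_prev, threshold=200):
    """One iteration of Steps 2-4; r_prev is the radius estimate from the previous frame (pixels)."""
    # Step 2: binary threshold (the white OS shell is far brighter than the background)
    _, bw = cv2.threshold(frame_gray, threshold, 255, cv2.THRESH_BINARY)
    # Steps 3-4: cv2.HoughCircles with HOUGH_GRADIENT performs Canny edge detection
    # internally (param1 is the upper Canny threshold) and then votes only for circles
    # whose radii lie within +/-10% of the previous estimate.
    circles = cv2.HoughCircles(bw, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=100, param2=15,
                               minRadius=int(0.9 * r_prev),
                               maxRadius=int(1.1 * r_prev))
    if circles is None:
        return None                      # no circle found within the radius bounds
    x, y, r = circles[0, 0]              # strongest candidate: center (x, y) and radius r in pixels
    return float(x), float(y), float(r)  # r is fed back as the next initial 'guess'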

Steps 1-4 can be reapplied to a partially occluded OS, as shown in Figure 23.


Figure 23: Partially occluded OS image processed with the Hough Transform

The OS shown above was partially occluded by rotating the dark side of the OS

to face the camera (as shown in Figure 15).

In Figure 23, the circle radius is not as accurate. This is mainly due to the lower number of edge points in the image; the remaining circular edge points generate fewer votes, thereby decreasing the accuracy of the algorithm.

3.2.4 Absolute Position estimation using the Hough Transform

As shown in Figure 22 and Figure 23, the Hough Transform generates a robust

radius and center measurement in pixels and pixel positions respectively. This section

will cover the transformation of the pixel positions and values to absolute position

states (in meters).

Figure 24 shows the geometry of the OS in the camera reference frame.


Figure 24: Camera reference frame perpendicular to image plane

In Figure 24, the depth ‘d’ of the OS in the camera reference frame can be derived

from basic trigonometric principles and is presented in Equation 15. The reference

axes directions are also given for clarity.

d = \frac{D\, h_x}{4\, r_p \tan\left(\mathrm{FOV}_x / 2\right)}    (15)

Here, FOVx is the field of view of the camera in degrees, rp is the estimated radius of the OS in pixels, D is the actual diameter of the OS shell in meters (0.277 m), and hx is the total resolution of the image in that direction (752 pixels).

The same calculation can be done in the perpendicular image direction using the total resolution hy and field of view FOVy; an identical depth estimate is obtained.

Looking at the center location of the OS, Equations 16 and 17 can now be used to calculate the x and y coordinates of the OS position using α and β (the azimuth and elevation angles) with respect to the center of the image plane (taken as the origin).



Figure 25: Camera Reference frame along image plane

Using Figure 25, x and y locations can be derived using basic trigonometric

principles shown in Equations 16 and 17.

x = z \tan\alpha = z\,\frac{2\, x_p \tan\left(\mathrm{FOV}_x / 2\right)}{h_x}    (16)

y = z \tan\beta = z\,\frac{2\, y_p \tan\left(\mathrm{FOV}_y / 2\right)}{h_y}    (17)

Once again, z is the depth estimate computed from Equation 15, FOVx and FOVy are the fields of view in the x and y directions, xp and yp are the pixel positions of the center of the OS with respect to an origin centered at the FOV center, and hx and hy are the total pixel resolutions in the x and y directions respectively. It is noted that z appears in both equations, so the depth measurement is crucially important in state estimation.
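A minimal sketch of this pixel-to-metric conversion follows (using the pinhole relations reconstructed in Equations 15-17; the 90° by 65° field of view is taken from Section 4.3.1 and is an assumption for this example):

import math

FOV_X, FOV_Y = math.radians(90.0), math.radians(65.0)  # assumed camera field of view
HX, HY = 752, 480                                       # image resolution in pixels
D = 0.277                                               # OS shell diameter in meters

def os_position(r_p, x_p, y_p):
    """r_p: estimated radius (pixels); x_p, y_p: circle center measured from the
    image center (pixels). Returns (x, y, z) in the camera frame, in meters."""
    f_x = HX / (2.0 * math.tan(FOV_X / 2.0))   # focal length expressed in pixels
    f_y = HY / (2.0 * math.tan(FOV_Y / 2.0))
    z = f_x * D / (2.0 * r_p)                  # Eq. 15: depth from apparent radius
    x = z * x_p / f_x                          # Eq. 16: tan(alpha) = x_p / f_x
    y = z * y_p / f_y                          # Eq. 17: tan(beta)  = y_p / f_y
    return x, y, z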

The next step is to plot the time history of the OS depth z in this camera

reference frame. In this case, the OS is completely stationary.



Figure 26: Time History plot of Hough Transform for a stationary OS in the

‘near’ field

In Figure 26, the time history shows a relatively low noise level for a stationary OS in the ‘near’ field (the actual depth is 1.75 m). Over ten samples, the noise remains within ±1 pixel, which is close to the ideal performance of the Hough transform [29].

In Figure 27, the same time history plot is taken for the ‘far’ field.



Figure 27: Time History plot of Hough Transform for a stationary OS in the ‘far’

field

In Figure 27, the noise level is higher than in the ‘near’ field case. Here the OS is placed at 2.78 m. This is due to larger thresholding errors and fewer edge points on the circle, which accumulate fewer votes for a given radius. Again, an error of ±1 pixel is expected for the Hough transform.

However, an important relationship can be seen. For a change of 1 pixel in the radius estimate, the resulting difference in depth is much larger in the ‘far’ field case (~15 cm) than in the ‘near’ field case (~6 cm). This relationship can be seen more easily when Equation 15 is plotted: there is a clear non-linear (inverse) relationship between rp (the pixel radius) and the depth z.
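This sensitivity follows directly from the reconstructed form of Equation 15, in which the depth varies inversely with the pixel radius (a short worked check, taking radii of roughly 29 and 18 pixels as read from Figures 26 and 27):

\frac{\partial d}{\partial r_p} = -\frac{d}{r_p} \quad\Rightarrow\quad |\Delta d| \approx \frac{d}{r_p}\,|\Delta r_p|

so a one-pixel radius error corresponds to roughly 1.75/29 ≈ 0.06 m in the near field and 2.78/18 ≈ 0.15 m in the far field, consistent with the observed noise levels.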



Figure 28: Relationship between OS size vs. depth

In Figure 28, the blue boxes indicate discrete pixel values whose depths can be resolved. As the depth decreases, the radius of the OS rp in pixels increases, and the difference between two successive pixel estimates (i.e., the vertical space between consecutive pixel estimates) decreases. This relationship is what is seen in Figure 26 and Figure 27.

3.2.5 Conclusion

As a consequence of these results, it is evident that the Hough transform algorithm can only generate depths at discrete pixel values. Therefore, a method which does not rely on discrete pixel estimates needs to be used in order to increase fidelity in the depth measurement. Such a method, based on astronomical photometry, is presented in the next section.



3.3 Astronomical Photometry

3.3.1 Review

From Section 1.5.1, the apparent brightness or luminosity (F) of a point source in an image is a simple sum of the pixel brightness B(xi,yi) of all the pixels across the point source (with dimensions of m and n pixels in each direction), as shown in Equation 18.

F = \sum_{i=1}^{m} \sum_{j=1}^{n} B(x_i, y_j)    (18)

The flux density, or apparent brightness, of a point source can also be expressed by the relationship in Equation 19:

F = \frac{L}{4\pi z^2}    (19)

Here, L is the true luminosity of the source and z is the distance away from the

source.

3.3.2 Astronomical photometry for MOSR

In this application, the target (OS) is not a point source. For the flux density relationship to apply, each pixel is regarded as a point source. Therefore, the apparent luminosity Ftarget is simply the sum of the apparent luminosities over all points on the target, as shown in Equation 20.

F_{target} = \sum_{(x_i, y_i)\,\in\,target} B(x_i, y_i)    (20)


However, the total number of pixels on the OS scales inversely with the square of the depth. Figure 29 presents this relationship graphically, and Equation 21 states it formally.

Figure 29: Number of Pixels on target scales with range

N_{pixels} \propto \frac{1}{z^2}    (21)

Combining Equations 20 and 21, the relationship between the total apparent brightness Ftarget and the depth z becomes an inverse quartic, with C1 a new constant of proportionality that must be determined a priori.

F_{target} = \frac{C_1}{z^4}    (22)

Equation 22 presents an inverse quartic relationship between Ftarget and the depth z. Inverting it for z yields Equation 23, which provides the increased fidelity sought in the depth measurement.

z = \left(\frac{C_1}{F_{target}}\right)^{1/4}    (23)

Equation 23 is a fourth root function and can be converted to the log scale as

seen in Equation 24.

\log(z) = a - \frac{1}{4}\log\left(F_{target}\right), \qquad a = \frac{1}{4}\log(C_1)    (24)



A linear curve fit between log(z) and log(Ftarget) can then be used to calculate the values of ‘a’ and C1.

Figure 30 shows a log plot with a linear fit to Equation 24. In this plot, z is

known using alternate methods (a laser range finder) and Ftarget is computed by a

simple summation of the pixel values as shown in Equation 20.

Figure 30: Linear fit plot for astronomical photometry

In Figure 30, the linear fit coefficients are computed as follows:

\log(z) = a - \frac{1}{b}\log\left(F_{target}\right)    (25)

For this linear fit, b is the power on z and is expected to be 4. However, the

linear fit does not correlate well for OS locations closer than 2 m (these points have

been excluded from the fit). This discrepancy can be seen clearly when the plot is

converted back to non-log axes as presented in Figure 31.



Figure 31: Inverse Quartic relationship of astronomical photometry

The discrepancy in the result presented in Figure 31 is due to CCD sensor saturation. As the OS approaches the camera, the pixel values cannot exceed 255 (the maximum pixel value for an 8-bit sensor), and the inverse quartic relationship starts to break down. However, the performance of this algorithm is robust in the far field.
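A minimal sketch of this calibration and its inversion is shown below (NumPy; the function names and the use of a least-squares polynomial fit are illustrative assumptions, not the thesis code):

import numpy as np

def total_brightness(img, mask):
    """Eq. 20: sum of pixel values over the (thresholded) target region."""
    return float(img[mask].sum())

def fit_photometry(depths, brightnesses):
    """Fit log(z) = m*log(F) + c; the slope m is expected to be close to -1/4 (Eq. 24)."""
    m, c = np.polyfit(np.log(brightnesses), np.log(depths), 1)
    return m, c

def depth_from_brightness(F, m, c):
    """Invert the calibrated log-log fit to estimate depth from total brightness."""
    return float(np.exp(m * np.log(F) + c))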

3.3.3 Conclusion

The photometry algorithm described in this section features strong performance in the far field and relatively poor performance in the near field. Conversely, the Hough transform begins to break down in the far field, and it is this complementary relationship between the two algorithms that can be exploited by an integrating filter such as the Kalman Filter.



3.4 Kalman Estimator

3.4.1 Review

The discrete-time Kalman Filter equations are reviewed here. For further detail, the reader is advised to consult textbooks such as [30] and [31].

The state estimate vector is xk and the measurements are zk. If there are no

known external disturbances (as is assumed in this case), the Kalman Filtering

equation is simply given by Equation 26.

\hat{x}_k = \hat{x}_k^- + K_k\left(z_k - H\,\hat{x}_k^-\right)    (26)

In Equation 26, xk− (the predicted state estimate vector) can be rewritten as shown in Equation 27. In this equation, A is the system model matrix (which is known and constant) and xk−1 is the state estimate from the previous iteration.

\hat{x}_k^- = A\,\hat{x}_{k-1}    (27)

The predicted error covariance and the Kalman gain Kk are then calculated in the manner described in Equations 28 and 29.

P_k^- = A\,P_{k-1}\,A^T + Q    (28)

K_k = P_k^-\,H^T\left(H\,P_k^-\,H^T + R\right)^{-1}    (29)

Equations 28 and 29 introduce four new terms: H (the measurement model matrix), R (the covariance matrix of the measurement noise), Pk− (the predicted error covariance matrix), and Q (the process noise matrix).

Finally, the error covariance matrix Pk can be updated using Equation 30:

P_k = \left(I - K_k H\right)P_k^-    (30)

The accuracy of the Kalman Filter heavily depends on the matrices that define

the state model, the measurement model and the noise model. These matrices (A, Q,


R and H) are vitally important and are application specific. They are discussed in

Section 3.4.2.

3.4.2 The Use of the Kalman Filter in MOSR

The Kalman Filter is used to integrate the measurements described in Sections 3.2.4 and 3.3.2. The measurement vector zk consists of the position estimates given in Equation 31.

z_k = \begin{bmatrix} x & y & z \end{bmatrix}^T    (31)

The position estimates take this form for both algorithms, the key difference being that the depth z differs between the two algorithms (and so their corresponding x and y estimates also differ from each other).

The state estimate xk is a 9x1 column vector containing position, velocity and acceleration states, as shown in Equation 32. While position and velocity estimates are the only estimates required by the controller described in Chapter 4, accelerations are included to take into account process noise. Since the Kalman Filter process noise matrix Q assumes zero-mean Gaussian noise, the acceleration terms are required to account for variations in velocity. This will become more apparent in Equation 35.

x_k = \begin{bmatrix} x & \dot{x} & \ddot{x} & y & \dot{y} & \ddot{y} & z & \dot{z} & \ddot{z} \end{bmatrix}^T    (32)

The state model A is based simply on Newton's second law. Here A is a 9x9 block-diagonal matrix consisting of three identical 3x3 blocks Ao (one per axis) along the diagonal, as shown in Equations 33 and 34.

A_o = \begin{bmatrix} 1 & T_s & T_s^2/2 \\ 0 & 1 & T_s \\ 0 & 0 & 1 \end{bmatrix}    (33)


A = \begin{bmatrix} A_o & 0 & 0 \\ 0 & A_o & 0 \\ 0 & 0 & A_o \end{bmatrix}    (34)

In Equation 33, Ts is the length of the time step between measurements; in this

case, it is 1 second.

As mentioned previously, the motion of the OS is assumed to have zero-mean

acceleration along the entire maneuver. Therefore, the non-discretized form of the

process noise matrix is presented in Equation 35.

Q_c = \varphi \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}    (35)

Discretizing the process noise from Equation 35 using the convolution integral results in Equation 36 for Qo. The discrete process noise matrix Qk is then a 9x9 matrix consisting of three 3x3 Qo blocks along the diagonal, as shown in Equation 37.

Q_o = \varphi \begin{bmatrix} T_s^5/20 & T_s^4/8 & T_s^3/6 \\ T_s^4/8 & T_s^3/3 & T_s^2/2 \\ T_s^3/6 & T_s^2/2 & T_s \end{bmatrix}    (36)

Q_k = \begin{bmatrix} Q_o & 0 & 0 \\ 0 & Q_o & 0 \\ 0 & 0 & Q_o \end{bmatrix}    (37)

In Equations 35 and 36, φ is the spectral noise density. This constant is tuned via trial and error when testing the Kalman Filter.


H, the measurement matrix, maps the state xk to the measurement zk. Since the measured quantities appear directly in xk without any numeric calculation, the H matrix simply takes the form shown in Equation 38.

H = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{bmatrix}    (38)

The measurement noise matrix R is tuned via trial and error (using the constant

R0) and has the form shown in Equation 39.

R = R_0 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = R_0\, I_3    (39)

Given the constant matrices A, Q, H and R defined above, the Kalman filter can be implemented in MATLAB using Equations 26 to 30.
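The thesis implementation is in MATLAB; the following minimal NumPy sketch is illustrative only, with the per-axis state ordering assumed above and with φ and R0 standing in for the hand-tuned noise parameters:

import numpy as np

Ts, phi, R0 = 1.0, 1e-4, 1e-3            # assumed tuning values, for illustration

A1 = np.array([[1, Ts, Ts**2 / 2],
               [0, 1,  Ts],
               [0, 0,  1]])
Q1 = phi * np.array([[Ts**5/20, Ts**4/8, Ts**3/6],
                     [Ts**4/8,  Ts**3/3, Ts**2/2],
                     [Ts**3/6,  Ts**2/2, Ts]])
A = np.kron(np.eye(3), A1)               # 9x9 block-diagonal state model (Eq. 34)
Q = np.kron(np.eye(3), Q1)               # 9x9 discrete process noise (Eq. 37)
H = np.zeros((3, 9)); H[0, 0] = H[1, 3] = H[2, 6] = 1.0   # Eq. 38
R = R0 * np.eye(3)                       # Eq. 39

def kf_step(x, P, z):
    """One predict/update cycle of Equations 26-30 for a single measurement z."""
    x_pred = A @ x                                            # Eq. 27
    P_pred = A @ P @ A.T + Q                                  # Eq. 28
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)    # Eq. 29
    x_new = x_pred + K @ (z - H @ x_pred)                     # Eq. 26
    P_new = (np.eye(9) - K @ H) @ P_pred                      # Eq. 30
    return x_new, P_new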

3.5 Kalman Filter Results

With proper tuning of the parameters φ and R0, Figures 33 to 35 show the results of the filter.

It is important to note that comparisons are made to the Global Metrology system (used as ground truth). The camera reference frame is converted to the Global Metrology reference frame as shown in Figure 32.


Figure 32: Comparison of Reference frames (Global Metrology and Camera)

Figure 33 shows the comparison between the raw measurement and the filtered

state estimate.


Figure 33: Vision Estimator Convergence vs. Raw measurement

The X and Y position estimates from the Kalman filter in Figure 33 show a clear ‘smoothing’ capability. As mentioned in Section 3.2, there is significant noise at the beginning of the maneuver in the ‘far’ field, and this is smoothed out by the Kalman Filter. Towards the end of the maneuver (when the OS is closest to the camera), the raw data is much less noisy, as expected. The Kalman Filter tracks the mean value of the measurement well while smoothing the results, as is needed in this application.

Figure 34 and Figure 35 compare the Kalman Filter state estimate generated by

the vision system with the Global Metrology data from the SPHERES system. It is

important to note here that the Global Metrology system had significantly more

noise than is usually expected. This problem was diagnosed as being due to

interference from the MOSR shell geometry. The MOSR shell blocks the infrared

synch pulse from the SPHERES satellite to the external Global metrology system. A

workaround was made where the exterior back half of the MOSR shell was removed

for the test. This workaround was only successful for a few tests.



Figure 34: Position Convergence Comparison in the GM Reference frame

The X and Y position data presented in Figure 34 show a bias that varies with time. It is conceivable that this bias may be due to a small pixel error in the Hough Transform in the ‘far’ field. This bias is then propagated through the filter and slowly recovers as the raw measurement error decreases and the OS travels closer to the camera. The bias varies between 15 cm and 5 cm.



Figure 35: Velocity Convergence Comparison in the GM Reference frame

The X and Y velocity data generated by the vision estimator in Figure 35 show remarkable accuracy when compared to the Global Metrology data. The velocity error is less than 1 cm/s, which is more than acceptable for the purposes of docking and capture.

3.6 Conclusion

This chapter presents an approach for close-proximity navigation and state estimation using a single-camera system. The details of this approach (including both algorithms it uses) are described, and experimental results using a laser range finder and the SPHERES Global Metrology system are discussed. From these results, it can be concluded that the system has error bounds of around 15 cm in position and 1 cm/s in velocity. While these accuracies are not ideal when compared with the stereoscopic system laid out by Tweddle [32], they are remarkable for a single-camera system.



Chapter 4

4 Model Predictive Control

4.1 Overview

The rendezvous and docking scenario for the MOSR mission involves a chaser spacecraft with an on-board vision-based navigation system and a capture mechanism system, already described in Chapters 2 and 3. In order to capture the relative motion between the chaser spacecraft and the target, tests are performed on the Flat Floor facility (as shown in Figure 15). A fixed device with the vision navigation system is placed on one side of the Flat Floor operative area, and a SPHERES satellite is used to perform the rendezvous and capture maneuver. This maneuver is performed using a Model Predictive Controller as described in Section 1.6. In order to define how the controller works, different control strategies can be considered.

4.2 Definition of the Control Strategy

There are two possible control strategies that can be considered:

1. In the first strategy, a planning algorithm is used to generate a (safe) reference

state trajectory, starting from an initial estimated state of the chaser

spacecraft and ending at a target location. A controller, such as an MPC-

based one, could be used to track the reference trajectory.

2. In the second strategy, an MPC-based controller is used to both compute a

reference trajectory at each time step and track it at the same time.

For this work, the second control strategy is adopted with the aim of testing the

capability of MPC to compute and track a reference trajectory for the close-

proximity phase of the rendezvous and capture maneuver.


4.3 Controller constraints

In the reversed-roles model proposed in Chapter 2, it is important that the controller satisfy certain constraints in order to correctly emulate the OS-chaser mechanics. The constraints are as follows.

4.3.1 Field of view constraint

It is required that the chaser remains within the FOV cone of the vision system

during the entire rendezvous maneuver. The FOV in the horizontal plane is 90

degrees and the FOV in the vertical plane is 65 degrees.

4.3.2 Limited control authority

The thruster system on the SPHERES satellite has a maximum control force that can be actuated. Each thruster on board the SPHERES satellite can produce a maximum force of 0.112 N. Taking into account the thruster configuration (two thrusters along each direction), the maximum force that can be applied in any direction is Fmax = 0.224 N.

The mass of the SPHERES is assumed constant and equal to the combined mass of the SPHERE and the air bearing support structure shown in Figure 15, which is 11 kg; the theoretical maximum acceleration is therefore 0.02 m/s². The MPC algorithm assumes a piecewise-constant control acceleration profile with a control period of 1 second. Due to interference issues between the thrusters and the global metrology system, thrusters can only be fired during 0.2 s of the 1 s controller duty cycle.
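As a short worked check using only the values quoted above, the equivalent piecewise-constant acceleration available to the MPC is:

a_{max} = \frac{F_{max}}{m} = \frac{0.224\ \text{N}}{11\ \text{kg}} \approx 0.02\ \text{m/s}^2, \qquad u_{max}^{PWC} = \frac{0.2\ \text{s}}{1\ \text{s}}\, a_{max} \approx 0.004\ \text{m/s}^2 .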

4.3.3 Terminal constraints

Terminal constraints are applied to the relative dynamic state of the OS in the

proximity of the target position for safety purposes and to guarantee certain

conditions for nominal capture mechanism function.

The following terminal constraints are placed on the SPHERES MPC controller:

1. The relative velocity has to be less than 3 cm/s in the z direction (in the

camera reference frame).

2. The relative velocity along the other two directions must be almost 0.


Figure 36 shows these constraints graphically.

Figure 36: MPC Constraints

4.3.4 Attitude constraints

The attitude of the chaser spacecraft is maintained at a target orientation during

close proximity operations. This is done so that the MOSR shell (described in Section

2.2.3) can be tested under different lighting conditions independently.

A quaternion based PD controller is used to regulate the attitude of the

SPHERES satellite during the entire maneuver. MPC does not control the attitude

of the OS.

4.4 MPC Software Architecture

Due to its high computational time when run on board the SPHERES satellite, the MPC Engine cannot be used directly on-board SPHERES when a control frequency of 1 Hz is required. In these situations, the MPC problem is solved using the laptop control station and the computed control forces are transmitted back to the SPHERES satellite for actuation. To manage the time delay due to the offline MPC computation and communication lag, a new state estimate for the SPHERES satellite is propagated forward by a constant assumed time delay. This time delay must be much less than 1 second in order for new control accelerations to be delivered to the SPHERES satellite before a new control cycle has begun.
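A minimal sketch of this delay handling is shown below (illustrative; the propagation model and function names are assumptions, and solve_mpc stands in for the offline MPC Engine):

import numpy as np

def propagate(x, u, tau):
    """Double-integrator propagation of x = [position(3), velocity(3)] by tau under acceleration u."""
    p, v = x[:3], x[3:6]
    return np.concatenate([p + v * tau + 0.5 * u * tau**2, v + u * tau])

def delayed_mpc_step(x_est, u_prev, tau, solve_mpc):
    """Propagate the latest estimate by the assumed compute + communication delay tau,
    then solve the MPC problem for the state the satellite will actually have."""
    x_delayed = propagate(x_est, u_prev, tau)
    return solve_mpc(x_delayed)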



Figure 37 shows the layout of the MPC Communications framework. The

communication between the offline MPC Engine and the SPHERES satellite takes

place using the regular SPHERES communication protocol.

Figure 37: Online Communication Framework with MPC

4.5 Definition of the Reference frames

The following Reference frames are defined:

1. Camera Reference Frame: x and y axes are in the camera image plane, z axis

along the camera focal length axis. This is identical to the camera reference

frame defined in Section 3.5.

2. Global Metrology reference frame: This is the reference frame used by the

SPHERES ultrasound Global Metrology system.

Figure 38 shows these reference frames in a graphical format. The teal cone is the FOV constraint as applied to the MPC controller.


Figure 38: Camera and Global Metrology reference frames

4.6 MPC Results

For the purposes of this thesis, the exact mechanisms of the MPC controller are not shown. The reader is advised to refer to [19], whose author is a collaborator on this project.

4.6.1 Offline MPC time delay calculation

In order to check the viability of offloading the MPC Engine to MATLAB, a test was done to calculate the time delay between the SPHERES satellite requesting a new control action from the offline MPC Engine and the accurate delivery of the new control action back to the SPHERES satellite. Figure 39 shows this result.


Figure 39: Time Delay computation for MPC Computation

Figure 39 presents the time delay and the number of optimizer iterations required to generate a new MPC control action. The time delay (approximately 50 milliseconds) is much lower than the required 1 second, which demonstrates the viability of offloading the MPC engine to the control station.

4.6.2 MPC results in the SPHERES simulator

To ensure proper code functionality, the MPC code was run in the SPHERES MATLAB simulator. The MPC engine was compared with a well-tuned PD controller (having the same settling time as the MPC controller), and the same PD gains were used in the test using the SPHERES hardware. Refer to Appendix A for more information on the simulation results.

4.6.3 MPC results using Global Metrology on SPHERES hardware

Figure 40 to Figure 42 show the MPC tests on the flat floor using the Global

Metrology for state estimation. Comparisons are made to the SPHERES simulator

results.



Figure 40: X and Y Position Errors for MPC engine

In Figure 40, the position errors are higher than in the simulation. It is conceivable that the flat floor air bearing setup used for this test is not frictionless, unlike what the simulator assumes. This build-up of position errors leads to a build-up of control accelerations, as shown in Figure 42. As the test progresses, the position errors tend to decay, and this behavior matches the simulator results. A non-zero X-position error is seen towards the trailing edge of the test. This is possibly due to excessive friction in one of the air pucks that the SPHERES satellite floats on for this 2D test. This is routinely seen in SPHERES tests on the flat floor and the error is not unexpected.



Figure 41: Velocity vs. Time using the SPHERES testbed

Figure 42: Control accelerations vs. Time using the SPHERES testbed



In Figure 42, there is a noticeable increase in control accelerations along the x and y axes. This is a direct consequence of the large position errors that build up during the start of the test. However, the control accelerations in the y-direction show large fluctuations. This could possibly be due to the lack of y-axis convergence in the Global Metrology estimator. It is uncertain whether these large fluctuations in the control acceleration are a result of the velocity fluctuations seen in Figure 41, or whether these control accelerations are themselves causing the velocity fluctuations in the y-direction.

4.6.4 PD and MPC Control comparison results using Global Metrology

on SPHERES Hardware

Figure 43 and Figure 44 compare a standard PD controller and the MPC controller. As the results in Appendix A indicate, the MPC controller should show a 10-11% decrease in fuel consumption.
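For reference, cumulative Δv curves such as those in Figure 43 can be computed from the commanded piecewise-constant accelerations as in the following sketch (illustrative, not the analysis script used for the thesis):

import numpy as np

def cumulative_delta_v(u_history, dt=1.0):
    """u_history: (N, 2 or 3) array of commanded accelerations [m/s^2]; dt: control period [s]."""
    dv_per_step = np.linalg.norm(u_history, axis=1) * dt
    return np.cumsum(dv_per_step)   # monotone curve, directly comparable between MPC and PD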

Figure 43: Cumulative Δv requirement in MPC vs. PD on SPHERES testbed



The MPC data displayed in Figure 43 show a clear advantage over the PD controller. While the Δv saving is not as large as the 11% seen in the simulations (refer to Appendix A), the MPC controller is still more fuel efficient than a well-tuned PD controller. This advantage is due to the predictive nature of the MPC algorithm and its ability to handle thruster saturation. As more time elapses, the ability of the MPC controller to predict its state improves, thereby improving its handling of these constraints.

These advantages discussed here can also be seen in Figure 44.

Figure 44: Velocity profile comparison for MPC and PD controller on SPHERES

testbed

Figure 44 presents the MPC advantages in a slightly different form. The velocity

in the x-direction is significantly lower for MPC throughout the entire maneuver

even though both satellites reach the end of the maneuver before the test ends. This

relationship is easier to see in the x-y trajectory mapped out in Figure 45.



Figure 45: XY Trajectory comparisons of PD vs. MPC

The green points in Figure 45 mark the final locations of the SPHERES satellites. Even though both controllers had different starting points, the PD controller overshot the target x-location, causing it to backtrack to the final target location. In contrast, the MPC controller took a much smoother approach to the ending location with nearly zero overshoot. This further indicates that the MPC controller is more fuel efficient than the PD controller. See Appendix A for more results from the PD-MPC test.

4.7 Conclusion

This chapter presents a different control approach for close-proximity rendezvous and capture. The details of the controller architecture and its communication framework are described, and experimental results comparing it with a PD controller on the SPHERES hardware are shown. From these results, it can be concluded that the MPC controller is a viable controller for such a rendezvous application, with clear fuel-efficiency benefits over PD control. However, no tests were performed with the vision estimator in the loop, due to the MOSR shell interference issues described in Section 3.5.



the vision estimator in the loop due the MOSR shell interference issues described in

Section 3.5.


Chapter 5

5 Conclusions

5.1 Summary of Thesis

This thesis presents the first steps of a larger research initiative to investigate the problem of rendezvous and capture of a completely passive object in Mars orbit.

Chapter 2 presents the overall MOSR SPHERES system which is currently being

developed in order to perform a full 6-DOF test on-board the International Space

Station. This testbed consists of a capture mechanism with at least two independent

non-stereoscopic cameras used in succession. A bi-color MOSR shell is designed and

built to increase the accuracy and robustness of the vision algorithm.

Chapter 3 presents the two algorithms used in the vision system. This system

does not use any fiduciary markers or known surface features to compute the 3 DOF

state estimate. The system assumes only a constant circular cross-section of given

diameter. First, the mechanism of the Hough transform is discussed. The inherent

drawbacks of using such an algorithm are explained and possible mitigating solutions

are discussed. Second, an algorithm based on astronomical photometry is presented.

The increased fidelity in measuring target range is noted and special attention is paid

to the weaknesses of each of these algorithms (when used in parallel). Finally, the use

of a Kalman Filter is discussed as a possible ‘smoother’ to integrate both these

algorithms. Once again, the mechanism of the linearly-coupled Kalman filter is

discussed and experimental data is shown with appropriate comparisons to the

Global Metrology system (which is assumed to be ground truth).

Chapter 4 presents the controller used in this project. The controller is based on

Model Predictive control as outlined in [19] and is used for rendezvous and capture of

the OS from a ‘few meters’ away. The system-specific constraints are listed, and the communications framework of this controller for this application is described.

Evidence is presented that strongly suggests that MPC can be a better alternative to

PD based controllers for close proximity rendezvous and capture applications.


5.2 List of contributions

The following is a list of contributions of this thesis:

• Design and development of a vision system that can be used for close-proximity rendezvous applications.

• Implementation and experimental evaluation of a vision system that uses no fiduciary markers or surface features for relative state estimation. This vision system also uses a single camera, which can be an advantage over more sophisticated stereoscopic systems.

• Identification of the weaknesses of the approach to relative state estimation using the vision techniques described. A Kalman filter is introduced to integrate both these algorithms, and an experimental evaluation of this estimator is presented.

• Implementation and testing of a new Model Predictive Controller for space-based rendezvous and capture applications. This particular application of MPC has not been done before, and there is sufficient evidence to show that MPC is a more fuel-efficient alternative to PD.

5.3 Future work

The main focus of the future work on the SPHERES MOSR project will involve

more integrated testing of the estimator and the MPC controller. As noted before,

the MOSR shell described in Section 2.2.3 interferes with the IR synch pulse sent

from the SPHERES satellite to the Global Metrology beacons. Modifications to this

shell need to be made before more integrated testing can be done to test the

estimator and the controller in a closed-loop environment.

Future work on the vision navigation system should include incorporation of the

Kalman Filter into the Hough Transform initial estimate. This ensures that the algorithm can deliver OS ranges with the maximum possible accuracy. The ability of the Kalman filter to predict future states of the OS can be invaluable to the accuracy of the Hough transform. This approach of integrating the estimator into the workings of the Hough transform is described in Mills et al. [33]. The astronomical photometry

algorithm can also be upgraded to compute new estimates for the proportionality

constants on the fly. Since the range estimate of the photometry algorithm depends

heavily on the constants of the inverse-quartic curve fit, updating these constants

over time can lead to better precision and accuracy of the range measurement.


Finally, the Kalman Filter can be upgraded to an Extended Kalman Filter to take

into account some of the inherent non-linearities and cross coupling of the

measurement model. While this upgrade will not offer a significant increase in

estimator accuracy (since the accuracy of the estimator is driven by the ‘sensor’

algorithms), it might allow the initial state convergence to be smoother than what is

seen now.

A number of areas for future work exist concerning the preliminary results presented in Chapter 4. The MPC controller can be tested with the vision estimator to evaluate the closed-loop performance of the entire system. Additionally, a test with no control inputs can be conducted to diagnose the unexpected noise seen along the y-axis. Such a test would reveal the true source of the y-axis noise; this could be sensor noise through the Global Metrology system or inherent noise in the controller design, which might need to be accounted for in future tests using this control system.


References

[1] Hargraves, R. B., Knudsen, J. M., Madsen, M. B., and Bertelsen, P., “Finding the right

rocks on Mars,” Eos, vol. 82, no. 26, pp. 292–292, 2001.

[2] R. Mattingly, “Continuing evolution of Mars sample return,” 2004 IEEE Aerospace

Conference Proceedings, pp. 477–492, 2004.

[3] Aurora Flight Sciences, “SPHERES MOSR RDOS Phase 2 NASA Proposal,” 2010.

[4] “DARPA Orbital Express.” [Online]. Available:

http://archive.darpa.mil/orbitalexpress/index.html. [Accessed: 01-Aug-2013].

[5] “JAXA | Asteroid Explorer ‘HAYABUSA’ (MUSES-C).” [Online]. Available:

http://www.jaxa.jp/projects/sat/muses_c/index_e.html. [Accessed: 01-Aug-2013].

[6] M. Adler, W. Owen, and J. Riedel, “Use of MRO Optical Navigation Camera to

Prepare for Mars Sample Return,” LPI Contributions, 2012.

[7] R. O. Duda and P. E. Hart, “Use of the Hough transformation to detect lines and

curves in pictures.,” Communications of the ACM, vol. 15, pp. 11–15, 1972.

[8] P. Hough, “METHOD AND MEANS FOR RECOGNIZING COMPLEX

PATTERNS,” 1960.

[9] P. Kierkegaard, “A method for detection of circular arcs based on the Hough

transform,” Machine Vision and Applications, vol. 5, pp. 249–263, 1992.

[10] D. H. Ballard, “Generalizing the Hough transform to detect arbitrary shapes,” Pattern

Recognition, vol. 13, pp. 111–122, 1981.

[11] F. Zana and J. C. Klein, “A multimodal registration algorithm of eye fundus images

using vessels detection and Hough transform.,” IEEE Transactions on Medical

Imaging, vol. 18, pp. 419–428, 1999.

[12] V. Kamat and S. Ganesan, “An efficient implementation of the Hough transform for

detecting vehicle license plates using DSP’S,” Proceedings RealTime Technology and

Applications Symposium, pp. 58–59, 1995.

[13] A. Karnieli, A. Meisels, L. Fisher, and Y. Arkin, “Automatic extraction and evaluation

of geological linear features from digital remote sensing data using a Hough

transform,” Photogrammetric Engineering and Remote Sensing, vol. 62, pp. 525–531,

1996.


[14] G. Casonato and G. B. Palmerini, “Visual techniques applied to the ATV/ISS

rendezvous monitoring,” 2004 IEEE Aerospace Conference Proceedings IEEE Cat

No04TH8720, vol. 1, 2004.

[15] C. Sterken and J. Manfroid, Astronomical Photometry: A Guide. Springer, 1992, p.

272.

[16] H. Shapley, “Studies based on the colors and magnitudes in stellar clusters. VI. On the

determination of the distances of globular clusters.,” The Astrophysical Journal, vol.

48, p. 89, Sep. 1918.

[17] J. Riedel, J. Guinn, M. Delpech, and J. Dubois, “A combined open-loop and

autonomous search and rendezvous navigation system for the CNES/NASA Mars

Premier Orbiter mission,” Pasadena, CA, 2003.

[18] A. Bemporad and M. Morari, “Robust Model Predictive Control: A Survey,”

Robustness in Identification and Control, vol. 245, pp. 207–226, 1999.

[19] A. Valmorbida, “Model Predictive Control (MPC) strategies on the SPHERES test-bed

for the Mars Orbital Sample Return (MoSR) scenario,” University of Padova, 2013.

[20] C. E. Garcia, D. M. Prett, and M. Morari, “Model Predictive Control : Theory and

Practice a Survey,” Automatica, vol. 25, pp. 335–348, 1989.

[21] C. E. Garcia and A. M. Morshedi, “Quadratic programming solution of dynamic

matrix control (QDMC),” Chemical Engineering Communications, vol. 46, pp. 73–87,

1986.

[22] V. Manikonda, P. O. Arambel, M. Gopinathan, R. K. Mehra, and F. Y. Hadaegh, “A

model predictive control-based approach for spacecraft formation keeping and attitude

control,” Proceedings of the 1999 American Control Conference Cat No 99CH36251,

vol. 6, pp. 4258–4262, 1999.

[23] R. Linfield and P. Gorham, “The DS3 Mission: A Separated Spacecraft Optical

Interferometer,” The American Astronomical Society, p. 3, 1998.

[24] A. Richards and J. P. How, “Model predictive control of vehicle maneuvers with

guaranteed completion time and robust feasibility,” Proceedings of the 2003 American

Control Conference 2003, vol. 5, pp. 4034–4040, 2003.

[25] J. Lavaei, A. Momeni, and A. G. Aghdam, “A Model Predictive Decentralized Control

Scheme With Reduced Communication Requirement for Spacecraft Formation,” IEEE

Transactions on Control Systems Technology, vol. 16, pp. 268–278, 2008.

[26] D. Miller, A. Saenz-Otero, and J. Wertz, “SPHERES: a testbed for long duration

satellite formation flying in micro-gravity conditions,” AAS/AIAA Space Flight

Conference, 2000.


[27] H. K. Yuen, J. Princen, J. Illingworth, and J. Kittler, “Comparative study of Hough

transform methods for circle finding,” Image and Vision Computing, vol. 8, pp. 71–77,

1990.

[28] J. Canny, “A computational approach to edge detection.,” IEEE Transactions on

Pattern Analysis and Machine Intelligence, vol. 8, pp. 679–698, 1986.

[29] C. M. Brown, “Inherent bias and noise in the hough transform.,” IEEE Transactions on

Pattern Analysis and Machine Intelligence, vol. 5, pp. 493–505, 1983.

[30] P. Kim, Kalman Filter for Beginners: with MATLAB Examples. CreateSpace

Independent Publishing Platform, 2011, p. 232.

[31] P. Zarchan and H. Musoff, Fundamentals of Kalman Filtering: A Practical Approach,

vol. 190. 2005, p. 2010.

[32] B. Tweddle, “Computer vision based navigation for spacecraft proximity operations,”

no. February, p. 225, 2010.

[33] S. Mills, T. P. Pridmore, and M. Hills, “Tracking in a Hough Space with the Extended

Kalman Filter,” Procedings of the British Machine Vision Conference 2003, pp. 18.1–

18.10, 2003.


6 Appendix A

6.1 MPC Simulation results using PD and MPC Control

Figure 46: SPHERES Simulator PD MPC Comparison – Position Error vs. Time



Figure 47: SPHERES Simulator PD MPC Comparison – Control Accelerations

vs. Time

Figure 48: SPHERES Simulator PD MPC Comparison – Δv requirements vs.

Time

(The legend of the Δv plot in Figure 48 reports the MPC requirement as 11.65% lower than that of the PD controller.)


Figure 49: SPHERES Simulator PD MPC Comparison – Control acceleration increments vs. Time



6.2 PD control test on SPHERES hardware

Figure 50: SPHERES hardware PD Comparison – Control accelerations vs. Time

Figure 51: SPHERES hardware PD Comparison – Cumulative Δv vs. Time



Figure 52: SPHERES hardware PD Comparison – X and Y Position vs. Time

Figure 53: SPHERES hardware PD Comparison – Velocities vs. Time



Figure 54: SPHERES hardware PD Comparison – XY Trajectory

6.3 MPC Test on SPHERES hardware

Figure 55: MPC Trajectory using SPHERES Hardware



Figure 56: Cumulative Δv requirement for MPC using SPHERES hardware

Figure 57: XY Trajectory for MPC using SPHERES hardware



University of Padova
Center of Studies and Activities for Space (CISAS) “G. Colombo”
Ph.D. School in Sciences Technologies and Measures for Space

Test of Model Predictive Control (MPC) strategies on the SPHERES test bed for the Mars Orbital Sample Return (MOSR) scenario

Andrea VALMORBIDA
Ph.D. candidate, XXV cycle
Mechanical Measures for Engineering and Space

August 29, 2013


Contents

Acronyms
1 THE SPHERES MOSR SCENARIO
1.1 SPHERES MOSR Scenario Overview
1.2 Definition of the Control Strategy
1.3 Controller Requirements and Constraints
1.3.1 Field of view constraint
1.3.2 Limited control authority
1.3.3 Terminal constraints
1.3.4 Attitude constraints
1.4 Reference Frames Definition
2 MODEL PREDICTIVE CONTROL FOR LTI SYSTEMS
2.1 Introduction
2.2 Prediction Model
2.3 Optimization problem description
2.4 Cost function in standard form
2.5 Constraints in standard form
2.5.1 Control variable increment constraints
2.5.2 Control variable constraints
2.5.3 Controlled variable constraints
2.5.4 Standard form of the inequality constraint equation
2.6 The MPC algorithm
2.6.1 MPC QP solver
2.6.2 MPC QP computing algorithm
3 MPC TOOLS FOR SPHERES
3.1 MPC Engine for SPHERES
3.2 Matlab MPC Tools
3.3 C MPC Tools
4 TEST RESULTS
4.1 Preliminary Simulation Results
4.1.1 Matlab Simulator MPC vs Matlab Simulator PD
4.1.2 SPHERES Simulator MPC vs SPHERES Simulator PD
4.2 SPHERES Test Results
4.2.1 SPHERES Test Plan
4.2.2 Test 1 - MPC computing time evaluation
4.2.3 Test 2 - MPC rendezvous maneuver
4.2.4 Test 3 - PD rendezvous maneuver
4.2.5 MPC - PD comparison
4.2.6 Considerations
4.3 Conclusions
Bibliography


Acronyms

w.r.t.: with respect to

s/c: spacecraft

FOV: Field Of View

RF: Reference Frame

DC: Duty Cycle

PWM: Pulse Width Modulation

PWC: Piece Wise Constant

CMS: Capture Mechanism System

VNS: Vision-based relative Navigation System

FF: MIT SSL Flat Floor

LTI: Linear Time Invariant


Chapter 1

THE SPHERES MOSR SCENARIO

1.1 SPHERES MOSR Scenario Overview

The rendezvous and docking scenario for the MOSR mission involves a chaser spacecraft,

the Orbiter/Earth Return Vehicle (ERV), with an on-board Vision-based relative Navigation

System (VNS) and a Capture Mechanism System (CMS) which attempts rendezvous and

docking with a passive target containing a Martian geological sample called the Orbiting

Sample (OS) satellite.

In order to capture the relative motion between the chaser s/c and the target and be

able to test this scenario using the SPHERES test bed at the MIT SSL Flat Floor (FF)

facility, a fixed device with both the CMS and the VNS systems is placed at one side of the

FF operative area and a SPHERES satellite is used to perform the rendezvous and capture

maneuver (see Figure 1.1).


Figure 1.1: SPHERES MOSR scenario.



1.2 Definition of the Control Strategy

There are two possible experimental configurations that can be considered:

1. In the first configuration, a planning algorithm is used to generate a (safe) reference state trajectory1, starting from an initial estimated state of the chaser s/c and ending at a target state. A controller, such as an MPC-based one, could be used to track the reference trajectory.

2. In the second configuration, an MPC-based controller is used to both compute a reference trajectory and track it at the same time.

In this work we adopted the second control strategy with the aim of testing the MPC

capability to compute and track a reference trajectory for the close-proximity phase of the

rendezvous and capture maneuver.

1.3 Controller Requirements and Constraints

Some requirements and/or constraints have to be taken into account with the aim of:

• guaranteeing the safety of both the chaser s/c and the target;

• guaranteeing proper operational conditions for all the elements used to execute the rendezvous and capture maneuver;

• improving the control system performance, taking into account the actual system.

1.3.1 Field of view constraint

It is required that the chaser remains within the FOV cone of the vision-based sensing system

during the rendezvous maneuver.

This FOV requirement can be represented by the following linear inequality constraint

on the relative position vector p:

HFOV p(tk) ≤ kFOV (1.1)

for each time step tk of the rendezvous maneuver.

The camera FOV has an amplitude in the horizontal plane fovh = 60 deg, and in the

vertical plane fovv = 30 deg.

1 All trajectories/states are considered as relative trajectories/states of the chaser s/c w.r.t. the target s/c.
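As an illustration (a sketch assuming the camera boresight lies along the z axis of the Camera RF; the specific construction is not taken from the report), the FOV cone can be approximated by a pyramid, giving four linear rows of the inequality in Equation 1.1:

x - z\tan(fov_h/2) \le 0, \qquad -x - z\tan(fov_h/2) \le 0, \qquad y - z\tan(fov_v/2) \le 0, \qquad -y - z\tan(fov_v/2) \le 0,

i.e.

H_{FOV} = \begin{bmatrix} 1 & 0 & -\tan(fov_h/2) \\ -1 & 0 & -\tan(fov_h/2) \\ 0 & 1 & -\tan(fov_v/2) \\ 0 & -1 & -\tan(fov_v/2) \end{bmatrix}, \qquad k_{FOV} = 0 .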


1.3.2 Limited control authority

This constraint is due to the maximum control force that can be actuated by the propulsion

system onboard the chaser satellite.

Each thruster onboard SPHERES can perform a force Fth = 0.112 N. Taking into account the SPHERES thruster system configuration, the maximum force that can be applied in any direction is Fmax = 2 Fth = 0.224 N (worst case when the force direction is parallel to the thruster firing direction). The mass of the SPHERES is considered constant and equal to m_sph,LAB = m_sph + m_air carriage = 11 kg (at the MIT SSL). The maximum acceleration that the thruster system can perform is then u_max,LAB = 0.02 m/s². The MPC algorithm assumes

a Piece Wise Constant (PWC) control acceleration profile, with a control period ∆tctrl = 1s.

A control acceleration is applied using a Pulse Width Modulation (PWM) strategy with a maximum pulse width pw_max = 0.2 s, i.e. the acceleration is equal to u_max and the width, or time duration, of each pulse is computed in order to preserve the impulse of the acceleration. The maximum equivalent PWC acceleration, which is used in the MPC algorithm, can be computed as follows:

u_max^MPC = (pw_max / ∆t_ctrl) u_max = DC_ctrl u_max    (1.2)

which is u_max,LAB^MPC = 0.004 m/s² at the MIT SSL.
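The numbers quoted above can be reproduced with a few lines of Matlab (a simple numerical check, not part of the flight code):

F_th      = 0.112;                  % single thruster force [N]
F_max     = 2*F_th;                 % max force in any direction [N]
m_sph     = 11;                     % SPHERES + air carriage mass at the MIT SSL [kg]
u_max     = F_max/m_sph;            % max instantaneous acceleration, ~0.02 m/s^2
pw_max    = 0.2;                    % maximum PWM pulse width [s]
dt_ctrl   = 1;                      % control period [s]
u_max_MPC = (pw_max/dt_ctrl)*u_max; % Eq. (1.2): ~0.004 m/s^2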

Taking these constraints into account within the control algorithm, MPC is expected to have better performance with respect to other classical control strategies.

1.3.3 Terminal constraints

Terminal constraints are applied to the relative dynamic state of the chaser satellite in

proximity of the target position both for safety purposes and to guarantee good conditions

for the capture mechanism to work.

We imposed the following constraints on the relative velocity when the SPHERES satellite

is close to the target position:

• the relative velocity has to be less than a safety value along the camera focal axis

direction;

• the relative velocity has to be almost zero in the other two directions.

This means that the SPHERES satellite approaches the target position following a straight path along the camera focal axis and with a reduced velocity. Constraints on the relative velocity are met by properly tuning the MPC weights (or the PD gains).


Figure 1.2: Controller requirements and constraints (camera Field Of View and maximum control force; SPHERES at its initial position and at the target position in front of the camera and capture mechanism).

1.3.4 Attitude constraints

The attitude of the chaser s/c is maintained equal to a target orientation during close-

proximity operations. A quaternion-based PD controller is used to regulate the attitude of

the SPHERES satellite during the whole maneuver.

1.4 Reference Frames Definition

We define the following Reference Frames (RFs) (Figure 1.3):

1. Global RF: x and y axes are in the local horizontal (orbital) plane, z axis is along the

local vertical (radial) direction.

2. Camera RF: x and y axes are in the camera focal plane, z axis is along the camera

focal axis.

3. Flat Floor (FF) RF: it is the RF used by the SPHERES Ultra-Sound Global Metrology

System.


Figure 1.3: Global and Camera Reference Frames.


Chapter 2

MODEL PREDICTIVE CONTROL

FOR LTI SYSTEMS

2.1 Introduction

Model Predictive Control (MPC) [1, 2, 3] or Receding Horizon Control (RHC) is a modern

control technique in which the current control action to be applied to a dynamic system is

obtained by solving on-line, at each sampling instant, a finite horizon open-loop optimal

control problem, using a sample of the current state as the initial state and a model of

the system dynamics. The optimization process yields an optimal control sequence that

minimizes a certain cost measure for the system over a finite prediction horizon. The first of

those control actions is applied to the plant, and the process is repeated at the next sampling

instant, thereby introducing feedback into the system.

The basic premise of MPC, which is illustrated in Figure 2.1, is the following1. On the

basis of a (time invariant) model of the system

x(τ + 1) = f(x(τ), u(τ)) , where τ ∈ [0, Hp]

and a measurement x_t := x(0) of the system’s state at sampling time t, i.e. τ = 0², an optimization problem is formulated that predicts over a finite horizon Hp the optimal evolution

of the state x according to some optimality criterion. The prediction of the state variable x

is performed over an interval of length Hp. The control signal u is assumed to be fixed after τ = Hu. Hp and Hu are referred to as the prediction horizon and the control horizon

respectively. The result of the optimization procedure is an optimal control input sequence

U⋆_Hp(x_t) := { u(0), u(1), ..., u(Hp − 1) }

Even though a whole sequence of inputs is at hand, only the first element u(0) is applied to

1 For simplicity we assume that the system state x is equal to the system output y.
2 t is the absolute time, while τ is the relative time, i.e. a time instant on the relative prediction time axis that shifts with every new (absolute) sampling time.


Algorithm 2.1 On-line receding horizon control.

1. Measure the state x_t at time instance t;

2. Obtain U⋆_Hp(x_t) by solving an optimization problem over the horizon Hp; IF U⋆_Hp(x_t) = Ø THEN the problem is infeasible, STOP;

3. Apply the first element u(0) of U⋆_Hp(x_t) to the system;

4. Wait for the new sampling time t + 1, go to (1.).

the system. In order to compensate for possible modeling errors or disturbances acting on

the system, a new state measurement xt+1 is taken at the next sampling instance t+ 1 and

the whole procedure is repeated. This introduces feedback into the system.

From the above explanations, it is clear that a fixed prediction horizon is shifted or

receded over time, hence the name Receding Horizon Control. The procedure of this on-line

optimal control technique is summarized in the Algorithm 2.1.
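A compact Matlab rendition of this receding-horizon loop is sketched below; measure_state, solve_horizon_qp and apply_input are hypothetical placeholders for the metrology, optimization and actuation interfaces, not functions of the SPHERES code.

for t = 0:N_steps-1
    xt = measure_state(t);                 % step 1: measure the state x_t
    U  = solve_horizon_qp(xt, Hp, Hu);     % step 2: optimal input sequence over Hp
    if isempty(U)
        error('MPC problem infeasible');   % stop if the problem is infeasible
    end
    apply_input(U(:,1));                   % step 3: apply only the first element u(0)
end                                        % step 4: repeat at the next sampling time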

The key difference between MPC and classical control techniques is its ability to handle

constraints, which are inherent in nearly every real application. For example, actuators

are naturally limited in the force (or equivalent slew rates) they can apply. Alternatively,

operating limits may be imposed for safety or efficiency reasons. The presence of such

constraints can render the off-line determination of a control law very difficult, whereas the

MPC on-line nature is more adept at handling such problems, as constraints are handled

naturally within the optimization framework [4].

It is this re-planning nature and constraint-handling ability that makes MPC well-suited

to spacecraft rendezvous problems [5, 6]. The explicit consideration of the system dynamics

and constraints allows fuel-efficient, feasible plans to be determined autonomously and on-

line, based on latest system state.

In the following sections we show in detail how MPC works taking into account Linear

Time Invariant (LTI) systems with Linear Inequality Constraints on both the control action

and the system output (controlled variable) [7]³.

2.2 Prediction Model

We consider a linearized, discrete-time, state-space model of the plant:

x(k+1|k) = A x(k) + B u(k|k)    (2.1)

y(k) = C x(k)    (2.2)

3 Other useful references are [8, 9].


Figure 2.1: The idea of MPC - control variable with its constraints over the control horizon Hu, and output variable with its constraints over the prediction horizon Hp.

where:

• x ∈ R^nx is the state vector;

• u ∈ R^nu is the control vector (also referred to as the input vector or manipulated variable vector);

• y ∈ R^ny is the controlled variable vector, that is the vector of output variables which are to be controlled.

The notation ·(k+j|k) denotes the value of · predicted for time k+j based on the information available at time k.

We define:

• Hp ≥ 1 as the prediction horizon;

• Hu ≤ Hp as the control horizon;

• ∆u(k + i|k) = u(k + i|k) − u(k + i − 1|k) as the difference between two consecutive

control vectors, that is the control variable increment between the previous step and

the next one.

It is assumed that u(k+i|k) = u(k+Hu−1|k), that is ∆u(k+i|k) = 0, for all Hu ≤ i ≤ Hp−1.

The control variable can change only within the control horizon.

The following definitions are given (all sequences are stacked column vectors):

• Z(k) = [ y(k+1|k); … ; y(k+Hp|k) ] is the sequence of the controlled variables within the prediction horizon;

• T(k) = [ r(k+1|k); … ; r(k+Hp|k) ] is the sequence of the references (or targets) for the controlled variable within the prediction horizon;

• U(k) = [ u(k|k); … ; u(k+Hu−1|k) ] is the sequence of the control variables within the control horizon;

• ∆U(k) = [ ∆u(k|k); … ; ∆u(k+Hu−1|k) ] is the sequence of the control variable increments within the control horizon;

• U_t(k) = [ u_t(k|k); … ; u_t(k+Hu−1|k) ] is the sequence of the target control variables within the control horizon.

Using (2.1) and (2.2), Z(k) can be written as:

Z(k) = Ψ x(k) + Υ u(k−1) + Θ ∆U(k)    (2.3)

where:

• Ψ = [ CA; CA²; … ; CA^Hu; CA^(Hu+1); … ; CA^Hp ];

Page 113: A Vision Based Control System for Autonomous Rendezvous and

2.3. OPTIMIZATION PROBLEM DESCRIPTION 11

• Υ = [ CB; CAB + CB; … ; Σ_{i=0}^{Hu−1} CA^i B; Σ_{i=0}^{Hu} CA^i B; … ; Σ_{i=0}^{Hp−1} CA^i B ];

• Θ =
[ CB                       ⋯   0
  CAB + CB                 ⋯   0
  ⋮                        ⋱   ⋮
  Σ_{i=0}^{Hu−1} CA^i B    ⋯   CB
  Σ_{i=0}^{Hu} CA^i B      ⋯   CAB + CB
  ⋮                            ⋮
  Σ_{i=0}^{Hp−1} CA^i B    ⋯   Σ_{i=0}^{Hp−Hu} CA^i B ] .

Given an estimation of the current state of the system x(k) and the control variable at

the previous step u(k − 1), using equation (2.3) it is possible to compute the sequence of

the controlled variable within the prediction horizon for any given sequence of the control

variable.
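A Matlab sketch of how Ψ, Υ and Θ can be assembled from A, B, C, Hp and Hu is given below; this is an illustrative construction consistent with (2.3), not code taken from the MPC Tools.

function [Psi, Upsilon, Theta] = prediction_matrices(A, B, C, Hp, Hu)
% Build the prediction matrices of Eq. (2.3): Z = Psi*x + Upsilon*u_prev + Theta*DU.
    ny = size(C, 1);
    nu = size(B, 2);
    nx = size(A, 1);
    Psi     = zeros(ny*Hp, nx);
    Upsilon = zeros(ny*Hp, nu);
    Theta   = zeros(ny*Hp, nu*Hu);
    for r = 1:Hp                              % block row r predicts y(k+r|k)
        Psi((r-1)*ny+1:r*ny, :) = C*A^r;
        S = zeros(ny, nu);                    % S = sum_{i=0}^{r-1} C*A^i*B
        for i = 0:r-1
            S = S + C*A^i*B;
        end
        Upsilon((r-1)*ny+1:r*ny, :) = S;
        for c = 1:min(r, Hu)                  % column block c multiplies du(k+c-1|k)
            Sc = zeros(ny, nu);               % Sc = sum_{i=0}^{r-c} C*A^i*B
            for i = 0:r-c
                Sc = Sc + C*A^i*B;
            end
            Theta((r-1)*ny+1:r*ny, (c-1)*nu+1:c*nu) = Sc;
        end
    end
end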

We introduce the following “tracking error” vector:

ε(k) = T (k)−Ψx(k)−Υu(k − 1) (2.4)

This is the difference between the future target trajectory and the free response of the system,

namely the response that would occur over the prediction horizon if no input changes were

made, that is if ∆U(k) = 0.

2.3 Optimization problem description

Assume that an estimate of x(k) is available at time k. The Model Predictive Control action

at time k is obtained by solving the following optimization problem:


Table 2.1: Prediction Model - Matrix/Vector dimensions.

Matrix/Vector   Dimensions
A               n_x × n_x
B               n_x × n_u
C               n_y × n_x
Z(k)            (n_y Hp) × 1
T(k)            (n_y Hp) × 1
U(k)            (n_u Hu) × 1
∆U(k)           (n_u Hu) × 1
U_t(k)          (n_u Hu) × 1
ε(k)            (n_y Hp) × 1
Ψ               (n_y Hp) × n_x
Υ               (n_y Hp) × n_u
Θ               (n_y Hp) × (n_u Hu)

min over { ∆u(k|k), …, ∆u(k+Hu−1|k), ε } of V(k)    (2.5)

where V(k) is the cost function

V(k) = Σ_{i=0}^{Hp−1} [ Σ_{j=1}^{n_y} | w^y_{i+1,j} [ y_j(k+i+1|k) − r_j(k+i+1) ] |²
     + Σ_{j=1}^{n_u} | w^{∆u}_{i,j} ∆u_j(k+i|k) |²
     + Σ_{j=1}^{n_u} | w^u_{i,j} [ u_j(k+i|k) − u_{t,j}(k+i) ] |² ] + ρ_ε ε²    (2.6)

with “( )_j” denoting the j-th component of a vector, subject to:


∆u_{j,min}(i) − ε V^{∆u}_{j,min}(i) ≤ ∆u_j(k+i|k) ≤ ∆u_{j,max}(i) + ε V^{∆u}_{j,max}(i),   j = 1, …, n_u    (2.7)

u_{j,min}(i) − ε V^{u}_{j,min}(i) ≤ u_j(k+i|k) ≤ u_{j,max}(i) + ε V^{u}_{j,max}(i),   j = 1, …, n_u    (2.8)

y_{j,min}(i) − ε V^{y}_{j,min}(i) ≤ y_j(k+i|k) ≤ y_{j,max}(i) + ε V^{y}_{j,max}(i),   j = 1, …, n_y    (2.9)

ε ≥ 0    (2.10)

for i = 0, ..., Hp − 1 and with ∆u(k + h|k) = 0 for h = Hu, ..., Hp − 1. ε ≥ 0 is the “slack”

variable used to relax the constraints on u, ∆u, and y. Note that the cost function V (k) is

minimized with respect to the sequence of input increments ∆U(k) and to the slack variable

ε.

The following definitions are given for the parameters used in the optimization problem

formulation (MPC problem parameters):

• w^y_{i,j}, w^{∆u}_{i,j} and w^u_{i,j} are non-negative scalar weights for the corresponding variables y, ∆u and u; the smaller w, the less important is the behavior of the corresponding variable for the overall performance index;

• u_{j,min}, u_{j,max}, ∆u_{j,min}, ∆u_{j,max}, y_{j,min} and y_{j,max} are lower/upper bounds on the corresponding variables;

• ρ_ε is a weight that penalizes the violation of the constraints; the larger ρ_ε with respect to the input and output weights, the more the constraint violation is penalized;

• V^u_{j,min}, V^u_{j,max}, V^{∆u}_{j,min}, V^{∆u}_{j,max}, V^y_{j,min} and V^y_{j,max} are the non-negative Equal Concern for the Relaxation (ECR) parameters that represent the concern for relaxing the corresponding constraint; the larger V, the softer the constraint.

Once the optimization problem is solved and a sequence of the optimal control variable

increments ∆U? is computed, the MPC controller sets u?(k) = u(k − 1) + ∆u?(k|k), where

∆u?(k|k) is the first element of the optimal sequence ∆U?. This means that only ∆u?(k|k)

is actually used to compute u(k). The remaining samples ∆u(k + i|k), with i > 0, are

discarded, and a new optimization problem based on x(k + 1) is solved at the next step

k + 1.

As you can see from the previous equations, the MPC optimization problem is a Quadratic

Programming (QP) problem with linear inequality constraints. To be solved, the QP problem

must be rewritten into the following standard form:

min_X V(X) = min_X [ ½ X^T H X + G^T X ]   s.t.   Ω X ≤ ω    (2.11)


In the next two sections 2.4 and 2.5 we describe in detail how to obtain the QP problem

in standard form (Equation 2.11) from the MPC problem (Equations 2.5-2.10). The MPC

Algorithm is presented in Section 2.6.

2.4 Cost function in standard form

The cost function given in (2.6) can be written in matrix form as:

V(k) = ‖ Z(k) − T(k) ‖²_{W_y} + ‖ ∆U(k) ‖²_{W_{∆u}} + ‖ U(k) − U_t(k) ‖²_{W_u} + ρ_ε ε²    (2.12)

where W_y, W_{∆u} and W_u are the weighting matrices for the output variable sequence, the control variable increment sequence and the control variable sequence, computed from w^y_{i,j}, w^{∆u}_{i,j} and w^u_{i,j}, respectively.

To write the cost function V (k) in standard form, we must rewrite each of its first three

terms as a quadratic function of ∆U(k). The fourth term is already a quadratic function of

ε.

The first term can be written as:

‖ Z(k) − T(k) ‖²_{W_y} = ‖ Θ ∆U(k) − ε(k) ‖²_{W_y}
                      = [ ∆U^T(k) Θ^T − ε^T(k) ] W_y [ Θ ∆U(k) − ε(k) ]
                      = ∆U^T(k) Θ^T W_y Θ ∆U(k) − 2 ∆U^T(k) Θ^T W_y ε(k) + ε^T(k) W_y ε(k)    (2.13)

The second term is:

‖ ∆U(k) ‖²_{W_{∆u}} = ∆U^T(k) W_{∆u} ∆U(k)    (2.14)

To write the third term as a quadratic function of ∆U(k), we first write U(k) as a function

of ∆U(k):

U(k) = Λ ∆U(k) + 1u(k − 1) (2.15)

where

Page 117: A Vision Based Control System for Autonomous Rendezvous and

2.4. COST FUNCTION IN STANDARD FORM 15

Λ =
[ I  0  ⋯  ⋯  0
  I  I  ⋱      0
  ⋮  ⋮  ⋱  ⋱  ⋮
  I  I  ⋯  I  0
  I  I  ⋯  I  I ]  ∈ R^{(n_u Hu) × (n_u Hu)}    (2.16)

is the map from ∆U(k) to U(k), I is an n_u × n_u identity matrix, 0 is an n_u × n_u null matrix, and

1 = [ I; … ; I ]   (I stacked Hu times)    (2.17)

We can then write

U(k) − U_t(k) = Λ ∆U(k) + 1 u(k−1) − U_t(k) = Λ ∆U(k) + µ(k)    (2.18)

where we have defined

µ(k) = 1 u(k−1) − U_t(k)    (2.19)

Therefore, the third term of (2.12) can be written as:

‖ U(k) − U_t(k) ‖²_{W_u} = ‖ Λ ∆U(k) + µ(k) ‖²_{W_u}
                        = [ ∆U^T(k) Λ^T + µ^T(k) ] W_u [ Λ ∆U(k) + µ(k) ]
                        = ∆U^T(k) Λ^T W_u Λ ∆U(k) + 2 ∆U^T(k) Λ^T W_u µ(k) + µ^T(k) W_u µ(k)    (2.20)

The cost function (2.12) can then be written as:

V(k) = ∆U^T(k) H ∆U(k) + ∆U^T(k) G(k) + c(k) + ρ_ε ε²(k)    (2.21)

where we have defined:

H = Θ^T W_y Θ + W_{∆u} + Λ^T W_u Λ    (2.22)

G(k) = −2 Θ^T W_y ε(k) + 2 Λ^T W_u µ(k)    (2.23)

c(k) = ε^T(k) W_y ε(k) + µ^T(k) W_u µ(k)    (2.24)
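The quantities above map directly to a few Matlab lines. The sketch below assumes that Theta, Psi and Upsilon have been built as in Section 2.2 and that Wy, Wdu, Wu are the block-diagonal weighting matrices of (2.12) (e.g. Wy = kron(eye(Hp), diag(wy.^2)) for time-invariant weights); the variable names are illustrative, and the last two lines use the augmented matrices of (2.31)-(2.32) defined just below.

Lambda = kron(tril(ones(Hu)), eye(nu));             % Eq. (2.16): maps DU(k) to U(k)
one_u  = kron(ones(Hu,1), eye(nu));                 % Eq. (2.17)
H      = Theta'*Wy*Theta + Wdu + Lambda'*Wu*Lambda; % Eq. (2.22)
eps_k  = T - Psi*xk - Upsilon*u_prev;               % tracking error, Eq. (2.4)
mu_k   = one_u*u_prev - Ut;                         % Eq. (2.19)
G      = -2*Theta'*Wy*eps_k + 2*Lambda'*Wu*mu_k;    % Eq. (2.23)
Hbar   = blkdiag(H, rho_eps);                       % augmented Hessian, Eq. (2.31)
Gbar   = [G; 0];                                    % augmented gradient, Eq. (2.32)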


From (2.4) and (2.19), we can write G(k) as a function of T(k), x(k), u(k−1) and U_t(k):

G(k) = G_T T(k) + G_x x(k) + G_{u_{k−1}} u(k−1) + G_{u_t} U_t(k)    (2.25)

where:

G_T = −2 Θ^T W_y    (2.26)

G_x = 2 Θ^T W_y Ψ    (2.27)

G_{u_{k−1}} = 2 Θ^T W_y Υ + 2 Λ^T W_u 1    (2.28)

G_{u_t} = −2 Λ^T W_u    (2.29)

At this point we introduce the variable X, which is the independent variable of the optimization problem:

X(k) = [ ∆U^T(k), ε(k) ]^T    (2.30)

and we define the final parameters of the optimization problem:

H̄ = [ H   0
      0   ρ_ε ]    (2.31)

Ḡ(k) = [ G(k)
         0 ]    (2.32)

The cost function in standard form that the QP solver should minimize is then:

V(k) = X^T(k) H̄ X(k) + X^T(k) Ḡ(k)    (2.33)

Note that c(k) has been dropped, since at each step it is a constant⁴ that does not depend on the optimization variable X(k), and it is therefore not necessary to calculate its value.

4 The value of c(k) is different for each step, but within a single optimization it does not depend on X(k).


2.5 Constraints in standard form

2.5.1 Control variable increment constraints

If we assume that both lower/upper bounds and ECR parameters are constant for all steps,

the constraints on each component j of the control variable increment ∆u for each time step

i can be written as follows:

∆u_{j,min} − ε V^{∆u}_{j,min} ≤ ∆u_j(k+i|k) ≤ ∆u_{j,max} + ε V^{∆u}_{j,max}    (2.34)

for i = 0, …, Hu − 1 and j = 1, …, n_u. This inequality can be written in matrix form for all ∆u components as:

M [ ∆u_1; ∆u_2; … ; ∆u_{n_u} ] + m_ε ε ≤ m    (2.35)

where

M =
[ −1  0  0  ⋯  0
   1  0  0  ⋯  0
   0 −1  0  ⋯  0
   0  1  0  ⋯  0
   ⋮  ⋮  ⋮      ⋮
   0  0  0  ⋯ −1
   0  0  0  ⋯  1 ]    (2.36)


m_ε = − [ V^{∆u}_{1,min}; V^{∆u}_{1,max}; V^{∆u}_{2,min}; V^{∆u}_{2,max}; … ; V^{∆u}_{n_u,min}; V^{∆u}_{n_u,max} ]    (2.37)

m = [ −∆u_{1,min}; ∆u_{1,max}; −∆u_{2,min}; ∆u_{2,max}; … ; −∆u_{n_u,min}; ∆u_{n_u,max} ]    (2.38)

The inequality constraint (2.35) holds for each time step i within the control horizon Hu.

Therefore, the standard form of the inequality constraint on the optimization variable X due

to constraints on the control variable increment sequence within the control horizon Hu can

be written as:

[ E_{∆u} | e_ε ] X ≤ e_{∆u}    (2.39)

where

E_{∆u} = diag( M, …, M )   (M repeated Hu times along the block diagonal)    (2.40)


e_ε = [ m_ε; … ; m_ε ]   (Hu times)    (2.41)

e_{∆u} = [ m; … ; m ]   (Hu times)    (2.42)
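The same pattern is repeated for all three constraint families. As an illustration, the control variable increment blocks above can be assembled in Matlab as follows, assuming du_min, du_max, V_du_min and V_du_max are nu × 1 column vectors of bounds and ECR parameters (a sketch with illustrative names, not the MPC Tools code):

M     = kron(eye(nu), [-1; 1]);                    % Eq. (2.36)
m_eps = -reshape([V_du_min'; V_du_max'], [], 1);   % Eq. (2.37): interleaved -[Vmin; Vmax]
m     =  reshape([-du_min'; du_max'],  [], 1);     % Eq. (2.38): interleaved [-dumin; dumax]
E_du  = kron(eye(Hu), M);                          % Eq. (2.40): M repeated Hu times
e_eps = repmat(m_eps, Hu, 1);                      % Eq. (2.41)
e_du  = repmat(m,     Hu, 1);                      % Eq. (2.42)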

2.5.2 Control variable constraints

Assuming that both lower/upper bounds and ECR parameters are constant for all steps, the

constraints on each component j of the control variable u for each time step i can be written

as follows:

u_{j,min} − ε V^{u}_{j,min} ≤ u_j(k+i|k) ≤ u_{j,max} + ε V^{u}_{j,max}    (2.43)

for i = 0, …, Hu − 1 and j = 1, …, n_u. This inequality can be written in matrix form for all u components as:

N [ u_1; u_2; … ; u_{n_u} ] + n_ε ε ≤ n    (2.44)

where

N = M (2.45)


n_ε = − [ V^{u}_{1,min}; V^{u}_{1,max}; V^{u}_{2,min}; V^{u}_{2,max}; … ; V^{u}_{n_u,min}; V^{u}_{n_u,max} ]    (2.46)

n = [ −u_{1,min}; u_{1,max}; −u_{2,min}; u_{2,max}; … ; −u_{n_u,min}; u_{n_u,max} ]    (2.47)

The inequality constraint (2.44) written in matrix form for each time step i within the

control horizon Hu is:

[ F_u | f_ε ] [ U(k); ε(k) ] ≤ f_u    (2.48)

where

F_u = diag( N, …, N )   (N repeated Hu times along the block diagonal)    (2.49)


f_ε = [ n_ε; … ; n_ε ]   (Hu times)    (2.50)

f_u = [ n; … ; n ]   (Hu times)    (2.51)

Using (2.15), we can obtain the standard form of the inequality constraint on the optimization variable X due to constraints on the control variable sequence within the control horizon Hu:

[ F̄_u | f_ε ] X ≤ F̄_{u,u_{k−1}} u(k−1) + f_u    (2.52)

where

F̄_u = F_u Λ    (2.53)

F̄_{u,u_{k−1}} = −F_u 1    (2.54)

2.5.3 Controlled variable constraints

Assuming that both lower/upper bounds and ECR parameters are constant for all steps,

the constraints on each component j of the controlled variable y for each time step i can be

written as follows:

y_{j,min} − ε V^{y}_{j,min} ≤ y_j(k+i|k) ≤ y_{j,max} + ε V^{y}_{j,max}    (2.55)

for i = 0, …, Hp − 1 and j = 1, …, n_y. This inequality can be written in matrix form for all y components as:

L [ y_1; y_2; … ; y_{n_y} ] + l_ε ε ≤ l    (2.56)

where


L = M (2.57)

l_ε = − [ V^{y}_{1,min}; V^{y}_{1,max}; V^{y}_{2,min}; V^{y}_{2,max}; … ; V^{y}_{n_y,min}; V^{y}_{n_y,max} ]    (2.58)

l = [ −y_{1,min}; y_{1,max}; −y_{2,min}; y_{2,max}; … ; −y_{n_y,min}; y_{n_y,max} ]    (2.59)

The inequality constraint (2.56), written in matrix form for each time step i within the prediction horizon Hp, is:

[ D_y | d_ε ] [ Z(k); ε(k) ] ≤ d_y    (2.60)

where

D_y = diag( L, …, L )   (L repeated Hp times along the block diagonal)    (2.61)


d_ε = [ l_ε; … ; l_ε ]   (Hp times)    (2.62)

d_y = [ l; … ; l ]   (Hp times)    (2.63)

Using (2.3), we can obtain the standard form of the inequality constraint on the optimization variable X due to constraints on the controlled variable sequence within the prediction horizon Hp:

[ D̄_y | d_ε ] X ≤ D̄_{y,x} x(k) + D̄_{y,u_{k−1}} u(k−1) + d_y    (2.64)

where

D̄_y = D_y Θ    (2.65)

D̄_{y,x} = −D_y Ψ    (2.66)

D̄_{y,u_{k−1}} = −D_y Υ    (2.67)

2.5.4 Standard form of the inequality constraint equation

The standard form of the inequality constraint on the optimization variable X(k) = [ ∆U^T(k), ε(k) ]^T is:

[ E_{∆u}   e_ε
  F̄_u      f_ε
  D̄_y      d_ε
  0        −1 ]  X(k)  ≤  [ e_{∆u}
                            F̄_{u,u_{k−1}} u(k−1) + f_u
                            D̄_{y,x} x(k) + D̄_{y,u_{k−1}} u(k−1) + d_y
                            0 ]    (2.68)

This is in the form:

ΩX ≤ ω(k) (2.69)


with

Ω = [ E_{∆u}   e_ε
      F̄_u      f_ε
      D̄_y      d_ε
      0        −1 ]    (2.70)

ω(k) = [ e_{∆u}
         F̄_{u,u_{k−1}} u(k−1) + f_u
         D̄_{y,x} x(k) + D̄_{y,u_{k−1}} u(k−1) + d_y
         0 ]
      = [ e_{∆u}
          ω_u(k)
          ω_y(k)
          0 ]    (2.71)

In Table 2.6, nc,y is the number of constraints on the controlled variable.

2.6 The MPC algorithm

The QP problem in standard form (Equation 2.11) is solved using Dantzig-Wolfe's active set method [10]. The Dantzig-Wolfe inputs and outputs are listed in Table 2.7⁵.

The next Subsection 2.6.1 describes how to:

• Compute the QP solver inputs from the current dynamic state xk, the previous control

action uk−1, the controlled variable reference sequence T and the MPC parameters;

• Compute the optimal control action u∗k from the QP solver outputs.

2.6.1 MPC QP solver

The QP solver inputs tab_i and bas_i are computed from x_k, u_{k−1}, T and the MPC parameters as follows.

The following parameters are defined:

mnu = n_u Hu + 1    (2.72)

nc = 2 n_u Hu + 2 n_u Hu + n_{c,y} Hp + 1    (2.73)

HH = H̄^{−1} ∈ R^{mnu × mnu} = const    (2.74)

5 See [10] for a detailed description of the Dantzig-Wolfe algorithm.


TAB = [ −HH       HH Ω^T
        Ω HH     −Ω HH Ω^T ]  = const    (2.75)

basis_i = [ basis_i1 ; basis_i2 ] = [ α G_k − x_min ; σ G_k + ω_k ]    (2.76)

where

α = −HH = const    (2.77)

σ = Ω HH = const    (2.78)

mnu is the dimension of the optimization vector, nc is the number of constraints, TAB

is the initial tableau and basisi is the initial basis. As you can see the initial tableau TAB

is constant, so it can be computed off-line.

basis_i1 can be written as

α G_k − x_min = α_T T + α_x x_k + α_{u_{k−1}} u_{k−1} − x_min    (2.79)

where

α_T = α_1 G_T    (2.80)

α_x = α_1 G_x    (2.81)

α_{u_{k−1}} = α_1 G_{u_{k−1}}    (2.82)

α_1 = α( : , 1 : end−1 )    (2.83)

basis_i2 can be written as

σ G_k + ω_k = σ_T T + σ_x x_k + σ_{u_{k−1}} u_{k−1} + ω_k    (2.84)

We can also write


σ G_k + ω_k = [ σ_{y,T}; σ_{∆u,T}; σ_{u,T}; σ_{ε,T} ] T + [ σ_{y,x}; σ_{∆u,x}; σ_{u,x}; σ_{ε,x} ] x_k + [ σ_{y,u_{k−1}}; σ_{∆u,u_{k−1}}; σ_{u,u_{k−1}}; σ_{ε,u_{k−1}} ] u_{k−1} + [ ω_y(k); e_{∆u}; ω_u(k); 0 ]    (2.85)

            = [ σ_{y,T}; σ_{∆u,T}; σ_{u,T}; σ_{ε,T} ] T + [ σ_{y,x} + D̄_{y,x}; σ_{∆u,x}; σ_{u,x}; σ_{ε,x} ] x_k + [ σ_{y,u_{k−1}} + D̄_{y,u_{k−1}}; σ_{∆u,u_{k−1}}; σ_{u,u_{k−1}} + F̄_{u,u_{k−1}}; σ_{ε,u_{k−1}} ] u_{k−1} + [ d_y; e_{∆u}; f_u; 0 ]    (2.86)

We define:

bas_T = [ α_T; σ_{y,T}; σ_{∆u,T}; σ_{u,T}; σ_{ε,T} ]    (2.87)

bas_x = [ α_x; σ_{y,x} + D̄_{y,x}; σ_{∆u,x}; σ_{u,x}; σ_{ε,x} ]    (2.88)


bas_{u_{k−1}} = [ α_{u_{k−1}}; σ_{y,u_{k−1}} + D̄_{y,u_{k−1}}; σ_{∆u,u_{k−1}}; σ_{u,u_{k−1}} + F̄_{u,u_{k−1}}; σ_{ε,u_{k−1}} ]    (2.89)

bas_cnst = [ −x_min; d_y; e_{∆u}; f_u; 0 ]    (2.90)

so that basis_i can be written as

basis_i = bas_T T + bas_x x_k + bas_{u_{k−1}} u_{k−1} + bas_cnst    (2.91)

If T is constant, we can define

bas_{T,cnst} = bas_T T + bas_cnst    (2.92)

and basis_i can then be written as:

basis_i = bas_{T,cnst} + bas_x x_k + bas_{u_{k−1}} u_{k−1}    (2.93)

Equations (2.91) and (2.93) are used in the on-line updating of basis_i. bas_T (or bas_{T,cnst}), bas_x and bas_{u_{k−1}} are constant, so they can be computed off-line and then uploaded during the initialization phase of the MPC algorithm. Dimensions of the on-line MPC parameters are listed in Table 2.8.

The optimal control action u⋆_k can be calculated from the QP solver outputs as follows.

The sequence of the optimal control variable increments ∆U⋆ is first computed from bas and il:

∆U⋆(j) = bas( il(j) ) + x_min(j)    (2.94)

The current optimal control action is then computed as:

u⋆(k) = u(k−1) + ∆u⋆(k|k)    (2.95)
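As a sanity check of the whole pipeline (and not as a replacement for the Dantzig-Wolfe solver used on-line), the standard-form QP can also be solved in Matlab with quadprog from the Optimization Toolbox. The sketch below assumes Hbar, Gbar, Omega and omega have been assembled as in Sections 2.4 and 2.5, and that u_prev is the previous control action:

% quadprog minimizes (1/2)*X'*Hq*X + f'*X subject to A*X <= b, so the cost of
% Eq. (2.33), X'*Hbar*X + X'*Gbar, is passed as Hq = 2*Hbar, f = Gbar.
X       = quadprog(2*Hbar, Gbar, Omega, omega);
dU_opt  = X(1:end-1);                % optimal increment sequence DU*, Eq. (2.30)
epsilon = X(end);                    % slack variable
u_opt   = u_prev + dU_opt(1:nu);     % Eq. (2.95): only the first move is applied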


Figure 2.2: MPC QP computing algorithm - the MPC Engine assembles TAB and basis_i from the MPC parameters, the dynamic state x_k, the previous control u_{k−1} and the reference T; the QP solver returns bas and il, from which ∆U⋆ is extracted (off-line data: TAB, bas_T, bas_x, bas_{u_{k−1}}, bas_cnst).

where ∆u?(k|k) is the first element of the optimal sequence ∆U?.

2.6.2 MPC QP computing algorithm

The MPC QP computing algorithm is shown in Figure 2.2. Off-line computations are in

green, while on-line computations are in cyan and red.


Table 2.2: Cost function - Matrix/Vector dimensions.

Matrix/Vector   Dimensions
ρ_ε             1 × 1
ε               1 × 1
w^y             n_y × 1
w^u             n_u × 1
w^{∆u}          n_u × 1
W_y             (n_y Hp) × (n_y Hp)
W_u             (n_u Hu) × (n_u Hu)
W_{∆u}          (n_u Hu) × (n_u Hu)
Λ               (n_u Hu) × (n_u Hu)
1               (n_u Hu) × n_u
µ(k)            (n_u Hu) × 1
G_T             (n_u Hu) × (n_y Hp)
G_x             (n_u Hu) × n_x
G_{u_{k−1}}     (n_u Hu) × n_u
G_{u_t}         (n_u Hu) × (n_u Hu)
G(k)            (n_u Hu) × 1
H               (n_u Hu) × (n_u Hu)
Ḡ(k)            (n_u Hu + 1) × 1
H̄               (n_u Hu + 1) × (n_u Hu + 1)
X(k)            (n_u Hu + 1) × 1
V(k)            1 × 1

Table 2.3: Control variable increment constraints - Matrix/Vector dimensions.

Matrix/Vector   Dimensions
M               (2 n_u) × n_u
m_ε             (2 n_u) × 1
m               (2 n_u) × 1
E_{∆u}          (2 n_u Hu) × (n_u Hu)
e_ε             (2 n_u Hu) × 1
e_{∆u}          (2 n_u Hu) × 1


Table 2.4: Control variable constraints - Matrix/Vector dimensions.

Matrix/Vector     Dimensions
N                 (2 n_u) × n_u
n_ε               (2 n_u) × 1
n                 (2 n_u) × 1
F_u               (2 n_u Hu) × (n_u Hu)
f_ε               (2 n_u Hu) × 1
f_u               (2 n_u Hu) × 1
F̄_u               (2 n_u Hu) × (n_u Hu)
F̄_{u,u_{k−1}}     (2 n_u Hu) × n_u

Table 2.5: Controlled variable constraints - Matrix/Vector dimensions.

Matrix/Vector     Dimensions               Dimensions (general case, n_{c,y} constraint rows)
L                 (2 n_y) × n_y            n_{c,y} × n_y
l_ε               (2 n_y) × 1              n_{c,y} × 1
l                 (2 n_y) × 1              n_{c,y} × 1
D_y               (2 n_y Hp) × (n_y Hp)    (n_{c,y} Hp) × (n_y Hp)
d_ε               (2 n_y Hp) × 1           (n_{c,y} Hp) × 1
d_y               (2 n_y Hp) × 1           (n_{c,y} Hp) × 1
D̄_y               (2 n_y Hp) × (n_u Hu)    (n_{c,y} Hp) × (n_u Hu)
D̄_{y,x}           (2 n_y Hp) × n_x         (n_{c,y} Hp) × n_x
D̄_{y,u_{k−1}}     (2 n_y Hp) × n_u         (n_{c,y} Hp) × n_u

Table 2.6: Inequality constraint equations - Matrix/Vector dimensions.

Matrix/Vector   Dimensions
Ω               (2 n_u Hu + 2 n_u Hu + n_{c,y} Hp + 1) × (n_u Hu + 1)
ω               (2 n_u Hu + 2 n_u Hu + n_{c,y} Hp + 1) × 1


Table 2.7: Dantzig-Wolfe active set method inputs and outputs.

Inputs
tab_i       initial tableau
bas_i       initial basis
ib_i        initial setting of ib
il_i        initial setting of il
maxiter     maximum number of iterations

Outputs
bas         final basis vector
ib          index vector for the variables
il          index vector for the Lagrange multipliers
iter        iteration counter
tab         final tableau

Table 2.8: On-line MPC parameters - Matrix/Vector dimensions.

Matrix/Vector      Dimensions
Ω                  nc × mnu
HH                 mnu × mnu
TAB                (mnu + nc) × (mnu + nc)
bas_T              (mnu + nc) × (n_y Hp)
bas_cnst           (mnu + nc) × 1
bas_{T,cnst}       (mnu + nc) × 1
bas_x              (mnu + nc) × n_x
bas_{u_{k−1}}      (mnu + nc) × n_u
basis_i            (mnu + nc) × 1


Chapter 3

MPC TOOLS FOR SPHERES

3.1 MPC Engine for SPHERES

Due to its high computational time when run onboard SPHERES, the MPC Engine (see

2.2) can not be used directly onboard SPHERES when a control frequency fctrl < 0.1Hz is

required. In these situations, the MPC problem is solved using the flight laptop and the

computed control forces are transmitted back to the SPHERES satellite for the actuation.

To manage the time delay due to the MPC computation and to the MPC data exchange between SPHERES and the flight laptop, given an estimate of the dynamic state x_k of the plant at time t_k, a model of the plant is first used to propagate the dynamic state f control steps forward in time, obtaining x_{k+f}. The new estimated dynamic state at time t_{k+f} is then used as the initial condition for the MPC Engine, in order to obtain a sequence of f control accelerations to be actuated starting at time t_{k+f}, i.e. f control steps forward in time with respect to the initial time t_k.
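A minimal sketch of this forward propagation is shown below, assuming a discrete-time model (Ad, Bd) of the translational dynamics and the previously commanded acceleration sequence u_prev_seq (nu × f); the variable names are illustrative.

xkf = xk;                                  % state estimate at t_k
for i = 1:f
    xkf = Ad*xkf + Bd*u_prev_seq(:,i);     % propagate one control step ahead
end
% xkf is the predicted state at t_{k+f}, used as the initial condition of the MPC
% problem; its solution u(k+f),...,u(k+2f-1) is actuated starting at t_{k+f}.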

Figure 3.1 shows the block diagram and the data flow when the position and velocity vec-

tors of the SPHERES satellite are provided by the Vision-Based Relative Navigation System

onboard the Capture Mechanism. We obtain a similar configuration when the SPHERES

satellite state is given by the SPHERES satellite itself using the Ultrasound Global Metrology

System.

The time schedule of the MPC operations are presented in Figure 3.2, where:

• ∆test is the time interval needed by the Vision-Based Relative Navigation System to

estimate position and velocity of the SPHERES satellite w.r.t. the Capture Mechanism;

• ∆ttxCPT−>MAT is the transmission delay from the Capture Mechanism to Matlab;

• ∆tMPC is the time interval needed to solve the MPC problem; it includes forward

propagation, QP data updating, QP solver and control sequence updating;

• ∆ttxMAT−>SPH is the transmission delay from Matlab to the SPHERES satellite.

MPC data is exchanged between SPHERES and Matlab using two types of data packets:


Figure 3.1: Block diagram and data flow of the MPC on-line operations (Capture Mechanism: vision system and relative position estimator; Control Station laptop: Flight GUI and Matlab with Kalman filter, plant model and MPC Engine; SPHERES satellite: control sequence update and thruster actuation).

Figure 3.2: Time schedule of the MPC on-line operations (the sum ∆t_est + ∆t_tx,CPT→MAT + ∆t_MPC + ∆t_tx,MAT→SPH must be completed before t_{k+f}, i.e. the remaining margin must be > 0).


• MPC computation data packet (see Table 3.1). This data packet is transmitted from SPHERES to Matlab to execute the MPC computation, using the position and velocity vectors transmitted with this packet as the initial dynamic state.

• MPC control acceleration data packet (see Table 3.2). This data packet, transmitted from Matlab to SPHERES, includes the control acceleration sequence that has to be actuated on SPHERES.

Table 3.1: MPC computation data packet.

Byte    Size (bits)   Field       Type           Description
1-4     4 (32)        test time   unsigned int   current test time
5-8     4 (32)        pos x       float          current x position
9-12    4 (32)        pos y       float          current y position
13-16   4 (32)        vel x       float          current x velocity
17-20   4 (32)        vel y       float          current y velocity
21-24   4 (32)        acc x       float          previous x control acceleration
25-28   4 (32)        acc y       float          previous y control acceleration

Table 3.2: MPC control acceleration data packet.

Byte    Size (bits)   Field              Type             Description
0       1 (8)         received packets   unsigned short   number of received MPC data packets
1       1 (8)         total packets      unsigned short   number of total MPC data packets for each MPC step
2-5     4 (32)        tk                 int              initial time of the MPC data
6-29    24 (192)      u matrix           float            matrix with the x and y MPC control acceleration sequence

In Table 3.2, the received packets and total packets fields make it possible to split the MPC control acceleration data into two or more data packets of at most 32 bytes each. Each column of the u matrix represents the x and y components of a control vector at a given time instant.
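For illustration, one MPC control acceleration packet could be assembled in Matlab as sketched below; the byte packing details and the variable names rx_count, tot_count, tk and U_seq are assumptions based only on the layout of Table 3.2.

pkt        = zeros(1, 30, 'uint8');
pkt(1)     = uint8(rx_count);                  % byte 0: received packets
pkt(2)     = uint8(tot_count);                 % byte 1: total packets per MPC step
pkt(3:6)   = typecast(int32(tk), 'uint8');     % bytes 2-5: initial time of the MPC data
u_xy       = single(U_seq(1:2, 1:3));          % three (x, y) control vectors
pkt(7:30)  = typecast(u_xy(:)', 'uint8');      % bytes 6-29: 24 bytes of control data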

3.2 Matlab MPC Tools

The Matlab MPC Tools consist of the following elements:


1. the Matlab MPC Engine, i.e. the CONTROL_MPC.m Matlab class, with parameters and functions that allow the user to initialize and solve an MPC problem;

2. the QP Solver Engine, written in C and Mex-interfaced with Matlab;

3. the Multi-Parametric Toolbox (MPT)1;

4. a set of m-functions and m-files that allow the user to tune the MPC parameters and run some preliminary maneuvers;

5. the gspInitDisplay.m and gspUpdateDisplay.m m-functions to interface the Matlab MPC Engine with SPHERES through the SPHERES Flight GUI.

To make MPC run on Matlab, you first need to:

• install the Multi-Parametric Toolbox (MPT) and include the MPT main folder with

all its sub-folders in the Matlab directories using:

addpath(genpath(‘PATH_TO_MPT_MAIN_DIRECTORY’));

• include the MPC OOP directory: addpath(genpath(‘PATH_TO_MPT_OOP’)).

After that, execute the program Test_MPC_Algorithm_Matlab_06_OOP.m. If an error concerning the qpsolver function is reported, then mex-compile qpsolver.c by typing mex qpsolver.c in the Matlab Command Window.

To use the MPC Engine in Matlab, both for off-line and on-line MPC computation, the user first has to initialize the MPC class following these steps:

• initialize the plant model: mympc.SetupPlant(Dt,Ac,Bc,Cc,Dc,Hp,Hu);

• Initialize the weighting matrices: mympc.SetupWeights(wy,wu,wDu,rho_epsilon);

• initialize constraints types and constraints:

– mympc.SetupConstraintsType(y_const_flag,y_constr_type,

u_constr_flag,Du_constr_flag);

– mympc.SetupConstraints(y_min,y_max,L,l_epsilon,

u_min,u_max,Du_min,Du_max,VV_y,VV_u,VV_Du);

• initialize the reference state trajectory:

mympc.SetupTrajectoryType(state_ref_traj_type,state_ref_traj);

• initialize the forward computation parameters:

mympc.SetupForwardComputation(forward_compute_flag,forward_steps);

1 MPT can be downloaded at http://control.ee.ethz.ch/research/software.en.html


• update the off-line MPC parameters: mympc.UpdateMPCParameters().

where mympc is an instance of the CONTROL_MPC class.

At this point, the MPC Engine can be called using the function:

mympc.solve(tk,xk,uk_1,T,Duk_forward_sequence).

A detailed description of the MPC parameters and functions is provided in the CONTROL_MPC class Matlab code.
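A minimal end-to-end usage sketch of the Matlab MPC Engine is shown below; the constructor call, the argument values and the returned quantities are illustrative assumptions, and the authoritative description remains the one in the CONTROL_MPC class code.

mympc = CONTROL_MPC();                                   % assumed default constructor
mympc.SetupPlant(Dt, Ac, Bc, Cc, Dc, Hp, Hu);            % plant model and horizons
mympc.SetupWeights(wy, wu, wDu, rho_epsilon);            % cost function weights
mympc.SetupConstraintsType(y_const_flag, y_constr_type, u_constr_flag, Du_constr_flag);
mympc.SetupConstraints(y_min, y_max, L, l_epsilon, ...
                       u_min, u_max, Du_min, Du_max, VV_y, VV_u, VV_Du);
mympc.SetupTrajectoryType(state_ref_traj_type, state_ref_traj);
mympc.SetupForwardComputation(forward_compute_flag, forward_steps);
mympc.UpdateMPCParameters();                             % off-line MPC parameters
% on-line, at every control step:
uk = mympc.solve(tk, xk, uk_1, T, Duk_forward_sequence); % assumed return value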

3.3 C MPC Tools

The C MPC Tools for SPHERES consist of the following Guest Scientist Program (GSP) C

source and header files:

• gsp.c and gsp.h that are primary source and header files with a standard SPHERES

GSP structure;

• gsp MPC FORWARD.c and gsp MPC FORWARD.h with tests and maneuvers for the

MPC MOSR scenario;

• mpc forward.c and mpc forward.h with the SPHERES side MPC engine;

• mpc param forward.h with MPC parameters;

• gsp PD.c and gsp PD.h with a custom PD controller for the rendezvous and capture

maneuver;

• gsp types.h with custom gsp types;

• gsp utils.c and gsp utils.h with some utility functions.

The previous GSP files have to be compiled with the SPHERES core library and uploaded

to the SPHERES satellite.


Chapter 4

TEST RESULTS

Two types of tests were performed to evaluate the performance of an MPC-based controller in executing the rendezvous and capture maneuver for the MOSR scenario:

1. preliminary simulation tests, using both a Matlab simulator and the SPHERES simulator (Section 4.1);

2. hardware tests, using the MIT Flat Floor SPHERES test-bed (Section 4.2).

4.1 Preliminary Simulation Results

In this section we evaluate the behavior of an MPC-based controller in executing the rendezvous and capture maneuver, comparing its performance with that of a standard Proportional-Derivative (PD) controller.

Two simulation environments were considered:

• the Matlab simulator, an idealized environment with no representative SPHERES properties, no sensor/actuator noise and no attitude dynamics;

• the SPHERES simulator, a representative 6-degree-of-freedom environment of the SPHERES test-bed.

4.1.1 Matlab Simulator MPC vs Matlab Simulator PD

The initial dynamic state and the final or target position for this simulation are as follows:

xi = [0.9m, 1m, 0.48m, 0m/s, 0m/s, 0m/s]T (4.1)

pf = [−0.75, 0, 0 ]T m (4.2)

PD and MPC parameters are tuned in order to obtain comparable settling time for the

position error components profiles. Both MPC and PD parameters are listed in Table 4.1.

Other parameters used for this simulation are:


• control step ∆tctrl = 1 s;

• control acceleration constraints in all directions: umin = −10−2m/s2, umax = 10−2m/s2;

• no forward computation in MPC (see Section 3.1).

Simulation results for both MPC and PD controller are compared in the following figures:

• Figure 4.1: 3D position trajectory;

• Figure 4.2: position error components (current - reference) vs. time;

• Figure 4.3: velocity error components (current - reference) vs. time;

• Figure 4.4: control acceleration components vs. time;

• Figure 4.5: cumulative ∆v requirement vs. time.

From these results you can see that the MPC total ∆v requirement is about 11% less than

the PD result, with similar settling times for all the position error components of both

controllers. This is due to MPC’s prediction capability and actuator saturation handling

ability 1 (see Figure 4.4).
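The cumulative ∆v requirement plotted in Figure 4.5 can be obtained from the commanded PWC acceleration history; a possible Matlab computation is sketched below (whether a 1-norm or a 2-norm of the per-axis accelerations is used in the thesis plots is an assumption).

% u_hist: 3 x N matrix of commanded PWC accelerations [m/s^2], dt_ctrl = 1 s
dv_step = sum(abs(u_hist), 1) * dt_ctrl;   % Delta-v spent in each control step
cum_dv  = cumsum(dv_step);                 % cumulative Delta-v requirement [m/s]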

Table 4.1: MPC and PD parameters - MPC vs PD in the Matlab simulator.

PD                                  MPC
G k_P = 10^-2 [5, 8, 15]            G w_y = 10^-5 [1, 15, 15]^T
G k_D = 10^-1 [5.00, 4.65, 6.40]    G w_u = 10^-1 [1, 1, 1]^T
                                    Hu = 6
                                    Hp = 8 Hu = 48

4.1.2 SPHERES Simulator MPC vs SPHERES Simulator PD

The initial dynamic state and the final or target position for this simulation are as follows:

xi = [0.9m, 1m, 0.48m, 0m/s, 0m/s, 0m/s]T (4.3)

pf = [−0.75, 0, 0 ]T m (4.4)

PD and MPC parameters are tuned in order to obtain comparable settling time for the

position error components profiles. Both MPC and PD parameters are listed in Table 4.2.

Other parameters used for this simulation are:

1 MPC solves an optimal control problem on-line, for the current state of the plant, rather than determining off-line a feedback policy that provides the control actions for all states.


Figure 4.1: 3D Position Trajectory - Matlab Sim. MPC vs Matlab Sim. PD.

Figure 4.2: Position components error vs. time - Matlab Sim. MPC vs Matlab Sim. PD.


Figure 4.3: Velocity components error vs. time - Matlab Sim. MPC vs Matlab Sim. PD.

Figure 4.4: Control acceleration components vs. time - Matlab Sim. MPC vs Matlab Sim. PD.


Figure 4.5: Cumulative ∆v requirement vs. time - Matlab Sim. MPC vs Matlab Sim. PD (MPC: −11.65% w.r.t. PD).

• Control step ∆tctrl = 1 s;

• Control acceleration constraints in all directions: umin = −10−2m/s2, umax = 10−2m/s2;

• No forward computation in MPC (see Section 3.1).

Results for this preliminary test are shown in Figure 4.6 to Figure 4.8. As you can see from these results, the MPC total ∆v requirement is about 13% less than the PD one, with similar settling times for all the position error components of both controllers.

Note also that the SPHERES simulation results were obtained with the same initial and final system states and with very similar PD gains and MPC weights as the Matlab simulator results. As expected, a delayed behavior in both position and velocity is observed between the Matlab and SPHERES simulator results, for both PD and MPC.

We can conclude that also in a more representative simulation environment, i.e. the SPHERES simulator, an MPC-based controller performs better than a classical PD controller.


Table 4.2: MPC and PD parameters - MPC vs PD in the SPHERES simulator.

PD                                  MPC
G k_P = 10^-2 [5, 8.5, 19.1]        G w_y = 10^-5 [1, 15, 15]^T
G k_D = 10^-1 [5.7, 4.65, 6.6]      G w_u = 10^-1 [1, 1, 1]^T
                                    Hu = 6
                                    Hp = 8 Hu = 48

Figure 4.6: Position components error vs. time - SPHERES Sim. MPC vs SPHERES Sim. PD.


Figure 4.7: Velocity components error vs. time - SPHERES Sim. MPC vs SPHERES Sim. PD.

Figure 4.8: Cumulative ∆v requirement vs. time - SPHERES Sim. MPC vs SPHERES Sim. PD (MPC: −13.13% w.r.t. PD).


4.2 SPHERES Test Results

4.2.1 SPHERES Test Plan

The SPHERES test plan is presented in Table 4.3. As you can see, Test 1 is used for both

debugging C and Matlab codes and evaluating the MPC time delay, i.e. MPC computing

time and data transmission between SPHERES and Matlab (see Figure 3.2 also). The

performance of the MPC and PD controllers in executing the MOSR rendezvous and capture maneuver is evaluated by running Test 2 and Test 3, respectively.

Initial position and velocity, final position and other parameters for Test 2 and Test 3

are listed in Table 4.4.

Table 4.3: SPHERES Test Plan (SPHERES MoSR MPC).

Test   Maneuver   Run time [s]   Description
1      1          70             MPC time delay evaluation. Position and velocity vectors are taken from the global metrology system.
2                                Rendezvous and capture maneuver using an MPC controller. Position and velocity vectors are taken from the global metrology system.
       1          40             state estimator convergence
       2          80             initial position acquisition
       3          1              initial state acquisition
       4          100            rendezvous and capture maneuver
       5          30             station keeping
3                                Rendezvous and capture maneuver using a PD controller. Position and velocity vectors are taken from the global metrology system.
       1          40             state estimator convergence
       2          80             initial position acquisition
       3          1              initial state acquisition
       4          100            rendezvous and capture maneuver
       5          30             station keeping

4.2.2 Test 1 - MPC computing time evaluation

The MPC computing time is between 300 ms and 600 ms with active constraints on both the output variable and the control variable.

4.2.3 Test 2 - MPC rendezvous maneuver

Test results are shown in the following figures:


Table 4.4: Test 2 and Test 3 parameters.

Test 2 parameters
FF p_i      [−0.4737, −0.1611]^T m
FF v_i      [−3.15, 0.13]^T · 10^-3 m/s
FF p_f      [0.7, 0.4]^T m
∆t_ctrl     1 s
FF w_y      [3 · 10^-5, 1.35 · 10^-1]^T
FF w_u      [0.1, 0.1]^T
Hu          6
Hp          8 Hu = 48
u_max       4 · 10^-3 m/s²
u_min       −4 · 10^-3 m/s²

Test 3 parameters
FF p_i      [−0.5035, −0.1213]^T m
FF v_i      [−0.0008, 0.0011]^T · 10^-3 m/s
FF p_f      [0.7, 0.4]^T m
∆t_ctrl     1 s
FF k_p      [0.08, 0.22]^T
FF k_d      [1, 1.674]^T

• Figure 4.9: computing time to solve the MPC problem in Matlab; notice that the MPC

problem for this test is solved in less than 60ms with less than 20 iterations;

• Figure 4.10: position profiles;

• Figure 4.12: 2D position trajectory, with initial position in blue and final position in

green;

• Figure 4.11: velocity profiles;

• Figure 4.13: control acceleration profiles;

• Figure 4.14: cumulative ∆v requirement profile.

4.2.4 Test 3 - PD rendezvous maneuver

Results for Test 3 are shown from Figure 4.15 to Figure 4.19.

4.2.5 MPC - PD comparison

Results for MPC - PD comparison are shown from Figure 4.20 to Figure 4.24.

4.2.6 Considerations

A noisy and delayed behavior between the Matlab simulator and the SPHERES test results, for both PD and MPC, is observed. This may be due to the following causes:


Figure 4.9: MPC computing time in Matlab - Test 2.

Figure 4.10: MPC position components error vs. time - Test 2.


Figure 4.11: MPC velocity components error vs. time - Test 2.

Figure 4.12: MPC 2D position trajectory - Test 2.


Figure 4.13: MPC control acceleration components vs. time - Test 2.

Figure 4.14: MPC cumulative ∆v requirement vs. time - Test 2.


Figure 4.15: PD position components error vs. time - Test 3.

Figure 4.16: PD velocity components error vs. time - Test 3.


Figure 4.17: PD 2D position trajectory - Test 3.

Figure 4.18: PD control acceleration components vs. time - Test 3.


Figure 4.19: PD cumulative ∆v requirement vs. time - Test 3.

Figure 4.20: Position components error vs. time - MPC-PD comparison.


Figure 4.21: Velocity components error vs. time - MPC-PD comparison.

Figure 4.22: 2D position trajectory - MPC-PD comparison.


Figure 4.23: Control acceleration components vs. time - MPC-PD comparison.

Figure 4.24: Cumulative ∆v requirement vs. time - MPC-PD comparison.


• friction between the air carriage and the flat floor, which impedes the SPHERES motion;

• noise on the SPHERES estimated state;

• because of the minimum pulse width, the control acceleration that is actually performed is different from the computed one;

• two different control actuation schemes were used: PWC in Matlab and PWM on SPHERES;

• noise of the actuator system;

• time delay in actuating the control actions.

Both the MPC and PD controllers are effective, since the final target position is reached with a small final error. This steady-state error is mainly due to the air carriage - flat floor friction and to the minimum control force pulse that the (real) propulsion system can apply. An integral component in the control policy should remove this steady-state error.

As you can see from Figure 4.18, the PD X and Y control actions and the MPC Y control action are very noisy. This is due in part to the estimation uncertainty, but also to the use of too high gains for PD and weights for MPC. Further tests should be run to properly tune the PD gains and the MPC weights. Nevertheless, analyzing the control profiles of the first part of the rendezvous maneuver, it is possible to conclude that the MPC controller can reduce the total ∆v requirement by 4%-5% w.r.t. the PD controller. A reduction in MPC performance was expected since, for example, the uncertainty of the SPHERES estimated state propagates over the prediction horizon, which in turn reduces its prediction accuracy and optimality.

Further tests need to be run on the hardware, i.e. the SPHERES test bed, to fully characterize an MPC-based controller.

4.3 Conclusions

In this work we have:

• developed a set of Matlab and C tools to run an MPC-based controller on SPHERES;

• shown that an MPC-based controller can reduce the total ∆v requirement w.r.t. a classical PD controller, with similar settling times, by 13% in a representative simulation environment, i.e. the SPHERES simulator, and by 4%-5% on the real system, i.e. the SPHERES Flat Floor test bed. The latter result needs to be verified with further tests.

Further tests need to be run using the SPHERES test bed to:

• Properly tune PD gains and MPC weights;

• Show cases in which a PD controller fails while an MPC controller does not;


• Use an MPC controller together with the vision-based estimation algorithm used in MOSR, resulting in a full test of the GNC system.


Bibliography

[1] J. M. Maciejowski, Predictive Control with Constraints. Prentice Hall, 2002.

[2] J. A. Rossiter, Model-Based Predictive Control - A Practical Approach. CRC Press,

2003.

[3] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, “Constrained Model

Predictive Control: Stability and Optimality,” Automatica, vol. 36, pp. 789–814, 2000.

[4] J. H. Lee, “Model Predictive Control: Review of the Three Decades of Development,”

Journal of Control, Automation and Systems, vol. 9, no. 3, pp. 415–424, 2011.

[5] A. Richards and J. P. How, “Performance Evaluation of Rendezvous Using Model Predictive Control,” AIAA Guidance, Navigation, and Control Conference and Exhibit, 2003.

[6] L. Breger and J. P. How, “Safe Trajectories for Autonomous Rendezvous of Spacecraft,”

Journal of Guidance, Control, and Dynamics, vol. 31, pp. 1478–1489, 2008.

[7] A. Bemporad, M. Morari, and N. L. Ricker, Model Predictive Control Toolbox - User’s

Guide and User’s Manual. The MathWorks, 2012.

[8] A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos, “The explicit linear quadratic regulator for constrained systems,” Automatica, vol. 38, pp. 3–20, 2002.

[9] P. Tondel, T. A. Johansen, and A. Bemporad, “An algorithm for multi-parametric

quadratic programming and explicit mpc solutions,” Automatica, vol. 39, pp. 489–497,

2003.

[10] G. B. Dantzig, Linear Programming and Extensions. Princeton University Press, Princeton, 1963.
