
Use of modeling and simulation in pulp and paper industry

Editor: Erik Dahlquist

A production by COST E 36


Book title: Use of modeling and simulation in pulp and paper industry, 2008. ISBN 978-91-977493-0-5

Editor: Erik Dahlquist

© COST Office, 2008. No permission to reproduce or utilise the contents of this book by any means is necessary, other than in the case of images, diagrams or other material from other copyright holders. In such cases, permission of the copyright holders is required. This book may be cited as: COST Action number - title of the publication. Neither the COST Office nor any person acting on its behalf is responsible for the use which might be made of the information contained in this publication. The COST Office is not responsible for the external websites referred to in this publication.


COST, the acronym for European COoperation in the field of Scientific and Technical Research, is the oldest and widest European intergovernmental network for cooperation in research. Established by the Ministerial Conference in November 1971, COST is presently used by the scientific communities of 35 European countries to cooperate in common research projects supported by national funds.

The funds provided by COST - less than 1% of the total value of the projects - support the COST cooperation networks (COST Actions) through which, with EUR 30 million per year, more than 30,000 European scientists are involved in research having a total value which exceeds EUR 2 billion per year. This is the financial worth of the European added value which COST achieves.

A "bottom up approach" (the initiative of launching a COST Action comes from the European scientists themselves), "à la carte participation" (only countries interested in the Action participate), "equality of access" (participation is open also to the scientific communities of countries not belonging to the European Union) and "flexible structure" (easy implementation and light management of the research initiatives) are the main characteristics of COST.

As a precursor of advanced multidisciplinary research, COST has a very important role in the realisation of the European Research Area (ERA), anticipating and complementing the activities of the Framework Programmes, constituting a "bridge" towards the scientific communities of emerging countries, increasing the mobility of researchers across Europe and fostering the establishment of "Networks of Excellence" in many key scientific domains such as: Biomedicine and Molecular Biosciences; Food and Agriculture; Forests, their Products and Services; Materials, Physical and Nanosciences; Chemistry and Molecular Sciences and Technologies; Earth System Science and Environmental Management; Information and Communication Technologies; Transport and Urban Development; Individuals, Societies, Cultures and Health. It covers basic and more applied research and also addresses issues of a pre-normative nature or of societal importance.

Web: www.cost.esf.org


USE OF MODELING AND SIMULATION IN PULP AND PAPER INDUSTRY

PREFACE
Chapter 1 Introduction - Using models to work smarter
1.1 Using models to work smarter
1.2 Benefits of installations
1.2.0 Summary
1.2.1 Applications of models and simulations in paper & board mills (Dutch industry)
1.2.2 Benefits for managers
1.3 Key points for success
Modelling and simulation
Chapter 2 Modelling including pre-processing - Modelling methods
2.0 Summary
2.1 What to think about when starting development of a simulation model
2.1.1 Identify what problem the model shall solve
2.1.2 Interaction with the "outside world"
2.1.3 Physical, event driven or statistical, data driven model
2.1.4 Simulation environment selection
2.1.5 Verification of models
2.1.6 Initialization
2.1.7 Use of the simulator
2.1.8 Additional factors to make a project effective
2.2 Modeling methods
2.2.1 Physical models - first principles
2.2.1.1 Physical models and their validation for pulp and paper applications
2.2.1.1.1 Abstract
2.2.1.1.2 Introduction
2.2.1.1.3 Simulator models
2.2.1.1.3.1 Screens
2.2.1.1.3.2 Hydrocyclones, cleaners
2.2.1.1.4 Model validation and tuning with process data
2.2.1.1.5 Simulators for other applications
2.2.1.1.6 Conclusions
2.2.1.1.7 References
2.2.2 Statistical models/data driven models
2.2.2.1 Finding dependencies and time lags between different signals
2.2.2.2 Correlation Measures
2.2.2.3 Entropy and Mutual Information
2.2.2.4 The Mutual Information Rate
2.2.2.5 Test Results on Industrial Process Data
2.2.2.6 Discussion and Practical Considerations
2.2.2.7 References
2.2.3 Artificial neural networks (ANNs)
2.2.3.1 Types of ANNs
2.2.3.2 Way of working of ANNs
2.2.3.3 Back propagation algorithm
2.2.3.4 Modifications of back propagation algorithm
2.2.3.5 Applications of ANNs in the pulp and paper industry
2.2.3.6 Literature references
2.2.4 Event driven models
2.2.5 Hybrid models
2.2.6 Overview when different types are to be used
2.3 Data processing and uncertainty handling
2.3.1 How to handle uncertainties
2.3.2 Preprocessing, "digging the diamonds"
2.3.2.1 Summary
2.3.2.2 Data reconciliation
2.3.2.2.1 Introduction
2.3.2.2.2 Cause of degradation in heat and power plant process and sensors
2.3.2.2.3 Effect of degradation in heat and power plant process and sensors
2.3.2.2.4 Data reconciliation for handling degradation
2.3.2.2.5 Example of data reconciliation
2.3.2.3 Dynamic validation of sensors
2.3.2.3.1 Dynamic validation of multivariate linear soft sensors with reference laboratory measurements
2.3.2.3.1.1 Introduction
2.3.2.3.1.2 Bayesian estimation of parameter distribution
2.3.2.3.1.3 Example
2.3.2.3.1.4 Conclusions
2.3.2.3.1.5 References
2.3.2.4 Signal filtering and outlier detection
2.3.2.6 Adaptation of models using on-line data
2.3.2.7 The importance of sampling frequency
2.3.2.8 Time matching different signals
Chapter 3 Soft sensors
3.1 Soft sensors - where to use
3.2 Soft sensors in the pulp and paper industry
3.2.1 Introduction
3.2.2 Methods
3.2.2.1 Estimation methods
3.2.2.2 Classification methods
3.2.2.3 Inference methods
3.2.2.4 AI methods
3.2.2.5 References
Chapter 4 Transfer of process know-how into models
Process control and decision making
Chapter 5 Model based control
5.1 Model predictive control
5.1.1 Introduction
5.1.1.1 Background
5.1.1.2 Brief history
5.1.2 MPC in the automation hierarchy
5.1.3 Why use MPC?
5.1.4 Basic MPC Principles
5.1.4.1 Definitions
5.1.4.2 Simple MPC construction
Plant model
Prediction models
Cost function
Constraints
Computing controller action
Principle of receding horizon control
5.1.5 Developments in MPC theory
5.1.6 Practical issues in state-of-the-art MPC
5.1.7 Conclusions
5.1.7.1 Future possibilities and trends with MPC
5.1.7.2 Further reading
5.1.8 Bibliography
Chapter 6 Production planning
6.1 Planning and scheduling
6.2 Dynamic optimization
6.2.1 A toolset for supporting continuous decision making
6.2.1.1 Abstract
6.2.1.2 Introduction
6.2.1.3 The simulation model
6.2.1.4 Cost function and parametrization of set point trajectories
6.2.2 Optimization
6.2.3 Conclusions
6.2.4 References
Chapter 7 Decision support
7.1 Diagnostics and decision support
Applications and case studies
Chapter 8 Design tools
8.1 Engineering and design of new processes or modification of existing ones
Chapter 9 Applications in pulp mills (kraft, CTMP, recovery)
9.1 Process Optimization and Model Based Control in Pulp Mills
9.1.1 Introduction
9.1.2 Overall pulp mill optimization
9.1.3 Digester optimization and MPC
9.1.3.1 Different modeling approaches
9.1.3.1.1 Sequential solver using Fortran code with iteration between pressure-flow calculations and chemical reactions - tank level calculations. 1-D
9.1.3.1.2 Simultaneous solver but without pressure-flow calculations. The flows are assumed controlled by the real DCS system. 1-D
9.1.3.1.3 2-D calculations with a simultaneous solver for both pressure-flow calculations and chemical reactions
9.1.3.2 Other applications
9.1.3.3 Modelling of delignification
9.1.3.4 Conclusions
9.1.3.5 References
9.2 Economic Benefits of Advanced Digester Control
9.3 Application of soft sensors for cooking
9.3.1 Batch cooking
9.3.2 Continuous cooking
9.4 Decision Support System for TMP Production
9.4.1 Abstract
9.4.2 Introduction
9.4.3 Description of the Case
9.4.4 Phases of the Optimization
9.4.5 The Simulation Models
9.4.6 Decision making and optimization
9.4.7 Conclusions
9.4.8 References
9.5 Optimisation of TMP production scheduling
9.5.1 Abstract
9.5.2 Introduction
9.5.3 Process description
9.5.4 Application
9.5.4.1 Two Mill Model
9.5.4.2 Paper Machines' Production Schedules and TMP Demands
9.5.4.3 Implementation
9.5.4.4 Results
9.5.4.5 Conclusions
Chapter 10 Applications in paper mills (incl. deinking)
10.1 Developing a generic method for paper mill optimization
10.1.1 Abstract
10.1.2 Introduction
10.1.3 A generic method
10.1.4 Use of an External Simulator
10.1.5 Optimization of Drewsen sizing quality
10.1.6 Optimization of Lancey water broke system
10.1.7 Conclusions
10.1.8 References
10.2 On-line strength prediction and optimization for multi-ply kraft liner
10.2.1 Introduction
10.2.2 Process description
10.2.3 Modelling and Identification
10.2.4 On-line Prediction
10.3 On-line monitoring of the paper machine performance
10.3.1 Background
10.3.2 Introduction
10.3.3 On-line monitoring of the paper machine performance
10.3.4 Conclusions
10.3.5 References
10.4 Improved Paper Machine Performance
10.5 Paper mill applications
10.5.1 Knowledge on detrimental phenomena originating from stock
10.5.2 TMP
10.5.2.1 Fibres
10.5.2.2 Dissolved and colloidal substances
10.5.2.3 Effects of peroxide bleaching
10.5.3 DIP
10.5.4 Control and analysis of the process chemical state
10.5.4.1 Available process control methods
10.5.4.2 Chemical methods to control detrimental substances
10.5.5 Savcor WEDGE process analysis system
10.5.6 Quality parameters of water and measurement methods
10.5.7 Measuring the chemical state of paper machine stock and water systems
10.5.7.1 Direct on-line measurements
10.5.7.2 Indirect on-line measurements
10.5.8 Sampling techniques
10.5.8.1 Methods for solids-free samples
10.5.8.2 Future on-line measurements
10.5.9 WIC Systems
10.5.10 Fibre and paper properties
10.5.11 Conclusions
10.6 Model-based wet-end optimisation
10.6.1 Introduction
10.6.2 Objectives
10.6.3 Methodology
10.6.4 The off-line to on-line path
10.6.5 Robustness of developed models
10.6.6 Conclusions
10.7 Model-based wet-end optimisation: Design of a soft sensor
10.7.1 Objectives and specific aspects of wet-end optimization
10.7.2 Design of a soft sensor estimating sizing
10.7.2.1 Project layout
10.7.2.2 Preselection of correlating parameters
10.7.2.3 Machine trials
10.7.2.4 Correlation analysis
10.7.2.5 Modelling of sizing
10.7.2.5.1 Sizing optimization trials
10.7.3 Conclusions and outlook
10.7.4 References
10.8 Real-time paper web formation control using the stochastic distribution control concept
10.8.1 Introduction
10.8.2 Variables to Be Included
10.8.3 Control Objectives
10.8.4 Experimental system
10.8.4.1 Pilot machine and experimental conditions
10.8.4.2 Data sampling with digital camera
10.8.4.3 Input polymer
10.8.5 Process Modelling
10.8.5.1 Image processing
10.8.5.2 Calculation of entropy
10.8.5.3 Direct step-response identification method
10.8.6 Robust PID Controller Design
10.8.7 Real-time Implementation
10.8.7.1 Open loop step response modelling
10.8.7.2 Closed loop test
10.8.8 Conclusions
10.8.9 References
10.9 Off-line applications
10.9.1 Training simulators
10.9.1.1 Operator training
10.9.1.2 Training simulator systems
10.9.1.3 Training procedure for operator training on simulator
10.9.2 MNI Experience with Process Simulation (paper presented at Asia Paper 2002 in Singapore)
10.10 Using simulation to tune a sensitivity indicator
Chapter 11 Applications in utilities
11.1 Soft sensors in waste water treatment
11.2 Soft sensors in recovery boiler
11.3 Soft sensor in lime kiln
11.4 Early warning system for recovery boilers
Commercial HW/SW tools
Chapter 12 Commercial simulation environments
12.1 Introduction
12.2 Current use of software in the COST E36 action
12.2.1 Preface
12.2.2 Analysis of the replies
12.2.3 List of software packages
12.2.4 Software evaluation
12.2.5 Other questions
12.2.6 Conclusions
12.3 Users' requirements for simulation software
12.3.1 Preface
12.3.2 Analysis of the replies
12.3.3 Affiliation
12.3.4 Remarks
Chapter 13 Presentation of application of different software tools
13.1 Brown stock washing problem
13.1.1 Objective
13.1.2 Data
13.1.3 Brown stock washing flow sheet
Chapter 14 Commercial HW/SW structures (DCS, fieldbuses etc)
14.1 General - HW and SW Architecture
OPC and ODBC
14.2 Data to and from simulator models
14.2.1 Software requirements
14.2.2 Interoperation solutions
14.2.3 The Component Object Model
14.2.4 Marshaling
14.2.5 Clients and servers in COM
14.2.6 Distributed COM (DCOM)
14.2.7 An example
14.2.7.1 The optimization software
14.2.7.2 The simulation software
14.2.7.3 Optimization problem
14.2.7.4 How to run IPSEpro from MATLAB
14.2.7.5 The optimization procedure
14.2.7.6 Results and conclusions of this example
14.2.7.7 Discussion
14.2.8 References


PREFACE

The purpose of this book is to show how simulators can be utilized for many different applications in the pulp and paper industry. Theory is combined with many practical applications, and both technical and economic benefits are covered. This publication is supported by COST.

The intention is that many different categories of readers can use the book. Process engineers can learn about the actual methods, managers can find motivation for why the implementation should be done, production managers can get ideas for process improvements, and automation engineers can gain information about a broad range of methods to use in the future. For research and development staff the book gives an overview of the state of the art, both with respect to the methods as such and with respect to implementations in "real life".

The contributors to the book are all members of an EU network under the COST action E36, which has had the goal of showing the benefits of process simulation in the pulp and paper industry. The participants are from 14 EU countries. The chairman of the action has been Johannes Kappen from PTS in Munich, and the vice chairman Prof. Risto Ritala from Tampere University of Technology in Finland. The group leaders for the work groups have been Jussi Manninen, VTT, Espoo, Finland (off-line simulations), Erik Dahlquist, Mälardalen University, Västerås, Sweden (on-line simulations) and Carlos Negro, Complutense University, Madrid (commercial software). Erik Dahlquist has also been the main editor of the book.

The book is divided into four major sections after the introduction. Below we briefly go through the content chapter by chapter.

Chapter 1 Introduction

Modelling and simulation

Here we want to introduce the basic methods used for mathematical modelling of industrial processes. First we have

Chapter 2 Modelling including pre-processing

where we go through how to handle the data before it is used to build or verify the models, or later on used for predictions and control. In the next chapter

Chapter 3 Soft sensors

we show how a number of process measurements can be used together with lab data to build new soft sensors. These are primarily for on-line use for different quality variables that are difficult to measure directly, either due to a lack of good instruments or due to the high price of the instruments.

In chapter four

Chapter 4 Transfer of process know-how into models

the importance of understanding the process, whether you want to make statistical or physical models, is discussed. It is sometimes stated that statistical models will fix this by themselves, but that is normally not a good approach. The more information you have about the process, the more efficient the work of building a good simulation model and a working control system will be.

This takes us over to the next major part of the book,

Process control and decision making

This covers primarily how models can be used for on-line applications using different methods. First we discuss

Chapter 5 Model based control (MPC – Model predictive control,…)

which is a strong tool for multivariable process control on-line. Theory as well as practical examples are presented, although more practical experience is reported in later chapters. Model based control takes us over to

Chapter 6 Production planning

which covers a longer time perspective and how to schedule the production within a certain time horizon. Methods for optimized production planning are presented. Decision support,

Chapter 7 Decision support

represents a number of methods helping operators, as well as managers and process engineers, to make the right decision when different situations occur and the most obvious action is not self-evident.

Now follows a larger number of real-life experiences in the form of

Applications and case studies

These start with

Chapter 8 Design tools

covering engineering and design of new processes or modification of existing ones, and then cover different parts of the mills:

Chapter 9 Applications in pulp mills (kraft, CTMP, recovery) followed by

Chapter 10 Applications in paper mills (incl. deinking) and

Chapter 11 Applications in utilities

The last major section concerns tools, simulator environments and commercial software:

Commercial HW/SW tools

It contains

Chapter 12 Commercial simulation environments (WGC)

Chapter 13 Presentation of application of different software tools (WGA)

in which a number of tools have been tested on the same application, with comments on pros and cons, and

Chapter 14 Commercial HW/SW structures (DCS, fieldbuses etc) (WGB)

The authors of this book are the following:

Ahola Timo, Outokumpu Stainless Oy, Tornio, Finland
Alonso Alvaro, Complutense University, Madrid, Spain
Belle Jürgen, Papiertechnische Stiftung PTS, Munich
Blanco Angeles, Complutense University, Madrid, Spain
Brown Martin, Control Systems Centre, University of Manchester, UK
Brüning Frank, Papiertechnische Stiftung PTS, Munich
Carlsson Mattias, ABB Automation, Västerås, Sweden
Dahlquist Erik, Mälardalen University, Västerås, Sweden
Dhak Janice, ÅF Process consultants, Stockholm, and Mälardalen University, Västerås, Sweden
Dietz Wolfram, Papiertechnische Stiftung PTS, Munich
Edelmann Kari, VTT, Espoo, Finland
Gillblad Daniel, Swedish Institute of Computer Science, Sweden
Goedsche Frank, Papiertechnische Stiftung PTS, Munich
Heath William, Control Systems Centre, University of Manchester, UK
Heijs Klaas, TNO, Netherlands
Holmström Kenneth, TOMLAB AB and Mälardalen University, Västerås, Sweden
Holst Anders, Swedish Institute of Computer Science, Sweden
Horton Robert, ABB Process Industries Inc, Columbus, OH, US
Jansson Johan, SAPPI, Johannesburg, and Mälardalen University, Västerås, Sweden
Kaijaluoto Sakari, VTT, Espoo, Finland
Kappen Johannes, Papiertechnische Stiftung PTS, Munich
Karlsson Christer, Mälardalen University, Västerås, Sweden
Konkarikoski Kimmo, Institute of Measurement and Information Technology
Kvarnström Andreas, Mälardalen University, Västerås, Sweden
Labidi Jalel, University of the Basque Country, San Sebastian, Spain
Ledung Lars, ABB Automation, Singapore
Leiviskä Kauko, Control Engineering Laboratory, University of Oulu, Finland
Lie Bernt, Telemark University, Porsgrunn, Norway
Lo Cascio D.M.R., TNO, Netherlands
Mannert Christian, Papiertechnische Stiftung PTS, Munich
Manninen Jussi, VTT, Espoo, Finland
Nappa Marja, KCL, Espoo, Finland
Negro Carlos, Complutense University, Madrid, Spain
Persson Ulf, ABB Automation, Västerås, Sweden
Pettersson Jens, ABB Corporate Research, Västerås, Sweden
Pulkkinen Petteri, Institute of Measurement and Information Technology
Ritala Risto, Institute of Measurement and Information Technology
Ropponen Aino, Institute of Measurement and Information Technology
Ruiz Jean, Centre Technique du Papier (CTP), Domaine Universitaire, Grenoble, France
Ryan Kevin, Malaysian Newsprint Industries, Mentakab, Malaysia
Shuman Lysette, ABB Process Industries Inc, Columbus, OH, US
Sinon Arjo, Sappi Fine Paper Europe, Netherlands
Somnitz Desiree, Papiertechnische Stiftung PTS, Munich
Sorsa Aki, Control Engineering Laboratory, University of Oulu, Finland
Suojärvi Mika, Savcor Oy, Finland
Tienari Matti, Accenture Oy, Finland
Wang Hong, Control Systems Centre, University of Manchester, UK
Warnqvist Jonas, ABB Automation, Västerås, Sweden
Widarsson Björn, Mälardalen University, Västerås, Sweden
Yue Hong, Control Systems Centre, University of Manchester, UK

The following persons also participated in the COST E36 working group for the book:

Jan-Erik Gustafsson and Peter Hansen, STFI; Klaus Willforth, Darmstadt University; Peter Fisera, Andritz; Gernot Plevnik, Andritz; Mats Hiertner, Stora Enso; Rudolf Muench, Voith; Christoph Spielmann, Andritz; Jukka Valkama, Darmstadt University.


CHAPTER 1 INTRODUCTION - USING MODELS TO WORK SMARTER

Erik Dahlquist, Mälardalen University and Arjo Sinon, SAPPI

This book is made for

1) Decision makers in the pulp and paper industry, showing the potential of advanced automation, provided that a certain effort is put into maintenance.

2) Process and automation engineers, showing the possible future ways of interaction between operators and processes, as well as possible automated optimization methods using simulators.

3) Process development engineers and researchers, showing how simulation and modeling can be used in the process development.

4) Suppliers to the pulp and paper industry.

It is important to have operators, process engineers and system developers involved in development and implementation to get full acceptance, and thereby achieve the full benefit of new methods.

To achieve this we need strong interest from the managers, showing that this is important. Then the operators will treat it as part of their task as well, which gives benefits that will not materialize without their effort. Simulation and models are tools for the operators and the production team to achieve better production with respect to both capacity and quality.

In this book we want to give ideas and practical guidelines for how to use simulators and models. More fundamental aspects can be read about elsewhere; here the focus is on proposals for future applications and on what can be done. What makes a simulation project successful will also be covered. We give some good examples and describe what has been done at different mills.

1.1 USING MODELS TO WORK SMARTER

Arjo Sinon, SAPPI

From various sources we can learn that a rather large potential of operational income is hidden in the way operations are performed today. This potential has to be unleashed by a mill in order to compete successfully. Cost reduction alone is, of course, one step in the right direction, but will not enable a mill to "win the war". Operations need to get smarter and more intelligent. "More needs to be done with less" may sound like a hollow statement, but it describes the way visionary managers think.

Changing papermaking operations is more and more the way to improve business results. The proven concept of (strategic) investments is another, but it can no longer be seen as the only king of the castle. Globalization and mergers have made papermaking an economic practice in which companies primarily want to increase value for the shareholders. Investments therefore have to follow very strict rules, generating cash within very short timeframes. New ways of accomplishing things need to be explored, and modeling and simulation are surely among the candidates to become very important in the next decades.

From the past

Operations have become complex to such an extent that operators and engineers are not able to follow the process anymore. The past decades have brought us more and more unit operations in-line: from dilution headboxes to multiple wire sections, press section and dryer section, size presses, infrared dryers, in-line coaters, calenders, and what have you. All these are part of the same paper machine, which used to consist of a simple headbox, forming section, press and dryer section. This puts a lot of pressure on the material we try to produce in the same process, namely paper. Specialty chemicals are added to enhance properties during the production process, aside from all the chemicals that are added to improve functional properties.

Currently we are at a point where adding another chemical gives unknown side effects. At the same time we see a downward trend in the quality of the raw materials (stock), be it virgin fiber for cost reasons, or recycled fiber due to the recycling itself.

Apart from the above, the knowledge level of operators and staff is decreasing every year. The papermaking industry is not very attractive to newly trained personnel, and year by year we see experienced people leave. This is a threat especially for existing (non-greenfield) mills.

Towards the future

The above-mentioned trends force us to shift our efforts from ad-hoc solutions to more structural changes in the way we work, or perform the operations. Implementing models in the operations or in decision-making is one of the tools that enable us to do this. One example is a model of a certain quality parameter of the end product, normally measured hours or even days after the paper has been produced. At the exact time the paper is produced, this model predicts the quality and enables us to adjust the process if necessary. This way a lot of time and effort (money) is saved, improving the operational result.

To be successful these models have to meet certain criteria, which actually concern not the type of model but the online applicability of the result. We have seen examples of all types of successful models, from very simple univariate linear relations to very complicated neural networks. The thing they all have in common is what can be defined as completeness.

Completeness

Online models are pieces of software, running on a computer, generating output by processing one or several inputs. First of all, the inputs have to come from the running process, preferably from the process information (PI) system in use. At the same time, the output(s) should be redirected to the same PI system, making the results fully transparent to the users; in actual online application these will most probably be the operators controlling the process. Any deviation from these basic rules will impair the applicability, due to lack of completeness.
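As a minimal sketch of this completeness principle, assuming a plain dictionary as a stand-in for the PI interface and with invented tag names and coefficients, the loop below takes its inputs from the process data and writes its prediction back to the same place, where the operators can see it next to the real measurements:

```python
# Minimal sketch of a "complete" online model: read inputs from the PI
# system, compute the prediction, write the result back to the PI system.
# The PI interface is mimicked by a dict; tag names and values are invented.
import time

PI = {"PM1.HEADBOX.CONS": 0.95,      # headbox consistency, %
      "PM1.WIRE.SPEED": 1450.0,      # wire speed, m/min
      "PM1.SIZER.DOSE": 4.2}         # size press dosage, kg/t

INPUT_TAGS = ["PM1.HEADBOX.CONS", "PM1.WIRE.SPEED", "PM1.SIZER.DOSE"]
COEFFS = [12.0, 0.01, -0.5]          # simple linear soft sensor, tuned offline

def predict_quality(values):
    """A univariate-style linear relation; stands in for any model type."""
    return 30.0 + sum(c * v for c, v in zip(COEFFS, values))

for _ in range(3):                   # in a real application: a fixed cycle
    inputs = [PI[tag] for tag in INPUT_TAGS]
    PI["PM1.QUALITY.PRED"] = predict_quality(inputs)  # back to the PI system
    time.sleep(0.1)

print(PI["PM1.QUALITY.PRED"])
```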

1.2 BENEFITS WITH INSTALLATIONS

1.2.0 SUMMARY

- Examples: In an installation for a new greenfield mill in Malaysia, MNI (Malaysian Newsprint Industries) concluded that they got a 20 % faster start-up due to operator training with a simulator, compared to what they would expect from start-ups with experienced operators in Australia, even though none of the operators had any previous experience of pulp and paper operations.

- At one mill in Indonesia (APRIL group, RAPP), sheet breaks could be reduced by 29 % after using advanced tools for process diagnostics and loop analysis. The value of this was 7 MUSD/y.

- At a plant in Norway (Norske Skog, Skogn), the experienced operators used a dynamic simulator with the complete DCS system to test the interaction between the process and the control system before start-up. This decreased the start-up problems significantly. Among other things, preliminary loop tuning could be performed before start-up.


1.2.1 APPLICATIONS OF MODELS AND SIMULATIONS IN PAPER & BOARD MILLS (DUTCH INDUSTRY)

Klaas Heijs

Within the Dutch paper & board industry a survey has been conducted among the 27 paper & board mills and different suppliers (knowledge institutes, suppliers and consultants).

The first contact person when asking about models is the process engineer. Almost all mills have been using models and simulations, by themselves or through consultants, for two main areas:

- Water / Mass balances

- Logistic challenges

These two area’s have in common that the process is visible and can be made understandable, although the complete process is still complex. Because setting up a good working model/simulation an accurate plan of the process is necessary. To get this up-to-date accurate process scheme the mill has to take a detailed look at the process. This work already gives a lot of insight in the process at work, which also improves the understanding of the results coming from the model and the simulation. When looking at other area’s for simulations and modelling like optimizing press section and dryers section. Models are looked at with caution. First the processes inside these sections are complex and difficult to master. Second these models and simulations aren’t always in line with practical experience.

One mill had a visit from a supplier of press felts. The supplier used a simulation to show the effectiveness of their felts. The mill asked them to put in a higher moisture content of the felts before entering the press nip. The simulation showed lower press efficiency, while practical experience showed that the press efficiency was improved. These kinds of cases make process engineers wary of applying advanced models and simulations.

The last but not least remark made by process engineers is that a lot of models are much too complex to be rolled out in the mill. A simulation model should provide the operator/manager/engineer with a traffic light on the performance of the machine; instead, the model or simulation often presents a completely controlled traffic junction with all inputs, calculations and outputs shown. On top of this, the program continuously asks for different settings.

It is of course possible for mills to buy their own modelling, analysis and simulation program(s), like KCL-Wedge, WinGems, Aspen and Matlab. These programs can be used for a lot of modelling and simulation situations; that makes them very versatile, but because of the many options also difficult to master. After mastering the program, putting in the process in the right way, with the right parameters, is another time-consuming activity. Most mills do not have the manpower for these kinds of projects. One also has to take into account the possibility that the person in question finds another job after a lot of special training.

Therefore most of these projects are carried out by consultants or knowledge institutes. The bottleneck in this situation is that these parties know the modelling and simulation program, but not every detail of the mill-specific process. This often leads to modelling and simulation projects taking more time than estimated.

When a model, analysis or simulation of a certain process has been set up in such a way that it makes the right predictions, every change in the process also has to be implemented in the model, analysis or simulation. This makes these programs very time-consuming to keep accurate.


Often the analysis, model or simulation is made on a single data block. When loading the solution on a new data block, the solution often does not fit. Making the solution fit more data blocks and the physical process can only be done with a lot of ‘manual’ analysis of the solution and the physical process.

In general, results from modelling and simulation were satisfactory and very useful, especially for processes which are hard to make visible or where there is too much risk for empirical testing.

All interviewed people indicated that they would like to make more use of models and simulations. When asked for the criteria for using them, the following were mentioned:

- Some models are not yet available (on-line stiffness prediction / mottle / internal bond)

- Models are available but do not yet seem to be in-line with empirical experience

- Models are very complex to operate; a more easily understandable user interface would be very welcome

o A layered user interface for the different functions in a mill working with the model/simulation.

- Easier maintenance of the model in-line with process changes.

1.2.2 BENEFITS FOR MANAGERS

What benefits can managers have from modelling and simulations?

First of all one should note that there is a difference between a manager and a scientist or engineer. A manager is used to managing, and in particular managing risks. A scientist or engineer tries to understand the process in order to control it with minimal risk, and to reach this goal they do research in order to eliminate the risk in the control.

However, managers also have a certain dislike of risks. They see that models and simulations can give their staff more insight into the process and that processes can be made more efficient without the risk of empirical testing. Also, the revenues and costs can be better assessed with models. They see the biggest advantage in predicting end-product qualities from machine settings, especially for qualities that cannot be measured reliably in-line. Some qualities mentioned are colour, surface texture, stiffness, internal bond and mottle.

Continuing from the above: today's modelling, analysis and simulation programs are complex and show complex models, whereas a manager is far more interested in a helicopter view and general data which enable him to assess the risk. The manager would like to see a traffic light indicating the expected performance. Instead, the model shows the manager the complete highway junction including weather information, which makes it difficult to find a general outcome. However, this screen full of data, pointers and flags is useful for the process engineer.

The dilemma has its origin where the scientist or engineer builds the analysis, model or simulation in very great detail in order to put all working variables and parameters into the model and to find their mutual interactions. This makes the model too complex for the manager to get a good insight into the risk and to make fast decisions.

These worlds can come together if the manager had time to learn the program and make his/her own presentation layer, or if the scientist/engineer could produce this presentation layer from the manager's point of view. Another, more practical solution is to equip the model with different user interfaces, one for each level of operating the model.

However, when the analysis, model or simulation has been set up in the right way and is well presented, the manager can benefit from it. It then becomes easy for him to see the risk and benefit of certain changes, investments or trials in the process. Before we get there, the programs mentioned still need further development, since today they are still a tool for specialists, requiring not only adaptation at every change of the physical process, but also a (self-)critical, well-informed scientist/engineer who knows the process at hand.

The overall conclusion is that mills see much potential in modelling and simulation, but the following criteria have to be met:

- New models available for end product qualities which are now undetermined

- Models that are consistent with empirical experience

- Logical user interfaces which are understandable for different levels of operating

- Simplicity in maintenance of the models


1.3 KEY POINTS FOR SUCCESS

Arjo Sinon, SAPPI

Keep it simple, keep it simple, and keep it simple.

These are the three main key points for success. They should be applied to the models and/or simulation tools, and to the application project as well. People like simple things and, when worked out right, most things can be simple.

Prepare for a lot of work and do not expect the impossible.


Goals should be ambitious, but as soon as belief in success is lost, it is better to redefine either the application project or the goals. Commitment of all parties involved is crucial. Top management sets the vision and should “live the message”; otherwise lower levels of the organization will not follow. Middle management must believe in the results, as they are the key players in empowering the end users. Application projects in the next decade will most certainly evolve around reshaping the way we work today and around evangelizing the message of simulation results. Very often the end users (operators) tend to believe actual sensors together with their gut feeling, and will show a deep-rooted distrust of calculated results, just because these results are calculated and not “real”, or because the calculated results sometimes do not match their past experience.

Use the right tool for the job.

Part of the result comes from the tools. This is true in normal life, but is vital in the application of simulation results or in modeling projects, especially in the paper industry. Due to the large number of modeling methods it is easy to pick the wrong one and make life more difficult than it should be. The key to success is a combination of knowledge of available modeling methods, of the tools in that area, and of the process. Thorough knowledge of the process being studied enables us to pick the suitable tools, improving the chances of success.

Innovate, don’t imitate.

All efforts put into implementing simulations should generate outcome in terms of operational result; otherwise no value is added, and the application (the actual software program) is likely not to be used at all, or the application project itself will not succeed. Therefore, implementing simulations should generate innovative results as opposed to “more of the same” as seen in past years.

Better is the enemy of good.

Applicable simulations should be implemented. There should be no time lag between development and application due to polishing or trying to make it better. Very often the end users are in need of a simple traffic light with a red, yellow and green light, but a totally controlled, high-fashion junction is presented to them instead. A simple traffic light will do! Simulations are very often made by highly educated staff with their own beliefs of what is good and what is not. The truth is that the end users know best what is good for them or what is fit for purpose.

It is better to shut-up than to lie.

Faith is more easily lost than gained. New information will be trusted only after a rather long time in which the information has been proven. False information in that period will reset the faith counter, so to speak. To be successful, it is very important for simulation results to come with some indication of their reliability, or trustworthiness. This should be an integral part of the modeling efforts or the implementation project and will prove to be one of the key factors for success.

Carrot or stick?


Involvement of the end users in the development of any real-life object is crucial. Too often we see things being marketed which obviously were developed without the end users in sight. Of course, money can be earned by selling things nobody wants, but it takes more to be really successful.

Applying simulation or modeling results calls for a sound approach in which the end users play an important role. In the case where the end users are the operators of a paper machine, developers too often think that they will not like the change, are unable to understand the physics involved, or are not willing to co-operate. In most cases this is not true! Operators like new things; they like changes in the process because it makes or keeps things exciting. What they do not like are sudden changes, or things changing without them being informed or involved in the process of change.

In the past decades too much new technology has entered the field of operations in a way that does not fit the above description. This has in a way damaged the belief operators have in management. Implementing simulation results or modeling efforts therefore has to be done very carefully, with a lot of operator involvement; otherwise what really is a carrot will be seen as a stick.


MODELLING AND SIMULATION

CHAPTER 2 MODELLING INCLUDING PRE-PROCESSING - MODELLING METHODS

2.0 SUMMARY

- It is important to define the level of accuracy we need for the modeling or simulation before the work starts. This requires a careful evaluation of what the purpose of the simulation is: what is the problem we need an answer to? This needs discussions between the customer and the supplier and should be well documented, to avoid later problems when the customer starts to use the simulation model. State what is “a must” and what is “nice to have”. When the implementation starts this should be kept in mind, and the level of accuracy needed for the different tasks has to be stated. If we do this, both suppliers and customers are normally much happier after the delivery than if there are big differences between what the supplier feels he has sold and what the customer feels he has bought!

- Whether the models built should be physical, grey-box or black-box models has to be discussed. If the process is well known, a physical or grey-box model may be the best. If the process is less well understood but good measurement data from the process are available, a black-box model may be a good choice.

- Pure white (physical) models may place too strong demands on today's computer power, and thus grey or black models should be used for now. Still, you should aim at using physical models as far as possible; the limitations have to be considered and will lead to simplifications, and thereby to “grey models”. The best case is if we can fit a physical model with real measurements of physical properties. We can also use experience, like knowledge about the heat transfer as a function of different situations; this can lead to a “white-box model”. “Grey-box models” arise when we tune the physical or white model with process data.

- Different quality of input data is needed for different models, and for different types of models (linear, dynamic, steady-state etc). The data can be discrete or continuous, and can come from on-line measurements as well as lab measurements taken intermittently.

2.1 WHAT TO THINK ABOUT WHEN STARTING DEVELOPMENT OF A SIMULATION MODEL

Erik Dahlquist, Malardalen University


When you start considering using simulation, you first have to consider what the purpose of the simulation is. Is it for optimizing a design, studying the performance of a process during different operational situations, or for on-line use for diagnostics or control purposes? The purpose of the simulator determines how detailed the model should be.

If the purpose is to simulate a complete process from a design point of view, it may be enough to have a steady-state model that can be used as a support for optimization calculations. If you also want to test operational aspects of the process, a dynamic model may be needed, where you also include controls like PID loops, interlocking etc. Perhaps a complete DCS code is to be tested against the model? Then we have to include all aspects of the response to the DCS system from the simulator, which may also include a number of digital signals.

If the purpose is to optimize the design of specific equipment like a gas turbine, a screen, a flash tank, a reactor, a digester or a boiler, then we may need a very detailed model describing all physical mechanisms of the process, including all dynamic aspects. On the other hand, we can then skip the control aspects and perhaps also the interaction with other equipment. Instead it may be necessary to model the equipment with a complex geometry in a reasonably accurate way.

If the purpose is to use the model on-line, the major issue may be the calculation speed. You have to perform the full calculation perhaps every second, every minute or every hour, depending on what the model is to be used for.

This shows that it is very important to discuss the usage of the model before starting its development. Otherwise we may end up with a model that is too expensive, or not good enough to solve the problem at hand.

Example: Retention, formation and wet-end parameters as a function of primary inputs

2.1.1 IDENTIFY WHAT PROBLEM THE MODEL SHALL SOLVE:

- Design of a new process (needs many components; use of a commercial library may be useful, at least as a “starter”. May include some dynamics and simplified controls with specified flows but without PID controllers)

- Optimization of an existing process with respect to design (principally as above)

- Optimization of the operation of a process off-line (needs a dynamic model including also controls)

- Optimization of the operation of a process on-line (often ok with a steady-state or semi-steady-state model with relatively simple physics, but with interaction with a good solver and with input from the DCS system on-line. Need to consider in detail what information is needed as input and output). This may include both physical models and statistical, data-driven models.

- Improve process understanding (a detailed geometry, detailed chemical reactions etc needed. Have to consider whether an Eulerian or Lagrangian type of problem formulation is appropriate, and micro or macro aspects).

- Use of the model for diagnostic purposes (the model can be detailed or high-level, physical or data-driven. Needs very thorough consideration of how to model the different tasks).

- Simulator for operator training (needs a simulator interacting with the DCS system including all process displays. Needs full dynamics in the simulator, but not necessarily very detailed physical modeling of each piece of equipment).

2.1.2 INTERACTION WITH “OUTSIDE WORLD”

The next step is to identify how we shall model specific equipment and how this shall interact with “the outside world”, that is, what inputs and outputs the model needs. This may include vectors with flows, concentrations, temperatures and other physical data, but also other types of information like dimensions and parameter values such as valve openings or the speed of a motor. When we include dynamics we also need to hand over buffer volumes and similar states from one time step to the next. All these values may be included as a data set, or fetched from a database when needed, which may be only at initialization when the simulator starts, for parameters like dimensions and starting values.

For more detailed physical models we may also need to include boundary conditions.

2.1.3 PHYSICAL, EVENT DRIVEN OR STATISTICAL, DATA DRIVEN MODEL

If we have a running process it may be possible to build a model from process data. We then get either a statistical model with just a general structure, like a polynomial, or a physical model with tuned parameters.
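As a minimal sketch of the statistical alternative, with synthetic data standing in for historian measurements and an assumed second-order polynomial structure:

```python
# Sketch: a statistical model with a general polynomial structure, tuned to
# process data. The data here are synthetic stand-ins for real measurements.
import numpy as np

rng = np.random.default_rng(1)
flow = np.linspace(50.0, 150.0, 40)                 # e.g. feed flow, l/s
quality = 0.002 * flow**2 - 0.1 * flow + 60.0 \
          + rng.normal(0.0, 0.5, flow.size)         # noisy "lab" values

coeffs = np.polyfit(flow, quality, deg=2)           # tune the structure
model = np.poly1d(coeffs)

print(model(120.0))   # predict at a new point inside the data range
# Far outside 50-150 l/s the polynomial may give nonsense, which is exactly
# the weakness of black-box models discussed in this chapter.
```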

For a new process, or if principal aspects need to be modeled, we cannot use process data. Instead we try to include all important physical aspects, including chemical reactions like combustion processes, separation mechanisms and similar. We may also need to include complex geometries in the equipment and complex fluid dynamics.

In this case it is important to start from the complex equations and then try to simplify in a logical way. If we know from earlier experience what is happening, we can simplify specific functions. We can also make assumptions on what are realistic values for the operation, like reasonable flow ranges, temperatures of operation, and gradients in the process, e.g. how fast a change can be allowed to be.

If we want to model a combustion process in a boiler, for instance, and use one of the commercial CFD programs like Fluent or CFX, we can choose between following a specific particle and what happens to it, or looking at flows and temperatures in specific volume elements in the combustion chamber. Then you have to consider what you really want to study. Is the full geometry needed? In that case you need a support tool for creating the geometry, and we may also need a very detailed grid to really capture the influence of the full geometry.

Just as an example: a mixer between hot and cold water was to be modeled. When the grid had 300 000 volume elements, the calculated temperature difference just after the mixer was only half of that obtained with 600 000 volume elements. The conclusion from the coarse grid would have been that the temperature difference would not cause technical problems, while with twice the temperature difference the mixer had to be rebuilt. As this was in a nuclear power plant, where fatigue in the material could be dangerous, the choice of grid was very important. If we had been looking at an on-line application, we would probably not have been able to use more than perhaps 1000 volume elements from a calculation time point of view. This shows how important it is to identify the purpose of the simulation.

If the simulation result is to be used for important decisions, it is also important to have the possibility to test against real experiments. This may be directly in a pilot plant or a full-scale plant, but may also include testing the model on similar applications, or comparing with how other researchers have verified similar models against experiments in the literature.

If we go back to the problem with the combustor: if we can follow both particles (Lagrangian) and volume elements (Eulerian), it may be possible to create some kind of interaction between these two approaches. We can first make a model determining the temperatures, flow rates and flow directions in the boiler, and then follow a particle moving in this environment. For each time step we move the particle in the local flow direction by a reasonable distance from where it is in the combustor. The conditions in the volume element where the particle resides, such as oxygen content and temperature, are then used as boundary conditions when calculating what happens to the particle.
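A strongly simplified sketch of this coupling, with a one-dimensional column of invented velocities and temperatures standing in for a real Eulerian CFD solution:

```python
# Sketch: step a particle (Lagrangian) through a fixed Eulerian field.
# Five volume elements stand in for the boiler; all values are invented.
import numpy as np

velocity = np.array([2.0, 2.5, 3.0, 2.8, 2.2])   # upward gas velocity, m/s
temperature = np.array([900.0, 1100.0, 1200.0, 1150.0, 1000.0])  # deg C
dz, dt = 1.0, 0.05                               # element height (m), step (s)

z, t = 0.0, 0.0
while z < len(velocity) * dz:
    cell = min(int(z / dz), len(velocity) - 1)   # current volume element
    # temperature[cell], oxygen etc. would act as boundary conditions for a
    # particle model (drying, devolatilisation, char burnout) at this step.
    z += velocity[cell] * dt                     # move with the local flow
    t += dt

print(f"particle residence time ~ {t:.2f} s")
```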

When we model particles in this more detailed way, we may find that the Eulerian solution is not giving correct values, as it is not handling the particle interactions correctly, and we may have to modify the equations for the volume elements in some way and do a new iteration, and so on.

If, on the other hand, we only want to study the control of the boiler, we can make a very simple split of the boiler into perhaps five volume elements, with overall reactions only. With a certain amount of fuel and air we will get a certain reaction rate and a certain composition of the exhaust gases from each volume element. This can be used for adjusting the feed of fuel or air, to see the dynamics of the whole boiler without knowing the details of what is happening. In return we get very fast calculations, e.g. a new value every second.


For a series of discrete events, like cutting paper and forming rolls, transporting the paper rolls to local storage and finally reaching a converter or printing mill, it is better to use discrete-event models to follow the fibers. The principles are mainly the same, but the models are much simpler.

2.1.4 SIMULATION ENVIRONMENT SELECTION

When we have selected what type of model to use, it is time to put it into an environment. We can select a commercial environment with existing solvers, libraries etc, or we can write our own model in e.g. FORTRAN, C++ or some high-level language like Modelica or gPROMS. At least FORTRAN and C code can easily be accessed from the commercial simulator environments, provided we have done a good job identifying what signals to send in and out of the model.

It is useful to have a good interface for input and output, and the graphical interface is especially important when other people are to use the model; an intuitive design is best. We also have to consider whether the model shall be used only on one single computer, in a network, or even be accessed over the internet. The signal capacity between the model and e.g. a DCS system may be a limiting factor, especially for a larger model, as when a complete process is modeled in a training simulator system or for on-line applications.

In another chapter different simulator environments are discussed.

2.1.5 VERIFICATION OF MODELS

As discussed above, it is important to verify that the model gives realistic results. This can be achieved directly in the process, by checking on-line measurements as well as examining samples from the process in the lab. This applies to the complete process as well as to specific equipment in the process.

In some cases it may be possible to build an experimental set-up as a pilot plant specially made for testing purposes. This is common among suppliers of special equipment, for development purposes. The problem is that only some specific experiments can be performed, but at least we can verify some specific conditions. From this we know that the model gives reasonable results, and if the model is physical (first principles) we know that the results will be reasonable also for similar conditions.

If we have a statistical, data-driven model, we know it gives reasonable values from a historical perspective, but we have to keep comparing its predictions with lab measurements in the future as well. When we see that the model deviates too much from subsequent lab measurements, we can update the model with new data. This type of data-driven model can also be used together with physical models in a larger model of a complete process.

When we verify a model it is favorable if we can measure important variables and results during tests using a factorial design of the experiments. Then we know that we cover the whole “operational range” in a good way, and can also find interactions between the different variables.
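A minimal sketch of such a design; the variables and their low/high levels below are invented:

```python
# Sketch: generating a two-level full factorial design for validation tests,
# covering the corners of the operational range. Factors are invented.
from itertools import product

factors = {
    "feed_flow":   (100.0, 200.0),   # l/s
    "consistency": (0.5, 1.5),       # %
    "rotor_speed": (600.0, 900.0),   # rpm
}

runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for i, run in enumerate(runs, 1):    # 2^3 = 8 experiments
    print(i, run)
```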

In many cases it is very difficult to measure all the variables we want to follow. If we have a boiler, for instance, it is not enough to measure the temperature at a few spots. The temperature may vary 300-400 °C just from the wall to a position 3 dm into the boiler, the flow through the exhaust gas channel may have a temperature profile, and the solids may be unevenly distributed over the area. It is thus very difficult to verify the performance without extensive and well-designed sampling at many positions.

On the other hand, we can make use of overall mass and energy balances to verify the complete balance also for the model. We can then make changes in the model that give directions for how sampling should be performed to study selected issues and problems.

Another aspect is to test the model code so that it is stable under all conditions. This could be seen as part of the debugging, but in reality it is very important to find the limits within which the model can be used. Some models may only be useful up to a specific limit with respect to concentration, temperature or flow rate, and this should then be noted.

2.1.6 INITIALIZATION

When we start the simulation we need initial values. These should be values at steady-state conditions. It may sound simple to achieve this, but in reality the tuning of a complex dynamic model of a complete plant is non-trivial. Usually the tuning to find a steady-state condition is the most time-consuming task in the whole simulator project.

If it is possible to have the same model in two versions, one steady state and one dynamic, the steady state version can be used to produce values for the dynamic model.
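A toy illustration of this two-version idea, assuming a single buffer tank with level-dependent outflow: the steady-state version is a root solve that delivers the initial level for the dynamic version:

```python
# Sketch: use a steady-state solve to initialize a dynamic simulation.
from scipy.optimize import brentq
from scipy.integrate import solve_ivp

q_in = 0.5                                    # m3/s, constant feed

def dhdt(t, h):                               # dynamic model: tank level
    return [q_in - 0.25 * h[0] ** 0.5]        # outflow ~ sqrt(level)

# Steady-state version: solve 0 = q_in - 0.25*sqrt(h) for the level.
h0 = brentq(lambda h: q_in - 0.25 * h ** 0.5, 1e-9, 100.0)

# The dynamic run now starts flat instead of with a long settling transient.
sol = solve_ivp(dhdt, (0.0, 60.0), [h0])
print(f"initial level {h0:.2f} m, level after 60 s {sol.y[0, -1]:.2f} m")
```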

If we have a simple model it may be easy to find all the variables to initialize, while if we have a DCS system as well, all values may not even be measurable, as not all values are stored in the DCS system. Then we may need to introduce dummy values, just to have something reasonable as a starter.

2.1.7 USE OF THE SIMULATOR

When we have the system up and running, we need some way of introducing e.g. dynamic upsets, or, if the simulator is used for optimization, interaction with other software like solvers. Upsets can be ramps, sudden increases or decreases in specific values, etc. It is important to consider at an early stage what situations you really want to test with the simulator. Otherwise this may become very tricky, as you have to rewrite the code to fulfill demands you had not thought of earlier.

If the simulator is used for on-line purposes, it is important to have a good interaction with the DCS system, where filtering of signals as well as data reconciliation is important for the input data. It may also be interesting to update the model with new data on-line. Still, this really requires good control of the input data.

2.1.8 ADDITIONAL FACTORS TO MAKE A PROJECT EFFECTIVE

o Making a project successful also requires strong involvement of both operators and process engineers, as well as of the management. If the management shows involvement, it sends the message that this is important.

o The production management and the technical management can both benefit strongly from participating actively in a project. They will learn a lot about the process and the interaction between the process and the controls, as well as getting ideas for modifications. These can also be briefly tested with the simulator, to give a basis for possible process or control modifications.

o In an operator training project the management gets a good opportunity to train everyone in the way they want the process to be operated. The operators get a chance to see the effect of different ways of operating, and to identify how they all should act. This can give good long-term effects for the production by getting the shift teams to operate in the same way, something that is not always the case. Everyone should implement the best practice, and this is a way to agree on which one is the best.

o It is also important that the operators and engineers get some time to train on the simulator. This has to be planned in a robust way in advance, and may need extra staff for some time period.

o It is important to discuss in advance what the purpose of the simulator is, to get the operators and engineers motivated to use it. By also participating in the definition of what tasks to solve, the engagement will be even stronger.

2.2 MODELING METHODS

2.2.1 PHYSICAL MODELS – FIRST PRINCIPLES

Erik Dahlquist, Malardalen University

It can take a long time to calculate the results with an advanced physical model, and we need input data as well as process understanding to build a good model. A physical model can be valid also outside the area where we have measured data. Physical models are easy to use if good physical principles are known; still, we need values of parameters that have to be taken from general knowledge or from process data. An advantage of physical models is that you can use them to evaluate new processes with similar components, which is normally more difficult for black-box models. You may, however, lose the advantage of a physical model if the simplification is driven too far. On the other hand, it is normally easier to include dynamics in a physical model.

Physical models can be finite element models or difference models. Finite element models are normally used for stress analysis when designing equipment that needs certain mechanical properties like bending strength. This type of model is normally not used for process simulation purposes, and will therefore not be discussed further here.

Difference models are the type of models normally used for process simulation. They can be steady-state or dynamic, depending on the purpose of the simulation. If the major purpose is process design, we can start with a steady-state model. When control aspects are included, we normally need a dynamic process model that can show how different process changes affect the process performance with respect to different variables. Here we normally try to identify the basic physical principles behind the phenomena involved.

If we talk about fluid flows, the pressure-flow relation is modeled, with pressure drops etc. If we want to model chemical reactions, kinetics and equilibria of the different reactions are included, and so on. The physical principles are normally formulated as a set of differential-algebraic equations, DAE. These are then solved using different solvers; examples of simultaneous solvers are e.g. Modelica and gPROMS. Earlier simulators usually had sequential solvers, where reactions in one block were first calculated and the output sent to the next block in a predefined order. A separate pressure-flow network calculator gave the flows in the network as a function of pressure drops and sources, and the two solvers were then used iteratively. This technique is still used in many applications, as it also has advantages when a DCS system is included in the system. With this approach it is normally easy to keep track of the time steps, while the simultaneous solvers normally use varying time steps depending on the degree of activity at a certain time; during a fast grade change, for instance, the time step becomes much shorter to get converging solutions.
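A toy sketch of the sequential principle with a recycle (tear) stream, where three invented one-line unit models are calculated in a predefined order and iterated to convergence:

```python
# Sketch: sequential-modular flowsheet calculation with a tear stream.
def mixer(feed, recycle):            # invented toy unit models
    return feed + recycle

def reactor(inflow):
    return 0.9 * inflow              # 10 % of the stream is consumed

def splitter(inflow, split=0.3):
    return split * inflow, (1.0 - split) * inflow   # (recycle, product)

feed, recycle = 100.0, 0.0
for iteration in range(100):
    new_recycle, product = splitter(reactor(mixer(feed, recycle)))
    if abs(new_recycle - recycle) < 1e-9:           # tear-stream convergence
        break
    recycle = new_recycle

print(iteration, round(recycle, 3), round(product, 3))
```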

The physical models make use of the Navier-Stokes equations, energy equations and continuity equations. Depending on the application, these can be simplified. Finally, key parameters in the equations are tuned with process data to get the real quantitative effects. We then form so-called “grey models” from the principally “white”, or purely physical, models. As long as computer power is not unlimited, pure physical models will not be possible, only different levels of “greyness”.

2.2.1.1. PHYSICAL MODELS AND THEIR VALIDATION FOR PULP AND PAPER APPLICATIONS

By Erik Dahlquist, Mälardalen University, Vasteras, Sweden


2.2.1.1.1 ABSTRACT:

In the pulp and paper industry there is a huge number of different types of process equipment: digesters, screens, filters, hydro cyclones, presses, dryers, boilers etc. In many cases the equipment suppliers want to consider each piece of equipment as unique, so that a special model is needed. This can give hundreds or thousands of different models to keep updated in a simulation package, and when it comes to model validation and testing it becomes almost impossible to handle in reality.

If we instead try to identify the basic physical principles of each unit, we can start from that point and then just add special “extra features”. Screens, filters and presses, for example, all build on similar basic principles. In this way it is possible to reduce the number of modules for a complete integrated mill to some 20 models. These are then tuned with existing literature data on “real performance”, and configured with rough geometric data. For a screen, for instance, there are normally data available on separation efficiency when a certain mixture has been operated under certain conditions, like flow rate and the geometric dimensions of the screen with the screen plate. Still, the fiber size distribution is seldom included, as it has not been measured. The same is normally the case for e.g. cleaners and other types of centrifugal separation devices.

We start from first-principles models and then tune these for different operational conditions: in one case study the size distribution was measured and some variables varied; in another case study other variables or conditions were investigated. A generalization of the model can be made by combining all this information, covering at least to some extent all the different operational conditions, and all fiber sizes and concentration ranges. By then only fitting the model to the existing simple mass balance data for a specific piece of equipment, you can get a reasonably accurate model for all kinds of operation of this and similar types of equipment. In this paper a description is given of how a number of different pieces of equipment have been modeled in this “general” way. Tuning and the results of model validation for different operational modes are also shown.

2.2.1.1.2 INTRODUCTION:

The reasons for using a dynamic simulator system may be many, but mainly fall into the following categories of use:

1) To train operators before start up of a new mill or to introduce new employees to the process before starting operating the real plant. [Ryan et al 2002]

2) To use the dynamic simulator for optimization of the process, in the design phase of a rebuild or expansion of an existing mill, or for a completely new green field mill.

3) To test the DCS functionality together with the process before start up of the real plant.

4) To optimize an existing process line, by testing different ways of operation for process improvements.


5) On-line prediction and control of a process line or part of a process line [Persson et al 2003]

6) Use in combination with an optimization algorithm for production planning or on-line optimization and control [Dhak et al 2004]

7) For diagnostics purposes [Karlsson et al 2003]

8) For decision support [Bell et al 2004]

It should also be noticed that a simulator system can be anything from a small test model of a specific piece of equipment, where the engineering and programming effort to get it into operation can be a couple of hours, to huge systems with thousands or even tens of thousands of DCS signals connected to a model of a whole factory, where perhaps 10,000 engineering hours or more are needed for the project. Therefore you have to be sure to understand what you are really after, before starting to discuss costs and time schedules for a simulator project!

2.2.1.1.3 SIMULATOR MODELS

To build the simulator, we need a model for every single piece of equipment in the plant. Some of these models can be very simple while others are very complex. In most cases we use a physical model as the basis and tune it with process data. This gives us a reasonably good model over a large operational area, and it does not collapse even if we go significantly outside the normal operational area, which can be the case for a pure statistical model.

In reality we do not have that many fundamentally different physical mechanisms in the major equipment. Most common is filtration or screening, where the mechanism is mechanical removal of particles on a mesh, wire or porous medium.

2.2.1.1.3.1 SCREENS

The basic separation takes place where fibers or particles are separated depending mainly on the ratio between the particle size and the pores they have to pass through. In filtering, almost all fibers are separated, and primarily the water passes through the pores, with a flow rate depending on the driving pressure as well as the pore size.

Often there is a concentration gradient, and thus it is natural to treat the filter as a thin vessel, where filtration/screening takes place along the surface with an ever-increasing feed concentration.

Figure 1. Simple model of a screen: the feed concentration increases along the screen surface from the inlet (Conc in) to the outlet (Conc out).

The separation also depends on the concentration, the shear forces over the surface and the flow velocity through the filter (l/m2.h). In some cases, addition of chemicals (like in the stock preparation) can increase the flow rate or decrease the amount of “fines” passing through. This is also of interest to model.

The pressure drop over the filter or screen can be modeled as if it were a “fake valve”, where the clean filter corresponds to the admittance factor, i.e. the flow per hour as a function of the driving pressure when “the valve” is 100 % open. Clogging of the screen then results in a “valve opening” of less than 100 %.

In the pressure-flow network solver we use the relation

Fs = V * A * (ρ * (P1 − P2))^0.5

where the admittance A is a constant specific to a certain “valve” (the flow in kg/h at 1 bar pressure difference with the valve fully open, i.e. an unclogged screen), Fs is the flow (kg/h) and (P1 − P2) the corresponding pressure drop. The valve opening V represents the total open area of the unclogged screen, with V = 100 for the nominal pore area (the sum of all holes or slots). This is valid for pure water, which is used to calculate the admittance factor A (at maximum rotor speed as well).

The absolute flow through the screen plate is determined by the difference between the pressure in the feed and at the accept side.
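As a small sketch, the “fake valve” relation can be coded directly. The normalization of V below (so that V = 100 gives the unclogged nominal area) is one consistent reading of the formula, and the numbers in the example are invented:

```python
# Sketch of the screen plate as a "fake valve": Fs = V*A*sqrt(rho*(P1-P2)).
from math import sqrt

def screen_flow(V, A, rho, P1, P2):
    """Flow (kg/h) through the screen plate.
    V: valve opening, 100 = unclogged nominal pore area;
    A: admittance, flow at 1 bar differential with the plate clean and open;
    P1, P2: pressures in bar; rho: density relative to water."""
    return (V / 100.0) * A * sqrt(rho * max(P1 - P2, 0.0))

print(screen_flow(V=100.0, A=5000.0, rho=1.0, P1=2.0, P2=1.5))  # clean plate
print(screen_flow(V=50.0,  A=5000.0, rho=1.0, P1=2.0, P2=1.5))  # half clogged
```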

The concentration of each fiber fraction in the reject and the accept, respectively, is determined from the ratio between the fiber size and the pore opening, where a weighting is made between the open pore area of the screen and the shape of the particle. A weighting can be made between length, width and thickness (three dimensions), so that length normally gets a lower weighting factor than the other dimensions, especially in the slot type of screen.

At maximum rotor speed the active screen hole area is almost as large as the nominal one. The holes become more and more clogged as the rotor velocity goes down, and a minimum value is set for zero rpm.

Concerning concentration, we use a function where the hole area goes down as a function of the concentration above a certain preset concentration. Above another concentration, the screen is totally clogged.

The clogging of the screen is implemented as a ramp, where back-flushing resets the open area of the openings to the original value, or to a somewhat lower value due to an irreversible part of the total clogging.


When we configure a screen we first select holes or slots. The dimensions of these, the total hole area per m2 and the total screen area are inputs, and give the total nominal hole area.

The ratio between the active hole area and the nominal one gives the average pore size and the pressure drop over the screen.

Active_hole_area = total_nominal_hole_area * rpm_par * clog_par * conc_par

where

rpm_par = COF(6) + (1 − COF(6)) * rpm / rpm_max, with COF(6) = 0.3 as default.

This gives a realistic impact of the rotor for an average screen, reducing the separation efficiency from 90 to 71 % when the rotor speed is increased by 30 %; these impact values are reported from experiments for a typical screen. With COF(6) = 0.2, the impact would be a drop from 90 to 79 % separation efficiency, which is a bit more conservative.

clog_par = [COF(3) / (short_clog_time + COF(3))] * [COF(4) / (long_clog_time + COF(4))]

conc_par = 1 − (concentration in reject / COF(5))

where COF(3) = 0.06 as default. COF(3) is chosen as the maximum value of the reject concentration before the screen clogs. This may be 0.06 for a typical screen, giving the right effect on e.g. a screen going from 0.5 to 1.5 %, with an increase in separation efficiency from 58 to 81 %. COF(3) = 0.15 is the very maximum value for any screen. COF(4) and COF(5) are 1.0 as default, but can be calculated from experimental data.

Area_par = active_hole_area / total_nominal_hole_area

Dh = hole diameter or slot width in mm.

Fiber/particle lengths, diameters and heights are also given in mm; the virtual radius is also in mm.

The separation efficiency (SepEff) for each particle size is calculated principally as the ratio between the weighted fiber size and the hole/slot diameter, compensated for the clogging of the pores by multiplying the hole diameter by Area_par. To avoid division by zero, 1.0 is added to the hole diameter term, and COF(7) is used for tuning to different screen types:

SepEff0 = COF(20) * length + 10 * COF(20) * (width + height)

SepEff = (virtual radius / accept flow rate)^COF(21) * SepEff0 * COF(7) / (1 + Dh * Area_par)

For slots, Dh is calculated as Dh = slot width * 7.0. This has been found realistic from experimental data.

COF(7) has to be calculated (see below), while COF(20) = 0.0350 and COF(21) = 0.08 as default values. The default value for COF(7) is 3.44 for flow rates of 10-150 l/m2.s.

The mass balance between feed, reject and accept now is calculated as:


Mass_flow_reject = SepEff * (kg/h of each fiber fraction in the feed) * (reject flow (m3/h) / feed flow (m3/h))^COF(22)   (kg/h)

Mass_flow_accept = Mass_flow_feed − Mass_flow_reject

The concentration of each fiber fraction in the accept is then

Concentration_accept = Mass_flow_accept / accept flow (m3/h).
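The formulas above can be transcribed directly into code. The sketch below uses the default coefficient values from the text, assumes COF(22) = 1.0 (no default is given) and an invented operating point, and is meant only to show how the pieces fit together:

```python
# Sketch: screen separation efficiency and per-fraction mass balance,
# transcribed from the equations above (dimensions in mm, flows as noted).
def separation_efficiency(length, width, height, virtual_radius,
                          accept_flow_rate, Dh, area_par,
                          COF7=3.44, COF20=0.035, COF21=0.08):
    sep0 = COF20 * length + 10.0 * COF20 * (width + height)
    return ((virtual_radius / accept_flow_rate) ** COF21
            * sep0 * COF7 / (1.0 + Dh * area_par))

def fraction_balance(sep_eff, feed_kg_h, q_rej, q_feed, q_acc, COF22=1.0):
    """Reject/accept mass flows (kg/h) and accept concentration (kg/m3)."""
    m_rej = sep_eff * feed_kg_h * (q_rej / q_feed) ** COF22
    m_acc = feed_kg_h - m_rej
    return m_rej, m_acc, m_acc / q_acc

# Invented example: 2 mm fibers, 1.4 mm holes, accept flux 99 l/m2.s.
eff = separation_efficiency(length=2.0, width=0.03, height=0.03,
                            virtual_radius=0.06, accept_flow_rate=99.0,
                            Dh=1.4, area_par=0.8)
print(eff, fraction_balance(eff, feed_kg_h=100.0,
                            q_rej=40.0, q_feed=100.0, q_acc=60.0))
```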

2.2.1.1.3.2 HYDRO CYCLONES, CLEANERS

The second most common type of equipment is the hydro cyclone, or cleaner. This category also includes a number of similar devices, like the deculator.

Cleaners are treated as vessels that are either full (= separation working) or not full (= separation not working). The liquid level in the cyclone, or in the common vessel for several cyclones, is calculated first, and if it is positive the separation is calculated according to the following procedure.

Principally, the deviation of particles from the streamlines during the rotational flow is related to the volume of the particle divided by the friction of the particle surface relative to the water, the density difference between particle and water, the rotational velocity, the cyclone diameter (which sets the rotational velocity, higher for small-diameter cyclones) and the viscosity of the water.

First we calculate a shape factor = (1 + fiber diameter) / (1 + fiber length), both lengths in mm. This compensates for the fact that an elongated particle moves in a different way than a spherical particle.

We then calculate an adjusted particle radius. First we calculate the volume of the particle: for rectangular pieces the volume is V = H*W*L (height * width * length), and for fibers the volume is calculated from V = π*R^2*L. Thereafter the radius R of the sphere with the same volume is calculated as

R = (3V / 4π)^(1/3), with lengths in meters.

The cyclone volume is calculated from the geometric inputs, to give the residence time = volume / feed flow rate.

An adjusted cyclone radius is calculated as the average of the swirl zone and the bottom of the cone.

The basic equation for gravitational and centrifugal separation balances the gravitational force against the buoyancy force and the drag force due to the liquid motion:

(4/3)*π*R^3*ρs*g = (4/3)*π*R^3*ρliq*g + 6*π*µ*v(d)*R [Bird et al 2002].

Solving for the deviation velocity due to gravitational forces gives

v(g) = (2/9)*R^2*(ρs − ρliq)*g/µ

while the corresponding velocity for centrifugal forces becomes principally for a sphere:

v(c) = (2/9)*R^2*(ρs − ρliq)*v(r)^2/(r*µ).

The deviation velocity v(c), in m/s, is the deviation from the streamlines in the radial direction due to the liquid turning around in the cyclone (radius r) with velocity v(r). In our algorithm it is calculated using the “adjusted radius” of the equivalent sphere for other particle shapes like fibers, with a correction for the larger drag forces of long, thin fibers compared to spheres, by multiplying with the shape factor:

v(d) = COF(11) * (adjusted particle radius)^2 * Shapefactor * (density difference water-particle) * [v(r)^2 / r] * [1 / viscosity]

v(r) = Qin / Area of inlet pipe to cyclone (m/s)

µ= viscosity, 10^-3 Ns/m2 for water.

In principle we can also include the effects of higher consistency and of temperature on the separation through the viscosity term. For the consistency this means µcorr = µ * 0.02 * (conc/2.0)^-2.2 for zero to 3 % consistency. The temperature effect on the viscosity is given by µ(T1) = 1.002*10^-3 * (T1/20)^-0.737, where the temperature is in °C and the reference is the viscosity at 20 °C, 1.002*10^-3 Ns/m2.
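A sketch of these deviation velocities for the equivalent sphere; the fiber dimensions and cyclone data in the example are invented:

```python
# Sketch: equivalent-sphere radius and Stokes-type deviation velocities.
from math import pi

def equivalent_radius(r_fiber, length):
    """Radius (m) of the sphere with the same volume as a cylindrical fiber."""
    volume = pi * r_fiber ** 2 * length
    return (3.0 * volume / (4.0 * pi)) ** (1.0 / 3.0)

def v_gravity(R, rho_s, rho_liq, mu, g=9.81):
    return (2.0 / 9.0) * R ** 2 * (rho_s - rho_liq) * g / mu

def v_centrifugal(R, rho_s, rho_liq, mu, v_r, r_cyclone):
    return (2.0 / 9.0) * R ** 2 * (rho_s - rho_liq) * v_r ** 2 / (r_cyclone * mu)

R = equivalent_radius(r_fiber=15e-6, length=1e-3)   # a 1 mm fiber
print(v_gravity(R, rho_s=1100.0, rho_liq=1000.0, mu=1.0e-3))
print(v_centrifugal(R, rho_s=1100.0, rho_liq=1000.0, mu=1.0e-3,
                    v_r=5.0, r_cyclone=0.075))
```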

The total radial distance travelled by the particles is given by multiplying by the residence time in the cyclone, and thus depends on the liquid flow as well as the volume of the cyclone.

The shape factor takes into account freeness (surface roughness) as well as the shortest particle diameter. Particles with high surface area and long fibers will preferably go to the top or centre, compared to spheres and short fibers, assuming the same density. High-density particles will go towards the wall and downwards.

The absolute separation also depends on the split between the flow upwards (Qup: accept, centre) and downwards (Qdown: wall, reject). If we assume 50 % volumetric flow in both, the separation will be 50 % of the fibers in each stream for average-sized and average-shaped particles with the same density as water. To get the mass separation, we calculate the part of the incoming (inject) mass flow that goes to the reject (wall) as Minj*Qrej/Qinj.

If the density of the particles is higher, the distance travelled relative to the cyclone radius gives the extra separation efficiency of particles of a specific size towards the bottom outlet compared to the top outlet. Where the flow rate is very low, gravity is also considered, but then in relation to the liquid level in the vessel. This gives the gravimetric separation efficiency as:

ηg = COF(11) * (adjusted particle radius)^2 * Shapefactor * (density difference water-particle) * g * (1/viscosity) * (residence time / liquid level)

The separation factor for centrifugal forces is calculated as:

ηc = v(c) * residence time in cyclone / cyclone radius (upper part)

where v(c) is the radial deviation velocity and the particle residence time in the cyclone is the cyclone volume divided by Qin.

The mass flow M(I) in the reject for each particle fraction I is calculated by:

Mrej(I) = Minj(I) * ((Qrej/Qinj) + ηc + ηg) for the reject, and

Macc(I) = Minj(I) − Mrej(I) for the accept.

The mass separation efficiency of the cyclone is then ηs = Mrej/Minj = (Qrej/Qinj) + ηc + ηg.

The concentration of particles of a certain size/shape going to the wall/bottom is calculated according to:

Conc_rej,bottom(I) = Mrej(I) / Qbottom

and correspondingly for the top or centre:

Conc_acc,top(I) = Macc(I) / Qtop

To capture the negative effect on the separation of fast increases or decreases of the incoming flow Qinj, the efficiency is decreased by a turbulence effect factor:

(ηc + ηg)(t) = (ηc + ηg)(t) − ∆v/v(t)

The mass balance is then calculated, giving the concentration of fibers of each fraction (I) in the top and the bottom streams respectively.
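A sketch of this per-fraction mass balance; the clamping of the reject flow to the injected flow is an added safeguard, not something stated in the text, and the example values are invented:

```python
# Sketch: cleaner mass balance for one particle fraction, following
# Mrej = Minj*((Qrej/Qinj) + eta_c + eta_g) and Macc = Minj - Mrej.
def cleaner_split(m_inj, q_inj, q_rej, eta_c, eta_g):
    """Reject and accept mass flows (kg/h) for one fiber fraction."""
    m_rej = m_inj * (q_rej / q_inj + eta_c + eta_g)
    m_rej = min(m_rej, m_inj)          # cannot reject more than what came in
    return m_rej, m_inj - m_rej

m_rej, m_acc = cleaner_split(m_inj=50.0, q_inj=16.2, q_rej=1.62,
                             eta_c=0.12, eta_g=0.01)   # flows in m3/h
print(m_rej / 50.0)                          # mass separation efficiency
print(m_rej / 1.62, m_acc / (16.2 - 1.62))   # reject/accept concentrations
```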

The pressure-flow network makes use of the Bernoulli equation, which is principally:

v1^2/(2g) + p1/(ρg) + h1 = v2^2/(2g) + p2/(ρg) + h2 + friction losses

where v1 and v2 are the velocities, p1 and p2 the pressures and h1 and h2 the liquid heads upstream and downstream of a restriction like a valve, or at the two ends of a pipe.

If we just look at a valve with the same liquid head on both sides, we can simplify this to principally

v = Constant * SQRT(( p1-p2)/ρ)

for a fully open valve, where the constant relates to the friction losses caused by different geometries. The velocity v multiplied by the open area of the valve gives the flow through the valve for a given pressure difference.

By using this technique, setting up one equation for each node and then solving this set of equations simultaneously, we can determine the pressure and flow in the whole network of pipes and process equipment. For the different pieces of process equipment, pressure losses are determined by the operating conditions and are thus included in the calculation as well.
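A minimal sketch of the node-equation idea: two lumped “valves” in series with one unknown node pressure, solved with a standard root finder. The boundary pressures and constants are invented:

```python
# Sketch: a tiny pressure-flow network. The node pressure p is found where
# the flow in through valve 1 equals the flow out through valve 2.
from math import sqrt, copysign
from scipy.optimize import brentq

def valve_flow(C, p_up, p_dn, rho=1000.0):
    """v = C*sqrt((p1-p2)/rho), signed so that reverse flow is handled."""
    dp = p_up - p_dn
    return copysign(C * sqrt(abs(dp) / rho), dp)

p_src, p_snk = 3.0e5, 1.0e5            # Pa, boundary pressures (invented)
C1, C2 = 0.8, 1.2                      # lumped valve/friction constants

residual = lambda p: valve_flow(C1, p_src, p) - valve_flow(C2, p, p_snk)
p_node = brentq(residual, p_snk, p_src)
print(p_node, valve_flow(C1, p_src, p_node))
```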

Separation of fibers and other particles in the different pieces of process equipment is also calculated in each equipment algorithm. This gives the material balance over the equipment and for the whole network for each time step, considering also the dynamics. This is useful when you want to test new advanced control algorithms for which you do not have the DCS code, but can write them in Fortran or C++, or make use of Matlab instead. With a good process model it is possible to test the control strategy before implementing it on the real process.

Other examples

There are many different models developed for both paper and pulp mills. For pulp mills the major focus has been on the digester, and several examples of good models exist for a number of different applications, like [Bhartiya et al 2001], [Wisnewski et al 1997] and [Jansson et al 2004].

2.2.1.1.4 MODEL VALIDATION AND TUNING WITH PROCESS DATA

Examples of the process equipment mentioned earlier are the screen and the hydrocyclone (= cleaner).

For the screen algorithm we assume plug flow from the top downwards, but with total mixing in the radial direction. The rotor normally rotates at a relatively high velocity, giving shear forces at the screen surface. Fibers are mechanically separated at the screen if they are larger than a certain size in relation to the hole area or slot size, but the separation also depends on the concentration, fiber shape, flow rate through the holes or slots, rotor speed, temperature and volume reduction factor (reject rate). The model gives the pressure drop over the screen, as well as the mass and energy balance. At low reject flow the concentration in the reject goes up, and if it becomes too high, the motor stops, the fibers accumulate and eventually the whole screen plugs up. The pressures around the screen are calculated, as well as the fiber size distributions, amounts and concentrations in the different streams.

In the tables below, results from the model algorithm calculations are compared to data from experiments done by technical institutes in Canada (Paprican) [Gooding et al 1992] and Sweden [STFI 1999]. The model used is the one described earlier:

Table 1. For a 1.4 mm hole screen, the following results can be seen (separation efficiency in % per fiber length class; Exp = experimental, Calc = calculated):

Qrej/Qfeed   Accept      0-0.5 mm         0.5-1 mm         2-5 mm fibers
             (l/m2.s)    Exp      Calc    Exp      Calc    Exp      Calc
0.4          99          74.8     77.1    80.5     80.6    92.0     91.8
0.7          16.5        89.3     89.0    92.3     93.0    97.6     100

Table 2. For a 0.4 mm slot screen, assuming 0.7 mm long, 0.025 mm wide fibers (separation efficiency; accept flow in l/m2.s per column):

Q_rej/Q_feed   Experimental             Calculated
               50      100     200      50      100     200
0.4            0.61    0.52    –        0.63    0.52    –
0.29           0.33    –       –        0.35    –       –
0.09           0.27    0.18    0.12     0.25    0.18    0.15

Table 3. For a cleaner (= hydrocyclone), the corresponding figures are shown below (separation efficiency per fiber length; Exp = experimental, Calc = calculated):

Flow     Qrej/Qinj   0.25 mm        0.75 mm        1.75 mm        3.5 mm fibers
(l/min)              Exp     Calc   Exp     Calc   Exp     Calc   Exp     Calc
270      0.10        0.24    0.25   0.29    0.31   0.32    0.32   0.32    0.33
500      0.26        0.56    0.54   0.67    0.66   0.74    0.70   0.77    0.71

As can be seen, the prediction of the separation efficiency can be quite good, although not perfect. The reasons are both the difficulty of making totally controlled experiments (see e.g. the first row of the cleaner experiment, where the separation efficiency is not correlated to fiber size at high reject ratio and low flow) and the difficulty of catching all possible effects in one single model based on first principles. The tuning/configuration of the models is done to fit the actual equipment and the normal operating range, but gives reasonably good results outside this area as well.

In reality, data from many more papers were used to build the models, where different variables were varied and different types of equipment were tested. Unfortunately, few of these include variations of flow rates, concentrations, reject/accept ratio, different fiber size distributions etc. together; mostly only one of these is varied, and the effect on different fiber sizes is normally not examined. Still, together all these data bring new pieces into the puzzle of building a good physical model. An example of starting with a physical model and tuning it with statistical plant data is given by [Pettersson 1998].

When we look at a new type of screen, we normally use simple mass balance data to tune the model at a few operational conditions. These data are used to tune some of the coefficients mentioned earlier, while the rest keep their default values until new data appear that allow more detailed tuning for the specific equipment; a sketch of such a fit is shown below. Some of this has been collected from [Nilsson 1995] and [Jegeback 1990, 1993].
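As a toy illustration of this kind of tuning (the one-coefficient model and the data points are purely illustrative, chosen to resemble the slot screen table above), a coefficient can be fitted to a few mass balance points by least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

# A few measured separation efficiencies vs reject ratio (illustrative)
q_ratio = np.array([0.09, 0.29, 0.40])
eta_meas = np.array([0.27, 0.33, 0.61])

def screen_model(q, cof):
    """Toy one-coefficient screen model: the reject ratio plus a
    tunable extra separation term."""
    return q + cof * (1.0 - q)

cof_fit, _ = curve_fit(screen_model, q_ratio, eta_meas, p0=[0.1])
print("tuned coefficient:", cof_fit[0])
```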

2.2.1.1.5 SIMULATORS FOR OTHER APPLICATIONS

After the start-up, or directly for an existing mill, the dynamic simulator can be used for optimizing the process. Different ways of running the process can then be tested by the process engineers on the simulator, to see how e.g. water flows or dry solids contents are influenced, for instance during a grade change.

By collecting data through the information management system and sending them to the simulator, it may be possible to use the data for more advanced diagnostics [Karlsson et al 2003], covering both the process and the sensors, by making use of the expert system functions residing in the simulator. With a model that shows how different sensors and process parts are correlated with each other, predictions of performance can be made as well. By comparing with the real process signals, deviations and drifts can be diagnosed, alarming the operator before it is possible to see the faults "manually".

For the process optimization, higher-fidelity models (the fidelity of the models can be selected for the most important equipment) may be needed [Bell et al 2004], [Dhak et al 2004], [Hess 2000] and [Morari et al 1980]. The communication speed with the DCS system does normally not need to be considered, as the process engineer can work on the simulator without the real-time DCS system connected. With interaction between a simulator and an optimization algorithm, however, the communication may still be the limiting factor.

2.2.1.1.6 CONCLUSIONS


A model has been developed for the major pulp and paper equipment. By comparing experimental results to model predictions, it has been shown that a reasonably good prediction of the separation efficiency for many fractions can be made for separation equipment operating over a wide operational range. The same model structure can be used for many different types of process equipment, where only a number of parameters (constants) have to be configured with data from relatively few experiments, or in principle with normal mass balance data that can be obtained from vendors or mills.

2.2.1.1.7 REFERENCES:

NOPS Paper Machine Operator Training System, ABB Process Automation, 1990

Bird R.B., Stewart W.E. and Lightfoot E.N.: Transport Phenomena, John Wiley & Sons, 2nd edition, 2002.

Gooding Robert W. and Richard J.Kerekes: Consistency changes caused by pulp screening, Tappi Journal, Nov 1992, p 109-118.

Nilsson Anders: The simulation resource Extend, Licentiate Thesis, Pulp & Paper Tech Dept, Royal Inst of Technology, Stockholm, TRITA-PMT Report 1995:14.

Jegeback M. and B.Norman: Fextend- computer simulation of paper machine back water systems for process engineers, STFI report A 987, 1990.

Jegeback M.: Dynamic simulation of FEX back water system by Fextend, ,STFI report A 996, 1993

Data from experimental reports from STFI on screening and cleaning 1999.

Bell J., Dahlquist E., Holmstrom K., Ihalainen H., Ritala R., Ruis J., Sujärvi M., Tienari M.: Operations decision support based on dynamic simulation and optimization. PulPaper 2004 conference, Helsinki, 1-3 June 2004. Proceedings.

Dhak J., Dahlquist E., Holmstrom K., Ruiz J., Bell J., Goedsch F.: Developing a generic method for paper mill optimization. Control Systems 2004, Quebec City, 14-17 June 2004. Proceedings.

Ulf Persson, Lars Ledung, Tomas Lindberg, Jens Pettersson, Per-Olof Sahlin and Åke Lindberg: “On-line Optimization of Pulp & Paper Production”, in proceedings from TAPPI conference in Atlanta, 2003.

Wisnewski P.A, Doyle F.J and Kayihan F.: Fundamental continuous pulp digester model for simulation and control. AIChE Journal Vol 43, no 12, dec 1997, pp 3175-3192.

Pettersson J. (1998): On Model Based Estimation of Quality Variables for Paper Manufacturing. Tech. Lic. Thesis, KTH.

Bhartiya, Dufour and Doyle (2001): Thermal-hydraulic modelling of a continuous pulp digester, in proceedings from the Conference on Digester Modelling, Annapolis, June 2001.


Hess T. (2000): Process optimization with dynamic modeling offers big benefits, I&CS,August, p 43-48.

Karlsson C., Dahlquist E., “Process and sensor diagnostics - Data reconciliation for a flue gas channel”, Värmeforsk Service AB, 2003, (in Swedish).

Morari M., Stephanopolous G. and Arkun Y.: Studies in the synthesis of control structures for chemical processes. Part 1: Formulation of the problem. Process decomposition and the classification of the control task. Analysis of the optimizing control structures. AIChE Journal, 26(2), 220-232, 1980.

Ryan K. and Dahlquist E.: MNI Experiences with process simulation. Proceedings Asia Paper, Singapore, 2002.

Jansson and Dahlquist E.: Model based control and optimization in pulp industry, SIMS 2004, Copenhagen, Sept 23-24, 2004.

2.2.2 STATISTICAL MODELS/DATA DRIVEN MODELS

If we do not have a good process understanding, but good process measurements are available, data-driven models are a good choice. You collect data from the actual process and correlate these with different quality variables, normally obtained from lab measurements. In this way we can get on-line prediction models for many properties that are not possible to obtain otherwise, or where on-line instruments are far too expensive to be motivated.

Still, we must be aware of the principle "shit in - shit out". We need process variations that are significant and uncorrelated to build a good statistical model. This is sometimes not realised, and the operators will then be very disappointed with the quality of the predictions. Often, very important variables were also kept constant during the data collection, like the wire speed of the paper machine, so that the most important variable is not included in the model.

To avoid this type of poor model, you can preferably make use of a "factorial design" of experiments, where different variables are varied in a structured way, independently of each other. Even if you cannot perform all the experiments you would like to, you can at least get a number of them, and then probably a significantly better model than if you had just used "randomly collected" data from the process. The model will most probably also be more robust with respect to process variations not directly encountered in the model data.
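A two-level full factorial design is straightforward to generate; a minimal sketch, with purely illustrative variable names and levels:

```python
from itertools import product

# Low/high levels for three illustrative process variables
levels = {
    "wire_speed":   (900, 1100),   # m/min
    "headbox_conc": (0.5, 0.9),    # %
    "refining":     (80, 120),     # kWh/t
}

# Full 2^3 factorial: every combination of low/high, varied independently
design = [dict(zip(levels, combo)) for combo in product(*levels.values())]
for run in design:
    print(run)   # 8 experiments; run these and fit the model to the results
```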

When a statistical model is built, you often start by including any kind of measured data, but later it is normally best to reduce the prediction model to the most important variables. This gives robustness and makes the model easier to update continually. Statistical models can handle dynamics, but physical models are normally better at that. Examples of statistical model types are ANN, MVDA, BN, PLS and PCA. These techniques are presented later in this book.

One important application of statistical models is as “soft sensors”, and several examples of such are given later on.


2.2.2.1 FINDING DEPENDENCIES AND TIME LAGS BETWEEN DIFFERENT SIGNALS

Daniel Gillblad and Anders Holst, Swedish Institute of Computer Science, Sweden

In industrial processes, finding dependencies between measured attributes is important. Most of the measured attributes in these cases come in the form of time series. It is then also important to keep track of the time lag between the different measurements. For example, it can take around 8 hours for the pulp to move through the digester, meaning that relevant dependencies between different parts of the process may have a significant delay.

One way of collecting all measurements for a certain volume of pulp is to use a plug flow model, like the Quality Foot Print (PQF) made by ABB. Here we follow the passage of e.g. 5 tons of pulp all the way through the fiber line to the final paper product. All new measurements along the process line are attached to the same volume of fibers, and thus prediction models become much easier to build with some robustness. This approach is however intrusive, and it would be beneficial to instead rely only on measured data from normal operations.

There are several ways of determining the correlation between series, most of them suffering from specific problems when applied to real-world data. Here, we will discuss a well-performing measure based on the mutual information rate, which also allows us to determine the most significant delays in the correlations.

2.2.2.2 CORRELATION MEASURES

Correlation between attributes can be measured in a number of different ways, the most commonly used perhaps being Pearson's correlation coefficient and the covariance. These are very useful, robust measures that often give a good indication of the dependency structure of the sequences. If, for some reason, we can be sure that there are only linear dependencies and independent samples in the data, they are also optimal measures. When non-linear dependencies are present or samples are not independent, as in a time series, the measure might be fooled into either giving a low value of the correlation for two highly correlated variables, or significantly overestimating the actual correlation. The use of information theory [4, 5], or more specifically the concept of mutual information, can provide a solution [3].

2.2.2.3 ENTROPY AND MUTUAL INFORMATION


The entropy of a stochastic variable [1, 2], which can be thought of as a measure of the amount of uncertainty or the mean information from the variable, is defined (in the discrete case) as

H(X) = − ∑(xk∈X) P(xk) log P(xk)   (1)

If two variables X and Y are independent, H(X,Y) = H(X) + H(Y). If the variables are not independent, H(X) + H(Y) will be larger than H(X,Y); that is, some of the information in H(X,Y) is included in both of the marginal entropies. This common information is called the mutual information, I(X;Y). It can also be thought of as the reduction in the uncertainty of one variable due to the knowledge of the other, or more formally

I(X;Y) = H(X) − H(X|Y) = H(Y) − H(Y|X) = H(X) + H(Y) − H(X,Y)   (2)

The mutual information is symmetric and is always larger than or equal to zero, with equality only if X and Y are independent. It can therefore be viewed as a measure of the dependence between two variables. If the variables are independent, the mutual information between them will be zero. If they are strongly dependent, the mutual information will be large. Mutual information is a general correlation measure and can be generalised to all kinds of probability distributions. It is also, given an appropriate model of the distributions, able to detect non-linear dependencies between variables.

To be able to calculate the mutual information, we have to know both the variables' marginal distributions and their joint distribution. If the parametric forms of the distributions are not known, it is still possible to calculate the mutual information by quantising each variable into discrete values, or bins. Each marginal is discretised, and the joint distribution is modelled by the grid resulting from the two marginal distribution models. Histograms are then constructed from the data using this discretisation, and from these the probabilities are estimated.

The number of discrete values in the grid is critical. Choosing too fine a grid, with more bins than data points, results in an estimate of the mutual information equal to the logarithm of the number of data points. If too few intervals are chosen, again only part of the correlation will be detected. When discretising the marginal distributions, it is useful to set the limits between bins so that all bins contain the same number of data points. This gives a much more stable measure: it is less sensitive to both skewness and outliers, and the resulting grid uses the highest resolution in the areas with the most data points.
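A minimal sketch of this binned estimate with equal-frequency bins (our own illustration, with the bin count as a tuning choice):

```python
import numpy as np

def mutual_information_binned(x, y, n_bins=8):
    """Estimate I(X;Y) by discretising each marginal into equal-frequency
    bins and building a 2-D histogram on the resulting grid."""
    # Equal-frequency bin edges: every bin gets ~the same number of points
    qs = np.linspace(0, 100, n_bins + 1)
    xd = np.digitize(x, np.percentile(x, qs)[1:-1])
    yd = np.digitize(y, np.percentile(y, qs)[1:-1])
    joint = np.histogram2d(xd, yd, bins=n_bins)[0] / len(x)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0  # avoid log(0) for empty cells
    return np.sum(joint[nz] * np.log(joint[nz] / np.outer(px, py)[nz]))
```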

Another common assumption about the distributions is that they are Gaussian. This results in a measure where a linear correlation is assumed, and it is only possible to detect the linear part of the correlations in the data. To derive an expression for the mutual information between Gaussian variables, we can start from an expression of the entropy for a Gaussian distribution. The entropy of an n-dimensional Gaussian distribution can be written as


h(X1, X2, …, Xn) = ½ log((2πe)^n |C|)   (3)

where |C| denotes the determinant of the covariance matrix. Using the equations presented earlier, it is easy to calculate the mutual information for Gaussian distributions. When calculating the mutual information under these assumptions, the only parameters needed are the means and variances of the two variables and the covariance between them, all easily estimated from data.
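For two scalar variables, combining equations 2 and 3 gives the closed form I(X;Y) = −½ log(1 − ρ²), where ρ is the linear correlation coefficient; a minimal sketch:

```python
import numpy as np

def mutual_information_gaussian(x, y):
    """Linear (Gaussian) mutual information: I(X;Y) = -0.5*log(1 - rho^2),
    from the entropies of the 1-D and 2-D Gaussian distributions."""
    rho = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log(1.0 - rho ** 2)
```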

The measures described above are general correlation measures. If we are working with sequential data, the correlation is usually measured as a function of the time shift between the series. This can then be plotted as a correlogram for visual inspection, or used for automatic generation of dependency graphs.

2.2.2.4 THE MUTUAL INFORMATION RATE

Let us now have a look at how we can extend the notion of mutual information to efficiently find dependencies between time series [6]. We start from an expression for the uncertainty of a sequence, corresponding to the entropy of a single variable. Loosely speaking, if we have a sequence of n random variables, this uncertainty can be defined as how the entropy of the sequence grows with n. This is called the entropy rate of the process and can be defined as

Hr(X) = lim(n→∞) (1/n) H(X1, …, Xn) = lim(n→∞) H(Xn | Xn−1, …, X1)   (4)

Based on the entropy rate, we can then construct a measure of the mutual information rate,

Ir(X;Y) = Hr(X) + Hr(Y) − Hr(X,Y)   (5)

Informally, it can be understood by considering the entropy rate of a sequence as analogous to the entropy of a stochastic variable and then applying equation 2. This way, the mutual information rate measures the complete dependence between the sequences.

We can make a reasonable estimate of the entropy rates in this expression if we use a Markov assumption, i.e. we assume that the process has a limited memory, so that the value of a variable depends only on the closest earlier values. When we make the Markov assumption, we also have to take the shift of the sequences into account; that is, the assumption is that one variable affects the other with a specific time delay. If we denote this shift d, then using a first-order Markov assumption and assuming stationary sequences, the mutual information rate Ir(X;Y;d) can be simplified to entropies of joint distributions as


Ir(X;Y;d) = H(Xn | Xn−1) + H(Yn | Yn−1) − H(Xn, Yn−d | Xn−1, Yn−d−1)

= H(Xn, Xn−1) − H(Xn) + H(Yn, Yn−1) − H(Yn) − H(Xn, Yn−d, Xn−1, Yn−d−1) + H(Xn, Yn−d)   (6)

Once again, the distributions can be modelled by discretising the values using a multi-dimensional grid. First each marginal is discretised, and then two- and four-dimensional grids are constructed for the joint distributions based on these marginal grids. With this method, using N bins for the marginal distributions leads to N^4 bins when estimating the joint distribution of four variables.

Instead of using bins, Gaussian distributions can be used, resulting in a linear measure tailored for time series. This is a much more robust measure; much less data is required to estimate the Gaussians reliably than to populate a grid. The drawback is of course that essentially only the linear part of the dependencies can be discovered. In many practical cases this may be sufficient, since the linear part is often dominant.
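A minimal sketch of this linear (Gaussian) information rate of equation 6, assuming equal-length stationary series and a non-negative delay d; sweeping d and plotting the result gives the correlograms discussed below:

```python
import numpy as np

def gauss_entropy(*series):
    """Entropy of a Gaussian fitted jointly to the given 1-D series,
    per equation 3: 0.5 * log((2*pi*e)^k * det(C))."""
    c = np.atleast_2d(np.cov(np.vstack(series)))
    k = c.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** k * np.linalg.det(c))

def mi_rate(x, y, d):
    """Linear mutual information rate Ir(X;Y;d) of equation 6 (d >= 0)."""
    t = np.arange(d + 1, len(x))            # valid time indices
    xn, xp = x[t], x[t - 1]                 # X_n and X_{n-1}
    yd_, ydp = y[t - d], y[t - d - 1]       # Y_{n-d} and Y_{n-d-1}
    return (gauss_entropy(xn, xp) - gauss_entropy(xn)
            + gauss_entropy(yd_, ydp) - gauss_entropy(yd_)
            - gauss_entropy(xn, yd_, xp, ydp) + gauss_entropy(xn, yd_))
```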

2.2.2.5 TEST RESULTS ON INDUSTRIAL PROCESS DATA

We will now present a couple of examples, both taken from a chemical plant application. Only the linear versions of the measures are shown since they are more stable than the binned versions.

The top left diagram of figure 1 shows the linear correlation between the measured variable X3 and the controlled variable C63. It is an example of a well-behaved, linear correlation with a short and reasonable time delay. The correlogram shows just one clear peak, at delay −5 minutes, indicating that this probably is a real correlation between the attributes and not produced by artifacts in the data. The mutual information rate correlogram for the same attributes, in the top right diagram of figure 1, shows the same behaviour. It is a bit more peaky and shows much lower correlation, but the peak is at almost the same place, delay −6 minutes, as in the mutual information correlogram.

In the lower left diagram of figure 1, the linear mutual information between attribute X48 and Y is shown. The correlogram is very smooth, although somewhat low, but the measure is obviously fooled by some general trend in the data, since it increases steadily with decreasing values of the delay. The maximum is at −400 minutes simply because that is the chosen plot range, and this is too long a delay to be considered reasonable in this case. The mutual information rate, on the other hand, shown in the lower right diagram of figure 1, shows a clear peak at delay 2. That is a plausible value of the delay between the sequences, although the value of the correlation is rather low. The information rate diagram is not at all as smooth as the mutual information, showing several small spikes which are most likely effects of noise in the data.


Figure 1: Results on chemical plant data.

2.2.2.6 DISCUSSION AND PRACTICAL CONSIDERATIONS

Using the mutual information or the normal correlation coefficient between time series tends to give too high a value of the correlation. This happens because, if the time series move slowly enough, the relation between the series at one point in time is likely to be maintained for several time steps. This means that pure random coincidences between the series get multiplied by a factor depending on how slow the series are, making the correlation seem more significant than it is. The mutual information rate, on the other hand, which only considers the new information in every step, correctly compensates for this effect, but instead requires a more complicated model to estimate, which makes it more sensitive to noise.

All in all, the linear information rate seems to be the measure that gives the most reliable indications of correlations between the time series. However, due to the different trade-offs, a general rule is that a feature, a peak, should appear using at least two of the methods to be considered significant.

2.2.2.7 REFERENCES


[1] Shannon C. E. (1948). The mathematical theory of communication. Bell Syst. Tech. J. 27:379-423.

[2] Shannon C. E. (1951). Prediction and entropy of printed English. Bell Syst. Tech. J. 30:50-64.

[3] Li W. (1990). Mutual information functions versus correlation functions. Journal of Statistical Physics. 60:823-837.

[4] Cover T. M., Thomas J. A. (1991). Elements of Information Theory. John Wiley and Sons, New York.

[5] Ash R. (1967). Information Theory. Interscience Publishers, New York.

[6] Gillblad D. and Holst A. (2001). Dependency derivation in industrial process data. In Proceedings of the 2001 International Conference on Data Mining, pp. 599–602. IEEE Computer Society Press, Los Alamitos, CA.

2.2.3 ARTIFICIAL NEURAL NETWORKS (ANNS)

Martin Brown and Hong Wang, University of Manchester, UK

An artificial neural network (ANN) is defined as a data processing system consisting of a large number of simple, highly interconnected mathematical processing nodes (artificial neurons or perceptrons), in an architecture inspired by the structure of the cerebral cortex of the brain, that allows learning complex non-linear behaviour patterns (Tsoukalas L.H. and Uhrig R.E., 1997).

ANNs have been widely considered black-box models, but several papers present different approaches for calculating the impact of the inputs on each output of a neural network (Chitra S.P., 1993; Tchaban T. et al., 1998). Taking these considerations into account, ANNs should rather be defined as grey-box models. Mathematically, an ANN is a broad function, Outputs = f(Inputs), where f is a function of the architecture, internal weights, biases and transfer functions. This function generally implies a complex non-linear behaviour which is rather difficult to translate into a specific physical meaning. If this translation could be clearly defined, ANNs could be considered white-box models within the studied range of values of each input. As this translation, due to its difficulty, has not yet been achieved in any case in the pulp and paper industry, we consider ANNs to be grey-box models at this moment.

2.2.3.1 TYPES OF ANNS


Different types of ANNs can be defined depending on their structure and way of working. Neuronal nodes are usually arranged in layers, which are interconnected node by node with specific connection weights. Connections can exist only from one layer to the following, or between nodes in the same layer, or even backwards. These different connection patterns determine the different types and applications of ANNs.

Figure 1 shows one of the simplest cases, the multilayer perceptron or feed-forward ANN, which has neither lateral connections between neurons in the same layer nor back-connections to previous layers. Each node sums its inputs, applies a transfer function and produces an output. The transfer functions can vary between nodes and can be step functions, linear functions, sigmoids and so on.


Figure 1 – Scheme of a feed forward ANN.

Two main classifications can be carried out. The first one classifies ANNs depending on the data that they process.

Auto associative ANNs: Output data and input data are the same. They are used in signal filtering and database debugging.

Hetero associative ANNs: Output is different from input. They are used commonly for a great variety of systems.

The second classification can be carried out according to their characteristics and applications:

• Widrow-Hoff ANNs: Networks with applications mainly for linear systems, like the ADALINE network. They use algorithms based on minimisation of the mean squared error, and their most famous application has been echo removal in acoustic signals.

• Multilayer Perceptrons: They are the most used for nonlinear pattern recognition. They consist of several simple processing nodes placed in a multilayer distribution. They commonly use different back propagation algorithms including heuristic and numeric modifications.


• Associative ANNs: They include logic expressions in their calculations, like the simple associative network or the recognition network. They are based on different training error reduction rules like Kohonen rule or Hebb rule. Their main application is the classification of different groups or classes.

• Competitive ANNs: They work with a different system based on competitive comparison of different values (called prototypes) in a recurrent layer. They have some similarities with genetic algorithms, as they only use those prototypes that give the best performance. Self Organising Maps (SOMs) are included in this sort of ANNs, and they are commonly used for many different applications.

• Grossberg network: this sort of ANN is based on the filling of incomplete data, just as our brain does with, for instance, visual imperfections.

• Other ANNs: There are lots of ANN types and modifications of each type. Probabilistic ANNs, Radial Basis ANNs, General Regression ANNs, with dead times (Elman ANNs, autoregressive non-linear moving average ANNs, real-time recurrent ANNs, polynomic ANNs, etc.).

2.2.3.2 WAY OF WORKING OF ANNS

As previously mentioned, an ANN's architecture usually consists of interconnected layers formed by nodes. Each layer has processing significance, and the total number of layers determines how complex the behaviours are that the ANN will be able to predict. A higher number of hidden layers, however, results in more complex evaluation of the internal weights, as well as a marked tendency to overfit data noise.

The input layer presents data to the ANN without prior summing, only applying the selected transfer functions. The internal layers are used to model the behaviour; depending on the studied system, different transfer functions can be applied. The output layer makes the final sums and presents the data. When using multilayer perceptrons it is quite common to use linear transfer functions in this layer, which usually speeds up the training step.

Through different algorithms, the connection weight matrixes are calculated iteratively, and in that way the importance of each connection is calculated from the output layer back to the input layer. The most used type of algorithm in the pulp and paper industry is the back propagation algorithm.

2.2.3.3 BACK PROPAGATION ALGORITHM

Prediction error reduction in an ANN takes place through a training-learning process, which consists of modifying the internal weights so that the output comes as close as possible to the training data. There are different types of algorithms that perform such weight tuning, like the back propagation algorithms and their heuristic modifications, and those based on genetic algorithms, fuzzy logic, Bayesian fusion, and so on.

The most used training algorithms in the pulp and paper industry, according to the existing literature, are those based on back propagation weight-changing rules like the gradient descent rule. They are usually applied with a minimum of three layers (one hidden layer) (Aguiar H.C. and Filho R.M., 2001; Broeren L.A. and Smith B.A., 1996; Campoy-Cervera P. et al., 2001; Kumar A. and Hand V.C., 2000; Masmoudi R.A., 1999; Miyanishi T. and Shimada H., 1998; Vaughan J.S. et al., 1999).

Training of an ANN is structured with the following steps:

1. Create random initial weight matrixes with values between −1 and 1. Matrixes created with constant values usually do not fit the patterns as well as random ones. To ensure good results, several initial weight matrixes should be trained when developing a robust model.

2. Select a training pair. A training pair is an input vector together with the output vector taken at the same time. This selection is carried out over all data selected for training. When working with vectors (the most usual way) there will be one vector element per input.

3. With the obtained weights (initial from step 1, later recalculated in step 5), the ANN outputs are calculated following the same procedure in each neuron (summing weighted inputs and applying transfer functions), starting from the first layer and concluding with the output layer, giving the ANN responses.

4. The obtained responses are compared with the target values from the experimental data. Errors are calculated as the difference between response and target.

5. The back propagation algorithm, which will be explained in detail, is applied to adjust all the weights in the ANN and reduce the training errors.

6. Steps 2-5 are repeated for each pair of input-output data until the errors obtained in training or validation, depending on the case, are considered low enough.

During training there are two main time-consuming calculations. The first is carried out feed-forward, to calculate the ANN outputs. The second is carried out backwards, to adjust the weights depending on the errors, trying to reach the experimental targets. All calculations are made layer by layer, and the output of one layer becomes the input of the following one.

A description of the back propagation algorithm is given by the equations below, which show the procedure step by step.

First of all, the input to a neural node is calculated as the sum of all weighted outputs from the previous layer, as can be seen in equation 1:


I = x1·w1 + x2·w2 + … + xn·wn = ∑(i=1..n) xi·wi   (1)

where wi are the weights applied to xi, the outputs from the neurons of the previous layer. After summing, an activation function is applied. This function varies depending on the specific case; usual functions are the step function, the logistic sigmoid (quite usual in hidden layers), the hyperbolic tangent, linear functions, or any other type with similar characteristics.

There are many different transfer functions that can be selected for each neuron of the ANN. They are usually selected per layer, and can be step functions, linear functions, hyperbolic tangents, logistic sigmoids, etc. In equation (2) the logistic sigmoid with a flexible α parameter is shown as an example of a usual transfer function:

Φ(I) = 1 / (1 + e^(−α·I))   (2)

With these transfer functions, which just convert a single input into a single output, the output of a neuron is obtained. This output is then weighted and fed into each neuron of the following layer, where the same process (summing, applying transfer functions and weighting the outputs going forward) takes place until the output of the ANN is obtained.

The use of non-linear transfer functions in the layers is usually due to the complexity of the studied process, which requires the introduction of non-linearities in the model. The output layer makes the final summing and often applies linear transfer functions in order to speed up the iteration procedure.

After the output of the ANN has been obtained, it is compared with the experimental values, which are the targets during training. Errors are calculated as the difference between the experimental data and the ANN outputs for each output variable. Errors are usually squared to remove the effect of signs.

Once the errors have been obtained, different update rules can be applied. The most commonly accepted one is the delta rule, which states that the change in a connection weight is proportional to the partial derivative of the squared error with respect to that weight, as shown in equation 3.

Δwpq,k = −ηp,q · (δεq² / δwpq,k)   (3)

where the proportionality constant ηp,q is called the 'learning rate'; its subscripts refer to two different neurons 'p' and 'q', located in layers 'j' and 'k' respectively, 'k' being the last layer, placed just after layer 'j'. Note that the calculations are slightly different when changing the weights in previous layers, but they have the same theoretical basis. The connection weight 'w' is thus named according to the two neurons that it links.

Through mathematical developments that are available in the literature (Tsoukalas L.H. and Uhrig R.E., 1997), by evaluating equation 3 the new weights to be applied in the different weight matrixes can be calculated with equation 4, expressed in two different forms.

Δwpq,k = −ηp,q · (δεq² / δwpq,k) = −ηp,q · δpq,k · Φp,j

wpq,k(N+1) = wpq,k(N) − ηp,q · δpq,k · Φp,j   (4)

δpq,k is an error indicator, defined by equation 5:

δpq,k ≡ −2α [Tq − Φq,k] Φq,k [1 − Φq,k] = δεq² / δIq,k   (5)

where α is the same parameter as in equation 2, Φq,k is the output from neuron 'q' in layer 'k', and Iq,k is the input to the same neuron, calculated as the weighted sum of the outputs from the previous layer. Tq is the target value with which the ANN response is compared.

All the steps mentioned so far serve to calculate the new weights between the last layer and the previous one. The other weights can be calculated on the same basis; after some calculations, equation 6 is obtained.

whp,j(N+1) = whp,j(N) − ηh,p · xh · ∑(q=1..r) δpq,k   (6)

where xh are the outputs of layer 'i', placed just before layer 'j' and two layers before layer 'k', and r is the number of neurons in layer 'k'. The neuron considered for the calculation of the new weights in layer 'i' is called neuron 'h'.

A representative scheme of the ANN structure and nomenclature is shown in figure 2.



Figure 2 – Scheme of a back propagation ANN. Source: Tsoukalas L.H. and Uhrig R.E., 1997.

The calculation sequence described above is repeated, and the training errors become smaller with each new iteration. A low training error does not guarantee better predictions, however, because it can be due to overfitting of the ANN. A proper validation procedure in supervised learning should test the ANN responses against independent data, for different ANN architectures and different numbers of training iterations. Thus, the training-learning procedure must be optimised in order to reduce the validation errors and achieve robust results.
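The whole procedure can be condensed into a short script. The following is a minimal NumPy sketch of a one-hidden-layer feed-forward network trained with the plain delta rule (α = 1, biases omitted, layer sizes and training data purely illustrative, not production code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(i, alpha=1.0):
    return 1.0 / (1.0 + np.exp(-alpha * i))        # equation 2

# Step 1: random initial weight matrixes, values between -1 and 1
w_hidden = rng.uniform(-1, 1, (3, 4))   # 3 inputs -> 4 hidden nodes
w_out = rng.uniform(-1, 1, (4, 1))      # 4 hidden -> 1 output

eta = 0.5                                # learning rate
x = rng.uniform(0, 1, (50, 3))           # training inputs
t = (x.sum(axis=1, keepdims=True) > 1.5).astype(float)  # targets

for epoch in range(2000):
    # Step 3: forward pass, weighted sums plus transfer functions
    hidden = sigmoid(x @ w_hidden)
    out = sigmoid(hidden @ w_out)
    err = t - out                        # step 4: response vs target
    # Step 5: backward pass with the delta rule
    delta_out = -2.0 * err * out * (1 - out)         # equation 5
    delta_hid = (delta_out @ w_out.T) * hidden * (1 - hidden)
    w_out -= eta * hidden.T @ delta_out / len(x)     # equation 4 (batch mean)
    w_hidden -= eta * x.T @ delta_hid / len(x)       # cf. equation 6

print("mean squared training error:", float((err ** 2).mean()))
```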

2.2.3.4 MODIFICATIONS OF BACK PROPAGATION ALGORITHM

There are numerous modifications available in the literature (Hagan et al., 1995). The most common ones are heuristic modifications, like the momentum factor and a variable learning rate, and numerical optimization techniques, like the Levenberg-Marquardt algorithm.

Example of a heuristic modification: Momentum

Momentum has the same meaning as inertia in physics. The value of this parameter determines the weight given to previous weight changes when calculating the change in the current iteration. Equation 7 shows the influence of momentum on the change of the weights from the last iteration:

Δwpq,k(N+1) = −ηp,q · δpq,k · Φp,j + µ · Δwpq,k(N)   (7)

where µ is the momentum factor, whose values can vary between 0 and 1. The main utility of this factor is that it gives the ANN the capacity to escape local minima of the training error, making it more probable to find the global minimum, which in turn tends to give more robust models. The nature of each system determines the optimum value of this parameter, or whether it is better not to use it at all.
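In code, the momentum term only requires remembering the previous weight change; a minimal sketch extending a plain gradient-descent update:

```python
import numpy as np

def momentum_update(w, grad_term, prev_dw, eta=0.5, mu=0.9):
    """Weight update with momentum, equation 7:
    dw(N+1) = -eta * grad_term + mu * dw(N),
    where grad_term stands for delta * Phi."""
    dw = -eta * grad_term + mu * prev_dw
    return w + dw, dw   # keep dw for the next iteration
```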

Therefore, a first stage when developing ANNs is to make trials with different types of algorithms, and even with different values of other parameters like the learning rate, the transfer functions, or the number of hidden layers and neurons. There are also algorithms in which the learning rate varies automatically as the weight changes do, with custom increase and decrease factors.

There are quite comprehensive works describing all the possible variations of this back propagation algorithm, and even describing other types of ANNs in detail. In this chapter, a description of the most usual algorithm according to the literature has been given. Applications that have been analysed are shown in the following section.

2.2.3.5 APPLICATIONS OF ANNS IN THE PULP AND PAPER INDUSTRY

Feed-forward ANNs with back propagation algorithms are the most used in the pulp and paper industry, as previously mentioned. Some general issues apply when constructing ANNs: screening outliers and pre-processing the data are essential in order to achieve good results. These models have been used for many different purposes, the most common being:

• Environmental applications like the monitoring and prediction of different emissions (Masmoudi R.A., 1999; Smith et al., 2000)

• Product quality properties prediction (Broeren L.A. and Smith B.A., 1996; Joeressen A., 2001; Kumar A. and Hand V.C., 2000; Alonso A. et al, 2006)

• Process optimization (Campoy-Cervera P. et al., 2001; Miyanishi T. and Shimada H., 1998; Vaughan J.S. et al., 1999).

Going into more detail on some published works, it can be seen that they have in general achieved good results. ANNs have been used to optimize and control brightness (Broeren L.A. and Smith B.A., 1996; Vaughan J.S. et al., 1999), for paper pulp inspection (Campoy-Cervera P. et al., 2001), to predict BOD (Masmoudi R.A., 1999) and to optimise the wet-end (Alonso A. et al., 2006).

They have also been used combined with deterministic models/equations to predict pulping degree (Aguiar H.C. and Filho R.M., 2001) and corrugated box compression values (Joeressen A., 2001); combined with genetic algorithms (Kumar A. and Hand V.C., 2000), and also with MVA tools like PCA for fault detection (Miyanishi T. and Shimada H., 1998).

Moreover, there are some explicit references to the economic benefits of applying ANNs.

• Process optimization through brightness predictions allowed saving $289,000 per year in pulper chemical costs (Broeren L.A. and Smith B.A., 1996). It also allowed a total chemical cost reduction of US$ 138,000 annually (Smith B.A. and Broeren L.A., 1996).

• Lime kiln control resulted in 7% specific heat energy consumption savings, and a subsequent reduction in the heat losses (Järvensivu et al., 2001).

• When predicting corrugated box compression values, 5% of total costs were saved (Joeressen A., 2001).

• BOD prediction allowed US$ 1 million in savings during 1998 (Masmoudi R.A., 1999).

• Web-break diagnosis with ANNs and PCA (Miyanishi T. and Shimada H., 1998) resulted in significant improvements: first-pass retention and first-pass ash retention improved by 3% and 4% respectively; sludge generation from the effluent clarifier was reduced by 1%; alum usage decreased; and the total annual cost reduction was estimated at about US$ 1 million.

• Application of fuzzy logic and ANNs in order to control the kappa index resulted in a reduction of deviations in kappa index by 25% (Pulkkinen et al., 1997).

• Stapley et al. (1997) reported several benefits from using ANNs in the wet-end. In a substantiation trial, chemical costs were reduced by 32% (corresponding to annual savings of US$ 800,000) and production was increased by 6%. Brightness was increased by 10%, which allowed a reduction in the TMP bleaching chemical consumption required to achieve specifications (additional savings of US$ 100,000-200,000 per year).

• Finally, the development of an ANN soft sensor for chlorine dioxide stage brightness control (Vaughan J.S. et al., 1999) reduced the frequency of off-quality rounds by 13% and the brightness variability by 50%, and it also reduced the amount of chlorine dioxide used at the stage by 33%.

These applications are just a few examples from a large body of work covering almost all the steps in papermaking, from the raw materials to the final product. There is a continuous flow of new papers dealing with the use of ANNs in the pulp and paper industry for many different purposes.

An important challenge when using such models will be to convert them from black-box models into grey- or even white-box models, through different analyses of their internal structure.

2.2.3.6 LITERATURE REFERENCES

1. Aguiar H.C. and Filho R.M., Neural network and hybrid model: a discussion about different modelling techniques to predict the pulping degree with industrial data, Chem. Eng. Sci., Vol. 56, No. 2, pp 565-570. 2001.


2. Alonso A., Blanco A., Negro C., Tijero J. and San Pío I., Application of advanced data treatment to predict paper properties, Proceeding of the 5th MathMod Conference, Vienna, Austria, 8-10 Feb., 7pp. 2006.

3. Broeren L.A. and Smith B.A., Process Optimization with Neural Network Software, Prog. Pap. Recycling, Vol. 5, No. 2, pp 95-98. 1996.

4. Campoy-Cervera P., Muñoz-García D.F., Pena D. and Calderón-Martínez J.A., Automatic Generation of Digital Filters by NN Based Learning: An Application on Paper Pulp Inspection, Lect. Notes Comput. Sci., No. 2085, pp 235-245. 2001.

5. Chitra S.P., Use Neural Networks for Problem Solving, Chem. Eng. Prog., Vol. 89, Nº 4, pp 44-52. 1993.

6. Järvensivu M., Saari K. and Jämsä-Jounela S.-L., Intelligent control system of an industrial lime kiln process, Control Engineering Practice, Vol. 9, 589-606. 2001.

7. Joeressen A., Predicting Corrugated Box Compression Values Through Innovative Software, Developments in manufacture, technology and markets for corrugated board, Manchester, UK, 17-18 Sept, 4pp. 2001.

8. Kumar A. and Hand V.C., Feasibility of Using Genetic Algorithms and Neural Networks to Predict and Optimize Coated Paper and Board Brightness, Ind. Eng. Chem. Res., Vol. 39, No. 12, pp 4956-4962. 2000.

9. Masmoudi R.A., Rapid prediction of effluent biochemical oxygen demand for improved environmental control, Tappi J., Vol. 82, No. 10, pp 111-119. 1999.

10. Miyanishi T. and Shimada H., Using Neural Networks to Diagnose Web Breaks on a Newsprint Paper Machine, Tappi Journal, Vol. 81, No. 9, pp 163-170. 1998.

11. Pulkkinen M., Saastamoinen M. and Skyttä M., Kotka fuzzifies its kappa control, Pulp Pap. Eur., Vol. 2, No. 10, pp 30-32. 1997.

12. Smith B.A. and Broeren L.A., A tool for process optimization: neural network software, TAPPI Proceedings of the 1996 Recycling symposium, New Orleans, LA, USA, ISBN 0-89852-657-4. 3-6 Mar, pp 163-187. 1996.

13. Smith G.C., Wrobel C.L. and Stengel D.L., Modelling TRS and Sulphur Dioxide Emissions from a Kraft Recovery Boiler Using an Artificial Neural Network, Tappi J., Vol. 83, No. 11, p 69. 2000.

14. Stapley C.E., Butner R.E., Kangas M.Y.O., Broeren L.A. and Smith B.A., Tools for strategic analysis: neural networking, Pulp Pap., Vol. 71, No. 1, pp 89-90,93-96. 1997.

15. Tchaban T., Taylor M.J. and Griffin J.P., Establishing impacts of the inputs in a feedforward neural network, Neural Comput. & Applic. Vol. 7, pp 309-317. 1998.

16. Tsoukalas L.H. and Uhrig R.E., Fuzzy and Neural Approaches in Engineering, Wiley Interscience. 1997.


17. Vaughan J.S., Gottlieb P.M., Lee S.-C. and Beilstein J.R., The Development of a Neural Network Soft Sensor for Chlorine Dioxide Stage Brightness Control, TAPPI 99 "Preparing for the next millennium", Atlanta, GA, USA, Book 1, ISBN 0-89852-734-1. 1-4 Mar, pp 147-159. 1999.

2.2.4 EVENT DRIVEN MODELS

Erik Dahlquist, Malardalen University

In the manufacturing industries and for applications in logistics, event-driven simulation models are common. Very often there is a series of actions to take, but with slightly different actions depending on which product arrives on e.g. a conveyor belt: if a piece is broken it should be put in one box, otherwise in another; if a car is blue it should be directed to one line, but if it is red to another.

For a logistics problem, the simulation may be used to plan e.g. the distribution of a number of products to a number of sites. The models are then used together with an optimization algorithm to determine the best distribution, with storage, delivery times, capacity etc. as constraints.

In the pulp and paper industry this type of model may be used for overall planning, handling incoming orders, storage in a warehouse, distribution by boat or lorry, etc.

A suitable simulation tool for this is e.g. EXTEND, which has a number of predefined blocks to use and is easy to learn to handle. It also contains easy-to-use animations. A toy sketch of the underlying routing logic is shown below.
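The routing logic of such models is essentially conditional dispatch on item attributes, processed in event-time order; a toy sketch in plain Python (not EXTEND syntax):

```python
import heapq

events = []  # (time, tiebreak, item), processed in time order
for t, item in [(1.0, {"colour": "blue"}), (2.5, {"colour": "red"}),
                (3.0, {"broken": True})]:
    heapq.heappush(events, (t, id(item), item))

while events:
    t, _, item = heapq.heappop(events)
    if item.get("broken"):
        destination = "reject box"       # broken pieces to one box
    elif item.get("colour") == "blue":
        destination = "line 1"           # blue items to one line
    else:
        destination = "line 2"           # red items to another
    print(f"t={t}: item routed to {destination}")
```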

2.2.5 HYBRID MODELS

Erik Dahlquist, Malardalen University

By hybrid models we normally mean 'grey-box models', or combinations of physical models with statistical data. This type of model has the advantage that the physical part gives reasonably robust solutions over a broad operational area, combined with adaptation to real process data. The negative aspect is that the physical models have to be simplified, and there is always a risk of losing important features of the process behaviour. It is often also more complex to tune a physical model than to build a completely data-driven model like a neural net or a PLS model.

Until we have enough computer power to model physical processes at very high resolution, grey-box models will most probably remain the most used models for on-line applications, as they are easier to transfer from one machine to another and normally need less maintenance work after installation.


2.2.6 OVERVIEW WHEN DIFFERENT TYPES ARE TO BE USED

Erik Dahlquist Mälardalen University

From what has been said already, we can conclude that in principle white, physical models are the best from a process modeling point of view, at least if we understand the processes to be modeled. On the other hand, we seldom have the computer power needed to model a process in enough detail to really operate it as a white model.

Black-box models have the advantage of requiring essentially no process understanding. On the other hand, they need good input data to be good and robust, and the verification of the process results used for building the model must be very good. 'Shit in - shit out' is very relevant for this type of model. To get a good model we thus need e.g. a factorial design of the experiments in the process equipment to be modeled, to really get the information needed to build a model that is valid over a reasonably large operational area. For different product grades, several models may have to be built. Still, black-box models like ANN, PLS, PCA and similar can be very good for e.g. building soft sensors for important quality variables to be used on-line. This has been shown in many examples and will be covered later in this book.

Grey-box or hybrid models have the advantage of better robustness over a larger operational area, even when we do not have process data covering the whole area. From physical relations, reasonable assumptions can be made, at least if we know the process well, and by tuning the model with the available process data we can still get reasonable prediction accuracy. This type of model is gaining interest for many different applications and is used to build standard libraries for simulators, where different machines can be tuned with supplier data while the general equations remain the same. Such models can also be expanded with more variables without having to redo all the tuning work from scratch, as would be the case for statistical models. Grey-box models can also be used for soft sensors, if there is a process understanding of the factors affecting the quality.

2.3 DATA PROCESSING AND UNCERTAINTY HANDLING

Erik Dahlquist, Mälardalen University

2.3.1 HOW TO HANDLE UNCERTAINTIES

Uncertainties are something we have to live with. Still, this is often not considered in models, and thus a model may be very accurate sometimes but totally unreliable at other times. If we can determine when it is accurate and when it is not, we can take this into account in the control and optimization as well.


We can identify different types of uncertainty. One type is measurement inaccuracy: consistency sensors, for example, are often quite unreliable if there are air bubbles in the flow, but may be much more reliable if the air is removed. A temperature meter may be very accurate just after calibration, but drifts away as it fouls. And so on.

Another type of uncertainty is whether equipment will be in operation or not. If we make an optimization calculation assuming that the plant will perform well, and then a pump cavitates so that the pump flow is significantly lower than the set point from the production plan, the whole production plan may be upset.

There may also be planned upsets, like servicing of equipment, but uncertainty remains: if we neglect the service right now to increase production short-term and run for too long, the performance of the equipment may be lower at some time in the future.

It is not evident exactly how to handle the uncertainty, but one way is to assign different uncertainty levels to different pieces of equipment or process parts. The constraints may then have different intervals depending on this. An alternative is to make a number of optimizations with different constraints, to see how much the different alternatives affect the final production result. This gives a sensitivity analysis, which can serve as decision support for the person making the production plan.

By collecting data from previous operations and feeding these into a BN (Bayesian net), it may also be possible to create a tool for making a good decision when we have a similar situation to compare with. If we know how we acted earlier, and the outcome then, we can get a statistical 'best alternative' from earlier experience. The downside is that we cannot be sure the situation really is the same, but at least we have some reasonable statistics to fall back on. We should also look not only at the most favourable alternative, but also at those close to it, and try to analyse the worst-case scenario of taking the decision recommended by the decision support system. With a good structure of the decision support system, you may get a very valuable tool for the decision, and a possibility to really make use not only of your own experience, but of the whole organization's.

If there are uncertainties in the actual models, these can also be decreased by updating the models on a frequent basis. To do this we need good control over the quality of the input data, and an understanding of which variables to include, and possibly also of which extra measurements are needed. Dynamic calibration with a moving-window technique is one way, and dynamic data reconciliation another. These techniques are presented later in this book. First we will discuss data preprocessing, which is a key to successfully updating models in an automated way.

2.3.2 PREPROCESSING, “DIGGING THE DIAMONDS”

Arjo Sinon, SAPPI

2.3.2.1 SUMMARY


Selection of data to use. Availability of data.

There will be a lot of data available from the process, but the data may be of very variable quality. A good way to determine the quality is to follow the performance of each measurement and piece of process equipment on a frequent basis. The data used for model updates need preprocessing. This can be performed by filtering over different time horizons, by comparing data from one sensor with other sensors using a physical or a statistical model, or by e.g. variance analysis. One problem is doing this during non-steady-state conditions, which is the normal case in most pulp and paper mills, most of the time!

When it comes to simulation models and interaction between the simulator and the process DCS, we also have to consider that initialization of the simulator may need more information than is available from the actual process DCS. If the data come via the information system, they may already have been filtered in different ways, which, if not considered, may affect the results from the simulations when a production plan or a model-based control is implemented.

2.3.2.2 DATA RECONCILIATION

Christer Karlsson, Mälardalen University, Department of Public Technology, Process Diagnostics Group, P. O. Box 833, SE-72123 Västerås, Sweden

Extract from Oxford Advanced Learners Dictionary:

« Data » : Facts or information used in deciding or discussing something.

« Reconcile (something with something) » : make (aims, statements, ideas etc) agree when they seem to conflict.

From the definitions above it is easy to understand that data reconciliation is an algorithm, used in a data treatment system, capable of resolving conflicts in data. Humans handle such situations daily, but handling them mathematically is a tough task. Due to limitations in data reconciliation algorithms, they are often used in combination with algorithms that treat the real plant data before the reconciliation is performed. In the following text, data reconciliation is presented from a first-principles model perspective. The examples are from heat and power plants, but can be transferred by analogy to other applications.
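As a flavour of what such an algorithm does, consider three measured flows around a junction that should satisfy f1 = f2 + f3. Classical weighted least-squares reconciliation adjusts each measurement in inverse proportion to its variance so that the balance holds exactly; a minimal sketch with made-up numbers:

```python
import numpy as np

# Measured flows (kg/s) around a junction where f1 = f2 + f3 must hold
m = np.array([10.0, 6.3, 4.1])        # note: 6.3 + 4.1 != 10.0
sigma = np.array([0.5, 0.2, 0.2])     # sensor standard deviations
V = np.diag(sigma ** 2)

A = np.array([[1.0, -1.0, -1.0]])     # mass balance: A @ f = 0

# Minimise (x - m)' V^-1 (x - m) subject to A @ x = 0
x = m - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ m)
print("reconciled flows:", x, "balance residual:", float(A @ x))
```

Note how the least accurate sensor (f1 in this sketch) receives the largest adjustment.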

2.3.2.2.1 INTRODUCTION

Measuring properties is essential for plant control and monitoring. The constant striving for improved plant efficiency demands better control, which often means more instrumentation. The sensors are used for on-line computation of e.g. plant operation cost, emissions, heat power, etc. These applications need reliable on-line measurement data. A known problem is that sensors in the same location show different values. This is often solved by regarding one of them as faulty or not reliable, or by computing an average. The use of simulators makes it possible to both measure and compute the same state variable, but this makes the situation complex when it comes to selecting reliable sensors and deciding which sensor readings to believe in.

These problems can be solved by a method able to handle redundant information and estimate the state of the process. The three components, gross error detection, gross error isolation and data reconciliation, can be put together to form a system. The research area of data reconciliation and gross error detection spans many other research areas, such as signal analysis, statistics, optimization, process control and information theory. Complex industries have much to gain from data treatment. The development in the field is fastest in nuclear power generation and the chemical process industry. An accident in these industries may be disastrous, with consequences in both casualties and company bankruptcy [Hoo04]. Other areas using this kind of data treatment are the mining industry and the conventional power generation industry.

The chain from sensor signal to operator interaction must be considered to find optimal solutions. Economic considerations on the number of sensors and on maintenance costs must also be included when choosing whether, and to what degree, the technique is to be implemented.

The core of data treatment in this text is data reconciliation (DR). DR algorithms can handle random noise in the measurements, but they cannot handle systematic and large errors. A protective layer of gross error detection and gross error isolation is therefore a vital part of a data treatment system. The goal of the data treatment is to deliver a data set which is closer to the true process state than the raw data. This data set is ready for use in top applications, such as those at the bottom of Figure 1.


Figure 1. Data treatment structure including data reconciliation.

2.3.2.2.2 CAUSE OF DEGRADATION IN HEAT AND POWER PLANT PROCESS AND SENSORS

Energy and environmental taxes and fees have pushed power plant owners to experiment with new fuels to lower costs. Fuel flexibility is more important than ever before, and it is possible to meet this requirement with improved boiler technologies such as fluidized beds and circulating beds. Fuels earlier used only for hot water production are now introduced in heat and power plants, and these plants operate at higher temperatures. New chemical reactions are also activated. The experience of combusting these new fuels and fuel mixes in heat and power plants is increasing. Normal degradation of sensors and process in heat and power plants is mainly due to:

• Corrosion

• Fouling

• Erosion


• High temperature

Unbalance in the plant heat balance or the flue gas flow pattern causes problems. This can be due to the use of fuels the plant was not intended for, or to flaws in the plant design. The degradation causes listed above can force a shutdown of the plant for maintenance; however, a certain degree of degradation is normally accounted for. Faster degradation of sensors is possible if they are positioned where they are exposed to erosion, corrosion or fouling. The same reasoning can be applied to the process if the fuel quality differs from the expected, regarding for example moisture, alkali, sulphur and metals, and properties such as size distribution and burn-out rate.

Temperature sensors drift due to ageing and the high temperature they work in. Positioning is important for temperature sensors in sections where large temperature gradients may occur. Pressure sensors are sensitive to clogging and also need careful positioning. Most plants are shut down and revised annually. During periodic maintenance and revision the process is checked and the sensors are calibrated. Not all sensors are calibrated annually.

2.3.2.2.3 EFFECT OF DEGRADATION IN HEAT AND POWER PLANT PROCESS AND SENSORS

It is essential to control or reduce the degradation of sensors important for alarms, control, optimization and maintenance, primarily to reduce the risk of personal injury, and secondly to minimize costs. Examples of effects of degradation are:

• Nuisance alarms, delayed alarms and alarms that are not triggered.

• Non-optimal controller set point and loss of control.

• Non-optimal overall plant optimization.

• Faulty economic and environmental reporting.

• Damages on process.

Initial degradation can cause delayed alarms due to a change in dynamics or because of a biased sensor. Nuisance and false alarms due to degrading sensors are important for the plant operator situation. In power plants, clogging of pressure sensors and mass flow sensors is an effect of fouling. Erosion is caused by streaks of flue gas transporting bed sand or fuel particles that hit boiler walls or other components and thus cause damage. Heat exchangers are also exposed to erosion, often in combination with fouling. Some parts of the boiler are exposed to corrosion and are protected, but corrosion can appear on surfaces not designed for aggressive chemicals, due to e.g. fuel mix properties. All the mentioned phenomena can cause plant shutdown, and need to be monitored in order to take preventive actions and plan maintenance. It is difficult to measure when the degree of degradation moves into the field of loss of control, especially when the degradation is slow and controllability is lost within the alarm limits.

There are tools for detecting degradation using statistics, but they need to be run off-line by an engineer. Solving the problem by increased maintenance is costly.


Sensors showing the same value over time can be faulty even though quality measures like standard deviation and drift trends show excellent values. This kind of degradation causes loss of control and prevents maintenance because no fault is indicated; the fault remains hidden until extensive data mining has been performed to find such sensors.

Optimization, financial reports, environmental reports and other top applications using measurements also suffer from degrading sensors. Examples of how degrading sensors and process can affect a process can be found in reports on recovery boilers used in the pulp and paper industry [Doy72] and [Kle98]. The Black Liquor Recovery Boiler Advisory Committee has documented 156 explosions and 450 near-miss incidents in the last 35 years. The following quote from Lefebvre and Santyr [Lef92] comments on the report: “In addition to equipment damage, some of the more severe explosions have resulted in injury or even death of operating personnel. There have also been several hundred emergency shutdowns where the fear of an explosion has led to a forced outage. The frequency of explosions has remained relatively constant over the years, and the problem cannot be considered solved”. More examples are found in [Doy72] and [Kle98].

2.3.2.2.4 DATA RECONCILIATION FOR HANDLING DEGRADATION

Data reconciliation (DR) is a special case of the general estimation problem. DR is the problem of adjusting measurements in an over-determined system so that all process constraints are satisfied. Unmeasured variables are either estimated simultaneously with DR, in a general estimation problem, or as a separate step after the DR problem is solved. Solving DR and estimation simultaneously is in some literature called data coaptation. Data reconciliation has been reported in applications for mineral industry processes [Sim91], chemical production plants [Abu03] and nuclear power plants [Sun03]. The most active application field for research in data reconciliation is chemical engineering. The first method of data reconciliation for a chemical process was proposed in 1961 [Kue61]. The method involved linear system equations under the assumptions that all process variables were measured and that gross errors were absent. The described problem can be solved analytically by least squares methods. Complementary methods such as gross error detection and isolation were developed in parallel with DR in order to handle real DR application problems. Graph theory was connected to the DR problem formulation by Mah et al. [Mah75]. The problem of unmeasured variables was not efficiently solved until Crowe [Cro83] presented the projection matrix method. Swartz [Swa89] showed that the projection matrix operation could be solved by the robust QR-factorization.

Nonlinear data reconciliation was first proposed to be solved by successive linearization by Knepper and Gorman [Kne80]. Liebman et al. [Lie91] showed that nonlinear programming methods gave improvements in accuracy, but had long computation times. Liebman et al. [Lie92] later proposed a time window approach for dynamic data reconciliation. Russo and Young implemented a dynamic DR algorithm in 1999 [Rus99]. Recently Soderstrom et al. and Gatzke et al. reported industrial implementations in chemical plants [Sod00], [Gat02], using nonlinear dynamic DR strategies.


The general data reconciliation problem formulation for linear equations, solved for I samples by a least squares method [Rom00], is here formulated as:

$$\min_{x,u}\ \sum_{i=0}^{I}\left(y_i - x_i\right)^2 \qquad \text{s.t.}\quad A_1 x + A_2 u = 0 \qquad \text{(Eq. 0.1)}$$

where y is the measured state, x is the estimated state, u is the unmeasured state, A1 contains the columns of A with only measured variables and A2 the columns of A with unmeasured variables. The simultaneous solution of data reconciliation and estimation of unmeasured variables in Eq. 0.1 is in many cases a large task. To reduce the size of the optimization problem, the DR problem can be divided into two sub-problems [Cro83]. After the division of the total problem, the next task is to solve the over-determined equations. The following sub-problem is then to estimate the unmeasured variables with data from the solution of the over-determined equation system. The simplest DR problem, with only measured variables, linear equations, no gross errors present and the process in steady state, is presented below:

$$\min_{x}\ \sum_{i=0}^{I}\left(y_i - x_i\right)^2 \qquad \text{s.t.}\quad A_1 x = 0 \qquad \text{(Eq. 0.2)}$$

The linear process model A1, containing only measured variables, can for example represent the conservation laws for a system with mass and energy streams. The following sub-problem solves the linear equation system of the unmeasured variables:

$$A_2 u = 0 \qquad \text{(Eq. 0.3)}$$

This equation system is solved with any linear equation solution method, for example LU-decomposition or QR-decomposition. The sensors have different measurement errors; this can be implemented in the problem formulation as a weight, w, on each sensor. How this weight is determined does not follow any standard procedure. In [Sod00] an estimated accuracy of the sensor is used, which can be modified by the plant operation engineers on the basis of past measurements and their experience and knowledge about the considered instrument. Define a matrix W, where the weights w are put on the diagonal. Now we can formulate the weighted least squares problem:

$$\min_{x,u}\ \sum_{i=0}^{I} W_i\left(y_i - x_i\right)^2 \qquad \text{s.t.}\quad A_1 x + A_2 u = 0 \qquad \text{(Eq. 0.4)}$$

where x and u are computed to minimize the objective function value. In a one-sample approach it is difficult to distinguish large random noise from a gross error. Data reconciliation for the present vector y in Eq. 0.4 can be improved by computing a time series of samples. A time window makes more data available for the problem solving, and noise can be smoothed efficiently. The cost for the decreased measurement error is an increase in computational load, which is at least proportional to the number of time steps in the time window. The DR problem formulation in Eq. 0.4 can be extended with a time window, T:

$$\min_{x,u}\ \sum_{t=0}^{T}\sum_{i=0}^{I} W_{i,t}\left(y_{i,t} - x_{i,t}\right)^2 \qquad \text{s.t.}\quad A_1 x_t + A_2 u_t = 0,\ \ t = 1, \ldots, T \qquad \text{(Eq. 0.5)}$$

The solution x and u from Eq. 0.5 is ready to use for applications that depend on measured data, such as plant optimization, control purposes, diagnostics, and economic and environmental reports.
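To make the formulation concrete, the minimal sketch below solves the all-measured, single-sample case (Eq. 0.4 with A2 empty) in closed form via Lagrange multipliers. The balance structure used here (S1 + S2 = S3, S3 + S4 = S5, S5 + S6 = S7) is an assumption inferred from the reconciled vector in the example that follows; with other weights or balance structures the numbers will of course differ.

```python
import numpy as np

def reconcile(y, A, w):
    """Weighted least-squares data reconciliation, all variables measured.

    Solves   min (y - x)' W (y - x)   s.t.   A x = 0,   with W = diag(w),
    whose closed-form solution via Lagrange multipliers is
        x = y - W^-1 A' (A W^-1 A')^-1 A y.
    """
    W_inv = np.diag(1.0 / np.asarray(w, dtype=float))
    lam = np.linalg.solve(A @ W_inv @ A.T, A @ y)   # Lagrange multipliers
    return y - W_inv @ A.T @ lam

# Assumed mass balances: S1 + S2 = S3, S3 + S4 = S5, S5 + S6 = S7
A = np.array([[1.0, 1.0, -1.0, 0.0,  0.0, 0.0,  0.0],
              [0.0, 0.0,  1.0, 1.0, -1.0, 0.0,  0.0],
              [0.0, 0.0,  0.0, 0.0,  1.0, 1.0, -1.0]])
y = np.array([1.9934, 2.0062, 4.0233, 0.9807, 5.0696, 3.0382, 8.1164])
x = reconcile(y, A, np.ones(7))
print(x, A @ x)   # the reconciled flows satisfy the balances exactly
```

A time window as in Eq. 0.5 is obtained by stacking several sample vectors y_t and repeating the constraint for each t.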

2.3.2.2.5 EXAMPLE OF DATA RECONCILIATION

We continue with the example used throughout this presentation, beginning with classification and ending with data reconciliation. Earlier, a gross error was detected in the data set and was isolated to sensor 5. The size of the gross error was estimated to -2.97 kg/s. Use the data set given in Table 1 and correct the measured value of sensor 5 with the estimated gross error. Then solve the data reconciliation problem Eq. 0.5 for the corrected data set in Table 1. Observe that there are no unmeasured mass flows in this case.

Sensor readings [kg/s]:

  S1       S2       S3       S4       S5       S6       S7
  1.9934   2.0062   4.0233   0.9807   5.0696   3.0382   8.1164
  2.0315   2.0170   3.9402   0.9887   4.9836   3.0055   7.8238
  1.9589   1.9771   3.9818   0.9987   5.0722   3.0144   7.9316
  1.9978   2.0097   3.9323   1.0165   5.0840   3.0143   7.8115
  2.0312   2.0110   4.0403   0.9796   4.9512   2.9813   7.8665
  2.0333   2.0339   3.9903   1.0228   4.8818   3.0555   8.1078
  1.9944   2.0121   4.0903   1.0070   4.9068   2.9779   7.8751
  1.9991   1.9909   3.9927   1.0055   4.8628   2.9721   8.0434
  1.9675   2.0121   3.9492   1.0044   4.9715   2.9947   8.0166
  2.0442   1.9842   3.9804   0.9904   4.9479   2.9679   7.9577

Table 1. Sensor readings with the corrected gross error of -2.97 kg/s in sensor 5.

In this case the least squares problem as in Eq. 0.2 can be solved analytically, resulting in the vector x = [2.0119 2.0247 4.0367 1.0126 5.0493 3.0526 8.1020].

Define the residual r as:

$$r = x - y_{\mathrm{true}} \qquad \text{(Eq. 0.6)}$$

where x is the vector of estimated values and y_true is the flow vector without noise and gross errors. What effect does the data treatment have on the errors and noise in the measured data? The two-norm of the residual vector r, denoted $\|r\|$, is a measure of how far the true and estimated values are from each other. By computing the two-norm, it is possible to follow the contribution of each data treatment step in Table 2.

Step in data treatment   | Comments                                                                  | Two-norm of r
Initialization           | True measurement                                                          | 0
Initialization           | True measurement with gross error and noise used for simulation          | 3.0722
Detection by global test | The gross error is detected; no treatment of the measurement vector yet  | 3.0722
Isolation                | Search for the measurement with the gross error; correcting the measurement lowers the two-norm considerably | 0.1431
Data reconciliation      | Data reconciliation of the measurements; the two-norm is further lowered | 0.1336

Table 2. Two-norm of the residual in different steps of the data treatment.


To simulate a measurement vector, noise and a gross error are added to the true measurement. The first rows of Table 2 illustrate the two-norm for the true vector and for the vector with 5 % noise and a gross error of 3 kg/s. The gross error was detected by the global test. It was isolated to signal 5 and estimated to -2.97 kg/s in the isolation algorithm. After signal 5 was corrected by 2.97 kg/s (and the gross error thereby removed), the two-norm decreased from 3.0722 to 0.1431, a significant improvement. However, the added noise still remains. Using a least squares method for data reconciliation, the noise is reduced and the two-norm is further lowered to 0.1336, which is the remaining error. This value is lower than the two-norms of any of the simulated signals. The result implies that the treated measurements, the output of the data treatment, are closer to the true measurement vector than the input measurement vectors. Thus, the data treatment has in this example been shown to improve the measurements on the whole.

References

Avelin A., Jansson J., Dahlquist E., "Use of Modelica for Multi Phase Flow in Complex Systems, with Application for Continuous Pulp Digester", APMMCT, Ukraine, Khmelnitsky, 2005.

Crowe C.M., Campos Y.A.G., Hrymak A., "Reconciliation of process flow rates by matrix projection. Part I: Linear case", American Institute of Chemical Engineers Journal, vol. 29, pp. 881-883, 1983.

Crowe C.M., "Reconciliation of process flow rates by matrix projection. Part II: The nonlinear case", American Institute of Chemical Engineers Journal, vol. 32, pp. 616-623, 1986.

Karlsson C., Widarsson B., Dotzauer E., "Data reconciliation and gross error detection for the flue gas channel in a heat and power plant", Conference on Probabilistic Methods Applied to Power Systems, PMAPS2004, USA, Ames, 2004.

Karlsson C., Arriagada J., Genrup M., "Detection and interactive isolation of faults in steam turbines for maintenance decision support", submitted to Journal of Modelling and Simulation, Practice and Theory in February 2007.

Karlsson C., Kvarnström A., Dotzauer E., "Estimation of process model parameters and process measurements - a heat exchanger example", Conference on New Trends in Automation, Sweden, Västerås, 2006.

Sanchez M.A., Bandoni A., Romagnoli J., "PLADAT - A Package for Process Variable Classification and Plant Data Reconciliation", Journal of Computers and Chemical Engineering (suppl. 1992), pp. 499-506.

Sunde S., Berg Ø., "Data reconciliation and fault detection by means of plant-wide mass and energy balances", Progress in Nuclear Energy, vol. 43, 2003.

Wang Y., Rong G., Wang S., "Linear dynamic data reconciliation: refinery application", 6th IFAC Symposium on Dynamics and Control of Process Systems, Korea, pp. 650-655, 2001.


2.3.2.3 DYNAMIC VALIDATION OF SENSORS

2.3.2.3.1 DYNAMIC VALIDATION OF MULTIVARIATE LINEAR SOFT SENSORS WITH REFERENCE LABORATORY MEASUREMENTS

Aino Ropponen, Kimmo Konkarikoski and Risto Ritala, Tampere University of Technology, Institute of Measurement and Information Technology, P. O. Box 692, FIN-33101 Tampere, Finland

2.3.2.3.1.1 INTRODUCTION

Soft sensors can be understood as steady-state simulators with process measurements as inputs. The soft sensor models may be physico-chemical, black-box or combinations of these two, often referred to as gray box models.

In black-box and gray box models the model parameters are identified from process and laboratory data, with the input measurements taken from the process and the output to be soft-sensed measured in the laboratory. However, as the direct physico-chemical meaning of the model parameters is rather unclear, the parameters may depend on the process structure and conditions at the instant of model identification. Thus the parameters often change over time and the predictive power of the soft sensor diminishes.

In order to keep the soft sensor reliable, occasional laboratory measurements are made of the soft-sensed property and compared with the soft sensor predictions. Such occasional data sets are typically too small to serve as a basis for re-identification of the parameters. However, two alternative questions may be answered on the basis of such data:

- is there a need for proper recalibration, or

- how much the model parameters should be adapted on the basis of such data?

In this paper we shall briefly describe a general Bayesian framework for answering these questions, assuming that the degradation of the model parameters can be described as a stochastic differential equation. We shall give explicit results for the case of a multivariate linear soft sensor and integrated white noise degradation of the parameters.

2.3.2.3.1.2 BAYESIAN ESTIMATION OF PARAMETER DISTRIBUTION

General case

We shall denote the input signals to the soft sensor by a vector s, the output by a scalar x, and the model parameters by a vector β. The stochastic dependence describing the soft sensor model and the uncertainty of the soft sensor is the conditional probability density function (pdf)


$$f_{X|S,B}(x \mid s, \beta) \qquad (1)$$

where the capital letters refer to stochastic variables and small letters to their values. The soft sensor output is the maximum likelihood value:

$$\hat{x} = \arg\max_{x} f_{X|S,B}(x \mid s, \beta) \qquad (2)$$

We shall consider the model parameters β also as random variables. After model identification the uncertainty in the parameters is described by the joint probability distribution

$$f_{B}^{(0,+)}(\beta) \qquad (3)$$

Now let us assume that we know the pdf of the parameters after the (n-1)th laboratory reference measurement $x_{n-1}$ to be $f_{B}^{(n-1,+)}(\beta)$. Furthermore, let us assume that after this reference measurement the (unknown) changes in the model parameters are described with a stochastic differential equation

$$\frac{d\beta}{dt} = F(\beta, \xi) \qquad (4)$$

where ξ is some stochastic process. We can use (4) to solve for the pdf of β at later time instants, in particular at the time when we get the nth laboratory reference measurement, $f_{B}^{(n,-)}(\beta)$ (the minus sign in the superscript denotes the distribution before the reference measurement is taken into account).

By using Bayes' formula, by assuming that, in the case of several simultaneous reference measurements, the measurements are independent, and by using $f_{B}^{(n,-)}(\beta)$ as a priori information, we get

$$f_{B}^{(n,+)}(\beta) = N_1 \, f_{B}^{(n,-)}(\beta) \prod_{i=1}^{N} f_{X,S|B}(x_i, s_i \mid \beta) \qquad (5)$$

Here $N_1$ is an uninteresting normalization factor. Furthermore, as s and β are statistically independent when marginalized over x, i.e. $f_{S,B}(s, \beta) = f_S(s)\, f_B(\beta)$, we can write (5) as

$$f_{B}^{(n,+)}(\beta) = N_1 \, f_{B}^{(n,-)}(\beta) \prod_{i=1}^{N} f_{X|S,B}(x_i \mid s_i, \beta) \qquad (6)$$

Now using (1) and (4) defines a recursion between $f_{B}^{(n-1,+)}(\beta)$ and $f_{B}^{(n,+)}(\beta)$.

The need for recalibration can be detected when the actual coefficients $\beta_{actual}$ are exceptional according to $f_{B}^{(n,+)}(\beta)$ at the false-alarm probability $p_0$, i.e. $\beta_{actual} \in A$, with


$$A = \left\{ \beta' \,:\, f_{B}^{(n,+)}(\beta') < p \right\}, \qquad \int_{\{\beta \,:\, f_{B}^{(n,+)}(\beta) < p\}} f_{B}^{(n,+)}(\beta)\, d\beta = p_0 \qquad (7)$$

The new best estimates of the parameters, to be implemented at each step or when the need for recalibration is detected, are given as the maximum likelihood values:

$$\hat{\beta}_n = \arg\max_{\beta} f_{B}^{(n,+)}(\beta) \qquad (8)$$

Linear – Gaussian – random walk case

Let us consider a linear model $x = \beta_0 + \beta_c^T s$. In what follows we denote $\beta \equiv [\beta_0 \ \ \beta_c^T]^T$. Assuming the model errors Gaussian we have

$$f_{X|S,B}(x \mid s, \beta) = \left(2\pi\sigma_e^2\right)^{-1/2} \exp\!\left(-\frac{1}{2\sigma_e^2}\left(x - \beta_0 - \beta_c^T s\right)^2\right) \qquad (9)$$

Hence

$$\prod_{i=1}^{N} f_{X|S,B}(x_i \mid s_i, \beta) = N_2 \exp\!\left(-\frac{1}{2}\left(\beta - \hat{\beta}\right)^T B \left(\beta - \hat{\beta}\right)\right) \qquad (10)$$

with

$$B = \frac{1}{\sigma_e^2}\begin{pmatrix} N & \sum_i s_i^T \\ \sum_i s_i & \sum_i s_i s_i^T \end{pmatrix}, \qquad \hat{\beta} = B^{-1}\,\frac{1}{\sigma_e^2}\begin{pmatrix} \sum_i x_i \\ \sum_i x_i s_i \end{pmatrix} \qquad (11)$$

Let us assume that after the nth update the distribution is normal and thus characterized by the mean $\mu_n^{(+)}$ and covariance matrix $\Sigma_n^{(+)}$:

$$f_{B}^{(n,+)}(\beta) = N_3 \exp\!\left(-\frac{1}{2}\left(\beta - \mu_n^{(+)}\right)^T \left(\Sigma_n^{(+)}\right)^{-1} \left(\beta - \mu_n^{(+)}\right)\right) \qquad (12)$$


We assume that the uncertainty in β degrades through a random walk process:

$$\frac{d\beta}{dt} = \Gamma(t), \qquad \langle \Gamma(t) \rangle = 0, \qquad \langle \Gamma(t)\,\Gamma(t')^T \rangle = D\,\delta(t - t') \qquad (13)$$

Then

$$f_{B}^{(n+1,-)}(\beta) = N_3 \exp\!\left(-\frac{1}{2}\left(\beta - \mu_n^{(+)}\right)^T \left(\Sigma_n^{(+)} + D\,(t_{n+1} - t_n)\right)^{-1} \left(\beta - \mu_n^{(+)}\right)\right) \qquad (14)$$

Inserting (10) and (14) into (6), we see that $f_{B}^{(n+1,+)}(\beta)$ is Gaussian:

$$f_{B}^{(n+1,+)}(\beta) = N_3 \exp\!\left(-\frac{1}{2}\left(\beta - \mu_{n+1}^{(+)}\right)^T \left(\Sigma_{n+1}^{(+)}\right)^{-1} \left(\beta - \mu_{n+1}^{(+)}\right)\right) \qquad (15)$$

with

$$\Sigma_{n+1}^{(+)} = \left(B + \left(\Sigma_n^{(+)} + D\,(t_{n+1} - t_n)\right)^{-1}\right)^{-1},$$
$$\mu_{n+1}^{(+)} = \Sigma_{n+1}^{(+)}\left(B\,\hat{\beta} + \left(\Sigma_n^{(+)} + D\,(t_{n+1} - t_n)\right)^{-1}\mu_n^{(+)}\right) \qquad (16)$$

Therefore our assumption of Gaussianity in (12) is self-consistent, and an explicit recursion between the distribution parameters has been established. The recursion is initialized with the known mean and covariance estimates for the parameters of linear multivariate models.

The test for exceptional values of the parameters is a test on the Mahalanobis distance:

$$A = \left\{ \beta_{actual} \,:\, \left(\beta_{actual} - \mu_n^{(+)}\right)^T \left(\Sigma_n^{(+)}\right)^{-1} \left(\beta_{actual} - \mu_n^{(+)}\right) > k \log(1/p_0) \right\} \qquad (17)$$

with k as the dimensionality of β.

The best estimate for the parameters during $t \in [t_n, t_{n+1})$ is $\mu_n^{(+)}$.
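A minimal sketch of the resulting recursion, under the assumptions above (linear model, Gaussian errors, random-walk drift), could look as follows. The variable names are illustrative; each row of the regressor matrix is [1, s_i^T], and since B·β̂ equals the right-hand side of the normal equations, β̂ never needs to be formed explicitly, which also covers the case of very few reference points.

```python
import numpy as np

def update_parameters(mu, Sigma, D, dt, S, x, sigma_e):
    """One cycle of Eqs. (12)-(16): drift inflation + Gaussian Bayes update.

    mu, Sigma : mean and covariance of beta = [beta0, beta_c] before the update
    D, dt     : random-walk diffusion matrix and time since the last reference
    S, x      : (N, d) soft-sensor inputs and N laboratory reference values
    sigma_e   : model error standard deviation
    """
    Sigma_pred = Sigma + D * dt                   # Eq. (14): uncertainty grows
    Phi = np.hstack([np.ones((len(x), 1)), S])    # regressor rows [1, s_i^T]
    B = Phi.T @ Phi / sigma_e**2                  # information matrix, Eq. (11)
    b = Phi.T @ x / sigma_e**2                    # equals B @ beta_hat
    P_inv = np.linalg.inv(Sigma_pred)
    Sigma_new = np.linalg.inv(B + P_inv)          # Eq. (16)
    mu_new = Sigma_new @ (b + P_inv @ mu)
    return mu_new, Sigma_new

def needs_recalibration(beta_actual, mu, Sigma, p0):
    """Mahalanobis test of Eq. (17) with false-alarm probability p0."""
    d = beta_actual - mu
    return d @ np.linalg.solve(Sigma, d) > len(mu) * np.log(1.0 / p0)
```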

2.3.2.3.1.3 EXAMPLE

Simple plug-flow reactor - System model, case PGW bleaching


We discuss models of plug-flow reactors in continuous processes. In a plug-flow reactor the fluid is assumed to move as a plug: it does not mix in the axial direction, and the volumetric flow out of the reactor is the same as the flow into the reactor.

A simple plug-flow reactor can be used e.g. as a simple model for pulp bleaching in the PGW process. The target of the study was to estimate the amount of total organic carbon (TOC) after the bleaching. The TOC needs to be estimated because high levels cause disturbances in the process.

In the bleaching, mechanical pulp (suspended in water and containing some TOC) and the bleaching chemical are fed to the bleaching tower. In the tower the fiber and the chemical react, increasing the pulp brightness but at the same time producing TOC.

Let us denote the position in the flow direction in the reactor by x. The chemical reactions at each position are described with simple kinetic equations:

$$\frac{dc_{fiber}(x)}{dt} = -k_1\, c_{fiber}(x)\, c_{chem}(x)$$
$$\frac{dc_{toc}(x)}{dt} = k_1 k_2\, c_{fiber}(x)\, c_{chem}(x)$$
$$\frac{dc_{chem}(x)}{dt} = -k_3\, c_{fiber}(x)\, c_{chem}(x) \qquad \text{(Eq. 7)}$$

where $c_{fiber}(x)$, $c_{toc}(x)$ and $c_{chem}(x)$ are the amounts of fiber, TOC and chemical at position x, respectively. The model parameters of interest are k1, k2 and k3. However, we shall assume all material removed from the fibers to turn into TOC, and thus k2 = 1.

The movement of material is described by writing the mass balance in the form:

$$\int_{t-\tau(t)}^{t} f_{tot}(t')\, dt' = V \qquad \text{(Eq. 8)}$$

where $f_{tot}$ is the total volumetric flow and τ(t) is the residence time of the material coming out of the reactor, to be solved from (Eq. 8).

The system (Eqs. 7-8) is solved in two steps within each time step: first the concentrations are updated according to the reaction dynamics at each discretized position, and then the material at each position is updated according to the mass flow condition. Figure 3 shows an example of simulated total dissolved/colloidal organic carbon from a bleaching reactor.
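A minimal sketch of this two-step scheme is given below, assuming a constant volumetric flow and a discretization into one cell per time unit (so the residence time is simply the number of cells); the cell count and rate constants are taken from the example figures (V = 30 VU, k1 = 0.0005, k3 = 0.005, k2 = 1), while the discretization itself is an assumption of the sketch.

```python
import numpy as np

def simulate_bleaching(n_cells=30, n_steps=500, k1=5e-4, k2=1.0, k3=5e-3,
                       fiber_in=0.03, toc_in=0.001, chem_in=1.0, dt=1.0):
    """Plug-flow bleaching sketch of Eqs. (7)-(8), one cell per time unit.

    Each cell holds the concentrations of one plug. The reaction step of
    Eq. (7) is applied cell-wise; then all plugs are shifted one cell toward
    the outlet and fresh material enters at the inlet (Eq. (8) with constant
    flow). Returns the TOC concentration leaving the reactor at each step.
    """
    fiber = np.full(n_cells, fiber_in)
    toc = np.full(n_cells, toc_in)
    chem = np.zeros(n_cells)               # no chemical fed before t = 0
    toc_out = []
    for _ in range(n_steps):
        r = fiber * chem                   # reaction rate in each cell
        fiber = fiber - k1 * r * dt
        toc = toc + k1 * k2 * r * dt
        chem = chem - k3 * r * dt
        toc_out.append(toc[-1])            # plug leaving the reactor
        fiber = np.roll(fiber, 1); toc = np.roll(toc, 1); chem = np.roll(chem, 1)
        fiber[0], toc[0], chem[0] = fiber_in, toc_in, chem_in
    return np.array(toc_out)
```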


Figure 3. Amount of TOC out of the reactor in VU/TU.

In this simulation the parameters are kept constant and the input flow measurements are assumed to be exact. Before the simulation starts, the reactor inflow has been 0.97 volume units/time unit (VU/TU) of water, 0.03 VU/TU fibers and 0.001 VU/TU total organic carbon (TOC). No chemical has been fed to the reactor prior to the zero time of the simulation. The volume of the reactor is 30 VU. The reaction kinetics is a simple conversion of fibers to TOC under the action of the bleaching chemical, see (Eq. 7). The simulation runs for 500 TU.

Three actions are made on the bleaching tower during the simulation period. At t=0 a chemical flow of 1 CVU/TU is introduced into the inflow, with all other conditions kept the same. At t=250 the chemical flow is increased to 2 CVU/TU, and at t=450 the inflow is increased by 2/0.97.

Reference measurements and parameter updating

Figure 4 outlines the TOC and parameter estimation of the model (Eqs. 7-8) based on the method given in Section 2.1.


Figure 4. Scheme of output and parameter validation.

Whenever the TOC needs to be estimated, the on-line flow condition history is applied as the model input, the model parameter uncertainty is increased according to the diffusion model (Eq. 2), and then the TOC estimate and its uncertainty, taking the parameter uncertainty into account, are calculated.

When reference data are obtained, the corresponding likelihood function is calculated (Eq. 3). In the calculation of the likelihood function, the parameters need to be varied and the model predictions for all the parameter values calculated. Then the product of the likelihood function and the pre-existing information about the parameters is calculated and normalized, establishing the updated parameter information. We applied the GMM approach outlined in Section 2.2 for these computations.

Because of the characteristics of the bleaching process, the model parameters cannot be estimated using only the TOC reference measurement; the reference measurement of the outcoming bleaching chemical is also required. If only the TOC reference is used, the parameter distribution spreads and the TOC estimates are useless. This is shown in Figure 5.

[Figure 4 block diagram: the PGW model produces the prediction TOC_pred(θ) with density f(TOC_pred); when a reference measurement TOC_meas arrives, the likelihood L(θ) = exp(-(TOC_meas - TOC(θ))²/(2σ²)) is evaluated over a scan of θ values, multiplied by the prior f(θ), and normalized to give the updated parameter distribution.]


When the reference measurements of TOC and chemical are available, the parameters can be updated by combining the old distribution with the reference distributions. The outcoming TOC concentration is updated by combining the estimated distribution with the reference distribution.

Figure 5. On the left, the parameter distribution without the output chemical; on the right, the distribution with both the output TOC and the output chemical. The real values of the parameters are k1=0.0005 and k3=0.005.

Results from simulation

In this section we show results from a simulation study of the dynamic validation method. We have one simulator representing the “true behavior”. This simulator is an implementation of the models (Eqs. 7-8), but with drifting k1 and k3. In parallel with the “true behavior” we run another simulator model, the “predictor”, which from the earlier input history estimates the current output TOC with constant values for k1 and k3. Occasionally we get information about the “true” output as uncertain observations of the TOC and chemical concentrations. On the basis of the reference measurements, the “predictor” k1 and k3 are updated with the methods of Section 3.2. We compare the “true” drifting parameters k1 and k3 with their estimated values and the uncertainty in the parameter estimates, and study the effect of the parameter uncertainty on the TOC estimate uncertainty.

Figures 6 and 7 present the estimated values of the parameters k1 and k3. The upper panel illustrates the true value (black) and the expected value (grey), and the lower panel the standard deviation of the value. Black circles represent the reference values. The expected value is kept constant between the reference measurements. The variance increases between the reference measurements, but decreases when a new reference measurement is available.

Figure 8 presents the estimated value of TOC. Again the upper panel illustrates the true value (black) and the expected value (grey), and the lower panel the standard deviation of the value. Black circles represent the reference values.

Figure 6. The expected value and the uncertainty of the parameter k1. Circles show points of reference measurements.


Figure 7. The expected value and the uncertainty of the parameter k3. Circles show points of reference measurements.

Figure 8. The expected value and the uncertainty of the output. Circles show points of reference measurements.


2.3.2.3.1.4 CONCLUSIONS

In this presentation we have shown a method for modeling the uncertainty of the parameters of a nonlinear model. By modeling the uncertainty, more accurate information can be achieved.

Uncertainty is described by probability distributions. The modeling is based on the idea that if no new information is available, our knowledge of the process becomes more uncertain. This is described by a stochastic diffusion process. When new information is available, the parameters can be updated and the uncertainty decreases.

In this paper we used a Gaussian mixture model to describe the probability densities of the nonlinear models. The Gaussian mixture model approach makes it possible to formulate the probability density function from the history data of the process, and the distributions need not follow any known distribution.

We have simulated the method using a plug-flow reactor. The simulation shows that the method works and is valid for further development.



2.3.2.4 SIGNAL FILTERING AND OUTLIER REMOVAL

Erik Dahlquist, Mälardalen University

When we want to use process data of different kinds for building simulation models or for verification of models, it is important to have input data that is as correct as possible. If a signal is very noisy it may have to be filtered. If the values are spreading a lot, we may have to decide which values can really be used and which should be considered outliers.

When it comes to filtering, this can be performed by simply taking the average of a number of measured values, or we can use a more advanced “moving window” type of filtering. Here we may take the average of e.g. 10 values, and then at the next time step add the new value and remove the oldest one before calculating a new average. This type of method is very useful when we want to collect data on-line.
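A minimal on-line moving-window average might look as follows (the window length and data values are illustrative):

```python
from collections import deque

def make_moving_average(window=10):
    """On-line moving-window average: each new value replaces the oldest."""
    buf = deque(maxlen=window)          # deque drops the oldest value itself
    def update(value):
        buf.append(value)
        return sum(buf) / len(buf)
    return update

filt = make_moving_average(10)
for y in (2.1, 1.9, 2.0, 2.2, 2.0):
    print(filt(y))                      # filtered value after each sample
```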

Still, it is crucial not to apply too strong a filtering. It is common that noisy signals are filtered so strongly that the true information is lost! If you have a very noisy signal, it may be relevant to first ask yourself why the signal is so noisy. If it is e.g. a consistency or flow meter, it may be placed too close to a pump outlet, so that a lot of bubbles disturb the measurement. Moving the sensor some five meters downstream may then solve the problem. Otherwise it may be of interest to compare the results of different filterings, to find a setting that is not too noisy but still keeps the important information.

When it comes to outlier removal, this is even trickier than filtering. It is not easy to judge whether a value far from the rest is due to something that actually happened in the process, or just to a disturbance in the actual measurement.

One way of automatically removing outliers is simply to decide what deviation from an average we should accept, and then remove all values outside this window. We should then also remove all other measurements at the same time step, and not use any of these data for the model building or model verification. If we only have a limited amount of good measurements, this may cause significant uncertainty about the reliability and usability of the model, but by adding more reliable measurements later on, the model quality will be improved.
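A sketch of such window-based removal, discarding the whole time step when any variable falls outside the accepted deviation (the three-sigma limit is an illustrative choice):

```python
import numpy as np

def remove_outliers(data, n_sigma=3.0):
    """Remove time steps where any variable deviates more than
    n_sigma standard deviations from its column average.

    data: array of shape (n_samples, n_variables).
    Returns the cleaned data and the indices of the rejected time steps.
    """
    dev = np.abs(data - data.mean(axis=0))
    keep = np.all(dev <= n_sigma * data.std(axis=0), axis=1)
    return data[keep], np.flatnonzero(~keep)
```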

Another aspect is to have values that not only contain noise, but also give true information. This means that you need to look at the correlation between different variables. Ideally the different variables should be varied totally independently of each other, but as this is normally not possible, we should at least try to get as close as possible. We also have to vary all variables of importance, not just a few. For e.g. a paper machine you have to vary also the wire speed, if you want to get its influence on the quality variable you want to model, e.g. some strength or printability property. This of course complicates the task, as you normally don't want to do this, but rather keep the most important variables constant. Still, if you don't, you will never find the optimum conditions either! To avoid problems it can actually be wise to make use of a simulation model to evaluate the probable impact of the variations, if reasonable information and knowledge is available from the actual machine, or some similar one.

2.3.2.6 ADAPTATION OF MODELS USING ON-LINE DATA

Erik Dahlquist, Mälardalen University

It has been a dream of control engineers to update models on-line in an automatic way. Still, this is easier said than done. Mostly the problems relate to poor control of the status of the process and the sensors. Still, if we can keep control of these in an efficient way by the methods presented above, it is possible to update parameters in different models and control algorithms on-line. In self-tuning adaptive controls this has been performed primarily for linear systems [Åström K-J…]. Here predictions are made for a number of time steps ahead, and the difference between the predicted value and the measured value, once we come to the actual time, is used to update the parameters of the controller using a “forgetting factor” lambda. In this way a slower or faster adaptation can be achieved. This was implemented from 1982 by ABB in the NovaTune controller. For slow-moving processes like the activated sludge process in waste water treatment this worked very well [Rudberg and Olsson, 1984], and the controller was in operation for 13 years without any significant problems. For paper winders there was a significant problem in the beginning, as the controller was reset to 0 for all parameters at start, but when the parameter values from the last run were stored, the application worked nicely also for this very non-linear application. ABB later applied this technique also for refiner controls as part of the Auto-TMP package, together with a moisture sensor using NIR measurements (now within Metso's scope of supply). Adaptive PID controllers are a commodity today, at least for simple control loops.
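The core of such self-tuning adaptation is recursive least squares with a forgetting factor. The sketch below shows the generic textbook form of that mechanism, not the actual NovaTune implementation:

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with forgetting factor lam (0 < lam <= 1).

    Adapts the parameters theta of a linear predictor y = phi' theta;
    a smaller lam forgets old data faster (quicker but noisier adaptation).
    """
    def __init__(self, n_params, lam=0.98):
        self.theta = np.zeros(n_params)        # parameter estimates
        self.P = 1e3 * np.eye(n_params)        # estimate covariance
        self.lam = lam

    def update(self, phi, y):
        e = y - phi @ self.theta               # prediction error
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + k * e
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta
```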

When it comes to on-line adaptation of process models, it has still been most common to do the updates as a batch action, as the risk of including irrelevant data is often too high compared to the advantage of getting the update automatically. If a process engineer takes a quick look at the data, irrelevant time periods can very often be removed, to avoid rebuilding the model in the wrong way.

Still, as different diagnostic methods advance, the automation of the data handling becomes more robust, and in the future we can expect to see many more automatic updates.


2.3.2.7 THE IMPORTANCE OF SAMPLING FREQUENCY

Erik Dahlquist, Mälardalen University

The importance of the sampling frequency will be discussed through a few examples. Consider the delivery of a new sensor, where the request is to reduce the process variation by 2 %. If we use a sampling frequency of once every day, while the variations are mainly on the scale of 10 minutes, it may very well be that we reduce the 10-minute-scale variations by much more than 2 %, but perhaps, due to coincidence, the variation from one day to the next is not affected.

If we instead sample every 10 seconds, it may be that this variation is not affected either, as the control system does not handle these variations.

So if we have only stated that the variation should be reduced by 2 %, but not defined exactly how this should be measured, we may afterwards disagree totally about whether the goal was reached or not.

Another example is shown in Figure 1, where we can see a curve and how misleading sampling can be if too few samples are taken.

Figure 1. Illustration of how different sampling intervals can give very different information.


Here we can see that if we only consider the squares, the trend is that the process is operating at steady-state conditions; no variations are seen. If we instead look at the circles, we see quite a strong variation. This shows that we have to define what process variations we need to consider, and sample accordingly. It is normally good to start with a high sampling frequency to learn about the process, and from this make a judgement of what frequency would fulfil our real needs. It is not of interest to get too much information either. Signal filtering is all about doing this in a balanced way: not too much, not too little!
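The effect described above is easy to reproduce numerically: a variation with a 10-minute period sampled every 30 minutes always hits the same phase and looks like a perfectly steady process (the numbers are illustrative):

```python
import numpy as np

t = np.arange(24 * 60)                          # one day, minute resolution
y = 2.0 + 0.5 * np.sin(2 * np.pi * t / 10.0)    # 10-minute process variation

print(y[::1].std())    # sampled every minute: the variation is clearly seen
print(y[::30].std())   # sampled every 30 min: ~0, the process looks steady
```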

2.3.2.8 TIME MATCHING DIFFERENT SIGNALS

Especially when we build statistical models it is very important to keep track of the time lag between different measurements. If we measure all variables at the same clock time, but it takes e.g. 8 hours for the pulp to move through the digester, it is evident that time matching is needed.

One way of collecting all measurements for a certain volume of pulp is to use a plug flow model, like the Quality Foot Print (PQF) made by ABB. Here we follow the passage of e.g. 5 tons of pulp all the way through the fiber line to the final paper product. All new measurements along the process line are attached to the same volume of fibers, and thus prediction models with some robustness will be much easier to build.

For time matching between different signals there are tools that look for patterns and compare curves to each other. In this way we can see if different signals are connected to each other in some way. This is important for evaluating the performance of control systems. Sometimes one control loop may give oscillations that are significantly enhanced in other parts of the process, and this is not directly understood until a thorough analysis has been performed.
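One common pattern-matching technique is normalized cross-correlation; the sketch below estimates by how many samples one signal lags another (a simple illustration, not any specific commercial tool):

```python
import numpy as np

def estimate_lag(a, b, max_lag):
    """Return the lag (in samples) by which signal b trails signal a,
    found by maximizing the normalized cross-correlation."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    def corr(lag):   # correlation of a(t) with b(t + lag)
        if lag >= 0:
            return np.mean(a[:len(a) - lag] * b[lag:])
        return np.mean(a[-lag:] * b[:lag])
    return max(range(-max_lag, max_lag + 1), key=corr)

a = np.sin(np.linspace(0.0, 20.0, 200))
b = np.concatenate([np.zeros(8), a])[:200]      # b trails a by 8 samples
print(estimate_lag(a, b, 20))                   # -> 8
```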


CHAPTER 3 SOFT SENSORS

3.1 SOFT SENSORS – WHERE TO USE

Arjo Sinon, SAPPI, Holland

A soft sensor is a transformed signal from the process. The transformation can be as simple as multiplying the original value by a constant, as in currency calculations, but can also be as complicated as calculating the total actual running costs of the process and comparing this to some key performance indicator. In both cases the result of the soft sensor is recognizable to the end-user, instead of some less meaningful value like a temperature, flow or pressure.

The most viable places to use soft sensors are in the control room of the papermaking process and on the desk of the operations manager. All other places need detailed, inside information on the actual process or sensor readings, and transforming the original signals from the process will only lead to confusion or possible errors. Of course this is a somewhat sweeping statement as, for instance, the technology department may need some sort of performance indicator as well, but as a general rule of thumb soft sensors should be used as close as possible to the decision makers, be it in the field (operators) or in operations (manager).

3.2 SOFT SENSORS IN PULP AND PAPER INDUSTRY

Kauko Leiviskä and Aki Sorsa, Control Engineering Laboratory, University of Oulu

3.2.1 INTRODUCTION

In processes, there are variables that are difficult or even impossible to measure. If such a variable is crucial for process control or otherwise holds profitable information, there is a definite need to measure it indirectly. The term “soft(ware) sensor” basically refers to a measurement that is not taken directly from a process but that is produced in some alternative way, typically based on other measurements. In the process industry, the alternative way refers almost exclusively to modelling. Soft sensors can be utilized similarly to direct measurements in process monitoring and control. However, the variable produced through modelling often holds information beyond the computed numeric value. That information can be particularly useful in advanced process control tasks, like controller adaptation or process monitoring. Several other terms with a close resemblance in action are in use: indirect measurements, smart sensors, sensor fusion. These names, more or less, emphasize the technology applied.

A software sensor has been defined as the association of (a) sensor(s) (hardware), which allows on-line measurement of some process variables, with an estimation algorithm (software), in order to provide on-line estimates of non-measurable variables or model parameters, or to overcome measurement delays (de Assis and Filho 2000, Chéruy 1997). There are several estimation techniques, and four of them have been recognised to have strong potential in the on-line estimation of bioprocesses, namely (1) estimation through elemental balances; (2) adaptive observers; (3) filtering techniques (Kalman filter, extended Kalman filter); and (4) artificial neural networks (ANN). A review of soft sensor applications is also available in McAvoy (2002).

A generalized model for smart sensors is given in the IEEE 1451 standard. It complements the transducer with a smart transducer interface module (STIM), which in turn communicates with a network-capable application processor (NCAP) over a transducer-independent interface (TII). It is the basis of NASA's intelligent rocket test facility (IETF; Schmalzel et al. 2005). Their approach is based on a hierarchical systems approach, largely autonomous sensors, Gensym's G2 software as an expert system development environment, and network structures.

According to Luo et al. (2002), multi-sensor fusion and integration refers to the synergistic combination of sensory data from multiple sensors to provide more reliable and accurate information. The potential advantages of multi-sensor fusion and integration are redundancy, complementarity, timeliness, and the cost of the information. It can reduce overall uncertainty and thus serve to increase accuracy and reliability in the case of sensor error or failure.

According to Juuso (2004), intelligent analysers are software sensors combining on-line analysers and measurements to predict outputs and detect input changes or trends. They are used in quality prediction and control, and in applications for the detection of operating conditions.

In the Control Engineering Laboratory, indirect measurements have been developed and used for a long time, originating from the 1980's. This paper reviews some of the applications developed for the pulp and paper industry, together with a short general survey of methods.

3.2.2 METHODS

A good collection of modelling methods has found use in soft sensors. According to Luo et al. (2002), four groups of methods exist: estimation methods, classification methods, inference methods and artificial intelligence methods. A recent paper concerning the architecture and standardisation aspects of smart sensors is (Schmalzel et al. 2005).


3.2.2.1 ESTIMATION METHODS

The simplest way to implement a soft sensor is to build a steady-state or dynamic estimate of the non-measurable process variable, y, using one or more measured variables, x:

$$y(t) = f(x, p, t),$$

where p includes the estimated parameters, usually defined in the least-squares sense. The biggest problem is the variation of the parameters in time, and several solutions using adaptive observers exist. An adaptive observer requires the dynamic model of the process and in its simplest form is implemented with the following equations (de Assis and Filho 2000, Chéruy 1997):

$$\frac{dx_e}{dt} = g(x_e, u, p_e, t) + K_1\left(y - f(x_e, p_e, t)\right)$$
$$\frac{dp_e}{dt} = K_2\left(y - f(x_e, p_e, t)\right)$$

Above, the subscript e refers to the estimate and u are the control variables. K1 and K2 are the observer parameters.
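For instance, the observer can be integrated with a simple Euler step; the function below is a generic sketch (time dependence is omitted for brevity, and the quantities may be scalars or arrays):

```python
def observer_step(x_e, p_e, u, y, g, f, K1, K2, dt):
    """One Euler step of the adaptive observer above.

    x_e, p_e : current state and parameter estimates
    u, y     : control input and plant measurement at this instant
    g, f     : model functions, dx/dt = g(x, u, p) and y = f(x, p)
    K1, K2   : observer gains
    """
    err = y - f(x_e, p_e)                     # output prediction error
    x_next = x_e + dt * (g(x_e, u, p_e) + K1 * err)
    p_next = p_e + dt * K2 * err
    return x_next, p_next
```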

If the system can be described with a linear model and both the system and sensor errors can be modelled as white Gaussian noise, a Kalman filter provides unique, statistically optimal estimates. Extended Kalman filters (EKF) can be used where the model is nonlinear but can be suitably linearised around a stable operating point. Using conventional notation, the state equations are

$$x(k) = A(k)\,x(k-1) + B(k)\,u(k) + v(k)$$
$$y(k) = H(k)\,x(k) + w(k)$$

Above, v(k) and w(k) are zero-mean independent white Gaussian noise with covariance matrices Q(k) and R(k). The Kalman filter gives an unbiased optimal estimate of the state vector (Luo et al. 2002):

$$\hat{x}(k \mid k-1) = A(k)\,\hat{x}(k-1 \mid k-1) + B(k)\,u(k)$$
$$P(k \mid k-1) = A(k)\,P(k-1 \mid k-1)\,A(k)^T + Q(k)$$

For the estimation part, one can write


$$K(k) = P(k \mid k-1)\,H(k)^T\left(H(k)\,P(k \mid k-1)\,H(k)^T + R(k)\right)^{-1}$$
$$\hat{x}(k \mid k) = \hat{x}(k \mid k-1) + K(k)\left(y(k) - H(k)\,\hat{x}(k \mid k-1)\right)$$
$$P(k \mid k) = \left(I - K(k)\,H(k)\right)P(k \mid k-1)$$
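These equations translate directly into code; the following is a minimal single-step sketch with illustrative variable names:

```python
import numpy as np

def kalman_step(x_hat, P, u, y, A, B, H, Q, R):
    """One predict/update cycle of the Kalman filter equations above."""
    # Prediction
    x_pred = A @ x_hat + B @ u
    P_pred = A @ P @ A.T + Q
    # Measurement update
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred
    return x_new, P_new
```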

3.2.2.2 CLASSIFICATION METHODS

The implementation of parametric templates is computationally efficient for multi-sensor fusion systems. Cluster analysis tries to establish geometrical relationships in a set of sample data in a training process. Unsupervised or self-organized learning algorithms such as learning vector quantization (LVQ), K-means clustering and the Kohonen feature map can also be used for classification-based sensor fusion.

The well-known fuzzy c-means algorithm minimises the objective function (Yliniemi et al. 2003)

$$J_m = \sum_{k=1}^{n}\sum_{i=1}^{c} \left(\mu_{ik}\right)^m D_{ik}^2$$

where $D_{ik}$ denotes the Euclidean distance between the data point $z_k$ and the cluster centre, m (>1) is the fuzziness parameter and c is the number of clusters. At the beginning of the algorithm the memberships $\mu_{ik}$ are initialized with random numbers in [0, 1], so that the following condition holds:

$$\sum_{i=1}^{c} \mu_{ik} = 1$$

Next, the cluster centres are calculated

$$v_i = \frac{\sum_{k=1}^{n} \left(\mu_{ik}\right)^m z_k}{\sum_{k=1}^{n} \left(\mu_{ik}\right)^m}, \qquad i = 1, \ldots, c
$$

Then the membership functions are updated

$$\mu_{ik} = \frac{1}{\sum_{j=1}^{c} \left(D_{ik}/D_{jk}\right)^{2/(m-1)}}$$

The procedure is repeated until the optimality conditions are met.
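A compact sketch of the iteration (the random initialization and fixed iteration count are illustrative choices; in practice one stops when the memberships no longer change):

```python
import numpy as np

def fuzzy_c_means(z, c, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means following the update equations above.

    z: data array (n_samples, n_features); c: number of clusters; m > 1.
    Returns cluster centres v (c, n_features) and memberships u (c, n_samples).
    """
    rng = np.random.default_rng(seed)
    u = rng.random((c, z.shape[0]))
    u /= u.sum(axis=0)                       # memberships sum to 1 per sample
    for _ in range(n_iter):
        w = u ** m
        v = (w @ z) / w.sum(axis=1, keepdims=True)          # cluster centres
        d = np.linalg.norm(z[None, :, :] - v[:, None, :], axis=2) + 1e-12
        u = d ** (-2.0 / (m - 1.0))          # unnormalized memberships
        u /= u.sum(axis=0)
    return v, u
```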


3.2.2.3 INFERENCE METHODS

Bayesian inference allows multi-sensor information to be combined according to the rules of probability theory. Dempster–Shafer evidential reasoning is an extension of the Bayesian approach that makes explicit any lack of information concerning a proposition's probability.

The Bayesian approach utilises the classical theory of conditional probability. Let a be the value of variable A (which occurs with certainty) and let b be the value of variable B. The conditional probability that b occurs is

$$P(b \mid a) = \frac{P(a \text{ and } b)}{P(a)}$$

Above, P(a) > 0. For pattern recognition, machine learning and classification, we can use the formulation given in Gama and Castillo (2002). Suppose that $P(Cl_i \mid x)$ denotes the probability that example x belongs to class i. Any function that computes the conditional probabilities $P(Cl_i \mid x)$ is referred to as a discriminant function. In this case it is

$$P(Cl_i \mid x) = \frac{P(Cl_i)\, P(x \mid Cl_i)}{P(x)}$$

The decision rule is

$$\arg\max_i P(Cl_i \mid x)$$

Although this rule is optimal, its applicability is reduced by the large number of examples required to compute $P(x \mid Cl_i)$. To overcome this problem, several assumptions are usually made. Depending on the assumptions, different discriminant functions lead to different classifiers, e.g. the naive Bayes classifier

$$P(Cl_i \mid x) \propto \log P(Cl_i) + \sum_j \log P(x_j \mid Cl_i)$$

Gama and Castillo (2002) have also introduced adaptive naïve Bayes, iterative Bayes, and incremental adaptive Bayes.
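As a small illustration, the naive Bayes decision rule above amounts to only a few lines of code; the probability tables must of course be estimated from training data, and the interface used here is hypothetical:

```python
import numpy as np

def naive_bayes_classify(x, priors, cond_prob):
    """Return argmax_i [ log P(Cl_i) + sum_j log P(x_j | Cl_i) ].

    priors   : array of class probabilities P(Cl_i)
    cond_prob: function (i, j, x_j) -> P(x_j | Cl_i), e.g. from histograms
    """
    scores = [np.log(priors[i])
              + sum(np.log(cond_prob(i, j, xj)) for j, xj in enumerate(x))
              for i in range(len(priors))]
    return int(np.argmax(scores))
```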

The belief function theory of evidence aggregates degrees of belief with new pieces of evidence (Hong et al. 2003). When a new piece of evidence T is observed, the belief function Bel(·) is updated to the conditional belief function Bel(· | T). The conditional belief function can be calculated using the prior belief functions:

$$Bel(S \mid T) = \frac{Bel(S \cup \lnot T) - Bel(\lnot T)}{1 - Bel(\lnot T)}$$


3.2.2.4 AI METHODS

The potential of neural networks to model dynamic non-linear processes, in order to provide an on-line estimator, has been demonstrated in several applications (de Assis and Filho 2000, Chéruy 1997). Fuzzy logic allows the uncertainty in sensor fusion to be directly represented via fuzzification and fuzzy inference. Sasiadek (2002) also refers to the use of fuzzy Kalman filters and genetic algorithms in soft sensors and sensor fusion.

3.2.2.5 REFERENCES

Ainali, I., Piironen, M., Juuso, E.: Intelligent Water Quality Indicator for Chemical Water Treatment Unit. Proceedings of SIMS 2002 - the 43rd Scandinavian Conference on Simulation and Modelling, Helsinki, Finnish Society of Automation SIMS, 2002, pp. 247-252.

Alcaraz-González, V., Harmand, A., Rapaport, A., Steyer, J.P., González-Alvarez, V. and Pelayo-Ortiz, C.: Software sensors for highly uncertain WWTPs: a new approach based on interval observers. Water Research 36(2002) 10, 2515-2524.

An, W.S. and Sun, Y.G.: An Information-Geometrical Approach to Kernel Construction in SVM and its Application in Soft-Sensor Modeling. Proceedings of 2005 International Conference on Machine Learning and Cybernetics, Volume 7, Guangzhou, China, 18-21 Aug. 2005, pp. 4356-4359.

Chéruy, A.: Software sensors in bioprocess applications. Journal of Biotechnology 52(1997), 193-199.

Choi, DJ. and Park, H.Y.: A hybrid artificial neural network as a software sensor for optimal control of a wastewater treatment process. Water Research 35(2001)16, 3959–3967.

de Assis, A.J., Filho, R.M.: Soft sensors development for on-line bioreactor state estimation. Computers and Chemical Engineering 24 (2000), 1099-1103.

Dufour, P., Bhartiya, S., Dhurjati, P.S. and Doyle III, F.J.: Neural network-based software sensor: training set design and application to a continuous pulp digester. Control Engineering Practice, 13(2005) 2, 135-143.

Gama, J. and Castillo, G.: Adaptive Bayes for user modeling. Eunite Annual Symposium, Albufeira, Portugal, September 19-21, 2002, 6 p.

Haataja, K., Leiviskä, K. and Sutinen, R.: Kappa-number estimation with neural networks. In: Proceedings of IMEKO World Congress, Finnish Society of Automation, Tampere, Finland, Volume XA, pp. 1 - 5.

Hadj-Sadok, M. Z. and Gouzé, J. L.: Estimation of uncertain models of activated sludge processes with interval observers. Journal of Process Control, 11(2001)3, 299-310


Hong, X., Liu, W. and Scanlon, W.: Integrating belief functions with model-based diagnosis for fault management. Eunite Annual Symposium, Oulu, Finland, July 10-11, 2003, 6 p

Järvensivu M., Juuso E. and Ahava O.: Intelligent control of a rotary kiln fired with producer gas generated from biomass. Engineering Applications of Artificial Intelligence 14(2001), 629-653.

Juuso, E.K.: Integration of intelligent systems in development of smart adaptive systems. International Journal of Approximate Reasoning 35 (2004), 307–337.

Leiviskä, K.: Kappa number prediction with neural networks. Control Systems 2006, Measurement and control – Applications for the operator. Tampere, Finland, June 6-8, 2006, pp. 135-140. ISBN 952-5183-26-2.

Leiviskä K. and Juuso E.: Modelling of Industrial Processes Using Linguistic Equations: Lime Kiln as an Example. In Proceedings of the Fourth European Congress on Intelligent Techniques and Soft Computing -EUFIT'96, Aachen, August 28 - 31, 1996 (H.-J. Zimmermannn, ed.), volume 3, pp. 1919-1923, Aachen, 1996. Verlag und Druck Mainz.

Leiviskä K., Juuso E. and Isokangas A.: Intelligent Modelling of Continuous Pulp Cooking. In Leiviskä K., (editor): Industrial Applications of Soft Computing. Paper, Mineral and Metal Processing Industries. Physica-Verlag, Heidelberg, New York, 2001, 147-158.

Luo, R.C., Yih, C.-C., Su, K.L.: Multisensor Fusion and Integration: Approaches, Applications, and Future Research Directions. IEEE Sensors Journal, 2(2002)2, 107-119.

McAvoy, T.: Intelligent "control" applications in the process industries. Annual Reviews in Control, 26(2002)1, 75-86.

Murtovaara S., Juuso E. K., Sutinen R.: Fuzzy Logic Detection Algorithm. In Mertzios B.G. and Liatsis P., (editors): Proceedings of IWISP’96 - Third International Workshop on Image and Signal Processing on the Theme of Advances in Computational Intelligence, 4-7 November 1996, Manchester, UK, 1996, 423-426.

Murtovaara, S., Juuso, E. K., Sutinen, R. and Leiviskä, K.: Neural Networks Modelling of Pulp Digester. In: Dourado, A. (Ed.): Proceedings of CONTROLO'98, 3rd Portuguese Conference on Automatic Control, APCA, Portugal, pp. 627-630.

Rao, M., Corbin, J. and Wang, Q.: Soft sensors for quality prediction of batch chemical pulping process. Proceedings of the 1993 International Symposium on Intelligent Control, Chicago, U.S.A., pp. 150-155.

Sasiadek, J.Z.: Sensor Fusion. Annual Reviews in Control 26 (2002), 203-228.

Schmalzel, J., Figueroa, F., Morris, J., Mandayam, S., Polikar, R.: An Architecture for Intelligent Systems based on Smart Sensors. IEEE Transactions on Instrumentation and Measurement, 54(2005)4, 1612-1616.

Yliniemi, L., Koskinen, J. and Leiviskä K.: Data-driven fuzzy modeling of a rotary dryer. International Journal of Systems Science, 34(2003)14-15, 819-836.


CHAPTER 4 TRANSFER OF PROCESS KNOW-HOW INTO MODELS

Erik Dahlquist, Mälardalen University

It is advisable to try to identify the basic functions of certain process equipment. If we look at a screen in a digester, in the stock preparation or in deinking, they all have a similar basic function: how fibers or particles are separated is related to the size of the holes they have to pass. As a fiber web builds up on the screen plate, it determines the effective hole size. This size in turn is governed by the shear forces at the surface and by the flow rate of the particles through the screen plate, which depends on the pressure drop over the screen and the web. Of course there are differences between a screen in a continuous digester and a rotating screen, but the basic principles are the same, and thus the models can be very similar.

If we look at a heat exchanger in a boiler or a digester house, the basic principles are also the same, although one may be counter-current and the other co-current, and one may be exposed to much more fouling than the other. Still, the basic functions are the same and can be modelled in a similar way.

When we have identified the basic principles of a piece of process equipment, we need to understand how it really behaves: what factors impact its operation, and roughly how much. Here it is very valuable to discuss with suppliers, operators and process engineers. They will probably have different views, as they see the equipment from different angles.

From these discussions we can identify in more detail which parameters need to be tuned to fit different operational conditions.


PROCESS CONTROL AND DECISION MAKING

CHAPTER 5 MODEL BASED CONTROL

5.1 MODEL PREDICTIVE CONTROL

Bernt Lie, Telemark University and William Heath, Manchester University

Model based controllers are used for multi-objective on-line optimization of processes. The models can be statistical or physical. Normally an optimization is performed and set points are sent to a number of parallel PID controllers. There are also algorithms linked directly to control equipment such as valves or pumps. The most common version of model based control is the so-called MPC, or Model Predictive Control.

We can also have a more general type of model based control, which is more like a production plan. If we have a process and make a dynamic optimization of it, we can get a number of set points for a number of different process variables. These can be sent down to local ramps or PID controllers, but will have only a feed forward action and no feedback to the control algorithm. This is normally implemented as a production plan with a time step from 15 minutes up to hours, and is often more open-loop than closed-loop control.

As an example, such an installation has been implemented for a continuous digester at the Korsnäs pulp and paper mill in Sweden [Avelin et al 2006].

5.1.1 INTRODUCTION

5.1.1.1 BACKGROUND

Feedback control loops, in the form of individual PID loops, are ubiquitous in the pulp and paper industry, as well as in other industrial processes. The benefits of control in terms of product quality as well as efficient running of the plant are well understood. The advent of computer control has vastly increased the potential functionality of control, but according to Maciejowski (2002) Predictive Control, or Model-Based Predictive Control ("MPC" or "MBPC") as it is sometimes known, is the only advanced control technique – that is, more advanced than standard PID control – to have had a significant and widespread impact on industrial process control. According to Isaksson & Weidemann (2006), in the context of the pulp and paper industry, it is only in the last decade that model-based multivariable controllers have started to spread more widely. At the last three Control Systems conferences (Stockholm, Sweden, 2002; Quebec City, Canada, 2004; Tampere, Finland, 2006) successful applications of MPC were reported for

• Dry weight, ash, machine speed and moisture control (Kosonen, Fu, Nuyan, Kuusisto & Huhtelin, 2002), (Kuusisto, Nuyan & Kaunonen, 2006) as well as moisture control with multiple actuators (Korpela & Mäkinen, 2006).

• Wet end consistency (Kokko, Lautala, Korpela & Huhtelin, 2002), (Hauge & Lie, 2002), (Austin, Mack, Lovett, Wright & Terry, 2002).

• White liquor production (Chmelyk, Ip, Sheehan & Korolek, 2004).

• Bleaching (Shang, Forbes & Guay, 2004), (Dinkel, Villeforth, Mickal & Sieber, 2006) and brightening (Mongrain, Fralic, Gurney, Singh, Shand & Vallée, 2004), (Major, Bogomolova, Perrier, Gendron & Lupien, 2006).

• Single array cross-directional control (Fan, Stewart & Dumont, 2002), (Shakespeare & Kaunonen, 2002) and multiple array cross-directional control (Backström, He, Baker & Fan, 2004), (Fu, Ollanketo & Makinen, 2006), (Fan, Stewart, Nazem & Gheorghe, 2006).

• Continuous pulp digestion (Bhartiya & Doyle, 2002), (Alexandridisa, Sarimveisa, Angeloub, Retsinab & Bafasa, 2002) and TMP refining (Sidhu, Van Fleet, Dion, Anderson & Weger, 2004).

• Steam header pressure control (Gough, Kovac, Huzmezan & Dumont, 2002) – this paper also discusses consistency and bleaching applications.

• Control of a pulp mill powerhouse (Mercangöz & Doyle, 2006).

• Production planning (Pettersson, Ledung & Zhang, 2006).

• Wood grinding (Böling, Forsman & Lönnberg, 2002).

• Chip level control (Lindgren, Gustafsson, Forsgren, Johansson & Östensson, 2004) and holding tank level control (Sidhu, Allison & Dumont, 2004).

• Co-ordination of several control loops in grade changes (Nuyan, Huhtelin & Kaunonen, 2004).

Our aim is to give a brief overview of what MPC is, why it may be beneficial to install MPC, and what design choices are required for the successful implementation of MPC.

5.1.1.2 BRIEF HISTORY


MPC was pioneered in the 70s. Two early developments were model predictive heuristic control reported by Richalet, Rault, Testud & Papon (1978) and dynamic matrix control reported by Cutler & Ramaker (1979). These included quadratic cost functions based on linear models and constraint handling capability. Much of the early development of MPC was associated with the oil industry.

Since then the use of MPC has become widespread in the chemical process industries, to the extent that discussion is included in undergraduate textbooks such as Seborg, Edgar & Mellichamp (2004). Many companies and consultants provide MPC technology; Qin & Badgwell (2003) give a useful survey of current practice.

Meanwhile interest in academia has been steadily growing. This is surely inspired in large part by MPC's practical success. Important links have been established between MPC and optimal control theory, in particular LQG control. MPC can also be related to traditional Dahlin controllers and Smith predictors. Many embellishments have been proposed to tackle nonlinear, constrained and robust control problems.

Widespread interest in the pulp and paper industry kicked off in the 90s. This was largely a direct influence from other process control applications. However, there are two other important inspirations for our particular industry:

• At the Control Systems 1994 conference, Stockholm, Sweden (with results subsequently reported in Control Engineering Practice) a benchmark control problem was proposed (Hagberg & Isaksson, 1995), (Isaksson, Hagberg & Jönsson, 1995). This was to control the MD variation of dry weight and ash content. The thick stock valve position and filler valve position were available as manipulated variables. Of the seven proposed solutions reported in Control Engineering Practice, four used MPC technology (Makkonen, Rantanen, Kaukovirta, Koivisto, Lieslehto, Jussila, Koivo & Huhtelin, 1995), (Bozin & Austin, 1995), (Chow, Kuznetsov & Clarke, 1995), (Fu, Ye & Dumont, 1995). Although Isaksson, Hagberg & Jönsson (1995) state that no single controller was best, in all cases the potential of MPC for the pulp and paper industry was clearly demonstrated.

• The cross-directional control problem is specific to pulp and paper production (and also a handful of other industrial processes such as plastic film extrusion and steel rolling). The control loop requires the co-ordination of a large number of actuators subject to constraints and it has been recognized since the 90's (Heath, 1996), (Rawlings & Chien, 1996) that constrained MPC is an appropriate technology for this application. See also Heath & Wills (2004) for a more recent discussion of the suitability of MPC in this context.

5.1.2 MPC IN THE AUTOMATION HIERARCHY

MPC usually sits somewhere between low level (fast) PID controls and high level (slow) scheduling in a control hierarchy (see Figure 2). Thus it performs a dual role:


• MPC co-ordinates low level PID loops. These may be interacting, so MPC may be viewed as a large scale multivariable controller.

• MPC achieves steady state values that fulfil (or are designed to fulfil) some economic goal. MPC does this via on-line optimization.

Traditionally MPC has been implemented with large sampling times. Nowadays it is possible to solve convex optimization problems in micro-seconds; computation time is no longer an impediment to reducing the sampling time. Nevertheless the slow sampling time means that the dynamics of the low level loops are (for the most part) simple, which makes control design and tuning easier.

Optimization and scheduling are crucial to any large scale processing plant. Small fractions of percentage differences in yield can translate into millions of dollars of profit or loss, and it is no coincidence that MPC was pioneered in the oil industry. MPC algorithms themselves often have an internal hierarchy of optimization, distinguishing steady state values from transient behaviour.

The clear trend is for MPC to become the control technology of choice in all chemical process industries, including pulp and paper. With increased availability of efficient computation the technology will be applied to faster loops (with shorter sampling times); it will be used to replace PID loops as well as to co-ordinate them.

Figure 2: MPC is typically used to co-ordinate many lower level PID loops, with set points determined by optimizers and schedulers. (The figure shows the automation hierarchy: optimization and scheduling at the top, model predictive control in the middle and the PID loops at the bottom.)

5.1.3 WHY USE MPC?

MPC is gaining wider and wider acceptance in the pulp and paper industries. Although fashion plays some part in this trend, there are tangible benefits that MPC brings. The bottom line is that payback times for projects introducing MPC are often short.


As with any successful control scheme, MPC has the following features:

• It leads to stable and improved operation and/or quality.

• It tolerates model uncertainty.

• It is based on intuitive ideas and so yields operator acceptance.

• It is cost effective.

• Its widespread use means it has industry support and is relatively easy to maintain.

It has the following additional features:

• It is straightforward to apply to multivariable systems (where there is more than one controlled variable and/or more than one manipulated variable).

• It is straightforward to include constraints in the control specifications.

• It can be used to co-ordinate lower level PID loops and achieve optimal steady state trade-off between loops.

5.1.4 BASIC MPC PRINCIPLES

5.1.4.1 DEFINITIONS

MPC stands for "Model Predictive Control", sometimes "Model-Based" or "Multivariable Predictive Control". It is sometimes referred to as simply "Predictive Control" or even "Receding Horizon Control". It comes in many different guises. All forms share three key features:

Receding horizon. At each time step a sequence of future inputs (i.e. set values for the manipulated variables) is computed, but only the first element is implemented. This is analogous to human decision making – at any particular time we may formulate a plan that involves both immediate and future actions; but as the scenario unfolds we update the plan and modify our actions accordingly.

Optimization. The sequence of future inputs is computed as the solution (or possibly a heuristic approximation to the solution) of an optimization problem. The optimization can usually be solved using standard software tools.

Prediction. The optimization problem is usually expressed in terms of predictions on future outputs (i.e. measured variables). The relation between future inputs and future outputs is determined via a plant model.


The combination of optimization and prediction implies that the cost function corresponds to some control performance specification evaluated over some future horizon.

The specification of the model becomes crucial: it should be sufficiently good that the performance criterion corresponds to economic or quality requirements, but sufficiently simple that the optimization can be solved in a straightforward manner.

5.1.4.2 SIMPLE MPC CONSTRUCTION

In what follows we set out a simple MPC scheme based on a linear state space model of the plant. We point out some of the alternatives that are available to enhance MPC design.

PLANT MODEL

There are many forms of discrete plant model. We will consider a linear state space model of the form

$$x_{k+1} = A x_k + B u_k$$

$$d_{k+1} = d_k$$

$$y_k = C x_k + d_k$$

Here $x_k$ represents the state, which may or may not correspond to some tangible quantity, $y_k$ represents the output(s), $u_k$ represents the input(s) and $d_k$ represents the output disturbance(s).

A popular alternative to a state-space model is an input-output transfer function model of the form

$$y_k = G(z)\, u_k + d_k$$

where $z$ is the forward shift operator. The two model types are equivalent if we put

$$G(z) = C \left( zI - A \right)^{-1} B$$

Step-response models are also commonly used for MPC.

It is necessary to include a disturbance model (in some form) if one requires integral action. We have incorporated a very simple output disturbance model. It is possible (and often useful) to have more sophisticated models for both output and input disturbances.


When measurements of disturbances are available it is also possible to incorporate feed forward paths into the MPC design.

The state space formulation is highly versatile, but one needs to be careful about indexing. In what follows we will construct one form of model predictive control – we make inherent assumptions about the availability of data and the computation time.
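To make the notation concrete, a minimal NumPy sketch of such a state-space plant follows; the matrix values and the constant output disturbance are invented for illustration only and carry no process meaning.

```python
import numpy as np

# Illustrative plant on the form x_{k+1} = A x_k + B u_k, y_k = C x_k + d_k.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.5]])
C = np.array([[1.0, 0.0]])

def plant_step(x, u, d):
    """Advance the plant one sampling interval; d is the output disturbance."""
    x_next = A @ x + B @ u
    y = C @ x + d
    return x_next, y

# Response to a unit input step under a constant output disturbance.
x = np.zeros((2, 1))
d = np.array([[0.1]])
for k in range(5):
    x, y = plant_step(x, np.array([[1.0]]), d)
    print(f"k={k}  y={y.item():.3f}")
```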

PREDICTION MODELS

We assume the current input $u_k$ is known and the current output $y_k$ is measured. However, present states and future inputs are unknown.

One-step-ahead predictors of the states ($\hat{x}_{k+1|k}$), the disturbance(s) ($\hat{d}_{k+1|k}$) and the output(s) ($\hat{y}_{k+1|k}$) are given by

$$\hat{x}_{k+1|k} = A \hat{x}_{k|k-1} + B u_k + J_x \left( y_k - C \hat{x}_{k|k-1} - \hat{d}_{k|k-1} \right)$$

$$\hat{d}_{k+1|k} = \hat{d}_{k|k-1} + J_d \left( y_k - C \hat{x}_{k|k-1} - \hat{d}_{k|k-1} \right)$$

$$\hat{y}_{k+1|k} = C \hat{x}_{k+1|k} + \hat{d}_{k+1|k}$$

with $J_x$ and $J_d$ tuneable gains. These expressions are given in recursive form, where new values are given in terms of previous values together with the measured input and output data.
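In code, the predictor update is a few lines. The sketch below assumes the gains J_x and J_d are given as matrices of compatible dimensions; how to choose them (for example as Kalman filter gains) is outside the scope of this sketch.

```python
import numpy as np

def predictor_step(x_hat, d_hat, u, y, A, B, C, J_x, J_d):
    """One recursion of the one-step-ahead predictor defined above:
    maps (x_hat_{k|k-1}, d_hat_{k|k-1}) and the measured (u_k, y_k)
    to (x_hat_{k+1|k}, d_hat_{k+1|k}, y_hat_{k+1|k})."""
    innovation = y - C @ x_hat - d_hat      # y_k - C x_hat - d_hat
    x_next = A @ x_hat + B @ u + J_x @ innovation
    d_next = d_hat + J_d @ innovation
    y_next = C @ x_next + d_next
    return x_next, d_next, y_next
```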

Further state predictions may be made as

$$\hat{x}_{k+2|k} = A \hat{x}_{k+1|k} + B \hat{u}_{k+1|k}$$

$$\hat{x}_{k+3|k} = A \hat{x}_{k+2|k} + B \hat{u}_{k+2|k} = A^2 \hat{x}_{k+1|k} + A B \hat{u}_{k+1|k} + B \hat{u}_{k+2|k}$$

$$\vdots$$

$$\hat{x}_{k+j|k} = A^{j-1} \hat{x}_{k+1|k} + \sum_{i=1}^{j-1} A^{j-1-i} B \, \hat{u}_{k+i|k}$$

We have introduced predicted inputs $\hat{u}_{k+1|k}, \ldots, \hat{u}_{k+j-1|k}$ which we are (for the moment) free to choose arbitrarily.

Similarly further output predictions can be made as

$$\hat{y}_{k+2|k} = C \hat{x}_{k+2|k} + \hat{d}_{k+2|k} = C A \hat{x}_{k+1|k} + C B \hat{u}_{k+1|k} + \hat{d}_{k+2|k}$$

$$\hat{y}_{k+3|k} = C \hat{x}_{k+3|k} + \hat{d}_{k+3|k} = C A^2 \hat{x}_{k+1|k} + C A B \hat{u}_{k+1|k} + C B \hat{u}_{k+2|k} + \hat{d}_{k+3|k}$$

$$\vdots$$

$$\hat{y}_{k+j|k} = C A^{j-1} \hat{x}_{k+1|k} + \sum_{i=1}^{j-1} C A^{j-1-i} B \, \hat{u}_{k+i|k} + \hat{d}_{k+j|k}$$

With our simple disturbance model further disturbance predictions take the form

$$\hat{d}_{k+j|k} = \hat{d}_{k+j-1|k} = \cdots = \hat{d}_{k+1|k}$$

Notice that all the predicted values are expressed in terms of the current one-step-ahead predicted state $\hat{x}_{k+1|k}$, the current one-step-ahead predicted disturbance $\hat{d}_{k+1|k}$ and the predicted inputs.

COST FUNCTION

The control action is determined by minimizing a cost function. A typical cost function takes the form

$$J_k = \sum_{j=1}^{N_y} \left\| \hat{y}_{k+j|k} - r_{k+j|k} \right\|^2 + \lambda \sum_{j=1}^{N_u} \left\| \hat{u}_{k+j|k} - \hat{u}_{k+j-1|k} \right\|^2$$

with

$$\hat{u}_{k|k} = u_k$$

The cost function penalizes both deviations of the predicted output from the set-point and control moves. It has a number of elements:

Prediction horizon $N_y$. Generally speaking a longer horizon ensures better performance.

Control horizon $N_u$. Historically control horizons have been chosen to be short to reduce the computational load. If computation time is not an issue then choosing the control horizon approximately equal to the prediction horizon is a sensible choice. With our choice of indexing a suitable choice is $N_u = N_y - 1$; we will assume it takes this value from now on.

Control weighting $\lambda$. Generally speaking larger values of $\lambda$ lead to more cautious controllers with slower responses.


Reference trajectory $r_{k+j|k}$. This may be chosen equal to the desired set point, or may be some filtered version of it. For some MPC schemes the design emphasis is on choosing suitable trajectories for $r_{k+j|k}$; this is called reference governing.

With our choice of indexing, $\hat{y}_{k+1|k}$ is independent of the future inputs $\hat{u}_{k+1|k}, \hat{u}_{k+2|k}, \ldots$. Hence minimizing $J_k$ with respect to future inputs is equivalent to minimizing

$$\tilde{J}_k = \sum_{j=2}^{N_y} \left\| \hat{y}_{k+j|k} - r_{k+j|k} \right\|^2 + \lambda \sum_{j=1}^{N_u} \left\| \hat{u}_{k+j|k} - \hat{u}_{k+j-1|k} \right\|^2$$

(where we have only changed the indexing in the first summation).

Some MPC schemes have a separate optimization that computes steady state values. This will take into account the disturbance model and hence achieve integral action. It is advantageous in that it links MPC to the optimization and scheduling layer in the automation hierarchy. In our simple scheme we have precluded the necessity for such a calculation by working only in the control increments $\hat{u}_{k+j|k} - \hat{u}_{k+j-1|k}$.

CONSTRAINTS

A key advantage of model predictive control is that constraints can be included in the control objective. For example, actuator constraints of the form

$$u_{\min} \le \hat{u}_{k+j|k} \le u_{\max}$$

with $j = 1, 2, \ldots, N_u$ may be imposed. Similarly output constraints can be imposed, for example of the form

$$y_{\min} \le \hat{y}_{k+j|k} \le y_{\max}$$

Constraints on the state can also be included.

It is important to note a distinction in category between input and output constraints. Input constraints can be guaranteed. Output constraints may be violated if the prediction model is insufficiently accurate. On a similar note, model predictive controllers often distinguish hard and soft constraints, where hard constraints are guaranteed but soft constraints may be violated where necessary.

There are two common techniques associated with output constraints:

1. Zone. The output is constrained (with either a hard or soft constraint) to lie within some fixed bounds, but otherwise there is no set point.

2. Funnel. The output is constrained to lie between some bounds, which converge over the horizon to the set point, as sketched below.
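As an illustration of the funnel technique, the sketch below builds per-step output bounds that converge linearly from a wide band around the current output to a tolerance band around the set point; the widths and the linear shape are arbitrary choices for the example.

```python
import numpy as np

def funnel_bounds(y0, r, N, width0=2.0, tol=0.1):
    """Bounds (y_min_j, y_max_j), j = 1..N, that start width0 wide around
    the current output y0 and close to +/- tol around the set point r."""
    alpha = np.linspace(0.0, 1.0, N)     # 0 at the start, 1 at the horizon
    center = (1.0 - alpha) * y0 + alpha * r
    half = (1.0 - alpha) * width0 + alpha * tol
    return center - half, center + half

y_min, y_max = funnel_bounds(y0=5.0, r=8.0, N=10)
```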


COMPUTING CONTROLLER ACTION

It is standard to summarise the various equations in matrix form.

The prediction equations may be written as

$$\hat{Y}_k = \Lambda \hat{x}_{k+1|k} + \Phi U_k + D_k$$

where

$$\hat{Y}_k = \begin{bmatrix} \hat{y}_{k+2|k} \\ \vdots \\ \hat{y}_{k+N_y|k} \end{bmatrix} \quad \text{and} \quad U_k = \begin{bmatrix} \hat{u}_{k+1|k} \\ \vdots \\ \hat{u}_{k+N_y-1|k} \end{bmatrix}$$

and where

$$\Lambda = \begin{bmatrix} CA \\ CA^2 \\ \vdots \\ CA^{N_y-1} \end{bmatrix} \quad \text{and} \quad \Phi = \begin{bmatrix} CB & & & \\ CAB & CB & & \\ \vdots & \vdots & \ddots & \\ CA^{N_y-2}B & CA^{N_y-3}B & \cdots & CB \end{bmatrix}$$

Here $D_k$ stacks the disturbance predictions; it is defined below.
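The stacked matrices can be assembled mechanically from A, B and C. A NumPy sketch following the indexing above (outputs predicted from k+2 to k+N_y, inputs from k+1 to k+N_y-1):

```python
import numpy as np

def prediction_matrices(A, B, C, Ny):
    """Assemble Lambda = [CA; CA^2; ...; CA^(Ny-1)] and the block
    lower-triangular Phi with blocks C A^(j-i) B, as defined above."""
    m = B.shape[1]                  # number of inputs
    p = C.shape[0]                  # number of outputs
    Lam = np.vstack([C @ np.linalg.matrix_power(A, j) for j in range(1, Ny)])
    Phi = np.zeros(((Ny - 1) * p, (Ny - 1) * m))
    for j in range(1, Ny):          # block row for the prediction at k+j+1
        for i in range(1, j + 1):   # block column for the input at k+i
            blk = C @ np.linalg.matrix_power(A, j - i) @ B
            Phi[(j - 1) * p:j * p, (i - 1) * m:i * m] = blk
    return Lam, Phi
```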

The cost function can be written as

$$\tilde{J}_k = U_k^T H U_k + 2 f_k^T U_k + \{\text{terms independent of } U_k\}$$

where

$$R_k = \begin{bmatrix} r_{k+2|k} \\ \vdots \\ r_{k+N_y|k} \end{bmatrix} \quad \text{and} \quad D_k = \begin{bmatrix} \hat{d}_{k+1|k} \\ \vdots \\ \hat{d}_{k+1|k} \end{bmatrix}$$

and also

$$M = \begin{bmatrix} 1 & & & \\ -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \end{bmatrix} \quad \text{and} \quad E = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

and where

$$H = \Phi^T \Phi + \lambda M^T M \quad \text{and} \quad f_k = \Phi^T \left( \Lambda \hat{x}_{k+1|k} + D_k - R_k \right) - \lambda M^T E \, u_k$$

Input constraints of the form

$$u_{\min} \le \hat{u}_{k+j|k} \le u_{\max}$$

can be written

$$\begin{bmatrix} I \\ -I \end{bmatrix} U_k \le \begin{bmatrix} \bar{u}_{\max} \\ -\bar{u}_{\min} \end{bmatrix}$$

where

$$\bar{u}_{\max} = \begin{bmatrix} u_{\max} \\ \vdots \\ u_{\max} \end{bmatrix} \quad \text{and} \quad \bar{u}_{\min} = \begin{bmatrix} u_{\min} \\ \vdots \\ u_{\min} \end{bmatrix}$$

Output constraints of the form

$$y_{\min} \le \hat{y}_{k+j|k} \le y_{\max}$$

can be written as

$$\begin{bmatrix} I \\ -I \end{bmatrix} \hat{Y}_k \le \begin{bmatrix} \bar{y}_{\max} \\ -\bar{y}_{\min} \end{bmatrix}$$

with $\bar{y}_{\max}$ and $\bar{y}_{\min}$ defined similarly. This becomes

$$\begin{bmatrix} \Phi \\ -\Phi \end{bmatrix} U_k \le \begin{bmatrix} \bar{y}_{\max} \\ -\bar{y}_{\min} \end{bmatrix} - \begin{bmatrix} \Lambda \\ -\Lambda \end{bmatrix} \hat{x}_{k+1|k} - \begin{bmatrix} I \\ -I \end{bmatrix} D_k$$

General linear constraints including all input, output and state constraints, can be written in the form

$$A U_k \le b$$

for some matrix A and some vector b .

The control law becomes

$$U_k^* = \arg\min_{U_k} \left( U_k^T H U_k + 2 f_k^T U_k \right) \quad \text{s.t.} \quad A U_k \le b$$
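Pulling the pieces together, the sketch below performs one MPC step for a single-input, single-output plant: it forms H and f_k as above and solves the input-constrained QP. SciPy's SLSQP routine is used purely because it is widely available; a production controller would use a dedicated active set or interior point QP solver.

```python
import numpy as np
from scipy.optimize import minimize

def increment_matrices(N):
    """M forms the control increments M U_k - E u_k; scalar input assumed."""
    M = np.eye(N) - np.eye(N, k=-1)
    E = np.zeros(N); E[0] = 1.0
    return M, E

def mpc_step(Lam, Phi, x1, d1, r, u_prev, lam, u_min, u_max):
    """One receding-horizon step returning the first optimal input.
    x1 = predicted state (1-D array), d1 = predicted disturbance (scalar),
    r = stacked reference trajectory, u_prev = the last applied input u_k."""
    N = Phi.shape[1]
    M, E = increment_matrices(N)
    D = np.full(Phi.shape[0], d1)        # constant disturbance prediction
    H = Phi.T @ Phi + lam * (M.T @ M)
    f = Phi.T @ (Lam @ x1 + D - r) - lam * (M.T @ E) * u_prev
    A_ineq = np.vstack([np.eye(N), -np.eye(N)])
    b_ineq = np.concatenate([np.full(N, u_max), -np.full(N, u_min)])
    res = minimize(lambda U: U @ H @ U + 2.0 * f @ U,
                   np.zeros(N),
                   jac=lambda U: 2.0 * (H @ U + f),
                   constraints=[{"type": "ineq",
                                 "fun": lambda U: b_ineq - A_ineq @ U,
                                 "jac": lambda U: -A_ineq}],
                   method="SLSQP")
    return res.x[0]                      # only the first input move is applied
```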


This is in the form of a convex quadratic program with $H$ fixed, but $f$ dependent on the measured data. With only input constraints, $A$ and $b$ are fixed; with output or state constraints they may vary with the data. Such a quadratic program has many nice properties. In particular:

1. The map from $f$ to $U_k^*$ is continuous. If the cost is replaced by a linear cost, the resultant optimization is a linear program, which often results in nasty switching.

2. Since H is fixed it may be computed off-line. The matrices H and A are usually highly structured.

3. Quadratic programs may be solved efficiently using either interior point algorithms or active set algorithms.

The presence of state or output constraints means the control algorithm should include a check for feasibility (and a strategy for dealing with infeasibility). Soft constraints, nonlinear models or nonlinear constraints usually require more intensive computation. Generally speaking it should be possible to solve an optimization if it is convex. Many controllers compute only a suboptimal solution.

PRINCIPLE OF RECEDING HORIZON CONTROL

We have calculated $U_k^*$. We only implement the first term, i.e.

$$u_{k+1} = \hat{u}_{k+1|k}^*$$

This value is delivered to the plant as the next input.
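The receding horizon principle is perhaps clearest in a small closed-loop simulation. The self-contained sketch below uses a scalar first-order plant and no constraints, so the optimal input sequence each step has the closed form U = -H^{-1} f; all numbers are illustrative.

```python
import numpy as np

# Receding horizon on the scalar plant x_{k+1} = 0.9 x_k + 0.5 u_k, y = x.
a, b, N, lam, r = 0.9, 0.5, 5, 0.1, 1.0
Lam = np.array([a ** j for j in range(1, N + 1)])
Phi = np.array([[b * a ** (j - i) if i <= j else 0.0
                 for i in range(1, N + 1)] for j in range(1, N + 1)])
M = np.eye(N) - np.eye(N, k=-1)
E = np.zeros(N); E[0] = 1.0
H = Phi.T @ Phi + lam * M.T @ M          # fixed, so computed once off-line

x, u = 0.0, 0.0
for k in range(20):
    f = Phi.T @ (Lam * x - r) - lam * (M.T @ E) * u
    U = np.linalg.solve(H, -f)           # unconstrained QP minimiser
    u = U[0]                             # implement the first term only
    x = a * x + b * u                    # the plant responds; then repeat
print(f"output after 20 steps: {x:.3f} (set point {r})")
```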

5.1.5 DEVELOPMENTS IN MPC THEORY

MPC has traditionally been driven by industrial requirements and developed in tandem by industrialists and academics. Meanwhile there has been growing interest in the development of the theory of MPC within academia. This is in part due to the success of MPC and a recognition of its further potential, and in part a recognition that there are many aspects that we simply do not understand. A good summary of our current understanding can be found in the survey by Mayne, Rawlings, Rao & Scokaert (2000).

Nominal stability. First and foremost, there are no guarantees of nominal stability (stability under conditions where the model exactly fits the true plant) for our simple MPC construction example. This is despite the usual experience that such schemes can be tuned to work well. It is possible to test for nominal stability when there are no constraints, but even here the relation between the tuning parameters (such as horizon length, choice of observer and value of the control weighting) and the performance is not straightforward.

If the horizon is allowed to go to infinity and the full state is known (or there are no constraints) then the controller is guaranteed to be stable. Furthermore, under certain circumstances, it is possible to augment a finite horizon cost function $J_k$ with a weight on the terminal state $\hat{x}_{k+N_y|k}$ such that the control action is equivalent to that with an infinite horizon. This is known as quasi-infinite horizon MPC, and the augmentation to the cost function is known as a terminal weight. Generally such a terminal weight makes the control tuning more straightforward.

Unfortunately the result only holds when the steady state values are away from the constraint boundaries – this is often not the case in practice. Furthermore the required horizon length varies with both initial conditions and the steady state values. Thus in practice the terminal weight does not guarantee even nominal closed-loop stability.

Many researchers propose including a terminal constraint that ensures the final predicted state $\hat{x}_{k+N_y|k}$ is sufficiently close to the steady state value. Theory shows this gives nominal stability, but in practice it can de-tune the controller and be highly detrimental to performance, especially for short horizons. The idea is analogous to applying deadbeat control.

Robustness. There are almost no useful robustness results for MPC. In practice, of course, most MPC schemes work very well despite the presence of significant model errors. It is possible to guarantee robustness for certain MPC schemes where there are only input constraints, including the cross-directional control problem (Morales & Heath, 2006).

There are many proposals in the academic literature for augmenting MPC in order to guarantee robustness. A typical example is to replace the simple optimization with a min-max optimization (so that the worst-case scenario is taken into account). All such schemes add considerably to the computation. At this time there is no consensus that such schemes improve performance, or even improve robustness in practice.

Feasibility. If there are hard state constraints then it is possible for there to be no feasible solution to the MPC optimization problem at a specific interval $k$. Any practical MPC scheme must either have a hierarchy of constraints (where constraints may be dropped when necessary) or a contingency strategy for when infeasibility occurs.

Nonlinearities. It is possible to incorporate a nonlinear model into the MPC prediction and optimization. Similarly it is possible to include nonlinear constraints, and even make the cost function nonlinear. All such modifications increase the necessary computation, often considerably.


5.1.6 PRACTICAL ISSUES IN STATE-OF-THE-ART MPC

Models and simulation. The most important requirement for an MPC system is an appropriate model. Often this is acquired from plant data via system identification experiments (termed an empirical model). Nevertheless it should be consistent with both physical principles and any simulations available from existing plant models. For some applications it may be advantageous to use a model based on physical principles (a mechanistic model) directly. Hauge, Slora & Lie (2005) discuss the advantages of mechanistic models in MPC roll-out. In Table 1.1 we compare the relative advantages of empirical and mechanistic models for MPC.

The model should also be transparent enough to be updated over the years as plant conditions change. There is a useful discussion of control loop performance monitoring by Mitchell, Shook & Shah (2004).

Tuning. The lack of theoretical results for MPC means controllers should be verified by extensive simulation. It is good practice to ensure unconstrained performance is (robustly) stable with good performance.

In general the horizon should be chosen to be long, and the control weighting $\lambda$ should be chosen large if necessary.

Computation time. Historically the application of MPC has been limited to loops with slow sample times due to the heavy computational requirement. Nowadays this should no longer be an impediment – successful applications of MPC to fast vibrating systems with 5 kHz sampling rates have been reported (Wills, Bates, Fleming, Ninness & Moheimani, 2005).

Nevertheless it may be necessary to perform the computations on a dedicated processor, or still limit the application to slow sampling times if the computation is to be performed on existing shared hardware.

Good design. Although MPC is a useful and versatile tool, it should not be seen as a substitute for good control design. Success remains dependent on good control and plant insight – for example observing which loops may effectively be decoupled, or which variables are best controlled by feedforward or feedback loops.

MPC algorithms are usually implemented by vendors or consultants. Even if the controller has been developed internally within a company, it is likely to be implemented by engineers from a separate division from the process operations. Vendors usually spend a period of time (typically between two days and a week) making tests on the plant. They will be seeking the following information:

1. Which manipulated variables should be associated with which measured variables? Tools such as Bristol's relative gain array method are used.


2. Simple empirical dynamic models. Early MPC algorithms were based directly on step or impulse response models. Nowadays transfer function models are often (but not always) used. Step input tests remain typical, but other excitation signals may be used.

3. What are the constraints on the manipulated variables? What are the state and output constraints necessary or desirable for the operation?

4. Over what range are linear models with fixed parameters appropriate?

There is always a temptation to use MPC's constraint handling techniques to satisfy criteria that would otherwise be achieved by careful closed-loop design. Generally better performance is achieved by not being over-reliant on such functionality of MPC.

Operator and engineer re-education. It goes without saying that the success of any new technology is dependent on its acceptance by both plant operators and plant engineers.

Table 1.1: Mechanistic versus empiric models. Reproduced from Hauge (2003).

Properties                                    Mechanistic         Empiric

Utilize physical knowledge and insight        yes                 no
The parameters have known range               yes                 no
Number of unknown parameters                  low                 high
Time needed to develop a model                high                low
Easy to use for complex/unknown processes     no                  yes
Amount of data needed                         low                 high
Applicability to control and training         yes                 yes
Applicability to design                       yes                 no
Extrapolation properties                      good*               bad
Increases process knowledge                   yes                 no
Complex                                       yes (non-linear)    no (often linear)
Simulation                                    long/difficult      quick/easy
Possible roll-out of model                    yes                 no

*if structure is correct


5.1.7 CONCLUSIONS

MPC is already in widespread use throughout the process industries, and is gaining more and more acceptance in the pulp and paper industries. We have given a brief overview of the state-of-the-art, both in terms of basic MPC algorithms and where it is being applied in pulp and paper industries.

5.1.7.1 FUTURE POSSIBILITIES AND TRENDS WITH MPC

We envisage that MPC's use will grow, and it will soon be seen as a standard technology. Furthermore we expect that its role will evolve and expand in future years. With reference to Figure 2:

• Model predictive control will continue to be used as a mid-hierarchy co-ordinator. The recognized applications for MPC will grow.

• It is likely that MPC will begin to replace low level PID loops. This will be driven by:

o Computation time ceasing to be a limiting factor for the application of MPC.

o More widespread understanding of MPC tuning rules (just as PID tuning rules are now understood by operators).

o The recognition that MPC can bring improved performance and reliability even at the level of low-level controls.

• We expect to see MPC being used at the level of optimization and scheduling. MPC should become an enabling tool for more understanding and improved dialogue between production planners, control engineers and plant operators.

There is already a differentiation between linear MPC for standard operations and nonlinear MPC for batch processes. There will be a drive to improve and tailor plant models, as these are key to the successful implementation of MPC. In this context we also envisage increasing integration of MPC and simulation software to provide decision support for operators. Similarly the monitoring of control loops will become high priority.

5.1.7.2 FURTHER READING


We recommend the survey paper of Qin & Badgwell (2003) as a useful introduction to MPC. There is also a useful chapter in the undergraduate textbook by Seborg, Edgar & Mellichamp (2004).

The book by Prett & García (1988) is by now quite old and some of what it says about MPC is out of date. Nevertheless it is probably the best book on MPC from the perspective of industrial requirements and motivation. More recently there have been a number of books on MPC. The best of these is probably by Maciejowski (2002).

5.1.8 BIBLIOGRAPHY

Alexandridisa, A., Sarimveisa, H., Angeloub, A., Retsinab, T. & Bafasa, G. (2002). “A model predictive control scheme for continuous pulp digesters based on the partial least square (PLS) modeling algorithm”. Control Systems 2002. Stockholm, Sweden, June 3-5.

Austin, P., Mack, J., Lovett, D., Wright, M. & Terry, M. (2002). “Improved wet end stability of a paper machine using model predictive control”. Control Systems 2002. Stockholm, Sweden, June 3-5.

Backström, J., He, P., Baker, P. & Fan, J. (2004). "Advanced multivariable CD control delivers improved quality, runnability and profitability for different CD processes". Control Systems 2004. Quebec City, Canada, June 14-18.

Bhartiya, S. & Doyle, F. J. (2002). “Modeling and control of grade transition in a continuous pulp digester”. Control Systems 2002. Stockholm, Sweden, June 3-5.

Böling, J., Forsman, T. M. I. & Lönnberg, B. (2002). “Modeling, identification and multivariable control of wood grinding”. Control Systems 2002. Stockholm, Sweden, June 3-5.

Bozin, A. S. & Austin, P. C. (1995). “Dynamic matrix control of a paper machine benchmark problem”. Control Engineering Practice 3(10), 1479-1482.

Chmelyk, T., Ip, T., Sheehan, C. & Korolek, J. (2004). "White liquor pressure filter efficiency improvements through the application of model predictive control". Control Systems 2004. Quebec City, Canada, June 14-18.

Chow, C. M., Kuznetsov, A. G. & Clarke, D. W. (1995). “Application of generalised predictive control to the paper machine benchmark”. Control Engineering Practice 3(10), 1483-1486.

Cutler, C. R. & Ramaker, B. L. (1979). “Dynamic matrix control – a computer control algorithm”. AIChE national meeting, Houston, Texas.


Dinkel, M., Villeforth, K., Mickal, V. & Sieber, A. (2006). “Measurement and control strategies for bleaching”. Measurement and control – applications for the operator. Tampere, Finland, June 6-8.

Fan, J., Stewart, G. E. & Dumont, G. A. (2002). “Model predictive cross-directional control using a reduced model”. Control Systems 2002. Stockholm, Sweden, June 3-5.

Fan, J., Stewart, G. E., Nazem, B. & Gheorghe, C. (2006). “Automated tuning of multiple array cross-directional model predictive controllers”. Control Systems 2006. Measurement and control – applications for the operator. Tampere, Finland, June 6-8.

Fu, C., Ollanketo, J. & Makinen, J. (2006). “Multivariable CD control and tools for control performance improvement”. Control Systems 2006. Measurement and control – applications for the operator. Tampere, Finland, June 6-8.

Fu, C., Ye & Dumont, G. A. (1995). “A generalized predictive control design for the paper machine benchmark”. Control Engineering Practice 3(10), 1487-1490.

Gough, B., Kovac, S., Huzmezan, M. & Dumont, G. A. (2002). "Advanced predictive adaptive control of steam header pressure, saveall consistency, and reel brightness in a TMP newsprint mill". Control Systems 2002. Stockholm, Sweden, June 3-5.

Hagberg, M. & Isaksson, A. J. (1995). “Preface to the special section on benchmarking for paper machine MD-control”. Control Engineering Practice 3(10), 1459-1462.

Hauge, T. A. (2003). Roll-out of Model Based Control with Application to Paper Machines. PhD thesis, Norwegian University of Science and Technology, and Telemark University College, Norway.

Hauge, T. A. & Lie, B. (2002). “Model predictive control of a Norske Skog Saugbrugs paper machine: Preliminary study”. Control Systems 2002. Stockholm, Sweden, June 3-5.

Hauge, T. A., Slora, R. & Lie, B. (2005). "Application and roll-out of infinite horizon MPC employing a nonlinear mechanistic model to paper machines". Journal of Process Control 15(2), 201-213.

Heath, W. P. (1996). Orthogonal functions for cross-directional control of web forming processes. Automatica 32, 183-198.

Heath, W. P. & Wills, A. G. (2004). “Design of cross-directional controllers with optimal steady state performance”. European Journal of Control 10, 15-27. With discussion pp. 28-29.

Isaksson, A. J., Hagberg, M. & Jönsson, L. E. (1995). “Benchmarking for paper machine MD-control: Simulation results”. Control Engineering Practice 3(10), 1491-1498.

Isaksson, A. J. & Weidemann, H.-J. (2006). “Control systems for pulp and paper production – challenges and next steps”. Control Systems 2006. Measurement and control – applications for the operator. Tampere, Finland, June 6-8.


Kokko, T., Lautala, P., Korpela, M. & Huhtelin, T. (2002). “Comparative study of consistency control strategies and algorithms”. Control Systems 2002. Stockholm, Sweden, June 3-5.

Korpela, M. & Mäkinen, J. (2006). “Moisture control with multiple manipulated variables”. Control Systems 2006. Measurement and control – applications for the operator. Tampere, Finland, June 6-8.

Kosonen, M., Fu, C., Nuyan, S., Kuusisto, R. & Huhtelin, T. (2002). “Narrowing the gap between theory and practice: Mill experiences with multivariable predictive control”. In Control Systems 2002. STFi and SPCI, pp. 54-59. June 3-5, Stockholm, Sweden.

Kuusisto, R., Nuyan, S. & Kaunonen, A. (2006). “Multivariable predictive grade change controller”. Control Systems 2006. Measurement and control – applications for the operator. Tampere, Finland, June 6-8.

Lindgren, T., Gustafsson, T., Forsgren, H., Johansson, D. & Östensson, J. (2004). “Model predictive control of the chip level in a continuous pulp digester, a case study”. Control Systems 2004. Quebec City, Canada, June 14-18.

Maciejowski, J. (2002). Predictive Control with Constraints. Prentice Hall, Harlow, England.

Major, D., Bogomolova, O., Perrier, M., Gendron, S. & Lupien, B. (2006). "Control and optimization of hydrosulphite brightening for a three-addition point process". Control Systems 2006. Measurement and control – applications for the operator. Tampere, Finland, June 6-8.

Makkonen, A., Rantanen, R., Kaukovirta, A., Koivisto, H., Lieslehto, J., Jussila, T., Koivo, H. N. & Huhtelin, T. (1995). "Three control schemes for paper machine MD-control". Control Engineering Practice 3(10), 1471-1474.

Mayne, D. Q., Rawlings, J. B., Rao, C. V. & Scokaert, P. O. M. (2000). “Constrained model predictive control: Stability and optimality”. Automatica 36, 789-814.

Mercangöz, M. & Doyle, F. J. (2006). “Mathematical modeling and model based control of a pulp mill powerhouse”. Control Systems 2006. Measurement and control – applications for the operator. Tampere, Finland, June 6-8.

Mitchell, W., Shook, D. & Shah, S. (2004). “A picture worth a thousand control loops: An innovative way of visualizing controller performance data”. Control Systems 2004. Quebec City, Canada, June 14-18.

Mongrain, A., Fralic, G., Gurney, C., Singh, S., Shand, D. & Vallée, M. (2004). “Model predictive control of newsprint brightening”. Control Systems 2004. Quebec City, Canada, June 14-18.

Morales, R. M. & Heath, W. P. (2006). “Numerical design of robust cross-directional control with saturating actuators”. Control Systems 2006. Measurement and control – applications for the operator. Tampere, Finland, June 6-8.


Nuyan, S., Huhtelin, T. & Kaunonen, A. (2004). "Development of production line management – an integrated process and control design perspective". Control Systems 2004. Quebec City, Canada, June 14-18.

Pettersson, J., Ledung, L. & Zhang, X. (2006). “Decision support for pulp mill operations based on large-scale on-line optimization”. Control Systems 2006. Measurement and control – applications for the operator. Tampere, Finland, June 6-8.

Prett, D. M. & García, C. E. (1988). Fundamental Process Control. Butterworth-Heinemann, Boston.

Qin, S. J. & Badgwell, T. A. (2003). "A survey of industrial model predictive control technology". Control Engineering Practice 11, 733-764.

Rawlings, J. B. & Chien, I.-L. (1996). “Gage control of film and sheet-forming processes”. AIChE J 42, 753-766.

Richalet, J., Rault, A., Testud, J. L. & Papon, J. (1978). “Model predictive heuristic control: Applications to industrial processes”. Automatica 14, 413-428.

Seborg, D. E., Edgar, T. F. & Mellichamp, D. A. (2004). Process Dynamics and Control, second edn. Wiley, Hoboken, New Jersey.

Shakespeare, J. & Kaunonen, A. (2002). “Robust cross-machine control with rank-deficient processes”. Control Systems 2002. Stockholm, Sweden, June 3-5.

Shang, H., Forbes, J. F. & Guay, M. (2004). “Distributed parameter predictive control of bleaching towers”. Control Systems 2004. Quebec City, Canada, June 14-18.

Sidhu, M. S., Allison, B. J. & Dumont, G. A. (2004). “Multivariable averaging level control”. Control Systems 2004. Quebec City, Canada, June 14-18.

Sidhu, M. S., Van Fleet, R., Dion, M. R., Anderson, D. W. & Weger, B. W. (2004). “Modeling and advanced control of TMP refiner system”. Control Systems 2004. Quebec City, Canada, June 14-18.

Wills, A. G., Bates, D., Fleming, A. J., Ninness, B. & Moheimani, S. (2005). “Application of MPC to an active structure using sampling rates up to 25 kHz”. 44th IEEE Conference on Decision and Control and European Control Conference ECC-05, Seville, Dec 12-15.


CHAPTER 6 PRODUCTION PLANNING

6.1 PLANNING AND SCHEDULING

Erik Dahlquist, Mälardalen University

Production planning can be done with different time horizons: for the next few hours, for the next few weeks, but also for the coming years. The purpose is of course quite different in each case. In the first case it may be just to keep the plant running, producing the right amount and quality according to the order list and priority list. In the second case we would include the need for service and maintenance as well. For the long-term perspective we also have to add the need for new investments and rebuilds, and long-term contracts with suppliers and customers.

A typical example of a production plan for a paper machine may be how to run a sequence of products with different grammages, widths and lengths of the roll. For this purpose different optimization models have been implemented, and the most economic alternative is used for the production plan.

A problem occurs if, for example, an important and demanding customer demands a faster delivery than you have planned for, or if something happens in the production. Normally you optimize for a sequence and then try to deliver as fast as possible. Reprioritization outside the plan is normally not done automatically, but by a decision taken by the production manager and the planner together.

In the future it may be possible to re-plan on a more frequent basis and also to include the optimal time for service and maintenance, as well as the risk of shut-downs caused by running the production too hard during a limited time period to fulfill some demanding customers' needs.

6.2 DYNAMIC OPTIMIZATION

We have already mentioned how dynamic optimization can be used for production planning and scheduling as well as for on-line model based control. The example below shows how it was implemented in the EU DOTS project for dynamic optimization of paper mills.


6.2.1 A TOOLSET FOR SUPPORTING CONTINUOUS DECISION MAKING

Petteri Pulkkinen and Risto Ritala, Institute of Measurement and Information Technology, Tampere University of Technology, P.O. Box 692, FIN-33101 Tampere

6.2.1.1 ABSTRACT

Production processes are becoming increasingly complex and the responsibilities of the operators and the engineers are widening. Tools to manage this complexity under dynamic conditions are needed. Dynamic optimization integrated into operator and engineer decision support has high potential in everyday use. As a result of an EU Commission funded research project, a toolset for supporting continuous decision making has been developed. In this study the toolset is used to reduce grade change time on a paper machine.

6.2.1.2 INTRODUCTION

The main objective of the paper machine staff is to maintain acceptable machine runnability. In practice this means a high degree of utilization and a high machine speed, as well as meeting high quality requirements. Paper is usually produced to order, and thus frequent grade changes are inevitable. The time spent in grade changes must be minimized in order to maximize the production time and minimize the costs. An important constraint for the grade change operation is to avoid web breaks due to fluctuations in key variables during the grade change.

The minimization of the grade change time while avoiding web breaks forms a challenging optimization problem. In this paper the objective function for optimization is formulated, and evaluated through running dynamic simulation. As the optimizer may need hundreds or thousands of iterations to reach the optimal solution, the simulation time in practical applications has to be as short as possible. The simulation model was implemented in Matlab/Simulink. To reduce the simulation time the model is as simple as possible while being realistic.

The optimization is done with the toolset developed in the EU Commission funded research project "DOTS" (G1RD – CT – 2002 – 00755). The present implementation of the toolset is within Matlab, which is easy to bring to mill environments either as a full system or embedded in other systems, e.g. the process analysis system KCL-WEDGE (KCL, 2004). The DOTS toolset offers inbuilt stochastic optimization methods and utilizes the Tomlab environment (Holmström, 2004) as an external optimizer. In this paper we test the Sequential Quadratic Programming (SQP) method provided by the Matlab Optimization Toolbox in comparison with the stochastic methods offered by the DOTS Toolset. The objective functions are kept unaltered so that the results shown are comparable.


6.2.1.3 THE SIMULATION MODEL

When the model is driven by the optimizer, the response time of the simulation is essential. All extra features like displays and irrelevant outputs are removed. The model is parsimonious in that it captures only the features relevant for a realistic grade change, and nothing else. The parameterization of the model must be done so that the optimizer can run the model. The problem formulation sets requirements for the simulation model (Pulkkinen, et al., 2004). The inputs and outputs must be chosen and communicated correctly to get the information the optimizer requires.

The paper machine simulator developed is a combination of blocks based on the basic elements of a paper machine: tanks, valves and controllers (Pulkkinen, et al., 2003). The basic idea of the simulation has been to formulate a simplified model of what happens in a paper machine when a grade change is made. The main attention has been focused on the examination of the quality variables of the end product.

Figure 1 presents the simulation model. The simulator can be divided into three main elements: the short circulation including the wire section, the press section and the drier section. In real life the optimal operation of the short circulation, including wet end chemistry, is ranked as one of the hardest challenges of the papermaking process. In the simulator the simplified short circulation model consists of the machine tank, the wire section and the white water tank. The accuracy of the parameters required is not high because of the nature of the simulator use. The flow of material has been divided into four categories in the simulator: the flow components are water, filler and both short and long fibers. The retention at the wire section is taken into account by applying component-specific, yet constant in time, retention coefficients. The main emphasis is put on the fiber, filler and water flows after each block of the simulator.


The basic tank dynamics is that of an ideally mixed tank. The dynamics is therefore added to the model as first order transfer functions. The dynamics of the stock feeding is expressed with the transfer function $G_1(s) = (1 + 20s)^{-1} e^{-20s}$. The transfer function $G_2(s) = (1 + 120s)^{-1}$ is used to express the wire pit dynamics. The two transfer functions form the model of the dynamics of the short circulation.

A PI controller has been used to control the filler content and the basis weight. It is well known that these variables depend on each other, and this dependence is statically decoupled in the controller.

The press and drier blocks only increase the dry content. The delay caused by the dryer section can be calculated knowing the machine speed and the web length. The dynamics of the drier section is modeled with simplified thermodynamics of the heating cylinders.
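To make the short-circulation dynamics concrete, the sketch below simulates the two transfer functions in discrete time with a simple Euler scheme; the series connection and the 1-second step are assumptions of this sketch, not given in the text.

```python
import numpy as np

# Discrete-time sketch of the short circulation dynamics:
# stock feeding G1(s) = (1 + 20s)^-1 e^(-20s), wire pit G2(s) = (1 + 120s)^-1,
# simulated in series with a 1 s Euler step (the series coupling is assumed).
dt, T1, delay, T2, n = 1.0, 20.0, 20, 120.0, 600
u = np.ones(n)                    # step in the thick stock flow at t = 0
y1 = np.zeros(n)                  # after the stock feeding dynamics
y2 = np.zeros(n)                  # after the wire pit dynamics
for k in range(1, n):
    u_delayed = u[k - delay] if k >= delay else 0.0
    y1[k] = y1[k - 1] + dt / T1 * (u_delayed - y1[k - 1])
    y2[k] = y2[k - 1] + dt / T2 * (y1[k] - y2[k - 1])
print(f"63% of the final value is reached after ~{np.argmax(y2 > 0.63)} s")
```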

Figure 1: The Simulink model for the paper machine.

6.2.1.4 COST FUNCTION AND PARAMETRIZATION OF SET POINT TRAJECTORIES

When changing to a grade with a higher basis weight, the moisture of the paper produced increases if no action is taken on the dryer section. Even with moisture control on at a constant set point, there is a momentary increase in moisture, as the heating of the dryers is slower than the dynamics of fiber retention. An opposite variation happens during a change to a grade with a smaller basis weight. The former case is used as the example in this paper. The idea of the optimization of the grade change is to minimize the fluctuation of the moisture as well as the fluctuations of basis weight and filler content.

The combination of control actions that results in the smallest cost value is the answer to our optimization problem. The total cost in this case is a sum of the costs calculated for each key variable: moisture content, basis weight and filler content. Here the cost is calculated by squaring the deviation from the grade-specific set point and integrating over the duration of the grade change. In practice the variance is minimized while the expected values are changed. This approach has been used in the following examples due to its better success in optimization.

The overall cost can be expressed as

$$C = M \cdot \text{cost}_{\text{moist}} + B \cdot \text{cost}_{\text{bw}} + F \cdot \text{cost}_{\text{filler}} \qquad (1)$$

where $M$, $B$ and $F$ are weighting factors with the relation 1:1.7:0.08. The cost caused by the variations in each quality variable is quadratic:

$$\text{cost}_{\text{variable}} = \sum_{i=n_1}^{n_2} \left( x_i - x_{\text{target},i} \right)^2 \qquad (2)$$

where $x$ is the trajectory of the variable and $x_{\text{target}}$ is the grade-dependent target for the variable; $n_1$ and $n_2$ represent the selected start and end times of the cost calculation.

An alternative approach of cost calculation is as follows. The cost is zero when the quality is within grade-specific quality requirements and a positive constant when the quality is not within the specifications (“quality pipe”). Hence the minimum cost is reached when the time that quality variables are outside the quality pipe is minimized.

In order to find a minimum cost we are seeking optimal set point trajectories. This leaves us with infinite choices: the trajectories as functions of time. It is however justified to simplify the optimization task by parameterizing the trajectories appropriately.

In practice, a step down and up again is parameterized into the moisture set point time series. The optimization algorithm is used for finding the optimum for the three parameters: the two timing parameters and the step size.

The timing of the grade change is fixed, so the basis weight trajectory can be formed knowing one constant parameter. The last parameter needed is the one for the filler content. The change in filler content is stepwise and its size is fixed, so again only the timing parameter is needed. The parameterization reduces the search space down to four dimensions.
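A sketch of how the cost of Eqs. (1) and (2) and the parameterized moisture set-point trajectory could be coded; the weighting relation 1:1.7:0.08 comes from the text, while the function names and placeholder trajectories are invented for the example.

```python
import numpy as np

def variable_cost(x, x_target, n1, n2):
    """Eq. (2): squared deviation from target, summed over [n1, n2)."""
    return np.sum((x[n1:n2] - x_target[n1:n2]) ** 2)

def grade_change_cost(moist, bw, filler, targets, n1, n2,
                      M=1.0, B=1.7, F=0.08):
    """Eq. (1): weighted sum of the three per-variable costs."""
    return (M * variable_cost(moist, targets["moist"], n1, n2)
            + B * variable_cost(bw, targets["bw"], n1, n2)
            + F * variable_cost(filler, targets["filler"], n1, n2))

def moisture_setpoint(t, base, step, t_down, t_up):
    """The three optimised moisture parameters: step down by `step`
    at t_down and back up at t_up."""
    sp = np.full_like(t, base, dtype=float)
    sp[(t >= t_down) & (t < t_up)] -= step
    return sp
```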


The parameters that need to be evaluated are shown in Table 1. The initial values are given for each optimized variable and the timing of the grade change is a constant. The optimization algorithm then finds the combination of values of the variables that result in the least cost.

Table 1: The evaluated parameters of the simulator

Parameter                                          Type        Range

Grade change, timing (Gc)
  (= time of set point change of basis weight)     Fixed       Gc
Filler content, timing                             Optimized   Gc +/- 100
Moisture set point, step size                      Optimized   0.01-0.05
Moisture set point, 1st timing parameter (down)    Optimized   Gc +/- 100
Moisture set point, 2nd timing parameter (up)      Optimized   Gc +/- 100

6.2.2 OPTIMIZATION

The DOTS toolset offers an easy-to-use configuration for dynamic optimization problems. In this paper stochastic methods from the toolset's portfolio of optimization methods have been used. The parameters of the problem and the algorithm are specified through a graphical user interface in the Matlab environment.

It is known that algorithm performance is case dependent. We will show that the choice of algorithm plays an important role when starting with a new optimization problem (Dhak, et al., 2004). The minimum of a smooth quadratic cost function can be found by almost any optimization algorithm, which is also shown in (Ihalainen and Ritala, 1996). In this case the cost function is more complicated and differences in performance can be shown.

The optimization process was repeated using the stochastic methods in the DOTS Toolset and the SQP in Matlab Optimization Toolbox while the objective was kept the same.

Perhaps the simplest way of optimizing is a blind random search over the parameters; the parameter combination with the best cost found is then selected.

Genetic algorithms (Goldberg, 1989) consist of the following steps (a minimal sketch follows the list):


1. Start: a random population of n chromosomes is generated

2. Fitness: the fitness of each chromosome in the population is evaluated

3. New population: a new population is created by repeating the following steps until the new population is complete

a. Selection: two parent chromosomes are selected from the population according to fitness

b. Crossover: the parents are crossed over with a certain crossover probability to form new offspring

c. Mutation: the new offspring is mutated with a certain mutation probability

d. Accepting: the new offspring is placed in the new population

4. Replace: the generated population is used for a further run of the algorithm

5. Test: if the end condition is satisfied, the optimization is stopped; otherwise the loop continues from step 2
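A minimal real-coded sketch of these steps follows; tournament selection, blend crossover and Gaussian mutation are illustrative choices, not necessarily those used in the DOTS Toolset, and lo/hi are arrays of parameter bounds:

import numpy as np

rng = np.random.default_rng(0)

def genetic_algorithm(cost, lo, hi, n_pop=20, n_gen=50, p_cross=0.8, p_mut=0.1):
    """Minimal real-coded GA following steps 1-5 above (a fixed number of
    generations serves as the end condition)."""
    pop = rng.uniform(lo, hi, size=(n_pop, len(lo)))     # 1. start
    for _ in range(n_gen):
        fit = np.array([cost(ind) for ind in pop])       # 2. fitness
        new_pop = []
        while len(new_pop) < n_pop:                      # 3. new population
            # a. selection: two tournaments of two, lower cost wins
            i, j = rng.integers(n_pop, size=2)
            p1 = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.integers(n_pop, size=2)
            p2 = pop[i] if fit[i] < fit[j] else pop[j]
            # b. crossover: arithmetic blend with probability p_cross
            child = 0.5 * (p1 + p2) if rng.random() < p_cross else p1.copy()
            # c. mutation: small Gaussian perturbation
            if rng.random() < p_mut:
                child = child + rng.normal(scale=0.05 * (hi - lo))
            new_pop.append(np.clip(child, lo, hi))       # d. accepting
        pop = np.array(new_pop)                          # 4. replace
    fit = np.array([cost(ind) for ind in pop])           # 5. test/stop
    return pop[np.argmin(fit)]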

The idea of the simulated annealing algorithm (Otten and van Ginneken, 1989) is described by the following steps:

1. a feasible set of movements is generated randomly

2. the standard deviation of the cost function values in the feasible set is estimated and chosen as the starting temperature T

3. steps 4-6 are iterated n times

4. a feasible new point is chosen according to the previous procedure

5. the new point is accepted as the current point certainly if ΔC < 0, and with probability exp(-ΔC / T) if ΔC > 0, where ΔC is the difference in cost function value between the new point and the previous point

6. if the new point is the overall best found so far, the best point is updated

7. the temperature is reduced by 0.01*T^2/σ, where σ is the estimated standard deviation of the cost over the previous iteration path

8. if the best score has not improved during the last 50 iteration paths, the optimization is stopped; otherwise it continues from step 3
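A compact sketch of this procedure is given below, with Gaussian moves as a hypothetical choice of "feasible movements":

import numpy as np

rng = np.random.default_rng(1)

def simulated_annealing(cost, x0, step_scale, n_inner=20, max_paths=200):
    """Minimal simulated annealing following steps 1-8 above."""
    x = np.asarray(x0, dtype=float)
    c = cost(x)
    # 1.-2. random feasible moves; their cost std is the starting temperature
    probes = [cost(x + rng.normal(scale=step_scale, size=x.shape))
              for _ in range(n_inner)]
    T = float(np.std(probes))
    best_x, best_c, stall = x.copy(), c, 0
    for _ in range(max_paths):
        path_costs, improved = [], False
        for _ in range(n_inner):                                  # 3. iterate
            x_new = x + rng.normal(scale=step_scale, size=x.shape)  # 4. new point
            dC = cost(x_new) - c
            # 5. accept certainly if dC < 0, else with probability exp(-dC/T)
            if dC < 0 or rng.random() < np.exp(-dC / T):
                x, c = x_new, c + dC
            path_costs.append(c)
            if c < best_c:                                        # 6. track best
                best_x, best_c, improved = x.copy(), c, True
        sigma = max(float(np.std(path_costs)), 1e-12)             # 7. cooling
        T = max(T - 0.01 * T ** 2 / sigma, 1e-12)
        stall = 0 if improved else stall + 1
        if stall >= 50:                                           # 8. stopping rule
            break
    return best_x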


Tabu Search is also an iterative stochastic procedure designed for solving optimization problems. It keeps a list of previously found solutions, so that re-finding them in subsequent iterations is prevented. SQP, the sequential quadratic programming method, is a smooth nonlinear optimization method. It generalizes Newton's method for unconstrained optimization in that it finds a step away from the current point by minimizing a quadratic model of the problem. In its purest form, the SQP algorithm replaces the objective function with a quadratic approximation and the constraint functions with linear approximations.

The simulation results with the initial values for the parameters to be optimized are shown in the upper right corner of Figure 2. It can be seen that the grade change actually results in a large peak in the moisture. When the moisture set point is not manipulated and the filler content is changed at the same time as the basis weight set point, the cost caused by the grade change is 373.2 units (costs after optimization range from 31 to 35). It is obvious that with a step in the moisture set point we can stabilize part of the effect of the grade change on the moisture of the final product. The three moisture set point parameters determine how the stabilization is done. The filler content change time is chosen so that all the quality variables behave optimally. The lower right corner of Figure 2 presents the fluctuation of the moisture after the optimization.

An example of the behavior of the key variables and the set point trajectories in the optimized grade change situation is shown on the left side of Figure 2. It must be remembered that the results depend on the parameters given to the algorithms, and the comparison between the algorithms was carried out with default values.

Figure 2 Example of an optimization result on the left side of the figure. The behavior of the key variables and their set points is shown in the following order: moisture, basis weight and filler content. The grade change starts at 75500. The right side of the figure presents the fluctuation of the moisture before (upper corner) and after the optimization.

Different methods of the toolset have been used in this optimization task. The simulator and functions used with the DOTS Toolset have also been used with an SQP function in the Optimization Toolbox. It should be noted that the SQP algorithm finds local minima, so its result depends on the initial values given to the function/algorithm. The results of each method are shown in Table 2, where it can be seen that the best result is achieved with the genetic algorithm.

Table 2 The optimization results

Method / Variable | Tabu Search | Simulated annealing | Blind Random | Genetic algorithm | SQP
Grade change | 75000 | 75000 | 75000 | 75000 | 75000
Filler content, tim. | 75015 | 74980 | 74985 | 74990 | 75005
Moisture sp, tim. 1 | 74965 | 74965 | 74975 | 74980 | 74975
Moisture sp, tim. 2 | 75070 | 75055 | 75055 | 75050 | 75050
Moisture sp, step | 0.05 → 0.038 | 0.05 → 0.036 | 0.05 → 0.034 | 0.05 → 0.032 | 0.05 → 0.032
Cost | 33.57 | 33.90 | 31.57 | 31.32 | 34.34

Also noticeable in the table is the good result of the blind random search. This indicates that the search space is rather flat, which gives little grip for the more advanced methods. However, as Figure 3 shows, the optimization reduces the fluctuation effectively. A similar effect can also be seen in the behavior of the basis weight and the filler content. The reduced variance improves the stability of the process and speeds up the grade change.

6.2.3 CONCLUSIONS

By observing the cost values achieved with each method, we conclude that the best result in this grade change optimization is achieved by the genetic algorithm. The genetic algorithm also provides the best results for the "quality pipe" cost function. Although the methods give results close to each other, the genetic algorithm with proper parameters should be preferred because of the better optimized cost and the lower number of iterations. However, the performance of the methods depends on the cost function used and the parameters of the algorithms. The algorithms were applied with default parameters; had the parameters been tuned, the results could be somewhat different.

Compared with the functions in the Optimization Toolbox, the DOTS Toolset offers an easy way of solving an optimization problem. The graphical user interface makes the toolset a somewhat more attractive tool to use.

6.2.4 REFERENCES

Dhak, J., Dahlquist E., Holmström K., Ruiz J., Belle J., Goedsch F. (2004), Developing a Generic Method for Paper Mill Optimization, Control Systems Conference, Quebec City.

Dhak, J., Dahlquist E., Holmström K., Ruiz J. (2004), Generic Methods for Paper Mill Optimization, Simulation and Process Control for the Paper Industry, Munich.

Goldberg, D. E (1989). Genetic Algorithms, Addison-Wesley.

Holmström K., Edvall M. (2004), The Tomlab Optimization Environment, in Kallrath J. (ed) Modelling Languages in Mathematical Optimization, Chapter 19, Kluwer Academic publishers.

Ihalainen, H., Ritala R. (1996), Optimal Grade Changes, Control Systems Conference.

KCL, http://www.kcl.fi/wedge

Otten, R. H. J. M, van Ginneken, L. P. P. P. (1989), The Annealing Algorithm, Kluwer Academic Publishers.

Pulkkinen P., Ritala R., Tienari M., Mosher A. (2004), Methodology for dynamic optimisation based on simulation, Simulation and Process Control for the Paper Industry, Munich.

Pulkkinen P., Ihalainen, H., Ritala R. (2003), Developing Simulation Models for Dynamic Optimization, SIMS Conference, Mälardalen.


CHAPTER 7 DECISION SUPPORT

7.1 DIAGNOSTICS AND DECISION SUPPORT

We have already discussed diagnostics in some form in previous chapters. Data reconciliation is a method that may be used for diagnosing the sensors in a network. Variance analysis may be used to evaluate the noise level of a sensor or a rotating machine. Following the development of the heat transfer coefficient through an energy balance over a heat exchanger is another method. Loop tuning diagnostics is a fourth type of diagnostics.

Building on these types of diagnostics, we want to give the process engineers, maintenance staff and operators an evaluation of what comes out of the different diagnostic systems. This can be performed e.g. through a Bayesian net, giving relative probabilities for different types of faults, or through Case Based Reasoning (CBR), giving a distinct value: fault or no fault. Other methods can also be used, such as look-up tables.

Related to this we also have maintenance on demand. What-if scenarios may also be used for decision support, to determine how to operate the process in the best possible way, e.g. to avoid bottlenecks.

Commercial diagnostics systems of different kinds are available for the pulp and paper industry, both from major vendors such as ABB, Honeywell, Metso and Siemens, and as specific tools like WEDGE from Savcor Oy.

Early warning systems are not used much yet, but will probably be used much more in the future. One example is given under the applications for "utilities" later in the book. There, an early warning system was implemented to detect probable faults in a recovery boiler, such as tube leakages in the combustor part and in the economizer (ECO) part, as well as different sensor faults. The implementation was performed at Rottneros Vallvik in Sweden, where a gas explosion followed by a steam explosion occurred in 1998.

Other types of early warning/diagnostics/decision support systems can be exemplified by the system implemented by e.g. ABB for analysis of vibrations in rotating machines. With wire stretch meters mounted on motors or drives, the vibrations can be measured and analyzed. By variance analysis a specific value can be determined, and a warning is sent when a certain level is passed. If nothing is done, the level will increase and a new, stronger warning is sent to the operator displays. At a third level, an alarm is sent to the alarm system.

Depending on where the sensor is located, the most probable causes of the vibrations are given to the operators and maintenance staff, so that they can take the most appropriate action.


APPLICATIONS AND CASE STUDIES


CHAPTER 8 DESIGN TOOLS

8.1 ENGINEERING AND DESIGN OF NEW PROCESSES OR MODIFICATION OF EXISTING

There are a number of design tools for both steady-state and dynamic simulators. For process design, Aspen Plus is probably the most frequently used for chemical processes, as it includes a strong database with equilibrium data for most substances and compounds of interest. There are also other programs, such as SOLGASMIX, with very strong databases. For studying chemical reaction kinetics, ChemKin is a strong tool, with open code and a large network of users exchanging data within the community.

When it comes to dynamic or semi-dynamic simulators there are a number of tools. Especially common in the pulp and paper industry are IDEAS (today part of Andritz), FlowMac, PaperMac and PulpMac, APROS from VTT, WinGEMS (now part of Metso), the former SACDA (now Pacific Simulation, owned by Honeywell) and Simconx (owned by ABB). As can be seen, most of the strong simulator suppliers are now closely linked to, or even owned by, large automation companies, indicating that these companies see simulation as a type of tool needed in the future. We can also assume that the tools will be both linked to and integrated into the vendors' DCS systems.


CHAPTER 9 APPLICATIONS IN PULP MILLS (KRAFT, CTMP, RECOVERY)

9.1 PROCESS OPTIMIZATION AND MODEL BASED CONTROL IN PULP MILLS

Johan Jansson, now SAPPI S.A.; Erik Dahlquist, Malardalen University; Ulf Persson, earlier ABB, now Vattenfall AB

9.1.1 INTRODUCTION

AutoCook from ABB is an example of an advanced, process-oriented control system developed to control, monitor and report the performance of continuous and batch digesters in kraft pulp processes, where lignin is dissolved from the fibers in wood chips. The system enables mill personnel to achieve efficient, high-quality pulp production. AutoCook uses tables where the operator can see suitable temperatures and flows for all loops around the digester for a given production rate and quality. Behind these tables, models calculate the values. Models are also used, for example, to track chips/pulp throughout the whole fiber line. The tracking calculates the movement of chip/pulp "packages" down through the impregnation and digester vessels. The tracking function is used by many of the control functions to make sure that the right action is taken at the right time. During species/grade changes, the quality-dependent nominal set points are not changed until the new quality has reached the level in the digester where the control action is to take place.

Tables are used, for example, when AutoCook changes the production rate or the wood species without producing more off-spec pulp than necessary. Each wood species/pulp grade has its own set of pulp tracking parameters.

The trend today is to optimize the whole mill not only towards production and quality, but also towards, for example, minimization of energy and chemical consumption and of effluents. Most of the models are first-principles process models, i.e. physical models. These are used to simulate the process as part of the optimization. With a good process model it is possible to implement more advanced control such as model predictive control (MPC), as well as 2-D or 3-D dynamic on-line models for diagnostic purposes; e.g. hang-ups and channeling are of interest to identify in a continuous digester.

For the on-line simulation of the whole mill, ABB has developed a Pulp and Paper On-line Production Optimizer, which works as an "overall mill management system" giving set points to each process section. The models are on the same software platform as the more advanced models, for example the digester model presented here.


9.1.2 OVERALL PULP MILL OPTIMIZATION

On the overall mill level we want to optimize and control the chemical balance in a complex pulp mill. The first application is the control of sulfur in an integrated kraft and NSSC (Neutral Sulfite Semi-Chemical) pulp mill. This type of integrated recovery system is called "cross recovery", as sulfur is passed between the two lines. It is important to adjust the dosage of sulfur so that the sulfidity is sufficient but not excessive, and to keep the balance between different parts of the plant. This has been done with physical models that have been combined into a plant model. The model is then used to make a production plan that keeps the sulfur balance at its optimum. Implementation has recently been done at Billerud's Gruvön mill in Sweden, which typically produces 630 000 metric tonnes of paper a year.

Modeling

The modeling is based on a description of all components in the system as objects. The object type groups are streams, production units, measurements and calculated stream properties.

Examples of production units are the digester, pulp tanks, black liquor tanks, the lime kiln and the paper machine. Stream types may, for example, be pulp, white liquor, black liquor or wash liquor. Examples of measurements are pulp flow, tank level, sulfidity, effective alkali, reduction efficiency, black liquor density and alkali charge.

The streams are described by the total flow and by the concentrations of the components relevant for each stream. Examples of stream components are fiber, dissolved organic solids, hydroxide, sulfide, carbonate, sulfate and sulfur. These relations may be considered a dynamic mass balance and form a system of differential and algebraic equations of the form

F[x'(t), x(t), u(t), w(t), t] = 0

where x'(t) denotes the time derivative of the state x(t), w(t) denotes non-measurable disturbances and accounts for modelling uncertainty, and u(t) denotes measurable inputs, which in our case are the variables that can be manipulated by the operators or the control system.

The production units are connected by streams in a flow sheet network. Measurements are connected both to production units and to streams. The pilot mill system, with production units, streams and measurements, has the following size:

• 25 production units

• 38 buffer tanks

• Approx. 250 streams or pipes

• Approx. 250 measurements or observations

• Approx. 2500 variables


The levels in the buffer volumes change as a function of the flows in and out. Tanks are rigorously modelled assuming ideal mixing as

dM/dt = F_in - F_out

dc_out/dt = (F_in / M) * (c_in - c_out)

where F_in and F_out denote the incoming and outgoing total flows, M denotes the total mass in the tank, and c_in and c_out denote the incoming and outgoing concentrations. One equation of the second form is used for each component. More about this is described in Persson et al., 2003 [1].
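As an illustration, one explicit-Euler step of this tank model can be sketched as follows (the numerical values in the example are hypothetical):

def tank_step(M, c_out, F_in, F_out, c_in, dt):
    """One explicit-Euler step of the ideally mixed tank model:
    dM/dt = F_in - F_out and dc_out/dt = (F_in/M)*(c_in - c_out)."""
    M_new = M + dt * (F_in - F_out)
    c_new = c_out + dt * (F_in / M) * (c_in - c_out)
    return M_new, c_new

# Example: the tank level rises while the outgoing concentration
# drifts toward the inlet concentration
M, c = 100.0, 0.20
for _ in range(600):  # 600 steps of 1 time unit
    M, c = tank_step(M, c, F_in=1.2, F_out=1.0, c_in=0.35, dt=1.0)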

The optimization of the plant aims at minimizing:

• Variation in sulfidity

• Cost of sodium hydroxide and sulfur usage

• Production losses due to upstream and downstream disturbances

• Variation in sodium stock

• Variation of the distribution of sodium between white and black liquor

The optimization is done by estimating what value each variable should have at each time step over the time horizon we want to control. We are then not allowed to violate limits such as high or low levels in tanks, feasible flows in pumps and valves, etc. This is where we really need a computerized system to keep track of all these constraints. The optimization schedule is updated frequently, to handle new situations such as a need to change priorities in production at the paper mill, or equipment that has to be replaced. Each time we make a new optimization calculation, we need a starting point describing where we are right now. This is not as obvious as one might believe, as the process is seldom as stable as one would like. To handle this, a technique called moving-horizon estimation (MHE) [2] can be used.

This means that we include a number of time steps and compute an average over them. For the next time step we drop the oldest sample, add the newest, and compute a new average. This is repeated at every new time step. In this way we can filter out noise from sensors as well as actual upsets in the process. This is normally positive, as the optimization is not used for making changes on a very short time scale. Instead, we get a better view of "stable values", so that the optimization starts from more correct values than if only the momentary measurements were used. The same calculation can also be used to detect process upsets and faulty sensors, if the data are processed further.
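The averaging step itself is simple; a small sketch with hypothetical sensor readings:

import numpy as np

def moving_window_average(history, window):
    """Average over the `window` most recent samples; at each new time
    step the oldest sample drops out and the newest one is added."""
    return float(np.mean(history[-window:]))

# Example: filtering a noisy sulfidity signal before optimization
readings = []
for y in [34.2, 35.1, 34.7, 36.0, 34.9, 35.3]:  # hypothetical measurements
    readings.append(y)
    y_filtered = moving_window_average(readings, window=4)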



Optimization

The production-planning problem is formulated similarly to a model predictive control problem [3]. The optimization criterion consists of different quadratic and linear terms, which enable the following functions in the objective function:

1. Minimization of the deviation from set-point trajectories of any of the manipulated variables. One example is the paper machine production plan, which is incorporated in the optimization with production rates and recipes. These set points are automatically imported from the paper machine scheduling system. The pilot mill re-plans the paper machine very often, typically every second hour.

2. Minimization of the deviation from set-point trajectories of any of the measurement or soft-point variables. One example is the white liquor sulfidity.

3. Minimization of the deviation from set-point trajectories of any of the state variables. One example is the total sodium stock in the recovery cycle and the part of the sodium stock distributed on the white side. Another common example is a tank level set-point.

4. Minimization of the rate of change of the manipulated variables.

5. Minimization of linear costs. One example is make-up chemical cost.

6. Maximization of linear revenues associated with a state variable. One example is revenue by producing pulp during periods of overcapacity in pulp production compared to paper production.

This optimization criterion is minimized subject to the following constraints:

1. Process model

2. Sensor model

3. Upper and lower bounds on state variables (may be a time dependent trajectory)

4. Upper and lower bounds on manipulated variables. A maintenance stop is introduced in the optimization by adjusting the upper bound of the production rate during the stop time for a production object e.g. digester.

5. Maximum and minimum sensor values.

6. Upper and lower bounds on changes of the manipulated variables. Changes in the manipulated variables are sometimes bounded by hard constraints to avoid overly aggressive trajectories. One example is that a digester should not change its production rate faster than a certain rate per hour.

7. The production planning should start from the current estimated value from the state estimation.

The results will be the set points for all important controlled variables for the coming hours.
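Purely as an illustration of how such a criterion is assembled, the quadratic and linear terms above can be written as in the following sketch; the weights, the array shapes, and the handling of the constraints (which are left to the solver) are assumptions:

import numpy as np

def planning_objective(u, y, x, u_sp, y_sp, x_sp, w_u, w_y, w_x, w_du,
                       lin_cost, lin_rev):
    """Production-planning criterion over a horizon. u, y and x are
    (horizon x n) arrays of manipulated, measured/soft-point and state
    variables; *_sp are the corresponding set-point trajectories."""
    J = np.sum(w_u * (u - u_sp) ** 2)            # 1. manipulated variables
    J += np.sum(w_y * (y - y_sp) ** 2)           # 2. measurements/soft points
    J += np.sum(w_x * (x - x_sp) ** 2)           # 3. states, e.g. sodium stock
    J += np.sum(w_du * np.diff(u, axis=0) ** 2)  # 4. rate of change
    J += np.sum(lin_cost * u)                    # 5. e.g. make-up chemical cost
    J -= np.sum(lin_rev * x)                     # 6. e.g. extra pulp revenue
    return J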

During steady-state periods, the buffer tank volumes are most likely not needed to even out disturbances. It is then preferable to adjust the tank levels in a way that statistically minimizes the production loss around each tank, by considering the intensities of the stops that propagate through buffer tanks to adjacent production units. Long-term set points for all buffer tank levels are computed that minimize the production loss, considering the time between failures and the time to repair for each process section.

Simulation

To check the feasibility of the current way of operating the process, a simulation is made to see how long the process can run before some limit is violated. This is used as a support for the operators.

Results

An example of a process display from the mill is seen in Figure 1. To the left we see the history of the levels in the tanks in the fiber line; to the right of this graph, the optimized values recommended for the future production strategy; and far to the right, the actual current levels. This gives the operator a very good, quick overview of the process status and the recommended operations.

Figure 1. Historic trend of levels, optimized values for the future, and the actual current level, seen from left to right, for the fiber line.

In Figure 2 we can see a bottleneck analysis. This gives the operator a quick overview of where the bottlenecks will appear if operations proceed without any changes to the set points.

Finally, we look at a single variable, in this case the sulfidity of the white liquor, in Figure 3. To the left of the dotted vertical line we have the estimated value of the sulfidity. Horizontally at 35 % we see the last actual measurement of the sulfidity. The difference arises because the concentration varies in different parts of the system. To the right we first have a straight, dark line, which predicts what the value would be without any actions. The brighter curve shows the optimized value, if control actions are taken.

Figure 2. Bottleneck analysis.

Figure 3. The actually measured value of the sulfidity is 35 %, at one point. To the left is the estimated real sulfidity in the system; to the right, the predicted value if no action is taken (dark) and the optimized value (bright).

9.1.3 DIGESTER OPTIMIZATION AND MPC

Digester performance optimization

The next level of optimization is to look inside each process section, in our case a continuous digester. The overall set points for the digester are obtained from the mill optimization schedule. For the digester operations we have been working with physical modeling of the digester in 1-D and 2-D, including, in the 2-D case, the pressure drops inside the digester.


9.1.3.1 DIFFERENT MODELING APPROACHES

A number of different approaches can be applied for the simulation of digesters. Several of these are discussed in this section.

9.1.3.1.1 SEQUENTIAL SOLVER USING FORTRAN CODE WITH ITERATION BETWEEN PRESSURE-FLOW CALCULATIONS AND CHEMICAL REACTION/TANK LEVEL CALCULATIONS. 1-D.

Figure 1. Use of a sequential solver

First, the pressure calculations are performed by looking at the pressure lift of the pumps, the levels and pressure heads in the tanks, and the pressure drops in valves and entrainments. This gives the actual flow at each single pressure point, as we know the characteristics of the pumps, the valves, the valve openings, etc.

In the second step, the flow rates are used to calculate the flows into and out of each volume element, giving new levels in the tanks/vessels.


In the third step, we use the actual temperatures and concentrations in each volume element to calculate chemical reactions, diffusion, transfer between phases, etc. These are then used to calculate the new concentrations of all the flows in and out, as well as the new concentrations in the different phases of each volume element. In the figure to the right, we have real or "fake" valves between all the arrows, giving the pressure at every point separating the arrows. Real valve data are used for the dimensions, but tuning has to be done to obtain correlations between the actual flows as a function of chip size distribution, viscosity of the mixtures, temperature, etc.

A variant of this approach was used by Bhartiya and Doyle (2004) to model plugging in the digester; other types of engineering use are also common, e.g. to study specific design or operational problems.

This first approach gives dynamics for the complete mill and the interactions between all pieces of equipment. It is very useful as a training simulator and teaches you how DCS functions such as interlocking and PID controls respond. An example of the response to a change in a flow rate is shown in the figure below. The process values are presented directly on a process display, just like in the real plant, and trends of variables, PID control responses, etc. can be shown.

Figure 2. A process display from a training simulator, where a DCS system is interacting with a dynamic process simulator on-line.


9.1.3.1.2 SIMULTANEOUS SOLVER BUT WITHOUT PRESSURE-FLOW CALCULATIONS. THE FLOWS ARE ASSUMED CONTROLLED BY THE REAL DCS SYSTEM. 1-D


Figure 3. Use of a simultaneous solver but with fixed flow directions

In this case we assume that all flows are controlled by the DCS system and that the directions of the flows are always fixed, up or down. This makes it easier to calculate the concentrations, as we know which concentration is moving in which direction. If flow directions alternate as the pressure drop changes in a section, it otherwise takes more calculation power to find a stable solution with both correct concentrations and correct flows. With simultaneous concentration and flow calculations we get more correct concentrations, especially during transients, but more unpredictable calculation times. The advantage of not including pressure-flow calculations is that the model is easier to use for MPC calculations.

The 1-D model has been used for Model Predictive Control (MPC) of the digester, by optimizing the flow in each circulation loop. This keeps control over the temperature and the chemical additions according to the real demands.

The main advantage of MPC is its ability to handle multivariable processes with strong interaction between process variables, and with constraints involving both process and state variables. An example of how the results with MPC compare to operating according to "normal practice" is shown in Table 1 below.


Table 1. Optimized set-points from MPC compared to "normal practice" operations:

Decision variables | Base | Optimized | Description
SI103A | 22.421 | 22.421 | Chip Screw Rate, rpm
FI107A | 2.499 | 2.370 | White Liquor to High Pressure Feeder, lit/s
FI104B | 4.463 | 4.240 | White Liquor to Digester, lit/s
FI217B | 2.701 | 2.836 | White Liquor to Wash Circulation, lit/s
FI216B | 6.578 | 6.907 | White Liquor to Lower Cook Heater Circulation, lit/s
TI216A | 147.387 | 136.874 | Lower Cook Heater Outlet Temperature, degC
TI217A | 153.017 | 160.274 | Wash Circulation Heater Outlet Temperature, degC
TC102A | 94.000 | 99.000 | Chip Bin Temperature, degC
FI216C | 6.467 | 6.790 | Wash Liquor to Lower Cook Heater Circulation, lit/s
FI217C | 6.363 | 6.048 | Wash Liquor to Wash Circulation, lit/s
FI212H | 42.237 | 42.238 | Dilute Filtrate, lit/s
FI104C | 0.108 | 0.102 | Wash Liquor to Chip Tube, lit/s
FI102A | 1.220 | 1.354 | Reboiler Steam, kg/s

Dependent variables | Base | Optimized | Unit
Pulp Produced | 6.56 | 6.62 | kg/s
Chip Utilization | 26.067 | 25.976 | kg/s
Dissolved Lignin | 5.680 | 5.495 | kg/s
NaOH Consumption | 0.713 | 0.698 | kg/s
Na2S Consumption | 0.406 | 0.405 | kg/s
LP Steam Utilization | (0.069) | 0.001 | kg/s
MP Steam Utilization | 1.653 | 1.499 | kg/s
Steam Condensate | (0.130) | 0.003 | kg/s (negative value indicates net production)
Total White Liquor Utilization | 16.241 | 16.353 | lit/s
Total Wash Liquor Utilization | 55.175 | 55.178 | lit/s
Black Liquor | 51.521 | 51.545 | lit/s
Kappa Number | 89.56 | 89.56 | -
Pulp Yield | 57.45 | 58.22 | %
Objective Function | 0.047839 | 0.074172 | US$/s

In Figure 4 below we can see what this means for the temperature profile from the top to the bottom of the reactor. The dark (blue) line shows the set points when operating according to the normal strategy for this type of wood. The brighter (red) line shows how the MPC proposes to operate, to obtain lower energy and chemical consumption but with the same kappa number (remaining surface lignin) out of the reactor.


Figure 4. Temperature profile in the digester from top to bottom according to "normal operation" (dark blue) and as proposed by the MPC (brighter red).

9.1.3.1.3 2-D CALCULATIONS WITH A SIMULTANEOUS SOLVER FOR BOTH PRESSURE-FLOW CALCULATIONS AND CHEMICAL REACTIONS

In the 2-D approach we have a "fake valve" between each pair of volume elements, giving a complex network inside the digester, but also including the real valves in and out of the digester. With this approach we calculate the flow between the volume elements and can simulate hang-ups, channeling, different packing, etc. The model is used together with measurements of real flows, chip size distributions, chemical additions, concentrations in the black liquor extractions, temperatures, the final kappa number of the chips, etc.

Figure 5. Use of a 2-D model with a full pressure-flow network

This model can easily be extended to 3-D, although the computation time goes up significantly with 3-D compared to 2-D.


This gives the opportunity to use the model for different types of diagnostics, as well as for testing different design and operational options.

The digester model will also be used for sensor validation of the measuring equipment around the digester. If the calculated value starts to differ too much from the measured value, this indicates at an early stage that there is a problem with that measurement point. An early indication of problems saves production, quality and money. This type of sensor validation is important as a pre-step to the actual optimization.

The 2-D physical model of the digester takes into consideration the pressure drop inside the digester due to the channels between the wood chips. When there are mostly large chips, the channels between the chips are large, with a low pressure drop for the fluid flowing between them. When there is a large amount of fines, pins, etc., the channels between the chips shrink and the pressure drop increases. This is what happens in reality if we have a different chip size distribution or different packing in the digester. One reason for this may be that the chip size distribution is inhomogeneous, or that the chip screw in the stack does not feed in a constant way. When we have a lot of flakes, these may adhere to the screens and cause hang-ups. Aside from causing an increased pressure drop in the screens, the chips will also get different residence times and contact with liquors of different concentrations of both chemicals and dissolved organics. This may cause a significant variation in the kappa number of the final fibers. By identifying pressure drops, residual chemical concentrations in the liquors, temperatures and flows, and comparing actual results to those predicted by the model, we can tune the model to match reality. This is under the assumption that we have first achieved good process performance. The model can then be used both to optimize the performance, by adjusting e.g. temperature and chemical dosage, and to back-flush screens to avoid hang-ups before the problems become severe. There is also a potential for finding channeling in time to go in and adjust, although this demands regular measurements as well.

This 2-D model approach is best suited for diagnostic purposes, such as detecting channeling or hang-ups in the digester. Channeling means that the liquid is not getting into the chips but passing in channels between them. We then get less reaction between the chemicals and the lignin, resulting in less dissolved lignin (DL) in the extraction liquor, as well as more residual alkali (NaOH), since not all of it got the chance to react. An example of how this can look is seen in the simulation in Figure 6.


Figure 6. Simulated concentrations of NaOH (‰) and dissolved lignin (% DL) in the extraction line liquor. The predicted values correspond to normal operation, and the measured values correspond to a situation with channeling.
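A sketch of such a residual-based check, combining the sensor validation idea above with the channeling signature of Figure 6 (the tolerances here are hypothetical):

def diagnose_extraction(dl_meas, dl_pred, naoh_meas, naoh_pred,
                        tol_dl=0.5, tol_naoh=0.5):
    """Compare measured extraction liquor values against model predictions.
    Less dissolved lignin AND more residual alkali than predicted is the
    channeling signature; a large deviation in only one signal rather
    points to a sensor fault or another process upset."""
    low_dl = dl_pred - dl_meas > tol_dl          # less lignin dissolved
    high_oh = naoh_meas - naoh_pred > tol_naoh   # less alkali consumed
    if low_dl and high_oh:
        return "suspected channeling"
    if low_dl or high_oh:
        return "check sensor / possible process upset"
    return "normal"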

9.1.3.2 OTHER APPLICATIONS

So far we have tested this in a simulation environment, with good results. The next step is to make real application tests in a mill.

When it comes to production planning and scheduling, including "tank farming", we have to model not only the digester but also all other equipment surrounding it. There are several modeling approaches here. We can include models of the tanks and some other features directly in the formulation of the objective function and the constraints, and solve the dynamic optimization problem directly. Another alternative is to use a dynamic simulation model and let it calculate the result over the whole time horizon using a "branch and bound" type of solver. Communication between the optimizer and the simulator is then intense, but on the other hand we know we get a feasible solution. A third approach is to optimize using the first alternative, and then test the solution against the more detailed simulator, to avoid running into problems when implementing the optimal schedule.

All these approaches have been implemented, but for different applications. In the EU DOTS project (Bell et al., 2004; Dhak et al., 2004) we tested all of them, but for paper mill applications. At the Gruvön pulp mill the first alternative was used. Here the problem was very large and complex, as a large number of tanks and processes had to be included. To avoid problems with unreliable sensor measurements, the signals were filtered through a "moving window" approach, which turned out to be quite successful. This was done by ABB (Persson et al., 2003).

9.1.3.3 MODELLING OF DELIGNIFICATION

For both the 1-D and the 2-D model we need to tune the model equations with real process data. The border between statistical and physical models may not be that sharp: if we introduce a number of parameters into a physical model and these have to be tuned with plant data, it is in reality a combined physical and statistical model. The advantage is that we get the robustness of the physical model but can make use of statistics that really relate to the actual process.


To do this we first have to consider what input data to use to update the model. Which values are important for the model? Which data are reliable, so that we do not feed in “scrap information”?

For the digester application, we need information on the kappa number of the fibers as a function of the operational data, as well as data on what fibers are introduced (chip size, type of wood, etc.). The operational data primarily include the capacity, the chemical concentrations and the temperatures. From these data we can calculate, among other things, the residence time of the chips.

We can get the kappa number of what is coming out of the digester, but normally not from inside the reactor (although some digesters have sample points inside the actual digester as well).

Liquors from the extraction lines are still available, and their composition with respect to residual NaOH, NaHS and dissolved lignin may be measured on-line or in the lab.

Averaging gives a filtered value, but also the standard deviation, which is of great value for determining the process performance.

When we want to update the reaction rates for the dissolution of lignin from the fibers, we have to simplify some factors, as it would otherwise be too complex. First we group the collected data into classes. The first class level is the quality of the pulp. If we run short fiber and long fiber, we keep these two as the primary classes. If we also have different qualities of short and long fibers, we create subgroups for these as well. In the long run it may hopefully be possible to determine on-line what type of fibers we feed, by measuring e.g. the NIR spectrum of the chips on the conveyor belt feeding the digester. A lot of research has been performed on finding good correlations between NIR spectra and chip quality (e.g. Axrup [4]) as well as between NIR and pulp quality (e.g. Liljenberg [5]). Principally we calculate the H-factor:

H-factor = ∫ exp(43.2 - 16113/T) dt

Still, this does not include the effects of varying chemical concentrations, and thus we use the following expression instead:

dLignin/dt = a * [OH]^b * [HS]^c * exp(d - f/T)

Here we note that there is normally approximately 21-27 % lignin in the wood chips (see Table 2 below). The constant "a" represents the wood properties, including density and chip size. A large chip will have a lower "a" than a small one.

By integrating over the whole digester we will in principle get the total lignin dissolution, assuming the washing is perfect. If the washing is less good, we have to adjust for that as well. We can also simply make a binary notation of whether the washing is working as it should or not, and thus avoid treating it in the model as an analogue feature that needs to be updated.

Table 2. Wood composition

Species | Cellulose (%) | Hemicellulose (%) | Lignin (%)
Spruce | 42 | 28 | 27
Pine | 41 | 28 | 27
Birch | 41 | 34 | 21

From this we can estimate the relation between kappa number and lignin as:

% lignin in dry pulp = kappa number * 0.18

Softwood pulp that is to be bleached further is normally cooked to approximately kappa 30, with a yield of approximately 40 %, while unbleached kraft for liner is cooked to kappa 90-110, with a yield around 58-60 %. For sack paper and some other applications, kappa 40-50 is the target, and the yield becomes around 50 %. These are rough values.

What we do now is to calculate the lignin dissolution rate over the whole digester, and then divide by the residence time in the cook part of the reactor. From this we can then calculate the constants a, b, c, d and f. In practice this is done iteratively, to obtain updated constants.

For a pine species we got a reduction of lignin + hemicelluloses of 40 % from the original 55 %, with a residence time of 3.3 hours at 145 °C. This gives a reduction rate of 12.0 %/hour, which we use as dL/dt = 0.120. In this case we started with 0.9 mol/l NaOH and 0.45 mol/l NaHS.

The constants b = 0.5 and c = 0.4 are taken from the literature as suitable values; d = 18.33 and f = 5000 are calculated from process data where operation was done at different temperatures. The constants here are for a fast-reacting wood species. From this we can calculate "a" = 2.986*10^-4, and we have the formula:

dL/dt = 2.986*10^-4 * [0.9]^0.5 * [0.45]^0.4 * exp(18.33 - (5000/T)),

where T is in Kelvin (K)

This is fed into each volume element in the model and calculated for the whole digester, using the actual temperatures and the actual calculated concentrations. If the overall simulated rate is 0.125 instead of 0.12, we reduce the constant "a" for the temperature we have decided should be the normal set point. When we have process data for a number of different temperatures, we recalculate the constants d and f. The constants b and c are kept fixed.
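A sketch of this per-element calculation and of the retuning of "a" is given below; the zone temperatures, concentrations and residence times are hypothetical:

import numpy as np

def delignification_rate(OH, HS, T, a=2.986e-4, b=0.5, c=0.4, d=18.33, f=5000.0):
    """dL/dt = a*[OH]^b*[HS]^c*exp(d - f/T), with T in kelvin
    (constants as in the pine example above)."""
    return a * OH ** b * HS ** c * np.exp(d - f / T)

# Hypothetical zone data: temperatures [K], concentrations [mol/l],
# residence times [h]
T_zone = np.array([418.15, 423.15, 428.15])
OH_zone = np.array([0.90, 0.70, 0.50])
HS_zone = np.array([0.45, 0.40, 0.35])
dt_zone = np.array([1.1, 1.1, 1.1])

simulated = np.sum(delignification_rate(OH_zone, HS_zone, T_zone) * dt_zone)
measured = 0.120 * dt_zone.sum()           # overall dissolution from plant data
a_tuned = 2.986e-4 * measured / simulated  # rescale "a" to match the plant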

What we have to be careful about is the species going into the digester while collecting tuning data. Preferably we would like an on-line measurement of the species, e.g. by NIR spectra, correlating the spectrum characteristics to the final kappa result under given conditions. We then keep one set of constants per species and switch to the correct one according to the NIR spectrum measured at the feed point. A time delay is used to compensate for the time from the measurement until the fibers have passed the different sections of the digester. When we do not have on-line NIR measurement, we need at least a good "guesstimate" of the mix as manual input from the operators. The difference between the "guesstimate" and the resulting kappa can then be used by the operators to train on making better "guesstimates" from the information available.


In reality we tune to the kappa number and not to lignin, as the kappa number is the measured variable. This kappa number reflects a mix of lignin and hemicellulose, since the standard method simply measures the material that is oxidized by an oxidant, and it is affected by many factors that are not always well controlled.

By storing operational data from different runs we can calculate both averages for each run and an average over many runs during a longer time period.

From these calculations we get a tool to predict the lignin value both at the fiber surface and in the liquor. We can then compare with measured values, and the difference between predicted and measured values can be used for both control and diagnostics.

If we combine the physical model of the digester with an optimization algorithm, we can operate the digester so that the yield is maximized while chemical consumption, energy consumption and quality variations are minimized. The MPC then gives set points for the different circulation loops, which may be controlled by PI or PID controllers with respect to flow rates. Under the assumption that the other parts of the plant are coordinated with the production in the digester, we can get a significant improvement in the economy.

If we just replace PID loops with MPCs without coordinating the operation of the different pieces of equipment, the difference is small, as found by Castro and Doyle [2004], who therefore recommended that production in the whole plant should be coordinated.

Figure 7. Typical variations of kappa number and actual production rates for a digester during a week.

It is not enough to control only flows, temperatures and pressures; measurements of quality variables are also needed. Still, it is common that all these variables move around all the time, and steady-state conditions are seldom reached. For example, screens clog and have to be back-flushed. Because of this, flow rates change, and with them also chemical concentrations and temperatures.

Typical variations in kappa number and production rate are shown in Figure 7, with the kappa trends drawn as solid lines. Temperature and flow vary in a similar way. To use these values for tuning the model, we test different averages. A problem is that we often do not get lab measurements more often than every two hours, while on-line measurements may come every 5-10 minutes up to every 30 minutes; these measurements may be usable.


Now we have to try to fit the measured kappa number to the average digestion temperature, the time, and the average concentrations of hydroxide and sulfide.

As can be seen, the variations can be significant, although the wood is in principle of the same quality all the time!

The flow rates often drop as the screens clog; when the screens are back-flushed, the flow goes up again. The problem is that the pumps cannot fully overcome the pressure drop over the screens. The temperatures are collected in each zone, and the average is multiplied by the residence time, which is calculated as the volume of the segment divided by the flow rate of chips including liquor.

So far we have tested this in a simulation environment, with good results. Implementation is being done in the summer of 2007 at the Korsnäs mill in Sweden.

From a simulation of the variations in the process we can see that variations occur in the feed chips with respect to reactivity (lignin dissolution rate), moisture content and chip size distribution. The variation depends on where in the stack the material is taken. A heavy rainfall may greatly decrease the DS content at the surface and a bit into the stack. The chip density may vary depending on how the screw moves in the stack; at the end positions the bulk density may go up. The wood quality depends not only on the species, but also on how long ago the trees were cut, whether they grew in a valley or on a hillside, etc.

When it comes to the digester performance, we have to consider these variations. We will also often have clogging of the screens over time, and thus need back-flushing now and then. The frequency needed depends on the chip size and type. When a screen starts to clog, the recirculation flow goes down as the pressure drop goes up. This may affect both temperatures and concentrations in the liquor. We may also have an inhomogeneous reaction due to an unequal distribution of chips, with different size distributions both vertically and horizontally in the digester.

9.1.3.4 CONCLUSIONS

What we have given here is a discussion of how physical models tuned with process data can be used for several different purposes. First we discussed overall mill optimization, and thereafter optimization and diagnostics in digester operations. It is important to fit process data into the models in a good way, and we have discussed how this can be done automatically. An on-line measurement of the chip quality fed to the digester, e.g. by NIR, would in principle have a major impact on the overall process performance, including the yield. A 1-D model was used for MPC and a 2-D model for diagnostics of hang-ups and channeling. There are advantages in having different degrees of complexity in the models depending on what they are to be used for. Long term, when computer power is significantly better than today, it may be possible to use the same model for many purposes.

In this presentation we have discussed different means of modeling digesters for use in MPC and optimized scheduling. The first approach, a dynamic simulation model with iteration between pressure-flow network calculations and reactions in each volume element, is suitable for dynamic simulation where control actions through the DCS system also have to be accounted for. This method is suitable for detecting different faults and for testing "what if" scenarios, and can be used in optimization interactively with an optimization algorithm. The second approach, without a pressure-flow network solver but with a simultaneous solver for all reactions taking place inside the digester, is well suited for MPC applications, where set points are to be given to the control loops. The third approach, with more detailed models in 2-D (or even 3-D), is best suited for detecting hang-ups, channeling and other types of faults in the process. In the future, when computer capacity is significantly higher, it should also be possible to use it instead of the other two types of models.

9.1.3.5 REFERENCES

Axrup L., Markides K. and Nilsson T.: "Using miniature diode array NIR spectrometers for analyzing wood chips and bark samples in motion", Journal of Chemometrics 2000;14:561-572.

Bell J., Dahlquist E., Holmstrom K., Ihalainen H., Ritala R., Ruiz J., Sujärvi M., Tienari M.: Operations decision support based on dynamic simulation and optimization. PulPaper 2004 conference, Helsinki, 1-3 June 2004. Proceedings.

Bhartiya S. and Doyle F.J. III: Mathematical model predictions of plugging phenomena in an industrial single vessel pulp digester, Ind. & Eng. Chem. Res., 2004.

Castro J.J. and Doyle F.J. III: A pulp mill benchmark problem for control: problem description, pp. 17-29, Journal of Process Control 14, 2004.

Castro J.J. and Doyle F.J. III: A pulp mill benchmark problem for control: application of plant-wide control design, pp. 329-347, Journal of Process Control 14, 2004.

Dhak J., Dahlquist E., Holmstrom K., Ruiz J., Bell J., Goedsch F.: Developing a generic method for paper mill optimization. Control Systems 2004, Quebec City, 14-17 June 2004. Proceedings.

Liljenberg T., Backa S., Lindberg J., Dahlquist E.: "On-line NIR characterization of pulp". Paper presented at ISWPC 99, Japan, 1999.

Persson U., Ledung L., Lindberg T., Pettersson J., Sahlin P-O. and Lindberg Å.: "On-line Optimization of Pulp & Paper Production", in proceedings from the TAPPI conference, Atlanta, 2003.

Rao, Christopher V., Moving horizon strategies for the constrained monitoring and control of nonlinear discrete-time systems, University of Wisconsin-Madison, 2000.

Maciejowski, J.M., Predictive control with constraints, Prentice Hall, 2001.

Wisnewski P.A., Doyle F.J. and Kayihan F.: Fundamental continuous pulp digester model for simulation and control. AIChE Journal Vol 43, no 12, Dec 1997, pp. 3175-3192.


9.2 ECONOMIC BENEFITS OF ADVANCED DIGESTER CONTROL.

Erik Dahlquist 1,3, Lysette Shuman 2, Robert Horton 2, Lennart Hagelqvist 1

1) ABB Process Industries AB, Vasteras, Sweden; 2) ABB Process Industries Inc, Columbus, OH, US; 3) Malardalen University, Vasteras, Sweden

Potential economic benefits with better control:

The basis for the calculations here is a number of audits of different mills, primarily in the US and Scandinavia. An attempt to generalize has been made, as the variation between mills is significant, due both to the age of the equipment and to the pulp grades produced.

Typically, the kappa variation is in the range of 10-15 % from the set point (1 sigma) with conventional control. This means around 10-15 units for a kappa 100 pulp, or +/- 3-4.5 units for a kappa 30 pulp. Sometimes the variation is significantly larger, due to problems with e.g. chip feed stability or strong variation in wood properties. By better control of what goes into the digester, and by modeling the properties of the wood in relation to the final fiber properties, a significantly better feed-forward control may decrease the kappa variation by 10-50 %.

The yield for different pulps can vary between 40 and 75 %, depending on the grade. The interesting question is whether we could produce a kappa number of e.g. 25 with a 1-2 % higher yield. This might be done by driving the process in such a direction that we have bulk delignification most of the way and avoid residual delignification. By measuring dissolved lignin and total solids in the circulation lines and the extraction line, it may be possible to control this much better than before, by controlling the ratio between dissolved lignin and total solids, with both lignin and hemicelluloses included.

An example of measurements from a batch digester in this respect is shown in Figure 1 below. The interesting thing to see is that the dissolved solids increase even after the bulk delignification has stopped (90-110 minutes). This gives yield losses without much improvement in kappa.

In Figure 2, kappa measured in the lab is plotted (upper curve) together with the production rate (lower curve), as an example. It gives a typical picture of the variation in kappa, and also of typical frequencies of production disturbances in many digesters.

Another example is from a Swedish mill with a batch digester (Obbola), where the kappa standard deviation with advanced control together with conventional free alkali measurement was 4.0 % before introducing CLA2000. After the installation, using only the EA (free alkali) sensor, the kappa standard deviation decreased to 2.6 %; when all three sensors were used, as shown in figure 3, the standard deviation dropped to 2.06 %.


Figure 1. Dissolved lignin, dissolved total solids and residual alkali during a batch cook as a function of time.

As a complement, new software was also introduced to make use of the new functionality, including a kappa prediction model. Similar results have also been achieved in a continuous digester in Finland.

Figure 2. Typical variation in kappa and production. Kappa normalized to 100 as the average (right).


Looking at the details of Figure 2, we can see significant kappa shifts. This is very common, and can be due to wood variation as well as to problems in the operation of the digester, such as channeling. The shifts are seen more clearly in Figure 3.

Figure 3. Kappa shifts indicated with solid lines during a 10 day period.

Income increase due to increased production as kappa variation is reduced and yield increased:

By reducing the kappa standard deviation from 12 to 10 units, it is possible to run the process with a kappa target of 100 instead of 98. This corresponds to an increase in yield of 0.18 * 2 kappa units = 0.36 %.

For a 400 000 tpy mill with kappa 100 and yield 0.57 (57 %), this means 0.0036 * 400 000/0.57 = 2526 tpy of additional production. With a price of 600 USD/t, this corresponds to 1 516 000 USD/y in additional income.

If we instead assume an increase in yield of 1 %, it would mean (1/0.36) * 1 516 000 = 4 210 000 USD/y in increased income.

For a kappa 30 mill with yield 0.45 (45 %), the corresponding values would be

0.0036 * 400 000/0.45 * 600 = 1 920 000 USD/y

and

0.01 * 400 000/0.45 * 600 = 5 333 000 USD/y, respectively.

With a cost for wood, chemicals and energy of 180 USD/t, the net earnings will be 1 344 000 and 3 733 000 USD/y, respectively.

Digester cooking chemical savings:

A typical alkali charge on bone-dry wood (A/BDW) is 14-18 %. For a shift in kappa by 1 unit we can assume a 0.1 % alkali shift, so a kappa shift of 2 units means a shift in chemicals of 0.2 % per BDW.

[Figure 3 chart: production rate and kappa tests during the first 10 days of the June-July 30-day period, data sample rate 30 min.]


The alkali savings based on a 15 % A/BDW charge then mean 15 * 0.2 = 3 % alkali savings. With an alkali cost of 95 USD/t alkali, and assuming 57 % yield, we get a reduction of: 3/100 * 15/100 * 400 000/0.57 * 95 USD/t alkali = 300 000 USD/y.

Washing soda loss reduction:

With better washing control, the washing result can typically be improved without increasing the water usage. This can be translated into reduced soda loss. Typically the soda loss in washing is 2.5-10 kg/ton pulp when running at high capacity. The wash water is approximately 4-6 t water/t pulp. For a reduction of wash water by 2 %, the dilution factor should go down by 0.08-0.12; we say 0.1. If we assume 15 kg alkali/t pulp and the same price of 95 USD/t alkali, we get annual savings of:

0.1 * 15 kg/t * 95/1000 USD/kg * 400 000/0.57 tpy = 100 000 USD/y

Reduction in defoamer usage:

Often defoamer is added as a constant flow. By measuring and modeling the process more accurately, control of the defoamer addition also becomes meaningful. This can mean a reduction of defoamer cost by 10-20 %, or approximately 10 000 to 20 000 USD/y.

Reduced load on the evaporators and recovery boiler:

In very many mills today, the recovery line is the bottleneck for production.

By increasing the yield by e.g. 1 %, the load on the recovery boiler will go down correspondingly, depending on the yield the plant is running at. This means that production could be increased by approximately another 1 % in the whole plant, which means a gross additional income of:

400 000/0.57 tpy*0.01* 600 USD/t = 4 210 000 USD/y

if the recovery line is the bottleneck.

For kappa 25 at 45 % yield the corresponding figure would be

400 000 / 0.45 tpy * 0.01 * 600 USD/t= 5 333 000 USD/y

With a direct cost of chemicals, wood and energy of 180 USD/t, the corresponding net income increase would be (600 - 180)/600 = 0.7 times the above figures. This means 2 950 000 and 3 730 000 USD/y, respectively.

Bleached pulp:

For bleached pulp the effect can be different. If we can drive the delignification towards only bulk delignification, and avoid the negative effects of the residual delignification phase, the bleaching should also be better. By reducing the kappa variation, we can also save chemicals in the bleach plant.

Normally the consumption is approximately 2 * kappa number kg active chlorine per ton pulp. The price of chlorine as ClO2 is around 0.35 USD/kg active chlorine. For a kappa 30 pulp this means 60 kg active chlorine per ton pulp.


If the kappa variation into the bleach plant is reduced by 1 kappa unit due to better control of the digester, the kappa target into the bleach plant can be shifted by 1 kappa unit. The chemical savings will then be:

400 000 * (2* 1 kg active chlorine/ton) * 0.35 USD/kg = 280 000 USD/y

Energy savings:

If we assume the following heat balance:

1.42 GJ/tp as flash steam (recovered internally) and 0.61 GJ/tp as fresh steam for impregnation

1.63 GJ/tp for heating to boiling temperature

0.61 GJ/tp for hi-heat wash

-0.47 GJ/tp released by the chemical reactions

______________________________________________

Net fresh steam consumption 2.38 GJ/tp

To this comes steam in the evaporation plant, approximately 4 GJ/tp. The total is then about 6.4 GJ/t pulp. With a steam cost of approximately 3 USD/GJ, this means a total energy cost of 3 * 6.4 = 19.2 USD/tp.

If we increase the yield by 1 %, the saving also means a reduced energy cost per ton of pulp produced:

0.01* 19.2 * 400 000 / 0.57 = 135 000 USD/y for the 57 % yield and

0.01* 19.2 * 400 000 / 0.45 = 170 000 USD/y for the 45 % yield mill.

Summary of the potential economic benefits with advanced control using process models:

For a 400 000 tpy mill with kappa 100, the net earnings will be:

Increased yield by 1 % 2 950 000 USD/y

Cook chemical savings 300 000 USD/y

Reduced wash soda losses 100 000 USD/y

Defoaming chemical reduction 10 000 USD/y

Reduced load on bottleneck recovery 2 950 000 USD/y


Reduced energy cost 135 000 USD/y

____________________________________________________

Total 6 445 000 USD/y

For a kappa 30 mill with 400 000 tpy, the net earnings will be:

Increased yield by 1 % 3 733 000 USD/y

Cook chemical savings 300 000 USD/y

Reduced wash soda losses 100 000 USD/y

Defoaming chemical reduction 10 000 USD/y

Reduced load on bottleneck recovery 3 733 000 USD/y

Reduced chemical cost in bleach plant 280 000 USD/y

Reduced energy cost 170 000 USD/y

____________________________________________________

Total 8 326 000 USD/y
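These line items follow from a handful of inputs. The short Python sketch below simply replays the arithmetic of the preceding sections for the kappa 100 mill; all prices and factors are the figures quoted above, and the variable names are illustrative.

```python
# Reproduces the benefit estimate above for the 400 000 tpy, kappa 100 mill
# (57 % yield). All prices and factors are the figures quoted in the text.

TONNAGE = 400_000                       # tpy pulp
YIELD = 0.57
PULP_PRICE = 600.0                      # USD/t
NET_MARGIN = (600.0 - 180.0) / 600.0    # direct costs of 180 USD/t
ALKALI_PRICE = 95.0                     # USD/t alkali
STEAM_COST = 3.0                        # USD/GJ
STEAM_USE = 6.4                         # GJ/t pulp, digester + evaporation

wood = TONNAGE / YIELD                  # bone-dry wood feed, t/y

items = {
    # 1 % higher yield -> 1 % more pulp from the same wood, at net margin
    "Increased yield by 1 %": 0.01 * wood * PULP_PRICE * NET_MARGIN,
    # 3 % savings on a 15 % alkali charge on wood
    "Cook chemical savings": 0.03 * 0.15 * wood * ALKALI_PRICE,
    # 0.1 lower dilution factor * 15 kg alkali/t pulp
    "Reduced wash soda losses": 0.1 * 15.0 / 1000.0 * ALKALI_PRICE * wood,
    "Defoamer reduction": 10_000.0,
    # recovery-line bottleneck: another 1 % production at net margin
    "Reduced load on recovery": 0.01 * wood * PULP_PRICE * NET_MARGIN,
    # 1 % yield -> 1 % lower steam cost per ton produced
    "Reduced energy cost": 0.01 * STEAM_USE * STEAM_COST * wood,
}
for name, value in items.items():
    print(f"{name:28s} {value:12,.0f} USD/y")
print(f"{'Total':28s} {sum(items.values()):12,.0f} USD/y")
# The text rounds the individual line items, giving a total of 6 445 000 USD/y.
```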

Conclusions:

From this we can see that there is a significant potential for improvement in digester control relative to what we have in most mills today. Still, it must be recognized that it is the combination of the right measurements with model based control that gives the benefits. If we only install the sensors, or only introduce new models to be used with the control, we will only get part of the benefits (less than 50 %); the combination is the driver.

The use of model based control will be a good method to achieve optimal income at every moment, especially if we can measure the incoming wood properties on-line for feed-forward control.

References:

Axrup L.: Determination of wood properties using NIR. Journal of Chemometrics (2001).

Dahlquist E., Ekwall H., Lindberg J., Sundstrom S., Liljenberg T., Backa S.: On-line characterization of pulp - stock preparation department, SPCI, Stockholm (1999).

Lindstrom M.: Some factors affecting the amount of residual phase lignin during kraft pulping. PhD thesis, Royal Institute of Technology, Stockholm, 1997.

MacLeod M. and Johnson T.: Kraft Pulping Variables, TAPPI Short Course in Savannah, 1995.


Svedman M. and Tikka P.: Effect of softwood morphology and chip thickness on pulping with displacement kraft batch cooking, 1996 Pulping Conference, pp. 767-777.

Vroom K. E.: The "H" factor: A means of expressing cooking times and temperatures as a single variable, Pulp and Paper Magazine of Canada (1957) 58, 228.

9.3 APPLICATION OF SOFTSENSORS FOR COOKING

Kauko Leiviskä, Oulu University, Finland

9.3.1 BATCH COOKING

Kappa number cannot be measured on-line in batch digesters, but there are many approaches to model, or predict, it from the existing measurements, and to use the predicted Kappa to complement H-factor control, as mentioned before. Both mechanistic and data-based models are in use, but due to the complexity of analytical models, current industrial practice relies almost exclusively on simple empirical or semi-empirical models. Several intelligent methods (neural networks and fuzzy logic) together with advanced controls have also found applications in this arena. There are also applications of intelligent soft sensors, extended Kalman filters and fuzzy neural networks. Conventionally, these models utilize the on-line alkali measurements made at a particular time instant of the cook, but lately other composition measurements (solids, lignin) have also gained ground. It is essential to use the information from the beginning of the cook, because that leaves more time to make corrections as the cooking proceeds.

Leiviskä (2006) reports on the use of an Elman network as a soft sensor for the final Kappa number after batch digesting, using short sequences of the measured alkali content and the calculated H-factor from the very beginning of each cook. The sequences include 70-85 measurement values (corresponding to 30-40 minutes), where the H-factor is between 200 and 300 and the alkali content decreases fast. The selection of training data is crucial - there are several factors later during the cook that can affect the Kappa number. The training strategy was to train the network to correspond as well as possible to the training set. The final judgement required visual comparison, because e.g. correlation coefficients do not tell the whole truth. After testing several training algorithms and network configurations, Levenberg-Marquardt training showed the best results. Training continued until the correlation was over 0.8. The tested configurations differed in the number of neurons at the recurrent level. It seemed that the training performance improves when the number of neurons at the recurrent level increases. In practice, reasonably small networks worked well.

The final test of the network consisted of introducing 50 unseen cooks to the network and recording the network outputs in these cases. Even though the training performance differed, there was not much difference in actual testing. About 70-80 percent of the points


were inside ±1 Kappa number, which is a good result as such. Calculating the mean of all network outputs seemed to work better than calculating the mean of only the 15 last outputs. Also, the case where only the declining part of the alkali trend was used in training gave clearly worse performance than the original way of using the alkali trend from the very beginning of the cook.

The network worked well at low Kappa numbers, and there were so few observations at high Kappa levels that it is dangerous to draw any conclusions in this region. Also, the total number of cooks tested was so low that it is too early to decide on the applicability of the method for on-line use. On-line use will, of course, require more careful determination of the starting point of the trends, together with the optimal sequence length, and also a way to eliminate occasional outliers.
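To make the structure of such a soft sensor concrete, the sketch below implements the forward pass of an Elman (simple recurrent) network in plain NumPy. It is a minimal sketch of the network type only: the weights here are random placeholders, whereas in the reported work they were fitted with Levenberg-Marquardt training, and the input profiles are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Elman network: the hidden state is fed back through a context (recurrent) layer.
n_in, n_hidden = 2, 8                     # inputs: residual alkali, H-factor
W_in = rng.normal(0.0, 0.3, (n_hidden, n_in))
W_rec = rng.normal(0.0, 0.3, (n_hidden, n_hidden))
W_out = rng.normal(0.0, 0.3, (1, n_hidden))
b_h, b_out = np.zeros(n_hidden), 0.0

def predict_kappa(sequence):
    """sequence: (T, 2) array of (alkali, H-factor) samples from the cook start.
    Returns one Kappa estimate per step; averaging all of them was reported
    to work better than averaging only the last 15."""
    h = np.zeros(n_hidden)
    out = []
    for x in sequence:
        h = np.tanh(W_in @ x + W_rec @ h + b_h)    # Elman recurrence
        out.append((W_out @ h)[0] + b_out)
    return np.array(out)

# 75 samples over ~35 min: alkali decays fast, H-factor grows 200 -> 300
# (here normalized by 300 to keep the inputs on comparable scales)
t = np.linspace(0.0, 1.0, 75)
seq = np.column_stack([np.exp(-2.0 * t), (200.0 + 100.0 * t) / 300.0])
print("Kappa estimate (mean of outputs):", predict_kappa(seq).mean())
```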

Soft sensors have also been used in the quality prediction of the batch digester (Rao et al. 1993).

9.3.2 CONTINUOUS COOKING

Haataja et al. (1997) reported on a software sensor using a cooking liquor analyzer for Kappa number estimation in a continuous digester. By measuring the cooking liquor contents on-line, all variations inside the digester can be monitored. The most interesting and useful variables that can be analyzed from the cooking liquor are alkali, solids content and dissolved lignin. In this case these variables were measured on-line from the different liquor circulations of a continuous kraft pulp digester by the cooking liquor analyzer (CLA 2000).

Artificial neural networks (ANN) were used in the soft sensor to map the input-output relationships of the measurements. In this study, feedforward networks with backpropagation training were built and tested. The Kappa number measurements used as training outputs came from an on-line device located in the blow line. The tested networks were quite small in terms of the number of parameters: one or two hidden layers, with two to five neurons per hidden layer.

The estimators performed very well. It was encouraging that a good estimate of the final Kappa number was available already after the impregnation of the pulp.

Using wavelets for data pre-processing gave further benefits, because it resulted in more stable data (Murtovaara et al. 1998). This way the data corresponded better to the real measurements than when using a median filter. Several techniques for building the software sensor for Kappa number were also tested (Isokangas et al. 2001): partial least squares, neural networks, linguistic equations, fuzzy logic and ANFIS (adaptive neuro-fuzzy inference system). The best results were obtained with linguistic equations.
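As an illustration of this kind of pre-processing, the sketch below denoises a synthetic alkali signal with wavelet soft-thresholding and compares it to a median filter. It assumes the PyWavelets and SciPy packages and is not the filtering used in the cited studies; all numbers are invented.

```python
import numpy as np
import pywt                      # PyWavelets
from scipy.signal import medfilt

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 512)
clean = np.exp(-3.0 * t)                         # idealized alkali decay
alkali = clean + 0.03 * rng.standard_normal(t.size)

med = medfilt(alkali, kernel_size=9)             # median filter baseline

# Wavelet denoising: decompose, soft-threshold the detail coefficients, rebuild
coeffs = pywt.wavedec(alkali, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
thr = sigma * np.sqrt(2.0 * np.log(alkali.size)) # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
smooth = pywt.waverec(coeffs, "db4")[: alkali.size]

print("residual std, median :", np.std(med - clean))
print("residual std, wavelet:", np.std(smooth - clean))
```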

Another approach is described in (Dufour et al. 2005).


9.4 DECISION SUPPORT SYSTEM FOR TMP PRODUCTION

Petteri Pulkkinen and Risto Ritala, Institute of Measurement and Information Technology,

Tampere University of Technology

9.4.1 ABSTRACT

Dynamic optimization integrated into operator and engineer decision support has high potential in everyday use. As the production process and the structure of its material flows become increasingly complex, and as the scope of responsibilities of an operator or an engineer widens, tools to manage this complexity under dynamic conditions are needed.

9.4.2 INTRODUCTION

This article presents the design of a decision support system for thermo-mechanical pulp (TMP) production for papermaking. In general, the case concerns continuous decision support for running a plant of identical on/off processes under time-variant production costs, time-variant product demand and limited intermediate storage capacity.

The decision support system consists of dynamic simulation for predicting the future evolution of the system, dynamic optimization based on the simulations, and a proper software toolset to gain user acceptance.

The task is challenging in several ways. The optimization methods require a huge number of iterations, and the objective function is evaluated at every iteration step, which requires running a dynamic simulation. The challenge is to have fast simulations or, equivalently, simple enough models. The decision support environment must also support the user in defining which operational scenarios are acceptable, and allow the end user to improve upon the optimal scenario found by revising it.

9.4.3 DESCRIPTION OF THE CASE

In the production of TMP, one of the primary costs is the electricity consumption of the refiners. The market price of electricity fluctuates. As the energy usage of the refiner plant is huge, an optimal production schedule inevitably generates savings.

In order to minimize the operating costs, the TMP production schedule needs to be optimized around the market energy costs and constraints of the paper mill. This is not as simple as shutting down refiners while the electricity costs are high and running all of them while energy prices are low.


The support tool performs dynamic optimization over a rolling horizon and determines a reliable refiner schedule, e.g. for the next 50 hours of production at e.g. 60-minute decision intervals. The decisions are based on information obtained through measurements and predictions.

The goal of the optimization is - with the dynamic simulation model of the process - to determine which TMP production scenario and way of dividing TMP production between paper machines will have the lowest electricity costs for a given TMP demand and development of the free market electricity price. The schedule must satisfy the TMP demand at all times and may not exceed the intermediate storage volumes.

The area of simulation consists of two refiner plants, TMP 1 and TMP 2, and three paper machines, PM 4, PM 5 and PM 6 (Figure 1). Each refiner plant has five refiner lines of two refiners each, in a two-stage refining process called a tandem system. TMP 1 also has four reject refiners, extensive screening, and bleaching with hydrosulfite or hydrogen peroxide; this plant feeds two paper machines, PM 4 and PM 5. TMP 2 has five refiner lines, three reject refiners and screening, and feeds one paper machine, PM 6.

[Figure 1 block diagram: TMP 1 feeding Paper Machines 4 and 5, TMP 2 feeding Paper Machine 6; each paper machine with WetBroke and DryBroke tanks and a MixingTank.]

Figure 1. The area of simulation: two refiner plants and three paper machines

9.4.4 PHASES OF THE OPTIMIZATION

In our approach the optimization begins with the management of operational scenarios. Scenario management means different things to different people: economists use scenarios for long-range planning, management scientists use them for strategic decision making, and policy makers use them to weigh the consequences of their actions. Here, an operations scenario is a set of actions on the manipulatable variables of the process (process set points) [1]. The task is to search for the optimal scenario among all possible scenarios. The optimization problem to be solved is initially very high dimensional, but can be reduced to a


more manageable one by limiting the scenarios to consist of a small set of waveforms, such as steps, ramps, and first-order exponential dynamics.

[Figure 2 flowchart blocks: User Interface, Scenario Management, Simulation, Objective & Constraints, Optimizer, Data Structure.]

Figure 2. The optimization loop (grey arrows) and the interaction with the data structure (white arrows)

Figure 2 shows a flowchart of the optimization environment. The procedure is controlled by the user via the user interface. The set points of the initial scenario are given to the simulation model. The objective function and constraints are derived from the simulated data, as are other parameters needed by the optimizer. The optimizer makes a decision towards a better scenario and transmits a new set of parameters onwards. The loop continues until a solution that fulfils the criteria is found. On top of everything is the data structure, which collects, stores and delivers all the information needed.

A software environment that realizes the functionalities presented in Figure 2 has been developed. The present implementation of the toolset is within Matlab, which is easy to bring to mill environments either as a full system or embedded in other systems, e.g. the process analysis system KCL-WEDGE [2]. For more information about the generic methodologies for dynamic optimization and the toolset, see [3, 4].

9.4.5 THE SIMULATION MODELS

The simulation model is driven by the optimizer. It is controlled by the inputs, decision variables and some specific parameters. Inputs can be controllable or uncontrollable. The decision variables usually measure amounts of resources, such as the number of products to be manufactured, the amount of a chemical to be blended, and so on. Some elements in a system are dynamic in nature, in the sense that they move through the system, causing other entities to react to them in response to some signal. Other elements are static, in the sense that they wait for the dynamic entities or resources to act upon them and pull or push them through the system. The complexity, size and detail level of the model define which elements should drive the system.


In this case, multiple levels of simulation are used, to have fast simulation within the optimization and to verify the optimal outcome with a more detailed simulation model. Typical optimization methods require thousands, if not tens or hundreds of thousands, of simulation runs to find an optimal solution. Therefore, it is imperative that the simulations take very little computing time, in the order of tenths to thousandths of a second.

To speed up the simulation, the first-level simulations are developed on the basis of a simplified process. They only consider the mass balance of one component, TMP, and follow it mostly through its storage locations. Chemical pulp and its mass are accounted for in the wet broke tanks to include their impact on total volume and wet broke availability. The different areas in the refining plants are treated as storage areas characterized only by the total volume of the tanks and set point consistencies.

Simplifying the simulations leaves out important interactions, such as intermediate process waters, intermediate tank levels, etc. Because of the absence of these interactions, a second-level simulation is used after the optimization. This detailed process simulator assists the end user in assessing the detailed feasibility of the optimal operational scenario determined with the simplified process model.

The level 1 simulation is based on mass flows and storage only. The tanks after the refiners are not defined by their volume, but by the mass of pulp that they can hold at their set point consistencies. For example, a 2000 m3 tank at 4.5 % consistency has a storage capacity of 90 dry tons of pulp. Figure 3 presents a section of the schematic of the level 1 simulation with the defined variables.

Figure 3. A selection of the schematic of the level 1 simulation (variables: RefinersTMP2(t), FlowToPM6(t), TankPM6(t), BrokeTankPM6(t), BrokefeedPM6(t), GradePM6(t), DemandPM6(t)).

As an example, the level of TankPM6 can be calculated from the flows FlowToPM6(t), DemandPM6(t) and BrokeFeedPM6(t). When the production schedule is known and the recipe of each grade is related to that schedule, it is possible to calculate GradePM#(t) for each machine. From those values, and assuming a constant broke flow from the machine, DemandPM#(t) is easily calculated. By establishing these flows, it is possible to optimize the number of refiners and the other flows. Within MATLAB, these calculations are fast.
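A minimal sketch of this tank bookkeeping, with invented hourly flow profiles and the variable names of Figure 3, is shown below; in the real application the profiles come from the production schedule and the optimizer's decision variables.

```python
import numpy as np

hours = 50                                  # optimization horizon, h
cap_t = 2000 * 0.045                        # 2000 m3 at 4.5 % consistency = 90 odt

# Hypothetical hourly profiles (odt/h), invented for illustration.
FlowToPM6 = np.full(hours, 10.0)            # from TMP2 storage to the PM6 tank
BrokeFeedPM6 = np.full(hours, 1.5)
DemandPM6 = np.where(np.arange(hours) % 12 < 8, 12.0, 8.0)  # grade-dependent

level = np.empty(hours + 1)
level[0] = 45.0                             # initial tank content, odt
for t in range(hours):
    # Level 1 mass balance: one dry-mass update per decision interval
    level[t + 1] = level[t] + FlowToPM6[t] + BrokeFeedPM6[t] - DemandPM6[t]

ok = np.all((level >= 0.0) & (level <= cap_t))
print("feasible:", ok, "| min/max level:", level.min(), level.max())
```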

The level 1 model is only concerned with the flow of TMP, not the amount of chemical pulp used or the transfer of dilution water or filtrate. It is only used to determine an optimized production schedule within the defined constraints.


Some limits related to filtrate flows could be included in the constraints of the first-level simulation. If it is known, for example, that turning on a certain number of refiners within a short time period causes a shortage of dilution water, this can become a constraint of the system. For more complex interactions, a level 2 simulation is needed that takes into account water flows and intermediate tanks and their levels.

The level 2 model incorporates both pulp and liquid flows. It also contains all the intermediate storage tanks. The primary purpose of this model is to verify that the solution determined with the level 1 model and optimization is reasonable. The level 2 model can be used to ensure that there is no need for large volumes of extra fresh water and that there is no excessive overflowing of filtrate tanks.

9.4.6 DECISION MAKING AND OPTIMIZATION

Operational decision making can be understood as an optimization problem. In reality, there are almost always multiple objectives. The optimization viewpoint helps us analyze the ingredients of the decision making process and, when implementing decision support, ask the end user to provide a complete specification of the decision making within the application. It is important to note that these specifications are subjective: they are based on the strategic goals, economic goals, values and policies of the end user organization. Thus only the end user organization can specify them. Once the decision making process has been clearly specified, the decision support can be implemented efficiently.

The optimization time horizon has a strong effect on the optimal solution. Process operators, engineers, mill managers and corporate executives make decisions that are related to one another but have different horizons. In this case a practical time horizon is 50 hours and a practical decision interval is 1 hour. The primary instantaneous objective is the efficiency of the TMP production, and the intermediate objective is to find a feasible way of distributing the flows between the two TMP plants and the storage volumes. The freely adjustable variables in this case are the number of refiners running at a given time and the splitting of flows to the intermediate storage tanks. The volumes of the storages are the system state variables.

The role of the end point objective is to ensure that the state at the end of the optimization horizon is not a poor starting point for further operation. Here the end point objective defines the desirable end volumes of the storage tanks. Due to the long horizons we use, the selection of the end volumes is not crucial, but without an end point objective the optimization would drive the storages empty at the end, which is not desirable. Therefore the end points are set to match the initial conditions.

The hard constraints on the system are numerous. The maximum storage capacity for bleached and unbleached pulp prevents long periods of high production. The requirement to satisfy the TMP demand of the PMs at all times prevents long periods of low production. No more than nine refiners can run at the same time, due to contractual obligations on maximum electricity consumption. Pulp can be pumped from


TMP2 to TMP1, but not from TMP1 to TMP2. Therefore, the optimization must avoid scenarios that underutilize production in TMP2 in favor of production in TMP1.

There are costs and constraints associated with the startup and shutdown of refiner lines. However, these costs are rather subtle, as they involve e.g. wear and tear and the use of maintenance personnel. These actions require operator involvement and thus their number is constrained. Instead of including these effects explicitly in the optimization, we limit the number of refiner startups and shutdowns over the optimization horizon; this also aids the optimization. We allow only twelve state changes in each refiner schedule over 50 hours. A process state change would be, for example, changing from three refiners operating to only one refiner running in TMP1.


In principle, this optimization problem can be solved with deterministic methods, such as linear and nonlinear programming, or with stochastic methods. Based on the size of the search space and the type of the problem, we have chosen stochastic optimization methods, a genetic algorithm [6] and simulated annealing [7], to be implemented within the DOTS toolset. For the deterministic methods we mostly use the Tomlab environment [8] for advanced optimization.
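As a concrete illustration of the stochastic search, the sketch below applies simulated annealing to a toy version of the scheduling problem: one plant, one storage tank, hourly decisions over 50 hours, and an objective of electricity cost plus penalties for tank-limit violations, for more than twelve state changes, and for missing the end-point level. All numbers are invented; this is not the DOTS implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
H, MAX_REF, RATE = 50, 5, 20.0         # horizon (h), refiner lines, odt/h per line
P_MW = 10.0                            # electric power per running line, MW
price = 30 + 15 * np.sin(np.arange(H) * 2 * np.pi / 24)   # toy EUR/MWh profile
demand = np.full(H, 70.0)              # odt/h drawn by the paper machines

def cost(n):
    """Electricity cost of schedule n (lines running per hour) plus penalties."""
    level = 200.0 + np.cumsum(n * RATE - demand)      # storage tank balance, odt
    pen = 1e4 * np.count_nonzero((level < 0) | (level > 500))   # hard tank limits
    pen += 1e4 * max(0, np.count_nonzero(np.diff(n)) - 12)      # <= 12 changes
    pen += 1e3 * abs(level[-1] - 200.0) / 200.0       # end point near start state
    return float(np.sum(price * P_MW * n)) + pen

cur = np.full(H, 4); cur_c = cost(cur)                # initial scenario
best, best_c = cur.copy(), cur_c
T = 1000.0
for _ in range(20000):                                # simulated annealing loop
    cand = cur.copy()
    cand[rng.integers(H)] = rng.integers(0, MAX_REF + 1)   # perturb one hour
    cand_c = cost(cand)
    if cand_c < cur_c or rng.random() < np.exp((cur_c - cand_c) / T):
        cur, cur_c = cand, cand_c
        if cur_c < best_c:
            best, best_c = cur.copy(), cur_c
    T *= 0.9995                                       # geometric cooling

print("cost:", round(best_c), "state changes:", int(np.count_nonzero(np.diff(best))))
```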

The results of the optimization are displayed to the end user either as direct suggestions of the next action to be performed, or through more detailed graphical representations, such as Figure 4.


Figure 4. An optimization result for a production system with a 39-hour prediction horizon. Panels from top: production schedules for the PMs, electricity price prediction, electricity cost for optimal operation, optimal refiner schedules, tank levels, feed from TMP2 to TMP1, feed of the two PMs from TMP1.

9.4.7 CONCLUSIONS

We have introduced our vision of the decision support system. The case example was the production of TMP, but the methodology and the applications are valid for a broad range of processes.

With the presented methodology and tools we can manage the complex dynamics of wide process areas by optimizing future actions with respect to an objective derived from the economy of the production line, and by repeating this optimization with a rolling optimization horizon.

The tools presented can be applied in on-line mode to improve production efficiency and supply chain efficiency. They can also be used in off-line mode as a tool for setting up and developing operational policies, which is very useful in investment projects. Finally, the tools can be used to analyze the effects of production equipment and its dimensioning on operational efficiency, and thus to make more educated investment decisions.


We are convinced of the potential of the toolset and look forward to utilizing it in further projects.

9.4.8 REFERENCES

1. PULKKINEN P., RITALA R., TIENARI M., MOSHER A., “Methodology for dynamic optimisation based on simulation”, Simulation and Process Control for the Paper Industry, Munich 2004, PTS Manuscript PTS-MS 441.

2. http://www.kcl.fi/wedge

3. DHAK, J., DAHLQUIST E., HOLMSTRÖM K., RUIZ J., BELLE J., GOEDSCH F., “Developing a Generic Method for Paper Mill Optimization”, Control Systems Conference 2004, Quebec City.

4. DHAK, J., DAHLQUIST, E., HOLMSTRÖM, K, RUIZ, J.,”Generic Methods for Paper Mill Optimization”, Simulation and Process Control for the Paper Industry, Munich 2004, PTS Manuscript PTS-MS 441.

5. RITALA R., BELLE J., HOLMSTRÖM K., IHALAINEN H., RUIZ J., SUOJÄRVI M., TIENARI M., “Operations Decision Support based on Dynamic Simulation and Optimization”, Pulp and Paper Conference, Helsinki 2004.

6. GOLDBERG, D. E. Genetic Algorithms; Addison-Wesley, 1989.

7. OTTEN, R.H.J.M; VAN GINNEKEN, L. P. P. P. The Annealing Algorithm; Kluwer Academic Publishers, 1989.

8. HOLMSTRÖM K., EDVALL M., “The Tomlab Optimization Environment”, in Kallrath J. (ed) “Modelling Languages in Mathematical Optimization”, Chapter 19, Kluwer Academic publishers, 2004.

9.5 OPTIMISATION OF TMP PRODUCTION SCHEDULING

Mika Suojärvi, Savcor Oy and Matti Tienari, Accenture Oy, Finland

9.5.1 ABSTRACT

In 2005, a three-year optimisation research project was completed. The project studied how to connect dynamic optimisation tools with existing simulator software. During the project several cases were implemented for testing the usability of the tools developed during the


project. One of the cases concerned TMP (thermo-mechanical pulp) production scheduling. In that case the TMP demand of the mill's paper machines was estimated based on the paper machine production schedules and the grade-dependent TMP consumption for all paper grades. Based on the estimated TMP demands, the optimal production schedules for the TMP plants were calculated. In the calculation several constraints needed to be taken into account, e.g. limits for the level variation of the TMP intermediate towers. By using a well-defined optimisation problem it was easier to include all relevant information from several data sources and run the mill-wide process more efficiently.

The results from the TMP case were so promising that after the project the mill decided to extend the case to cover another mill. This extension made the application even more useful. There seems to be a great saving potential in TMP production scheduling, and it can be realized with a decision support tool combined with optimisation. The implementation of the optimisation tools on top of the Savcor-WEDGE process analysis system is described in this presentation.

9.5.2 INTRODUCTION

At the beginning of 2005, a three-year EU-funded project called DOTS ended. The abbreviation DOTS stands for "Dynamic Optimisation of Operational Tasks and Scenarios". The objectives of the project were to develop operations scenario management and dynamic optimisation tools to be used with process simulators, and to combine scenario management, dynamic optimisation and dynamic simulation into a process operator decision support system for greater efficiency.

Several cases were implemented during the project to test the usability of the tools developed. One of them was a TMP production scheduling case. The case consisted of two refiner plants, TMP1 and TMP2, and three paper machines, PM1, PM2 and PM3. The TMP1 plant feeds pulp to paper machines PM1 and PM2, while the TMP2 plant feeds pulp mainly to PM3 but also some pulp to the TMP1 screening section and from there to PM1 and PM2.


Figure 1. Process diagram of the project case.

The target of the optimisation was to produce the right amount of TMP to fulfil the paper machines' TMP demands and to keep the TMP towers between certain limits. Starting up and shutting down refiner lines is the main means of controlling the levels of the TMP towers. Unfortunately, right after a refiner line has started up, the quality of the TMP is not at its normal level; therefore starting up or shutting down refiner lines should be avoided. The TMP tower levels can also be affected by changing the TMP flow from the TMP1 plant between PM1 and PM2, and from the TMP2 plant to the TMP1 screening section. To make this complicated system easier to handle, a mill-wide TMP flow model was created and combined with an optimisation tool to provide a decision support tool for the operators.

The case mill was very satisfied with the results, and after the project they decided to extend the case to cover another mill, which made the application even more useful. The extended case is introduced in this presentation.

9.5.3 PROCESS DESCRIPTION

The extended optimisation application covers two paper mills. The mills share a joint electricity supply with a maximum usage limit, and therefore the mills need to be handled as a single unit. The application includes

− two TMP plants at mill A

− two TMP plants at mill B

− three paper machines using TMP at mill A


− one paper machine at mill A not using TMP but consuming electricity, and therefore included in the optimisation

− three paper machines using TMP at mill B

At mill A both refiner plants have five refiner lines. At mill B the first refiner plant also has five refiner lines and the second plant four refiner lines, so in total there are 19 refiner lines at the mills.

The refiner lines need to be scheduled to produce the right amount of TMP to fulfil the TMP demands at all times without letting the intermediate TMP towers overflow. Yet this is not as simple as shutting down refiners while the intermediate towers are high and running more of them while the towers are low. The mill must predict its electricity usage for the next couple of days, and there are penalties for under- and overconsumption compared to the predicted usage. Because the mill wants to avoid penalty costs, the number of refiner lines in use is more or less fixed for the near future.

The refiner line optimisation is therefore done only for the time period beyond a certain fixed time horizon. After the fixed time period, starting up and shutting down refiner lines is the main means of controlling the levels of the TMP towers. Because of the TMP quality issues, starting up or shutting down refiner lines should be avoided. The paper machines' TMP demands vary a lot based on the grade they are producing. Another problem is that the input information can change frequently. For example, a paper machine's production schedule can change every day based on new orders from the mill's customers or on production problems at the paper machine. Overall electricity consumption must be taken into account as well, because there is also a maximum usage limit that cannot be exceeded. In the electricity consumption calculation, the seventh paper machine, which does not use TMP, must also be included, because the electricity demand of a paper machine changes depending on whether it is running or not. As a consequence of all these factors, it is a complicated task to take into account all the relevant information coming from several sources when deciding on TMP production schedules.

After several discussions with the mill's personnel, the problem statement was defined as:

“Given initial TMP tower volumes and TMP flows to the six paper machines, find the optimal refiner schedules for the four TMP plants that minimize start-ups/shutdowns of refiner lines and keep the intermediate tower levels within certain limits, and predict the electricity consumption for the next two days and keep it below its limit.”

From this decision support tool the process operators get suggestions on when to start up and shut down refiners. A reasonable prediction of electricity usage is also given, which they can follow to avoid the penalties associated with electricity consumption forecasting.

9.5.4 APPLICATION


The application was implemented in the mill's readily available KCL-Wedge (from now on referred to as Wedge) process analysis system. The application was configured in the Matlab environment, and after configuration the optimisation was used as a background tool in Wedge.

9.5.4.1 TWO MILL MODEL

A process model is needed for the optimisation calculation. Typical optimisation methods require thousands, if not tens or hundreds of thousands, of simulation runs to find an optimal solution. Therefore, it is imperative that the simulations take very little computing time. To speed up the simulation, a model was developed that greatly simplifies the process. It considers only the mass balance of TMP, and mostly follows it through its storage locations. The different areas in the refining plants are treated as storage areas characterized only by the total volume of the towers and the set point consistencies. Simplifying the simulations to this degree leaves out important interactions, such as intermediate process waters, some intermediate tower levels, etc.

In the project case study, the process was simplified so that there was only one TMP tower at each paper machine. In the extended case the process model is more realistic, i.e. there are both intermediate towers at the TMP plants and TMP towers at the paper machines. The intermediate tower levels are allowed to vary more than the TMP towers at the paper machines. Therefore the flows from the TMP plants' towers to the paper machines' towers were also included in the optimisation.

Figure 2. Process diagram of the extended case.


9.5.4.2 PAPER MACHINES’ PRODUCTION SCHEDULES AND TMP DEMANDS

The paper machines' TMP demand is not constant. It can vary a lot with the paper grade, and the biggest change in TMP demand of course occurs when a paper machine starts up or shuts down. The optimisation calculation needs to know the TMP demands over the optimisation horizon. The first step is to calculate from history how much TMP each paper grade has consumed per production hour. In the optimisation it is assumed that the consumption for the same grade will be the same in the future.

The Wedge system has a connection to the production schedule system, so the current production schedules for each paper machine are always available. From the production schedules and the assumed TMP consumptions, the paper machines' TMP demands are easily calculated. By establishing these flows, it is possible to optimise the number of refiners running and the other flows. The optimisation takes place within the constraints set by the maximum and minimum tower levels, the maximum electricity consumption, the maximum number of refiners running and the maximum flows between towers.
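A minimal sketch of this two-step demand estimation, with invented grade names and consumption figures, is shown below: grade-specific TMP consumption per production hour is first estimated from history, and the current production schedule is then mapped onto an hourly demand profile.

```python
import numpy as np

# Step 1: grade-specific TMP use per production hour, from history (invented).
history = {                       # grade -> list of (hours run, odt TMP used)
    "news_45g": [(120, 1380.0), (96, 1104.0)],
    "sc_52g": [(72, 640.0), (48, 430.0)],
}
rate = {g: sum(m for _, m in runs) / sum(h for h, _ in runs)
        for g, runs in history.items()}          # odt TMP per production hour

# Step 2: map the current production schedule onto an hourly demand profile.
schedule = [("news_45g", 30), ("sc_52g", 12), ("news_45g", 8)]  # (grade, hours)
demand = np.concatenate([np.full(h, rate[g]) for g, h in schedule])

print({g: round(r, 2) for g, r in rate.items()})
print("horizon:", demand.size, "h, first hours:", demand[:3])
```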

9.5.4.3 IMPLEMENTATION

To make the optimisation system as user-friendly as possible, the case was implemented in the mill's readily available Wedge system. Wedge is connected on-line to the mill's process databases as well as to the production planning system, and it can call Matlab functions, such as the optimisation algorithms in this case.

Based on grade-specific average TMP consumption values and the paper machines' current production plans, the total TMP demand for the near future is calculated. Because Wedge is connected to the mill's databases, it can automatically obtain the current status of the process, such as the current TMP tower levels and the number of TMP refiners running at each TMP plant. These numbers are set as initial values for the simulation and optimisation. The user can run the calculation at any time with one mouse click on the process diagram. To make use even easier, the application was configured to run automatically once per hour. With this configuration the operator can easily and quickly check the latest suggested control actions for the TMP plants without waiting for the optimisation calculation to finish.

9.5.4.4 RESULTS

The results are presented as tables and trends. The main results are the optimised refiner schedule tables telling when to start or stop refiner lines. To increase user confidence in the results, several trends are also presented, e.g. the forecasted TMP demands of the paper machines and the forecasted levels of the TMP towers. This decision support system


helps the operators take all relevant information into account when deciding when to start up or shut down refiner lines.

Figure 3 shows an example of the optimisation results. The first and second graphs show the TMP demands of PM1 and PM2. The third graph presents the suggested production schedule for the TMP1 plant. To increase user confidence in the results, the predicted TMP tower levels are also presented (graphs 4-6). At the end of the time period there is a shutdown at PM1, and its TMP tower is therefore run down to a low level (graph 5). During the PM1 shutdown the suggested schedule alternates between 2 and 3 refiner lines. This is because PM2's TMP tower level is kept near its target value, around 50 %, while the intermediate tower has no target value and can hence vary freely between its minimum and maximum values.

Figure 3. Example of future time series for one TMP plant and corresponding paper machines.

The application is in use at the mill and works well. With the tool, the mill-wide TMP balances can be handled more efficiently and electricity consumption penalties can be avoided.


9.5.4.5 CONCLUSIONS

With the decision support tool it is possible to run the mill-wide system from TMP production to the paper machines more efficiently and keep the intermediate towers within certain limits. It would be possible to modify the application so that it minimises the electricity cost of TMP production. This kind of application requires that the electricity price fluctuates and that the near-future fluctuations are known or can be adequately predicted; these requirements are met at least in Finland. The optimiser could then propose higher TMP production when the electricity price is low and less when the price is high, while still keeping the tower levels within their limits and avoiding start-ups and shutdowns. The question is how to weight the different factors to be optimised.

In the production of TMP, one of the primary costs is the electricity consumption of the refiners. There is therefore a great electricity cost saving potential in TMP production from connecting the electricity cost to the operator decision support tools.


CHAPTER 10 APPLICATIONS IN PAPER MILLS (INCL DEINKING)

10.1 DEVELOPING A GENERIC METHOD FOR PAPER MILL OPTIMIZATION

Janice Dhak, Erik Dahlquist, Kenneth Holmström, Mälardalen University; Jean Ruiz, Centre Technique du Papier, Domaine Universitaire; Juergen Belle, Frank Goedsche, Papiertechnische Stiftung

10.1.1 ABSTRACT

A generic method for formulating pulp and paper optimization problems is presented. Two ongoing projects within the framework of the DOTS project illustrate the method: optimization of sizing quality at a specialty paper mill, and optimization of the water and broke systems at a coated paper mill. Explicit and implicit formulations are compared, and different uses of external simulators in conjunction with optimization are discussed. The problems are solved using MATLAB/TOMLAB. Some results from different optimization algorithms are also presented.

10.1.2 INTRODUCTION

All pulp and paper mills strive to consistently produce high-quality products and to schedule production runs and maintenance shutdowns so as to maximize mill output. All mills must also be ecologically and economically responsible when using resources and utilities such as fiber, water, chemicals and power, and when handling waste products such as effluents and off-grade production. Clearly, a single mill can have a diverse range of optimization problems. The detail level of the optimization problems can also vary significantly, ranging from a complete mill to a single piece of equipment. There are a number of references regarding optimization of production schedules and managing mill inventories [1], [2]. There are also a number of references regarding optimization of energy and water systems [3], [4].

Optimization tools for production planning and utilities consumption are most often used by mill management to get an overview of the whole plant. However, there is also a need for optimization tools that let operators and process engineers formulate and solve "area-specific" problems, in addition to the more general mill overview problems.


Two very different optimization projects are described in this paper. The first deals with optimization of a mill's white water and broke handling systems. The second involves optimization of paper sizing quality. Both projects are part of an EU-funded project, the "DOTS project". The goal of this project is to develop a toolset for operator decision support based on dynamic optimization. The toolset developed in the project will be tested and implemented at four partner mills; however, it is generic in nature so it can easily be applied to other types of problems and other mills. This paper deals with the part of the toolset concerning a generic method for formulating and solving optimization problems. Other papers describe other components of the toolset [5], [6], [7].

10.1.3 A GENERIC METHOD

The main tasks in formulating the optimization problem are defining the objective function and constraints and formulating them according to the chosen optimization tool.

TOMLAB [8] has been selected as one of the tools for solving the optimization problems in the DOTS project. TOMLAB is a commercially available optimization environment developed primarily for use in MATLAB. The environment contains over 70 different algorithms for various types of optimization problems, as well as a large number of C/C++ and FORTRAN solvers implemented via MEX interfaces. The guidelines for formulating the problems are independent of the optimization package or algorithm used to solve the problem. However, one advantage of using TOMLAB is that it provides a standardized platform for formulating problems, the Prob structure (refer to the TOMLAB User's Guide [9]). Also, once a problem is formulated using the Prob structure, it is possible to test and compare different types of optimization algorithms without reformulating the problem, in order to determine the most appropriate algorithm for a specific case. There is also a standardized Result structure. The Prob and Result structures provide convenient interfaces for incorporating the optimization platform into the complete toolset, which includes user interfaces, scenario management tools, simulators, and a data structure that collects, stores and delivers all pertinent information.

Following are some guidelines for formulating optimization problems:

System boundaries

Review the process and define the physical boundary limits of the system to be optimized. If relevant, prepare a process flow diagram showing all parameters within the boundary limits. Also define the time frame for the optimization, bearing in mind that a long time frame may take an unrealistically long time to solve and may produce results with little relevance to the actual process, because many unpredictable factors, such as paper breaks, can occur during an extended time period. Also identify which parameters can be modified by the optimization (optimization variables), and which parameters cannot (inputs to the optimization problem).

Objective function


Based on mill operating experience, describe the objective of the optimization. For example:

- minimize water consumption;

- minimize effluent generation;

- minimize energy consumption;

- minimize consumption of a certain chemical;

- minimize variation of a process parameter;

- minimize variation of a product quality parameter

- maximize output from a certain piece of equipment, or a particular area of the mill; etc

Each term in the objective function should also be assigned a priority or weight to indicate its relative importance. For example, weight factors can be the unit costs of chemicals, or energy prices during peak and off-peak hours. In some cases, however, the choice of weight factor is not so obvious; for example, what is the cost associated with minimizing the variation in a tank level? Optimizing the problem with different weight factors is one way the user can test different "what if" scenarios.

Constraints

Determine all of the constraints on the problem. Some examples of constraints are:

- bounds on flows due to valve sizes and/or pump capacities;

- bounds on tank levels due to tank volume, and/or overflow position;

- bounds on product quality parameters due to grade specifications or customer requirements;

- limited availability of resources such as water or energy;

- limited machine availability due to maintenance requirements

- mass balances around tanks or other pieces of equipment

- some variables can only be integers

When defining constraints it is important to review the process and understand why certain constraints exist. For example, if a mill is accustomed to operating a tank at a certain level based on trial and error, this does not necessarily mean that it is the most appropriate level. If the level is allowed to be modified by the optimization, perhaps a better overall operating strategy will be found.

Constraints can be formulated implicitly or explicitly. With implicit formulation, a simulator generates the parameters required to formulate the constraints. For example,


when dealing with quality parameters that cannot be measured directly, a simulation model is used to predict the parameter. With explicit formulation, simulation is accomplished by converting recursive equations into equality constraints. Recursive equations are equations that relate the present to the past. An example of an explicit constraint is the mass balance around a tank:

level(t) = level(t-1) + Σ flow_in(t) - Σ flow_out(t)

In general it is preferable to deal with linear constraints instead of nonlinear constraints, because linear problems often require less time to solve. In some cases there are methods for converting nonlinear constraints into linear constraints [10].
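As an illustration, the recursive tank balance above can be written as one linear equality row per time step. The NumPy sketch below builds such a constraint matrix for a small horizon; the dimensions and the variable ordering are illustrative.

```python
import numpy as np

T = 4                       # time steps in the horizon (small, for display)
# Variable ordering: x = [level(1..T), flow_in(1..T), flow_out(1..T)]
A = np.zeros((T, 3 * T))
b = np.zeros(T)
level0 = 50.0               # known initial tank level

for t in range(T):
    A[t, t] = 1.0           # + level(t)
    if t > 0:
        A[t, t - 1] = -1.0  # - level(t-1)
    A[t, T + t] = -1.0      # - flow_in(t)
    A[t, 2 * T + t] = 1.0   # + flow_out(t)
b[0] = level0               # row 0: level(1) - flow_in(1) + flow_out(1) = level(0)

# Each row enforces level(t) = level(t-1) + flow_in(t) - flow_out(t) as A x = b.
print(A)
print(b)
```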

When some of the optimization variables are constrained to integer values, the method for formulating the problem is similar, and there are specific solvers for handling these mixed integer programming (MIP) problems. A problem dealing with on/off decisions for process equipment (a scheduling problem) is an example of an integer programming problem.

Problem Formulation

In this project the problems are formulated in MATLAB. The following is a summary of the formulation:

For the optimization variables:

- Let the vector x(t) = optimization variables; the number of variables =n

- Define the upper and lower bounds on the optimization variables; n x 1 vectors x_L and x_U

- For MIP problems, define a 0-1 vector, IntVars, where nonzero elements indicate the integer variables

For the linear constraints:

- Define a matrix for linear constraints; m x n matrix A, where m=number of linear constraints

- Define the upper and lower bounds on the constraints; m x 1 vectors b_L and b_U

For nonlinear constraints

- Write a function to compute nonlinear constraints

- Define bounds on the nonlinear constraints; mN x 1 vectors c_L and c_U, where mN= number of nonlinear constraints

- Write a function to compute the gradient of the constraints (constraint Jacobian)

- Define an mN x n 0-1 matrix, ConsPattern, where 0 indicates zero values in the constraint Jacobian, and ones indicate values that might be nonzero


- Some solvers also require the Hessian of the constraints

The objective function may be expressed in different ways, depending on the type of problem. For linear programming problems, define an n x 1 vector of objective function coefficients. For nonlinear problems, write functions to compute the objective function and its gradient (the Jacobian of the objective function). Also define a matrix, JacPattern (similar to ConsPattern). Some solvers may also require the Hessian of the objective function. For the least squares solvers in TOMLAB, the user defines the residual vector and the Jacobian matrix of derivatives, and the objective function, gradient and Hessian are computed from these by the optimization algorithms.

Depending on the optimization tool used and the type of problem, other parameters, such as initial values for the optimization variables, the maximum number of iterations, feasibility tolerances for the constraints and/or the objective function, weighting factors, etc., can also be defined in the formulation.
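The same ingredients (variables x with bounds x_L and x_U, a constraint matrix A with bounds b_L and b_U) map directly onto other optimization packages as well. As an illustration only, not the TOMLAB Prob structure itself, the sketch below sets up a small linear problem of this form with SciPy, stacking the two-sided constraints into one-sided ones; the numbers are invented.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0, 1.0])            # objective coefficients (n = 3)
x_L, x_U = np.zeros(3), np.full(3, 10.0) # bounds on the optimization variables

A = np.array([[1.0, 1.0, 0.0],           # m = 2 linear constraints
              [0.0, 1.0, 1.0]])
b_L = np.array([2.0, 1.0])
b_U = np.array([8.0, 6.0])

# linprog takes one-sided A_ub x <= b_ub, so stack both sides of b_L <= A x <= b_U
res = linprog(c,
              A_ub=np.vstack([A, -A]),
              b_ub=np.concatenate([b_U, -b_L]),
              bounds=list(zip(x_L, x_U)),
              method="highs")
print("x* =", res.x, " objective =", res.fun)
```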

10.1.4 USE OF AN EXTERNAL SIMULATOR

An important aspect of this project is the use of external simulators in conjunction with optimization. Simulators can be used in various ways:

- to provide inputs to the optimization to formulate the objective function and constraints (ex. explicit formulation of constraints).

- to calculate the objective function and constraints (ex. implicit formulation of constraints)

- to provide initial conditions for the optimization

- to validate optimization results. The boundaries of the optimization problem and the simulation may be the same, or a mill may have a detailed simulation program for the entire process or a large part of it, while the optimization problem focuses on a specific area. In this case the simulator can be used to verify how the optimized solution will affect parts of the process outside the scope of the optimization, before implementing changes in the real process.

10.1.5 OPTIMIZATION OF DREWSEN SIZING QUALITY

Drewsen Spezialpapiere, in Lachendorf, Germany, produces over 500 grades of wood-free specialty paper. The case study from this mill is to optimize the sizing quality on one of the mill's three paper machines.

Different types of models have been proposed to predict the sizing quality parameters W, M and A60, and they have resulted in two possible variants of the optimization problem at


Drewsen. Initially, a partial least squares (PLS) model based on 20 different process parameters (the total input matrix) was formulated. After examining the PLS model it was determined that the quality parameters could also be predicted using simpler linear regression models. A linear regression model based on the total input matrix was formulated, and then another linear regression model with only 7 input parameters (the reduced input matrix). Table 1 summarizes the parameters in the total input matrix and the reduced input matrix. The parameters with the suffix "_x" are the optimization variables in the two variants of the problem. The quality parameters W(t), M(t) and A60(t) are the other optimization variables in both variants.

TABLE 1: DREWSEN OPTIMIZATION INPUTS AND OUTPUTS

Parameter                      Total Input Matrix   Reduced Input Matrix
Resin size                     Int1_x(t)            Int1_x(t)
PAC                            Int2_x(t)            Int2_x(t)
Wet end starch                 Int3_x(t)            Int3(t)
Retention aid                  Int4_x(t)            Int4(t)
Microparticles                 Int5_x(t)            Int5(t)
Starch solution SP             Surf1_x(t)           Surf1_x(t)
Surface sizing polymer SP      Surf2_x(t)           Surf2_x(t)
Specific refining power        u1(t)                -
Chalk                          u2(t)                -
Broke input                    u3(t)                -
Finished stock                 u4(t)                -
Consistency finished stock     u5(t)                -
Dilution water size press      u6(t)                -
Moisture before size press     u7(t)                -
Grammage                       u8(t)                -
Ash                            u9(t)                -
Consistency broke              u10(t)               -
Steam consumption              u11(t)               -
Dilution water headbox         u12(t)               -
Web speed                      u13(t)               -

Objective Function

The objectives of the optimization are:

- minimize variation of the sizing parameters W(t), M(t), and A60(t) from reference values REF_W, REF_M, and REF_A60

- minimize the quantity of sizing chemicals required

- minimize the ratio of (Int1 + Int2) / (Surf1 + Surf2)

- minimize variations in application rates of the sizing chemicals

F_obj = Σ_t w1·[W(t) − REF_W]² + Σ_t w2·[M(t) − REF_M]² + Σ_t w3·[A60(t) − REF_A60]²
      + α1·Σ_t Σ_j Int_j(t)² + α2·Σ_t Σ_l Surf_l(t)²
      + α3·Σ_t [Int1(t) + Int2(t)] / [Surf1(t) + Surf2(t)]
      + α4·Σ_t Σ_j [Int_j(t+1) − Int_j(t)]² + α5·Σ_t Σ_l [Surf_l(t+1) − Surf_l(t)]²

(with sums over t = 1, ..., hp, j = 1, ..., J and l = 1, ..., L)

where

hp = prediction horizon, with t time steps
α, w = weight factors
J = 2 for the reduced input matrix, 5 for the total input matrix
L = 2
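As a reading aid, the sketch below evaluates an objective with the term structure given above. The reference values and weights are hypothetical, and the exact weighting scheme used in the Drewsen study may differ from this reconstruction.

import numpy as np

def drewsen_objective(W, M, A60, Int, Surf,
                      w=(1.0, 1.0, 1.0), alpha=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """W, M, A60: quality trajectories of length hp; Int: (hp, J) internal
    sizing rates; Surf: (hp, L) surface sizing rates."""
    REF_W, REF_M, REF_A60 = 80.0, 25.0, 60.0   # hypothetical references
    f = (w[0] * np.sum((W - REF_W) ** 2)
         + w[1] * np.sum((M - REF_M) ** 2)
         + w[2] * np.sum((A60 - REF_A60) ** 2))
    # quantities of sizing chemicals
    f += alpha[0] * np.sum(Int ** 2) + alpha[1] * np.sum(Surf ** 2)
    # ratio of internal to surface sizing
    f += alpha[2] * np.sum((Int[:, 0] + Int[:, 1]) / (Surf[:, 0] + Surf[:, 1]))
    # variations in application rates between consecutive time steps
    f += alpha[3] * np.sum(np.diff(Int, axis=0) ** 2)
    f += alpha[4] * np.sum(np.diff(Surf, axis=0) ** 2)
    return f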

Constraints

1. The ratio of surface sizing to internal sizing must be within certain limits defined by Rmin and Rmax:

R_min ≤ [Surf1(t) + Surf2(t)] / [Int1(t) + Int2(t)] ≤ R_max

2. The rate of change in the application rate of the sizing chemicals must be less than a certain value, Δmax. The following equation is for resin size, but the equations for the other sizing chemicals are similar:

|Int1(t+1) − Int1(t)| ≤ ΔInt1_max

3. The sizing quality parameters W(t), M(t), and A60(t) must be within certain limits. For the PLS model, the constraints that predict the quality parameters are formulated implicitly. There is a function that calls the PLS models for each of the quality parameters, and there is also a function that computes or estimates the gradients of the constraints.

For the linear regression models, the constraints that predict the quality parameters are formulated explicitly, using the regression coefficients, b_i, from the model. For the total input matrix the number of regression coefficients is i = 20, and for the reduced input matrix i = 7. Using the quality parameter W as an example, the constraint defining the quality parameter for the total input matrix is:

W(t) = Σ_{i=1..5} b_i·Int_i(t) + Σ_{i=6..7} b_i·Surf_{i−5}(t) + Σ_{i=8..20} b_i·u_{i−7}(t)
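A sketch of how such an explicit linear constraint can be assembled from the regression coefficients is given below; the coefficient values and the stacking order of the inputs are illustrative assumptions.

import numpy as np

b = np.linspace(0.1, 2.0, 20)   # placeholder regression coefficients b_1..b_20

def predicted_W(Int_t, Surf_t, u_t):
    """Int_t: the 5 internal sizing inputs, Surf_t: the 2 surface sizing
    inputs, u_t: the 13 further process inputs, all at time step t."""
    z = np.concatenate([Int_t, Surf_t, u_t])   # the 20 entries of the total input matrix
    return float(b @ z)

# posed to the solver as the equality W(t) - b.z(t) = 0 for each t in the
# prediction horizon, which keeps the constraint set linear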

Results

Appendix 1 shows representative optimization results using the three different models to predict the product quality parameters. The solver used in each case was NLSSOL. The figures also show how the simulated quality parameters from each model compare to actual process data.

Table 2 compares the three problem formulations. The optimization using the nonlinear PLS model takes longer to solve than the linear models, and the reduced-input linear model, with fewer input parameters and fewer optimization variables, solves faster than the total-input linear model. Another advantage of the simpler linear model with fewer inputs is that it may prove more robust over time than the more complicated models.

TABLE 2: ALTERNATE FORMULATIONS OF THE DREWSEN PROBLEM

                                   PLS model   Linear regression   Linear regression
                                               (total input)       (reduced input)
Number of variables                      300             300                 210
Number of residual elements              533             533                 356
Number of linear constraints             263             353                 266
Number of nonlinear constraints           90               0                   0
CPU time, seconds                     124.14            3.52              0.8590
f_k                                  5840000         5831415              127810
x_k                             Fig. 3, 4, 5    Fig. 6, 7, 8          Fig. 9, 10

Table 3 compares the performance of some different optimization solvers in TOMLAB for the linear regression model with the total input matrix. All three solvers find the optimum solution in a reasonable amount of time, with NLSSOL taking the least.

TABLE 3: COMPARISON OF OPTIMIZATION ALGORITHMS

Solver    Optimum objective function   Solution time, seconds
NLSSOL                       5831415                     3.52
SNOPT                        5831415                    21.28
MINOS                        5831415                   855.59


10.1.6 OPTIMIZATION OF LANCEY WATER BROKE SYSTEM

Figures 1 and 2 show the water and broke handling systems at Papeteries de Lancey, a coated paper mill near Grenoble, France. Variables beginning with "Q" or "U" represent flows; variables beginning with "C" represent consistencies; SW, HW, GW, and EUC represent softwood, hardwood, groundwood and eucalypt compositions respectively. Tank levels in the water system are designated T1, T2, ..., T4, and tank levels in the broke system are designated X1, X2, ..., X7. Variables that are optimized have the suffix "_x"; other variables are inputs to the optimization. The optimization variables include those that appear in both the constraints and the objective function, and those that appear only in the constraints and are manipulated as a result of the optimization.

Figure 1: Lancey water system (tanks T4, T9, T10 and T11; fresh water inflows Qe92_x, Qe102_x, Qe112_x; effluent flows Qs9o_x, Qs10o_x, Qs11o_x; disc filter; flow Ws97_x to the broke system for repulping)



Figure 2: Lancey broke system (tanks X1-X7 with consistencies Cs1_x-Cs7_x; inputs of mechanical pulp, chemical pulp, coated broke, uncoated broke and re-pulped off-spec paper; FiberMaster composition measurements SW, EUC, HW and GW on the stock feeding tank X7 and the paper machine)

Centre Technique du Papier is developing a simulator of the Lancey mill (PS200). The simulator provides inputs to explicitly define the objective function and constraints for the optimization problem; later these inputs could come from the process DCS. The simulator also generates initial conditions for the optimization variables and will be used to verify optimization results before they are implemented in the process. Figure 2 shows that measurements from the FiberMaster, an online fiber analyzer developed by STFI, are also inputs to the optimization. The FiberMaster has been installed in the mill, and validation testing is underway. Until now, simulated values for the stock compositions have been used as input to the optimization.

Figures 1 and 2 show the water and broke systems as two separate systems. The connection between the two systems is the flow of water from tank T9, Ws97, to a repulper for off-spec production; Qe32 is the flow of stock from the repulper to tank X3. In the initial work, Qe32 and Ws97 have been treated as continuous variables, and the problem has been formulated and solved as a nonlinear least squares problem. This work has been useful in checking the simulator, which is being developed concurrently.



The repulping of off-spec paper is actually a batch process, so the optimized solution should provide a schedule for repulping. Formulation and solution of this problem, which is a nonlinear least squares problem with integer variables, is ongoing. The objective function and constraints described below are relevant both to the case where Ws97 and Qe32 are treated as continuous variables and to the case where repulping is treated as a batch operation; in the latter case, however, there will be additional constraints involving integer variables to describe the batch operation of the repulper.

Objective Function

The objectives for the optimization at Lancey are:

- minimize total fresh water consumption,

- minimize the amount of effluent requiring treatment,

- stabilize the process in terms of variations in transfer flow rates and tank levels

- minimize variations in the composition of stock in the tank feeding the paper machine (Tank X7)

- minimize the difference between the actual quantity of off-spec paper repulped and a required quantity, Qe32_des

- minimize variations in the ratio of fresh pulp to broke pulp, RFP/BP, feeding the paper machine

F_obj = α1·Σ_k [Qe92(k)² + Qe102(k)² + Qe112(k)²]                                  (fresh water)
      + α2·Σ_k [Qs4o(k)² + Qs9o(k)² + Qs10o(k)² + Qs11o(k)²]                       (effluent)
      + α3·Σ_k [β1·ΔUe43(k)² + β2·ΔUe101(k)² + β3·ΔUe111(k)²]                      (transfer flows)
      + α4·Σ_k [χ4·ΔT4(k)² + χ9·ΔT9(k)² + χ10·ΔT10(k)² + χ11·ΔT11(k)²]             (water tank levels)
      + ω1·Σ_k [(Sw_s7(k) − Sw_s7(k−1))² + (Euc_s7(k) − Euc_s7(k−1))²
                + (Hw_s7(k) − Hw_s7(k−1))² + (Gw_s7(k) − Gw_s7(k−1))²]             (X7 composition)
      + ω2·Σ_k [Qe32(k) − Qe32_des]²                                               (repulped off-spec)
      + ω3·Σ_k [R_FP/BP(k) − R_FP/BP(k−1)]²                                        (fresh/broke ratio)
      + δ·Σ_j Σ_k ΔX_j(k)²                                                         (broke tank levels)

with sums over k = 1, ..., hp and, in the last term, over the broke tanks j = 1, ..., 7; α, β, χ, ω and δ are weight factors and Δ denotes the change between consecutive time steps.

Constraints

All of the constraints for this problem are expressed explicitly. The following is a description of the constraints; a code sketch of the recursive tank constraints follows the list:


1. Constraints which recursively define the mass balance around each tank. For example, for Tank T4:

T4(k) = T4(k−1) + Qe41(k) + Qe42(k) + Ue43(k) − Qs41(k) − Qs4o(k)

2. Constraints which only allow manipulated controls in the water system to be changed during a control horizon of duration hc, rather than over the whole prediction horizon, hp.

3. Constraints on flows and tank levels. These constraints are actually upper and lower bounds on the optimization variables.

4. Constraints that define the consistency of stock in each tank in the broke system, and the composition (%SW, %HW, %GW, and %EUC) of stock that is fed to the paper machine. These are nonlinear constraints. For example, for Tank X4:

Cs4(k) = Cs4(k−1)·(1 − Ue41(k)/X4(k)) + Ce41(k)·Ue41(k)/X4(k)

5. Constraints regarding the ratios of chemical pulp to mechanical pulp, and broke pulp to fresh pulp.
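The two recursions above translate directly into code. The sketch below implements them for a single time step; the flow and level values are assumed given, and the function names are invented for the example.

def level_T4(T4_prev, Qe41, Qe42, Ue43, Qs41, Qs4o):
    # linear mass balance: new level = old level + inflows - outflows
    return T4_prev + Qe41 + Qe42 + Ue43 - Qs41 - Qs4o

def consistency_X4(Cs4_prev, Ce41, Ue41, X4):
    # perfect-mixing update: incoming stock (flow Ue41, consistency Ce41)
    # is blended into the tank contents at level X4
    return Cs4_prev * (1.0 - Ue41 / X4) + Ce41 * Ue41 / X4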

10.1.7 CONCLUSIONS

For the Drewsen project it has been determined that explicit formulation results in a faster optimal solution than implicit formulation. Initial results from both projects indicate that the optimizations can be performed within a reasonable time, and TOMLAB has proven to be a useful tool for comparing different types of optimization algorithms. Also, concurrent development of the simulator and the optimizer can be useful in verifying each of the components, as in the case of Lancey.

10.1.8 REFERENCES

1. PERSSON, U., LEDUNG, L., LINDBERG, T., PETTERSSON, J., SAHLINE, P., LINDBERG, A., "On-line Optimization of Pulp & Paper Production", TAPPI 2003 Fall Technical Conference, Chicago, Illinois, USA, October 2003.

2. SARIMVEIS, H., ANGELOU, A., RESTSINA, T., BAFAS, G., "Maximization of profit through optimal selection of inventory levels and production rates in pulp and paper mills", TAPPI Journal, Vol. 2, No. 7, pp. 13-18.

3. SANTO, A., DOURADO, A., "Constrained GA applied to Production and Energy Management of a Pulp and Paper Mill", Proceedings of the Symposium on Applied Computing, San Antonio, Texas, USA, 1999.

4. GONG, M., "Optimization of industrial energy systems by incorporating feedback loops into the MIND method", Energy, Vol. 28, Issue 15, Dec. 2003, pp. 1655-1669.

5. PULKKINEN, P., RITALA, R., TIENARI, M., MOSHER, A., "Designing Operator Decision Support System for TMP Production Based on Dynamic Simulation and Optimization", Control Systems 2004 Conference Proceedings, Quebec City, Canada, June 2004.

6. FARD, B.G., ALA-KURIKKA, J., KVARNSTRÖM, A., CRNKOVIC, I., "Enhancing Distributed Simulation Systems by Utilizing Component-based Technologies", Proceedings from the 44th Scandinavian Conference on Simulation and Modeling, September 2003, pp. 33-40.

7. DHAK, J., DAHLQUIST, E., HOLMSTRÖM, K., RUIZ, J., "Generic Methods for Paper Mill Optimization", Simulation and Process Control for the Paper Industry, Munich 2004, PTS Manuscript PTS-MS 441.

8. HOLMSTRÖM, K., EDVALL, M., "Modeling Languages in Mathematical Optimization, Chapter 19: The Tomlab Optimization Environment", Kluwer Academic Publishers, Boston/Dordrecht/London, 2004.

9. Users Guide for TOMLAB 4.2.

10. WILLIAMS, H.P., "Model Building in Mathematical Programming", 4th ed., John Wiley & Sons, West Sussex, 1999.

APPENDIX 1

Fig. 3. PLS model – total input matrix


Fig. 4. PLS model – reduced input matrix

Fig. 5. PLS model – total input matrix


Fig. 6. Linear regression model – total input matrix

Fig. 8. Linear regression model – total input matrix

Fig. 9. Linear regression model – total input matrix


Fig. 10. Linear regression model – reduced input matrix

Fig. 11. Linear regression model – reduced input matrix

10.2 ON-LINE STRENGTH PREDICTION AND OPTIMIZATION FOR MULTI-PLY KRAFT LINER

Jens Pettersson, Erik Dahlquist, Jonas Warnqvist, Mattias Carlsson, ABB

Presented at Control Systems February 12, 2002

10.2.1 INTRODUCTION

With the introduction of advanced computer-based control systems and reliable on-line sensors, much of the work of running a modern paper machine has been automated. For example, in the wet end of the paper machine the amount of pulp used is measured by consistency and flow meters and kept at its set point by local regulatory controllers controlling the pulp pump speed. On a modern paper machine, several hundred of these local control loops control properties like flows, consistencies, tank levels, pH, pressures etc.

However, even though much of the control of a modern paper machine is automated, paper quality is still controlled manually by the machine operators. The main reason for this situation is that paper quality is often characterized by specific testing methods, most of them destructive in nature, usually performed in a laboratory with specific testing equipment. Attempts have been made to transfer these measurement methods to on-line sensors, and although some methods can be transferred, most laboratory testing methods still have no on-line equivalent. As a result, the operators control the paper quality based on laboratory measurements, which are usually done on samples taken from the end of each jumbo roll. Besides being rather infrequent (approximately one sample per hour), the results from the laboratory are usually delayed 20-30 minutes due to climate conditioning. This means that if the quality suddenly goes out of specification, the operators might not know until a lot of paper has been produced. This is of course very unsatisfactory, and a lot of research and development has been done trying to solve this problem. Most solutions are based on the use of a mathematical model that describes the relation between signals measured in the paper manufacturing process, i.e. on the paper machine, in the stock preparation and in the pulp mill, and the corresponding laboratory measurement. Because most of these signals are measured continuously, such a model makes it possible to calculate the current quality on-line, so that the operator can continuously monitor it. It also opens up the possibility to automate the control and optimization of paper quality and production.

This paper is devoted to the development of such a model for the on-line prediction and optimization of the quality of two-ply liner. The quality variables of interest are primarily Ring Crush (RCT) and Burst Strength, but we have also studied Tensile Stiffness (TSI) and Tensile Strength (TStr). The last three variables are measured by an Auto-Line 300 Profiler [8], while RCT is measured by a stand-alone Crush Tester. All variables are measured as profiles in the cross-machine direction and then averaged. Also, TSI and TStr are measured with a machine direction (MD) and a cross-machine direction (CD) component. We thus want to predict a total of 6 quality properties.

10.2.2 PROCESS DESCRIPTION

The studied paper machine is a two-ply fourdrinier kraft liner machine, using unbleached kraft pulp and recycled pulp. The recycled pulp enters the mixing tanks almost directly from its storage tower, while the unbleached pulp is divided between two stock preparation lines, one for each ply. The bottom wire line includes two refiners in series, while the top wire line has a single refiner. The intention is to use only bottom wire pulp in the bottom ply, while the top ply contains a mixture of recycled pulp, broke pulp and unbleached pulp from both refiner lines. It can be mentioned that the recycled pulp mainly consists of container board recycle.

Besides ordinary sensors like flow and consistency meters, the stock preparation area is equipped with some more advanced sensors: the ABB Smart Pulp Platform (SPP), a Near-Infrared (NIR) sensor and a Pulp Expert lab robot. It has been shown in [5] that the combination of measurements from the SPP (Kappa number and fiber size distribution) together with NIR and other process variables can predict paper strength. On the paper machine, there is a press section consisting of two shoe presses, followed by the drying section and a soft-nip hot-roll calender. Before the paper is rolled up on the jumbo reel, a Quality Control System (QCS) measures and controls properties like basis weight, moisture and caliper. It also measures fiber orientation and angle on both sides of the paper web [7]. When a jumbo reel has been produced and removed from the paper machine, a strip of the outermost layer of the reel is cut out and taken to the laboratory for quality analysis. The paper machine produces a number of different grades, most of them with different basis weights, but some grades differ in the pulp blending and machine settings as well.

10.2.3 MODELLING AND IDENTIFICATION

To be able to predict the quality of the paper, a suitable model relating measurable signals from the process to the quality variables must be found. This can be done by assuming that the process can be described by the following equation:

ŷ(k) = f(θ, u(k))    (1)

where u(k) is a vector of the measured process signals, θ is a vector of parameters and ŷ(k) is a vector of predicted paper quality variables.

If the function f(·,·) is known, then the parameter vector θ can be identified by least squares.
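A minimal sketch of such a least squares identification with SciPy is given below; the model f and the data are placeholders, not the semi-physical liner model of this paper.

import numpy as np
from scipy.optimize import least_squares

def f(theta, u):
    # placeholder for the quality model y_hat = f(theta, u(k))
    return theta[0] * u + theta[1] * np.sqrt(np.abs(u))

def residuals(theta, U, Y):
    # one residual block per (process sample, lab measurement) pair
    return np.concatenate([f(theta, u) - y for u, y in zip(U, Y)])

U = [np.array([1.0, 2.0]), np.array([2.0, 3.0])]   # logged process signals
Y = [np.array([1.1, 2.2]), np.array([2.1, 3.3])]   # lab quality measurements

theta_hat = least_squares(residuals, np.ones(2), args=(U, Y)).x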

For the choice of f(·,·), several approaches for predicting paper quality have been tried over the past years. In [13] multivariate data analysis is applied for on-line prediction of quality variables in newsprint production. Neural networks have been applied by e.g. [14] and [12] for the prediction and control of fluting and liner production, respectively. An adaptive ARMAX predictor of bending stiffness for paper board manufacturing is described in [6]. However, we have in this work chosen to follow the track of [1] and [11], in which semi-physical models are derived and used for the prediction of density, tensile and bending stiffness for multi-ply paper board. The reason for this was that the generic results for multi-ply paper board, combined with information on liner manufacturing from [3] and [4], turned out at an early stage to be a fruitful combination.

The process signals chosen as inputs to the model are shown in Table 1.


Signal                          Unit
Pulp flows                      ton/h
Refiner loads                   kWh/ton
SPP fiber size distribution     -
Sizing                          ton/h
Starch flow                     ton/h
Ply mass flows                  ton/h
Line load shoe press 1 & 2      kPa
Line load calender              kPa
QCS caliper                     µm
QCS basis weight                g/m2
QCS moisture                    %
QCS machine speed               m/min
QCS fiber ratio top ply         -
QCS fiber ratio bottom ply      -

Table 1: Process signals used as inputs to the model

Without going into details, the prediction model for the paper quality properties can be described in the following steps (a schematic sketch follows the list):

1. Basic properties of the pulps, such as density, tensile stiffness, compression and tensile strength, are calculated as functions of refiner loads and other available measurements.

2. The pulp flows and mixing, including the broke handling, are calculated, resulting in pulp fractions of the different pulps in the two plies.

3. Ply properties are calculated based on pulp properties, pulp fractions and machine settings.

4. The mechanical properties of the plies are divided into an MD and a CD component based on the fiber orientation measurement.

5. The final quality variables are calculated from ply properties and ply weight fractions.
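The sketch below mirrors this five-step structure in code. Every helper function is a placeholder with made-up coefficients; only the chaining of the steps follows the description above.

def pulp_properties(refiner_load):
    # step 1: basic pulp properties as a function of refiner load etc.
    return {"tensile": 50.0 + 0.1 * refiner_load}

def pulp_fractions(flows):
    # step 2: pulp mixing (incl. broke) -> fractions per pulp type
    total = sum(flows.values())
    return {k: v / total for k, v in flows.items()}

def ply_property(props, fractions):
    # step 3: blend rule giving a ply property from pulp properties
    return sum(fractions.values()) * props["tensile"]

def split_md_cd(value, anisotropy):
    # step 4: split into MD and CD using the fiber orientation measurement
    return {"MD": value * anisotropy, "CD": value / anisotropy}

def predict_quality(refiner_load, flows, anisotropy, ply_weights):
    props = pulp_properties(refiner_load)
    fracs = pulp_fractions(flows)
    ply = split_md_cd(ply_property(props, fracs), anisotropy)
    # step 5: combine the plies with their weight fractions
    return {d: sum(w * ply[d] for w in ply_weights) for d in ("MD", "CD")}

print(predict_quality(120.0, {"kraft": 0.7, "recycled": 0.3}, 1.6, [0.6, 0.4]))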

The model contains a number of parameters, some of which reflect properties of the pulps and plies while others represent the influence of machine settings. Although it is possible to get a reasonable correlation between predicted and measured quality variables by using values from the literature and previous work, a much better result can be obtained by fitting the parameters using the least squares method. For this, process data and corresponding laboratory measurements were collected during October-December 2001. The process data was sampled with a 1 minute sampling period, while the laboratory measurements were stored at the time instant when they were collected. Samples without laboratory measurements were removed, together with samples where one or more of the measured signals were outside pre-specified bounds. This resulted in a data set with laboratory measurements and corresponding process data from 1780 jumbo rolls. For the identification, the laboratory measurements were normalized with the basis weight of the paper. The reason for this is to attenuate the dominating influence of basis weight on the quality variables, which has a tendency to reduce the identifiability of the other parameters.

           Ident. Data   Valid. Data
RCT            4.1           4.7
Burst          5.5           7.6
TStr CD        4.7           7.1
TStr MD        6.8          10
TSI CD         4.6           6.8
TSI MD         4.1           6.8

Table 2: Prediction error [%] for the identification and validation sets, respectively.

To verify the accuracy of the identified parameters, a validation data set was created from the time period December 2001 - February 2002. Cleaning the data in the same way as for the identification data set, the validation data set includes 707 laboratory measurements. The results of the prediction for both the identification and the validation data set can be found in Table 2.¹ As can be seen, the prediction error is slightly larger for the validation data set, mainly because of systematic biases in the prediction errors.

10.2.4 ON-LINE PREDICTION

With the model structure and parameters identified from process data, it would be straightforward to implement the model as it is for on-line prediction. However, just running the model on-line would typically result in a prediction that captures the large variations in quality but has a low-frequency bias during in-grade production (cf. the validation data set). The reason for this is of course that papermaking is a complex process, and even though we have a large and reasonable model with a lot of inputs, there are always things that are immeasurable or even unknown. To deal with this, some kind of adaptation of the model to the changing process behavior is necessary.

The method chosen for this case is based on recursive batch identification [9]:

θ̂(k) = arg min_θ (1/N) Σ_{i=1..N} [y(k−i) − ŷ(k−i)]ᵀ Q [y(k−i) − ŷ(k−i)]    (2)

where N can be seen as the memory of the identification and Q is a matrix reflecting the scale of the signals.

¹ It should be mentioned that the sensor for measuring Burst Strength was broken during a part of both the identification and validation data sets, which means that the statistics for the Burst prediction are based on a smaller number of samples.

Figure 1: Prediction ('-') and measurement ('+') of Burst (Top) and Burst Index (Bottom).

This gives the following on-line predictor:

ŷ(k+1) = f(θ̂(k), u(k+1))    (3)

The choice of N in Equation 2 is crucial for the performance of the on-line predictor. If N is chosen too large, the adaptation of the predictor becomes too slow to follow changes in the process; if it is chosen too small, the least-squares minimization of Equation 2 becomes ill-conditioned, with a subsequent loss of identifiability of θ.

To overcome this, the following was done (see the sketch after the list):

• Select N such that it uses only the last 3-5 laboratory measurements.

• Select only a subset of the original θ for the recursive identification.
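In the sketch below, after each new laboratory sample only two gain-like parameters are re-fitted over a short moving window (N = 3), while the remaining parameters keep their offline-identified values. The model and the data are placeholders, and the Q-weighting of Equation 2 is omitted for brevity.

import numpy as np
from scipy.optimize import least_squares

def f(gains, theta_fixed, u):
    # hypothetical model with separate multiplicative MD and CD gains
    return gains * (theta_fixed @ u)

def reidentify(gains0, theta_fixed, window):
    def resid(g):
        return np.concatenate([f(g, theta_fixed, u) - y for u, y in window])
    return least_squares(resid, gains0).x

theta_fixed = np.ones((2, 4))    # parameters kept from the offline fit
gains = np.ones(2)
history = []                     # (u, y_lab) pairs, newest last
for u, y in [(np.ones(4), np.array([4.2, 3.9]))] * 5:   # fake lab samples
    history.append((u, y))
    gains = reidentify(gains, theta_fixed, history[-3:])   # N = 3 window
    y_pred = f(gains, theta_fixed, u)    # Equation 3: predict with theta_hat(k)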

In Table 3 the results from a simulation with N = 3 and recursive identification of two parameters, reflecting the gain of the process in the MD and CD direction respectively, are shown. As can be seen, the bias in the prediction is removed and the total absolute error is reduced (cf. Table 2).

           Mean   Mean Abs.
RCT         0.0      3.1
Burst       0.1      4.0
TStr CD     0.4      4.0
TStr MD     0.4      4.0
TSI CD      1.3      4.7
TSI MD      0.2      4.7

Table 3: Mean of prediction error and absolute prediction error from the simulation of the recursive identification.

In Figure 2 the prediction is shown as a time series for 350 consecutive lab measurements. Notice how well the model captures the grade changes. It is also interesting to note that although Ring Crush and Burst correlate, there is almost no correlation between their indices. This means that, besides basis weight, Ring Crush and Burst react very differently to the machine settings. In fact, Ring Crush is related to the CD component of the tensile strength, while Burst relates mainly to the MD component.

On-line optimization

Besides the use of the model as an on-line predictor of paper quality, the intention is to use it as an aid for the operators to actively improve the quality. As a first approach, methods for on-line optimization of quality were developed in [2], in which the problem of finding "good" machine settings was formulated as a constrained optimization problem:

min_{u(k)} c(u(k))    (4)

s.t.

f(θ(k), u(k)) = y_opt(k)    (5)

g(u(k)) = 0    (6)

u_lo(k) ≤ u(k) ≤ u_hi(k)    (7)

where c(·) is a cost function (typically c(u(k)) = c(k)u(k)), y_opt(k) is the quality target at time k, g(·) represents the process constraints (mass and component balances), and u_lo(k), u_hi(k) are the lower and upper bounds for the inputs, respectively.

In [2] the optimization problem is solved using Matlab and the Optimization Toolbox [10].
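A SciPy analogue of Equations 4-7 is sketched below (the original work used Matlab and the Optimization Toolbox); the quality model, cost vector, balance constraint and bounds are all illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

c = np.array([1.0, 3.0, 0.5])     # cost per unit of each input
y_opt = np.array([6.0])           # quality target at time k (made up)

def f(u):
    # placeholder quality model f(theta, u)
    return np.array([u @ np.array([5.0, 7.0, 2.0])])

def g(u):
    # placeholder process constraint (mass balance)
    return np.array([u.sum() - 1.0])

res = minimize(lambda u: float(c @ u), np.full(3, 0.3),
               constraints=[{"type": "eq", "fun": lambda u: f(u) - y_opt},
                            {"type": "eq", "fun": g}],
               bounds=[(0.0, 1.0)] * 3, method="SLSQP")
print(res.x, res.fun)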

One of the situations where this functionality is very useful is when the operator is about to perform a grade change. The operator can then select suitable initial values and bounds for u(k), typically from some grade library, select the target quality and launch an optimization, which will find the cheapest combination of inputs that meets the target quality.


Figure 2: Prediction ('-') and measurement ('+') of Ring Crush (Top) and Ring Crush Index (Bottom). Notice the obvious outlier at tambour nr 45.

In Figure 3 below we have another example of RCT predicted versus measured values, from another set of tambours.

Figure 3: Ring Crush Test (RCT) with measured values (stars) vs. predicted values (solid line).

10.3 ON-LINE MONITORING OF THE PAPERMACHINE PERFORMANCE

Arjo Sinon, Sappi Fine Paper Europe, Netherlands; D.M.R. Lo Cascio, TNO, Netherlands

10.3.1 BACKGROUND

Poor performance of the paper machine always results in extra operating costs or lost profits, which makes the outcome of the Press and Dryer Performance Monitor (PDM) a valuable tool for the operator of the paper machine or for the technological staff.


Poor performance of the press section means that the final press dryness is lower than optimal, meaning that more water has to be evaporated by the dryer section. This results in extra (avoidable) steam usage.

Whenever the dryer section is performing inadequately, the actual drying speed is lower than optimal, which means that the machine speed could be increased, resulting in higher output of the machine. In case no need exists for increased output of the machine, improving the dryer performance allows lower steam pressures while maintaining the current speed. Again this results in steam savings, mainly because of reduced blow-through steam usage.

10.3.2 INTRODUCTION

The TNO Press and Dryer Performance Monitor (PDM) is an on-line software application for paper and board mills, integrated into the existing mill information system (e.g. PI, PIMS). The PDM is based upon a mechanistic model of the dewatering process from the press- and dryer section of a paper machine. This model calculates the dryness of the paper web as it travels all the way through the paper machine, based on actual process data like speed, grammage, steam pressures, and so on. The calculated dryness of the paper web is continuously compared to the best-practice value for the current grade and running conditions. This way the performance of the press- and dryer section is determined and presented.
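Schematically, the performance measure can be thought of as the ratio of the model-calculated dryness to a grade-specific best-practice value, as in the sketch below; the model stub and the best-practice table are hypothetical and not the actual PDM implementation.

best_practice_dryness = {"grade_A": 0.52, "grade_B": 0.49}   # hypothetical table

def dryness_model(process_data):
    # stand-in for the mechanistic press model using speed, temperature, ...
    return 0.50 - 1e-5 * process_data["speed"] + 1e-4 * process_data["temp"]

def press_performance(process_data, grade):
    calculated = dryness_model(process_data)
    return calculated / best_practice_dryness[grade]   # 1.0 = best practice

perf = press_performance({"speed": 1200.0, "temp": 45.0}, "grade_A")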

Poor performance of the paper machine always results in extra operating costs or lost profits. Poor press performance leads to an increased dryer load resulting in extra (avoidable) steam usage and poor dryer performance leads to reduced machine speed. Because all problems connected to press- and dryer performance will be made visible by the PDM, it is a valuable tool for the operator or for the technological staff.

Results you may expect from the PDM include:

• Improved machine efficiency

• Stable sheet moisture control

• Reduced sheet breaks

• Reduced dryer picking

• Extended life of press clothing

• Wider range of operational information


During the trials with the PDM we identified three sources of potential erroneous behaviour of applications using on-line data:

• Modifications of the process

• Malfunctioning sensors (e.g. calibration errors, defects, drift)

• Noise and measurement errors

These points, which seem obvious at first sight, can make life very difficult, especially because the quality of data available in paper mills nowadays leaves plenty of room for improvement. However, if good results can already be obtained based on the current state of the on-line data, imagine what could be achieved if we could fully rely on the data being correct all the time.

10.3.3 ON-LINE MONITORING OF THE PAPERMACHINE PERFORMANCE

The TNO Press and Dryer Performance Monitor (PDM) is an on-line software application for paper and board mills, integrated into the existing mill information system (e.g. PI, PIMS). The PDM is based upon mechanistic models of the dewatering process from the press- and dryer section of a paper machine [1, 2, 3, and 4]. These models have been validated extensively [5] in close cooperation with the Dutch paper and board industry.

The current version of the PDM [6, 7, and 8] determines the actual performance of the press- and dryer section of the paper machine, based on real-time data from the running process.

Definition of the Press Performance

The performance of the press section is presented as the actual final press dryness compared to the best-practice value for the current grade and running conditions. Together with the net water removal of all individual press felts, this results in valuable real-time information about the performance of the presses, including the clothing. The press dryness is calculated using the mechanistic press model, which uses real-time process data like speed, grammage, temperature and so on.

Figure 1: Example of the press performance as determined with the PDM for the production of one grade during a total of 1000 hours. The effect of felt changes, indicated by a vertical red line, can be clearly seen as a stepwise improvement of the performance after each change.

Definition of the Dryer Performance

The performance of the dryer section is presented as the actual drying speed for the individual dryer banks or steam groups compared to the best-practice value for the current grade and running conditions. The actual drying speed is calculated with the mechanistic dryer model, which uses real-time process data like speed, final moisture content, grammage and so on, to calculate the dryness of the paper web as it travels through the dryer section, all the way from the press to the reel.

Figure 2: Detail of one possible way to visualise the dryer performance. This one is like a snapshot of the current production situation. Visualised is the dryer section (top) with the thickness profile of temperature and moisture (middle), followed by the average dryness and temperature of the sheet (bottom) from press to reel (left to right).

Application of the PDM

Because the PDM is integrated into the existing mill information system [11], determination and visualisation of the performance of the press and dryer sections happen in real time. This makes it possible to act on or react to changes in the performance. Without this information the operator or technologist has very limited indications of what causes poor machine operation or where the origins of performance problems are located.

Example 1:

All dryer capacity problems are connected to poor performance of either the press section or the dryer section, or originate from the stock preparation or wet end of the machine. The PDM presents the actual performance of the press and dryer sections. This either tells you that the origin of the dryer capacity problems you are facing is in the press or dryer section, for instance a bad pick-up felt or flooded cylinders in the second pre-dryer, or it tells you that the origin is not in the press or dryer section, strongly reducing the amount of searching necessary to pinpoint the origin.

Example 2:

Sheet breaks mostly occur at positions without web support, especially at low dryness. Such a position is often an open draw between two presses. Low dryness of the web in the open draw seriously increases the risk of sheet breaks. The PDM continuously presents the operator with the dryness of the sheet all the way through the press section. This information can be an early warning of upcoming sheet breaks, even when the final press dryness is normal. A dryness sensor after the press section would not have given the operator this early warning.

Results from the PDM

The PDM has been tested in a real-life production situation [9, 10]. The following list gives an overview of some of the results achieved with it:

• Improved machine efficiency

• Stable sheet moisture control

• Reduced sheet breaks

• Reduced dryer picking


• Extended life of press clothing

• Wider range of operational information

Pitfalls in using on-line data

The quality of the results from any application using on-line data from the running process is only as good as the inputs used. During the trials with the PDM we identified three possible sources of erroneous behaviour of applications using on-line data.

Possible sources of errors

At first sight the following sources seem very obvious. However, they can make life very difficult, and no fully suitable solutions exist today.

• Modifications of the process:

Applications using models of the process or part of the process need to be sure that the actual process is the same as the one that once was modelled. Changes in process conditions will give no problems because adequate tracking mechanisms exist, but real modifications or structural changes of the process must be known to the application. It is still very common in the paper or board industry to modify parts of the process without satisfactory documentation. This can easily lead to situations where on-line applications try to track processes they are not designed for. This results in unpredictable outcomes of such applications, the least severe just being “wrong”.

• Malfunctioning sensors:

Detecting malfunctioning sensors sounds rather easy, but once we start defining malfunctioning it becomes clear that a whole world of interesting challenges lies beneath this. It goes without further explanation that using drifted sensor readings for model calculations will produce false results.

It is not uncommon in the paper and board industry that parts of the process are operated based on such sensor readings. This is possible because the operators interpret such readings and compensate for them. Of course this is intolerable when the same sensor readings are used in automated model calculations.

• Noise and measurement errors:


The effect of noise on the information in a signal is most likely known. In most cases some form of filtering will recover the original signal, but filtering can also introduce many unwanted effects. It gets even worse when this is combined with measurement errors. It is not always possible to measure exactly what is needed, so something else that represents the required quantity is measured instead; this we can identify as measurement errors. They may result in (transient or dynamic) behaviour of the measured signal that does not belong to the original quantity. Of course the application can compensate for this, but it must be known first.

It is very difficult to get to know all these measurement details, especially because most of these details are so self-evident to mill staff.

These points show that the quality of on-line data available in paper mills nowadays leaves plenty of room for improvement. It would be interesting to see how this is in other industries. On the other hand, we would like to stress once more that good results can already be obtained despite the current state of the on-line data. Imagine what could be achieved if we could fully rely on the data being correct all the time.
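As an illustration of how such input screening might look, the sketch below combines simple range checks with a rolling-mean drift check before data are passed to the model; the tags, bounds and thresholds are invented for the example and are not part of the PDM.

import numpy as np

BOUNDS = {"speed": (100.0, 2000.0), "steam_pressure": (0.0, 10.0)}

def validate(tag, values, drift_tol=0.05):
    lo, hi = BOUNDS[tag]
    values = np.asarray(values)
    in_range = (values >= lo) & (values <= hi)    # spikes, gross sensor faults
    # crude drift check: compare the newest samples to the oldest ones
    drift = abs(values[-10:].mean() - values[:10].mean()) / abs(values.mean())
    return bool(in_range.all()) and drift < drift_tol

ok = validate("speed", 1200.0 + np.random.randn(100))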

10.3.4 CONCLUSIONS

The TNO Press and Dryer Performance Monitor (PDM) is an on-line software application for paper and board mills, integrated into the existing mill information system. The PDM determines the actual performance of the press and dryer sections using mechanistic models of the dewatering process, based on real-time process data, even if the quality of this data leaves room for improvement.

The outcome of the PDM consists of the actual final press dryness compared to the best-practice value and the actual drying speed compared to the best-practice value. These quantities give valuable information to the operator and technological staff, enabling them to realise cost-savings in practice.

Three main sources of possible erroneous behaviour of any application using on-line data have been identified. From these sources it becomes clear that the quality of on-line data in the paper and board industry leaves room for improvements, presenting many technical challenges to solve.

Even with the current state of the on-line data it is possible to achieve good results with applications using mechanistic models. We can only dream about the possibilities of this kind of application when we can fully rely on process data being correct all the time.


10.3.5 REFERENCES

[1] DRYING TECHNOLOGY 1995; Vol. 13; No. 4 - Mechanistic and lump approach of internal transport phenomena during drying of a paper sheet; W.J. Coumans, W.M.A. Kruf

[2] TAPPI JOURNAL May 2000; Vol. 83; No. 5 - Single Sided Steam Impingement Drying Of Paper, A Modelling Study; M. Riepen, H. Kiiskinen, R. Talja and O. Timofeev.

[3] TAPPI JOURNAL October 2000; Vol. 83; No. 10 - An Inside View on Impulse Drying Phenomena by Modelling; M. Riepen

[4] Conference paper "Paper Machine Technology", Lanaken, 7-8 February 2001 - Modelling the influence of the press felt on the moisture distribution in the paper web; D. Lo Cascio

[5] Design and optimisation of the press and dryer section of Kappa Graphic Board with the TNO Dewatering model as a useful tool; Haanappel, Uil, Lo Cascio, Riepen

[6] TNO TPD 2002 – Technology development of the Online Papermachine Performance Monitor; A.M.J. Sinon, M. Riepen

[7] TNO TPD Report 2003 – Press- and Dryer Monitor: Specifications and Implementation working plan; A.M.J. Sinon

[8] Conference paper "Seminar on Adaptive Techniques in the Paper and Board industry", Eefde (NL), 17 May 2001 - Online Papermachine Performance Monitor; A.M.J. Sinon

[9] Conference paper "Intelligent Measurement and Control of Papermaking", Vlodrop (NL), 5 February 2002 - Mill demonstration of the Press and Dryer Monitor; M. Riepen & A.M.J. Sinon

[10] TNO TPD Report 2004 – TNO Press and Dryer Performance Monitor: Applications & Revenues; A.M.J. Sinon

[11] OSI Documentation – OPC Interface to the PI System, version 2.1.35


10.4 IMPROVED PAPER MACHINE PERFORMANCE

Do you want to know the following?

• Where to put the effort to increase the performance of your press section or dryer section

• How to monitor the performance of individual dewatering units online

• How to improve product quality parameters

• How to reduce the number of web breaks

• What set points will be required to run a new product on your machine

• How effective a rebuild will really be, beyond the machine builder's guarantee

Then simulation can help you to get the right answer.

Expertise on press section and dryer section analysis has been bundled into physical-mathematical models. These models have been validated in co-operation with an industrial User Group of ten participating paper mills and have been applied in several optimisation studies. Combined with statistics and sensor technology, online versions of our models can now be used for process monitoring and even web break analysis.

Dryer section of folding board machine

Temperature and Moisture profiles in MD and z-direction


Web Break Predictor at work in Liner board mill

Risk indicator warns for the onset of “technological web breaks”

[Plot: break risk indicator over 0-1600 min. Legend: '+' = indication correct and on time; '!' = indication correct but late; unmarked = indication incorrect]

Third press of Liner board machine

Case studies: some examples from the past

Optimisation of Paper machine dryer section

Advice: Improved hood ventilation system and adjusted cylinder steam pressures

Result: 4% production increase, 10% energy reduction

Analysis of steam box performance for Paper machine press section

Advice: Steam box position prior to first press

Result: 1.5 % dryness increase after 3rd press

Rebuild of Board machine dryer section


Advice: Adjust dryer cylinder positions and steam pressures

Result: 16 % production increase

Analysis of press section performance Liner board machine

Advice: Best option for improvement: double felting of last press nip

Result: 2 % dryness increase after 3rd press

Optimisation of press section of Heavy board machine

Advice: Replacement of press fabrics by press wires

Result: Reduction of energy consumption 212 MWh/year with equal production capacity.

Feasibility study of steam impingement pre-dryer section for Newsprint machine

Advice: Replacement of 8 cylinder pairs by 10 m pre-dryer

Result: Production increase 30 % up to 2000 m/min

10.5 PAPER MILL APPLICATIONS

Kari Edelmann and Sakari Kaijaluoto, VTT; Marja Nappa, KCL

10.5.1 KNOWLEDGE ON DETRIMENTAL PHENOMENA ORIGINATING FROM STOCK

Laboratory analyses have shown that the deposits found in various parts of the process consist mostly of substances such as wood pitch, slime or bacteria, fillers, inorganic salts and paper chemicals. Inherently, these substances are not sticky, but they may be turned into a sticky form.

For example, wood pitch appears in process water as small droplets that are quite stable due to steric stabilisation by glucomannan. Steric stabilisation is known to be affected by pH, multivalent ions and temperature. Similarly, additives and binders used in paper products may be turned into stickies in fibre recovery processes. Today, chemical additives such as fixatives are used to bind these substances to the paper. Internal process water purification methods are not widely used in the paper industry due to their relatively high investment cost.


10.5.2 TMP

A research group at Åbo Akademi led by Prof. Bjarne Holmbom has studied water chemistry related to mechanical pulping since the early 1990s. Several publications and doctoral theses have been published. One of the postgraduates, Anna Sundberg, has in her doctoral thesis made a very thorough review of research concerning the chemical composition of TMP pulps, the effects of DisCo (dissolved and colloidal) substances in papermaking, and the control of detrimental substances.¹ In the following, the main items of the review are presented.

10.5.2.1 FIBRES

The TMP process results in considerable fibre damage. The middle lamella and the primary wall are partly removed from most of the fibres, revealing smaller segments of the outer secondary wall. The secondary wall can also be loosened or ruptured into ribbon-like pieces, which remain attached to the fibre. The fibres are therefore not uniform but have areas with a high concentration of lignin (middle lamella) and other areas with a high concentration of cellulose (outer secondary wall). The TMP suspension also contains fines of different origin, i.e. ray cells, fines from the middle lamella and fines from the primary wall.

The most abundant hemicelluloses in softwood are galactoglucomannans. In addition, softwoods also contain arabinoglucuronoxylans, arabinogalactans and pectins. Wood resin can be found inside unbroken parenchyma cells or smeared in patches onto tracheid and parenchyma cell surfaces.

The content of anionic groups is important for paper strength in papermaking. The amount of anionic groups also affects fines retention, sizing and the adsorption of wet-end additives. Anionic groups in, or on the surface of, fibres can be considered beneficial for the papermaking process: they can improve swelling and may improve the retention of fillers by interaction with retention aids. Most anionic groups in wood or mechanical pulp fibres are carboxyl groups of uronic acids, which are units mainly in xylans and pectins. Furthermore, wood resin components, i.e. fatty and resin acids, also contain carboxyl groups. Carboxyl groups in uronic acids and in fatty and resin acids are mainly dissociated at pH 5. The amount of anionic groups in unbleached spruce wood is about 70-100 meq/kg. The amount of carboxyl groups in uronic acids corresponds to about 80-90 meq/kg and in fatty and resin acids to about 9-15 meq/kg.

¹ Anna Sundberg: Wood resin and polysaccharides in mechanical pulps. Chemical analysis, interactions and effects in papermaking. Doctoral thesis, Åbo, Finland, 1999.


10.5.2.2 DISSOLVED AND COLLOIDAL SUBSTANCES

Up to 5 % of the wood substance is released from the fibres to the aqueous phase in groundwood production. The main constituents of the organic compounds are hemicelluloses, pectins, dispersed wood resin and lignin material. Smaller amounts of lignans, acetic acid, formic acid and inorganic constituents can also be found. The release is affected by pH and temperature at different points of the process.

In softwood pulps, galactoglucomannans are the most abundant hemicelluloses, followed by arabinogalactans and pectins. The wood resin released from TMP is dispersed as colloidal droplets with an anionic charge. The colloidal resin droplets released from unbleached TMP are sterically stabilised and can only partly be destabilised by the addition of electrolytes. Microbiological activity during wood storage can change the composition of the wood resin, which, in turn, can affect its stability and deposition tendency. The lignin material and lignans do not interact very strongly with paper chemicals, nor do they participate in deposit formation. However, they can accumulate in the process water and may contribute to a decrease in the brightness of the paper. DisCo substances with an anionic charge are sometimes referred to as "anionic trash" or "detrimental substances". These substances may interact with, and consume, added process chemicals without beneficial effects, such as retention, for the papermaking process.

10.5.2.3 EFFECTS OF PEROXIDE BLEACHING

Mechanical pulping, conducted without pH adjustment, results in only small chemical changes compared to the original wood. However, mechanical pulps are often bleached with hydrogen peroxide to improve the optical properties of the final product, thus inducing chemical changes both in the fibres and in the substances released from the pulp.

Glucomannans are deacetylated under the alkaline conditions of peroxide bleaching and their solubility is decreased, resulting in their sorption back onto the fibres. Peroxide bleaching increases the charge of TMP fibres through deacetylation of pectins and probably through lignin oxidation and the formation of anionic groups in lignin.

The amount of glucomannans released from peroxide-bleached TMP is smaller than from unbleached TMP due to the deacetylation and re-sorption of glucomannans onto the fibres. Acetic acid is released in peroxide bleaching, primarily due to the deacetylation of glucomannans. More galacturonic acid units, the building blocks of pectins, are released from peroxide-bleached TMP than from unbleached TMP. The released pectins are demethylated and form pectic acid with a high anionic charge. The composition of wood resin is not significantly altered by alkaline peroxide bleaching, except for resin acids with conjugated double bonds, which are oxidised and degraded. Wood resin is no longer sterically stabilised and coagulates upon the addition of electrolytes. Most of the lignans are degraded during peroxide bleaching. More lignin material is released after peroxide bleaching, and this lignin contains carboxyl groups.

10.5.3 DIP

The de-inking process is used to recover fibres from waste paper. Different process steps, such as slushing, screening, cleaning, fractionation, flotation, dispersing and beating, are used to separate fibres from the other constituents of waste paper. In the course of processing, deposits referred to as stickies are formed, which may reduce production efficiency and paper quality. The stickies originate from wood pitch, coating binders, printing inks, adhesives and papermaking additives.

The most common classification of stickies is based on size. Macro stickies remain as screening residue after laboratory screening with a slotted plate with a slot width of 0.10 or 0.15 mm. Micro stickies can pass the screening slots. Substances in recovered paper processing, classified according to their size, are shown in Figure 2.²

Figure 2, Substances in recovered paper processing

2 Fapet: Book: Recycled fibre and de-inking, Chapter 5, Unit operations and equipment in recycled fibre processing, in Papermaking science and technology series edited by Johann Gullichsen and Hannu Paulapuro

Stickies are hydrophobic, anionic and tacky in nature, and they have various shapes and surface areas. Factors known to promote deposit formation by stickies include, among other things, their chemical composition, viscosity and surface area, as well as process temperature and pH. The high shear forces used in the disintegration of RCF and coated broke may increase the tackiness of detrimental substances.

The reason for this distinction is that macro stickies from recovered paper can be removed from the pulp slurry in industrial processing systems by screens and cleaners. Micro stickies are usually so small that they cannot be eliminated even by the most effective screening units. If the macro stickies content in the headbox of a paper machine or in the produced paper is excessive, the screening of the pulp processing system, the approach flow system, or both require improvement. Micro stickies that pass the screens may agglomerate and lead to deposits on the paper machine or its clothing, or pass into the product as newly formed secondary macro stickies. The contaminant removal efficiencies of different unit operations are shown in Figure 3.5

Figure 3, Efficiency ranges of unit processes for contaminant removal in recovered paper processing.

A distinction is made between primary and secondary stickies. Primary stickies are identical to the aforementioned macro and micro stickies and are characterised as intact tacky particles of adhesives such as hot melts or pressure-sensitive adhesives, inks, binders, waxes, plastics, or wet-strength resins.

Shock-like physico-chemical changes (in temperature, pH, charge, shear forces or concentration) in pulp suspensions are potential causes of secondary stickies formation.

There can also be interactions that increase the tackiness of stickies and the problems experienced with them. Such interactions can involve primary or secondary stickies. Fig. 4 shows in schematic form some possible interactions between virgin fibres, chemical additives and constituents of the recycled fibres that can cause deposits. As Fig. 5 shows, the formation of secondary stickies is often caused by a sudden change (shock) in the physico-chemical state.3

Figure 4, Interactions contributing to deposition of stickies

3 Fapet Book: Recycled fibre and de-inking, Chapter 11, Stickies in recycled fibre pulp, in Papermaking science and technology series edited by Johann Gullichsen and Hannu Paulapuro

Figure 5, Interactions contributing to formation of secondary stickies

Figures 4 and 5 give a highly simplified representation of the complexity of the interactions that can contribute to the formation of stickies. Because chemical measures to control stickies usually influence only one of the mechanisms described, a comprehensive understanding of the interrelations that lead to the formation of stickies is still lacking, despite many years of research. Major problems therefore continue to exist in the daily practice of recovered paper processing.

10.5.4 CONTROL AND ANALYSIS OF THE PROCESS CHEMICAL STATE

Process measurements are chiefly needed for process control. Other application areas include

− monitoring of the chemical state of the process

− monitoring of equipment

− operator support

− increasing understanding (gaining of insight)

10.5.4.1 AVAILABLE PROCESS CONTROL METHODS

The objective of process control is to lower production costs, maximise runnability and minimise quality variations of the produced paper. Control methods are used to minimise process fluctuations. Consistency, ash content, pH, temperature, conductivity, drainage, wet strength properties and charge are the main items to be managed. Direct control methods exist only for flow, consistency, ash, temperature and pH. The significance of these control parameters is described in Table 5.4

Table 5. Control parameters and their significance

Consistency – Variations in the short circulation system have a direct impact on paper quality (MD and CD) and breakage tendency.

Ash content – Variations in the wet end affect paper quality properties such as strength and porosity. Accordingly, an uneven distribution of ash (both MD and CD) generates problems on the paper machine and in coating and printing.

pH – pH affects all chemical reactions at the wet end, especially the charge level and the performance efficiency of additives and chemicals. Sudden changes may cause paper machine runnability problems.

Temperature – Considerable temperature variations should be avoided due to their impact on reaction kinetics, deposit formation and drainage.

Conductivity – Conductivity is an important measure of system cleanliness. This parameter indicates the amount of dissolved inorganic material that can potentially form deposits.

Drainage and wet strength – Pulp freeness is the most watched quality variable for furnish management. Drainage and wet-end strength properties vary with grade, fibre furnish and running conditions, thus creating process and quality control challenges to be tackled at the source.

Charge – The ability to control the interactions of charged particles, such as fibres, fines and DisCo material, is the cornerstone of wet-end stability.

Wet-end chemistry plays a crucial role in water removal, retention and formation – thus affecting, more or less directly, numerous paper quality properties and paper machine runnability.

4 Metso Automation: kajaaniWEMTM , The proven Way to Optimise Papermaking Performance, commercial brochure

A central factor in the interaction between fibres, fines, fillers and trash (DisCo substances) is the charge of each component. The control and stabilisation of charge is one cornerstone of modern, efficient process management.5

10.5.4.2 CHEMICAL METHODS TO CONTROL DETRIMENTAL SUBSTANCES

The DisCo substances can be removed from the white water system with the effluent, with the final paper, or by using an internal cleaning stage. Washing of TMP pulp after primary-stage refining could provide an effective means of removing DisCo substances while taking advantage of the fast drainage characteristics of high-freeness primary-stage pulp.

The DisCo substances can also be aggregated and subsequently retained in the sheet during papermaking. Fixing agents, i.e. cationic polymers with high charge density and low molar mass, are often used to aggregate substances detrimental to the papermaking process. These can be synthetic or starch-based cationic polymers. It has been suggested that the mode of action of fixing agents is aggregation through a charge neutralisation mechanism, although some evidence has also been found for fixation of wood components to the fibres. The dosage and contact time are very important, and so are the charge density, type and structure of the polymer. A retention aid compatible with the fixing agents is a prerequisite for good control of DisCo substances. Other chemicals used to aggregate DisCo substances are alum and PAC. Examples of polymers used as fixing agents are shown in Table 6.

Table 6. Fixing agents

Organic fixing agents:
− poly-DADMAC (polydiallyldimethylammonium chloride) and copolymers of DADMAC and acrylamide
− polyethylene imines
− modified cationic starch
− polyamidoamines

Inorganic fixing agents:
− PAC (polyaluminium chloride)
− alum
− PANS (poly aluminium nitrate sulphate)

5 Metso Automation: Automatic charge control, Application report

The organic polymers may be either linear or branched. The molar mass of the organic polymers may range from 20,000 daltons to several million, and their charge density from 2 to 8 meq/g.
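As a back-of-the-envelope illustration of why charge density matters for dosing, the following sketch estimates a fixing agent demand from a measured cationic demand by simple charge balance. All figures are illustrative assumptions, not recommendations, and real dosing also depends on contact time and polymer structure, as noted above.

    # Toy charge-balance estimate of fixing agent demand (illustrative values).
    cationic_demand = 50.0     # µeq/l: anionic charge of the filtrate to neutralise
    flow = 200.0               # l/s: white water flow to be treated
    charge_density = 6.0       # meq/g: assumed charge density of the polymer

    dose_g_per_l = (cationic_demand / 1000.0) / charge_density  # meq/l over meq/g
    dose_kg_per_h = dose_g_per_l * flow * 3600.0 / 1000.0

    print(f"required dose: {dose_kg_per_h:.1f} kg/h")           # -> 6.0 kg/h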

Enzymes can also be used to decrease the amount of certain DisCo substances or to diminish the problems they cause. The highly anionic pectic acids released in alkaline peroxide bleaching can be degraded by enzymes so that they no longer interact with cationic process chemicals. Lipases are used, even on an industrial scale, to degrade the triglycerides in wood resin to free fatty acids. Paper quality is improved, with fewer holes and spots in the paper, and deposits on paper machine equipment are reduced.

Fillers and special pigments, such as talc and bentonite, can also be used to adsorb DisCo substances or to change the “tackiness” of wood resin and deposits. The formation of deposits can also be controlled by the addition of dispersants: anionic polymers with low molar mass that can change the viscosity of pitch droplets or reduce the affinity of machine surfaces for pitch. Dissolved air flotation, evaporation or membrane filtration can be used as internal cleaning stages.

10.5.5 SAVCOR-WEDGE PROCESS ANALYSIS SYSTEM

Although a lot of data are available in the highly instrumented process industry, finding the essential information is difficult. Savcor-WEDGE is a process analysis system designed to analyse fluctuations in continuous processes. The goal of using the system is typically to reduce quality variations, improve process efficiency, or both. Savcor-WEDGE includes a wide range of tools for searching for and locating the origins of variations and for determining the cause-effect relations present in a process. Continuous process monitoring can be automated, making it easier to identify problematic areas and making the analysis results accessible to a wider range of users.6

KCL-WEDGE integrates with the existing systems that provide the information needed for the analyses. All measurement data can be combined into graphs for easy viewing and editing. In order to generate reliable results, the data must be pre-processed before analysis. The application includes convenient tools for user-performed pre-processing that do not require any knowledge of mathematical signal processing. The pre-processing procedures can also be automated in KCL-WEDGE and performed hidden from the end user.

The application enables users to quickly form a picture of the whole process and its status.

6 Maija Federley, KCL: Information on KCL-Wedge process analysis tool (http://www.kcl.fi/wedge/wedge.html)

The following data analysis methods are implemented in KCL-WEDGE as easy-to-use applications: statistics of the data, xy-plots, histograms, correlations, delays between measurements, principal component analysis, periodic fluctuation analysis, spectra, MAR analysis and waveform matching. Users can also define their own additional analyses, tailored to their needs, in HTML and Matlab.
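To give a flavour of what such an analysis computes, the sketch below detects the dominant periodic component of a measurement from its amplitude spectrum. The signal, sampling rate and disturbance period are hypothetical; this is not the KCL-WEDGE implementation.

    import numpy as np

    # Hypothetical consistency signal sampled once per second for 30 minutes,
    # containing a 60 s periodic disturbance buried in noise.
    fs = 1.0                                     # sampling frequency, Hz
    t = np.arange(1800) / fs
    signal = 0.2 * np.sin(2 * np.pi * t / 60.0) + np.random.normal(0, 0.1, t.size)

    # Remove the mean and compute the one-sided amplitude spectrum.
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

    # The strongest spectral peak reveals the period of the fluctuation.
    peak = np.argmax(spectrum[1:]) + 1           # skip the zero-frequency bin
    print(f"dominant period: {1.0 / freqs[peak]:.1f} s")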

Figure 6. A result of a periodic fluctuation analysis of a measurement in KCL-WEDGE

KCL-WEDGE has been used widely to analyse papermaking processes in Europe. Currently 23 paper production lines are using the system on a permanent basis; some 30 case studies on wet-end chemistry variability and some 60 case studies on rapid (0.01 s - 10 s) physical variability in the short circulation have been carried out.

10.5.6 QUALITY PARAMETERS OF WATER AND MEASUREMENT METHODS

The water properties that are considered important in papermaking are summarised in Table 7.

Table 7. Water quality parameters and their effects or interactions

Temperature – Correlates with the solubility and sorption of DisCo substances on fibre; a high value is good for water removal, but sensitivity to pitch problems increases.

pH – Solubility and sensitivity to pitch problems increase with pH; an important parameter for chemical reactions such as bleaching, hydrolysis, dissociation of carboxyl groups, surface charge, internal sizing, etc.

Conductivity – Correlates with the amount of inorganic compounds and ions. Affects the performance of cationic polymers, surface charge and the stability of colloids. Increases the viscosity of pitch compounds. Affects the hydrodynamic volume of polymers.

Hardness – A high value increases sensitivity to deposit formation. Affects colloidal stability.

Alkalinity – High alkalinity means good pH stability.

Dissolved organic compounds – A large amount increases microbiological activity.

Turbidity – Measures the performance of filtration processes; correlates with the amount of colloidal particles.

Dissolved gases – A large amount of gas causes web quality defects.

Surface tension – Low surface tension promotes foaming; an important parameter for fibre bonding and water removal. Affected by the constituents of the water.

Viscosity – Important for water removal and pitch cohesion.

Problems such as reduced quality, deposits on the paper machine or periodically occurring decreases in runnability may originate from variations in the chemical state of the wet end. It is often not possible to solve these problems with common process
knowledge. In order to improve understanding of the factors affecting the chemical state, chemical measurements are needed.

The on-line measurements relating to the chemical state of the PM wet end can be divided into two groups: Direct measurements and indirect measurements needing pre-processing.7

10.5.7 MEASURING THE CHEMICAL STATE OF PAPER MACHINE STOCK AND WATER SYSTEMS

The term on-line measurement is often used to refer to widely different measurements requiring different amounts of device maintenance, sample pre-treatment and result interpretation. On-line measurement refers here to a measurement that is automatic, measures continuously and needs relatively little maintenance; the results should also be reliable. In this report, on-line measurements are divided into direct measurements and indirect measurements that need pre-treatment, e.g. filtration. The chapter also includes a category for possible future on-line measurements that are under development and a category for other chemistry monitoring systems. A summary of information obtained from the Ecotarget partners (STFI, KCL, PTS and PMV) is shown in Appendix 1.

10.5.7.1 DIRECT ON-LINE MEASUREMENTS

pH

pH is the negative logarithm of the hydrogen ion activity and is only slightly influenced by external factors. Fluctuations in pH are undesired: they change the solubility of different substances, making the system much more sensitive to the build-up of precipitates and causing web breaks. On-line measurement of pH is a well-established technique; however, the sensors need to be cleaned and calibrated regularly.
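As a worked illustration of the logarithmic scale (activity values chosen purely for illustration), one pH unit corresponds to a tenfold change in hydrogen ion activity:

    import math

    # pH is the negative base-10 logarithm of the hydrogen ion activity.
    for activity in (1e-7, 1e-6, 1e-5):
        print(f"a(H+) = {activity:.0e}  ->  pH = {-math.log10(activity):.1f}")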

Conductivity

Changes in the composition and concentration of dissolved inorganic substances can easily be detected by conductivity measurements, which give a rough picture of the total concentration of electrolytes in the aqueous phase. Although the conductivity technique shows good sensitivity, it is non-specific, because different ions have different mobilities. On-line measurements are available.

7 Marja Nappa, KCL, Information: On-line measurements of water quality

Temperature

Temperature affects many reactions occurring in the aqueous phase: solubility, dewatering, reaction rates, microbial activity and the effects of most chemicals are all temperature dependent. Fluctuations in temperature are undesired and may cause precipitation and runnability problems.

Redox potential

The redox potential is a measure of the affinity of a substance for electrons (its electronegativity) compared with hydrogen. By definition, the redox potential of hydrogen is zero. Substances more strongly electronegative (i.e., capable of oxidising) than hydrogen have a positive redox potential, and those capable of reducing have a negative redox potential. Oxidations and reductions always go together. Changes in redox potential are a sign of the presence of oxidants or reductants in the process. Chlorine/bleaching compounds, dithionite, biocides, etc., and their stability can be followed. Redox potentials are measured with electrodes.

Gas content

The air content in the short circulation affects paper machine runnability and dewatering in the wire section, as well as consistency measurement and control. A low air content also improves formation and decreases pinholes and dirt deposits in the paper web. Air can exist as bubbles or in dissolved form; both can be measured on-line with several devices, based either on ultrasonic waves or on measuring the volume decrease when the water sample is compressed.

10.5.7.2 INDIRECT ON-LINE MEASUREMENTS

Most chemical analyses of the water phase of a pulp suspension require filtration of the sample, as solid matter disturbs the measurements. Continuous separation of solid matter from the aqueous phase is often difficult, as the fine fraction in the pulp suspension sooner or later clogs the filter. This naturally affects the functionality of the filter, which has to be cleaned or replaced. Today, new techniques exist for continuous thick stock filtration.8

10.5.8 SAMPLING TECHNIQUES

8 Fapet: M. Holmberg, Paper machine water chemistry, pp. 213-219, in Papermaking Science and Technology series, Book 4: Papermaking Chemistry, edited by Leo Neimo, 1999

A continuous filter with counter-current cleaning has been developed at KCL. A ceramic tube filter with 4 or 6 mm ID channels is used; the pore size of the filter is between 0.2 and 1 µm. The filter is equipped with an automatic washing cycle.

BTG has a filtration sampler (Mütek TSS-70) specially developed for paper industry applications. The sampler automatically draws pulp samples and produces filtrates with a constant low fibre content. The sampler is cleaned automatically with rinsing water and high-pressure air. The filter was developed for the BTG charge and turbidity analysers.9

Metso Automation also has an application for thick stock filtration (kajaaniFSD100).10

10.5.8.1 METHODS FOR SOLIDS-FREE SAMPLES

Charge density

BTG has a commercial charge analyser (Mütek PCT-20). The analyser continuously monitors the total surface charge of all dissolved and colloidal substances in an aqueous sample. The method is based on the streaming current principle: a moving piston in a measuring cell causes a liquid stream between the cell wall and the piston, which distorts the charge distribution around the colloidal substances so that a current is induced. Polyelectrolyte titration is used to determine the charge level.11
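The charge level obtained by polyelectrolyte titration reduces, in essence, to a simple stoichiometric calculation. The sketch below shows the arithmetic; the titrant concentration and the volumes are illustrative assumptions, not instrument specifications.

    # Charge demand of a filtrate from polyelectrolyte titration (toy values).
    titrant_conc = 0.001       # eq/l cationic polymer titrant, assumed
    titrant_volume = 2.5       # ml consumed to reach the zero charge endpoint
    sample_volume = 10.0       # ml of filtrate titrated

    eq_consumed = titrant_conc * titrant_volume / 1000.0           # eq of titrant
    charge_demand = eq_consumed / (sample_volume / 1000.0) * 1e6   # µeq/l

    print(f"anionic charge demand: {charge_demand:.0f} µeq/l")     # -> 250 µeq/l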

The Kajaani CATi cationic demand measurement is also based on titration technology. The measurement principle is an industry standard that complements laboratory methods for detecting anionic trash. Proactive sample handling is used so that a fresh sample is always measured. A large sample volume enables representative samples and accurate measurements. The measuring chamber is cleaned automatically with pressurised air, water, cleaning chemical and ultrasound.12

Turbidity

BTG has a turbidity measurement device that can be integrated into the TSS-70 filtration sampler. The filtrate turbidity measurement is based on white light sent through the sample; opposite the light source, a sensor measures the transmitted light.13

9 BTG, Product sheet

10 Metso Automation, internet pages

11 BTG, Product sheet

12 Metso Automation, internet pages

13 BTG, Product sheet

On-line titration

The Applikon is an on-line process titrator for aqueous samples. Different methods can be programmed into the device. At KCL, methods have been developed for determining the following process components: dissolved calcium, silicate, aluminium, starch, COD (chemical oxygen demand), dithionite and dissolved sodium.

Multivalent cations with X-ray fluorescence

The Courier instrument is based on X-ray fluorescence. The sample is irradiated by an X-ray source (55Fe) and the emitted secondary radiation, which is specific to each element, is measured. The device needs calibration for each element. In papermaking, the device is mostly used for the detection of dissolved calcium, but other multivalent cations can be measured as well. Filtration is needed for reliable results.

TOC

There are commercial devices for the continuous detection of TOC (total organic carbon). The sample is treated with phosphoric acid to remove inorganic carbon dioxide and then burned at 900 °C. The carbon dioxide formed in the combustion is fed into an IR spectrometer and the absorbance is detected continuously; integrating over the sample time gives the amount of carbon.
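The integration step can be pictured with the following sketch, which assumes a detector trace already calibrated to mg of carbon per litre of carrier gas and a known gas flow; both the trace and the flow are hypothetical.

    import numpy as np

    # Hypothetical calibrated CO2 detector trace over a 60 s combustion peak,
    # sampled once per second (mg carbon per litre of carrier gas).
    t = np.arange(60.0)                                        # s
    co2_trace = 0.5 * np.exp(-0.5 * ((t - 20.0) / 5.0) ** 2)   # mg C / l
    gas_flow = 0.5                                             # l/s, assumed

    # Integrate concentration x flow over the sample time -> total carbon.
    carbon_mg = np.trapz(co2_trace * gas_flow, t)
    print(f"total organic carbon in sample: {carbon_mg:.2f} mg")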

10.5.8.2 FUTURE ON-LINE MEASUREMENTS

Flow cytometry

For decades, flow cytometry (FCM) has been used for the characterisation of blood cells and bacteria and has become indispensable for medical and biological use. FCM is able to count thousands of particles per second and simultaneously determine their type and size, producing a statistically significant report in less than a minute. The principle of FCM is based on light excitation of a “lined-up” particle stream and multi-channel detection of scatter and of fluorescence from stained particles. This rapid technology has so far not been used to any great extent in the process industry, except for counting bacteria in milk and beer.

The methodology and suitability of this method for paper mill applications were developed and tested in a project in the national CACTUS research programme aiming at
reduced water consumption in papermaking.14 Three pre-filtered sample streams (groundwood, head box and wire water), filtered by a WIC-100, were analysed, each every 30 minutes.

The main result of the project was that the FCM system is fully capable of measuring pre-filtered samples, even in such a rough environment as a paper mill, without fouling of the flow cell. FCM can provide unique information about colloidal particles and be valuable both in papermaking process research and in continuous process control. Information is obtained about the concentration and size of different types of colloidal particles in the size range of about 0.3-20 µm. Nile Red was found to be particularly well suited for staining process waters, due to the polarity-dependent shifts occurring in the fluorescence of the stained particles.

FCM is a valuable method for the determination of colloidal particles in papermaking. It is fast and capable of distinguishing different particles and agglomerates, and preparation and measurement can be done within minutes. However, at the moment no commercial applications exist, because expertise is needed to interpret the results.

Capillary electrophoresis

A commercial capillary electrophoresis device has been tested for on-line monitoring of individual ions in the wet-end part of the papermaking process.15 The sample has to be filtered carefully before measurement; two-stage filtration has turned out to be necessary to prevent any precipitation in the sample. Four millilitres of filtrate are pumped to the measurement cell, which is pressurised with the pumping system of the apparatus, and approximately 10 nl is fed into the capillary. The measurement cycle is about 15 minutes. The measurement is based on the so-called indirect UV detection principle. A buffer mixture of dicarboxylic acids gave good separation and detection of anions; correspondingly, an imidazole buffer was good for cations, with a measurement cycle of about 10 minutes. It is possible to measure the concentrations of selected anions (such as chloride, sulphate, oxalate and formate) and cations (potassium, calcium, sodium, magnesium and aluminium). Interestingly, anions and cations can be measured from the same sample. For the moment, no commercial on-line CE measurement system is available, because the system would require too much maintenance.

NIR measurement of paper sample

Within Ecotarget, PTS is developing a measurement method for stickies that is based on spectrum analysis of the radiation reflected from the paper surface. The paper sample

14 Ralf Degerth, Final report of the CACTUS programme, Åbo Akademi, Laboratory of Forest Products Chemistry

15 Heli Siren, VTT Processes, internal information

is illuminated with near-infrared radiation between 1400 and 1900 nm.16 Characterisation of detrimental substances is done by comparing the measured spectrum with reference spectra (Indege).

Other chemistry monitoring systems

Liqum Chena index

A small company from Jyväskylä, Finland, has developed an on-line measurement system for monitoring the chemical state of process water. The system is based on electrochemical measurements.17 It has been used to improve the control of papermaking by optimising the dosing of chemicals and by observing the process load caused by different ions. The measurement technology is based on measuring reduction and oxidation reactions, using a combination of ion-selective electrodes (or receptors). The electrodes may be coated, or pure metals/metal mixtures. Temperature and pH are also measured, as they affect the activity and solubility of most ions. The following receptors, depending on the application, may be used:

− Si, Al, O, Mg, Na

− S

− Br, CN

− O, Br, S

− O, C, CN, Zn

− Al, S, Na, Mg

The following chemicals have been observed to give a response in the electrochemistry of various paper machines:

− Dithionite

− Thiosulphate

− Biocide

− Peroxide

− Fillers: kaolin and talc

16 Patrick Plew, PTS, Work report: Introduction to new at-line measurement system, Ecotarget SP5 meeting in Munchen 21-22 April 2005

17 Sakari Laitinen, Liqum Paper Oy, Ohjelmakaari 1 FIN-40500 Jyväskylä, Finland, Principle of Liqum Chena measurements

− Bentonite

− Defoamers

− Hydrex

− Fixatives

− Dispersing agents

− Optical brighteners

− Alum

− Sulphuric acid

− EDTA

The measurement results are formulated into an electrochemical “taste index” that is followed by the operators in selected parts of the papermaking line. Undisturbed process conditions have a taste index of 100; deviation from this optimum is used as a guide for improving processing conditions. Successful use of the methodology requires continuous learning about how to affect the indexes in the desired way. Several mills have adopted the methodology.

10.5.9 WIC SYSTEMS

A small Finnish company, WIC Systems, has developed a measurement system for on-line chemistry monitoring of a paper machine. The system takes samples from various parts of the process and pre-treats the samples for water quality analyses. The sampling system is patented. The following analyses are available: charge density/turbidity, starch, alkalinity, manganese, aluminium and COD. The analysis time is usually 10 minutes, but two hours for COD. Altogether four sample lines are connected to the measurement system. Over the past 10 years, more than a hundred on-line analysers have been sold across the paper, tissue, board and pulp industries. The measurement system can be provided with remote monitoring capabilities via modem or the internet.18

10.5.10 FIBRE AND PAPER PROPERTIES

18 Mika Laihonen, WIC Systems Oy, WIC Brochure 25.5.2005

Paper machines are normally equipped with on-line scanning devices for the control of grammage, ash and moisture content, and thickness of the paper web. Holes and specks are also quite often monitored continuously by measuring the transmittance of light through the web.

Pulp freeness is the most watched quality variable for furnish management. Drainage and wet end strength properties vary with grade, fibre furnish and running conditions thus creating process and quality control challenges to be tackled at the source. Both laboratory and on-line methods exist for the measurement of pulp freeness. More detailed analysis of fibre properties can be done with equipment such as FiberMaster®19 and FibreLab®20.

Numerous paper quality parameters are measured from machine reel samples, either in the laboratory or with automatic paper testing equipment. An automatic paper lab21 is capable of analysing every machine reel. Paper quality parameters, especially when outside quality specifications, can also be used as indicators of process disturbances.

10.5.11 CONCLUSIONS

There are a number of measurements that would be useful for improving control of the papermaking process. A few of them are already in use, and at-line and on-line methods are available on the market. There is, however, a large number of parameters that can be measured only by complicated laboratory methods. These parameters are sometimes measured, and it is possible to get some useful information from the values. It is, however, difficult to prove that such parameters are useful and should be measured frequently in the process.

Scanning devices are used to control the retention, grammage, ash and moisture content, and thickness of the paper web. Holes and specks of the whole paper web can also be monitored continuously.

Numerous paper quality parameters are measured from machine reel samples, either in the laboratory or with automatic paper testing equipment. An automatic paper lab is capable of analysing every machine reel.

Modern mills are equipped with automation systems that gather huge amounts of process data, and highly sophisticated process analysis tools such as KCL-Wedge are widely used in paper mills.

19 Lorentzen & Wettre, http://www.lorentzen-wettre.com/

20 Metso Automation, http://www.metsoendress.com

21 Metso Automation, http://www.metsoautomation.com/

Extraction of data relating to the chemical state of the process, and verification and processing of the data into a useful form for process control, have, however, turned out to be problematic.

A lot of laboratory research results and mill experience exist on the factors that affect the release of detrimental substances and their impact on runnability and paper quality defects, but clear cause-and-effect knowledge of detrimental phenomena is not available. According to the literature, large changes in pH, temperature, conductivity and anionic trash increase the probability of deposition incidents. From a practical point of view, the carry-over of detrimental substances from the pulping plant should be minimised or stabilised, and processing conditions leading to agglomeration and adhesion on process surfaces should be avoided. The operator needs information about changes in the chemical state of the process, together with clear instructions on whether he should react to this information and what actions he should take.

The control of the chemical balance of the wet end has turned out to be very difficult. Occasionally problems occur that are known to be connected to “fluctuation” of the process, but the mechanisms leading to process disturbances are poorly known. Higher concentrations of detrimental substances may cause unstable physico-chemical conditions, where even small variations, for example in pH or temperature, can disturb the chemical balance in the wet end. On the other hand, a paper mill may well operate the process successfully at a low water consumption level for long periods and then suddenly run into trouble.

Even though the pulping plant is often integrated with the paper machine, they are not operated as a whole, but have their own operators. Steady-state operation of the mill is very seldom possible due to grade changes, process shut-downs for maintenance and reduced performance of equipment. The effects of plant operations on web breaks and paper quality defects are not known.

The overall objective of WP5.2 is to develop a methodology that can be used to minimise web breaks and paper quality defects caused by deposit formation originating from changes in the chemical state of process water networks. The methodology will be based on utilisation of the present data systems. A further objective is to study what kind of additional data would be necessary to connect deposit formation with changes in process operation. The methodology is described in Figure 7.

[Figure content: case mills provide measurements, mill operations and samples; lab simulation of mill operation and new measurement methods feed the extraction and analysis of process data; these in turn support a dynamic model of the mill, identification of the factors leading to disturbances, a monitoring system and complementary measurements.]

Figure 7. Methodology of WP 5.2 to control and monitor detrimental phenomena in papermaking.

10.6 MODEL-BASED WET-END OPTIMISATION

Alfonso Alonso and Angeles Blanco, University Complutense, Madrid

10.6.1 INTRODUCTION

The stock preparation plant and the paper machine are very well controlled, and many data are nowadays accessible at paper mills by means of data acquisition systems, e.g. WinMOPS. These data are used to find correlations between the different process and product parameters and to build models that allow predictive and adaptive control of the processes. Furthermore, external data coming from the laboratory or from other sensors installed at the mill (e.g. an on-line Focused Beam Reflectance Measurement probe) are also included in this modelling process to build better simulations.

Process modelling encompasses many different techniques that can be used to build soft sensors, simulators and diagnostic tools to monitor the system, predict its behaviour under different scenarios, and optimise product quality or machine runnability.

The results of process modelling depend on several factors: the data analysis, the removal of non-significant or wrong data, the choice of an appropriate modelling technique and the way each technique is used. The quality of monitoring, of product quality
predictions and of process simulations depends on these factors. Thus, selecting an appropriate methodology becomes crucial when defining each research stage.

Process modelling in papermaking becomes especially difficult in some sections, like the wet end, whose chemistry and physics are still not well understood and for which deterministic models have not been developed. Thus, the use of advanced techniques to monitor the wet end and to predict product properties becomes necessary. However, the decision to use such tools cannot be taken at the first step. First, a preliminary study has to be carried out to compare the different alternatives, and the simplest ones should be discarded only if the results differ significantly, because the simpler the model, the easier the implementation and operator training stages.

Once the simplest methods (for instance, multiple regressions or simple statistical analyses) have been discarded, another crucial step in a successful implementation is to demonstrate that these ‘black box’, ‘complex’, ‘advanced’ models are not a ‘mystic’ tool, and to prove that a soft sensor built on the basis of an advanced model gives its results as easily as the simplest one. The only difference resides in the modelling stage, which needs more expertise in the case of advanced models; the stages before and after modelling are not necessarily different.

10.6.2 OBJECTIVES

A general aim of papermakers is to predict the quality of the product and/or runnability problems from available data, in order to define potential improvements for optimising the process by increasing process stability. The final aim is to increase the productivity of the mill and/or the stability of the product quality.

In this example, optimisation of the wet end to increase product quality has been carried out. A model to predict paper properties from wet-end parameters has been developed.

The UCM approach involves building advanced models that are able to generalise the behaviour of the wet end and give results with sufficient accuracy and robustness, in order to optimise the wet end of a paper machine. This optimisation should increase process stability as well as average quality values, through direct recommendations that operators can easily transfer to the paper mill. The approach could later be transferred to other paper machines.

At the moment, the main efforts have been directed towards optimising off-line models, by modifying the values of internal parameters and finding the optimum combination, which may allow robust on-line models to be created in the future.

Furthermore, an additional objective is to develop an automatic updating system that can respond to process changes automatically, as well as select the ‘Current Best Model’.
That would allow operators rapid monitoring of a great variety of changes in operational states, and periodic renewal of the models. Maintenance and calibration would in that way also be automatic, representing important time savings when facing possible troubles.

10.6.3 METHODOLOGY

When a model is built to be used, for instance, as a soft sensor, the objectives have to be clearly defined. A model built to optimise the process and a model built simply to predict or monitor specific variables with the highest accuracy have quite different needs. When an optimisation has to be carried out, a model with a reduced number of inputs is more useful, because during optimisation the best values of each input variable are provided for each output. If these recommendations are produced for an excessive number of inputs, operating the system to find the optimum range becomes almost impossible, because process variables are not totally independent. With a reduced number of inputs, this problem is significantly reduced. In theory, a model with only one input would make the problem disappear, but accuracy would suffer significantly; thus, a compromise solution must be adopted.

On the other hand, when a model is developed to give the best possible prediction, selecting only a few inputs is usually not the best option. In any case, interdependencies must be identified and the use of strongly correlated variables avoided.

Therefore, the number of variables to include in the study and the compromise between accuracy and robustness have to be defined as objectives. A model can be developed to give a very accurate response over a certain time period at the cost of lower robustness, or to achieve the highest possible robustness in long-term predictions; there are countless intermediate situations.

The modelling stage changes slightly depending on the selected robustness and modelling techniques. For instance, when an artificial neural network is trained and validated, the stopping criteria vary depending on the required robustness, which can be quantified by testing the model with data from periods of time different from those used for training or validation.

The results are shown in quite different graphic forms, but they have to be compiled so as to be easily understood, following the same structure as the objectives. The key message is that the mathematics behind the models may be complex, but applying them at the mill can be as easy as using a straight-line model.

The construction of the models has been structured by defining a methodology with distinct steps. Some stages run in parallel throughout the research; one example is the automation of each stage by developing user-friendly interfaces that allow rapid analysis or modelling. This task should later be linked to operator training and model implementation, once the utility of a model for solving specific problems at mill scale has been proven and accepted. The main steps are:

- Data pre-processing: Available data are filtered according to the specific needs of each model. Paper grade selection is also carried out.

- First variable selection: A first selection of process variables and laboratory measurements is carried out according to the experience of the personnel at the mill. The results of the statistical analysis stage may modify this selection due to possible correlations among the data.

- Statistical analysis: A statistical analysis is performed in order to get a first impression of the working environment and the correlations among variables.

- Proposal of modelling techniques: Different modelling techniques are then proposed in order to build paper quality prediction tools that allow process optimisation. Multiple regressions and artificial neural networks (ANNs) have been proposed due to their suitability for the problems at hand.

- Modelling scheme definition: For each modelling method, a scheme has been developed to ensure the desired robustness.

- Preliminary modelling: Preliminary models are developed to select the optimum model parameters.

- Full study: A full study is then carried out to fulfil the objectives and to evaluate the influence of each input on each output.

- Second variable selection: With the results from the full study, a further selection of inputs and outputs can be carried out in order to build reduced models with a few selected inputs. These models are called ‘optimisation models’, and they are used only if process or quality optimisation is a predefined objective.

- ‘Optimisation’ modelling: To build the reduced models, the same ‘preliminary/full study’ sequence is applied. In this case, the reduced number of inputs allows further analysis of the results and the establishment of recommendations to fulfil the objectives.

- Model updating/upgrading: Finally, an automated updating procedure can be applied in order to maintain the model over time as new data become available, as well as to create new models if operating conditions change significantly.

After the preliminary approach used to justify the selection of ANNs, another preliminary study was carried out to select the optimum validation procedure, as well as a range of ANN architectures and training algorithms for the full study, since numerous factors may influence the accuracy and robustness of the results: the ANN architecture, the training/validation procedures and the time range of the data used for training are some of them.

The full study optimised the validation procedure through the analysis of different groups of ANNs. The same procedure was applied to optimise the time interval of the data used for training.

Both preliminary and full studies follow specific methodologies based on the creation of numerous models grouped by different parameters.

Architecture and training algorithm optimisation

The selection of the optimum number of hidden layers has been skipped, since previous experience shows that one hidden layer is enough in this specific case.

The selection of transfer functions in each layer has also been skipped. From previous experience, we know that logistic functions in the input and hidden layers and linear functions in the output layer give the best performance, as well as important time savings in reducing training errors.

These steps thus involve developing several models, varying the number of hidden neurons and the training algorithm. The Levenberg-Marquardt algorithm (LMA) and the gradient descent algorithm (GDA) with momentum and variable learning rate were the two proposed training algorithms. In the case of the GDA, the momentum value is another parameter to be optimised.

Five ANNs have been developed for each combination of parameters. The programming scripts developed allow automatic selection of the models with the lowest validation error, in order to save their parameters and analyse their results. The procedure for the architecture optimisation step is shown in Figure 1.

[Figure content: with the number of hidden layers and the transfer functions fixed beforehand, the training algorithm alternatives are LMA, and GDA with momentum and variable learning rate; the parameters to be optimised are the number of hidden neurons (and, for GDA, the momentum). Five ANNs are developed for each combination of parameters, the best models are selected, and the optimum parameters are obtained.]

Figure 1 – Architecture optimisation scheme
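A minimal sketch of the scheme in Figure 1 is shown below, using scikit-learn's MLPRegressor as a stand-in for the authors' MATLAB scripts (scikit-learn offers neither LMA nor GDA with momentum exactly, so the 'adam' solver is used instead; the architecture range, data and seeds are all illustrative):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X_train, y_train = rng.normal(size=(300, 8)), rng.normal(size=300)  # toy data
    X_val, y_val = rng.normal(size=(100, 8)), rng.normal(size=100)

    best = (None, np.inf)
    for hidden in (2, 4, 8, 16):              # numbers of hidden neurons to try
        for seed in range(5):                 # 5 ANNs per parameter combination
            ann = MLPRegressor(hidden_layer_sizes=(hidden,), activation='logistic',
                               solver='adam', max_iter=500, random_state=seed)
            ann.fit(X_train, y_train)
            val_error = np.mean((ann.predict(X_val) - y_val) ** 2)
            if val_error < best[1]:           # keep the model with least error
                best = (ann, val_error)

    print(f"best architecture: {best[0].hidden_layer_sizes}, "
          f"validation MSE: {best[1]:.3f}")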

Best ANN selection and grouping

Once the ANN architecture and training algorithm have been optimised, full studies include a further step that varies from case to case. In a full study, several ANNs are created to compare, for instance, different distributions of validation data over time with respect to the training data. Each studied distribution implies developing ANNs with the previously selected optimum architectures, which usually span a tight range of values. From all the created models, the best ones are selected in order to analyse validation and simulation errors. A scheme of this selection task is shown in Figure 2.

[Figure content: for each optimum architecture 1…N, five ANNs are developed; the best ANNs are selected, and their average validation and simulation errors are analysed.]

Figure 2 – Best ANN selection. Best models are selected through finding the minimum validation error.

Training, validation and simulation

Validation error is used while training the models to select the optimum number of iterations. Validation data are not used for training the ANNs, but to check the results. Simulation errors are calculated in the same way, but their purpose is to check model robustness when modelling data from different periods of time.

In each step of the modelling scheme definition, the concepts ‘training’, ‘validation’ and ‘simulation’ have been constantly present. Validation and simulation involve only the calculation of outputs with an ANN, using validation and simulation data, respectively, as inputs.

The training process, on the other hand, varies depending on the selected stopping criterion. In this case, the stopping criterion is the minimum validation error: validations are carried out for different numbers of training iterations, and the optimum number of training iterations is the one at which the validation error (the difference between the ANN responses and the validation data) reaches its minimum. A schematic view of this procedure is given in Figure 3.

[Figure content: the ANN is trained on the training data for X, 2·X, 3·X, …, N·X iterations; each trained network is validated against the validation data, the validation errors 1…N are compared, and the minimum validation error determines the optimum number of training iterations.]

Figure 3 – Scheme of the automated procedure to develop robust ANNs.
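A minimal sketch of this stopping criterion on toy data follows, with a tiny one-hidden-layer network (logistic hidden units, linear output, plain gradient descent) written out in numpy; none of this is the authors' code, and in practice the weights at the best iteration would also be saved:

    import numpy as np

    rng = np.random.default_rng(1)
    X, y = rng.normal(size=(200, 4)), rng.normal(size=(200, 1))    # toy data
    Xv, yv = rng.normal(size=(80, 4)), rng.normal(size=(80, 1))

    # One hidden layer with logistic units, linear output -- as in the study.
    W1, b1 = 0.1 * rng.normal(size=(4, 6)), np.zeros(6)
    W2, b2 = 0.1 * rng.normal(size=(6, 1)), np.zeros(1)
    lr, step, errors = 0.01, 50, []            # validate every `step` iterations

    def forward(A, W1, b1, W2, b2):
        H = 1.0 / (1.0 + np.exp(-(A @ W1 + b1)))   # logistic hidden layer
        return H, H @ W2 + b2                      # linear output layer

    for it in range(1, 2001):
        H, out = forward(X, W1, b1, W2, b2)
        err = out - y
        # Backpropagation of the mean squared error.
        gW2 = H.T @ err / len(X);  gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * H * (1.0 - H)
        gW1 = X.T @ dH / len(X);   gb1 = dH.mean(axis=0)
        W1 -= lr * gW1;  b1 -= lr * gb1;  W2 -= lr * gW2;  b2 -= lr * gb2
        if it % step == 0:
            _, out_v = forward(Xv, W1, b1, W2, b2)
            errors.append((it, float(np.mean((out_v - yv) ** 2))))

    best_it, best_err = min(errors, key=lambda e: e[1])
    print(f"optimum number of training iterations: {best_it} "
          f"(validation MSE {best_err:.3f})")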

When training each model, both training and validation errors are plotted and automatically recorded, in order to automate the selection of the best ANNs and the analysis of the results.

10.6.4 THE OFF-LINE TO ON-LINE PATH

UCM has also been working on this stage, planning possible routes from off-line modelling to on-line automatic decision support systems based on different models that could be used as soft sensors or simulators.

UCM is handling two different options:

- Updating ANN parameters with time.

- Building a switching tool that allows either the selection of the best model according to operating conditions or the updating of a generic model used to generalise unusual/new data.

In principle, the second option seems more accurate and useful, minimising the risk of a misleading model. However, this decision will be taken once these studies reach a more advanced stage.

The switching tool in the second option can be implemented with many different techniques. In a preliminary stage, dynamic validation has been combined with ANNs. Figure 4 shows the concept of dynamic validation of an ANN output.

The outputs of the ANN and the corresponding reference measurements are taken as inputs to the dynamic validation block. A simple linear model is then fitted between the ANN outputs and the reference measurements (one model per variable (SISO), or a combined model with all variables (MIMO)).

The SISO model has the form y = a·x + b, where x is the output of the ANN and y is the corresponding laboratory reference measurement. In the optimal situation, where the ANN output equals the reference value, the parameter a is 1 and b is 0.

Then, there are two options:

1) to observe this linear model dynamically and, when it differs statistically significantly from the original model, to produce a warning that the ANN should be retrained or a new ANN taken on-line; or

2) to update the parameters of the linear model dynamically. This is equivalent to the ANN acting as the static part of the whole model and the linear model as its dynamic part, connected in series with the ANN; uncertainties in the ANN parameters are aggregated into the linear model parameters. This option has been discarded due to the high number of parameters an ANN has.
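A minimal sketch of option 1 follows: fit the linear model over a sliding window and warn when its parameters drift from the ideal (1, 0). The data, window size and tolerances are illustrative assumptions, not the authors' values.

    import numpy as np

    def dynamic_validation(ann_out, reference, window=50, tol_a=0.2, tol_b=0.5):
        # Fit y = a*x + b between ANN output and reference over the last
        # `window` samples and flag drift from the ideal parameters (1, 0).
        x = np.asarray(ann_out[-window:])
        y = np.asarray(reference[-window:])
        a, b = np.polyfit(x, y, 1)              # least-squares linear fit
        drifted = abs(a - 1.0) > tol_a or abs(b) > tol_b
        return a, b, drifted

    rng = np.random.default_rng(2)
    ref = rng.normal(60, 2, 200)                # e.g. a formation index reference
    ann = 0.8 * ref + 10 + rng.normal(0, 0.5, 200)   # biased soft-sensor output

    a, b, warn = dynamic_validation(ann, ref)
    if warn:
        print(f"a={a:.2f}, b={b:.2f}: retrain the ANN or switch models")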

Figure 5 shows a practical application of the dynamically validated ANN output. On-line data are soft-sensed with a previously trained ANN. The output is dynamically compared with reference measurements and, if the difference is statistically significant, a warning is produced and the soft sensor is updated. Here the dynamic validation block acts as a dynamic state estimator. If none of the previously trained ANNs produces good enough results, the general ANN is used or a new one is trained.

Figure 4 – Concept of dynamic validation of ANN output.

[Figure content: the ANN output and the reference measurements feed the dynamic validation block on-line, with two possible outcomes: (1) a warning is produced if the ANN output differs statistically from the reference values, and the ANN is updated; (2) a dynamically validated ANN output is produced, and the ANN is updated when the difference exceeds a given limit.]

[Figure content: on-line data feed several trained networks (ANN1, ANN2, ANN3) and a general network (ANN-G); a dynamic validation block compares their outputs with reference data and delivers the soft sensor output.]

Figure 5 – Application of dynamic validation of ANN output.

10.6.5 ROBUSTNESS OF DEVELOPED MODELS

The best ANN in terms of validation error has been analysed as an example of an ANN showing good robustness in both simulation steps.

Figure 6 shows an example of simulation 1 for the formation index. It should be noted that the variability in that period is quite low and the simulation errors are low (see Table 1).

When the same ANN was simulated with data obtained three months after the training interval, the errors increased significantly but remained acceptable (see Table 1).

When all simulation 2 data were plotted, it was encouraging to see that the model, three months after its development, still followed the majority of the fluctuations in its predictions, as shown in Figure 7.

[Figure content: formation index (range approx. 57-69) plotted against validation sample number (0-600), comparing lab data with predicted data.]

Figure 6 – Simulation 1 of formation index by the best ANN in ANN-3 set.

Table 1 – Simulation 1 & 2 errors for each output variable by the best ANN in ANN-3 set.

                                      Longitudinal     Longitudinal      Formation
                                      breaking load    breaking length   index
Simulation 1: average error ε (%)           9                7.6             4.4
Simulation 2: average error ε (%)          13               17               9
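The text does not formally define the average error; assuming ε is the mean absolute relative deviation between predicted and laboratory values, in percent, it would be computed as follows:

    import numpy as np

    def average_error(predicted, lab):
        # Mean absolute relative deviation in percent (assumed definition).
        predicted, lab = np.asarray(predicted), np.asarray(lab)
        return 100.0 * np.mean(np.abs(predicted - lab) / np.abs(lab))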

[Figure content: longitudinal breaking load (kN/m, approx. 2.0-2.9) plotted against validation sample number (0-8500), comparing lab data with predicted data.]

Figure 7 – Simulation 2 of longitudinal breaking load by the best ANN in ANN-3 set.

10.6.6 CONCLUSIONS

− Paper quality optimisation may be carried out by building soft sensors.

− ANNs have proven to be adequate tools for predicting paper quality from wet-end measurements.

− Robust models to predict paper properties based on wet-end analysis, with errors around 10-15 %, have been developed. Robustness has been analysed with two simulations over different time periods.

− FBRM measurements carry an important weight in the developed models. Thus, taking continuous measurements with this device would boost paper quality optimisation through wet-end measurements.

− An on-line intelligent decision support system (UCM-IDSS) is being developed, involving not only modelling but also software implementation and the creation of interfaces.

10.7 MODEL-BASED WET-END OPTIMISATION: DESIGN OF A SOFT SENSOR

Wolfram Dietz, Jürgen Belle, Johannes Kappen, Desiree Somnitz, Christian Mannert, Frank Goedsche and Frank Brüning, PTS, Munich

10.7.1 OBJECTIVES AND SPECIFIC ASPECTS OF WET-END OPTIMIZATION

In the face of high machine speeds, increased use of secondary fibre sources, rising levels of detrimental substances in the water circuits and high demands on paper quality, modern papermaking needs a comprehensive understanding and efficient control of the core wet-end processes. The interactions between the different factors influencing process stability and paper quality are complex and could, up to now, not be described on the basis of first principles. Reliable predictions of improvements or pending problems are difficult. As a consequence, suboptimal efficiency of the additives applied, reduced productivity and decreased paper quality may occur.

Data-based modelling serves the aim of describing wet-end interrelations without the necessity of appraising all chemical and physical interactions. The procedure for setting up a model is generic, but an individual model has to be identified for every application case.

10.7.2 DESIGN OF A SOFT SENSOR ESTIMATING SIZING

This chapter outlines how data-based modelling can be used to design a soft sensor for sizing.

10.7.2.1 PROJECT LAYOUT

Many quality parameters can be determined only after a reel has been finished, by taking samples and conducting subsequent laboratory analyses. If quality problems occur while production is ongoing, corrective action involves a delay; this inevitably produces broke. The same applies to reaching the required specifications after a grade change. It is therefore desirable to be able to predict quality parameters that cannot be measured online.

The objective of the project presented here was to develop a soft sensor estimating sizing. The benefit aimed at was to adapt the sizing agent dosage to the desired paper quality, based on the internal sizing/surface sizing ratio, resulting in better sizing stability and reduced sizing agent consumption. The work was originally part of a research project funded by the European Commission. The paper mill involved produces a wide variety of specialty papers. The project steps are described below. All mathematical analyses were conducted using MATLAB software.

10.7.2.2 PRESELECTION OF CORRELATING PARAMETERS


To start off, a brainstorming session involving the papermakers was held to compile all conceivable factors that could correlate to sizing. In addition to data relating to sizing agent dosage and sizing agent dilution for internal and surface sizing, all other additive dosages were also regarded. These parameters were supplemented by settings from the preparation process, such as refining settings or stock consistencies. Last but not least, the calculations also had to include the quality data from online and laboratory quality assurance. All these factors added up to more than 150 individual parameters that had to be registered online or through lab work for the acquisition of calibration data.

10.7.2.3 MACHINE TRIALS

In agreement with the mill operator, machine trials with strong parameter variations were conducted, even at the risk of out-of-specification production. In order to limit the trials, it was decided that the system should be stimulated abruptly but briefly by changing individual settings. The following parameters were altered:

• the internal resin size dosage was doubled,

• the resin size dosage was dropped to zero – with and without surface sizing, with and without surface starch,

• the retention additive dosage was dropped to zero,

• the broke pulp dosage was varied.

The trials were conducted during four days of production without producing any broke. Neither the grade nor the grammage was varied during the trial period.

To distinguish between internal and surface sizing, sizing measurements were performed with an ultrasonic penetration instrument. The parameters W and MAX indicate the degree of surface sizing, while the parameter A60, which is comparable to the Cobb value, represents the degree of internal sizing.

The data from these trials laid the basis for extensive correlation analysis and initial modelling.

10.7.2.4 CORRELATION ANALYSIS

To evaluate all the data quickly, various mathematical tools and methods were applied. One such method is known as correlation mapping: the strength of the correlations between pairs of data time series is calculated and visualised. A positive correlation, i.e. two variables reacting to a system stimulus by both rising or both falling, is marked red. Blue is used to indicate a negative correlation, i.e. two variables diverging in their reaction. The deeper the colouring, the greater the correlation. The correlation map in Figure 1 illustrates more than 30 variables with respect to their paired correlations. Variable p9 stands for the dosage of the sizing agent, p10 for the amount of PAC. The map points out that there is a correlation between these two variables and p92 (steam consumption), p34 (moisture in front of the size press), and p179 and p182 (A60, sizing result).

This evaluation is important for subsequent mathematical modelling, since it makes it easier to select the factors that correlate most strongly with sizing quality. It has to be kept in mind that parameters held constant by closed-loop control will not show any correlations; they would fall out of a cause-effect interpretation of the correlation map even if, in theory, they have an impact. For building a soft sensor it is not necessary to rely on cause-effect relationships: the goal of the correlation analysis is to find correlating factors, not necessarily influencing factors. To be able to use the models for optimisation calculations, however, the inputs should be influencing factors and be manipulable.
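To make the calculation behind such a map concrete, the following is a minimal Python sketch of correlation mapping (the project itself used MATLAB; the tag names and the synthetic data are illustrative assumptions only):

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical logged wet-end data: rows = samples, columns = process tags.
    rng = np.random.default_rng(0)
    tags = ["p9_resin_size", "p10_PAC", "p34_moisture", "p92_steam", "p182_A60"]
    data = rng.normal(size=(500, len(tags)))
    data[:, 4] += 0.8 * data[:, 0]          # let A60 partly follow the resin size dosage

    corr = np.corrcoef(data, rowvar=False)  # pairwise Pearson correlation coefficients

    # Red/blue map: positive correlations red, negative blue, deeper colour = stronger.
    plt.imshow(corr, cmap="bwr", vmin=-1, vmax=1)
    plt.xticks(range(len(tags)), tags, rotation=90)
    plt.yticks(range(len(tags)), tags)
    plt.colorbar(label="correlation coefficient")
    plt.tight_layout()
    plt.show()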

[Figure axis variables: Resin size (p9), PAC (p10), Chalk (p11), Kaolin (p12), Wet end starch (p13), Retention agent (p14), Microparticles (p15), Broke treatment (p19), Starch solution (p25), Moisture before size press (p34), Grammage (p35), Moisture pope reel (p36), Ash (p37), Paper output (p41), Steam consumption (p92), Surface starch 1 (p117), Surface starch 2 (p119), Surface sizing (p127), Sizing (p146), Sizing factor (p147), Cobb water 60 top side (p148), Cobb water 60 wire side (p149), W top side (p177), Max top side (p178), A60 top side (p179), W wire side (p180), Max wire side (p181), A60 wire side (p182), Poly-DADMAC 0.001 n white water (p183), Retention (p184), Resin size of paper (p185), Resin size of head box (p186), Resin size of white water (p187).]

Figure 1: Correlation mapping for PM trial 1

The analysis can be supplemented by a principal component analysis (PCA). Large loadings on the first principal component characterise the strongest correlating variables in each case. Here again, no differentiation between inputs and outputs is made. Opposite signs of the loadings or a change of sign characterise anti-correlations. Figure 2 shows the loadings of a PCA with preselected parameters.
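Below is a minimal sketch of this PCA step in Python, with scikit-learn standing in for the MATLAB tools actually used; the data and the variable roles are assumed for illustration:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    common = rng.normal(size=500)                  # a shared underlying "sizing" movement
    X = np.column_stack([
        common + 0.3 * rng.normal(size=500),       # e.g. resin size dosage
        common + 0.3 * rng.normal(size=500),       # e.g. A60 sizing result
        rng.normal(size=500),                      # an unrelated variable
    ])

    Xs = StandardScaler().fit_transform(X)         # standardise, i.e. PCA on correlations
    pca = PCA(n_components=2).fit(Xs)

    print("explained variance of PC1: %.1f%%" % (100 * pca.explained_variance_ratio_[0]))
    print("loadings on PC1:", np.round(pca.components_[0], 2))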


[Figure: loadings on PC 1 (69.76% explained variance) for the PCA subsystem; variables p009, p010, p019, p034, p036, p037, p092, p178, p179, p181, p182, p185 and p186.]

Figure 2: Loadings for modelling according to the PCA method (Principal Component Analysis)

Like PCA, PLS (partial least squares) modelling operates with a principal axis transformation of the covariance matrix. Unlike PCA, however, the PLS method allows models to be developed for selected output variables. Figure 3 shows the loadings for a PLS model for the sizing parameter A60 (wire side). The loadings for variables p009 (resin size), p010 (PAC), p019 (broke input), p034 (moisture in front of size press), p037 (ash) and p092 (steam consumption) correspond to the strong correlations of these variables with the output variable A60.
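A hedged Python sketch of such a PLS model for a single output is given below (scikit-learn's PLSRegression standing in for the MATLAB implementation; the seven inputs and their coefficients are purely illustrative):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(2)
    n = 400
    X = rng.normal(size=(n, 7))          # stand-ins for p009, p010, p019, p034, p036, p037, p092
    y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 6] + 0.2 * rng.normal(size=n)

    pls = PLSRegression(n_components=2)  # two latent variables as a simple starting point
    pls.fit(X, y)

    print("loadings on LV1:", np.round(pls.x_loadings_[:, 0], 2))
    print("R^2 on calibration data: %.2f" % pls.score(X, y))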


[Figure: loadings on LV 1 (73.24% explained variance); variables p009, p010, p019, p034, p036, p037 and p092.]

Figure 3: Loadings for modelling of variable p182 (A60 value wire side) according to the partial least squares (PLS) method

10.7.2.5 MODELLING OF SIZING

The correlation studies were initially conducted individually for all paper machine trials to obtain good individual models in this way. Once these calculations had been completed, the models were merged to form an overall model that combined all system reactions stimulated by the individual trials.

One simulation using this model is shown in Figure 4. The calculated value deviates only slightly from the measured value. Two important points must be taken into consideration in this context: the sizing rate was measured every 5 minutes while the trials were running, and the values in between these intervals are not known; the individual values were therefore connected by linear interpolation. The calculated values result from the available online data, which was collected from the system at a sampling rate of one minute. This means that one value can be calculated every minute. The fluctuations result from the fluctuations of the system as a whole and might have to be smoothed when the calculations are refined and improved. In conclusion, it is definitely possible to calculate the sizing rate on the basis of online data.
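The alignment of the 5-minute lab values with the 1-minute online data can be sketched in a few lines of Python; the times and values here are invented for illustration:

    import numpy as np

    t_lab = np.arange(0, 240, 5)               # lab sizing values taken every 5 minutes
    sizing_lab = 20 + np.sin(t_lab / 30.0)     # hypothetical measured sizing rates

    t_online = np.arange(0, 236)               # online data sampled every minute
    # Linear interpolation of the sparse lab values onto the online time grid,
    # giving one calibration target per online sample.
    sizing_target = np.interp(t_online, t_lab, sizing_lab)

    print(sizing_target[:5])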


[Figure: measured and predicted values for variable p182 (A60 wire side) over the sample scale.]

Figure 4: Comparison of measured and PLS predicted values for p182 (A60 value wire side). Bold line: predicted; thin line: measured value.

Other models were calibrated and validated using the trials' data. Figure 5 shows the A60 simulation results for all paper machine trials, i.e. data from the various trials was used to calibrate the PLS model. The validation (also shown) was based on steady state data from a normal production series in calendar week 9 (KW9). The sizing rate A60 is plotted along the y axis. It is possible to calculate the sizing rate satisfactorily, with only moderate deviations. The results were checked against data taken from different production runs; other grammages and one grade with lower additive requirements were used for this purpose. Calculation of the sizing quality on the basis of online data is also possible using these data. The fluctuations result mainly from the “noise” in the input channels. Similar simulation curves could be calculated for all parameters of sizing quality.


Figure 5: Comparison of measured and PLS predicted values for p182 (A60 value wire side) for all PM trials with a steady state validation range. Orange line: predicted; bold black line: measured value.

10.7.2.5.1 SIZING OPTIMIZATION TRIALS

The soft sensor that was developed for evaluating sizing quality can also be used to implement optimisation procedures. The optimisation algorithm is intended to provide new targets for selected input variables (correcting variables) that are to be manipulated.

In the first optimisation trials, the correcting variables that were chosen were the input dosage variables p09 (resin size) for internal sizing, p25 (starch solution size press) and p127 (surface sizing polymer size press) for surface sizing. Input variable p10 (PAC) is included in a fixed dosage ratio to p09. The aim of optimisation was to stabilise these correcting variables with an optimum ratio of surface to internal size so as to minimise the cost function K as defined by

K = \sum_t \frac{\left( p_{25}(t) + p_{127}(t) \right)^2}{2\, p_{09}(t)}

(the sum extends over all time periods of the observed time window).


This is intended to result in a reduction of the dosage of the surface sizing agent in favour of the lower-cost internal size, without any resultant loss in sizing quality.

To provide a constraint with respect to optimisation, the optimisation algorithm requires that the sizing quality characterised by the penetration values W, MAX and A60 must remain within predefined acceptance limits. To simplify the optimisation process, a multivariable regression model was used instead of the PLS model variant as the soft sensor model. The cost function was minimised under constraints with the help of the Monte Carlo “simulated annealing” search scheme.
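A minimal Python sketch of such a constrained simulated-annealing search is shown below; the linear soft-sensor stand-in, the A60 acceptance band and all coefficients are assumptions for illustration, not the actual PTS model:

    import numpy as np

    rng = np.random.default_rng(3)

    def a60_model(p09, p25, p127):
        # Placeholder linear regression soft sensor for the A60 value.
        return 0.5 * p09 + 0.2 * p25 + 0.3 * p127

    def cost(p09, p25, p127):
        # Cost per time step, favouring the lower-cost internal size p09.
        return (p25 + p127) ** 2 / (2.0 * p09)

    x = np.array([14.0, 6.0, 6.0])        # start: [p09, p25, p127], inside the A60 band
    best = x.copy()
    T = 1.0
    for step in range(5000):
        cand = x + rng.normal(scale=0.2, size=3)
        cand = np.clip(cand, 0.1, 50.0)   # dosages must stay positive and bounded
        if not (8.0 <= a60_model(*cand) <= 12.0):
            continue                      # reject moves violating the acceptance limits
        d = cost(*cand) - cost(*x)
        if d < 0 or rng.random() < np.exp(-d / T):
            x = cand                      # accept downhill moves, occasionally uphill ones
            if cost(*x) < cost(*best):
                best = x.copy()
        T *= 0.999                        # cooling schedule

    print("optimised dosages [p09, p25, p127]:", np.round(best, 2))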

Initial results for the sizing optimisation found in this way are shown in the diagrams in Figures 6 to 9, for a time frame of 100 minutes.

[Figure: total amount of K over time in minutes; simulation versus optimisation.]

Figure 6: Cumulated course of the cost function for simulated sizing (with and without optimization)

The cost function that was reduced during the optimisation procedure corresponds to intensified internal sizing and weakened surface sizing. In spite of this, the resulting sizing quality (shown in Figure 9 based on the calculated A60 value) might even be improved.


[Figure: dosage quantity [l/h] over time in minutes, measured versus optimised.]

Figure 7: Measured p09 (resin size) curve compared to optimized p09 dosage: optimization resulted in intensified internal sizing.

[Figure: manipulated input variable p127 (surface sizing polymer); dosage quantity [l/h] over time in minutes, measured versus optimised.]

Figure 8: Measured p127 (surface sizing polymer) curve compared with the optimized p127 dosage: optimization causes reduced surface sizing.


[Figure: A60 values over time in minutes; comparison of the simulated and optimised A60 curves against the lower limit, setpoint and upper limit.]

Figure 9: Sizing quality (A60 value) simulated with the soft sensor compared to the optimised A60 curve. Both curves are within the predefined acceptance limits; the surface to internal size ratio, however, was reduced during optimisation.

10.7.3 CONCLUSIONS AND OUTLOOK

The application example proved that parameters for sizing quality can be calculated from the online data recorded by the process control system, using a suitably calibrated soft sensor. It does not make sense, however, to include all available data in the calculation: the most important correlating factors should be determined by correlation analysis prior to modelling and then incorporated as input variables in the soft sensor models.

Still, many paper quality parameters can be measured only by laboratory analysis of reel samples, so raw material and process fluctuations cannot be balanced immediately; the consequence is off-specification production. Soft sensors for these quality parameters are well suited for ensuring constant paper quality. In a next stage of completion, the soft sensor can be further enhanced to calculate not only the quality parameters but also the optimum process parameters required for compliance with set values.


10.7.4 REFERENCES

Cho, B.-U., G. Garnier and M. Perrier: Dynamic modelling of retention and formation processes, Control Systems 2002, Swedish Pulp and Paper Research Institute, Stockholm, 2002, 200-204

Austin, P., J. Mack, D. Lovett, M. Wright and M. Terry: Aylesford PM 14 – Improved wet end stability using model predictive control, Paper Technology 43 (5), 41-44 (2002)

Brosilow, C. and B. Joseph: Techniques of Model-based Control, Prentice Hall PTR, 2002

Pickhardt, R. and K. Schulze: Verbesserte Stabilität der Nasspartie einer Papiermaschine, Allgemeine Papier Rundschau 126 (46), 25-30 (2002)

Hauge, T.A., R. Slora and B. Lie: Model predictive control of a Norske Skog Saugbrugs paper machine: Preliminary study, Control Systems 2002, Swedish Pulp and Paper Research Institute, Stockholm, 2002, 75-79

Yue, H., E. Brown, J. Jiao and H. Wang: On-line web formation control using stochastic distribution methods, Control Systems 2002, Swedish Pulp and Paper Research Institute, Stockholm, 2002, 318-322

Dhak, J., E. Dahlquist, K. Holmström, J. Ruiz, J. Belle and F. Goedsche: Developing a generic method for paper mill optimization, Control Systems 2004 Conference, 207-214

Fridén, H. and H. Näslund: Online decision support and optimization for kraft paper production, Control Systems 2006 Conference

Belle, J., J. Kappen, F. Goedsche and H.-A. Kührt: Simulation-based control of the sizing rate of resin sized paper, Professional Papermaking, No. 1/May 2005, 23-29

10.8 REAL-TIME PAPER WEB FORMATION CONTROL USING STOCHASTIC DISTRIBUTION CONTROL CONCEPT

Hong Yue and Hong Wang, Control Systems Centre, University of Manchester


10.8.1 INTRODUCTION

The modern papermaking industry is intrinsically complex in terms of both material and mechanical processes. Vast amounts of paper are produced at high speed with specialised machinery and different forming methods. However, no matter what type of paper is being produced, it is always important that the end product is as uniform as possible. This requires intensive monitoring and rigid control of the paper machine forming section. In general, the available types of forming section include twin-wire, roll, blade, hybrid and fourdrinier.

Physically, paper is formed when water drains from a suspension of fibres on or between specialised mesh fabrics. Formation is thus defined as the small-scale variation of fibre distribution within a sheet. The term ‘small-scale’ in this case implies that the size of variation is less than 100mm. The ideal distribution of the fibres in the formed sheet should be perfectly uniform [1]. However, this is commonly unattainable, because of the combined additive effects of the stock approach system, the random nature of component fibres, chemical additions, and the particular operating principles of the head-box and forming section, etc. The evolving importance of formation has spawned a great deal of studies concerning measurement and possible real-time control, so as to quantify and measure the mass density of paper and make it as close as possible to uniform distribution. However, due to the complicated nature of the model and the multiple effects of all the wet end variables on the web forming process, the effective closed loop control of the formation is not achievable at present.

Of course, one possible method that could be applied to achieve optimum formation would be to use the well-developed principle of basis weight control to attempt to realise a possible formation improvement. Basis weight is defined as the total oven-dry mass per unit area and can thus be used to reflect the distribution of solid material to some extent. In recent years, this method has been utilised with some success in the improvement of formation. However, since only a small portion of the input variables is considered in basis weight control, the actual improvements in formation achieved using these methods have been minimal to date. For example, if the retention polymers in the wet end are not precisely controlled, the subsequent retention of the solids will have high variability. Such variability will lead to concurrent variations in paper formation, which cannot realistically be corrected automatically by the existing basis weight control systems on present paper machines. A further difficulty is that current basis weight control relies on a scanning sensor that is installed after the drying section of paper machines. This causes a very long time delay between the wet end operation and the dry end monitoring. As such, it can be concluded that precise control of paper formation is still an unresolved and important problem, and hence an effective solution would be to

1) include all the possible variables in the wet end of paper machines (not only those used in present basis weight control systems), and to


2) utilise the recently available, rapidly developing wet end sensor technology (such as Webform) as an effective measurement device in the wet end, so that a local fast closed loop system can be constituted.

10.8.2 VARIABLES TO BE INCLUDED

Figure 1 shows a general diagrammatic structure of the wet end of a paper machine that has a fourdrinier single-wire forming process.

A = Head box
B = Slice and slice adjustment
C = Breast roll
D = Forming board
E = Hydrofoil assemblies
F = Table rolls
G = Vacuum box (to control drainage)
H = Dandy roll
J = Vacuum box
K = Suction couch roll
L = Wire turning roll
M = First return roll
N = Stretch roll
P = Guide roll
Q = Stretch roll
R = Forming fabric or wire

Figure 1 A fourdrinier wet end [2]

This part of the process consists of a headbox approach system, the head-box and a wire table. In the head-box approach system, fibres, fillers and some chemicals are added; in general there are 5% solids and 95% water inside the head-box. Indeed, the head-box can be regarded as a unit where a mixing of all the solid constituents with water should be achieved. This mixture of solids is then delivered onto a fast moving wire table through a careful control of the head-box slice lip geometry. The initial paper web is formed via the filtration of solids on the wire. As a result, most of the solids remain on the wire. During this filtration and water drainage process, bonding between fibres and fillers occurs. The actual filtration and drainage processes are also influenced by some machine operation variables such as foil angle and the suction power of the vacuum boxes located underneath the wire. In most cases, a visible 'dry line' will be apparent on the wire table, indicating the end of the primary filtration and drainage process. In this context, the process taking place between the slice lip of the head-box and the dry line should be regarded as the main paper formation process in papermaking. From this brief analysis, it can be concluded that paper formation is affected by a large number of variables in the wet end of paper machines. The major input variables to be considered include:

1) the quality of fibres and fillers;

2) chemical inputs;

3) machine operation variables;

4) variables which characterise fluid dynamics in filtration and drainage processes;

5) slice lip geometry and thick stock input.

In terms of fibre quality, all the variables in both the refining stages and stock preparation systems will have some effects (such as CSF, stock pH and conductivity, etc). Considering chemical inputs, variables such as retention aids should at least be considered, since too high a retention will adversely affect the formation process. Machine operation variables to be taken into account, should consist of all the foil angles, drainage parameters, vacuum power and machine speed. As for the slice lip geometry, all the cross-directional actuators should be included. In addition, due to the high operating speeds of commercial paper machines and the involvement of large scale movement of huge volumes of water, severe turbulent flow has an important impact on the formation process as well. The force generated by such turbulent flow is referred to as shear force in papermaking. In general, the greater the degree of turbulent flow, the worse will be the retention. This variation in retention will also have an effect on the formation process. To this end, it can be concluded that the formation system is subjected to a large number of variables during the wet end operation of paper machines.

As discussed, paper formation can be measured by the calculation of the solid distribution of finished paper. Since the final association of fibres and fillers in the finished sheet have a random nature as a consequence of their individual sizes and properties, this solid distribution is a 2D random variable controlled by the above mentioned inputs.

10.8.3 CONTROL OBJECTIVES

In terms of control strategies, minimum variance control is widely used in stochastic control systems, where the closed loop performance will be optimised if the controller has been precisely tuned. This is generally true for systems where the random inputs are either Gaussian or at least have symmetric probability density functions. For non-Gaussian systems, however, minimum variance alone cannot represent the performance of the closed loop system. This is simply because the spread of a non-symmetrical distribution cannot be described purely by the variance. As a result, a more general measure of uncertainty, namely the entropy, should be considered in order to characterise the closed loop performance of stochastic systems.
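This point is easy to illustrate numerically: in the Python sketch below, two sample sets share (approximately) the same variance, yet the skewed one has a clearly different entropy; the distributions are chosen for illustration only:

    import numpy as np

    rng = np.random.default_rng(4)

    def entropy(samples, bins=100):
        # Discrete entropy estimate from a normalised histogram, H = -sum p ln p.
        p, _ = np.histogram(samples, bins=bins)
        p = p[p > 0] / len(samples)
        return -np.sum(p * np.log(p))

    gauss = rng.normal(0.0, 1.0, 100_000)      # symmetric, variance 1
    expo = rng.exponential(1.0, 100_000)       # strongly skewed, variance also 1

    print("variances: %.2f %.2f" % (gauss.var(), expo.var()))
    print("entropies: %.2f %.2f" % (entropy(gauss), entropy(expo)))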

The aim of this work is to implement a new on-line entropy control strategy for the wet end section of the UMIST pilot paper machine, so as to demonstrate the achievable improvement of the web formation through the control of an input retention polymer.

10.8.4 EXPERIMENTAL SYSTEM

10.8.4.1 PILOT MACHINE AND EXPERIMENTAL CONDITIONS

All experiments have been carried out on the paper machine at the Department of Paper Science in UMIST. This is a typical Fourdrinier forming section with adjustable machine speeds of 3-50 m/minute. There are two presses, seven pre-dryers, and three after-dryers prior to reel up. For this paper machine, there is a beta gauge for basis weight measurement and a BTG backwater consistency meter for on-line retention measurement. The schematic of the real-time closed loop system for the web formation control is presented in Figure 2, where a digital camera is used to grab images of the sheet as it travels through the machine.

Figure 2 Schematic of the machine trial device.

Machine trials have been designed to study the effects of chemical addition (polymer) on the formation process with all other inputs remaining constant. Experimental conditions have been kept constant for all the machine trials to obtain consistent results. The pulp was a 70:30 blend of bleached sulfate chemical hardwood and softwood fibre respectively, which was known to give a fairly well formed sheet. The hardwood was Eno birch and the softwood was Lapponia pine, and the blend was refined at 120 kWh/tonne. The grammage was set to 60 g/m2 and the machine speed to 8 m/minute. The polymer used was a cationic polymer with a concentration of 0.05%.

10.8.4.2 DATA SAMPLING WITH DIGITAL CAMERA

The best measure of the formation process is the distribution of mass density, which is not easy to obtain during on-line trials. Instead, the greyscale distribution of the paper image was adopted as a means of assessing the web formation quality. It was therefore important to find a way to obtain the greyscale density information on-line. The concept here was to take images of the paper at each sampling time and then obtain the distribution of the greyscale density by means of image processing. Using the obtained image, the entropy calculation can be carried out within each sample period. Local variations in basis weight can be determined by measuring the attenuation of visible light through the sheet, provided the measurement occurs before the calender and taking into consideration the light scattering effects of both fibre and filler [3]. A Kodak Professional DCS 520 Series Digital Camera (based on the Canon EOS 1) was used for this purpose. It provides a fast frame rate (a continuous rate of 0.5 frames/second), good resolution and an image size of 2.1 million pixels (1728x1152).

The camera was equipped with an IEEE 1394 (Firewire) interface, which can be plugged directly into an industry standard interface available on most PCs, and facilitates high speed transfer of images. This fast image transferring capability will satisfy the on-line control requirements of the web formation process. The software development kit (PDC-SDK) includes source code libraries supporting the direct use of a DCS camera without Twain and Photoshop packages. The code libraries are in a variety of programming languages and can access commands for image compression and storage. With certain image processing programs and calculation techniques, the grayscale intensity distribution of the paper image can be obtained within seconds and thus provide the on-line output measurement of the control system. Positioning the camera near the end reel reduces the light scattering effect due to moisture in the sheet, although large time delays would thereby occur between control input and measurement output, due to the time taken for the sheet to travel down the machine.

Some problems with the illumination system employed were initially encountered when a basic 500-Watt industrial halogen floodlight was tried. The floodlight provided a powerful and robust light source and was also easy to install on the paper machine. However, it provided an uneven illumination over the sample area being imaged, and allowances had to be made during subsequent image correction. The improved method then developed was to add a piece of etched safety glass in front of the camera so as to produce a diffuse light source. This effectively eliminated the uneven illumination. Figure 3 shows the schematic of the optical sampling system ultimately devised.


Figure 3: Schematic of the optical sampling system

10.8.4.3 INPUT POLYMER

Modern paper machines have varied chemical additions for many different requirements, such as wet strength, retention, and de-foamers, etc. Since it would be impractical to include all chemicals in the initial trials, a cationic retention aid polymer has been used for this work, so as to provide a simple single chemical input.

This polymer was added into the pulp by a pump whose input flow is determined by the pump speed. As such, by setting the voltage of the DC motor in the pump, the flow rate can be adjusted. In this work, the voltage signal was generated from the PC platform and was sent to the pump by a D/A converter card. A program developed in Borland C provides the data acquisition.

10.8.5 PROCESS MODELLING

10.8.5.1 IMAGE PROCESSING

Once the image of a given area of paper was sampled into the PC platform, the greyscale distribution information was obtained and subsequently analysed by the image processing program. The actual procedure consisted of the following three steps, using the MATLAB Image Processing Toolbox:

1) read the image from a graphic file containing the greyscale intensity; the pixels of the image are stored as 8-bit data in an array,

2) form the histogram with N bins for the intensity image over a greyscale colour bar of length N; the default value of N is 256 for a greyscale image,


3) obtain the greyscale intensity distribution by normalizing the data of the histogram.

Figure 4 is a typical sample image obtained from the paper. Figure 5 shows the corresponding greyscale distribution of the image, in which n is the greyscale value, ranging from 0 to 255, and f(n) is the probability distribution function (pdf) of the greyscale. In fact, f(n) stands for the frequency of occurrence of the different greyscale values. It is apparent that the summation of all f(n) should be 1, that is

\sum_{n=0}^{255} f(n) = 1 \qquad (1)

Figure 4 Sample image

Figure 5 Greyscale distribution

10.8.5.2 CALCULATION OF ENTROPY

Entropy is defined as a measure of uncertainty about the occurrence or non-occurrence of stochastic events [4]. For example, if x is a discrete random variable taking the values x_i with probability

P\{x = x_i\} = p_i \qquad (2)

then the entropy of x is defined as

H(x) = -\sum_i p_i \ln p_i \qquad (3)

For the greyscale distribution of a paper image, the entropy is calculated from

H = -\sum_{n=0}^{255} f(n) \ln f(n) \qquad (4)


10.8.5.3 DIRECT STEP-RESPONSE IDENTIFICATION METHOD

Once the entropy is calculated at each sampling instant, a pair of input and output sequences can be obtained with the polymer addition as the input and the entropy as the output. This is a single input and single output system, where the purpose of controller design is to use the calculated entropy as a feedback signal so that the entropy of the closed loop system can follow a given entropy value. For this purpose, the model describing the mathematical relationship between the input polymer and the output entropy needs to be established. In this work, the well known step response test is used, which is a convenient way to characterise process dynamics because of its simple physical interpretation [5]. This test can be easily implemented in industrial applications. Typical ways to obtain the continuous-time transfer function models from a step response test include graphically based methods, area-based methods and some least squares identification methods [6]. Different from the existing methods, the direct step-response identification method used here has the following advantages:

1) Model parameters of the continuous transfer function are estimated directly, by developing a least squares regression form. No extra effort is needed for model transformation, so the precision of the result is increased.

2) The time delay item is considered in the construction of a parameter vector and can be estimated directly without prior knowledge.

3) The method is insensitive to data length. It does not need to wait for the process to enter into the new steady state completely. This will reduce the time of step response tests.

4) It is robust to measurement noise, because multi-point integrations are included.

The identification algorithm for the First-Order Plus Dead-Time (FOPDT) model is proposed in [7]; extensions to second-order and other common systems are provided in [6].

In terms of the current entropy control system, a FOPDT model

G_p(s) = \frac{k}{Ts + 1}\, e^{-Ls} \qquad (5)

is sufficient to approximate the process dynamics within the operating range. Suppose the process is under zero initial conditions before a step change of amplitude a is applied at t = 0. The regression form of the FOPDT model can then be formulated as

\left[ at \;\; -a \;\; -y(t) \right] \begin{bmatrix} k \\ kL \\ T \end{bmatrix} = A(t) - \delta(t) \qquad (6)


where A(t) = \int_0^t y(\tau)\, d\tau and \delta(t) is a noise term. Writing these linear equations at the sampling times t_i, i = m, m+1, \ldots, N, leads to the linear matrix equation

\Psi \theta = \Gamma + \Delta \qquad (7)

for t_i \geq L, where

\theta = [k \;\; kL \;\; T]^T

\Psi = \begin{bmatrix} at_m & -a & -y(t_m) \\ at_{m+1} & -a & -y(t_{m+1}) \\ \vdots & \vdots & \vdots \\ at_N & -a & -y(t_N) \end{bmatrix}

\Gamma = [A(t_m) \;\; A(t_{m+1}) \;\; \ldots \;\; A(t_N)]^T

and \Delta is the noise vector. Using this equation, the estimate of \theta can be obtained by the least squares estimation

\hat{\theta} = (\Psi^T \Psi)^{-1} \Psi^T \Gamma \qquad (8)
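A compact Python sketch of this identification step is given below; the synthetic step response uses assumed "true" parameters only so that the regression of equations (6)-(8) can be checked against them:

    import numpy as np

    # Simulate a FOPDT step response y(t) = k*a*(1 - exp(-(t - L)/T)) for t >= L.
    k_true, T_true, L_true, a = 0.3, 45.0, 150.0, 1.0
    t = np.arange(0.0, 600.0, 1.0)
    y = np.where(t >= L_true, k_true * a * (1 - np.exp(-(t - L_true) / T_true)), 0.0)
    y += np.random.default_rng(6).normal(scale=1e-3, size=t.size)  # measurement noise

    # A(t) = integral of y from 0 to t, by the trapezoidal rule (dt = 1).
    A = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]))))

    use = t >= L_true + 1.0   # regression rows must satisfy t_i >= L; the true L is
                              # used here only for the synthetic check, cf. eq. (7)
    Psi = np.column_stack([a * t[use], -a * np.ones(use.sum()), -y[use]])
    Gamma = A[use]

    theta, *_ = np.linalg.lstsq(Psi, Gamma, rcond=None)            # eq. (8)
    k_hat, kL_hat, T_hat = theta
    print("k = %.3f, L = %.1f, T = %.1f" % (k_hat, kL_hat / k_hat, T_hat))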

10.8.6 ROBUST PID CONTROLLER DESIGN

Although empirically or optimally tuned PID controllers used without considering robust performance may work well under certain nominal conditions, they cannot provide consistent performance in the case of large model errors. This is true for the formation control process, where the model uncertainties can be caused by many variables, including the variation of the stock chest level, the polymer concentration and the machine speed. The control method used for the web formation entropy control is therefore an IMC-based robust PID controller. This inherently robust PID controller design combines robust control techniques with conventional PID tuning, so that the ability to accommodate process uncertainties is increased effectively.

The objective of the robust PID controller is to achieve a minimum integral squared error (ISE) considering the entire closed loop response under the worst-case process dynamics, and to satisfy the performance ratio requirement. The performance ratio r is employed to adjust the speed of the closed loop system and its robustness.

r = \frac{\text{closed loop response time}}{\text{open loop response time}}

The optimisation is based on the min-max rule, that is, to achieve the best performance for the worst process situation. From the robustness point of view, the worst case means the least stable case within the uncertain range. The solution of the min-max design is less sensitive to model errors and improves the overall control performance over the entire range of model dynamics [8].

To solve this constrained optimization problem, a new method combining both numerical search and explicit optimization is proposed in [9], where the design procedures include the following steps:

1) Perform a general IMC-PID tuning for the nominal system, without yet incorporating the specified uncertainty measurements. For the FOPDT model (5) with the standard PID controller

G_c(s) = K_p \left( 1 + \frac{1}{T_i s} + T_d s \right) \qquad (9)

the IMC-PID tuning rules are

K_p = \frac{2T + L}{2K(\lambda + L)} \qquad (10)

T_i = T + \frac{L}{2} \qquad (11)

T_d = \frac{TL}{2T + L} \qquad (12)

The key point is to design the tuning parameter λ according to the performance ratio r. Since IMC control provides a certain robustness [10], the pre-design results are applicable in situations where the robustness requirement is not very strict or the model uncertainty is not large.

2) Determine the worst-case situation within the specified model uncertainty ranges. For the FOPDT model it is easy to find the worst case in terms of robust stability. Suppose the parameters are within the ranges K \in [K_{min}, K_{max}], T \in [T_{min}, T_{max}], L \in [L_{min}, L_{max}]; then the worst case is (K_{max}, T_{min}, L_{max}).

3) Use the numerical SIMPLEX method to search the min-max solution for the IMC tuning parameter λ with the initial value in step 1.

4) Design the robust IMC-PID parameters with the optimum λ and the min-max rule.

The proposed method provides a simple, stable and reliable solution to the robust PID control design problem.
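The tuning rules (10)-(12), evaluated at the worst case from step 2, can be written directly as a small Python function; the numerical values and the fixed λ below are illustrative assumptions (in [9], λ is obtained from the SIMPLEX min-max search):

    def imc_pid(K, T, L, lam):
        """IMC-PID tuning of a FOPDT model according to eqs. (10)-(12)."""
        Kp = (2 * T + L) / (2 * K * (lam + L))
        Ti = T + L / 2
        Td = T * L / (2 * T + L)
        return Kp, Ti, Td

    # Worst case within +/-25% uncertainty: largest gain and delay, smallest time constant.
    K_max, T_min, L_max = 0.30 * 1.25, 45.0 * 0.75, 150.0 * 1.25
    print(imc_pid(K_max, T_min, L_max, lam=60.0))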

10.8.7 REAL-TIME IMPLEMENTATION


10.8.7.1 OPEN LOOP STEP RESPONSE MODELLING

The dynamics of this process are determined from open loop step response tests within the operating ranges. The system input is the polymer flow added into the pulp, and the output to be measured is the entropy of the greyscale distribution of the paper image (see Figure 6).

Figure 6 Input & output of step response test

The open loop step response tests were repeated many times for different input amplitudes and at different operating conditions. The purpose of this was to observe the uncertainty of the model and, at the same time, to determine the input range over which a linear model is valid.

The calculated entropy data have been filtered with a time-lag unit to reduce the effects of noise. The auto-correlation result presented in Figure 7 shows the effectiveness of the filter.



Figure 7 Data filtering for modelling

The fitting curves from a step response test are shown in Figure 8.

Figure 8 Open loop step response curves.

The estimated process model is

G_p(s) = \frac{0.30}{45 s + 1}\, e^{-150 s}

with 25% parameter uncertainty. This implies that (k, T, L) might change by ±25% from the nominal model parameters under different operating conditions. The large time delay is introduced by the position of the camera, as discussed previously.


10.8.7.2 CLOSED LOOP TEST

At each sampling instant, an image of the selected paper area is produced by the camera, transferred to a PC, and saved as a *.tif file. The entropy of grayscale distribution is then calculated within the sampling time and applied as the on-line measured output. The sampling time was 8 seconds for the pilot machine trial.

The closed loop result is presented in Figure 9, with the PID controller tuned to K_p = 17, T_i = 120 seconds and T_d = 28 seconds.
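Schematically, each 8-second control interval can be sketched as follows in Python; the image grab is replaced by a placeholder function, and the positional PID form and the output limits are assumptions for illustration:

    import time

    Kp, Ti, Td, Ts = 17.0, 120.0, 28.0, 8.0      # tuning and sampling time from the trial

    def measure_entropy():
        # Placeholder: grab an image, compute the greyscale entropy (see 10.8.5).
        return 5.0

    setpoint, integral, prev_err = 4.6, 0.0, 0.0
    for _ in range(10):                          # a few control intervals
        err = setpoint - measure_entropy()
        integral += err * Ts
        deriv = (err - prev_err) / Ts
        u = Kp * (err + integral / Ti + Td * deriv)   # positional PID, cf. eq. (9)
        u = min(max(u, 0.0), 10.0)               # clamp to the D/A converter range
        prev_err = err
        # here u would be sent to the polymer pump via the D/A card
        time.sleep(0.01)                         # stand-in for the 8 s sampling period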

Figure 9 Closed loop results of pilot machine operation.

From Figure 9 it can be seen that the desired entropy response has been obtained, indicating that the uncertainty of the formation has been gradually reduced.

A user-friendly operator interface has been developed in the MATLAB environment (Figure 10), in which the open loop step response tests, data sampling, modelling, controller design and closed loop runs can be conveniently handled for pilot machine trials.


Figure 10 Main operation interface

10.8.8 CONCLUSIONS

In this paper, a new entropy control strategy has been implemented for web formation control at the wet end of a pilot paper machine. The innovative idea of entropy control for stochastic systems has been proposed and proved to be applicable to this system. The effectiveness of the new modelling method and the robust controller design method has been demonstrated through real-time tests. The current research concerns only the influence of the polymer on formation quality, which may be insufficient for the requirements of the real paper industry. Further research should consider the retention system with fibre as well as filler; other control inputs must then be considered and the control strategy will be considerably more complex.

10.8.9 REFERENCES

1. Trepanier, R.J., Jordan, B.D. & Nguyen, N.G. (1998). “Specific Perimeter: a statistic for assessing formation and print quality by image analysis”, Tappi Journal, 81(10), pp. 191-196.


2. Smook, G.A. (1994). Handbook for Pulp and Paper Technologists, 2nd edition, Angus Wilde Publications.

3. Landmark, P. & Joensberg, C., (1984). “Development of an On-line Formation Tester To Determine Optimal Use of Retention Aids”, Paper Trade Journal, No.9, pp.84-86.

4. Papoulis, A. (1991). “Probability, Random Variables, and Stochastic Processes”, Third Edition, McGraw-Hill, Inc., Singapore.

5. Astrom, K. J. & Hagglund, T. (1995). PID Controllers: Theory, Design and Tuning, 2nd edition, Instrument Society of America, USA.

6. Wang, X.Z., Yue, H., Gao, D. J. (2000). “Direct Identification of Continuous Models with Dead-Time/Zeros”, Proceedings of the 3rd Asian Control Conference, Shanghai, China, July 4-7, 2000, pp. 1709-1714.

7. Bi, Q., Cai, W.-J., Lee, E.-L., Wang, Q.-G., Hang, C. -C., Zhang, Y. (1999). “Robust Identification of First-Order Plus Dead-Time Model from Step response”, Control Engineering Practice, 7(1), pp. 71-77.

8. Honeywell Inc. (1995), Robust-PID User-Manual.

9. Yue, H., Gao, D. J. & Liu, S. (1999). “A Method for Improving Robustness of Industrial PID Controllers”, Chinese Journal of Automation, 11(3), pp. 263-269.

10. Morari, M. (1989). “Robust Process Control”, Prentice-Hall Inc., New Jersey.


10.9 OFF-LINE APPLICATIONS

10.9.1 TRAINING SIMULATORS

Erik Dahlquist, Mälardalen University, Vasteras, Sweden

10.9.1.1 OPERATOR TRAINING

One reason for using an operator training simulator can be to reduce the number of paper breaks or the down time of the process during the start-up phase of a new paper machine or pulp mill. As people with very little experience of paper machine or pulp mill operations are often hired for new green-field mills, it is very risky to start the new process if the operators have not received good training in advance. Here it is interesting to refer to a study done in the US on how much we remember of the information we are fed:

• 10% of what we see,

• 30% of what we see and hear simultaneously,

• 70% of what we also train on simultaneously, and

• close to 100% of what we repeatedly train on.

This is the reason for training on a dynamic process simulator: the operators can acquire very good knowledge even before the actual start-up of the mill.


If we make some rough estimates of the benefits a training simulator can bring, we may assume 10% higher production during the first month after starting up a new process line. For a paper mill this would mean approximately 400 USD/ton × 1000 tpd × 30 days × 10% = 1.2 MUSD in earnings. One paper break can be worth some 100,000 USD in lost production.

This is no guarantee of earnings, but a qualified estimate showing that this is not just a game for fun.

At MNI, Malaysian Newsprint Industries, the start-up to full production was achieved in a 20% shorter time period than “normal”, although operators with no previous experience of the pulp and paper industry were recruited. They had been trained during an eight-week period, spending half the day in the simulator and the other half out in the mill looking at the real hardware [Ryan et al 2002], [NOPS 1990].

10.9.1.2 TRAINING SIMULATOR SYSTEMS

Both ABB, with Simconx, and Honeywell, with SACDA, have simulator systems primarily intended for operator training. First-principles algorithms written in FORTRAN and C++ cover the process equipment and are linked in a sequence. Flows in and out of the equipment are determined from a separate pressure-flow calculation, which iterates every time step with the material balance calculations inside the algorithms; in these, chemical reactions, separation etc. are determined. The change in inventory in each piece of equipment is calculated, thus giving the dynamics. For valves, pumps etc. the process model interacts with the DCS controls. If the flow is too low, the control system sends a signal to open the valve; in the next time step the valve position becomes a little larger, the pressure drop becomes less and the flow increases. In this way the dynamics also include the DCS interactions.
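The time-stepping interaction described above can be caricatured with a single tank and a proportional level controller; the Python sketch below only illustrates the principle and is not the Simconx or SACDA algorithm:

    import math

    level, valve = 2.0, 0.5           # tank level [m], valve opening [0..1]
    setpoint, Kc, dt = 3.0, 0.4, 1.0  # level setpoint, controller gain, time step [s]
    inflow = 0.08                     # constant feed [m3/s], tank area of 1 m2 assumed

    for step in range(600):
        # Pressure-flow step: outflow depends on valve opening and static head.
        outflow = valve * 0.05 * math.sqrt(max(level, 0.0))
        # Material balance step: the change in inventory gives the dynamics.
        level += (inflow - outflow) * dt
        # Emulated DCS step: the controller adjusts the valve for the next time step.
        valve = min(max(valve + Kc * (level - setpoint) * dt * 0.01, 0.0), 1.0)

    print("level: %.2f m, valve opening: %.2f" % (level, valve))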

The DCS controls are run as emulated code in the simulator in these cases. This means that FORTRAN code is produced for the control functions and a translator transfers the DCS configuration into the simulator automatically. In reality, therefore, everything is run in the simulator and the actual process computers are not needed. In the modern versions everything runs on a PC with .NET, while earlier everything was run on an Alpha computer, for both simulators.

These simulators are the most advanced dynamic simulators on the market, where a complete plant with full controls can be operated in real time!

Simconx uses G2 as the graphical interface. In this way full object orientation can be achieved with inheritance of functions, although the code in each module is still “old-fashioned” FORTRAN. Modules for different equipment from the 1970s are operated together with those produced today, which is a great advantage; otherwise it would be too expensive to transfer code from one system to another, as major parts of the code would have to be rewritten.

When the system is used for operator training, the teacher sits on the other side of a glass window, so that the teacher can see the trainee but not vice versa. The teacher can then introduce actions without the operator noticing anything beyond what is seen on the process displays.

Figure 1 The configuration of ABB's training simulator, including the DCS system, the process simulator, the iGES (G2) graphical configuration and engineering interface, and the relation to the process.

A typical process display used in operator training is shown below, here for a digester plant in the pulp and paper industry (Figure 2).

[Figure 1 diagram elements: ABB DCS Simulator with Advant Controller AC 450, Operator Station PC, MB 300 network and AS 515 I/O; a "real process" side versus a "simulated process" side, where the process model and emulated controls replace the real process; operator and instructor stations with the same P&I process displays as in the real plant; ABB Simconx and the iGES graphical user interface, connected via TCP/IP.]


Figure 2. Typical operator display from a process industry; in this case from a digester in the pulp and paper industry.

10.9.1.3 TRAINING PROCEDURE FOR OPERATOR TRAINING ON A SIMULATOR

In this section a training program for operators is presented.

PULP MILL:

1. Start bringing up one display, e.g. the Pulper. Explain the functionality of the DCS display first:

- Alarm lists and acknowledgement

- Event list

- MOTCON, the motor controller.

- What tools?

- Object display


- Sequences

- Group starts

- VALVECON, the valve controller

- The same

- PIDCON, the PID controller

- The same

- Also trend log for e.g. Flow rate

- Let the operator build a trend log, following a detailed instruction

- Recipe

- Show how you bring up on screen

- Show how you enter new values

2. Select a sequence and run through

- Pulper 1

- Explain what happens, along the sequence

3. Trip a motor, e.g. the pulper rotor (stirrer)

- Show the alarms, and the alarm list

- Group start and sequence display

- Object display, for the tripped motor

- Show that the motor stops because of too heavy a load, caused by too high a concentration

- The fault can be that the level sensor gave a wrong level. Assume this is fixed.

- Add some water, and then restart pulper

4. Now go through the whole process from pulper to storage tower, for the pulp mill.

- Look at values and what the displays look like, when everything is OK.

- Explain what is happening on each display

- Explain, that on the displays, a lot can be seen, but not everything


- Exemplify the latter by shutting off the reject totally for a series of screens, and show that actually not much is seen. You have to go out in the mill to see whether reject is actually coming out; this should be done now and then.

- Explain the impact if there is no reject: a risk that “shit” passes through, causing more pin-holes in the final paper product, although far down the line.

5. Set a screen rotor stopping

- Look at the pressures and flows, what happens

- Tell that now you have to go out and clean the screen.

- Restart, by going back to an original initial condition.

6. Change the flow rate in the whole line slowly, by 20%.

- Look at some flow meter trend of some main stream, e.g. to flotation or in the cleaner line.

7. Go back again

- Follow trend of some main stream, especially in to flotation and in the cleaner line.

8. Start drifting a flow meter. Don't tell what you are doing right now. After a certain time period, have a look at what is going into flotation and what is going into the cleaner line.

- Ask why one is flat and the other goes down. Explain that this is due to drift in the sensor.

9. Shut down the process

- Start with the pulper. Don't feed anything more.

- Then stop the process feeding, filling in with dilution water for an easy start up again, in pumps, cleaners and screens.

- Then stop according to written instruction

10. Start up a process from initial conditions, where everything is filled with water.

11. Do some more faults:

- Seal water failure - motor stops in e.g. screen

- Too high concentration in a screen – it clogs and has to be cleaned.

- One pump in flotation stops – all pumps stop.

- Clog the reject of 15% of the cleaners in a battery. What happens?

- Too high level in a tower. What to do? (May need a replay, to run in a different way.)


PAPER MILL

1. Start bringing up one display, e.g. the mixing chest. Explain the functionality of the DCS display first:

- Alarm lists and acknowledgement

- Event list

- MOTCON, the motor controller.

- What tools?

- Object display

- Sequences

- Group starts

- VALVECON, the valve controller

- The same

- PIDCON, the PID controller

- The same

- Also trend log for e.g. Flow rate

- Let the operator build a trend log, following a detailed instruction

- Recipe

- Show how you bring up on screen

- Show how you enter new values

2. Select a sequence and run through

- Decide suitable.

- Explain what happens, along the sequence

3. Trip a motor, e.g. the agitator rotor (stirrer)

- Show the alarms, and the alarm list

- Group start and sequence display

- Object display, for the tripped motor


- Show that the motor stops because there is no seal water

- Turn on the seal water. Restart.

4. Now go through the whole process line for the paper mill.

- Look at values and what the displays look like, when everything is OK.

- Explain what is happening on each display

- Explain, that on the displays, a lot can be seen, but not everything

- Exemplify the latter by shutting off the reject totally for a series of screens, and show that actually not much is seen. You have to go out in the mill to see whether reject is actually coming out; this should be done now and then.

- Explain the impact if there is no reject: a risk that “shit” passes through, causing more pin-holes in the final paper product, although far down the line.

5. Set a screen rotor stopping

- Look at the pressures and flows, what happens

- Tell that now you have to go out and clean the screen.

- Restart, by going back to an original initial condition.

6. Change the flow rate in the whole line slowly, by 20%.

- Look at some flow meter trend of some main stream, e.g. from white water tank 1.

7. Go back again

- Follow trend of some main stream, as above.

8. Start drifting a flow meter. Don't tell what you are doing right now. After a certain time period, have a look at what is going into flotation and what is going into the cleaner line.

- Ask why one is flat and the other goes down. Explain that this is due to drift in the sensor.

9. Shut down the process

- Start with the feed from the pulp mill.

- Then stop according to written instruction (feed to wire, steam etc)

10. Start up a process from initial conditions, where everything is filled with water or pulp.

11. Do some other failures, like


- Shower water failure. Explain that this will cause the wire to clog faster if not adjusted. Production will then be affected, and the wire may even have to be changed.

- No vacuum in the deculator, indicated by the instructor as too many pin-holes. Why? What caused the effect? Look at the vacuum etc.

- Too high level in the broke tower. How to react? Balance to avoid it (do a replay).

10.9.2 MNI EXPERIENCE WITH PROCESS SIMULATION

PAPER PRESENTED AT ASIA PAPER 2002 IN SINGAPORE

Kevin Ryan, Technical Manager, Malaysian Newsprint Industries, and Erik Dahlquist, Malardalen University (at ABB when the paper was first presented)

• In 1997 MNI took over the Genting Sanyen newsprint project.

• This meant that MNI acquired a completed design and all the major equipment from Voith Sulzer, with the DCS from ABB. Consequently, the projected time line to start-up was relatively short. In a normal green-field project the critical path is the delivery time of the paper machinery, but in MNI's case it was the main process building.

• In addition, the senior management of MNI required a world-class start-up of the mill. The start-up target was based on previous start-ups of comparable paper machines.

• It was also soon clear that MNI could not hire enough experienced pulp and paper operators from within Malaysia, so the only viable option was to train up a team to meet the challenge of the start-up curve.

• The task of the Operation Mill Team (OMT) was to deliver an operating team to meet the startup curve.

• Thus an extensive training program was to be developed, of which a significant part was the ABB DCS simulator. Consequently, a contract was awarded to ABB Industrial Systems in June 1997 to supply a DCS simulator package, to be delivered in June of the following year.

• Two process engineers left for Sweden to assist in the development. At the same time they would gain the experience needed to operate the simulator during the later operator training.

• The most important objective was to deliver a simulator on time to successfully train the DCS operators in all aspects of operating both the process and the DCS system.


• As some of the trainees had no previous exposure to a DCS system, the simulator was used to train the operators in the basics of the ABB Advant system. This included:

navigation, trending, menus etc., and the meaning of Manual, Automatic and E1 (Cascade)

• Familiarise the trainees with how the process is represented on the DCS system,

e.g. what the controls for each item are and where they are located, and how the control loops are presented

• Equally important was to train the operators in:

• How to put the plant in the correct start-up condition

• How to start the process using the group start up loops

• For these reasons a conscious decision was made to give priority to those elements in the DCS that were visible to the DCS operator.

Main objectives for the simulator:

• First Priority : Train Operators

How to use the DCS system.

Learn the display layouts

How unit processes react.

Plant Start-up

Plant Shut down

• Second Priority: Process Optimization

E.g. Water Balance problems

Confirm Operating Procedures etc

MNI Newsprint Paper Machine & Deinking line

• Separate models for RCF and PM

• 20 P&IDs

• 3 000 process elements

• 7000 I/O points

• 410 PID controllers

• 600 pumps and agitators


• 500 valves

• 200 process displays


Figure 1 View of Operator Consoles and AC450 Controllers in the background


Figure 2 View of the engineering computer and Alpha Computer with the simulator models

One of the most important aspects for us as the customer was to define the limits of the simulation.

This usually takes the form of deciding which processes should be simulated and what information about the streams should be included in the simulator model.

During the development of the simulator it quickly became apparent that it would be impractical to simulate all the physical processes, especially for the paper machine compared with the RCF plant.

The reasons for being unable to simulate something varied:

Too complicated

Commercially sensitive

Information doesn’t exist

Not cost effective

An example: the simulation of the paper machine was complicated by the PLC control of certain items of the paper machine, such as the on-line calenders or the PLC controlling the amount of crown on the press rolls. In the end a decision was made to eliminate these from the model.

The most important rule that we used to make these decisions was to determine whether the operator could observe the effect of the simulation. Effort was put into things that are material to the operator, e.g. alarms.

The other major area to consider for the simulation is the number and type of stream parameters that should be modelled. In general, properties that are conserved, such as mass and energy, are very easy. However, properties such as pH, pulp freeness and ink speck count are not conserved and depend on other physical properties that are not well defined.

Thus the choice of the stream parameters to be modelled needs to balance the advantages against the difficulty involved.

What is required for modelling both physical processes and stream properties is that there must be a reason why you want to model the parameter. In other words: be realistic, not everything is worth simulating.

• Resistance was encountered from equipment suppliers about supplying detailed design information on the performance of unit processes such as pressure screens, cleaners and other process equipment.

• In the final analysis, for the simulator to achieve the objective of training the operators, detailed knowledge is generally not required.

• We found a large number of errors in the configuration of the DCS system. These we communicated back to the ABB/Voith DCS people for rectification. The importance of correcting these errors is that the majority of them would have slowed down the commissioning of the plant during the water runs, or worse still the initial fibre runs. Typically these errors were things such as reversed controllers and reversed logic connections.

We also found that complete sections of the DCS configuration were missing or did not correspond with other sections, e.g. the bleaching tower controls.

Another aspect is that developing the DCS simulator forced the people involved, such as the pulp mill manager, myself as the technical manager, and the process engineers working on the development of the simulator, to develop a much deeper understanding of the way the control system was designed to operate. This information is simply not available on the P&IDs. It also forces you to determine exactly how a piece of equipment has been configured to operate (almost a rigorous process for teaching both the operators and the management about the process).

Everyone involved in the development process gained a detailed knowledge of how the plant would run before the plant start-up.


Figure 3. Process layout for the flotation unit with the oversized valve

• When commissioning the simulator, the simulation would keep shutting down on a reverse flow alarm.

• The reverse flow alarm is there to prevent stock from flowing back into the filtrate tank if the amount of stock flowing forward should exceed the pumping capacity of the pump.

• To control the level in the flotation cell, the valve on the suction side of the screen feed pump would open up, increasing the supply of stock. Since there is a large amount of water in the flotation cell, the flow through the valve must increase greatly to bring down the level within a reasonable time. This resulted in more stock than the pump could handle, and the excess would then flow back to the filtrate tank and generate a reverse flow alarm.

• The solution was to clamp the opening of the valve at a maximum value, as sketched below. This prevents a flow that is too high for the pump and hence prevents a reverse flow alarm.
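To make the fix concrete, the following minimal sketch (in Python) shows a PI level controller whose output is clamped so that the valve can never demand more flow than the screen feed pump can deliver. The gains, limits and names are illustrative assumptions, not the actual ABB DCS configuration:

MAX_OPENING = 0.65   # assumed clamp that keeps the flow within pump capacity

def pi_level_controller(setpoint, level, integral, kp=2.0, ki=0.1, dt=1.0):
    error = level - setpoint              # high level -> open the valve more
    integral += error * dt
    opening = kp * error + ki * integral
    # Anti-reverse-flow clamp: never open beyond what the pump can handle.
    opening = min(max(opening, 0.0), MAX_OPENING)
    return opening, integral

opening, i_term = pi_level_controller(setpoint=0.8, level=0.95, integral=0.0)
print(opening)

A real implementation would also need anti-windup, i.e. the integral term should stop accumulating while the output sits at the clamp.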

• The training program started with a basic induction course to ensure that everyone had a basic level of English and other educational skills.

• This was followed by an introduction to pulp and paper, designed to give the students a basic understanding of unit processes such as pulping, screening and cleaning.

[Figure 3 labels: flotation cells, screen feed pump, level transmitter (L.T.), reverse flow alarm, feed from the white water filtrate tank, and flows to the fine slot screens and to secondary flotation.]


• An external company (Transtech Interactive), specializing in producing training materials, was hired to produce manuals for the RCF and papermaking processes.

• The people then spent approximately six weeks at one of the company's sister mills to gain basic experience in their respective areas.

• This program was then completed with an extensive program on the training simulator.

Simulator training was organised into small teams of three to four people to maximise each plant operator's hands-on experience with the simulator.

To get through all the scheduled training, the teams were scheduled 24 hours per day.

Things that were practised on the simulator:

A) Start-up and Shut down both lines or shut down line 1 only

B) Change from operating 2 lines down to 1 line only and vice versa.

C) Change the production rate by ramping up or down the production rate

The actual training was very well received by the operators. I (Kevin) believe that this was because the simulator was interactive and absorbed the operators' interest; in fact, they often requested additional simulator time.

Simulated faults introduced into the system required more thought than we had originally estimated, since:

if the fault was too severe, the process interlocks would shut the process down immediately;

and if the fault was too minor, the DCS operator usually had very little or nothing to indicate that there was a fault in the system at all.

It was very clear that the combination of the training program and the simulator resulted in a team of operators that was very well trained and prepared for the start-up.

It was also clear that the simulator increased the confidence of the operators: if they could run the simulator, then they could run the real thing.

One of the definite benefits was the increased confidence of the operators when faced with commissioning. This allowed the operators to keep up with the commissioning personnel rather than play catch-up. Usually the commissioning people are concerned with getting the plant up and running rather than with training and coaching the operators. Comments from the equipment suppliers indicate that they were impressed with the understanding that the operators had of the process.

Conclusions:

• Start the process as early as possible, for a number of reasons:


• Any design error that the simulator finds is likely to be more expensive to fix the further the project has progressed.

• Decide what the simulator is needed for. This will affect decisions on the type and extent of the process units and stream parameters that require simulation.

• Give adequate time to develop the training programs; they do take time.

• To get the best use of the simulator, it has to be incorporated into an extensive training program. The simulator is a very good way to bring all the training together before starting to practise on the very expensive, and very easily damaged, real equipment.

For MNI the process of development was important in fully understanding the RCF and PM plants. It also meant that the production manager, the technical manager and two process engineers learnt both the process and the interaction with the DCS system as they first made the training program, and later performed the training with the operators.

MNI achieved the target production start-up curve. This meant reaching full production 20% faster than is normal at an Australian mill with experienced staff!

10.8 USING SIMULATION TO TUNE A SENSITIVITY INDICATOR

Kauko Leiviskä, University of Oulu, Control Engineering Laboratory, Oulu, Finland

Timo Ahola, Outokumpu Stainless Oy, Tornio, Finland

Introduction

Paper web breaks account for 2-7 percent of the total production loss, depending on the paper machine type and its operation. For a modern paper machine with production of 330 000 tons a year, this means 1.5-6 million Euros lost in a year (Ahola 2005).

Computational break prediction systems have been studied. Various signal processing methods have been used in data classification: high and low signal levels, abrupt changes and rapid trends and bumps (Suojärvi et al., 1996, Ihalainen et al., 1997). The main problem with signal processing methods is that correlations change in time and from one machine to another – so some kind of adaptation is required. There are some reports (Hokkanen 1996; Li and Kwok 2000; Ihalainen 2000) where PCA has been used successfully for the classification of break situations. Independent component analysis (ICA) was tested in (van Ast and Ruusunen 2004). Reported neural network applications utilise back-propagation network (Miyanishi & Shimada 1998) and Kohonen Self-Organizing Map (Sorsa et al. 1992).

Sensitivity indicator


The web break sensitivity indicator was developed as a Case-Based Reasoning type application with the Linguistic Equations approach and fuzzy logic (Ahola 2005). The case base contains case models with different numbers of breaks (Ahola 2005, Ahola and Leiviskä 2005). The system requires a lot of data pre-processing and modelling calculations, but these are introduced elsewhere. The hierarchical model structure is shown in Figure 1.

Figure 1. The model structure. Categories here refer to the number of breaks during the 24 hours modelling period. Category 1 means no breaks and Category 7 a lot of breaks.

The calculations start when a new set of measurements comes from the paper machine. The system determines the best fitting web break category and the break sensitivity. The best fitting category is selected using a simple fuzzy strategy: if an equation is true, the degree of membership for the equation is one, and all deviations reduce this degree according to a triangular membership function. In the original system, the degree of membership for each case is evaluated by taking the weighted average of the degrees of membership of the individual equations:

\mu_{c_i} = \frac{\sum_{j=1}^{n_{e_i}} w_{e_j}\, \mu_{e_j}}{n_{e_i}}, \qquad 0 \le \mu_{c_i} \le 1

where ci refers to the ith case, ej to the jth equation and nei is the number of equations in the ith case.

The degree of membership for each web break category is calculated from the degrees of membership for all the cases included in the category. Once again, the weighted average is used.


\mu_{cat(k)} = \frac{\sum_{i=1}^{n_{c_i}} w_{c_i}\, \mu_{c_i}}{n_{c_i}}, \qquad 0 \le \mu_{cat(k)} \le 1

Here nci is the number of cases in the category.

Finally, the weighted average method uses certain risk levels for the break categories: e.g. in the case of five categories, 0 for no breaks, 0.25 for a few breaks, 0.5 for normal, 0.75 for many breaks and 1 for a lot of breaks. These risk levels are weighted with the memberships in the break categories, and the break sensitivity is calculated as the weighted average (Ahola 2005).
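A minimal sketch of this calculation chain in Python, assuming five categories with the risk levels given above; the membership values, weights and function names are illustrative, not from the Ahola system:

def weighted_average(memberships, weights):
    # mu = sum(w * mu) / n, as in the equations above
    n = len(memberships)
    return sum(w * m for w, m in zip(weights, memberships)) / n

# Degree of membership for one case from its equations (all weights 1 here).
case_mu = weighted_average([0.9, 0.7, 0.8], [1.0, 1.0, 1.0])

# Assumed degrees of membership in the five break categories.
category_mu = {"no breaks": 0.2, "few": 0.5, "normal": 0.8,
               "many": 0.4, "a lot": 0.1}
risk = {"no breaks": 0.0, "few": 0.25, "normal": 0.5,
        "many": 0.75, "a lot": 1.0}

# Break sensitivity as the membership-weighted average of the risk levels.
sensitivity = (sum(category_mu[c] * risk[c] for c in risk)
               / sum(category_mu.values()))
print(sensitivity)    # about 0.46 for these made-up memberships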

Simulation

The indicator was tested off-line with a simple Simulink model. The indicator operates as a Matlab function and uses measurement data collected from the paper machine as input. As output the indicator gives memberships in the break categories and the break sensitivity calculated from these memberships. (Ahola et al. 2002).

In practice, simulation was used to tune the model and improve its sensitivity to changes in the process variables. This was mainly done by testing different alternatives for the fuzzy calculations shown above. On-line tests showed very soon that with the weighted average method it was impossible to distinguish between the break categories, and the results tended to stay close to the average value of 0.5, as shown in Figure 2.

Figure 2. Example of results using the weighted average method (Ahola 2005). White vertical lines show the actual breaks, the black dashed line the break sensitivity calculated from the actual breaks, and the solid black line the predicted breaks.

Three other methods were tested by simulation (Ahola 2005). In the first method, the biggest membership was given the weight 1, the second the weight 1/3, the third the weight 1/9, and so on. In the second method, only the biggest membership was used. Finally, the third method was based on the biggest adjoining memberships: it starts from the biggest membership and adds to it the effect of the two memberships closest to it. The biggest membership gets the weight 2/3 and the average of the two adjoining memberships the weight 1/3.
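A sketch of the three alternative schemes, under the assumption that the weights are applied to the category memberships when averaging the risk levels (the exact combination rule is not spelled out above, and all names and numbers are illustrative):

risks = [0.0, 0.25, 0.5, 0.75, 1.0]    # risk level per break category
mu    = [0.2, 0.5, 0.8, 0.4, 0.1]      # assumed memberships in the categories

def method1_geometric(mu, risks):
    # Biggest membership gets weight 1, the next 1/3, then 1/9, and so on.
    order = sorted(range(len(mu)), key=lambda i: mu[i], reverse=True)
    w = [0.0] * len(mu)
    for rank, i in enumerate(order):
        w[i] = (1.0 / 3.0) ** rank
    num = sum(w[i] * mu[i] * risks[i] for i in range(len(mu)))
    den = sum(w[i] * mu[i] for i in range(len(mu)))
    return num / den

def method2_max(mu, risks):
    # Only the category with the biggest membership is used.
    return risks[max(range(len(mu)), key=lambda i: mu[i])]

def method3_adjoining(mu, risks):
    # Biggest membership weighted 2/3, average of its two neighbours 1/3.
    i = max(range(len(mu)), key=lambda i: mu[i])
    neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(mu)]
    adjacent = sum(risks[j] for j in neighbours) / len(neighbours)
    return (2.0 / 3.0) * risks[i] + (1.0 / 3.0) * adjacent

print(method1_geometric(mu, risks), method2_max(mu, risks),
      method3_adjoining(mu, risks))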


The selection of the correct calculation method improved the results markedly, as Figure 3 shows. It should also be noted that in this case the output was the average of calculated values for 120 minutes.

Figure 3. Results when using the maximum membership as the basis of the calculations (Ahola 2005). Here the vertical lines show the actual breaks, the white dashed line the break sensitivity calculated from the actual breaks, and the thicker black line the predicted number of breaks in 24 hours.

References

Ahola T. (2005) Intelligent estimation of web break sensitivity in paper machines. Acta Universitatis Ouluensis C232. University of Oulu. http://herkules.oulu.fi/isbn9514279573

Ahola T., Juuso E. and Leiviskä K. (2006) Variable selection and grouping in paper machine application. Proceedings of ALSIS’06 - The 1st IFAC Workshop on Applications of Large Scale Industrial Systems. August 30-31, Finland. Also in: XXX

Ahola T., Juuso E.K. and Oinonen K. (1997) Data analysis for web break sensitivity indicator. Proceedings of TOOLMET '97 - Tool Environments and Development Methods for Intelligent Systems, Oulu, 1997, pp. 150-159.

Ahola T., Kumpula H. and Juuso E. (2002) Simulation of Web Break Sensitivity. In Proceedings of SIMS 2002- The 43rd Conference on Simulation and Modelling. 26-27 September 2002, Oulu, Finland, pp. 178-184.

Ahola T. and Leiviskä K. (2005) Case-based reasoning in web break sensitivity evaluation in a paper machine. Journal of Advanced Computational Intelligence and Intelligent Informatics. 9(2005)5.

Ihalainen H., Paulapuro H., Rislakki M., Ritala R. and Suojärvi M. (1997) Novel method to analyse relationship between web breaks and process variability. 47th Canadian Chemical Engineering Conference, Edmonton, Alberta, Canada, 5.-8.10.1997. University of Alberta, Abstract.

Ihalainen I.J. (2000) Determination of runnability factors using on-line measurements on a newsprint machine. M.Sc. Thesis. Helsinki University of Technology, Department of Forest Products Technology, Espoo 2000.


Li I.S., Kwok K.E. (2000) Prevention of sheetbreaks using multi-variate statistical analysis in an expert system environment. Pulp & Paper Canada 101(11):T336-340 (November 2000).

Miyanishi T., Shimada H. (1998) Using neural networks to diagnose web breaks on a newsprint paper machine. Tappi Journal, VOL. 81:NO.9, September 1998.

Sorsa T., Koivo H. N. and Korhonen R. (1992) Application of neural networks in the detection of breaks in paper machine. Preprints of the IFAC Symposium On-line Fault Detection and Supervision in the Chemical Process Industries, Newark, Delaware, USA, April 22-24, 1992, pp. 162-167.

Suojärvi M., Ritala R., Ihalainen H. and Jokinen H. (1996) Finding significant changes from process data. The yearbook of Finnish Statistical Society 1996, part I, pp 75-82. Helsinki, Finland: Finnish Statistical Society, 1996, 255 pp.

van Ast J. and Ruusunen M. (2004) A guide to independent component analysis – Theory and practice. Report A No 23. University of Oulu, Control Engineering Laboratory. Oulu 2004. 53 pp.


CHAPTER 11 APPLICATIONS IN UTILITIES

11.1 SOFT SENSORS IN WASTE WATER TREATMENT

Kauko Leiviska, Oulu University, Oulu, Finland

Flotation is a wastewater purification process which uses air bubbles to remove impurities from water. Usually, pressurized air is supplied to the reactor, forming air bubbles due to the significant pressure drop. Impurities attach to the bubbles and rise to the water surface. Chemicals are used to make the process more efficient; basically, they bind impurities together, thus increasing the amount of impurities attached to an air bubble. In such a water purification process, quality variations in the inlet water have a great influence on the process efficiency. It should also be noted that the chemicals used are often quite expensive, so overdosing is not desirable. (Ainali et al. 2002)

An indirect measurement was developed for a flotation unit that gave a quality index for the inlet water. The calculation of the quality index was based on the deviation of the process behaviour from a typical process model, developed with linguistic equations. The indirect measurement not only gave valuable information for process modelling but was also used in process control: a feed-forward controller was developed for the flotation unit, with a control strategy based solely on the water quality index and expert knowledge. (Ainali et al. 2002)
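As an illustration of the idea, the sketch below computes a quality index from the deviation between a measured output and a "typical" reference model, and uses it in a feed-forward dosing rule. A simple linear model stands in for the linguistic equations, whose details are not reproduced here; all names and coefficients are invented:

def typical_model(flow, chem_dose):
    # Reference prediction of outlet turbidity under "typical" behaviour.
    return 5.0 + 0.002 * flow - 0.8 * chem_dose

def quality_index(measured_turbidity, flow, chem_dose, scale=2.0):
    # Index near 1.0 = typical inlet water; towards 0.0 = poor quality.
    deviation = abs(measured_turbidity - typical_model(flow, chem_dose))
    return max(0.0, 1.0 - deviation / scale)

def feedforward_dose(base_dose, index):
    # Expert rule: dose more chemicals when the inlet water quality is poor.
    return base_dose * (1.0 + 0.5 * (1.0 - index))

qi = quality_index(measured_turbidity=6.4, flow=1200.0, chem_dose=1.5)
print(qi, feedforward_dose(base_dose=1.5, index=qi))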

In the literature, the uses of interval observers and neural networks have been reported (Choi and Park 2001, Hajd-Sadok and Gouzé 2001, Alcaraz-Gonzáles et al. 2002).

11.2 SOFT SENSORS IN RECOVERY BOILER

Kauko Leiviska, Oulu University, Oulu, Finland

A software sensor for the recovery boiler used in the pulp and paper industry was developed in (Murtovaara et al. 1996). The aim was to further improve the edge detection capability, the image processing algorithms and the tuning of the threshold parameters in an existing commercial system for image analysis of the recovery boiler. These directly affect the search for the contour of the bed. The resulting improvements were to be embedded in the existing system.

The efficiency of the edge detection was improved by developing a fuzzy algorithm. It presented two improvements compared to the existing system. The first was to utilise the


previous contours in the analysis. The algorithm generated and updated membership functions for each contour point on the basis of the history data, and then defuzzified the resulting fuzzy numbers into a new contour. Defuzzification was based on the centre of gravity method. According to the tests, this algorithm filtered out fast changes of the contour. The second improvement was to include the adjacent (neighbouring) points in the analysis, which corresponds to filtering in the space dimension. If the distance between the predicted point and the point given by the image processing exceeded a limit value, the algorithm eliminated the point from the edge.

The horizon of the history data also affects the performance of the algorithm. A suitable number of contours in the history data was 10, since the effective changes in the state of the burning process are slow, a typical time constant being of the order of minutes. The fuzzy algorithm was developed and tested in Matlab® and later coded in C as an individual module in the actual commercial system. It was tested and found to filter out fast changes and to improve the edge detection capability.
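The following sketch captures the two filtering ideas: time filtering against a history of about 10 contours, and space filtering over the neighbouring points, with rejection of points that deviate too much from the prediction. A plain weighted average stands in for the fuzzy membership and centre-of-gravity steps, and the names, weights and threshold are illustrative:

HISTORY = 10    # number of past contours kept, as suggested above
LIMIT   = 15.0  # assumed maximum allowed distance from the prediction

def filter_contour(new_contour, history):
    # new_contour: list of edge heights; history: list of earlier contours.
    filtered = []
    for i, point in enumerate(new_contour):
        # Time filtering: predict this point from its history.
        past = [c[i] for c in history[-HISTORY:]]
        predicted = sum(past) / len(past)
        # Space filtering: include the adjacent points of the new contour.
        neigh = [new_contour[j] for j in (i - 1, i + 1)
                 if 0 <= j < len(new_contour)]
        spatial = sum(neigh) / len(neigh)
        if abs(point - predicted) > LIMIT:
            filtered.append(predicted)    # eliminate the point from the edge
        else:
            filtered.append(0.5 * point + 0.25 * predicted + 0.25 * spatial)
    return filtered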

11.3 SOFT SENSOR IN LIME KILN

Kauko Leiviska, Oulu University, Oulu, Finland

Another indirect measurement application for the pulp and paper industry was developed for a rotary lime kiln. Lime is used in the recausticising process for the recovery of white liquor. A lime kiln is a huge heat exchanger, where lime mud is burned into lime. The operation of a lime kiln has several stages: drying, heating, calcining and agglomeration of the formed CaO powder. The main problem in lime kiln control is the lack of on-line measurements of the product quality. The efficiency of the process depends on the operating temperature and the temperature profile over the kiln. Too low a temperature leads to product quality degradation, while too high a temperature leads to a decrease in causticising efficiency. An additional control problem is due to the use of sawdust as a fuel: sawdust quality changes randomly, leading to fluctuations in heat production and temperatures. (Järvensivu et al. 2001) As the process is rather sensitive to temperature changes, adaptive control is needed. Adaptation has been implemented through scaling of the control variable. The scaling coefficient is determined from an indirect measurement describing operating-condition changes, implemented as an LE model whose inputs are certain important process variables.
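A minimal sketch of the adaptation-by-scaling idea, with a made-up linear stand-in for the LE-based operating-condition measurement (all names and coefficients are assumptions):

def operating_condition_index(sawdust_moisture, lime_mud_feed):
    # Indirect measurement of operating-condition changes; illustrative only.
    return 1.0 + 0.01 * (sawdust_moisture - 40.0) - 0.0005 * (lime_mud_feed - 100.0)

def scaled_control(base_output, index, lo=0.5, hi=1.5):
    # Adaptation: scale the control variable, clamped to a safe range.
    k = min(max(index, lo), hi)
    return k * base_output

u = scaled_control(base_output=12.0,
                   index=operating_condition_index(45.0, 110.0))
print(u)    # scaled fuel-related control output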

11.4 EARLY WARNING SYSTEM FOR RECOVERY BOILERS

Bjorn Widarsson and Erik Dahlquist, Mälardalen University, Västerås, Sweden


Rottneros Vallvik is a kraft pulp mill in mid-Sweden. In 1998 a gas explosion followed by a steam explosion took place in the recovery boiler. The interest in an early warning system that removes the risk of this happening again is thus very strong!

In a project performed in co-operation with Varmeforsk, the branch organisation for heat production in Sweden, and co-funded by SEA, the Swedish Energy Agency, an early warning system was developed and implemented by Malardalen University.

The focus of the system was to identify the sensors available for keeping track of the energy and material balance of the recovery boiler. These were then logged and used to calculate the energy and material balance on-line, where both the immediate values and the trends over different time perspectives were analysed and used as input to a Bayesian net.

As a complement, the blow-down water is analysed with respect to conductivity. If there is a break in a tube in the boiler itself, the conductivity will go down, as water is pouring out instead of being evaporated (evaporation otherwise concentrates the solids and increases the conductivity in the steam drum). If, on the other hand, the break is in the ECO part, the conductivity will not be affected. In this way an indication of the position of the leak, and of the risk of explosion, can be obtained.

This information, together with bed camera information on the bed and the combustion, is given as input to the BN. This then gives feedback to the operators about possible risks of tube leakage, sensor faults, and poor performance of the boiler as such.

The system has been implemented taking data from the DCS system, but with the actual calculations on a separate computer. The process flow sheet is shown in Figure 1.


Figure 1. Recovery boiler at Rottneros, Vallvik, where an “early warning” system has been implemented to detect risk of tube leakage, sensor faults and other problems.

The functionality can be structured as below:

– Energy balance over the recovery boiler using the measurements from the process together with a physical process model. This balance is updated frequently.

– Trending of the deviation between measured data and predictions from the physical model, complemented with other information such as bed camera data and lab measurements.

– When the deviation passes a predefined level, an ”early warning” is sent to the operator displays.

– Decision support on the probable cause of the problem is given through a Bayesian net; a toy sketch follows the list below. Examples of faults:

• Tube leakage in the furnace part of the boiler: 65% probability

• Sensor fault in black liquor flow meter: 43 % probability

• Tube leakage in ECO part: 24 % probability
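A toy illustration of this kind of reasoning is given below: two pieces of evidence, an energy-balance deviation and a conductivity drop in the blow-down, update the probabilities of a few fault hypotheses. The priors and likelihoods are invented for the sketch (the implemented system uses a full Bayesian net with more nodes), and the evidence is treated as conditionally independent:

priors = {"furnace tube leak": 0.05, "ECO tube leak": 0.05,
          "sensor fault": 0.10, "normal": 0.80}

# Assumed P(evidence | hypothesis): (balance deviates, conductivity drops)
likelihood = {"furnace tube leak": (0.9, 0.8),
              "ECO tube leak":     (0.9, 0.1),
              "sensor fault":      (0.7, 0.1),
              "normal":            (0.05, 0.05)}

def posterior(balance_dev, cond_drop):
    post = {}
    for h, p in priors.items():
        pb, pc = likelihood[h]
        l = (pb if balance_dev else 1 - pb) * (pc if cond_drop else 1 - pc)
        post[h] = p * l
    z = sum(post.values())
    return {h: v / z for h, v in post.items()}

print(posterior(balance_dev=True, cond_drop=True))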


COMMERCIAL HW/SW TOOLS

CHAPTER 12 COMMERCIAL SIMULATION ENVIRONMENTS

12.1 INTRODUCTION

Carlos Negro, Complutense University, Madrid, Spain

Working Group C (WG-C) of COST Action E36 is entitled 'Use of Simulation Software in the Pulp & Paper Industry'. This WG intends to bring software developers and (potential) users together in order to come to agreements on the contents, features, relevance and performance of software products.

WG-C is especially intended to exchange knowledge between the WGs and to foster the development of better simulation tools with high compatibility across the platforms used. The WG has followed two different working paths in order to achieve its results:

• The first has been the elaboration of three simulation samples of different degrees of complexity, starting with a steady-state one and continuing with dynamics and process control. These sample models have been constructed with different simulation software packages by WG-C members. In addition, there is an open space for presentations of each member's work before the end of the Action.

• On the other hand, WG-C is collecting, through different surveys, the current use of simulation software packages among COST E36 members, the users' needs, and descriptions of the available packages. Users' experience with selected topics will also be evaluated through an assessment survey that is being distributed.

This chapter will not focus on the first path, which is reported within the COST E36 Action as additional information. A description of each software package is outside the scope of this book, and the last survey will be finished by the end of 2007. Thus, this chapter covers the current use of software among COST E36 members and the most important criteria to take into account when assessing simulation software, according to people from the pulp and paper sector.

12.2 CURRENT USE OF SOFTWARE IN COST E36 ACTION


Carlos Negro, Complutense University, Madrid, Spain

First of all it must be strongly emphasized that the conclusions have been drawn from only 17 replies, all from members of this COST E36 Action. Thus, they cannot be extrapolated to the overall industry or to any other group of companies, research centres, etc. Still, they give at least some indication of what software packages are used by a relatively representative group of users in the European pulp and paper industry.

12.2.1 PREFACE

The purpose of this survey was to give an overview of the current use of software by COST Action E36 members. All the questionnaires received from the different members have been analysed, and the conclusions were considered a starting point to clearly distinguish every member's needs and goals and translate them into the goals of the whole WG-C.

First of all, we would like to thank the members that have contributed with their replies:

• CTP, France

• ICP, Slovenia

• KCL, Finland

• Mälardalens Univ., Sweden

• Millvision, The Netherlands

• STFI-Packforsk, Sweden

• PTS, Germany

• TNO, The Netherlands

• UCM, Spain

• UdG, Spain

• Voith Paper, Germany

• VTT, Finland

• VÚPC, Slovakia

12.2.2 ANALYSIS OF THE REPLIES


Some general questions were asked to get an overview of the current situation concerning the use of simulation software.

Figure 2.1 – Data acquisition options (share of replies: with Excel 34%, with ASCII files 22%, by hand 22%, with specific software 20%, other 2%)

Data acquisition takes place in a great variety of ways (see Figure 2.1). Excel workbooks are the most used data acquisition format, but the difference from the other formats is quite small. Data taken by hand are still quite common (22%), so in that field an effort can be made to automate the data acquisition.

When the amount of data is huge, Excel has the drawback of its maximum number of rows (65,536 in the versions current at the time). In those situations, both ASCII files and specific software seem to be the better approaches.

Data acquisition requires another crucial step before the data can be analysed: format conversion. For companies working with huge amounts of data, conversion tasks can take a lot of time, so automated conversion is the most time-saving option.


Figure 2.2 – Willingness to test new software packages (Yes 18%, No 41%, I don't know 41%)

The members of COST Action E36 are not very willing to use software packages other than the ones they already have. Only 18% of the WG-C members (see Figure 2.2) would use other software tools, e.g. STATISTICA and ASPEN PLUS, if they could.

A high percentage (41%) of the replies expressed doubt about whether other software tools should be used or not. A conclusion was that WG-C should try to help all the members resolve these issues, in order to get maximum benefit from any software for everyone's specific purpose.

One point to focus on was the strengths of each software package. According to the results, these strengths are what keep the COST Action E36 members from considering a change of software.

Changing software is quite difficult once users have learnt one package, since every package has its own tricks and details. An assessment is mainly useful when a decision is to be made. It was therefore decided to classify the most useful criteria for evaluating each software package.


Figure 2.3 – Working environment (on-line in a paper mill or pilot plant: Yes 24%, No 47%, No, but in the future 29%)

Nowadays only 24% of the COST E36 members work with paper mills or pilot plants (see Figure 2.3). This fact generates new discussion topics:

• What is everyone modelling?

• What are the purposes of simulating?

• What are the objectives of our developed models?

Defining the objectives is crucial, because WG-C will make an assessment of different software packages and this assessment must consider the specific environment of this WG.

Members that work with mills or pilot plants can add more experience with complex data acquisition systems, pointing out the points to be boosted and those to be avoided.


Figure 2.4 – Type of simulation (both static and dynamic 87%, static only 13%, dynamic only 0%)

All the respondents that have dynamic simulation also have a static one, but there are some cases where only static simulation is carried out (see Figure 2.4).

Perhaps there are different points of view about what 'dynamic' means; the approaches vary slightly:

• Dynamic is a simulation that takes the time variable into account. In that case, the user can introduce changes and see both the effects and the time these effects need to take place.

• Dynamic is a simulation in which the user can change parameters, but time is not needed. This point of view can be seen as 'changing between different steady states'.

• Dynamic is a simulation with a dynamic convergence process, in which the simulation software reaches equilibrium using dynamic algorithms.

Establishing a precise criterion for distinguishing dynamic simulations would help to clarify what everyone is doing.


Figure 2.5 – Compounds in the simulation (water, fibre and different fillers 28%; water, fibre and fillers in general 11%; water and solids 11%; without compounds 0%; other 50%)

The simulations take several compounds into account, and even energy and other variables (see Figure 2.5). In the 'Other' field a great variety of responses can be seen, from chemicals to energy variables.

The majority of the simulations are multi-component, but does everyone need that multi-component environment? Defining clear objectives will help in choosing a proper simulation environment.

12.2.3 LIST OF SOFTWARE PACKAGES

There is a great variety of software packages in use. Table 2.1 shows the software used by the participants, classified by application, together with the number of users among the COST E36 participants.

Table 2.1 – Used software packages

Application        Name                              Number of replies using it
Data acquisition   Excel                             4
                   Access                            1
                   ADV V3.2                          1
                   KCL-Wedge                         1
                   LabView                           1
                   Lasentec FBRM Acquisition         1
                   MS-SQL                            1
                   NivuLOG                           1
                   Optimas                           1
                   PI (OSI-Soft)                     1
                   Pulping process control program   1
                   TransPort data manager            1
                   WinMOPS                           1
Data analysis      Excel                             8
                   Matlab                            6
                   Design Expert                     2
                   KCL-Wedge                         2
                   Simca                             2
                   SPSS                              2
                   Statgraphics                      2
                   Voith's tools                     2
                   Darec                             1
                   Extract                           1
                   GPC                               1
                   Maple                             1
                   Modde                             1
                   Neuronline                        1
                   NNModel                           1
                   SAS                               1
                   Unscrambler                       1
Simulation         WinGEMS                           8
                   APROS Paper                       2
                   Excel                             2
                   Simulink                          2
                   ABAQUS                            1
                   Balas                             1
                   Cadsim Plus                       1
                   CASSANDRA                         1
                   Chem Up                           1
                   Dymola                            1
                   FlowMac                           1
                   gPROMS                            1
                   IDEAS                             1
                   KCL-ECO                           1
                   KCL-PAKKA                         1
                   Matlab                            1
                   NNModel                           1
                   PS2000 (GII based)                1
                   ReKo                              1
                   TNO Models                        1
Other              Aspen Plus - HYSIS                3
                   ABAQUS                            1
                   Anova™ (Taguchi method)           1
                   FEMLAB                            1
                   OPTO                              1
                   Solgas Water                      1


12.2.4 SOFTWARE EVALUATION

All software packages were evaluated by their users, and an average of the evaluations was calculated for the most used ones. The results are available in the report on the COST E36 page.

As a summary, a table can be made of these software packages with their main benefits and drawbacks.

Application        Software             Benefits                                   Drawbacks
Data acquisition   Excel                Easiness of use, I/O tools                 Code accessibility
Data analysis      Excel                Easiness of use                            Code accessibility
                   Matlab               Code accessibility,                        Handbooks,
                                        improvement possibilities                  easiness of use
Simulation         WinGEMS              Easiness of use                            Code accessibility
Other              Aspen Plus - HYSIS   Complete                                   Code accessibility

It can be seen that easiness of use is a common strength of the most used software packages, while their code accessibility is pointed out as a weakness. Thus the ideal software should be easy to use, cheap, and have high code accessibility so that new features can be introduced to fit everyone's needs.

12.2.5 OTHER QUESTIONS

The following questions were asked as an open space for different opinions:

• Topics and themes in which you have more interest for Working Group discussions.

• Do you know other potential partners that could have interest in this Working Group?

• Research activities/projects that you would like to present during the meetings.

• What do you think that this Working Group should achieve at the end?

• Do you have any available document/study/paper related to assessment of simulation software in the Pulp & Paper Industry?

• Which algorithms or mathematical tools do you use for data analysis? (Neural Networks, Genetic Algorithms, Multivariate Analysis…)

• What would you like to boost in your simulation?

Results were collected at the corresponding report, as well as other comments or suggestions.

12.2.6 CONCLUSIONS

The utility of this COST Action is reflected in the responses. The doubts expressed show that a good assessment of all the software should be of great value, not only for the members but also for pulp and paper mills and research centres.


According to the replies, there are lots of questions, topics, algorithms and software packages. Thus, there is hard work to be done in WG-C if a helpful assessment is to be carried out.

12.3 USERS’ REQUIREMENTS FOR SIMULATION SOFTWARE

Carlos Negro, Complutense University, Madrid, Spain

12.3.1 PREFACE

The purpose of this survey was to give an overview of the requirements that different user groups have for simulation software. Different criteria have been analysed in order to classify them by their importance to the users. The questionnaires were received from participants at the DOTS-ATIP Conference held in Annecy (France) in April 2005 and at the COST E36 Workshop on Validation held in Espoo (Finland) in October 2005, as well as from COST E36 members. We thank all of them for their contribution.

A study of different criteria for simulation software, classifying the importance that users give them in their work, has been carried out with 56 replies from people in 10 different countries:

• Denmark

• England

• Finland

• France

• Germany

• Norway

• Romania

• Spain

• Sweden


• The Netherlands

12.3.2 ANALYSIS OF THE REPLIES

Replies have been classified according to different affiliations. Results are shown in figure 3.1.

Figure 3.1 – Affiliation of participants (56 replies: Research Institute / University 33 (58%), Paper Manufacturing 16 (29%), Suppliers 5 (9%), Engineering / Consultancy 2 (4%))

Conclusions concerning the affiliations ‘Engineering/Consultancy’ and ‘Suppliers’ must be carefully analysed because they have been obtained from only 2 and 5 replies respectively.

70% of the respondents use simulation software. Table 3.1 summarises all the software packages used, according to the replies.

Table 3.1 – Name of used software packages and number of replies using them.

Name of the software     Number of replies
Matlab (Simulink)        14
WinGEMS                  8
Balas                    6
FlowMac                  4
Aspen-Hysis              3
Biowin                   3
CadSim Plus              3
IDEAS                    3
PS2000-GII (from CTP)    3
ACSL                     1
APROS                    1
C++                      1
FcoSim Pro               1
Femlab                   1
Fluent                   1
Fortran                  1
Modelica                 1
PI                       1
Siemens' tools           1
Witness                  1


The importance of each criterion has been analysed and the general results are summarised in Table 3.2.

The results show that the criteria concerning the supplier are the most important for people working in the pulp and paper industry. The reliability of the software and the reliability of the supplier are the two most important criteria.

On the other hand, the total cost of the system, the licensing policy of the supplier, the possibilities of handling chemistry and chemical reactions, and compatibility with other software packages are considered the least important criteria. These points show that:

• Ambitious simulations involving several chemicals and chemical reactions are still seen as too complex to be meaningful. People do not try to build them, probably because they think there is not yet enough knowledge to trust such simulations, and they therefore look for software packages that keep the handling of chemistry simple.

• People usually do not interconnect different software packages: instead, separate software is used for each specific purpose.

• The licensing policy acquires different importance depending on the affiliation.

• The total cost of the system usually does not matter too much.


Table 3.2 – Average importance of the proposed criteria. Importance is the average rating; the remaining columns give the number of replies per rating: very high (1), high (2), average (3), low (4), not important (5), and no opinion.

General

Criterion                                                           Importance  (1)  (2)  (3)  (4)  (5)  No opinion
The total cost of the system, including training.                   2.46         9   19   22    5    1   0
Payback time (what benefits do you expect …).                       2.30        18   17   13    3    4   1
Variety of applications and uses.                                   2.32         8   27   19    0    1   1
User friendliness / training period.                                2.14        13   25   15    3    0   0
Compatibility with a standard PC.                                   2.14        19   17   13    7    0   0
Time consumption (when building or running the simulation).        2.37        10   24   16    3    3   0
Reliability of the software (availability with years and future).  1.67        25   24    6    0    0   1
Average importance for this section: 2.20

Related to suppliers

Criterion                                                           Importance  (1)  (2)  (3)  (4)  (5)  No opinion
Reliability of the supplier (availability with years and future).  1.87        20   23   11    1    0   1
Licensing policy of the supplier.                                   2.51         9   18   21    6    0   2
Service and support (manuals / on-line help / help desk).           1.93        19   22   13    1    0   1
Average importance for this section: 2.10

Software possibilities

Criterion                                                           Importance  (1)  (2)  (3)  (4)  (5)  No opinion
Possibilities of developing own tools or new blocks by
creating / changing code scripts.                                   2.00        19   22   10    3    1   1
Possibilities of modifying existing blocks by
creating / changing code scripts.                                   2.00        18   23   10    4    0   1
Possibilities of including several compounds.                       2.04        12   28   14    0    0   2
Possibilities of handling both chemistry and chemical reactions.    2.44        10   18   17    8    1   2
Possibility to upgrade / update.                                    2.05        14   26   13    2    0   1
Quality of both steady-state and dynamic validations
(evaluation with standard samples).                                 2.04        13   26   13    1    0   3
Variety of importing / exporting tools
(even while running simulations).                                   2.28        11   23   14    6    0   2
Compatibilities with other software packages.                       2.48        10   21   12   10    0   3
Average importance for this section: 2.17

The same calculation has been carried out by affiliation; the results are shown in Table 3.3. People from research institutes or universities and people from paper manufacturing think, in general, the same about the criteria list. There are some exceptions:

• Payback time is logically more important for people from paper mills, while the variety of applications is more important for research centres and universities.

• Criteria related to the supplier are in general more important for people from research centres and universities, especially in the case of the licensing policy.

• Criteria about software possibilities, and some related general criteria (e.g. the variety of applications and uses, compatibility with a standard PC, the possibility of modifying existing blocks, handling chemistry and chemical reactions, and the variety of importing/exporting tools), are important for research centres and universities, while paper manufacturers give them less importance, probably because they are sceptical about complex simulations and do not expect too much from them.


Table 3.3 – Average importance of the proposed criteria by affiliation
(1 = very high, 2 = high, 3 = average, 4 = low, 5 = not important).

General                                                             Research Inst. /  Engineering /  Paper          Suppliers
                                                                    University        Consultancy    Manufacturing
The total cost of the system, including training.                   2.48              2              2.44           2.6
Payback time (what benefits do you expect …).                       2.82              1.5            1.69           1.2
Variety of applications and uses.                                   2.03              2.5            2.94           2.2
User friendliness / training period.                                2.21              2.5            2.06           1.8
Compatibility with a standard PC.                                   2.03              3.5            2.37           1.6
Time consumption (when building or running the simulation).        2.30              3.5            2.25           2.8
Reliability of the software (availability with years and future).  1.65              2.5            1.69           1.4
Average importance for this section                                 2.22              2.6            2.21           1.9

Related to supplier
Reliability of the supplier (availability with years and future).  1.81              3.5            2.00           1.2
Licensing policy of the supplier.                                   2.22              3.5            3.00           2.4
Service and support (manuals / on-line help / help desk).           1.84              2.5            2.19           1.4
Average importance for this section                                 1.96              3.2            2.40           1.7

Software possibilities
Possibilities of developing own tools or new blocks by
creating / changing code scripts.                                   1.91              1.5            2.19           2.2
Possibilities of modifying existing blocks by
creating / changing code scripts.                                   1.84              1              2.37           2.2
Possibilities of including several compounds.                       1.78              1.5            2.47           2.6
Possibilities of handling both chemistry and chemical reactions.    2.22              1.5            3.19           1.8
Possibility to upgrade / update.                                    2.03              1.5            2.31           1.6
Quality of both steady-state and dynamic validations
(evaluation with standard samples).                                 2.00              1.5            2.13           2.2
Variety of importing / exporting tools
(even while running simulations).                                   2.03              2              2.67           2.8
Compatibilities with other software packages.                       2.39              2.5            2.75           2.2
Average importance for this section                                 2.02              1.6            2.51           2.2

The other two affiliations (Engineering / Consultancy and Suppliers) interpret the criteria quite differently. Suppliers give importance to the criteria concerning the general aspects of the software and the supplier, while the possibilities of the software are considered less important. People from the Engineering / Consultancy group have the opposite opinion: they give far more importance to software possibilities than the other groups do, and for them the general criteria and those related to the supplier are far less important.

A final question asked for suggestions for new criteria to add to the list. The replies were the following:

• Better possibilities for data exchange between different tools.

• Payback calculation and commitment.

• The possibility to present results in an easy way.

12.3.4 REMARKS


In addition to these surveys, which also include the collection of specifications and information on the main software packages (provided by the suppliers), WG-C is performing several practical cases that have been presented at different WG meetings and are being reported within the COST E36 Action.

The final aim is to create a report that will help potential simulation software users decide among the different available options, and also help current users by showing different possibilities when a bottleneck appears.


CHAPTER 13 PRESENTATION OF APPLICATION OF DIFFERENT SOFTWARE TOOLS

Alvaro Alonso, Complutense University, Madrid, Spain, and Jussi Manninen, VTT, Espoo, Finland

13.1 BROWN STOCK WASHING PROBLEM

Jalel Labidi, University of the Basque Country, San Sebastian, Spain

13.1.1 OBJECTIVE

The objective of the exercise is to simulate the operation of a brown stock washing process. The washing sequence is formed by three units; a description of the washers is given below. The stock is fed to a tank in which the level is controlled at 50% by a PID controller. The outlet of the tank is connected to the first washer, while the washing water is fed to the third washer (counter-current operation). The washing filtrate is collected in tanks, maintained at constant level (80%), and used as washing water in the previous unit and also to dilute the feed of the washers (consistency 1%).

To study the dynamic behaviour of the system, a perturbation (linear or conditional) can be introduced in the treated brown stock flow rate.

13.1.2 DATA

Feed:

Fiber flow: 500 t/d

Consistency: 10 %

Organic Dissolved Solids concentration: 4 %

Inorganic Dissolved Solid concentration: 3 %

Temperature: 80 °C

Wash water:


Clean water at 70 °C to give a 2.0 dilution factor on the last washer

Washers: 3 drum washers

Feed Consistency (%) 1

Mat Consistency (%) 12

Filtrate Consistency (%) 0.002

Disp. Ratio at 2.5 DF, 14% consistency 0.7

Heat losses 0.08 %

No heat losses in the tanks.

Filtrate cascaded on level control.

13.1.3 BROWN STOCK WASHING FLOW SHEET

Description of the WASHER module

A WASHER module simulates pulp washing using a Displacement Ratio versus Dilution Factor correlation. The correlation is customized for each washer, so that most modern washing equipment can be modelled, and further customized to match observed plant performance. This module can be customized to simulate many different types of washers.

Modeling Pulp Washer Performance

In order to simulate washer performance over a wide range of wash water flows and production rates, a constant washing efficiency is not adequate; it is necessary to accommodate the typical non-linear washing performance curves. Washer performance is characterized using different efficiency curves customized for each washer.

Typical performance of a vacuum washer

The following graph shows a typical displacement ratio versus dilution factor (or wash ratio) curve for a vacuum washer. Other types of washers will have different curves. Typical curves for a variety of modern washers have been entered into the drawing parts found in the Cadsim Plus Library.

Displacement ratio (DR) is a measure of how close the washer comes to attaining the same dissolved solids concentration in the discharge as in the shower, S_S. DR is the ratio of the washing actually achieved, taking the feed dissolved solids concentration, S_F, down to the discharge dissolved solids concentration, S_D, over the ideal washing that could be achieved with infinite washing:

DR = (S_F - S_D) / (S_F - S_S)


At very low dilution factors, before any breakthrough of the wash water, there will be displacement washing. The displacement ratio for this segment of washing is equal to the wash ratio. At some point the wash water will break through the sheet and the displacement ratio curve falls away from perfect displacement.

The washers are modelled by entering two coordinate points to describe the curve:

- Measured washing efficiency point

- Breakthrough point

Note: the breakthrough point is given in reference to the wash ratio axis, as DR = WR on the perfect displacement line.

The first point should come from measurements. The slope of the washing efficiency curve can be modified to fit the curve to measurements at other conditions by sliding the breakthrough point up and down the perfect displacement line.

Additional information

Dilution factor is the liquor in the shower minus that in the discharge:

DF = L_S - L_D

where L = mass of liquor / mass of fiber, and the subscripts S and D refer to the shower and the discharge respectively.

The wash ratio can be calculated from the dilution factor by:

WR = L_S / L_D = (DF + L_D) / L_D

where L_D can be calculated from the discharge consistency, C (%), by:

L_D = (100 - C) / C
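A worked sketch of these relationships in Python, using the exercise data above (12% mat consistency, DF = 2.0, and the measured point DR = 0.7 at DF = 2.5 and 14% consistency). The piecewise DR curve, perfect displacement up to an assumed breakthrough point at WR = 0.5 and then a straight line through the measured point, is a simplified reading of the two-point description; a real washer curve bends rather than breaks:

def liquor_ratio(consistency_pct):
    # L = mass liquor per mass fiber from consistency C (%): L = (100 - C)/C
    return (100.0 - consistency_pct) / consistency_pct

def wash_ratio(df, discharge_consistency_pct):
    ld = liquor_ratio(discharge_consistency_pct)
    return (df + ld) / ld

def displacement_ratio(wr, wr_break=0.5, wr_meas=1.0, dr_meas=0.7):
    # DR = WR up to the breakthrough point, then a line to the measured point.
    if wr <= wr_break:
        return wr
    slope = (dr_meas - wr_break) / (wr_meas - wr_break)
    return wr_break + slope * (wr - wr_break)

wr_meas = wash_ratio(2.5, 14.0)           # wash ratio at the measured point
wr = wash_ratio(2.0, 12.0)                # wash ratio at the design DF of 2.0
dr = displacement_ratio(wr, wr_meas=wr_meas, dr_meas=0.7)
shower = (2.0 + liquor_ratio(12.0)) * 500.0  # t/d shower liquor for 500 t/d fiber
print(round(wr, 2), round(dr, 2), round(shower))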

Tests have been performed with the first steady-state model with FlowMac, APROS, BALAS, IDEAS, WinGEMS and CADSIM.

Later, tests have also been performed with the dynamic model with FlowMac, APROS, BALAS and CADSIM.

Finally, two of the tools have been tested with interacting PID controllers: FlowMac and APROS.

The results of the tests will be published later, but were not ready for this book.


CHAPTER 14 COMMERCIAL HW/SW STRUCTURES (DCS, FIELDBUSES ETC)

14.1 GENERAL – HW AND SW ARCHITECTURE

OPC AND ODBC

This chapter gives a brief overview of the structure normally used when models and simulators interact with a DCS system, or with an information system interacting with the DCS controllers.

What we generally see is that simple models can be put directly into the DCS controllers, while more advanced functions are usually kept on a separate computer. If both the control functions and the model are kept in the same computer, ODBC may be used directly.

If they are on separate computers, and perhaps have to interact through an information system with a database, the interaction between the vendor's DCS and the information system can be handled with an OPC server or some vendor-specific solution. The communication from the database to the simulator or advanced model is usually also done with an OPC server. This can be done directly, or by using an environment like Matlab for the connection. Many software programs have links to Matlab, so that complete objects or models can be imported and used with Matlab's OPC solutions towards other databases or environments.

When there is an interaction between a simulator, a database and further on a DCS system, the different systems often use different sampling frequencies. DCS signals logged from the controllers often have a very high sampling frequency, while the signals are normally filtered and stored as one-minute averages in the database; in a TTD database they may even be filtered so that values varying only a little from the previous value are not stored at all. In some simulators, like gPROMS and Modelica, the solvers usually do not use a fixed time step in the calculations: when some steep event occurs, the time step is shortened to increase the accuracy of the calculation. This may cause significant problems when interacting with a real-time system.

Still, there are solutions available for this. You can to some extent fix the time step and keep track of the real time, so that the timing is synchronized between the two systems. A timer starts the execution, and as long as the calculation time is shorter than the time step we can interact with a real-time system.

Still, if the calculation time becomes longer than the real-time step, we will have a problem. This may limit the size of the application that can be run on-line.
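As an illustration of fixed-step synchronization, a minimal MATLAB sketch (simulateOneStep is a hypothetical placeholder for the model call; the step length is an assumed value):

% Run nSteps fixed-step simulation steps in pace with the wall clock.
dt = 1.0;                    % fixed time step, seconds (assumed)
nSteps = 60;
t0 = tic;
for k = 1:nSteps
    simulateOneStep(dt);     % hypothetical call executing one model step
    lag = k*dt - toc(t0);    % time left until the next real-time tick
    if lag > 0
        pause(lag);          % wait out the remainder of the time step
    end                      % if lag < 0, the model cannot keep up with real time
end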


When we log sensor values it may be necessary to filter the signals or check the signal quality before the values are used in the simulation. There are different methods and tools for this, but in practice this can be quite a tricky part of the chain from sensor to simulation result. Especially outliers may cause problems: which values should be included or excluded? This signal check should normally sit between the DCS database and the simulation software.
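A minimal sketch of such a check in MATLAB (the range limits and the hold-last-value strategy are illustrative assumptions, not a recommendation from any specific tool):

% Accept a logged value only if it is a number within plausible limits;
% otherwise hold the last accepted value.
function y = checkSignal(x, xmin, xmax, yPrev)
if isnan(x) || x < xmin || x > xmax
    y = yPrev;    % reject missing, out-of-range or otherwise suspect values
else
    y = x;        % pass the value through unchanged
end
end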

Most major DCS vendors today have solutions for this type of application. As mentioned earlier, OPC servers are the most common links between different software packages today, but it is also possible to communicate at a lower level, directly over TCP/IP, Ethernet and similar protocols.

If we consider what the user interface should look like and what properties it should have, most users seem to want the simulator interaction interface in the same environment as the DCS, or at least looking the same. If there is an information management system, the interface may be situated there as well. A totally separate software package may be acceptable, but it is difficult to keep track of many different systems, which is why this is considered a poorer solution if the system is to be used by operators in their daily work.

One interesting alternative is to let the interaction show up only when something happens on the operator screens, and stay hidden otherwise. If, for instance, an early warning is to be submitted, it may pop up on the screen at a relevant position. This is something vendors are working on, and it will probably be standard in the future. It may also be of interest to send an SMS or similar to process engineers, operators and/or service staff, preferably together with a short message indicating what to do.

14.2 DATA TO AND FROM SIMULATOR MODELS

Andreas Kvarnström and Erik Dahlquist, Mälardalen University

This section discusses the communication between databases, DCS systems and different software tools such as simulators and optimizers. An example of communication between DCS systems, simulators and optimization solvers shows what to consider.

Simulation-based optimization using external simulation models is an optimization method where simulation models developed in one software package can be used in an optimization procedure controlled by another. The optimization is handled from the optimization software, which controls and runs the simulation software and the simulation model whenever needed.

The external simulation models can be used in various ways in the optimization procedure [3]:

• to provide inputs to the optimization to formulate the objective function and constraints

• to provide initial conditions for the optimization


• to provide parameter values for calculation of the objective function and constraints

• to generate required parameters to calculate and formulate constraints; e.g. when quality parameters that cannot be measured directly are handled, a simulation model can be used to predict the parameter

• to validate optimization results. If only a part of the process is optimized, the optimization results must be tested and validated in a simulation model of the entire process.

14.2.1 SOFTWARE REQUIREMENTS

There are certain requirements that must be met by the software used for optimization and simulation if they are to interoperate and share data. They must provide interfaces to set and receive data. For instance, the optimization software must be able to set parameters in the simulation model according to the current values of the optimization variables. Similarly, after a simulation is performed, the optimization software must be able to obtain output values from the simulation model.

14.2.2 INTEROPERATION SOLUTIONS

The interoperation between the optimization and simulation software can be accomplished using different techniques. In general, there are two ways to achieve interoperability between the optimization and the simulation software [4]:

• One common technique is to use a file-based interface, where the optimizer changes the values of certain parameters in an input file to the simulation software, and where the simulation software then prints the results of the simulation into an output file readable by the optimization software.

• Another technique, highlighted here, is to use component-based software technology like Microsoft's Component Object Model (COM). This component technology allows binary interoperation between the simulation and optimization software, which is a more direct method of sharing data via a programming interface.

To be able to send data from one application to another, both must use the same interface type. The file-based interface is more manual but is probably supported by more optimization and simulation software than COM is. However, the file-based approach is error-prone, since there is a risk of reading or writing the wrong data, and it is also slower and less reliable than the more direct interaction achieved with COM.
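A minimal MATLAB sketch of the file-based interface (the file names, the parameter name and the simulator executable are hypothetical):

% The optimizer writes a parameter file, runs the simulator as an
% external program, and reads the results back from an output file.
fid = fopen('sim_input.txt','w');
fprintf(fid,'biomass_mass %g\n', 7.0);    % hypothetical parameter and value
fclose(fid);
system('simulator.exe sim_input.txt');    % hypothetical external simulator call
res = importdata('sim_output.txt');       % parse the simulator's output file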


14.2.3 THE COMPONENT OBJECT MODEL

The Microsoft Component Object Model (COM) is the Microsoft standard for creating software components. It was introduced by Microsoft in 1993 and is today one of the most commonly used component technologies for developing component-based applications. A component is defined as a specific piece of software that does something specific and predefined. Any program should be able to use the component, the component should work by itself, and it should be easy to replace. COM creates binary software components that can interact with each other [6][7].

COM is a standard for specifying an object model, together with the programming requirements that enable COM objects to interact with each other. The programming can be done in many programming languages, and the objects can be structurally different. This is why COM is referred to as a binary standard: it applies after a program has been translated to binary machine code.

The COM technology permits objects to interact with each other across process and machine boundaries. One component's object can never have direct access to another component's object; an object can only get access to another object through interface pointers. This is the main feature of COM, and it allows COM to completely enforce the encapsulation of data and processing [8].

Figure 1: Two applications may connect to each other's objects, in which case they extend their interfaces toward each other [6].

A COM component’s interface refers to a predefined collection of associated functions that a COM class implements. When a COM object implements an interface, the object must implement all methods related to the interface and present pointers to those methods to the COM library. The COM library makes those functions available to any client that requests pointers to the interface, both for clients inside the process that implements the methods and for clients outside this process [6].


Figure 2: A COM component supporting three interfaces A, B and C [6].

Figure 3: Interfaces extend toward the clients connected to them [6].

14.2.4 MARSHALING

The mechanism that lets objects be used across thread, process and network boundaries is called marshaling. Marshaling also makes location independence possible: it allows interfaces exposed by an object in one process to be used in another process [8].

14.2.5 CLIENTS AND SERVERS IN COM

The client is any piece of code that uses another object's services by calling methods through that object's interfaces. The only way a client can interact with a COM object is by using a pointer to one of the object's interfaces and calling methods through that interface pointer. Encapsulation is achieved through this pointer.

There are two types of COM servers:

Dynamic Link Libraries (DLLs) are used for in-process servers, which are loaded into the same process as the client. The communication between client and server is handled through direct function calls [8].

Executable files (EXEs) are used for out-of-process servers. These run either in another process on the same machine, as a local server, or in a process on a remote machine, as a DCOM component [8].


Figure 4: In-process and out-of-process servers [9].

14.2.6 DISTRIBUTED COM (DCOM)

A standard COM component runs on the same machine, and in the same process space, as the software that is using it. A Distributed Component Object Model (DCOM) component usually runs on a different machine, in its own process space, although it can also run on the same machine as the software that is using it. The functionality of the component is then available to programs and other components on all computers on the network. This makes it possible to talk to services on a machine other than the client computer. For example, if the database is located on a server somewhere, DCOM lets you place the actual functionality on that server and access it as if it were local [10].

14.2.7 AN EXAMPLE

In this section an example of an optimization problem using an external simulation model is shown. The method was applied to a fuel mix optimization problem presented in [5]. The optimization problem was to minimize the cost of heat and power production by finding the optimal fuel mix, depending on the heat load in the district heating system, the price of electricity, the price of fuel, emission taxes and fees, etc. The fuels for the combined heat and power plant were coal and biomass, and the differences in energy and moisture content between the two fuels affect the production ratio between power and heat considerably [5]. Figure 5 shows the process to be optimized.


Figure 5: The combined heat and power production system to be optimized [5].

The interoperation between the optimization software and the simulation software was accomplished using Microsoft’s COM technology.

14.2.7.1 THE OPTIMIZATION SOFTWARE

The optimization software used in this example was TOMLAB, an optimization toolbox for MATLAB. TOMLAB is developed by Tomlab Optimization AB in Sweden and provides MATLAB interfaces for many optimization algorithms and commercial optimization software [13]. MATLAB is a high-performance language for technical computing that integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation [14].

[Figure 5 component labels: biomass and coal fuel streams, boiler, flue gas condenser, flue gas to the stack, turbine, generator, condenser, buildings, district heating system; temperature levels ~48 °C, 70 to 100 °C, ~170 °C, ~55 °C; 170 MW.]


MATLAB supports COM and can interact with contained controls or server processes, or act as a computational server controlled by a client application program. MATLAB can be configured either to control or to be controlled by other COM components. When MATLAB controls another component, MATLAB is the client and the other component is the server; when MATLAB is controlled by another component, it acts as the server. MATLAB supports four different COM client-server configurations: MATLAB client and in-process (control) server; MATLAB client and out-of-process server; client application and MATLAB Automation server; and client application and MATLAB engine server [15].

The configuration used in this case is the second configuration, MATLAB Client and Out-of-Process Server. This configuration runs the server in a separate process from that of the client application, here MATLAB. Since the server runs in a separate process, it may be run on either a local or remote system.

14.2.7.2 THE SIMULATION SOFTWARE

The simulation software used in the example was IPSEpro from SimTech Simulation Technology in Austria. IPSEpro is a highly flexible environment for modelling, simulation, analysis and design of components and processes in energy and chemical engineering. IPSEpro provides an object-oriented modelling interface for building new components and modifying existing components, using a language similar to C++, as well as a graphical user interface for editing the simulation model [11][12]. IPSEpro is widely used within power plant engineering and includes some commonly used functions, for instance heat balancing. The simulator, which is completely COM-based, is usually run in Windows environments.

The COM interface provided with IPSEpro makes it possible to control almost everything in the program through COM.

14.2.7.3 OPTIMIZATION PROBLEM

The optimization problem was to minimize the cost of combined heat and power production over a 24-hour period by finding the optimal fuel mix of coal and biomass. The heat load on the district heating system, the electricity price on the Nordic power exchange, Nordpool, the emission taxes, and the fuel prices are parameters affecting the optimization [5].

In this example the simulation model was used for providing the optimizer with values for calculating the constraints and the objective function of the optimization problem. The optimization algorithms used in [5] were global optimization algorithms, where the simulation model was used as a black box: the optimizer provided values of different simulation parameters as input and got values of other simulation parameters back as output. In [5] different optimization algorithms were tested, but here the aim is only to show how the optimization method with an external simulation model can be used in an optimization procedure.

In this example both the optimization software, MATLAB, and the simulation software, IPSEpro, support Microsoft's COM technology, which is a requirement for using COM. MATLAB acts as the controller and launches and controls IPSEpro. IPSEpro is called with parameters from MATLAB and runs as an out-of-process Automation server to MATLAB.


14.2.7.4 HOW TO RUN IPSEPRO FROM MATLAB

In this subsection an example of how to run a simulation in IPSEpro from MATLAB is shown. It can be seen as a general example of how easy it can be to establish interaction and data exchange between two applications that support COM through programming interfaces. In the example, the value of one simulation parameter is put into the simulation model in IPSEpro, and after the simulation is completed the value of another simulation parameter is brought back to MATLAB from IPSEpro.

To start IPSEpro from MATLAB, the MATLAB function actxserver is used. The input to actxserver is a string with the name of the application, which is mapped to the application's GUID in the Windows registry [15]. The actxserver function creates a COM Automation server and returns a COM object for IPSEpro's COM interface:

% Start IPSEpro

app = actxserver('PSE.Application');

Now IPSEpro has been started and it is time to open the simulation model to be used. The MATLAB function invoke invokes a method on an object or interface [15]. In the code below the method openProject is invoked on IPSEpro's COM object; the third input parameter to the invoke function is the name of the simulation model to be opened.

% Open the simulation model “pmaps.pro” in

% IPSEpro

proj = invoke(app,'openProject','pmaps.pro');

Now IPSEpro has been started and the correct simulation model is loaded. When a simulation model is loaded into IPSEpro it already contains some initial simulation data. In IPSEpro the simulation model consists of a collection of objects representing different components of the real process: one object for the flue gas condenser, one for the steam condenser, one for the generator, one for the district heating feed stream, objects representing the biomass and coal combustors, and so on. Each object in IPSEpro is represented by a name, and every object has one or more parameters that can be adjusted on the specific object [11].

Figure 6 shows the graphical representation of the simulation model for the combined heat and power plant process used in this example.


Figure 6: The combined heat and power plant system in IPSEpro [5].

To set the values of parameters of the different objects in IPSEpro, the method findObject of the IPSEpro COM object is used. findObject takes the name of the object in IPSEpro as input and returns a handle to it. With this handle it is then possible to set the value of one of the object's parameters using the method value. In the following code the parameter mass of the object fuel_stream001b is given the value 7. The parameter represents how much biomass is fed to the biomass combustor every second.

% Set the amount of biomass in the simulation

% model

object = invoke(proj,'findObject','fuel_stream001b');

temp = invoke(object,'findItem',0,'mass');

invoke(temp,'value',7);

Now it is time to run a simulation. This is done by invoking the method runSimulation of the IPSEpro COM object.

[Figure 6 component labels: biomass and coal fuel streams, combustion air, boiler, flue gas condenser, flue gas to the stack, turbine, generator, condenser, feed water pump, DH water pump, district heating consumer.]


% Do a simulation in IPSEpro

run = invoke(proj,'runSimulation',0);

The simulation is run for one time step and is then stopped. Getting the value of an object's parameter follows the same procedure as setting it, with the difference that the method value is invoked without an input parameter. In the code below the MATLAB variable Q_heat is given the value of the parameter q_trans of the object htex_counter001 in IPSEpro.

% Get the amount of heat produced when using

% the input amount of biomass

object = invoke(proj,'findObject','htex_counter001');

temp = invoke(object,'findItem',0,'q_trans');

Q_heat = invoke(temp,'value');

14.2.7.5 THE OPTIMIZATION PROCEDURE

The optimization procedure starts by loading the initial values of the optimization variables and parameters into MATLAB. MATLAB then starts TOMLAB, and the optimization proceeds as follows: TOMLAB calculates new values of the optimization parameters, puts the new values into the simulation model in IPSEpro and then starts a process simulation in IPSEpro. The results from the simulation are transferred back to MATLAB and used by TOMLAB to calculate the values of the objective function and the constraints, which are then evaluated by the optimizer. This sequence is repeated until the optimal value of the objective function is found, according to some criterion specified in the optimizer.
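Putting the calls from section 14.2.7.4 together, a minimal sketch of an objective function that an optimizer such as TOMLAB could evaluate repeatedly (the function name, the coal stream object fuel_stream001c, the price inputs and the cost expression are hypothetical illustrations, not taken from [5]):

% Production cost for a given fuel mix x = [biomass; coal] in kg/s.
% proj is the IPSEpro COM object returned by openProject above;
% cBio and cCoal are fuel prices, pHeat the heat price (assumed inputs).
function f = fuelMixCost(x, proj, cBio, cCoal, pHeat)
obj = invoke(proj,'findObject','fuel_stream001b');
invoke(invoke(obj,'findItem',0,'mass'),'value',x(1));    % set biomass feed
obj = invoke(proj,'findObject','fuel_stream001c');       % hypothetical coal stream name
invoke(invoke(obj,'findItem',0,'mass'),'value',x(2));    % set coal feed
invoke(proj,'runSimulation',0);                          % run one simulation
obj = invoke(proj,'findObject','htex_counter001');
Q = invoke(invoke(obj,'findItem',0,'q_trans'),'value');  % heat produced
f = cBio*x(1) + cCoal*x(2) - pHeat*Q;                    % hypothetical fuel cost minus heat revenue
end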

14.2.7.6 RESULTS AND CONCLUSIONS OF THIS EXAMPLE

The results of the fuel mix optimization problem in [5] showed that it was preferable to use coal rather than biomass if the electricity price was high, and vice versa. The conclusion of the optimization work in [5] was that the method with an external simulation model clearly works, but that the optimization algorithm to be used depends on the type of optimization problem and the structure of the objective function.

The difference between the algorithms tested was how fast they found an acceptable minimum of the objective function. One optimization algorithm produced a slightly more accurate result than the other, but on the other hand took considerably longer


time to find the optimum and required many more simulation runs than the other algorithm [5].

The simulation model of the combined heat and power plant is flexible and easy to maintain and change if the system is to be retrofitted in the future. As long as the names of the objects in the simulation model (the flue gas condenser, the steam condenser, the generator, the district heating feed stream, and the biomass and coal combustors) are not changed, the user is free to change the simulation model without changing the optimizer setup in TOMLAB [5].

The results and discussion in [5] showed that simulator-based optimization of industrial processes is promising but needs to be further investigated on a large-scale problem [5].

14.2.7.7 DISCUSSION

The aim of the simulation-based optimization approach is to separate the process model from the optimization environment. This makes it possible to use the optimization software best suited for the optimization procedure and the simulation software best suited for the specific process.

Traditionally, when trying to optimize an industrial process using computers and mathematical algorithms, the optimization has to be done in one specific environment suited for the optimization, and the simulation model of the process has to be built and implemented in that same environment.

However, the environments for the optimization software are not always well suited for implementing simulation models. Simulation models can be easier to implement and tune in simulation software intended for the specific kind of process. The benefit of simulation-based optimization using external simulation models is the possibility to use simulation models built in a different software environment than the optimization software environment; the simulation software is then controlled and run by the optimization software.

When using an external and more process-specific simulation model, there is also the advantage of having more functionality and process dynamics incorporated into the optimization, such as the mass and energy balances of the process and the quality characteristics of the product produced in the process. Using a more accurate process model also makes it possible to check the feasibility of the optimization results, since a larger scope of the process is simulated than is directly included in the objective function and constraints of the optimization.

In many cases companies have invested a lot of money in well-developed and tuned simulation models for their specific processes. The simulation software, however, often lacks the possibility to optimize the processes automatically with the help of optimization algorithms. This means that even though a powerful simulator is used, the optimization of the process still has to be made manually, by changing the values of different parameters in the simulator.


For many companies there could be considerable savings if the already tuned simulation models could be used in the optimization procedure.

As mentioned in the results of the example, the method of using an external simulation model in the optimization of processes is very flexible. Depending on the simulation software, it can be very easy to change the simulation model without having to change the setup of the optimization software. This has the advantage that experts on the specific process can tune and update the simulation model, while experts on optimization algorithms tune, change and update the optimization software and the optimization algorithms.

Optimization using external simulation models can, however, be extremely time-consuming, e.g. when the simulation output is used to calculate the objective function and constraints in each iteration of the optimization procedure. The search for the optimal solution requires a large number of iterations, and since a simulation is performed at each iteration step, reducing the simulation time becomes crucial. The speed of the optimization procedure depends to a large extent on the speed of the simulation software [16].

The limitations of this method can also depend on the interoperation technique. The fastest alternative is interoperation between software through a programming interface, as when using component-based technologies such as COM. Even this, however, is somewhat slower than having everything in one software package.

There might also be some limitations on which optimization algorithms can be used. As mentioned, the optimization may take longer when a whole simulation has to be run every time new parameters are tried.

There is no general data format: the implementation is entirely specialized to this particular application. To use another simulation software package in the optimization procedure, another data format must be used, and the optimization parameters must also be adjusted to suit the specific simulator. A general simulation data format for optimization with external simulation models would make the process of changing simulation software much easier.

14.2.8 REFERENCES

[1] Pulkkinen P, Tienari M, Mosher A, Ritala R, “Methodology for Dynamic Optimization Based on Simulation”, PTS Symposium, December 2003.

[2] Law A.M., McComas M.G., “Simulation-based Optimization”, Proceedings of the 2002 Winter Simulation Conference, December 2002.

[3] Dhak J., Dahlquist E., Holmström K., Ruiz J., Belle J. and Goedsch F. (2004), “Developing a Generic Method for Paper Mill Optimization”, Control Systems 2004, June 14-17, Quebec City, Canada.

[4] Fard B.G, Ala-Kurikka J, Kvarnström A, Crnkovic I, ”Enhancing Distributed Simulation Systems by Utilizing Component-based Technologies”, Proceedings from 44th Scandinavian Conference on Simulation and Modelling, September 2003, 33-40.


[5] Häggståhl D., Kvarnström A., Dotzauer E. and Holmström K. (2004), “Fuel Mix Optimization of Combined Heat and Power Production Utilizing a Simulation Model”, The 9th International Symposium on District Heating and Cooling, August 2004, Espoo, Finland

[6] MSDN, http://msdn.microsoft.com, 2005.

[7] David S. Platt. (2000), “The Essence of COM”, Prentice Hall Inc, ISBN: 0-13-016581-6

[8] Williams S., Kindel C., The Component Object Model: A Technical Overview, Microsoft Corporation, 1994.

[9] Component Object Model Specification, Part I, Microsoft Corporation, 1995.

[10] Rofail A., Shohoud Y. (2000), “COM and COM+”, ISBN: 0-7821-2384-8.

[11] IPSEpro, Manual PSE, version 4.4.001, 2003

[12] IPSEpro, http://www.simtechnology.com, 2005

[13] TOMLAB, http://www.tomlab.biz, 2005

[14] MATLAB, http://www.mathworks.com/products/matlab/, 2005

[15] MATLAB Help, MATLAB 6.5, 2005

[16] Pulkkinen P, Ihalainen H, Ritala R, “Developing Simulation Models for Dynamic Optimization”, Proceedings from 44th Scandinavian Conference on Simulation and Modelling, September 2003.
