Modeling, Identification and Control at Telemark University College

Master studies in process automation started in 1989 at what soon became Telemark University College, and the 20-year anniversary marks the start of our own PhD degree in Process, Energy and Automation Engineering. The paper gives an overview of research activities related to control engineering at the Department of Electrical Engineering, Information Technology and Cybernetics.


Introduction
The Norwegian research journal MIC was initiated by late Professor Jens Glad Balchen, with the first issue published in 1980. MIC has played a central role in Norwegian cybernetics research, as it coincided with a dramatic growth in the number of PhD students and gave these an arena to publish. Telemark University College (HiT) salutes the journal, and those who made the journal possible.
The master studies in engineering at HiT started in 1988, and the initial board was led by Finn Lied and included Inge Johansen and Sven G. Terjesen, all central people in the engineering community of Norway in the last part of the 20th century. The leader of the engineering studies was May-Britt Hägg, now professor at the Norwegian University of Science and Technology (NTNU). In 1989, a study in Process Automation started; this study was planned by Terje Hertzberg, Steinar Saelid, Gudolf Kjaerheim, Sven G. Terjesen, Ivar Loe, Jens I. Ytreeide, and Rolf Ergon. Later, Ytreeide became professor in these studies, while Loe was adjunct professor for many years. The Process Automation study was led by Rolf Ergon, now professor emeritus. In 1994, these studies became part of HiT, organized under the Faculty of Technology (HiT-TF).
From the start, the MSc studies in Porsgrunn had their accreditation from the Ministry of Education and Research, while the PhD study was formally a degree at NTNU, where HiT-TF operated almost as a faculty under NTNU. In April 2009, the Ministry of Education and Research gave HiT the right to award its own PhD degree in Process, Energy, and Automation Engineering.
The current MSc studies are in Process Technology, Systems and Control Engineering, and Energy and Environmental Technology, and they are taught in English. Initially, the strong position of the regional process industry shaped the process automation study, which had a strong emphasis on modeling of dynamic systems, numeric methods, process chemistry, separation technology, thermodynamics, etc. Control engineering was also important, with topics in multivariable control, optimal and predictive control, state estimation, and control structures for industrial processes. Instrumentation technology and process safety were core topics, and laboratory exercises were widely used.
A compact group of teachers in close touch with the students enabled changes in pace with the developments in the regional and national industry, and today the core topics are modeling of dynamic systems, model based control, model based sensor technology, and industrial IT. The content of the study is thus more general today, but the theory is still tested through laboratory work as well as examples and projects/theses which to a significant degree (70-80%) come from the regional industry.
Although the terms modeling, identification and control (MIC) do not explicitly mention instrumentation and sensor technology (IST), we will still consider IST a part of MIC: without IST, there is no information to be used in identification and control. And without models, there is no IST: as an example, consider the mercury thermometer. Clearly it is not the temperature that is measured, but the expansion of mercury; the temperature is inferred from a model of the relationship between temperature and expansion.
The paper is organized as follows. In Section 2, glimpses into education and research activities are given, while in Section 3, a survey of past and on-going PhD studies is given. In Section 4, an overview of activities in societies is given.

Glimpses into Education and Research Activities

Professor Bjørn Glemmestad did his BSc and MSc at HiT, and his PhD study at NTNU in association with HiT; he recently came from the process industry with experience in the application of nonlinear MPC.

Control Education and Laboratory Experience
We believe that students get a much deeper understanding of theoretical methods by implementing the methods in practical applications. To this end, we have developed a number of laboratory assignments which are part of different courses in our master study in Systems and Control Engineering. We have standardized on using PCs (laptops or desktops) with National Instruments LabVIEW and the inexpensive NI USB 6008 I/O device, but MATLAB and SIMULINK are also used to some extent, Haugen (2005, 2008); Haugen et al. (2007, 2008). As lab stations we use air heaters (seven items), Haugen (2009a), and water tanks (six items), Haugen (2009i), which are "desktop" lab stations. Due to the large number of lab stations, we can run labs in parallel and with small student groups. Although the final aim in the assignments is to apply the solutions developed by the students to the physical system, the students are required to apply their solutions to simulated processes first. The feedback from the students on these assignments is very positive.
The following laboratory assignments have been designed:
• Implementing an industrial PID controller and a measurement filter from scratch as C code, with practical features such as bumpless transfer, integral anti-windup, and reverse/direct action. The controller is applied to either the water tank or the air heater. Haugen (2009b,h,d).
• Hardware-in-the-loop simulator based on an industrial PID controller (Fuji PGX) controlling a simulated process. Haugen (2009h,c).
• Soft-sensor (state estimator) for estimating an unknown outflow from a water tank using various methods: "direct estimator" (solving the model for the unknown variable), a Luenberger observer, and a Kalman filter. The flow estimate is used in feed-forward control of the water level. Haugen (2009h,f).
• System identification of the air heater in the form of a discrete-time transfer function using a subspace identification method (n4sid in MATLAB). A temperature controller for the simulated process is tuned in SIMULINK, and a practical temperature control system is then implemented in LabVIEW. Haugen (2009h,g).
• Model-based predictive control (MPC) of the air heater, using the MPC controller of LabVIEW. Haugen (2009h,e).
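The first assignment above centers on industrial PID features such as output limitation and anti-windup. A minimal sketch of those two features in Python follows (the course itself uses C and LabVIEW; the gains, limits, and first-order process below are illustrative assumptions, not course material):

```python
# Sketch of a discrete PID controller with output limitation and integral
# anti-windup (conditional integration).  Gains, limits and the first-order
# process are illustrative assumptions.

class PID:
    def __init__(self, Kp, Ti, Td, Ts, u_min=0.0, u_max=5.0):
        self.Kp, self.Ti, self.Td, self.Ts = Kp, Ti, Td, Ts
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0
        self.e_prev = 0.0

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        d = (e - self.e_prev) / self.Ts
        u_unsat = self.Kp * (e + self.integral / self.Ti + self.Td * d)
        u = min(max(u_unsat, self.u_min), self.u_max)
        if u == u_unsat:
            # Anti-windup: integrate only while the output is unsaturated
            self.integral += e * self.Ts
        self.e_prev = e
        return u

# Close the loop around a first-order process dx/dt = (-x + u)/tau
pid = PID(Kp=2.0, Ti=10.0, Td=0.0, Ts=0.1)
x, tau = 0.0, 5.0
for _ in range(2000):
    u = pid.update(1.0, x)
    x += pid.Ts * (-x + u) / tau
print(round(x, 3))
```

With integral action and an unsaturated output in steady state, the simulated process value settles at the setpoint; bumpless transfer is omitted for brevity.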

Sensor Data Fusion, Soft Sensors and Sensor Networking
"Data fusion is a process of associating, correlating, combining measured data and other relevant information from single and/or multiple sensors to achieve better estimates of observed parameters or even estimating parameters normally not amenable for direct measurements. Data fusion gives an added leverage to the measurement and control engineer in achieving more complete and timelier assessments of process status indicating simultaneously undesirable or dangerous situations, and their significance. The fusion process involves continuous refinements of its estimates and assessments, and by evaluation of the need for additional sources of information (i.e., possibly new sensors), leading very often to the modification of the process itself, leading thus to an overall improvement of the process and its performance indicators", Viumdal et al. (2010). Data fusion is inherently associated with the concept of soft sensors. Soft sensor or virtual sensor is a common name for software based algorithms processing/fusing a plethora of measurements. The fusion of these measurements can be used in the estimation of new quantities that need not or can not be measured. Strategies based on soft sensors are essential in modern data fusion and use among others the following:
• System identification methods
• Support vector machines
• Fuzzy neural methods

In the R&D activities, usually carried out in collaboration with industries and research institutes, the focus is on process measurements and sensorics, with innovations based on new usage of existing sensors, incorporating new sensors, and developing algorithms for soft sensors. Some recent applications are in
• powder technology, Waerstad et al. (2002); Mylvaganam (2003); Datta et al. (2003); Mylvaganam et al. (2003); Mylvaganam and Dyakowski (2005); Datta et al. (2007a,b)
• light metal industries, Viumdal et al. (2010)
• production, storage and transport of oil and gas, Vefring et al. (2002); Nygaard et al. (2004a,b); Nygaard and Naevdal (2006); Nygaard et al. (2006, 2007); Lorentzen et al. (2008)
• patient care on a 24/7 basis
• soft sensor approach to diagnosis of electrical machines, sensors and actuators, Yahoui et al. (2004); Yahoui and Mylvaganam (2009)

Process tomography is essentially a form of sensor networking and data fusion on a smaller scale and involves multimodal, multifunctional sensors, data from which have to be fused to give insight into the process, preferably non-intrusively, Alme (2006); Alme and Mylvaganam (2006a,b); Alme and Mylvaganam (2007). As such, process tomography involves all the above and requires electromagnetic modeling, to which some of our R&D efforts are dedicated, Lorentzen et al. (2008); Timmerberg et al. (2009).
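The soft-sensor idea described above (estimating a quantity that is not measured by fusing a process model with available measurements) can be illustrated with a small sketch: an unknown tank outflow is estimated from noisy level measurements with a linear Kalman filter. The random-walk disturbance model and all numbers below are illustrative assumptions, not taken from the cited applications:

```python
import numpy as np

# Soft-sensor sketch: estimate an unmeasured tank outflow q_out from noisy
# level measurements, dh/dt = (q_in - q_out)/A.  The unknown outflow is
# modeled as a random walk and appended to the state vector; a linear
# Kalman filter fuses model and measurements.  All numbers are illustrative.

Ts, A = 0.1, 0.5                              # sample time, tank area
F = np.array([[1.0, -Ts / A],                 # state: [level h, outflow q_out]
              [0.0,  1.0]])
g = np.array([Ts / A, 0.0])                   # known inflow affects level only
H = np.array([[1.0, 0.0]])                    # only the level is measured
Q = np.diag([1e-6, 1e-6])                     # process noise covariance
R = np.array([[1e-4]])                        # measurement noise covariance

rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.2])                 # true level and (constant) outflow
x_hat, P = np.array([1.0, 0.0]), np.eye(2)    # initial guess: outflow unknown
q_in = 0.2

for _ in range(500):
    x_true = F @ x_true + g * q_in            # true system
    y = x_true[0] + 0.01 * rng.standard_normal()
    x_hat = F @ x_hat + g * q_in              # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_hat = x_hat + K @ (np.array([y]) - H @ x_hat)   # update
    P = (np.eye(2) - K @ H) @ P

print(x_hat)  # estimated [level, outflow]
```

The estimated outflow converges towards its true value even though it is never measured, which is the essence of the soft-sensor approach.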

Applied Chemometrics
Acoustic chemometrics is a relatively new method for on-line process monitoring based on characterisation of system vibrations as generated by an industrial process such as a manufacturing process or transportation flow. Acoustic chemometrics is applicable for quantitative analysis of constituents for process monitoring and for physical characterisation of the state of process equipment. Principal component analysis (PCA) or partial least squares regression (PLS), Martens and Naes (1989); Esbensen (2001), based on empirical data is used to extract relevant information from relevant acoustic signals. The PLS model can then be used to predict parameters of interest based on new independent acoustic spectra. Proper validation of the resulting regression models is critical in order to develop realistic prediction models for industrial process monitoring.
The main advantage of acoustic chemometrics compared with many other on-line methods for process monitoring is the use of nonintrusive so-called "clampon" sensors which can be easily mounted onto the process equipment (pipelines, reactors etc.).The sensor, which often is a standard accelerometer or a so-called acoustic emission (AE) sensor, has no moving parts, and can withstand harsh environments.The measured acoustic signatures will often contain information about several process-relevant properties which makes it possible to predict several parameters/states from the same acoustic spectrum.
Acoustic emission from industrial processes is often considered as audible noise only, but it has recently been proven that within this "noise" there is also a significant part of useful information, Esbensen et al. (1998); Halstensen et al. (1998); Esbensen et al. (1999); Halstensen and Esbensen (2000); Halstensen et al. (2006), which can be used for process monitoring. The fact that almost all processes produce some kind of acoustic emission opens up the potential for applications which depend totally on sound, signal processing, sensor technology and multivariate calibration.
Acoustic chemometrics has its greatest benefits in cases where traditional sensors and measurement techniques, such as flow, temperature and pressure transmitters, can not be used. In many cases it is preferable to use nonintrusive sensors because their counterparts may cause disturbances, e.g., fouling and clogging inside process equipment such as pipelines, reactors, cyclones etc. Figure 3 shows an overview of the data path from acoustic emission to the final multivariate calibration model. The main research activities of the Acoustic Chemometrics Research Group are acoustic process monitoring, multivariate image analysis, chemometric theory, multivariate process monitoring, representative sampling, and soft sensors.

Soft Sensors for Level Estimation in Oil/Water Separators

Soft sensors for level estimation in oil/water separators have been developed, (2006); Skeie (2008). The advantages of this method compared to existing methods are that it is inexpensive, simple to install, independent of foam on top of the oil layer, offers redundancy of sensor devices, and does not expose people to any harmful radiation. Further work will investigate how the number and type of sensor devices influence, among other things, the accuracy and robustness of such soft sensors, and how the models can be calibrated depending on the locations of the sensor devices and the density of the liquids.

Theoretical Aspects of Process Monitoring
Theoretical issues in system identification and chemometrics, with regard to process monitoring applications, have been and still are an active research area at HiT. Product quality estimation based on known process inputs and secondary process measurements was investigated by Ergon and Di Ruscio (1997), and an approach based on identification of an output error (OE) model using a prediction error method was developed by Ergon (1999b). Not only is an OE model necessary, it also makes it possible to use low-rate and even irregularly sampled data for the primary quality variables, Ergon and Halstensen (2001), which is quite important from a practical point of view. This system identification approach can also be combined with multivariate calibration methods from chemometrics, Ergon (1999a).
A second problem investigated by Ergon is multivariate calibration model reduction. The projection based principal component regression (PCR) and partial least squares regression (PLSR) methods for static process data often result in more than two principal components, and process monitoring based on traditional score and loading plots is then a non-attractive option. This can be solved by further projections, such that the relevant process information can be presented in a single score-loading-contribution plot, Ergon (2002b); Ergon (2003, 2004, 2006, 2009a). As part of the model reduction effort, the highly profiled, patented and popular orthogonal signal correction method OPLS, Trygg and Wold (2002), has also been studied. Although this is claimed to be a pre-processing method, it has been shown that it in fact is a disguised post-processing procedure, Ergon (2005). It can also be shown that even further model reduction is possible, Ergon (2007).
A third problem that has been investigated is a residual inconsistency resulting from the conventional NIPALS algorithm used in PLSR. This problem was first found by Ergon as a by-product of another work, Ergon (2002a), but at the time judged to be of little practical interest. However, the related and essential problem of score-loading correspondence for the modeled data was investigated, Ergon (2002b). A later paper, Pell et al. (2007), brought attention to the problem with the twenty-year-old and very central algorithm, and recommended use of the Bidiag2 algorithm, Golub and Kahan (1965), instead of NIPALS, and this caused a heated debate in the chemometrics community. Ergon (2009b) clarified that the problem could be solved by a simple re-interpretation of the NIPALS results, and Ergon, Halstensen and Esbensen are in an upcoming paper looking further into the problem in relation to squared prediction errors in the process monitoring context. This problem is illustrated in Figure 4, where SPE_C = ε_C^T ε_C based on the conventional PLSR residual ε_C may both over- and underestimate the true squared perpendicular distance SPE_B = ε_B^T ε_B from a sample z to the projection space where the scores are found.
The results are the projection ẑ_B and the non-orthogonal mapping ẑ_C, while the orthogonal complement of the column space of W defines the common residual space. The prediction coefficient vector b is contained in the column space of the loading weight matrix W, while an alternative (and never used) projection ẑ_P is contained in the column space of the loading matrix P. Points A and B refer to an example in the manuscript of the upcoming paper.
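The residual inconsistency discussed above can be reproduced numerically: for a new sample z, the conventional NIPALS-PLS1 residual ε_C (obtained by sequential deflation with the loadings P) generally differs from the orthogonal residual ε_B onto the column space of W, so SPE_C ≠ SPE_B. The following is a sketch with synthetic data, not the example from the upcoming paper:

```python
import numpy as np

# Illustration of the residual issue: NIPALS-PLS1 with a = 2 components on
# synthetic data; for a new sample z, compare the conventional deflation
# residual eps_C with the orthogonal residual eps_B onto span(W).

rng = np.random.default_rng(2)
N, r, a = 50, 6, 2
X = rng.standard_normal((N, r))
y = rng.standard_normal(N)

W, Pmat = [], []
Xi, yi = X.copy(), y.copy()
for _ in range(a):                      # standard NIPALS-PLS1
    w = Xi.T @ yi
    w /= np.linalg.norm(w)
    t = Xi @ w
    p = Xi.T @ t / (t @ t)
    Xi = Xi - np.outer(t, p)            # deflation with loading p
    yi = yi - t * (t @ yi) / (t @ t)
    W.append(w)
    Pmat.append(p)
W, Pmat = np.array(W).T, np.array(Pmat).T   # r-by-a matrices

z = rng.standard_normal(r)              # a new sample
eps_C = z.copy()                        # conventional residual: deflate z
for i in range(a):
    t = eps_C @ W[:, i]
    eps_C = eps_C - t * Pmat[:, i]
Pi_W = W @ np.linalg.solve(W.T @ W, W.T)
eps_B = z - Pi_W @ z                    # orthogonal residual onto span(W)

SPE_C, SPE_B = eps_C @ eps_C, eps_B @ eps_B
print(SPE_C, SPE_B)                     # generally not equal
```

Since P ≠ W in general, ε_C is a non-orthogonal mapping residual, and SPE_C can land on either side of the true perpendicular distance SPE_B.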

Process Monitoring Based on Wireless Sensor Networks

There is an increasing focus on and interest in wireless communication and services utilizing this concept. Process monitoring is part of the research work at HiT, and wireless communication with and within measurement systems is part of this research area. HiT is one of the academic partners in the Center for Wireless Innovation (CWI) (www.cwin.no), which is a facilitator for industry and the academic participants in forming a strategic partnership in wireless R&D.

Modelling and Simulation of Electrical Systems
The simulation of systems is a very useful method for investigating different behaviors of physical systems, e.g., stress tests and faults. We can thus test whether a certain experiment will damage the test equipment, or, even worse, might prove to have dangerous impacts on personnel. Fault scenarios in particular are an ideal application field for simulation runs. We would like to know what happens if certain devices fail, and perhaps derive security measures which will protect our applications if a certain fault occurs.
In the past it was often good enough to simulate different aspects of a physical system individually within their physical domains. For each of those domains there was a specialized simulation tool. This becomes problematic when different physical systems interact with each other, as is normally the case in the real world. One then has to find some means of coupling different tools so that they can exchange simulation results during run-time (co-simulation). This is normally quite inflexible with respect to step-size and solver type.
Another solution is to find a simulation language which allows for modelling of different physical domains within the same language and tool. The non-proprietary modeling language Modelica® (Modelica Association (2009)) was especially developed to simplify simulation in different physical domains within one simulation model, and also to provide the means of exchanging models without being bound to a particular tool. In addition, the non-profit organization Modelica Association provides a standard library (Modelica Standard Library) that already contains a large number of components and connectors from different domains. This freely downloadable library also serves as a common base for different tools (free and commercial types are available, Modelica.org (2009)).
The multi-domain capability allows us to easily build simulation models of complex systems including, for example, mechanical, electrical, and chemical components and reactions. Furthermore, it allows us to concentrate on the physics of a model rather than building models which represent mathematical equations, which in turn represent the actual physical behavior.
Using Modelica allows us to build models of systems like electric drives/generators, where we have to deal with mechanical, electrical, and thermal quantities. These can also be extended to even more complex models including mechanical or electrical faults, Winkler and Gühmann (2008, 2009). Such systems can then be analyzed and optimized with respect to physically meaningful results. An example would be to optimize the control of the voltage level in a weak electricity network.

The Relation to the Partial Least Squares (PLS) Algorithm
The Partial Least Squares (PLS) algorithm has received widespread attention and is widely used in chemometrics, which has been defined as the use of mathematics and statistics on chemical data, Martens and Naes (1989). In our view the PLS method is complicated to understand due to the iterative nature of computing the solution, B_PLS, for the regression coefficients, B, in a linear (or bi-linear) model Y = XB + E from known data matrices X ∈ R^(N×r) and Y ∈ R^(N×m). In Di Ruscio (2000), insight and theoretical understanding into the Partial Least Squares algorithm is given, and a new, non-iterative formulation of the PLS algorithm is given in the case of univariate data (PLS1), i.e., m = 1 and Y a column vector. In that paper it is shown that the PLS1 algorithm is equivalent to using a truncated Cayley-Hamilton polynomial expression of degree 1 ≤ a ≤ r for the matrix inverse (X^T X)^(-1) ∈ R^(r×r) which is used to compute the Least Squares (LS) solution. Here the integer a is the number of PLS components. Furthermore, the a coefficients, p ∈ R^a, in this polynomial are computed as the optimal LS solution (minimizing parameters) to the prediction error. Hence, the PLS1 solution is optimal in the sense that p* = arg min_p ||Y − X K_a p||_F^2, where then B_PLS = K_a p*. The resulting solution is non-iterative, and can be expressed in terms of a matrix inverse as B_PLS = K_a (K_a^T X^T X K_a)^(-1) K_a^T X^T Y, where K_a = [X^T Y, (X^T X) X^T Y, ..., (X^T X)^(a-1) X^T Y] is the controllability (Krylov) matrix for the pair (X^T X, X^T Y). Relationships to the score and loading vectors are also given in the paper. It is furthermore pointed out that the PLS1 algorithm is equivalent to a truncated Conjugate Gradient (CG) method, Hestenes and Stiefel (1952), for iteratively computing the ordinary least squares solution. Interestingly, the PLS1 algorithm is equivalent to a truncated version of Iteration 10.2.13 in Golub and Van Loan (1986), p. 370. Note also the similarity between PLS1 and truncated Lanczos iterations in Algorithm 9.3.1 in Golub and Van Loan (1986), p. 345. This shows that the PLS1 algorithm has strong similarities with iterative methods for solving the normal equation, X^T Y = X^T X B, for the vector B of regression coefficients, in which X^T X is a symmetric matrix. Bi- and tri-diagonalization of symmetric matrices are involved in the iterative LS algorithms. Both the univariate and the multivariate cases are considered in Di Ruscio (2000). The usual PLS2 algorithm for multivariate data presented in the literature is not optimal. A new optimal PLS2 algorithm was also developed along the lines in which the non-iterative PLS1 solution was developed.
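The non-iterative PLS1 formulation can be illustrated numerically: build the Krylov matrix K_a for the pair (X^T X, X^T Y), solve the small least squares problem for p, and form B_PLS = K_a p*. As a sanity check, for a = r the Krylov space generically spans the whole coefficient space, so the solution should coincide with the ordinary LS solution. A sketch with synthetic data:

```python
import numpy as np

# Sketch of the non-iterative PLS1 solution: B_PLS = K_a p*, where K_a is
# the Krylov (controllability) matrix for (X^T X, X^T y) and p* minimizes
# ||y - X K_a p||.  Synthetic data; a = r reproduces ordinary LS.

rng = np.random.default_rng(3)
N, r = 40, 4
X = rng.standard_normal((N, r))
y = rng.standard_normal(N)

def pls1_krylov(X, y, a):
    C, d = X.T @ X, X.T @ y
    # K_a = [d, C d, ..., C^(a-1) d]
    K = np.column_stack([np.linalg.matrix_power(C, i) @ d for i in range(a)])
    p, *_ = np.linalg.lstsq(X @ K, y, rcond=None)   # p* = arg min ||y - XKp||
    return K @ p

b_pls_full = pls1_krylov(X, y, r)             # a = r components
b_ls, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
print(np.allclose(b_pls_full, b_ls, atol=1e-6))
```

For a < r the same function returns the truncated (regularized) PLS1 solution, mirroring the Cayley-Hamilton truncation described above.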

On Subspace System Identification
A landmark in the development of so-called subspace system identification algorithms is the algorithm for obtaining a minimal state space model realization from Hankel matrices constructed from a series of known Markov parameters (or impulse responses), as presented by Ho and Kalman (1966). This method was completely new to the control community at that time. A numerically efficient implementation of the Ho algorithm through Singular Value Decomposition (SVD) was presented in Zeiger and McEwen (1974), and furthermore used to estimate stochastic models directly from observed data in Aoki (1987), and interest in the topic increased. The DSR algorithm for identifying the entire Kalman filter model matrices directly from observed input and output data was developed in the early 90's and onwards. The method is presented in Di Ruscio (1994, 1996, 2003), among other papers. A particularly interesting feature of the DSR algorithm is that the Kalman filter gain matrix, K, and the square root of the innovation process covariance matrix, F, are estimated directly from known input and output data, as documented in Di Ruscio (1996). The innovation process in this algorithm is consistently identified also for closed loop data. A modified implementation of the DSR algorithm which is consistent both for open as well as for closed loop data was developed in the early 2000's and implemented in the dsr_e.m function in the D-SR Toolbox for MATLAB. In this method a series of "future" outputs, y_{J|1}, is decomposed into a "signal" part, y^d_{J|1} = D X_{J|1}, and an innovations ("noise") part, ε_{J|1} = F e_{J|1}, where e_k has unit variance, i.e., as

y_{J|1} = D X_{J|1} + F e_{J|1} = y^d_{J|1} + ε_{J|1}.    (1)

The decomposition, Equation (1), is consistently computed by projecting the "future" outputs onto the row space of the "past" inputs and outputs, U_{0|J} and Y_{0|J}. Hence, at this stage we have a deterministic identification problem for the entire Kalman filter model matrices: the inputs and outputs to the Kalman filter are known, i.e., using known "future" inputs u_{J|1} and the known "future" innovations ε_{J|1} = y_{J|1} − y^d_{J|1} as inputs, and using the signal part y^d_{J|1} = D X_{J|1} as known outputs. This may efficiently be solved as a deterministic subspace system identification problem in order to estimate the Kalman filter, including the system order. Details of this algorithm are presented in Di Ruscio (2008) and used in the PhD thesis work Nilsen (2005). Recently this method, dsr_e.m, was analyzed and compared with the PARSIM-E method, Qin and Ljung (2003); Qin et al. (2005), and it is shown that in general the PARSIM-E method gives larger variance on the parameter estimates compared to dsr_e, which is close to being as optimal as the prediction error method. In the PARSIM-E method, iterations i = 0, 1, . . ., L are used to iteratively compute the future innovations, ε_{J|1}, ε_{J+1|1}, ..., ε_{J+L|1}, and at the same time compute Markov parameters as well as a matrix with the same column space as the extended observability matrix. These iterations are believed to give rise to the "high" parameter variance. Notice, however, that the first steps in the PARSIM-E and dsr_e methods are similar. Interestingly, Sotomayor et al. (2003) found the dsr.m algorithm to produce the best model on validation data in comparison with four other subspace methods, CCA, MOESP, N4SID and Robust N4SID. The dsr_e.m algorithm is a variant of the dsr.m algorithm superior for closed loop identification; both dsr.m and dsr_e.m are available in the D-SR Toolbox for MATLAB.
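The Ho and Kalman (1966) realization step with the SVD implementation of Zeiger and McEwen (1974), which opens this section, can be sketched as follows; the second-order test system and all dimensions are illustrative assumptions:

```python
import numpy as np

# Sketch of Ho-Kalman realization with SVD: recover a minimal state-space
# model (A, B, C) from Markov parameters h_k = C A^k B stacked in a block
# Hankel matrix.  SISO example with a known second-order test system.

A_true = np.array([[0.8, 0.2], [0.0, 0.5]])
B_true = np.array([[1.0], [0.5]])
C_true = np.array([[1.0, 0.0]])

K = 12                                            # number of Markov parameters
h = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item()
     for k in range(K)]

m = 5                                             # Hankel block size
H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])
H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])

U, s, Vt = np.linalg.svd(H0)
n = int(np.sum(s > 1e-8))                         # estimated order (here 2)
Sq = np.diag(np.sqrt(s[:n]))
O = U[:, :n] @ Sq                                 # extended observability
Ctrb = Sq @ Vt[:n, :]                             # extended controllability
A = np.linalg.pinv(O) @ H1 @ np.linalg.pinv(Ctrb)
B = Ctrb[:, [0]]
C = O[[0], :]

# The realized model reproduces the Markov parameters (up to similarity).
h_hat = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(K)]
print(np.allclose(h, h_hat))
```

The rank of the Hankel matrix reveals the system order, which is the key observation that later subspace methods such as DSR build on when working directly from input-output data.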

Mechanistic Models and Model Based Control
Modeling and Simulation of Dynamic Systems has been a key course in the education at Telemark University College since 1991; all master students follow this course, and hence it forms a common ground. This course on mechanistic models gives a perfect background for applications of model based control: knowledge of modeling gives a good background for understanding the system under study, and the developed model can be used in a model based controller, or as a starting point for further simplification. One such control strategy, Model Predictive Control (MPC), has been taught at HiT since 1990, see Lie et al. (2005); Lie and Heath (2008). Control of polymerization processes was a focus in the 1980s-1990s. In Lie (1990), polymerization of polypropylene in a continuous reactor was studied, and the work included a population balance model in the form of moments of the chain length distribution. Part of the work dealt with limitations in attainable bandwidth in optimal control, Lie (1995). Damslora et al. (1998); Damslora (1998) looked into the modeling of a PVC batch reactor, and an optimal control strategy was developed with active use of initiators and inhibitors, which indicated the possibility of a significant reduction in the batch time.
Modeling and control of paper production was studied ca. 1999-2009. A simplified model was used to develop an Extended Kalman Filter, and the linearized model was used in an MPC algorithm, see Hauge et al. (2005) and Hauge (2003). The solution was implemented at Norske Skog's PM6 in Halden, Norway, together with Prediktor in late April 2002. Roger Slora from Norske Skog was instrumental in this project, and also worked with enthusiasm to tailor-make the user interface into something that the operators would accept.
The new control solution was used with success from the beginning of May 2002. A couple of years later, some new measurement equipment was acquired for the paper machine, and this new equipment did not fit directly into the MPC solution. Instead of redesigning the state estimator in the MPC solution, the choice was made to turn off the MPC. This is an interesting observation, and indicates a need to work on the problem of how advanced control solutions can be maintained through changes in process, control equipment, and personnel. Later, through the COST E-36 action, Dahlquist (2008), some work was done on model uncertainties and control consequences, Lie (2009).
The production of silicon from ferrosilicon was studied in the period 2000-2004. An advanced population balance model was developed of Elkem's leaching reactor in Bremanger, Norway, and the model was fit to measurements both from laboratory experiments and from operational data, Dueñas Díez et al. (2006). A passivity based nonlinear controller involving reaction networks was developed and tested through simulations. See Dueñas Díez et al. (2008); Dueñas Díez (2004) for details.
In cooperation with the bio engineering group of Professor Rune Bakke at HiT, a Modelica model was developed for the activated sludge purification of water, by adapting a Modelica library to the intended use. A central problem with bio processes is the lack of available measurements, and a study of parameter identifiability was carried out, Sarmiento Ferrero et al. (2006). The possibility of controlling the system using on-off MPC was studied, with a comparison with simpler control structures, Chai and Lie (2008); Chai (2008). The cooperation with Bakke's group continues through work on biogas production.
Norway has a strong industry in the area of photovoltaic wafers and the production of the raw material for these wafers. In cooperation with Elkem, work has been carried out on the solidification of silicon. The process is complicated, with two phases and distributed properties within each phase. Two possible modeling strategies are a two-domain and a one-domain strategy, in both cases leading to rather nonlinear models, Furenes (2009). In this work, the main idea is to control the solidification rate, as this determines the purity of the final product. The solidification rate is equal to the velocity of the solidification front, which must be inferred from temperature estimates involving nonlinear state estimators. The task is further complicated by the few available measurements.
Energy is important for modern society, both its efficient transformation and its efficient use. In a study involving SINTEF Byggforsk and Action 42 of the International Energy Agency, a Modelica library is being developed for the climate of buildings, Videla and Lie (2006). The work also involves cogeneration and the use of biofuel in a spark ignition engine, Videla and Lie (2008). Future work will involve activities related to district heating in cooperation with Østfold University College.
As already mentioned elsewhere in this edition of MIC, some past work with Xstrata on Cu leaching, Lie and Hauge (2008); Alic et al. (2009), will continue and be extended. Also, some activity in the production of silicon for the PV industry is ongoing, Komperød et al. (2009).

Nonlinear Model Predictive Control of Polyolefin Plants
Linear Model Predictive Control (MPC), such as the DMC algorithm, has become popular in plants such as oil refineries and crackers during the last few decades.
In polyolefin production, linear MPC has been tested with only limited success and is not widely used. Perhaps the most important reason for this limited success is that a typical polyolefin plant operates over a wide operating window to produce products of different qualities. Thus, the inherent non-linearities of such processes become evident, and good control with a linear control scheme is difficult.
During the last decade, nonlinear MPC has been successfully implemented in many polyolefin plants. The first known implementation of nonlinear MPC in an industrial plant was done by Borealis on a polypropylene plant in Norway. This controller was put into closed loop in 1993, and upgraded versions were used continuously until the plant was shut down a couple of years ago. Borealis' technology for nonlinear MPC has proven successful through implementation in practically all Borealis polyolefin plants built in Europe, the Middle East and Asia, and is also an integral part of Borealis' Borstar technology for polyolefin production, Glemmestad and Hillestad (2001); Glemmestad et al. (2004).
In recent years, commercial technology for nonlinear MPC has also become available and has been implemented in many polyolefin plants. While Borealis uses mathematical models based on first principles, some commercial vendors of nonlinear MPC use models based on artificial neural networks, nonlinear state-space models based on plant responses, etc.
The process model used by the nonlinear MPC technology in Borealis is a nonlinear state-space model that can roughly be divided into the following parts:
• Dynamic mass balance equations (dm/dt = inflow − outflow − reacted).
• Reaction kinetics (also aggregated to calculate production rate and split factors).
• Calculation of various plant measurements (for online model updating).
One advantage of first principles modeling is that the models can be reused. That is, the modeling in a new project does not start from scratch; instead, one can start with the best knowledge from previous projects. Each mass balance equation is usually quite simple to create; however, knowledge and experience are needed to select which states to model and which can be omitted. Reaction kinetics modeling is usually done based on lab experiments, but the kinetics will normally also be tuned against plant data. The control problem is solved using an SQP algorithm, and the parametrization of the control signal is flexible.
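As a minimal illustration of the ingredients above (not Borealis' actual implementation; the single-state mass balance, rate constant, horizon, and cost weights are all invented for this sketch), one can set up a nonlinear MPC step over a dynamic mass balance dm/dt = inflow − outflow − reacted and solve it with an SQP-type algorithm such as SciPy's SLSQP:

```python
# Hedged sketch of a nonlinear MPC step: one dynamic mass balance with a
# first-order reaction term, optimized with an SQP-type solver (SLSQP).
# All numbers are illustrative, not plant data.
import numpy as np
from scipy.optimize import minimize

k = 0.5          # hypothetical first-order reaction rate constant [1/h]
outflow = 1.0    # fixed outflow [kg/h]
dt = 0.1         # integration step [h]
N = 20           # prediction horizon (steps)
m0 = 2.0         # current hold-up estimate [kg]
m_ref = 3.0      # hold-up setpoint [kg]

def simulate(u, m):
    """Forward-Euler simulation of the mass balance over the horizon.
    u: inflow trajectory, one value per step."""
    traj = []
    for uk in u:
        m = m + dt * (uk - outflow - k * m)   # dm/dt = inflow - outflow - reacted
        traj.append(m)
    return np.array(traj)

def cost(u):
    # Penalize setpoint deviation over the horizon plus input movement.
    traj = simulate(u, m0)
    du = np.diff(np.concatenate(([u[0]], u)))
    return np.sum((traj - m_ref) ** 2) + 0.1 * np.sum(du ** 2)

# Solve the open-loop optimal control problem; in MPC only the first
# move u_opt[0] would be applied before re-optimizing at the next sample.
res = minimize(cost, x0=np.full(N, outflow), method="SLSQP",
               bounds=[(0.0, 5.0)] * N)
u_opt = res.x
```

In a real application the model would contain many mass balances plus reaction kinetics, and the control-signal parametrization (e.g. move blocking) keeps the optimization problem tractable.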
Figure 5 shows results from a critical transition in a real plant before and after the nonlinear MPC (called OnSpot) was installed, Glemmestad et al. (2002). Thick lines are with OnSpot and thin lines are without OnSpot. Hydrogen concentration is shown in the upper plot, production rate in the middle plot, and solids concentration in the lower plot. The results demonstrate that nonlinear MPC yields a faster transition, but above all a higher production rate (middle plot) and safer operation due to less variation in the solids concentration in the reactor (lower plot).
The success of nonlinear MPC in the polyolefin area, Haugwitz et al. (2008), shows that linear MPC is not always sufficient for satisfactory control, and that nonlinear MPC is now becoming a mature technology within some industry segments.

PhD Studies
An important part of research is work with PhD students. Through the years, 17 candidates have defended their theses through the cooperation with NTNU.
More candidates are in the pipeline, and while the contact with NTNU will remain important, future candidates will mainly be associated with HiT's own program in Process, Energy, and Automation Engineering. Multidisciplinary problems with industrial relevance will be studied in cooperation with the Department of Process, Energy, and Environment Technology at HiT, and with the industry.

Activities in Societies
NFA - the Norwegian Federation of Automatic Control: Finn Haugen has been a frequent contributor with industrially oriented courses. The Norwegian chapter of SIMS - the Scandinavian Simulation Society - is organized in NFA. Bernt Lie has served on the board of SIMS for a decade, and is an active participant in SIMS conferences. EU - the group has been involved in several EU thematic projects such as THEIERE and EIE-Surveyor, is currently involved in ELLEIEC, and has developed different modules for teaching purposes. During the THEIERE and EIE-Surveyor projects, the group coordinated the Measurement and Control module, particularly the block on "Sensor to Web", where our flow rigs were used for demonstrations.
Nordic Process Control Group - Bjørn Glemmestad and Bernt Lie have served on the board. Lie organized the group's workshop in 2009.

Figure 1: Sensor data fusion/soft sensors involved in process tomography with multiple resistive and capacitive sensors; this serves simultaneously as an example of sensor networking where different protocols are used, in selecting combinations of sensors and switching between them automatically. Graphic by PhD student Yan Ru.

Figure 2: Sensor networking in light metal industries, involving innovative usage of existing and new sensors. Graphic by PhD student Yan Ru.

Figure 3: Schematic overview of acoustic chemometrics.

Figure 4: Orthogonal splitting of sample z into ẑ_B and ε_B based on the Bidiag2 algorithm, and non-orthogonal splitting into ẑ_C and ε_C based on the conventional NIPALS algorithm.

Figure 5: Results from a critical transition in a real plant before (thin lines) and after (thick lines) installation of OnSpot. Scaled variables are shown, with hydrogen concentration (top), production rate (middle), and solids concentration (bottom).
Modelica® - The free modeling language Modelica® is developed by the Modelica Association, a non-profit organisation with members from industry and academia. Dietmar Winkler has been an active member of the Modelica Association for several years. To participate actively in the development of open source Modelica® tools, the group became an organizational member of the Open Source Modelica Consortium (OSMC) in October 2009.
IET - the Institution of Engineering and Technology: Saba Mylvaganam has close collaboration with the IET, and has jointly organized many seminars and workshops during the last decade. Through the IET, lecture tours have been organized in Norway with prominent industrialists and academics from the UK.
tel-tek - Halstensen has close collaboration with the R&D organisation tel-tek in Porsgrunn. There have been, and are, many activities running in collaboration with tel-tek, funded by the Research Council of Norway and the industry.
estimation of the level and thickness of the oil, emulsion, and water content. The work shows that it is possible to combine a set of pressure sensor devices, a guided radar sensor device, and models calibrated using PCR, PLSR, or ANN to estimate the liquid level, the water level, the thickness of the emulsion layer, and the thickness of the oil layer in the oil/water separator, Skeie et al. (2006); Skeie and Lie
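The calibration idea described above can be sketched in a few lines; this is an illustrative example only (the synthetic sensor gains, noise level, and the choice of PCR implemented with plain NumPy are assumptions, not the authors' actual separator models):

```python
# Hedged sketch: calibrating a soft sensor with Principal Component
# Regression (PCR). Synthetic "pressure sensor" readings X are mapped to a
# hypothetical liquid level y; real work would use plant calibration data.
import numpy as np

rng = np.random.default_rng(0)

n, p, k = 100, 6, 2                        # samples, sensors, retained PCs
gains = rng.uniform(0.8, 1.2, p)           # hypothetical sensor sensitivities
level = rng.uniform(0.5, 2.0, n)           # "true" liquid level [m]
X = np.outer(level, gains) + 0.01 * rng.standard_normal((n, p))
y = level

# PCR: project centered X onto its first k principal components,
# then run ordinary least squares in the score space.
X_mean, y_mean = X.mean(axis=0), y.mean()
Xc = X - X_mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:k].T                               # loadings for the first k PCs
T = Xc @ P                                 # scores
b = np.linalg.lstsq(T, y - y_mean, rcond=None)[0]

def predict(x_new):
    """Estimate the level from a new vector of sensor readings."""
    return (x_new - X_mean) @ P @ b + y_mean

# A fresh sample generated with the same gains and a known level of 1.5 m
x_test = 1.5 * gains + 0.01 * rng.standard_normal(p)
level_est = predict(x_test)
```

PLSR or an ANN would replace the SVD-plus-regression step, but the calibrate-then-predict structure is the same.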