The OpenModelica Integrated Environment for Modeling, Simulation, and Model-Based Development

OpenModelica is a unique large-scale integrated open-source Modelica- and FMI-based modeling, simulation, optimization, model-based analysis and development environment. Moreover, the OpenModelica environment provides a number of facilities such as debugging; optimization; visualization and 3D animation; web-based model editing and simulation; scripting from Modelica, Python, Julia, and Matlab; efficient simulation and co-simulation of FMI-based models; compilation for embedded systems; Modelica-UML integration; requirement verification; and generation of parallel code for multi-core architectures. The environment is based on the equation-based object-oriented Modelica language and currently uses the MetaModelica extended version of Modelica for its model compiler implementation. This overview paper gives an up-to-date description of the capabilities of the system, short overviews of the open source symbolic and numeric algorithms used with pointers to published literature, tool integration aspects, some lessons learned, and the main vision behind its development.


Introduction
The OpenModelica environment was the first open source Modelica environment supporting equation-based object-oriented modeling and simulation using the Modelica modeling language (Fritzson and Engelson, 1998; Modelica Association, 2017). Its development started in 1997, resulting in the release of a flattening frontend for a core subset of Modelica 1.0 in 1998 (Fritzson and Kågedal, 1998). After a pause of four years, the open source development resumed in 2002. A very early version of OpenModelica is described in (Fritzson et al., 2005). Since then the capabilities of OpenModelica have expanded enormously. The Open Source Modelica Consortium, which supports the long-term development of OpenModelica, was created in 2007, initially with seven founding organizations. The scope and intensity of the open source development has gradually increased. At the time of this writing the consortium has more than fifty supporting organizational members. The long-term vision for OpenModelica is an integrated and modular modeling, simulation, and model-based development environment with additional capabilities such as optimization, sensitivity analysis, requirement verification, etc., which are described in the rest of this paper. Fritzson et al. (2005, 2018c) are two less detailed and now partly out-of-date overview papers about OpenModelica.
The current overview paper gives an up-to-date greatly expanded description of the capabilities of the system, short overviews of used open source symbolic and numeric algorithms with pointers to published scientific literature, tool integration aspects, some lessons learned, and the main vision behind its development.
This paper is organized as follows. Section 2 presents the idea of an integrated environment, Section 3 details the goals for OpenModelica, Section 4 presents a detailed overview of the OpenModelica environment, Section 5 describes selected open source applications, Section 6 presents related work, and Section 7 the conclusions.

Integrated Interactive Modeling and Simulation Environments
An integrated interactive modeling and simulation environment is a special case of a programming environment with applications in modeling and simulation. Thus, it should fulfill the requirements both of general integrated interactive environments and of the application area of modeling and simulation mentioned in the previous section. The main idea of an integrated programming environment in general is that a number of programming support functions should be available within the same tool in a well-integrated way. This means that the functions should operate on the same data and program representations and exchange information when necessary, resulting in an environment that is both powerful and easy to use. An environment is interactive and incremental if it gives quick feedback, e.g., without recomputing everything from scratch, and maintains a dialogue with the user, including preserving the state of previous interactions with the user. Interactive environments are typically both more productive and more fun to use than non-interactive ones.
There are many things that one wants a programming environment to do for the programmer or modeler, particularly if it is interactive. Comprehensive software development environments are expected to provide support for the major development phases, such as:

Requirements analysis
Design
Implementation
Maintenance

A pure programming environment can be somewhat more restrictive and need not necessarily support early phases such as requirements analysis, but it is an advantage if such facilities are also included. The main point is to provide as much computer support as possible for different aspects of systems development, freeing the developer from mundane tasks so that more time and effort can be spent on the essential issues.
Our vision for an integrated interactive modeling and simulation environment is to fulfill essentially all the requirements for general integrated interactive environments combined with the specific needs of modeling and simulation, e.g., specification of requirements expressed as documentation and/or mathematics.

Goals for OpenModelica
Providing a complete open source Modelica-based industrial-strength implementation of the Modelica language, including modeling and simulation of equation-based models, system optimization, and additional facilities in the programming/modeling environment.
Providing an interactive computational environment for the Modelica language. It turns out that with support of appropriate tools and libraries, Modelica is very well suited as a computational language for development and execution of numerical algorithms, e.g. for control system design and for solving nonlinear equation systems.
The research-related goals and issues of the OpenModelica open source implementation of a Modelica environment include, but are not limited to, the following:

Development of a complete formal specification and reference implementation of Modelica, including both static and dynamic semantics. Such a specification can be used to assist current and future Modelica implementers by providing a semantic reference, as a kind of reference implementation.
Language design, e.g. to further extend the scope of the language, e.g. for use in diagnosis, structural analysis, system identification, integrated product development with requirement verification, etc., as well as modeling problems that require partial differential equations.
Language design to improve abstract properties such as expressiveness, orthogonality, declarativity, reuse, configurability, architectural properties, etc.
Improved implementation techniques, e.g. to enhance the performance of compiled Modelica code by generating code for parallel hardware.

Improved debugging support for equation-based languages such as Modelica, to make them even easier to use.

Improved optimization support, with integrated optimization and modeling/simulation. Two kinds: parameter-sweep optimization based on multiple simulations, and direct dynamic optimization of a goal function without lots of simulations, e.g., using collocation or multiple shooting.
Easy-to-use specialized high-level (graphical) user interfaces for certain application domains.
Visualization and animation techniques for interpretation and presentation of results.
Integrated requirement modeling and verification support. This includes the ability to enter requirements formalized in a kind of Modelica style, and to verify that the requirements are fulfilled for selected models under certain usage scenarios.
High-performance simulation, e.g., of large-scale models, generating simulations to efficiently utilize multi-core computers for high performance.

History and System Architecture
The OpenModelica effort started by developing a rather complete formal specification of the Modelica language. This specification was developed in Operational Semantics, which still is the most popular and widely used semantics specification formalism in the programming language community. It was initially used as input for automatic generation of the Modelica translator implementations which are part of the OpenModelica environment. The RML compiler generation tool (our implementation of Operational Semantics) (Fritzson et al., 2009a) was used for this task.
However, inspired by our vision of integrated interactive environments with self-specification of programs and data, and of integrated modeling and simulation environments, in 2005 we designed and implemented an extension to Modelica called MetaModelica, see also Section 4.1.3. This was done in order to support language modeling and specification (including modeling the language itself), in addition to the usual physical systems modeling applications, as well as applications requiring combined symbolic-numeric capabilities. Modeling the semantics in Modelica itself was also inspired by functional languages such as Standard ML (Milner et al., 1997) and OCaml (OCaml, 2018). Moreover, it was an investment into a future Modelica becoming a combined symbolic-numeric language such as Mathematica, but more efficient and statically strongly typed.
This language extension has a backwards-compatible Modelica-style syntax but was initially implemented on top of the RML compiler kernel. The declarative specification language primitives in RML, with single-assignment pattern equations, potentially recursive uniontypes of records, and match expressions, fit well into Modelica since it is a declarative equation-based language. In 2006 our whole formal specification of Modelica static and translational semantics, at that time about 50 000 lines, was automatically translated into MetaModelica. After that, all further development of the symbolic processing parts of the OpenModelica compiler (the run-time parts were mainly written in C) was done in MetaModelica.
At the same time we embarked on an effort to completely integrate the MetaModelica language extension into the Modelica language and the OpenModelica compiler. This would enable us to support both Modelica and MetaModelica with the same compiler, allow modeling the Modelica tool and the OpenModelica compiler in its own language, remove the limitations of the RML compiler kernel, and eliminate the need to maintain two compilers. Moreover, additional tools such as our Modelica debugger can be based on a single compiler.
Such an ability of a compiler to compile itself is called compiler bootstrapping. This development turned out to be more difficult and time-consuming than initially expected; moreover, developers were not available for a few years due to resource limitations and other priorities. Finally, bootstrapping of the whole OpenModelica compiler was achieved in 2011. Two years later, in 2013, all our OpenModelica compiler development was shifted to the new bootstrapped compiler (Sjölund, 2015), after automatic memory reclamation (garbage collection), separate compilation, and a new efficient debugger had been achieved for our new compiler platform.

The OpenModelica Environment
At the time of this writing, the OpenModelica environment primarily consists of a number of functionalities and subsystems. The relationships between the main OpenModelica subsystems are briefly depicted in Figure 1. Their functionality and selected applications are described in the rest of this article. An example of using OpenModelica to perform simulations and plot simulation results is depicted in Figure 2 for the V6Engine model.

OMC -The OpenModelica Model Compiler
OMC is the OpenModelica compiler, which translates Modelica models into simulation code that is compiled and executed to perform simulations. The OpenModelica compiler is generated from formal specifications in RML (earlier) or MetaModelica (currently). At the time of this writing the OpenModelica compiler (OMC) is generated from a specification of about three hundred thousand lines of MetaModelica. Moreover, OMC is able to compile itself, i.e., it is bootstrapped. There is also a compilation mode to generate low-footprint code for embedded systems (Section 4.20).
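As a brief sketch of how OMC is typically driven in batch mode, the following OpenModelica script (a .mos file) loads libraries and a model, translates and simulates it, and prints any messages. The file and model names (BouncingBall) are hypothetical placeholders; the API functions shown (loadModel, loadFile, simulate, getErrorString) are part of the standard OMC scripting interface.

```modelica
// Run with: omc run.mos   (script and model names are hypothetical)
loadModel(Modelica);                      // load the Modelica Standard Library
loadFile("BouncingBall.mo");              // load a user model from file
simulate(BouncingBall,                    // translate, compile, and simulate
         stopTime=3.0, method="dassl");
getErrorString();                         // print accumulated messages, if any
```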

Lexical Analysis
Keywords, operators and identifiers are extracted from the model.

Parsing
An abstract syntax tree represented in MetaModelica is created from the operators and identifiers.

Semantic Analysis
The abstract syntax tree is checked for semantic errors.

Intermediate Representation
Translation of the abstract syntax tree to an intermediate representation (IR) called SCode in MetaModelica. This is further processed by the frontend (Section 4.1.2) producing DAE IR code.

Symbolic Optimization Backend
The intermediate representation is optimized and preprocessed (Section 4.2).

Code Generation
Executable code is generated from the low-level intermediate representation.

OMC Compiler Structure
The compilation of Modelica models with the OpenModelica Compiler (OMC) can be divided into six phases (Figure 3) to obtain an executable simulation. In a nutshell, the Frontend performs lexical and semantic analysis and the Backend performs symbolic optimization on the provided DAE model representation. From the optimized MetaModelica intermediate representation an executable simulation program in a target language (C, C++, and some others) is generated and compiled.

New Compiler Frontend
As previously mentioned in Section 3.1, a new OpenModelica compiler frontend has been developed. This large effort has been made in order to provide complete language coverage as well as much faster compilation, including efficient support for compilation of very large models. The first usable version was released in OpenModelica 1.14.0, in December 2019. The new frontend uses model-centric, multi-phase design principles and is about 10 to 100 times faster than the old frontend. A few highlights: The new frontend was carefully designed with performance and scalability in mind.
References (pointers) are used to link component references to their definition scope via lookup and usage scope via application.
Constant evaluation and expression simplification are more restricted compared to the old frontend.
Arrays of basic types and arrays of models are not expanded until the scalarization phase.
Expansion of arrays is currently needed because the backend cannot yet handle all cases of non-expanded arrays, but it will be eliminated in the future (Section 4.2.8) to give increased performance for array computations.
One of the design principles of the new frontend has been to find ways to break dependencies between the various frontend phases. Instead of being component-focused like the old compiler frontend it has been designed to be model-focused, meaning that each frontend phase processes the whole model before the model is passed on to the next phase. The result is the design seen in Figure 4, which shows the flow of the model through the different phases of the new frontend.
The symbolic instantiation phase builds the instance tree and constructs all the nodes, and the expression instantiation phase instantiates all expressions in that instance tree. This involves looking up the names used in expressions and associating them with the correct nodes in the instance tree. The lookup tree for a class is only constructed once and then reused for all instances of that particular class, unlike the old frontend where a new lookup tree is constructed for each instance.
The typing phase traverses the instance tree and determines the type of all variables and expressions. The flattening phase of the new frontend traverses the instance tree and flattens the tree into a flat model that consists of a list of variables, a list of equations, and a list of algorithms. It also expands connect-equations and for-equations into basic equations.
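As a small illustration of the connect-equation expansion performed during flattening, consider the following sketch with a hypothetical connector and model. Flattening replaces the connect-equation with equality of the potential variables and a zero-sum of the flow variables:

```modelica
connector Pin "hypothetical connector"
  Real v;        // potential variable
  flow Real i;   // flow variable
end Pin;

model TwoPins "hypothetical example"
  Pin a, b;
equation
  connect(a, b);
  // Flattening expands connect(a, b) into basic equations:
  //   a.v = b.v;       (equality of potential variables)
  //   a.i + b.i = 0;   (zero sum of flow variables at the connection)
end TwoPins;
```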
The new frontend is implemented in modern MetaModelica 3.0, which combines Modelica features with functional language features. The implementation currently consists of 65 MetaModelica packages or uniontypes defining encapsulated data structures and functions that operate on the defined data.

MetaModelica for Symbolic Programming and Meta-Programming
The need for integrating system modeling with advanced tool capabilities is becoming increasingly pronounced. For example, a set of simulation experiments may give rise to new data that is used to systematically construct a series of new models, e.g. for further simulation and design optimization. Such combined symbolic-numeric capabilities have been pioneered by dynamically typed interpreted languages such as Lisp (Teitelman, 1974) and Mathematica (Wolfram, 2003). Such capabilities are also relevant for advanced modeling and simulation applications but lacking in the standard Modelica language. Therefore, this is a topic of long-running design discussions in the Modelica Design group.
One contribution in this direction is the MetaModelica language extension that has been developed to extend Modelica with symbolic operations and advanced data structures in a backwards-compatible way, while preserving safe engineering practices through static type checking and a compilation-based efficient implementation.
The MetaModelica language is an efficiently compiled language that provides symbolic programming using tree and list data structures. This is similar to what is provided by the rather young language Julia (Bezanson et al., 2017; Julialang, 2018), which has recently appeared; Julia 1.0 was released in August 2018. A comparison between MetaModelica and Julia has also been published. MetaModelica is also used for modeling/specification of languages (including the Modelica language) and for Modelica-style programming of model transformations, of which the OpenModelica compiler itself is currently the largest application.
The research contributions of MetaModelica are not about inventing new language constructs, since these have already been well proven in several other languages. However, in the context of Modelica there are contributions on integrating such constructs into the Modelica language, including the Modelica type system, in a backwards-compatible way. The following is a very brief overview of the most important language extensions:

Overloading of user-defined operators and functions. Note: overloading is called multiple dispatch in Julia.

Uniontype construct to define unions of possibly recursive record types. This is used to create tree data structures. Also a fail() function to cause an exception.
The following recent enhancements available in MetaModelica 3.0 were found to be quite useful in the implementation of the new frontend:

Flexible pattern matching specified by (), which does not require verbose listing of all record fields (or named field access) of the record in the pattern matching, e.g., UNTYPED_BINDING().
Record field access via dot notation inside the case, e.g., binding.bindingExp.

Definition of functions inside uniontypes.
Definition and usage of parameterized union datatypes such as trees using redeclare/replaceable types.
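The uniontype and match constructs listed above can be combined as in the following sketch of a tiny expression-tree evaluator (the uniontype Exp and function eval are hypothetical names, not part of the OpenModelica compiler). It uses flexible patterns with () and dot-notation field access inside the cases, as described above:

```modelica
uniontype Exp "hypothetical expression tree"
  record CONST Real value; end CONST;
  record ADD Exp lhs; Exp rhs; end ADD;
end Exp;

function eval "recursively evaluate an expression tree"
  input Exp e;
  output Real r;
algorithm
  r := match e
    // flexible patterns: no need to list all record fields
    case CONST() then e.value;                   // dot access inside the case
    case ADD() then eval(e.lhs) + eval(e.rhs);   // recursive traversal
  end match;
end eval;
```

Evaluating ADD(CONST(1.0), CONST(2.0)) with this function would yield 3.0.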

Experimental Just-in-Time Compilation
Just-in-Time compilation (JIT) allows compiling and executing code at runtime. Such a facility opens up new flexible strategies for handling the compilation and execution of Modelica code, and even going beyond Modelica to general variable-structure systems. The following OpenModelica-related work is currently ongoing.

The OpenModelica LLVM backend (OMLB)
The OpenModelica LLVM backend (OMLB) is an experimental OpenModelica prototype backend to investigate just-in-time compilation using the LLVM compiler framework (Tinnerholm, 2019). The goal was to investigate the advantages and disadvantages of having OMC target LLVM instead of C. The investigation was also performed to examine whether targeting LLVM would be a viable option to achieve efficient just-in-time compilation (JIT). Another approach with similar goals was conducted by Agosta et al. (2019). While OMLB currently is not complete enough for bootstrapping, it demonstrates the benefits of having an LLVM-based backend and JIT. OMLB is presently able to compile the algorithmic subsets of MetaModelica and Modelica interactively. Inspired by the design goals of the Julia programming language and the successful use of Julia for equation-based modeling by Elmqvist et al. (2017), an investigation was conducted in 2018 comparing MetaModelica and Julia.
This investigation highlighted the similarities and differences between the two languages, both in terms of design goals and programming paradigm. The conclusions were that there are similarities both with regard to the intended audience and the design goals of the two. These similarities prompted another investigation regarding the possibility of automatically translating the existing OpenModelica frontend into Julia. Such an OpenModelica frontend in Julia could provide a framework for experimentation with variable-structure systems while at the same time adhering to the Modelica standard.

An Experimental Julia-based Modelica Compiler Prototype
To empirically investigate the advantages, disadvantages, and challenges of providing a Modelica compiler in Julia, an OpenModelica to Julia translator was developed together with an extension of the MetaModelica runtime. From our preliminary experiments we observed that automatically generated Julia code may outperform hand-written MetaModelica code in some cases. However, the compilation time was overall slower compared to OMLB, due to OMLB making use of precompiled runtime functions, in contrast with the overhead imposed by the Julia compiler due to type specialization.

Prototyping a Standards Compliant Modelica Compiler with Run-time Just-in-Time Compilation
Regarding just-in-time compilation (JIT), the status in the fall of 2019 was that there were still two options to provide a JIT in the OpenModelica compiler environment. One is via OMCompiler.jl, an experimental standards-compliant prototype subset Modelica compiler in Julia; the other is to increase the scope of OMLB with its associated JIT. However, since the MetaModelica to Julia translator is capable of translating the existing OMC frontend, it is also capable of converting the OMLB code generator into Julia. Thus, further development of OMCompiler.jl will not invalidate the possibility of having LLVM as a final backend target for OMC.

Template-Based Code Generation
The OMC code generation uses a text-template based approach. The Susan text template language (Fritzson et al., 2009b), based on MetaModelica, was developed for this purpose. It facilitates generation of code for multiple target platforms from the low-level intermediate code and enables writing concise code generation specifications. Several alternative regular code generators are available to produce the simulation code as C or C++ code (or Java or C# code using experimental code generators), which is compiled and executed to perform simulations or to export FMUs.

OMC Backend with Numeric-Symbolic Solver Modules
In the following we briefly present four of the most important numeric-symbolic modules inside the OMC Backend that perform symbolic optimization (Figure 3).

Removal of Simple Equations
Some variables in the equation system are related by so-called simple equations. The most elementary such equation is an equality, e.g. x = y. For such an equation it is possible to declare either x or y as an alias variable and replace it by the other variable in every equation in which it occurs. The equation can then be removed from the system and is later used to reconstruct the value of the removed alias variable if necessary. Even more complex, but still simple, equations can be extracted such that the resulting system becomes much smaller (e.g. any linear equation relating two variables). More information on this process for a specific model can be obtained using the compiler flag -d=debugAlias.
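The following hypothetical model sketches both kinds of removable equations. After alias elimination, only the differential equation remains in the solved system; y and z are reconstructed from x for result output:

```modelica
model AliasDemo "hypothetical example of simple-equation removal"
  Real x(start=1);
  Real y;
  Real z;
equation
  der(x) = -z;
  y = x;         // simple equation: y is an alias of x and can be removed
  z = 2*x - 1;   // linear relation between two variables: also removable
  // After removal, effectively only  der(x) = -(2*x - 1)  is integrated.
end AliasDemo;
```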

BLT-Transformation (Matching/Sorting)
The transformation of a system of differential-algebraic equations to Block-Lower-Triangular form is fundamental to the simulation. The first step is to assign every variable to an equation such that the equation can be solved (explicitly or implicitly) for the assigned variable. This step is called Matching and is unique if there are no algebraic loops in the system. Afterwards the equations are sorted into blocks, such that an evaluation sequence is achieved (Sorting). If a block contains more than one equation, it forms an algebraic loop, where all variables assigned to those equations have to be solved simultaneously. Further information on BLT-Transformation can be found in Duff et al. (2017, chapter 6). More information regarding a specific model can be gained using the compiler flag -d=backenddaeinfo.
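The following hypothetical model illustrates matching and sorting on a trivially small system. Matching assigns each equation to the variable it is solved for; sorting then orders the resulting 1x1 blocks into an evaluation sequence (here there is no algebraic loop):

```modelica
model BLTDemo "hypothetical example of matching and sorting"
  Real a, b, c;
equation
  c = a + b;     // matched to c; sorted last   (block 3)
  a = 2*time;    // matched to a; sorted first  (block 1)
  b = sin(a);    // matched to b; sorted second (block 2)
end BLTDemo;
```

The BLT form thus yields the evaluation order a, b, c regardless of the textual order of the equations.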

Index Reduction
The differential index of a system of differential-algebraic equations is defined as the maximum number of differentiations of all equations such that all unknowns of the system can be solved by integrating an ordinary differential equation. Most solvers are designed to work with systems of index zero or one, so an efficient reduction is necessary. The equations that have to be differentiated, and the corresponding number of differentiations, can be obtained with the algorithm of Pantelides (1988). The index reduction algorithm with dummy states, described in Söderlind and Mattsson (1993), reduces the system to index one, so that it can be simulated with common solvers. Alternative methods to handle index reduction have been proposed in Qin et al. (2016, 2018). Simulation without index reduction is also possible, but less reliable. The process of index reduction identifies a set of state variables which are algebraically connected. Some of those states will be treated as regular algebraic variables (dummy states) to simulate the system correctly. One can influence this process of state selection by providing stateSelect attributes for states, e.g., x(stateSelect=StateSelect.avoid), see Table 1. More information on this process for a specific model can be obtained via compiler debug flags.
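The classic illustration is a pendulum modeled in Cartesian coordinates (a hypothetical textbook sketch, not a library model). The length constraint makes the system index 3; Pantelides' algorithm determines that the constraint must be differentiated twice, and dummy-state selection then picks which of the algebraically connected states remain states:

```modelica
model CartesianPendulum "hypothetical index-3 example"
  parameter Real L = 1 "rod length";
  parameter Real m = 1 "mass";
  parameter Real g = 9.81;
  Real x(stateSelect = StateSelect.prefer) "horizontal position";
  Real y "vertical position";
  Real vx, vy "velocities";
  Real F "rod force";
equation
  der(x) = vx;
  der(y) = vy;
  m*der(vx) = -F*x/L;
  m*der(vy) = -F*y/L - m*g;
  x^2 + y^2 = L^2;  // constraint; differentiated twice during index reduction
end CartesianPendulum;
```

The stateSelect attribute on x hints that x should be kept as a state, so y is likely to become a dummy state.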

Tearing
For every algebraic loop some of the assigned variables are chosen as tearing variables, such that all other variables can be evaluated explicitly on the basis of those variables. The goal is to efficiently find small sets of tearing variables. Many algorithms are already implemented in the OpenModelica Compiler and published in Cellier and Kofman (2006). One can influence this process by providing tearingSelect annotations, similar to the stateSelect attribute. Since this is not part of the Modelica language and is specific to OpenModelica, it must be provided as an annotation (e.g. x annotation(tearingSelect = prefer)); see Table 2. Discrete variables can never be tearing variables. More information on this process for a specific model can be obtained using the compiler flags -d=dumpLoops or -d=iterationVars.
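A minimal hypothetical example of an algebraic loop with a tearing hint: x and y depend on each other, so they must be solved together. If y is chosen as the tearing variable, x follows explicitly from the first equation and only the residual of the second equation is iterated:

```modelica
model TearingDemo "hypothetical algebraic loop with a tearing hint"
  Real x;
  Real y annotation(tearingSelect = prefer);  // suggest y as tearing variable
equation
  x = cos(y);      // with y guessed, x is computed explicitly
  y = 0.5*x + 1;   // residual  y - (0.5*x + 1)  is driven to zero iteratively
end TearingDemo;
```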

Simulation using Numeric Solvers
After code generation for the specified target language and linking with the OpenModelica simulation runtime, the model can be simulated. For the simulation, OpenModelica offers multiple numeric integration/solver methods for ODE systems as well as DAE mode (Section 4.2.6) for direct solution of DAE systems. Mostly DASSL (Petzold, 1982) or IDA (Hindmarsh et al., 2005) is used to integrate the systems, but there are further solvers for specific problems (Table 3). For models containing algebraic loops there are multiple linear (Table 4) and nonlinear (Table 5) algebraic solvers to choose from. There are general-purpose solvers such as LAPACK for linear problems, and a combination of a Newton method with the total pivot method as fallback for nonlinear problems.
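The integration method can be selected per simulation run, e.g. from an OpenModelica script (the model name MyModel is a hypothetical placeholder; the method and tolerance parameters of simulate are part of the standard scripting API):

```modelica
simulate(MyModel, method="dassl", tolerance=1e-6);  // default DASSL integrator
simulate(MyModel, method="ida");                    // IDA integrator instead
```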

DAEMode
A recent extension of the numeric solver module is DAE mode, which is used for solving very large models. DAE mode can be activated using the compiler flag --daeMode. This is part of an emerging trend in Modelica tools of handling large-scale models, with hundreds of thousands or possibly millions of equations (Casella, 2015). OpenModelica has pioneered this field by introducing sparse solvers in the solution chain: KLU for linear algebraic equations, KINSOL for nonlinear algebraic equations, and IDA for causalized differential equations. It also introduced the direct use of IDA as a differential-algebraic equation solver, skipping the traditional causalization step, which is computationally more efficient for certain classes of systems. The largest system handled so far is an electro-mechanical power system model with about 600 000 differential-algebraic equations.
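In a script, DAE mode is enabled before translation and is naturally combined with the IDA integrator (the model name LargePowerSystem is hypothetical; setCommandLineOptions and the --daeMode flag are part of the standard tool interface):

```modelica
setCommandLineOptions("--daeMode");       // translate in DAE mode
simulate(LargePowerSystem, method="ida"); // solve the DAE directly with IDA
```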

Homotopy-based Initialization
In many cases, solving the initialization problem of Modelica models requires solving nonlinear systems by means of iterative methods, whose convergence may be critical if the provided initial guesses are not close enough to the solution. To mitigate this problem, OpenModelica implements the homotopy() operator of the language, which allows replacing some key expressions in model equations with simplified counterparts, to make the initialization problem less sensitive to an accurate choice of initial guesses. Once the solution of the simplified problem has been found, a homotopy transformation is performed from the simplified to the actual formulation of the expressions in the homotopy operators. If the simplified expression is chosen appropriately, the homotopy path followed by the solution is continuous and makes it possible to reliably reach the solution of the actual initialization problem (Sielemann et al., 2011; Keller, 1978). See also Casella et al. (2011b) for an application.
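A minimal hypothetical sketch of the homotopy() operator in an initialization problem: the solver first solves the easy linear simplified residual, then continuously deforms it into the actual nonlinear residual:

```modelica
model HomotopyInit "hypothetical homotopy-based initialization example"
  Real x(start = 1, fixed = false);
initial equation
  0 = homotopy(actual = x^3 + 5*x - 10,  // hard nonlinear residual
               simplified = 5*x - 10);   // easy linear approximation
equation
  der(x) = -x;
end HomotopyInit;
```

Here the simplified problem has the obvious solution x = 2, which serves as the starting point of the homotopy path toward the root of the actual cubic residual.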

New OMC Backend
The current OMC backend is lacking in modularity and efficiency, and does not support non-expanded arrays in a general way. The latter functionality is needed to support compilation and simulation of large-scale models with large arrays. Therefore, an effort was started in spring 2020 to re-design and re-implement the backend to improve modularization and enable efficient handling of general non-expanded arrays.

OMEdit -the OpenModelica Graphic Model Editor and Simulator GUI
OMEdit is the OpenModelica graphical model editor (Asghar et al., 2011) for component-based model design by connecting instances of Modelica classes.
The editor also provides text editing. Moreover, the OMEdit GUI provides a graphical user interface for simulation and plotting (Figure 2). It also provides browsing, parameter updating, 3D animation (Section 4.4), debugging and performance analysis (Section 4.5), and FMI composite editing (Section 4.10). Figure 5 depicts the connection editing view of OMEdit in the center. The model browsing window is to the left and a model documentation window is shown at the upper right.
A typical usage of OMEdit is to first create a model using the connection editor, then simulate, and finally plot by selecting which variables should be plotted in the variable plot selection window (Figure 5, lower right).
A model can be created by opening a new empty model and dragging/dropping model components from the model browsing window to the left into the central connection editing area and creating a new model by connecting those components. Alternatively an existing model can be opened by double clicking the model in the model browser window to the left. A model can also be created textually by clicking the text button and typing in Modelica text.
A simulation is performed by clicking on the green right-arrow at the top. After a successful simulation the plot selection window will appear at the right. One rather unusual example of how a plot can appear is visible in Figure 2. There are also variants of the green simulation arrow at the top that combine simulation with debugging or 3D visualization.

3D Animation and Visualization
The OpenModelica 3D animation and visualization is a built-in feature of OMEdit for animation based on the 3D shapes defined by the MSL Multi-Body library. It provides visualization of simulation results and animation of geometric primitives and CAD files. OpenModelica generates a scene-description XML file which assigns model variables to visualization shape attributes. The scene description file can also be used to generate a visualization controlled by an FMU, either in OMEdit or in an external visualization tool such as Unity 3D (Waurich and Weber, 2017). In combination with the Modelica DeviceDrivers library, interactive simulations with visual feedback and 3D interactions can be implemented for training, development, and testing purposes.

The Algorithm Debugger
The OpenModelica algorithm debugger (Figure 7) (Pop, 2008; Sjölund, 2015) is available for use either from OMEdit or from the MDT Eclipse plug-in. The debugger provides traditional debugging of the algorithmic parts of Modelica, such as setting breakpoints, starting and stopping execution, single-stepping, inspecting and changing variables, and inspecting all kinds of standard Modelica data structures as well as MetaModelica data structures such as trees and lists.

The Equation Model Debugger
The OpenModelica equation model debugger (Figure 8) (Sjölund, 2015) is available for use from OMEdit. It provides capabilities for debugging equation-based models, such as showing and explaining the symbolic transformations performed on selected equations on the way to executable simulation code. It can locate the source-code position of an equation causing a problem, such as a run-time error, traced backwards via the symbolic transformations.
In February 2020, new functionality was demonstrated to perform a "backward" trace of which variables or equations directly influence a chosen variable (Figure 9). This can be useful for understanding the dependencies causing a faulty variable value.

The Performance Profiler/Analyzer
By using performance profiling analysis it is possible to detect which equations or functions cause low performance during a simulation. The OpenModelica profiler (Sjölund, 2015) uses compiler-assisted source code instrumentation. There is one call to a real-time clock before executing the equation block or function call and one call to the clock after execution of the block. Associated with each call is a counter that keeps track of how many times the block was triggered for the given time step. Similarly, each call is associated with clock data: one variable for the total time spent in the block over all time steps and one variable for the time spent in the block during the current time step. The time measurement uses the best real-time clock available on the platform.
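The instrumentation pattern just described can be sketched in a few lines of Python. This is only an illustration of the counter and clock bookkeeping, not the compiler-generated C instrumentation; the block name and workload are made up:

```python
import time
from collections import defaultdict

class Profiler:
    """Per named block: a call counter, time in the current step, and
    total time across all steps, mirroring the counters in the text."""
    def __init__(self):
        self.calls = defaultdict(int)
        self.step_time = defaultdict(float)
        self.total_time = defaultdict(float)

    def measure(self, name, fn, *args):
        t0 = time.perf_counter()        # clock call before the block
        result = fn(*args)
        dt = time.perf_counter() - t0   # clock call after the block
        self.calls[name] += 1
        self.step_time[name] += dt
        self.total_time[name] += dt
        return result

    def end_step(self):
        """Reset the per-step timers at each accepted time step."""
        self.step_time.clear()

prof = Profiler()
for step in range(3):                   # three "time steps"
    prof.measure("eq_block_7", sum, range(1000))  # block solved twice per step
    prof.measure("eq_block_7", sum, range(1000))
    prof.end_step()
```

After the run, `prof.calls` and `prof.total_time` identify the blocks that dominate the simulation time, which is the information the equation debugger integration presents per equation.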
With profiling enabled only for equation blocks (strongly connected equation sets) and functions, the overhead cost is low compared to the cost of solving most nonlinear systems of equations. The profiler is integrated with the equation debugger, which enables the tool to directly point out the equations using a large fraction of the simulation time ( Figure 10).

Teaching with Interactive Electronic Notebooks
Electronic notebooks provide an executable electronic book facility supporting chapters, sections, execution of simulation models, and plotting. The first versions of OpenModelica used the proprietary Mathematica notebook facility. Later versions include a simple open source implementation called OMNotebook, described below. More recently, a web-based notebook has been developed, as well as a plug-in to the Jupyter notebook facility.

OMNotebook and DrModelica
OMNotebook (Figure 11) (Fernström et al., 2006) is a book-like interactive user interface to OpenModelica primarily intended for teaching and course material. It supports sections and subsections to any level, hiding and showing sections and cells, interactive evaluation and simulation of Modelica models as well as plotting results. The user can define his/her own books. This tool is useful for developing interactive course material. The DrModelica (Sandelin et al., 2003) interactive Modelica teaching course was the first main application, at that time based on Mathematica notebooks, later translated to use interactive Modelica scripting in OMNotebook.

OMWebbook -Interactive Web-based Editable and Executable Book
OMWebbook (Figure 12) (Moudgalya et al., 2017; Fritzson et al., 2018b) is an interactive web-based electronic book. It is similar to OMNotebook, but textual model editing and simulation are performed in a web browser. Simulation is performed by a dedicated simulation server; thus, the user need not install OpenModelica on a computer. Editing and simulation can even be done from smartphones or tablets.
Figure 12: OMWebbook with editable models, simulations, and plots, here simulating the bouncing ball.

Jupyter Notebook for OpenModelica
More recently, the Python-based Jupyter notebook software (Project Jupyter, 2016) has appeared, supporting a number of scripting languages. Therefore, based on user demand, we have also developed a Jupyter notebook plug-in for OpenModelica (2020) supporting Modelica scripting. Python scripting together with the OMPython package was, however, already available and used in Jupyter notebooks.

Self-Learning Audio-Video Tutorials
A number of interactive audio-video tutorials, called spoken tutorials, have been developed to provide step-by-step teaching about how to use OpenModelica and develop simple Modelica models (Moudgalya et al., 2017; FOSSEE-Modelica, 2020). The audio parts of the tutorials are dubbed in many languages and are suitable for on-line usage (Moudgalya et al., 2017). A total of 14 short Spoken Tutorials of 10 minutes each are available, and a total of 10,000 students have been trained using these tutorials at the time of writing this article.

Text-book Companions for Teaching and Prototyping
Most open source software projects rely on contributions from the user community, and students form a substantial fraction of this community. One of the shortcomings of Free and Open Source Software is inadequate documentation, and the lack of contributions to it by students aggravates this problem: students often lack motivation to document or are not capable of creating good documents. The converse is often true: students are often good coders and enjoy coding. We addressed the above-mentioned problem by solving the inverse problem: ask students to write code for existing documents. For students, the documents are textbooks. A textbook companion comprises a collection of code for all relevant solved problems in a textbook (Moudgalya, 2018). A student who has understood the concepts in a textbook may be motivated to learn an appropriate open source software and code the solved examples, verifying the correctness at every step. There is no document containing the code and hence there is very little chance of copyright violations.
The student community has already created a large number of textbook companions for OpenModelica (Moudgalya, 2018; FOSSEE-OM-Textbook, 2020).

Interactive Scripting APIs using Modelica, Python, Julia, and Matlab
Interactive scripting APIs (Application Programming Interfaces) are provided for several scripting languages using interactive read-eval-print loops.
There is an interactive session handler, OMShell, that parses and interactively interprets commands and expressions in Modelica for evaluation, thus providing Modelica scripting. The session handler also contains simple history facilities, and completion of file names and certain identifiers in commands.
Interactive session handlers with scripting APIs to OpenModelica are also provided for the languages Python (Python Software Foundation, 2018), Julia (Julialang, 2018), and Matlab. For FMI-based composite modeling, the causality of ports has to be fixed. The OpenModelica toolset can be used both to export any given Modelica model as an FMU and to import FMUs to create a composite model.

OMSimulator -FMI and TLM-based Simulation/Co-simulation and Composite Model Editor
Simulation according to the FMI standard can be done using model-exchange FMUs (exported models without a solver), co-simulation FMUs (exported models including an embedded solver), or tool-to-tool co-simulation. Standard Modelica simulation uses the same solver for all included model components, which is the approach used for model-exchange FMUs. Co-simulation mechanisms that synchronize several solvers have to be used for co-simulation FMUs, which sometimes may cause numerical stability issues.
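The solver-synchronization idea behind co-simulation can be sketched with a toy master algorithm. The class below only stands in for a co-simulation FMU (it is not the FMI API), and the two coupled first-order systems are made up; the point is that each component advances with its own embedded solver and the master exchanges values only at communication points:

```python
class ToyFMU:
    """Stand-in for a co-simulation FMU: owns its state and an
    embedded fixed-step Euler solver (hypothetical, not FMI)."""
    def __init__(self, x0, a):
        self.x = x0        # internal state
        self.a = a         # decay coefficient of dx/dt = -a*x + u
        self.u = 0.0       # input, held constant between communication points
    def set_input(self, u): self.u = u
    def get_output(self): return self.x
    def do_step(self, h, substeps=10):
        dt = h / substeps
        for _ in range(substeps):          # embedded solver: internal substeps
            self.x += dt * (-self.a * self.x + self.u)

fmu1 = ToyFMU(x0=1.0, a=1.0)
fmu2 = ToyFMU(x0=0.0, a=2.0)
H = 0.05                                   # communication step size
for _ in range(100):                       # simulate 5 time units
    y1, y2 = fmu1.get_output(), fmu2.get_output()
    fmu1.set_input(y2)                     # exchange only at communication points
    fmu2.set_input(y1)
    fmu1.do_step(H)
    fmu2.do_step(H)
```

Because inputs are frozen between communication points, the coupling is only approximate; tightening or loosening H is exactly the trade-off that can cause the numerical stability issues mentioned above.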
OMSimulator (2020) is an OpenModelica subsystem that provides efficient simulation and co-simulation of FMUs. Thus, models from non-Modelica tools compiled into FMUs can also be utilized and simulated. Furthermore, models that cannot be exported as FMUs can be integrated in a simulation using tool-to-tool co-simulation. This is provided via wrappers to models in tools such as ADAMS (MSCSoftware, 2020), Beast (Fritzson, 2018), Simulink (MathWorks, 2019a), Hopsan (Axin et al., 2010), or co-simulation of FMUs with embedded solvers. The system can optionally be used with TLM (Transmission Line Modeling) connectors, which provide numerically more stable co-simulation.
OMSimulator is provided together with a composite model editor integrated in OMEdit (Figure 13), which allows combining external models (e.g., FMUs for both model exchange and co-simulation) into new composite models, simulating them, and in some cases (for the TLM version) performing 3D animation. Composite models can be imported and exported using the SSP (System Structure and Parameterization) standard (Modelica Association, 2018; OpenModelica, 2020).

Parameter System Identification
OMSysIdent (2020) is a system parameter identification module built on top of the OMSimulator (2020) API. For estimating the sought parameter values, a system model needs to be provided as FMU, as well as respective measurement data of the system. The API of OMSysIdent is integrated with the scripting interfaces of OMSimulator and OpenModelica (using Lua or Python scripting). Internally, the module uses the Ceres Solver (Agarwal et al., 2018) library for the optimization task.
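The least-squares formulation underlying such parameter identification can be sketched with a toy example. This is not the Ceres-based implementation: the first-order decay model stands in for the FMU, the measurements are synthetic, and a simple grid scan stands in for the optimizer:

```python
import math

def simulate(k, times, x0=1.0):
    """Toy 'system model' standing in for the FMU: first-order decay."""
    return [x0 * math.exp(-k * t) for t in times]

times = [0.0, 0.5, 1.0, 1.5, 2.0]
k_true = 0.7
measurements = simulate(k_true, times)   # synthetic, noise-free for clarity

def cost(k):
    """Sum of squared residuals between measurements and model output."""
    return sum((m - y) ** 2 for m, y in zip(measurements, simulate(k, times)))

# Coarse scan over the parameter range, standing in for the Ceres
# optimizer used by OMSysIdent.
best = min((cost(k / 1000.0), k / 1000.0) for k in range(1, 2001))[1]
```

In the real module the residuals come from comparing FMU simulation output against measurement data, and Ceres minimizes the same kind of sum-of-squares cost with gradient-based methods instead of a scan.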

Parameter Search-Based Design Optimization
An important use for modeling and simulation is to improve a system design, usually before it is physically realized and manufactured. In this process it is customary to perform a number of simulations for different values of the design parameters, until a design has been obtained that best fulfills a given set of design criteria.
The traditional parameter-sweep-based design optimization performs many simulation runs while sweeping the desired parameters over an interval, i.e., performing a linear search, in order to find an optimal value of a goal function or goal variable. The drawback is the very large number of simulations that might be required. For example, three parameters, each with an interval subdivided into 100 steps, would require one million simulations to cover all combinations of these parameters.

The OMOptim Tool with Genetic Algorithms for Parameter Search
The OMOptim OpenModelica tool (Figure 14) (Thieriot et al., 2011) provides a GUI and uses genetic algorithms during parameter exploration as a search heuristic to find an optimal parameter setting.
Figure 15: Sensitivity of reactor temperature to a randomly varied heat transfer coefficient UA. Nominal parameters (solid), increase in cooling temperature Tc (red), and decrease in cooling temperature Tc (blue).

Parameter Sweep Optimization based on Python, Julia, or Matlab Scripting
With a simulation model expressed in Modelica, and a cost function expressed either in Modelica or in the scripting language, for-loops in a scripting language (Section 4.9) such as Python, Julia, Matlab, and to some extent Modelica, can be used to compute how the cost function varies with changing parameter values: the loop sweeps a parameter over a range, simulates for each value, and records the resulting cost in an array. This idea trivially extends to multi-parameter problems, including parameterization of inputs. To find parameters which, say, minimize the cost, one can then simply search for the minimal point in the cost array, or fit the cost array data to some analytic parametric function and find the minimum of that function.
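As an illustration, such a sweep can be sketched in Python; the quadratic cost below is a made-up stand-in for a real call into the simulation tool:

```python
def simulate(k):
    """Placeholder for a simulation run returning the cost for
    parameter value k; a quadratic with its minimum at k = 0.8
    stands in for the real model evaluation."""
    return (k - 0.8) ** 2

params = [0.01 * i for i in range(1, 201)]   # sweep k over (0, 2]
costs = [simulate(k) for k in params]        # one simulation per value
k_best = params[costs.index(min(costs))]     # parameter with minimal cost
```

In practice `simulate` would call, e.g., OMPython or OMJulia to set the parameter, run the model, and evaluate the cost from the result trajectory; the surrounding loop stays the same.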
To illustrate the idea of parameter sweep, Figure 15 shows how the solution of a reactor model (Seborg et al., 2011) changes for a randomly drawn heat transfer coefficient within a range.

Dynamic Optimization Using Collocation
Another approach, dynamic optimization using collocation (Figure 16) (Åkesson, 2008; Ruge et al., 2014; Houska et al., 2011), avoids the combinatorial explosion of multiple simulation runs, since only a single model evaluation is required to find an optimal trajectory. This comes at the cost of being very sensitive to the model: the methods typically do not converge except for small models, i.e., they are not robust. A collocation method formulates an optimization problem directly on a whole trajectory, which is divided into trajectory segments (Figure 16) whose shapes are determined by initially undetermined coefficients.
During the optimization process these coefficients are gradually assigned values which make the trajectory segments adjust shape and join into a single trajectory with a shape that optimizes the goal function under the constraints of fulfilling the model equations.
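A toy direct transcription can make this concrete. Everything below is made up for illustration: piecewise-constant control segments for the trivial system x' = u, minimizing the control energy subject to x(0) = 0 and x(1) = 1, with a projected-gradient loop standing in for a real NLP solver such as Ipopt:

```python
N, T = 20, 1.0
h = T / N
u = [0.0] * N                  # segment coefficients, initially undetermined
u[0] = 1.0 / h                 # arbitrary feasible start: all motion in segment 0

for _ in range(500):
    # Gradient step on the goal function sum(u_i^2) ...
    u = [ui - 0.1 * 2 * ui for ui in u]
    # ... then project back onto the terminal constraint x(T) = 1,
    # i.e. h * sum(u) = 1, by spreading the defect over all segments.
    defect = 1.0 - h * sum(u)
    u = [ui + defect / (N * h) for ui in u]

# Integrate the 'model equation' x' = u to recover the joined trajectory.
x = [0.0]
for ui in u:
    x.append(x[-1] + h * ui)
```

The loop shows the mechanism described above: the segment coefficients are gradually adjusted until the segments join into a single trajectory that satisfies the model equation and terminal constraint while minimizing the goal function (here the coefficients converge to the constant optimal control u = 1).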
The systems to be optimized are typically described using differential-algebraic equations (DAEs), which can be conveniently formulated in Modelica. The corresponding optimization problem can be expressed using graphical or textual formulation based on annotations.
Solution algorithms based on collocation methods are highly suitable for discretizing the underlying dynamic model formulation. Thereafter, the corresponding discretized optimization problem can be solved, e.g., by the interior-point optimizer Ipopt (Wächter and Biegler, 2006). The performance of the optimizer heavily depends on the availability of derivative information for the underlying optimization problem. Typically, the gradient of the objective function, the Jacobian of the DAEs, as well as the Hessian matrix of the corresponding Lagrangian formulation need to be determined. If only some or none of these derivatives are provided, numerical approximations are usually used. The generation of symbolic Jacobians is already available in OpenModelica (Braun et al., 2012; Shitahun et al., 2013) and the generation of symbolic Hessians is currently under development.
The main symbolic transformation steps during compile time and the dynamic optimization tool chain for OpenModelica with Ipopt are visualized in Figure 17.
The optimization can be called via a batch process using the following commands:

setCommandLineOptions("+gDynOpt");
loadFile("...");
optimize(nmpcProblem, numberOfIntervals=20, tolerance=1e-8);

The implementation has been tested with several applications and is demonstrated in the following using a combined-cycle power plant model, see Figure 18. The model contains equation-based implementations of the thermodynamic functions for water and steam, which in turn are used in the components corresponding to pipes and the boiler. The model also contains components for the economizer, the super heater, as well as the gas and steam turbines. The model has one input, 10 states, and 131 equations. Additional details on the model are presented in (Casella et al., 2011a).
The optimization problem is set up to use 50 collocation points that result in 1651 variables for the nonlinear optimal control problem and was solved on a PC with a 3.2GHz Intel(R) Core(TM) i7. The algorithm requires an initial trajectory of all problem variables, which is provided by a simulation where the rate of change of the gas turbine load is set to a constant value. The optimization results are shown in Figure 18 and correspond with the results that are discussed in detail in (Casella et al., 2011a). Here, the trajectories are smoother, and the performance has been improved substantially.

Parameter Sensitivity Analysis Based on Optimization
The sensitivity of nonlinear models in the form of ordinary differential equations is understood as the tendency to undergo qualitatively noticeable changes in response to shifts in the parameters used for the model setup (Khalil, 2002). Given a nonlinear model, there is an interest in automatically and efficiently detecting small sets of parameters that can produce strong changes in state variables when perturbed within ranges smaller than the uncertainty bounds. This promotes the design of reusable frameworks that treat the models as black boxes (without excluding the exploitation of internal knowledge of the model structure). The OpenModelica (2020) tool also includes a Sundials/IDA solver that calculates parameter sensitivities using forward sensitivity analysis. However, this approach cannot be applied to models that are not fully differentiable. The option of picking several values within a parameter interval and sweeping all possible combinations quickly leads to a combinatorial explosion that renders the approach infeasible. In the simultaneous approach, after defining the parameters and their intervals, an algorithm (typically an optimization-based strategy) finds a vector of smallest perturbation values that produces the largest impact on the state variables. The OMSens OpenModelica (2020, ch. Parameter Sensitivities with OpenModelica) sub-system is a tool to assess the sensitivity of Modelica models (Danós et al., 2017). OMSens uses different methods for sensitivity analysis, including robust, derivative-free nonlinear optimization techniques based on the CURVI family (Dennis Jr. et al., 1991).

Sensitivity Analysis of Modelica models
Unlike most previous approaches, OMSens offers a wide choice of computational methods for sensitivity analysis. Elsheikh (2012) uses automatic differentiation to augment the model with the sensitivity equations. This is similar to the IDA solver approach in OpenModelica (2020, ch. Parameter Sensitivities with OpenModelica), which is simpler to employ since it computes sensitivities numerically and directly. Wolf et al. (2008) compare several methods, including parameter-sweep and solver-based approaches using the DASPK solver (Petzold et al., 2006). Many optimization methods can be employed for sensitivity analysis. For example, Ipopt (Wächter and Biegler, 2006) is a well-known nonlinear optimization routine. Other methods are mentioned in Section 4.12 and Section 4.13, some of which are time-consuming or not robust.

Optimization-driven Sensitivity Analysis
Numerical aspects of the optimization algorithms need to be considered carefully, as they affect efficiency, robustness, and scope of applicability. A correct analysis should consider the combined, simultaneous effects of many perturbations of the parameters, something that is unmanageable due to the number of combinations and the impossibility of determining beforehand the size of those perturbations. Nonlinear optimization can be used to solve the problem by reformulating it as a model stability problem (Danós et al., 2017).
In the current version, OMSens implements a derivative-free optimization algorithm named CURVI (a curvilinear search method; Dennis Jr. et al., 1991), which is able to solve very difficult problems while allowing for custom interval constraints. There are three versions, CURVIF, CURVIG, and CURVIH, which use, respectively, function values; function values plus gradients; and function values plus gradients plus Hessians. All versions are globally convergent.
CURVIF is the variant currently adopted in OMSens; it does not necessarily employ the smallest number of function evaluations and can be seen as a trade-off between robustness and efficiency. Moreover, global optimization functionality is currently being added to OMSens.

OMSens Architecture
OMSens provides a flexible experimentation arena of different sensitivity analysis strategies for Modelica models. It provides modularity by being split into decoupled backend and frontend modules. It also provides flexibility since the backend is subdivided into modules that encapsulate responsibilities and expose clear invocation interfaces.
The OMSens modules can be divided into two groups: simultaneous sensitivity analysis and individual sensitivity analysis. In Figure 19 we find six main modules. In the simultaneous scenario, module 3 (Optimization) leads the workflow, invoking modules 1, 2 and 4 to perform an exploration of the parameter space. This requires successive simulations, requested from module 2 (Modelica) depending on the results of previous invocations, following a closed-loop strategy. In the individual scenario, modules 5 and 6 lead their own workflows, invoking single simulations with no dependency on the results of previous runs (open loop). Module 6 (Parameter sweeping) invokes simulations while sequentially picking values from a parameter space defined by the user.
A summary of the presented sensitivity analysis methods can be found in Table 6.
A sensitivity method measures the change of a chosen state variable with respect to changes in one or more parameters. OpenModelica (2020, ch. Parameter Sensitivities with OpenModelica) can calculate sensitivities using the Sundials/IDA solver, computing the derivatives of each state variable with respect to each top-level parameter during a simulation (the IDASens method), defined for all values of time. OMSens can launch experiments using the IDASens method.
Table 6: Summary of sensitivity methods for a generic state variable x with respect to the i-th parameter p_i.
OMSens also allows defining custom methods for comparing perturbed vs. unperturbed runs. For example, we use a Relative (Rel) method, defined as the difference in a state variable with and without perturbation of a parameter (x_per vs. x). It can be used to rank the parameters affecting a variable the most at a target year. We also define a Root Mean Square (RMS) method s_RMS(t_0, t_f) that calculates the root mean square of the differences σ(t_k) for integer years t_0 ≤ t_k ≤ t_f. It can be used to rank the most relevant parameters impacting a variable throughout a range of years.
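These two comparison metrics can be sketched as small functions over sampled trajectories. The exact formulas of Table 6 are not reproduced here, so the definitions below (a relative change at a target sample, and the RMS of per-sample differences) are plausible stand-ins, and the yearly trajectories are made up:

```python
def rel(x_per, x, t_index):
    """Relative change of the perturbed vs. unperturbed trajectory at a
    target time index (assumed form of the Rel method)."""
    return (x_per[t_index] - x[t_index]) / x[t_index]

def rms(x_per, x):
    """Root mean square of the per-sample differences over the whole
    range (assumed form of the RMS method)."""
    diffs = [a - b for a, b in zip(x_per, x)]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

x     = [100.0, 110.0, 120.0, 130.0]   # unperturbed yearly trajectory
x_per = [100.0, 112.0, 123.0, 135.0]   # trajectory with a parameter perturbed

rel_2100 = rel(x_per, x, 3)            # ranking score at the target year
rms_all  = rms(x_per, x)               # ranking score over the whole range
```

Ranking parameters then amounts to computing one such score per perturbed parameter and sorting, which is how the "top influencing parameters" in the case study below are obtained.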

Case Study -Sensitivity Analysis of a Complex Socio-Economic Model
World3 is a world-level socio-economic model available as a Modelica library (Cellier, 2008), here referred to as W3-Mod. It implements the World3 model as described in Meadows et al. (2004, Limits to Growth, 3rd edition). In Meadows et al. (1974), simplistic sensitivity experiments were considered, but W3-Mod lacks a comprehensive sensitivity study. The model has long been characterized as strongly nonlinear and unstable with regard to parameter changes (Castro, 2012; Scolnik, 1979; Vermeulen and de Jongh, 1976).
Applying optimization-based analysis, we used OMSens to analyze the state variable Population of W3-Mod at year 2100, perturbing the 12 most influential parameters found using the Rel analysis method for that state variable. We allowed all parameters to change within a ±5% interval (conservative bounds for socio-economic indicators). CURVI found three non-intuitive perturbation values compared to the results obtained with the Rel method alone (not shown here).
Parameters p_land_yield_fact_1 and p_avg_life_ind_cap_1 (Default land yield factor and Default average life of industrial capital) were perturbed in opposite directions compared to what an individual parameter-based approach would indicate. With these differences the impact is noticeable. Figure 20 shows several simulations used to interpret the results, all extended up to the year 2500 for the sake of readability. The black curve is the unperturbed run, the red curve is the run perturbed with the individual parameter-based strategy (Rel method), and the green curve represents the perturbations found by CURVI. We can see that for the target year 2100 CURVI found a parameter combination that raises the population considerably compared to what was achieved relying solely on the Rel method.
Verification with multi-parameter sweeping is available to automate the simulation and analysis of arbitrary combinations in a parameter space. The optimization-based method yields a substantial improvement compared to an individual parameter-based study. We now assess whether other perturbations offer extra insights. We create a space for both parameters using the same perturbation vector [−5%, 0%, +5%] and launch the analysis. The results in Figure 20 are denoted with (0) and (1) for the values of the land yield and industrial capital parameters. We observe that the population variable converges smoothly from the individual parameter-based to the simultaneous parameter-based study.
Figure 20: Standard Run and OMSens perturbations: Rel method, multi-parameter sweep and CURVI.

Model-based Control with Dynamic Optimization
Modelica has been applied to the formulation of dynamic optimization problems for complex physical systems for many years (Franke et al., 2003). The optimization methods of control vector parameterization and multiple shooting enable the efficient use of simulation models with numerical optimization solvers and the treatment of large-scale problems with parallel computing. Many successful industrial applications to model-based control of power systems underline their suitability. The Modelica technology and its implementation in OpenModelica have evolved continuously. FMI 2.0 for model exchange standardizes the solver interface to simulation models. It covers executable model code, an XML interface description, sparse model structures, and analytic Jacobian matrices. Multiple FMI instances of one and the same model enable parallel optimization. The OpenModelica C++ runtime was developed with a focus on the export of simulation models to real-time control applications. C++ provides improved type safety, deterministic memory management, and compiler optimizations, resulting in best-in-class execution times (Franke et al., 2015).

Synchronous Modelica for Model-Based Control
Continuous-time physical models are treated with discrete sample times in digital control applications. Modelica's synchronous language elements extension was introduced for precisely defining and synchronizing sampled-data systems with different sampling rates (Modelica Association, 2017). OpenModelica was the second Modelica tool to support this extension, implemented on top of both the OpenModelica C runtime and the C++ runtime. This can be used for model-based control, using Modelica and FMI (Franke et al., 2017).

Control and Optimization of Electric Power System as Exported FMUs
Figure 21 shows OMEdit with an example model of an electric power system, covering an off-shore interconnector combined with wind farms and a back-to-back HVDC coupling. The object-oriented model comprises 4 722 variables, 788 of which are non-trivial. OMEdit exports the model as an FMU 2.0 using the OpenModelica C++ runtime. The control task is to maximize the transfer capacity of the interconnector on top of collected wind power, considering active and reactive power flows subject to grid and voltage limitations. The prediction horizon spans 96 sample intervals. This results in 76 436 (97 · 788) non-trivial optimization variables. Table 7 shows the speedups achieved with different FMU features and optimization solver configurations. The reference configuration uses multiple shooting with 1 CPU, exploiting the sparsity that results from the time-staggering structure over 96 intervals, but neglecting sparsity inside model Jacobians at each time point. The Jacobians are obtained with the method of finite differences. A speedup of 1.9 is achieved with sparse Jacobians by exploiting information in the XML interface description. The speedup increases to 3.9 when additionally enabling OpenModelica's algorithmic differentiation and reuse of numerical factors of equation systems inside the model.
Parallel multiple shooting using a separate FMU instance for each CPU further increases the speedup to 6.3 with 5 CPUs and up to 7.6 with 20 CPUs. The speedup is still 7.0 with finite differences and 20 CPUs, albeit at the cost of twice the CPU usage, whereas algorithmic differentiation leaves more CPU capacity for other tasks running at the same time.

Model-based Control System Design
In addition to the scripting API commands mentioned in Section 4.9, the APIs OMPython (Lie et al., 2016), OMJulia (Lie et al., 2019), and OMMatlab (OpenModelica, 2020) also allow getting and setting linearization options, and carrying out linearization. The linearize method returns a tuple of linear time-invariant (LTI) matrices (A, B, C, D), which can be used further in various control tools, e.g., the MATLAB Control System Toolbox (MathWorks, 2019b), the Python Control Systems Library (Murray and Livingston, 2019), or the Julia Control Systems Toolbox (JuliaControl, 2019).
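How such LTI matrices feed a root-locus style gain study can be sketched without any control toolbox. The 2×2 matrices below are made up (they are not the reactor model's linearization); the closed-loop matrix for proportional output feedback u = -k·y is A - B·k·C, and its eigenvalues follow from the quadratic formula:

```python
import cmath

# Made-up 2x2 LTI matrices standing in for the output of linearize().
A = [[1.0, 1.0],
     [0.0, -2.0]]       # open-loop unstable (eigenvalues 1 and -2)
B = [0.0, 1.0]
C = [1.0, 0.0]

def closed_loop_eigs(k):
    """Eigenvalues of A - B*k*C for proportional output feedback u = -k*y."""
    M = [[A[0][0] - B[0]*k*C[0], A[0][1] - B[0]*k*C[1]],
         [A[1][0] - B[1]*k*C[0], A[1][1] - B[1]*k*C[1]]]
    tr = M[0][0] + M[1][1]
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    disc = cmath.sqrt(tr*tr - 4*det)       # quadratic formula for 2x2 eigenvalues
    return (tr + disc) / 2, (tr - disc) / 2

# Sweep the gain and keep the stabilizing values (all real parts < 0),
# which is what one reads off a root-locus plot.
stable = [k for k in range(0, 50) if all(e.real < 0 for e in closed_loop_eigs(k))]
```

For this made-up plant the locus crosses into the left half-plane at k > 2, so the sweep recovers the stabilizing gain range that a root-locus plot would show graphically.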
We consider a liquid reactor (Figure 22) taken from (Seborg et al., 2011), where we seek to control the effluent temperature by manipulating the influent cooling temperature.
The model is implemented in Modelica, and an object is created in a scripting language (here: Julia, using OMJulia). At the operating point, the linearize method is used to find an LTI approximation. Based on this LTI approximation, Julia's ControlSystems package can be used to produce a root locus plot (Figure 23) of how the closed-loop eigenvalues vary with the proportional gain of a proportional controller (P-controller). Based on the root locus plot, a suitable controller gain for the P-controller can be found, and likewise a suitable reset time/integral time for a proportional+integral controller (PI controller). The resulting PI controller gives quite good control of the reactor temperature. However, the resulting cooling temperature input dips down to minus twenty degrees Celsius, which is unrealistic. Constraining the cooling temperature to lie in the range of liquid water, together with integral anti-windup, leads to the results in Figure 24a.

Model-based Fault and Dependability Analysis
The purpose of reliability, and more generally, of dependability studies is to evaluate non-functional performances, that is, to calculate probabilities of undesirable events such as the failure of the mission of a system, or to estimate the probability distribution of quantities like total production over a given time interval, maintenance cost, number of repairs, etc. Usually, dependability studies are performed with dedicated methods and tools, based on discrete (and often even Boolean) models of systems: fault trees, Markov chains, Petri nets, BDMP (Boolean logic Driven Markov Processes), etc. EDF (Électricité de France) designed the Figaro modeling language in 1990 (Bouissou et al., 1991). This language generalizes all the above-cited models and allows capturing knowledge about categories of systems in libraries. It is the basis of KB3, the reference tool used for building fault trees and dynamic models for probabilistic safety analyses of nuclear power plants and for most other reliability analyses at EDF. In order to benefit from this type of analysis, a prototype for coupling Modelica models with their Figaro counterparts has been developed (Bouissou et al., 2016). This coupling presented two main issues: the systems are modeled with different degrees of granularity in the two languages.
Links are explicit objects in the Figaro world that can have properties and behavior, whereas the default port connections in Modelica are not objects. When mapping Figaro to Modelica this is handled by letting connections go through intermediary objects that contain that information.
Therefore, the mapping between Modelica and Figaro components is not one-to-one. Instead component types with a Figaro counterpart are identified in Modelica through special interfaces and this information is then used to export the Figaro model from the corresponding Modelica model.
The reliability analysis performed on the Figaro model can then be used to identify potential issues (for example, critical components), and this information can be fed back into the Modelica simulation (for example, to investigate in more detail the effect of the failure of a critical component).
Another approach has been to use Monte Carlo simulation on Modelica models. In order to do this, random failures (and possibly repairs) are added to the original simulation model, which is slightly modified in order to propagate the effects of failures. The article by Bouissou et al. (2014) explains how standard Modelica solvers can be used to simulate systems with failure rates that depend on continuous variables (like temperature, etc.). This was tested with a well-known benchmark that was first published in (Aldemir, 1987) and since then has been solved with many different methods and tools.
In the OpenModelica setting, scripting is used to run the simulation a large number of times (10 000 times in this example), which makes it possible to calculate, using simple statistical estimators, the probability of various undesirable events over time, as well as quantities such as average production, life-cycle cost, etc.
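The Monte Carlo idea can be illustrated with a toy sketch in plain Python (the temperature dynamics, the rate law, and all numbers below are invented for illustration; the actual studies script OpenModelica simulations of real Modelica models):

```python
import random

def failure_probability(n_runs, horizon=50.0, dt=0.05, seed=1):
    """Crude Monte Carlo estimate of the probability that a component
    fails before `horizon`, with a failure rate that depends on a
    continuously evolving temperature (toy first-order heating)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_runs):
        temp, t = 20.0, 0.0                      # deg C, seconds
        while t < horizon:
            temp += (100.0 - temp) * 0.05 * dt   # heating toward 100 deg C
            rate = 1e-3 * max(temp - 50.0, 0.0)  # hazard grows above 50 deg C
            if rng.random() < rate * dt:         # failure in this step?
                failures += 1
                break
            t += dt
    return failures / n_runs
```

With the numbers above, the analytic failure probability over the horizon is roughly 0.7, and the estimator converges to it as the number of runs grows.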

Data Reconciliation for Enhanced Accuracy of Sensor Data
The operation of power plants requires good-quality process data obtained through sensor measurements. Unfortunately, sensor measurements such as flow rates, temperatures, and pressures are subject to errors that lead to uncertainties in the assessment of the system's state. Operational margins are therefore set to take the uncertainties on the system's state into account. Their effect is to decrease production, because safety regulations put stringent limits on the thermal power of the plants. It is therefore important to compute the best estimates of the measurement uncertainties in order to increase power production by reducing operational margins as much as possible while still preserving system safety.
The best estimates can be obtained by combining data statistics with a priori knowledge of the system in the form of a physical model. Data reconciliation is a technique conceived in the process industry for that purpose. It is fully described in the VDI 2048 standard (VDI - Verein Deutscher Ingenieure, 2012, 2017) for the "Control and quality improvement of process data and their uncertainties by means of correction calculation for operation and acceptance tests". Up to now, it was only available in dedicated tools such as VALI from Belsim (2019), which require developing a specific model of the system under consideration. The main drawbacks are that such models are costly to develop and difficult to validate. A natural answer to this problem is to perform data reconciliation on Modelica models. However, with the current state of the art of Modelica tools, such a task is not possible, because an appropriate subset of the Modelica model of the system under consideration must be considered for data reconciliation. This subset contains the (presumably) exact physical laws that constrain the variables of interest to be reconciled. All other equations, such as boundary conditions or approximated equations that affect the variables of interest, must be removed. This subset is called the "auxiliary conditions" in VDI 2048. The auxiliary conditions are therefore underdetermined (more unknowns than equations) and cannot form a valid Modelica model for simulation.
OpenModelica is currently being extended to perform data reconciliation on regular Modelica models. To that end, the following developments are being made: New annotations are introduced to tag the variables of interest and the approximated equations.
An algorithm is being developed to automatically extract the auxiliary conditions.
The data reconciliation procedure itself is being implemented. It takes as inputs the variables of interest tagged by the user, their measured values (mean values and statistical confidence intervals) provided by the user, and the auxiliary conditions automatically extracted from the user's Modelica model. It produces as outputs the reconciled values and their reduced confidence intervals.
The GUI is being extended to handle inputs and outputs.
The main benefit is to be able to perform data reconciliation on existing validated models without having to modify them for that purpose. This is a considerable improvement with respect to the current state of the art.
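At its core, the correction calculation prescribed by VDI 2048 is a weighted least-squares projection of the measurements onto the auxiliary conditions. A minimal sketch for a single linear auxiliary condition (the flow values and variances below are invented; the real implementation handles general, typically nonlinear, conditions):

```python
def reconcile(x_meas, var, B):
    """Reconcile measurements x_meas (with variances var) against ONE
    linear auxiliary condition sum(B[i] * x[i]) = 0, using the weighted
    least-squares correction x_hat = x - V B^T (B V B^T)^-1 B x."""
    residual = sum(b * x for b, x in zip(B, x_meas))   # B x (imbalance)
    s = sum(b * b * v for b, v in zip(B, var))         # B V B^T (a scalar here)
    return [x - v * b * residual / s
            for x, v, b in zip(x_meas, var, B)]

# Mass balance flow1 = flow2 + flow3, i.e. x1 - x2 - x3 = 0:
x_hat = reconcile([10.2, 5.1, 4.5], [0.04, 0.01, 0.01], [1.0, -1.0, -1.0])
```

The less accurate measurement (larger variance) receives the larger correction, and the reconciled values satisfy the balance exactly.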

Artificial Neural Networks for Model Calibration and Augmentation
The pursuit of both model accuracy and simplicity generally results in a conflict. Ultimately, a model should meet its accuracy requirements whilst remaining as simple as possible. To validate a model, measurements taken from the original system can be used (Zhu et al., 2007). As an example, the trajectories from simulations of the model can be compared to the actual measurement data to validate the model behavior.
A more sophisticated approach may involve the extraction of high-level features from said trajectories. For example, a good model of a pendulum should exhibit frequency and amplitude of oscillation similar to those of the original pendulum.
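Such feature extraction can be sketched in a few lines of Python (a toy stand-in for the validation tooling; the sampled signal below is synthetic):

```python
import math

def freq_and_amplitude(ts, ys):
    """Rough oscillation features of a sampled trajectory:
    amplitude as max |y|, frequency from the zero-crossing count
    (two crossings per period for a zero-mean signal)."""
    amp = max(abs(y) for y in ys)
    crossings = sum(1 for a, b in zip(ys, ys[1:]) if a * b < 0)
    freq = crossings / (2.0 * (ts[-1] - ts[0]))
    return freq, amp

ts = [i * 0.01 for i in range(1000)]                        # 10 s at 100 Hz
ys = [0.3 * math.sin(2.0 * math.pi * 1.5 * t) for t in ts]  # 1.5 Hz swing
f, a = freq_and_amplitude(ts, ys)
```

Comparing such features between measured and simulated trajectories is more robust than a pointwise comparison when phases drift slightly.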
Inherently, after the modeling process, there are differences between the measurements of the model and the original system that, by magnitude or trend over time, cannot purely originate from measurement noise. Assuming negligible numerical error, these differences must stem from parameter errors or model errors. The former can be eliminated by choosing accurate physical parameters.
Model errors, however, cannot be completely avoided, and the question arises how their impact can be reduced. In practice it is usually not possible to distinguish between parameter and modeling errors, and hence, during model calibration, physically "wrong" parameters often compensate for modeling errors. This can be seen as a motivation for grey-box modeling, where reference measurements are not only used for model calibration, but also to augment the first-principles model (Modelica model) with a data-based model. That is, specific relations within a model shall be learned on the basis of reference data while keeping physically established relations unchanged. This is different from pure black-box modeling (Mohajerin et al., 2018). The idea of localized adaptation of single equations is especially applicable to object-oriented modeling (as in Modelica) and aims at keeping learning results understandable to the user by separating them from existing "white-box" relations.
In order to evaluate an artificial neural network (ANN) grey-box modeling approach in OpenModelica, a framework for the training of TensorFlow machine learning models (Tensorflow.org, 2019), as shown in Figure 25, was set up. The simulation data generated in OpenModelica using the reference model is used to train the ANN. This is achieved with the help of the Python modules OMPython (Lie et al., 2016), numpy, and TensorFlow. The integration of TensorFlow models in a Modelica model is done using the "external C" interface of Modelica.
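The grey-box principle, learning a single relation from simulation data while keeping the rest of the model physical, can be shown in miniature without TensorFlow (the drag law, its "true" coefficient 0.8, and the noise level are invented for illustration):

```python
import random

def fit_drag_coefficient(epochs=300, lr=1e-6, seed=0):
    """Learn ONE relation, a quadratic drag force F = c * v^2, from
    noisy 'simulation data' by stochastic gradient descent on the
    squared prediction error."""
    rng = random.Random(seed)
    data = [(v, 0.8 * v * v + rng.gauss(0.0, 0.1)) for v in range(1, 21)]
    c = 0.0                                  # the learned parameter
    for _ in range(epochs):
        for v, f in data:
            err = c * v * v - f              # prediction error
            c -= lr * err * v * v            # gradient step on err**2 / 2
    return c
```

In the actual framework the learned relation is a feedforward ANN exported through Modelica's external C interface; here a single coefficient suffices to convey the idea.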
The approach by Bruder and Mikelsons (2019) has been tested using a dynamic system model of a motorcycle from the Planar Mechanics library (Zimmer, 2012), extended by a simple driver model. Particular mathematical relations therein (drag force, dynamic wheel loads, and tire forces) were learned by feedforward ANNs trained using simulation data generated from the original model driving specific maneuvers (eight-shaped trajectories with and without acceleration). These machine-learnt relations were then used to replace the original relations in new grey-box motorcycle models.
This emulates a situation in which the original relations (e.g. those of a real system) are unknown but measurements indicate an interdependence between variables. The resulting grey-box model is then simulated in order to validate it. The validation maneuver is a double lane change and the simulation results are shown in Figure 26. It can be seen that learning the drag force and dynamical wheel loads worked out quite well, while the learned tire force model is not applicable at all.
Moreover, the extrapolation quality of the data-based models in this example is investigated.

Embedded System Support
OpenModelica provides code generation of real-time controllers from Modelica models for small-footprint platforms such as Arduino boards, or in tools for Rexroth PLCs (Menager et al., 2014). One example of code generation to small targets is the single board heating system (Figure 28) from IIT Bombay (Arora et al., 2010). It is used for teaching basic control theory and is usually controlled via a serial port (set fan value, read temperature, etc.). OpenModelica can generate code targeting the ATmega16 on the board (and other AVR microcontrollers, or STM32F4 (Berger et al., 2017)).
The program size is 4090 bytes including the LCD driver and PID controller (out of 16 kB flash memory available). The targeted ATmega16 has 1 kB SRAM available for data (stack, heap, and global variables); in this case, only 130 bytes are used for data variables.
To simplify interfacing of low-level devices from Modelica, OpenModelica supports the Modelica DeviceDrivers library, which is a free library for interfacing hardware drivers, developed primarily for interactive real-time simulations. The library is cross-platform (Windows and Linux). Using this library, modeling, parameterization, and configuration can be done at a high level of abstraction using Modelica, avoiding the need for low-level C programming. Another example using the embedded system support is the Arduino-controlled electromagnetic levitation system depicted in Figure 29. The application is based on a commercially available electromagnetic levitation kit by Zeltom LLC (2019), which is targeted at educational applications. The controller design is described in Thiele et al. (2019) and uses additional OpenModelica technologies such as interactive Julia scripting (Section 4.9) and the synchronous language elements extension (Section 4.15.1).

ModelicaML UML Profile and Eclipse Plug-in
ModelicaML (Figure 31) (Schamai, 2013; Schamai et al., 2014) is an Eclipse plug-in and Modelica-UML profile for the description of system architecture and system dynamic behavior. It is based on an extended subset of the OMG Unified Modeling Language (UML) as well as Modelica, and is designed for Modelica code generation from graphical models such as state machines and activity diagrams, supporting hardware/software co-modeling and system requirement verification against selected scenarios. The current prototype has not been updated recently and only works together with an old version of Eclipse.

Verification of Designs against Requirements using Simulation
Mastering the development of today's complex systems requires a structured approach called Systems Engineering. One of the activities involved is design verification, whose purpose is to determine whether a given design meets a set of specified requirements. Designs are often modeled and can then be simulated. In contrast, requirements are typically expressed in natural language to serve better communication between the different stakeholders involved. The drawback of using natural language (e.g., English) is that it may make the requirement specification prone to human error and ambiguity. To address these challenges, a modeling approach called vVDR (virtual Verification of Designs vs. Requirements) was developed that allows formalizing requirements and creating executable Modelica models, called requirement monitors, for each requirement statement (Schamai, 2013; Otter et al., 2015). Once connected to executable design models, requirement monitors show the status of requirement violation with at least three literals (not applicable, not violated, violated) at any simulated time instant, as well as the accumulated status (i.e., has been tested, has been violated, etc.).
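The three-valued monitor status can be sketched as follows (a plain-Python illustration, not the vVDR Modelica library itself; the pump requirement and signal names are hypothetical):

```python
from enum import Enum

class Status(Enum):
    NOT_APPLICABLE = 0   # the requirement's precondition does not hold
    NOT_VIOLATED = 1
    VIOLATED = 2

def monitor(precondition, condition, trajectory):
    """Evaluate a requirement monitor at each sampled time instant:
    NOT_APPLICABLE while the precondition is false, otherwise
    VIOLATED or NOT_VIOLATED depending on the requirement condition."""
    return [Status.NOT_APPLICABLE if not precondition(s)
            else (Status.NOT_VIOLATED if condition(s) else Status.VIOLATED)
            for s in trajectory]

# Hypothetical requirement: "while the pump is on, pressure >= 2.0 bar".
traj = [{"pump_on": False, "p": 1.0},
        {"pump_on": True,  "p": 2.5},
        {"pump_on": True,  "p": 1.5}]
statuses = monitor(lambda s: s["pump_on"], lambda s: s["p"] >= 2.0, traj)
```

The accumulated status ("has been violated") then follows by folding over the per-instant statuses.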
Furthermore, the vVDR modeling approach creates separate models for scenarios that can be used for testing different requirements. For this to be efficient it includes a way to automatically compose executable models each including the design to be tested, the scenario to be used, and the relevant requirement monitors. This is enabled by the binding concept and an algorithm that iterates over design alternatives and all available scenarios, and uses semantic equivalents of requirement monitor inputs/outputs to identify monitor models to be included (Schamai, 2013;Schamai et al., 2014).
The approach has been tested successfully in several case studies, including at Airbus, Scania (Liang et al., 2012), Électricité de France (Schamai et al., 2014), and the Swedish Road and Traffic Institute (Andersson and Buffoni, 2018). OpenModelica was used as the prototyping environment when running the case studies.
The vVDR approach was first made available in the OpenModelica ModelicaML Eclipse plug-in mentioned in Section 4.22, using a combination of UML and Modelica for formal requirement specification. Later, a Modelica-only version of vVDR was designed and implemented in OpenModelica, using OMEdit as the user interface, requirement specification in Modelica, and a vVDR Modelica library (Buffoni et al., 2017). That library enables defining binding information which is processed by the binding algorithm implemented in OpenModelica.
As mentioned, the vVDR simulation-based approach can be used to verify (Figure 32) design alternatives against sets of requirements using different scenarios. The tool automatically generates verification models in Modelica, performs the simulations, compares the results, and generates a report about verification results.

Parallelization and Multi-Core
Work on generating parallel code from Modelica models has been ongoing in OpenModelica for several years. Both automatic and explicit parallelization approaches have been investigated and implemented. Automatic parallelization of simulation code for Modelica models has been investigated in different contexts and from different perspectives (Aronsson, 2006; Gebremedhin and Fritzson, 2017). These parallelization approaches attempt to automatically detect, extract, and utilize potential parallelism in the equation systems generated from Modelica models.
Two recent approaches, hpcom and parmodauto (Gebremedhin and Fritzson, 2017; Gebremedhin, 2019), are similar in the sense that they both utilize equation-level processing of strongly connected components of large equation systems. The hpcom parallelization approach utilizes a semi-static cost estimation approach based on the previous execution history of a given model to effectively schedule and load-balance large simulation executions. This has the advantage of a very small overhead on the execution of a given simulation. However, it also means that simulations cannot effectively respond to changes in computational load and behavior during one simulation run. On the other hand, the parmodauto approach utilizes a runtime profiling and scheduling approach where each simulation run is monitored and load-balanced dynamically at runtime. This approach has two main advantages: no prior information is needed, and it is able to respond to simulations whose computational behavior changes dynamically. However, this also means that there is an additional overhead involved in performing the monitoring and profiling of simulation executions.
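The static, cost-based scheduling idea behind hpcom can be sketched as greedy list scheduling (heavily simplified: real schedulers must also respect the precedence constraints between strongly connected components; task names and costs below are invented):

```python
import heapq

def schedule(costs, n_workers):
    """Greedy cost-based list scheduling: repeatedly assign the most
    expensive remaining task to the currently least-loaded worker."""
    heap = [(0.0, w) for w in range(n_workers)]   # (accumulated cost, worker)
    heapq.heapify(heap)
    assignment = {}
    for task in sorted(costs, key=lambda t: -costs[t]):
        load, w = heapq.heappop(heap)
        assignment[task] = w
        heapq.heappush(heap, (load + costs[task], w))
    return assignment

# Four equation blocks with estimated evaluation costs:
plan = schedule({"a": 4.0, "b": 3.0, "c": 2.0, "d": 1.0}, n_workers=2)
```

With the costs above, both workers end up with an equal load of 5.0, which is the load balancing the cost estimates are meant to enable.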
An explicit parallelization approach and language extension, ParModelica (Gebremedhin, 2011;Gebremedhin, 2019), based on the OpenCL (2018) framework targeting shared memory multi-core processors is also currently available in OpenModelica. The explicit parallelization brings support for expressing parallelization directly in a Modelica model using language constructs that are partially based on familiar General-Purpose Graphics Processing Unit (GPGPU) frameworks such as CUDA (Nvidia, 2008) and OpenCL. The Modelica language is extended with constructs such as parallel for-loops and parallel functions among others. These constructs can be utilized to write an explicit parallel program in the algorithmic parts of a Modelica model. The OpenModelica compiler analyzes these constructs and generates OpenCL code that can be executed on general purpose CPUs, GPUs and accelerators without requiring any change to the original Modelica source code.
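A parallel for-loop requires that loop iterations be independent of each other, so they can execute in any order or concurrently. This requirement can be conveyed with a Python analogue (illustration only; ParModelica itself generates OpenCL, not Python threads):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(f, xs, workers=4):
    """Evaluate f over xs concurrently, in the spirit of a parallel
    for-loop: because the iterations share no state, the result is
    identical to a sequential loop, only (potentially) faster."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(f, xs))

squares = parallel_map(lambda x: x * x, range(8))
```

Executor.map preserves input order, so the parallel result can be checked directly against the sequential one.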

Selected Open Source Modeling and Simulation Applications
In the following, a few open source and/or crowd-sourced applications of OpenModelica are briefly presented.

Process Modeling Using Extended Petri Nets in Modelica
Process modeling is not the most common application for Modelica modeling. Fortunately, the open source PNLib library (Proß and Bachmann, 2012) has been developed in Modelica to support the xHPN (extended Hybrid Petri Net) formalism. This formalism supports modeling of processes that combine stochastic, deterministic, discrete, and continuous elements, which gives very powerful modeling capabilities. The library was later extended and generalized to version 2.0, and tool support in OpenModelica was implemented by Ochel (2017), including applications to biological processes. Figure 34 illustrates a restaurant process model with customers arriving stochastically, with ordering, waiting, serving, and eating.
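The token-game semantics underlying such models can be sketched with a minimal discrete stochastic Petri net in Python (a drastic simplification of xHPN, which additionally covers deterministic timing and continuous places; the two-transition "restaurant" net is invented):

```python
import random

def fire_step(marking, transitions, rng):
    """Fire one randomly chosen enabled transition of a discrete Petri
    net. Each transition is (inputs, outputs): place names whose tokens
    are consumed and produced. Returns False when nothing is enabled."""
    enabled = [t for t in transitions
               if all(marking[p] > 0 for p in t[0])]
    if not enabled:
        return False
    ins, outs = rng.choice(enabled)
    for p in ins:
        marking[p] -= 1
    for p in outs:
        marking[p] += 1
    return True

# Two customers pass through waiting -> eating -> done:
marking = {"waiting": 2, "eating": 0, "done": 0}
net = [(["waiting"], ["eating"]), (["eating"], ["done"])]
rng = random.Random(0)
while fire_step(marking, net, rng):
    pass
```

Whatever interleaving the random choices produce, both tokens eventually end up in the done place.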

Examples of Crowd-Sourced Applications with OpenModelica
One of the benefits of OpenModelica being open source software is that it is possible to engage the community to contribute appropriate content. An example of this is already provided in Section 4.8, where the Textbook Companion effort is explained. In this section, we describe a few crowd-sourced simulation activities using OpenModelica. Very few open source chemical process simulators are available to the community. The situation is worse when it comes to general-purpose dynamic simulation of chemical processes. A prerequisite for this is the ability to simultaneously solve all equations that make up the flowsheet or the circuit. As there could be tens of thousands of nonlinear equations in such problems, it is difficult to get contributions from the community unless proper tools are made available.
In order to make it convenient for the community to contribute, a library of thermodynamic models has been made available. This was achieved by a port of thermodynamics to Modelica, including component data and correlations for the calculation of properties (FOSSEE-OMChemSim, 2020).
Models of chemical process unit operations, the building blocks of chemical engineering operations, have been created using OpenModelica. Using these and the thermodynamics library models available in OpenModelica, chemical process flowsheets have been created (FOSSEE-Flowsheets, 2020). A schematic of a sample flowsheet is given in Figure 35.
With the above mentioned tools, it has become convenient for the community to create chemical process simulations and offer them as open source. A total of more than 50 chemical process flowsheets solved using OpenModelica are now available (FOSSEE-Flowsheets, 2020) and many more are in progress. Given that simultaneous solution of thousands of equations is a difficult task, training a large number of engineers on this important technology would have been impossible without an open source simulator such as OpenModelica.
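The "simultaneous solution" at the heart of such flowsheeting is Newton iteration on the full equation system. In miniature, for a hypothetical two-equation coupled balance (the flowsheet solvers do the same on thousands of equations with sparse linear algebra):

```python
def newton2(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method for a 2x2 nonlinear system f(x, y) = (0, 0),
    with jac returning the Jacobian entries (a, b, c, d) row-wise."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            break
        a, b, c, d = jac(x, y)
        det = a * d - b * c
        x -= ( d * f1 - b * f2) / det    # (x, y) -= J^-1 f
        y -= (-c * f1 + a * f2) / det
    return x, y

# Toy coupled balance: x + y = 10 and x * y = 21, solved simultaneously.
sol = newton2(lambda x, y: (x + y - 10.0, x * y - 21.0),
              lambda x, y: (1.0, 1.0, y, x),
              (1.0, 8.0))
```

From the starting point (1, 8) the iteration converges quadratically to the root (3, 7).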
Can the above approach be extended to other disciplines? We explored applying the same principles to power system simulation. Fortunately, a power system simulation library, OpenIPSL (Baudette et al., 2018), is already available. Students have contributed 35 power system simulation models (FOSSEE-Power, 2020), a sample of which is shown in Figure 36.

Related Work
Since OpenModelica is a Modelica environment, it has of course been influenced by other Modelica tools. The most influential of these tools is Dymola (Elmqvist et al., 1996; Brück et al., 2002; Dassault Systèmes, 2018), which was the first full-scale industrial-strength Modelica environment. Certain aspects have also been influenced by the MathModelica environment (Fritzson, 2006), later renamed and further developed into Wolfram System Modeler (Wolfram Research, 2018). The systems InterLisp (Teitelman, 1974), Mathematica (Wolfram, 2003), and ObjectMath (Fritzson et al., 1995) have influenced the design of OpenModelica as an integrated symbolic-numeric environment. Recently, the rapidly developing symbolic-numeric Julia language (Bezanson et al., 2017; Julialang, 2018) has appeared, with goals similar to those of MetaModelica regarding the integration and efficient execution of both symbolic and numeric operations.

Conclusion
OpenModelica has been developed into a powerful open source tool suite for modeling, simulation, and model-based development. It is a unique effort that provides a workbench for research on integration and development of methods, tools, and scientific knowledge in an open source setting. Still, some challenges are being worked on and remain to be addressed, for example very large models with several million equations. The debugger can be further improved to provide high-level, user-friendly diagnostic messages that help the user resolve run-time numerical errors, a difficult task particularly for novice users. Recently, new methods such as data reconciliation and the use of the TensorFlow machine learning framework for model calibration have been integrated, and there is room for more such efforts. Integration between tool functionalities can be further enhanced. Just-in-time compilation would improve the system's interactive properties. Two large recent OpenModelica efforts briefly described in this article are the new OMC frontend development for 100% compilation coverage and greatly enhanced compilation speed, and the OMSimulator tool for efficient large-scale FMI-based simulation. A new effort has just been started on designing and implementing an improved compiler backend with enhanced scalable symbolic algorithms in order to handle very large models. Recently, OMJulia has been introduced, which provides OpenModelica access from Julia. More powerful integration options between Julia and OpenModelica are also being considered in order to benefit from the Julia libraries and infrastructure.