Energy Speaker Series - Module 1: Transmission System Operations - Improving System Performance and Reliability
Barbara Weinzierl, TenneT TSO GmbH
Dr. Dominik Schlipf, TransnetBW
Simon Remppis, TransnetBW
Prof. Dr. Martin Wolter, Otto-von-Guericke University Magdeburg
Eric Glende, Otto-von-Guericke University Magdeburg
Overview
Session 1.2: Simulink-Based Load Frequency Controller Model for Continental Europe,
Barbara Weinzierl, System Operations, Forecast and System Balancing, TenneT TSO GmbH, Germany
Each control area in Continental Europe operates a load frequency controller (LFC), which automatically activates the secondary balancing reserves (aFRR) of the area. The presented MATLAB/Simulink model simulates the effect of imbalances in the system on frequency by simulating the reaction of the grid, the LFC of each control area, and the units providing balancing energy. It can be used for various kinds of analysis, e.g. to analyze mitigation measures against deterministic frequency deviations or to support discussions on requirements for FCR delivery.
Session 1.3: Test-Driven Development of a TSO-TSO Optimization Platform,
Dr. Dominik Schlipf and Simon Remppis, TransnetBW, Germany
Within the PICASSO project, the European TSOs are developing a common platform that aims to optimize the activation of aFRR (automatic frequency restoration reserve) in real time.
The project faced the typical challenges:
- A tight schedule for implementing the solution
- Several different stakeholders
- A need for flexible development due to uncertainty about the final requirements
- A complex algorithm, where the feasibility and side effects of changes were not always obvious
Within the project, a test-driven development method with a MATLAB prototype was used. In this agile development method, “problems” are formulated, the expected behavior is discussed, and a test case is developed. In parallel, the test case is used to develop and implement a solution in the algorithm. All test cases together serve as a test suite that is continuously used to validate any changes to the prototype. This approach allows the software solution to be developed in a very flexible way. In addition, it is very easy for stakeholders to formulate requirements for the solution, as the formal mathematical specification is replaced by the formulation of test cases.
The presentation starts with a general introduction to the PICASSO project itself and the main principles of the optimization platform. After this, we explain the main challenges and the advantages of an agile software development method. We show how we used the method in practice, focusing on the chosen approach, the challenges we faced, and how we developed the prototype. At the end of the presentation, we show how we used the MATLAB prototype as a validation tool during the implementation of the optimizer in the target IT environment.
Session 1.4: Remote Control of a Fully Featured Transmission System Operator’s SCADA System,
Prof. Dr. Martin Wolter and Eric Glende, Otto-von-Guericke University Magdeburg, Germany
Keeping transmission systems safe and secure is an increasingly challenging task for grid operators due to increased complexity and faster system dynamics. Expert and assistant systems are therefore a promising approach to support operational staff in decision making and to relieve their daily routine by automating subtasks. Before these assistant systems can be put into operation and implemented in the SCADA system, they need to be developed, tested, and possibly modified offline in a lab environment.
Therefore, OVGU, together with PSI AG, has developed a MATLAB interface for the PSIcontrol SCADA system. This interface allows MATLAB to fully remote control PSIcontrol and execute every task a human operator can perform. Using this flexibility, it is possible to rapidly develop and test strategies for grid automation, decision making, and grid optimization directly in MATLAB on a small scale. A solution is then scaled to industry level and tested using the interface to the fully featured PSIcontrol SCADA system. In this lab environment, even large bulk power transmission systems can be simulated and controlled in real time. By doing so, the development of new operational strategies is much more flexible, needs less development time, and can be more easily integrated into existing SCADA systems. Thus, grid operators can benefit earlier from new operational innovations.
About the Presenters
Session 1.2
Barbara Weinzierl received her diploma in Renewable Energy Systems Engineering at TU Dresden and joined TenneT TSO GmbH in 2018. As a specialist for operational balancing management, she is involved in national and international balancing projects and is responsible for data analysis, modelling, and simulations. One of her MATLAB projects is the model of the European load frequency controllers, which she hosts for an ENTSO-E working group.
Session 1.3
Dr.-Ing. Dominik Schlipf is Team Lead of System Balance at TransnetBW, responsible for the operational processes related to system balance (e.g. LFC, PICASSO). Dr. Schlipf received his doctoral degree in Engineering Cybernetics from the University of Stuttgart in 2012. Between 2012 and 2020, he held several positions, including technical project lead for several conventional and renewable power plants and system operation engineer at TransnetBW, leading the implementation of the PICASSO IT solution together with the IT supplier.
Simon Remppis is a system operation engineer at TransnetBW leading the development of the optimization algorithm for the real-time exchange of balancing energy. Mr. Remppis studied Engineering Cybernetics at the University of Stuttgart and was a research associate there from 2012 to 2016, where he conducted various research projects in the field of power system operation and dynamics.
Session 1.4
Prof. Dr. Martin Wolter received his doctoral degree and venia legendi from Leibniz University Hannover in 2008 and 2012, respectively. He then was head of the Operational Concepts and System Security team at 50Hertz Transmission GmbH. Since 2015 he has been head of the Chair of Electric Power Networks and Renewable Energy at Otto-von-Guericke University Magdeburg. His research interests are the modelling and simulation of electric power systems, power system operation, and power system state identification. He is a member of VDE ETG and a senior member of IEEE PES.
Eric Glende received his master's degree in Electrical Energy Systems – Renewable Energy Sources at Otto-von-Guericke University Magdeburg (OVGU) in Germany. He is now working as a scientific assistant at the OVGU Chair of Electric Power Networks and Renewable Energy and is pursuing his PhD on operational concepts for VSC-HVDC systems in the German transmission system. He is a member of the IEEE Germany Section, an active member of the IEEE Student Branch Magdeburg, and a member of the VDE. His interests are HVDC systems in the power network and grid calculation.
Recorded: 10 Nov 2021
Welcome, everyone. Thank you for having me here at the Energy Speaker Series, and thank you for your interest in today's presentation about a MATLAB Simulink model of the load frequency controllers in the synchronous area of Continental Europe. I am Barbara Weinzierl, and I will first give you an introduction to the topic of balancing and to the company TenneT. Then I will give an overview of the Simulink model, show you how we set it up in MATLAB and Simulink, and then show some examples.
TenneT is the transmission system operator of the Dutch grid and one part of the German grid. And as a TSO, our responsibility is to operate the high-voltage grid and, as part of that, to keep the balance of demand and production at every single point in time. To assure this, we have several balancing products contracted, which react within different time frames. If there is, for example, an outage of a power plant, then the frequency will start dropping immediately. The frequency containment reserve will then start to react within seconds, will be fully activated within 30 seconds, and will prevent the frequency from dropping further. But it will not bring the frequency back to 50 hertz, because it is activated proportionally to the frequency deviation. So there will always be a lasting deviation.
For this, there is the automatic frequency restoration reserve, the aFRR. It is activated centrally in each control area by the load frequency controllers, which are PI controllers that bring the frequency back to 50 hertz. And if there is a longer-lasting frequency deviation, then there is also the mFRR, the manual frequency restoration reserve, which we activate manually to free up the reserve for the automatic frequency restoration reserve and to assure that it can also react in further time frames.
The load frequency controllers that activate the aFRR form the main part of the model that we created in Simulink, and the goal is to analyze how they react to imbalances and how they interact with each other. Regarding interaction, there is one important international platform, the International Grid Control Cooperation, IGCC. The purpose of this platform is to net opposing balancing demands.
So if, for example, there is one control area that has a positive balancing demand and at the same time another control area in Continental Europe has a negative demand, then the platform nets these two opposing demands, and only the rest of the needed aFRR is activated. The international cooperation on balancing will be extended in the near future, for example by the PICASSO project. But I think my colleagues from TransnetBW will tell you more about this in the next presentation.
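As a rough illustration of the netting step (with invented numbers, not the actual IGCC algorithm), the principle can be sketched in a few lines of MATLAB:

```matlab
% Toy illustration of imbalance netting between two control areas
% (invented numbers; the real IGCC algorithm is more involved):
demandA = +300;                            % area A is short by 300 MW
demandB = -200;                            % area B is long by 200 MW
netted  = max(min(demandA, -demandB), 0);  % 200 MW can be netted
remA    = demandA - netted;                % +100 MW of aFRR still to activate in A
remB    = demandB + netted;                % 0 MW left to activate in B
```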
So, I will next give you a short overview of the model, of its components, how the different components can be adjusted, and how it is implemented in Simulink. We have 28 control areas implemented in the Simulink model. They all follow a modular design, where you can choose what kind of control area you want to have for each country. And we have, as I said, the imbalance netting implemented.
You can individually tune different parameters; I will show you that on the next slide. The simulation is done at a time resolution of usually one second, so we do not simulate any transient behavior in this model. Maybe that is something that should be made clear. And as input you can use any imbalance for each control area, so that can be a drop in generation or that can also be real historical data.
Each block of the model that you just saw on the previous slide is one control area. They all have the same structure, but you can choose different kinds of elements. So let's start with the input. As I just said, you can use a single load step as the input or imbalance, you can generate the input data artificially yourself, or you can use real historical data.
Then we have the simple version of the grid model with the inertia and the grid self-regulation effect of the loads and the FCR, the frequency containment reserve. And then we have the load frequency controllers, where you can also individually adjust the structure of the controller, the parameters of the controller, the threshold of the controller, and some other things.
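To make this grid-model block concrete, here is a minimal sketch of such an aggregated frequency model at one-second resolution. All parameter values and the simple Euler integration are illustrative assumptions, not taken from the actual ENTSO-E model:

```matlab
% Minimal sketch of the aggregated grid model at 1 s resolution
% (all parameter values are assumptions for illustration):
f0   = 50;      % nominal frequency in Hz
Sgen = 150e3;   % total generation of the synchronous area in MW (assumption)
Ta   = 10;      % system acceleration time constant in s (assumption)
kpf  = 0.01;    % self-regulation of the loads, 1 %/Hz (assumption)
lam  = 15000;   % FCR network characteristic in MW/Hz (illustrative)

dt = 1;  N = 600;
df = zeros(1, N);                      % frequency deviation in Hz
dP = zeros(1, N);  dP(30:end) = -800;  % 800 MW generation outage at t = 30 s

for k = 2:N
    Pfcr  = -lam * df(k-1);            % FCR, proportional to the deviation
    Pself = -kpf * Sgen * df(k-1);     % self-regulation effect of the loads
    % simplified swing equation of the whole synchronous area
    df(k) = df(k-1) + dt * f0 / (Ta * Sgen) * (dP(k) + Pfcr + Pself);
end
plot((0:N-1)*dt, f0 + df), xlabel('t in s'), ylabel('f in Hz')
```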
Then we have the activation of the balancing providers, which can happen either via a merit order list or pro rata. And then we have the activation behavior of the providers themselves: you can adjust the full activation time, and you can implement a linear ramp or a PT1 reaction.
And this is how it then looks in MATLAB Simulink. We have the disturbance, the imbalance in the control area, as input. We have the self-regulation effect and the primary control, the FCR, which are both dependent on the frequency, an input from the upper level of the model. This imbalance is then the input to our load frequency controller, which is basically a PI controller with anti-windup reset. The output of the controller is distributed among the aFRR providers according to the order book, the merit order list.
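A hedged sketch of such an LFC block, a discrete PI controller with anti-windup, could look as follows. All parameter values and inputs are illustrative assumptions, and sign conventions are simplified:

```matlab
% Hedged sketch of the LFC block: discrete PI controller with anti-windup.
K    = 1500;    % K-factor of the control area in MW/Hz (assumption)
Kp   = 0.2;     % proportional gain (assumption)
Ti   = 120;     % integrator time constant in s (assumption)
Pmax = 2000;    % contracted aFRR of the area in MW (assumption)
dt   = 1;

% invented example inputs (in the model these come from the grid block):
dPtie = [zeros(1,30), -800*ones(1,570)];   % tie-line deviation in MW
df    = linspace(0, -0.05, 600);           % frequency deviation in Hz

ace = dPtie + K * df;                      % area control error
I = 0;  Pafrr = zeros(size(ace));
for k = 1:numel(ace)
    u = -(Kp * ace(k) + I);                % PI output counteracts the ACE
    Pafrr(k) = min(max(u, -Pmax), Pmax);   % limited aFRR request
    if Pafrr(k) == u                       % anti-windup: freeze the integrator
        I = I + dt / Ti * ace(k);          % while the output is saturated
    end
end
```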
And then the providers have their activation behavior, which results in the actually activated aFRR, which is then again an input to the imbalance of the control area. The whole model is set up in a modular way. So, if you want to start a simulation, you will first choose the configuration that you want. Then the script starts pre-processing, chooses the blocks you selected from the library, and sets the parameters that you chose.
Then the simulation runs in the model, and afterwards we have several scripts for plotting and analyzing the results. And this already leads me to the presentation of some examples. The simulations are not only done for TenneT: the model is hosted for an international working group within ENTSO-E, and the simulations that we do also try to support the questions that are discussed in this working group.
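Before turning to the examples: as a hedged sketch, a driver script for the modular workflow just described might look like this. The model, function, and configuration names are placeholders, not the actual tool:

```matlab
% Hypothetical driver script for the modular simulation workflow
% (all names are placeholders, not the real ENTSO-E tool):
cfg.areas              = {'DE_TenneT','FR','NL'};   % control areas to build
cfg.controllerType     = 'PI_antiWindup';           % block from the library
cfg.activation         = 'meritOrder';              % or 'proRata'
cfg.fullActivationTime = 300;                       % s
cfg.input              = 'historicalData';          % or 'loadStep'

prepareModel(cfg);                  % pre-processing: pick the library blocks
                                    % and set the chosen parameters
out = sim('lfcModel', 'StopTime', '86400');  % run the Simulink model
plotLfcResults(out);                % post-processing and plotting scripts
```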
So, let's start with a simple example. Here we have as input the outage of an 800 megawatt power plant, really from one second to the next, which immediately makes the frequency start dropping. In the second graph you see that the inertia reacts immediately, as does the self-regulation effect of the loads. Then, after some seconds, the FCR reacts and keeps the frequency from dropping further. And after 30 seconds to 1 minute, the aFRR starts reacting and brings the frequency back to 50 hertz.
When we use real historical data, it looks a little bit different. Here we used as input the aFRR demand of each control area that is part of the IGCC, so the international netting of demands which I explained in the beginning. From these demands we can calculate the output of each LFC and then the reaction of the aFRR providers in each control area.
After the reaction of the providers, there is a remaining area control error in each control area, which leads to the frequency deviations plotted in the last graph. The good thing about real input data is that you can compare the calculated output data to the actual reaction of the system. And that is what we did: we compared the calculated frequency, the output of the model shown here in blue, to the real frequency of the same time slot, the red line in the first graph.
When I saw that for the first time, I have to say I was quite happy, because it was already close. We could then use the data of a longer time period to adjust the parameters of the model. So we adjusted the available aFRR of each control area, and we adjusted the response behavior of the aFRR and FCR providers. This led to a response behavior of the model that was quite close to the real frequency of the system.
I think that shows that, of course, the model has many assumptions and does not represent all details of our electricity system. But in the time frame of balancing, it represents the reaction of the system quite well. As a last example, I want to show you some simulations that we did on the topic of deterministic frequency deviations.
You can already see them here in the slides. Usually at the full-hour changes we have frequency deviations that are quite big, especially in the evening hours. That is a topic the ENTSO-E working group is also working on, and that is why we looked at it more closely.
To explain the deterministic frequency deviations: they come from the market schedules, because the demand for electricity changes constantly over time, but on the electricity markets the energy is traded in one-hour or 15-minute time slots. And as the market participants want to follow their schedules, the change of generation will be quite big at the hour changes or the 15-minute changes.
This leads at the full hours to a big difference between generation and demand, which then causes these deterministic frequency deviations. To investigate this topic more closely, and also to test which measures could be effective against deterministic frequency deviations and which not, we created input data for our Simulink model from public data. To do this, we made several assumptions. For example, it takes longer for a coal-fired power plant to change its schedule than for a hydro plant: in this example, the coal-fired power plant takes 14 minutes to change its schedule, and the hydro plant ramps up or down within four minutes.
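As a small sketch, such a schedule change can be turned into technology-specific in-feed ramps using the ramp durations assumed above (all other values are invented):

```matlab
% Sketch: turning one schedule change into technology-specific ramps
% (step size and timing invented; ramp durations as assumed above):
dt = 1;  t = (0:3599)';                 % one hour at 1 s resolution
tStep = 1800;  dP = 500;                % 500 MW schedule change at t = 1800 s
ramp  = @(T) min(max((t - (tStep - T/2)) / T, 0), 1);  % ramp centered on tStep

Pcoal  = dP * ramp(14*60);              % coal: 14 min to change its schedule
Phydro = dP * ramp(4*60);               % hydro: 4 min ramp
plot(t, [Pcoal Phydro]), legend('coal','hydro'), xlabel('t in s'), ylabel('P in MW')
```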
This leads to an in-feed profile for each control area, which we then subtracted from a constantly changing load. From that we get, as input, the disturbances for each control area that you see in the upper graph. From these, the model calculates a frequency deviation which looks similar to the frequency deviations that we see in real life, but it is bigger than the usual ones. This could be because of the quality of the input data, but also because there is much more happening in the grid than only these schedule changes.
But still, you can use the simulations to investigate mitigation measures against the deterministic frequency deviations and see which of them could be effective and which not. This was my presentation about the MATLAB Simulink model of the load frequency controllers. I hope it was interesting for you, and I am looking forward to answering your questions now.
Hello, everyone. Welcome to our presentation about the test-driven development of a complex IT system in the context of the PICASSO optimization system. My name is Dominik Schlipf, and I am representing TransnetBW. TransnetBW is one of the four German TSOs; we are located in the south of Germany, in Baden-Württemberg. Within TransnetBW, I am leading the team System Balance, which is responsible for the LFC controller of the TransnetBW control area. We also host at TransnetBW several optimization systems that work on a European-wide level.
One of these optimization systems is the PICASSO system, the optimization system for the secondary control. Today I will give this presentation together with my colleague Simon Remppis, who will now introduce himself.
Good morning from my side as well. My name is Simon Remppis. I am working in the system operation department at TransnetBW, in the team System Balance. In this position I have been strongly involved in the development of the PICASSO platform, which will be the European platform for the cross-border exchange of balancing energy from secondary reserves. I have been mainly responsible for and leading the development and international coordination of the optimization algorithm which will be used by this platform. During this development phase, we have used MATLAB and a test-driven development approach based on MATLAB to develop and align the concepts for the optimization algorithm within TransnetBW, but also with other TSOs and other European stakeholders.
Before explaining how we have set up this test-driven toolchain based on MATLAB, Dominik will first give you an introduction to the PICASSO project and the cross-border exchange of balancing energy using European balancing platforms.
Yes, thank you, Simon. We want to start with the status quo; we call it the old world. At the moment, the European system, the ENTSO-E system, is based on several independent LFC areas. Each TSO operates one of these LFC areas and controls its system balance on its own. So in theory it is possible that several LFC areas are short, so they need to activate positive balancing reserves, while some LFC areas are long, so they have to activate negative reserves. This leads to counteracting activations, which are not efficient.
Furthermore, we have different types and different dynamic behaviors of balancing service providers: we have hydraulic units, thermal units, and renewable units that are all used to provide balancing reserves. We also have different cost structures, different market approaches, and different pricing methods. In some countries we have marginal pricing; in others, pay-as-bid.
So, in the old world we had local balancing systems without cross-border exchange, and therefore not a very efficient system. Now coming to the new world: the electricity balancing guideline defines that all TSOs are required to implement European platforms to enable and also to optimize the cross-border exchange of balancing energy.
With this cross-border exchange of balancing energy, we aim to increase efficiency, on the one hand by netting imbalances to avoid the counteracting activation of reserves. For example, if RTE has a demand for control reserves while Germany has too much, then we can net both positions with each other, and no reserves are activated. Furthermore, we aim at an activation of balancing energy following a common merit order list, so we will activate the BSP that is, at the moment, the cheapest.
All these efficiency gains have to be realized safely, so we need to ensure operational security and the safe operation of the whole grid. That is why, during this cross-border exchange of balancing energy, it is necessary for us to consider the available cross-border transmission capacity: we are not allowed to transport or exchange more energy than is possible. Then there are several other requirements; for example, we also have to use HVDC links. And with this whole system, we create a common European market for balancing energy.
Now, getting a little bit into the details: as you might know, in the European system we have four different kinds of balancing reserves. On the one hand there is the FCR, and on the other hand the aFRR, which in Germany is called the secondary reserve; it is an automatically activated balancing reserve, activated per LFC area. Then we have the mFRR, with manual activation, also per LFC area. And in some countries of Europe we have the replacement reserves, also manually activated balancing reserves, which come more or less after the mFRR.
So, we have different balancing processes, and for all these balancing processes we are now implementing different platforms in the European market to exchange the respective product on a cross-border basis. In this short introduction I want to focus on the first platforms, the imbalance netting platform and the aFRR platform. Both platforms are hosted by TransnetBW. Our IGCC, the imbalance netting platform, is already in place and in operation, and the PICASSO system is currently being implemented.
The IGCC, the imbalance netting platform, is used to avoid counter-activation from one LFC area to another. And the PICASSO system is used to optimize the cross-border exchange of aFRR.
And now, how does this work? We always talk about optimization and different schemes, but how do we do this optimization? For that, I want to hand over to Simon to introduce the optimization systems and the optimization principles.
So, the European regulation, the electricity balancing guideline, clearly requires that we implement the cross-border exchange of balancing energy in a way that minimizes costs and maximizes social welfare. This clearly calls for an optimization system, which actually solves an optimization problem in each market time unit, or in each optimization cycle.
Here in this slide you see how the optimization system for PICASSO and the IGCC is set up and what the major inputs and outputs are. The major inputs are, of course, the offered bids, which contain information about the power and price offered by balancing service providers all across Europe. However, it is of course not possible to transfer balancing energy all across Europe without also considering network constraints. Therefore the available transmission capacities are also an input to the optimization problem.
And finally, of course, the imbalances, which are caused by previous market time frames and which we have to control and regulate to zero, are a major input to this optimization problem. As already said, these inputs can be divided into two different groups, because we basically implement a system for two different purposes. The aFRR or PICASSO system is used to optimize the cross-border exchange of balancing energy coming from aFRR, while the IGCC is used only to minimize the counter-activation of balancing energy, without taking the flexibilities and bids offered by BSPs into account. Therefore, of course, the offered bids are not an input to the IGCC but only to the PICASSO optimizer.
Then on the right side you see the outputs. The optimization system determines which bids are or need to be activated to ensure a cost-optimal activation of balancing energy all across Europe in each optimization cycle. And of course the prices, which will be used for the financial settlement of the energy flows and the remuneration of balancing service providers, are also an output of the optimizer.
And finally, of course, the cross-border flows of energy are an output, not only of the PICASSO optimizer but also of the IGCC optimizer, because the imbalance netting process, which minimizes counter-activation, will of course also lead to cross-border flows of energy. And in the middle there is the central optimization system. For PICASSO and the IGCC we have the constraint that this optimization system needs to operate on a very short time frame, because the demands and imbalances in the European power system change in a very volatile manner, which means that this optimization system needs to find a solution every couple of seconds.
It also needs to find a very robust solution, because the system basically determines the energy flows in the European power system immediately, and this of course demands a very robust system. These two requirements, the short optimization cycle and the need for a very robust optimization system, have led to the general design principle of using a linear optimization problem. This somewhat limits the possibilities that we have in the formulation of the optimization problem, but at the same time it makes sure that the problem can be solved robustly in each short optimization cycle and that the exchange of balancing energy is optimized in a short time frame.
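As a hedged illustration of this design principle, a linear activation problem in this spirit can be written with linprog from the Optimization Toolbox. This is a toy two-area example with invented bids and capacities, not the actual PICASSO formulation:

```matlab
% Toy two-area linear activation problem (all data invented; the real
% PICASSO formulation is far more detailed):
price  = [20; 35; 25];     % bid prices in EUR/MWh
pmax   = [100; 150; 120];  % offered bid power in MW
area   = [1; 1; 2];        % area in which each bid is located
demand = [180; 60];        % aFRR demand per area in MW
atc    = 80;               % available transmission capacity area 1 -> 2 in MW

% decision variables x = [bid activations; flow from area 1 to area 2]
f   = [price; 0];                       % the flow itself has no cost
Aeq = [double(area == 1)', -1;          % area 1: activation - export = demand
       double(area == 2)',  1];         % area 2: activation + import = demand
beq = demand;
lb  = [zeros(3,1); -atc];
ub  = [pmax; atc];
x   = linprog(f, [], [], Aeq, beq, lb, ub);  % cost-optimal activations and flow
```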
On the next slide we show the process which was used for the development and the international coordination of the optimization approaches in the beginning, before we actually started with a structured agile approach based on MATLAB. At the beginning we already had the requirement that this optimization approach needs to be aligned with all stakeholders, which are basically all European transmission system operators.
In the beginning, this coordination and joint development of the optimization algorithm was mainly done in a qualitative manner, based on discussions among experts from all European TSOs. So basically we discussed the inputs and outputs and the functionality of the optimization problem. We also discussed several problems or issues which could occur, like, for example, the issue that in some cases the optimization system also led to the deliberate counter-activation of bids if it was economically beneficial, which is not really intended from an operational perspective: we do not want to implement a market algorithm in the first place, but an approach to balance the system. So we wanted to ensure that balancing energy is only activated when there is actually an imbalance in the system.
So, at the beginning of this development phase, we thought about such problems. We then formulated simple examples, described possible behaviors, and facilitated or initiated discussions among various stakeholders: in the first place, of course, in the TSO community within ENTSO-E, but we also included the regulatory authorities and balancing service providers from all European countries in these discussions. In the beginning, all these discussions were on a very qualitative level, based on simple examples that we made up basically manually. After these discussions we then decided on the intended behavior of this optimization algorithm.
Sometimes we tested the feasibility of the chosen approaches using, I would say, standalone simulations. In some cases we already applied MATLAB for these feasibility checks, but not in a structured manner, basically by creating simple simulations for each individual case. And the idea was that once we had all this figured out, we would then formally describe the intended behavior of the optimization system and pass it to an IT supplier for the actual implementation in a system which can be used operationally.
Within the project it then became evident that this unstructured approach was not suited for the development of such a complex concept. So, the approach that we finally chose was based on a typical pattern for test-driven development, which is a common practice in agile software development. Basically, we developed a MATLAB prototype which covered 100% of the functionality of the final product, of the final intended algorithm.
The development of this MATLAB prototype followed a clear pattern. This pattern required that for each change we first of all created test cases which covered the intended functionality. After adding such a test case to our test suite, we first made sure that its functionality was not already covered by the existing algorithm. Then we ran the test case in MATLAB and confirmed that, without the adaptation of the algorithm, it failed. Then we wrote the MATLAB code to cover this new functionality.
We then ran all tests again and adapted the code until not only the new tests but also the already existing tests in the test suite passed. Finally, we refactored the code and validated each change that we made by running the whole test suite again. This pattern was repeated until we finally arrived at a situation where all scenarios were covered with test cases and all tests passed.
Of course, to allow such a pattern of test-driven development in an efficient manner, some requirements must be fulfilled. First of all, tests must be automated in order to be able to repeat the testing on a highly frequent basis. And the code must be developed in a modular way to also allow specific tests of certain parts or units of the code.
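A hedged sketch of one such automated test, written with the MATLAB unit testing framework, could look as follows. The scenario, the field names, and the prototype entry point runOptimizer are invented placeholders:

```matlab
% Hedged sketch of an automated test for the red-green pattern described
% above; saved in its own file NettingTest.m. All names are invented.
classdef NettingTest < matlab.unittest.TestCase
    methods (Test)
        function opposingDemandsAreNetted(tc)
            in  = struct('demand', [300; -200]);  % two LFC areas, invented values
            out = runOptimizer(in);               % hypothetical prototype core
            % expected results are stated explicitly before the code is run
            tc.verifyEqual(out.nettedPower, 200);
            tc.verifyEqual(out.remainingDemand, [100; 0]);
        end
    end
end
```

The whole suite can then be re-run after every change, for example with `results = runtests('NettingTest');`.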
This modular design of the MATLAB prototype can be seen in this picture, which outlines its structure. The MATLAB prototype consists of multiple units. First of all, on the left side, you have the input and output unit, which allows for the definition of test scenarios using multiple interfaces. In the middle there is the actual algorithm core, which contains sub-modules for the pre-processing, the actual optimization problem, and the post-processing of the data.
We also developed an extensive graphical representation, a graphical user interface, which allowed us to view the optimization results and also to test the human-machine interface and the interactions between the optimization system and the users.
The MATLAB prototype then allowed us to adapt the development pattern in the way shown here. The left path is basically the usual process that was used with external stakeholders, with other TSOs and market participants. This approach was augmented by the prototype, shown in the middle column, and by the possibility to not only discuss certain problems based on simple examples made up manually, but to actually simulate them.
I will not go through each individual line of the MATLAB code shown here, but basically this code shows how a test case can easily be developed and implemented in MATLAB with the test-driven development approach. Here you see how a new test case is defined, and in the last three lines of MATLAB code you see the expected results, which are explicitly stated in MATLAB. This slide also shows how the graphical user interface of the MATLAB GUI can directly be used to visualize the test cases, here with a very simple scenario covering only three different LFC areas and the flows in between.
This MATLAB prototype could not only be used to verify the correct implementation based on the command line interface; it also included a more sophisticated graphical user interface, which allows us to visualize more complex scenarios. Here in this example, you see a scenario with five different LFC areas: all the nodes in this graph are LFC areas, and the arrows in between show the real-time flows of energy between the LFC areas.
On the right side, you see a graphical representation of all the bids by balancing service providers which are considered in this test case. And now I will pass back to Dominik, who will explain how we extended this approach in order to also include our IT supplier in this test-driven development.
Yeah, thank you, Simon. As you see on this slide, with the headline "taking the next step": the development with the MATLAB prototype was very successful in our view. That is why we also took the next step and talked with the IT supplier, and together we decided to use this MATLAB prototype for the IT implementation as well. So what we did: we took our test cases, we took our optimizer, and together with the IT supplier we then always validated the implementation done by the IT supplier against this MATLAB prototype.
So we made sure that the new development was always in line with the development we had already done with the MATLAB prototype. In this way, we closed the chain from the stakeholder discussions, through the tests on the MATLAB prototype, to the final product with the IT supplier. And in the end, after all tests were successful, we had our finished and complete IT development.
So, in principle, the IT supplier configures and runs on its system the same test cases that we run on the MATLAB prototype. We then always compare the simulation results, and if the results differ, we analyze the failure and compare the steps. That is a very good part of working with the MATLAB prototype, because we could also trace intermediate steps of the MATLAB algorithm and compare them with the steps taken in the final solution.
And, of course, after all the results were the same, we compared the results of the scenarios, changed some input data, changed some parameters, and also ran mass tests and random test cases to cover a wide range of possible scenarios.
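Such a comparison can be automated along the following lines; the file formats and function names are assumptions for illustration, not the actual validation tooling:

```matlab
% Sketch of an automated comparison between the prototype and the IT
% system's results (all names and formats invented):
sc  = loadScenario('case_042.mat');             % shared test scenario
ref = runOptimizer(sc);                         % MATLAB prototype result
ext = readSupplierResult('case_042_it.json');   % exported IT-system result

tol = 1e-6;
if any(abs(ref.activations - ext.activations) > tol)
    compareIntermediateSteps(ref, ext);  % on mismatch, trace step by step
end
```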
What are the lessons learned from our development and from the development within the PICASSO community, both while building the MATLAB prototype and while doing the comparison and the creation of the final IT solution? One very big lesson learned is that the creation of test cases is critical.
For example, if you create a test case, it is perhaps normal human behavior to always take positive values. But it is also important to change the signs: if you have one scenario in the positive direction, also test this scenario in the negative direction. We also learned that when you create test cases, you tend to focus on extraordinary or critical scenarios, but it is also necessary to use test cases that cover just normal states.
Another lesson we learned is that if you have a complex problem, the expected results are not that easy to determine. One of our rules during the test-driven development was that we always have to define the expected results before we run the code. So it is important to reduce the complexity of the test cases, and it is also critical to create test series: you start with a base test case and then change parameters or inputs step by step and compare the results.
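A sketch of such a test series, where a base case is varied one input at a time (all helper names are invented):

```matlab
% Sketch of a test series: one base case, varied one input at a time
% (defineBaseCase and runOptimizer are hypothetical helpers):
base      = defineBaseCase();
atcValues = [0 50 100 200];     % MW, the single parameter being varied

for k = 1:numel(atcValues)
    sc     = base;
    sc.atc = atcValues(k);      % change exactly one input per step
    res(k) = runOptimizer(sc);  %#ok<SAGROW> compare against step k-1
end
```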
Also very important, and one of the big lessons learned, is that you have to ensure transparency during the development phase with the IT supplier. It is necessary to share results, including intermediate results of the prototype, and also to share step values and so on. Another important factor was that we also received feedback and improvement ideas from the IT supplier, and these were included in our prototype development as well. And finally, perhaps the most critical part: the IT supplier must commit itself to this development approach. It is not the case that you have a specification ready before you start implementation; the IT supplier must be ready to change the implementation quickly during the development.
Now, coming to the final product: the PICASSO system is currently implemented at TransnetBW, and with this test-driven development approach we succeeded. We now have a successful implementation of the optimization system. It has been running for several months now in parallel operation, and it runs very stably and is very robust.
And now we have more or less two products: on the one hand the final productive product, and on the other hand this MATLAB prototype, which we can now use for further development. For example, during operation we can use the MATLAB prototype to validate the results of the IT platform, we can pre-test changes of the parametrization, and we can analyze different scenarios. More or less, we can use the MATLAB prototype to develop the system further and then go through this whole approach again for small improvements: add new test cases, test them on the MATLAB prototype, then give this to the IT supplier, and the IT supplier can then develop the final product further.
And we can also combine this result with other MATLAB-driven scenarios that are developed by other TSOs. For this, I hand over to Simon again.
So, in the previous presentation of today's session, Barbara Weinzierl from TenneT has shown you how MATLAB is supporting all European TSOs in simulating and evaluating the load frequency control system which is used in the whole synchronous area of Continental Europe. For this purpose, TenneT has developed a MATLAB simulation approach. And as Barbara has already presented, this simulation model already contains a simplified representation of the IGCC, the module which prevents the counter-activation of bids.
However, with the introduction of PICASSO, which is scheduled for next year, the behavior of the whole load frequency control in Europe will again change significantly and will be significantly determined by the PICASSO optimization system. For this reason, we are in close cooperation with TenneT in order to make sure that the MATLAB prototype, which in its original implementation basically covers 100% of the actual functionality of the productive PICASSO system, can be integrated with the MATLAB model developed by TenneT.
The input and output modules of the MATLAB prototype that we developed allow for this easy integration, which will then also allow us, in the future, to evaluate the behavior of the whole load frequency control in Europe using MATLAB, taking the cross-border exchange of balancing energy into consideration.
So, this brings us to the end of our presentation. Of course, we are available for any further questions. Thank you for your attendance.
Yeah, thank you. Happy to answer your questions.
Hello, everyone, from my side, and welcome to the MATLAB Energy Speaker Series. I am Martin Wolter from Otto von Guericke University in Magdeburg. I would like to talk about the control center that we have at our chair, what we are doing with it, and how we integrated MATLAB into our system operation concepts.
So the first question is: why does a university need a control center? We are doing very urgent research for TSOs who want to optimize their system operation processes now, and the German energy transition poses very urgent questions, so we need very fast solutions. That is why we decided to go with a system that is also available on the grid operator side, so that prototyping and the implementation of a solution are much faster than they would be with completely different systems.
Therefore, we do our research and development with MATLAB, and we had to connect MATLAB to the control center so that we can directly test our solutions in a real-world environment. So what are our research topics? First of all, we work on the optimization of system operation concepts. This includes, for example, ancillary services, like the very famous Redispatch 2.0, and also emergency procedures; we are dealing with voltage and rotor angle stability. And finally, we are doing research on grid restoration and the automation of grid restoration to speed up these processes.
Speed-up is usually achieved by automation. In the past we had just a few power plants, and all of the operators knew what to do. Nowadays we have not only a couple of conventional power plants, but thousands of renewable sources which have to be organized somehow. This cannot be done manually anymore; therefore, we need automation.
So we need tools that support decision making, and we need grid protection systems that adapt to fast-changing situations. Speed-up is also achieved by automating such processes. One important point is that we do not want to completely automate the entire system operation; we are talking about partial automation, so the final decision will always be made by a staff member of the respective grid operator.
Finally, especially in the low and medium voltage levels, the grid is not fully observed. Therefore, we also work on state identification procedures: how do we get information out of the grid, for example with an underdetermined measurement vector, how can we improve the state estimation, and which data exchange processes are needed so that all players in the grid or on the energy market can take full advantage of the power system?
I talked about the concepts and the methods that we develop together with grid operators. State identification is one of the biggest problems; we want to utilize PMU data, for example. We also want to improve grid dynamics, which is a very urgent problem in the near future, especially when power electronic devices come to dominate conventional spinning generation.
I also talked about grid automation, which has a very nice name: system cockpit. As things get automated, we also need forecasts of the future grid situation. So we need to optimize forecasts for load and for generation, especially for renewables. And this has to be done on a regional basis: having only a very general look at the entire control zone is not sufficient; we need information, at best, for each substation.
Finally, we talk about flow-controlling assets. If we want to improve the utilization of our grid, we need flow-controlling assets, like the phase-shifting transformers that have already been installed at the German-Polish border, for example. We also talk about HVDC lines, which will come to Germany in the near future. These new flow-controlling assets need to be implemented and optimized for correct utilization. We have a special focus on the interdependence of these devices: how does one flow-controlling asset interfere with another, a phase-shifting transformer with an HVDC line, for example?
And finally, when we look at the entire system, we see that interests may be divergent. On the one hand we have cooperation between grid operators and market players, but on the other hand competition, so we are talking about a cooperative-competitive system here. One example of competing interests is loop flows, which are a highly discussed topic in Brussels right now. Grid losses will always be optimized in one control zone, which then burdens another control zone; therefore, grid losses are also a competing interest. We simulate this using multi-agent systems.
Finally, we need to talk about the interconnection between the physics and the market, so control zones and prices. The interference of physics and market is also very important. Therefore, we are doing research on how the size and dimension of control zones or price zones affect grid utilization, remedial actions, and load frequency control. In Europe, the control zones work together in the so-called IGCC, so the region where the load frequency control is harmonized is increasing. This has strong effects on system dynamics, and therefore we need to do research on this as well: how far can we push the borders?
So, how do we do this? Here you can see the general architecture of our system. In the center we have a control system from PSI, which is installed at a lot of TSOs in Germany, and this control center has a couple of interfaces. Let's start at the bottom of this page; this is where the data comes from. On the one hand, we can use live information from our campus, we can implement data from a hardware-in-the-loop system, and we can model different grids, starting from the European transmission system down to, let's say, a local utility here in Magdeburg, where we model the grid and do the simulations in a conventional MATLAB environment. Or we can use several other data sources.
This data is integrated into the control system using the classic IEC 60870-5-104 data interface, although we can use other, more modern interfaces as well. The most important part is in the middle of this picture. On the right-hand side you can see the conventional SCADA operator, a human who sits in front of the control system and interacts with it. What is new is the left side: here we remotely control the PSI system using a sort of MATLAB operator.
So, this is a special MATLAB instance that talks to the control system over a special interface that PSI programmed for us, through which we can remotely control everything that a human operator could also do with the system. This MATLAB operator gets information on the grid state out of the control system, does some optimization and decision making, and then returns the decision back to the control system, which then executes it. At the top of the picture, you can see the output, which is, on the one hand, the main screen that you can see behind me. We do backups and archive all of our information, and we also want to present this SCADA system online, so in the future there will be a web interface where you can have an online view of what is happening here in our system.
OK, so we now want to go into detail on how MATLAB and the PSI system interact and how all of this works. For that, I hand over to Eric, who will talk about the details. Thank you for your attention.
Hello, ladies and gentlemen. My name is Eric, and I want to give you some more details about our system architecture. I want to start with MATLAB, which we use mainly for grid modeling.
So, if we start creating a grid for some validation, we start with open-source data from OpenStreetMap, for example, from the ENTSO-E platform, or from others. First, we draw a grid like the one shown behind me. Then we parameterize all the assets: the transformers, the breakers, the loads, the lines, the generation units. And we also parameterize the communication.
For this, we use the IEC 60870-5-104 protocol, which is the state of the art in the German grids. From PSIcontrol we can then do a CIM export. In MATLAB we have a converter to read all the data which we need for the power flow calculations: the topology, the parameters, and the information for the communication. Then we can start to create scenarios for this grid. So we have the topology, and we now need some scenarios for loads and generation.
Then we can start the process. In this process, PSIcontrol is used for monitoring and control, so we can send commands for breakers, for setpoints of generating units, and so on. Through a Java API, we can read those commands in MATLAB and do a plausibility check, for example that the data is correct. Then we execute those commands, do another power flow calculation, and send the resulting measurements back to PSIcontrol so that they can be monitored.
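A hedged sketch of this monitoring-and-control loop; the wrapper functions around the Java API are placeholders, not the actual interface:

```matlab
% Sketch of the command-processing loop between PSIcontrol and MATLAB
% (readCommand, isPlausible, applyCommand, runPowerFlow, and
% sendMeasurements are hypothetical wrappers around the Java API):
while true
    cmd = readCommand(api);              % breaker or setpoint command
    if isempty(cmd)
        pause(0.1);  continue            % nothing received, poll again
    end
    if isPlausible(cmd, grid)            % e.g. check that the data is valid
        grid = applyCommand(grid, cmd);  % switch breaker / apply setpoint
        res  = runPowerFlow(grid);       % recalculate the grid state
        sendMeasurements(api, res);      % back to PSIcontrol via IEC 104
    end
end
```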
In between, there is this Java API, which is used for the communication between both systems, mainly for different types of variables with different telegrams. It takes the information about our grid from the MATLAB workspace, puts it into the correct structure of the telegram, and sends it to PSIcontrol. One of the main reasons why we use MATLAB is that it is very easy to read and import these files and work with them.
You have a very clear structure of the data in the workspace. It is very easy to create interfaces to other programming languages, like Java, of course, but also C, C++, and Python. It is easy to create graphical user interfaces, like the weather map, which you unfortunately cannot see very well in the top left. And the performance of our power flow calculations is very high. Time is a key perspective: we have to calculate even huge grids very fast in order to send the measurements to PSIcontrol at specific intervals and monitor them.
And this is how it may look in the end. This is an example from the TSO 50Hertz Transmission, and it consists of 100 substations with 208 lines. You can see in red the 380 kV level and in green the 220 kV level. On this grid, we usually work with one specific scenario where we have a very high penetration of wind in the north, especially from offshore wind farms, and in the east, especially from very large onshore wind farms. We have a high penetration of photovoltaics in Saxony and, additionally, one brown-coal-fired power plant.
Also, there are a lot of transits in this grid, mainly from west to east. This scenario and this topology can be used for different algorithms and for their validation. And now I come to the key feature of our system: the MATLAB operator.
The MATLAB operator interacts with PSIcontrol through a remote procedure call, so we can use get and set commands to directly interact with it from MATLAB. We can ask the system for the results from the state estimation, the contingency list from the contingency analysis, the breaker states, the setpoints of generating units, and so on. So you can see the MATLAB operator as a digital twin, enriched with additional information from, for example, weather forecasts of different providers, flexibility platforms, or additional measurements from phasor measurement units or wide area monitoring systems.
With this additional information, you can implement algorithms, assistant systems, in the MATLAB operator and directly send commands, set setpoints and switching states in PSIcontrol, or just send some information to it. We have an assistant system, and the operator in the control center can use it to operate the system in the best way.
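Illustratively, the use of this remote-control interface could look as follows; the method and function names are assumptions, since the actual PSIcontrol interface is proprietary:

```matlab
% Illustrative sketch of the MATLAB operator using the remote procedure
% call interface (rpc, its keys, and decideRemedialAction are invented):
se  = rpc.get('stateEstimationResult');   % current grid state
nm1 = rpc.get('contingencyList');         % results of the (n-1) analysis

if ~isempty(nm1)
    % own MATLAB algorithm deciding on a remedial action
    action = decideRemedialAction(se, nm1);
    rpc.set('setpoint', action.asset, action.value);  % write it back
end
```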
And now I want to introduce an assistant system which we programmed at our university: a decision-making tool for a flexibility platform. We take the information from the grid, the results from the contingency analysis, and additional information from a flexibility platform. Then we can do optimal power flow calculations in our MATLAB operator, which can be used to rank all flexibilities so that they can be used in the best way. They can be displayed in PSIcontrol, and the operator can use them just by clicking; new setpoints are then automatically sent to the assets which offered the flexibility. This can be used to reduce contingencies.
Yes, so this was a small demonstration of our system. I hope you enjoyed it. And we are now here for questions. Thank you.