The Infosys Utilities Blog seeks to discuss and answer the industry’s burning Smart Grid questions through the commentary of the industry’s leading Smart Grid and Sustainability experts. This blogging community offers a rich source of fresh ideas on the planning, design and implementation of solutions for the utility industry of tomorrow.


June 28, 2012

Water UK Resilient Water Resources Event

Last week I attended the Water UK Resilient Water Resources event. Many assume that, with so much rain, the UK has no issues with water resources; however, with such a high population density, that is not the case. For example, south-east England has less water available per person each year than many Middle Eastern countries.

The event provided good insights and interesting debate. Richard Ackroyd (Chief Executive, Scottish Water) and Chris Loughlin (Chief Executive, South West Water) gave overviews of some of the initiatives their organisations are pursuing. Trevor Bishop (EA Head of Water Resources) discussed green growth, resilience and lowest sustainable cost, and said regulators must be less myopic. Richard Benyon MP (Defra Minister) covered some of the proposals in the new Water Bill, such as more competition and choice, reforms to abstraction (this may take some time) and, most interestingly, changes to planning law to ensure water sustainability is covered.

The debates and discussions covered water security and sustainability, and I facilitated a good discussion on how to engage stakeholders. The general view was that more should be done at source: working with farmers to reduce harmful runoff, with manufacturers to use water more effectively, and so on. A recurring theme was that 'one size does not fit all', and solutions must be tailored to needs.

The final debate was 'this house believes a measured pace of environmental change is the only realistic option'. Good arguments were put forward on both sides; however, the final vote was a resounding 'no', and there was a clear view in the room that action needs to be taken now in order to ensure a sustainable future for our water resources.

It is up to all of us to deliver these objectives in a truly sustainable way.

June 13, 2012

European Gas to Power (II) - squeezing margins

Gas producers across Europe and beyond are today presented with unprecedented growth opportunities. Major supply projects, including the operational Nord Stream and the proposed South Stream, Nabucco and Trans-Anatolian pipelines, are evidence of the expected need for gas in the coming decades. Liquefied Natural Gas continues to grow and increasingly exerts its position as the key swing gas source across the Atlantic and Pacific basins. And, though shale gas is not yet fully accepted as a primary source of energy in Europe, its exploration in the US is already affecting the global gas market, and in the coming years the drilling process may yet find a better and safer form that gains public buy-in. A recent report by the IEA suggests that gas could replace coal as the second largest primary energy source (after oil) and that world demand for gas could rise by 50% within 25 years. Similarly, ExxonMobil predicts that gas will rise to 34% of Germany's energy mix by 2040 (from 20% today). But as one of the gas industry's key customers, the gas to power sector is keenly aware that gas demand growth means upward pressure on gas prices.

As described in our last blog, the sector is facing pressure from renewables too and is faced with a bleak picture of declining margins in operating its plant assets. Relying on revenues over marginal running periods requires a seismic shift in thinking for generators, and investments in forecasting technology and fast-response market modelling tools are crucial (see also High Performance Computing). So too, though, is investment in engineering innovation, and some of the current research projects being funded by European governments make interesting reading. BMWi's "FleGs" research project is developing a CCGT plant scheme whereby, instead of the plant being forced to produce heat and power at the same time, heat is stored in an integrated high-temperature thermal storage system. This means that at night-time, when baseload and wind generation is often sufficient to meet electricity demand but district heating is still required for customers, the costs of idling the gas turbine can at least be avoided. Similarly, new turbine advances such as the 50 MW/minute ramp rate of GE's FlexEfficiency turbine may provide much needed faster load response and thereby allow CCGT plant to react better to intra-day price signals. Increasingly, money will be made and lost in these critical ramping periods and in offering backup services to TSOs for ramping capability - whoever can adapt quickly and remain agile across engineering and IT technologies is likely to hold the competitive advantage.


June 5, 2012

European Gas to Power - squeezing margins

The European gas to power sector is traditionally made up of asset-backed utilities and independent power producers operating combined cycle gas turbines (CCGTs) and, to a lesser extent, open cycle gas turbines (OCGTs). CCGTs are more efficient and have therefore historically run closer to a baseload running regime with load factors > 50%, with the gas burned often purchased via oil-linked gas contracts. While these asset owners are increasingly trying to restructure their gas contracts and tap into cheaper market gas (at least for the time being), the load factors they are achieving are under extreme pressure from growth in the renewables sector. The explosive growth of wind and, more recently, photovoltaic solar in Europe is displacing gas in the 'merit order'. Often these forms of energy (solar during the summer and in the daytime, wind during the winter and at night), together with cheaper-to-run nuclear and coal sources, are sufficient to meet demand, and gas-fired power plants are as a result left underutilized. Instead, CCGTs are forced to perform a load smoothing role to compensate for wind patterns or cloud cover, and are burdened by poor response rates when supporting renewables variability.
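The merit-order displacement described above can be sketched in a few lines of Python. The fleet, capacities and marginal costs below are entirely hypothetical, purely to illustrate how near-zero-cost renewables push CCGT out of the dispatch stack:

```python
# Minimal merit-order dispatch sketch. All names, capacities (MW) and
# marginal costs (EUR/MWh) are invented for illustration only.

def dispatch(generators, demand_mw):
    """Dispatch plants cheapest-first until demand is met.

    generators: list of (name, capacity_mw, marginal_cost) tuples.
    Returns {name: dispatched_mw}.
    """
    schedule = {}
    remaining = demand_mw
    for name, capacity, _cost in sorted(generators, key=lambda g: g[2]):
        take = min(capacity, remaining)
        schedule[name] = take
        remaining -= take
    return schedule

fleet = [
    ("wind",    10_000, 0),   # near-zero marginal cost, dispatched first
    ("solar",    5_000, 0),
    ("nuclear", 10_000, 10),
    ("coal",    18_000, 30),
    ("ccgt",    25_000, 55),  # oil-linked gas makes CCGT expensive to run
]

# With 50 GW of demand, CCGT still picks up the residual load...
print(dispatch(fleet, 50_000)["ccgt"])  # -> 7000 (MW)

# ...but double the wind/solar capacity and CCGT is squeezed out entirely.
greener = [(n, c * 2 if n in ("wind", "solar") else c, m) for n, c, m in fleet]
print(dispatch(greener, 50_000)["ccgt"])  # -> 0
```

Real markets add bids, constraints and interconnectors, but the squeeze on load factors follows directly from this cheapest-first ordering.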

The situation is of course exacerbated by the global recession. Financing for new CCGTs is based on expected power price proceeds; if they aren't running, they are not making any money. Indeed, the combination of strong renewables output and high gas prices has led to a crisis in Italy's heavily gas-fired generation sector, to Statkraft putting a CCGT on cold reserve in Germany and to talk of plant mothballing in the UK. The UK's Department of Energy and Climate Change (DECC) has partly acknowledged the current market issue and called for evidence on the role of gas in the electricity market. Additionally, a capacity payments mechanism has been proposed by the DECC in its draft bill for Electricity Market Reform. Assuming these availability payments don't constitute illegal state aid, they may in future at least go some way to providing non-operational incomes for gas plant. If they are not brought into legislation, however, and assuming continuing bullish gas prices and renewables growth, CCGT-backed utilities and operators will increasingly be forced to innovate around the inherent flexibility, or optionality, of the plant for those more limited hours that plant are able to run.

June 1, 2012

Data Quality: The difference between OMS/DMS success and failure

While implementing an Outage Management System (OMS) or Distribution Management System (DMS), high-quality data is critical for the product to do its job correctly. GIS and CIS data are the backbone of the system. The data represents the customers and the devices that serve those customers, and it provides the network connectivity and the general information dispatchers use to make decisions. If the quality of the data is low, the ability of the OMS and DMS to be productive is low.

Visually, the OMS user must be able to clearly see the correct network information and the annotations. Also important is the correct mapping of customers to the correct premise and meter; correct mapping enables correct outage prediction and correct counts of customers predicted out. In addition to CIS, the OMS and DMS depend on the GIS data model in many ways.

Without sufficient data quality, many different defects will be visible throughout the system, but they will all have the same root cause: data. Bad data undermines OMS users' confidence in the system. If the data shows results or information incorrectly, users will think the system is flawed when the system isn't the real problem, and their negative feelings become a new challenge to the project's success. Typical symptoms include outages predicting to the wrong location (or not at all), deenergized areas, loops, parallels, missing critical information such as key device attributes, incorrect voltage values (resulting in incorrect overload and violation warnings), a model that looks incorrect in the viewer, or device names appearing incorrectly in tools.

Data problems are like the proverbial onion: there are many layers, and when one data issue is "peeled away" the next layer of problems can be seen. To have a successful project, it is critical to start reviewing the data early and to perform reviews over multiple iterations.
It isn't necessary for all configuration to be complete in order to review data quality, so don't wait for the system to be 100% configured to start. Data review needs to begin early in the project so that, when users begin working with the system, it looks reasonable. The data must also be frozen with enough time to create test scripts for System Integration Testing, because changing or updating the data model can and will affect how outages predict. Good data allows testing to expose functional and usability issues instead of hiding them behind poor data.
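One early, cheap check in this spirit is a CIS-to-GIS cross-check: every customer should map to a meter and to a supply device that actually exists in the network model. A minimal sketch follows; the record layout and field names here are hypothetical, not any particular product's schema:

```python
# Hedged sketch of a customer-to-network mapping check. The dict keys
# ("id", "meter_id", "transformer_id") are invented for illustration.

def find_mapping_errors(customers, network_devices):
    """Return (customer_id, reason) pairs for records that cannot be
    placed on the network model."""
    device_ids = {d["id"] for d in network_devices}
    errors = []
    for cust in customers:
        if not cust.get("meter_id"):
            errors.append((cust["id"], "no meter"))
        elif cust.get("transformer_id") not in device_ids:
            errors.append((cust["id"], "unknown transformer"))
    return errors

devices = [{"id": "TX-101"}, {"id": "TX-102"}]
customers = [
    {"id": "C1", "meter_id": "M1", "transformer_id": "TX-101"},  # OK
    {"id": "C2", "meter_id": "M2", "transformer_id": "TX-999"},  # bad link
    {"id": "C3", "meter_id": None, "transformer_id": "TX-102"},  # no meter
]
print(find_mapping_errors(customers, devices))
# -> [('C2', 'unknown transformer'), ('C3', 'no meter')]
```

Customers flagged here are exactly the ones whose outages would predict to the wrong place, or not at all.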

Poor data quality shows up in several ways, and each of them matters. The simplest problems to spot are visual: all devices must appear as expected, in the correct size, with reasonable placement, showing the proper symbol for the device, and with the correct annotations appearing near it. The electronic representation of the feeder should look similar to the paper copies printed from GIS.


In addition to appearance, OMS feeder connectivity must be correct. This is one of the most important data factors, because improper energization leads to unexpected deenergization, looping, or paralleling of the network. Deenergized areas cause outages not to predict, or calls not to group together. Power looping back around confuses the prediction engine so that it does not know where to predict an outage. The logic used to connect the segments of a feeder together may itself be flawed: either extra connections happen where they are undesirable (resulting in loops and parallels), or connections fail to occur (resulting in breaks in the energization). Another important check is to make sure correct phasing is present. For example, a B-phase-only conductor should not feed an A-phase transformer; the B phase would never bring power to the A-phase transformer. One of the common problems I've seen is multiphase transformers modeled on single-phase conductors: when the transformer is restored, only one phase reaches it, so the transformer, and the outage, doesn't get fully restored. Just because data looks OK in the GIS does not mean it will in the OMS/DMS. An OMS/DMS has greater requirements because it is doing more than showing a visual representation of the model.
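The phasing rule above is mechanical enough to script: a device's phases must be a subset of the phases on the conductor feeding it. A simplified sketch, with an invented data model of (conductor phases, device, device phases) triples:

```python
# Hedged sketch of a bulk phasing check. The tuple layout is invented;
# a real model would pull these relationships from the GIS export.

def phasing_violations(connections):
    """connections: list of (conductor_phases, device_name, device_phases).
    Returns (device, missing_phases) for every device asking for a phase
    its upstream conductor cannot supply."""
    bad = []
    for cond_phases, device, dev_phases in connections:
        missing = set(dev_phases) - set(cond_phases)
        if missing:
            bad.append((device, "".join(sorted(missing))))
    return bad

links = [
    ("ABC", "TX-1", "ABC"),  # three-phase transformer on three-phase line: OK
    ("B",   "TX-2", "A"),    # A-phase transformer on a B-only conductor: bad
    ("B",   "TX-3", "AB"),   # multiphase transformer on a single-phase line: bad
]
print(phasing_violations(links))
# -> [('TX-2', 'A'), ('TX-3', 'A')]
```

Note that TX-3 is exactly the "multiphase transformer on a single-phase conductor" case: the check reports the phase that will never be restored.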


Dispatchers are the primary OMS users. To do their jobs, they depend on the GIS and CIS data built into the OMS/DMS, and the system needs correct data to enable quick and correct decisions. Data reviews need to occur before dispatchers get too hands-on with the system, so that bad first impressions are avoided. The visual representation of the network must be high quality so they can trust what it tells them. When they look up the attributes of a device, those attributes must be clear and correct. When they need to know how many customers are affected by an action, the counts must be correct so that the impact of actions can be properly weighed. When they look at their current-outages dashboard, a map viewer, or other tools, they want to see correct information such as the feeder name and the name of the affected device. Proper logic behind the scenes must be put in place and the results verified; if data is missing because the logic deriving the attribute information was incorrect, or because it was never present in the GIS, the dispatcher can't get the information they need.


To mitigate the data risks, I recommend a series of data reviews with two different (but similar) sets of data; in my experience on prior projects, this increases data quality. For the first review, use a small data model that contains at least one of every object class to be modeled. The point is to make sure every object class (whether electric or not) builds correctly and to eliminate systemic problems; looking for localized issues can wait until the next review. Getting rid of systematic issues first, such as devices building correctly or proper device naming, removes many errors at once, because if a device class builds incorrectly the problem appears every time that device appears, and fixing it makes it easier to identify when a data issue is a one-off problem. Many devices of the same class are built throughout the model; if a device class builds successfully once, it will build the same way for the rest of them too. Keeping the review model small is important because the intent is to rebuild the model after making fixes in order to verify each fix's success. This will happen three times or more, until issues are few in number, and to do that in a reasonable time period the model must be small; large, complete models take too much time to be efficient. To achieve these goals, it is perfectly acceptable to fictionalize data and break real-world rules to ensure every device, including those that appear infrequently, is present in the first review dataset. For the purpose of this dataset, go ahead and put an automatic throw-over device in a place one doesn't naturally appear; the important thing is just to make sure it builds correctly.
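The coverage requirement behind this first dataset is easy to verify automatically: list the object classes the full model will use and confirm each appears at least once in the small review model. A sketch, with hypothetical class names:

```python
# Hedged sketch of a first-dataset coverage check. The class names are
# illustrative; a real list would come from the project's data mapping.

from collections import Counter

REQUIRED_CLASSES = {
    "breaker", "recloser", "fuse", "switch", "transformer",
    "conductor", "capacitor", "automatic_throwover",
}

def coverage_gaps(model_objects):
    """Return the object classes missing from the small review model."""
    present = Counter(obj["class"] for obj in model_objects)
    return sorted(REQUIRED_CLASSES - set(present))

small_model = [
    {"class": "breaker"}, {"class": "fuse"}, {"class": "switch"},
    {"class": "transformer"}, {"class": "conductor"},
    {"class": "recloser"}, {"class": "capacitor"},
]
print(coverage_gaps(small_model))
# -> ['automatic_throwover']
```

A non-empty result means some class, often a rare device like the throw-over above, would reach later reviews untested.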


Once the first dataset review is complete, it's time to move on to the second dataset. This is also a small dataset, but this time only real data is used. Keeping it small is still important because, as with the first dataset, the model is repeatedly rebuilt after fixes are made. The dataset should be big enough to use for functional and integration testing, but no bigger. This review assumes the first set of reviews was completed and devices now build correctly; now we are looking for connectivity issues, individual data errors, and other problems beyond how devices build. Pick a representative area of the model, about 2-4 substations in size, build it, and begin the review. Here, problems in how connections are made and topology problems are found, and fixes are again made in iterations. While this is not the full model, fixes here often apply to the whole model, and the kinds of problems found provide insight into what to look for in the rest of the model.
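The topology problems this review targets, breaks in energization and loops or parallels, can be surfaced with a simple trace from the feeder source. A toy sketch (the graph data and node names are invented, and real feeders add phases, open points and switching states this ignores):

```python
# Hedged sketch of a feeder energization trace: BFS from the source
# breaker, flagging nodes the trace never reaches (breaks) and edges
# that close a second path back into energized territory (loops).

from collections import deque

def trace_feeder(edges, source, all_nodes):
    """edges: list of (node_a, node_b) closed connections.
    Returns (deenergized_nodes, loop_closing_edges)."""
    adj = {n: [] for n in all_nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, loop_edges = {source}, set()
    queue = deque([(source, None)])
    while queue:
        node, parent = queue.popleft()
        for nxt in adj[node]:
            if nxt == parent:
                continue
            if nxt in seen:
                loop_edges.add(tuple(sorted((node, nxt))))  # second path found
            else:
                seen.add(nxt)
                queue.append((nxt, node))
    return sorted(set(all_nodes) - seen), sorted(loop_edges)

nodes = ["BRK", "S1", "S2", "S3", "S4"]
edges = [("BRK", "S1"), ("S1", "S2"), ("S2", "S3"), ("S3", "BRK")]  # S4 isolated
print(trace_feeder(edges, "BRK", nodes))
# -> (['S4'], [('S2', 'S3')])
```

Each flagged node or edge is a candidate data fix, and because connection logic is shared, one fix here often clears the same fault pattern across the full model.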


Since data is the backbone of any OMS, having the highest-quality data possible is critical to success. The methodology described above has proven to lead efficiently to high-quality data. If there are other data modeling tips you'd like to share, I'd enjoy hearing them.