The Infosys Utilities Blog seeks to discuss and answer the industry’s burning Smart Grid questions through the commentary of the industry’s leading Smart Grid and Sustainability experts. This blogging community offers a rich source of fresh new ideas on the planning, design and implementation of solutions for the utility industry of tomorrow.

April 25, 2018

The Only Constant is Change - the Water Cycle

There has been a move back towards catchment-based working for a number of years, and this has brought many advantages, especially in regard to environmental improvements. Generally, however, such working tends to be sector- and company-based. Although there have been a few cross-sector studies and solutions, these are the exception rather than the rule.

There are a number of disruptive factors, such as the European Water Framework Directive, that are increasingly moving organisations towards multi-sector, multi-company integrated catchment solutions. There are already many studies pinpointing pollution, both point source and diffuse, and moving solutions towards beneficial outcomes and away from 'tick box' outputs. There are similar studies looking at drought risk. However, there are very few examples where such studies are joined up, let alone linked to other water-related impacts, such as flooding and agricultural production.

As new tools become increasingly available and affordable, especially the ability to collate and use large, disparate data sources and the rise of AI, such whole-water-cycle catchment working will increase and provide real benefit across sectors. Enabling this, however, will require not only new technology but, more importantly, changes in working practice. For example, sharing of data between organisations will be critical. Individuals will need to understand more about the issues and potential solutions for others affected by the water cycle in an area. Whilst the technical challenges are complex, the organisational and people aspects present even bigger challenges. We must, however, overcome such issues if we are to deliver truly holistic and sustainable solutions.

April 17, 2018

The Only Constant is Change - Electricity 2.0

Electricity networks are facing more variable loads at the local level (down to LV): new demands, such as electric vehicles and heat pumps; embedded generation, such as photovoltaic, micro-hydro and wind; and more variability in population density. These localised demand peaks stress the system and risk leading to phase imbalance, voltage, frequency and waveform issues, increased outages (customer interruptions and network interruptions), and thermal issues.

Managing the network in the traditional way to mitigate those risks would itself cause many issues: wholesale network capacity upgrades (laying larger cables, installing larger transformers), major disruption to traffic and to customers (planned outages), and significant increases to charges. These impacts would be unacceptable to customers and other stakeholders, including those whose journeys are interrupted by street works.

In the future, Distribution Network Operators will need to become Distribution System Operators (DSOs). They will use LV automation and switching to balance loads and demands. This will mean a move towards Active (or Adaptive) Network Management, minimising and optimising the need for network upgrades. In effect, they will manage local networks in the way large national transmission networks are managed today.

To become a Distribution System Operator, a network operator will need a solid base. This includes a sound connectivity model, the ability to link/share connectivity details with modelling tools, and secure links between core asset systems (e.g. GIS/aDMS). A few organisations are already moving in this direction, and I am currently involved in a DSO project. Such changes will become 'the norm' over the next few years. A simple illustration of why the connectivity model matters is sketched below.
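As a rough illustration only (not drawn from any particular project), the sketch below treats an LV network as a graph and checks which customers remain energised after a switching action - the kind of question a sound connectivity model lets a DSO answer automatically. The library (networkx), node names and switch states are assumptions for the example.

```python
# Minimal sketch: an LV network as a graph, with a check of which nodes stay
# energised through closed switches only. Names and topology are illustrative.
import networkx as nx

lv = nx.Graph()
lv.add_edge("primary_substation", "feeder_1_head", closed=True)
lv.add_edge("feeder_1_head", "customer_A", closed=True)
lv.add_edge("feeder_1_head", "link_box", closed=False)   # normally open point
lv.add_edge("primary_substation", "feeder_2_head", closed=True)
lv.add_edge("feeder_2_head", "link_box", closed=True)
lv.add_edge("link_box", "customer_B", closed=True)

def energised(graph, source="primary_substation"):
    """Return the set of nodes supplied from the source via closed switches."""
    live = nx.Graph((u, v) for u, v, d in graph.edges(data=True) if d["closed"])
    live.add_nodes_from(graph.nodes)
    return nx.node_connected_component(live, source)

# Simulate an outage on feeder 2 and back-feed customer_B by closing the
# normally open link box - then confirm the customer is still supplied.
lv.edges["primary_substation", "feeder_2_head"]["closed"] = False
lv.edges["feeder_1_head", "link_box"]["closed"] = True
print("customer_B supplied:", "customer_B" in energised(lv))
```

A real DSO would of course layer load, voltage and phase data on top of this, but even a toy example shows why the connectivity data held in GIS/aDMS has to be sound and shareable with modelling tools.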

January 28, 2018

Managing Smart Electric Meters - Things to Consider

The utility industry has been witnessing an immense rise in smart electric meter implementations across the globe. With the digital revolution setting in, there has been an increasing move towards advanced metering infrastructure (AMI) for effectively managing meter data and operations. The ability to enhance grid reliability, manage peak loads effectively and pass control of usage back to end customers has catalyzed this trend. The envisioned benefits of smart meters to the industry are many, but as an asset management consultant it gets me thinking: what's in store for me?


September 26, 2017

The Only Constant is Change

Everyone lives in changing times, and the pace of change is accelerating. In utilities, however, caution is rightly placed on any change, as our societies, and to a large degree civilisation, are supported by sound infrastructure. Nonetheless, the way we use our infrastructure will have to change radically over the next few years. Increasing population and population densities, climate change and ageing infrastructure are leading to more system failures, in terms of outages, flooding and limitations on use. It is becoming more difficult to model the impacts of this change on our infrastructure, as many of the historic 'norms' no longer apply. Our universities have many research projects trying to better understand, and hence predict, how infrastructure will be affected by change, and the best options to adopt to ensure infrastructure can meet these challenges. Undoubtedly some of the new tools being developed, especially AI coupled with effective IT/OT integration, will greatly assist in this area. I am helping to organise a Future Water Association conference on 4/5 December this year that will look at how we move towards 'smart water networks'.

Over the next few years, however, the area that will probably see the greatest change is electricity distribution. The way we both generate and use electricity is changing at an exponential rate. Embedded generation, such as wind and solar, means that supply enters the overall grid at many diverse locations, and intermittency means that the quantity of that supply will vary greatly over days and years. New demands, such as electric vehicles and heat pumps, mean that the peaks and troughs of power required will become more intense. To manage this in 'traditional' ways would mean major upgrades to the networks, which we cannot afford, either in monetary or disruption terms. Organisations are thus moving towards 'Distribution System Operation', where local networks, including LV, will be actively managed in a similar, but more local, way to how transmission networks are managed regionally and nationally.

This is the first of a series of blogs where I will start to explore what change might mean to utilities, starting with 'Distribution System Operation'.

July 20, 2017

Utility Procurement - a New Vision

Innovation is part of the 'DNA' of Infosys, and we are always being asked to innovate by our clients. All too often, however, the procurement process constrains our ability to offer that innovation. The deliverables are given strict bounds, and we are only able to offer specific solutions. For example, the need may be for improvements in asset management, but the tender is constrained to configuring and installing a particular software package. Whilst in a few cases that may be due to a poor procurement strategy, in most cases it is due to the constraints, both regulatory and corporate, that control how procurement can be undertaken.


Does it have to be this way? I believe that clients could procure in an innovative way that allows their suppliers to show their ability to offer novel ways to solve problems. The process could have two stages: the first a simple pre-qualification exercise to determine a shortlist (as is currently undertaken); the second to deliver an outline design of the solution, where the client pays a small fee to the tenderers to go into far more detail than current tenders allow. This would enable the supplier to demonstrate their ability to deliver innovation, and the client both to understand that ability and to see how the supplier performs in a work situation. Such a process would enable the client to tackle much larger issues than are generally covered in a tender, and indeed a few utility clients are already using a more agile approach. I will demonstrate with an example in asset management.


This example tender could be phrased "Devise a solution that will deliver an x% reduction in asset management costs, whilst producing a y% improvement in performance, without increasing overheads." In the pre-qualification, tenderers would need to demonstrate experience in such areas (although not necessarily in the same industry), and provide good and pertinent references: this would allow the client to shortlist. Tenderers could also consider partners to add to their bid, for example instrumentation suppliers and installers. In the tender, the client would allow a certain sum for each tenderer to produce their innovative solution, with sufficient access to client staff to determine constraints, both technological and business. This phase would of course need to be undertaken under non-disclosure agreements to protect all parties. Once the 'tender' is completed, the client would be able to select a supplier with a much greater understanding of that supplier's ability to innovate in a way that will benefit their business.


Whilst this approach may seem strange to some in utility procurement, it is similar to processes employed in fields like architecture, which have allowed buildings such as the Sydney Opera House to be developed. Do we want our future to be full of bland boxes, or Guggenheims?

March 14, 2017

The Security trap

Security in IT is very important. Unauthorised access to confidential information can cause major disruption to companies, and to individuals' lives. Some disruption can have life-changing impacts on finances and reputations. Even 'lesser' security issues, such as viruses, can cause massive damage to company systems. Breaches of Operational Technology (OT) systems (such as SCADA) in utilities could cause countrywide failures, and put lives at risk. IT security is therefore quite rightly taken very seriously by governments, organisations and individuals.


However IT security is just one amongst the many risks we all face on a daily basis. Even a major breach of a utility OT system would not have the impact of an atomic bomb, and yet the world managed to increase overall wealth, and made great strides to reduce poverty, throughout the Cold War, under the threat of mutually assured destruction. IT security is therefore just another risk that we all have to manage.


Unfortunately, in too many organisations IT security is used as a reason not to implement technological improvements. For example, video conferencing between computers, and even mobile devices, is something many of us use regularly; video conferencing between organisations, however, is very rare, generally because of 'IT security' concerns. Sharing of information is frequently blocked, and yet shared information often increases knowledge and opportunity for all of the participating organisations. For example, Transport for London (TfL) made most of the information for its transport systems (e.g. timetables) publicly available: there is now a plethora of 'apps' to help travellers plan their journeys, all of which have been produced at no expense to TfL, and increase customer satisfaction.


I believe it is a duty of those of us in the IT world to ensure that IT security is managed appropriately, and not used as an excuse to block the business and personal benefits that our innovative technology can bring. Like any other risk it should be managed appropriately and balanced against the benefits. We cannot let the few who would wish to take advantage of us through IT security breaches constrain our future.

March 3, 2017

The Asset Management Journey - into Adaptive

For utilities, most asset management was traditionally based on cycles of planned maintenance, interrupted by many occurrences of reactive work. The planned maintenance was generally based on historic norms, often with little feedback on benefit. With the advent of asset management systems, both IT (e.g. EAM/WAM) and process (e.g. PAS 55, now ISO 55000), work became more planned and more benefit-based, drawing particularly on asset risk and criticality. Such changes made major improvements in efficiency, with reductions of reactive work from 70% to 30% not uncommon. However, planned work was, and in many cases still is, based on expectations of asset lifecycle performance, and not on actual asset feedback. Whilst such proactive strategies reduced service impacts, they led to higher levels of planned maintenance than necessary to ensure optimum asset life.


Over the last 20 years, industries have increasingly turned to predictive methodologies, using sensors and instrumentation coupled with appropriate analytic software to predict and prevent asset failure through understanding trends. For example, a large transmission operator uses transformer load measured against ambient and internal temperature. A band of 'normal' internal temperature against load and ambient temperature is mapped, and the system flags when internal temperature is outside of this range, so that checks can be made before any failure. Increasingly such tools are using machine learning, which further helps to predict 'normal' asset behaviour. Asset management has therefore moved from Reactive through Proactive to Predictive.
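As a rough illustration of that banding approach, here is a minimal sketch in Python. The readings, the bin sizes and the mean plus or minus three standard deviations threshold are all assumptions for illustration, not the operator's actual method.

```python
# Minimal sketch: learn a "normal" internal-temperature band per load/ambient
# bin from historic readings, then flag readings that fall outside the band.
import statistics
from collections import defaultdict

def make_bands(history, k=3.0):
    """history: iterable of (load_pct, ambient_c, internal_c) readings.
    Returns {(load_bin, ambient_bin): (low, high)} using mean +/- k*stdev."""
    bins = defaultdict(list)
    for load, ambient, internal in history:
        bins[(round(load, -1), round(ambient, -1))].append(internal)
    bands = {}
    for key, temps in bins.items():
        mu = statistics.mean(temps)
        sigma = statistics.pstdev(temps) or 1.0
        bands[key] = (mu - k * sigma, mu + k * sigma)
    return bands

def within_normal(bands, load, ambient, internal):
    """True if the reading sits inside the learned normal band for its bin."""
    band = bands.get((round(load, -1), round(ambient, -1)))
    return band is not None and band[0] <= internal <= band[1]

# Illustrative use with made-up readings (load %, ambient C, internal C):
history = [(60, 10, 55), (60, 10, 57), (60, 10, 56), (80, 20, 70), (80, 20, 72)]
bands = make_bands(history)
print(within_normal(bands, 61, 9, 56))   # inside the band  -> True
print(within_normal(bands, 61, 9, 90))   # well outside it  -> False, flag it
```

In practice the bands would be built from large volumes of historic telemetry, and flagged readings routed to an engineer for checking before any failure, as described above.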


Artificial Intelligence (AI) tools, such as Infosys NIA, are now starting to be used in asset management. These new methodologies use the AI engine to collate, compare, analyse, and highlight risks and opportunities. The tools can use structured and unstructured data, static and real-time, and have the ability to take data from disparate sources. The systems will increasingly refine understanding of asset behaviour based on multiple inputs, such as sensors/instrumentation, third-party data (weather), social media feeds, and impacts flagged by external, but publicly available, sources. The tools will then be able to advise courses of action based on current events. They could also be used to model possible scenarios, and advise actions and impacts based on their understanding of inputs against outputs (stochastic modelling and beyond). Such tools will enable an organisation to continuously adapt its asset management strategies and implementation to current and future events.


I call this Adaptive Asset Management.

October 14, 2015

10 key pointers for an effective Web-GIS implementation leveraging ArcGIS Server

The following pointers come from experience of a couple of large Web GIS implementations in the utilities domain using ArcGIS for Server version 10.2.1.

1. Never try to replicate your Desktop GIS into Web
We have been using GIS as a desktop application for ages, so it is a natural tendency to adopt a similar view on the web as well. Long lists of layers in the Table of Contents, a plethora of tools that are seldom used, a north arrow and a measurement scale are a few things that remind us of a desktop GIS. Build the application for a targeted audience - give users no more features than they absolutely need. Restrict them within a (work)flow so that they can navigate your app with ease. Always remember that your web GIS users are not GIS experts.

2. The map server is the key to success!
Pay special attention while creating your map services. ESRI has made it very easy to serve your spatial data. However, serving it optimally can be very tricky - particularly if you're targeting hundreds of concurrent users. Follow some basic rules of thumb: create multiple map services instead of one; include no more than 8 to 12 layers in a single map service; keep symbols as simple as possible; try not to use definition queries; follow the n+1 rule when setting the 'maximum number of instances per machine', n being the number of cores; and allow Windows to manage the page file automatically (for virtual memory). A small pre-publishing check is sketched below.
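For illustration only, here is a quick check of two of these rules of thumb, assuming ArcGIS Desktop 10.x and its arcpy.mapping module are installed on the publishing machine; the .mxd path is a placeholder.

```python
# Sketch of a pre-publishing check, assuming ArcGIS Desktop 10.x (arcpy.mapping
# was the 10.2-era API). The map document path below is a placeholder.
import multiprocessing
import arcpy

MXD_PATH = r"C:\maps\network_overview.mxd"  # placeholder map document

mxd = arcpy.mapping.MapDocument(MXD_PATH)
layers = arcpy.mapping.ListLayers(mxd)

# Rule of thumb above: roughly 8 to 12 layers per map service.
if len(layers) > 12:
    print("Consider splitting into multiple map services "
          "({0} layers found).".format(len(layers)))

# Rule of thumb above: maximum instances per machine = number of cores + 1.
print("Suggested 'maximum number of instances per machine': {0}".format(
    multiprocessing.cpu_count() + 1))
```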

3. Avail free base maps and other services from Bing, Google or ESRI
No matter how cleverly you prepare your own base maps, I can assure you they are no better than the base maps that are available for free. Instead of concentrating on a killer base map to use as a backdrop for your GIS data, use one that is free - as a bonus, you will save yourself the trouble of updating it as well.

4. Choose your frontend technology carefully
Not many options are currently available for the frontend API. For the widest audience, use JavaScript and HTML5 - unless you're developing features that are not yet mature in that environment.

5. Keep mobile devices in mind during design
More and more people are online on mobile devices than through their PCs. Although the majority of that mobile time is spent on social networking sites, these users do view maps on their mobile devices (http://marketingland.com/outside-us-60-percent-internet-access-mostly-mobile-74498). Think of the different screen sizes your users will be using to browse your app, and plan to accommodate taps along with clicks.

6. Initial load time should never exceed 8 seconds
The average adult's attention span for a page-load event is around 8 seconds (http://www.entrepreneur.com/article/232266). Today's users, with information available at their fingertips (taps?), are increasingly impatient with wait times. If a page takes more than 8 seconds to open, the majority of users will 'X' it out. If you want a wider footprint for your web application, restrict the initial load time to 8 seconds - the quicker the better.

7. Display non-spatial data spatially
Integration is commonplace in today's GIS, and the display of non-GIS data within GIS is the norm rather than the exception. There are various ways you can integrate - try displaying data on the map as graphic text rather than in a table within the map. Spatial distribution helps us see patterns that a tabular display fails to reveal.

8. Pay more attention to User Experience over User Interface
User Experience (UX) is mostly (but not completely) achieved through the User Interface (UI). For example, when you provide a zoom-in feature in a mapping application, you can implement it as a command (fixed zoom-in) or as a tool (the user draws a polygon on the map to zoom into). This is UI. However, implementing the zoom-in feature as a tool can give a different UX depending on how you have programmed the cursor after the zoom-in event - retain it in zoom-in mode, or take it back to the default mode (which is usually pan) when finished. For a better UX, always provide feedback to the user for each action they perform.

9. Know your users (behind the scenes!)
Knowing your users is the best thing you can do for your application. There are products out there that can capture user statistics, map server performance, number of hits, etc., but they cannot capture an individual user's feedback. If dissatisfied with your product, the majority of users will not complain or raise issues - they will simply stop using your application. User surveys are another option, but they often fail to give a clear picture because of poor participation. It is always a good idea to capture user feedback behind the scenes. For example, if you have a customized 'search' button, log each of its click events. Try to capture who is searching for what and how long it takes before they see results. You can fine-tune your application based on this log, and even give users a 'hint' on effective searching. A minimal logging sketch follows below.
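As a rough sketch only, here is one way such behind-the-scenes logging could look on the server side, assuming a Python/Flask backend; the endpoint, the X-User header and the search body are illustrative placeholders, not part of any particular product.

```python
# Minimal sketch of logging search usage behind the scenes: who searched,
# for what, and how long it took. Endpoint, header and search body are
# illustrative placeholders.
import logging
import time

from flask import Flask, jsonify, request

app = Flask(__name__)
logging.basicConfig(filename="search_usage.log", level=logging.INFO)

def run_search(term):
    """Placeholder for the real search against your map/feature services."""
    time.sleep(0.1)
    return [{"name": term, "match": "example"}]

@app.route("/api/search")
def search():
    term = request.args.get("q", "")
    user = request.headers.get("X-User", "anonymous")   # assumed auth header
    started = time.time()
    results = run_search(term)
    elapsed_ms = int((time.time() - started) * 1000)
    logging.info("user=%s term=%r results=%d elapsed_ms=%d",
                 user, term, len(results), elapsed_ms)
    return jsonify(results)
```

The resulting log can be mined later to tune the search, spot slow queries, or surface 'hints' to users, as suggested above.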

10. Secure your application
Security comes at a price. While confidentiality and integrity are achieved easily, availability is sometimes compromised: securing an application restricts it to a smaller footprint. Whether to 'share' or to 'secure' will be dictated by the business requirements. At the least, you should always secure your map services through tokens and Secure Sockets Layer (SSL), and make sure Server Manager is not visible from outside your firewall. A minimal token example is sketched below.
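For illustration, a minimal sketch of consuming a token-secured map service, assuming the standard ArcGIS Server 10.x token endpoint over HTTPS (verify the exact path against your own installation); the host, service name and credentials are placeholders.

```python
# Minimal sketch of consuming a token-secured ArcGIS Server 10.x map service
# over HTTPS. Host, service and credentials are placeholders.
import requests

HOST = "https://gis.example.com:6443"   # placeholder server

token = requests.post(
    HOST + "/arcgis/tokens/generateToken",
    data={"username": "viewer", "password": "secret",
          "client": "requestip", "f": "json"},
).json()["token"]

info = requests.get(
    HOST + "/arcgis/rest/services/Network/MapServer",   # placeholder service
    params={"f": "json", "token": token},
).json()
print(info.get("mapName"))
```

Keeping traffic on SSL and keeping Server Manager behind the firewall, as recommended above, are configuration matters on the server itself rather than in client code.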



June 3, 2015

I want obedience

I am increasingly becoming frustrated by 'smart tech': systems designed to help us complete tasks, but that all too often actually impede delivery. For example, my work means I need to leave home early and, as I wish to watch the news while eating breakfast, I activate my satellite box. This is, however, connected to the television in my lounge, and that TV decides it should also switch on: I then have to quickly switch the TV off to avoid waking the rest of the family!

Talking to others in the utility industry, it appears they suffer similar frustrations. Whilst an IT application deciding you meant to type one word when you intended another can be annoying, IT that wrongly decides to alter the settings of operational equipment can have very severe consequences. Even in sectors that are extremely safety-focussed, such as amusement parks, software errors have caused serious incidents, such as the Big One rollercoaster crash at Blackpool (http://www.computerweekly.com/news/2240040871/Software-fix-failed-to-avert-Blackpool-crash). In utilities, where errors can have fatal consequences, the need for caution is even greater.

The benefits of the latest 'smart tech' are, however, great, and utilities are keen to embrace them, but at the same time they are rightly concerned about the potential risk. I thus believe that we need to focus on 'obedient tech': systems and devices that can advise and help us to make effective decisions, but require human input to effect an action. A good example of this is a 'well known' web-based shopping service that gives excellent advice on potential purchases (similar items, etc.) and guides users through simple processes, but leaves all decisions to the user. In utilities, an example is a grid operator who uses Operational Decision Support Analytics to monitor transformer temperature against ambient temperature and load, building normal profiles. The system then flags in near real time when transformer temperature goes outside the determined normal operating ranges; however, it is the operator who decides on the appropriate intervention. There are systems that do automate actions (e.g. the Cardiff East Control Strategy - http://www.waterprojectsonline.com/case_studies/2010/DCWW_Cardiff_East_2010.pdf), but these work within very strictly controlled boundaries. It may be that 'self-learning' devices are developed to a point where utilities have confidence in their decisions; however, I suspect that will not be for some time.

Perhaps it is time we all aimed to be a little less 'smart', and a bit more 'obedient'!

April 2, 2015

The Utilities Data Dilemma

Increasingly, utilities are being directed towards big data and all the benefits it appears to offer. However, such calls miss a fundamental issue: asset data is expensive for utilities, both to obtain and to maintain. Most utility physical assets are geographically widely spaced, sometimes in locations that are difficult to access. Costs can be quite high; for example, a manhole survey can average more than $100 per manhole. The EPA estimates there are 12 million municipal manholes in the US, so a 5% validation survey would cost circa $60 million! Surveys can also carry complex health and safety risks that need to be managed. For these reasons asset data is often limited, and of dubious quality. Sensors and instrumentation are improving, becoming cheaper to install, run and maintain, and more robust; nonetheless, they are still relatively expensive items.

With asset data limited, suspect, and costly to improve, and sensors and instrumentation expensive to deploy, smarter utilities are looking to make better use of the information they already hold. By combining engineering knowledge with effective analytics, trends can be mapped and normal asset behaviour determined. Where data is readily available, such analysis is relatively simple; where asset data is limited, engineering knowledge and understanding can be used to define relationships between seemingly unrelated data sets. The key is in understanding how data sources can be meaningfully linked.

Large business information systems may thus be of limited value to utilities in terms of managing their assets. Of more value is the effective linking of dispersed data sources, coupled with an effective, easily configurable analytics engine. Such tools have already been used to answer many asset-related questions, such as the viability of rainwater harvesting in differing regions and climates. It is indeed possible to answer many of the asset-related questions posed by utilities, even with the limited asset data many hold. Each question is, however, individual to the specific situation, so only those who can understand both the engineering and system elements will be able to successfully deliver beneficial results.