
Eliminating Waste in OMS Projects

Eliminating waste in any kind of project is a good thing, and OMS projects are prone to waste in some specific ways.  Wasted effort affects an OMS project in two main ways: it consumes limited calendar time, or it adds cost through extra hours charged by team members.  When deadlines are tight, time spent on wasted tasks is especially damaging; the remaining tasks get squeezed even tighter, which can force hurried work that hurts quality.  Planning aimed specifically at avoiding waste is often left out of project plans, but it should be included.  Project managers do important planning, such as risk response planning, creating the WBS, scope management, and establishing a communication plan, yet preventing waste rarely gets the same attention.  In my 15 years of outage management experience, I have seen many places where waste happens.  This post describes some scenarios from past projects that are worth watching for when planning an OMS project.  In building the list, I used the memory aid "TIM WOOD", which stands for Transportation, Inventory, Motion, Waiting, Over production, Over processing, and Defects.

Transportation

Transportation is a large source of waste.  Ask yourself these questions: How critical is an onsite presence to the project?  When is it critical to have the project team onsite?  There are times when being onsite is important and the team must be there, such as workshops and go-live support.  There are other times when remote working is possible, such as when the system is being configured against the requirements defined in the workshops.  When the team works remotely, not only are travel costs avoided, but time is not lost to traveling two days each week, and both factors are multiplied by the number of team members who would otherwise travel.  We live in an age where tools such as VPNs, virtual machines, and web conferencing make offsite work practical.  I have worked on multiple projects where much of the configuration and interface development occurred offsite; once the system is configured and the adapters built, they can be delivered to the customer electronically.  To make this work, there must be good communication between the remote workers and the team onsite.

Inventory

Inventory in a software project is not the same as in manufacturing a hard item, like a hard drive; we are not talking about ordering and stocking component parts.  However, the project does need the appropriate inventory of environments.  Too many environments increase the effort needed to maintain them.  Too few environments lead either to team members waiting (another waste point) to use a system, or to team members doing work that interferes with each other's, for example when multiple people need to work in the same file, or when one person needs to perform system restarts while another is doing other work in the system.  For people to work across multiple systems, there must also be good version control to ensure nothing is overwritten.  Even when team members work on completely different areas, problems arise without a proper inventory of environments.  During testing, for example, the test environment must remain stable: introducing configuration or model changes affects system stability and the results a test produces.  Using one environment for multiple purposes risks interference, shifting configurations, and perceived defects.
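
As a rough illustration of managing that inventory, here is a minimal sketch that flags double-booked environments from a simple schedule; the environment names, weeks, and activities are hypothetical.

```python
from collections import defaultdict

# Hypothetical environment bookings: (environment, week, activity).
bookings = [
    ("DEV",   "W10", "interface development"),
    ("TEST",  "W10", "functional testing"),
    ("TEST",  "W10", "model rebuild"),        # conflicts with testing
    ("TRAIN", "W11", "user training"),
]

def find_conflicts(bookings):
    """Group bookings by (environment, week) and report double-bookings."""
    usage = defaultdict(list)
    for env, week, activity in bookings:
        usage[(env, week)].append(activity)
    return {key: acts for key, acts in usage.items() if len(acts) > 1}

for (env, week), acts in find_conflicts(bookings).items():
    print(f"{env} in {week} is double-booked: {', '.join(acts)}")
```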

Waiting

Waiting is always a waste risk in projects.  Some causes are universal, like waiting for the right resources or for dependent tasks, so I will focus on the waits specific to outage management.  Some aspects of waiting have already come up in other areas, as with inventory (environments).  One of the biggest waits in an outage management project is waiting for data quality to reach the necessary standard and stabilize, because many tasks depend on correct data.  A newcomer to outage management may feel overconfident in the GIS data's quality because it 'looks good' in the GIS.  OMS data, however, must not only look good but have proper connectivity so that power flow is correctly represented, along with the necessary device attributes and correct phasing.  On the development side, it is necessary to ensure that the configurations driving the build of the OMS model from the GIS data produce the model as intended.  Performing the necessary reviews, with corrections to the data or to the model build configurations, takes time and multiple iterations.  Data reviews therefore need to start early, so that by the time the project enters later phases like test planning, the data is stable and results remain consistent.  All device types must build into the model successfully, connectivity must be good, customers must map to the correct premise, and users must be able to find the correct information (a sketch of the kind of automated checks a data review can run appears below).  Otherwise, test script writing must wait, because scripts written against changing data will be wrong and will generate perceived defects: the script's expected results will say one thing, while the viewed result, although correct, shows another.  Waiting to create test scripts delays the start of testing, which can delay go-live.  When new data is introduced, customers and devices change, which can and does change predictions.  And even if testing can proceed, if the data is not production ready, go-live must still wait.

Waiting can also have seasonal causes.  During a utility's storm season, the utility's team members have less availability for the project.  Plan activities that are less dependent on the utility's staff during storm season, and time the estimated completion of work for the calm season.  It is also inadvisable to bring a new system or an upgrade to production during storm season, so that too must wait until the season is over.  Plan the start of the project so that the predicted end falls in the calm season.
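
To make 'production-ready data' concrete, here is a minimal sketch of the kind of automated checks a data review can run; the connectivity structure, device records, and customer mapping below are invented simplifications, not any particular GIS or OMS schema.

```python
from collections import deque

# Hypothetical extract of an OMS model: adjacency between nodes,
# device attribute records, and customer-to-premise links.
connectivity = {"SRC1": ["N1"], "N1": ["N2"], "N2": [], "N9": []}  # N9 is orphaned
devices = [
    {"id": "SW1", "type": "switch", "phases": "ABC"},
    {"id": "TX1", "type": "transformer", "phases": None},  # missing phasing
]
customers = [{"id": "C1", "premise": "P100"}, {"id": "C2", "premise": None}]

def unreachable_nodes(connectivity, sources):
    """BFS from the sources; anything not visited has no path to power."""
    seen, queue = set(sources), deque(sources)
    while queue:
        for nbr in connectivity.get(queue.popleft(), []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return set(connectivity) - seen

print("Unreachable:", unreachable_nodes(connectivity, ["SRC1"]))
print("Missing phasing:", [d["id"] for d in devices if not d["phases"]])
print("Unmapped customers:", [c["id"] for c in customers if not c["premise"]])
```

Checks like these can run on every model build, so the team sees data quality trend toward stable instead of discovering problems during test execution.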

Over Production

Over production can happen when there is not full commitment to a task.  When a project has tasks or goals that "would be nice", those are the first to be neglected as available time runs out.  I have seen projects spend significant time on time-consuming goals that were "not critical", only for those tasks to be abandoned when other priorities took precedence or time ran out.  When tasks are abandoned, the time spent on them is wasted as far as the project is concerned.

 

Another over production habit is making configuration decisions too early in the project.  This can lead to "reinventing the wheel": making configuration changes that provide little or no benefit, or discovering the choices don't work as well as hoped.  OMS manufacturers have used best practices to create a base product configuration.  Sometimes it is necessary to deviate from the Commercial Off The Shelf (COTS) configuration, but the leanest approach is to make changes only when needed and, once they are made, not to change them again.  Getting a better understanding of how the COTS configuration works, and how well it works, may avoid the need to change it at all, or to change it multiple times.  Additionally, configurations unique to a customer require additional maintenance effort.  Over configuring the OMS increases both effort and cost.
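
One lightweight way to keep configuration changes deliberate is to track every deviation from the COTS baseline explicitly, for example by diffing the customer configuration against the vendor defaults. The setting names below are invented for illustration.

```python
# Hypothetical vendor-default settings versus a customer's configuration.
cots_defaults = {"crew_auto_dispatch": False, "prediction_delay_s": 60,
                 "outage_grouping": "feeder"}
customer_config = {"crew_auto_dispatch": False, "prediction_delay_s": 45,
                   "outage_grouping": "feeder", "custom_escalation": True}

def deviations(base, custom):
    """List settings that differ from, or do not exist in, the baseline."""
    diffs = {}
    for key, value in custom.items():
        if key not in base:
            diffs[key] = (None, value)       # customer-unique setting
        elif base[key] != value:
            diffs[key] = (base[key], value)  # changed from default
    return diffs

for key, (old, new) in deviations(cots_defaults, customer_config).items():
    print(f"{key}: default={old!r} -> customer={new!r}")
```

A short deviation list that each entry can justify is a good sign; a long one is a signal of over configuration and future maintenance cost.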

Over Processing

In a conservative, risk-averse industry like utilities, over processing is a big waste risk.  Being thorough is important; being excessive is wasteful.  When writing test scripts, it is easy to produce multiple scripts that test the same functionality, and utilities often rerun tests with several different testers.  Some overlap is good, but a lot is excessive.  The key is to find the balance and lead the team to recognize when testing has become excessive.  There is much to test in an OMS, and defects will force retesting, so don't use up the schedule retesting the same functionality too many times.  Regression testing matters here: retesting after a patch verifies not only the fix, but that the fix didn't break something else, so testing more than the fix itself is needed.  It is equally important to target the additional regression tests to the areas the fix could affect.  If the fix is localized, retesting the entire system is unnecessary.  Understanding what is in the fix when planning the regression test helps ensure the right amount is done.
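
One way to target regression is to maintain a map from modules to the test suites that exercise them, then select only the suites a fix could affect; the module and suite names here are hypothetical.

```python
# Hypothetical map of OMS modules to the test suites that exercise them.
module_to_suites = {
    "prediction_engine": ["outage_prediction", "crew_dispatch"],
    "model_build":       ["model_import", "outage_prediction"],
    "ui_map_viewer":     ["map_display"],
}

def regression_scope(fixed_modules):
    """Return the union of suites affected by the modules in a fix."""
    suites = set()
    for module in fixed_modules:
        suites.update(module_to_suites.get(module, []))
    return sorted(suites)

# A localized fix to the prediction engine only needs these suites,
# not a full retest of the entire system.
print(regression_scope(["prediction_engine"]))
# -> ['crew_dispatch', 'outage_prediction']
```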

 

When planning fixes to apply to an OMS, it is most efficient to use the manufacturer's planned releases instead of pursuing multiple one-off fixes.  One-off fixes waste time in several ways: each is received separately and needs its own testing, promotion plan, and system downtime for installation.  Later, the same fixes are issued as part of a release point anyway.  By picking up that release point, all the fixes arrive at once, the effort to test and implement is condensed into a single pass, and the same modules are not retested repeatedly.
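
The savings from batching can be seen with back-of-the-envelope arithmetic; the hour figures below are purely illustrative assumptions, not measured values.

```python
# Illustrative effort per deployment cycle (hours): testing, promotion
# planning, and system downtime are paid once per installation.
per_cycle = {"regression_test": 40, "promotion_plan": 8, "downtime": 4}
cycle_cost = sum(per_cycle.values())

n_fixes = 5
one_off_cost = n_fixes * cycle_cost  # five separate installations
release_cost = 1 * cycle_cost        # same five fixes in one release point

print(f"One-off fixes: {one_off_cost} h, release point: {release_cost} h, "
      f"saved: {one_off_cost - release_cost} h")
# -> One-off fixes: 260 h, release point: 52 h, saved: 208 h
```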

 

Data can be over processed as well.  Many details and objects can be built into the data model, but building every attribute of every device is wasteful.  Some objects, like geographic objects, don't need details built into the model at all.  Other devices have attributes that will never matter to outage management.  Every attribute built takes up space in the database and increases model build time, so eliminating unnecessary processing saves effort, build time, and disk space.
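
In practice this can be as simple as an attribute whitelist per device class, applied when extracting GIS records for the model build; the classes and attribute names are hypothetical.

```python
# Hypothetical whitelist: only attributes the OMS actually uses per class.
oms_attributes = {
    "switch":      {"id", "phases", "normal_state"},
    "transformer": {"id", "phases", "kva_rating"},
    # geographic objects carry no attributes into the model at all
}

def filter_record(device_class, gis_record):
    """Drop every GIS attribute the OMS model does not need."""
    keep = oms_attributes.get(device_class, set())
    return {k: v for k, v in gis_record.items() if k in keep}

raw = {"id": "TX7", "phases": "AB", "kva_rating": 50,
       "paint_color": "gray", "install_crew": "C-12"}  # GIS-only details
print(filter_record("transformer", raw))
# -> {'id': 'TX7', 'phases': 'AB', 'kva_rating': 50}
```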

Defects

Like any software product, an OMS will have defects; in fact, if none were found, questions about the thoroughness of testing should be raised.  Since every defect that is written up takes time to process, it is important to limit defect reports to 'real' defects.  Try to avoid creating duplicates: even a duplicate carries time overhead to close out.  Another type of 'non-real' defect is the Works As Designed defect, which usually stems from a lack of understanding of how the OMS works and takes even more time, since extra documentation and explanation become necessary.  With good communication, good training, and testers who understand the OMS, many of these can be avoided, so that effort is spent where it needs to be: fixing actual problems with the software.

Issues perceived as software defects can also originate from bad or changing data models.  If the model is updated without updating the test scripts, testers may see counts or predictions different from what the script says; because customers or devices may have been added, removed, or moved, the result may be correct despite the script.  Keep the data model stable from the start of testing until it exits.  Investing time early in the data review process is also part of the answer: if the data coming into the OMS is bad, it will display bad information in dashboards or look strange in the map viewer, while sufficient lead time on data quality avoids many data-origin defects.

When resolving defects, planning the process minimizes the risk of additional defects and enables speedy closure of existing ones.  It is important not only that development fixes the defects, but also that there is a plan for when, how, and by whom the fixes get promoted and retested.  I have seen many times how an improperly deployed set of fixes leads to the belief that a fix did not work, which wastes testing time and forces another deployment.
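
That last deployment problem is easy to guard against with a post-deployment check comparing the patches actually installed to the planned fix list; the patch identifiers below are invented.

```python
# Hypothetical fix list for a deployment versus what a post-install
# query of the environment reports as actually present.
planned_fixes = {"FIX-1041", "FIX-1044", "FIX-1052"}
installed_fixes = {"FIX-1041", "FIX-1052"}  # FIX-1044 was missed

def verify_deployment(planned, installed):
    """Report fixes that were planned but never landed, and vice versa."""
    return {"missing": sorted(planned - installed),
            "unexpected": sorted(installed - planned)}

result = verify_deployment(planned_fixes, installed_fixes)
if result["missing"]:
    # Retesting FIX-1044 now would wrongly look like the fix failed.
    print("Do not start retesting; missing:", result["missing"])
```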

 

By using the TIM WOOD memory aid to analyze waste in outage management projects, I hope I have raised your awareness of some common sources of project waste.  With this information, I hope you will have smooth and efficient OMS projects.  If you have other suggestions on where to find and prevent waste, I would love to hear from you in the comments.
