The Infosys global supply chain management blog enables leaner supply chains through process and IT-related interventions, and discusses the latest trends and solutions across the supply chain management landscape.


March 30, 2015

Aspects of Shutdown - Pre & Post Audit

Now that we have seen the key factors and phases of a successful shutdown - scoping, kick-off, identification, safety, and procurement - let us turn to the audits. A successful shutdown execution should start with an audit and should also end with one. Let us dive deeper now.

Pre-Audit:

The aim of this audit is to ensure that all the ingredients (as discussed in my previous blog) required for conducting a successful shutdown are available. We will be looking into some of the key ingredients.

Maintenance & Repair Inventory: Verify that the materials required to carry out the shutdown have been procured and stored in the storeroom/staging area, and that they have been pre-tested against the required standards; untested or missing material can hamper or delay the entire process.

Manpower: Confirm that personnel with the right skill set and training are assigned to carry out the job at hand for each of the activities.

Inventory Reconciliation: No work-in-progress or raw material should remain on the shop floor, as these could prove to be a hazard.

Rental Equipment: Ensure the availability of rental equipment and plan for its timely return, as delays can increase the cost of the shutdown.

Health, Safety & Environment Audit: Conducted by the safety team, this audit ensures that all safety measures are in place with respect to the environment and the health of the people working on site.

Post-Audit:

Operational Area: Here we revisit the first pillar of TPM (Total Productive Maintenance), Jishu Hozen (autonomous maintenance), which when applied means:

  • Audit each asset that was part of the maintenance exercise and ensure that it is in complete working condition, e.g. check for any leakage, noise, or damaged insulation.

  • Audit the floor and ensure that no material or equipment left over from the maintenance is lying around the shop floor.

  • Housekeeping again becomes a key aspect, i.e. the floor should be fully ready for work to resume (not slippery, etc.).

  • Ensure that precaution/notice/signage boards are restored to their original positions to avoid any confusion.

Inventory perspective: Return any surplus material, equipment, or inventory that was left unconsumed, and return all rental equipment.

Statutory Compliance: Ensure that all statutory compliance certifications are renewed, and identify equipment that needs to be inspected by a competent/government authority.

Safety: Log any accidents or incidents that happened during the shutdown; this acts as an input for future operations and shutdown planning. Ensure that permit cycles such as zero-energy, vessel entry, work at height, and cold & hot work permits were duly adhered to and fully closed out.

Vendor Analysis: List all the third-party vendors involved in the process and grade their performance against expectations. Key factors to consider include delivery, quality, payment, flexibility, etc.

Finance Audit: An analysis of the actual cost incurred versus the initially planned shutdown budget is prepared. Based on the outcome of this report, remedial actions are planned for future shutdowns.

Rollout:

Rollout is a mini value stream map created during the shutdown. As you know, the job of a value stream map is to identify value-adding, enabling, and non-value-adding activities, and then remove the non-value-adding activities as far as possible. All non-value-adding activities are duly recorded for future reference, and value-adding activities can be incorporated (wherever possible and plausible) into regular maintenance; this can bring down shutdown durations to a large extent.

The aspects discussed above are only a gist of successful shutdown planning. There are many other factors that can and should be considered depending on the industry.

March 27, 2015

The Expert Talk: Predictive Analytics

Predictive Analytics for Equipment Reliability has been a key focus area within the Enterprise Asset Management (EAM) practice at Infosys. Thought papers, frameworks, and real project experience have reinforced our subject knowledge, yet they have not limited our pursuit of the ultimate in this domain. We bring you one among many such pursuits: our interaction with Yuri Gogolitsyn. Read on as Yuri takes us through some less-traversed areas within the domain of Predictive Analytics.

 About The Expert

Yuri Gogolitsyn is an experienced EAI Technical Architect and Consultant who has worked on numerous multinational projects and has substantial hands-on experience with many leading integration technologies, including real assignments involving predictive analytics. He is based in the UK and, before moving into professional IT, did brain research dealing with statistical processing of the brain's electrical signals.

 

Welcome to the first edition of Expert Talks; good to have you with us today. Could you please share with us your tryst with Predictive Analytics?

Thank you! First of all, I think that Predictive Analytics is still much more a research area than a set of tools or products capable of providing quick and immediate solutions to emerging requirements in various industries.

Quite a while ago, before moving into professional IT, I was doing scientific research in brain science. My main interests were, to state it very briefly, in using statistical methods to detect and evaluate the brain's electrical responses to various stimuli (pictures, words, etc.). As a rule, the responses were tiny and buried in unavoidable background variations and noise. The goal was to obtain statistical proof that the response exists at all and to provide some estimate of the extent to which it is consistently repeated when you present the same stimulus under the same conditions. This is just one example of the more general area of Pattern Recognition. I believe that Predictive Analytics belongs to the same general area. This area is really huge, both in terms of the classes of problems it deals with and the methods used.

 

Brain science and its relation to statistics seem fascinating! Do you think we can relate this concept to equipment responses as well?

Yes, in fact the concept finds application in a much wider scope. Look around and you will see the science of correlations in all possible aspects of things. For example, when analyzing large amounts of data on the contents of supermarket baskets, researchers found that when a person buys baby diapers, there is a good probability that this person will also buy some beer. It was a bit of a surprise! An immediate pragmatic recommendation from the study would be to keep both items close together on the shelves to make it more convenient for customers. However, the researchers did some more digging and found an explanation for this unusual effect. It turned out to be due to young fathers whose wives ask them to buy diapers for the baby on the way home from work. This often implies that the husband expects to spend the evening at home with his family, so he also buys beer for himself.

An example from a completely different area: the width of the annual growth rings on tree stumps strongly correlates with the annual number of fatalities from heart attacks. However, there is no causal relationship between the two observed variables in this case. The actual driving mechanism underlying this effect is the annual variation in solar activity.

In the context of equipment, an example would be to note a response against parameters such as load, pressure, rotations per minute, etc. and try correlating it with the failure pattern. With the objective of optimizing equipment performance, one can study these specific parameters and try channelling them towards a safer zone. This way, we essentially move to need-based maintenance: we know whether a failure is imminent and can avoid the pitfall of overdoing the maintenance activity.
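To make the correlation idea concrete, here is a minimal sketch in Python (the file, the column names, and the 24-hour failure flag are hypothetical, not taken from any particular EAM product) of checking how strongly load, pressure, and RPM readings track recorded failures:

```python
import pandas as pd

# Hypothetical equipment history: one row per hour of operation, with a flag
# marking whether a failure was recorded within the following 24 hours.
readings = pd.read_csv("equipment_readings.csv")  # columns: load, pressure, rpm, failed_within_24h

# First look: how strongly does each operating parameter correlate
# with the imminent-failure flag?
correlations = readings[["load", "pressure", "rpm"]].corrwith(
    readings["failed_within_24h"].astype(float)
)
print(correlations.sort_values(ascending=False))
```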

 

If you had to describe Predictive Analytics to a novice in this field, how would you do it?

 

The logic underlying Predictive Analytics could be outlined as follows. A combination of parameters is repeatedly measured for a system under observation. At some moment in time an important event occurs due to an unknown reason - the system noticeably changes its behavior in some way (e.g., breaks or stops functioning). Over a substantial period of observation a large volume of data on the values of parameters that precede the important event's occurrence has been accumulated. The question to answer is to what extent it is possible to predict that the important event is imminent by looking at the current values of the measured parameters.
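One hedged way to turn this outline into data, purely as an illustration (the column names, the vibration parameter, and the 24-hour horizon are all assumptions), is to label every observation that falls within some window before a recorded event:

```python
import pandas as pd

HORIZON = pd.Timedelta(hours=24)  # assumed "imminent" window before the event

# Hypothetical inputs: timestamped parameter measurements and recorded event times.
readings = pd.DataFrame({
    "timestamp": pd.date_range("2015-03-01", periods=6, freq="12H"),
    "vibration": [0.2, 0.3, 0.2, 0.9, 1.4, 0.3],
})
events = pd.Series(pd.to_datetime(["2015-03-03 06:00"]))

# Label a reading 1 if the important event occurs within HORIZON after it.
readings["event_imminent"] = readings["timestamp"].apply(
    lambda t: int(((events > t) & (events <= t + HORIZON)).any())
)
print(readings)
```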

 

One fundamental aspect should be stressed here - the repeatability of the important event.  It is impossible to predict events that are unique or occur very rarely indeed - statistical methods just do not work under such scenarios.  On a lighter note, this is nicely illustrated by the following joke.

A University professor is conducting a seminar on telekinesis. He explains to his students that telekinesis is an ability to move objects using just one's will power and says:

-          Let's now all close our eyes, concentrate for one minute and try moving ourselves outside this room into the corridor.

In a minute they open their eyes and are very surprised to see that one person is missing! The professor, stunned not less than his students, asks them for comments. One of the students is doing a course in statistics. He says:

-          I am not sure you would be able to prove that this effect is significant using statistical methods...

 

Which industry, according to you, would have the greatest need for Predictive Analytics?

Everyone would like to know the future! The quality of prediction benefits from careful statistical analysis of the available data. Unfortunately, it may often be the case that even very large volumes of data do not allow prediction with any usable degree of confidence - we do not know whether the parameters we are monitoring really have the required predictive power. You are, unfortunately, not guaranteed success when you start dealing with a prediction task. A very good example in this respect is the long history of attempts to predict earthquakes and volcanic eruptions; we are still very far from where we would ideally like to be in that area. You really do not have to restrict this to an industrial perspective.

 

Based on your experience, could you please tell us about the tools/software widely used in the field of Predictive Analytics? Is there a best-of-breed solution available?

There is a huge number of packages available for statistical data analysis. You can do a lot in Excel, for example regression models. You can try machine learning algorithms or even neural networks. In addition, there are online courses available on the latest analytics tools such as DataStream, Hadoop, etc., which can be tried as well. However, I believe the tools should be chosen only after considering the nature of the problem in detail. You should decide on the approach first, and then pick the right tool. Also, to work in Predictive Analytics a very good understanding of statistical methods and models is required.

 

You mentioned models; could you please elaborate on this? Is there a best-of-breed approach one can pursue in this regard? According to you, what are the key determinants/factors that ensure accuracy of the analysis?

 

To make a prediction you need a model. A model here is a very general concept. Depending upon the approach and techniques you use, the model could be explicitly presented as a formula (e.g., regression models) or, as in neural networks, not be directly visible, being embedded in the structure of connections between the neurons in the network. The outline of the general approach used in Pattern Recognition is as follows: use one part of the data to build a model, then test the validity of your model by feeding it the data from the other part. The second step shows how good your model really is.
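As a minimal sketch of this build-then-validate approach (the file and feature names are hypothetical; scikit-learn and logistic regression are just one convenient choice, not the expert's prescribed toolset):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical data set: sensor readings plus a label marking whether
# the important event (e.g. a breakdown) followed shortly afterwards.
data = pd.read_csv("labelled_readings.csv")
X = data[["load", "pressure", "rpm"]]
y = data["event_followed"]

# Step 1: use one part of the data to build the model.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Step 2: test the model's validity on the held-out part.
score = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out ROC AUC: {score:.2f}")  # this is what tells you how good the model is
```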

In addition to this, the data to be analyzed needs to represent an actual behavioral pattern or trend that can be analyzed with a statistical model, forming the basis for drawing meaningful conclusions. It is therefore essential to gather data from a real scenario.

 

The data gathering aspect is becoming more promising as we move towards the Internet of Things. Utility companies have now started offering home hubs that enable their domestic customers to monitor energy consumption and control home appliances remotely from smartphones, say, switching on the heating some time before arriving home. In fact, Infosys has already been involved in the integration aspects of one such project.

 

Everyone is talking lately about the transition to Strategic Maintenance and Prescriptive Maintenance practices; what are your thoughts on this?


If we are talking about Prescriptive Maintenance of some expensive equipment in utilities etc., I think that the organizations that should look in this direction are the companies that actually make the equipment. They are in the best position in terms of being able to collect vast amounts of data from many installed pieces of this equipment. They also should have a better understanding of what needs to be monitored. This increases the chances of success.

 

I am a bit skeptical about quick success in scenarios like "It costs me a lot to maintain my three expensive gadgets/widgets, and one of them failed recently causing me a lot of problems. How nice would it be to use the Predictive Analytics to warn me when one of my gadgets is close to failure? Those guys need to tell me what exactly I should start monitoring. I am sure there are some best practices somewhere".

 

So a quick result is a challenge; what other challenges do you think one may face while approaching a Predictive Analytics solution?

 

From just a task it may develop into a serious research project that starts consuming all your time. Do not expect readily available best practices and universal recipes. You will need to understand a lot about the target process. It takes time and many iterations until (with a substantial degree of luck) you arrive at something usable. Furthermore, the most common pitfall I would suggest any analyst be wary of is generalizing an asset class; generalizing an asset class across domains is also not advisable. Another common problem I have seen companies struggling with is having huge sets of data and no clue what to do with them. A predictive analytics model cannot be generic; it differs case by case. For predictive analysis in Asset Management, each asset's specific information needs to be looked at individually and the asset-specific predictive factors determined accordingly.

 

What are your thoughts on the heavy investments this area entails, something Predictive Analytics is infamous for?

I need to make it clear that I am on the side of skeptics in relation to Predictive Analytics, those who tend to believe that the number of scenarios where it is potentially possible to provide a prediction with a reasonable degree of confidence is rather small, definitely much smaller than the number of scenarios where it is not possible. The best negative example we all know about is prediction of the share prices.

The investments in this area should probably be considered as spending on research and development. The usual considerations are valid here - the investments are heavy indeed and in no way do they guarantee the desired solution. However, I think the beneficial side of heavy investment in research is clear: it may lead to better technologies, algorithms, etc. that have much wider usage and substantial benefits.

 

Besides investment in research, do you think there are other avenues of higher spend that organizations should watch out for?

 

Certainly the investment costs are higher, and early adopters of predictive analytics will have challenges in substantiating the cost. The investments could range from instrumentation, controls, and analytics tools to training company personnel to use the technology and decipher the results in order to act on them. However, looking at the advantage of catching a failure before it causes irreparable damage, the investments seem promising.

 

What would be your advice to organizations attempting to go the Predictive Analytics route?

 

Be ready for a trial-and-error approach, albeit at a smaller scale, and have some experts who are well qualified in statistical computation. I have often seen companies provide research grants to universities; there is a cost advantage to this. Collaborating with equipment manufacturers also helps, as they bring a consolidation of data covering the full range of operational scenarios, which is a must in predictive analytics. Equipment manufacturers and critical-component manufacturers (e.g., bearings, bushes) play a key role and should be partnered with on the predictive analytics journey. Every piece of equipment and machinery is different and unique, so developing a predictive analytics model can turn out to be very time-consuming and costly at times.
Above all, you also need immense patience to succeed in this domain. Never expect to master the art overnight, and do not expect radical results. Taking things one at a time helps - and yes, all the best!

The future of Manufacturing from an IT perspective

During my MBA days, I had the opportunity to learn a lot about manufacturing, operations, supply chain management, and of course many other subjects through lectures, case studies, the web, and books. When we spoke about process improvements in those days, we spoke more often than not about lean manufacturing concepts, Six Sigma, the Toyota Production System, value stream maps, and business process reengineering - and all this was not long ago (the last decade). Today's organizations, when challenged with these issues, rely more and more on their data, and this data is generated from none other than their own backyard, the shop floor, powered by Big Data.


Manufacturing operations have become more complex and intriguing than ever before. The key driver for this has been the ability to generate data from the shop floor, analyze it, and then take the right, much-needed business and manufacturing decisions.


Recently, I had the opportunity to read a fine case study on General Electric and how they have been able to generate data. One plant is enabled with over 10,000 sensors, all tightly knitted into GE's integrated IT systems across the globe (more than 400 plants). These sensors monitor the performance of each manufacturing process and capture information such as temperature, humidity, and energy used. If something goes wrong, the system triggers a notification or message to the process owner's handheld device, phone, or mail, and this timely notification enables the process owner to take the right decision. Data collection through sensors was envisioned initially to improve their own processes; now that those processes have improved tremendously, the focus is on reducing costs and getting more out of the plant. "Think complete system" is the mantra they are going to rely on. This was an attempt by GE to build a world-class manufacturing setup, which they call the "brilliant factory", and it also serves as the test bed for research into the use of their own "Internet of Things" using the GE Intelligent Platforms process suite.
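As a rough illustration of this kind of sensor-driven alerting (the thresholds, the reading format, and the notification hook are invented for the example; this is not GE's actual system), the core rule could be sketched as:

```python
# Hypothetical allowed operating bands per monitored quantity.
LIMITS = {
    "temperature_c": (10.0, 80.0),
    "humidity_pct": (20.0, 70.0),
    "energy_kwh": (0.0, 500.0),
}

def notify_process_owner(owner: str, message: str) -> None:
    # Placeholder: a real system would push this to a phone, e-mail, or handheld device.
    print(f"ALERT to {owner}: {message}")

def check_reading(owner: str, reading: dict) -> None:
    """Compare one sensor reading against its limits and alert on any breach."""
    for quantity, value in reading.items():
        low, high = LIMITS.get(quantity, (float("-inf"), float("inf")))
        if not low <= value <= high:
            notify_process_owner(owner, f"{quantity} = {value} is outside [{low}, {high}]")

# Example with a made-up reading: the temperature breach triggers a notification.
check_reading("line-3-owner", {"temperature_c": 92.5, "humidity_pct": 45.0})
```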


To sustain themselves in the long run, organizations have to take a leaf out of GE's book and implement an integrated system that can collect, synthesize, and analyze data and is intelligent enough to support decisions for the respective process owners and stakeholders. It is easier said than done: most organizations are on legacy systems and will have to revisit their IT strategy and setup and realign their architecture to manage this kind of colossal data, with inputs and outputs in many different formats. Big Data holds an answer to many of these questions.

 

To conclude, there is an abundance of data available, and there is no question about the capability of the advanced and integrated IT systems we have today. The question is, "Are these really enough to arrive at meaningful business decisions and reap maximum business benefits?" The answer seems to be a big "YES"!

March 9, 2015

Moving on with Mobility for Asset Maintenance

Most of us have a smartphone or tablet nowadays. Earlier, we found our laptops to be innovative gadgets making our lives easier. Now we have shifted our focus towards our smart devices, since they have replaced, with ease, many of the activities we used to do on our computers. Mobile technology coupled with the internet has opened up a new gateway of communication. I am sometimes amazed to see how different ends of the world are connected in a fraction of a second. We shop, check our mail, recharge our mobiles, take pictures and videos, chat with friends, read books, track flight status, play games... a whopping list of endless features. We sometimes don't even reveal our lack of awareness of some feature of a smart device, to avoid being considered 'outdated'.

While smart devices have changed our lifestyle in many ways, they have also influenced the way we do asset maintenance. As software vendors look for opportunities to develop mobile apps for different uses, some have already placed their footprint in the Enterprise Asset Management arena. Many mobile versions of EAM software are now available to capitalize on the potential market in the maintenance segment. Maintenance organizations are gradually but carefully changing their processes in this direction, since it involves analysis of application fit, investment planning, and implementation strategy. The trend is picking up pace as the benefits become evident; however, the gradual pace of this shift is attributable to a few challenges these organizations face in implementing a mobile EAM application.

Let us see how a few of these challenges are being met in recent times:

 

1. Hardware Investment Cost:
One of the main concerns for organizations implementing a mobile EAM application is the initial investment in handheld devices such as tablets, kiosks, consoles, etc. In earlier years, specific device vendors produced handheld devices rugged enough to be used for maintenance; however, they worked only with their inbuilt software. Now we have the new philosophy of 'Bring Your Own Device' (BYOD). Assuming employees have a smart device, using their own devices to carry out maintenance functions, data entry, etc. considerably reduces the capital investment in hardware. Since most mobile EAM solutions are available on the cloud, interfaces can easily be accessed with a compatible browser. This gives organizations a good, cost-advantaged reason to move ahead with mobility.

2. Software Cost:
It used to be true that mobile versions of EAM software needed considerable investment, as they were licensed separately from the desktop versions. However, software vendors have found ways to reduce this cost by bundling them with the desktop versions. The biggest attraction in recent times is the Mobility as a Service (MaaS) concept: many mobile EAM apps are moving towards cloud hosting, offering the service for a per-device / per-month fee.

3. Connectivity
When it comes to maintaining distributed assets like railway tracks, pipelines, offshore platforms, aircraft, etc., the constraint is that the workforce cannot be online all the time, and without connectivity, mobility is of limited use. These worries are being addressed by the latest telecom developments. The world is about to witness 5G, the fifth-generation mobile network, which is expected to be up to 100 times faster. Mobile internet keeps you connected most of the time in distributed environments. Even where there is a connectivity issue, mobile EAM applications are designed to work in occasionally connected or disconnected modes; they sync to the servers once connected. This ensures the ends stay connected.
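To make the occasionally connected mode concrete, here is a minimal client-side sketch (the file name, payload fields, and send hook are assumptions, not any particular EAM vendor's API) of queuing work order updates locally and pushing them to the server once connectivity returns:

```python
import json
from pathlib import Path

QUEUE_FILE = Path("pending_updates.json")  # local store used while offline

def queue_update(update: dict) -> None:
    """Append a work order update to the local queue (needs no connectivity)."""
    pending = json.loads(QUEUE_FILE.read_text()) if QUEUE_FILE.exists() else []
    pending.append(update)
    QUEUE_FILE.write_text(json.dumps(pending))

def sync(send_to_server) -> int:
    """Push all queued updates once connected; returns the number synced."""
    if not QUEUE_FILE.exists():
        return 0
    pending = json.loads(QUEUE_FILE.read_text())
    for update in pending:
        send_to_server(update)   # e.g. an HTTP POST to the EAM server
    QUEUE_FILE.unlink()          # clear the queue after a successful sync
    return len(pending)

# Example: record an update while offline, then sync when back online.
queue_update({"work_order": "WO-1042", "status": "COMPLETED", "reading": 73.2})
print(sync(lambda u: print("sending", u)))
```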

4. Data Security
Another major challenge, or decision factor, is data security. Organizations worry about the security provided by mobile apps when it comes to live maintenance data. Because mobile apps are vulnerable to threats across the network, mobile EAM software vendors have taken the safer cloud-based route to ensure data security. Mobile apps are of two types: native and cloud-based. Native apps are downloaded, reside on our devices, and use the device's resources. Cloud-based apps, on the contrary, are hosted on a secure server with centralized data storage. As users log in to the apps via a browser, the data entered remains secured. An added advantage is independence from the operating system (Android, iOS, etc.).

5.  IT Infrastructure Management
Conventional mobile EAM apps needed a platform to design the interfaces, integrate with other systems, and connect with servers - all managed by the organization's own IT team. As the direction is towards reducing IT spend, mobile apps are now provided as a service with fully hosted infrastructure, which eliminates capital investment in infrastructure.

With a host of options available for implementing mobile solutions, organizations can now reap the benefits in terms of better user experience, process efficiency, and improved equipment utilization - in both connected and disconnected modes. Some of the advantageous mobility features are:
1. Get alerts on work assignments and update work orders
2. Record real-time readings of measurable parameters
3. Check the availability of spares at the nearest warehouse and raise material requests on the go
4. Refer to e-manuals to learn about the task at hand
5. Report any snags or defects (which sometimes slip our mind) then and there while performing scheduled or unscheduled maintenance; take a picture of the snag or damage to the equipment and share it with others to crowdsource ideas to fix it

There are clear benefits for organizations in adopting the solutions available for the challenges mentioned above. It's time for organizations to fasten their belts and be ready for the major mobility sweep coming their way. Mobility is changing the way we work and live; let us be part of this change.

March 3, 2015

Best Practices in Facilities Management

Facility management is a business practice that optimises people, processes, assets, and the working environment to support the delivery of the organisation's commercial objectives. It ensures that the customer's facility is in optimum operational condition and that they receive services in a prompt and organised manner. Facility management services can range from maintaining a building's air conditioning, electrical network, and plumbing to cleaning the premises, maintaining landscapes, and providing catering services. It is about improving and maintaining the quality of life within a facility. It is the role of the facility management service provider to ensure that everything is available and operating properly for building occupants to do their work.

Facilities management is an expanding yet competitive and price-sensitive marketplace, in which it is critical for the service provider to maintain cost leadership while ensuring high customer satisfaction. As part of the next generation of solutions, facilities management service providers are looking for solutions that reduce their high operating and administrative costs, ensure optimum utilization of resources, provide real-time visibility into work executed at remote locations, help them meet customer SLAs, and bring automation to reduce redundancy.

Infosys has worked as a strategic partner with some of the world's leading facilities management service providers, and was involved in designing their Work and Service Management, Asset and Inventory Management, Customer Management, and Procurement Management processes. Some of the leading best practices leveraged in designing the facilities management processes were:

Automated Work Force Management - A tightly integrated Enterprise Asset Management, Scheduling, and Mobility solution.

Intelligent Call Scripting - An optimal way to record issues raised by customers over a call.

One-click Work Order creation - A highly automated way to process a service request, perform multiple validations, and create a work order as the positive outcome - all with a single click.

Quality Inspection - A customer-oriented, automated solution to control the quality of work and gather customer feedback.

Multi-dimensional Pricing Models - Pricing rules based on multiple factors, supporting multi-dimensional pricing models.

SLA Monitoring - Priority-based SLA tracking based on the time a work order sits in a particular status (a minimal sketch follows this list).

Out-of-Scope Services - Display out-of-scope services to customers as well and give them the option to choose these services. Use this information in KPI reports to identify opportunities to add new service lines for the customer.

Work Type Handling - An automated solution to validate the source of a work order's initiation and route it down the relevant path.
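A minimal sketch of the priority-based SLA check mentioned above (the priority-to-threshold mapping and field names are assumptions for illustration, not taken from any specific client solution):

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical maximum time a work order may sit in one status, per priority.
SLA_HOURS = {"P1": 4, "P2": 24, "P3": 72}

def sla_breached(priority: str, status_entered_at: datetime,
                 now: Optional[datetime] = None) -> bool:
    """True if the work order has sat in its current status longer than its SLA allows."""
    now = now or datetime.utcnow()
    return now - status_entered_at > timedelta(hours=SLA_HOURS[priority])

# Example: a P1 work order that entered its current status six hours ago has breached its SLA.
print(sla_breached("P1", datetime.utcnow() - timedelta(hours=6)))  # True
```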

March 1, 2015

Importance of Prototyping in Package Implementation - II

The Realizations

For SMEs and business analysts, it's best to know the product's capability in order to state a requirement effectively; it acts as priming!
If someone had said this to me before this assignment, I would have thrown it right out of the window for sure! However, this assignment made me realize a key element of human nature: we stay in the inertia of our local needs unless and until an external force acts upon it - now that's a "requirements" reincarnation of Newton's law for you. In this context, the external force is knowledge of the product and its capability to handle a business need. No matter how hard a package consultant talks about the product features, local business needs will try to paint them the way they want to see them. You go back contented, only to realize later that what the business actually requested in those painstaking requirements sessions was not what they really wanted, or at least not what the application package would have addressed best.

The process owners representing the respective utility business functions were all seated together; in fact, it was surprising to note that most of them had consensus on many of the common processes suggested as part of the process optimization exercise. However, they were completely unaware of the workflow capability of the package and gave requirements that were not specific enough. For us, the boot camp sessions during project initiation were meant to cover the features; in hindsight they were of very little help. When development was completed and the workflow was demonstrated, the result was already out: the business did not like it. At one point the folks felt that the requirements they gave could have been fine-tuned had they been able to picture their requirements through a demonstrable model during the initial requirements stages.


The "Blind men and an elephant" syndrome
I am referring here to an age-old Indian parable which emphasizes that an individual's perception is highly subjective and largely influenced by the ecosystem one belongs to. The SMEs belonged to five different business functions, each viewing the package's "workflow engine" elephant in a different way when mapping their respective process needs. And why blame business users alone - even as system integrators, we sometimes get carried away with our knowledge of the product and try painting the design with a biased brush. We soon realized that the requirements given by the individual functions did not map to what was apparently intended. We end up with such scenarios in our own projects, don't we?
A congruence of views was the need of the hour; what the SMEs lacked was a common language they could use to communicate during these sessions. The boot camp briefing on the workflow features was expected to set the foundation for the subsequent process mapping exercise using the tool; however, it did not serve its purpose. The process owners had a tough time visualizing how their departmental processes would be implemented using the workflow. By now I was convinced that the SMEs were gripped by uncertainty about what they were getting into. It was high time to show them some real stuff and modify it on the go. Did we manage it? Yes, we did!


Over the years I have realized one thing: we spend huge effort charting out the business requirements, but sometimes miss out on the implementation element, the part which actually shapes the requirements into quantifiable deliverables. Having been in IT for over a decade now and having seen multiple IT implementations, I must say that even the best-of-breed solutions are not a panacea for all business problems. The key element for a system integrator is regular, visual communication of the requirements being transformed into implementable packets. Prototyping does wonders in achieving this - at least, that much was apparent from the zealous attitude the process owners showed once we actually got them participating in the workflow development process. With critical sub-processes identified and developed for demonstration, these sub-processes were treated as working development models that were refined based on inputs. Each process owner now had a working workflow model in front of them, a canvas on which they could align their thoughts. I also see strong potential for the typical phased requirements gathering in a package implementation to be subdivided into smaller packets of prototype discussions.


One may ask what happens to the project timelines. Well, to be honest, in my experience the development cycle was definitely longer with the iterative prototyping approach; however, there is good scope for effort reduction in the testing phases. Engaging with the user early in the development lifecycle saves a great deal of effort in the later stages of the cycle. And if you ask me about the intangible benefits, it helps you avoid all the weird surprises like "this is not what I wanted to see" or "you know what, I might want to squeeze in this one small bit - we just passed UAT and still have some time before go-live". Above all, to me, a smile of contentment on the customer's face is the most priceless intangible you could ask for, and I bet you will see it with prototyping!

 

Importance of Prototyping in Package Implementation - I

Catching up on the latest happenings on LinkedIn, reading through some wonderful posts, and getting involved in discussions have been some of my favorite ways of unwinding lately. The people out there are really cool, and there is a lot to learn from them: the knowledge they possess, the experience they have, and the way they articulate things have been inspiring. I recently read a comment from Biju Varughese on how poorly done requirements gathering can be a precursor to a painful IT implementation. Biju's note resonated with folks I met during a recent customer meet, who had concerns about how requirements seem destined for frequent change. While they discussed emphatically how requirements are changed as late as User Acceptance Testing, I was transported back to my stint in a process optimization exercise at one of the largest utility companies. I would like to share this bit of my journey with you in the hope that it adds value to your projects.

The meetings were scheduled in one of the busiest conference rooms at the customer location. On the opening day, the cold weather added to my anxiety of addressing a full house: a team of thirty-odd subject matter experts from different business functions across the utility. The plan was to hold meetings over a period of four weeks to collate and finalize the requirements, followed by mapping the requirements to the "best of breed" package application and being ready for the high-profile to-be sessions thereafter. The requirements analysis exercise ended well, though not before I partially lost my voice on the very next day of the sessions, requiring a kind-hearted soul to offer me a microphone so that I was audible to everyone for the rest of the sessions. My confidence rose all the more when we successfully presented the to-be design and secured the sign-off to start the build. To me, covering and finalizing the whopping list of work management requirements, which included a comment that read "the current processes are clunky!", was in itself an achievement. Such a comment was a natural outcome of these folks dealing with complex processes day in and day out.

Now, before you remind me to start talking about the prototype model, which is the topic you actually wanted to read about, I would like to tell you that this blog is not about the technicalities of "prototyping" - I would be beating that topic to death otherwise. Instead, I would like to present the human aspect of it, for I think that for any model to be sustainable, it needs to align with how people think.
The project entered the build phase, and the development team started mapping the requirements into the package. With the team managing to complete it on time, every face around was a smiling one. However, when the SMEs got a first look at the deliverable, they realized that one of the development tracks, the application workflow, did not meet their expectations. The workflow was required to automate processes across the business functions and incorporate all the business rules and roles stated in the requirements. A seamless implementation of the workflow engine would have streamlined the end-to-end work management process, enabling standardization and adherence. This was the core area where most of the folks found challenges. Analyzing the past, we made some realizations about why things turned out the way they did.
Please read my two realizations, which I think made the difference.
