The Infosys global supply chain management blog discusses the latest trends and solutions across the supply chain management landscape, with a focus on enabling leaner supply chains through process and IT interventions.


February 26, 2011

Infosys EAM Team at Pulse 2011 - Annual Maximo User Conference

Next week, from February 28th through March 2nd, I'll be in Las Vegas attending the Pulse 2011 conference. Over the last few years, Pulse has been the stage where IBM has propounded Maximo as THE one solution for all asset management blues, regardless of the asset category (MRO/facilities/IT), the industry vertical you may belong to or the modular footprint (work management vs. inventory management vs. procurement, for example). We are a Gold sponsor this year, and you can meet me and my colleagues at booth #305. We have a host of activities lined up:

On Monday afternoon, I will be hosting a session on Smarter Asset Management in the Energy and Utility Industry at Expo Theatre 1. My talk will focus on an innovative approach that uses best-practice adoption, benchmarking, process standardization and smart tools and templates to achieve true "enterprise-wide" asset management capability via consolidation and upgrade to the latest Maximo versions.

Later in the evening, from 7pm to 9pm, Infosys is one of the sponsors of the "GOMAXIMO" reception for Oil & Gas users of Maximo, held at Room 118 - MGM Conference Center - Level 1. We look forward to sharing our Maximo success stories with large Oil & Gas companies among the estimated 100 attendees, representing about 40-50 companies from upstream (40%), midstream (30%) and downstream (30%).

On Tuesday evening, our Utilities customer, Arizona Public Service (APS), will speak about the value of having implemented Maximo as an enterprise-wide asset management system. APS will share their experiences with IBM's Utility-focused Maximo solution, which has enabled them to instill proactive, planned and deliberate processes that promote accountability, responsibility and compliance. APS will also share how standardization and automation of Work and Asset Management processes and resources helped them create a framework to improve and measure operational performance.

In addition, we will be attending the Rail Road Summit customer forum on Saturday, Feb 26th, followed by the Maximo Utilities Work Group on the afternoon of Feb 27th, to network and build relationships with Maximo users.

If you're planning to attend Pulse, we look forward to meeting you at our customer and expo theatre sessions! Don't forget to stop by our booth #305 to learn about our capabilities and success stories in Maximo and to interact with industry and Maximo experts. And if you are the lucky one, you could win the iPad in our drawing, so that's an added incentive.

Also, in the build-up to our presence at the event, I have been doing a host of media meetings with reporters from Supply Chain Matters, InformationWeek, Connected Planet and others to share our experiential thought leadership on the importance of asset modernization and consolidation, after-sales services, Green EAM and the IBM Maximo partnership, and I will continue to lead multiple interactions with reporters and industry analysts from Gartner and Forrester.

I will share my first impressions live during the event, based on key announcements, general sessions and meetings with customers and IBM senior executives.

February 23, 2011

Forecaster ABCs - The 'Vital Few' for Forecasting

Let's travel back in time to the 19th century to take a quick look at a very interesting observation made by the Italian economist Vilfredo Pareto: 20% of the population possessed 80% of the country's wealth, and the same was observed for other countries and over different periods of time. This has become widely known as the Pareto Principle, the 80/20 rule or the 'Law of the Vital Few'. The principle has been adopted in ABC classification, which happens to be the topic of my blog, with the 'A' group items [the 'vital few', representing 20%] contributing to 80% of the phenomenon, the 'B' group [representing 30%] contributing to 10% of the phenomenon and the 'C' group [the 'trivial many', representing 50%] contributing to only 10% of the phenomenon.

This law asserts that outputs are not proportional to inputs; a small set of inputs contributes to, or influences, the outputs significantly. The principle plays an important role in depicting this imbalance, which may be 70/30, 80/20, 95/5, 80/10, 90/30 or any similar split. The key point is that the relationship is between two different sets of data [input & output, or cause & effect] and hence the two numbers need not add up to 100.

[Figure: Pareto chart sample]

This phenomenon has been observed in many areas of business and management, for example: 20% of customers are responsible for 80% of revenue; 20% of steps in a typical process add 80% of the value. In the context of forecasting, we generally see this principle analyzed for the contribution of products to the overall revenue of the firm or business unit. There are the 'vital few' [Group 'A'] products or product families that contribute a significant share of the revenue of the business unit or firm, and then there are the 'trivial many' [Group 'C'] products that make only a marginal contribution to the revenue but are nevertheless important for the business to continue. Any forecast error (in percentage terms) has a much bigger impact on revenue for Group 'A' products than for Group 'C' products. The natural consequence is to concentrate the energies of the forecasting organization on developing the forecasts for these Group 'A' products and to ensure that they are forecasted more often. On the other hand, we would want to set up the forecast process for the Group 'C' items on autopilot with minimal manual intervention - an example being the Automodel forecasting capability provided by SAP APO, which systematically chooses the forecast model with the least error. The approach for managing the Group 'B' products can sit between these two. In a more general sense, we can say that the products are classified based on their importance to the business.
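
To make the revenue-based classification concrete, here is a minimal sketch in Python of how products could be ranked by revenue contribution and cut into 'A', 'B' and 'C' groups. The cumulative-share cut-offs (80%/90%) and the sample data are illustrative assumptions, not settings from SAP APO or any particular tool.

```python
# Illustrative ABC classification by cumulative revenue contribution.
# Cut-offs (A: first 80% of revenue, B: next 10%, C: rest) and data are assumptions.
def classify_abc(revenue_by_product, a_cutoff=0.80, b_cutoff=0.90):
    total = sum(revenue_by_product.values())
    ranked = sorted(revenue_by_product.items(), key=lambda kv: kv[1], reverse=True)
    classes, cumulative = {}, 0.0
    for product, revenue in ranked:
        cumulative += revenue / total
        if cumulative <= a_cutoff:
            classes[product] = "A"   # the 'vital few'
        elif cumulative <= b_cutoff:
            classes[product] = "B"
        else:
            classes[product] = "C"   # the 'trivial many'
    return classes

revenue = {"P1": 500, "P2": 300, "P3": 90, "P4": 60, "P5": 30, "P6": 20}
print(classify_abc(revenue))
# e.g. {'P1': 'A', 'P2': 'A', 'P3': 'B', 'P4': 'C', 'P5': 'C', 'P6': 'C'}
```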

While this approach considers a key factor - the contribution of the product to the overall revenue - and helps us allocate our limited resources to the right priorities, there is one more key factor that can further enhance this classification: the forecastability of the product. In my earlier blog, I discussed how a measure of dispersion can be used to assess the uncertainty related to a part, which directly impacts the forecastability of that part. The perspective here is to analyze the products for their individual forecastability and how they measure up against the collective whole, with the assumption that the more difficult a part is to forecast, the more time should be spent in generating its forecast. So, similar to the classification based on revenue contribution, we should check whether a similar pattern of imbalance exists for forecastability. If it does, we can classify as Group 'A' the products that contribute 80% of the overall uncertainty - and hence need more energy and time to forecast - compared to the Group 'C' products that contribute only around 10%.
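
As a companion sketch, forecastability could be scored with a simple measure of dispersion such as the coefficient of variation of historical demand; the CV thresholds and the sample demand series below are assumptions for illustration only.

```python
# Illustrative forecastability scoring using the coefficient of variation (CV)
# of historical demand as the measure of dispersion; thresholds are assumptions.
from statistics import mean, pstdev

def coefficient_of_variation(history):
    avg = mean(history)
    return pstdev(history) / avg if avg else float("inf")

def classify_forecastability(cv, easy=0.25, hard=0.75):
    if cv <= easy:
        return "easy to forecast"
    if cv >= hard:
        return "hard to forecast"
    return "moderate"

demand = {"P1": [100, 104, 98, 101], "P2": [10, 80, 5, 120]}   # hypothetical history
for product, history in demand.items():
    cv = coefficient_of_variation(history)
    print(product, round(cv, 2), classify_forecastability(cv))
```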

So this brings us to a critical juncture. We have so far talked about classifying the population of products by two factors:

1. Importance of the product to the business and

2. Ability to forecast the product

[Figure: ABC classification options]

Considering these two factors as the two axes, we develop the following grid:

[Figure: 2-axes classification grid]

With this approach, the parts classified as Group 'A' are the products that are important to the business [and hence where forecast error has a bigger impact] and are also difficult to forecast. Herein lies the paradox: these products are important to the business and therefore need to be forecasted with maximum forecast accuracy, yet at the same time they are difficult to forecast. This requires the forecasting organization to plan these parts more rigorously, at a higher frequency and with a very narrow band of permissible forecast error. The forecast numbers generated for these products can also be buffered to avoid any out-of-stock (OOS) situations for these business-critical products.

On the other hand, the products that are of low importance to the business and are easy to forecast are classified as Group 'C' items. These parts can be planned with minimal manual intervention, and the forecast numbers reviewed at a lower frequency.

Since we classify the products by charting them against two axes, how the products along the diagonal are classified as Group 'B' depends on business discretion. Depending on the product or product family, the thresholds can be set to classify these products as Group 'A', 'B' or 'C'.
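
Putting the two factors together, a hypothetical sketch of the grid logic might look like the following; the mapping of the corner cells to Groups 'A' and 'C' follows the discussion above, while treating the diagonal as Group 'B' is a placeholder for the business-discretion thresholds.

```python
# Illustrative combination of the two classifications into the 2-axes grid.
# Inputs reuse the revenue class ('A'/'B'/'C') and the forecastability label
# from the earlier sketches; the diagonal is treated as Group 'B' here.
def grid_classification(revenue_class, forecastability):
    important = revenue_class == "A"                   # high importance to the business
    difficult = forecastability == "hard to forecast"
    if important and difficult:
        return "A"   # critical and hard: forecast rigorously, at high frequency
    if not important and not difficult:
        return "C"   # low importance and easy: autopilot, reviewed less often
    return "B"       # diagonal: thresholds set at business discretion

print(grid_classification("A", "hard to forecast"))   # A
print(grid_classification("C", "easy to forecast"))   # C
print(grid_classification("A", "easy to forecast"))   # B
```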

 

I believe there is a lot of value in classifying the products not just on the basis of business importance [or contribution to revenue], but also by considering the ability to forecast the products accurately. The exercise of reviewing the thresholds for classifying products into groups 'A', 'B' and 'C' needs to be done on an annual basis for every product or product family. The ABC classification thus determined should be used as a key input for developing specific strategies for the forecast generation and review process, and also for reporting exceptions.

I will be happy to hear your thoughts and point of view on this simple yet effective approach.

February 18, 2011

Product Allocation Planning - Managing allocation parameters

Continuing from my previous post, what questions come to mind when one thinks of a finished good product allocation-planning tool?

Well, one of the first questions is: how does the tool ensure the right amount of supply is allocated to the various demand channels, and how does it share supply when faced with a supply constraint?

This presents us with the two important aspects of product allocation planning, that is, Planning Strategy and Managing Supply Constraints. I'll focus on the first aspect in this post.

Setting up the allocation planning strategy

The supply gets distributed based on the setup of allocation parameters, which in turn depends on the mid-term sales strategy and the product life cycle stage. I have tried to lay down a sequence of the basic and important decisions that need to be arrived at:

One of the first steps is deciding at what level of detail you want the allocation to be done. The allocation plan can range from a global view of demand and supply to a refined, customer-specific level that considers country- or sales-area-specific supply and demand. It is good practice to start from a global allocation plan that is based on the global product and sales strategy and gives an allocation across geographies and sales channels. This is further refined into geography- or channel-specific allocation plan versions derived from the global allocation plan, which can be broken down further into customer or customer-group specific plans that are eventually communicated to the ATP module of the order management (OM) system. In order to implement this, the tool must be able to derive a supply projection at various levels of aggregation.

The next important step in starting allocation planning is deriving the right set of groupings you want to allocate for. For example, it may be imperative to have an individual allocation plan for your top 20% of customers in a particular sales area, while the remaining 80% can be clubbed into sets of tier 1, 2 and 3 customers. Similarly, an individual allocation plan could be created for a geography like the USA, versus a combined allocation plan for a set of geographies like South America.
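
As an illustration of such a grouping rule, the sketch below assigns individual allocation plans to the top 20% of customers by revenue and clubs the rest into tiers; the customer names, revenues and tier cut-offs are hypothetical.

```python
# Illustrative derivation of allocation groupings: top 20% of customers by
# revenue get individual plans, the rest are clubbed into tiers (assumed cut-offs).
def derive_groupings(customer_revenue, individual_share=0.20):
    ranked = sorted(customer_revenue, key=customer_revenue.get, reverse=True)
    n_individual = max(1, int(len(ranked) * individual_share))
    groupings = {}
    for i, customer in enumerate(ranked):
        if i < n_individual:
            groupings[customer] = f"Individual plan: {customer}"
        elif i < len(ranked) * 0.5:
            groupings[customer] = "Tier 1"
        elif i < len(ranked) * 0.8:
            groupings[customer] = "Tier 2"
        else:
            groupings[customer] = "Tier 3"
    return groupings

revenue = {"Cust A": 900, "Cust B": 500, "Cust C": 200, "Cust D": 150, "Cust E": 100,
           "Cust F": 80, "Cust G": 60, "Cust H": 40, "Cust I": 30, "Cust J": 20}
print(derive_groupings(revenue))
```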

Following these two decisions, arriving at the right amount of supply for a sales channel or customer in a particular week is the most important step in driving the allocation plan. This is usually based on target days/weeks of inventory, derived keeping in view the sales plans, customer importance and delivery lead times. The allocation quantity for a given week for a customer or channel would be the average sales spread across the target inventory weeks, net of whatever inventory the customer or channel already holds.
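
In other words, the weekly allocation quantity can be sketched as average weekly sales times the target weeks of inventory, net of inventory on hand; the numbers in the example below are hypothetical.

```python
# Illustrative weekly allocation quantity for a channel or customer:
# average weekly sales spread across the target weeks of inventory,
# net of inventory already on hand. Numbers are hypothetical.
def allocation_quantity(avg_weekly_sales, target_weeks_of_inventory, on_hand_inventory):
    required = avg_weekly_sales * target_weeks_of_inventory
    return max(required - on_hand_inventory, 0)

# A channel selling ~200 units/week, targeted at 3 weeks of inventory and
# currently holding 250 units, would be allocated 350 units.
print(allocation_quantity(200, 3, 250))   # 350
```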

Setting allocation priorities is the next key parameter. Depending on the level of allocation plan being derived, these priorities are applied across the various levels - for example across sales channels or geographies, or even at the customer level. The tool should distribute supply to the allocation sets in the order of the priorities defined, so that the higher-priority allocation sets get satisfied first.
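
A minimal sketch of this priority-driven distribution, assuming a flat list of allocation sets with numeric priorities and hypothetical quantities:

```python
# Illustrative priority-based distribution of constrained supply: allocation
# sets are served in priority order until supply runs out. Data is hypothetical.
def distribute_supply(total_supply, requests):
    """requests: (name, priority, requested_qty); lower priority number is served first."""
    allocations, remaining = {}, total_supply
    for name, _, qty in sorted(requests, key=lambda r: r[1]):
        allocated = min(qty, remaining)
        allocations[name] = allocated
        remaining -= allocated
    return allocations

requests = [("Key account X", 1, 400), ("Tier 1 - EMEA", 2, 300), ("Tier 2 - Americas", 3, 300)]
print(distribute_supply(800, requests))
# {'Key account X': 400, 'Tier 1 - EMEA': 300, 'Tier 2 - Americas': 100}
```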

In the diagram below I have presented a view on setting up allocation plan parameters based on product sales geographies, revenues and number of customers.

[Figure: Managing allocation parameters]

While the parameters discussed here are presented sequentially, in practice they can be interdependent when defining an allocation strategy, and they need to be refined periodically to keep up with changing business scenarios. In my coming posts I will continue with allocation parameters and focus on strategies to manage supply constraints. Looking forward to your thoughts.

February 14, 2011

Getting Serious about Supply chain collaboration

Based on the various client interactions and inquiries so far, I can say that organizations are getting very serious about collaborative relationships with their suppliers and are investing a great deal of time and resources in strengthening the relevant processes. In this blog, I share my thoughts on the market direction and activity in this space. Supply chain collaboration is nothing new in the world of competitive supply chains, but collaborative relationships have so far been limited to mature and large suppliers - those who can afford, and are mature enough in terms of IT infrastructure, to be connected via EDI. Organizations have realized that even smaller suppliers are important to staying ahead in this competitive landscape, as disruptions on their end could potentially disrupt the entire supply chain.

Also, skyrocketing supply chain costs can be pushed back somewhat by setting up automated online collaboration, communication and data exchange processes with small and large suppliers, which reduces the resource overhead of managing these processes manually.
 
Transactional data/document exchanges like purchase orders and shipping notifications have been implemented using EDI by many organizations. Now the same basic document exchange functionalities are being extended to smaller suppliers, who do not need EDI to exchange documents but can do the same through a web browser interface. Customers are also extending their planning processes beyond inward-looking planning applications to include the extended supply chain, e.g. exchanging forecasts with suppliers and receiving their supply plans. This has the potential for a huge positive impact on supply chain agility - faster response times, better long-term planning accuracy, improved compliance and reduced supply chain costs - resulting in contributions to revenue and profitability.

The fact that supply chain vendors have also ramped up their offerings in this space, adding significant functionalities that can be used to fully exploit the potential of supply chain collaboration, has helped. A case in point is SAP Supply Network Collaboration (SNC). Organizations considering the implementation of collaborative processes should definitely consider SAP SNC, especially if they use the SAP ECC and APO modules. SAP has added a number of new and significant functionalities to position SNC as the module of choice for supplier collaboration for SAP customers. SNC supports a large number of scenarios for materials and outsourced manufacturing suppliers, such as supplier-managed inventory and work order collaboration. Apart from these scenarios, other functionalities like Quick View, the micro blog and integration with cFolders for uploading documents have also provided customers with the end-to-end tools needed for their collaboration needs.

Implementing supply chain collaboration and the tools to support those processes needs a focused approach, with due emphasis on considerations like IT and process readiness, supplier segmentation, supplier onboarding and deployment architectures. Organizations that have spent the time and effort to understand the various requirements and challenges in such implementations have been immensely successful in their initiatives.

I would be very interested to know your experiences and thoughts  on implementing supplier collaboration processes and tools.

February 6, 2011

How to Monitor Forecast Accuracy - Forward or Backward?

Generally, the statistical forecast generation process works in the following way. The history and forecast horizons are fixed (for example, a 72-month history period is used to generate a forecast for the next 60 months). Every month the oldest history data point is dropped and the newest history data point is added to the history zone. This process is called history rollover (continuing with the example, if you are in February 2011, the January 2005 data point is dropped and the January 2011 data point is added, keeping the 72-month history period constant). The forecast engine runs after the history rollover to generate forecasts based on preset models for each SKU. Generally, the statistical tuning to find the best-suited forecast model for a particular SKU is performed only once, during the initial implementation. After that, some sort of monitoring process and alerts are set up to identify cases where the time series pattern has changed. The topic of this blog is the logic for this monitoring process and, consequently, the alerts.

Most standard statistical forecasting suites offer all the standard forecast error measures, like MAPE, MAD, RMSE, MPE, MSE, etc. The general tendency amongst users is to pick one of these error measures and set up alerts if the error increases beyond a specified threshold. For example, suppose MAPE is your chosen measure and last month's MAPE was 18%. If, after this month's history rollover and forecast engine run, the MAPE becomes 25%, the system should generate an alert, as the drop in forecast accuracy is more than 5 percentage points. The user sees this alert and looks for the reasons behind the drop in forecast accuracy: it could be the introduction of an outlier in the data, or a genuine trend or seasonality change that warrants refitting a new forecast model with new parameters.
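
The alert logic described above can be sketched as a simple threshold check on the change in the error measure; the 5-percentage-point threshold mirrors the example, and the function name is mine.

```python
# Simple threshold alert on the change in a backward-looking error measure,
# mirroring the 18% -> 25% MAPE example; threshold and names are assumptions.
def accuracy_alert(previous_mape, current_mape, threshold_points=5.0):
    """Alert if forecast accuracy dropped by more than the threshold (in % points)."""
    return (current_mape - previous_mape) > threshold_points

print(accuracy_alert(18.0, 25.0))   # True  -> investigate outliers / refit the model
print(accuracy_alert(18.0, 20.0))   # False -> no alert
```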

This looks like a reasonable process, but I have seen it fail in many cases: it will not necessarily identify all the cases where a problem exists. The reason is as follows.

Ask a simple question: how is the MAPE in the above example calculated? In most statistical forecasting suites it works the following way. Suppose the forecast model used is, say, the Holt-Winters seasonal-trend model. The minimum number of data points required for this model to generate its first forecast is two seasons of data for seasonality plus three additional data points for trend. With 12 periods per season, we need 24 plus 3, that is, 27 data points to generate the first forecast. The forecast engine uses the first 27 points of history to generate a forecast for point number 28; in the above example, that means it uses history from February 2005 through April 2007 to generate the forecast for May 2007. Similarly, it uses the first 28 data points to generate a forecast for point number 29, and so on. This way, out of 72 history data points, it has forecasts for 45 of them (points 28 to 72). For these 45 data points both history and forecast are available; the forecast engine compares them and calculates the MAPE. Suppose this MAPE comes to 18%. Now imagine what happens next month: the February 2005 data point is dropped and February 2011 is included in the history. Everything else stays the same. Again the first 27 data points are used to generate the forecast for point number 28, and again 45 data points have both history and forecast. But notice the most important thing: 44 of these 45 data points are the same as last month. Only one new data point, February 2011, has rolled in to make the new MAPE different from the previous one. It is highly unlikely that the introduction of just one new data point will change the MAPE drastically, so we may or may not get an alert. However, even if this one February 2011 data point is bad, it will vitiate the forecast for the next year to a great extent, as every data point matters a great deal in a time series pattern with seasonality. So my contention is to set up the monitoring alerts with the following specification.
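
To see why the backward-looking MAPE barely moves after a rollover, here is a toy illustration that uses a seasonal-naive model as a simple stand-in for Holt-Winters (the point is the overlap of the fitted errors, not the model itself); the synthetic series, season length and 27-point warm-up follow the example above.

```python
# Toy illustration of why the backward-looking MAPE barely moves after a
# history rollover: the fitted errors for 44 of the 45 evaluated points are
# identical month over month. A seasonal-naive forecast stands in for
# Holt-Winters; the synthetic series and seed are assumptions.
import random

SEASON, WARMUP = 12, 27   # 2 seasons + 3 points before the first fitted forecast

def in_sample_mape(history):
    errors = []
    for t in range(WARMUP, len(history)):
        fitted = history[t - SEASON]               # seasonal-naive one-step forecast
        errors.append(abs(history[t] - fitted) / history[t])
    return 100 * sum(errors) / len(errors)         # 45 error terms for 72 points

random.seed(1)
series = [200 + 20 * ((m % SEASON) - 6) + random.uniform(-5, 5) for m in range(73)]
this_month = series[:72]   # e.g. Feb 2005 .. Jan 2011
next_month = series[1:]    # after rollover: drop the oldest point, add the newest
print(round(in_sample_mape(this_month), 2), round(in_sample_mape(next_month), 2))
# The two MAPE values are nearly identical, so a MAPE-based alert rarely fires.
```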

Instead of setting up alerts based on a forecast accuracy measure calculated from past history data, it is better to use the forecasted numbers themselves. Continuing with the same example, when the forecast engine generated a forecast on 1st February 2011 based on history through January 2011, it would have generated forecasts for the next 60 months, starting February 2011 and ending January 2016. On 1st March 2011 we have history up to February 2011. The forecast engine again generates a forecast, but this time for the 60 months starting March 2011 and ending February 2016. Note that 59 of these 60 months are the same as in the previous run. My contention is that, rather than comparing MAPE values based on the past, it is better to compare the previous forecast with the new forecast after the history rollover. For example, you generated a forecast for the period March 2011 to February 2012 on 1st February 2011 and again on 1st March 2011. If something is wrong with the history rollover, these two sets of numbers will differ drastically. An alert set up on the comparison of these two forecasts will be much more sensitive and trustworthy than one set up on backward-looking accuracy measures like MAD, MAPE or RMSE, which are based on past data. Forecast comparison alerts invariably detect such deviations, whereas forecast-accuracy-based alerts miss many time series pattern changes.
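
A minimal sketch of such a forward-looking alert, comparing the overlapping months of two consecutive forecast runs; the month labels, quantities and 10% deviation threshold are assumptions.

```python
# Forward-looking alert: compare the overlapping months of two consecutive
# forecast runs after a history rollover. A large relative deviation suggests
# the rollover introduced a problem. Threshold, labels and quantities are assumptions.
def rollover_alert(previous_run, new_run, threshold_pct=10.0):
    """Both runs are {month: forecast_qty}; only the common months are compared."""
    alerts = []
    for month in sorted(set(previous_run) & set(new_run)):
        deviation = abs(new_run[month] - previous_run[month]) / previous_run[month] * 100
        if deviation > threshold_pct:
            alerts.append((month, round(deviation, 1)))
    return alerts

feb_run = {"2011-03": 120, "2011-04": 130, "2011-05": 140}   # generated 1st Feb 2011
mar_run = {"2011-03": 122, "2011-04": 190, "2011-05": 141}   # generated 1st Mar 2011
print(rollover_alert(feb_run, mar_run))   # [('2011-04', 46.2)]
```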

A thought worth trying... it has benefitted my clients.
