The Infosys global supply chain management blog enables leaner supply chains through process and IT-related interventions. It discusses the latest trends and solutions across the supply chain management landscape.


December 31, 2010

Best practices for Master Data Load - Part 1

Many organizations face multiple challenges during ERP implementation for various reasons; a few of them are the absence of key data, the presence of duplicate data, load errors, incorrect data, etc. These issues lead to incorrect transactions, inconsistent reports, regulatory compliance problems and poor customer satisfaction.

Data therefore plays a very significant role at Go-Live, and organizations use data templates and tools to load bulk data into the system. Praveen rightly said that "EAMs are data eaters & master data management is a critical aspect in EAM implementations" in his blog "Why my EAM implementation is not giving me as I expected?"

There are various checks and balances you need to put in place to get this right the first time. Listed below are best practices for a master data load, based on my own implementation experience, organized into the following categories.

Preparation for Master Data Load

  • Prepare a list of the data you need for Go-Live
  • Define the sequence in which that data will be loaded
  • Identify which data you will create manually and which you will load using load tools or other upload methods
  • Categorize the master data into organization-specific data, user data, finance data, metadata, etc.
  • For each category, identify an owner to extract the data from the various sources

Validating source data before transferring into data templates

  • Validate the source data, correct errors, remove duplicates, etc. before you transfer the data into the new data capture templates.

Preparation of Data capture template

  • Prepare a data capture template that matches the ERP application
  • Prepare one data capture template per application or per master data object, such as an Item template, a Vendor template, etc.
  • Define basic validation rules (listed below) in the template to minimize data errors before loading; a small validation sketch follows this list
      • Mandatory fields should not be left blank
      • Key information should not be left blank
      • Duplicate data should not exist
      • Spelling should be correct in detailed descriptions and terms & conditions
      • Junk values should be avoided by providing drop-down lists in the fields
  • Set up a checklist and instructions for filling in the data so that the person entering it can self-check and correct any errors. Also include some of the important business rules in this checklist so that the person can validate those as well.
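To make these rules concrete, here is a minimal, hypothetical sketch in Python (the field names and file name are invented, not taken from any specific template) that checks a filled-in item template for blank mandatory fields and duplicate records before loading:

# Hedged sketch: pre-load checks on a data capture template, mirroring the
# validation rules listed above. Field names are illustrative assumptions.
import csv

MANDATORY_FIELDS = ["item_number", "description", "uom"]  # assumed key fields

def validate_template(path):
    errors, seen_keys = [], set()
    with open(path, newline="") as f:
        for row_no, row in enumerate(csv.DictReader(f), start=2):
            # Rule: mandatory / key fields should not be left blank
            for field in MANDATORY_FIELDS:
                if not (row.get(field) or "").strip():
                    errors.append(f"Row {row_no}: '{field}' is blank")
            # Rule: duplicate data should not exist
            key = (row.get("item_number") or "").strip().upper()
            if key and key in seen_keys:
                errors.append(f"Row {row_no}: duplicate item '{key}'")
            seen_keys.add(key)
    return errors

# Example usage: list every issue found in the filled-in item template
for issue in validate_template("item_template.csv"):
    print(issue)

A check like this can sit behind the template's checklist, so the person filling in the data can run it and correct errors before handing the file over for loading.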

Please share any best practices related to master data load that you have come across in your earlier implementations. In my next blog, I will continue the "Preparation of Data capture template" category in more detail and also discuss "Pre-Load data validation".

 

 

Partnering with Honda at Supply Chain World Conference in Singapore: a lot to learn about high-growth markets

This blog is quite different from my previous ones. Instead of analyzing trends or directions in supply chain management, I will share some of my experience from a recent presentation at the Supply Chain World Conference in Singapore. We (Infosys and Honda) were invited to present at this conference on some of the innovations carried out in supply chain inventory optimization and distribution lead-time management.

The gathering was a mix of supply chain practitioners from the high-growth markets of India, China, Southeast Asia and South Africa, as well as the developed markets of North America, Europe and Japan. I have termed these "high-growth" markets because I realized at the conference that these "emerging" economies have actually emerged and evolved their own unique strategies. We were invited to present on the vehicle supply chain, specifically the order and distribution management improvements carried out in the Indian market, which is experiencing double-digit growth. The topic delineated the challenges faced in these markets and the supply chain strategies required for implementing late customization, optimizing inventory and reducing replenishment lead times. What was most interesting were the types of questions asked by the audience, which reflected the nature of the problems faced in their organizations.

There were questions on strategies for developing supply chain systems that would allow inventories to be slashed across the pipeline and in channel stockyards, without allowing the channels to trade among themselves. These questions clearly demonstrated the prime focus on inventory reduction among manufacturers in these high-growth markets. But what was also evident was the reluctance of these manufacturers to use trade among their dealers/distributors to free up the inventory locked in individual dealers' stockyards. Unlike the developed markets, where such practices are welcomed by manufacturers, in these "high-growth" markets the perception that allowing such operations is likely to reduce the manufacturer's control over the dealers prevents acceptance of dealer-to-dealer trade.

Another notable observation was the focus on reducing replenishment lead times across the supply chain through visibility. While it is a no-brainer that visibility reduces planning lead times and slashes supply and delivery lead times, what came out as an "outlier" observation were the questions on KPIs and risk-management methodologies that would enable early warnings for delays, escalations and exigency costs. What supply chain executives were scouting for were quickly deployable, low-cost solutions that would let them watch systems remotely and take decisions as early as possible in the problem life cycle.

It is evident that the manufacturers and supply chain managers in these markets are not carried away by the stupendous growth, but are cautious not to fall into the trap that their counterparts in the developed markets ignored. They are still focused on inventories and replenishment lead times, to maintain the lean structure they would like to retain. What remains for them to learn is to sense the growing maturity of their channel partners and enable those partners to support these goals by giving them greater autonomy, even though this may be perceived as loosening their "traditional" grip on these partners.

December 25, 2010

When and How to Use "Best Fit Model" in Your Statistical Forecasting Suite?

Most demand planning applications, such as SAP APO, i2, Manugistics, Demantra, etc., offer statistical forecasting as one of their major differentiating functionalities. Statistical forecasting generates forecasts for future periods based on the history data provided. These suites offer a large number of algorithms, sometimes as many as 30 to 40 different algorithms / methods / forecasting strategies (all different terms for the same thing).

Statistical forecasting is a somewhat complex and "not so easy to understand" feature. The greater the number of algorithms, the more confusion it creates in the minds of end users. To tackle this problem, almost every demand planning suite has a "Best Fit" algorithm, which is supposed to be a superset of all the others. If the planner applies this algorithm, all possible combinations of all algorithms are run in the background, the one that gives the best fit to the history data is chosen, and the forecast is generated based on that algorithm. Many call it the "algorithm for dummies": it is generally observed that those who do not understand the statistics behind the algorithms use this option to save themselves the trouble. However, in 9 out of 10 cases this algorithm generates a forecast that is wrong and cannot be used for planning purposes. Why does this happen?

To answer this question, let us break down what the Best Fit algorithm does in the background. First of all, it generates all possible parameter combinations for each algorithm on offer. For example, if the Holt-Winters algorithm is on offer, it creates all possible combinations of alpha, beta and gamma values. (For those who do not know, alpha, beta and gamma are the smoothing parameters used by the Holt-Winters algorithm, and their values lie between 0 and 1.) In SAP APO, this algorithm is offered with the additional restriction that alpha, beta and gamma values can only be multiples of 0.05, so each of them can take 20 possible values: 0.05, 0.10, 0.15 and so on. There are therefore 20*20*20 = 8,000 possible combinations of alpha, beta and gamma values. In some other suites, the number of combinations can reach one million. Best Fit will generate a forecast for each of these combinations and select the one with the lowest "forecasting error". Now let us unravel the definition of "forecasting error".
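To make the scale of this search concrete, here is a minimal sketch in Python (purely illustrative, not how SAP APO or any other suite is actually implemented) that enumerates the parameter grid described above:

# Hedged sketch: enumerating the Holt-Winters parameter grid described above.
from itertools import product

# Multiples of 0.05 between 0.05 and 1.00 -> 20 candidate values per parameter
values = [round(0.05 * i, 2) for i in range(1, 21)]

combinations = list(product(values, values, values))  # (alpha, beta, gamma)
print(len(combinations))  # 8000 candidate parameter sets to evaluate

A Best Fit routine would fit the model and score the forecast error for every one of these combinations, which is why the computation, and the scope for a misleading "winner", grows so quickly.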

All these forecasting suites offer the standard forecast error measures such as MAPE, MAD, RMSE, etc., and the user is given the option of using one of them for Best Fit selection. For example, if the user chooses MAPE, the Best Fit algorithm will go through all combinations and select the one with the lowest MAPE. Looks great, but as the famous saying goes, "the devil lies in the detail". How is this MAPE calculated?

At the very least, calculating MAPE requires a forecast value and a history value: take the difference between the forecast and the history value, take its absolute value, divide it by the history value and multiply by 100. The Best Fit algorithm works the following way. Take the Holt-Winters model, for example, which requires a minimum of 27 history data points to generate its first forecast. The first 27 data points in history are therefore used to generate a forecast for the 28th point, and this forecast and the actual history for the 28th point are used in the MAPE calculation. Similarly, the first 28 data points are used to generate the forecast for the 29th point, the first 29 points for the 30th point, and so on. So if you have 72 data points in history, you effectively have 45 (72 - 27) data points for which you have both a forecast and a history value, and MAPE is calculated over these 45 points. As pointed out previously, the combination with the lowest MAPE is then selected by Best Fit. I see the following problems with this selection mechanism (a short sketch of the rolling evaluation appears at the end of this post):
  1. The quality of the forecast in the initial periods is very poor, because it is based on the bare minimum of history. As pointed out above, only the first 27 history points are used to generate the forecast for the 28th point. Since that forecast rests on so little data, its quality is poor, and the MAPE calculated from it is correspondingly unreliable.
  2. Every error measure has its own weaknesses. For example, the MAPE calculation fails if the history data contains zero values or values very close to zero. RMSE fails if the history contains even one outlier. MAD fails if the standard deviation of the time series is high. If the history data has these problems, the chosen error measure will fail, and consequently Best Fit model selection based on that measure will fail.
  3. Sectional fitment of the model is another problem. For example, with 72 history points, the model selected by Best Fit may fit the initial part of the history very well but fail on recent history. The overall MAPE can still be low for this combination, so Best Fit will still choose it.
  4. The stability of the Best Fit model is very low. Every month a new history data point is added, and it may happen that last month's best-fit model is discarded and a new one chosen. This instability arises from the problems in error measure calculation mentioned in point 2.
With all the problems above, using the Best Fit model becomes very complicated and requires a deep understanding of the underlying statistical principles. It is not the algorithm for dummies; on the contrary, it is an algorithm for statistical experts. So next time you are not happy with the Best Fit algorithm, you know the reason: you have to ramp up your statistical knowledge and better understand the selection mechanism Best Fit uses.
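To illustrate the rolling evaluation described above, here is a minimal, hypothetical sketch in Python. A naive moving-average forecaster stands in for Holt-Winters, and the demand series is made up; only the 27-point minimum history and the MAPE definition are taken from the discussion above.

# Hedged sketch: rolling-origin MAPE scoring, as described in the post.
# A real suite would refit each candidate parameter combination at every origin.

def forecast_next(history, window=3):
    """Toy one-step-ahead forecast: average of the last `window` points."""
    return sum(history[-window:]) / window

def rolling_mape(series, min_history=27):
    """Forecast each point after `min_history` using only earlier data,
    then average the absolute percentage errors over those points."""
    errors = []
    for t in range(min_history, len(series)):
        fcst = forecast_next(series[:t])
        actual = series[t]
        if actual == 0:
            continue  # MAPE is undefined for zero actuals (problem 2 above)
        errors.append(abs(fcst - actual) / actual * 100)
    return sum(errors) / len(errors) if errors else float("inf")

history = [100 + (i % 12) * 5 for i in range(72)]  # invented demand series
print(rolling_mape(history))  # with 72 points, 72 - 27 = 45 points are scored
# A Best Fit routine would repeat this scoring for every candidate combination
# and keep the one with the lowest value, inheriting all the problems above.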

December 21, 2010

Product Allocation Planning - Sharing the supply pie and managing order commits

In my recent engagement at a consumer electronics client, I worked with the ops planning teams to understand their finished goods allocation planning process, build a tool to support the process and identify opportunities for improvement.

The function of product allocation planning is that of a bridge between the planning and order fulfillment processes. In the context of multiple sales channels, this function primarily ensures that the right amount of supply is allocated to the right channel partners at the right time.

A case for allocation planning

Typically, operations planning teams require a robust, real-time link to the order management ATP (available-to-promise) function. An allocation planning tool provides this, taking the division of finished goods supply quantities as input and managing commits on customer orders in the order-to-book cycle. In addition, teams want the ATP and allocation functions to work in tandem so that consumption rates from the various sales channels can be monitored, leading to re-adjustment of the allocation strategy.

To illustrate the point, let me give a simple example of a company selling products via a reseller channel:

Let's assume that supply is available in the supply hub at the start of the week, and a small customer places a relatively big order on the same day. Without an allocation plan, the ATP function might commit a large amount of supply to this customer, impacting sales orders from other customers. In a scenario where supply is constrained relative to overall demand to begin with, this could affect fulfillment for regular or premium customers.

The allocation planning function comes into play here, ensuring that the amount of product allocation available to each customer is derived from its demand forecast, current on-hand inventory and the product/customer strategy. Thus, if the ATP check includes material availability along with the product allocation for the customer in the week in question, it promises the customer's order only up to the limit of the allocation planned for that customer.
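As a rough illustration of this check, here is a minimal, hypothetical sketch in Python (the names and numbers are invented, not taken from any particular order management system) of promising an order against both available supply and the customer's planned allocation for the week:

# Hedged sketch: an allocation-aware ATP check, as described above.
# Promise = min(order quantity, unconsumed allocation, available supply).

def atp_promise(order_qty, on_hand, allocation, consumed):
    """Return the quantity that can be committed to this order."""
    remaining_allocation = max(allocation - consumed, 0)
    return min(order_qty, remaining_allocation, on_hand)

# A small customer with a weekly allocation of 50 units places an order for 400.
print(atp_promise(order_qty=400, on_hand=1000, allocation=50, consumed=0))  # -> 50
# A plain material-availability ATP check would have committed all 400 units,
# eating into supply planned for other channels.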

By monitoring the performance of the sales channels (store/reseller/online, etc.) along with incoming supply projections over a given time period, the supply allocation strategy can be shifted to best suit the organization's mid-term strategy.

The benefits can be further enhanced by consumption monitoring and real-time re-adjustment of the strategy. Ops planners can keep a check on customer consumption levels (in terms of bookings) and react to deviations from forecasts.

In this example, if a relatively large customer had not placed orders against its demand forecast during the week, the product allocations could be adjusted to give a larger share to other customers that are doing better than their demand forecasts.
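Continuing the hedged sketch above (again with invented names and numbers), unbooked allocation from a lagging customer could be redistributed to customers that are tracking ahead of forecast, for example in proportion to their bookings:

# Hedged sketch: mid-period re-adjustment of allocations based on bookings.

def rebalance(allocations, bookings, threshold=0.5):
    """Free the unbooked portion of any customer whose bookings are below
    `threshold` of its allocation, and share it among customers that have
    fully consumed their allocation, in proportion to their bookings."""
    new_alloc = dict(allocations)
    freed = 0
    laggards = [c for c in allocations if bookings[c] < allocations[c] * threshold]
    leaders = [c for c in allocations if bookings[c] >= allocations[c]]
    for c in laggards:
        freed += allocations[c] - bookings[c]
        new_alloc[c] = bookings[c]
    total_leader_bookings = sum(bookings[c] for c in leaders) or 1
    for c in leaders:
        new_alloc[c] += freed * bookings[c] / total_leader_bookings
    return new_alloc

allocations = {"A": 500, "B": 300, "C": 200}
bookings = {"A": 100, "B": 300, "C": 250}  # A is lagging; B and C are ahead
print(rebalance(allocations, bookings))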


What constitutes an allocation plan

Various factors decide the planning cycle and horizon of the allocation plan, and the needs differ; e.g., in a retail context, store replenishment operations would plan the allocation on a daily basis, but the horizon could be limited to one or two weeks.

In my engagement, one important focus for the implementation was product strategy guidance (NPI or EOL) between the central and regional planning teams. This required an allocation plan horizon stretching across multiple quarters, to drive direction for quarter-end inventory targets. The plan would be reviewed periodically based on updated supply forecasts, i.e. shipments, in-transits from the manufacturing centers and receipts at the regional hub.

Another key characteristic of the allocation plan is that it can be used to drive supply allocations across a mix of dimensions, e.g. across sales channels (product-channel), across geographies (product-channel-region), or within a sales channel (product-customer-location, product-store).

While there are many more factors that go into building an allocation plan, the above are the basic building blocks to start with.

 
I will continue to post more on the allocation planning process and share my experiences. Please let me know your thoughts...

December 15, 2010

How to define boundaries for Supply Chain Operations between EAM & ERP

Recently, some of my Infosys colleagues attended the MUWG (Maximo Utility Work Group) conference, and while sharing their experience, they mentioned that one of the most discussed topics was "how to define boundaries for supply chain operations between EAMs & ERPs".

Hence, I thought of sharing my learning from some of my previous Maximo implementations, where I implemented Maximo in the following two scenarios:

(i) Maximo for supply chain operations alone, without asset management, and
(ii) Maximo for asset & work management integrated with ERP for supply chain

The supply chain operations considered here are only for the MRO spares supply chain or indirect procurement, as relevant in the asset management space.


In my view, the following should be the main decision-making criteria for this situation:

• One user, one system - A business user should not have to use two different applications for the same business function. For example, a buyer creating a purchase order in the EAM for some items while using the ERP for other items should not be proposed. If the buyers are different for different items, they can perhaps use different applications, but the organization then has to manage two procurement applications and cannot leverage the maximum benefits.

• Business-driven, not IT-driven, decision - Business is generally the decision maker and should evaluate from the business benefits point of view, not from the IT cost point of view. IT cost is also important, but it is largely a one-time initial cost plus some associated maintenance cost later on. Business-driven decisions may not yield short-term benefits, but they bring significant long-term benefits from an overall usability point of view.

• Interlinked processes in the same system - To avoid too much data floating between the EAM & ERP systems, keep interlinked processes in the same system, for example the entire flow from PO approval to invoice generation in one system. Some of the intermediate information can be duplicated in the other system; for instance, material receipts could be sent to the EAM or the ERP (a hypothetical sketch of such an exchange appears at the end of this post).

• Integration complexities - Integration technologies have evolved, enabling new levels of integration, and the complexities, risks and pains associated with integrating large applications have reduced. Most EAMs & ERPs provide APIs that help integrate with other systems. With SOA being the most modern approach, companies should leverage this advancement in technology to the maximum possible level.

• Long-term solution - Think from the best-solution point of view, not from the initial-cost point of view. For a robust, long-lasting solution, the initial cost may be high, as there may be more integration points to be developed.


Generally, most EAM systems have adapters for Tier-1 ERP systems that are flexible and can exchange almost all the information between the EAM & ERP systems. Hence the decision should be made on the basis of what makes the best business sense. Leave the integration & implementation complexity to your system integrator and focus on the best business solution.
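To illustrate the kind of data exchange mentioned above (the material receipt example), here is a minimal, hypothetical sketch in Python. The endpoint, payload fields and values are invented for illustration; real EAM/ERP adapters define their own schemas and transports.

# Hedged sketch: a simplified integration message sending a material receipt
# from the ERP to the EAM so storeroom balances stay in sync with procurement.
import json
import urllib.request

receipt = {
    "po_number": "PO-1001",        # purchase order approved in the ERP
    "item": "BEARING-6204",        # MRO spare being received
    "quantity": 25,
    "storeroom": "CENTRAL-WH",
    "received_date": "2010-12-15",
}

req = urllib.request.Request(
    "http://eam.example.com/api/material-receipts",  # fictional endpoint
    data=json.dumps(receipt).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # left commented out: the endpoint is fictional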

December 11, 2010

Distributed Order Orchestration implies channel reduction?

The other day, a colleague of mine sent me the brochure for Oracle's fresh pitch to the multi-channel world, titled "Oracle Fusion Distributed Order Orchestration - The New Standard for Order Capture and Fulfillment". While the DOO terminology is a slight tweak on the more popular Distributed Order Management (DOM), I read through and noticed a fundamental difference in how the offering is pitched. The text goes thus:

QUOTE
Today, almost all companies manage multiple order capture and order fulfillment systems. In fact, studies show that the average company has 5.2 order capture systems and 4.3 order fulfillment systems. These overlapping and redundant systems create inefficiencies in terms of cost and service. Gathered through channel growth and acquisition, reducing the number of these types of systems is an extremely difficult endeavor with a high degree of risk. For that reason alone, not many organizations are willing to undertake projects that rationalize or retire these systems. Oracle's solution to this dilemma is Oracle Fusion Distributed Order Orchestration, a key component of Oracle Fusion Supply Chain Management. Unlike any order capture and order fulfillment platform in existence today, Oracle Fusion Distributed Order Orchestration seeks to normalize processes and data across multiple capture and fulfillment systems, yielding a single view of order and fulfillment information and decisions.
UNQUOTE

Right off the bat, I was wondering whether Oracle is saying "thou shalt reduce your order capture and order fulfillment systems". A noble objective for sure, and quite simplistic, but it does come with a bunch of qualifiers. A step back to understand why these systems arose in the first place illustrates why the DOO argument is not so much of a can-DOO in all situations. Order capture channels proliferated because customers wanted those choices - to walk into a store to touch and feel the product, to click a promo email and buy through the net, to search and compare from a mobile phone, to rant and rave by calling a call center (regardless of the 1-800-WAIT-ING lines). Most order capture channels were a reaction to customer behavior, so it's no use saying we're going to reduce them.

The second part of the story is on the fulfillment side of the house. If the objective is to pick up inventory wherever it is lying (store warehouses, distribution centers, factory warehouses, 3PL storage, etc.), ship it to the customer and support suppliers via multiple fulfillment models (supplier to end customer, or to any hops in between), why would a retailer want to lose that flexibility? The challenge in supporting these order fulfillment models is that, in most cases, they exist in splendid isolation, without even providing visibility into what they are holding, let alone supporting more complex asks like split fulfillment.
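For illustration only, here is a minimal, hypothetical sketch in Python (node names and quantities are invented) of the kind of split-fulfillment decision that becomes possible once inventory at every node is visible to the order orchestrator:

# Hedged sketch: splitting one order across fulfillment nodes, which requires
# visibility into each node's on-hand inventory.

def split_order(order_qty, nodes):
    """Greedily source the order from the nodes holding the most inventory.
    Returns the sourcing plan and any unfulfillable remainder."""
    plan, remaining = [], order_qty
    for name, on_hand in sorted(nodes.items(), key=lambda kv: -kv[1]):
        if remaining == 0:
            break
        take = min(on_hand, remaining)
        if take > 0:
            plan.append((name, take))
            remaining -= take
    return plan, remaining

nodes = {"store_wh": 40, "dc_east": 120, "factory_wh": 60, "3pl": 10}
print(split_order(150, nodes))  # -> ([('dc_east', 120), ('factory_wh', 30)], 0)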

In summary, the objective regarding "distributedness" should be around "managing" (rather than reducing) these channels. That would mean that regardless of where orders come in from or where items go out (or how orders get split across fulfillment channels), back-end operations remain transparent to the customer. Consequently, there should be no heartburn across the three primary super-categories of order management metrics - customer experience, inventory utilization and cycle time.

Where does channel reduction make sense? If there are half a dozen systems all for capturing orders from stores (for instance, based on geographies, product categories or brands), then there may be a possibility of making life a wee bit simpler for the hapless folks tasked with capturing all those orders (assuming the burden is outside the customer's responsibility!). Also, due to M&A, if the acquired company's processes & systems diverge from those of the parent company, there are possibilities for cutting down redundancy. Ditto with under-used facilities and too many cross-links from suppliers to various holding points. Where there is duplication, there does exist a case for reducing channels; but where an organization is catering to multiple forms of order capture and fulfillment, that is part of its competitive advantage. And one really wouldn't want to reduce that, I suppose.
