Infosys’ blog on industry solutions, trends, business process transformation and global implementation in Oracle.


August 12, 2009

An Introduction to Oracle Manufacturing Operations Center (MOC)

Does your manufacturing intelligence system support proactive monitoring for quick decision support? Plenty of data gets collected on your shop floor, but how should it be organized to assess the performance of a machine, a line, a plant, or a fleet of plants? What will it take to make the process more of a science than an art? Oracle has launched a new product, Manufacturing Operations Center, to answer some of these questions.

A real-time manufacturing intelligence system can equip shop-floor supervisors, plant managers, and VPs with the tools to view operations at the right level of detail and obtain valuable insights. Some examples of the questions it can help answer:

  1. Can production losses from existing equipment be reduced to increase the effective capacity, and therefore postpone new equipment purchases?
  2. Which machines are showing abnormal temperature and vibration trends and need to be immediately put under preventive maintenance to avoid costly scrap?
  3. When should you move production from Equipment-A (or Plant-A) to Equipment-B (or Plant-B)?
  4. What is the backlog on the shop-floor – and which customer orders should be prioritized for manufacturing?

The primary challenge in any manufacturing intelligence solution is the sheer variety of data standards that must be handled. Each plant can have equipment based on a different technology, with different control parameters expressed in different communication protocols. Integrating information from all equipment to get a holistic view of the shop-floor becomes a complex task. However, as more companies adopt Lean and Six Sigma initiatives, accurate, current, and holistic measurements of equipment and plant parameters become critical for identifying and eliminating waste in all its forms.

Oracle’s Manufacturing Operations Center (MOC) is a manufacturing intelligence solution aimed at measuring what needs to be managed. It can be implemented as a stand-alone product, or in conjunction with Oracle E-Business Suite for which it has prepackaged adaptors. It collects real-time, high-resolution data from shop-floor systems (for example: SCADA, PLCs, DCS, quality systems, MES, counters, sensors, and data historians) and contextualizes it for analysis. Its architectural elements are: 

  1. A persistent data model that supports extensible attributes for capturing customer-specific process parameters.
  2. A contextualization rule engine that adds business context (work order #, shift, product, etc.) to the data collected from the shop-floor (a simplified illustration follows this list).
  3. Configurable role-based dashboards based on Oracle Business Intelligence Enterprise Edition (OBIEE).
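
To make the contextualization idea concrete, below is a minimal sketch in Python of how a rule might attach shift, work order, and product context to a raw shop-floor reading. The data structures, shift calendar, and dispatch lookup are illustrative assumptions for this post, not MOC's actual rule engine or schema.

  # Illustrative sketch only: attach business context to a raw tag reading.
  from dataclasses import dataclass
  from datetime import datetime, time

  @dataclass
  class RawReading:
      equipment: str        # e.g. "FILLER-01"
      tag: str              # e.g. "temperature"
      value: float
      timestamp: datetime

  @dataclass
  class ContextualizedReading(RawReading):
      shift: str = ""
      work_order: str = ""
      product: str = ""

  # Hypothetical lookups standing in for the shift calendar and MES/ERP integration.
  SHIFT_CALENDAR = [(time(6, 0), time(14, 0), "Shift-1"),
                    (time(14, 0), time(22, 0), "Shift-2")]
  DISPATCH_LIST = {"FILLER-01": ("WO-1001", "ITEM-A")}  # equipment -> (work order, product)

  def contextualize(reading: RawReading) -> ContextualizedReading:
      shift = next((name for start, end, name in SHIFT_CALENDAR
                    if start <= reading.timestamp.time() < end), "Shift-3")
      work_order, product = DISPATCH_LIST.get(reading.equipment, ("UNASSIGNED", "UNKNOWN"))
      return ContextualizedReading(reading.equipment, reading.tag, reading.value,
                                   reading.timestamp, shift=shift,
                                   work_order=work_order, product=product)

Once readings carry this context, the same temperature trend can be sliced by shift, product, or work order in the dashboards.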

MOC supports advanced graphics and drill-downs for more than 50 KPIs and more than 90 measures. These KPIs and measures are organized under the MOC Catalog, which has the following categories:

  1. Manufacturing Asset Performance: To analyze Overall Equipment Efficiency (OEE) and production loss
  2. Batch Analyzer: To analyze batch production variances (quantity, material usage, resource usage) and cycle time
  3. Schedule Adherence: To analyze production performance and production slippage
  4. Agility Responsiveness: To analyze flexibility ratio at equipment level
  5. Plant Maintenance: To analyze equipment downtime measures
  6. Quality: To analyze first pass yield, scrap, and rework
  7. Service Levels: To analyze manufacturing performance related to schedule, requested, and promise dates for pegged sales orders
  8. Equipment Downtime Analysis: To analyze equipment downtime by downtime reasons
  9. Equipment Scrap Analysis: To analyze scrap quantity by scrap reasons
  10. Equipment Attributes Data – Actual: To compare actual attributes data with specifications 

Role-based dashboards can be configured from these measures to foster focused monitoring. The seeded Plant Manager Dashboard has tools to track the following metrics: 

  1. Asset performance (OEE) by plant, department, and equipment: OEE is a product of machine availability, machine performance, and first-pass yield. It summarizes the current state of the plant and helps in benchmarking operations against other companies and divisions.
  2. Batch performance: Measures work order quantity variance, PPM trend, batch cycle time trend, and service level performance
  3. Production performance: Measures production schedule performance by department and equipment

The first measure – OEE – helps plant managers dig deeper into the reasons for poor equipment availability, equipment performance, and product quality. Figure 1 shows how these losses (shown as B, C, and D) can reduce the total capacity (A) of equipment to effective capacity (E).

Figure 1: Understanding Overall Equipment Effectiveness (OEE)


Plant managers can view the departments with low OEE and drill down to individual equipment to find out which loss is dragging effective capacity down. Similar drill-downs are possible with the other metrics in the Plant Manager Dashboard.
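
To make the OEE arithmetic behind Figure 1 concrete, here is a small sketch with assumed numbers (the factor values below are illustrative, not product defaults):

  # Illustrative OEE calculation with assumed factor values.
  availability = 0.90   # uptime / scheduled production time
  performance  = 0.95   # actual output rate / ideal output rate
  quality      = 0.98   # first-pass yield (good units / total units)

  oee = availability * performance * quality
  print(f"OEE = {oee:.1%}")   # OEE = 83.8%

Even with every factor at 90% or better, the compounded OEE is only about 84%, which is why the dashboard's breakdown of availability, performance, and quality losses matters.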

MOC uses the ISA-95 standard for defining equipment hierarchies. Shop-floor data is collected as tags using third-party OPC servers (from companies like Kepware, ILS, and Matrikon) and other third-party solutions. With Kepware’s KepServer, for example, a Channel is a group of shop-floor equipment, and each data collection point (pressure, temperature, vibration, etc.) on a piece of equipment is defined as a tag. Each tag is mapped to a database table field in MOC. Data on each tag can be collected at a user-defined frequency.
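
As a rough illustration of the tag-to-table mapping, the sketch below polls tags through a placeholder read_tag() function and stages each value against a mapped table and column. The mapping, table, and column names are hypothetical; in a real deployment the OPC server and MOC adapters handle this plumbing.

  # Illustrative sketch only: poll mapped tags and stage readings for MOC.
  import random
  from datetime import datetime, timezone

  TAG_MAP = {
      # (equipment, tag)            -> (hypothetical staging table, column)
      ("FILLER-01", "temperature"): ("EQUIP_READINGS_STG", "TEMPERATURE_C"),
      ("FILLER-01", "vibration"):   ("EQUIP_READINGS_STG", "VIBRATION_MM_S"),
  }
  POLL_SECONDS = 5   # frequency at which poll_once() would be scheduled

  def read_tag(equipment: str, tag: str) -> float:
      """Placeholder for the OPC server read; simulates a value here."""
      return round(random.uniform(0, 100), 2)

  def stage_reading(table: str, column: str, equipment: str, value: float) -> None:
      """Placeholder for an insert into the staging table."""
      ts = datetime.now(timezone.utc).isoformat()
      print(f"{ts} {table}.{column} [{equipment}] = {value}")

  def poll_once() -> None:
      for (equipment, tag), (table, column) in TAG_MAP.items():
          stage_reading(table, column, equipment, read_tag(equipment, tag))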

MOC requires Oracle Warehouse Builder (OWB) 10.2.0.4 and OBIEE 10.1.3.4. It uses OWB as an ETL tool to extract data from source systems. OBIEE is used to generate configurable role-based dashboards, ad-hoc reports, and alerts. The reports can be easily downloaded to Microsoft Excel and PowerPoint. A profile option in MOC points to the machine and port where OBIEE is installed. 

The latest release (12.1.1.01) of MOC comes with EBS process manufacturing integration, enhanced EBS discrete manufacturing integration, production performance reporting, and production quality monitoring.

MOC is built for quick deployment and adoption. It is advisable to start MOC deployment with a controlled scope (a cell or a line in a plant) and expand gradually to cover the production landscape.

August 4, 2009

Best Practices in Handling Uncertainties in Business Requirements in Enterprise Solution Implementations

Experience shows that it is impossible to define all business requirements in the initial phase of a packaged software implementation. Some requirements are missed due to schedule pressure or oversight, while others originate later due to changed business realities. How can program sponsors manage changing requirements without impacting budgets and schedules? This article attempts to answer that question based on practices observed in multiple large enterprise system implementations.

Implementations of packaged software aim to deliver solution components for all validated business requirements, most of which are gathered in the initial phases (see Figure 1).


Figure 1: Evident and hidden Business Requirements and Solution Components in a packaged software implementation

Business owners, with assistance from process improvement groups and IT members, can specify current and foreseeable new requirements. However, some business requirements remain unforeseen because they do not exist when the program is in its nascent stage. Solution components, on the other hand, can take the form of non-system solutions (business process changes and user orientation) or system solutions (use of standard packaged software features and budgeted development resulting in package extensions). Some package extensions remain unforeseen (and hence unbudgeted) in the initial stage.


The implementation team focuses on the solidly shaded cells in Figure 1. These cells represent known and foreseen requirements and the budgeted solution components that deliver them. Cells not shaded in Figure 1 represent sources of uncertainty in the implementation.



Figure 2: Sources of uncertainty in a packaged software implementation

As shown in Figure 2, this uncertainty has three themes: Requirement Coverage, Solution Scalability, and Solution Extensibility. Requirement Coverage represents the uncertainty due to requirements that should have been captured in the initial pass but were missed due to oversight, schedule pressures, or inadequate business involvement. These requirements surface later in the implementation (usually during System Testing or, worse, during User Testing). Implementation teams rely on business members to minimize this source of uncertainty. Solution Scalability represents the uncertainty about how well the budgeted solution components can address unforeseen business requirements without additional development. Such requirements arise unannounced from unforeseen business process changes during the implementation. Only mature implementation teams design solutions with the foresight to handle this source of uncertainty. Solution Extensibility represents the uncertainty due to the unbudgeted changes needed in the solution to handle unforeseen business requirements. This is a blind spot for most implementation teams.


The following strategies have been found useful in minimizing the impact of these uncertainties on the quality, cost, schedule, and scope of an implementation:


Uncertainty: Requirement Coverage

Strategy: Make first-pass requirement gathering a foolproof process

  • Involve the right levels of business users in requirement gathering. Ensure representation from the right geographies, roles, departments, and product/service segments. Involve senior business managers (VPs/Directors) in reviewing the requirements and signing them off.
  • Validate against previously created requirement inventories to ensure requirements are not missed. Such inventories can come from older business or IT assessments, business cases, package evaluation exercises, and industry best practice literature.
  • Make sure the requirements tie to the to-be process flows. Tying each requirement to a step in the to-be process flow sometimes unearths many missed requirements.
  • Validate against the business functionalities provided by the current legacy system. Focus on functionalities required in the to-be system and check if all have been captured as business requirements.
  • Conduct detailed Conference Room Pilots (CRPs) to validate the first-pass requirements.

Strategy: Enforce early and continuous user involvement for smooth course corrections

  • Involve business users early in the solution testing process. Let the business own test script preparation so that the voice of the customer is embedded in the system design. Involve newer business members (different from those involved in requirement gathering) early in testing for an unbiased assessment of the solution.
  • Conduct testing with a consistent data set that represents real business data. Use such data sets to conduct testing for all business process flows and variants from start to finish.
  • Pilot training materials early with real business users.


Uncertainty: Solution Scalability

Strategy: Develop “Solution Platforms”, and not just “Solutions”

  • Design first for “Foundational Processes” and then for “Variants” within each foundational process. Adopt the Solution Platform approach used in other industries, where solutions for individual variants are enabled by configurable setups and options built over a core Solution Platform that addresses the foundational process. Continuously improve the solution platforms based on experience (a simplified sketch of this idea appears after these strategies).
  • Avoid monolithic design. Keep the design modular by dividing the solution into pieces that can be loosely coupled to address a process variant, as needed. Build design redundancies based on implementation experience. Design data structures with the capability to handle multiple scenarios of the same data.
  • Design reporting platforms with ad-hoc reporting capability for end users.

Strategy: Enforce a standards-based architecture

  • Centrally coordinate the solution design process. Require approvals for deviations from established design standards.
  • Enforce an open integration architecture with plug-and-play components and common input and output data formats. Use industry standards (if available) for integration design and data mapping.
  • Define uniform data cleansing, enrichment, and data stewardship rules. Define clear standards for the master and slave systems for each master data entity.
  • Evaluate compatible hosted software for specialized process steps that need not be kept in-house (for example, tax calculation, export control validation, and background checks during recruitment).
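
As one hedged illustration of the “Solution Platform” idea in code, the sketch below keeps a foundational process fixed and lets configuration select loosely coupled variant steps; all entity, region, and step names are made up for this example.

  # Illustrative sketch only: foundational process + configurable variant steps.
  from dataclasses import dataclass, field
  from typing import Callable, Dict

  @dataclass
  class OrderContext:
      region: str
      channel: str
      attributes: Dict[str, str] = field(default_factory=dict)  # extensible attributes

  # Variant steps registered against configuration keys, not hard-coded branches.
  VARIANT_STEPS: Dict[str, Callable[[OrderContext], None]] = {}

  def register_variant(key: str):
      def wrap(fn: Callable[[OrderContext], None]):
          VARIANT_STEPS[key] = fn
          return fn
      return wrap

  @register_variant("EU:web")
  def add_vat_review(ctx: OrderContext) -> None:
      ctx.attributes["vat_review"] = "required"

  def process_order(ctx: OrderContext) -> None:
      ctx.attributes["credit_check"] = "done"         # foundational step for every variant
      step = VARIANT_STEPS.get(f"{ctx.region}:{ctx.channel}")
      if step:                                        # loosely coupled variant step
          step(ctx)

With this pattern, a new process variant becomes a new registered step plus a configuration entry rather than a change to the core flow.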


Uncertainty: Solution Extensibility

Strategy: Define clear governance guidelines around solution changes

  • Lay down clear guidelines for developing a business case for new requirements that demand changes to the budgeted solution. Get senior business and IT leaders involved in evaluating business cases above defined cost and schedule thresholds. Define rules to decide how the validated solution changes will be funded.
  • Prioritize and schedule validated solution changes so that they do not impact the running projects.

Strategy: Expand the review net to capture unforeseen requirements and capabilities

  • Monitor business initiatives to catch unforeseen requirements. While reviewing requirements, involve stakeholders in charge of the areas from which unforeseen requirements might come. Examples of such stakeholders include business owners responsible for establishing a new plant and business leaders from sites where the solution will be rolled out next.
  • Monitor upcoming package upgrades for new features that can be useful.


Handling these three uncertainties demands different leadership skills. Business leaders can lead the effort to ensure “Requirement Coverage”, functional and technical solution architects can lead the effort to ensure “Solution Scalability”, and the Program Management Office, with inputs from business leaders and solution architects, can ensure “Solution Extensibility”. This guideline can be adapted to how the program is organized.
