Infosys’ blog on industry solutions, trends, business process transformation and global implementation in Oracle.

March 31, 2018

Blockchain & Finance - An Introduction for the CFO

Have you heard about blockchain? Even if you have not, you have surely heard about Bitcoin. Bitcoin is not blockchain, but Bitcoin is built on blockchain technology.

Why should a CFO care about blockchain technology?

Blockchain technology is a big game changer. It can be used to solve many business problems. Some industries are hugely impacted; others will see only a minor impact. And since the technology is still evolving and maturing, new impacts are being discovered every day. Ignoring the technology could mean loss of competitive advantage and inefficient processes that erode shareholder value. As the guardian of shareholder value, it is of great importance for the CFO to understand the technology in general and its impact on the finance function in particular.

Before we discuss how blockchain impacts the finance function, let us understand what blockchain is, what its unique features are, and what its benefits are.

What's in a name?

Blockchain technology is also sometimes referred to as DLT, i.e. Distributed Ledger Technology. While there are minor differences between the two, to keep things simple we can treat them as the same.

What is blockchain / DLT (Distributed Ledger Technology)?

As the name indicates, the technology combines blocks, chains, distribution (i.e. decentralization) and ledgers (lists of data). DLT stores data in blocks, and each block is linked (chained) to the previous one using cryptography. Apart from data storage and linkage, in DLT the complete data set is replicated (distributed) across participants. Data is added to the blockchain based on a 'consensus' rule, and a blockchain may also carry smart contracts, which execute automatically when certain criteria are met.
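
To make the block / chain / consensus terminology concrete, here is a minimal, illustrative Python sketch of how blocks can be linked with cryptographic hashes. It is a teaching aid only - not how Oracle Blockchain Cloud Service or any production ledger is implemented - and every name in it is invented for the example.

```python
import hashlib
import json
import time

def block_hash(block):
    # Hash the block's contents (excluding its own hash) deterministically.
    payload = json.dumps({k: v for k, v in block.items() if k != "hash"}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def new_block(data, previous_hash):
    # Each block stores business data plus the hash of the previous block,
    # which is what "chains" the ledger together.
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    block["hash"] = block_hash(block)
    return block

def is_chain_valid(chain):
    # Tampering with any block changes its hash and breaks every later link.
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# A tiny ledger: a genesis block followed by two business events.
chain = [new_block({"event": "genesis"}, previous_hash="0")]
chain.append(new_block({"event": "goods shipped", "qty": 100}, chain[-1]["hash"]))
chain.append(new_block({"event": "invoice received", "amount": 5000}, chain[-1]["hash"]))
print(is_chain_valid(chain))  # True; becomes False if any block is altered
```

In a real distributed ledger, every participant holds a replica of this chain, and new blocks are appended only after the agreed consensus rule is satisfied.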

Blockchain / DLT (Distributed Ledger Technology) - How does it help?

Because of these characteristics, a blockchain can help businesses:

  • Speed up business processes - transactions taking days can be done in seconds.

  • Reduce costs - as it will enable direct peer-to-peer interaction without the need for intermediaries.

  • Reduce risk - transactions are immutable and cannot be changed once created

  • Enforce and build trust - all data is transparent and additions go through a consensus mechanism.

If the discussion so far feels too technical, let me describe a finance use case to make the technology and its benefits easier to understand.

Trade Finance - Use Case - Using Oracle Cloud, Oracle Blockchain Cloud Service

Trade finance is one of the areas where blockchain technology is already in use. Let us imagine a typical bill discounting scenario. The scenario has the following participants: a buyer (say 'ABC Electronics'), a seller (say 'LG Electronics'), and a financing bank (say HSBC). Assume we are the buyer, using Oracle Cloud applications.

ABC Electronics buys goods from LG. On receipt of the goods and the invoice from LG, the details (physical copies of the invoice) are sent to HSBC. HSBC verifies the documents and then releases funds to LG based on the due date.

Note that the above process:

  • Might take 3-5 days, probably more

  • Gives the participants no visibility of the status - have the goods been received by ABC Electronics, has the invoice been received by ABC Electronics, has HSBC received the documents, has HSBC verified the documents?

  • Risks the invoices being damaged, lost or tampered with as they move between the different parties.

How can Oracle Blockchain Cloud Service help here?

With blockchain we can now build a solution whereby:

  • Each step of the business process - sending goods, receiving goods, receiving and sending invoices, verification of receipts and invoices by the buyer, sending the invoice to the bank - is captured and shared on the blockchain

  • Transactions are added to the blockchain on consensus and cannot be tampered with (immutable)

  • Additions to the blockchain can be made by automatic or manual processes. Oracle Blockchain Cloud Service offers REST APIs to automatically integrate Oracle Cloud applications with the blockchain (see the sketch after this list).

  • New data can be added based on an agreed consensus mechanism, which can be built using Oracle Blockchain Cloud Service.

  • Oracle Blockchain Cloud Service also offers a front-end application, which helps participants view the status of transactions (data transparency)

  • The physical invoices need not be sent to the bank; the bank can connect directly via the REST APIs offered by Oracle Blockchain Cloud Service to verify the invoices captured by the buyer (this eases and speeds up the process)

  • With Oracle Blockchain Cloud Service, smart contracts can be built to automatically transfer funds to the seller once the invoices are duly verified (process automation)
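
Oracle Blockchain Cloud Service exposes REST APIs for invoking and querying smart contracts (chaincode). The sketch below shows what an automated integration from an ERP business event might look like; the endpoint paths, channel, chaincode and function names, URL and credentials are all placeholders assumed for illustration, so the exact calls should be taken from the Oracle Blockchain Cloud Service REST API documentation.

```python
import requests

# All of these values are assumptions for illustration only.
OBCS_BASE_URL = "https://myblockchain.example.com/restproxy"   # hypothetical REST proxy URL
CHANNEL = "tradefinance"                                       # hypothetical channel
CHAINCODE = "invoice_cc"                                       # hypothetical chaincode (smart contract)

def record_invoice_receipt(session, invoice_number, supplier, amount):
    """Post an 'invoice received' business event to the blockchain via the REST proxy."""
    payload = {
        "channel": CHANNEL,
        "chaincode": CHAINCODE,
        "method": "recordInvoice",            # hypothetical chaincode function
        "args": [invoice_number, supplier, str(amount)],
    }
    response = session.post(f"{OBCS_BASE_URL}/invocation", json=payload, timeout=30)
    response.raise_for_status()
    return response.json()

def query_invoice_status(session, invoice_number):
    """Query the shared ledger so any participant (buyer, seller, bank) sees the same status."""
    payload = {
        "channel": CHANNEL,
        "chaincode": CHAINCODE,
        "method": "getInvoice",               # hypothetical chaincode function
        "args": [invoice_number],
    }
    response = session.post(f"{OBCS_BASE_URL}/query", json=payload, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    with requests.Session() as session:
        session.auth = ("integration_user", "password")   # placeholder credentials
        record_invoice_receipt(session, "INV-1001", "LG Electronics", 125000.00)
        print(query_invoice_status(session, "INV-1001"))
```

In practice the same pattern - a REST call fired by the ERP when goods are received or an invoice is validated - is what lets each business event land on the shared ledger without manual intervention.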

Below is the pictorial representation of how data (block) gets added to each node after each business event based on consensus between all participants and the same view is available to all participants.

With the above solution

  • The data is visible to all participants and is consistent across all participants.

  • Physical invoices need not be sent to the bank.

  • The correct invoice details are confirmed by all parties and cannot be tampered with (immutable). This capability is only possible because of blockchain technology.

  • Smart contracts execute automatically to initiate supplier payments.

  • Payment to the seller can be processed in a few minutes instead of days.

Are there other use cases? What is the impact on the finance function?

While there is a big impact on the financial services industry and on cryptocurrencies, the focus of this note is the impact on the finance function, at a more micro level.

There are many other use cases. As the technology matures, the way it is implemented is also evolving and new use cases are getting discovered.

Oracle, while releasing the Blockchain Cloud Service solution at Oracle OpenWorld 2017, listed a good set of questions that help determine possible use cases for blockchain. Businesses should check the following to discover potential use cases:

  • Is my business process predominantly cross-departmental / cross-organizational? (think of intercompany and interparty reconciliations)

  • Is there a trust issue among transacting parties? ( think of trade finance scenarios)

  • Does it involve intermediaries, possibly corruptible?

  • Does it require periodic reconciliations? (think of intercompany and interparty reconciliations)

  • Is there a need to improve traceability or audit trails? (think of bank confirmation letters, third party balance confirmation letters needed by auditors)

  • Do we need real-time visibility of the current state of transactions? (think of publishing reports to various stakeholders)

  • Can I improve the business process by automating certain steps in it? (think of automatic payment, based on inspections by a third party).

From the above, we can see numerous opportunities for improving the finance function. Let me list possible use cases by critical finance functions.

1. Financial Management

   Sub-functions: Strategic Planning; Annual Planning; Rolling Forecasting (Quarterly / Monthly); Working Capital Management; Forex Management

   Possible impacts: An internal, permissioned blockchain can be built to get consensus on the plan, which is transparent to all participants and immutable. A permissioned blockchain can also be set up to speed up the funds disbursement process for trade finance.

2. Financial Reporting and Analysis

   Sub-functions: Statutory and External Reporting (GAAP / IFRS / VAT etc.); Management Reporting (Scorecards, Dashboards); Strategic Finance (Scenario Planning, M&A); Customer and Product Profitability Analysis; Balance Sheet, P&L, Cash Flows

   Possible impacts: A permissioned blockchain can be set up for communication of reports that is secure, tamper-proof and quick to publish.

3. Governance, Risk and Compliance

   Sub-functions: Financial Policies & Procedures (Business Rules Management); Tax Strategies and Compliance; Tax Accounting; Audit, Controls and SOX Compliance; Enterprise and Operational Risk Management; System Security and Controls

   Possible impacts: Secure communication of reports to government authorities. A permissioned blockchain can be built to get consensus on account balances for audit purposes.

4. Finance Transactions and Operations

   Sub-functions: General Accounting; Managerial Accounting; Accounts Payable; Credit and Collections

   Possible impacts: A permissioned blockchain - transparent, immutable and consensus based - can be built to capture customer promises for cash collections.

5. Financial Consolidation

   Sub-functions: Period-end book closure (monthly, quarterly, yearly); Currency translation and trial balances; Intra- and inter-company transaction accounting; System of records close (COA, GL, sub-ledgers)

   Possible impacts: A permissioned blockchain can be built to share and agree on intercompany balances.

 

Any pitfalls? What should you check?

There are many potential uses of this technology. As the technology matures and more proof-of-concept projects get executed, new use cases are being discovered and old use cases are also being dropped. As per the Gartner Hype Cycle, blockchain technology has passed the 'Peak of Inflated Expectations' phase and is likely to enter the 'Trough of Disillusionment' phase as POCs start failing, before entering the 'Slope of Enlightenment' phase.

Considering the hype, there is a risk of trying to force-fit blockchain into scenarios where simpler, cheaper, faster options might work better. While blockchains are immutable and highly secure, there are a few exceptions, and special attention is needed to ensure the exceptions are understood and managed. Government regulation governing blockchain contracts also needs to evolve. There are also concerns with data transparency, which might not always be a good thing.

Conclusion

Blockchain is a big game changer. Its impact on the finance function is inevitable. As the technology matures, it will help the CFO automate and speed up processes and build internal controls, even with third parties outside the organization. The CFO organization should start discussions on discovering use cases. It is likely that entirely new ways of running processes will be developed, in ways never imagined before.

The intention of this article is to give an introduction to blockchain, its impact on the finance function, and how Oracle Blockchain Cloud Service can help build a blockchain quickly.

Continue reading " Blockchain & Finance - An Introduction for the CFO " »

March 28, 2018

Resource demand analysis with High Level Resource Planning

The resource features used in Primavera are typically limited to projects that are in hand / currently running, with resource assignments made accordingly.

But EPC organizations with turnover of $500M-$1B or more always need to know, at a high level, the resource shortage or excess they will have on board for the next two years, or at least for the next 6 months to 1 year.

Organizations use different methods, varying from company to company, to build this high-level resource plan - mainly to know the manual worker count required (for example, welders / fitters / riggers) in the coming year, since the requirement and usage of such resources peak every year and getting them on board at the right time is difficult.

However, this can be addressed using the 'High Level Resource Planning' feature in the Primavera Web module itself. Most EPC organizations use Primavera's capabilities only for a narrow scope of project planning, scheduling and resource assignment, even though the potential to plan organization-level project staffing requirements is already inherent in the tool - a functionality the EPC sector has largely not yet tapped into.

At any given point in time, Primavera already provides the resource analysis graph for current running project assignments cumulatively; with high-level resource planning added, we can now get a picture of future manpower requirements.

Resource planning flow of Organizations when High Level Resource Planning is in use:

 


Continue reading " Resource demand analysis with High Level Resource Planning " »

March 26, 2018

Driving Business Intelligence with Automation

Business Intelligence applications have been around for a very long time and have been widely used to present facts using reports, graphs and visualizations. They have changed the way many companies use tools, in a broader and more collaborative way. However, users these days expect these tools to operate autonomously while staying connected to a vast network that can be accessed anywhere. Because BI applications analyze huge sets of data in raw form, it can be difficult to draw intuitions and insights from them directly. Automation can help users extend Business Intelligence beyond its current capabilities by keeping a few points in mind.


Intuition Detection
Automate the intuition.

Business Intelligence should be able to raise the right questions in a fraction of a second and provide insights without any manual interference. Artificial Intelligence can do this better by using the available computing power to take a deep dive into the data, trying all the available possibilities to find patterns and leading to the discovery of business trends that meet the needs of the user.


Intuition Blending
Automate what intuitions are prioritized.

The very process of detecting insights in Business Intelligence should be automated in a way that prioritizes the intuitions and ranks the worthiest ones higher based on user needs. Artificial Intelligence then compares all the possible insights and draws relationships between them, helping users work on multiple insights at once.


Common Dialect
Automate how intuitions are defined.

The Business Intelligence tools that exist so far have done a tremendous job of analyzing huge amounts of data through reports, graphs and other visual representations. But users still have to figure out the insights and the best possibilities from the analytics themselves. This human factor leaves room for misconception, delusion and misinterpretation. This is where Artificial Intelligence takes Business Intelligence to the next level and provides insights in an easily understandable language such as English. Graphs and reports can still be used to present this accurate and widely comprehensible output.


Common Accessibility
Create a just-one-button-away experience.

Finding an insight should be very accessible - as simple as a click away. This is possible with Artificial Intelligence, where BI is automated to let the user get professional insights instantly by clicking a button. Users can then easily access the data without any prior knowledge of the tool or of data science to arrive at the intuitions. Tools like Einstein Analytics from Salesforce have already implemented this, attracting a huge set of users all over the globe.


Cut-Off Sway
Trust the data to reduce human fault and anticipation.

Artificial Intelligence generally reduces manual intervention and thus avoids human errors. Bias can stem from individual views, misinterpretation, influenced conclusions, deep-rooted assumptions and faulty conventions. Automation by means of Artificial Intelligence removes these factors and reduces the risk of being misled by flawed information. It purely trusts the data.


Hoard Intuitions
Increase efficiency by integrating insights.

Integrating insights into the application alongside the factual data using Artificial Intelligence may mean users no longer need to use the BI tool directly. This ultimately makes the application an engine for other software, increasing the effectiveness of the tool. Users can then spend more time making sound, profitable decisions rather than putting unwanted effort into operating the tool. This can change the minds of many business users who presently do not use any sort of Business Intelligence tool.

Continue reading " Driving Business Intelligence with Automation " »

March 20, 2018

HR Leaders at 'crossroads' again, but third time's the charm!

Most HR leaders and practitioners have witnessed the proverbial 'change is the only constant' phenomenon in the HCM space over the last 50-60 years. These changes span functional areas like HR Administration, HR Service Delivery, Talent Management and Workforce Management. While there has been a sea change in employee-centric operational and strategic HR processes over these years from the business side, technology has aided or enabled these changes.

A quick analysis of these changes will unveil interesting factors.

  1. The pace of such changes in the business and technology layers has been steady and controlled.

  2. Most of the time, they were decided based on the organization's operational or strategic requirements.

  3. Change initiatives are driven and executed by the Organization's own team or partners / vendors.

  4. A closer look will reveal that most of these factors are 'internal', not guided by 'external' factors, barring statutory and regulatory compliance.

Today, we see that 'external' factors are more forceful than internal factors in guiding these change decisions in an organization's HR practices. To understand the impact, it is important to note the chronological patterns which forced these change initiatives. Let's go through the 'decision points' for change which forced HR leaders to the 'crossroads' of critical decision making about their HR practices.

Manual Vs. Automation

In the era when mainframe applications were being built, and through the personal desktop revolution post-1985, HR practices in most organizations were evaluated on a single question - 'manual vs. automation' - to decide the scope of HR IT work and what remains part of HR operations.

Mundane, high-volume transactions like Payroll, Employee Administration or Time & Attendance, which were centered on record keeping, rule-driven and workflow-driven activities, were the initial targets for deciding what would be automated and what would remain a manual process.

Most legacy HR applications were designed and built on this paradigm and still are functional in the same way.

User driven Vs. Self-Service driven

As we progressed, all these HR applications - both in-house and commercially available COTS solutions - had to be redesigned around another key question: which processes are driven / enabled by the HR user, and which HR processes have to be enabled for Employee or Manager Self-Service. HR leaders were left at another crossroads at this point, as control of HR processes and operations moved from users to employees in certain areas.

HR leaders and practitioners had to redesign their HR processes to determine the operational processes retained by HR users and the processes delivered to employees or managers on a self-service basis. Most in-house HR applications and COTS applications have been redesigned to respond to this trend of 'empowering the employee'.

These self-service HR processes, across various business functions, centered on approvals like leave or timesheet approvals, workflow mechanisms, and simple data-driven queries like pay slips or leave balance checks.

Human Interface Vs. Digital AND Standardized Vs. Personalized

While it is tedious to be at a 'crossroads' again, HR leaders are finding themselves there once more - however, the third time looks far more exciting than the last two tides of change. Aided by technology advancements that are changing the face of the world and the imperative to cater to diversified employee needs, the upcoming changes in HR business processes and the associated HR IT applications are phenomenal. Digital and Employee Experience are going to drive these changes - an exciting opportunity for all HR leaders.

This time the questions are juxtaposed, asking for multiple decisions:

Decision #1: What HR processes to be retained with 'Human / Human Interface' layer and what HR processes form the 'Digital Experience' layer...

Decision #2: What HR processes are to be delivered as Industry 'Standardized' processes and what HR processes need to be 'Personalized' to improve the Employee Experience.

Moreover, these decisions impact all HR functional areas and are not limited to areas with data-entry, operational or approval needs.

So, the task for HR leaders now is to determine how to break-up or slice & dice the HR processes and map them into the right areas in this two-dimensional model along with the benefits or value-adds depicted above.

As part of our HCM Advisory / Strategy offerings, the 'HCM process maturity model' assessment framework has been re-engineered to suit the upcoming paradigm shifts in the HCM space, starting with assessment and evaluation of each L0-L3 process to be mapped along these two dimensions.

Stay tuned to our 'HCM World' showcase and sessions, to get updates about this renewed 'process maturity model' framework.

 

March 18, 2018

ODI 12c Miscellaneous Concepts

In this blog I want to share concepts related to the below topics.

Export/Import:

A standard export and import in ODI exports only the specific component and its child components - components under the selected mappings such as temporary interfaces, reusable mappings, scenarios, etc.

The exported components are saved in XML file format. This process is usually preferred when there is a very minor change to a particular object and all environments are in sync.


Smart export/import:

The usual best practice for export and import is the smart export/import process. Just drag the components to be exported onto the wizard; it automatically picks up all the designer and topology objects and dependencies related to the corresponding component. The import action works the same way in reverse. A response file keeps a log of the import process.


 

Solutions in ODI:

Solutions are used to group the scenarios created for mappings, packages, procedures, etc., so that changes to the group can be synchronized and easily exported and imported to the respective environments. This is helpful when a project contains a large number of scenarios and it is difficult to find the changes and perform the migration manually. Solutions therefore help group the required objects for the migration process, and are mainly used in production support.


 

Deployment in ODI:

Under the Physical tab of every mapping we can create a new deployment specification for the same mapping - for example, changing the options in the knowledge modules or changing the type of KMs. This is mostly useful for running either a full load or an incremental load by creating a new deployment specification and choosing whichever is needed. Deployment specifications were newly introduced in 12c and were not present in 11g.


Commit and Isolation levels in Procedures:

Commit levels: A procedure is made up of a group of tasks which are executed step by step. Each task is an SQL statement which needs to be committed. To manage the commit levels we have options like Autocommit and Transaction 0 to Transaction 9. Autocommit simply commits the particular task in the procedure. Transactions 0 to 9 are used to group tasks that need to be committed together, in sequence. This is how commit levels are managed in ODI.
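
To illustrate the idea of auto-committing each task versus grouping several tasks into one transaction, here is a generic Python sketch using the standard sqlite3 module. It shows the concept only; it is not ODI's internal mechanism, and the table and values are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stage (id INTEGER, val TEXT)")

# 'Autocommit' style: the task is committed on its own.
conn.execute("INSERT INTO stage VALUES (1, 'task one')")
conn.commit()

# 'Transaction 0-9' style: several tasks are grouped and committed together,
# so they either all succeed or are all rolled back as one unit.
try:
    conn.execute("INSERT INTO stage VALUES (2, 'task two')")
    conn.execute("INSERT INTO stage VALUES (3, 'task three')")
    conn.commit()          # one commit for the whole group
except sqlite3.Error:
    conn.rollback()        # undo the whole group on failure

print(conn.execute("SELECT COUNT(*) FROM stage").fetchone()[0])  # 3
conn.close()
```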

Isolation levels: These are classified into four levels - Committed, Uncommitted (Dirty), Repeatable and Serializable.

Committed: The tasks in the procedure read only committed data from the source.

Uncommitted (Dirty): The tasks in the procedure read uncommitted data from the source (Oracle does not support this).

Repeatable: While reading large amounts of data, the tasks in the procedure lock the rows being read so that they cannot be updated during that time, to avoid inconsistency.

Serializable: While reading large amounts of data, the tasks in the procedure lock the respective table so that it cannot be updated during that time, to avoid inconsistency.





 

March 15, 2018

Job Scheduler

A job scheduler is a system which enables the execution of jobs from the back end. Job scheduling is also referred to as batch scheduling or workload automation. Job schedulers offer a GUI-based platform for defining and monitoring background processes running on distributed sets of servers, applications, operating systems and networks. The scheduler works on simple logic: it picks up the jobs that are to be run on a given day at a particular time and processes them based on various constraints. It also offers a dependency model, so that jobs can follow each other. Thus, the day-to-day activities of Hyperion, SAP, PeopleSoft or mainframe admins can easily be automated based on the underlying business needs and timelines.
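
The dependency model can be pictured with a small Python sketch: jobs declare their predecessors, and the scheduler releases a job only once everything it depends on has completed. This is a simplified illustration of the concept, not the implementation of Maestro or any other scheduler, and the job names are made up.

```python
# Minimal illustration of dependency-driven job release.
jobs = {
    "extract_gl":    {"depends_on": [],                "run": lambda: print("extracting GL")},
    "load_hyperion": {"depends_on": ["extract_gl"],    "run": lambda: print("loading Hyperion")},
    "run_reports":   {"depends_on": ["load_hyperion"], "run": lambda: print("running reports")},
}

completed = set()
pending = set(jobs)

while pending:
    # Release every job whose predecessors have all completed.
    ready = [name for name in pending if set(jobs[name]["depends_on"]) <= completed]
    if not ready:
        raise RuntimeError("Deadlock: remaining jobs have unmet dependencies")
    for name in ready:
        jobs[name]["run"]()
        completed.add(name)
        pending.remove(name)
```

A real scheduler layers calendars, priorities, resource constraints and recovery actions on top of this basic release loop.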

One of the popular job schedulers is Maestro, which offers a full-featured graphical user interface. A few of the features of Maestro are mentioned below:


  • Central control: It offers central control for processes configured across various platforms and applications, saving the hassle of dealing with different platforms and applications spread across different servers. It consolidates job scheduling across systems like Linux, Unix, Windows, mainframes, etc. in one place.


  • Simple interface: It offers a simple interface which makes it easy to track and manage jobs, schedules, tasks, job status and job history, helping speed up diagnosis of underlying issues. It offers views showing jobs that are active, running, inactive, held, or that require assistance.


  • System recovery: This feature ensures that when a job fails, it is addressed automatically - by putting dependent jobs on hold, running a recovery job (if defined), stopping further processing, re-running the original job or continuing to the next job, and alerting the operator via email.

  • Calendars and schedules: Maestro offers scheduling and calendar features to manage batch job execution. Each schedule has a date, runs on that particular date / week / calendar, and contains a list of jobs defined to be executed on that date. Maestro carries forward uncompleted schedules from the previous day. A calendar is an array of dates, and any number of calendars can be created to meet scheduling needs. For example, a calendar named 'month-end' can contain the dates of financial close, while 'Holiday' can hold the financial holidays on which jobs should not execute.

  • Queuing system: Maestro uses a deterministic approach to determine the priority of jobs. It considers parameters like elapsed execution time, estimated execution time, job dependencies, the count of simultaneous job runs allowed for a user, file dependencies, the occurrence of prescribed events, resource availability, etc.

  • Security: The system performs IP validation to prevent access by unauthorized external systems.

POV: FDMEE vs Cloud Data Management

 

Introduction

 

As more organizations begin to embrace Oracle's software-as-a-service (SaaS) Enterprise Performance Management (EPM) Cloud offerings, there is an often overlooked but important decision that needs to be made early in the adoption cycle - which toolset will be used to integrate data into EPM Cloud Service products such as Planning and Budgeting Cloud Service (PBCS), Financial Close and Consolidation Cloud Service (FCCS), or Profitability and Cost Management Cloud Service (PCMCS).

                   

Herein we explore the two primary data integration options available to customers and the pros and cons of each. The conclusion provides a recommendation that can be applied to organizations of all industries and sizes as they plan their journey into the Cloud.

 

The Encounter

 

Oracle continues to grow its Cloud service offerings, both in terms of customer volume and functionality. The changing landscape of software and infrastructure has led a number of organizations to adopt a cloud strategy for one or more business processes. The benefits of the Cloud are hard to refute: the software is regularly improved, the hardware is owned and maintained by Oracle, and application upgrades become a thing of the past. While the shift to the Cloud is a broad topic with many considerations, our focus herein is the data integration toolset and, more broadly, the integration strategy.

                       

When customers shift to the Cloud, they are repeatedly told that the Cloud service contains a component named Cloud Data Management which can address all of the data integration requirements for the Cloud service. Honestly, this is an overly optimistic view of the capabilities of Cloud Data Management. Data integration requirements can drive solutions that range from very simple to incredibly complex, and this large spectrum demands a more holistic assessment of the integration options. Since it is impractical to assess the full solution requirements during a software sales cycle, the important question every organization should ask when considering its data integration plan for EPM Cloud Services is: what are my options?


Cloud Service Data Integration Options


As with any software offering, there are numerous potential solutions to a given requirement. When assessing software choices, options are normally grouped into two classes - buy versus build. A 'buy' choice means acquiring a packaged software offering; the Oracle EPM Cloud Services are an instance of a buy decision. In addition to prebuilt functionality, an important advantage of a packaged subscription is vendor maintenance of the solution, including future version releases. A 'build' solution means creating a custom solution specific to a single organization. The latter is usually unsupported by any software vendor, and its range of capabilities and evolution are both subject to the skillset of the individual or team that developed it.


Herein we focus on packaged solutions, as those align more closely with the often-expressed goal of adopting a Cloud approach: streamlining the solution and its ownership. Options such as ODI or the REST API are all valid, but they are considered build options in the build-versus-buy decision and are therefore excluded from this analysis.


Considering packaged applications for integration with Oracle EPM Cloud Services, the two main options available to customers are FDMEE and Data Management. FDMEE is a separate on-premises solution, whereas Data Management is a built-in component within each of the Oracle EPM Cloud Services. Before comparing these products, it is necessary to highlight the purpose and capabilities of each.


 FDMEE


Financial Data Quality Management, Enterprise Edition (FDMEE) is a purpose-built application for integrating data into the Oracle EPM suite of products. The application includes predefined logic for loading data to the on-premises EPM applications HFM, ARM, Hyperion Planning, and Essbase. Additionally, FDMEE (as of the 11.1.2.4.210 release) can integrate data directly to the EPM Cloud Services FCCS, PBCS, PCMCS, and Enterprise Planning and Budgeting Cloud Service (EPBCS).


FDMEE as an ETL: FDMEE can loosely be described as an ETL-type application. ETL is the integration process of extracting, transforming and loading data. FDMEE is not a true ETL tool because it is not intended to handle extremely large volumes of data (millions of records in a single execution); for such volumes, a pure ETL tool such as Informatica or ODI would theoretically be a better fit. FDMEE nevertheless offers many of the same core functionalities as ETL tools: it can extract data from a variety of sources, transform the data to the EPM dimensionality, and load the resulting data to EPM applications.


FDMEE differs from pure ETL because it was designed with the business user in mind. ETL solutions are generally owned and operated by the IT department; ETL executions are scheduled, and any deviation from the defined process or timeline often requires coordination between the business user requesting the off-cycle execution and the IT owner of the ETL solution. FDMEE, by contrast, is typically managed and maintained by business users. FDMEE users can update transformation logic through a web interface with little to no coding knowledge required. Users can schedule FDMEE jobs or execute them in an ad hoc fashion as data is needed. The end-to-end process is completely in the hands of the business users.

FDMEE adaptors: FDMEE offers strong extract capabilities, including prebuilt adaptors to source data from Oracle EBS GL, PeopleSoft GL, HANA, J.D. Edwards EnterpriseOne GL, SAP GL, HCM and Business Warehouse. These adaptors provide the logic and code required to source data and remove the need for organizations to define and maintain custom extract queries - a significant value-add of FDMEE. Additionally, FDMEE can source data from any relational source as well as any flat file format. These three methods - prebuilt connectors, relational connections and flat files - ensure that FDMEE can consume nearly any data source required to support the EPM systems to which it is intended to load data.


Data conversion in FDMEE: The conversion capabilities, known as mapping, are another key aspect of FDMEE. Often the transformation that occurs during a standard ETL process is accomplished through SQL queries that must be designed from scratch. While FDMEE uses SQL to execute the conversion in the background, the mapping logic is entered in a web interface that looks and feels very much like an Excel worksheet: source system values are aligned to target system values in a columnar grid. FDMEE maps support multiple mapping methods, including Explicit (one to one), Between (a continuous range mapped to a single value), In (a non-continuous range mapped to a single value), Like, and multi-dimensional maps where multiple source system segments are used to determine an EPM target dimension value.
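
To illustrate those mapping methods, here is a small, hypothetical Python sketch that applies Explicit, Between, In and Like style rules to a source account segment. It mimics the behaviour described above for explanation purposes only; it is not FDMEE's mapping engine, and all the rule values are invented.

```python
import fnmatch

# Hypothetical mapping rules in the spirit of FDMEE's Explicit / Between / In / Like methods.
explicit_map = {"1000": "Cash"}                      # one source value -> one target value
between_map  = [((4000, 4999), "Revenue")]           # continuous range -> single value
in_map       = [({"5100", "5300", "5700"}, "Opex")]  # non-continuous set -> single value
like_map     = [("6*", "Depreciation")]              # wildcard pattern -> single value

def map_account(source):
    if source in explicit_map:                       # Explicit (one to one)
        return explicit_map[source]
    if source.isdigit():
        for (low, high), target in between_map:      # Between (continuous range)
            if low <= int(source) <= high:
                return target
    for members, target in in_map:                   # In (non-continuous range)
        if source in members:
            return target
    for pattern, target in like_map:                 # Like (wildcard)
        if fnmatch.fnmatch(source, pattern):
            return target
    return "Unmapped"

for account in ["1000", "4500", "5300", "6120", "9999"]:
    print(account, "->", map_account(account))
```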


Data loading in FDMEE: FDMEE further differentiates itself from standard ETL solutions in its load processes, which are purpose-built for integration into the Oracle EPM product suite. Not only does this ensure a seamless load of data to the target EPM application, it also includes prebuilt logic that improves the data load process. For example, without any additional build effort, FDMEE can trigger calculations in the target EPM application to perform tasks such as currency translation, data clearing or aggregation. While standard ETL tools can certainly achieve this, FDMEE offers this capability natively and needs no additional build effort beyond configuring the integration to perform these actions.


FDMEE Value-Add Features


Audit controls in FDMEE: Because FDMEE stores its transformation logic within the application, users are able to inspect the data conversion that was applied and so better understand how a source system data point was converted to a target system intersection. FDMEE also tracks changes to the transformation logic: it records which user altered the logic and when, and keeps the conversion logic before and after the change so the effect of the change is understood. Finally, FDMEE provides a tremendous amount of activity-based logging. The application captures each execution of the ETL (workflow) process along with detailed information such as the user performing the process, start and end times, and in-depth technical actions that allow not only debugging but also performance and process tracking. Internal or external auditors frequently ask for evidence that the data in a reporting application is current, accurate and complete. Since data is regularly transformed during an ETL process, the preconfigured FDMEE reports and user interface capabilities can be used to easily validate the transformation effect, and a number of reports are available to audit the overall process execution - when it was run and by whom. These powerful tools can be used to demonstrate the validity of data inside the EPM application.

Drill options in FDMEE: FDMEE provides functionality known as drill back and drill through.


Drill back is the action of moving from the EPM application to the FDMEE application to investigate the source records that make up the balance from which the drill back was initiated. Drill back is native to any EPM system to which FDMEE has loaded data. The main requirement of this functionality is that the drill back must be initiated from an input-level intersection to which data was loaded; drill back cannot occur from parent levels within any of the hierarchies. This is undoubtedly an area where the community would like to see FDMEE drill back enhanced. Appropriate training in the process of drilling down and then drilling back can usually overcome this apparent limitation.


Drill through, by contrast to drill back, is not native to every source system from which FDMEE can extract data. Drill through is provided natively with the preconfigured adaptors for the Oracle-branded GLs as well as SAP R/3 and ECC. For non-Oracle or non-SAP data sources, drill through depends on the capabilities of the source system. FDMEE issues a drill-through request in the form of an HTTP request, so the ability to drill through to the source system depends on the source system having a handler for that web request. Any system that can accept the web request could, in theory, be configured to support drill through from FDMEE.
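
Because drill through is simply an HTTP request carrying the context of the selected data point, any source system that can serve such a request can participate. The sketch below, using only the Python standard library, shows a hypothetical handler that receives drill-through parameters and would look up the matching detail records; the URL path and parameter names are assumptions for illustration, not the actual request format FDMEE sends.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class DrillThroughHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical drill-through parameters passed by the EPM integration layer,
        # e.g. /drill?account=4500&entity=US01&period=Jan-18
        params = parse_qs(urlparse(self.path).query)
        account = params.get("account", ["?"])[0]
        entity = params.get("entity", ["?"])[0]
        period = params.get("period", ["?"])[0]
        # A real handler would query the source ledger for the detail records behind this balance.
        body = f"Source detail for account={account}, entity={entity}, period={period}"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), DrillThroughHandler).serve_forever()
```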


Cloud Data Management

Cloud Data Management is intended to allow an organization to adopt a pure Cloud solution for Oracle EPM deployments. Cloud Data Management is a module within the Oracle EPM Cloud Services and is built using the same code line as on-premises FDMEE. Cloud Data Management can integrate flat files and includes all of the on-premises FDMEE transformation capabilities, including SQL mapping, which can accommodate complex transformations. It includes the prebuilt logic to natively load data to each of the Oracle EPM Cloud Service offerings. Cloud Data Management can also integrate with other Oracle Cloud Service offerings, including the ability to source data from and load data back to Fusion G/L Cloud, as well as to source data from other Oracle EPM Cloud Services.


Differences between FDMEE and Cloud Data Management

While the main transformation and load capabilities of FDMEE are available within Data Management, some of the key features of an on-premises deployment of FDMEE have been disabled.


The table below highlights the availability of FDMEE features in Cloud Data Management:


Topic                                          | FDMEE | Cloud Data Management
-----------------------------------------------|-------|----------------------
Pre-built connection to Oracle branded ledgers | Yes   | No
Pre-built connection to SAP ERP and DW         | Yes   | No
Direct connection to relational data sources   | Yes   | No
Data Synchronization (Hybrid Mode)             | Yes   | No
Import, Custom and Event Scripting             | Yes   | No
Custom Reports                                 | Yes   | No
Flat File Processing                           | Yes   | Yes
Pre-built connection to Oracle Fusion GL       | Yes   | Yes
Mapping                                        | Yes   | Yes
Multi-Period Processing                        | Yes   | Yes
Data Synchronization (Full Cloud Mode)         | Yes   | Yes
Automation                                     | Yes   | Yes
Drill Through                                  | Yes   | Yes

 

A key feature that is not available in Cloud Data Management is the ability to connect to on-premises systems. This applies both to the systems for which Oracle has created adaptors and to those that require additional application configuration. For example, if an organization is using Oracle E-Business Suite (EBS) or SAP General Ledger, Cloud Data Management cannot connect to either of those systems. To integrate data from on-premises systems to Oracle Cloud Service products such as EPBCS or FCCS using Cloud Data Management, a process needs to be developed to transfer a flat file to the Cloud instance and then invoke a Cloud Data Management process. While this is certainly achievable using Oracle's EPM Automate command line utility, many organizations prefer to avoid flat file integrations when possible.


Role of EPM Automate: Cloud Data Management is most often used in conjunction with a lightweight on-premises command line utility known as EPM Automate. At a minimum, EPM Automate is required to transfer data files to the Cloud Service instance in which Cloud Data Management is deployed; however, multiple EPM Automate commands can be threaded together to create a lights-out, end-to-end data integration process. A data integration task flow may contain the following steps (a sketch of such a flow follows the list):


1. Login to the Oracle EPM Cloud Service instance


2. Upload a data file


3. Initialize a Cloud Data Management routine to process the data file


4. Execute calculations in the EPM Cloud Service product (e.g. EPBCS)


5. Execute a Financial Reports (FR) report in the EPM Cloud Service


6. Download the report output to an on-premises location


7. Log out of the Oracle EPM Cloud Service instance


8. Generate and send an email with the FR report attached
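
A lights-out version of this task flow is usually scripted around the EPM Automate utility. The sketch below wraps the commands in Python purely for illustration: the URL, user, file, rule and business rule names are placeholders, and the exact EPM Automate command names and arguments should be verified against the documentation for your Cloud Service release.

```python
import subprocess

def epm(*args):
    # Run an EPM Automate command and stop the flow if it fails.
    subprocess.run(["epmautomate", *args], check=True)

# Placeholder values for illustration only.
URL = "https://planning-test-mydomain.pbcs.us2.oraclecloud.com"
USER, PASSWORD_FILE = "integration_user", "password.epw"

epm("login", USER, PASSWORD_FILE, URL)                      # 1. Log in to the Cloud instance
epm("uploadfile", "actuals_jan.csv")                        # 2. Upload the data file
epm("rundatarule", "ACTUALS_LOAD", "Jan-18", "Jan-18",      # 3. Run the Data Management load rule
    "REPLACE", "STORE_DATA", "actuals_jan.csv")
epm("runbusinessrule", "AggregateAll")                      # 4. Execute calculations
# Steps 5-8 (run a report, download the output, log out, email the report)
# would follow the same pattern with the corresponding commands.
epm("logout")                                               # 7. Log out of the instance
```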


EPM Automate helps Cloud Data Management function as a more fully integrated solution for the Oracle EPM Cloud Services. The utility is not limited to Cloud Data Management; it can be, and often is, used in on-premises FDMEE deployments as well.


Further Comparisons:


Scripting: Another feature of FDMEE that is not available in Cloud Data Management is the ability to use scripting. Scripting allows the FDMEE application to be extended beyond the standard out-of-the-box features - for example, connecting to a relational repository and extracting data, or generating email status messages for lights-out data processes. The scripting languages used by FDMEE are Visual Basic and Jython. Both of these languages have the ability to interact with the operating system, including that of the FDMEE application server. This creates a significant risk in a Cloud-based deployment: a malformed or malicious script could cripple an entire Cloud deployment. Because neither language offers a way to remove the specific functionality that is potentially harmful to the Cloud environment, Oracle has simply disabled all scripting ability in Cloud Data Management.
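
As a flavour of what scripting enables in an on-premises deployment, here is a hedged Jython/Python-style sketch of the kind of status email a custom event script might send at the end of a lights-out load. The mail host, addresses and process name are placeholders, and exactly where such logic is attached (for example, an after-export event script) depends on the FDMEE implementation.

```python
import smtplib
from email.message import EmailMessage

def send_status_email(process_name, status):
    # Placeholder mail server and addresses for illustration only.
    msg = EmailMessage()
    msg["Subject"] = f"FDMEE load '{process_name}' finished with status: {status}"
    msg["From"] = "fdmee@example.com"
    msg["To"] = "finance-team@example.com"
    msg.set_content(f"The data load process {process_name} completed with status {status}.")
    with smtplib.SMTP("mailhost.example.com") as server:
        server.send_message(msg)

# In an FDMEE event script this could be called after the export step to alert
# users of success or failure without anyone logging in to check.
send_status_email("ACTUALS_LOAD", "SUCCESS")
```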


The inability to use scripting reduces the capabilities of Cloud Data Management. The criticality of data integration is generally an overlooked portion of an Oracle EPM project. Integrations can be complex, as there are numerous systems from which data can be sourced in a variety of formats. While many systems can produce data files that can be easily consumed by Cloud Data Management, others produce files that require additional processing before they can be consumed. This is generally achieved through scripting. Since Cloud Data Management does not support scripting, any additional manipulation of a source system extract would need to be accomplished either by another application or process or through manual intervention - which is suboptimal and generally avoided by most organizations due to the added complexity or data risk. The scripting capabilities of FDMEE help eliminate this risk.

Custom reports: Another feature of on-premises FDMEE that is not available in Cloud Data Management is the ability to create custom reports. FDMEE and Cloud Data Management are among the few products in the Oracle EPM stack that come with a number of preconfigured reports. These reports provide access not only to the financial data processed through FDMEE / Data Management but also to critical process and audit information, including map modifications. While Oracle has done a remarkable job delivering reports with significant value-add, there are instances where the reports need enhancement, and other cases where a new report is needed to address a specific requirement. Unfortunately, Cloud Data Management does not provide the ability to change or author reports.


Misconceptions

Unfortunately, there is some level of misunderstanding or misinformation in the marketplace about the capabilities of FDMEE and Cloud Data Management. One of the biggest misconceptions concerns the drill-through-to-source-system capability. The common belief is that drill through to the source system is only available for data that was loaded through on-premises FDMEE using one of the source system adaptors for E-Business Suite (EBS), PeopleSoft, J.D. Edwards, or SAP.


This is incorrect on two fronts. First, the adaptors are used solely to extract data from the source system; the use of an adaptor is entirely separate from the capability to drill through to a source system. Drill through to source systems is available for any system which supports a web request for data, even those from which FDMEE or Cloud Data Management has not sourced data. Second, and to some degree in concert with the first point, drill through is absolutely available from Cloud Data Management to on-premises systems, even though Cloud Data Management does not support the use of source system adaptors to integrate data from on-premises systems. While there are certainly design and configuration requirements to support drill through from Cloud Data Management, it is available and supported by Oracle.


Cloud Adoption Considerations for Multi-Product EPM Organizations


The inability of Cloud Data Management to connect to on-premises systems also applies to Oracle EPM applications such as Hyperion Financial Management (HFM) or Hyperion Planning. Many organizations are 'walking' their EPM landscape to the Cloud; in other words, for organizations that have multiple Oracle EPM products currently in use, Cloud adoption is done in a stepped fashion. For example, an organization may transition its Hyperion Planning deployment to EPBCS before transitioning its HFM application to FCCS. Many organizations have been comfortable adopting PBCS while choosing to wait for FCCS to continue maturing.


Another reason organizations take a stepped approach to the Cloud is that transitioning both the financial close and planning processes at the same time creates risk. This risk can manifest itself in any of the three project levers:


Scope: The idea of scope risk is that migrating multiple processes at the same time can make for a long, complex project. While the mantra that the Cloud is faster and easier is often promoted, the reality is that a project in the Cloud can have just as many complexities as an on-premises deployment. Moreover, the prebuilt value-add content of the Cloud services can often mean an adjustment to the existing business processes used in the Oracle EPM applications.


                        Timeline: Moving multiple processes to the Cloud simultaneously definitely adds timeline risk. Moving a single process like Financial Planning allows an organization's resources to stay focused on a single project work stream. Often the key team members within an organization that are involved in an EPM project tend to overlap, at least in some fashion, even for processes as distinct as financial close and financial planning. Undertaking a project to move multiple processes to the Cloud concurrently requires these resources to spread their efforts across more than one work stream. This can lengthen the overall project timeline and add risk that project milestones are missed due to resource constraints.

                       Budget: Transitioning multiple processes to the Cloud concurrently can be more expensive than a multi-phased transition. One might argue that the costs should at least be equivalent or even less with a project that migrates multiple streams concurrently. The argument for lower overall cost would be attributable to the idea that certain project tasks such as testing and migration would benefit from concurrency and only needing to be performed once as opposed to across multiple projects. However, as noted above in relation to timeline risk, projects that migrate multiple business processes to the Cloud generally leverage more external resources (consultants) due to the nature of internal resource constraints.


As an outcome of these risks, organizations often find it advantageous to separate the move to the Cloud into multiple projects that do not run in parallel. This means that an organization will be operating in a hybrid mode - a mix of on-premises and Cloud applications. Often, there is a need to exchange data between the close solution and the financial planning solution. Cloud Data Management enables the exchange of data between Oracle EPM Cloud Services (e.g., FCCS to EPBCS); however, it does not provide native functionality to achieve this data exchange in a hybrid mode. By contrast, on-premises FDMEE natively provides the ability to exchange data between on-premises Oracle EPM systems and Oracle EPM Cloud Service products. While processes can certainly be created to allow the exchange of data between on-premises EPM systems and EPM Cloud Service applications, doing so requires custom development that often calls for in-depth knowledge of the application programming interface (API).


Decision Factors


For an organization facing the choice between on-premises FDMEE and Cloud Data Management, there are a variety of factors that contribute to the final decision.


Software Ownership


First, an organization needs to determine whether additional software needs to be procured. For organizations that currently have the Oracle EPM stack deployed on-premises, especially HFM, there is a high likelihood that FDM Classic (the prior generation of FDMEE) is already owned. Any organization that has licensed FDM Classic is entitled to FDMEE without any additional licensing expense. FDMEE licensing is fairly straightforward: the software is licensed on a per-user basis by specific functionality. Every organization that procures FDMEE needs to pay for the software itself; this portion of the licensing provides the software but does not include any of the functionality to connect to source systems using the Oracle preconfigured adaptors or to the Oracle EPM products - each of these is licensed separately. In addition to the core software license, there are three licensing options in the FDMEE functionality tier:


1. HFM Adaptor: Provides the logic needed to natively integrate with HFM on-premises deployments.


2. Adaptor Suite: Provides the logic for multiple integrations. First, it includes all of the Oracle source system preconfigured adaptors (EBS, PeopleSoft, J.D. Edwards E1). Second, it includes the ability to integrate with on-premises Hyperion Planning and Essbase applications. Finally, the adaptor suite provides the ability to integrate from an on-premises FDMEE deployment to the Oracle EPM Cloud Service products that leverage an Essbase data repository, i.e. PBCS, FCCS and PCMCS; TRCS is also on the roadmap. While ARCS does not leverage an Essbase data repository, on-premises FDMEE can integrate with it using scripting. The adaptor suite must be licensed to integrate with the Oracle EPM Cloud Services.


3. SAP Adaptor: Provides the ability to source data from SAP General Ledger (ECC or R/3) as well as SAP Business Warehouse (BW). This source adaptor is licensed separately because Oracle pays royalties to the partner (Bristlecone) that developed and maintains the adaptor. The need to procure FDMEE can certainly be restrictive, especially for organizations that are attempting to adopt a pure Cloud model.

                        There are numerous factors that should be weighed before abandoning the idea of procuring on-premises FDMEE:


Number of Cloud Services

Another important decision point in the choice between on-premises FDMEE and Cloud Data Management is the number of Oracle EPM Cloud Services to which data needs to be integrated. This concern is part of an organization's overall EPM integration strategy. For organizations that have, or anticipate having, multiple Cloud services (e.g., PCMCS and PBCS), on-premises FDMEE is worth exploring. Integrations in Cloud Data Management are specific to the instance to which Data Management will load data. For example, if Cloud Data Management within the PBCS instance is used to load data to PBCS, that Cloud Data Management application cannot be used to load data to the PCMCS instance; a completely separate Cloud Data Management application within the PCMCS instance would need to be created in order to load data to PCMCS.


                         An extra layer of complexity with Cloud Data Management is when data needs to move between instances; for example, loading actual consolidated results from FCCS to EPBCS. The Data Management application within the FCCS instance cannot push data to the EPBCS instance; data must be pulled from FCCS using the Cloud Data Management application within EPBCS. Conversely, if budget or forecast data needs to be loaded to FCCS, the pull of data from EPBCS would need to be initiated from the Cloud Data Management application within the FCCS instance. The need to exchange data between applications highlights a current shortcoming of Cloud Data Management. The user initiating the action must always evaluate from which instance the integration needs to be performed and log in appropriately. While this is not a show stopper, it is definitely a training issue and one that could be viewed as a suboptimal end user experience.


                        Lastly, it should be noted that the inability to share core metadata (including security) objects across Oracle EPM Cloud Service instances results in duplicative maintenance of those items across the multiple Cloud Data Management applications. On-premises FDMEE by contrast has the ability to connect to multiple Cloud instances as well as on-premises EPM environments from a single FDMEE application. This allows data to be loaded to and exchanged between applications using a single, common application. Since on-premises FDMEE is a single application, core application elements including security can be shared across different integrations.






Complexity of Integrations


                        Below are some questions that need to be considered when deciding between on-premises FDMEE and Cloud Data Management:


  • Will my integrations need to be fully automated, including email status alerts? If so, on-premises FDMEE will be preferred.


  • Are my data files in a consistent format - that is, can I import the raw data file into Excel with each field populated consistently in each data row? If so, Cloud Data Management will very likely be able to process the file. If not, FDMEE scripting may be required to properly process the data.


                       • Does the organization expect to need custom reports? This is hard to know with a high level of certainty. The majority (80%+) of organizations find that the out-of-the-box FDMEE/Data Management reports sufficiently address their needs. However, for large organizations with rigorous audit requirements and users with highly specific reporting requests across other EPM products, the ability to generate custom reports will likely be necessary, and on-premises FDMEE would be required.
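
                        To illustrate the kind of scripting flexibility referenced above, below is a minimal sketch of what an on-premises FDMEE event script (written in Jython) might look like when normalizing an inconsistently formatted source file before import. The fdmContext keys, file naming, and record prefixes shown are illustrative assumptions rather than a definitive implementation and would need to be adapted to the actual FDMEE configuration.

    # Minimal sketch of an FDMEE BefImport event script (Jython).
    # Assumption: the inbound file sometimes uses semicolons instead of commas
    # and contains header/trailer records that must be removed before import.
    import os

    # fdmContext is populated by FDMEE at runtime; the key names here are illustrative.
    inboxDir = fdmContext["INBOXDIR"]
    fileName = fdmContext["FILENAME"]
    srcPath = os.path.join(inboxDir, fileName)

    srcFile = open(srcPath, "r")
    try:
        rawLines = srcFile.readlines()
    finally:
        srcFile.close()

    cleanRows = []
    for line in rawLines:
        line = line.strip()
        # Skip blank lines and non-data records (banner and total rows).
        if not line or line.startswith("HEADER") or line.startswith("TRAILER"):
            continue
        # Normalize the delimiter so every row parses consistently.
        cleanRows.append(line.replace(";", ","))

    # Overwrite the file so the standard import format can process it.
    outFile = open(srcPath, "w")
    try:
        outFile.write("\n".join(cleanRows))
    finally:
        outFile.close()

    fdmAPI.logInfo("BefImport: normalized %d data rows in %s" % (len(cleanRows), fileName))

                        A comparable cleanup is not possible in Cloud Data Management, which is exactly why file consistency matters so much in that tool.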


True Cost of Ownership


                      To properly calculate the true cost of ownership of FDMEE or Cloud Data Management, we need to consider not only the financial expenditures but also the human capital expense associated with owning and maintaining a solution. Often Cloud Data Management is presented as having a lower total cost of ownership. From a financial expenditure perspective, this is very likely true.

                        First and foremost, Cloud Data Management does not introduce any incremental software cost. The Data Management module is included in the subscription price for the Oracle EPM Cloud Service instance. While the Cloud subscription is a recurring cost similar to annual maintenance, this cost will be paid regardless of the integration approach used, and as such, the initial software cost and ongoing annual maintenance cost of on-premises FDMEE must be considered. Second, an on-premises FDMEE deployment requires additional hardware. This hardware can be physical or virtual. It can be within the data center of the organization, or it can be installed on hardware owned by an Infrastructure as a Service (IaaS) provider such as Oracle. Depending on the hardware deployment method (physical/virtual), there is capital expense and depreciation expense or operating expense that must be incurred.



                        Finally, the Total Cost of Ownership analysis often includes a discussion of the cost of future advancements, which can be a somewhat flawed analysis. While the prior two components are certainly valid and factor into the upgrade discussion, the upgrade argument is somewhat overstated. First, Oracle has stated that there will be one more major release of the on-premises software (version 11.2) that will be supported through December 2030. Any version of on-premises FDMEE that would integrate with the Oracle EPM Cloud Services requires the most current version, which is 11.1.2.4. This means that there would only be one significant version upgrade of an on-premises FDMEE deployment. While patch set updates will be released at certain intervals, organizations can defer an upgrade or patch set update until a time that aligns with their needs and business processes.


                        Conversely, the Cloud is patched every month with new functionality being introduced. These updates do not always apply to the Data Management module, but when they do, a certain level of regression testing is warranted to ensure the patch did not impact any functionality currently in use. Moreover, a full version upgrade of the Cloud has yet to occur. When such a major upgrade does occur in the EPM Cloud Services, an organization will certainly need to perform the same level of testing as it would with an on-premises deployment. The core argument against the claim that Cloud upgrades cost less is that the Cloud needs to be tested more frequently. Each time the Cloud is patched or updated, there is time required to evaluate whether testing is needed and, if so, to actually execute the tests. An organization cannot defer the Cloud updates for more than one or two cycles and as such will consistently need to test. In contrast, an on-premises FDMEE application can be upgraded on a cycle defined by the organization; if no new functionality or bug fix is required, an upgrade can be deferred indefinitely. As a result of the more frequent and mandatory testing cycles, the true cost of upgrades in the Cloud is higher because the administrators are more frequently undertaking testing activities.


                        Software and hardware costs are an important part of the TCO analysis; however, the analysis should also include the human cost. As previously noted, Cloud Data Management lacks certain features - direct connections to on-premises systems, automated email alerts, and bi-directional data movement across hybrid or multiple Oracle EPM Cloud Service deployments. The lack of these features often means more manual maintenance and workflow executions for administrators and end users. This erodes productivity and certainly has a cost associated with it. Moreover, the confusion over which Cloud Data Management application to use for data movement between the EPM Cloud Services can frustrate end users and administrators. This suboptimal experience can impact the perception and perceived value of the EPM Cloud Services and therefore should also be considered in the TCO analysis.


Logistic ETL Criteria


                        Some organizations have a defined ETL standard, usually a tool such as Informatica or ODI. Prior to the arrival of the Cloud, these standards would sometimes not be applied to the Oracle EPM suite of products and, in particular, Hyperion Financial Management (HFM), since FDM Classic and later FDMEE were purpose-built to allow end users to integrate data into these systems. The on-demand need for data throughout a financial close cycle was often better served by FDM/FDMEE. With the introduction of the Oracle EPM Cloud Services, this decision must again be assessed, particularly as it relates to maintaining on-premises software. The necessities that drove the use of FDM Classic or FDMEE - drill back, on-demand execution, end user maintenance of transformation logic (mapping) - are certainly still valid; however, the features that Cloud Data Management lacks may be compensated for by an existing ETL tool.

                        Consider this example: an organization needs to integrate data from an on-premises Oracle PeopleSoft general ledger to PBCS. One option would be to utilize on-premises FDMEE and its native connections to PeopleSoft to source and load the data to PBCS. Using batch automation and scripting, the process could generate an email status report, and it could be initiated on demand or scheduled to run at set intervals. But if the organization has not procured on-premises FDMEE licenses or is looking to retire that licensing, then the ETL tool on which the organization is standardized may be used in concert with EPM Automate and Cloud Data Management to achieve an FDMEE-like integration.


A solution using the latter may look something like the following:


1. ETL tool executes a procedure to query data from PeopleSoft and generate a flat file.


2. ETL tool uploads the output to the Cloud Service instance using EPM Automate.


3. ETL tool initializes a Cloud Data Management workflow process using EPM Automate.


4. ETL tool executes a Cloud Data Management report to determine workflow status using EPM Automate.


5. ETL tool downloads the report output using EPM Automate.


6. ETL tool emails status report to required recipients.
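
                        To make the sequence above concrete, the following is a minimal Python sketch of the orchestration an ETL tool or scheduler could perform by shelling out to EPM Automate. The service URL, credentials, data rule name, periods, and SMTP details are illustrative placeholders, and the EPM Automate command parameters should be validated against the documentation for the version in use; in particular, the workflow status report steps (4-5) are only stubbed out here.

    # Minimal sketch: orchestrate a Cloud Data Management load via EPM Automate.
    # Assumptions: epmautomate is on the PATH; the service URL, user, password file,
    # data rule name, and periods below are placeholders for real configuration values.
    import subprocess
    import smtplib
    from email.mime.text import MIMEText

    def epm(args):
        """Run an EPM Automate command and fail fast on a non-zero exit code."""
        result = subprocess.run(["epmautomate"] + args, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError("epmautomate %s failed: %s" % (args[0], result.stderr))
        return result.stdout

    epm(["login", "svc_integration", "password_file.epw", "https://planning-example.oraclecloud.com"])
    epm(["uploadfile", "psft_actuals.txt", "inbox"])                                   # step 2
    epm(["rundatarule", "PSFT_ACTUALS", "Jan-18", "Jan-18", "REPLACE", "STORE_DATA",
         "psft_actuals.txt"])                                                          # step 3

    # Steps 4-5: EPM Automate only confirms the rule was *initiated*; a Data Management
    # workflow status report would be executed and downloaded here (details omitted,
    # since the exact report mechanism depends on the environment).
    statusText = "Data rule PSFT_ACTUALS initiated; verify workflow status in Data Management."

    epm(["logout"])

    # Step 6: email the status to the required recipients (SMTP host is a placeholder).
    msg = MIMEText(statusText)
    msg["Subject"] = "PBCS load status"
    msg["From"] = "etl@example.com"
    msg["To"] = "finance.systems@example.com"
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)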


                        The latter option is nearly on par with an on-premises deployment of FDMEE, with a couple of caveats. First, sourcing data from PeopleSoft is provided as an out-of-the-box capability of FDMEE, whereas in an ETL/Cloud Data Management approach the extract query would need to be defined and maintained by the organization. Second, the email status generated by an ETL tool would not be as detailed or robust as one produced by an on-premises FDMEE solution. There are multiple reasons for this.


                        First, EPM Automate has limited return codes. The execution of a Cloud Data Management workflow process returns a success or a failure like most command line executions. However, success in this instance does not indicate that the workflow process completed without error; it simply means that the initialization of the workflow process was successful. As such, in order to determine the status of the workflow execution, a Cloud Data Management report needs to be run. Second, the workflow status report will provide workflow status, but it will not include detailed error information in the event of a failure in the workflow process. This information is housed in the Cloud Data Management relational repository, but reports to provide that information are not currently available, and as previously noted, custom reports cannot be created for Cloud Data Management. In contrast, an on-premises FDMEE deployment provides not only the ability to create custom reports but also access to the relational repository in which the detailed error information is contained. FDMEE scripting or custom reports can be used to provide this more detailed and actionable information to the appropriate users.


Conclusion


                        Data integration within the Oracle EPM Cloud Services is an extremely important topic. We explored the on-premises and Cloud-based options, including the features and functionality of both as well as important considerations such as the total cost of ownership. The last remaining question to answer is: which tool should my organization use? Unfortunately, there is not a singular answer to that question.


                        On-premises FDMEE functions as a data hub within the EPM environment. Data can flow seamlessly between on-premises applications (ERPs, data warehouses, on-premises EPM systems) and the Oracle EPM Cloud Services. Integrations can be fully automated and centralized to a single touch point. This functionality, however, comes at an additional cost to an organization. Conversely, Cloud Data Management has no additional software licensing costs or infrastructure overhead. That said, there is a human capital cost of ownership associated with the reduced features and functionality of Cloud Data Management.


                       Organizations with straightforward integration requirements - no automation alerting, consistent file formats, no integration to on-premises systems, and loading into a single Cloud service - may find the 'built-in' nature of Cloud Data Management to be compelling. Organizations that have a multitude of integrations with varying degrees of complexity, a need to integrate from on-premises systems including Oracle EPM products, a need to orchestrate integrations through advanced automation, a need to integrate data into multiple Oracle EPM Cloud Services, or any combination of these factors, will certainly favor an on-premises deployment of FDMEE.


                        Any organization facing the decision of which integration toolset best addresses their needs should consider each of the factors highlighted above and weigh them against the financial and human costs of each potential solution.



First Timer for OAC - Essbase on Cloud... Let's Get Hands-On at Rocket Speed!


Well, the launch of a rocket needs to consistently stay on par with 4.9 miles a second - and yes, that is about 20 times the speed of sound. That speed is what defines the greatness of, and is the prime reason for, a successful space launch.

This is the start of a blog series from Infosys on the Essbase on Cloud component of the OAC (Oracle Analytics Cloud) Service [should I say "oak" or "o.a.c"? I rather like the former, as it gained popularity that way over the last 3 years]. The series will consist of 6 blogs in a staircase-step fashion, enabling on-premise Essbase consultants and developers, as well as newbies from the BI world, to gain control over OAC Essbase at rocket speed! We start with the landing page in blog 1 and end with using Essbase as a sandbox for multiple data sources in blog 6 (coming soon..)

OAC was released as the most comprehensive analytics offering in the cloud, bringing together

a)       business intelligence

b)      big data analytics

c)       embedded SaaS analytics

d)      and, lately, Essbase on Cloud, released in the second half of 2017.

OAC has since become even more packed with the introduction of Synopsis, Mobile HD, and the Day by Day app.

 

The first step... Set up a user in OAC - Essbase to move on.

As a prerequisite, you need admin credentials to log in and set up access for the other folks on your team!

OAC - Essbase login page: Once you click the link for Essbase Analytics Cloud, you will see the entry gate where you enter the admin user name and password. Doing that and clicking the Sign In button takes you to the Landing page.

[Screenshot: OAC - Essbase login page]

This is the first screen, called the Landing page. Since I have not created any application yet, you see an empty list in the left column. On the right is a set of neatly arranged cards providing great ease of access - everything you might need in one view.

[Screenshot: OAC - Essbase landing page]

 

 

 Turn your focus to the Security icon, which is clearly visible on the landing page, to accomplish our task. For those accustomed to the cards concept in EPM SaaS products, this will not be a surprise - just presented on a white background.

 

[Screenshot: Security card on the landing page]

 

Once you are there, you will see three buttons on the right - "Import", "Export", "Create" - very much self-explanatory as to what they are meant for! :) By now you know to click the "Create" button to create new users:

[Screenshot: Create user option]

 Provide the ID, name, email, password, and role for the new user:

[Screenshots: New user details and role assignment]

 

 The users currently available on OAC - Essbase will be listed here:

[Screenshot: List of users]

Additional users can be created with the same sequence of steps, or via the Import option for bulk creation of users; a sketch of preparing such an import file follows below.
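
As a quick illustration of preparing such a bulk import, the short Python sketch below builds a CSV of users that could be fed to the Import option. The column layout (ID, Name, Email, Password, Role) is an assumption based on the fields captured in the Create dialog; verify the exact format and the role names expected by your OAC - Essbase version before using it.

    # Minimal sketch: build a bulk-user file for the OAC - Essbase "Import" option.
    # Assumption: the import accepts a CSV with columns ID, Name, Email, Password, Role;
    # confirm the expected layout (and the exact role names) in your environment first.
    import csv

    team = [
        ("jdoe",   "Jane Doe",   "jdoe@example.com",   "Temp#Pass1", "Power User"),
        ("asmith", "Alan Smith", "asmith@example.com", "Temp#Pass2", "User"),
        ("rkumar", "Raj Kumar",  "rkumar@example.com", "Temp#Pass3", "User"),
    ]

    with open("oac_essbase_users.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["ID", "Name", "Email", "Password", "Role"])
        writer.writerows(team)

    print("Wrote %d users to oac_essbase_users.csv" % len(team))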

Please be cognizant that the above method is different from the users and roles managed via the IDCS Console. We will drill into the BI Analytics service instance specific application roles and authorized users in a sequel. The goal of that security section revolves around access and actions. A user can access only the data that is appropriate for him/her; this is achieved by applying access control in the form of catalog permissions in the Catalog and Home pages. Secondly, a user can perform only those actions that are appropriate to him/her; this is achieved by applying user rights in the form of privileges assigned on the Administration page.

 Now that my ID and access have been set up, let's look at step 2, application creation, in blog 2 (coming soon.. as soon as tomorrow!).

Thank you,

Prithiba Dhinakaran

 

Summary

The core intention of this blog is to describe the features of legacy mainframe systems and the emerging era of Big Data. It also demonstrates the process of offloading data to Hadoop.

 

Advantages of Mainframe:

Legacy mainframe systems used by many organizations are secure, scalable, and reliable machines capable of tackling huge workloads.

Mainframes handle even mission-critical applications with minimal resistance - for example, processing banking transactions, where security and reliability are equally important.

 

Drawbacks of Mainframe:

They require large hardware and software investments.

High processing costs.

 

Many organizations in the current era feel the urge to initiate a partial migration and then continue it across all aspects of their business applications to newer, reliable platforms.

This process helps organizations reduce the costs incurred and meet the evolving needs of the business.

 

 

Advantages of Big Data technology over Mainframe:

Cost Effective.

Scalable.

Fault Tolerant.

 

The cost of maintaining and processing on the mainframe can be further reduced by assimilating a layer of Hadoop, or by completely offloading the batch processes to Hadoop.

 

Similarities between mainframe and Hadoop are as below:

Distributed Frameworks

Handle massive volumes of data

Batch workload processing

Strong sorting capabilities

 

Business Benefits of adopting the Big Data Technologies along with Mainframe or over Mainframe

 

Using Hadoop ecosystem tools like Pig, Hive, or MapReduce, batch processing can easily be done.

 

Jobs from the mainframe systems can be taken and processed on Hadoop, and their output can be viewed back at the mainframe end, reducing million instructions per second (MIPS) costs.

(MIPS is a common way to measure mainframe computing cost: the more MIPS delivered for the money, the better the value.)
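
As a simple illustration of what such an offloaded batch job might look like, below is a minimal PySpark sketch that aggregates transaction records from a mainframe extract already landed in Hadoop. The file location, layout, and column names are illustrative assumptions, not a prescribed design.

    # Minimal sketch: a mainframe-style batch aggregation re-implemented in PySpark.
    # Assumptions: transactions have been extracted from the mainframe to HDFS as a
    # delimited file with columns account_id, txn_date, amount (names are illustrative).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("mainframe_batch_offload").getOrCreate()

    txns = (spark.read
            .option("header", "true")
            .option("inferSchema", "true")
            .csv("hdfs:///data/mainframe/transactions/txn_extract.csv"))

    # Equivalent of a sorted batch summary report: total and count per account.
    summary = (txns.groupBy("account_id")
                   .agg(F.sum("amount").alias("total_amount"),
                        F.count("*").alias("txn_count"))
                   .orderBy("account_id"))

    # Write the result back so it can be returned to (or viewed from) the mainframe end.
    summary.write.mode("overwrite").csv("hdfs:///data/mainframe/reports/account_summary")

    spark.stop()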

 

Organizations look at return on investment at every corner during an upgrade or migration. Migrating from mainframe to Hadoop meets this condition due to minimal infrastructure costs, lower batch processing costs, and flexible upgrades of applications.

 


Process of Offloading Data to Hadoop

 

The offloading approach is recommended in the following simple steps; a sketch of the first step follows the list.

First, create active archives and copies of limited mainframe datasets in the Hadoop Distributed File System (HDFS).

Second, migrate larger amounts of data from sources such as semi-structured data sources or relational databases.

The final iteration moves the expensive mainframe batch workloads to the more sophisticated Hadoop platform.
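
Below is a minimal Python sketch of the first step - landing mainframe extracts as an active archive in HDFS. The staging directory and HDFS target path are placeholders, and in practice a connector or vendor tool (for example Sqoop for relational sources) would often handle this movement.

    # Minimal sketch: land mainframe flat-file extracts in HDFS as an active archive.
    # Assumptions: extracts are already converted from EBCDIC to ASCII and placed in a
    # local staging directory; the paths shown are placeholders.
    import glob
    import subprocess

    STAGING_DIR = "/staging/mainframe_extracts"      # local landing zone (placeholder)
    HDFS_ARCHIVE = "/data/mainframe/active_archive"  # HDFS target (placeholder)

    # Make sure the target directory exists in HDFS.
    subprocess.run(["hdfs", "dfs", "-mkdir", "-p", HDFS_ARCHIVE], check=True)

    for extract in glob.glob(STAGING_DIR + "/*.txt"):
        # -f overwrites an existing copy so re-runs are idempotent.
        subprocess.run(["hdfs", "dfs", "-put", "-f", extract, HDFS_ARCHIVE], check=True)
        print("Archived %s to %s" % (extract, HDFS_ARCHIVE))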

 

 

 

March 14, 2018

Hyperion EPMA Way

For developers in the Hyperion community, choosing between the Classic and EPMA approaches has been a vital decision. This blog gives a brief insight into the EPMA approach.

Features -

  1. For HFM applications, EPMA provides a better interface for managing metadata.
  2. Dimensions from the Shared Library can be used in individual applications as Shared and/or Local dimensions. The same Chart of Accounts can be used in HFM and Hyperion Planning applications.



  3. Make bulk changes using the Grid Editor. Select the members for the rows.

    Select the properties for the columns.

    Make changes to member properties for the selected members; copy-paste also works for making multiple changes.




  4. Track changes made to Metadata


  5. Dimensions can be maintained in a single repository (the Shared Library); from the Shared Library, dimensions can be shared with applications in the Dimension Library, or local dimensions can be created.
  6. Metadata synchronization from the Dimension Library to the Shared Library and vice versa can be done on demand, for both Shared and Local dimensions.

    Benefits over Classic applications:

    1. For HFM, metadata can be managed in the Dimension Library, as compared to Classic where metadata has to be exported using the HFM client and then imported again after making changes.

    2. Managing metadata is easier. Make changes in the Shared Library and they are reflected in all target applications.

    3. In EPMA, the dependency on traditional ETL tools is reduced to a large extent. It provides a single interface where hierarchies can be built and loaded to various applications through flat files and interface tables.

    4. EPMA provides the facility to automate dimension updates and application maintenance; tasks such as Import Profiles and Application Deployment can be invoked through the EPMA Batch Client for an automated dimension update process (see the sketch after this list).

    5. EPMA integrates easily with databases, flat files, as well as DRM.
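
    To illustrate point 4 above, here is a minimal automation sketch that invokes the EPMA Batch Client with a command script to run an import profile. The batch client path, command-line switches, and script command syntax are illustrative assumptions only and must be verified against the EPMA Batch Client documentation for the version in use.

    # Minimal sketch: drive an automated EPMA dimension update via the EPMA Batch Client.
    # Assumptions: Windows host with the EPMA Batch Client installed; the executable path,
    # switches, and batch script syntax below are illustrative and must be verified.
    import os
    import subprocess
    import tempfile

    BATCH_CLIENT = r"E:\Oracle\Middleware\EPMSystem11R1\products\Foundation\BPMA\EPMABatchClient\epma-batch-client.bat"

    # Illustrative EPMA batch script: log in, run a flat-file import profile, log out.
    scriptBody = (
        "Login;\n"
        "Execute Import Parameters(importtype, profilename, filename, waitforcompletion) "
        "Values('FlatFile', 'DailyDimUpdate', 'accounts.ads', 'true');\n"
        "Logout;\n"
    )

    scriptFile = os.path.join(tempfile.gettempdir(), "epma_dim_update.txt")
    with open(scriptFile, "w") as f:
        f.write(scriptBody)

    # -C (command script), -R (results log), -U/-P (credentials) are assumed switches.
    result = subprocess.run(
        ["cmd", "/c", BATCH_CLIENT, "-C" + scriptFile, "-Rc:\\temp\\epma_results.log",
         "-Uadmin", "-Ppassword"],
        capture_output=True, text=True)

    if result.returncode != 0:
        raise RuntimeError("EPMA batch client failed:\n" + result.stderr)
    print("Dimension update submitted; review c:\\temp\\epma_results.log for details.")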

    Author -

    Jagjot Singh Chadha, ORCALL






