Infosys’ blog on industry solutions, trends, business process transformation and global implementation in Oracle.


February 9, 2018

Getting Started with PeopleSoft Campus Solutions

Overview of the PeopleSoft Application

·         PeopleSoft is a comprehensive, multi-module ERP system

·         Various domains, e.g. FSCM, HRMS, and Campus Solutions, are integrated in the product

·         Flexibility in decision-making is enhanced by the integration of data between the different domains in PeopleSoft

·         Reporting can be summarized or detailed, depending on the requirement, from a single domain or a consolidation of two or more domains

·         A robust audit trail enables management to track user login transactions, creating accountability across the organization

·         Reports and data can be synchronized for different accounting periods

Campus Solutions (CS 9.0, 9.1, 9.2) Business Processes:

·         Student Management:

 

o   Recruit Prospects

o   Admit Students

o   Enroll in Classes

o   Advise & Transition Students

o   Manage Financial Services

o   Manage campus services

·         Academic Management

o   Schedule courses & Resources

o   Plan Academic Programs

o   Teach courses

·         Institutional Advancement

·         Campus Community

·         Personal Information Management

·         Maintaining Biographic/Demographic Data

·         Maintaining Health Data

·         Maintaining Identification Data

·         Maintaining Participation Data

·         Organization Data Management

·         Maintaining Organization Data

·         Maintaining Event Data

·         Maintaining Committees

·         The 3Cs

o   Communications

o   Checklists

o   Comments


February 7, 2018

Data Visualization in OBIEE 12c

Backdrop

Data visualization is the presentation of data in a pictorial or graphical format. It brings out aspects of a complex problem that could easily be overlooked and makes them easier to understand through graphs, patterns, and design. It is useful in converting real-time data into rich reports and visualizations. Effective data visualization should be informative, efficient, and appealing, and in some cases interactive and predictive.


January 31, 2018

Configuring Task Flow dependency and Teradata ODBC connection in Informatica cloud

In this blog, I am going to cover configuring a dependency between two task flows and configuring an ODBC connection for a Teradata relational database in Informatica Cloud.

This blog will help you run one task flow only after the task flow it depends on has completed successfully, and create an ODBC connection to the Teradata relational database that can be used in Mapping/Data Synchronization/Data Replication jobs to load or retrieve data.

Configuring Task Flow dependency

In real time, we would require the task flows to run in a certain order; in other words, we should trigger a task flow (taskflow2) only after the completion of another task flow (taskflow1). This can be achieved with the runajobcli package, which helps to trigger a task flow from the command prompt (an API call).

Steps to configure the package and trigger the task flow

 

Step 1: Ask Informatica cloud support to activate the Runajob package on the secure agent or ORG if it is not already available.

Step 2: Configure restenv.properties file present in below path

Path: C:\Program Files\Informatica Cloud Secure Agent\downloads\package-runAJobCli.2\package\

 

Step 3: Give the user under which the Secure Agent runs full control access to the Informatica Cloud installation folder.

Step 4: Try to call one dummy task from the command prompt by going to the runajob package path.

Ex: C:\Program Files\Informatica Cloud Secure Agent\downloads\package-runAJobCli.2\package\

    cli.bat runAJobCli -n W_ETL_RUN_D_AMDOCS_End_MCT -t MTT

Step 5: If you get a 'java is not recognized as an internal or external command' error, add the Java path to the PATH environment variable ('My Computer' -> Advanced system settings -> Environment Variables).

Ex: Add C:\Program Files\Informatica Cloud Secure Agent\jre\bin (on the C or D drive, depending on where the agent is installed)

Step 6: Then try to call the same command from the post-processing command of the MCT. If you still get the Java error in the log, add the full Java path to the CLI.bat file.

Example CLI.bat:

@echo off

pushd %~dp0

"Informatica Cloud Secure Agent\jre\bin\java.exe" -cp .;runAJobCli.jar;lib\* com.informatica.saas.utilities.plugins.RunAJobPlugin %*

popd

@echo on

or

@echo off

pushd %~dp0

"D:\Apps\Informatica Cloud Secure Agent\jre\bin\java.exe" -cp .;runAJob
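If you would rather orchestrate the dependency from a wrapper script (for example, one invoked by a scheduler) instead of the MCT post-processing command, the same cli.bat call can be chained from code. The sketch below is a minimal Python example, assuming the runajob package is installed at the path shown earlier; the job names and -t values are placeholders taken from the Step 4 example and must match your own tasks.

import subprocess

# Path to the runAJobCli batch file on the Secure Agent machine (adjust to your installation).
AGENT_CLI = r"C:\Program Files\Informatica Cloud Secure Agent\downloads\package-runAJobCli.2\package\cli.bat"

def run_job(name, job_type):
    """Trigger a job exactly as 'cli.bat runAJobCli -n <name> -t <type>' and return its exit code."""
    completed = subprocess.run([AGENT_CLI, "runAJobCli", "-n", name, "-t", job_type])
    return completed.returncode

# Run the dependent job only when the first one completes successfully (exit code 0).
if run_job("W_ETL_RUN_D_AMDOCS_End_MCT", "MTT") == 0:
    run_job("DEPENDENT_TASK", "MTT")
else:
    raise SystemExit("First job failed; the dependent job was not triggered")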

 

Configuring Teradata ODBC Connection

To establish a Teradata ODBC connection in Informatica Cloud, you require the below utilities, which can be downloaded from the following URL:

https://downloads.teradata.com/download/tools/teradata-tools-and-utilities-windows-installation-package


Once you have downloaded them, follow the steps below.

Step 1: Run the installer (.exe) from the Downloaded directory and Choose the Language and click on 'Next'.

Step 2: Installation Wizard will be opened. Click on 'Next'.

 

Step 3: Accept the License Agreement by clicking 'I accept the terms in the license agreement' and click on 'Next'.

Step 4: After selecting the destination folder, click on 'Next'.

Step 5: Check 'ODBC Driver for Teradata *' and click on 'Install'.

Step 6: Click on 'Finish'.

Step 7: Run one more installed application file from the following path:

EX: D:/TeradataODBC_windows_indep.15.10.17.00/TeradataODBC/x64/SharedICU-64bit

Step 8: Click on 'Next'.

Step 9: Accept the License Agreement by clicking 'I accept the terms in the license agreement' and click on 'Next'.

Step 10: Click on 'Finish'.

Step 11: Run one more installed application file from the following path

EX: D:/TeradataODBC_windows_indep.15.10.17.00/TeradataODBC/x64\TeraGSS-64bit

 

Click on 'Next'.

Step 12: Select 'Modify' and click on 'Next'.

 

 

Step 13: Select the required feature and click on 'Next'.

Step 14: Click on 'Install' to start installation.

Step 15: Under System DSN, set up the ODBC driver for the Teradata database by providing the user name and password, and click on 'OK'.

Step 16: Test the ODBC connection from the command prompt (or any ODBC client).

Step 17: Using the ODBC connection you've created, you can create a 'New connection' in Informatica Cloud.

Step 18: If the test connection fails, restart the Secure Agent and try again.
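For Step 16, any ODBC client can be used to confirm that the DSN works before wiring it into Informatica Cloud. As one option, the short Python sketch below uses the pyodbc package; the DSN name and credentials are placeholders for whatever you configured under System DSN in Step 15.

import pyodbc

# Placeholder DSN name and credentials - use the values configured in Step 15.
conn = pyodbc.connect("DSN=TeradataDSN;UID=td_user;PWD=td_password", autocommit=True)
cursor = conn.cursor()

# DBC.DBCInfo is a small Teradata system view; any lightweight query will do here.
for row in cursor.execute("SELECT InfoKey, InfoData FROM DBC.DBCInfo"):
    print(row.InfoKey, row.InfoData)

conn.close()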


January 30, 2018

Data Integration in Einstein Analytics

This blog explains various ways to integrate and load data into Einstein Analytics.

Data integration is one of the key aspects of any BI tool, and Einstein Analytics does exceedingly well in this department with seamless integration, as it doesn't require the data to be in a particular format (such as a star or snowflake schema).

Unlike many other BI tools, Einstein Analytics stores the data itself in the cloud, not just the metadata. Hence, we need to refresh the data from time to time to ensure we are working with the latest data. Storing the data along with an inverted index (the way data is stored in Einstein Analytics) boosts the performance of the tool.

The options available to load data into Einstein Analytics are:

  1. Salesforce: Salesforce objects can be loaded directly into Einstein Analytics using the dataset builder or a dataflow.
  2. CSV: CSV files can be uploaded directly into Einstein Analytics (see the sketch after this list for a programmatic upload).
  3. Informatica Rev: Data from external data sources can be loaded into Einstein Analytics using Informatica Rev.
  4. ETL Connector: ETL tools such as Informatica Cloud, MuleSoft, Boomi, SnapLogic, etc. have connectors for Einstein Analytics that you can use to load data into Einstein Analytics.
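For option 2, a CSV can also be pushed programmatically through the Analytics External Data API rather than the UI. The rough sketch below uses the simple-salesforce Python package; the object and field names follow the External Data API (InsightsExternalData / InsightsExternalDataPart), while the credentials, file name and dataset alias are placeholders, so treat it as a starting point rather than a finished loader.

import base64
from simple_salesforce import Salesforce

# Placeholder credentials - use your org's login details.
sf = Salesforce(username="user@example.com", password="password", security_token="token")

# 1. Create the header record that describes the upload.
header = sf.InsightsExternalData.create({
    "EdgemartAlias": "Sales_Data",   # dataset API name (created if it does not exist)
    "Format": "Csv",
    "Operation": "Overwrite",
    "Action": "None",
})

# 2. Attach the CSV content as a base64-encoded data part.
with open("sales.csv", "rb") as f:
    sf.InsightsExternalDataPart.create({
        "InsightsExternalDataId": header["id"],
        "PartNumber": 1,
        "DataFile": base64.b64encode(f.read()).decode("ascii"),
    })

# 3. Flip the Action flag to Process so the platform builds the dataset.
sf.InsightsExternalData.update(header["id"], {"Action": "Process"})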

Screenshot taken from Einstein Org


Data integration can be done at:

Dataflow - The dataflow view in Einstein Analytics has ETL transformations such as Extract, Augment (join), Append (union), Slice, etc. One can leverage this option to integrate data coming from different data sources. The dataflow is also used for data refresh in Einstein Analytics.

Using ETL Connectors - For example, with Informatica Cloud, the data can be integrated in Informatica Cloud and the integrated data can then be loaded into Einstein Analytics using the "Salesforce Analytics" connector.

Apart from this, you can also establish a connection between datasets at the dashboard level using bindings, the Connect Data Sources option and SAQL:
  • Binding: A change in one step/widget triggers a change in another step/widget in a dashboard; this is achieved using bindings when the steps/widgets are built on two different datasets.
  • Connect Data Sources: This option is available at the dashboard level and lets you connect two columns from two different datasets.
  • SAQL: You can write a SAQL query that fetches data from two different datasets at the dashboard level.

January 29, 2018

Excel to App - a journey beckoning finance world to the future of technology? A view from the Oracle Analytics Cloud Consultants' Lenses

Around last Thanksgiving there were two interesting articles in The Wall Street Journal, both concerning the finance world. One described the diktat given out by CFOs across companies to get rid of spreadsheet-based finance analysis and reporting within their finance organizations; the other carried the rebuttal from finance analysts and planners that Microsoft Excel continues to be their one powerful tool for analyzing and reporting financial numbers.

For an Oracle BI-EPM consultant like me, implementing Essbase, Planning and other technology solutions predominantly for my clients' finance organizations, both articles throw an interesting perspective on the transition that is imminent: from conservative macro-based analysis to futuristic app-based analysis. I wanted to write about how organizations can draw a balancing line with Oracle's latest offering in the cloud, so that none of the goodies of spreadsheets are lost while the organization still plans for a nimble, efficient solution that brings value to the CFO's table.

In the first article, "Stop Using Excel, Finance Chiefs Tell Staff", Adobe Inc.'s finance chief Mark Garrett says that it takes days for his team to put together the job fulfilment report across the organization and to get the numbers to analyze the impact of salary on budgeting. Garrett wants his finance staff to focus more on the analysis than on spending time getting that data from disparate systems into spreadsheets.

In direct opposition to this view, the second article carried the view of finance professionals who say "You'll Have to Pry Excel Out of Their Cold, Dead Hands". This shows how comfortable most finance professionals are maintaining and analyzing huge amounts of data, and analyzing it for insights, in cumbersome ways without utilizing the features that modern technology solutions provide. The article notes that finance analysts spend a good amount of time developing a point of view (PoV) from the spreadsheet, carry a sense of pride around it, and take ownership of the analysis.

There is technology supporting Spreadsheet

In FP&A systems specifically, Oracle Essbase has gained a large user base due to its ability to combine the best of both worlds: executing complex calculations using a hierarchical data model while showcasing specific Points of View (PoVs) using its Excel add-in, SmartView. This tool puts so much flexibility in the hands of end users that they do not have to worry about data or its integrity, and it gives them ample flexibility to perform the business analyses their organizations need.

Excel is my favorite 

Despite the fact that a small formula error could lead to large differences in the values reported through spreadsheets, Excel remains the favorite choice of analysts. One reason behind this strong conviction, I think, is that all the business logic and all the calculations are visible to the analysts right in front of them, so they get no surprises where logic was applied but the value came out wrong. In a system, by contrast, there is a set standard for data entry, business logic configuration, and expected output: if junk is fed to the system, junk comes out. This encapsulation of back-end data processing becomes a bigger hindrance to financial analysts embracing such a system.

Oracle Analytics Cloud (OAC)

Essbase in the cloud is perfectly suited to give the best of the two worlds in terms of calculation performance and flexibility for end users, while letting the actual end users and consultants define business calculations and dimensionality. It puts power in the hands of the right people. It does not stop there: OAC aligns perfectly with the technological future state that top-level executives across organizations are looking forward to, a tech-savvy state where finance data is available, along with insights and commentary, at the "Hi" of a voice command and seamlessly across different types of corporate and personal devices for financial analysts of the millennial generation.

Transformation leads to Transition

Oracle Analytics Cloud (OAC), in my opinion, will emerge as a fusion arena that brings the solid features of a PaaS offering together with SaaS features in the FP&A domain. In relation to the existing cloud applications for financial planning and consolidation (EPBCS and FCCS), OAC comes with a good packaging option and the PaaS advantages of performance and scalability, serving not only the finance community but also the corporate reporting needs of the wider organization through its DVCS, Data Lake, and BI packages, combined with the power-punching, number-crunching Essbase.

What's ahead?

As the transformation progresses from on-premise to cloud, the demand for app-based, real-time insights with commentary can be met through Oracle mobile applications such as Synopsis, Oracle Mobile HD, and the uber-cool Oracle Day-by-Day, which are sure to transform the consumption of data from spreadsheets to insights on-the-go through futuristic finance apps.

January 16, 2018

Automating file based data loads for Hyperion Cloud applications

Automation is the need of the hour! Time has come when everyone wants minimal human intervention in any process or task especially when it is a periodic activity (daily, weekly, fortnightly, monthly etc) and data analytics is no different. Clients look for maximum automation especially in the data preparation processes so that they spend more time on planning and strategizing.

Data management is part of every Hyperion cloud application viz. PBCS, EPBCS, FCCS, PCMCS, ARCS, TRCS. In the simplest terms, the role of Data Management (referred to as DM henceforth) is to bring together data from various ERP source systems, transform it and load it to the cloud applications. It is possible to extract data from any on-premise or cloud source system either through direct integration or through flat files. Oracle provides direct integration of Hyperion cloud applications with Oracle Financials Cloud, Budgetary Control, HCM cloud, NetSuite.

To extract data from other source systems or third-party applications, however, a flat or comma-delimited file is needed, which is used as the source for DM. This makes file-based integrations a manual process: the source system has to extract and provide data in a file, and this file is then imported into DM. This manual step can be eliminated by using some scripting techniques and APIs. You can automate any data load process as long as you are able to access and extract data from the source system using a scripting or programming language.

Below is an example of one such automation, where the source system is a third-party on-premise application with an Oracle database as the backend, and the target application is FCCS. The solution is implemented in three basic steps:

1.       Call a Jython script to extract data from third party source system into a file.

2.       Upload the file onto the DM inbox folder

3.       Execute the data load rule to load data to FCCS.

All this is bundled into a batch script which is executed from the command prompt or can be scheduled to execute on the desired date and time.

1.       Jython script to extract data from source system :

I have implemented this piece of code using Jython. It can be implemented using any other scripting or programming language as well. 

This is pseudo code and is to be used as a guideline only.

import java.sql as sql

import sys

# Period and Year passed as arguments to the script

VARPERIOD = sys.argv[1]

sys.stdout.write(VARPERIOD)

VARYEAR = sys.argv[2]

sys.stdout.write(VARYEAR)

# Connect to source system database

sourceConn = sql.DriverManager.getConnection("<<jdbc url, username, password>>")

selectStmt = "SELECT ACCOUNT, ENTITY, ICP, DEPT, AMOUNT FROM <<source table>> WHERE PERIOD = '" + VARPERIOD + "' and YEAR =" + VARYEAR

stmt = sourceConn.createStatement()

stmtRS = stmt.executeQuery(selectStmt)

myCount = 0

outfilename = "OUTPUTFILE.TXT"

outfile = open(outfilename, "w")

outfile.write("ACCOUNT;ENTITY;ICP;DEPT;AMOUNT" + chr(10))

while(stmtRS.next()):

  myCount = myCount + 1

  ACCOUNT = stmtRS.getString("ACCOUNT")

  ENTITY = stmtRS.getString("ENTITY")

  ICP = stmtRS.getString("ICP")

  DEPT = stmtRS.getString("DEPT")

  AMOUNT = stmtRS.getBigDecimal("AMOUNT")

  mystr = str(ACCOUNT) + ";" + str(ENTITY) + ";" + str(ICP) + ";" + str(DEPT) + ";" + str(AMOUNT) + chr(10)

  outfile.write(mystr)

outfile.close()

stmtRS.close()

stmt.close()

 

2.       Upload the file OUTPUTFILE.TXT onto the DM inbox folder and execute the data load rule:

These steps are achieved by an EPM Automate script, which is essentially a Windows batch script:

@echo off

SET /p period=Enter Period in MMM format:

IF "%period%"=="" goto :ERROR_P

SET /p year=Enter Year in YYYY format:

IF "%year%"=="" goto :ERROR_P

echo Executing script to extract data from source system

cd /d c:\jythonscripts

jython -J-cp ojdbc6.jar SOURCE_EXTRACT.py %period% %year%

IF %ERRORLEVEL% NEQ 0 goto :ERROR

echo Source extract complete

echo Logon to EPM Cloud

call epmautomate login <<username>> <<password>> <<url>> <<domain>>

IF %ERRORLEVEL% NEQ 0 goto :ERROR

echo Upload file to inbox folder

call epmautomate uploadfile c:\jythonscripts\OUTPUTFILE.TXT inbox/Source_Files

IF %ERRORLEVEL% NEQ 0 goto :ERROR

echo Executing load rule

call epmautomate rundatarule DLR_LOAD_FCCS %period%-%year% %period%-%year% REPLACE MERGE Source_Files/OUTPUTFILE.TXT

IF %ERRORLEVEL% NEQ 0 goto :ERROR

:EOF

echo Scheduled Task Completed successfully

call epmautomate logout

pause

exit

:ERROR

echo Failed with error %ERRORLEVEL%

call epmautomate logout

pause

exit

:ERROR_P

echo Period and/or Year cannot be blank!

pause

exit

 

Points to note:

1.       This automation process is for EPM cloud applications where on-premise FDMEE is not available and Data Management has to be used.

2.       This is not the only way to achieve automation; it is the way I implemented it for my project requirements. There are other ways of automating as well, using a combination of the source system's APIs (if available), the EPM REST API and batch scripts (a rough REST sketch follows these notes).

3.       This approach requires EPM Automate, Jython and the batch files to be hosted on a local workstation or server, which will be on-premise even though the target Hyperion applications are in the cloud.

4.       Further to this, email notifications can also be configured to notify stakeholders after a particular data load completes successfully or errors out. This can be achieved using email programs such as BLAT, POWERSHELL, MAPISEND, MAILSEND, etc. These programs are available for free download and can be configured in a Windows batch file and called after the data load step is complete (see the second sketch below for one alternative).
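To illustrate point 2, the same data load rule that the batch script runs through EPM Automate can also be triggered through the Data Management REST resource. The Python sketch below is only indicative: the /aif/rest/V1/jobs endpoint and the payload keys are written from memory of the EPM Cloud REST documentation, so verify them against the REST API reference for your release; the URL, credentials and rule name are placeholders.

import requests

BASE_URL = "https://your-instance.oraclecloud.com"   # placeholder EPM Cloud URL

payload = {
    "jobType": "DATARULE",
    "jobName": "DLR_LOAD_FCCS",
    "startPeriod": "Jan-2018",
    "endPeriod": "Jan-2018",
    "importMode": "REPLACE",
    "exportMode": "MERGE",
    "fileName": "inbox/Source_Files/OUTPUTFILE.TXT",
}

# Basic authentication with the identity domain and user name, as used by EPM Automate.
resp = requests.post(BASE_URL + "/aif/rest/V1/jobs",
                     json=payload,
                     auth=("identitydomain.username", "password"))
resp.raise_for_status()
print(resp.json())   # the response contains a job id/status that can be polled for completion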
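For the email notifications in point 4, the Windows utilities listed above work well; as an alternative, a short script using Python's standard smtplib and email modules can be called from the batch file after the load step. The SMTP host, port and addresses below are placeholders.

import smtplib
import sys
from email.message import EmailMessage

# Load status passed in from the batch file, e.g. SUCCESS or FAILED.
status = sys.argv[1] if len(sys.argv) > 1 else "UNKNOWN"

msg = EmailMessage()
msg["Subject"] = "FCCS data load " + status
msg["From"] = "epm-automation@example.com"
msg["To"] = "stakeholders@example.com"
msg.set_content("The scheduled FCCS data load finished with status: " + status + ".")

# Placeholder SMTP relay; swap in your mail server details.
with smtplib.SMTP("smtp.example.com", 25) as server:
    server.send_message(msg)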

Write-Back of Budget Data from PBCS to Fusion-GL through FDMEE

Introduction

PBCS is the latest planning release from Oracle and is a user-friendly tool that requires relatively little effort from a developer's point of view. It is so developer-friendly because there is no installation or maintenance to be performed; only the application and the artifacts required from a functional perspective need to be developed.

For PBCS, there are a few ways to load data into the application:

1)   Loading directly through the Planning data load utility, by creating a load file in the Planning format and loading it into the application.

2)   Through the EPM Automate utility: the load file can be uploaded to the Planning inbox and, from there, the load into the Planning application can be automated.

3)   Through Data Management (FDMEE): we can integrate the source application with the target application by creating mappings between the two systems.

After loading source data into the PBCS application and preparing budgets in the Hyperion Planning application, we can write the budget data back to the source system through Data Management (FDMEE).

 



January 1, 2018

Enrich Oracle Database Data with Data from Distributed File System

In today's era of advanced analytics, data from a relational database alone is not enough to get proper insight into a business. There is a growing need to integrate data from various sources and analyze it to produce more meaningful and innovative ideas for better decisions.

Below are the steps to enrich Oracle Database data with data from a distributed file system (e.g. HDFS) accessed through Hadoop jobs, for integrated reporting:

1. A table function launches a DBMS_SCHEDULER job that manages the Hadoop MapReduce job.

2. This enables Hadoop's results to be redirected into an Oracle query for blending.

3. A bash script is launched asynchronously by DBMS_SCHEDULER, which in turn registers the Hadoop job.

4. The Oracle table function and the Hadoop mapper collaborate using Oracle's Advanced Queuing (AQ) feature:

• A common queue is used to en-queue and de-queue data by the Hadoop mapper and the Oracle table function respectively.

• Oracle Advanced Queuing supplies the data to the Oracle Database for consumption by various BI tools.

5. At any given time, one of the slave nodes submits the external job.

 

Figure: Enrich Oracle Data With HDFS Data


The following points provide more insight into the integration and the steps/procedures to optimize performance while fetching data from the Hadoop process:


  • Create a table function that acts as the query coordinator (QC) as well as playing a processing role.
  • The table function's DBMS_SCHEDULER invocation starts an asynchronous job that runs a bash script; the script in turn starts a synchronous launcher in the Hadoop cluster for the mapper processes.
  • The mapper processes process the data and write it into a queue. To improve performance, rather than each mapper process writing sequentially, we can choose to write in batch mode.
  • In the next step, the de-queue process, the Oracle database supports parallel invocation of the table function to get data from the Oracle Advanced Queue. The table function leverages both the Oracle data and the data from the queue, and integrates them into a single dataset for the user.
  • Once the Hadoop mapper processes are started, the job monitor process keeps an eye on the launcher bash script. The bash script finishes execution once the mapper processes have finished processing.
  • The job monitor also keeps track of the data elements remaining in the queue, and the table function keeps on processing the data.

  Pros of using an Oracle table function and the Advanced Queuing facility:

  • The Oracle Advanced Queuing feature provides load-balancing capability by enabling parallel execution of the table function while the Hadoop mapper jobs also run in parallel, with a different degree of parallelism and without any intervention from Oracle's query coordinator.
  • This method eliminates the need for external data storage while accessing Hadoop's results.
  • Data from both the Oracle table and the Oracle queue (Hadoop's results) are used by the table function to produce a combined result for the user.

  Cons of using an Oracle table function and the Advanced Queuing facility:

  • This approach is suitable only when there is a need to synchronize the timing of the data to be processed between the Hadoop job and the Oracle query.

  Recommendations:

  1.  For better queue performance, insert data in batch mode from the individual mapper jobs.
  2.  The delivery mechanism can be changed to a pipelined table function and a relational table for optimized performance.

 

 

December 29, 2017

Comparative Study Between Oracle Big Data Cloud Service and Compute Engine

 

Comparative study between Oracle BDCS and Oracle Big Data Cloud Compute Engine.

 

1.             Oracle Big Data Cloud Service: Gives us access to the resources of a preinstalled Oracle Big Data environment. It also comes with a full installation of the Cloudera Distribution Including Apache Hadoop (CDH) and Apache Spark, and can be used to analyze data generated from social media feeds, e-mail, smart meters, etc.

OBDCS contains:

·         A 3-60 node cluster; 3 is the minimum number of cluster nodes (OCPUs) to start with, and the processing power and secondary storage of the cluster can be extended by adding cluster compute nodes ("bursting").

·         Linux Operating System Provided by Oracle

·         Cloudera Distribution with Apache Hadoop (CDH):

-          File System: HDFS to store different types of files

-          MapReduce Engine (YARN is default for resource management)

-          Administrative framework: Cloudera Manager is the default

-          Apache Projects e.g. Zookeeper, Oozie, Pig, Hive, Ambari

-          Cloudera applications: Cloudera Enterprise Data Hub Edition, Impala, Search and Navigator

 

·         Built-in Utilities for managing data and resource

·         Oracle Big Data Spatial and Graph

·         Oracle Big Data Connectors:

-          Oracle SQL Connector for HDFS

-          Oracle Loader for Hadoop

-          Oracle XQuery for Big Data

-          ORE Advanced Analytics for Big Data

-          ODI Enterprise Edition

 

Typical Workflow of OBDCS: Purchase a subscription -> Create and manage users and their roles -> Create a service instance -> Create an SSH key pair -> Create a cluster -> Control network access to services -> Access and work with your cluster -> Add permanent nodes to a cluster -> Add temporary compute nodes to a cluster (bursting) -> Patch a cluster -> Manage storage providers and copy data

odiff (Oracle Distributed Diff) is an innovative Oracle-developed tool for comparing huge, sparsely stored data sets using a Spark application; it is compatible with CDH 5.7.x. The maximum file/directory size it can compare is 2 GB.

 

2.       


December 27, 2017

Hyperion Essbase Metadata Comparison Tool


In my earlier blog, I explained how the Essbase Java API can be used for alternate hierarchy sync-up. In this blog, I am going to explain how the Essbase API can be used to find inactive members in the Essbase outline. I followed a procedure similar to the alternate hierarchy sync-up tool to develop this Essbase metadata comparison tool.

During the Essbase implementation, we received a lot of new members from the SAP source system for testing and loaded them into the Essbase outline. Later, those test members were not used for any reporting or validation, so the business admin requested us to find those test members and remove them from the outline. It is very difficult to find the test members manually, so I developed a tool to compare the Essbase outline members against the metadata files received from the source system and check whether there are any inactive members in the outline.

Hyperion Essbase API:

I used the Essbase Java API to develop this Essbase metadata comparison tool. The Essbase API has efficient features that can be used for Hyperion administration tasks such as outline extraction, user provisioning, etc. I developed Java programs using the Essbase API to pull the Essbase outline members for a specific dimension, so that the extracted level 0 members could be compared against the source files received from the SAP system.

Metadata Comparison:

 

The following flow chart describes the flow of the Essbase metadata comparison against the source metadata files.

 

Figure 1

A) Business Requirements:

 

The SAP source system was providing metadata source files to the business, and these source files were loaded into the Essbase outline to perform system testing and mock testing as per the business requirements. The SAP development team created many dummy members to test the metadata extraction functionality in the SAP source system, and these dummy members were loaded into the Essbase outline through the (automated) dimension build process.

 

Later, the business admin requested us to find the dummy or inactive members and remove them from the Essbase outline to increase the performance of the application. Initially this comparison activity was done manually by the developers using Excel VLOOKUP, but this method was time consuming and there was a possibility of human error in the end results. So I developed a comparison tool using the Java API, which pulls the outline members from the Essbase application and compares them against the metadata source files. During the testing phase, this tool considerably reduced the manual testing effort.

B) Comparison Tool


The aim is to extract the leaf-level members from the Essbase outline and perform a VLOOKUP against the source metadata files to produce the members missing from the source files; those missing members may exist as inactive members in the Essbase outline.

This metadata comparison tool has the following sub modules.

1.    Essbase OLAP Server Sign-On

2.    Extracting Level 0 members from Essbase outline

3.    VLookup Operation

4.    Check if there are any data for Missing Members

5.    Inactive members report

  

          The following are the Essbase API packages which have been used to develop the comparison tool:

com.essbase.api.base.*;

com.essbase.api.datasource.*;

com.essbase.api.domain.IEssDomain;

com.essbase.api.metadata.*;

com.essbase.api.session.*;

com.essbase.api.dataquery.*;

com.essbase.api.domain.*;

         

Figure 2View image

C)  Pseudo Code

 

I have given pseudo code which is used for building this comparison tool.

1.  Essbase OLAP Server Sign-On

 

To sign on to the Essbase application, you have to pass the login credentials, the name of the OLAP server and the URL of the provider services as parameters.

 

acceptArgs(args);
ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
/** Provide username, password, and provider services URL **/
IEssDomain dom = ess.signOn(ess_userName, ess_password, false, null, s_provider);
olapSvr = (IEssOlapServer) dom.getOlapServer(ess_olapSvrName); // Give the OLAP server name
olapSvr.connect(); // Sign on to the Essbase server.
IEssCube cube = olapSvr.getApplication("TestApp").getCube("TestDB");

 

2.  Level 0 Members Extraction:

 

The source system provides the required metadata for business reporting. You need to extract all the level 0 members of the specific dimension from the Essbase outline so that they can be compared with the SAP source file.

The method listAccountMembers() pulls all the leaf-level members from the Account dimension and writes the output to an Excel or CSV file. If you need to pull specific members from any base dimension, you should update the arguments of the executeQuery() method. In the code below, executeQuery() extracts the level 0 members of the Account dimension.

static void listAccountMembers(IEssCube cube) throws EssException
{
    FileWriter fileWriter = null; // File object used to write the extracted members
    try {
        IEssMemberSelection mbrSel = cube.openMemberSelection("Sample"); // Member selection on the cube

        // Accounts - standard dimension, QUERY_TYPE_BOTTOMLEVEL - level 0 members only
        mbrSel.executeQuery("Accounts", IEssMemberSelection.QUERY_TYPE_BOTTOMLEVEL,
                IEssMemberSelection.QUERY_OPTION_MEMBERSONLY, "Accounts", "", "");

        IEssIterator mbrs = mbrSel.getMembers();
        fileWriter = new FileWriter("ZAccounts.csv"); // Create the output file

        for (int i = 0; i < mbrs.getCount(); i++) // Iterate over each bottom-level member in the outline
        {
            IEssMember mbr = (IEssMember) mbrs.getAt(i); // Get the member
            fileWriter.append(mbr.getName() + "," + mbr.getAlias("Default") + ","
                    + mbr.getShareOption() + "\n"); // Append name, alias and share option to the file
        }
        fileWriter.close();
    } catch (EssException essx) {
        System.err.println("Error: " + essx.getMessage());
    } catch (IOException ioex) {
        System.err.println("Error: " + ioex.getMessage());
    }
}

3.   VLookup Operation:


In the above code, the leaf-level members are fetched from the Account dimension using the listAccountMembers() method; these level 0 members are then compared with the source metadata account file to produce the absent members. The GetAccounts() method performs the VLOOKUP operation, and all the missing members are written to the output file.

The pseudo code below reads the members from the ZAccounts.csv file generated by listAccountMembers() and the members from the glaccount.csv file uploaded from the source system.

 

private static void GetAccounts(IEssbase ess, IEssCubeView cv) throws EssException
{
    List<String[]> colSAP; // Members from the SAP source file
    List<String[]> colHyp; // Level 0 members extracted from Essbase
    boolean matchVar = false;
    FileWriter fileW = null;

    try {
        colHyp = readFile("ZAccounts.csv");   // Read the extracted level 0 members
        colSAP = readFile("glaccount.csv");   // Read the source metadata file
        fileW = new FileWriter("CompareAccounts.csv"); // Output file for missing members

        for (int i = 0; i < colHyp.size(); i++) // Each line of the Essbase extraction file
        {
            for (int j = 0; j < colSAP.size(); j++) // Each line of the SAP source file
            {
                // VLookup operation: match on the member name (first column)
                if (colHyp.get(i)[0].equals(colSAP.get(j)[0])) {
                    matchVar = true;
                    break;
                } else {
                    matchVar = false;
                }
            }
            if (!matchVar) {
                // Member exists in Essbase but not in the source file; record it together with its data check
                fileW.append(colHyp.get(i)[0] + " "
                        + performCubeViewOperationACC(ess, cv, colHyp.get(i)[0]) + "\n");
            }
        }
        fileW.close();
    } catch (IOException ex) {
        ex.printStackTrace();
    }
}

 

 

4.   Check if there are any data for Missing Members


Once the list of missing members is populated by the VLOOKUP operation, the script checks whether there is any data for those missing members. If data is found for a member, that member may be retained in the outline.

The method performCubeViewOperationACC() verifies data for the year FY17; the year can be updated in the code if needed.

static String performCubeViewOperationACC(IEssbase ess, IEssCubeView cv, String opStr) throws EssException
{
    String value = null;
    IEssGridView grid = cv.getGridView(); // Grid used for the data retrieval
    grid.setSize(5, 10);                  // Set the size of the grid

    grid.setValue(3, 1, opStr);            // opStr - the level 0 member being checked
    grid.setValue(1, 2, "FY17");           // Year
    grid.setValue(0, 2, "Total Entities"); // The fields below are optional POV members
    grid.setValue(2, 2, "YearTotal");
    grid.setValue(0, 5, "Actual");
    grid.setValue(0, 6, "Final");
    grid.setValue(0, 7, "LOC");

    IEssOperation op = cv.createIEssOpRetrieve(); // Data retrieval operation
    cv.performOperation(op);                      // Retrieve data for the given POV

    // Check whether any of the retrieved cells contain data
    if (grid.getValue(3, 2).toString().equals("#Missing")
            && grid.getValue(3, 3).toString().equals("#Missing")
            && grid.getValue(3, 4).toString().equals("#Missing"))
    {
        value = "- This member has No Data (#Missing)";
    }
    else
    {
        value = "- This member has Data";
    }
    return value;
}

 

5.   Inactive members report

Finally, the list of members missing from the source files is produced. These members can be deleted from the Essbase outline if they are not required.

Example

 

The example below explains how the missing members are found using the comparison tool.

Level 0 members from Essbase outline - Account Dimension:

Account Parent | Account Child | Desc
100 | 10000XX | Sales
300 | 30000XX | Consumable
400 | 40000XX | Salary
500 | 50000XX | Marketing

 

 

Level 0 members from Source System:

Account Parent | Account Child | Desc
100 | 10000XX | Sales
400 | 40000XX | Salary
500 | 50000XX | Marketing

 

After the execution using Comparison tool:

Output File:

Missing Members

30000XX - This member has No Data (#Missing)

 

Conclusion:

I have used this in all the testing phases and it saved a lot of effort. There is also an option to delete the members automatically from the outline. Based on the business requirements, you can modify the Java program and execute it.

Reference(s)

1.    Oracle documentation: https://docs.oracle.com/cd/E40248_01/epm.1112/aps_java_api/com/essbase/api/metadata/IEssCubeOutline.html

 


December 26, 2017

Hyperion Essbase Alternate Hierarchy Sync Up


I was working on an Essbase implementation project for one of our retail customers, and the business asked us to create alternate hierarchies to fulfill their month-end reporting requirements. These alternate hierarchies are derived from the base hierarchy and should be updated whenever a new member is added to the base hierarchy. The business admin therefore has to find the new members added to the base hierarchy and add them to the alternate hierarchy, but it is difficult to spot new members added to the Essbase outline. In this blog, I explain how an alternate hierarchy sync-up tool can be used to find the new members added to the base hierarchy.

Hyperion Essbase API:

The Essbase Java API is an interface with powerful features for application and database administration. I wrote a Java program using the Essbase API that extracts the outline members from the base hierarchy and performs a VLOOKUP against the alternate hierarchy. The members could also be added to the alternate hierarchy automatically, but the business wants to review the members and add them manually.

Alternate Hierarchy Sync up:

The flow chart below describes the Essbase alternate hierarchy sync-up process.

 

Figure 1

 

Business Requirements:

 

Business users were using the alternate hierarchy for month-end reporting, and these alternate hierarchies are built manually by the business based on the base hierarchy structure. If any new members are added to the base hierarchy, they may need to be added to the associated alternate hierarchy structure so that the two hierarchies stay in sync. After the dimension build process, users had to manually search for the new members added to the base hierarchy and add them to the alternate hierarchy, which was very time consuming.

The requirement was to develop and automate a process for identifying members newly added to the base hierarchy that have not yet been added to the alternate hierarchy. I developed an alternate hierarchy sync-up tool using Essbase API programming which pulls the outline members from the Essbase application and compares them against the given alternate hierarchy to provide the new member details. This tool saved a lot of effort in building the alternate hierarchy, and the sync process can be automated using it.

A)  Alternate Hierarchy Sync up tool:

Essbase API Java programming is used to develop this alternate hierarchy sync-up tool, which extracts the level 0 members from the outline's base hierarchy and performs a VLOOKUP against the alternate hierarchy members to produce the members missing from the alternate hierarchy. These missing members can then be added to the alternate hierarchy if they are needed for business reporting.

This Alternate Hierarchy Sync up tool has the following modules.

1.    Connecting into the Essbase Server

2.    Extracting Level 0 members from Base Hierarchy

3.    Extracting Level 0 members from Alternate Hierarchy

4.    VLookup

5.    Generate the results

             The following are the Essbase API packages used in the Java program:

com.essbase.api.base.*;

com.essbase.api.datasource.*;

com.essbase.api.domain.IEssDomain;

com.essbase.api.metadata.*;

com.essbase.api.session.*;

com.essbase.api.dataquery.*;

com.essbase.api.domain.*;

 

Figure 2

B) Pseudo Code

 

I have given the pseudo code in the section below and explained how it is used for building the alternate hierarchy sync-up tool.

1.     Connecting into the Essbase Server

The parameters - username, password, OLAP server name and the provider services URL - are passed to the signOn() method to sign on to the Essbase application.

The connection is created for the TestApp.TestDB application.

acceptArgs(args);
ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
/** Provide username, password, and provider services URL **/
IEssDomain dom = ess.signOn(UserName, PWD, false, null, s_provider);
olapSvr = (IEssOlapServer) dom.getOlapServer(s_olapSvrName); // Give the OLAP server name
olapSvr.connect(); // Connect to the Essbase server.
IEssCube cube = olapSvr.getApplication("TestApp").getCube("TestDB");

                  

Once the connection is made, the system gets the names of the base hierarchy and the alternate hierarchy from the user.

System.out.println("\nEnter Base Hierarchy Name (Ex - 200):");

                   Scanner baseScanner = new Scanner(System.in);//Reads input from user

                   String baseHier = baseScanner.nextLine();

                                     

System.out.println("\n\nEnter Alternate Hierarchy Name (Ex - AH_200_Total):");

                   Scanner alterScanner = new Scanner(System.in); //Reads input from user

                   String alterHier = alterScanner.nextLine();

2.    Extracting Level 0 Members from Base Hierarchy

The base hierarchy level 0 members are extracted and prepared for comparison with the level 0 members from the alternate hierarchy. The sync-up tool reads the hierarchy name from the user and extracts the level 0 members for that hierarchy only.

The method listBaseEntityMembers() pulls the level 0 members of the given hierarchy from the Entity dimension and writes the output to a CSV file. If we want to pull a different dimension, we need to modify the parameters of the executeQuery() method. In the sample code below, executeQuery() pulls the bottom-level members from the Entity dimension.

static void listBaseEntityMembers(IEssCube cube, String baseHier) throws EssException
{
    FileWriter fileWriter = null; // File object used to write the extracted members
    try {
        IEssMemberSelection mbrSel = cube.openMemberSelection("Sample"); // Member selection on the cube

        // Entities - base dimension, QUERY_TYPE_BOTTOMLEVEL - level 0 members of the given hierarchy
        mbrSel.executeQuery(baseHier, IEssMemberSelection.QUERY_TYPE_BOTTOMLEVEL,
                IEssMemberSelection.QUERY_OPTION_MEMBERSONLY, "Entities", "", "");

        IEssIterator mbrs = mbrSel.getMembers();

        fileWriter = new FileWriter("ZBaseEnt_" + baseHier + ".csv"); // Create the output file
        fileWriter.append("Entities" + "," + "Desc" + "\n");

        for (int i = 0; i < mbrs.getCount(); i++) // Iterate over each level 0 member in the outline
        {
            IEssMember mbr = (IEssMember) mbrs.getAt(i); // Get the member
            fileWriter.append(mbr.getName() + "," + mbr.getAlias("Default") + ","
                    + mbr.getShareOption() + "\n"); // Write the output to the file
        }
        fileWriter.close();
    } catch (EssException x) {
        System.err.println("Error: " + x.getMessage());
    } catch (IOException iox) {
        System.err.println("Error: " + iox.getMessage());
    }
}

         

3.    Extracting level 0 members from alternate hierarchy

The alternate hierarchy level 0 members are extracted and prepared for comparison with the level 0 members from the base hierarchy. The sync-up tool reads the alternate hierarchy name from the user and extracts the level 0 members for that hierarchy only.

The method listAlterHierMembers() pulls the level 0 members of the given alternate hierarchy from the Entity dimension and writes the output to a CSV file. If we want to pull a different dimension, we need to modify the parameters of the executeQuery() method. In the sample code below, executeQuery() pulls the bottom-level members from the Entity dimension.

static void listAlterHierMembers(IEssCube cube, String alterHier) throws EssException
{
    FileWriter fileWriter = null; // File object used to write the extracted members
    try {
        IEssMemberSelection mbrSel = cube.openMemberSelection("Sample"); // Member selection on the cube

        // Entities - base dimension, QUERY_TYPE_BOTTOMLEVEL - level 0 members of the alternate hierarchy
        mbrSel.executeQuery(alterHier, IEssMemberSelection.QUERY_TYPE_BOTTOMLEVEL,
                IEssMemberSelection.QUERY_OPTION_MEMBERSONLY, "Entities", "", "");

        IEssIterator mbrs = mbrSel.getMembers();

        fileWriter = new FileWriter("ZAltEnt_" + alterHier + ".csv"); // Create the output file
        fileWriter.append("Entities" + "," + "Desc" + "\n");

        for (int i = 0; i < mbrs.getCount(); i++) // Iterate over each level 0 member in the outline
        {
            IEssMember mbr = (IEssMember) mbrs.getAt(i); // Get the member
            if (mbr.getShareOption().toString().equals("Shared member")) // Keep only shared members
            {
                fileWriter.append(mbr.getName() + "," + mbr.getAlias("Default") + ","
                        + mbr.getShareOption() + "\n"); // Write the output to the file
            }
        }
        fileWriter.close();
    } catch (EssException x) {
        System.err.println("Error: " + x.getMessage());
    } catch (IOException iox) {
        System.err.println("Error: " + iox.getMessage());
    }
}

         

 

4.     VLookup:

Now that the level 0 members have been extracted from both the base and the alternate hierarchy, the two sets are compared to find the members that have not yet been added to the alternate hierarchy. The GetEntities() method performs the VLOOKUP operation and lists all the members missing from the alternate hierarchy.

The code below reads the base hierarchy level 0 members from the CSV file generated by listBaseEntityMembers() and the alternate hierarchy level 0 members from the CSV file generated by listAlterHierMembers().

 

private static void GetEntities(IEssbase ess, IEssCubeView cv, String baseHier, String alterHier) throws EssException
{
    List<String[]> colBase; // Base hierarchy members
    List<String[]> colAlt;  // Alternate hierarchy members
    boolean match = false;
    FileWriter fileWriter = null;

    try {
        colBase = readFile("ZBaseEnt_" + baseHier + ".csv"); // Read base hierarchy members
        colAlt = readFile("ZAltEnt_" + alterHier + ".csv");  // Read alternate hierarchy members
        fileWriter = new FileWriter("Results_Base&AlterHier.csv"); // Output file for new members

        for (int i = 0; i < colBase.size(); i++) // Each line of the base hierarchy extraction file
        {
            for (int j = 0; j < colAlt.size(); j++) // Each line of the alternate hierarchy file
            {
                // VLookup operation: match on the member name (first column)
                if (colBase.get(i)[0].equals(colAlt.get(j)[0])) {
                    match = true;
                    break;
                } else {
                    match = false;
                }
            }
            if (!match) {
                // Member exists in the base hierarchy but not in the alternate hierarchy
                fileWriter.append(colBase.get(i)[0] + "\n");
            }
        }
        fileWriter.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

 

5.    Produce the results

Finally, the list of new members that are not available in the alternate hierarchy is produced. These members can be added if they are needed for business reporting.

Example

The example below explains how the new members are found using the alternate hierarchy sync-up tool.

Level 0 members from Base Hierarchy - Entity Dimension:

Entity Parent | Entity Child | Desc
100 | 10000XX1 | HR
100 | 10000XX2 | Corps
100 | 10000XX3 | Sales
100 | 10000XX4 | Wholesale

 

Level 0 members from Alternate Hierarchy:

Entity Parent | Entity Child | Desc
100 | 10000XX1 | HR
100 | 10000XX2 | Corps
100 | 10000XX3 | Sales

 

After the execution using Alternate Hierarchy sync up tool:

Output File:

New Members

10000XX4

 

Conclusion:

I have used this tool while building alternate hierarchies based on business requests; it saved a lot of effort, and the sync process can also be automated.

Reference(s)

1.    Oracle documentation: https://docs.oracle.com/cd/E40248_01/epm.1112/aps_java_api/com/essbase/api/metadata/IEssCubeOutline.html

 

December 19, 2017

Essbase Users Last Login Time Using API


I was working for one of our retail customers who used Essbase 11.1.1.3 applications for Actuals, Budget and Forecast month-end reporting. Several business users are provisioned to access the application based on their role, but only a few users access the Essbase application regularly; the others access it only when there is a reporting need. To use the licenses efficiently, the business admin wanted the users' last login times, which would help the business remove inactive users from the application. Previously I would export the security file in readable format and do a lot of manual work to pull the last login time details. I believe it is now very easy to get the last login time from the EAS console in the latest release, but that was not possible in the older version, so I wrote Essbase API scripts to automate the extraction of users' login details from the Essbase application.

Hyperion Essbase API:

The Essbase API is an interface that provides powerful features such as application and database administration. The Essbase OLAP users' details can be extracted using Essbase API programs, and these programs can be run against the server without needing to be concerned about where the server is located. I developed this Essbase users' last login time tool using such Essbase API programs.

Business Requirements:

More than 300 users are provisioned to access the Essbase application for reporting, but only a few of them connect to it regularly. The business therefore asked us to identify inactive users based on their last login time; the last login time helps the business revoke application access for users who are no longer using the application.

Generally, the last login details are stored in the Essbase security file. Using this file to get the timing details requires a lot of effort to convert the raw data into information. This Essbase users' last login time tool automates the process of pulling the login time for every user and has resulted in significant effort savings.

Essbase users last login time extraction


Essbase API Java programming is used to develop the users' last login time extraction tool, which can extract the last login time for every user provisioned in the Essbase application.

             The following are the Essbase API packages used to develop the tool:

com.essbase.api.base.*;

com.essbase.api.datasource.*;

com.essbase.api.domain.IEssDomain;

com.essbase.api.metadata.*;

com.essbase.api.session.*;

com.essbase.api.dataquery.*;

com.essbase.api.domain.*; 

Pseudo Code

 

I have written a simple Java program and included the pseudo code below to get the user login details. The following sections explain the pseudo code used for building the users' last login extraction tool.

1.    Connecting to Essbase Server


The user name, password, OLAP server name and the provider services URL should be passed as parameters to sign on to the Essbase application.

In the sample code below, the Essbase server connection is created.

acceptArgs(args);
ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
/** Provide username, password, and provider services URL **/
IEssDomain dom = ess.signOn(userName, password, false, null, s_provider);
olapSvr = (IEssOlapServer) dom.getOlapServer(s_olapSvrName); // Provide the OLAP server name
olapSvr.connect(); // Creates the connection with the Essbase server.

 

2.    Getting OLAP users


The method "getOlapUsers ()" is used to get the last login time of all the Essbase users. It extract all the users of OLAP server and their login details. Users might be granted access to either Essbase or Essbase & Planning applications. Based on the application, application type is updated.

static void getOlapUsers(IEssOlapServer olapServer) throws IOException
{
    // Create the output file - Essbase_Users_LastLogin.xls
    FileWriter fileWriterObj = new FileWriter("Essbase_Users_LastLogin.xls");
    try
    {
        IEssIterator users = olapServer.getOlapUsers(null, null, null, false); // Get all the Essbase users
        int length = users.getCount();
        for (int i = 0; i < length; i++)
        {
            IEssOlapUser usr = (IEssOlapUser) users.getAt(i);
            fileWriterObj.append(usr.getName() + "\t"); // Name of the user
            fileWriterObj.append(convertTime(usr.getLastLoginDate()) + "\t"); // Last login date and time;
            // convertTime() corrects the format of the date and time.
            fileWriterObj.append("\n");
        }
        fileWriterObj.close();
    }
    catch (Exception x)
    {
        System.out.println("\n::getOlapUsers:: failed with error: " + x.getMessage());
    }
}

 

3.    Convert Time:


The convertTime() method formats the raw last login value into a readable date/time (using java.text.SimpleDateFormat and java.util.Date); if the value is zero, the user has never logged in.

public static String convertTime(long time)
{
    if (time == 0) { return "NeverLoggedIn"; }
    else {
        // Convert into a readable date format (multiply by 1000 if the API returns seconds)
        SimpleDateFormat format = new SimpleDateFormat("MM/dd/yyyy hh:mm a");
        return format.format(new Date(time));
    }
}

         

4.   Produce the results


I created a batch script to call this Java program; the Java classpath (including the Essbase Java API jars) must be set before executing it. Running the script produces the list of all users with their last login times, and access can then be revoked for users who no longer use the Essbase application. A consolidated sketch of the full program is shown below.
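
For readers who want to see the pieces above stitched together, here is a minimal, self-contained sketch of the whole tool. The credentials, provider services URL, OLAP server name and date pattern are placeholders; the getOlapUsers() arguments and getLastLoginDate() call follow the pseudo code above and the Oracle JAPI documentation, so verify the exact signatures (and whether the login timestamp is in seconds or milliseconds) against the API version in your environment.

import java.io.FileWriter;
import java.text.SimpleDateFormat;
import java.util.Date;
import com.essbase.api.base.IEssIterator;
import com.essbase.api.datasource.IEssOlapServer;
import com.essbase.api.datasource.IEssOlapUser;
import com.essbase.api.domain.IEssDomain;
import com.essbase.api.session.IEssbase;

public class EssbaseLastLoginTool {

    // Placeholder connection details - replace with your own environment values.
    private static final String USER     = "admin";
    private static final String PASSWORD = "password";
    private static final String APS_URL  = "http://apshost:9000/aps/JAPI";
    private static final String OLAP_SVR = "EssbaseServer";

    public static void main(String[] args) throws Exception {
        // 1. Sign on through Provider Services and connect to the OLAP server.
        IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
        IEssDomain domain = ess.signOn(USER, PASSWORD, false, null, APS_URL);
        IEssOlapServer olapServer = domain.getOlapServer(OLAP_SVR);
        olapServer.connect();

        // 2. Extract every provisioned user and write name + last login to a file.
        SimpleDateFormat fmt = new SimpleDateFormat("MM/dd/yyyy hh:mm a");
        try (FileWriter out = new FileWriter("Essbase_Users_LastLogin.xls")) {
            IEssIterator users = olapServer.getOlapUsers(null, null, null, false);
            for (int i = 0; i < users.getCount(); i++) {
                IEssOlapUser usr = (IEssOlapUser) users.getAt(i);
                long lastLogin = usr.getLastLoginDate();
                String when = (lastLogin == 0) ? "NeverLoggedIn"
                                               : fmt.format(new Date(lastLogin));
                out.append(usr.getName()).append('\t').append(when).append('\n');
            }
        } finally {
            // 3. Always release the connection and the JAPI session.
            olapServer.disconnect();
            ess.signOff();
        }
    }
}

The tab-separated output file name matches the one used in the pseudo code, and the com.essbase.api.datasource types come from the package list earlier in this post.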

Example

 

Below is an example of the last login times extracted for a few users using the Essbase users' last login time tool.

Essbase Users last login time:

User Name    Last Login Time         Application Type
UserA        4/28/2017 01:00 PM      Essbase
UserB        NeverLoggedIn           Essbase and Planning
UserC        9/30/2016 5:00 PM       Essbase

 

 

Conclusion:

This blog explains how users' last login times can be extracted from an Essbase application using the Essbase Java API.

Reference(s)

1. Oracle Essbase Java API documentation: https://docs.oracle.com/cd/E40248_01/epm.1112/aps_java_api/com/essbase/api/metadata/IEssCubeOutline.html

 

December 12, 2017

Significance of ETL in BI Reporting


What is ETL?

Extract, Transform, and Load (ETL) is typically performed with ETL tools that extract data from source systems, transform it, and load it into a data warehouse for BI reporting:

  • Extract - Extracting the data from multiple source systems (OLTP) that hold data in different formats.
  • Transform - Applying transformations as per the business requirements.
  • Load - Loading the extracted and transformed data, in a single consistent format, into the data warehouse (a single centralized OLAP database).

Fig 1. ETL Data Flow Process


List of Oracle Products which can be used for ETL:

  • Oracle Data Integrator Enterprise Edition (ODI)

  • Oracle Data Integrator for Big Data (DIBD) 

  • Oracle Enterprise Data Quality. (Integrated with ODI)

  • Oracle Warehouse Builder (OWB)


Why ETL?

               This is the big question that always exists in the BI world: when reporting tools have the ability to extract from multiple data sources, what is the need for ETL?


Below are the main reasons that make ETL a mandatory process in BI reporting:

Data Cleansing and Data Profiling:

              Reporting tools provide the templates and data flows required by the reports, but they do not process the data at row or column level in a table. The ETL process ensures that only good, valid data, as defined by the business requirements, is available for BI reporting, which increases the credibility and quality of the reports for the users.

Increasing the Performance of Reporting:

             Fetching data from a single data warehouse (single node) instead of from multiple sources (multiple nodes) reduces the time needed to make the latest data available for BI reporting. When thousands of reports are created and run every day for a large organization, this becomes an important aspect of BI. De-normalization and caching, both achieved through ETL, also play a vital role in ensuring good performance of BI reports.

Avoiding the Usage of Live Transaction sources for Reporting:             

               Since the data warehouse (OLAP) sits on a different node (database), BI reporting does not query the transaction database directly, so there is no impact on source system performance.


Need of Tools for ETL Process in BI

Can't we achieve this through traditional Programming and scripting languages?

Yes, it is possible to achieve this process through programming languages, but ETL tools provide many advantages that are hard to ignore, not to mention their automation and scheduling capabilities. A minimal hand-coded example is sketched below for comparison.
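
To make the comparison concrete, here is a minimal hand-coded ETL step in plain Java/JDBC, sketched only for illustration: the connection URLs, table and column names, and the currency rule are hypothetical, and the Oracle JDBC driver would need to be on the classpath. A production pipeline would still need the error handling, restartability, logging, parallelism and scheduling that ETL tools provide out of the box.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class HandCodedEtlStep {
    public static void main(String[] args) throws Exception {
        // Hypothetical source (OLTP) and target (warehouse) connections.
        try (Connection src = DriverManager.getConnection(
                     "jdbc:oracle:thin:@oltp-host:1521/ORDERS", "src_user", "src_pwd");
             Connection dwh = DriverManager.getConnection(
                     "jdbc:oracle:thin:@dwh-host:1521/DWH", "dwh_user", "dwh_pwd")) {

            // Extract: pull raw rows from the source system.
            String extractSql = "SELECT order_id, order_amt, currency FROM sales_orders";
            String loadSql    = "INSERT INTO fact_sales (order_id, amount_usd) VALUES (?, ?)";

            try (Statement ext = src.createStatement();
                 ResultSet rs = ext.executeQuery(extractSql);
                 PreparedStatement load = dwh.prepareStatement(loadSql)) {

                while (rs.next()) {
                    // Transform: apply a business rule (here, a fixed-rate currency conversion).
                    double amount = rs.getDouble("order_amt");
                    if ("EUR".equals(rs.getString("currency"))) {
                        amount = amount * 1.1; // illustrative rate only
                    }
                    // Load: write the conformed row into the warehouse fact table.
                    load.setLong(1, rs.getLong("order_id"));
                    load.setDouble(2, amount);
                    load.addBatch();
                }
                load.executeBatch();
            }
        }
    }
}

Even this single table-to-table step needs driver setup, batching, and transformation code by hand, which is exactly the plumbing that the tools below take care of.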

Below are some of the key reasons for using ETL tools:

Data Integration Utilities:

            To extract data from different databases through hand-written code, the respective drivers and packages have to be imported into the program. ETL tools come enriched with utilities to connect to different databases, which reduces the developer's effort in the extraction part. They also provide parallel data extraction, which reduces the time taken to extract data.

Reducing the dependency on Programming languages:

           ETL tools remove the dependency on programming languages, reducing the need for complex custom code. Developers can learn different ETL tools within a short period, and debugging becomes easier because these tools come with detailed logs about failures.

Enriched User Interface in ETL Tools:

            ETL tools are built with a rich interface that enables BI practitioners to meet complex business requirements easily. Developers can complete most ETL work by dragging and dropping the utilities provided by the tool, which lets them focus on the business requirement rather than on the ETL code. As a result, people working in a functional business domain can also learn and work with ETL quickly.


Conclusion

              In the future, the ETL process may evolve in different ways; for instance, many ETL tools are now enriched with cloud utilities (such as ODI Cloud Service) and can handle unstructured data. But as long as we want to ensure data quality and report performance for BI users, an ETL process will be required. Data quality plays a vital role in BI reporting, since business decisions and predictions are based on the reports generated from BI data.


Continue reading " Significance of ETL in BI Reporting " »

December 6, 2017

Tips to conduct a successful Reverse KT Session

Tips to conduct a successful Reverse Knowledge Transition (RKT) Session

1. Selection of topics (All party involvement)

This is a very important part of the whole RKT process. The selection of topics should be aligned with all parties involved in the KT, along with the customer. We need to ensure the selected topics were covered during our KT sessions and are important from the customer's point of view. We can always suggest topics we want to cover and request the customer to pick from them.

Select fewer topics so that you can delve deeper into each one within the time frame; this showcases your depth of knowledge on the topic and thereby increases the customer's confidence.

Select topics that are important and form the bulk of your day-to-day work; this also ensures that the session covers useful information and not just theory.

2. Duration for each topic

It makes sense to assign a time frame to each selected topic. This helps you plan your preparation and decide how deep you can go on each particular topic.

3. Preparing a deck (Why slides?)

Presentation is really key in RKT sessions. Slides are the best tool to present information in a short and crisp manner. Don't use passages or paragraphs in slides; use bullet points or numbered points only. This gives a two-fold advantage:

    1. It enables us to remember all the information on that particular topic
    2. The customer gains confidence when we talk about each bullet point in depth rather than reading from a passage/paragraph

It is always important to keep the slides to the point and to talk about each point for a designated amount of time without reading from a paragraph.

Pictures/screenshots help depict the information in a more visual format, which is always preferred over textual content.

Slides also provide animation, a powerful mechanism to draw the viewers' attention to a particular point during the presentation. A word of caution: don't go for fancy animations; keep it simple (for example, use "Float In").

4. Review of material by all parties

Once you have prepared the material to your satisfaction, review it internally (within your team) first and get feedback. After incorporating that feedback, circulate the material among the stakeholders and seek their feedback. This ensures there are no surprises during the RKT session. (Note: you can choose to remove all animations before circulating it to stakeholders, so the animations stay interesting during the live RKT session.)

Incorporate the feedback from all parties. If you cannot incorporate a piece of feedback, respond to that party and explain why. (This is key to gaining their confidence, which may also help in obtaining a good rating.)

5. Dry run

This is an important step that should be performed without exception. Dry-run the entire RKT session internally within the team, or invite someone who can provide candid feedback. Time the session during the dry run as well. Ensure you speak slowly and loudly with a clear voice, and pause when switching between sub-topics to ask for questions.

6. Involve team members

Ask team members to talk about the topics they are comfortable with, and guide them through any mistakes during the dry run. Having more people present topics gives the customer confidence in the entire team.

7. Remote RKT?

If you are conducting the RKT session remotely over the phone, keep the phone close to your mouth and speak slowly and clearly so that everyone can hear without disturbance. Pause and give the audience enough time to respond. Always be polite, using "thank you" and "please" wherever possible; a greeting at the start sets the right tone for the meeting.

Please remember to keep your phone on mute if you are not talking to avoid any unnecessary background noise.


November 29, 2017

Answering Analytics on the Go - a review of Oracle Day by Day

What if we had a personalized BI app that could answer business questions on an ad hoc basis through voice commands?

Or one that proactively provided custom reports based on location and time, using the phone's GPS?

Yes, that's now a reality with Oracle Day by Day mobile App.

Oracle launched the Day by Day mobile app in the first half of this year, and it looks to be a futuristic personal analytics application. It is currently available as part of the Oracle Analytics Cloud (OAC) Enterprise license and has both Android and Apple versions. It is quite different from the earlier Oracle Mobile BI and Synopsis apps: Oracle Mobile BI HD is essentially the full BI application tailored for a mobile screen, whereas Synopsis is for ad hoc analysis sourcing data from Excel sheets. Oracle Day by Day goes beyond mobile BI with built-in AI and NLP (Natural Language Processing) capabilities.

I got a chance to evaluate this product while working on a POC of the Oracle Analytics Cloud platform. The app does not require any additional setup and can be downloaded easily from the Google Play Store or Apple App Store. Below is my brief experience with, and take on, each of its key features.

1) Easy to use: One can configure the app just by entering the OAC application URL and user credentials. The application itself has three simple tabs: Smart Feed, Search, and Settings.

2) A really smart Smart Feed: The first tab of the app surfaces frequently used charts and numbers based on usage patterns and explicit selections. Essentially, the app learns what to feed into this tab so that users get the required insights proactively without even asking for them.

It is so personalized that it delivers charts answering various business questions based on when and where you are. Oracle calls these mobile charts and graphs smart cards. Here is how it looks...

newmain.jpg

Fig1 : Day by Day app tabs and Smart Feed

3) Search with NLP & AI: In Search tab users can type the key words for the data and KPI they are looking for and any specific way it needs to be presented (like bar graph, pie chart etc.). In below example a chart is rendered based on 4 keywords typed "Costs, Germany, City, bar chart".

NLP1.jpg

Fig 2: Report rendering based on keywords

This is a mobile version of BI Ask, available in OAC, which presents content based on indexed subject areas combined with back-end AI programs that construct the report on the fly.

More interesting is voice-based search: users can tap the Google voice icon and start asking questions of their personal assistant. In the example below, the user asks "Revenue for phones in Germany" and the app responds with a number tile containing the required fact.

voice.jpg

Fig 3: Get numbers and charts through voice-based queries

If a specific visualization is not mentioned in the request, the app automatically selects one or more chart types to present the data, such as a tile or a bar graph.

The app leverages various NLP techniques to interpret the user's voice input and construct meaningful search criteria.

4) Bring back reports based on date/time & location: Suppose a senior executive wants to see the sales numbers for a location when he/she is at that location, or wants to glance at a report before a meeting scheduled at a particular date/time. This can now be achieved easily through the app. As shown in the example below, with a few taps the user can select a sales report to be available when he/she is in Berlin tomorrow or next week.

newlocation.jpg

Fig 4: Personalize your smart feed based on Date/time and Location

The app uses the phone's GPS information to provide reports based on location.

5) Share a report with a group or an individual: Users can create their own group, called a Crew, by adding other users (who have an OAC ID and are on the app) in the Settings tab. A report can be shared with the Crew or with an individual in a single tap, as shown below.

Share1.jpg

Fig 5: Report sharing, broadcasting and comment options

Reports can also be broadcast to nearby devices. Another appealing feature, available only in the Android version, is the ability to cast a report to external devices through Chromecast.

6) Start a group chat: Users can start a conversation around a report with their Crew by typing in the Comments section. This is very helpful for sales teams that need to have discussions on the go.

7) Notifications based on calls/text messages: Users can set notifications in the app for calls or text messages from a particular contact, so that reports pertaining to that contact are made available in the Smart Feed. This really helps in quickly gaining insight about the contact (a prospect or an existing customer) before starting the conversation.

Oracle Day by Day is a game changer in the BI space, delivering a truly personalized, AI-based, collaborative mobile BI capability. Oracle is trying to extend the definition of BI from charts and graphs to getting direct answers to business questions. Having said that, the app in its current state still needs plenty of refinement, especially in its NLP capabilities for interpreting voice commands and text queries, and Oracle will surely bring more enhancements in coming releases. Overall, it is a great trend-setter for the BI and analytics industry.

November 28, 2017

Cross Module Integration-EPBCS and EPRCS

Turbocharge your EPM planet with cross module integration "EPRCS - EPBCS Integration"

It is only a matter of time before EPRCS becomes the most preferred cloud reporting service in the EPM market. A strong claim, but I suspect you will agree with it within the next ten minutes, by the time you reach the end of this blog.

Gone are the days of scattered dossiers, when decision makers routinely missed insights and the story behind the data. The purposeful intent of EPRCS is to be a one-stop shop for managing all your reporting needs. Let's look at this process-driven approach to reporting, something not seen before.

EPRCS provides a process for defining, authoring, reviewing, and approving reports. Regulatory report packages, financial reports, and management reports get published in a secure, collaborative, and process-driven way. As I understand it, this is a cloud-only solution, and hence a great value-add to your report publishing process with capabilities not seen before.

Just like many others, I was initially carried away by the naming similarity and misunderstood it as HFR (Hyperion Financial Reporting) on the cloud, and was later proven wrong. Perhaps it was intended to work alongside other technologies in the reporting gamut, like Smart View and Financial Reports.

While my other blogs take a deep dive into EPRCS, its features and capabilities, and the details of the latest December upgrade, this one focuses on a recent, repeated question from peers: "How do I integrate EPBCS with EPRCS?"

My aim here is to help the reader quickly come up to speed on using EPRCS and integrating it with other products in the EPM stack, such as EPBCS, FCCS, and OAC.

First, let us take a peek at integrating EPBCS and EPRCS (integration here means marrying the user interfaces, not the data). The advantages, in general, are:

o   Seamless Customer experience

o   Enterprise wide well connected system

o   Maximum utilization of the products on the ground

o   Extreme flexibility in hand-picking and mixing what to bring over from the source instance

o   Plenty of Cloud based tools

o   Ease of Importing/Exporting artifacts within the various Cloud products

Should I set up EPBCS as a "Source" in the EPRCS data sources to integrate the two? For a traditional HFR developer, that is the route the brain would take. But as I keep saying in my articles, EPM has a magical touch, and it left me wondering whether that is really the only way to tunnel into EPBCS. Why was I digging around in the target side at all? If I want to pull in some or all of the EPRCS user interface and use it within the EPBCS world for an enhanced, power-packed design, then obviously the setup should live in EPBCS. So I decided to take a look at the EPBCS landing page, as you see below:

Prithiba_Pic_1a.jpg

Let's go to the Tools section and open it up to see what it has in store for us...

 

Prithiba_Pic2.png

So this must be it. It is simple to navigate, and almost all of the commonly used administrator activities are available on cards right in front of you, very intelligently arranged. Thanks to Oracle for that; someone there clearly put deep thought into giving us not only everything we could ask for, but in a way that is convenient and comfortable to use.

Since I don't have any connection set up yet, this is what I am going to see the first time I land at "Manage Connections"

 

Prithiba_Pic3.jpg

Prithiba_Pic4.jpg

Prithiba_Pic_5a.jpg

At the top right, it wasn't difficult to find Add Connection...

Prithiba_Pic_6.jpg

And what a sight when I click Add Connection: the available cloud services are lined up as candidate targets for the connection setup. In this context, without any detour, I select Oracle Enterprise Performance Reporting Cloud Service to get closer to accomplishing the intended mission!

Prithiba_Pic_7a.jpg 

The Create Connection pop-up is self-explanatory, with the following mandatory fields for completing the connection setup:

  1. URL - The URL of the Enterprise Performance Reporting Cloud Service instance
  2. Connection Name - That's for you to choose; I would say EPRCS-Conn
  3. Description - Details of the intended use for this connection
  4. Service Administrator - The user ID used for the connectivity; it must be a service administrator or identity domain administrator
  5. Password - The password for the above user ID
  6. Domain - The identity domain in which EPRCS is hosted. Apparently this means that cross-domain / cross-pod integration is also possible! Wow!
Prithiba_Pic_8a.jpg

Once you "Validate" the connection and make it successful you are good to go to the next of "Save and Close" on the top right. Another though fancy yet helpful item here is the Domain entry at the last box at the bottom autofills based on the URL entered in the first box. So you can never go wrong here, even if you intend to!!nce the connection has been saved, you can find it appended to the list in the manage connections. From here, you can also perform further editing or deletion.

Prithiba_Pic_9a.jpg

Bingo! Just click the home icon, go to the navigator, and you will see the new connection!

Prithiba_Pic_10.jpg


Let's now see how to play around with the newly added target application. Click the connection name and the application opens in the same browser tab. Alternatively, click the open-window icon and the application opens in a brand new browser window.

Did I just see what I just saw? One user login, a single point of entry, access to multiple cloud services!! Another magical moment!!

I believe the same approach can be applied to all the EPM cloud services and all the cloud pods I have access to, and that's exactly what I am going to cook tonight.

One last tip for this write-up: you can also add any card or cluster from the target application to your source application. This is done via navigation flows.

Prithiba_Pic_11a.jpg


Once you click on "Add Existing Card/Cluster" the library window will open. Also on the left side you will have the option to choose one of your connections.

Gladly into the seamless world of connectivity...! 


Signing Off for now...Prithiba.D

November 9, 2017

Transaction matching of around two million records in under 5 minutes in ARCS

Oracle Account Reconciliation Cloud Service (ARCS) with Transaction Matching is a cloud-based reconciliation platform with pre-built configurations that adhere to industry best practices; it is a recommended solution for reconciliation and matching needs.

Transaction Matching is a module within ARCS which inherits the features that facilitate preparation and review of reconciliations.

  • Additionally, Transaction matching adds efficient automation of the detailed comparison of transactions in two or more data sources
  • The calculation engine allows for intuitive "netting" of transactions within each data source to provide output which is easy to work with
  • Flexibility in the setup and timing of the process allows you to minimize effort during "crunch time" and reduce risk

 

Transaction Matching Use Cases

Typical Transaction Matching Use Cases are shown below.

 

Use Cases.jpg

Often clients need to match more than a million records between two source systems with complex match-set rules. We have seen clients spend hours trying to match them manually in Excel, or using solutions such as an Access database or Oracle tables, which can be very time consuming and prone to data quality issues. Below we share our experience and some insights on how we loaded and matched two source files of around 2 million records in less than 5 minutes using the Transaction Matching feature of ARCS for one of our e-commerce clients.

Idea Inception

The client wanted to match up to 2 million records from their point of sale (POS) system against the details obtained from their merchant transaction system. They were using an Access database for this activity, which gave them results only after hours of processing, and they reached out to Infosys to help streamline this time-consuming and frustrating process.

 

Solution and Approach

Source Files.

1. Point of Sale transaction file.

    The POS file had 9 columns and was provided in .txt format (a PDF report converted into a text file). Below is a snapshot of the file.

POS.jpg

2. Merchant system transaction file

           The merchant system transaction file had 21 columns and was in .csv format. Below is a snapshot of the file.

Merchant.jpg

Matching rules

The client wanted the matching rule to be: the card number and amount from the POS transaction file must match the cardholder number and amount from the merchant transaction file, with a many-to-one stipulation where many transactions from the POS system match a single batch (grouped-by-amount) transaction from the merchant file. The sketch after this paragraph illustrates the logic.
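
Conceptually, the many-to-one rule behaves like the sketch below: POS transactions are grouped by card number and their amounts totalled, and a merchant batch record is matched when its cardholder number and amount agree with a group's total. This is only an illustration of the rule's logic (the record and field names are made up, it assumes Java 16+ for records, and a small tolerance stands in for exact amount matching); inside ARCS the rule is configured declaratively in the match type, not coded.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ManyToOneMatchSketch {

    // Simplified records - the real files carry 9 and 21 columns respectively.
    record PosTxn(String cardNumber, double amount) {}
    record MerchantBatch(String cardholderNumber, double amount) {}

    public static void main(String[] args) {
        List<PosTxn> posTxns = List.of(
                new PosTxn("4111", 40.00), new PosTxn("4111", 60.00),
                new PosTxn("4222", 25.00));
        List<MerchantBatch> batches = List.of(
                new MerchantBatch("4111", 100.00),   // matches the two 4111 POS rows
                new MerchantBatch("4222", 30.00));   // unmatched: totals differ

        // Group POS transactions by card number and total their amounts.
        Map<String, Double> posTotals = new HashMap<>();
        for (PosTxn t : posTxns) {
            posTotals.merge(t.cardNumber(), t.amount(), Double::sum);
        }

        // Many-to-one match: a merchant batch matches when card number and total agree.
        for (MerchantBatch b : batches) {
            Double total = posTotals.get(b.cardholderNumber());
            boolean matched = total != null && Math.abs(total - b.amount()) < 0.005;
            System.out.printf("Card %s batch %.2f -> %s%n",
                    b.cardholderNumber(), b.amount(), matched ? "MATCHED" : "UNMATCHED");
        }
    }
}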

 

Initial Challenges

The initial challenges with this requirement are below

1. Size of File.

    The files provided were huge: with 9 and 21 columns respectively and around 2 million records each, the file sizes exceeded 1 GB per file. Files this large are difficult to read and edit in any text editor.

2. Formatting

    Another, bigger challenge was formatting the given files to meet ARCS Transaction Matching requirements. The files were provided in text format, and reading and reformatting them at that size was a tough nut to crack.

 

Infosys Solution

We took up the challenge and delivered as promised. The biggest task was to import the files containing about 2 million transactions into ARCS Transaction Matching from both systems and match them automatically in quick time; other tools and custom solutions were taking hours for this process. Importing 2 million records from a csv file is a huge input for any system to ingest and would typically take anywhere between 15-30 minutes per file. We had the further challenge of formatting the files, because the file we received was a .pdf converted to text and it had to be converted to .csv to be accepted by ARCS Transaction Matching. We used Oracle ARCS TM, formatting tools, text editors, and the Oracle-provided EPM Automate utility to format the files, then automatically ingest and auto-match the files from the two transaction systems.

 

The EPM Automate utility enables service administrators to remotely perform tasks within Oracle Enterprise Performance Management Cloud instances and automate many repeatable tasks, such as importing and exporting metadata, data, artifact and application snapshots, templates, and Data Management mappings.

 

Tips and Lessons Learnt

While implementing the above requirement we learned a few lessons; below are some tips for implementing a similar type of solution.

  • ARCS TM also accepts .zip input files, so compress the files into .zip format; they become smaller and are quicker and easier to upload to the ARCS cloud.
  • Use powerful text editors like Notepad++ or TextPad when formatting the files.
  • Create custom attributes which can be used in matching rules for faster auto-matching of transactions.
  • If possible, try to get the export from the transaction systems in .csv format to reduce conversion times.

Performance Metrics

Below are the performance metrics from implementing the client's requirement of matching around 2 million records using Oracle ARCS Transaction Matching.

 

Import POS million records - 27 seconds

Import Merchant million records - 61 seconds

Run Auto Match - 53 seconds

 

Complete Process - 2 minutes 21 seconds (Less than half of 5 minutes)

 

Result?

 

Happy client and Happy us.

 

We deliver!!!! - Please visit our company website to know more about our Account Reconciliation and Transaction matching solutions.



October 31, 2017

Unravel the mysteries around Equity Overrides

In this blog, I would like to share one of my experiences with HFM translation rules; it is still engraved in my memory because it was a major issue I faced in my early days with HFM, when I was keen on unravelling the mysteries around translation issues. During one of the implementations, I gained an understanding of the historical rate concept, which we usually encounter when translating a subsidiary's equity.

Before I proceed to the problem statement and the resolution, let me define historical rate for beginners in HFM (especially those who, like me, do not have a background in finance). The historical rate is the exchange rate prevailing at the time the transaction was consummated. Such transactions (mainly investment and equity) have to be translated at the historical rate rather than at the end-of-month (EOM) or average (AVG) rate. This requirement usually comes from clients reporting under multiple GAAPs, US GAAP in particular.

Let me describe the issue in a practical example:

Org A invests USD 100,000 in Org B, which reports in AUD, at an exchange rate of roughly 0.77 USD per AUD, so the subsidiary receives AUD 130,000. In A's books the investment is USD 100,000, while in B's books there is equity of AUD 130,000.

Now suppose the exchange rate later moves to 0.8 USD per AUD. The translated equity of the subsidiary, Org B, becomes AUD 130,000 x 0.8 = USD 104,000, whereas in Org A's books the investment still stands at USD 100,000. At the time of elimination, the plug account therefore captures an imbalance of USD 4,000, which arises purely from the exchange rate used for translation; the underlying transaction is not to blame for the mismatch. Hence there is an urgent need for a solution that reports the correct investment and equity in B's books, or else the reported values will be incorrect.

The first thing that struck me was: why not handle the translation through a rule that takes care of only the changes in equity and the differences in A's investments during the month, letting the standard EOM (balance sheet) rate pre-defined in the application do the rest? But there was a gap: suppose A invests in B on the tenth working day of the month; at that point in time, the rates are quite different from the month-end rate.

Hence the solution revolves around using equity overrides, though how to achieve this was another big question. Equity overrides give users complete control over the authentic translated values that need to be shown on the subsidiary organization's balance sheet.

The historical rates for conversion are captured manually through a separate data form defined in the application. The values are then translated using these historical rates via the rule file, overriding the default rates. The resulting difference is captured in a separate account, which we refer to as the plug account for currency overrides, i.e., Foreign Currency Translation Adjustment (FCTA).


Continue reading " Unravel the mysteries around Equity Overrides " »

October 30, 2017

Analytics and the APP!

 

Welcome back!!! In parts 1 and 2 we set out to understand the concept of analytics and the app (analytics on a mobile platform) and reviewed a few case studies from different leading products - Kronos, Oracle, and SAP. In this concluding part we will look at the significance of these case studies and draw inferences as to how they impact the world of analytics...

 

Inferences:

 

We have seen three case studies across different verticals with varying backgrounds and use-case scenarios. All of them, however, share the common feature of using an analytics tool on a mobile platform, showcasing the versatility of this combination of analytics and the app!

 

When organizations go mobile with analytics, they are able to extend the reach of information and empower people from every aspect of their business with the facts to make better, more informed decisions.

This is evident from the 2015 Mobile Business Intelligence Market Study by Dresner Advisory Services:


  • Mobile business intelligence (BI) is the third most common use case for mobile business workers, next to e-mail and contact information

  • 90% of study participants said mobile BI is important to their business

  • In a surprising finding from the Dresner market survey (*), business intelligence is the 3rd-highest priority among mobile applications, ranking higher than social media and even personal banking, and coming in below only email and basic phone services.

 

*SOURCE - Wisdom of Crowds ® Mobile Computing/ Mobile Business Intelligence Market Study 2015

 

Trends observed during the research on case studies indicate the growing importance of Mobile analytics in different verticals - IT being the prominent horizontal across most of the industries. Some of the reasons for this are listed below:


  • Exploding usage of 'smart' mobile devices in general - personnel, org-wide, technological leap

  • Growing use of BYOD among enterprise employees - personnel get more opportunity to tap into the client systems and data as organizations open up accesses to employees.

  • Rapid use of mobile devices for other aspects of daily life - communication, mails, social media, entertainment - to make a convenient platform for including analytics.

  • Flexibility of usage and availability on the go - from a straight-line process to an agile one.

  • Advanced functionality of apps and devices, driven by enhanced hardware and software.

  • Technology growth to aid predictive analysis and user data customization.

 

Suggestions/Future Prep: 


  • The concept of mobile analytics is well known but seldom applied in practice; this gap could be leveraged further to achieve customer delight.

  • The analytics functionality on ERP systems remains a niche area. Consultants could be empowered with training on this module to also include the mobile apps that are usually pre-built for such applications.

  • Another option to explore would be providing sample tablet devices (iPad or Android) to the respective practices to enable learning, hands-on experience, and PoC work.

  • From the case studies and also from project experience, it is observed that even though customers may be aware of the implications of mobile analytics on their processes, a PoC is helpful in all cases to create the right focus to open up further avenues of engagement.


Conclusion:

 

The advent of the mobile platform has been another epoch-making event, probably among the top 20 inventions that have significantly changed lifestyles across the world. Added to this, parallel advancements in related areas such as data analysis, cloud computing, and big data have been instrumental in converging the big with the best, giving rise to a concept such as mobile analytics. Since the concept is still at a nascent stage, it offers great potential for further exploration of its myriad use-case scenarios and adaptability, which could lead to several success stories of Analytics and the App!


 

End of part 3...hope you found this interesting - Please do leave your valuable feedback!

Part1 :  http://www.infosysblogs.com/oracle/2017/09/analytics_and_the_app_1.html

Part2 :  http://www.infosysblogs.com/oracle/2017/10/analytics_and_the_app_2.html

 

Analytics and the APP!

 

Welcome Back!!! In part 1 we saw an example of analytics being used on a mobile platform - tablet - to realize the retail store objectives and gain advantage of real time data updates. In part 2 let us take a look at more case studies across similar leading products...

 

Case Study 2:

 

Scenario - The client is a US-based Fortune 500 energy and utilities company with approximately 10 million customers. Their integrated business model provides a strong foundation for success in this vertical, which is experiencing dramatic changes. They strongly believe in capitalizing on emerging trends and technologies to provide insight into operations that will drive value for their end customers.

 

Background - The organization uses Oracle, one of the top ERP applications, for its myriad business processes. As part of this PoC, the Infosys team set up custom analytics solutions for the client. Oracle's BI tool, OBIEE 12c, is used here to showcase the breadth of the analytics capability available across Oracle's wide array of modules.

 

Problem Statement - The client needed to do a comparative evaluation between two mobile analytics applications as part of their PoC to be reviewed by their senior management.

 

PoC details - The PoC was aimed at OBIEE's ability to work on a mobile platform, and also at a comparative demo of features between MicroStrategy (another analytics tool) and the Oracle tools (apps). A commonly identified set of features was compared; in most cases a feature was available in both tools, but the way it was enabled differed between OBIEE and MicroStrategy.

 

Pilot & Feedback - For the pilot, the app was shared only among the senior management in the organization. The focus group was impressed to see that OBIEE could provide the features needed and appreciated the way it is achieved in OBIEE, which was different from their current applications. Further using OBIEE on mobile presented a very unique but scalable scenario as it proved to be a seamless extension to the existing suite of oracle products and which meant lesser chance of data integrity issues. Post the successful demo, client is now evaluating an option of a complete migration to OBIEE with preference to the analytics app as it aligns successfully with their established principles.

 

Being an energy and utilities company, the organization always needs the latest status and forecasts in a rapidly changing environment with unpredictable trends. The analytics tool on mobile has brought the leadership very close to data and trends that were hitherto not feasible to obtain. Management can now make informed decisions much faster and just as easily track the results through OBIEE. The time and effort savings are also huge, since stakeholders can pull their own graphs and data analyses first-hand, without the chance of error. As the gap between technology, user, and data/solution is greatly reduced, leadership is now keen on applying this model to other areas of analytics.

 

Case Study 3:

 

Scenario - The client is a global tool manufacturing corporation with interests in construction, heavy equipment and technological solutions. They excel through outstanding innovation, top quality, direct customer relations and effective marketing. The client also has its own production plants as well as research and development centers in Europe and Asia, and prides itself on integrating the interests of all its partners, customers, suppliers and employees into its growth and sustenance strategies.

 

Background - The client uses the SAP package and tools to run their analytics platform, integrating the various aspects of their business from planning to customer feedback and support. Combining areas like technology, design, real-time feedback and automated order processing with metrics like quantity, geographical location and customer database, the analytics tool (SAP's BI system) provides the inputs stakeholders need to catch up on the latest available information and trends.

 

Problem Statement - The client needs an on-the-go platform to deploy their analytics solution to enable salesforce and service personnel to meet client demands as and when they arise in an automated fashion.

 

Introduction of Mobile Analytics - About two-thirds of the organization's workforce is employed directly for their customers, in sales organizations and in engineering, averaging about 200,000 customer contacts every day. This creates a constant need to be up to speed with the latest end-customer data. A ready reckoner for this situation is SAP mobile analytics (otherwise known as Roambi), which most employees in the organization use on a daily basis. Further, the entire solution is a cloud-based model, so they get the best of both worlds, cloud computing and mobile applications. This has proved very advantageous to on-the-job salesmen, technicians, customer support staff, and even top executives discussing an org-level strategy.


A real-life scenario involves the following situation:


  • A critical time bound customer order is not received at site on time.

  • However, the automated tracking tool, looking for the delivery report, has sensed it and raised an alert to the support center of the tools manufacturer.

  • This triggers the next set of established workflows in order to compensate for the delay in delivery.

  • Alerts sent to the nearest customer support personnel through a geo fencing feature enables the employee to locate the nearest outlet/warehouse/distribution center for the right part.

  • The support person raises a request under the right priority and is able to head over to the site to personally supervise the final delivery

All this has actually taken place on-the-go using the mobile device loaded with the BI tools and supporting applications to augment the corrective actions.


In this particular scenario, even customer delight can be captured as feedback on the same mobile device, and back at corporate HQ the top management can view a real-time heat map/graph showing customer satisfaction survey results processed seamlessly through the cloud.

 


End of part 2... in part 3 we will review the inferences and conclusion.

Part1 :  http://www.infosysblogs.com/oracle/2017/09/analytics_and_the_app_1.html

Part3 :  http://www.infosysblogs.com/oracle/2017/10/analytics_and_the_app_3.html


Continue reading " Analytics and the APP! " »

October 29, 2017

Migrate Oracle Hyperion Cloud Applications(PCMCS) on Autopilot!!! Using EPMAutomate




What is EPMAutomate?

The EPMAutomate utility helps automate administrators' activities for Oracle EPM Hyperion cloud products.


What is Migration and why is it required?

Migration of an application in the cloud is required to move the application from a Test instance to a Production instance and vice versa. Manual migration of an application across instances can take hours; it can be automated using the EPMAutomate utility, which reduces the time from hours to minutes. Migration of an application includes data, metadata, security, Data Management, reports, etc., i.e., every artifact of the application is migrated using the EPMAutomate utility without manual intervention. Migration can be server-to-cloud or cloud-to-cloud, and it is always preferable to move the backup artifact from server to cloud. The example here is demonstrated with a PCMCS application.

Migration methods:


  1. Server to Cloud

  2. Cloud to Cloud


  A) Steps to automate Server to Cloud migration from the daily backup process using the EPMAutomate utility in PCMCS


  1. Log in to the PCMCS workspace by entering the Identity Domain, User Name, and Password, and click Sign In.



 

2. Delete the existing application (if any) from the target instance into which the new application will be migrated and imported from the other instance.

Click Application->Application


3. Click 'X' to delete the application and click Yes.


 

4. Now, modify the attached reusable sample batch script with the relevant URL and user credentials to automate the migration process using the EPMAutomate utility.

Sample Script:


@echo off

rem This script is used to perform On-Demand migration of Artifact Snapshot using

rem server backups i.e server to cloud migration

rem Update the parameters: url (Target url), Folder (source folder) and SnapshotName as per requirement

rem Please make sure application has been deleted from target instance before importing snapshot into it

SET url=

SET user=abc

SET password=D:\Oracle\password.epw

SET IdentityDomain=

SET Folder=D:\Oracle

SET SnapshotName=Artifact_Snapshot_09_13_2017_1322

SET UploadedSnapshot=%SnapshotName%.zip


call epmautomate login %user% %password% %url% %IdentityDomain%

timeout 10

call epmautomate uploadfile %Folder%\%UploadedSnapshot%

timeout 8

call epmautomate importsnapshot %SnapshotName%

timeout 10

call epmautomate logout           



5. Trigger the .bat script:

The relevant snapshot is uploaded to the cloud.


 

 

6. Once the migration completes, check the migration report in the PCMCS workspace.

Click Application->Migration->Reports->Migration Status

  

B) Steps to automate Cloud to Cloud migration of the daily backup snapshot using the EPMAutomate utility in PCMCS



  1. Follow steps 1 to 3 from section A

  2. Use the script below to migrate the artifact snapshot from one cloud instance to another

     

    The 'copysnapshotfrominstance' command is used to copy the artifact snapshot across cloud instances.

     

    Sample Script:

    @echo off

    rem This script is useful to migrate Artifact Snapshot from Test to Production instance 

    rem Update the following parameters based on requirement

    SET url=

    SET FromURL=

    SET user=

    SET password=D:\Oracle\password.epw

    SET IdentityDomain=

    SET SnapshotName=Artifact Snapshot

    call epmautomate login %user% %password% %url% %IdentityDomain%

    timeout 10

    rem mdeshmukh is the user name on the source instance; the snapshot name is quoted because it contains a space
    call epmautomate copysnapshotfrominstance "%SnapshotName%" mdeshmukh %password% %FromURL% %IdentityDomain%

    timeout 8

    call epmautomate importsnapshot "%SnapshotName%"

    timeout 10

    call epmautomate logout