Infosys’ blog on industry solutions, trends, business process transformation and global implementation in Oracle.


December 30, 2017

Blockchain-based Logistics of the future...

Issue: Today, a supplier/3PL estimates the cost of transportation. The carrier then completes the service and sends the supplier/3PL an invoice with the actual cost. This invoice is reviewed, approved, and settled. The review process is often quite lengthy and could be avoided.

With the aid of emerging technologies: Imagine using a blockchain technology like Ethereum. The smart contracts inherent in Ethereum can digitally validate milestones and trigger payments. Proof of delivery is an important milestone in shipment execution. One can imagine setting up Ethereum to digitally validate the proof of delivery and even make payments right away, thereby eliminating the need for carriers to send invoices and then wait for months to get paid. Of course, this is only the basic golden transaction. There could be additional charges that the carrier incurred which were not part of the original contract; these can always be rectified with an ad-hoc invoice from the carrier.

All the carriers could save their contracts in the blockchain itself. Once the shipment is planned, the supplier/3PL gets visibility into the contracts available for the shipment's route. The blockchain automatically triggers a tender to the best carrier and, if the carrier accepts, sends it back to the supplier/3PL for approval. If rejected, the blockchain iteratively awards the tender to the next best carrier. This process repeats until the tender is accepted by both parties and shipment execution begins.

Once the carrier picks up the goods, the blockchain triggers a 25% payment to the carrier, as configured in the milestones. Ethereum smart contracts can come in handy here. The data feed itself could be received from smart-locking IoT devices installed on the truck's doors. When the supplier/3PL planner overseeing the loading process locks the truck's door with the smart lock, it prompts him for identification. If he identifies himself via the fingerprint scanner on the lock, the event is sent to the blockchain. The smart contracts listening to the milestones respond with the necessary advance payment, which is sent to the carrier's wallet address but stays in "pending" status. The subsequent milestones also trigger advance payments, and all of them land in the carrier's wallet address with "pending" status. Only the final proof-of-delivery milestone validates the advance payments and sets the status to "validated and paid."

The final proof of delivery also relies on the smart lock IoT device in the case of large FTL (full truckload) shipments. For smaller parcel shipments, this process has to be reimagined with the familiar old-fashioned signature on a mobile screen...
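As a thought experiment, here is a minimal sketch of the milestone-driven payment logic described above, written in plain Java as a conceptual model rather than an actual Ethereum smart contract; the milestone names, the 25/25/50 split, and the "pending percent" bookkeeping are illustrative assumptions, not part of any real implementation.

import java.util.*;

// Conceptual model of milestone-triggered carrier payments (not an actual smart contract).
enum Milestone { PICKUP, IN_TRANSIT, PROOF_OF_DELIVERY }

class ShipmentContract {
    // Assumed advance split per milestone.
    private final Map<Milestone, Integer> advancePercent =
            Map.of(Milestone.PICKUP, 25, Milestone.IN_TRANSIT, 25, Milestone.PROOF_OF_DELIVERY, 50);
    private int pendingPercent = 0;   // advances sitting in the carrier wallet in "pending" status
    private boolean validated = false;

    // Called when an IoT event (e.g. the smart lock) reports a milestone.
    void reportMilestone(Milestone m) {
        pendingPercent += advancePercent.get(m);
        if (m == Milestone.PROOF_OF_DELIVERY) {
            validated = true;          // the final POD validates all pending advances
        }
        System.out.println(m + " -> " + pendingPercent + "% " + (validated ? "validated and paid" : "pending"));
    }
}

public class MilestoneDemo {
    public static void main(String[] args) {
        ShipmentContract c = new ShipmentContract();
        c.reportMilestone(Milestone.PICKUP);
        c.reportMilestone(Milestone.IN_TRANSIT);
        c.reportMilestone(Milestone.PROOF_OF_DELIVERY);
    }
}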

Meet our experts at the Modern Supply Chain Experience 2018, January 29-31, 2018

- Kranthi Askani

December 29, 2017

Comparative Study Between Oracle Big Data Cloud Service and Compute Engine


Comparative study between Oracle BDCS and Oracle Big Data Cloud Compute Engine.


1.             Oracle Big Data Cloud Service (OBDCS): Gives us access to the resources of a preinstalled Oracle Big Data environment, which comes with a full installation of Cloudera Distribution Including Apache Hadoop (CDH), with open source Apache Hadoop and Apache Spark. It can be used to analyze data generated from social media feeds, e-mail, smart meters, etc.

OBDCS contains:

·         A cluster of 3 to 60 nodes; 3 is the minimum number of cluster nodes (OCPUs) to start with, and the processing power and secondary memory of the cluster can be extended by adding compute nodes ("bursting")

·         Linux Operating System Provided by Oracle

·         Cloudera Distribution with Apache Hadoop (CDH):

-          File System: HDFS to store different types of files

-          MapReduce engine (YARN is the default for resource management)

-          Administrative framework; Cloudera Manager is the default

-          Apache projects, e.g. ZooKeeper, Oozie, Pig, Hive, Ambari

-          Cloudera applications: Cloudera Enterprise Data Hub edition, Impala, Search and Navigator


·         Built-in utilities for managing data and resources

·         Oracle Big Data Spatial and Graph

·         Oracle Big Data Connectors:

-          Oracle SQL Connector for HDFS

-          Oracle Loader for Hadoop

-          Oracle XQuery for Big Data

-          ORE Advanced Analytics for Big Data

-          ODI Enterprise Edition


Typical workflow of OBDCS: Purchase a subscription -> Create and manage users and their roles -> Create a service instance -> Create an SSH key pair -> Create a cluster -> Control network access to services -> Access and work with your cluster -> Add permanent nodes to a cluster -> Add temporary compute nodes to a cluster (bursting) -> Patch a cluster -> Manage storage providers and copy data

odiff (Oracle Distributed Diff) is an Oracle-developed tool that compares huge, sparsely stored data sets using a Spark application; it is compatible with CDH 5.7.x. The maximum file/directory size it can compare is 2 GB.



2.             Oracle Big Data Cloud Compute Engine (OBDCCE): Oracle Big Data Cloud combines open source technologies such as Apache Spark and Apache Hadoop, along with tools and technologies developed by other vendors like Hortonworks, with innovations from Oracle to provide a complete Big Data platform for running and managing Big Data Analytics applications. It leverages Oracle's Infrastructure Cloud Services for a holistic solution with proper security, reliability and elasticity. It provides the following:


·         Spin up multiple Hadoop or Spark clusters in minutes

·         Use built-in tools such as Apache Zeppelin to understand and process data

·         Use open interfaces to integrate third-party tools to analyze data

·         Launch multiple clusters at the same time against a centralized data lake, achieving data sharing without compromising job isolation

·         Create very small clusters or very large ones based on workload and business requirements

·         Elastically scale the compute and storage tiers independently of one another, either manually or automatically

·         Pause a cluster when it is not in use

·         Use REST APIs to monitor, manage, and utilize the service


Typical Workflow of OBDCCE: Sign up for a free credit promotion or purchase a subscription -> Add and manage users and roles -> Create an SSH key pair -> Create a cluster -> Enable network access -> Load data -> Create and manage jobs -> Create and manage notes -> Monitor clusters -> Monitor the service


Big Data Cloud can be accessed in many ways: using the CLI, REST API, SSH (PuTTY), console, etc.


Differences between OBDCS and OBDCCE:

| Oracle Big Data Cloud Service (OBDCS) | Oracle Big Data Cloud Compute Engine (OBDCCE) |
| Preconfigured PaaS service with CDH and Apache Spark | Includes both Apache Spark and Hadoop with unique innovations from Oracle |
| Defined cluster size, with a minimum of 3 and a maximum of 60 nodes | No defined cluster size; nodes can be added as and when needed |
| Uses Cloudera Manager and YARN as the default administrative service and resource manager | No default administrative service |
| Uses CDH Impala and various other Cloudera applications for understanding data | Uses Apache Zeppelin to understand data better |
| Default Oracle Big Data Connectors are provided for data loading/unloading and Oracle R integration | No default Oracle Big Data Connectors; they can be added/installed as required |
| The odiff utility is used to find differences between large datasets/files/directories | Zeppelin notebooks are used to write code to explore and visualize data |
| The odcp utility is used to load large datasets into distributed environments such as HDFS, Amazon S3, Oracle Cloud Storage, etc. | Data can be loaded to Cloud Storage and HDFS by simply browsing to a file on the local system or using a URL |

December 28, 2017



Blockchains are incredibly popular nowadays. But before understanding what a blockchain is and how it works, let's look at how the current digital finance system works.

Let's say you buy a book on Amazon and pay for it using Paytm. Paytm charges your credit card, and the credit card charges your bank account. Before the money reaches the seller's bank account, there are a couple of intermediaries involved, and each of them takes a transaction fee because they are all part of this purchase.

Who is paying them?

Obviously, it's you!

If you use cash instead, there are no intermediaries: you hand the money to the seller and take the goods, and no fee is charged. It would be great if we had the same system on the internet.

Bitcoin does exactly this. Bitcoin is built on a technology called blockchain.

But blockchain is much more than Bitcoin. Blockchain is arguably the most disruptive technology for the future of the financial services industry.


A blockchain is a chain of blocks that contain information. It is an open, distributed ledger.

"The blockchain is an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value."
Don & Alex Tapscott, authors Blockchain Revolution (2016)

Once some data has been recorded inside a blockchain, it becomes very difficult to change. Each block contains some data, the hash of the block, and the hash of the previous block.


Data is the transactional information stored in the block. What gets stored inside a block depends on the type of blockchain.

Example: Data inside a bitcoin block



The hash is a value that identifies a block and all of its contents, and it is always unique, like a fingerprint. Once a block has been created, its hash is calculated. Changing anything inside the block causes the hash to change; in other words, the hash is very useful for detecting changes to a block. If the hash value changes, it is no longer the same block.

HASH OF PREVIOUS BLOCK:                                                              

Every block stores the hash of the previous block. This effectively creates a chain of blocks, and this technique is what makes a blockchain so secure.



Here, we have a chain of three blocks. Each block has data, a hash, and the hash of the previous block. Block 3 points to block 2, and block 2 points to block 1. Block 1 cannot point to a previous block, so it has no previous-block hash value; this block is called the genesis block.



Let's say we tamper with the second block. This causes the hash of that block to change. In turn, that makes block 3 and all following blocks invalid, because they no longer store a valid hash of the previous block. So changing a single block makes all the following blocks invalid.


Using hashes alone is not enough to prevent tampering. Computers these days are fast and can calculate thousands of hashes per second. One could effectively tamper with a block and recalculate the hashes of all the following blocks to make the blockchain valid again. To mitigate this, blockchains have something called proof of work.

Proof of work is a mechanism that slows down the creation of new blocks. In the case of Bitcoin, it takes about 10 minutes to calculate the required proof of work and add a new block to the chain. This mechanism makes it very hard to tamper with the blocks, because if you tamper with one block, you need to recalculate the proof of work for all the following blocks. So the security of a blockchain comes from its creative use of hashing combined with the proof-of-work mechanism.
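As a minimal illustration of these two ideas, here is a small Java sketch (not Bitcoin's actual implementation) of a block that stores its data, its own SHA-256 hash and the previous block's hash, plus a naive proof-of-work loop; the difficulty of two leading zeros and the transaction strings are arbitrary assumptions for the example.

import java.security.MessageDigest;

// Minimal illustration of hash chaining and proof of work.
class Block {
    String data;
    String previousHash;
    long nonce = 0;
    String hash;

    Block(String data, String previousHash) throws Exception {
        this.data = data;
        this.previousHash = previousHash;
        mine(2); // assumed difficulty: hash must start with two zeros
    }

    String computeHash() throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest((data + previousHash + nonce).getBytes("UTF-8"));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b & 0xff));
        return sb.toString();
    }

    // Naive proof of work: keep changing the nonce until the hash meets the difficulty target.
    void mine(int difficulty) throws Exception {
        String target = "0".repeat(difficulty);
        do {
            nonce++;
            hash = computeHash();
        } while (!hash.startsWith(target));
    }
}

public class ChainDemo {
    public static void main(String[] args) throws Exception {
        Block genesis = new Block("genesis", "0");           // the genesis block has no previous hash
        Block second  = new Block("A pays B 10", genesis.hash);
        Block third   = new Block("B pays C 5", second.hash);

        // Tampering with the second block changes its hash, so block 3 no longer points to it.
        second.data = "A pays B 1000";
        System.out.println(third.previousHash.equals(second.computeHash())); // false -> chain is broken
    }
}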


There is one more way that blockchains secure themselves, and that is by being distributed. Instead of using a central entity to manage the chain, a blockchain uses a peer-to-peer network that anyone is allowed to join. When someone joins this network, they get a full copy of the blockchain, and the node can use it to verify that everything is still in order.


Let's see what happens when someone creates a new block.

The new block is broadcast to every mining node in the network. Miners approve the block if it is valid and add it to their copy of the blockchain. All the nodes in the network thereby reach consensus: they agree about which blocks are valid and which are not. Blocks that have been tampered with are rejected by the other nodes in the network. So to successfully tamper with a blockchain, you would need to tamper with all the blocks on the chain, redo the proof of work for each block, and take control of more than 50% of the peer-to-peer network; only then would your tampered block be accepted by everyone else. This is almost impossible to do.


Blockchains are constantly evolving. Bitcoin is just the beginning. In the future, blockchains that manage and verify online data could enable us to launch companies that are run entirely by algorithms, make self-driving cars safer, help us protect our online identities, and even track the billions of devices on the Internet of Things. These innovations will change our lives forever, and it is all just beginning.

December 27, 2017

Hyperion Essbase Metadata Comparison Tool

Hyperion Essbase Metadata Comparison Tool:

In my earlier blog, I explained how the Essbase Java API can be used for alternate hierarchy sync-up. In this blog, I am going to explain how the Essbase API can be used to find inactive members in the Essbase outline. I followed a procedure similar to the alternate hierarchy sync-up tool to develop this Essbase metadata comparison tool.

During the Essbase implementation, we received a lot of new members from the SAP source system for testing and loaded them into the Essbase outline. Later, those test members were not used for any reporting or validation, so the business admin asked us to find those test members and remove them from the outline. It is very difficult to find the test members manually, so I developed a tool that compares the Essbase outline members against the metadata files received from the source system and checks whether there are any inactive members in the outline.

Hyperion Essbase API:

I used the Essbase Java API to develop this Essbase metadata comparison tool. The Essbase API has efficient features that can be used for Hyperion administration tasks, such as outline extraction, user provisioning, etc. I developed Java programs using the Essbase API to pull the Essbase outline members for a specific dimension, so that the extracted level 0 members can be compared against the source files received from the SAP system.

Metadata Comparison:


The following flow chart describes the flow of the Essbase metadata comparison against the source metadata files.


Figure 1

A) Business Requirements:


The SAP source system provided metadata source files to the business, and these source files were loaded into the Essbase outline to perform system testing and mock testing as per the business requirements. The SAP development team created many dummy members to test the metadata extraction functionality in the SAP source system, and these dummy members were loaded into the Essbase outline through the automated dimension build process.


Later, the business admin asked us to find the dummy or inactive members and remove them from the Essbase outline to improve the performance of the application. Initially this comparison was done manually by the developers using Excel VLOOKUP, but that method was time consuming and prone to human error in the end results. I therefore developed a comparison tool using the Java API, which pulls the outline members from the Essbase application and compares them against the metadata source files. During the testing phase, this tool significantly reduced the manual effort.

B) Comparison Tool

The aim is to extract the leaf-level members from the Essbase outline and perform a VLOOKUP against the source metadata files to produce the members missing from the source files; those missing members may exist as inactive members in the Essbase outline.

This metadata comparison tool has the following sub modules.

1.    Essbase OLAP Server Sign-On

2.    Extracting Level 0 members from Essbase outline

3.    VLookup Operation

4.    Check whether there is any data for the missing members

5.    Inactive members report


          The following are the Essbase API packages which have been used to develop the comparison tool:









Figure 2

C)  Pseudo Code


The following pseudo code is used for building this comparison tool.

1.  Essbase OLAP Server Sign-On


To sign on to the Essbase application, you have to pass the login credentials, the name of the OLAP server, and the URL of the provider services as parameters.



ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);

/* Please provide username, password, and provider services URL */
IEssDomain dom = ess.signOn(ess_userName, ess_password, false, null, s_provider);

olapSvr = (IEssOlapServer) dom.getOlapServer(ess_olapSvrName); // Give the OLAP server name

olapSvr.connect(); // Sign on to the Essbase server

IEssCube cube = olapSvr.getApplication("TestApp").getCube("TestDB");


2.  Level 0 Members Extraction:


The source system provides the required metadata for business reporting. You need to extract all the level 0 members for the specific dimension from the Essbase outline so that they can be compared with the SAP source file.

The method listAccountMembers() pulls all the leaf-level members from the Account dimension and writes the output to an Excel or CSV file. If you need to pull specific members from another base dimension, you should update the arguments of the executeQuery() method. In the code below, executeQuery() extracts the level 0 members from the Account dimension.

static void listAccountMembers(IEssCube cube) throws EssException {

    FileWriter fileWriter = null; // File object for the output CSV

    try {
        IEssMemberSelection mbrSel = cube.openMemberSelection("Sample"); // Member selection on the cube

        // Accounts - standard dimension, QUERY_TYPE_BOTTOMLEVEL - level 0 members only
        mbrSel.executeQuery("Accounts", IEssMemberSelection.QUERY_TYPE_BOTTOMLEVEL,
                IEssMemberSelection.QUERY_OPTION_MEMBERSONLY, "Accounts", "", "");

        IEssIterator mbrs = mbrSel.getMembers();

        fileWriter = new FileWriter("ZAccounts.csv"); // Create the output file

        for (int i = 0; i < mbrs.getCount(); i++) { // Iterate over each bottom-level member in the outline
            IEssMember mbr = (IEssMember) mbrs.getAt(i); // Get the member
            fileWriter.append(mbr.getName() + "," + mbr.getAlias("Default") + "," + mbr.getShareOption() + "\n"); // Append the member to the output file
        }

        fileWriter.close();

    } catch (EssException essx) {
        System.err.println("Error: " + essx.getMessage());
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }
}
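For orientation, here is a rough sketch of how the sign-on from step 1 and listAccountMembers() from step 2 might be wired together in a main() method. The class name, credentials, server name and provider services URL are placeholders, and the import paths and the disconnect/signOff cleanup calls are written from memory of the Essbase Java API, so treat them as assumptions rather than part of the original tool.

import com.essbase.api.base.EssException;
import com.essbase.api.datasource.IEssCube;
import com.essbase.api.datasource.IEssOlapServer;
import com.essbase.api.domain.IEssDomain;
import com.essbase.api.session.IEssbase;

public class ComparisonTool {
    // listAccountMembers(IEssCube cube) from step 2 is assumed to be defined in this same class.

    public static void main(String[] args) {
        IEssbase ess = null;
        IEssOlapServer olapSvr = null;
        try {
            ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
            // Placeholder credentials, server name and provider services URL.
            IEssDomain dom = ess.signOn("admin", "password", false, null, "http://host:9000/aps/JAPI");
            olapSvr = (IEssOlapServer) dom.getOlapServer("EssbaseServer");
            olapSvr.connect();
            listAccountMembers(olapSvr.getApplication("TestApp").getCube("TestDB")); // step 2
        } catch (EssException e) {
            System.err.println("Error: " + e.getMessage());
        } finally {
            try { if (olapSvr != null) olapSvr.disconnect(); } catch (Exception e) { }
            try { if (ess != null) ess.signOff(); } catch (Exception e) { }
        }
    }
}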


3.   VLookup Operation:

In the code above, the leaf-level members are fetched from the Account dimension using the listAccountMembers() method; these level 0 members should then be compared with the source metadata account file to generate the absent members. The GetAccounts() method performs the VLOOKUP operation, and all the missing members are written to the output file.

The pseudo code below reads the members from the Accounts.CSV file generated by the listAccountMembers() method and the members from the glaccount.csv file uploaded from the source system.


private static void GetAccounts(IEssbase essX, IEssCubeView csv) throws EssException {

   List<String[]> colSAP; // Members from the SAP source file
   List<String[]> colHyp; // Level 0 members extracted from Essbase
   boolean matchVar = false;
   FileWriter fileW = null;     // File writer for the output
   String essData = null;       // Holds the data-check result produced in step 4

   try {
        colHyp = readFile("Accounts.CSV");   // Read the extracted level 0 members
        colSAP = readFile("glaccount.csv");  // Read the source metadata file
        fileW = new FileWriter("CompareAccounts.csv"); // Output file for missing members

        for (int i = 0; i < colHyp.size(); i++) {       // For each line from the Essbase extraction file
             matchVar = false;
             for (int j = 0; j < colSAP.size(); j++) {  // For each line in the SAP source file
                  if (colHyp.get(i)[0].equals(colSAP.get(j)[0])) { // VLookup operation
                       matchVar = true;
                       break;
                  }
             }
             if (!matchVar) {
                  fileW.append(colHyp.get(i)[0] + " " + essData + "\n"); // Append the missing member to the output file
             }
        }
        fileW.close();
   } catch (IOException ex) {
        ex.printStackTrace();
   }
}
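GetAccounts() relies on a readFile() helper that is not shown in the original pseudo code. A minimal sketch of such a helper is given below; the method name comes from the pseudo code above, while the naive comma split is my assumption, and a real CSV parser would be needed to handle quoted fields.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Assumed helper: reads a CSV file and returns one String[] of columns per line.
static List<String[]> readFile(String fileName) throws IOException {
    List<String[]> rows = new ArrayList<>();
    try (BufferedReader reader = new BufferedReader(new FileReader(fileName))) {
        String line;
        while ((line = reader.readLine()) != null) {
            if (!line.trim().isEmpty()) {
                rows.add(line.split(",")); // naive split; quoted fields are not handled here
            }
        }
    }
    return rows;
}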





4.   Check whether there is any data for the missing members

Once the list of missing members is populated after the VLOOKUP operation, the script checks whether there is any data for those missing members. If data is found for a member, that member may be retained in the outline.

The method performCubeViewOperationACC() verifies data for the year FY17; the year can be updated in the code if needed.

static String performCubeViewOperationACC(IEssbase ess, IEssCubeView cv, String opeStr) throws EssException {

    String value = null;

    IEssGridView grid = cv.getGridView(); // Grid object for data retrieval
    grid.setSize(5, 10);                  // Set the grid size

    grid.setValue(3, 1, opeStr);           // opeStr - the level 0 member being checked
    grid.setValue(1, 2, "FY17");           // Year
    grid.setValue(0, 2, "Total Entities"); // The members below are optional POV selections
    grid.setValue(2, 2, "YearTotal");
    grid.setValue(0, 5, "Actual");
    grid.setValue(0, 6, "Final");
    grid.setValue(0, 7, "LOC");

    IEssOperation op = cv.createIEssOpRetrieve(); // Create the retrieve operation
    cv.performOperation(op); // Performs data retrieval for the given POV

    // Checks whether any data is present for the member
    if (grid.getValue(3, 2).toString().equals("#Missing")
            && grid.getValue(3, 3).toString().equals("#Missing")
            && grid.getValue(3, 4).toString().equals("#Missing")) {
        value = "- This member has No Data (#Missing)";
    } else {
        value = "- This member has Data";
    }

    return value;
}



5.   Inactive members report

Finally, the list of missing members from the Essbase outline is produced. These members can be deleted from the outline if they are not required.



The example below shows how the missing members are found using the comparison tool.

Level 0 members from Essbase outline - Account Dimension:

Account Parent

Account Child
















Level 0 members from Source System:

Account Parent

Account Child












After the execution using Comparison tool:

Output File:

Missing Members

30000XX - This member has No Data (#Missing)



I have used this tool in all the testing phases, and it saved a lot of effort. There is also an option to delete the members automatically from the outline. Based on the business requirements, you can modify the Java program and execute it.


1.    Oracle Documentations


December 26, 2017

Hyperion Essbase Alternate Hierarchy Sync Up

Hyperion Essbase Alternate Hierarchy Sync Up

I was working on an Essbase implementation project for a retail customer, and the business asked us to create alternate hierarchies to fulfill their month-end reporting requirements. These alternate hierarchies are derived from the base hierarchy and should be updated whenever a new member is added to the base hierarchy. The business admin therefore has to find the new members added to the base hierarchy and add them to the alternate hierarchy, but it is difficult to spot new members added to the Essbase outline. In this blog, I explain how an alternate hierarchy sync-up tool can be used to find the new members added to the base hierarchy.

Hyperion Essbase API:

The Essbase Java API is an interface with powerful features such as application and database administration. I wrote a Java program using the Essbase API that extracts the outline members from the base hierarchy and performs a VLOOKUP against the alternate hierarchy. The members could also be added to the alternate hierarchy automatically, but the business wants to review the members and add them manually.

Alternate Hierarchy Sync up:

The flow chart below describes the flow of the Essbase alternate hierarchy sync-up process.


Figure 1


Business Requirements:


Business users were using the alternate hierarchy for month-end reporting, and these alternate hierarchies were built manually by the business based on the base hierarchy structure. If new members are added to the base hierarchy, those members may also need to be added to the associated alternate hierarchy structure, so that the base and alternate hierarchies stay in sync. After the dimension build process, users had to manually search for the new members added to the base hierarchy and add them to the alternate hierarchy, which was very time consuming.

The requirement was to develop and automate a process for identifying the new members added to the base hierarchy that have not yet been added to the alternate hierarchy. I developed an alternate hierarchy sync-up tool using the Essbase API, which pulls the outline members from the Essbase application and compares them against the given alternate hierarchy to provide the new member details. This tool has saved a lot of effort in building the alternate hierarchy, and the sync process can also be automated using it.

A)  Alternate Hierarchy Sync up tool:

Essbase API Java programming is used to develop this alternate hierarchy sync-up tool, which extracts the level 0 members from the outline base hierarchy and performs a VLOOKUP against the alternate hierarchy members to produce the members missing from the alternate hierarchy. These missing members may then be added to the alternate hierarchy if they are needed for business reporting.

This Alternate Hierarchy Sync up tool has the following modules.

1.    Connecting into the Essbase Server

2.    Extracting Level 0 members from Base Hierarchy

3.    Extracting Level 0 members from Alternate Hierarchy

4.    VLookup

5.    Generate the results

             The following are the Essbase API packages used in the Java program:









Figure 2

B) Pseudo Code


The pseudo code in the sections below explains how the alternate hierarchy sync-up tool is built.

1.     Connecting into the Essbase Server

The parameters - username, password, OLAP server name and the provider services URL - are passed to the signOn() method to sign on to the Essbase application.

The connection is created for TestApp.TestDB application.


ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);

/* Please provide username, password, and provider services URL */
IEssDomain dom = ess.signOn(UserName, PWD, false, null, s_provider);

olapSvr = (IEssOlapServer) dom.getOlapServer(s_olapSvrName);

olapSvr.connect(); // Connect to the Essbase server

IEssCube cube = olapSvr.getApplication("TestApp").getCube("TestDB");


Once the connection is made, the tool reads the names of the base hierarchy and the alternate hierarchy from the user.

System.out.println("\nEnter Base Hierarchy Name (Ex - 200):");

                   Scanner baseScanner = new Scanner(;//Reads input from user

                   String baseHier = baseScanner.nextLine();


System.out.println("\n\nEnter Alternate Hierarchy Name (Ex - AH_200_Total):");

                   Scanner alterScanner = new Scanner(; //Reads input from user

                   String alterHier = alterScanner.nextLine();

2.    Extracting Level 0 Members from Base Hierarchy

The base hierarchy level 0 members are extracted and prepared for comparison with the level 0 members from the alternate hierarchy. The sync-up tool reads the hierarchy name from the user and extracts the level 0 members for that hierarchy only.

The method listBaseEntityMembers() pulls the level 0 members from the Entity dimension for the given hierarchy and writes the output to a CSV file. If you want to pull a different dimension, you need to modify the parameters of the executeQuery() method. In the sample code below, executeQuery() pulls the bottom-level members from the Entity dimension.

static void listBaseEntityMembers(IEssCube cube, String baseHier) throws EssException {

    FileWriter fileWriter = null; // File writer for the output CSV

    try {
        IEssMemberSelection mbrSel = cube.openMemberSelection("Sample"); // Member selection on the cube

        // Entities - base dimension, QUERY_TYPE_BOTTOMLEVEL - level 0 members under the given base hierarchy
        mbrSel.executeQuery(baseHier, IEssMemberSelection.QUERY_TYPE_BOTTOMLEVEL,
                IEssMemberSelection.QUERY_OPTION_MEMBERSONLY, "Entities", "", "");

        IEssIterator mbrs = mbrSel.getMembers();

        fileWriter = new FileWriter("ZBaseEnt_" + baseHier + ".csv"); // Create the output file

        for (int i = 0; i < mbrs.getCount(); i++) { // Iterate over each level 0 member in the outline
            IEssMember mbr = (IEssMember) mbrs.getAt(i); // Get the member
            fileWriter.append(mbr.getName() + "," + mbr.getAlias("Default") + "," + mbr.getShareOption() + "\n"); // Write the member to the file
        }

        fileWriter.close();

    } catch (EssException x) {
        System.err.println("Error: " + x.getMessage());
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }
}




3.    Extracting level 0 members from alternate hierarchy

The alternate hierarchy level 0 members are extracted and prepared for comparison with the level 0 members from the base hierarchy. The sync-up tool reads the alternate hierarchy name from the user and extracts the level 0 members for that hierarchy only.

The method listAlterHierMembers() pulls the level 0 members from the Entity dimension for the given alternate hierarchy and writes the output to a CSV file. If you want to pull a different dimension, you need to modify the parameters of the executeQuery() method. In the sample code below, executeQuery() pulls the bottom-level members from the Entity dimension and keeps only the shared members.

static void listAlterHierMembers(IEssCube cube, String alterHier) throws EssException {

    FileWriter fileWriter = null; // File writer for the output CSV

    try {
        IEssMemberSelection mbrSel = cube.openMemberSelection("Sample"); // Member selection on the cube

        // Entities - base dimension, QUERY_TYPE_BOTTOMLEVEL - level 0 members under the given alternate hierarchy
        mbrSel.executeQuery(alterHier, IEssMemberSelection.QUERY_TYPE_BOTTOMLEVEL,
                IEssMemberSelection.QUERY_OPTION_MEMBERSONLY, "Entities", "", "");

        IEssIterator mbrs = mbrSel.getMembers();

        fileWriter = new FileWriter("ZAltEnt_" + alterHier + ".csv"); // Create the output file

        for (int i = 0; i < mbrs.getCount(); i++) { // Iterate over each level 0 member in the outline
            IEssMember mbr = (IEssMember) mbrs.getAt(i); // Get the member
            if (mbr.getShareOption().toString().equals("Shared member")) { // Keep only shared members
                fileWriter.append(mbr.getName() + "," + mbr.getAlias("Default") + "," + mbr.getShareOption() + "\n"); // Write the member to the file
            }
        }

        fileWriter.close();

    } catch (EssException x) {
        System.err.println("Error: " + x.getMessage());
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }
}





4.     VLookup:

Now that the level 0 members have been extracted from both the base and alternate hierarchies for the given names, they should be compared to find the new members that have not been added to the alternate hierarchy. The GetEntities() method performs this VLOOKUP operation and lists all the members missing from the alternate hierarchy.

The code below reads the level 0 members from the base hierarchy CSV file generated by the listBaseEntityMembers() method and the level 0 members from the alternate hierarchy CSV file generated by the listAlterHierMembers() method.


private static void GetEntities(IEssbase ess, IEssCubeView cv, String baseHier, String alterHier) throws EssException {

   List<String[]> colAlt;  // Alternate hierarchy members
   List<String[]> colBase; // Base hierarchy members
   boolean match = false;
   FileWriter fileWriter = null; // File writer for the output
   String dataESS = null;        // String appended to each missing member row

   try {
        colBase = readFile("ZBaseHier.CSV");  // Read the base hierarchy members
        colAlt  = readFile("ZAlterHier.csv"); // Read the alternate hierarchy members
        fileWriter = new FileWriter("Results_Base&AlterHier.csv"); // New output file

        for (int i = 0; i < colBase.size(); i++) {       // For each line from the Essbase level 0 extraction file
             match = false;
             for (int j = 0; j < colAlt.size(); j++) {   // For each line in the alternate hierarchy file
                  if (colBase.get(i)[0].equals(colAlt.get(j)[0])) { // VLookup operation
                       match = true;
                       break;
                  }
             }
             if (!match) {
                  fileWriter.append(colBase.get(i)[0] + " " + dataESS + "\n"); // Write the missing member to the output file
             }
        }
        fileWriter.close();
   } catch (IOException e) {
        e.printStackTrace();
   }
}




5.    Produce the results

Finally, the list of new members that are not available in the alternate hierarchy is produced. These members can be added if they are needed for business reporting.

1.  Example

The example below shows how the new members are found using the alternate hierarchy sync-up tool.

Level 0 members from Base Hierarchy - Entity Dimension:

Entity Parent

Entity Child















Level 0 members from Alternate Hierarchy:

Entity Parent

Entity Child












After the execution using Alternate Hierarchy sync up tool:

Output File:

New  Members



2.  Conclusion:

I have used this tool while building alternate hierarchies based on business requests; it saved a lot of effort, and the sync process can also be automated.


1.    Oracle Documentations


December 19, 2017

Essbase Users Last Login Time Using API

Essbase Users Last Login Time Using API

             I was working for one of our retail customers, who were using Essbase applications for Actuals, Budget and Forecast month-end reporting. Several business users are provisioned to access the application based on their roles, but only a few users access the Essbase application regularly; the others access it only when there is a reporting need. To use the licenses efficiently, the business admin wanted to find each user's last login time, which helps the business remove inactive users from the application. Previously I would export the security file in a readable format and do a lot of manual work to pull the last login details. In the latest release it is easy to get the last login time from the EAS console, but that was not possible in the older version, so I wrote Essbase API scripts to automate the extraction of users' login details from the Essbase application.

Hyperion Essbase API:

              The Essbase API is an interface that provides powerful features such as application and database administration. The Essbase OLAP users' details can be extracted using Essbase API programs, and these programs can be run from any machine without worrying about where the server is located. I developed this Essbase users' last login time tool using such Essbase API programs.

Business Requirements:

              More than 300 users are provisioned to access the Essbase application for reporting, but only a few users connect to the application regularly. The business therefore asked us to find out whether there are any inactive users based on their last login time. The last login time helps the business revoke application access for users who are no longer using the application.

Generally, the last login time details are stored in the Essbase security file. Using that file to get the timing details requires a lot of effort to convert the raw data into information. This Essbase users' last login time tool automates the process of pulling the login time for every user, which has resulted in significant effort savings.

Essbase users last login time extraction

Essbase API Java programming is used to develop the users' last login time extraction tool, which extracts the last login time for every user provisioned in the Essbase application.

             The following are the Essbase API packages used to develop the tool:








Pseudo Code


I have written a simple Java program and included the pseudo code below to get the user login details. The following sections explain the pseudo code used for building the users' last login extraction tool.

1.    Connecting to Essbase Server

The user name, password, OLAP server name and the provider services URL should be passed as parameters to sign on to the Essbase application.

The sample code below creates the Essbase server connection.


/* Please provide username, password, and provider services URL */
ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);

IEssDomain dom = ess.signOn(userName, password, false, null, s_provider);

olapSvr = (IEssOlapServer) dom.getOlapServer(s_olapSvrName); // Provide the OLAP server name

olapSvr.connect(); // Creates the connection with the Essbase server


2.    Getting OLAP users

The method getOlapUsers() is used to get the last login time of all the Essbase users. It extracts all the users of the OLAP server along with their login details. Users may be granted access to Essbase only or to both Essbase and Planning applications; the application type is updated accordingly.

static void getOlapUsers(IEssOlapServer olapServer) throws IOException {

    // Create a file writer for the output report
    FileWriter fileWriter = new FileWriter("Essbase_Users_LastLogin.xls"); // New file - Essbase_Users_LastLogin.xls

    try {
        IEssIterator users = olapServer.getOlapUsers(null, null, null, false); // Get all the Essbase users

        int length = users.getCount();

        for (int i = 0; i < length; i++) {

            // Get the name of the user

            // Get the last login time and date; the convertTime() method corrects the format of the date and time

            // Write the user name, last login time and application type to the report via fileWriter
        }
    } catch (Exception x) {
        System.out.println("\n::getOlapUsers:: failed with error: " + x.getMessage());
    }

    fileWriter.close();
}


3.    Convert Time:

The convertTime() method is implemented to convert the last login time into a readable date/time format.

public static String convertTime(long time) {
          if (time == 0) { return "NeverLoggedIn"; }
          // Assumption: the value is seconds since the epoch; drop the * 1000L if it is already in milliseconds
          Date date = new Date(time * 1000L);
          SimpleDateFormat format = new SimpleDateFormat("MM/dd/yyyy hh:mm a");
          return format.format(date); // Converted into a readable date format
}




4.   Produce the results

I created a batch script to call this Java program; the Java classpath should be set before executing it. I executed the script, and a list of all users with their last login times was produced. A user's access can be revoked if they are no longer using the Essbase application.



Below is an example of the last login time for a few users, extracted using the Essbase users' last login time tool.

Essbase Users last login time:

  User Name

Last Login Time

Application Type


4/28/2017 01:00 PM




Essbase and Planning


9/30/2016 5:00 PM





This blog explains how users' last login times can be extracted from an Essbase application using the Essbase Java API.


1.    Oracle Documentations


December 18, 2017

Quick Tips for EPBCS Series 1 - Adding a little pep to the morning Latte makes the difference!

Quick Tips for EPBCS Series1- Adding a little pep to the morning Latte makes the difference!

When it comes to EPBCS (Enterprise Planning and Budgeting Cloud Service), I am indeed as mad as a March hare, and that is one reason I cannot sit quietly when my customer has an issue.

The whole movement to the cloud has had, and still has, its repercussions for security, and amid all the debate, discussion and assessment around the bigger areas of data, there is a small side effect my customer was facing with its own corporate security policies. Be it the Workforce module, Financial Planning, Capital Expense or Projects Planning, email notification is available for the following, in no particular order:

  • Business Rules
  • Job Console
  • Task Lists
  • Approvals (Process Management)

The only setting that defines the Business Rule notification is in the "System Settings and Defaults -> Advanced Settings" section.


When it comes to the Job Console / Task List / Approvals, you need to be mindful of placing the settings in the correct place: Navigation -> User Preferences -> Application Settings, as below.


So now, where is the problem? I faced an issue with one of my customer's Planning Process Management email notifications: the non-receipt of email notifications when a plan/budget is submitted for review/approval by the planning unit hierarchy (PUH) owner to the next level in the hierarchy. Since the email comes from Process Management, the owner/reviewer email addresses should be set properly in the Approval Unit -> Assign Owners section.



With this checked and OK, I verified in the Preferences section that everything was set as it should be. And yes, it was!


So am I done here? Yes, when it comes to on-premises Planning, but there is another work area you should double-check: "Application -> Settings".


Though both mean the same thing, I am not sure why it is intended to be set in both places. I am sure the Oracle product team has a valid, justifiable reason, but what my little brain can think of is that default application settings live in the Application -> Settings area, and a user can then toggle between the default application settings and his or her own user preferences with a single check box, instead of redoing all the settings whenever they want a change. Chances are they forget what it was before? ;) So there are three areas to check now, highlighted in the snippet below.



Over and above this, suppose we are good with all these settings and you are still not able to receive the emails at your corporate email address. The next thing I did was to test whether it works for Gmail or another personal email address, so I replaced all of my corporate email addresses with Gmail ones. It worked as expected, but a small note in blue mentioned that the message might be considered a phishing email.


Reason: the application owner is the same ID as the recipient; that is, the sender address and the recipient address in the email are the same. The sender address is a Gmail account, but the email is not actually sent from the Gmail email server. Then came the thought: would it be the same issue with EPBCS, caused by my corporate email administration policies? The email is sent by the EPBCS application from the Oracle emailing module, but it carries my corporate email address (the app owner) as the sender address, even though it does not originate from my company's email server. Hence my company's gateway filters these emails on suspicion of spoofing/spam.

Email management appears decentralized. But if you get the knack of it, see beyond what you see, and understand the nuances of the EPBCS navigation, you will do the right things in the right order and find that it indeed works! At Infosys, while we are fully focused on customer satisfaction, we couldn't resist delivering what the customer aspired for.

Here is how we decided to tackle the situation at hand. We created an admin ID in my company's network; all it needed was an email routing address and an Outlook login to authenticate the first-time password, and this became my application owner/admin. Did this solve the issue? No, the same problem with the policies again. We then worked with the IT team and added the sender email address and IP address to the exception list. Bingo! This is one option to get the mission accomplished.

The other option is to follow the Oracle guidelines:


EPBCS, being one of the most trusted tools of the trade, of course considers that what matters most is whether things are done on time and to perfection!

I trust my coffee and I want it my style! That put me in "never give up or compromise your needs" mode... it's just the OCD kicking in again. Do you need one now, in case you felt like dozing off while reading the above content in full? Did you?


-Prithiba Dhinakaran

December 14, 2017

Implementation of Oracle BI Applications for Multi Instance ERP Financials Source

Oracle BI Applications (OBIA) supports loading data into the Oracle Business Analytics Warehouse (OBAW) from multiple source systems and from multiple instances of the same source system. The main features of an OBIA multi-instance implementation for common facts and dimensions are as follows:


       Each source is assigned a DATASOURCE_NUM_ID to avoid data load failures due to duplicate primary keys between transactional systems.

       Lookups are done using the source primary key (INTEGRATION_ID) and DATASOURCE_NUM_ID, thus ensuring a unique record is returned.

       Each new insert into the Oracle DW fact or dimension is assigned a new ROW_WID (surrogate key).

       All joins in the reporting layer are on ROW_WID and not the natural keys from the source.


The following scenarios give details of how OBIA handles multi-instance data:

Scenario: Different dimension values from different sources/instances
Handling: Treated seamlessly as different rows of the given dimension at the ETL and reporting layers.

Scenario: Different transaction data in facts from two different sources/instances
Handling: Treated seamlessly as different rows of the given fact at the ETL and reporting layers.

Scenario: Same dimension from different sources (e.g., item 1 is present in both)
Handling:
       Duplicate rows appear in the Oracle DW dimension with different ROW_WIDs, so prompts may show duplicates and both need to be selected.
       Fact data will point to the correct row of the dimension, as the lookup is on the source primary key (INTEGRATION_ID) and DATASOURCE_NUM_ID, so aggregation will show the correct value.
       The duplicated data will result in multiple rows in reports where there should be only one row at the most granular level.
Customization: Optional, if de-duplication of the dimension is desired. An MDM solution would be ideal.

Scenario: Same fact from different sources (same PO in both)
Handling:
       Each transaction will pick the correct dimension values, and two rows will be inserted into the fact table with different ROW_WIDs.
       The duplicated data will result in multiple rows in reports where there should be only one row at the most granular level.
Customization: Optional, if de-duplication of the fact is desired. An MDM solution would be ideal.


OBIA can handle different chart of accounts (COA) structures present in different instances of Oracle EBS. For example, we will consider one instance to be a Fusion cloud source while the other is a legacy on-premises EBS, as shown in the table below. The chart of accounts for the on-premises source has 4 segments, while that for the Fusion cloud source has 7 segments. Moreover, the same segment type, for example 'Company', is stored in different segments of GL_CODE_COMBINATIONS in the two sources.

Segment Type | 101 (On Prem) | 2002 (Cloud)
Sales Division | ... | ...
Cost Center | ... | ...
The configuration file file_glacct_segment_config_ora for Financial Analytics helps ensure that the same segment types from the different COAs of the different instances get loaded into the same column of the OBAW table W_GL_ACCOUNT_D, as shown in the screenshot below.


For example, we can store the 'Company' segments from the on-premises COA and the cloud COA in the segment 1 column of W_GL_ACCOUNT_D, the Cost Center segments from the on-premises and cloud COAs in the segment 2 column of W_GL_ACCOUNT_D, and so on, as shown below.



We will have to define a source file directory (\biapps\etl\data_files) corresponding to each source/instance while registering a data source in Configuration Manager. Please refer to the URL below for more details.
Also, other parameters like the Master Inventory Org setup and Product Category setup are all source-specific, and thus will be defined for each source when defining the load plan.

Thus, OBIA is designed to handle data from multiple sources and multiple instances of the same source, and caters to most scenarios that might arise during an implementation.

December 12, 2017

Significance of ETL in BI Reporting

What is ETL?

Extract, Transform and Load (ETL) is typically done using ETL tools to extract data from source systems, transform it, and load it into a data warehouse for BI reporting:

  • Extract - Extracting the data from multiple source systems (OLTP) containing data in different formats.
  • Transform - Applying the transformations as per the business requirements.
  • Load - Loading the extracted and transformed data in a single format into the data warehouse (a single centralized database, OLAP).

Fig 1. ETL Data Flow Process

List of Oracle Products which can be used for ETL:

  • Oracle Data Integrator Enterprise Edition (ODI)

  • Oracle Data Integrator for Big Data (DIBD) 

  • Oracle Enterprise Data Quality (integrated with ODI)

  • Oracle Warehouse Builder (OWB)

Why ETL?

               This is the big question that always exists in the BI world: when reporting tools have the ability to extract from multiple data sources, what is the need for ETL?

Below are the main features that make ETL a mandatory process in BI reporting:

Data Cleansing and Data Profiling:

              Reporting tools provide the template and data flow as per the report requirements, but they do not process the data at the column or row level in a table. The ETL process ensures that good, valid data, as per the business requirements, is available for BI reporting, increasing the credibility and quality of the reports for the users.

Increasing the Performance of Reporting:

             When we fetch the data from a single data warehouse (single node) for reporting instead of from multiple sources (multiple nodes), it reduces the time taken for the latest data to become available in BI reporting. When thousands of reports are created and run every day for large organizations, this becomes an important aspect of BI. De-normalization and caching, achieved through ETL, also play a vital role in ensuring good performance of BI reports.

Avoiding the Usage of Live Transaction sources for Reporting:             

               Since the data warehouse (OLAP) is on a different node (database), we are not querying the transaction database directly, so BI reporting has no impact on the source systems' performance.

Need of Tools for ETL Process in BI

Can't we achieve this through traditional programming and scripting languages?

Yes, it is possible to achieve this process through programming languages, but ETL tools provide many advantages that are hard to ignore, not to mention their automation and scheduling capabilities.
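For contrast, here is a minimal hand-coded sketch of one ETL step in plain Java using JDBC. The table names, columns, connection URLs and credentials are illustrative assumptions, and the sketch ignores everything an ETL tool gives you for free (restartability, parallelism, logging, scheduling).

import java.sql.*;

// Hand-coded ETL step: extract orders from an OLTP source, transform, load into a warehouse table.
public class SimpleEtl {
    public static void main(String[] args) throws SQLException {
        // Illustrative connection URLs and credentials.
        try (Connection src = DriverManager.getConnection("jdbc:oracle:thin:@oltp-host:1521/OLTP", "src_user", "src_pwd");
             Connection dwh = DriverManager.getConnection("jdbc:oracle:thin:@dwh-host:1521/DWH", "dwh_user", "dwh_pwd");
             Statement extract = src.createStatement();
             PreparedStatement load = dwh.prepareStatement(
                     "INSERT INTO w_order_f (order_id, order_amt, currency) VALUES (?, ?, ?)")) {

            // Extract: read the raw rows from the source system.
            try (ResultSet rs = extract.executeQuery("SELECT order_id, amount, currency FROM orders")) {
                while (rs.next()) {
                    // Transform: a trivial cleansing rule - default a missing currency to USD.
                    String currency = rs.getString("currency");
                    if (currency == null || currency.trim().isEmpty()) {
                        currency = "USD";
                    }
                    // Load: write the cleansed row into the warehouse fact table.
                    load.setLong(1, rs.getLong("order_id"));
                    load.setBigDecimal(2, rs.getBigDecimal("amount"));
                    load.setString(3, currency);
                    load.addBatch();
                }
            }
            load.executeBatch(); // push the batched inserts to the warehouse
        }
    }
}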

Below are some of the key reasons for using ETL tools:

Data Integration Utilities:

            To extract data from different databases, we would need to import the respective drivers and packages in the program code. ETL tools are enriched with utilities to connect to different databases, which reduces the ETL developer's work in the data extraction part. They also provide parallel data extraction, which reduces the time needed for extraction.

Reducing the dependency on Programming languages:

           ETL tools remove the dependency on programming languages, which reduces the need for complex coding. ETL developers can also learn different ETL tools easily in a short period of time, and the tools make the debugging process easier for developers because they come with detailed logs about failures.

Enriched User Interface in ETL Tools:

            ETL tools are built with a rich interface that enables BI developers to implement complex business requirements easily. Developers can complete the ETL work by dragging and dropping the utilities provided in the tools, which lets them focus on the business requirement rather than the ETL code. As a result, people working in a functional business domain can also learn and work with ETL easily.


              In the future, the ETL process may evolve in different ways. For instance, many ETL tools are enriched with cloud utilities (ODI Cloud Service) and have capabilities to handle unstructured data. But if we want to ensure data quality and report performance for BI users, then an ETL process is always required. Data quality also plays a vital role in BI reporting, as business decisions and predictions are based on the reports generated from BI data.

December 6, 2017

Chatbots - What are they and what is the way forward?

A chatbot, also called a talkbot, bot, or interactive agent, is a program that carries out a conversation via audio or text. Chatbots are typically used for various purposes, including customer care and information gathering. For example, a user could ask the bot a question or give it an instruction, and the bot could respond or perform an action as appropriate (e.g., ask about the weather, dial a number, etc.).

You may be wondering why I am speaking about chatbots now... there you go!!! As you all know, the world is moving towards AI and machine learning. What happens when AI and chatbots combine?

Over a period of time they become very intelligent, as the information they collect is stored in their neural network and reused. Chatbots also make use of machine learning, specifically natural language processing (NLP), a rapidly improving field of ML. NLP is making conversational chatbots more intuitive as well as really cool and beneficial for customers... and that is what makes us say "chatbots will see high enterprise usage and will make life easier for humans."

Following are some of the stats that we see on the usage of chatbots:

  • Usage has increased 200% in big companies in the last few years

  • B2B (business-to-business) companies are taking more interest in this model

  • Last but not least, they reduce labor costs and help the enterprise significantly

Complete automation might not be possible in this space, but bringing in this chatbot innovation via AI will definitely give customers savings in time, money, and labor cost.

At a high level, we can classify chatbots into two varieties:

  • Predefined chatbots: you have to follow a predefined path, which gets you the result

  • AI chatbots: free-form chat where decisions can be made by the users based on best fit or real-time data

Some of the sample bots that are available are highlighted below:

  • Weather bot: gives you instant information about the weather

  • News bot: news anytime, anywhere

  • Financial bot: helps you manage your money matters

  • Scheduling bot: helps schedule your day-to-day activities, meetings, etc.

Personal devices such as Amazon Echo or Google Home are already available at a very affordable cost and, most importantly, have made conversing with a bot feel as normal as possible. It's only natural that we worry we will eventually be replaced by a more efficient machine/computer/robot. Doesn't that ring a bell for all of us? But worry not: we as humans have always found a way to keep ourselves employed and busy. Below is a quote from a brilliant movie, 'Before Sunrise' (1995), that summarizes this emotion:

"You know what drives me crazy? It's all these people talking about how great technology is, and how it saves all this time. But, what good is saved time, if nobody uses it? If it just turns into more busy work. You never hear somebody say, "With the time I've saved by using my word processor, I'm gonna go to a Zen monastery and hang out". I mean, you never hear that."

Oh yes, chatbots are the way forward and will keep you ahead in this competitive world. They not only increase productivity but also connect with clients/buyers/users in a more meaningful way, and it is important that we are not left behind in this world of IT!

More to come on chatbots...

Tips to conduct a successful Reverse KT Session

Here are some tips for conducting a successful Reverse Knowledge Transition (RKT) session.

1. Selection of topics (All party involvement)

This is a very important part of the whole RKT process. Selection of topics should be aligned with all parties involved in the KT, along with the customer. We need to ensure the topics selected were covered during our KT sessions and are important from the customer's point of view. We can always suggest topics that we want to cover and request the customer to pick from those.

Select fewer topics so that you can delve deeper into each topic within the time frame - this showcases our knowledge level on the topic and thereby increases the customer's confidence.

Select topics that are important and form the bulk of your work on a day-to-day basis - this will also ensure that we cover information which is useful and not just theory.

2. Duration for each topic

It makes sense to tag a certain timeframe to each topic selected. This gives you a lever for deciding how much to prepare and how deep you can delve into that particular topic.

3. Preparing a deck (Why slides?)

Presentation is really key in RKT sessions. Slides are the best tool to represent information in a short and crisp manner. Don't use passages or paragraphs in slides; use bullet points or numbered points only. This gives us a two-fold advantage:

    1. It will enable us to remember all the information on that particular topic
    2. The customer gains confidence when we talk about each bullet point in depth rather than reading from a passage/paragraph

It is always important to keep it to the point and talk about each point for a designated amount of time without reading from a paragraph.

Pictures/screenshots help depict the information in a more visual format, which is always preferred over textual content.

Slides also provide animation tools, which are a very powerful mechanism to draw viewers' attention to a particular point you want to highlight during the presentation. Word of caution: don't go for fancy animations; keep it simple (for example, use "Float In").

4. Review of material by all parties

Once you have prepared the material to your satisfaction, review it internally (within your team) first and get feedback. After incorporating that feedback, circulate it among the stakeholders and seek their feedback as well. This ensures there are no surprises during the RKT session. (Note: you can choose to remove all animations before circulating it to stakeholders, to keep the RKT session itself interesting.)

Incorporate the feedback from all parties. If you cannot incorporate some feedback, respond to that particular party explaining why. (This is key to gaining the parties' confidence, which might also help in obtaining a good rating.)

5. Dry run

This is one of the important steps and should be performed without exception. Dry run the entire RKT session internally within the team, or invite someone who can provide candid feedback. Time the session during the dry run as well. Ensure that you speak slowly and loudly, with a clear voice. Pause when switching between sub-topics and ask for questions.

6. Involve team members

Have team members talk about the particular topics they are comfortable with, and guide them in case of any mistakes during the dry run. Ensuring that more people present topics gives the customer confidence in the entire team.

7. Remote RKT?

If you are conducting the RKT session remotely over the phone, keep the phone close to your mouth. Speak slowly and loudly so that other people can hear without any disturbance. Pause and give the audience enough time to respond. Always be polite and use "thank you" and "please" wherever possible. A greeting is important and will start the meeting on the right tone.

Please remember to keep your phone on mute if you are not talking to avoid any unnecessary background noise.

December 4, 2017

How to handle Foreign Currency Translation Reserve (FCTR) in HFM - PART II

Hi friends! In Part I on FCTR we saw the functional aspect of when a translation reserve arises and how it is calculated. Now we will look at how to handle FCTR in HFM.

To simplify the understanding of handling FCTR in HFM, we will break up the activity into the following four categories:

  1. Identifying accounts to be translated at historical rate

  2. Storing historical rate

  3. Calculating FCTR

  4. Posting FCTR

There can be multiple approaches to handle each of these activities, and I have seen diverse approaches implemented successfully. Some of the approaches were good, while some were really confusing for a layperson to understand and even a challenge for many HFM resources. So in the current discussion we will focus on how to handle FCTR rather than how not to. Also please note that the discussion is limited to HFM and does not cover FCCS (FCCS has in-built capabilities for handling FCTR and can be understood separately).

1. Identifying accounts to be translated at historical rate

There may be one or more accounts that need to be translated at historical rate, and new accounts requiring the same treatment might be added in the future, so it is prudent to have a mechanism which is flexible and future-proof.

I suggest using a UDA 'Hist_Rate' in the Account dimension to mark the accounts to be translated this way. A member list can be created for all accounts having the UDA 'Hist_Rate', and the rule that calculates FCTR will loop over each member in that list, as sketched below.
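
Actual HFM rules are written in VBScript against the HS API, so the following is only an illustrative Python-style sketch (with hypothetical account names) of how the member list drives a per-account loop:

    # Illustrative sketch only - real HFM rules use VBScript and HS.* calls.
    # Hypothetical accounts, some flagged with the UDA 'Hist_Rate'.
    accounts = {
        "Share_Capital":  {"uda": "Hist_Rate"},
        "Investments":    {"uda": "Hist_Rate"},
        "Trade_Payables": {"uda": ""},
    }

    def hist_rate_member_list(accounts):
        """Mimics a member list returning accounts that carry the 'Hist_Rate' UDA."""
        return [acct for acct, props in accounts.items() if props["uda"] == "Hist_Rate"]

    def calculate_fctr(acct):
        # Placeholder for the per-account FCTR logic discussed later in this post.
        print(f"Calculating FCTR for {acct}")

    for acct in hist_rate_member_list(accounts):
        calculate_fctr(acct)

Adding a new historically translated account then only requires tagging it with the UDA; no rule change is needed.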

I also recommend building a hierarchy to capture movements in the accounts and FCTR.

E.g. (please note that this is an indicative hierarchy and can differ based on client requirements)


2. Storing historical rate

Since each account will have its own rates for each addition/deletion, the rate will need to be captured separately for every account. While some prefer to have a separate rate account for each account translated historically, I prefer to have a Historical Rate member, say HR0000 - Historical-Rate, as a node in a custom dimension (preferably the one where movements in the account are captured, as shown above). Further children can be added to the Historical Rate hierarchy if there are different historical rates for additions or deletions during a single period.

Using a custom dimension to capture the historical rate allows rates to be stored for each account without creating additional rate accounts, and makes it easy to retrieve and track the historical rate for each account.

A data entry grid can be created for entering historical rates for the accounts having a historical rate (using the member list created above), with Custom1 as the 'From Currency' and Custom2 as the 'To Currency'.
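
Conceptually, the rates captured on the HR0000 node behave like a lookup keyed by account and by the period of the addition/deletion. A tiny, purely illustrative Python sketch with hypothetical accounts, periods and rates:

    # Purely illustrative - hypothetical values. In HFM these sit at the
    # intersection of the account, the HR0000 custom member and the
    # From/To currency customs set up in the data entry grid above.
    historical_rates = {
        ("Share_Capital", "2016-Dec"): 1.32,  # rate at the original transaction date
        ("Investments",   "2017-Apr"): 1.30,  # rate for a deletion during April
        ("Investments",   "2017-May"): 1.28,  # rate for an addition during May
    }

    def get_hist_rate(account: str, period: str) -> float:
        return historical_rates[(account, period)]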

3. Calculating FCTR

In the current discussion I will not cover how to write the FCTR rule, but will discuss the logic for calculating FCTR in HFM. In the example below you will see how FCTR is calculated based on the flow and rate accounts defined above. You will observe that in the months of May and April the calculation gets complicated, as there is an addition and a deletion to the account, so different historical rates apply to different amounts.

I suggest an alternative which is easier. Since we will be calculating FCTR every period, the difference between the EOM (end-of-month) rate and the historical rate has already been calculated and posted as FCTR. So for subsequent periods we need to do the following (see the sketch after the notes below):

  1. Calculate the difference between the translated values at the current EOM rate and the previous period's EOM rate.

  2. Calculate the difference between the translated values at the EOM rate and the historical rate for all additions/deletions during the period.

  3. The total of the above differences is the change in FCTR for the period, and should be added to or reduced from the previous period's FCTR that is brought forward.

  4. You can use the difference between the EOM rate for the current period and the previous period to get the same result as arrived at in (h) below.

Please note:

  • Separate rates may be applicable for additions and deletions in the same period, so these have to be handled accordingly.

  • The previous period whose EOM rate/EOM amount is considered depends on whether the data is YTD or MTD. For YTD data the previous period is always the last period of the previous year, while for MTD data it is the previous period of the same year (except for the first period).

  • All FCTR calculations will be at Parent Currency level and not at Entity Currency level.
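
To make the period-over-period logic concrete, here is a minimal, illustrative Python sketch of the calculation described in the steps above; the figures and rates are hypothetical, and an actual HFM rule would express this in VBScript against HS.GetCell/HS.Exp:

    # Illustrative sketch of the incremental FCTR logic - hypothetical figures only.
    def fctr_change_for_period(
        opening_balance_lc: float,   # balance in local currency brought forward
        additions_lc: float,         # additions during the period (local currency)
        deletions_lc: float,         # deletions during the period (local currency)
        eom_rate_current: float,     # end-of-month rate for the current period
        eom_rate_previous: float,    # EOM rate of the relevant previous period
        hist_rate_additions: float,  # historical rate applicable to additions
        hist_rate_deletions: float,  # historical rate applicable to deletions
    ) -> float:
        # 1. Movement on the opening balance due to the change in EOM rate.
        change_on_opening = opening_balance_lc * (eom_rate_current - eom_rate_previous)
        # 2. EOM rate vs historical rate on the period's additions and deletions.
        change_on_additions = additions_lc * (eom_rate_current - hist_rate_additions)
        change_on_deletions = -deletions_lc * (eom_rate_current - hist_rate_deletions)
        # 3. Total change in FCTR for the period, added to the FCTR brought forward.
        return change_on_opening + change_on_additions + change_on_deletions

    # Hypothetical example: opening 1,000 LC, addition of 200 LC, no deletions.
    delta_fctr = fctr_change_for_period(
        opening_balance_lc=1000, additions_lc=200, deletions_lc=0,
        eom_rate_current=1.30, eom_rate_previous=1.25,
        hist_rate_additions=1.28, hist_rate_deletions=0.0,
    )
    print(round(delta_fctr, 2))  # 1000*0.05 + 200*0.02 = 54.0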


4. Posting FCTR

Once the FCTR is calculated, it is posted on the accounts at the Custom1 nodes defined above, FCTR2000 (addition) or FCTR3000 (deletion) as the case may be. The previous period balances need to be brought forward using a rule (please refer to the note above on the previous period balance).

For posting FCTR you can create a single FCTR account, or two accounts such as FCTR_Assets and FCTR_Liability; these will be grouped under Liability/Asset or passed through the income statement based on the accounting policy of the client.

Based on the calculations above:

  • Any +ve FCTR on an asset account will have to be reduced from the asset account (a -ve value in node FCTR2000 of the account) and added to the FCTR_Assets account, while a -ve FCTR will be added to the asset and to the FCTR_Liability account.

  • Any +ve FCTR on a liability account will have to be reduced from the liability account (a -ve value in node FCTR2000 of the account) and added to the FCTR_Liability account, while a -ve FCTR will be added to the liability and to the FCTR_Assets account.

  • To achieve the above, either the rule should multiply the amount by -1 before posting to FCTR2000/3000, or the FCTR2000/3000 nodes should be marked with 'Switch Sign' set to 'Yes' in the Custom1 dimension.

  • If a single FCTR account is defined, it is suggested to define it with account type Liability, so any addition to the FCTR_Assets account above becomes a deletion in the FCTR account and vice versa (see the sketch below).
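
The sign handling above can be summarised in a short, purely illustrative Python sketch (hypothetical member names, and assuming the rule multiplies by -1 rather than relying on the 'Switch Sign' setting):

    # Illustrative only - real postings are made by the HFM rule file.
    def post_fctr(account_type: str, fctr_amount: float):
        """Return (target, amount) postings for one account's FCTR for the period."""
        # Adjust the underlying account via its FCTR2000 movement node,
        # flipping the sign instead of using 'Switch Sign' on the custom member.
        postings = [("FCTR2000 on the underlying account", -fctr_amount)]
        # Book the offset to FCTR_Assets or FCTR_Liability based on the sign
        # and on whether the underlying account is an asset or a liability.
        if account_type == "Asset":
            target = "FCTR_Assets" if fctr_amount > 0 else "FCTR_Liability"
        else:  # Liability
            target = "FCTR_Liability" if fctr_amount > 0 else "FCTR_Assets"
        postings.append((target, fctr_amount))
        return postings

    print(post_fctr("Asset", 54.0))  # +ve FCTR on an asset account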

I hope I have been able to simplify one of the most complicated topics as far as possible. In the pursuit of keeping it simple I have not gone into too much detail, but I hope it gives you sufficient insight. In case you have any queries please do write to me and I will try to get back to you at the earliest. Till then, keep consolidating. :)

December 1, 2017

Efficient Solutions in PeopleSoft HCM for Smart HR


 One of the major challenges the onboarding HR operations team has to deal with is coping with the huge volume of data entry to be completed in a short period, so that all new joiners have a pleasant experience. The generation of the employee record is a prerequisite for all the other facilities to be provisioned for the new employee. The Smart HR template solution of PeopleSoft HCM tries to bridge this gap, but has many limitations when a huge volume of data entry must be completed in a short period. The ideal solution IT consultants can offer is to combine it with other functionalities and come up with a hybrid approach that suits the client's business needs.

Why Smart HR Templates?


Oracle PeopleSoft introduced the Smart HR template in 9.0 as a precursor to the broader 'Smart HR' templates in 9.2, to help enterprises simplify their hire process. Smart HR templates let HR administrators/super users configure hiring pages around unique onboarding processes that differ by job, geography, employee type, industry, etc. With the help of Smart HR templates, implementation consultants can help customers optimally configure the templates with identified data defaults specific to the job and person record, thereby requiring data entry only for variable data such as name or address, and thus significantly reducing the time to complete the hiring process.


Typically the HR administrator has to spend a considerable amount of effort creating and maintaining various employee-specific transactions such as hire, rehire, terminate, promote, transfer, maintain address, maintain dependent data, maintain compensation data, maintain position, maintain job, maintain competencies, maintain qualifications, maintain work experience, maintain salary increments, maintain manager changes, etc. The need to minimize data entry and default repeatedly used common data triggered the introduction of Smart HR templates, which reduce the number of data entry fields necessary for frequently used transactions in the areas of Hire/Rehire, Job data updates, Personal data updates and Profile data updates.


The latest version of the PeopleSoft Smart HR template1 even has built-in integration between Profile Management profiles and Smart HR, helping end users enter or update a person's profile using Smart HR transaction templates and pages, and it can cater to multi-row transactions represented in a tabular grid, as in the case of addresses or profile management. Template-specific field-level validations can be achieved, with the additional option of configuring user-defined application class methods. There is also an option to allow customers to create templates by position number or by job code, as well as to define global and country-specific sections. We can even configure templates to have compensation predefined by salary plan, grade and step. The implementation specialist, along with the HR administrator, can determine which of the salary plan, grade and step fields can be overridden by the end user, or even which are visible to the end user. It is a recommended practice not to define compensation data as display-only to the end user, as it would prevent end users from editing the compensation components when organization policies change. Also, in the case of hires, Search/Match functionality can be enabled that runs before the hire can be saved or submitted, and the end user can choose a suitable action: complete the hire or refer it to an HR specialist.


We followed the steps below while implementing a Smart HR template in PeopleSoft HCM for a major service provider in Canada. The first step was to gather the global sections that can be reused and decide on optimizing the local sections mandated by regulations or local policies. The next step was to decide on the organizational relationships in use and analyze the data volume to decide on the number of templates to be created. We then identified the defaults for prompt fields and repeatedly entered data points that could minimize manual entry. Sections like phone, address and compensation details that might need multiple rows of data were segregated and optimized next. The roles associated with notifications and approvals were identified and configured. Data-level security was then decided upon, considering both global and local needs; this is one of the challenging areas and needs focus to build strong security into the system. Finally, any mandatory custom fields were added, as these involve more customization and additional maintenance overheads. The process flow we followed is depicted below:


As a sample business use case to explain the benefits offered by Smart HR templates, consider the requirement of a local HR administrator of this service industry giant with a multi-country presence, who had to key in on average 150 new hires a week and 50 transfers a week and wanted to minimize the manual effort required for data entry across multiple components, tabs and sections. With the help of the Smart Hire functionality, we suggested configuring customized Hire templates and Job Update templates with the flexibility of local administration of frequently occurring changes, without requiring full administration on the part of HR. The flexibility to configure template setup by section as well as by prompts was well utilized in the project. Data-level security, which can be configured so that access is granted based on data permission roles and users can reach only the right templates, was strategized and planned well so that there were no sudden pitfalls. This customization helped the local HR administrator save more than 29 person-hours a week, by reducing data entry by at least 10 minutes per new hire and 5 minutes per data change, with the help of Smart HR templates along with the necessary approval notifications that ensure data accuracy (see Appendix A for the detailed calculation).


The client was first educated on the need to optimize the global and local templates, so that any process change that adds to the efficiency of the process is adopted as part of the Smart HR template rollout. Change management on the client side needs to be strong enough for the effective usage of Smart HR templates. The security roles often got modified dynamically, as the client was frequently not aware of the security tree hierarchy used by the Smart HR templates; this is an area that needs more time for discussion and implementation, and should not be overlooked at the planning stage.


What do customers want?


From the traditional, accounting-oriented role of the HR manager, the HR department is now looked upon as a revenue enhancer, and this requires a shift in the role of the HR manager to parallel the needs of a developing, changing organization. More adaptable, agile, resilient, change-driven and customer-centric HR personnel are necessary for the success of organizations, and HR needs to focus more on strategic initiatives than on administrative activities. This shift is well accommodated in the new HR applications, and these enablers are critical for the success of HCM tool implementations.


In "Predicts 2016: Digital Dexterity Drives Competitive Advantage in the Digital Workplace," 2 Gartner emphasizes the suggestion that the ability to promote digital dexterity in the workforce will be a critical source of competitive advantage, based on the simple notion that an engaged, digitally literate workforce capable of seizing technological advantage will drive better business outcomes. Human capital management (HCM) applications enable enterprise people management processes including core HR data management, payroll, talent management, workforce management, integrated HR service delivery and workforce analytics. The Strategic Planning Assumptions from "Predicts 2016: HCM Applications Transform to Support the Emerging Digital Workplace" 2 highlight the changes to HCM being driven by the emerging digital workplace, and a workforce investment strategy that enables new, more effective ways of working, raises employee engagement and agility, and exploits consumer-oriented styles and technologies.


To achieve digital dexterity, one of the key strategies is to maximize digital capability to reduce manual effort, and in this respect Smart HR templates are surely a value-add. However, there are many limitations to Smart HR, and one of the key limitations is the inability to mass-update transactions, such as a mass hire in a data conversion scenario like a new business acquisition or the conversion of contract employees to permanent staff. This is a frequently occurring ad-hoc scenario, and a mass update capability is a real value-add that customers look for. Since Smart HR templates alone do not currently support this functionality, implementation consultants need to evaluate the business scenario and come up with optimal solutions that can meet the pressing business needs.


What else can be done?

                To meet the varied needs of the end customer, the pressing need for automated mass uploads, and a short process for individual hires, the Smart HR template alone cannot cover everything. We have to work out multiple options in tandem with the Smart HR template to meet the customer's growing needs. The PeopleSoft-delivered approach to the problem of mass hire is CI (Component Interface) based integration, either through the Excel to CI utility or a CI embedded in an Application Engine (AE) program that takes an Excel-based file of mass hire data as input. The biggest challenge with this approach is the very limited data validation that can be performed before the run, which results in many data errors, along with the difficulty of interpreting the standard error report generated by a component interface program. There is a regular dependency on a technical consultant to assist the HR operations team, and an end user can seldom perform this task independently. This approach is mostly used for mass hire transactions. A small sketch of the kind of pre-load validation that is typically missing is shown below.
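
The gap described above is the absence of validation before the file ever reaches the Component Interface. Below is a minimal, purely illustrative Python sketch (hypothetical column names, and CSV instead of Excel for simplicity) of the kind of pre-load check that would catch errors earlier; a real solution would also validate values against PeopleSoft setup tables such as job codes and departments:

    # Illustrative only - hypothetical columns; not a PeopleSoft API.
    import csv

    REQUIRED = ["first_name", "last_name", "job_code", "department", "hire_date"]

    def validate_mass_hire_file(path: str):
        """Return (valid_rows, errors) so bad rows can be fixed before the CI run."""
        valid_rows, errors = [], []
        with open(path, newline="", encoding="utf-8") as f:
            for line_no, row in enumerate(csv.DictReader(f), start=2):
                missing = [col for col in REQUIRED if not (row.get(col) or "").strip()]
                if missing:
                    errors.append(f"Row {line_no}: missing {', '.join(missing)}")
                else:
                    valid_rows.append(row)
        return valid_rows, errors

    # Usage: rows, errs = validate_mass_hire_file("mass_hire.csv")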


                Mass data update functionality can be used for simpler organization-wide updates such as mass salary hikes, organization-level policy changes, changes to job codes, or transfers between departments. However, it is a best practice to test the transaction on a small group first, ensure that all defaults and mandatory fields are populated with data, and use a new effective-dated row rather than running in correction mode unless you are sure about overriding the data. Mass data updates can also be suggested for use in tandem with Smart HR templates for better automation.



                However, the end user still needs a better solution: a stronger ERP capability that maximizes ROI and frees HR to work more on strategic initiatives. There should be an easier integration path, loading data from an Excel spreadsheet directly into the system with minimal errors in each load.


How to make the right decision?


                This problem, essentially posed by a major service provider client, set us thinking about a better design for Smart HR templates that would scale to mass uploads and multi-row data transactions per employee. Using the Excel to CI template in tandem has its own limitations, as the data validation performed is very minimal, making it a weaker choice to rely on. There should be a major revamp in the way Smart HR templates are currently designed, incorporating one more key field (say, a Row ID) that uniquely identifies each employee to be created in a mass hire.


                The system should be designed to first read an Excel input file via a file-based AE integration and populate a scroll-level grid with multiple employees' data, each uniquely identified by a Global Record ID. The user can then perform on-screen validation and editing before submitting to the approval notification process, as is currently available on Smart HR transaction screens. Once approved/submitted, the data gets inserted via CI into the different Job, Personal data and Profile data records. The error report would continue to prompt for the data corrections needed before insert. Implementation consultants as well as end users would benefit greatly if such a feature were added to future patches/releases of Oracle PeopleSoft HCM Smart HR templates. End users additionally look forward to Fluid-enabled UI features for the Smart HR pages and approval pages, so that they can better utilize their productive time and avoid unnecessary delays.






Appendix A: Calculation of Effort Savings



Transaction Data

  • No. of new hires per week: 150

  • No. of data changes per week: 50

  • Time taken per new hire (manual): 20 mins

  • Time taken per data change (manual): 10 mins

  • Time taken per new hire with Smart HR: 10 mins

  • Time taken per data change with Smart HR: 5 mins

Total effort savings = ((150 * 10) + (50 * 5)) / 60 = 29.17 hours per week
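
For completeness, the same arithmetic as a short Python check:

    # 10 minutes saved per new hire and 5 minutes per data change, per week.
    saved_hours = (150 * 10 + 50 * 5) / 60
    print(round(saved_hours, 2))  # 29.17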

2) Gartner, "100 Data and Analytics Predictions Through 2020", 24 March 2016, ID: G00301430. Analysts: Douglas Laney, Ankush Jain.
