Infosys’ blog on industry solutions, trends, business process transformation and global implementation in Oracle.


March 26, 2018

Driving Business Intelligence with Automation

Business Intelligence applications have been around for a very long time and are widely used to present facts through reports, graphs, and visualizations. They have changed the way many companies use tools, in a broader and more collaborative way. However, users these days expect these tools to operate autonomously while staying connected to a vast network that can be accessed anywhere. Because BI applications analyze huge sets of data in raw format, it can be difficult to derive intuitions and insights from them. Automation can help extend Business Intelligence beyond its current capabilities if a few points are kept in mind.


Intuition Detection
Automate the intuition.

Business Intelligence should be able to surface the right questions within seconds and provide insights without any manual intervention. Artificial Intelligence can do this well by using the power of supercomputers to help Business Intelligence take a deep dive into the data, trying all the available possibilities to find patterns. This leads to the discovery of business trends that serve the needs of the user.


Intuition Blending
Automate what intuitions are prioritized.

The very act of detecting insights in Business Intelligence should be automated so that it prioritizes the intuitions and ranks the worthy ones higher based on user needs. Artificial Intelligence then compares all the possible insights and establishes relationships between them, helping users work on multiple insights at once.


Common Dialect
Automate how intuitions are defined.

The Business Intelligence tools available so far have done a commendable job of analyzing huge amounts of data through reports, graphs, and other graphical representations. But users still have to figure out the best insights from the analytics themselves. This human factor leaves room for misconception, delusion, and misinterpretation. This is where Artificial Intelligence takes Business Intelligence to the next level and provides insights in easily understandable language, such as plain English. Graphs and reports can still be used to represent this accurate and widely comprehensible output.


Common Accessibility
Create a just-one-button-away experience.

Finding an insight should be highly accessible and as simple as a single click. This is possible with Artificial Intelligence, where BI is automated so that users get professional insights instantly at the click of a button. It lets users access the data easily without any prior knowledge of the tool or of data science. Tools such as Einstein Analytics from Salesforce have already implemented this, attracting a huge set of users all over the globe.


Cut-Off Sway
Trust the data to reduce human error and bias.

Artificial Intelligence generally reduces manual intervention and thereby avoids human error. Such bias can arise from individual views, misinterpretation, influenced conclusions, deep-rooted beliefs, and faulty assumptions. Automation by means of Artificial Intelligence removes these factors and reduces the risk of being swayed by flawed information. It trusts the data alone.


Hoard Intuitions
Increase efficiency by integrating insights.

Integrating insights into the application alongside the factual data using Artificial Intelligence may remove the need for users to open the BI tool directly. This consolidation ultimately turns the application into an engine for other software, increasing the effectiveness of the tool. Users can then spend more time making profitable decisions rather than putting unwanted effort into operating the tool. This can change the minds of many business users who presently do not use any sort of Business Intelligence tool.


March 15, 2018

First Timer for OAC - Essbase on Cloud... Let's Get Hands-On at Rocket Speed!


Well, a rocket launch needs to consistently hit 4.9 miles a second, and yes, this is 20 times the speed of sound. That is what defines its greatness and is the prime reason for a successful space launch.

This kicks off a blog series from Infosys on the Essbase on Cloud component of the OAC (Oracle Analytics Cloud) service [should I say "oak" or "o.a.c"? I rather like the former, as it gained popularity that way over the last three years]. The series will consist of six blogs in a staircase-step fashion, enabling on-premise Essbase consultants and developers, as well as newbies from the BI world, to gain control over OAC Essbase at rocket speed! We start with the landing page in blog 1 and will end with using Essbase as a sandbox for multiple data sources in blog 6 (coming soon...).

OAC was released in 2014 as the most comprehensive analytics offering in the cloud, spanning:

a)       business intelligence

b)      big data analytics

c)       embedded SaaS analytics

Lately, Essbase on Cloud was also released, in the second half of 2017.

OAC has now become even more packed with the introduction of Synopsis, Mobile HD, and the Day by Day app.

 

The first step... Set up a user in OAC - Essbase to move on.

As a prerequisite, you will need admin credentials to log in and set up access for the other folks on your team!

OAC - Essbase login page: Once you click the link for Essbase Analytics Cloud, you will see the entry gate where you enter the admin user name and password. Doing that and clicking the Sign In button will take you to the Landing page.

OACEssbase_pic1.png

This is the first screen, called the Landing page. Since I have not created any application yet, you see an empty list in the left column. On the right is a set of neatly arranged cards providing great ease of access, with everything you might need in one view.

OACEssbase_pic2.png

 

 

Turn your focus to the Security icon, which is plainly visible on the Landing page, to accomplish our task. For those accustomed to the card concept in EPM SaaS products, this will not be a surprise, just presented on a white background.

 

OAC_Essbase_pic3.png

 

Once you are there, you will see three buttons on the right: "Import", "Export", and "Create" - very much self-explanatory! :) By now you know to click the "Create" button to create new users:

OACEssbase_pic4.png

Provide the ID, name, email, password, and role for the new user:

OACEssbase_pic5.pngOACEssbase_pic6.png

 

The users currently available on OAC - Essbase will be listed here.

OACEssbase_pic7.png

Another user can be created with the same sequence of steps, or in bulk via the Import option (a rough sketch of preparing such an import file follows below).
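If you go the Import route, a small Python script like the one below can assemble the upload file. This is illustrative only: the column layout and file format are assumptions, so export an existing user with the "Export" button first and mirror that file's exact structure for your instance.

import csv

# Sample users to create in bulk; all values here are placeholders, not real credentials.
users = [
    ("jdoe",   "John Doe",  "jdoe@example.com",   "Welcome#123", "User"),
    ("asmith", "Ann Smith", "asmith@example.com", "Welcome#123", "Power User"),
]

with open("bulk_users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Assumed header for illustration; match the columns produced by the Export button.
    writer.writerow(["ID", "Name", "Email", "Password", "Role"])
    writer.writerows(users)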

Please be aware that the above method is different from the users and roles managed via the IDCS console. We will drill into the application roles and authorized users specific to the BI Analytics service instance in a sequel. The goal of that security section revolves around access and actions. A user can access only the data that is appropriate for them; this is achieved by applying access control in the form of catalog permissions on the Catalog and Home pages. Secondly, a user can perform only those actions that are appropriate for them; this is achieved by applying user rights in the form of privileges on the Administration page.

Now that my ID and access have been set up, let's look at step 2, application creation, in blog 2 (coming soon... soon as in tomorrow!).

Thank you,

Prithiba Dhinakaran

 

March 12, 2018

Automate your File based Data Loads in FDMEE with Open Batches



In one of my recent implementation assignments, we faced the requirement of eliminating manual intervention and automating our data loads in Financial Data Management Enterprise Edition (FDMEE), the target systems being Hyperion Financial Management (HFM) and Hyperion Tax Provisioning (HTP). Open Batches in FDMEE are an effective way of automating file-based data loads. Though the standard User Guides contain precious little on creating and running Open Batches and are not entirely comprehensive, Open Batches are easy to use and implement. In this blog, I attempt to explain in detail the process of creating Open Batches. These details come purely from my personal research and my experience on past assignments.


 


What is an Open Batch?


Batches play a critical role in automating the Data/Metadata load process. By creating batches, we can easily club different rules together or execute different rules in an automated way by leveraging the scheduling capabilities of FDMEE.



An Open Batch is instrumental in importing data into a POV in FDMEE from a file-based data source. Open Batches can be automated to run at scheduled times. Most importantly, Open Batches come into play when we have to load more than one data file for a particular location. They also provide the flexibility of loading these multiple data files in 'Serial' or 'Parallel' mode.



Configuring and Executing an Open Batch


Let us assume that we have 2 FDMEE Locations and from each of these locations, we have to load 2 data files:


Step 1: Setting up the required FDMEE Locations


We create 2 locations as shown below (without deep-diving into the details of an FDMEE Location)

a. LOC_Sample_Basic1

b. LOC_Sample_Basic2




Step 2: Setting up the Open Batches



A new Open Batch may be created from: Setup -> Batch -> Batch Definition. Click on 'Add' to add a new Batch Definition.



Key in the basic details such as NAME (Name of the Open Batch), TARGET APPLICATION, WAIT FOR COMPLETION and DESCRIPTION. Apart from these, there are a few other details as below:


 


  1. Type: Select 'Open Batch'.

     

  2. Execution Mode: Select 'Serial' if you wish to load all the data files in that location one by one. Select 'Parallel' for loading all the files simultaneously.

     

  3. Open Batch Directory: This is where the User Guides are especially vague. All we need to do is specify the name of the folder where the data files will be placed. But we need to take special care that this folder is created one level below the folder 'openbatch' (that is, inside it). So, assuming that we need to place the data files in a folder 'Test1', the folder is created as shown below (an example path also appears in the sketch at the end of this step):


  4. File Name Separator: we have to choose one of (~, _, @ or :).

     

  5. Naming convention of the Data Files: For a data file to be executed from an Open Batch, it needs to have the following components in its name, separated by the chosen 'File Name Separator':

     


  • File ID: free-form value used to sort the files for a given Location in an order. The values for this can be 1,2,3... or a,b,c... etc, just to specify the order in which the files will be loaded.

  • Data Load Rule Name: The name of the Rule through which the file will be loaded

  • Period: The period for which the file is to be loaded

  • Load Method: 2-character value where first character denotes 'Import Mode' (possible values 'A' and 'R' for 'append' and 'replace' respectively). The second character denotes 'Export Mode' (possible values 'A', 'R', 'M' and 'S' for 'accumulate', 'replace', 'merge' and 'replace by security' respectively).


So, now let us assume that we have to load 2 data files from the location 'LOC_Sample_Basic' using the Data Load Rule 'DLR_Sample_Basic'. Assuming that the 'File Name Separator' chosen is '@', the file names for the 2 files will be as follows (see the sketch at the end of this step for how the pieces fit together):


  • 1@DLR_Sample_Basic@Jan-17@RR

  • 2@DLR_Sample_Basic@Jan-17@RR


  6. The 2 Open Batches for loading the data files will be as below:

OP_Sample_Basic1

OP_Sample_Basic2
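To make the naming convention above concrete, here is a minimal Python sketch. It is illustrative only and not part of FDMEE; the separator, rule name, and period are simply the example values used in this post.

# Illustrative helper (not part of FDMEE): builds an open batch file name from the
# components listed above: FileID <sep> RuleName <sep> Period <sep> LoadMethod.
SEPARATOR = "@"  # must be one of ~  _  @  :

def open_batch_file_name(file_id, rule_name, period, import_mode, export_mode):
    # import_mode: 'A' (append) or 'R' (replace)
    # export_mode: 'A' (accumulate), 'R' (replace), 'M' (merge) or 'S' (replace by security)
    return SEPARATOR.join([str(file_id), rule_name, period, import_mode + export_mode])

# The resulting files are dropped into the Open Batch Directory, for example
# inbox\batches\openbatch\Test1 (full path assumed from the standard FDMEE inbox layout).
for file_id in (1, 2):
    print(open_batch_file_name(file_id, "DLR_Sample_Basic", "Jan-17", "R", "R"))
# Prints: 1@DLR_Sample_Basic@Jan-17@RR and 2@DLR_Sample_Basic@Jan-17@RR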




 


Step 3: Configuring the Master Batch


After doing all of the above configuration, the obvious question is: 'Where is the automation in this when we have ended up creating 2 or more Open Batches? How will we execute 2 or more Open Batches without manual intervention?' The answer lies in creating a batch of the type 'Batch'. We configure a batch with the type 'Batch' and name it 'BatchMaster'.



The advantage of this Batch is that we can add multiple Open Batches to the BatchMaster and then schedule just this one batch instead of worrying about multiple Open Batches. To add the 2 Open Batches that we created earlier to this BatchMaster, click the 'Add' button and enter the details. Ensure that you enter the Job Sequence number to tell the system which Open Batch to execute first.



 


Step 4: Executing the Open Batches


The Open batches or the BatchMaster can be executed from: Workflow -> Other -> Batch Execution


You may simply select the BatchMaster now and schedule it easily through the 'Schedule' button.



FDMEE Multi-Period Data Load


There may be a requirement to load data files that contain data/amount values for multiple periods. Files for an open batch multi-period load are stored in the 'inbox\batches\openbatchml' directory. There are just a few minor variations between setting up a batch for a multi-period load and a single-period load.


 


  1. Open Batch File Name: The file name for multi-period files should be as per the format 'FileID_RuleName_StartPeriod_EndPeriod_Loadmethod':


    • FileID: free-form value used to define the load order

    • RuleName: Name of the data load rule which will be used to process the file

    • StartPeriod: First period in the data file

    • EndPeriod: Last period in the data file

    • Load Method: Import and Export Mode

      So, for a Data Load Rule named 'Actuaload', multi-period file names would look like:


    • 1_Actuaload_Jan-17_Dec-17_RR

    • 2_Actuaload_Jan-17_Dec-17_AR

       

       


  2. Open Batch Directory: We specify the name of the folder in the same way as we did for the single-period batch. But we need to take special care that this folder is created one level below the folder 'openbatchml' instead of 'openbatch'. So, assuming that we need to place the data files in a folder 'Test3', the folder is created as shown below:



 


  3. Sample Open Batch for Multi-Period:
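As with the single-period case, a short Python sketch (illustrative only, not part of FDMEE) shows how the multi-period names above are assembled; 'Actuaload' and the periods are the example values from this post.

# Illustrative helper (not part of FDMEE): multi-period names follow the format
# FileID_RuleName_StartPeriod_EndPeriod_LoadMethod, always separated by underscores.
def multi_period_file_name(file_id, rule_name, start_period, end_period, load_method):
    return "_".join([str(file_id), rule_name, start_period, end_period, load_method])

# These files go into the multi-period directory, e.g. inbox\batches\openbatchml\Test3.
print(multi_period_file_name(1, "Actuaload", "Jan-17", "Dec-17", "RR"))  # 1_Actuaload_Jan-17_Dec-17_RR
print(multi_period_file_name(2, "Actuaload", "Jan-17", "Dec-17", "AR"))  # 2_Actuaload_Jan-17_Dec-17_AR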


Avoid Validation failures with DRM Validation checks


Data Relationship Management (DRM), as a master data management solution, has been a favorite in complex EPM and EBS landscapes for its audit and versioning features, along with other unique capabilities. Now that DRM is available in the cloud, its adoption by customers is expected to increase manifold. This blog will help you understand the validation checks in DRM that help catch issues before your target systems are impacted.

Whenever a metadata update is made in DRM, it is advisable to perform a few checks. The most vital of these is to ensure that all the target system exports that may be affected by the metadata update are checked for validation failures. This is especially important when new members are added or existing members are deleted. This blog talks in detail about the process of validating the exports and hierarchies after a master data update. It also highlights how to go about fixing a validation failure, if there is one.

You may ask why this is important. DRM hierarchies and exports are used to supply various downstream systems with master data. These hierarchies and target system exports in DRM have validations applied to them, which ensure that certain business requirements are met during master data maintenance. So, whenever a master data update is made to the system, it becomes imperative for the DRM admins to validate all the hierarchies and exports that may have been affected by the update.


Detailed Process for Validation Check:

Doing the validation check is a very simple process. Let us assume that some metadata update was done in the 'Total_AuditTrail' hierarchy. So, our next steps would be:

  • Open the related exports. In this case, let us assume that the export that is affected is 'HFM_AuditTrail_ParentChild'.

  • Click on the 'Run Export Validations' icon at the top left corner of the screen (shown below as well).

 

  • DRM would run a check to ensure that the metadata changes meet the criteria of all the validations applied on the hierarchy or the export. If all the validations are met, we will get a success message like the one below.


Note: Running the validation check does not mean that we are exporting anything to any target system, and is completely unrelated to any metadata being exported to the target. So, feel free to run this check as many times as required.

Now, let's consider a scenario in which the validation check returns errors.


  • Let us assume that we are running the validation check on the export 'HFM_Area_MemberAttributes'. Click the 'Run Export Validations' icon.

  • DRM would run a check to verify whether all the validations are being met and return you to the home screen.



 


 



  • Click on the 'Validation Results' tab.


  • The Validation Results tab will highlight all the members which have failed the validations check.

     

  • Click on the Go icon to be directed to the nodes that have failed the validation check.

 

  • Clicking on the GO Icon again will take you to the position of that particular member in the Hierarchy.

The above process will help you in identifying the error within minutes so that it may be fixed at the earliest. After making the required modifications, run the validation checks again to ensure that the export has no more validation failures.

Alternatively, you may also run the validation check at the hierarchy level.

  • In the below example, let us assume that we had made some metadata updates in the ar.NVCLS hierarchy. When you have finished making the metadata changes, select the top member of the hierarchy, and right click. In the validate option, select 'Assigned'.


 


 


 


 


 



  • This will also highlight the validation errors in the same way.

    Note: Running the validation check whenever metadata is updated is a best practice suggested by Oracle.

March 7, 2018

ETL vs. ELT

In today's world, data warehousing is going through a rapid transformation driven by big data, sensor data, social media data, endless behavioral data about website/mobile app users, and so on. This gives rise to new methodologies; for example, ELT is heard increasingly often in today's analytical environment. We have used ETL in the past, but what happens when the "T" and the "L" are switched, given that both approaches solve the same need? A conceptual sketch contrasting the two follows below.
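To make the distinction concrete before diving deeper, here is a small, hypothetical Python sketch contrasting the order of operations in the two approaches. The functions and the in-memory "warehouse" are placeholders for illustration, not a real pipeline or library API.

def extract(source):
    # E: pull raw records from the source system (here, just return them as-is).
    return list(source)

def transform(rows):
    # T: shape the data (here, keep completed orders and compute line totals).
    return [{"id": r["id"], "total": r["qty"] * r["price"]}
            for r in rows if r["status"] == "complete"]

def etl(source, warehouse):
    # ETL: transform outside the warehouse, then load only the final result.
    warehouse["sales"] = transform(extract(source))

def elt(source, warehouse):
    # ELT: land the raw data in the warehouse first, then transform it there.
    warehouse["sales_raw"] = extract(source)
    warehouse["sales"] = transform(warehouse["sales_raw"])

orders = [{"id": 1, "qty": 2, "price": 10.0, "status": "complete"},
          {"id": 2, "qty": 1, "price": 99.0, "status": "cancelled"}]
warehouse = {}
elt(orders, warehouse)
print(warehouse["sales"])  # [{'id': 1, 'total': 20.0}]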


March 6, 2018

Big Data Processing Architecture

 

 

In today's world, businesses need to take decisions instantly, based on the data provided by business analysts, to stay on top. In the current scenario, business analysts need to process and analyze all types of data (structured, semi-structured, and unstructured) in a short span of time, which is not possible through traditional data warehouse concepts alone; to achieve this, we need to move to big data. When we have decided to move to big data, which is the bes