Infosys’ blog on industry solutions, trends, business process transformation and global implementation in Oracle.


August 24, 2018

FCCS Integration with Oracle Fusion Financials - End to end process and pain points

The integration of FCCS (or any other Hyperion cloud application) with Oracle Fusion Financials (Oracle GL cloud) is said to be a "direct" integration. However, when you start to configure it, you realize that it's not as "direct" as it appears. :)

So here I am, explaining the steps involved and the points to note while setting up this integration right from configuring the connection to Fusion Financials up to the Drill Through from FCCS back to Fusion Financials.

1.      Set up the Fusion Source System in FCCS Data Management

Set up a Source System of type Oracle Financials Cloud as you normally would.

Note: For the Drill Through URL, make sure to enter the correct Fusion Financials Cloud release URL format: "R12" for release R12 or earlier, and "R13" for the R13 release.

2.      Create a User in Fusion Financials to establish connectivity

Create a new user in Fusion Financials with the correct roles. This user will be configured in FCCS Data Management setup.

Note: The user has to be assigned the following roles:

i. Employee

ii. Financial Analyst

iii. General Accountant

iv. General Accounting Manager

3.      Configure Source Connection in Data Management

After the user is created in Fusion Financials, go back to Data Management and configure the source connection with the user created.

Enter the Web Service URL of the Fusion Web Service and click on Test Connection.

Once the connection test is successful, click on Configure to save this configuration.

4.      Initialize the Source system

Select the Oracle GL source system and click on Initialize. Initializing fetches all the metadata needed in Data Management from Fusion GL, such as ledgers and the chart of accounts. The initialize process may take some time. You can monitor the progress on the Workflow tab under Process Details.

5.      Period Mappings

After the Source System is initialized successfully, an Essbase application with the same name as the Fusion GL application gets created in Data Management. All the metadata from Fusion GL is fetched into this application. The next step is to configure the Period Mappings for this application.

It is set up under the Source Mapping tab for both Explicit and Adjustment period mapping types.

Select the Fusion application name as the Source Application and FCCS application as the Target. Add the period mapping.

Similarly, to bring in Adjustment data from Fusion GL, create the period mapping for the adjustment periods. Select Mapping Type as Adjustment and create the period mappings.

After this initial setup is complete, you are ready to create the Locations to import data from Fusion GL. The standard process of creating an Import Format, Location, Data Load Rule and Data Load Mappings can be followed to create the load locations. However, the Data Load Rule setup differs from file-based loads, as it has source system filters that can be set up to limit the data you import from Fusion GL.

6.      Data Load Rule setup

Open the Data Load Rule for the Fusion GL location. Most of the fields are populated with default values. You may change the filter conditions per your import requirements and save the data rule.

Review the source filters for each dimension and update as required.

With this, your "Direct" Integration setup is complete and you are ready to import data from Fusion GL and Export to FCCS.

7.      Drill Through setup

A very important feature of the direct GL integration is the ability to drill through from FCCS to Data Management right up to Fusion GL.

In Smart View, when you drill through for a particular amount, it takes you to the Data Management landing page.

When you right click on the amount on the DM landing page and select Drill Through to Source, it will take you to the GL Balance Inquiry landing page to see the details of the individual transaction records.

Note: Some prerequisites for performing the drill through successfully:

i. The user performing the drill through needs to have a Data Access Set assigned in order to view the Inquiry page. Without this, you get an "Invalid Data Access Set" error.

To assign a Data Access Set:

·         Log in to the Fusion GL application. On the Home Page, go to Setup and Maintenance.

·         Search for "Manage Data Access for User".

·         Query the user name for which this access is to be granted.

·         Add the Data Access Set for the roles assigned to this user.

 

ii. Another prerequisite is that the user must already be logged in to Fusion GL when performing the drill through. If the user is not logged in and clicks Drill Through to Source, you get the error "You can only drill from Smart View into detail balances but not to Account Inspector".

8.      Automation of GL Integration

The Fusion GL loads can be easily automated since they are not dependent on the presence of a source file.

You may automate it through Data Management by creating a Batch Definition and specifying the Data Load Rule.

You may also automate it completely, without having to log in to Data Management, using EPM Automate and Windows batch jobs; a minimal sketch follows. This automation is explained in detail in my blog here - http://www.infosysblogs.com/apps/mt.cgi?__mode=view&_type=entry&blog_id=29&id=10580
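For illustration only, here is a minimal Windows batch sketch of such an automation. It assumes the EPM Automate client is installed locally; the user name, encrypted password file, service URL and batch name below are placeholders, not values from this implementation:

@echo off
REM Minimal sketch - replace the user, password file, URL and batch name with your own.
SET EPM_AUTOMATE=C:\Oracle\EPM_Automate\bin\epmautomate.bat

REM Log in to the FCCS instance (password stored in an encrypted .epw file).
CALL %EPM_AUTOMATE% login svc_integration C:\Scripts\password.epw https://fccs-test.example.com

REM Run the Data Management batch definition that wraps the Fusion GL data load rule.
CALL %EPM_AUTOMATE% runbatch GL_Actuals_Batch

CALL %EPM_AUTOMATE% logout

This script can then be scheduled with Windows Task Scheduler to run the load without any manual intervention.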


August 7, 2018

OAC Essbase Application Migrations from On-Premise


Introduction

I am working on an Oracle EPM Cloud implementation project focused on migrating on-premise Essbase applications to Oracle Analytics Cloud (OAC) Essbase. OAC Essbase provides a few utilities that can be used to export applications from the on-premise environment and import them into OAC. This document explains how to migrate Essbase applications to OAC using these utilities.

 

Utilities

Below are the utilities that can be downloaded from the OAC environment. I downloaded the Essbase LCM utility and the Command Line Tool to my local machine for migrating on-premise applications to OAC Essbase.

                1. EssbaseLCMUtility

                2. CommandLineUtility

 


 

Prerequisites

       To use the Essbase LCM utility and Command line interface, Java JDK 8 should be installed and the JAVA_HOME path should be set.

       On-premise Essbase applications should be converted to Unicode mode (UTF-8 encoding) before migrating to the OAC Essbase environment. I used a MaxL script along the lines of the one below to convert the application to Unicode mode.
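The MaxL below is a minimal sketch of that script, not the original; the host name, credentials and application name (TEST) are placeholders, and the exact statement should be verified against the MaxL reference for your Essbase release:

login admin password on essbase-onprem.example.com;
/* Convert the application to Unicode mode (UTF-8) before taking the LCM export */
alter application TEST set type unicode_mode;
logout;
exit;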

I also changed the server-level variables to application-level variables before the export.

Exporting On-Premise Application

       I followed the below steps to export the on-premise application using the Essbase LCM utility.

       Open a command prompt and change the directory to 'D:\EssbaseLCMUtility', where I downloaded the utility.

       Run the below command to export the application from Essbase. This command exports the data and artifacts.

                EssbaseLCM.bat export -server test.xxx.local:1423 -user TestAdmin -password ******** -application TEST -zipfile TEST.zip -skipProvision

 

Export Application - Progress

 


Application Export Folder

-          The exported application archive is written to the EssbaseLCMUtility folder.


 

Importing the application into OAC

I manually copied the exported application archive to the CommandLineUtility home folder.

 


 
 


·         I executed the below CLI commands to import the application into OAC Essbase:

·         Set the Java Home and CLI Home:

                                SET CLI_HOME=D:\CommandLineUtility

                                SET JAVA_HOME=C:\Program Files\Java\jdk1.8.0_161

·         Logging into OAC:

                                esscs login -user TestAdmin -url https://test.xxx.com/essbase

·         Importing Application into OAC:

                                esscs lcmimport -v -zipfilename TEST.zip

 

Import Application - Progress

 


 

Application in OAC Environment

The application migration is successful. Go to the OAC application console and refresh the application list to see the migrated application.

 


 

Reference(s):

1. https://docs.oracle.com/en/cloud/paas/analytics-cloud/essug/preparing-migrate-premises-applications-cloud-service.html.

2. http://www.redstk.com/migrating-an-essbase-cube-to-the-oracle-analytics-cloud/.

 

Automating the Purge Job for Usage Tracking Data

 

Why Usage Tracking?

Usage Tracking helps measure and monitor user interactions with OBIEE. It provides deep insight into usage statistics and performance bottlenecks in the reporting application.

The Usage Tracking functionality creates an entry in the S_NQ_ACCT table whenever a report is executed in the application by a user.

This table captures metrics such as report execution time, report start/end time, user ID, etc.

When Usage Tracking is enabled, it helps determine which user queries are creating performance bottlenecks, based on query response time.

It also provides information on frequently accessed reports. Enabling it involves Enterprise Manager setup changes and RPD changes.

Why Automate the Data Purge?

For a reporting application that receives user requests every minute, Usage Tracking generates a huge volume of data. This data is written to the S_NQ_ACCT database table. Purging this table periodically is essential; otherwise, reports built on top of the usage tracking data would perform slowly. Manually purging this data requires intervention from the database team and adds overhead to application maintenance.

We can automate the data purge of the S_NQ_ACCT table using BI Publisher. The same approach works for any data-purging need, and the entire automation can be done with the technology stack that already exists with the BI application; there is no need to involve any additional tools.

Steps to Automate:

  1. Create a BI Publisher data source with a READ/WRITE user for the OBIEE metadata schema.

  2. Create a database package that deletes records from the S_NQ_ACCT table.

  3. Create a BI Publisher data model that invokes the DB package via an Event Trigger.

  4. Create a scheduled job that invokes the data model periodically.

 

  1. Create a BI Publisher Data Source

     

Go to BI Publisher Administration and click on JDBC Connection as shown below.

Click on "Add Data Source"

 

 

 

 

Enter the following details for a New Connection:

Data Source Name: Give the data source a name.

Username: Enter a user name that has read and write access to the OBIEE metadata schema.

Password: Enter the password associated with the user name.

Click on Test Connection; a confirmation message will be displayed.

  2. Create a Data Model to know how many records got purged.

    Go to New Tab -> Data Model

     

 

 

 

Enter the following details:

Default Data Source: Select the data source created in the step above from the dropdown.

Oracle DB Default Package: Enter the name of the package created in the database in the OBIEE metadata schema.

 

The package code is attached for reference (a minimal sketch is shown below).
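As an illustration only (the original attachment is not reproduced here), a package along the following lines would do the job. The package name and retention variable are assumptions; the key point is that the BI Publisher event trigger calls a function in the default package that returns TRUE or FALSE, and the data model parameter 'm' can then map to a package variable of the same name:

CREATE OR REPLACE PACKAGE bip_usage_purge AS
  -- Retention in days; BI Publisher sets this from the data model parameter 'm'.
  m NUMBER := 90;
  FUNCTION purge_usage_data RETURN BOOLEAN;
END bip_usage_purge;
/

CREATE OR REPLACE PACKAGE BODY bip_usage_purge AS
  FUNCTION purge_usage_data RETURN BOOLEAN IS
  BEGIN
    -- Purge usage tracking rows older than 'm' days (create the package in the
    -- OBIEE metadata schema so S_NQ_ACCT resolves without a schema prefix).
    DELETE FROM s_nq_acct WHERE start_dt < SYSDATE - m;
    COMMIT;
    RETURN TRUE;
  EXCEPTION
    WHEN OTHERS THEN
      ROLLBACK;
      RETURN FALSE;
  END purge_usage_data;
END bip_usage_purge;
/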

 

 

 

Click on Data Sets and select SQL Query as shown below.

 

Enter the following Details

Name: Enter a Data set name

Data Source: Select the newly created Data source from the dropdown

Write the Query and click on OK.

Query: SELECT COUNT(1) num_deleted_records FROM APLINST02BI_BIPLATFORM.S_NQ_ACCT WHERE start_dt < SYSDATE - (:m)

 

Create an Event Trigger so that the package is invoked when the data model is run by the scheduler.

Enter the Event trigger details as below

Name: Enter the name of the Event Trigger

Type: After Data

Language: PL/SQL

The Oracle Default Package will be populated automatically. Select the appropriate function from the Available Functions section and move it to the Event Trigger section by clicking the ">" icon.

 

Now click on Parameters and provide the parameter details to pass to the event trigger. This parameter passes the number of days used to purge data from the S_NQ_ACCT table with the below logic.

DELETE FROM Schema_Name.s_nq_acct WHERE start_dt < sysdate - m;

 

 

 

  3. Create an RTF template for scheduling a job to automate the purge

Go to New -> Report

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Click on Upload

Rtf Document for reference.

 

 

 

 

Save the report in the required folder after providing the above details.

Now click on More -> Schedule

Enter a value for the parameter 'm' (rows older than 'm' days will be purged).

 

 

 

 

Enter the details for the scheduler as below:

Name: Give the job a name.

Layout: This will be populated automatically.

Format: Select the required format of the report from the dropdown.

In the Destination section, select the destination as Email or FTP and provide the details accordingly.

In the Schedule tab, set the frequency at which the job should run.

 

Now click on Submit Job in the top right corner. A job will be scheduled as per the given details.

August 6, 2018

Oracle Data Visualization (DVD/DVCS) Implementation for Advanced Analytics and Machine Learning

Oracle Data Visualization Desktop (DVD) or Cloud Service (DVCS) is a very intuitive tool that helps every business user in the organization create quick and effective analytics very easily. People at all levels can leverage the benefit of blending and analysing data in just a few clicks and help the organization take informed decisions using actionable insights. Oracle DVD is a Tableau-like interactive tool that helps create analyses on the fly using any type of data from any platform, be it on-premise or cloud. The main benefits of Oracle DVD are below:

·         A personal, single-user desktop tool, or a SaaS cloud service, which can be leveraged by any business user in the organization

·         Enables the desktop user to work even offline

·         Completely private analysis of heterogeneous data

·         Business users have full control over their datasets/connections

·         Direct access to on-premise or cloud data sources

·         Administration tasks are removed completely

·         No remote server infrastructure to manage

Oracle DVD/DVCS enables business users to perform analysis using traditional methodologies, and also provides the capability to perform Advanced Analytics using R and to create predictive models with machine learning algorithms using Python.

This simple and intuitive tool provides a unique way to perform Advanced Analytics by just installing the required packages. DVML (Data Visualization Machine Learning library) is the tool that helps you install all the packages required for implementing machine learning algorithms for predictive analysis in one go.

The Install Advanced Analytics (R) utility helps you install all the required R packages to perform Advanced Analytics functions like regression, clustering, trend lines, etc. However, to run both utilities on your personal system/server, you need administrative access, access to the internet, and permission to automatically download all the required packages.


In the below slides, we discuss how to leverage Advanced Analytics and machine learning functions to provide predictive analytics for the organization.

In order to create a trend line graph, we need to enable Advanced Analytics and then pull the required columns into the analysis.

Trend line Function: This function takes 3 parameters to visualize the data in a trending format.

Syntax: TRENDLINE(numeric_expr, ([series]) BY ([partitionBy]), model_type, result_type)

Example : TRENDLINE(revenue, (calendar_year, calendar_quarter, calendar_month) BY (product), 'LINEAR', 'VALUE')

We need to create various canvases and put them into one storyline, providing a corresponding description for each canvas. While creating a trend line visualization, we need to provide the confidence level; by default, a 95% confidence level is used for the trend.



August 3, 2018

OAC-Essbase Data Load & Dimension Build Using CLI


 

Introduction

I am working on one of the Oracle EPM Cloud implementation projects focused on migrating on-premise Essbase applications to Oracle Analytics Cloud (OAC) Essbase. OAC Essbase provides a Command Line Interface (CLI) utility that can be used for data loads and dimension builds in OAC Essbase applications. This document explains how to use the utility for data loads and dimension builds in OAC.


Utilities

Command Line Utility - We can download the Command Line Tool from the OAC Essbase instance to our local machine to perform Essbase data load and dimension build tasks.


Setting up CLI environment

  • Open the command prompt and change the directory to CLI home directory.
  • To use Command line interface, Java JDK 8 should be installed and the JAVA_HOME path should be set
  • Set the CLI Home and Java Home:

                                SET CLI_HOME=D:\CommandLineUtility

                                SET JAVA_HOME=C:\Program Files\Java\jdk1.8.0_161

 

Logging into OAC Essbase through CLI

Before performing Dimension build and data load activities, we need to be logged into OAC Essbase.

Logging into OAC using admin id:

D:\CommandLineUtility> esscs login -user TestAdmin -password ****** -url https://test.OAC.com/essbase

                user " TestAdmin " logged in with "service_administrator" role

                Oracle Analytics Cloud - Essbase version = 12.2.1.1.112, build = 211

 

Create a Database Local Connection

The DB local connection can be created using the CLI command createLocalConnection. It takes all the required JDBC connection details as arguments.

Command Syntax:

D:\CommandLineUtility>esscs createLocalConnection -name oraConn -connectionString jdbc:oracle:thin:@DevDW:1XXX/DevID -user DB_USER

                Connection already exists, it will be overwritten

                Enter Password:

                User credentials stored successfully



Essbase Dimension Build

  • Run the dimbuild command with the stream option.
  • A database query is required either in the rules file or as an argument to dimbuild; if it is not given in the command, it is taken from the rules file.
  • The streaming API is used to push the results from the database to the cube.

Command Syntax:

D:\CommandLineUtility>esscs dimbuild -application TEST -db TEST -rule Acct.rul -stream -restructureOption ALL_DATA -connection oraConn

                Streaming to Essbase...

                Streamed 9 rows to cube

 

Essbase Data Load

  • Run the dataload command with the stream option.
  • A database query is required either in the rules file or as an argument for the data load; if it is not given in the command, it is taken from the rules file.

Command Syntax:

D:\CommandLineUtility>esscs dataload [-v] -application TEST -db TEST -rule DataLoad.rul -stream -connection oraConn

                Streaming to Essbase...

                Streamed 10 rows to cube
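Because these CLI commands are plain executables, they can also be chained in a small Windows batch script for scheduled loads. The sketch below is illustrative only and simply strings together the commands shown above; the paths, credentials, application, database and rule names are placeholders:

@echo off
REM Illustrative sketch - replace paths, credentials, application and rule names with your own.
SET CLI_HOME=D:\CommandLineUtility
SET JAVA_HOME=C:\Program Files\Java\jdk1.8.0_161
CD /D %CLI_HOME%

REM Log in to OAC Essbase.
CALL esscs login -user TestAdmin -password ******** -url https://test.OAC.com/essbase

REM Build dimensions, then load data, both streamed through the local DB connection.
CALL esscs dimbuild -application TEST -db TEST -rule Acct.rul -stream -restructureOption ALL_DATA -connection oraConn
CALL esscs dataload -application TEST -db TEST -rule DataLoad.rul -stream -connection oraConn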


Reference(s):

1.https://docs.oracle.com/en/cloud/paas/analytics-cloud/essug/command-line-interface-cli.html.

2.https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=442454943227210&parent=EXTERNAL_SEARCH&sourceId=HOWTO&id=2259032.1&_afrWindowMode=0&_adf.ctrl-state=aphtjq7kl_132.

 


May 31, 2018

Development of eText template based BIP reports

Objective:

The objective of this material is to provide a beginner's guide (similar to a "Hello World" program in the Java world) for BIP eText report developers. Using this material, any report developer with basic knowledge of BIP can start working on eText templates.

Introduction

An eText template is an RTF-based template used in generating text output intended for electronic communication. For the same reason, the eText templates must follow very specific format instructions for exact placement of data. At runtime, BI Publisher applies this template to an input XML data file to create an output text file that can be transmitted to a bank or other customer.

eText templates are categorized as Electronic Funds Transfer (EFT) and Electronic Data Interchange (EDI).

An EFT(Fixed Position) is an electronic transmission of financial data and payments to banks in a specific fixed-position format flat file (text).

EDI(Delimiter Based) is similar to EFT except it is not only limited to the transmission of payment information to banks. It is often used as a method of exchanging business documents, such as purchase orders and invoices, between companies. EDI data is delimiter-based, and also transmitted as a flat file (text).

In this series, I am developing BIP reports with Delimiter based and Fixed position templates.

Delimiter Based eText Report:

For creating a delimiter-based eText report, please follow the steps below.

1. Use the following sample data and create a data model.

<DATA_DS>
  <G_1>
   <Bank>600</Bank>
   <Obligor>600664421</Obligor>
   <Customer_Name>XYz Indus Valley bank</Customer_Name>
  </G_1>
  <G_1>
   <Bank>601</Bank>
   <Obligor>600664421</Obligor>
   <Customer_Name>O'Commerce bank</Customer_Name>
  </G_1>
  <G_1>
   <Bank>602</Bank>
   <Obligor>600664421</Obligor>
   <Customer_Name>ABC bank</Customer_Name>
  </G_1>
</DATA_DS>

2. Create an RTF template file. To begin with, copy the following content into a Word document and save it as Delimetr_eText.rtf.


XDO file name: Delimetr_eText.rtf
Mapping of Payment Format: US NACHA Payments EDI Format
Date: 4/15/2018

Delimiter Format Setup:

Hint: Define formatting options...

<TEMPLATE TYPE>             DELIMITER_BASED
<OUTPUT CHARACTER SET>      iso-8859-1
<CASE CONVERSION>           UPPER
<NEW RECORD CHARACTER>      Carriage Return


Hint: Format Data Records Table for DELIMITER_BASED

 



<LEVEL>            DATA_DS
<MAXIMUMLENGTH>    <FORMAT>    <DATA>               <COMMENTS>
<NEW RECORD>       TableHeader
20                 Alpha       'Bank'
1                  Alpha       '|'
20                 Alpha       'Obligor'
1                  Alpha       '|'
20                 Alpha       'Customer_Name'
1                  Alpha       '|'

<LEVEL>            G_1
<MAXIMUMLENGTH>    <FORMAT>    <DATA>               <COMMENTS>
<NEW RECORD>       CLRDAta
20                 Alpha       Bank
1                  Alpha       '|'
20                 Alpha       Obligor
1                  Alpha       '|'
20                 Alpha       Customer_Name
1                              '|'

<END LEVEL>        G_1

<END LEVEL>        DATA_DS


3. Create a BIP report with the above data model (ref. point 1) and upload the Delimetr_eText.rtf file with the template type as eText.

4. Execute the report. You can expect an outcome like the one below:
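The original output screenshot is not reproduced here; based on the sample data and the template above, the generated flat file should look roughly like this (exact casing depends on the case conversion option):

BANK|OBLIGOR|CUSTOMER_NAME|
600|600664421|XYZ INDUS VALLEY BANK|
601|600664421|O'COMMERCE BANK|
602|600664421|ABC BANK|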

5. For detailed steps, kindly go through the Oracle documentation: https://docs.oracle.com/cd/E10091_01/doc/bip.1013/b40017/T421739T481436.htm

Fixed Position Based Report:

For creating a fixed position based report, please follow the steps below.

1. Use the following sample data and create a data model.

<DATA_DS>
  <G_1>
   <Bank>600</Bank>
   <Obligor>600664421</Obligor>
   <Customer_Name>XYz Indus Valley bank</Customer_Name>
  </G_1>
  <G_1>
   <Bank>601</Bank>
   <Obligor>600664421</Obligor>
   <Customer_Name>O'Commerce bank</Customer_Name>
  </G_1>
  <G_1>
   <Bank>602</Bank>
   <Obligor>600664421</Obligor>
   <Customer_Name>ABC bank</Customer_Name>
  </G_1>
</DATA_DS>


2. Create an RTF template file. To begin with, copy the following content into a Word document and save it as FixedPositionr_eText.rtf.

XDO file name: FixedPositionr_eText.rtf
Mapping of Payment Format: US NACHA Payments EDI Format
Date: 4/15/2018

FIXED POSITION Format Setup:

Hint: Define formatting options...

<TEMPLATE TYPE>             FIXED_POSITION_BASED
<OUTPUT CHARACTER SET>      iso-8859-1
<CASE CONVERSION>           UPPER
<NEW RECORD CHARACTER>      Carriage Return


Hint: Format Data Records Table for FIXED_POSITION_BASED


<LEVEL>         DATA_DS
<POSITION>      <LENGTH>    <FORMAT>    <PAD>      <DATA>               <COMMENTS>
<NEW RECORD>    TABLE_HEADER
1               15          Alpha       R, ' '     'Bank'
16              3           Alpha       R, ' '     '|'
19              15          Alpha       R, ' '     'Obligor'
35              3           Alpha       R, ' '     '|'
39              50          Alpha       R, ' '     'Customer_Name'
90              3           Alpha       R, ' '     '|'

<NEW RECORD>    ENTRY_DETAIL_A
2               90          Alpha       R, ' '     '________________________________________________________________________________________________________'

<LEVEL>         G_1
<POSITION>      <LENGTH>    <FORMAT>    <PAD>      <DATA>               <COMMENTS>
<NEW RECORD>    TABLE_HEADER
1               15          Alpha       R, ' '     Bank
16              3           Alpha       R, ' '     '|'
19              15          Alpha       R, ' '     Obligor
35              3           Alpha       R, ' '     '|'
39              50          Alpha       R, ' '     Customer_Name
90              3           Alpha       R, ' '     '|'

<NEW RECORD>    NewLine
2               90          Alpha       R, ' '     '________________________________________________________________________________________________________'

<END LEVEL>     G_1

<END LEVEL>     DATA_DS


 

3. Create a BIP report with the above data model (ref. point 1) and upload FixedPositionr_eText.rtf with the template type as eText.

4. Execute the report. You might expect an outcome as below:

5. For detailed steps, kindly go through the Oracle documentation: https://docs.oracle.com/cd/E10091_01/doc/bip.1013/b40017/T421739T481436.htm


March 26, 2018

Driving Business Intelligence with Automation

Business Intelligence applications have been around for a very long time and have been widely used to represent facts using reports, graphs and visualizations. They have changed the way many companies use tools, in a broader spectrum and in a more collaborative way. However, users these days expect these tools to operate autonomously while remaining connected to a vast network that can be accessed anywhere. As BI applications analyze huge sets of data in raw format, it can be difficult to draw intuitions and insights from them. Automation can help extend the use of Business Intelligence beyond its current capabilities by keeping a few points in mind.


Intuition Detection
Automate the intuition.

Business Intelligence should be able to ask the right questions in a fraction of a second and provide insights without any manual interference. Artificial Intelligence can do this better, using the power of supercomputers to help Business Intelligence take a deep dive into the data. It can try all the available possibilities to find patterns, leading to the discovery of business trends that serve the needs of the user.


Intuition Blending
Automate what intuitions are prioritized.

The very idea of detecting insights in Business Intelligence should be automated in a way that prioritizes the intuitions and ranks the worthy ones higher based on user needs. Artificial Intelligence then compares all the possible insights and establishes relationships between them, helping users work on multiple insights at once.


Common Dialect
Automate how intuitions are defined.

Business Intelligence tools existing so far have done a boundless job of analyzing huge amounts of data through reports, graphs and several other graphical representations. But users still have to figure out insights and the best possibilities based on the analytics. This human factor leaves room for misconception, delusion and misinterpretation. This is where Artificial Intelligence takes Business Intelligence to the next level and provides insights in an easily understandable language like English. Nevertheless, graphs and reports can still be used to represent this accurate and widely comprehended solution.


Common Accessibility
Create a just-one-button-away experience.

Finding an insight should be very accessible and as simple as a click away. This is possible with Artificial Intelligence, where BI is automated to allow the user to get professional insights instantly just by clicking a button. Users can then easily access the data without any prior knowledge of the tool or of data science. Tools like Einstein Analytics from Salesforce have already implemented this, attracting a huge set of users all over the globe.


Cut-Off Sway
Trust the data to reduce human fault and anticipation.

Artificial Intelligence generally reduces manual intervention, thus avoiding human errors. Bias can creep in through individual views, misinterpretation, influenced conclusions, deep-rooted principles and faulty conventions. Automation by means of Artificial Intelligence removes these factors and reduces the risk of being misled by flawed information. It purely trusts the data.


Hoard Intuitions
Increase efficiency by integrating insights.

Integrating insights into the application alongside the factual data using Artificial Intelligence may let users stop using the BI tool directly. This ultimately makes the application an engine for other software, increasing the effectiveness of the tool. Users can then spend more time making wise, profitable choices rather than putting unwanted effort into using the tool. This can change the minds of many business users who presently do not use any sort of Business Intelligence tool.


March 15, 2018

First Timer for OAC- Essbase on Cloud..Let's get Hand's On -Rocket Speed!


Well, the launch of a rocket needs to consistently be on par with 4.9 miles a second, and yes, that is about 20 times the speed of sound. That is what defines its greatness and is even the prime reason for a successful space launch.

There is going to be a blog series from Infosys on the Essbase on Cloud component of the OAC (Oracle Analytics Cloud) service [should I say "oak" or "o.a.c"? I rather like the former, as it gained popularity that way over the last 3 years]. It will consist of 6 blogs in a staircase-step fashion that will enable on-premise Essbase consultants and developers, and also newbies from the BI world, to gain control over OAC Essbase at rocket speed! We start with the landing page in blog 1 and will end with using Essbase as a sandbox for multiple data sources in blog 6 (coming soon).

OAC was released as the most comprehensive analytics offering in the cloud, bringing together:

a)       business intelligence

b)      big data analytics

c)       embedded SaaS analytics

Lately, Essbase on Cloud got released in the second half of 2017, and OAC has now become even more packed with the introductions of Synopsis, Mobile HD, and the Day by Day app.

 

The first step... set up a user in OAC Essbase to move on.

As a prerequisite, you need admin credentials to log in and set up access for the other folks on your team!

OAC Essbase login page: Once you click on the link for Essbase Analytics Cloud, you will see the entry gate to enter the admin user name and password. Doing that and clicking the Sign-In button will take you to the landing page.


This is the first screen, called the landing page. Since I have not created any application yet, you see an empty list in the left column. On the right is a set of neatly arranged cards providing great ease of access - all that you might need in one view.


 

 

Turn your focus to the Security icon to accomplish our task; it is clearly visible on the landing page. For those accustomed to the cards concept in EPM SaaS products, this will not be a surprise, just presented on a white background.

 


 

Once you are there, you will see three buttons on the right - "Import", "Export", "Create" - very much self-explanatory as to what they are meant for! :) By now you know to click on the "Create" button to create new users:


Provide the ID, name, email, password, and role for the new user:


 

The users currently available on OAC Essbase will be listed here.


Another user can be created via the same sequence of steps, or via the Import option for bulk creation of users.

Please be cognizant that the above method is different from the Users and Roles managed via the IDCS Console. We will drill into the BI Analytics service instance-specific application roles and authorized users in a sequel. The goal of the security section revolves around access and actions. A user can access only the data that is appropriate for him/her; this is achieved by applying access control in the form of catalog permissions on the Catalog and Home pages. Secondly, a user can perform only those actions that are appropriate for him/her; this is achieved by applying user rights in the form of privileges on the Administration page.

Now that my ID and access have been set up, let's look at step 2, application creation, in blog 2 (coming soon - as soon as tomorrow!).

Thank you,

Prithiba Dhinakaran

 

March 12, 2018

Automate your File based Data Loads in FDMEE with Open Batches

Automate your File based Data Loads in FDMEE with Open Batches


In one of my recent implementation assignments, we were faced with the requirement of eliminating manual intervention and automating our data loads in Financial Data Management Enterprise Edition (FDMEE), the target systems being Hyperion Financial Management (HFM) and Hyperion Tax Provision (HTP). Open Batches in FDMEE are an effective way of automating file-based data loads. Though the standard user guides contain precious little on creating and running Open Batches and are not entirely comprehensive, Open Batches are easy to use and implement. In this blog, I attempt to explain in detail the process of creating Open Batches. These details come purely from my personal research and my experience on past assignments.


 


What is an Open Batch?


Batches play a critical role in automating the Data/Metadata load process. By creating batches, we can easily club different rules together or execute different rules in an automated way by leveraging the scheduling capabilities of FDMEE.



An Open Batch is instrumental in importing data into a POV in FDMEE from a file-based data source. Open Batches can be automated to run at scheduled times. Most importantly, Open Batches pitch in when we have to load more than one data file for a particular location. Also, Open Batches provide the flexibility of loading these multiple data files in 'Serial' or 'Parallel' mode.



Configuring and Executing an Open Batch


Let us assume that we have 2 FDMEE Locations and from each of these locations, we have to load 2 data files:


Step 1: Setting up the required FDMEE Locations


We create 2 locations as shown below (without deep-diving into the details of an FDMEE Location)

a. LOC_Sample_Basic1

b. LOC_Sample_Basic2




Step 2: Setting up the Open Batches



A new Open Batch may be created from: Setup -> Batch -> Batch Definition. Click on 'Add' to add a new Batch Definition.



Key in the basic details such as NAME (Name of the Open Batch), TARGET APPLICATION, WAIT FOR COMPLETION and DESCRIPTION. Apart from these, there are a few other details as below:


 


  1. Type: Select 'Open Batch'.

     

  2. Execution Mode: Select 'Serial' if you wish to load all the data files in that location one by one. Select 'Parallel' for loading all the files simultaneously.

     

  3. Open Batch Directory: This is where the User Guides are specifically vague. All we need to do is specify the name of the folder where the data files will be placed. But we need to take special care that this folder is created one level below the 'openbatch' folder. So, assuming that we need to place the data files in a folder 'Test1', this folder will be created as shown below: