Infosys’ blog on industry solutions, trends, business process transformation and global implementation in Oracle.


August 27, 2018

Oracle Cloud R13 - One Time Payment Request

Oracle Fusion Financials Cloud R13 offers functionality to import invoice and payment details as a Payables Payment Request from external systems using a predefined FBDI template.

Supplier for the request

  • Not an existing supplier in Oracle
  • Entered as a Party
  • Cannot be queried on the Manage Suppliers page
  • Cannot be reused for a standard invoice
  • Bank details (account number, bank, branch) must be entered in the import data

FBDI Template - Payables Payment Request Import

  • The 18A template can be downloaded from the below link:

Payables Payment Request Import

  • Key template columns:

Column Name - Details

  • Transaction Identifier - Invoice identifier to uniquely identify the OTP request
  • Party Name - Name of the supplier/party
  • Party Original System Reference - Reference information for the party from the source system
  • Party Type - Supplier type; can only be Person, Organization, Group or Relationship
  • Location Original System Reference - Source system reference for the location
  • Country - Country of the party address
  • Address Line 1, 2, 3, 4 - Address lines 1-4 of the party address
  • City, State, Postal Code - City, state and postal code of the party address
  • Supplier Bank Details - Account country, currency, account number, type, bank and branch number
  • Business Unit & LE - Business unit and legal entity
  • Source - One Time Payments (defined as a Payables lookup with type = Source)
  • Invoice Details - Invoice number, date, currency, description, pay group and payment terms
  • Payment Method - Mode of payment
  • Liability Combination - Liability account code combination; if left blank, the value defaults from the setup

Mandatory Setups

1. Enable the One Time Supplier feature for the instance.

2. Add the OTP code as a Payables source.

3. Add the source at Trading Community Source System.

4. Enter default Liability and Expense accounts at 'Manage Common Options for Payables and Procurement'.

5. Enter the default location at 'Manage Business Unit'.

Creating Payment Request Invoice and Payment

1. Prepare the FBDI template with the payment request data.

2. Generate the csv and zip files from the template.

3. Upload the zip file to the UCM server (account: fin/payables/import).

4. Run the 'Load Interface File for Import' process to load the data into the interface table.

5. Run the 'Import Payables Payment Request' process with Source: OTP.

6. The invoice is created.

7. Payment can be made by selecting 'Pay in Full' from the invoice actions or by creating a new payment at Manage Payments.

1099 Reporting for Payables Payment Requests

1099 reporting is not supported for one-time payments; the assumption is that the source application generating the one-time payments handles any tax requirements. If payments handled within Oracle Cloud Financials require 1099 reporting, the supplier needs to be created in Oracle and paid by invoice.

 

August 24, 2018

FCCS Integration with Oracle Fusion Financials - End to end process and pain points

The integration of FCCS (or any other Hyperion cloud application) with Oracle Fusion Financials (Oracle GL Cloud) is said to be a "direct" integration. However, when you start to configure it, you realize that it's not as "direct" as it appears.

So here I am, explaining the steps involved and the points to note while setting up this integration right from configuring the connection to Fusion Financials up to the Drill Through from FCCS back to Fusion Financials.

1. Set up the Fusion Source System in FCCS Data Management

Set up the Source System of type Oracle Financials Cloud as you normally would.

Note: For the Drill Through URL, make sure to select the Fusion Financials Cloud release URL format: "R12" for release R12 or earlier and "R13" for the R13 release.

2. Create a user in Fusion Financials to establish connectivity

Create a new user in Fusion Financials with the correct roles. This user will be configured in the FCCS Data Management setup.

Note: The user has to be assigned the following roles:

i. Employee
ii. Financial Analyst
iii. General Accountant
iv. General Accounting Manager

3. Configure the Source Connection in Data Management

After the user is created in Fusion Financials, go back to Data Management and configure the source connection with the user created.

Enter the Web Service URL of the Fusion Web Service and click on Test Connection.

Once the connection test is successful, click on Configure to save this configuration.

4. Initialize the Source System

Select the Oracle GL source system and click Initialize. Initializing fetches all the metadata from Fusion GL that Data Management needs, such as ledgers and charts of accounts. The initialize process may take some time; you can monitor its progress on the Workflow tab under Process Details.

5. Period Mappings

After the Source System is initialized successfully, an Essbase application with the same name as the Fusion GL application is created in Data Management, and all the metadata from Fusion GL is fetched into it. The next step is to configure the Period Mappings for this application.

They are set up under the Source Mapping tab for both the Explicit and Adjustment period mapping types.

Select the Fusion application name as the Source Application and the FCCS application as the Target, then add the period mapping.

Similarly, to bring in adjustment data from Fusion GL, create period mappings for the adjustment periods: select Mapping Type as Adjustment and create the mappings.

After this initial setup is complete, you are ready to create the Locations to import data from Fusion GL. The standard process of creating an Import Format, Location, Data Load Rule and Data Load Mappings can be followed to create the load locations. However, the Data Load Rule setup differs from file-based loads because it has source system filters that can be set up to limit the data you import from Fusion GL.

6. Data Load Rule setup

Open the Data Load Rule for the Fusion GL location. Most of its fields come pre-populated with default values. You may change the filter conditions per your import requirements and set up the data rule.

Review the source filters for each dimension and update them as required.

With this, your "direct" integration setup is complete and you are ready to import data from Fusion GL and export it to FCCS.

7. Drill Through setup

A very important feature of the direct GL integration is the ability to drill through from FCCS to Data Management right up to Fusion GL.

In Smart View, when you drill through for a particular amount, it takes you to the Data Management landing page.

When you right click on the amount on the DM landing page and select Drill Through to Source, it will take you to the GL Balance Inquiry landing page to see the details of the individual transaction records.

Some prerequisites to perform the drill through successfully:

i. The user performing the drill through needs to have a Data Access Set assigned in order to view the Inquiry page. Without it, you get an "Invalid Data Access Set" error.

To assign a Data Access Set:

  • Log in to the Fusion GL application. On the Home Page, go to Setup and Maintenance.
  • Search for: Manage Data Access for User.
  • Query the user name for which this access is to be granted.
  • Add the Data Access Set for the roles assigned to this user.

ii. The user has to be already logged in to Fusion GL while performing the drill through. If the user is not logged in and clicks Drill Through to Source, the error "You can only drill from Smart View into detail balances but not to Account Inspector" is displayed.

8. Automation of GL Integration

The Fusion GL loads can be easily automated since they do not depend on the presence of a source file.

You may automate them through Data Management by creating a Batch Definition and specifying the Data Load Rule.

You may also automate them completely, without having to log in to Data Management, using EPM Automate and Windows batch jobs, as sketched below. This automation is explained in detail in my blog here - http://www.infosysblogs.com/apps/mt.cgi?__mode=view&_type=entry&blog_id=29&id=10580
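As a minimal sketch only (assuming EPM Automate is installed and on the PATH of a Windows client; the rule name GL_TO_FCCS_RULE, period names, password file, URL and load modes below are placeholders, and valid import/export mode values depend on the target application), the batch job could look like this:

@echo off
rem Hypothetical EPM Automate job that runs the Data Management data load rule on a schedule
set EPM_URL=https://your-fccs-instance.oraclecloud.com

call epmautomate login serviceadmin C:\automation\password.epw %EPM_URL%
if %errorlevel% neq 0 exit /b %errorlevel%

rem Run the direct GL integration rule for the required period range
call epmautomate rundatarule GL_TO_FCCS_RULE Jul-18 Jul-18 REPLACE MERGE

call epmautomate logout

Such a script can then be scheduled through Windows Task Scheduler to refresh FCCS from Fusion GL at the required frequency.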


August 22, 2018

Oracle Exadata, MPP Databases or Hadoop for Analytics

INTRODUCTION

There is a plethora of databases today: SQL databases, NoSQL databases, open source databases, columnar databases, MPP databases and so on. Oracle, a leader in the relational database space, is often compared to these. So let us look at some basic differences between Oracle and other players such as MPP databases (Teradata, Vertica, Greenplum, Redshift, Netezza, etc.) and Hadoop for various analytics workflows.


ARCHITECTURE

Oracle, Teradata, Vertica, Greenplum, PostgreSQL, Redshift and Netezza are all relational databases. However, Teradata, Vertica, Greenplum, PostgreSQL, Redshift and Netezza are massively parallel processing (MPP) databases, which have parallelism built into each component of their architecture. They have a shared-nothing architecture and no single point of failure. On the other hand, the Oracle database has a shared-everything architecture. Even Exadata (Oracle's engineered system, or database appliance, built specifically for analytics/OLAP) is based on the existing Oracle engine, which means any machine can access any data; this is fundamentally different from Teradata, as shown in the diagram below. MPP databases are thus able to break a query into a number of database operations that are performed in parallel, increasing the performance of the query.

This brings us to the next logical question: how are these MPP databases different from Hadoop? Hadoop is also an MPP platform. The more obvious answer is that MPP databases are used for structured data, while Hadoop can be used for structured or unstructured data with HDFS, a distributed file system. Also, while MPP databases introduce parallelism mainly in the storage and access of data, Hadoop with its MapReduce framework is used for batch processing of large amounts of structured and unstructured data, more like an ETL tool. It is therefore a data platform.


[Figure: MPP (shared-nothing) vs. Oracle (shared-everything) architecture - MPP_Oracle_Architecture_small.png]

USER INTERFACE

Oracle and most MPP databases use a SQL interface, while Hadoop uses MapReduce programs or Spark, which are JVM-based programming interfaces. The Apache Hive project, however, aims at providing a SQL interface on top of MapReduce programs.


INFRASTRUCTURE

The other difference between these systems is that most MPP databases, like Teradata, and Oracle Exadata run on proprietary hardware or appliances, while Hadoop runs on commodity hardware.


SCALABILITY

Oracle Exadata and most MPP databases scale vertically on proprietary hardware, while Hadoop scales horizontally, which results in a very cost-effective model, especially for large data storage.


STORAGE

MPP databases use columnar data storage techniques, while Oracle uses row-wise storage, which is less efficient than columnar storage in both disk space usage and performance. However, Oracle Exadata uses Hybrid Columnar Compression (HCC), in which data is organized into compression units built above the rows of data; compression is achieved by storing repeating values only once. Thus the performance of Oracle Exadata is considerably better than that of a row-wise Oracle database. Hadoop, on the other hand, uses HDFS, which is distributed file storage.


USE CASES

Oracle is often the choice of database for Analytics where Oracle ERP systems are deployed. Oracle Exadata can meet OLAP workflow/ DSS requirements and has many Advanced Analytics options. More details can be seen at Oracle's Machine Learning and Advanced Analytics 12.2c and Oracle Data Miner 4.2 New Features.

 

Teradata is the database of choice for pure OLAP workflows, with its massively parallel processing capabilities, especially when data volumes are high. It is also preferred for low-latency analytics requirements where an RDBMS is still needed. However, it is losing market share, as migrating off Teradata is a priority for many cost-conscious CEOs due to its prohibitive year-on-year expense. Another reason for migrating off Teradata is the adoption of new-generation data analytics architectures with support for unstructured data.

 

The above sets the stage for Hadoop, with its support for big data, which can be structured or unstructured. It provides a platform for data streaming and analytics over large amounts of data coming from IoT sensors, social data from various platforms, weather data or spatial data. It is based on open source technologies and uses commodity hardware, which is another attraction for many companies moving from a data warehouse to a data lake ecosystem.

 

CONCLUSION

Thus, it is important to consider the use case a database is designed to serve before deciding the best fit for your big data ecosystem. Making a decision solely on the amount of data (petabytes or terabytes) that needs to be stored might not be accurate. Other factors that can influence the decision are the overall IT landscape/preferred infrastructure platform, developer skills, cost and future requirements, which are specific to each individual organization. So although new-age databases are opening new opportunities for data storage and usage, the traditional RDBMS will most likely not go away in the near future.

 

REFERENCES

https://docs.oracle.com/cd/E11882_01/server.112/e17157/architectures.htm#HAOVW215

https://downloads.teradata.com/blog/carrie/2015/08/teradata-basics-parallelism

 

 

August 9, 2018

Gamify your Sales Automation (Baseball theme example included!)

Looking for a way to bring some excitement, motivation and a sense of competition within your sales force but do not necessarily want to spend extra dollars on incentive payouts? Do you want to improve user adoption of your sales automation application but not spend time and effort on retraining or having to listen to complaints from the sales team about how the system is no good? Gamification may be the answer that you are looking for.

Gamification in sales automation refers to creating game-like scenarios which include principles like garnering points, rankings, competition, etc. to motivate your sales teams in a non-monetary way, although in some cases points earned may also be redeemed for non-cash incentives if the organization so chooses. The objectives for gamification may be manifold:

Process adherence - Organizations may have trouble getting their sales teams to follow recommended sales processes. Examples may include updating contact information or capturing minutes of meetings with clients. Such activities may even seem trivial to sales managers, who do not wish to spend time discussing these items with their teams and would rather spend their time on more 'important' matters like specific opportunity details, sales forecasts, pipeline reviews, etc. Gamification can address such situations effectively by reinforcing ideal behavior through the award of points to sales reps who follow the recommended sales process.

User Adoption - Organizations implement sales automation software only to find that their sales teams couldn't be bothered to use it. Gamification can be a reason for sales reps and managers to start using the application and can lead them to understand the benefits of sales automation.

Sales Engagement - Sales resources tend to work in isolation. They are on the road constantly meeting clients and prospects, and there isn't enough time to build employee engagement. Internal office meetings tend to be formal reviews and planning exercises, which can be quite serious affairs. Gamification can help reduce tension within sales teams, bring some fun into the office culture, and foster good-natured competition and a feeling of 'know thy team'.

Gamify Sales Activities

Below are some models or examples of how simple gamification can be designed for routine sales activities using a points system. Sales reps can be notified of the points they have accumulated and also be ranked vis-a-vis other sales reps.

Gamify activities performed on Lead and Opportunity objects by assigning them suitable points. For example

  • Creating a Lead gets you 1 point.
  • If the Lead is Qualified, you get 2 Points.
  • If the Lead is converted to an Opportunity, you get 3 points, and so on.
Similar gamification can be performed on Account and Contact objects. For example,

  • Creating an Account gets you 1 point
  • Adding a contact to the account gets you 2 points
  • If the contact is a decision maker (Title VP or higher), you get 3 points and so on.
Gamify Sales Performance Metrics

Sales performance metrics can be mapped to sports themes. Below is an example of mapping them to a baseball theme; a sketch of how such metrics could be computed follows below. Such gamification can then be included as part of the sales rep's profile, which is viewed by everybody in the organization. Similar themes around other sports or games can be creatively designed.

  • Batting Average- % of Leads that get converted to Opportunity. A batting average of 0.250 means 1 out of 4 Leads are getting converted
  • On Base Percentage- % of Leads that get converted to Opportunity but also includes walks (standalone opportunities)
  • Slugging Percentage- % of Won Opportunities upon total opportunities (A slugging percentage of 0.500 means 50% of Opportunities are won)
  • Home Runs- Number of high value opportunities won (say above 10k)
  • Hits- Number of Opportunities that reached a particular sales stage (say Submit Quote)
  • Runs Scored- Number of Opportunities Won
  • Assists- Number of Opportunities won where the salesrep is not the owner but on the sales team
  • Errors- Number of Stale Opportunities
Note that control functionalities may have to be built into the game mechanics to ensure that sales users don't enter dummy or wrong data to win points or score more.
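As an illustration only (the tables and columns below - leads, opportunities, owner_id, status, amount - are hypothetical and would map to whatever objects your sales automation data model exposes), a few of the baseball-style metrics could be computed per sales rep with a query along these lines:

-- Hypothetical schema: leads(owner_id, status), opportunities(owner_id, status, amount)
WITH lead_stats AS (
  SELECT owner_id,
         COUNT(*) AS total_leads,
         SUM(CASE WHEN status = 'CONVERTED' THEN 1 ELSE 0 END) AS converted_leads
    FROM leads
   GROUP BY owner_id
),
opp_stats AS (
  SELECT owner_id,
         COUNT(*) AS total_opps,
         SUM(CASE WHEN status = 'WON' THEN 1 ELSE 0 END) AS won_opps,
         SUM(CASE WHEN status = 'WON' AND amount > 10000 THEN 1 ELSE 0 END) AS big_wins
    FROM opportunities
   GROUP BY owner_id
)
SELECT l.owner_id                                             AS sales_rep,
       ROUND(l.converted_leads / NULLIF(l.total_leads, 0), 3) AS batting_average, -- leads converted to opportunities
       ROUND(o.won_opps / NULLIF(o.total_opps, 0), 3)         AS slugging_pct,    -- opportunities won / total opportunities
       o.won_opps                                             AS runs_scored,
       o.big_wins                                             AS home_runs        -- wins above 10k
  FROM lead_stats l
  JOIN opp_stats o ON o.owner_id = l.owner_id;

The same aggregations could equally be built as analytics inside the sales automation tool itself; the SQL is only meant to make the formulas concrete.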

Hope the above gave you some ideas on how you can gamify your sales teams!

August 7, 2018

OAC Essbase Application Migrations from On-Premise


Introduction

I am working on an Oracle EPM Cloud implementation project focused on migrating on-premise Essbase applications to Oracle Analytics Cloud (OAC) Essbase. OAC Essbase provides a few utilities that can be used for exporting applications from the on-premise environment and importing them into OAC. This post explains how to migrate Essbase applications to OAC using these utilities.

 

Utilities

Below are the utilities that can be downloaded from the OAC environment. I downloaded the Export Utility (Essbase LCM Utility) and the Command Line Tool to my local machine for migrating on-premise applications to OAC Essbase.

                1. EssbaseLCMUtility

                2. CommandLineUtility

 


 

Prerequisites

       To use the Essbase LCM utility and Command line interface, Java JDK 8 should be installed and the JAVA_HOME path should be set.

On-premise Essbase applications should be converted to Unicode mode (UTF-8 encoding) before migrating to the OAC Essbase environment. I used a MaxL statement along the lines of the one below to convert the application to Unicode mode.
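A minimal MaxL sketch of that conversion (the application name TEST matches the example used later in this post; run it from a MaxL shell such as essmsh after logging in to the on-premise server):

                /* convert the application to Unicode (UTF-8) mode */
                alter application TEST set type unicode_mode;

Note that the on-premise Essbase Server must already be set to permit Unicode-mode applications for this statement to succeed.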

I also changed the server-level substitution variables to application-level variables before the export.

Exporting On-Premise Application

I followed the below steps to export the on-premise application using the Essbase LCM Utility.

Open a command prompt and change the directory to 'D:\EssbaseLCMUtility', where I downloaded the utility.

       Run the below command to export the application from Essbase. This command exports the data and artifacts.

                EssbaseLCM.bat export -server test.xxx.local:1423 -user TestAdmin -password ******** -application TEST -zipfile TEST.zip -skipProvision

 

Export Application - Progress

 


Application Export Folder

- The application export is written to the EssbaseLCMUtility folder.


 

Importing the application into OAC

I manually copied the exported application archive (TEST.zip) to the CommandLineUtility home folder.

 


 
 


  • I executed the below CLI commands to import the application into OAC Essbase.

  • Set the Java Home and CLI Home:

                                SET CLI_HOME=D:\CommandLineUtility
                                SET JAVA_HOME=C:\Program Files\Java\jdk1.8.0_161

  • Log in to OAC:

                                esscs login -user TestAdmin -url https://test.xxx.com/essbase

  • Import the application into OAC:

                                esscs lcmimport -v -zipfilename TEST.zip

 

Import Application - Progress

 


 

Application in OAC Environment

The application migration is successful. Go to the OAC application console and refresh the application list to see the migrated application.

 


 

Reference(s):

1. https://docs.oracle.com/en/cloud/paas/analytics-cloud/essug/preparing-migrate-premises-applications-cloud-service.html
2. http://www.redstk.com/migrating-an-essbase-cube-to-the-oracle-analytics-cloud/

 

Automating the Purge Job for Usage Tracking Data

 

Why Usage Tracking?

Usage Tracking is helpful in measuring and monitoring user interactions with OBIEE. It provides a deep understanding of usage statistics and performance bottlenecks in the reporting application.

The usage tracking functionality creates an entry in the S_NQ_ACCT table whenever a report is executed in the application by a user.

This table captures metrics like report execution time, report start/end time, user ID, etc.

When usage tracking is enabled, it helps determine which user queries are creating performance bottlenecks, based on query response time.

It also provides information on frequently accessed reports. Enabling it involves Enterprise Manager setup changes and RPD changes.

Why Automate the Data Purge?

For a reporting application that receives user requests every minute, usage tracking generates a huge volume of data, which is written to the S_NQ_ACCT database table. Purging this table periodically is essential; otherwise, reports created on top of the usage tracking data would perform slowly. Manually purging this data requires intervention from the database team and adds overhead to application maintenance.

We can automate the purging of the S_NQ_ACCT table using BI Publisher. This approach works for any data-purging requirement, and the entire automation can be done with the technology stack that already exists with the BI application; there is no need to involve any additional tools.

Steps to Automate:

  1. Create a BI Publisher data source with READ WRITE user for OBIEE meta-data schema.

  2. Create a database package which deletes records from S_NQ_ACCT table.

  3. Create a BI Publisher data-model to invoke the DB package via Event Trigger.

  4. Create a scheduled job which will invoke the data-model periodically.

 

  1. Create a BI Publisher Data Source

Go to BI Publisher Administration and click on JDBC Connection.

Click on "Add Data Source".

Enter the following details for the new connection:

Data Source Name: Give a data source name.

Username: Enter a user name that has read and write access to the OBIEE metadata schema.

Password: Enter the password associated with the user name.

Click on Test Connection; a confirmation message will be displayed.

  2. Create a Data Model to report how many records got purged

Go to New → Data Model.

Enter the following details

Default Data Source: Select the data source created in the step above from the dropdown.

Oracle DB Default Package: Enter the name of the package created in the database in the OBIEE metadata schema.

 

The package code is attached for reference; a minimal sketch is also shown below.
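As a minimal sketch only (the package, function and variable names are illustrative, the schema qualifier is omitted, the actual attached package may differ, and it is assumed that the data model parameter m is mapped to the package variable of the same name), the purge package could look like this:

CREATE OR REPLACE PACKAGE usage_purge_pkg AS
  m NUMBER;  -- number of days of data to retain; populated from the BI Publisher data model parameter of the same name
  FUNCTION purge_s_nq_acct RETURN BOOLEAN;
END usage_purge_pkg;
/

CREATE OR REPLACE PACKAGE BODY usage_purge_pkg AS
  FUNCTION purge_s_nq_acct RETURN BOOLEAN IS
  BEGIN
    -- Delete usage tracking rows older than m days (same logic as the DELETE shown later in this post)
    DELETE FROM s_nq_acct WHERE start_dt < SYSDATE - m;
    COMMIT;
    RETURN TRUE;   -- returning TRUE lets the After Data event trigger complete successfully
  EXCEPTION
    WHEN OTHERS THEN
      ROLLBACK;
      RETURN FALSE;
  END purge_s_nq_acct;
END usage_purge_pkg;
/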

 

 

 

Click on Data Sets and select SQL Query.

 

Enter the following Details

Name: Enter a Data set name

Data Source: Select the newly created Data source from the dropdown

Write the Query and click on OK.

Query: SELECT COUNT(1) num_deleted_records FROM APLINST02BI_BIPLATFORM.S_NQ_ACCT WHERE start_dt < SYSDATE - (:m)

 

Create an Event Trigger so that the purge package is invoked after the data model runs when the report is triggered by the scheduler.

Enter the Event trigger details as below

Name: Enter the name of the Event Trigger

Type: After Data

Language: PL/SQL

The Oracle Default Package will be populated automatically. Select the appropriate function from the Available Functions section and move it to the Event Trigger section by clicking the ">" icon.

 

Now click on Parameters and define the parameter to be passed to the event trigger. This parameter carries the number of days, and the package purges data from the S_NQ_ACCT table with the logic below.

DELETE FROM Schema_Name.s_nq_acct WHERE start_dt < sysdate - m;

 

 

 

  3. Create an RTF template and schedule a job to automate the purge

Go to New → Report.

Click on Upload and upload the RTF document (attached for reference).

Save the report in the required folder after providing the above details.

Now click on More → Schedule.

Enter the value for the parameter 'm' (the number of days of data to retain).

Enter the scheduler details as below:

Name: Give a name for the output file.

Layout: This will be populated automatically.

Format: Select the required report format from the dropdown.

In the Destination section, select the destination as Email or FTP and provide the details accordingly.

In the Schedule tab, set the frequency at which the job should run.

 

Now click on Submit Job in the top right corner. A job will be scheduled as per the details provided.

August 6, 2018

Mitigating Low User Adoption in Sales Automation

So the project went live perfectly. It was on time, on budget, and all the success criteria were met. Except one. This is a nightmare scenario for many project managers and sponsors: the sales automation project that they have worked so hard on for many months and executed perfectly (in their opinion) does not seem to enthuse the end users, resulting in the very common problem of low user adoption. In this blog we are specifically talking about low user adoption related to sales automation projects, although many aspects could be common to other endeavors as well.

Below are the major causes of low user adoption and their mitigations:

Lack of Process Adherence or "We don't work that way"

Often in the hurry to implement 'best practices' and a 'vanilla solution', short shrift is given to some core processes in the sales organization. Sometimes, in a global implementation, processes are 'standardized' without a real buy-in from regional stakeholders who may perceive that their way of doing business has not been heard sufficiently. 
Mitigation: Get explicit sign-offs and buy-in from stakeholders when processes get modified. Build in customizations where required to ensure core processes are protected.

Lack of Trust or "Is my data secure?"
Another reason your sales reps may be reticent about sharing information in the sales automation application is lack of trust. For sales reps, their contact and account information is gold. They do not want just anybody in the organization having access to their contact and account details. Sales teams may not have a problem with their managers accessing data, but may not want, say, the marketing team to get access to their contact details without their knowledge. If their misgivings in this regard are not addressed, you will find that they may not be updating their most important information.
Mitigation: Most software today comes with configurable security parameters. You should ask your SI to implement suitable security configurations that balance the need for sales effectiveness with the trust concerns of your sales teams.

Lack of Business Commitment or "Even my Manager doesn't use it"
Many times sales automation projects focus only on the direct sales reps as the end users. This is a mistake: although direct sales reps may form the largest part of the sales force, when other sales roles like sales managers, partner managers and key account managers are not included, the direct sales team perceives that it has been saddled with an unnecessary burden. This results in them not taking the implementation seriously, and thus in low user adoption.
Mitigation: It is important that companies take a strategic view when it comes to sales automation and implement functionalities in the software that benefit the entire sales organization. Hence we recommend implementing modules like Sales Forecasting management, which requires sales managers to review forecasts from their reps and in turn submit them to their managers. Modules like Partner Relationship Management are used by partner managers to manage sales processes through the partner organization. Customer Data Management and Incentive Compensation functionalities involve the sales operations teams to ensure data quality and sales incentives through the sales automation product.

Lack of Knowledge or "Not sure how it works"
Most SIs and implementation consultants work on the "Train the Trainer" model, where key users from the sales organization are trained on various aspects of the application. It is then expected that these key users will in turn go back and deliver quality end user training. Many companies ignore or do not pay enough attention to this phase of the project since vendors are not involved in it. It is not surprising then that end users forget the inadequate training they received and go back to their old way of doing things.
Mitigation: It is important that enough thought is put into the end user trainings as well. If the number of end users is large, it should be treated as a separate project, and vendors can be involved in this phase as well. Appropriate training collateral should be developed and the rollout planned so that individual attention can be given to each participant in these training sessions. Follow-up or refresher training can also be organized on a need basis.

Lack of Productivity or "The Old way was better"
Although sales effectiveness, improved sales reporting and sales collaboration are all important reasons to implement a sales automation application, user adoption of such software will suffer if sales reps feel that all these benefits come at the cost of their productivity. Companies should guard against building a 'comprehensive solution', as that may mean that sales reps have to spend more time on the application when they would rather be selling and having face time with their prospects and customers.
Mitigation: Sales productivity should be an important success metric and included as part of all requirements and design conversations. Data entry requirements should be kept to the minimum mandatory fields, with the rest of the fields optional. Application performance should be tested comprehensively so that improvements can be made before go-live. Mobility and Outlook synchronization functionalities should be explored to improve productivity.

Lack of Perceived Value or "What's in it for me?"
This is perhaps the most important question that needs to be answered in terms of user adoption. Unless the sales automation application helps sales reps meet their personal career goals, they are not going to spend time on it. It is important that they perceive the application as a tool that will improve their sales effectiveness, help them get recognition, and advance their careers.
Mitigation: Sales automation software should focus on sales effectiveness improvements, which could mean sales collaboration and new technology interventions like AI/ML that help the sales rep focus on the important leads and improve win rates, along with intelligent analytics that provide not just information but also insights on key concerns and suggest a workable plan of action. Sales performance and gamification solutions can work on top of the base solution to provide value in real terms to the sales users.

Keeping Track
Even after applying many of the above mitigation measures, it is important to measure user adoption through analytical reports. Reports should give an adoption breakdown by region, role, etc. to answer questions like: which sales roles are using or not using the application? Which country's users are lagging behind? Answers to such questions will help the IT organization take suitable interventions and corrective measures. All the best on your user adoption journey!

Oracle Data Visualization (DVD/DVCS) Implementation for Advanced Analytics and Machine Learning

Oracle Data Visualization Desktop (DVD) or Data Visualization Cloud Service (DVCS) is a very intuitive tool which helps every business user in the organization create quick and effective analytics very easily. People at all levels can leverage the benefit of blending and analysing data in just a few clicks and help the organization take informed decisions using actionable insights. Oracle DVD is a Tableau-like interactive tool that helps create analyses on the fly using any type of data from any platform, be it on-premise or cloud. The main benefits of Oracle DVD are below:

  • A personal single-user desktop tool, or a SaaS cloud service, which can be leveraged by any business user in the organization
  • Enables the desktop user to work even offline
  • Completely private analysis of heterogeneous data
  • The business user has entire control over the datasets/connections
  • Direct access to on-premise or cloud data sources
  • Administration tasks are removed completely
  • No concept of remote server infrastructure

Oracle DVD/DVCS enables the business user to perform analysis using traditional methodologies, and also provides the capability to perform Advanced Analytics using R and to create predictive models using machine learning algorithms in Python.

This simple and intuitive tool provides a unique way to perform Advanced Analytics by just installing the required packages. DVML (Data Visualization Machine Learning library) is the utility that installs all the required packages for implementing machine learning algorithms for predictive analysis in one go.

The Install Advanced Analytics (R) utility helps you install all the required R packages to perform Advanced Analytics functions like regression, clustering, trend lines, etc. However, to run both utilities on your personal system/server, you need administrative access as well as internet access and permission to automatically download all the required packages.


Below, we discuss how to leverage Advanced Analytics and machine learning functions to provide predictive analytics for the organization.

In order to create a trend line graph, we need to enable Advanced Analytics and then pull the required columns into the analysis.

Trend line Function: This function takes 3 parameters to visualize the data in a trending format.

Syntax: TRENDLINE(numeric_expr, ([series]) BY ([partitionBy]), model_type, result_type)

Example : TRENDLINE(revenue, (calendar_year, calendar_quarter, calendar_month) BY (product), 'LINEAR', 'VALUE')

We need to create various canvases and put them into one story line, providing a corresponding description for each canvas. While creating the trend line visualization, we need to provide the confidence level; by default it uses a 95% confidence level for the trend line.



 

Fig 1: Profit Trend line (Method: Linear)

 


Fig 2: Canvases for Profit Trend line

 


 

Fig 3: Story Line to Visualize Profit Trend line

 

Outlier: The outlier function distinguishes particular values (min/max) from the rest of the data set. This helps the business user understand the values that fall well outside the general pattern of the data. Outliers can cause serious problems when performing predictive analytics, so by detecting and removing them from the data set we can get better actionable insights from the data.

Syntax: OUTLIER((dimension_expr1 , ... dimension_exprN), (expr1, ... exprN), output_column_name, options, [runtime_binded_options])

Example: OUTLIER((product, company), (billed_quantity, revenue), 'isOutlier', 'algorithm=mvoutlier')

The outlier function takes two kinds of parameters from the dataset: the dimension values on which we perform the outlier analysis, and the measure values used to find the outlier data. We also need to specify which algorithm to use for the outlier analysis. The visualization below shows the outlier cities based on the profit they generate.


Fig 4: Outlier Graph for Various cities

 


Fig 5: Profit by City Outliers in a Story line

 

Cluster: The cluster function takes one or more inputs (measure values) to put particular dimensions into specific groups by applying a clustering algorithm such as k-means or hierarchical clustering. It takes various parameters as input, where we mention the dimension values as well as the measure values used to place the dimensions into multiple clusters. We also need to mention which clustering algorithm to use (e.g. k-means) and the number of clusters.

Syntax: CLUSTER((dimension_expr1 , ... dimension_exprN), (expr1, ... exprN), output_column_name, options, [runtime_binded_options])

Example : CLUSTER((product, company), (billed_quantity, revenue), 'clusterName', 'algorithm=k-means;numClusters=%1;maxIter=%2;useRandomSeed=FALSE;enablePartitioning=TRUE', 5, 10)




Fig 6: Cluster Visualization for Product category by Sales

 

Predictive Analytics using Machine Learning: Before starting to implement machine learning algorithms, we need to install all the required packages by executing the DVML utility if we are running DVD. With DVCS (available with OAC), all the required libraries are already installed.

In order to perform predictive analytics, we start with the Data Flow tab in the DV window. In the data flow, we define all the required steps: picking up the sample data set, selecting the required columns to pass as parameters/independent variables, and selecting the machine learning model (e.g. Train Numeric Prediction). Once everything is selected, we save the data flow with an appropriate name and then execute it by clicking the 'Execute Data Flow' button at the top right.

We need to provide various parameters while selecting the model, e.g. model type (Linear), target column (Sales), train/test data split (70% train, 30% test), etc. to optimize the model for better results.

 


Fig 7: Machine Learning Data Flow on Sales Data

 


Fig 8: Various Machine learning algorithm 

 

Once the data flow is executed, the resulting models can be seen under the Machine Learning tab.

 


Fig 9: Machine learning models 

 

To explore how the model performed, we right-click on the model name and click on the Inspect option. This brings up the screen below, which allows us to inspect the various properties and values returned by the machine learning model after it has been trained and tested on the data.


Fig 10: Machine learning models various parameters/output

 

August 3, 2018

OAC-Essbase Data Load & Dimension Build Using CLI


 

Introduction

I am working on an Oracle EPM Cloud implementation project focused on migrating on-premise Essbase applications to Oracle Analytics Cloud (OAC) Essbase. OAC Essbase provides a Command Line Interface (CLI) utility that can be used for data load and dimension build in OAC Essbase applications. This post explains how to use the utility for data load and dimension build in OAC.


Utilities

Command Line Utility - We can download the Command Line Tool from the OAC Essbase instance to our local machine to perform Essbase data load and dimension build tasks.


Setting up CLI environment

  • Open the command prompt and change the directory to the CLI home directory.
  • To use the Command Line Interface, Java JDK 8 should be installed and the JAVA_HOME path should be set.
  • Set the CLI Home and Java Home:

                                SET CLI_HOME=D:\CommandLineUtility

                                SET JAVA_HOME=C:\Program Files\Java\jdk1.8.0_161

 

Logging into OAC Essbase through CLI

Before performing Dimension build and data load activities, we need to be logged into OAC Essbase.

Logging into OAC using admin id:

D:\CommandLineUtility> esscs login -user TestAdmin -password ****** -url https://test.OAC.com/essbase

                user " TestAdmin " logged in with "service_administrator" role

                Oracle Analytics Cloud - Essbase version = 12.2.1.1.112, build = 211

 

Create a Local Database Connection

- The local DB connection can be created using the CLI command createLocalConnection. It takes all the required JDBC connection details as arguments.

Command Syntax:

D:\CommandLineUtility>esscs createLocalConnection -name oraConn -connectionString jdbc:oracle:thin:@DevDW:1XXX/DevID -user DB_USER

                Connection already exists, it will be overwritten

                Enter Password:

                User credentials stored successfully



Essbase Dimension Build

  • Run the dimbuild command with the stream option.
  • A database query is required either in the rules file or as an argument to dimbuild; if not given in the command, it is taken from the rules file.
  • The streaming API is used to push the result set from the database to the cube.

Command Syntax:

D:\CommandLineUtility>esscs dimbuild -application TEST -db TEST -rule Acct.rul -stream -restructureOption ALL_DATA -connection oraConn

                Streaming to Essbase...

                Streamed 9 rows to cube

 

Essbase Data Load

  • Run the data load command with the stream option.
  • A database query is required either in the rules file or as an argument for the data load; if not given in the command, it is taken from the rules file.

Command Syntax:

D:\CommandLineUtility>esscs dataload [-v] -application TEST -db TEST -rule DataLoad.rul -stream -connection oraConn

                Streaming to Essbase...

                Streamed 10 rows to cube
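As a minimal sketch only (the credentials, URL, application/database name and rule file names are placeholders taken from the examples above), the login, dimension build and data load steps can be chained into one batch file and scheduled:

@echo off
rem Hypothetical wrapper around the CLI steps shown above
SET CLI_HOME=D:\CommandLineUtility
SET JAVA_HOME=C:\Program Files\Java\jdk1.8.0_161
cd /d %CLI_HOME%

call esscs login -user TestAdmin -password ******** -url https://test.OAC.com/essbase
if %errorlevel% neq 0 exit /b %errorlevel%

rem Build dimensions first, then load data, both streamed through the oraConn connection
call esscs dimbuild -application TEST -db TEST -rule Acct.rul -stream -restructureOption ALL_DATA -connection oraConn
call esscs dataload -application TEST -db TEST -rule DataLoad.rul -stream -connection oraConn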


Reference(s):

1. https://docs.oracle.com/en/cloud/paas/analytics-cloud/essug/command-line-interface-cli.html
2. https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=442454943227210&parent=EXTERNAL_SEARCH&sourceId=HOWTO&id=2259032.1&_afrWindowMode=0&_adf.ctrl-state=aphtjq7kl_132

 

