More to life...

EPM Cloud - Capturing rejections in Data Management - Part 2

Moving on to the second part where I am looking at a possible solution to capturing and handling rejections in EPM Cloud Data Management.

Just a quick recap on the first part: data was loaded through Data Management but records were rejected due to missing entity members in the target application. With the help of scripting and the REST API, I covered a method to run the data load rule, check its status and, if there was a failure, download the process log. The process log was parsed for data load rejections and, if any were found, they were sent in an email.


In this post I am going to extend the process for capturing rejections and handle them by using the metadata functionality that is now available in Data Management.

Like in the previous part I will be focusing on entity member rejections, but there is nothing stopping you from expanding the process to handle other dimensions. That said, if there are rejections across multiple dimensions then it might be best to start with a cleaner data source file.

I am going to use the same source file, where the rows containing entity members “901” and “902” will be rejected due to the members not existing in the target application.


The aim will be to extract the rejected entity members from the process log and write them to a text file which can be uploaded to Data Management.

Using a metadata load rule, the members will be loaded based on mapping logic, so in this example they will be loaded as children of “900”.
 

Even though I am not yet a fan of the simplified dimension editor I thought I had better start using it in my examples as the classic editor is no longer going to be supported.

Once the members have been loaded and a refresh performed, the data load can be run again and there should be no rejections.

Before we can get to that stage a metadata load needs to be set up in Data Management. I am not going to go into great detail around the metadata functionality as I have already covered this topic in a previous blog which you can read all about here.

Under the Target Application Summary dimensions are added.
 

For the entity dimension I have only enabled the properties I will be using in the metadata load, these are parent, member and data storage. I could have got away with not including data storage and let the system pick the default value but I just wanted to show that properties that are not contained in the source can be included.


The import format is simple as the source file will only have one column containing the new entity members.

The File Type should be set as “Delimited – All Data Type” and the Data column is ignored for metadata loads.
 

A new location was added and a new data load rule created against the new location.

I assigned it to a category named “Metadata”, which I use for metadata type loads.
 

I did not set a file as I am going to include that in the automation script; the directory was defined as the location folder the script will upload the rejected member file to.

In the target options of the rule the “Refresh Database” property value was set to “Yes” as I want the members to exist in Essbase when the data is reloaded.
 

On to the mappings, an explicit mapping is defined for “Data Storage” to map “N” from the import format to “never share”.


For the parent I used a “#FORMAT” mapping type which takes the first character of the member and suffixes “00”. If you look back, I also mapped the entity member to the parent, so as an example member “901” will be mapped to the parent “900”.


The entity member property was defined as a like-for-like mapping, because I want the source member to be the same as the target.


If I wanted to expand the automation process further I could add in a step to upload explicit mappings for the new entity members.

Now the rule is in place it is time to go back to my original PowerShell script from the last blog post and modify it to handle the metadata load.

I am going to continue with the script from the point where the process log has been downloaded, this means the data load has taken place and failed.

In the last part I parsed the process log and added any rejections to an email; this time I am going to parse it and create a file containing the rejected members.

In the process log the rejected rows of data will contain “Error: 3303”.
 

I will break the script into bite sized sections and include the variables which usually would be at the start of the script.


The variables above have comments so I shouldn’t need to explain them, but I have cut down on the number of variables for demo purposes in this post; the final script includes variables where possible and functions to stop repetition.

On to the next section of the script, which checks if there is an existing entity rejection file and, if there is, deletes it.

Each line of the process log is then cycled through and if a line contains “Error: 3303” then it is parsed to extract the entity member. The script could be enhanced to handle multiple dimensions but I am trying to keep it simple for this example.
 

To break down the parsing section further let me take the following line as an example:


First the line is split by a pipe delimiter and stored in an array, for the above the array would look like:


The second entry in the array contains the member that was rejected; the third entry contains the data record.

The data record is then split by a comma delimiter and stored in an array which looks like this:
 

A test is then made to confirm that the rejected member is part of the entity dimension, as that has been defined as the second entry in the data record. If they match, the member is appended to the entity rejection file.
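As an illustration, a condensed PowerShell sketch of that parsing logic might look like the following; the file paths and variable names are placeholders and the real script contains more error handling.

$processLog = "C:\Temp\process.log"          # downloaded process log (placeholder path)
$rejectFile = "C:\Temp\entity_rejects.txt"   # file that will hold the rejected entity members (placeholder path)

# remove any existing entity rejection file
if (Test-Path $rejectFile) { Remove-Item $rejectFile }

# cycle through the log and pick out the data load rejection lines
Get-Content $processLog | Where-Object { $_ -match "Error: 3303" } | ForEach-Object {
    $logFields = $_ -split "\|"              # split the line by the pipe delimiter
    $member    = $logFields[1].Trim()        # second entry holds the rejected member
    $record    = $logFields[2] -split ","    # third entry holds the data record, split by comma
    if ($record[1].Trim() -eq $member) {     # second field of the data record is the entity
        Add-Content -Path $rejectFile -Value $member
    }
}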

Now I have a file containing all the rejected entity members.
 

Using the REST API, the file can be uploaded to the location directory in Data Management. As there could be an existing file in the directory with the same name, the delete REST resource is used first to remove the file; it doesn’t matter if the file does not exist.
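As a rough sketch only, the delete and upload could be handled with the interop REST resources along these lines; the REST version, chunking query string, folder path and credentials below are assumptions and may need adjusting for your instance.

$cloudUrl   = "https://epm-instance.oraclecloud.com"                # placeholder instance URL
$creds      = "user@domain.com:password"                            # placeholder credentials
$authHeader = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($creds)) }
$dmFile     = "inbox/entityload/entity_rejects.txt"                 # assumed Data Management location folder and file name

# delete any existing file with the same name, ignoring the error if it does not exist
try { Invoke-RestMethod -Uri "$cloudUrl/interop/rest/11.1.2.3.600/applicationsnapshots/$dmFile" -Method Delete -Headers $authHeader | Out-Null } catch {}

# upload the entity rejection file to the location folder
$uploadUri = "$cloudUrl/interop/rest/11.1.2.3.600/applicationsnapshots/$dmFile/contents?q={""isLast"":true,""chunkSize"":500000,""isFirst"":true}"
Invoke-RestMethod -Uri $uploadUri -Method Post -Headers $authHeader -ContentType "application/octet-stream" -InFile "C:\Temp\entity_rejects.txt"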


After this section of the script has run, the entity rejection file should exist in the Data Management location folder.


Next the REST API comes into play again to run the metadata load rule that I created earlier in Data Management.
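A minimal sketch of that call, using the Data Management jobs REST resource; the rule name, period and file name are assumptions for illustration.

$dmJobsUrl  = "https://epm-instance.oraclecloud.com/aif/rest/V1/jobs"    # placeholder instance URL
$creds      = "user@domain.com:password"
$authHeader = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($creds)) }

$payload = @{
    jobType     = "DATARULE"
    jobName     = "Entity_Metadata_Load"                              # assumed metadata load rule name
    startPeriod = "Jan-18"                                            # assumed period
    endPeriod   = "Jan-18"
    importMode  = "REPLACE"
    exportMode  = "STORE_DATA"
    fileName    = "inbox/entityload/entity_rejects.txt"               # file uploaded in the previous step
} | ConvertTo-Json

# run the metadata load rule and capture the job id and status so they can be polled until complete
$response = Invoke-RestMethod -Uri $dmJobsUrl -Method Post -Headers $authHeader -Body $payload -ContentType "application/json"
$response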


At this point, if I check process details in Data Management it shows that the metadata rule has successfully completed.


In the workbench you can see that entity members have been mapped to the correct parent.


Within the target application the entity members have been successfully loaded.


As the metadata has been loaded and pushed to Essbase the export stage of the data load rule can be run again.


This time the data load was successful.


The workbench confirms all rows of data have been loaded to the target application.


A quick retrieve verifies the data is definitely in the target application.


After the data load has successfully completed, a step could be added to run a rule to aggregate the data; once again this can be achieved by calling a REST resource, which I have covered in the past.
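For example, a hedged sketch of launching an aggregation rule with the Planning jobs REST resource; the instance URL, application and rule names are placeholders.

$planJobsUrl = "https://epm-instance.oraclecloud.com/HyperionPlanning/rest/v3/applications/Vision/jobs"   # placeholder URL and application
$creds       = "user@domain.com:password"
$authHeader  = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($creds)) }
$body        = @{ jobType = "RULES"; jobName = "AggEntities" } | ConvertTo-Json                            # assumed rule name

Invoke-RestMethod -Uri $planJobsUrl -Method Post -Headers $authHeader -Body $body -ContentType "application/json"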

A completion email could then be sent based on the same concept shown in the previous blog post.


An example of the output from the full automated process would be:


So there we go, an automated data load solution that captures and handles rejections. If you are interested in a similar type of solution then please feel free to get in touch.

Planning audit reports with help from Groovy - Part 1

A common requirement in planning is to be able to produce an audit report. You would think this would be easy, as the options are available in the user interface to define which actions to audit.


The problem is that for on-premise planning, once the options have been enabled there is currently no ability through the user interface to produce a report.

The documentation states:

“View results in the HSP_AUDIT_RECORDS table using a RDBMS report writer.”

So not very helpful, and a planning administrator may not have access to the planning application's database table “HSP_AUDIT_RECORDS”; this can mean involving a DBA to gain access to the table or to have an extract generated, which is not always an easy task.

It is only recently that the ability to view and download audit data was made available in EPM Cloud, so don’t expect this for on-premise any time soon.

Let us first take a quick look at what is available in the cloud, like with on-premise, auditing can be enabled by selecting one or more of the audit types. Once enabled audit data will be displayed in the UI.


There is the ability to filter the different audit types.


The date range can be filtered from a selection of predefined ranges.


The data can then either be viewed or exported and opened in Excel.


Until this functionality is available for on-premise we must look at alternative solutions. As the audit data is stored in one database table I thought maybe using Groovy could come to the rescue again. In a previous post I covered how easy using SQL with Groovy can be, which you can read about here.

I am going to demo the solution first and then go into more detail on how it was put together. It is based on 11.1.2.4 and to be able to use Groovy functionality you need to be on at least Calculation Manager patch 11.1.2.4.006, but I recommend .008+.

I will be using the simplified interface as that is the closest to cloud, even though still a long way off. It also provides an inbox/outbox explorer which is important for this solution.


I have a business rule which will call a Groovy script to handle the audit data; three runtime prompts are displayed after launching the rule.


The dropdown for audit data type allows the selection of all audit data, all audit data excluding data, or data only.


The final solution allows the selection of all the different audit types, similar to the cloud, but for simplicity the example I am going to go through will be based on the above. If you are interested in the final solution then please feel free to get in touch.


Just like with the cloud, a date range can be selected for the audit data.


It doesn’t have to be limited to a predefined range; it could just as easily have the option to select a start and end date.


There is also the option to set a delimiter for the audit data. After making the selections the rule is ready to be launched.


If audit data records are returned for the selected values the rule should run successfully.


After running the rule, you can go to the console.


Then under “Actions” the “Inbox/Outbox Explorer” can be accessed.


There will be a timestamped archive file available.


This can then either be downloaded or deleted.


Once downloaded, the zip file can be opened which will contain a text file containing the audit data.


The audit text file can be extracted and viewed in a text editor; the data is ordered by date. I could have changed the column headings but for this example I have stuck with the names of the columns in the database audit table.


Alternatively, it can be opened and viewed in Excel.


The functionality may not be exactly the same as the cloud but in terms of the ability to filter the audit records and then download, it comes pretty close.

The solution includes the ability to archive the audit table, which basically means the records in the audit database table are transferred to another table which holds the history, and then the audit table is cleared down.


The rule has a runtime prompt included to stop it from being accidentally run.


Finally, there is a form which runs a Groovy based rule on load to display the number of records in the audit table; the form also displays the last time the audit table was archived. The rules to either archive the data or produce the audit file are also attached to the form.


So how was this all put together? Well you are going to have to wait until the next part where I will cover it in more detail.

Planning audit reports with help from Groovy - Part 2

In the previous post I went through a possible solution for producing audit reports through the planning UI with the help of business rules and Groovy. I mainly covered the background to planning audit reports and went through a demo of the solution; in this post I am going to break down how it was all put together.

Let us start with running the audit report business rule which can be accessed through the standard or simplified user interface. I am going to stick again with the simplified interface as it provides the ability to download the audit report once it has been generated.


Once launched this will present the user with runtime prompts to define how the audit report will be generated.


The audit type and date range are built from Smart Lists within the planning application.

As I explained in the previous post I have limited the list of options for demo purposes but the final solution has all the different audit types available.
 

The above list is built from the following Smart List.


In Calculation Manager a variable has been created as a runtime prompt with the type set to integer and the planning Smart List selected.


This means when the rule is launched the audit type drop down will be displayed as shown above; the selection that is passed into the business rule is the integer value that matches the Smart List ID.

It is the same concept for the date range, there is a planning Smart List.
 

The reason the IDs have been created this way is due to the way the SQL will be generated in the Groovy script; apart from the “All” option, the IDs match the number of days.

Just like with the audit type, a variable has been created to display the Smart List when launching the rule.
 

The option to select a delimiter for the audit report is handled using a Calculation Manager runtime prompt string variable with the default value set.

There is also a hidden variable which passes the name of the planning application into the business rule.
 

On to the business rule which basically just calls a Groovy script and passes in the variable names and the values from the runtime prompts.


If you want more detail about getting up and running with Groovy then please refer to one of my previous posts which you can read about here or here.

Before I get on to the Groovy script there are a few points to mention.

I have disabled static compile and sandbox in groovycdf.properties in order to relax the static type checking and access to Java packages; no additional Java files are required other than the Groovy jar.

The SQL in the script is based on Oracle but wouldn’t require much updating for it to run against SQL Server.

The script does not contain any hardcoding of database connection information including passwords as they are generated at runtime.

I will break down the script into chunks; the variables would have all been defined at the beginning of the script but I have moved them about to make more sense.

The beginning section of the script generates the connection information to the planning application database; a method is called to return the planning system database connection details from the EPM registry.

A connection is made to the planning system database and a SQL query executed to return the planning application database connection information. The planning application name used in the SQL query was passed in from the business rule, the value having been defined in a Calculation Manager variable.

Once the planning application connection details have been returned a connection is then made to it.
 

If you are not interested in dynamically generating the database connection details then the above can be simply replaced with something like:

sql=Sql.newInstance("jdbc:oracle:thin:@dbserver:port/servicename", "username", "password","oracle.jdbc.OracleDriver")

The next section manages the variables, which like I said would usually be at the start of the script.

The values selected from the drop downs for the list of audit types and date ranges are passed into the script as strings so they are converted to integers.

A date is generated that is then used to form the names for the exported audit text file and zip.

A temporary directory is defined which will be the location where the audit file will be generated before it is compressed and then moved to the planning inbox/outbox explorer location.

The Groovy script is executed by the Essbase process so the temporary directory is located on the Essbase server.

Next a lookup is made to the EPM Registry to retrieve the planning inbox/outbox explorer location which, if you are not aware, is the same as the LCM import/export directory.
 

If I ran the audit business rule with the following runtime prompt selections:


The variables in the Groovy script would be assigned with the following:

auditList=2
dateRange=30
delimiter=|
auditFile=audit110320181534.txt
zipFileName= audit110320181534.zip
planInbox=\\FILESERVER\epmshare\import_export\

The next section builds a SQL statement to retrieve the records from the planning audit table based on the values selected from the business rule runtime prompts.

There are two SQL statements built and they are practically the same, except one of them returns a count of the number of audit records based on the selected criteria; this is because we don’t want to generate an audit file if there are no records returned.
 

Based on the same selection from the previous example the SQL generated for the count would be:

SELECT count(*) as numrecords FROM hsp_audit_records WHERE 1=1 AND type = 'Data' AND TRUNC(time_posted) >= TRUNC(sysdate) -30

The query is returning a count of the number of audit records where the audit type is data for the last 30 days.

The SQL is then executed and if the number of records returned equals zero then the business rule will terminate with an error and the error message “No records returned” will be available in the job console.
  

The SQL is then generated to return the audit records which is based on the same criteria and would produce:

SELECT * FROM hsp_audit_records WHERE 1=1 AND type = 'Data' AND TRUNC(time_posted) >= TRUNC(sysdate) -30 order by time_posted desc

If the above SQL query is run using a tool like SQL developer it would produce the following results:


The Groovy script executes the SQL, and the column names and rows returned are delimited and written to a text file.
 

The file is temporarily created in the directory defined in the “tmpFileDir” variable.


The contents of the file will be similar to the results shown in the earlier query.


The audit text file is then compressed into a zip file.


The zip file is created in the same temporary directory.


Finally, the text file is deleted.


The zip file is moved to the planning inbox/outbox location.


The file is then accessible from the planning simplified interface where it can be downloaded or deleted.

As shown in the last post, the solution also has the ability to archive the records in the audit table and includes a planning form to show the number of records in the audit table and when it was last archived. If you would like to find out more information then feel free to get in touch.

EPM Cloud – Limiting the use of an application through automation

The question around how to automate the process to put an application into maintenance mode has been raised on a few occasions, so I thought I would put a post together to cover this topic.

If you are not aware, it is possible to restrict access to an application to only administrators; maintenance tasks or processes can then be carried out and, once complete, application access can be returned to all users.

This is not new functionality and has been around in the on-premise planning world for a long time.

Let us first cover off the manual method and then move on to different ways to accomplish it using automation.

Selecting “System Settings and Defaults” from the navigator will provide the options to set the application maintenance mode.


There are two options available, either “All users” or “Administrators”.

This is similar to on-premise planning with the exception of being able to limit the access to only the application owner.


Staying with on-premise for the moment, there is a command line utility which provides the ability to automate the process of setting the application maintenance mode.

An example to restrict an application to only administrators would be:


The output log provides a good insight into what happened when running the above command.


If a user tries to log into the application they are greeted with a message to inform them that the application is in maintenance mode.


Going back to EPM Cloud, you would expect an EPM Automate command to be able to restrict application access. Well, not so fast; it is possible, but not with a direct command, though this may well change in the future.

I will now cover a couple of methods to automate the process of limiting application access, the first being through a refresh database job.

From the navigator if you select overview there is an action available to refresh the database.


If create is then selected, the refresh database window is displayed; you will notice there are options to limit access to the application before and after the refresh.



I have selected to limit the application to only administrators, this can now be saved as a job.


If this job is run, a refresh will take place and the application will only be accessible to administrators even after the refresh completes.

To return access to all users the process is repeated, but this time the option after the refresh is set to enable the application for all users.


Once again this can be saved as a job.


Now we have two jobs: one will put the application into maintenance mode and the other will take it out of maintenance mode, though the downside is that two database refreshes will also be carried out.


In terms of automation, one way to handle this could be to create two schedules, one against the job to set the application into maintenance mode and the other against the job that takes the application out of maintenance mode.


This is fine if you know the exact times and frequencies of when you want to change the application maintenance mode, but it is more likely that you would want to incorporate this into an existing process or script, and that can be achieved using EPM Automate or the REST API.

EPM Automate has a “refreshcube” command in which you include the job name, so with a simple PowerShell script you could put the application into maintenance mode:
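A simple sketch of such a script, assuming a saved refresh database job named “Admin Mode On” and that EPM Automate is installed and on the path; the credentials and URL are placeholders.

$user   = "user@domain.com"
$pwd    = "password"
$url    = "https://epm-instance.oraclecloud.com"
$domain = "identitydomain"

epmautomate login $user $pwd $url $domain
epmautomate refreshcube "Admin Mode On"      # saved job that limits the application to administrators
epmautomate logout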


After running the script, the application has been limited to admin users.


If the application is accessed with a non-admin account the following page is displayed:


To take the application out of maintenance mode, the script just needs to be updated to call the other refresh database job.


Once the script has completed, the application has been enabled for all users.


To take EPM Automate out of the picture, the same can be achieved using the REST API.

I have covered the REST API in detail in the past; there is a REST resource to run jobs and all that is required is that the job type of “CUBE_REFRESH” and the name of the job are passed as JSON in the body of the request.

Here is an example of how to achieve that with a script:
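A minimal sketch of the kind of call involved, with the instance URL, application name and job name as assumptions:

$planJobsUrl = "https://epm-instance.oraclecloud.com/HyperionPlanning/rest/v3/applications/Vision/jobs"
$creds       = "user@domain.com:password"
$authHeader  = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($creds)) }
$body        = @{ jobType = "CUBE_REFRESH"; jobName = "Admin Mode On" } | ConvertTo-Json    # saved refresh job that limits access

Invoke-RestMethod -Uri $planJobsUrl -Method Post -Headers $authHeader -Body $body -ContentType "application/json"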


To take the application out of maintenance mode you just need to change the job name in the script.

To be honest I am not a fan of having to perform a database refresh to put the application in maintenance mode; it’s just not an efficient way and I feel there should be a direct command available. Until that happens, I looked for an alternative solution.

I will take you through the solution I came up with.

Go to migration and export only the application settings.


This will create a snapshot which can be downloaded.


If the snapshot is extracted there will be an xml file which contains all the application settings.


The xml file contains a node called “LoginLevel” which defines whether the application is limited to only administrators or is available to all users.


Now, I know I could have created two snapshots of the application settings: one with the login level set to all users and one set to administrators, and then automated importing the snapshot to limit application access. The problem would be that if any of the other application settings changed they would be overwritten when importing the snapshot; I could keep taking new snapshots but that didn’t feel optimal.

I updated the XML to include just the login level setting and created one for administrators.


Then a second xml for all users.


The directory structure of the snapshot was kept identical so there would be no issues when importing; these were then compressed into zip files.


The two snapshots were uploaded to the application.


To automate the process of limiting access to the application with EPM Automate, the “importsnapshot” command can be used.

To limit the application to administrators the “maintenance_mode_admin” snapshot can be imported.
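A short sketch using EPM Automate, with placeholder credentials; the “maintenance_mode_admin” snapshot is the settings-only snapshot uploaded earlier.

epmautomate login user@domain.com password https://epm-instance.oraclecloud.com identitydomain
epmautomate importsnapshot maintenance_mode_admin     # snapshot containing only the LoginLevel setting for administrators
epmautomate logout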


After the script has completed, the application maintenance mode will be set to administrators.



To confirm, logging in with a non-admin user produced the maintenance mode message.


To revert to all users, all that is required is to change the snapshot name when calling the “importsnapshot” command.


The application access level has been correctly returned to all users.


It wouldn’t be complete if I didn’t provide an example script on how to go about this with the REST API.



To change the maintenance mode to all users in the above script, all that would be required is to update the snapshot name.

So there we go, a few different options to automate the process of limiting access to an EPM Cloud application.

FDMEE/Data Management - Managing period mappings - Part 1

In today’s post I am going to start off by going back to basics, as I have recently seen similar questions raised around period mappings; these usually go along the lines of: “Do you have to manually enter period mappings?”

If you are new to on-premise FDMEE or the cloud based Data Management then you will probably be wondering where the import option is for period mappings; unfortunately, at the time of writing there is still no option to do this in the user interface.

Adding mappings manually is a cumbersome task and is not helped by the error messages you can be hit with if you don’t follow the correct process exactly.


All is not lost though as there are ways to handle the importing of period mapping and I am going to cover some possible solutions. If you have been around FDMEE or Data Management for a while then you will probably have your own solution.

In this post I am going to concentrate on Data Management but the same concept can be used with FDMEE, then in the next part I will focus on a possible method which will only be available with on-premise FDMEE.

To be able to demonstrate this first method I have manually added a couple of global mappings and replicated these against a single target in application mappings.
 

The next step is to extract these mappings, this can be achieved by using Migration or Lifecycle Management (LCM) in the on-premise world.


Once migration has been accessed, click “Data Management” to allow the selection of artifacts to be exported.


If you expand “Global Setup Artifacts” you will see “Period Mapping”; this artifact relates to the Global Period Mappings.

You will also see “File”, which is part of the Source Period Mappings; this relates to the following in the period mappings UI.
 

One reason why you might use file mappings is to load data where the periods and years are in the rows of the source file; I covered this in a previous post which you can read about here.

Back to migration; under “Application Data” you will see “Application Period Mapping” and “Explicit Source Period Mapping”.
 

The “Application Period Mapping” artifact will export any period mappings which have been added for the target application; an example in the UI would be:


For the “Explicit Source Period Mapping”, these are taken from the Source Mapping tab in the UI; an example would be if you select the source system as “EPM” and then select a source and target application:


For this post I am going to concentrate on the Global and Application Mapping, though it will be the same concept if you want to expand the solution to the other types of mappings.

Once the artifacts have been selected in migration and the export run, a snapshot will be generated.
 

The snapshot can be downloaded as a zip file and then extracted.

Once extracted, in the directory “\FDMEE-FDM Enterprise Edition\resource\Global Setup Artifacts” there will be an XML file which contains the global period mappings.
 

Opening the XML reveals the mappings which I entered into the Data Management UI earlier.


If you don’t really know much about XML then I can imagine the format of the file can be a little bewildering but don’t worry about that at the moment.

Hopefully you understand how the elements in the file map back to the UI in Data Management; for example, “Periodkey” in the file maps to the “Period Key” column in Data Management.

Under the directory “\FDMEE-FDM Enterprise Edition\resource\Application Data\<app type>\<app name>” you will see another XML file which holds the application period mappings.
 

The format of this file is pretty much the same as the global mapping file except for the element:

<Intsystemkey>Target App Name</Intsystemkey>


The order of the elements is slightly different from the global file but actually the order does not matter.

So you know the mappings can be exported, which means they can also be imported; all that would be required is to create the XML files in the correct format.

Having to go through the XML file manually and add new period mappings would be a tedious task, so how about an automated solution to make life easier?

In this first solution I am going to generate the XML from a simpler format which could first be defined in, say, an Excel file. For Global Mappings I created the following file with two entries for period mappings; the file could contain as many mappings as you like.
 

Alternatively, it could be produced directly into a text based file.


I have made the headings match those in the XML file. They don’t have to match; the only requirement is that they are in the same order, so “Period Key” is first and “Year Target” last.

I considered different possible ways to generate the target XML file from the above text file. I tested the XML functionality in Excel, which I wasn’t overly impressed with. I also looked into VBA in Excel, which was certainly possible, but it required adding a reference to be able to work with XML objects and felt too messy.

I finally decided on PowerShell because it is easily accessible on any Windows machine and there is a decent XML writer available which is not overcomplicated.

I am certainly not saying this is the correct solution and you should pick the one that works out best for you; an Excel based solution could be perfectly acceptable, but in the end PowerShell was my preferred option. Don’t worry if you don’t know PowerShell as you should be able to reuse the script by just updating the variables in it.

Let me briefly go over the script; it does contain comments so I am not going to go into too much detail.

The first section of the script contains the variables. Some of these are constants, like the snapshot directory structure and the period mapping file; the ones that would require updating are the base directory of the extracted snapshot and the text file containing the mappings.

Then there are the elements which map to the XML file; the order of these should match the order in the text file.
 

The remaining section of the script basically creates a new XML document, cycles through the lines from the source text file containing the period mappings and writes them out in XML format.
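A condensed sketch of the idea, not the full script; the element names besides “Periodkey”, the root and record element names, and the file paths are assumptions, so check them against an exported snapshot file before reusing.

$mappingFile = "C:\Temp\period_mappings.txt"                                                   # source text file with the mappings (assumed path)
$xmlFile     = "C:\Temp\snapshot\FDMEE-FDM Enterprise Edition\resource\Global Setup Artifacts\Period Mapping.xml"   # assumed file name
$elements    = "Periodkey","Priorperiodkey","Periodname","Targetperiodmonth","Yeartarget"      # order matches the text file columns

$settings = New-Object System.Xml.XmlWriterSettings
$settings.Indent = $true
$writer = [System.Xml.XmlWriter]::Create($xmlFile, $settings)
$writer.WriteStartElement("periodMappings")                                                    # assumed root element name

# skip the header row, then write one record per mapping with a child element per column
Get-Content $mappingFile | Select-Object -Skip 1 | ForEach-Object {
    $fields = $_ -split ","                                                                    # assumed comma delimited source file
    $writer.WriteStartElement("periodMapping")                                                 # assumed record element name
    for ($i = 0; $i -lt $elements.Count; $i++) {
        $writer.WriteElementString($elements[$i], $fields[$i].Trim())
    }
    $writer.WriteEndElement()
}
$writer.WriteEndElement()
$writer.Close()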


After running the script, the source text file with the mappings has been transformed into the correct XML format in the snapshot's period mapping file.


The snapshot directory structure can be compressed again into a zip file.


Then uploaded to the EPM cloud snapshot area.


Next, the snapshot is imported.


I know the process to compress, upload and import could be automated but as they are simple tasks that would not require repeating too often I left them as manual.

After the import has completed, the new mappings are available as Global Mappings in Data Management.
 

This solution works well and can be adapted to work across any of the period mapping types. The downside is you still have to produce the text file with the mappings, so how about taking that a step further and letting the script do all the work?

Based on the same format for the mappings as I have used above I came up with a new script; it takes user input for a start month and period and the number of months to generate in the period mapping file.

The first section is nearly identical to the previous script; the only difference is the reading in of the variables for the start month/period and number of months.
 

The main section of the script operates in a similar way to the previous script, except this time the period key, prior period key, period name, target period month and year target are all calculated.


The script generates period keys based on the last day of the month but could easily be updated to suit any range for the key.
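A small sketch of those calculations, assuming a Jan 2020 start and two months; the date and name formats are illustrative and may need to match whatever the snapshot file expects.

$startDate = [datetime]"2020-01-01"
$numMonths = 2

for ($m = 0; $m -lt $numMonths; $m++) {
    $current   = $startDate.AddMonths($m)
    $periodKey = $current.AddMonths(1).AddDays(-1)        # last day of the current month
    $priorKey  = $current.AddDays(-1)                      # last day of the previous month
    [PSCustomObject]@{
        PeriodKey         = $periodKey.ToString("MM/dd/yyyy")
        PriorPeriodKey    = $priorKey.ToString("MM/dd/yyyy")
        PeriodName        = $current.ToString("MMM-yy")     # e.g. Jan-20
        TargetPeriodMonth = $current.ToString("MMM")        # e.g. Jan
        YearTarget        = "FY" + $current.ToString("yy")  # e.g. FY20
    }
}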

An example of running the script to produce a mapping file that starts from Jan 2020 and produces two months of output would be:
 

I am only selecting two months for demo purposes; the file can be generated with any number of months.

Once the script has been run, the file is generated in the defined snapshot location and in the required XML format.
 

The snapshot can be compressed to a zip again, uploaded and imported.

The new mappings are available after the import has completed.
 

With a few changes and additions, the script can generate application period mappings.

I am not going to show the full script, only the differences to the previous one.
 

The variable for the directory location for application period mappings is different from the global one, and the mapping file is also different.

The elements array includes “Intsystemkey” and the value is the target application name which is read from user input.

The only difference in the main section of the script is due to the extra element, so the check goes from 4-6 to 5-7.
 

An example to generate an application mapping file for the application “Vision” for two months, starting from Jan 2020 would be:


The application period mapping file is then generated based on the input.


Again, zip the snapshot, upload and import it, and then the application mappings will be available in Data Management.


This type of solution can be implemented across EPM Cloud or on-premise and can be used again and again.

In the next part I will look at a possible solution that is only available for on-premise as it is based around FDMEE custom scripting and direct updates to the repository database tables.

EPM Cloud – Limiting the use of an application through automation - Update

Back with a very quick update. Recently I wrote a post about the different options for limiting the use of an EPM Cloud application to administrators and then returning it to all users.

At the time of writing the post, the only option to schedule this process was by creating a refresh database job and setting the enable use of application settings.


Moving forward to the 18.05 cloud release, there is now a new schedule job option called “Administration Mode”.


So if you know the exact times you want to limit access to the application, you can create a new job.


It is possible to run the job straight away, which would really be the same as going to “System Settings and Defaults” from the navigator; this then provides the options to set the application maintenance mode.


Anyway, once you have defined the frequency you then have the option to enable the use of the application to administrators or all users.


You can then create another job to return the application back to all users.


This time you would set the use of the application to “All users”.


When the scheduled job to limit the application to administrators runs, non-admins that are logged into the application will be automatically logged out, and any new logins will be greeted with the following message:


This will be the case until the job to return the application to all users runs or the system setting is changed in the UI. The next time the scheduled jobs are set to run will be shown under pending jobs.


So what if you don’t know the timings of when you want to limit access to the application (as you may want to include it as part of a scripted process)? Well unfortunately it does not look like Oracle has decided to provide the option to do this using EPM Automate or the REST API yet. In this case you can look at the possibility of trying out the method I provided in my recent post.

I am hoping Oracle will include this at some point in the future and if they do I will provide a further update.


FDMEE/Data Management - Managing period mappings - Part 2

In the last part, I went through Data Management/FDMEE period mappings and possible methods to automate populating them. The solution concentrated on the cloud but could still be used with on-premise; the idea was to update the XML period mapping files generated by snapshots and then import them back to generate the new mappings.

With Data Management in the cloud there is currently no custom Jython scripting allowed, so this pushes you to develop some solutions outside of the cloud. With on-premise FDMEE it is a different story, as you are able to build the solutions into the product using custom scripting. Also with on-premise FDMEE you have the option of the Excel interface which allows you to directly load files to the FDMEE database tables.

In this post I am going to go through a couple of possible methods to update period mappings directly from FDMEE.

In FDMEE, all the period mappings in the UI are populated by reading the information from a database table.


For global period mappings, these are populated from a table named “TPOVPERIOD”.


To be able to update the mapping table directly you can go to the Excel interface in the UI and select the entity type as “Period Mapping”. Select a file and location and download.


You can then populate the Excel template.


Before uploading the file, it is worth pointing out the following rules with the Excel interface:
  • Data is only inserted. It cannot be updated or deleted.
  • Data is not validated.
  • When FDMEE encounters a duplicate row, the row is skipped.
Once you are happy, the Excel file can be uploaded.


The FDMEE database table has had information from the Excel file inserted.


Back in the UI, the global period mappings now contain the two new mappings.


This is all fine, but what if you want something more dynamic where you don’t have to be bothered with populating Excel files? Well this is where a bit of custom scripting could help.

I am going to go through an example of updating the global and application period mapping tables by using a custom script; this will have parameters to define the start month/year and the number of months to generate.

The application period mappings are stored in a table named “TPOVPERIODADAPTOR”; the only difference from the global mapping table is the “intsystemkey” column which holds the target application name.


First a new script is registered in FDMEE; I will get on to the details of the Jython script later.


There are four parameters. One defines the start month; instead of allowing direct input, to minimise errors a query type has been used to generate the periods (based on an Oracle database).


Another query has been created to generate years where a start year can be selected.


The number of months parameter has been set as static so it is manually entered.

The target application name is defined by a SQL query; there is one already by default in FDMEE.


On to executing the script, the group and script are selected.


When the script is executed a window is displayed with the available parameters.


If start month is selected, a list of months is displayed for selection.


If start year is selected, a list of years is displayed for selection.


For this example, I am going to generate four months of period mappings; any number can be entered.


I have left the target application blank as I just want to update the global mapping period table.

After executing the script, a message is displayed to inform how many rows have been inserted into the period mapping table.


The database table has been populated with the 4 months of mappings.


These are available in the UI.


If I run the script again and this time select the application, a list of target applications is available for selection.


I selected the Vision application and chose to generate 3 months of mappings.


Once executed, a message confirms 3 rows have been inserted into the mapping table.


The application period mapping database table has been updated with the 3 new mappings.


These are then available under application mappings in the UI.


If I run the script again with the same parameter values, the message this time informs that no rows were inserted as the period keys already exist.


On to the Jython script that does all the work to populate the mapping tables.

I am not going to go through it in detail as the script is already commented, so it should give you a good idea of what is happening.

In summary, the start month, year, number of months and target application parameter values are retrieved and stored.

There are similar SQL statements depending on whether a target application has been selected or not.  There is a query to count if there are duplicate period mappings and an insert statement to the relevant period mapping table.

The start month and year are converted into a valid date.


A loop cycles through the number of months that need to be populated into the mapping table.

The period key is generated by calculating the last day of the month for the current date and appended to the SQL parameters.

The last period key, period description and year target are generated and appended to the SQL parameters.

The target period quarter, year and day are not used in this example so nulls are generated in the SQL parameters.

The query is executed to check if a period key already exists for the current period key that will be inserted into the mapping table.


If there is already an existing period key, store the duplicate key.

If there is not an existing period key, insert the period information into the mapping table.

Then the date moves forward one month and the process is repeated until all months have been looped through.

Finally, a message is displayed to inform how many rows were inserted and if there were any period key duplicates.


If you are running FDMEE 11.1.2.4.210+ then you can take advantage of the REST API to execute the custom script. I have covered the FDMEE REST API in detail and you can read about it starting here.

An example using a REST client to generate 12 months of mapping starting from January 2021 would be:


Once the script has completed the period mappings will be available in the UI.


This could be converted to a script where input is taken, the REST API is called to run the custom script which then generates the period mappings.


The period mappings will then be available under application mapping.


Well that concludes the two-part look into managing FDMEE/Data Management period mappings.

Data Management now supports executing business rules

In Data Management, an area of functionality that has not been available until now is the ability to execute business rules. If you wanted to run a business rule to say aggregate the loaded data you would have to run the data load in Data Management and then switch over to planning to run the rule.

This was not a problem if you were running processes outside of the Data Management UI, because with EPM Automate or the REST API you have the ability to run a data load rule and business rule separately.

Before I get on to the new functionality in Data Management I think it is worth briefly going through what is currently available in on-premise FDMEE.

With on-premise FDMEE there is no option to execute business rules but it is possible to execute calculation scripts; this is true for both Essbase and Planning target applications.

Within the target application details there is a tab for calculation scripts which allows scripts to be run at Data Rule, Location, Category or Application level.


The scripts can be executed before or after a data load, or before or after check events.


The functionality works alongside Essbase runtime substitution variables in calculation scripts; this allows a large selection of values to be passed from FDMEE into the Essbase calc script.


So on-premise FDMEE provides quite a bit of flexibility for running scripts, even if it doesn’t meet your requirements then custom scripting can help achieve whatever you need.

There is not really any direct access to Essbase in EPM Cloud so this means the new functionality introduced in the 18.06 release is directed towards executing business rules.

As I have quickly provided a summary of what is available for on-premise, it is time to compare this to the new feature in PBCS and EPBCS Data Management.

There are the following important notes from the documentation for this release:
  • Business rules are registered in the Target Application option.
  • Business rules are available only for Planning applications.
  • Business rules with runtime prompts are not supported when executed from Data Management.
  • Business rules can be assigned an Application scope or to a specific data load rule.
If you are running EPBCS or PBCS + additional module then the rules can be either a standard calculation script or a Groovy script.

Currently business rules are only executed after data load rules have completed; there are no options to change this behaviour like with on-premise.

There are some other points that are not yet mentioned in the documentation which will become apparent as I test out the functionality.

From the above notes, the standout one for me is that there is no ability to pass values to runtime prompts, which does limit the rules that can be used. This is disappointing as it is possible to pass runtime parameters when running rules using EPM Automate or the REST API.

Anyway, let’s get on with it and test out executing business rules in Data Management.

If you select a target planning application in data management there will be a new “Business Rules” tab.


As described in the notes, the scope of the rules can only be at application or data load rule level.

There is no option to retrieve the available business rules in planning so the name must be entered manually and it is not validated.


Personally, I think it would have been nice to be able to select from a list of available rules; it is possible to return a list of rules using the REST API so I don’t understand why this could not have been included.

I have entered a valid business rule name and set it to application scope; this means any data load rules that are run will execute the business rule after the data load completes.


The load method is set to the default of “Numeric Data Only” which loads data directly to Essbase using an Essbase load rule.


The data management load rule is then executed and the process logs indicate it was successful.


Looking at the planning job console, no rule was executed.


In the data management process logs there is no mention of the business rule.

I updated the load method in the data load rule to “All data types with security” which means data is loaded through the planning layer using the Outline Load Utility (OLU).


The data load rule is run again and this time there is an entry in the job console to show the business rule has been run.


In the data management process log there is the following entry:

Property file arguments: /DF:MM-DD-YYYY /DL:tab /PDR:MTL_AGG_1 /I:***Vision_882.dat /TR

So it looks like there is a new property being used in the OLU which defines the business rule to run.

There is no further information in the process log about the business rule or whether it ran; the assumption is that if there is not an error, the rule ran successfully.

It is a shame if the business rule functionality only works when the load type is set to all data types; at the time of writing, the documentation does not specify this information.

On to the next test, the documentation specifies:

“The Application scope rules does not run if a Data Rule scope exists for the data load rule.”

I added a new valid rule at data rule level and applied a data load rule to it.


This should mean only the rule defined at data rule level should run and the one at application level will be ignored.

After running the data load rule, the process log proves this theory to be correct.

Property file arguments: /DF:MM-DD-YYYY /DL:tab /PDR:MTL_AGG_2 /I:***Vision_883.dat /TR

The planning job console shows that the rule was run.


For the next test I wanted to check whether the following information in the documentation was correct:

“If the scope is Data Rule, only rules for the running data rule run in sequential order.”

The statement should also apply to  business rules added at application level.

I added a new rule at data load rule level and set the sequence so “MTL_AGG_3” should run first.


The data load rule was successful.


This time the process log had the two rules in the OLU arguments and in the same order defined in Data Management.

Property file arguments: /DF:MM-DD-YYYY /DL:tab
/PDR:MTL_AGG_3,MTL_AGG_2 /I:***Vision_884.dat /TR

The job console showed that both the rules ran, and after checking the times, they ran in the correct order.


For the next test I wanted to see what happens if an invalid rule name was entered: would the Data Management rule fail?


The data load rule ran successfully.


The process log contained the rule name as an argument.

/DF:MM-DD-YYYY /DL:tab /PDR:InvalidRule /I:***Vision_885.dat /TR

So if an invalid rule name is entered there is no failure.

Next to test what happens if a rule errors, I added a rule which requires a runtime prompt value.


The good news is the data load rule failed.


There was the following entry in the process log that provided the reason for the failure.

com.hyperion.planning.HspCallbackInvocationException: Business rule failed to execute. See the job console page for error details.

As expected the planning job console shows an error against the rule.


Finally, I wanted to test what happens with a non-admin user and the load method in the rule set to “All data types with security”. This means the data is loaded from Data Management to planning using the REST API and the import data slice resource; a grid of data is generated in JSON format and posted to planning.

A business rule was defined in the target application details.


The user does not have access to run the rule in planning.


The non-admin user ran the data management rule and it completed with no errors.


This time in the process log it actually mentions the business rule name and you can see that the REST API using the import data slice resource is in operation.

DEBUG [AIF]: businessRuleName: MTL_AGG_1
DEBUG [AIF]: Overrode info.loadMethod for the non-admin user: REST
DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/Vision/plantypes/Plan1/importdataslice


The job console shows the business rule was run; even though the user does not have access to the rule in planning, it can still be run using the REST API resource.


What is interesting is that in the payload of the REST API request there is a new parameter that is not currently in the REST API documentation.


   "aggregateEssbaseData":false,
   "dateFormat":"MM-DD-YYYY",
   "customParams":{ 
     
"PostDataImportRuleNames":"MTL_AGG_1"

The parameter “PostDataImportRuleNames” allows business rules to be executed after submitting a grid of data.

Just to prove that multiple business rules can be executed in a set sequence, I added a new rule and defined the sequence.


The data load rule ran successfully and the process log had entries for both rules in the correct order.

DEBUG [AIF]: businessRuleName: MTL_AGG_2,MTL_AGG_1

The JSON posted with the REST API had both the rules and in the correct order.


   "aggregateEssbaseData":false,
   "dateFormat":"MM-DD-YYYY",
   "customParams":{ 
     
"PostDataImportRuleNames":"MTL_AGG_2,MTL_AGG_1"

I checked the job console and the rules did run in the correct sequence.

That completes my initial look at the Data Management functionality for executing business rules; it is certainly not without limitations and there is room for improvement.

Essbase REST API - Part 1

I am no stranger when it comes to Web Services and in particular REST APIs, which I have been covering for a number of years now; I am a strong believer that they play an important part in integration and automation. With the rise of the cloud, REST APIs are integral to process automation between different cloud services.

In the past I have covered on-premise Essbase Web Services, which are based on the SOAP protocol; the preferred method these days is REST, which is prevalent in most cloud services. I definitely enjoy working with REST over SOAP, mainly for its simplicity and ease of use.

The 11.1.2.4 Essbase patch releases have contained the following in the readme:

"JAPI SOAP web services will be removed in a future release and they will be replaced with REST APIs."

Unfortunately for on-premise Essbase this has not yet happened and it is unclear when it will happen.

It is a different case for the Essbase side of Oracle Analytics Cloud (OAC), where a REST API is available. Though, unless I am missing something, it does not seem to be well documented yet, but I am sure that will change over time.

I have to assume that one day the REST API will also be pushed down from cloud to on-premise, but for now I will be covering what I have found available in the Essbase cloud service. In this first part I am going to go through a really quick introduction in getting started with the REST API.

Just in case you are not aware, the OAC – Essbase Command-Line Interface (EssCLI) is built on top of the REST API so you may have already inadvertently been using the API.

Getting started with the REST API is simple; all you need is a REST client. There are lots of REST clients available as add-ons for most browsers, or alternatively standalone apps like Postman or Insomnia.

If you want to take using REST a stage further, for say automation, then most scripting languages provide the functionality to achieve this.

I am going to stick with what I have been doing for the last few years: using the Boomerang app for Chrome as a REST client and PowerShell for scripting, mainly for demo purposes and ease of use. The concept will be the same whichever client or scripting language you choose to use.

I won’t go into detail on what I have already covered in the past, but here is a description of REST from my first post on the planning REST API:

REST describes any simple interface that transmits data over a standardized interface (such as HTTP) without an additional messaging layer, such as SOAP.

REST provides a set of design rules for creating stateless services that are viewed as resources, or sources of specific information, and can be identified by their unique URIs.

RESTful web services are services that are built according to REST principles and, as such, are designed to work well on the Web. Typically, RESTful web services are built on the HTTP protocol and implement operations that map to the common HTTP methods, such as GET, POST, PUT, and DELETE to retrieve, create, update, and delete resources, respectively.



Examples of each method in terms of Essbase could be:

GET – Retrieve a list of applications, databases or logged in users.
POST – Execute a calculation, MDX or MaxL script.
PUT – Update a calc script, substitution variable or filter.
DELETE – Delete an application, script, variable or filter.

The URL structure to access Essbase REST resources would follow the lines of:

https://<oac_essbase_instance>/rest/{api_version}/{path}

Currently the api_version is v1.

So let’s start out with one of the most basic resources to return the version information, this is the same as selecting ‘about’ in the web UI.


The EssCLI has a command to return the version.


Using a REST API client, you would just need to enter the base REST API URL plus “about”, for example:


Before you run the request, you will also need to enter your credentials.


This will add a basic authorization header to the request with the username and password base64 encoded.


Now you are ready to execute the request and receive a response back in JSON format.


To convert this into a scripting equivalent does not take much effort.


Basically, the user credentials are encoded into base64 and added to the request header, this is automatically done for you when using a REST client.

The URL for the about REST resource is defined, a GET request is made to the URL and the response stored, this is then outputted.
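To give an idea of the scripting side, here is a minimal PowerShell sketch along those lines; the instance URL and credentials are just placeholders:

# Placeholders for the OAC instance, user and password
$user = "username"
$password = "password"
# Encode the credentials and build the basic authorization header
$encoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($user + ":" + $password))
$headers = @{"Authorization" = "Basic " + $encoded}
# Define the URL for the about resource and make a GET request
$aboutURL = "https://<oac_essbase_instance>/rest/v1/about"
$response = Invoke-RestMethod -Uri $aboutURL -Method Get -Headers $headers
# Output the response
$response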

To return session information about the user making the REST request, then all that is required is “session” suffixed to the base URL.


In terms of scripting, the existing script only requires the URL to be updated to include “session”.


To retrieve a list of Essbase provisioned users then the “users” resource can be accessed.


This will return a list of user information in text format, there is an equivalent resource for returning group information, no surprise that this is accessed with “/groups”.

To retrieve information on which users are logged into the Essbase instance then the “sessions” resource can be requested.


The same script could be updated to return a list of logged in users and the time they have been logged in.
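A rough sketch of that; the userId and loginTime property names are my assumptions about the JSON that comes back:

# GET request to the sessions resource, suppressing links
$sessionsURL = "https://<oac_essbase_instance>/rest/v1/sessions?links=none"
$sessions = Invoke-RestMethod -Uri $sessionsURL -Method Get -Headers $headers
# Output each logged in user and their login time - property names are assumptions
$sessions | Select-Object userId, loginTime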


I am going to leave it there for this post and in the next part I will get into some of the more interesting REST API resources available.

Essbase REST API - Part 2

In the last part I introduced the Essbase REST API which is currently only available in OAC, though you never know it might make it down to on-premise one day.

It seems like a big chunk of the functionality in the Essbase web UI is built around REST which means you should be able to achieve a lot with it.

In this part, I am going to cover REST resources that could form part of Essbase application and database monitoring, such as retrieving available applications/databases and their properties, and starting, stopping and deleting them.

As explained in the last post the URL structure to access the REST API will be:

https://<oac_essbase_instance>/rest/v1/{path}

Once again, I am going to use a combination of a REST client and example scripting. I am not getting into a debate about which is the best client or scripting language as that usually boils down to personal choice, the idea is to provide information on some of the Essbase REST API resources that are out there.

So let us start out with returning a list of applications.

The format of the URL is:

https://<oac_essbase_instance>/rest/v1/applications

An example of the URL would be something like:


If we put this into a REST client and make a GET request the following response will be received in JSON format.


A list of available applications will be returned and some properties, such as whether the application is started, and the type. You will notice that the creation and modified time are in Unix time which basically means the number of seconds since 1st Jan 1970.

Included in the response are four http links that provide the ability to start, stop and delete the application, as well as return a list of databases, I will get on to these shortly.

Any of the responses that return links can be suppressed with a query parameter and value of “links=none”

For example:


The response will be returned without links.


In terms of scripting, I am going to start out with the same script I used in the last post, the only difference this time is I am including applications in the path and the parameter to suppress links.

This means that with a simple script a list of applications and their associated properties can be returned.


If I wanted to return the applications that are currently started:


I could take this a step further and show which applications are started and when they were started; in this example I convert the Unix time into an understandable timestamp.
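A hedged PowerShell sketch of that idea; the items, status and startTime property names are assumptions on my part, and the conversion assumes the epoch value is in seconds (switch to FromUnixTimeMilliseconds if it turns out to be milliseconds):

# Return all applications with links suppressed
$appsURL = "https://<oac_essbase_instance>/rest/v1/applications?links=none"
$apps = Invoke-RestMethod -Uri $appsURL -Method Get -Headers $headers
# Filter to started applications and convert the epoch start time to a readable timestamp
$apps.items | Where-Object { $_.status -eq "started" } |
  Select-Object name, @{Name="started";Expression={[DateTimeOffset]::FromUnixTimeSeconds([int64]($_.startTime)).LocalDateTime}}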


It is just as simple to return only BSO or ASO applications.


To display the total number of applications there is a property named “totalResults” that is available.


This matches the value shown in the UI.


To return settings for the application, “settings” can be included in the path, along with a query parameter and value of “expand=all”.

GET - https://<oac_essbase_instance>/rest/v1/applications/{appName}/settings?expand=all


The option to suppress the links could have been included, the following settings and values are returned in the response.


Going back to the links that were shown earlier, there is the ability to start and stop an application.


In the UI we can see the sample applications are stopped.


With a REST client I can simply add the URL to start the Sample application, a PUT method is required.

PUT - https://<oac_essbase_instance>/rest/v1/applications/{appName}?action=start


If the operation is successful, a http status code of 200 will be returned with no JSON.


To stop the application, all that is required is a quick change to the URL to include the action parameter with a value of stop.

PUT - https://<oac_essbase_instance>/rest/v1/applications/{appName}?action=stop


If a http status code of 200 is returned the application should be stopped.
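In script form this is just a PUT with the action parameter; a quick sketch, with the application name as an example:

# Start the Sample application - change the action value to stop to shut it down
$app = "Sample"
$actionURL = "https://<oac_essbase_instance>/rest/v1/applications/$($app)?action=start"
Invoke-RestMethod -Uri $actionURL -Method Put -Headers $headers
# A 200 status is expected; Invoke-RestMethod will throw if an error status such as 400 comes back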


If you try and stop an application that has been stopped, a status code of 400 will be returned with an error message in JSON format.


It is similar if you try and start an application that is already started.


The message is the same if you try the same operations in the web UI.


Moving on to the delete link which is included in the response when listing applications.


In this example I am going to delete an application named “Sample-MTL”.


This time a delete method is required with the application name included in the URL.

DELETE - https://<oac_essbase_instance>/rest/v1/applications/{appName}


If the operation is successful, a http status code of 204 with no JSON is returned.

In the UI, the application has been deleted and now there are 5 applications in total.


The remaining link that is returned provides the ability to list the databases that are part of the application.


In this example, the Sample application has two databases.


The URL includes the application and databases in the path, to return a list of databases and properties a GET method is required.

GET - https://<oac_essbase_instance>/rest/v1/applications/{appName}/databases


The list of properties returned is the same as with applications; there is only one link, which allows you to return a single database, and the properties returned for it are exactly the same.


In terms of scripting, the previous script could be updated to provide a list of all applications and databases.

First, all the applications are returned and then a loop cycles through each of them, for each application the databases are returned, these are cycled through and the name of the application and database are displayed.
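Something along these lines would do it; the items and name property names are assumptions based on the JSON shown earlier:

# Return all applications, then loop through each one returning its databases
$apps = Invoke-RestMethod -Uri "https://<oac_essbase_instance>/rest/v1/applications?links=none" -Method Get -Headers $headers
foreach ($app in $apps.items) {
  $dbURL = "https://<oac_essbase_instance>/rest/v1/applications/$($app.name)/databases?links=none"
  $dbs = Invoke-RestMethod -Uri $dbURL -Method Get -Headers $headers
  foreach ($db in $dbs.items) {
    # Output the application and database name
    "$($app.name) - $($db.name)"
  }
}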


Any of the properties could have been included, so it could have displayed all the applications and databases that were started and when they were started.

To get a list of database settings, the URL requires the database name to be included, like:

GET - https://<oac_essbase_instance>/rest/v1/applications/{appName}/databases/{dbName}/settings


Including the expand parameter will return all the available settings.


If settings are not enabled they will not be returned; in this case calculation has not returned any properties, but if you look in the Essbase UI there are the following settings:



If I update the aggregate missing values setting.


This time the setting is included in the response.


Updating a setting should be possible using a PATCH method, but unfortunately this is not working for me in the current release.

In theory it should be something along the lines of:

PATCH - https://<oac_essbase_instance>/rest/v1/applications/{appName}/databases/{dbName}/settings


To obtain the database statistics the following URL can be used with a GET request:

GET - https://<oac_essbase_instance>/rest/v1/applications/{appName}/databases/{dbName}/statistics


This will provide a response of:


No information is currently being returned for runtime but this is similar to the UI, so maybe just related to the version.

Any of this information can be returned with a script; in the following example the statistics and settings for the Sample Basic database are returned, and the start time of the database is converted from Unix time using a function.
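The conversion itself only needs a small helper function; a sketch, assuming the returned value is in seconds as described earlier (use FromUnixTimeMilliseconds instead if it turns out to be milliseconds):

# Helper to convert a Unix epoch value into a readable local timestamp
function Convert-FromUnixTime ($unixTime) {
  [DateTimeOffset]::FromUnixTimeSeconds([int64]$unixTime).LocalDateTime
}
# Return the statistics for the Sample Basic database
$statsURL = "https://<oac_essbase_instance>/rest/v1/applications/Sample/databases/Basic/statistics?links=none"
$stats = Invoke-RestMethod -Uri $statsURL -Method Get -Headers $headers
$stats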


The properties and values are only outputted to the console window as an example, these could easily be turned into a report or sent out by email, alternatively actions could have been taken depending on the returned values.


With all these different resources available through the REST API, it provides an excellent alternative for Essbase monitoring.

In the next part I am going to move on to managing substitution variables, filters and calculation scripts.

EPM Cloud - Data Integration comes to the simplified interface

A new piece of functionality was added in the EPM Cloud 18.07 update that I thought I should cover.

The cloud readiness monthly update document introduced the new feature in the following way:

"In addition to the standard Data Management interface, the 18.07 update provides a new, simplified interface to work with integrations in the Oracle’s Planning and Budgeting Cloud Service. The new simplified interface, called Data Integration is the front-end for all integration-related activities. Using this interface, the workflow to create mapping rules to translate and transform source data into required target formats, and to execute and manage the periodic data loading process is now streamlined and simplified."

So basically, Data Management is using the legacy standard interface and the end game for Oracle is to move everything into the simplified interface, so this is the first step in that process for Data Management.

What you will also notice is the name is changing to Data Integration, which means for now we have on-premise FDMEE, Data Management and Data Integration. Just to add to the confusion there is the unrelated Enterprise Data Management cloud service which is probably why Oracle want to change the name to Data Integration.

Before you get too excited about this release, it is early days and it is definitely nowhere near close to replacing Data Management.

Here is an excerpt from the documentation:

“Currently, Data Integrations is available as a preview version only for Planning and Oracle Enterprise Planning and Budgeting Cloud system administrators. The Data Integration component must be used in conjunction with Data Management to complete setup tasks such as registering source systems or target applications. Data Management is still fully supported and remains available as a menu selection on the Navigator menu.”

and:

“Data Integration does not replace the legacy Data Management, it is an additional feature that supports the same workflow with a subset of legacy features. Data Integration will continue to be enhanced until it has one hundred per cent parity with Data Management.”

What we can extract from the above statements is that it is a preview version and it is only available for standard and enterprise PBCS. It also does not contain all the functionality, so the missing pieces still need to be undertaken in Data Management.

The areas that still need to be setup in Data Management are:
  • Register Source System
  • Register Target Application
  • Period Mapping
  • Category Mapping
There are some important terminology changes with Data Integration compared to Data Management.


I am sure this is going to cause some added confusion while both Data Management and Data Integration exist, plus the fact there is on-premise FDMEE.

It is worth going through the list of functionality that is not supported or available in this release, for now it is quite a big list but obviously this will change over time.
  • Only supported in standard and enterprise PBCS
  • Only available for Service Administrators
  • Location attributes like currency, check entities and groups cannot be defined.
  • Logic groups are not available.
  • Fixed length files are not supported
  • Import expressions must be entered manually.
  • In the workbench the following is unavailable:
    • Import/Validate/Export/Check workflow processes
    • Validation errors
    • Displays only dimensions in the target application and columns cannot be added
    • Drill to source
    • View mappings
    • Only the Source and Target view is available.
    • Import/Export to Excel
  • In map members (data load mappings):
    • Rule Name is replaced with Processing order
    • Mappings cannot be assigned to a specific integration (data load rule)
    • Exporting is not available
    • Mapping scripts are unavailable
    • Multi-dimensional mappings are available but cannot be defined.
  • Column headers for multi-period loads are unavailable
  • Scheduling is unavailable.
With all of the current limitations, maybe you are starting to understand that it is definitely a preview version.

I think it is about time we take the new functionality for a test drive; my objective is to create a simple file-based data load.

Before I start out I think I should point out that the issues I hit might only be because it is extremely new, you may not have the same experience and if you do, then I am sure any bugs will be ironed out over time.

You can access Data Integration though the navigator by selecting Data Exchange.


Alternatively, you can select the Application cluster and then Data Exchange.


This takes you to the Data Integration homepage where all the available integrations are displayed, you can also access Data Maps.


What I did experience is that the sort option did not work correctly, especially by last executed. Also, all the executions had a time of 12:00:00 AM.

It is nice though that all the integrations are available and can be executed or edited from this homepage.

Let us start out by creating a new integration by selecting the plus icon.


This opens an integration workflow where you are taken through the steps in defining an integration in a simplified way.


The general section allows you to define a name for the integration and create a new location by typing in a name, or select an existing location.


After providing a name for the integration and location, you can select the source icon, this then allows you to select from all the available sources.


As I am creating a file-based load I selected File which then opens a file browser window.


This allows you to select a file, create or delete a folder, and upload a file. It operates in the same way as the file browser in Data Management but I actually prefer this one in the simplified interface.

I opened the inbox and the browser displays the directory you are in.


Selecting inbox provides the option to create or delete a folder.


I selected to create a folder and provided a name.


Now the source file can be uploaded.


The file will then be displayed in the browser where you have the option to download or delete it.


After clicking OK, the source details are populated in the integration configuration.


Below the filename there is a file options button, selecting this will open up a file import window, this is like the first part of creating an import format in Data Management.


This provides the option to select a different file and select the file type and delimiter. These were automatically selected correctly and I didn’t need to change them, so they must be determined by Data Integration reading the file.

The options for type and delimiter match to that in Data Management.

You also get to see a preview of the file which is a nice feature and can select the header to use as the column names.

The next screen is the file column mapping, if selected in the previous screen, the header in the file will populate the column names, it is possible to override the naming of columns by just entering new names.


Moving on to the target.


Selecting Target allows you to select the target for the integration which provides the same look and feel as when selecting a source.


I selected the planning application, this completes the general section of the workflow.


Select “Save and Continue”, this moves on to the map dimensions section which is equivalent to the source and target mappings when creating an import format.


Just like with an import format you can select a source dimension and map it to a target.


If you select the cog icon you have the following options.


This pretty much mirrors the ‘add’ functionality in the standard Data Management interface.


In the simplified interface you have to manually type any required expression. In the standard interface you have the option to select an expression type from a dropdown.


Now that the map dimensions have been configured, save and continue can be selected.

For me, it didn’t continue in the workflow and after saving I had to select “Map Members”.


The map members section of the workflow is the equivalent of creating data load mappings.

There is the option to import mappings but not export.


So let’s add some simple mappings.


In the source you have the option to define the type of mapping, this differs from the standard interface where there are tabs available for the different mapping types.


The concept of the mapping types and order of precedence is exactly the same as in Data Management, it wouldn’t make any sense if the logic had changed.

You will see “Multi Dimensional” is in the list but I don’t believe you can define the mappings in this release.


There is a column for processing order which is the replacement for rule name in Data Management, it operates in the same way and defines the order of precedence within a mapping type based on an alphanumerical value.

Now this is the point where I started to hit issues; even though I saved the mappings for each dimension, when I returned to them they were blank.


When I got to the next part of the workflow to define the options I could not select a category or plan type.


The options section of the workflow is the equivalent to the options available in a data load rule in Data Management.

When I went back and edited the integration I received the following error message when I tried to save.


I went into Data Management and could see the import format had been created but there was no location.

I tried to create a location with the same name as the one in the simplified interface and was hit with another error.


I now have a location that does exist but I can’t see it in the standard interface and it doesn’t seem to save properly in the simplified interface.

I did have a couple of attempts at it and hit the same problem; maybe it was because I was trying to create a location from the simplified interface instead of using an existing one. Once I get the opportunity I will look at it in more detail and update this post if anything changes.

The integrations I created do appear in the Data Integration homepage.


Though unless I am missing something, I don’t seem to be able to delete them in the simplified interface and they don’t appear in Data Management so I am stuck with them.

Instead of dwelling on this problem as I might have just been unlucky, I decided to create an integration in Data Management and then edit it in Data Integration.

The integration is available from the Data Integration homepage.


Selecting Actions provides the following options for the integration:


I selected “Map Members” and this time the mappings were available.


You will notice that a multi-dimensional mapping is displayed with a value of #SCRIPT, in this release even though it is an option it is not possible to fully define a multi-dimensional mapping in the simplified interface.

In the options section of the workflow, the category and plan type were now available and I could update them to other available ones if needed.

The filename and directory are also displayed which didn’t happen when I configured the integration through the simplified interface.


From the Data Integration homepage, the integration can be run.


This provides similar options to running a load rule in Data Management in the standard interface.


A confirmation window is displayed and provides the process ID.


As I mentioned earlier, the time of the last execution is not correct and for some reason is displayed as 12:00:00 AM.


It is possible to view the process details from the actions menu.


Process details in the simplified interface is basically a cut-down version of the one in Data Management, at least the execution times are correct and the process log can be downloaded.


The workbench is an extremely simplified version of the one in Data Management.


I only seemed to be able to change the period and selecting the icons did not do anything, so at the moment it looks like they are just there to indicate whether an import/validation/export/check has been run.

An annoyance for me was the filtering on the homepage, which is something you will use if you have lots of integrations; if I filtered on an integration, went to any section in Data Integration and returned to the homepage, the filter had been removed.

For instance, I filtered down on the integration I was interested in.


I then opened the workbench and closed it, the filter was gone even though I was still technically in Data Integration.


Another thing I noticed was that even though I was active in Data Integration I would still get expiration warnings.


As stated earlier it is a long way off parity with Data Management and there are lots of improvements and functionality additions that need to happen. At the moment I could only see using it to run integrations or quickly access process details without having to open Data Management.

I am sure this will change over time and no doubt I will be back with further updates.

Essbase REST API - Part 3

Moving on to part 3 of the series looking at the Essbase REST API, which is currently only available in OAC. As I mentioned previously, most of the Essbase web UI is built on top of REST so there is a hell of a lot you can do with it; this also means it would be an endless task trying to cover everything, so I am going to pick some of the common management type tasks where adding in automation can assist further.

Historically carrying out everyday tasks would be done with MaxL, this is definitely still the case and there is nothing wrong with that. The problem I have always found with MaxL is that it can be restrictive, the output is not great, and it doesn’t interact so well with other scripting languages. This is where the REST API can come to the rescue and the good news is that it can be integrated with most scripting languages.

I am going to assume you have read at least the first part of this series, so you understand what the Essbase REST API is all about. I will stick with the same style examples using a mixture of a REST client and PowerShell, the beauty is you can pick whichever scripting language you feel most comfortable with, the example scripts I show are just there to give you an idea of what can be achieved with simplicity.

Let’s start off with a common task of managing substitution variables.

In the OAC Essbase UI sub vars can be managed through the variables tab.


To return the variables using the REST API is extremely easy.

Remember the URL structure to access the REST APIs will be:

https://<oac_essbase_instance>/rest/v1/{path}

To return all the sub vars at an application level then the format is:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/variables

and database level:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/variables

For example, using a REST client, a GET request to the following URL will return all the sub vars for the Sample application; I am also suppressing the links for display purposes.


This will return the following variables in JSON format.


If you want to focus on a single variable then the name of the variable is included in the URL.



This time I have included the links which give you an indication of how to edit and delete sub vars.


Using a PUT method request, it is possible to update a variable; the body of the request requires the variable name and new value in JSON format. In the following example I am going to update the “currMonth” variable from “Jul” to “Aug”.


The response shows the variable has been updated.


A quick check in the Essbase UI proves it has been updated.


A DELETE request to delete a variable at application level would be

https://<oac_essbase_instance>/rest/v1/applications/<appname>/variables/<subvarname>

or database level

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/variables/<subvarname>


A 204 status code will be returned if the deletion was successful.

Back in the Essbase UI, the variable has been removed.


To create a new variable then the URL format is similar, a POST request should be made and the body of the request should include the variable name and value in JSON format.


The response returns the newly created variable details.


In the Essbase UI the new variable is available.


In terms of scripting it is simple to manage sub vars, I am starting out with the same base script that I have used in the previous parts of this series.

The user and password are base64 encoded so they can be added to the header of the request.

The REST base URL and Essbase application are defined and are then combined to create the URL of the resource.

A request is made to return all the variables at the application level and the response is outputted to the console window.


To update a variable it is a similar script, this time the sub var name and new value are defined, these are then converted to JSON and added to the body of the request.

A PUT request is made and the response with the updated variable value is outputted to the console window.
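For illustration, a stripped-down sketch of that update; the name and value property names in the JSON body are my assumption of what the resource expects:

# Define the sub var to update and its new value
$app = "Sample"
$varName = "currMonth"
$varValue = "Aug"
$varURL = "https://<oac_essbase_instance>/rest/v1/applications/$app/variables/$varName"
# Build the JSON body and make the PUT request
$body = @{"name" = $varName; "value" = $varValue} | ConvertTo-Json
$response = Invoke-RestMethod -Uri $varURL -Method Put -Headers $headers -Body $body -ContentType "application/json"
$response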


To create a new variable, the only difference from the previous script would be a POST request instead of a PUT.


Deleting is again very simple, with not much variation required.


So as you can see, it is not a difficult task to manage sub vars.

Moving on to another common maintenance task, which is managing filters.

To retrieve all the filters against an Essbase database, the following URL format is required:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/filters

An example using a GET request and suppressing links:


The response in JSON format will contain the names of all the filters assigned to the database.


To return a single filter the format is:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/filters/<filtername>

This time I am not suppressing the links; the response provides you with the URLs and methods to manage the filters.


If I look at the same filter in the Essbase UI, the member specification and access level are displayed.


To return the same information using the REST API then a GET request is required with the URL format:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/filters/<filtername>/rows


The response as usual is in JSON format and is split by row definition like in the UI.


To update a filter definition a PUT request is required:


The body of the request should include all the required filter rows, any that are not included will be removed, basically it acts in the same way as a replace.

In the following example I am only going to include one row, which will mean the other existing rows will be removed.


A check back in the UI confirms the filter now only contains one row, alternatively I could have used the REST API to confirm this.


To delete a filter the URL format is the same except a DELETE request is required.


I have two filters created and say I wanted to delete the filter named “MTL-Filter”


A DELETE request would be made including the filter name in the URL


If the deletion was successful a 204 status will be returned.

In the UI you can see the filter is no longer available.


To create a new filter and define access rows then the format for the URL is the same as retrieving all the filters.

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/filters

A POST method is required, and the body of the request should include (in JSON format) the name of the new filter and the access definition rows.


In the UI you can see the filter and the access definition have been created.


In reality you wouldn’t be using a REST client to manage filters and you probably would want it automated using a script.

In the following example I have a text file and the filename is the name I want to create.


I did delete the filter first as it has the same name as the one that already exists.

The content of the file contains the access rows.


The script defines variables for the Essbase application, database name and filename.

The filter name is extracted from the filename.

The REST URL is built from the variables.

The contents of the CSV file are read and then converted to JSON to be used in the body of the REST request.

A POST request is made to create the filter and definition.


Back in the Essbase UI you can see the filter is there with the access rows from the file.


If you have ever worked with MaxL then hopefully you will agree it is much simpler to manage filters using the REST API.

So, what about assigning permissions to a filter? Well, once again you can use the REST API to achieve this.

Before assigning the filter permissions, let us take a step back and first provide access to the Essbase application.

Currently there is only one user with access to the application.


To update permissions for an application then the REST URL format is:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/permissions/<user_group_name>

A PUT request is required, and the body of the request should include the user/group and role.


If successful, a 204 status code will be returned.

In the UI, the user has been granted access to the Essbase application.


Alternatively, the REST API could return all the permission information for the specified application.


Back to the filter that was created earlier: now that the user has been granted access to the application, they can be assigned permissions to the filter.

In the UI this would usually be done in the permissions tab for the filter.


To achieve this with the REST API then a POST request should be made to the URL format:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/filters/<filtername>/permissions

The body of the request should include the user id or group you want to assign permissions to.


This can be simply achieved using scripting, I don’t feel like I need to explain as the concept is pretty much the same as the previous example scripts.


In the UI, the user has been assigned permissions to the filter.


To delete permissions, a DELETE request should be made to the URL format of:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/filters/<filtername>/permissions/<user_group_name>


Right, I am going to leave it there for this post. In the next part I will go through managing calculation scripts and running and monitoring jobs.

Essbase REST API - Part 4

On to Part 4 of this series looking at the Essbase REST API, which is currently only available in OAC. Just to recap, in the first part I provided an overview of the REST API, and the second part was focused on application and database monitoring, such as retrieving applications/databases and their properties, and starting, stopping and deleting them. In the last part I concentrated on management type tasks like managing substitution variables, filters and access permissions.

In this post I am going to cover scripts: listing, creating and editing them. The examples will be based on calculation scripts, but the concept is the same for the other available types of scripts like MDX, MaxL and report scripts. I will also look at running scripts through jobs and monitoring the status.

As usual I will stick with the same style examples using a mixture of a REST client and PowerShell, the choice is yours when it comes to scripting so pick the one that you feel most comfortable with.

You should be aware now that the URL structure to work with the REST API is:

https://<oac_essbase_instance>/rest/v1/{path}

In the UI, the different types of scripts can be viewed at the database level.


To retrieve a list of calc scripts the URL format is:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/scripts

Just like with the other REST resources you can add the parameter “links=none” to suppress the links in the JSON that is returned.

With a GET request against the following URL:

https://<oac_essbase_instance>/essbase/rest/v1/applications/Sample/databases/Basic/scripts?links=none

A list of available calc scripts for the Sample Basic database are returned in JSON format.


This matches what is displayed in the UI. If the “links=none” parameter is removed, then the links to the different resources are returned.


To view the content of a calc script, a GET request is made to the URL format of:

https://<oac_essbase_instance>/essbase/rest/v1/applications/<appname>/databases/<dbname>/scripts/<scriptname>/content

Let us take the “CalcAll” script in the Sample Basic application.


A GET request to

https://<oac_essbase_instance>/essbase/rest/v1/applications/Sample/databases/Basic/scripts/CalcAll/content


will return the contents in JSON format, the response will include “\n” for any new lines in the script.


To edit a calc script, the content of the script is required in JSON format in the body of the request, and a PUT request is made to the following URL format:

https://<oac_essbase_instance>/essbase/rest/v1/applications/<appname>/databases/<dbname>/scripts/<scriptname>

This time I am going to have a PowerShell script that reads in the following file:


Basically, it is the same calc all script with an additional SET command.

Once the script is read, it is converted to JSON and the REST request is made.
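As a sketch, and assuming the body simply needs the script name and content (the local file path is a placeholder):

# Script to update and the local file holding the new content
$app = "Sample"
$db = "Basic"
$scriptName = "CalcAll"
$scriptFile = "C:\Scripts\CalcAll.csc"
$scriptURL = "https://<oac_essbase_instance>/essbase/rest/v1/applications/$app/databases/$db/scripts/$scriptName"
# Read the whole file as a single string and convert it to JSON for the request body
$content = Get-Content $scriptFile -Raw
$body = @{"name" = $scriptName; "content" = $content} | ConvertTo-Json
Invoke-RestMethod -Uri $scriptURL -Method Put -Headers $headers -Body $body -ContentType "application/json"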


With a GET request, the content of the script can be outputted, and it now includes the changes.


Creating a new calc script is very similar, a POST method request is made to the URL format:

https://<oac_essbase_instance>/essbase/rest/v1/applications/<appname>/databases/<dbname>/scripts

The body of the request must include the name of the script and the content in JSON.


The following example creates a new script with one line of content.


In the UI the new script is available.


To delete a script the DELETE method is used against the URL format:

https://<oac_essbase_instance>/essbase/rest/v1/applications/<appname>/databases/<dbname>/scripts/<scriptname>

So that covers managing scripts, now on to running them, which is done through Jobs.


The following jobs can be run from the UI.


The list on the left is the available jobs in the user managed version of OAC and the right is autonomous OAC, the difference is that autonomous OAC does not include the ability to run MaxL and Groovy scripts.

As I have mentioned before, the majority of what you can do in the UI can also be achieved with REST.

To run a calculation script in the UI, you just select “Run Calculation” from the list of jobs.


To run jobs with the REST API, a POST method request to the following URL format is required.

https://<oac_essbase_instance>/essbase/rest/v1/jobs

The body of the request includes the application, database, script name and job type.


Some of the other job types are maxl, mdxScript, groovy, dataload and dimbuild.

An example of the response from running a script is:


The response includes the current status of the job and a URL where you can keep checking the status.

https://<oac_essbase_instance>/essbase/rest/v1/jobs/<jobID>

Checking a job returns a similar response.


This is the equivalent of what would be displayed in the UI.


To replicate the list of jobs displayed in the UI with a script is an easy task.


As you can see, the start and end times are returned in Unix time format, these can be converted to a readable format with a simple function which I provided an example of in the second part of this series.


Running a job using a script can once again be achieved with very little code; the following example runs a calc script which creates a level0 export using the DATAEXPORT command.


You can either construct the URL to view job information with the job ID, or alternatively extract the URL from the response.


Now the status of the job can be repeatedly checked until the status changes from “In progress”.
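The polling part could be as simple as the following sketch; the job_ID and statusMessage property names are assumptions about the JSON returned when the job is submitted:

# $jobResponse is assumed to hold the JSON returned from submitting the job
$jobURL = "https://<oac_essbase_instance>/essbase/rest/v1/jobs/$($jobResponse.job_ID)"
do {
  # Wait a few seconds between checks
  Start-Sleep -Seconds 5
  $jobStatus = Invoke-RestMethod -Uri $jobURL -Method Get -Headers $headers
} while ($jobStatus.statusMessage -eq "In progress")
"Job finished with status: $($jobStatus.statusMessage)"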


To run other job types only changes to the body of the request are required.

For example, to run an MDX script:


A data load:


A dimension build:


To clear all data from a database:


Anyway, back to the calc script I ran earlier, the data export in the script produces a level0 extract file named "level0.txt", this can be viewed in the UI.


With the REST API, a list of files can be returned by making a GET request to the URL format:

https://<oac_essbase_instance>/essbase/rest/v1/files/applications/<appname>/<dbname>

To be able to list the files there is an additional header parameter required in the request, which is “Accept=application/json”

The following script returns all the calc script and text files in the Sample Basic database directory.


To download the file, the name of the file is required in the URL, the accept header parameter is not required.


The text file will then be available in the location specified in the script.


Another nice feature in OAC Essbase is the ability to view an audit trail of data either through the UI or Smart View.

To be able to do this a configuration setting “AUDITTRAIL” is required to be added at application level.


You will not be surprised to know that configuration settings can be added using the REST API with a post to the URL format:

https://<oac_essbase_instance>/essbase/rest/v1/files/applications/<appname>/configurations

The body should include the configuration setting name and value.


A http status code of 204 will be returned if the operation was successful.

If the application is started it will need restarting for the configuration to be applied, I went through stopping and starting applications in part 2 of this series.

Once the setting is in place, any data changes can be viewed in the UI at database level for the user logged in.


To return the audit data with the REST API then a GET method request can be made to the following URL format:

https://<oac_essbase_instance>/essbase/rest/v1/applications/<appname>/databases/<dbname>/audittrail/data


This will return the data in text format


To return the data in JSON format the accept header is required with “application/json”.


It doesn’t require much to replicate this functionality using scripting; the next example downloads the audit data to a CSV file.
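A sketch of the download; I am assuming the text that comes back is already delimited in a way that can be opened in Excel:

# Request the audit trail data in text format and write it out to a CSV file
$auditURL = "https://<oac_essbase_instance>/essbase/rest/v1/applications/Sample/databases/Basic/audittrail/data"
$audit = Invoke-RestMethod -Uri $auditURL -Method Get -Headers $headers
$audit | Out-File "auditdata.csv"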


The file can then be viewed in, say, Excel; the time will require converting to a readable format, which can be done either in the script or in Excel.


I am going to leave it here for this post, until next time….

EPM Cloud Data Management – Incremental File Adapter

New functionality has been added to Data Management in the EPM Cloud 18.09 release, and in this post I am going to cover the incremental file adapter.

The monthly cloud release documentation contains the following summary:

“Data Management now includes an incremental file adapter to optimize the data load process. The new Incremental adapter compares the source data file with a prior version of the source data file and identifies new or changed records and loads only this data set. The user has the option to sort the input source data file before making the comparison or they can provide a pre-sorted file for better performance.”

So basically, if you are loading files which contain a full data set, then before this release you would need to replace the existing data in Data Management and reload, which, depending on the size of the source file, can cause performance issues. With the new incremental file adapter, the previous file will be compared with the latest file and only the differences will be loaded, so in theory performance should be much improved.

To get a good understanding of the adapter it is worth going through a simple example.

There are some important points that should be considered when working with the adapter:
  • The source data file must be a delimited data file.
  • Data files used must contain a one-line header, which describes the delimited columns.
  • Both numeric and non-numeric data can be loaded.
  • Any deleted records between the two files are ignored. In this case, you have to handle the deleted records manually.
  • If the file is missing (or you change the last ID to a non-existent run), the load completes with an error.
  • Sort options determine the level of performance using this feature. Sorting increases the processing time. Pre-sorting the file makes the process faster.
  • Only single period data loads are supported for an incremental load. Multi-period loads are not supported.
  • Drill down is not supported for incremental loads since incremental files are loaded in Replace mode and only the last version of the file comparison is present in the staging table.
  • Copies of the source data file are archived for future comparison. Only the last 5 versions are retained. Files are retained for a maximum of 60 days. If no incremental load is performed for more than 60 days, then set the Last Process ID to 0 and perform the load.
Some of the above points will become clearer with the following example, so let’s get started.

The file I am going to begin with only contains a few records as this is just meant to be a simple example.


The file satisfies the first two points, it contains a header record which describes the columns, and it is delimited.

I first thought that the incremental file adapter would be configured from the source system area in Data Management, but this is not the case; it is created as a target application.

You need to go to the target application area and add a new data source.


Once “Data Source” has been selected, a new window will open where you can select “Incremental File” as the source system.


This will open the file select window where you can select an existing file or upload one.


The filename will be the basis for the name of the target application


If you intend to use multiple files with the same naming convention, then you should use the prefix option to make the target applications unique.

Selecting the file at this point is only to define the dimension details in the application details.


The dimension names are automatically created from the header record in the source file.

The steps to create an integration are the same as you would normally go through, so I am only going to highlight some of them.

In the import format creation, the newly created application should be available as a source.


The source columns of the file are then mapped to the target application.


The location set up is no different.


On to the data load rule, which is slightly different from a standard file-based load; the file is selected in the source filters section.


There are two other properties in the source filters; the incremental processing option has the following possible values:

Do not sort source file - Source file is compared as provided. This option assumes that the source file is generated in the same order each time. In this case, the system performs a file comparison, and then extracts the new and changed records. This option makes the incremental file load perform faster.

Sort source file - Source file is sorted before performing the file comparison for changes. In this option the source file is first sorted. The sorted file is then compared to the prior sorted version of this file. Sorting a large file consumes a lot of system resources and performs slower.

For this example, I am going to go with not sorting the source file, not that it would make much difference in my case because the file is so small. Once I get the opportunity I would like to test the performance of having to sort a large file.

Next is the last process ID, the first time you load a file this would be set to 0, when the load is run again the ID will be updated to match the process ID.

If there are no differences between the latest file and the previous file or the source file doesn’t exist, the ID will be kept as the last successful load ID.

To reload all the data the process ID should be set back to 0 and the file defined in the source filter will be considered the baseline file.

If you are going to change the sorting option then to stop data issues you will need to reset the process ID back to 0 and reload.


After selecting the source file, some simple pass through mappings are added, which means the file can be loaded.


The data load acts like any other, it is only when you take a look at the log you get an idea of what is going on.

INFO  [AIF]: Executing the following script: /u02/Oracle/Middleware/EPMSystem11R1/products/FinancialDataQuality/bin/plugin/IncrementalFile.py
INFO  [AIF]: ************************************************************************************
INFO  [AIF]: *      IncrementalFile.py Started for LoadID: 95
INFO  [AIF]: ************************************************************************************
DEBUG [AIF]: loadID: 95
DEBUG [AIF]: ruleID: 14
DEBUG [AIF]: LAST_PROCESS_ID: 0
DEBUG [AIF]: INCREMENTAL_PROCESSING_OPTION: NO_SORT_NO_MISSING
DEBUG [AIF]: Source file: /u03/inbox/inbox/incremental.csv
DEBUG [AIF]: Data file: /u03/inbox/data/appdata/sourcefiles/14/Vision_SOURCE_95.dat
DEBUG [AIF]: Deleting old source files from /u03/inbox/data/appdata/sourcefiles/14
DEBUG [AIF]: impGroupKey: MT_Incremental_IF
DEBUG [AIF]: Import format delimiter = ,
DEBUG [AIF]: Instantiated FileDiffUtils.
DEBUG [AIF]: Full load.  No incremental file.
DEBUG [AIF]: dataFileName: Vision_SOURCE_95.dat

It looks like a Python/Jython script is handling the incremental files and differences; the file is moved into a source file directory which is numbered by the load rule ID, in this case 14, and the file format is <TARGET_APP>_SOURCE_<LOADID>.dat

As the last process ID is 0 then it is considered a full load and no incremental differences are required and the file is loaded.

If you go back into the load rule you can see the last process ID has been updated to reflect the load ID of the last import.


Now, time to add additional records to the same file. I selected not to sort the source file in the load rule, so I made sure the records were sorted.


Run the process again and this time in the logs you can see that a difference file has been created and loaded, the format of the difference file is <TARGET_APP>_DIFF_<LOADID>.dat

DEBUG [AIF]: Source file: /u03/inbox/inbox/incremental.csv
DEBUG [AIF]: Data file: /u03/inbox/data/appdata/sourcefiles/14/Vision_SOURCE_96.dat
DEBUG [AIF]: Deleting old source files from /u03/inbox/data/appdata/sourcefiles/14
DEBUG [AIF]: impGroupKey: MT_Incremental_IF
DEBUG [AIF]: Import format delimiter = ,
DEBUG [AIF]: Instantiated FileDiffUtils.
DEBUG [AIF]: Diff file: /u03/inbox/data/Vision_DIFF_96.dat
DEBUG [AIF]: Header line: Account,Entity,Product,Amount
DEBUG [AIF]: dataFileName: Vision_DIFF_96.dat

Back in the load rule the last process ID has been updated.


In the workbench I spotted an issue.


There was one record that was from the previous load, in theory this should not have been loaded again.

I was thinking maybe it was the way I sorted the file, so I decided to run the process again but this time set the processing option to sort the file.


I reset the last process ID back to zero and uploaded my original file and ran the process again.


In the logs you can see entries to show that the source file is being sorted.

DEBUG [AIF]: Sorting data file started.
DEBUG [AIF]: Sorting data file completed.

Uploaded the incremental file and ran the process.


In the logs you can see the source file is being sorted and a difference file created.

DEBUG [AIF]: Sorting data file started.
DEBUG [AIF]: Sorting data file completed.
DEBUG [AIF]: Header line: Account,Entity,Product,Amount
DEBUG [AIF]: dataFileName: Vision_DIFF_98.dat

I was hoping that the issue would be fixed and the record from the full data load would not be included in the incremental load.


Unfortunately, it is still loading the record, so unless I am missing something there is a problem with the script that is working out the differences.

I thought I would have a look at the files that have been generated by Data Management, the files can be downloaded using EPM Automate or REST.


The files were sorted in exactly the same way as in my original testing so at least I know I got that correct, though the difference file is not correct as it contains a record from the first full load.


I found that the issue did not occur if I created the incremental file by appending the new records to the end of the original file.


Then set the processing option not to sort the source file.


I reset the last process ID back to zero and reloaded both files; this time the incremental data was correct and did not contain the record from the original load.


Anyway, moving on, if I try to load the file again with no changes then the difference file will contain no records.


The process will be displayed as a warning.


No records will exist in the workbench.


Going back to one of the earlier points:

“Only the last 5 versions are retained. Files are retained for a maximum of 60 days.”

You will see an entry in the log when you go over 5 versions.

DEBUG [AIF]: Deleting old source files from /u03/inbox/data/appdata/sourcefiles/14
DEBUG [AIF]: Deleted file: /u03/inbox/data/appdata/sourcefiles/14/Vision_SOURCE_95.dat

For each load rule there will be a maximum of 5 archived source files on the file system.


It is still possible to run the data load rule with EPM Automate or REST and pass in the filename.

First the file can be uploaded.


Next the data load rule can be executed by passing in the filename.
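For illustration, the pair of commands would look something like the following; the rule name, period and load methods are only examples, so substitute your own:

epmautomate uploadfile sep-18-actuals.csv
epmautomate rundatarule Incremental_Load Sep-18 Sep-18 REPLACE STORE_DATA sep-18-actuals.csv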


In the process logs you will see an entry for EPM Automate and the filename.

DEBUG [AIF]: epmAutomateFileName: sep-18-actuals.csv
DEBUG [AIF]: Source file: /u03/inbox/inbox/sep-18-actuals.csv

I did begin to have a look at loading larger files to check out the performance when sorting is enabled, but I seemed to hit more problems.

I started with a file like in my previous example but this time containing 50,000 records. I then loaded an incremental file containing 100,000 records which included the 50,000 from the original file.

For some reason 199,999 records were loaded to Data Management.

In the log I could see:

DEBUG [AIF]: dataFileName: Vision_DIFF_116.dat

HEADER_ROW] Account,Entity,Product,Amount
INFO  [AIF]: EPMFDM-140274:Message - [TC] - [Amount=NN] 99990,4000,1200,9299Account,Entity,Product,Amount
INFO  [AIF]: EPMFDM-140274:Message - Rows Loaded: 199999
Rows Rejected: 1

I downloaded the file containing the differences and the file contained 200,001 records.


It seems to have created the difference file by joining the incremental file to itself.


I am concerned about some of these issues I have seen, it may just be me but if you are going to use the adapter functionality then please do carry out testing first.

I am going to leave it there for this introduction to the incremental file adapter, if I find any more details about the problems I have experienced then I will update this post.


EPM Cloud – Managing users with EPM Automate and REST API

New functionality has been added in the EPM Cloud 18.09 release to provide the ability to manage users and roles at an identity domain level with either EPM Automate or the REST API. In this post I am going to cover this functionality starting off with EPM Automate.

Four new commands have been added to EPM Automate and these are:
  • addusers – Creates new users in the identity domain based on the contents of a comma separated file.
  • removeusers – Deletes identity domain accounts based on the contents of a comma separated file.
  • assignrole – Assigns an identity domain role to all users that are contained in a comma separated file.
  • unassignrole – Unassigns an identity domain role from all users that are contained in a comma separated file.
Please note, to be able to use these commands you will need to be logged in with an account that has the “Identity Domain Administrator” role.

The comma separated files have to be uploaded to the cloud instance first using the “uploadfile” command before you can use the new commands. I would have preferred it if it could have been done in a single command without having to upload files, but unfortunately that is not the way it has been developed.

I am quickly going to go through each command and provide an example.

Let’s start off with adding new users to the identity domain.

Before you can use the “addusers” command, you will need a file containing the new user information in the correct format.

The file can contain as many users as you would like to add; for demo purposes I am just going to be adding one user.

The file is required to be in the following format:


The format is the same as if you were importing a batch of users through Oracle Cloud My Services.

Now that I have the file ready I can upload it using EPM Automate. I have assumed the user has been logged in and a file with the same name does not already exist in the cloud instance; you can easily use the “deletefile” command first to remove it if needed.


The file will then be available in the “Inbox/Outbox Explorer”


The format to add users with EPM Automate is:

epmautomate addusers <FILE_NAME> <userPassword=PASSWORD> <resetPassword=true|false>

FILE_NAME is the name of the comma separated file containing the new user information which I just uploaded.

userPassword is a default password that is assigned to the new users. It will need to meet the minimal password requirements for identity domain passwords.

resetPassword defines whether the new users must change the password the first time they log in. I recommend this is always set to true.

An example to create the users contained in the file that was just uploaded is:
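Assuming the uploaded file was named newusers.csv, the command would be along these lines, with a placeholder default password:

epmautomate addusers newusers.csv userPassword=Temp0rary_Pwd1 resetPassword=true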


The response from issuing the command will include how many new users were processed, including the number of successful and failed user creations.

If successful, the user should then be available in the identity domain. Oracle cloud should also send out an email to the new user with their account information.


As I set the reset password parameter to true, the first time the user logs in they will be redirected to the “Identity Self Service” and be required to change their password.


Now that the user has been created we can assign a new identity domain role.

The format for the EPM Automate command is:

epmautomate assignrole <FILE_NAME> <ROLE>

FILE_NAME is the name of a comma separated file containing the user login for the users that you want to assign an identity domain role to.

ROLE is one of the predefined identity domain roles which are:
  • Service Administrator
  • Power User
  • User
  • Viewer
The file is required to be in the following format:


I hoped I could use the same file used for creating the users as it also contains the “User Login” information, but when I tried that I received the following EPM Automate error:

EPMAT-1:Failed to assign role for users. File does not have valid header. Please provide a valid header in file.

Before being able to use the “assignrole” command the above file was uploaded using EPM Automate.


After the file has been uploaded, the “assignrole” command can be executed, in my example I am assigning the “Power User” role to the user in the “assignUsers.csv” file.
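Based on the command format above, that would be along the lines of:

epmautomate assignrole assignUsers.csv "Power User"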


The response is the same as when adding users, as the command was successful the user has been assigned the role.


Unassigning a role is no different from assigning one; the format for the command is:

epmautomate unassignrole <FILE_NAME> <ROLE>

I don’t feel I need to explain what the parameters are this time.

In the following example I am using the same file I uploaded for assigning roles to users, this time I am going to unassign the “Power User” role.
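So the command becomes something like:

epmautomate unassignrole assignUsers.csv "Power User"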


The EPM Automate command completed successfully and a quick check in “My Services” confirms the role has been removed.


On to the final command to delete users from the identity domain.

The format for the EPM Automate command is:

epmautomate removeusers FILE_NAME

The filename should contain all the users you want to remove, and the file format is the same as when assigning/unassigning roles.


In the following example using EPM Automate, I upload the file containing the users to remove and then remove them with the “removeusers” command.
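The commands would be along these lines (the file name is just a placeholder):

epmautomate uploadfile removeUsers.csv
epmautomate removeusers removeUsers.csv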


Back in “My Services” the user has been deleted.


As the EPM Automate utility is built on top of the REST API then all the above commands will be available using REST.

So let us repeat the available functionality using a REST client. There are lots of free clients out there so pick the one you prefer. As usual I will be using the Boomerang REST client for Chrome.

First, I am going to delete the CSV file in the cloud instance containing the users and then upload a new one.

The REST URL format to delete files is:

https://<cloud_instance>/interop/rest/11.1.2.3.600/applicationsnapshots/<filename>

A DELETE method is used, so to delete the “newusers.csv” file a request would be made to the following:
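If you would rather script the call than use a REST client, a minimal PowerShell sketch of the delete request could look like the following; the instance URL and credentials are placeholders, and bear in mind the username normally needs the identity domain prefix:

# build the basic authentication header from the credentials
$instance = "https://cloud_instance"
$creds = "domain.username:password"
$header = @{"Authorization" = "Basic " + [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($creds))}
# DELETE request to remove the file from the cloud instance
$response = Invoke-RestMethod -Uri "$instance/interop/rest/11.1.2.3.600/applicationsnapshots/newusers.csv" -Method Delete -Headers $header
# a status of 0 means the file was deleted
$response.status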


If the deletion was successful a status of 0 will be returned.


If the file does not exist, a status of 8 will be returned and an error message.


Now to upload a new file containing the users to add in the identity domain. I have covered uploading files using the REST API in the past which you can read about here, so there is no need for me to go into much detail again.

The REST URL to upload files is:

https://<cloud_instance>/interop/rest/11.1.2.3.600/applicationsnapshots/<filename>/contents?q={"isLast":<true/false>,"chunkSize":<size_in_bytes>,"isFirst":<true/false>}

A POST method is required, for example:



In the body of the post I added the user information for the file.


You can include as many users as you want to add.

The request header content type equals “application/octet-stream”
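A rough PowerShell equivalent of the upload, assuming the file is small enough to send in a single chunk and reusing the header and instance variables from the earlier sketch (the q parameter may need URL encoding depending on your client):

# read the file and build the upload URL with the chunk parameters
$data = [System.IO.File]::ReadAllBytes("C:\Files\newusers.csv")
$upUrl = "$instance/interop/rest/11.1.2.3.600/applicationsnapshots/newusers.csv/contents?q={""isLast"":true,""chunkSize"":$($data.Length),""isFirst"":true}"
# POST the file content as an octet-stream
$response = Invoke-RestMethod -Uri $upUrl -Method Post -Headers $header -Body $data -ContentType "application/octet-stream"
$response.status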

A response status of 0 will be returned if the file was uploaded.


If the file already exists, you would receive something like:


The file is now available in the “Inbox/Outbox Explorer”.


On to adding the user contained in the file using the REST API.

The URL format for managing users is:

https://<cloud_instance>/interop/rest/security/v1/users

A POST method is required, and the body of the request should contain the filename and the default and reset password values. These are the same parameters which are used with EPM Automate commands.
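The body of the request would be along the following lines; the parameter names are my assumption based on the EPM Automate options, so check the REST API documentation for your release:

{
"filename":"newusers.csv",
"userpassword":"Welcome123!",
"resetpassword":"true"
}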


It is a shame that the user information could not have been included in the body of the request instead of having to upload a file.

The response will contain job information for adding new users. It includes a URL which can be accessed to check the job status.


A GET request can be made to keep checking the job status until it completes.


A status of 0 means the operation was successful; just like with EPM Automate, details are included to show how many new user creations were processed and how many succeeded or failed.

As the job was successful the new user has been added and is available in “My Services”


We can now move on to assigning a role for the new user.

I uploaded the following file:


In my example the file only contains a single user but it can contain as many as you want to assign a role to.

To assign a role to the users contained in a file the same URL format is required as when adding users.

A PUT method is required, and the body of the request should include the filename, the role name and a job type of “ASSIGN_ROLE”


This time I am going to assign the “User” identity domain role.
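For reference, the body of the request would be something like this; again the parameter names are my assumption, mirroring the description above:

{
"filename":"assignUsers.csv",
"jobtype":"ASSIGN_ROLE",
"rolename":"User"
}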

The current job status for assigning the role is returned, once again it includes a URL to check the job status.


The job status can then be checked until it completes.


As the operation was successful, the “User” role has been assigned to the user contained in the file.


To unassign roles is very similar to assigning. A PUT method is required, and the body of the request should contain the filename containing the users that the role should be unassigned from, the role name and a job type of “UNASSIGN_ROLE”


The current job status is returned in the response.


The job status can be checked until it completes.


As the job was successful the “User” role has been removed for the user contained in the specified file.


To remove users, a file should be uploaded containing the user login details of the users to remove.

I uploaded the following file.


A DELETE method is required, and the URL should include the filename containing the users to remove.


No surprise that job status information is returned in the response.


The job status can be checked until it completes.


The response confirms the user was removed. A look in “My Services” confirms the user has been removed.


I was going to include an example using scripting, but I think you should get the idea. I have covered scripting with the REST API many times in the past so just have a look at my previous posts if you are unclear.

Automating data flows between EPM Cloud and OAC – Part 1

In past blogs I have covered automation in EPM Cloud using the REST API in detail. Recently I have blogged comprehensively on the Essbase REST API in OAC, so I thought I would combine these and go through an example of automating the process of moving data between EPM Cloud and OAC Essbase.

The example will be based on extracting forecast data from PBCS using Data Management, downloading the data and then loading it to an OAC Essbase database. I will provide an option of downloading data directly to OAC from PBCS for those who have a customer managed OAC instance; alternatively, for autonomous OAC the data can be downloaded from PBCS to a client/server before loading to Essbase.

I am going to break this into two parts: the first part covers the setup and the manual steps of the process, and the second part gets into the detail of automating the full process with the REST API and scripting.

Before I start I would like to point out this is not the only way to achieve the objective, and I am not stating that this is the way it should be done; it is just an example to provide an idea of what is possible.

To start out with I am going to extract forecast data from PBCS; here is a sample of the data that will be extracted:


To extract the data, I am going to use Data Management; once the integration has been defined I can add automation to extract the data using the REST API.

As it is EPM Cloud, I will need to extract the data to a file and this can be achieved by creating a custom target application in Data Management.


The dimensions have been created to match those of the OAC Essbase database, I could have included scenario but that is always going to be static so can be handled on the Essbase side.


There are slight differences between the format of the Year in PBCS


and that in the Essbase database.


Aliases could be used but I want to provide an example of how the difference can be handled with period mappings in Data Management.


This will mean any data against, say FY19, in PBCS will be mapped to 2019 in the target output file.

If there are any differences between other members these can be handled in data load mappings in DM.

In the DM data load rule, source filters are created to define the data that will be extracted


In the target options of the file a fixed filename has been added; this is just to make the process of downloading the file easier. If this is not done, you would need to either capture the process ID from the REST response to generate the filename or read the filename from the jobs REST response. Both methods produce the same outcome but, in this example, I am going for the simpler option.


Before running the integration, I will need to know which start and end period to select.

For the automated process I am going to pick this up from a substitution variable in Essbase, it would be the same concept if the variable is held in PBCS as both have a REST resource available to extract the information.


The data will be extracted for a full year, so based on the above sub var, the start period would be Jan-19 and the end period Dec-19


Running the rule will extract the data from PBCS, map and then produce an output file.


The rule ran successfully so the exported file will be available in the inbox/outbox explorer.


If I download the file you can see the format of the exported data.


When I cover the automation in the next part I will provide two options, the first one will download the data file directly to OAC from PBCS and then load the data, the second will download the file from PBCS to a machine running the automation script and then stream load it to Essbase.

As this post is about manually going through the process, I have downloaded the file from PBCS and uploaded it to OAC Essbase.


The file has been uploaded to the Essbase database directory.


Now an Essbase data load rule is required to load the above file.

A new rule was created, and the uploaded data file selected.


The columns in the data file were mapped to the corresponding dimensions.


The data is always loaded to the forecast scenario member, which is not contained in the file, so this was added to the data source information.


As I mentioned earlier I could have easily included scenario in the data export file by adding the dimension to the target application in Data Management, it is up to you to decide which method you prefer.

Once created it will be available from the scripts tab and under rules.


To run the rule, head over to jobs in the user interface and select “Load Data”


The application, database, rule and data file can then be selected.


The status of the data load can then be checked.


As this is a hybrid database there is no need to run a calculation script to aggregate the data; if aggregations or calcs were required, you could simply add them into the process.


A retrieve on the data confirms the process from extracting data from PBCS to OAC Essbase has been successful.


You could also apply this process to extracting data from OAC and loading it to EPM Cloud: one way to do this could be to run an Essbase data export script, upload the export file to EPM Cloud, and then run a Data Management rule to map and load to the target application.

We have a process in place, but nobody wants to live in a manual world, so it is time to streamline with automation which I will cover in detail in the next part. Stay tuned!

EPM Cloud - Recent additions to EPM Automate and REST API

In the EPM Cloud 18.10 release a few additional commands were added to the EPM Automate utility; these are also available through the REST API as the utility is built on top of the API.

An annoyance for me with EPM Automate and the REST API has been not being able to rename a snapshot, even though it has always been possible through the web UI.


Not being able to rename outside of the UI made it difficult to automate archiving the daily snapshot in the cloud instance before the next snapshot overwrote the previous one. You could download, rename and upload, but this overcomplicates what should have been a simple rename.

With the 18.10 release it is now possible to rename a snapshot with a new EPM Automate command.

To rename a snapshot, the syntax for the utility is:

epmautomate renamesnapshot <existing snapshot name> <new snapshot name>

Using EPM Automate and a script, it is simple to rename the snapshot, in the following example the daily snapshot is renamed to include the current date.
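A minimal sketch of such a script is shown below; the EPM Automate path, login parameters and snapshot name are placeholders and will depend on your instance:

# generate today's date to append to the snapshot name
$date = Get-Date -Format "dd-MM-yy"
$epmautomate = "E:\EPMAutomate\bin\epmautomate.bat"
# $user, $passfile, $url and $domain are assumed to be set earlier in the script
& $epmautomate login $user $passfile $url $domain
# rename the daily snapshot so the next maintenance does not overwrite it
& $epmautomate renamesnapshot "Artifact Snapshot" "Artifact Snapshot $date"
& $epmautomate logout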


This means the snapshot is now archived and the next daily maintenance will not overwrite it.


Please note though, there is a retention period for snapshots, which currently stands at 60 days, and a default maximum storage size of 150GB. If this is exceeded then snapshots are removed, oldest first, to bring the size back to 150GB.

The documentation does not yet provide details on how to rename a snapshot using the REST API, but I am sure it will be updated in the near future.

Not to worry, I have worked it out and the format to rename a snapshot using the REST API is:


If the rename is successful, a status of 0 will be returned.


In the UI you will see the snapshot has been renamed.


If the rename was not successful, a status that is not equal to 0 will be returned and an error message will be available in the details parameter.


The functionality will only rename snapshots and does not work on other file types.

It is an easy task to script the renaming of a snapshot using the REST API. In the following example I am going to log into a test instance and rename the daily snapshot, then copy the daily snapshot from the production instance to the test instance. This means the production application is ready to be restored to the test environment if needed, and the test daily snapshot has also been archived.


The above section of the script renames the test snapshot, the next section copies the production snapshot to the test instance.

When calling the REST API to copy a snapshot, a URL is returned which allows you to keep checking the status of the copy until it completes.


Now in the test instance, the daily snapshot has been archived and contains a copy of the production snapshot.

 

It is also possible to copy files between EPM Cloud instances using the EPM Automate command “copyfilefrominstance”. This command was introduced in the 18.07 release and the format for the command is:

epmautomate copyfilefrominstance <source_filename> <username> <password_file> <source_url> <source_domain> <target_filename>

To achieve this using the REST API is very similar to my previous copy snapshot example.

Say I wanted to copy a file from the test instance to the production one and rename the file.


An example script to do this:


The file has been copied to the production instance and renamed.


When the 18.10 monthly readiness document was first published it included details about another EPM Automate command called “executejob”

“executejob, which enables you to run any job type defined in planning, consolidation and close, or tax reporting applications”

This was subsequently removed from the document, but the command does exist in the utility.


The command just looks to bypass having to use different commands to run jobs, so instead of having to use commands such as “refreshcube”, “runbusinessrule” or “runplantypemap” you can just run “executejob” with the correct job type and name.

For example, if I create a new refresh database job and name it “Refresh”


The job type name for database refresh is “CUBE_REFRESH” so to run the refresh job with EPM Automate you could use the following:
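Something like the following should do it, assuming the job has been named “Refresh” as above:

epmautomate executejob CUBE_REFRESH Refresh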


The command is really replicating what has already been available in the REST API for running jobs.

The current list of job types is:

  • RULES
  • RULESET
  • PLAN_TYPE_MAP
  • IMPORT_DATA
  • EXPORT_DATA
  • EXPORT_METADATA
  • IMPORT_METADATA
  • CUBE_REFRESH
  • CLEAR_CUBE


I am not going to go into detail about the REST API as I have already covered it previously.

The format for the REST API is as follows:


The response will include details of the job and a URL that can be used to keep checking the status.


I was really hoping that the functionality was going to allow any job that is available through the scheduler to be run, for instance “Restructure Cube” or “Administration Mode” but it looks like it is only for jobs that can be created. Hopefully that is one for the future.

In the 18.05 release a new EPM Automate command appeared called “runDailyMaintenance”, which allows you to run the daily maintenance process without having to wait for the maintenance window. This is useful if new patches are available and you don’t want to wait to apply them. In the 18.10 release the command includes a new parameter which provides the functionality to skip the next daily maintenance process.

The format for the command is:

epmautomate rundailymaintenance skipNext=true|false

The following example will run the maintenance process and skip the next scheduled one:
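For example:

epmautomate rundailymaintenance -f skipNext=true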


I included the -f to bypass the prompted message:

“Are you sure you want to run daily maintenance (yes/no): no?[Press Enter]”


The REST API documentation does not currently have information on the command but as the EPM Automate utility is built on top of the API, the functionality is available.

The format requires a POST method and the body of the post to include the skipNext parameter.


The response will include a URL to check the status of the maintenance process.


When the process has completed, a status of 0 will be returned.


It is worth pointing out that as part of the maintenance steps, the web application service is restarted so you will not be able to connect to the REST API to check the status while this is happening.

Another piece of functionality which has been available through the REST API for a long time, but not EPM Automate, is the ability to return or set the maintenance window time.

To return the maintenance time, a GET method is required with the following URL format:


The “amwTime” (Automated Maintenance Window Time) is the scheduled hour for the maintenance process, so it will be between 0 and 23.

To update the scheduled time a PUT method is required and the URL requires a parameter called “StartTime”.


If the update was successful a status of 0 will be returned.

You can then check the maintenance time has been updated.


The following script checks the current maintenance time and updates it to 03:00am
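The following is only a sketch of that logic; $maintUrl is assumed to point at the maintenance window resource described above and the property names may differ from the actual response payload:

# return the current maintenance hour
$current = Invoke-RestMethod -Uri $maintUrl -Method Get -Headers $header
Write-Host "Current maintenance hour:" $current.amwTime
# update the scheduled hour to 3am using the StartTime parameter
$update = Invoke-RestMethod -Uri "$($maintUrl)?StartTime=3" -Method Put -Headers $header
Write-Host "Update status:" $update.status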


I did notice a problem: even though the REST API is setting the time, it is not being reflected in the UI.


It looks like a bug to me. Anyway, until next time…

Automating data flows between EPM Cloud and OAC – Part 2

In the first part, I went through an example of extracting forecast data from PBCS using Data Management, downloading the data file and then loading this to an OAC Essbase database. All the steps in the example were manual, so in this post I am going to add some automation using REST APIs and scripting.

I would recommend reading through the first post if you have not already, as I will be referring to it and this post will probably not make much sense unless you have read it.

As always, I am going to stress this is not the only way to go about automating the process and is only to provide an idea as to what can be achieved.

I will provide examples of the REST API using a free REST client and the scripting will be mainly with PowerShell, though you can achieve the same results with pretty much any scripting language. There will also be a little bit of Groovy thrown into the mix for those that are running a user managed version of OAC vs autonomous.

A summary of the process that will be automated is:
  • Extract Forecast year substitution variable from OAC Essbase.
  • Transform variable into the start/end period for an EPM Cloud Data Management data load rule.
  • Run a Data Management data load rule to extract planning forecast data, map and then generate a file.
  • Download data file from Data Management (Groovy example, downloads directly from DM to OAC).
  • Run an Essbase Load rule to load data from file.
It is possible to run the whole process directly from OAC using Groovy, but I am trying to provide options for autonomous OAC as well. Also, I didn’t really want to show one big Groovy script because that is not very interesting for a blog post.

Before I start out, it is worth pointing out that I am going to be using the same forecast year sub var, Data Management load rule and Essbase data load rule that I covered in the last post.

For the first part of the process, I want to extract the Essbase forecast sub var. This has been created at application level.


To extract using the REST API, a GET request is made to the following URL format:

https://<oac_instance>/essbase/rest/v1/applications/<app_name>/variables/<sub_var_name>

In my case this would equate to:


The JSON response includes the name and value of the sub var.


For Data Management I need to convert this to the start and end period.


This is where a script comes into play and can automate the process:
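A minimal PowerShell sketch of this step could be the following; the OAC URL, application name, variable name and the property name on the response are assumptions, and $essHeader is a basic authentication header built in the same way as the earlier examples:

# GET the forecast year sub var from the OAC Essbase REST API
$subVarUrl = "https://oac_instance/essbase/rest/v1/applications/Demo/variables/ForecastYear"
$subVar = Invoke-RestMethod -Uri $subVarUrl -Method Get -Headers $essHeader
# the value is returned in the format FY19, convert it to Jan-19 / Dec-19 for Data Management
$year = $subVar.value.Substring(2,2)
$startPeriod = "Jan-$year"
$endPeriod = "Dec-$year"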


Now that the variable has been extracted and transformed, the Data Management load rule can be executed.

The idea is to execute the rule with the following values:


I have covered this in the past but to run a rule using the REST API, a POST method is required, and the body of the request should include the above values in JSON format.
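The body follows the standard Data Management jobs format and would be something like the following (the rule name is a placeholder):

{
"jobType":"DATARULE",
"jobName":"OAC_FCST_EXPORT",
"startPeriod":"Jan-19",
"endPeriod":"Dec-19",
"importMode":"REPLACE",
"exportMode":"STORE_DATA",
"fileName":""
}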


The response includes the job ID (process ID), current job status and a URL to keep checking the status.


The job status can then be checked until it completes.


Time to convert this into a script which will execute the rule and store the response.


The rule has been executed and the response stored, now it is time to keep checking the status until it completes.
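A sketch of the polling logic is shown below; $runResponse is assumed to hold the response from executing the rule, and the property names are assumptions based on the response described above:

# extract the status check URL from the execution response
$statusUrl = ($runResponse.links | Where-Object {$_.rel -eq "self"}).href
$jobDetails = $runResponse
# a status of -1 means the rule is still running
while ($jobDetails.status -eq -1) {
  Start-Sleep -Seconds 15
  $jobDetails = Invoke-RestMethod -Uri $statusUrl -Method Get -Headers $header
}
Write-Host "Data load rule finished with status:" $jobDetails.jobStatus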


In the Data Management target options of the rule, a static filename has been set.


This means the file is available for download using the defined filename and from a location accessible using the REST API.


A GET request can be made to the following URL format which includes the filename.


This is where my example splits: using a Groovy script to download the file directly to the OAC instance is an option available to user managed OAC instances.

Alternatively, for an autonomous instance, which I will cover first, you can download the file to a staging location; an example to do this could be:
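For instance, a PowerShell sketch that pulls the file down to a staging folder; the download URL follows the format shown above and the fixed filename set in the rule, both of which are placeholders here:

# download the exported data file from the EPM Cloud outbox to a local staging folder
$downloadUrl = "$instance/interop/rest/11.1.2.3.600/applicationsnapshots/outbox/FCSTDATA.dat/contents"
Invoke-WebRequest -Uri $downloadUrl -Method Get -Headers $header -OutFile "E:\Staging\FCSTDATA.dat"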


The file will be available to load to OAC.


There are a couple of options available: you could upload the file to the OAC instance and then run a data load rule, or use the data load stream option.

The streaming option allows you to run an Essbase data load rule but stream in the data, removing the requirement to upload the file first.

To stream data using the REST API you must use a POST method to indicate you want to start a stream data load. The body of the post should include the Essbase load rule name.


The response will include a URL to post the data to.


The data can then be streamed using the returned URL.


The response will include URLs to either stream more data or end the data load rule.


To end the data load, a DELETE method is required to the same URL.


If there were no errors, a successful message should be returned.

Now let’s update the data to include an invalid member and run the data load again.


The response will indicate there were records rejected and the filename containing the errors.


This file will be available in the Essbase database directory.


An example of the error file is:


This file could be downloaded using the REST API if required.

An example of automating the stream data load method using a script could be:


I did have some fun trying to get the script to work as it needs to keep a web session active between the start and end of the streaming. I had to use “Invoke-WebRequest” where I generated a session variable and then used this in subsequent REST calls.

If you are interested in what is happening behind the scenes with the data load streaming method, here is an excerpt from the Essbase application log.

[DBNAME: GL] Received Command [StreamDataload] from user [john.goodwin]
[DBNAME: GL] Reading Rules From Rule Object For Database [GL]
[DBNAME: GL] Parallel dataload enabled: [2] block prepare threads, [1] block write threads.
[DBNAME: GL] Data Load Updated [21739] cells
[DBNAME: GL] [EXEC_TIME: 0.82] Data load completed successfully
Clear Active on User [john.goodwin] Instance [1]


If you don’t want to go down the streaming route, you could upload the file to the Essbase database directory using the REST API.

A PUT method is required to the following URL format, which includes the name of the file and whether to overwrite it if it already exists:


This can simply be converted into a script.
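A sketch of that script; $essFileUrl is assumed to follow the URL format shown above (the file name plus the overwrite flag), $essHeader is the OAC basic authentication header, and the content type may need adjusting for your instance:

# PUT the data file into the Essbase database directory
Invoke-RestMethod -Uri $essFileUrl -Method Put -Headers $essHeader -InFile "E:\Staging\FCSTDATA.dat" -ContentType "application/octet-stream"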


After uploading you can then run a load job which I will cover shortly.

Going back to the Groovy option: if available, it could be used to carry out all the steps of the process to move data between EPM Cloud and OAC. As an example, I am going to use it for downloading the data file from EPM Cloud directly to the Essbase database directory in OAC.

In the Groovy script, variables are defined such as the EPM Cloud URL for downloading files, the data filename and the location in OAC to download the file to. The user credentials are encoded to create the basic authentication header for the REST call.


A method is then called to make the REST request and download the file.


The script should be saved with an extension of “gsh” and then uploaded to OAC.


The script can be run from the jobs in the UI.


The application/database and script can then be selected, and the Groovy script will be run.


One of the disadvantages at the moment with Groovy in OAC is that parameters cannot yet be passed into the script when running it as a job.

After running the job, an output file will be available that contains the output of the “println” method in the script.


As the script was successful, the output file contains the following:


As this blog is all about automation we can run the Groovy script with the REST API.

A POST method is required to the jobs URL, the Groovy job type and script to run is included in JSON format in the body of the post.


The response includes detailed information about the job and a URL to keep checking the job status.


Once again this can be simply converted into a script to automate the process.


With a GET method, the status of the job can be checked with the jobs URL that contains the job ID.


A script can automatically keep checking the job status; this is a similar concept to the earlier example of checking the status of a Data Management job.


The file will have been downloaded directly from EPM Cloud to the Essbase database directory in OAC.


Finally, on to running the Essbase load rule to load the data contained in the file.

Using the REST API, it is the same call to the jobs URL. The only difference is the job type is “dataload” and parameters define the load rule and the data file.


The information returned in the response is similar to running any type of job.


The status of the job can be checked until it completes.


The beauty of running a data load job compared to streaming data is that the response includes the number of records that were processed and rejected.

This part of the process does not take much effort to convert into a script.


Now that the full process has been automated and run, the data from EPM Cloud is available in OAC Essbase.


With scripting you can also automate the opposite process of extracting data from Essbase and then loading to EPM Cloud.

Once a script is in place it can be reused across different data flows by just changing variables.

If you are interested in understanding in more detail about how automation can help, please feel free to get in touch.

EPM Cloud – Managing users with EPM Automate and REST API update

In the 18.09 release of EPM Cloud new functionality was added to provide the ability to manage users and roles at an identity domain level with EPM Automate or REST API. I covered this functionality in detail in a previous post which you can read all about here.

The EPM Automate commands added in that release were:
  • addusers – Creates new users in the identity domain based on the contents of a comma separated file.
  • removeusers – Deletes identity domain accounts based on the contents of a comma separated file.
  • assignrole – Assigns an identity domain role to all users that are contained in a comma separated file.
  • unassignrole – Unassigns an identity domain role from all users that are contained in a comma separated file.
Since writing the post, I have been asked a few times if it is possible to add users to a group. Well, from the 18.11 release this has been made possible and there are two new commands available for EPM Automate.
  • adduserstogroup – Adds a batch of users contained in a file to an existing group in Access Control.
  • removeusersfromgroup – Removes a batch of users contained in a file from an available group in Access Control.
In order to use the commands, a file containing a list of users has to be uploaded to EPM Cloud. As you would expect, the functionality is also available through the REST API.

In this post I will quickly go through the commands, first with EPM Automate and then with the REST API.

Let us start with the “adduserstogroup” command.

The syntax for the EPM Automate command is:

epmautomate addUsersToGroup FILENAME GROUPNAME

Where FILENAME is a file containing a list of users that has already been uploaded to EPM Cloud, and GROUPNAME is the group you want to assign the users in the file to.

The users will need to exist in the identity domain; if they don’t, you can add them with the “addusers” command. The users will also need to have an identity domain role assigned, which can be achieved with the “assignrole” command.

I would have preferred it if you could specify the user and the group they should be assigned to in the file instead of only being able to assign a single group at a time.

I will go through an example to add the following user to a group.


The user has already been assigned an identity domain role.


The group to assign the user to already exists in “Access Control”.


The group does not currently have any users assigned to it.


To be able to use the EPM Automate command, you need a file containing the list of users to assign to a group.


Obviously you can include as many users as you like in the file.

The file must have the header “User Login”, otherwise you will generate an error when trying to use the command.


Once the file has been produced it has to be uploaded to EPM Cloud, this can be achieved with the EPM Automate “uploadfile” command.


The file will then be available from the application “Inbox/Outbox Explorer”.


Now that the file exists, the “adduserstogroup” command can be executed to assign the users in the file to the group specified in the command.
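For example (the file and group names here are just placeholders):

epmautomate adduserstogroup groupUsers.csv "Test_Group"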


The response from issuing the command will include how many users were processed, including the number of successful and failed group assignments.

Checking the group in "Access Control" confirms the user has been successfully assigned.


Removing users from a group is pretty much the same concept; the only difference is that this time the command is “removeusersfromgroup”. I am going to use the same user file and remove the users from the same group.


The output will once again highlight how many users in the file were successfully or unsuccessfully removed from a group.

As the command successfully removed the user, they have been unassigned in "Access Control".


If you try to run the command against a group that does not exist, then you will receive an error.


I did wonder whether the command would allow you to add a group to a group and not just users to a group.


Considering the header in the file has to specify “User Login” I wasn’t holding out much hope; anyway, I added a group to the file and uploaded it.


Running the command generated a failure.


It would be good if the command included a parameter to define whether users or groups are being assigned to a group. This would be preferable to another new command.

Now on to achieving the same functionality with the REST API.

I am not going to go through uploading a file using the REST API again as I covered that in my previous post.

The REST API URL format for adding/removing users to/from groups is:

https://<cloud_instance>/interop/rest/security/v1/groups   

To assign a group to the users contained in a file a PUT method is required, the body of the request should include the filename, the group name and a job type of “ADD_USERS_TO_GROUP”.

I said this in my previous post, but it is a shame that the user/group information could not have been included in the body of the request instead of having to upload a file.

Using a rest client an example to assign users to a group is:
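The body of the PUT request would be along the following lines; the parameter names are my assumption, mirroring the description above:

{
"filename":"groupUsers.csv",
"groupname":"Test_Group",
"jobtype":"ADD_USERS_TO_GROUP"
}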


The response will contain job information for adding users to groups. A status of -1 means the job is in progress; a URL is included which can then be accessed to check the job status.


Using the URL from the response, a GET request can be made to keep checking the job status until it completes.


A status of 0 means the operation was successful; just as with EPM Automate, details are included to show how many assignments were processed and how many succeeded or failed.

As the details show the process was successful, the user in the file has been assigned to the group.


To remove users from a group is very similar, the only difference is the “jobtype” parameter which should be “REMOVE_USERS_FROM_GROUP”.


The response contains the same information as when using the resource to add users to a group.


The status can be checked until the job completes.


The user has now been removed from the group.


To automate the process with the REST API and scripting you could put together something like:


The above script first tries to delete any existing file in EPM Cloud with the same name as the one that will be uploaded, once this is done, a file containing the list of users to assign a group to is uploaded.

The file will then be available from the application’s “Inbox/Outbox Explorer”.


Next, the REST resource to add users to a group is called. The URL to check the job status is then extracted from the response. The job status is checked until it completes.


The user contained in the uploaded file has been successfully assigned to the specified group.


To remove users from a group the same script can be reused with the job type changed to “REMOVE_USERS_FROM_GROUP”.

I am sure I am going to get asked if it is possible to create a group with EPM Automate or the REST API; unfortunately, there is no direct command to do this yet.

EPM Cloud – Data Integration expressions

New functionality was added to Data Integration, which is available in the simplified interface, in the EPM Cloud 18.11 release. I did put together a detailed post covering the first release of Data Integration back in 18.07, which you can read all about here.

Back in the original blog post I hit some problems with the functionality and once again I encountered the same type of issues with the new features in 18.11. These issues have now been recognised as a bug and should be fixed in 18.12.

Before I go through the new expressions functionality, it is worth pointing out there are quite a few restrictions with Data Integration in the simplified interface and, in my opinion, it is still quite a long way off from parity with Data Management.

The areas that still need to be set up in Data Management are:
  • Register Source System
  • Register Target Application
  • Period Mapping
  • Category Mapping
  • Logic Groups
  • Check Entity/Rule Groups
  • Report Definition
  • Batch Definition
The following features are not currently supported or available in this release; this list will reduce over time.
  • Only supported in standard and enterprise PBCS
  • Only available for Service Administrators
  • Fixed length files are not supported
  • In the workbench the following is unavailable:
    • Validation errors
    • Displays only dimensions in the target application and columns cannot be added
    • Drill to source
    • View mappings
    • Source and Target view is only available.
    • Import/Export to Excel
  • Map members (data load mappings):
    • Rule Name is replaced with Processing order
    • Mappings cannot be assigned to a specific integration (data load rule)
    • Exporting is not available
    • Mapping scripts are unavailable
    • Multi-dimensional mappings are available but cannot be defined.
  • Column headers for multi-period loads are unavailable
  • Batch execution is unavailable
  • Report execution is unavailable
  • Scheduling is unavailable.
With all the above restrictions, Data Integration is still considered a preview version but there is nothing stopping you from using some of the functionality in Data Integration and the rest in Data Management until parity is reached.

Anyway, on to the new functionality, which is for file-based sources and allows source and target expressions to be applied to an import format; this is part of the “Map Dimensions” step in Data Integration.

Source expressions were previously available but had to be manually entered; there are also a few slight terminology differences between Data Integration and Data Management.

The main functionality update is the ability to apply target dimension expressions. This is an excerpt from the 18.11 release documentation which clearly explains the new target expressions:

“New target dimension expression types include Copy Source value, Prefix, Suffix, Substring, Replace, Default, Rtrim, Ltrim, Rpad, Lpad, Constant, Conditional, Split, and SQL.

When importing data, you can apply target expressions to the mapped dimensions. Target expressions enable you to transform the source value read from file to the target dimension values to be loaded to target application. These expressions can be used instead of member mappings for performing simple data transformations. For large data sets using import expressions, the transformation improves data load performance.


The new expressions can be defined only using Simplified user interface. They will not be made available automatically in Data Management.”


The above is good news if you are going to be using Data Integration, not so good news for Data Management or FDMEE users.

Please be aware this is my first look at the new functionality so there may be a few slight inaccuracies; if this is the case, I will update this post once I am aware of them.

I think the best way to understand these new features is to go through a few examples, if you have not already read my previous post on Data Integration then it is worth doing so as I will be assuming you understand the basics.

Let us start by creating a new integration.


A name for the integration and new location were defined. The source was set as a file and the target as the planning application where the plan type and category were applied.


An example of the source comma delimited file is:


On to “Map Dimensions” where the import format can be created, and source file columns mapped to target dimensions.


This is where the new functionality comes into play; the following options will be available by clicking the gear icon next to the target dimension:


There are two new options available which are “Edit Source Expression” and “Edit Target Expression”.

If you select “Edit Source Expression” on a target dimension there will be three expressions to choose from.


The equivalent selection in Data Management has two options.


“Lpad” and “Rpad” are similar to using “FillL” and “Fill”.

The “Constant” source expression allows you to define a source value for each row.


In the import format I defined a constant of “BaseData” for the “HSP_View” dimension and “Working” for the “Version” dimension.


Once they have been defined they appear in the source of the import format.

If you look in Data Management, you will see how the expressions have been added.


It is the equivalent of just defining the value directly like:


At this point if I run an import and view the workbench, the constant values have been applied to the source columns.


The “Lpad” source expression allows you to pad the left side of the source value which is the equivalent of using Fill in Data Management.


In the above example, source account values will be left padded with “9” up to six characters in length. An account of 4120 would become 994120.

Once the expression has been defined it will be displayed in the source of the import format.


If I run an import and view the workbench, the source account members have been left padded. “4120” has become “994120” and “4130” is updated to “994130”.


In Data Management you can see the expression from Data Integration has been applied.


This is the same as using “FillL” expression.


which produces the same result.


If you apply source expressions like “FillL” in Data Management, you will see the same expression in Data Integration.


The “Rpad” source expression is the same concept except it pads to the right; I will provide an example when we get on to target expressions.

There are different source expressions available if you select the amount row in the import format.


These are the same as the ones available in Data Management, so I don’t feel I need to cover them in this post.


Now on to the main feature and that is the new target expressions which can be accessed from the import format in the same way.


The following expressions are available:


Please be aware these target expressions can only be defined in Data Integration, there is no way to define or view them in Data Management.

I will now go through a simple example of each of the available target options starting with “Copy Source”.


It is pretty much self-explanatory; this expression will copy source values to the target.

It will be shown in the import format as copysource()


If I import the source file and then check the workbench you will see the expression has been applied.


Source account values have been copied directly to the target account dimension.

Next on to “Constant” which is exactly the same as the source expression I went through earlier.


This has been applied to the entity dimension so will set all the target entities to “No Entity”.


Now on to “Default”, which is pretty useful; this expression will apply a default value when the source is blank, otherwise it will use the source value.


For this example, I updated my source file to include a version column where some values are populated, and some are blank.


In the workbench, where the source version is blank it has been mapped with a target of “Working”; where the source version exists it has been mapped explicitly to the target.


The “Prefix” target expression doesn’t need much explanation; it will just add a prefix to the source value.


I added the above target expression to the entity dimension and reran the import.

As expected, the source entity values have been prefixed with “ENT_”.


The “Replace” expression searches for a string value in the source and applies a replacement value.


I applied the expression to the account dimension: where the source contains “50” it will be replaced with “00”.


The “Split” expression will split the source based on a delimiter and then return the nth value after the split.


I applied the expression to the product dimension where the source starts with “P_”; setting the delimiter to underscore and the component number to 2 means the target will be set to the value after “P_”.


We have been through the “Prefix” expression so now it is time for “Suffix” and, you guessed it, it suffixes specified text to the source value.


The expression was applied to the account dimension so all source values were suffixed with “_ACC”.


The “Substring” expression will extract a string from the source based on a start position and the number of characters to extract.


I applied the expression to the account dimension so it will extract the first two characters from the source value.


The “Lpad” expression I covered earlier with the source expressions.


Once again the left side of the source has been padded with the defined characters and length.


The “Rpad” expression is just the reverse logic, so it will pad to the right of the source value.


This produces the following:


The “Ltrim” expression will trim a leading specified character from the source.


This logic has removed the leading “4” from the source value.


The “Rtrim” expression is the reverse of “Ltrim” so it will trim specified trailing characters from the source.


In this example the trailing “0” in the source has been trimmed.


Now on to one of the more interesting target expressions: “Conditional” allows you to use if-else statements on the source to return the target.


In the above example I have applied a conditional expression to the account dimension. If the source account value equals “4150” then “Other Revenue” is returned as the target, else the source account value is returned. Make sure you put a space after the first “if”, otherwise the expression doesn’t seem to work.


It is possible to use multiple “if-else” statements and logic operators such as “and” and “or”. You can also bring in additional source columns.


If you are not sure of source column names, then look at the “Data Table Column Name” in the target application details.


The logic for the above expression is: where the source account is equal to “4150” then set the target to “Other Revenue”, else if the source account value is equal to “4130” and the version (UD1) is equal to “Final” then set the target to “Cash”; for everything else set the target to be the same as the source value.


You can also handle blank source values with the conditional expression.


The above expression sets the target version to “Working” where the source is blank, otherwise it returns the source value.


On to the last target expression which is “SQL”. This expression will allow you to put together a SQL statement as a mapping.


In the above example, which was applied to the entity dimension, a CASE statement is used; the source column is ENTITY, which must be enclosed inside $ symbols.

When the source entity is either “120” or “130” then set the target to “500”, else the target will be “110”.


Unless I am doing something wrong, at the moment it doesn’t look like you can use more than one “WHEN” condition in the “CASE” statement. Maybe it is related to the information provided in the expression hint – “Use only one source value in the source expression”. The following expression caused a failure in the import process.


Also, I couldn’t set the “ELSE” result to be the same as the source column value as this generated an error when running the import.


I will update this post if I find any further details about the restrictions when using the SQL expression.

So now I have a completed import format where a target expression is applied to all dimensions.


If an import is run, in the workbench all source columns are mapped to target dimensions and the validation step is successful. There is no need to apply any member mappings in the integration for it to succeed.


You may be wondering whether it is possible to apply target expressions in the import format and then apply member mappings.

If you apply a target expression to a dimension, then member mappings for that dimension seem to be ignored.

If you don’t apply a target expression to a dimension, then you will be able to apply member mappings to that dimension.

So that concludes my first look at the new expression functionality. I hope you have found it useful, until next time…
