
Planning - loading text member data update

A while back there was a post on the planning forum about loading text member data to a planning application using the Outline Load utility. A reply on the post said it was not possible in 11.1.2.x, an SR was raised and it was confirmed as a bug. I knew it had been possible in previous versions so I was going to test it out but never got round to it.

Then recently I received an email asking if it was possible to load text data to planning 11.1.2.1 using ODI, as once again an SR had been raised and it had been confirmed as a bug. I knew I had definitely loaded text data when working on a previous project, but that was with planning 11.1.1.3.

I did write a blog on using an alternative method for loading text data, but that was before it was possible directly with the ODI planning adaptor; I think it wasn’t possible in the first releases of 11 but it certainly was possible in 11.1.1.3.

I thought I had best test out both the Outline Load utility and ODI to see if it is possible to load text member data to an 11.1.2.1 planning application.

I first created an account member called TextMember1 with a data type of text.


I constructed a simple form so that I could view the results of loading text data to the selected POV.


I set up the Data Load Settings in planning to match how I had set up the form, with the Data Load Dimension as Period and the Driver Dimension as Account with the member TextMember1.


I made sure the Data Load Settings had definitely been saved correctly by running a SQL query against the planning application's relational tables.


I created a CSV file in the format required for the Outline load utility and populated it with the same POV details as the planning web form.
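Purely as an illustration of the layout (the POV members shown here are made up and would need to match your own form selection, and the exact column order should be checked against the utility documentation), the file content is along these lines, with the load dimension member, the driver member value, the Point-of-View and the plan type:

Period,TextMember1,Point-of-View,Data Load Cube Name
Jan,"Some sample text","Local,Actual,Version1,FY12,E01",Consol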


OutlineLoad /A:PLANSAMP /U:admin /M /I:textload.csv /D:Period /L:dataload.log /X:dataload.exc

The Outline Load utility was run from the command line with the load dimension set to Period to match the Data Load Settings and the CSV file.
 

The output log from the utility showed that 1 record was successfully processed and loaded.


The planning web form was run again and the text data was displayed successfully.

I know that you can also use the Outline Load utility to load data without having to set the Data Load Settings in planning, instead specifying the driver information directly in the source file, so I thought I would give that a try as well.


I created and populated a new file using the Driver Member and Value column headings.
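As a rough illustration only (the member names are assumed and the column layout follows my recollection of the admin guide example, so check it against the documentation), a record in this format might look something like:

Value,Driver Member,Point-of-View,Data Load Cube Name
"Some more sample text",TextMember1,"Jan,Local,Actual,Version1,FY12,E01",Consol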


OutlineLoad /A:PLANSAMP /U:admin /M /I:textload2.csv /TR /L:dataload.log /X:dataload.exc

I successfully ran the utility again, this time removing the load dimension parameter /D and using /TR, which means the driver information is specified in the file.
 

The form displayed the newly loaded text data.

So there were no problems loading text data using the Outline Load utility, so time to move on to ODI. The email I received specified ODI 10.1.3.6.x with planning 11.1.2.1, so that is what my first test would be with.
 

I reversed the planning application and checked the columns “Data Load Cube Name”, “TextMember1” and “Point-of-View” had been added to the Period Datastore.


I created a file Datastore against the CSV file I used with the Outline load utility.


I created a new interface with the CSV file as the source and the Period dimension as the target, the target columns were then mapped to the source.

The flow was set as LKM File to SQL, the memory engine as the staging area and then “IKM SQL to Hyperion Planning”


The interface executed successfully and the web form was reloaded to show the correct text data.

So no problems using ODI 10g, how about ODI 11.1.1.5?


I replicated everything I had created in ODI 10g but this time used Oracle as the staging area due to a bug when using the memory engine with planning.
 

The interface ran with no problems and once again the correct text data was shown in the planning web form.

All testing was successful so at least I can put my mind at rest on this subject.

Loading to EPMA planning applications using interface tables – Part 7

Back once again with another instalment of the EPMA interface series, even though I thought the last one would have been the final part. Today’s blog is about loading attributes, Smart Lists and UDAs, and is mainly due to the number of requests I have had on how to go about it.

I am going to start off with attributes and run through the steps to load an attribute hierarchy and apply an attribute to a base member using interface tables. Once again I am not going to overcomplicate the matter, so I am going to manually create the dimension and associations first; it only takes a couple of minutes to do and is probably quicker than trying to achieve it through the interface route, and considering the creation of attribute dimensions doesn’t happen that often it makes sense to use this method.

I will also assume you have been following the series and that you are up to speed on the concept of how interface tables work.

Anyway on to the steps, once in the dimension library select File > New Dimension


I am going to create an unimaginative attribute dimension called ProductType which will be used to analyze products in the Segments dimension of the sample planning application.


Once the dimension has been created the association between the segments and the ProductType dimension has to be created, this is achieved by right clicking the Segments dimension in the dimension library and choosing “Create Association”. If I didn’t create this association then I would not be able to apply attribute members against base level members in the Segments dimension.


I also created an association between the Alias and ProductType dimension to allow aliases to be added to the attribute member names.


The ProductType dimension was added to the sample planning application and all the associations activated.


If a member in the Segments dimension is selected then the ProdTypAtt property is now available and members from the ProductType dimension can be assigned to it.

This is where the interface tables can perform the rest of the work, such as loading the attribute dimension hierarchy and assigning attribute members to base level members in the Segments dimension.

The first step is to return to the interface table IM_DIMENSION; I went through the details of this table in part 2 of the series, so if you are unsure about the table have another read of that part of the series.



A new record was added to the table with the following details

C_DIMENSION_NAME – this defines the name of the dimension, so ProductType is used.
C_DIMENSION_CLASS_NAME – this defines the type of dimension, which is Attribute.
C_HIERARCHY_TABLE_NAME – this provides the table name which holds the attribute dimension hierarchy; the table name is going to be PLAN_PRODUCTTYPE_HIER.


I created the table PLAN_PRODUCTTYPE_HIER which will hold the hierarchy metadata for the ProductType attribute dimension; the ISPRIMARY column shouldn’t really be necessary, but if you don’t include it then warnings are generated in EPMA when an import from the interface tables takes place.


The table was populated with a simple one level attribute hierarchy.
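As the screenshots are not reproduced here, the following is only a sketch of the sort of content, assuming the same parent/child/IsPrimary column layout used earlier in the series and with made-up attribute member names:

PARENT       CHILD     ISPRIMARY
ProductType  Hardware  1
ProductType  Software  1
ProductType  Services  1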


As attribute members are going to be assigned to base level members in the Segments dimension a column was added to the Segments hierarchy table “PLAN_SEGMENTS_HIER”, the column was named the same as the attribute property “PRODTYPEATT” defined in EPMA.


The import profile requires editing to take account of the new attribute dimension otherwise if an import was run the new information would not be picked up and loaded to EPMA.


In the Map Dimensions section the Interface table was mapped to the ProductType dimension in the shared dimension library.



In the Dimension mapping section the ProductType dimension Alias column was mapped to the Alias property in EPMA, an attribute member only has the Alias property to map.


The Segments dimension was updated to map the attribute column from the source interface table to the property in EPMA, usually I keep the source/target with the same naming convention so then it is much easier to map.

The import profile was then executed to load the metadata from the interface tables to EPMA.


The ProductType attribute dimension hierarchy was successfully created and the Segment members were assigned with an attribute member.

Now say you didn’t create the associations manually in EPMA as I did and you wanted to achieve this using the interface tables; I still think it is just as quick to do it manually, but here is how you would go about it.

When the sample interface tables are first created there is a table available called IM_DIMENSION_ASSOCIATION, the table name basically explains what it is about and it provides the ability to map dimensions and properties.

IM_DIMENSION_ASSOCIATION

There are four columns to populate

I_LOAD_ID - the same load ID used in pretty much all the interface tables.
C_BASE_DIMENSION - Name of the base dimension whose member property will be associated with another dimension.
C_PROPERTY - The name of the associated property.
C_TARGET_DIMENSION - Name of the dimension with which the associated property is associated.

The table has been populated to associate the Segments dimension with the attribute dimension and its property, and also to associate the alias property between the attribute and alias dimensions.
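For illustration only (the load ID and exact property name would need to match your own setup), the populated rows might look something like:

I_LOAD_ID  C_BASE_DIMENSION  C_PROPERTY   C_TARGET_DIMENSION
1          Segments          PRODTYPEATT  ProductType
1          ProductType       Alias        Alias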
If the import profile is run again and the associations did not exist, they would be created, replicating what I originally did manually.

If you also want to create the dimension in EPMA from the information in the interface tables then you need to populate an extra column in the IM_DIMENSION table.

IM_DIMENSION

The C_DIM_PROPERTY_TABLE_NAME column holds a table name to define the properties of the dimension, so in this case the properties for the ProductType attribute dimension will be held in table PLAN_PRODUCTTYPE_PROPERTY.

PLAN_PRODUCTTYPE_PROPERTY

For an attribute dimension there are only a few property columns.
DIMENIONALIAS – Alias of the dimension.
ATTRIBUTEDATATYPE – Defines the attribute dimension type, which can be Text, Boolean, Date or Numeric.
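As a sketch only (the alias value is made up), the property table content might simply be:

DIMENIONALIAS  ATTRIBUTEDATATYPE
Product Type   Text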

By populating this information the dimension and its properties will be created when the import profile is executed.

Let’s move on to UDAs.


Once again I am going to create the dimension first in the EPMA dimension library and set the Type to UDA.


When a UDA dimension is created two members are added by default, these are specific to planning and if you are unclear what HSP_NOLINK and HSP_UDF do then have a read of the documentation here


I am going to assign UDAs to the Segments dimension so an association between the two dimensions was created using the existing UDA property.

Once again the IM_DIMENSION table requires an additional row with the UDA information.


Dimension name = UDA_Shared
Dimension class = UDA

Hierarchy table name will be PLAN_UDA_HIER

PLAN_UDA_HIER

The table was created and populated with UDA members in the child column.

As the objective is to apply UDAs to members in the Segments dimension the Segments hierarchy table requires updating.

PLAN_SEGMENTS_HIER

I am going to apply multiple UDAs to one member, and to do this you need one column per UDA, so in this case an extra UDA column was added.
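As an illustration of the idea (the column and UDA names here are assumed rather than taken from the original screenshots), the Segments hierarchy table might then carry rows such as:

PARENT    CHILD  ISPRIMARY  UDA    UDA2
Segments  Seg01  1          UDA_A  UDA_B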

The import profile required editing to map the newly added UDA dimension.


The Segments dimension mapping was also updated to map to the new UDA columns from source to target.


The import profile was executed and the UDA dimension was built and multiple UDAs were assigned to Segment dimension members.

On to the final topic, and that is Smart Lists. This time the objective is to create and build the Smart List dimension and then assign a Smart List to a member in the accounts dimension; the logic is pretty much the same as that for the attribute and UDA dimensions.

IM_DIMENSION

A Smart List dimension is going to be created called GradeSL which will basically just define different employee grades.

A new record is added to the IM_DIMENSION table, the Smart List members will be in table PLAN_GRADESL_HIER and the properties for the Smart List dimension will be in table PLAN_GRADESL_PROPERTY.


If you create a Smart List dimension in EPMA then there are a number of properties that can be set, and most of these can be applied from an interface table.

PLAN_GRADESL_PROPERTY


The table to hold the Smart List dimension property definitions was created and populated. If you don’t understand what the properties mean then it is worth consulting either the EPMA or Planning administrator documentation as they contain detailed information; the above screenshots should provide enough detail to map the EPMA property to the interface table property.

PLAN_GRADESL_HIER

The table to hold the Smart List member information was created and populated.

ITEMVALUE – when a Smart List member is added it requires an integer value.
SMARTLISTENTRYLABEL – each Smart List member requires a label which will be used to populate a drop-down menu in planning.
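To illustrate (the grades and values are made up), the populated member table might contain something like:

CHILD   ITEMVALUE  SMARTLISTENTRYLABEL
Grade1  1          Grade 1
Grade2  2          Grade 2
Grade3  3          Grade 3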

As the Smart List is going to be applied to an account member the account hierarchy interface table will require updating.

PLAN_ACCOUNT_HIER

The Smart List GradeSL is applied to account member GradeType and the Data Type is set as a Smart List.

No associations are required between the Smart List dimension and the dimension the Smart List will be applied to.

The import profile is edited again to define the new Smart List dimension.


The Smart List dimension does not exist yet in the Shared Library so “Create dimensions for the non-mapped dimensions with the source dimension name” is enabled, this creates the dimension in EPMA with the same name as defined in the C_DIMENSION_NAME column in the IM_DIMENSION table.



The interface property columns for the Smart List members are mapped to their corresponding properties in EPMA.

The import profile was then executed to create and populate the Smart List dimension and assign the Smart List to an account member.


The Smart List dimension has been created and properties set.


The Smart List GradeSL has been assigned to account member GradeType.

Hopefully you should now have a grasp on how to handle Attribute, UDA and Smart List dimensions using EPMA interface tables.

Loading to EPMA planning applications using interface tables – Part 8

Just when you thought the end of the interface series had come about I return with another instalment. Today I thought I would quickly go over loading data from interface tables as it is an area I have not covered and I want the series to be complete.

Within EPMA there is built in functionality known as data synchronization which allows the synchronizing of data between EPM products, flat files and interface tables which I will be covering.

The objective of today is as usual to go back to basics and load numeric, date and text data from an interface table into a planning application.


A simple form was created with three account members, TextMember1 which is a text data type, DateMember1 which is a date member type and a standard account member Acc1.


A relational interface table was created to hold the data; there are not really any special requirements on the format of the table, and the one created basically has a column for each dimension and one for the data.


The table was populated with three different data type records. I am not going to cover populating the table, as the data could have come from practically any source and could have gone through numerous transformations using the tool or method of your choice before being loaded to the interface table.

To be able to load the data there are a few steps that need to be carried out within the Data Synchronization area accessed through workspace.

The first step is to create the Interface Area definition, which basically defines which relational table and columns the data will be loaded from.


Select Navigate > Administer > Data Synchronization in workspace.


In the Data Synchronization area select New > Data Interface Table Definition.


The interface area that is going to be used is selected; if you have followed this blog series you should already know about or have set up an interface area, and if you are unclear then have a read here.

The table which holds the data and the data column are selected from the drop down boxes.


The dimension definition section allows you to add the columns that are going to be used in the synchronization and provide a friendly display name for them.


Finally a name is provided for the Interface area.


The interface area now appears in the data synchronizer; you are not confined to just one interface area and can create as many as required.

The next step is to create the Synchronization which defines which interface area to associate with, the application to load data to and the dimension mappings between the source interface table and the target application.


To create the Synchronization select New > Synchronization.


The source type is then defined, which can be either a Hyperion application, an external source such as a flat file or, in my case, an interface area.


A list of available interface areas will then be displayed.


A list of available target applications is then displayed.


As the destination is a planning application, a list of available plan types to load the data to is displayed and one can be selected.


The next screen defines the dimension mapping from the source interface columns to the target planning application dimensions; creating a link between source and target is as simple as dragging the source onto the target.

There is not always a match between source and target, and in the example above you can see there is no mapping for the dimension HSP_Rates in the source; in this case you can force the target dimension to a member.


If you right click the dimension there is an option “Assign Default Member”


A member can then be selected and the data will be loaded to this member each time the synchronization is run.

It is also possible to filter the data from the source, so say for example you only wanted to load data for one account then a filter can be applied.


If you right click the source dimension the option to “Create Filter” will be available.


As the source is an interface table there are two types of filter available, EQUAL or LIKE; if the source was an application there are also a number of functions available.

Wildcards such as * and ? are allowed when using the LIKE filter. To be honest it is pretty basic functionality, and if you are using interface tables then it is probably best to transform the data into a format ready to be loaded before using the data synchronizer.

There is also the option to create mapping tables if the source is a Hyperion application.

 If you create a filter an icon will be displayed against the source.


Once all the mappings are complete the synchronization can be saved.


The synchronization should now appear in the Data Synchronizer window; right click and select “Execute Synchronization” to load the data, then check the Job Console to see if the synchronization was successful.


The synchronization was successful and the web form displayed the desired results.

If you don’t want to have to run the synchronization each time from workspace and want to automate the process then you can use the EPMA batch client; I covered the client in part 5 of the series, so if you have not used it then it might be worth having a read.

The syntax to execute synchronization using the batch client is -

Execute DataSynchronization Parameters(DataSynchronizationName, DataTransformationOperator, DataTransformationValue, FileName, ValidateOnly, WaitForCompletion, dataSyncLoadOptionHpMode) Values('', '', '', '', '', '', '');

DataSynchronizationName—Name of the Data Synchronization profile to execute.

DataTransformationOperator—Valid values are:

•    None
•    '*' (Multiply)
•    '/' (Divide)
•    '+' (Add)
•    '-' (Subtract)


DataTransformationValue—Value to use in conjunction with the DataTransformationOperator to modify the data values.

FileName—If the synchronization uses an external source file for the source of the synchronization, the location of the external source file.

ValidateOnly—Validates the data synchronization without executing it.

WaitForCompletion—If set to true, the Batch Client waits for the job to finish. If set to false, the Batch Client submits the job and continues. Allowed values:  True or  False

dataSyncLoadOptionHpMode. Allowed values: ADD, SUBTRACT, OVERWRITE (which is the default)

So in the examples I have been using the syntax would be.

Execute DataSynchronization Parameters(DataSynchronizationName, DataTransformationOperator, DataTransformationValue, FileName, ValidateOnly, WaitForCompletion, dataSyncLoadOptionHpMode) Values('INT_2_EPMASAMP', 'None', '', '', 'false', 'true', 'OVERWRITE');


This can then be added to a script that can be called from command line, an example of the command line would be

epma-batch-client.bat -C"F:\Scripts\ExcDataSync.txt" -R"F:\Scripts\ExcDataSync.log" -Uadmin -Ppassword


The output is written to the command window and, if specified, to a log file; the output includes a link to the job console to view a summary of the synchronization.


If you are interested in using ODI to automate the process of using the batch client then have a read of part 6 of the series.

If you are experiencing any issues with the synchronization then it is possible to enable additional logging.

There is a file called dme.properties located at
<MIDDLEWARE_HOME>\user_projects\<instancename>\config\EPMA\DataSync


If you edit the file and remove the # from the beginning of the following lines.

preTranslationProcessingClass=com.hyperion.awb.datasync.custom.FileBasedRowLogger
preTranslationProcessingClass.outputFile=preTransOut.txt

postTranslationProcessingClass=com.hyperion.awb.datasync.custom.FileBasedRowLogger
postTranslationProcessingClass.outputFile=postTransOut.txt

createDebugFiles=true
debugSampleSize=1000


Save the file and then restart the Data Synchronization web application.

The additional logs will be available from
<MIDDLEWARE_HOME>\user_projects\epmsystem1\tmp\oracle\temp
 

There is a log available displaying the data before it is loaded to planning; the filename is prefixed with a unique ID.


and another log with the data after the filters and mappings have been applied, which is the format that is then loaded to the planning application.


There is also an additional folder created each time a synchronization is executed, named with the same unique ID, which contains further diagnostic logging information to investigate.

To turn off the additional logging, edit dme.properties again, place a hash in front of the lines that were originally edited and restart the Data Synchronization web application.

ODI Series – Launching Calculation Manager rules

This blog has come about due to a post on the Oracle Planning forum; basically it addresses an issue with executing Calculation Manager rules from ODI using the command line utility CalcMgrCmdLineLauncher. In the past I wrote a blog on how to execute Hyperion Business Rules using ODI but, to be perfectly honest, I have never tried the same with Calculation Manager rules so I thought I would give it a go.

Once again it would be nice if there was an API available to launch the rules but unfortunately there is not one available yet so we are left with the command line utility which can be restrictive in terms of logging and functionality.


First of all let’s start with a simple rule which also uses a variable with a runtime prompt for defining the scenario member.

Before even going near ODI I am going to make sure that I can get the command line utility running successfully.

The syntax for using the utility is

CalcMgrCmdLineLauncher.cmd [-f:passwordFile] /A:appname /U:username /D:database [/R:business rule name | /S:business ruleset name] /F:runtime prompts file [/validate]

The parameters are pretty much self-explanatory but if you are looking for detailed information then have a read of the section “Launching Business Rules With a Utility” in the planning admin guide.
If you are unsure on how to create an encrypted password file then have a read here


I created an RTP file and the format for the file is
<VARIABLE_NAME>::<MEMBER_NAME>
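For the SIMPLE rule, which has a runtime prompt for the scenario, the file would contain a single line along these lines (varScenario is just an assumed example name; it must match the variable defined in Calculation Manager):

varScenario::Actual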

Password file will be : password.txt
Application name: PLANSAMP
Username: admin
Database/Plan Type name : Consol
Rule name : SIMPLE
RTP file: RTP.txt

The location of the utility is
<MIDDLEWARE_HOME>\user_projects\<instancename>\Planning\planning1\

This means the command line syntax to run the SIMPLE rule would be
E:\Oracle\Middleware\user_projects\epmsystem1\Planning\planning1\CalcMgrCmdLineLauncher.cmd -f:password.txt /A:PLANSAMP /U:admin /D:Consol /R:SIMPLE /F:RTP.txt


Personally I find the output from the utility to be pretty poor and the assumption is if there is no error message it ran successfully.


It is possible to check the Job Console from within planning to see more details around the execution of the rule; this information can also be queried from the planning application relational table HSP_JOB_STATUS.

If the rule is run from planning then there is additional information in calcmgrlaunch.log in
<MIDDLEWARE_HOME>\user_projects\<instancename>\diagnostics\logs\planning

The command line utility unfortunately does not write to this log.

So now the rule is running from the command line it is time to transfer this logic to ODI. I am using 11g and I am going to use an ODI procedure with the ODI Tools technology; if you are using 10g the process is very similar.


The ODI tool I am using is OdiOSCommand which, as you probably guessed, invokes an OS command line shell, using cmd on Windows and sh on *nix.

There are parameters to split out the standard output and errors; the working directory is the location the command is executed from, and synchronous waits for the completion of the command.

OdiOSCommand "-OUT_FILE=E:\ODIDEMO\CalcManager\SIMPLE.log" "-ERR_FILE=E:\ODIDEMO\CalcManager\SIMPLE.err" "-FILE_APPEND=NO" "-WORKING_DIR=E:\ODIDEMO\CalcManager" "-SYNCHRONOUS=YES" E:\Oracle\Middleware\user_projects\epmsystem1\Planning\planning1\CalcMgrCmdLineLauncher.cmd  -f:password.txt /A:PLANSAMP /U:admin /D:Consol /R:SIMPLE /F:RTP.txt

If you have issues running the command then there is also an extra parameter "-COMMAND=" so the syntax would be

OdiOSCommand "-OUT_FILE=E:\ODIDEMO\CalcManager\SIMPLE.log" "-ERR_FILE=E:\ODIDEMO\CalcManager\SIMPLE.err" "-FILE_APPEND=NO" "-WORKING_DIR=E:\ODIDEMO\CalcManager" "-SYNCHRONOUS=YES" "-COMMAND=E:\Oracle\Middleware\user_projects\epmsystem1\Planning\planning1\CalcMgrCmdLineLauncher.cmd  -f:password.txt /A:PLANSAMP /U:admin /D:Consol /R:SIMPLE /F:RTP.txt"

I am not a fan of hardcoding of parameter values and prefer to use variables as much as possible.


Each variable was created as text type.

CM_DIR – Working directory containing the RTP, password and log files.
CM_PASSWDFILE – Password file containing the encrypted password for the administrator user.
CM_PLANAPP – Planning application name.
CM_PLANDIR – Path of the Calculation Manager command line utility.
CM_PLANTYPE – Plan type to run the calculation against.
CM_RTPFILE – Runtime prompt file.
CM_RULE – Calculation Manager rule to execute.
CM_USER – Planning application administrator.


These variables were then substituted into the OS command syntax.


A scenario was generated from the ODI procedure as I will be using a Load Plan to execute the command line call.


It is possible to achieve the same results using a package if you prefer or are using 10g.

The scenario was added to the load plan and all the variable values were set.

The logs generated from running the command line utility will be based on the name of the rule, e.g. SIMPLE.log and SIMPLE.err.


The standard out log includes the variable information.


The error log includes information messages as well as any errors that were generated.

As I said earlier there is not really any information in the logs to say that executing the rule was successful.

So let’s change the RTP to an invalid member “Actual1” and see what happens when it is executed.


The Operator shows a successful execution; this is because the utility has not returned an error code to the calling OS shell.


The output log now contains the invalid member in the launch variables.


The error log does have an additional line with a failure message, even though it doesn’t explicitly say it has failed and the line should have started with ERROR:, but unfortunately that is the best you get at the moment with the utility.

This means the logic is that if the rule completed successfully the last entry in the error log will be

INFO: Application PLANSAMP exists in Registry: {1}

Otherwise it will have failed and the log will contain an error message.

Using ODI there are a number of different ways to handle this logic and generate a failure; one method could be to load the error log into a table and then analyse the records, a process I have described in a previous blog.

Another technique is to simply read in the lines of the error log and check whether the last line starts with “INFO”; this could be done using Java, Groovy or Jython, the last of which I am going to use.


I added a step to the ODI procedure using the Jython technology which will execute after calling the command line utility. The code basically reads in all the lines of the error log, which is not a problem because the file is so small, and then checks if the last line starts with ‘INFO’; if it doesn’t then an error is raised.
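A minimal sketch of the sort of Jython used in the step is shown below; the error log path is hardcoded here for simplicity, whereas in practice it could be built from the CM_DIR and CM_RULE variables.

# read in the error log produced by the command line utility
errFile = open('E:/ODIDEMO/CalcManager/SIMPLE.err', 'r')
lines = errFile.readlines()
errFile.close()

# drop any blank lines so the last entry is the last real message
lines = [line.strip() for line in lines if line.strip()]

# if the last message does not start with INFO assume the rule failed and raise an error
if not lines or not lines[-1].startswith('INFO'):
    raise Exception('Calculation Manager rule failed: ' + (lines and lines[-1] or 'empty error log'))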


If the load plan is run again it now fails and generates an error with a message explaining why it failed, so at least any errors can be trapped and acted upon.

ODI Series – ASO parallel data loads

I have been meaning to write up this blog for a long time but never got round to it and a recent post on the essbase Oracle forum prompted me to revisit this topic.

As you may be aware, there are a couple of methods available to parallel load data into an ASO database: one method is to have multiple sessions of Maxl each performing a data load, and the other, much more efficient, way is to use one Maxl statement with multiple load rules (up to eight).

The Maxl syntax is
Import database <appname>.<dbname> data connect as <SQLUSER> identified by <SQLPASSWORD> using multiple rules_file ‘<RULENAME>’,’<RULENAME>’.. to load_buffer_block starting with buffer_id <ID_NUMBER> on error write to …

A buffer is created for each load rule and the id starts with the number defined in the statement, so if you have two load rules starting at buffer id 100 then once the statement is executed buffers 100 and 101 will be created; data from the first load rule will be loaded to buffer 100 and from the second to 101. Once both data loads are complete the content of the buffers will be committed to the database in one operation.

For an example I am going to load data to the ASOsamp.Sample database using two load rules with the parallel load Maxl and then go through the process to achieve the same using ODI and the essbase adaptor.


I created two SQL load rules DLSQL2,DLSQL3 which were pretty much identical in format.

import database AsoSamp.Sample data connect as ODISTAGE identified by 'password' using multiple rules_file 'DLSQL2','DLSQL3' to load_buffer_block starting with buffer_id 100 on error write to "F:\Scripts\dloaderror.txt";



A simple Maxl statement was created to use the parallel data load functionality starting with buffer id 100 and using the two rules files I had created.


If you query the load buffers while the data loads are taking place you can see that two buffers (100,101) have been created.

The default aggregation method is AGGREGATE_SUM which means “Add values when the buffer contains multiple values for the same cell.”

By default missing values are ignored but zero values are not.

Later on I will compare how ODI manages the buffers.

If you check the essbase application log when the parallel loads are being run you will see that the load buffers are initialised


The data loads then take place


And finally once the data loads are complete the buffers are committed in one operation.


So a nice and simple example; I will now try to replicate the functionality using ODI.

I am going to be using ODI 11g as load plans are available, which makes it much simpler to execute scenarios in parallel; it is certainly possible to achieve the same in 10g using a package with the scenarios set to asynchronous mode.


The essbase ODI model reversing options were set to multiple data columns, with measures being the data column and the members “Original Price”, “Price Paid”, “Returns”, “Units” and “Transactions”; this matches the data load in the load rules which were originally set up for the Maxl method.


I created two similar interfaces with the source as a relational table and the target being the essbase database datastore. I had to use the SUM function on the members due to an issue I encountered when loading multiple values to the same cell; I will cover this issue towards the end of this blog.


In the IKM options you will notice there are three options relating to ASO. These options appeared in one of the patch releases of 10g, though I am not sure exactly which version that was; if you don’t have the options then you really should be patching ODI, because many issues with the Hyperion adaptors have been addressed over time.

BUFFER_ID – This is exactly the same as the buffer id concept used with Maxl.

BUFFER_SIZE – When performing an incremental data load, Essbase uses the aggregate storage cache for sorting data. You can control how much of the cache a data load buffer can use by specifying the percentage (between 0 and 100% inclusive). By default, the resource usage of a data load buffer is set to 100, and the total resource usage of all data load buffers created on a database cannot exceed 100. For example, if a buffer of 90 exists, you cannot create another buffer of a size greater than 10. A value of 0 indicates to Essbase to use a self-determined, default load buffer size.

This is the same as resource usage in the essbase world; it differs slightly in that in Maxl it is specified as a value between 0.1 and 1, while in ODI it is a percentage between 0 and 100 (so a Maxl resource usage of 0.5 corresponds to an ODI BUFFER_SIZE of 50).

GROUP_ID – This option is used when using parallel data loads; each interface that is going to be run in parallel will require the same group id. I am not sure exactly how it works internally, but it looks like in the core Java code for the adaptor there is a buffer id manager, and when interfaces are executed the buffer id is added to the manager; once all executing sessions are complete the array of ids in the buffer manager is committed at once.

Second interface KM options -

Each of the interfaces has a load rule defined and I set both interfaces to a group id of 1; the first interface had a buffer id of 1 and the second a buffer id of 2, and as there are two loads in parallel the buffer size in each of the interfaces was set to 50%.


A scenario was generated for each interface; this is because when using load plans it is not possible to directly add an interface, so a scenario is required.


A simple load plan was created and the two scenarios generated from the interfaces were added and set to execute in parallel.


After executing the Load Plan you can see in the operator that both the scenarios are executed in parallel.


Looking at the essbase application logs, it seems to follow the same concept as the Maxl method. It is not going to be exactly the same, as the Maxl method uses SQL load rules while ODI uses the Java API and streams the data via the load rule.

SQL load rule using Maxl -

ODI Data load -

If you query the load buffers when using ODI to load data then there is a difference in some of the default properties being used.


The aggregation method being used is “AGGREGRATE_ASSUME_EQ” compared to the Maxl default of “AGGREGATE_SUM”; I am not familiar with the property that ODI is using and I am not sure why it doesn’t use the aggregate sum method.

Ignore missing is also set to false compared to the default of true when using Maxl; setting ignore missing to true gives the optimal performance.

The problem here is that with ODI you have no option to change these default values and are stuck with what has been hardcoded in the Java code. This is pretty poor in my opinion, and I wish Oracle would wake up a little, start enhancing the adaptors and take data integration with EPM seriously before they are left behind; if the adaptors were enhanced then I am sure more people would use them instead of being forced into firing off Maxl from within ODI.

I had a hunt around in the Java code to see how the buffers are initialised.


They use the loadBufferInit method which is available in the essbase Java API.


The aggregation method in the adaptor code is being fixed to duplicates assume equal.

The options for ignore missing and zero values are also hardcoded.

The options when the buffer is committed are also fixed so you lose the options to work with slices and overriding data.

In my opinion it really wouldn’t be much of a development exercise to make these options available to be set in the IKM, like the buffer id and size.

Anyway, this leads me on to the issue I experienced when loading multiple values to the same cell, which may be down to the hardcoding of the aggregation method shown above.


The above source data has multiple values being loaded against the same cell and if the data is loaded using Maxl then it loads without any issues.


com.hyperion.odi.essbase.ODIEssbaseException: Cannot Load buffer term. Essbase Error(1270089): Data load failed: input contains different values for the same cell [(Original Price, Curr Year, Jan, Sale, Credit Card, No Promotion, 26 to 30 Years, 20,000-29,999, Photo Printers, 017589, 13835): 240 / 236]

If you load the same data using ODI then the interface fails with an error about the input data containing different values for the same cell; this may be because of the aggregation property being used when the essbase adaptor initialises the buffer.

The workaround is to sum the data in the interface, but I still consider this to be a bug with the knowledge module because you have no way of changing any of the default values; over to you, Oracle.

ODI Series – tips for improving essbase load times.

I thought I would cover a few tips for speeding up data loading to essbase using ODI.

The first tip is for speeding up data loads to essbase databases and while it may be obvious to many people I often find this setting in the IKM left as default and hear complaints that apparently the essbase adaptor is slow at loading data.

When you first create an interface to load data to an essbase database you will see an option in the IKM called COMMIT_INTERVAL.


The default setting for this option is 1000 which basically means that the data will be streamed using the Java API to the database in chunks of 1000 records.

Now let me go through an example of the difference in load times when using this option.



I successfully loaded over 311,000 records to an ASO database from an RDBMS source; in the operator you will see that it took the worrying time of 309 seconds.


In the log you can see that there is an entry each time 1000 records are loaded to the buffer; this is not only slowing the data load by a massive amount but also increasing the log size, because for just one load the process is repeated over 300 times.


So let’s see what happens when the COMMIT_INTERVAL is increased to a size that is bigger than the number of records to be loaded.



The data load is down to 14 seconds which is a huge improvement.

As the ODI agent is now loading the data in larger chunks, there will be an increase in the amount of memory required for the JVM.


The graph displays the memory being used by the agent with the default 1000 commit records and then with the increased commit size so be prepared to increase the maximum heap size of the agent.

So if you are loading 5 million records it is not as simple as setting the commit size to that amount; testing is required to get a balance between an acceptable load time and JVM size.

If you are loading data to essbase using the ODI KM then I suggest always using a load rule, as it is the optimal method; if you don’t use a load rule and a record is rejected for any reason, e.g. an unknown member, the load method changes and starts loading record by record.



If you have a large data load with possible error records then you could be waiting for an eternity for the load to finish without using a rule.

There are other optimisation techniques outside of ODI, but if you are loading data to a BSO database then it is also definitely worth reading the section “Optimizing Data Loads”, which I am not going to cover as I believe in this case the documentation does a good job of explaining and, if applied, can drastically reduce load times.

There is another possible option for speeding up loading, but I will be perfectly honest and say it may not actually make that much of a difference when using the Hyperion technologies; it has a bigger impact when using RDBMS technologies.


If you look at a Hyperion related data server in the topology you will see “Array Fetch Size” which is set to 30 and cannot be changed.

This basically means that when the knowledge module Java code retrieves data from the source or staging area it will retrieve 30 rows at a time.

If you edit one of the Hyperion knowledge modules and look at the code for a load step then you will see the fetch array being used.


The code retrieves the value stored in the “Array Fetch Size” and then uses that as the fetch size in the JDBC SQL queries.


If you look in the operator at the step, the value is always set to the default of 30.

One way of being able to set the fetch size in the interface KM options is to first create a new option; this is done by right clicking a Knowledge Module and selecting “New Option”.


I have created an option called FETCH_SIZE with a type of Value and a default of 30.


Now the load step of the KM is edited and the fetch size is set by retrieving the value in the option using:

stmt.setFetchSize(<%=odiRef.getOption("FETCH_SIZE")%>)


In the interface KM options the FETCH_SIZE should now be available to be set.

If you are going to test out changing the default fetch size then the answer is not just to set the size as big as possible, because that will probably have a detrimental effect on performance or crash the agent JVM; it does need playing around with to find the optimal value.

ODI Series - problems using pre/post maxl option in interfaces

This problem was brought up on the ODI forum, so I thought it would probably be more beneficial if I wrote it up in case anybody else experiences the same problem.

I will be running through the issue on a windows OS but the concept is the same if using *nix

If you are integrating with essbase 11.x and using either the PRE_LOAD_MAXL_SCRIPT or POST_LOAD_MAXL_SCRIPT options in any of the interface KM options


It is possible you could encounter one of the following error messages in the operator.


or the following error

org.apache.bsf.BSFException: exception from Jython:
Traceback (most recent call last):
File "<string>", line 89, in <module>
at com.hyperion.odi.essbase.ODIEssbaseConnection.executeMaxl(Unknown Source)
at com.hyperion.odi.essbase.AbstractEssbaseWriter.beginLoad(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)com.hyperion.odi.essbase.ODIEssbaseException: com.hyperion.odi.essbase.ODIEssbaseException: Cannot run program "essmsh": CreateProcess error=2, The system cannot find the file specified
at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)


I will just briefly explain what ODI does behind the scenes when you use any Maxl options.

Once an interface is executed, the Java code used by the knowledge module checks if one of the Maxl options is populated; if it is, it then uses a property file to format and form a command line statement.


essmsh = MaxL shell
-m error = Sets the level of messages returned by the shell. Valid values are: all (the default), warning, error, and fatal.
-s = server name
-l = user and password
0 = Essbase server name
1 = Essbase Port
2 = User name
3 = Password
4 = Maxl Script

An example of the command line generated is

essmsh -m error -s essServer:1423 -l admin password F:\scripts\log.mxl

The code then uses the Runtime Java class to execute the statement.

The first reason behind the errors is that essmsh cannot be found because its location does not exist in the environment path variable.


In ODI this would generate the error message

Cannot run program "essmsh": CreateProcess error=2, The system cannot find the file specified

If the location of the essmsh command does exist in the path variable then the following can occur


In ODI this would produce the first error I highlighted at the beginning of this blog.

From version 11 a number of environment variables are not automatically set, and the use of the startMaxl script is the preferred method to start up a MaxL session.


If you have the essbase client installed and edit the startMaxl script then you can see that the variables ESSBASEPATH, ARBORPATH and PATH are set before calling the MaxL shell.

It is similar on the essbase server install except the startMaxl script calls a setEssbaseEnv script which sets additional variables.

The simple solution to fix the issue with ODI calling the MaxL shell is to set the environment variables either at OS user, system or ODI agent level.
By ODI agent level I mean updating the scripts that start up the agent and adding in the variables.



If the ODI agent is using the essbase client then example variable values would be

ARBORPATH = <MIDDLEWARE_HOME>\EPMSystem11R1\common\EssbaseRTC-64\<VERSION>

e.g.  E:\Oracle\Middleware\EPMSystem11R1\common\EssbaseRTC-64\11.1.2.0

The ESSBASEPATH variable should be exactly the same as the ARBORPATH variable

The following locations are required in the PATH variable.

<MIDDLEWARE_HOME>\EPMSystem11R1\bin; and <MIDDLEWARE_HOME>\EPMSystem11R1\common\EssbaseRTC-64\<VERSION>\bin;

e.g.  E:\Oracle\Middleware\EPMSystem11R1\bin; E:\Oracle\Middleware\EPMSystem11R1\common\EssbaseRTC-64\11.1.2.0\bin;

If the ODI agent is on the essbase server then just have a look at the setEssbaseEnv script to find out the values to use for the ARBORPATH,ESSBASEPATH and PATH variables.
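As a rough example of the sort of lines that could be added to a Windows agent start-up script (the paths shown are the ones used earlier in this post and would need adjusting for your own installation):

set ARBORPATH=E:\Oracle\Middleware\EPMSystem11R1\common\EssbaseRTC-64\11.1.2.0
set ESSBASEPATH=%ARBORPATH%
set PATH=%PATH%;E:\Oracle\Middleware\EPMSystem11R1\bin;%ARBORPATH%\bin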

Once the changes have been made, restart all the ODI related components (and possibly reboot the machine if using an older OS) to make sure the variables have been applied; the interfaces should then be able to execute the MaxL scripts.

ODI Series – External authentication with Microsoft Active Directory in ODI 11g

I recently tasked myself to set up external authentication to Microsoft Active Directory (which I will refer to as MSAD from now on) in ODI 11g Studio as it is quite common to be requested to add in this functionality.

External authentication was a new feature in the first release of ODI 11g and I was expecting the configuration to be simple and built in to the Studio; unfortunately there is no mechanism within the Studio to set up the authentication, and to be honest the documentation is not the most helpful.

Before anybody says to me there is an Oracle by Example on the topic I need to point out that yes this is true but it is based on OID and the example is missing lots of key information plus there is no information on setting it up for MSAD.

Once you know how to configure it then I agree it is really quite simple, but in the process I believe I hit upon a possible bug which threw me for a while; I ended up using an LDAP browser, logging in to the AD and watching the packets sent from the Studio with Wireshark, mostly because the error messages returned from the Studio were not the most informative.

Anyway I thought that I would write up the process not only for my benefit but for anybody else that has the requirement to set up authentication to MSAD.

If you believe any of the information I provide in this write up is incorrect or you know of an easier method please feel free to get in touch.

I will run through the process in ODI 11.1.1.5 but it should be valid for 11.1.1.3; there are some differences with 11.1.1.6 which I will point out as I go along.

Here is what the Oracle documentation has to say about setting up External Authentication

"Oracle Platform Security Services (OPSS) is a standards-based and portable security framework for Java applications. OPSS offers the standard Java Security Model services for authentication and authorization.
 
Oracle Data Integrator stores all user information as well as users' privileges in the master repository by default. When a user logs to Oracle Data Integrator, it logs against the master repository. This authentication method is called Internal Authentication.
 
Oracle Data Integrator can optionally use OPSS to authenticate its users against an external Identity Store, which contains enterprise user and passwords. Such an identity store is used at the enterprise level by all applications, in order to have centralized user and passwords definitions and Single Sign-On (SSO). In such configuration, the repository only contains references to these enterprise users. This authentication method is called External Authentication."

Ok, so ODI 11g is using OPSS to authenticate with an external identity store, let’s read the next section on how to configure and everything should make perfect sense.

"To use the External Authentication option, you need to configure an enterprise Identity Store (LDAP, Oracle Internet Directory, and so forth), and have this identity store configured for each Oracle Data Integrator component to refer by default to it.
Oracle Data Integrator Studio

The configuration to connect and use the identity store is contained in an OPSS Configuration file called jps-config.xml file. See "Configuring a JavaEE Application to Use OPSS" in the Oracle Fusion Middleware Application Security Guide for more information.

Copy this file into the ODI_HOME/client/odi/bin/ directory. The Studio reads the identity store configuration and authenticates against the configured identity store."


Basically when you want full details the documentation goes very vague and sends you off to the fusion middleware security guide.

So it sounds like all that is needed is to find and edit jps-config.xml and configure it to authenticate against the AD, JPS stands for Java Policy Store just in case you were wondering.


 If you open odi.conf in <ODI_HOME>\client\odi\bin then you will see that the Studio uses the jps-config.xml file.

The first thing you would expect is an example jps-config.xml file to exist within the ODI installation; unfortunately if you are using 11.1.1.3 or 11.1.1.5 there is no example file, while in 11.1.1.6+ there is a sample file.


^ ODI 11.1.1.6 comes with a sample jps-config file.

The Middleware Security Guide does provide an example jps-config.xml file and if you have deployed any middleware products with WebLogic then you will find a file under

<MIDDLEWARE_HOME>\user_projects\domains\<domainname>\config\fmwconfig

Once you do have a look at the file then you will notice there is a hell of a lot of configuration information and it can be confusing where to start.

There is also no information in the file relating to Microsoft Active Directory.

Most of the elements in the file are not actually required so can be stripped out and as I am kind I will shortly show you an example of the final working file that I am using.

I will first go through the steps of creating a user in the AD and testing the connection through LDAP.



A supervisor user was created in the AD epmmsad.com; the user doesn’t have to be named supervisor and does not need to be an administrator.

To prove my sanity I installed an LDAP browser on the machine hosting the Studio.


Server = EPMAD, Port = 389, Base DN: DC=epmsad,DC=com


I successfully tested the connection to the AD with supervisor user with a full DN of
cn=supervisor,cn=users,DC=epmmsad,DC=com

It is important to make sure that there are no problems connecting to the AD using LDAP before configuring ODI to use external authentication, because you can’t always expect a nice error message telling you exactly where you have gone wrong if there is a problem.









The jps-config.xml file configures: a SecretStore-based CSF provider, an LDAP-based IdentityStore provider and a JAAS Login Service provider (used to configure the login module service instances); a file based credential store service instance; an LDAP identity store service instance with user.search.bases set to CN=Users,DC=epmmsad,DC=com and group.search.bases set to CN=Groups,DC=epmmsad,DC=com; and the Identity Store and User Authentication login modules.
That is the content of a fully configured and working jps-config.xml file for MSAD; luckily you don’t have to go through the pain of getting it to that stage. If you are not editing odi.conf then the file needs to be copied to <ODI_HOME>\client\odi\bin

To configure it for your own AD you will just need to update the AD information under the <serviceInstances> element, in particular the user.search.bases property (CN=Users,DC=epmmsad,DC=com in my case) and the group.search.bases property (CN=Groups,DC=epmmsad,DC=com).
I only managed to find one example of using MSAD in a jps-config file, and that is what caused the most pain, as if you use that example then ODI will generate an error; I will go through it in more detail at the end of this blog.




File Based Credential Store Service Instance

In the jps-config example I provided you will see the above elements, which mean that a file based credential store, also known as a wallet-based credential store, is going to be used; the file will hold encrypted information for the user DN and password used to connect to the AD.

The cwallet.sso file has to be created and this was another stumbling block; if you read the Oracle By Example it says to run the run_credtool script, which doesn’t actually exist in ODI 11.1.1.3 and 11.1.1.5 (I will get on to 11.1.1.6 later).

After some searching I found the following article in Oracle Support –
ODI Studio - Unable to Configure LDAP Server Authentication Without Clear Password Information [ID 1319563.1]

The article highlights

“The problem is caused by internal Bug 12398394, were it is documented
"The issue is that ODI does not provide a facility to create credential stores for use with standalone ODI agent and ODI Studio."


It also goes through the steps on how to install the credstore tool; you can follow the document or follow my steps, as there were a couple of anomalies and you even get screenshots with my version.

Download the credtool - odi_credtool.zip from here

Expand the zip into any location


You should end up with the above file structure.

Download ANT version 1.7.0 e.g apache-ant-1.7.0-bin.zip


If you look at the file structure for odi-credtool you will see a directory \lib\org.apache.ant_1.7.0



Open the downloaded ANT 1.7.0 zip file, go into the directory \apache-ant-1.7.0 and extract everything to odi-credtool\lib\org.apache.ant_1.7.0


You should end up with a structure like above.


If you look at the odi-credtool structure you will see it has \oracle_common\modules and then a number of empty directories; you need to populate these directories with files from a Fusion Middleware 11g installation, e.g. OBIEE, SOA or EPM.


I am using an EPM 11.1.2.1 installation; copy the matching directories to the odi-credtool\oracle_common\modules folder and replace/merge


Edit the run_credtool script in the root directory of the credtool; you will probably need to update the JAVA_HOME variable to point it to a valid Java location. If you are planning on running the tool from its own directory then the other variables should be fine.


From command line change directory to the tool and execute the run_credtool script.



The first two inputs the tool requires are the Key and Alias to create a map for the credentials.


In the JPS-config file I provided you will see there are two properties “security.principal.key” and “security.principal.alias” and the values entered in the tool should match these.
e.g.
Alias :JPS
Key:msad.ldap.credentials

The next two inputs are the full distinguished name of the user in the AD that is going to be used and the password for the user.

User Name: cn=supervisor,cn=users,DC=epmmsad,DC=com

This is another area that threw me a little, as the examples I have seen just used cn=username, which I didn’t believe would work, and I found out that it wasn’t binding when watching the ldap packets.

I also realised that I needed to shut down the Studio and reopen it each time I made a configuration change as I am sure it was caching some of the information.

The final input is the full path and filename for the jps-config file e.g.
JPS Config:<ODI_HOME>\oracledi\client\odi\bin\jps-config.xml


If the tool was successful the credential file cwallet.sso will have been generated.

If you are running ODI 11.1.1.6 then you can still use the above method, or use a much simpler method to create the cwallet.sso file: in the bin directory there is a script called odi_credtool.sh which has been created for Unix type operating systems.


If on a windows OS then you will need to edit the file.



Now what I expected you would be able to do is just copy the statement from java onwards, open a command prompt, change directory to <ODI_HOME>\oracledi\client\odi\bin and paste it.


Unfortunately this generates an error as it can’t find a required jar, odi-core.jar, which exists in the following locations

<ODI_HOME>\oracledi.common\odi\lib
<ODI_HOME>\oracledi.sdk\lib


and not ../../jdev/extensions/oracle.odi.navigator/lib/odi-core.jar as specified in the odi_credtool script.

plus jps-manifest.jar is in <ODI_HOME>\modules\oracle.jps_11.1.1 in my fresh install of 11.1.1.6

So I updated the classpaths and this is the final working command line

java -classpath E:\Oracle\Middleware\OD11g\oracledi.common\odi\lib\odi-core.jar;E:\Oracle\Middleware\OD11g\modules\oracle.jps_11.1.1\jps-manifest.jar -Doracle.security.jps.config=E:\Oracle\Middleware\OD11g\oracledi\client\odi\bin\jps-config.xml oracle.odi.core.security.JPSContextCredTool


The inputs are exactly as I explained earlier except there is no need to supply the path to jps-config.xml; there is an xml validation warning but that can be ignored as the cwallet.sso file is correctly generated.

So now we are ready to set up the master repository to use external authentication. It is possible to switch an active master repository to use external authentication within the Studio by going to ODI > Switch Authentication Mode, but for this example I am going to create a new repository.


In the Studio you follow the normal process for creating a master repository by first providing the database connection information.


Next select “Use External Authentication” and enter the AD user you want to authenticate with.


I am using the ODI repository for password storage but don’t worry, the AD password is not stored in the repository.

Now once you click OK, if everything has been configured correctly you will see the master repository being created in the log window; otherwise you will be hit with an error message which is sometimes meaningful and sometimes not, depending on what is wrong.


If you log into the master repository and go to users you will see the user has an External GUID which is a unique identifier for the user in the AD.

To add a new AD user into the Studio just select “New User” and enter a user that exists in the AD.


If clicking Retrieve GUID returns the identifier then you are pretty much assured that all is well.


You then should be able to log into the Studio using the AD user.

Going back to the issue that threw me for a while: I was not sure whether the jps-config file was not configured correctly or whether I had managed to set up the wallet in the wrong way.

When I was configuring the AD details in the jps-config file I only came across one example on the internet using MSAD, which was in the Oracle document “Oracle® Fusion Middleware Application Security Guide 11g Release 1 (11.1.1)”.

In the oracle example it uses the following for the username and group properties.


Seeing as it is an Oracle security document I could only assume that ODI would accept these elements.



Caused by: oracle.security.jps.service.idstore.IdentityStoreException: java.lang.ClassCastException: [Ljava.lang.String;
at oracle.security.jps.internal.idstore.AbstractIdmIdentityStore.initStore(AbstractIdmIdentityStore.java:156)
    at oracle.security.jps.internal.idstore.AbstractIdmIdentityStore.getIdmStore(AbstractIdmIdentityStore.java:110)


Once I configured using the extended property element, Studio would throw the above error, which as you can imagine does not point to where the problem could be.


If the jps-config file is updated to the above format then ODI allows the configuration; I have raised an SR with Oracle and I am awaiting a response from the development team to acknowledge whether it is a bug or not.

Just as I finish the write up I get an update on the SR; it has been logged as “Bug 13855998 - SUPPLIED JSP-CONFIG.XML FILE IS INCORRECT RESULTING IN FAILED AUTHENTICATION”.

I think I will end it there but hopefully you have found this useful.

Planning 11.1.2.1 patch –Maintenance Mode utility

I just noticed a planning patch in Oracle Support and thought I would share it, as I know the ability to change the maintenance mode in planning through a utility has been asked for so many times over the years on the planning forum; well, it looks like Oracle have taken note of all the requests and finally come up with a utility to do this.

The patch details are - Release 11.1.2.1.101 Patch Set Exception (PSE): 13801144

Defect Number – 13798468

Details

A new utility is provided to set users’ log on level, which corresponds to Planning’s “Application Maintenance Mode” functionality by which administrators can grant and withdraw access to applications. 

If users are logged on to the application and administrators withdraw their access, users are forced off the system.

From the directory, enter this command at the Command Prompt, one space, and the parameters, each separated by a comma:

On Windows: MaintenanceMode.cmd
On UNIX: MaintenanceMode.sh

Options:
•    /A=app - Application name (required)
•    /U=user - Name of the administrator executing the utility (required)
•    /P=password - The administrator’s password (required)
•    /LL=loginLevel

[ALL_USERS|ADMINISTRATORS|OWNER] - Specify which users the utility affects (required):

o ALL_USERS - All users can log on or continue working with the application.

o ADMINISTRATORS - Only other administrators can log on. Other users are forced off and prevented from logging on until the parameter is reset to ALL_USERS.

o OWNER - Only the application owner can log on. All other users are prevented from logging on. If they are currently logged on, they are forced off the system until the option is reset to ALL_USERS or ADMINISTRATORS. Only the application owner can restrict other administrators from using the application.

•    /DEBUG=[true|false] - Specify whether to run the utility in debug mode. The default is false. (optional)
•    /HELP=Y - View the utility syntax online (optional)

For example, type:
MaintenanceMode.cmd /A=app1,/U=admin,/P=password,/LL=ADMINISTRATORS


I thought I would also install the patch and test it out so here are the steps to get up and running with it on windows.

The prerequisite to installing this patch is that you need to have patched planning to 11.1.2.1.101 – Patch 12666861

Please make sure you have carried out all your backup procedures before applying any patches.


If you have not yet applied patch 11.1.2.1.101 then download it from Oracle Support.


Also download the patch 13801144 which contains the new Maintenance Mode utility.

Stop all the EPM related services.

Extract the patches to /EPMSystem11R1/OPatch


OPatch is initiated from command line and the format for windows is

opatch.bat apply <MIDDLEWARE_HOME>\EPMSystem11R1\OPatch\<patch number> -oh <MIDDLEWARE_HOME>\EPMSystem11R1 -jre <MIDDLEWARE_HOME>\jdk160_21

Open a command prompt, change directory to the OPatch directory and run the above command (updating the paths and patch number to match the environment).

For example
E:
cd E:\Oracle\Middleware\EPMSystem11R1\OPatch
opatch.bat apply E:\Oracle\Middleware\EPMSystem11R1\OPatch\12666861 -oh E:\Oracle\Middleware\EPMSystem11R1 -jre E:\Oracle\Middleware\jdk160_21



Repeat for the utility patch e.g.

opatch.bat apply E:\Oracle\Middleware\EPMSystem11R1\OPatch\13801144 -oh E:\Oracle\Middleware\EPMSystem11R1 -jre E:\Oracle\Middleware\jdk160_21



To view which EPM patches have been applied on the server through OPatch you can use the following command.

opatch.bat lsinventory -oh <MIDDLEWARE_HOME>\EPMSystem11R1 -jdk <MIDDLEWARE_HOME>\jdk160_21
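
For example, using the same environment as the apply commands above:

opatch.bat lsinventory -oh E:\Oracle\Middleware\EPMSystem11R1 -jdk E:\Oracle\Middleware\jdk160_21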


Next delete the temp directory on the planning server for the deployed planning web application - /user_projects/domains/EPMSystem/servers/<managed server name>/tmp

e.g. E:\Oracle\Middleware\user_projects\domains\EPMSystem\servers\Planning0\tmp

This is so when the planning web application is started the ear file in the patch will be deployed and a new tmp directory created

The second patch will have deployed a file called MaintenanceMode.cmd.template to /EPMSystem11R1/products/Planning/bin   


Copy this file to /user_projects/<instance name>/Planning/planning1, rename it to MaintenanceMode.cmd and then edit the file


The path in the file needs to be updated to the folder where the file has just been copied to.

Save and you are ready to go so start up the EPM services.

It is worth noting that the first patch that was applied updated a few CDFs so they will need to be refreshed by logging into each planning application and choosing
Administration > Application > Refresh Database > Update custom defined functions

Anyway let’s test the utility.


If you just run the Maintenance Mode utility without any parameters it will display the syntax required


The application is currently set just to be used by the Owner.


So let us change that to all users using the utility.


The change has been reflected in planning


How about changing it to Administrators while users are logged in.


Unfortunately nothing has changed there and it is not the nicest log out message, but at least it does log the user out; if the user tries to log into the application again the standard “application is in maintenance mode” message is displayed.

I am sure this utility will be welcomed by many and it can easily be added into automated planning refresh scripts.
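
As a rough illustration, a nightly refresh script could wrap the utility around the existing refresh step along the lines of the sketch below; the instance path, application name and credentials are just example values and the refresh step itself is only a placeholder.

@echo off
rem Example values only - adjust the path, application and credentials to the environment
set PLN_BIN=E:\Oracle\Middleware\user_projects\epmsystem1\Planning\planning1

rem Lock the application down to administrators before the refresh
call %PLN_BIN%\MaintenanceMode.cmd /A=app1,/U=admin,/P=password,/LL=ADMINISTRATORS

rem ... run the existing refresh/calc steps here ...

rem Reopen the application to all users once the refresh has completed
call %PLN_BIN%\MaintenanceMode.cmd /A=app1,/U=admin,/P=password,/LL=ALL_USERS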

External authentication with Microsoft Active Directory in ODI 11g – Part 2

In the last part I went through the painful (but easy when you know how) process to set up the ODI 11g Studio and repository to use external authentication through Microsoft Active Directory.

To be complete I thought I would cover setting up external authentication for the standalone agent, which I go through today, and then in the next part the J2EE agent and ODI console.

If you are considering setting up external authentication for a standalone agent then if you haven’t already make sure you read through the last part as there will be an assumption that you have configured the jps-config.xml file and generated the cwallet.sso file.

So let us see what the ODI documentation states about setting up the external authentication.

Standalone Agent

The configuration to connect and use the identity store is contained in an OPSS Configuration File called jps-config.xml file. Refer to the Oracle Fusion Middleware Application Security Guide for more information.

Copy this file in the ODI_HOME/agent/bin/ directory. The agent and the command line scripts will authenticate against the configured identity store.


So once again, in its usual vagueness, the documentation does not provide much detailed information, but after already configuring the Studio you get the idea that you need to configure a jps-config.xml and place it in the agent/bin directory.

Before I go through the process I created a user in the AD to be used for the standalone agent called odi_agent, created the user in the Studio and then I configured the jps-config file and generated the cwallet.sso file based on the process in the last blog.


I generated the files in the <ODI_HOME>\oracledi\agent\bin directory.

The next step is to update the odiparams script to enter the connection information and agent user credentials.


The master repository encoded password was generated using the encode utility.


The ODI_SUPERVISOR variable was set to the AD user odi_agent.


The supervisor password was generated using the encode utility again.


If you look at the ODI_JAVA_OPTIONS variable you can see that there is a reference to the jps-config.xml file; the configuration assumes that the file is located in the same directory. If, say, you have the agent and the Studio on the same machine and want to use one instance of the file and wallet, then you can update it and add the full path to point to one location.

That’s pretty much all you need to do, so you can start up the agent and all is good, well….

2012-04-03 20:40:44.920 ERROR odi.core.security.internal.ODIJpsHelper.createSubject Get exception. User:odi_agent. Execption msg is:oracle.security.jps.JpsRuntimeException: java.security.AccessControlException: access denied (oracle.security.jps.service.credstore.CredentialAccessPermission context=SYSTEM,mapName=JPS,keyName=msad.ldap.credentials read)        at oracle.security.jps.internal.jaas.module.idstore.IdStoreLoginModule.initializeLM(IdStoreLoginModule.java:663)

Unfortunately I was hit by the above error which didn’t fill me with joy.

I then added the additional parameters to the ODI_JAVA_OPTIONS variable in the odiparams script to turn on debugging.

"-Djps.auth.debug=true" "-Djps.auth.debug.verbose=true"

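In odiparams.bat that just means appending the two switches to the existing ODI_JAVA_OPTIONS definition, something along the lines of the line below (the exact form will depend on how the variable is already defined in your copy of the script):

set ODI_JAVA_OPTIONS=%ODI_JAVA_OPTIONS% "-Djps.auth.debug=true" "-Djps.auth.debug.verbose=true"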

A large number of errors were generated and it looks like the issue relates to java permissions on all the related packaged jars.

I decided to leave it there as there is the following bug on Oracle Support.

Bug 13255270: 'ACCESS DENIED...' ERROR WHEN STARING ODI STANDALONE AGENT WITH EXTERNAL AUTHENT

UPDATE : 25th April 2012

The bug in Oracle Support has been updated and I also got an anonymous comment on this blog entry from I assume the same person that has updated the bug.

I have removed my workaround as it was only temporary until a solution was available, hopefully the ODI documentation will be improved to reflect this.

First create a file called server.policy in the <ODI_HOME>\oracledi\agent\bin directory.


grant codeBase "file:${ODI_HOME}/../../-" {  
permission oracle.security.jps.service.credstore.CredentialAccessPermission "context=SYSTEM,mapName=*,keyName=*", "read";
};


Paste the above grant statement into the file.

grant codeBase "file:${ODI_HOME}/../../-" {  
permission oracle.security.jps.service.credstore.CredentialAccessPermission "context=SYSTEM,mapName=JPS,keyName=msad.ldap.credentials", "read";
};


Alternatively, as in the second grant statement above, it is also possible to use the specific mapName and keyName that are being used in the cwallet.sso file.


Edit odiparams.bat/sh and make sure ODI_HOME is set as an absolute path.


This could also be achieved by setting ODI_HOME as an environment variable instead of editing the file.

Also add the line

set ODI_ADDITIONAL_JAVA_OPTIONS=%ODI_ADDITIONAL_JAVA_OPTIONS% "-DODI_HOME=%ODI_HOME%"


The agent now starts up without any issues authenticating with the AD account.

Now if you want to take it one stage further and use OPMN to manage the agent (and on Windows create it as a service) then you could have a look at a previous blog I wrote on setting it up.

If you are going to use OPMN and go down the route outlined in the Oracle support document 1274484.1 or the equivalent Oracle by Example and install FMW 11g WebTier Utilities then

•    You don’t need to install WebTier Utilities 11.1.1.2 as 11.1.1.6 can be used and is available from here

•    When installing you don’t need to select Oracle Web Cache and can just select Oracle HTTP Server which will then install OPMN.

There are also a few things to watch out for when configuring OPMN to manage the agent with external authentication.


The configuration uses a template file in the agent\bin directory called odi_opmn_standaloneagent_template.xml

It is possible the usage of “./jps-config.xml” may be an issue when starting up the agent through OPMN, as an error will be generated saying the file can’t be found; in this case the full path to jps-config.xml can be entered.

The configuration also defaults to a user name of SUPERVISOR, so if you are using a different user for external authentication this will need to be updated.

The changes can be made directly to the template file before adding the agent to OPMN or after adding the agent and updating opmn.xml which will be created in
<INSTANCE_HOME>\config\OPMN\opmn


Any changes to opmn.xml will require a full restart of OPMN.

Next time I will cover the steps to use external authentication with the J2EE agent and ODI console.

EPM 11.1.2.2 Installation

For the last four years I have written a blog about the installation of each release of EPM 11 and this year there is going to be no exception as 11.1.2.2 has recently been released.

This release is more than just a bunch of fixes as it brings in new products, a multitude of product enhancements and changes to installation and configuration which I will be covering today.

This blog is going to be around highlighting where there are changes from the previous release and not a detailed installation guide as there are a number of documents now available that help cover that area.

I apologise if the blog contains any inaccuracies as a lot of it is new and I may have misunderstood a few details.

First I think it is worth going through a number of the new features and changes that I have picked out of the documentation.

“IBM WebSphere 7.0.0.19+ is now supported as an application server.”– I know deployment to WebSphere has been on the cards for a while and it is now supported, the deployment is not through the EPM configurator and there is a script available to deploy the web apps, the full process to follow is available here.

Firefox 10.x+ and Internet Explorer 9 are now supported Web browsers. – A big leap for the support of Firefox, as 11.1.2.1 supported 3.5+, though it is still not supported for FDM. IE9 is not supported on XP SP3.

I think the browser versions are going to play a big part if you are implementing Planning or HFM as they use new ADF interfaces which are optimised for IE9 and Firefox 10

Microsoft Office 2010 64 bit is now supported.

“After completing an EPM System deployment, you can generate a deployment report that provides information about configured Web applications, Web servers, databases, and data directories used by EPM System. This report can help you troubleshoot issues that might arise in your deployment.”– I have already seen a presentation of this feature and I will provide an example after I have gone through the configuration.

“The Oracle Enterprise Manager “Fusion Middleware Control” is now installed and deployed with EPM System. You can use this tool to manage the WebLogic domain and all Java Web applications in EPM System out of the box.”  - This was available in previous releases by extending the WebLogic domain which I blogged about here and now you don’t even have to worry as it is all configured automatically.

“The EPM System Media pack on Oracle Software Delivery Cloud has been simplified. Software downloads have been merged together.”– I will go through this shortly.

“Installation of Oracle HTTP Server is now optional. If you choose not to install Oracle HTTP Server, for example in a development environment, Oracle Hyperion Enterprise Performance Management System Installer installs an embedded WebLogic HTTP Server as part of Oracle Hyperion Foundation Services that acts as a proxy server. In a production environment, Oracle recommends that you install Oracle HTTP Server for use with Oracle WebLogic Server or IBM HTTP Server for use with WebSphere. You can also install and manually configure Apache HTTP Server with WebLogic Server.”– Interesting to see that there are number of additional options available instead of just OHS and IIS

“Microsoft Windows Installer (MSI) Client Installers are now provided for Oracle Essbase Client, Oracle Essbase Administration Services Console, Oracle Essbase Studio Console, and Oracle Hyperion Financial Management Client.”– It was possible to get EAS and Studio installers in a 11.1.2.1 patch but it looks like all the clients are finally available as standalone installers.

“Three new “rapid deployment” documents provide step-by-step instructions for building a typical Oracle Hyperion Planning, Financial Management, or Essbase development environment on a single server running Microsoft Windows.”– These documents were actually available in 11.1.2.1 but have now been updated for 11.1.2.2 and merged into the EPM documentation.

“A new Oracle Enterprise Performance Management System Standard Deployment Guide outlines the best-practice approach for deploying EPM System products. This approach is based on creating a base deployment of the products and then scaling out the services to handle the needed capacity.”– This was also available for 11.1.2.1 and has been brought up to date and merged into the standard documentation. Is there such thing as a standard deployment?

A new “ADF” Web application has been added for Financial Management. – Is this the start of FM moving away from being forced down the windows IIS route? Once again I think the new ADF interface has been optimised for IE9 and Firefox 10, but I believe you are not forced to use it and can use the existing one.

FM Clusters are now managed centrally through the SS Registry  - “You can now manage Financial Management clusters from one machine. Cluster information is now stored in Oracle Hyperion Shared Services Registry rather than in the Windows registry.”

“The Oracle Hyperion Financial Reporting Print Server is now part of the Financial Reporting Web application. You no longer have to install the Print Server as part of the Financial Reporting Studio installation, and you no longer need Microsoft Office on the Print Server.”– All change again in this release but I believe it is a milestone as at last it looks like the print server is not windows only and there is no need to install a PDF renderer such as GhostScript.

“You can now deploy EPM System Web applications to a single managed server (compact server) in Development, Test, and Production environments. This reduces the overall memory requirement of EPM System and reduces startup time.”– Compact deployment was available in 11.1.2.1 and I blogged about it here but it was only supported for development and was a manual configuration. I will cover this new feature in more detail in a future blog.

There are additional important considerations if you are upgrading to 11.1.2.2 and using planning.

“Oracle Hyperion Calculation Manager has replaced Oracle Hyperion Business Rules as the mechanism for designing and managing business rules, therefore, Business Rules is no longer released with EPM System Release 11.1.2.2”– Business Rules have finally bitten the dust, so if you are currently using them and planning to upgrade then it is worth investing time getting up to speed with Calc Manager. I will cover the migration in the near future.

The documentation also states that the server hosting planning must have at least 16GB RAM; I am interested to understand if this new release of planning really does require that amount of memory and how that fits into the world of compact deployment.

There is also reference to having to upgrade your client browser to use the new version of Planning –

“The new, improved Planning user interface requires efficient browsers to handle interactivity provided through Web 2.0 like functionality. In our testing, Internet Explorer 7, Internet Explorer 8, and Firefox 3.x are not sufficient to handle such interactivity, and the responsiveness in these versions of browsers is not as fast as the user interface in the previous releases of Planning. For this reason, we strongly recommend that you upgrade your browser to Internet Explorer 9 or Firefox 10 to get responsiveness similar to what you experienced in previous releases.”

It looks like it is still possible to use the existing 11.1.2.1 user interface by setting the property ORACLE_ADF_UI in planning to false.

There have been a number of enhancements with Shared Services -
•    Allows you to rename the default admin account during the deployment process. 
•    After deploying Foundation Services, you can deactivate the default EPM System Administrator account after provisioning another user with the Shared Services Administrator role.

I know the ability to change or rename the admin account has been raised a number of times in the past and it is good to see that Oracle have taken note.

LCM has been given an overhaul with many new features and changes -
•    Simplified User Interface
•    Simplified Migration Definition File
•    Simplified Migration Status Report
•    Automatic Application Shell Creation for Classic Applications
•    Shared Disk Location defined in the configurator
•    Support for ERPi
•    Additional FR artifacts – Annotations, User POV, Batch jobs
•    New replace option for Reporting and Analysis where only artifacts that have a newer last modified timestamp will be imported.

So you can see this release does bring in quite a few additions and changes and that is before even getting into the detail of all the product enhancements.

Anyway, on to the installation, and the first step is to download the assemblies from edelivery; I am going to be installing on a single Windows 2008 R2 x64 machine using Oracle 11g as the repository.

The installation prerequisites are available here and the support matrix here


Straight away you will notice there is a change from the previous releases and many of the files have now been combined.

Parts 1-4 contain the common components and WebLogic, OPMN and the installer plus the core products; parts 5-7 contain the rest of the product assemblies.

Oracle HTTP Server is a separate download.

The standalone clients are available in one download.

So hopefully this will cause less confusion than in the past, a prime example being the additional contents download.

There are also additional product downloads available that sit outside of the EPM installer such as DRM, EAL and ODI.

So how do you know which files you require? Well, the easy option is just to download them all, or…


You can click the README button


This will open a report that breaks down each of the license components such as “Essbase Plus” and provides a list of the files to download.


If you expand the Part 6 download you can see that it contains all the essbase related product assemblies.


The clients download includes every standalone client installer and weighs in at 1.6GB in total.

Once the files have been downloaded, extract them to the same base location; the process for installation is then like previous versions, so the installer is started by running installTool.cmd/sh as an administrator.

If you are installing on a distributed environment make sure you read the information here


When the installer starts up it will run through all the prerequisite checks; this screen is slightly different than in previous versions and you don’t have a command window running in the background, which you used to have to check to see if any of the checks had failed for such things as UAC.


If any of the checks fail these will now be displayed in an error panel window.


Additional prerequisite checks will also be displayed indicating whether they have passed or failed; these checks look to be the same as previous 11.1.2 releases, though I don’t see the check against OHS anymore, which is probably because it used to say that it had passed the check even if it hadn’t.


Select the location for EPM installation.


As this is a new installation only one option is available.

The option to install components by tier has now been removed, probably because it was a bit of a waste of time as most of the time “choose components individually” was selected.


Oracle HTTP Server is not checked by default as it is optional, so if you intend on using it then it will need to be selected.

Now that many of the installation files are combined, the likelihood is you are going to be given the option to install lots more products than with previous versions.


All the product components were installed successfully so it is on to the configuration.

Before configuring make sure you have met all the configuration prerequisites outlined here

If you plan on using Web Services with products like APS, FM, PCM then make sure you follow the steps on creating the required schemas using the repository creation utility.

If you are configuring on a distributed environment then make sure you read here


This screen has changed a little from previous versions where there were the options to create a new instance or modify an instance.


The first step, like in previous versions, is to define the connection information to the Shared Services and Registry database.


As previously the options are available to select to configure all the products at once or individually, if you are going to use a separate schema/database for each product then you will need to configure the products individually.


There are now two additional options for Reporting and Analysis; these are not new in terms of configuration, all that has happened is the options have been expanded to allow Framework Services and the FR RMI ports to be configured separately (Configure Database is required if configuring Framework Services).

Besides that the options are pretty much the same as previous 11.1.2 releases so I am not going to go through each of the products and the configuration as there is enough documentation out there to understand the process.


There is a new option to set the LCM Export Import Location, which could be on a shared disk; previously the location was fixed unless changes were made to update the path in the Shared Services registry.


As this is the first configuration a new domain is created and the admin server port and admin user details set.

Please note this user is the admin user for WebLogic and has nothing to do with Shared Services; in the past there have been a number of posts on the Oracle forums where there has been confusion between the two sets of admin users.


By default “Deploy the web application to a single managed server” is selected; this is known as compact mode, which was available in the previous version.

Compact mode is useful if you want to combine a number of web applications under one managed server, which would then share the same JVM and run on one port; this helps reduce the overall memory consumption and start up time, though it is worth understanding that if the JVM crashes for any reason then you take out all the products running under that managed server.


If you deselect the option then the web applications will each be deployed to their own managed server and port.

It also looks possible to have a number of web applications to run under one managed server and then have the rest deployed to their own managed server.

I am deploying each of the web applications to their own managed server and will be covering compact deployment and scaling out in a future blog.

If you are deploying FM then you will notice that there is the new ADF web application that is deployed to the same managed server as the Web Services.

If you are combining all the products into one database then you have the option to use the Shared Services database or configure them to a new schema/database.

If you are configuring a database/schema per product then you would choose first-time configure of database.


Here is another change which I outlined earlier, in that you can define the admin account name to be used instead of it being fixed to admin; I am going to be daring and test out the new functionality, so I changed it to EPMADMIN.


 Once all the products have been configured then like previously the Configure Web Server should be run.

The Configure Web Server component has two additional options: as highlighted earlier it is possible to use the Embedded WebLogic HTTP Server, and there is also the option “Setup Registry for manual web server configuration”; if you want to manually configure the HTTP server then the relevant config files will be generated in
<MIDDLEWARE_HOME>\user_projects\<instancename>\httpConfig\autogenerated


If you select the embedded WebLogic HTTP server you will see that it is deployed to the foundation port 28080.

I am assuming this means you can access workspace and all the web applications through that port which I am sure has the possibility to cause no end of confusion.

There have not been many changes to the windows services created.


If you deployed any of the web applications to a single managed server a new windows service will be created called "Hyperion EPM Server - Web Application"

Hyperion Financial Management - Web Services has been renamed to Hyperion Financial Management – Web Tier

The print server service is no more and the majority of the service names still start with HyS9

Another nice new feature is the ability to generate a deployment report that lists

•    All logical Web applications and all Web servers that are configured
•    The Web application URL and domain name for each Web application deployment on a machine
•    All databases configured for EPM System
•    The data directories used by EPM System products


To generate the report


Open a command line window and navigate to
<MIDDLEWARE_HOME>/user_projects/<instancename>/bin

Run the command epmsys_registry report deployment
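
For example, on the single server install used here (assuming the default instance name of epmsystem1):

cd /d E:\Oracle\Middleware\user_projects\epmsystem1\bin
epmsys_registry.bat report deployment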


 This by default will generate the report at
<MIDDLEWARE_HOME>\user_projects\<instancename>\diagnostics\reports\deployment_report.html


If you start up the WebLogic admin server and go to http://adminservername:7001/em then you should be able to log into Enterprise Manager with no additional configuration required.

I think I am going to leave it there for today as I need to test out all the products, I will update if I find any serious issues.

Next time I will be testing out applying the maintenance release to an existing 11.1.2.1 environment.

Applying EPM 11.1.2.2 Maintenance Release

In the last blog I went through a clean install of 11.1.2.2 and covered the key changes in this latest release; today I am going to go through an overview of applying the maintenance release from 11.1.2.1 to 11.1.2.2.

Last year I covered the maintenance release from 11.1.2.0 to 11.1.2.1 and to be honest the process is pretty much the same, so I don’t really want to repeat too much and will concentrate on differences and any issues that may arise.

I have tested applying the maintenance release to a couple of Windows environments, which I know is not a true reflection on whether issues are going to be experienced or not, but this is only intended to give an idea of the possible process to follow.

The only difference between the environments was one of them started its life as 11.1.2.0 and then went to 11.1.2.1 and now 11.1.2.2 and the other started off as 11.1.2.1, both using Oracle as the repository.

Please do note this is only an overview and the documentation should be studied as it does contain vital information.

There are some key considerations to understand before applying the release

  • Business Rules no longer exist and you must migrate to Calculation Manager, I will cover this in more detail shortly.

  • Planning/HFM both use a new ADF interface (it is possible to use the existing one) which brings its own challenges.
    • Optimised for IE9/Firefox 10 – Do you currently use these versions or are you able to move to them?
    • Increased resource requirements – Can your current infrastructure cope?
I do suggest having a trial of the new ADF interfaces as they are quite a big step change which will impact the users; it is also worth regression testing for possible issues/bugs and judging the performance, as there may be some noticeable differences from previous versions.

There are a number of prerequisites before applying the maintenance release; for the full detailed rundown head over to the documentation.

  • You must apply the maintenance release to all EPM System products in the deployment. You cannot apply the maintenance release to only some products.

  • All EPM services must be stopped.

  • If you have any of the clients installed in the environment these must be removed as the clients are now standalone and the installer will not upgrade them.

  • If linked objects (LROs) in essbase are being used they must be exported and deleted first.

  • If you are using Business Rules you must migrate to Calculation Manager. There are a few options available and the documentation seems to differ between the configuration guide and the Calculation Manager documentation.

  • If using Web Services manager with PCM then MDS schema will need upgrading.

  • If you are using Financial Close then there are quite a few prerequisites so go through the documentation.

So to just cover some of the above topics: if there are any clients installed on the environment you are upgrading then they will need to be uninstalled on each machine.


Above is a list of the standalone clients that are available as MSI installers and combined into one download.


The uninstaller can be started from <MIDDLEWARE_HOME>\EPMSystem11R1\uninstall\uninstall.cmd/sh



You don’t need to uninstall the EPMA and FDM clients as they are still delivered as part of the installer.

If you are applying the maintenance release to 11.1.2.0 and have FR Studio installed then that will be in the list and should be selected.

 

The uninstaller will need to be run on each machine where clients are hosted.

The Financial Reporting Print Server is no more in 11.1.2.2 and has been merged into the web application so it can be removed.



Run FRRemovePrintServer.cmd from the location where the FR Studio was installed.


This should remove the windows service and the HR printers.

If you are upgrading from 11.1.2.0 then follow the process here.


If you are applying the maintenance release from 11.1.2.1 and have Financial Reporting Studio then it can be removed from Programs and Features. (Do not uninstall until after you have removed the Print Server)

If you are using Ghostscript or equivalent then this can also be uninstalled as it is no longer required.

If you are using Essbase and LROs then you will need to export and delete them using Maxl

e.g.
export database sample.basic lro to server directory 'exportedLROs';
alter database sample.basic delete LRO all;


These can be imported back in after the maintenance release has been configured using the import LRO statement; more information on how to import is available in the Essbase Tech Ref.

Right on to business rules, as this is a maintenance release then I would probably advise to export the rules from EAS




Making sure that “For Calc Mgr” is selected, and then import them into Calculation Manager before applying the maintenance release, at least this way you know that the rules are definitely going to be available in Calc Manager.

The alternative method which I suppose is more prevalent if moving from an earlier release is outlined in the documentation

The documentation states that for each rule there must be no associated outline (this is for planning only, for essbase rules they should be all exported)


If there is an associated outline remove it.


“All locations” is not valid in Calculation Manager


A location should be applied to each application you want the rule to be applied to; a rule cannot be applied to multiple plan types in the same application if it is to be migrated to Calculation Manager.


Once again "All locations" is not valid for Access Privileges.

 

Access permissions should be applied to each location.

That is the way the document reads to me but if you then look at the Calculation Manager documentation it has a table which defines the possible outcomes depending on how your rules are defined.

I also tested leaving the “All Locations” on the rules and it is possible to import them into Calc Manager 11.1.2.2.

So it is definitely worth having a study of the table and then decide what works best for you and there is also much more detailed information around migrating business rules available here.


Anyway, on to applying the maintenance release, and the first step is to download the assemblies from Oracle Software Delivery Cloud


I covered the differences in the file structure in the last blog.

Extract the files to the same base location; if you are applying to a distributed environment then it is a good idea if possible to make the install files available on a share to save having to keep extracting them


Log in with the same user that performed the original install and start the installer using installTool.cmd/sh; a number of pre-installation checks will be made and if there are any issues an error panel will be displayed.


The same prerequisite checks are made as if it was a clean install and as this is a maintenance release you would hope that all the checks will pass.


Depending on the products installed on the machine it is possible the PATH environment variable check will fail; in this case you would need to check the variable and see if you can remove any of the references.

 

The host name check may also fail if it has resolved to an IP address; this is more than likely to happen if it is a personal install or there are network/DNS issues, and it can be fixed by updating the hosts file with the IP and hostname.

 

The Middleware Home will have been previously defined so it cannot be changed.


The “Apply maintenance release” should be the only option available.


All the product components that the maintenance release will be applied to will be automatically selected; you will notice that the majority of clients are no longer available.


If the maintenance release has run smoothly then all the components should be marked with a green tick.


I did experience an issue with OPMN on one of the machines I applied the release to, but to be honest the machine has been put through its paces and there could be a number of reasons why it failed.

I did find a workaround and if anybody else does get an OPMN failure with the log stating “java.lang.UnsatisfiedLinkError” then I will post a possible solution.

If you are installing in a distributed environment then you should repeat the same steps on each machine.

On to the configuration


The option to modify/create an EPM Oracle instance has been removed in this version and as it is a maintenance release you shouldn’t need to modify anything.


All the options that need to be configured are selected; this is all well and good but it assumes that the configuration is done using one database. If the EPM environment has been configured with one database/schema then it is possible to just carry on, but if it has been configured for one database/schema per product then you will need to run through it in stages.


Connect to a previous configured Shared Services database should be selected.


Deploy to existing domain should be kept.


In this release there is the option to deploy the web apps to a single managed server (compact deployment).

If you want to start using this feature then you will need to deploy all the web apps to their own managed server first before the option can be selected.


If you are using HFM you will notice an extra web application deployment, which is for the new ADF interface and is combined with the Web Services.

This deployment will generate a new windows service name of “HyS9EPMAWebTier” and a display name “Hyperion EPMA Web Tier - Web Application”, so if you have a batch script that starts up the HFM services you will need to update it from the previous name of “HyS9FinancialManagementWebSvcs” – “Hyperion Financial Management- Web Services” to the new name.


When configuring the databases make sure that connect to a previously configured database is selected and double check that the correct schema/db has been selected as some of them were incorrectly selected while I was configuring.


Once the configuration is complete make sure that the Configure Web Server is carried out.


I noticed that in the Configurator, even though EPMA had been successfully configured, it was still enabled, which I am sure is just a feature and will be fixed at a later stage.


Once all the web applications have been successfully deployed if you select any of them again it should be possible to deploy them to a single managed server if required.


The maintenance release doesn’t do a good job of cleaning the existing items from the start menu, so you end up with additional menu items which are exactly the same as the previous ones.

You will also notice that new versions of jdk and jrockit are being used and the previous ones are still there, if you have been through all the maintenance releases you will end up with three versions.

Once the configuration is complete the clients could be installed with the standalone installers that are available which is straight forward so I am not going to cover it.

There are a number of post maintenance release configuration tasks to carry out depending on which products are installed; for full details have a read of the documentation but here is a summary.

  • Clear the cache from all browsers this will ensure the latest files are being used.
  • Essbase - If you have exported LROs these will need to be imported in using Maxl.
  • Essbase - If you were using Business rules for essbase the exported file should be imported into Calculation Manager
  • Essbase Studio – The catalog will need to be updated using the Studio command line client.
  • Strategic Finance – the database must be converted using a utility – Patch 13776302
  • PCM – Reregister  applications in EPMA
  • FDM – Configure components, upgrade applications using the Schema Update Utility and configure adaptors
  • Planning – Applications should be upgraded through the Upgrade Wizard.


All the planning applications can be upgraded at once.


A message will be displayed to let you know if the upgrade was successful or not and if the calculation mode has been changed from Business Rules to Calculation Manager.


  • Planning - If you were using Business rules for planning then these will need to be migrated to Calculation Manager

When the maintenance release for planning was configured it creates an export file of the rules.


The file is called HBRRules.xml and is located at
<MIDDLEWARE_HOME>\EPMData\Planning

The format of the file is pretty much the same as if the rules were exported from EAS, so if there are any issues with the migration to Calc Manager then it should be possible to edit the xml file.


A table is also created in the planning system database called HSPSYS_HBR2CMGRMIGINFO.
It looks like this table should hold the information of any rules that are migrated to Calc Manager though it didn’t seem to populate when I carried out a migration, I am not sure yet if it is used in a different way or is more for upgrades and not a maintenance release.

To migrate to Calculation Manager.


Select an application in Calculation Manager and choose Migrate


Select the application, plan type and the migrate options.


A summary of the objects migrated will be displayed.

The rules can then be deployed to planning and the process should be repeated for each planning application.

Security on the existing business rules can be migrated to planning using a utility.
Make sure the rules have been converted to Calc Manager and deployed to Planning before using it.


The utility HBRMigrateSecurity.cmd is available in <MIDDLEWARE_HOME>\user_projects\<instancename>\Planning\planning1

Format = HBRMigrateSecurity.cmd [-f:passwordFile] /A:appname /U:admin /F:output file
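
For example (the application name and output file are just placeholder values):

HBRMigrateSecurity.cmd /A:app1 /U:admin /F:HBRSecMigration.log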


I think I will leave it there for today, I hope this has been useful, enjoy!!

External authentication with Microsoft Active Directory in ODI 11g – Part 3

Previously I went through setting up the Studio to use external authentication and then the standalone agent; to be complete I briefly want to go over the process to enable authentication for the J2EE components, meaning the agent and console.

I am not going to go through the process of extending the WebLogic domain to deploy the agent and console as this is covered in detail here; it might also be worth looking at integrating the console with Enterprise Manager, and the process to do so can be found here.

Do make sure that you have added the credential store entries so the J2EE components can connect to the ODI repositories; the documentation and OBE give an example on how to do it using WLST (the WebLogic scripting tool), but it can also be achieved, if Enterprise Manager has been enabled, by…

Starting up the WebLogic admin server and going to http://<adminserver>:7001/em


 Select the domain and right click Security > Credentials


Select Create Map and add the map name of oracle.odi.credmap


Select the map and click Create Key; the Key is SUPERVISOR, and then enter the username and password for a supervisor account in ODI. If the password changes at any point then the Key can be edited and the password updated.

This updates the cwallet.sso file located at
<MIDDLEWARE_HOME>\user_projects\domains\ODI_DOMAIN\config\fmwconfig

This got me thinking, as in the first two parts I used the ODI credtool to create the cwallet.sso, and I wondered whether I could also use Enterprise Manager as an alternative, so I copied the cwallet.sso file I configured previously to the fmwconfig folder.


The answer is yes and it also means you could use one cwallet.sso instead of having multiple copies; the file could possibly be located on a shared location where all the ODI components can access it by updating the path in the jps-config.xml file.

Anyway back to configuring the J2EE components for external authentication, if you look at the ODI documentation then you will see the following information on how to configure.

Java EE Components
Oracle Data Integrator components deployed in a container (Java EE Agent, Oracle Data Integrator Console) do not require a specific configuration. They use the configuration of their container.

See "Configuring OAM Identity Assertion for SSO with Oracle Access Manager 10g" in the Oracle Fusion Middleware Application Security Guide for more information on OPSS configuration in a Java EE context.


As usual the documentation is a bit vague and transports you off to yet another document which personally I think confuses the matter. Basically what the statement means is that the J2EE components will use the external authentication that has been configured in WebLogic.

Now if you have configured external authentication in WebLogic before then you will know what to do; for those that have never configured it, here is a summary of the process.


I am going to use the same Microsoft Active Directory that I used in the previous external authentication blogs.

To configure WebLogic log into the admin console e.g. 
http://<adminserver>:7001/console



Select Security Realms


Select the default realm called myrealm


Select the Providers tab

Select New to add a new provider


Provide a name for the AD and from the large list of types of authentication providers choose “ActiveDirectoryAuthenticator” and click OK


The provider will be added to the list


I selected reorder and placed the newly added AD as the first in the list to authenticate against.


Select the AD provider



You will notice there is an option to set a Control Flag, this determines the login sequence if there is more than one authentication provider.

This needs to be set to SUFFICIENT or the login using the AD accounts will not work.

“A SUFFICIENT value specifies this LoginModule need not succeed. If it does succeed, control is returned to the application. If it fails and other Authentication providers are configured, authentication proceeds down the LoginModule list.”

Select the “Provider Specific” tab to enter the AD details


Enter the hostname of the AD, the port and the principal, which is the full distinguished name of the user to connect to the AD with, which in my case is cn=supervisor,cn=users,DC=epmmsad,DC=com


In the users section I updated the following.

User Base DN : where the users are located in the AD e.g. cn=users,DC=epmmsad,DC=com

User From Name Filter : As I want to use the sAMAccountName as the user attribute I updated to (&(sAMAccountName=%u)(objectclass=user))

User Name Attribute: set to sAMAccountName

Enable : Use Retrieved User Name as Principal

The rest of the sections I left as default, the WebLogic admin server requires restarting to take account of these changes.

The details of the AD configuration are contained in
<MIDDLEWARE_HOME>\user_projects\domains\ODI_DOMAIN\config\config.xml


Once the server has restarted, log in and go to the Users and Groups section under myrealm, and if the AD was correctly configured you should see users from the AD. If you don’t see the users then check the AdminServer.log in
<MIDDLEWARE_HOME>\user_projects\domains\<Domain Name>\servers\AdminServer\logs

The J2EE agent and console should now be using external authentication.


The agent tested successfully.


To test the console I created a user called odi_user in the AD


Within the security area in the Studio I added the AD user and provisioned them.


I was then able to log into the ODI console with the external user.

So over the three parts this completes the process of configuring all the ODI components to use external authentication; the Studio, console, standalone and J2EE agents are all enabled to authenticate against an Active Directory.

EPM Shared Services Registry cleaner

No doubt you are aware of the EPM Shared Services Registry, but if you are a little unsure on what it is all about then here is the official Oracle description.

The Shared Services Registry is part of the database that you configure for Foundation Services. Created the first time that you configure EPM System products, the Shared Services Registry simplifies configuration by storing and reusing the following information for most EPM System products that you install:

•    Initial configuration values such as database settings and deployment settings
•    The computer names, ports, servers, and URLs you use to implement multiple, integrated, EPM System products and components


Configuration changes that you make for one product are automatically applied to other products used in the deployment.

The Registry was introduced in version 11 and it was Oracle’s answer to centralising all the property files that were scattered everywhere in previous releases; I did write a blog about the registry and property files a few years ago so I am not going to go over old ground.


The registry basically comprises a set of six relational tables which are held together by component ids. It is not advisable to start changing the information directly in the tables as there are a number of supported methods to control it; this can be done with the command line epmsys_registry utility or, for the majority of registry properties, by using the LCM import/export features in Shared Services.

The registry works quite well most of the time but, like most products, it is one that has improved over time; saying that, with a combination of maintenance releases, reconfigurations and bugs the registry can also get into a bit of a mess. Sometimes it is not noticeable but other times it can cause nothing but a headache.

Luckily Oracle were aware of the issues that have arisen with the registry and have provided a utility to clean it up.

At the time of writing this there is not much documentation available on the utility so I apologise if there are any inaccuracies as it is just my own interpretation.

The utility is based on six rules defined in an XML file for cleaning up the registry and these are
  • Remove components without parent HOST node
  • Remove APP_SERVER components with just HOST components in parents
  • Remove components without any children or parent nodes
  • Remove HOST to HOST links
  • Remove webapps with invalid ‘serverName’ property
  • Merge duplicate webapps
From 11.1.2.2 the utility is installed by default and is available in <MIDDLEWARE_HOME>/user_projects/<instancename>/bin and can be run by executing registry-cleanup.bat/sh


The utility will go through each of the six rules and search for problems in the registry, it will skip certain component configurations which are also defined in the rules xml file.

This was run on a fresh install of 11.1.2.2 so as expected the utility didn’t encounter any issues.

If you are running 11.1.2.1 then the utility is also available as a patch and is definitely worth having a look, I would probably consider it a prerequisite if going through a maintenance release to 11.1.2.2

The nice feature of the utility is that you don’t have to take any action if problems are found and can just use it to see how the registry is looking.

I will now go through the process of getting up and running with the utility on 11.1.2.1


The patch 13807599 :Providing a cleanup script for Shared Services Registry issues is available from Oracle Support


Once downloaded extract the patch to <MIDDLEWARE_HOME>\EPMSystem11R1\OPatch


To apply the patch run the following from the Opatch directory.

Windows

opatch.bat apply <EPM_ORACLE_HOME>\OPatch\13807599 -oh <EPM_ORACLE_HOME> -jre <MIDDLEWARE_HOME>\jdk160_21

On my instance that was

opatch.bat apply E:\Oracle\Middleware\EPMSystem11R1\OPatch\13807599 -oh E:\Oracle\Middleware\EPMSystem11R1 -jre E:\Oracle\Middleware\jdk160_21

*nix

./opatch apply <EPM_ORACLE_HOME>/OPatch/13807599 -oh <EPM_ORACLE_HOME> -jre <MIDDLEWARE_HOME>/jdk160_21 -invPtrLoc <EPM_ORACLE_HOME>/oraInst.loc


Once the patch has been applied you will find the utility in <MIDDLEWARE_HOME>\EPMSystem11R1\common\config\11.1.2.0 which is a different location than the one preinstalled on 11.1.2.2

Please note: before running the utility make sure you perform a backup of the database/schema holding the Shared Services Registry, as even though this utility is there to clean up the registry there is a chance it could also do more damage.

The utility can be run by executing registry-clean.bat/sh

So let’s give the utility a try on an EPM 11.1.2.1 instance though I am not sure how many issues it will actually find as I have not really had many problems with it.


Once the utility starts up it asks for the EPM instance location, which is different from the 11.1.2.2 version; I think it just requires the EPM instance so it can locate the reg.properties file which contains the connection information to the Shared Services Registry database.
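To illustrate, the session on my instance went roughly like this, with the instance path obviously being environment specific:

cd E:\Oracle\Middleware\EPMSystem11R1\common\config\11.1.2.0
registry-clean.bat
(when prompted for the EPM instance location I supplied E:\Oracle\Middleware\user_projects\epmsystem1)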


The utility now steps through the six rules, the first being “Remove components without parent HOST node”.

The utility finds no issues with that rule and skips four components according to the definition in the rules xml file.


The next rule is “Remove APP_SERVER components with just HOST components in parents”.

This time the utility finds one component matching the rule and provides five different options to take action upon.


Selecting I provides full details of the entry in the registry.


If you run a registry report using E:\Oracle\Middleware\user_projects\epmsystem1\bin\epmsys_registry.bat/sh then you can view the same entry in the registry.


It is also possible to run a SQL query against the registry tables to view the same information though please note these are not the only records in the registry tables relating to this entry.


Selecting Y deletes the component out of the registry.


A quick query confirms the records have been removed.


The utility runs through the remaining rules which with the instance I chose didn’t find any more issues.

So give the utility a try, as you don't have to delete any entries and you have the ability to see what shape the registry is in.

EPM 11.1.2.2 – Compact deployment

I thought I would put together a write up on compact deployment in 11.1.2.2 as judging by some of the posts on the OTN forums lately there is a bit of confusion around this area.

I did write up a few posts in the past on compact deployment in 11.1.2.1 which can be read here, here and here, luckily the process in 11.1.2.2 has been much more simplified.

Now I must stress that all the information contained in this blog is just my interpretation and there may be some inaccuracies, as always if you feel there is then please get in touch and share your views.

So what is a compact deployment? Well, basically all it means is that instead of each web application being deployed to its own managed server they are combined and deployed as one; the number of web applications deployed to one managed server depends on what is selected when you run the EPM configurator, which I will go through shortly.

So why have a compact deployment? Well, I think one of the main reasons it came about was the big swing in the amount of resources required when entering the world of 11.1.2.x. In previous versions it was easy for, say, a consultant to run the full suite of Hyperion products on a laptop, or to run a POC, demo or training on an average spec machine; when 11.1.2.x came along this all changed, as WebLogic had to be deployed and with the number of web applications and the amount of memory consumed this stopped many from being able to run what they could in previous versions.

Probably due to the amount of complaints and pressure, Oracle came up with the compact deployment method.

So what are the main advantages of a compact deployment
  • Reduced overall memory requirement.
  • Reduced start up time.
  • Easier to manage as there are fewer services to start
  • Combined web application log (individual logs are available though)
  • Mixed mode of compact deployment web applications and standard deployment
There are disadvantages as well though
  • If the JVM crashes then all web applications are taken down.
  • As all web applications share one JVM then overall performance may not be as good.
  • If there is an issue with one web application then this can stop the managed server from running.
  • Reduced logging information.
The advantages definitely outweigh the disadvantages if you are using EPM for development, training, POCs etc. Oracle do say that compact deployment is now supported in production; I am not going to get into a debate about whether that is a good idea but personally I have reservations about using it in a production environment.

Oracle has also helped out those who want to quickly deploy Essbase, Planning or HFM to a one server machine which is known as a rapid deployment and uses the compact deployment method.

The rapid deployment documentation contains a step by step guide to deploying on Windows 2008 R2, the documentation even goes into detail on the OS configuration and steps for installing Oracle as the database.

The rapid deployment docs are available from the Oracle EPM System documentation library.
Ok, so let’s go through the steps of a compact deployment but please note this is not a step by step configuration guide and does not contain screenshots of every panel.


After the required products have been installed the EPM configurator is started and all the Web Applications which are going to be deployed to one managed server can be selected, if there are web applications you don’t want to be part of the compact deployment and you want them to run under their own managed server you can deploy them later.

Please note that by default all the available Web Applications will be selected for deployment so if you don’t want to deploy all the web applications to one managed server only select the ones that you do.

If it is the first time the web applications are being deployed I make sure that the configure database is selected so the datasources are configured correctly in WebLogic.

I am going to be deploying the following to one managed server : Foundation (Workspace, Shared services), EPMA, Calculation Manager, Financial Reports, Web Analysis, Provider Services, EAS and Planning.

It is important to understand which web applications you actually want to deploy to one managed server first, as even though the memory footprint is reduced compared to a standard deployment, the more web apps you select the higher the total memory consumed will be.

There may be web applications that you want to deploy but are not going to use often so it is possible you wouldn’t include them in the compact deployment and deploy them to their own managed server and start when required though this could depend on the resources available.


If the configure database option is selected for multiple products then you will only be able to configure to one database, if individual database/schemas are required then it is possible to select the configure database for one product, configure and then move on to the next product to configure the database until all have been configured and then deploy the web applications.


As this is the first deployment the web applications will need to be deployed to a new domain, most of the time the default configuration can be kept and just a password provided.


Now for the most important panel in a compact deployment and I think this is where the biggest change is with 11.1.2.2 deployments, “Deploy the web applications to a single managed server” is selected so a compact deployment is the default.

You will notice that all web applications will be deployed to one managed server called EPMServer0 and will all be running on one port, the default being 9000.



If you deselect “Deploy the web applications to a single managed server” then it is a standard deployment where each web application will be deployed to its own managed server and run on a port which you are probably more accustomed to.


It is possible to change the default port of 9000 or SSL port 9443 if required.


I think it is worth going over the web server configuration as well as I have noticed some confusion over it. Oracle HTTP Server comes as part of a separate download and it is possible to install without including it; if you take this route the default web server will be the newly introduced embedded WebLogic one.

The embedded WebLogic HTTP server uses a proxy servlet which is bound to Foundation Services, so whatever port Workspace/Shared Services is running on defines the port the http server is running on; in a default compact deployment the port is 9000 so the http server will be running on port 9000.


If you have a compact deployment which includes Workspace/Shared Services using the default port of 9000 and using the embedded http server you would access over port 9000.

If you are using OHS as the http server then the default would be port 19000.

It can get confusing if Foundation was deployed to its own managed server and the embedded http server was chosen, as then the default port for the http server would be 28080.
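To make the port differences concrete, here are example Workspace URLs for each scenario; the hostname epmserver is just a placeholder.

http://epmserver:9000/workspace/index.jsp    (embedded WebLogic HTTP server, default compact deployment)
http://epmserver:19000/workspace/index.jsp   (Oracle HTTP Server)
http://epmserver:28080/workspace/index.jsp   (embedded HTTP server with Foundation on its own managed server)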

It is worth noting that the embedded http server is not supported in production environments and I have also noticed rendering issues when using it with IE9 which don't seem to exist when using Firefox; the Oracle HTTP Server is not a big overhead in terms of resources so it might be the best option anyway.


If connecting to say EAS and it is part of a compact deployment then to be able to connect successfully you would need to include the port, this could be either the direct port EAS is running on or via the http server port.


If the deployment to one managed server is on a Windows server then a service called “Hyperion EPM Server – Web Application” will have been created.

It can also be started up by a script startEPMServer.bat/sh in <MIDDLEWARE_HOME>\user_projects\<instancename>\bin


If you start up the WebLogic admin server then you will see EPMServer0 has been deployed and is running on port 9000.


You will also be able to view all the web applications that have been deployed as part of the compact deployment.



The managed server can be monitored just like any other using Enterprise Manager which is installed and configured by default in 11.1.2.2


The combined logs are available in the services directory for windows and the starter directory for *nix.


The individual logs are available in <MIDDLEWARE_HOME>\user_projects\domains\EPMSystem\servers\EPMServer0\logs

I have noticed there is not as much information available as when the web applications are deployed to their own managed server.

So how does the memory consumption compare between a standard and a compact deployment.


With the ten web applications I deployed to one managed server this stabilised after start up at around 1.2GB


The equivalent memory when deployed to individual managed servers averaged at around 6GB so you can see a massive difference.

Remember there is the overhead of other services to consider which depends on the products installed e.g. RAF Agent, EPMA, Essbase, RMI

The maximum JVM size for a compact deployment is set to 2701MB which can be increased if required either in the registry under

HKEY_LOCAL_MACHINE\SOFTWARE\Hyperion Solutions\EPMServer0\HyS9EPMServer and JVMOptionXX -Xmx2701m

or by editing

<MIDDLEWARE_HOME>\user_projects\<instancename>\bin\deploymentScripts\setCustomParamsEPMServer.bat/sh
and updating the set JAVA_OPTIONS line.
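As a rough sketch of the second option, only the -Xmx value should change and any existing switches already on the JAVA_OPTIONS line should be kept; the 4GB figure below is just an example.

set JAVA_OPTIONS=-Xmx4096m <existing switches unchanged>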

So how about a comparison on start-up times between compact and standard.

Please note this is not scientific and is based on the web applications I deployed and obviously depends on the hardware being used.

One managed server averaged at 3 ½ - 4 minutes.

The equivalent individual managed servers if started one by one and waiting until each one had fully started up before starting the next one was about 30 minutes, if all the individual services were started at once which is not always advisable then it took around 7 minutes.


What I did notice was that if you didn’t start the RAF agent first then the above errors appeared in the error log which caused the overall start-up time to increase.

This was not just applicable to the compact deployment as it happens if RAF web is deployed to its own managed server and the RAF agent is not started first. It doesn't cause any problems if the agent is started up afterwards, it just slows down the start-up time; I am sure this did not occur in previous versions of 11.1.2

Anyway my testing of start-up times was done with the RAF agent started first.

It is possible to scale out the compact deployment to additional machines if required and to do this you first need to install the same set of web applications on the additional machine.


When you start up the EPM configurator an option will become available called “Scale out compact server on this machine”.


This would then deploy exactly the same web applications as in the compact deployment on the first machine.


If you then look in WebLogic admin console you notice that EPMServer1 has been created and is part of the EPMServer cluster.

You also need to run the Web Server configuration again as this will then add the scaled out deployment to the http server configuration.

####<Jun 17, 2012 11:06:03 AM BST> <Critical> <WebLogicServer> <EPM11CLUSTER> <EPMServer1> <Main Thread> <<WLS Kernel>> <> <> <1339409163293> <BEA-000386> <Server subsystem failed. Reason: java.lang.AssertionError: java.lang.reflect.InvocationTargetException

Make sure the WebLogic admin server is running before starting up the EPM Server on the scaled out server, as otherwise it will not start and you will be hit with the above error; the admin server does not need to be running for subsequent start-ups.

If you want to add additional web applications to the compact server:


In the configurator select “Deploy to Application Server” for the web app you want to add to the compact deployment.


You should see the additional web application(s) available to be deployed to the managed server.

Make sure you also configure the “Foundation” - “Web Server” component so that the newly deployed applications are also configured with the http server.

If you have already scaled out the deployment then this is what the documentation says when adding an additional web application

“If you add additional Web applications to the single managed server and you have multiple machines configured with the single managed server, for each additional machine (other than the WebLogic Administration Server machine), do the following:
1.    Install the additional applications.
2.    Restart the single managed server.”


I think this is missing a step because if you only follow those steps then the web applications on the scaled out server will not be added to the Shared Services registry and will not be added to the http server configuration.


So the registry will only contain web application information for the EPMServer0 and the http configuration will not add in the scaled out server even if you run the configure web server component again.


By selecting “Scale out compact server on this machine” again this should configure the additional web application.


Running the registry report displays the web application has been added against EPMServer1 and after running “Configure Web Server” the additional configuration for the scaled out server has been added.

Now on to the final piece, which is removing a web application from a single managed server; there may be a number of reasons why you want to do this, such as wanting a web app to run in its own managed server instead of being part of the single managed server.

The documentation states

"Use the WebLogic Administration Console to remove any Web applications from the single managed server, and restart the single managed server on all machines.

If you uninstall any product from a machine that is part of the single managed server on that machine, the entire single managed server on the machine is removed."


It would be so much simpler if the configurator just provided the ability to deselect web applications from the “deploy to application server” panel.

Let me go through the first option “WebLogic Administration Console to remove any Web applications from the single managed server” and try and remove the planning web application.



The planning deployment is deleted from within the admin console, though the problem with this method is that from an EPM configuration perspective planning will still exist.


After restarting the web application and logging into Workspace, Planning will still exist even though it has been deleted.


If you try to configure again, the web application deployment still believes Planning is part of the compact deployment.

So what about the other statement in the documentation

“If you uninstall any product from a machine that is part of the single managed server on that machine, the entire single managed server on the machine is removed”


The uninstaller was run from <MIDDLEWARE_HOME>\EPMSystem11R1\uninstall\uninstall.cmd/sh and the planning web application removed.


Maybe I am misreading the documentation but the single managed server is not removed.

Anyway the process I finally went through to remove a web application from a compact deployment was
  • Uninstall the product on each server the single managed server has been deployed to.
  • Go into the WebLogic admin console and delete all the deployments related to the web application.
  • “Configure Web Server” again in the EPM configurator to update the http server configuration.
If you wanted to then deploy the removed product to its own managed server you could reinstall the product, select “Deploy to Application Server” in the configurator just for that web application and in the deploy to application server panel deselect “Deploy the web applications to a single managed server”.

EPM 11.1.2.1 Deployment Report

In 11.1.2.2 a new feature was added to the Shared Services registry command line utility which allows the generation of a deployment report, I briefly highlighted the new addition in my 11.1.2.2 installation blog.

The report lists out the following information
  • EPM Deployment Topology Report
    • Logical Web Addresses — all logical Web applications and all Web servers that are configured
    • Application Tier Components — the components configured for each EPM Instance in this deployment, including the Web application URL and domain name for each Web application
       
    • Database Connections — all databases configured for EPM System products
       
    • User Directories — user directories used by EPM System products; configured security providers are listed in the same order as configured in Shared Services
       
    • Data Directories — data directories used by EPM System products, indicating the directories that need to be on a shared file system
  • EPM Deployment History Report — configuration history of activities on the specified date for each server in the deployment
The report would usually be generated after completing an installation and is useful for keeping a track of the EPM environment, problem solving or just a general understanding of what has been deployed and where.

It is worth noting this is only a summary report and the standard registry report can be run for more detailed information.

This type of report would be beneficial for prior versions and luckily it is available for 11.1.2.1 in the form of a patch, so I will go through the process of getting up and running with it.

The patch details are “Patch 13530721: Patch for 13530702 EPM System does not provide a report containing deployment information”.


Download the patch from Oracle Support and extract to
<MIDDLEWARE_HOME>\EPMSystem11R1\Opatch


Once the patch has been extracted it can be applied from command line using the following

Change directory to <MIDDLEWARE_HOME>\EPMSystem11R1\Opatch

Windows

opatch.bat apply 13530721 -oh <MIDDLEWARE_HOME>\EPMSystem11R1 -jre <MIDDLEWARE_HOME>\jdk160_21

e.g.
opatch.bat apply 13530721 -oh E:\Oracle\Middleware\EPMSystem11R1 -jre E:\Oracle\Middleware\jdk160_21

*nix

./opatch apply 13530721 -oh <MIDDLEWARE_HOME>/EPMSystem11R1 -jre <MIDDLEWARE_HOME>/jdk160_21 -invPtrLoc <MIDDLEWARE_HOME>/EPMSystem11R1/oraInst.loc


Once the patch has been applied the deployment report can be generated from command line.

Change directory to <MIDDLEWARE_HOME>\user_projects\<instance_name>\bin

Execute the following command

Windows >epmsys_registry.bat report deployment
*nix > epmsys_registry.sh report deployment

EPM services do not have to be running to run the report as the utility communicates directly with the Shared Services Registry relational database.


The report is in html format and once generated it can be found at

<MIDDLEWARE_HOME>\user_projects\<instance_name>\diagnostics\reports\deployment_report.html

e.g.
E:\Oracle\Middleware\user_projects\epmsystem1\diagnostics\reports\deployment_report.html

Please note if the report is run again it will overwrite an existing one so it is probably best to timestamp or archive the report.

In 11.1.2.1 the report is broken down into five sections

Logical Web Addresses


Application Tier Components


In the 11.1.2.1 report the FM cluster names are not displayed as this information is not stored in the Shared Services Registry until 11.1.2.2

Database Connections


User Directories


Data Directories


The 11.1.2.1 report does not contain the deployment history as this is only available from 11.1.2.2+ due to the changes in the Shared Services Registry.


Even though the 11.1.2.1 report does not contain all the information that is available in the 11.1.2.2 version it is definitely worth considering applying the patch and generating a deployment report for your EPM environment.

ODI Series – Standalone Agent High Availability using OPMN

There are a few options around to offer high availability with ODI agents and the usual route is to deploy and cluster two or more J2EE agents, which would then be fronted by an HTTP server. Using this method allows for load balancing of the active/active agents and the failover of the scheduler, as only one agent should be the scheduler.

There are situations where you may not choose to go down the J2EE route as it adds in more complexity, and you are looking for a simpler solution but still require some form of high availability. For instance, keeping with my tradition of the EPM world, you may have scheduled ODI routines to build and load data to Essbase databases using a standalone agent; if the standalone agent goes down and is unable to restart you would like a method to keep the schedule intact and not lose out on the important loads.

A possible method to achieve this could be to control standalone agents using OPMN and configure OPMN to allow failover between the agents, in this scenario the agents would operate in an active/passive configuration.

I am going to go through the process of getting up and running with this concept in my usual way, the first step is to get OPMN installed on the machines which are going to host the standalone agents, in the EPM world this may be on the essbase server which makes  life a little easier as OPMN will be already installed and ready to configure, I did go through the steps in an earlier blog.

The prerequisite is that an ODI 11g standalone agent has been installed on two machines.

I am going to assume that OPMN is not installed and go through the whole process; now I know there is an OBE available on configuring OPMN to manage ODI agents but I think it is outdated and instructs you to install an old version of OPMN.

The easiest way to get OPMN installed is to download the Oracle Web Tier utilities; version 11.1.1.6 is the latest version available at the time of writing this and can be downloaded from here.

 

Select the OS, download and extract.

In the extracted structure there will be a folder called Disk1 and there execute the file setup.exe which will start up the installer.


I am not going to go through every step and stick with the ones that are important.


Select “Install and Configure”



Select a location and home directory, ignore any warning about an application being required.


To allow OPMN to be configured you will need to select only “Oracle HTTP Server” even though it is not required, but don't worry, the configuration of OHS in OPMN can be removed at a later stage which will make it redundant.


Enter a path for the OPMN instance home; the default will be <WebTier_Home>\instances\instance1, which I updated to be ODI instead of instance1.

The OPMN Instance Name default is instance1 which I updated to ODI_Instance

The OHS Component Name can be ignored, it is just the name that will be used for OHS in OPMN.


If installing on Windows a Windows service will be created for OPMN.


If you take a look at the processes on the machine then you will notice that OPMN is running and it has also started OHS (which is basically the Apache HTTP server).

From the command line you can check the status of OPMN


The command line tool is available in
<WebTier_Home>\instances\<instance_name>\bin


As we are not interested in OHS then it can be removed from being controlled by OPMN


A component can be deleted with the command
opmnctl deletecomponent -componentname <component_name>
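For example, if the OHS component was left with its default name of ohs1 (an assumption, check your own install with opmnctl status first) then it would be removed with:

opmnctl deletecomponent -componentname ohs1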

OPMN is now installed on the first machine which will be hosting the primary standalone agent, the same process is now repeated on the second machine which I don’t need to cover.

I have an ODI standalone agent called StandAloneAgent already installed on two machines ODIAGENT and ODIAGENTPASS so the next step is to configure them to use OPMN

To do this you need to edit agentcreate.properties in <ODI_HOME>\oracledi\agent\bin which is populated with default values.


I am not going to go into too much detail about updating the file as I covered that in a previous blog but here is a quick overview

ODI_MASTER_DRIVER
ODI_MASTER_URL
ODI_MASTER_USER
ODI_MASTER_ENCODED_PASS
ODI_SECU_WORK_REPO
ODI_SUPERVISOR
ODI_SUPERVISOR_ENCODED_PASS


These variables can be populated with the information held in odiparams.bat


INSTANCE_HOME
ORACLE_OPMN_HOME

These can be updated with the information provided when installing the Web Tier utilities but make sure the path separator is entered as / in both Windows and Unix.

COMPONENT_NAME
PORTNO
JMXPORTNO

These variables are for the agent information which is available from the Studio or the agent start script.


A completed agentcreate.properties file would look something similar to the one above.
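For reference, here is a minimal hedged sketch of what the file might contain; the repository connection details, web tier paths, agent name and ports are example values based on my environment (the encoded passwords would normally be generated with the ODI encode script), so substitute your own.

ODI_MASTER_DRIVER=oracle.jdbc.OracleDriver
ODI_MASTER_URL=jdbc:oracle:thin:@dbserver:1521:ODIREPO
ODI_MASTER_USER=ODIM_REPO
ODI_MASTER_ENCODED_PASS=<encoded password>
ODI_SECU_WORK_REPO=WORKREP1
ODI_SUPERVISOR=SUPERVISOR
ODI_SUPERVISOR_ENCODED_PASS=<encoded password>
INSTANCE_HOME=E:/Oracle/Middleware/Oracle_WT1/instances/ODI
ORACLE_OPMN_HOME=E:/Oracle/Middleware/Oracle_WT1
COMPONENT_NAME=StandAloneAgent
PORTNO=20910
JMXPORTNO=21910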

To add the agent information to OPMN there is a script provided in the same directory called odi_opmn_addagent.bat


The file will require editing before running as the OPMN_HOME and INSTANCE_NAME variables will require updating with the correct paths.


Executing odi_opmn_addagent.bat should add the agent to OPMN


You can view the agent through the OPMN command line tool; after it has been added the status will show as Down, meaning the agent has not been started. The agent can be started using

opmnctl startproc ias-component=<Agent_Name>

The status should then be displayed as Alive, though this doesn't guarantee the agent has started up without any errors.
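Using the agent from this post as an example, the commands would be along the lines of:

opmnctl startproc ias-component=StandAloneAgent
opmnctl status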


The logs are located in
<WebTier_Home>\instances\<instance_name>\diagnostics\logs\OPMN\opmn



If you look at the processes running on the machine you should see the agent java process being controlled by OPMN.

One issue that can occur is that the user specified for the ODI_SUPERVISOR variable in the agentcreate.properties is ignored and defaulted to supervisor in the OPMN configuration file.


If a different user than supervisor is being used then opmn.xml in <WebTier_Home>\instances\ODI\config\OPMN\opmn can be updated; any changes to this file require a restart of OPMN.

Once the agent is up and running without any issues then the same configuration can be replicated on the second machine.

This now means that both agents are being controlled by OPMN so the next step is to configure OPMN for failover.



Edit the opmn.xml file and if you are not intending to use SSL communication between the two OPMN nodes then set the ssl element's enabled attribute to "false"; if you are going to use SSL then the wallet file will need to be recreated on each machine.


Add in the topology and nodes list containing each of the OPMN hostnames, usually you would use the fully qualified name.


Remove numprocs="1" from the <process-set id="odi-agent"> section


At the <process-type id="odiagent"> section add in:

service-failover="1" which will enable the failover functionality

service-weight="<value>" which defines which agent has priority; a higher value means higher priority.
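Pulling those settings together, the relevant section of opmn.xml on the primary node ends up looking roughly like the sketch below; treat it as illustrative only, as the surrounding elements were generated by the odi_opmn_addagent script and the service-weight of 101 is just an example value.

<process-type id="odiagent" service-failover="1" service-weight="101">
   <process-set id="odi-agent">
      <!-- numprocs="1" has been removed so OPMN can fail the agent over between nodes -->
      ...
   </process-set>
</process-type>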


On the second machine the opmn.xml configuration would be exactly the same except for the service-weight value.


There are additional configuration settings available but that is enough to get going with failover; after changes have been made the OPMN service should be restarted.


After starting the OPMN processes on both of the machines the agent should be active on one of the nodes; on the passive node the agent's status should be down, and if the agent is restarted on the passive node OPMN should check whether there is an active agent process, and if there is then the agent process will not start.


If the agent is stopped or crashes and cannot start (I believe the default is three attempts) then it should fail over and become active on the other node; the agent will start as a scheduler so any future scheduled jobs should be honoured.

If the scheduler has already started a job which has repeat cycles and the agent fails over then the session and repeat cycles will not be run on the new scheduler agent.

So all good, the agents are working as expected in an active/passive configuration. Well, there is one slight issue: if you look at the configuration of the agent you will notice the hostname is set to whichever agent was active at the time.


When the agent fails over the host in the agent configuration becomes invalid; this could be handled by updates to DNS, a VIP or hosts file entries, or there is another method that keeps the host updated which I am sure not everybody will agree with.

OPMN has the ability to run event scripts and one of the available options is to execute a pre-start script, so it is possible to run a script just before an agent is started.


In this example prior to the agent starting the updateAgent script is executed.


The script sets the agent host name in the ODI master repository snp_agent table to the host that is executing the script; this is as basic as it can be but the type of scripting and complexity to be used is really down to your preference.
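As a very rough illustration only, on Windows the pre-start script could be as crude as shelling out to SQL*Plus; the snp_agent column name, agent name and connection details below are assumptions based on my repository, so verify them against your own master repository before using anything like this.

echo UPDATE SNP_AGENT SET HOST_NAME='%COMPUTERNAME%' WHERE AGENT_NAME='STANDALONEAGENT'; > update_agent.sql
echo COMMIT; >> update_agent.sql
echo EXIT; >> update_agent.sql
sqlplus ODIM_REPO/password@ODIREPO @update_agent.sql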

So let’s test a failover with the pre script added.


The agent is active on ODIAGENT before failover.


The agent fails over to node ODIAGENTPASS and successfully updates the host configuration, please note the ODI Studio does need restarting after a failover to refresh the hostname.

The event scripts could be expanded to add in logging or alerting email functionality.

If you are looking to implement high availability with ODI agents and don’t want to go down the J2EE route then this method is certainly worth investigating, if you want any further information then feel free to contact me.

11.1.2.2 Compact deployment performance degradation

A quick post from me: if you are running an 11.1.2.2 compact deployment, and in particular planning, then you may experience performance degradation due to the default ODL logging level setting.

In most circumstances a compact deployment will be used in a Dev, POC, cloud, personal laptop type scenario and the last thing you need is any type of performance issues so updating the logging level is worth implementing.

There is a file called logging.xml which controls the ODL logging parameters and the logging level for all the deployments in the compact managed server.


The file is located in
<MIDDLEWARE_HOME>\user_projects\domains\EPMSystem\fmwconfig\servers\EPMServer0

EPMServer0 is the default managed server name for a compact deployment.


Edit the file and locate the handler name “epmcss-handler”; the logging level is set to a high value of “NOTIFICATION:32” and controls SharedServices_SecurityClient.log

The definition of the most common types of logging levels is as follows

  • INCIDENT_ERROR:1 - A serious problem that may be caused by a bug in the product and that should be reported to Oracle Support. Examples are errors from which you cannot recover or serious problems.
  • ERROR:1 - A serious problem that requires immediate attention from the administrator and is not caused by a bug in the product.
  • WARNING:1 - A potential problem that should be reviewed by the administrator.
  • NOTIFICATION:1 - A major lifecycle event such as the activation or deactivation of a primary sub-component or feature. This is the default level for NOTIFICATION.
  • NOTIFICATION:16 - A finer level of granularity for reporting normal events.
  • TRACE:1 - Trace or debug information for events that are meaningful to administrators.
  • TRACE:16 - Detailed trace or debug information that can help Oracle Support diagnose problems with a particular subsystem.
  • TRACE:32 - Very detailed trace or debug information that can help Oracle Support diagnose problems with a particular subsystem.

All logs for the compact managed server are located at
<MIDDLEWARE_HOME>\user_projects\domains\EPMSystem\servers\EPMServer0\logs
 

Update the logger level from “NOTIFICATION:32” to a level 1 type such as “WARNING:1”
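For reference, the logger entry referencing the handler ends up looking something like the sketch below; the logger name shown is from my environment so verify it against your own file rather than copying it blindly.

<logger name='oracle.EPMCSS' level='WARNING:1' useParentHandlers='false'>
 <handler name='epmcss-handler'/>
</logger>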


Next locate the handler name of “epmreg-handler” which controls registry.log


Update the logger level from “NOTIFICATION:32” to a level 1 type such as “WARNING:1”

Restart the EPMServer managed server and the new logging levels will have taken effect. If at any point an extra degree of logging is required for diagnostics then the level can be updated again.

ODI 11.1.1.6 – Planning KM bug + fix

Just another really quick update from me today; I have seen a few posts on the following issue and it actually hit me the first time I used 11.1.1.6 with planning, so I thought I would quickly go through the problem in case anybody else hits it.

The issue seems to be only related to 11.1.1.6

When you create an Interface to load from any source to a planning application and go to select the IKM you will probably see the following.


No matter what has been set as the staging area and even though the “IKM SQL to Hyperion Planning” has been imported it is not possible to select the IKM.

Don't worry, it is not something that you have done wrong; it has been recognised as a bug

“Bug 14274186 : HYPERION IKM SHOWING AS UNDEFINED IN INTERFACE FLOW”

There are two options available to get over this problem; you can either use a workaround, which is:

 

First open the KM


Change the Source Technology from “Generic SQL” to “<Undefined>”


Open the interface again and the IKM should either be available or selected.

The alternative is to download and apply a recent patch; personally I feel this should be the preferred option as you will not need to worry about changing the technology each time the IKM is imported (ok, you could change the technology and then export the KM, overwriting the existing one).
 

The patch
14274186: HYPERION IKM SHOWING AS UNDEFINED IN INTERFACE FLOW
can be downloaded from Oracle Support.

The patch basically consists of an updated KM but to make sure it is applied correctly and recorded then Opatch should be used.


 Extract the patch to the ODI Opatch directory and run Opatch to apply the patch.


You will notice that Opatch has just copied the new IKM replacing the existing one.

In Studio right click “IKM SQL to Hyperion Planning” and select “Import_Replace” and locate the IKM file.


If you open the IKM then you will see that the patched version has set the Source Technology to “<Undefined>” and any planning interfaces should now be able to select the KM.

Planning 11.1.2.2.300 Outline Load Utility Enhancements

I noticed that the recent patch release of planning 11.1.2.2.300 includes some additional functionality for the outline load utility that is worth going through.

With each release the outline load utility seems to gain extra functionality and it has gone from strength to strength since its first appearance in 11.1.1.0

It is a utility that I know has made consultants lives much easier, it is simple to use and is now packed with functionality.

The enhancements in this patch release are
  • Import metadata and data from a relational data source
  • Optimize command lines by storing command line arguments in a command properties file. For example, if you use a command properties file to run the same commands on multiple applications, you need only change a parameter in the command line for each import. This allows you to bundle switches for a common application. It also makes command lines shorter and easier to manage, and enhances readability and ease of use.
  • Export data to a flat CSV file
If you are going to patch planning then make sure you go through the readme in detail as it can be quite painful if you are on a distributed environment and it also requires patching other products first plus additional ADF patching.

Today I am going to go through using the properties file and importing metadata from a relational source, in the next blog I will cover the remaining enhancements.

In previous versions of Hyperion, before the days of the Shared Services Registry, most of the configuration settings were held in properties files. A properties file (.properties) basically allows you to store key-value pairs which are separated either with a colon (key:value) or an equals sign (key=value), and these pairs are then read by the calling application.

For example in the outline load utility world these could be /U:username or /S:servername

You are not restricted to only using pairs in the file as the other parameter switches can be used as well such as /N /O /C etc.

Before this release all of these parameters had to be included in the command line and depending on the number of parameters it could look messy, and if you have many scripts a lot of replication was being exercised.

Now there is a new parameter available, /CP:commandPropertiesFileName, which designates a properties file to use when the outline load utility executes:

OutlineLoad.cmd /CP:E:\PLAN_SCRIPTS\metaload.properties   

By using the above example, when the outline load utility starts it will look for metaload.properties in E:\PLAN_SCRIPTS


Instead of having to write this information to the command line it is read from the easier to read and manageable properties file.
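A cut down properties file using the same values that appear in the example later in this post might look like the following; the application, server and path values are obviously specific to my environment.

/A:PLANDEMO
/U:epmadmin
/S:FUSION11
/D:Account
/L:E:/PLAN_SCRIPTS/Logs/accld.log
/X:E:/PLAN_SCRIPTS/Logs/accld.err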

It is also possible to override the values in the properties file by including them in the command line.

OutlineLoad.cmd /CP:E:\PLAN_SCRIPTS\metaload.properties /D:Entity

/D:Entity, which defines the entity dimension to load to, takes precedence over /D:Account in the properties file.

I did notice that it doesn't look possible to include the -f:passwordFile parameter in the properties file and it had to be included in the command line.

Right, on to the main event: loading metadata into a planning application from a relational source. I know this new functionality will be music to the ears of lots of consultants because in many cases the source can be relational and up to now a SQL download and formatted file would have to be produced before using the utility.

There are quite a lot of new parameters available for the relational functionality and I will cover the ones that you are likely to use when loading metadata.

Here is an extension to above metaload.properties file which includes the parameters to run a SQL query against a relational source to load metadata to a planning application.


/IR:RelationalConnectionPropertiesFileName 

Just like the /CP parameter for including a properties file there is also one available just for the connection information to a relational database; it is possible to use the same properties file as the one for other parameters like I have used:

/IR:E:/PLAN_SCRIPTS/metaload.properties

This will read metaload.properties for the source relational database information.

/RIQ:inputQueryOrKey

This can either be the SQL query to be run or it can be used to designate a key which will hold the query.

/RIQ: ACCOUNT_SQL

So in my example the SQL query to be executed is held in key ACCOUNT_SQL

You may be asking why not just put the SQL directly in the /RIQ value; well, you may have multiple SQL statements for different metadata loads and by just updating /RIQ you call the required one. If you use /RIQ in the command line and have all the keys in the properties file then it is simple to call different queries and looks much tidier.

KEY=SQL

The key relates to the key defined in /RIQ and SQL is the query that will be run, so in my example

The key is ACCOUNT_SQL and the SQL that will be executed is

SELECT ACCOUNT as "Account",PARENT as "Parent",alias_default as "Alias: Default",data_storage as "Data Storage" FROM PLAN_ACCOUNT ORDER BY idno

For my example this generates the following records to be loaded as metadata into the account dimension:


The query must return column header names to exactly match the properties required for planning.

/RIC: catalog

For Oracle I don't believe it matters what you specify as the value, for SQL Server it should be the database name.

/RID: driver

The parameter is the JDBC driver that will be used for the connection to the relational database.

For Oracle use:  /RID:oracle.jdbc.OracleDriver
For SQL Server:  /RID:weblogic.jdbc.sqlserver.SQLServerDriver

/RIR:url

The parameter is the JDBC URL to be used for the connection to the relational database.

For Oracle the format is: jdbc:oracle:thin:@[DB_SERVER_NAME]:DB_PORT:DB_SID

So in my example that equates to /RIR:jdbc:oracle:thin:@[fusion11]:1521:FUSION

For SQL Server the format is:
jdbc:weblogic:sqlserver://[DB_SERVER_NAME]:DB_PORT

An example being
/RIR: jdbc:weblogic:sqlserver://[fusion11]:1433

/RIU:username

Nice and simple this is the user name to connect to the relational database.

/RIP:password

The password for the database connection, this is unencrypted the first time it is used in a properties file.


Once the outline load utility has been run it will update the properties file and encrypt the password.

If you don’t want to use a properties file then all these parameters can be entered directly into the command line.

So let’s give it a go


The Account dimension currently contains no members.


The outline load utility is executed passing in the parameters to the encrypted password file and the properties file to use.


The utility reads the properties file and checks whether the database connection password is encrypted and as it is not encrypted it updates the file with an encrypted value.

The utility reads through the rest of the properties and then merges them with the ones currently in the command line before submitting them.

The output in the log provides further detailed information.

Property file arguments: /RIU:PLANSTAGE /D:Account /RIR:jdbc:oracle:thin:@[fusion11]:1521:FUSION /RIP:TOzyauMwe2gtUQ9tjidf1Zq2pCA8iroN4i7HQxssFKUaogr16fi+WKmHFTD8NIIs /RIQ:ACCOUNT_SQL /X:E:/PLAN_SCRIPTS/Logs/accld.err /A:PLANDEMO /S:FUSION11 /L:E:/PLAN_SCRIPTS/Logs/accld.log /U:epmadmin /RIC:FUSION_CONN /RID:oracle.jdbc.OracleDriver /IR:E:/PLAN_SCRIPTS/metaload.properties

Command line arguments: /CP:E:\PLAN_SCRIPTS\metaload.properties

Submitted (merged) command line: /RIU:PLANSTAGE /D:Account /RIR:jdbc:oracle:thin:@[fusion11]:1521:FUSION /RIP:TOzyauMwe2gtUQ9tjidf1Zq2pCA8iroN4i7HQxssFKUaogr16fi+WKmHFTD8NIIs /RIQ:ACCOUNT_SQL /X:E:/PLAN_SCRIPTS/Logs/accld.err /A:PLANDEMO /S:FUSION11 /L:E:/PLAN_SCRIPTS/Logs/accld.log /U:epmadmin /RIC:FUSION_CONN /RID:oracle.jdbc.OracleDriver /IR:E:/PLAN_SCRIPTS/metaload.properties

Successfully logged into "PLANDEMO" application, Release 11.122, Adapter Interface Version 5, Workforce supported and not enabled, CapEx not supported and not enabled, CSS Version 3

A query was located in the Command Properties File "E:\PLAN_SCRIPTS\metaload.properties" that corresponded to the key passed on the Input Query switch (/RIQ) "ACCOUNT_SQL" so it's corresponding value will be executed as a query: "SELECT ACCOUNT,PARENT as "Parent",alias_default as "Alias: Default",data_storage as "Data Storage" FROM PLAN_ACCOUNT ORDER BY idno"

Attempting to make input rdb connection with the following properties: catalog: FUSION_CONN, driver: oracle.jdbc.OracleDriver, url: jdbc:oracle:thin:@[fusion11]:1521:FUSION, userName: PLANSTAGE

Source RDB "FUSION_CONN" on jdbc:oracle:thin:@[fusion11]:1521:FUSION connected to successfully.

Connection to input RDB made successfully.

[Mon Sep 17 22:24:26 BST 2012]Header record fields: ACCOUNT, Parent, Alias: Default, Data Storage

[Mon Sep 17 22:24:26 BST 2012]Located and using "Account" dimension for loading data in "PLANDEMO" application.

[Mon Sep 17 22:24:26 BST 2012]Load dimension "Account" has been unlocked successfully.

[Mon Sep 17 22:24:26 BST 2012]A cube refresh operation will not be performed.

[Mon Sep 17 22:24:26 BST 2012]Create security filters operation will not be performed.

[Mon Sep 17 22:24:26 BST 2012]Examine the Essbase log files for status if Essbase data was loaded.

[Mon Sep 17 22:24:26 BST 2012]Planning Outline data store load process finished. 317 data records were read, 317 data records were processed, 317 were accepted for loading (verify actual load with Essbase log files), 0 were rejected.

All the 317 data records were read, processed and loaded successfully to the planning application.


Back in the planning application you can see all the metadata has been loaded.

There is one more parameter that I have not mentioned and that is /IRA; if this is used then the database connection information of the connected planning application is used.

This allows queries to be run against tables in the connected planning application database and does not require the /RIQ, /RIC, /RID, /RIR, /RIU, /RIP parameters.

Using the new functionality will also allow you to query the planning tables directly to return metadata; if this is something you are looking to do then it is definitely worth having a look at Cameron Lackpour's blog as he has kindly spent quite a lot of time covering this area in a number of his posts.

In the next blog I will cover the rest of the new functionality.