Tag Archives: eBS

FND_GLOBAL.APPS_INITIALIZE: Initialize eBS sessions in SQL and PL/SQL

To access eBS functionality through SQL clients or PL/SQL, it can be necessary to initialize an eBS session.

For example, when you need to reference eBS views, access APIs, or start concurrent requests from outside eBS.

To initialize an eBS session the API FND_GLOBAL.APPS_INITIALIZE can be used. It has 3 required parameters and 2 optional parameters:
USER_ID -> This is the FND_USER.USER_ID for which a session is initialized.
RESP_ID -> The FND_RESPONSIBILITY.RESPONSIBILITY_ID that will be used in this session.
RESP_APPL_ID -> The FND_RESPONSIBILITY.APPLICATION_ID for the RESPONSIBILITY_ID.
SECURITY_GROUP_ID -> An optional SECURITY_GROUP_ID. (Default = 0)
SERVER_ID -> An optional SERVER_ID. (Default = -1)

The SERVER_ID will mostly be used in case of a DMZ or server-dependent services.
The APPS_INITIALIZE procedure accepts any numeric value for these parameters, but the session will only be initialized successfully with at least a valid USER_ID.
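A typical initialization from a SQL client looks like this. (The user name, responsibility name and ID values below are examples; look up the actual IDs on your own instance first.)

```sql
-- Look up the IDs for a user / responsibility combination (example names)
select usr.user_id, resp.responsibility_id, resp.application_id
from   fnd_user              usr
,      fnd_responsibility_vl resp
where  usr.user_name            = 'SYSADMIN'
and    resp.responsibility_name = 'System Administrator';

-- Initialize the session with the values found above (example values)
begin
  fnd_global.apps_initialize
  ( user_id      => 0      -- FND_USER.USER_ID
  , resp_id      => 20420  -- FND_RESPONSIBILITY.RESPONSIBILITY_ID
  , resp_appl_id => 1      -- FND_RESPONSIBILITY.APPLICATION_ID
  );
end;
/
```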

In this article, we will look at the session initialization through APPS_INITIALIZE. The following steps are executed:
– Set up a PL/SQL table in FND_GLOBAL with some session variables. These variables are accessible through functions in FND_GLOBAL.
– Set up a sys_context (‘FND’) with the same variables.
– Set the organization context.
– Set up the Security Profile in case of HR Security. I will document the details of the HR Security later and skip the details in this article.
– Set the Consumer Resource Group.
– Execute product specific initialization routines.
– Execute the Custom SQL Initialization routine.

FND_GLOBAL

In the FND_GLOBAL package body a PL/SQL table is set up to hold session variables. Not all available variables will be set through APPS_INITIALIZE (e.g. CONC_REQUEST_ID that is only set on submitting a concurrent request). And not all variables are available (e.g. CUSTOMER_ID which is derived from the customer field in FND_USER).

Undefined variables are set to -1 for NUMBER values and to NULL for VARCHAR values.

Name | Description | Remarks
USER_ID | User_id from FND_USER | Required, not validated
USER_NAME | User_name from FND_USER | Optional, based on USER_ID
RESP_ID | Responsibility_id | Optional, not validated
RESP_NAME | Responsibility_name | Optional, based on RESP_ID
RESP_APPL_ID | Resp. application_id | Optional, based on RESP_ID
APPLICATION_NAME | Resp. application_name | Optional, based on RESP_APPL_ID
APPLICATION_SHORT_NAME | Resp. application_short_name | Optional, based on RESP_APPL_ID
SECURITY_GROUP_ID | Security Group ID | Optional, validated
PER_BUSINESS_GROUP_ID | Business Group ID | Derived from Profile Option
PER_SECURITY_PROFILE_ID | HR Security Profile | Derived from Profile Option
CURRENT_LANGUAGE | | From userenv('LANG')
BASE_LANGUAGE | Base Language | From FND_LANGUAGES
EMPLOYEE_ID | User employee_id | Optional, from FND_USER
CUSTOMER_ID | User customer_id | Optional, from FND_USER
SUPPLIER_ID | User supplier_id | Optional, from FND_USER
SESSION_ID | Unique session identifier | AUDSID from v$session
SERVER_ID | Server_id if entered as parameter | Optional, from FND_NODES
ORG_ID | Operating Unit ID | Derived from Profile Option
ORG_NAME | Operating Unit Name | Based on ORG_ID
PARTY_ID | User party_id | FND_USER.PERSON_PARTY_ID
NLS_LANGUAGE | | From V$NLS_PARAMETERS
NLS_NUMERIC_CHARACTERS | | From V$NLS_PARAMETERS
NLS_DATE_FORMAT | | From V$NLS_PARAMETERS
NLS_DATE_LANGUAGE | | From V$NLS_PARAMETERS
NLS_TERRITORY | | From V$NLS_PARAMETERS
NLS_SORT | | From V$NLS_PARAMETERS

These variables are also stored in a SYS_CONTEXT with name ‘FND’:

SQL> select FND_GLOBAL.USER_NAME from dual;

USER_NAME
 -------------------------------------------
 SYSADMIN

SQL> select sys_context('FND','USER_NAME') from dual;

SYS_CONTEXT('FND','USER_NAME')
 --------------------------------------------------------
 SYSADMIN

 

There is, however, no noticeable performance difference between the two. So I advise you to use FND_GLOBAL, as it is the documented solution.

Organization Context

The algorithm to derive the Organization Context is as follows.

For pre-R12 environments: Use profile option ‘MO: Operating Unit’ (ORG_ID).

For R12 and beyond, the MOAC structure can be used. First check if profile option ‘MO: Security Profile’ (XLA_MO_SECURITY_PROFILE_LEVEL) is set.
If this profile is set, check profile option ‘MO: Default Operating Unit’ (DEFAULT_ORG_ID).
If DEFAULT_ORG_ID is not set, the ORG_ID will not be initialized.
If XLA_MO_SECURITY_PROFILE_LEVEL is not set, profile option ‘MO: Operating Unit’ (ORG_ID) is used.
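The derivation above can be sketched in PL/SQL. This is a simplified sketch using FND_PROFILE; the actual code inside FND_GLOBAL is more involved.

```sql
declare
  l_org_id varchar2(30);
begin
  if fnd_profile.value('XLA_MO_SECURITY_PROFILE_LEVEL') is not null then
    -- MOAC security profile is set: only a default operating unit applies
    l_org_id := fnd_profile.value('DEFAULT_ORG_ID');
  else
    -- No MOAC security profile: fall back to 'MO: Operating Unit'
    l_org_id := fnd_profile.value('ORG_ID');
  end if;
  dbms_output.put_line('ORG_ID = ' || nvl(l_org_id, '(not initialized)'));
end;
/
```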

The ORG_ID and SECURITY_GROUP_ID are then stored in the sessions CLIENT_INFO. ORG_ID starts at position 1 and is limited to 10 characters. The SECURITY_GROUP_ID starts at position 55.

The CLIENT_INFO is the most common check for eBS database views to verify Organization Context.
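You can inspect those CLIENT_INFO positions yourself after initializing a session. (The 10-character width for SECURITY_GROUP_ID below is an assumption; the documented positions are 1 and 55.)

```sql
-- ORG_ID starts at position 1, SECURITY_GROUP_ID at position 55
select substr(userenv('CLIENT_INFO'), 1, 10)  org_id
,      substr(userenv('CLIENT_INFO'), 55, 10) security_group_id
from   dual;
```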

Profile Option Cache

After setting the variables, the Profile Option Value cache is initialized. However APPS_INITIALIZE does not set any profile option values yet. It only sets up the required cache (which is also a PL/SQL table).
A number of profile options are used during the initialization, but they are not yet stored in the Profile Option Value cache. In an upcoming article, I'll go into more detail about the Profile Option Value cache and the way eBS uses it.

Security Context

Then the security context is initialized. The ORG_ID and SECURITY_GROUP_ID are set in the sessions CLIENT_INFO. ORG_ID is stored in the first 10 characters, while SECURITY_GROUP_ID is stored from the 55th position.

Resource Consumer Group

In the profile option ‘FND: Resource Consumer Group’ (FND_RESOURCE_CONSUMER_GROUP) a Resource Consumer Group can be selected. With this setting the database Resource Manager can limit the database resources that the user may consume. If this profile option is set, FND_GLOBAL.APPS_INITIALIZE will alter the session to use the requested Resource Consumer Group.
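Outside of APPS_INITIALIZE, the same effect can be achieved with the standard DBMS_SESSION package. The consumer group name below is just an example.

```sql
declare
  l_old_group varchar2(30);
begin
  -- Switch the current session to an example consumer group
  dbms_session.switch_current_consumer_group
  ( new_consumer_group     => 'LOW_GROUP'   -- example group name
  , old_consumer_group     => l_old_group   -- returns the previous group
  , initial_group_on_error => false
  );
end;
/
```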

Product Initialization

Individual Oracle Products can execute their own initialization routines. The defined routines are in table FND_PRODUCT_INITIALIZATION. Each initialization can be executed based on specific conditions. These conditions are stored in FND_PRODUCT_INIT_CONDITION.
The setup of these initialization routines is done through the package FND_PRODUCT_INITIALIZATION_PKG. The APPS_INITIALIZE will check the conditions and dependencies on the initialization routines and execute the routines in the required order.
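You can see which routines and conditions are registered on your instance by querying these tables directly. The exact columns vary per release, so a broad select is the safest sketch:

```sql
-- Registered product initialization routines
select * from fnd_product_initialization;

-- Conditions controlling when each routine runs
select * from fnd_product_init_condition;
```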

Custom Initialization

You may be aware that you can also set up your own Custom Initialization Routine through the profile option: ‘Initialization SQL Statement – Custom’. If this profile is set, it will also be executed through FND_GLOBAL.APPS_INITIALIZE.
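The profile value is simply a SQL statement or anonymous PL/SQL block that is executed for the session. A classic example use (an assumption on my part, not prescribed by eBS) is enabling SQL trace for a single user by setting the profile at user level to something like:

```sql
BEGIN
  -- Example profile value: turn on SQL trace with a recognizable trace file name
  EXECUTE IMMEDIATE
    'alter session set tracefile_identifier=''EBS_TRACE'' sql_trace=true';
END;
```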

APPS_INITIALIZE has now finished initializing your session. Other eBS software can use the FND_GLOBAL functions, the SYS_CONTEXT or CLIENT_INFO to verify the session and security settings.

 

P.S. For quick scripts: stiffmaster.nl

New article on Descriptive Flexfields

I just published the second part of the article series on Flexfields. This part is about Descriptive Flexfields. It is available from: http://www.stijf.com/joomla/flexfields/flexfields-in-oracle-ebs-descriptive-flexfields

The next part will follow shortly, and will discuss the flexfield datamodel.

New article on eBS (Key) Flexfields

Just a short note to point to a new article that I published on the new site: 

http://www.stijf.com/joomla/index.php/flexfield-articles/49-flexfields-and-flexfield-structures.html

It is the first part of a multi-part article. This part explains Key Flexfields in eBS. The next parts will discuss Descriptive Flexfields, the datamodel and special features. 

New website and article

It's been a while since I last posted an article here. 

A long vacation, and testing different CMS's for the website account for the lost time. So I decided to give Joomla a try. I just finished installing the components that I want to use. And I wrote a short article, to see how it goes.

I'll be transferring the articles from this website to the new one in the days to come. 

For now, take a look at the new website: http://www.stijf.com/joomla

The article that I wrote is about adding a column to OA-Framework without using jDeveloper to change the VO. I describe how to use a descriptive flexfield to add a checkbox to the framework page.

You can read the article here.

Special and Pair validation types

I am working on an article about flexfields and flexfield validation.

Even though the article is not yet finished, I thought the part about ‘SPECIAL’ and ‘PAIR’ validation types might be interesting enough. Many people seem to think they can only use the seeded validation sets. However, you can also create your own validation sets. And their options are very powerful. So I wanted to publish this part of the article as a prelude to the full story.

 

Special Validation

Special validation is used to provide flexfield functionality for a single value. That means you can have, for example, a concurrent program parameter that is filled with a key flexfield value, or a range of flexfield values.

Let’s go back to the Key Flexfield. We know that a key flexfield combination consists of different segment values, stored in a separate combination table.

When you want to submit a key flexfield combination as a parameter to a concurrent program, you can code your own validation for the separate values. But then you miss the nice functionality that gives you pop-ups, validation of the resulting combination and, if needed, the ID-value for the flexfield combination.

That is possible with a ‘Special’ validation type.

The special validation uses a number of user exits to enter, validate and query key flexfield segments. With special validation, you can enter one or more segment values for a key flexfield. To enter these segment values, 3 user exits can be used: ‘POPID’, ‘VALID’ and ‘LOADID’.

POPID is used to let the user enter the flexfield segment value. It is called when the user’s cursor enters the segment value field. With this user exit, you decide which segment values should be shown, and how they should be shown.

VALID is called when the user exits the segment value field, or confirms the chosen flexfield combination. It validates the entered value against the values existing in the key flexfield table.

LOADID is optional, and it can be used to choose which information will be returned as flexfield value. This can be the concatenated segments, or the id-value for the flexfield combination or segment values.

These 3 user exits can be assigned to 3 ‘events’. There are more events possible, but they are either not yet in use, or their use is not yet supported. So we will only use ‘Validate’, ‘Edit’ and ‘Load’.

Sounds complicated, so far? Don’t worry; this is not an easy validation. But we’ll build some examples to give you an idea. First we start with building a very easy special validation. This will be built on our Code Combination key flexfield. We’ll be using a concurrent program ‘Test Flex Validation’ program to see our different options.

This program is based on the following procedure:

CREATE OR REPLACE PROCEDURE XXX_TEST_FLEXFIELD_PARAMS
( errbuf   out varchar2
, retcode  out varchar2
, p_flex   in  varchar2
, p_flex2  in  varchar2 := 'XXX'
, p_flex3  in  varchar2 := 'XXX'
, p_flex4  in  varchar2 := 'XXX'
, p_flex5  in  varchar2 := 'XXX'
) IS
BEGIN
   FND_FILE.PUT_LINE(FND_FILE.OUTPUT, p_flex);
   FND_FILE.PUT_LINE(FND_FILE.OUTPUT, p_flex2);
   FND_FILE.PUT_LINE(FND_FILE.OUTPUT, p_flex3);
   FND_FILE.PUT_LINE(FND_FILE.OUTPUT, p_flex4);
   FND_FILE.PUT_LINE(FND_FILE.OUTPUT, p_flex5);
END;

 

This will only write the parameter value to the output of the request. To use flexfields as parameters for concurrent programs, we need to define a value set based on them.
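As a side note: the procedure can be registered as a concurrent program from the Application Developer responsibility, or with the FND_PROGRAM API. The sketch below assumes a hypothetical custom application ‘XXCUST’; verify the parameter names against the Developer’s Guide for your release.

```sql
begin
  -- Register the executable pointing at the PL/SQL procedure above
  fnd_program.executable
  ( executable          => 'XXX_TEST_FLEX'
  , application         => 'XXCUST'            -- hypothetical custom application
  , short_name          => 'XXX_TEST_FLEX'
  , description         => 'Test Flex Validation executable'
  , execution_method    => 'PL/SQL Stored Procedure'
  , execution_file_name => 'XXX_TEST_FLEXFIELD_PARAMS'
  );
  -- Register the concurrent program on top of the executable
  fnd_program.register
  ( program                => 'Test Flex Validation'
  , application            => 'XXCUST'
  , enabled                => 'Y'
  , short_name             => 'XXX_TEST_FLEX'
  , description            => 'Writes its parameters to the request output'
  , executable_short_name  => 'XXX_TEST_FLEX'
  , executable_application => 'XXCUST'
  );
  commit;
end;
/
```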

We will start with the barest setup to enter a key-flexfield combination. For this article, we use the accounting flexfield, with code ‘GL#’  and id-num ‘101’.

In this case, we have the following definition:

 

So what does this mean?

The first box is for the edit event. This will be triggered when the user enters the cursor into the field with this value set.

FND POPID         This is the user exit to pop up a flexfield screen, and let the user enter the flexfield values.

CODE="GL#"     This is the flexfield code for the key flexfield that we will be using.

APPL_SHORT_NAME="SQLGL" The short name for the application the flexfield belongs too. Together with ‘Code’, this will identify the flexfield itself.

NUM="101"       The id-number for the flexfield structure. If you have only a single structure flexfield, it is optional. For flexfields enabled for multiple structures, you need to enter the id-number.

VALIDATE="PARTIAL"   Validate can be ‘None’, ‘Partial’ or ‘Full’. None means the combination is not validated. Partial means that the separate segments are validated, but there is no check whether the combination exists. Full means that both segments and combination are checked, and if a new combination is entered, it will be inserted into the key flexfield table.

SEG=":!VALUE"                This is the forms field that will be used to store the value of the segments.

The second box is for the ‘Validation’ event. This code will be called when the user navigates out of the field, or submits the entire combination.

Now when we set this value set as a parameter for our concurrent program, we can see how the validation works:

 

Now when we run the program, we get this pop-up:

 

We have all the functionality of the key flexfield. We can use the ‘Combinations’ button to search for existing combinations, and all separate segments will be validated, as will be the final combination.

When we submit a value to our program, it will show the concatenated segments as the value of our parameter:

 

Now let’s see some more features of this validation. For example, we’d like to have the value of the combination id. (CODE_COMBINATION_ID in our case, since we use the Accounting Flexfield).

To get that, we need to add the LOADID user exit:

 

The ‘Load’ event will get the combination-id from the flexfield table. This is only possible with VALIDATE="FULL", since that validates the whole combination. We also need to set ID=":!ID"; this populates the :!ID column with the ID value of the combination.

Finally, I added ‘DINSERT="NO"’, because we don’t want to allow insertion of new code combinations from this value set. (And VALIDATE="FULL" by default inserts new combinations into the key flexfield table.)

Now when we run the concurrent request, we see that the parameter value is the code_combination_id instead of the concatenated segments:


With these user exits it is also possible to select just a number of segments, instead of the whole combination. For this we remove the ‘Load’ / ‘LOADID’ part again.

Then we add a ‘DISPLAY=”x” ‘ to the ‘Edit’ and ‘Validate’ user exits. The “display” parameter is defaulting to ‘ALL’. But you can also specify separate segments by their sequence number or names. In our case, we display the first 2 segments:

 

Now when we run the concurrent program, we get a pop-up for only the first 2 values:

 

A very nice feature (at least as far as I’m concerned) is the use of a where clause on the combination values. Consider the following ‘Enter’ code:

FND POPID

CODE="GL#"

NUM="101"

APPL_SHORT_NAME="SQLGL"

VALIDATE="FULL"

TITLE="Special Validation Key"

ID=":!ID"

SEG=":!VALUE"

DESC=":!MEANING"

WHERE="segment2 not like '1%' "

 

The “WHERE” clause prevents us from choosing combinations that have a segment2 starting with ‘1’. When we run our concurrent program with this, and choose the combinations:
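For the Accounting Flexfield, the effect of that WHERE clause is comparable to this query on the combinations table (structure 101 and a three-segment display are assumed, as in the examples above):

```sql
select code_combination_id
,      segment1 || '.' || segment2 || '.' || segment3 combination
from   gl_code_combinations
where  chart_of_accounts_id = 101
and    segment2 not like '1%';   -- mirrors the WHERE clause in the value set
```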

 

 

There is no Dpt starting with 1.

 

When we add the “WHERE”-clause to the validation event too, it will prevent us from entering the values manually:

 

 

 

The last feature that we’ll look into is the use of a pl/sql validation through the special validation routines. By using the user-exit PLSQL, we can call an anonymous PL/SQL block in our ‘Validation’ event. I created a value set with the following function for the ‘Validation’ event:

 

FND PLSQL "declare
  v_value varchar2(10) := :!value;
  v_sum   number;
  v_valid boolean;
begin
  v_sum := 0;
  for i in 1 .. length(v_value) loop
    v_sum := v_sum + (length(v_value) + 1 - i) * substr(v_value, i, 1);
  end loop;
  if mod(v_sum, 11) = 0 then
    v_valid := TRUE;
  else
    v_valid := FALSE;
  end if;
  if not v_valid then
    fnd_message.set_name('FND', 'FND_GENERIC_MESSAGE');
    fnd_message.set_token('MESSAGE', 'This is not a valid bank account');
    fnd_message.raise_error;
  end if;
end; "

 

 

This PL/SQL block validates a (Dutch) bank account number. If the value does not pass the test, a message is displayed. This gives you almost unlimited possibilities for validating entered data.

As you can see, it is only a ‘Validate’ event, because we don’t need any special functionality for entering the data. We can limit the entry to numbers only on the ‘Validation Set’ main page.

 

Now when we use this value set for our concurrent program, we can only enter valid Dutch bank accounts:

 

 

And

 


The list of parameters for the user exits is longer than this, so we won’t go through all the possibilities. Check the Developer’s Guide and the Flexfields Guide for a complete listing of options. (Did you notice the flexfield title that I sneaked into the pop-up? Try and find the option for that!)

Please try the different options for yourself, and realize the possibilities of the special validation.

 

Pair Validation

Let’s continue with the ‘Pair’ validation. Pair validation is very much like ‘special’ validation. It uses the same kind of user exits, but this time a range of segment values or combinations is selected.

Let’s first create a range of the account segment. Instead of using POPID and VALID, we use POPIDR and VALIDR. The R-version of the user-exits automatically create a range.

Of course we need 2 parameters to set the range. However, we need only one validation set.

I created the validation set ‘XXX_PAIR_VAL’. I entered only the edit and validate events:

 

The next step is to set the parameters for both the low and high value. Both parameters have the validation set ‘XXX_PAIR_VAL’.

 

Now when we run the program, we can enter a range. This includes validation that the high value is indeed higher or equal to the low value.

 

Of course the concurrent program will receive the values for 2 parameters.

 

With partial validation we can also enter a range over the whole account combination. Note that we cannot use FULL validation for pair validation, because that would mean using the combination-id from the flexfield table, and you cannot build a range based on combination-ids.

 

So we can only use PARTIAL and NONE for the validation. For that same reason, I have not yet had a reason to use a LOAD event for PAIR validation. It is however allowed to use one.

 

I created a PAIR validation for the whole accounting range as follows:

 

 

When used in the concurrent program, it will indeed allow us to enter a range of all segments:

 

 

That completes the chapter on PAIR validation too.

 

Oracle eBS 11i Infrastructure

 

In this article we will describe the infrastructure of Oracle eBusiness Suite (eBS). In its simplest form, eBS is a 3-tier application with a client tier, an application tier and a database tier.

Database-Tier

Let’s start with the DB-tier. Surprisingly, the database tier has very few eBS-specific features.

Of course we need a database (instance) and therefore an ORACLE_HOME. But the database can either be a single-instance or a RAC-installation and all Oracle RDBMS features are transparently available for eBS.

The management of the RDBMS Installation is also independent of eBS.

 

DB-Tier filesystem

Let’s start with the filesystem on the DB-tier. An ORACLE_HOME installation is needed for the RDBMS instance. This will be installed during installation of eBS, but a freshly installed ORACLE_HOME can also be used with an eBS database.

In the ORACLE_HOME, an extra directory is added: the appsutil directory. This directory contains the software and data needed for running Autoconfig and Rapidclone.

All other directories are at the discretion of the eBS DBA.

Oracle Instance

When we look at the instance to run eBS, we find a number of mandatory parameters for eBS. These are found in Metalink notes 216205.1 and 396009.1 (At the time of writing. Please verify these notes for yourself).

These parameters are recommended or mandatory based on testing by Oracle Corp. They will automatically be set by the eBS installer. But you should take note of them when you use a fresh installed ORACLE_HOME.

Then we finally come to the contents of the database.

The eBS Database

Let’s start with the schemas in the database. Oracle eBS creates a separate schema for every module. The schema is named after the short name of the module, for example AP (Oracle Payables / Accounts Payable) and AR (Oracle Receivables / Accounts Receivable).

There is a separate schema for the Application owner APPS.

The Application schemas contain the tables, indexes and sequences for the different applications. All objects in these schemas (except indexes, of course) have a synonym in the APPS Schema. In the APPS Schema we also find all PL/SQL objects, views and Materialized Views.

A major part of eBS is written in PL/SQL. All PL/SQL objects are also installed in the APPS Schema.

User sessions within eBS will usually run in the APPS Schema as well.
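A quick way to see this distribution of objects is to count object types per schema. A sketch, to be run as a DBA user; the schema list is just an example:

```sql
select owner, object_type, count(*) object_count
from   dba_objects
where  owner in ('APPS', 'AP', 'AR', 'GL')
group  by owner, object_type
order  by owner, object_type;
```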

That brings us to an extra schema in the database: APPLSYSPUB. This schema has access to some of the eBS tables and packages, which allows it to validate eBS logins and start an APPS session based on that login information. We will see the details of this later on.

Before release 11.5.2 every schema had its own tablespace. However, the number of modules for eBS (and with that the number of schemas) is ever increasing. So managing the database became more and more complex. In 11.5.2 Oracle introduced the Oracle Applications Tablespace Model (OATM). Within this model, the tablespaces in eBS are based on functionality, rather than schemas.

In this model, we see the following tablespaces:

APPS_TS_TX_DATA – Containing all transaction tables, Materialized Views and IOT’s

APPS_TS_TX_IDX – Containing all indexes for transaction tables

APPS_TS_SEED – Containing the tables and indexes with seeded data (as opposed to transaction data).

APPS_TS_INTERFACE – For Open Interface tables

APPS_TS_SUMMARY – Contains summary tables for several modules (AR, PA, BIM, etc)

APPS_TS_NOLOGGING – For tables and objects that are created with the NOLOGGING option

APPS_TS_ARCHIVE – Containing archive and history tables (and indexes)

APPS_TS_QUEUES – Containing the AQ (Advanced Queuing) objects

APPS_TS_MEDIA – Containing tables with LOB’s. For media objects or documents.

The Undo and Temp tablespaces are not part of the tablespace model.
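On an OATM-converted instance you can list these tablespaces directly:

```sql
select tablespace_name
from   dba_tablespaces
where  tablespace_name like 'APPS_TS%'
order  by tablespace_name;
```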

 

Application Tier

Now it’s time to look at the Application Tier. In fact the Application Tier consists of 3 different services: Web service, Forms service and Concurrent Processing. In 11i installations, there is also an Administration service.

The Application Tier significantly changed from R11i to R12. We’ll discuss the 11i Apps Tier shortly, and then discuss the R12 tier in more detail.

The 11i infrastructure

Both the R11i and R12 infrastructure consist of a web service, a forms service and a concurrent processing part.

We will be discussing the different services here. The following picture shows all components and their communications. You might want to keep it for reference during this article.

 

 

R11i Web Service

For 11i, the web tier is built on Oracle iAS 9i. The iAS installation provides webservices (Apache HTTP/HTTPS), a Java Runtime Engine (JSERV) and a PL/SQL engine (modplsql).

The web service also acts as a gateway for the concurrent request log and output files. And it is the first point of access when starting a forms session (when using a socket forms connection; with a forms servlet, the web tier hosts the forms processes).

A detail from the picture above shows the iAS structure.

 

The core of iAS is the web server. This is the front-end for the client. Requests can also be forwarded to and from the forms server and the concurrent processing. We’ll see that in the next paragraphs.

Within iAS, Jserv and modplsql are plugins. They are the only components that communicate with the database. When they are called, they execute java (Jserv) or PL/SQL (modplsql) and return an html page. This page is then sent to the client through the HTTP service.

Jserv delivers a Java Runtime Environment. In Jserv, java servlets can be run. The JSP-files are also executed in Jserv. A JSP-file (Java Server Page) is a page with java code that returns an html page (similar to the way scripting languages like PHP work). The java part is executed in Jserv, which returns the html to the webserver. The webserver redirects the html to the client.

 

Let’s take a closer look at the components and their executions:

 

Webserver

The webserver is based on the regular Apache webserver. The configuration file is also the same as the Apache config file. The configuration is set in httpd.conf (or httpds.conf for SSL).

Instead of starting the webserver through $APACHE_HOME/bin/httpdctl, we start it through $APACHE_HOME/apachectl. The default port number for eBS 11i is 8000. This is part of port-pool 0. For different port-pools, the port number is increased; for port-pool 1 the webserver runs on port 8001.

The root directory for the webserver is set to $OA_HTML, which is by default $COMMON_TOP/html. This directory contains all *.html files for eBS.

A number of virtual directories are set up within eBS.

 

JServ

As mentioned before, java code is executed by Jserv. Jserv is a java servlet engine. That means that it can run both servlets and jsp-files.

These servlets are mostly located in the $JAVA_TOP. The *.jsp files are located in the $OA_HTML directory.

One of the options of Jserv is to create a database connection to the rdbms database. This is done by a JDBC Thin Client connection.

Before we look at the configuration for Jserv, examine the following picture.

Within Jserv, we can define different java environments, called zones. These zones are configured with different servlets or java archives (jar-files). Each zone is configured with its own configuration file. Within the zone the startup parameters (initargs) for the servlet are defined.

On the other side of the picture, you see a group. All java processes within Jserv are grouped together. You must define at least one group. The default group is ‘OACoreGroup’. Within each group, we create one or more processes that will be mapped to our zones.

This mapping is done by mounting the zones and the groups to different logical directories. In the picture, a mountpoint is created: /oa_servlets/. It refers to the group ‘OACoreGroup’, which holds 3 java processes. And it is mapped to zone ‘root’, which includes the servlet ‘dummy’ with a set of startup parameters.

When iAS receives a call to the virtual directory /oa_servlets/ it will be recognized as a Jserv mount point and the request will be forwarded to Jserv. In this example Jserv has 3 java processes in the group for this mount point. And they will be able to run all the servlets in the zone.

Sounds complicated? Take a look at the following configurations:

Jserv.conf:

ApJServGroup OACoreGroup 3 1 /etc/oracle/iAS/Jserv/etc/jserv.properties

ApJServGroupMount /oa_servlets balance://OACoreGroup/root

 

Jserv.properties:

 

wrapper.bin=/opt/oracle/iAS/Apache/Apache/bin/java.sh

zones=root<host>

root<host>.properties = /etc/oracle/iAS/Jserv/etc/zone.properties

 

Zone.properties

servlet.Dummy.initArgs=message=I'm a dummy servlet

 

Within jserv.conf we define a group called OACoreGroup. This group is running 3 Java processes. And the definition of the group is in the jserv.properties file. The 1 indicates the weight for load-balancing with multiple groups.

Then we mount the zone ‘root’ to the group ‘OACoreGroup’. This mount point is linked to the virtual directory /oa_servlets/.

The virtual directory is used for redirection to the JServ. When a request is made to the virtual directory Jserv will be called. The part of the URL after the virtual directory is the path to the servlet. This path will be searched for in the $CLASSPATH.

When the java servlets need to connect to the database, they can build a connection using JDBC. The access information is stored in a *.dbc file in $FND_TOP/secure. The dbc-file is referred to in the parameters for the zone.

 

Modplsql

Let’s take a look at the modplsql module. This module is designed to run pl/sql procedures within the database. The connection is based on the wdbsvr.app file. This file contains the DAD (Database Access Descriptor), including the access data to the eBS database.

The module is also called through a virtual directory. For example http://<host>:<port>/pls/TESTDB/dummy. /pls/ is the virtual directory that refers to modplsql, TESTDB is the name of the DAD and dummy is the name of a pl/sql procedure accessible for the db-user from the DAD.

 

That concludes the 9i iAS module for now.

 

Formsserver

Let’s take a look at the forms server.

Oracle forms can be set up in two ways, socket connection and servlet. The default is socket connection. With a socket connection, a separate forms server and dedicated forms processes are used. For the servlet connection, a java servlet is called within the iAS.

The formsserver itself is installed in the 8.0.6 ORACLE_HOME. Forms has a forms server, and one or more client processes. The forms server is started with f60svr. It will spawn an f60webmx process for every client session connecting.

On the server side, forms are run in the forms client processes. On the client side, they are run in a java applet. When the client clicks a forms based function in the Personal Home Page, it calls an URL that refers to the forms client executable in the 8.0.6 ORACLE_HOME.

This URL is taken from the profile option ‘ICX: Forms Launcher’, and the default value is like ‘http://<server>:<port>/dev60cgi/f60cgi’. The parameters referring to the function being clicked are added to this URL as parameters. (i.e. the name of the form to be started)
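The internal name of this profile option is ICX_FORMS_LAUNCHER, so you can check the value for the current (initialized) session with:

```sql
select fnd_profile.value('ICX_FORMS_LAUNCHER') from dual;
```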

When this URL is called, the webserver will execute the executable f60cgi. This executable returns a HTML page to the client. This page is called the ‘Base HTML’ for this forms server. (by default this is $OA_HTML/US/appsbase.htm)

This HTML page calls the J-initiator plugin (or the native JVM when configured). It also includes the parameters to connect to the forms server and the name of the form to start.

The J-initiator will start an applet on the client, which connects to the forms listener process. The forms listener process then assigns a dedicated forms client process.

At this point the whole chain looks like this:

 

The configuration for the forms server is in the appsweb.cfg file in $OA_HTML/bin. This file contains the basic coloring scheme for the forms server, the forms settings and the referral information to the J-Initiator plugin. The plugin on the client side is called through its class-id, which is also set in appsweb.cfg.

 

Concurrent Managers

The last part of the application tier is the concurrent processing part. The concurrent managers are used to execute background and batch processes.

Different executables including host-command files, pl/sql procedures, Oracle Reports, SQL*Loader control files and Binary executables can be defined to be run as concurrent programs. Parameters are also optionally defined with the concurrent programs.

The executable files are defined separately from the concurrent programs, so one executable can be run as different programs with different parameters.

The programs are executed through ‘Requests’. A request is an instance of a concurrent program together with the values of its parameters. It can be scheduled to start at a specific time, or on a recurring schedule. The output of the program can be sent to a printer, and it is also available through the application.

This picture shows the relation between programs, requests and managers.

 

The managers

We’ll take a closer look at the concurrent programs later. Let’s first look at the concurrent managers. There are concurrent managers and transaction managers, and a number of control managers are defined as well.

We’ll start with the ‘Internal Manager’. This is the first manager to be started. Its purpose is to control the stopping and (re-)starting of the other managers. When Generic Service Management is enabled (default as of 11.5.7), it delegates to the ‘Service Managers’. On every node where concurrent processing is enabled, a ‘Service Manager’ is started. However, only one Internal Manager is running at any time.

The other concurrent managers are defined with a work shift that controls how many processes a concurrent manager should have at certain times. The work shifts consist of a time-range and a number of processes. The Service Managers (or Internal Manager) will start and stop processes according to these work shifts.

Another part of the setup of concurrent managers is their specialization rules. The specialization rules indicate which programs are valid for a concurrent manager, or are excluded for that manager. They work on an include/exclude principle. When programs are included for that manager, the manager can only run those programs. When programs are excluded, the manager can run any program except the excluded ones.

When a request (to run a program) is submitted from eBS, it will be placed in FND_CONCURRENT_REQUESTS with a status_code ‘I’ (The eBS forms have fewer statuses than the codes in the table). The manager processes will query this table for requests that they are eligible to run.

Once a manager process finds a request with status_code ‘I’ that it is eligible to run, it puts the request on its own queue and runs the executable with the defined parameters. The log and output files are written to the file system in $APPLCSF/$APPLLOG and $APPLCSF/$APPLOUT respectively.
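As a sketch of what the managers are doing, you can query the request table yourself (assuming access to the APPS schema; the columns are from the standard FND_CONCURRENT_REQUESTS table, and the status interpretation follows the description above):

```sql
-- List the requests that are waiting to be picked up by a manager.
select request_id
,      phase_code            -- 'P' = Pending, 'R' = Running, 'C' = Completed
,      status_code           -- 'I' indicates a request waiting to be run
,      concurrent_program_id
,      requested_start_date
from   fnd_concurrent_requests
where  status_code = 'I'
order  by requested_start_date;
```

The manager processes run a similar (more elaborate) query, restricted by their specialization rules.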

There are some special cases that need to be discussed. The first is incompatibility. Concurrent programs can be made incompatible with each other, which means they cannot run at the same time. Once a program is started that is defined as incompatible with another, it is automatically put on the queue of the ‘Conflict Resolution Manager’. This special manager checks whether any incompatible program is running or ‘Pending’ with code ‘I’. If so, it holds the request on its own queue. If no incompatible program is running or ‘Pending’, it sets the status of the request to ‘Pending’.

Another special case is the ‘Transaction Managers’. They are started and stopped the same way as the other concurrent managers, but they do not use the request queue. Transaction managers are called online from the eBS forms, and they execute a limited number of programs, which are defined within their executables. They are called through the ‘FND_TRANSACTION.SYNCHRONOUS’ procedure, which uses the ‘DBMS_PIPE’ package.
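The FND_TRANSACTION internals are outside the scope of this article, but the underlying DBMS_PIPE mechanism can be sketched as follows (a minimal illustration only, not the actual eBS implementation; the pipe name is made up):

```sql
declare
  v_status  integer;
  v_payload varchar2(100);
begin
  -- Caller side: pack a message and send it on a named pipe.
  dbms_pipe.pack_message('RUN_PROGRAM_X');
  v_status := dbms_pipe.send_message('xxx_demo_pipe');

  -- Receiver side (normally a transaction manager in another session):
  -- wait up to 5 seconds for a message to arrive.
  v_status := dbms_pipe.receive_message('xxx_demo_pipe', timeout => 5);
  if v_status = 0 then
    dbms_pipe.unpack_message(v_payload);
    dbms_output.put_line(v_payload);
  end if;
end;
```

Because pipes live in the SGA rather than in a table, this communication is fast and non-transactional, which is exactly what an online call from a form needs.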

 

The programs

It’s time to look at the concurrent programs. As mentioned before, a concurrent program is an instantiation of an executable. The executable is defined with a short name, an application (module), a filename and a method.

The short name will uniquely identify the executable. The other data is needed to determine what should be run for this executable. If the executable is an OS-based program, the application will be used to derive the directory on the file system where the executable is found.

When the executable is defined as PL/SQL, the filename will contain the procedure that needs to be run.

The concurrent program is defined as the executable with an (optional) set of parameters. It also has some properties for the printing of the output (print-style, pre-defined printer, size of the output).

Depending on the type of executable, the parameters are passed to it either ordered or named. For PL/SQL procedures and host files, the parameters are ordered, and the order in which they are defined in the form determines how they are passed to the executable. For reports the parameters are named, which means they are passed as <parameter>=<value>, ….
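For PL/SQL executables this means the procedure must follow the standard signature: eBS passes errbuf and retcode first, followed by the user-defined parameters in their definition order. A sketch (the procedure and parameter names here are made up):

```sql
create or replace procedure xxx_demo_conc_prog
( errbuf   out varchar2  -- completion message, shown by the concurrent manager
, retcode  out varchar2  -- '0' = success, '1' = warning, '2' = error
, p_org_id in  varchar2  -- example user parameter; all values arrive as varchar2
) is
begin
  -- fnd_file writes to the request's log and output files.
  fnd_file.put_line(fnd_file.log, 'Processing organization '||p_org_id);
  retcode := '0';
end xxx_demo_conc_prog;
```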

After the request has finished, the request table FND_CONCURRENT_REQUESTS is updated with the status information and a reference to the log and output files. During the execution of the request, a status_code and phase_code are maintained. The exact values of these fields will be described in a later article that goes deeper into concurrent processing.

The output

The log and output files from the requests are written to the directories $APPLCSF/$APPLLOG and $APPLCSF/$APPLOUT. But of course they also need to be made available to the user. This is done through the ‘Applications Report Review Agent’.

There is quite a lot of setup that can be done for the whole process. But within the scope of this article, we’ll only look at the basic architecture.

From the eBS form, the log and output files are available through two buttons. These buttons call the web server for ‘FNDWRR.exe’ (.exe on both Windows and Unix).

FNDWRR.exe is a cgi-executable that will call the ‘FNDFS listener’.

This is an 8.0.6 TNS listener in the eBS ORACLE_HOME on the eBS application tier. One of the lesser-known features of TNS listeners is that they can do more than create database connections.

In the listener.ora, you can define a program to be called when a connection is made on a certain TNS entry. This feature is used for the FNDFS listener: when it is called, it redirects traffic to the ‘FNDFS’ (FND File System) executable. This executable reads the requested file from the file system and sends it to FNDWRR.exe.
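A minimal sketch of such an entry in the listener.ora (the paths are placeholders; the real file is generated by the eBS configuration tools):

```
(SID_DESC = (SID_NAME = FNDFS)
            (ORACLE_HOME = <8.0.6 ORACLE_HOME>)
            (PROGRAM = <FND_TOP>/bin/FNDFS)
)
```

The PROGRAM clause is what makes the listener spawn the FNDFS executable instead of creating a database connection.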

Again, we have a diagram to show the whole flow:

 

This complex retrieval is of course needed because the concurrent processing tier can be separate from the forms and web-tiers.

 

Oracle Advanced Queuing (AQ)

 

A customer asked me to do a presentation about AQ (Advanced Queuing). This article is a rewrite of that presentation. We’ll be getting some hands-on experience with AQ, and then some tips on the issues that might occur.

As always, your comments to this article are more than welcome. If you enjoy this article, find it useful or maybe not at all, please let me know by leaving a comment on the site.

The article is also available in PDF format from here.

 

Queuing: I don’t want to queue.

 

The title might be true in many cases, but there are also situations where a queue is very convenient. For example in the case of batch processing where a batch process handles multiple incoming messages from an online process. Or when 2 processes need inter-process communication, but still need to function independently of each other.

In eBS we use queues for the workflow system. (Deferred items, notifications for the workflow mailer and the Business Event System). Some more queues are found for concurrent processing and SFM.

 

So whether you like it or not, you’ll have to queue. The trick is to manage these queues to get optimal performance for your system.

 

(Advanced) Queue design

 

Before we can start building queues, there are some things to consider.

AQ supports both point-to-point queues and publish-subscribe-queues (also called multi-consumer queues).

 

Point-to-point queues are emptied by only one specific process. One or more processes can enqueue messages on the queue, but only one process can dequeue them.

 

In contrast, a publish-subscribe queue can have many processes reading the messages in the queue. Either the messages are broadcast, or the receivers subscribe to certain kinds of messages.

Of course the publish-subscribe queue has some very interesting properties, but we’ll start with the point-to-point queue.

 

So you’ll first have to decide who the senders and receivers of the queue data will be. In this article, we start with using a point-to-point queue. After that we start using multi-consumer queues.

Another thing to consider is the payload of the message. Of course, the messages will need some content to give it a meaning to the receiver. This content is called the payload. And you can either use a custom type (including XML), or a raw type.

During this article, we’ll see some more features of AQ. But when we decide on the type of queue and the payload type, we can build our own queues. All queues are built on queue-tables. These tables hold the data in the queue. On top of these tables, the actual queue and some management views are built.

To build a queue-table, we use the dbms_aqadm package:

dbms_aqadm.create_queue_table(queue_table        => '<table_name>'
                             ,queue_payload_type => ['RAW'|'<custom_type>']);

 

This creates the queue table including a LOB segment for the payload, some indexes, and an ‘Error queue’:

 

begin
  dbms_aqadm.create_queue_table(queue_table        => 'xxx_test'
                               ,queue_payload_type => 'RAW');
end;

 

select object_name, object_type from dba_objects where created > sysdate-1/24;

 

OBJECT_NAME                 OBJECT_TYPE
--------------------------- -----------
SYS_C0011768                INDEX
XXX_TEST                    TABLE
SYS_LOB0000073754C00029$$   LOB
SYS_LOB0000073754C00028$$   LOB
AQ$_XXX_TEST_T              INDEX
AQ$_XXX_TEST_I              INDEX
AQ$_XXX_TEST_E              QUEUE
AQ$_XXX_TEST_F              VIEW
AQ$XXX_TEST                 VIEW

 

This created the base-table for a point-to-point queue. The table is a regular heap-oriented table. And you are free to create extra indexes on it, if you feel the urge. The necessary indexes have been created already.

 

The queue that has been created at this point is the default error queue. Messages that fail dequeuing will be moved to this queue.

 

Now it’s time to create the actual queue. The queue-tables are the infrastructure for storing the messages and related information. The queue can now be created to control the queuing and dequeuing of messages.

 

For both point-to-point and publish-subscribe queues, the command is:

 

dbms_aqadm.create_queue(queue_name  => '<queue_name>'
                       ,queue_table => '<table_name>');

 

 

So for us we run:

begin
  dbms_aqadm.create_queue(queue_name  => 'xxx_test_q'
                         ,queue_table => 'xxx_test');
end;

 

 

This creates an object of type QUEUE. This is the object that will control the contents of the underlying tables / IOTs.

 

Before we can start using our queues, we also have to ‘start’ them. On starting, we indicate whether the queue is available for queuing, dequeuing or both:

 

begin
  dbms_aqadm.start_queue(queue_name => 'xxx_test_q'
                        ,enqueue    => TRUE
                        ,dequeue    => TRUE);
end;

 

Our queue is now enabled for both enqueuing and dequeuing. Let’s first verify that things are working correctly.

 

To enqueue (or dequeue) a message, we use the dbms_aq package. It has an enqueue and dequeue procedure. Both with their own parameters. The parameters include en-/dequeue options, message properties, a message_id and of course the message itself:

 

Declare

V_payload Raw(200);

V_msgid Raw(200);

V_enq_options Dbms_aq.Enqueue_options_t;

v_msg_properties dbms_aq.message_properties_t;

Begin

V_payload:=Utl_raw.Cast_to_raw('Hello world!');

Dbms_aq.Enqueue(Queue_name=>'xxx_test_q'

,Message_properties=>V_msg_properties

,Enqueue_options=>V_enq_options

,Payload=>V_payload

,Msgid=>V_msgid);

Dbms_output.Put_line(rawtohex(V_msgid));

end;

 

This enqueues a ‘Hello world!’ message, and returns the message id. If everything works correctly, you’ll see the msgid as a hexadecimal string. (Don’t forget to set serveroutput on).

 

We declared two extra variables: v_enq_options with the options used to enqueue this message, and v_msg_properties to set additional information about the message.

V_enq_options is of type ‘dbms_aq.enqueue_options_t’. This is a record of:

 

visibility          BINARY_INTEGER -- Options are dbms_aq.on_commit (default) and dbms_aq.immediate. Indicates whether the enqueue is part of the current transaction, or done autonomously.

relative_msgid      RAW(16)        -- If the message needs to be enqueued at a specific position, it will be relative to this msgid.

sequence_deviation  BINARY_INTEGER -- Options are DBMS_AQ.BEFORE, DBMS_AQ.TOP or NULL (default). BEFORE places the message before the relative_msgid; TOP makes it the first to be dequeued.

 

V_msg_properties is of type ‘dbms_aq.message_properties_t’. This is a record of:

 

priority        BINARY_INTEGER -- Any integer, to set the priority. Smaller is higher priority. The default is 1.

delay           BINARY_INTEGER -- If the message must be delayed before it can be dequeued, set the time in seconds here. The default is dbms_aq.no_delay.

expiration      BINARY_INTEGER -- For messages that need to expire after a certain time, set the expiration time in seconds (offset from the delay). The default is dbms_aq.never.

correlation     VARCHAR2(128)  -- A free-text field that can be used to identify groups of messages.

attempts        BINARY_INTEGER -- Number of failed dequeue attempts before the message is failed and marked as expired.

recipient_list  DBMS_AQ.AQ$_RECIPIENT_LIST_T -- Only valid for multi-consumer queues. Sets the designated recipients.

exception_queue VARCHAR2(51)   -- The exception queue to use, when it is not the default.

enqueue_time    DATE           -- Set automatically during enqueue.

state           BINARY_INTEGER -- Automatically maintained by AQ, to indicate the status of the message.

 

Let’s see if the dequeue also works. For this the procedure dequeue is used, with similar parameters.

 

Declare

V_payload Raw(200);

V_msgid Raw(200);

V_deq_options Dbms_aq.dequeue_options_t;

v_msg_properties dbms_aq.message_properties_t;

Begin

Dbms_aq.dequeue(Queue_name=>'xxx_test_q'

,Message_properties=>V_msg_properties

              ,dequeue_options=>V_deq_options

              ,Payload=>V_payload

             ,Msgid=>V_msgid);

Dbms_output.Put_line(utl_raw.cast_to_varchar2(V_payload));

End;

 

This time, our message should be displayed.

 

For the dequeue, we used v_deq_options of type ‘dbms_aq.dequeue_options_t’. This is a record of:

 

consumer_name  VARCHAR2(30)   -- Indicates the consumer for multi-consumer queues.

dequeue_mode   BINARY_INTEGER -- How to dequeue the messages: leave them on the queue or remove them. Either dbms_aq.browse or dbms_aq.remove (default).

navigation     BINARY_INTEGER -- Where to start dequeuing. dbms_aq.next_message (default) continues from the previous dequeue; dbms_aq.first_message starts at the top of the queue; dbms_aq.next_transaction skips the rest of the current message group.

visibility     BINARY_INTEGER -- Same as dbms_aq.enqueue_options_t.visibility.

wait           BINARY_INTEGER -- The time (in seconds) to wait if no message is available. Default is dbms_aq.forever.

msgid          RAW(16)        -- When specified, only the message with this msgid will be dequeued.

correlation    VARCHAR2(128)  -- Only messages with this correlation will be dequeued (may include wildcards).

 

Note how message_properties and payload are now out-parameters.

 

This is probably the simplest queue possible. We enqueued and dequeued a raw message. We didn’t specify the visibility. So your session still needs to commit these actions.

 

Now let’s enqueue our message again, and see how it works behind the curtain.

 

Declare

V_payload Raw(200);

V_msgid Raw(200);

V_enq_options Dbms_aq.Enqueue_options_t;

v_msg_properties dbms_aq.message_properties_t;

Begin

V_payload:=Utl_raw.Cast_to_raw('Hello world!');

Dbms_aq.Enqueue(Queue_name=>'xxx_test_q'

,Message_properties=>V_msg_properties

,Enqueue_options=>V_enq_options

,Payload=>V_payload

,Msgid=>V_msgid);

Dbms_output.Put_line('Msg_id: '||rawtohex(V_msgid));

end;

 

Msg_id: 499CE4809F2641E1BFBC8AFBC8DB5AFA

 

The queue table is an ordinary heap-table, so we can query it.

 

select q_name, rawtohex(msgid) msg_id, priority, state, enq_time, enq_uid

from   xxx_test;

 

q_name     msg_id                         priority state enq_time                   enq_uid

XXX_TEST_Q  499CE4809F2641E1BFBC8AFBC8DB5AFA     1     0 21-03-10 17:24:01,876000000 SYSTEM

 

 

We see our msgid again, a priority, a state, the time of enqueueing, and the user that enqueued the message. The message itself is also in the table, but since it is stored in a LOB, we won’t bother selecting from it yet.

 

There are more columns in the table that control the order in which, and by whom, the messages are dequeued. Most of them are still null, so we will look at them when needed.

 

A useful alternative to the table is to query the queue-view aq$<table_name>. This will show the translated values of the state. (0 = READY). And especially when using multi-consumer queues, it will use a join to select a more complete picture of the queue.

 

When we dequeue the message, it will disappear from the queue. (And be deleted from the queue table). However, this can be controlled by the retention parameter of the queue.

 

Let’s set this parameter, so we can check the data after the dequeue.

We set the retention time to 240 (seconds):

 

begin

DBMS_AQADM.ALTER_QUEUE(queue_name =>'xxx_test_q'

,retention_time => 240);

end;

 

Now when we dequeue the message, it will remain in the queue:

 

Declare

V_payload Raw(200);

V_msgid Raw(200);

V_deq_options Dbms_aq.dequeue_options_t;

v_msg_properties dbms_aq.message_properties_t;

Begin

Dbms_aq.dequeue(Queue_name=>'xxx_test_q'

,Message_properties=>V_msg_properties

              ,dequeue_options=>V_deq_options

              ,Payload=>V_payload

,Msgid=>V_msgid);

Dbms_output.Put_line(utl_raw.cast_to_varchar2(V_payload));

End;

 

select queue
,      rawtohex(msg_id) msg_id
,      msg_priority
,      msg_state
,      enq_timestamp
,      enq_user_id
,      deq_timestamp
,      deq_user_id
from   aq$xxx_test;

 

QUEUE      MSG_ID                           MSG_PRIO MSG_STATE ENQ_TIMESTAMP     ENQ_USER DEQ_TIMESTAMP     DEQ_USER_ID
XXX_TEST_Q AEC2CD2E34514363B6739969E8E8D353        1 PROCESSED 19-03-10 18:31:40 SYSTEM   19-03-10 21:26:45 SYSTEM

 

Now the message has been set to state ‘PROCESSED’, and some dequeue information has been added.

 

It’s time to start navigating queues when there are multiple messages in the queue.

 

Messages are by default dequeued in the order in which they are enqueued. On creation of the queue table, you can set other dequeue orders. But it is also possible to dequeue messages in a different order by navigating the queues, or using filter-criteria.

 

To show the dequeueing order we enqueue 10 different messages.

 

Declare

V_payload Raw(200);

V_msgid Raw(200);

V_enq_options Dbms_aq.Enqueue_options_t;

v_msg_properties dbms_aq.message_properties_t;

Begin

V_enq_options.visibility := dbms_aq.immediate;

for i in 1..10 loop

    V_payload:=Utl_raw.Cast_to_raw('This is the '||to_char(to_date(i,'J'),'jspth')||' message');

    Dbms_aq.Enqueue(Queue_name=>'xxx_test_q'

    ,Message_properties=>V_msg_properties

                  ,Enqueue_options=>V_enq_options

                  ,Payload=>V_payload

                        ,Msgid=>V_msgid);

    Dbms_output.Put_line(rawtohex(V_msgid));

end loop;

end;

 

This enqueues the texts ‘This is the first message’ through ‘This is the tenth message’. On dequeuing, the messages come out in the same order:

 

Declare

V_payload Raw(200);

V_msgid Raw(200);

V_deq_options Dbms_aq.dequeue_options_t;

v_msg_properties dbms_aq.message_properties_t;

Begin

for i in 1..10 loop

    Dbms_aq.dequeue(Queue_name=>'xxx_test_q'

                 ,Message_properties=>V_msg_properties

                  ,dequeue_options=>V_deq_options

                   ,Payload=>V_payload

                 ,Msgid=>V_msgid);

    Dbms_output.Put_line(utl_raw.cast_to_varchar2(V_payload));

end loop;

End;

 

This is the first message

This is the second message

………

This is the tenth message

 

 

When we created the queue table, we chose the default sort order, which is by enqueue time. We can also build a queue that uses priority dequeuing. First we create a new queue table and queue:

 

begin

dbms_aqadm.create_queue_table(queue_table=>'xxx_test_prio'

                             ,sort_list => 'PRIORITY,ENQ_TIME'

                             ,queue_payload_type=>'RAW');

dbms_aqadm.create_queue(queue_name=>'xxx_test_prio_q'

                       ,queue_table=>'xxx_test_prio');

dbms_aqadm.start_queue(queue_name=>'xxx_test_prio_q');

end;

 

This time we specified a sort_list. The options are 'ENQ_TIME' (default), 'ENQ_TIME,PRIORITY', 'PRIORITY' and 'PRIORITY,ENQ_TIME'. Now we enqueue some messages with reversed priorities:

 

Declare

V_payload Raw(200);

V_msgid Raw(200);

V_enq_options Dbms_aq.Enqueue_options_t;

v_msg_properties dbms_aq.message_properties_t;

Begin

V_enq_options.visibility := dbms_aq.immediate;

for i in 1..10 loop

    V_payload:=Utl_raw.Cast_to_raw('This is the '||to_char(to_date(i,'J'),'jspth')||' message');

    v_msg_properties.priority:=11-i;

    Dbms_aq.Enqueue(Queue_name=>'xxx_test_prio_q'

    ,Message_properties=>V_msg_properties

                  ,Enqueue_options=>V_enq_options

                  ,Payload=>V_payload

                 ,Msgid=>V_msgid);

    Dbms_output.Put_line(rawtohex(V_msgid));

end loop;

end;

 

And we dequeue them again:

 

Declare

V_payload Raw(200);

V_msgid Raw(200);

V_deq_options Dbms_aq.dequeue_options_t;

v_msg_properties dbms_aq.message_properties_t;

Begin

for i in 1..10 loop

    Dbms_aq.dequeue(Queue_name=>'xxx_test_prio_q'

                 ,Message_properties=>V_msg_properties

                  ,dequeue_options=>V_deq_options

                  ,Payload=>V_payload

                 ,Msgid=>V_msgid);

    Dbms_output.Put_line(utl_raw.cast_to_varchar2(V_payload));

end loop;

End;

 

This is the tenth message

This is the ninth message

…………

This is the second message

This is the first message

Now it’s time to look at queue navigation. It is possible to dequeue specific messages from the queue. You can select messages with a specific msg_id, correlation or recipient list (for multi-consumer queues).

We’ll first search for a specific correlation and then a message_id. We enqueue ten messages, with different correlations:

Declare

V_payload Raw(200);

V_msgid Raw(200);

V_enq_options Dbms_aq.Enqueue_options_t;

v_msg_properties dbms_aq.message_properties_t;

Begin

V_enq_options.visibility := dbms_aq.immediate;

for i in 1..10 loop

    V_payload:=Utl_raw.Cast_to_raw('This is the '||to_char(to_date(i,'J'),'jspth')||' message');

    v_msg_properties.correlation:=to_char('Corr'||i);

    Dbms_aq.Enqueue(Queue_name=>'xxx_test_q'

                 ,Message_properties=>V_msg_properties

                  ,Enqueue_options=>V_enq_options

                  ,Payload=>V_payload

                 ,Msgid=>V_msgid);

    dbms_output.Put_line('Msg_id: '||rawtohex(V_msgid)||' Correlation: Corr'||i);

end loop;

end;

 

Msg_id: E8BE83A2A2A04F1EA74863B4A7C78DAF Correlation: Corr1

Msg_id: 7159B80BC3194C7AAA6910AB10E753C5 Correlation: Corr2

Msg_id: 4AF3693CF7EE4994B0F78830371437B9 Correlation: Corr3

Msg_id: 44DBC0CB09C94BB98DF2D7E48971849C Correlation: Corr4

Msg_id: 98F3E119041E47F5BF46604E014120BF Correlation: Corr5

Msg_id: B71B7F097A9E4EDBA696958326BF6300 Correlation: Corr6

Msg_id: C4F5050B02904EEEAD2842405A0BDE2A Correlation: Corr7

Msg_id: E4D923A4CB4B4DF2B64B8421A88FFC42 Correlation: Corr8

Msg_id: BE199053188648AE8FA238A01A5C9CD1 Correlation: Corr9

Msg_id: 8991E793D2DB41F5B3F9D00D283B6F6D Correlation: Corr10

 

Now we can dequeue the 5th (correlation) and 8th (msg_id) message:

 

Declare

V_payload Raw(200);

V_msgid Raw(200);

V_deq_options Dbms_aq.dequeue_options_t;

v_msg_properties dbms_aq.message_properties_t;

Begin

v_deq_options.correlation:='Corr5';

Dbms_aq.dequeue(Queue_name=>'xxx_test_q'

,Message_properties=>V_msg_properties

,dequeue_options=>V_deq_options

,Payload=>V_payload

,Msgid=>V_msgid);

Dbms_output.Put_line(utl_raw.cast_to_varchar2(V_payload));

v_deq_options.correlation:=NULL;

v_deq_options.msgid:='E4D923A4CB4B4DF2B64B8421A88FFC42';

Dbms_aq.dequeue(Queue_name=>'xxx_test_q'

,Message_properties=>V_msg_properties

,dequeue_options=>V_deq_options

              ,Payload=>V_payload

,Msgid=>V_msgid);

Dbms_output.Put_line(utl_raw.cast_to_varchar2(V_payload));

End;

 

This is the fifth message

This is the eighth message

 

Note how we have to set the correlation back to NULL for the second dequeue. Otherwise we would be trying to dequeue a message with correlation ‘Corr5’ and the specified msg_id. Since that message does not exist, our procedure will just wait for the message to appear.

 

By default when you dequeue from an empty queue, or try to dequeue a non-available message, the dequeue will wait indefinitely for a message to appear. You can control this behavior with the dequeue options.

 

v_deq_options.wait := 10;               -- wait 10 seconds; any number of 0 or higher is allowed
v_deq_options.wait := dbms_aq.no_wait;  -- do not wait for a message
v_deq_options.wait := dbms_aq.forever;  -- wait indefinitely (default)

Do note that when the time-out is reached, an ‘ORA-25228: timeout in dequeue from <queue> while waiting for a message’ is raised. So you will need to handle the exception.
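A sketch of a dequeue that handles this timeout (here with no_wait, against the queue built above):

```sql
declare
  v_payload        raw(200);
  v_msgid          raw(16);
  v_deq_options    dbms_aq.dequeue_options_t;
  v_msg_properties dbms_aq.message_properties_t;
  e_deq_timeout    exception;
  pragma exception_init(e_deq_timeout, -25228);
begin
  v_deq_options.wait := dbms_aq.no_wait;
  dbms_aq.dequeue(queue_name         => 'xxx_test_q'
                 ,message_properties => v_msg_properties
                 ,dequeue_options    => v_deq_options
                 ,payload            => v_payload
                 ,msgid              => v_msgid);
  dbms_output.put_line(utl_raw.cast_to_varchar2(v_payload));
exception
  when e_deq_timeout then
    dbms_output.put_line('No message available.');
end;
```

Using exception_init to map ORA-25228 to a named exception keeps the handler specific, instead of a catch-all `when others`.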

One more feature to consider is the browsing mode. So far we have seen the messages that we dequeued were removed from the queue (or at least got status ‘Processed’). By setting the dequeue options, we can first inspect messages before dequeuing them. Consider the following. We have 8 messages left in our queue:

Declare

V_payload Raw(200);

V_msgid Raw(200);

V_deq_options Dbms_aq.dequeue_options_t;

v_deq_options_rm dbms_aq.dequeue_options_t;

v_msg_properties dbms_aq.message_properties_t;

Begin

v_deq_options.wait:=dbms_aq.no_wait;

v_deq_options.dequeue_mode:=DBMS_AQ.BROWSE;

for i in 1..10 loop

begin

 

   Dbms_aq.dequeue(Queue_name=>'xxx_test_q'

   ,Message_properties=>V_msg_properties

   ,dequeue_options=>V_deq_options

   ,Payload=>V_payload

   ,Msgid=>V_msgid);

   Dbms_output.Put_line(utl_raw.cast_to_varchar2(V_payload));

   dbms_output.put_line(v_msg_properties.correlation);

   if v_msg_properties.correlation='Corr6' then

       v_deq_options.dequeue_mode:=DBMS_AQ.REMOVE;

       v_deq_options.msgid:=v_msgid;

       Dbms_aq.dequeue(Queue_name=>'xxx_test_q'

                      ,Message_properties=>V_msg_properties

                      ,dequeue_options=>V_deq_options

                     ,Payload=>V_payload

                      ,Msgid=>V_msgid);

      Dbms_output.Put_line(utl_raw.cast_to_varchar2(V_payload));

      v_deq_options.dequeue_mode:=DBMS_AQ.BROWSE;

      v_deq_options.msgid:=NULL;

   end if;

   exception

    when others then

      null;

   end;

end loop;

End;

 

This removed only the 6th message from the queue, and left the others intact.

 

There are more options for queuing and dequeuing, like retrying failed attempts (a rollback after a dequeue is considered a failed attempt) and enqueuing with a delay or an expiration time. But the information so far should allow you to test these options on an as-needed basis.

 

Multi-consumer or publish-subscribe queues

 

Both the terms ‘publish-subscribe’ and ‘multi-consumer’ are used for these queues. I think ‘multi-consumer’ is the one most often used informally, so that is the one I will use in this article (even though ‘publish-subscribe’ is more accurate).

 

We build multi-consumer queues with dbms_aqadm again. But on creating the queue-table, we say that it has to be a multi-consumer queue-table:

 

begin
  dbms_aqadm.create_queue_table(queue_table        => 'xxx_mc_test'
                               ,multiple_consumers => TRUE
                               ,queue_payload_type => 'RAW');
end;

 

Now we see more objects being created. The most important ones are:

xxx_mc_test          The queue table itself.
aq$_xxx_mc_test_s    Table with information about the subscribers to the queue.
aq$_xxx_mc_test_r    Table with information about the rules for the subscriptions.
aq$_xxx_mc_test_h    IOT with historic information about dequeuing.
aq$_xxx_mc_test_i    IOT with dequeuing information.

 

As you can see, a lot more information is stored for multi-consumer queues. In part this information has to do with the subscription and subscriber mechanism. But there is also the need to keep a history of the dequeuing, to know when a message has been dequeued by all subscribers.

 

We will be seeing the use of all the objects in a few minutes, when we start queuing and dequeuing messages.

 

When we try to enqueue a message on this queue now, we receive an ‘ORA-24033: no recipients for message’. This means we need to set up subscribers first. If we enqueue without a recipient list, the message will be made available to all subscribers.
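The alternative, enqueuing with an explicit recipient list, can be sketched as follows (assuming the multi-consumer queue xxx_mc_test_q has already been created and started, analogous to the point-to-point queue earlier):

```sql
declare
  v_payload        raw(200);
  v_msgid          raw(16);
  v_enq_options    dbms_aq.enqueue_options_t;
  v_msg_properties dbms_aq.message_properties_t;
begin
  v_payload := utl_raw.cast_to_raw('For Agent1 only');
  -- Address this message to one named consumer instead of all subscribers.
  v_msg_properties.recipient_list(1) := sys.aq$_agent('Agent1', NULL, NULL);
  dbms_aq.enqueue(queue_name         => 'xxx_mc_test_q'
                 ,message_properties => v_msg_properties
                 ,enqueue_options    => v_enq_options
                 ,payload            => v_payload
                 ,msgid              => v_msgid);
end;
```

With an explicit recipient list, only the named consumers can dequeue the message, regardless of who else has subscribed.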

 

To add a subscriber, we use the dbms_aqadm package and a new object type: sys.aq$_agent.

This type is defined as an object with a name, an address and a protocol. The last two are used in inter-system communication only.

 

We can just call the following procedure:

 

DECLARE

V_agent sys.aq$_agent;

BEGIN

   V_agent:= sys.aq$_agent('Agent1',NULL,NULL);

   DBMS_AQADM.ADD_SUBSCRIBER(queue_name=>'xxx_mc_test_q'

      ,subscriber=>v_agent);

END;

 

We can see the subscribers from the view aq$xxx_mc_test_s (or the underlying table: aq$_xxx_mc_test_s):

 

select * from aq$xxx_mc_test_s;

 

QUEUE         NAME   ADDRESS PROTOCOL TRANSFORMATION
------------- ------ ------- -------- --------------
XXX_MC_TEST_Q AGENT1         0

 

Now let’s enqueue a message:

 

Declare

V_payload Raw(200);

V_msgid Raw(200);

V_enq_options Dbms_aq.enqueue_options_t;

v_msg_properties dbms_aq.message_properties_t;

Begin

v_payload := utl_raw.cast_to_raw('Hello world, again!');

Dbms_aq.enqueue(Queue_name=>'xxx_mc_test_q'

             ,Message_properties=>V_msg_properties

              ,enqueue_options=>V_enq_options

,Payload=>V_payload

             ,Msgid=>V_msgid);

Dbms_output.Put_line(utl_raw.cast_to_varchar2(V_msgid));

End;

 

Now when we look at the queue-view, we can see that a subscriber has been selected:

 

select queue,rawtohex(msg_id) msg_id,msg_state,consumer_name from aq$xxx_mc_test;

 

QUEUE        MSG_ID                        MSG_STATE CONSUMER_NAME

————– ——————————– ——— ————-

XXX_MC_TEST_Q BC4C48AC659946428F38F8BC3AB02184 READY     AGENT1

 

Now to dequeue the message, we also need to set the consumer_name in the dequeue options. When a message is enqueued without a recipient list, it can be dequeued by all subscribers. But on dequeuing, the subscriber needs to identify itself.

 

Declare

V_payload Raw(200);

V_msgid Raw(200);

V_deq_options Dbms_aq.dequeue_options_t;

v_msg_properties dbms_aq.message_properties_t;

Begin

v_deq_options.consumer_name:='Agent1';

Dbms_aq.dequeue(Queue_name=>'xxx_mc_test_q'

,Message_properties=>V_msg_properties

,dequeue_options=>V_deq_options

,Payload=>V_payload

,Msgid=>V_msgid);

Dbms_output.Put_line(utl_raw.cast_to_varchar2(V_payload));

End;

 

Hello world, again!

 

Now when we check the queue contents, we see that the message is still there. Even after a commit, the message has been retained. Maybe you won’t see it on your system immediately; in that case, run:

 

Begin
  Dbms_aqadm.stop_time_manager;
End;

 

And enqueue/dequeue a message again. Now when you look in xxx_mc_test or aq$xxx_mc_test, you will see the message being retained (with status ‘PROCESSED’). When you start the time_manager again, the message will disappear after some time.
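To restart the time manager afterwards, the counterpart procedure can be used:

Begin
  Dbms_aqadm.start_time_manager;
End;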

 

The reason for this is that Oracle enhances concurrency by using a separate index-organized table (IOT) for dequeuing. When we enqueue a message again:

 

Declare
  V_payload Raw(200);
  V_msgid Raw(200);
  V_enq_options Dbms_aq.enqueue_options_t;
  V_msg_properties Dbms_aq.message_properties_t;
Begin
  V_payload := utl_raw.cast_to_raw('Hello world');
  Dbms_aq.enqueue(Queue_name=>'xxx_mc_test_q'
                 ,Message_properties=>V_msg_properties
                 ,Enqueue_options=>V_enq_options
                 ,Payload=>V_payload
                 ,Msgid=>V_msgid);
  -- the msgid is a raw value; print it in hex
  Dbms_output.Put_line(rawtohex(V_msgid));
End;

 

We can see the data in the dequeue-IOT:

 

select subscriber#, queue#, msg_enq_time, msgid from aq$_xxx_mc_test_i;

SUBSCRIBER# QUEUE# MSG_ENQ_TIME                MSGID
----------- ------ --------------------------- --------------------------------
          1      0 21-03-10 14:16:00,252000000 75E41875D957455B84D80B55AE06F81C

 

Here the basic information about our message is recorded. After a subscriber dequeues the message, its version of the record is deleted from this table only (try this yourself to confirm). The queue monitors are responsible for cleaning up the queue table after all subscribers have dequeued the message.

 

Now let’s see what happens when we add a second subscriber for our queue:

 

DECLARE
  V_agent sys.aq$_agent;
BEGIN
  V_agent := sys.aq$_agent('Agent2',NULL,NULL);
  DBMS_AQADM.ADD_SUBSCRIBER(queue_name=>'xxx_mc_test_q'
                           ,subscriber=>V_agent);
END;

 

Any messages that were already enqueued won't be available for this new subscriber. It can only dequeue messages enqueued after the subscriber was added.

 

Also you can’t just change subscribers in an existing session. If you try, you will get an ORA-25242: Cannot change subscriber name from string to string without FIRST_MESSAGE option.

 

As the message indicates, you need to change the navigation of the dequeue. The default navigation is 'next_message', which means that Oracle will read the queue in a read-consistent and ordered way: it takes a snapshot of the queue when the first message is dequeued, and dequeues the messages in that order. Messages that were enqueued after the first dequeue are only read after all the messages in the snapshot, even if their priority would otherwise order them earlier.

 

An alternative navigation is 'first_message'. When the navigation is set to 'first_message', Oracle takes a new snapshot before every dequeue, and starts with the first message eligible for dequeuing.

Because we changed subscribers, we need to set the navigation to 'first_message' to force Oracle to take a new snapshot.

 

(By the way: if you try 'first_message' with dequeue_mode 'BROWSE', you will never get beyond the first message. Try it!)

(The same goes for changing filter options, such as the correlation.)
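A minimal sketch to demonstrate the browse behavior, assuming the queue and subscriber from above and at least one message available: with dequeue_mode 'BROWSE' and navigation 'first_message', every iteration re-takes the snapshot and returns the same first message, since browsing removes nothing.

Declare
  V_payload Raw(200);
  V_msgid Raw(200);
  V_deq_options Dbms_aq.dequeue_options_t;
  V_msg_properties Dbms_aq.message_properties_t;
Begin
  V_deq_options.consumer_name := 'Agent1';
  V_deq_options.dequeue_mode  := Dbms_aq.Browse;
  V_deq_options.navigation    := Dbms_aq.First_message;
  V_deq_options.wait          := Dbms_aq.No_wait;
  For i In 1..3 Loop
    -- each iteration browses the same (first) message again
    Dbms_aq.dequeue(Queue_name=>'xxx_mc_test_q'
                   ,Message_properties=>V_msg_properties
                   ,Dequeue_options=>V_deq_options
                   ,Payload=>V_payload
                   ,Msgid=>V_msgid);
    Dbms_output.Put_line(rawtohex(V_msgid));
  End Loop;
End;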

 

Let’s start a new session, and enqueue a new message:

 

Declare
  V_payload Raw(200);
  V_msgid Raw(200);
  V_enq_options Dbms_aq.enqueue_options_t;
  V_msg_properties Dbms_aq.message_properties_t;
Begin
  V_payload := utl_raw.cast_to_raw('Hello agents!');
  Dbms_aq.enqueue(Queue_name=>'xxx_mc_test_q'
                 ,Message_properties=>V_msg_properties
                 ,Enqueue_options=>V_enq_options
                 ,Payload=>V_payload
                 ,Msgid=>V_msgid);
  -- the msgid is a raw value; print it in hex
  Dbms_output.Put_line(rawtohex(V_msgid));
End;

 

The message is still the same in xxx_mc_test:

 

select q_name, rawtohex(msgid) msg_id, state, enq_time, enq_uid from xxx_mc_test;

Q_NAME        MSG_ID                           STATE ENQ_TIME                    ENQ_UID
------------- -------------------------------- ----- --------------------------- -------
XXX_MC_TEST_Q 45F11423444747B99600BCD8E9B3141E     0 21-03-10 14:33:23,783000000 SYSTEM

 

But in the queue view, we now see 2 records:

 

select queue, msg_id, msg_state, enq_time, enq_user_id, consumer_name from aq$xxx_mc_test;

QUEUE         MSG_ID                           MSG_STATE ENQ_TIME          ENQ_USER_ID CONSUMER_NAME
------------- -------------------------------- --------- ----------------- ----------- -------------
XXX_MC_TEST_Q 45F11423444747B99600BCD8E9B3141E READY     21-03-10 14:33:24 SYSTEM      AGENT1
XXX_MC_TEST_Q 45F11423444747B99600BCD8E9B3141E READY     21-03-10 14:33:24 SYSTEM      AGENT2

 

One record for each subscriber. We can see the same in the dequeue IOT and in the history table:

 

select subscriber#, queue#, msg_enq_time, msgid from aq$_xxx_mc_test_i;

SUBSCRIBER# QUEUE# MSG_ENQ_TIME                MSGID
----------- ------ --------------------------- --------------------------------
          1      0 21-03-10 14:33:23,783000000 45F11423444747B99600BCD8E9B3141E
         21      0 21-03-10 14:33:23,783000000 45F11423444747B99600BCD8E9B3141E

 

Select msgid, subscriber#, name, dequeue_time, dequeue_user From aq$_xxx_mc_test_h;

MSGID                            SUBSCRIBER# NAME DEQUEUE_TIME DEQUEUE_USER
-------------------------------- ----------- ---- ------------ ------------
45F11423444747B99600BCD8E9B3141E           1    0
45F11423444747B99600BCD8E9B3141E          21    0

 

Now when we dequeue the message, the queue table is not updated:

Select rawtohex(msgid) msg_id, state, enq_time, enq_uid, deq_time, deq_uid From xxx_mc_test;

MSG_ID                           STATE ENQ_TIME                    ENQ_UID DEQ_TIME DEQ_UID
-------------------------------- ----- --------------------------- ------- -------- -------
45F11423444747B99600BCD8E9B3141E     0 21-03-10 14:33:23,783000000 SYSTEM

 

However, the queue view reflects that the message has been dequeued by one subscriber.

 

Select queue, msg_id, msg_state, enq_time, enq_user_id, consumer_name From aq$xxx_mc_test;

QUEUE         MSG_ID                           MSG_STATE ENQ_TIME          ENQ_USER CONSUMER_NAME
------------- -------------------------------- --------- ----------------- -------- -------------
XXX_MC_TEST_Q 45F11423444747B99600BCD8E9B3141E PROCESSED 21-03-10 14:33:24 SYSTEM   AGENT1
XXX_MC_TEST_Q 45F11423444747B99600BCD8E9B3141E READY     21-03-10 14:33:24 SYSTEM   AGENT2

 

The record for Agent1 has been deleted from the dequeue-IOT:

 

select subscriber#, queue#, msg_enq_time, msgid from aq$_xxx_mc_test_i;

SUBSCRIBER# QUEUE# MSG_ENQ_TIME                MSGID
----------- ------ --------------------------- --------------------------------
         21      0 21-03-10 14:33:23,783000000 45F11423444747B99600BCD8E9B3141E

 

And the history table also shows the dequeue:

 

Select msgid, subscriber#, dequeue_time, dequeue_user From aq$_xxx_mc_test_h;

MSGID                            SUBSCRIBER# DEQUEUE_TIME                DEQUEUE_USER
-------------------------------- ----------- --------------------------- ------------
45F11423444747B99600BCD8E9B3141E          21 21-03-10 14:33:23,783000000 SYSTEM
45F11423444747B99600BCD8E9B3141E           1

 

To dequeue the message for 'Agent2', we of course need to set the navigation to 'first_message':

 

Declare
  V_payload Raw(200);
  V_msgid Raw(200);
  V_deq_options Dbms_aq.dequeue_options_t;
  V_msg_properties Dbms_aq.message_properties_t;
Begin
  V_deq_options.Wait := Dbms_aq.No_wait;
  V_deq_options.Navigation := Dbms_aq.First_message;
  V_deq_options.consumer_name := 'Agent2';
  Dbms_aq.dequeue(Queue_name=>'xxx_mc_test_q'
                 ,Message_properties=>V_msg_properties
                 ,Dequeue_options=>V_deq_options
                 ,Payload=>V_payload
                 ,Msgid=>V_msgid);
  Dbms_output.Put_line(utl_raw.cast_to_varchar2(V_payload));
End;

 

Now after QMON has processed the queue, the records will be deleted from all underlying structures. (When a retention time has been set, the records will of course be retained for that time.)
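Such a retention time can be set per queue with dbms_aqadm.alter_queue; for example, to retain processed messages for one day (86400 seconds) on our test queue:

Begin
  Dbms_aqadm.alter_queue(queue_name=>'xxx_mc_test_q'
                        ,retention_time=>86400);
End;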

 

Rules for multi-consumer queues

 

So far we have seen different kinds of filtering for dequeuing messages. Multi-consumer queues add a new option: each subscriber can put a filter on its subscription. These filters (rules) take the form of (possibly complex) predicates that evaluate to a Boolean value. A rule can reference both the message properties and the payload. To reference the payload, use the qualifier 'tab.user_data'.

 

Let’s build a new queue. To make optimal use of the ‘rule’-functionality we’ll use a custom type that can be referred to in the ‘rules’. The type that we’ll use is loosely based on the emp table.

 

create type t_emp as object
(empno number
,ename varchar2(10)
,job   varchar2(9)
);
/

BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE (queue_table => 'xxx_rule_test'
                                ,queue_payload_type => 't_emp'
                                ,multiple_consumers => TRUE); -- required to allow subscribers
  DBMS_AQADM.CREATE_QUEUE (queue_name => 'xxx_rule_test_q'
                          ,queue_table => 'xxx_rule_test');
  DBMS_AQADM.START_QUEUE (queue_name => 'xxx_rule_test_q');
END;

 

We add 2 subscribers to this queue.

 

DECLARE
  V_agent sys.aq$_agent;
BEGIN
  V_agent := sys.aq$_agent('HR_President',NULL,NULL);
  DBMS_AQADM.ADD_SUBSCRIBER(queue_name=>'xxx_rule_test_q'
                           ,subscriber=>V_agent
                           ,rule=>'tab.user_data.job=''President''');
  V_agent := sys.aq$_agent('HR_Employee',NULL,NULL);
  DBMS_AQADM.ADD_SUBSCRIBER(queue_name=>'xxx_rule_test_q'
                           ,subscriber=>V_agent);
END;

 

Note how the agent ‘HR_President’ has a rule added to its subscription. Only messages where the job attribute of the payload is ‘President’ are eligible for dequeuing by this agent. Let’s enqueue some messages on this queue.

 

Declare
  V_payload t_emp;
  V_msgid Raw(200);
  V_enq_options Dbms_aq.enqueue_options_t;
  V_msg_properties Dbms_aq.message_properties_t;
Begin
  V_payload := t_emp(1,'Jones','Manager');
  Dbms_aq.enqueue(Queue_name=>'xxx_rule_test_q'
                 ,Message_properties=>V_msg_properties
                 ,Enqueue_options=>V_enq_options
                 ,Payload=>V_payload
                 ,Msgid=>V_msgid);
  Dbms_output.Put_line(rawtohex(V_msgid));
  V_payload := t_emp(2,'King','President');
  Dbms_aq.enqueue(Queue_name=>'xxx_rule_test_q'
                 ,Message_properties=>V_msg_properties
                 ,Enqueue_options=>V_enq_options
                 ,Payload=>V_payload
                 ,Msgid=>V_msgid);
  Dbms_output.Put_line(rawtohex(V_msgid));
End;

 

Now we have 2 messages, only one of which matches the rule for the 'HR_President' subscriber. The 'HR_Employee' subscriber does not have a rule, and is thus eligible for all messages. We can see this when we query the queue view:

 

select queue, rawtohex(msg_id) msg_id, msg_state, consumer_name from aq$xxx_rule_test;

QUEUE           MSG_ID                           MSG_STATE CONSUMER_NAME
--------------- -------------------------------- --------- -------------
XXX_RULE_TEST_Q 4D0FF7A800834559809AD90AFCA81444 READY     HR_EMPLOYEE
XXX_RULE_TEST_Q E5A2FDFD8EE942349E9BC9DEE88CEB10 READY     HR_EMPLOYEE
XXX_RULE_TEST_Q E5A2FDFD8EE942349E9BC9DEE88CEB10 READY     HR_PRESIDENT

 

We see that both messages are enqueued for 'HR_Employee', but only the message with the job 'President' is enqueued for 'HR_President'.

 

Let’s dequeue the messages as ‘HR_President’ first, then as ‘HR_Employee’.

 

Declare
  V_payload t_emp;
  V_msgid Raw(200);
  V_deq_options Dbms_aq.dequeue_options_t;
  V_msg_properties Dbms_aq.message_properties_t;
Begin
  V_deq_options.wait := Dbms_aq.no_wait;
  V_deq_options.consumer_name := 'HR_President';
  Dbms_aq.dequeue(Queue_name=>'xxx_rule_test_q'
                 ,Message_properties=>V_msg_properties
                 ,Dequeue_options=>V_deq_options
                 ,Payload=>V_payload
                 ,Msgid=>V_msgid);
  Dbms_output.Put_line(V_payload.empno||' '||V_payload.ename);
End;

 

2 King

 

This time the first message enqueued was ignored for this subscriber. Only the message that met its rule was dequeued. When dequeuing as the ‘HR_Employee’ both messages will be dequeued.

 

Declare
  V_payload t_emp;
  V_msgid Raw(200);
  V_deq_options Dbms_aq.dequeue_options_t;
  V_msg_properties Dbms_aq.message_properties_t;
Begin
  V_deq_options.wait := Dbms_aq.no_wait;
  V_deq_options.navigation := Dbms_aq.first_message;
  V_deq_options.consumer_name := 'HR_Employee';
  Dbms_aq.dequeue(Queue_name=>'xxx_rule_test_q'
                 ,Message_properties=>V_msg_properties
                 ,Dequeue_options=>V_deq_options
                 ,Payload=>V_payload
                 ,Msgid=>V_msgid);
  Dbms_output.Put_line(V_payload.empno||' '||V_payload.ename);
  V_deq_options.navigation := Dbms_aq.next_message;
  Dbms_aq.dequeue(Queue_name=>'xxx_rule_test_q'
                 ,Message_properties=>V_msg_properties
                 ,Dequeue_options=>V_deq_options
                 ,Payload=>V_payload
                 ,Msgid=>V_msgid);
  Dbms_output.Put_line(V_payload.empno||' '||V_payload.ename);
End;

 

1 Jones

2 King

 

After these dequeues, the queue is empty for these subscribers. The only message eligible for 'HR_President' was '2, King, President'; 'HR_Employee' was eligible for both messages.

Remember that the rule must evaluate to a Boolean value. Valid references are 'tab.user_data.<attribute>' for object-type payloads. Message properties like 'priority' or the correlation can also be referenced in the rule.
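For example, a subscriber that only wants high-priority messages could be added with a rule on the 'priority' message property (the subscriber name 'HR_Urgent' is just an illustration):

DECLARE
  V_agent sys.aq$_agent;
BEGIN
  V_agent := sys.aq$_agent('HR_Urgent',NULL,NULL);
  DBMS_AQADM.ADD_SUBSCRIBER(queue_name=>'xxx_rule_test_q'
                           ,subscriber=>V_agent
                           ,rule=>'priority <= 2'); -- message-property reference
END;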

 

Common issues with queues and troubleshooting

 

Above we already saw several error messages related to queues. Most of them can be expected, and should be handled in the code.

 

The most common issues are queues that have not been started, or that are not enabled for enqueuing or dequeuing. The error messages for this are quite clear, and you can simply start the queue with the 'dbms_aqadm.start_queue' procedure. Note that when the queue was started for enqueuing or dequeuing only, you need to stop it first, and then start it again with the correct options enabled.
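That stop-and-restart sequence looks like this for our test queue:

Begin
  Dbms_aqadm.stop_queue(queue_name=>'xxx_mc_test_q');
  Dbms_aqadm.start_queue(queue_name=>'xxx_mc_test_q'
                        ,enqueue=>TRUE
                        ,dequeue=>TRUE);
End;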

 

Another issue may occur because of the AQ error handling system. A dequeue with dequeue_mode 'REMOVE' that is rolled back afterwards is considered a failed attempt. When the number of failed attempts exceeds the retry count of the queue, the message is moved to the exception queue. The message remains in the queue table, but with state 3: Expired. The exception_queue column is set to the name of the exception queue.

 

These messages are not available for dequeuing anymore. They must be dequeued from the exception queue.
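Such messages can be spotted in the base queue table itself, where state 3 marks them as expired:

select q_name, rawtohex(msgid) msg_id, state, retry_count, exception_queue
from   xxx_mc_test
where  state = 3;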

 

To dequeue from an exception queue, it first needs to be enabled for dequeuing (it cannot be enabled for enqueuing). Also, no consumer_name is allowed on the dequeue.

 

Begin
  Dbms_aqadm.start_queue(Queue_name=>'aq$_xxx_mc_test_e'
                        ,Enqueue=>False
                        ,Dequeue=>True);
End;

 

Declare
  V_payload Raw(200);
  V_msgid Raw(200);
  V_deq_options Dbms_aq.dequeue_options_t;
  V_msg_properties Dbms_aq.message_properties_t;
Begin
  V_deq_options.Wait := Dbms_aq.No_wait;
  Dbms_aq.dequeue(Queue_name=>'AQ$_XXX_MC_TEST_E'
                 ,Message_properties=>V_msg_properties
                 ,Dequeue_options=>V_deq_options
                 ,Payload=>V_payload
                 ,Msgid=>V_msgid);
  Dbms_output.Put_line(Utl_raw.Cast_to_varchar2(V_payload));
End;

 

The last issue to note with queues (especially in eBS systems) is described in Metalink note 267137.1. If multi-consumer queues are created in an ASSM tablespace, or when using freelist groups, QMON will not perform space management on the IOT's. This results in ever-growing IOT's and eventually in deteriorating performance.
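You can check whether a queue table lives in an ASSM tablespace with a query along these lines (using our test queue table as an example):

select t.tablespace_name, t.segment_space_management
from   dba_tablespaces t
join   dba_segments s
       on (s.tablespace_name = t.tablespace_name)
where  s.segment_name = 'XXX_MC_TEST';

A segment_space_management value of 'AUTO' indicates ASSM.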

 

Oracle workflow for eBS DBA’s (Part 4b ;-)

 A reader pointed out that I didn’t fulfill my promise to write about suspend, resume and abort in part 4 of ‘Oracle workflow for eBS DBA’s.

 
So to make up for that omission, I will first write a separate note about it here. Then I’ll incorporate it in the article at a later time. 
 
Let’s start with the abort. 
In part 3, we saw that we can get workflows in a state where they will never be able to continue again. We did this by setting the ‘On Revisit’ property to ignore. 
 
The correct way to handle these workflows is to run a workflow background engine with parameter ‘process_stuck’ set to ‘TRUE’. That will set the item status to ‘Error’ and run the appropriate error process. 
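From SQL, such a background engine run looks like this (the same wf_engine.background API is discussed in more detail in part 4):

Begin
  Wf_engine.background(itemtype=>'DBA_TYPE'
                      ,process_deferred=>FALSE
                      ,process_timeout=>FALSE
                      ,process_stuck=>TRUE);
End;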
 
But there may be situations where you want to just abort the item, without error processing.
 
For those situations Oracle provides the ‘wf_engine.abortprocess’ API. 
The API will set the status of the process to complete. If a result is needed, you can set this on the call to the API. It defaults to ‘#FORCE’ (wf_engine.eng_force constant).
 
Let’s see how this works. First I used the ‘MAIN_DBA_PROCESS’ from part 3 of the series, and I set the ‘On Revisit’ for the ‘LOOP_COUNTER’ to ‘Ignore’.
Now when I run the process, we get this result: 
 
INSTANCE_LABEL   FUNCTION                      BEGIN_DATE         END_DATE           STATUS   RESULT   OUTBOUND_QUEUE_ID
---------------- ----------------------------- ------------------ ------------------ -------- -------- -----------------
DBA_MAIN_PROCESS                               31-7-2009 12:36:58                    ACTIVE   #NULL
START            START                         31-7-2009 12:36:58 31-7-2009 12:36:58 COMPLETE #NULL
INITIALIZE_FLOW  XXX_WF_DBA.init               31-7-2009 12:36:58 31-7-2009 12:36:58 COMPLETE COMPLETE
COMPARETEXT      WF_STANDARD.COMPARE           31-7-2009 12:36:58 31-7-2009 12:36:58 COMPLETE EQ
CHECK_INVALIDS   XXX_CHECK_INVALIDS            31-7-2009 12:36:58 31-7-2009 12:36:58 COMPLETE Y
GET_INVALIDS     XXX_WF_UTILS.get_invalids     31-7-2009 12:36:58 31-7-2009 12:36:58 COMPLETE
LOOPCOUNTER      WF_STANDARD.LOOPCOUNTER       31-7-2009 12:36:58 31-7-2009 12:36:58 COMPLETE LOOP
PROCESS_INVALIDS XXX_WF_UTILS.process_invalids 31-7-2009 12:36:58 31-7-2009 12:36:58 COMPLETE
DEFER            WF_STANDARD.DEFER             31-7-2009 12:37:33 31-7-2009 12:37:33 COMPLETE #NULL
AND              WF_STANDARD.ANDJOIN           31-7-2009 12:37:33                    WAITING
TRACK_FLOW-1     TRACK_FLOW_PROGRESS           31-7-2009 12:37:42 31-7-2009 12:37:42 COMPLETE
 
 
Now we can abort the item with the API:
 
 
begin
  wf_engine.abortprocess(itemtype=>'DBA_TYPE'
                        ,itemkey=>'30'
                        ,process=>'ROOT:DBA_MAIN_PROCESS');
end;
 
Note how we have to indicate that we want to abort the root process of the DBA_MAIN_PROCESS. The workflow engine needs to know unambiguously which process to abort; the way to do that is to pass 'process:<activity>' to indicate the process. In our case this is 'ROOT:DBA_MAIN_PROCESS'.
 
And this is the result afterwards. 
 
INSTANCE_LABEL   FUNCTION                      BEGIN_DATE         END_DATE           STATUS   RESULT   OUTBOUND_QUEUE_ID
---------------- ----------------------------- ------------------ ------------------ -------- -------- -----------------
DBA_MAIN_PROCESS                               31-7-2009 12:36:58 31-7-2009 12:39:10 COMPLETE #FORCE
START            START                         31-7-2009 12:36:58 31-7-2009 12:36:58 COMPLETE #NULL
INITIALIZE_FLOW  XXX_WF_DBA.init               31-7-2009 12:36:58 31-7-2009 12:36:58 COMPLETE COMPLETE
COMPARETEXT      WF_STANDARD.COMPARE           31-7-2009 12:36:58 31-7-2009 12:36:58 COMPLETE EQ
CHECK_INVALIDS   XXX_CHECK_INVALIDS            31-7-2009 12:36:58 31-7-2009 12:36:58 COMPLETE Y
GET_INVALIDS     XXX_WF_UTILS.get_invalids     31-7-2009 12:36:58 31-7-2009 12:36:58 COMPLETE
LOOPCOUNTER      WF_STANDARD.LOOPCOUNTER       31-7-2009 12:36:58 31-7-2009 12:36:58 COMPLETE LOOP
PROCESS_INVALIDS XXX_WF_UTILS.process_invalids 31-7-2009 12:36:58 31-7-2009 12:36:58 COMPLETE
DEFER            WF_STANDARD.DEFER             31-7-2009 12:37:33 31-7-2009 12:37:33 COMPLETE #NULL
AND              WF_STANDARD.ANDJOIN           31-7-2009 12:37:33 31-7-2009 12:39:10 COMPLETE #FORCE
TRACK_FLOW-1     TRACK_FLOW_PROGRESS           31-7-2009 12:37:42 31-7-2009 12:37:42 COMPLETE
 
 
 
Compare this with the result from running a background engine with parameter ‘process_stuck=>TRUE’:
 
ORA-20002: 3150: Process 'DBA_TYPE/33' is being worked upon. Please retry the current request on the process later.
ORA-06512: at "APPS.WF_CORE", line 300
ORA-06512: at "APPS.WF_ENGINE", line 4528
ORA-06512: at line 2
 
Of course this error can be captured and handled, as we saw in 'Oracle workflow for eBS DBA's (Part 4)'.
 
Then let's take a look at the 'wf_engine.suspend' API. This is basically a 'pause' API for a workflow item. It sets the active process to status 'SUSPEND'.
 
The workflow engine will not pick it up any more until the wf_engine.resume API is called. 
 
Let's see the suspend and resume with a small example. I used the same DBA_CONTROL_PROCESS. After launching it, it will be deferred. Instead of running a background engine, we suspend it.
 
begin
     wf_engine.suspend(itemtype=>'DBA_TYPE',itemkey=>'34');
end;
 
And the status becomes:
 
INSTANCE_LABEL      FUNCTION          BEGIN_DATE         END_DATE           STATUS   RESULT OUTBOUND_QUEUE_ID
------------------- ----------------- ------------------ ------------------ -------- ------ --------------------------------
DBA_CONTROL_PROCESS                   31-7-2009 18:30:45                    SUSPEND  #NULL
START               START             31-7-2009 18:30:45 31-7-2009 18:30:45 COMPLETE #NULL
DEFER               WF_STANDARD.DEFER 31-7-2009 18:31:02                    DEFERRED #NULL  6FFEFFF31C2604F5E0440003BAB3AD6B
 
The interesting thing is that the deferred status is still there, including its queue_id. So when we run a background engine, we'll see that it indeed picks up the item: it dequeues the message, but leaves the status 'DEFERRED'.
 
Now when we resume the process:
 
begin
    wf_engine.resume(itemtype=>'DBA_TYPE',itemkey=>'34');
end;
 
The function is performed, and the item continues as usual.
 

Oracle Workflow for eBS DBA’s (Part 1)

For many Oracle eBS DBA's, workflow is a strange and hardly understood module. Still, it is widely used in 11i and R12. So let's dive into its workings in more detail.

This is part one of a so far unlimited series. Don’t be put off by the lack of code in this part. We first need to go through the basics. In the next parts we’ll get more action.
During this series, I used an 11.5.10 instance on a 9.2.0.8 database. The basic statements in these articles will still hold for earlier and later versions, but small modifications may be needed.
In this first part we go into the definitions and the basics of the Workflow engine. We start with some definitions, and then we build a simple basic workflow.
We’ll see how this relates to the wf_tables in the database.

The basic terminology
 

First: what is a workflow? A workflow is a sequence of functions and events that follows a certain path, based on decisions made during the progress of the sequence.
Most of us know the pictures from Workflow Builder, with the pictograms for functions joined together by lines.
Such a set is the definition of a workflow. In the Oracle Workflow world it is called a 'process'. The nodes in the process can be functions, processes or notifications.

All these are grouped together in containers that Oracle calls an 'itemtype'. The itemtype is very important, since it becomes part of the primary key in the underlying tables.
The actual workflows that run according to the definition in the itemtype are called 'items'. An item is started as a specific process within an itemtype, and it is uniquely identified by the itemtype and an 'itemkey'.

Every process consists of 2 or more nodes, which are joined together by transitions. At least 2 nodes are required, because a process needs a 'Start' and an 'End' node.
Ok. We talked enough for now. Let’s build a process and find out the rest along the way.
By the way, all the definitions above will be linked to a glossary later on.

Getting Started

To start building our process, we first need the itemtype. To create an itemtype, we use ‘Workflow builder’. In workflow builder, when we click the new button we are greeted with this screen:

wf builder start

On right clicking the ‘untitled’ map, we can create a new itemtype.

New_item_type

Internal name is the unique name that will be used in the table keys for this itemtype and its items. It is limited to 8 characters. So choose wisely!
Display name is the name that will be shown to users when they need to interact with items from this itemtype.
The description…….. you can guess that one.
We will discuss the other three fields in a later article.

My first Itemtype

I chose to build a flow that does some DBA checks and either fixes problems or notifies the DBA when there is a problem.
During the course of building this flow, we'll stumble upon different features of the workflow engine.
The first step is to build the itemtype.
I called it: DBA_TYPE.
With a display name: DBA Itemtype
And a description: Itemtype to hold DBA processes and functions.

dba_item_type

When you open your newly created itemtype, you see the components that can be created within this itemtype.
You’ll remember that the flow definition was called a process. So next we create a new ‘Process’: 

dba_main_process

Because this is a process that we will only be calling from our client, we have no use for the result type at the moment.
Later on, we’ll see nested processes, where the result of a process will determine the direction of the calling process.
When we go to the Process Detail (right click the process). We again have a virgin drawing board.
This will be where the actual flow is created. Every process consists of activities (functions, notifications and processes) and the transitions between them (based on the results of the activities).
Also every process has to start with a ‘Start’ Activity and finish with an ‘End’ activity. (Take care to avoid loose ends, since the end finalizes the flow and gives back control, or makes the flow purgeable).
So first we create a new function to start our flow.

start_1

Note the wf_standard.noop for the function. This is a dummy function because the only purpose of this node is to indicate the starting point for the process.
Even though we named this function ‘START’, we still need to flag it as a ‘Start’ function. That is in the node tab.

start_2

We then create an ‘END’ function in the same way.
Finally we create our own function.
 

Now we have an item_type with 1 process, and 3 functions. It’s time to connect the functions together.
Right click START, and drag to INITIALIZE_FLOW. Then right click there and drag to END. The result should be like:  

process-1

Now we have a runnable flow.

You can even start it already, if you create an empty packaged function: 'XXX_WF_DBA.init'.
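A minimal stub could look like the sketch below. The signature follows the standard workflow function interface (the same one used for TRACK_FLOW_PROGRESS later in this series); returning 'COMPLETE' tells the engine the activity finished (append ':<result>' when the activity has a result type):

create or replace package xxx_wf_dba as
  procedure init (itemtype in varchar2
                 ,itemkey  in varchar2
                 ,actid    in number
                 ,funcmode in varchar2
                 ,resultout out varchar2);
end xxx_wf_dba;
/
create or replace package body xxx_wf_dba as
  procedure init (itemtype in varchar2
                 ,itemkey  in varchar2
                 ,actid    in number
                 ,funcmode in varchar2
                 ,resultout out varchar2) is
  begin
    -- an empty function: just report completion to the engine
    resultout := 'COMPLETE';
  end init;
end xxx_wf_dba;
/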

But there is more work to be done.
 

First we are going to see how this is recorded in the wf_ tables in our database in part 2 of our series.
 

Oracle Workflow for eBS DBA’s (Part 4)

 Welcome to Part 4 of the series Oracle Workflow for eBS DBA’s.

The previous articles can be found here: 1, 2, 3
 
In this part we will look at the controls for the workflow items. This includes ‘Defer thread’, ‘Block’, ‘Wait for flow’ and ‘Wait’. Also we will see the suspend and abort functions from the workflow engine.
These functions allow the execution of a workflow item to be halted, delayed or taken over from online processing to background processing. The ‘Wait for Flow’ allows some interaction between different items.
 
The first function we will look into is the ‘Defer Thread’ (or short ‘Defer’).
 
By default the session that starts an item executes all functions in that item. That means that long running activities are also executed online. The ‘Defer’ function makes it possible to queue those long running activities to be executed by a background process.
 
Let’s start with a small process to show how this works. We create an item with only a function ‘Defer’ (copied from the STANDARD itemtype). I created the following process:

 process_defer

The ‘Defer Thread’ has no node attributes.

When we run this process, and check the activity_statuses we get the following result:
 

select wpa.instance_label
,      case when wpa.start_end is not null then wpa.start_end else wa.function end function
,      wias.begin_date
,      wias.end_date
,      wias.activity_status status
,      wias.activity_result_code result
,     outbound_queue_id
from  wf_item_activity_statuses wias
join  wf_process_activities wpa
     on (wias.process_activity=wpa.instance_id)
join  wf_activities wa
      on (wpa.activity_item_type=wa.item_type
      and wpa.activity_name=wa.name
      and wa.end_date is null)
join  wf_activities_tl wat
      on (wa.item_type=wat.item_type
      and wa.name=wat.name
      and wa.version=wat.version
      and wat.language='US')
where wias.item_type='DBA_TYPE'
and   wias.item_key='14'
order by wias.begin_date,wias.execution_time;
 

 

 

INSTANCE_LABEL      FUNCTION          BEGIN_DATE         END_DATE           STATUS   RESULT OUTBOUND_QUEUE_ID
------------------- ----------------- ------------------ ------------------ -------- ------ -----------------
DBA_CONTROL_PROCESS                   24-7-2009 14:02:46                    ACTIVE   #NULL
START               START             24-7-2009 14:02:46 24-7-2009 14:02:46 COMPLETE #NULL
DEFER               WF_STANDARD.DEFER 24-7-2009 14:02:46                    DEFERRED #NULL  6F728D74CA8142E6E0440003BAB3AD6B 

 

Even though control was given back to the client, the item did not complete yet. The root is still active and the second function got status ‘Deferred’. This function is waiting to be processed by a background engine.
Before we start a background engine, we first take a look at the deferring mechanism.

The WF_STANDARD.DEFER function only sets the result_code of the process_activity to 'DEFERRED'. To be precise, it is set to the constant 'wf_engine.eng_deferred'. When the engine encounters this result_code, it queues a message on the 'WF_DEFERRED_QUEUE_M' queue, with a payload of item_type, item_key and process_activity.
 

The table for the 'WF_DEFERRED_QUEUE_M' queue is 'WF_DEFERRED_TABLE_M'. The queue table can be queried directly or through the queuing view (which shows some translated columns). The key is the outbound_queue_id from wf_item_activity_statuses:
 

 

 

select queue
,      corr_id
,      msg_priority
,      msg_state
,      enq_time
,      def.user_data.itemtype
,      def.user_data.itemkey
,      def.user_data.actid
,      consumer_name
from   applsys.AQ$WF_DEFERRED_TABLE_M def
where  msg_id=(select outbound_queue_id
               from   wf_item_activity_statuses
               where  item_type='DBA_TYPE'
               and    item_key='14'
               and    activity_status='DEFERRED');

 

 

 

 

QUEUE               CORR_ID      PRIO MSG_STATE ENQ_TIME           TYPE     KEY ACTID  CONSUMER_NAME
------------------- ------------ ---- --------- ------------------ -------- --- ------ -------------
WF_DEFERRED_QUEUE_M APPSDBA_TYPE    1 READY     24-7-2009 14:02:44 DBA_TYPE 14  245513 APPS

 

 

No surprises there. The message is queued, and ready to be picked up.

The queue is read by the ‘Workflow Background Process’. In eBS this is a concurrent request under the ‘System Administrator’ responsibility. We will discuss the parameters later in this article.

An alternative is to run a background engine directly from SQL. That is what we will do now.

 

 

Begin
  Wf_engine.background(itemtype=>'DBA_TYPE'
                      ,process_deferred=>TRUE
                      ,process_timeout=>FALSE
                      ,process_stuck=>FALSE);
End;

 

 

We will discuss the timeout and stuck parameters later in this article, as well as the two 'Threshold' parameters.

The background engine will dequeue the message from the queue, and process the workflow item as of that process_activity.

When we look at the ‘wf_item_activity_statuses’ we notice something strange.

 

INSTANCE_LABEL      FUNCTION          BEGIN_DATE         END_DATE           STATUS   RESULT
------------------- ----------------- ------------------ ------------------ -------- ------
DBA_CONTROL_PROCESS                   24-7-2009 14:02:46 24-7-2009 14:20:43 COMPLETE #NULL
START               START             24-7-2009 14:02:46 24-7-2009 14:02:46 COMPLETE #NULL
DEFER               WF_STANDARD.DEFER 24-7-2009 14:20:43 24-7-2009 14:20:43 COMPLETE #NULL
END                 END               24-7-2009 14:20:43 24-7-2009 14:20:43 COMPLETE #NULL

 

The begin_date for the 'Defer' is the time that it was run by the background engine. This is because, after dequeuing, the workflow engine restarts the process at this process_activity, so it is actually re-executed at that time.
We can see that more clearly with an adjusted 'Track_flow_progress' function (see part 2 of this series for the original). This version returns a completion code of 'wf_engine.eng_deferred' to defer the item. We then replace the standard 'Defer' function with our 'track_flow_progress' function.

 

CREATE OR REPLACE Procedure track_flow_progress (p_item_type IN VARCHAR2
                                                ,p_item_key  IN VARCHAR2
                                                ,p_actid     IN NUMBER
                                                ,p_funcmode  IN VARCHAR2
                                                ,p_result    OUT VARCHAR2) IS
  v_activity_name    varchar2(30);
  v_activity_version number;
  v_process_name     varchar2(30);
  v_instance_label   varchar2(30);
  v_count            number;
begin
  select activity_name, wa.version, process_name, instance_label
  into   v_activity_name, v_activity_version, v_process_name, v_instance_label
  from   wf_process_activities wpa
  join   wf_activities wa
         on (wpa.activity_item_type=wa.item_type
         and wpa.activity_name=wa.name
         and wa.end_date is null)
  where  wpa.instance_id=p_actid;

  insert into xxx_track_flow (id
                             ,item_type
                             ,item_key
                             ,activity_name
                             ,activity_version
                             ,process_name
                             ,instance_id
                             ,instance_label
                             ,funcmode)
  values (xxx_track_flow_s.nextval
         ,p_item_type
         ,p_item_key
         ,v_activity_name
         ,v_activity_version
         ,v_process_name
         ,p_actid
         ,v_instance_label
         ,p_funcmode);

  select count(*)
  into   v_count
  from   xxx_track_flow
  where  item_type=p_item_type
  and    item_key=p_item_key
  and    instance_id=p_actid;

  -- defer the item only on the first execution of this activity
  if v_count=1 then
    p_result:=wf_engine.eng_deferred;
  end if;
end;
/

 

Let’s see what happens when we put this in place of the standard ‘Defer’:
 

(screenshot: the process with the ‘TRACK_FLOW’ activity in place of the standard ‘Defer’)

 

begin
  wf_engine.launchprocess(itemtype=>'DBA_TYPE',itemkey=>'15',process=>'DBA_CONTROL_PROCESS');
end;

 

We can confirm that the process is deferred:

 

INSTANCE_LABEL      FUNCTION            BEGIN_DATE         END_DATE           STATUS   RESULT 
DBA_CONTROL_PROCESS                     24-7-2009 17:44:32                    ACTIVE   #NULL
START               START               24-7-2009 17:44:32 24-7-2009 17:44:32 COMPLETE #NULL
TRACK_FLOW          TRACK_FLOW_PROGRESS 24-7-2009 17:44:32                    DEFERRED #NULL

 

The track_flow_progress function has executed once:

 

select count(*)
from   xxx_track_flow
where  item_type='DBA_TYPE'
and    item_key='15';

 

COUNT(*)
--------
1

 

Now we run the background engine and check again:

 

INSTANCE_LABEL      FUNCTION            BEGIN_DATE         END_DATE           STATUS   RESULT 
DBA_CONTROL_PROCESS                     24-7-2009 17:44:32 24-7-2009 17:57:56 COMPLETE #NULL
START               START               24-7-2009 17:44:32 24-7-2009 17:44:32 COMPLETE #NULL
TRACK_FLOW          TRACK_FLOW_PROGRESS 24-7-2009 17:57:56 24-7-2009 17:57:56 COMPLETE #NULL
END                 END                 24-7-2009 17:57:56 24-7-2009 17:57:56 COMPLETE #NULL

 

And ‘track_flow_progress’:

 

ID  ITEM_TYPE KEY ACTIVITY_NAME VERSION PROCESS_NAME        INSTANCE INSTANCE_LABEL FUNCMODE
174 DBA_TYPE 15  TRACK_FLOW    35      DBA_CONTROL_PROCESS 245549   TRACK_FLOW     RUN
175 DBA_TYPE 15  TRACK_FLOW    35      DBA_CONTROL_PROCESS 245549   TRACK_FLOW     RUN

 

The flow has completed now, and you can see the 2 calls to Track_flow_progress.
This is something that you need to be aware of, whenever an issue arises with a function that defers itself.

Now let’s look at the ‘costing’ model for the workflow engine. The goal of the background engines is to process activities with a long runtime, or that need heavy resources.
As we saw, the call to the background engine picks up all deferred items for a certain item_type. Especially with multiple or complex processes within an item_type, it might be convenient to make a further split.
This is done by assigning costs to different activities. Originally the cost is meant to be the runtime of an activity in seconds (in Workflow Builder; it is stored in hundredths of a second in the database).
When calling the background engine, you can enter two parameters: minthreshold and maxthreshold. The engine will then only pick up deferred activities whose cost falls within that range.
Note that the background engine processes an item only until the next defer. It does not matter whether that next activity is within the cost range of this background engine: the item will be deferred again, and wait for the next run of an eligible background engine.
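For example, a background engine call restricted to a cost range can be sketched with the same wf_engine.background API used earlier, now passing the two threshold parameters (cost values as stored in the database, i.e. hundredths of seconds):

```sql
Begin
    Wf_engine.background (itemtype         => 'DBA_TYPE'
                         ,minthreshold     => 0
                         ,maxthreshold     => 1100  -- up to 11 seconds of cost
                         ,process_deferred => TRUE
                         ,process_timeout  => FALSE
                         ,process_stuck    => FALSE);
End;
/
```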

This is a good time to show the versioning of the workflow engine in action. We investigated the versioning system of the workflow in part 1 (and 2), where we saw that an item will always follow the same process definition.
We can now see that this also goes for the activity level. When we update the cost of an activity while an item is active, the background engine will still use the original cost. Consider this:

I put a cost of 10 (seconds) on our ‘Track_flow_progress’ activity:
 

When we run an item now, it is deferred on the ‘Track_flow_progress’ activity. Deferring an item automatically means that the workflow engine will commit. (After all, the queue message needs to be visible to the background engine).

 

INSTANCE_LABEL      COST FUNCTION            BEGIN_DATE         END_DATE           STATUS   RESULT
------------------- ---- ------------------- ------------------ ------------------ -------- ------
DBA_CONTROL_PROCESS 0                        24-7-2009 20:12:51                    ACTIVE   #NULL
START               0    START               24-7-2009 20:12:51 24-7-2009 20:12:51 COMPLETE #NULL
TRACK_FLOW-1        1000 TRACK_FLOW_PROGRESS 24-7-2009 20:12:51                    DEFERRED

 

While the first item is deferred, I increased the cost of the activity to 20:

 

And started another item. This item is also deferred:

 

INSTANCE_LABEL      COST ITEM_KEY FUNCTION            BEGIN_DATE         END_DATE           STATUS   RESULT
DBA_CONTROL_PROCESS 0    17                           24-7-2009 20:13:01                    ACTIVE   #NULL
START               0    17       START               24-7-2009 20:13:01 24-7-2009 20:13:01 COMPLETE #NULL
TRACK_FLOW          2000 17       TRACK_FLOW_PROGRESS 24-7-2009 20:13:01                    DEFERRED
 

Now when I run a background engine with minthreshold 0 and maxthreshold 1100 it nicely picks up the item with key ’16’. Only when the background engine is run with a maxthreshold of 2000 or more, it runs the item with key ’17’.

One final thing to mention about cost: the ‘online’ workflow engine will not defer activities with a cost under 50 (0.50 seconds in Workflow Builder).
You can change this threshold by adjusting wf_engine.threshold in a PL/SQL call: wf_engine.threshold := n;

All items launched with this setting, will use the adjusted threshold. Anything above this threshold needs to be picked up by the background engine.
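As a sketch (the item key is only an example), the threshold can be raised for a single launch and restored afterwards. Since wf_engine.threshold is a public package variable, the change lasts for the session unless reset:

```sql
declare
  v_old_threshold number := wf_engine.threshold;
begin
  -- Raise the inline threshold to 10 seconds (cost 1000 in the database),
  -- so activities up to that cost are executed online instead of deferred
  wf_engine.threshold := 1000;
  wf_engine.launchprocess(itemtype => 'DBA_TYPE'
                         ,itemkey  => '25'
                         ,process  => 'DBA_CONTROL_PROCESS');
  -- Restore the previous threshold for subsequent launches in this session
  wf_engine.threshold := v_old_threshold;
end;
/
```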

Let’s take a look at the background engine then. We already saw that it uses the WF_DEFERRED_QUEUE_M to select activities to process.

But a common problem in eBS environments is the long run time of the background processes.
The first point of action would be to purge your obsolete workflow data. Try to use the concurrent program from eBS to do that. In a later part of this series, we will look at the purging mechanism and suggest some ways to optimize it.
But if after the purging the background engine still runs for a long time, some other factors might be involved.
A common issue is with processes that depend on some check. For example, a function might test whether a condition is true and, if not, defer the process without a timeout. Every time the workflow background engine runs this function, it is deferred and re-queued again, ultimately putting the background engine in a loop on this activity. Rather than deferring such activities, the developer should use a ‘Block’, or preferably the ‘Business Event System’. A ‘Wait’ with a sufficient time-out might also be an option.

A similar thing can happen with time-out values with very short wait times.

When you notice a high CPU usage for QMON (QMNx), it might be time to coalesce the AQ IOT’s (Index Organized Tables). At the time of writing Oracle published note: 271855.1 on Metalink to do this.

 

So much for the deferred function. Let’s take a look at the next control function: Block.
The block function is also available from the ‘STANDARD’ itemtype. It holds your item until the wf_engine.completeactivity API is called.
Originally this was developed to let the item wait for a call from an external system. But with the arrival of BES (the Business Event System), which we’ll discuss later in this series, its usefulness has decreased.

However, the function is still found in lots of places. So we’ll look at the internals.
The block function is a call to ‘WF_STANDARD.BLOCK’. This procedure returns a result code of wf_engine.eng_notified. In wf_item_activity_statuses the activity_status of the process_activity will be ‘NOTIFIED’.
This status is also used for Notifications (see next part in the series), when a reply from the user is expected. Since there is no notification, the process_activity cannot continue.

I changed the ‘Control Process’ to include a ‘Block’ activity as follows: 
 

Now when we launch an item, we see that it will hold on the block.

 

INSTANCE_LABEL      COST FUNCTION          BEGIN_DATE         END_DATE           STATUS   RESULT
DBA_CONTROL_PROCESS 0                      24-7-2009 20:55:00                    ACTIVE   #NULL
START               0   START             24-7-2009 20:55:00 24-7-2009 20:55:00 COMPLETE #NULL
BLOCK               1    WF_STANDARD.BLOCK 24-7-2009 20:55:00                    NOTIFIED 

 

To progress this item we now have to call the API: ‘wf_engine.completeactivity’. This API will initiate a workflow engine to continue the item.

The parameters for the API besides itemtype and itemkey are activity and result.
Activity is a concatenation of the process_name, ‘:’ and the instance_label from wf_process_activities.

In our case the call would be:

 

begin
wf_engine.completeactivity(itemtype=>'DBA_TYPE'
                           ,itemkey=>'20'
                           ,activity=>'DBA_CONTROL_PROCESS:BLOCK'
                           ,result=>wf_engine.eng_completed);
end;

 

When the activity has a result type, the result has to match one of the lookup codes in the result type. Otherwise any value is allowed, even though it is a good habit to use the result codes used by the wf_engine.

So what happens when an item does not get the ‘CompleteActivity’ call? Since the ‘Block’ is designed for external completion, it is not unlikely that an error in the external system will cause the ‘CompleteActivity’ not to be called. In that case the item will remain in the ‘Notified’ state forever.

For these situations, you can put a ‘TimeOut’ on the process activity. A Time-out is either an absolute timestamp obtained from an item_attribute or a relative time set on the process_activity.
For this example, I set a relative time of 5 minutes on the ‘Block’ activity:
 

And of course an extra transition for the ‘Time out’, to a second ‘End’  node.

Now when we run the process, we see that the ‘Due_date’ column in WF_ITEM_ACTIVITY_STATUSES has been set:
 

INSTANCE_LABEL      COST FUNCTION          BEGIN_DATE         END_DATE           STATUS   RESULT DUE_DATE
DBA_CONTROL_PROCESS 0                      25-7-2009 17:26:05                    ACTIVE   #NULL
START               0    START             25-7-2009 17:26:05 25-7-2009 17:26:05 COMPLETE #NULL
BLOCK               1    WF_STANDARD.BLOCK 25-7-2009 17:26:05                    NOTIFIED        25-7-2009 17:31:05

 

It is set for exactly 5 minutes after the begin_date of the process_activity.

 

After 5 minutes we can start a background process. With parameter ‘process_timeout=>TRUE’, or no parameter, since TRUE is the default.

Then we see that it has run through the timeout transition:
 

INSTANCE_LABEL      COST FUNCTION          BEGIN_DATE         END_DATE           STATUS   RESULT   DUE_DATE
DBA_CONTROL_PROCESS 0                      25-7-2009 17:26:05 25-7-2009 17:31:55 COMPLETE #NULL
START               0    START             25-7-2009 17:26:05 25-7-2009 17:26:05 COMPLETE #NULL
BLOCK               1    WF_STANDARD.BLOCK 25-7-2009 17:26:05 25-7-2009 17:31:55 COMPLETE #TIMEOUT 25-7-2009 17:31:05
END-1               0    END               25-7-2009 17:31:55 25-7-2009 17:31:55 COMPLETE #NULL

 

Note that this time, the begin_date of the ‘block’ activity did not change. This is because the activity was not rerun. It was just cut-off, and the engine picked up at the ‘timeout’ transition.

Even when the timeout period has expired, it is still possible to call the ‘CompleteActivity’ API. In that case, the workflow engine will still take the default transition, instead of the timeout one.
But once the item has gone through the timeout transition, it is not possible to call the ‘CompleteActivity’ API anymore. Trying to do so will result in a

‘ORA-20002: 3133: Activity instance ‘DBA_CONTROL_PROCESS:BLOCK’ is not a notified activity for item ‘DBA_TYPE/21’.

This error is usually harmless. It just means that the item has progressed beyond the process activity that was called to be completed, and a time-out is the most common cause. Of course, all programs calling the ‘CompleteActivity’ API need error handling for this.

At this point, I’ll give a short example of the error handling. In a later part in this series, we’ll go deeper into workflow troubleshooting and error handling.

Now when we call the ‘CompleteActivity’ API for the item above, we run the following:

 

declare
    v_errorname varchar2(30);
    v_errormsg  varchar2(2000);
    v_errorstack varchar2(32000);
    invalid_action EXCEPTION;
    PRAGMA EXCEPTION_INIT(invalid_action, -20002);
begin
     wf_engine.completeactivity(itemtype=>'DBA_TYPE'
                               ,itemkey=>'21'
                               ,activity=>'DBA_CONTROL_PROCESS:BLOCK'
                               ,result=>wf_engine.eng_completed);
exception
    when invalid_action THEN
        wf_core.get_error(v_errorname,v_errormsg,v_errorstack);
        dbms_output.put_line(v_errorname);
        dbms_output.put_line(v_errormsg);
        dbms_output.put_line(v_errorstack);
end;
/

 

WFENG_NOT_NOTIFIED
3133: Activity instance 'DBA_CONTROL_PROCESS:BLOCK' is not a notified activity
for item 'DBA_TYPE/21'.

 

Wf_Engine.CompleteActivity(DBA_TYPE, 21, DBA_CONTROL_PROCESS:BLOCK, COMPLETE)

 

Let’s go through the procedure.
First we see a declaration of a custom exception for ORA-20002. This is the custom error that workflow will issue. It is optional in this case, since any error can only occur from the API-call. But that might not always be the case.

In the exception block we first call ‘wf_core.get_error’ to retrieve the information about this error. If needed we can parse this error further. But the error_name already tells us that we tried to complete a process_activity that is not in a notified state. Based on this the caller should be able to decide what needs to be done.

With that we can move to the next function in this article.
That will be the ‘WAIT’ function. The ‘WAIT’ is used to postpone further processing of the item to a time set by an item_attribute or relative to the start of the process_activity.
It works differently from the way the ‘timeout’ worked on the ‘Block’ activity. That was only picked up by a background engine with parameter ‘process_timeout=>TRUE’.
Let’s replace the block in our process with a ‘Wait’ function. The ‘Wait’ can be copied from the ‘STANDARD’ itemtype. It also needs a lookup type ‘Wait Mode’. This lookup type defines how to determine the timestamp to complete the process activity.
The possible values are ‘Day of the week’, ‘Day of the month’, ‘Absolute date’ and ‘Relative time’.

When you add the ‘Wait’ function to your item, this will be the first ‘node attribute’ to set. As usual this can be set with a constant, or an item attribute value. When you choose to set it to a fixed date, you can also set the ‘Time of day’ attribute to set the timestamp. This is of course not needed for the ‘Relative time’ mode.

All node attributes can be set to a constant or an item_attribute_value. In our sample we will first set a relative time, based on an item attribute value. (for a change).

We use one function to set a new item_attribute: 
 

And the second node is a ‘Wait’ till the requested time:

The relative time is expressed in days, as {days}.{fraction of a day}. So 5/(24*60) is 5 minutes, and 0.003 is just about 4.5 minutes therefore.
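Outside the process definition, the same value could also be put on the item attribute directly from PL/SQL. A sketch, assuming the attribute is named ‘WAIT_TIME’ (a hypothetical name) and item key ‘22’:

```sql
begin
  -- 5 minutes expressed as a fraction of a day
  wf_engine.SetItemAttrNumber(itemtype => 'DBA_TYPE'
                             ,itemkey  => '22'
                             ,aname    => 'WAIT_TIME'   -- hypothetical attribute
                             ,avalue   => 5/(24*60));
end;
/
```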

When we launch a new item, and check the status:

 

INSTANCE_LABEL      COST FUNCTION           BEGIN_DATE         END_DATE           STATUS   RESULT DUE_DATE
DBA_CONTROL_PROCESS 0                       25-7-2009 20:23:22                    ACTIVE   #NULL
START               0    START              25-7-2009 20:23:22 25-7-2009 20:23:22 COMPLETE #NULL
ASSIGN              1    WF_STANDARD.ASSIGN 25-7-2009 20:23:22 25-7-2009 20:23:22 COMPLETE #NULL
WAIT                1    WF_STANDARD.WAIT   25-7-2009 20:27:41                    DEFERRED #NULL

 

So the ‘Wait’ activity is deferred. But it does not get a due_date. Instead the begin_date is set to the time till which the activity has to wait.
Since the status=’DEFERRED’, we also have a record in the wf_deferred_queue_m:

 

select queue
,      corr_id
,      msg_state
,      enq_time
,      delay
,      deq_time
,      def.user_data.itemtype itemtype
,      def.user_data.itemkey  itemkey
,      def.user_data.actid    actid
from   applsys.AQ$WF_DEFERRED_TABLE_M def
where  msg_id = (select outbound_queue_id
                 from   wf_item_activity_statuses
                 where  item_type='DBA_TYPE'
                 and    item_key='22'
                 and    activity_status='DEFERRED');

 

QUEUE               CORR_ID      MSG_STATE ENQ_TIME           DELAY              ITEMTYPE ITEMKEY ACTID
WF_DEFERRED_QUEUE_M APPSDBA_TYPE WAIT      25-7-2009 20:23:19 25-7-2009 20:27:39 DBA_TYPE 22      245973

 

So the message has been queued with a delay till the start_time of the process_activity.

Note that the msg_state=’WAIT’. This means that the message cannot be dequeued yet. One or more QMON processes need to be running (set with ‘alter system set aq_tm_processes=x scope=both;’). These processes are responsible for moving messages from the ‘WAIT’ state to the ‘READY’ state at the designated moment.
Depending on the load on your system and the number of messages to be handled, you can configure more QMON processes. But for most systems one QMON process will be sufficient for the workflow activity. If you need more than one QMON process, you’ll also have to run multiple background engines.
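A quick check and adjustment of the parameter (requires select access on v$parameter and ALTER SYSTEM privilege):

```sql
select value
from   v$parameter
where  name = 'aq_tm_processes';

-- Increase the number of queue monitor processes if needed
alter system set aq_tm_processes = 2 scope=both;
```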

Only messages with msg_state ‘READY’ can be dequeued and processed by the background engine.
When the message is  eligible to be dequeued, you can run a background engine to process the process_activity as any ‘Deferred’ activity. The begin_date of the process_activity will then be updated to the time it was processed by the background engine.

Now let’s change the process to use a ‘Wait’ till a day of the week. In this case we’ll wait till Monday morning. First we set the attribute to the ‘Date’ type. 
 

Then we change the ‘Assign activity’. Here we set the value to any date with the timestamp that we want. The date-part of the assignment will be ignored by the ‘Wait’ function.

 

Finally we update the Node attributes on the ‘Wait’ activity to be ‘Day of the Week’, a ‘Monday’, and the ‘Delay time’ attribute for ‘Time of the Day’.

Now when we launch a process, we can see that the activity is indeed postponed till the next Monday (in this case July 27th, 2009) at 8:00 AM.

 

INSTANCE_LABEL      COST FUNCTION           BEGIN_DATE         END_DATE           STATUS   RESULT DUE_DATE
DBA_CONTROL_PROCESS 0                       25-7-2009 20:50:14                    ACTIVE   #NULL
START               0    START              25-7-2009 20:50:14 25-7-2009 20:50:14 COMPLETE #NULL
ASSIGN              1    WF_STANDARD.ASSIGN 25-7-2009 20:50:14 25-7-2009 20:50:14 COMPLETE #NULL
WAIT                1    WF_STANDARD.WAIT   27-7-2009 8:00:00                     DEFERRED #NULL

 

At this point it’s time to look at a special ‘feature’ of the ‘DEFERRED’ status. Earlier I said that you can put an activity in ‘DEFER’ mode just by returning a result code of wf_engine.eng_deferred.
There is a bit more to that. The WF_STANDARD.DEFER procedure checks whether it is being executed for the first or the second time. The first time it is executed, the result_code in wf_item_activity_statuses has not been set yet, so the procedure returns wf_engine.eng_deferred to set the status to ‘DEFERRED’. When it is called while the result_code in wf_item_activity_statuses has already been set, it assumes it is being called from a background engine and returns wf_engine.eng_completed, which signals the workflow engine to continue with the item.
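A simplified sketch of this first-run/second-run check (not the actual Oracle source, just an illustration of the mechanism):

```sql
CREATE OR REPLACE PROCEDURE defer_sketch (p_item_type IN  VARCHAR2
                                         ,p_item_key  IN  VARCHAR2
                                         ,p_actid     IN  NUMBER
                                         ,p_funcmode  IN  VARCHAR2
                                         ,p_result    OUT VARCHAR2) IS
  v_result_code varchar2(30);
BEGIN
  select activity_result_code
  into   v_result_code
  from   wf_item_activity_statuses
  where  item_type        = p_item_type
  and    item_key         = p_item_key
  and    process_activity = p_actid;

  if v_result_code is null then
    -- First execution: tell the engine to defer this activity
    p_result := wf_engine.eng_deferred;
  else
    -- Called again (by a background engine): continue the flow
    p_result := wf_engine.eng_completed;
  end if;
END;
/
```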

We can use this mechanism to our own advantage. When an item is deferred,  and we want it to be run immediately, without the background process, then we can retry the process activity. The workflow engine has an API for this:

 

Begin
  Wf_engine.handleerror(itemtype=>'DBA_TYPE'
                       ,itemkey=>'21'
                       ,activity=>'DBA_CONTROL_PROCESS:DEFER'
                       ,command=>'RETRY'
                       );
End;

 

We will see more about the handleerror API in a later part. With the command ‘RETRY’, it will re-execute the indicated process_activity. In the case of a ‘DEFER’, it will therefore complete the activity. And continue the process.
The same mechanism also applies to the ‘WAIT’ function, since it basically only defers an activity.

Be aware of this when activities are re-executed, since it may not always be the desired result to complete the activity.

Now let’s move on to the last controls that we are discussing in this part.

We start with the ‘Launch Process’ function. You can copy it from the ‘STANDARD’ item_type, including the lookup_type ‘YES/NO’ if you don’t already have that. The lookup_type is required for one of its activity_attributes.

It will be clear what this function is doing from the name. It launches another item from the current item. There is no restriction on the item_type. The function will perform the same ‘wf_engine.launch’, that we execute from sql*plus during this series.

The function has six activity_attributes, one of which is mandatory (2 in our case, since we don’t have an automatic selection for the process yet.).
The first attribute is item_type. This shouldn’t require a further explanation.
Item_key is optional. If no item_key is provided, the function will build an item_key of ‘<current item_type>: <current item_key>-n’, where n is a sequence number for the number of items launched by the current item. The sequence is stored in a new item_attribute: ‘LAUNCH_COUNT’.
Then comes the process_name. It is possible to use a selector function to automatically start the right process within an item_type. But when this is not used, you need to provide the process_name.
User_key and owner are optional, and refer to a user-defined reference key and a workflow role that will assume ownership of this item.
The last attribute is ‘Defer Immediate’, with a lookup_type of ‘YES/NO’. When set to ‘No’, the new item will be launched immediately and executed by the current workflow engine.
When ‘Defer Immediate’ is set to ‘Yes’, the activity will be deferred, and both the ‘Launch Process’ activity and the new item will be executed by a background engine.

After the ‘Launch Process’ activity, the item will continue its own processing. (Without the ‘Defer Immediate’ attribute set to ‘Yes’, the engine will first process the new item as far as possible before returning to the original item.) There is no further relationship between the two items.

The last two controls do create a relationship between two or more items. It is possible to create parent/child relationships on the item-level.

We are going to create a sample of this by using ‘DBA_CONTROL_PROCESS’ to launch ‘DBA_MAIN_PROCESS’ and wait for its completion.
On the ‘Launch Process’ function it is not possible to specify the parent/child relationship. So we have to use the ‘wf_engine.CreateProcess’ and ‘wf_engine.StartProcess’ functions. The first one only creates an item, without executing it.
After the  ‘CreateProcess’, we can call the ‘SetItemParent’ API to set the parent/child relationship.

So we have to create a new procedure:

 

CREATE OR REPLACE PROCEDURE XXX_LAUNCH_CHILD (p_item_type IN  VARCHAR2
                                             ,p_item_key  IN  VARCHAR2
                                             ,p_actid     IN  NUMBER
                                             ,p_funcmode  IN  VARCHAR2
                                             ,p_result    OUT VARCHAR2) IS
BEGIN
  if p_funcmode = 'RUN' then
    -- Create the child item without starting it yet
    wf_engine.CreateProcess(itemtype=>p_item_type
                           ,itemkey=>p_item_key||'-1'
                           ,process=>'DBA_MAIN_PROCESS');
    -- Register the parent/child relationship before the child starts
    wf_engine.SetItemParent(itemtype=>p_item_type
                           ,itemkey=>p_item_key||'-1'
                           ,parent_itemtype=>p_item_type
                           ,parent_itemkey=>p_item_key
                           ,parent_context=>NULL
                           ,masterdetail=>TRUE);
    -- Now start the child item
    wf_engine.StartProcess(itemtype=>p_item_type
                          ,itemkey=>p_item_key||'-1');
  end if;
  p_result := wf_engine.eng_completed;
END XXX_LAUNCH_CHILD;
/

 

The ‘CreateProcess’ and ‘StartProcess’ should be obvious in their usage. In the ‘SetItemParent’, you can indicate a ‘parent_context’. This is a free-text value that can be referenced again in the ‘WaitforFlow’ and ‘ContinueFlow’ functions. In case you launch multiple child items, you use the context value to know which items should continue. We will come back to this at the end of the example.

First we are going to build our process with the new function:

The process will look like this:

And of course we need to put the ‘ContinueFlow’ in the ‘DBA_MAIN_PROCESS’. Actually twice, since I want to put it in front of the ‘End’ and we have 2 exits. 

Now when we start the ‘DBA_CONTROL_PROCESS’ we can see what happens:

 

select wpa.instance_label
,      wias.item_key
,      case when wpa.start_end is not null then wpa.start_end else wa.function end function
,      wias.begin_date
,      wias.end_date
,      wias.activity_status status
,      wias.activity_result_code result
from   wf_item_activity_statuses wias
join   wf_process_activities wpa
       on (wias.process_activity=wpa.instance_id)
join   wf_activities wa
       on (wpa.activity_item_type=wa.item_type
       and wpa.activity_name=wa.name
       and wa.end_date is null)
join   wf_activities_tl wat
       on (wa.item_type=wat.item_type
       and wa.name=wat.name
       and wa.version=wat.version
       and wat.language='US')
where  wias.item_type='DBA_TYPE'
and    wias.item_key like '20%'
order by wias.begin_date,wias.execution_time;

 

INSTANCE_LABEL      KEY  FUNCTION                  BEGIN_DATE         END_DATE           STATUS   RESULT
DBA_CONTROL_PROCESS 20                             26-7-2009 14:26:37                    ACTIVE   #NULL
START               20   START                     26-7-2009 14:26:37 26-7-2009 14:26:37 COMPLETE #NULL
XXX_LAUNCH_CHILD    20   xxx_launch_child          26-7-2009 14:26:37 26-7-2009 14:26:38 COMPLETE COMPLETE
DBA_MAIN_PROCESS    20-1                           26-7-2009 14:26:37                    ACTIVE   #NULL
START               20-1 START                     26-7-2009 14:26:37 26-7-2009 14:26:37 COMPLETE #NULL
INITIALIZE_FLOW     20-1 XXX_WF_DBA.init           26-7-2009 14:26:37 26-7-2009 14:26:37 COMPLETE COMPLETE
COMPARETEXT         20-1 WF_STANDARD.COMPARE       26-7-2009 14:26:37 26-7-2009 14:26:37 COMPLETE EQ
CHECK_INVALIDS      20-1 XXX_CHECK_INVALIDS        26-7-2009 14:26:37 26-7-2009 14:26:37 COMPLETE Y
DEFER               20-1 WF_STANDARD.DEFER         26-7-2009 14:26:37                    DEFERRED #NULL
GET_INVALIDS        20-1 XXX_WF_UTILS.get_invalids 26-7-2009 14:26:37 26-7-2009 14:26:38 COMPLETE
LOOPCOUNTER         20-1 WF_STANDARD.LOOPCOUNTER   26-7-2009 14:26:38 26-7-2009 14:26:38 COMPLETE EXIT
AND                 20-1 WF_STANDARD.ANDJOIN       26-7-2009 14:26:38                    WAITING
WAITFORFLOW         20   WF_STANDARD.WAITFORFLOW   26-7-2009 14:26:38                    NOTIFIED

 

I purposely left the ‘Defer’ in the ‘DBA_MAIN_PROCESS’ so we can see the status while the ‘DBA_MAIN_PROCESS’ is running. When the workflow engine could not process ‘DBA_MAIN_PROCESS’ further, it returned to the original item, and executed ‘WAITFORFLOW’, which of course got the ‘NOTIFIED’ status.
We can also see the parent_child relationship by querying ‘WF_ITEMS’:

 

select item_type
,      item_key key
,      begin_date
,      parent_item_type parent_type
,      parent_item_key  parent_key
from   wf_items
where  item_type='DBA_TYPE'
and    item_key like '20%';

 

ITEM_TYPE KEY  BEGIN_DATE         PARENT_TYPE PARENT_KEY
DBA_TYPE  20-1 26-7-2009 14:26:37 DBA_TYPE    20
DBA_TYPE  20   26-7-2009 14:26:37  

 

Now when we run a background engine, and check again:

 

INSTANCE_LABEL      KEY  FUNCTION                  BEGIN_DATE         END_DATE           STATUS   RESULT
DBA_CONTROL_PROCESS 20                             26-7-2009 14:26:37 26-7-2009 14:38:40 COMPLETE #NULL
START               20   START                     26-7-2009 14:26:37 26-7-2009 14:26:37 COMPLETE #NULL
XXX_LAUNCH_CHILD    20   xxx_launch_child          26-7-2009 14:26:37 26-7-2009 14:26:38 COMPLETE COMPLETE
DBA_MAIN_PROCESS    20-1                           26-7-2009 14:26:37 26-7-2009 14:38:40 COMPLETE #NULL
START               20-1 START                     26-7-2009 14:26:37 26-7-2009 14:26:37 COMPLETE #NULL
INITIALIZE_FLOW     20-1 XXX_WF_DBA.init           26-7-2009 14:26:37 26-7-2009 14:26:37 COMPLETE COMPLETE
COMPARETEXT         20-1 WF_STANDARD.COMPARE       26-7-2009 14:26:37 26-7-2009 14:26:37 COMPLETE EQ
CHECK_INVALIDS      20-1 XXX_CHECK_INVALIDS        26-7-2009 14:26:37 26-7-2009 14:26:37 COMPLETE Y
GET_INVALIDS        20-1 XXX_WF_UTILS.get_invalids 26-7-2009 14:26:37 26-7-2009 14:26:38 COMPLETE
LOOPCOUNTER         20-1 WF_STANDARD.LOOPCOUNTER   26-7-2009 14:26:38 26-7-2009 14:26:38 COMPLETE EXIT
WAITFORFLOW         20   WF_STANDARD.WAITFORFLOW   26-7-2009 14:26:38 26-7-2009 14:38:40 COMPLETE #NULL
DEFER               20-1 WF_STANDARD.DEFER         26-7-2009 14:38:40 26-7-2009 14:38:40 COMPLETE #NULL
AND                 20-1 WF_STANDARD.ANDJOIN       26-7-2009 14:38:40 26-7-2009 14:38:40 COMPLETE #NULL
CONTINUEFLOW        20-1 WF_STANDARD.CONTINUEFLOW  26-7-2009 14:38:40 26-7-2009 14:38:40 COMPLETE #NULL
END                 20   END                       26-7-2009 14:38:40 26-7-2009 14:38:40 COMPLETE #NULL
END-1               20-1 END                       26-7-2009 14:38:40 26-7-2009 14:38:40 COMPLETE #NULL

 

The ContinueFlow nicely completed the ‘WaitforFlow’ and both items  completed successfully.

So how does the ‘ContinueFlow’ activity know which process_activity to continue? The first check is whether the activity_attribute ‘Waiting Flow’ is set to Master or Detail. When it is set to ‘Detail’, the ContinueFlow is in a parent process. So it will look for its children in wf_items, and complete all activities that have the instance_label from the ‘Waiting Activity Label’ activity_attribute.

When it is set to ‘Master’, this is just one of the (possibly) multiple children that have completed. So it will count the number of children for the parent item. And check all of them to see if they have reached the  ‘ContinueFlow’ process_activity. When all children have reached the ‘ContinueFlow’, the last item will complete the activity on the parent item.

Of course it is possible for a parent item to have multiple ‘WaitforFlow’ activities, for different child items. In this case, you need to distinguish the different parent/child relationships.  That is where the ‘parent_context’ parameter from the ‘SetItemParent’ function comes in.
The parent_context can be used to group together all child items that were started from the same process_activity.
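As a closing sketch of that grouping: when registering each child, pass a parent_context that identifies the launching process_activity. The item keys and the context value ‘INVALIDS_CHECK’ below are illustrative only.

```sql
begin
  -- Children created from the same process_activity share a context,
  -- so the matching 'WaitforFlow'/'ContinueFlow' pair only considers them
  wf_engine.SetItemParent(itemtype=>'DBA_TYPE'
                         ,itemkey=>'30-1'
                         ,parent_itemtype=>'DBA_TYPE'
                         ,parent_itemkey=>'30'
                         ,parent_context=>'INVALIDS_CHECK'
                         ,masterdetail=>TRUE);
end;
/
```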