Running

KnowledgeBase

The KnowledgeBase is a repository of all the application's knowledge definitions. It will contain rules, processes, functions, and type models. The Knowledge Base itself does not contain data; instead, sessions are created from the KnowledgeBase into which data can be inserted and from which process instances may be started. Creating the KnowledgeBase can be heavy, whereas session creation is very light, so it is recommended that Knowledge Bases be cached where possible to allow for repeated session creation.

Example 3.24. Creating a new KnowledgeBase

KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
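
Since Knowledge Bases should be cached, a minimal sketch of that pattern (assuming a simple application-managed singleton; package compilation is omitted) might look like this:

// Cache the heavyweight KnowledgeBase once (packages added elsewhere)...
private static final KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();

public void runRequest(Object fact) {
    // ...and create a lightweight session per request.
    StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
    try {
        ksession.insert( fact );
        ksession.fireAllRules();
    } finally {
        ksession.dispose();
    }
}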

StatefulKnowledgeSession

The StatefulKnowledgeSession stores the runtime data and executes against it. It is created from the KnowledgeBase.

Figure 3.9. StatefulKnowledgeSession

StatefulKnowledgeSession

Example 3.25. Create a StatefulKnowledgeSession from a KnowledgeBase

StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();

KnowledgeRuntime

WorkingMemoryEntryPoint

The WorkingMemoryEntryPoint provides the methods around inserting, updating and retrieving facts. The term "entry point" is related to the fact that we have multiple partitions in a Working Memory and you can choose which one you are inserting into, although this use case is aimed at event processing and covered in more detail in the Fusion manual. Most rule-based applications will work with the default entry point alone.

The KnowledgeRuntime interface provides the main interaction with the engine. It is available in rule consequences and process actions. In this manual the focus is on the methods and interfaces related to rules, and the methods pertaining to processes will be ignored for now. But you'll notice that the KnowledgeRuntime inherits methods from both the WorkingMemory and the ProcessRuntime, thereby providing a unified API to work with processes and rules. When working with rules, three interfaces form the KnowledgeRuntime: WorkingMemoryEntryPoint, WorkingMemory and the KnowledgeRuntime itself.

Figure 3.10. WorkingMemoryEntryPoint

WorkingMemoryEntryPoint

Insertion

Insertion is the act of telling the WorkingMemory about a fact, which you do by calling ksession.insert(yourObject), for example. When you insert a fact, it is examined for matches against the rules. This means all of the work for deciding about firing or not firing a rule is done during insertion; no rule, however, is executed until you call fireAllRules(), which you call after you have finished inserting your facts. It is a common misunderstanding for people to think the condition evaluation happens when you call fireAllRules(). Expert systems typically use the term assert or assertion to refer to facts made available to the system. However, due to "assert" being a keyword in most languages, we have decided to use the insert keyword; so expect to hear the two used interchangeably.

When an Object is inserted it returns a FactHandle. This FactHandle is the token used to represent your inserted object within the WorkingMemory. It is also used for interactions with the WorkingMemory when you wish to retract or modify an object.

Cheese stilton = new Cheese("stilton");
FactHandle stiltonHandle = ksession.insert( stilton );      

As mentioned in the Knowledge Base section, a Working Memory may operate in two assertion modes, i.e., equality or identity, with identity being the default.

Identity means that the Working Memory uses an IdentityHashMap to store all asserted objects. New instance assertions always result in the return of a new FactHandle, but if an instance is asserted again then it returns the original fact handle, i.e., it ignores repeated insertions for the same fact.

Equality means that the Working Memory uses a HashMap to store all asserted Objects. New instance assertions will only return a new FactHandle if no equal objects have been asserted.
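
The mode is configured on the Knowledge Base. A minimal sketch, assuming the standard drools.assertBehaviour configuration property:

KnowledgeBaseConfiguration config = KnowledgeBaseFactory.newKnowledgeBaseConfiguration();
config.setProperty( "drools.assertBehaviour", "equality" );  // default is "identity"
KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase( config );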

Retraction

Retraction is the removal of a fact from Working Memory, which means that the engine will no longer track and match that fact, and any rules that are activated and dependent on that fact will be cancelled. Note that it is possible to have rules that depend on the nonexistence of a fact, in which case retracting a fact may cause a rule to activate. (See the not and exists keywords.) Retraction is done using the FactHandle that was returned by the insert call.

Cheese stilton = new Cheese("stilton");
FactHandle stiltonHandle = ksession.insert( stilton );
...
ksession.retract( stiltonHandle );            
Update

The Rule Engine must be notified of modified facts, so that they can be reprocessed. Internally, modification is actually a retract followed by an insert; the Rule Engine removes the fact from the WorkingMemory and inserts it again. You must use the update() method to notify the WorkingMemory of changed objects for those objects that are not able to notify the WorkingMemory themselves. Notice that update() always takes the modified object as a second parameter, which allows you to specify new instances for immutable objects. The update() method can only be used with objects that have shadow proxies turned on, and it is only available within Java code. On the right hand side of a rule, the modify statement is also supported, providing simplified calls to the object's setters.

Cheese stilton = new Cheese("stilton");
FactHandle stiltonHandle = workingMemory.insert( stilton );
...
stilton.setPrice( 100 );
workingMemory.update( stiltonHandle, stilton );              

WorkingMemory

The WorkingMemory provides access to the Agenda, permits query executions, and lets you access named Entry Points.

Figure 3.11. WorkingMemory

WorkingMemory

Query

Queries are used to retrieve fact sets based on patterns, as they are used in rules. Patterns may make use of optional parameters. Queries can be defined in the Knowledge Base, from where they are called up to return the matching results. While iterating over the result collection, any bound identifier in the query can be accessed using the get(String identifier) method and any FactHandle for that identifier can be retrieved using getFactHandle(String identifier).

Figure 3.12. QueryResults

QueryResults

Figure 3.13. QueryResultsRow

QueryResultsRow

Example 3.26. Simple Query Example

QueryResults results =
    ksession.getQueryResults( "my query", new Object[] { "string" } );
for ( QueryResultsRow row : results ) {
    System.out.println( row.get( "varName" ) );
}
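
The row also exposes the fact handle for any bound identifier, for instance to retract each matched fact; a sketch:

for ( QueryResultsRow row : results ) {
    FactHandle handle = row.getFactHandle( "varName" );
    ksession.retract( handle );  // remove the matched fact from the session
}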

KnowledgeRuntime

The KnowledgeRuntime provides further methods that are applicable to both rules and processes, such as setting globals and registering ExitPoints.

Figure 3.14. KnowledgeRuntime

KnowledgeRuntime

Globals

Globals are named objects that can be passed to the rule engine, without needing to insert them. Most often these are used for static information, or for services that are used in the RHS of a rule, or perhaps as a means to return objects from the rule engine. If you use a global on the LHS of a rule, make sure it is immutable. A global must first be declared in a rules file before it can be set on the session.

global java.util.List list

With the Knowledge Base now aware of the global identifier and its type, it is possible to call ksession.setGlobal() for any session. Failure to declare the global type and identifier first will result in an exception being thrown. To set the global on the session use ksession.setGlobal(identifier, value):

List list = new ArrayList();
ksession.setGlobal("list", list);           

If a rule evaluates a global before you set it, you will get a NullPointerException.

StatefulRuleSession

The StatefulRuleSession is inherited by the StatefulKnowledgeSession and provides the rule related methods that are relevant from outside of the engine.

Figure 3.15. StatefulRuleSession

StatefulRuleSession

Agenda Filters

Figure 3.16. AgendaFilters

AgendaFilters

AgendaFilter objects are optional implementations of the filter interface which are used to allow or deny the firing of an activation. What you filter on is entirely up to the implementation. Drools 4.0 used to supply some out-of-the-box filters, which have not been exposed in the Drools 5.0 drools-api, but they are simple to implement and the Drools 4.0 code base can be referred to; a sketch is given below.
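
A minimal sketch of such a filter, assuming drools-api's AgendaFilter exposes a single accept(Activation) method and that the rule name is reachable via getRule().getName(); it mirrors the RuleNameEndsWithAgendaFilter used in the next example:

public class RuleNameEndsWithAgendaFilter implements AgendaFilter {
    private final String suffix;

    public RuleNameEndsWithAgendaFilter(String suffix) {
        this.suffix = suffix;
    }

    // Allow an activation to fire only if its rule name ends with the suffix
    public boolean accept(Activation activation) {
        return activation.getRule().getName().endsWith( suffix );
    }
}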

To use a filter, specify it while calling fireAllRules(). The following example permits only rules ending in the string "Test". All others will be filtered out.

ksession.fireAllRules( new RuleNameEndsWithAgendaFilter( "Test" ) );

Agenda

The Agenda is a Rete feature. During actions on the WorkingMemory, rules may become fully matched and eligible for execution; a single Working Memory Action can result in multiple eligible rules. When a rule is fully matched an Activation is created, referencing the rule and the matched facts, and placed onto the Agenda. The Agenda controls the execution order of these Activations using a Conflict Resolution strategy.

The engine cycles repeatedly through two phases:

  1. Working Memory Actions. This is where most of the work takes place, either in the Consequence (the RHS itself) or the main Java application process. Once the Consequence has finished or the main Java application process calls fireAllRules() the engine switches to the Agenda Evaluation phase.

  2. Agenda Evaluation. This attempts to select a rule to fire. If no rule is found it exits, otherwise it fires the found rule, switching the phase back to Working Memory Actions.

Figure 3.17. Two Phase Execution

Two Phase Execution

The process repeats until the agenda is clear, in which case control returns to the calling application. When Working Memory Actions are taking place, no rules are being fired.

Figure 3.18. Agenda

Agenda

Conflict Resolution

Conflict resolution is required when there are multiple rules on the agenda. (The basics to this are covered in chapter "Quick Start".) As firing a rule may have side effects on the working memory, the rule engine needs to know in what order the rules should fire (for instance, firing ruleA may cause ruleB to be removed from the agenda).

The default conflict resolution strategies employed by Drools are: Salience and LIFO (last in, first out).

The most visible one is salience (or priority), in which case a user can specify that a certain rule has a higher priority (by giving it a higher number) than other rules. In that case, the rule with higher salience will be preferred. LIFO priorities are based on the assigned Working Memory Action counter value, with all rules created during the same action receiving the same value. The execution order of a set of firings with the same priority value is arbitrary.

As a general rule, it is a good idea not to count on rules firing in any particular order, and to author the rules without worrying about a "flow".

Drools 4.0 supported custom conflict resolution strategies; while this capability still exists in Drools it has not yet been exposed to the end user via drools-api in Drools 5.0.

AgendaGroup

Figure 3.19. AgendaGroup

AgendaGroup

Agenda groups are a way to partition rules (activations, actually) on the agenda. At any one time, only one group has "focus", which means that only activations for rules in that group will take effect. You can also have rules with "auto focus", which means that the focus is taken for its agenda group when that rule's conditions are true.

Agenda groups are known as "modules" in CLIPS terminology. They provide a handy way to create a "flow" between grouped rules. You can switch the group which has focus either from within the rule engine, or via the API. If your rules have a clear need for multiple "phases" or "sequences" of processing, consider using agenda-groups for this purpose.

Each time setFocus() is called it pushes that Agenda Group onto a stack. When the focus group is empty it is popped from the stack and the focus group that is now on top evaluates. An Agenda Group can appear in multiple locations on the stack. The default Agenda Group is "MAIN", with all rules which do not specify an Agenda Group being in this group. It is also always the first group on the stack, given focus initially, by default.

ksession.getAgenda().getAgendaGroup( "Group A" ).setFocus();

ActivationGroup

Figure 3.20. ActivationGroup

ActivationGroup

An activation group is a set of rules bound together by the same "activation-group" rule attribute. In this group only one rule can fire, and after that rule has fired all the other rules are cancelled from the agenda. The clear() method can be called at any time, which cancels all of the activations before one has had a chance to fire.

ksession.getAgenda().getActivationGroup( "Group B" ).clear();

RuleFlowGroup

Figure 3.21. RuleFlowGroup

RuleFlowGroup

A rule flow group is a group of rules associated by the "ruleflow-group" rule attribute. These rules can only fire when the group is active. The group itself can only become active when the elaboration of the ruleflow diagram reaches the node representing the group. Here too, the clear() method can be called at any time to cancel all activations still remaining on the Agenda.

ksession.getAgenda().getRuleFlowGroup( "Group C" ).clear();

Event Model

The event package provides means to be notified of rule engine events, including rules firing, objects being asserted, etc. This allows you, for instance, to separate logging and auditing activities from the main part of your application (and the rules).

The KnowledgeRuntimeEventManager interface is implemented by the KnowledgeRuntime, which provides two interfaces, WorkingMemoryEventManager and ProcessEventManager. We will only cover the WorkingMemoryEventManager here.

Figure 3.22. KnowledgeRuntimeEventManager

KnowledgeRuntimeEventManager

The WorkingMemoryEventManager allows for listeners to be added and removed, so that events for the working memory and the agenda can be listened to.

Figure 3.23. WorkingMemoryEventManager

WorkingMemoryEventManager

The following code snippet shows how a simple agenda listener is declared and attached to a session. It will print activations after they have fired.

Example 3.27. Adding an AgendaEventListener

ksession.addEventListener( new DefaultAgendaEventListener() {                            
   public void afterActivationFired(AfterActivationFiredEvent event) {
       super.afterActivationFired( event );
       System.out.println( event );
   }
});     

Drools also provides DebugWorkingMemoryEventListener and DebugAgendaEventListener which implement each method with a debug print statement. To print all Working Memory events, you add a listener like this:

Example 3.28. Adding a DebugWorkingMemoryEventListener

ksession.addEventListener( new DebugWorkingMemoryEventListener() );     

All emitted events implement the KnowledgeRuntimeEvent interface, which can be used to retrieve the actual KnowledgeRuntime the event originated from.

Figure 3.24. KnowledgeRuntimeEvent

KnowledgeRuntimeEvent

The events currently supported are:

  • ActivationCreatedEvent

  • ActivationCancelledEvent

  • BeforeActivationFiredEvent

  • AfterActivationFiredEvent

  • AgendaGroupPushedEvent

  • AgendaGroupPoppedEvent

  • ObjectInsertEvent

  • ObjectRetractedEvent

  • ObjectUpdatedEvent

  • ProcessCompletedEvent

  • ProcessNodeLeftEvent

  • ProcessNodeTriggeredEvent

  • ProcessStartEvent

KnowledgeRuntimeLogger

The KnowledgeRuntimeLogger uses the comprehensive event system in Drools to create an audit log that can be used to log the execution of an application for later inspection, using tools such as the Eclipse audit viewer.

Figure 3.25. KnowledgeRuntimeLoggerFactory

KnowledgeRuntimeLoggerFactory

Example 3.29. FileLogger

KnowledgeRuntimeLogger logger =
  KnowledgeRuntimeLoggerFactory.newFileLogger(ksession, "logdir/mylogfile");
...
logger.close();

StatelessKnowledgeSession

The StatelessKnowledgeSession wraps the StatefulKnowledgeSession, instead of extending it. Its main focus is on decision service type scenarios. It avoids the need to call dispose(). Stateless sessions do not support iterative insertion or calling fireAllRules() from Java code; instead, execute() is a single-shot method that internally instantiates a StatefulKnowledgeSession, adds all the user data, executes the user commands, calls fireAllRules(), and finally calls dispose(). While the main way to work with this class is via the BatchExecution (a subinterface of Command) as supported by the CommandExecutor interface, two convenience methods are provided for when simple object insertion is all that's required. The CommandExecutor and BatchExecution are discussed in detail in their own section.

Figure 3.26. StatelessKnowledgeSession

StatelessKnowledgeSession

Our simple example shows a stateless session executing a given collection of Java objects using the convenience API. It will iterate the collection, inserting each element in turn.

Example 3.30. Simple StatelessKnowledgeSession execution with a Collection

KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add( ResourceFactory.newFileSystemResource( fileName ), ResourceType.DRL );
if (kbuilder.hasErrors() ) {
    System.out.println( kbuilder.getErrors() );
} else {
    KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
    kbase.addKnowledgePackages( kbuilder.getKnowledgePackages() );
    StatelessKnowledgeSession ksession = kbase.newStatelessKnowledgeSession();
    ksession.execute( collection );
}

If this was done as a single Command it would be as follows:

Example 3.31. Simple StatelessKnowledgeSession execution with InsertElements Command

ksession.execute( CommandFactory.newInsertElements( collection ) );  

If you wanted to insert the collection itself, and the collection's individual elements, then CommandFactory.newInsert(collection) would do the job.
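
In code:

// Insert the collection, per the behaviour described above
ksession.execute( CommandFactory.newInsert( collection ) );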

Methods of the CommandFactory create the supported commands, all of which can be marshalled using XStream and the BatchExecutionHelper. BatchExecutionHelper provides details on the XML format as well as how to use Drools Pipeline to automate the marshalling of BatchExecution and ExecutionResults.

StatelessKnowledgeSession supports globals, scoped in a number of ways. I'll cover the non-command way first, as commands are scoped to a specific execution call. Globals can be resolved in three ways.

  • The Stateless Knowledge Session method getGlobals() returns a Globals instance which provides access to the session's globals. These are shared for all execution calls. Exercise caution regarding mutable globals because execution calls can be executing simultaneously in different threads.

    Example 3.32. Session scoped global

    StatelessKnowledgeSession ksession = kbase.newStatelessKnowledgeSession();
    // Set a global hbnSession, that can be used for DB interactions in the rules.
    ksession.setGlobal( "hbnSession", hibernateSession );
    // Execute while being able to resolve the "hbnSession" identifier.  
    ksession.execute( collection ); 

  • Using a delegate is another way of global resolution. Assigning a value to a global (with setGlobal(String, Object)) results in the value being stored in an internal collection mapping identifiers to values. Identifiers in this internal collection have priority over any supplied delegate. Only if an identifier cannot be found in this internal collection will the delegate global (if any) be used.

  • The third way of resolving globals is to have execution scoped globals. Here, a Command to set a global is passed to the CommandExecutor.

The CommandExecutor interface also offers the ability to export data via "out" parameters. Inserted facts, globals and query results can all be returned.

Example 3.33. Out identifiers

// Set up a list of commands
List cmds = new ArrayList();
cmds.add( CommandFactory.newSetGlobal( "list1", new ArrayList(), true ) );
cmds.add( CommandFactory.newInsert( new Person( "jon", 102 ), "person" ) );
cmds.add( CommandFactory.newQuery( "Get People", "getPeople" ) );

// Execute the list
ExecutionResults results =
  ksession.execute( CommandFactory.newBatchExecution( cmds ) );

// Retrieve the ArrayList
results.getValue( "list1" );
// Retrieve the inserted Person fact
results.getValue( "person" );
// Retrieve the query as a QueryResults instance.
results.getValue( "Get People" );

Sequential Mode

With Rete you have a stateful session where objects can be asserted and modified over time, and where rules can also be added and removed. Now what happens if we assume a stateless session, where after the initial data set no more data can be asserted or modified and rules cannot be added or removed? Certainly it won't be necessary to re-evaluate rules, and the engine will be able to operate in a simplified way.

  1. Order the Rules by salience and position in the ruleset (by setting a sequence attribute on the rule terminal node).

  2. Create an array, one element for each possible rule activation; element position indicates firing order.

  3. Turn off all node memories, except the right-input Object memory.

  4. Disconnect the Left Input Adapter Node propagation, and let the Object plus the Node be referenced in a Command object, which is added to a list on the Working Memory for later execution.

  5. Assert all objects, and, when all assertions are finished and thus right-input node memories are populated, check the Command list and execute each in turn.

  6. All resulting Activations should be placed in the array, based upon the determined sequence number of the Rule. Record the first and last populated elements, to reduce the iteration range.

  7. Iterate the array of Activations, executing each populated element in turn.

  8. If we have a maximum number of allowed rule executions, we can exit our network evaluations early to fire all the rules in the array.

The LeftInputAdapterNode no longer creates a Tuple, adds the Object, and then propagates the Tuple; instead a Command object is created and added to a list in the Working Memory. This Command object holds a reference to the LeftInputAdapterNode and the propagated object. This stops any left-input propagations at insertion time, so that we know that a right-input propagation will never need to attempt a join with the left-inputs (removing the need for left-input memory). All nodes have their memory turned off, including the left-input Tuple memory but excluding the right-input object memory, which means that the only node remembering an insertion propagation is the right-input object memory. Once all the assertions are finished and all right-input memories populated, we can then iterate the list of LeftInputAdapterNode Command objects, calling each in turn. They will propagate down the network attempting to join with the right-input objects, but they won't be remembered in the left input as we know there will be no further object assertions and thus propagations into the right-input memory.

There is no longer an Agenda, with a priority queue to schedule the Tuples; instead, there is simply an array for the number of rules. The sequence number of the RuleTerminalNode indicates the element within the array where to place the Activation. Once all Command objects have finished we can iterate our array, checking each element in turn, and firing the Activations if they exist. To improve performance, we remember the first and the last populated cell in the array. The network is constructed, with each RuleTerminalNode being given a sequence number based on a salience number and its order of being added to the network.

Typically the right-input node memories are Hash Maps, for fast object retraction; here, as we know there will be no object retractions, we can use a list when the values of the object are not indexed. For larger numbers of objects indexed Hash Maps provide a performance increase; if we know an object type has only a few instances, indexing is probably not advantageous, and a list can be used.

Sequential mode can only be used with a Stateless Session and is off by default. To turn it on, either call RuleBaseConfiguration.setSequential(true), or set the rulebase configuration property drools.sequential to true. Sequential mode can fall back to a dynamic agenda by calling setSequentialAgenda with SequentialAgenda.DYNAMIC. You may also set the "drools.sequential.agenda" property to "sequential" or "dynamic".
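
A configuration sketch using the property form mentioned above, assuming the properties are supplied via KnowledgeBaseConfiguration's setProperty:

KnowledgeBaseConfiguration conf = KnowledgeBaseFactory.newKnowledgeBaseConfiguration();
conf.setProperty( "drools.sequential", "true" );
// Optionally fall back to a dynamic agenda
conf.setProperty( "drools.sequential.agenda", "dynamic" );
KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase( conf );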

Pipeline

The PipelineFactory and associated classes are there to help with the automation of getting information into and out of Drools, especially when using services such as Java Message Service (JMS), and other data sources that aren't Java objects. Transformers for Smooks, JAXB, XStream and jXLS are provided. Smooks is an ETL (extract, transform, load) tool and can work with a variety of data sources. JAXB is a Java standard for XML binding capable of working with XML schemas. XStream is a simple and fast XML serialisation framework. Finally, jXLS allows for loading of Java objects from an Excel spreadsheet. Minimal information on these technologies will be provided here; beyond this, you should consult the relevant user guide for each of these tools.

Figure 3.27. PipelineFactory

PipelineFactory

Pipeline is not meant as a replacement for products like the more powerful Apache Camel. It is a simple framework aimed at the specific Drools use cases.

In Drools, a pipeline is a series of stages that operate on and propagate a given payload. Typically this starts with a Pipeline instance which is responsible for taking the payload, creating a PipelineContext for it and propagating that to the first receiver stage. Two subtypes of Pipeline are provided, both requiring a different PipelineContext: StatefulKnowledgeSessionPipeline and StatelessKnowledgeSessionPipeline. PipelineFactory provides methods to create both of the two Pipeline subtypes. Notice that both factory methods take the relevant session as an argument. The construction of a StatefulKnowledgeSessionPipeline is shown below, where its receiver is also set.

Example 3.34. StatefulKnowledgeSessionPipeline

Pipeline pipeline = PipelineFactory.newStatefulKnowledgeSessionPipeline( ksession );
pipeline.setReceiver( receiver );

A pipeline is then made up of a chain of Stages that implement both the Emitter and the Receiver interfaces. The Emitter interface enables the Stage to propagate a payload, and the Receiver interface lets it receive a payload. This is why the Pipeline interface only implements Emitter and Stage and not Receiver, as it is the first instance in the chain. The Stage interface allows a custom exception handler to be set on the Stage object.

Example 3.35. StageExceptionHandler

Transformer transformer = PipelineFactory.newXStreamFromXmlTransformer( xstream );
transformer.setStageExceptionHandler( new StageExceptionHandler() { .... } );

The Transformer interface extends Stage, Emitter and Receiver, providing those interface methods as a single type. Its other purpose is that of a marker interface indicating this particular role of the implementing class. (We have several other marker interfaces such as Expression and Action, both of which also extend Stage, Emitter and Receiver.) One of the stages should be responsible for setting a result value on the PipelineContext. It's the responsibility of the ResultHandler interface, to be implemented by the user, to process these results. It may do so by inserting them into some suitable object, whence the user's code may retrieve them.

Example 3.36. A simple ResultHandler

ResultHandler resultHandler = new ResultHandlerImpl();
pipeline.insert( factHandle, resultHandler );  
System.out.println( resultHandler );
...
public class ResultHandlerImpl implements ResultHandler {
    Object result;

    public void handleResult(Object result) {
        this.result = result;
    }

    public Object getResult() {
        return this.result;
    }
}

While the above example shows a simple handler that merely assigns the result to a field that the user can access, it could do more complex work like sending the object as a message.

Pipeline provides an adapter to insert the payload and to create the correct Pipeline Context internally.

In general it is easier to construct the pipelines in reverse. In the following example XML data is loaded from disk, transformed with XStream and finally inserted into the session.

Example 3.37. Constructing a pipeline

// Make the results (here: FactHandles) available to the user 
Action executeResultHandler = PipelineFactory.newExecuteResultHandler();

// Insert the transformed object into the session
// associated with the PipelineContext
KnowledgeRuntimeCommand insertStage =
  PipelineFactory.newStatefulKnowledgeSessionInsert();
insertStage.setReceiver( executeResultHandler );
       
// Create the transformer instance and the Transformer Stage,
// to transform from Xml to a Java object.
XStream xstream = new XStream();
Transformer transformer = PipelineFactory.newXStreamFromXmlTransformer( xstream );
transformer.setReceiver( insertStage );

// Create the start adapter Pipeline for StatefulKnowledgeSessions
Pipeline pipeline = PipelineFactory.newStatefulKnowledgeSessionPipeline( ksession );
pipeline.setReceiver( transformer );

// Instantiate a simple result handler and load and insert the XML
ResultHandlerImpl resultHandler = new ResultHandlerImpl();
pipeline.insert( ResourceFactory.newClassPathResource( "path/facts.xml", getClass() ),
                 resultHandler );

While the above example is for loading a resource from disk, it is also possible to work from a running messaging service. Drools currently provides a single service for JMS, called JmsMessenger. Support for other services will be added later. The code below shows part of a unit test which illustrates the JmsMessenger in action:

Example 3.38. Using JMS with Pipeline

// As this is a service, it's more likely that
// the results will be logged or sent as a return message.
Action resultHandlerStage = PipelineFactory.newExecuteResultHandler();

// Insert the transformed object into the session associated with the PipelineContext
KnowledgeRuntimeCommand insertStage = PipelineFactory.newStatefulKnowledgeSessionInsert();
insertStage.setReceiver( resultHandlerStage );

// Create the transformer instance and create the Transformer stage where we are
// going from XML to Pojo. JAXB needs an array of the available classes.
JAXBContext jaxbCtx = KnowledgeBuilderHelper.newJAXBContext( classNames,
                                                              kbase );
Unmarshaller unmarshaller = jaxbCtx.createUnmarshaller();
Transformer transformer = PipelineFactory.newJaxbFromXmlTransformer( unmarshaller );
transformer.setReceiver( insertStage );

// Payloads for JMS arrive in a Message wrapper: we need to unwrap this object.
Action unwrapObjectStage = PipelineFactory.newJmsUnwrapMessageObject();
unwrapObjectStage.setReceiver( transformer );

// Create the start adapter Pipeline for StatefulKnowledgeSessions
Pipeline pipeline = PipelineFactory.newStatefulKnowledgeSessionPipeline( ksession );
pipeline.setReceiver( unwrapObjectStage );

// Services, like JmsMessenger take a ResultHandlerFactory implementation.
// This is because a result handler must be created for each incoming message.
ResultHandlerFactory factory = new ResultHandlerFactoryImpl();
Service messenger = PipelineFactory.newJmsMessenger( pipeline,
                                                     props,
                                                     destinationName,
                                                     factory );
messenger.start();

XStream Transformer

Example 3.39. XStream FromXML transformer stage

XStream xstream = new XStream();
Transformer transformer = PipelineFactory.newXStreamFromXmlTransformer( xstream );
transformer.setReceiver( nextStage );


Example 3.40. XStream ToXML transformer stage

XStream xstream = new XStream();
Transformer transformer = PipelineFactory.newXStreamToXmlTransformer( xstream );
transformer.setReceiver( receiver );


JAXB Transformer

The Transformer objects are JaxbFromXmlTransformer and JaxbToXmlTransformer. The former uses a javax.xml.bind.Unmarshaller for converting an XML document into a content tree; the latter serializes a content tree to XML by passing it to a javax.xml.bind.Marshaller. Both of these objects can be obtained from a JAXBContext object.

A JAXBContext maintains the set of Java classes that are bound to XML elements. Such classes may be generated from an XML schema, by compiling it with JAXB's schema compiler xjc. Alternatively, handwritten classes can be augmented with annotations from javax.xml.bind.annotation.

Unmarshalling an XML document results in an object tree. Inserting objects from this tree as facts into a session can be done by walking the tree and inserting nodes as appropriate. This could be done in the context of a pipeline by a custom Transformer that emits the nodes one by one to its receiver.
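
A minimal sketch of that idea, using hypothetical Order and OrderItem classes as might be generated from a schema:

// Hypothetical JAXB-generated classes: an Order with OrderItem children
Order order = (Order) unmarshaller.unmarshal( inputStream );
ksession.insert( order );
for ( OrderItem item : order.getItems() ) {
    ksession.insert( item );  // insert each child node as its own fact
}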

Example 3.41. JAXB XSD Generation into the KnowlegeBuilder

Options xjcOpts = new Options();
xjcOpts.setSchemaLanguage( Language.XMLSCHEMA );
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
 
String[] classNames =
  KnowledgeBuilderHelper.addXsdModel(
    ResourceFactory.newClassPathResource( "order.xsd", getClass() ),
    kbuilder,
    xjcOpts,
    "xsd" );


Example 3.42. JAXB From XML transformer stage

JAXBContext jaxbCtx =
  KnowledgeBuilderHelper.newJAXBContext( classNames, kbase );
Unmarshaller unmarshaller = jaxbCtx.createUnmarshaller();
Transformer transformer = PipelineFactory.newJaxbFromXmlTransformer( unmarshaller );
transformer.setReceiver( receiver );
 


Example 3.43. JAXB to XML transformer stage

Marshaller marshaller = jaxbCtx.createMarshaller();
Transformer transformer = PipelineFactory.newJaxbToXmlTransformer( marshaller );
transformer.setReceiver( receiver );


Smooks Transformer

Example 3.44. Smooks FromSource transformer stage

Smooks smooks = new Smooks( getClass().getResourceAsStream( "smooks-config.xml" ) );
Transformer transformer =
  PipelineFactory.newSmooksFromSourceTransformer( smooks, "orderItem" );
transformer.setReceiver( receiver );


Example 3.45. Smooks ToSource transformer stage

Smooks smooks = new Smooks( getClass().getResourceAsStream( "smooks-config.xml" ) );
Transformer transformer = PipelineFactory.newSmooksToSourceTransformer( smooks );
transformer.setReceiver( receiver );


jXLS (Excel/Calc/CSV) Transformer

This transformer transforms from an Excel spreadsheet to a map of Java objects, using jXLS, and the resulting map is set as the propagating object. You may need to use splitters and MVEL expressions to split up the transformation to insert individual Java objects. Note that you must provide an XLSReader, which references the mapping file, and also an MVEL string which will instantiate the map. The MVEL expression is pre-compiled but executed on each usage of the transformation.

Example 3.46. JXLS transformer stage

XLSReader mainReader =
  ReaderBuilder.buildFromXML( ResourceFactory.newClassPathResource( "departments.xml", getClass() ).getInputStream() );
String expr = "[ 'departments' : new java.util.ArrayList()," +
              "  'company' : new org.drools.runtime.pipeline.impl.Company() ]";
Transformer transformer = PipelineFactory.newJxlsTransformer(mainReader, expr );

JMS Messenger

This transformer creates a new JmsMessenger which runs as a service in its own thread. It expects an existing JNDI entry for "ConnectionFactory", used to create the MessageConsumer which will feed into the specified pipeline.

Example 3.47. JMS Messenger stage

// As this is a service, it's more likely the results will be logged
// or sent as a return message.
Action resultHandlerStage = PipelineFactory.newExecuteResultHandler();

// Insert the transformed object into the session associated with the PipelineContext
KnowledgeRuntimeCommand insertStage = PipelineFactory.newStatefulKnowledgeSessionInsert();
insertStage.setReceiver( resultHandlerStage );

// Create the transformer instance and create the Transformer stage,
// where we are going from XML to Java object.
// JAXB needs an array of the available classes
JAXBContext jaxbCtx = KnowledgeBuilderHelper.newJAXBContext( classNames, kbase );
Unmarshaller unmarshaller = jaxbCtx.createUnmarshaller();
Transformer transformer = PipelineFactory.newJaxbFromXmlTransformer( unmarshaller );
transformer.setReceiver( insertStage );

// Payloads for JMS arrive in a Message wrapper, we need to unwrap this object.
Action unwrapObjectStage = PipelineFactory.newJmsUnwrapMessageObject();
unwrapObjectStage.setReceiver( transformer );

// Create the start adapter Pipeline for StatefulKnowledgeSessions
Pipeline pipeline = PipelineFactory.newStatefulKnowledgeSessionPipeline( ksession );
pipeline.setReceiver( unwrapObjectStage );

// Services like JmsMessenger take a ResultHandlerFactory implementation.
// This is so because a result handler must be created for each incoming message.
ResultHandlerFactory factory = new ResultHandlerFactoryImpl();
Service messenger = PipelineFactory.newJmsMessenger( pipeline,
                                                     props,
                                                     destinationName,
                                                     factory );

Commands and the CommandExecutor

Drools has the concept of stateful or stateless sessions. We've already covered stateful sessions, which use the standard working memory that can be worked with iteratively over time. Stateless is a one-off execution of a working memory with a provided data set. It may return some results, with the session being disposed at the end, prohibiting further iterative interactions. You can think of stateless as treating a rule engine like a function call with optional return results.

In Drools 4 we supported these two paradigms, but the way the user interacted with them was different. StatelessSession used an execute(...) method which would insert a collection of objects as facts. StatefulSession didn't have this method, and instead used the more traditional insert(...) method. The other issue was that the StatelessSession did not return any results, so users had to map globals themselves to get results, and it wasn't possible to do anything besides inserting objects; users could not start processes or execute queries.

Drools 5.0 addresses all of these issues and more. The foundation for this is the CommandExecutor interface, which both the stateful and stateless session interfaces extend, providing a consistent API and the ExecutionResults:

Figure 3.28. CommandExecutor

CommandExecutor

Figure 3.29. ExecutionResults

ExecutionResults

The CommandFactory allows for commands to be executed on those sessions, the only difference being that the Stateless Knowledge Session executes fireAllRules() at the end before disposing the session. The currently supported commands are:

  • FireAllRules

  • GetGlobal

  • SetGlobal

  • InsertObject

  • InsertElements

  • Query

  • StartProcess

  • BatchExecution

InsertObject will insert a single object, with an optional "out" identifier. InsertElements will iterate an Iterable, inserting each of the elements. What this means is that a Stateless Knowledge Session is no longer limited to just inserting objects; it can now start processes or execute queries, and do this in any order.

Example 3.48. Insert Command

StatelessKnowledgeSession ksession = kbase.newStatelessKnowledgeSession();
ExecutionResults bresults =
  ksession.execute( CommandFactory.newInsert( new Cheese( "stilton" ), "stilton_id" ) );
Cheese stilton = (Cheese) bresults.getValue( "stilton_id" );

The execute method always returns an ExecutionResults instance, which allows access to any command results if they specify an out identifier such as the "stilton_id" above.

Example 3.49. InsertElements Command

StatelessKnowledgeSession ksession = kbase.newStatelessKnowledgeSession();
Command cmd = CommandFactory.newInsertElements( Arrays.asList( new Object[] {
                  new Cheese( "stilton" ),
                  new Cheese( "brie" ),
                  new Cheese( "cheddar" )
              } ) );
ExecutionResults bresults = ksession.execute( cmd );

The execute method only allows for a single command. That's where BatchExecution comes in, which represents a composite command, created from a list of commands. Now, execute will iterate over the list and execute each command in turn. This means you can insert some objects, start a process, call fireAllRules and execute a query, all in a single execute(...) call, which is quite powerful.

As mentioned previously, the Stateless Knowledge Session will execute fireAllRules() automatically at the end. However, the keen-eyed reader probably has already noticed the FireAllRules command and wondered how that works with a StatelessKnowledgeSession. The FireAllRules command is allowed, and using it will disable the automatic execution at the end; think of using it as a sort of manual override function.
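
A sketch of that manual override, assuming CommandFactory.newFireAllRules() creates the FireAllRules command listed above:

List cmds = new ArrayList();
cmds.add( CommandFactory.newInsert( new Cheese( "stilton" ) ) );
// An explicit FireAllRules disables the automatic call at the end
cmds.add( CommandFactory.newFireAllRules() );
ksession.execute( CommandFactory.newBatchExecution( cmds ) );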

Commands support out identifiers. Any command that has an out identifier set on it will add its results to the returned ExecutionResults instance. Let's look at a simple example to see how this works.

Example 3.50. BatchExecution Command

StatelessKnowledgeSession ksession = kbase.newStatelessKnowledgeSession();

List cmds = new ArrayList();        
cmds.add( CommandFactory.newInsertObject( new Cheese( "stilton", 1), "stilton") );
cmds.add( CommandFactory.newStartProcess( "process cheeses" ) );
cmds.add( CommandFactory.newQuery( "cheeses" ) );
ExecutionResults bresults = ksession.execute( CommandFactory.newBatchExecution( cmds ) );
Cheese stilton = ( Cheese ) bresults.getValue( "stilton" );
QueryResults qresults = ( QueryResults ) bresults.getValue( "cheeses" );

In the above example multiple commands are executed, two of which populate the ExecutionResults. The query command defaults to using the same identifier as the query name, but it can also be mapped to a different identifier, as sketched below.
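
For instance, using the two-argument newQuery(identifier, name) form seen earlier (the identifier "cheesesId" is just illustrative):

// Default: the result identifier equals the query name "cheeses"
cmds.add( CommandFactory.newQuery( "cheeses" ) );
// Mapped: the same query's results are returned under "cheesesId"
cmds.add( CommandFactory.newQuery( "cheesesId", "cheeses" ) );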

A custom XStream marshaller can be used with the Drools Pipeline to achieve XML scripting, which is perfect for services. Here are two simple XML samples, one for the BatchExecution and one for the ExecutionResults.

Example 3.51. Simple BatchExecution XML

<batch-execution>
   <insert out-identifier='outStilton'>
      <org.drools.Cheese>
         <type>stilton</type>
         <price>25</price>
         <oldPrice>0</oldPrice>
      </org.drools.Cheese>
   </insert>
</batch-execution>

Example 3.52. Simple ExecutionResults XML

<execution-results>
   <result identifier='outStilton'>
      <org.drools.Cheese>
         <type>stilton</type>
         <oldPrice>25</oldPrice>        
         <price>30</price>
      </org.drools.Cheese>
   </result>
</execution-results>

The previously mentioned pipeline allows for a series of Stage objects, combined to help with getting data into and out of sessions. There is a Stage implementing the CommandExecutor interface that allows the pipeline to script either a stateful or stateless session. The pipeline setup is trivial:

Example 3.53. Pipeline for CommandExecutor

Action executeResultHandler = PipelineFactory.newExecuteResultHandler();

Action assignResult = PipelineFactory.newAssignObjectAsResult();
assignResult.setReceiver( executeResultHandler );

Transformer outTransformer =
  PipelineFactory.newXStreamToXmlTransformer( BatchExecutionHelper.newXStreamMarshaller() );
outTransformer.setReceiver( assignResult );

KnowledgeRuntimeCommand batchExecution = PipelineFactory.newCommandExecutor();
batchExecution.setReceiver( outTransformer );

Transformer inTransformer =
  PipelineFactory.newXStreamFromXmlTransformer( BatchExecutionHelper.newXStreamMarshaller() );
inTransformer.setReceiver( batchExecution );

Pipeline pipeline = PipelineFactory.newStatelessKnowledgeSessionPipeline( ksession );
pipeline.setReceiver( inTransformer );

The key thing to note here is the use of the BatchExecutionHelper to provide a specially configured XStream with custom converters for our Command objects, and the new CommandExecutor stage.

Using the pipeline is very simple. You must provide your own implementation of the ResultHandler which is called when the pipeline executes the ExecuteResultHandler stage.

Figure 3.30. Pipeline ResultHandler

Pipeline ResultHandler

Example 3.54. Simple Pipeline ResultHandler

public static class ResultHandlerImpl implements ResultHandler {
    Object object;

    public void handleResult(Object object) {
       this.object = object;
    }

    public Object getObject() {
        return this.object;
    }
}

Example 3.55. Using a Pipeline

InputStream inXml = ...;
ResultHandler resultHandler = new ResultHandlerImpl();
pipeline.insert( inXml, resultHandler );

Earlier a BatchExecution was created with Java to insert some objects and execute a query. The XML representation to be used with the pipeline for that example is shown below, with parameters added to the query.

Example 3.56. BatchExecution Marshalled to XML

<batch-execution>
  <insert out-identifier="stilton">
    <org.drools.Cheese>
      <type>stilton</type>
      <price>1</price>
      <oldPrice>0</oldPrice>
    </org.drools.Cheese>
  </insert>
  <query out-identifier='cheeses2' name='cheesesWithParams'>
    <string>stilton</string>
    <string>cheddar</string>
  </query>
</batch-execution>

The CommandExecutor returns an ExecutionResults, and this is handled by the pipeline code snippet as well. A similar output for the <batch-execution> XML sample above would be:

Example 3.57. ExecutionResults Marshalled to XML

<execution-results>
  <result identifier="stilton">
    <org.drools.Cheese>
      <type>stilton</type>
      <price>2</price>
    </org.drools.Cheese>
  </result>        
  <result identifier='cheeses2'>
    <query-results>
      <identifiers>
        <identifier>cheese</identifier>
      </identifiers>
      <row>
        <org.drools.Cheese>
          <type>cheddar</type>
          <price>2</price>
          <oldPrice>0</oldPrice>
        </org.drools.Cheese>
      </row>
      <row>
        <org.drools.Cheese>
          <type>cheddar</type>
          <price>1</price>
          <oldPrice>0</oldPrice>
        </org.drools.Cheese>
      </row>
    </query-results>
  </result>
</execution-results>

The BatchExecutionHelper provides a configured XStream instance to support the marshalling of Batch Executions, where the resulting XML can be used as a message format, as shown above. Configured converters only exist for the commands supported via the Command Factory. The user may add other converters for their user objects. This is very useful for scripting stateless or stateful knowledge sessions, especially when services are involved.
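
As a sketch, the same configured XStream instance can be used directly to produce such a message from a command list like the one created earlier:

XStream xstream = BatchExecutionHelper.newXStreamMarshaller();
Command batchCmd = CommandFactory.newBatchExecution( cmds );
// Produces the <batch-execution> XML shown in the examples above
String xml = xstream.toXML( batchCmd );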

There is currently no XML schema to support schema validation. The basic format is outlined here, and the drools-transformer-xstream module contains an illustrative unit test, XStreamBatchExecutionTest. The root element is <batch-execution> and it can contain zero or more command elements.

Example 3.58. Root XML element

<batch-execution>
...
</batch-execution>

This contains a list of elements that represent commands; the supported commands are limited to those provided by the Command Factory. The most basic of these is the <insert> element, which inserts objects. The contents of the insert element is the user object, as dictated by XStream.

Example 3.59. Insert

<batch-execution>
   <insert>
      ...<!-- any user object -->
   </insert>
</batch-execution>

The insert element features an "out-identifier" attribute, indicating that the inserted object should also be returned as part of the result payload.

Example 3.60. Insert with Out Identifier Command

<batch-execution>
   <insert out-identifier='userVar'>
      ...
   </insert>
</batch-execution>

It's also possible to insert a collection of objects using the <insert-elements> element. This command does not support an out-identifier. The org.domain.UserClass is just an illustrative user object that XStream would serialize.

Example 3.61. Insert Elements command

<batch-execution>
   <insert-elements>
      <org.domain.UserClass>
         ...
      </org.domain.UserClass>
      <org.domain.UserClass>
         ...
      </org.domain.UserClass>
      <org.domain.UserClass>
         ...
      </org.domain.UserClass>
   </insert-elements>
</batch-execution>

Next, there is the <set-global> element, which sets a global for the session.

Example 3.62. Set Global Command

<batch-execution>
   <set-global identifier='userVar'>
      <org.domain.UserClass>
         ...
      </org.domain.UserClass>
   </set-global>
</batch-execution>

<set-global> also supports two other optional attributes, out and out-identifier. A true value for the boolean out will add the global to the <execution-results> payload, using the name from the identifier attribute. out-identifier works like out but additionally allows you to override the identifier used in the <execution-results> payload.

Example 3.63. Set Global Command with Out Identifiers

<batch-execution>
   <set-global identifier='userVar1' out='true'>
      <org.domain.UserClass>
         ...
      </org.domain.UserClass>
   </set-global>
   <set-global identifier='userVar2' out-identifier='alternativeUserVar2'>
      <org.domain.UserClass>
         ...
      </org.domain.UserClass>
   </set-global>
</batch-execution>

There is also a <get-global> element, without contents, with an identifier attribute and an optional out-identifier attribute. (There is no need for an out attribute, because retrieving the value is the sole purpose of a <get-global> element.)

Example 3.64. Get Global Command

<batch-execution>
   <get-global identifier='userVar1' />
   <get-global identifier='userVar2' out-identifier='alternativeUserVar2'/>
</batch-execution>

While the out attribute is useful in returning specific instances as a result payload, we often wish to run actual queries. Both parameterized and parameterless queries are supported. The name attribute is the name of the query to be called, and the out-identifier is the identifier to be used for the query results in the <execution-results> payload.

Example 3.65. Query Command

<batch-execution>
   <query out-identifier='cheeses' name='cheeses'/>
   <query out-identifier='cheeses2' name='cheesesWithParams'>
      <string>stilton</string>
      <string>cheddar</string>
   </query>
</batch-execution>

The <startProcess> command accepts optional parameters. Other process related methods will be added later, like interacting with work items.

Example 3.66. Start Process Command

<batch-execution>
   <startProcess processId='org.drools.actions'>
      <parameter identifier='person'>
         <org.drools.TestVariable>
            <name>John Doe</name>
         </org.drools.TestVariable>
      </parameter>
   </startProcess>
</batch-execution>

Example 3.67. Signal Event Command

<signal-event process-instance-id='1' event-type='MyEvent'>
   <string>MyValue</string>
</signal-event>

Example 3.68. Complete Work Item Command

<complete-work-item id='21'>
   <result identifier='Result'>
      <string>SomeOtherString</string>
   </result>
</complete-work-item>

Example 3.69. Abort Work Item Command

<abort-work-item id='21' />

Support for more commands will be added over time.

Marshalling

The MarshallerFactory is used to marshal and unmarshal Stateful Knowledge Sessions.

Figure 3.31. MarshallerFactory

MarshallerFactory

At its simplest, the MarshallerFactory can be used as follows:

Example 3.70. Simple Marshaller Example

// ksession is the StatefulKnowledgeSession
// kbase is the KnowledgeBase
ByteArrayOutputStream baos = new ByteArrayOutputStream();
Marshaller marshaller = MarshallerFactory.newMarshaller( kbase );
marshaller.marshall( baos, ksession );
baos.close();

However, with marshalling you need more flexibility when dealing with referenced user data. To achieve this we have the ObjectMarshallingStrategy interface. Two implementations are provided, but users can implement their own. The two supplied strategies are IdentityMarshallingStrategy and SerializeMarshallingStrategy. SerializeMarshallingStrategy is the default, as used in the example above, and it just calls the Serializable or Externalizable methods on a user instance. IdentityMarshallingStrategy instead creates an integer id for each user object and stores them in a Map, while the id is written to the stream. When unmarshalling, it accesses the IdentityMarshallingStrategy map to retrieve the instance. This means that if you use the IdentityMarshallingStrategy, it is stateful for the life of the Marshaller instance and will create ids and keep references to all objects that it attempts to marshal. Below is the code to use an Identity Marshalling Strategy.

Example 3.71. IdentityMarshallingStrategy

ByteArrayOutputStream baos = new ByteArrayOutputStream();
ObjectMarshallingStrategy oms = MarshallerFactory.newIdentityMarshallingStrategy();
Marshaller marshaller =
  MarshallerFactory.newMarshaller( kbase, new ObjectMarshallingStrategy[]{ oms } );
marshaller.marshall( baos, ksession );
baos.close();

For added flexibility we can't assume that a single strategy is suitable. Therefore we have added the ObjectMarshallingStrategyAcceptor interface that each Object Marshalling Strategy contains. The Marshaller has a chain of strategies, and when it attempts to read or write a user object it iterates the strategies, asking if they accept responsibility for marshalling the user object. One of the provided implementations is ClassFilterAcceptor. This allows strings and wild cards to be used to match class names. The default is "*.*", so in the above example the Identity Marshalling Strategy is used, which has a default "*.*" acceptor.

Assuming that we want to serialize all classes except for one given package, where we will use identity lookup, we could do the following:

Example 3.72. IdentityMarshallingStrategy with Acceptor

ByteArrayOutputStream baos = new ByteArrayOutputStream();
ObjectMarshallingStrategyAcceptor identityAcceptor =
  MarshallerFactory.newClassFilterAcceptor( new String[] { "org.domain.pkg1.*" } );
ObjectMarshallingStrategy identityStrategy =
  MarshallerFactory.newIdentityMarshallingStrategy( identityAcceptor );
ObjectMarshallingStrategy sms = MarshallerFactory.newSerializeMarshallingStrategy();
Marshaller marshaller =
  MarshallerFactory.newMarshaller( kbase,
                                   new ObjectMarshallingStrategy[]{ identityStrategy, sms } );
marshaller.marshall( baos, ksession );
baos.close();

Note that the acceptance checking order is in the natural order of the supplied array.

Persistence and Transactions

Long-term, out-of-the-box persistence with the Java Persistence API (JPA) is possible with Drools. You will need to have some implementation of the Java Transaction API (JTA) installed. For development purposes we recommend the Bitronix Transaction Manager, as it's simple to set up and works embedded, but for production use JBoss Transactions is recommended.

Example 3.73. Simple example using transactions

Environment env = KnowledgeBaseFactory.newEnvironment();
env.set( EnvironmentName.ENTITY_MANAGER_FACTORY,
         Persistence.createEntityManagerFactory( "emf-name" ) );
env.set( EnvironmentName.TRANSACTION_MANAGER,
         TransactionManagerServices.getTransactionManager() );
          
// KnowledgeSessionConfiguration may be null, and a default will be used
StatefulKnowledgeSession ksession =
  JPAKnowledgeService.newStatefulKnowledgeSession( kbase, null, env );
int sessionId = ksession.getId();
 
UserTransaction ut =
  (UserTransaction) new InitialContext().lookup( "java:comp/UserTransaction" );
ut.begin();
ksession.insert( data1 );
ksession.insert( data2 );
ksession.startProcess( "process1" );
ut.commit();

To use JPA, the Environment must be set with both the EntityManagerFactory and the TransactionManager. If a rollback occurs, the ksession state is also rolled back, so you can continue to use it after a rollback. To load a previously persisted Stateful Knowledge Session you'll need the id, as shown below:

Example 3.74. Loading a StatefulKnowledgeSession

StatefulKnowledgeSession ksession =
  JPAKnowledgeService.loadStatefulKnowledgeSession( sessionId, kbase, null, env );

To enable persistence several classes must be added to your persistence.xml, as in the example below:

Example 3.75. Configuring JPA

<persistence-unit name="org.drools.persistence.jpa" transaction-type="JTA">
   <provider>org.hibernate.ejb.HibernatePersistence</provider>
   <jta-data-source>jdbc/BitronixJTADataSource</jta-data-source>       
   <class>org.drools.persistence.session.SessionInfo</class>
   <class>org.drools.persistence.processinstance.ProcessInstanceInfo</class>
   <class>org.drools.persistence.processinstance.ProcessInstanceEventInfo</class>
   <class>org.drools.persistence.processinstance.WorkItemInfo</class>
   <properties>
         <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>            
         <property name="hibernate.max_fetch_depth" value="3"/>
         <property name="hibernate.hbm2ddl.auto" value="update" />
         <property name="hibernate.show_sql" value="true" />
         <property name="hibernate.transaction.manager_lookup_class"
                      value="org.hibernate.transaction.BTMTransactionManagerLookup" />
   </properties>
</persistence-unit>

The JDBC JTA data source would have to be configured first. Bitronix provides a number of ways of doing this, and its documentation should be consulted for details. For a quick start, here is the programmatic approach:

Example 3.76. Configuring JTA DataSource

PoolingDataSource ds = new PoolingDataSource();
ds.setUniqueName( "jdbc/BitronixJTADataSource" );
ds.setClassName( "org.h2.jdbcx.JdbcDataSource" );
ds.setMaxPoolSize( 3 );
ds.setAllowLocalTransactions( true );
ds.getDriverProperties().put( "user", "sa" );
ds.getDriverProperties().put( "password", "sasa" );
ds.getDriverProperties().put( "URL", "jdbc:h2:mem:mydb" );
ds.init();

Bitronix also provides a simple embedded JNDI service, ideal for testing. To use it add a jndi.properties file to your META-INF and add the following line to it:

Example 3.77. JNDI properties

java.naming.factory.initial=bitronix.tm.jndi.BitronixInitialContextFactory