Drools Flow is a workflow or process engine that allows advanced integration of processes and rules. A process or a workflow describes the order in which a series of steps need to be executed, using a flow chart. For example, the following figure shows a process where first Task1 and Task2 need to be executed in parallel. After completion of both, Task3 needs to be executed.
The following chapters will teach you everything you need to know about Drools Flow. Its distinguishing characteristics are:
All these features (and many more) will be explained in the following chapters.
This section describes how to get started with Drools Flow. It will guide you to create and execute your first Drools Flow process.
The best way to get started is to use the Drools Eclipse Plugin for the Eclipse development environment. It allows users to create, execute and debug Drools processes and rules. To get started with the plugin, you first need an installation of Eclipse 3.4.x including the Eclipse Graphical Editing Framework (GEF). Eclipse can be downloaded from the following link (if you do not know which version of Eclipse you need, simply choose the "Eclipse IDE for Java Developers", which already includes the GEF plugin):
http://www.eclipse.org/downloads/
Next you need to install the Drools Eclipse plugin. Download the Drools Eclipse IDE plugin from the link below. Unzip the downloaded file into your main Eclipse folder (do not just copy the file there; extract it so that the feature and plugin jars end up in the features and plugins directories of Eclipse) and (re)start Eclipse.
http://www.jboss.org/drools/downloads.html
To check that the installation was successful, try opening the Drools perspective: click the "Open Perspective" button in the top right corner of your Eclipse window, select "Other..." and pick the Drools perspective. If you cannot find the Drools perspective as one of the possible perspectives, the installation probably was unsuccessful. Check whether you executed each of the required steps correctly: Do you have the right version of Eclipse (3.4.x)? Ensure that you have Eclipse GEF installed, by checking whether the org.eclipse.gef_3.4.*.jar exists in the plugins directory in your Eclipse root folder. Make sure that you have extracted the Drools Eclipse plugin correctly, by checking whether the org.drools.eclipse_*.jar exists in the plugins directory in your Eclipse root folder. If you cannot find the problem, try contacting us, either on irc or on the user mailing list. More information can be found on our homepage:
The Drools project wizard can be used to set up an executable project that contains the necessary files to get started easily with defining and executing processes. This wizard will set up a basic project structure, the classpath, a sample process and execution code to get you started. To create a new Drools project, simply left-click on the Drools action button (with the Drools head) in the Eclipse toolbar and select "New Drools Project". (Note that the Drools action button only shows up in the Drools perspective. To open the Drools perspective (if you haven't done so already), click the "Open Perspective" button in the top right corner of your Eclipse window, select "Other..." and pick the Drools perspective.) Alternatively, you could also select "File", then "New" followed by "Project ...", and in the Drools folder, select "Drools Project". This should open the following dialog:
Give your project a name and click "Next". In the following dialog you can select which elements are added to your project by default. Since we are creating a new process, deselect the first two checkboxes and select the last two. This will generate a sample process and a Java class to execute this process.
If you have not yet set up a Drools runtime, you should do this now. A Drools runtime is a collection of jars on your file system that represent one specific release of the Drools project jars. To create a runtime, you must either point the IDE to the release of your choice, or you can simply create a new runtime on your file system from the jars included in the Drools Eclipse plugin. Since we simply want to use the Drools version included in this plugin, we will do the latter. Note that you will only have to do this once; next time you create a Drools project, it will automatically use the default Drools runtime (unless you specify otherwise).
Unless you have already set up a Drools runtime, click the "Next" button. The following dialog, as displayed below, shows up, telling you that you have not yet defined a default Drools runtime and that you should configure the workspace settings first. Do this by clicking on the "Configure Workspace Settings ..." link.
The dialog that pops up shows the workspace settings for Drools runtimes. The first time you do this, the list of installed Drools runtimes is probably empty, as shown below. To create a new runtime on your file system, click the "Add..." button. This shows a dialog where you should give the new runtime a name (e.g. "Drools 5.0.0 runtime"), and a path to your Drools runtime on your file system. In this tutorial, we will simply create a new Drools 5 runtime from the jars embedded in the Drools Eclipse plugin. Click the "Create a new Drools 5 runtime ..." button and select the folder where you want this runtime to be stored and click the "OK" button. You will see the selected path showing up in the previous dialog. As we're all done here, click the "OK" button. You will see the newly created runtime shown in your list of Drools runtimes. Select this runtime as the new default runtime by clicking on the check box in front of your runtime name and click "OK". After successfully setting up your runtime, you can now finish the project creation wizard by clicking on the "Finish" button.
The end result should look like this and contains:
ruleflow.rf: the process definition, which is a very simple process containing a Start node (the entry point), an Action node (that prints out "Hello World") and an End node (the end of the process).
RuleFlowTest.java: a Java class that executes the process.
The necessary libraries are automatically added to the project classpath as a Drools library.
By double-clicking the ruleflow.rf file, the process will be opened in the RuleFlow editor. The RuleFlow editor contains a graphical representation of your process definition. It consists of nodes that are connected to each other. The editor shows the overall control flow, while the details of each of the elements can be viewed (and edited) in the Properties View at the bottom. The editor contains a palette at the left that can be used to drag-and-drop new nodes, and an outline view at the right.
This process is a simple sequence of three nodes. The Start node defines the start of the process. It is connected to an Action node (called "Hello") that simply prints out "Hello World" to the standard output. You can see this by clicking on the "Hello" node and checking the action property in the Properties View below. This node is then connected to an End node, signaling the end of the process.
While it is probably easier to edit processes using the graphical editor, users can also modify the underlying XML directly. The XML for our sample process is shown below (note that we did not include the graphical information here for simplicity). The process element contains parameters like the name and id of the process, and consists of three main subsections: a header (where information like variables, globals and imports can be defined), the nodes and the connections.
<?xml version="1.0" encoding="UTF-8"?>
<process xmlns="http://drools.org/drools-5.0/process"
         xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
         xs:schemaLocation="http://drools.org/drools-5.0/process drools-processes-5.0.xsd"
         type="RuleFlow" name="ruleflow" id="com.sample.ruleflow" package-name="com.sample" >

  <header>
  </header>

  <nodes>
    <start id="1" name="Start" x="16" y="16" />
    <actionNode id="2" name="Hello" x="128" y="16" >
      <action type="expression" dialect="mvel">System.out.println("Hello World");</action>
    </actionNode>
    <end id="3" name="End" x="240" y="16" />
  </nodes>

  <connections>
    <connection from="1" to="2" />
    <connection from="2" to="3" />
  </connections>

</process>
To execute this process, right-click on RuleFlowTest.java and select "Run As..." and "Java Application". When the process is executed, the following output should appear in the Console window:
Hello World
If you look at the code of the class RuleFlowTest (see below), you will see that executing a process requires a few steps:
You should first create a Knowledge Base. A Knowledge Base contains all the knowledge (i.e., processes, rules, etc.) that are relevant in your application. This Knowledge Base is usually created once, and then reused. In this case, the Knowledge Base only consists of our sample process.
Next, you should create a session to interact with the engine. Note that we also add a logger to the session to log execution events and make it easier to visualize what is going on.
Finally, you can start a new instance of the process by invoking the startProcess(String processId) method on the session. This starts the execution of your process instance, resulting in the execution of the Start node, the Action node, and the End node, in this order, after which the process instance will be completed.
package com.sample;

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderError;
import org.drools.builder.KnowledgeBuilderErrors;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.KnowledgeType;
import org.drools.io.ResourceFactory;
import org.drools.logger.KnowledgeRuntimeLogger;
import org.drools.logger.KnowledgeRuntimeLoggerFactory;
import org.drools.runtime.StatefulKnowledgeSession;

/**
 * This is a sample file to launch a process.
 */
public class RuleFlowTest {

    public static final void main(String[] args) {
        try {
            // load up the knowledge base
            KnowledgeBase kbase = readKnowledgeBase();
            StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
            KnowledgeRuntimeLogger logger = KnowledgeRuntimeLoggerFactory.newFileLogger(ksession, "test");
            // start a new process instance
            ksession.startProcess("com.sample.ruleflow");
            logger.close();
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }

    private static KnowledgeBase readKnowledgeBase() throws Exception {
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("ruleflow.rf"), KnowledgeType.DRF);
        KnowledgeBuilderErrors errors = kbuilder.getErrors();
        if (errors.size() > 0) {
            for (KnowledgeBuilderError error: errors) {
                System.err.println(error);
            }
            throw new IllegalArgumentException("Could not parse knowledge.");
        }
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
        return kbase;
    }
}
Congratulations, you have successfully executed your first process!
Because we added a logger to the session, you can easily check what happened internally by looking at the audit log. Select the "Audit View" tab on the bottom right, next to the Console tab. Click on the "Open Log" button (the first one on the right of the view) and navigate to the newly created test.log file in your project folder. (If you are not sure where this project folder is located, right-click on the project folder and you will find the location in the "Resource" section). An image like the one below should be shown. It is a tree view of the events that occurred at runtime. Events that were executed as the direct result of another event are shown as the children of that event. This log shows that after starting the process, the Start node, the Action node and the End node were triggered, in that order, after which the process instance was completed.
You can now start experimenting and designing your own process by modifying our example. Note that you can validate your process by clicking on the "Check the ruleflow model" button, i.e., the green check box action in the upper toolbar that shows up if you are editing a process. Processes will also be validated upon save, and errors will be shown in the Error View.
Continue reading our documentation to learn about our more advanced features.
A RuleFlow is a process that describes the order in which a series of steps need to be executed, using a flow chart. A process consists of a collection of nodes that are linked to each other using connections. Each of the nodes represents one step in the overall process while the connections specify how to transition from one node to the other. A large selection of predefined node types have been defined. This chapter describes how to define such processes and use them in your application.
Processes can be created by using one of the following three methods:
The graphical RuleFlow editor is an editor that allows you to create a process by dragging and dropping different nodes on a canvas and editing the properties of these nodes. The graphical RuleFlow editor is part of the Drools plug-in for Eclipse. Once you have set up a Drools project (check the IDE chapter if you do not know how to do this), you can start adding processes. When in a project, launch the "New" wizard: use Ctrl+N or right-click the directory you would like to put your ruleflow in and select "New", then "Other...". Choose the section on "Drools" and then pick "RuleFlow file". This will create a new .rf file.
Next you will see the graphical RuleFlow editor. Now would be a good time to switch to the Drools Perspective (if you haven't done so already). This will tweak the user interface so that it is optimal for rules. Then, ensure that you can see the Properties View at the bottom of the Eclipse window, as it will be necessary to fill in the different properties of the elements in your process. If you cannot see the Properties View, open it using the menu "Window", then "Show View" and "Other...", and under the "General" folder select the Properties View.
The RuleFlow editor consists of a palette, a canvas and an Outline View. To add new elements to the canvas, select the element you would like to create in the palette and then add it to the canvas by clicking on the preferred location. For example, click on the "RuleFlowGroup" icon in the "Components" palette of the GUI: you can then draw a few rule flow groups. Clicking on an element in your rule flow allows you to set the properties of that element. You can connect the nodes (as long as it is permitted by the different types of nodes) by using "Connection Creation" from the "Components" palette.
You can keep adding nodes and connections to your process until it represents the business logic that you want to specify. You'll probably need to check the process for any missing information (by pressing the green "Check" icon in the IDE menu bar) before trying to use it in your application.
It is also possible to specify processes using the underlying XML directly. The syntax of these XML processes is defined using an XML Schema definition. For example, the following XML fragment shows a simple process that contains a sequence of a Start node, an Action node that prints "Hello World" to the console, and an End node.
<?xml version="1.0" encoding="UTF-8"?>
<process xmlns="http://drools.org/drools-5.0/process"
         xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
         xs:schemaLocation="http://drools.org/drools-5.0/process drools-processes-5.0.xsd"
         type="RuleFlow" name="ruleflow" id="com.sample.ruleflow" package-name="com.sample" >

  <header>
  </header>

  <nodes>
    <start id="1" name="Start" x="16" y="16" />
    <actionNode id="2" name="Hello" x="128" y="16" >
      <action type="expression" dialect="mvel" >System.out.println("Hello World");</action>
    </actionNode>
    <end id="3" name="End" x="240" y="16" />
  </nodes>

  <connections>
    <connection from="1" to="2" />
    <connection from="2" to="3" />
  </connections>

</process>
The process XML file should consist of exactly one <process> element. This element contains parameters related to the process (its type, name, id and package name), and consists of three subsections: a <header> (where process-level information like variables, globals, imports and swimlanes can be defined), a <nodes> section that defines each of the nodes in the process, and a <connections> section that contains the connections between all the nodes in the process. In the nodes section, there is a specific element for each node, defining the various parameters and, possibly, sub-elements for that node type.
While it is recommended to define processes using the graphical editor or the underlying XML (to shield yourself from internal APIs), it is also possible to define a process using the Process API directly. The most important process elements are defined in the packages org.drools.workflow.core and org.drools.workflow.core.node. A "fluent API" is provided that allows you to easily construct processes in a readable manner using factories. At the end, you can validate the process that you have constructed manually. Some examples of how to build processes using this fluent API are given below.
This is a simple example of a basic process with a ruleset node only:
RuleFlowProcessFactory factory =
    RuleFlowProcessFactory.createProcess("org.drools.HelloWorldRuleSet");
factory
    // Header
    .name("HelloWorldRuleSet")
    .version("1.0")
    .packageName("org.drools")
    // Nodes
    .startNode(1).name("Start").done()
    .ruleSetNode(2)
        .name("RuleSet")
        .ruleFlowGroup("someGroup").done()
    .endNode(3).name("End").done()
    // Connections
    .connection(1, 2)
    .connection(2, 3);
RuleFlowProcess process = factory.validate().getProcess();
You can see that we start by calling the static createProcess() method from the RuleFlowProcessFactory class. This method creates a new process with the given id and returns the RuleFlowProcessFactory that can be used to create the process. A typical process consists of three parts. The header part comprises global elements like the name of the process, imports, variables, etc. The nodes section contains all the different nodes that are part of the process. The connections section finally links these nodes to each other to create a flow chart.
In this example, the header contains the name and the version of the process and the package name. After that, you can start adding nodes to the current process. If you have auto-completion you can see that you have different methods to create each of the supported node types at your disposal.
When you start adding nodes to the process, in this example by calling the startNode(), ruleSetNode() and endNode() methods, you can see that these methods return a specific NodeFactory, which allows you to set the properties of that node. Once you have finished configuring that specific node, the done() method returns you to the current RuleFlowProcessFactory so you can add more nodes, if necessary.

When you are finished adding nodes, you must connect them by creating connections between them. This can be done by calling the method connection, which will link previously created nodes.

Finally, you can validate the generated process by calling the validate() method and retrieve the created RuleFlowProcess object.
This example is using Split and Join nodes:
RuleFlowProcessFactory factory =
    RuleFlowProcessFactory.createProcess("org.drools.HelloWorldJoinSplit");
factory
    // Header
    .name("HelloWorldJoinSplit")
    .version("1.0")
    .packageName("org.drools")
    // Nodes
    .startNode(1).name("Start").done()
    .splitNode(2).name("Split").type(Split.TYPE_AND).done()
    .actionNode(3).name("Action 1")
        .action("mvel", "System.out.println(\"Inside Action 1\")").done()
    .actionNode(4).name("Action 2")
        .action("mvel", "System.out.println(\"Inside Action 2\")").done()
    .joinNode(5).type(Join.TYPE_AND).done()
    .endNode(6).name("End").done()
    // Connections
    .connection(1, 2)
    .connection(2, 3)
    .connection(2, 4)
    .connection(3, 5)
    .connection(4, 5)
    .connection(5, 6);
RuleFlowProcess process = factory.validate().getProcess();
This shows a simple example using Split and Join nodes. As you can see, a Split node can have multiple outgoing connections, and a Join node multiple incoming connections. To understand the behavior of the different types of Split and Join nodes, take a look at the documentation for each of these nodes.
Now we show a more complex example with a ForEach node, where we have nested nodes:
RuleFlowProcessFactory factory =
    RuleFlowProcessFactory.createProcess("org.drools.HelloWorldForeach");
factory
    // Header
    .name("HelloWorldForeach")
    .version("1.0")
    .packageName("org.drools")
    // Nodes
    .startNode(1).name("Start").done()
    .forEachNode(2)
        // Properties
        .linkIncomingConnections(3)
        .linkOutgoingConnections(4)
        .collectionExpression("persons")
        .variable("child", new ObjectDataType("org.drools.Person"))
        // Nodes
        .actionNode(3)
            .action("mvel", "System.out.println(\"inside action1\")").done()
        .actionNode(4)
            .action("mvel", "System.out.println(\"inside action2\")").done()
        // Connections
        .connection(3, 4)
        .done()
    .endNode(5).name("End").done()
    // Connections
    .connection(1, 2)
    .connection(2, 5);
RuleFlowProcess process = factory.validate().getProcess();
Here you can see how we can include a ForEach node with nested action nodes.
Note the linkIncomingConnections() and linkOutgoingConnections() methods that are called to link the ForEach node with the internal action nodes. These methods are used to specify the first and last nodes inside the ForEach composite node.
There are two things you need to do to be able to execute processes from within your application: (1) you need to create a Knowledge Base that contains the definition of the process, and (2) you need to start the process by creating a session to communicate with the process engine and start the process.
Creating a Knowledge Base: Once you have a valid process, you can add the process to the Knowledge Base. Note that this is almost identical to adding rules to the Knowledge Base, except for the type of knowledge added:
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add( ResourceFactory.newClassPathResource("MyProcess.rf"), ResourceType.DRF );
After adding all your knowledge to the builder (you can add more than one process, and even rules), you should probably check whether the process (and rules) have been parsed correctly and write out any errors like this:
KnowledgeBuilderErrors errors = kbuilder.getErrors();
if (errors.size() > 0) {
    for (KnowledgeBuilderError error: errors) {
        System.err.println(error);
    }
    throw new IllegalArgumentException("Could not parse knowledge.");
}
Next, you need to create the Knowledge Base that contains all the necessary processes (and rules) like this:
KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
Starting a process: Processes are only executed if you explicitly state that they should be executed. This is because you could potentially define a lot of processes in your Knowledge Base and the engine has no way to know when you would like to start each of these. To activate a particular process, you will need to start it by calling the startProcess method on your session. For example:
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
ksession.startProcess("com.sample.MyProcess");
The parameter of the startProcess method represents the id of the process that needs to be started. This process id needs to be specified as a property of the process, shown in the Properties View when you click the background canvas of your process. If your process also requires the execution of rules during the execution of the process, you also need to call the ksession.fireAllRules() method to make sure the rules are executed as well. That's it!
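For example, a minimal sketch, reusing the session created above and assuming the process contains one or more RuleFlowGroup nodes:

// start the process instance; execution will pause when a RuleFlowGroup node is reached
ksession.startProcess("com.sample.MyProcess");
// fire the rules so that the rules of the active ruleflow group are executed
ksession.fireAllRules();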
You may specify additional parameters that are used to pass on input data to the process, using the startProcess(String processId, Map parameters) method, which takes an additional set of parameters as name-value pairs. These parameters are then copied to the newly created process instance as top-level variables of the process.
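As a sketch, assuming the process defines a top-level variable named "person" and that a domain class Person exists in your application (both are hypothetical), the parameters could be passed like this:

Map<String, Object> parameters = new HashMap<String, Object>();
// "person" is assumed to be a variable defined on the process scope
parameters.put("person", new Person("John Doe"));
ksession.startProcess("com.sample.MyProcess", parameters);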
You can also start a process from within a rule consequence, using
kcontext.getKnowledgeRuntime().startProcess("com.sample.MyProcess");
A ruleflow process is a flow chart where different types of nodes are linked using connections. The process itself exposes the following properties:
Id: The unique id of the process.
Name: The display name of the process.
Version: The version number of the process.
Package: The package (namespace) the process is defined in.
Variables: Variables can be defined to store data during the execution of your process. See section “Data” for details.
Swimlanes: Specify the actor responsible for the execution of human tasks. See chapter “Human Tasks” for details.
Exception Handlers: Specify the behaviour when a fault occurs in the process. See section “Exceptions” for details.
Connection Layout: Specify how the connections are visualized on the canvas using the connection layout property:
'Manual' always draws your connections as lines going straight from their start to end point (with the possibility to use intermediate break points).
'Shortest path' is similar, but it tries to go around any obstacles it might encounter between the start and end point, to avoid lines crossing nodes.
'Manhattan' draws connections by only using horizontal and vertical lines.
A RuleFlow process supports different types of nodes:
Start: The start of the ruleflow. A ruleflow should have exactly one start node, which cannot have incoming connections and should have one outgoing connection. Whenever a RuleFlow process is started, execution will start at this node and automatically continue to the first node linked to this start node, and so on. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Triggers: A Start node can also specify additional triggers that can be used to automatically start the process. Examples are a "constraint" trigger that automatically starts the process if a given rule or constraint is satisfied, or an "event" trigger that automatically starts the process if a specific event is signalled.
End: The end of the ruleflow. A ruleflow should have one or more End nodes. The End node should have one incoming connection and cannot have outgoing connections. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Terminate: An End node can be terminating for the entire process (default) or just for the path. If the process is terminated, all nodes that are still active (on parallel paths) in this ruleflow are cancelled. Non-terminating End nodes are simply ends for some path, while other parallel paths still continue.
RuleFlowGroup: Represents a set of rules that need to be evaluated. The rules are evaluated when the node is reached. A RuleFlowGroup node should have one incoming connection and one outgoing connection. Rules can become part of a specific ruleflow group using the ruleflow-group attribute in the header. When a RuleFlowGroup node is reached in the ruleflow, the engine will start executing rules that are part of the corresponding ruleflow-group (if any). Execution will automatically continue to the next node if there are no more active rules in this ruleflow group. This means that, during the execution of a ruleflow group, it is possible that new activations belonging to the currently active ruleflow group are added to the Agenda due to changes made to the facts by the other rules. Note that the ruleflow will immediately continue with the next node if it encounters a ruleflow group where there are no active rules at that time. If the ruleflow group was already active, the ruleflow group will remain active and execution will only continue after all active rules of the ruleflow group have been completed. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
RuleFlowGroup: The name of the ruleflow group that represents the set of rules of this RuleFlowGroup node.
Timers: Timers that are linked to this node. See section “Timers” for details.
Split: Allows you to create branches in your ruleflow. A Split node should have one incoming connection and two or more outgoing connections. There are three types of Split nodes currently supported:
AND means that the control flow will continue in all outgoing connections simultaneously.
XOR means that exactly one of the outgoing connections will be chosen. The decision is made by evaluating the constraints that are linked to each of the outgoing connections. Constraints are specified using the same syntax as the left-hand side of a rule. The constraint with the lowest priority number that evaluates to true is selected. Note that you should always make sure that at least one of the outgoing connections will evaluate to true at runtime (the ruleflow will throw an exception at runtime if it cannot find at least one outgoing connection). For example, you could use a connection which is always true (default) with a high priority number to specify what should happen if none of the other connections can be taken.
OR means that all outgoing connections whose condition evaluates to true are selected. Conditions are similar to the XOR split, except that no priorities are taken into account. Note that you should make sure that at least one of the outgoing connections will evaluate to true at runtime because the ruleflow will throw an exception at runtime if it cannot determine an outgoing connection.
It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Type: The type of the split node, i.e., AND, XOR or OR (see above).
Constraints: The constraints linked to each of the outgoing connections (in case of an (X)OR split).
Join: Allows you to synchronize multiple branches. A Join node should have two or more incoming connections and one outgoing connection. There are four types of joins currently supported:
AND means that it will wait until all incoming branches are completed before continuing.
XOR means that it continues as soon as one of its incoming branches has been completed. If it is triggered from more than one incoming connection, it will trigger the next node for each of those triggers.
Discriminator means that it continues if one of its incoming branches has been completed. Completions of other incoming branches are registered until all connections have completed. At that point, the node will be reset, so that it can trigger again when one of its incoming branches has been completed once more.
n-of-m means that it continues if n of its m incoming branches have been completed. The variable n could either be hardcoded to a fixed value, or refer to a process variable that will contain the number of incoming branches to wait for.
It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Type: The type of the Join node, i.e. AND, XOR or Discriminator (see above).
n: The number of incoming connections to wait for (in case of a n-of-m join).
EventWait (or Milestone): Represents a wait state. An EventWait should have one incoming connection and one outgoing connection. It specifies a constraint which defines how long the process should wait in this state before continuing. For example, a constraint in an order entry application might specify that the process should wait until no more errors are found in the given order. Constraints are specified using the same syntax as the left-hand side of a rule. When a Wait node is reached in the ruleflow, the engine will check the associated constraint. If the constraint evaluates to true directly, the flow will continue immediately. Otherwise, the flow will continue if the constraint is satisfied later on, for example when a fact is inserted, updated or removed from the working memory. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Constraint: Defines when the process can leave this state and continue.
SubFlow: represents the invocation of another process from within this process. A sub-process node should have one incoming connection and one outgoing connection. When a SubFlow node is reached in the ruleflow, the engine will start the process with the given id. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
ProcessId: The id of the process that should be executed.
Wait for completion: If this property is true, the SubFlow node will only continue if that SubFlow process has terminated its execution (completed or aborted); otherwise it will continue immediately after starting the subprocess.
Independent: If this property is true, the subprocess is started as an independent process, which means that the SubFlow process will not be terminated if this process reaches an end node; otherwise the active sub-process will be cancelled on termination (or abortion) of the process.
On-entry and on-exit actions: Actions that are executed upon entry or exit of this node, respectively.
Parameter in/out mapping: A SubFlow node can also define in- and out-mappings for variables. The value of variables in this process with variable names given in the "in" mapping will be used as parameters (with the associated parameter name) when starting the process. The value of the variables in the subprocess with the given variable name in the "out" mappings will be copied to the variables of this process when the subprocess has been completed. Note that you can use "out" mappings only when "Wait for completion" is set to true.
Timers: Timers that are linked to this node. See section “Timers” for details.
Action: Represents an action that should be executed in this ruleflow. An Action node should have one incoming connection and one outgoing connection. The associated action specifies what should be executed, the dialect used for coding the action (i.e., Java or MVEL), and the actual action code. This code can access any globals, the predefined variable drools referring to a KnowledgeHelper object (which can, for example, be used to retrieve the Working Memory by calling drools.getWorkingMemory()), and the variable context that references the ProcessContext object (which can, for example, be used to access the current ProcessInstance or NodeInstance, and to get and set variables). When an Action node is reached in the ruleflow, it will execute the action and then continue with the next node. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Action: The action associated with this action node.
Timer: Represents a timer that can trigger one or multiple times after a given period of time. A Timer node should have one incoming connection and one outgoing connection. The timer delay specifies how long (in milliseconds) the timer should wait before triggering the first time. The timer period specifies the time between two subsequent triggers. A period of 0 means that the timer should only be triggered once. When a Timer node is reached in the ruleflow, it will start the associated timer. The timer is cancelled if the timer node is cancelled (e.g., by completing or aborting the process). Consult the section “Timers” for more information. The Timer node contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Timer delay: The delay (in milliseconds) that the node should wait before triggering the first time.
Timer period: The period (in milliseconds) between two subsequent triggers. If the period is 0, the timer should only be triggered once.
Fault: A Fault node can be used to signal an exceptional condition in the process. It should have one incoming connection and no outgoing connections. When a Fault node is reached in the ruleflow, it will throw a fault with the given name. The process will search for an appropriate exception handler that is capable of handling this kind of fault. If no fault handler is found, the process instance will be aborted. A Fault node contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
FaultName: The name of the fault. This name is used to search for appropriate exception handlers that are capable of handling this kind of fault.
FaultVariable: The name of the variable that contains the data associated with this fault. This data is also passed on to the exception handler (if one is found).
Event: An Event node can be used to respond to internal or external events during the execution of the process. An Event node should have no incoming connections and one outgoing connection. It specifies the type of event that is expected. Whenever that type of event is detected, the node connected to this Event node will be triggered. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
EventType: The type of event that is expected.
VariableName: The name of the variable that will contain the data associated with this event (if any) when this event occurs.
Scope: An Event node could be used to listen to internal events only, i.e., events that are signalled to this process instance directly, using processInstance.signalEvent(String type, Object data). When an Event node is defined as external, it will also be listening to external events that are signalled to the process engine directly, using workingMemory.signalEvent(String type, Object event).
Human Task: Processes can also involve tasks that need to be executed by human actors. A Human Task node represents an atomic task to be executed by a human actor. It should have one incoming connection and one outgoing connection. Human Task nodes can be used in combination with Swimlanes to assign multiple human tasks to similar actors. Refer to chapter “Human Tasks” for more details. A Human Task node is actually nothing more than a specific type of work item node (of type "Human Task"). A Human Task node contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
TaskName: The name of the human task.
Priority: An integer indicating the priority of the human task.
Comment: A comment associated with the human task.
ActorId: The actor id that is responsible for executing the human task. A list of actor id's can be specified using a comma (',') as separator.
Skippable: Specifies whether the human task can be skipped, i.e., whether the actor may decide not to execute the task.
Content: The data associated with this task.
Swimlane: The swimlane this human task node is part of. Swimlanes make it easy to assign multiple human tasks to the same actor. See the human tasks chapter for more detail on how to use swimlanes.
Wait for completion: If this property is true, the human task node will only continue if the human task has been terminated (i.e., by completing or reaching any other terminal state); otherwise it will continue immediately after creating the human task.
On-entry and on-exit actions: Actions that are executed upon entry and exit of this node, respectively.
Parameter mapping: Allows copying the value of process variables to parameters of the human task. Upon creation of the human tasks, the values will be copied.
Result mapping: Allows copying the value of result parameters of the human task to a process variable. Upon completion of the human task, the values will be copied. Note that you can use result mappings only when "Wait for completion" is set to true. A human task has a result variable "Result" that contains the data returned by the human actor. The variable "ActorId" contains the id of the actor that actually executed the task.
Timers: Timers that are linked to this node. Consult the section “Timers” for details.
Composite: A Composite node is a node that can contain other nodes so that it acts as a node container. This allows not only the embedding of a part of the flow within such a Composite node, but also the definition of additional variables and exception handlers that are accessible for all nodes inside this container. A Composite node should have one incoming connection and one outgoing connection. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
StartNodeId: The id of the node (within this node container) that should be triggered when this node is triggered.
EndNodeId: The id of the node (within this node container) that represents the end of the flow contained in this node. When this node is completed, the composite node will also be completed and trigger its outgoing connection. All other executing nodes within this composite node will be cancelled.
Variables: Additional variables can be defined to store data during the execution of this node. See section “Data” for details.
Exception Handlers: Specify the behavior when a fault occurs in this node container. See section “Exceptions” for details.
ForEach: A ForEach node is a special kind of composite node that allows you to execute the contained flow multiple times, once for each element in a collection. A ForEach node should have one incoming connection and one outgoing connection. A ForEach node awaits the completion of the embedded flow for each of the collection's elements before continuing. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
StartNodeId: The id of the node (within this node container) that should be triggered for each of the elements in a collection.
EndNodeId: The id of the node (within this node container) that represents the end of the flow contained in this node. When this node is completed, the execution of the ForEach node will also be completed for the current collection element. The outgoing connection is triggered if the collection is exhausted. All other executing nodes within this composite node will be cancelled.
CollectionExpression: The name of a variable that represents the collection of elements that should be iterated over. The collection variable should be of type java.util.Collection.
VariableName: The name of the variable to contain the current element from the collection. This gives nodes within the composite node access to the selected element.
WorkItem: Represents an (abstract) unit of work that should be executed in this process. All work that is executed outside the process engine should be represented (in a declarative way) using a WorkItem node. Different types of work items are predefined, e.g., sending an email, logging a message, etc. Users can define domain-specific work items, using a unique name and by defining the parameters (input) and results (output) that are associated with this type of work. Refer to the chapter “Domain-specific processes” for a detailed explanation and illustrative examples of how to define and use work items in your processes. When a WorkItem node is reached in the process, the associated work item is executed. A WorkItem node should have one incoming connection and one outgoing connection.
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Wait for completion: If the property "Wait for completion" is true, the WorkItem node will only continue if the created work item has terminated (completed or aborted) its execution; otherwise it will continue immediately after starting the work item.
Parameter mapping: Allows copying the value of process variables to parameters of the work item. Upon creation of the work item, the values will be copied.
Result mapping: Allows copying the value of result parameters of the work item to a process variable. Each type of work can define result parameters that will (potentially) be returned after the work item has been completed. A result mapping can be used to copy the value of the given result parameter to the given variable in this process. For example, the "FileFinder" work item returns a list of files that match the given search criteria within the result parameter Files. This list of files can then be bound to a process variable for use within the process. Upon completion of the work item, the values will be copied. Note that you can use result mappings only when "Wait for completion" is set to true.
On-entry and on-exit actions: Actions that are executed upon entry or exit of this node, respectively.
Timers: Timers that are linked to this node. See the section “Timers” for details.
Additional parameters: Each type of work item can define additional parameters that are relevant for that type of work. For example, the "Email" work item defines additional parameters such as From, To, Subject and Body. The user can either provide values for these parameters directly, or define a parameter mapping that will copy the value of the given variable in this process to the given parameter; if both are specified, the mapping will have precedence. Parameters of type String can use #{expression} to embed a value in the string. The value will be retrieved when creating the work item, and the substitution expression will be replaced by the result of calling toString() on the variable. The expression could simply be the name of a variable (in which case it resolves to the value of the variable), but more advanced MVEL expressions are possible as well, e.g., #{person.name.firstname}.
While the flow graph focuses on specifying the control flow of the process, it is usually also necessary to look at the process from a data perspective. Throughout the execution of a process, data can be retrieved, stored, passed on and used.
For storing runtime data, during the execution of the process, you use variables. A variable is defined by a name and a data type. This could be a basic data type, such as boolean, int, or String, or any kind of Object subclass. Variables can be defined inside a variable scope. The top-level scope is the variable scope of the process itself. Subscopes can be defined using a Composite node. Variables that are defined in a subscope are only accessible for nodes within that scope.
Whenever a variable is accessed, the process will search for the appropriate variable scope that defines the variable. Nesting of variable scopes is allowed. A node will always search for a variable in its parent container. If the variable cannot be found, it will look in that one's parent container, and so on, until the process instance itself is reached. If the variable cannot be found, a read access yields null, and a write access produces an error message, with the process continuing its execution.
Variables can be used in various ways:
Parameters can be passed in when starting a process, using the startProcess method. These parameters will be set as variables on the process scope.
// call method on the process variable "person"
person.setAge(10);

Changing the value of a variable can be done through the Knowledge Context:
kcontext.setVariable(variableName, value);
The value of process variables can be passed to work items using a parameter mapping, or embedded in String parameters using #{expression}. The results of a WorkItem can also be copied to a variable using a result mapping.
Finally, processes and rules all have access to globals, i.e., globally defined variables that are considered immutable with regard to rule evaluation, and data in the Knowledge Session. The Knowledge Session can be accessed in actions using the Knowledge Context:
kcontext.getKnowledgeRuntime().insert( new Person(...) );
Constraints can be used in various locations in your processes, for example in a Split node using OR or XOR decisions, or as a constraint for an EventWait. Drools Flow supports two types of constraints:
The following is an example of a valid Java code constraint, person being a variable in the process:

return person.getAge() > 20;

A similar example of a valid MVEL code constraint is:
return person.age > 20;
Person( age > 20 )

This tests for a person older than 20 being in the Working Memory.
Rule constraints do not have direct access to variables defined inside the process. It is however possible to refer to the current process instance inside a rule constraint, by adding the process instance to the Working Memory and matching for the process instance in your rule constraint. We have added special logic to make sure that a variable processInstance of type WorkflowProcessInstance will only match the current process instance and not other process instances in the Working Memory. Note that it is your responsibility to insert the process instance into the session and, possibly, to update it, for example using Java code or an on-entry, on-exit, or explicit action in your process. The following example of a rule constraint will search for a person with the same name as the value stored in the variable "name" of the process:
processInstance : WorkflowProcessInstance()
Person( name == ( processInstance.getVariable("name") ) )
# add more constraints here ...
Actions can be used in different ways:
Actions have access to globals, the variables that are defined for the process, and the predefined variable context. This variable is of type org.drools.runtime.process.ProcessContext and can be used for several tasks:
NodeInstance node = context.getNodeInstance();
String name = node.getNodeName();
WorkflowProcessInstance proc = context.getProcessInstance();
proc.signalEvent( type, eventObject );
Drools currently supports two dialects, Java and MVEL. Java actions should be valid Java code. MVEL actions can use the business scripting language MVEL to express the action. MVEL accepts any valid Java code but additionally provides support for nested accesses of parameters (e.g., person.name instead of person.getName()), and many other scripting improvements. Thus, MVEL expressions are more convenient for the business user. For example, an action that prints out the name of the person in the "requester" variable of the process would look like this:
// Java dialect
System.out.println( person.getName() );

// MVEL dialect
System.out.println( person.name );
During the execution of a process, the process engine makes sure that all the relevant tasks are executed according to the process plan, by requesting the execution of work items and waiting for the results. However, it is also possible that the process should respond to events that were not directly requested by the process engine. Explicitly representing these events in a process allows the process author to specify how the process should react to such events.
Events have a type and possibly data associated with them. Users are free to define their own event types and their associated data.
A process can specify how to respond to events by using Event nodes. An Event node needs to specify the type of event the node is interested in. It can also define the name of a variable, which will receive the data that is associated with the event. This allows subsequent nodes in the process to access the event data and take appropriate action based on this data.
An event can be signalled to a running instance of a process in a number of ways:
context.getProcessInstance().signalEvent(type, eventData);
processInstance.signalEvent(type, eventData);
workingMemory.signalEvent(type, eventData);
Events could also be used to start a process. Whenever a Start node defines an event trigger of a specific type, a new process instance will be started every time that type of event is signalled to the process engine.
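For example, a minimal sketch of starting such a process from the application, using the session's signalEvent method (the same call shown above for the working memory); the event type "orderReceived" and the order object are hypothetical:

// any process whose Start node defines an event trigger for "orderReceived"
// will be started with the associated event data
ksession.signalEvent("orderReceived", order);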
Whenever an exceptional condition occurs during the execution of a process, a fault could be raised to signal the occurrence of this exception. The process will then search for an appropriate exception handler that is capable of handling such a fault.
Similar to events, faults also have a type and possibly data associated with the fault. Users are free to define their own types of faults, together with their data.
Faults are generated by a Fault node, which creates a fault of the given type, indicated by the fault name. If the Fault node specifies a fault variable, the value of the given variable will be associated with the fault.
Whenever a fault is created, the process will search for an appropriate exception handler that is capable of handling the given type of fault. Processes and Composite nodes both can define exception handlers for handling faults. Nesting of exception handlers is allowed; a node will always search for an appropriate exception handler in its parent container. If none is found, it will look in that one's parent container, and so on, until the process instance itself is reached. If no exception handler can be found, the process instance will be aborted, resulting in the cancellation of all nodes inside the process.
Exception handlers can also specify a fault variable. The data associated with the fault (if any) will be copied to this variable whenever an exception handler is selected to handle a fault. This allows subsequent Action nodes in the process to access the fault data and take appropriate action based on this data.
Exception handlers need to define an action that specifies how to respond to the given fault. In most cases, the behavior that is needed to react to the given fault cannot be expressed in one action. It is therefore recommended to have the exception handler signal an event of a specific type (in this case "Fault") using
context.getProcessInstance().signalEvent("FaultType", context.getVariable("FaultVariable"));
Timers wait for a predefined amount of time before triggering, once or repeatedly. They could be used to specify time supervision, to trigger certain logic after a certain period, or to repeat some action at regular intervals.
A Timer node is set up with a delay and a period. The delay specifies the amount of time (in milliseconds) to wait after node activation before triggering the timer the first time. The period defines the time between subsequent trigger activations. A period of 0 results in a one-shot timer.
The timer service is responsible for making sure that timers get triggered at the appropriate times. Timers can also be cancelled, meaning that the timer will no longer be triggered.
Timers can be used in two ways inside a process:
By default, the Drools engine is a passive component, meaning that it will start processing only if you tell it to. Typically, you first insert the necessary data and then tell the engine to start processing. In passive mode, a timer that has been triggered will be put on the action queue. This means that it will either be executed automatically if the engine is still running, or it will be delayed until the engine is told to start executing by the user (by calling fireAllRules()).
When using timers, it usually makes sense to let the Drools engine operate as an active component, so that it will execute actions whenever they become available, without the need to wait until the user tells it to resume execution. Thus, a timer would become effective as soon as it triggers. To make the engine fire all actions continuously, you must call the method fireUntilHalt(), whereupon the engine operates until halt() is called. Note that you should call fireUntilHalt() in a separate thread as it will only return once the engine has been halted, either by the user or by some logic calling halt() on the session. The following code snippet shows how to do this.
new Thread(new Runnable() {
    public void run() {
        ksession.fireUntilHalt();
    }
}).start();

// starting a new process instance
ksession.startProcess("...");
// any timer that triggers will now be executed automatically
Drools already provides some functionality to define the order in which rules should be executed, like salience, activation groups, etc. When dealing with potentially many large rule sets, managing the order in which rules are evaluated might become complex. Ruleflow allows you to specify the order in which rule sets should be evaluated by using a flow chart. This allows you to define which rule sets should be evaluated in sequence or in parallel, and to specify conditions under which rule sets should be evaluated. This chapter contains a few ruleflow examples.
A ruleflow is a graphical description of a sequence of steps that the rule engine needs to take, where the order is important. The ruleflow can also deal with conditional branching, parallelism, and synchronization.
To use a ruleflow to describe the order in which rules should be evaluated, you should first group rules into ruleflow groups using the ruleflow-group rule attribute ("options" in the GUI). Then you should create a ruleflow graph (which is a flow chart) that graphically describes the order in which the rules should be considered, by specifying the order in which the ruleflow-groups should be evaluated.
rule 'YourRule'
    ruleflow-group 'group1'
    when
        ...
    then
        ...
end
This rule belongs to the ruleflow-group called "group1".
The above rule flow specifies that the rules in the group "Check Order" must be executed before the rules in the group "Process Order". This means that first only rules which are marked as having a ruleflow-group of "Check Order" will be considered, and then, only if there aren't any more of those, the rules of "Process Order". That's about it. You could achieve similar results with salience (which is harder to maintain and makes the time-relationship implicit in the rules) or with agenda groups. However, using a ruleflow makes the order of processing explicit, in a layer on top of the rule structure, so that managing more complex situations becomes much easier.
In practice, if you are using ruleflow, you will most likely be doing more than setting a simple sequence of groups to progress through. You'll use Split and Join nodes for modeling branches of processing, and define the flow of control by connections, from the Start node to ruleflow groups, to Splits and then on to more groups, Joins, and so on. All this is done in a graphical editor.
The above flow is a more complex example, representing the rule flow for processing an insurance claim. Initially the claim data validation rules are processed, checking for data integrity, consistency and completeness. Next, in a Split node, there is a decision based on the value of the claim. Processing will either move on to an "auto-settlement" group, or to another Split node, which checks whether there was a fatality in the incident. If so, it determines whether the "regular" or fatality-specific rules should take effect, with more processing to follow. Based on a few conditions, many different control flows are possible. Note that all the rules can be in one package, with the control flow definition being separated from the actual rules.
To edit a Split node, you click on the node, which will show you a properties panel as shown above. You then have to choose the type: AND, OR, or XOR. If you choose OR, then any of the "outputs" of the split can happen, so that processing can proceed, in parallel, along two or more paths. If you choose XOR, then only one path is chosen.
If you choose OR or XOR, the "Constraints" row will have a square button on the right hand side. Clicking on this button opens the Constraint editor, where you set the conditions deciding which outgoing path to follow.
Choose the output path you want to set the constraints for (e.g. Autosettlement), and then you should see the following constraint editor:
This is a text editor where the constraints - which are like the condition part of a rule - are entered. These constraints operate on facts in the working memory. In the above example, there is a check for claims with a value of less than 250. Should this condition be true, then the associated path will be followed.
The XML format that was used in Drools4 to store RuleFlow processes was generated automatically, using XStream. As a result, it was hard to read by human readers and difficult to maintain and extend. The new Drools Flow XML format has been created to simplify this. This however means that, by default, old RuleFlow processes cannot simply be executed on the Drools5 engine.
We do however provide a Rule Flow Migrator that allows you to transform
your old .rf file to the new format. It uses an XSLT transformation to
generate the new XML based on the old content. You can use this class to
manually transform your old processes to the new format once when upgrading
from Drools4.x to Drools5.x. You can however also let the KnowledgeBuilder
automatically upgrade your processes to the new format when they are
loaded into the Knowledge Base. While this requires a conversion every time
the process is loaded into the Knowledge Base, it does support a more
seamless upgrade. To enact this automatic upgrade you need to set the
"drools.ruleflow.port" system property to "true", for example by adding
-Ddrools.ruleflow.port=true
when starting your application,
or by calling System.setProperty("drools.ruleflow.port", "true")
.
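For instance, a minimal sketch of enabling the conversion right before an old process is added to the Knowledge Builder (the file name "oldProcess.rf" is just a placeholder for your own Drools 4.x process file):

// enable automatic conversion of Drools 4.x RuleFlow files
System.setProperty("drools.ruleflow.port", "true");

KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
// "oldProcess.rf" stands for your own Drools 4.x process file
kbuilder.add(ResourceFactory.newClassPathResource("oldProcess.rf"), ResourceType.DRF);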
The Drools Eclipse plugin also automatically detects if an old RuleFlow file is opened. At that point, it will automatically perform the conversion and show the result in the graphical editor. You then need to save this result, either in a new file or overwriting the old one, to retain the old process in the new format. Note that the plugin does not support editing and saving processes in the old Drools4.x format.
Our knowledge-based API allows you to first create a Knowledge Base that contains all the necessary knowledge. This includes all the relevant process definitions and other knowledge types like rules. The following code snippet shows how to create a Knowledge Base consisting of only one process definition, using a Knowledge Builder to add a resource, checking for errors and, finally, creating the Knowledge Base.
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add(ResourceFactory.newClassPathResource("ruleflow.rf"), ResourceType.DRF);
KnowledgeBuilderErrors errors = kbuilder.getErrors();
if (errors.size() > 0) {
    for (KnowledgeBuilderError error: errors) {
        System.err.println(error);
    }
    throw new IllegalArgumentException("Could not parse knowledge.");
}
KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
Note that the knowledge-based API allows users to add different types of resources, such as rules and processes, in almost identical ways into the same Knowledge Base. This enables a user who knows how to use Drools Flow to start using Drools Fusion almost instantaneously, and even to integrate these different types of Knowledge.
Next, you should create a session to interact with the engine. Again, the API is knowledge-based, supporting different types of Knowledge, with a specific extension for each Knowledge Type. The following code snippet shows how easy it is to create a session based on the earlier created Knowledge Base, and to start a process.
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
ProcessInstance processInstance = ksession.startProcess("com.sample.ruleflow");
The ProcessRuntime
interface defines all the session methods
for interacting with processes, as shown below. Consult the Javadocs to get
a detailed explanation for each of the methods.
ProcessInstance startProcess(String processId);
ProcessInstance startProcess(String processId, Map<String, Object> parameters);
void signalEvent(String type, Object event);
Collection<ProcessInstance> getProcessInstances();
ProcessInstance getProcessInstance(long id);
WorkItemManager getWorkItemManager();
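As a rough illustration of using these methods (the "order" variable and the "orderCancelled" event type below are placeholders for this sketch, not part of the sample process):

// start a process and pass it some input data via parameters
Map<String, Object> parameters = new HashMap<String, Object>();
parameters.put("order", order); // "order" is a hypothetical process variable
ProcessInstance processInstance =
    ksession.startProcess("com.sample.ruleflow", parameters);

// signal an external event; process instances waiting for this event type react to it
ksession.signalEvent("orderCancelled", order.getId());

// retrieve a specific process instance by its id
ProcessInstance instance = ksession.getProcessInstance(processInstance.getId());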
Both the Stateful and Stateless Knowledge Session provide methods for
registering and removing listeners. ProcessEventListener
objects
can be used to listen to process-related events, like starting or completing
a process, and entering and leaving a node. Below, the different methods of a
ProcessEventListener
are shown. An event object provides access
to related information, like the process instance and node instance linked to
the event.
public interface ProcessEventListener {

    void beforeProcessStarted( ProcessStartedEvent event );
    void afterProcessStarted( ProcessStartedEvent event );
    void beforeProcessCompleted( ProcessCompletedEvent event );
    void afterProcessCompleted( ProcessCompletedEvent event );
    void beforeNodeTriggered( ProcessNodeTriggeredEvent event );
    void afterNodeTriggered( ProcessNodeTriggeredEvent event );
    void beforeNodeLeft( ProcessNodeLeftEvent event );
    void afterNodeLeft( ProcessNodeLeftEvent event );

}
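For example, a listener that simply prints process start and completion events could be registered on the session as follows (a minimal sketch; only two of the callbacks do anything here):

ksession.addEventListener(new ProcessEventListener() {
    public void beforeProcessStarted(ProcessStartedEvent event) { }
    public void afterProcessStarted(ProcessStartedEvent event) {
        // print the id of the process definition that was started
        System.out.println("Started process " + event.getProcessInstance().getProcessId());
    }
    public void beforeProcessCompleted(ProcessCompletedEvent event) { }
    public void afterProcessCompleted(ProcessCompletedEvent event) {
        // print the id of the process instance that completed
        System.out.println("Completed process instance " + event.getProcessInstance().getId());
    }
    public void beforeNodeTriggered(ProcessNodeTriggeredEvent event) { }
    public void afterNodeTriggered(ProcessNodeTriggeredEvent event) { }
    public void beforeNodeLeft(ProcessNodeLeftEvent event) { }
    public void afterNodeLeft(ProcessNodeLeftEvent event) { }
});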
An audit log can be created based on the information provided by these process listeners. We provide various default logger implementations:
The KnowledgeRuntimeLoggerFactory
lets you add a logger to
your session, as shown below. When creating a console logger, the Knowledge Session
for which the logger needs to be created must be passed as an argument. The file
logger also requires the name of the log file to be created, and the threaded file
logger requires the interval (in milliseconds) after which the events should be saved.
KnowledgeRuntimeLogger logger =
    KnowledgeRuntimeLoggerFactory.newFileLogger( ksession, "test" );
// add invocations to the process engine here,
// e.g. ksession.startProcess(processId);
...
logger.close();
The log file can be opened in Eclipse, using the Audit View in the Drools Eclipse plugin, where the events are visualized as a tree. Events that occur between the before and after event are shown as children of that event. The following screenshot shows a simple example, where a process is started, resulting in the activation of the Start node, an Action node and an End node, after which the process was completed.
Drools Flow allows the persistent storage of certain information, i.e., the process runtime state, the process definitions and the history information.
Whenever a process is started, a process instance is created, which represents the execution of the process in that specific context. For example, when executing a process that specifies how to process a sales order, one process instance is created for each sales request. The process instance represents the current execution state in that specific context, and contains all the information related to that process instance. Note that it only contains the minimal runtime state that is needed to continue the execution of that process instance at some later time, but it does not include information about the history of that process instance if that information is no longer needed in the process instance.
The runtime state of an executing process can be made persistent, for example, in a database. This makes it possible to restore the state of execution of all running processes in case of unexpected failure, or to temporarily remove running instances from memory and restore them at some later time. Drools Flow allows you to plug in different persistence strategies. By default, if you do not configure the process engine otherwise, process instances are not made persistent.
Drools Flow provides a binary persistence mechanism that allows you to save the state of a process instance as a binary dataset. This way, the state of all running process instances can always be stored in a persistent location. Note that these binary datasets usually are relatively small, as they only contain the minimal execution state of the process instance. For a simple process instance, this usually contains one or a few node instances, i.e., any node that is currently executing, and, possibly, some variable values.
The state of a process instance is stored at so-called "safe points" during the execution of the process engine. Whenever a process instance is executing, after its start or continuation from a wait state, the engine proceeds until no more actions can be performed. At that point, the engine has reached the next safe state, and the state of the process instance and all other process instances that might have been affected is stored persistently.
By default, the engine does not save runtime data persistently. It is, however, pretty straightforward to configure the engine to do this, by adding a configuration file and the necessary dependencies. Persistence itself is based on the Java Persistence API (JPA) and can thus work with several persistence mechanisms. We are using Hibernate by default, but feel free to employ alternatives. An H2 database is used underneath to store the data, but you might choose your own alternative for this, too.
First of all, you need to add the necessary dependencies to your classpath. If you're using the Eclipse IDE, you can do that by adding the jar files to your Drools runtime directory (cf. chapter “Drools Eclipse IDE Features”), or by manually adding these dependencies to your project. You need the jar file drools-persistence-jpa.jar, as it contains the code for saving the runtime state whenever necessary.
Next, you also need various other dependencies, depending on the
persistence solution and database you are using. For the default
combination with Hibernate as the JPA persistence provider, the H2
database and Bitronix for JTA-based transaction management, the
following list of dependencies is needed:
Next, you need to configure the Drools engine to save the state of the
engine whenever necessary. The easiest way to do this is to use
JPAKnowledgeService
to create your knowledge session, based on a
Knowledge Base, a Knowledge Session Configuration (if necessary) and an
environment. The environment needs to contain a reference to your
Entity Manager Factory.
// create the entity manager factory and register it in the environment
EntityManagerFactory emf =
    Persistence.createEntityManagerFactory( "org.drools.persistence.jpa" );
Environment env = KnowledgeBaseFactory.newEnvironment();
env.set( EnvironmentName.ENTITY_MANAGER_FACTORY, emf );

// create a new knowledge session that uses JPA to store the runtime state
StatefulKnowledgeSession ksession =
    JPAKnowledgeService.newStatefulKnowledgeSession( kbase, null, env );
int sessionId = ksession.getId();

// invoke methods on your session here
ksession.startProcess( "MyProcess" );
ksession.dispose();
You can also use the JPAKnowledgeService
to recreate
a session based on a specific session id:
// recreate the session from database using the sessionId
ksession = JPAKnowledgeService.loadStatefulKnowledgeSession( sessionId, kbase, null, env );
Note that we only save the minimal state that is needed to continue execution of the process instance at some later point. This means, for example, that it does not contain information about already executed nodes if that information is no longer relevant, or that process instances that have been completed or aborted are removed from the database. If you want to search for history-related information, you should use the history log, as explained later.
By default, drools-persistence-jpa.jar
contains a configuration
file that configures JPA to use Hibernate and the H2 database, called
persistence.xml
in the META-INF directory, as shown below.
You will need to override these defaults if you want to change them, by adding
your own persistence.xml
in your classpath, preceding the
default one in drools-persistence-jpa.jar
. Refer to
the JPA and Hibernate documentation for more information on how to do this.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <persistence version="1.0" xsi:schemaLocation= "http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_1_0.xsd" xmlns:orm="http://java.sun.com/xml/ns/persistence/orm" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/persistence"> <persistence-unit name="org.drools.persistence.jpa"> <provider>org.hibernate.ejb.HibernatePersistence</provider> <jta-data-source>jdbc/processInstanceDS</jta-data-source> <class>org.drools.persistence.session.SessionInfo</class> <class>org.drools.persistence.processinstance.ProcessInstanceInfo</class> <class>org.drools.persistence.processinstance.ProcessInstanceEventInfo</class> <class>org.drools.persistence.processinstance.WorkItemInfo</class> <properties> <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/> <property name="hibernate.max_fetch_depth" value="3"/> <property name="hibernate.hbm2ddl.auto" value="update"/> <property name="hibernate.show_sql" value="true"/> <property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.BTMTransactionManagerLookup"/> </properties> </persistence-unit> </persistence>
This configuration file refers to a data source called "jdbc/processInstanceDS". The following Java fragment could be used to set up this data source, where we are using the file-based H2 database.
PoolingDataSource ds = new PoolingDataSource();
ds.setUniqueName("jdbc/processInstanceDS");
ds.setClassName("org.h2.jdbcx.JdbcDataSource");
ds.setMaxPoolSize(3);
ds.setAllowLocalTransactions(true);
ds.getDriverProperties().put("user", "sa");
ds.getDriverProperties().put("password", "sasa");
ds.getDriverProperties().put("URL", "jdbc:h2:file:/NotBackedUp/data/process-instance-db");
ds.init();
Whenever you do not provide transaction boundaries inside your application, the engine will automatically execute each method invocation on the engine in a separate transaction. If this behavior is acceptable, you don't need to do anything else. You can, however, also specify the transaction boundaries yourself. This allows you, for example, to combine multiple commands into one transaction.
You need to register a transaction manager at the environment before using user-defined transactions. The following sample code uses the Bitronix transaction manager. Next, we use the Java Transaction API (JTA) to specify transaction boundaries, as shown below:
// create the entity manager factory and register it in the environment
EntityManagerFactory emf =
    Persistence.createEntityManagerFactory( "org.drools.persistence.jpa" );
Environment env = KnowledgeBaseFactory.newEnvironment();
env.set( EnvironmentName.ENTITY_MANAGER_FACTORY, emf );
env.set( EnvironmentName.TRANSACTION_MANAGER,
         TransactionManagerServices.getTransactionManager() );

// create a new knowledge session that uses JPA to store the runtime state
StatefulKnowledgeSession ksession =
    JPAKnowledgeService.newStatefulKnowledgeSession( kbase, null, env );

// start the transaction
UserTransaction ut = (UserTransaction)
    new InitialContext().lookup( "java:comp/UserTransaction" );
ut.begin();

// perform multiple commands inside one transaction
ksession.insert( new Person( "John Doe" ) );
ksession.startProcess( "MyProcess" );
ksession.fireAllRules();

// commit the transaction
ut.commit();
Process definition files are usually written in an XML format. These files can easily be stored on a file system during development. However, whenever you want to make your knowledge accessible to one or more engines in production, we recommend (logically) centralizing your knowledge in one or more knowledge repositories.
Guvnor is a sub-project that provides exactly that. It consists of a repository for storing different kinds of Knowledge, not only process definitions but also rules, object models, etc. It allows easy retrieval of this knowledge using WebDAV or by employing a Knowledge Agent that automatically downloads the information from Guvnor when creating a Knowledge Base, and provides a web application that allows business users to view and possibly update the information in the Knowledge Repository. Check out the Drools Guvnor documentation for more information on how to do this.
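As a rough sketch of what loading a Knowledge Base through a Knowledge Agent might look like (the agent name and the "changeset.xml" file, which would point to your Guvnor packages, are placeholders; consult the Guvnor documentation for the exact setup):

// create an agent that builds a knowledge base from the resources listed in a
// change-set file, which typically points to one or more Guvnor package URLs
KnowledgeAgent kagent = KnowledgeAgentFactory.newKnowledgeAgent("ProcessAgent");
kagent.applyChangeSet(ResourceFactory.newClassPathResource("changeset.xml"));
KnowledgeBase kbase = kagent.getKnowledgeBase();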
In many cases it is useful (if not necessary) to store information about the execution of process instances, so that this information can be used afterwards, for example, to verify what actions have been executed for a particular process instance, or to monitor and analyze the efficiency of a particular process. Storing history information in the runtime database is usually not a good idea, as this would result in ever-growing runtime data, and monitoring and analysis queries might influence the performance of your runtime engine. That is why history information about the execution of process instances is stored separately.
This history log of execution information is created based on the events generated by the process engine during execution. The Drools runtime engine provides a generic mechanism to listen to different kinds of events. The necessary information can easily be extracted from these events and made persistent, for example in a database. Filters can be used to only store the information you find relevant.
The drools-bam module contains an event listener that stores process-related information in a database using Hibernate. The database contains two tables, one for process instance information and one for node instance information (see the figure below):
To log process history information in a database like this, you need to register the logger on your session (or working memory) like this:
StatefulKnowledgeSession ksession = ...;
WorkingMemoryDbLogger logger = new WorkingMemoryDbLogger(ksession);
// invoke methods on your session here
logger.dispose();
Note that this logger is like any other audit logger, which means that you can add one or more filters by calling the method addFilter to ensure that only relevant information is stored in the database. Only information accepted by all your filters will appear in the database. You should dispose of the logger when it is no longer needed.
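A minimal sketch of adding such a filter is shown below; the ILogEventFilter callback interface and the RuleFlowLogEvent classes are the names we recall from the org.drools.audit.event package, so verify them against the Javadocs of your Drools version:

// only accept process-related events; everything else is not stored in the database
logger.addFilter(new ILogEventFilter() {
    public boolean acceptEvent(LogEvent event) {
        return event instanceof RuleFlowLogEvent
            || event instanceof RuleFlowNodeLogEvent;
    }
});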
To specify the database where the information should be stored, modify the hibernate.cfg.xml file. By default, it uses a memory-resident database (H2). Consult the Hibernate documentation if you do not know how to do this.
All this information can easily be queried and used in a lot of
different use cases, ranging from creating a history log for one
specific process instance to analyzing the performance of all instances
of a specific process. Class ProcessInstanceDbLog
(in
package org.drools.process.audit
) shows some
examples on how to retrieve all process instances, one specific process
instance (by id), all process instances for one specific process, all
node instances of a specific process instance, etc. You can of course
easily create your own Hibernate queries, or access the information in
the database directly.
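A hedged sketch of querying this log (the method and class names are as we recall them from the org.drools.process.audit package; check the actual ProcessInstanceDbLog class for the exact signatures):

// all logged instances of one specific process definition
List<ProcessInstanceLog> instances =
    ProcessInstanceDbLog.findProcessInstances("com.sample.ruleflow");

// all node instances that were executed for one specific process instance
List<NodeInstanceLog> nodes =
    ProcessInstanceDbLog.findNodeInstances(processInstanceId);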
By default, the audit logger uses the H2 memory-resident database
that is recreated on startup. You can change this default by including
your own configuration file hibernate.cfg.xml
. This
allows you, for example, to change the underlying database, etc.
Refer to the Hibernate documentation for more information on how to do
this.
Drools Flow is a workflow and process engine that allows advanced integration of processes and rules. This chapter discusses the integration of rules and processes, ranging from simple to advanced scenarios.
Workflow languages that depend purely on process constructs (like nodes and connections) to describe the business logic of applications tend to be quite complex. While these workflow constructs are very well suited to describe the overall control flow of an application, it can be very difficult to describe complex logic and exceptional situations. Therefore, executable processes tend to become very complex. We believe that, by extending a process engine with support for declarative rules in combination with these regular process constructs, this complexity can be kept under control.
Simplicity: Complex decisions are usually easier to specify using a set of rules. Rules can pinpoint complex business logic more easily, using their advanced constraint language. Multiple rules can be combined, each describing a part of the business logic.
Agility: Rules and processes can have a separate life cycle. This means that we can change the rules describing some crucial decision points without having to change the process itself. Rules can be added, removed or modified to fine-tune the behavior of the process to the constantly evolving requirements and environment.
Different scope: Rules can be reused across processes or outside processes. Therefore, your business logic is not locked inside your processes.
Declarativeness: Focus on describing "what" instead of "how".
Granularity: It is easy to write simple rules that handle specific circumstances. Processes are more suited to describe the overall control flow but tend to become very complex if they also need to describe a lot of exceptional situations.
Data-centric: Rules can easily handle large data sets.
Performance: Rule evaluation is optimized.
Advanced condition and action language: Rule languages support advanced features like custom functions, collections, conditional elements, including quantifiers, etc.
High-level: By using DSLs, business editors, decision tables, and decision trees, your business logic could be described in a way that can be understood (and possibly even modified) by business users.
Drools Flow combines a process and a rules engine in one software product. This offers several advantages, compared to trying to loosely couple an existing process and rules product.
Simplicity: Easier for end user to combine both rules and processes.
Encapsulation: Sometimes close integration between processes and rules is beneficial.
Performance: No unnecessary passing, transformation or synchronization of data.
Learning curve: Easier to learn one product.
Manageability: Easier to manage one product, rules and processes can be similar artefacts in a larger knowledge repository.
Integration of features: We provide an integrated IDE, audit log, web-based management platform, repository, debugging, etc.
Workflow languages describe the order in which activities should be performed using a flow chart. A process engine is responsible for selecting which activities should be executed based on the current state of the executing processes. On the other hand, rules are composed of a set of conditions that describe when a rule is applicable and an action that is executed when the conditions are met. The rules engine is then responsible for evaluating and executing the rules. It decides which rules need to be executed based on the current state of the application.
Workflow processes are very good at describing the overall control flow of (possibly long-running) applications. However, processes that are used to define complex business decisions, to handle a lot of exceptional situations, and need to respond to various external events tend to become very complex indeed. Rules are very good at describing complex decisions and reasoning about large amounts of data or events. It is, however, not trivial to define long-running processes using rules.
In the past, users were forced to choose between defining their business logic using either a process or a rules engine. Problems that required complex reasoning about large amounts of data used a rules engine, while users that wanted to focus on describing the control flow of their processes were forced to use a process engine. However, businesses nowadays might want to combine both processes and rules in order to be able to define all their business logic in the format that best suits their needs.
Basically, both a rules and a process engine will derive the next steps that need to be executed by looking at its Knowledge Base (a set of rules or processes, respectively) and the current known state of the application (the data in the Working Memory or the state of the executing process instances, respectively). If we want to integrate rules and processes, we need an engine that can decide the next steps taking into account the logic that is defined inside both the processes and the rules.
It is very difficult (and probably very inefficient as well) to extend a process engine to also take rules into account. The process engine would need to check for rules that might need to be executed at every step and would have to keep the data that is used by the rules engine up to date. However, it is not that difficult to "teach" a rules engine about processes. If the current state of the processes is also inserted as part of the Working Memory data the rules engine reasons about, and we instruct the rules engine how to derive the next steps of an executing process, the rules engine will then be able to derive the next steps taking rules and processes into account jointly.
From the process perspective, this means that there is an inversion of control. A normal process engine exercises full control, deriving the next steps based on the current state of the process instance. If needed, it can contact external services to retrieve additional information, but it solely decides which steps to take, and is alone responsible for executing these steps.
However, only our extended rules engine (that can reason jointly about rules and processes) is capable of deriving the next steps taking both rules and processes into account. If a part of the process needs to be executed, the rules engine will request the process engine to execute this step. Once this step has been performed, the process engine returns control to the rules engine to again derive the next steps. This means that the control on what to do next has been inverted: the process engine itself no longer decides the next step to take but our enhanced rules engine will be in control, notifying the process engine what to execute next, and when.
The drools-examples project contains a sample process
(org.drools.examples.process.order
) that illustrates
some of the advantages of being able to combine processes and rules. This
process describes an order application where incoming orders are validated,
discounts are calculated and shipping of the goods is requested.
Drools Flow can easily include a set of rules as part of the process.
The rules that need to be evaluated should be grouped in a ruleflow
group, using the ruleflow-group
rule attribute. Activating a
RuleSet node for the group triggers the evaluation of these rules in your
process. This example uses two RuleSet nodes in the process: one for the
validation of the order and one for calculating the discount. For example,
one of the rules for validating an order is shown below. Note the
ruleflow-group
attribute, which ensures that this rule is evaluated
as part of the RuleSet node with the same ruleflow group shown in the
figure.
rule "Invalid item id" ruleflow-group "validate" lock-on-active true when o: Order() i: Order.OrderItem() from o.getOrderItems() not (Item() from itemCatalog.getItem(i.getItemId())) then System.err.println("Invalid item id found!"); o.addError("Invalid item id " + i.getItemId()); end
Rules can be used for expressing and evaluating complex constraints in your process. For example, when deciding which execution path to take at a Split node, rules could be used to define the conditions. Similarly, a Wait state could use a rule to define the wait duration. This example uses rules for deciding the next action after validating the order. If the order contains errors, a sales representative should try to correct the order. Orders with a value over $1000 are more important, so a senior sales representative should attend to the order. All other orders should just proceed normally. A decision node is used to select one of these alternatives, and rules are used to describe the constraints for each of them.
Human tasks can be used in a process to describe work that needs to be executed by a human actor. The selection of the actor could be based on the current state of the process and the history. Assignment rules describe how to determine the actor, based on this information. These assignment rules will then be applied automatically whenever a new human task needs to be executed.
Note that the rules shown below are written in a Domain Specific Language (DSL), tailored to the specific requirements for formulating conditions in the order processing environment.
/********** Generic assignment rules **********/

rule "Assign 'Correct Order' to any sales representative"
    salience 30
    when
        There is a human task
        - with task name "Correct Order"
        - without actor id
    then
        Set actor id "Sales Representative"
end

/********** Assignment rules for the RuleSetExample process **********/

rule "Assign 'Follow-up Order' to a senior sales representative"
    salience 40
    when
        Process "org.drools.examples.process.ruleset.RuleSetExample"
        contains a human task
        - with task name "Follow-up Order"
        - without actor id
    then
        Set actor id "Senior Sales Representative"
end
Rules can be used for describing exceptional situations and how to respond to these situations. Adding all this information in the control flow of the regular process makes the basic process much more complex. Rules can be used to handle each of these situations separately, leaving the core process in its simple form. It also makes it much easier to adapt existing processes to take previously unanticipated events into account.
The process defines the overall control flow. Rules could be used to add additional concerns to this process without making the overall control flow more complex. For example, rules could be defined to log certain information during the execution of the process. The original process is not altered, whereas all logging functionality is cleanly modularized as a set of rules. This greatly improves reusability, allowing users to easily apply the same strategy to different processes, readability (by not altering the control flow of the original process) and maintainability, due to the separation of the logging strategy rules from those of the process itself.
Rules let you dynamically fine-tune the behavior of your processes. Imagine that a problem is encountered, at runtime, with one of the processes. Now, new rules could be added, at runtime, to log additional information or for handling specific process states. Once the problem is solved or the circumstances have changed, these rules can easily be removed again. Based on the current status, different strategies could be selected dynamically. For example, based on the current load of all the services, rules could be used to optimize the process to the current load. This process contains a simple example that allows you to dynamically add or remove logging for the "Check Order" task. When the "Debugging output" checkbox in the main application window is checked, the rule shown below is loaded dynamically, to write log output to the console whenever the "Check Order" task is requested. Unchecking the box will dynamically remove the rule again.
rule "Log the execution of 'Correct Order'" salience 25 when workItemNodeInstance: WorkItemNodeInstance( workItemId <= 0, node.name == "Correct Order" ) workItem: WorkItemImpl( state == WorkItemImpl.PENDING ) from workItemNodeInstance.getWorkItem() then ProcessInstance proc = workItemNodeInstance.getProcessInstance(); VariableScopeInstance variableScopeInstance = (VariableScopeInstance)proc.getContextInstance( VariableScope.VARIABLE_SCOPE ); System.out.println( "LOGGING: Requesting the correction of " + variableScopeInstance.getVariable("order")); end
Processes and rules are integrated in the Drools Eclipse IDE. Both processes and rules are simply considered as different types of business logic, to be managed almost identically. For example, loading a process or a set of rules into the engine is very similar. Also, different rule implementations, such as DRL or DSL, are handled in a uniform way.
private static KnowledgeBase createKnowledgeBase() throws Exception {
    KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
    kbuilder.add( ResourceFactory.newClassPathResource(
        "RuleSetExample.rf", OrderExample.class), ResourceType.DRF );
    kbuilder.add( ResourceFactory.newClassPathResource(
        "workflow_rules.drl", OrderExample.class), ResourceType.DRL );
    kbuilder.add( ResourceFactory.newClassPathResource(
        "assignment.dsl", OrderExample.class), ResourceType.DSL );
    kbuilder.add( ResourceFactory.newClassPathResource(
        "assignment.dslr", OrderExample.class), ResourceType.DSLR );
    KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
    kbase.addKnowledgePackages( kbuilder.getKnowledgePackages() );
    return kbase;
}
Our audit log also contains an integrated view, showing how rules and processes are influencing each other. For example, a part of the log shows how rule "5% discount" is executed as part of the node "Calculate Discount".
Rules do not need to be defined using the core rule language syntax, but they also can be defined using our more advanced rule editors, using domain-specific languages, decision tables, guided editors, etc. Our example defines a domain-specific language for describing assignment rules, based on the type of task, its properties, the process it is defined in, etc. This makes the assignment rules much more understandable for non-experts.
/********** Generic assignment rules **********/

rule "Assign 'Correct Order' to any sales representative"
    salience 30
    when
        There is a human task
        - with task name "Correct Order"
        - without actor id
    then
        Set actor id "Sales Representative"
end

/********** Assignment rules for the RuleSetExample process **********/

rule "Assign 'Follow-up Order' to a senior sales representative"
    salience 40
    when
        Process "org.drools.examples.process.ruleset.RuleSetExample"
        contains a human task
        - with task name "Follow-up Order"
        - without actor id
    then
        Set actor id "Senior Sales Representative"
end
One of the goals of our unified rules and processes framework is to allow users to extend the default programming constructs with domain-specific extensions that simplify development in a particular application domain. While Drools has been offering constructs to create domain-specific rule languages for some time now, this tutorial describes our first steps towards domain-specific process languages.
Most process languages offer some generic action (node) construct that allows plugging in custom user actions. However, these actions are usually low-level, where the user is required to write custom code to implement the work that should be incorporated in the process. The code is also closely linked to a specific target environment, making it difficult to reuse the process in different contexts.
Domain-specific languages are targeted to one particular application domain and therefore can offer constructs that are closely related to the problem the user is trying to solve. This makes the processes easier to understand and self-documenting. We will show you how to define domain-specific work items, which represent atomic units of work that need to be executed. These work items specify the work that should be executed in the context of a process in a declarative manner, i.e. specifying what should be executed (and not how) on a higher level (no code) and hiding implementation details.
So we want work items that are:
domain-specific
declarative (what, not how)
high-level (no code)
customizable to the context
Users can easily define their own set of domain-specific work items and integrate them in our process language(s). For example, the next figure shows an example of a process in a healthcare context. The process includes domain-specific work items for ordering nursing tasks (e.g. measuring blood pressure), prescribing medication and notifying care providers.
Let's start by showing you how to include a simple work item for sending notifications. A work item represents an atomic unit of work in a declarative way. It is defined by a unique name and additional parameters that can be used to describe the work in more detail. Work items can also return information after they have been executed, specified as results. Our notification work item could thus be defined using a work definition with four parameters and no results:
Name: "Notification" Parameters From [String] To [String] Message [String] Priority [String]
All work definitions must be specified in one or more configuration files in the project classpath, where all the properties are specified as name-value pairs. Parameters and results are maps where each parameter name is also mapped to the expected data type. Note that this configuration file also includes some additional user interface information, like the icon and the display name of the work item. (We use MVEL for reading in the configuration file, which allows for more advanced configurations.) Our MyWorkDefinitions.conf file looks like this:
import org.drools.process.core.datatype.impl.type.StringDataType;
[
  // the Notification work item
  [
    "name" : "Notification",
    "parameters" : [
      "Message" : new StringDataType(),
      "From" : new StringDataType(),
      "To" : new StringDataType(),
      "Priority" : new StringDataType(),
    ],
    "displayName" : "Notification",
    "icon" : "icons/notification.gif"
  ]

  // add more work items here ...
]
The Drools Configuration API can be used to register work definition files for your project using the drools.workDefinitions property, which represents a list of files containing work definitions (separated using spaces). For example, include a drools.rulebase.conf file in the META-INF directory of your project and add the following line:
drools.workDefinitions = MyWorkDefinitions.conf
Once our work definition has been created and registered, we can start using it in our processes. The process editor contains a separate section in the palette where the different work items that have been defined for the project appear.
Using drag and drop, a notification node can be created inside your process. The properties can be filled in using the properties view.
Apart from the properties defined for this work item, all work items also have these three properties:
Parameter Mapping: Allows you to map the value of a variable in the process to a parameter of the work item. This allows you to customize the work item based on the current state of the actual process instance (for example, the priority of the notification could depend on some process-specific information).
Result Mapping: Allows you to map a result (returned once a work item has been executed) to a variable of the process. This allows you to use results in the remainder of the process.
Wait for completion: By default, the process waits until the requested work item has been completed before continuing. It is also possible to continue immediately after the work item has been requested (without waiting for the results) by setting "wait for completion" to false.
The Drools engine contains a WorkItemManager that is responsible for executing work items whenever necessary. The WorkItemManager delegates the work items to WorkItemHandlers, which execute the work item and notify the WorkItemManager when the work item has been completed. For executing notification work items, a NotificationWorkItemHandler should be created (implementing the WorkItemHandler interface):
package com.sample;

import org.drools.process.instance.WorkItem;
import org.drools.process.instance.WorkItemHandler;
import org.drools.process.instance.WorkItemManager;

public class NotificationWorkItemHandler implements WorkItemHandler {

    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        // extract parameters
        String from = (String) workItem.getParameter("From");
        String to = (String) workItem.getParameter("To");
        String message = (String) workItem.getParameter("Message");
        String priority = (String) workItem.getParameter("Priority");
        // send email
        EmailService service = ServiceRegistry.getInstance().getEmailService();
        service.sendEmail(from, to, "Notification", message);
        // notify manager that work item has been completed
        manager.completeWorkItem(workItem.getId(), null);
    }

    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // Do nothing, notifications cannot be aborted
    }

}
This WorkItemHandler sends a notification as an email and then immediately notifies the WorkItemManager that the work item has been completed. Note that not all work items can be completed directly. In cases where executing a work item takes some time, execution can continue asynchronously and the work item manager can be notified later. In these situations, it might also be possible that a work item is aborted before it has been completed. The abort method can be used to specify how to abort such work items.
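As an illustration of the asynchronous case, a handler might hand the actual work off to another thread (or an external callback) and notify the manager only when that work is done. A minimal sketch, not the implementation used in the example:

public class AsyncNotificationWorkItemHandler implements WorkItemHandler {

    public void executeWorkItem(final WorkItem workItem, final WorkItemManager manager) {
        // return immediately; the work is performed in a separate thread
        new Thread(new Runnable() {
            public void run() {
                // ... perform the (long-running) work here ...
                // notify the manager once the work is really done
                manager.completeWorkItem(workItem.getId(), null);
            }
        }).start();
    }

    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // a real implementation would cancel the pending work here
    }

}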
WorkItemHandlers should be registered at the WorkItemManager, using the following API:
workingMemory.getWorkItemManager().registerWorkItemHandler(
    "Notification", new NotificationWorkItemHandler());
Decoupling the execution of work items from the process itself has the following advantages:
The process is more declarative, specifying what should be executed, not how.
Changes to the environment can be implemented by adapting the work item handler. The process itself should not be changed. It is also possible to use the same process in different environments, where the work item handler is responsible for integrating with the right services.
It is easy to share work item handlers across processes and projects (which would be more difficult if the code were embedded in the process itself).
Different work item handlers could be used depending on the context. For example, during testing or simulation, it might not be necessary to actually execute the work items. The next section shows an example of how to use specialized work item handlers during testing.
In short: execution can be customized depending on the context, changes in the environment are easier to manage (by changing the handler), processes can be shared across contexts (using different handlers), and testing and simulation become easier (using custom test handlers), as sketched below.
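For example, during testing a trivial handler could be registered that simply completes every work item without contacting any external service (the class name below is ours, not part of the example project):

public class MockNotificationWorkItemHandler implements WorkItemHandler {

    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        // no email is sent during testing; just log and complete the work item
        System.out.println("Pretending to execute " + workItem.getName());
        manager.completeWorkItem(workItem.getId(), null);
    }

    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // nothing to clean up in this mock implementation
    }

}

Registering this handler instead of the real one is enough to run the same process unchanged in a test environment:

workingMemory.getWorkItemManager().registerWorkItemHandler(
    "Notification", new MockNotificationWorkItemHandler());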
Our process framework is based on the (already well-known) idea of a Process Virtual Machine (PVM), where the process framework can be used as a basis for multiple process languages. This allows users to more easily create their own process languages, where common services provided by the process framework (e.g. persistence, audit) can be (re)used by the process language designer. Processes are represented as a graph of nodes, each node describing a part of the process logic. Different types of nodes are used for expressing different kinds of functionality, like creating or merging parallel flows (split and join), invoking a sub process, invoking external services, etc. One of our goals is creating a truly pluggable process language, where language designers can easily plug in their own node implementations.
An important aspect of workflow and BPM (business process management) is human task management. While some of the work performed in a process can be executed automatically, some tasks need to be executed by human actors. Drools Flow supports the use of human tasks inside processes using a special human task node that represents this interaction. This node allows process designers to define the type of task, the actor(s), the data associated with the task, etc. We have also implemented a task service that can be used to manage these human tasks. Users are, however, free to integrate any other solution, as this is fully pluggable.
To start using human tasks inside your processes, you first need to (1) include human task nodes inside your process, (2) integrate a task management component of your choice (e.g. the WS-HT implementation provided by us) and (3) have end users interact with the human task management component using some kind of user interface. These elements will be discussed in more detail in the next sections.
Drools Flow supports the use of human tasks inside processes using a special human task node (as shown in the figure above). A human task node represents an atomic task that needs to be executed by a human actor. Although Drools Flow has a special human task node for including human tasks inside a process, human tasks are simply considered as any other kind of external service that needs to be invoked and are therefore simply implemented as a special kind of work item. The only thing that is special about the human task node is that we have added support for swimlanes, making it easier to assign tasks to users (see below). A human task node contains the following properties:
You can edit these variables in the properties view (see below) when selecting the human task node, or the most important properties can also be edited by double-clicking the human task node, after which a custom human task node editor is opened, as shown below as well.
Note that you could either specify the values of the different parameters (actorId, priority, content, etc.) directly (in which case they will be the same for each execution of this process), or make them context-specific, based on the data inside the process instance. For example, parameters of type String can use #{expression} to embed a value in the String. The value will be retrieved when creating the work item and the #{...} will be replaced by the toString() value of the variable. The expression could simply be the name of a variable (in which case it will be resolved to the value of the variable), but more advanced MVEL expressions are possible as well, like #{person.name.firstname}. For example, when sending an email, the body of the email could contain something like "Dear #{customer.name}, ...". For other types of variables, it is possible to map the value of a variable to a parameter using the parameter mapping.
Human task nodes can be used in combination with swimlanes to assign multiple human tasks to the same actors. Tasks in the same swimlane will be assigned to the same actor. Whenever the first task in a swimlane is created, and that task has an actorId specified, that actorId will be assigned to the swimlane as well. All other tasks that are created in that swimlane will then use that actorId, even if an actorId has been specified for the task itself.
Whenever a human task that is part of a swimlane is completed, the actorId of that swimlane is set to the actorId of the user who executed that human task. This allows, for example, assigning a human task to a group of users and assigning future tasks of that swimlane to the user who claimed the first task. This will also automatically change the assignment of tasks if at some point one of the tasks is reassigned to another user.
To add a human task to a swimlane, simply specify the name of the swimlane as the value of the "Swimlane" parameter of the human task node. A process must also define all the swimlanes that it contains. To do so, open the process properties by clicking on the background of the process and click on the "Swimlanes" property. You can add new swimlanes there.
As far as the Drools Flow engine is concerned, human tasks are similar to any other external service that needs to be invoked and are implemented as an extension of normal work items. As a result, the process itself only contains an abstract description of the human tasks that need to be executed, and a work item handler is responsible for binding these abstract tasks to a specific implementation. Using our pluggable work item handler approach (see the chapter on domain-specific processes for more details), users can plug in any back-end implementation.
We do however provide an implementation of such a human task management component based on the WS-HumanTask specification. If you do not have the requirement to integrate a specific human task component yourself, you can use this service. It manages the task life cycle of the tasks (creation, claiming, completion, etc.) and stores the state of the task persistently. It also supports features like internationalization, calendar integration, different types of assignments, delegation, deadlines, etc.
Because we did not want to implement a custom solution when a standard is available, we chose to implement our service based on the WS-HumanTask (WS-HT) specification. This specification defines in detail the model of the tasks, the life cycle, and a lot of other features, such as the ones mentioned above. It is pretty comprehensive and can be found here.
Looking from the perspective of the process, whenever a human task node is triggered during the execution of a process instance, a human task is created. The process will only continue from that point when that human task has been completed or aborted (unless, of course, you specify that the process does not need to wait for the human task to complete, by setting the "Wait for completion" property to false). However, the human task usually has a separate life cycle of its own. We will now shortly introduce this life cycle, as shown in the figure below. For more details, check out the WS-HumanTask specification.
Whenever a task is created, it starts in the "Created" state. It usually automatically transfers to the "Ready" state, at which point the task will show up on the task list of all the actors that are allowed to execute the task. There, it is waiting for one of these actors to claim the task, indicating that he or she will be executing the task. Once a user has claimed a task, the status is changed to "Reserved". Note that a task that only has one potential actor will automatically be assigned to that actor upon creation of that task. After claiming the task, that user can then at some point decide to start executing the task, in which case the task status is changed to "InProgress". Finally, once the task has been performed, the user must complete the task (and can specify the result data related to the task), in which case the status is changed to "Completed". If the task could not be completed, the user can also indicate this using a fault response (possibly with fault data associated), in which case the status is changed to "Failed".
The life cycle explained above is the normal life cycle. The service also supports a lot of other life cycle operations, like:
The task management component needs to be integrated with the Drools Flow engine just like any other external service, by registering a work item handler that is responsible for translating the abstract work item (in this case a human task) to a specific invocation. We have implemented such a work item handler (org.drools.process.workitem.wsht.WSHumanTaskHandler in the drools-process-task module) so you can easily link this work item handler like this:
StatefulKnowledgeSession session = ...;
session.getWorkItemManager().registerWorkItemHandler(
    "Human Task", new WSHumanTaskHandler());
By default, this handler will connect to the human task management component on the local machine on port 9123, but you can easily change that by invoking the setConnection(ipAddress, port) method on the WSHumanTaskHandler.
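For example (the address and port below are placeholders for wherever your task server is actually running):

StatefulKnowledgeSession ksession = ...;
WSHumanTaskHandler handler = new WSHumanTaskHandler();
// point the handler at a remote task server instead of the default localhost:9123
handler.setConnection("192.168.1.100", 9123);
ksession.getWorkItemManager().registerWorkItemHandler("Human Task", handler);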
At this moment, the WSHumanTaskHandler uses Mina (http://mina.apache.org/) for the communication in a client/server architecture. Mina exchanges messages between client and server to let the client communicate with the server. That is why the WSHumanTaskHandler uses a MinaTaskClient, which creates the different messages for the various actions the user wants the server to execute.
In the client (MinaTaskClient in this implementation) you will find the following methods for interacting with human tasks:
public void start( long taskId, String userId, TaskOperationResponseHandler responseHandler )
public void stop( long taskId, String userId, TaskOperationResponseHandler responseHandler )
public void release( long taskId, String userId, TaskOperationResponseHandler responseHandler )
public void suspend( long taskId, String userId, TaskOperationResponseHandler responseHandler )
public void resume( long taskId, String userId, TaskOperationResponseHandler responseHandler )
public void skip( long taskId, String userId, TaskOperationResponseHandler responseHandler )
public void delegate( long taskId, String userId, String targetUserId,
                      TaskOperationResponseHandler responseHandler )
public void complete( long taskId, String userId, ContentData outputData,
                      TaskOperationResponseHandler responseHandler )
...
Using these methods, you can implement any kind of GUI through which end users work on the tasks assigned to them. If you take a look at the method signatures, you will notice that almost all of them take the following arguments:
taskId: the id of the task we are working with. You will probably pick this id from the user's task list in the user interface.
userId: the id of the user that is executing the action, probably the id of the user that is signed in to the application.
responseHandler: the handler responsible for catching the response, retrieving the results, or simply letting us know that the task operation has finished.
As you can imagine, each of these methods creates a message that is sent to the server, and the server then executes the logic that implements the corresponding action. The creation of one of these messages looks like this:
public void complete(long taskId, String userId, ContentData outputData,
                     TaskOperationResponseHandler responseHandler) {
    List<Object> args = new ArrayList<Object>( 5 );
    args.add( Operation.Complete );
    args.add( taskId );
    args.add( userId );
    args.add( null );
    args.add( outputData );
    Command cmd = new Command( counter.getAndIncrement(),
                               CommandName.OperationRequest,
                               args );
    handler.addResponseHandler( cmd.getId(), responseHandler );
    session.write( cmd );
}
Here we can see that a Command is created, the arguments of the method are placed inside it together with the type of operation we are trying to execute, and the command is then sent to the server with the session.write( cmd ) method.
On the server side, when the command is received, the logic that gets executed depends on the operation type (here Operation.Complete). If we look at the messageReceived method of the TaskServerHandler class, the task operation is executed using the taskServiceSession, which is responsible for retrieving, persisting and manipulating all human task information, both when tasks are created and while no user is interacting with them.
The task management component is a completely independent service that the process engine communicates with. We therefore recommend starting it as a separate service as well. To start the task server, you can use the following code fragment:
EntityManagerFactory emf = Persistence.createEntityManagerFactory("org.drools.task");
TaskService taskService = new TaskService(emf);
MinaTaskServer server = new MinaTaskServer( taskService );
Thread thread = new Thread( server );
thread.start();
The task management component uses the Java Persistence API (JPA) to store all task information in a persistent manner. To configure the persistence, you need to modify the persistence.xml configuration file accordingly. We refer to the JPA documentation on how to do that. The following fragment shows, for example, how to use the task management component with Hibernate and an in-memory H2 database:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<persistence version="1.0"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
                                 http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd
                                 http://java.sun.com/xml/ns/persistence/orm
                                 http://java.sun.com/xml/ns/persistence/orm_1_0.xsd"
             xmlns:orm="http://java.sun.com/xml/ns/persistence/orm"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns="http://java.sun.com/xml/ns/persistence">
  <persistence-unit name="org.drools.task">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <class>org.drools.task.Attachment</class>
    <class>org.drools.task.Content</class>
    <class>org.drools.task.BooleanExpression</class>
    <class>org.drools.task.Comment</class>
    <class>org.drools.task.Deadline</class>
    <class>org.drools.task.Delegation</class>
    <class>org.drools.task.Escalation</class>
    <class>org.drools.task.Group</class>
    <class>org.drools.task.I18NText</class>
    <class>org.drools.task.Notification</class>
    <class>org.drools.task.EmailNotification</class>
    <class>org.drools.task.EmailNotificationHeader</class>
    <class>org.drools.task.PeopleAssignments</class>
    <class>org.drools.task.Reassignment</class>
    <class>org.drools.task.Status</class>
    <class>org.drools.task.Task</class>
    <class>org.drools.task.TaskData</class>
    <class>org.drools.task.SubTasksStrategy</class>
    <class>org.drools.task.OnParentAbortAllSubTasksEndStrategy</class>
    <class>org.drools.task.OnAllSubTasksEndParentEndStrategy</class>
    <class>org.drools.task.User</class>
    <properties>
      <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
      <property name="hibernate.connection.driver_class" value="org.h2.Driver"/>
      <property name="hibernate.connection.url" value="jdbc:h2:mem:mydb"/>
      <property name="hibernate.connection.username" value="sa"/>
      <property name="hibernate.connection.password" value="sasa"/>
      <property name="hibernate.connection.autocommit" value="false"/>
      <property name="hibernate.max_fetch_depth" value="3"/>
      <property name="hibernate.hbm2ddl.auto" value="create"/>
      <property name="hibernate.show_sql" value="true"/>
    </properties>
  </persistence-unit>
</persistence>
The first time you start the task management component, you need to make sure that all the necessary users and groups are added to the database. Our implementation requires all users and groups to be predefined before a task can be assigned to them. So make sure you add the necessary users and groups to the database using the taskSession.addUser(user) and taskSession.addGroup(group) methods. Note that you need at least an "Administrator" user, as all tasks are automatically assigned to this user in the administrator role.
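For example, a minimal bootstrap could look like the sketch below. It assumes that a TaskServiceSession is obtained via taskService.createSession() and that User and Group can be constructed from their id; check the drools-process-task sources for the exact API.

// Minimal sketch (API details assumed, see the note above)
TaskServiceSession taskSession = taskService.createSession();
// The "Administrator" user is required, as all tasks are assigned to it
// in the administrator role
taskSession.addUser( new User( "Administrator" ) );
// Add the users and groups your processes will assign tasks to
taskSession.addUser( new User( "john" ) );
taskSession.addGroup( new Group( "sales" ) );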
The drools-process-task module contains an org.drools.task.RunTaskService class in the src/test/java source folder that can be used to start a task server. It automatically adds users and groups as defined in the LoadUsers.mvel and LoadGroups.mvel configuration files.
The task management component exposes various methods to manage the life cycle of the tasks through a Java API. This allows clients to integrate (at a low level) with the task management component. Note that end users should probably not interact with this low-level API directly but rather use one of the task list clients. These clients interact with the task management component using this API.
This interaction is described in the following image:
As we can see in the image, we have a MinaTaskClient and a MinaTaskServer. They communicate with each other by sending messages to query and manipulate human tasks. Step by step, the interaction looks like this:
A client needs to complete a task, so it creates an instance of MinaTaskClient and connects it to the MinaTaskServer, establishing a session they can use to talk to each other. This is step one in the image.
Then the client calls the complete() method on MinaTaskClient with the corresponding arguments. This generates a new message (or Command) that is written to the session the client opened when it connected to the server. The message must specify a type that the server recognizes, so it knows what to do when the message is received. This is step two in the image.
At this point, TaskServerHandler notices that there is a new message in the session and analyzes what kind of message it is. In this case the type is Operation.Complete, because the client is successfully finishing a task, so the task the user wants to finish needs to be completed. This is achieved using the TaskServiceSession, which fires a specific type of event that is processed by a specific subclass of TaskEventListener. These are steps three and four in the image.
When the event is received by the TaskEventListener, it knows how to modify the status of the task. This is achieved using the EntityManager to retrieve and modify the status of the specific task in the database. In this case, because we are finishing a task, the status is updated to Completed. This is step five in the image.
Finally, once the changes are made, the client needs to be notified that the task was successfully completed. This is achieved by creating a response message that the TaskClientHandler receives and passes on to the MinaTaskClient. These are steps six, seven and eight in the image.
The Drools IDE contains an org.drools.eclipse.task plugin that allows you to test and/or debug processes using human tasks. It contains a Human Task View that can connect to a running task management component and request the relevant tasks for a particular user (i.e. the tasks where the user is a potential owner, or the tasks that the user has already claimed and is executing). The life cycle of these tasks can then be executed, i.e. claiming or releasing a task, starting or stopping the execution of a task, completing a task, etc. A screenshot of this Human Task View is shown below. You can configure which task management component to connect to in the Drools Task preference page (select Window -> Preferences and select Drools Task). Here you can specify the URL and port (default = 127.0.0.1:9123).
[Figure: the Human Task View]
This section describes how to debug processes. This means that the current state of your running processes can be inspected and visualized during the execution. We use a simple example throughout this section to illustrate the debugging capabilities. The example will be introduced first, followed by an illustration on how to use the debugging capabilities.
Our example contains two processes and some rules (used inside the ruleflow groups):
The main process contains some of the most common nodes: a start and end node (obviously), two ruleflow groups, an action (that simply prints a string to the default output), a milestone (a wait state that is triggered when a specific Event is inserted into the working memory) and a subprocess.
The SubProcess simply contains a milestone that also waits for (another) specific Event in the working memory.
There are only two rules (one for each ruleflow group) that simply print out either a hello world or a goodbye world message to the default output.
We will simulate the execution of this process by starting the process, firing all rules (resulting in the execution of the hello rule), then inserting the specific milestone events for both milestones (in the main process and in the subprocess), and finally firing all rules again (resulting in the execution of the goodbye rule). The console output will look something like this:
Hello World
Executing action
Goodbye cruel world
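For reference, the driver code for this simulation might look roughly like the sketch below; the process id and the Event class are placeholders specific to this example project, not fixed names.

StatefulKnowledgeSession ksession = ...;
// start the main process ("com.sample.ruleflow" is a placeholder process id)
ksession.startProcess( "com.sample.ruleflow" );
// fires the hello rule
ksession.fireAllRules();
// trigger the milestone in the main process, then the one in the subprocess
// (Event is a placeholder fact class used by the milestone constraints)
ksession.insert( new Event( "mainProcessEvent" ) );
ksession.insert( new Event( "subProcessEvent" ) );
// fires the goodbye rule
ksession.fireAllRules();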
We now add four breakpoints during the execution of the process (in the order in which they will be encountered):
At the start of the consequence of the hello rule
Before inserting the triggering event for the milestone in the main process
Before inserting the triggering event for the milestone in the subprocess
At the start of the consequence of the goodbye rule
When debugging the application, one can use the following debug views to track the execution of the process:
The working memory view, showing the contents (data) in the working memory.
The agenda view, showing all activations in the agenda.
The global data view, showing the globals.
The default Java Debug views, showing the current line and the value of the known variables, and this both for normal Java code as for rules.
The process instances view, showing all running processes (and their state).
The audit view, showing the audit log.
The process instances view shows the currently running process instances. The example shows that there is currently one running process (instance), executing one node (instance), i.e. a RuleSet node. When double-clicking a process instance, the process instance viewer will graphically show the progress of the process instance. At each of the breakpoints, this will look like:
At the start of the consequence of the hello rule, only the hello ruleflow group is active, waiting on the execution of the hello rule:
Once that rule has been executed, the action, the milestone and the subprocess will be triggered. The action will be executed immediately, triggering the join (which will simply wait until all incoming connections have been triggered). The subprocess will wait at the milestone. So, before inserting the triggering event for the milestone in the main process, there are now two process instances, looking like this:
When triggering the event for the milestone in the main process, this will also trigger the join (which will simply wait until all incoming connections have been triggered). So at that point (before inserting the triggering event for the milestone in the subprocess), the processes will look like this:
When triggering the event for the milestone in the subprocess, this process instance will be completed and this will also trigger the join, which will then continue and trigger the goodbye ruleflow group, as all its incoming connections have been triggered. Firing all the rules will trigger the breakpoint in the goodbye rule. At that point, the situation looks like this:
After executing the goodbye rule, the main process will also be completed and the execution will have reached the end.
For those who want to look at the result in the audit view, this will look something like this. [Note: the object insertion events might seem a little out of place; this is because they are only logged after (and never before) they are inserted, making it difficult to pinpoint their exact location.]
The Drools plugin for the Eclipse IDE provides a few additional features that might be interesting for developers.
A Drools runtime is a collection of jar files that represent one specific release of the Drools project jars. To create a runtime, you must point the IDE to the release of your choice. If you want to create a new runtime based on the latest Drools project jars included in the plugin itself, you can also easily do that. You are required to specify a default Drools runtime for your Eclipse workspace, but each individual project can override the default and select the appropriate runtime for that project specifically.
To define one or more Drools runtimes using the Eclipse preferences view, open your Preferences by selecting the "Preferences" menu item in the "Window" menu. A "Preferences" dialog should show all your settings. On the left side of this dialog, under the Drools category, select "Installed Drools runtimes". The panel on the right should then show the currently defined Drools runtimes. If you have not yet defined any runtimes, it should look like the figure below.
[Figure: the Installed Drools runtimes preference page, with no runtimes defined yet]
To define a new Drools runtime, click on the add button. A dialog such as the one shown below should pop up, asking for the name of your runtime and the location on your file system where it can be found.
In general, you have two options:
If you simply want to use the default jar files as included in the Drools Eclipse plugin, you can create a new Drools runtime automatically by clicking the "Create a new Drools 5 runtime ..." button. A file browser will show up, asking you to select the folder on your file system where you want this runtime to be created. The plugin will then automatically copy all required dependencies to the specified folder. After selecting this folder, the dialog should look like the figure shown below.
If you want to use one specific release of the Drools project, you should create a folder on your file system that contains all the necessary Drools libraries and dependencies. Instead of creating a new Drools runtime as explained above, give your runtime a name and select the location of this folder containing all the required jars.
After clicking the OK button, the runtime should show up in your table of installed Drools runtimes, as shown below. Click the checkbox in front of the newly created runtime to make it the default Drools runtime. The default Drools runtime will be used as the runtime of all your Drools projects that have not selected a project-specific runtime.
[Figure: the newly created runtime shown in the table of installed Drools runtimes]
You can add as many Drools runtimes as you need. For example, the screenshot below shows a configuration where three runtimes have been defined: a Drools 4.0.7 runtime, a Drools 5.0.0 runtime and a Drools 5.0.0.SNAPSHOT runtime. The Drools 5.0.0 runtime is selected as the default one.
[Figure: three defined Drools runtimes, with Drools 5.0.0 selected as the default]
Note that you will need to restart Eclipse if you changed the default runtime and you want to make sure that all the projects that are using the default runtime update their classpath accordingly.
Whenever you create a Drools project (using the New Drools Project wizard or by converting an existing Java project to a Drools project using the action "Convert to Drools Project" that is shown when you are in the Drools perspective and you right-click an existing Java project), the plugin will automatically add all the required jars to the classpath of your project.
When creating a new Drools project, the plugin will automatically use the default Drools runtime for that project, unless you specify a project-specific one. You can do this in the final step of the New Drools Project wizard, as shown below, by deselecting the "Use default Drools runtime" checkbox and selecting the appropriate runtime in the drop-down box. If you click the "Configure workspace settings ..." link, the workspace preferences showing the currently installed Drools runtimes will be opened, so you can add new runtimes there.
You can change the runtime of a Drools project at any time by opening the project properties and selecting the Drools category, as shown below. Mark the "Enable project specific settings" checkbox and select the appropriate runtime from the drop-down box. If you click the "Configure workspace settings ..." link, the workspace preferences showing the currently installed Drools runtimes will be opened, so you can add new runtimes there. If you deselect the "Enable project specific settings" checkbox, it will use the default runtime as defined in your global preferences.
The concept of process skins provides a way to control the visualization of the different nodes of a process. You may change the visualization of the various node types to the way you prefer by implementing your own SkinProvider.
BPMN is a popular language used by business users for modeling business processes. BPMN defines terminology, different types of nodes, how these should be visualized, etc. People who are familiar with BPMN might find it easier to implement an executable process (possibly based on a BPMN process diagram) using a similar visualization. We have therefore created a BPMN skin that maps the Drools Flow concepts to the equivalent BPMN visualization.
As an example, the following figure shows a process using some of the different types of nodes in the RuleFlow language using the default skin.
[Figure: example process shown using the default skin]
You may now change the preferred process skin in the Drools Preferences dialog:
[Figure: selecting the preferred process skin in the Drools Preferences dialog]
After reopening the editor, the same process is displayed using the BPMN skin.
[Figure: the same process displayed using the BPMN skin]
You need to actively monitor your processes to make sure you can detect any anomalies and react to unexpected events as soon as possible. Business Activity Monitoring (BAM) is concerned with real-time monitoring of your processes and the option of intervening directly, possibly even automatically, based on the analysis of these events.
Drools Flow allows users to define reports based on the events generated by the process engine, and possibly direct intervention in specific situations using complex event processing rules (Drools Fusion), as described in the next two sections. Future releases of the Drools platform will include support for all requirements of Business Activity Monitoring, including a web-based application that can be used to more easily interact with a running process engine, inspect its state, generate reports, etc.
By adding a history logger to the process engine, all relevant events are stored in the database. This history log can be used to monitor and analyze the execution of your processes. We use the Eclipse BIRT (Business Intelligence and Reporting Tools) framework to create reports that show the key performance indicators. It is easy to define your own reports, using the predefined data sets containing all process history information, plus any other data sources you might want to add yourself.
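As an illustration only, attaching such a history logger could look roughly like the sketch below. The class name WorkingMemoryDbLogger and its constructor are assumptions here; consult the audit/BAM module of your Drools release for the exact class and configuration.

// Hypothetical sketch -- the logger class name and constructor are assumptions,
// see the process audit module for the actual API.
StatefulKnowledgeSession ksession = ...;
WorkingMemoryDbLogger historyLogger = new WorkingMemoryDbLogger( ksession );
// From now on, process events (start, completion, node triggers, ...) are
// persisted and can serve as the data set for BIRT reports.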
The Eclipse BIRT framework allows you to define data sets, create reports, include charts, preview your reports, and export them to web pages. (Consult the Eclipse BIRT documentation on how to define your own reports.) The following screenshot shows an example of how to create such a chart.
The next figure displays a simple report based on some history data, showing the number of requests per hour and the average completion time of the requests during that hour. These charts could be used to check for an unexpected drop or rise in requests, an increase in the average processing time, etc. Such charts could signal possible problems before the situation really gets out of hand.
Reports can be used to visualize an overview of the current state of your processes, but they rely on a human actor to take action based on the information in these charts. However, we allow users to define automatic responses to specific circumstances.
Drools Fusion provides numerous features that make it easy to process large sets of events, which can be used to monitor the process engine itself. This is achieved by adding a listener to the engine that forwards all relevant process events, such as the start and completion of a process instance, or the triggering of a specific node, to a session responsible for processing these events. This could be the same session as the one executing the processes, or an independent one. Complex Event Processing (CEP) rules could then be used to specify how to process these events. For example, these rules could generate higher-level business events based on a specific occurrence of low-level process events. The rules could also specify how to respond to specific situations.
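A minimal sketch of such a forwarding listener is shown below. It assumes the org.drools.event.process API (in particular a DefaultProcessEventListener adapter) and a separate monitoring session; adapt the class names if your Drools version differs.

import org.drools.event.process.DefaultProcessEventListener;
import org.drools.event.process.ProcessStartedEvent;
import org.drools.runtime.StatefulKnowledgeSession;

...

// The session executing the processes and the session running the monitoring
// (CEP) rules; they may also be one and the same session.
StatefulKnowledgeSession processSession = ...;
final StatefulKnowledgeSession monitoringSession = ...;

processSession.addEventListener( new DefaultProcessEventListener() {
    public void afterProcessStarted( ProcessStartedEvent event ) {
        // forward the event to the monitoring session so that CEP rules
        // (like the sample rule below) can react to it
        monitoringSession.insert( event );
        monitoringSession.fireAllRules();
    }
} );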
The following sample rule accumulates all start process events for one specific order process over the last hour, using the "sliding window" support. This rule prints out an error message if more than 1000 process instances were started in the last hour (e.g., to detect a possible overload of the server). Note that, in a realistic setting, this would probably be replaced by sending an email or another form of notification to the person responsible, instead of simple logging.
declare ProcessStartedEvent
    @role( event )
end

dialect "mvel"

rule "Number of process instances above threshold"
when
    Number( nbProcesses : intValue > 1000 )
        from accumulate(
            e: ProcessStartedEvent( processInstance.processId == "com.sample.order.OrderProcess" )
                over window:size(1h),
            count(e) )
then
    System.err.println( "WARNING: Number of order processes in the last hour above 1000: " + nbProcesses );
end
These rules could even be used to alter the behavior of a process automatically at runtime, based on the events generated by the engine. For example, whenever a specific situation is detected, additional rules could be added to the Knowledge Base to modify process behavior. For instance, whenever a large number of user requests within a specific time frame is detected, an additional validation could be added to the process, enforcing some sort of flow control to reduce the frequency of incoming requests. There is also the possibility of deploying additional logging rules as the consequence of detecting problems. As soon as the situation reverts back to normal, such rules would be removed again.