A RuleFlow is a process that describes the order in which a series of steps need to be executed, using a flow chart. A process consists of a collection of nodes that are linked to each other using connections. Each of the nodes represents one step in the overall process, while the connections specify how to transition from one node to the other. A large selection of predefined node types is available. This chapter describes how to define such processes and use them in your application.
Processes can be created by using one of the following three methods:
The graphical RuleFlow editor is an editor that allows you to create a process by dragging and dropping different nodes on a canvas and editing the properties of these nodes. The graphical RuleFlow editor is part of the Drools plug-in for Eclipse. Once you have set up a Drools project (check the IDE chapter if you do not know how to do this), you can start adding processes. When in a project, launch the 'New' wizard (use "Ctrl+N", or right-click the directory you would like to put your ruleflow in and select "New ... -> Other ..."). Choose the section on "Drools" and then pick "RuleFlow file". This will create a new .rf file.
Next you will see the graphical ruleflow editor. Now would be a good time to switch to the "Drools Perspective" (if you haven't done so already) - this will tweak the UI so it is optimal for rules. Then ensure that you can see the "Properties" view at the bottom of the Eclipse window, as it will be necessary to fill in the different properties of the elements in your process. If you cannot see the properties view, open it using the menu Window -> Show View -> Other ..., and under the General folder select the Properties view.
The RuleFlow editor consists of a palette, a canvas and an outline view. To add new elements to the canvas, select the element you would like to create in the palette and then add it to the canvas by clicking on the preferred location. For example, click on the RuleFlowGroup icon in the Component Palette of the GUI - you can then draw a few rule flow groups. Clicking on an element in your ruleflow allows you to set the properties of that element. You can link the nodes together (as long as it is allowed by the different types of nodes) by using "Connection Creation" from the component palette.
You can keep adding nodes and connections to your process until it represents the business logic that you want to specify. You'll probably need to check the process for any missing information (by pressing the green "check" icon in the IDE menu bar) before trying to use it in your application.
It is also possible to specify processes using the underlying XML directly. The syntax of these XML processes is defined using an XML Schema Definition. For example, the following XML fragment shows a simple process that contains a sequence of a start node, an action node that prints "Hello World" to the console, and an end node.
<?xml version="1.0" encoding="UTF-8"?>
<process xmlns="http://drools.org/drools-5.0/process"
         xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
         xs:schemaLocation="http://drools.org/drools-5.0/process drools-processes-5.0.xsd"
         type="RuleFlow" name="ruleflow" id="com.sample.ruleflow" package-name="com.sample" >

  <header>
  </header>

  <nodes>
    <start id="1" name="Start" x="16" y="16" />
    <actionNode id="2" name="Hello" x="128" y="16" >
      <action type="expression" dialect="mvel" >System.out.println("Hello World");</action>
    </actionNode>
    <end id="3" name="End" x="240" y="16" />
  </nodes>

  <connections>
    <connection from="1" to="2" />
    <connection from="2" to="3" />
  </connections>

</process>
The process XML file should consist of exactly one <process> element. This element contains parameters related to the process (the type, name, id and package name of the process), and consists of three main subsections: a <header> (where process-level information like variables, globals, imports and swimlanes can be defined), a <nodes> section that defines each of the nodes in the process (there is a specific element for each of the different node types that defines the various parameters and possibly sub-elements for that node type), and a <connections> section that contains the connections between all the nodes in the process.
While it is recommended to define processes using the graphical editor or the underlying XML (to shield yourself from internal APIs), it is also possible to define a process using the Process API directly. The most important process elements are defined in the org.drools.workflow.core and the org.drools.workflow.core.node packages. A "fluent API" is provided that allows you to easily construct processes in a readable manner using factories. At the end, you can validate the process that you were constructing manually. Some examples about how to build processes using this fluent API are added below.
This is a simple example of a basic process with a ruleset node only:
RuleFlowProcessFactory factory = RuleFlowProcessFactory.createProcess("org.drools.HelloWorldRuleSet");
factory
    // Header
    .name("HelloWorldRuleSet")
    .version("1.0")
    .packageName("org.drools")
    // Nodes
    .startNode(1).name("Start").done()
    .ruleSetNode(2)
        .name("RuleSet")
        .ruleFlowGroup("someGroup").done()
    .endNode(3).name("End").done()
    // Connections
    .connection(1, 2)
    .connection(2, 3);
RuleFlowProcess process = factory.validate().getProcess();
You can see that we start by calling the static createProcess() method from the RuleFlowProcessFactory class. This method creates a new process with the given id and returns the RuleFlowProcessFactory that can be used to create the process. A typical process consists of three parts: a header part that contains global elements like the name of the process, imports, variables, etc. The nodes section contains all the different nodes that are part of the process and finally the connections section links these nodes to each other to create a flow chart.
So in this example, the header contains the name, the version, and the package name of the process. After that you can start adding nodes to the current process. If you have auto-completion, you can see that you have different methods for creating each of the supported node types at your disposal.
When you start adding nodes to the process, in this example by calling the startNode(), ruleSetNode() and endNode() methods, you can see that these methods return a specific NodeFactory, which allows you to set the properties of that node. Once you have finished configuring that specific node, the done() method returns you to the current RuleFlowProcessFactory so you can add more nodes, if necessary.
When you have finished adding nodes, you must connect them by creating connections between them. This can be done by calling the connection method, which links the previously created nodes.
Finally, you can validate the generated process by calling the validate() method and retrieve the created RuleFlowProcess object.
This example is using Split and Join nodes:
RuleFlowProcessFactory factory = RuleFlowProcessFactory.createProcess("org.drools.HelloWorldJoinSplit");
factory
    // Header
    .name("HelloWorldJoinSplit")
    .version("1.0")
    .packageName("org.drools")
    // Nodes
    .startNode(1).name("Start").done()
    .splitNode(2).name("Split").type(Split.TYPE_AND).done()
    .actionNode(3).name("Action 1")
        .action("mvel", "System.out.println(\"Inside Action 1\")").done()
    .actionNode(4).name("Action 2")
        .action("mvel", "System.out.println(\"Inside Action 2\")").done()
    .joinNode(5).type(Join.TYPE_AND).done()
    .endNode(6).name("End").done()
    // Connections
    .connection(1, 2)
    .connection(2, 3)
    .connection(2, 4)
    .connection(3, 5)
    .connection(4, 5)
    .connection(5, 6);
RuleFlowProcess process = factory.validate().getProcess();
This shows a simple split / join example. As you can see, split nodes can have multiple outgoing connections and join nodes multiple incoming connections. To understand the behaviour of the different types of split and join nodes, take a look at the documentation for each of these nodes.
Now we show a more complex example with a ForEach node, where we have nested nodes:
RuleFlowProcessFactory factory = RuleFlowProcessFactory.createProcess("org.drools.HelloWorldForeach");
factory
    // Header
    .name("HelloWorldForeach")
    .version("1.0")
    .packageName("org.drools")
    // Nodes
    .startNode(1).name("Start").done()
    .forEachNode(2)
        // Properties
        .linkIncomingConnections(3)
        .linkOutgoingConnections(4)
        .collectionExpression("persons")
        .variable("child", new ObjectDataType("org.drools.Person"))
        // Nodes
        .actionNode(3)
            .action("mvel", "System.out.println(\"inside action1\")").done()
        .actionNode(4)
            .action("mvel", "System.out.println(\"inside action2\")").done()
        // Connections
        .connection(3, 4)
        .done()
    .endNode(5).name("End").done()
    // Connections
    .connection(1, 2)
    .connection(2, 5);
RuleFlowProcess process = factory.validate().getProcess();
Here you can see how we can include a ForEach node with nested action nodes. Note the linkIncomingConnections() and linkOutgoingConnections() methods that are called to link the ForEach node with the internal action nodes. These methods are used to specify what the first and last nodes inside the ForEach composite node are.
There are two things you need to do to be able to execute processes from within your application: (1) you need to create a knowledge base that contains the definition of the process; and (2) you need to create a session to communicate with the process engine and start the process.
Creating a knowledge base: Once you have a valid process, you can add the process to the knowledge base (note that this is almost identical to adding rules to the knowledge base, except for the type of knowledge added):
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add(ResourceFactory.newClassPathResource("MyProcess.rf"), ResourceType.DRF);
After adding all your knowledge to the builder (you can add more than one process or even rules), you should probably check whether the process (and/or rules) have been parsed correctly and write out any errors like this:
KnowledgeBuilderErrors errors = kbuilder.getErrors();
if (errors.size() > 0) {
    for (KnowledgeBuilderError error: errors) {
        System.err.println(error);
    }
    throw new IllegalArgumentException("Could not parse knowledge.");
}
Next you need to create the knowledge base that contains all the necessary processes (and rules) like this:
KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
Starting a process: Processes are only executed if you explicitly state that they should be executed. This is because you could potentially define a lot of processes in your knowledge base and the engine has no way to know when you would like to start each of these. To activate a particular process, you will need to start the process by calling the startProcess method on your session. For example:
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
ksession.startProcess("com.sample.MyProcess");
The parameter of the startProcess method represents the id of the process that needs to be started (the process id needs to be specified as a property of the process, which is shown in the properties view when you click the background canvas of your process). If your process also requires the execution of rules during the execution of the process, you also need to call the ksession.fireAllRules() method to make sure the rules are executed as well. That's it!
You can also specify additional parameters that are used to pass on input data to the process, using the startProcess(String processId, Map parameters) method, that takes an additional set of parameters as name-value pairs. These parameters are then copied to the newly created process instance as top-level variables of the process.
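As a sketch, starting a hypothetical process that declares a top-level variable named "person" might look like this (the variable name and the Person class are illustrative assumptions, not part of the example process above):

```
// Assumption: the process defines a variable "person" and a Person class exists.
Map<String, Object> parameters = new HashMap<String, Object>();
parameters.put("person", new Person("John Doe"));
ksession.startProcess("com.sample.MyProcess", parameters);
```

The map keys must match the variable names declared in the process header; each value is copied into the corresponding top-level variable of the new process instance.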
You can also start a process from within a rule consequence, using
kcontext.getKnowledgeRuntime().startProcess("com.sample.MyProcess");
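In context, such a call sits in the consequence of a rule; for example (the rule name and condition below are illustrative):

```
rule "StartMyProcess"
when
    // some condition, e.g. a matching fact
    $order : Order()
then
    // kcontext is available in every rule consequence
    kcontext.getKnowledgeRuntime().startProcess("com.sample.MyProcess");
end
```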
A ruleflow process is a flow chart where different types of nodes are linked using connections. The process itself exposes the following properties:
Id: The unique id of the process.
Name: The display name of the process.
Version: The version number of the process.
Package: The package (namespace) the process is defined in.
Variables: Variables can be defined to store data during the execution of your process (see the 'data' section for more details).
Swimlanes: Specify the actor that is responsible for the execution of human tasks (see the 'human tasks' section for more details).
Exception Handlers: Specify the behaviour when a fault occurs in the process (see the 'exceptions' section for more details).
Connection Layout: Specify how the connections are visualized on the canvas using the connection layout property:
'Manual' always draws your connections as lines going straight from their start to end point (with the possibility to use intermediate break points).
'Shortest path' is similar, but it tries to go around any obstacles it might encounter between the start and end point (to avoid lines crossing nodes).
'Manhattan' draws connections by only using horizontal and vertical lines.
A RuleFlow process supports different types of nodes:
Start: The start of the ruleflow. A ruleflow should have exactly one start node. The start node cannot have incoming connections and should have one outgoing connection. Whenever a ruleflow process is started, execution starts here and then automatically continues to the first node linked to this start node, and so on. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Triggers: A start node can also specify additional triggers that can be used to automatically start the process. Examples are a 'constraint' trigger that automatically starts the process if a given rule / constraint is satisfied, or an 'event' trigger that automatically starts the process if a specific event is signalled.
End: The end of the ruleflow. A ruleflow should have one or more end nodes. The end node should have one incoming connection and cannot have outgoing connections. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Terminate: An end node can be terminating (default) or not. When a terminating end node is reached in the ruleflow, the ruleflow is terminated. If a ruleflow is terminated, all nodes that are still active in this ruleflow are cancelled first (which is possible if parallel paths are used). Non-terminating end nodes are simply end nodes in the process where the flow ends but other parallel paths still continue.
RuleFlowGroup: Represents a set of rules that need to be evaluated. The rules are evaluated when the node is reached. A RuleFlowGroup node should have one incoming connection and one outgoing connection. Rules can become part of a specific ruleflow group using the "ruleflow-group" attribute in the rule header. When a RuleFlowGroup node is reached in the ruleflow, the engine will start executing rules that are part of the corresponding ruleflow-group (if any). Execution will automatically continue to the next node when there are no more active rules in this ruleflow-group. This means that, during the execution of a ruleflow-group, new activations belonging to the currently active ruleflow-group can still be added to the agenda due to changes made to the facts by the other rules. Note that the ruleflow will immediately continue with the next node if it encounters a ruleflow-group where there are no active rules at that point. If the ruleflow-group was already active, the ruleflow-group will remain active and execution will only continue once all active rules of the ruleflow-group have been completed. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
RuleFlowGroup: The name of the ruleflow-group that represents the set of rules of this RuleFlowGroup node.
Timers: Timers that are linked to this node (see the 'timers' section for more details).
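A rule joins a ruleflow-group through the "ruleflow-group" attribute in its header; for example (the rule name and group name are illustrative):

```
rule "YourRule"
    ruleflow-group "group1"
when
    ...
then
    ...
end
```

This rule will only fire while a RuleFlowGroup node with ruleflow-group "group1" is active.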
Split: Allows you to create branches in your ruleflow. A split node should have one incoming connection and two or more outgoing connections. There are three types of splits currently supported:
AND means that the control flow will continue in all outgoing connections simultaneously (parallelism).
XOR means that exactly one of the outgoing connections will be chosen (decision). Which connection is chosen is decided by evaluating the constraints that are linked to each of the outgoing connections. Constraints are specified using the same syntax as the left-hand side of a rule. The constraint with the lowest priority number that evaluates to true is selected. Note that you should always make sure that at least one of the outgoing connections will evaluate to true at runtime (the ruleflow will throw an exception at runtime if it cannot find at least one outgoing connection). For example, you could use a connection which is always true (default) with a high priority number to specify what should happen if none of the other connections can be taken.
OR means that all outgoing connections whose condition evaluates to true are selected. Conditions are similar to the XOR split, except that the priorities are not taken into account. Note that you should make sure that at least one of the outgoing connections will evaluate to true at runtime (the ruleflow will throw an exception at runtime if it cannot find an outgoing connection).
It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Type: The type of the split node, i.e. AND, XOR or OR (see above).
Constraints: The constraints linked to each of the outgoing connections (in case of an (X)OR split).
Join: Allows you to synchronize multiple branches. A join node should have two or more incoming connections and one outgoing connection. The following types of joins are currently supported:
AND means that it will wait until all incoming branches are completed before continuing.
XOR means that it continues if one of its incoming branches has been completed. If it is triggered from more than one incoming connection, it will trigger the next node for each of those triggers.
Discriminator means that it continues as soon as one of its incoming branches has been completed. It then waits until all other incoming connections have been triggered as well, at which point it resets, so that it can trigger again when one of its incoming branches is completed.
n-of-m means that it continues if n of its m incoming branches have been completed. The n variable could either be hardcoded to a fixed value, or could also refer to a process variable that will contain the number of incoming branches to wait for.
It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Type: The type of the join node, i.e. AND, XOR, Discriminator or n-of-m (see above).
n: The number of incoming connections to wait for (in case of a n-of-m join).
Event wait (or milestone): Represents a wait state. An event wait should have one incoming connection and one outgoing connection. It specifies a constraint which defines how long the process should wait in this state before continuing. For example, a constraint in an order entry application might specify that the process should wait until no more errors are found in the given order. Constraints are specified using the same syntax as the left-hand side of a rule. When a wait node is reached in the ruleflow, the engine will check the associated constraint. If the constraint evaluates to true directly, the flow will continue immediately. Otherwise, the flow will continue once the constraint is satisfied later on, for example when a fact is inserted into, updated in, or removed from the working memory. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Constraint: Defines when the process can leave this state and continue.
SubFlow: Represents the invocation of another process from within this process. A sub-process node should have one incoming connection and one outgoing connection. When a SubFlow node is reached in the ruleflow, the engine will start the process with the given id. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
ProcessId: The id of the process that should be executed.
Wait for completion: If this property is true, the subflow node will only continue if that subflow process has terminated its execution (completed or aborted); otherwise it will continue immediately after starting the sub-process.
Independent: If this property is true, the sub-process is started as an independent process, which means that the subflow process will not be terminated if this process reaches an end node; otherwise the active sub-process will be cancelled on termination (or abortion) of the process.
On entry/exit actions: Actions that are executed upon entry / exit of this node.
Parameter in/out mapping: A SubFlow node can also define in- and out-mappings for variables. The values of the variables in this process with the given names in the in-mappings will be used as parameters (with the associated parameter names) when starting the sub-process. The values of the variables in the sub-process with the given names in the out-mappings will be copied to the variables of this process when the sub-process has been completed. Note that you can only use out-mappings when "Wait for completion" is set to true.
Timers: Timers that are linked to this node (see the 'timers' section for more details).
Action: represents an action that should be executed in this ruleflow. An action node should have one incoming connection and one outgoing connection. The associated action specifies what should be executed. An action should specify which dialect is used to specify the action (e.g. Java or MVEL), and the actual action code. The action code can refer to any globals, the special 'drools' variable which implements KnowledgeHelper (can for example be used to access the working memory (drools.getWorkingMemory())) and the special 'context' variable which implements the ProcessContext (can for example be used to access the current ProcessInstance or NodeInstance and get/set variables). When an action node is reached in the ruleflow, it will execute the action and continue with the next node. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Action: The action associated with this action node.
Timer: Represents a timer that can trigger one or multiple times after a given period of time. A Timer node should have one incoming connection and one outgoing connection. The timer delay specifies how long (in milliseconds) the timer should wait before triggering the first time. The timer period specifies the time between two subsequent triggers. A period of 0 means that the timer should only be triggered once. When a timer node is reached in the ruleflow, it will execute the associated timer. The timer is cancelled if the timer node is cancelled (e.g. by completing or aborting the process). Check out the section on timers for more information. The timer node contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Timer delay: The delay (in milliseconds) that the node should wait before triggering the first time.
Timer period: The period (in milliseconds) between two subsequent triggers. If the period is 0, the timer should only be triggered once.
Fault: A fault node can be used to signal an exceptional condition in the process. A fault node should have one incoming connection and no outgoing connections. When a fault node is reached in the ruleflow, it will throw a fault with the given name. The process will search for an appropriate exception handler that is capable of handling this kind of fault. If no fault handler is found, the process instance will be aborted. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
FaultName: The name of the fault. This name is used to search for appropriate exception handlers that are capable of handling this kind of fault.
FaultVariable: The name of the variable that contains the data associated with this fault. This data is also passed on to the exception handler (if one is found).
Event: An event node can be used to respond to (internal/external) events during the execution of the process. An event node should have no incoming connections and one outgoing connection. An event node specifies the type of event that is expected. Whenever that type of event is detected, the node connected to this event node will be triggered. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
EventType: The type of event that is expected.
VariableName: The name of the variable that will contain the data associated with this event (if any) when this event occurs.
Scope: An event could be used to listen to internal events only, i.e. events that are signalled to this process instance directly, using processInstance.signalEvent(String type, Object data). When an event node is defined as external, it will also be listening to (external) events that are signalled to the process engine directly, using workingMemory.signalEvent(String type, Object event).
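For illustration, signalling an event of type "orderCancelled" might look like this (the event type name and the data object are assumptions; the method signatures are as given above):

```
// Internal event: delivered to this one process instance only.
processInstance.signalEvent("orderCancelled", order);

// External event: delivered to the engine, and from there to any process
// instance containing an external event node for this event type.
workingMemory.signalEvent("orderCancelled", order);
```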
Human Task: Processes can also involve tasks that need to be executed by human actors. A task node represents an atomic task that needs to be executed by a human actor. A human task node should have one incoming connection and one outgoing connection. Human task nodes can be used in combination with swimlanes to assign multiple human tasks to similar actors. For more detail, check the 'human tasks' chapter. A human task node is actually nothing more than a specific type of work item node (of type "Human Task"). A human task node contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
TaskName: The name of the human task.
Priority: An integer indicating the priority of the human task.
Comment: A comment associated with the human task.
ActorId: The actor id that is responsible for executing the human task. A list of actor ids can be specified using a comma (',') as separator.
Skippable: Specifies whether the human task can be skipped (i.e. the actor decides not to execute the human task).
Content: The data associated with this task.
Swimlane: The swimlane this human task node is part of. Swimlanes make it easy to assign multiple human tasks to the same actor. See the human tasks chapter for more detail on how to use swimlanes.
Wait for completion: If this property is true, the human task node will only continue if the human task has been terminated (i.e. completed or any other terminal state); otherwise it will continue immediately after creating the human task.
On entry/exit actions: Actions that are executed upon entry / exit of this node.
Parameter mapping: Allows copying the value of process variables to parameters of the human task. Upon creation of the human tasks, the values will be copied.
Result mapping: Allows copying the value of result parameters of the human task to a process variable. Upon completion of the human task, the values will be copied. Note that you can only use result mappings when "Wait for completion" is set to true. A human task has a result variable "Result" that contains the data returned by the human actor. The variable "ActorId" contains the id of the actor that actually executed the task.
Timers: Timers that are linked to this node (see the 'timers' section for more details).
Composite: A composite node is a node that can contain other nodes (i.e. acts as a node container). It thus allows creating a part of the flow embedded inside a composite node. It also allows you to define additional variables and exception handlers that are accessible for all nodes inside this container. A composite node should have one incoming connection and one outgoing connection. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
StartNodeId: The id of the node (inside this node container) that should be triggered when this node is triggered.
EndNodeId: The id of the node (inside this node container) that represents the end of the flow contained in this node. When this node is completed, the composite node will also be completed and trigger its outgoing connection. All other executing nodes within this composite node will be cancelled.
Variables: Additional variables can be defined to store data during the execution of this node (see the 'data' section for more details).
Exception Handlers: Specify the behaviour when a fault occurs in this node container (see the 'exceptions' section for more details).
For Each: A for each node is a special kind of composite node that allows you to execute the contained flow multiple times, once for each element in a collection. A for each node should have one incoming connection and one outgoing connection. A for each node waits for completion of the embedded flow for each of its elements before continuing. It contains the following properties:
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
StartNodeId: The id of the node (inside this node container) that should be triggered for each of the elements in a collection.
EndNodeId: The id of the node (inside this node container) that represents the end of the flow contained in this node. When this node is completed, the execution of the for each node for that element will also be completed, and all other nodes still executing within this composite node for that element will be cancelled.
CollectionExpression: The name of a variable that represents the collection of elements that should be iterated over. The collection variable should be of type java.util.Collection.
VariableName: The name of the variable that will contain the selected element from the collection. This can be used to give nodes inside this composite node access to the selected element.
Work Item: Represents an (abstract) unit of work that should be executed in this process. All work that is executed outside the process engine should be represented (in a declarative way) using a work item. Different types of work items are predefined, like for example sending an email, logging a message, etc. However, the user can define domain-specific work items (using a unique name and by defining the parameters (input) and results (output) that are associated with this type of work). See the chapter about domain-specific processes for a detailed explanation and illustrative examples of how to define and use work items in your processes. When a work item node is reached in the process, the associated work item is executed. A work item node should have one incoming connection and one outgoing connection.
Id: The id of the node (which is unique within one node container).
Name: The display name of the node.
Wait for completion: If the property "Wait for completion" is true, the WorkItem node will only continue if the created work item has terminated its execution (completed or aborted); otherwise it will continue immediately after starting the work item.
Parameter mapping: Allows copying the value of process variables to parameters of the work item. Upon creation of the work item, the values will be copied.
Result mapping: Allows copying the value of result parameters of the work item to a process variable. Each type of work can define result parameters that will (potentially) be returned after the work item has been completed. A result mapping can be used to copy the value of the given result parameter to the given variable in this process. For example, the "FileFinder" work item returns a list of files that match the given search criteria as a result parameter 'Files'. This list of files can then be bound to a process variable for use within the process. Upon completion of the work item, the values will be copied. Note that you can only use result mappings when "Wait for completion" is set to true.
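The two mappings can be pictured as simple copies between the process's variables and the work item's parameter and result maps (the "FileFinder" work item and all names here are used purely for illustration; the real engine performs these copies internally):

```java
import java.util.*;

Map<String, Object> processVars = new HashMap<>();
processVars.put("searchPattern", "*.txt");

// Parameter mapping: process variable -> work item parameter, upon creation
Map<String, Object> workItemParams = new HashMap<>();
workItemParams.put("Pattern", processVars.get("searchPattern"));

// ... the work item executes, e.g. a "FileFinder" producing a 'Files' result ...
Map<String, Object> workItemResults = new HashMap<>();
workItemResults.put("Files", Arrays.asList("a.txt", "b.txt"));

// Result mapping: work item result -> process variable, upon completion
// (only possible when "Wait for completion" is true)
processVars.put("foundFiles", workItemResults.get("Files"));
```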
On entry/exit actions: Actions that are executed upon entry / exit of this node.
Timers: Timers that are linked to this node (see the 'timers' section for more details).
Additional parameters: Each type of work item can define additional parameters that are relevant for that type of work. For example, the "Email" work item defines additional parameters like 'From', 'To', 'Subject' and 'Body'. The user can either fill in values for these parameters directly, or define a parameter mapping that will copy the value of the given variable in this process to the given parameter (if both are specified, the mapping will have precedence). Parameters of type String can use #{expression} to embed a value in the String. The value will be retrieved when creating the work item and the #{...} will be replaced by the toString() value of the variable. The expression could simply be the name of a variable (in which case it will be resolved to the value of the variable), but more advanced MVEL expressions are possible as well, like #{person.name.firstname}.
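As a rough sketch of the substitution behaviour, the following resolves simple #{variableName} placeholders against a map of process variables (the real engine also supports full MVEL expressions, which this sketch does not):

```java
import java.util.*;
import java.util.regex.*;

Map<String, Object> vars = new HashMap<>();
vars.put("requester", "John");

String template = "Dear #{requester}, your order has shipped.";
Matcher m = Pattern.compile("#\\{(.+?)\\}").matcher(template);
StringBuffer resolved = new StringBuffer();
while (m.find()) {
    // replace each #{...} with the toString() value of the named variable
    m.appendReplacement(resolved, String.valueOf(vars.get(m.group(1))));
}
m.appendTail(resolved);
```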
While the flow graph focuses on specifying the control flow of the process, it is usually also necessary to look at the process from a data perspective. During the execution of a process, data can be retrieved, stored, passed on and (re)used throughout the entire process.
Runtime data can be stored during the execution of the process using variables. A variable is defined by a name and a data type. This could be a basic data type (e.g. boolean, integer, String) or any kind of Object. Variables can be defined inside a variable scope. The top-level scope is the variable scope of the process itself. Sub-scopes can be defined using a composite node. Variables that are defined in a sub-scope are only accessible for nodes within that scope.
Whenever a variable is accessed, the process will search for the appropriate variable scope that defines the variable. Nesting of variable scopes is allowed: a node will always search for a variable in its parent container. If the variable cannot be found, it will look in that one's parent container, etc. until the process instance itself is reached. If the variable cannot be found, either null will be returned (in case of a read) or an error message will be shown that the variable could not be found (in case of a write), after which the process will continue without setting the variable.
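The lookup rule can be sketched as walking a chain of scopes from the innermost sub-scope up to the process itself (the data structures here are illustrative, not the engine's internals):

```java
import java.util.*;

List<Map<String, Object>> scopeChain = new ArrayList<>();  // innermost first
Map<String, Object> processScope = new HashMap<>();
processScope.put("name", "John");
Map<String, Object> compositeScope = new HashMap<>();
compositeScope.put("counter", 1);
scopeChain.add(compositeScope);  // sub-scope of a composite node
scopeChain.add(processScope);    // top-level process scope

// hypothetical helper mirroring the resolution order described above
java.util.function.Function<String, Object> resolve = name -> {
    for (Map<String, Object> scope : scopeChain) {
        if (scope.containsKey(name)) return scope.get(name);
    }
    return null;  // a read of an unknown variable returns null
};
```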
Variables can be used in various ways:
person.setAge(10); // with "person" a variable in the process

Changing the value of a variable can be done through the knowledge context:
kcontext.setVariable(variableName, value);
Finally, processes and rules all have access to globals (globally defined variables that are considered immutable with regard to rule evaluation) and data in the knowledge session. The knowledge session can be accessed in actions using the knowledge context:
kcontext.getKnowledgeRuntime().insert( new Person("..") );
Constraints can be used in various locations in your processes, like for example decision points (i.e. an (X)OR split), wait constraints, etc. Drools Flow supports two types of constraints:
return person.getAge() > 20;

A similar example of a valid MVEL code constraint would be:
return person.age > 20;
Person( age > 20 )

which will search for a person older than 20 in the working memory.
Rule constraints do not have direct access to variables defined inside the process. It is however possible to refer to the current process instance inside a rule constraint, by adding the process instance to the working memory and matching to the process instance inside your rule constraint. We have added special logic to make sure that a variable "processInstance" of type WorkflowProcessInstance will only match the current process instance and not other process instances in the working memory. Note that you are responsible for inserting (and possibly updating) the process instance into the session yourself (for example using Java code or an on-entry, on-exit or explicit action in your process). The following example of a rule constraint will search for a person with the same name as the value stored in the variable "name" of the process:
processInstance: WorkflowProcessInstance()
Person( name == ( processInstance.getVariable("name") ) )
# add more constraints here ...
Actions can be used in different ways:
Actions have access to globals, the variables that are defined for the process, and the 'kcontext' variable. This variable is of type org.drools.runtime.process.ProcessContext and can be used for accessing the current process instance, getting and setting the value of variables, and accessing the knowledge runtime.
Drools currently supports two dialects: the Java and the MVEL dialect. Java actions should be valid Java code. MVEL actions can use the business scripting language MVEL to express the action. MVEL accepts any valid Java code but also provides additional support for nested access of parameters (e.g. person.name instead of person.getName()), and many other scripting improvements. Therefore, MVEL usually allows more business user friendly action expressions. For example, an action that prints out the name of the person in the "requester" variable of the process would look like this:
// using the Java dialect
System.out.println( person.getName() );

// Similarly, using the MVEL dialect
System.out.println( person.name );
During the execution of a process, the process engine makes sure that all the relevant tasks are executed according to the process plan, by requesting the execution of work items and waiting for the results. However, it is also possible that the process should respond to events that were not directly requested by the process engine. Explicitly representing these events in a process allows the process author to specify how the process should react whenever such events occur.
Events have a type and possibly data associated with the event. Users are free to define their own types of events and the data that is associated with this event.
A process can specify how to respond to events by using event nodes. An event node needs to specify the type of event the node is interested in. It can also define a variable name, which defines the variable that the data that is associated with the event will be copied to. This allows subsequent nodes in the process to access the event data and take appropriate action based on this data.
An event can be signalled to a running instance of a process in a number of ways:
context.getProcessInstance().signalEvent(type, eventData);
processInstance.signalEvent(type, eventData);
workingMemory.signalEvent(type, eventData);
Events could also be used to start a process. Whenever a start node defines an event trigger of a specific type, a new process instance will be started every time that type of event is signalled to the process engine.
Whenever an exceptional condition occurs during the execution of a process, a fault could be raised to signal the occurrence of this exception. The process will then search for an appropriate exception handler that is capable of handling such a fault.
Similar to events, faults also have a type and possibly data associated with the fault. Users are free to define their own types of faults and the data that is associated with this fault.
Faults can be created using a fault node: A fault node generates a fault of the given type (i.e. the fault name). If the fault node specifies a fault variable, the value of the given variable will be associated with the fault.
Whenever a fault is created, the process will search for an appropriate exception handler that is capable of handling the given type of fault. Processes and composite nodes both can define exception handlers for handling faults. Nesting of exception handlers is allowed: a node will always search for an appropriate exception handler in its parent container. If none is found, it will look in that one's parent container, etc. until the process instance itself is reached. If no exception handler can be found, the process instance will be aborted, resulting in the cancellation of all nodes inside the process.
Exception handlers can also specify a fault variable. The data associated with the fault (if any) will be copied to this variable if the exception handler is selected to handle the fault. This allows subsequent actions / nodes in the process to access the fault data and take appropriate action based on this data.
Exception handlers need to define an action that specifies how to respond to the given fault. In most cases, the behaviour that is needed to react to the given fault cannot be handled in one action. It is therefore recommended to have the exception handler signal an event of a specific type (in this case "Fault") using
context.getProcessInstance().signalEvent("FaultType", context.getVariable("FaultVariable"));
Timers can be used to wait for a predefined amount of time before triggering. They could be used to specify timeout behaviour, to trigger certain logic after a certain period, or to repeat it at regular intervals.
A timer needs to specify a delay and a period. The delay specifies the amount of time (in milliseconds) to wait after activation before triggering the timer the first time. The period defines the time between subsequent activations. If the period is 0, the timer will only be triggered once.
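The resulting trigger times can be sketched as follows (values are illustrative): the first firing happens after the delay, and subsequent firings are spaced by the period, with a period of 0 meaning a one-shot timer:

```java
import java.util.*;

long delay = 500, period = 200, activation = 0;  // illustrative values, in ms
List<Long> firings = new ArrayList<>();
long next = activation + delay;   // first firing: activation time + delay
for (int i = 0; i < 3; i++) {
    firings.add(next);
    if (period == 0) break;       // one-shot timer
    next += period;               // subsequent firings: spaced by the period
}
```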
The timer service is responsible for making sure that timers get triggered at the appropriate times. Timers can also be cancelled, meaning that the timer will no longer be triggered.
Timers can be used in two ways inside a process:
By default, the Drools engine is a passive component, meaning that it will only start processing if you tell it to (for example, you first insert the necessary data and then tell the engine to start processing). In passive mode, a timer that has been triggered will be put on the action queue. This means that it will be executed the next time the engine is told to start executing by the user (using fireAllRules()); if the engine is already or still running, the timer will be executed automatically.
When using timers, it usually makes sense to make the Drools engine an active component, meaning that it will execute actions whenever they become available (and not wait until the user tells it to start executing again). This means a timer would be executed as soon as it is triggered. To make the engine fire all actions continuously, you must call the fireUntilHalt() method. The engine will then continue firing until it is halted. The following fragment shows how to do this (note that you should call fireUntilHalt() in a separate thread, as it will only return once the engine has been halted, by either the user or some logic calling halt() on the session):
new Thread(new Runnable() {
    public void run() {
        ksession.fireUntilHalt();
    }
}).start();

// starting a new process instance
ksession.startProcess("...");
// any timer that will trigger will now be executed automatically
Drools already provides some functionality to define the order in which rules should be executed, like salience, activation groups, etc. When dealing with (possibly a lot of) large rule-sets, managing the order in which rules are evaluated might become complex. Ruleflow allows you to specify the order in which rule sets should be evaluated by using a flow chart. This allows you to define which rule sets should be evaluated in sequence or in parallel, to specify conditions under which rule sets should be evaluated, etc. This chapter contains a few ruleflow examples.
A rule flow is a graphical description of a sequence of steps that the rule engine needs to take, where the order is important. The ruleflow can also deal with conditional branching, parallelism, synchronization, etc.
To use a ruleflow to describe the order in which rules should be evaluated, you should first group rules into ruleflow-groups using the ruleflow-group rule attribute ("options" in the GUI). Then you should create a ruleflow graph (which is a flow chart) that graphically describes the order in which the ruleflow-groups should be evaluated.
rule 'YourRule'
    ruleflow-group 'group1'
    when
        ...
    then
        ...
end
This rule will then be placed in the ruleflow-group called "group1".
The above rule flow specifies that the rules in the group "Check Order" must be executed before the rules in the group "Process Order". This means that only rules which are marked as having a ruleflow-group of "Check Order" will be considered first, and then "Process Order". That's about it. You could achieve similar results either by using salience (setting priorities, but this is harder to maintain and makes the time-relationship implicit in the rules) or by using agenda groups. However, using a ruleflow makes the order of processing explicit, almost like a meta-rule, and makes managing more complex situations a lot easier.
In practice, if you are using ruleflow, you will most likely be doing more than setting a simple sequence of groups to progress through. You are more likely modeling branches of processing. In this case you use "Split" and "Join" items from the component palette. You use connections to connect from the start to ruleflow groups, or to splits, and from splits to groups, joins, etc. (i.e. basically like a simple flow chart that models your processing). You can work entirely graphically until you get the graph approximately right.
The above flow is a more complex example. This example is an insurance claim processing rule flow. A description: Initially the claim data validation rules are processed (these check for data integrity and consistency, that all the information is there). Next there is a decision "split" - based on a condition which the rule flow checks (the value of the claim), it will either move on to an "auto-settlement" group, or to another "split", which checks if there was a fatality in the claim. If there was a fatality, then it determines whether the "regular" or the fatality-specific rules will take effect. And so on. What you can see from this is that, based on a few conditions in the rule flow, the steps that the processing takes can be very different. Note that all the rules can be in one package - making maintenance easy. You can separate out the flow control from the actual rules.
Split types (referring to the above): When you click on a split, you will see the above properties panel. You then have to choose the type: AND, OR, or XOR. The interesting ones are OR and XOR: if you choose OR, then any of the "outputs" of the split can happen (i.e. processing can proceed in parallel down more than one path). If you choose XOR, then only one path will be followed.
If you choose OR or XOR, then in the row that has constraints, you will see a button on the right hand side that has "..." - click on this, and you will see the constraint editor. From this constraint editor, you set the conditions which the split will use to decide which "output path" will be chosen.
Choose the output path you want to set the constraints for (e.g. Autosettlement), and then you should see the following constraint editor:
This is a text editor where the constraints (which are like the condition part of a rule) are entered. These constraints operate on facts in the working memory (e.g. in the above example, it is checking for claims with a value of less than 250). Should this condition be true, then the path specified by it will be followed.
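Assuming a hypothetical Claim fact with a value field, the constraint from that example could be entered as a rule-style condition like:

```
Claim( value < 250 )
```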