JBoss.org Community Documentation
Artificial Intelligence (A.I.) is a very broad research area that focuses on "making computers think like people" and includes disciplines such as Neural Networks, Genetic Algorithms, Decision Trees, Frame Systems and Expert Systems. Knowledge representation is the area of A.I. concerned with how knowledge is represented and manipulated. Expert Systems use knowledge representation to facilitate the codification of knowledge into a knowledge base which can be used for reasoning - i.e., we can process data with this knowledge base to infer conclusions. Expert Systems are also known as Knowledge-based Systems or Knowledge-based Expert Systems and are considered 'applied artificial intelligence'. The process of developing with an Expert System is called Knowledge Engineering. EMYCIN was one of the first "shells" for an Expert System; it was created from the MYCIN medical diagnosis Expert System. Whereas early Expert Systems had their logic hard-coded, "shells" separated the logic from the system, providing an easy-to-use environment for user input. Drools is a Rule Engine that uses the rule-based approach to implement an Expert System and is more correctly classified as a Production Rule System.
The term "Production Rule" originates from formal grammar - where it is described as "an abstract structure that describes a formal language precisely, i.e., a set of rules that mathematically delineates a (usually infinite) set of finite-length strings over a (usually finite) alphabet" (wikipedia).
Business Rule Management Systems build additional value on top of a general-purpose Rule Engine by providing business-user-focused systems for rule creation, management, deployment, collaboration and analysis, along with end-user tools. Further adding to this value is the fast-evolving and popular "Business Rules Approach" methodology, which is helping to formalize the role of Rule Engines in the enterprise.
The term Rule Engine is quite ambiguous in that it can refer to any system that uses rules, in any form, that can be applied to data to produce outcomes; this includes simple systems like form validation and dynamic expression engines. The book "How to Build a Business Rules Engine" (2004) by Malcolm Chisholm exemplifies this ambiguity. The book is actually about how to build and alter a database schema to hold validation rules, and then shows how to generate VB code from those validation rules to validate data entry. While a very valid and useful topic for some, it caused quite a surprise to this author, unaware at the time of the subtleties among Rule Engines, who was hoping to find some hidden secrets to help improve the Drools engine. JBoss jBPM uses expressions and delegates in its Decision nodes, which control the transitions in a Workflow. Each node it evaluates has a rule set that dictates the transition to undertake - this is also a Rule Engine. While a Production Rule System is a kind of Rule Engine and also an Expert System, the validation and expression evaluation Rule Engines mentioned previously are not Expert Systems.
A Production Rule System is Turing complete, with a focus on knowledge representation to express propositional and first-order logic in a concise, unambiguous and declarative manner. The brain of a Production Rule System is an Inference Engine that is able to scale to a large number of rules and facts. The Inference Engine matches facts and data against Production Rules - also called Productions or just Rules - to infer conclusions which result in actions. A Production Rule is a two-part structure using First Order Logic for knowledge representation.
when <conditions> then <actions>;
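This two-part structure can be sketched in plain Java - illustratively only, not as Drools implements it - as a condition plus an action:

```java
import java.util.function.Consumer;
import java.util.function.Predicate;

// Illustrative sketch: a production rule is a two-part structure,
// a condition ("when") and an action ("then").
public class ProductionRule<T> {
    private final Predicate<T> condition; // when <conditions>
    private final Consumer<T> action;     // then <actions>

    public ProductionRule(Predicate<T> condition, Consumer<T> action) {
        this.condition = condition;
        this.action = action;
    }

    // Execute the action only if the condition holds for the fact;
    // returns whether the rule fired.
    public boolean fireIfMatches(T fact) {
        if (condition.test(fact)) {
            action.accept(fact);
            return true;
        }
        return false;
    }
}
```

A real engine, of course, does far more: it matches many rules against many facts efficiently and schedules the firings via an agenda.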
The process of matching the new or existing facts against Production Rules is called Pattern Matching, which is performed by the Inference Engine. There are a number of algorithms used for Pattern Matching by Inference Engines including:
Linear
Rete
Treat
Leaps
Drools implements and extends the Rete algorithm; Leaps used to be provided but was retired as it became unmaintained. The Drools Rete implementation is called ReteOO, signifying that Drools has an enhanced and optimized implementation of the Rete algorithm for object-oriented systems. Other Rete-based engines also have marketing terms for their proprietary enhancements to Rete, like RetePlus and Rete III. It is important to understand that names like Rete III are purely marketing: unlike the original published Rete algorithm, no details of the implementation are published. This makes questions such as "Does Drools implement Rete III?" nonsensical. The most common enhancements are covered in "Production Matching for Large Learning Systems (Rete/UL)" (1995) by Robert B. Doorenbos.
The Rules are stored in the Production Memory, while the facts that the Inference Engine matches against are kept in the Working Memory. Facts are asserted into the Working Memory, where they may then be modified or retracted. A system with a large number of rules and facts may result in many rules being true for the same fact assertion; these rules are said to be in conflict. The Agenda manages the execution order of these conflicting rules using a Conflict Resolution strategy.
A Production Rule System's Inference Engine is stateful and able to enforce truthfulness - called Truth Maintenance. A logical relationship can be declared by actions, which means the action's state depends on the inference remaining true; when it is no longer true, the logically dependent action is undone. The "Honest Politician" is an example of Truth Maintenance, which ensures that hope can only exist for a democracy while we have honest politicians.
when an honest Politician exists then logically assert Hope
when Hope exists then print "Hurrah!!! Democracy Lives"
when Hope does not exist then print "Democracy is Doomed"
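The Honest Politician behavior can be sketched as a (highly simplified, illustrative) truth maintenance system in Java: facts asserted "logically" are retracted automatically when the premise that justified them is retracted.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative Truth Maintenance sketch (not the Drools implementation).
public class TruthMaintenance {
    private final Set<String> facts = new HashSet<>();
    // logically asserted fact -> the premise that justifies it
    private final Map<String, String> justifications = new HashMap<>();

    public void insert(String fact) {
        facts.add(fact);
    }

    // Logical insertion: the fact only holds while its premise does.
    public void insertLogical(String fact, String premise) {
        facts.add(fact);
        justifications.put(fact, premise);
    }

    public void retract(String fact) {
        facts.remove(fact);
        // undo any logically dependent facts whose justification is gone
        justifications.entrySet().removeIf(e -> {
            if (e.getValue().equals(fact)) {
                facts.remove(e.getKey());
                return true;
            }
            return false;
        });
    }

    public boolean exists(String fact) {
        return facts.contains(fact);
    }
}
```

Retracting the honest politician automatically retracts Hope, exactly as the rules above describe.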
There are two methods of execution for Production Rule Systems - Forward Chaining and Backward Chaining; systems that implement both are called Hybrid Production Rule Systems. Understanding these two modes of operation is key to understanding why a Production Rule System is different and how to get the best from it. Forward chaining is 'data-driven' and thus reactionary: facts are asserted into the working memory, which results in one or more rules being concurrently true and scheduled for execution by the Agenda - we start with a fact, it propagates, and we end in a conclusion. Drools is a forward chaining engine.
Backward chaining is 'goal-driven', meaning that we start with a conclusion which the engine tries to satisfy. If it can't, it searches for conclusions that it can satisfy, known as 'sub goals', that will help satisfy some unknown part of the current goal. It continues this process until either the initial conclusion is proven or there are no more sub goals. Prolog is an example of a Backward Chaining engine; Drools will be adding support for Backward Chaining in its next major release.
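A minimal forward-chaining loop can be sketched in Java: rules derive new facts from existing ones, and the engine keeps firing until no rule produces anything new (a state called quiescence). This is an illustration of the data-driven idea, not how ReteOO works internally.

```java
import java.util.List;
import java.util.HashSet;
import java.util.Set;
import java.util.function.Function;

// Minimal forward-chaining sketch: data-driven, reacts to facts.
public class ForwardChaining {
    // A rule inspects the current fact set and derives a fact (or null).
    public interface Rule extends Function<Set<String>, String> {}

    public static Set<String> run(Set<String> facts, List<Rule> rules) {
        Set<String> memory = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {               // keep firing until quiescent
            changed = false;
            for (Rule rule : rules) {
                String derived = rule.apply(memory);
                if (derived != null && memory.add(derived)) {
                    changed = true;     // a new fact may enable more rules
                }
            }
        }
        return memory;
    }
}
```

Starting from the fact "honest politician", the Honest Politician rules would propagate through "hope" to "democracy lives" - fact in, conclusion out.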
Some frequently asked questions:
We will attempt to address these questions below.
Declarative Programming
Rule engines allow you to say "What to do" not "How to do it".
The key advantage of this point is that using rules can make it easy to express solutions to difficult problems and consequently have those solutions verified (rules are much easier to read than code).
Rule systems are capable of solving very, very hard problems, providing an explanation of how the solution was arrived at and why each "decision" along the way was made (not so easy with other AI systems like neural networks, or with the human brain - I have no idea why I scratched the side of the car).
Logic and Data Separation
Your data is in your domain objects, the logic is in the rules. This fundamentally breaks the OO coupling of data and logic, which can be an advantage or a disadvantage depending on your point of view. The upshot is that the logic can be much easier to maintain when there are changes in the future, as the logic is all laid out in rules. This can be especially true if the logic is cross-domain or multi-domain logic. Instead of the logic being spread across many domain objects or controllers, it can all be organized in one or more very distinct rules files.
Speed and Scalability
The Rete and Leaps algorithms, and their descendants such as Drools' ReteOO, provide very efficient ways of matching rule patterns to your domain object data. They are especially efficient when your datasets do not change entirely (as the rule engine can remember past matches). These algorithms are battle proven.
Centralization of Knowledge
By using rules, you create a repository of knowledge (a knowledge base) which is executable. This means it's a single point of truth for business policy, for instance - ideally rules are so readable that they can also serve as documentation.
Tool Integration
Tools such as Eclipse (and in future, Web based UIs) provide ways to edit and manage rules and get immediate feedback, validation and content assistance. Auditing and debugging tools are also available.
Explanation Facility
Rule systems effectively provide an "explanation facility" by being able to log the decisions made by the rule engine along with why the decisions were made.
Understandable Rules
By creating object models and, optionally, Domain Specific Languages that model your problem domain, you can set yourself up to write rules that are very close to natural language. They lend themselves to logic that is understandable to domain experts, who may be nontechnical, as rules are expressed in their language (all the program plumbing, the "How", is in the usual code, hidden away).
The shortest answer is "when there is no satisfactory traditional programming approach to solve the problem." Given that short answer, some more explanation is required. The reason why there is no "traditional" approach is possibly one of the following:
The problem is just too fiddly for traditional code.
The problem may not be complex, but you can't see a non-fragile way of building it.
The problem is beyond any obvious algorithm based solution.
It is a complex problem to solve; there are no obvious traditional solutions, or the problem isn't fully understood.
The logic changes often
The logic itself may be simple (but doesn't have to be) but the rules change quite often. In many organizations software releases are few and far between and rules can help provide the "agility" that is needed and expected in a reasonably safe way.
Domain experts (or business analysts) are readily available, but are nontechnical.
Domain experts are often a wealth of knowledge about business rules and processes. They typically are nontechnical, but can be very logical. Rules can allow them to express the logic in their own terms. Of course, they still have to think critically and be capable of logical thinking (many people in "soft" nontechnical positions do not have training in formal logic, so be careful and work with them, as by codifying business knowledge in rules, you will often expose holes in the way the business rules and processes are currently understood).
If rules are a new technology for your project teams, the overhead in getting going must be factored in. It is not a trivial technology, but this document tries to make it easier to understand.
Typically in a modern OO application you would use a rule engine to contain key parts of your business logic (what that means, of course, depends on the application) - especially the really messy parts. This is an inversion of the OO concept of encapsulating all the logic inside your objects. This is not to say that you throw out OO practices; on the contrary, in any real-world application business logic is just one part of the application. If you ever notice lots of "if", "else" and "switch" statements, an overabundance of strategy patterns, or other messy logic in your code that just doesn't feel right (and you keep coming back to fix it, either because you got it wrong or because the logic or your understanding changes), think about using rules. If you are faced with tough problems for which there are no algorithms or patterns, consider using rules.
Rules could be used embedded in your application or perhaps as a service. Often rules work best as a "stateful" component - hence they are often an integral part of an application. However, there have been successful cases of creating reusable stateless rule services.
In your organization it is important to think about the process you will use for updating rules in systems that are in production (the options are many, but different organizations have different requirements - often they are out of the control of the application vendors/project teams).
To quote a Drools mailing list regular:
It seems to me that in the excitement of working with rules engines, people forget that a rules engine is only one piece of a complex application or solution. Rules engines are not really intended to handle workflow or process executions, nor are workflow engines or process management tools designed to do rules. Use the right tool for the job. Sure, a pair of pliers can be used as a hammering tool in a pinch, but that's not what it's designed for.
--Dave Hamu
As rule engines are dynamic (dynamic in the sense that the rules can be stored and managed and updated as data), they are often looked at as a solution to the problem of deploying software (most IT departments seem to exist for the purpose of preventing software being rolled out). If this is the reason you wish to use a rule engine, be aware that rule engines work best when you are able to write declarative rules. As an alternative, you can consider data-driven designs (lookup tables), or script/process engines where the scripts are managed in a database and are able to be updated on the fly.
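A data-driven design of the kind mentioned above can be as simple as a lookup table. The sketch below is illustrative (the `DiscountTable` class and its categories are invented for the example): behavior is controlled by a table that could live in a database and be updated on the fly, without a code release.

```java
import java.util.Map;

// Sketch of a data-driven alternative to a rule engine: a lookup
// table controls behavior, so updating the table (e.g. in a database)
// changes the application without redeploying code.
public class DiscountTable {
    private final Map<String, Integer> discountByCategory;

    public DiscountTable(Map<String, Integer> table) {
        this.discountByCategory = table;
    }

    // Unknown categories fall back to no discount.
    public int discountFor(String category) {
        return discountByCategory.getOrDefault(category, 0);
    }
}
```

This works well while the control stays limited; as the later section on data-driven systems notes, such tables can grow out of control if extended too far.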
Hopefully the preceding sections have explained when you may want to use a rule engine.
Alternatives are script-based engines that provide the dynamic behavior needed for "changes on the fly" (there are many solutions here).
Alternatively, Process Engines (also capable of workflow) such as jBPM allow you to graphically (or programmatically) describe the steps in a process. Those steps can also involve decision points which are in themselves simple rules. Process engines and rules often work nicely together, so it is not an either-or proposition.
One key point to note with rule engines is that some rule engines are really scripting engines. The downside of scripting engines is that you are tightly coupling your application to the scripts (if they are rules, you are effectively calling rules directly), and this may cause more difficulty in future maintenance, as they tend to grow in complexity over time. The upside of scripting engines is that they can be easier to implement at first, and you can get quick results (and they are conceptually simpler for imperative programmers!).
Many people have also implemented data-driven systems successfully in the past (where there are control tables that store metadata that changes your application's behavior). These can work well when the control can remain very limited. However, they can quickly grow out of control if extended too much (such that only the original creators can change the application's behavior), or they cause the application to stagnate as they are too inflexible.
No doubt you have heard terms like "tight coupling" and "loose coupling" in systems design. Generally people assert that "loose" or "weak" coupling is preferable in design terms, due to the added flexibility it affords. Similarly, you can have "strongly coupled" and "weakly coupled" rules. Strongly coupled in this sense means that one rule "firing" will clearly result in another rule firing, and so on; in other words, there is a clear (probably obvious) chain of logic. If your rules are all strongly coupled, the chances are that the rule set will be inflexible in the future and, more significantly, that perhaps a rule engine is overkill (as the logic is a clear chain of rules that can be hard-coded; a Decision Tree may be in order). This is not to say that strong or weak coupling is inherently bad, but it is a point to keep in mind when considering a rule engine and how you capture the rules. "Loosely" coupled rules should result in a system that allows rules to be changed, removed and added without requiring changes to other rules that are unrelated.
Rules are written using First Order Logic, or predicate logic, which extends Propositional Logic. Emil Leon Post was the first to develop an inference based system using symbols to express logic - as a consequence of this he was able to prove that any logical system (including mathematics) could be expressed with such a system.
A proposition is a statement that can be classified as true or false. If its truth can be determined from the statement alone, it is said to be a "closed statement". In programming terms, this is an expression that does not reference any variables:
10 == 2 * 5
Expressions that evaluate against one or more variables, the facts, are "open statements", in that we cannot determine whether the statement is true until we have a variable instance to evaluate against:
Person.sex == "male"
With SQL if we look at the conclusion's action as simply returning the matched fact to the user:
select * from Person where Person.sex == "male"
For any rows, which represent our facts, that are returned we have inferred that those facts are male people. The following diagram shows how the above SQL statement and People table can be represented in terms of an Inference Engine.
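The SQL analogy translates directly to Java: matching the open statement against a set of facts returns the facts for which the proposition is true. The `Person` class and `males` method below are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the SQL analogy: the where-clause is an open statement
// evaluated against each fact; matched facts are "inferred" results.
public class PatternMatch {
    public static class Person {
        final String name;
        final String sex;
        public Person(String name, String sex) {
            this.name = name;
            this.sex = sex;
        }
    }

    // Equivalent of: select * from Person where Person.sex == "male"
    public static List<Person> males(List<Person> facts) {
        List<Person> matched = new ArrayList<>();
        for (Person p : facts) {
            if (p.sex.equals("male")) { // the open statement
                matched.add(p);
            }
        }
        return matched;
    }
}
```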
So in Java we can say that a simple proposition is of the form 'variable' 'operator' 'value', where we often refer to the 'value' as a literal value; a proposition can be thought of as a field constraint. Further to this, propositions can be combined with conjunctive and disjunctive connectives, which is the logic theorist's way of saying '&&' and '||'. The following shows two open propositional statements connected together with a single disjunctive connective.
person.getEyeColor().equals("blue") || person.getEyeColor().equals("green")
This can be expressed using a disjunctive Conditional Element connective - which actually results in the generation of two rules, to represent the two possible logic outcomes.
Person( eyeColour == "blue" ) || Person( eyeColor == "green" )
Disjunctive field constraint connectives could also be used, and would not result in multiple rule generation.
Person( eyeColour == "blue"||"green" )
Propositional Logic is not Turing complete, limiting the problems you can define, because it cannot express criteria for the structure of data. First Order Logic (FOL), or Predicate Logic, extends Propositional Logic with two new quantifier concepts to allow expressions defining structure - specifically universal and existential quantifiers. Universal quantifiers allow you to check that something is true for everything; this is normally supported by the forall conditional element. Existential quantifiers check for the existence of something, in that it occurs at least once; this is supported with the not and exists conditional elements.
Imagine we have two classes - Student and Module. Module represents each of the courses the Student attended for that semester, referenced by the List collection. At the end of the semester each Module has a score. If the Student has a Module score below 40 then they will fail that semester - the existential quantifier can be used with the "less than 40" open proposition to check for the existence of a Module that satisfies the specified criteria.
public class Student {
    private String name;
    private List modules;
    ...
}

public class Module {
    private String name;
    private String studentName;
    private int score;
    ...
}
Java is Turing complete in that you can write code, among other things, to iterate over data structures to check for existence. The following should return a List of students who have failed the semester.
List failedStudents = new ArrayList();
for ( Iterator studentIter = students.iterator(); studentIter.hasNext(); ) {
    Student student = (Student) studentIter.next();
    for ( Iterator it = student.getModules().iterator(); it.hasNext(); ) {
        Module module = (Module) it.next();
        if ( module.getScore() < 40 ) {
            failedStudents.add( student );
            break;
        }
    }
}
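With generics and the Streams API, the same existential check reads more directly; `anyMatch` plays the role of the existential quantifier. The simplified `Student` and `Module` classes below are illustrative stand-ins for the ones sketched earlier.

```java
import java.util.List;
import java.util.stream.Collectors;

// Streams version of the existential check: a student fails the
// semester if there exists a module with score < 40.
public class FailedStudents {
    public static class Module {
        final int score;
        public Module(int score) { this.score = score; }
    }

    public static class Student {
        final String name;
        final List<Module> modules;
        public Student(String name, List<Module> modules) {
            this.name = name;
            this.modules = modules;
        }
    }

    public static List<Student> failed(List<Student> students) {
        return students.stream()
                // anyMatch is the existential quantifier here
                .filter(s -> s.modules.stream().anyMatch(m -> m.score < 40))
                .collect(Collectors.toList());
    }
}
```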
Early SQL implementations were not Turing complete, as they did not provide quantifiers to access the structure of data. Modern SQL engines do allow nesting of SQL, which can be combined with keywords like 'exists' and 'in'. The following shows SQL and a rule that each return the set of Students who have failed the semester.
select * from Students s
where exists (
    select * from Modules m
    where m.student_name = s.name and m.score < 40
)
rule "Failed_Students"
when
    exists( $student : Student() && Module( student == $student, score < 40 ) )
The Rete algorithm was invented by Dr. Charles Forgy and documented in his PhD thesis in 1978-79. A simplified version of the paper was published in 1982 (http://citeseer.ist.psu.edu/context/505087/0). The word "Rete" is Latin for "net" or "network". The Rete algorithm can be broken into two parts: rule compilation and runtime execution.
The compilation algorithm describes how the rules in the Production Memory are processed to generate an efficient discrimination network. In non-technical terms, a discrimination network is used to filter data as it propagates through the network. At the top of the network the nodes would have many matches, and as we go down the network there would be fewer matches. At the very bottom of the network are the terminal nodes. In Dr. Forgy's 1982 paper he described four basic node types: root, 1-input, 2-input and terminal.
The root node is where all objects enter the network. From there, each object immediately goes to the ObjectTypeNode. The purpose of the ObjectTypeNode is to make sure the engine doesn't do more work than it needs to. For example, say we have two objects: Account and Order. If the rule engine tried to evaluate every single node against every object, it would waste a lot of cycles. To make things efficient, the engine should only pass the object to the nodes that match the object type. The easiest way to do this is to create an ObjectTypeNode and have all 1-input and 2-input nodes descend from it. This way, if an application asserts a new Account, it won't propagate to the nodes for the Order object. In Drools, when an object is asserted it retrieves a list of valid ObjectTypeNodes via a lookup in a HashMap from the object's Class; if this list doesn't exist it scans all the ObjectTypeNodes, finding valid matches which it caches in the list. This enables Drools to match against any Class type that matches with an instanceof check.
ObjectTypeNodes can propagate to AlphaNodes, LeftInputAdapterNodes and BetaNodes. AlphaNodes are used to evaluate literal conditions. Although the 1982 paper only covers equality conditions, many Rete implementations support other operations. For example, Account.name == "Mr Trout" is a literal condition. When a rule has multiple literal conditions for a single object type, they are linked together. This means that if an application asserts an Account object, it must first satisfy the first literal condition before it can proceed to the next AlphaNode. In Dr. Forgy's paper, he refers to these as IntraElement conditions. The following shows the AlphaNode combinations for Cheese( name == "cheddar", strength == "strong" ):
Drools extends Rete by optimizing the propagation from ObjectTypeNode to AlphaNode using hashing. Each time an AlphaNode is added to an ObjectTypeNode it adds the literal value as a key to the HashMap with the AlphaNode as the value. When a new instance enters the ObjectType node, rather than propagating to each AlphaNode, it can instead retrieve the correct AlphaNode from the HashMap - avoiding unnecessary literal checks.
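The effect of that optimization can be sketched in a few lines of Java. This is an illustration of the idea only (the class and method names are invented): one hash lookup on the literal value replaces N separate equality checks against each AlphaNode.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the hashed alpha-node optimization: instead of testing a
// fact against every literal condition in turn, the matching node is
// looked up directly by the literal value.
public class HashedAlphaNodes {
    // literal value -> the alpha node testing "field == <literal>"
    private final Map<String, String> alphaNodeByLiteral = new HashMap<>();

    // Analogous to attaching an AlphaNode for name == <literal>.
    public void addAlphaNode(String literal, String nodeName) {
        alphaNodeByLiteral.put(literal, nodeName);
    }

    // One hash lookup replaces N literal equality checks;
    // null means no alpha node matches this value.
    public String propagate(String factFieldValue) {
        return alphaNodeByLiteral.get(factFieldValue);
    }
}
```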
There are two two-input node types, JoinNode and NotNode; both are types of BetaNode. BetaNodes are used to compare two objects, and their fields, to each other. The objects may be of the same or different types. By convention we refer to the two inputs as left and right. The left input for a BetaNode is generally a list of objects; in Drools this is a Tuple. The right input is a single object. Two Nodes can be used to implement 'exists' checks. BetaNodes also have memory. The left input is called the Beta Memory and remembers all incoming tuples. The right input is called the Alpha Memory and remembers all incoming objects. Drools extends Rete by performing indexing on the BetaNodes. For instance, if we know that a BetaNode is performing a check on a String field, as each object enters we can do a hash lookup on that String value. This means that when facts enter from the opposite side, instead of iterating over all the facts to find valid joins, we do a lookup that returns potentially valid candidates. At any point a valid join is found, the Tuple is joined with the Object - which is referred to as a partial match - and then propagated to the next node.
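The beta-node indexing idea can also be sketched briefly. In this illustrative example (the class is invented, not the Drools implementation), right-input facts are indexed by the join field, so a tuple arriving on the left performs a hash lookup for candidate joins instead of iterating over every fact in the Alpha Memory.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of beta-node indexing: the right input (Alpha Memory) is
// indexed by the join value, so joins become hash lookups.
public class IndexedBetaMemory {
    // join value (e.g. a cheese name) -> facts carrying that value
    private final Map<String, List<String>> rightIndex = new HashMap<>();

    public void insertRight(String joinValue, String fact) {
        rightIndex.computeIfAbsent(joinValue, k -> new ArrayList<>())
                  .add(fact);
    }

    // A left tuple enters: only potentially valid candidates are
    // returned, instead of scanning every right-input fact.
    public List<String> candidatesFor(String joinValue) {
        return rightIndex.getOrDefault(joinValue, List.of());
    }
}
```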
To enable the first Object, in the above case Cheese, to enter the network we use a LeftInputAdapterNode - this takes an Object as an input and propagates a single-Object Tuple.
Terminal nodes are used to indicate when a single rule has matched all its conditions - at this point we say the rule has a full match. A rule with an 'or' conditional disjunctive connective results in subrule generation for each possible logical branch; thus one rule can have multiple terminal nodes.
Drools also performs node sharing. Many rules repeat the same patterns; node sharing allows us to collapse those patterns so that they don't have to be re-evaluated for every single instance. The following two rules share the same first pattern, but not the last:
rule
when
    Cheese( $cheddar : name == "cheddar" )
    $person : Person( favouriteCheese == $cheddar )
then
    System.out.println( $person.getName() + " likes cheddar" );
end

rule
when
    Cheese( $cheddar : name == "cheddar" )
    $person : Person( favouriteCheese != $cheddar )
then
    System.out.println( $person.getName() + " does not like cheddar" );
end
As you can see below, the compiled Rete network shows the alpha node is shared, but the beta nodes are not. Each beta node has its own TerminalNode. Had the second pattern been the same it would have also been shared.
Drools is split into two main parts: Authoring and Runtime.
The authoring process involves the creation of DRL or XML rule files which are fed into a parser, defined by an ANTLR 3 grammar. The parser checks for correctly formed grammar and produces an intermediate structure, the "descr", which is the AST that "describes" the rules. The AST is then passed to the PackageBuilder, which produces Packages. The PackageBuilder also undertakes any code generation and compilation that is necessary for the creation of the Package. A Package object is self-contained and deployable, in that it's a serializable object consisting of one or more rules.
A RuleBase is a runtime component which consists of one or more Packages. Packages can be added and removed from the RuleBase at any time. A RuleBase can instantiate one or more WorkingMemories at any time; a weak reference is maintained, unless configured otherwise. The Working Memory consists of a number of sub components, including Working Memory Event Support, Truth Maintenance System, Agenda and Agenda Event Support. Object insertion may result in the creation of one or more Activations. The Agenda is responsible for scheduling the execution of these Activations.
Four classes are used for authoring: DrlParser, XmlParser, ProcessBuilder and PackageBuilder. The two parser classes produce "descr" (description) AST models from a provided Reader instance. ProcessBuilder reads in an XStream serialization representation of the Rule Flow. PackageBuilder provides convenience APIs so that you can mostly forget about those classes. The three convenience methods are addPackageFromDrl, addPackageFromXml and addRuleFlow - all take an instance of Reader as an argument. The example below shows how to build a package that includes both XML and DRL rule files and a ruleflow file, all of which are on the classpath. Note that all added package sources must be of the same package namespace for the current PackageBuilder instance!
Example 2.1. Building a Package from Multiple Sources
PackageBuilder builder = new PackageBuilder();
builder.addPackageFromDrl( new InputStreamReader( getClass().getResourceAsStream( "package1.drl" ) ) );
builder.addPackageFromXml( new InputStreamReader( getClass().getResourceAsStream( "package2.xml" ) ) );
builder.addRuleFlow( new InputStreamReader( getClass().getResourceAsStream( "ruleflow.rfm" ) ) );
Package pkg = builder.getPackage();
It is essential that you always check your PackageBuilder for errors before attempting to use it. While the RuleBase does throw an InvalidRulePackage exception when a broken Package is added, the detailed error information is stripped and only a toString() equivalent is available. If you interrogate the PackageBuilder itself, much more information is available.
Example 2.2. Checking the PackageBuilder for errors
PackageBuilder builder = new PackageBuilder();
builder.addPackageFromDrl( new InputStreamReader( getClass().getResourceAsStream( "package1.drl" ) ) );
PackageBuilderErrors errors = builder.getErrors();
PackageBuilder is configurable using the PackageBuilderConfiguration class. It has default values that can be overridden programmatically via setters, or on first use via property settings. At the heart of the settings is the ChainedProperties class, which searches a number of locations looking for drools.packagebuilder.conf files; as it finds them it adds the properties to the master properties list, providing a level of precedence. In order of precedence those locations are: System Properties, a user-defined file in System Properties, the user home directory, the working directory, and various META-INF locations. Further to this, the drools-compiler jar has the default settings in its META-INF directory. Currently the PackageBuilderConfiguration handles the registry of Accumulate functions, the registry of Dialects, and the main ClassLoader.
Drools has a pluggable Dialect system, which allows other languages to compile and execute expressions and blocks; the two currently supported dialects are Java and MVEL. Each has its own DialectConfiguration implementation; the javadocs provide details for each setter/getter and the property names used to configure them.

The JavaDialectConfiguration allows the compiler and language levels to be configured. You can override the default by setting the drools.dialect.java.compiler property in a packagebuilder.conf file that the ChainedProperties instance will find, or you can do it at runtime as shown below.
Example 2.3. Configuring the JavaDialectConfiguration to use JANINO via a setter

PackageBuilderConfiguration cfg = new PackageBuilderConfiguration();
JavaDialectConfiguration javaConf = (JavaDialectConfiguration) cfg.getDialectConfiguration( "java" );
javaConf.setCompiler( JavaDialectConfiguration.JANINO );
If you do not have the Eclipse JDT Core in your classpath, you must override the compiler setting before you instantiate the PackageBuilder. You can either do that with a packagebuilder properties file that the ChainedProperties class will find, or you can do it programmatically as shown below; note that this time properties are used to inject the value at startup.
Example 2.4. Configuring the JavaDialectConfiguration to use JANINO via properties

Properties properties = new Properties();
properties.setProperty( "drools.dialect.java.compiler", "JANINO" );
PackageBuilderConfiguration cfg = new PackageBuilderConfiguration( properties );
JavaDialectConfiguration javaConf = (JavaDialectConfiguration) cfg.getDialectConfiguration( "java" );
assertEquals( JavaDialectConfiguration.JANINO, javaConf.getCompiler() ); // demonstrates that the compiler is correctly configured
Currently it allows alternative compilers (Janino, Eclipse JDT) to be specified, as well as different JDK source levels ("1.4" and "1.5") and a parent class loader. The default compiler is Eclipse JDT Core at source level "1.4", with the parent class loader set to Thread.currentThread().getContextClassLoader().
The following shows how to specify the JANINO compiler programmatically:
Example 2.5. Configuring the PackageBuilder to use JANINO programmatically
PackageBuilderConfiguration conf = new PackageBuilderConfiguration();
conf.setCompiler( PackageBuilderConfiguration.JANINO );
PackageBuilder builder = new PackageBuilder( conf );
The MVELDialectConfiguration is much simpler and only allows strict mode to be turned on and off; by default, strict is true. This means all method calls must be type safe, either by inference or by explicit typing.
A RuleBase is instantiated using the RuleBaseFactory; by default this returns a ReteOO RuleBase. Packages are added, in turn, using the addPackage method. You may specify packages of any namespace, and multiple packages of the same namespace may be added.
Example 2.6. Adding a Package to a new RuleBase
RuleBase ruleBase = RuleBaseFactory.newRuleBase();
ruleBase.addPackage( pkg );
A RuleBase contains one or more packages of rules, ready to be used; i.e., they have been validated and compiled. A RuleBase is serializable, so it can be deployed to JNDI or other such services. Typically, a rulebase would be generated and cached on first use, to avoid the expense of continually regenerating it.
A RuleBase
instance is thread safe, in the sense that you can have
the one instance shared across threads in your application, which may be a
web application, for instance. The most common operation on a rulebase is
to create a new rule session; either stateful or stateless.
The RuleBase
also holds references to any stateful session that it
has spawned, so that if rules are changing (or being added/removed etc.
for long running sessions), they can be updated with the latest rules
(without necessarily having to restart the session). You can specify not
to maintain a reference, but only do so if you know the RuleBase
will not
be updated. References are not stored for stateless sessions.
ruleBase.newStatefulSession(); // maintains a reference
ruleBase.newStatefulSession( false ); // does not maintain a reference
Packages can be added and removed at any time - all changes will be
propagated to the existing stateful sessions; don't forget to call
fireAllRules()
for resulting Activations to fire.
ruleBase.addPackage( pkg ); // add a package instance
ruleBase.removePackage( "org.com.sample" ); // remove a package, and all its parts, by its namespace
ruleBase.removeRule( "org.com.sample", "my rule" ); // remove a specific rule from a namespace
While there is a method to remove an individual rule, there is no method to add an individual rule; to achieve this, just add a new package containing that single rule.
RuleBaseConfiguration can be used to specify additional behavior of the RuleBase. A RuleBaseConfiguration becomes immutable after it has been added to a RuleBase. Nearly all the engine optimizations can be turned on and off from here, and the execution behavior can also be set. Users will generally be concerned with insertion behavior (identity or equality) and cross product behavior (remove or keep identity-equals cross products).
RuleBaseConfiguration conf = new RuleBaseConfiguration();
conf.setAssertBehaviour( AssertBehaviour.IDENTITY );
conf.setRemoveIdentities( true );
RuleBase ruleBase = RuleBaseFactory.newRuleBase( conf );
The Working Memory holds references to all data that has been "inserted" into it (until retracted), and it is the place where the interaction with your application occurs. Working memories are stateful objects; they may be short-lived or long-lived.
Facts are objects (beans) from your application that you insert into the working memory. Facts are any Java objects which the rules can access. The rule engine does not "clone" facts at all; it is all references/pointers at the end of the day. Facts are your application's data. Strings and other classes without getters and setters are not valid Facts and can't be used with Field Constraints, which rely on the JavaBean standard of getters and setters to interact with the object.
"Insert" is the act of telling the WorkingMemory
about the facts.
WorkingMemory.insert(yourObject)
for example. When you insert a fact, it
is examined for matches against the rules etc. This means ALL of the
work is done during insertion; however, no rules are executed until you
call fireAllRules()
. You don't call fireAllRules()
until after you
have finished inserting your facts. This is a common misunderstanding by people who think the work happens when you call fireAllRules(). Expert systems typically use the term assert or assertion to refer to facts made available to the system; however, because assert became a keyword in most languages, we have moved to using the insert keyword, so expect to hear the two used interchangeably.
When an Object is inserted, a FactHandle is returned. This FactHandle is the token used to represent your inserted Object inside the WorkingMemory; it is used when interacting with the WorkingMemory when you wish to retract or modify an object.
Cheese stilton = new Cheese("stilton");
FactHandle stiltonHandle = session.insert( stilton );
As mentioned in the Rule Base section, a Working Memory may operate in two assertion modes: equality and identity. Identity is the default.
Identity means the Working Memory uses an IdentityHashMap to store all asserted Objects. New instance assertions always result in the return of a new FactHandle; if the same instance is asserted twice, the previous fact handle is returned, i.e. the second insertion of the same fact is ignored.
Equality means the Working Memory uses a HashMap to store all asserted Objects. A new instance assertion will only return a new FactHandle if no equal object has already been asserted.
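The difference between the two modes comes down to the semantics of the two map types. The following plain-Java sketch illustrates only that underlying map behavior; it is not the Drools implementation, and the class and method names are invented for the example:

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class AssertModeDemo {

    // Returns true when the map already "knows" the second object, i.e. a
    // second insertion would reuse the previous handle instead of creating
    // a new one.
    static boolean seenBefore(Map<String, Integer> map, String first, String second) {
        map.put(first, 1);
        return map.containsKey(second);
    }

    public static void main(String[] args) {
        // Two distinct but equal instances, like two equal facts.
        String a = new String("stilton");
        String b = new String("stilton");

        // Equality mode: a HashMap treats equal objects as the same fact.
        boolean equalityMode = seenBefore(new HashMap<>(), a, b);

        // Identity mode: an IdentityHashMap only matches the same reference,
        // so the second instance would get its own FactHandle.
        boolean identityMode = seenBefore(new IdentityHashMap<>(), a, b);

        System.out.println(equalityMode); // true
        System.out.println(identityMode); // false
    }
}
```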
"Retraction" is when you retract a fact from the Working Memory, which means it will no longer track and match that fact, and any rules that are activated and dependent on that fact will be cancelled. Note that it is possible to have rules that depend on the "non existence" of a fact, in which case retracting a fact may cause a rule to activate (see the not and exists keywords). Retraction is done using the FactHandle that was returned during the insertion.
Cheese stilton = new Cheese("stilton");
FactHandle stiltonHandle = session.insert( stilton );
....
session.retract( stiltonHandle );
The Rule Engine must be notified of modified Facts so that they can be re-processed. Internally, modification is actually a retract followed by an insert; the engine clears its knowledge of that fact and then starts again. Use the modifyObject method to notify the Working Memory of changed objects, for objects that are not able to notify the Working Memory themselves.
Notice that modifyObject always takes the modified object as a second parameter; this allows you to specify new instances for immutable objects. The update() method can only be used with objects that have shadow proxies turned on. If you do not use shadow proxies, then you must call session.modifyRetract() before making your changes and session.modifyInsert() after the changes.
Cheese stilton = new Cheese("stilton");
FactHandle stiltonHandle = workingMemory.insert( stilton );
....
stilton.setPrice( 100 );
workingMemory.update( stiltonHandle, stilton );
Globals are named objects that can be passed in to the rule engine without needing to be inserted. Most often these are used for static information, for services that are used in the RHS of a rule, or as a means to return objects from the rule engine. If you use a global on the LHS of a rule, make sure it is immutable. A global must first be declared in the DRL before it can be set on the session.
global java.util.List list
With the Rule Base now aware of the global identifier and its type, any session is able to call session.setGlobal; failure to declare the global's type and identifier first will result in an exception being thrown. To set the global on the session, use session.setGlobal( identifier, value ):
List list = new ArrayList();
session.setGlobal( "list", list );
If a rule evaluates on a global before you set it you will get a
NullPointerException
.
A shadow fact is a shallow copy of an asserted object. Shadow facts are cached copies of objects asserted to the working memory. Shadow facts are commonly known as a feature of JESS (the Java Expert System Shell).
The origins of shadow facts trace back to the concept of truth maintenance. The basic idea is that an expert system should guarantee that the derived conclusions are accurate. A running system may alter a fact during evaluation. When this occurs, the rule engine must know a modification occurred and handle the change appropriately. There are generally two ways to guarantee truthfulness. The first is to lock all the facts during the inference process. The second is to make a cached copy of an object and force all modifications to go through the rule engine; this way, the changes are processed in an orderly fashion. Shadow facts are particularly important in multi-threaded environments, where an engine is shared by multiple sessions. Without truth maintenance, a system has a difficult time proving its results are accurate. The primary benefit of shadow facts is that they make development easier. When developers are forced to keep track of fact modifications, it can lead to errors which are difficult to debug. Building a moderately complex system using a rule engine is hard enough without adding the burden of tracking changes to facts and when they should notify the rule engine.
Drools 4.0 has full support for Shadow Facts, implemented as transparent lazy proxies. Shadow facts are enabled by default and are not visible from external code, not even inside code blocks in rules.
Since Drools implements Shadow Facts as proxies, fact classes must either be immutable, or must not be final nor have final methods. If a fact class is final or has final methods and is still a mutable class, the engine is not able to create a proper shadow fact for it, resulting in unpredictable behavior.
Although shadow facts are a great way of ensuring engine integrity, they add some overhead to the reasoning process. For this reason, Drools 4.0 supports fine-grained control over them, with the ability to enable/disable them for each individual class.
It is possible to disable shadow facts for your classes if you meet the following requirements:
Immutable classes are safe: if a class is immutable it does not require shadow facts. Just to clarify, a class is immutable from the engine perspective if once an instance is asserted into the working memory, no attribute will change until it is retracted.
Inside your rules, attributes are only changed using modify() blocks: both Drools dialects (MVEL and Java) have the modify block construct. If all attribute value changes for a given class happen inside modify() blocks, you can disable shadow facts for that class.
Example 2.7. Example: modify() block using Java dialect
rule "Eat Cheese"
when
    $p: Person( status == "hungry" )
    $c: Cheese( )
then
    retract( $c );
    modify( $p ) {
        setStatus( "full" ),
        setAge( $p.getAge() + 1 )
    }
end
Example 2.8. Example: modify() block using MVEL dialect
rule "Eat Cheese"
    dialect "mvel"
when
    $p: Person( status == "hungry" )
    $c: Cheese( )
then
    retract( $c );
    modify( $p ) {
        status = "full",
        age = $p.age + 1
    }
end
In your application, attributes are only changed between calls to modifyRetract() and modifyInsert(): this way, the engine becomes aware that attributes will be changed and can prepare itself for them.
Example 2.9. Example: safely modifying attributes in the application code
// create session
StatefulSession session = ruleBase.newStatefulSession();
// get facts
Person person = new Person( "Bob", 30 );
person.setLikes( "cheese" );
// insert facts
FactHandle handle = session.insert( person );
// do application stuff and/or fire rules
session.fireAllRules();
// wants to change attributes?
session.modifyRetract( handle ); // call modifyRetract() before doing changes
person.setAge( 31 );
person.setLikes( "chocolate" );
session.modifyInsert( handle, person ); // call modifyInsert() after the changes
To disable shadow facts for all classes, set the following property in a configuration file or as a system property:
drools.shadowProxy = false
Alternatively, it is possible to disable them through an API call:
RuleBaseConfiguration conf = new RuleBaseConfiguration();
conf.setShadowProxy( false );
...
RuleBase ruleBase = RuleBaseFactory.newRuleBase( conf );
To disable the shadow proxy for a list of classes only, use the following property instead, or the equivalent API:
drools.shadowproxy.exclude = org.domainy.* org.domainx.ClassZ
As shown above, a space separated list is used to specify more than one class, and '*' is used as a wild card.
If your fact objects are JavaBeans, you can implement a property change listener for them and then tell the rule engine about it. This means that the engine will automatically know when a fact has changed and behave accordingly (you don't need to tell it that the fact is modified). There are proxy libraries that can help automate this (a future version of Drools will bundle some to make it easier). To use an Object in dynamic mode, specify true for the second parameter of insert.
Cheese stilton = new Cheese("stilton");
FactHandle stiltonHandle = workingMemory.insert( stilton, true ); // specifies that this is a dynamic fact
To make a JavaBean dynamic, add a PropertyChangeSupport field member along with the two add/remove methods, and make sure that each setter notifies the PropertyChangeSupport instance of the change.
private final PropertyChangeSupport changes = new PropertyChangeSupport( this );
...
public void addPropertyChangeListener(final PropertyChangeListener l) {
    this.changes.addPropertyChangeListener( l );
}

public void removePropertyChangeListener(final PropertyChangeListener l) {
    this.changes.removePropertyChangeListener( l );
}
...
public void setState(final String newState) {
    String oldState = this.state;
    this.state = newState;
    this.changes.firePropertyChange( "state", oldState, newState );
}
To support conditional elements like not
(which will be covered
later on), there is a need to "seed" the engine with something known as
the "Initial Fact". This fact is a special fact that is not intended to
be seen by the user.
On the first working memory action (assert
, fireAllRules
) on a
fresh working memory, the Initial Fact will be propagated through the
RETE network. This allows rules that have no LHS, or that do not use normal facts (such as rules that use from to pull data from an external source), to be evaluated. For instance, if a new working memory is created and no facts are asserted, calling fireAllRules will cause the Initial Fact to propagate, possibly activating rules (otherwise nothing would happen, as there is no other fact to start with).
The StatefulSession extends the WorkingMemory class, simply adding async methods and a dispose() method. The RuleBase retains a reference to each StatefulSession it creates, so that it can update them when new rules are added; dispose() is needed to release the StatefulSession reference from the RuleBase, without which you can get memory leaks.
The StatelessSession wraps the WorkingMemory instead of extending it; its main focus is on decision service type scenarios.
Example 2.11. Creating a StatelessSession
StatelessSession session = ruleBase.newStatelessSession();
session.execute( new Cheese( "cheddar" ) );
The API is reduced for the problem domain and is thus much simpler, which in turn can make maintenance of those services easier. The RuleBase never retains a reference to the StatelessSession, so dispose() is not needed. A StatelessSession only has an execute() method that takes an object, an array of objects or a collection of objects; there is no insert or fireAllRules. The execute method iterates the objects, inserting each, and calls fireAllRules() at the end; the session is then finished. Should the session need access to any results information, it can use the executeWithResults method, which returns a StatelessSessionResult. The reason for this is that in remoting situations you do not always want the return payload, so this way it is optional.
setAgendaFilter, setGlobal and setGlobalResolver share their state across sessions, so each call to execute() will use the set AgendaFilter and see any previously set globals.
StatelessSessions do not currently support PropertyChangeListeners.
Async versions of the execute method are supported; remember to override the ExecutorService implementation when in special managed thread environments such as JEE.
StatelessSessions also support sequential mode, a special optimized mode that uses less memory and executes faster; please see the Sequential section for more details.
StatelessSession.executeWithResults(....)
returns a minimal API to
examine the session's data. The inserted Objects can be iterated over,
queries can be executed and globals retrieved. Once the
StatelessSessionResult
is serialized it loses the reference to the
underlying WorkingMemory
and RuleBase
, so queries can no longer be
executed, however globals can still be retrieved and objects iterated. To
retrieve globals they must be exported from the StatelessSession
; the
GlobalExporter
strategy is set with StatelessSession.setGlobalExporter(
GlobalExporter globalExporter )
. Two implementations of GlobalExporter
are
available and users may implement their own strategies.
CopyIdentifiersGlobalExporter copies named identifiers into a new GlobalResolver that is passed to the StatelessSessionResult; the constructor takes a String[] array of identifiers, and if no identifiers are specified it copies all identifiers declared in the RuleBase.
ReferenceOriginalGlobalExporter just passes a reference to the original GlobalResolver; the latter should be used with care, as identifier instances can be changed at any time by the StatelessSession and the GlobalResolver may not be serialization-friendly.
Example 2.12. Using a GlobalExporter with StatelessSessions
StatelessSession session = ruleBase.newStatelessSession();
session.setGlobalExporter( new CopyIdentifiersGlobalExporter( new String[]{ "list" } ) );
StatelessSessionResult result = session.executeWithResults( new Cheese( "stilton" ) );
List list = (List) result.getGlobal( "list" );
The Agenda is a RETE feature. During a Working Memory Action rules may become fully matched and eligible for execution; a single Working Memory Action can result in multiple eligible rules. When a rule is fully matched an Activation is created, referencing the Rule and the matched facts, and placed onto the Agenda. The Agenda controls the execution order of these Activations using a Conflict Resolution strategy.
The engine operates in a "2 phase" mode which is recursive:
Working Memory Actions - this is where most of the work takes place, in either the Consequence or the main Java application process. Once the Consequence has finished, or the main Java application process calls fireAllRules(), the engine switches to the Agenda Evaluation phase.
Agenda Evaluation - attempts to select a rule to fire, if a rule is not found it exits, otherwise it attempts to fire the found rule, switching the phase back to Working Memory Actions and the process repeats again until the Agenda is empty.
The process recurses until the agenda is clear, in which case control returns to the calling application. When Working Memory Actions are taking place, no rules are being fired.
Conflict resolution is required when there are multiple rules on the agenda. As firing a rule may have side effects on working memory, the rule engine needs to know in what order the rules should fire (for instance, firing ruleA may cause ruleB to be removed from the agenda).
The default conflict resolution strategies employed by Drools are: Salience and LIFO (last in, first out).
The most visible one is "salience", or priority: a user can specify that a certain rule has a higher priority than other rules by giving it a higher number. In that case, the rule with the higher salience will always be preferred. LIFO priorities are based on the assigned Working Memory Action counter value; multiple rules created from the same action have the same value, and the execution order among these is considered arbitrary.
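As a rough illustration, the default ordering can be sketched as a sort over a hypothetical activation record. The Activation class below is invented for this example and is not the Drools class; it only carries the two fields the default strategies compare:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ConflictResolutionSketch {

    // Hypothetical activation record, not the Drools class.
    public static class Activation {
        public final String rule;
        public final int salience;
        public final long actionCounter; // Working Memory Action counter value

        public Activation(String rule, int salience, long actionCounter) {
            this.rule = rule;
            this.salience = salience;
            this.actionCounter = actionCounter;
        }
    }

    // Sketch of the default ordering: higher salience first, then LIFO on
    // the action counter (activations from later actions fire first).
    public static List<Activation> order(List<Activation> agenda) {
        List<Activation> sorted = new ArrayList<>(agenda);
        sorted.sort(Comparator.comparingInt((Activation a) -> a.salience).reversed()
                .thenComparing(Comparator.comparingLong((Activation a) -> a.actionCounter).reversed()));
        return sorted;
    }
}
```

Here an activation with salience 10 would fire before any salience 0 activation, and among equal-salience activations the one from the later Working Memory Action fires first.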
As a general rule, it is a good idea not to count on the rules firing in any particular order, and try and author the rules without worrying about a "flow".
Custom conflict resolution strategies can be specified by setting
the Class in the RuleBaseConfiguration
method setConflictResolver
, or
using the property drools.conflictResolver
.
Agenda groups are a way to partition rules (activations, actually) on the agenda. At any one time, only one group has "focus", which means that only activations for rules in that group will take effect. You can also give rules "auto focus", which means the focus for that rule's agenda group is taken when the rule's conditions are true.
They are sometimes known as "modules" in CLIPS terminology. Agenda groups are a handy way to create a "flow" between grouped rules. You can switch the group which has focus either from within the rule engine, or from the API. If your rules have a clear need for multiple "phases" or "sequences" of processing, consider using agenda-groups for this purpose.
Each time setFocus(...) is called, it pushes that Agenda Group onto a stack; when the focus group is empty, it is popped off and the next group on the stack is evaluated. An Agenda Group can appear in multiple locations on the stack. The default Agenda Group is "MAIN"; all rules which do not specify an Agenda Group are placed there. It is also always the first group on the stack and is given focus by default.
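The stack behavior described above can be sketched in plain Java. This is a hypothetical illustration only; the class and method names are invented, and this is not the actual Agenda implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class FocusStackSketch {

    // "MAIN" is always at the bottom of the stack and is never popped.
    private final Deque<String> stack = new ArrayDeque<>();

    public FocusStackSketch() {
        stack.push("MAIN"); // default group, first on the stack
    }

    // setFocus pushes the group; the same group may appear more than once.
    public void setFocus(String group) {
        stack.push(group);
    }

    // When the focus group runs out of activations it is popped and the
    // next group on the stack gets focus.
    public String groupEmptied() {
        if (stack.size() > 1) {
            stack.pop();
        }
        return stack.peek();
    }

    public String currentFocus() {
        return stack.peek();
    }
}
```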
Filters are optional implementations of the filter interface which are used to allow or deny an activation from firing (what you filter on is entirely up to the implementation). Drools provides the following convenience default implementations:
RuleNameEndsWithAgendaFilter
RuleNameEqualsAgendaFilter
RuleNameStartsWithAgendaFilter
RuleNameMatchesAgendaFilter
To use a filter, specify it when calling fireAllRules. The following example will only allow activations of rules with names ending in "Test" to fire. All others will be filtered out:
workingMemory.fireAllRules( new RuleNameEndsWithAgendaFilter( "Test" ) );
In a regular insert, you need to explicitly retract a fact. With logical assertions, the fact that was asserted will be automatically retracted when the conditions that asserted it in the first place are no longer true. (It's actually more clever than that! If there are no possible conditions that could support the logical assertion, only then it will be retracted).
Normal insertions are said to be "STATED" (i.e. the Fact has been stated, just like the intuitive concept). Using a HashMap and a counter, we track how many times a particular equality is STATED; that is, we count how many different instances are equal. When we logically insert an object, we are said to justify it, and it is justified by the firing rule. For each logical insertion there can only be one equal object; each subsequent equal logical insertion increases the justification counter for that logical assertion. As justifications are removed, once there are no more justifications the logical object is automatically retracted.
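The counting idea can be sketched as follows. This is a hypothetical illustration of the justification counter only, not the actual Drools truth maintenance code, and all names are invented:

```java
import java.util.HashMap;
import java.util.Map;

public class JustificationCounterSketch {

    // One counter per logically inserted object; equal objects (by
    // equals/hashCode) share a single counter.
    private final Map<Object, Integer> justifications = new HashMap<>();

    // A rule firing justifies the fact; an equal logical insertion just
    // increments the existing counter.
    public void logicalInsert(Object fact) {
        justifications.merge(fact, 1, Integer::sum);
    }

    // Removing a justification decrements the counter; at zero the fact
    // is automatically retracted (here: removed from the map).
    // Returns true when the retraction happened.
    public boolean removeJustification(Object fact) {
        Integer count = justifications.get(fact);
        if (count == null) {
            return false;
        }
        if (count == 1) {
            justifications.remove(fact); // automatic retraction
            return true;
        }
        justifications.put(fact, count - 1);
        return false;
    }

    public boolean isJustified(Object fact) {
        return justifications.containsKey(fact);
    }
}
```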
If we logically insert an object when there is an equal STATED object, the logical insertion will fail and return null. If we STATE an object that has an existing equal JUSTIFIED object, we override the Fact; how this override works depends on the configuration setting WM_BEHAVIOR_PRESERVE. When the property is set to discard, we use the existing handle and replace the existing instance with the new Object (this is the default behavior); otherwise we override it to STATED, but create a new FactHandle.
This can be confusing on a first read, so hopefully the flow charts below help. Where they state that a new FactHandle is returned, this also indicates that the Object was propagated through the network.
An example may make things clearer. Imagine a credit card processing application, processing transactions for a given account (and we have a working memory accumulating knowledge about a single accounts transaction). The rule engine is doing its best to decide if transactions are possibly fraudulent or not. Imagine this rule base basically has rules that kick in when there is "reason to be suspicious" and when "everything is normal".
Of course there are many rules that operate no matter what (performing standard calculations, etc.). Now there are possibly many things that could trigger a "reason to be suspicious": someone notifying the bank, a sequence of large transactions, transactions in geographically disparate locations, or even reports of credit card theft. Rather than scattering all these little conditions across lots of rules, imagine there is a fact class called "SuspiciousAccount".
Then there can be a series of rules whose job is to look for things that may raise suspicion, and if they fire, they simply insert a new SuspiciousAccount() instance. All the other rules just have conditions like "not SuspiciousAccount()" or "SuspiciousAccount()" depending on their needs. Note that this has the advantage of allowing there to be many rules around raising suspicion, without touching the other rules. When the facts causing the SuspiciousAccount() insertion are removed, the rule engine reverts back to the normal "mode" of operation (and for instance, a rule with "not SuspiciousAccount()" may kick in which flushes through any interrupted transactions).
If you have followed this far, you will note that truth maintenance, like logical assertions, allows rules to behave a little like a human would, and can certainly make the rules more manageable.
It is important to note that for Truth Maintenance (and logical assertions) to work at all, your Fact objects (which may be JavaBeans) must override the equals and hashCode methods (from java.lang.Object) correctly. As the truth maintenance system needs to know when two different physical objects are equal in value, BOTH equals and hashCode must be overridden correctly, as per the Java standard.
Two objects are equal if and only if their equals methods return true for each other and if their hashCode methods return the same values. See the Java API for more details (but do keep in mind you MUST override both equals and hashCode).
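For example, a minimal version of the Cheese class used throughout this chapter might override both methods like this. This is a sketch; a real fact class will usually have more fields, all of which should normally participate in both equals and hashCode:

```java
public class Cheese {
    private final String type;

    public Cheese(String type) {
        this.type = type;
    }

    public String getType() {
        return type;
    }

    // Both methods must agree: objects that are equal must also return the
    // same hash code, otherwise the truth maintenance system cannot match
    // equal facts.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Cheese)) return false;
        return type.equals(((Cheese) o).type);
    }

    @Override
    public int hashCode() {
        return type.hashCode();
    }
}
```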
The event package provides means to be notified of rule engine events, including rules firing, objects being asserted, etc. This allows you to separate out logging/auditing activities from the main part of your application (and the rules) - as events are a cross cutting concern.
There are three types of event listeners: WorkingMemoryEventListener, AgendaEventListener and RuleFlowEventListener.
Both stateful and stateless sessions implement the EventManager interface, which allows event listeners to be added to the session.
All EventListeners have default implementations that implement each method but do nothing; these are convenience classes that you can inherit from to save having to implement each method: DefaultAgendaEventListener, DefaultWorkingMemoryEventListener and DefaultRuleFlowEventListener. The following shows how to extend DefaultAgendaEventListener and add it to the session; the example prints a statement only when rules are fired:
session.addEventListener( new DefaultAgendaEventListener() {
    public void afterActivationFired(AfterActivationFiredEvent event) {
        super.afterActivationFired( event );
        System.out.println( event );
    }
} );
Drools also provides DebugWorkingMemoryEventListener, DebugAgendaEventListener and DebugRuleFlowEventListener, which implement each method with a debug print statement:
session.addEventListener( new DebugWorkingMemoryEventListener() );
The Eclipse-based Rule IDE also provides an audit logger and graphical viewer, so that the rule engine can log events for later viewing, and auditing.
With Rete you have a stateful session where objects can be asserted and modified over time, rules can also be added and removed. Now what happens if we assume a stateless session, where after the initial data set no more data can be asserted or modified (no rule re-evaluations) and rules cannot be added or removed? This means we can start to make assumptions to minimize what work the engine has to do.
Order the rules by salience and position in the ruleset (this just sets a sequence attribute on the rule terminal node).
Create an array, one element for each possible rule activation; element position indicates firing order.
Turn off all node memories, except the right-input Object memory.
Disconnect the LeftInputAdapterNode propagation, and have the Object plus the Node referenced in a Command object, which is added to a list on the WorkingMemory for later execution.
Assert all objects, when all assertions are finished and thus right-input node memories are populated check the Command list and execute each in turn.
All resulting Activations should be placed in the array, based upon the determined sequence number of the Rule. Record the first and last populated elements, to reduce the iteration range.
Iterate the array of Activations, executing each populated element in turn.
If there is a maximum number of allowed rule executions, the network evaluation can be exited early, rather than firing all the rules in the array.
The LeftInputAdapterNode no longer creates a Tuple, adds the Object and propagates the Tuple; instead a Command object is created and added to a list in the Working Memory. This Command object holds a reference to the LeftInputAdapterNode and the propagated Object. This stops any left-input propagations at insertion time, so that we know a right-input propagation will never need to attempt a join with the left inputs (removing the need for left-input memory). All nodes have their memory turned off, including the left-input Tuple memory but excluding the right-input Object memory; i.e. the only node that remembers an insertion propagation is the right-input Object memory. Once all the assertions are finished and all right-input memories are populated, we can then iterate the list of LeftInputAdapterNode Command objects, calling each in turn; they will propagate down the network attempting to join with the right-input objects, without being remembered in the left input, as we know there will be no further object assertions and thus no further propagations into the right-input memory.
There is no longer an Agenda with a priority queue to schedule the Tuples; instead there is simply an array, one element per rule. The sequence number of the RuleTerminalNode indicates the element of the array in which to place the Activation. Once all Command objects have finished, we can iterate the array, checking each element in turn and firing the Activations if they exist. To improve performance, we record the first and last populated cells of the array. The network is constructed so that each RuleTerminalNode is given a sequence number, based on its salience and its order of being added to the network.
Typically the right-input node memories are HashMaps, for fast Object retraction; as we know there will be no Object retractions, we can instead use a list when the values of the Object are not indexed. For larger numbers of Objects, indexed HashMaps provide a performance increase; if we know an Object type has a low number of instances, indexing is probably not advantageous and an Object list can be used.
Sequential mode can only be used with a StatelessSession and is off by default. To turn it on, either set RuleBaseConfiguration.setSequential to true, or set the rulebase.conf property drools.sequential to true. Sequential mode can fall back to a dynamic agenda via the setSequentialAgenda setter, with either SequentialAgenda.SEQUENTIAL or SequentialAgenda.DYNAMIC, or via the "drools.sequential.agenda" property.
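For example, a rulebase.conf fragment enabling sequential mode might look like the following. The drools.sequential property name comes from the text above; the value shown for drools.sequential.agenda is an assumption about its accepted format, so check it against the configuration before relying on it:

```properties
# rulebase.conf - enable sequential mode for StatelessSessions
drools.sequential = true
# assumed value format for the agenda strategy property
drools.sequential.agenda = sequential
```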