Copyright © 2009, 2010 eXoPlatform
The Java Content Repository API, like other Java language related standards, was created within the Java Community Process (http://jcp.org/) as the result of collaboration between an expert group and the Java community. It is known as JSR-170 (Java Specification Request): http://www.jcp.org/en/jsr/detail?id=170.
As the main purpose of a content repository is to maintain data, the heart of the JCR is its data model:
The main data storage abstraction of JCR's data model is a workspace
Each repository should have one or more workspaces
The content is stored in a workspace as a hierarchy of items
Each workspace has its own hierarchy of items
Nodes are intended to support the data hierarchy. They are typed using namespaced names, which allows the content to be structured according to standardized constraints. A node may be versioned through an associated version graph (optional feature).
Properties store the actual data as values of predefined types (String, Binary, Long, Boolean, Double, Date, Reference, Path).
It is important to note that the data model for the interface (the repository model) is rarely the same as the data models used by the repository's underlying storage subsystems. The repository knows how to make the client's changes persistent because that is part of the repository configuration, rather than part of the application programming task.
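To make the model concrete, here is a minimal sketch using the standard javax.jcr API; the workspace name "production" and the admin credentials are assumptions (they depend on the repository configuration shown later):

import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

public class DataModelSketch {
  public static void storeDocument(Repository repository) throws Exception {
    // Log into a workspace (assumed to be named "production")
    Session session = repository.login(
        new SimpleCredentials("admin", "admin".toCharArray()), "production");
    try {
      // Items are organized as a hierarchy of nodes under the root node
      Node docs = session.getRootNode().addNode("documents", "nt:unstructured");
      Node doc = docs.addNode("readme", "nt:unstructured");
      // Properties hold the actual data as typed values
      doc.setProperty("title", "Readme");
      doc.setProperty("size", 1024L);
      // The repository makes the pending changes persistent on save
      session.save();
    } finally {
      session.logout();
    }
  }
}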
Like other eXo services eXo JCR can be configured and used in portal or embedded mode (as a service embedded in eXo Portal) and in standalone mode.
In embedded mode, JCR services are registered in the Portal container; the second option is to use a Standalone container. The main difference between these container types is that the first one is intended to be used in a Portal (web) environment, while the second one can be used standalone (TODO see the comprehensive page Service Configuration for Beginners for more details).
The following setup procedure is used to obtain a Standalone configuration (TODO find more in Container configuration):
Configuration set explicitly via StandaloneContainer.addConfigurationURL(String url) or StandaloneContainer.addConfigurationPath(String path) before getInstance() (see the sketch after this list)
Configuration from the $base:directory/exo-configuration.xml or $base:directory/conf/exo-configuration.xml file, where $base:directory is either the AS's home directory (in a J2EE AS environment) or the current directory (for a standalone application)
/conf/exo-configuration.xml in the current classloader (e.g. war, ear archive)
Configuration from $service_jar_file/conf/portal/configuration.xml. WARNING: do not rely on a particular jar's configuration if more than one jar contains a conf/portal/configuration.xml file; in that case the configuration chosen is unpredictable.
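A minimal sketch of the first option: registering a configuration explicitly before the container is instantiated. The configuration path and the component lookup call are assumptions based on the usual eXo container API:

import org.exoplatform.container.StandaloneContainer;
import org.exoplatform.services.jcr.RepositoryService;

public class StandaloneBootSketch {
  public static RepositoryService boot() throws Exception {
    // Register an explicit configuration before getInstance() is called
    StandaloneContainer.addConfigurationPath("exo-configuration.xml"); // assumed path
    StandaloneContainer container = StandaloneContainer.getInstance();
    // Look up the JCR RepositoryService from the container
    return (RepositoryService) container.getComponentInstanceOfType(RepositoryService.class);
  }
}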
JCR service configuration looks like:
<component> <key>org.exoplatform.services.jcr.RepositoryService</key> <type>org.exoplatform.services.jcr.impl.RepositoryServiceImpl</type> </component> <component> <key>org.exoplatform.services.jcr.config.RepositoryServiceConfiguration</key> <type>org.exoplatform.services.jcr.impl.config.RepositoryServiceConfigurationImpl</type> <init-params> <value-param> <name>conf-path</name> <description>JCR repositories configuration file</description> <value>jar:/conf/standalone/exo-jcr-config.xml</value> </value-param> <properties-param> <name>working-conf</name> <description>working-conf</description> <property name="source-name" value="jdbcjcr" /> <property name="dialect" value="hsqldb" /> <property name="persister-class-name" value="org.exoplatform.services.jcr.impl.config.JDBCConfigurationPersister" /> </properties-param> </init-params> </component>
conf-path : a path to a RepositoryService JCR Configuration
working-conf : optional; the JCR configuration persister configuration. If working-conf is not set, the persister is disabled.
The Configuration is defined in an XML file (see DTD below).
JCR Service can use multiple Repositories and each repository can have multiple Workspaces.
Repositories configuration parameters support human-readable formats of values. They are all case-insensitive:
Number formats: K, KB - kilobytes; M, MB - megabytes; G, GB - gigabytes; T, TB - terabytes.
Examples: 100.5 - the number 100.5; 200k - 200 kilobytes; 4m - 4 megabytes; 1.4G - 1.4 gigabytes; 10T - 10 terabytes.
Time format suffixes: ms - milliseconds; s - seconds; m - minutes; h - hours; d - days; w - weeks; no suffix - seconds.
Examples: 500ms - 500 milliseconds; 20 or 20s - 20 seconds; 30m - 30 minutes; 12h - 12 hours; 5d - 5 days; 4w - 4 weeks.
The default configuration of the Repository Service is located at jar:/conf/portal/exo-jcr-config.xml; it is available for both portal and standalone modes.
In portal mode it is overridden and located in the portal web application at portal/WEB-INF/conf/jcr/repository-configuration.xml.
Example of Repository Service configuration for standalone mode:
<repository-service default-repository="repository"> <repositories> <repository name="db1" system-workspace="ws" default-workspace="ws"> <security-domain>exo-domain</security-domain> <access-control>optional</access-control> <session-max-age>1h</session-max-age> <authentication-policy>org.exoplatform.services.jcr.impl.core.access.JAASAuthenticator</authentication-policy> <workspaces> <workspace name="production"> <!-- for system storage --> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr" /> <property name="multi-db" value="false" /> <property name="update-storage" value="false" /> <property name="max-buffer-size" value="200k" /> <property name="swap-directory" value="../temp/swap/production" /> </properties> <value-storages> <value-storage id="system" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage"> <properties> <property name="path" value="../temp/values/production" /> </properties> <filters> <filter property-type="Binary" /> </filters> </value-storage> </value-storages> </container> <initializer class="org.exoplatform.services.jcr.impl.core.ScratchWorkspaceInitializer"> <properties> <property name="root-nodetype" value="nt:unstructured" /> </properties> </initializer> <cache enabled="true" class="org.exoplatform.services.jcr.impl.dataflow.persistent.LinkedWorkspaceStorageCacheImpl"> <properties> <property name="max-size" value="10k" /> <property name="live-time" value="1h" /> </properties> </cache> <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex"> <properties> <property name="index-dir" value="../temp/jcrlucenedb/production" /> </properties> </query-handler> <lock-manager> <time-out>15m</time-out> <persister class="org.exoplatform.services.jcr.impl.core.lock.FileSystemLockPersister"> <properties> <property name="path" value="../temp/lock/system" /> </properties> </persister> </lock-manager> </workspace> <workspace name="backup"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr" /> <property name="multi-db" value="false" /> <property name="update-storage" value="false" /> <property name="max-buffer-size" value="200k" /> <property name="swap-directory" value="../temp/swap/backup" /> </properties> <value-storages> <value-storage id="draft" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage"> <properties> <property name="path" value="../temp/values/backup" /> </properties> <filters> <filter property-type="Binary" /> </filters> </value-storage> </value-storages> </container> <initializer class="org.exoplatform.services.jcr.impl.core.ScratchWorkspaceInitializer"> <properties> <property name="root-nodetype" value="nt:unstructured" /> </properties> </initializer> <cache enabled="true" class="org.exoplatform.services.jcr.impl.dataflow.persistent.LinkedWorkspaceStorageCacheImpl"> <properties> <property name="max-size" value="10k" /> <property name="live-time" value="1h" /> </properties> </cache> <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex"> <properties> <property name="index-dir" value="../temp/jcrlucenedb/backup" /> </properties> </query-handler> </workspace> </workspaces> </repository> </repositories> </repository-service>
Repository Service configuration:
default-repository - the name of a default repository (one returned by RepositoryService.getRepository())
repositories - the list of repositories
Repository configuration:
name - the name of a repository
default-workspace - the name of the workspace obtained via Repository's login() or login(Credentials) methods (the ones without an explicit workspace name); see the sketch after this list
system-workspace - the name of the workspace where the /jcr:system node is placed
security-domain - the name of a security domain for JAAS authentication
access-control - the name of an access control policy. There are 3 types: optional - an ACL is created on demand (default); disable - no access control; mandatory - an ACL is created for each added node (not supported yet)
authentication-policy - the name of an authentication policy class
workspaces - the list of workspaces
session-max-age - the time after which an idle session is removed (logout is called). If not set, idle sessions are never removed.
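A short sketch of how these settings surface through the API, using the RepositoryService.getRepository() method mentioned above; the credentials are assumptions, and login() without a workspace name opens the default-workspace of the default repository:

import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import org.exoplatform.services.jcr.RepositoryService;

public class RepositoryLookupSketch {
  public static Session openDefaultSession(RepositoryService service) throws Exception {
    // Returns the repository named by the default-repository attribute
    Repository repository = service.getRepository();
    // login(Credentials) without a workspace name opens the default-workspace
    return repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
  }
}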
Workspace configuration:
name - the name of a workspace
auto-init-root-nodetype - DEPRECATED in JCR 1.9 (use initializer). The node type for root node initialization
container - workspace data container (physical storage) configuration
initializer - workspace initializer configuration
cache - workspace storage cache configuration
query-handler - query handler configuration
Workspace data container configuration:
class - A workspace data container class name
properties - the list of properties (name-value pairs) for the concrete Workspace data container
value-storages - the list of value storage plugins
Value Storage plugin configuration (optional feature):
The value-storage element is optional. If you don't include it, the values will be stored as BLOBs inside the database.
value-storage - Optional value Storage plugin definition
class - a value storage plugin class name (attribute)
properties - the list of properties (name-value pairs) for a concrete Value Storage plugin
filters - the list of filters defining conditions when this plugin is applicable
Initializer configuration (optional):
class - initializer implementation class.
properties - the list of properties (name-value pairs). The following properties are supported:
root-nodetype - The node type for root node initialization
root-permissions - default permissions of the root node. It is defined as a set of semicolon-delimited entries, each containing a space-delimited identity (user, group, etc.; see the Organization service documentation for details) and a permission type. For example, any read;*:/admin read;*:/admin add_node;*:/admin set_property;*:/admin remove means that users from the group admin have all permissions while other users have only read permission.
The configurable initializer adds the capability to override the workspace initial startup procedure.
Cache configuration:
enabled - if workspace cache is enabled
class - cache implementation class, optional from 1.9. Default value is org.exoplatform.services.jcr.impl.dataflow.persistent.LinkedWorkspaceStorageCacheImpl.
The cache can be configured to use a concrete implementation of the WorkspaceStorageCache interface. JCR core provides two implementations: LinkedWorkspaceStorageCacheImpl - the default, with configurable read behavior and statistics; WorkspaceStorageCacheImpl - the pre-1.9 implementation, which can still be used.
properties - the list of properties (name-value pairs) for Workspace cache:
max-size - cache maximum size.
live-time - cached item live time.
LinkedWorkspaceStorageCacheImpl supports additional optional parameters TODO
Query Handler configuration:
class - A Query Handler class name
properties - the list of properties (name-value pairs) for a Query Handler (e.g. index-dir); these properties and advanced features are described in *Search Configuration*
Lock Manager configuration:
time-out - time after which the unused global lock will be removed.
persister - a class for storing lock information for future use, for example to remove locks after a JCR restart.
path - the lock folder; each workspace has its own.
Configuration definition:
<!ELEMENT repository-service (repositories)> <!ATTLIST repository-service default-repository NMTOKEN #REQUIRED> <!ELEMENT repositories (repository)> <!ELEMENT repository (security-domain,access-control,session-max-age,authentication-policy,workspaces)> <!ATTLIST repository default-workspace NMTOKEN #REQUIRED name NMTOKEN #REQUIRED system-workspace NMTOKEN #REQUIRED > <!ELEMENT security-domain (#PCDATA)> <!ELEMENT access-control (#PCDATA)> <!ELEMENT session-max-age (#PCDATA)> <!ELEMENT authentication-policy (#PCDATA)> <!ELEMENT workspaces (workspace+)> <!ELEMENT workspace (container,initializer,cache,query-handler)> <!ATTLIST workspace name NMTOKEN #REQUIRED> <!ELEMENT container (properties,value-storages)> <!ATTLIST container class NMTOKEN #REQUIRED> <!ELEMENT value-storages (value-storage+)> <!ELEMENT value-storage (properties,filters)> <!ATTLIST value-storage class NMTOKEN #REQUIRED> <!ELEMENT filters (filter+)> <!ELEMENT filter EMPTY> <!ATTLIST filter property-type NMTOKEN #REQUIRED> <!ELEMENT initializer (properties)> <!ATTLIST initializer class NMTOKEN #REQUIRED> <!ELEMENT cache (properties)> <!ATTLIST cache enabled NMTOKEN #REQUIRED class NMTOKEN #REQUIRED > <!ELEMENT query-handler (properties)> <!ATTLIST query-handler class NMTOKEN #REQUIRED> <!ELEMENT access-manager (properties)> <!ATTLIST access-manager class NMTOKEN #REQUIRED> <!ELEMENT lock-manager (time-out,persister)> <!ELEMENT time-out (#PCDATA)> <!ELEMENT persister (properties)> <!ELEMENT properties (property+)> <!ELEMENT property EMPTY>
eXo JCR persistent data container can work in two configuration modes:
Multi-database: one database for each workspace (used in standalone eXo JCR service mode)
Single-database: all workspaces persisted in one database (used in embedded eXo JCR service mode, e.g. in eXo portal)
The data container uses the JDBC driver to communicate with the actual database software, i.e. any JDBC-enabled data storage can be used with eXo JCR implementation.
Currently the data container is tested with the following RDBMS:
MySQL (5.x including UTF8 support)
PostgreSQL (8.x)
Oracle Database (9i, 10g)
Microsoft SQL Server (2005)
Sybase ASE (15.0)
Apache Derby/Java DB (10.1.x, 10.2.x)
IBM DB2 (8.x, 9.x)
HSQLDB (1.8.0.7)
Each database software supports the ANSI SQL standard but also has its own specifics. So, each database has its own configuration in eXo JCR, specified as a database dialect parameter. If you need a more detailed database configuration, it is possible to edit the metadata SQL-script files.
If a non-ANSI node name is used, it is necessary to use a database with MultiLanguage support [TODO link to MultiLanguage]. Some JDBC drivers need additional parameters for establishing a Unicode-friendly connection, e.g. under MySQL it is necessary to add an additional parameter for the JDBC driver at the end of the JDBC URL. For instance:
jdbc:mysql://exoua.dnsalias.net/portal?characterEncoding=utf8
There are preconfigured configuration files for HSQLDB. Look for these files in /conf/portal and /conf/standalone folders of the jar-file exo.jcr.component.core-XXX.XXX.jar or source-distribution of eXo JCR implementation.
By default, the configuration files are located in the service jars: /conf/portal/configuration.xml (eXo services including the JCR Repository Service) and exo-jcr-config.xml (repositories configuration). In the eXo Portal product, JCR is configured in the portal web application: portal/WEB-INF/conf/jcr/jcr-configuration.xml (JCR Repository Service and related services) and repository-configuration.xml (repositories configuration).
Read more about Repository configuration.
You need to configure each workspace in a repository. You may have each one on a different remote server if required.
First of all, configure the data containers in the org.exoplatform.services.naming.InitialContextInitializer service. It is the JNDI context initializer which registers (binds) naming resources (DataSources) for the data containers.
Example (standalone mode, two data containers: jdbcjcr - a local HSQLDB, jdbcjcr1 - a remote MySQL):
<component> <key>org.exoplatform.services.naming.InitialContextInitializer</key> <type>org.exoplatform.services.naming.InitialContextInitializer</type> <component-plugins> <component-plugin> <name>bind.datasource</name> <set-method>addPlugin</set-method> <type>org.exoplatform.services.naming.BindReferencePlugin</type> <init-params> <value-param> <name>bind-name</name> <value>jdbcjcr</value> </value-param> <value-param> <name>class-name</name> <value>javax.sql.DataSource</value> </value-param> <value-param> <name>factory</name> <value>org.apache.commons.dbcp.BasicDataSourceFactory</value> </value-param> <properties-param> <name>ref-addresses</name> <description>ref-addresses</description> <property name="driverClassName" value="org.hsqldb.jdbcDriver"/> <property name="url" value="jdbc:hsqldb:file:target/temp/data/portal"/> <property name="username" value="sa"/> <property name="password" value=""/> </properties-param> </init-params> </component-plugin> <component-plugin> <name>bind.datasource</name> <set-method>addPlugin</set-method> <type>org.exoplatform.services.naming.BindReferencePlugin</type> <init-params> <value-param> <name>bind-name</name> <value>jdbcjcr1</value> </value-param> <value-param> <name>class-name</name> <value>javax.sql.DataSource</value> </value-param> <value-param> <name>factory</name> <value>org.apache.commons.dbcp.BasicDataSourceFactory</value> </value-param> <properties-param> <name>ref-addresses</name> <description>ref-addresses</description> <property name="driverClassName" value="com.mysql.jdbc.Driver"/> <property name="url" value="jdbc:mysql://exoua.dnsalias.net/jcr"/> <property name="username" value="exoadmin"/> <property name="password" value="exo12321"/> <property name="maxActive" value="50"/> <property name="maxIdle" value="5"/> <property name="initialSize" value="5"/> </properties-param> </init-params> </component-plugin> <component-plugins> <init-params> <value-param> <name>default-context-factory</name> <value>org.exoplatform.services.naming.SimpleContextFactory</value> </value-param> </init-params> </component>
We configure the database connection parameters:
driverClassName, e.g. "org.hsqldb.jdbcDriver", "com.mysql.jdbc.Driver", "org.postgresql.Driver"
url, e.g. "jdbc:hsqldb:file:target/temp/data/portal", "jdbc:mysql://exoua.dnsalias.net/jcr"
username, e.g. "sa", "exoadmin"
password, e.g. "", "exo12321"
There can also be connection pool configuration parameters (org.apache.commons.dbcp.BasicDataSourceFactory):
maxActive, e.g. 50
maxIdle, e.g. 5
initialSize, e.g. 5
and others according to the Apache DBCP configuration.
When the data container configuration is done we can configure the repository service. Each workspace will be configured for its own data container.
Example (two workspaces: ws - jdbcjcr, ws1 - jdbcjcr1):
<workspaces> <workspace name="ws" auto-init-root-nodetype="nt:unstructured"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr"/> <property name="dialect" value="hsqldb"/> <property name="multi-db" value="true"/> <property name="max-buffer-size" value="200K"/> <property name="swap-directory" value="target/temp/swap/ws"/> </properties> </container> <cache enabled="true"> <properties> <property name="max-size" value="10K"/><!-- 10Kbytes --> <property name="live-time" value="30m"/><!-- 30 min --> </properties> </cache> <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex"> <properties> <property name="index-dir" value="target/temp/index"/> </properties> </query-handler> <lock-manager> <time-out>15m</time-out><!-- 15 min --> <persister class="org.exoplatform.services.jcr.impl.core.lock.FileSystemLockPersister"> <properties> <property name="path" value="target/temp/lock/ws"/> </properties> </persister> </lock-manager> </workspace> <workspace name="ws1" auto-init-root-nodetype="nt:unstructured"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr1"/> <property name="dialect" value="mysql"/> <property name="multi-db" value="true"/> <property name="max-buffer-size" value="200K"/> <property name="swap-directory" value="target/temp/swap/ws1"/> </properties> </container> <cache enabled="true"> <properties> <property name="max-size" value="10K"/> <property name="live-time" value="5m"/> </properties> </cache> <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex"> <properties> <property name="index-dir" value="target/temp/index"/> </properties> </query-handler> <lock-manager> <time-out>15m</time-out><!-- 15 min --> <persister class="org.exoplatform.services.jcr.impl.core.lock.FileSystemLockPersister"> <properties> <property name="path" value="target/temp/lock/ws1"/> </properties> </persister> </lock-manager> </workspace> </workspaces>
source-name - a javax.sql.DataSource name configured in the InitialContextInitializer component (was sourceName prior to JCR 1.9);
dialect - a database dialect, one of "hsqldb", "mysql", "mysql-utf8", "pgsql", "oracle", "oracle-oci", "mssql", "sybase", "derby", "db2", "db2v8";
multi-db - enable the multi-database container with this parameter (set the value "true");
max-buffer-size - a threshold (in bytes) after which a javax.jcr.Value content will be swapped to a file in temporary storage, i.e. swap for pending changes;
swap-directory - a path in the file system used to swap the pending changes.
In this way we have configured two workspaces which will be persisted in two different databases (ws in HSQLDB, ws1 in MySQL).
Starting from v1.9, repository configuration parameters support human-readable value formats (e.g. 200K - 200 kilobytes, 30m - 30 minutes, etc.).
It is simpler to configure a single-database data container: only one naming resource has to be configured.
Example (embedded mode for the jdbcjcr data container):
<external-component-plugins> <target-component>org.exoplatform.services.naming.InitialContextInitializer</target-component> <component-plugin> <name>bind.datasource</name> <set-method>addPlugin</set-method> <type>org.exoplatform.services.naming.BindReferencePlugin</type> <init-params> <value-param> <name>bind-name</name> <value>jdbcjcr</value> </value-param> <value-param> <name>class-name</name> <value>javax.sql.DataSource</value> </value-param> <value-param> <name>factory</name> <value>org.apache.commons.dbcp.BasicDataSourceFactory</value> </value-param> <properties-param> <name>ref-addresses</name> <description>ref-addresses</description> <property name="driverClassName" value="org.postgresql.Driver"/> <property name="url" value="jdbc:postgresql://exoua.dnsalias.net/portal"/> <property name="username" value="exoadmin"/> <property name="password" value="exo12321"/> <property name="maxActive" value="50"/> <property name="maxIdle" value="5"/> <property name="initialSize" value="5"/> </properties-param> </init-params> </component-plugin> </external-component-plugins>
Then configure the repository workspaces in the repositories configuration to use this single database. The "multi-db" parameter must be switched off (set to "false").
Example (two workspaces: ws - jdbcjcr, ws1 - jdbcjcr):
<workspaces> <workspace name="ws" auto-init-root-nodetype="nt:unstructured"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr"/> <property name="dialect" value="pgsql"/> <property name="multi-db" value="false"/> <property name="max-buffer-size" value="200K"/> <property name="swap-directory" value="target/temp/swap/ws"/> </properties> </container> <cache enabled="true"> <properties> <property name="max-size" value="10K"/> <property name="live-time" value="30m"/> </properties> </cache> <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex"> <properties> <property name="index-dir" value="../temp/index"/> </properties> </query-handler> <lock-manager> <time-out>15m</time-out> <persister class="org.exoplatform.services.jcr.impl.core.lock.FileSystemLockPersister"> <properties> <property name="path" value="target/temp/lock/ws"/> </properties> </persister> </lock-manager> </workspace> <workspace name="ws1" auto-init-root-nodetype="nt:unstructured"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr"/> <property name="dialect" value="pgsql"/> <property name="multi-db" value="false"/> <property name="max-buffer-size" value="200K"/> <property name="swap-directory" value="target/temp/swap/ws1"/> </properties> </container> <cache enabled="true"> <properties> <property name="max-size" value="10K"/> <property name="live-time" value="5m"/> </properties> </cache> <lock-manager> <time-out>15m</time-out> <persister class="org.exoplatform.services.jcr.impl.core.lock.FileSystemLockPersister"> <properties> <property name="path" value="target/temp/lock/ws1"/> </properties> </persister> </lock-manager> </workspace> </workspaces>
In this way we have configured two workspaces which will be persisted in one database (PostgreSQL).
Repository configuration without using a javax.sql.DataSource bound in JNDI: this case may be useful if you have a dedicated JDBC driver implementation with special features like XA transactions, statement/connection pooling, etc.
You have to remove the configuration in InitialContextInitializer for your database and configure a new one directly in the workspace container.
Remove the "source-name" parameter and add the following lines instead, with your own values for the JDBC driver, database URL and username.
But be careful: in this case the JDBC driver should implement and provide connection pooling. Connection pooling is strongly recommended with JCR to prevent database overload.
<workspace name="ws" auto-init-root-nodetype="nt:unstructured"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="dialect" value="hsqldb"/> <property name="driverClassName" value="org.hsqldb.jdbcDriver"/> <property name="url" value="jdbc:hsqldb:file:target/temp/data/portal"/> <property name="username" value="su"/> <property name="password" value=""/> ......
Workspaces can be added dynamically during runtime.
This can be performed in two steps (a code sketch follows the steps):
Firstly, ManageableRepository.configWorkspace(WorkspaceEntry wsConfig) - registers a new configuration in the RepositoryContainer and creates a WorkspaceContainer.
Secondly, the main step, ManageableRepository.createWorkspace(String workspaceName) - creates the new workspace.
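A sketch of the two steps, assuming a fully populated WorkspaceEntry is already available (building one requires the same container, cache and query-handler settings shown in the XML examples above; the repository name "db1" is an assumption):

import org.exoplatform.services.jcr.RepositoryService;
import org.exoplatform.services.jcr.config.WorkspaceEntry;
import org.exoplatform.services.jcr.core.ManageableRepository;

public class AddWorkspaceSketch {
  public static void addWorkspace(RepositoryService service, WorkspaceEntry wsConfig,
      String workspaceName) throws Exception {
    ManageableRepository repository = service.getRepository("db1"); // assumed repository name
    // Step 1: register the new workspace configuration in the RepositoryContainer
    repository.configWorkspace(wsConfig);
    // Step 2: create the new workspace itself
    repository.createWorkspace(workspaceName);
  }
}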
The current configuration of eXo JCR uses the Apache DBCP connection pool (org.apache.commons.dbcp.BasicDataSourceFactory).
It is possible to set a large value for the maxActive parameter in configuration.xml. That means the pool (i.e. the JDBC driver) will use a lot of TCP/IP ports on the client machine. As a result, the data container can throw exceptions like "Address already in use". To solve this problem, you have to configure the client machine's networking software to use shorter timeouts for opened TCP/IP ports.
Microsoft Windows has the MaxUserPort and TcpTimedWaitDelay registry keys in the node HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters; by default these keys are unset. Set each one with values like these:
"TcpTimedWaitDelay"=dword:0000001e, sets TIME_WAIT parameter to 30 seconds, default is 240.
"MaxUserPort"=dword:00001b58, sets the maximum of open ports to 7000 or higher, default is 5000.
A sample registry file is below:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"MaxUserPort"=dword:00001b58
"TcpTimedWaitDelay"=dword:0000001e
By default, JCR Values are stored in the Workspace Data container along with the JCR structure (i.e. Nodes and Properties). eXo JCR offers an additional option of storing JCR Values separately from the Workspace Data container, which can be extremely helpful for keeping Binary Large Objects (BLOBs), for example (see [TODO Binary values processing link]).
Value storage configuration is a part of Repository configuration, find more details there.
Tree-based storage is recommended for most cases. If you run an application on Amazon EC2, the S3 option may be interesting for your architecture. Simple 'flat' storage is fast at creating/deleting values and might be a compromise for small storages.
Holds Values in a tree-like structure of filesystem files. The path property points to the root directory where the files are stored.
This is the recommended type of external storage; it can contain a large number of files, limited only by disk/volume free space.
A disadvantage is the longer time taken by Value deletion, due to the removal of unused tree nodes.
<value-storage id="Storage #1" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage"> <properties> <property name="path" value="data/values"/> </properties> <filters> <filter property-type="Binary" min-value-size="1M"/> </filters>
Where:
id - the value storage unique identifier, used for linking with properties stored in the workspace container
path - the location where value files will be stored
Each file value storage can have filters for incoming values. A filter can match values by property type (property-type), property name (property-name), ancestor path (ancestor-path) and/or size of the stored values (min-value-size, in bytes). In the code sample we use a filter with property-type and min-value-size only, i.e. a storage for binary values with a size greater than 1MB. It is recommended to store properties with large values in a file value storage only.
Another example shows a value storage with a different location for large files (a filter with a min-value-size of 20MB). A value storage uses ORed logic when selecting a filter: the first filter in the list is asked first, and if it does not match, the next one is called, etc. Here a value matching the 20MB min-value-size filter will be stored under "data/20Mvalues", all others under "data/values".
<value-storages> <value-storage id="Storage #1" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage"> <properties> <property name="path" value="data/20Mvalues"/> </properties> <filters> <filter property-type="Binary" min-value-size="20M"/> </filters> </value-storage> <value-storage id="Storage #2" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage"> <properties> <property name="path" value="data/values"/> </properties> <filters> <filter property-type="Binary" min-value-size="1M"/> </filters> </value-storage> </value-storages>
Not recommended for use in production due to the limited capacity of flat directories on most file systems.
But if you are sure about your file system, or the amount of data is small, it may be useful for you, as it offers faster Value removal.
Holds Values in flat filesystem files. The path property points to the root directory where the files are stored.
<value-storage id="Storage #1" class="org.exoplatform.services.jcr.impl.storage.value.fs.SimpleFileValueStorage"> <properties> <property name="path" value="data/values"/> </properties> <filters> <filter property-type="Binary" min-value-size="1M"/> </filters>
eXo JCR supports a content-addressable storage feature for storing Values.
Content-addressable storage, also referred to as associative storage and abbreviated CAS, is a mechanism for storing information that can be retrieved based on its content, not its storage location. It is typically used for high-speed storage and retrieval of fixed content, such as documents stored for compliance with government regulations.
Content-addressable Value storage stores unique content only once. Different properties (values) with the same content are stored as one data file shared between those values; in other words, the Value content is shared across Values in the storage and kept in a single physical file.
The storage size is decreased for applications that handle potentially identical data in their content.
For example: if you have 100 different properties containing the same data (e.g. mail attachment) the storage stores only one single file. The file will be shared with all referencing properties.
If a property Value changes, it is stored in an additional file; alternatively, the file is shared with other Values pointing to the same content.
The storage calculates the Value content address each time the property is changed. CAS write operations are therefore much more expensive than in non-CAS storages.
Content address calculation is based on java.security.MessageDigest hash computation and has been tested with the MD5 and SHA1 algorithms.
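To illustrate the idea (this is not the actual storage implementation), hashing the value content with java.security.MessageDigest yields the content address; identical content always maps to the same address, so identical values can share one stored file:

import java.security.MessageDigest;

public class ContentAddressSketch {
  // Compute a hex content address for a value's bytes
  public static String address(byte[] content, String algorithm) throws Exception {
    MessageDigest digest = MessageDigest.getInstance(algorithm); // e.g. "MD5" or "SHA-1"
    byte[] hash = digest.digest(content);
    StringBuilder hex = new StringBuilder();
    for (byte b : hash) {
      hex.append(String.format("%02x", b));
    }
    return hex.toString();
  }

  public static void main(String[] args) throws Exception {
    // Two properties with the same content map to the same address
    System.out.println(address("mail attachment".getBytes("UTF-8"), "MD5"));
    System.out.println(address("mail attachment".getBytes("UTF-8"), "MD5"));
  }
}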
CAS storage works most efficiently on data that does not change often. For data that changes frequently, CAS is not as efficient as location-based addressing.
CAS support can be enabled for Tree and Simple File Value Storage types.
To enable CAS support just configure it in JCR Repositories configuration like we do for other Value Storages.
<workspaces> <workspace name="ws"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr"/> <property name="dialect" value="oracle"/> <property name="multi-db" value="false"/> <property name="update-storage" value="false"/> <property name="max-buffer-size" value="200k"/> <property name="swap-directory" value="target/temp/swap/ws"/> </properties> <value-storages> <!------------------- here -----------------------> <value-storage id="ws" class="org.exoplatform.services.jcr.impl.storage.value.fs.CASableTreeFileValueStorage"> <properties> <property name="path" value="target/temp/values/ws"/> <property name="digest-algo" value="MD5"/> <property name="vcas-type" value="org.exoplatform.services.jcr.impl.storage.value.cas.JDBCValueContentAddressStorageImpl"/> <property name="jdbc-source-name" value="jdbcjcr"/> <property name="jdbc-dialect" value="oracle"/> </properties> <filters> <filter property-type="Binary"/> </filters> </value-storage> </value-storages>
Properties:
digest-algo - the digest hash algorithm (MD5 and SHA1 were tested);
vcas-type - the Value CAS internal data type; a JDBC-backed implementation is currently provided: org.exoplatform.services.jcr.impl.storage.value.cas.JDBCValueContentAddressStorageImpl;
jdbc-source-name - a JDBCValueContentAddressStorageImpl-specific parameter: the database that will be used to save CAS metadata. It is simplest to use the same one as in the workspace container;
jdbc-dialect - a JDBCValueContentAddressStorageImpl-specific parameter: the database dialect. It is simplest to use the same one as in the workspace container.
JCR index configuration. You can find this file here:
.../portal/WEB-INF/conf/jcr/repository-configuration.xml
<repository-service default-repository="db1"> <repositories> <repository name="db1" system-workspace="ws" default-workspace="ws"> .... <workspaces> <workspace name="ws"> .... <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex"> <properties> <property name="index-dir" value="${java.io.tmpdir}/temp/index/db1/ws" /> <property name="synonymprovider-class" value="org.exoplatform.services.jcr.impl.core.query.lucene.PropertiesSynonymProvider" /> <property name="synonymprovider-config-path" value="/synonyms.properties" /> <property name="indexing-config-path" value="/indexing-configuration.xml" /> <property name="query-class" value="org.exoplatform.services.jcr.impl.core.query.QueryImpl" /> </properties> </query-handler> ... </workspace> </workspaces> </repository> </repositories> </repository-service>
Table 6.1.
Parameter | Default | Description | Since |
---|---|---|---|
index-dir | none | The location of the index directory. This parameter is mandatory. Up to 1.9 this parameter was called "indexDir" | 1.0 |
use-compoundfile | true | Advises lucene to use compound files for the index files. | 1.9 |
min-merge-docs | 100 | Minimum number of nodes in an index until segments are merged. | 1.9 |
volatile-idle-time | 3 | Idle time in seconds until the volatile index part is moved to a persistent index even though minMergeDocs is not reached. | 1.9 |
max-merge-docs | Integer.MAX_VALUE | Maximum number of nodes in segments that will be merged. The default value changed in JCR 1.9 to Integer.MAX_VALUE. | 1.9 |
merge-factor | 10 | Determines how often segment indices are merged. | 1.9 |
max-field-length | 10000 | The number of words that are fulltext indexed at most per property. | 1.9 |
cache-size | 1000 | Size of the document number cache. This cache maps uuids to lucene document numbers | 1.9 |
force-consistencycheck | false | Runs a consistency check on every startup. If false, a consistency check is only performed when the search index detects a prior forced shutdown. | 1.9 |
auto-repair | true | Errors detected by a consistency check are automatically repaired. If false, errors are only written to the log. | 1.9 |
query-class | QueryImpl | Class name that implements the javax.jcr.query.Query interface. This class must also extend the class org.exoplatform.services.jcr.impl.core.query.AbstractQueryImpl. | 1.9 |
document-order | true | If true and the query does not contain an 'order by' clause, result nodes will be in document order. For better performance when queries return a lot of nodes set to 'false'. | 1.9 |
result-fetch-size | Integer.MAX_VALUE | The number of results when a query is executed. Default value: Integer.MAX_VALUE (-> all). | 1.9 |
excerptprovider-class | DefaultXMLExcerpt | The name of the class that implements org.exoplatform.services.jcr.impl.core.query.lucene.ExcerptProvider and should be used for the rep:excerpt() function in a query. | 1.9 |
support-highlighting | false | If set to true additional information is stored in the index to support highlighting using the rep:excerpt() function. | 1.9 |
synonymprovider-class | none | The name of a class that implements org.exoplatform.services.jcr.impl.core.query.lucene.SynonymProvider. The default value is null (-> not set). | 1.9 |
synonymprovider-config-path | none | The path to the synonym provider configuration file. This path interpreted relative to the path parameter. If there is a path element inside the SearchIndex element, then this path is interpreted relative to the root path of the path. Whether this parameter is mandatory depends on the synonym provider implementation. The default value is null (-> not set). | 1.9 |
indexing-configuration-path | none | The path to the indexing configuration file. | 1.9 |
indexing-configuration-class | IndexingConfigurationImpl | The name of the class that implements org.exoplatform.services.jcr.impl.core.query.lucene.IndexingConfiguration. | 1.9 |
force-consistencycheck | false | If set to true a consistency check is performed depending on the parameter forceConsistencyCheck. If set to false no consistency check is performed on startup, even if a redo log had been applied. | 1.9 |
spellchecker-class | none | The name of a class that implements org.exoplatform.services.jcr.impl.core.query.lucene.SpellChecker. | 1.9 |
errorlog-size | 50(Kb) | The default size of error log file in Kb. | 1.9 |
upgrade-index | false | Allows JCR to convert an existing index into the new format. Also it is possible to set this property via system property, for example: -Dupgrade-index=true Indexes before JCR 1.12 will not run with JCR 1.12. Hence you have to run an automatic migration: Start JCR with -Dupgrade-index=true. The old index format is then converted in the new index format. After the conversion the new format is used. On the next start you don't need this option anymore. The old index is replaced and a back conversion is not possible - therefore better take a backup of the index before. (Only for migrations from JCR 1.9 and later.) | 1.12 |
analyzer | org.apache.lucene.analysis.standard.StandardAnalyzer | Class name of a lucene analyzer to use for fulltext indexing of text. | 1.12 |
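For reference, queries served by the configured query handler are issued through the standard JCR query API. A minimal sketch (the query text and node type are illustrative only):

import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

public class FullTextQuerySketch {
  public static void findNodes(Session session) throws Exception {
    QueryManager queryManager = session.getWorkspace().getQueryManager();
    // XPath query executed against the Lucene index built by the SearchIndex query handler
    Query query = queryManager.createQuery(
        "//element(*, nt:unstructured)[jcr:contains(., 'foo')]", Query.XPATH);
    QueryResult result = query.execute();
    for (NodeIterator nodes = result.getNodes(); nodes.hasNext();) {
      Node node = nodes.nextNode();
      System.out.println(node.getPath());
    }
  }
}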
The global search index is configured in the above-mentioned configuration file (portal/WEB-INF/conf/jcr/repository-configuration.xml) in the "query-handler" tag.
<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
In fact, when using Lucene you should always use the same analyzer for indexing and for querying; otherwise the results are unpredictable. You don't have to worry about this: eXo JCR does it for you automatically. If you don't like the StandardAnalyzer configured by default, just replace it with your own.
If you don't have a handy QueryHandler, you will learn how to create a customized handler in 5 minutes.
By default, eXo JCR uses the Lucene StandardAnalyzer to index contents. This analyzer applies some standard filters in the method that analyzes the content:
public TokenStream tokenStream(String fieldName, Reader reader) {
  StandardTokenizer tokenStream = new StandardTokenizer(reader, replaceInvalidAcronym);
  tokenStream.setMaxTokenLength(maxTokenLength);
  TokenStream result = new StandardFilter(tokenStream);
  result = new LowerCaseFilter(result);
  result = new StopFilter(result, stopSet);
  return result;
}
The first one (StandardFilter) removes 's (as 's in "Peter's") from the end of words and removes dots from acronyms.
The second one (LowerCaseFilter) normalizes token text to lower case.
The last one (StopFilter) removes stop words from a token stream. The stop set is defined in the analyzer.
For specific cases, you may wish to use additional filters like ISOLatin1AccentFilter, which replaces accented characters in the ISO Latin 1 character set (ISO-8859-1) by their unaccented equivalents.
In order to use a different filter, you have to create a new analyzer, and a new search index to use the analyzer. You put it in a jar, which is deployed with your application.
The ISOLatin1AccentFilter is not present in the current Lucene version used by eXo. You can use the attached file. You can also create your own filter; the relevant method is
public final Token next(final Token reusableToken) throws java.io.IOException
which defines how chars are read and used by the filter.
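As an illustration only, a hypothetical filter written against the reusable-token method quoted above might look like this (Lucene 2.x style API; adapt it to the Lucene version actually shipped with your eXo JCR):

import java.io.IOException;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

// Hypothetical filter that lower-cases every token it receives
public class MyLowerCaseFilter extends TokenFilter {

  public MyLowerCaseFilter(TokenStream input) {
    super(input);
  }

  public Token next(final Token reusableToken) throws IOException {
    Token token = input.next(reusableToken);
    if (token == null) {
      return null; // end of the token stream
    }
    char[] buffer = token.termBuffer();
    for (int i = 0; i < token.termLength(); i++) {
      buffer[i] = Character.toLowerCase(buffer[i]);
    }
    return token;
  }
}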
The analyzer has to extend org.apache.lucene.analysis.standard.StandardAnalyzer and override the method
public TokenStream tokenStream(String fieldName, Reader reader)
to put in your own filters. You can have a glance at the example analyzer attached to this article.
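A minimal sketch of such an analyzer; the appended MyLowerCaseFilter is the hypothetical filter from the previous sketch and stands in for whatever filter you actually want to add (for example an accent-folding filter):

import java.io.Reader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

// Hypothetical analyzer: keep the standard filter chain and append a custom filter
public class MyAnalyzer extends StandardAnalyzer {

  public TokenStream tokenStream(String fieldName, Reader reader) {
    // StandardTokenizer + StandardFilter + LowerCaseFilter + StopFilter
    TokenStream result = super.tokenStream(fieldName, reader);
    // Append your own filter here
    result = new MyLowerCaseFilter(result);
    return result;
  }
}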
Now that we have the analyzer, we have to write the SearchIndex which will use it. You have to extend org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex, write the constructor to set the right analyzer, and implement the method
public Analyzer getAnalyzer() { return MyAnalyzer; }
to return your analyzer. You can see the attached SearchIndex.
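Putting it together, a sketch of the custom SearchIndex following the pattern above (whether additional constructor wiring is needed depends on the JCR version, so treat this as an outline):

import org.apache.lucene.analysis.Analyzer;
import org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex;

// Hypothetical SearchIndex that forces the custom analyzer
public class MySearchIndex extends SearchIndex {

  private final Analyzer myAnalyzer = new MyAnalyzer();

  public Analyzer getAnalyzer() {
    return myAnalyzer;
  }
}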
Since version 1.12, the Analyzer can be set directly in the configuration, so creating a new SearchIndex only for a new Analyzer is redundant.
In portal/WEB-INF/conf/jcr/repository-configuration.xml, you have to replace each
<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
by your own class:
<query-handler class="mypackage.indexation.MySearchIndex">
In portal/WEB-INF/conf/jcr/repository-configuration.xml, you have to add the "analyzer" parameter to each query-handler config:
<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex"> <properties> ... <property name="analyzer" value="org.exoplatform.services.jcr.impl.core.MyAnalyzer"/> ... </properties> </query-handler>
When you start eXo, your SearchIndex will start to index contents with the specified filters.
Starting with version 1.9, the default search index implementation in JCR allows you to control which properties of a node are indexed. You also can define different analyzers for different nodes.
The configuration parameter is called indexingConfiguration and is not set by default, which means all properties of a node are indexed.
If you wish to configure the indexing behavior you need to add a parameter to the query-handler element in your configuration file.
<param name="indexing-configuration-path" value="/indexing_configuration.xml"/>
To optimize the index size you can limit the node scope so that only certain properties of a node type are indexed.
With the below configuration only properties named Text are indexed for nodes of type nt:unstructured. This configuration also applies to all nodes whose type extends from nt:unstructured.
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <index-rule nodeType="nt:unstructured"> <property>Text</property> </index-rule> </configuration>
Please note that you have to declare the namespace prefixes in the configuration element that you are using throughout the XML file!
It is also possible to configure a boost value for the nodes that match the index rule. The default boost value is 1.0. Higher boost values (a reasonable range is 1.0 - 5.0) will yield a higher score value and appear as more relevant.
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <index-rule nodeType="nt:unstructured" boost="2.0"> <property>Text</property> </index-rule> </configuration>
If you do not wish to boost the complete node but only certain properties you can also provide a boost value for the listed properties:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <index-rule nodeType="nt:unstructured"> <property boost="3.0">Title</property> <property boost="1.5">Text</property> </index-rule> </configuration>
You may also add a condition to the index rule and have multiple rules with the same nodeType. The first index rule that matches will apply and all remaining ones are ignored:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <index-rule nodeType="nt:unstructured" boost="2.0" condition="@priority = 'high'"> <property>Text</property> </index-rule> <index-rule nodeType="nt:unstructured"> <property>Text</property> </index-rule> </configuration>
In the above example the first rule only applies if the nt:unstructured node has a priority property with a value 'high'. The condition syntax supports only the equals operator and a string literal.
You may also reference properties in the condition that are not on the current node:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <index-rule nodeType="nt:unstructured" boost="2.0" condition="ancestor::*/@priority = 'high'"> <property>Text</property> </index-rule> <index-rule nodeType="nt:unstructured" boost="0.5" condition="parent::foo/@priority = 'low'"> <property>Text</property> </index-rule> <index-rule nodeType="nt:unstructured" boost="1.5" condition="bar/@priority = 'medium'"> <property>Text</property> </index-rule> <index-rule nodeType="nt:unstructured"> <property>Text</property> </index-rule> </configuration>
The indexing configuration also allows you to specify the type of a node in the condition. Please note however that the type match must be exact. It does not consider sub types of the specified node type.
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <index-rule nodeType="nt:unstructured" boost="2.0" condition="element(*, nt:unstructured)/@priority = 'high'"> <property>Text</property> </index-rule> </configuration>
By default, all configured properties are fulltext indexed if they are of type STRING, and they are included in the node scope index. A node scope search normally finds all nodes of an index; that is, the query jcr:contains(., 'foo') returns all nodes that have a string property containing the word 'foo'. You can explicitly exclude a property from the node scope index:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <index-rule nodeType="nt:unstructured"> <property nodeScopeIndex="false">Text</property> </index-rule> </configuration>
Sometimes it is useful to include the contents of descendant nodes in a single node, to make it easier to search on content that is scattered across multiple nodes.
JCR allows you to define index aggregates based on relative path patterns and primary node types.
The following example creates an index aggregate on nt:file that includes the content of the jcr:content node:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <aggregate primaryType="nt:file"> <include>jcr:content</include> </aggregate> </configuration>
You can also restrict the included nodes to a certain type:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <aggregate primaryType="nt:file"> <include primaryType="nt:resource">jcr:content</include> </aggregate> </configuration>
You may also use the * to match all child nodes:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <aggregate primaryType="nt:file"> <include primaryType="nt:resource">*</include> </aggregate> </configuration>
If you wish to include nodes up to a certain depth below the current node you can add multiple include elements. E.g. the nt:file node may contain a complete XML document under jcr:content:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <aggregate primaryType="nt:file"> <include>*</include> <include>*/*</include> <include>*/*/*</include> </aggregate> </configuration>
In this configuration section you define how a property has to be analyzed. If there is an analyzer configuration for a property, this analyzer is used for indexing and searching of this property. For example:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <analyzers> <analyzer class="org.apache.lucene.analysis.KeywordAnalyzer"> <property>mytext</property> </analyzer> <analyzer class="org.apache.lucene.analysis.WhitespaceAnalyzer"> <property>mytext2</property> </analyzer> </analyzers> </configuration>
The configuration above means that the property "mytext" for the entire workspace is indexed (and searched) with the Lucene KeywordAnalyzer, and property "mytext2" with the WhitespaceAnalyzer. Using different analyzers for different languages is particularly useful.
The WhitespaceAnalyzer tokenizes a property, the KeywordAnalyzer takes the property as a whole.
When using analyzers, you may encounter an unexpected behavior when searching within a property compared to searching within a node scope. The reason is that the node scope always uses the global analyzer.
Let's suppose that the property "mytext" contains the text "testing my analyzers" and that you haven't configured any analyzers for the property "mytext" (and have not changed the default analyzer in SearchIndex).
If your query is for example:
xpath = "//*[jcr:contains(mytext,'analyzer')]"
This xpath does not return a hit in the node with the property above and default analyzers.
Also a search on the node scope
xpath = "//*[jcr:contains(.,'analyzer')]"
won't give a hit. Realize that you can only set specific analyzers on a node property, and that the node scope indexing/analyzing is always done with the globally defined analyzer in the SearchIndex element.
Now, if you change the analyzer used to index the "mytext" property above to
<analyzer class="org.apache.lucene.analysis.de.GermanAnalyzer"> <property>mytext</property> </analyzer>
and you do the same search again, then for
xpath = "//*[jcr:contains(mytext,'analyzer')]"
you would get a hit because of the word stemming (analyzers - analyzer).
The other search,
xpath = "//*[jcr:contains(.,'analyzer')]"
still would not give a result, since the node scope is indexed with the global analyzer, which in this case does not take into account any word stemming.
In conclusion, be aware that when using analyzers for specific properties, you might find a hit in a property for some search text, and you do not find a hit with the same search text in the node scope of the property!
Both index rules and index aggregates influence how content is indexed in JCR. If you change the configuration the existing content is not automatically re-indexed according to the new rules. You therefore have to manually re-index the content when you change the configuration!
Whenever a relational database is used to store the multilingual text data of the eXo Java Content Repository, the configuration must be adapted to support UTF-8 encoding. Here is a short HOWTO for several supported RDBMS, with examples.
The configuration file you have to modify: .../webapps/portal/WEB-INF/conf/jcr/repository-configuration.xml
The datasource jdbcjcr used in the examples can be configured via the InitialContextInitializer component.
In order to run a multilanguage JCR on an Oracle backend, Unicode encoding for the character set should be applied to the database. Other Oracle globalization parameters have no impact. The only property to modify is NLS_CHARACTERSET. We have tested NLS_CHARACTERSET = AL32UTF8 and it works well for many European and Asian languages.
Example of database configuration (used for JCR testing):
NLS_LANGUAGE             AMERICAN
NLS_TERRITORY            AMERICA
NLS_CURRENCY             $
NLS_ISO_CURRENCY         AMERICA
NLS_NUMERIC_CHARACTERS   .,
NLS_CHARACTERSET         AL32UTF8
NLS_CALENDAR             GREGORIAN
NLS_DATE_FORMAT          DD-MON-RR
NLS_DATE_LANGUAGE        AMERICAN
NLS_SORT                 BINARY
NLS_TIME_FORMAT          HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT       HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT  DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY        $
NLS_COMP                 BINARY
NLS_LENGTH_SEMANTICS     BYTE
NLS_NCHAR_CONV_EXCP      FALSE
NLS_NCHAR_CHARACTERSET   AL16UTF16
JCR 1.12.x doesn't use NVARCHAR columns, so that the value of the parameter NLS_NCHAR_CHARACTERSET does not matter for JCR.
Create database with Unicode encoding and use Oracle dialect for the Workspace Container:
<workspace name="collaboration"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr" /> <property name="dialect" value="oracle" /> <property name="multi-db" value="false" /> <property name="max-buffer-size" value="200k" /> <property name="swap-directory" value="target/temp/swap/ws" /> </properties> .....
DB2 Universal Database (DB2 UDB) supports UTF-8 and UTF-16/UCS-2. When a Unicode database is created, CHAR, VARCHAR, LONG VARCHAR data are stored in UTF-8 form. It's enough for JCR multi-lingual support.
Example of UTF-8 database creation:
DB2 CREATE DATABASE dbname USING CODESET UTF-8 TERRITORY US
Create database with UTF-8 encoding and use db2 dialect for Workspace Container on DB2 v.9 and higher:
<workspace name="collaboration"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr" /> <property name="dialect" value="db2" /> <property name="multi-db" value="false" /> <property name="max-buffer-size" value="200k" /> <property name="swap-directory" value="target/temp/swap/ws" /> </properties> .....
For DB2 v.8.x support change the property "dialect" to db2v8.
The JCR MySQL backend requires the special dialect MySQL-UTF8 for internationalization support. The database default charset, however, should be latin1 so that the limited index space is used effectively (1000 bytes for the MyISAM engine, 767 for InnoDB). If the database default charset is multibyte, a JCR database initialization error is thrown concerning an index creation failure. In other words, JCR can work on any single-byte default database charset, with UTF-8 supported by the MySQL server, but we have tested it only with the latin1 database default charset.
Repository configuration, workspace container entry example:
<workspace name="collaboration"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr" /> <property name="dialect" value="mysql-utf8" /> <property name="multi-db" value="false" /> <property name="max-buffer-size" value="200k" /> <property name="swap-directory" value="target/temp/swap/ws" /> </properties> .....
On a PostgreSQL backend, multilingual support can be enabled in different ways:
Using the locale features of the operating system to provide locale-specific collation order, number formatting, translated messages, and other aspects. UTF-8 is widely used on Linux distributions by default, so it can be useful in such a case.
Providing a number of different character sets defined in the PostgreSQL server, including multiple-byte character sets, to support storing text in any language, and providing character-set translation between client and server. We recommend using a UTF-8 database charset; it allows any-to-any conversion and makes this issue transparent for JCR.
Create database with UTF-8 encoding and use PgSQL dialect for Workspace Container:
<workspace name="collaboration"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr" /> <property name="dialect" value="pgsql" /> <property name="multi-db" value="false" /> <property name="max-buffer-size" value="200k" /> <property name="swap-directory" value="target/temp/swap/ws" /> </properties> .....
JCR Repository Service uses the org.exoplatform.services.jcr.config.RepositoryServiceConfiguration component to read its configuration.
<component> <key>org.exoplatform.services.jcr.config.RepositoryServiceConfiguration</key> <type>org.exoplatform.services.jcr.impl.config.RepositoryServiceConfigurationImpl</type> <init-params> <value-param> <name>conf-path</name> <description>JCR configuration file</description> <value>/conf/standalone/exo-jcr-config.xml</value> </value-param> </init-params> </component>
In this example the Repository Service will read the configuration from the file /conf/standalone/exo-jcr-config.xml.
But in some cases it is required to change the configuration on the fly, be sure that the new one will be used, and avoid modifying the original file.
In this case the configuration persister feature can be used; it allows the configuration to be stored in a different location.
On startup, the RepositoryServiceConfiguration component checks whether a configuration persister was configured. In that case it uses the provided ConfigurationPersister implementation class to instantiate the persister object.
Configuration with persister:
<component> <key>org.exoplatform.services.jcr.config.RepositoryServiceConfiguration</key> <type>org.exoplatform.services.jcr.impl.config.RepositoryServiceConfigurationImpl</type> <init-params> <value-param> <name>conf-path</name> <description>JCR configuration file</description> <value>/conf/standalone/exo-jcr-config.xml</value> </value-param> <properties-param> <name>working-conf</name> <description>working-conf</description> <property name="source-name" value="jdbcjcr" /> <property name="dialect" value="mysql" /> <property name="persister-class-name" value="org.exoplatform.services.jcr.impl.config.JDBCConfigurationPersister" /> </properties-param> </init-params> </component>
Where:
source-name - JNDI source name configured in the InitialContextInitializer component (sourceName prior to v1.9). Find more in database configuration.
dialect - SQL dialect which will be used with the database from source-name. Find more in database configuration.
persister-class-name - class name of the ConfigurationPersister interface implementation (persisterClassName prior to v1.9).
ConfigurationPersister interface:
/**
 * Init persister.
 * Used by RepositoryServiceConfiguration on init.
 * @param params - persister properties
 */
void init(PropertiesParam params) throws RepositoryConfigurationException;

/**
 * Read config data.
 * @return - config data stream
 */
InputStream read() throws RepositoryConfigurationException;

/**
 * Create table, write data.
 * @param confData - config data stream
 */
void write(InputStream confData) throws RepositoryConfigurationException;

/**
 * Tell if the config exists.
 * @return - flag
 */
boolean hasConfig() throws RepositoryConfigurationException;
JCR Core implementation contains a persister which stores the repository configuration in a relational database using JDBC calls - org.exoplatform.services.jcr.impl.config.JDBCConfigurationPersister.
The implementation will create and use the table JCR_CONFIG in the provided database.
But a developer can implement their own persister for a particular use case, as sketched below.
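For illustration, here is a minimal sketch of such a custom persister that keeps the configuration in a plain file. The package names of ConfigurationPersister, PropertiesParam and RepositoryConfigurationException, as well as the "file-path" property name, are assumptions made for this example; check them against your eXo JCR version.
// A minimal sketch of a custom persister that keeps the configuration in a plain file.
// Package names and the "file-path" property are assumptions made for this example.
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.exoplatform.container.xml.PropertiesParam;
import org.exoplatform.services.jcr.config.ConfigurationPersister;
import org.exoplatform.services.jcr.config.RepositoryConfigurationException;

public class FileConfigurationPersister implements ConfigurationPersister {

  private File configFile;

  public void init(PropertiesParam params) throws RepositoryConfigurationException {
    // "file-path" is an example property name, not a standard eXo parameter
    String path = params.getProperty("file-path");
    if (path == null)
      throw new RepositoryConfigurationException("file-path property is not set");
    configFile = new File(path);
  }

  public boolean hasConfig() throws RepositoryConfigurationException {
    return configFile.exists() && configFile.length() > 0;
  }

  public InputStream read() throws RepositoryConfigurationException {
    try {
      return new FileInputStream(configFile);
    } catch (IOException e) {
      throw new RepositoryConfigurationException("Cannot read " + configFile + ": " + e);
    }
  }

  public void write(InputStream confData) throws RepositoryConfigurationException {
    try {
      OutputStream out = new FileOutputStream(configFile);
      try {
        byte[] buf = new byte[4096];
        int len;
        while ((len = confData.read(buf)) != -1)
          out.write(buf, 0, len);
      } finally {
        out.close();
      }
    } catch (IOException e) {
      throw new RepositoryConfigurationException("Cannot write " + configFile + ": " + e);
    }
  }
}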
To deploy eXo JCR to JBoss AS, follow these steps:
Download the latest version of the eXo JCR ear distribution.
Copy <jcr.ear> into <%jboss_home%/server/default/deploy>
Put exo-configuration.xml to the root <%jboss_home%/exo-configuration.xml>
Configure JAAS by inserting XML fragment shown below into <%jboss_home%/server/default/conf/login-config.xml>
<application-policy name="exo-domain"> <authentication> <login-module code="org.exoplatform.services.security.j2ee.JbossLoginModule" flag="required"></login-module> </authentication> </application-policy>
Ensure that you use the JBossTS Transaction Service and the JBossCache Transaction Manager. Your exo-configuration.xml must contain the following parts:
<component> <key>org.jboss.cache.transaction.TransactionManagerLookup</key> <type>org.jboss.cache.GenericTransactionManagerLookup</type> </component> <component> <key>org.exoplatform.services.transaction.TransactionService</key> <type>org.exoplatform.services.transaction.jbosscache.JBossTransactionsService</type> <init-params> <value-param> <name>timeout</name> <value>300</value> </value-param> </init-params> </component>
Start server:
bin/run.sh for Unix
bin/run.bat for Windows
Try accessing http://localhost:8080/browser with root/exo as login/password. If you have done everything right, you will get access to the repository browser.
To configure the repository manually, create a new configuration file (e.g. exo-jcr-configuration.xml). For details see JCR Configuration. Your configuration must look like:
<repository-service default-repository="repository1"> <repositories> <repository name="repository1" system-workspace="ws1" default-workspace="ws1"> <security-domain>exo-domain</security-domain> <access-control>optional</access-control> <authentication-policy>org.exoplatform.services.jcr.impl.core.access.JAASAuthenticator</authentication-policy> <workspaces> <workspace name="ws1"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr" /> <property name="dialect" value="oracle" /> <property name="multi-db" value="false" /> <property name="update-storage" value="false" /> <property name="max-buffer-size" value="200k" /> <property name="swap-directory" value="../temp/swap/production" /> </properties> <value-storages> see "Value storage configuration" part. </value-storages> </container> <initializer class="org.exoplatform.services.jcr.impl.core.ScratchWorkspaceInitializer"> <properties> <property name="root-nodetype" value="nt:unstructured" /> </properties> </initializer> <cache enabled="true" class="org.exoplatform.services.jcr.impl.dataflow.persistent.jbosscache.JBossCacheWorkspaceStorageCache"> see "Cache configuration" part. </cache> <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex"> see "Indexer configuration" part. </query-handler> <lock-manager class="org.exoplatform.services.jcr.impl.core.lock.jbosscache.CacheableLockManagerImpl"> see "Lock Manager configuration" part. </lock-manager> </workspace> <workspace name="ws2"> ... </workspace> <workspace name="wsN"> ... </workspace> </workspaces> </repository> </repositories> </repository-service>
and update RepositoryServiceConfiguration configuration in exo-configuration.xml to use this file:
<component> <key>org.exoplatform.services.jcr.config.RepositoryServiceConfiguration</key> <type>org.exoplatform.services.jcr.impl.config.RepositoryServiceConfigurationImpl</type> <init-params> <value-param> <name>conf-path</name> <description>JCR configuration file</description> <value>exo-jcr-configuration.xml</value> </value-param> </init-params> </component>
Every node of the cluster MUST have the same Network File System mounted, with read and write permissions on it.
"/mnt/tornado" - path to the mounted Network File System (all cluster nodes must use the same NFS)
Every node of cluster MUST use the same database
The same clusters on different nodes MUST have the same cluster names (e.g. if the Indexer cluster in the workspace production on the first node has the name "production_indexer_cluster", then the indexer clusters in the workspace production on all other nodes MUST have the same name "production_indexer_cluster").
The configuration of every workspace in the repository must contain the following parts:
<value-storages> <value-storage id="system" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage"> <properties> <property name="path" value="/mnt/tornado/temp/values/production" /> <!--path within NFS where ValueStorage will hold it's data--> </properties> <filters> <filter property-type="Binary" /> </filters> </value-storage> </value-storages>
<cache enabled="true" class="org.exoplatform.services.jcr.impl.dataflow.persistent.jbosscache.JBossCacheWorkspaceStorageCache"> <properties> <property name="jbosscache-configuration" value="jar:/conf/portal/test-jbosscache-data.xml" /> <!-- path to JBoss Cache configuration for data storage --> <property name="jgroups-configuration" value="jar:/conf/portal/udp-mux.xml" /> <!-- path to JGroups configuration --> <property name="jbosscache-cluster-name" value="JCR_Cluster_cache_production" /> <!-- JBoss Cache data storage cluster name --> <property name="jgroups-multiplexer-stack" value="true" /> </properties> </cache>
<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex"> <properties> <property name="changesfilter-class" value="org.exoplatform.services.jcr.impl.core.query.jbosscache.JBossCacheIndexChangesFilter" /> <property name="index-dir" value="/mnt/tornado/temp/jcrlucenedb/production" /> <!-- path within NFS where ValueStorage will hold it's data --> <property name="jbosscache-configuration" value="jar:/conf/portal/test-jbosscache-indexer.xml" /> <!-- path to JBoss Cache configuration for indexer --> <property name="jgroups-configuration" value="jar:/conf/portal/udp-mux.xml" /> <!-- path to JGroups configuration --> <property name="jbosscache-cluster-name" value="JCR_Cluster_indexer_production" /> <!-- JBoss Cache indexer cluster name --> <property name="jgroups-multiplexer-stack" value="true" /> </properties> </query-handler>
<lock-manager class="org.exoplatform.services.jcr.impl.core.lock.jbosscache.CacheableLockManagerImpl"> <properties> <property name="time-out" value="15m" /> <property name="jbosscache-configuration" value="jar:/conf/portal/test-jbosscache-lock.xml" /> <!-- path to JBoss Cache configuration for lock manager --> <property name="jgroups-configuration" value="jar:/conf/portal/udp-mux.xml" /> <!-- path to JGroups configuration --> <property name="jgroups-multiplexer-stack" value="true" /> <property name="jbosscache-cluster-name" value="JCR_Cluster_lock_production" /> <!-- JBoss Cache locks cluster name --> <property name="jbosscache-cl-cache.jdbc.table.name" value="jcrlocks_production"/> <!-- the name of the DB table where lock's data will be stored --> <property name="jbosscache-cl-cache.jdbc.table.create" value="true"/> <property name="jbosscache-cl-cache.jdbc.table.drop" value="false"/> <property name="jbosscache-cl-cache.jdbc.table.primarykey" value="jcrlocks_production_pk"/> <property name="jbosscache-cl-cache.jdbc.fqn.column" value="fqn"/> <property name="jbosscache-cl-cache.jdbc.node.column" value="node"/> <property name="jbosscache-cl-cache.jdbc.parent.column" value="parent"/> <property name="jbosscache-cl-cache.jdbc.datasource" value="jdbcjcr"/> </properties> </lock-manager>
Each of the mentioned components uses an instance of JBoss Cache for caching in a clustered environment, so every element has its own transport and has to be configured in a proper way. As usual, workspaces have similar configurations but with different cluster names and maybe some other parameters. The simplest way to configure them is to define a separate configuration file for each component in each workspace:
<property name="jbosscache-configuration" value="conf/standalone/test-jbosscache-lock-db1-ws1.xml" />
But if there are many workspaces, configuring them in such a way can be painful and hard to manage. eXo JCR offers a template-based configuration for JBoss Cache instances. You can have one template for the Lock Manager, one for the Indexer and one for the data container and use them in all workspaces, defining a map of substitution parameters in the main configuration file. Simply define ${jbosscache-<parameter name>} inside the XML template and list the correct value in the JCR configuration file just below "jbosscache-configuration", as shown in the examples below (a small sketch of the substitution mechanism follows them):
template:
... <clustering mode="replication" clusterName="${jbosscache-cluster-name}"> <stateRetrieval timeout="20000" fetchInMemoryState="false" /> ...
and JCR configuration file:
... <property name="jbosscache-configuration" value="jar:/conf/portal/jbosscache-lock.xml" /> <property name="jbosscache-cluster-name" value="JCR-cluster-locks-db1-ws" /> ...
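Under the hood this is plain placeholder substitution: every ${...} token in the template is replaced with the value taken from the workspace configuration before the JBoss Cache instance is created. The sketch below only illustrates the idea and is not the actual eXo implementation:
import java.util.HashMap;
import java.util.Map;

public class TemplateDemo {
  // Replaces ${key} tokens in the template with values from the map.
  static String fill(String template, Map<String, String> params) {
    String result = template;
    for (Map.Entry<String, String> e : params.entrySet())
      result = result.replace("${" + e.getKey() + "}", e.getValue());
    return result;
  }

  public static void main(String[] args) {
    Map<String, String> params = new HashMap<String, String>();
    params.put("jbosscache-cluster-name", "JCR-cluster-locks-db1-ws");
    String template = "<clustering mode=\"replication\" clusterName=\"${jbosscache-cluster-name}\">";
    System.out.println(fill(template, params));
    // prints: <clustering mode="replication" clusterName="JCR-cluster-locks-db1-ws">
  }
}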
JGroups is used by JBoss Cache for network communications and transport in a clustered environment. If the property "jgroups-configuration" is defined in the component configuration, it will be injected into the JBoss Cache instance on startup.
<property name="jgroups-configuration" value="your/path/to/modified-udp.xml" />
As mentioned above, each component (lock manager, data container and query handler) for each workspace requires its own clustered environment. In other words, they have their own clusters with unique names. By default each cluster performs multicasts on a separate port. This configuration leads to great unnecessary overhead in the cluster. That is why JGroups offers the multiplexer feature, providing the ability to use one single channel for a set of clusters. This feature reduces network overhead and increases performance and stability of the application. To enable the multiplexer stack, you should define the appropriate configuration file (udp-mux.xml is pre-shipped with eXo JCR) and set "jgroups-multiplexer-stack" to "true".
<property name="jgroups-configuration" value="jar:/conf/portal/udp-mux.xml" /> <property name="jgroups-multiplexer-stack" value="true" />
The eXo JCR implementation is shipped with ready-to-use JBoss Cache configuration templates for JCR's components. They are situated in the application package in the /conf/portal/ folder.
The data container template is "jbosscache-data.xml":
<?xml version="1.0" encoding="UTF-8"?> <jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:jboss:jbosscache-core:config:3.1"> <locking useLockStriping="false" concurrencyLevel="50000" lockParentForChildInsertRemove="false" lockAcquisitionTimeout="20000" /> <clustering mode="replication" clusterName="${jbosscache-cluster-name}"> <stateRetrieval timeout="20000" fetchInMemoryState="false" /> <jgroupsConfig multiplexerStack="jcr.stack" /> <sync /> </clustering> <!-- Eviction configuration --> <eviction wakeUpInterval="5000"> <default algorithmClass="org.jboss.cache.eviction.LRUAlgorithm" actionPolicyClass="org.exoplatform.services.jcr.impl.dataflow.persistent.jbosscache.ParentNodeEvictionActionPolicy" eventQueueSize="1000000"> <property name="maxNodes" value="1000000" /> <property name="timeToLive" value="120000" /> </default> </eviction> </jbosscache>
The lock manager template is "jbosscache-lock.xml":
<?xml version="1.0" encoding="UTF-8"?> <jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:jboss:jbosscache-core:config:3.1"> <locking useLockStriping="false" concurrencyLevel="50000" lockParentForChildInsertRemove="false" lockAcquisitionTimeout="20000" /> <clustering mode="replication" clusterName="${jbosscache-cluster-name}"> <stateRetrieval timeout="20000" fetchInMemoryState="false" /> <jgroupsConfig multiplexerStack="jcr.stack" /> <sync /> </clustering> <loaders passivation="false" shared="true"> <preload> <node fqn="/" /> </preload> <loader class="org.jboss.cache.loader.JDBCCacheLoader" async="false" fetchPersistentState="false" ignoreModifications="false" purgeOnStartup="false"> <properties> cache.jdbc.table.name=${jbosscache-cl-cache.jdbc.table.name} cache.jdbc.table.create=${jbosscache-cl-cache.jdbc.table.create} cache.jdbc.table.drop=${jbosscache-cl-cache.jdbc.table.drop} cache.jdbc.table.primarykey=${jbosscache-cl-cache.jdbc.table.primarykey} cache.jdbc.fqn.column=${jbosscache-cl-cache.jdbc.fqn.column} cache.jdbc.fqn.type=${jbosscache-cl-cache.jdbc.fqn.type} cache.jdbc.node.column=${jbosscache-cl-cache.jdbc.node.column} cache.jdbc.node.type=${jbosscache-cl-cache.jdbc.node.type} cache.jdbc.parent.column=${jbosscache-cl-cache.jdbc.parent.column} cache.jdbc.datasource=${jbosscache-cl-cache.jdbc.datasource} </properties> </loader> </loaders> </jbosscache>
Table 10.2. Template variables
Variable |
---|
jbosscache-cluster-name |
jbosscache-cl-cache.jdbc.table.name |
jbosscache-cl-cache.jdbc.table.create |
jbosscache-cl-cache.jdbc.table.drop |
jbosscache-cl-cache.jdbc.table.primarykey |
jbosscache-cl-cache.jdbc.fqn.column |
jbosscache-cl-cache.jdbc.fqn.type |
jbosscache-cl-cache.jdbc.node.column |
jbosscache-cl-cache.jdbc.node.type |
jbosscache-cl-cache.jdbc.parent.column |
jbosscache-cl-cache.jdbc.datasource |
The query handler (indexer) template is "jbosscache-indexer.xml":
<?xml version="1.0" encoding="UTF-8"?> <jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:jboss:jbosscache-core:config:3.1"> <locking useLockStriping="false" concurrencyLevel="50000" lockParentForChildInsertRemove="false" lockAcquisitionTimeout="20000" /> <clustering mode="replication" clusterName="${jbosscache-cluster-name}"> <stateRetrieval timeout="20000" fetchInMemoryState="false" /> <jgroupsConfig multiplexerStack="jcr.stack" /> <sync /> </clustering> <!-- Eviction configuration --> <eviction wakeUpInterval="5000"> <default algorithmClass="org.jboss.cache.eviction.FIFOAlgorithm" eventQueueSize="1000000"> <property name="maxNodes" value="10000" /> <property name="minTimeToLive" value="60000" /> </default> </eviction> </jbosscache>
What does a LockManager do?
In simple terms, a LockManager stores lock objects, so it can give out a Lock object, release it, etc.
The LockManager is also responsible for removing Locks that live too long. This parameter may be configured with the "time-out" property.
JCR provides two base implementations of LockManager:
org.exoplatform.services.jcr.impl.core.lock.LockManagerImpl;
org.exoplatform.services.jcr.impl.core.lock.jbosscache.CacheableLockManagerImpl;
In this article we will talk mostly about CacheableLockManagerImpl.
You can enable the LockManager by adding a lock-manager-configuration to the workspace configuration.
For example:
<workspace name="ws"> ... <lock-manager class="org.exoplatform.services.jcr.impl.core.lock.jbosscache.CacheableLockManagerImpl"> <properties> <property name="time-out" value="15m" /> ... </properties> </lock-manager> ... </workspace>
LockManagerImpl is a simple implementation of LockManager and is also faster than CacheableLockManager. It stores Lock objects in a HashMap and may also persist Locks if a LockPersister is configured. LockManagerImpl does not support replication in any way.
See more about LockManager Configuration here.
CacheableLockManagerImpl stores Lock objects in JBoss Cache, so Locks are replicable and affect the whole cluster, not only a single node. JBoss Cache also has a JDBCCacheLoader, so locks will be stored in the database.
Both implementations support removing expired Locks. There is a LockRemover - a separate thread that periodically asks the LockManager for Locks that live too long and must be removed. The timeout may be set as follows; the default value is 30m. A sketch of this periodic task follows the snippet.
<properties> <property name="time-out" value="10m" /> ... </properties>
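The lock remover behaviour described above can be pictured as a simple periodic task. The sketch below is only an illustration; the class and method names are invented and are not the actual eXo classes:
import java.util.Timer;
import java.util.TimerTask;

// Illustration only: a periodic task that asks a lock manager to drop expired locks.
class LockRemoverSketch {
  interface ExpirableLockManager {             // hypothetical interface for this sketch
    void removeExpiredLocks();                 // drop locks older than the configured time-out
  }

  static Timer start(final ExpirableLockManager lockManager, long periodMillis) {
    Timer timer = new Timer("lock-remover", true);
    timer.schedule(new TimerTask() {
      public void run() {
        lockManager.removeExpiredLocks();      // runs periodically, e.g. with the 30m default
      }
    }, periodMillis, periodMillis);
    return timer;
  }
}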
Replication requirements are the same as for the cache.
A full JCR configuration example can be found here.
Common tips:
clusterName ("jbosscache-cluster-name") must be unique;
cache.jdbc.table.name must be unique per datasource;
cache.jdbc.fqn.type and cache.jdbc.node.type must be configured according to the used database;
There are a few ways to configure CacheableLockManagerImpl, and all of them configure JBoss Cache and the JDBCCacheLoader.
See http://community.jboss.org/wiki/JBossCacheJDBCCacheLoader
The first one is to put the JBoss Cache configuration file path into CacheableLockManagerImpl.
This configuration is not as good as you may think, because a repository may contain many workspaces, each workspace must contain a LockManager configuration, and each LockManager configuration may reference its own JBoss Cache configuration file. So the total configuration keeps growing. But it is useful if you want a single LockManager with a special configuration.
Config is:
<lock-manager class="org.exoplatform.services.jcr.impl.core.lock.jbosscache.CacheableLockManagerImpl"> <properties> <property name="time-out" value="15m" /> <property name="jbosscache-configuration" value="conf/standalone/cluster/test-jbosscache-lock-config.xml" /> </properties> </lock-manager>
test-jbosscache-lock-config.xml
<?xml version="1.0" encoding="UTF-8"?> <jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:jboss:jbosscache-core:config:3.2"> <locking useLockStriping="false" concurrencyLevel="50000" lockParentForChildInsertRemove="false" lockAcquisitionTimeout="20000" /> <clustering mode="replication" clusterName="JBoss-Cache-Lock-Cluster_Name"> <stateRetrieval timeout="20000" fetchInMemoryState="false" nonBlocking="true" /> <jgroupsConfig> <TCP bind_addr="127.0.0.1" start_port="9800" loopback="true" recv_buf_size="20000000" send_buf_size="640000" discard_incompatible_packets="true" max_bundle_size="64000" max_bundle_timeout="30" use_incoming_packet_handler="true" enable_bundling="false" use_send_queues="false" sock_conn_timeout="300" skip_suspected_members="true" use_concurrent_stack="true" thread_pool.enabled="true" thread_pool.min_threads="1" thread_pool.max_threads="25" thread_pool.keep_alive_time="5000" thread_pool.queue_enabled="false" thread_pool.queue_max_size="100" thread_pool.rejection_policy="run" oob_thread_pool.enabled="true" oob_thread_pool.min_threads="1" oob_thread_pool.max_threads="8" oob_thread_pool.keep_alive_time="5000" oob_thread_pool.queue_enabled="false" oob_thread_pool.queue_max_size="100" oob_thread_pool.rejection_policy="run" /> <MPING timeout="2000" num_initial_members="2" mcast_port="34540" bind_addr="127.0.0.1" mcast_addr="224.0.0.1" /> <MERGE2 max_interval="30000" min_interval="10000" /> <FD_SOCK /> <FD max_tries="5" shun="true" timeout="10000" /> <VERIFY_SUSPECT timeout="1500" /> <pbcast.NAKACK discard_delivered_msgs="true" gc_lag="0" retransmit_timeout="300,600,1200,2400,4800" use_mcast_xmit="false" /> <UNICAST timeout="300,600,1200,2400,3600" /> <pbcast.STABLE desired_avg_gossip="50000" max_bytes="400000" stability_delay="1000" /> <pbcast.GMS join_timeout="5000" print_local_addr="true" shun="false" view_ack_collection_timeout="5000" view_bundling="true" /> <FRAG2 frag_size="60000" /> <pbcast.STREAMING_STATE_TRANSFER /> <pbcast.FLUSH timeout="0" /> </jgroupsConfig> <sync /> </clustering> <loaders passivation="false" shared="true"> <preload> <node fqn="/" /> </preload> <loader class="org.jboss.cache.loader.JDBCCacheLoader" async="false" fetchPersistentState="false" ignoreModifications="false" purgeOnStartup="false"> <properties> cache.jdbc.table.name=jcrlocks_ws cache.jdbc.table.create=true cache.jdbc.table.drop=false cache.jdbc.table.primarykey=jcrlocks_ws_pk cache.jdbc.fqn.column=fqn cache.jdbc.fqn.type=VARCHAR(512) cache.jdbc.node.column=node cache.jdbc.node.type=<BLOB> cache.jdbc.parent.column=parent cache.jdbc.datasource=jdbcjcr </properties> </loader> </loaders> </jbosscache>
Configuration requirements:
<clustering mode="replication" clusterName="JBoss-Cache-Lock-Cluster_Name"> - cluster name must be unique;
cache.jdbc.table.name must be unique per datasource;
cache.jdbc.node.type and cache.jdbc.fqn.type must be configured according to the used database. See Data Types in Different Databases.
The second one is to use a template JBoss Cache configuration for all LockManagers.
Lock template configuration
test-jbosscache-lock.xml
<?xml version="1.0" encoding="UTF-8"?> <jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:jboss:jbosscache-core:config:3.1"> <locking useLockStriping="false" concurrencyLevel="50000" lockParentForChildInsertRemove="false" lockAcquisitionTimeout="20000" /> <clustering mode="replication" clusterName="${jbosscache-cluster-name}"> <stateRetrieval timeout="20000" fetchInMemoryState="false" /> <jgroupsConfig multiplexerStack="jcr.stack" /> <sync /> </clustering> <loaders passivation="false" shared="true"> <!-- All the data of the JCR locks needs to be loaded at startup --> <preload> <node fqn="/" /> </preload> <!-- For another cache-loader class you should use another template with cache-loader specific parameters --> <loader class="org.jboss.cache.loader.JDBCCacheLoader" async="false" fetchPersistentState="false" ignoreModifications="false" purgeOnStartup="false"> <properties> cache.jdbc.table.name=${jbosscache-cl-cache.jdbc.table.name} cache.jdbc.table.create=${jbosscache-cl-cache.jdbc.table.create} cache.jdbc.table.drop=${jbosscache-cl-cache.jdbc.table.drop} cache.jdbc.table.primarykey=${jbosscache-cl-cache.jdbc.table.primarykey} cache.jdbc.fqn.column=${jbosscache-cl-cache.jdbc.fqn.column} cache.jdbc.fqn.type=${jbosscache-cl-cache.jdbc.fqn.type} cache.jdbc.node.column=${jbosscache-cl-cache.jdbc.node.column} cache.jdbc.node.type=${jbosscache-cl-cache.jdbc.node.type} cache.jdbc.parent.column=${jbosscache-cl-cache.jdbc.parent.column} cache.jdbc.datasource=${jbosscache-cl-cache.jdbc.datasource} </properties> </loader> </loaders> </jbosscache>
As you see, all configurable parameters are filled by templates and will be replaced by the LockManager's configuration parameters:
<lock-manager class="org.exoplatform.services.jcr.impl.core.lock.jbosscache.CacheableLockManagerImpl"> <properties> <property name="time-out" value="15m" /> <property name="jbosscache-configuration" value="test-jbosscache-lock.xml" /> <property name="jgroups-configuration" value="udp-mux.xml" /> <property name="jgroups-multiplexer-stack" value="true" /> <property name="jbosscache-cluster-name" value="JCR-cluster-locks-ws" /> <property name="jbosscache-cl-cache.jdbc.table.name" value="jcrlocks_ws" /> <property name="jbosscache-cl-cache.jdbc.table.create" value="true" /> <property name="jbosscache-cl-cache.jdbc.table.drop" value="false" /> <property name="jbosscache-cl-cache.jdbc.table.primarykey" value="jcrlocks_ws_pk" /> <property name="jbosscache-cl-cache.jdbc.fqn.column" value="fqn" /> <property name="jbosscache-cl-cache.jdbc.fqn.type" value="AUTO"/> <property name="jbosscache-cl-cache.jdbc.node.column" value="node" /> <property name="jbosscache-cl-cache.jdbc.node.type" value="AUTO"/> <property name="jbosscache-cl-cache.jdbc.parent.column" value="parent" /> <property name="jbosscache-cl-cache.jdbc.datasource" value="jdbcjcr" /> </properties> </lock-manager>
Configuration requirements:
jbosscache-cl-cache.jdbc.fqn.type and jbosscache-cl-cache.jdbc.node.type are nothing else than cache.jdbc.fqn.type and cache.jdbc.node.type in the JBoss Cache configuration. You can set those data types according to the database type (see Data Types in Different Databases) or set them to AUTO (or do not set them at all) and the data type will be detected automatically.
As you see, the jgroups-configuration has been moved to a separate configuration file - udp-mux.xml. In our case, udp-mux.xml is the common JGroups configuration for all components (QueryHandler, cache, LockManager), but we can still create our own configuration.
our-udp-mux.xml
<protocol_stacks> <stack name="jcr.stack"> <config> <UDP mcast_addr="228.10.10.10" mcast_port="45588" tos="8" ucast_recv_buf_size="20000000" ucast_send_buf_size="640000" mcast_recv_buf_size="25000000" mcast_send_buf_size="640000" loopback="false" discard_incompatible_packets="true" max_bundle_size="64000" max_bundle_timeout="30" use_incoming_packet_handler="true" ip_ttl="2" enable_bundling="true" enable_diagnostics="true" thread_naming_pattern="cl" use_concurrent_stack="true" thread_pool.enabled="true" thread_pool.min_threads="2" thread_pool.max_threads="8" thread_pool.keep_alive_time="5000" thread_pool.queue_enabled="true" thread_pool.queue_max_size="1000" thread_pool.rejection_policy="discard" oob_thread_pool.enabled="true" oob_thread_pool.min_threads="1" oob_thread_pool.max_threads="8" oob_thread_pool.keep_alive_time="5000" oob_thread_pool.queue_enabled="false" oob_thread_pool.queue_max_size="100" oob_thread_pool.rejection_policy="Run" /> <PING timeout="2000" num_initial_members="3" /> <MERGE2 max_interval="30000" min_interval="10000" /> <FD_SOCK /> <FD timeout="10000" max_tries="5" shun="true" /> <VERIFY_SUSPECT timeout="1500" /> <BARRIER /> <pbcast.NAKACK use_stats_for_retransmission="false" exponential_backoff="150" use_mcast_xmit="true" gc_lag="0" retransmit_timeout="50,300,600,1200" discard_delivered_msgs="true" /> <UNICAST timeout="300,600,1200" /> <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="1000000" /> <VIEW_SYNC avg_send_interval="60000" /> <pbcast.GMS print_local_addr="true" join_timeout="3000" shun="false" view_bundling="true" /> <FC max_credits="500000" min_threshold="0.20" /> <FRAG2 frag_size="60000" /> <!--pbcast.STREAMING_STATE_TRANSFER /--> <pbcast.STATE_TRANSFER /> <!-- pbcast.FLUSH /--> </config> </stack> </protocol_stacks>
Table 11.1. Fqn type and node type in different databases
DataBase name | Node data type | FQN data type |
---|---|---|
default | BLOB | VARCHAR(512) |
HSSQL | OBJECT | VARCHAR(512) |
MySQL | LONGBLOB | VARCHAR(512) |
ORACLE | BLOB | VARCHAR2(512) |
PostgreSQL | bytea | VARCHAR(512) |
MSSQL | VARBINARY(MAX) | VARCHAR(512) |
DB2 | BLOB | VARCHAR(512) |
Sybase | IMAGE | VARCHAR(512) |
Ingres | long byte | VARCHAR(512) |
Let's talk about indexing content in a cluster.
For a couple of reasons, we can't replicate the index. That means that data added and indexed on one cluster node will be replicated to another cluster node, but will not be indexed on that node.
So how does indexing work in a cluster environment?
As we cannot index the same data on all nodes of the cluster, we must index it on one node. The node that can index data and make changes to the Lucene index is called the "coordinator". The coordinator node is chosen automatically, so we do not need a special configuration for the coordinator.
But how can the other nodes save their changes to the Lucene index?
First of all, the data is already saved and replicated to the other cluster nodes, so we only need to deliver a message like "we need to index this data" to the coordinator. That is why JBoss Cache is used.
All cluster nodes write messages into JBoss Cache, but only the coordinator takes those messages and makes changes to the Lucene index.
How does search work in a cluster environment?
The search engine does not work with the indexer, coordinator, etc. Search needs only the Lucene index. But, as said above, only one cluster node can change the Lucene index. Yes - the Lucene index is shared. So all cluster nodes must be configured to use the Lucene index from a shared directory.
A few words about the indexing process (whether clustered or not): the Indexer does not write changes to the file-system Lucene index immediately. At first, the Indexer writes changes to the volatile index. If the volatile index size becomes 1MB or more, it is flushed to the file system. There is also a timer that flushes the volatile index by timeout. The volatile index timeout is configured by the "max-volatile-time" parameter.
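In other words, the volatile index is flushed when it grows past roughly 1MB or when max-volatile-time expires, whichever comes first. A schematic sketch of that policy (the names are invented, this is not the real indexer code):
// Schematic illustration of the volatile-index flush policy (names are invented).
class VolatileIndexPolicy {
  private long sizeBytes;                              // current size of the in-memory (volatile) index
  private long lastFlushMillis = System.currentTimeMillis();

  private static final long MAX_SIZE = 1024 * 1024;    // ~1MB threshold from the text
  private final long maxVolatileMillis;                // "max-volatile-time" in milliseconds

  VolatileIndexPolicy(long maxVolatileSeconds) {
    this.maxVolatileMillis = maxVolatileSeconds * 1000;
  }

  void onChangeIndexed(long addedBytes) {
    sizeBytes += addedBytes;
    if (shouldFlush())
      flushToFileSystem();
  }

  boolean shouldFlush() {
    return sizeBytes >= MAX_SIZE
        || System.currentTimeMillis() - lastFlushMillis >= maxVolatileMillis;
  }

  void flushToFileSystem() {
    // merge the volatile index into the persistent Lucene index on the (shared) file system
    sizeBytes = 0;
    lastFlushMillis = System.currentTimeMillis();
  }
}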
See more about Search Configuration.
Common scheme of Shared Index
Now let's see what we need to run the search engine in a cluster environment:
a shared directory for storing the Lucene index (e.g. NFS);
the changes filter configured as org.exoplatform.services.jcr.impl.core.query.jbosscache.JBossCacheIndexChangesFilter;
This filter ignores changes on non-coordinator nodes and indexes changes on the coordinator node.
JBoss Cache configured, of course.
Configuration example:
<workspace name="ws"> <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex"> <properties> <property name="index-dir" value="shareddir/index/db1/ws" /> <property name="changesfilter-class" value="org.exoplatform.services.jcr.impl.core.query.jbosscache.JBossCacheIndexChangesFilter" /> <property name="jbosscache-configuration" value="jbosscache-indexer.xml" /> <property name="jgroups-configuration" value="udp-mux.xml" /> <property name="jgroups-multiplexer-stack" value="true" /> <property name="jbosscache-cluster-name" value="JCR-cluster-indexer-ws" /> <property name="max-volatile-time" value="60" /> </properties> </query-handler> </workspace>
Table 12.1. Config properties description
Property name | Description |
---|---|
index-dir | path to index |
jbosscache-configuration | template of JBoss-cache configuration for all query-handlers in repository |
jgroups-configuration | jgroups-configuration is template configuration for all components (search, cache, locks) [Add link to document describing template configurations] |
jgroups-multiplexer-stack | [TODO about jgroups-multiplexer-stack - add link to JBoss doc] |
jbosscache-cluster-name | cluster name (must be unique) |
max-volatile-time | max time to live for Volatile Index |
JBoss-Cache template configuration for query handler.
jbosscache-indexer.xml
<?xml version="1.0" encoding="UTF-8"?> <jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:jboss:jbosscache-core:config:3.1"> <locking useLockStriping="false" concurrencyLevel="50000" lockParentForChildInsertRemove="false" lockAcquisitionTimeout="20000" /> <!-- Configure the TransactionManager --> <transaction transactionManagerLookupClass="org.jboss.cache.transaction.JBossStandaloneJTAManagerLookup" /> <clustering mode="replication" clusterName="${jbosscache-cluster-name}"> <stateRetrieval timeout="20000" fetchInMemoryState="false" /> <jgroupsConfig multiplexerStack="jcr.stack" /> <sync /> </clustering> <!-- Eviction configuration --> <eviction wakeUpInterval="5000"> <default algorithmClass="org.jboss.cache.eviction.FIFOAlgorithm" eventQueueSize="1000000"> <property name="maxNodes" value="10000" /> <property name="minTimeToLive" value="60000" /> </default> </eviction> </jbosscache>
See more about template configurations here.
JBossTransactionsService implements the eXo TransactionService and provides access to the JBoss Transaction Service (JBossTS) JTA implementation via an eXo container dependency.
The TransactionService is used in the JCR cache implementation org.exoplatform.services.jcr.impl.dataflow.persistent.jbosscache.JBossCacheWorkspaceStorageCache. See Cluster configuration for an example.
Example configuration:
<component> <key>org.exoplatform.services.transaction.TransactionService</key> <type>org.exoplatform.services.transaction.jbosscache.JBossTransactionsService</type> <init-params> <value-param> <name>timeout</name> <value>3000</value> </value-param> </init-params> </component>
timeout - XA transaction timeout in seconds
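For illustration only, a component could obtain the service from the container and run some work inside a JTA transaction roughly as follows; the getUserTransaction() accessor is an assumption, so check the actual TransactionService interface of your eXo Kernel version:
import javax.transaction.UserTransaction;
import org.exoplatform.container.ExoContainerContext;
import org.exoplatform.services.transaction.TransactionService;

// Sketch: wrap some work in a JTA transaction obtained from the TransactionService.
public class TransactionSketch {
  public void doInTransaction(Runnable work) throws Exception {
    TransactionService txService = (TransactionService) ExoContainerContext
        .getCurrentContainer().getComponentInstanceOfType(TransactionService.class);
    UserTransaction tx = txService.getUserTransaction(); // assumed accessor, verify it exists
    tx.begin();
    try {
      work.run();
      tx.commit();
    } catch (Exception e) {
      tx.rollback();
      throw e;
    }
  }
}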
The TransactionManagerLookup is a JBossCache class registered as an eXo container component in the configuration.xml file.
<component> <key>org.jboss.cache.transaction.TransactionManagerLookup</key> <type>org.jboss.cache.transaction.JBossStandaloneJTAManagerLookup</type> </component>
JBossStandaloneJTAManagerLookup is used in a standalone environment. But for an Application Server environment, use GenericTransactionManagerLookup.
Table of Contents
eXo Kernel is the basis of all eXo platform products and modules. Any component available in eXo Platform is managed by the eXo Container, our micro container responsible for gluing the services through dependency injection.
Therefore, each product is composed of a set of services and plugins registered to the container and configured by XML configuration files.
The Kernel module also contains a set of very low level services.
To be effective, the namespace URI http://www.exoplaform.org/xml/ns/kernel_1_1.xsd must be the target namespace of the XML configuration file.
<xsd:schema targetNamespace="http://www.exoplaform.org/xml/ns/kernel_1_1.xsd" xmlns="http://www.exoplaform.org/xml/ns/kernel_1_1.xsd" xmlns:xsd="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" attributeFormDefault="unqualified" version="1.0"> ... </xsd:schema>
eXo Portal uses PicoContainer, which implements the Inversion of Control (IoC) design pattern. All eXo containers inherit from a PicoContainer. There are mainly two eXo containers used, each of them can provide one or several services. Each container service is delivered in a JAR file. This JAR file may contain a default configuration. The use of default configurations is recommended and most services provide it.
When a PicoContainer searches for services and their configurations, each configurable service may be reconfigured to override default values or to set additional parameters. If the service is configured in two or more places, the configuration override mechanism will be used.
The container performs the following steps to retrieve the eXo Container configuration, depending on the container type.
The container is initialized by looking into different locations. This container is used by portal applications. Configurations are overloaded in the following lookup sequence:
Services' default RootContainer configurations from JAR files /conf/configuration.xml
External RootContainer configuration, if found at $AS_HOME/exo-conf/configuration.xml
Services' default PortalContainer configurations from JAR files /conf/portal/configuration.xml
Web applications configurations from WAR files /WEB-INF/conf/configuration.xml
External configuration for services of the named portal, if found at $AS_HOME/exo-conf/portal/$PORTAL_NAME/configuration.xml
The container is initialized by looking into different locations. This container is used by non portal applications. Configurations are overloaded in the following lookup sequence:
Services' default RootContainer configurations from JAR files /conf/configuration.xml
External RootContainer configuration, if found at $AS_HOME/exo-conf/configuration.xml
Services' default StandaloneContainer configurations from JAR files /conf/portal/configuration.xml
Web applications configurations from WAR files /WEB-INF/conf/configuration.xml
Then, depending on the StandaloneContainer configuration URL initialization:
if the configuration URL was initialized to be added to the services' defaults, as below:
// add configuration to the default services configurations from JARs/WARs StandaloneContainer.addConfigurationURL(containerConf);
The configuration from the added URL containerConf will override only the services configured in that file.
if the configuration URL was not initialized at all, it will be looked for at $AS_HOME/exo-configuration.xml. If $AS_HOME/exo-configuration.xml doesn't exist, the container will try to find it at $AS_HOME/exo-conf/exo-configuration.xml, and if it is still not found and the StandaloneContainer instance was obtained with the dedicated configuration ClassLoader, the container will try to retrieve the resource conf/exo-configuration.xml within the given ClassLoader.
$AS_HOME - application server home directory, or user.dir JVM system property value in case of Java Standalone application.
$PORTAL_NAME - portal web application name.
The external configuration location can be overridden with the System property exo.conf.dir. If the property exists, its value will be used as the path to the eXo configuration directory, i.e. an alternative to $AS_HOME/exo-conf. E.g. put the property on the command line: java -Dexo.conf.dir=/path/to/exo/conf. In this particular use case, you do not need to use any prefix to import other files. For instance, if your configuration file is $AS_HOME/exo-conf/portal/PORTAL_NAME/configuration.xml and you want to import the configuration file $AS_HOME/exo-conf/portal/PORTAL_NAME/mySubConfDir/myConfig.xml, you can do it by adding <import>mySubConfDir/myConfig.xml</import> to your configuration file.
The name of the configuration folder, which is "exo-conf" by default, can be changed thanks to the System property exo.conf.dir.name.
Under the JBoss application server, exo-conf will be looked up in the directory described by the JBoss System property jboss.server.config.url. If the property is not found or empty, $AS_HOME/exo-conf will be used.
The search looks for a configuration file in each JAR/WAR available from the classpath using the current thread context classloader. During the search these configurations are added to a set. If the service was configured previously and the current JAR contains a new configuration of that service the latest (from the current JAR/WAR) will replace the previous one. The last one will be applied to the service during the services start phase.
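The lookup and override behaviour can be pictured as collecting every conf/portal/configuration.xml visible to the context classloader and letting the last occurrence of a component definition win. A simplified sketch of the enumeration step:
import java.net.URL;
import java.util.Enumeration;

// Simplified sketch: enumerate every configuration.xml visible on the classpath.
// The real container parses each file and lets the last definition of a component win.
public class ConfigScan {
  public static void main(String[] args) throws Exception {
    ClassLoader cl = Thread.currentThread().getContextClassLoader();
    Enumeration<URL> urls = cl.getResources("conf/portal/configuration.xml");
    while (urls.hasMoreElements()) {
      // Later entries override earlier ones for components configured more than once.
      System.out.println("Add configuration " + urls.nextElement());
    }
  }
}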
Take care to have no dependencies between configurations from JAR files (/conf/portal/configuration.xml and /conf/configuration.xml) since we have no way to know in advance the loading order of those configurations. In other words, if you want to overload some configuration located in the file /conf/portal/configuration.xml of a given JAR file, you must not do it from the file /conf/portal/configuration.xml of another JAR file but from another configuration file loaded after configurations from JAR files /conf/portal/configuration.xml.
After the processing of all configurations available in the system, the container will initialize them and start each service in the order of the dependency injection (DI).
The user/developer should be careful when configuring the same service in different configuration files. It's recommended to configure a service in its own JAR only. Or, in case of a portal configuration, strictly reconfigure the services in portal WAR files or in an external configuration.
There are services that can be (or should be) configured more than once. This depends on the business logic of the service. A service may initialize the same resource (shared with other services) or may add a particular object to a set of objects (shared with other services too). In the first case it is critical who is last, i.e. whose configuration will be used. In the second case it does not matter who is first and who is last (if the parameter objects are independent).
In case of problems with service configuration it's important to know from which JAR/WAR it comes. For that purpose the JVM system property org.exoplatform.container.configuration.debug can be used.
java -Dorg.exoplatform.container.configuration.debug ...
If the property is enabled the container configuration manager will report the configuration adding process to the standard output (System.out).
...... Add configuration jar:file:/D:/Projects/eXo/dev/exo-working/exo-tomcat/lib/exo.kernel.container-trunk.jar!/conf/portal/configuration.xml Add configuration jar:file:/D:/Projects/eXo/dev/exo-working/exo-tomcat/lib/exo.kernel.component.cache-trunk.jar!/conf/portal/configuration.xml Add configuration jndi:/localhost/portal/WEB-INF/conf/configuration.xml import jndi:/localhost/portal/WEB-INF/conf/common/common-configuration.xml import jndi:/localhost/portal/WEB-INF/conf/database/database-configuration.xml import jndi:/localhost/portal/WEB-INF/conf/ecm/jcr-component-plugins-configuration.xml import jndi:/localhost/portal/WEB-INF/conf/jcr/jcr-configuration.xml ......
Since eXo JCR 1.12, we added a set of new features that have been designed to extend portal applications such as GateIn.
A ServletContextListener called org.exoplatform.container.web.PortalContainerConfigOwner has been added in order to notify the application that a given web application provides some configuration to the portal container; this configuration file is the file WEB-INF/conf/configuration.xml available in the web application itself.
If your war file contains some configuration to add to the PortalContainer, simply add the following lines to your web.xml file.
<?xml version="1.0" encoding="ISO-8859-1" ?> <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd"> <web-app> ... <!-- ================================================================== --> <!-- LISTENER --> <!-- ================================================================== --> <listener> <listener-class>org.exoplatform.container.web.PortalContainerConfigOwner</listener-class> </listener> ... </web-app>
A ServletContextListener called org.exoplatform.container.web.PortalContainerCreator has been added in order to create the current portal containers that have been registered. We assume that all the web applications have already been loaded before PortalContainerCreator.contextInitialized is called.
In GateIn, the PortalContainerCreator is already managed by the starter.war/ear file.
Now we can define precisely a portal container and its dependencies and settings thanks to the PortalContainerDefinition, which currently contains the name of the portal container, the name of the rest context, the name of the realm, the web application dependencies ordered by loading priority (i.e. the first dependency must be loaded first and so on) and the settings.
To be able to define a PortalContainerDefinition, we need to ensure first of all that a PortalContainerConfig has been defined at the RootContainer level, see below an example:
<component> <!-- The full qualified name of the PortalContainerConfig --> <type>org.exoplatform.container.definition.PortalContainerConfig</type> <init-params> <!-- The name of the default portal container --> <value-param> <name>default.portal.container</name> <value>myPortal</value> </value-param> <!-- The name of the default rest ServletContext --> <value-param> <name>default.rest.context</name> <value>myRest</value> </value-param> <!-- The name of the default realm --> <value-param> <name>default.realm.name</name> <value>my-exo-domain</value> </value-param> <!-- The default portal container definition --> <!-- It cans be used to avoid duplicating configuration --> <object-param> <name>default.portal.definition</name> <object type="org.exoplatform.container.definition.PortalContainerDefinition"> <!-- All the dependencies of the portal container ordered by loading priority --> <field name="dependencies"> <collection type="java.util.ArrayList"> <value> <string>foo</string> </value> <value> <string>foo2</string> </value> <value> <string>foo3</string> </value> </collection> </field> <!-- A map of settings tied to the default portal container --> <field name="settings"> <map type="java.util.HashMap"> <entry> <key> <string>foo5</string> </key> <value> <string>value</string> </value> </entry> <entry> <key> <string>string</string> </key> <value> <string>value0</string> </value> </entry> <entry> <key> <string>int</string> </key> <value> <int>100</int> </value> </entry> </map> </field> <!-- The path to the external properties file --> <field name="externalSettingsPath"> <string>classpath:/org/exoplatform/container/definition/default-settings.properties</string> </field> </object> </object-param> </init-params> </component>
Table 16.1. Descriptions of the fields of PortalContainerConfig
default.portal.container | The name of the default portal container. This field is optional. |
default.rest.context | The name of the default rest ServletContext. This field is optional. |
default.realm.name | The name of the default realm. This field is optional. |
default.portal.definition | The definition of the default portal container. This field is optional. The expected type is org.exoplatform.container.definition.PortalContainerDefinition, which is described below. All the parameters defined in this default PortalContainerDefinition will be the default values. |
A new PortalContainerDefinition can be defined at the RootContainer level thanks to an external plugin, see below an example:
<external-component-plugins> <!-- The full qualified name of the PortalContainerConfig --> <target-component>org.exoplatform.container.definition.PortalContainerConfig</target-component> <component-plugin> <!-- The name of the plugin --> <name>Add PortalContainer Definitions</name> <!-- The name of the method to call on the PortalContainerConfig in order to register the PortalContainerDefinitions --> <set-method>registerPlugin</set-method> <!-- The full qualified name of the PortalContainerDefinitionPlugin --> <type>org.exoplatform.container.definition.PortalContainerDefinitionPlugin</type> <init-params> <object-param> <name>portal</name> <object type="org.exoplatform.container.definition.PortalContainerDefinition"> <!-- The name of the portal container --> <field name="name"> <string>myPortal</string> </field> <!-- The name of the context name of the rest web application --> <field name="restContextName"> <string>myRest</string> </field> <!-- The name of the realm --> <field name="realmName"> <string>my-domain</string> </field> <!-- All the dependencies of the portal container ordered by loading priority --> <field name="dependencies"> <collection type="java.util.ArrayList"> <value> <string>foo</string> </value> <value> <string>foo2</string> </value> <value> <string>foo3</string> </value> </collection> </field> <!-- A map of settings tied to the portal container --> <field name="settings"> <map type="java.util.HashMap"> <entry> <key> <string>foo</string> </key> <value> <string>value</string> </value> </entry> <entry> <key> <string>int</string> </key> <value> <int>10</int> </value> </entry> <entry> <key> <string>long</string> </key> <value> <long>10</long> </value> </entry> <entry> <key> <string>double</string> </key> <value> <double>10</double> </value> </entry> <entry> <key> <string>boolean</string> </key> <value> <boolean>true</boolean> </value> </entry> </map> </field> <!-- The path to the external properties file --> <field name="externalSettingsPath"> <string>classpath:/org/exoplatform/container/definition/settings.properties</string> </field> </object> </object-param> </init-params> </component-plugin> </external-component-plugins>
Table 16.2. Descriptions of the fields of PortalContainerDefinition when it is used to define a new portal container
name | The name of the portal container. This field is mandatory . |
restContextName | The name of the context name of the rest web application. This field is optional. The default value will be the value defined at the PortalContainerConfig level. |
realmName | The name of the realm. This field is optional. The default value will be the value defined at the PortalContainerConfig level. |
dependencies | All the dependencies of the portal container ordered by loading priority. This field is optional. The default value will be the value defined at the PortalContainerConfig level. The dependencies are in fact the list of the context names of the web applications on which the portal container depends. The dependency order is really crucial since it will be interpreted the same way by several components of the platform. All those components will consider the 1st element in the list less important than the second element and so on. It is currently used to: |
settings | A java.util.Map of internal parameters that we would like to tie to the portal container. Those parameters could have any type of value. This field is optional. If some internal settings are defined at the PortalContainerConfig level, the two maps of settings will be merged. If a setting with the same name is defined in both maps, it will keep the value defined at the PortalContainerDefinition level. |
externalSettingsPath | The path of the external properties file to load as default settings for the portal container. This field is optional. If some external settings are defined at the PortalContainerConfig level, the two maps of settings will be merged. If a setting with the same name is defined in both maps, it will keep the value defined at the PortalContainerDefinition level. The external properties files can be either of type "properties" or of type "xml". The path will be interpreted as follows: |
Table 16.3. Descriptions of the fields of PortalContainerDefinition when it is used to define the default portal container
name | The name of the portal container. This field is optional. The default portal name will be: |
restContextName | The name of the context name of the rest web application. This field is optional. The default value will be: |
realmName | The name of the realm. This field is optional. The default value will be: |
dependencies | All the dependencies of the portal container ordered by loading priority. This field is optional. If this field has a non empty value, it will be the default list of dependencies. |
settings | A java.util.Map of internal parameters that we would like to tie to the default portal container. Those parameters could have any type of value. This field is optional. |
externalSettingsPath | The path of the external properties file to load as default settings for the default portal container. This field is optional. The external properties files can be either of type "properties" or of type "xml". The path will be interpreted as follows: |
Internal and external settings are both optional, but if we give a non-empty value for both, the application will merge the settings. If the same setting name exists in both settings, we apply the following rules (illustrated by the sketch after this list):
The value of the external setting is null: we ignore the value.
The value of the external setting is not null and the value of the internal setting is null: the final value will be the external setting value, which is of type String.
Both values are not null: we will have to convert the external setting value into the target type, which is the type of the internal setting value, thanks to the static method valueOf(String); the following sub-rules are then applied:
The method cannot be found: the final value will be the external setting value, which is of type String.
The method can be found and the external setting value is an empty String: we ignore the external setting value.
The method can be found and the external setting value is not an empty String but the method call fails: we ignore the external setting value.
The method can be found, the external setting value is not an empty String and the method call succeeds: the final value will be the external setting value converted to the type of the internal setting value.
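The conversion rules can be illustrated with a small sketch that mimics the described behaviour using reflection; this is only an illustration, not the kernel's actual code:
import java.lang.reflect.Method;

// Illustration of the merge rules: convert the external String value into the type of the
// internal value via the static valueOf(String) method, with the fallbacks described above.
public class SettingMerge {
  static Object merge(Object internalValue, String externalValue) {
    if (externalValue == null)
      return internalValue;                        // rule 1: ignore a null external value
    if (internalValue == null)
      return externalValue;                        // rule 2: keep the external String
    Method valueOf;
    try {
      valueOf = internalValue.getClass().getMethod("valueOf", String.class);
    } catch (NoSuchMethodException e) {
      return externalValue;                        // no valueOf(String): keep the external String
    }
    if (externalValue.isEmpty())
      return internalValue;                        // empty external value is ignored
    try {
      return valueOf.invoke(null, externalValue);  // converted to the internal setting's type
    } catch (Exception e) {
      return internalValue;                        // conversion failed: ignore the external value
    }
  }

  public static void main(String[] args) {
    System.out.println(merge(Integer.valueOf(100), "10")); // prints 10, converted to Integer
    System.out.println(merge(null, "bar"));                // prints bar, kept as String
  }
}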
We can inject the value of the portal container settings into the portal container configuration files thanks to variables whose names start with "portal.container.", so to get the value of a setting called "foo" just use the following syntax: ${portal.container.foo}. You can also use internal variables, such as:
Table 16.4. Definition of the internal variables
portal.container.name | Gives the name of the current portal container. |
portal.container.rest | Gives the context name of the rest web application of the current portal container. |
portal.container.realm | Gives the realm name of the current portal container. |
You can find below an example of how to use the variables:
<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.exoplaform.org/xml/ns/kernel_1_1.xsd http://www.exoplaform.org/xml/ns/kernel_1_1.xsd" xmlns="http://www.exoplaform.org/xml/ns/kernel_1_1.xsd"> <component> <type>org.exoplatform.container.TestPortalContainer$MyComponent</type> <init-params> <!-- The name of the portal container --> <value-param> <name>portal</name> <value>${portal.container.name}</value> </value-param> <!-- The name of the rest ServletContext --> <value-param> <name>rest</name> <value>${portal.container.rest}</value> </value-param> <!-- The name of the realm --> <value-param> <name>realm</name> <value>${portal.container.realm}</value> </value-param> <value-param> <name>foo</name> <value>${portal.container.foo}</value> </value-param> <value-param> <name>before foo after</name> <value>before ${portal.container.foo} after</value> </value-param> </init-params> </component> </configuration>
In the properties file corresponding to the external settings, you can reuse variables previously defined (in the external settings or in the internal settings) to create a new variable. In this case the prefix "portal.container." is not needed, see an example below:
my-var1=value 1 my-var2=value 2 complex-value=${my-var1}-${my-var2}
In the external and internal settings, you can also create variables based on the value of System properties. The System properties can be defined either at launch time or thanks to the PropertyConfigurator (see the next section for more details). See an example below:
temp-dir=${java.io.tmpdir}${file.separator}my-temp
However, for the internal settings you can use System properties only to define settings of type java.lang.String.
It can also be very useful to define a generic variable in the settings of the default portal container; the value of this variable will change according to the current portal container. See an example below:
my-generic-var=value of the portal container "${name}"
If this variable is defined at the default portal container level, the value of this variable for a portal container called "foo" will be value of the portal container "foo".
A new property configurator service has been developed to take care of configuring system properties from the inline kernel configuration or from specified property files.
The service is scoped at the root container level because it is used by all the services in the different portal containers of the application runtime.
The properties init param takes a set of property declarations used to configure the various system properties.
<component> <key>PropertyManagerConfigurator</key> <type>org.exoplatform.container.PropertyConfigurator</type> <init-params> <properties-param> <name>properties</name> <property name="foo" value="bar"/> </properties-param> </init-params> </component>
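Since the PropertyConfigurator turns these declarations into plain JVM system properties, any component can read them afterwards with the standard Java API. A minimal illustration, assuming the configuration above has been loaded:

public class PropertyCheck {
    public static void main(String[] args) {
        // once the PropertyConfigurator has run, the "foo" declaration above
        // is available as an ordinary JVM system property
        System.out.println(System.getProperty("foo")); // prints "bar"
    }
}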
The properties URL init param allows loading an external file by specifying its URL. Both the properties and XML formats are supported; see the javadoc of the java.util.Properties class for more information. When a property file is loaded, the various property declarations are loaded in the order in which they are declared sequentially in the file.
<component> <key>PropertyManagerConfigurator</key> <type>org.exoplatform.container.PropertyConfigurator</type> <init-params> <value-param> <name>properties.url</name> <value>classpath:configuration.properties</value> </value-param> </init-params> </component>
In the properties file corresponding to the external properties, you can reuse variables previously defined to create a new variable. In this case the prefix "portal.container." is not needed, see an example below:
my-var1=value 1 my-var2=value 2 complex-value=${my-var1}-${my-var2}
The kernel configuration is able to handle configuration profiles at runtime (as opposed to packaging time).
An active profile list is obtained during the boot of the root container; it is composed of the system property exo.profiles split on the "," delimiter, plus a server-specific profile value (tomcat for Tomcat, jboss for JBoss, etc.).
# runs GateIn on Tomcat with the profiles tomcat and foo sh gatein.sh -Dexo.profiles=foo # runs GateIn on JBoss with the profiles jboss, foo and bar sh run.sh -Dexo.profiles=foo,bar
Profiles are configured in the configuration files of the eXo kernel.
Profile activation occurs at XML-to-configuration-object unmarshalling time. It is based on a "profile" attribute that is present on some of the XML elements of the configuration files. To enable this, the kernel configuration schema has been upgraded to kernel_1_1.xsd. The configuration is based on the following rules:
Any kernel element with no profiles attribute will create a configuration object
Any kernel element having a profiles attribute containing at least one of the active profiles will create a configuration object
Any kernel element having a profiles attribute matching none of the active profiles will not create a configuration object
Resolution of duplicates (such as two components with the same type) is left up to the kernel
A configuration element is profiles-capable when it carries a profiles attribute.
The component element declares a component when activated. It will shadow any element with the same key declared before in the same configuration file:
<component> <key>Component</key> <type>Component</type> </component> <component profile="foo"> <key>Component</key> <type>FooComponent</type> </component>
The import element imports a referenced configuration file when activated:
<import>empty</import> <import profile="foo">foo</import> <import profile="bar">bar</import>
The init param element configures a parameter argument used in the construction of a component service:
<component> <key>Component</key> <type>ComponentImpl</type> <init-params> <value-param> <name>param</name> <value>empty</value> </value-param> <value-param profile="foo"> <name>param</name> <value>foo</value> </value-param> <value-param profile="bar"> <name>param</name> <value>bar</value> </value-param> </init-params> </component>
The value collection element configures one of the values of a collection:
<object type="org.exoplatform.container.configuration.ConfigParam"> <field name="role"> <collection type="java.util.ArrayList"> <value><string>manager</string></value> <value profile="foo"><string>foo_manager</string></value> <value profile="foo,bar"><string>foo_bar_manager</string></value> </collection> </field> </object>
The field configuration element configures the field of an object:
<object-param> <name>test.configuration</name> <object type="org.exoplatform.container.configuration.ConfigParam"> <field name="role"> <collection type="java.util.ArrayList"> <value><string>manager</string></value> </collection> </field> <field name="role" profile="foo,bar"> <collection type="java.util.ArrayList"> <value><string>foo_bar_manager</string></value> </collection> </field> <field name="role" profile="foo"> <collection type="java.util.ArrayList"> <value><string>foo_manager</string></value> </collection> </field> </object> </object-param>
The component request life cycle is an interface that defines a contract for a component to be involved in a request:
public interface ComponentRequestLifecycle { /** * Start a request. * @param container the related container */ void startRequest(ExoContainer container); /** * Ends a request. * @param container the related container */ void endRequest(ExoContainer container); }
The container passed is the container to which the component is related. This contract is often used to set up a thread-local context that is demarcated by a request.
For instance, in the GateIn portal context, a component request life cycle is triggered for user requests. Another example is the initial data import in GateIn, which is demarcated using callbacks made to that interface.
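Below is a minimal sketch of such a component; it is not taken from the GateIn sources (the class name, the ThreadLocal field and the put method are illustrative, and the import of ComponentRequestLifecycle is assumed to resolve to the interface shown above):

import java.util.HashMap;
import java.util.Map;
import org.exoplatform.container.ExoContainer;
import org.exoplatform.container.component.ComponentRequestLifecycle; // package assumed

public class MyContextualService implements ComponentRequestLifecycle {

    // request-scoped context, demarcated by startRequest/endRequest
    private final ThreadLocal<Map<String, Object>> context = new ThreadLocal<Map<String, Object>>();

    public void startRequest(ExoContainer container) {
        // set up the per-request context when the request begins
        context.set(new HashMap<String, Object>());
    }

    public void endRequest(ExoContainer container) {
        // release the context when the request ends
        context.remove();
    }

    // any business method can rely on the context between startRequest and endRequest
    public void put(String key, Object value) {
        context.get().put(key, value);
    }
}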
The RequestLifeCycle class has several static methods that are used to schedule the component request life cycle of components. Its main responsibility is to perform the scheduling while respecting the constraint that the request life cycle of a component is executed only once, even if it can be scheduled several times.
RequestLifeCycle.begin(component); try { // Do something } finally { RequestLifeCycle.end(); }
Scheduling a container triggers the component request life cycle of all the components that implement the ComponentRequestLifecycle interface. If one of the components has already been scheduled before, then that component will not be scheduled again. When the local argument is true, only the components of the container are considered; when it is false, the scheduler also looks at the components in the ancestor containers.
RequestLifeCycle.begin(container, local); try { // Do something } finally { RequestLifeCycle.end(); }
Each portal request triggers the life cycle of the associated portal container.
All applications on top of eXo JCR that need a cache can rely on an org.exoplatform.services.cache.ExoCache instance that is managed by the org.exoplatform.services.cache.CacheService. The main implementation of this service is org.exoplatform.services.cache.impl.CacheServiceImpl, which depends on org.exoplatform.services.cache.ExoCacheConfig in order to create new ExoCache instances. See below an example of org.exoplatform.services.cache.CacheService definition:
<component> <key>org.exoplatform.services.cache.CacheService</key> <jmx-name>cache:type=CacheService</jmx-name> <type>org.exoplatform.services.cache.impl.CacheServiceImpl</type> <init-params> <object-param> <name>cache.config.default</name> <description>The default cache configuration</description> <object type="org.exoplatform.services.cache.ExoCacheConfig"> <field name="name"><string>default</string></field> <field name="maxSize"><int>300</int></field> <field name="liveTime"><long>600</long></field> <field name="distributed"><boolean>false</boolean></field> <field name="implementation"><string>org.exoplatform.services.cache.concurrent.ConcurrentFIFOExoCache</string></field> </object> </object-param> </init-params> </component>
The ExoCacheConfig whose name is default will be the default configuration of all the ExoCache instances that don't have a dedicated configuration.
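For reference, here is a minimal usage sketch showing how such a cache is obtained and used; it assumes that the lookup through ExoContainerContext and the getCacheInstance(String), put, get and remove methods match the kernel API of your version, and the cache name my.application.cache is purely illustrative:

import org.exoplatform.container.ExoContainerContext;
import org.exoplatform.services.cache.CacheService;
import org.exoplatform.services.cache.ExoCache;

public class CacheClient {

    public void useCache() throws Exception {
        // look up the CacheService from the current eXo container
        CacheService cacheService = (CacheService) ExoContainerContext.getCurrentContainer()
                .getComponentInstanceOfType(CacheService.class);

        // a name without a dedicated configuration falls back to the default ExoCacheConfig
        ExoCache cache = cacheService.getCacheInstance("my.application.cache");

        cache.put("key", "value");       // store an entry
        Object value = cache.get("key"); // read it back
        cache.remove("key");             // and evict it explicitly
    }
}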
See below an example of how to define a new ExoCacheConfig thanks to an external-component-plugin:
<external-component-plugins> <target-component>org.exoplatform.services.cache.CacheService</target-component> <component-plugin> <name>addExoCacheConfig</name> <set-method>addExoCacheConfig</set-method> <type>org.exoplatform.services.cache.ExoCacheConfigPlugin</type> <description>Configures the cache for query service</description> <init-params> <object-param> <name>cache.config.wcm.composer</name> <description>The default cache configuration</description> <object type="org.exoplatform.services.cache.ExoCacheConfig"> <field name="name"><string>wcm.composer</string></field> <field name="maxSize"><int>300</int></field> <field name="liveTime"><long>600</long></field> <field name="distributed"><boolean>false</boolean></field> <field name="implementation"><string>org.exoplatform.services.cache.concurrent.ConcurrentFIFOExoCache</string></field> </object> </object-param> </init-params> </component-plugin> </external-component-plugins>
Table 17.1. Descriptions of the fields of ExoCacheConfig
name | The name of the cache. This field is mandatory since it will be used to retrieve the ExoCacheConfig corresponding to a given cache name. |
label | The label of the cache. This field is optional. It is mainly used to indicate the purpose of the cache. |
maxSize | The maximum numbers of elements in cache. This field is mandatory. |
liveTime | The amount of time (in seconds) an element may remain in the cache without being written or read before it is evicted. This field is mandatory. |
implementation | The fully qualified name of the cache implementation to use. This field is optional and is only used for simple cache implementations. The default and main implementation is org.exoplatform.services.cache.concurrent.ConcurrentFIFOExoCache; this implementation only works for local caches with FIFO as the eviction policy. For more complex implementations, see the next sections. |
distributed | Indicates if the cache is distributed. This field is optional. This field is used for backward compatibility. |
replicated | Indicates if the cache is replicated. This field is optional. This field is deprecated. |
logEnabled | Indicates if the log is enabled. This field is optional. This field is used for backward compatibility. |
In the previous versions of eXo kernel, it was quite complex to implement your own ExoCache because it was not open enough. Since kernel 2.0.8, it is possible to easily integrate your favorite cache provider in eXo Products.
You just need to implement your own ExoCacheFactory
and register it in an eXo container, as described below:
package org.exoplatform.services.cache; ... public interface ExoCacheFactory { /** * Creates a new instance of {@link org.exoplatform.services.cache.ExoCache} * @param config the cache to create * @return the new instance of {@link org.exoplatform.services.cache.ExoCache} * @exception ExoCacheInitException if an exception happens while initializing the cache */ public ExoCache createCache(ExoCacheConfig config) throws ExoCacheInitException; }
As you can see, there is only one method to implement; it can be seen as a converter that takes an ExoCacheConfig and returns an instance of ExoCache. Once you have created your own implementation, you can simply register your factory by adding a file conf/portal/configuration.xml with content of the following type:
<configuration> <component> <key>org.exoplatform.services.cache.ExoCacheFactory</key> <type>org.exoplatform.tutorial.MyExoCacheFactoryImpl</type> ... </component> </configuration>
When you add the eXo library to your classpath, the eXo service container will use the default configuration provided in the library itself, but of course you can still redefine the configuration if you wish, as you can do with any component.
The default configuration of the factory is:
<configuration> <component> <key>org.exoplatform.services.cache.ExoCacheFactory</key> <type>org.exoplatform.services.cache.impl.jboss.ExoCacheFactoryImpl</type> <init-params> <value-param> <name>cache.config.template</name> <value>jar:/conf/portal/cache-configuration-template.xml</value> </value-param> </init-params> </component> </configuration>
As you can see, the factory requires one single parameter, cache.config.template, which allows you to define the location of the default configuration template of your jboss cache. In the default configuration, we ask the eXo container to get the file shipped in the jar at /conf/portal/cache-configuration-template.xml.
The default configuration template aims to be the skeleton from which we will create any type of jboss cache instance, thus it must be very generic.
The default configuration template provided with the jar aims to work with any application server, but if you intend to use JBoss AS, you should redefine it in your custom configuration to fit your AS better.
If for a given reason, you need to use a specific configuration for a cache, you can register one thanks to an "external plugin", see an example below:
<configuration> ... <external-component-plugins> <target-component>org.exoplatform.services.cache.ExoCacheFactory</target-component> <component-plugin> <name>addConfig</name> <set-method>addConfig</set-method> <type>org.exoplatform.services.cache.impl.jboss.ExoCacheFactoryConfigPlugin</type> <description>add Custom Configurations</description> <init-params> <value-param> <name>myCustomCache</name> <value>jar:/conf/portal/custom-cache-configuration.xml</value> </value-param> </init-params> </component-plugin> </external-component-plugins> ... </configuration>
In the example above, we call the method addConfig(ExoCacheFactoryConfigPlugin plugin) on the current implementation of ExoCacheFactory, which is actually the jboss cache implementation.
In the init-params block, you can define a set of value-param blocks; for each value-param, we expect the name of the cache that needs a specific configuration as the name, and the location of your custom configuration as the value.
In this example, we indicate to the factory that we would like the cache myCustomCache to use the configuration available at jar:/conf/portal/custom-cache-configuration.xml.
The factory for jboss cache delegates the cache creation to an ExoCacheCreator that is defined as below:
package org.exoplatform.services.cache.impl.jboss; ... public interface ExoCacheCreator { /** * Creates an eXo cache according to the given configuration {@link org.exoplatform.services.cache.ExoCacheConfig} * @param config the configuration of the cache to apply * @param cache the cache to initialize * @exception ExoCacheInitException if an exception happens while initializing the cache */ public ExoCache create(ExoCacheConfig config, Cache<Serializable, Object> cache) throws ExoCacheInitException; /** * Returns the type of {@link org.exoplatform.services.cache.ExoCacheConfig} expected by the creator * @return the expected type */ public Class<? extends ExoCacheConfig> getExpectedConfigType(); /** * Returns the name of the implementation expected by the creator. This is mainly used to be backward compatible * @return the expected by the creator */ public String getExpectedImplementation(); }
The ExoCacheCreator allows you to define any kind of jboss cache instance that you would like to have. It has been designed to give you the ability to have your own type of configuration and to always be backward compatible.
In an ExoCacheCreator, you need to implement 3 methods which are:
create - this method is used to create a new ExoCache from the ExoCacheConfig and a jboss cache instance.
getExpectedConfigType - this method is used to indicate to the factory the subtype of ExoCacheConfig supported by the creator.
getExpectedImplementation - this method is used to indicate to the factory the value of the field implementation of ExoCacheConfig that is supported by the creator. This is used for backward compatibility; in other words, you can still configure your cache with the super class ExoCacheConfig.
You can register any cache creator you want thanks to an "external plugin", see an example below:
<external-component-plugins> <target-component>org.exoplatform.services.cache.ExoCacheFactory</target-component> <component-plugin> <name>addCreator</name> <set-method>addCreator</set-method> <type>org.exoplatform.services.cache.impl.jboss.ExoCacheCreatorPlugin</type> <description>add Exo Cache Creator</description> <init-params> <object-param> <name>LRU</name> <description>The lru cache creator</description> <object type="org.exoplatform.services.cache.impl.jboss.lru.LRUExoCacheCreator"> <field name="defaultTimeToLive"><long>1500</long></field> <field name="defaultMaxAge"><long>2000</long></field> </object> </object-param> </init-params> </component-plugin> </external-component-plugins>
In the example above, we call the method addCreator(ExoCacheCreatorPlugin plugin) on the current implementation of ExoCacheFactory, which is actually the jboss cache implementation.
In the init-params block, you can define a set of object-param blocks; for each object-param, we expect any object definition of type ExoCacheCreator.
In this example, we register the cache creator related to the eviction policy LRU.
By default, no cache creators are defined, so you need to define them yourself by adding them to your configuration files.
.. <object-param> <name>LRU</name> <description>The lru cache creator</description> <object type="org.exoplatform.services.cache.impl.jboss.lru.LRUExoCacheCreator"> <field name="defaultTimeToLive"><long>${my-value}</long></field> <field name="defaultMaxAge"><long>${my-value}</long></field> </object> </object-param> ...
Table 17.2. Fields description
defaultTimeToLive | This is the default value of the field timeToLive described in the section dedicated to this cache type. This value is only used when we define a cache of this type with the old configuration. |
defaultMaxAge | This is the default value of the field maxAge described in the section dedicated to this cache type. This value is only used when we define a cache of this type with the old configuration. |
... <object-param> <name>FIFO</name> <description>The fifo cache creator</description> <object type="org.exoplatform.services.cache.impl.jboss.fifo.FIFOExoCacheCreator"></object> </object-param> ...
... <object-param> <name>MRU</name> <description>The mru cache creator</description> <object type="org.exoplatform.services.cache.impl.jboss.mru.MRUExoCacheCreator"></object> </object-param> ...
... <object-param> <name>LFU</name> <description>The lfu cache creator</description> <object type="org.exoplatform.services.cache.impl.jboss.lfu.LFUExoCacheCreator"> <field name="defaultMinNodes"><int>${my-value}</int></field> </object> </object-param> ...
Table 17.3. Fields description
defaultMinNodes | This is the default value of the field minNodes described in the section dedicated to this cache type. This value is only used when we define a cache of this type with the old configuration. |
... <object-param> <name>EA</name> <description>The ea cache creator</description> <object type="org.exoplatform.services.cache.impl.jboss.ea.EAExoCacheCreator"> <field name="defaultExpirationTimeout"><long>2000</long></field> </object> </object-param> ...
Table 17.4. Fields description
defaultExpirationTimeout | This is the default value of the field expirationTimeout described in the section dedicated to this cache type. This value is only used when we define a cache of this type with the old configuration. |
You have 2 ways to define a cache which are:
At CacheService initialization
With an "external plugin"
... <component> <key>org.exoplatform.services.cache.CacheService</key> <type>org.exoplatform.services.cache.impl.CacheServiceImpl</type> <init-params> ... <object-param> <name>fifocache</name> <description>The default cache configuration</description> <object type="org.exoplatform.services.cache.ExoCacheConfig"> <field name="name"><string>fifocache</string></field> <field name="maxSize"><int>${my-value}</int></field> <field name="liveTime"><long>${my-value}</long></field> <field name="distributed"><boolean>false</boolean></field> <field name="implementation"><string>org.exoplatform.services.cache.FIFOExoCache</string></field> </object> </object-param> ... </init-params> </component> ...
In this example, we define a new cache called fifocache.
... <external-component-plugins> <target-component>org.exoplatform.services.cache.CacheService</target-component> <component-plugin> <name>addExoCacheConfig</name> <set-method>addExoCacheConfig</set-method> <type>org.exoplatform.services.cache.ExoCacheConfigPlugin</type> <description>add ExoCache configuration component plugin </description> <init-params> ... <object-param> <name>fifoCache</name> <description>The fifo cache configuration</description> <object type="org.exoplatform.services.cache.ExoCacheConfig"> <field name="name"><string>fifocache</string></field> <field name="maxSize"><int>${my-value}</int></field> <field name="liveTime"><long>${my-value}</long></field> <field name="distributed"><boolean>false</boolean></field> <field name="implementation"><string>org.exoplatform.services.cache.FIFOExoCache</string></field> </object> </object-param> ... </init-params> </component-plugin> </external-component-plugins> ...
In this example, we define a new cache called fifocache, which is in fact the same cache as in the previous example but defined in a different manner.
Actually, if you use a custom configuration for your cache as described in a previous section, we will use the cache mode defined in your configuration file.
In case you decide to use the default configuration template, we use the field distributed of your ExoCacheConfig to decide. In other words, if the value of this field is false (the default value), the cache will be a local cache; otherwise it will use the cache mode defined in your default configuration template, which should be distributed.
New configuration
... <object-param> <name>lru</name> <description>The lru cache configuration</description> <object type="org.exoplatform.services.cache.impl.jboss.lru.LRUExoCacheConfig"> <field name="name"><string>lru</string></field> <field name="maxNodes"><int>${my-value}</int></field> <field name="minTimeToLive"><long>${my-value}</long></field> <field name="maxAge"><long>${my-value}</long></field> <field name="timeToLive"><long>${my-value}</long></field> </object> </object-param> ...
Table 17.5. Fields description
maxNodes | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
minTimeToLive | The minimum amount of time (in milliseconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
maxAge | Lifespan of a node (in milliseconds) regardless of idle time before the node is swept away. 0 denotes immediate expiry, -1 denotes no limit. |
timeToLive | The amount of time a node is not written to or read (in milliseconds) before the node is swept away. 0 denotes immediate expiry, -1 denotes no limit. |
Old configuration
... <object-param> <name>lru-with-old-config</name> <description>The lru cache configuration</description> <object type="org.exoplatform.services.cache.ExoCacheConfig"> <field name="name"><string>lru-with-old-config</string></field> <field name="maxSize"><int>${my-value}</int></field> <field name="liveTime"><long>${my-value}</long></field> <field name="implementation"><string>LRU</string></field> </object> </object-param> ...
Table 17.6. Fields description
maxSize | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
liveTime | The minimum amount of time (in seconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
For the fields maxAge and timeToLive needed by JBoss cache, we will use the default values provided by the creator.
New configuration
... <object-param> <name>fifo</name> <description>The fifo cache configuration</description> <object type="org.exoplatform.services.cache.impl.jboss.fifo.FIFOExoCacheConfig"> <field name="name"><string>fifo</string></field> <field name="maxNodes"><int>${my-value}</int></field> <field name="minTimeToLive"><long>${my-value}</long></field> </object> </object-param> ...
Table 17.7. Fields description
maxNodes | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
minTimeToLive | The minimum amount of time (in milliseconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
Old configuration
... <object-param> <name>fifo-with-old-config</name> <description>The fifo cache configuration</description> <object type="org.exoplatform.services.cache.ExoCacheConfig"> <field name="name"><string>fifo-with-old-config</string></field> <field name="maxSize"><int>${my-value}</int></field> <field name="liveTime"><long>${my-value}</long></field> <field name="implementation"><string>FIFO</string></field> </object> </object-param> ...
Table 17.8. Fields description
maxSize | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
liveTime | The minimum amount of time (in seconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
New configuration
... <object-param> <name>mru</name> <description>The mru cache configuration</description> <object type="org.exoplatform.services.cache.impl.jboss.mru.MRUExoCacheConfig"> <field name="name"><string>mru</string></field> <field name="maxNodes"><int>${my-value}</int></field> <field name="minTimeToLive"><long>${my-value}</long></field> </object> </object-param> ...
Table 17.9. Fields description
maxNodes | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
minTimeToLive | The minimum amount of time (in milliseconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
Old configuration
... <object-param> <name>mru-with-old-config</name> <description>The mru cache configuration</description> <object type="org.exoplatform.services.cache.ExoCacheConfig"> <field name="name"><string>mru-with-old-config</string></field> <field name="maxSize"><int>${my-value}</int></field> <field name="liveTime"><long>${my-value}</long></field> <field name="implementation"><string>MRU</string></field> </object> </object-param> ...
Table 17.10. Fields description
maxSize | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
liveTime | The minimum amount of time (in seconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
New configuration
... <object-param> <name>lfu</name> <description>The lfu cache configuration</description> <object type="org.exoplatform.services.cache.impl.jboss.lfu.LFUExoCacheConfig"> <field name="name"><string>lfu</string></field> <field name="maxNodes"><int>${my-value}</int></field> <field name="minNodes"><int>${my-value}</int></field> <field name="minTimeToLive"><long>${my-value}</long></field> </object> </object-param> ...
Table 17.11. Fields description
maxNodes | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
minNodes | This is the minimum number of nodes allowed in this region. This value determines what the eviction queue should prune down to per pass. e.g. If minNodes is 10 and the cache grows to 100 nodes, the cache is pruned down to the 10 most frequently used nodes when the eviction timer makes a pass through the eviction algorithm. |
minTimeToLive | The minimum amount of time (in milliseconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
Old configuration
... <object-param> <name>lfu-with-old-config</name> <description>The lfu cache configuration</description> <object type="org.exoplatform.services.cache.ExoCacheConfig"> <field name="name"><string>lfu-with-old-config</string></field> <field name="maxSize"><int>${my-value}</int></field> <field name="liveTime"><long>${my-value}</long></field> <field name="implementation"><string>LFU</string></field> </object> </object-param> ...
Table 17.12. Fields description
maxSize | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
liveTime | The minimum amount of time (in milliseconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
For the fields minNodes and timeToLive needed by JBoss cache, we will use the default values provided by the creator.
New configuration
... <object-param> <name>ea</name> <description>The ea cache configuration</description> <object type="org.exoplatform.services.cache.impl.jboss.ea.EAExoCacheConfig"> <field name="name"><string>ea</string></field> <field name="maxNodes"><int>${my-value}</int></field> <field name="minTimeToLive"><long>${my-value}</long></field> <field name="expirationTimeout"><long>${my-value}</long></field> </object> </object-param> ...
Table 17.13. Fields description
maxNodes | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
minTimeToLive | The minimum amount of time (in milliseconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
expirationTimeout | This is the timeout after which the cache entry must be evicted. |
Old configuration
... <object-param> <name>ea-with-old-config</name> <description>The ea cache configuration</description> <object type="org.exoplatform.services.cache.ExoCacheConfig"> <field name="name"><string>ea-with-old-config</string></field> <field name="maxSize"><int>${my-value}</int></field> <field name="liveTime"><long>${my-value}</long></field> <field name="implementation"><string>EA</string></field> </object> </object-param> ...
Table 17.14. Fields description
maxSize | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
liveTime | The minimum amount of time (in milliseconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
For the field expirationTimeout needed by JBoss cache, we will use the default value provided by the creator.
TransactionService provides access to the XA TransactionManager and UserTransaction (see the JTA specification for details). A short usage sketch follows the table below.
Table 18.1. List of methods
getTransactionManager() | Returns the TransactionManager in use |
getUserTransaction() | Returns the UserTransaction bound to the TransactionManager |
getDefaultTimeout() | Returns the default timeout |
setTransactionTimeout(int seconds) | Sets the timeout, in seconds |
enlistResource(ExoResource xares) | Enlists an XA resource in the TransactionManager |
delistResource(ExoResource xares) | Delists an XA resource from the TransactionManager |
createXid() | Creates a unique XA transaction identifier |
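A minimal usage sketch is shown below; it assumes getUserTransaction() returns a standard javax.transaction.UserTransaction as listed above, and that the service is obtained by injection or container lookup (exception handling is simplified):

import javax.transaction.UserTransaction;
import org.exoplatform.services.transaction.TransactionService; // package assumed

public class TransactionClient {

    // demarcates a unit of work with the JTA UserTransaction exposed by the service
    public void doInTransaction(TransactionService txService) throws Exception {
        UserTransaction tx = txService.getUserTransaction();
        tx.begin();
        try {
            // ... perform transactional work here ...
            tx.commit();
        } catch (Exception e) {
            tx.rollback(); // undo the work if anything failed
            throw e;
        }
    }
}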
Table of Contents
The eXo Core is a set of common services that are used by eXo products and modules; it can also be used in business logic.
It includes Authentication and Security, Organization, Database, Logging, JNDI, LDAP, Document reader and other services. Find more on the eXo site.
Table of Contents
The Web Services module allows eXo technology to integrate with external products and services.
It is an implementation of an API for RESTful Web Services with extensions, Servlet and cross-domain AJAX web frameworks, and a JavaBean-JSON transformer. Find the full documentation on the eXo site.