JBoss.org Community Documentation
Copyright © 2009, 2010 eXoPlatform
The Java Content Repository API, like other Java-related standards, was created within the Java Community Process (http://jcp.org/) as a result of collaboration between an expert group and the Java community. It is known as JSR-170 (Java Specification Request) http://www.jcp.org/en/jsr/detail?id=170.
As the main purpose of a content repository is to maintain data, the heart of the repository is its data model:
The main data storage abstraction of JCR's data model is a workspace
Each repository should have one or more workspaces
The content is stored in a workspace as a hierarchy of items
Each workspace has its own hierarchy of items
Nodes are intended to support the data hierarchy. They are typed using namespaced names, which allows content to be structured according to standardized constraints. A node may be versioned through an associated version graph (optional feature)
Properties store data as values of predefined types (String, Binary, Long, Boolean, Double, Date, Reference, Path).
It is important to note that the data model for the interface (the repository model) is rarely the same as the data models used by the repository's underlying storage subsystems. The repository knows how to make the client's changes persistent because that is part of the repository configuration, rather than part of the application programming task.
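The data model described above can be sketched as a small in-memory structure. The following is a toy illustration only, NOT the javax.jcr API: a workspace holds a hierarchy of items rooted at a single node; items are either nodes or properties, and properties carry typed values. All class and property names here are invented for the example.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of the JCR data model: nodes form the hierarchy,
// properties hold the typed values. Illustrative names only.
class ToyNode {
    final String name; // a namespaced name, e.g. "jcr:root"
    final List<ToyNode> children = new ArrayList<>();
    final Map<String, Object> properties = new LinkedHashMap<>();

    ToyNode(String name) { this.name = name; }

    ToyNode addNode(String childName) {
        ToyNode child = new ToyNode(childName);
        children.add(child);
        return child;
    }
}

public class ToyWorkspace {
    public static void main(String[] args) {
        // Each workspace has its own hierarchy of items, rooted at a single node.
        ToyNode root = new ToyNode("jcr:root");
        ToyNode report = root.addNode("documents").addNode("report");
        report.properties.put("jcr:title", "Annual Report"); // String-typed value
        report.properties.put("pages", 42L);                 // Long-typed value
        System.out.println(report.properties.get("jcr:title"));
    }
}
```

The real repository exposes the same shape through javax.jcr interfaces (Session, Node, Property) while its persistent representation stays hidden behind the container configuration.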
Like other eXo services, eXo JCR can be configured and used in portal or embedded mode (as a service embedded in eXo Portal) and in standalone mode.
In embedded mode, JCR services are registered in the Portal container; the second option is to use a Standalone container. The main difference between these container types is that the former is intended for use in a Portal (web) environment, while the latter can be used standalone (TODO see the comprehensive page Service Configuration for Beginners for more details).
The following setup procedure is used to obtain a Standalone configuration (TODO find more in Container configuration):
Configuration that is set explicitly using StandaloneContainer.addConfigurationURL(String url) or StandaloneContainer.addConfigurationPath(String path) before getInstance()
Configuration from the $base:directory/exo-configuration.xml or $base:directory/conf/exo-configuration.xml file, where $base:directory is either the AS's home directory (in a J2EE AS environment) or the current directory (for a standalone application).
/conf/exo-configuration.xml in the current classloader (e.g. war, ear archive)
Configuration from $service_jar_file/conf/portal/configuration.xml. WARNING: do not rely on a specific jar's configuration if more than one jar contains a conf/portal/configuration.xml file; in that case, which configuration is chosen is unpredictable.
JCR service configuration looks like:
<component>
  <key>org.exoplatform.services.jcr.RepositoryService</key>
  <type>org.exoplatform.services.jcr.impl.RepositoryServiceImpl</type>
</component>
<component>
  <key>org.exoplatform.services.jcr.config.RepositoryServiceConfiguration</key>
  <type>org.exoplatform.services.jcr.impl.config.RepositoryServiceConfigurationImpl</type>
  <init-params>
    <value-param>
      <name>conf-path</name>
      <description>JCR repositories configuration file</description>
      <value>jar:/conf/standalone/exo-jcr-config.xml</value>
    </value-param>
    <properties-param>
      <name>working-conf</name>
      <description>working-conf</description>
      <property name="source-name" value="jdbcjcr" />
      <property name="dialect" value="hsqldb" />
      <property name="persister-class-name" value="org.exoplatform.services.jcr.impl.config.JDBCConfigurationPersister" />
    </properties-param>
  </init-params>
</component>
conf-path : a path to a RepositoryService JCR Configuration
working-conf : optional; the JCR configuration persister configuration. If working-conf is absent, the persister is disabled
The Configuration is defined in an XML file (see DTD below).
JCR Service can use multiple Repositories and each repository can have multiple Workspaces.
Repository configuration parameters support human-readable formats of values. They are all case-insensitive:
Number formats: K, KB - kilobytes; M, MB - megabytes; G, GB - gigabytes; T, TB - terabytes.
Examples: 100.5 - the number 100.5; 200k - 200 kilobytes; 4m - 4 megabytes; 1.4G - 1.4 gigabytes; 10T - 10 terabytes.
Time format suffixes: ms - milliseconds; s - seconds; m - minutes; h - hours; d - days; w - weeks; no suffix means seconds.
Examples: 500ms - 500 milliseconds; 20 or 20s - 20 seconds; 30m - 30 minutes; 12h - 12 hours; 5d - 5 days; 4w - 4 weeks.
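The interpretation of these formats can be sketched in Java as follows. This is a hedged illustration, not the eXo JCR parser itself; in particular, the size suffixes are assumed binary here (1K = 1024 bytes), which the real implementation may handle differently.

```java
// Sketch: interpreting the case-insensitive human-readable value formats
// described above. Assumption: size suffixes are binary (1K = 1024 bytes).
public class HumanReadableFormats {

    // "200k" -> 204800, "4m" -> 4194304, "100.5" -> 100 (plain number)
    public static long parseSize(String s) {
        String v = s.trim().toLowerCase();
        long mult = 1L;
        if (v.endsWith("k") || v.endsWith("kb")) mult = 1024L;
        else if (v.endsWith("m") || v.endsWith("mb")) mult = 1024L * 1024;
        else if (v.endsWith("g") || v.endsWith("gb")) mult = 1024L * 1024 * 1024;
        else if (v.endsWith("t") || v.endsWith("tb")) mult = 1024L * 1024 * 1024 * 1024;
        String number = v.replaceAll("[kmgtb]+$", ""); // strip the suffix
        return (long) (Double.parseDouble(number) * mult);
    }

    // "500ms" -> 500, "30m" -> 1800000, "20" -> 20000 (no suffix = seconds)
    public static long parseTimeMillis(String s) {
        String v = s.trim().toLowerCase();
        if (v.endsWith("ms")) return Long.parseLong(v.substring(0, v.length() - 2));
        char suffix = v.charAt(v.length() - 1);
        long mult;
        switch (suffix) {
            case 's': mult = 1000L; break;
            case 'm': mult = 60L * 1000; break;
            case 'h': mult = 60L * 60 * 1000; break;
            case 'd': mult = 24L * 60 * 60 * 1000; break;
            case 'w': mult = 7L * 24 * 60 * 60 * 1000; break;
            default:  return Long.parseLong(v) * 1000L; // plain number: seconds
        }
        return Long.parseLong(v.substring(0, v.length() - 1)) * mult;
    }
}
```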
The default configuration of the Repository Service is located in jar:/conf/portal/exo-jcr-config.xml; it is available in both portal and standalone modes.
In portal mode it is overridden and located in the portal web application, portal/WEB-INF/conf/jcr/repository-configuration.xml.
Example of Repository Service configuration for standalone mode:
<repository-service default-repository="repository">
  <repositories>
    <repository name="db1" system-workspace="ws" default-workspace="ws">
      <security-domain>exo-domain</security-domain>
      <access-control>optional</access-control>
      <session-max-age>1h</session-max-age>
      <authentication-policy>org.exoplatform.services.jcr.impl.core.access.JAASAuthenticator</authentication-policy>
      <workspaces>
        <workspace name="production">
          <!-- for system storage -->
          <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
            <properties>
              <property name="source-name" value="jdbcjcr" />
              <property name="multi-db" value="false" />
              <property name="update-storage" value="false" />
              <property name="max-buffer-size" value="200k" />
              <property name="swap-directory" value="../temp/swap/production" />
            </properties>
            <value-storages>
              <value-storage id="system" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
                <properties>
                  <property name="path" value="../temp/values/production" />
                </properties>
                <filters>
                  <filter property-type="Binary" />
                </filters>
              </value-storage>
            </value-storages>
          </container>
          <initializer class="org.exoplatform.services.jcr.impl.core.ScratchWorkspaceInitializer">
            <properties>
              <property name="root-nodetype" value="nt:unstructured" />
            </properties>
          </initializer>
          <cache enabled="true" class="org.exoplatform.services.jcr.impl.dataflow.persistent.LinkedWorkspaceStorageCacheImpl">
            <properties>
              <property name="max-size" value="10k" />
              <property name="live-time" value="1h" />
            </properties>
          </cache>
          <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
            <properties>
              <property name="index-dir" value="../temp/jcrlucenedb/production" />
            </properties>
          </query-handler>
          <lock-manager>
            <time-out>15m</time-out>
            <persister class="org.exoplatform.services.jcr.impl.core.lock.FileSystemLockPersister">
              <properties>
                <property name="path" value="../temp/lock/system" />
              </properties>
            </persister>
          </lock-manager>
        </workspace>
        <workspace name="backup">
          <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
            <properties>
              <property name="source-name" value="jdbcjcr" />
              <property name="multi-db" value="false" />
              <property name="update-storage" value="false" />
              <property name="max-buffer-size" value="200k" />
              <property name="swap-directory" value="../temp/swap/backup" />
            </properties>
            <value-storages>
              <value-storage id="draft" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
                <properties>
                  <property name="path" value="../temp/values/backup" />
                </properties>
                <filters>
                  <filter property-type="Binary" />
                </filters>
              </value-storage>
            </value-storages>
          </container>
          <initializer class="org.exoplatform.services.jcr.impl.core.ScratchWorkspaceInitializer">
            <properties>
              <property name="root-nodetype" value="nt:unstructured" />
            </properties>
          </initializer>
          <cache enabled="true" class="org.exoplatform.services.jcr.impl.dataflow.persistent.LinkedWorkspaceStorageCacheImpl">
            <properties>
              <property name="max-size" value="10k" />
              <property name="live-time" value="1h" />
            </properties>
          </cache>
          <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
            <properties>
              <property name="index-dir" value="../temp/jcrlucenedb/backup" />
            </properties>
          </query-handler>
        </workspace>
      </workspaces>
    </repository>
  </repositories>
</repository-service>
Repository Service configuration:
default-repository - the name of a default repository (one returned by RepositoryService.getRepository())
repositories - the list of repositories
Repository configuration:
name - the name of a repository
default-workspace - the name of a workspace obtained using Session's login() or login(Credentials) methods (ones without an explicit workspace name)
system-workspace - name of workspace where /jcr:system node is placed
security-domain - the name of a security domain for JAAS authentication
access-control - the name of an access control policy. There are three types: optional - an ACL is created on demand (default); disable - no access control; mandatory - an ACL is created for each added node (not supported yet)
authentication-policy - the name of an authentication policy class
workspaces - the list of workspaces
session-max-age - the time after which an idle session is removed (logged out). If not set, idle sessions are never removed.
Workspace configuration:
name - the name of a workspace
auto-init-root-nodetype - DEPRECATED in JCR 1.9 (use initializer). The node type for root node initialization
container - workspace data container (physical storage) configuration
initializer - workspace initializer configuration
cache - workspace storage cache configuration
query-handler - query handler configuration
Workspace data container configuration:
class - A workspace data container class name
properties - the list of properties (name-value pairs) for the concrete Workspace data container
value-storages - the list of value storage plugins
Value Storage plugin configuration (optional feature):
The value-storage element is optional. If you don't include it, the values will be stored as BLOBs inside the database.
value-storage - Optional value Storage plugin definition
class - a value storage plugin class name (attribute)
properties - the list of properties (name-value pairs) for a concrete Value Storage plugin
filters - the list of filters defining conditions when this plugin is applicable
Initializer configuration (optional):
class - initializer implementation class.
properties - the list of properties (name-value pairs). The following properties are supported:
root-nodetype - The node type for root node initialization
root-permissions - default permissions of the root node. It is defined as a set of semicolon-delimited permissions, each containing a space-delimited identity (user, group, etc.; see the Organization service documentation for details) and the permission type. For example, any read;:/admin read;:/admin add_node;:/admin set_property;:/admin remove means that users from the admin group have all permissions and other users have only read permission.
A configurable initializer adds the capability to override the workspace's initial startup procedure.
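The root-permissions format above can be illustrated with a small splitter. This is a hedged sketch for understanding the format only, not the eXo JCR parser; the class name is invented, and identities are treated as opaque strings.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a root-permissions value into (identity, permission)
// pairs. Entries are semicolon-delimited; within an entry, the identity
// and the permission type are separated by a space.
public class RootPermissionsFormat {
    public static List<String[]> parse(String value) {
        List<String[]> entries = new ArrayList<>();
        for (String raw : value.split(";")) {
            String entry = raw.trim();
            if (entry.isEmpty()) continue;
            int space = entry.lastIndexOf(' ');
            // identity (e.g. "any" or a membership/group string) + permission type
            entries.add(new String[] { entry.substring(0, space), entry.substring(space + 1) });
        }
        return entries;
    }
}
```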
Cache configuration:
enabled - if workspace cache is enabled
class - cache implementation class; optional since 1.9. The default value is org.exoplatform.services.jcr.impl.dataflow.persistent.LinkedWorkspaceStorageCacheImpl.
The cache can be configured to use a concrete implementation of the WorkspaceStorageCache interface. The JCR core provides two implementations:
LinkedWorkspaceStorageCacheImpl - the default, with configurable read behavior and statistics;
WorkspaceStorageCacheImpl - the pre-1.9 implementation, which can still be used.
properties - the list of properties (name-value pairs) for Workspace cache:
max-size - cache maximum size.
live-time - cached item live time.
LinkedWorkspaceStorageCacheImpl supports additional optional parameters TODO
Query Handler configuration:
class - A Query Handler class name
properties - the list of properties (name-value pairs) for a Query Handler (indexDir); advanced features are described in Search Configuration
Lock Manager configuration:
time-out - time after which the unused global lock will be removed.
persister - a class for storing lock information for future use, e.g. to restore locks after a JCR restart.
path - a lock folder, each workspace has its own.
Configuration definition:
<!ELEMENT repository-service (repositories)>
<!ATTLIST repository-service default-repository NMTOKEN #REQUIRED>
<!ELEMENT repositories (repository)>
<!ELEMENT repository (security-domain,access-control,session-max-age,authentication-policy,workspaces)>
<!ATTLIST repository
  default-workspace NMTOKEN #REQUIRED
  name NMTOKEN #REQUIRED
  system-workspace NMTOKEN #REQUIRED >
<!ELEMENT security-domain (#PCDATA)>
<!ELEMENT access-control (#PCDATA)>
<!ELEMENT session-max-age (#PCDATA)>
<!ELEMENT authentication-policy (#PCDATA)>
<!ELEMENT workspaces (workspace+)>
<!ELEMENT workspace (container,initializer,cache,query-handler)>
<!ATTLIST workspace name NMTOKEN #REQUIRED>
<!ELEMENT container (properties,value-storages)>
<!ATTLIST container class NMTOKEN #REQUIRED>
<!ELEMENT value-storages (value-storage+)>
<!ELEMENT value-storage (properties,filters)>
<!ATTLIST value-storage class NMTOKEN #REQUIRED>
<!ELEMENT filters (filter+)>
<!ELEMENT filter EMPTY>
<!ATTLIST filter property-type NMTOKEN #REQUIRED>
<!ELEMENT initializer (properties)>
<!ATTLIST initializer class NMTOKEN #REQUIRED>
<!ELEMENT cache (properties)>
<!ATTLIST cache
  enabled NMTOKEN #REQUIRED
  class NMTOKEN #REQUIRED >
<!ELEMENT query-handler (properties)>
<!ATTLIST query-handler class NMTOKEN #REQUIRED>
<!ELEMENT access-manager (properties)>
<!ATTLIST access-manager class NMTOKEN #REQUIRED>
<!ELEMENT lock-manager (time-out,persister)>
<!ELEMENT time-out (#PCDATA)>
<!ELEMENT persister (properties)>
<!ELEMENT properties (property+)>
<!ELEMENT property EMPTY>
eXo JCR persistent data container can work in two configuration modes:
Multi-database: one database for each workspace (used in standalone eXo JCR service mode)
Single-database: all workspaces persisted in one database (used in embedded eXo JCR service mode, e.g. in eXo portal)
The data container uses a JDBC driver to communicate with the actual database software; i.e. any JDBC-enabled data storage can be used with the eXo JCR implementation.
Currently the data container is tested with the following RDBMS:
MySQL (5.x including UTF8 support)
PostgreSQL (8.x)
Oracle Database (9i, 10g)
Microsoft SQL Server (2005)
Sybase ASE (15.0)
Apache Derby/Java DB (10.1.x, 10.2.x)
IBM DB2 (8.x, 9.x)
HSQLDB (1.8.0.7)
Each database supports the ANSI SQL standard but also has its own specifics, so each database has its own configuration in eXo JCR in the form of a database dialect parameter. If you need more detailed database configuration, you can edit the metadata SQL script files.
If a non-ANSI node name is used, it is necessary to use a database with multilanguage support [TODO link to MultiLanguage]. Some JDBC drivers need additional parameters to establish a Unicode-friendly connection; e.g. under MySQL it is necessary to append an additional parameter for the JDBC driver at the end of the JDBC URL. For instance:
jdbc:mysql://exoua.dnsalias.net/portal?characterEncoding=utf8
There are preconfigured configuration files for HSQLDB. Look for these files in the /conf/portal and /conf/standalone folders of the jar file exo.jcr.component.core-XXX.XXX.jar or in the source distribution of the eXo JCR implementation.
By default the configuration files are located in the service jars: /conf/portal/configuration.xml (eXo services including the JCR Repository Service) and exo-jcr-config.xml (repositories configuration). In the eXo portal product, JCR is configured in the portal web application: portal/WEB-INF/conf/jcr/jcr-configuration.xml (JCR Repository Service and related services) and repository-configuration.xml (repositories configuration).
Read more about Repository configuration.
You need to configure each workspace in a repository. If needed, each one can reside on a different remote server.
First of all, configure the data containers in the org.exoplatform.services.naming.InitialContextInitializer service. It is the JNDI context initializer which registers (binds) naming resources (DataSources) for the data containers.
Example (standalone mode, two data containers: jdbcjcr - a local HSQLDB, jdbcjcr1 - a remote MySQL):
<component>
  <key>org.exoplatform.services.naming.InitialContextInitializer</key>
  <type>org.exoplatform.services.naming.InitialContextInitializer</type>
  <component-plugins>
    <component-plugin>
      <name>bind.datasource</name>
      <set-method>addPlugin</set-method>
      <type>org.exoplatform.services.naming.BindReferencePlugin</type>
      <init-params>
        <value-param>
          <name>bind-name</name>
          <value>jdbcjcr</value>
        </value-param>
        <value-param>
          <name>class-name</name>
          <value>javax.sql.DataSource</value>
        </value-param>
        <value-param>
          <name>factory</name>
          <value>org.apache.commons.dbcp.BasicDataSourceFactory</value>
        </value-param>
        <properties-param>
          <name>ref-addresses</name>
          <description>ref-addresses</description>
          <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
          <property name="url" value="jdbc:hsqldb:file:target/temp/data/portal"/>
          <property name="username" value="sa"/>
          <property name="password" value=""/>
        </properties-param>
      </init-params>
    </component-plugin>
    <component-plugin>
      <name>bind.datasource</name>
      <set-method>addPlugin</set-method>
      <type>org.exoplatform.services.naming.BindReferencePlugin</type>
      <init-params>
        <value-param>
          <name>bind-name</name>
          <value>jdbcjcr1</value>
        </value-param>
        <value-param>
          <name>class-name</name>
          <value>javax.sql.DataSource</value>
        </value-param>
        <value-param>
          <name>factory</name>
          <value>org.apache.commons.dbcp.BasicDataSourceFactory</value>
        </value-param>
        <properties-param>
          <name>ref-addresses</name>
          <description>ref-addresses</description>
          <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
          <property name="url" value="jdbc:mysql://exoua.dnsalias.net/jcr"/>
          <property name="username" value="exoadmin"/>
          <property name="password" value="exo12321"/>
          <property name="maxActive" value="50"/>
          <property name="maxIdle" value="5"/>
          <property name="initialSize" value="5"/>
        </properties-param>
      </init-params>
    </component-plugin>
  </component-plugins>
  <init-params>
    <value-param>
      <name>default-context-factory</name>
      <value>org.exoplatform.services.naming.SimpleContextFactory</value>
    </value-param>
  </init-params>
</component>
We configure the database connection parameters:
driverClassName, e.g. "org.hsqldb.jdbcDriver", "com.mysql.jdbc.Driver", "org.postgresql.Driver"
url, e.g. "jdbc:hsqldb:file:target/temp/data/portal", "jdbc:mysql://exoua.dnsalias.net/jcr"
username, e.g. "sa", "exoadmin"
password, e.g. "", "exo12321"
There can also be connection pool configuration parameters (org.apache.commons.dbcp.BasicDataSourceFactory):
maxActive, e.g. 50
maxIdle, e.g. 5
initialSize, e.g. 5
and others, according to the Apache DBCP configuration
When the data container configuration is done, we can configure the repository service. Each workspace will be configured with its own data container.
Example (two workspaces: ws - jdbcjcr, ws1 - jdbcjcr1):
<workspaces>
  <workspace name="ws" auto-init-root-nodetype="nt:unstructured">
    <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer">
      <properties>
        <property name="source-name" value="jdbcjcr"/>
        <property name="dialect" value="hsqldb"/>
        <property name="multi-db" value="true"/>
        <property name="max-buffer-size" value="200K"/>
        <property name="swap-directory" value="target/temp/swap/ws"/>
      </properties>
    </container>
    <cache enabled="true">
      <properties>
        <property name="max-size" value="10K"/><!-- 10Kbytes -->
        <property name="live-time" value="30m"/><!-- 30 min -->
      </properties>
    </cache>
    <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
      <properties>
        <property name="index-dir" value="target/temp/index"/>
      </properties>
    </query-handler>
    <lock-manager>
      <time-out>15m</time-out><!-- 15 min -->
      <persister class="org.exoplatform.services.jcr.impl.core.lock.FileSystemLockPersister">
        <properties>
          <property name="path" value="target/temp/lock/ws"/>
        </properties>
      </persister>
    </lock-manager>
  </workspace>
  <workspace name="ws1" auto-init-root-nodetype="nt:unstructured">
    <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer">
      <properties>
        <property name="source-name" value="jdbcjcr1"/>
        <property name="dialect" value="mysql"/>
        <property name="multi-db" value="true"/>
        <property name="max-buffer-size" value="200K"/>
        <property name="swap-directory" value="target/temp/swap/ws1"/>
      </properties>
    </container>
    <cache enabled="true">
      <properties>
        <property name="max-size" value="10K"/>
        <property name="live-time" value="5m"/>
      </properties>
    </cache>
    <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
      <properties>
        <property name="index-dir" value="target/temp/index"/>
      </properties>
    </query-handler>
    <lock-manager>
      <time-out>15m</time-out><!-- 15 min -->
      <persister class="org.exoplatform.services.jcr.impl.core.lock.FileSystemLockPersister">
        <properties>
          <property name="path" value="target/temp/lock/ws1"/>
        </properties>
      </persister>
    </lock-manager>
  </workspace>
</workspaces>
source-name - a javax.sql.DataSource name configured in the InitialContextInitializer component (was sourceName prior to JCR 1.9);
dialect - a database dialect, one of "hsqldb", "mysql", "mysql-utf8", "pgsql", "oracle", "oracle-oci", "mssql", "sybase", "derby", "db2", "db2v8", or "auto" for dialect autodetection;
multi-db - enable the multi-database container with this parameter (set value "true");
max-buffer-size - a threshold (in bytes) after which a javax.jcr.Value content will be swapped to a file in temporary storage, i.e. swap for pending changes;
swap-directory - a path in the file system used to swap the pending changes.
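The max-buffer-size / swap-directory behavior can be sketched as follows. This is a hedged illustration of the idea only, with an invented class name, not the eXo JCR implementation: values at or under the threshold stay in memory, larger ones are swapped to a file in the swap directory.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: keep a pending value in memory when it fits under the
// threshold, otherwise swap it to a temporary file in the swap directory.
public class SwapBuffer {
    /** Returns the byte[] itself for small values, or the swap file Path for large ones. */
    public static Object store(byte[] value, long maxBufferSize, Path swapDirectory) throws IOException {
        if (value.length <= maxBufferSize) {
            return value; // under the threshold: keep the pending value in memory
        }
        Files.createDirectories(swapDirectory);
        Path swapFile = Files.createTempFile(swapDirectory, "jcrvalue", ".swap");
        Files.write(swapFile, value);
        return swapFile; // over the threshold: swapped to temporary storage
    }
}
```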
In this way we have configured two workspaces which will be persisted in two different databases (ws in HSQLDB, ws1 in MySQL).
Starting from v1.9, repository configuration parameters support human-readable formats of values (e.g. 200K - 200 kilobytes, 30m - 30 minutes, etc.).
It is simpler to configure a single-database data container: we have to configure only one naming resource.
Example (embedded mode, for the jdbcjcr data container):
<external-component-plugins>
  <target-component>org.exoplatform.services.naming.InitialContextInitializer</target-component>
  <component-plugin>
    <name>bind.datasource</name>
    <set-method>addPlugin</set-method>
    <type>org.exoplatform.services.naming.BindReferencePlugin</type>
    <init-params>
      <value-param>
        <name>bind-name</name>
        <value>jdbcjcr</value>
      </value-param>
      <value-param>
        <name>class-name</name>
        <value>javax.sql.DataSource</value>
      </value-param>
      <value-param>
        <name>factory</name>
        <value>org.apache.commons.dbcp.BasicDataSourceFactory</value>
      </value-param>
      <properties-param>
        <name>ref-addresses</name>
        <description>ref-addresses</description>
        <property name="driverClassName" value="org.postgresql.Driver"/>
        <property name="url" value="jdbc:postgresql://exoua.dnsalias.net/portal"/>
        <property name="username" value="exoadmin"/>
        <property name="password" value="exo12321"/>
        <property name="maxActive" value="50"/>
        <property name="maxIdle" value="5"/>
        <property name="initialSize" value="5"/>
      </properties-param>
    </init-params>
  </component-plugin>
</external-component-plugins>
Then configure the repository workspaces in the repositories configuration to use this single database. The "multi-db" parameter must be switched off (set value "false").
Example (two workspaces: ws - jdbcjcr, ws1 - jdbcjcr):
<workspaces>
  <workspace name="ws" auto-init-root-nodetype="nt:unstructured">
    <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer">
      <properties>
        <property name="source-name" value="jdbcjcr"/>
        <property name="dialect" value="pgsql"/>
        <property name="multi-db" value="false"/>
        <property name="max-buffer-size" value="200K"/>
        <property name="swap-directory" value="target/temp/swap/ws"/>
      </properties>
    </container>
    <cache enabled="true">
      <properties>
        <property name="max-size" value="10K"/>
        <property name="live-time" value="30m"/>
      </properties>
    </cache>
    <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
      <properties>
        <property name="index-dir" value="../temp/index"/>
      </properties>
    </query-handler>
    <lock-manager>
      <time-out>15m</time-out>
      <persister class="org.exoplatform.services.jcr.impl.core.lock.FileSystemLockPersister">
        <properties>
          <property name="path" value="target/temp/lock/ws"/>
        </properties>
      </persister>
    </lock-manager>
  </workspace>
  <workspace name="ws1" auto-init-root-nodetype="nt:unstructured">
    <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer">
      <properties>
        <property name="source-name" value="jdbcjcr"/>
        <property name="dialect" value="pgsql"/>
        <property name="multi-db" value="false"/>
        <property name="max-buffer-size" value="200K"/>
        <property name="swap-directory" value="target/temp/swap/ws1"/>
      </properties>
    </container>
    <cache enabled="true">
      <properties>
        <property name="max-size" value="10K"/>
        <property name="live-time" value="5m"/>
      </properties>
    </cache>
    <lock-manager>
      <time-out>15m</time-out>
      <persister class="org.exoplatform.services.jcr.impl.core.lock.FileSystemLockPersister">
        <properties>
          <property name="path" value="target/temp/lock/ws1"/>
        </properties>
      </persister>
    </lock-manager>
  </workspace>
</workspaces>
In this way we have configured two workspaces which will be persisted in one database (PostgreSQL).
A repository can also be configured without a javax.sql.DataSource bound in JNDI. This case may be useful if you have a dedicated JDBC driver implementation with special features like XA transactions, statement/connection pooling, etc.:
You have to remove the configuration in InitialContextInitializer for your database and configure a new one directly in the workspace container.
Remove the "source-name" parameter and add the following lines instead, describing your values for the JDBC driver, database URL and username.
But be careful: in this case the JDBC driver should implement and provide connection pooling. Connection pooling is strongly recommended with JCR to prevent database overload.
<workspace name="ws" auto-init-root-nodetype="nt:unstructured">
  <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer">
    <properties>
      <property name="dialect" value="hsqldb"/>
      <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
      <property name="url" value="jdbc:hsqldb:file:target/temp/data/portal"/>
      <property name="username" value="su"/>
      <property name="password" value=""/>
      ......
Workspaces can be added dynamically during runtime. This is performed in two steps:
Firstly, ManageableRepository.configWorkspace(WorkspaceEntry wsConfig) registers the new configuration in RepositoryContainer and creates a WorkspaceContainer.
Secondly, the main step, ManageableRepository.createWorkspace(String workspaceName) creates the new workspace.
eXo JCR provides two ways to interact with the database: JDBCStorageConnection, which uses simple queries, and CQJDBCStorageConnection, which uses complex queries to reduce the number of database calls.
Simple queries will be used if you choose org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer:
<workspaces>
  <workspace name="ws" auto-init-root-nodetype="nt:unstructured">
    <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer">
      ...
  </workspace>
</workspaces>
Complex queries will be used if you choose org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer:
<workspaces>
  <workspace name="ws" auto-init-root-nodetype="nt:unstructured">
    <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
      ...
  </workspace>
</workspaces>
Why should we use complex queries? They are optimized to reduce the number of requests to the database.
Why should we use simple queries? Simple queries are implemented in a way that supports as many database dialects as possible.
Simple queries do not use subqueries or left/right joins.
Some databases support hints to increase query performance (e.g. Oracle, MySQL). eXo JCR has a separate complex query implementation for the Oracle dialect that uses query hints to increase performance of a few important queries.
To enable this option, add the following configuration property:
<workspace name="ws" auto-init-root-nodetype="nt:unstructured">
  <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer">
    <properties>
      <property name="dialect" value="oracle"/>
      <property name="force.query.hints" value="true" />
      ......
Query hints are enabled by default.
eXo JCR uses query hints only for the complex query Oracle dialect; for all other dialects this parameter is ignored.
The current configuration of eXo JCR uses the Apache DBCP connection pool (org.apache.commons.dbcp.BasicDataSourceFactory). It is possible to set a large value for the maxActive parameter in configuration.xml. That means many TCP/IP ports are used from a client machine inside the pool (i.e. by the JDBC driver). As a result, the data container can throw exceptions like "Address already in use". To solve this problem, you have to configure the client machine's networking software to use shorter timeouts for opened TCP/IP ports.
Microsoft Windows has the MaxUserPort and TcpTimedWaitDelay registry keys in the node HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters; by default these keys are unset. Set each one with values like these:
"TcpTimedWaitDelay"=dword:0000001e, sets TIME_WAIT parameter to 30 seconds, default is 240.
"MaxUserPort"=dword:00001b58, sets the maximum of open ports to 7000 or higher, default is 5000.
A sample registry file is below:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"MaxUserPort"=dword:00001b58
"TcpTimedWaitDelay"=dword:0000001e
By default, JCR values are stored in the workspace data container along with the JCR structure (i.e. nodes and properties). eXo JCR offers an additional option of storing JCR values separately from the workspace data container, which can be extremely helpful, for example, to keep Binary Large Objects (BLOBs) (see [TODOBinary values processing link]).
Value storage configuration is a part of the Repository configuration; find more details there.
Tree-based storage is recommended for most cases. If you run an application on Amazon EC2, the S3 option may be interesting for the architecture. Simple 'flat' storage is fast at creating/deleting values; it might be a compromise for small storages.
This storage type holds values in tree-like file system files; the path property points to the root directory used to store the files.
This is the recommended type of external storage; it can contain a large number of files, limited only by disk/volume free space.
A disadvantage is the higher time needed for value deletion, due to the removal of unused tree nodes.
<value-storage id="Storage #1" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
  <properties>
    <property name="path" value="data/values"/>
  </properties>
  <filters>
    <filter property-type="Binary" min-value-size="1M"/>
  </filters>
</value-storage>
Where:
id - the value storage unique identifier, used for linking with properties stored in the workspace container
path - a location where value files will be stored
Each file value storage can have filter(s) for incoming values. A filter can match values by property type (property-type), property name (property-name), ancestor path (ancestor-path) and/or size of stored values (min-value-size, in bytes). In the code sample, we use a filter with property-type and min-value-size only, i.e. a storage for binary values with size greater than 1MB. It is recommended to store properties with large values in a file value storage only.
The next example shows a value storage with a different location for large files (a filter with min-value-size of 20MB). A value storage applies ORed logic when selecting a filter: the first filter in the list is asked first, and if it does not match, the next one is asked, and so on. Here a value larger than 20MB matches the min-value-size filter and is stored under "data/20Mvalues"; all others go to "data/values".
<value-storages>
  <value-storage id="Storage #1" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
    <properties>
      <property name="path" value="data/20Mvalues"/>
    </properties>
    <filters>
      <filter property-type="Binary" min-value-size="20M"/>
    </filters>
  </value-storage>
  <value-storage id="Storage #2" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
    <properties>
      <property name="path" value="data/values"/>
    </properties>
    <filters>
      <filter property-type="Binary" min-value-size="1M"/>
    </filters>
  </value-storage>
</value-storages>
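The first-match selection described above can be sketched in plain Java. All names here are hypothetical; the real TreeFileValueStorage internals differ:

```java
import java.util.Arrays;
import java.util.List;

public class FilterSelection {
    // A simplified value-storage filter: matches when the value size
    // reaches min-value-size, and maps to a storage path.
    static class Filter {
        final long minValueSize;
        final String path;
        Filter(long minValueSize, String path) {
            this.minValueSize = minValueSize;
            this.path = path;
        }
    }

    // Filters are asked in order; the first one that matches wins.
    static String selectPath(List<Filter> filters, long valueSize) {
        for (Filter f : filters) {
            if (valueSize >= f.minValueSize) {
                return f.path;
            }
        }
        return null; // no match: the value stays in the workspace data container
    }

    public static void main(String[] args) {
        List<Filter> filters = Arrays.asList(
            new Filter(20L * 1024 * 1024, "data/20Mvalues"), // 20M filter is asked first
            new Filter(1L * 1024 * 1024, "data/values"));    // then the 1M filter

        System.out.println(selectPath(filters, 25L * 1024 * 1024)); // data/20Mvalues
        System.out.println(selectPath(filters, 5L * 1024 * 1024));  // data/values
    }
}
```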
Simple 'flat' storage is not recommended for production use because most file systems handle very large flat directories poorly.
But if you are sure about your file system, or the amount of data is small, it may be useful for you, since it removes Values faster.
Holds Values in flat FileSystem files. The path property points to the root directory where the files are stored.
<value-storage id="Storage #1" class="org.exoplatform.services.jcr.impl.storage.value.fs.SimpleFileValueStorage"> <properties> <property name="path" value="data/values"/> </properties> <filters> <filter property-type="Binary" min-value-size="1M"/> </filters>
eXo JCR supports Content-addressable storage feature for Values storing.
Content-addressable storage, also referred to as associative storage and abbreviated CAS, is a mechanism for storing information that can be retrieved based on its content, not its storage location. It is typically used for high-speed storage and retrieval of fixed content, such as documents stored for compliance with government regulations.
Content Addressable Value storage stores unique content only once: different properties (values) with the same content are backed by one data file shared between those values. In other words, identical Value content is shared across Values in the storage and kept in one physical file.
This decreases the storage size for applications that handle potentially identical data.
For example: if you have 100 different properties containing the same data (e.g. a mail attachment), the storage keeps only one single file, shared by all referencing properties.
If a property Value changes, the new content is stored in an additional file; alternatively, if another value already has that content, the existing file is shared.
The storage recalculates the Value content address each time a property is changed, so CAS write operations are much more expensive than in non-CAS storages.
Content address calculation is based on java.security.MessageDigest hash computation and has been tested with the MD5 and SHA1 algorithms.
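The idea can be sketched with java.security.MessageDigest directly (an illustrative toy, not the actual JDBCValueContentAddressStorageImpl code): identical content always yields the same address, which is what makes sharing possible.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ContentAddress {
    // Compute a hex content address for a value, the way a CAS storage
    // derives addresses from content rather than from location.
    static String address(byte[] content, String algo) {
        try {
            MessageDigest md = MessageDigest.getInstance(algo);
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest(content)) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] first = "same mail attachment".getBytes(StandardCharsets.UTF_8);
        byte[] second = "same mail attachment".getBytes(StandardCharsets.UTF_8);
        // Identical content -> identical address -> stored in one shared file.
        System.out.println(address(first, "MD5").equals(address(second, "MD5"))); // true
    }
}
```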
CAS storage works most efficiently on data that does not change often. For data that changes frequently, CAS is not as efficient as location-based addressing.
CAS support can be enabled for Tree and Simple File Value Storage types.
To enable CAS support, just configure it in the JCR Repository configuration, as you do for other Value Storages.
<workspaces> <workspace name="ws"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr"/> <property name="dialect" value="oracle"/> <property name="multi-db" value="false"/> <property name="update-storage" value="false"/> <property name="max-buffer-size" value="200k"/> <property name="swap-directory" value="target/temp/swap/ws"/> </properties> <value-storages> <!------------------- here -----------------------> <value-storage id="ws" class="org.exoplatform.services.jcr.impl.storage.value.fs.CASableTreeFileValueStorage"> <properties> <property name="path" value="target/temp/values/ws"/> <property name="digest-algo" value="MD5"/> <property name="vcas-type" value="org.exoplatform.services.jcr.impl.storage.value.cas.JDBCValueContentAddressStorageImpl"/> <property name="jdbc-source-name" value="jdbcjcr"/> <property name="jdbc-dialect" value="oracle"/> </properties> <filters> <filter property-type="Binary"/> </filters> </value-storage> </value-storages>
Properties:
digest-algo - digest hash algorithm (MD5 and SHA1 were tested);
vcas-type - Value CAS internal data type; a JDBC-backed implementation is currently provided: org.exoplatform.services.jcr.impl.storage.value.cas.JDBCValueContentAddressStorageImpl;
jdbc-source-name - JDBCValueContentAddressStorageImpl-specific parameter: the database that will be used to save CAS metadata. It is simplest to use the same one as in the workspace container;
jdbc-dialect - JDBCValueContentAddressStorageImpl-specific parameter: the database dialect. It is simplest to use the same one as in the workspace container.
JCR index configuration. You can find this file here:
.../portal/WEB-INF/conf/jcr/repository-configuration.xml
<repository-service default-repository="db1"> <repositories> <repository name="db1" system-workspace="ws" default-workspace="ws"> .... <workspaces> <workspace name="ws"> .... <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex"> <properties> <property name="index-dir" value="${java.io.tmpdir}/temp/index/db1/ws" /> <property name="synonymprovider-class" value="org.exoplatform.services.jcr.impl.core.query.lucene.PropertiesSynonymProvider" /> <property name="synonymprovider-config-path" value="/synonyms.properties" /> <property name="indexing-config-path" value="/indexing-configuration.xml" /> <property name="query-class" value="org.exoplatform.services.jcr.impl.core.query.QueryImpl" /> </properties> </query-handler> ... </workspace> </workspaces> </repository> </repositories> </repository-service>
Table 6.1.
Parameter | Default | Description | Since |
---|---|---|---|
index-dir | none | The location of the index directory. This parameter is mandatory. Up to 1.9 this parameter was called "indexDir" | 1.0 |
use-compoundfile | true | Advises lucene to use compound files for the index files. | 1.9 |
min-merge-docs | 100 | Minimum number of nodes in an index until segments are merged. | 1.9 |
volatile-idle-time | 3 | Idle time in seconds until the volatile index part is moved to a persistent index even though minMergeDocs is not reached. | 1.9 |
max-merge-docs | Integer.MAX_VALUE | Maximum number of nodes in segments that will be merged. The default value changed in JCR 1.9 to Integer.MAX_VALUE. | 1.9 |
merge-factor | 10 | Determines how often segment indices are merged. | 1.9 |
max-field-length | 10000 | The number of words that are fulltext indexed at most per property. | 1.9 |
cache-size | 1000 | Size of the document number cache. This cache maps uuids to lucene document numbers | 1.9 |
force-consistencycheck | false | Runs a consistency check on every startup. If false, a consistency check is only performed when the search index detects a prior forced shutdown. | 1.9 |
auto-repair | true | Errors detected by a consistency check are automatically repaired. If false, errors are only written to the log. | 1.9 |
query-class | QueryImpl | Class name that implements the javax.jcr.query.Query interface. This class must also extend org.exoplatform.services.jcr.impl.core.query.AbstractQueryImpl. | 1.9 |
document-order | true | If true and the query does not contain an 'order by' clause, result nodes will be in document order. For better performance when queries return a lot of nodes set to 'false'. | 1.9 |
result-fetch-size | Integer.MAX_VALUE | The number of results when a query is executed. Default value: Integer.MAX_VALUE (-> all). | 1.9 |
excerptprovider-class | DefaultXMLExcerpt | The name of the class that implements org.exoplatform.services.jcr.impl.core.query.lucene.ExcerptProvider and should be used for the rep:excerpt() function in a query. | 1.9 |
support-highlighting | false | If set to true additional information is stored in the index to support highlighting using the rep:excerpt() function. | 1.9 |
synonymprovider-class | none | The name of a class that implements org.exoplatform.services.jcr.impl.core.query.lucene.SynonymProvider. The default value is null (-> not set). | 1.9 |
synonymprovider-config-path | none | The path to the synonym provider configuration file. This path interpreted relative to the path parameter. If there is a path element inside the SearchIndex element, then this path is interpreted relative to the root path of the path. Whether this parameter is mandatory depends on the synonym provider implementation. The default value is null (-> not set). | 1.9 |
indexing-configuration-path | none | The path to the indexing configuration file. | 1.9 |
indexing-configuration-class | IndexingConfigurationImpl | The name of the class that implements org.exoplatform.services.jcr.impl.core.query.lucene.IndexingConfiguration. | 1.9 |
force-consistencycheck | false | If set to true a consistency check is performed depending on the parameter forceConsistencyCheck. If set to false no consistency check is performed on startup, even if a redo log had been applied. | 1.9 |
spellchecker-class | none | The name of a class that implements org.exoplatform.services.jcr.impl.core.query.lucene.SpellChecker. | 1.9 |
errorlog-size | 50(Kb) | The default size of error log file in Kb. | 1.9 |
upgrade-index | false | Allows JCR to convert an existing index into the new format. Also it is possible to set this property via system property, for example: -Dupgrade-index=true Indexes before JCR 1.12 will not run with JCR 1.12. Hence you have to run an automatic migration: Start JCR with -Dupgrade-index=true. The old index format is then converted in the new index format. After the conversion the new format is used. On the next start you don't need this option anymore. The old index is replaced and a back conversion is not possible - therefore better take a backup of the index before. (Only for migrations from JCR 1.9 and later.) | 1.12 |
analyzer | org.apache.lucene.analysis.standard.StandardAnalyzer | Class name of a lucene analyzer to use for fulltext indexing of text. | 1.12 |
The global search index is configured in the above-mentioned configuration file (portal/WEB-INF/conf/jcr/repository-configuration.xml) in the tag "query-handler".
<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
In fact, when using Lucene, you should always use the same analyzer for indexing and for querying, otherwise the results are unpredictable. You don't have to worry about this: eXo JCR does it for you automatically. If you don't like the StandardAnalyzer configured by default, just replace it with your own.
If you don't have a handy QueryHandler, you will learn how to create a customized Handler in 5 minutes.
By default eXo JCR uses the Lucene StandardAnalyzer to index content. This analyzer applies several standard filters in the method that analyzes the content:
public TokenStream tokenStream(String fieldName, Reader reader) {
    StandardTokenizer tokenStream = new StandardTokenizer(reader, replaceInvalidAcronym);
    tokenStream.setMaxTokenLength(maxTokenLength);
    TokenStream result = new StandardFilter(tokenStream);
    result = new LowerCaseFilter(result);
    result = new StopFilter(result, stopSet);
    return result;
}
The first one (StandardFilter) removes 's (as in "Peter's") from the end of words and removes dots from acronyms.
The second one (LowerCaseFilter) normalizes token text to lower case.
The last one (StopFilter) removes stop words from a token stream. The stop set is defined in the analyzer.
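Conceptually, the LowerCaseFilter and StopFilter stages behave like this plain-Java sketch (an illustration only, not Lucene code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FilterPipeline {
    // A tiny stop set, analogous to the one defined in the analyzer.
    static final Set<String> STOP_SET = new HashSet<>(Arrays.asList("the", "a", "an", "of"));

    static List<String> analyze(String text) {
        List<String> tokens = new ArrayList<>();
        for (String t : text.split("\\s+")) {   // crude whitespace tokenizer
            String lower = t.toLowerCase();      // LowerCaseFilter step
            if (!STOP_SET.contains(lower)) {     // StopFilter step
                tokens.add(lower);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(analyze("The Heart of a Repository")); // [heart, repository]
    }
}
```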
For specific cases, you may wish to use additional filters like ISOLatin1AccentFilter, which replaces accented characters in the ISO Latin 1 character set (ISO-8859-1) by their unaccented equivalents.
In order to use a different filter, you have to create a new analyzer and a new search index that uses it, and put them in a jar deployed with your application.
The ISOLatin1AccentFilter is not present in the Lucene version currently used by eXo. You can use the attached file, or create your own filter; the relevant method is
public final Token next(final Token reusableToken) throws java.io.IOException
which defines how chars are read and used by the filter.
The analyzer has to extend org.apache.lucene.analysis.standard.StandardAnalyzer and overload the method
public TokenStream tokenStream(String fieldName, Reader reader)
to apply your own filters. You can have a glance at the example analyzer attached to this article.
Now that we have the analyzer, we have to write the SearchIndex that will use it. You have to extend org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex, write a constructor that sets the right analyzer, and override the method
public Analyzer getAnalyzer() { return MyAnalyzer; }
to return your analyzer. You can see the attached SearchIndex.
Since version 1.12, we can set the Analyzer directly in the configuration, so creating a new SearchIndex only for a new Analyzer is redundant.
In portal/WEB-INF/conf/jcr/repository-configuration.xml, you have to replace each
<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
by your own class:
<query-handler class="mypackage.indexation.MySearchIndex">
In portal/WEB-INF/conf/jcr/repository-configuration.xml, you have to add the "analyzer" parameter to each query-handler config:
<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex"> <properties> ... <property name="analyzer" value="org.exoplatform.services.jcr.impl.core.MyAnalyzer"/> ... </properties> </query-handler>
When you start eXo, your SearchIndex will start to index content with the specified filters.
Starting with version 1.9, the default search index implementation in JCR allows you to control which properties of a node are indexed. You also can define different analyzers for different nodes.
The configuration parameter is called indexing-configuration-path and by default it is not set, which means all properties of a node are indexed.
If you wish to configure the indexing behavior you need to add a parameter to the query-handler element in your configuration file.
<param name="indexing-configuration-path" value="/indexing_configuration.xml"/>
To optimize the index size you can limit the node scope so that only certain properties of a node type are indexed.
With the below configuration only properties named Text are indexed for nodes of type nt:unstructured. This configuration also applies to all nodes whose type extends from nt:unstructured.
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <index-rule nodeType="nt:unstructured"> <property>Text</property> </index-rule> </configuration>
Please note that you have to declare the namespace prefixes in the configuration element that you are using throughout the XML file!
It is also possible to configure a boost value for the nodes that match the index rule. The default boost value is 1.0. Higher boost values (a reasonable range is 1.0 - 5.0) will yield a higher score value and appear as more relevant.
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <index-rule nodeType="nt:unstructured" boost="2.0"> <property>Text</property> </index-rule> </configuration>
If you do not wish to boost the complete node but only certain properties you can also provide a boost value for the listed properties:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <index-rule nodeType="nt:unstructured"> <property boost="3.0">Title</property> <property boost="1.5">Text</property> </index-rule> </configuration>
You may also add a condition to the index rule and have multiple rules with the same nodeType. The first index rule that matches will apply and all remaining ones are ignored:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <index-rule nodeType="nt:unstructured" boost="2.0" condition="@priority = 'high'"> <property>Text</property> </index-rule> <index-rule nodeType="nt:unstructured"> <property>Text</property> </index-rule> </configuration>
In the above example the first rule only applies if the nt:unstructured node has a priority property with a value 'high'. The condition syntax supports only the equals operator and a string literal.
You may also reference properties in the condition that are not on the current node:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <index-rule nodeType="nt:unstructured" boost="2.0" condition="ancestor::*/@priority = 'high'"> <property>Text</property> </index-rule> <index-rule nodeType="nt:unstructured" boost="0.5" condition="parent::foo/@priority = 'low'"> <property>Text</property> </index-rule> <index-rule nodeType="nt:unstructured" boost="1.5" condition="bar/@priority = 'medium'"> <property>Text</property> </index-rule> <index-rule nodeType="nt:unstructured"> <property>Text</property> </index-rule> </configuration>
The indexing configuration also allows you to specify the type of a node in the condition. Please note however that the type match must be exact. It does not consider sub types of the specified node type.
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <index-rule nodeType="nt:unstructured" boost="2.0" condition="element(*, nt:unstructured)/@priority = 'high'"> <property>Text</property> </index-rule> </configuration>
By default, all configured properties are fulltext indexed if they are of type STRING and included in the node scope index. A node scope search normally searches across all indexed nodes; that is, jcr:contains(., 'foo') returns all nodes that have a string property containing the word 'foo'. You can explicitly exclude a property from the node scope index:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <index-rule nodeType="nt:unstructured"> <property nodeScopeIndex="false">Text</property> </index-rule> </configuration>
Sometimes it is useful to include the contents of descendant nodes in a single node, to make it easier to search content that is scattered across multiple nodes.
JCR allows you to define index aggregates based on relative path patterns and primary node types.
The following example creates an index aggregate on nt:file that includes the content of the jcr:content node:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <aggregate primaryType="nt:file"> <include>jcr:content</include> </aggregate> </configuration>
You can also restrict the included nodes to a certain type:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <aggregate primaryType="nt:file"> <include primaryType="nt:resource">jcr:content</include> </aggregate> </configuration>
You may also use the * to match all child nodes:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <aggregate primaryType="nt:file">http://wiki.exoplatform.com/xwiki/bin/edit/JCR/Search+Configuration <include primaryType="nt:resource">*</include> </aggregate> </configuration>
If you wish to include nodes up to a certain depth below the current node you can add multiple include elements. E.g. the nt:file node may contain a complete XML document under jcr:content:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <aggregate primaryType="nt:file"> <include>*</include> <include>*/*</include> <include>*/*/*</include> </aggregate> </configuration>
In this configuration section you define how a property has to be analyzed. If there is an analyzer configuration for a property, this analyzer is used for indexing and searching of this property. For example:
<?xml version="1.0"?> <!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd"> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0"> <analyzers> <analyzer class="org.apache.lucene.analysis.KeywordAnalyzer"> <property>mytext</property> </analyzer> <analyzer class="org.apache.lucene.analysis.WhitespaceAnalyzer"> <property>mytext2</property> </analyzer> </analyzers> </configuration>
The configuration above means that the property "mytext" for the entire workspace is indexed (and searched) with the Lucene KeywordAnalyzer, and property "mytext2" with the WhitespaceAnalyzer. Using different analyzers for different languages is particularly useful.
The WhitespaceAnalyzer tokenizes a property, the KeywordAnalyzer takes the property as a whole.
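That difference can be illustrated with a small plain-Java sketch (conceptual only, not the Lucene implementations):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class TokenizationSketch {
    // WhitespaceAnalyzer-like behavior: split the value on whitespace.
    static List<String> whitespaceTokens(String value) {
        return Arrays.asList(value.split("\\s+"));
    }

    // KeywordAnalyzer-like behavior: the whole value is one single token.
    static List<String> keywordTokens(String value) {
        return Collections.singletonList(value);
    }

    public static void main(String[] args) {
        String value = "testing my analyzers";
        System.out.println(whitespaceTokens(value)); // [testing, my, analyzers]
        System.out.println(keywordTokens(value));    // [testing my analyzers]
    }
}
```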
When using analyzers, you may encounter an unexpected behavior when searching within a property compared to searching within a node scope. The reason is that the node scope always uses the global analyzer.
Let's suppose that the property "mytext" contains the text "testing my analyzers", and that you haven't configured any analyzer for "mytext" (nor changed the default analyzer in SearchIndex).
If your query is for example:
xpath = "//*[jcr:contains(mytext,'analyzer')]"
this XPath query does not return a hit for the node with the property above under the default analyzer.
Also a search on the node scope
xpath = "//*[jcr:contains(.,'analyzer')]"
won't give a hit either. Note that you can only set specific analyzers on a node property, and that node scope indexing/analyzing is always done with the analyzer defined globally in the SearchIndex element.
Now, if you change the analyzer used to index the "mytext" property above to
<analyzer class="org.apache.lucene.analysis.Analyzer.GermanAnalyzer"> <property>mytext</property> </analyzer>
and you do the same search again, then for
xpath = "//*[jcr:contains(mytext,'analyzer')]"
you would get a hit because of the word stemming (analyzers - analyzer).
The other search,
xpath = "//*[jcr:contains(.,'analyzer')]"
still would not give a result, since the node scope is indexed with the global analyzer, which in this case does not take into account any word stemming.
In conclusion, be aware that when using analyzers for specific properties, you might find a hit in a property for some search text, and you do not find a hit with the same search text in the node scope of the property!
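The mismatch can be mimicked with a toy sketch (a naive stand-in stemmer; real stemming is done by the language analyzer inside Lucene):

```java
public class ScopeMismatch {
    // Toy stemmer standing in for a language-aware analyzer: strips a plural 's'.
    static String stem(String token) {
        return token.endsWith("s") ? token.substring(0, token.length() - 1) : token;
    }

    // Property index built with the stemming analyzer: both sides are stemmed.
    static boolean propertyHit(String indexedToken, String searchToken) {
        return stem(indexedToken).equals(stem(searchToken));
    }

    // Node scope index built with the global analyzer: no stemming, exact match only.
    static boolean nodeScopeHit(String indexedToken, String searchToken) {
        return indexedToken.equals(searchToken);
    }

    public static void main(String[] args) {
        System.out.println(propertyHit("analyzers", "analyzer"));  // true: stemmed match
        System.out.println(nodeScopeHit("analyzers", "analyzer")); // false: exact only
    }
}
```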
Both index rules and index aggregates influence how content is indexed in JCR. If you change the configuration the existing content is not automatically re-indexed according to the new rules. You therefore have to manually re-index the content when you change the configuration!
Whenever a relational database is used to store multilingual text data in the eXo Java Content Repository, the configuration must be adapted to support UTF-8 encoding. Below is a short HOWTO for several supported RDBMS, with examples.
The configuration file you have to modify: .../webapps/portal/WEB-INF/conf/jcr/repository-configuration.xml
The datasource jdbcjcr used in the examples can be configured via the InitialContextInitializer component.
In order to run a multilanguage JCR on an Oracle backend, the Unicode character set should be applied to the database. Other Oracle globalization parameters have no impact; the only property to modify is NLS_CHARACTERSET. We have tested NLS_CHARACTERSET = AL32UTF8 and it works well for many European and Asian languages.
Example of database configuration (used for JCR testing):
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET AL32UTF8
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_NCHAR_CHARACTERSET AL16UTF16
JCR 1.12.x doesn't use NVARCHAR columns, so that the value of the parameter NLS_NCHAR_CHARACTERSET does not matter for JCR.
Create database with Unicode encoding and use Oracle dialect for the Workspace Container:
<workspace name="collaboration"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr" /> <property name="dialect" value="oracle" /> <property name="multi-db" value="false" /> <property name="max-buffer-size" value="200k" /> <property name="swap-directory" value="target/temp/swap/ws" /> </properties> .....
DB2 Universal Database (DB2 UDB) supports UTF-8 and UTF-16/UCS-2. When a Unicode database is created, CHAR, VARCHAR, LONG VARCHAR data are stored in UTF-8 form. It's enough for JCR multi-lingual support.
Example of UTF-8 database creation:
DB2 CREATE DATABASE dbname USING CODESET UTF-8 TERRITORY US
Create database with UTF-8 encoding and use db2 dialect for Workspace Container on DB2 v.9 and higher:
<workspace name="collaboration"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr" /> <property name="dialect" value="db2" /> <property name="multi-db" value="false" /> <property name="max-buffer-size" value="200k" /> <property name="swap-directory" value="target/temp/swap/ws" /> </properties> .....
For DB2 v.8.x support change the property "dialect" to db2v8.
The JCR MySQL backend requires the special dialect MySQL-UTF8 for internationalization support. The database default charset, however, should be latin1 so that the limited index space is used effectively (1000 bytes for the MyISAM engine, 767 for InnoDB). If the database default charset is multibyte, JCR database initialization fails with an index creation error. In other words, JCR can work on any single-byte database default charset, with UTF8 supported by the MySQL server; but we have tested it only with the latin1 database default charset.
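The index-space arithmetic behind this recommendation can be checked quickly (assuming MySQL's utf8 charset needs up to 3 bytes per character):

```java
public class IndexSpace {
    // Maximum indexable characters for a key-length limit, given bytes per character.
    static int maxIndexedChars(int keyLimitBytes, int bytesPerChar) {
        return keyLimitBytes / bytesPerChar;
    }

    public static void main(String[] args) {
        // MyISAM: 1000-byte key limit
        System.out.println(maxIndexedChars(1000, 1)); // latin1: 1000 chars
        System.out.println(maxIndexedChars(1000, 3)); // utf8:    333 chars
        // InnoDB: 767-byte key limit
        System.out.println(maxIndexedChars(767, 3));  // utf8:    255 chars
    }
}
```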
Repository configuration, workspace container entry example:
<workspace name="collaboration"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr" /> <property name="dialect" value="mysql-utf8" /> <property name="multi-db" value="false" /> <property name="max-buffer-size" value="200k" /> <property name="swap-directory" value="target/temp/swap/ws" /> </properties> .....
On PostgreSQL-backend multilingual support can be enabled in different ways:
Using the locale features of the operating system to provide locale-specific collation order, number formatting, translated messages, and other aspects. UTF-8 is widely used on Linux distributions by default, so it can be useful in such case.
Providing a number of different character sets defined in the PostgreSQL server, including multiple-byte character sets, to support storing text in any language, and providing character set translation between client and server. We recommend using a UTF-8 database charset; it allows any-to-any conversions and makes this issue transparent to the JCR.
Create database with UTF-8 encoding and use PgSQL dialect for Workspace Container:
<workspace name="collaboration"> <container class="org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer"> <properties> <property name="source-name" value="jdbcjcr" /> <property name="dialect" value="pgsql" /> <property name="multi-db" value="false" /> <property name="max-buffer-size" value="200k" /> <property name="swap-directory" value="target/temp/swap/ws" /> </properties> .....
The JCR Repository Service uses the org.exoplatform.services.jcr.config.RepositoryServiceConfiguration component to read its configuration.
<component> <key>org.exoplatform.services.jcr.config.RepositoryServiceConfiguration</key> <type>org.exoplatform.services.jcr.impl.config.RepositoryServiceConfigurationImpl</type> <init-params> <value-param> <name>conf-path</name> <description>JCR configuration file</description> <value>/conf/standalone/exo-jcr-config.xml</value> </value-param> </init-params> </component>
In this example the Repository Service will read the configuration from the file /conf/standalone/exo-jcr-config.xml.
But in some cases it is required to change the configuration on the fly, with the assurance that the new one will be used, and without modifying the original file. In this case we use the configuration persister feature, which allows storing the configuration in different locations.
On startup the RepositoryServiceConfiguration component checks whether a configuration persister was configured. If so, it uses the provided ConfigurationPersister implementation class to instantiate the persister object.
Configuration with persister:
<component> <key>org.exoplatform.services.jcr.config.RepositoryServiceConfiguration</key> <type>org.exoplatform.services.jcr.impl.config.RepositoryServiceConfigurationImpl</type> <init-params> <value-param> <name>conf-path</name> <description>JCR configuration file</description> <value>/conf/standalone/exo-jcr-config.xml</value> </value-param> <properties-param> <name>working-conf</name> <description>working-conf</description> <property name="source-name" value="jdbcjcr" /> <property name="dialect" value="mysql" /> <property name="persister-class-name" value="org.exoplatform.services.jcr.impl.config.JDBCConfigurationPersister" /> </properties-param> </init-params> </component>
Where:
source-name - JNDI source name configured in the InitialContextInitializer component (sourceName prior to v.1.9). Find more in database configuration.
dialect - SQL dialect which will be used with the database from source-name. Find more in database configuration.
persister-class-name - class name of the ConfigurationPersister interface implementation (persisterClassName prior to v.1.9).
ConfigurationPersister interface:
/**
 * Init persister.
 * Used by RepositoryServiceConfiguration on init.
 */
void init(PropertiesParam params) throws RepositoryConfigurationException;

/**
 * Read config data.
 * @return - config data stream
 */
InputStream read() throws RepositoryConfigurationException;

/**
 * Create table, write data.
 * @param confData - config data stream
 */
void write(InputStream confData) throws RepositoryConfigurationException;

/**
 * Tell if the config exists.
 * @return - flag
 */
boolean hasConfig() throws RepositoryConfigurationException;
JCR Core implementation contains a persister which stores the
repository configuration in the relational database using JDBC calls -
org.exoplatform.services.jcr.impl.config.JDBCConfigurationPersister
.
The implementation will create and use a table named JCR_CONFIG in the provided database.
But developers can implement their own persister for a particular use case.
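A custom persister only needs to implement the four methods of the interface shown above. Below is a hypothetical sketch of a file-backed persister. The local `ConfigurationPersister` interface and `RepositoryConfigurationException` are simplified stand-ins for the real eXo JCR types (the real `init` takes a `PropertiesParam`, not a `Map`), and the `file-path` parameter name is invented for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Map;

// Simplified stand-ins for the eXo JCR types; the real exception is checked.
class RepositoryConfigurationException extends RuntimeException {
  RepositoryConfigurationException(String msg, Throwable cause) { super(msg, cause); }
}

interface ConfigurationPersister {
  void init(Map<String, String> params);
  InputStream read() throws RepositoryConfigurationException;
  void write(InputStream confData) throws RepositoryConfigurationException;
  boolean hasConfig();
}

// Hypothetical persister storing the repository configuration in a plain file
// instead of a JCR_CONFIG database table.
class FileConfigurationPersister implements ConfigurationPersister {
  private Path storage;

  public void init(Map<String, String> params) {
    // "file-path" is an illustrative parameter name, not an eXo JCR one.
    storage = Paths.get(params.getOrDefault("file-path", "exo-jcr-config-copy.xml"));
  }

  public boolean hasConfig() {
    return Files.exists(storage);
  }

  public InputStream read() throws RepositoryConfigurationException {
    try {
      return new ByteArrayInputStream(Files.readAllBytes(storage));
    } catch (IOException e) {
      throw new RepositoryConfigurationException("Cannot read config", e);
    }
  }

  // Convenience for callers: config content as a String.
  public String readAsString() throws RepositoryConfigurationException {
    try {
      return new String(Files.readAllBytes(storage));
    } catch (IOException e) {
      throw new RepositoryConfigurationException("Cannot read config", e);
    }
  }

  public void write(InputStream confData) throws RepositoryConfigurationException {
    try {
      Files.copy(confData, storage, java.nio.file.StandardCopyOption.REPLACE_EXISTING);
    } catch (IOException e) {
      throw new RepositoryConfigurationException("Cannot write config", e);
    }
  }
}
```

The RepositoryServiceConfiguration component would instantiate such a class by the configured persister-class-name and call init, hasConfig, read and write in that life cycle.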
To deploy eXo JCR to JBoss AS, follow these steps:
Download the latest version of the eXo JCR ear distribution.
Copy <jcr.ear> into <%jboss_home%/server/default/deploy>
Put exo-configuration.xml into the root <%jboss_home%/exo-configuration.xml>
Configure JAAS by inserting the XML fragment shown below into <%jboss_home%/server/default/conf/login-config.xml>
<application-policy name="exo-domain">
  <authentication>
    <login-module code="org.exoplatform.services.security.j2ee.JbossLoginModule" flag="required"></login-module>
  </authentication>
</application-policy>
Ensure that you use the JBossTS Transaction Service and the JBossCache Transaction Manager. Your exo-configuration.xml must contain the following parts:
<component>
  <key>org.jboss.cache.transaction.TransactionManagerLookup</key>
  <type>org.jboss.cache.GenericTransactionManagerLookup</type>
</component>
<component>
  <key>org.exoplatform.services.transaction.TransactionService</key>
  <type>org.exoplatform.services.transaction.jbosscache.JBossTransactionsService</type>
  <init-params>
    <value-param>
      <name>timeout</name>
      <value>300</value>
    </value-param>
  </init-params>
</component>
Start the server:
bin/run.sh for Unix
bin/run.bat for Windows
Try accessing http://localhost:8080/browser with root/exo as login/password. If you have done everything right, you will get access to the repository browser.
To configure the repository manually, create a new configuration file (e.g. exo-jcr-configuration.xml). For details see JCR Configuration. Your configuration should look like:
<repository-service default-repository="repository1">
  <repositories>
    <repository name="repository1" system-workspace="ws1" default-workspace="ws1">
      <security-domain>exo-domain</security-domain>
      <access-control>optional</access-control>
      <authentication-policy>org.exoplatform.services.jcr.impl.core.access.JAASAuthenticator</authentication-policy>
      <workspaces>
        <workspace name="ws1">
          <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
            <properties>
              <property name="source-name" value="jdbcjcr" />
              <property name="dialect" value="oracle" />
              <property name="multi-db" value="false" />
              <property name="update-storage" value="false" />
              <property name="max-buffer-size" value="200k" />
              <property name="swap-directory" value="../temp/swap/production" />
            </properties>
            <value-storages>
              see "Value storage configuration" part.
            </value-storages>
          </container>
          <initializer class="org.exoplatform.services.jcr.impl.core.ScratchWorkspaceInitializer">
            <properties>
              <property name="root-nodetype" value="nt:unstructured" />
            </properties>
          </initializer>
          <cache enabled="true" class="org.exoplatform.services.jcr.impl.dataflow.persistent.jbosscache.JBossCacheWorkspaceStorageCache">
            see "Cache configuration" part.
          </cache>
          <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
            see "Indexer configuration" part.
          </query-handler>
          <lock-manager class="org.exoplatform.services.jcr.impl.core.lock.jbosscache.CacheableLockManagerImpl">
            see "Lock Manager configuration" part.
          </lock-manager>
        </workspace>
        <workspace name="ws2">
          ...
        </workspace>
        <workspace name="wsN">
          ...
        </workspace>
      </workspaces>
    </repository>
  </repositories>
</repository-service>
and update RepositoryServiceConfiguration configuration in exo-configuration.xml to use this file:
<component>
  <key>org.exoplatform.services.jcr.config.RepositoryServiceConfiguration</key>
  <type>org.exoplatform.services.jcr.impl.config.RepositoryServiceConfigurationImpl</type>
  <init-params>
    <value-param>
      <name>conf-path</name>
      <description>JCR configuration file</description>
      <value>exo-jcr-configuration.xml</value>
    </value-param>
  </init-params>
</component>
Every node of the cluster MUST have the same mounted Network File System with read and write permissions on it.
"/mnt/tornado" - path to the mounted Network File System (all cluster nodes must use the same NFS)
Every node of the cluster MUST use the same database.
The same clusters on different nodes MUST have the same cluster names (e.g. if the indexer cluster in workspace production on the first node is named "production_indexer_cluster", then the indexer clusters in workspace production on all other nodes MUST have the same name "production_indexer_cluster").
The configuration of every workspace in the repository must contain the following parts:
<value-storages>
  <value-storage id="system" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
    <properties>
      <!-- path within NFS where the ValueStorage will hold its data -->
      <property name="path" value="/mnt/tornado/temp/values/production" />
    </properties>
    <filters>
      <filter property-type="Binary" />
    </filters>
  </value-storage>
</value-storages>
<cache enabled="true" class="org.exoplatform.services.jcr.impl.dataflow.persistent.jbosscache.JBossCacheWorkspaceStorageCache">
  <properties>
    <!-- path to the JBoss Cache configuration for data storage -->
    <property name="jbosscache-configuration" value="jar:/conf/portal/test-jbosscache-data.xml" />
    <!-- path to the JGroups configuration -->
    <property name="jgroups-configuration" value="jar:/conf/portal/udp-mux.xml" />
    <!-- JBoss Cache data storage cluster name -->
    <property name="jbosscache-cluster-name" value="JCR_Cluster_cache_production" />
    <property name="jgroups-multiplexer-stack" value="true" />
  </properties>
</cache>
<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
  <properties>
    <property name="changesfilter-class" value="org.exoplatform.services.jcr.impl.core.query.jbosscache.JBossCacheIndexChangesFilter" />
    <!-- path within NFS where the index will hold its data -->
    <property name="index-dir" value="/mnt/tornado/temp/jcrlucenedb/production" />
    <!-- path to the JBoss Cache configuration for the indexer -->
    <property name="jbosscache-configuration" value="jar:/conf/portal/test-jbosscache-indexer.xml" />
    <!-- path to the JGroups configuration -->
    <property name="jgroups-configuration" value="jar:/conf/portal/udp-mux.xml" />
    <!-- JBoss Cache indexer cluster name -->
    <property name="jbosscache-cluster-name" value="JCR_Cluster_indexer_production" />
    <property name="jgroups-multiplexer-stack" value="true" />
  </properties>
</query-handler>
<lock-manager class="org.exoplatform.services.jcr.impl.core.lock.jbosscache.CacheableLockManagerImpl">
  <properties>
    <property name="time-out" value="15m" />
    <!-- path to the JBoss Cache configuration for the lock manager -->
    <property name="jbosscache-configuration" value="jar:/conf/portal/test-jbosscache-lock.xml" />
    <!-- path to the JGroups configuration -->
    <property name="jgroups-configuration" value="jar:/conf/portal/udp-mux.xml" />
    <property name="jgroups-multiplexer-stack" value="true" />
    <!-- JBoss Cache locks cluster name -->
    <property name="jbosscache-cluster-name" value="JCR_Cluster_lock_production" />
    <!-- the name of the DB table where lock data will be stored -->
    <property name="jbosscache-cl-cache.jdbc.table.name" value="jcrlocks_production" />
    <property name="jbosscache-cl-cache.jdbc.table.create" value="true" />
    <property name="jbosscache-cl-cache.jdbc.table.drop" value="false" />
    <property name="jbosscache-cl-cache.jdbc.table.primarykey" value="jcrlocks_production_pk" />
    <property name="jbosscache-cl-cache.jdbc.fqn.column" value="fqn" />
    <property name="jbosscache-cl-cache.jdbc.node.column" value="node" />
    <property name="jbosscache-cl-cache.jdbc.parent.column" value="parent" />
    <property name="jbosscache-cl-cache.jdbc.datasource" value="jdbcjcr" />
  </properties>
</lock-manager>
Each of the components mentioned above uses an instance of the JBoss Cache product for caching in a clustered environment. Every such element has its own transport and has to be configured properly. As usual, workspaces have similar configurations, but with different cluster names and perhaps some other parameters. The simplest way to configure them is to define a separate configuration file for each component in each workspace:
<property name="jbosscache-configuration" value="conf/standalone/test-jbosscache-lock-db1-ws1.xml" />
But if there are many workspaces, configuring them this way can be painful and hard to manage. eXo JCR offers a template-based configuration for JBoss Cache instances. You can have one template for the Lock Manager, one for the Indexer and one for the data container, and use them in all the workspaces, defining the map of substitution parameters in the main configuration file. Simply define ${jbosscache-<parameter name>} inside the XML template and list the correct value in the JCR configuration file just below "jbosscache-configuration", as shown:
template:
...
<clustering mode="replication" clusterName="${jbosscache-cluster-name}">
  <stateRetrieval timeout="20000" fetchInMemoryState="false" />
...
and JCR configuration file:
...
<property name="jbosscache-configuration" value="jar:/conf/portal/jbosscache-lock.xml" />
<property name="jbosscache-cluster-name" value="JCR-cluster-locks-db1-ws" />
...
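The template mechanism boils down to plain placeholder substitution: each `${jbosscache-...}` variable in the template file is replaced with the matching property value from the workspace configuration. A minimal sketch of that substitution (illustrative only, not the actual eXo JCR code):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of ${...} template substitution as used for the
// per-workspace JBoss Cache templates; not the actual eXo JCR implementation.
class TemplateFiller {
  private static final Pattern VAR = Pattern.compile("\\$\\{([\\w.-]+)\\}");

  // Replaces every ${name} in the template with params.get(name),
  // leaving unknown placeholders untouched.
  static String fill(String template, Map<String, String> params) {
    Matcher m = VAR.matcher(template);
    StringBuffer out = new StringBuffer();
    while (m.find()) {
      String value = params.get(m.group(1));
      m.appendReplacement(out, Matcher.quoteReplacement(value != null ? value : m.group()));
    }
    m.appendTail(out);
    return out.toString();
  }
}
```

With the configuration above, the template line `clusterName="${jbosscache-cluster-name}"` would become `clusterName="JCR-cluster-locks-db1-ws"` for that workspace.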
JGroups is used by JBoss Cache for network communications and transport in a clustered environment. If the property "jgroups-configuration" is defined in the component configuration, it will be injected into the JBoss Cache instance on startup.
<property name="jgroups-configuration" value="your/path/to/modified-udp.xml" />
As mentioned above, each component (lock manager, data container and query handler) of each workspace requires its own clustered environment. In other words, they have their own clusters with unique names. By default each cluster performs multicasts on a separate port. This configuration leads to great unnecessary overhead on the cluster. That is why JGroups offers the multiplexer feature, providing the ability to use one single channel for a set of clusters. This feature reduces network overhead, increasing the performance and stability of the application. To enable the multiplexer stack, you should define the appropriate configuration file (udp-mux.xml is pre-shipped with eXo JCR) and set "jgroups-multiplexer-stack" to "true".
<property name="jgroups-configuration" value="jar:/conf/portal/udp-mux.xml" />
<property name="jgroups-multiplexer-stack" value="true" />
The eXo JCR implementation is shipped with ready-to-use JBoss Cache configuration templates for JCR components. They are situated in the application package, in the /conf/portal/ folder.
The data container template is "jbosscache-data.xml":
<?xml version="1.0" encoding="UTF-8"?>
<jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns="urn:jboss:jbosscache-core:config:3.1">
  <locking useLockStriping="false" concurrencyLevel="50000"
           lockParentForChildInsertRemove="false" lockAcquisitionTimeout="20000" />
  <clustering mode="replication" clusterName="${jbosscache-cluster-name}">
    <stateRetrieval timeout="20000" fetchInMemoryState="false" />
    <jgroupsConfig multiplexerStack="jcr.stack" />
    <sync />
  </clustering>
  <!-- Eviction configuration -->
  <eviction wakeUpInterval="5000">
    <default algorithmClass="org.jboss.cache.eviction.LRUAlgorithm"
             actionPolicyClass="org.exoplatform.services.jcr.impl.dataflow.persistent.jbosscache.ParentNodeEvictionActionPolicy"
             eventQueueSize="1000000">
      <property name="maxNodes" value="1000000" />
      <property name="timeToLive" value="120000" />
    </default>
  </eviction>
</jbosscache>
The lock manager template is "jbosscache-lock.xml":
<?xml version="1.0" encoding="UTF-8"?>
<jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns="urn:jboss:jbosscache-core:config:3.1">
  <locking useLockStriping="false" concurrencyLevel="50000"
           lockParentForChildInsertRemove="false" lockAcquisitionTimeout="20000" />
  <clustering mode="replication" clusterName="${jbosscache-cluster-name}">
    <stateRetrieval timeout="20000" fetchInMemoryState="false" />
    <jgroupsConfig multiplexerStack="jcr.stack" />
    <sync />
  </clustering>
  <loaders passivation="false" shared="true">
    <preload>
      <node fqn="/" />
    </preload>
    <loader class="org.jboss.cache.loader.JDBCCacheLoader" async="false"
            fetchPersistentState="false" ignoreModifications="false" purgeOnStartup="false">
      <properties>
        cache.jdbc.table.name=${jbosscache-cl-cache.jdbc.table.name}
        cache.jdbc.table.create=${jbosscache-cl-cache.jdbc.table.create}
        cache.jdbc.table.drop=${jbosscache-cl-cache.jdbc.table.drop}
        cache.jdbc.table.primarykey=${jbosscache-cl-cache.jdbc.table.primarykey}
        cache.jdbc.fqn.column=${jbosscache-cl-cache.jdbc.fqn.column}
        cache.jdbc.fqn.type=${jbosscache-cl-cache.jdbc.fqn.type}
        cache.jdbc.node.column=${jbosscache-cl-cache.jdbc.node.column}
        cache.jdbc.node.type=${jbosscache-cl-cache.jdbc.node.type}
        cache.jdbc.parent.column=${jbosscache-cl-cache.jdbc.parent.column}
        cache.jdbc.datasource=${jbosscache-cl-cache.jdbc.datasource}
      </properties>
    </loader>
  </loaders>
</jbosscache>
Table 10.2. Template variables
Variable |
---|
jbosscache-cluster-name |
jbosscache-cl-cache.jdbc.table.name |
jbosscache-cl-cache.jdbc.table.create |
jbosscache-cl-cache.jdbc.table.drop |
jbosscache-cl-cache.jdbc.table.primarykey |
jbosscache-cl-cache.jdbc.fqn.column |
jbosscache-cl-cache.jdbc.fqn.type |
jbosscache-cl-cache.jdbc.node.column |
jbosscache-cl-cache.jdbc.node.type |
jbosscache-cl-cache.jdbc.parent.column |
jbosscache-cl-cache.jdbc.datasource |
The indexer template is "jbosscache-indexer.xml":
<?xml version="1.0" encoding="UTF-8"?>
<jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns="urn:jboss:jbosscache-core:config:3.1">
  <locking useLockStriping="false" concurrencyLevel="50000"
           lockParentForChildInsertRemove="false" lockAcquisitionTimeout="20000" />
  <clustering mode="replication" clusterName="${jbosscache-cluster-name}">
    <stateRetrieval timeout="20000" fetchInMemoryState="false" />
    <jgroupsConfig multiplexerStack="jcr.stack" />
    <sync />
  </clustering>
  <!-- Eviction configuration -->
  <eviction wakeUpInterval="5000">
    <default algorithmClass="org.jboss.cache.eviction.FIFOAlgorithm" eventQueueSize="1000000">
      <property name="maxNodes" value="10000" />
      <property name="minTimeToLive" value="60000" />
    </default>
  </eviction>
</jbosscache>
What does LockManager do?
In short, LockManager stores lock objects: it can give out a Lock object, release it, and so on.
LockManager is also responsible for removing Locks that live too long. This behavior may be configured with the "time-out" property.
JCR provides two base implementations of LockManager:
org.exoplatform.services.jcr.impl.core.lock.LockManagerImpl;
org.exoplatform.services.jcr.impl.core.lock.jbosscache.CacheableLockManagerImpl;
In this article we will talk mostly about CacheableLockManagerImpl.
You can enable LockManager by adding lock-manager-configuration to workspace-configuration.
For example:
<workspace name="ws">
  ...
  <lock-manager class="org.exoplatform.services.jcr.impl.core.lock.jbosscache.CacheableLockManagerImpl">
    <properties>
      <property name="time-out" value="15m" />
      ...
    </properties>
  </lock-manager>
  ...
</workspace>
LockManagerImpl is a simple implementation of LockManager, and it is also faster than CacheableLockManager. It stores Lock objects in a HashMap and may also persist Locks if a LockPersister is configured. LockManagerImpl does not support replication in any way.
See more about LockManager Configuration here.
CacheableLockManagerImpl stores Lock objects in JBoss Cache, so Locks are replicable and affect the whole cluster, not only a single node. JBoss Cache also has a JDBCCacheLoader, so locks will be stored in the database.
Both implementations support the removal of expired Locks. A separate LockRemover thread periodically asks LockManager for Locks that have lived too long and must be removed. The timeout for the LockRemover may be set as follows (the default value is 30m):
<properties>
  <property name="time-out" value="10m" />
  ...
</properties>
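The expiration mechanism can be pictured as a table of locks swept periodically. The sketch below is illustrative, not the actual LockRemover code, and the parsing of "15m"-style time-out values is a simplified assumption:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of expired-lock removal; not the eXo JCR implementation.
class LockTable {
  static final class Lock {
    final long createdAt;
    Lock(long createdAt) { this.createdAt = createdAt; }
  }

  private final Map<String, Lock> locks = new ConcurrentHashMap<>();
  private final long timeoutMillis;

  LockTable(String timeout) {
    this.timeoutMillis = parseTimeout(timeout);
  }

  // Simplified parser for values like "15m" (minutes) or "30s" (seconds).
  static long parseTimeout(String value) {
    long n = Long.parseLong(value.substring(0, value.length() - 1));
    switch (value.charAt(value.length() - 1)) {
      case 'm': return n * 60_000L;
      case 's': return n * 1_000L;
      default: throw new IllegalArgumentException("Unknown unit in: " + value);
    }
  }

  void lock(String nodeId, long now) { locks.put(nodeId, new Lock(now)); }
  boolean isLocked(String nodeId) { return locks.containsKey(nodeId); }

  // What the LockRemover thread conceptually does on every period:
  // drop every lock that has lived longer than the timeout.
  void removeExpired(long now) {
    locks.entrySet().removeIf(e -> now - e.getValue().createdAt > timeoutMillis);
  }
}
```

In the real implementation the sweep runs on its own thread and, for CacheableLockManagerImpl, the lock data lives in JBoss Cache rather than a local map.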
The replication requirements are the same as for the cache.
A full JCR configuration example can be seen here.
Common tips:
clusterName ("jbosscache-cluster-name") must be unique;
cache.jdbc.table.name must be unique per datasource;
cache.jdbc.fqn.type and cache.jdbc.node.type must be configured according to the database used;
There are a few ways to configure CacheableLockManagerImpl, and all of them configure JBoss Cache and the JDBCCacheLoader.
See http://community.jboss.org/wiki/JBossCacheJDBCCacheLoader
The first way is to put the JBoss Cache configuration file path into CacheableLockManagerImpl.
This configuration is not as good as you might think: a repository may contain many workspaces, each workspace must contain a LockManager configuration, and each LockManager config may reference its own JBoss Cache config file, so the total configuration keeps growing. But it is useful if we want a single LockManager with a special configuration.
The config is:
<lock-manager class="org.exoplatform.services.jcr.impl.core.lock.jbosscache.CacheableLockManagerImpl">
  <properties>
    <property name="time-out" value="15m" />
    <property name="jbosscache-configuration" value="conf/standalone/cluster/test-jbosscache-lock-config.xml" />
  </properties>
</lock-manager>
test-jbosscache-lock-config.xml
<?xml version="1.0" encoding="UTF-8"?>
<jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns="urn:jboss:jbosscache-core:config:3.2">
  <locking useLockStriping="false" concurrencyLevel="50000"
           lockParentForChildInsertRemove="false" lockAcquisitionTimeout="20000" />
  <clustering mode="replication" clusterName="JBoss-Cache-Lock-Cluster_Name">
    <stateRetrieval timeout="20000" fetchInMemoryState="false" nonBlocking="true" />
    <jgroupsConfig>
      <TCP bind_addr="127.0.0.1" start_port="9800" loopback="true"
           recv_buf_size="20000000" send_buf_size="640000"
           discard_incompatible_packets="true" max_bundle_size="64000"
           max_bundle_timeout="30" use_incoming_packet_handler="true"
           enable_bundling="false" use_send_queues="false" sock_conn_timeout="300"
           skip_suspected_members="true" use_concurrent_stack="true"
           thread_pool.enabled="true" thread_pool.min_threads="1"
           thread_pool.max_threads="25" thread_pool.keep_alive_time="5000"
           thread_pool.queue_enabled="false" thread_pool.queue_max_size="100"
           thread_pool.rejection_policy="run"
           oob_thread_pool.enabled="true" oob_thread_pool.min_threads="1"
           oob_thread_pool.max_threads="8" oob_thread_pool.keep_alive_time="5000"
           oob_thread_pool.queue_enabled="false" oob_thread_pool.queue_max_size="100"
           oob_thread_pool.rejection_policy="run" />
      <MPING timeout="2000" num_initial_members="2" mcast_port="34540"
             bind_addr="127.0.0.1" mcast_addr="224.0.0.1" />
      <MERGE2 max_interval="30000" min_interval="10000" />
      <FD_SOCK />
      <FD max_tries="5" shun="true" timeout="10000" />
      <VERIFY_SUSPECT timeout="1500" />
      <pbcast.NAKACK discard_delivered_msgs="true" gc_lag="0"
                     retransmit_timeout="300,600,1200,2400,4800" use_mcast_xmit="false" />
      <UNICAST timeout="300,600,1200,2400,3600" />
      <pbcast.STABLE desired_avg_gossip="50000" max_bytes="400000" stability_delay="1000" />
      <pbcast.GMS join_timeout="5000" print_local_addr="true" shun="false"
                  view_ack_collection_timeout="5000" view_bundling="true" />
      <FRAG2 frag_size="60000" />
      <pbcast.STREAMING_STATE_TRANSFER />
      <pbcast.FLUSH timeout="0" />
    </jgroupsConfig>
    <sync />
  </clustering>
  <loaders passivation="false" shared="true">
    <preload>
      <node fqn="/" />
    </preload>
    <loader class="org.jboss.cache.loader.JDBCCacheLoader" async="false"
            fetchPersistentState="false" ignoreModifications="false" purgeOnStartup="false">
      <properties>
        cache.jdbc.table.name=jcrlocks_ws
        cache.jdbc.table.create=true
        cache.jdbc.table.drop=false
        cache.jdbc.table.primarykey=jcrlocks_ws_pk
        cache.jdbc.fqn.column=fqn
        cache.jdbc.fqn.type=VARCHAR(512)
        cache.jdbc.node.column=node
        cache.jdbc.node.type=BLOB
        cache.jdbc.parent.column=parent
        cache.jdbc.datasource=jdbcjcr
      </properties>
    </loader>
  </loaders>
</jbosscache>
Configuration requirements:
<clustering mode="replication" clusterName="JBoss-Cache-Lock-Cluster_Name"> - the cluster name must be unique;
cache.jdbc.table.name must be unique per datasource;
cache.jdbc.node.type and cache.jdbc.fqn.type must be configured according to the database used. See Data Types in Different Databases.
The second way is to use a template JBoss Cache configuration for all LockManagers.
Lock template configuration
test-jbosscache-lock.xml
<?xml version="1.0" encoding="UTF-8"?>
<jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns="urn:jboss:jbosscache-core:config:3.1">
  <locking useLockStriping="false" concurrencyLevel="50000"
           lockParentForChildInsertRemove="false" lockAcquisitionTimeout="20000" />
  <clustering mode="replication" clusterName="${jbosscache-cluster-name}">
    <stateRetrieval timeout="20000" fetchInMemoryState="false" />
    <jgroupsConfig multiplexerStack="jcr.stack" />
    <sync />
  </clustering>
  <loaders passivation="false" shared="true">
    <!-- All the data of the JCR locks needs to be loaded at startup -->
    <preload>
      <node fqn="/" />
    </preload>
    <!-- For another cache-loader class you should use another template with cache-loader specific parameters -->
    <loader class="org.jboss.cache.loader.JDBCCacheLoader" async="false"
            fetchPersistentState="false" ignoreModifications="false" purgeOnStartup="false">
      <properties>
        cache.jdbc.table.name=${jbosscache-cl-cache.jdbc.table.name}
        cache.jdbc.table.create=${jbosscache-cl-cache.jdbc.table.create}
        cache.jdbc.table.drop=${jbosscache-cl-cache.jdbc.table.drop}
        cache.jdbc.table.primarykey=${jbosscache-cl-cache.jdbc.table.primarykey}
        cache.jdbc.fqn.column=${jbosscache-cl-cache.jdbc.fqn.column}
        cache.jdbc.fqn.type=${jbosscache-cl-cache.jdbc.fqn.type}
        cache.jdbc.node.column=${jbosscache-cl-cache.jdbc.node.column}
        cache.jdbc.node.type=${jbosscache-cl-cache.jdbc.node.type}
        cache.jdbc.parent.column=${jbosscache-cl-cache.jdbc.parent.column}
        cache.jdbc.datasource=${jbosscache-cl-cache.jdbc.datasource}
      </properties>
    </loader>
  </loaders>
</jbosscache>
As you can see, all the configurable parameters are filled by template placeholders and will be replaced by the LockManager's configuration parameters:
<lock-manager class="org.exoplatform.services.jcr.impl.core.lock.jbosscache.CacheableLockManagerImpl">
  <properties>
    <property name="time-out" value="15m" />
    <property name="jbosscache-configuration" value="test-jbosscache-lock.xml" />
    <property name="jgroups-configuration" value="udp-mux.xml" />
    <property name="jgroups-multiplexer-stack" value="true" />
    <property name="jbosscache-cluster-name" value="JCR-cluster-locks-ws" />
    <property name="jbosscache-cl-cache.jdbc.table.name" value="jcrlocks_ws" />
    <property name="jbosscache-cl-cache.jdbc.table.create" value="true" />
    <property name="jbosscache-cl-cache.jdbc.table.drop" value="false" />
    <property name="jbosscache-cl-cache.jdbc.table.primarykey" value="jcrlocks_ws_pk" />
    <property name="jbosscache-cl-cache.jdbc.fqn.column" value="fqn" />
    <property name="jbosscache-cl-cache.jdbc.fqn.type" value="AUTO" />
    <property name="jbosscache-cl-cache.jdbc.node.column" value="node" />
    <property name="jbosscache-cl-cache.jdbc.node.type" value="AUTO" />
    <property name="jbosscache-cl-cache.jdbc.parent.column" value="parent" />
    <property name="jbosscache-cl-cache.jdbc.datasource" value="jdbcjcr" />
  </properties>
</lock-manager>
Configuration requirements:
jbosscache-cl-cache.jdbc.fqn.type and jbosscache-cl-cache.jdbc.node.type are nothing other than cache.jdbc.fqn.type and cache.jdbc.node.type in the JBoss Cache configuration. You can set those data types according to the database type (see Data Types in Different Databases), or set them to AUTO (or not set them at all) and the data type will be detected automatically.
As you can see, the jgroups-configuration is moved to a separate config file - udp-mux.xml. In our case udp-mux.xml is the common JGroups config for all components (QueryHandler, cache, LockManager), but we can still create our own config.
our-udp-mux.xml
<protocol_stacks>
  <stack name="jcr.stack">
    <config>
      <UDP mcast_addr="228.10.10.10" mcast_port="45588" tos="8"
           ucast_recv_buf_size="20000000" ucast_send_buf_size="640000"
           mcast_recv_buf_size="25000000" mcast_send_buf_size="640000"
           loopback="false" discard_incompatible_packets="true"
           max_bundle_size="64000" max_bundle_timeout="30"
           use_incoming_packet_handler="true" ip_ttl="2"
           enable_bundling="true" enable_diagnostics="true"
           thread_naming_pattern="cl" use_concurrent_stack="true"
           thread_pool.enabled="true" thread_pool.min_threads="2"
           thread_pool.max_threads="8" thread_pool.keep_alive_time="5000"
           thread_pool.queue_enabled="true" thread_pool.queue_max_size="1000"
           thread_pool.rejection_policy="discard"
           oob_thread_pool.enabled="true" oob_thread_pool.min_threads="1"
           oob_thread_pool.max_threads="8" oob_thread_pool.keep_alive_time="5000"
           oob_thread_pool.queue_enabled="false" oob_thread_pool.queue_max_size="100"
           oob_thread_pool.rejection_policy="Run" />
      <PING timeout="2000" num_initial_members="3" />
      <MERGE2 max_interval="30000" min_interval="10000" />
      <FD_SOCK />
      <FD timeout="10000" max_tries="5" shun="true" />
      <VERIFY_SUSPECT timeout="1500" />
      <BARRIER />
      <pbcast.NAKACK use_stats_for_retransmission="false" exponential_backoff="150"
                     use_mcast_xmit="true" gc_lag="0"
                     retransmit_timeout="50,300,600,1200" discard_delivered_msgs="true" />
      <UNICAST timeout="300,600,1200" />
      <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="1000000" />
      <VIEW_SYNC avg_send_interval="60000" />
      <pbcast.GMS print_local_addr="true" join_timeout="3000" shun="false" view_bundling="true" />
      <FC max_credits="500000" min_threshold="0.20" />
      <FRAG2 frag_size="60000" />
      <!--pbcast.STREAMING_STATE_TRANSFER /-->
      <pbcast.STATE_TRANSFER />
      <!-- pbcast.FLUSH /-->
    </config>
  </stack>
</protocol_stacks>
Table 11.1. Fqn type and node type in different databases
DataBase name | Node data type | FQN data type |
---|---|---|
default | BLOB | VARCHAR(512) |
HSSQL | OBJECT | VARCHAR(512) |
MySQL | LONGBLOB | VARCHAR(512) |
ORACLE | BLOB | VARCHAR2(512) |
PostgreSQL | bytea | VARCHAR(512) |
MSSQL | VARBINARY(MAX) | VARCHAR(512) |
DB2 | BLOB | VARCHAR(512) |
Sybase | IMAGE | VARCHAR(512) |
Ingres | long byte | VARCHAR(512) |
Let's talk about indexing content in a cluster.
For a couple of reasons, we can't replicate the index. That means that data added and indexed on one cluster node will be replicated to other cluster nodes, but will not be indexed on those nodes.
So how does indexing work in a cluster environment?
Since we cannot index the same data on all nodes of the cluster, we must index it on one node. The node that can index data and make changes to the Lucene index is called the "coordinator". The coordinator node is chosen automatically, so we do not need a special configuration for the coordinator.
But how can other nodes save their changes to the Lucene index?
First of all, the data is already saved and replicated to the other cluster nodes, so we only need to deliver a message like "we need to index this data" to the coordinator. That is why JBoss Cache is used.
All nodes of the cluster write messages into JBoss Cache, but only the coordinator takes those messages and makes changes to the Lucene index.
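This message flow can be sketched as a shared change queue: every node publishes "index this" messages, but only the coordinator consumes them and touches the index. The sketch below is a conceptual model only; the real implementation uses a replicated JBoss Cache region, not a plain in-memory queue:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Conceptual model of coordinator-only indexing; not the eXo JCR implementation.
class ClusterIndexer {
  // Stands in for the replicated JBoss Cache region holding index messages.
  private final Queue<String> sharedChanges = new ConcurrentLinkedQueue<>();
  // Stands in for the shared Lucene index directory.
  private final List<String> sharedIndex = new ArrayList<>();

  // Every node, coordinator or not, publishes its changes.
  void publishChange(String itemId) {
    sharedChanges.add(itemId);
  }

  // Only the coordinator drains the queue and updates the index.
  void drain(boolean isCoordinator) {
    if (!isCoordinator) return;
    String change;
    while ((change = sharedChanges.poll()) != null) {
      sharedIndex.add(change); // "index" the already-replicated data
    }
  }

  int indexedCount() { return sharedIndex.size(); }
  int pendingCount() { return sharedChanges.size(); }
}
```

The point of the model: publishing is symmetric across nodes, while index mutation is strictly asymmetric - it happens on the coordinator only.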
How does search work in a cluster environment?
The search engine does not deal with the indexer, coordinator, etc. Search only needs the Lucene index. But only one cluster node can change the Lucene index, you might ask - how do the others read it? The Lucene index is shared, so all cluster nodes must be configured to use the Lucene index from a shared directory.
A little bit about the indexing process (clustered or not): the Indexer does not write changes to the FS Lucene index immediately. At first, the Indexer writes changes to the Volatile index. If the Volatile index size becomes 1Mb or more, it is flushed to the FS. There is also a timer that flushes the volatile index by timeout. The volatile index timeout is configured by the "max-volatile-time" parameter.
See more about Search Configuration.
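The volatile-index behavior described above can be sketched as a buffer that flushes either when it crosses the size threshold or when the max-volatile-time elapses. This is an illustrative model, not the actual indexer code; the threshold constants and method names are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the volatile-index flush policy; not the actual eXo JCR indexer.
class VolatileIndex {
  static final long SIZE_THRESHOLD = 1024 * 1024; // ~1Mb, as described above
  private final long maxVolatileMillis;           // "max-volatile-time"

  private final List<String> volatileEntries = new ArrayList<>();
  private long volatileSize = 0;
  private long lastFlush;
  private int flushedEntries = 0;

  VolatileIndex(long maxVolatileMillis, long now) {
    this.maxVolatileMillis = maxVolatileMillis;
    this.lastFlush = now;
  }

  void add(String entry, long entrySize, long now) {
    volatileEntries.add(entry);
    volatileSize += entrySize;
    maybeFlush(now);
  }

  // Called on add and by the timer: flush to the FS index when either
  // the size threshold or the time threshold is crossed.
  void maybeFlush(long now) {
    if (volatileEntries.isEmpty()) return;
    if (volatileSize >= SIZE_THRESHOLD || now - lastFlush >= maxVolatileMillis) {
      flushedEntries += volatileEntries.size(); // stands in for writing to FS
      volatileEntries.clear();
      volatileSize = 0;
      lastFlush = now;
    }
  }

  int pending() { return volatileEntries.size(); }
  int flushed() { return flushedEntries; }
}
```

Either trigger alone is sufficient: a burst of large changes flushes by size, while a trickle of small changes is eventually flushed by the timer.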
Common scheme of the Shared Index
Now let's see what we need to run the search engine in a cluster environment:
a shared directory for storing the Lucene index (i.e. NFS);
a changes filter configured as org.exoplatform.services.jcr.impl.core.query.jbosscache.JBossCacheIndexChangesFilter - this filter ignores changes on non-coordinator nodes and indexes changes on the coordinator node;
a JBoss Cache configuration, of course.
Configuration example:
<workspace name="ws">
  <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
    <properties>
      <property name="index-dir" value="shareddir/index/db1/ws" />
      <property name="changesfilter-class" value="org.exoplatform.services.jcr.impl.core.query.jbosscache.JBossCacheIndexChangesFilter" />
      <property name="jbosscache-configuration" value="jbosscache-indexer.xml" />
      <property name="jgroups-configuration" value="udp-mux.xml" />
      <property name="jgroups-multiplexer-stack" value="true" />
      <property name="jbosscache-cluster-name" value="JCR-cluster-indexer-ws" />
      <property name="max-volatile-time" value="60" />
    </properties>
  </query-handler>
</workspace>
Table 12.1. Config properties description
Property name | Description |
---|---|
index-dir | path to index |
jbosscache-configuration | template of JBoss-cache configuration for all query-handlers in repository |
jgroups-configuration | jgroups-configuration is template configuration for all components (search, cache, locks) [Add link to document describing template configurations] |
jgroups-multiplexer-stack | [TODO about jgroups-multiplexer-stack - add link to JBoss doc] |
jbosscache-cluster-name | cluster name (must be unique) |
max-volatile-time | max time to live for Volatile Index |
JBoss-Cache template configuration for query handler.
jbosscache-indexer.xml
<?xml version="1.0" encoding="UTF-8"?>
<jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns="urn:jboss:jbosscache-core:config:3.1">
  <locking useLockStriping="false" concurrencyLevel="50000"
           lockParentForChildInsertRemove="false" lockAcquisitionTimeout="20000" />
  <!-- Configure the TransactionManager -->
  <transaction transactionManagerLookupClass="org.jboss.cache.transaction.JBossStandaloneJTAManagerLookup" />
  <clustering mode="replication" clusterName="${jbosscache-cluster-name}">
    <stateRetrieval timeout="20000" fetchInMemoryState="false" />
    <jgroupsConfig multiplexerStack="jcr.stack" />
    <sync />
  </clustering>
  <!-- Eviction configuration -->
  <eviction wakeUpInterval="5000">
    <default algorithmClass="org.jboss.cache.eviction.FIFOAlgorithm" eventQueueSize="1000000">
      <property name="maxNodes" value="10000" />
      <property name="minTimeToLive" value="60000" />
    </default>
  </eviction>
</jbosscache>
See more about template configurations here.
JBossTransactionsService implements eXo TransactionService and provides access to JBoss Transaction Service (JBossTS) JTA implementation via eXo container dependency.
The TransactionService is used in the JCR cache implementation org.exoplatform.services.jcr.impl.dataflow.persistent.jbosscache.JBossCacheWorkspaceStorageCache. See Cluster configuration for an example.
Example configuration:
<component>
  <key>org.exoplatform.services.transaction.TransactionService</key>
  <type>org.exoplatform.services.transaction.jbosscache.JBossTransactionsService</type>
  <init-params>
    <value-param>
      <name>timeout</name>
      <value>3000</value>
    </value-param>
  </init-params>
</component>
timeout - XA transaction timeout in seconds
It is a JBossCache class registered as an eXo container component in the configuration.xml file.
<component>
  <key>org.jboss.cache.transaction.TransactionManagerLookup</key>
  <type>org.jboss.cache.transaction.JBossStandaloneJTAManagerLookup</type>
</component>
JBossStandaloneJTAManagerLookup is used in a standalone environment. But for an Application Server environment, use GenericTransactionManagerLookup.
In order to get a better idea of the time spent in the database access layer, it can be interesting to collect some statistics on that part of the code, knowing that most of the time spent in eXo JCR is mainly database access. These statistics will then allow you to identify, without using any profiler, what is abnormally slow in this layer, which could help to fix the problem quickly.
In case you use org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer or org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer as the WorkspaceDataContainer, you can get statistics on the time spent in the database access layer. The database access layer (in eXo JCR) is represented by the methods of the interface org.exoplatform.services.jcr.storage.WorkspaceStorageConnection, so for all the methods defined in this interface, we can have the following figures:
The minimum time spent in the method.
The maximum time spent in the method.
The average time spent in the method.
The total amount of time spent in the method.
The total number of times the method has been called.
These figures are also available globally for all the methods, which gives the overall behavior of this layer.
To enable the statistics, set the JVM parameter JDBCWorkspaceDataContainer.statistics.enabled to true. The corresponding CSV file is StatisticsJDBCStorageConnection-${creation-timestamp}.csv; for more details about how the CSV files are managed, please refer to the section dedicated to the statistics manager.
The format of each column header is ${method-alias}-${metric-alias}. The metric aliases are described in the statistics manager section.
Table 15.1. Method Alias
global | This is the alias for all the methods. |
getItemDataById | This is the alias for the method getItemData(String identifier). |
getItemDataByNodeDataNQPathEntry | This is the alias for the method getItemData(NodeData parentData, QPathEntry name). |
getChildNodesData | This is the alias for the method getChildNodesData(NodeData parent). |
getChildNodesCount | This is the alias for the method getChildNodesCount(NodeData parent). |
getChildPropertiesData | This is the alias for the method getChildPropertiesData(NodeData parent). |
listChildPropertiesData | This is the alias for the method listChildPropertiesData(NodeData parent). |
getReferencesData | This is the alias for the method getReferencesData(String nodeIdentifier). |
commit | This is the alias for the method commit(). |
addNodeData | This is the alias for the method add(NodeData data). |
addPropertyData | This is the alias for the method add(PropertyData data). |
updateNodeData | This is the alias for the method update(NodeData data). |
updatePropertyData | This is the alias for the method update(PropertyData data). |
deleteNodeData | This is the alias for the method delete(NodeData data). |
deletePropertyData | This is the alias for the method delete(PropertyData data). |
renameNodeData | This is the alias for the method rename(NodeData data). |
rollback | This is the alias for the method rollback(). |
isOpened | This is the alias for the method isOpened(). |
close | This is the alias for the method close(). |
In order to know exactly how your application uses eXo JCR, it can be useful to register all the JCR API accesses in order to easily create real-life test scenarios based on pure JCR calls, and also to tune your eXo JCR to better fit your requirements.
In order to allow you to specify in the configuration which parts of eXo JCR need to be monitored, without applying any changes to your code and/or building anything, we chose to rely on the Load-time Weaving proposed by AspectJ.
To enable this feature, you will have to add the following jar files to your classpath:
exo.jcr.component.statistics-X.Y.Z.jar corresponding to your eXo JCR version, which you can get from the JBoss maven repository http://repository.jboss.com/maven2/org/exoplatform/jcr/exo.jcr.component.statistics.
aspectjrt-1.6.8.jar, which you can get from the main maven repository http://repo2.maven.org/maven2/org/aspectj/aspectjrt.
You will also need to get aspectjweaver-1.6.8.jar from the main maven repository http://repo2.maven.org/maven2/org/aspectj/aspectjweaver. At this stage, to enable the statistics on the JCR API accesses, you will need to add the JVM parameter -javaagent:${pathto}/aspectjweaver-1.6.8.jar to your command line. For more details, please refer to http://www.eclipse.org/aspectj/doc/released/devguide/ltw-configuration.html.
By default, the configuration will collect statistics on all the methods of the internal interfaces org.exoplatform.services.jcr.core.ExtendedSession and org.exoplatform.services.jcr.core.ExtendedNode, and of the JCR API interface javax.jcr.Property. To add and/or remove interfaces to monitor, you have two configuration files to change, both bundled into the jar exo.jcr.component.statistics-X.Y.Z.jar: conf/configuration.xml and META-INF/aop.xml.
The file content below is the content of conf/configuration.xml; modify it to add and/or remove the fully qualified names of the interfaces to monitor in the list of parameter values of the init param called targetInterfaces.
<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.exoplaform.org/xml/ns/kernel_1_0.xsd http://www.exoplaform.org/xml/ns/kernel_1_0.xsd"
               xmlns="http://www.exoplaform.org/xml/ns/kernel_1_0.xsd">
  <component>
    <type>org.exoplatform.services.jcr.statistics.JCRAPIAspectConfig</type>
    <init-params>
      <values-param>
        <name>targetInterfaces</name>
        <value>org.exoplatform.services.jcr.core.ExtendedSession</value>
        <value>org.exoplatform.services.jcr.core.ExtendedNode</value>
        <value>javax.jcr.Property</value>
      </values-param>
    </init-params>
  </component>
</configuration>
The file content below is the content of META-INF/aop.xml; modify it to add and/or remove the fully qualified names of the interfaces to monitor in the expression filter of the pointcut called JCRAPIPointcut. As you can see below, by default only JCR API calls from the exoplatform packages are taken into account; don't hesitate to modify this filter to add your own package names.
<aspectj>
  <aspects>
    <concrete-aspect name="org.exoplatform.services.jcr.statistics.JCRAPIAspectImpl"
                     extends="org.exoplatform.services.jcr.statistics.JCRAPIAspect">
      <pointcut name="JCRAPIPointcut"
                expression="(target(org.exoplatform.services.jcr.core.ExtendedSession) || target(org.exoplatform.services.jcr.core.ExtendedNode) || target(javax.jcr.Property)) && call(public * *(..))" />
    </concrete-aspect>
  </aspects>
  <weaver options="-XnoInline">
    <include within="org.exoplatform..*" />
  </weaver>
</aspectj>
The corresponding CSV files are of type Statistics${interface-name}-${creation-timestamp}.csv; for more details about how the CSV files are managed, please refer to the section dedicated to the statistics manager.
The format of each column header is ${method-alias}-${metric-alias}. The method alias will be of the form ${method-name}(list of parameter types separated by ; to be compatible with the CSV format). The metric aliases are described in the statistics manager section.
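As an illustration of this naming scheme, the alias assembly can be sketched in plain Java. The class and method names below are hypothetical, not part of the eXo JCR API:

```java
// Illustrative sketch of how a CSV column header could be assembled
// from a method alias and a metric alias (names here are hypothetical).
public class CsvHeaderSketch {

    // Build a method alias: parameter types are joined with ';'
    // instead of ',' so the alias stays CSV-friendly.
    static String methodAlias(String methodName, String... parameterTypes) {
        return methodName + "(" + String.join(";", parameterTypes) + ")";
    }

    // Column header format: ${method-alias}-${metric-alias}
    static String columnHeader(String methodAlias, String metricAlias) {
        return methodAlias + "-" + metricAlias;
    }

    public static void main(String[] args) {
        String alias = methodAlias("getItemData", "NodeData", "QPathEntry");
        System.out.println(columnHeader(alias, "Avg"));
        // prints: getItemData(NodeData;QPathEntry)-Avg
    }
}
```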
Please note that this feature will affect the performance of eXo JCR, so it must be used with caution.
The statistics manager manages all the statistics provided by eXo JCR. It is responsible for printing the data into the CSV files, and also for exposing the statistics through JMX and/or REST.
The statistics manager creates a CSV file for each category of statistics that it manages; the format of those files is Statistics${category-name}-${creation-timestamp}.csv. Those files are created in the user directory if possible, otherwise in the temporary directory. The format of those files is CSV (i.e. Comma-Separated Values). One new line is added regularly (every 5 seconds by default) and one last line is added at JVM exit. Each line is composed of the 5 figures described below for each method, and globally for all the methods.
Table 15.2. Metric Alias
Min | The minimum time spent in the method. |
Max | The maximum time spent in the method. |
Total | The total amount of time spent in the method. |
Avg | The average time spent in the method. |
Times | The total number of times the method has been called. |
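As an illustration, the five metrics can be produced by a simple accumulator. The class below is a hypothetical sketch of what is recorded per method, not the statistics manager's actual implementation:

```java
// Hypothetical accumulator for the five metrics kept per method:
// Min, Max, Total, Avg and Times.
public class MethodStats {
    private long min = Long.MAX_VALUE;
    private long max = 0;
    private long total = 0;
    private long times = 0;

    // Record one call that took 'elapsed' time units.
    public void record(long elapsed) {
        min = Math.min(min, elapsed);
        max = Math.max(max, elapsed);
        total += elapsed;
        times++;
    }

    public long getMin()   { return times == 0 ? 0 : min; }
    public long getMax()   { return max; }
    public long getTotal() { return total; }
    public long getTimes() { return times; }
    public double getAvg() { return times == 0 ? 0 : (double) total / times; }

    public static void main(String[] args) {
        MethodStats stats = new MethodStats();
        stats.record(10);
        stats.record(30);
        stats.record(20);
        System.out.println(stats.getMin() + " " + stats.getMax() + " "
            + stats.getTotal() + " " + stats.getAvg() + " " + stats.getTimes());
        // prints: 10 30 60 20.0 3
    }
}
```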
You can disable the persistence of the statistics by setting the JVM parameter JCRStatisticsManager.persistence.enabled to false; by default it is set to true. You can also define the period of time between each record (i.e. each line of data in the file) by setting the JVM parameter JCRStatisticsManager.persistence.timeout to your expected value expressed in milliseconds; by default it is set to 5000.
You can also access the statistics through JMX; the available methods are the following:
Table 15.3. JMX Methods
getMin | Gives the minimum time spent in the method corresponding to the given category name and statistics name. The expected arguments are the name of the statistics category (e.g. JDBCStorageConnection) and the name of the expected method, or global for the global value. |
getMax | Gives the maximum time spent in the method corresponding to the given category name and statistics name. The expected arguments are the name of the statistics category (e.g. JDBCStorageConnection) and the name of the expected method, or global for the global value. |
getTotal | Gives the total amount of time spent in the method corresponding to the given category name and statistics name. The expected arguments are the name of the statistics category (e.g. JDBCStorageConnection) and the name of the expected method, or global for the global value. |
getAvg | Gives the average time spent in the method corresponding to the given category name and statistics name. The expected arguments are the name of the statistics category (e.g. JDBCStorageConnection) and the name of the expected method, or global for the global value. |
getTimes | Gives the total number of times the method has been called, corresponding to the given category name and statistics name. The expected arguments are the name of the statistics category (e.g. JDBCStorageConnection) and the name of the expected method, or global for the global value. |
reset | Resets the statistics for the given category name and statistics name. The expected arguments are the name of the statistics category (e.g. JDBCStorageConnection) and the name of the expected method, or global for the global value. |
resetAll | Resets all the statistics for the given category name. The expected argument is the name of the statistics category (e.g. JDBCStorageConnection). |
The full name of the related MBean is exo:service=statistic, view=jcr.
eXo Kernel is the basis of all eXo platform products and modules. Any component available in eXo Platform is managed by the eXo Container, our micro container responsible for gluing the services through dependency injection.
Therefore, each product is composed of a set of services and plugins registered to the container and configured by XML configuration files.
The Kernel module also contains a set of very low level services.
To be effective, the namespace URI http://www.exoplaform.org/xml/ns/kernel_1_1.xsd must be the target namespace of the XML configuration file.
<xsd:schema targetNamespace="http://www.exoplaform.org/xml/ns/kernel_1_1.xsd"
            xmlns="http://www.exoplaform.org/xml/ns/kernel_1_1.xsd"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            elementFormDefault="qualified"
            attributeFormDefault="unqualified"
            version="1.0">
  ...
</xsd:schema>
eXo Portal uses PicoContainer, which implements the Inversion of Control (IoC) design pattern. All eXo containers inherit from a PicoContainer. There are mainly two eXo containers in use, each of which can provide one or several services. Each container service is delivered in a JAR file. This JAR file may contain a default configuration. The use of default configurations is recommended, and most services provide one.
When a Pico Container searches for services and their configurations, each configurable service may be reconfigured to override default values or to set additional parameters. If a service is configured in two or more places, the configuration override mechanism is used.
The container performs the following steps to retrieve the eXo Container configuration, depending on the container type.
The container is initialized by looking into different locations. This container is used by portal applications. Configurations are overloaded in the following lookup sequence:
Services' default RootContainer configurations from JAR files /conf/configuration.xml
External RootContainer configuration, if found at $AS_HOME/exo-conf/configuration.xml
Services' default PortalContainer configurations from JAR files /conf/portal/configuration.xml
Web applications' configurations from WAR files /WEB-INF/conf/configuration.xml
External configuration for the services of a named portal, if found at $AS_HOME/exo-conf/portal/$PORTAL_NAME/configuration.xml
The container is initialized by looking into different locations. This container is used by non-portal applications. Configurations are overloaded in the following lookup sequence:
Services' default RootContainer configurations from JAR files /conf/configuration.xml
External RootContainer configuration, if found at $AS_HOME/exo-conf/configuration.xml
Services' default StandaloneContainer configurations from JAR files /conf/portal/configuration.xml
Web applications' configurations from WAR files /WEB-INF/conf/configuration.xml
Then, depending on the StandaloneContainer configuration URL initialization:
If the configuration URL was initialized to be added to the services' defaults, as below:
// add configuration to the default services configurations from JARs/WARs
StandaloneContainer.addConfigurationURL(containerConf);
the configuration from the added URL containerConf will override only the services configured in that file.
If the configuration URL was not initialized at all, it will be looked up at $AS_HOME/exo-configuration.xml. If $AS_HOME/exo-configuration.xml doesn't exist, the container will try to find it at $AS_HOME/exo-conf/exo-configuration.xml, and if it is still not found and the StandaloneContainer instance was obtained with the dedicated configuration ClassLoader, the container will try to retrieve the resource conf/exo-configuration.xml within the given ClassLoader.
$AS_HOME - the application server home directory, or the user.dir JVM system property value in the case of a standalone Java application.
$PORTAL_NAME - portal web application name.
The external configuration location can be overridden with the System property exo.conf.dir. If the property exists, its value will be used as the path to the eXo configuration directory, i.e. as an alternative to $AS_HOME/exo-conf. E.g. put the property on the command line: java -Dexo.conf.dir=/path/to/exo/conf. In this particular use case, you don't need any prefix to import other files. For instance, if your configuration file is $AS_HOME/exo-conf/portal/PORTAL_NAME/configuration.xml and you want to import the configuration file $AS_HOME/exo-conf/portal/PORTAL_NAME/mySubConfDir/myConfig.xml, you can do it by adding <import>mySubConfDir/myConfig.xml</import> to your configuration file.
The name of the configuration folder, which is "exo-conf" by default, can be changed thanks to the System property exo.conf.dir.name.
Under the JBoss application server, exo-conf will be looked up in the directory described by the JBoss System property jboss.server.config.url. If the property is not found or empty, $AS_HOME/exo-conf will be used instead.
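A minimal sketch of this resolution order, assuming the rules above; the class and method names are illustrative, not the kernel's actual code:

```java
// Illustrative resolution of the eXo configuration directory,
// following the rules described above (not the kernel's actual code).
public class ConfDirSketch {

    // exoConfDir:     value of the exo.conf.dir system property, may be null
    // exoConfDirName: value of exo.conf.dir.name, defaults to "exo-conf"
    static String resolveConfDir(String asHome, String exoConfDir, String exoConfDirName) {
        if (exoConfDir != null) {
            return exoConfDir; // explicit override wins
        }
        String dirName = (exoConfDirName != null) ? exoConfDirName : "exo-conf";
        return asHome + "/" + dirName;
    }

    public static void main(String[] args) {
        System.out.println(resolveConfDir("/opt/as", null, null));
        // prints: /opt/as/exo-conf
        System.out.println(resolveConfDir("/opt/as", "/etc/exo", null));
        // prints: /etc/exo
    }
}
```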
The search looks for a configuration file in each JAR/WAR available from the classpath using the current thread context classloader. During the search these configurations are added to a set. If the service was configured previously and the current JAR contains a new configuration of that service the latest (from the current JAR/WAR) will replace the previous one. The last one will be applied to the service during the services start phase.
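The replace-on-rediscovery behavior can be sketched with a map keyed by service type. This is an illustration of the "last configuration wins" rule, not the actual configuration manager:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the "last configuration wins" rule:
// each configuration found later in the classpath scan replaces
// any earlier configuration registered for the same service.
public class ConfigOverrideSketch {
    private final Map<String, String> configsByService = new LinkedHashMap<>();

    // Called once per configuration file discovered during the scan.
    public void addConfiguration(String serviceType, String sourceUrl) {
        // put() replaces any previous entry for the same service type
        configsByService.put(serviceType, sourceUrl);
    }

    public String effectiveSource(String serviceType) {
        return configsByService.get(serviceType);
    }

    public static void main(String[] args) {
        ConfigOverrideSketch scan = new ConfigOverrideSketch();
        scan.addConfiguration("CacheService", "jar:a.jar!/conf/configuration.xml");
        scan.addConfiguration("CacheService", "jar:b.jar!/conf/configuration.xml");
        System.out.println(scan.effectiveSource("CacheService"));
        // prints: jar:b.jar!/conf/configuration.xml
    }
}
```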
Take care to have no dependencies between configurations from JAR files (/conf/portal/configuration.xml and /conf/configuration.xml) since we have no way to know in advance the loading order of those configurations. In other words, if you want to overload some configuration located in the file /conf/portal/configuration.xml of a given JAR file, you must not do it from the file /conf/portal/configuration.xml of another JAR file but from another configuration file loaded after configurations from JAR files /conf/portal/configuration.xml.
After processing all the configurations available in the system, the container will initialize and start each service in the order of the dependency injection (DI).
The user/developer should be careful when configuring the same service in different configuration files. It's recommended to configure a service in its own JAR only. Or, in case of a portal configuration, strictly reconfigure the services in portal WAR files or in an external configuration.
There are services that can (or should) be configured more than once. This depends on the business logic of the service. A service may initialize the same resource (shared with other services) or may add a particular object to a set of objects (also shared with other services). In the first case it is critical which configuration comes last, i.e. whose configuration will be used. In the second case it does not matter which comes first and which comes last (if the parameter objects are independent).
In case of problems with service configuration it's important to know from which JAR/WAR it comes. For that purpose the JVM system property org.exoplatform.container.configuration.debug can be used.
java -Dorg.exoplatform.container.configuration.debug ...
If the property is enabled the container configuration manager will log the configuration adding process at INFO level.
......
Add configuration jar:file:/D:/Projects/eXo/dev/exo-working/exo-tomcat/lib/exo.kernel.container-trunk.jar!/conf/portal/configuration.xml
Add configuration jar:file:/D:/Projects/eXo/dev/exo-working/exo-tomcat/lib/exo.kernel.component.cache-trunk.jar!/conf/portal/configuration.xml
Add configuration jndi:/localhost/portal/WEB-INF/conf/configuration.xml
  import jndi:/localhost/portal/WEB-INF/conf/common/common-configuration.xml
  import jndi:/localhost/portal/WEB-INF/conf/database/database-configuration.xml
  import jndi:/localhost/portal/WEB-INF/conf/ecm/jcr-component-plugins-configuration.xml
  import jndi:/localhost/portal/WEB-INF/conf/jcr/jcr-configuration.xml
......
The effective configuration of the StandaloneContainer, RootContainer and/or PortalContainer can be known thanks to the method getConfigurationXML() that is exposed through JMX at the container's level. This method will give you the effective configuration in XML format that has been really interpreted by the kernel. This could be helpful to understand how a given component or plugin has been initialized.
Since eXo JCR 1.12, we have added a set of new features designed to extend portal applications such as GateIn.
A ServletContextListener called org.exoplatform.container.web.PortalContainerConfigOwner has been added in order to notify the application that a given web application provides some configuration to the portal container; this configuration file is the file WEB-INF/conf/configuration.xml available in the web application itself.
If your war file contains some configuration to add to the PortalContainer, simply add the following lines in your web.xml file.
<?xml version="1.0" encoding="ISO-8859-1" ?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
                         "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
  ...
  <!-- ================================================================== -->
  <!--                             LISTENER                               -->
  <!-- ================================================================== -->
  <listener>
    <listener-class>org.exoplatform.container.web.PortalContainerConfigOwner</listener-class>
  </listener>
  ...
</web-app>
A ServletContextListener called org.exoplatform.container.web.PortalContainerCreator has been added in order to create the current portal containers that have been registered. We assume that all the web applications have already been loaded before PortalContainerCreator.contextInitialized is called.
In GateIn, the PortalContainerCreator is already managed by the starter war/ear file.
Now we can define precisely a portal container and its dependencies and settings thanks to the PortalContainerDefinition, which currently contains the name of the portal container, the name of the rest context, the name of the realm, the web application dependencies ordered by loading priority (i.e. the first dependency must be loaded first and so on...) and the settings.
To be able to define a PortalContainerDefinition, we need to ensure first of all that a PortalContainerConfig has been defined at the RootContainer level; see below an example:
<component>
  <!-- The full qualified name of the PortalContainerConfig -->
  <type>org.exoplatform.container.definition.PortalContainerConfig</type>
  <init-params>
    <!-- The name of the default portal container -->
    <value-param>
      <name>default.portal.container</name>
      <value>myPortal</value>
    </value-param>
    <!-- The name of the default rest ServletContext -->
    <value-param>
      <name>default.rest.context</name>
      <value>myRest</value>
    </value-param>
    <!-- The name of the default realm -->
    <value-param>
      <name>default.realm.name</name>
      <value>my-exo-domain</value>
    </value-param>
    <!-- The default portal container definition -->
    <!-- It can be used to avoid duplicating configuration -->
    <object-param>
      <name>default.portal.definition</name>
      <object type="org.exoplatform.container.definition.PortalContainerDefinition">
        <!-- All the dependencies of the portal container ordered by loading priority -->
        <field name="dependencies">
          <collection type="java.util.ArrayList">
            <value>
              <string>foo</string>
            </value>
            <value>
              <string>foo2</string>
            </value>
            <value>
              <string>foo3</string>
            </value>
          </collection>
        </field>
        <!-- A map of settings tied to the default portal container -->
        <field name="settings">
          <map type="java.util.HashMap">
            <entry>
              <key>
                <string>foo5</string>
              </key>
              <value>
                <string>value</string>
              </value>
            </entry>
            <entry>
              <key>
                <string>string</string>
              </key>
              <value>
                <string>value0</string>
              </value>
            </entry>
            <entry>
              <key>
                <string>int</string>
              </key>
              <value>
                <int>100</int>
              </value>
            </entry>
          </map>
        </field>
        <!-- The path to the external properties file -->
        <field name="externalSettingsPath">
          <string>classpath:/org/exoplatform/container/definition/default-settings.properties</string>
        </field>
      </object>
    </object-param>
  </init-params>
</component>
Table 17.1. Descriptions of the fields of PortalContainerConfig
default.portal.container | The name of the default portal container. This field is optional. |
default.rest.context | The name of the default rest ServletContext. This field is optional. |
default.realm.name | The name of the default realm. This field is optional. |
default.portal.definition | The definition of the default portal container. This field is optional. The expected type is org.exoplatform.container.definition.PortalContainerDefinition, which is described below. All the parameters defined in this default PortalContainerDefinition will be the default values. |
A new PortalContainerDefinition can be defined at the RootContainer level thanks to an external plugin; see below an example:
<external-component-plugins>
  <!-- The full qualified name of the PortalContainerConfig -->
  <target-component>org.exoplatform.container.definition.PortalContainerConfig</target-component>
  <component-plugin>
    <!-- The name of the plugin -->
    <name>Add PortalContainer Definitions</name>
    <!-- The name of the method to call on the PortalContainerConfig
         in order to register the PortalContainerDefinitions -->
    <set-method>registerPlugin</set-method>
    <!-- The full qualified name of the PortalContainerDefinitionPlugin -->
    <type>org.exoplatform.container.definition.PortalContainerDefinitionPlugin</type>
    <init-params>
      <object-param>
        <name>portal</name>
        <object type="org.exoplatform.container.definition.PortalContainerDefinition">
          <!-- The name of the portal container -->
          <field name="name">
            <string>myPortal</string>
          </field>
          <!-- The name of the context name of the rest web application -->
          <field name="restContextName">
            <string>myRest</string>
          </field>
          <!-- The name of the realm -->
          <field name="realmName">
            <string>my-domain</string>
          </field>
          <!-- All the dependencies of the portal container ordered by loading priority -->
          <field name="dependencies">
            <collection type="java.util.ArrayList">
              <value>
                <string>foo</string>
              </value>
              <value>
                <string>foo2</string>
              </value>
              <value>
                <string>foo3</string>
              </value>
            </collection>
          </field>
          <!-- A map of settings tied to the portal container -->
          <field name="settings">
            <map type="java.util.HashMap">
              <entry>
                <key>
                  <string>foo</string>
                </key>
                <value>
                  <string>value</string>
                </value>
              </entry>
              <entry>
                <key>
                  <string>int</string>
                </key>
                <value>
                  <int>10</int>
                </value>
              </entry>
              <entry>
                <key>
                  <string>long</string>
                </key>
                <value>
                  <long>10</long>
                </value>
              </entry>
              <entry>
                <key>
                  <string>double</string>
                </key>
                <value>
                  <double>10</double>
                </value>
              </entry>
              <entry>
                <key>
                  <string>boolean</string>
                </key>
                <value>
                  <boolean>true</boolean>
                </value>
              </entry>
            </map>
          </field>
          <!-- The path to the external properties file -->
          <field name="externalSettingsPath">
            <string>classpath:/org/exoplatform/container/definition/settings.properties</string>
          </field>
        </object>
      </object-param>
    </init-params>
  </component-plugin>
</external-component-plugins>
Table 17.2. Descriptions of the fields of PortalContainerDefinition when it is used to define a new portal container
name | The name of the portal container. This field is mandatory. |
restContextName | The name of the context name of the rest web application. This field is optional. The default value will be the value defined at the PortalContainerConfig level. |
realmName | The name of the realm. This field is optional. The default value will be the value defined at the PortalContainerConfig level. |
dependencies | All the dependencies of the portal container ordered by loading priority. This field is optional. The default value will be the value defined at the PortalContainerConfig level. The dependencies are in fact the list of the context names of the web applications on which the portal container depends. The dependency order is really crucial since it will be interpreted the same way by several components of the platform. All those components will consider the 1st element in the list less important than the second element, and so on. It is currently used to: |
settings | A java.util.Map of internal parameters that we would like to tie to the portal container. Those parameters could have any type of value. This field is optional. If some internal settings are defined at the PortalContainerConfig level, the two maps of settings will be merged. If a setting with the same name is defined in both maps, the value defined at the PortalContainerDefinition level is kept. |
externalSettingsPath | The path of the external properties file to load as default settings for the portal container. This field is optional. If some external settings are defined at the PortalContainerConfig level, the two maps of settings will be merged. If a setting with the same name is defined in both maps, the value defined at the PortalContainerDefinition level is kept. The external properties files can be either of type "properties" or of type "xml". The path will be interpreted as follows: |
Table 17.3. Descriptions of the fields of PortalContainerDefinition when it is used to define the default portal container
name | The name of the portal container. This field is optional. The default portal name will be: |
restContextName | The name of the context name of the rest web application. This field is optional. The default value will be: |
realmName | The name of the realm. This field is optional. The default value will be: |
dependencies | All the dependencies of the portal container ordered by loading priority. This field is optional. If this field has a non-empty value, it will be the default list of dependencies. |
settings | A java.util.Map of internal parameters that we would like to tie to the default portal container. Those parameters could have any type of value. This field is optional. |
externalSettingsPath | The path of the external properties file to load as default settings for the default portal container. This field is optional. The external properties files can be either of type "properties" or of type "xml". The path will be interpreted as follows: |
Internal and external settings are both optional, but if we give a non-empty value for both, the application will merge the settings. If the same setting name exists in both settings, we apply the following rules:
If the value of the external setting is null, we ignore the value.
If the value of the external setting is not null and the value of the internal setting is null, the final value will be the external setting value, which is of type String.
If both values are not null, we have to convert the external setting value into the target type, which is the type of the internal setting value, thanks to the static method valueOf(String); the following sub-rules are then applied:
If the method cannot be found, the final value will be the external setting value, which is of type String.
If the method can be found and the external setting value is an empty String, we ignore the external setting value.
If the method can be found and the external setting value is not an empty String but the method call fails, we ignore the external setting value.
If the method can be found, the external setting value is not an empty String and the method call succeeds, the final value will be the external setting value converted to the type of the internal setting value.
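These rules can be sketched with reflection as follows. This is an illustrative reading of the merge algorithm, not the kernel's actual code:

```java
import java.lang.reflect.Method;

// Illustrative implementation of the merge rules: convert an external
// setting (always a String) into the type of the internal setting
// via the static valueOf(String) method of that type.
public class SettingMergeSketch {

    static Object merge(Object internalValue, String externalValue) {
        if (externalValue == null) return internalValue;          // rule 1: ignore null external value
        if (internalValue == null) return externalValue;          // rule 2: keep the String
        try {
            Method valueOf = internalValue.getClass()
                .getMethod("valueOf", String.class);
            if (externalValue.isEmpty()) return internalValue;    // empty value: ignore external
            return valueOf.invoke(null, externalValue);           // converted value wins
        } catch (NoSuchMethodException e) {
            return externalValue;                                 // no valueOf: keep the String
        } catch (Exception e) {
            return internalValue;                                 // call failed: ignore external
        }
    }

    public static void main(String[] args) {
        System.out.println(merge(Integer.valueOf(10), "100")); // prints: 100
        System.out.println(merge(Integer.valueOf(10), "abc")); // conversion fails, prints: 10
        System.out.println(merge(null, "plain"));              // prints: plain
    }
}
```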
We can inject the value of the portal container settings into the portal container configuration files thanks to variables whose names start with "portal.container.", so to get the value of a setting called "foo", just use the syntax ${portal.container.foo}. You can also use internal variables, such as:
Table 17.4. Definition of the internal variables
portal.container.name | Gives the name of the current portal container. |
portal.container.rest | Gives the context name of the rest web application of the current portal container. |
portal.container.realm | Gives the realm name of the current portal container. |
You can find below an example of how to use the variables:
<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.exoplaform.org/xml/ns/kernel_1_1.xsd http://www.exoplaform.org/xml/ns/kernel_1_1.xsd"
               xmlns="http://www.exoplaform.org/xml/ns/kernel_1_1.xsd">
  <component>
    <type>org.exoplatform.container.TestPortalContainer$MyComponent</type>
    <init-params>
      <!-- The name of the portal container -->
      <value-param>
        <name>portal</name>
        <value>${portal.container.name}</value>
      </value-param>
      <!-- The name of the rest ServletContext -->
      <value-param>
        <name>rest</name>
        <value>${portal.container.rest}</value>
      </value-param>
      <!-- The name of the realm -->
      <value-param>
        <name>realm</name>
        <value>${portal.container.realm}</value>
      </value-param>
      <value-param>
        <name>foo</name>
        <value>${portal.container.foo}</value>
      </value-param>
      <value-param>
        <name>before foo after</name>
        <value>before ${portal.container.foo} after</value>
      </value-param>
    </init-params>
  </component>
</configuration>
In the properties file corresponding to the external settings, you can reuse variables previously defined (in the external settings or in the internal settings) to create a new variable. In this case the prefix "portal.container." is not needed, see an example below:
my-var1=value 1
my-var2=value 2
complex-value=${my-var1}-${my-var2}
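Such variable reuse amounts to a simple ${...} substitution pass, which can be sketched as follows; the resolver below is illustrative only, not the actual kernel implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative resolver for ${name} references between properties,
// mimicking how previously defined variables can be reused.
public class VariableResolverSketch {

    static String resolve(String value, Map<String, String> vars) {
        // Replace each known ${name} occurrence with its value.
        for (Map.Entry<String, String> var : vars.entrySet()) {
            value = value.replace("${" + var.getKey() + "}", var.getValue());
        }
        return value;
    }

    public static void main(String[] args) {
        Map<String, String> vars = new LinkedHashMap<>();
        vars.put("my-var1", "value 1");
        vars.put("my-var2", "value 2");
        System.out.println(resolve("${my-var1}-${my-var2}", vars));
        // prints: value 1-value 2
    }
}
```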
In the external and internal settings, you can also create variables based on the value of System parameters. The System parameters can either be defined at launch time or thanks to the PropertyConfigurator (see next section for more details). See an example below:
temp-dir=${java.io.tmpdir}${file.separator}my-temp
However, for the internal settings you can use System parameters only to define settings of type java.lang.String.
It can also be very useful to define a generic variable in the settings of the default portal container; the value of this variable will change according to the current portal container. See below an example:
my-generic-var=value of the portal container "${name}"
If this variable is defined at the default portal container level, the value of this variable for a portal container called "foo" will be value of the portal container "foo".
A new property configurator service has been developed to take care of configuring system properties from the inline kernel configuration or from specified property files.
The service is scoped at the root container level because it is used by all the services in the different portal containers in the application runtime.
The properties init param takes a set of property declarations used to configure system properties:
<component>
  <key>PropertyManagerConfigurator</key>
  <type>org.exoplatform.container.PropertyConfigurator</type>
  <init-params>
    <properties-param>
      <name>properties</name>
      <property name="foo" value="bar"/>
    </properties-param>
  </init-params>
</component>
The properties URL init param allows loading an external file by specifying its URL. Both the property and XML formats are supported; see the javadoc of the java.util.Properties class for more information. When a property file is loaded, the property declarations are processed in the order in which they appear in the file.
<component>
  <key>PropertyManagerConfigurator</key>
  <type>org.exoplatform.container.PropertyConfigurator</type>
  <init-params>
    <value-param>
      <name>properties.url</name>
      <value>classpath:configuration.properties</value>
    </value-param>
  </init-params>
</component>
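In spirit, such a configurator loads the property declarations and publishes them as System properties. A rough, hedged sketch of that behavior in plain Java (not the kernel code; note that java.util.Properties does not preserve declaration order, unlike the real loader):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

public class PropertyLoading {
    // Load property declarations and publish them as System properties,
    // mimicking (in simplified form) what a property configurator does.
    static void configure(Reader source) throws IOException {
        Properties props = new Properties();
        props.load(source);
        for (String name : props.stringPropertyNames()) {
            System.setProperty(name, props.getProperty(name));
        }
    }

    public static void main(String[] args) throws IOException {
        // In the real service the source would come from a URL such as
        // classpath:configuration.properties.
        configure(new StringReader("foo=bar\nmy-var=42\n"));
        System.out.println(System.getProperty("foo")); // bar
    }
}
```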
In the properties file corresponding to the external properties, you can reuse previously defined variables to create a new variable, see an example below:
my-var1=value 1
my-var2=value 2
complex-value=${my-var1}-${my-var2}
The kernel configuration is able to handle configuration profiles at runtime (as opposed to packaging time).
An active profile list is obtained during the boot of the root container. It is composed of the system property exo.profiles split on the "," delimiter, plus a server-specific profile value (tomcat for Tomcat, jboss for JBoss, etc.).
# runs GateIn on Tomcat with the profiles tomcat and foo
sh gatein.sh -Dexo.profiles=foo

# runs GateIn on JBoss with the profiles jboss, foo and bar
sh run.sh -Dexo.profiles=foo,bar
Profiles are configured in the configuration files of the eXo kernel.
Profile activation occurs at XML-to-configuration-object unmarshalling time. It is based on a "profile" attribute that is present on some of the XML elements of the configuration files. To enable this, the kernel configuration schema has been upgraded to kernel_1_1.xsd. The configuration is based on the following rules:
Any kernel element with no profiles attribute will create a configuration object
Any kernel element having a profiles attribute containing at least one of the active profiles will create a configuration object
Any kernel element having a profiles attribute matching none of the active profiles will not create a configuration object
Resolution of duplicates (such as two components with same type) is left up to the kernel
A configuration element is profiles-capable when it carries a profiles attribute.
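The activation rules above amount to a simple set-intersection test. A hedged sketch of that predicate (an illustration, not the kernel's unmarshaller):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ProfileActivation {
    // An element is active when it declares no profiles attribute, or when
    // at least one declared profile is among the active profiles.
    static boolean isActive(String profilesAttr, Set<String> activeProfiles) {
        if (profilesAttr == null || profilesAttr.trim().isEmpty()) {
            return true; // no profiles attribute: always creates the object
        }
        for (String p : profilesAttr.split(",")) {
            if (activeProfiles.contains(p.trim())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Active list as built from -Dexo.profiles=foo,bar plus the server profile.
        Set<String> active = new HashSet<>(Arrays.asList("foo", "bar", "jboss"));
        System.out.println(isActive(null, active));      // true
        System.out.println(isActive("foo,baz", active)); // true
        System.out.println(isActive("baz", active));     // false
    }
}
```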
The component element declares a component when activated. It will shadow any element with the same key declared before in the same configuration file:
<component>
  <key>Component</key>
  <type>Component</type>
</component>
<component profile="foo">
  <key>Component</key>
  <type>FooComponent</type>
</component>
The import element imports a referenced configuration file when activated:
<import>empty</import>
<import profile="foo">foo</import>
<import profile="bar">bar</import>
The init param element configures a parameter passed to the constructor of a component service:
<component>
  <key>Component</key>
  <type>ComponentImpl</type>
  <init-params>
    <value-param>
      <name>param</name>
      <value>empty</value>
    </value-param>
    <value-param profile="foo">
      <name>param</name>
      <value>foo</value>
    </value-param>
    <value-param profile="bar">
      <name>param</name>
      <value>bar</value>
    </value-param>
  </init-params>
</component>
The value collection element configures one of the values of a collection:
<object type="org.exoplatform.container.configuration.ConfigParam">
  <field name="role">
    <collection type="java.util.ArrayList">
      <value><string>manager</string></value>
      <value profile="foo"><string>foo_manager</string></value>
      <value profile="foo,bar"><string>foo_bar_manager</string></value>
    </collection>
  </field>
</object>
The field configuration element configures the field of an object:
<object-param>
  <name>test.configuration</name>
  <object type="org.exoplatform.container.configuration.ConfigParam">
    <field name="role">
      <collection type="java.util.ArrayList">
        <value><string>manager</string></value>
      </collection>
    </field>
    <field name="role" profile="foo,bar">
      <collection type="java.util.ArrayList">
        <value><string>foo_bar_manager</string></value>
      </collection>
    </field>
    <field name="role" profile="foo">
      <collection type="java.util.ArrayList">
        <value><string>foo_manager</string></value>
      </collection>
    </field>
  </object>
</object-param>
The component request life cycle is an interface that defines a contract for a component to be involved in a request:
public interface ComponentRequestLifecycle {
   /**
    * Start a request.
    * @param container the related container
    */
   void startRequest(ExoContainer container);

   /**
    * Ends a request.
    * @param container the related container
    */
   void endRequest(ExoContainer container);
}
The container passed is the container to which the component is related. This contract is often used to set up a thread-local-based context that will be demarcated by a request.
For instance, in the GateIn portal context, a component request life cycle is triggered for user requests. Another example is the initial data import in GateIn, which is demarcated by callbacks made to that interface.
The RequestLifeCycle class has several static methods that are used to schedule the component request life cycle of components. Its main responsibility is to perform scheduling while respecting the constraint that the request life cycle of a component is executed only once, even if it could be scheduled several times.
RequestLifeCycle.begin(component);
try {
   // Do something
} finally {
   RequestLifeCycle.end();
}
Scheduling a container triggers the component request life cycle of all the components that implement the ComponentRequestLifeCycle interface. If one of the components has already been scheduled before, then that component will not be scheduled again. When the local value is true, the components considered are those of the container; when it is false, the scheduler also looks at the components in the ancestor containers.
RequestLifeCycle.begin(container, local);
try {
   // Do something
} finally {
   RequestLifeCycle.end();
}
Each portal request triggers the life cycle of the associated portal container.
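The once-only constraint can be pictured with a thread-local set of already-started components. The sketch below only illustrates the contract described above; it is not the kernel's RequestLifeCycle implementation, and the real code would invoke startRequest/endRequest on the component:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class LifecycleSketch {
    private static final Object ALREADY_SCHEDULED = new Object();
    // Components whose startRequest has already been called on this thread.
    private static final ThreadLocal<Set<Object>> STARTED =
        ThreadLocal.withInitial(HashSet::new);
    // Begin/end nesting, so end() knows whether there is anything to undo.
    private static final ThreadLocal<Deque<Object>> STACK =
        ThreadLocal.withInitial(ArrayDeque::new);

    static boolean begin(Object component) {
        if (!STARTED.get().add(component)) {
            STACK.get().push(ALREADY_SCHEDULED); // scheduled before: skip
            return false;
        }
        STACK.get().push(component);
        // the real RequestLifeCycle would call component.startRequest(container)
        return true;
    }

    static void end() {
        Object component = STACK.get().pop();
        if (component != ALREADY_SCHEDULED) {
            STARTED.get().remove(component);
            // the real RequestLifeCycle would call component.endRequest(container)
        }
    }

    public static void main(String[] args) {
        Object cacheService = new Object();
        System.out.println(begin(cacheService)); // true: life cycle started
        System.out.println(begin(cacheService)); // false: not scheduled twice
        end(); // matches the inner begin: nothing to undo
        end(); // matches the outer begin: endRequest would run here
        System.out.println(begin(cacheService)); // true again in a new request
        end();
    }
}
```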
All applications on top of eXo JCR that need a cache can rely on an org.exoplatform.services.cache.ExoCache instance that is managed by the org.exoplatform.services.cache.CacheService. The main implementation of this service is org.exoplatform.services.cache.impl.CacheServiceImpl, which depends on org.exoplatform.services.cache.ExoCacheConfig in order to create new ExoCache instances. See below an example of org.exoplatform.services.cache.CacheService definition:
<component>
  <key>org.exoplatform.services.cache.CacheService</key>
  <jmx-name>cache:type=CacheService</jmx-name>
  <type>org.exoplatform.services.cache.impl.CacheServiceImpl</type>
  <init-params>
    <object-param>
      <name>cache.config.default</name>
      <description>The default cache configuration</description>
      <object type="org.exoplatform.services.cache.ExoCacheConfig">
        <field name="name"><string>default</string></field>
        <field name="maxSize"><int>300</int></field>
        <field name="liveTime"><long>600</long></field>
        <field name="distributed"><boolean>false</boolean></field>
        <field name="implementation"><string>org.exoplatform.services.cache.concurrent.ConcurrentFIFOExoCache</string></field>
      </object>
    </object-param>
  </init-params>
</component>
The ExoCacheConfig whose name is default will be the default configuration of all the ExoCache instances that don't have a dedicated configuration.
See below an example of how to define a new ExoCacheConfig thanks to an external-component-plugin:
<external-component-plugins>
  <target-component>org.exoplatform.services.cache.CacheService</target-component>
  <component-plugin>
    <name>addExoCacheConfig</name>
    <set-method>addExoCacheConfig</set-method>
    <type>org.exoplatform.services.cache.ExoCacheConfigPlugin</type>
    <description>Configures the cache for query service</description>
    <init-params>
      <object-param>
        <name>cache.config.wcm.composer</name>
        <description>The default cache configuration</description>
        <object type="org.exoplatform.services.cache.ExoCacheConfig">
          <field name="name"><string>wcm.composer</string></field>
          <field name="maxSize"><int>300</int></field>
          <field name="liveTime"><long>600</long></field>
          <field name="distributed"><boolean>false</boolean></field>
          <field name="implementation"><string>org.exoplatform.services.cache.concurrent.ConcurrentFIFOExoCache</string></field>
        </object>
      </object-param>
    </init-params>
  </component-plugin>
</external-component-plugins>
Table 18.1. Descriptions of the fields of ExoCacheConfig
name | The name of the cache. This field is mandatory since it will be used to retrieve the ExoCacheConfig corresponding to a given cache name. |
label | The label of the cache. This field is optional. It is mainly used to indicate the purpose of the cache. |
maxSize | The maximum numbers of elements in cache. This field is mandatory. |
liveTime | The amount of time (in seconds) an element is not written or read before it is evicted. This field is mandatory. |
implementation | The fully qualified name of the cache implementation to use. This field is optional and is only used for simple cache implementations. The default and main implementation is org.exoplatform.services.cache.concurrent.ConcurrentFIFOExoCache, which only works for local caches with FIFO as the eviction policy. For more complex implementations, see the next sections. |
distributed | Indicates if the cache is distributed. This field is optional. This field is used for backward compatibility. |
replicated | Indicates if the cache is replicated. This field is optional. This field is deprecated. |
logEnabled | Indicates if the log is enabled. This field is optional. This field is used for backward compatibility. |
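The semantics of maxSize and liveTime can be pictured with a toy FIFO cache. This sketch merely mimics, in simplified form, what a FIFO cache with those two fields does; it is not the ConcurrentFIFOExoCache implementation (small values are used for the demo):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy cache illustrating maxSize (FIFO eviction) and liveTime (expiry on read).
public class FifoCacheSketch<K, V> {
    private final int maxSize;
    private final long liveTimeMillis;

    private static final class Entry<V> {
        final V value;
        final long insertedAt = System.currentTimeMillis();
        Entry(V value) { this.value = value; }
    }

    private final Map<K, Entry<V>> map;

    public FifoCacheSketch(int maxSize, long liveTimeSeconds) {
        this.maxSize = maxSize;
        this.liveTimeMillis = liveTimeSeconds * 1000L;
        // insertion-ordered map: the eldest entry is the first inserted (FIFO)
        this.map = new LinkedHashMap<K, Entry<V>>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Entry<V>> eldest) {
                return size() > FifoCacheSketch.this.maxSize;
            }
        };
    }

    public void put(K key, V value) { map.put(key, new Entry<>(value)); }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() - e.insertedAt > liveTimeMillis) {
            map.remove(key); // expired: liveTime exceeded
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) {
        // maxSize=2 (instead of 300) to make the FIFO eviction visible
        FifoCacheSketch<String, String> cache = new FifoCacheSketch<>(2, 600);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3"); // "a" is the eldest entry and gets evicted
        System.out.println(cache.get("a")); // null
        System.out.println(cache.get("c")); // 3
    }
}
```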
In the previous versions of eXo kernel, it was quite complex to implement your own ExoCache because the service was not open enough. Since kernel 2.0.8, it is possible to easily integrate your favorite cache provider in eXo products.
You just need to implement your own ExoCacheFactory and register it in an eXo container, as described below:
package org.exoplatform.services.cache;
...
public interface ExoCacheFactory {

   /**
    * Creates a new instance of {@link org.exoplatform.services.cache.ExoCache}
    * @param config the cache to create
    * @return the new instance of {@link org.exoplatform.services.cache.ExoCache}
    * @exception ExoCacheInitException if an exception happens while initializing the cache
    */
   public ExoCache createCache(ExoCacheConfig config) throws ExoCacheInitException;
}
As you can see, there is only one method to implement, which can be seen as a converter from an ExoCacheConfig to an instance of ExoCache. Once you have created your own implementation, you can simply register your factory by adding a file conf/portal/configuration.xml with content of the following type:
<configuration>
  <component>
    <key>org.exoplatform.services.cache.ExoCacheFactory</key>
    <type>org.exoplatform.tutorial.MyExoCacheFactoryImpl</type>
    ...
  </component>
</configuration>
When you add the eXo library to your classpath, the eXo service container will use the default configuration provided in the library itself, but of course you can still redefine the configuration if you wish, as you can do with any component.
The default configuration of the factory is:
<configuration>
  <component>
    <key>org.exoplatform.services.cache.ExoCacheFactory</key>
    <type>org.exoplatform.services.cache.impl.jboss.ExoCacheFactoryImpl</type>
    <init-params>
      <value-param>
        <name>cache.config.template</name>
        <value>jar:/conf/portal/cache-configuration-template.xml</value>
      </value-param>
    </init-params>
  </component>
</configuration>
As you can see, the factory requires one single parameter, cache.config.template, which allows you to define the location of the default configuration template of your JBoss Cache. In the default configuration, we ask the eXo container to get the file shipped in the jar at /conf/portal/cache-configuration-template.xml.
The default configuration template aims to be the skeleton from which any type of JBoss Cache instance will be created, thus it must be very generic.
The default configuration template provided with the jar aims to work with any application server, but if you intend to use JBoss AS, you should redefine it in your custom configuration to better fit your application server.
If for a given reason, you need to use a specific configuration for a cache, you can register one thanks to an "external plugin", see an example below:
<configuration>
  ...
  <external-component-plugins>
    <target-component>org.exoplatform.services.cache.ExoCacheFactory</target-component>
    <component-plugin>
      <name>addConfig</name>
      <set-method>addConfig</set-method>
      <type>org.exoplatform.services.cache.impl.jboss.ExoCacheFactoryConfigPlugin</type>
      <description>add Custom Configurations</description>
      <init-params>
        <value-param>
          <name>myCustomCache</name>
          <value>jar:/conf/portal/custom-cache-configuration.xml</value>
        </value-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>
  ...
</configuration>
In the example above, we call the method addConfig(ExoCacheFactoryConfigPlugin plugin) on the current implementation of ExoCacheFactory, which is actually the JBoss Cache implementation.
In the init-params block, you can define a set of value-param blocks; for each value-param, we expect the name of the cache that needs a specific configuration as the name, and the location of your custom configuration as the value.
In this example, we indicate to the factory that the cache myCustomCache should use the configuration available at jar:/conf/portal/custom-cache-configuration.xml.
The factory for JBoss Cache delegates the cache creation to an ExoCacheCreator, which is defined as below:
package org.exoplatform.services.cache.impl.jboss;
...
public interface ExoCacheCreator {

   /**
    * Creates an eXo cache according to the given configuration {@link org.exoplatform.services.cache.ExoCacheConfig}
    * @param config the configuration of the cache to apply
    * @param cache the cache to initialize
    * @exception ExoCacheInitException if an exception happens while initializing the cache
    */
   public ExoCache create(ExoCacheConfig config, Cache<Serializable, Object> cache) throws ExoCacheInitException;

   /**
    * Returns the type of {@link org.exoplatform.services.cache.ExoCacheConfig} expected by the creator
    * @return the expected type
    */
   public Class<? extends ExoCacheConfig> getExpectedConfigType();

   /**
    * Returns the name of the implementation expected by the creator. This is mainly used to be backward compatible
    * @return the implementation name expected by the creator
    */
   public String getExpectedImplementation();
}
The ExoCacheCreator allows you to define any kind of JBoss Cache instance that you would like to have. It has been designed to give you the ability to have your own type of configuration and to always be backward compatible.
In an ExoCacheCreator, you need to implement 3 methods:
create - this method is used to create a new ExoCache from the ExoCacheConfig and a JBoss Cache instance.
getExpectedConfigType - this method is used to indicate to the factory the subtype of ExoCacheConfig supported by the creator.
getExpectedImplementation - this method is used to indicate to the factory the value of the implementation field of ExoCacheConfig that is supported by the creator. This is used for backward compatibility; in other words, you can still configure your cache with the super class ExoCacheConfig.
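The selection logic implied by getExpectedConfigType and getExpectedImplementation can be sketched with stand-in types. All class names below are illustrative stand-ins, not the real eXo classes, and the dispatch shown is a plausible reading of the contract, not the factory's actual code:

```java
import java.util.ArrayList;
import java.util.List;

public class CreatorDispatch {
    // Stand-ins for the real eXo types, just to show the dispatch logic.
    static class CacheConfig { String implementation; }
    static class LruCacheConfig extends CacheConfig { }

    interface Creator {
        Class<? extends CacheConfig> getExpectedConfigType();
        String getExpectedImplementation();
    }

    static class LruCreator implements Creator {
        public Class<? extends CacheConfig> getExpectedConfigType() { return LruCacheConfig.class; }
        public String getExpectedImplementation() { return "LRU"; }
    }

    // A dedicated config subtype wins; otherwise fall back to matching the
    // implementation field, for backward compatibility with old configs.
    static Creator select(List<Creator> creators, CacheConfig config) {
        for (Creator c : creators) {
            if (c.getExpectedConfigType() != CacheConfig.class
                    && c.getExpectedConfigType().isInstance(config)) {
                return c;
            }
        }
        for (Creator c : creators) {
            if (c.getExpectedImplementation().equals(config.implementation)) {
                return c;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Creator> creators = new ArrayList<>();
        creators.add(new LruCreator());

        CacheConfig newStyle = new LruCacheConfig(); // dedicated subtype
        CacheConfig oldStyle = new CacheConfig();    // super class config
        oldStyle.implementation = "LRU";             // old-style selection

        System.out.println(select(creators, newStyle) != null); // true
        System.out.println(select(creators, oldStyle) != null); // true
    }
}
```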
You can register any cache creator you want thanks to an "external plugin", see an example below:
<external-component-plugins>
  <target-component>org.exoplatform.services.cache.ExoCacheFactory</target-component>
  <component-plugin>
    <name>addCreator</name>
    <set-method>addCreator</set-method>
    <type>org.exoplatform.services.cache.impl.jboss.ExoCacheCreatorPlugin</type>
    <description>add Exo Cache Creator</description>
    <init-params>
      <object-param>
        <name>LRU</name>
        <description>The lru cache creator</description>
        <object type="org.exoplatform.services.cache.impl.jboss.lru.LRUExoCacheCreator">
          <field name="defaultTimeToLive"><long>1500</long></field>
          <field name="defaultMaxAge"><long>2000</long></field>
        </object>
      </object-param>
    </init-params>
  </component-plugin>
</external-component-plugins>
In the example above, we call the method addCreator(ExoCacheCreatorPlugin plugin) on the current implementation of ExoCacheFactory, which is actually the JBoss Cache implementation.
In the init-params block, you can define a set of object-param blocks; for each object-param, we expect any object definition of type ExoCacheCreator.
In this example, we register the cache creator related to the eviction policy LRU.
By default, no cache creators are defined, so you need to define them yourself by adding them in your configuration files.
...
<object-param>
  <name>LRU</name>
  <description>The lru cache creator</description>
  <object type="org.exoplatform.services.cache.impl.jboss.lru.LRUExoCacheCreator">
    <field name="defaultTimeToLive"><long>${my-value}</long></field>
    <field name="defaultMaxAge"><long>${my-value}</long></field>
  </object>
</object-param>
...
Table 18.2. Fields description
defaultTimeToLive | This is the default value of the field timeToLive described in the section dedicated to this cache type. This value is only used when we define a cache of this type with the old configuration. |
defaultMaxAge | This is the default value of the field maxAge described in the section dedicated to this cache type. This value is only used when we define a cache of this type with the old configuration. |
...
<object-param>
  <name>FIFO</name>
  <description>The fifo cache creator</description>
  <object type="org.exoplatform.services.cache.impl.jboss.fifo.FIFOExoCacheCreator"></object>
</object-param>
...
...
<object-param>
  <name>MRU</name>
  <description>The mru cache creator</description>
  <object type="org.exoplatform.services.cache.impl.jboss.mru.MRUExoCacheCreator"></object>
</object-param>
...
...
<object-param>
  <name>LFU</name>
  <description>The lfu cache creator</description>
  <object type="org.exoplatform.services.cache.impl.jboss.lfu.LFUExoCacheCreator">
    <field name="defaultMinNodes"><int>${my-value}</int></field>
  </object>
</object-param>
...
Table 18.3. Fields description
defaultMinNodes | This is the default value of the field minNodes described in the section dedicated to this cache type. This value is only used when we define a cache of this type with the old configuration. |
...
<object-param>
  <name>EA</name>
  <description>The ea cache creator</description>
  <object type="org.exoplatform.services.cache.impl.jboss.ea.EAExoCacheCreator">
    <field name="defaultExpirationTimeout"><long>2000</long></field>
  </object>
</object-param>
...
Table 18.4. Fields description
defaultExpirationTimeout | This is the default value of the field expirationTimeout described in the section dedicated to this cache type. This value is only used when we define a cache of this type with the old configuration. |
You have 2 ways to define a cache:
At CacheService initialization
With an "external plugin"
...
<component>
  <key>org.exoplatform.services.cache.CacheService</key>
  <type>org.exoplatform.services.cache.impl.CacheServiceImpl</type>
  <init-params>
    ...
    <object-param>
      <name>fifocache</name>
      <description>The default cache configuration</description>
      <object type="org.exoplatform.services.cache.ExoCacheConfig">
        <field name="name"><string>fifocache</string></field>
        <field name="maxSize"><int>${my-value}</int></field>
        <field name="liveTime"><long>${my-value}</long></field>
        <field name="distributed"><boolean>false</boolean></field>
        <field name="implementation"><string>org.exoplatform.services.cache.FIFOExoCache</string></field>
      </object>
    </object-param>
    ...
  </init-params>
</component>
...
In this example, we define a new cache called fifocache.
...
<external-component-plugins>
  <target-component>org.exoplatform.services.cache.CacheService</target-component>
  <component-plugin>
    <name>addExoCacheConfig</name>
    <set-method>addExoCacheConfig</set-method>
    <type>org.exoplatform.services.cache.ExoCacheConfigPlugin</type>
    <description>add ExoCache configuration component plugin</description>
    <init-params>
      ...
      <object-param>
        <name>fifoCache</name>
        <description>The fifo cache configuration</description>
        <object type="org.exoplatform.services.cache.ExoCacheConfig">
          <field name="name"><string>fifocache</string></field>
          <field name="maxSize"><int>${my-value}</int></field>
          <field name="liveTime"><long>${my-value}</long></field>
          <field name="distributed"><boolean>false</boolean></field>
          <field name="implementation"><string>org.exoplatform.services.cache.FIFOExoCache</string></field>
        </object>
      </object-param>
      ...
    </init-params>
  </component-plugin>
</external-component-plugins>
...
In this example, we define a new cache called fifocache, which is in fact the same cache as in the previous example but defined in a different manner.
Actually, if you use a custom configuration for your cache as described in a previous section, we will use the cache mode defined in your configuration file.
In case you decide to use the default configuration template, we use the field distributed of your ExoCacheConfig to decide. In other words, if the value of this field is false (the default value), the cache will be a local cache; otherwise it will use the cache mode defined in your default configuration template, which should be distributed.
New configuration
...
<object-param>
  <name>lru</name>
  <description>The lru cache configuration</description>
  <object type="org.exoplatform.services.cache.impl.jboss.lru.LRUExoCacheConfig">
    <field name="name"><string>lru</string></field>
    <field name="maxNodes"><int>${my-value}</int></field>
    <field name="minTimeToLive"><long>${my-value}</long></field>
    <field name="maxAge"><long>${my-value}</long></field>
    <field name="timeToLive"><long>${my-value}</long></field>
  </object>
</object-param>
...
Table 18.5. Fields description
maxNodes | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
minTimeToLive | The minimum amount of time (in milliseconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
maxAge | Lifespan of a node (in milliseconds) regardless of idle time before the node is swept away. 0 denotes immediate expiry, -1 denotes no limit. |
timeToLive | The amount of time a node is not written to or read (in milliseconds) before the node is swept away. 0 denotes immediate expiry, -1 denotes no limit. |
Old configuration
...
<object-param>
  <name>lru-with-old-config</name>
  <description>The lru cache configuration</description>
  <object type="org.exoplatform.services.cache.ExoCacheConfig">
    <field name="name"><string>lru-with-old-config</string></field>
    <field name="maxSize"><int>${my-value}</int></field>
    <field name="liveTime"><long>${my-value}</long></field>
    <field name="implementation"><string>LRU</string></field>
  </object>
</object-param>
...
Table 18.6. Fields description
maxSize | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
liveTime | The minimum amount of time (in seconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
For the fields maxAge and timeToLive needed by JBoss Cache, we will use the default values provided by the creator.
New configuration
...
<object-param>
  <name>fifo</name>
  <description>The fifo cache configuration</description>
  <object type="org.exoplatform.services.cache.impl.jboss.fifo.FIFOExoCacheConfig">
    <field name="name"><string>fifo</string></field>
    <field name="maxNodes"><int>${my-value}</int></field>
    <field name="minTimeToLive"><long>${my-value}</long></field>
  </object>
</object-param>
...
Table 18.7. Fields description
maxNodes | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
minTimeToLive | The minimum amount of time (in milliseconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
Old configuration
...
<object-param>
  <name>fifo-with-old-config</name>
  <description>The fifo cache configuration</description>
  <object type="org.exoplatform.services.cache.ExoCacheConfig">
    <field name="name"><string>fifo-with-old-config</string></field>
    <field name="maxSize"><int>${my-value}</int></field>
    <field name="liveTime"><long>${my-value}</long></field>
    <field name="implementation"><string>FIFO</string></field>
  </object>
</object-param>
...
Table 18.8. Fields description
maxSize | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
liveTime | The minimum amount of time (in seconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
New configuration
...
<object-param>
  <name>mru</name>
  <description>The mru cache configuration</description>
  <object type="org.exoplatform.services.cache.impl.jboss.mru.MRUExoCacheConfig">
    <field name="name"><string>mru</string></field>
    <field name="maxNodes"><int>${my-value}</int></field>
    <field name="minTimeToLive"><long>${my-value}</long></field>
  </object>
</object-param>
...
Table 18.9. Fields description
maxNodes | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
minTimeToLive | The minimum amount of time (in milliseconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
Old configuration
...
<object-param>
  <name>mru-with-old-config</name>
  <description>The mru cache configuration</description>
  <object type="org.exoplatform.services.cache.ExoCacheConfig">
    <field name="name"><string>mru-with-old-config</string></field>
    <field name="maxSize"><int>${my-value}</int></field>
    <field name="liveTime"><long>${my-value}</long></field>
    <field name="implementation"><string>MRU</string></field>
  </object>
</object-param>
...
Table 18.10. Fields description
maxSize | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
liveTime | The minimum amount of time (in seconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
New configuration
...
<object-param>
  <name>lfu</name>
  <description>The lfu cache configuration</description>
  <object type="org.exoplatform.services.cache.impl.jboss.lfu.LFUExoCacheConfig">
    <field name="name"><string>lfu</string></field>
    <field name="maxNodes"><int>${my-value}</int></field>
    <field name="minNodes"><int>${my-value}</int></field>
    <field name="minTimeToLive"><long>${my-value}</long></field>
  </object>
</object-param>
...
Table 18.11. Fields description
maxNodes | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
minNodes | This is the minimum number of nodes allowed in this region. This value determines what the eviction queue should prune down to per pass. e.g. If minNodes is 10 and the cache grows to 100 nodes, the cache is pruned down to the 10 most frequently used nodes when the eviction timer makes a pass through the eviction algorithm. |
minTimeToLive | The minimum amount of time (in milliseconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
Old configuration
...
<object-param>
  <name>lfu-with-old-config</name>
  <description>The lfu cache configuration</description>
  <object type="org.exoplatform.services.cache.ExoCacheConfig">
    <field name="name"><string>lfu-with-old-config</string></field>
    <field name="maxSize"><int>${my-value}</int></field>
    <field name="liveTime"><long>${my-value}</long></field>
    <field name="implementation"><string>LFU</string></field>
  </object>
</object-param>
...
Table 18.12. Fields description
maxSize | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
liveTime | The minimum amount of time (in seconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
For the fields minNodes and timeToLive needed by JBoss Cache, we will use the default values provided by the creator.
New configuration
...
<object-param>
  <name>ea</name>
  <description>The ea cache configuration</description>
  <object type="org.exoplatform.services.cache.impl.jboss.ea.EAExoCacheConfig">
    <field name="name"><string>ea</string></field>
    <field name="maxNodes"><int>${my-value}</int></field>
    <field name="minTimeToLive"><long>${my-value}</long></field>
    <field name="expirationTimeout"><long>${my-value}</long></field>
  </object>
</object-param>
...
Table 18.13. Fields description
maxNodes | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
minTimeToLive | The minimum amount of time (in milliseconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
expirationTimeout | This is the timeout after which the cache entry must be evicted. |
Old configuration
...
<object-param>
  <name>ea-with-old-config</name>
  <description>The ea cache configuration</description>
  <object type="org.exoplatform.services.cache.ExoCacheConfig">
    <field name="name"><string>ea-with-old-config</string></field>
    <field name="maxSize"><int>${my-value}</int></field>
    <field name="liveTime"><long>${my-value}</long></field>
    <field name="implementation"><string>EA</string></field>
  </object>
</object-param>
...
Table 18.14. Fields description
maxSize | This is the maximum number of nodes allowed in this region. 0 denotes immediate expiry, -1 denotes no limit. |
liveTime | The minimum amount of time (in seconds) a node must be allowed to live after being accessed before it is allowed to be considered for eviction. 0 denotes that this feature is disabled, which is the default value. |
For the field expirationTimeout needed by JBoss Cache, we will use the default value provided by the creator.
The TransactionService provides access to the XA TransactionManager and UserTransaction (see the JTA specification for details).
Table 19.1. List methods
getTransactionManager() | Get the TransactionManager in use |
getUserTransaction() | Get the UserTransaction on the TransactionManager |
getDefaultTimeout() | Return the default timeout |
setTransactionTimeout(int seconds) | Set the timeout in seconds |
enlistResource(ExoResource xares) | Enlist an XA resource in the TransactionManager |
delistResource(ExoResource xares) | Delist an XA resource from the TransactionManager |
createXid() | Create a unique XA transaction identifier |
We need to configure JNDI environment properties and Reference bindings with the eXo container's standard mechanism.
The Naming service covers:
Configuring the current Naming Context Factory, implemented as an ExoContainer component: org.exoplatform.services.naming.InitialContextInitializer.
Binding objects (References) to the current Context using the org.exoplatform.services.naming.BindReferencePlugin component plugin.
Make sure you understand the Java Naming and Directory Interface (JNDI) concepts before using this service.
At start time, the Context Initializer (org.exoplatform.services.naming.InitialContextInitializer) traverses all initial parameters (those that concern the Naming Context) configured in default-properties and mandatory-properties (see Configuration examples) and:
for default-properties, it checks whether the property is already set as a System property (System.getProperty(name)) and sets it only if it is not found. Using these properties is recommended with a third-party Naming service provider.
for mandatory-properties, it sets the property without checking.
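The property-handling rules above can be sketched in plain Java (a hypothetical helper for illustration, not the actual InitialContextInitializer code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ContextPropertiesDemo {
    // Mirrors the described behavior: default-properties only fill in
    // missing System properties, mandatory-properties always overwrite.
    static void apply(Map<String, String> defaults, Map<String, String> mandatory) {
        for (Map.Entry<String, String> e : defaults.entrySet()) {
            if (System.getProperty(e.getKey()) == null) {
                System.setProperty(e.getKey(), e.getValue());
            }
        }
        for (Map.Entry<String, String> e : mandatory.entrySet()) {
            System.setProperty(e.getKey(), e.getValue());
        }
    }

    public static void main(String[] args) {
        // simulate a property already set by the environment
        System.setProperty("java.naming.factory.initial", "com.example.PresetFactory");

        Map<String, String> defaults = new LinkedHashMap<String, String>();
        defaults.put("java.naming.factory.initial",
            "org.exoplatform.services.naming.SimpleContextFactory");

        Map<String, String> mandatory = new LinkedHashMap<String, String>();
        mandatory.put("java.naming.provider.url", "rmi://localhost:9999");

        apply(defaults, mandatory);

        // the preset default is kept, the mandatory value is forced
        System.out.println(System.getProperty("java.naming.factory.initial"));
        System.out.println(System.getProperty("java.naming.provider.url"));
    }
}
```

Here the preset factory survives because it was already a System property, while the mandatory provider URL is set unconditionally.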
Standard JNDI properties:
java.naming.factory.initial
java.naming.provider.url
and others (see JNDI docs)
Another responsibility of the Context Initializer (org.exoplatform.services.naming.InitialContextInitializer) is binding preconfigured references to the naming context. For this purpose it uses the standard eXo component plugin mechanism, in particular the org.exoplatform.services.naming.BindReferencePlugin component plugin. The configuration of this plugin includes three mandatory value parameters:
bind-name - the name of the binding reference
class-name - the type of the binding reference
factory - the object factory type
There is also a ref-addresses properties parameter with a set of the reference's properties (see Configuration examples). The Context Initializer uses those parameters to bind the necessary reference automatically.
The InitialContextInitializer configuration example:
<component>
  <type>org.exoplatform.services.naming.InitialContextInitializer</type>
  <init-params>
    <properties-param>
      <name>default-properties</name>
      <description>Default initial context properties</description>
      <property name="java.naming.factory.initial" value="org.exoplatform.services.naming.SimpleContextFactory"/>
    </properties-param>
    <properties-param>
      <name>mandatory-properties</name>
      <description>Mandatory initial context properties</description>
      <property name="java.naming.provider.url" value="rmi://localhost:9999"/>
    </properties-param>
  </init-params>
</component>
The BindReferencePlugin component plugin configuration example (for a JDBC datasource):
<component-plugins>
  <component-plugin>
    <name>bind.datasource</name>
    <set-method>addPlugin</set-method>
    <type>org.exoplatform.services.naming.BindReferencePlugin</type>
    <init-params>
      <value-param>
        <name>bind-name</name>
        <value>jdbcjcr</value>
      </value-param>
      <value-param>
        <name>class-name</name>
        <value>javax.sql.DataSource</value>
      </value-param>
      <value-param>
        <name>factory</name>
        <value>org.apache.commons.dbcp.BasicDataSourceFactory</value>
      </value-param>
      <properties-param>
        <name>ref-addresses</name>
        <description>ref-addresses</description>
        <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
        <property name="url" value="jdbc:hsqldb:file:target/temp/data/portal"/>
        <property name="username" value="sa"/>
        <property name="password" value=""/>
      </properties-param>
    </init-params>
  </component-plugin>
</component-plugins>
SimpleContextFactory is created for testing purposes only; do not use it in production.
In a J2EE environment, use the Naming Factory objects provided with the Application Server.
InitialContextInitializer also provides a feature for binding references at runtime. References bound at runtime are persisted and automatically rebound on the next system start. The Java temp directory is used to persist the references in the bind-references.xml file.
The service provides a method for binding references:
public void bind(String bindName, String className, String factory, String factoryLocation, Map<String, String> refAddr) throws NamingException, FileNotFoundException, XMLStreamException;
bindName - the name of the binding
className - the fully-qualified name of the class of the object to which this Reference refers
factory - the name of the factory class for creating an instance of the object to which this Reference refers
factoryLocation - the location of the factory class
refAddr - the object's properties map
Example of usage:
// obtain InitialContextInitializer instance from ExoContainer (e.g. PortalContainer)
InitialContextInitializer initContext =
    (InitialContextInitializer) container.getComponentInstanceOfType(InitialContextInitializer.class);

Map<String, String> refAddr = new HashMap<String, String>();
refAddr.put("driverClassName", "oracle.jdbc.OracleDriver");
refAddr.put("url", "jdbc:oracle:thin:@oraclehost:1521:orcl");
refAddr.put("username", "exouser");
refAddr.put("password", "exopassword");

initContext.bind("jdbcexo", "javax.sql.DataSource",
    "org.apache.commons.dbcp.BasicDataSourceFactory", null, refAddr);

// try to get the just-bound DataSource
DataSource ds = (DataSource) new InitialContext().lookup("jdbcexo");
In order to accommodate the different target runtimes where it can be deployed, eXo is capable of leveraging several logging systems. eXo lets you choose the underlying logging engine to use and even configure that engine (as a quick alternative to doing it directly in your runtime environment).
The currently supported logging engines are :
Apache Log4J
JDK's logging
Apache Commons logging (which is itself a pluggable logging abstraction)
eXo lets you choose whatever logging engine you want, as this choice is generally influenced by the AS runtime or internal policy.
This is done through an eXo component called LogConfigurationInitializer (org.exoplatform.services.log.LogConfigurationInitializer) that reads init parameters and configures the logging system accordingly. The parameters:
configurator - an implementation of the LogConfigurator interface with one method, configure(), that accepts a list of properties (the third init parameter) to configure the underlying log system using the concrete mechanism. There are three configurators for the best-known log systems (commons, log4j, jdk).
properties - properties to configure the concrete log system (system properties for commons, log4j.properties for log4j, and logging.properties for jdk, respectively). Look at the configuration examples below.
logger - an implementation of the commons-logging Log interface. It is possible to use the commons wrappers, but to support the buffering required by the log portlet, three kinds of loggers were added: BufferedSimpleLog, BufferedLog4JLogger and BufferedJdk14Logger (they contain a BufferedLog and extend the SimpleLog, Log4JLogger and Jdk14Logger commons-logging wrappers respectively).
Log4J is a very popular and flexible logging system. It is a good option for JBoss.
<component>
  <type>org.exoplatform.services.log.LogConfigurationInitializer</type>
  <init-params>
    <value-param>
      <name>logger</name>
      <value>org.exoplatform.services.log.impl.BufferedLog4JLogger</value>
    </value-param>
    <value-param>
      <name>configurator</name>
      <value>org.exoplatform.services.log.impl.Log4JConfigurator</value>
    </value-param>
    <properties-param>
      <name>properties</name>
      <description>Log4J properties</description>
      <property name="log4j.rootLogger" value="DEBUG, stdout, file"/>
      <property name="log4j.appender.stdout" value="org.apache.log4j.ConsoleAppender"/>
      <property name="log4j.appender.stdout.layout" value="org.apache.log4j.PatternLayout"/>
      <property name="log4j.appender.stdout.layout.ConversionPattern" value="%d{dd.MM.yyyy HH:mm:ss} %c{1}: %m (%F, line %L) %n"/>
      <property name="log4j.appender.file" value="org.apache.log4j.FileAppender"/>
      <property name="log4j.appender.file.File" value="jcr.log"/>
      <property name="log4j.appender.file.layout" value="org.apache.log4j.PatternLayout"/>
      <property name="log4j.appender.file.layout.ConversionPattern" value="%d{dd.MM.yyyy HH:mm:ss} %m (%F, line %L) %n"/>
    </properties-param>
  </init-params>
</component>
You can set the logger level for a class or a group of classes by setting the following property:
<property name="log4j.category.{component or class name}" value="DEBUG"/>
For example, to log all debug messages for the class org.exoplatform.services.jcr.impl.core.SessionDataManager, which lies in the exo.jcr.component.core component:
<property name="log4j.category.exo.jcr.component.core.SessionDataManager" value="DEBUG"/>
Or to log all debug messages for all classes in the exo.jcr.component.core component:
<property name="log4j.category.exo.jcr.component.core" value="DEBUG"/>
Or to log all messages for all kernel components:
<property name="log4j.category.exo.kernel" value="DEBUG"/>
JDK logging (aka JUL) is the builtin logging framework introduced in JDK 1.4. It is a good option for Tomcat AS.
Edit the LOG_OPTS variable in your eXo.sh or eXo.bat:
LOG_OPTS="-Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger"
Edit your logs-configuration.xml:
<component>
  <type>org.exoplatform.services.log.LogConfigurationInitializer</type>
  <init-params>
    <value-param>
      <name>logger</name>
      <value>org.exoplatform.services.log.impl.BufferedJdk14Logger</value>
    </value-param>
    <value-param>
      <name>configurator</name>
      <value>org.exoplatform.services.log.impl.Jdk14Configurator</value>
    </value-param>
    <properties-param>
      <name>properties</name>
      <description>jdk1.4 Logger properties</description>
      <property name="handlers" value="java.util.logging.ConsoleHandler"/>
      <property name=".level" value="FINE"/>
      <property name="java.util.logging.ConsoleHandler.level" value="FINE"/>
    </properties-param>
  </init-params>
</component>
SimpleLog is a minimal logging system distributed with Commons Logging. To be used when nothing else is available or when you seek simplicity.
<component>
  <type>org.exoplatform.services.log.LogConfigurationInitializer</type>
  <init-params>
    <value-param>
      <name>logger</name>
      <value>org.exoplatform.services.log.impl.BufferedSimpleLog</value>
    </value-param>
    <value-param>
      <name>configurator</name>
      <value>org.exoplatform.services.log.impl.SimpleLogConfigurator</value>
    </value-param>
    <properties-param>
      <name>properties</name>
      <description>SimpleLog properties</description>
      <property name="org.apache.commons.logging.simplelog.defaultlog" value="debug"/>
      <property name="org.apache.commons.logging.simplelog.showdatetime" value="true"/>
    </properties-param>
  </init-params>
</component>
If you use the log4j configuration, you can change the log configuration directly at runtime in JBOSS_HOME/server/default/conf/jboss-log4j.xml.
To enable debug logs:
<param name="Threshold" value="DEBUG"/>
To exclude messages from unnecessary classes (the server's internals), change the threshold of these classes to "FATAL".
If you see only ERROR-level logs while starting an EAR on JBoss (4.2.2), you have to remove log4j*.jar from your EAR and application.xml.
Table of Contents
The eXo Core is a set of common services that are used by eXo products and modules; it can also be used in business logic.
It includes Authentication and Security, Organization, Database, Logging, JNDI, LDAP, Document reader and other services. Find more on the eXo site.
The database creator DBCreator is responsible for executing a DDL script at runtime. A DDL script may contain templates for the database name, user name and password, which will be replaced by real values at execution time.
Three templates are supported:
${database}
for database name;
${username}
for user name;
${password}
for user's password;
The service provides a method to execute the script for new database creation. The database name passed as a parameter is substituted for the ${database} template in the DDL script. The method returns a DBConnectionInfo object (with all necessary information about the new database's connection) or throws a DBCreatorException if any error occurs.
public DBConnectionInfo createDatabase(String dbName) throws DBCreatorException;
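A minimal sketch of the template substitution described above (assumed behavior for illustration; the real DBCreator also opens connections and handles errors):

```java
public class TemplateDemo {
    // Replace the three supported templates in a DDL script with real values,
    // as DBCreator does before executing the script.
    static String resolve(String script, String db, String user, String pwd) {
        return script.replace("${database}", db)
                     .replace("${username}", user)
                     .replace("${password}", pwd);
    }

    public static void main(String[] args) {
        String ddl = "CREATE DATABASE ${database}; "
                   + "CREATE USER '${username}' IDENTIFIED BY '${password}';";
        // prints the script with testdb/testuser/testpwd substituted
        System.out.println(resolve(ddl, "testdb", "testuser", "testpwd"));
    }
}
```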
For MSSQL and Sybase servers, the connection uses autocommit mode set to true. This is because, after executing the "create database" command, the newly created database is not yet available to the "use" command, so you cannot create a new user inside the database within a single script otherwise.
The service's configuration:
<component>
  <key>org.exoplatform.services.database.creator.DBCreator</key>
  <type>org.exoplatform.services.database.creator.DBCreator</type>
  <init-params>
    <properties-param>
      <name>db-connection</name>
      <description>database connection properties</description>
      <property name="driverClassName" value="com.mysql.jdbc.Driver" />
      <property name="url" value="jdbc:mysql://localhost/" />
      <property name="username" value="root" />
      <property name="password" value="admin" />
    </properties-param>
    <properties-param>
      <name>db-creation</name>
      <description>database creation properties</description>
      <property name="scriptPath" value="script.sql" />
      <property name="username" value="testuser" />
      <property name="password" value="testpwd" />
    </properties-param>
  </init-params>
</component>
The db-connection properties section contains the parameters needed to connect to the database server.
The db-creation properties section contains the parameters for database creation using the DDL script:
scriptPath - absolute path to the DDL script file;
username - user name for substituting the ${username} template in the DDL script;
password - user's password for substituting the ${password} template in the DDL script.
Specific db-connection properties sections for different databases:
MySQL:
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://localhost/" />
<property name="username" value="root" />
<property name="password" value="admin" />
PostgreSQL:
<property name="driverClassName" value="org.postgresql.Driver" />
<property name="url" value="jdbc:postgresql://localhost/" />
<property name="username" value="root" />
<property name="password" value="admin" />
MSSQL:
<property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
<property name="url" value="jdbc:sqlserver://localhost:1433;"/>
<property name="username" value="root"/>
<property name="password" value="admin"/>
Sybase:
<property name="driverClassName" value="com.sybase.jdbc3.jdbc.SybDriver" />
<property name="url" value="jdbc:sybase:Tds:localhost:5000/"/>
<property name="username" value="root"/>
<property name="password" value="admin"/>
Oracle:
<property name="driverClassName" value="oracle.jdbc.OracleDriver" />
<property name="url" value="jdbc:oracle:thin:@db2.exoua-int:1521:orclvm" />
<property name="username" value="root" />
<property name="password" value="admin" />
MySQL:
CREATE DATABASE ${database};
USE ${database};
CREATE USER '${username}' IDENTIFIED BY '${password}';
GRANT SELECT,INSERT,UPDATE,DELETE ON ${database}.* TO '${username}';
PostgreSQL:
CREATE USER ${username} WITH PASSWORD '${password}';
CREATE DATABASE ${database} WITH OWNER ${username};
MSSQL:
USE MASTER;
CREATE DATABASE ${database};
USE ${database};
CREATE LOGIN ${username} WITH PASSWORD = '${password}';
CREATE USER ${username} FOR LOGIN ${username};
Sybase:
sp_addlogin ${username}, ${password};
CREATE DATABASE ${database};
USE ${database};
sp_adduser ${username};
Oracle:
CREATE TABLESPACE "${database}"
  DATAFILE '/var/oracle_db/orclvm/${database}' SIZE 10M AUTOEXTEND ON NEXT 6M MAXSIZE UNLIMITED
  LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
CREATE TEMPORARY TABLESPACE "${database}.TEMP"
  TEMPFILE '/var/oracle_db/orclvm/${database}.temp' SIZE 5M AUTOEXTEND ON NEXT 5M MAXSIZE UNLIMITED
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
CREATE USER "${username}" PROFILE "DEFAULT" IDENTIFIED BY "${password}"
  DEFAULT TABLESPACE "${database}" TEMPORARY TABLESPACE "${database}.TEMP" ACCOUNT UNLOCK;
GRANT CREATE SEQUENCE TO "${username}";
GRANT CREATE TABLE TO "${username}";
GRANT CREATE TRIGGER TO "${username}";
GRANT UNLIMITED TABLESPACE TO "${username}";
GRANT "CONNECT" TO "${username}";
GRANT "RESOURCE" TO "${username}";
Table of Contents
The Web Services module allows eXo technology to integrate with external products and services.
It's an implementation of an API for RESTful Web Services with extensions, Servlet and cross-domain AJAX web frameworks, and a JavaBean-JSON transformer. Find the full documentation on the eXo site.
Representational State Transfer (REST) is a style of software architecture for distributed hypermedia systems such as the World Wide Web. The term was introduced in 2000 in the doctoral dissertation of Roy Fielding, one of the principal authors of the Hypertext Transfer Protocol (HTTP) specification, and has come into widespread use in the networking community.
REST strictly refers to a collection of network architecture principles that outline how resources are defined and addressed. The term is often used in a looser sense to describe any simple interface that transmits domain-specific data over HTTP without an additional messaging layer such as SOAP or session tracking via HTTP cookies.
The key abstraction of information in REST is a
resource
. Any information that can be named can be a
resource: a document or image, a temporal service (e.g. "today's weather
in Los Angeles"), a collection of other resources, a non-virtual object
(e.g. a person), and so on. In other words, any concept that might be the
target of an author's hypertext reference must fit within the definition
of a resource. A resource is a conceptual mapping to a set of entities,
not the entity that corresponds to the mapping at any particular point in
time.
REST uses a resource identifier
to identify the
particular resource involved in an interaction between components. REST
connectors provide a generic interface for accessing and manipulating the
value set of a resource, regardless of how the membership function is
defined or the type of software that is handling the request. URLs and URNs are examples of resource identifiers.
REST components perform actions with a resource by using a
representation
to capture the current or intended state
of that resource and transferring that representation between components.
A representation is a sequence of bytes, plus representation
metadata
to describe those bytes. Other commonly used but less
precise names for a representation include: document, file, and
HTTP message entity, instance, or variant
. A representation
consists of data, metadata describing the data, and, on occasion, metadata
to describe the metadata (usually for the purpose of verifying message
integrity). Metadata are in the form of name-value pairs, where the name
corresponds to a standard that defines the value's structure and
semantics. The data format of a representation is known as a media
type.
Table 25.1. REST Data Elements
Data Element | Modern Web Examples |
---|---|
resource | the intended conceptual target of a hypertext reference |
resource identifier | URL, URN |
representation | HTML document, JPEG image |
representation metadata | media type, last-modified time |
resource metadata | source link, alternates, vary |
control data | if-modified-since, cache-control |
REST uses various connector
types to encapsulate
the activities of accessing resources and transferring resource
representations. The connectors present an abstract interface for
component communication, enhancing simplicity by providing a complete
separation of concepts and hiding the underlying implementation of
resources and communication mechanisms.
Table 25.2. REST Connectors
Connector | Modern Web Examples |
---|---|
client | libwww, libwww-perl |
server | libwww, Apache API, NSAPI |
cache | browser cache, Akamai cache network |
resolver | bind (DNS lookup library) |
tunnel | SOCKS, SSL after HTTP CONNECT |
The primary connector types are client and server. The essential difference between the two is that a client initiates communication by making a request, whereas a server listens for connections and responds to requests in order to supply access to its services. A component may include both client and server connectors.
An important part of RESTful architecture is a well-defined interface to communicate, in particular it is a set of HTTP methods such as POST, GET, PUT and DELETE. These methods are often compared with the CREATE, READ, UPDATE, DELETE (CRUD) operations associated with database technologies. An analogy can also be made:
PUT is analogous to CREATE or PASTE OVER,
GET to READ or COPY,
POST to UPDATE or PASTE AFTER, and
DELETE to DELETE or CUT.
Note: RESTful architecture is not limited to those methods; a good example of an extension is the WebDAV protocol.
The CRUD
(Create, Read, Update and Delete) verbs
are designed to operate with atomic data within the context of a database
transaction. REST is designed around the atomic transfer of a more complex
state and can be viewed as a mechanism for transferring structured
information from one application to another.
HTTP separates the notions of a web server and a web browser. This
allows the implementation of each to vary from the other based on the
client/server principle. When used RESTfully, HTTP is
stateless
. Each message contains all the information
necessary to understand the request.
As a result, neither the client nor the server needs to remember any communication state between messages. Any state retained by the server must be modeled as a resource.
This article describes REST framework before version exo-ws-2.0.
This HOW-TO explains how to create your own REST based services. In this HOW-TO we will create a simple calculator, which can do basic operations with integers.
// ...
@URITemplate("/calculator/{item1}/{item2}/")
public class Calculator implements ResourceContainer {
}
Writing @URITemplate before the class definition gives you the possibility not to set it for each method. Furthermore, the class must implement the ResourceContainer interface. This interface doesn't have any methods; it is just an indication for the ResourceBinder. Add the code for adding two integers.
// ...
@URITemplate("/calculator/{item1}/{item2}/")
public class Calculator implements ResourceContainer {

  @QueryTemplate("operation=add")
  @OutputTransformer(StringOutputTransformer.class)
  @HTTPMethod("GET")
  public Response add(@URIParam("item1") Integer item1, @URIParam("item2") Integer item2) {
    StringBuffer sb = new StringBuffer();
    sb.append(item1).append(" + ").append(item2).append(" = ").append(item1 + item2);
    return Response.Builder.ok(sb.toString(), "text/plain").build();
  }
}
@QueryTemplate("operation=add") - only requests with the query parameter "operation=add" can reach this method;
@OutputTransformer(StringOutputTransformer.class) - the output transformer;
@HTTPMethod("GET") - the HTTP method "GET".
Write the code for other operations in the same way. Then the result should look like:
package org.exoplatform.services.rest.example;

import org.exoplatform.services.rest.HTTPMethod;
import org.exoplatform.services.rest.OutputTransformer;
import org.exoplatform.services.rest.QueryTemplate;
import org.exoplatform.services.rest.Response;
import org.exoplatform.services.rest.URIParam;
import org.exoplatform.services.rest.URITemplate;
import org.exoplatform.services.rest.container.ResourceContainer;
import org.exoplatform.services.rest.transformer.StringOutputTransformer;

@URITemplate("/calculator/{item1}/{item2}/")
@OutputTransformer(StringOutputTransformer.class)
public class Calculator implements ResourceContainer {

  @QueryTemplate("operation=add")
  @HTTPMethod("GET")
  public Response add(@URIParam("item1") Integer item1, @URIParam("item2") Integer item2) {
    StringBuffer sb = new StringBuffer();
    sb.append(item1).append(" + ").append(item2).append(" = ").append(item1 + item2);
    return Response.Builder.ok(sb.toString(), "text/plain").build();
  }

  @QueryTemplate("operation=subtract")
  @HTTPMethod("GET")
  public Response subtract(@URIParam("item1") Integer item1, @URIParam("item2") Integer item2) {
    StringBuffer sb = new StringBuffer();
    sb.append(item1).append(" - ").append(item2).append(" = ").append(item1 - item2);
    return Response.Builder.ok(sb.toString(), "text/plain").build();
  }

  @QueryTemplate("operation=multiply")
  @HTTPMethod("GET")
  public Response multiply(@URIParam("item1") Integer item1, @URIParam("item2") Integer item2) {
    StringBuffer sb = new StringBuffer();
    sb.append(item1).append(" * ").append(item2).append(" = ").append(item1 * item2);
    return Response.Builder.ok(sb.toString(), "text/plain").build();
  }

  @QueryTemplate("operation=divide")
  @HTTPMethod("GET")
  public Response divide(@URIParam("item1") Integer item1, @URIParam("item2") Integer item2) {
    StringBuffer sb = new StringBuffer();
    sb.append(item1).append(" / ").append(item2).append(" = ").append(item1 / item2);
    return Response.Builder.ok(sb.toString(), "text/plain").build();
  }
}
So we are done with the source code.
Create the directory conf/portal and create the file configuration.xml in it. Add the following code to this file:
<?xml version="1.0" encoding="ISO-8859-1"?>
<configuration>
  <component>
    <type>org.exoplatform.services.rest.example.Calculator</type>
  </component>
</configuration>
Now create the directory structure needed to build the source code with Maven.
Then create the file pom.xml using the following:
<project>
  <parent>
    <groupId>org.exoplatform.ws</groupId>
    <artifactId>config</artifactId>
    <version>trunk</version>
  </parent>
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.exoplatform.ws.rest</groupId>
  <artifactId>simple.calculator</artifactId>
  <packaging>jar</packaging>
  <version>trunk</version>
  <description>Simple REST service</description>
  <dependencies>
    <dependency>
      <groupId>org.exoplatform.ws.rest</groupId>
      <artifactId>exo.rest.core</artifactId>
      <version>trunk</version>
    </dependency>
  </dependencies>
</project>
Build this by executing the command:
andrew@ubu:~/workspace/calculator$ mvn clean install
We are done now. Copy the jar file from the target directory of the project into exo-tomcat, the server with all the stuff prepared for REST services. (You can download it here: http://forge.objectweb.org/project/download.php?group_id=151&file_id=9862)
So just put the jar file into the lib directory of the tomcat and restart it. In the console you should see this message:
[INFO] ResourceBinder - Bind new ResourceContainer: org.exoplatform.services.rest.example.Calculator@19846fd
This message indicates that our service was found, bound and is ready to work.
Open your browser and type the following URL: http://localhost:8080/rest/calculator/12/5/?operation=add and you should see the following page:
The service is working. This is a very simple example, but it should help developers use the REST framework.
Try the other URLs:
http://localhost:8080/rest/calculator/12/5/?operation=subtract - must give "12 - 5 = 7";
http://localhost:8080/rest/calculator/12/5/?operation=multiply - must give "12 * 5 = 60";
http://localhost:8080/rest/calculator/12/5/?operation=divide - must give "12 / 5 = 2" (we are working with integers!);
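The last result follows from Java integer arithmetic, which the calculator's divide method relies on:

```java
public class IntDivision {
    public static void main(String[] args) {
        // Java integer division truncates toward zero,
        // so the calculator returns 2 rather than 2.4
        int item1 = 12, item2 = 5;
        System.out.println(item1 + " / " + item2 + " = " + (item1 / item2)); // 12 / 5 = 2
    }
}
```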
The new implementation of the REST engine respects the JSR-311 specification.
A REST service must be represented by a Java class that has a @Path annotation; such a class is called a root resource. In some cases a class may not have @Path; for these classes, see the Sub-Resource Locators section. A root resource class may contain Resource Methods, Sub-Resource Methods and Sub-Resource Locators, and MUST contain at least one of them.
1. A Resource Method is a method that carries an HTTP method annotation, e.g. @GET, @POST, etc., and does not carry a @Path annotation.
2. A Sub-Resource Method is a method that carries both an HTTP method annotation and a @Path annotation.
3. A Sub-Resource Locator is a method that does not carry an HTTP method annotation but carries a @Path annotation. It can't process a request directly, but it can produce an object (a resource) that can process the request, or that has its own Sub-Resource Locator producing another resource.
Another important part is the @Consumes and @Produces annotations. These annotations can be used on classes and methods; an annotation on a method overrides the annotation on the class.
Method parameters annotated with @PathParam, @QueryParam, @FormParam, @HeaderParam, @MatrixParam, @CookieParam or @Context must be one of the following (these annotations are described in the JSR-311 specification and the jsr311-api javadoc):
1. A String
2. A primitive type, except char
3. A type with a constructor that takes a single String argument
4. A type with a static valueOf method that takes a single String argument
5. A List, Set or SortedSet of items described in the first four points
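A hypothetical sketch of how a runtime could apply conversion rules 1, 3 and 4 when turning a raw path or query segment into a method parameter (illustrative only, not the actual engine code):

```java
public class ParamConversionDemo {
    // String is used as-is; otherwise try a static valueOf(String);
    // otherwise fall back to a single-String constructor.
    static Object convert(Class<?> type, String raw) throws Exception {
        if (type == String.class) {
            return raw;
        }
        try {
            return type.getMethod("valueOf", String.class).invoke(null, raw);
        } catch (NoSuchMethodException e) {
            return type.getConstructor(String.class).newInstance(raw);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(convert(Integer.class, "42"));       // via Integer.valueOf
        System.out.println(convert(Boolean.class, "true"));     // via Boolean.valueOf
        System.out.println(convert(StringBuilder.class, "ab")); // via StringBuilder(String)
    }
}
```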
A non-annotated method parameter is called an Entity Parameter:
1. There must be no more than one.
2. A Sub-Resource Locator MUST NOT have an Entity Parameter.
3. If a Resource Method or Sub-Resource Method contains at least one parameter with a @FormParam annotation, then the Entity Parameter MUST be a MultivaluedMap<String, String> only.
Method return type: a resource method (Resource Method or Sub-Resource Method) may return void, javax.ws.rs.core.Response, javax.ws.rs.core.GenericEntity or another Java type. If the return type is void or the returned object is null, the 204 HTTP status will be set for the response; otherwise the 200 status will be used. With javax.ws.rs.core.Response, the response can be set by the service developer.
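The status rule above can be summarized in a small sketch (illustrative only; a real engine also honors an explicit javax.ws.rs.core.Response):

```java
public class StatusRuleDemo {
    // void/null entities map to 204 No Content,
    // any other returned object maps to 200 OK
    static int statusFor(Object entity) {
        return entity == null ? 204 : 200;
    }

    public static void main(String[] args) {
        System.out.println(statusFor(null));      // 204
        System.out.println(statusFor("payload")); // 200
    }
}
```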
Reading and writing entities is done via an EntityProvider. Each EntityProvider can require a precise preset media type for the entity; for example, JsonEntityProvider requires the 'Content-Type' and/or 'Accept' header set to application/json. The REST engine supports the following entity providers:
Table 27.1.
Provider class | Media Type | Java Type |
---|---|---|
ByteEntityProvider | */* | byte[] |
DataSourceEntityProvider | */* | javax.activation.DataSource |
DOMSourceEntityProvider | application/xml,application/xhtml+xml,text/xml | javax.xml.transform.dom.DOMSource |
FileEntityProvider | */* | java.io.File |
MultivaluedMapEntityProvider | application/x-www-form-urlencoded | MultivaluedMap<String, String> |
MultipartFormDataEntityProvider | multipart/* | java.util.Iterator<org.apache.commons.fileupload.FileItem> |
InputStreamEntityProvider | */* | java.io.InputStream |
ReaderEntityProvider | */* | java.io.Reader |
SAXSourceEntityProvider | application/xml,application/xhtml+xml,text/xml | javax.xml.transform.sax.SAXSource |
StreamSourceEntityProvider | application/xml,application/xhtml+xml,text/xml | javax.xml.transform.stream.StreamSource |
StringEntityProvider | */* | java.lang.String |
StreamOutputEntityProvider | */* | NOTE for writing data only |
JsonEntityProvider | application/json | Object (simple constructor, get/set methods) |
JAXBElementEntityProvider | application/xml,application/xhtml+xml,text/xml | javax.xml.bind.JAXBElement |
JAXBObjectEntityProvider | application/xml,application/xhtml+xml,text/xml | Object (simple constructor, get/set methods) |
EXAMPLE #1
Old code
package org.exoplatform.services.rest.example;

import org.exoplatform.services.rest.HTTPMethod;
import org.exoplatform.services.rest.OutputTransformer;
import org.exoplatform.services.rest.Response;
import org.exoplatform.services.rest.URIParam;
import org.exoplatform.services.rest.URITemplate;
import org.exoplatform.services.rest.container.ResourceContainer;
import org.exoplatform.services.rest.transformer.StringOutputTransformer;

@URITemplate("/a/{1}/b")
public class Resource implements ResourceContainer {

  @HTTPMethod("GET")
  @URITemplate("{2}")
  @OutputTransformer(StringOutputTransformer.class)
  public Response m0(@URIParam("1") String param1, @URIParam("2") String param2) {
    return Response.Builder.ok(param1 + param2, "text/plain").build();
  }
}
New code
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("/a/{1}/b")
public class Resource implements ResourceContainer {

  @GET
  @Path("{2}")
  @Produces("text/plain")
  public String m0(@PathParam("1") String param1, @PathParam("2") String param2) {
    return param1 + param2;
  }
}
EXAMPLE #2
Old code
import org.exoplatform.services.rest.HTTPMethod;
import org.exoplatform.services.rest.InputTransformer;
import org.exoplatform.services.rest.Response;
import org.exoplatform.services.rest.URITemplate;
import org.exoplatform.services.rest.container.ResourceContainer;
import org.exoplatform.ws.frameworks.json.transformer.Json2BeanInputTransformer;

public class Resource implements ResourceContainer {
  @HTTPMethod("POST")
  @URITemplate("/")
  @InputTransformer(Json2BeanInputTransformer.class)
  public Response m0(Book book) {
    // do something with bean
    return Response.Builder.noContent().build();
  }
}
New code
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Consumes;

@Path("/")
public class Resource implements ResourceContainer {
  @POST
  @Consumes("application/json")
  public void m0(Book book) {
    // do something with bean
  }
}
Since version exo-ws-2.0.1, support for the class javax.ws.rs.core.Application has been added. It makes it possible to set the life-cycle of services and providers to singleton or per-request mode. For details see the JAX-RS specification, chapter 2. The subclass of Application should be configured as a component of the eXo container.
This article describes how to use Groovy scripts as REST services. We are going to consider these operations:
Load script and save it in JCR.
Instantiate the script.
Deploy the newly created class as a RESTful service.
Script Lifecycle Management.
And finally we will walk through a simple example which gets a JCR node UUID.
In this article we consider a RESTful service compatible with the JSR-311 specification. Currently this feature is available in version 1.11-SNAPSHOT of JCR, 2.0-SNAPSHOT of WS and version 2.1.4-SNAPSHOT of core.
There are two ways to save a script in JCR. The first way is to save it at server startup time by using configuration.xml and the second way is to upload the script via HTTP.
Load script at startup time
This way can be used to load prepared scripts; to use it, we must configure org.exoplatform.services.jcr.ext.script.groovy.GroovyScript2RestLoaderPlugin. This is a simple configuration example.
<external-component-plugins>
  <target-component>org.exoplatform.services.jcr.ext.script.groovy.GroovyScript2RestLoader</target-component>
  <component-plugin>
    <name>test</name>
    <set-method>addPlugin</set-method>
    <type>org.exoplatform.services.jcr.ext.script.groovy.GroovyScript2RestLoaderPlugin</type>
    <init-params>
      <value-param>
        <name>repository</name>
        <value>repository</value>
      </value-param>
      <value-param>
        <name>workspace</name>
        <value>production</value>
      </value-param>
      <value-param>
        <name>node</name>
        <value>/script/groovy</value>
      </value-param>
      <properties-param>
        <name>JcrGroovyTest.groovy</name>
        <property name="autoload" value="true" />
        <property name="path" value="file:/home/andrew/JcrGroovyTest.groovy" />
      </properties-param>
    </init-params>
  </component-plugin>
</external-component-plugins>
The first value-param sets the JCR repository, the second one sets the workspace and the third one sets the JCR node where scripts from the plugin will be stored. If the specified node does not exist, it will be created. The list of scripts is set by properties-params. The name of each properties-param is used as the node name of the stored script, the property autoload tells the loader to deploy the script at startup time, and the property path sets the source from which the script is loaded. In this example we load a single script from the local file /home/andrew/JcrGroovyTest.groovy.
Load script via HTTP
These are samples of HTTP requests. In these examples we upload a script from the file test.groovy.
andrew@ossl:~> curl -u root:exo \
       -X POST \
       -H 'Content-type:script/groovy' \
       --data-binary @test.groovy \
       http://localhost:8080/rest/script/groovy/add/repository/production/script/groovy/test.groovy
This example imitates sending data with an HTML form ('multipart/form-data'). The parameter autoload is optional. If autoload=true, the script will be instantiated and deployed immediately.
andrew@ossl:~> curl -u root:exo \
       -X POST \
       -F "file=@test.groovy;name=test" \
       -F "autoload=true" \
       http://localhost:8080/rest/script/groovy/add/repository/production/script/groovy/test1.groovy
org.exoplatform.services.script.groovy.GroovyScriptInstantiator is part of the project exo.core.component.script.groovy. GroovyScriptInstantiator can load a script from a specified URL and parse a stream that contains Groovy source code. It can also inject components from the container into the Groovy class constructor. Configuration example:
<component>
  <type>org.exoplatform.services.script.groovy.GroovyScriptInstantiator</type>
</component>
To deploy a script automatically at server startup time, its property exo:autoload must be set to true. org.exoplatform.services.jcr.ext.script.groovy.GroovyScript2RestLoader checks the JCR workspaces which were specified in the configuration and deploys all auto-loadable scripts.
Example of configuration.
<component>
  <type>org.exoplatform.services.jcr.ext.script.groovy.GroovyScript2RestLoader</type>
  <init-params>
    <object-param>
      <name>observation.config</name>
      <object type="org.exoplatform.services.jcr.ext.script.groovy.ObservationListenerConfiguration">
        <field name="repository">
          <string>repository</string>
        </field>
        <field name="workspaces">
          <collection type="java.util.ArrayList">
            <value>
              <string>production</string>
            </value>
          </collection>
        </field>
      </object>
    </object-param>
  </init-params>
</component>
In the example above, the JCR workspace "production" will be checked for auto-loadable scripts. This workspace will also be listened to for changes of script source code (property jcr:data).
If GroovyScript2RestLoader is configured as described in the previous section, then all "autoload" scripts are deployed. In the first section we added a script from the file /home/andrew/JcrGroovyTest.groovy to the JCR node /script/groovy/JcrGroovyTest.groovy, repository repository, workspace production. The section "Load script via HTTP" described how to load scripts via HTTP; there is also an opportunity to manage the life cycle of a script.
Undeploy a script which is already deployed:
andrew@ossl:~> curl -u root:exo \
       -X GET \
       http://localhost:8080/rest/script/groovy/load/repository/production/script/groovy/JcrGroovyTest.groovy?state=false
Then deploy it again:
andrew@ossl:~> curl -u root:exo \
       -X GET \
       http://localhost:8080/rest/script/groovy/load/repository/production/script/groovy/JcrGroovyTest.groovy?state=true
or even simpler:
andrew@ossl:~> curl -u root:exo \
       -X GET \
       http://localhost:8080/rest/script/groovy/load/repository/production/script/groovy/JcrGroovyTest.groovy
Disable script autoloading; NOTE that this does not change the current deployment state:
andrew@ossl:~> curl -u root:exo \
       -X GET \
       http://localhost:8080/rest/script/groovy/repository/production/script/groovy/JcrGroovyTest.groovy/autoload?state=false
Enable it again:
andrew@ossl:~> curl -u root:exo \
       -X GET \
       http://localhost:8080/rest/script/groovy/autoload/repository/production/script/groovy/JcrGroovyTest.groovy?state=true
and again a simpler variant:
andrew@ossl:~> curl -u root:exo \
       -X GET \
       http://localhost:8080/rest/script/groovy/autoload/repository/production/script/groovy/JcrGroovyTest.groovy
Change script source code:
andrew@ossl:~> curl -u root:exo \
       -X POST \
       -H 'Content-type:script/groovy' \
       --data-binary @JcrGroovyTest.groovy \
       http://localhost:8080/rest/script/groovy/update/repository/production/script/groovy/JcrGroovyTest.groovy
This example imitates sending data with an HTML form ('multipart/form-data').
andrew@ossl:~> curl -u root:exo \
       -X POST \
       -F "file=@JcrGroovyTest.groovy;name=test" \
       http://localhost:8080/rest/script/groovy/update/repository/production/script/groovy/JcrGroovyTest.groovy
Remove script from JCR:
andrew@ossl:~> curl -u root:exo \
       -X GET \
       http://localhost:8080/rest/script/groovy/delete/repository/production/script/groovy/JcrGroovyTest.groovy
Now we are going to try a simple example of a Groovy RESTful service. There is one limitation: even though we use Groovy, we should write Java-style code and avoid dynamic types, though of course we can use them in private methods and fields. Create a file JcrGroovyTest.groovy; in this example it is saved in the home directory as /home/andrew/JcrGroovyTest.groovy. Then configure GroovyScript2RestLoaderPlugin as described in the section Load script at startup time.
import javax.jcr.Node
import javax.jcr.Session
import javax.ws.rs.GET
import javax.ws.rs.Path
import javax.ws.rs.PathParam
import org.exoplatform.services.jcr.RepositoryService
import org.exoplatform.services.jcr.ext.app.ThreadLocalSessionProviderService

@Path("groovy/test/{repository}/{workspace}")
public class JcrGroovyTest {
  private RepositoryService repositoryService
  private ThreadLocalSessionProviderService sessionProviderService

  public JcrGroovyTest(RepositoryService repositoryService,
                       ThreadLocalSessionProviderService sessionProviderService) {
    this.repositoryService = repositoryService
    this.sessionProviderService = sessionProviderService
  }

  @GET
  @Path("{path:.*}")
  public String nodeUUID(@PathParam("repository") String repository,
                         @PathParam("workspace") String workspace,
                         @PathParam("path") String path) {
    Session ses = null
    try {
      ses = sessionProviderService.getSessionProvider(null).getSession(workspace,
          repositoryService.getRepository(repository))
      Node node = (Node) ses.getItem("/" + path)
      return node.getUUID() + "\n"
    } finally {
      if (ses != null)
        ses.logout()
    }
  }
}
After the configuration is done, start the server. If the configuration is correct and the script has no syntax errors, you should see the following:
The screenshot shows that the service has been deployed.
First create a folder named 'test' via WebDAV in the repository production. Now we can try to access this service. Open another console and type the command:
andrew@ossl:~> curl -u root:exo \
       http://localhost:8080/rest/groovy/test/repository/production/test
When you try to execute this command you will get an exception, because the JCR node '/test' is not referenceable and has no UUID. We can add the mixin mix:referenceable to it. To do this, add one more method to the script: open /home/andrew/JcrGroovyTest.groovy, add the following code and save the file.
@POST
@Path("{path:.*}")
public void addReferenceableMixin(@PathParam("repository") String repository,
                                  @PathParam("workspace") String workspace,
                                  @PathParam("path") String path) {
  Session ses = null
  try {
    ses = sessionProviderService.getSessionProvider(null).getSession(workspace,
        repositoryService.getRepository(repository))
    Node node = (Node) ses.getItem("/" + path)
    node.addMixin("mix:referenceable")
    ses.save()
  } finally {
    if (ses != null)
      ses.logout()
  }
}
Now we can change the script deployed on the server without a server restart. Type the following command in the console:
andrew@ossl:~> curl -i -v -u root:exo \
       -X POST \
       --data-binary @JcrGroovyTest.groovy \
       -H 'Content-type:script/groovy' \
       http://localhost:8080/rest/script/groovy/update/repository/production/script/groovy/JcrGroovyTest.groovy
The node '/script/groovy/JcrGroovyTest.groovy' has the property exo:autoload=true, so the script is re-deployed automatically when its source code changes.
The script has been redeployed; now try to access the newly created method.
andrew@ossl:~> curl -u root:exo \
       -X POST \
       http://localhost:8080/rest/groovy/test/repository/production/test
The method execution should be quiet, without output, traces, etc. Then we can try again to get the node UUID.
andrew@ossl:~> curl -u root:exo \
       http://localhost:8080/rest/groovy/test/repository/production/test
1b8c88d37f0000020084433d3af4941f
Node UUID: 1b8c88d37f0000020084433d3af4941f
We don't need these scripts any more, so let's remove them from JCR.
andrew@ossl:~> curl -u root:exo \
       http://localhost:8080/rest/script/groovy/delete/repository/production/script/groovy/JcrGroovyTest.groovy
You should keep one class per Groovy file. The same applies to an interface and its implementation. It is a limitation of the Groovy parser, which has no method of type Class[] parseClass(InputStream) or Collection parseClass(InputStream), but only Class parseClass(InputStream).
That is all.
This section describes the REST framework before version exo-ws-2.0.
The purpose of eXo REST framework is to make eXo services (i.e. the components deployed inside eXo Container) simple and transparently accessible via HTTP in RESTful manner. In other words those services should be viewed as a set of REST Resources - endpoints of the HTTP request-response chain, we call those services ResourceContainers.
image: Simplified working model
Taking into account HTTP/REST constraints, it is considered that Resources (naturally mapped as Java methods) contained in ResourceContainer conform with the following conditions:
they are uniquely identifiable in the same way as it is specified for HTTP i.e. using URI
they can accept data from an HTTP request using all possible mechanisms, i.e. as a part of an URL, as URI parameters, as HTTP headers and body
they can return data to an HTTP response using all possible mechanisms, i.e. as HTTP status, headers and body
From the implementation point of view:
the framework should be as much JSR-311 compatible as possible for the time the Specification is in draft and fully compatible after the standard's release
the ResourceContainer components should be deployable and accessible in the same way as an ordinary service
In our REST architecture implementation, an HTTPServlet is the front-end for the REST engine. In this RESTful framework, endpoints for HTTP request are represented by java classes (ResourceContainers). All ResourceContainers are run as components of an ExoContainer, so they are configured within the same configuration file.
ResourceBinder and ResourceDispatcher are two other important components of the REST engine. ResourceBinder deals with binding and unbinding ResourceContainers. ResourceDispatcher dispatches REST requests to a ResourceContainer. ResourceBinder and ResourceDispatcher are also components of the ExoContainer.
A ResourceContainer is an annotated java class. Annotations must at least describe URLs and HTTP methods to be served by this container. But they may also describe other parameters, like produced content type or transformers. Transformers are special Java classes, used to serialize and deserialize Java objects.
A very simple ResourceContainer may look like this:
@HTTPMethod("GET")
@URITemplate("/test1/{param}/test2/")
public Response method1(@URIParam("param") String param) {
  //...
  Response resp = Response.Builder.ok().build();
  return resp;
}
The ResourceContainer described above is very simple, it can serve the GET HTTP method for the relative URL /test1/{param}/test2/ where {param} can be any string value. Additionally, {param} can be used as a method parameter of the container by annotating it as : @URIParam("param") String param. For example, in URL /test1/myID/test2 the method parameter "param" has the value "myID".
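The matching just described, where the template /test1/{param}/test2/ binds "myID" from the URL /test1/myID/test2, can be sketched with plain java.util.regex. This is a simplified illustration, not the framework's actual matching code, and the class name UriTemplateDemo is hypothetical:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UriTemplateDemo {

    // Turn a template like "/test1/{param}/test2/" into a regex with one
    // capturing group per {name}. (Other regex metacharacters in the template
    // are not escaped here; this is only a sketch.)
    static Pattern compile(String template) {
        String regex = template.replaceAll("\\{[^/}]+\\}", "([^/]+)");
        return Pattern.compile(regex);
    }

    public static void main(String[] args) {
        Pattern p = compile("/test1/{param}/test2/");
        Matcher m = p.matcher("/test1/myID/test2/");
        if (m.matches()) {
            System.out.println(m.group(1)); // the value bound to {param}
        }
    }
}
```

Running this prints the bound value, illustrating how a path segment becomes a method parameter.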
@URITemplate may be used for annotating classes or methods. If the TYPE scope annotation is for example @URITemplate("/testservices/") and a METHOD scope annotation is @URITemplate("/service1/") then that method can be called with the path /testservices/service1/. All ResourceContainer methods return Response objects. A Response includes HTTP status, an entity (the requested resource), HTTP headers and OutputEntityTransformer.
A client sends a request; after some operations, the HTTP request is parsed into three parts: the HTTP request stream (the body of the request), the HTTP headers and the query parameters. Then ResourceDispatcher gets this request. During the start of the ExoContainer, ResourceBinder collects all available ResourceContainers and, after the start, passes the list of ResourceContainers to the ResourceDispatcher. When ResourceDispatcher gets a request from a client, it searches for a valid ResourceContainer. For this search ResourceDispatcher uses the @HTTPMethod, @URITemplate and @ProducedMimeType annotations. The last one is not always necessary; if it is not specified, it defaults to */* (all mime types). When ResourceDispatcher has found the "right" ResourceContainer, it calls the matching method, sending only the parameters which the ResourceContainer requests. In the code example above, the parameter is the part of the requested URL between "/test1/" and "/test2/". ResourceContainer methods may declare only certain well-defined parameters. See the following rules:
Only one parameter of a method can be not annotated. This parameter represents the body of the HTTP request.
All other parameters must be of type java.lang.String, or have a constructor taking a java.lang.String, and must be annotated.
If the parameter which represents the HTTP request body is present, then the annotation @InputTransformer must define a Java class that can build the requested parameter from java.io.InputStream (transformation is described below). After processing the request, the ResourceContainer creates a Response. Response is a special Java object which consists of an HTTP status, response headers, an entity and a transformer. The entity is a representation of the requested resource. If a Response has an entity, it must also have an OutputEntityTransformer. OutputEntityTransformer can be set in a few different ways.
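The String-or-String-constructor rule above can be sketched with plain reflection: the raw string taken from the URI, header or query is either used directly or handed to the target type's String constructor. This is a simplified, hypothetical version of the conversion step, not the framework's code:

```java
import java.lang.reflect.Constructor;

public class ParamConversionDemo {

    // Convert a raw request string into the declared parameter type:
    // use it as-is for String parameters, otherwise invoke the target
    // type's constructor that takes a single String.
    static Object convert(Class<?> targetType, String raw) throws Exception {
        if (targetType.isAssignableFrom(String.class)) {
            return raw;
        }
        if (raw == null) {
            return null; // nothing to convert
        }
        Constructor<?> c = targetType.getConstructor(String.class);
        return c.newInstance(raw);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(convert(String.class, "myID")); // used directly
        System.out.println(convert(Integer.class, "42"));  // built via Integer(String)
    }
}
```

A type without a String constructor would fail the getConstructor lookup, which is exactly why the binder rejects such parameters.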
Response is created by Builder, an inner static Java class of Response. Builder has some preset ways to build a Response (see the code example above). The method ok() creates a Builder with status 200 (the OK HTTP status) and returns that Builder object, so the developer can chain further Builder method calls. For example:
//...
Document doc = ....
Response response = Response.Builder.ok().entity(doc, "text/xml")
                                    .transformer(new XMLOutputTransformer())
                                    .build();
In the code example above the Response has HTTP status 200, an XML document as entity, a Content-Type header "text/xml" and an OutputEntityTransformer of type XMLOutputTransformer. The method build() should be called at the end of the process. In the same way a developer can build other prepared Responses, like CREATED, NO CONTENT, FORBIDDEN, INTERNAL ERROR and others. In this example we set the OutputEntityTransformer directly; another mechanism to do this is the same as in the case of InputEntityTransformer.
//...
@HTTPMethod("GET")
@URITemplate("/test/resource1/")
@InputTransformer(XMLInputTransformer.class)
@OutputTransformer(XMLOutputTransformer.class)
public Response method1(Document inputDoc) {
  //...
  Document doc = ....
  Response response = Response.Builder.ok(doc, "text/xml").build();
  return response;
}
Here both the InputEntityTransformer and the OutputEntityTransformer are declared in the annotations of the method method1.
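The chaining behaviour of Response.Builder can be sketched in plain Java. The class names ResponseSketch and Builder below are hypothetical; this is a toy illustration of the builder pattern, not the framework's actual Response class:

```java
public class ResponseSketch {
    final int status;
    final Object entity;
    final String contentType;

    private ResponseSketch(int status, Object entity, String contentType) {
        this.status = status;
        this.entity = entity;
        this.contentType = contentType;
    }

    // Each Builder method returns the Builder itself, so calls chain
    // fluently until build() produces the immutable response object.
    public static class Builder {
        private int status;
        private Object entity;
        private String contentType;

        public static Builder ok() {
            Builder b = new Builder();
            b.status = 200; // preset HTTP status, like Response.Builder.ok()
            return b;
        }

        public Builder entity(Object e, String type) {
            this.entity = e;
            this.contentType = type;
            return this;
        }

        public ResponseSketch build() {
            return new ResponseSketch(status, entity, contentType);
        }
    }

    public static void main(String[] args) {
        ResponseSketch r = Builder.ok().entity("<doc/>", "text/xml").build();
        System.out.println(r.status + " " + r.contentType);
    }
}
```

Because every intermediate call returns the Builder, the final build() is the only place where the response object is actually constructed.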
All transformers can be created by EntityTransformerFactory. Transformers can be divided into two groups. Transformers of the first group extend the abstract class InputEntityTransformer. InputEntityTransformer implements the interface GenericInputEntityTransformer, and GenericInputEntityTransformer extends the interface GenericEntityTransformer. This architecture makes it possible to use the single class EntityTransformerFactory for creating both types of transformers (input and output). At first we will speak about InputEntityTransformer.
InputEntityTransformer
All classes which extend the abstract class InputEntityTransformer must override the method Object readFrom(InputStream entityDataStream). This method must return an object of the same type as the ResourceContainer requires in its method parameters (the one non-annotated parameter). This mechanism works in the following way. Say a class which extends InputEntityTransformer is called SimpleInputTransformer. SimpleInputTransformer must have a simple constructor (without any parameters). SimpleInputTransformer also has two methods, void setType(Class entityType) and Class getType(), which are defined in InputEntityTransformer. And, as said above, SimpleInputTransformer must override the abstract method Object readFrom(InputStream entityDataStream). So, a developer needs to create a class which can build a Java object from an InputStream and then annotate the ResourceContainer, or some method in it, with an annotation that defines the type of InputEntityTransformer. Here is the code of the annotation InputTransformer.
//...
@Retention(RUNTIME)
@Target({TYPE, METHOD})
public @interface InputTransformer {
  Class<? extends InputEntityTransformer> value();
}
At runtime, ResourceDispatcher gets the InputTransformer from the factory, builds an object from the stream and adds it to the parameter array. See the code below.
//...
InputEntityTransformer transformer =
    (InputEntityTransformer) factory_.newTransformer(resource.getInputTransformerType());
transformer.setType(methodParameters[i]);
params[i] = transformer.readFrom(request.getEntityStream());
//...
Response response = (Response) resource.getServer().invoke(resource.getResourceContainer(), params);
A developer can find some prepared transformers in the package "org.exoplatform.services.rest.transformer".
OutputEntityTransformer
OutputEntityTransformer is used to serialize the object which represents the requested resource to an OutputStream. An OutputEntityTransformer (below referred to as SimpleOutputTransformer) can be defined in a few ways.
Now some more words about transformation. The RESTful framework has two multi-purpose input/output transformers: JAXB(Input/Output)Transformer and Serializable(Input/Output)Transformer. The first one uses the JAXB API for serializing and deserializing an object. Serializable(Input/Output)Transformer uses the entity's own methods for serialization and deserialization. Here is an example:
//...
public class SimpleSerializableEntity implements SerializableEntity {

  String data;

  public SimpleSerializableEntity() {
  }

  public void readObject(InputStream in) throws IOException {
    StringBuffer sb = new StringBuffer();
    int rd = -1;
    while ((rd = in.read()) != -1)
      sb.append((char) rd);
    data = sb.toString();
  }

  public void writeObject(OutputStream out) throws IOException {
    out.write(data.getBytes());
  }
}
SerializableInputTransformer will use the method void readObject(InputStream in) and SerializableOutputTransformer will use the method void writeObject(OutputStream out).
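A quick round trip through in-memory streams shows how the two methods cooperate. This standalone sketch inlines the readObject/writeObject contract rather than implementing the framework's actual SerializableEntity interface; the class names are hypothetical:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class RoundTripDemo {

    static class SimpleEntity {
        String data;

        // Mirrors readObject(InputStream) from the example above:
        // consume the whole stream into the entity's state.
        void readObject(InputStream in) throws IOException {
            StringBuilder sb = new StringBuilder();
            int rd;
            while ((rd = in.read()) != -1)
                sb.append((char) rd);
            data = sb.toString();
        }

        // Mirrors writeObject(OutputStream): dump the state back out.
        void writeObject(OutputStream out) throws IOException {
            out.write(data.getBytes());
        }
    }

    public static void main(String[] args) throws IOException {
        SimpleEntity source = new SimpleEntity();
        source.data = "hello entity";

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        source.writeObject(buf); // the output transformer's side

        SimpleEntity copy = new SimpleEntity();
        copy.readObject(new ByteArrayInputStream(buf.toByteArray())); // the input side
        System.out.println(copy.data);
    }
}
```

The same bytes written by writeObject are reconstructed by readObject, which is exactly the contract the two Serializable transformers rely on.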
ResourceBinder takes care of binding and unbinding components which represent ResourceContainer. All ResourceContainers must be defined as ExoContainer components in configuration files, for example:
<component>
  <type>org.exoplatform.services.rest.samples.RESTSampleService</type>
</component>
ResourceBinder is an ExoContainer component and implements the interface org.picocontainer.Startable (see the PicoContainer documentation). So at the start of the ExoContainer, ResourceBinder collects all available ResourceContainers and validates them. At a minimum, each ResourceContainer must have the @HTTPMethod and @URITemplate annotations; other annotations are not obligatory. It is not allowed to have two ResourceContainers or methods with the same @HTTPMethod and @URITemplate annotation values. For example, if one container/method has the following code:
public class SimpleResourceContainer implements ResourceContainer {
  @HTTPMethod("GET")
  @URITemplate("/services/{id}/service1/")
  @InputTransformer(StringInputTransformer.class)
  @OutputTransformer(StringOutputTransformer.class)
  public Response method1(String data, @URIParam("id") String param) {
    // ...
  }
}
then another component with @HTTPMethod("GET") and @URITemplate("/services/service1/{id}/") can't be bound. Let's explain this situation. On the one hand, the URITemplate /services/{id}/service1/ accepts any string value in place of {id}, so the combination /services/service1/service1/ is possible and valid. On the other hand, the URITemplate /services/service1/{id}/ can have the string service1 in place of {id}, so for this resource the combination /services/service1/service1/ is also possible and valid. We would then have two resources with the same URITemplate and the same HTTPMethod, and the engine could not decide which method to call. In this case ResourceBinder throws InvalidResourceDescriptorException. The binder also checks the method parameters: only one parameter without annotation (of any type) is allowed, and all other parameters must be of type String (or have a constructor taking a String) and be annotated; otherwise InvalidResourceDescriptorException is thrown. If all components have the "right" @HTTPMethod and @URITemplate annotations, they are bound without any problems. ResourceDispatcher gets the collection of ResourceContainers during the start, and everything is ready to serve requests.
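The clash described above is easy to see with plain regular expressions: when both templates are compiled to patterns, the same concrete path satisfies each of them, so the binder could not know which method to call. This is a toy illustration with hypothetical names, not the binder's actual check:

```java
import java.util.regex.Pattern;

public class TemplateClashDemo {

    // Compile a URI template into a regex, one capturing group per {name}.
    static Pattern compile(String template) {
        return Pattern.compile(template.replaceAll("\\{[^/}]+\\}", "([^/]+)"));
    }

    public static void main(String[] args) {
        Pattern a = compile("/services/{id}/service1/");
        Pattern b = compile("/services/service1/{id}/");
        String path = "/services/service1/service1/";
        // Both templates accept the same URL, so binding both would be ambiguous.
        System.out.println(a.matcher(path).matches());
        System.out.println(b.matcher(path).matches());
    }
}
```

Both matches succeed on the same path, which is precisely the ambiguity that makes ResourceBinder reject the second binding.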
Note: within the scope of one component (one class), validation of URI templates is not implemented, so a developer must take care that @URITemplate and @HTTPMethod are not both the same for different methods, except when the @QueryTemplate and/or @ProducedMimeTypes annotation is used.
Now let's talk about the features of ResourceDispatcher, the main part of the RESTful engine. Above we said a few words about transformation and the role of ResourceDispatcher in this process. Now we are ready to talk about annotated method parameters. In one of the code examples above you could see the following construction:
public Response method1(String data, @URIParam("id") String param) {
The method method1 is described with two parameters. The first parameter String data is built from the entity body (InputStream). The second one - String param is annotated by a special type Annotation. This Annotation has the following code:
@Target(value={PARAMETER})
@Retention(RUNTIME)
public @interface URIParam {
  String value();
}
How does it work? After having found the right method, ResourceDispatcher gets the list of method parameters and starts handling them. The dispatcher finds the non-annotated parameter (the entity body) and asks InputEntityTransformer to build the required object from the InputStream. When the dispatcher finds an annotated parameter, it checks the type of annotation. Four types of annotation are possible: @URIParam, @HeaderParam, @QueryParam and @ContextParam. If the dispatcher has found the annotation @URIParam("id"), it takes the parameter with the key "id" from the ResourceDescription and adds it to the parameter array. The same holds for header, context and query parameters. So within the method a developer gets only the necessary parameters from the header, the query and the URI. Some more information about context parameters: @ContextParam values can be set in the configuration file of a component:
<component>
  <type>org.exoplatform.services.rest.ResourceDispatcher</type>
  <init-params>
    <properties-param>
      <name>context-params</name>
      <property name="test" value="test_1"/>
    </properties-param>
  </init-params>
</component>
In this case any service can use this parameter, for example:
...method(@ContextParam("test") String p) { System.out.println(p); }
After its execution you should see "test_1" in console.
In addition, the following context parameters can be used in each service. Their names are defined in ResourceDispatcher:
public static final String CONTEXT_PARAM_HOST = "host";
public static final String CONTEXT_PARAM_BASE_URI = "baseURI";
public static final String CONTEXT_PARAM_REL_URI = "relURI";
public static final String CONTEXT_PARAM_ABSLOCATION = "absLocation";
After the dispatcher finishes collecting the parameters that the method requires, it invokes this method and passes the result to the client as described above.
This part of code can explain how this mechanism works:
//...
for (int i = 0; i < methodParametersAnnotations.length; i++) {
  if (methodParametersAnnotations[i] == null) {
    InputEntityTransformer transformer =
        (InputEntityTransformer) factory.newTransformer(resource.getInputTransformerType());
    transformer.setType(methodParameters[i]);
    params[i] = transformer.readFrom(request.getEntityStream());
  } else {
    Constructor<?> constructor = methodParameters[i].getConstructor(String.class);
    String constructorParam = null;
    Annotation a = methodParametersAnnotations[i];
    if (a.annotationType().isAssignableFrom(URIParam.class)) {
      URIParam u = (URIParam) a;
      constructorParam = request.getResourceIdentifier().getParameters().get(u.value());
    } else if (a.annotationType().isAssignableFrom(HeaderParam.class)) {
      HeaderParam h = (HeaderParam) a;
      constructorParam = request.getHeaderParams().get(h.value());
    } else if (a.annotationType().isAssignableFrom(QueryParam.class)) {
      QueryParam q = (QueryParam) a;
      constructorParam = request.getQueryParams().get(q.value());
    } else if (a.annotationType().isAssignableFrom(ContextParam.class)) {
      ContextParam c = (ContextParam) a;
      constructorParam = contextHolder_.get().get(c.value());
    }
    if (methodParameters[i].isAssignableFrom(String.class)) {
      params[i] = constructorParam;
    } else {
      params[i] = (constructorParam != null) ? constructor.newInstance(constructorParam) : null;
    }
  }
}
Response resp = (Response) resource.getServer().invoke(resource.getResourceContainer(), params);
//...
The more detailed schema looks like this:
In this tutorial you will learn how to use JSR 181 to expose your eXo Container components as web services.
The steps given below have been tested with eXo WS samples 2.0.2 under Tomcat 6.0.16. It uses the WS tomcat bundle.
You can use maven build within "ws/tags/2.0.2/application" to deploy this sample application configuration:
mvn clean install -f product-exo-ws-as-tomcat6.xml antrun:run
The JSR181 support does not come by default in portal.
The WS example tomcat bundle, which you can deploy with provided command above, has already all necessary libraries and SOAP servlet declarations.
For any other bundle you have to perform the two steps below.
First you need to add the necessary dependencies inside exo-tomcat/lib
The required artifacts are:
exo.ws.soap.cxf.jsr181-2.0.2.jar
exo.ws.application.soap.cxf.samples-2.0.2.jar
cxf-rt-transports-http-2.1.2.jar
cxf-api-2.1.2.jar
cxf-rt-core-2.1.2.jar
cxf-common-utilities-2.1.2.jar
wsdl4j-1.6.1.jar
geronimo-jaxws_2.1_spec-1.0.jar
geronimo-ws-metadata_2.0_spec-1.1.2.jar
cxf-rt-frontend-jaxws-2.1.2.jar
cxf-rt-frontend-simple-2.1.2.jar
cxf-rt-bindings-soap-2.1.2.jar
XmlSchema-1.4.2.jar
cxf-rt-databinding-jaxb-2.1.2.jar
jaxb-impl-2.1.7.jar
cxf-tools-common-2.1.2.jar
cxf-rt-ws-addr-2.1.2.jar
saaj-api-1.3.jar
Edit YOUR_APPLICATION.war/WEB-INF/web.xml and add the CXF servlet:
<servlet>
  <servlet-name>SOAPServlet</servlet-name>
  <display-name>SOAP Servlet</display-name>
  <servlet-class>org.exoplatform.services.ws.impl.cxf.transport.http.SOAPServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>SOAPServlet</servlet-name>
  <url-pattern>/soap/services/*</url-pattern>
</servlet-mapping>
Alternatively, you could bundle CXF in a separate application, but make sure ws-examples.war starts before it.
We have a tiny sample to demonstrate this, called TicketOrderService.
Following JSR 181, creating a Web Service is now just a matter of annotating a class. The main idea is that your interface has to extend AbstractSingletonWebService.
@WebService
public interface TicketOrderService extends AbstractSingletonWebService {

  /**
   * @param departing departing place.
   * @param arriving arriving place.
   * @param departureDate departure date.
   * @param passenger passenger.
   * @return ticket order.
   */
  public String getTicket(String departing, String arriving, Date departureDate, String passenger);

  /**
   * @param confirmation confirm or not.
   */
  public void confirmation(boolean confirmation);
}
@WebService(serviceName = "TicketOrderService",
            portName = "TicketOrderServicePort",
            targetNamespace = "http://exoplatform.org/soap/cxf")
public class TicketOrderServiceImpl implements TicketOrderService {

  /**
   * Logger.
   */
  private static final Log LOG = ExoLogger.getLogger(TicketOrderServiceImpl.class);

  /**
   * Ticket.
   */
  private Ticket ticket;

  /**
   * @param departing departing place.
   * @param arriving arriving place.
   * @param departureDate departure date.
   * @param passenger passenger.
   * @return ticket order.
   */
  public String getTicket(String departing, String arriving, Date departureDate, String passenger) {
    ticket = new Ticket(passenger, departing, arriving, departureDate);
    LOG.info(ticket);
    return String.valueOf(ticket.getOrder());
  }

  /**
   * @param confirmation confirm or not.
   */
  public void confirmation(boolean confirmation) {
    LOG.info("Confirmation : " + confirmation + " for order '" + ticket.getOrder() + "'.");
  }
}
To test quickly, you can simply grab the jar from the eXo repository and deploy it within your application.
<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_0.xsd http://www.exoplatform.org/xml/ns/kernel_1_0.xsd"
   xmlns="http://www.exoplatform.org/xml/ns/kernel_1_0.xsd">
   <component>
      <type>org.exoplatform.services.ws.soap.jsr181.TicketOrderServiceImpl</type>
   </component>
</configuration>
Start the server.
Execute the script at ws/tags/2.0.2/application/java/services/samples/soap/client to test the service:
run.sh
and watch for this in the logs:
Sep 3, 2009 5:21:13 PM org.apache.cxf.endpoint.ServerImpl initDestination INFO: Setting the server's publish address to be /TicketOrderService/TicketOrderServicePort 03.09.2009 17:21:13 *INFO * [http-8080-1] ExoDeployCXFUtils: The webservice '/TicketOrderService/TicketOrderServicePort' has been published SUCCESSFUL! (ExoDeployCXFUtils.java, line 190) 03.09.2009 17:21:13 *INFO * [http-8080-1] WebServiceLoader: New singleton WebService '/TicketOrderService/TicketOrderServicePort' registered. (WebServiceLoader.java, line 95) 03.09.2009 17:21:27 *INFO * [http-8080-1] TicketOrderServiceImpl: Ticket {, , , null, 1251991287079} (TicketOrderServiceImpl.java, line 57) 03.09.2009 17:21:27 *INFO * [http-8080-1] TicketOrderServiceImpl: Confirmation : false for order '1251991287079'. (TicketOrderServiceImpl.java, line 65)
Now your service is started and you can retrieve the WSDL at: http://localhost:8080/ws-examples/soap/services/TicketOrderService/TicketOrderServicePort?wsdl
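Once the WSDL is available, the endpoint can be exercised with any SOAP client. As a purely illustrative sketch, a getTicket request under CXF's default document/literal wrapped binding might look roughly like this; the wrapper and argument element names (arg0..arg3) are assumptions derived from the interface above, so check the generated WSDL for the exact names:

```xml
<!-- Illustrative only: element names are assumed, not taken from the WSDL -->
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:tns="http://exoplatform.org/soap/cxf">
  <soap:Body>
    <tns:getTicket>
      <arg0>Paris</arg0>               <!-- departing -->
      <arg1>New York</arg1>            <!-- arriving -->
      <arg2>2009-09-03T00:00:00</arg2> <!-- departureDate -->
      <arg3>John Smith</arg3>          <!-- passenger -->
    </tns:getTicket>
  </soap:Body>
</soap:Envelope>
```

Such an envelope would be POSTed with Content-Type: text/xml to http://localhost:8080/ws-examples/soap/services/TicketOrderService/TicketOrderServicePort.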
Central Authentication Service (CAS) is a Web Single Sign-On (WebSSO) system, developed by JA-SIG as an open-source project. CAS, like any WebSSO, is very interesting in information systems where many applications share a common user repository. When CAS is enabled on all the applications, a user logs in once and only once, and is recognized and authenticated by all the applications.
The CAS documentation explains in detail how to configure any environment: it is mainly a matter of configuring a Web application to authenticate against the CAS server instead of an internal mechanism. This section explains how to configure eXo Platform to delegate authentication to the CAS server and let it ensure single sign-on between all the applications of an information system.
This section explains how to configure the CAS server and client for eXo. The example uses two identical Tomcat instances; one of them additionally hosts the CAS server.
This setup is not very useful in production, but it is a good example of configuring CAS.
Tomcat 1, deployed on Windows 2003, hosts the CAS server; Tomcat 2 runs on Ubuntu 7.10.
(DNS name: test01-srv.exoua-int)
1. Certificates.
E:/Program Files>cd java E:/Program Files/Java>cd jre1.5.0_11 E:/Program Files/Java/jre1.5.0_11>cd bin E:/Program Files/Java/jre1.5.0_11/bin>keytool -genkey -alias tomcat -keypass 123456 -keyalg RSA Enter keystore password: 123456 What is your first and last name? [Unknown]: test01-srv.exoua-int What is the name of your organizational unit? [Unknown]: . What is the name of your organization? [Unknown]: . What is the name of your City or Locality? [Unknown]: . What is the name of your State or Province? [Unknown]: . What is the two-letter country code for this unit? [Unknown]: UA Is CN=test01-srv.exoua-int, OU=., O=., L=., ST=., C=UA correct? [no]: yes E:/Program Files/Java/jre1.5.0_11/bin>keytool -export -alias tomcat -keypass 123456 -file server.crt Enter keystore password: 123456 Certificate stored in file <server.crt>
This step is optional; it just sets the same password for the %JRE_HOME%/lib/security/cacerts store.
E:/Program Files/Java/jre1.5.0_11/bin>keytool -storepasswd -keystore ../lib/security/cacerts
Enter keystore password: changeit
New keystore password: 123456
Re-enter new keystore password: 123456
Continue with certificates.
E:/Program Files/Java/jre1.5.0_11/bin>keytool -import -file server.crt -keypass 123456 -keystore ../lib/security/cacerts Enter keystore password: 123456 Owner: CN=test01-srv.exoua-int, OU=., O=., L=., ST=., C=UA Issuer: CN=test01-srv.exoua-int, OU=., O=., L=., ST=., C=UA Serial number: 4810c6c5 Valid from: Fri Apr 24 20:33:36 HST 2008 until: Thu Jul 23 20:33:36 HST 2008 Certificate fingerprints: MD5: CC:3B:FB:FB:AE:12:AD:FB:3E:D 5:98:CB:2E:3B:0A:AD SHA1: A1:16:80:68:39:C7:58:EA:2F:48:59:AA:1D:73:5F:56:78:CE:A4:CE Trust this certificate? [no]: yes Certificate was added to keystore E:/Program Files/Java/jre1.5.0_11/bin>
2. Now edit Tomcat's server.xml file (we are using Tomcat 6.0.13 everywhere). Uncomment the SSL connector configuration and edit it so that it looks like this:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystoreFile="E:/Documents and Settings/admin/.keystore" keystorePass="123456" keyAlias="tomcat" truststoreFile="E:/Program Files/Java/jre1.5.0_11/lib/security/cacerts" truststorePass="123456" />
3. Now configure the CAS client part; as an example we will protect portal/private/*. Edit the file /portal/WEB-INF/web.xml.
<context-param>
   <param-name>serverName</param-name>
   <param-value>http://test01-srv.exoua-int:8080</param-value>
</context-param>
Configure the client; in this example we will protect the /portal/private/* resource. Note: these filters must be added before the SetCurrentIdentityFilter filter, and the same goes for the filter-mappings.
<filter> <filter-name>SingleSignOutFilter</filter-name> <filter-class>org.jasig.cas.client.session.SingleSignOutFilter</filter-class> </filter> <filter> <filter-name>AuthenticationFilter</filter-name> <filter-class>org.jasig.cas.client.authentication.AuthenticationFilter</filter-class> <init-param> <param-name>casServerLoginUrl</param-name> <param-value>https://test01-srv.exoua-int:8443/cas/login</param-value> </init-param> </filter> <filter> <filter-name>Cas20ProxyReceivingTicketValidationFilter</filter-name> <filter-class>org.jasig.cas.client.validation.Cas20ProxyReceivingTicketValidationFilter</filter-class> <init-param> <param-name>casServerUrlPrefix</param-name> <param-value>https://test01-srv.exoua-int:8443/cas</param-value> </init-param> <init-param> <param-name>redirectAfterValidation</param-name> <param-value>true</param-value> </init-param> </filter> <filter> <filter-name>HttpServletRequestWrapperFilter</filter-name> <filter-class>org.jasig.cas.client.util.HttpServletRequestWrapperFilter</filter-class> </filter> <!-- eXo --> <filter> <filter-name>BaseIdentityInitializerFilterImpl</filter-name> <filter-class>org.exoplatform.services.security.cas.client.impl.BaseIdentityInitializerFilterImpl</filter-class> </filter> <!-- end exo --> .... <filter-mapping> <filter-name>SingleSignOutFilter</filter-name> <url-pattern>/private/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>AuthenticationFilter</filter-name> <url-pattern>/private/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>Cas20ProxyReceivingTicketValidationFilter</filter-name> <url-pattern>/private/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>HttpServletRequestWrapperFilter</filter-name> <url-pattern>/private/*</url-pattern> </filter-mapping> <!-- exo --> <filter-mapping> <filter-name>BaseIdentityInitializerFilterImpl</filter-name> <url-pattern>/private/*</url-pattern> </filter-mapping> <!-- end exo --> .... 
<listener> <listener-class>org.jasig.cas.client.session.SingleSignOutHttpSessionListener</listener-class> </listener> .... <!-- exo --> <servlet> <servlet-name>LogoutServlet</servlet-name> <servlet-class>org.exoplatform.services.security.cas.client.impl.LogoutServlet</servlet-class> <init-param> <param-name>casServerLogoutUrl</param-name> <param-value>https://test01-srv.exoua-int:8443/cas/logout</param-value> </init-param> <init-param> <param-name>redirectToUrl</param-name> <param-value>http://test01-srv.exoua-int:8080/portal/public/classic</param-value> </init-param> </servlet> <!-- end exo --> ..... <!-- exo --> <servlet-mapping> <servlet-name>LogoutServlet</servlet-name> <url-pattern>/logout/*</url-pattern> </servlet-mapping> <!-- end exo --> ....
4. Download and build the code from http://svn.exoplatform.org/svnroot/exoplatform/projects/ws/trunk/security/cas/client.
5. Download cas-client-core-3.1.1.jar and put it in the %CATALINA_HOME%/lib directory.
6. Download the CAS server source code and build it, or download the binary. Put cas.war in the webapps directory. Change the configuration in the file /cas/WEB-INF/deployerConfigContext.xml: comment out the test authenticator and add a new one.
<!--
<bean class="org.jasig.cas.authentication.handler.support.SimpleTestUsernamePasswordAuthenticationHandler" />
-->
<!-- will check username and password at remote host -->
<bean class="org.exoplatform.services.security.cas.server.HTTPAuthenticationHandler"
   p:authenticationURL="https://ubu.exoua-int:8443/portal/login" />
7. Download and build the code that performs remote authentication from http://svn.exoplatform.org/svnroot/exoplatform/projects/ws/trunk/security/cas/server, and put it in the cas/WEB-INF/lib directory. This is a CAS server-side handler that can call a remote eXo authentication service via HTTP. In this case the validation of the username/password will be done at ubu.exoua-int, while the authentication server (CAS) runs at test01-srv.exoua-int.
Now configure the other Tomcat instance, deployed on Ubuntu 7.10
(DNS name: ubu.exoua-int).
1. Generate certificates for the CAS client, the same way as before, but change the name to ubu.exoua-int.
2. Edit Tomcat's server.xml file.
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystoreFile="/home/andrew/.keystore" keystorePass="123456" keyAlias="tomcat" truststoreFile="/home/andrew/lib/java/jre/lib/security/cacerts" truststorePass="123456" />
3. Edit the file portal/WEB-INF/web.xml and add the following. Change the context parameter:
<context-param>
   <param-name>serverName</param-name>
   <param-value>http://ubu.exoua-int:8080</param-value>
</context-param>
The filter configuration must be the same as in the client part on Tomcat 1, but add one more servlet that will check the username/password.
<filter> <filter-name>SingleSignOutFilter</filter-name> <filter-class>org.jasig.cas.client.session.SingleSignOutFilter</filter-class> </filter> <filter> <filter-name>AuthenticationFilter</filter-name> <filter-class>org.jasig.cas.client.authentication.AuthenticationFilter</filter-class> <init-param> <param-name>casServerLoginUrl</param-name> <param-value>https://test01-srv.exoua-int:8443/cas/login</param-value> </init-param> </filter> <filter> <filter-name>Cas20ProxyReceivingTicketValidationFilter</filter-name> <filter-class>org.jasig.cas.client.validation.Cas20ProxyReceivingTicketValidationFilter</filter-class> <init-param> <param-name>casServerUrlPrefix</param-name> <param-value>https://test01-srv.exoua-int:8443/cas</param-value> </init-param> <init-param> <param-name>redirectAfterValidation</param-name> <param-value>true</param-value> </init-param> </filter> <filter> <filter-name>HttpServletRequestWrapperFilter</filter-name> <filter-class>org.jasig.cas.client.util.HttpServletRequestWrapperFilter</filter-class> </filter> <!-- exo --> <filter> <filter-name>BaseIdentityInitializerFilterImpl</filter-name> <filter-class>org.exoplatform.services.security.cas.client.impl.BaseIdentityInitializerFilterImpl</filter-class> </filter> <!-- end exo --> .... <filter-mapping> <filter-name>SingleSignOutFilter</filter-name> <url-pattern>/private/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>AuthenticationFilter</filter-name> <url-pattern>/private/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>Cas20ProxyReceivingTicketValidationFilter</filter-name> <url-pattern>/private/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>HttpServletRequestWrapperFilter</filter-name> <url-pattern>/private/*</url-pattern> </filter-mapping> <!-- exo --> <filter-mapping> <filter-name>BaseIdentityInitializerFilterImpl</filter-name> <url-pattern>/private/*</url-pattern> </filter-mapping> <!-- end exo --> ....
<listener> <listener-class>org.jasig.cas.client.session.SingleSignOutHttpSessionListener</listener-class> </listener> .... <!-- exo --> <servlet> <servlet-name>BaseHTTPUsernamePasswordValidator</servlet-name> <servlet-class>org.exoplatform.services.security.cas.client.impl.BaseHTTPUsernamePasswordValidatorImpl</servlet-class> </servlet> <servlet> <servlet-name>LogoutServlet</servlet-name> <servlet-class>org.exoplatform.services.security.cas.client.impl.LogoutServlet</servlet-class> <init-param> <param-name>casServerLogoutUrl</param-name> <param-value>https://test01-srv.exoua-int:8443/cas/logout</param-value> </init-param> <init-param> <param-name>redirectToUrl</param-name> <param-value>http://ubu.exoua-int:8080/portal/public/classic</param-value> </init-param> </servlet> <!-- end exo --> ..... <!-- exo --> <servlet-mapping> <servlet-name>BaseHTTPUsernamePasswordValidator</servlet-name> <url-pattern>/login/*</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>LogoutServlet</servlet-name> <url-pattern>/logout/*</url-pattern> </servlet-mapping> <!-- end exo --> <!-- not use default authentification--> <!-- <security-constraint> <web-resource-collection> <web-resource-name>user authentication</web-resource-name> <url-pattern>/private/*</url-pattern> <http-method>POST</http-method> <http-method>GET</http-method> </web-resource-collection> <auth-constraint> <role-name>users</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>NONE</transport-guarantee> </user-data-constraint> </security-constraint> <login-config> <auth-method>FORM</auth-method> <realm-name>exo-domain</realm-name> <form-login-config> <form-login-page>/login/jsp/login.jsp</form-login-page> <form-error-page>/login/jsp/login.jsp</form-error-page> </form-login-config> </login-config> <security-role> <description>a simple user role</description> <role-name>users</role-name> </security-role> <security-role> <description>the admin role</description> <role-name>admin</role-name> 
</security-role> --> <!-- end web.xml file--> ....
4. Download and build the code from http://svn.exoplatform.org/svnroot/exoplatform/projects/ws/trunk/security/cas/client.
5. Download cas-client-core-3.1.1.jar and put it in the %CATALINA_HOME%/lib directory.
6. Now get the trusted certificate for the CAS server instance. To do this, download and compile this file: http://blogs.sun.com/andreas/resource/InstallCert.java Then run it:
java InstallCert test01-srv.exoua-int:8443 123456
Change 123456 to the actual keystore password. You may see some exceptions, but eventually you should see information about the certificates and a prompt asking whether to add one to the store. Select the certificate, usually by typing 1, and press Enter.
Finished! Run both servers and try to open one of the protected URLs, for example
http://test01-srv.exoua-int:8080/portal/private/classic.
Accept the certificates and, on the login page, enter the username/password root/exo. You should get the private area of the portal as root. Then open the other protected resource on the server 'ubu.exoua-int', http://ubu.exoua-int:8080/portal/private/classic. You should get this private area in the other portal without logging in. If you do, then SSO for login works.
Now try to log out on 'ubu.exoua-int'. To do this directly from the portal, one Groovy script must be modified, but that is not described here.
After logging out from 'ubu.exoua-int' you should be asked to log in again at 'test01-srv.exoua-int'. SSO for logout works as well.
That is all! If everything works as described above, the configuration is correct and SSO works.
eXo Portal can use SSO (Single Sign-On) with Kerberos authentication against a Microsoft Active Directory. To install this functionality, some configuration is needed on both the Active Directory server and the application server.
In this example, we suppose that the complete name of the machine on which the Tomcat server runs is ubu.exoua-int, and that it runs on a Linux host (Ubuntu 7.04). This machine must be in the Windows domain; see the Samba documentation for how to join it.
Our implementation makes it possible to use SPNEGO or NTLM (the two terms are sometimes mixed up, but here we will try to keep them separate). The client receives two authentication headers, 'Negotiate' and 'NTLM', and uses whichever one the browser supports. In Firefox it is possible to manage the authentication types; in IE it is not. This HOWTO describes how to configure support for both authentication types. For IE, SPNEGO will be used.
On the AD server, we need to create a Kerberos identification for the Tomcat server:
1. Create a user account for the host computer on which Tomcat Server runs in the Active Directory server. (Select New > User, not New > Machine.)
When creating the two user accounts, use the simple name of the computer; we recommend naming them after the patterns host_host-name and http_host-name. The first account will be used for the LDAP connection, the second for the authentication service via HTTP. For example, if the host name is ubu.exoua-int, create users in Active Directory called host_ubu and http_ubu.
Note the password you defined when creating the user account. You will need it in step 3. Do not select the "User must change password at next login" option, or any other password options.
2. Configure the new user account to comply with the Kerberos protocol.
Right-click the name of the user account in the Users tree in the left pane and select Properties.
NOTE Make sure the box "Use DES encryption types for this account" is unchecked. Also make sure no other boxes are checked, particularly the box "Do not require Kerberos pre-authentication."
Setting the encryption type may corrupt the password. Therefore, you should reset the user password by right-clicking the name of the user account, selecting Reset Password, and re-entering the same password specified earlier.
3. Generate the keys for the services:
C:\> ktpass -princ host/ubu.exoua-int@EXOUA-INT -mapuser host_ubu@EXOUA-INT -crypto RC4-HMAC-NT \ -ptype KRB5_NT_PRINCIPAL -mapop set -pass 123456 -out c:\host_ubu.keytab C:\> ktpass -princ HTTP/ubu.exoua-int@EXOUA-INT -mapuser http_ubu@EXOUA-INT -crypto RC4-HMAC-NT \ -ptype KRB5_NT_PRINCIPAL -mapop set -pass 123456 -out c:\http_ubu.keytab
4. Use the setspn utility to create the Service Principal Names (SPNs) for the user account created in step 1. Enter the following commands:
C:\> setspn -A host/ubu.exoua-int host_ubu C:\> setspn -A HTTP/ubu.exoua-int http_ubu
5. Check which SPNs are associated with your user account, using the following command:
C:\> setspn -L host_ubu
NOTE This is an important step. If the same service is linked to a different account in the Active Directory server, the client will not send a Kerberos ticket to the server. If the filter is used, the security-constraint must be removed from web.xml.
6. Configure the Linux host. This is an example of the file /etc/krb5.conf:
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 ticket_lifetime = 24000
 default_realm = EXOUA-INT
 default_tkt_enctypes = rc4-hmac
 default_tgs_enctypes = rc4-hmac

[realms]
 EXOUA-INT = {
  kdc = test01-srv.exoua-int:88
  admin_server = test01-srv.exoua-int:749
  default_domain = EXOUA-INT
 }

[domain_realm]
 .exoua-int = EXOUA-INT
 exoua-int = EXOUA-INT

[kdc]
 profile = /etc/kdc.conf

[pam]
 debug = false
 ticket_lifetime = 36000
 renew_lifetime = 36000
 forwardable = true
 krb4_convert = false
7. Copy the keys generated in step 3 to the Linux machine where the Tomcat server runs.
8. Run the ktutil utility on the Linux machine and import the keys:
andrew@ubu:~$ ktutil
ktutil: rkt host_ubu.keytab
ktutil: wkt host.keytab
ktutil: rkt http_ubu.keytab
ktutil: wkt http.keytab
You should get two new files containing the keys.
1. Deploy an exo-tomcat, copy the SSO jar into the lib folder, and change the configuration.xml file to match your network settings:
<configuration> <component> <key>org.exoplatform.services.security.sso.config.SSOConfigurator</key> <type>org.exoplatform.services.security.sso.config.SSOConfigurator</type> <init-params> <properties-param> <name>sso-properties</name> <property name="charset" value="UnicodeLittleUnmarked" /> <property name="domain" value="EXOUA-INT" /> <property name="jaas-context" value="krb5.ldap-action" /> <property name="ldap-server" value="ldap://test01-srv.exoua-int:389/" /> <!-- ********************************************************** Default cross domain authentication is disabled. NOTE: This is actual for NTLM only. For SPNEGO cross domain authentication is disabled by default. There is some more work to enable it for SPNEGO. ********************************************************** --> <!-- <property name="cross-domain" value="true" /> --> <!-- <property name="redirect-on-error" value="http://google.com" /> --> </properties-param> </init-params> </component>
2. In exo-tomcat/conf/Catalina/localhost/, change the file portal.xml:
<Context path='/portal' docBase='portal' debug='0' reloadable='true' crossContext='true'> <Logger className='org.apache.catalina.logger.SystemOutLogger' prefix='localhost_portal_log.' suffix='.txt' timestamp='true'/> <Manager className='org.apache.catalina.session.PersistentManager' saveOnRestart='false'/> <!-- <Realm className='org.apache.catalina.realm.JAASRealm' appName='exo-domain' userClassNames='org.exoplatform.services.security.jaas.UserPrincipal' roleClassNames='org.exoplatform.services.security.jaas.RolePrincipal' debug='0' cache='false'/> <Valve className='org.apache.catalina.authenticator.FormAuthenticator' characterEncoding='UTF-8'/> --> <Valve className="org.exoplatform.services.security.sso.tomcat.SSOAuthenticatorValve"/> </Context>
The security configuration in web.xml must be changed to the following:
<security-role> <description>a simple user role</description> <role-name>users</role-name> </security-role> <security-constraint> <web-resource-collection> <web-resource-name>portal</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>users</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>NONE</transport-guarantee> </user-data-constraint> </security-constraint>
NOTE At least one role in web.xml must correspond to a user group in AD.
3. Download jcifs-1.2.17.jar into the lib folder. We need it to support NTLM authentication.
4. In exo-tomcat/conf/jaas.conf, add:
com.sun.security.jgss.accept { com.sun.security.auth.module.Krb5LoginModule required keyTab = "/home/andrew/http.keytab" useKeyTab = true storeKey = true principal = "HTTP/ubu.exoua-int@EXOUA-INT" doNotPrompt = true realm = "EXOUA-INT" refreshKrb5Config = true debug = false ; }; krb5.ldap-action { com.sun.security.auth.module.Krb5LoginModule required keyTab = "/home/andrew/host.keytab" useKeyTab = true storeKey = true principal = "host/ubu.exoua-int@EXOUA-INT" doNotPrompt = true realm = "EXOUA-INT" refreshKrb5Config = true debug = false ; };
5. Add the following system properties in the file bin/eXo.sh:
KERBEROS="-Djavax.security.auth.useSubjectCredsOnly=false \ -Djava.security.krb5.kdc=test01-srv.exoua-int \ -Djava.security.krb5.realm=EXOUA-INT" JAVA_OPTS="$YOURKIT_PROFILE_OPTION $JAVA_OPTS $LOG_OPTS $SECURITY_OPTS $EXO_OPTS $EXO_CONFIG_OPTS $KERBEROS"
6. For the portal, add one more filter, org.exoplatform.services.security.sso.http.JndiIdentityInitalizerFilter, to initialize the user's Identity. This filter must be mapped before org.exoplatform.services.security.web.SetCurrentIdentityFilter on the private area.
7. For Firefox, there is an additional configuration step to use AD authentication: in the address bar, go to about:config, filter on ntlm, and choose "network.automatic-ntlm-auth.trusted-uris". Set the string to the name of the machine where the web server runs (in this example: set "ubu"). For IE, there is no additional configuration.
8. Go to http://ubu.exoua-int:8080/portal/private/classic. If the user is logged into Windows (domain authentication), then Active Directory authentication will be used.
Instead of the Tomcat valve org.exoplatform.services.security.sso.tomcat.SSOAuthenticatorValve, the filter org.exoplatform.services.security.sso.http.SSOAuthenticationFilter can be used.
9. An important note about using NTLM: JCIFS may use a MAC to sign the connection to the DC. In that case, once one user has logged in, the next user may be unable to log in during the time specified by the property jcifs.smb.client.soTimeout (in ms, default 15000): during this time JCIFS keeps the previous connection open and may not create a new one to authenticate another user. To fix this, set the following properties: jcifs.smb.client.domain, jcifs.smb.client.username, jcifs.smb.client.password. For example:
-Djcifs.smb.client.domain=EXOUA-INT -Djcifs.smb.client.username=Admin -Djcifs.smb.client.password=secret
In this case the SMB connection will be signed for that user.
OAuth allows granting access to private resources on one site (called the Service Provider) to another site (called the Consumer). OAuth gives access to a resource without sharing your identity at all. Read more about OAuth at http://oauth.net/ (OAuth protocol: http://oauth.net/core/1.0/).
This section describes how to configure our implementation of the OAuth service and client parts. Both parts, the service (Provider) and the client (Consumer), are based on the OAuth core library, whose code can be found at http://oauth.googlecode.com/svn/code/java/core/.
Our implementation can be found at http://svn.exoplatform.org/svnroot/exoplatform/projects/ws/trunk/security/oauth.
The provider consists of two parts: oauthprovider.war and exo.ws.security.oauth.provider.service-trunk.jar. The main part of the provider is the OAuthProviderService; currently there is one implementation of this interface, org.exoplatform.ws.security.oauth.impl.OAuthProviderServiceMD5Impl. This component has a few required configuration parameters.
The configuration is defined in the file configuration.xml.
<component>
   <type>org.exoplatform.ws.security.oauth.impl.OAuthProviderServiceMD5Impl</type>
   <init-params>
      <properties-param>
         <name>exo1</name>
         <property name="secret" value="81d1b5d080d1" />
         <property name="description" value="description" />
         <property name="callbackURL" value="http://localhost:8080/ws-examples/callback" />
      </properties-param>
   </init-params>
</component>
Properties:
1. name: the name of the provider; the client sends the name of the provider it wants to use.
2. secret: used to sign requests; it must be known to both the provider and the consumer.
3. description: any description of the provider. Optional.
4. callbackURL: the URL to which the client is redirected after successful authentication by the provider.
That is all that is needed to configure the provider service. The next part of the configuration concerns the web layer: the servlets.
The web part of the provider consists of three servlets:
OAuthRequestTokenServlet,
OAuthAccessTokenServlet, and
OAuthAuthenticationServlet (mapped to /authorize/* below)
<?xml version="1.0" encoding="UTF-8"?> <web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"> <display-name>oAuth provider</display-name> <context-param> <description>Login page name</description> <param-name>login-page</param-name> <param-value>login/jsp/login.jsp</param-value> </context-param> <servlet> <servlet-name>OAuthAuthenticationServlet</servlet-name> <servlet-class>org.exoplatform.ws.security.oauth.http.OAuthAuthenticationServlet</servlet-class> </servlet> <servlet> <servlet-name>OAuthRequestTokenServlet</servlet-name> <servlet-class>org.exoplatform.ws.security.oauth.http.OAuthRequestTokenServlet</servlet-class> </servlet> <servlet> <servlet-name>OAuthAccessTokenServlet</servlet-name> <servlet-class>org.exoplatform.ws.security.oauth.http.OAuthAccessTokenServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>OAuthAuthenticationServlet</servlet-name> <url-pattern>/authorize/*</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>OAuthRequestTokenServlet</servlet-name> <url-pattern>/request_token/*</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>OAuthAccessTokenServlet</servlet-name> <url-pattern>/access_token/*</url-pattern> </servlet-mapping> </web-app>
The consumer consists of the OAuthConsumerService and a web part (servlets and filters).
OAuthConsumerFilter checks the cookies in the client's request. The cookies must have the names _consumer_name_.oauth_token and _consumer_name_.oauth_token_secret. The filter then tries to find the corresponding request/access token in the OAuthConsumerService. If the token from the request is an access token, the client is already authenticated and gets access to the requested resource.
Otherwise the client will be redirected to the provider for authentication (see below, the property "provider.authorizationURL"). This is the relevant part of configuration.xml for the consumer:
<component> <type>org.exoplatform.ws.security.oauth.impl.OAuthConsumerServiceImpl</type> <init-params> <value-param> <!-- this parameter MUST be set in minutes --> <name>tokenAliveTime</name> <value>300</value> </value-param> <properties-param> <name>exo1</name> <property name="secret" value="81d1b5d080d1" /> <property name="description" value="description" /> <property name="provider.tokenRequestURL" value="http://localhost:8080/oauthprovider/request_token" /> <property name="provider.authorizationURL" value="http://localhost:8080/oauthprovider/authorize" /> <property name="provider.accessTokenURL" value="http://localhost:8080/oauthprovider/access_token" /> </properties-param> </init-params> </component> <component> <type>org.exoplatform.ws.security.oauth.impl.OAuthClientHttpImpl</type> </component> <component> <type>org.exoplatform.ws.security.oauth.impl.OAuthTokenCleanerImpl</type> <init-params> <value-param> <!-- this parameter MUST be set in minutes --> <name>tokenCleanerTimeout</name> <value>3</value> </value-param> </init-params> </component>
The client is redirected to the provider for authentication with the required parameters (the request token and token secret), which the OAuthClient previously obtained from the provider (see the configuration property "provider.tokenRequestURL"). On the provider side, if the authentication is successful and the request parameters are valid, the user is redirected back to the consumer (see the property "callbackURL" in the provider configuration). The consumer then receives the access token (this step, like receiving the request token, is invisible to the client) and redirects the client to the original URL. Finally, the filter checks the token from the request and grants access to the requested resource.
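The redirect-or-serve decision described above can be sketched in plain Java. This is not the eXo code: the class ConsumerFlowSketch, its method names, and the in-memory token set are illustrative assumptions; only the cookie-name convention (_consumer_name_.oauth_token) comes from the text.

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch only: the real logic lives in OAuthConsumerFilter
// and OAuthConsumerService.
public class ConsumerFlowSketch {

    /** Cookie name the filter looks for, e.g. "exo1.oauth_token". */
    static String tokenCookieName(String consumer) {
        return consumer + ".oauth_token";
    }

    /** Matching secret cookie, e.g. "exo1.oauth_token_secret". */
    static String secretCookieName(String consumer) {
        return consumer + ".oauth_token_secret";
    }

    /**
     * One request, one decision: serve the resource if the presented token
     * is a known access token, otherwise redirect the client to the
     * provider's authorization URL to start the OAuth dance.
     */
    static String decide(Map<String, String> cookies, String consumer,
                         Set<String> accessTokens, String authorizationUrl) {
        String token = cookies.get(tokenCookieName(consumer));
        if (token != null && accessTokens.contains(token)) {
            return "SERVE";                    // already authenticated
        }
        return "REDIRECT:" + authorizationUrl; // stages 1-3 follow
    }
}
```

With the consumer named exo1 (as in the configuration above), a request carrying a known access token in the exo1.oauth_token cookie is served; any other request is redirected to provider.authorizationURL.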
This is the web.xml file for the consumer application; in this example the resource http://localhost:8080/ws-examples/oauth/protected/ is under OAuth protection:
<filter> <filter-name>OAuthConsumerFilter</filter-name> <filter-class>org.exoplatform.ws.security.oauth.http.OAuthConsumerFilter</filter-class> <init-param> <param-name>consumer</param-name> <param-value>exo1</param-value> </init-param> </filter> <filter> <filter-name>OAuthRequestWrapperFilter</filter-name> <filter-class>org.exoplatform.ws.security.oauth.http.OAuthRequestWrapperFilter</filter-class> </filter> <filter-mapping> <filter-name>OAuthConsumerFilter</filter-name> <url-pattern>/oauth/protected/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>OAuthRequestWrapperFilter</filter-name> <url-pattern>/oauth/protected/*</url-pattern> </filter-mapping>
Any resource can be protected by this mechanism, provided that web.xml is configured to use OAuthConsumerFilter. The client must save the given access token and token secret in cookies.
In the flow diagram:
1. solid lines - the client's redirections
2. fine dashed lines - internal OAuth requests and responses
Stages:
1. green - stage 1 (receiving the request token)
2. yellow - stage 2 (authentication)
3. blue - stage 3 (receiving the access token)
4. red - stage 4 (getting the protected resource)
The lifetime of tokens can be set using the tokenAliveTime parameter.
<value-param>
  <!-- this parameter MUST be set in minutes -->
  <name>tokenAliveTime</name>
  <value>300</value>
</value-param>
There is a special cleaner component on the consumer side; by default it runs every 5 minutes and checks all tokens. If it finds a token whose time has expired, it removes it from the storage. The token cleaner timeout (how often it must run) can also be set in the configuration:
<component>
  <type>org.exoplatform.ws.security.oauth.impl.OAuthTokenCleanerImpl</type>
  <init-params>
    <value-param>
      <!-- this parameter MUST be set in minutes -->
      <name>tokenCleanerTimeout</name>
      <value>3</value>
    </value-param>
  </init-params>
</component>
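The expiry rule that the cleaner applies can be sketched as follows. This is an illustrative JavaScript sketch with hypothetical names (the real OAuthTokenCleanerImpl is a Java component); it only shows the logic of dropping tokens older than tokenAliveTime minutes.

```javascript
// Illustrative sketch of the token-cleaner logic; names are hypothetical.
// A token survives while (now - issuedAt) is below tokenAliveTime minutes.
function removeExpiredTokens(tokens, nowMs, tokenAliveTimeMinutes) {
  const maxAgeMs = tokenAliveTimeMinutes * 60 * 1000;
  // Keep only tokens that have not yet exceeded their allowed lifetime.
  return tokens.filter(t => nowMs - t.issuedAt < maxAgeMs);
}
```

In the real service this check is simply run on a schedule, every tokenCleanerTimeout minutes.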
Comet is a World Wide Web application architecture in which a web server sends data to a client program (normally a web browser) asynchronously without any need for the client to explicitly request it. It allows creation of event-driven web applications, enabling real-time interaction which would otherwise require much more work. Though the term Comet was coined in 2006, the idea is several years older, and has been called various names, including server push, HTTP push, HTTP streaming, Pushlets, Reverse Ajax, and others.
Cometd on wikipedia http://en.wikipedia.org/wiki/Comet_(programming).
It's an implementation of the Bayeux protocol.
Cometd is deployed by default with the portal (trunk version only). It is composed of a service and a webapp.
To get access to Cometd, you need to get a key associated with the login used in the portal:
public String getUserToken(String eXoId)
The connection has to be initialized; to do so, you can use the Cometd widget available by default in the portal.
To send a message from the service to a user:
public void sendMessage(String eXoId, String channel, Object data)
You can find a sample here: http://svn.exoplatform.org/svnroot/exoplatform/projects/portal/trunk/sample/application/cometd/
To get it working:
deploy a trunk version of the portal
go to ~/java/portal/trunk/sample/application/cometd/ and execute exoproject --deploy=module
start the portal
load all the widgets and portlets
in a page, add the Cometd widget and the "CometdDemo" portlet
in the widget, click on "init"
in the CometdDemo portlet, type a message and click on "send"; a popup with your message will appear at the top right and disappear after 3 seconds
What happened?
the portlet (JSR 286) sent an action with the message
the action called the continuation service and sent the message back to the user that had sent it
the service sent the message to the browser
the Cometd client side received the message and dispatched it to the topics
the notification window listened to the topic "/eXo/portal/notification", so the notification window appeared
When we run the Cometd stress test (Jetty implementation) http://docs.codehaus.org/display/JETTY/Stress+Testing+Cometd we see that when many clients are connected, message delivery latency tends to grow. This situation is described in the article "20,000 Reasons Why Comet Scales" http://cometdaily.com/2008/01/07/20000-reasons-that-comet-scales/. So, in order to support many concurrent Cometd connections, we need a means to scale eXo Cometd support horizontally.
1 - Getting the base URL for the Cometd connection (one of the nodes in the Cometd cluster)
2 - Getting the userToken from the Cometd server (it is needed for subscribing to a channel)
3 - Sending a message from the eXo server
LoadBalancer - the component that gives out the base URL of one of the free Cometd servers.
ContinuationServiceDelegate - the component that sends a message from the eXo server to a client via the Cometd server where the client is registered.
RESTContinuationService - the component that receives messages from ContinuationServiceDelegate, delegates them to the ContinuationService, and sends the user the userToken generated by the ContinuationService.
ContinuationService - the component that provides the Cometd connection.
To start working with the Cometd service, the client sends a request with the username to the eXo server and receives in the response the URL of one of the Cometd servers (1 on the scheme). This URL is given out by the LoadBalancer using the information set in the configuration:
<init-params>
  <object-param>
    <name>cometd.lb.configuration</name>
    <description>cometd lb nodes</description>
    <object type="org.exoplatform.ws.frameworks.cometd.loadbalancer.LoadBalancerImpl$LoadBalancerConf">
      <field name="nodes">
        <collection type="java.util.ArrayList">
          <value>
            <object type="org.exoplatform.ws.frameworks.cometd.loadbalancer.Node">
              <field name="id"><string>1</string></field>
              <field name="url"><string>http://localhost:8080</string></field>
              <field name="maxConnection"><int>10</int></field>
            </object>
          </value>
          <value>
            <object type="org.exoplatform.ws.frameworks.cometd.loadbalancer.Node">
              <field name="id"><string>2</string></field>
              <field name="url"><string>http://localhost:8081</string></field>
              <field name="maxConnection"><int>15</int></field>
            </object>
          </value>
        </collection>
      </field>
    </object>
  </object-param>
</init-params>
In the above configuration we described two Cometd cluster nodes. Let's consider one of them:
id is the unique identifier of the node (here 1), url is its base URL (http://localhost:8080) and maxConnection is the maximum number of connected clients. The LoadBalancer chooses a node in the Cometd cluster on which connection is still possible and associates it with the name of the user. Now the client knows which Cometd server it can connect to; next it must request a userToken from that Cometd server (2). Since this requires a request to another domain, the client must use a framework that allows cross-domain AJAX. We provide a framework for this task; its usage is described in the article Framework for cross-domain AJAX. After receiving the userToken, the client can perform the Cometd registration.
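The node-selection rule can be sketched as follows. This is an illustrative JavaScript sketch with hypothetical names (the real LoadBalancerImpl is a Java component configured as shown above); each node mirrors the XML fields id, url and maxConnection, plus a current connection count.

```javascript
// Illustrative sketch of LoadBalancer node selection: pick the first node
// that can still accept connections, or null if the cluster is full.
// (Hypothetical names; the real selection strategy may differ.)
function chooseNode(nodes) {
  return nodes.find(n => n.connections < n.maxConnection) || null;
}
```

The chosen node's url is what the client receives as the base URL for its Cometd connection.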
Example: the client named exo1 wishes to connect to the Cometd service. It makes an HTTP GET request to the eXo server URL "http://localhost:8080/rest/cometdurl/exo1" and receives in the answer the base URL of a Cometd server, which can be a remote or a local one. Assuming we use a cluster, we receive something like "http://192.168.0.21:8080". Next, the client gets the userToken from that Cometd server with a GET request to URL "http://192.168.0.21:8080/rest/gettoken/exo1", after which it performs the standard procedure for Cometd connections.
When it is necessary to send a message to a client, ContinuationServiceDelegate asks the LoadBalancer for the address of the Cometd server on which the client is registered and sends the message in JSON format to that Cometd server, where RESTContinuationService accepts the message and transfers it to the ContinuationService (3).
Example: we want to send a message to the client exo1. ContinuationServiceDelegate requests from the LoadBalancer the information about which server the given user is connected to; we receive the necessary URL (http://192.168.0.22:8081) and make an HTTP POST request to that Cometd server, URL = "http://192.168.0.22:8081/rest/sendprivatemessage/", with the message transferred in JSON format in the body. RESTContinuationService, having received the message, transfers it to the ContinuationService, which in turn delivers it to the client.
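The request the delegate builds can be sketched as follows. This is an illustrative JavaScript sketch: the URL shape comes from the example above, but the JSON field names are hypothetical, not the actual wire format.

```javascript
// Hypothetical sketch of the POST that ContinuationServiceDelegate sends to
// the Cometd server hosting the user. Field names are illustrative only.
function buildPrivateMessage(cometdServerUrl, eXoId, channel, data) {
  return {
    url: cometdServerUrl + '/rest/sendprivatemessage/',
    body: JSON.stringify({ exoId: eXoId, channel: channel, data: data })
  };
}
```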
<configuration>
  <component>
    <type>org.exoplatform.ws.frameworks.cometd.ContinuationService</type>
  </component>
  <component>
    <key>org.mortbay.cometd.continuation.AbstractBayeux</key>
    <type>org.mortbay.cometd.continuation.EXoContinuationBayeux</type>
  </component>
  <component>
    <key>org.exoplatform.ws.frameworks.cometd.transport.ContinuationServiceDelegate</key>
    <type>org.exoplatform.ws.frameworks.cometd.transport.ContinuationServiceRemoteDelegate</type>
  </component>
  <component>
    <type>org.exoplatform.ws.frameworks.cometd.transport.RESTContinuationService</type>
  </component>
  <component>
    <type>org.exoplatform.ws.frameworks.cometd.loadbalancer.RESTLoadBalancerService</type>
  </component>
  <component>
    <key>org.exoplatform.ws.frameworks.cometd.loadbalancer.LoadBalancer</key>
    <type>org.exoplatform.ws.frameworks.cometd.loadbalancer.LoadBalancerImpl</type>
    <init-params>
      <object-param>
        <name>cometd.lb.configuration</name>
        <description>cometd lb nodes</description>
        <object type="org.exoplatform.ws.frameworks.cometd.loadbalancer.LoadBalancerImpl$LoadBalancerConf">
          <field name="nodes">
            <collection type="java.util.ArrayList">
              <value>
                <object type="org.exoplatform.ws.frameworks.cometd.loadbalancer.Node">
                  <field name="id"><string>1</string></field>
                  <field name="url"><string>http://localhost:8080</string></field>
                  <field name="maxConnection"><int>10</int></field>
                </object>
              </value>
              <value>
                <object type="org.exoplatform.ws.frameworks.cometd.loadbalancer.Node">
                  <field name="id"><string>2</string></field>
                  <field name="url"><string>http://localhost:8081</string></field>
                  <field name="maxConnection"><int>15</int></field>
                </object>
              </value>
            </collection>
          </field>
        </object>
      </object-param>
    </init-params>
  </component>
</configuration>
The test: during testing, two servlet containers were run: Tomcat (in the role of the eXo server) and Jetty (the Cometd server). A configuration example:
<configuration clients="12" repeat="1" sleep-connection="500" sleep-sending="200">
  <container containerStart="false" port="8080" home=""/>
  <messages>
    <message broadcast="false" id="1">hello</message>
    <message broadcast="true" id="2">hello!!!</message>
  </messages>
  <cometd-url>http://localhost:8080/cometd/cometd</cometd-url>
  <base-url>http://localhost:8080/rest/</base-url>
  <channels>
    <channel>/eXo/comedt/test</channel>
  </channels>
</configuration>
The configuration describes that we create 12 Cometd connections with sleep-connection = "500" (ms) and subscribe to the channel "/eXo/comedt/test". Then two messages, "hello" and "hello!!!", are sent: the first individually to each of the clients, the second broadcast on the channel. Messages are delivered with an interval of sleep-sending = "200" ms. http://localhost:8080/rest/ is the base URL of the eXo server. To start the test, execute the command:
mvn clean install -f pom-test.xml -Dexo.test.skip=false -Djetty.home = "./target/jetty"
(http://svn.exoplatform.org/projects/ws/branches/1.3/frameworks/cross-domain-ajax/)
XmlHttpRequest objects are bound by the same-origin security policy of browsers, which prevents a page from accessing data from another server. This has put a serious limitation on Ajax developers: you can use XmlHttpRequests to make background calls to a server, but it has to be the same server that served up the current page. For details, you can visit http://www.mozilla.org/projects/security/components/same-origin.html.
Actually writing client web applications that use this object can be tricky, given the restrictions imposed by web browsers on network connections across domains. So we need to find a way to bypass this limitation of AJAX.
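The same-origin rule itself can be sketched as follows: two URLs belong to the same origin only if their protocol, host and port all match. This is a simplified illustration (real browsers apply additional rules, e.g. for default ports), not part of the framework described below.

```javascript
// Simplified sketch of the browser's same-origin comparison:
// protocol, hostname and port must all be equal.
function sameOrigin(a, b) {
  const ua = new URL(a), ub = new URL(b);
  return ua.protocol === ub.protocol &&
         ua.hostname === ub.hostname &&
         ua.port === ub.port;
}
```

This is why a page served from ServerA cannot make an XmlHttpRequest to ServerB directly.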
To describe our method for a cross-domain AJAX solution, let us consider the following scheme, which contains 3 components:
1). User agent (a browser).
2). ServerA contains a main page with dedicated client and server IFRAMEs (see below) and an HTML client page (client.html) referenced from the client IFRAME. This client page contains a dedicated script to push the data for the request into the server IFRAME.
3). ServerB contains the remote service that we want to get access to and an HTML server page (server.html) referenced from the server IFRAME. This server page contains a dedicated script to push the requested data into the client IFRAME.
1) A browser requests the start page from ServerA.
2) The start page is retrieved from ServerA.
3) In the start page, an IFRAME (call it the "client iframe") is created, and the document from ServerA (client.html) is loaded into it.
4) In the "client iframe", a new IFRAME element (the "server iframe") is created, and the document from ServerB (server.html) is loaded into it. The documents (client.html and server.html) contain a special script that can transfer data between iframes.
5) The "client iframe" transfers to the "server iframe" the information about the HTTP method and URL for which we want to make the cross-domain request.
6) The "server iframe" makes a simple XmlHttpRequest to the service that we need (it can do that because it was downloaded from the same domain) and gets the information from the service.
7) The "server iframe" transfers the data to the "client iframe", and now we have the information that we wanted.
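Iframe-to-iframe transfer in such frameworks typically works by serializing the request description into a string that both frames can read (for example via the URL fragment). The following is a generic, illustrative sketch of that serialization step; the actual transport details of xda.js are not shown in this document.

```javascript
// Generic sketch of passing request data between iframes as a serialized
// string (e.g. via the fragment identifier). Illustrative only; xda.js's
// actual mechanism may differ.
function encodeRequest(method, url) {
  return encodeURIComponent(JSON.stringify({ method: method, url: url }));
}
function decodeRequest(fragment) {
  return JSON.parse(decodeURIComponent(fragment));
}
```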
1). Place the files client.html and xda.js on ServerA.
2). Place the file server.html on ServerB.
3). Declare xda.js in the main page:
<script type="text/javascript" src="xda.js"></script>
4). Create a JS function which performs the cross-domain call, as in the following example:
<script type="text/javascript">
  function test() {
    var facade = xdaInit();
    facade.clientURI = "http://localhost/cross-domain-ajax/client/client.html";
    facade.serverURI = "http://localhost:8080/cross-domain-ajax/server/server.html";
    facade.apiURI = "http://localhost:8080/cross-domain-ajax/server/test.txt";
    facade.method = "POST";
    facade.load = function(result) {
      alert(result.responseText);
    }
    facade.setRequestHeader("keep-alive", "200");
    xda.create(facade);
  }
</script>
5). Use this function (here it is bound to a button's onclick event):
<button onclick='test()'>test cross-domain</button>
We simply open many connections; all clients subscribe to one channel. We don't send any messages to the clients; they reconnect every 120000 ms. Settings for the Cometd servlet:
<servlet>
  <servlet-name>cometd</servlet-name>
  <servlet-class>org.mortbay.cometd.continuation.EXoContinuationCometdServlet</servlet-class>
  <init-param>
    <param-name>filters</param-name>
    <param-value>/WEB-INF/filters.json</param-value>
  </init-param>
  <init-param>
    <param-name>timeout</param-name>
    <param-value>120000</param-value>
  </init-param>
  <init-param>
    <param-name>interval</param-name>
    <param-value>0</param-value>
  </init-param>
  <init-param>
    <param-name>multiFrameInterval</param-name>
    <param-value>1500</param-value>
  </init-param>
  <init-param>
    <param-name>logLevel</param-name>
    <param-value>0</param-value>
  </init-param>
  <init-param>
    <param-name>JSONCommented</param-name>
    <param-value>false</param-value>
  </init-param>
  <init-param>
    <param-name>alwaysResumePoll</param-name>
    <param-value>false</param-value> <!-- use true for x-site cometd -->
  </init-param>
  <load-on-startup>1</load-on-startup>
</servlet>
The test was run on our testing server (Tornado):
Processor : Intel(R) Xeon(R) CPU E5345 @ 2.33GHz (8 cores)
RAM : 16 GB
java version "1.5.0_15" (64-bit)
Table 37.1.
Connections | Heap (JProfiler), MB | Virt (top), GB | Res (top), GB |
---|---|---|---|
1000 | 239 | - | - |
2000 | 318 | - | - |
3000 | 450 | 6 | 1.36 |
4000 | 610 | 7.2 | 1.6 |
5000 | 767 | 12.9 | 1.7 |
6000 | 895 | 14.4 | 1.8 |
7000 | 1004 | 15.5 | 2.2 |
8000 | 1116 | 17.3 | 2.6 |
9000 | 1295 | 29.6 | 9.8 |
In the screenshot, the red line shows thousands of connections. When the number of connections reached 9000, the test failed with a connection timeout exception.
We are proud to introduce the JavaScript WebDAV library webdav.js, which is based on AJAX. This library contains the special class Webdav, which is aimed at making all requests supported by the eXo Platform WebDAV server, with their supported parameters.
The proposed library supports asynchronous (the default) and synchronous modes of AJAX request processing. To set one of these modes, call the webdav.setAsynchronous() or webdav.setSynchronous() method.
This library can also perform BASIC HTTP authentication using the values of the preset webdav.username and webdav.password properties.
Scheme of WebDAV data exchange:
Here is the list of implemented methods:
ExtensionMethod(handler, path, options) method - a simple constructor of the user-defined WebDAV request.
1. WebDAV Modifications to HTTP Operations:
GET(handler, path, options) method - retrieves the content of a resource.
PUT(handler, path, options) method - saves the content of a resource to the server.
DELETE(handler, path, options) method - removes a resource or collection.
OPTIONS(handler, path, options) method - returns the HTTP methods that the server supports for specified URL.
MKCOL(handler, path, options) method - creates a collection.
COPY(handler, path, options) method - copies a resource from one URI to another.
MOVE(handler, path, options) method - moves a resource from one URI to another.
HEAD(handler, path, options) method - asks for a response identical to the one that would correspond to a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.
2. WebDAV Property Operations:
PROPFIND(handler, path, options) method - retrieves properties, stored as XML, from a resource. It is also overloaded to allow one to retrieve the collection structure (a.k.a. directory hierarchy) of a remote system.
PROPPATCH(handler, path, options) method - changes and deletes multiple properties on a resource in a single atomic act.
3. WebDAV Lock Operations:
LOCK(handler, path, options) method - puts a lock on a resource.
UNLOCK(handler, path, options) method - removes a lock from a resource.
4. WebDAV Versioning Extension Operations:
VERSIONCONTROL(handler, path, options) method - is used to create a new version-controlled resource for an existing version history. This allows the creation of version-controlled resources for the same version history in multiple workspaces.
CHECKOUT(handler, path, options) method - can be applied to a checked-in version-controlled resource to allow modifications to the content and dead properties of that version-controlled resource.
CHECKIN(handler, path, options) method - can be applied to a checked-out version-controlled resource to produce a new version whose content and dead properties are copied from the checked-out resource.
UNCHECKOUT(handler, path, options) method - can be applied to a checked-out version-controlled resource to cancel the CHECKOUT and restore the pre-CHECKOUT state of the version-controlled resource.
REPORT(handler, path, options) method - an extensible mechanism for obtaining information about a resource.
ORDERPATCH(handler, path, options) method - is used to change the ordering semantics of a collection, to change the order of the collection's members in the ordering, or both.
5. WebDAV SEARCH Operation:
SEARCH(handler, path, options) method - a lightweight search method to transport queries and result sets, allowing clients to make use of server-side search facilities.
If the WebDAV method you are interested in is not in the list above, you can use ExtensionMethod(handler, path, options) to construct the WebDAV request you like with your own:
name of method (passed through parameter options.method)
request headers (passed through parameter options.headers) and
request body (passed through parameter options.body).
The first parameter of each method - handler - is an object {onSuccess, onError, onComplete} which describes three functions to call when the request succeeds, fails, or completes:
handler.onSuccess - will be called if the request succeeds,
handler.onError - will be called if the request fails,
handler.onComplete - will be called when the request completes.
Additionally you can add your own headers to the request by using the special parameter options.
Each method returns an object result which contains the following properties:
result.status - status of XMLHttp response,
result.statusstring - an explanation of the status,
result.headers - an object with a hash of the XMLHttpRequest response's getAllResponseHeaders() (e.g. if the response header is "Content-Type: text/plain" then result.headers['Content-Type'] = 'text/plain'),
result.content - the XMLHttpRequest response.responseXML if result.headers['Content-Type'] contains 'xml', or the XMLHttpRequest response.responseText otherwise.
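A handler/result pair like the one described above could be driven as in the following sketch: onSuccess for 2xx statuses, onError otherwise, and onComplete in both cases. This is an illustrative sketch of the pattern, not the actual dispatch code inside webdav.js.

```javascript
// Illustrative sketch of dispatching a result to a {onSuccess, onError,
// onComplete} handler. The real webdav.js internals may differ.
function dispatch(handler, result) {
  if (result.status >= 200 && result.status < 300) {
    if (handler.onSuccess) handler.onSuccess(result);
  } else {
    if (handler.onError) handler.onError(result);
  }
  if (handler.onComplete) handler.onComplete(result);
}
```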
An example of using the WebDAV library with the eXo Platform WebDAV server:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
  <title>Demonstration of the eXo Platform Client Library of WebDAV</title>
  <script type="text/javascript" src="js/webdav.js"></script>
  <script>
    // Example of using the eXo Platform Client Library of WebDAV

    /**
     * Serialize an XML Document or Element and return it as a string.
     */
    function XMLtoString(node) {
      if (typeof node != 'object') return node;
      if (typeof XMLSerializer != "undefined")
        return (new XMLSerializer()).serializeToString(node);
      else if (node.xml)
        return node.xml;
      else
        throw "XML.serialize is not supported or can't serialize " + node;
    };

    // get and set up the webdav object
    var webdav = new Webdav('localhost', '8080');
    webdav.username = 'root';
    webdav.password = 'exo';

    // define webdav method handlers
    function handler_onSuccess(result) {
      alert('Request SUCCEEDED with status = ' + result.status + ': ' + result.statusstring);
    };
    function handler_onError(result) {
      alert('Request FAILED with status = ' + result.status + ': ' + result.statusstring);
    };
    var handler = {
      onSuccess: handler_onSuccess,
      onError: handler_onError,
      onComplete: MKCOL_handler_onComplete
    }

    // for the eXo Platform webdav
    var default_webdav_path = '/rest/jcr/repository/collaboration';

    // create a collection 'test1'
    webdav.MKCOL(handler, default_webdav_path + '/test1');

    // create a resource 'example.txt' with content 'an example'
    function MKCOL_handler_onComplete(result) {
      var options = {
        content: 'an example',
        content_type: 'text/plain; charset=UTF-8'
      }
      handler.onComplete = PUT_handler_onComplete;
      webdav.PUT(handler, default_webdav_path + '/test1/example.txt', options);
    };

    // put the resource example.txt under version control
    function PUT_handler_onComplete(result) {
      handler.onComplete = VERSIONCONROL_handler_onComplete;
      webdav.VERSIONCONTROL(handler, default_webdav_path + '/test1/example.txt');
    };

    // obtain the 'version-tree' WebDAV report about 'example.txt'
    function VERSIONCONROL_handler_onComplete(result) {
      var options = {
        depth: '0',
        type: 'version-tree'
      };
      handler.onComplete = REPORT_handler_onComplete;
      webdav.REPORT(handler, default_webdav_path + '/test1/example.txt', options);
    };

    // delete the collection 'test1'
    function REPORT_handler_onComplete(result) {
      if (result.content)
        alert('Response of server: ' + XMLtoString(result.content));
      handler.onComplete = function() {};
      webdav.DELETE(handler, default_webdav_path + '/test1/');
    };
  </script>
</head>
<body>
  <h2>Demonstration of the eXo Platform Client Library of WebDAV</h2>
</body>
</html>
You can get this library at http://svn.exoplatform.org/svnroot/exoplatform/projects/ws/branches/2.0.x/frameworks/javascript/webdav/src/main/js
The demonstration page is available at http://svn.exoplatform.org/svnroot/exoplatform/projects/ws/branches/2.0.x/frameworks/javascript/webdav/src/main/index.html
JSDoc is available at http://svn.exoplatform.org/svnroot/exoplatform/projects/ws/branches/2.0.x/frameworks/javascript/webdav/src/main/doc
The documentation will also be available at the http://docs.exoplatform.com.ua site soon.
REST framework based on the JSR-311 specification, including:
Support of JSR-311 annotations
Support of all media types specified in section 4.2.4 "Standard Entity Providers" of the jaxrs-1.0-final specification, with additional support for the 'multipart/*' and 'application/json' media types
WADL generation on receipt of an OPTIONS request
Additional feature: request and response filters. Filters are similar to servlet filters and give the possibility to modify the request and response
Support for RESTful services written in Groovy
OAuth implementation able to use persistent storage for 'tickets'
Use eXo Kernel version 2.1
Use eXo Core version 2.2
Use maven-ant-run-plugin version 1.3 instead of 1.2-SNAPSHOT
Use jcifs version 1.2.19 instead of 1.2.17
Complete list of issues fixed in eXo-Ws Version 2.0. Find details on http://jira.exoplatform.org/secure/ReleaseNote.jspa?version=10432&styleName=Text&projectId=10070&Create=Create
Bug
* [WS-124] - Usage of SNAPSHOT in maven dependencies
* [WS-126] - jcifs 1.2.17 is not available in public maven repo, but 1.2.19 is.
* [WS-127] - wrong groupId for catalina dependency declaration

Improvement
* [WS-69] - oAuth should be able to use with annotation JSR250.
* [WS-74] - Add possibility transfer large data (files) via org.exoplatform.services.rest.impl.provider.MultipartFormDataEntityProvider
* [WS-77] - Check if the class org.exoplatform.ws.rest.ejbconnector30.RestEJBConnector can better handle the Portal Container to be compatible with multiple portal instance
* [WS-78] - Check if the class org.exoplatform.ws.rest.ejbconnector21.RestEJBConnectorBean can better handle the Portal Container to be compatible with multiple portal instance
* [WS-97] - Add support for X-HTTP-Method-Override header
* [WS-98] - Use URI pattern for request and response filters.

Task
* [WS-57] - Refactor eXo REST engine to compatibility with JSR-311.
* [WS-68] - Refactor REST EJB conector according to new implementation of REST engine (JSR-311). See WS-57.
* [WS-75] - Add WADL generation for request with OPTIONS HTTP method.
* [WS-81] - Change cometd service for JSR-311
* [WS-92] - Pre-release testing plan for QA team
* [WS-93] - Move webdav methods to the package org.exoplatform.services.rest.ext.method.webdav
* [WS-94] - More description of classes' purpose in rest.ext sources
* [WS-95] - Remove StandaloneContainerInitializedListener from JCR (one from WS has to be used everywhere), move PortalContainerInitializedFilter from JCR to WS
* [WS-117] - jsr-311 capability tests

Sub-task
* [WS-101] - Create service for administration groovy scripts for REST via HTTP.
* [WS-116] - Merge code to the branches/2.0
Complete list of issues fixed in eXo-Ws Version 2.0.1. Find details on JIRA http://jira.exoplatform.org/secure/ReleaseNote.jspa?version=10631&styleName=Text&projectId=10070&Create=Create.
Bug
* [WS-125] - net.oauth wrong artifact id declared in maven dependencies
* [WS-137] - Error of processing the request with encoded in UTF-8 content by gadgets.io
* [WS-146] - In HierarchicalProperty class missing check QName.getPrefix() for length is 0
* [WS-159] - Need add charSet for Reader and Writer in JSON transformers, problem come in in Windows OS.
* [WS-165] - Incorrect maven dependency on net.oauth:net.oauth.core:20080229
* [WS-166] - Generated files commited in svn
* [WS-177] - Connection problem with NTLM when trying to connect two users in same time
* [WS-183] - Use parent pom 1.1.1 in trunk

Improvement
* [WS-138] - Add supports for javax.ws.rs.core.Application in RESTful framework

New Feature
* [WS-176] - Create filter for URL rewriting
* [WS-182] - Create new simple Deploy Service method with JaxWsProxyFactoryBean

Task
* [WS-168] - Change URL format of groovy services to "/{command}/{repostory}/{workspace}/{path}"
* [WS-170] - oAuth in gadgets. Get protected resource from gadgets.
* [WS-171] - Migrate module configuration to use kernel_1_0.xsd
* [WS-172] - Remove unnecessary configuration from all the JARs' configuration.xml
* [WS-184] - Release WS-2.0.1