In almost all enterprise applications, the database is the primary bottleneck, and the least scalable tier of the runtime environment. People from a PHP/Ruby environment will try to tell you that so-called "shared nothing" architectures scale well. While that may be literally true, I don't know of many interesting multi-user applications which can be implemented with no sharing of resources between different nodes of the cluster. What these silly people are really thinking of is a "share nothing except for the database" architecture. Of course, sharing the database is the primary problem with scaling a multi-user application — so the claim that this architecture is highly scalable is absurd, and tells you a lot about the kind of applications that these folks spend most of their time working on.
Almost anything we can possibly do to share the database less often is worth doing.
This calls for a cache. Well, not just one cache. A well designed Seam application will feature a rich, multi-layered caching strategy that impacts every layer of the application:
The database, of course, has its own cache. This is super-important, but can't scale like a cache in the application tier.
Your ORM solution (Hibernate, or some other JPA implementation) has a second-level cache of data from the database. This is a very powerful capability, but is often misused. In a clustered environment, keeping the data in the cache transactionally consistent across the whole cluster, and with the database, is quite expensive. It makes most sense for data which is shared between many users, and is updated rarely. In traditional stateless architectures, people often try to use the second-level cache for conversational state. This is always bad, and is especially wrong in Seam.
The Seam conversation context is a cache of conversational state. Components you put into the conversation context can hold and cache state relating to the current user interaction.
In particular, the Seam-managed persistence context (or an extended EJB container-managed persistence context associated with a conversation-scoped stateful session bean) acts as a cache of data that has been read in the current conversation. This cache tends to have a pretty high hit rate! Seam optimizes the replication of Seam-managed persistence contexts in a clustered environment, and there is no requirement for transactional consistency with the database (optimistic locking is sufficient), so you don't need to worry too much about the performance implications of this cache, unless you read thousands of objects into a single persistence context.
The application can cache non-transactional state in the Seam application context. State kept in the application context is of course not visible to other nodes in the cluster (a minimal sketch of this pattern appears just after this list).
The application can cache transactional state using the Seam cacheProvider component, which integrates JBoss Cache, JBoss POJO Cache, Infinispan or EHCache into the Seam environment. This state will be visible to other nodes if your cache supports running in a clustered mode.
Finally, Seam lets you cache rendered fragments of a JSF page. Unlike the ORM second-level cache, this cache is not automatically invalidated when data changes, so you need to write application code to perform explicit invalidation, or set appropriate expiration policies.
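As a minimal sketch of the application-context option mentioned above (the component name and the data it holds are purely illustrative, not part of any Seam API beyond the annotations shown):
@Name("supportedLocales")
@Scope(ScopeType.APPLICATION)
public class SupportedLocales
{
    private List<String> locales;

    // Built once per node and kept for the lifetime of the application context;
    // each node in a cluster maintains its own copy independently.
    @Unwrap
    public List<String> getLocales() {
        if (locales == null) {
            // in a real application this might be loaded from the database once
            locales = Arrays.asList("en", "de", "fr");
        }
        return locales;
    }
}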
For more information about the second-level cache, you'll need to refer to the documentation of your ORM solution, since this is an extremely complex topic. In this section we'll discuss the use of caching directly, via the cacheProvider component, or as the page fragment cache, via the <s:cache> control.
The built-in cacheProvider component manages an instance of:
org.infinispan.tree.TreeCache
org.jboss.cache.TreeCache
org.jboss.cache.Cache
org.jboss.cache.aop.PojoCache
net.sf.ehcache.CacheManager
You can safely put any immutable Java object in the cache, and it will be stored in the cache and replicated across the cluster (assuming that replication is supported and enabled). If you want to keep mutable objects in the cache, read the documentation of the underlying caching project to discover how to notify the cache of changes to those objects.
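One common approach, sketched here using the CacheProvider calls shown later in this chapter, is simply to put the object back after changing it so the cache sees the new state; whether that is enough (or even necessary) depends on the underlying cache implementation:
Set<String> userList = (Set<String>) cacheProvider.get("chatroom", "userList");
userList.add(username);
// re-put the mutated object so the underlying cache notices the change and can
// replicate it; check your cache's documentation for its preferred mechanism
cacheProvider.put("chatroom", "userList", userList);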
To use cacheProvider, you need to include the jars of the cache implementation in your project:
For Infinispan:
infinispan-core.jar - Infinispan Core 5.1.x.Final
infinispan-tree.jar - Infinispan TreeCache 5.1.x.Final
jgroups.jar - JGroups 3.0
For JBoss Cache 1.x:
jboss-cache.jar - JBoss Cache 1.4.1
jgroups.jar - JGroups 2.4.1
For JBoss Cache 2.x:
jboss-cache.jar - JBoss Cache 2.2.0
jgroups.jar - JGroups 2.6.2
For JBoss POJO Cache:
jboss-cache.jar - JBoss Cache 1.4.1
jgroups.jar - JGroups 2.4.1
jboss-aop.jar - JBoss AOP 1.5.0
For EHCache:
ehcache.jar - EHCache 1.2.3
If you would like to know more details about Infinispan, look at the Infinispan Documentation page.
For an EAR deployment of Seam, we recommend that the Infinispan jars and configuration go directly into the EAR.
JBoss AS 7 already provides the Infinispan and JGroups jars, so you need to enable those dependencies in your JBoss AS 7 deployment descriptor or add them to the META-INF/MANIFEST.MF of your deployment. See the Blog example or the JBoss AS 7 documentation for how to do that.
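One way to do that, sketched here under the assumption that your AS 7 release uses the 1.1 deployment-structure schema, is a jboss-deployment-structure.xml that pulls in the server's own modules:
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.1">
   <deployment>
      <dependencies>
         <!-- use the Infinispan and JGroups modules shipped with JBoss AS 7 -->
         <module name="org.infinispan"/>
         <module name="org.jgroups"/>
      </dependencies>
   </deployment>
</jboss-deployment-structure>
The equivalent MANIFEST.MF entry is a single line: Dependencies: org.infinispan, org.jgroups.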
You'll also need to provide a configuration file for Infinispan. Place infinispan.xml with an appropriate cache configuration into the web application classpath (e.g. the ejb jar or WEB-INF/classes). Infinispan has many configuration settings, so we won't discuss them here. Please refer to the Infinispan documentation for more information.
You can find a sample configuration file infinispan.xml in examples/blog/blog-web/src/main/resources/infinispan.xml.
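If you just want something to start from before consulting that sample, a rough local (non-clustered) configuration could look like the following; this is an illustrative sketch only, and the exact elements and their ordering depend on your Infinispan release (the tree module that backs the cache provider needs invocation batching enabled):
<infinispan xmlns="urn:infinispan:config:5.1">
   <default>
      <!-- required by the tree module used by the Seam cache provider -->
      <invocationBatching enabled="true"/>
      <!-- entries expire two minutes after they are written -->
      <expiration lifespan="120000"/>
   </default>
</infinispan>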
EHCache will run in its default configuration without a configuration file.
To change which configuration file is used, configure your cache provider in components.xml:
<components xmlns="http://jboss.org/schema/seam/components"
            xmlns:cache="http://jboss.org/schema/seam/cache">
   <cache:infinispan-cache-provider configuration="infinispan.xml" />
</components>
Now you can inject the cache into any Seam component:
@Name("chatroomUsers")
@Scope(ScopeType.STATELESS)
public class ChatroomUsers
{
@In CacheProvider cacheProvider;
@Unwrap
public Set<String> getUsers() throws CacheException {
Set<String> userList = (Set<String>) cacheProvider.get("chatroom", "userList");
if (userList==null) {
userList = new HashSet<String>();
cacheProvider.put("chatroom", "userList", userList);
}
return userList;
}
}
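Because the component is marked @Unwrap, other components that inject it by name see the cached set itself; for instance (the field name here simply matches the component name above):
@In Set<String> chatroomUsers;
A page can likewise reference #{chatroomUsers} directly.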
If you want to have multiple cache configurations in your application, use components.xml to configure multiple cache providers:
<components xmlns="http://jboss.org/schema/seam/components"
            xmlns:cache="http://jboss.org/schema/seam/cache">
   <cache:infinispan-cache-provider name="myCache" configuration="myown/cache.xml"/>
   <cache:infinispan-cache-provider name="myOtherCache" configuration="myother/cache.xml"/>
</components>
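When more than one provider is configured, a component can select one by its component name; a sketch, assuming the provider names defined above:
@In CacheProvider myCache;                     // injected by field name, i.e. the provider named "myCache"
@In("myOtherCache") CacheProvider otherCache;  // or name the provider explicitly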
The most interesting use of caching in Seam is the <s:cache> tag, Seam's solution to the problem of page fragment caching in JSF. <s:cache> uses the cacheProvider component internally, so you need to follow the steps listed above before you can use it. (Put the jars in the EAR, wade through the scary configuration options, etc.)
<s:cache> is used for caching some rendered content which changes rarely. For example, the welcome page of our blog displays the recent blog entries:
<s:cache key="recentEntries-#{blog.id}" region="welcomePageFragments">
   <h:dataTable value="#{blog.recentEntries}" var="blogEntry">
      <h:column>
         <h3>#{blogEntry.title}</h3>
         <div>
            <s:formattedText value="#{blogEntry.body}"/>
         </div>
      </h:column>
   </h:dataTable>
</s:cache>
The key lets you have multiple cached versions of each page fragment. In this case, there is one cached version per blog. The region determines the cache or region node that all versions will be stored in. Different nodes may have different expiry policies. (That's the stuff you set up using the aforementioned scary configuration options.)
Of course, the big problem with <s:cache> is that it is too stupid to know when the underlying data changes (for example, when the blogger posts a new entry). So you need to evict the cached fragment manually:
public void post() {
   ...
   entityManager.persist(blogEntry);
   cacheProvider.remove("welcomePageFragments", "recentEntries-" + blog.getId());
}
Alternatively, if it is not critical that changes are immediately visible to the user, you could set a short expiry time on the cache node.