Wednesday, May 19, 2010

Contexts and Dependency Injection for Java EE (CDI) - Part 4


This series of articles introduces Contexts and Dependency Injection for Java EE (CDI), a key part of the Java EE 6 platform. Standardized via JSR 299, CDI is the de-facto API for comprehensive next-generation type-safe dependency injection as well as robust context management for Java EE. Led by Gavin King, JSR 299 aims to synthesize the best-of-breed features from solutions like Seam, Guice and Spring while adding many useful innovations of its own.

In the previous articles in the series, we discussed basic dependency injection, scoping, producers/disposers, component naming, interceptors, decorators, stereotypes and events. In this article we will discuss CDI Conversations in detail. In future articles, we will cover details of using CDI with JSF, portable extensions, available implementations as well as CDI alignment with Seam, Spring and Guice. We will augment the discussion with a few implementation details using CanDI, Caucho’s independent implementation of JSR 299 included in the open source Resin application server.
The Concept of Conversations
In the first article of the series, we discussed conversations very briefly from the perspective of CDI scopes in general. The conversation scope is an idea that comes from Seam and deserves a detailed look, especially for understanding how to use it with JSF.
Most server-side Java developers are very familiar with the request and session scopes. That is likely how you started web application state management, probably using the programmatic APIs defined in the Servlet specification. As a result, it is pretty obvious how the CDI request and session scopes are used with JSF through @RequestScoped and @SessionScoped beans (although we will still discuss this in some detail in the next article in the series). The most typical use of a @RequestScoped bean is as a JSF backing bean for a single page handling one HTTP request/response cycle. The vast majority of CDI beans you will use with JSF will likely belong to the request scope. In a similar vein, @SessionScoped beans are used for objects that are used throughout the HTTP session. Examples of this include user login credentials, account details and so on.
There is a relatively large class of web application use-cases that fall between these two logical extremes. There are some presentation tier objects that can be used across more than one page/request but are clearly not used across the entire session. Such objects are usually used in multi-step workflows. Unlike session scoped objects that are usually timed-out, conversation scoped objects have well-defined life-cycle start and end-points that can be determined ahead of time. A good example use-case for the conversation scope is an online quiz or questionnaire (both of which I’m sure we’ve all encountered much more frequently than we would like). While such use-cases are often implemented as part of the HTTP session, there really is no good reason to do so since the life-cycle of a quiz or questionnaire can be pretty well-defined and they are not needed across the entire session. The life-cycle of a quiz/questionnaire would begin with the first question. As part of the application workflow, the user will progress through questions. The user may also go back and forth through responses an arbitrary number of times. Finally, the quiz/questionnaire would end its life-cycle after the user finalizes their responses. Infrequently, the user will simply abandon the quiz/questionnaire, so conversation scoped objects do still need a timeout mechanism in case the event that defines the end of the workflow never happens. An order process that translates a shopping cart into a finalized checkout is another good candidate for the conversation scope since it will typically be a multi-step wizard.
Another way to think about the conversation scope is that it is a truncated custom session with the developer programmatically determining where the scope begins and ends. The concept of conversations will become even clearer as we look at a concrete example with code.
A Conversational Example
To borrow a convenient example from EJB 3 in Action, the bidder account creation wizard in the eBay-like ActionBazaar makes another great candidate use-case for the conversation scope. As shown in Figure 1, the wizard is composed of four steps. In the first step, the user enters login information such as a username, password, password confirmation, secret question/answer, etc. When the user clicks the “Next” button, the information in the first step is saved and the user is taken to the second step. In the second step, user details such as the first name, last name, address, email and contact information are collected and saved. The wizard also allows the user to backtrack to the previous step and change the previously entered information.

Similarly, the third step collects user preferences such as notification and display preferences. The final step of the wizard confirms all the collected bidder account information before actually creating the account. The first step of the wizard starts the conversation while the conversation scope should end when the account is created in the last step of the wizard.
The entire account creation wizard can be implemented through a single conversation scoped bean. Before we look at how the bean is implemented, let’s take a brief look at the relatively unsophisticated JSF Facelets for the wizard. Figure 2 shows the JSF code that corresponds to each of the pages in Figure 1.

The enter_login.jsf page implements the first page of the wizard (in the case of Facelets, the actual source code file name is likely enter_login.xhtml). The login input is bound to a bean named login while the “Next” button is bound to accountCreator.saveLogin. As we will see shortly, the accountCreator bean is a conversation scoped bean that models the workflow. The login bean, on the other hand, is a simple data holder backing bean produced by the accountCreator bean. The second page in the workflow, enter_user.jsf, similarly uses a produced backing bean named user and the “Next” button is bound to accountCreator.saveUser. The “Previous” button maps directly to the first page in the wizard. The enter_preferences.jsf page is implemented in a very similar fashion. The final page in the wizard, confirm_account.jsf, displays the values collected by the wizard during the conversation and also binds the event handler that triggers the actual creation of the account and the end of the workflow, accountCreator.createAccount.
Let’s now take a close look at the conversation scoped bean that implements the wizard:


@Named @ConversationScoped
public class AccountCreator {
  @Inject
  private AccountService accountService;

  @Inject
  private Conversation conversation;

  @Produces @Named
  public Login login = new Login();

  @Produces @Named
  public User user = new User();

  @Produces @Named
  public Preferences preferences = new Preferences();

  public String saveLogin() {
    conversation.begin();
    return "enter_user.jsf";
  }

  public String saveUser() {
    return "enter_preferences.jsf";
  }

  public String savePreferences() {
    return "confirm_account.jsf";
  }

  public String createAccount() {
    Account account = new Account();
    account.setLogin(login);
    account.setUser(user);
    account.setPreferences(preferences);
    accountService.createAccount(account);
    conversation.end();
    return "/home.jsf";
  }
}


The bean is annotated to be both @Named so that it can be referenced from EL as well as @ConversationScoped. The login, user and preferences fields produce named backing beans for use in the wizard pages. Because these beans are produced by the conversational bean, they are available throughout the conversation as well as to the parent bean holding the bean instances. It is very important to note that a handle to the Conversation itself is injected into the bean. As we will discuss in greater detail both in this section and the next, the Conversation interface allows programmatic control over the life-cycle of the conversation scope. This interface is basically what allows you to “custom-fit” the conversation to your application. A back-end service to actually create the account is also injected and is presumably implemented as a transactional stateless session bean. The JSF event listener methods in AccountCreator are actually where most of the interesting stuff is going on. The saveLogin method is called on the first page of the wizard and actually starts the long-running conversation. To understand what that means, you’ll have to know about the types of conversations in CDI.
CDI has two different types of conversations, transient and long-running. By default, when you annotate a bean with @ConversationScoped, it is assumed to be transient. A transient conversation ends when the request that originated the conversation ends. This is a sensible fail-safe in case a conversational bean does not really need to be extended beyond the request. Any transient conversation can be turned into a long-running conversation on demand. Unlike a transient conversation, a long-running conversation extends beyond the scope of a request, potentially as long as the whole application session. A transient conversation is turned into a long-running conversation by invoking the Conversation.begin method, as is done in the saveLogin method. If the saveLogin method is not invoked, for example if the user abandoned the wizard at the first step, the bean will never be put into a long-running conversation and will simply be disposed of at the end of the request as part of the transient conversation. Besides starting the long running conversation, a number of other things can possibly be done in the saveLogin method, including perhaps validating that the username does not already exist, the password matches the confirmed password or that the password meets security guidelines. The saveLogin method also moves the wizard to the next page by returning the “enter_user.jsf” URL as the outcome for the event.
No specific manipulation of the conversation is done in the next page of the wizard, except for binding more input values to the user produced bean and the saveUser event handler likely only does some form-level validation before forwarding to the enter_preferences.jsf page. The savePreferences event handler is similarly simplistic. The event handler method for the final page in the wizard, createAccount, does a number of interesting things, however. It actually creates the final Account object with all the input collected into the produced fields throughout the conversation and invokes the back-end service to save the newly created account into the database. It then invokes the Conversation.end method. As you can guess, the Conversation.end method ends the long running conversation. This does not mean however that the conversation is immediately destroyed; it simply means that the conversation is “demoted” to becoming transient again. This allows for the conversational bean to be destroyed when the request ends and the user is moved out of the wizard into the “/home.jsf” URL.
An interesting question to think about is what happens if the user abandons the wizard in the middle after the long-running conversation is started. The bean will of course not be destroyed when a request ends. Like sessions, long-running conversations have an implicit timeout. When this timeout value expires, the conversation is destroyed. This timeout value is typically shorter than a session timeout. An astute reader might also wonder what happens if the saveLogin method (and therefore the Conversation.begin) is invoked twice during the same conversation, or if the Conversation.end method is invoked while the conversation is still transient. In most real applications, the begin and end methods should always be invoked after checking the current state of the conversation using the other methods in the Conversation interface described in detail below.
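The state checks described above can be sketched as a small helper. Here, Conversation is a stand-in interface mirroring javax.enterprise.context.Conversation so the snippet compiles without a CDI runtime, and ConversationGuard is a hypothetical name, not part of the CDI API:

```java
// Stand-in for javax.enterprise.context.Conversation (hypothetical
// simplification so this sketch compiles outside a container).
interface Conversation {
    void begin();
    void end();
    boolean isTransient();
}

// Defensive wrapper: only begin/end after checking conversation state,
// avoiding the IllegalStateException described above.
class ConversationGuard {
    private final Conversation conversation;

    ConversationGuard(Conversation conversation) {
        this.conversation = conversation;
    }

    // Promote to long-running only if the conversation is still transient.
    void beginIfTransient() {
        if (conversation.isTransient()) {
            conversation.begin();
        }
    }

    // Demote to transient only if the conversation is long-running.
    void endIfLongRunning() {
        if (!conversation.isTransient()) {
            conversation.end();
        }
    }
}
```

In a real bean, the injected Conversation instance would be passed to such a guard (or the checks simply inlined in the event handlers).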

Programmatic vs. Declarative Conversations
It is an interesting question to ask whether the programmatic model of injecting the conversation and calling the begin and end methods could be converted to a declarative equivalent. For example, instead of injecting the conversation, we could have used something like @Begin and @End on the saveLogin and createAccount methods.
While this is not currently supported in CDI, it is the model that was supported in previous versions of Seam and will likely still be supported in Seam 3. If you believe it is useful, this is something we can support in Resin 4 as well.

More on Conversations
In order to make effective use of the conversation scope, it is helpful to understand a little of how it is implemented under the hood. CDI keeps track of a long-running conversation by propagating an HTTP request parameter named cid (reserved by the specification) across the requests that are part of a workflow. The ID is created when the conversation starts, and the cid that is passed from request to request is mapped to the correct conversation context at runtime on subsequent requests in the workflow. When CDI cannot find a cid in the request, it assumes that a new conversation should be started. The ID itself is usually generated automatically, but you can also assign it manually and retrieve it when needed (see the table below). As a matter of fact, the cid can even be propagated back and forth between JSF and non-JSF pages (such as Servlets that are aware of CDI).
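To illustrate, propagating the conversation to a non-JSF page boils down to appending the reserved cid parameter to the target URL. The sketch below assumes a hypothetical helper named buildConversationUrl; it is not part of the CDI API:

```java
// Sketch: appending the reserved "cid" parameter to a URL so a non-JSF
// page (e.g. a Servlet) can join the current long-running conversation.
class ConversationUrls {
    static String buildConversationUrl(String baseUrl, String conversationId) {
        if (conversationId == null) {
            return baseUrl; // transient conversation: nothing to propagate
        }
        // Use '&' if the URL already has a query string, '?' otherwise.
        return baseUrl + (baseUrl.contains("?") ? "&" : "?")
            + "cid=" + conversationId;
    }
}
```

The conversationId argument would come from Conversation.getId(), described in the table below.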
Below are all the methods that are supported by the Conversation interface:

void begin() This method promotes the conversation to long-running. If the conversation is already long-running, an IllegalStateException is thrown. When the conversation is promoted, CDI automatically generates a unique ID and assigns it to the conversation – this is the ID that is used in the cid parameter.
void begin(String id) This variant of the begin method allows you to provide an application-defined ID for the conversation. You may want to do this, for example, if you want to track the conversations for your application by assigning some custom meaning to the ID.
void end() This method demotes a long-running conversation to become transient. If the conversation is not long-running, an IllegalStateException is thrown.
String getId() This method returns the identifier of the current long-running conversation, or null if the current conversation is transient. The method can be used to send the conversation ID to a non-JSF page by embedding it into a URL or a hidden form value, for example.
long getTimeout() This method returns the timeout, in milliseconds, of the current conversation.
void setTimeout(long milliseconds) This method sets the timeout, in milliseconds, of the current conversation. The method could be used as a performance tuning measure or for customizing application behavior (for example, setting the idle time for an on-line quiz).
boolean isTransient() This method indicates whether the current conversation is transient. It should be used to check conversation state before invoking the begin and end methods.

It is also important to understand that you can start multiple conversations in the same session. For example, you could open two instances of the enter_login.jsf page in separate tabs. Two different conversations with two different conversation IDs would result. Conversations are thread-safe, meaning that even if you started two concurrent requests, two different conversations would still result.
For further details on the conversation scope such as conversation passivation, feel free to look through the CDI specification (or the Weld reference guide referenced below).
More to Come
Although we have discussed CDI’s interaction with JSF in somewhat greater detail in this article of the series, JSF’s interaction with CDI deserves focused coverage. In the next article of the series, we will focus solely on CDI from a JSF developer’s perspective including CDI’s interaction with JSF using EL binding, scoping, producers, qualifiers, events and the like.
In the meanwhile, for comments on CDI, you are welcome to send an email to You can also send general comments on Java EE 6 to For comments on the article series, Resin or CanDI, our JSR 299 implementation, feel free to email us at or Cheers until next time!
1.      JSR 299: Contexts and Dependency Injection for Java EE,
2.      JSR 299 Specification Final Release,
3.      Weld, the JBoss reference implementation for JSR 299:
4.      Weld Reference Guide,
5.      CanDI, the JSR 299 implementation for Caucho Resin,
6.      OpenWebBeans, Apache implementation of JSR 299,
About the Authors
Reza Rahman is a Resin team member focusing on its EJB 3.1 Lite container. Reza is the author of EJB 3 in Action from Manning Publishing and is an independent member of the Java EE 6 and EJB 3.1 expert groups. He is a frequent speaker at seminars, conferences and Java user groups, including JavaOne and TSSJS.
Scott Ferguson is the chief architect of Resin and President of Caucho Technology. Scott is a member of the JSR 299 EG. Besides creating Resin and Hessian, his work includes leading JavaSoft’s WebTop server as well as creating Java servers for NFS, DHCP and DNS. He lead performance for Sun Web Server 1.0, the fastest web server on Solaris.

Contexts and Dependency Injection for Java EE (CDI) - Part 3


This series of articles introduces Contexts and Dependency Injection for Java EE (CDI), a key part of the Java EE 6 platform. Standardized via JSR 299, CDI is the de-facto API for comprehensive next-generation type-safe dependency injection as well as robust context management for Java EE. Led by Gavin King, JSR 299 aims to synthesize the best-of-breed features from solutions like Seam, Guice and Spring while adding many useful innovations of its own.
In the previous articles in the series, we took a high-level look at CDI, discussed basic dependency management, scoping, producers/disposers, component naming and dynamically looking up beans. In this article we will discuss interceptors, decorators, stereotypes and events. In the course of the series, we will cover conversations, CDI interaction with JSF, portable extensions, available implementations as well as CDI alignment with Seam, Spring and Guice. We will augment the discussion with a few implementation details using CanDI, Caucho’s independent implementation of JSR 299 included in the open source Resin application server.
Cross-cutting concerns with CDI interceptors
Besides business logic, it is occasionally necessary to implement system level concerns that are repeated across blocks of code. Examples of this kind of code include logging, auditing, profiling and so on. This type of code is generally termed “cross-cutting concerns” (although subject to much analysis as a complement to object orientation, these types of concerns really don’t occur that often in practice). CDI interceptors allow you to isolate cross-cutting concerns in a very concise, type-safe and intuitive way.   
The best way to understand how this works is through a simple example. Here is some CDI interceptor code to apply basic auditing at the EJB service layer:
@Stateless
public class BidService {
    @Inject
    private BidDao bidDao;

    @Audited
    public void addBid(Bid bid) {
        bidDao.addBid(bid);
    }
}

@Audited @Interceptor
public class AuditInterceptor {
    @AroundInvoke
    public Object audit(InvocationContext context) throws Exception {
        System.out.print("Invoking: "
            + context.getMethod().getName());
        System.out.println(" with arguments: "
            + context.getParameters());
        return context.proceed();
    }
}

@InterceptorBinding
@Retention(RUNTIME)
@Target({TYPE, METHOD})
public @interface Audited {}
Whenever the addBid method annotated with @Audited is invoked, the audit interceptor is triggered and the audit method is executed. The @Audited annotation acts as the logical link between the interceptor and the bid service. @InterceptorBinding on the annotation definition is used to declare the fact that @Audited is such a logical link. On the interceptor side, the binding annotation (@Audited in this case) is placed with the @Interceptor annotation to complete the binding chain. In other words, the @Audited and @Interceptor annotations placed on AuditInterceptor mean that the @Audited annotation placed on a component or method binds it to the interceptor.
Note that a single interceptor can have more than one associated interceptor binding. Depending on the interceptor binding definition, a binding can be applied either at the method or class level. When a binding is applied at the class level, the associated interceptor is invoked for all methods of the class. For example, the @Audited annotation can be applied at the class or method level, as denoted by @Target({TYPE, METHOD}). Although in the example we chose to put @Audited at the method level, we could have easily applied it on the bid service class instead.
We encourage you to check out the CDI specification for more details on interceptors including disabling/enabling interceptors and interceptor ordering (alternatively, feel free to check out the Weld reference guide that’s a little more reader-friendly).
Custom vs. Built-in Interceptors
EJB declarative transaction annotations like @TransactionAttribute and declarative security annotations like @RolesAllowed, @RunAs can be thought of as interceptor bindings built into the container. In fact, this is not too far from exactly how things are implemented in Resin.
In addition to the EJB service annotations, we could add a number of other built-in interceptors for common application use-cases in Resin including @Logged, @Pooled, @Clustered, @Monitored, etc. Would this be useful to you?

Isolating pseudo business concerns with CDI decorators
Interceptors are ideal for isolating system-level cross-cutting concerns that are not specific to business logic. However, there is a class of cross-cutting logic that is closely related to business logic. In such cases, you will have logic that really should be externalized from the main line of business logic but is still very specific to the interception target type, method or parameter values. CDI decorators are intended for such use cases. Like interceptors, decorators are very concise, type-safe and pretty natural.
As in the case with interceptors, the best way to understand how decorators work is through a simple example. We’ll use the convenient bid service example again. Let’s assume that the bid service is used in multiple locales. For each locale bid monetary amounts are entered and displayed in the currency specific to the locale. However, the bid amounts are internally stored using a standardized currency (such as maybe the Euro or the U.S. Dollar). This means that bid amounts must be converted to/from the locale specific currency, likely at the service tier. Because the currency conversion code is not strictly business logic, it should really be externalized, but it is very specific to bid operations. This is a good use case for decorators, as shown in the code below:
public class DefaultBidService implements BidService {
    public void addBid(Bid bid) {
        ...
    }
}

@Decorator
public class BidServiceDecorator implements BidService {
    @Inject @Delegate
    private BidService bidService;

    @Inject @CurrentLocale
    private Locale locale;

    @Inject
    private Converter converter;

    public void addBid(Bid bid) {
        bidService.addBid(converter.convert(bid,
            locale.getCurrency(), Converter.STANDARDIZED_CURRENCY));
    }
}

As you can see from the code example, the currency conversion logic is isolated in the decorator annotated with the @Decorator annotation. The decorator is automatically attached and invoked before the interception target by CDI (just as in the case of interceptors). A decorator cannot be injected directly into a bean but is only used for the purposes of interception. The actual interception target is injected into the decorator using the @Delegate built-in qualifier. As you can also see, decorators can utilize normal bean injection semantics. If the Decorator/Delegate terminology sounds familiar, it is not an accident. CDI Decorators and Delegates essentially implement the well-known Decorator and Delegate OO design patterns. You can use qualifiers with @Delegate to narrow down which class a decorator is applied to like this:
@Decorator
public class BidServiceDecorator implements BidService {
    @Inject @Legacy @Delegate
    private BidService bidService;
    ...
}

The CDI specification (or the Weld reference guide) has more details on decorators including disabling/enabling decorators and decorator ordering.
Custom component models with CDI stereotypes
CDI stereotypes essentially allow you to define your own custom component model by grouping together meta-data. This is a very powerful way of formalizing the recurring bean roles that often arise as a result of application architectural patterns. For example, in a tiered server-side application, you can imagine component definitions for the service, DAO or presentation-tier model (the ‘M’ in MVC). A stereotype consists of a default component scope and one or more interceptor bindings. A stereotype may also indicate that a bean will have a default name (essentially indirectly decorating it with @Named) or that a bean is an alternative (indirectly decorated with @Alternative). A stereotype may also include other stereotypes.
An alternative is anything marked with the @Alternative annotation. Unlike regular beans, an alternative must be explicitly enabled in beans.xml. Alternatives are useful as mock objects in unit tests as well as deployment-specific components. Alternatives take precedence over regular beans for injection when they are available.
We won’t discuss alternatives beyond this here, but we encourage you to explore them on your own.
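As a brief sketch of what enabling an alternative looks like, a hypothetical mock implementation of the bid service (MockBidService is an assumed class name) would be listed in beans.xml like this:

```xml
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
           http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
    <alternatives>
        <class>com.actionbazaar.mock.MockBidService</class>
    </alternatives>
</beans>
```

Without this entry, the @Alternative bean is simply ignored and the regular bean is injected instead.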

A stereotype defined for the DAO layer in our example bidding application could look like the following:
@Stereotype @Dependent @Profiled
@Retention(RUNTIME) @Target(TYPE)
public @interface Dao {}

As you can see, the @Stereotype annotation denotes a stereotype. Our stereotype is declared to have the dependent scope by default. This makes sense since DAOs are likely injected into EJBs in the service tier. The interceptor binding @Profiled is also included in the stereotype. This means that any bean annotated with the @Dao stereotype may be profiled for performance via an interceptor bound to @Profiled. The stereotype would be applied to a DAO like this:
@Dao
public class DefaultBidDao implements BidDao {
    @PersistenceContext
    private EntityManager entityManager;
    ...
}

To solidify the idea of stereotypes a little more, let’s take a look at another example. CDI actually has a built-in stereotype - @Model. Here is how it is defined:
@Named @RequestScoped @Stereotype
@Retention(RUNTIME) @Target({TYPE, METHOD, FIELD})
public @interface Model {}
The @Model annotation is intended for beans used as JSF model components. Such beans are request scoped by default, so they are bound to the life-cycle of a page, and they are named so that they can be resolved from EL. This is how @Model might be applied:
@Model
public class Login { ... }
Note it is possible to override the default scope of a stereotype. For example, you can turn the Login bean into a session scoped component like this:
@SessionScoped @Model
public class Login { ... }

It is also possible to place more than one stereotype on a given class, as well as apply additional interceptors, decorators, etc. As we mentioned earlier, stereotypes can also be cumulative, meaning that a stereotype can include other stereotypes in its definition.

The EJB Component Model as Stereotypes
It is an interesting question to ask whether the EJB component model (@Stateless, @Stateful, etc) can be modeled simply as a set of highly specialized stereotypes for the business/service tier. This is a logical next step from redefining EJBs as managed beans with additional services as was done in Java EE 6 and could open up some very powerful possibilities for the Java EE component model going forward.
This is one possibility we are actively exploring for the Resin EJB 3.1 Lite container.

Lightweight type-safe events with CDI
Events are useful whenever you need to loosely couple one or more invokers from one or more invocation targets. In enterprise applications events can be used to communicate between logically separated tiers, synchronize application state across loosely related components or to serve as application extension points (think about Servlet context listeners, for example). Naturally CDI events are lightweight, type-safe, concise and intuitive. Let’s look at this via a brief example to see how events in CDI work.
Let’s assume that various components in the bidding system can detect and generate fraud alerts. Similarly, various components in the system need to know about and process the fraud alerts. CDI events are a perfect fit for such a scenario because the producers and consumers are so decoupled in this case. The code to generate a fraud alert event would look like this:
@Inject
private Event<Fraud> fraudEvent;
...
Fraud fraud = new Fraud();
// populate the fraud object as needed
fraudEvent.fire(fraud);

CDI events are triggered using injected Event objects. The generic type of the Event is the actual event being generated. As the Fraud object illustrates, events are simple Java classes. In our example, we would construct the fraud object and populate it as needed. As you can see, events are triggered by invoking the fire method of Event. When the event is triggered, CDI looks for any matching observer methods that are listening for the event and invokes them, passing in the event as an argument. Here is what an observer method for our fraud alert would look like:
public void processFraud(@Observes Fraud fraud) { ... }
An observer method is simply a method that has a parameter annotated with the @Observes annotation. The type of the annotated parameter must match the event being triggered. The name Observer mirrors the Observer OO design pattern. You can use qualifiers to filter observed events as needed. For example, if we were only interested in seller fraud, we could place a qualifier on the observer method like this:
public void processSellerFraud(@Observes @Seller Fraud fraud) { ... }
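For reference, @Seller would be an ordinary qualifier annotation. The sketch below omits CDI's @javax.inject.Qualifier meta-annotation (which a real qualifier must carry) so that it compiles without the CDI API on the classpath:

```java
import java.lang.annotation.*;
import static java.lang.annotation.ElementType.*;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

// Sketch of a qualifier annotation; a real CDI qualifier would also be
// annotated with @javax.inject.Qualifier (omitted here, see lead-in).
@Retention(RUNTIME)
@Target({TYPE, METHOD, FIELD, PARAMETER})
@interface Seller {}
```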
On the producer side, there are a couple of ways to attach qualifiers to triggered events. The simplest (and most common) way is to declaratively place a qualifier on the injected event like this:
@Inject @Seller
private Event<Fraud> sellerFraudEvent;
It is also possible to attach qualifiers programmatically using the select method of Event like this:

if (sellerFraud) {
    fraudEvent.select(new AnnotationLiteral<Seller>() {}).fire(fraud);
}

There is a lot more to events than this, such as injecting parameters into observer methods, transactional observers and the like, that you should investigate on your own.

Events and Messages
There are a lot of obvious parallels between events and traditional messaging with JMS. This is one of the avenues we are exploring further in Resin to see if these models could be merged in a simple and intuitive way – namely, whether the Event object can be used to send JMS messages and/or whether @Observes could listen for JMS messages.

More to come
In the next part of the series we will be focusing on CDI as it relates to JSF developers at the presentation tier (many of you have expressed specific interest in this topic). We will cover using the new conversation scope as well as CDI’s interaction with JSF using EL binding, scoping, producers, qualifiers and the like.
In the meanwhile, for comments on CDI, you are welcome to send an email to You can also send general comments on Java EE 6 to For comments on the article series, Resin or CanDI, our JSR 299 implementation, feel free to email us at or Adios Amigos!
1.      JSR 299: Contexts and Dependency Injection for Java EE
2.      JSR 299 Specification Final Release
3.      Weld, the JBoss reference implementation for JSR 299
4.      Weld Reference Guide
5.      CanDI, the JSR 299 implementation for Caucho Resin
6.      OpenWebBeans, Apache implementation of JSR 299
About the Authors
Reza Rahman is a Resin team member focusing on its EJB 3.1 Lite container. Reza is the author of EJB 3 in Action from Manning Publishing and is an independent member of the Java EE 6 and EJB 3.1 expert groups. He is a frequent speaker at seminars, conferences and Java user groups, including JavaOne and TSSJS.
Scott Ferguson is the chief architect of Resin and President of Caucho Technology. Scott is a member of the JSR 299 EG. Besides creating Resin and Hessian, his work includes leading JavaSoft's WebTop server as well as creating Java servers for NFS, DHCP and DNS. He led performance for Sun Web Server 1.0, the fastest web server on Solaris.

Tuesday, May 18, 2010

Memory leaks where the classloader cannot be garbage collected

Short description of the problem

When an application is loaded in a container environment, it gets its own ClassLoader. In Tomcat this will be a WebAppClassLoader. When the application is undeployed, the container in theory drops all references to that class loader, and expects normal garbage collection to take care of the rest. However, if any object that is loaded by the system or container classloader (StandardClassLoader in Tomcat) still has a reference to the application class loader or any object loaded by it, the class loader will not be garbage collected. In that case, all the class objects and every object referenced by a static field in any of the classes will not be garbage collected.
I've run into this problem in the context of a Spring/Hibernate application running on Tomcat, but it's really general to any situation where you have classloaders with a lifecycle.


There are two main patterns that cause this situation. The first one is where a library loaded by the container has a cache that keeps strong references to objects or classes in the application. The second one is where the application has ThreadLocal data that is attached to a container thread. In Tomcat this thread will be part of the thread pool, so it will never be garbage collected.

Known offenders

Bean introspection
The bean introspection code keeps a cache with strong references, but this can easily be cleared with a call to java.beans.Introspector.flushCaches(). There is a listener in the Spring library that will take care of this: org.springframework.web.util.IntrospectorCleanupListener.
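If you are not using Spring, the same cleanup can be done by hand at shutdown. A minimal sketch using only the JDK (the class and method names here are our own, for illustration):

```java
import java.beans.BeanInfo;
import java.beans.Introspector;

public class IntrospectorFlushDemo {
    // Introspect a bean class (which populates the Introspector's strong
    // cache), then flush the cache as a webapp should do on shutdown.
    static String introspectAndFlush() throws Exception {
        BeanInfo info = Introspector.getBeanInfo(String.class);
        String name = info.getBeanDescriptor().getBeanClass().getName();
        Introspector.flushCaches(); // releases the strong references
        return name;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("introspected and flushed: " + introspectAndFlush());
    }
}
```

In a webapp you would place the flushCaches() call in a ServletContextListener's contextDestroyed() method, which is exactly what the Spring listener does.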
org.apache.tomcat.util.IntrospectionUtils has a cache of objects and their methods that it has been used against. This is a Hashtable called objectMethods that is never cleaned up. It will be used to inspect any exceptions that your application throws. In this table, the class is the key and the method the value. Note that a WeakHashMap will not help here as the value has a reference to the key. I haven't found any good way of getting around this, short of removing the cache and making your own version of the jar.
There is also a copy of IntrospectionUtils in org.apache.commons.modeler.util. It seems to have the same problem.
Any JDBC driver loaded in the application (from the WEB-INF/lib directory) will be registered in the system-wide DriverManager. It will not be unloaded unless you add a listener similar to the example below (written by Guillaume Poirier).
public class CleanupListener implements ServletContextListener { 
  public void contextInitialized(ServletContextEvent event) { 
  } 
  public void contextDestroyed(ServletContextEvent event) { 
    try { 
      for (Enumeration e = DriverManager.getDrivers(); e.hasMoreElements();) { 
        Driver driver = (Driver) e.nextElement(); 
        if (driver.getClass().getClassLoader() == getClass().getClassLoader()) { 
          DriverManager.deregisterDriver(driver); 
        } 
      } 
    } catch (Throwable e) { 
      System.err.println("Failed to cleanup ClassLoader for webapp"); 
    } 
  } 
}
There used to be some data saved in ThreadLocals by DOM4J, but this has been fixed in a later version; 1.6.1 seems to work fine for me.
Mozilla Rhino
See Bugzilla: ThreadLocal in Context prevents class unloading.
KeepAliveCache keeps HTTP 1.1 persistent connections open for reuse. It launches a thread that will close the connection after a timeout. This thread has a strong reference to your classloader (through a ProtectionDomain). The thread will eventually die, but if you are doing rapid redeployment, it could still be a problem. The only known solution is to add "Connection: close" to your HTTP responses.
If you are using the commons-dbcp connection pooler, you may be activating the idle evictor thread. Because of a bug, this will sometimes keep running even if your application is unloaded. It can be deactivated by calling the setTimeBetweenEvictionRunsMillis(long) method of the org.apache.commons.pool.impl.GenericObjectPool object with a negative parameter.
There is a bug in how Jasper (a JSP compiler) pools taglibs. This has been seen to cause memory leaks when used with hibernate, there is more information here: Bugzilla: The forEach JSTL tag doesn't release items
Sun's -server VM
In some cases (see JDK bug 4957990 below) the VM will not free your class loader even if there are no references to it. Try running with -client.

Suspected offenders

CGLIB used to have some memory leak problems in its proxy code, but they are supposedly fixed in later versions.
If you load commons-logging in your container, it can cause leaks. See Logging/UndeployMemoryLeak for more information. There is also some information about this in the guide: Classloader and Memory Management

How to find your offenders

As for your application, there are two ways to go. Either start with a minimal test application and add stuff until it doesn't unload properly, or start with the complete application and fix offenders until it unloads. Either way, a profiler will be essential. There are several alternatives, free and commercial, listed below.
Unfortunately I haven't found a profiler that will make it trivial to find these problems. In most cases I have to go through all the classes that are loaded by my class loader, and look for references to them by Objects loaded by the container class loader. This is very time-consuming work and can be quite frustrating - it's quite common to see that your application loads several thousand classes.
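As a quick sanity check that doesn't need a profiler, you can apply the same idea in code: hold only a WeakReference to the suspect class loader and see whether it disappears after garbage collection. This is a sketch under the assumption that a throwaway URLClassLoader stands in for the webapp loader; in a real test you would take the WeakReference to the undeployed webapp's loader instead:

```java
import java.lang.ref.WeakReference;

public class LoaderGcCheck {
    // Creates a throwaway class loader, drops the only strong reference to it,
    // and polls the garbage collector to see whether it gets collected.
    static boolean freshLoaderIsCollected() throws InterruptedException {
        ClassLoader loader = new URLClassLoader(new URL[0]);
        WeakReference<ClassLoader> ref = new WeakReference<ClassLoader>(loader);
        loader = null; // what the container does (in theory) on undeploy
        for (int i = 0; i < 50 && ref.get() != null; i++) {
            System.gc();
            Thread.sleep(10);
        }
        return ref.get() == null; // true means nothing pins this loader
    }

    public static void main(String[] args) throws Exception {
        System.out.println("collected: " + freshLoaderIsCollected());
    }
}
```

If the reference never clears, one of the offenders above is still holding on to the loader, and it is time to break out the profiler.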
According to baliukas: If you run the JVM in debug mode, or you use -Xprof, class objects will not be unloaded.
Commercial profiler that gives you a very good graphical representation of your references, but handles any static field in a class object as a garbage collection root.
YourKit Java Profiler
Commercial profiler. In some cases it is able to automatically find references to your ClassLoader. In other cases, your ClassLoader is annotated as a "Other GC" object, whatever that means.
Eclipse Profiler
Free. I've seen this recommended.
Included with the JDK (in 1.5 at least). Gives you some basic information about memory usage etc.
NetBeans Profiler
Free. I've seen this recommended.
HAT is used to parse the binary output from HPROF (a small profiler that is included with the JDK). Attila submitted a patch to make HAT work better for problems with classloaders. Hat can be found here:, and the patch at

Other discussions

Spring forum: Memory Leak
Spring forum: Memory leaks when redeploying web applications
Spring forum: Memory Leak: Spring does not release my objects
Hibernate forum: OutofMemoryError on webapp redeploy (10 pages)
dom4j-Bugs-1070309: ThreadLocal cache
web app using log4j not garbage collected when shutdown with tomcat manager
(commons-logging) j2ee unit tests added: memory leak demonstrated
spring-devel: Further Investigations into OOM Exceptions on Redeploy

Related bugs

ResourceBundle holds ClassLoader references using SoftReference (not weak) (Closed, fixed)
ObjectOutputStream.subclassAudits SoftCache prevents ClassLoader GC (Closed, fixed)
ClassLoaders do not get released by GC, causing OutOfMemory in Perm Space (Closed, fixed)
PermHeap overflow problem in and only in server VM (In progress, bug)
IntrospectionUtils caches application classes (Resolved)
(modeler) IntrospectionUtils memory leak (New)
Memory Leak: Objects are not released during hot deploy (Rejected)


A day in the life of a memory leak hunter
Memory leaks, be gone

Monday, May 17, 2010

Java Security Packages JCA/JCE

In this tutorial, the author explains the cryptography-related concepts and packages in JDK, with code examples. Many of the concepts and technical terms thus learnt will be useful in understanding the Cryptography API in MFC also.

There are three security-related packages in JDK1.4, as follows:

1.  JCA/JCE (Java Cryptography Architecture & Java Cryptography Extensions)

2.  JSSE (Java Secure Sockets Extension)

3.  JAAS (Java Authentication & Authorization Service)

(Prior to JDK1.4, many of these packages were not available within the JDK and had to be separately installed and used. But JDK1.4 has incorporated all of these within the JDK itself.)

Understanding the terminology of these important packages requires some familiarity with the technical terms used in the field of Network Security. We can begin by saying that secure communication should ensure the following.
  •  Integrity
  •  Confidentiality
  •  Authentication
  •  Non-repudiation
[There is also another requirement, i.e. Authorization, and it is more about protecting resources and programs from users than about communicating data. JAAS deals with that.]

These are all standard terms used in Security. When a person, say Sam, wants to send some information to Tom, it must be ensured that the information thus sent is not tampered with or altered on the way. This is known as Data Integrity.

Secondly, the information is meant only for Tom, so no one else should be able to understand the message. This is known as Confidentiality. There should be some indication that the message came from Sam, and there should be some proof of it. This is Identification. Authentication, the proof that the message really came from Sam, is provided by the Digital Signature. There should preferably be a trusted third party to vouch for the identity and signature of Sam. This is achieved by the Digital Certificate, which authenticates the signature of Sam. Besides these, sometimes it is equally important that Sam should not be able to say later that he did not send the message to Tom and that it was actually sent by someone else in his name. Ensuring this is Non-repudiation, and this purpose too is served by the Digital Signature. We will now see a step-by-step development of these concepts. Except for 'Authorization', these things can be understood in the context of everyday exchange of information.

Confidentiality is achieved by using Cryptography techniques. For the sake of simple illustration, let us assume that Sam wants to send a message to Tom. (By convention, the two persons Alice & Bob are chosen for illustration, because the original thesis made use of these names. Let us use Sam and Tom instead.) Sam does not want his message to be understood by anybody except Tom, so he encrypts it. When Tom receives the encrypted message, he 'decrypts' it, so that he can read the original message. The original message is known as 'plaintext'. After Encryption, it becomes 'ciphertext'. The process of converting the ciphertext back into the original plaintext is known as Decryption. A 'key' is used for controlling Encryption and Decryption.

There are two types of key-based encryption algorithms, namely, Symmetric algorithm and Asymmetric Algorithm.

I). Symmetric Algorithm: This algorithm uses the same key for encryption and decryption. The key is also known as a 'Secret key'.

In this scheme, when Sam wants to send a message to Tom, he encrypts the message with the mutually agreed secret key and then sends the ciphertext to Tom. Tom uses the same secret key, decrypts the message and reads it.

The Symmetric key system is faster than the Asymmetric system, but the problem of agreeing on a mutual secret key, and of preserving the secrecy of the key while communicating it over the network, led to the development of Asymmetric key systems.

Some of the Symmetric key Algorithms are as follows (most of them are implemented in JCA/JCE):

1.) DES (Data Encryption Standard), developed in the 1970s and recommended by the US government. Though it is not fool-proof, it is considered sufficiently safe and is in wide use.

It has different modes of operation:
  • Electronic Code Book (ECB)
  • Cipher Block Chaining (CBC)
  • Output Feedback Mode (OFB)
  • Cipher Feedback Mode (CFB)
2.) Triple DES (also known as DESede)

An improved and very safe variant of DES.

3.) IDEA (International Data Encryption Algorithm). This is used in PGP (Pretty Good Privacy, a method of secure Email).

An important advantage of Secret-key algorithms is that a hardware approach is possible. This results in very high-speed encryption. The hardware implementation by a VLSI chip can be about 20 times faster than the corresponding software implementation! IDEA has been implemented in hardware.

4.) Blowfish. This algorithm was designed by Bruce Schneier. It is not patented, and he has placed the implementation in the public domain.

5.) There is also a method known as Password-Based Encryption (PBE). We will have a brief description of this method, with a code example, shortly.

Ready-made implementations of many of these algorithms are available in SunJCA/JCE, and the programmer just chooses the desired algorithm and uses it. No deep knowledge of the mathematical theory of the algorithms, or of how they are implemented, is required. Such topics are highly mathematical and are dealt with in books on Cryptography.

II). Asymmetric Algorithms

This algorithm is also known as the 'Public Key' algorithm. There are two keys in this scheme: one is known as the 'public key' and the other as the 'private key'. (It should be noted that 'secret key' does not mean 'private key'.)

The basic theory of Public Key Cryptography was developed by two researchers at Stanford University, Diffie & Hellman, in 1976. The DH algorithm is known as a Key-Agreement method. The RSA algorithm is an implementation named after the initials of the three academics who invented it (Rivest, Shamir & Adleman). RSA is the de facto standard. Another asymmetric algorithm is DSA (Digital Signature Algorithm). Yet another is known as ECC (Elliptic-Curve Cryptography), which is reputed to be very efficient and fast.
[However, SunJCA/JCE does not provide a ready-made implementation of ECC.]
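To make the key-agreement idea concrete, here is a small sketch using the JDK's built-in "DiffieHellman" engines (the class, method and variable names are our own, for illustration): Sam and Tom each generate a keypair on shared parameters, and each side derives the identical secret from its own private key and the other's public key:

```java
import javax.crypto.KeyAgreement;
import javax.crypto.interfaces.DHPublicKey;
import java.util.Arrays;

public class DHDemo {
    // Each party combines its own private key with the other's public key.
    static byte[] agree(PrivateKey ownPrivate, PublicKey otherPublic) throws Exception {
        KeyAgreement ka = KeyAgreement.getInstance("DiffieHellman");
        ka.doPhase(otherPublic, true); // single-phase DH
        return ka.generateSecret();
    }

    static boolean sharedSecretsMatch() throws Exception {
        // Sam generates a keypair (the provider supplies default parameters).
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DiffieHellman");
        KeyPair sam = kpg.generateKeyPair();

        // Tom generates his pair using the same public parameters (p and g).
        KeyPairGenerator kpg2 = KeyPairGenerator.getInstance("DiffieHellman");
        kpg2.initialize(((DHPublicKey) sam.getPublic()).getParams());
        KeyPair tom = kpg2.generateKeyPair();

        // Both sides derive the same secret without ever transmitting it.
        byte[] samSecret = agree(sam.getPrivate(), tom.getPublic());
        byte[] tomSecret = agree(tom.getPrivate(), sam.getPublic());
        return Arrays.equals(samSecret, tomSecret);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("secrets match: " + sharedSecretsMatch());
    }
}
```

The shared bytes derived by both sides can then be used as a session key for a symmetric cipher.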

The public key and private key together are known as a 'keypair'. They are mathematically related in the sense that if a message is encrypted using a particular public key, it can be decrypted by the corresponding private key and vice versa, i.e. the data can also be encrypted using a private key and decrypted by the corresponding public key, and not by any other public key. But then, any person who knows Sam's public key could decrypt such a message. So, the RSA scheme uses the public key of the recipient to encrypt the data. (The private key cannot be derived from the public key, nor the public key from the private key.)

The RSA method is the most widely used scheme. When Sam wants to send a secret message to Tom, he should know Tom's public key to begin with (just as we should first know a friend's mail-id if we want to send him email). Sam encrypts the message using Tom's public key and sends it to Tom. At the receiving end, Tom uses his own (Tom's) private key, decrypts the letter and reads it. The advantage of this scheme is that only Tom will be able to read the message, as only his private key can decrypt a message encrypted with his public key. A person's private key need never be known to anyone else, and there is no sharing of the key with another person. Only the public key needs to be given out to others (like the difference between sharing our mail-id and sharing our password!). Thus the key-administration problem is smaller.

Digital Signature & Message Digest

The Asymmetric system has another use as well. It can be used for creating the Digital Signature, to ensure that the message came from Sam. Though the message itself can be signed without creating a digest, the usual method is to sign the message digest, so that the Integrity of the data can also be ensured.

A 'Message Digest' is a digital fingerprint, often referred to simply as a digest (summary) or hash. It is a one-way process, i.e. it is impossible to reconstruct the original from the hash.

MD5 (Message Digest 5) and SHA-1 (Secure Hash Algorithm) are two examples of such digesting algorithms, and both are provided in the Sun security package. MD5 accepts some input and produces a 128-bit message digest. SHA-1 is more secure and produces a 160-bit message digest.

When Sam wants to send a secure message, he passes the message through a Message Digest engine. The result is a hash.

He then encrypts the hash using his own (Sam's) private key. (This encryption is done on the hash and not on the data.) Thus we get the Digital Signature.

Finally, Sam encrypts the original message using Tom's Public key. After this, Sam sends the package to Tom.

At the receiving end, Tom uses his (Tom's) private key to decrypt the message. By using Sam's public key, he decrypts the digital signature and so gets the original hash (hash1). Using the same one-way hash algorithm on the text message, Tom creates another hash (hash2). If hash2 exactly matches hash1, it means that the data has not been altered in transit. Thus, we get assurance of Confidentiality and Data Integrity. It also ensures the identity of the sender, because hash1 could only be recovered by using Sam's public key to decrypt the signature.
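The complete flow just described can be sketched in a few lines of JDK code (the class and variable names are ours, chosen for illustration). SHA1withRSA performs the hash-then-sign steps in one engine, and the message must be small enough for a single RSA block:

```java
import javax.crypto.Cipher;

public class SignThenEncryptDemo {
    // Sam signs the message and encrypts it for Tom; Tom decrypts it and
    // verifies the signature. Returns true if the recovered text matches
    // and the signature checks out.
    static boolean exchange(String message) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair sam = kpg.generateKeyPair(); // Sam's signing keypair
        KeyPair tom = kpg.generateKeyPair(); // Tom's encryption keypair

        // Sam: SHA1withRSA hashes the message and encrypts the hash with
        // Sam's private key -- this is the digital signature.
        Signature signer = Signature.getInstance("SHA1withRSA");
        byte[] signature = signer.sign();

        // Sam: encrypt the (small) message with Tom's public key.
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, tom.getPublic());
        byte[] sealed = rsa.doFinal(message.getBytes());

        // Tom: decrypt with his own private key.
        rsa.init(Cipher.DECRYPT_MODE, tom.getPrivate());
        String recovered = new String(rsa.doFinal(sealed));

        // Tom: verify the signature with Sam's public key.
        Signature verifier = Signature.getInstance("SHA1withRSA");
        return recovered.equals(message) && verifier.verify(signature);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("exchange ok: " + exchange("hello Tom"));
    }
}
```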

If the public key of Sam, as used by Tom, has the added assurance from a certificate authority that it really belongs to Sam, this is a clear-cut method with no problems, except that it is not suitable if the message being encrypted is large. Besides satisfying the requirements of Authentication, Confidentiality, Integrity and Non-Repudiation, we should also ensure that the process is fast at the Enterprise level. The method outlined above is slow, and so may not be suitable for large messages. Otherwise, it is a satisfactory method.

(We will describe a hybrid method used for large messages, shortly).

Sometimes, it may be enough to have Authentication and Non-repudiation, without Confidentiality. In such cases, it is enough if Sam sends the message digest encrypted by his private key, along with the plaintext.

Digital Certificate
Just now, we saw that Tom made use of Sam's public key to verify his Digital Signature. How does Tom get to know Sam's public key? Sam could have published his public key on the internet, or could have sent it to Tom personally. A person's public key can be freely published and shared, and for this reason anybody can use one, not necessarily Sam. A Digital Signature of Sam can be verified only if Sam's public key is available to Tom, but as it is a public key, impersonation is possible. A trusted third party is required to certify that the said key is really Sam's public key. This certification is known as a Digital Certificate, and the authorities who issue such certificates are Certifying Authorities.

Public Key Infrastructure( PKI)
When Sam wants his public key to be certified by a CA, he generates a keypair and sends the public key to an appropriate CA with some proof of his identity.

The CA checks the identification and, after satisfying itself that the key has not been modified in transit, issues a certificate relating Sam's public key to his identity, by signing Sam's public key with the private key of the CA. The standard format of such certificates is known as X.509.

Who is to attest the CAs themselves? The CAs are self-attested.

The PKI standard has been developed by RSA Security in collaboration with industry leaders like Sun, IBM and Microsoft, and is the industry standard.

A certificate becomes invalid after the expiry of its validity period. Sometimes the private key associated with a public key gets compromised, i.e. exposed, and in that case also the certificate should be withdrawn (revoked). The owner of the private key may also wish to change it. The CA publishes a list of such defunct certificates, and Tom should verify that Sam's certificate is still valid before important transactions.

Message-Authentication-Code ( MAC)

Digital Signature makes use of Sam's private key to sign the hash. An alternative to the Digital Signature is to use a secret key to encrypt the hash. By its very definition, the secret key is common to both Sam & Tom, so Tom can use the secret key at his end and get back the hash. The code thus generated by mixing the hash and the secret key is known as a MAC. Digital Signature is better than MAC because it does not need any 'secret' key. In the context of E-Commerce, where there are thousands of parties, secret-key administration is always very difficult.
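The JCE provides this as a ready-made Mac engine (HmacMD5, HmacSHA1), so the mixing of hash and secret key need not be done by hand. A small sketch, with class and variable names of our own choosing:

```java
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;

public class MacDemo {
    // Compute the MAC of the data under the shared secret key.
    static byte[] mac(SecretKey key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(key);
        return mac.doFinal(data.getBytes());
    }

    // Tom recomputes the MAC at his end and compares it with the one received.
    static boolean verify(SecretKey key, String data, byte[] received) throws Exception {
        return MessageDigest.isEqual(mac(key, data), received);
    }

    static boolean tamperDetected() throws Exception {
        // Sam and Tom share this secret key out of band.
        SecretKey key = KeyGenerator.getInstance("HmacSHA1").generateKey();
        byte[] sent = mac(key, "pay Tom 100");
        // The genuine message verifies; an altered one does not.
        return verify(key, "pay Tom 100", sent) && !verify(key, "pay Tom 900", sent);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("tampering detected: " + tamperDetected());
    }
}
```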

The scheme outlined above is suitable for most purposes. However, for very large amounts of data, encryption and decryption by public-key systems becomes time-consuming and requires large resources. In such cases it is preferred to use Symmetric encryption with some modifications. Hybrid systems make use of the Asymmetric method for agreeing upon a secret key, and the actual encryption and decryption of the data is done with this secret key. Some such method is the usual industrial practice. The secret key used here is valid only for a particular instance of message transmission, and so is usually called a 'session key'. (This is not the 'session' as usually understood in servlets, because this is a one-time operation.)

Digital Envelope

An illustration of the hybrid method is the Digital Envelope. In this scheme, Sam encrypts the message with a random secret key (known as the DEK, i.e. Data-Encryption Key, or session key). Next, Sam encrypts this session key with Tom's public key. At this stage, Sam sends both the encrypted message and the encrypted session key to Tom.

At the receiving end, Tom uses his private key to recover the session key. Using this session key, Tom decrypts the message. As the Symmetric method is about 1000 times faster than the Asymmetric method, this is a good combination. Though the public-key method has been used here too, it is only for encrypting the session key and not the message. This can be further improved by creating a hash of the message and signing it. Also, there is no permanent secret key between Sam & Tom; the required secret key is produced just as needed and discarded after the job. Thus, the method is fast and secure.
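The Digital Envelope can be sketched with the JDK engines like this (the names are ours; a random DES session key seals the message, RSA seals the session key, and Tom unwraps in reverse order):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class EnvelopeDemo {
    // Seals the message for Tom and immediately opens it again at his end,
    // returning the recovered plaintext.
    static String roundTrip(String message) throws Exception {
        // Tom's long-lived RSA keypair.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair tom = kpg.generateKeyPair();

        // Sam: a random one-time session key (the DEK).
        SecretKey session = KeyGenerator.getInstance("DES").generateKey();

        // Sam: encrypt the (possibly large) message with the fast symmetric key.
        Cipher des = Cipher.getInstance("DES");
        des.init(Cipher.ENCRYPT_MODE, session);
        byte[] sealedMsg = des.doFinal(message.getBytes());

        // Sam: encrypt only the small session key with Tom's public key.
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, tom.getPublic());
        byte[] sealedKey = rsa.doFinal(session.getEncoded());

        // Tom: recover the session key with his private key, then the message.
        rsa.init(Cipher.DECRYPT_MODE, tom.getPrivate());
        SecretKey recovered = new SecretKeySpec(rsa.doFinal(sealedKey), "DES");
        des.init(Cipher.DECRYPT_MODE, recovered);
        return new String(des.doFinal(sealedMsg));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("a long message ..."));
    }
}
```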

With this background information, let us now see some simple code examples, specific to JDK.

There are a number of Cryptographic Engines in Sun JCA & Sun JCE. They are listed below.

It will be immediately evident that the names will be Greek & Latin to us, unless we have a background in Crypto terminology. That is why a broad outline was given first. The function of some of the engines will be evident from the earlier discussion. A few more of the remaining items will become clear when we deal with the code examples.

Cryptographic Engines
  •  Key Generator (symmetric)
    (Blowfish, DES, Triple DES, HmacMD5, HmacSHA1, RC5)
  •  Key Pair Generator (asymmetric)
    (Diffie Hellman, DSA, RSA)
  •  Mac (message authentication code)
    (HmacMD5, HmacSHA1)
  •  Message Digest
    (MD5, SHA1)
  • Signature
    (MD5withRSA, SHA1withRSA, SHA1withDSA)
  • Cipher
    (Blowfish, DES, Triple DES etc.)
  • Certificate Factory
    (X.509)
  • Key Agreement
    (Diffie Hellman)
  • Key Factory
  • Secret Key Factory
  • Secure Random
    (SHA1PRNG, i.e. SHA-1 pseudo-random number generator)
  • Trust Manager Factory
  • Key Manager Factory
  • Key Store
    (JKS, PKCS12)
  • SSL Context
  • Algorithm Parameter Generator
  • Algorithm Parameters
Let us now see a series of code examples to get familiar with some of the above engines. For all the examples, we are using JDK1.4.2. Our working directory is g:\securitydemos; CD to g:\securitydemos.

We should set the path as: c:\windows\command;d:\jdk1.4.2\bin

The easiest to understand is the Message Digest. "" creates the message digest of the string s1, by the SHA method (Secure Hash Algorithm). The given string is first converted into a byte array, because the function md.digest() accepts only a byte array. md.update() simply adds the array to existing arrays, if any. The digest thus created is simply saved as an object to the file, along with the original string.

// creation of message-digest
// storing the string & digest in file

import*;
import java.io.*;

class demo1 {
   public static void main(String args[]) {
     try { 
       MessageDigest md = MessageDigest.getInstance("SHA"); 
       String s1 = " we are learning java";
       byte[] array = s1.getBytes();
       md.update(array);
       FileOutputStream fos = new FileOutputStream("demo1test");
       ObjectOutputStream oos = new ObjectOutputStream(fos);
       oos.writeObject(s1);
       oos.writeObject(md.digest());
       oos.close();
       System.out.println(" digest ready!");
     } catch(Exception e1) {
       e1.printStackTrace();
     }
   }
}

In demo2, we learn how a given message digest can be used to check the Integrity of data. We begin by reading the original string as well as the existing hash (hash1). Next, we create another hash of the original string by the same algorithm to get hash2. Then we compare hash1 with hash2. If they are not equal, we get the message "corrupted". From these examples, it will be appreciated how much the Java API shields the programmer from the inner workings of the highly mathematical theory of Cryptology.


// getting a string and digest from file
// creating a hash and verifying the digest
import*;
import java.io.*;

class demo2 {
  public static void main(String args[]){
    try {
       FileInputStream fis = new FileInputStream("demo1test");
       ObjectInputStream ois = new ObjectInputStream(fis);
       Object ob1 = ois.readObject();
       String s1 = (String) ob1;

       Object ob2 = ois.readObject();
       byte[] array1 = (byte[]) ob2;
       ois.close();
       MessageDigest md = MessageDigest.getInstance("SHA");
       md.update(s1.getBytes());
       if (MessageDigest.isEqual(md.digest(), array1)) {
          System.out.println("integrity verified..ok");
       } else {
          System.out.println("corrupted");
       }
    } catch(Exception e1){
       e1.printStackTrace();
    }
  }
}

In the third example (demo3), we see how a secret key is created by the DES algorithm. The Cipher class is the Encryption and Decryption engine. After initialising the Cipher engine in encrypt mode, we give the command ci.doFinal(). This creates the encrypted form of the specified string. We should also get the init vector, by the command ci.getIV().

To avoid writing a separate example, we illustrate the process of decrypting here as well, in the next stage. We get the init vector and then define the cipher for decrypt mode. After this, ci.doFinal() does the decryption.

// creation of secret key
// encryption using secret key
// decryption using secret key

import javax.crypto.*;
import javax.crypto.spec.*;
import*;

class demo3 {
   public static void main(String args[]) {
     try {
       KeyGenerator kg = KeyGenerator.getInstance("DES");
       // DES = Data Encryption Standard
       Key key = kg.generateKey();
       Cipher ci = Cipher.getInstance("DES/CBC/PKCS5Padding");
       ci.init(Cipher.ENCRYPT_MODE, key);
       String s = "we are learning Java";
       byte[] array1 = s.getBytes();
       byte[] array2 = ci.doFinal(array1);
       byte[] initvector = ci.getIV();
       System.out.println("string has been encrypted");
       System.out.println("we are now decrypting");
       IvParameterSpec spec = new IvParameterSpec(initvector);
       ci.init(Cipher.DECRYPT_MODE, key, spec);
       byte[] array3 = ci.doFinal(array2);
       String s2 = new String(array3);
       System.out.println(s2);
     } catch(Exception e1){
       e1.printStackTrace();
     }
   }
}

The following program displays information about the Providers available in the JDK.

import java.util.*;
import*;

class demo4 {
   public static void main(String args[]) {
     try {
       Provider[] array = Security.getProviders();
       int n = array.length;
       for(int j=0; j<n; j++) {
         for(Enumeration e=array[j].keys(); e.hasMoreElements();) {
           System.out.println(e.nextElement());
         }
       }
     } catch(Exception e1) {
       e1.printStackTrace();
     }
   }
}

When we execute the program, we get a long list. We can run the program as:

>java demo4 > list.txt

and then inspect list.txt

In demo5, we create a private-key/public-key pair using the DSA (Digital Signature Algorithm).

The KeyPairGenerator class is used for that, seeded with a pseudo-random number generator. The key pair is easily generated by the command kpg.genKeyPair(). After this, we get the public key and the private key. Next, we specify the Signature algorithm as SHA1withDSA (Secure Hash DSA). After initializing sig with the private key, we create the byte array for the string to be signed and ask sig to sign that array [sig.sign()].

Now the data string has been signed by the sender's private key. In the next section, we initialize sig with the public key and verify the signature against the original array. This returns 'true' if the keys belong to the same pair. The code is almost self-explanatory, except for the syntax.

When we run the program demo5, we get the following output.

G:\SECURI~1>java demo5

public & private keys ready!

the data has been signed by private key

now verifying with public key

verified & found ok

// creation of public & private keys
// signing the data by private key
// verifying the data by public key

import*;

class demo5 {
   public static void main(String args[]) {
     try {
       SecureRandom sr = new SecureRandom();
       KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
       kpg.initialize(1024, sr);
       KeyPair kp = kpg.genKeyPair();
       PublicKey pubkey = kp.getPublic();
       PrivateKey prikey = kp.getPrivate();
       System.out.println("public & private keys ready!");
       Signature sig = Signature.getInstance("SHA1withDSA");
       sig.initSign(prikey);
       String s1 = "we are learning Java";
       byte[] array1 = s1.getBytes();
       sig.update(array1);
       byte[] array2 = sig.sign();
       System.out.println("the data has been signed by private key");
       System.out.println("now verifying with public key");
       sig.initVerify(pubkey);
       sig.update(array1);
       if (sig.verify(array2)) {
          System.out.println("verified & found ok");
       } else {
          System.out.println("not authentic");
       }
     } catch(Exception e1){
       e1.printStackTrace();
     }
   }
}

We will conclude this tutorial with an illustration of a Password-Based secret key.

Users are supposed to remember their passwords easily. If a secret key can be generated from the password, it will be easier to manage. Users can then generate such a secret key and use it to encrypt and decrypt their files. The algorithm uses a 'salt' and an iteration count, and the paramspec is created using these two values. The Cipher engine for PBEWithMD5AndDES (PBE = Password-Based Encryption, MD5 = Message Digest 5, DES = Data Encryption Standard) is then created. The program asks for a password and then the string to be encrypted. A secret key based on the supplied password is generated and used to encrypt the data.

Just for illustration, the program asks for the password again. When we type it correctly, the secret key is again generated and used to decrypt the encrypted data. Finally, the original string is printed.

The output of demo6 is shown below.

( system prompt & output in bold)

G:\SECURI~1>java demo6

enter the password


enter the string to be encrypted

we are studying java cryptography

encryption over

now decrypting

enter the password again!


we are studying java cryptography

import java.io.*;
import javax.crypto.*;
import javax.crypto.spec.*;

class demo6 {
   public static void main(String args[]) {
     String salt = "saltings";
     int n = 20; // iteration count
     byte[] a = salt.getBytes();
     PBEParameterSpec paramspec = new PBEParameterSpec(a, n);
     try {
       Cipher cipher = Cipher.getInstance("PBEWithMD5AndDES");
       DataInputStream ins = new DataInputStream(;

       System.out.println("enter the password");
       String s1 = ins.readLine();
       System.out.println("enter the datastring");
       String s2 = ins.readLine();
       byte[] array1 = s2.getBytes();
       PBEKeySpec keyspec = new PBEKeySpec(s1.toCharArray());
       SecretKeyFactory factory = SecretKeyFactory.getInstance("PBEWithMD5AndDES");
       SecretKey key = factory.generateSecret(keyspec);
       cipher.init(Cipher.ENCRYPT_MODE, key, paramspec);
       byte[] array2 = cipher.doFinal(array1);
       System.out.println("encryption over");
       System.out.println("now decrypting");
       System.out.println("enter the password again!");
       String s3 = ins.readLine();
       keyspec = new PBEKeySpec(s3.toCharArray());
       key = factory.generateSecret(keyspec);
       cipher.init(Cipher.DECRYPT_MODE, key, paramspec);
       byte[] array3 = cipher.doFinal(array2);
       String s4 = new String(array3);
       System.out.println(s4);
     } catch(Exception e1){
       e1.printStackTrace();
     }
   }
}

That completes our introductory tutorial on JCA/JCE. All the above programs have been tested and are working correctly. Students are encouraged to follow this up with the reference material mentioned below:

  • Mastering Java Security (Cryptography, Algorithms & Architecture) by Rich Helton & Johennie Helton (Wiley/DreamTech)
  • Professional Java, JDK 5 edition, by Clay Richardson & others ( chapter- ) (Wrox/Wiley/DreamTech)
  • Java Security by Scott Oaks (of Sun Microsystems) (SPD/O'Reilly)
  • Java Distributed Objects by Bill McCarty ( chapter- )
  • Professional Java Web Services by Mack Hendricks (Sun) & others, chapter 6 on Security by James Millbury