Saturday, September 20, 2008

Scaling JBoss Portal to a new height

Mount Everest as seen from Nepal side
You may have guessed from a previous blog post that this was coming, and it has. We have been working hard lately on improving the performance and horizontal scalability of JBoss Portal, and I am glad to report that it scales really well. Don't get me wrong, it was never bad to begin with, but there is always room for improvement. :-) JBoss Portal uses Hibernate for database-related work and JBoss Cache for clustering, and having both development teams in-house helped significantly as well.

Besides tuning our Hibernate configuration and making several code optimizations that improved the performance of a standalone JBoss Portal server, our scalability exercise resulted in a new JBoss Cache and Hibernate integration library, the details of which you can find at this blog. Without further ado, here is the result.
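For context, wiring Hibernate's second-level cache to JBoss Cache in that era came down to a few Hibernate properties. This is an illustrative sketch, not our actual configuration; the region factory class name is from the Hibernate 3.3 / JBoss Cache 2 integration and may differ in your version:

```properties
# Hypothetical settings enabling a clustered second-level cache backed by
# JBoss Cache. Verify the factory class against your Hibernate version.
hibernate.cache.use_second_level_cache=true
hibernate.cache.use_query_cache=true
hibernate.cache.region.factory_class=org.hibernate.cache.jbc2.JBossCacheRegionFactory
```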

As you can see, scalability is nearly linear going from a 1-node to a 10-node cluster, which is the best you can expect. What this means is that once you optimize your portal deployment on a single node, JBoss Portal will not incur much extra overhead when you deploy your application in a cluster. I should mention, though, that going from a 1-node to a 2-node cluster the scalability is 85%, which is still pretty good in my opinion. Now for some testing details.

For our testing, we used the JBoss Enterprise Application Platform (EAP) version 4.3 production configuration because it has some out-of-the-box optimizations that we needed. We used a portal application that exercises the core of the portal server, meaning an application that follows the most common code execution path: the most commonly used interceptors, the security layer, the database access layer, and the portal management layer. We also made sure that failover was happening correctly, because in most use cases scalability without failover does not mean much in a clustered setup. Load was increased for as long as the average response time remained below 2 seconds, with an average think time of 1.5 seconds between requests.
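The ramp-up rule above can be sketched as a small driver loop. Everything here is illustrative, not our actual harness: `measure_avg_response` is a hypothetical probe, and the step size and toy response model are made up for the example.

```python
# Illustrative sketch of the ramp-up rule: keep adding virtual users
# while the average response time stays within the 2-second budget.
# (Each virtual user pauses ~1.5 s between requests in the real test.)

MAX_AVG_RESPONSE_S = 2.0  # response-time budget from the test plan

def find_sustainable_load(measure_avg_response, step=10):
    """Return the highest user count whose average response time
    still fits within the 2-second budget."""
    users = 0
    while measure_avg_response(users + step) <= MAX_AVG_RESPONSE_S:
        users += step
    return users

# Toy model: response time grows with load past a knee at 100 users.
def toy_probe(users):
    return 0.5 + max(0, users - 100) * 0.05

print(find_sustainable_load(toy_probe))  # 130 users under this toy model
```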

All portal servers were deployed on RHEL 4 machines on the same subnet. We used SmartFrog components to manage distributed deployment and testing. Requests were generated using our in-house load generator as well as Grinder, hitting the cluster fronted by an Apache load balancer. MySQL 5 was used as the back-end database; we did not tune it beyond what is commonly recommended for a production setup. Performance and scalability tests are now part of our continuous integration, which runs on Hudson with the SmartFrog plugin that the JBoss QA team has developed.
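The post doesn't say which Apache balancing module fronted the cluster; a typical setup of that era was mod_jk with sticky sessions over AJP. A minimal `workers.properties` sketch, with hypothetical node names and hosts, might look like:

```properties
# Hypothetical mod_jk workers.properties for a 2-node portal cluster.
worker.list=loadbalancer

worker.node1.type=ajp13
worker.node1.host=portal-node1.example.com
worker.node1.port=8009
worker.node1.lbfactor=1

worker.node2.type=ajp13
worker.node2.host=portal-node2.example.com
worker.node2.port=8009
worker.node2.lbfactor=1

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=true
```

Sticky sessions matter here because portal sessions are replicated for failover but are cheapest to serve from the node that owns them.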

Many thanks go to Brian Stansberry, clustering lead, and Galder Zamarreno, senior support engineer, for not only creating the new JBoss Cache/Hibernate library but also for having the patience to pore through many thread dumps.

We certainly had lots of fun with this exercise. Please let us know how it performs and scales in your deployment. These optimizations will be available starting with the next JBoss Portal releases: 2.6.7.GA (JSR-168 compliant) and 2.7.0.GA (JSR-286 compliant).

4 comments:

avalon said...

That's really great! Keep up the stats! :)

Patrick said...

So what are the actual hard numbers?

Prabhat Jha said...

Giving hard numbers usually starts a never-ending race: vendors start tweaking configurations just for the sake of better numbers, which does not actually help anybody.

We did no special tuning just for the sake of showing off good numbers; the raw figures were only used internally. We just wanted to show scalability as cluster size increases, since that's the one value you can't really cheat on.

Patrick said...

Ok we understand the disclaimer and promise not to tweak... how about some ballpark number?