Performance tuning the Camel parameters in a Backbase CXP application

Backbase is an omni-channel digital banking platform that empowers financial institutions to accelerate their digital transformation and compete effectively in a digital-first world. It unifies functionality from traditional core systems and new FinTech capabilities into a seamless digital customer experience, drastically improving every customer channel.
In any banking application, interactions with the core banking system go through a middleware ESB. In a Backbase CXP application, all calls to the middleware are made via Apache Camel. A typical Backbase CXP application's architecture and system interactions are shown below.

[Image: Backbase CXP system interaction]

In a recent Backbase CXP project that went live, we began experiencing slowness when the number of concurrent users rose above 200, and the iOS and Android apps consuming the Backbase CXP backend became difficult to use. We generated thread dumps while the system was hanging and analyzed them with Samurai and JVisualVM. Many of the threads were in the WAITING state, and on closer inspection a large number of them were waiting with a stack trace like the one below.

"http-nio-8080-exec-499" #679 daemon prio=5 os_prio=0 tid=0x00007fb1f4204000 nid=0xb685 in Object.wait() [0x00007fb148308000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager.doGetConnection(
- locked <0x0000000e853c1c30> (a org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$ConnectionPool)
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager.getConnectionWithTimeout(
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(
at org.apache.commons.httpclient.HttpClient.executeMethod(
at org.apache.commons.httpclient.HttpClient.executeMethod(
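As an aside, a quick way to triage such a dump without a GUI tool is to grep for threads parked in doGetConnection. The snippet below is a minimal sketch: it fabricates a two-thread sample dump inline, but on a real system you would run the grep against the output of jstack.

```shell
# Create a tiny sample thread dump inline (illustrative only; on a real
# system this file would come from: jstack <pid> > dump.txt).
cat > dump.txt <<'EOF'
"http-nio-8080-exec-499" daemon prio=5 WAITING (on object monitor)
   at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager.doGetConnection(
"http-nio-8080-exec-500" daemon prio=5 RUNNABLE
   at java.net.SocketInputStream.socketRead0(
EOF

# Count how many threads are stuck waiting for a pooled connection.
grep -c 'MultiThreadedHttpConnectionManager.doGetConnection' dump.txt
```

A high count here relative to your pool size is a strong hint that the connection manager, not the ESB itself, is the bottleneck.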

As the thread dump snippet above shows, the thread is waiting while trying to get a connection from the MultiThreadedHttpConnectionManager. We identified this as the cause of so many waiting threads, and hence of the slowness. In our codebase, all calls to the middleware go through a single common Camel route, so every request hits the same HTTP endpoint. Digging further into our codebase and the Camel source code, we discovered that camel-core uses Apache commons-httpclient to make HTTP connections to the middleware via the class MultiThreadedHttpConnectionManager. Backbase ships its own default multithreaded connection manager, defined in backbase-ptc.xml inside ptc-core.jar; the property ptc.http.maxConnectionsPerHost controls the number of connections per host.
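To make the failure mode concrete, here is a small self-contained Java sketch (not Backbase or Camel code) that models the connection pool as a counting semaphore. Commons-httpclient 3.x defaults to 2 connections per host; once those are checked out, additional callers block, just like the threads in the dump above.

```java
import java.util.concurrent.Semaphore;

// Minimal sketch: why a small connection pool makes request threads WAIT.
// MultiThreadedHttpConnectionManager behaves like a counting semaphore
// sized at maxConnectionsPerHost; when all permits are taken, further
// threads sit in Object.wait() inside doGetConnection().
public class PoolWaitDemo {
    public static void main(String[] args) throws InterruptedException {
        int maxConnectionsPerHost = 2; // commons-httpclient 3.x default
        Semaphore pool = new Semaphore(maxConnectionsPerHost);

        pool.acquire(); // connection 1 checked out by request thread 1
        pool.acquire(); // connection 2 checked out by request thread 2

        // A third caller cannot get a connection; with the real HttpClient
        // it would block instead of failing fast.
        boolean gotConnection = pool.tryAcquire();
        System.out.println(gotConnection); // false
    }
}
```

With 200+ concurrent users funneled through a pool of this size, almost every request thread ends up queued behind the two in-flight connections.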

<bean id="ptc_httpConnectionManager" class="org.apache.commons.httpclient.MultiThreadedHttpConnectionManager">
    <property name="maxConnectionsPerHost" value="$ptc{ptc.http.maxConnectionsPerHost}"/>
    <property name="maxTotalConnections" value="$ptc{ptc.http.maxTotalConnections}"/>
</bean>

The default values for the maximum connections in this connection manager are far too low to handle 200 concurrent users. In the configuration, these two properties are documented as follows:

## Maximum number of concurrent requests for one remote resource.

## Maximum total number of concurrent requests.


The approach we took to solve this problem was to make the Backbase CXP CamelContext use a different MultiThreadedHttpConnectionManager with higher, tuned values. To change the default Camel context, we copied the default backbase-integration.xml into portalserver/src/main/resources/META-INF/spring/backbase-integration.xml and then edited it to attach a new MultiThreadedHttpConnectionManager to the Camel context using the following code.

<bean id="http" class="org.apache.camel.component.http.HttpComponent">
    <property name="camelContext" ref="bb-integration-context"/>
    <property name="httpConnectionManager" ref="myHttpConnectionManager"/>
</bean>

<bean id="myHttpConnectionManager" class="org.apache.commons.httpclient.MultiThreadedHttpConnectionManager">
    <property name="params" ref="myHttpConnectionManagerParams"/>
</bean>

<bean id="myHttpConnectionManagerParams" class="org.apache.commons.httpclient.params.HttpConnectionManagerParams">
    <property name="defaultMaxConnectionsPerHost" value="1000"/>
    <property name="maxTotalConnections" value="1000"/>
</bean>

With the above configuration, we defined myHttpConnectionManager to handle a higher load and attached it to the default Camel context bb-integration-context. After this change, the Camel context picked up the new connection manager, the bottleneck on HTTP connections to the ESB disappeared, and the application performed well. This is the approach we followed.
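How high should the new limits be? A back-of-envelope calculation using Little's law can guide the sizing. The numbers below are assumptions for illustration, not measurements from our project:

```java
// Back-of-envelope pool sizing via Little's law (illustrative assumptions):
// connections in use ~= request arrival rate * average ESB call latency.
public class PoolSizing {
    public static void main(String[] args) {
        int concurrentUsers = 200;          // the load at which we saw slowness
        double requestsPerUserPerSec = 1.0; // assumed think time of ~1 second
        double avgEsbLatencySec = 0.5;      // assumed middleware call latency
        double inUse = concurrentUsers * requestsPerUserPerSec * avgEsbLatencySec;
        // 200 * 1.0 * 0.5 = 100 connections needed on average, before bursts.
        System.out.println((int) Math.ceil(inUse));
    }
}
```

Even under these mild assumptions the pool must sustain on the order of a hundred concurrent connections, so a limit of 1000 leaves comfortable headroom for bursts.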

Another, simpler approach would have been to change the default values of the ptc.http.* properties directly. We considered this, but the ptc module was slated for removal in upcoming Backbase versions, so we stuck with our initial approach.


In any Backbase CXP application, this Camel connection bottleneck is almost inevitable with the default values (especially if CXP runs on a single node): all HTTP requests are funneled to the middleware, so once concurrent users reach roughly 200+, the application slows down.
This post has shown how to resolve this performance issue, which is caused by threads waiting in MultiThreadedHttpConnectionManager.
