CICS Integration Performance Benchmark
Case Studies | by Jerry Rackley

As we say on the home page of our website, a core value proposition of HostBridge is speed: we enable the creation of "lightning fast" (a HostBridge customer's actual words) integrations based on screen data, not screen geometry, with no disruption to mainframe code. We recently had the opportunity to prove this claim in a head-to-head performance benchmark at a customer site, using their applications.

Driver of Change: Vendor Support Withdrawal

This particular customer is evaluating HostBridge to replace another vendor's integration product. The reason has little to do with performance: the incumbent vendor is withdrawing support for its CICS integration solution, and the customer is reluctantly evaluating replacements.

Testing Environment

I went onsite to work with this customer on head-to-head testing: HostBridge had to return exactly the same responses, given exactly the same input, as the existing solution. The initial tests involved no changes to the integration; we replicated with HostBridge exactly what the installed solution does. While the customer understood that HostBridge provides more integration and orchestration functionality, the initial testing changed neither the input nor the output. It was a true apples-to-apples comparison.
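
To give a feel for that kind of apples-to-apples test, here is a minimal sketch of a comparison harness. It assumes, purely for illustration, that both integrations are exposed as HTTP endpoints; the URLs, payload fields, and endpoint names are hypothetical placeholders, not the customer's or HostBridge's actual interfaces.

```python
# Illustrative apples-to-apples harness: send identical input to both
# integration paths, confirm identical output, and time each call.
# Endpoint URLs and payload are hypothetical placeholders.
import json
import time
import urllib.request

ENDPOINTS = {
    "hostbridge": "http://mainframe.example.com:8080/hb/inquiry",  # placeholder
    "incumbent":  "http://midtier.example.com:9080/svc/inquiry",   # placeholder
}
PAYLOAD = json.dumps({"account": "123456", "function": "BALANCE"}).encode("utf-8")

def call(url: str) -> tuple[bytes, float]:
    """POST the same payload and return (response body, elapsed milliseconds)."""
    req = urllib.request.Request(
        url, data=PAYLOAD, headers={"Content-Type": "application/json"}
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read()
    return body, (time.perf_counter() - start) * 1000.0

results = {name: call(url) for name, url in ENDPOINTS.items()}

# Same input must yield the same output before response times are compared.
assert results["hostbridge"][0] == results["incumbent"][0], "responses differ"
for name, (_, ms) in results.items():
    print(f"{name}: {ms:.1f} ms")
```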

This initial test exposed one of the challenges of measuring integration performance: the measurements capture only what happens inside CICS, where HostBridge runs. The competing solution, however, has additional components, such as MQ and Java layers, that reside outside of CICS. These components add overhead to the composite transaction and build up latency between the host and the middle tier, but that latency is difficult to measure. In this customer's environment, a service call passes through several such layers to get into CICS, execute a COMMAREA program, and return back out again.

Because of these challenges, the performance measurement scope covered only what happens inside CICS, where HostBridge does all of its processing. The competing solution, with some of its processing occurring in layers off the host, did not have all of its overhead captured in these measurements. Its work inside CICS, for example, taking the MQ message and issuing a DPL link across to another CICS region, is a very thin process from a resource and cycles standpoint. We were measuring only what happened in the region where the DPL link runs. All the application work happened in a remote region, and we did not measure that, because it does not change regardless of which integration solution is in use.

The Result

With the deck seemingly stacked against HostBridge in this test, the benchmark performance results for response time were:

  • HostBridge: 60 milliseconds
  • Other Solution: 170 milliseconds

Benchmark test results for a CICS COMMAREA program.

In this test, response time through HostBridge was roughly a third of the response time through the competing solution's integration. These results were consistent with the HostBridge reputation for performance. However, we also found that HostBridge was consuming more general-purpose CPU than the existing solution.

With response time through HostBridge nearly three times better, there is a compelling argument that the cost in CPU cycles is worth the gain in performance. Given the testing scenario, HostBridge was consuming cycles in CICS to do work the other solution was doing elsewhere on the mainframe. However, this initial test did not use the zIIP enablement of HostBridge. After turning the zIIP option on, we reran the tests: HostBridge delivered the same response time advantage as before, but the workload now ran on the zIIP. A significant gain from using HostBridge!

These results speak to the strength of the HostBridge architecture. HostBridge does its work inside the CICS region(s), while the other solution does much of its work in processing layers outside of CICS. Despite this, when running HostBridge on the zIIP, the total CPU consumption of HostBridge was only a fraction of the other solution's CPU usage, while also delivering much better response time. The application in this test handles about a million transactions per day, so the accumulated cycle savings and improved response times are significant.
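
As a rough back-of-the-envelope illustration using only the benchmark figures above (measured response times, not CPU times, so this estimates accumulated latency reduction rather than CPU savings):

```python
# Back-of-the-envelope estimate from the benchmark figures in this article.
HOSTBRIDGE_MS = 60          # measured response time per transaction
INCUMBENT_MS = 170          # measured response time per transaction
TX_PER_DAY = 1_000_000      # ~1 million transactions per day (per the customer)

saved_ms_per_tx = INCUMBENT_MS - HOSTBRIDGE_MS              # 110 ms per transaction
saved_seconds_per_day = saved_ms_per_tx * TX_PER_DAY / 1000  # ~110,000 s per day

print(f"Accumulated response time saved per day: {saved_seconds_per_day:,.0f} s "
      f"(~{saved_seconds_per_day / 3600:.1f} hours)")
```

That works out to roughly 110,000 seconds, or about 30 hours, of accumulated wait time removed from the workload every day, before considering the CPU offload to the zIIP.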

A $1 Million Process

More impressive than the performance and processing gains HostBridge delivered in this benchmark is the financial impact. The systems programmer I worked with at this customer said the process we tested costs them a million dollars a year, and that by using HostBridge to accomplish the integration, those costs go away entirely. For this customer, the results of this one integration scenario provide all the justification necessary to implement HostBridge, with a short-term payback.

If you run CICS applications, these same benefits are available to you. We’d be happy to work with you to perform similar tests on your host, with your applications. If you’d like to do that, simply let us know using the form below.
