Optimizing Legacy-to-Cloud Integrations

I recently joined former IBM executive Phil Weintraub to present a webinar on optimizing legacy-to-cloud integrations. Like me, Phil has spent a career helping enterprises architect, manage, and exploit mainframe IT infrastructure. Phil is now the president and lead consultant at Weintraub Systems IT Consulting, and we’re very pleased to partner with his firm.

Every enterprise customer we work with is in the process of integrating legacy applications, most of them running in CICS, with something outside the mainframe. HostBridge provides integration software that makes CICS apps available as services via RESTful APIs. We understand the spectrum of legacy integration solutions very well: their pros and cons, their costs and benefits.

In recent years, and with increasing frequency, we are seeing enterprises experience suboptimal performance on the mainframe as a direct result of integration technology choices. As businesses race to include mainframe applications in their hybrid cloud strategies, optimizing legacy-to-cloud integrations is imperative. In our recent webinar, Phil and I discuss the approach we feel enterprises should take: integration via API. APIs offer lower cost and greater performance because of the loose coupling they create.

Phil and I agree that the recipe for success in optimizing legacy-to-cloud integrations begins with analysis. Weintraub Systems and HostBridge now offer an optimization service where Phil starts at the top by looking at IT strategy, and HostBridge starts at the bottom by using SMF 110 data to look at cloud-mainframe interactions. We bring our analyses together to present clients with a report and plan for optimization.

As we shared our optimization approach during the webinar, we got several insightful questions. I’ll share them, along with our answers, in the rest of this post.

Q: Does the analysis of SMF 110 records cover the whole trip from end-user to CICS transaction?

Russ: SMF records by default don’t contain this information. However, what we provide with our analytics offering is software that runs inside CICS. It looks at the requests that come in, extracts bits of metadata about them, and saves that metadata in the SMF 110 records.

Imagine that you have a lot of HTTP traffic coming in and one of the headers on that HTTP request is, perhaps, a correlation ID. There’s nothing in CICS that automatically extracts that and includes it in the SMF data. That’s part of what we’re doing. So while the SMF 110 record only covers the transaction that ran inside CICS, we enrich that SMF 110 record with correlation or request-specific data. We can then use Splunk to match it up with other log data from the distributed system. In this way, we create a complete end-to-end picture of what’s going on.
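To make that concrete, here is a minimal sketch of the stitching step once the enriched data has been exported from Splunk (or any log store). The field names (correlation_id, timestamp, component) and file names are hypothetical placeholders, not our actual schema; the point is simply that a shared correlation ID lets you line up CICS-side and distributed-side events.

```python
# Illustrative only: join enriched SMF 110 records with distributed-side logs
# on a shared correlation ID. Field and file names are hypothetical.
import json
from collections import defaultdict

def load_events(path):
    """Read newline-delimited JSON events exported from Splunk or another log store."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

def stitch(smf_events, distributed_events, key="correlation_id"):
    """Group CICS-side and distributed-side events by correlation ID, in time order."""
    timeline = defaultdict(list)
    for ev in smf_events + distributed_events:
        if key in ev:
            timeline[ev[key]].append(ev)
    for events in timeline.values():
        events.sort(key=lambda ev: ev["timestamp"])
    return timeline

if __name__ == "__main__":
    smf = load_events("smf110_enriched.json")    # CICS transactions plus extracted request metadata
    dist = load_events("distributed_logs.json")  # web/app-tier logs carrying the same correlation ID
    for corr_id, events in stitch(smf, dist).items():
        hops = " -> ".join(ev.get("component", "?") for ev in events)
        print(f"{corr_id}: {hops}")
```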

Q: How do we measure distributed parts of hybrid transactions? We are currently flying blind in this area.

Russ: What we see in most large organizations is something like this: a request comes in from the hybrid cloud. That request will cause CICS transaction A to run, then it’s going to run CICS transaction B, so you have this long chain of events occurring. Remember that metadata I said we extract from incoming or outgoing requests? In our approach, CICS automatically includes it as part of the origin data or transaction tracking data. Thus, the correlation information follows each of the dependent transactions.

As a result, when we ingest this data into Splunk, we can literally stitch all the transactions together and see where requests are coming from. Let’s say it’s a WebSphere server request, or maybe it came in through MQ, HTTP, or even a 3270 data stream. Whatever its source, we’re able to show you an end-to-end picture of the hybrid transaction by virtue of having annotated the SMF records with origin data and metadata.
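As a rough illustration of what that stitching enables, the sketch below tallies the transaction chains each request source drives. Again, the field names (correlation_id, origin, tran_id, start_time) are placeholders for whatever your enriched SMF 110 records actually carry.

```python
# Illustrative sketch: once each SMF 110 record carries origin data and a
# correlation ID (hypothetical field names below), you can tally which
# transaction chains each request source drives.
from collections import Counter, defaultdict

def chains_by_origin(smf_events):
    """Rebuild per-request transaction chains and count them per origin (HTTP, MQ, 3270, ...)."""
    by_request = defaultdict(list)
    for ev in smf_events:
        by_request[ev["correlation_id"]].append(ev)

    counts = Counter()
    for events in by_request.values():
        events.sort(key=lambda ev: ev["start_time"])
        origin = events[0].get("origin", "unknown")          # e.g. "HTTP", "MQ", "3270"
        chain = " -> ".join(ev["tran_id"] for ev in events)  # e.g. "TRNA -> TRNB"
        counts[(origin, chain)] += 1
    return counts

# Example: print the three most common origin/chain combinations
# for (origin, chain), n in chains_by_origin(events).most_common(3):
#     print(f"{n:>8}  {origin:<6} {chain}")
```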

Q: Our organization is always looking for ways to free up budget money to spend on modernization. What are some of the best areas to focus on if we want to do that?

Phil: One of the first places to look is the efficiency of the applications and how the workload flows. At most mainframe clients, the processes typically kick off and follow a certain pattern throughout the day, week, and month. We always look at how the use of those assets drives software costs. In this environment, software tends to be priced based on the peak utilization of the system over a month. That’s true of IBM software (monthly license charge software, the operating system, and so on), and it’s also true for most mainframe tooling, from IBM as well as from other vendors.

We look to see whether the software is being used effectively by looking at all the software in use. Are you using it productively, or are you spending money on software you don’t necessarily need? Then the other thing we’ll do is a very detailed analysis of when different workloads peak throughout the month, which in turn drives the software costs for those products or tools. For clients who haven’t really followed through on how their performance tuning has changed over the years, it’s not unusual for us to find peak reductions of 10, 15, or 20%.

We also look at how workload is distributed across LPARs and processing shifts. It’s often possible to balance workload by moving it to later shifts that aren’t heavily used. So we look both at where the peaks are for software usage and at software that’s really not used very often or is redundant with other software.
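As a back-of-the-envelope illustration of the peak analysis Phil describes, the sketch below computes a rolling four-hour average (the metric that typically underlies monthly-license-charge pricing) from hourly MSU samples and shows how a 15% peak reduction flows straight into the number the software is priced on. The sample data is made up.

```python
# A minimal sketch of the peak analysis described above, assuming hourly MSU
# samples for the month. For MLC-style software the pricing metric is typically
# the peak rolling four-hour average (R4HA).
def rolling_4h_average(hourly_msu):
    """Rolling four-hour average of hourly MSU samples."""
    return [sum(hourly_msu[max(0, i - 3): i + 1]) / min(i + 1, 4)
            for i in range(len(hourly_msu))]

def monthly_peak(hourly_msu):
    """Peak R4HA for the month; this is what typically drives the software bill."""
    return max(rolling_4h_average(hourly_msu))

if __name__ == "__main__":
    # Hypothetical month of hourly samples: a quiet baseline with one morning spike
    samples = [300] * 200 + [480, 510, 530, 525, 490] + [300] * 515
    peak = monthly_peak(samples)
    print(f"Current peak R4HA: {peak:.0f} MSU")
    # Deferring or tuning the work that runs in the peak window reduces this
    # number directly, and with it the month's software charges.
    print(f"Peak after a 15% reduction: {peak * 0.85:.0f} MSU")
```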

Q: What are the main reasons that drive an application to be targeted for API enablement?

Russ: From the top down, it’s an issue of strategic intent. If an application is part of your demand chain or supply chain interactions, it probably falls into the category of critical application infrastructure. Such applications run the business, have high-availability requirements, and therefore need high-performing, reliable integration solutions. These are the kinds of applications that benefit most from being made available via an API.

In our work with customers, we usually see repetitive interaction patterns with these applications. With one customer, we observed that every morning, every sales rep clicked a “Refresh” button in a particular spreadsheet. They then went for coffee, because when they clicked the button, a macro ran about four to five thousand CICS transactions. That volume of interaction between a client spreadsheet and your mainframe is a good indicator of the need to wrap an API around the process. This interaction pattern, a flurry of interactions back and forth, is highly inefficient and generates latency like crazy.
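The arithmetic behind that anecdote is worth spelling out. The numbers below are hypothetical, but they show why a flurry of screen-level interactions feels so slow compared with one coarse-grained API call that does the aggregation next to CICS.

```python
# Back-of-the-envelope numbers (all hypothetical) contrasting the "Refresh"
# macro's screen-scraping pattern with a single coarse-grained API call.
interactions_per_refresh = 4_500   # screen-level CICS transactions the macro drives
round_trip_ms = 40                 # network + emulation latency per interaction
sales_reps = 200                   # reps clicking Refresh every morning

chatty_ms = interactions_per_refresh * round_trip_ms
api_ms = round_trip_ms + 1_500     # one request, with the aggregation done next to CICS

print(f"Per refresh, screen scraping: {chatty_ms / 1000:.0f} s")
print(f"Per refresh, single API call: {api_ms / 1000:.1f} s")
print(f"CICS transactions avoided per day: {sales_reps * (interactions_per_refresh - 1):,}")
```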

Phil: Let me embellish that a little bit. Prioritizing the interactions we’re going to optimize becomes very important. It’s not only the strategic nature of the application we should consider. Executives tend to like us to go after the low-hanging fruit first, to get the biggest return for the smallest possible investment.

As we get into reviewing applications with you, we’re also looking at things like: Do you have an architect who really has a good understanding of not only that application, but also how it interacts with other applications? In addition, what documentation exists? What are the skill levels of the developers who would actually go in and make the changes?

And then, of course, there is an assessment of the structure of the application itself. There’s a lot to consider, but Russ is right. It starts with prioritizing the applications that are most important to your organization, followed by the ones where you can demonstrate early success to your executive team and get significant benefit.

Russ: I might just add one more thing. We’ve done a great deal of work in this area. I’ve mentioned 3270 terminal emulation and screen scraping as an integration technique, and there’s just so much of that still out there. So much so that we’ve actually built specific technology, based around Splunk, to deduce what we call the DNA of these robotic or automated processes. When we run your SMF 110 data through our process, we’re able to see the sequence and flow of the automation. Our objective is to show the DNA at a level of detail where an application subject matter expert can look at it and have an “Aha!” moment.

A lot of these RPA bots or Excel macros were developed for a good reason. Someone thought it was easier to write an Excel macro or a bot than to call up the IT department and say, “Gee, I’d really like this functionality.” A decade later, we’re still running those macros. Our dashboards will show you the DNA of these automated processes. From there, it’s almost child’s play to decide where the top optimization priorities are.
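For readers who want a feel for how that “DNA” can be surfaced, here is a rough sketch of the underlying idea (not our actual algorithm): transaction-ID sequences that repeat verbatim, session after session, are the signature of a bot or macro rather than a person. The field names below are placeholders.

```python
# A rough sketch of the idea (not the actual product algorithm): transaction-ID
# sequences that repeat verbatim, session after session, are the signature
# of a bot or macro. Field names are hypothetical.
from collections import Counter, defaultdict

def repeated_sequences(smf_events, length=5, min_repeats=50):
    """Find fixed-length runs of transaction IDs that recur heavily within sessions."""
    sessions = defaultdict(list)
    for ev in sorted(smf_events, key=lambda ev: ev["start_time"]):
        sessions[ev["terminal_id"]].append(ev["tran_id"])   # or group by user / correlation ID

    ngrams = Counter()
    for trans in sessions.values():
        for i in range(len(trans) - length + 1):
            ngrams[tuple(trans[i:i + length])] += 1

    return [(seq, count) for seq, count in ngrams.most_common() if count >= min_repeats]

# Sequences that show up hundreds of times a day are candidates to replace
# with a single API that returns the same data in one call.
```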

Q: We have hundreds of apps running on our mainframe. How can we figure out what to prioritize for whatever form of modernization we pursue: refactoring, replatforming, and so on?

Phil: It involves looking at how your mainframe systems perform over time and where the peaks occur. As an example, you may run applications you consider stable, in which you’re not really investing; you could replatform them without too much trouble. But those applications tend to run in what we call a trough, an off-peak period that is already low cost.

By doing an analysis of your applications, you might find some are running in that trough. It’s really not worthwhile to consider replatforming them.

By using the data and doing the analysis Russ talked about, you can find the applications that drive a lot of activity during peak time. These are the applications you want to target first for modernization. And then the other important thing, as you’re on your transformation journey and putting up services, is to ask which cloud-native applications would really get the benefit.
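A simple way to picture that triage: given per-application CPU by hour, rank the applications by how much they contribute during the hours that set the monthly peak. The sketch below is illustrative only, not a product feature.

```python
# Illustrative triage: given per-application MSU by hour for the month, rank
# applications by how much they contribute during the hours that set the peak.
def peak_window_contributors(app_hourly_msu, peak_hours, top_n=10):
    """Rank applications by CPU consumed during the peak window.

    app_hourly_msu: {app_name: [msu_per_hour, ...]} for the month
    peak_hours: indices of the hours that drive the monthly peak
    """
    contribution = {
        app: sum(series[h] for h in peak_hours)
        for app, series in app_hourly_msu.items()
    }
    return sorted(contribution.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Applications near the top of the list are first-pass modernization targets;
# applications that only run in the trough can usually wait.
```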

Success also requires good communication. We recommend that you improve organizational communication within the company. Form a council made up of key application owners from the mainframe side and the line-of-business folks who drive revenue and profit. Most of the time, the latter are not aware of what exists on the mainframe: assets already there that they can just tap into. For this reason and others, it’s really important to have cross-functional communication between business areas and IT, as well as between mainframe and non-mainframe teams within IT.

Learn About Our Optimization and Integration Analysis Services

Learn how HostBridge and Weintraub Systems can complete an optimization analysis for you by reaching out to us using the contact information at the bottom of this page.
