Sunday 26 June 2022

Hybrid environments

I can remember, as I’m sure you can, back in the 1990s when everyone was sure that the mainframe was pretty much on life support and anyone who wanted to stay in business should be migrating all their applications to Linux servers or Windows servers.

To be honest, it was a bad time for IBM. The 1990s was probably the last decade when different computing platforms pretended that other computing platforms didn’t exist, and that their model of how to do things was the only way. By the end of the decade, things like SNA were more or less mothballed and everything had an IP address. The mainframe had realized that it needed to be part of the bigger computing environment.

As always in the history of the world, some companies (or empires) became huge and then disappeared, while others soldiered on. Everyone seemed to work on laptops running Windows or Linux, and they were able to communicate with any server, including mainframes.

The rapid improvements in PC technology meant that people could work from almost anywhere without needing cables constantly plugged into their personal computers. Battery life became long enough to last the whole working day – so, no need to plug into the mains – and WiFi meant there was no need to plug into the network.

Mainframes continued the trend of being a part of the wider IT environment. If anything worked well on distributed platforms, it was very likely to work better on a mainframe. Linux became part of the mainframe environment. In fact, it’s possible to run mainframes just with Linux, and not have z/OS at all.

Containerized applications, Kubernetes, and much more will work on a mainframe. Things like Ansible, for IT automation, work on a mainframe. Zowe allows non-mainframers to control mainframes. VS Code can be used to develop applications on a mainframe. DevOps is a mainframe thing. And there’s so much more. The point I’m making is that mainframes are very much open to utilizing the best technology around. The mainframe isn’t parochial, it isn’t old-fashioned, and it has been developing hugely since it first appeared in the 1960s.

Mainframe APIs can connect to APIs on mobile platforms, the Web, and the cloud to create new and exciting combined applications using REST. The mainframe is a player in the total computing environment.
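To make that concrete, here’s a minimal sketch of driving a mainframe REST API from off-platform, in this case the z/OSMF jobs interface. The host name, credentials, and job-name prefix are placeholders, and it assumes z/OSMF is configured and reachable over HTTPS:

# A minimal sketch of calling a mainframe REST API from off-platform
# via z/OSMF's jobs service. The host, credentials, and job-name
# prefix below are placeholders, not real values.
import requests

ZOSMF_HOST = "mainframe.example.com"   # hypothetical host name
BASE_URL = f"https://{ZOSMF_HOST}/zosmf/restjobs/jobs"

resp = requests.get(
    BASE_URL,
    params={"owner": "*", "prefix": "MYJOB*"},  # filter the job list
    headers={"X-CSRF-ZOSMF-HEADER": "true"},    # header required by z/OSMF REST services
    auth=("userid", "password"),                # placeholder credentials
    verify=True,                                # validate the server certificate
)
resp.raise_for_status()

# The response is a JSON array of job descriptors
for job in resp.json():
    print(job["jobname"], job["jobid"], job["status"])

Exactly the same request could come from a cloud-hosted service or a mobile back end – which is rather the point of REST: the caller neither knows nor cares that a mainframe is answering.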

However, the 2020s have seen a return to some of the outsider views of the 1990s. History seems to be repeating itself. ‘Experts’ are saying that the only way forward, the only way to modernize the mainframe, is to dump it. The mainframe should be consigned to the history books. And, rather than migrating everything off the mainframe to Linux or Windows, in 2022 the new Promised Land is the cloud. Once every bit of workload has been migrated to the cloud, all your computing worries will be over!

If experience has taught us anything, it’s that this kind of ‘all or nothing’ thinking is seldom if ever right. In fact, ‘all or nothing thinking’ is recognized by Cognitive Behavioural Therapy (CBT) as one of a number of common cognitive distortions.

So, if the mainframe is so wonderful, as I’ve been explaining, should we simply ignore the cloud? The one-word answer to that is an obvious and resounding ‘no’. The cloud offers some huge advantages over the mainframe for certain types of workload. The cloud should be treated like laptops insofar as both are very good at what they do, and both should be included as part of an organization’s total IT environment.

When deciding which workloads are worth migrating to the cloud, it’s probably worth looking at things the other way round, ie which workloads need to stay on the mainframe. There are probably two types of application that need to stay there. The first is applications with high security needs. Mainframes are clearly the most secure platforms available, with things like pervasive encryption, which became available with the z14 mainframe and encrypts data at rest as well as in flight. IBM also introduced Data Privacy Passports.

The second type of application that should stay on the mainframe is the kind that uses a large amount of data. IMS DB, in particular, was developed to allow data to be accessed very quickly in order to reduce the elapsed time for running an application. Where accessing and using data is time critical, those applications need to stay on a mainframe.

There is a third type of application that’s best left on the mainframe, and that is for sites that have a z16 mainframe, with its Telum chip, and are concerned about credit-card fraud. The Telum processor provides on-chip AI inferencing, which allows banks to analyse transactions for fraud on a massive scale. IBM assures us that the z16 can process 300 billion inference requests per day with just one millisecond of latency. Users of the z16 will be able to reduce the time and energy required to handle fraudulent credit-card transactions. For both merchants and card issuers, this could mean less lost revenue, because consumers would be spared the frustration of false declines that might push them towards other cards for future transactions.

Any other workloads can then be evaluated to see whether they would benefit from being run in the cloud, and, if so, they can be migrated to that environment. It’s also possible that certain applications can only run in the cloud – things that use big data for data analytics, for example. In addition, application development work often progresses faster using the cloud.

So, a hybrid environment that uses the best features of the cloud makes a lot of sense, in the same way that using the best laptops and tablets makes sense in order to get the job done. But a still very important part of the digital environment for any large organization is the mainframe. Don’t throw the baby out with the bathwater.
