Sunday, 3 July 2022

The latest z/OS enhancements

The end of June saw the second quarter enhancements to z/OS. In case you missed them, here are the key features from the announcement.

  • z/OS Parallel Sysplex(R) enhancements for IBM z16(TM) CFLEVEL 25: z/OS now supports the new z16 coupling facility, CFLEVEL 25, which brings a variety of improvements to Parallel Sysplex performance, scalability, and resiliency.
  • CFRM support for 4096 CF structures in a Parallel Sysplex: Previously announced support for up to 4096 CF structures in a sysplex has been temporarily disabled and withdrawn, pending complete resolution of the issues found with the support.
  • z/OS support for System Recovery Boost: Enhancements to accompany the IBM z16 can provide boosted processor capacity and parallelism for specific recovery events. Client-selected middleware starts and restarts may be boosted to expedite recovery for middleware regions and restore steady-state operations as soon as possible. SVC dump processing and HyperSwap(R) configuration load and reload may be boosted to minimize the impact on running workloads.
  • z/OS upgrade improvements for IBM z16: The z/OS IBM z16 Upgrade Workflow has been provided in a program temporary fix (PTF) to help position z/OS for usage on the new IBM z16 server.
  • z/OSMF ServerPac update: Now that the z/OSMF Software Management data set merge function is available and has been enabled for the z/OSMF ServerPac Portable Software Instance, the CustomPac Dialog installation method is planned for removal.
  • IBM z/OS Management Facility (z/OSMF) enhancements: New capabilities allow users to better leverage existing CFRM Policy Editor functions, support creating or deleting USS symbolic links, and easily validate the connection status of managed systems.
  • Cloud Provisioning and Management for z/OS: Enhancements expand the type of potential configurations a user can modify during the provisioning process.
  • z/OS Management Services Catalog enhancements: Numerous enhancements to the user experience flow have been delivered to efficiently create and manage z/OS management services. In addition, two new sample services are provided.
  • z/OS Job REST Completion Notification to eliminate Common Information Model (CIM): The asynchronous job completion notification function of the REST jobs API has been enhanced to eliminate the dependency on CIM and the Common Event Adapter (CEA); a short sketch of calling this API appears after the list.
  • z/OS Container Extensions (zCX) enhancements: A new performance improvement has been delivered to reduce the chances of lock contention in high frequency code paths.
  • Data Set File System: The new physical file system renders traditional z/OS data sets accessible by programs, shell scripts, and end users of z/OS UNIX(R) System Services.
  • COBOL-Java(TM) interoperability: The IBM Semeru Runtime Certified Edition for z/OS has been enhanced to provide 31-bit/64-bit interoperability support.
  • New IBM Open XL C/C++ 1.1 compiler: A new component for z/OS V2.4 and V2.5 adds C/C++ language standards support, ideal for z/OS UNIX users porting applications from distributed platforms.
  • Resource Measurement Facility (RMF) and Advanced Data Gatherer (ADG) enhancements: Support has been added to report on the new Crypto Express 8S card of the IBM z16 and allow machine configurations with up to 256 physical processors.
  • MEMLIMIT diagnostics for CICS(R) and Java enhancements: Serviceability is enhanced for MEMLIMIT diagnosis in high virtual memory.
  • z/OS Encryption Readiness Technology (zERT) Network Analyzer enhancements: Passphrase support is added, and usability is improved for database connection authentication.
  • RACF(R) Database Encryption: This function allows an installation to encrypt a VSAM data set that is used as a part of a RACF database as well as share that data set among z/OS systems in certain configurations to help further strengthen the overall security posture of the z/OS platform.
  • Compliance support for z/OS: z/OS has been enhanced to modernize compliance data reporting. This support enables the collection of compliance data from numerous IBM z16 and z/OS products and components, and simplifies auditing by publishing z/OS hardening guidelines.
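
To put the REST jobs item above into code, here is a minimal sketch of querying a job’s status through the z/OSMF REST jobs API from Python. It is illustrative only: the host name, credentials, job name, and job ID are placeholders, and it assumes the client already trusts the z/OSMF server certificate.

    import requests

    ZOSMF = "https://zosmf.example.com"   # placeholder z/OSMF host
    AUTH = ("ibmuser", "secret")          # placeholder credentials

    def job_status(jobname: str, jobid: str) -> str:
        """Return the job's status field (INPUT, ACTIVE, or OUTPUT)."""
        resp = requests.get(
            f"{ZOSMF}/zosmf/restjobs/jobs/{jobname}/{jobid}",
            auth=AUTH,
            # z/OSMF's CSRF protection expects this header on REST requests
            headers={"X-CSRF-ZOSMF-HEADER": "true"},
        )
        resp.raise_for_status()
        return resp.json()["status"]

    print(job_status("MYJOB", "JOB00123"))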

The enhancements should be available now.

z/OS Version 2.5 runs on the following mainframe models:

  • IBM z16 Model A01
  • IBM z15(TM) Models T01 and T02
  • IBM z14(R) Models M01-M05
  • IBM z14 Model ZR1
  • IBM z13(R)
  • IBM z13s(R)

It’s great to see IBM embracing quarterly updates on z/OS and many of its other products. And it’s also great to see the useful updates that are being produced each quarter.

Just to mention, the latest version of IMS is now available. According to IBM, Version 15.3 “streamlines your journey to the cloud”. You can read the details here.

Sunday, 26 June 2022

Hybrid environments

I can remember, as I’m sure you can, back in the 1990s when everyone was sure that the mainframe was pretty much on life support and anyone who wanted to stay in business should be migrating all their applications to Linux servers or Windows servers.

To be honest, it was a bad time for IBM. The 1990s was probably the last decade when different computing platforms pretended that other computing platforms didn’t exist, and that their model of how to do things was the only way. By the end of the decade, things like SNA were more-or-less mothballed and everything had an IP address. The mainframe had realized that it needed to be part of the bigger computing environment.

As always in the history of the world, some companies (or empires) became huge and then disappeared, while others soldiered on. Everyone seemed to work on laptops running Windows or Linux, and they were able to communicate with any servers, including mainframes.

The rapid improvements in PC technology meant that people could work from almost anywhere without needing cables constantly plugged into their personal computers. Battery life became long enough to last all (the working) day – so, no need to plug into the mains – and WiFi meant there was no need to plug into the network.

Mainframes continued the trend of being a part of the wider IT environment. If anything worked well on distributed platforms, it was very likely to work better on a mainframe. Linux became part of the mainframe environment. In fact, it’s possible to run mainframes just with Linux, and not have z/OS at all.

Containerized applications, Kubernetes, and much more will work on a mainframe. Things like Ansible, for IT automation, work on a mainframe. Zowe allows non-mainframers to control mainframes. VS Code can be used to develop applications on a mainframe. DevOps is a mainframe thing. And there’s so much more. The point I’m making is that the mainframe is very much open to utilizing the best technology around. It’s not parochial, it’s not old-fashioned, and it has been developing hugely since it first appeared in the 1960s.

Mainframe APIs can connect to APIs on mobile platforms, the Web, and cloud, to create new and exciting combined applications using REST. The mainframe is a player in the total computing environment.
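
As a rough illustration of how simple that combination can be, the sketch below calls a REST API fronting a mainframe transaction and passes the result on to a cloud-hosted REST service. Both URLs and the JSON fields are hypothetical – something like the first endpoint could be exposed through z/OS Connect – so treat this as a shape, not a recipe.

    import requests

    def account_balance(account_id: str) -> float:
        # Hypothetical REST endpoint fronting a mainframe transaction
        r = requests.get(
            f"https://mainframe.example.com/banking/accounts/{account_id}",
            timeout=10,
        )
        r.raise_for_status()
        return r.json()["balance"]

    def push_to_cloud(account_id: str, balance: float) -> None:
        # Hypothetical cloud REST service consuming the mainframe data
        requests.post(
            "https://api.cloud.example.com/alerts",
            json={"account": account_id, "balance": balance},
            timeout=10,
        ).raise_for_status()

    acct = "12345678"
    push_to_cloud(acct, account_balance(acct))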

However, the 2020s have seen a return to some of the outsider views of the 1990s. History seems to be repeating itself. ‘Experts’ are saying that the only way forward, the only way to modernize the mainframe, is to dump it. The mainframe should be consigned to the history books. And, rather than migrating everything off the mainframe to Linux or Windows, in 2022 the new Promised Land is cloud. Once every bit of workload has been migrated to the cloud, all your computing worries will be over!

If experience has taught us anything, it’s that this kind of ‘all or nothing’ thinking is seldom if ever right. In fact, ‘all or nothing thinking’ is recognized by Cognitive Behavioural Therapy (CBT) as one of a number of common cognitive distortions.

So, if the mainframe is so wonderful, as I’ve been explaining, should we simply ignore the cloud? The one-word answer to that is an obvious and resounding ‘no’. The cloud offers some huge advantages over the mainframe for certain types of workload. The cloud should be treated like laptops insofar as both are very good at what they do, and both should be included as part of the total IT environment used by an organization.

When deciding which workloads are worth migrating to the cloud, it’s probably worth looking at things the other way round, ie which workloads need to stay on the mainframe. There are probably two types of application that need to stay on the mainframe. The first is applications with high security needs. Mainframes are clearly the most secure platforms available, with features like pervasive encryption, which became available with the z14 mainframe. Data is encrypted at rest as well as in flight. IBM also introduced Data Privacy Passports.

The second type of application that should stay on the mainframe is applications that use a large amount of data. IMS DB, in particular, was developed to allow data to be accessed very quickly in order to reduce the elapsed time for running an application. Where accessing and using data is time critical, those applications need to stay on a mainframe.

There is a third type of application that’s best left on the mainframe: fraud detection at sites that have a z16 mainframe, with its Telum chip. The Telum processor provides on-chip AI inferencing, which allows banks to analyse transactions for fraud as they happen, and at massive scale. IBM assures us that the z16 can process 300 billion inference requests per day with just one millisecond of latency. Users of the z16 will be able to reduce the time and energy required to handle fraudulent credit-card transactions. For both merchants and card issuers, this could mean a reduction in revenue loss, because consumers could avoid the frustration of false declines that might push them towards other cards for future transactions.

Any other workloads can then be evaluated to see whether they will benefit from being run in the cloud, and they can be migrated to that environment. It’s also possible that certain applications can only run in the cloud – things that use big data for data analytics. In addition, application development work often progresses faster using the cloud.

So, hybrid working using the best features of the cloud makes a lot of sense, in the same way that using the best laptops and tablets makes sense in order to get the job done. But, a still very important part of the digital environment for any large organization is the mainframe. Don’t throw the baby out with the bathwater.

Sunday, 19 June 2022

Mainframes, cloud, and technical debt
When it comes to developing new software on any platform – not just mainframes – we’re probably all familiar with the three choices available. It can be good, or quick, or cheap. And you get to choose two out of the three.

If you want it done quickly and you want it to be good, then it can’t be cheap. If you want it developed quickly and cheaply, then it won’t be any good. Or if you want it to be cheap and really good, then it will take a long time.

Those of you who have worked in project management have probably come across Brooks’ Law. Before we talk about that, let’s look at a simple maths problem. If it takes a man (in these more enlightened times I obviously mean a person) five days to build a wall, how long will it take two men? With the simplicity of mathematics, the answer is clearly two and a half days. So, if you apply that logic to your IT project, the more people you have working on the project, the fewer days the project will take! Anyone who has ever worked on a project knows that just isn’t true. And that’s where Brooks’ Law comes in.

Fred Brooks published his book, "The Mythical Man-Month", in 1975. He concluded that adding manpower to a late project simply made it later. He explained this by saying that it took time for new people to get the hang of a project – what he called the ramp-up phase. During that time, other project members were less productive because they were showing the new people what to do. And the new people weren’t productive until they knew what to do.

The other problem he identified was the communication overhead. As more people join a project, more time is spent finding out what everyone else is doing on their part of the project.
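
To make both effects concrete – the wall-building arithmetic and the communication overhead – here is a toy model in Python. The 5% per-teammate coordination cost is invented purely for illustration; Brooks offered no such formula.

    def naive_days(work_days: float, people: int) -> float:
        """The wall-building arithmetic: effort divides perfectly."""
        return work_days / people

    def brooks_days(work_days: float, people: int, pair_cost: float = 0.05) -> float:
        """Assume each person loses pair_cost of every day for each
        teammate they must keep in sync with, so adding people has
        diminishing (and eventually negative) returns."""
        lost = pair_cost * (people - 1)   # fraction of each day lost
        productive = max(0.0, 1.0 - lost)
        if productive == 0.0:
            return float("inf")           # all coordination, no work
        return work_days / (people * productive)

    # Five days of work: naive arithmetic vs the Brooks-style model
    for n in (1, 2, 5, 10, 15):
        print(n, round(naive_days(5, n), 2), round(brooks_days(5, n), 2))

In this toy model, ten people finish in about 0.9 days but fifteen take about 1.1 days – the extra hands cost more in coordination than they contribute.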

While he agreed that some tasks were simple and could easily be divided up amongst a group of people – eg cleaning rooms in hotels – even that reached a point where adding more people would mean they just got in each other’s way. And, besides, most projects require specialist work and aren’t that easily divisible.

Going back to our original three pillars of project development – quick, cheap, good – one common result of doing something quickly is what’s called technical debt. Technical debt is the implied cost of additional reworking on a project caused by choosing an easy (limited) solution now rather than using a better approach that would take longer. A proof-of-concept project is usually full of technical debt, but that’s acceptable because all that was needed was to prove that something could be done. However, for most projects, as with financial debt, there will come a time in the future when that debt has to be paid. In terms of the project, it means that the software will have to be updated or even completely rewritten.

The consequence of that is that it will take time and will have a cost implication. The question that organizations face is whether to get the code right from the start – with the implication that the project will take much longer than was planned, and will therefore cost more than originally planned. Or does the project team go ahead with the project as is, and leave a different project team to pay off the technical debt at some time in the future, when the original team members are probably working for different companies or doing different jobs for the same company? At least the original team will make their deadlines and receive any bonuses associated with that.

For mainframe sites, ‘paying’ the technical debt can result in downtime for the application and a particular service being unavailable to internal or external customers. That can result in service level agreements not being met, which can have financial consequences in addition to those associated with rewriting the code.

Internal projects are one thing, but do the same rules apply when you are buying software from a known mainframe software vendor? Is it worth skipping Version 1 and waiting until all the kinks have been ironed out and waiting for Version 2 to be released? And does the same apply to new software services like those offered by cloud providers?

For cloud providers like AWS and others, there’s money to be made from companies that are currently running their own servers, and there’s even more money to be made from companies that have mainframes. And that’s why mainframe sites are being targeted by the marketing arm of these cloud giants.

That’s why AWS has announced its Mainframe Modernization service, a development and runtime environment designed to make it easier for customers to modernize and run their mainframe workloads on AWS. AWS also offers mainframers the option to maintain existing applications as they are and re-platform them to AWS with minimal code changes.

All this is powered by the Micro Focus Enterprise Server, which lets IBM mainframe applications run on Linux or Windows servers. There are no upfront costs, and customers only pay for what they use.

System integrators will work with mainframe-using organizations to discover mainframe workloads, assess and analyse migration readiness, and then plan the migration and modernization projects.

That all sounds very good. I guess the question at the back of my mind is whether this migration will create any technical debt.