Sunday, 28 August 2022

Mainframe-cloud hybrid working – part 2

This is the second part of a two-part article.

Cloud provider choices

So, who are the big cloud infrastructure providers? In Q1 of 2022, the top three were AWS (33% market share), Microsoft Azure (21%) and Google Cloud (8%). IBM has an estimated 4%.

For mainframe developers, IBM’s newly launched Wazi as a Service provides a way to create development and test systems in IBM Cloud Virtual Private Cloud.

IBM has also developed an IBM Z and Cloud Modernization Center as part of its commitment to making it easier to upgrade mainframe programs. The objective is to provide a uniform platform that helps organizations identify best practices for achieving this.

Other mainframe options include Red Hat® OpenShift®, the market-leading hybrid cloud container platform. With Red Hat OpenShift, mainframe sites can develop and consume cloud services anywhere and from any cloud.

IBM also offers IBM Cloud Pak® solutions, an AI-infused software portfolio that runs on Red Hat OpenShift. These solutions can help organizations advance digital transformation with data insights, prediction, security, automation and modernization, across any cloud environment.

In June, AWS launched its AWS Mainframe Modernization service, which is designed to make it faster and easier for customers to modernize mainframe workloads by moving them to the cloud. Customers can refactor their mainframe workloads to run on AWS. Alternatively, they can keep their applications as written and replatform them to AWS, reusing existing code with minimal changes. A managed runtime environment built into AWS Mainframe Modernization provides the compute, memory, and storage needed to run both refactored and replatformed applications, and helps automate the details of capacity provisioning, security, load balancing, scaling, and application health monitoring.

AWS also says the service provides the development, testing, and deployment tools necessary to automate the modernization of mainframe applications to run on AWS, and that customers pay only for the compute they provision.

Of course, migrating mainframe workloads to the cloud involves a number of steps to discover, assess, test, and operate the new workload environments. Customers also need to configure, run, and operate mainframe systems with application development and deployment best practices in the new cloud environments. AWS says its Mainframe Modernization service integrates the tools needed to do this.

AWS partners that can help organizations migrate to AWS include: Accenture, DXC Technology, Tata, Atos, Micro Focus, and Infosys.

Similarly, Microsoft Azure offers help in migrating mainframe applications to its platform. Third-party companies that can help include: TmaxSoft with OpenFrame, and Asysco with its AMT products. LzLabs can also help people migrate to Azure.

And Google Cloud allows mainframe sites to migrate to services such as Compute Engine (virtual machines) and Google Kubernetes Engine. Partner companies that can help with the migration include Advanced and LzLabs.

Known data challenges

Some known challenges when it comes to migrating mainframe data to the cloud include the following (a short code sketch follows the list):

  • Code page translation (CCSIDs)
  • Invalid data
    • Non-numeric data in numeric fields
    • Binary zeros in packed fields (or any fields)
    • Invalid data in character fields
  • Dates
    • Must be decoded/validated if the target column is DATE or TIMESTAMP
    • May require knowledge of the Y2K implementation
    • Allow extra time for data-intensive applications
  • Repeating groups
    • Sparse arrays
    • Number of elements
    • Will probably be de-normalized
  • Redefines of binary/'special' fields
    • Common in older applications developed in the 1970s/80s
    • Generally require application-specific translation.
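
To make the code page and invalid-data bullets concrete, here is a minimal Python sketch of the kind of field-level validation involved. The field contents, the cp037 code page, and the helper names are illustrative assumptions, not taken from any particular application:

    import codecs

    def unpack_comp3(raw: bytes) -> int:
        """Decode a COMP-3 (packed decimal) field, validating every nibble."""
        if not raw:
            raise ValueError("empty packed field")
        sign = raw[-1] & 0x0F                  # low nibble of the last byte holds the sign
        digits = []
        for i, byte in enumerate(raw):
            nibbles = (byte >> 4,) if i == len(raw) - 1 else (byte >> 4, byte & 0x0F)
            for n in nibbles:
                if n > 9:                      # digit nibbles must be 0-9; anything else is corrupt
                    raise ValueError(f"invalid digit nibble 0x{n:X} in {raw.hex()}")
                digits.append(n)
        if sign not in (0x0C, 0x0D, 0x0F):     # C = +, D = -, F = unsigned; 0x0 (binary zeros) fails here
            raise ValueError(f"invalid sign nibble 0x{sign:X}")
        value = int("".join(map(str, digits)))
        return -value if sign == 0x0D else value

    def translate_char(raw: bytes, ccsid: str = "cp037") -> str:
        """Translate an EBCDIC character field (here CCSID 37) to a Python string."""
        return codecs.decode(raw, ccsid).rstrip()

    # 0x12345C packs +12345; 0x000000 is the classic 'binary zeros in a
    # packed field' problem and raises ValueError on the sign nibble.
    assert unpack_comp3(bytes.fromhex("12345C")) == 12345
    print(translate_char(bytes.fromhex("C8C5D3D3D6")))  # -> HELLO

Multiply checks like these across thousands of fields and millions of records and it becomes clear why data validation deserves its own line in any migration plan.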

Mainframe subsystems like IMS were written with highly-optimized data access techniques to make applications run faster. That same optimization is what makes getting data out of IMS harder.

Workloads perhaps best left on the mainframe

Before the migration/modernization process starts, it's worth considering which workloads would be best left on the mainframe. Firstly, there are those applications with high security needs. As discussed in part 1, modern mainframes are very secure. Secondly, applications that use a large amount of data should stay on the mainframe. IMS DB, in particular, as mentioned above, was developed to allow data to be accessed very quickly in order to reduce the elapsed time for running an application. Keep any applications on the mainframe where data access is time critical. Thirdly, applications that use the on-chip AI inferencing available on z16s, for example to help prevent credit card fraud, should also stay put. Other applications should be evaluated to see whether there is anything about them that makes the mainframe a better environment for running them. If not, they are candidates for the migration process.

A data-centric approach

Model9 takes a data-centric approach to mainframe modernization. It migrates mainframe-format data to the cloud via zIIP engines and TCP/IP, and stores it in cloud object storage. Mainframe professionals can also use modern cloud data management software for the mainframe to create a bi-directional sync between the mainframe and the cloud that does not require tape or emulated tape technologies to function.

They suggest that this approach:

  • Delivers immediate value while taking steps toward long-term modernization. Instead of one large, complex project that is doomed to fail, using software to move mainframe data to the cloud is a more agile approach in which users can define smaller, more manageable sprints.
  • Breaks the silo surrounding mainframe data. Placing the data into the cloud – even when still in mainframe format – changes the mainframe from an isolated island into an integrated part of the technology architecture, giving mainframe professionals a seat at the broader IT table.
  • Can be used to securely eliminate tape/VTL backups, which brings cost savings and can improve disaster recovery – both in speed and accessibility in recovery scenarios.

Which other companies can help?

Other companies that can help with migration include: Chetu, Ensono, Mainframecloud, and VirtualZ.

VirtualZ has Lozen, which allows custom and packaged applications running anywhere – in the cloud, on distributed platforms or on mobile devices – to have real-time, read-write access to always-in-sync data on the IBM Z platform.

And there are more companies getting into this business all the time.

Conclusion

Using the cloud as part of the IT infrastructure is something mainframe sites will become as familiar with as using laptops to access the Internet or create spreadsheets. The issue at the moment is for mainframe sites to decide how they are going to use the cloud environment in the way that suits them best. There are currently a number of different choices and strategies available. Whatever choices are made, no-one should be thinking that mainframes are going to disappear any time soon.

Sunday, 21 August 2022

Mainframe-cloud hybrid working – part 1

This is the first part of a two-part article.

I think most mainframe sites are realizing that using the cloud is not some kind of betrayal of their previous computing environment, but a way to enhance the computing power available to their organization. It also comes with ways to control the cost of IT spending, as well as the ability to increase or decrease computing capacity depending on the needs of the applications. In addition, some things, like data analytics and Artificial Intelligence (AI), work so much better in a cloud-based environment.

Cloud advantages

The benefits usually cited by cloud companies for the Mainframe as a Service (MFaaS) model include:

  • Scaling – organizations can scale up or down according to their needs. 
  • Maintenance and upgrades – maintenance and upgrades to IT infrastructure are handled by the cloud provider. In addition, organizations pay only for the compute and storage they use.
  • Business continuity – the cloud provider handles any IT issues, reducing noticeable downtime.
  • Predictable costs – the price for services is agreed ahead of time, which makes budgeting easier. 
  • Support – cloud companies provide continuous support. 
  • Security.

Other advantages include better business agility and a larger pool of potential staff with cloud computing experience.

Cloud providers also talk about moving from outdated mainframe technology!

Mainframe advantages

The big argument for mainframes is that they are flexible, scalable, and efficient. The truth is that cloud computing can now claim to have these characteristics. So, what reasons could people give for staying with mainframes and not moving everything to the cloud? The answer would probably include words like security, privacy, resilience, and failover. No other platform can offer the security features that are found on z14, z15, and z16 processors. The z16’s Telum chip comes with on-chip acceleration for AI inferencing while transactions are taking place, allowing financial institutions to move from a fraud detection posture to a fraud prevention posture.

The z14s saw the introduction of pervasive encryption, which enables extensive encryption of data in-flight and at-rest to substantially simplify encryption and reduce costs associated with protecting data and achieving compliance mandates.

z15s introduced Data Privacy Passports technology that can be used to gain control over how data is stored and shared. This gives users the ability to protect and provision data and revoke access to that data at any time. In addition, it not only works in the z15 environment, but also across an enterprise’s hybrid multi-cloud environment. This helps enterprises to secure their data wherever it travels.

Also new with the z15 is IBM Z Instant Recovery, which uses z15 system recovery technologies to limit the cost and impact of planned and unplanned downtime by accelerating the recovery of mission-critical applications, using full system capacity for a period of time. It enables general-purpose processors to run at full-capacity speed, and allows general-purpose workloads to run on zIIP processors. This boost period accelerates the entire recovery process in the partition(s) being boosted.

Cloud computing can't come close to those capabilities.

Other security questions you might ask about cloud-based working are around data: where it resides and who has access to it. There may also be questions over data latency, that is, how far away the data is from the application using it. It’s important to recognize that the attack surface is larger and more complex in the cloud.

Hybrid working advantages

So, bearing those things in mind, why would a mainframe site consider utilizing a cloud environment? The answers are: to do things that the mainframe can’t, like data analytics; to do things that the mainframe isn’t as good at, like speedily developing and testing applications, and/or building multiplatform applications using APIs (Application Programming Interfaces); and to use cheaper staff who aren’t mainframe trained. The cloud makes it easy to access applications and data from anywhere at any time. Users can choose and implement only the features and services they need at any given time.

Migration choices

Any site that does decide to migrate some or all their applications to the cloud needs to decide how they want to go about it. The choices are:

  • Refactor – this usually refers to modifying the applications so that they better support the cloud environment.
  • Replatform – applications are moved to the cloud without major changes, but taking advantage of benefits of the cloud environment. Using a cloud-hosted emulator also allows organizations to start using .NET, Java, or other APIs to integrate with previously inaccessible programs and data.
  • Rehost (lift and shift) – applications are moved to the cloud without making any changes. The code is recompiled to run in a mainframe emulator hosted in a cloud instance.
  • Rewrite/rebuild – generally not recommended because it is a risky approach. It’s complex, costly, and time consuming. It’s hard to predict the budget needed, how long it will take, and the return on investment. A better approach is to move the applications to a cloud-based emulator, migrate the database to a cloud-based database, then replace modules over time.
  • Replace – completely replace the mainframe functionality with a program or suite of programs, typically a Software-as-a-Service (SaaS) application. This removes the issue of needing to maintain code, but makes it hard to customize beyond the options provided by the vendor.
  • Retain (revisit) – keep applications on the mainframe. These might include applications that require major refactoring, and you want to postpone that work until a later time, and legacy applications that you want to retain, because there’s no business justification for migrating them. (This is discussed later.)
  • Retire – decommission or remove applications that are no longer needed.

The second part of the article will be published next week.

Sunday, 14 August 2022

Mainframes, Git, and application development

Since 1964, when IBM unveiled its System/360 mainframe, most development work on applications has taken place on the mainframe. It makes sense that that’s how developers would work. But nowadays, that’s not so much the case. Now many people are finding the benefits of using Git when it comes to application development. So, let’s have a look at what Git is and how mainframers are using it.

Git has been around since 2005 and is frequently used by non-mainframe developers to develop new applications. Visual Studio Code (VS Code) has been around since 2015, and it is also very popular with non-mainframe developers. And now it’s a popular application development environment on mainframes, too. In fact, these days it’s unusual for anything that proves successful on other platforms not to find its way to the mainframe, which is why we’re discussing Git and VS Code.

Linus Torvalds (the man who developed Linux) wanted a way to keep track of all the Linux kernel development going on at the time. So, he wrote a software package for tracking changes in any set of files, and he called it Git. Programmers working collaboratively on source code use it, and every Git-managed directory is a fully-functioning repository (often called a repo). A repository contains all the files needed to compile and link an application. For mainframe applications, this could include source programs, copybooks, JCL, or Rexx execs. It also usually includes a README file containing project instructions, documentation, and any other useful information.

All Git instances or clones are equal. Developers use pull operations to integrate code from another developer into their repository and working directory. A push operation sends their code to a remote repository. Developers can also branch, switch branches, stash work, and perform other operations.
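
For anyone new to it, a typical Git round trip looks something like the sketch below. The repository URL, branch name, and file name are hypothetical, and the exact commands a team uses will vary:

    git clone https://git.example.com/payroll.git   # clone a full repository, not just a checkout
    git switch -c fix-date-edit                     # create a private branch for the change
    # ... edit the source, then stage and commit ...
    git add PAYROLL.cbl
    git commit -m "Correct date edit in PAYROLL"
    git pull --rebase origin main                   # integrate colleagues' latest changes
    git push origin fix-date-edit                   # publish the branch for review and merge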

You might wonder why, when I’m talking about Git, I also mentioned VS Code, which is a free source-code editor from Microsoft. The answer is that VS Code is probably the most popular editor these days, and it has built-in Git capabilities. Broadcom’s free Code4z extension pack makes VS Code and similar tools usable by mainframe developers.

The reason this appeals to mainframe developers is because Git acts as a parallel development environment for multiple programmers. Mainframers no longer need to wait for other developers to release a file that they have checked out and are working on. Developers can work on programs whenever they want. In addition, the developers don't need to worry about their changes affecting another developer’s work.

Before the use of Git, mainframe sites developed new applications using serial library managers, which meant that no other development work could take place on a file while a developer had it checked out. Only once it was checked in could another developer start work on it. With Git-hosted code, developers can all work on the code at the same time without impacting each other's work. Git excels at version control, and VS Code’s built-in tooling makes scanning and reviewing code easy.

It’s also possible to develop cross-platform applications that include the mainframe and cloud, mobile, and Web by hosting the code on Git. This is possible through the use of application programming interfaces (APIs). I’m sure that the creation of new composite applications using existing code and APIs will increase, giving customers applications that completely suit their needs.
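
As a concrete, if hypothetical, illustration, here is a minimal Python sketch of a cloud or mobile component calling a mainframe transaction through a REST API, of the sort that might be exposed via something like z/OS Connect. The URL, payload fields, and token are invented for illustration and are not a real API:

    import requests  # third-party HTTP library (pip install requests)

    # Hypothetical REST endpoint fronting a CICS/IMS transaction.
    resp = requests.post(
        "https://mainframe.example.com/api/accounts/balance",
        json={"accountNumber": "12345678"},
        headers={"Authorization": "Bearer <token>"},
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())  # e.g. {"accountNumber": "12345678", "balance": 1234.56}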

Of course, it is possible to install Git on a mainframe. IBM recommends Rocket Software’s port of the Git client, which is usually installed in z/OS UNIX System Services (USS). It includes code page translation, which means that mainframe source code (in EBCDIC) can be stored in and retrieved from Git (in ASCII). This allows developers to work on repositories stored on mainframes while working on a laptop. The .gitattributes file tells Git which files should be translated from ASCII to EBCDIC when working on z/OS, along the lines of the sketch below.
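
As an illustrative sketch, such a .gitattributes file might look like the lines below. This uses the working-tree-encoding attribute understood by modern upstream Git (2.18+); the exact attribute names and code pages used by a particular z/OS port may differ:

    # Store these file types in the repo as UTF-8/ASCII, but check them
    # out on z/OS in EBCDIC (IBM-1047). The patterns are illustrative.
    *.cbl   text working-tree-encoding=IBM-1047
    *.cpy   text working-tree-encoding=IBM-1047
    *.jcl   text working-tree-encoding=IBM-1047
    *.rexx  text working-tree-encoding=IBM-1047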

Using Git and VS Code on mainframes allows applications to be built by people with lots of experience with those two products but not necessarily lots of mainframe experience. This may become important for mainframe sites moving forward as their highly-experienced mainframers retire. Having said that, there are plenty of mainframers who are keen to learn these new skills, and many are already using them.