Sunday 29 August 2021

Mainframes and Open Source


Everyone knows that mainframes work differently from other computing platforms. And if you leave university with a degree in computing, you still won’t have a clue about how a mainframe works. Everyone knows you have to be at least 50 and prefer to work on a green screen to have the slightest idea of what’s going on inside your z/OS box.

Unfortunately, the highly prejudiced comments in that first paragraph are all too common amongst people – often execs and Windows users – who find it easier to trot out the usual anti-mainframe mantras than look at what’s really going on in the world of mainframes.

Firstly, mainframes and Linux have been in a relationship for a very long time. If you have expertise in Linux, you can work on a mainframe and get great work done. For example, Docker and Kubernetes run quite happily on a mainframe under Linux.
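By way of illustration – and only as a sketch, assuming kubectl is installed and already pointed at the cluster in question – the snippet below asks Kubernetes what processor architecture its nodes run on. On a cluster hosted under Linux on IBM Z the answer comes back as s390x; everything else about working with the cluster looks exactly as it would anywhere else.

```python
# Minimal sketch: ask a Kubernetes cluster what architecture its nodes run on.
# On a cluster hosted under Linux on IBM Z the answer is "s390x"; on a typical
# PC-based cluster it would be "amd64". Assumes kubectl is installed and
# configured for the cluster you want to query.
import subprocess

result = subprocess.run(
    ["kubectl", "get", "nodes", "-o",
     "jsonpath={.items[*].status.nodeInfo.architecture}"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)   # e.g. "s390x s390x s390x"
```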

And, of course, there’s the idea that mainframes are islands of hardware that not only don’t talk to the cloud, but don’t even know that the rest of the world is using the cloud all the time. Again, the reality is that IBM is very keen on the use of cloud. They have Red Hat OpenShift and IBM Cloud. They offer Mainframe as a Service. There is so much cloud-related stuff going on that it just seems strange that some people aren’t aware of it.

What about the fact that mainframes have nearly sixty years of their own way of working, their own software, and their own practices that no-one else can use? That is true, but (and it’s a very big ‘but’) they also have pretty much all the things that non-mainframe platforms have. Java works on a mainframe! There are things like z/OSMF, VS Code, Zowe, and ZOAU (Z Open Automation Utilities), which enable developers with non-IBM Z backgrounds to work usefully on mainframes.
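To give a flavour of what that looks like in practice, here’s a minimal sketch that drives z/OS from a laptop using the Zowe CLI, wrapped in Python purely for illustration. It assumes the Zowe CLI is installed with a working z/OSMF profile; the data set names are made up, and the exact shape of the JSON responses may vary between Zowe versions.

```python
# Illustrative sketch only: list data sets and submit a batch job from a
# laptop via the Zowe CLI. Assumes the Zowe CLI is installed and a default
# z/OSMF profile (host, port, credentials) has already been configured.
# The data set names below are hypothetical.
import json
import subprocess

def zowe(*args: str) -> dict:
    """Run a Zowe CLI command and return its parsed JSON response."""
    completed = subprocess.run(
        ["zowe", *args, "--response-format-json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(completed.stdout)

# List data sets matching a pattern...
listing = zowe("zos-files", "list", "data-set", "IBMUSER.PUBLIC.*")
print(listing["data"])

# ...and submit a job from JCL held in one of them.
job = zowe("zos-jobs", "submit", "data-set", "IBMUSER.PUBLIC.JCL(IEFBR14)")
print(job["data"])
```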

And picking up on that, there is the Open Mainframe Project (OMP), an open source initiative that enables collaboration across the mainframe community to develop shared tool sets and resources. It is holding its second annual Open Mainframe Summit virtually on 22-23 September.

The theme of this year’s Open Mainframe Summit expands beyond the mainframe to highlight influencers with strengths in areas that support or leverage the technology, such as continuous delivery, edge computing, financial services, and open source. It will also highlight projects, diversity, and business topics, offering seasoned professionals, developers, students, and leaders an opportunity to share best practices and network with like-minded individuals.

This year’s virtual event will feature keynote speakers Gabriele Columbro, Executive Director of Fintech Open Source Foundation (FINOS); Jason Shepherd, Vice President of Ecosystem at ZEDEDA and Chair of the LF Edge Governing Board; Jono Bacon, a leading community and collaboration speaker and founder of Jono Bacon Consulting; Steve Winslow, Vice President of Compliance and Legal at The Linux Foundation; Tracy Ragan, CEO and Co-Founder of DeployHub and Continuous Delivery Foundation Board Member; and more.

Conference sessions highlight projects, diversity, and business topics such as:

  • Mainframe Mavens: 5 Women to Know – Stacey Miller, Global Product Marketing Manager at SUSE and Yvette LaMar, Director of the IBM Z Influencer Ecosystem at IBM
  • The Facts about COBOL – Misty Decker, Product Marketing Director at Micro Focus; Derek Britton, Director of Communications and Brand Strategy at Micro Focus; and Cameron Seay, Adjunct Instructor at East Carolina University
  • Making Our Strong Community Stronger – moderated by Dr. Gloria Chance, CEO at Mousai Group, with Jeanne Glass, CEO and Founder of VirtualZ Computing; David Jeffries, Vice President of Development, IBM z/OS Software at IBM; Greg Lotko of Broadcom; and Andy Youniss of Rocket Software
  • ConsoleZ – Accessing z/VM Console Data from a Browser – Mike MacIsaac, Systems Programmer at ADP
  • Workflow wiZard: A Flexible Workflow Creation Tool for z/OSMF – Ray Cole, Product Architect at BMC Software
  • Feilong: The Open Source API for z/VM Automation – Mike Friesenegger, Solutions Architect at SUSE
  • Integrating Tessia for Self-Provisioning of Linux Distributions on Z – Alexander Efremkin, Tessia Architect, Linux Workload Enablement on IBM Z at IBM
  • Introducing ZEBRA – an Incubation Project for Zowe – Salisu Ali, Student at Bayero University Kano; Andrew Twydell, Intern at IBM; and Alex Kim, Enterprise Solutions Architect at Vicom Infinity
  • DIY: Zowe Explorer Starter Kit – Jessielaine Punongbayan, Product Marketing Engineer at Broadcom and Richelle Anne Craw, Senior Software Engineer at Broadcom

With a commitment to diversity, equity, and inclusion, Open Mainframe Project worked closely with the CHAOSS Diversity & Inclusion Badging Program, which encourages events to obtain D&I badges for leadership, self-reflection, and self-improvement on issues critical to building the Internet as a social good. Open Mainframe Summit earned a Gold Badge for prioritizing diversity and inclusion.

If you’re interested, you can see the full conference schedule on the Open Mainframe Summit website. Registration for the online event is $50 for general attendance and $15 for academia.

If you have an interest in Open Source software on the mainframe, or you just want to know more about what might be possible, then have a look on the Open Mainframe Project website.

Sunday 22 August 2021

Mainframe resilience


Have you ever been part of a Business Continuity Plan test? If you have, then you know that you tend to end up in a hotel, or some other building, with a variety of other people from the organization, ‘war gaming’ what would happen in various scenarios. Often, an external company will be invited in to host the sessions and be on the other end of the phone when someone is trying to deal with the press. The day can often be quite fun, sometimes illuminating, and lunch is usually very good!

The big problem is that what can be resolved in the meeting room in a couple of minutes often takes much, much longer in real life. In many scenarios, a building has been burgled, or is full of terrorists, but the mainframe and the other servers are still working. If the communications lines from one site have been cut, most people – as we’ve seen over the past year or so – can work from home. The organization is generally able to continue in business so long as the mainframe is still working. But what happens if it isn’t?

I can remember, many years ago at one site where I worked, putting a tick in the box for backups for a particular application. However, as all the operators knew, the backup tapes were 7-track tapes, and the last 7-track tape drive had been removed some months beforehand. There was no way that anything could be restored. I can also remember driving backup tapes to an offsite backup location at a company on the other side of town. If there had been a disaster out of hours, can you imagine how long it would have taken to get those tapes back and restore the data?

Clearly, backup strategies have improved hugely since those days back in the early 1980s. Even so, a lot of emphasis is still put on backing up data and, all too often, not enough on restoring it. I’m talking about mainframe resiliency.

Mainframe resiliency is the ability of the mainframe to provide and maintain an acceptable level of service even when things go wrong! Now we know that mainframes don’t crash like they used to in the 1980s, but even so, things can go wrong.

In an ideal world, organizations would take a copy of their data at regular and frequent intervals and restore from the most recent copy in the event of a problem. That would result in only a few minutes of recent changes being lost. It would also create a massive overhead and require a huge amount of storage space. Some companies can afford a hot standby site, which is updated almost as soon as the main site’s data is changed. Should the main site go down, the standby site can take over very quickly and, hopefully, no data is lost.

Other sites take full backups once a week, and incremental backups every evening. That way, it’s possible to restore a file to its state as of the previous evening. If journaling is in place, the journal can then be used to bring the data forward to just before the failure.
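Conceptually, restoring to a point just before a failure means combining those pieces in the right order. The sketch below is purely illustrative – the names and structures are made up, and no product’s API is being shown – but it captures the logic: start from the most recent usable full backup, apply the incrementals taken since, then replay the journal.

```python
# Purely illustrative: assemble the steps for a point-in-time restore from a
# weekly full backup, nightly incrementals, and a journal, as described above.
# The names and structures are made up; no product's API is being shown.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Backup:
    taken_at: datetime
    kind: str                      # "full" or "incremental"

def restore_plan(backups: list[Backup], failure_at: datetime) -> list[str]:
    """Return the ordered steps needed to get back to just before the failure."""
    fulls = [b for b in backups if b.kind == "full" and b.taken_at <= failure_at]
    base = max(fulls, key=lambda b: b.taken_at)        # most recent usable full
    incrementals = sorted(
        (b for b in backups
         if b.kind == "incremental" and base.taken_at < b.taken_at <= failure_at),
        key=lambda b: b.taken_at,
    )
    steps = [f"restore full backup taken {base.taken_at:%Y-%m-%d %H:%M}"]
    steps += [f"apply incremental taken {b.taken_at:%Y-%m-%d %H:%M}" for b in incrementals]
    steps.append(f"replay journal forward to just before {failure_at:%Y-%m-%d %H:%M}")
    return steps
```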

What I’m illustrating is that a lot of work has gone into getting backups right. What I would also suggest is that not enough attention has been spent on getting the restore part of the operation working as quickly and effectively as possible.

Let’s suppose that one application has suffered a catastrophic failure – say, the DASD housing its files has died. Where can the recovery files be restored to? Exactly which backup tapes do I need to restore just those files? How quickly can I get hold of them? It’s the orchestration of the recovery operation that needs to take place in software, not in the head of someone who is out of the office that day, or printed on a piece of paper that could be missing from the backup and restore manual.

I wrote recently about Safeguarded Copy on FlashSystem arrays. It creates a security-isolated copy of data that can be used in the event of the original data becoming corrupted. In fact, multiple recovery points can be created, which is great. The question is, how can you quickly decide which recovery point you want to restore from? What software is available that would speedily work out which recovery point is the one required and make sure that it is restored? Because, in order to speed up the restoration stage, it needs to be done by software orchestration, not by trial and error with someone sitting in front of a screen working out which backup is exactly the one they want. I’m not criticizing FlashSystem arrays, I’m just suggesting that the problem with speedy restores is endemic. Everyone worries about backups, and they happen all the time. Not enough people are concerned about the restore process because it doesn’t happen (I’m pleased to say!) very often.
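In code terms, the orchestration being asked for isn’t complicated – the hard part is having a trustworthy validation check. The following is a minimal sketch, assuming some site-specific is_clean() test exists (it is a stand-in, not part of any FlashSystem or Safeguarded Copy API): pick the newest recovery point that validates as uncorrupted, and only escalate to a human if none do.

```python
# Minimal sketch of automated recovery-point selection. The is_clean() check
# is a placeholder for whatever corruption or ransomware scanning a site uses;
# it is not part of any IBM FlashSystem or Safeguarded Copy API.
from datetime import datetime
from typing import Callable, Optional

def pick_recovery_point(
    points: list[datetime],
    is_clean: Callable[[datetime], bool],
) -> Optional[datetime]:
    """Return the most recent recovery point that validates as uncorrupted."""
    for point in sorted(points, reverse=True):    # newest first
        if is_clean(point):
            return point
    return None                                   # nothing usable: escalate to a human
```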

To ensure mainframe resiliency, as much effort must be put into simplifying and organizing the restore process as is put into the backup process, so that any mainframe outage lasts for as little time as possible and no-one – in particular, paying customers – notices.

But that’s not all. What happens if nation-state or criminal-gang bad actors get into your mainframe? Typically, there is a period of time during which hackers escalate their privileges, exfiltrate useful data, overwrite backups, encrypt data, and then display a ransom demand. Mainframe resiliency also demands that there be some way to identify the early stages of a ransomware attack and stop it spreading further. It also requires that the corrupted files can be restored. For this to happen, some kind of software orchestration is required to ensure that the correct (and uncorrupted) backup files are identified, and the data is restored as quickly as possible.

There’s a lot more to mainframe resilience than people might think when they are sitting comfortably after a good lunch discussing the business continuity plan!

Sunday 15 August 2021

The mainframe and Cloud PC

When I first started working with mainframes, and it was a long time ago, people used to sit in the main office and work on dumb terminals. The mainframe lived in a highly secure, climate-controlled part of the building that could only be accessed by people with appropriate key cards. In fact, the majority of people working on the mainframe had no idea what the mainframe looked like. They’d never seen it because they weren’t among the chosen few who had been invited into the machine room. For them, it didn’t really exist. They were simply focused on getting their work done. They would come into the office, power up their terminal, and do whatever needed doing. They didn’t know or care about virtual storage or paging or security. They simply did their work. And went home.

How times have changed. Or have they?

The start of August saw the launch of Cloud PC and Windows 365 from Microsoft. The idea is that everything the user wants lives in the cloud – their data, applications, tools, and settings – and they can access it from just about any device they like to use – which could be a laptop, but could also be an Android or Linux device or even an Apple device.

Basically, Azure Virtual Desktop technology is used to build a virtual machine in Microsoft’s cloud, and that Cloud PC streams a Windows 365 desktop to whatever device the user has in front of them. All the data, applications, etc are stored in the cloud. Users don’t know or care exactly where it is, they simply get on with their work.

It does all seem to be very similar to how mainframers used to work 40 years ago. Everything you need to do your work is stored somewhere, but you don’t know or care where that is. And you simply get on with your work.

Plus ça change, plus c’est la même chose!

It’s not just Microsoft that has recycled this venerable mainframe way of working; Amazon has too. Amazon has its WorkSpaces Desktop-as-a-Service (DaaS) product that users might choose. And, of course, Chromebooks have been around for a while. They work on the principle that the operating system is small, the device doesn’t need to have much computing power, and the work takes place in the cloud somewhere.

So, why would you choose Microsoft’s Cloud PC option? Let’s suppose that you are back in the office working, you haven’t completed some major piece of work, so you simply save it and dash to get your train home. On the train, you can get out your tablet (or even your phone) and continue working. And when you get home, you can boot up your home PC and, again, carry on working. You don’t need to borrow a work PC loaded with everything you need to do your job. As long as you have an Internet connection, you can be productive and work on the same desktop environment. Another benefit is that, if you leave your laptop on the train, or have it stolen, there is no data on the device. It is all stored in the cloud, so thieves can’t access sensitive corporate data or clients’ personal information.

For corporate IT teams, there are also a number of benefits. The first one is budgeting. Rather than buying new PCs every year or so for staff, they can calculate how much Windows 365 will cost for their staff. If this works out cheaper than buying new devices over a three-year period, they have better control over their budget. There are different sizes of Cloud PC available, and these have different price tags, so that must be taken into consideration.
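The arithmetic itself is straightforward; what matters is plugging in real quotes. The figures in the sketch below are placeholders for illustration only – they are not Microsoft’s prices.

```python
# Back-of-the-envelope comparison of the two budgeting models. All numbers are
# placeholders for illustration only; they are not Microsoft's prices.
def cloud_pc_cost_3yr(users: int, monthly_price: float) -> float:
    return users * monthly_price * 36                      # 36 months

def owned_device_cost_3yr(users: int, device_price: float, annual_support: float) -> float:
    return users * (device_price + annual_support * 3)

# Example with made-up figures: 200 users, $30/user/month Cloud PC versus
# $900 laptops plus $150/year support per user.
print(cloud_pc_cost_3yr(200, 30.0))              # 216000.0
print(owned_device_cost_3yr(200, 900.0, 150.0))  # 270000.0
```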

Managing Cloud PCs can be done using Endpoint Manager in much the same way that existing physical devices are managed. And that means corporate security policies can be applied to Cloud PCs as well as real devices. The Endpoint Analytics dashboard allows IT teams to see whether Cloud PC users need more resources allocated to them (or perhaps fewer). There’s also the Watchdog Service, which looks after connectivity. If users become disconnected, alerts are raised, and suggestions are made about how to rectify the situation.

I imagine that we’re all fairly familiar with the security on a mainframe; the big question is what kind of security you get with Windows 365. Firstly, every Cloud PC managed disk is encrypted. Similarly, all data sent over the Internet is encrypted. Data in use isn’t encrypted.

As you might hope in these days of ransomware attacks, multifactor authentication (MFA) is used when someone tries to log in. This uses Azure Active Directory (Azure AD). So, only people passing that test get to log in to Windows 365. As mentioned earlier, Endpoint Manager can apply access policies as people try to log in.

Lastly, Windows 365 uses a Zero Trust Architecture (ZTA). It works on the assumption that perimeter security may have failed, so it continually monitors the identities, devices, and services that are being used. Should anyone try to access anything unusual or above their security level, ZTA will flag it and alerts will be raised. Again, all the data being used lives in the cloud.

Certainly, the idea of low-power end devices and high-power central systems – whether that’s a mainframe or the cloud – seems like the way things are going for the next little while. To make accessing your mainframe work in that way would probably require it to be accessible from any browser anywhere. I recently discovered that there is a way to do this. If you’re interested, the company is called MainTegrity, its product is called GateWAY z/OS, and you can find out more on its website at http://gatewayzos.com/

The thing about the IT industry is that ideas come and go – and then come back again. Sometimes we have everything on premises, sometimes we have nothing. As always, interesting times!