In the 1970s, mainframes pretty much ruled the computing space, with some smaller machines, e.g. from DEC, also being found in large data centres. Then, in the early 1980s, the first PCs (from IBM) arrived, and with the concept of personal computing came a different paradigm for how a person uses a computer. That didn't change the amount of work being done on mainframes, just how most people experienced computing. But things began to change.
A whole range of mid-range machines was born. And in the March 1991 issue of InfoWorld, Stuart Alsop wrote: "I predict that the last mainframe will be unplugged on March 15, 1996". It was the first of the great 'death of the mainframe' stories.
You may remember that IBM was going through a bad time at the start of the 1990s, trying to find its way forward. Looking back, it seems obvious what was going to happen next in the world of computing, but at the time there were hard choices to be made. Client-server was the computing model on everyone's PowerPoint slides. To prove the difficulty of prediction, even Bill Gates is meant to have said about RAM: "640K ought to be enough for anybody". He's also meant to have said: "The Internet? We are not interested in it". And even if he never said those things, at the time they sounded like something people were saying.
All through the 2000s, people who didn't know mainframes very well referred to them as dinosaurs – forgetting that dinosaurs ruled the Earth for 165 million years, while humans have only existed for around 300,000 years – and ignored the important work mainframes were doing for their organizations. People spent many long hours planning to move all those workhorse COBOL programs off the mainframe and onto their Linux or Windows servers – a move that would have cost the company more money, because Linux servers require so many more people to look after them than a mainframe does. Again, people predicted the imminent death of the mainframe.
Of course, mainframers used laptops to carry out some of their work, and many of them used SIEMs sitting on distributed systems to report in real time when the mainframe was experiencing a problem or when something wasn't going quite right. Mainframers understood a lot about distributed systems, because the problems distributed teams were facing were ones that mainframes had faced and resolved in the past. Mainframers knew the best way forward.
Sadly, many universities dropped mainframe training courses because most students didn't want to do the training. They had no idea what mainframes could do and were more interested in 'exciting' platforms. Mobile phones became smart and needed programmers to write apps for them. Gaming was big business and needed developers to write code for it. Mainframes were ignored by so many people, who thought they must be dead, or would be soon.
Of course,
mainframes found a way to integrate with phones. They could share APIs and
create new composite applications. The trouble was that people only saw the user
interface on their phones and knew nothing about the backend mainframe doing
the heavy lifting.
And then came cloud providers offering their services. They could run applications for you, and you only needed to pay when they were used. They were elastic, insofar as they could scale up and down depending on the needs of your users. You didn't really need to know where your servers were, and you could even let the cloud provider update them as required and, if something went wrong with the hardware, hot swap your workload to another server out there in the cloud. You could even go serverless and just concentrate on your applications, not the servers running them. It seemed a perfect solution to a company's IT needs. Get everything off the mainframe and onto the cloud. Job done! The death of the mainframe was assured.
So, here we are in 2022 with a mainframe that is probably the most secure computing platform available. Later this year, new mainframes running Telum chips with integrated AI accelerators will be able to identify financial fraud faster than anything that came before. It doesn't sound like a dying platform.
So, what is the future for the mainframe? The answer seems to come in three parts. Firstly, let's look at hybrid working. In the past 30 years, mainframers have embraced other platforms if they seemed the best place to do the work. I mentioned SIEMs on distributed systems. Analytics takes place in the cloud on big data. And there are plenty of other examples where cloud solutions are optimal and probably cheapest. What is also clear is that getting rid of the mainframe isn't the answer. Why recode everything to run in the cloud when one can simply take data, for example out of IMS or Db2, and put it on the cloud for further work? Why would anyone want to recode 50 years of COBOL code?
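To make the hybrid pattern concrete, here is a minimal sketch of the "move the data, not the code" idea: it assumes records have already been unloaded from Db2 (or IMS) into CSV by a standard unload job, and converts them to JSON Lines, a format most cloud analytics services can ingest directly. The column names and sample values are purely illustrative, not from any real system.

```python
import csv
import io
import json

def csv_unload_to_jsonl(csv_text):
    """Convert a CSV unload (e.g. from a Db2 unload job) into
    JSON Lines for downstream cloud analytics. One JSON object
    per record, keyed by the CSV header row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return "\n".join(json.dumps(row) for row in reader)

# Hypothetical unload of two transaction records.
unload = "account,amount\nA123,19.99\nB456,250.00"
jsonl = csv_unload_to_jsonl(unload)
print(jsonl)
```

The COBOL programs that produced the data keep running untouched; only a thin extract-and-reshape step is added on the way to the cloud.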
The second solution is to make the mainframe available to non-mainframe specialists. This overcomes the argument that experienced mainframe staff are coming up to (or past) retirement age. You can use Microsoft Visual Studio Code (VSCode) with a mainframe, and Java runs on it. There's Zowe, which lets non-mainframers treat mainframes like any other servers. Zowe makes CI/CD tools like Jenkins, Bamboo, and UrbanCode available to developers, and makes tools like Ansible and SaltStack available on mainframes. There are applications from IBM such as z/OS Management Facility (z/OSMF), which provides system management functionality in a task-oriented, web browser-based UI with integrated user assistance. And there's Z Open Automation Utilities (ZOAU), which provides a runtime to support the execution of automation tasks on z/OS through Java, Python, and shell commands.
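Part of what makes the mainframe approachable to non-mainframers is that z/OSMF also exposes REST APIs, so ordinary HTTP tooling works against it. The sketch below builds (but deliberately does not send) a request to the z/OSMF REST files interface, which lists data sets by high-level qualifier; the host name, qualifier, and credentials are placeholders.

```python
import base64
import urllib.parse
import urllib.request

def list_datasets_request(host, hlq, user, password):
    """Build a request against the z/OSMF REST files API to list
    data sets matching a high-level qualifier. Returned unsent so
    it can be inspected; real code would pass it to urlopen()."""
    query = urllib.parse.urlencode({"dslevel": hlq})
    req = urllib.request.Request(f"https://{host}/zosmf/restfiles/ds?{query}")
    # z/OSMF requires this header as basic CSRF protection.
    req.add_header("X-CSRF-ZOSMF-HEADER", "true")
    # HTTP Basic authentication for the z/OSMF user.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

# Placeholder host and credentials, for illustration only.
req = list_datasets_request("mvs1.example.com", "PAYROLL", "ibmuser", "secret")
print(req.full_url)
```

A developer who has never seen ISPF can drive this from a laptop, which is exactly the point: the mainframe behaves like any other server with a REST API.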
And, thirdly, large mainframe software companies are helping to train new people. Broadcom Mainframe Software Division has its Vitality Residency Program, a development programme to cultivate next-gen mainframe talent at low to no cost. Broadcom partners with organizations to attract, grow, and retain talent to help manage the mainframe in the hybrid data centre. Through Broadcom's investment, a Vitality Resident gets trained on mainframe fundamentals and Broadcom products, plus they get expert mentoring. Next, they partner on-site with a customer to learn its environment and unique business applications. Once their training is complete, they can start their career as a mainframer.
The mainframe
isn’t dead – it’s not on life support. It’s a thriving computing platform that
can utilize the strengths of other platforms to produce the most successful and
secure hybrid environment for any organization.
Nor is it running out of qualified staff, because plenty of steps are being taken to train new people and to make the mainframe available to technically competent non-mainframers. Reports of its death have been exaggerated.