Sunday, 26 March 2023

If mainframes didn’t exist, you’d have to invent them

Since the 1990s, I guess, whenever I mention that I have some connection with mainframes, I feel as though I'm on the defensive. It's as though the general zeitgeist has been, for the past 30 years, that mainframes are old-fashioned and barely hanging on in a world that has moved on.

My argument against that has always been to look at aeroplanes and cars. If you want to move lots of people quickly from one place to another, then you need a large passenger plane. If you want to move just a few people you’d use a small passenger plane. There are plenty of different makes and models you can use to illustrate this. And, of course, there are all sorts of makes and models in between these two sizes, and there are things like hang gliders and gliders that also have a role. So, what the aeroplane metaphor shows is that there is a need for different types of aeroplane for different needs and uses. There’s not a one-size-fits-all approach to planes, and similarly, there’s not a one-size-fits-all approach to computing. Cloud is great, but no-one is getting rid of their laptop, or tablet, or even their phone, so why should they get rid of their mainframe?

And, if people like cars, you can use a similar metaphor. There are plenty of big cars being sold as well as Teslas and other electric cars. Different people have different needs. Some need a car that can fit seven people and their luggage in, some need a car that can carry Ikea furniture home in the back or take rubbish to the recycling centre. Some want a really small car that is easy to park. We don’t all drive around in identical vehicles. And new cars are all very different, in so many ways, from cars in the 1960s.

But let’s suppose that there never were any mainframes. This is a parallel world where they just weren’t produced. That’s not too big a stretch of the imagination because it was very expensive to create the System/360, and quite difficult to get the original operating system (OS/360) working on it. That’s why DOS/360, and even BOS/360 and TOS/360 saw the light of day.

In this non-mainframe-world scenario, you would still have the client/server environment of the 1990s, and you would have the current cloud environment, but what you wouldn’t have is a highly secure, centralized location where batch work would be able to access data very quickly. There would be no issues of losing computing power because the Internet had gone down.

So, that world would have to invent the mainframe now! And, although the hardware would most likely come from IBM, and the operating system definitely would, the applications that run on the mainframe could be sourced from any number of vendors. In addition, peripheral devices could come from a number of different vendors. It would be the increase in the speed of batch jobs that would be the most noticeable improvement following the invention of the mainframe.

There would also be huge cost savings – yes, I did say huge cost savings – for sites that were used to having large server rooms running multiple Linux servers. Each of these would probably need its own team of people ensuring everything was running smoothly so that end users could continue working. Our newly-invented mainframe would be able to run Linux with far fewer support staff and even more end users. So, firstly, there's the saving in staff costs, but also, a much smaller server room would be required, and that would mean purchasing less hardware, using less cooling, and so on. Organizations would be jumping at the chance to get on board. And, of course, a complete mainframe can be rack-mounted in the same way as other servers. So, there will be space for it.

While CICS and IMS are both brilliant subsystems on mainframes, IMS might well be seen as a dream solution for financial organizations that need to process financial requests as quickly as possible. The various IMS database access methods have always made this speed of access central to the way they work.

When it comes to security, mainframes, without doubt, do it better than other platforms. Firstly, there's never any issue with groups within a large organization going off and buying cloud services without the IT team knowing. There will be no shadow IT because everything is going through the mainframe. The mainframe ensures that people can only access the applications and data that they are meant to access – and not everything that's on the mainframe. It can also ensure that data is automatically backed up. It can ensure that an audit trail is kept of what happens to data, when, and by whom. This will ensure that an organization is compliant with whatever regulations apply to it. In fact, a mainframe is one of the best platforms to move to a zero-trust way of working.

Mainframes are reliable, available, and serviceable. They have more than one processor, and more than one logical partition. Multiple machines can be clustered in a parallel sysplex. They can connect to other platforms (like cloud, mobile, etc). They can be highly automated, and they now come with software making it easier for people without much previous mainframe experience to work on them and control them.

What I’m arguing is that we should reframe the debate about mainframes being your dad’s technology to one where we argue that if they didn’t exist, we would have to invent them.

Sunday, 19 March 2023

Artificial Intelligence makes it to the big time

Artificial Intelligence (AI) has been around for a long time, but, to be honest, it has really only reached the public’s imagination through science fiction films and TV. In the Terminator movies, Skynet became self-aware on 29 August 1997. And we know how that turned out!

It’s true that numerous people have been working on AI projects, and great work has been done, but as far as the ordinary man in the street is concerned, it didn’t have any impact on his life. OK, maybe his phone was able to predict the next word he was going to type in a text. Alexa would show his photos or play his favourite music. And maybe Deep Blue had beaten a famous chess grandmaster some years ago (it was world champion Garry Kasparov in 1997). It’s just that the general public were, until recently, typically unaware of what AI can do and how it was being used.

According to Wikipedia, Artificial Intelligence (AI) is intelligence – perceiving, synthesizing, and inferring information – demonstrated by machines, as opposed to intelligence displayed by non-human animals and humans. Example tasks in which this is done include speech recognition, computer vision, translation between (natural) languages, as well as other mappings of inputs.

Alan Turing came up with the Turing test in the 1950s, which measures the ability of a machine to simulate human conversation. Because humans can only observe the behaviour of the machine, it does not matter whether it is ‘actually’ thinking or has a ‘mind’, the important thing is its behaviour.

But what has brought AI into the public consciousness is ChatGPT. This AI chatbot was developed by OpenAI and launched in November 2022. It’s built on top of OpenAI’s GPT-3 family of large language models and has been fine-tuned (an approach to transfer learning) using both supervised and reinforcement learning techniques.

The GPT part of its name stands for Generative Pre-trained Transformer. And what it does so well is content creation, ie its deep machine learning can create human-like text in response to simple prompts. You might like to think of it as a souped-up version of Cortana on your laptop or Google Assistant on your phone.

Machine Learning (ML) allowed phones to recognize what words people most often used after a previous word. This information was collected on each phone and then centralized in order to learn from as much data as possible. The most likely next words were then sent back to people’s phones. More data was collected to see whether those first guesses were correct. The new results were sent back to people’s phones. So now, predictive text is getting to be fairly accurate, on the whole, although it can never be completely right for every conversation. This, in essence, is how machine learning works: make predictions, gather feedback, and refine the model.
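The counting idea behind that kind of next-word prediction can be sketched very simply. Here's a minimal bigram (word-pair) model in Python – real phone keyboards use far more sophisticated models, and the corpus here is invented for illustration, but the principle of counting which word most often follows another is the same:

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count which words most often follow each word in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict(counts, word, n=3):
    """Return up to n of the most likely next words after 'word'."""
    return [w for w, _ in counts[word.lower()].most_common(n)]

# A tiny made-up corpus standing in for the messages collected from phones.
corpus = [
    "see you later",
    "see you soon",
    "see you later today",
]
model = train(corpus)
print(predict(model, "you"))  # 'later' appears twice, 'soon' once
```

As more messages are added to the corpus, the counts – and therefore the suggestions – improve, which is exactly the feedback loop described above.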

With ChatGPT, giving it some articles can result in it writing an essay for you – which is a worry for schools and universities, and also for people who read and write newspaper articles and even blogs. ChatGPT can even write poems and jokes. And, like predictive text did, it is learning from everything that it is being fed. So, it will only get better.

The interesting thing is that ChatGPT now has so many users, and many people have heard about it – even if they are not using it. Even the famous Dilbert cartoons have had jokes about it. ChatGPT can be found at https://chat.openai.com/chat. Microsoft’s new Bing search engine includes ChatGPT, allowing users to ask real questions and get complete answers – they say. However, there is currently a waiting list to use the service.

I asked ChatGPT what were the best alternatives to ChatGPT. It told me that there is no single ‘best’ alternative to ChatGPT because it depends on what specific needs and requirements I have. It then went on to suggest:

  • GPT-3 is a language model developed by OpenAI, like ChatGPT. It is larger and more powerful than ChatGPT, with 175 billion parameters compared to ChatGPT's 1.5 billion parameters. However, it is also more expensive to use.
  • BERT (Bidirectional Encoder Representations from Transformers) is another popular language model, developed by Google. It is known for its ability to handle complex natural language processing tasks, such as question answering and sentiment analysis.
  • Transformer-XL is another language model developed by Google, with a focus on long-term dependencies. It is designed to handle sequences that are longer than what other models can typically handle, making it useful for tasks such as language modelling and machine translation.

It didn’t mention Bard, Google’s consumer-focused AI chatbot. Nor did it mention Copilot, Microsoft's new AI for Microsoft 365.

And there are quite a few other alternatives besides.

In terms of mainframes, the IBM z16 was sold as being designed for AI. It has an AI accelerator built onto its core Telum processor. IBM said that the z16 is particularly suited to processing artificial intelligence apps. The AI accelerator on the Telum processor utilizes an AI inferencing model that analyses details from the massive transaction processes that go on within the mainframe to spot trends and make intelligent predictions. IBM explained that AI has a broad applicability to a wide set of use cases across a variety of different industries, from banking and finance to insurance to healthcare, and many others. The AI accelerator can handle massive amounts of critical transactions and workloads in real time and at scale.

You can always tell when something has grabbed the imagination of the public when you hear people talking about it down the pub. It used to be Instagram, then TikTok, but now it’s ChatGPT. AI, in all its forms, has finally made the big time!


Sunday, 12 March 2023

What can we expect to see in z/OS V3?

IBM’s announcement of what we can expect to see in Version 3 of z/OS made me think about how far mainframe computing has come since it was first announced on 7 April 1964. That date means the mainframe is coming up for its 60th birthday soon.

The System/360 was born on that day, as was a whole new world of mainframe computing. IBM’s Big Iron, as it came to be called, took a big step ahead of the rest of the BUNCH (Burroughs, UNIVAC, NCR, Control Data Corporation, and Honeywell). The big leap of imagination was to have software that was architecturally compatible across the entire System/360 line.

It was called System/360 to indicate that this new system would be able to handle every need of every user in the business and scientific worlds because it covered all 360 degrees of the compass. By ensuring backward compatibility, it could emulate IBM’s older 1401 machines, which encouraged customers to upgrade. Famous names among its designers are Gene Amdahl, Bob Evans, Fred Brooks, and Gerrit Blaauw. Gene Amdahl later created a plug-compatible mainframe manufacturing company – Amdahl.

The first mainframe to be delivered went to Globe Exploration Co. in April 1965. Launching and producing the System/360 cost more than $5 billion, making it the largest privately-financed commercial project up to that time. It was a risky enterprise, but one that worked. From 1965 to 1970, IBM’s revenues went up from $3.6 billion to $7.5 billion; and the number of IBM computer systems installed anywhere tripled from 11,000 to 35,000.

Looking at the hardware for a moment, the Model 145 was the first IBM computer to have its main memory made entirely of monolithic circuits. It used silicon memory chips, rather than the older magnetic core technology.

In 1970, the System/370 was introduced. The marketing team said that the System/360 was for the 1960s; for the 1970s you needed a System/370. All thoughts of compass points had gone by then. IBM’s revenues continued to climb through the decade, employee numbers grew from 120,000 to 269,000, and, at times, customers had a two-year wait to get their hands on a new mainframe.

1979 saw the introduction of the 4341, which was 26 times faster than the System/360 Model 30. There was no System/380 in the 1980s, but in 1990, the System/390 Model 190 was introduced. This was 353 times faster than the System/360 Model 30.

1985 saw the introduction of the Enterprise System/3090, which used one-million-bit memory chips and came with Thermal Conduction Modules to speed chip-to-chip communication. Some machines had a Vector Facility, which made them faster. It replaced the ES/3080.

The 1990s weren’t a good time for people’s perception of the mainframe. However, in terms of development, we saw the introduction of Enterprise System Connection (ESCON), a high-speed fibre-optic channel architecture for mainframes.

In the 2000s, we got the zSeries (z/Architecture) machines and the z operating systems, giving us 24-, 31-, and 64-bit addressing.

Staying with operating systems for a moment, the System/360 hardware needed an operating system to run on it so it could do all the things that people wanted it to do. So, the original operating system, the troubled OS/360, came into being. This developed into MFT and MVT, and then OS/VS1 and OS/VS2. And that became MVS, before eventually being called z/OS. And now, expected in the third quarter of this year, we’ll see an AI-infused, hybrid-cloud-oriented version of z/OS. z/OS 3.1 will work best with the z16 mainframe, but it will support z14 models and above.

The z16 has an AI accelerator built onto its Telum processor, which allows it to perform 300 billion deep-learning inferences per day with one millisecond latency. The new operating system will:

  • Support a new AI Framework for system operations intended to augment z/OS with intelligence that optimizes IT processes, simplifies management, improves performance, and reduces skill requirements.
  • Extend the AI ecosystem by enabling AI co-located with z/OS applications, designed for low-latency response times.
  • Control the system with AI-powered workload management that intelligently predicts upcoming workloads and reacts by allocating an appropriate number of batch runs, thus eliminating manual fine-tuning and trial-and-error approaches.

z/OS V3.1 also features cloud capabilities by embracing aspects of cloud-native management of z/OS based on industry standards, with access to consistent, modern browser-based interfaces, enabling users to efficiently update and configure z/OS and related software.
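As a rough sanity check on the headline figure quoted above – 300 billion deep-learning inferences per day – a little arithmetic shows what that claim implies as a sustained per-second rate:

```python
# Arithmetic behind the z16 claim of 300 billion inferences per day.
inferences_per_day = 300_000_000_000
seconds_per_day = 24 * 60 * 60          # 86,400 seconds in a day
per_second = inferences_per_day / seconds_per_day
print(f"{per_second:,.0f} inferences per second")  # roughly 3.5 million
```

In other words, the claim amounts to sustaining around 3.5 million inferences every second, which is why the accelerator sits on the processor itself rather than on a separate card.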

z/OS 3.1 also continues to simplify and automate the management of the operating system to help guide the next generation of system programmers. So, it will include:

  • A new z/OS callable service, Cloud Data Access, to enable access to data in cloud object stores and to incorporate cloud object data into z/OS workloads.
  • A set of modern APIs with a C-based interface, designed to simplify the application effort needed to access NoSQL VSAMDB data sets on z/OS.
  • IBM z/OS Container Extensions (zCX) to improve performance and security while running containerized Linux workloads and to support NFS, HTTPS, and IBM WebSphere Hybrid Edition.
  • Dedicated real memory pools to improve the behaviour of applications that have a high memory requirement.
  • An extension of the z/OS Authorized Code Scanner to provide greater coverage of potential vulnerabilities.
  • Enhanced COBOL-Java support that will let 31-bit COBOL call 64-bit Java programs using the IBM Semeru Runtime Certified Edition for z/OS.

All in all, that’s quite a long journey from what could be done on those first machines with their early operating systems nearly 60 years ago. So, don’t let anyone tell you that mainframes are old technology.

Sunday, 5 March 2023

Goodbye IT staff and keep well

It seems like only a little while ago that I was writing about the Great Resignation – the fact that many employed people – not just in IT – had decided that they didn’t want to go back to the office after they’d got used to working from home during the lockdown. Many people re-evaluated their lives and decided their current role wasn’t for them anymore. But now there seems to be a rush to lay off staff. IBM has announced plans to lay off 3,900 people. Amazon, which owns AWS, is laying off 18,000 people. And Microsoft is laying off 10,000 staff.

And that’s not all. Alphabet, the company that owns Google, is laying off 12,000 people. Salesforce plans to cut around 10% of its workforce, which is around 7,000 staff. And Meta – the Facebook and Instagram people – is laying off 11,000 employees. Twitter is saying farewell to 7,500 staff. Dell is cutting 6,650 jobs. Even Stripe is cutting its staff numbers by 14%.

So, what’s going on?

One can only assume that much like individuals looked at their life and asked themselves some searching questions, these large organizations are doing the same. Whereas people wondered whether they needed to commute into the office every day and spend time doing something they didn’t really enjoy, organizations are looking at their corporate structure and wondering how to stay successful moving into the future.

For IBM, it’s been suggested that there are still some employees whose job function was more closely related to the Kyndryl and Watson Health businesses, which have now been spun off. Are those roles really needed moving forward?

Amazon apparently feels that it over-expanded during the pandemic, so most of those roles will go from Amazon stores and other parts of its retail business. But it's worth noting that there has been a pause in hiring new staff at AWS, as well as a slow-down in AWS's growth.

Microsoft also increased its staff numbers at the start of the pandemic. It's suggested that they took on 57,000 more employees in the two years before the layoffs were announced. So, they are still a larger company than they were.

Dell is losing about 5% of its workforce because of an uncertain future. The company enjoyed a boom time in sales of PCs during the pandemic with so many people working from home. This demand for PCs seems to have decreased as people return to the office and face an economic slowdown.

So, maybe these layoffs aren’t all bad, maybe they are a necessary realignment as the pendulum swung too far in one direction – taking on new staff – and it’s now swinging back the other way towards having the right number of staff for the current economic landscape.

Of course, using pendulum metaphors doesn’t help those individuals who are being made redundant. Even using just the numbers mentioned above, that’s nearly 62,000 individuals who are changing their jobs through no fault of their own. They are the people who will be lying awake at night wondering whether they will be able to get a job without having to move to a different city, with all the upheavals associated with that – leaving friends and local organizations, taking children out of school, and so on.

As mentioned at the beginning, there are some vacancies around because of the people who voluntarily left their old job during the great resignation. But, as I alluded to above, are those jobs in the right location?

In a recent OnePoll survey, 1,000 Americans who had recently moved were asked about the most stressful events in their lives. Moving was selected most often by respondents (45%). Getting divorced or going through a breakup was a close second at 44%. Getting married was 33%, having children 31%, starting a first-ever job 28%, and switching careers 27%.

The other problem facing these newly-redundant people is that there will be lots of people with very similar skills to them suddenly joining the job market in their area. And that will make the competition for any existing IT jobs all the harder.

While I understand that companies need to be ruthless in order to stay in business, I also have deep concerns for the people who are affected by layoffs and reorganizations. Their future is looking grim. I hope the world doesn’t move into recession, and that things pick up for them very soon.

I have on-going concerns about the mental health and wellbeing of people generally in the post-pandemic world, and being made redundant is not good for anyone’s mental health at any time.

Good luck to those people.