Sunday 26 September 2021

Mainframe RPA


As far as employees are concerned, nobody really likes doing boring and repetitive tasks. Why not let the mainframe do them? As far as businesses are concerned, anything that enhances the service levels they can offer their customers has got to be good. What ticks both those boxes? The answer is robotic process automation (RPA).

So long as there aren't any changes needed to existing applications and no new hardware is required, anything that increases productivity, reduces errors, and improves the service available to customers – whether they are internal or external – has got to be a good thing.

RPA isn’t new. Gartner were very excited about it in 2018, suggesting that it was the fastest growing software sub-segment that it tracked. However, it seems to be gaining more popularity in the mainframe arena over the past year.

Let’s be clear, when we’re talking about robotic process automation, we’re not talking about some 1950s vision of the future where robots will be doing the work of humans. It’s just software that will be automating repetitive tasks.

Essentially, computer software – often referred to as a software bot – captures and interprets application data and automatically performs the mundane work that can involve manipulating data, executing transactions, triggering responses, and communicating with other systems.

Most often, the automation software connects with the existing software by a process called screen scraping from a 3270 or 5250 terminal emulator. It takes information from a screen that would otherwise have been read by a human. This, as you can imagine, is not the best way of going about things, and can result in numerous calls to the mainframe to get the data it needs. A more modern approach is to use RESTful APIs to communicate backwards and forwards with the existing programs. That way, you know the information going into the RPA software is accurate, and the results going back to the application are error free. And it can mean far fewer calls to the mainframe.

The other advantages of using APIs rather than screen scraping are that the integration will still work even if the layout of the screen changes, the RPA software runs measurably faster, and fewer mainframe cycles are used in the process.
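
To make that contrast concrete, here's a minimal sketch, in Python, of what the API-based approach might look like from the bot's side. The gateway URL, endpoint paths, and field names are all hypothetical (in practice the REST interface would typically be exposed by something like z/OS Connect sitting in front of the existing CICS or IMS transactions), but it shows how a bot can fetch and update structured data in single calls rather than scraping it field by field from a 3270 screen.

# Minimal, hypothetical sketch of an RPA bot using a REST API instead of
# screen scraping. The URL, endpoints, and field names are invented for
# illustration only.
import requests

BASE_URL = "https://mainframe.example.com/api"   # hypothetical API gateway

def get_customer(customer_id: str, token: str) -> dict:
    """Fetch a customer record as structured JSON in a single call."""
    response = requests.get(
        f"{BASE_URL}/customers/{customer_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()   # errors are explicit, not mis-read screen fields
    return response.json()

def update_address(customer_id: str, new_address: dict, token: str) -> None:
    """Send a change back to the existing application, again in one call."""
    response = requests.put(
        f"{BASE_URL}/customers/{customer_id}/address",
        json=new_address,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()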

Other hidden benefits of using RPA include the fact that employees who are no longer doing routine tasks can focus on higher value-added activities. That makes going to work more interesting for them, and it also has a positive impact on the productivity of a company and, potentially, on customer satisfaction.

The downside for staff who aren't able to pick up these other tasks is that automation leads to fewer staff being required; for an organization, the benefit is that it will save money on payroll costs. Obviously, retraining those staff is a better option, or HR can find them jobs elsewhere in the company.

For the people managing an RPA project – who are used to cost and time overruns, and who often face user disappointment with the finished product – the good news is that RPA projects are non-invasive, which means they can be completed without causing disruption to the existing ways of working.

One of the benefits often associated with RPA is the increased use of analytics that becomes possible. Firstly, the quality of the data is much improved, making the analysis more reliable. More data can now be included, and the analysis that is carried out can be more sophisticated than before.

Another, perhaps hidden, benefit is compliance. The automation means there is less human contact with sensitive data, which reduces the opportunity for data theft, fraud, or other compliance issues. RPA also allows audit trails to be kept, which can be used to prove to auditors that the system is compliant with whichever regulations apply to that industry.

Apart from the loss of employees through redundancy mentioned above, there are some other downsides to RPA that often don't get a mention. There's the potential for scaling issues if more and more bots are added in a piecemeal fashion, which then becomes harder and more costly to maintain and manage. There's also the issue of ensuring this extra layer of software is properly documented – particularly in the event of a problem (eg a business continuity event) occurring.

Many organizations simply automate their current processes because that's the easiest thing to do, and the automation will probably work. However, it makes better sense to review all the current processes to see whether they can be made more efficient before the automation takes place. That removes the likelihood of inefficiencies in the original process being amplified by the automation. Rather than just speeding up an inefficient process, automation can also multiply its cost, because the inefficient steps are now executed far more frequently. Lastly, piecemeal automation can make things more expensive than looking at the big picture in the first place and automating as much as possible in the first iteration of the RPA introduction.

RPA seems to offer lots of benefits to a mainframe-using organization, but there are also potholes on the way that need to be avoided to get the most from the opportunities that the automation process offers.

Sunday 19 September 2021

Dipping your mainframe toe in the cloud


Ensono has just published a survey – Ensono Cloud Clarity: A Snapshot of the Cloud in 2021 – of 500 UK and US-based IT decision makers and their use of cloud. They found that hybrid cloud environments dominate, with 71% of IT decision makers in the public sector using both public and private cloud platforms, 9% using only public cloud, and 14% using only private cloud platforms. One-third of the survey respondents use mainframes in their IT stacks. The survey also found that nearly three out of four respondents vetted and onboarded a new cloud provider during 2020.

What was their favourite cloud provider? Microsoft Azure was the most-used public cloud provider, followed by Google Cloud, IBM, and Amazon Web Services. The biggest reasons for moving to cloud were security and compliance (52%), followed by technical capabilities (42%) and pricing (35%).

IBM is focusing a lot of its attention on cloud. Their website says that “flexibility, responsiveness, and cost” are fuelling your journey to cloud. They go on to say that “Red Hat® OpenShift® and IBM Cloud Paks® on IBM Z® enables more innovation without limitation on a platform open to any app, team, or infrastructure, plus containerized software that enables your workloads and data to run anywhere.”

Amazon Web Services (AWS) has a ‘Mainframe Migration’ category as part of its AWS Competency Program that allows partners to offer their expertise to potential customers. Some partners offer technology tools for easing mainframe-to-cloud migration, like Advanced Computer Software Group Ltd, Blu Age Corp, Deloitte Touche Tohmatsu Limited, TSRI Inc, and Micro Focus International plc. The tools available automate some of the steps involved in moving mainframe applications to the cloud. Other companies offer consulting services, including Deloitte, TCS, Wipro Ltd, NTT Data Corp, Infosys Ltd, DXC Technology Co, and HCL Technologies Ltd. New companies are joining all the time.

Similarly, Google Cloud acquired Cornerstone Technology to help mainframe users migrate their workloads to Google Cloud.

So, you’re probably asking yourself, why would you want to move off the mainframe to the cloud? Bear in mind that mainframes are probably the most secure platform around – see the z15 features – and that the recently-announced Telum chips (coming next year) offer the best AI features on a chip anywhere. Even so, a number of reasons are offered. Firstly, there is an ageing mainframe workforce, which means there are fewer people with the talents needed to maximize the efficiency and minimize the costs associated with running a mainframe. Of course, migrating to the cloud isn’t the only solution. Apart from Zowe, there are any number of mainframe applications that people with distributed computing experience can sit down and pretty much use straight away.

One big feature of cloud computing is cost. Let’s suppose that for a short period of time an application needs to be scaled up and then down to accommodate increased usage – that’s more people using the application, requiring more data storage and more processing capacity. People who have been worried about the rolling four-hour average (R4HA) on mainframes will shudder at the thought of that scenario, whereas in a cloud environment people typically only pay for what they use, so a rapid increase in usage has a one-time cost and no knock-on issues.
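
As a deliberately crude, back-of-an-envelope illustration (all the numbers below are invented, and real mainframe software pricing is far more nuanced than this), here is the shape of the difference between peak-driven and usage-driven charging:

# Invented numbers, purely to contrast peak-based (R4HA-style) charging
# with pay-per-use charging when a short spike occurs.
hours_in_month = 720
normal_rate = 100        # hypothetical consumption units per hour, normally
spike_rate = 400         # consumption during a brief four-hour spike
spike_hours = 4
price_per_unit_hour = 1.0

# Peak-based: the month's bill is driven by the peak rolling four-hour average,
# so the brief spike effectively sets the charge for the whole month.
peak_based_bill = spike_rate * hours_in_month * price_per_unit_hour

# Usage-based: only what was actually consumed, hour by hour, is billed.
usage_based_bill = (normal_rate * (hours_in_month - spike_hours)
                    + spike_rate * spike_hours) * price_per_unit_hour

print(peak_based_bill)   # 288000.0
print(usage_based_bill)  # 73200.0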

Other reasons for migrating to cloud often don’t apply to all sites. For example, moving to the cloud is often associated with new applications and flexibility. Companies can only stay in business if they are quick to exploit new market opportunities, and cloud provides that kind of opportunity. That thinking is not wrong, but it ignores the fact that DevOps (or, better, DevSecOps) has been around on mainframes for a while now, and provides a way to develop new applications quickly – and to update them as needed equally quickly.

It's suggested that business continuity planning is better in a cloud environment because disaster recovery is nearly instantaneous. That is true, but mainframes have been working on disaster recovery for many years. Most major financial institutions rely on mainframes, so they need to make sure that financial transactions aren’t lost during a crash. And they are still using mainframes at the core of their IT stack.

Mainframes have lots of data – maybe stored in a variety of IMS databases – that is unavailable to other applications. These days, customer data can help make business decisions that support and enhance the growth of many companies. The suggestion is that putting the data into the cloud allows it to be analysed, allows artificial intelligence and machine learning applications to be run against it, and allows useful information to be produced. That certainly seems like a very good reason to use cloud computing in addition to the mainframe.

And I think this highlights the nub of the issue. For people who don’t understand the power and the capabilities of a mainframe, the idea of migrating all their applications and data off it and on to the cloud seems like a sensible idea. For people who do understand what a mainframe is capable of, it makes sense to use the mainframe for what it does best AND use a cloud environment for those things that it does best. Every mainframer I know uses a laptop – indicating that they have embraced distributed computing. I assume that, similarly, they will make the most of cloud environments.

It's just that people talk about ‘modernizing the mainframe’ as if it hasn’t changed in the past nearly 60 years – and it has. I feel very strongly that decision makers need to recognize the opportunities that come with a mainframe and not take a ‘throw the baby out with the bathwater’ approach to getting everything off the mainframe and on to the cloud as soon as possible.

From my point of view, I like the opportunities available with cloud computing. I just don’t want organizations to ignore the opportunities they already have with the mainframe. And that’s why I’m not surprised that the Ensono survey found so many respondents were moving to a hybrid environment.

Sunday 12 September 2021

Mainframe chips with everything!


If you were IBM and you were deciding where to focus for your next mainframe chip, what would your thinking be? There’s security, which is a massive issue everywhere. The 17th “Cost of a Data Breach Report”, researched by the Ponemon Institute for IBM, found that the average cost of a data breach rose from $3.86 million last year to $4.24 million this year. A data breach in the USA averaged $9.05 million per incident. For breaches of between 50 million and 65 million records (mega breaches), the average cost was $401 million. And the average time to detect and contain a data breach was 287 days – 212 days to detect a breach and 75 days to contain it. IBM has been producing chips focused on security for a number of years.

A second big focus for computing is the cloud, but IBM can’t produce cloud chips.

So, where is the next major focus? The answer is Artificial Intelligence (AI), and, in a way, it still links back to security.

IBM’s new chip, which is called Telum, comes with a new architecture designed to handle AI workloads faster, which, as a consequence, provides improved security and fraud detection for mainframes used by financial services organizations such as banks and insurance companies.

Announced at the Hot Chips conference in August, the Telum processors contain on-chip acceleration for AI inferencing while transactions are taking place. Traditionally, data needed to be moved for inferencing to take place. However, with the Telum processors, the accelerator is positioned closer to the data and applications so users can carry out high-volume inferencing for real-time transactions without calling on off-platform AI functions. And that’s what makes the process so fast. That’s what will allow financial institutions to move from a fraud detection posture to a fraud prevention posture.
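
As a purely conceptual sketch (the functions below are stand-ins I have invented, not IBM’s actual interfaces), the difference is about where the scoring happens relative to the transaction:

# Conceptual sketch only: these functions are invented stand-ins, not IBM APIs.
# The point is where the inference happens relative to the transaction.
import random
import time

def score_off_platform(transaction: dict) -> float:
    """Traditional approach: ship the data to an external scoring service.
    The network round trip adds latency, so scoring often happens after the
    transaction has completed - fraud detection, after the fact."""
    time.sleep(0.05)            # simulated off-platform round trip
    return random.random()      # placeholder risk score

def score_on_platform(transaction: dict) -> float:
    """Telum-style approach: the accelerator sits next to the data, so the
    model can be evaluated within the transaction's response-time budget."""
    return random.random()      # placeholder risk score, no off-platform call

def process_payment(transaction: dict) -> str:
    """Because scoring fits inside the transaction path, a suspicious payment
    can be declined before it completes - fraud prevention, not just detection."""
    risk = score_on_platform(transaction)
    return "declined" if risk > 0.9 else "approved"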

Let’s take a look at the details. IBM has been developing the chip for three years, and it’s expected to see the light of day in the first half of next year in the new IBM mainframes due to be announced then.

The chip has a centralized design, allowing users to accelerate credit approval processes, identify trades and transactions likely to fail, and enhance rules-based fraud detection, loan processing, clearing and settlement of trades, anti-money laundering, and risk analysis, according to IBM.

The 7-nanometer chip has a dual-chip module design containing 22 billion transistors and was created by the IBM Research AI Hardware Center. Samsung will manufacture the processor, which contains 8 processor cores with a deep super-scalar out-of-order instruction pipeline, running at a clock frequency of more than 5GHz, optimized for the demands of heterogeneous enterprise-class workloads.

IBM explained that: “The completely redesigned cache and chip-interconnection infrastructure provides 32MB cache per core, and can scale to 32 Telum chips. The dual-chip module design contains 22 billion transistors and 19 miles of wire on 17 metal layers.”

AI workloads have higher computational requirements and operate on large quantities of data. For this to work successfully, the CPU and AI core need to be integrated on the same chip for low-latency AI inference, which is what Telum does.

“We see Telum as the next major step on a path for our processor technology, like previously the inventions of the mainframe and servers”, says a blog post by IBM’s AI Hardware Center. “The challenges facing businesses around the world keep getting more complex, and Telum will help solve these problems for years to come.”

Christian Jacobi, an IBM distinguished engineer and chief architect for IBM Z processor design, explained that the new on-chip AI accelerator achieves a 40% gain in performance at the socket level, which means every socket in the new processor, compared with processor sockets in the existing z15, performs up to 40% more work.

This additional speed is achieved by adding more cores and larger caches. Jacobi added that IBM will further optimize the firmware and software stack that is going to run on the next generation of mainframes.

So, something to look forward to next year. A chip that focuses on AI and also helps with security. And, according to Barry Baker, IBM’s vice president of product management for IBM Z & LinuxONE, as users open up their systems “to hybrid cloud, they need to modernize their application data estate. The things we’re doing down at the silicon level accelerate and are aligned with that overarching strategy. There’s a lot of work we do on a regular basis with partners to help enable that.”

Telum is a new chip that seems to tick all the boxes.