Sunday 28 November 2021

GSE UK Virtual Conference 2021 – my impressions, part 2


Last time, I looked at the first week of the Guide Share Europe Conference. This time, I want to share my experiences of the second week of virtual presentations.

I started the second week on the Tuesday with “Developing and modernizing CICS applications with Ansible”, presented by IBM’s Stewart Francis. The presentation looked at Ansible for z/OS and especially how that relates to CICS Transaction Server. Stewart first examined what Ansible is and what support it has on z/OS. He then introduced the audience to the CICS collection for Ansible. He said that Ansible is a provisioning, configuration management and application deployment tool, with the tagline: “Turn tough tasks into repeatable playbooks”. He went on to say that rather than managing one system at a time, Ansible models your IT infrastructure by describing how all your systems inter-relate. He said that Ansible is extremely popular across many enterprises. For example, it’s now the top cloud configuration tool and is heavily used on-prem too. Some of the reasons for that include:

  • It normalizes tooling across a multitude of platforms
  • It centralizes your enterprise automation strategy
  • You can achieve configuration as code
  • There are over 3000 modules for all the things you might need it to do.
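The “repeatable playbooks” tagline rests on one core idea: tasks describe a desired state and are idempotent, so re-running a playbook changes nothing that is already correct. A minimal Python sketch of that idea (the names are illustrative – this is not Ansible’s API or syntax):

```python
# Toy illustration of Ansible's core idea: idempotent, desired-state tasks.
# Running the same "playbook" twice changes nothing the second time.
# All names here are illustrative -- this is not the Ansible API.

def apply_task(state, key, desired):
    """Set state[key] to desired; report 'changed' only if it differed."""
    if state.get(key) == desired:
        return "ok"          # already in the desired state: do nothing
    state[key] = desired
    return "changed"

playbook = [("cics_region_started", True), ("trace_level", 1)]

state = {}                                   # the managed system's state
first = [apply_task(state, k, v) for k, v in playbook]
second = [apply_task(state, k, v) for k, v in playbook]

print(first)   # every task makes a change on the first run
print(second)  # re-running is safe: nothing changes
```

That safety under repetition is what makes it practical to describe a whole inter-related estate declaratively rather than scripting one system at a time.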

The next session I saw was “CICS Resource Configuration for Target Environments” by IBM’s Chris Hodgins. He started his presentation with some questions: 

  • Is testing as close to production as possible? 
  • Can you avoid application changes for different environments?
  • Can you support fluid environments, like development, with unique deployments per developer? 
  • Can you control and manage those differences so you can easily see what will change? 
  • Can you identify those changes easily during problem determination?

He then spent the rest of the presentation explaining in detail how to use CICS resource overrides, including how they can be specified, installed, and monitored. Chris described it as a different approach to CICS configuration, and explained how it can be combined with CICS resource builder to streamline development resource creation.

I then did a lunch and learn session. It was “Five Tips To Energize Your Next Presentation” given by Glenn Anderson. It was a hugely enjoyable presentation, and, as I was giving a presentation at GSE on Thursday, I thought it may have some useful tips. Glenn said that if you’re giving a presentation and you want people to stay awake, you need enthusiasm, interaction, and clarity – although he admitted that interaction on Zoom was hard. He said to make the first 15 seconds count because that’s when the audience gets an impression of you. Don’t put too many words on a PowerPoint slide. And end with a call to action.

Next, it was “CICS performance, monitoring, and statistics: Making sense of all the numbers” from IBM’s Dan Zachary. He said that CICS monitoring and statistics provide a wealth of information. His presentation focused on Dispatcher statistics/monitoring terms like 'wait for re-dispatch', 'QR CPU / dispatch ratio', 'wait for 1st dispatch', and 'QR TCB saturation'. He looked at what these terms meant and how to use them to solve and prevent the most common CICS performance problems.
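The ratio Dan discussed is straightforward to derive from the dispatcher statistics: QR CPU time divided by QR dispatch time. A value near 1 means the QR TCB is consuming CPU for almost all of the time it is dispatched, which points towards QR TCB saturation. A hedged sketch of that check (the field names and the 0.8 threshold are illustrative, not actual statistics field names):

```python
# Sketch of the "QR CPU / dispatch ratio" check described in the session.
# Field names are illustrative; real values come from CICS dispatcher
# statistics, not from a hard-coded dict.

qr_stats = {
    "dispatch_time_secs": 120.0,   # total time the QR TCB was dispatched
    "cpu_time_secs": 102.0,        # CPU time consumed while dispatched
}

ratio = qr_stats["cpu_time_secs"] / qr_stats["dispatch_time_secs"]
print(f"QR CPU / dispatch ratio: {ratio:.2f}")

# A commonly quoted rule of thumb: as the ratio climbs towards 1, the
# QR TCB is approaching CPU saturation and QR-bound work starts to queue.
if ratio > 0.8:
    print("QR TCB is approaching saturation - investigate QR-bound work")
```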

Then I watched Rocket Software’s Ezriel Gross, again. This time his presentation was “CICS Performance and Tuning 101”. He gave us an introduction to tuning and reasons to tune, and looked at application versus system tuning. He then looked at tuning methodology, followed by the anatomy of response time – which I always enjoy seeing. He moved on to data collection and reporting facilities. Having completed the theory side of things, he looked at getting started: monitoring, DFH0STAT, and end-of-day (EOD) statistics. He finished by giving us some examples of resources to tune.

I started Wednesday with “Producing and consuming messages in Kafka from CICS applications”, which was presented by Mark Cocker, IBM, CICS Transaction Server Product Manager. Mark reminded us that Apache Kafka is a distributed event streaming platform typically used for high-performance data pipelines, streaming analytics, data integration, and interconnecting mission-critical applications. His session introduced options for CICS applications to produce and consume messages in Kafka, including example code of the Kafka API in CICS. Mark divided his talk into the following sections: messaging and events; what Kafka is; CICS application options and considerations to interact with Kafka; CICS use cases with Kafka; and some code examples.
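At the heart of Kafka’s model is an append-only log per topic partition, with each consumer tracking its own read offset independently. A self-contained Python sketch of those semantics – this is a toy model, not the Kafka client API, and not the CICS integration options Mark covered:

```python
# Toy model of a Kafka topic partition: an append-only log of messages,
# with each consumer tracking its own read offset independently.
# This sketches the semantics only -- it is not the Kafka client API.

class TopicPartition:
    def __init__(self):
        self.log = []                      # append-only message log

    def produce(self, key, value):
        self.log.append((key, value))
        return len(self.log) - 1           # offset of the new message

class Consumer:
    def __init__(self, partition):
        self.partition = partition
        self.offset = 0                    # each consumer owns its offset

    def poll(self):
        msgs = self.partition.log[self.offset:]
        self.offset = len(self.partition.log)
        return msgs

orders = TopicPartition()
orders.produce("cust1", "new-order")
orders.produce("cust2", "new-order")

audit = Consumer(orders)
print(audit.poll())        # both messages, in produced order
orders.produce("cust1", "cancel")
print(audit.poll())        # only the message produced since the last poll
```

Because the log is retained rather than consumed destructively, any number of consumers (analytics, audit, replication) can read the same events at their own pace – one reason Kafka suits the pipeline and integration use cases Mark described.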

Back with the IMS stream, I saw IBM’s Robert Recknagel’s presentation entitled “IMS and the Relational Database”. Data Definition Language (DDL) was introduced with IMS Version 14 as a modern, industry-standard way to define databases and programs in IMS. This session looked at the DDL infrastructure of IMS, the DDL syntax of IMS, ways to execute DDL in IMS, the challenges with mixed ACBGEN/DDL environments, migrating to a DDL-only environment, and unresolved issues with DDL.

I started Thursday with an “Overview and best practices of tuning IMS data entry databases” from Anshul Agrawal and Aditya Srivastava, who both work for BMC Software. They gave an overview of DEDB analysis and tuning, saying that DEDB tuning typically involves setting the database and area attributes to minimize the physical I/O requirements. They also looked at the four parts of a DEDB area: the root addressable area part (RAA); the dependent overflow part (DOVF); the independent overflow part (IOVF); and the sequential dependent part (SDEP).

At 10:30, I gave a presentation to the enterprise security working group entitled “Defending your mainframe against hackers, ransomware, and internal threats”. It covered: a beginner’s guide to mainframe hacking techniques; the issue with insider threats; the anatomy of a typical mainframe ransomware attack; your best defence and recovery strategies; and hidden benefits – better security equals better compliance (GDPR, PCI, NIST). It had about 36 attendees and I received some very positive comments afterwards.

The last session I attended of the day, and of the conference, was “Good, Better, Best – Are You Buffering IMS Efficiently?”, which was presented by IBM’s Suzie Wendler, Dennis Eichelberger, and Rick Engel. Suzie has presented to the Virtual IMS user group a number of times. Buffers can be monitored using IMS Monitor, IMS Performance Analyzer, or IMS Buffer Pool Analyzer. Buffer pool tuning is an integral part of improving overall IMS system performance.

I mainly focused on the CICS and IMS streams. There were plenty of other streams, all with excellent content. Hopefully, next year, the conference will be live rather than virtual. But, however it happens, it’s always excellent and a great way to learn about Z.

Sunday 21 November 2021

GSE UK Virtual Conference 2021 – my impressions, part 1


The Guide Share Europe Conference was online again this year during the first two weeks of November. The quality of the speakers was, as always, excellent. There was so much information in the sessions, which is why the tag line for this year’s conference was “Virtually the best way to learn about Z”.

Being online rather than everyone physically being in one place has both benefits and downsides. On the plus side, people can attend without needing to book time off work, arrange cover for their absence, or convince management to pay for their conference experience. They can be in the office for most of the day and put meetings in their calendar for the sessions they want to attend. The conference was also free, so anyone interested in mainframes could attend.

The downside, of course, is not having the opportunity to have in-depth conversations with vendors and colleagues from other sites. There’s no opportunity to stock up on pens and knick-knacks from the vendors in the exhibition hall. And you miss out on the horror stories that are shared later in the evening in the bar!

The nearly two weeks of presentations included all the usual streams: 101; 102; AIOPS, System Automation, Monitoring and Analytics; Application Development; Batch and Workload Scheduling; CICS; DB2; Enterprise Security; IMS; Large Systems; Linux on Z; Mainframe Skills & Learning; MQ; Network Management; New Technologies; Storage Management; Women in IT; and z/Capacity Management and zPerformance.

I started my conference experience on the first Tuesday with “CICS update from Hursley”, presented by Mark Cocker, IBM, CICS Transaction Server Product Manager. His presentation focused on CICS TS 5.6 and its continuous delivery approach, the CICS TS open beta, and community resources. He explained that developers can create incredible mixed-language applications that include Jakarta® Enterprise Edition 8, Spring Boot, Eclipse MicroProfile, and Node.js®, together with traditional compiled languages like COBOL, C/C++, PL/I, and Assembler, with first-class interoperability. These programs have access to APIs for most data and messaging systems, and can utilize the full power of the IBM Z and z/OS platform.

On Wednesday, I watched “What’s New in CICS Security”, presented by IBM’s Colin Penfold. This was similar to a presentation he gave to the Virtual CICS user group a few months ago, but it was still very interesting.

Next, I enjoyed the “CICS Foundation Update” from IBM’s John Tilling. He took a more detailed look at what’s new in foundation for the CICS TS Open Beta. He explained the changes in policies, managing Temporary Storage capacity, z/OS short on storage (SOS) extensions, enhanced Shared Data Tables, CICS-Db2 enhancements, and miscellaneous RFEs.

I then tuned in to an IMS session. That was “Nuts and Bolts of IMS Lock Management” given by IBM’s Kevin Stewart.

And then it was back to the CICS stream to see Rocket Software’s Ezriel Gross discuss “Debugging CICS Storage Violations Using IPCS”. Ezriel looked at CICS Storage Management, what a storage violation is, the causes of storage violations, protecting storage in CICS (including CICS provided facilities), Storage Manager internals, storage violation dump analysis (including useful domains for debugging storage violations and IPCS commands to view relevant domain summaries and data).
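One of the CICS-provided facilities for catching violations is storage checking: each task-storage element is bracketed by check zones (sometimes called crumple zones) whose contents are verified later; if a program writes past its storage and overlays a zone, CICS reports a storage violation. A much-simplified Python model of that detection idea – the 8-byte pattern and helper names are illustrative only:

```python
# Much-simplified model of how CICS detects a storage violation:
# each storage element is bracketed by "check zones" whose contents
# are verified later; an overwritten zone signals a violation.
# The pattern and function names are illustrative, not CICS internals.

CHECK = b"\xFC" * 8            # illustrative 8-byte check-zone pattern

def getmain(size):
    """Allocate storage bracketed by leading and trailing check zones."""
    return bytearray(CHECK + b"\x00" * size + CHECK)

def violated(element, size):
    """True if either check zone has been overwritten."""
    return element[:8] != CHECK or element[8 + size:] != CHECK

elem = getmain(16)
print(violated(elem, 16))      # False: zones intact

elem[8 + 16] = 0x00            # program writes one byte past its storage
print(violated(elem, 16))      # True: trailing zone overlaid -> violation
```

The catch, which is what makes the IPCS dump analysis Ezriel described necessary, is that the damage is usually only detected at the next check (for example at freemain or task end), often long after the offending instruction ran.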

The last session I saw that day was “CICStart your mainframe with Zowe and open source” from IBM’s Joe Winchester. He explained: how to install the Zowe CICS Command Line Interface Plugin; how to use the Zowe Command Line Interface; obtaining and installing the Zowe CICS Explorer; and how to get access to a z/OS environment.

On Thursday, I started the day with “Is your IMS the best it can be?”, which was presented by IBM’s Dennis Eichelberger. He reminded us that IMS has been available for more than 50 years; does the work of two subsystems (a transaction manager and a database manager); has many major businesses depending on it; and is reliable, stable, and scalable – yet it is often taken for granted or even neglected. He suggested that the ways to make it the best were by: monitoring; reporting; analysing; and changing.

I then saw “Lessons Learned: Streaming IMS Data to Modern Platforms” given by Precisely’s Scott Quillicy. Scott looked at the lessons learned over the last couple of years, and then recapped what he saw as best practice in streaming IMS data. He then reviewed methods for capturing IMS data, before discussing streaming platform architecture. Finally, he drew some conclusions.

I ended the first week with MainTegrity’s Al Saurette, who was looking at “Banking Resiliency for Financial Services – new requirements need defences”. He asked: “Realistically do we need more security rules?” He argued that when the top of the banking and insurance world mandates something, you pretty much have to comply with it. His session pointed out what mainframe sites can do to satisfy cyber resiliency, PCI, Zero Trust, GDPR, and other security best practices while eliminating redundant manual processes. He looked at real solutions to prevent an attack on a mainframe. He also explained how the teeth in IOSCO’s new resiliency guidelines can bite. And he described processes that make life easier and improve mainframe security, while ensuring new compliance and audit requirements can be passed.

All in all, it was a great first week.

Next week, I’ll publish my impressions of the second week at GSE.

Sunday 14 November 2021

Deciding on the future of your mainframe


In the very old days, it would have been a smoke-filled room where everyone who had an interest would sit down together to hammer out one of the biggest decisions for a company – what to do next with their IT. Should they continue with the mainframe as the main IT platform, with some Windows servers and PCs for everyone’s off-mainframe work? Now, though, there seems to be a desire for change, as most of the people attending the meeting sit in the room nibbling locally-sourced, organic vegan snacks and drinking fair-trade coffee or herbal tea. Others attend via Teams or Zoom – keeping their carbon footprint as low as possible.

Some time in the 1990s, and probably for the next 20 years, the meeting would have split into three groups. There were those who wanted to carry on using the mainframe in exactly the same way as before, but utilizing the new facilities and features that were available on each new mainframe model. Then there were those who argued that the mainframe was ancient technology that needed replacing with Linux or Windows systems. And the third group were the undecided. For many smaller companies, the migration from a mainframe to distributed systems was simply a painful one-off process that some sites decided to go through. For many larger companies, the risk to customers of migrating was too much and they decided to stay with the mainframe.

Today, at this notional meeting, there is a fourth large group of people who are certain that moving to the cloud is clearly the best way forward because of all the advantages that cloud offers: paying only for what you use; scaling applications up and down as needed; speedier backups, restores, and recovery; data analytics on all that data; and many other features. The word modernize is used a lot – as if the mainframe hasn’t been enhanced in any way in its nearly-60 years of existence.

Tempers are beginning to flare up as each group feels that the others don’t understand the points they are making. Both the distributed and the cloud people are pointing out that mainframes are hard to use and the only people who really understand them should be retiring very soon. They look at each other and laugh as people mention green screens. As the snacks start to run low, it really looks as if they are winning the day and convincing the undecided people at the meeting that migrating off the mainframe is really the way to go. It’s the only way for the company to stay in business for the next 10 years. Even the distributed people can see the sense in the argument.

It’s at this point that one of the mainframe systems programmers at the meeting starts to speak. His voice is quite quiet, causing everyone to stop talking, look up from their laptops or tablets, and listen to his words. “I’m not going to speak up to defend the mainframe”, he says, which brings a smile to the faces of many of the people in the room and on screens round the room. “I’m simply going to explain what the mainframe can do for our company, and then you can make an informed decision about the way forward”, he adds.

And then, in his quiet, but confident voice, he explains about the latest enhancements to the mainframe. He says how pervasive encryption on the z14 and z15 models means that data at rest or in-flight can be encrypted. He mentions how Fully Homomorphic Encryption allows users to perform calculations on encrypted data without decrypting it first. And he describes how no other platform currently offers that level of security. “In these days of hacking by criminal gangs and nation states, that level of security is vital to the continuity of the business”, he adds. He briefly mentions multi-factor authentication and zero trust architecture as additional ways that mainframes and other platforms are staying secure.

He picks up the argument that mainframes are being run by experienced staff who will soon retire by showing how non-mainframe people can now use applications that they are familiar with on the mainframe. Microsoft Visual Studio Code (VS Code), which is a very popular developer environment among non-mainframers, is available. There’s Java, and much more. Plus, there are now applications that make controlling a mainframe much easier, like Zowe, which lets non-mainframers treat mainframes like any other servers. Zowe makes CI/CD tools like Jenkins, Bamboo, and Urban Code available to developers, as well as making tools like Ansible and SaltStack available on mainframes. There’s the IBM z/OS Management Facility (z/OSMF), which provides system management functionality in a task-oriented, web browser-based UI with integrated user assistance. And there’s Z Open Automation Utilities (ZOAU), which provides a runtime to support the execution of automation tasks on z/OS through Java, Python, and shell commands. With ZOAU, it’s possible to run traditional MVS commands, such as IEBCOPY, IDCAMS, and IKJEFT01, as well as perform a number of data set operations in most scripting languages.

“And if the applications already run on the mainframe”, he continues, “why not let them stay there and use APIs (application programming interfaces) to connect them to off-mainframe application APIs and create brand new applications for our customers?”

“But”, he says, bringing everyone’s attention fully back to his words, “let’s not make this an either/or battle. Let’s choose the best platform for each application”. He then goes on to describe how a SIEM running under Windows can be used to collate messages about what’s going on and alert the response team when necessary.

He talks about running IT service management platforms in the cloud – things like ServiceNow or Remedy. And he also talks about the benefits of Kafka and analytics tools such as Splunk, Elastic, and Datadog, and how they can create more benefit from the mountain of data stored in IMS and elsewhere.

Heads round the table begin to nod. The decision is made to stay with the mainframe – but not to treat mainframe, distributed, and cloud as separate silos of computing. Instead, they will be integrated, each used for what it does best, with an overarching view taken of the company’s IT needs rather than a battle of the platforms.

And the meeting ends happily with the best outcome for the company.