Sunday, 27 February 2011

Blending mainframe technology with Apple, Blackberry, and Android – makes you think!

I don’t know whether you’re going to be at the SHARE conference in Anaheim (California) from 27 February to 4 March, but one of the interesting things to see is the William Data Systems stand (booth 211).

They are showing how their ZEN z/OS network management suite of products integrates with popular smartphone technology – Apple, Blackberry, and Android. What you’ll see is ZEN monitoring z/OS networks and then reporting the results to a mobile device. The user can then evaluate what’s happening on the mainframe and take appropriate action immediately. As a consequence, z/OS support staff can get on with their lives and be out and about, but still be able to monitor their mainframes and react to alerts.

The company say that this mobile technology extends ZEN’s whistle-blower ability to monitor SyslogD, filter the results by many criteria, launch automation commands or REXX procedures, and transmit the important alerts to mobile devices, all in real time. I think this sounds pretty impressive, and I was lucky enough to have a demonstration of the technology a short while ago.
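The general shape of that pipeline – watch a message stream, filter by severity or other criteria, push the survivors out as alerts – can be sketched in a few lines. To be clear, everything below (the record layout, the severity names, the functions) is invented for illustration; it is not ZEN’s actual interface.

```python
# Illustrative monitor-filter-alert pipeline, loosely in the spirit of
# the SyslogD monitoring described above. All names are hypothetical.

SEVERITIES = {"INFO": 0, "WARN": 1, "ERROR": 2, "CRITICAL": 3}

def filter_records(records, min_severity="ERROR", facility=None):
    """Keep records at or above a severity, optionally from one facility."""
    floor = SEVERITIES[min_severity]
    for rec in records:
        if SEVERITIES[rec["severity"]] < floor:
            continue
        if facility and rec["facility"] != facility:
            continue
        yield rec

def alert(rec):
    """Stand-in for pushing a notification to a mobile device."""
    return f"ALERT {rec['severity']}: {rec['text']}"

records = [
    {"severity": "INFO", "facility": "TCPIP", "text": "link up"},
    {"severity": "ERROR", "facility": "TCPIP", "text": "session lost"},
    {"severity": "CRITICAL", "facility": "FTP", "text": "daemon down"},
]
alerts = [alert(r) for r in filter_records(records, "ERROR")]
# Only the ERROR and CRITICAL records make it through to the phone.
```

The point is the separation of concerns: the filtering criteria live in one place, so the same stream can feed different alert thresholds for different people.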

Originally the technology was restricted to iPhone and iPad users, but it has now been extended to the other ‘smart’ technologies that organizations use.

Their press release mentions SyslogD. In fact, this is probably the most useful resource that many IT departments ignore or only refer to long after they should. Having this kind of information on your phone seems almost futuristic.

Anyway, if you are at Anaheim, take a look.

On a completely different topic… I was pleased to receive the following e-mail during the week:

“On behalf of IBM, it is with great pleasure to recognize you as a 2011 IBM Information Champion. We would like to thank you for your leadership and contributions to the Data Management community. You continue to be among a very small group to be chosen for this recognition. Congratulations.”

This makes three years in a row.

If you’re interested, you can find out more about the Information Champion programme on IBM’s Web site.

Saturday, 19 February 2011

Arcati Mainframe Yearbook 2011 user survey

The Arcati Mainframe Yearbook 2011 has been available for free download from the Arcati site for nearly a month now. Each new Yearbook is always greeted with enthusiasm by mainframers everywhere because it is such a unique source of information. And each year, many people find the results of the user survey especially interesting.

The 100 respondents who completed the survey on the Arcati site did so between 1 November and 3 December 2010. 32% were from Europe and 52% from North America, with 16% from the rest of the world.

44% of the respondents worked in companies with upwards of 10,000 employees worldwide, while 14% of respondents had 0-200 staff, 10% had 201-1000, 14% had 1001-5000, and 14% had 5001-10,000 staff. In terms of MIPS, 50% of respondents had fewer than 1000 MIPS installed, 24% fell into the mid-sized category between 1000 and 10,000 MIPS, and 22% were at the high end.

Looking at MIPS growth produced some interesting results. Larger, more mature businesses (above 10,000 MIPS) were almost all experiencing some growth, but predominantly in the 0 to 10% per year category. Sites in the 1000-10,000 MIPS range showed a mixed picture, with some sites suggesting a decline while others predicted growth in excess of 50%. Sites below 1000 MIPS were most likely to be experiencing growth of less than 10%, and this group had the largest percentage (of these three groups) predicting a decline. The mainframe market does appear to be quite fragmented, with competitive pressures at the lower end, and some respondents commented on a lack of understanding amongst management about the value of mainframe computing.

With the environment and environmental issues getting so much coverage in the media these days, the survey asked whether IBM’s recent green initiatives on things like power consumption and cooling had made the mainframe more or less attractive. Nearly three-quarters (72% – the same as the previous year) said that IBM’s green initiatives made no difference at all. 17% felt it made the mainframe a little more attractive, and 11% felt it made the mainframe a lot more attractive. Clearly “greenness” isn’t much of a selling point for mainframes.

With so much talk about Cloud Computing, for the first time the survey asked the mainframe population for their opinion. It asked whether respondents currently used their mainframe for cloud computing. Only 2% of respondents said they did. 34% said they didn’t, and the rest weren’t sure. Bearing in mind that it is still early days for a cloud computing initiative, the survey asked whether respondents were planning to adopt cloud computing as a strategy. 22% said they weren’t at present. 8% thought some mainframe applications would be cloud-enabled in the future, and a similar number thought most would be cloud-enabled in the future. However, 4% didn’t see a use for cloud computing. It will be interesting to follow these figures in future surveys.

The survey asked respondents which specialty processors (IFL, zIIP, and zAAP) they had. 6% of sites had all three (down from last year’s value of 12%) and a further 28% of sites had two of the three specialty processors (up from last year’s 12%). More sites had zIIP processors (44%) than any other. 36% had IFL processors, and 24% had zAAP specialty processors. 36% of sites don’t have a specialty processor installed.

It seems that at many sites, mainframes are losing out due to management ignorance. The survey quotes one respondent who said: “We do not expect to have a mainframe within 2-3 years. The CIO sees the mainframe as obsolete and expensive, whether or not either of those is true”. Another respondent complained: “Our architects do not understand mainframes and seem to be mostly knowledgeable about Windows. Project funding is project based and not enterprise based, hence a tendency to prefer perceived cheaper solutions, eg Windows.”

The appearance of the z196 processor had a big impact within the industry. High-profile TV appearances of Watson on Jeopardy keep people familiar with the name IBM. However, there is still a lack of understanding of what a mainframe does and what it can do amongst far too many IT managers and other corporate executives.

Anyway, full details of the responses to many other questions can be found in the user survey section of the Yearbook. It’s well worth a read.

The Yearbook can only be free to mainframers because of the support given by sponsors. This year’s sponsors were CA Technologies, Canam Software, DataKinetics, and Type80 Security Software.

Sunday, 13 February 2011

IBM’s Transaction Analysis Workbench

Now I’m not here to tell you what software to buy and what to ignore, but if you haven’t had a look at IBM’s Transaction Analysis Workbench software yet, I think you should. It’s one of those pieces of software that joins up the dots and lets you see the bigger picture when you think there’s a performance problem. It can help identify performance issues in one subsystem – CICS, IMS, DB2, MQ, or even z/OS itself – when the symptoms of the problem are appearing in a completely different subsystem.

Twenty years ago, the world was a much simpler place. You’d be running IMS or CICS, and you’d be picking up data from your IMS database or from DB2. What kept things simple was that the users of the data were company employees. So, if there was a problem, you could use a fairly specific monitor to identify its location and fix it. Nowadays, you still run CICS and/or IMS as your transaction manager, but it can be linked to WebSphere MQ, and data can be coming from non-z servers as well as IMS DB and DB2. What makes life even more complicated is that the users are not just your staff, but also customers and potential customers, as well as automated systems that could be using your data in some mash-up appearing somewhere else entirely. That makes it even more important to fix a slow-running system – and vitally important to be able to quickly and easily identify where the problem actually is.

From a business perspective, there may be a single transaction that goes away, gets some data, and displays it. From a technical perspective, that single, say, CICS transaction may involve an IMS transaction running, and a DB2 intervention, and something involving MQ, before the results get back to the user’s screen. Now, if you think the problem lies with CICS, you can use CICS Performance Analyzer to identify it. For IMS problems you can use IMS Performance Analyzer, for DB2 there’s DB2 Performance Manager, and so on. But what if the symptom appears in IMS, but the cause is really MQ? How can you combine this analysis to see the big picture of what’s happening on your system? This is where Transaction Analysis Workbench comes in.

You can check out the Web site to get all the specific details of why it’s a wonderful product, but I’d like to highlight just a couple of points. Transaction Analysis Workbench automates the collection of the data needed for problem analysis, and it provides a session manager to manage problem analysis through its life-cycle.

Rather cleverly, it allows slightly less-experienced or less highly-trained staff to identify the source of the problem. And then, when the ‘experts’ are available, it allows them to look in great detail to determine the problem. This is because the product links closely with other tools.

Transaction Analysis Workbench can provide a window into other subsystems that impact CICS and IMS performance. And by using information from SMF, OPERLOG, and other data sources such as CICS-DBCTL transaction performance, IMS address space resource consumption, WebSphere address space performance, MQ and DB2 external subsystem (ESAF) performance, APPC transaction performance, and IRLM long-lock activity, it can give an insight into what’s changed and where the problem might be originating.
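The essential trick behind that kind of consolidated view is merging events from several already-sorted sources (SMF-type records, OPERLOG messages, and so on) into a single timeline. Here’s a minimal sketch of the idea; the record contents and source names are invented for the example and are not the workbench’s actual data formats.

```python
import heapq

# Hypothetical sketch: merge per-source (timestamp, source, event)
# streams, each already sorted by timestamp, into one timeline.
def merge_timelines(*streams):
    """Interleave sorted event streams in timestamp order."""
    return list(heapq.merge(*streams, key=lambda e: e[0]))

smf = [(100, "SMF", "CICS task start"), (140, "SMF", "CICS task end")]
operlog = [(120, "OPERLOG", "DB2 lock timeout")]

timeline = merge_timelines(smf, operlog)
# The DB2 message now sits between the CICS start and end events -
# exactly the cross-subsystem context a single-subsystem monitor lacks.
```

Because `heapq.merge` is lazy and assumes each input is already sorted, the same approach scales to large log files without loading everything into memory at once.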

Joined-up software has got to be a good thing. And a product that can link closely with more specific monitoring or analysis tools has got to be a great help in finding out what’s different today compared with yesterday that’s causing a sudden drop in performance. But take a look yourselves.

Sunday, 6 February 2011

The importance of mainframe performance

It’s so easy to forget, or just take it as read, that mainframes have been able to run successfully with five-nines availability for well over a decade. What that means is achieving 99.999% of scheduled uptime. In other words, unscheduled downtime is less than five and a half minutes in a year! Now that kind of amazing performance is something that boxes running other operating systems can only dream of. Some are working towards that level of availability, but others (you know who I’m thinking of here) aren’t even close.
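The arithmetic behind that figure is worth seeing once:

```python
# Unscheduled downtime allowed by "five nines" (99.999%) availability.
minutes_per_year = 365 * 24 * 60                 # 525,600 minutes
allowed_downtime = minutes_per_year * (1 - 0.99999)
print(round(allowed_downtime, 2))                # about 5.26 minutes/year
```

Each extra nine cuts the allowance by a factor of ten: four nines permits roughly 53 minutes a year, three nines nearly nine hours.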

But I wasn’t thinking about performance in that sense. We just take it for granted that the operating system is always going to be working. What I was thinking about was the performance of the major subsystems running under z/OS. It’s very important to take steps to ensure that CICS and IMS are performing optimally. Monitoring software can be installed that will identify when preset thresholds are reached. It helps identify bottlenecks, so the appropriate action can be taken to resolve them. This is the kind of stuff systems programmers have been working away at for years. They’ve been using faster processors, faster I/O, and more efficiently-coded transactions, until every component is working as well as it can.

In previous blogs, we’ve talked about monitoring software that can arrange for alerts to be sent to designated staff as text messages or e-mails – allowing them to access the nearest iPad or laptop and take steps to resolve the new problem.

As well as CICS and IMS, there are monitors for DB2, WebSphere, and z/OS itself. These can all be integrated and produce wonderful moving graphs or other displays that allow users to tell at a glance whether everything is OK or whether a slight tweak to the subsystem is required. In addition, we’ve had software that learns how to maintain high performance and make appropriate changes on-the-fly, without any human intervention.

But the problem that many sites now face is what they can do if, for example, IMS users are reporting slow response times, but the problem appears to be coming from outside the IMS subsystem rather than from inside it. For example, what appears to be an IMS performance problem could be a CICS, DB2, WebSphere, or z/OS performance problem. The challenge facing systems programmers in this situation is to correlate performance data in IMS with activities in these other systems in order to discover the cause of the slow response time.
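The correlation task described above boils down to something like this: take the time window of a slow transaction and ask which events from the other subsystems’ logs overlap it. The sketch below illustrates the idea only; the log entries and field names are made up, not real SMF or subsystem data.

```python
# Hypothetical cross-subsystem correlation: find events from other
# subsystems that fall inside a slow transaction's time window.

def events_in_window(log, start, end):
    """Return events whose timestamp falls inside [start, end]."""
    return [e for e in log if start <= e["ts"] <= end]

slow_txn = {"name": "IMS transaction", "start": 200, "end": 260}

db2_log = [{"ts": 230, "msg": "DB2 buffer pool stress"},
           {"ts": 400, "msg": "DB2 utility start"}]
mq_log = [{"ts": 500, "msg": "MQ channel restart"}]

suspects = (events_in_window(db2_log, slow_txn["start"], slow_txn["end"])
            + events_in_window(mq_log, slow_txn["start"], slow_txn["end"]))
# Only the DB2 event at ts=230 overlaps the slow window, so that's
# where an investigation would start.
```

Real tooling does this against clock-synchronized SMF and log records across LPARs, which is precisely why having one product gather and align the data matters.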

One new solution is Transaction Analysis Workbench, which is an IMS tool. If you’re interested in how to approach this type of situation, how to gather the necessary information from multiple subsystems, and then analyse, diagnose, and resolve the problem, you’ll be interested in the webinar from the Virtual IMS user group this week.

The Virtual IMS user group runs free webinars every other month. During the webinar, a technical expert shares their hard-won knowledge with the rest of the group. The webinars use Citrix GoToMeeting, which means you don’t have to face the hard task of convincing your company to fund your user group experience – you just sit down at your laptop and log in.

Anyone wishing to join the webinar needs to join the user group – which is also free. The next meeting is at 10:30 Central Standard Time on Tuesday 8 February. The user group’s Web site (where you can join) is at