Sunday, 29 June 2008

A lingua franca for databases

Wouldn’t it be nice if there were an easy way to combine data from all the different databases and files that exist on a mainframe, as well as (I suppose) all the databases and file types that exist on mid-range platforms and PCs? The trouble is, if I want to combine data from an IMS database with, say, VSAM files, I need to convert either TO IMS, or TO VSAM. Then if I want to combine data from DB2 and IMS DB, I am faced with the same problem: I have to convert one of the two to the format of the other. And if I want to combine lots of data from lots of different data sources, I am doing lots of different conversions. It’s like at the United Nations. I would need an interpreter for French into German, another for German into Italian, and so on. What I need is a lingua franca – a common language for converting my data or translating UN delegate speeches.

Luckily, at the UN there is such a lingua franca. Rather than translating between each of the 192 member states’ languages, they translate into English and then from English into the final recipient’s language. OK, I know that there are really six official languages used for intergovernmental meetings and documents (Arabic, Chinese, English, French, Russian, and Spanish) and that the Secretariat uses English and French, but you get the point.

So how could you possibly do the same thing on a mainframe? Well, in a way you can’t, but if you could get your data off the mainframe then you could. In fact it’s even simpler than that: you just need to get your metadata – the data about data – copied off the mainframe to a PC and you’re halfway to solving the problem.

Once you have your metadata, you are no longer tied to a particular data structure, such as Adabas or IMS DB or whatever. That means it could be massaged into a common format, no matter where it came from originally. This is the lingua franca bit. The secret is to convert it to relational data. So on your PC you effectively see tables, while the actual data stays on the mainframe. Once you’ve got views on your data, you could combine views – effectively giving you a way to join together data from IMS DB, VSAM, DB2 (although that is a relational database already), Adabas, whatever you want.
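
Here’s a rough sketch of what that might look like in practice. Everything specific in it – the DSN, view names, and columns – is invented for illustration, and it assumes an ODBC driver that exposes the mainframe sources as relational views:

```python
# A minimal sketch, assuming a hypothetical ODBC data source ("MAINFRAME_DSN")
# whose driver exposes IMS DB and VSAM data as relational views.
# The view and column names are invented for illustration.
import pyodbc

conn = pyodbc.connect("DSN=MAINFRAME_DSN;UID=user;PWD=secret")
cursor = conn.cursor()

# Join a view over an IMS DB segment with a view over a VSAM file,
# exactly as if both were ordinary relational tables.
cursor.execute("""
    SELECT c.customer_id, c.name, o.order_date, o.amount
    FROM   ims_customers AS c
    JOIN   vsam_orders   AS o ON o.customer_id = c.customer_id
    WHERE  o.order_date >= ?
""", "2008-01-01")

for row in cursor.fetchall():
    print(row.customer_id, row.name, row.order_date, row.amount)

conn.close()
```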

Once the data is on a PC, it could be used in Excel to produce Pivot Tables, or in whatever other way you wanted to derive information from your combined data sources. It could be fed to other applications, you could produce reports, and you could do lots of other useful work on it.
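
Excel itself can do this through an ODBC query, but the same idea in a few lines of Python – using pandas here as a stand-in for the spreadsheet, with a made-up connection string, view, and columns – looks something like this:

```python
# A sketch only: summarizing combined mainframe data off-host with pandas.
# The DSN, view name, and column names are all hypothetical.
import pandas as pd
import pyodbc

conn = pyodbc.connect("DSN=MAINFRAME_DSN;UID=user;PWD=secret")
df = pd.read_sql("SELECT region, product, amount FROM combined_sales_view", conn)

# The equivalent of an Excel Pivot Table: total amount by region and product.
pivot = df.pivot_table(values="amount", index="region",
                       columns="product", aggfunc="sum")
print(pivot)

conn.close()
```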

The other clever and useful thing you could do mirrors our UN translators. Having converted German to English (or IMS DB to relational data), you could translate it into Spanish (or relational data to Adabas). In other words, you could, almost painlessly, migrate your data from IMS DB to Adabas or any of the other data formats that are found on a mainframe.
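
In the simplest case, that kind of migration could be little more than reading through one relational view and inserting into another. Here’s a sketch – the DSNs, table names, and columns are all hypothetical, and it assumes the target ODBC driver accepts plain SQL INSERTs:

```python
# A minimal migration sketch: copy rows from a view over one data source
# (say IMS DB) into another (say Adabas), both reached through ODBC.
# DSNs, table names, and columns are all hypothetical.
import pyodbc

source = pyodbc.connect("DSN=IMS_SOURCE;UID=user;PWD=secret")
target = pyodbc.connect("DSN=ADABAS_TARGET;UID=user;PWD=secret")

read_cur = source.cursor()
write_cur = target.cursor()

read_cur.execute("SELECT customer_id, name, balance FROM ims_customers")
while True:
    rows = read_cur.fetchmany(1000)          # move the data in batches
    if not rows:
        break
    write_cur.executemany(
        "INSERT INTO adabas_customers (customer_id, name, balance) "
        "VALUES (?, ?, ?)",
        [tuple(r) for r in rows]
    )
    target.commit()

source.close()
target.close()
```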

So a lingua franca for mainframe data sources would allow users to combine and use data from a variety of data sources, and it would allow migration of data between different data sources.

Now, provided it used industry-standard APIs such as ODBC and SQL, it should scale well when you need it to. And that’s the direction corporations are moving in. No longer are they keeping their data to themselves; assuming the all-important security measures are in place, they are happy to allow customers and business partners to access parts of their data store. At least one product can act as a lingua franca, and that’s the CONNX suite of products from CONNX Solutions Inc. Let me know of any others.

BTW this post is a milestone for me – it’s my 100th.


Sunday, 22 June 2008

I like the mainframe because...

I enjoyed a recent article by Ken Milberg in IBM Systems Magazine's Mainframe EXTRA called "The Gen Xer's Guide to the Mainframe Part II". You can see the article at www.ibmsystemsmag.com/mainframe/enewsletterexclusive/20679p1.aspx.

Generation Xers, Ken tells us, are those born after 1964 and, according to Wikipedia, before 1982. The article lists the benefits of mainframes as discovered by a fictional 41-year-old IT Director, and explores the cost efficiency of the mainframe.

Ben, the fictional IT Director, discovers a number of reasons why the mainframe could save money when compared with distributed systems. He finds the Total Cost of Ownership (TCO) for a mainframe is better than for distributed systems. He highlights maintenance, energy, cooling, network design and architecture, staffing woes, and the inability of IT to properly manage the proliferation of distributed systems as bad news for distributed systems. He also mentions important topics such as security, backups, and disaster recovery as areas where distributed systems are weaker than mainframes.

In terms of risk management and compliance (SOX and all those other regulations that apply nationally, internationally, and at the local level) the mainframe wins again. He also mentions licensing software as a problem for distributed systems.

The article also highlights research by Arcati (www.arcati.com - the people responsible for the Arcati Mainframe Yearbook) suggesting that by 2010 mainframe costs would be 25% of Windows server costs and 33% of Unix server costs.

Milberg suggests that with a mainframe, a new system could be configured in a few minutes, whereas a distributed system could take days.

He concludes by saying, "the mainframe provided everything that the distributed systems couldn't: security, reliability and dependability, breathtaking performance, manageability, and rock-solid support from a vendor with more than 40 years of experience with the product."

All this and it runs Java. I'm obviously totally in agreement, and looking forward to part III of this article.

Sunday, 15 June 2008

Gloomy news for IMS?

The Virtual IMS Connection (www.virtualims.com) user group recently surveyed IMS users about IMS and their plans for the future. The results were published last week at their user group meeting, and they make interesting reading for anyone involved with IMS, whether as a systems programmer, software vendor, or user.

The survey was carried out from the middle of April to the middle of May this year and included results from 45 different sites representing 38 different organizations in 10 different countries. Most sites were still using V9, but a third of respondents had at least one machine running the new V10. Over two thirds of the sites were fairly big, running over 100 databases.

The survey asked whether new IMS applications were being developed at respondents’ sites. The good news was that just over half said "yes"; the bad news was that it was only just over half of the sites that said "yes". Just under half said "no".

The survey also asked whether respondents had plans to retire IMS applications. Out of 44 replies, 25 said "no", but a disappointingly high number (19) said "yes". It then asked whether respondents planned to migrate data from IMS to another database. Again, out of 44 respondents, 25 said "no" and 19 said "yes", with 13 respondents (29.5%) answering "yes" to both questions. This must be a worry for IBM and for software vendors producing IMS-related software, because it looks like nearly half the current IMS user base could disappear. In addition, there is little evidence from elsewhere suggesting that any sites are migrating to IMS.

How quickly are these sites planning to retire applications or migrate data? Looking at the 19 sites planning to migrate data, five (26.3%) planned to migrate data to other databases in less than a year, nine (47.4%) planned the migration in 1-5 years, and five sites (26.3%) weren’t expecting to migrate for at least five years.

With Web services and Service-Oriented Architecture (SOA) being important in the development of modern applications, the survey asked how far respondents had gone in developing SOA at their sites. 17 sites (38.6%) said that System z participates partly in Web services, with 10 sites (22.7%) saying that System z participates fully. Two sites (4.5%) said that their mainframe will be Web-enabled in the future. But seven sites (15.9%) said their mainframe applications did not participate in Web services and that they didn’t intend to implement mainframe Web services. Eight respondents weren’t sure what was happening at their site.

The survey results are not all gloom. It is encouraging to see that new IMS applications are being developed at over half the sites surveyed, but worrying that at over 40% of sites IMS applications are being retired and IMS data is being migrated to other databases.

Is management sometimes unaware of the value of IMS to the organization? Is there a problem finding young but experienced IMS staff? Or is it just that management is being lured away from the mainframe to distributed computing? The good news from this survey is that there are still plenty of IMS sites embracing SOA and moving forward with the product. The bad news, of course, is that these sites seem to be getting fewer and fewer.

Sunday, 8 June 2008

Saving money with mainframes – revisited

I blogged about saving money with specialty engines on mainframes a couple of weeks ago, and I was taken to task over some of my conclusions – hence the need to revisit the subject.

I suggested that although specialty engines were themselves expensive, they would eventually save users money because workloads running on them don’t count against the MSU (Million Service Units) rating of the box, so software licences would be cheaper. I also said that a number of vendors were now supplying software that could utilize these specialty engines. And finally, I concluded that taking work off the main processor would save money because it would postpone the date when an upgrade becomes necessary and also provide additional processing capacity.
Mark Fontecchio at Server Specs (http://serverspecs.blogs.techtarget.com/2008/05/20/mainframe-specialty-processors-do-they-really-save-money) mentioned the success of IFL (Integrated Facility for Linux) as well as zIIP and zAAP. He doesn’t think that the savings in software licences are very much when compared to the actual cost of a zIIP or zAAP specialty engine. He makes the point that the real saving comes from "being able to buy a six-figure specialty engine instead of a new seven- or eight-figure mainframe".

Marc Wambeke in his blog at Mainframe Watch Belgium (http://mainframe-watch-belgium.blogspot.com/2008/05/saving-money-with-ziip.html) says, "Saying that every workload which is redirected to zIIP or zAAP results in a reduction of your software cost is a bit too straight-forward. You still have to keep in mind that your monthly software cost (MLC) is based on a 4-hour rolling average. If you have your peaks at night running a heavy IMS batch workload, you won't see any direct savings on that side by adding a specialty engine." And it’s a good point to bear in mind.
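
To see why, here’s a back-of-an-envelope sketch of how a 4-hour rolling average peak gets set. The hourly MSU figures are made up purely for illustration, but they show that shaving daytime work off the general-purpose processors does nothing to a peak that comes from an overnight batch window:

```python
# Toy illustration of the 4-hour rolling average that drives MLC charges.
# The hourly MSU figures below are invented purely for illustration.
hourly_msu = {hour: 200 for hour in range(24)}   # daytime online work
for hour in (1, 2, 3, 4):                        # heavy overnight IMS batch
    hourly_msu[hour] = 600

def peak_4hr_rolling_average(msu_by_hour):
    """Highest average MSU over any 4 consecutive hours."""
    hours = sorted(msu_by_hour)
    best = 0.0
    for i in range(len(hours) - 3):
        window = [msu_by_hour[hours[i + j]] for j in range(4)]
        best = max(best, sum(window) / 4)
    return best

print(peak_4hr_rolling_average(hourly_msu))      # 600.0 - set by the batch peak

# Offloading 100 MSU of *daytime* work to a specialty engine changes nothing:
for hour in range(8, 18):
    hourly_msu[hour] -= 100
print(peak_4hr_rolling_average(hourly_msu))      # still 600.0
```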

Marc goes on to list the products he’s come across that make use of these specialty engines. If you know of any others, he’d probably appreciate an e-mail.

My other controversial method for saving money on a mainframe is to not run your software on it!! In these days of distributed processing, there are a number of products that take mainframe data and then perform most of the work on it on a different platform. Now, I know there are lots of arguments that the total cost of ownership of non-mainframe platforms is higher, so you should stay on the mainframe, but the reality is that data centres are full of boxes that aren’t mainframes. Just a couple of examples: Guardium for mainframes does most of its work on Linux boxes off the mainframe; and the CONNX suite can process mainframe data on distributed platforms including PCs. Just a thought!

And finally, Marc Wambeke pointed out to me that my blog on saving money on mainframes was also published at http://www.tekwits.com/node/129, but this version seemed to be attributed to vivin_bob. He may be a blog aggregator, but he ought to attribute other people’s work appropriately, don’t you think?

Sunday, 1 June 2008

Enterprise Information Integration

Moving on from last week’s blog about the different hardware manufacturers whose boxes you’d find in a data centre, this week I’d like to look at the issue of Enterprise Information Integration (EII).

The problem, as you’re probably only too well aware, is that although there are now standards in computing, there are an awful lot of them! That still means there is no easy way of integrating data from an IMS database with a transaction running on a Unix box or a browser running on a PC – or almost any other combination of two different things! Even mainframes aren’t a single homogeneous entity. You have a choice of z/VSE or z/OS (or z/VM running multiple copies of the other two). With z/OS you can complicate things by running z/Linux as well. And even if you have just z/OS, your data could be in DB2, IMS DB, VSAM files, or a lot of other standard, but incompatible, formats.

Plus, in addition to mainframes, there are lots of other so-called enterprise platforms out there – ones that dyed-in-the-wool mainframers would think of as mid-range machines. Of course, these servers pack as much computing power as perhaps a 10-year-old mainframe did. Your mobile phone probably has the computing power of a 25-year-old mainframe!!

The challenge for EII is to make this enterprise data, in all its myriad formats, available so that it looks like one big, easily and quickly accessible database. And it has to do this in a way that doesn’t compromise the security or the integrity of the data. In addition, each type of file probably has its own proprietary storage method, and may very well have its own indexing and data access method.

As I mentioned earlier, there are standardized access methods for different data types. These APIs (Application Programming Interfaces) include ADO.NET, JDBC, ODBC, and OLE DB. Microsoft’s ADO.NET is the successor to its ActiveX Data Objects (ADO) technology; it’s a set of software components providing access to data and data services, is included with the Microsoft .NET Framework, and is primarily a way to access relational databases. Java Database Connectivity (JDBC) is a Java API for accessing relational databases. Open Database Connectivity (ODBC) provides a procedural API for using SQL queries to access data. Object Linking and Embedding, Database (OLE DB) is a Microsoft API for accessing different types of data; it was meant to supersede ODBC by also supporting data sources that don’t use SQL.
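
As a small illustration of why these standard APIs matter (the DSN names below are hypothetical, and each would be configured to point at a different driver), the same SQL can be aimed at quite different back ends simply by changing the ODBC connection string; JDBC and ADO.NET give you much the same independence in the Java and .NET worlds:

```python
# Sketch: one query, several back ends, all through the same ODBC API.
# The DSN names are invented; each would be set up to use a different
# driver (DB2, a gateway to IMS DB or VSAM, SQL Server, and so on).
import pyodbc

QUERY = "SELECT customer_id, name FROM customers WHERE balance > ?"

for dsn in ("DB2_PROD", "IMS_GATEWAY", "SQLSERVER_TEST"):
    conn = pyodbc.connect("DSN=%s;UID=user;PWD=secret" % dsn)
    rows = conn.cursor().execute(QUERY, 1000).fetchall()
    print(dsn, len(rows), "rows")
    conn.close()
```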

In addition to nicely-defined relational database interfaces, it’s important to be able to access data from the now ubiquitous XML files as well as the millions of other types, including the now quite common vCards, etc, etc.

Once an EII system is implemented, an organization could start to save money by using it for application development, data migration, ad hoc reporting, Web-enabling existing applications, and enhancing security, for example. The hard part is finding a product that is affordable, particularly for small and medium-sized enterprises, and that will help them achieve these benefits. I’d be interested to hear from organizations that have implemented such a system, to find out exactly what they used and how they’d rate it.