Monday 31 December 2007

IBM – the future

Directors and shareholders of IBM must have tucked into their Christmas dinners with a certain amount of satisfaction that the company was still very successful and that there was plenty of revenue coming in from the various parts of the company.

I don’t want to start the New Year being thought of as the blogging equivalent of Cassandra, but things may not be all good at Chateau IBM.
Cassandra, you’ll remember from Greek mythology, was the daughter of Priam, king of Troy. She was given the gift of prescience by Apollo, who later, because she didn’t return his love, cursed Cassandra so no one would ever believe her predictions.

There are really three areas I want to mention in this blog: hardware, IMS, and little systems.

In terms of hardware, I’m really thinking about selling new computers. Like new cars, every time IBM brings out a new mainframe, someone is going to want it! It will seem like an appropriate time to upgrade – and IBM’s sales figures will look good. But, in fact, it seems that IBM is only selling to the faithful. How often do you hear of a company getting rid of a room full of servers and installing a mainframe? Probably not that often. Exploiting zLinux on a mainframe may make it look like the mainframe is an exciting place to be, but revenues must be small compared to those achieved from z/OS. What I’m really saying is that IBM needs to find a way to reduce drop-out, ie small VM/VSE sites getting rid of their mainframes, and, more importantly, start selling to new customers.

Secondly, I have been doing a lot of work recently with IMS. That’s Information Management System, not IP Multimedia Subsystem. IMS, as you probably know, is a database/transaction management system. It’s like DB2 and CICS combined (sort of). It’s used at lots of large mainframe sites and is the core of those companies’ business. Version 10 was announced a little while ago by IBM. So, IBM has this brilliant piece of software that so many major companies rely on, yet, when was the last time it sold a version of IMS to a new customer? I don’t actually know the answer to that question, but I’m led to believe that almost all IMS users have been users for a long time – ie there are no new customers. Come on IBM! If you have such a brilliant product, why aren’t you selling it?

How do you get people interested in mainframes? The answer is to have lots of them around and let people play with them. Now before you start sneering and saying that will never be possible, let me suggest a way. How about FLEX-ES from Fundamental Software? It provides a way for developers to test mainframe software on a laptop. Or there was UMX technologies. And, of course, Hercules – the Open Source mainframe emulator. Platform Solutions has a product called the Open Mainframe. There’s even Sim390. If IBM were to embrace these technologies and not try to smash them like the Hulk, people would be more familiar with mainframe systems because they would be more commonplace.

I have a final revolutionary thought. How about VM on Intel chips? Everyone says that Microsoft’s attempts at virtualization are a bit weak at the moment. VMware seems to have the lead in the market place. Why not adapt VM itself to run on Intel servers? This would be a good way of training new people in mainframe concepts and it becomes only a short step for them to become mainframers. Why not allow Hercules or FLEX-ES technology to plug in to a server running VM? That way z/OS would become a mid-range system and sales would grow hugely. Eventually those mid-range people would definitely consider buying a shiny new mainframe because it would not be a risky step for them to take.

This is the way to a successful future for IBM, and they might even sell IMS to a new site – but will they believe me?

Happy New Year to everyone.

Monday 24 December 2007

Is anybody there?

It was 2001 the last time Christmas fell on a Tuesday, and Tuesday seems like the worst day of the week to have such a major holiday. This is because there is only one working day between a weekend of parties and the Christmas holidays, and I’m sure anyone who can is not going to be working a full day! Yes, nursing staff and similar will be there all day, but I’m thinking of office workers. My impression is that if there are people in the office today (and reading this blog), they are going to be hoping to get away at lunch time. And I’m sure many offices are not going to bother to even pretend to open.


Then there’s Thursday and Friday. Many people will be recovering on Thursday and so won’t go in. And if you’ve had four days off already, who wants to turn up on Friday? Not too many people I would think.


Next week looks a bit of a nightmare too. Monday is New Year’s Eve, so who wants to go into the office for one day, knowing you have the next day off? So it looks like next Wednesday before any productive work will be done – wow! If you wanted to invade a western country without anyone noticing for over a week, today would be the day. If you wanted to make a terrorist statement, then today is definitely not the day, because, as I said, no-one will notice until the end of next week!


So, if you are in work, what’s left for you to do?


Obviously, if you work on a mainframe you can spend five minutes completing the Arcati Mainframe Yearbook 2008 user survey. That’s located at www.arcati.com/usersurvey08. If you’re at a PR company or vendor/consultancy/service provider, you could fill in the form at www.arcati.com/vendorentry for a free entry in the Yearbook when it’s published in January – to an anticipated audience of 10,000 to 15,000 worldwide.


What else? Well, if you’re an IMS user you can join the Virtual IMS Connection user group at www.virtualims.com. There’s a free newsletter coming out in January for all members, and the next virtual presentation is on the 5th February, when NEON Enterprise Software's Bill Keene will be talking about IMS disaster recovery preparation. If you’re a vendor of IMS-related software, then there are highly-focused advertising opportunities to reach IMS professionals. Contact trevor@itech-ed.com for the early-bird pricing structure.


You can view my corporate greetings card at www.itech-ed.com/xmas07.htm.


Have a good Christmas. And, if you don’t have decorated trees, Father Christmas, and presents where you are, have a good time anyway.

Monday 17 December 2007

SOA – still offering availability

In a year that saw data centres wanting to turn themselves green and mid-range server users discovering that virtualization (or sometimes simply emulation) was the ONLY project worth working on, we find that the acronym of the year 2006 is still with us and still remarkably youthful and invigorated. Yes, despite the important also-rans mentioned above, SOA wins the award of acronym of the year 2007 for the second time in a row – a feat last achieved in the heady days of client/server.

SOA – Service-Oriented Architecture (or humorously referred to as Same Old Architecture) – is still, as late in the year as December, getting product announcements linked to it.

For example, Iona Technologies has just announced it is updating its Artix and Fuse SOA products. Artix is a suite of SOA infrastructure products designed to enable customers to deploy SOA in a distributed environment. Version 5.1 of the Artix Advanced SOA Infrastructure Suite includes Version 5.1 of Artix ESB, Version 5.1 of Artix Orchestration, Version 1.5 of Artix Registry/Repository, and Version 3.6.3 of Artix Data Services. Iona claims that Artix Registry/Repository allows customers to utilize their active SOA governance capabilities to effectively develop, test, deploy, and manage the life-cycle of services across their distributed SOA environments. With the update, ActiveBPEL 4.0, which is embedded in the Artix Orchestration software product, now supports BPEL (Business Process Execution Language) 2.0, which offers capabilities for message attachments and additional security.

New to the Fuse line is Fuse HQ, a single console for managing open source products. It can also manage software such as Web servers. Fuse HQ is based on Hyperic Enterprise technology. Other Fuse products updated include Version 3.3 of Fuse ESB (based on Apache ServiceMix 3.3), Version 5.0 of Fuse Message Broker (based on Apache ActiveMQ 5.0), Version 2.0.3 of Fuse Services Framework (based on the Apache CXF 2.0.3 project), and Version 1.3 of Fuse Mediation Router (based on Apache Camel 1.3).

DataDirect Technologies (part of Progress Software) has announced Version 3.1 of its DataDirect XML Converters and DataDirect XQuery products. DataDirect XML Converters are Java and .NET components providing bi-directional, programmatic access to most non-XML files including flat files and other legacy formats.

Quite separately, Quest Software has agreed to buy PassGo Technologies, a company specializing in access control and identity management products. What makes PassGo interesting is that it was founded in 1983 and was then acquired by Axent. Axent was then taken over by Symantec, and then in 2001 the original founders did a management buyout and got the company back again. OK, nothing to do with SOA, I just thought it was interesting.
So SOA is still an important component of the enterprise environment as we come to the end of 2007.

What else is important? Filling in the mainframe user survey at www.arcati.com/usersurvey08; if you’re a vendor, filling in the vendor survey at www.arcati.com/vendorentry. And if you are an IMS site or IMS vendor then join the Virtual IMS Connection virtual user group at www.virtualims.com.

Monday 10 December 2007

IBM and Sun are very cosy!

It’s like young love – all sharing and caring, and long endearing looks. Yes, it seems that IBM and Sun Microsystems, who used to take every opportunity to denigrate each other’s products, are now the best of friends – or even closer than that. Before I sink into a morass of poetic drivel and you find yourselves reaching for the vomit bags, I’d better explain.
It all started with OpenSolaris running on mainframes, and continued with Sun getting further into the mainframe tape business. Where will it end?


IBM announced back in August that it would now support Sun’s Solaris operating system, but it used the recent Gartner Data Center Conference to demonstrate it. z/VM, that old workhorse for making just about anything appear to happen on a mainframe, has, not surprisingly, been used by IBM to make Solaris run on IBM hardware.

It’s fairly clear what Sun get out of the deal. They are very keen on virtualization (IBM and VM went through this phase back in the 1960s – and used it as the basis for PR/SM some years later). Sun’s ongoing xVM initiative provides a way to control lots of different bits of kit that a potential customer might have installed. 2007 was the year when data centres went green and one way of achieving a move in that direction was virtualization. Virtualization – and I realise you’re all thinking grandmother, eggs, suck, and teach at this point – reduces the need for hardware boxes to be installed because images of that hardware can appear to exist on other hardware. That hardware can now run multiple images and a whole lot of hardware can be cleared out of the machine room. And a whole lot of hardware that would probably have been bought, doesn’t need to be. So lots of savings and lots of reduction in carbon footprints. So, with this deal, one of the boxes that Sun can link to and help manage is an IBM mainframe.
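As a back-of-envelope illustration of the consolidation arithmetic behind those green-data-centre savings, here is a quick Python sketch; the server counts and wattages are invented for the example, not vendor figures:

```python
# Hypothetical consolidation arithmetic: replace standalone servers
# with virtual images on one host and see what power is saved.
def consolidation_savings(n_servers, watts_per_server, host_watts):
    """Return (watts saved, fraction of power saved) when n_servers
    standalone boxes become virtual images on a single host."""
    before = n_servers * watts_per_server
    after = host_watts
    saved = before - after
    return saved, saved / before

# Illustrative numbers only: 50 servers at 400 W each, one 8 kW host.
saved_watts, fraction = consolidation_savings(
    n_servers=50, watts_per_server=400, host_watts=8000)
print(f"Saved {saved_watts} W ({fraction:.0%} reduction)")
```

The point is simply that the saving scales with the number of boxes cleared out of the machine room, which is why virtualization and the green agenda arrived together.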

Sun’s xVM Ops Center is described as a highly scalable data centre automation tool for managing heterogeneous environments. Again, according to Sun, it can be used for discovery, monitoring, operating systems provisioning, comprehensive updates, patch management, firmware upgrades and hardware management.

I’m not so clear what IBM gets from the deal. Perhaps a way to prevent low-use mainframe customers – the VM/VSE crowd – from dropping their mainframes and using Linux servers instead.

A company called Sine Nomine Associates was responsible for porting the code to z/VM.

Sun has been selling mainframe storage since 2005 when it acquired StorageTek. Sun has just announced a performance enhancement to its StorageTek VSM (Virtual Storage Manager) 5, which it claims adds 53 percent more throughput than the initial VSM 5 release in mid-2006. The VSM 5 architecture uses the StorageTek SL8500 tape library and StorageTek T-series tape drives in its operation to optimize tape application performance. Since the middle of last year when Jonathan Schwartz became president/CEO, Sun has realized just how much money it makes from the mainframe world and has been directing its attention in that direction.

I expect Sun and IBM will be meeting each other’s parents soon, and who knows what could happen after that!!

On a different subject, now is the time for all mainframers to complete the survey at
www.arcati.com/usersurvey08.html. This will ensure that this year’s mainframe user survey in the Arcati Mainframe Yearbook 2008 has the most accurate information about mainframe use. You get a free copy of the survey results.

Monday 3 December 2007

Another one bites the dust!

Every few years I get involved in a project that requires a list of mainframe vendors to be produced. This task in itself can be quite dull and trivial. The interesting part comes the following year when it needs updating. There are always so many companies that have been swallowed up.

What started me thinking about this again was the recent news about IBM taking over Cognos. Is this a sensible move for IBM? With so many other companies out there, why choose that one? Well, IBM has recently been partnering with Cognos, Business Objects, and Hyperion to deliver BI (Business Intelligence) to customers. Hyperion was recently taken over by Oracle, and two other BI vendors, ALG Software and Cartesis, were acquired by Business Objects, which in turn is being acquired by SAP. So maybe IBM needed to get Cognos before someone else did.

And if that’s the case, who was its main rival? That’s difficult to say; IBM may well have been worried by Oracle or SAP getting their hands on Cognos. But perhaps the most likely rival was Hewlett-Packard. HP offers a business intelligence and data warehousing platform built using products from Cognos. HP may well have felt that getting their hands on Cognos would have been a natural fit – and now IBM has blown that idea out of the water.

It looks like Rob Ashe, the CEO at Cognos, is to join IBM, and Cognos will become part of IBM’s Information Management Software division.

But going back to last year’s list of vendors, I notice that Acucorp is now part of Micro Focus, Consul is now also part of IBM, Cybermation was swallowed by CA, Diversified Software is part of ASG, and Farabi belongs to Seagull Software. And that’s just the first half of the alphabet!

Talking of lists... if you’re a vendor, consultant, or service provider, and you would like a free entry for your company in the 2008 edition of the Arcati Mainframe Yearbook, you should complete the form at
www.arcati.com/vendorentry.html. If you had an entry in the 2007 Yearbook, you can use the form to amend an existing entry. If you are a mainframe user, then the annual survey form is at www.arcati.com/usersurvey08.html. The Yearbook will appear early in 2008 and is free.

Also, Tuesday 4 December at 10:30 CST (4:30 GMT) is the first free Webinar for the virtual IMS user group at
www.virtualims.com. The virtual user group meeting will last about an hour and includes a presentation by NEON Enterprise Software’s Kristine Harper. It’s free to join the group and take part in the virtual meeting.

Monday 26 November 2007

IT Infrastructure Library

You’re probably familiar with the service management joke, and it’s a fairly weak joke at best, but it does contain within it a horrible grain of truth. Anyway, here goes: why is it called ITIL? Because at so many meetings when service management is discussed, the conclusion is always: it’ll have to wait!

Hardly worthy of a quiet groan, but like I say there has been a tendency in the past to put off adopting best practice until you have more time, and to continue with what is, perhaps, little better than fire-fighting problems as they occur.

So what is ITIL? Well the IT Infrastructure Library provides a framework of best practice guidance for IT service managers. The actual ITIL publications cover areas such as service strategy, service design, service transition, service operation, and continual service improvement.

The IT Service Management Forum (itSMF) has just produced a 58-page book, which describes itself as “An Introductory Overview of ITIL® V3”. This is available as a PDF from
http://www.itsmf.com/upload/bookstore/itSMF_ITILV3_Intro_Overview.pdf. They are clearly expecting people to print it because, apart from the cover page, it is all in black and white – or perhaps that’s a hidden metaphor.

The publication offers the following definition of service management: “[it] is a set of specialized organizational capabilities for providing value to customers in the form of services”. And to clarify, it says that a service is: “a means of delivering value to customers by facilitating outcomes customers want to achieve without the ownership of specific costs and risks”.


The book also suggests benefits from the use of ITIL, which include: increased user and customer satisfaction with IT services; improved service availability, directly leading to increased business profits and revenue; financial savings from reduced rework, lost time, improved resource management and usage; improved time to market for new products and services; and improved decision making and optimized risk.


Definitely worth reading through for anyone involved in IT and any kind of service management.

On a completely different topic... The Arcati Mainframe Yearbook 2008 will shortly be conducting its annual survey. It’s been available since 2005, and as well as the annual user survey, it contains a directory of vendors and consultants, a media guide, a strategy section with papers on mainframe trends and directions, a glossary of terminology, and a technical specification section. The survey is at
www.arcati.com/usersurvey08.html. The Arcati Mainframe Yearbook 2008 itself will be available in January 2008.

Monday 19 November 2007

Rational mainframes

Who says mainframes are hard to use and integrate with other systems? Well, it seems just about everyone who hasn’t spent long periods of time working on them. Experienced mainframers, of course, always stress the reliability and security of mainframes compared to any other system.

Anyway, it seems that IBM has taken the “hard to use” criticism on board and has done something about it. So last October, it initiated a $100 million mainframe simplification project with promises of management and development tools that would be incredibly powerful, but very easy to use. And a couple of weeks ago it started delivering the stuff.

These Rational development tools included retooled compilers for COBOL and PL/I on z/OS, a turn-key COBOL or Java code generation tool called the Rational Business Developer Extension, and an RCD (Rapid Component Development) tool that can scan existing COBOL code and identify useful jobs or processes, which it then componentizes. This product is called Rational Transformation Workbench.

The thinking behind the new products is that organizations have high-quality code that is supporting the business and already up and running. What’s needed is a simple way to expose that code – rather than writing what could very well be less efficient or bug-ridden code. So rather than the mainframe being a box that sits quietly somewhere within an organization and gets on with its work without bothering anybody, it now becomes a major player in the development of a business. Which, as you’ll appreciate, is a position IBM prefers mainframes to occupy.

The compilers are Version 4.1 of Enterprise COBOL for z/OS and Version 3.7 of Enterprise PL/I for z/OS. These are designed to integrate mainframe applications with Web-oriented business processes.

Version 7 of Rational Business Developer Extension uses code written in IBM’s EGL (Enterprise Generation Language) to generate COBOL or Java. EGL is similar to COBOL in construction and helps “modernize” code by allowing users to work in an SOA (Service-Oriented Architecture) environment.

IBM also announced Version 7.1 of Rational Developer for System z. This is claimed to be a simplified development environment for programming mainframe applications. Although described as a new product, in many ways it is a repackaging of existing IBM technologies.

I’d also like to mention (again) the Virtual IMS User Group at
www.virtualims.com. Its first virtual presentation is on 4 December at 10:30 CST. One of NEON Enterprise Software’s IMS experts will give the first presentation. Following the presentation, Virtual IMS Connection members will be able to ask any questions they have and share their own experiences. It’s all free.

Monday 12 November 2007

The War of the Web

Back in 1978 Jeff Wayne released his musical version of H G Wells’ 1898 classic The War of the Worlds – and I’d like you to be humming the well-known theme from that album while you read the rest of this blog, which I’ve called “The War of the Web”.

Now we’ve all installed firewalls, antivirus software, and anti-spyware, and we’ve probably got something to check for rootkits and any other nasties, but now even that isn’t enough. It appears that gone are the days of the sad little nerd trying to claim fame in his sad little nerd community by launching a virus on the rest of us that basically says, “I am here, look at me”. I’m sure there are still people like that passing their time in this particular way, but they are not the problem.

The next level of attack on ordinary people, like you and me, came from organized crime. Every time we inadvertently found ourselves trying to download something free off the Internet, we also downloaded a piece of software that exposed our files to outsiders – the growth of broadband helped criminals no end. Not only could they see the existence of our files labelled secret_passwords.doc and home_accounts.xls, they could download their contents, steal our identity, and withdraw all our hard-earned cash from the bank. Next they turned our computers into zombies that sent out millions of spam e-mails over our broadband connections when we left them for a few minutes.

But now we have reached a new level of sophisticated attack, and from the unlikeliest of bedfellows. I’m talking about legitimate governments and terrorists! Now I’m sure that your government and mine can’t possibly be involved – it’s always the others! For example, there has been wide reporting in the press that the Chinese plan to have electronic supremacy by 2050. Apparently, hackers within the Chinese People’s Liberation Army have revealed China’s plan to control other countries’ military networks and disable their financial and communications capabilities. It seems that superiority in any future war lies in successful cyber assaults, and, what’s worse, globally there are an awful lot of vulnerable systems. It also seems that the Chinese have produced a blueprint for cyber warfare.

Obviously, a successful cyber attack could destabilize a country – which is probably why those hackers who can’t get a job with a legitimate government are being recruited to help terrorists. Although it doesn’t appear to have happened, 11 November was meant to see a denial-of-service attack by al-Qaeda. This was the date set for a cyber jihad against non-Moslem targets. The attack was meant to work by allowing sympathizers to download a tool that, when coordinated with thousands of other like-minded people, would cause the denial-of-service attack.

Now I know the Internet is full of paranoid ravings and conspiracy Web sites, but it does seem like an extra problem to worry about – maybe the Internet won’t be there tomorrow morning when I try to log on. The only good inference you can draw from this is that if these cyber attacks are well known, there must be a lot of people in white hats preventing such attacks from happening. But, perhaps equally worryingly, they must be thinking about ways to wage cyber war on whoever they think of as wearing black hats. Let’s hope the War of the Web never gets past the planning stage – take it away Jeff!

Thursday 1 November 2007

IMS community virtual user group Web site

I was blogging about IMS a couple of weeks ago, and at the time I was thinking that there weren’t a lot of Web-based resources available for IMS sites. So now I’d like to announce the launch of Virtual IMS Connection, the IMS community Web site at www.virtualims.com.

You would have thought that, with IMS installed at 95% of the Fortune 1000 companies, the Web would be awash with sites discussing its use, how to improve performance with various hints and tips, and perhaps even a section for people looking for work and companies looking for experienced people. Strangely, there seems to be very little out there. But, as I said above, not any longer.

The new Web site at
www.virtualims.com is intended to be a meeting place for all IMS people. It’s a virtual user group. I’m planning to run virtual meetings and hopefully produce a Web-based newsletter for IMS folk. In fact, I plan to have the first session in early December this year.

Now you’re asking how much is this going to cost to join? The answer is nothing at all. You just sign up and then you can take part in the first and all future virtual meetings. And you can join in the discussions.

The Web site will tell you about virtual meetings – the topics and date and time. There’s a section pointing to useful IMS articles that have been published recently, a section for IMS-related resources, and a section for IMS events. There is also an IMS news section.

In addition, there is a forum area where IMS experts can share their experiences and useful hints and tips, and ask their peers questions about any aspect of IMS. Lastly, there is a job bank, where people looking for jobs and people needing staff can post their information.

The whole thing depends on user involvement, and the topics for the virtual meetings and the content of the Web site will depend very much on input from the users. I’m hoping that the Virtual IMS Connection Web site will become a major IMS resource and a lively and informative site for IMS people to visit. Please register your interest now – it really is all free.

Monday 29 October 2007

Database auditing

Finding out how your database is performing and what activities took place has traditionally been an historical activity. By that I mean actions against the database have been recorded in the logs and then later – perhaps the following day – these logs have been examined to find out exactly what happened. The advantage of this is that you have a fairly good record of all activities that occurred and it doesn’t use up too many valuable MIPS. The downside is that you never know what is happening right now, and your log may not record enough detail about what did happen.

The alternative is to run trace utilities – and for DB2, for example, there are a number of traces that can be run. The good thing about traces is that they can help to identify where a performance problem is occurring. However, they also have a high CPU overhead. Not that you would, but if you run DB2’s global trace with all the audit classes started, IBM reckons this will add 100% CPU overhead. Even running just the audit trace classes adds an estimated 5% CPU overhead.
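Those overhead percentages are easier to feel as arithmetic. A quick Python sketch, using a hypothetical baseline utilization figure (the 100% and 5% overheads come from the paragraph above; the 40% baseline is an invented example):

```python
# What does a trace overhead do to CPU utilization? The overhead
# percentages are from the text; the baseline is an assumption.
def cpu_with_trace(baseline_pct, overhead_pct):
    """Return CPU utilization once a trace adds the given overhead."""
    return baseline_pct * (1 + overhead_pct / 100.0)

baseline = 40.0  # assume the subsystem is 40% busy before tracing
print(cpu_with_trace(baseline, 100))  # global trace doubles the cost
print(cpu_with_trace(baseline, 5))    # audit trace classes add a twentieth
```

A box already running hot simply has no headroom for a full global trace, which is why continuous auditing via traces is so unattractive.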

So why are we worried about auditing what’s going on in our database? It’s the growth in regulations. In the USA there’s the Sarbanes-Oxley Act (SOX) and also the Payment Card Industry Data Security Standard (PCI-DSS). Both of these can affect what a company needs to audit. An audit is meant to identify whether procedures are in place, whether they are functioning as required, and whether they are being updated as necessary. In the event that one of these is not happening, the audit should be able to make recommendations for improvement.

It’s also important that database auditing software isn’t run by the DBA or anyone else who maintains the database. Pretty obviously, if the DBA was making changes to the data or browsing records he wasn’t authorized to look at, then, when he ran the auditing software, he could remove all information about those activities and no-one would be any the wiser.

So, to summarize, a successful database auditing tool would have to work in real time, not historically. It must not impact performance. It would have to comply with the latest regulations. And it would have to be able to audit the actions of the DBA and other super users.

There’s one other characteristic that would be useful. Having identified in real-time actions that violated corporate policies (like changing the payroll data!) it should then respond with a policy-based action – like an alert.
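A minimal sketch of that kind of real-time, policy-based check might look like this in Python; the event fields, the policy format, and the alert text are all invented for illustration, not taken from any real auditing product:

```python
# Hypothetical real-time policy check: given an audit event, return
# alerts for any corporate policy the event violates.
def check_event(event, policies):
    """Match one audit event against a list of policy rules."""
    alerts = []
    for policy in policies:
        if (policy["table"] == event["table"]
                and event["action"] in policy["forbidden"]):
            alerts.append(f"ALERT: {event['user']} performed "
                          f"{event['action']} on {event['table']}")
    return alerts

# Example policy: nobody changes the payroll data.
policies = [{"table": "PAYROLL", "forbidden": {"UPDATE", "DELETE"}}]
event = {"user": "dba01", "table": "PAYROLL", "action": "UPDATE"}
print(check_event(event, policies))
```

The real product would of course feed events from the database in real time and route alerts to a console or pager, but the shape of the decision – event in, policy-based action out – is the same.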

Monday 22 October 2007

IMS at 40

With the recent announcement of Version 10 of IMS, I thought it would be quite interesting to take look at what IMS actually is, before seeing what’s in the new version.

Information Management System, to give it its full name, is a combination of database and transaction processing system. I’m not sure whether its 40th birthday was last year or next year because work started on it back in 1966, but it was 1968 when it was first running anywhere.

There are three databases associated with IMS DB. These are called “full function”, “fast path”, and High-Availability Large Databases (HALDBs). With full function databases – the original database type – data is stored in VSAM or OSAM files and can be accessed using HDAM, HIDAM, HSAM, HISAM, and SHISAM access methods. Full-function databases are derived from DL/I databases that were around at the time (1966). There are two types of fast path database – Data Entry DataBases (DEDBs) and Main Storage DataBases (MSDBs). These databases do not have indexes and are stored in VSAM files. HALDBs are the newest (since V7). They are like souped-up very big full-function databases.
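All of these are hierarchical databases: a record is a tree of segments rather than a set of relational rows. As a rough illustration of that shape (the segment names here are invented for the example, and this is plain Python, not real IMS syntax):

```python
# Rough illustration of an IMS-style hierarchical record: a root
# segment with dependent child segments. Segment names are invented.
class Segment:
    def __init__(self, name, fields, children=None):
        self.name = name
        self.fields = fields
        self.children = children or []

# One database record: a CUSTOMER root and its dependents.
record = Segment("CUSTOMER", {"custno": "0001", "name": "Acme"}, [
    Segment("ORDER", {"orderno": "A17"}, [
        Segment("ORDERLINE", {"item": "widget", "qty": 3}),
    ]),
    Segment("ADDRESS", {"city": "London"}),
])

def walk(seg, depth=0):
    """Depth-first traversal: top-to-bottom, left-to-right, the
    hierarchical order in which segments are processed."""
    yield depth, seg.name
    for child in seg.children:
        yield from walk(child, depth + 1)

print(list(walk(record)))
```

The access methods mentioned above (HDAM, HIDAM, and so on) are essentially different ways of laying out and reaching these segment trees on disk.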

IMS TM (sometimes written as IMS DC – Data Communications) provides a way for users to run transactions to get information from the database. It is perhaps most like CICS in the way it allows users to work. IMS stores transactions in message queues and then schedules them to run. Like CICS there is a lot of work going on internally to maintain the integrity of the data and the transaction.

Highlights of the V10 announcement include enhanced IMS/XML database support, enhanced XML and Web services capabilities, more autonomic computing, and improved performance in database utilities. Of course, full details are on the IBM Web site at
http://www-306.ibm.com/software/data/ims/v10/.

IMS is reckoned to be installed in 95 percent of Fortune 1000 companies, which makes it an important piece of software. It might have been around for quite a while, but by embracing SOA and Web services it has ensured that it will be with us for a long time yet.

Monday 15 October 2007

Back-ups and archives

So what is the difference between a back-up and an archive? Don’t both copy data somewhere so it can be restored at a later time if necessary? The answer to the second question is sort-of “yes”, and the answer to the first question is what this blog is about.

Back-ups of data can be stored at the same location as the original or offsite. If the main site suffers a catastrophe, the data can be restored somewhere else using the offsite back-up and work can continue. Back-ups used to be performed to tapes and the tapes would be overwritten after a week or some other fairly short period of time. The data in a back-up was the same as the data left on the mainframe.

An archive is something completely different. Gartner has suggested that the amount of data in a typical database is growing by 125%. For performance reasons, no one can afford to leave unused data in a database. Unused data is data that isn’t needed operationally and won’t be referenced by any transaction. This data can be moved out of the database to an archive. The database will then be smaller, so reorgs and back-ups will complete more quickly. Using the database will require less CPU, so everything else will perform better. In addition to improved performance, organizations will enjoy reduced costs. So archiving gives a huge return on investment.

The big problem with archived data is that it needs to hang around for a long time. In fact, with new laws and regulations this could be up to 30 years! A lot can change in 30 years. Your database schema may change; in fact, because of takeovers, mergers, and other reasons, your brand of database may change. And there’s even a chance that you won’t have a mainframe! What you need is a future-proof storage mechanism. You also need to be able to access the data that you have in your archive. Many countries are now allowing electronic records to be used in court, and those archived records need to be accessible. It’s no good in 20 years’ time hoping that you can restore some back-ups because, even if you have the same database, you probably won’t use the same schema. You need to be able to access the data, you need to be able to retrieve the data, and you need to be able to produce reports about the data.
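One way to make an archive future-proof is to make it self-describing: store the schema alongside the rows, so the data can be interpreted decades later without the original DBMS. A minimal sketch of that idea in Python, with invented field names:

```python
import json

# Archive rows together with a description of their own schema, so the
# data can still be interpreted long after the source database (or its
# schema, or the vendor) has changed. Field names here are invented.
schema = {"fields": [
    {"name": "account_id", "type": "string"},
    {"name": "balance",    "type": "decimal", "unit": "GBP"},
    {"name": "closed_on",  "type": "date",    "format": "YYYY-MM-DD"},
]}
rows = [["AC-1001", "250.00", "2007-06-30"]]

with open("archive.json", "w") as f:
    json.dump({"schema": schema, "rows": rows}, f)

# Years later: no DBMS needed, just the archive file itself.
with open("archive.json") as f:
    archived = json.load(f)
names = [fld["name"] for fld in archived["schema"]["fields"]]
```

The point is not the file format (real archiving products use their own), but that meaning travels with the data rather than living only in a DBMS catalog that may no longer exist.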

As well as being able to run e-discovery tools against your archive (when it comes to litigation both sides need to know what you’ve got!), you need to ensure that it is incorruptible. It’s no good finding that five years ago someone accessed the archive and hid the tracks of their previous ten years of misdeeds. The archived data has to be read-only.
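The "incorruptible" requirement can be made verifiable rather than just hoped for. One standard technique is a hash chain, where each record's hash covers the previous record's hash, so silently rewriting history anywhere breaks every hash after it. A sketch of the idea only – real products also rely on WORM storage and signed timestamps:

```python
import hashlib

def chain(records):
    """Return a list of digests where each digest covers all prior records."""
    digests, prev = [], b""
    for rec in records:
        h = hashlib.sha256(prev + rec.encode()).hexdigest()
        digests.append(h)
        prev = h.encode()
    return digests

original = chain(["rec-1", "rec-2", "rec-3"])
tampered = chain(["rec-1", "REC-2-altered", "rec-3"])
# The digests diverge from the altered record onwards, so an auditor
# holding only the final digest can detect that history was rewritten.
```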

And, of course, when the time comes, you have to be able to delete the data that has come to the end of both its business life and its compliance life.

So archiving has much more to it than simple back-ups. It’s quite a big difference.

Monday 8 October 2007

So Long and Thanks for All the Fish

In October 1985 the very first CICS Update was put together. Since the summer that year, articles had been coming in to the Xephon office. Created on an Apple II using the Zardax word processing program, the first issue of CICS Update was printed out and sent to the printers at the beginning of November. Early in December, issue 1 arrived on the desks of subscribers. The Updates were born.

I wasn't there for the launch – I joined Xephon in February 1986. Soon there was VM Update and MVS Update. Next came VSE Update and VSAM Update. Then TCP Update and RACF Update. Others came and went, some quicker than others. There was Web Update, Oracle Update, NT Update, and Notes Update. In the end there was AIX Update, CICS Update, DB2 Update, z/OS Update, WebSphere Update, TCP/SNA Update, and RACF Update.

For the past four years, I have been editing all of them – but no longer. The new editor is Amy Novotny, who you may know from her work on TCI Publication's zJournal. If you do have any articles you want to contribute to the Updates, you can send them to her at
amy@tcipubs.com.

So Long and Thanks for All the Fish is of course the title of the fourth book in Douglas Adams' Hitch-hiker's Guide to the Galaxy trilogy(!), and was published in 1984. It's what all the whales say when they leave Earth, just before the Vogon fleet arrives.

Good luck to Amy and the Updates in the future.
And if you need to get in contact with me, you can use trevor@itech-ed.com.
See you around...

Sunday 30 September 2007

Compliance, data storage, and Titans

The Titans, in Greek mythology, were originally twelve powerful gods. They were later overthrown by Zeus and the Olympian gods. I'm not talking about them. Nor am I talking about the fictional characters created by Brian Herbert and Kevin J Anderson in their Legends of Dune novels. Today I want to talk about an interesting announcement from NEON Enterprise Software (www.neonesoft.com) called TITAN Archive.

So what makes TITAN Archive more interesting than anything else announced in September? Well, basically, its simplicity and usefulness. It is described as a "database archiving solution", which means that an organization can use it to store structured data for long periods of time. And why should anyone want to do that? Well the answer is compliance.

Regulations are getting stricter in so many countries, and companies are now compelled for legal reasons to store large amounts of data for long periods of time. In fact, data retention could now be between 6 and 25 years. Many organizations are defining their own retention policies and are looking for ways to action those policies that are economic, that allow data to be recalled quickly and easily (now called e-discovery if it's needed for a court case), and that, at the same time, don't affect the performance of their current computing needs. They are looking for a solution that meets all compliance and legal requirements and can be used in the event of litigation.

At the moment, TITAN Archive works with DB2, but plans are in place for a version for Oracle and one for IMS. Both data and metadata are stored in what's called an Encapsulated Archive Data Object (EADO). The EADO format is independent of the source DBMS (which may very well change at a company in the course of 25 years!) and can be accessed or queried using standard SQL queries or reports – which makes accessing it very easy. The data can be stored for as long as necessary. TITAN Archive can also have a discard policy, which makes sure that data is deleted when it is no longer required for legal or commercial purposes.
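The general idea – pull rows and their metadata out of the source DBMS into a self-contained store that any standard SQL tool can query – can be sketched with Python's built-in SQLite. To be clear, the EADO format itself is proprietary and nothing like this; the table and column names below are invented purely for illustration:

```python
import sqlite3

# A self-contained archive: the data and its metadata travel together,
# and standard SQL works against it regardless of the source DBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE archived_orders (id TEXT, amount REAL, archived_on TEXT)")
conn.execute("CREATE TABLE archive_metadata (source_dbms TEXT, source_table TEXT, retain_until TEXT)")
conn.execute("INSERT INTO archived_orders VALUES ('ORD-9', 120.5, '2007-09-30')")
conn.execute("INSERT INTO archive_metadata VALUES ('DB2', 'ORDERS', '2032-09-30')")

# Years later, an ordinary SQL query retrieves the archived data,
# even if the company has long since switched database vendors.
total = conn.execute("SELECT SUM(amount) FROM archived_orders").fetchone()[0]
src = conn.execute("SELECT source_dbms FROM archive_metadata").fetchone()[0]
```

Recording the retention date alongside the data is also what makes an automated discard policy possible: a periodic job can delete exactly those rows whose compliance life has expired.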

TITAN Archive connects to a storage area network and is managed from a Java interface that could be deployed across the enterprise or secured to a single location. The heart of TITAN Archive is the archive appliance. This is a Linux server that performs all the TITAN Archive processing.

Moving archive data off the mainframe and being able to access it easily, while retaining it for the longer periods of time now required, is a problem many companies face. TITAN Archive seems like a very useful and economic solution to this problem.

Wednesday 26 September 2007

How Green Was My Valley – and how green are my computers?


How Green Was My Valley is a 1939 novel by Richard Llewellyn and a 1941 film directed by John Ford. It was written and filmed in the days when green was just a colour and not an aspirational lifestyle. I blogged about IBM’s green data centre plans a few months ago, but I wanted to revisit this whole issue.

There seem to be a lot of misconceptions about what’s green and what isn’t, and much depends on how you look at an issue.

For example, I have heard it said that because flat screens use less energy than cathode ray tubes, we should all (if we haven’t done so already) get rid of those old screens and replace them with new flat ones. Apparently wrong! Because of the huge amount of energy and resources it takes to create a CRT and a flat screen, it is, in fact, more energy efficient to use that CRT right up to the moment it fails, and then change to a flat screen. This is because, although per hour of usage the flat screen is greener, the total amount of energy it took to extract all the raw materials and then construct the screen far outweighs the energy saved by using that screen. So we should be using the old device until it no longer works and then change over.
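The break-even arithmetic is easy to sketch. All the numbers below are invented for illustration – real embodied-energy figures vary widely by model and by study:

```python
# Back-of-envelope break-even: how many hours of use before a new flat
# screen "pays back" the energy it took to manufacture it?
embodied_energy_kwh = 500    # assumed energy to manufacture the new screen
crt_draw_kw = 0.100          # assumed CRT power draw (100 W)
flat_draw_kw = 0.040         # assumed flat-screen power draw (40 W)

saving_per_hour_kwh = crt_draw_kw - flat_draw_kw        # 0.06 kWh saved/hour
breakeven_hours = embodied_energy_kwh / saving_per_hour_kwh
years_at_8h_day = breakeven_hours / (8 * 250)           # 8 h/day, 250 days/year
```

With these assumed figures, the new screen takes over 8,000 hours – more than four years of office use – just to recoup its own manufacture, which is the argument for running the old CRT into the ground first.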

Interestingly, thinking about the raw resources, it has been suggested that a standard PC uses 1.8 tonnes of raw materials.

Another common comment is that recycling computers is a good thing. The idea is that computers contain lots of expensive metals (like gold), so old ones should be stripped down and the expensive metals extracted and reused. Unfortunately, the energy audit for this is quite high. So is there a better alternative? Well yes, or else I wouldn’t have mentioned it! There are a variety of companies and charities that will refurbish computers and peripherals. A refurbished PC could be re-sold or it could be shipped to the developing world – both better choices than trying to regain the metal from the old PC and then using it in a new one. It’s the difference between re-use and recycling.

Storage vendor ONStor recently found that 58% of the companies it surveyed were either still talking about creating a green IT environment, or still had no plans to do anything. But with conflicting and confusing messages, that isn't completely surprising.


Things like consolidation and virtualization could help reduce power, cooling, and other operational expenses – and these would therefore help reduce energy consumption and carbon dioxide emissions, etc.

Of course, we could all do more. Many sites (and many of my friends’ houses) have old machines sitting in cupboards and under unused desks. These could be given to charities and sent on to developing countries. They’re certainly not doing anyone any good gathering dust. And even if a computer doesn’t work, given two or three machines, enough spare components could be put together to get one that does work – and which would then be put to good use.


Even if we’re not concerned with being green, with saving the planet, or with helping third-world countries, we are paying the electricity bill. So in terms of simple economics, powering off unused printers and computers and anything else we leave in stand-by mode will save us money and is a way of being green too. I know you can’t power off your mainframe, but there are often a lot of laptops left on in offices. Think, how green can your offices be – not just your data centre?!

Office of the future?

It had to happen – I was bound to be sent a DOCX file. This is the new file type associated with Microsoft Office 2007. It’s all to do with the Office Open XML format Microsoft is keen on, and, of course, my copy of Office 2000 can’t open it. To be fair, Microsoft does have a download that allows Office 2000 to open DOCX files, but it comes with health warnings and caveats, so I haven’t tried it.

I have wondered in the past about keeping the faith with Microsoft or whether I should go the Open Source route and install OpenOffice etc. Indeed I wrestled for a long time with getting Linux installed permanently on my PC (and not just booting up a distro from a CD every now and again).

So, I read with interest that IBM has decided to join the OpenOffice.org development community and is even donating some code that it developed for Lotus Notes. (Interestingly, Ray Ozzie, who developed Notes, now works for Microsoft.) OpenOffice.org was founded by Sun and works to the Open Document Format (ODF) ISO standard – not Microsoft’s Office Open XML (OOXML or Open XML) format.

Apparently, the code that was developed for Notes was derived in part from what was originally Microsoft-developed technology! It seems that IBM’s iAccessible2 specification, which makes accessibility features available to visually-impaired users interacting with ODF-compliant applications, was developed from Microsoft Active Accessibility (MAA). IBM has already donated the iAccessible2 specification to the Linux Foundation. iAccessible2 can run on Windows or Linux and is a set of APIs that makes it easier for visual elements in applications based on ODF and other Web technologies to be interpreted by screen readers, which then reproduce the information verbally for blind users.

Luckily, I’m not visually impaired and have no use for this technology, but I have a friend who works a lot with Web site design so that they can be used by visually-impaired people, and I have listened with interest while he talks about things I previously took for granted. It is important.

Anyway, even if IBM’s motives are not pure and they secretly hope that OOXML never becomes an ISO standard, making this kind of technology freely available has got to be a good thing.


Maybe we should all take another look at OpenOffice.

Facebook – cocaine for the Internet generation?

It was only a couple of weeks ago that I was blogging about social networks on the Internet and how I thought that Facebook was being colonized by older people, not just students and other youngsters. And now I find that Facebook is being treated by some companies as the most evil thing since the last virus or worm infection!

What’s happened is that Facebook has caught on, and a large number of ordinary working people have uploaded photos to it and linked with other “friends”. That all sounds rather good – where’s the harm in that? Well it seems that these same working people have been seduced by the many “applications” available with Facebook – and I particularly like Pandora (but that’s because I listen to
www.pandora.com anyway), Where I’ve been, and My aquarium. But the truth is, there are lots of these applications, such as: FunWall, Horoscopes, Fortune cookie, My solar system, The sorting hat, Moods, Superpoke, Likenesses, Harry Potter magic spells, etc, etc.

The problem is three-fold for employers. Firstly, too many employees are spending too much time interacting with their friends, uploading photos and videos, and messing about with the applications. The “lost” hours of work are mounting up, and so companies are banning access to Facebook. Some are allowing access at lunch times and after the defined working day, but other companies, apparently, have gone for a blanket ban.


The second problem is that large amounts of a company’s broadband bandwidth is being used by Facebookers rather than people doing productive work.


The third problem is that these applications seem to get round corporate firewalls and anti-virus software, with the result that they create a backdoor through which anything nasty could enter. No-one wants a security risk left undealt with.


This must be good publicity for Facebook, making it seem especially attractive – nothing boosts sales of a product like a ban! However, many wiser heads have been here before. I remember the first computer game – the one that was text only, and where a small dwarf threw an axe at you and killed you. Lots of hours were lost with that until the mood passed. More recently MSN has been banned at some sites because people spent all day talking to each other on that rather than getting on with work. These things come in phases, work time is lost, then the mood changes, work is caught up with and that old hot item is ignored. I would expect to see, this time next year, that Facebook is still popular, but not so compulsive as it is now. People won’t need to be banned from Facebook because they will not feel compelled to access it. But, I would bet, there’ll be some other must-visit Website, and we’ll be off again!


These things have been compared to crack cocaine and other “recreational” drugs. In truth they can be very compulsive for a while, but, unlike narcotics, eventually you want less-and-less of them not more-and-more.

The “dinosaur” lives on

I can still remember those distant days of the 1990s when everyone you spoke to “knew” that mainframes were doomed to extinction, and dates were confidently predicted when the last one would be turned off. These sit alongside, in terms of accuracy, predictions about how many computers a country would need in the future – I think two was the best guess, just one fewer than in my office at the moment!

Not only have the “dinosaurs” lived on, they are continuing to evolve and flourish – as witnessed by this “summer of love” for all things mainframe from IBM. They started with the latest version of CICS (V3.2), then we had the latest DB2 (9.1), and now we have the operating system itself, z/OS V1.9.


In summary, the new Release has been enhanced so that typical Unix applications, such as ERP or CRM (which are usually found on mid-range machines at the moment), can be ported to z/OS more easily.


Also there have been upgrades in terms of security and scalability. With improved network security management tools, it’s now easier to set consistent network security policies across distributed systems that communicate with the mainframe, as well as multiple instances of the operating system. Other security improvements come from enhanced PKI (Public Key Infrastructure) Services and RACF to help improve the creation, authentication, renewal, and management of digital certificates for user and device authentication directly through the mainframe. This now provides centralized management for Web-based applications. z/OS’s PKI could be used to secure a wireless network infrastructure or the end nodes of a Virtual Private Network (VPN) that might be hosting point of sale or ATM communications traffic. Lastly, the z/OS Integrated Cryptographic Service Facility (ICSF) will be enhanced to include the PKCS#11 standard, which specifies an Application-Programming Interface (API) for devices that hold cryptographic information and perform cryptographic functions.


One of the biggest improvements is the ability for logical partitions to span up to 54 processors – previously they were limited (if limited is the right word here) to 32 processors.


The upgrade becomes available on the 28 September 2007.


So are mainframes going extinct and this is little more than a dead-cat-bounce? Definitely not. IBM is saying that its revenue grew by 12% in the first quarter of the year over the previous quarter and up 25% over the previous year. Remember that dinosaurs ruled the earth for 186 million years!

Where am I?

I am just back from China and suffering from the usual effects of jet lag – so just a short blog (you’ll be pleased to hear).

I thought I’d pass on lots of Chinese wisdom, but you’ve probably heard them all before. Anyway, as I think they say, the longest blog begins with a single word!

So, I was thinking about my IP address now that I’m back – I was wondering what it was. So I downloaded a widget called what.ip.i.have by Vlad Sasu. I’m a big fan of Yahoo Widgets (widgets.yahoo.com). I use widgets for the weather and the rainfall, and I have a BBC newsfeed and one showing my blogs (although it could be set for any other RSS feed). The new widget installed and told me my IP address.

The next stage, I thought, would be to look up that IP address on one of those sites that tell you where in the world each IP address comes from. I live in the beautiful west country near Bath and Bristol in the UK. My broadband connection is a slightly dear, but usually reliable, connection through BT.

So, my next stage was to go to Google and search for sites that would tell me where my IP address came from. I thought it would be an interesting test. In no particular order, I first tried www.ip-adress.com. Like many of the others, it “knew” my IP address already and showed that I was located in Silvertown in Newham, which is east London near the River Thames. I thought that perhaps BT’s cables joined the rest of the world at that point.

Next I tried http://whatismyipaddress.com, and that came up with Silvertown as well. So I thought that definitely must be where I am (Internetwise that is).

http://www.melissadata.com/lookups/iplocation.asp, however, thought I was in Basingstoke, in England. And so did http://www.geobytes.com/IpLocator.htm, which even gave Basingstoke’s longitude and latitude. http://www.ip2location.com/ also had me in Basingstoke.

http://www.analysespider.com/ip2country/lookup.php knew I was in the UK, but had my IP address as coming from Suffolk – which is about as far east of me as you can go without falling into the North Sea.
http://www.blueforge.org/map/ had me miles from anywhere on the Yorkshire Dales – very picturesque, of course, but miles away. And that’s when I stopped.
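The reason the sites disagree so wildly is that each one works from its own database mapping address ranges to places, and those databases go stale as ISPs reallocate blocks. A toy sketch of how such a lookup works – the ranges and towns below are entirely made up:

```python
import ipaddress

# Geolocation services map address ranges to locations. Providers build
# these tables independently (and update them at different rates), which
# is why two services can put the same address in different towns.
location_db = [
    (ipaddress.ip_network("81.0.0.0/12"),  "Silvertown, London"),
    (ipaddress.ip_network("86.16.0.0/14"), "Basingstoke"),
]

def locate(ip):
    addr = ipaddress.ip_address(ip)
    for network, place in location_db:
        if addr in network:
            return place
    return "unknown"

answer = locate("86.17.4.9")   # falls inside 86.16.0.0/14
```

Often what a locator actually pinpoints is the ISP's point of presence or the registered owner of the block, not the subscriber, which would explain results like Silvertown for a connection in the west country.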

I just wondered whether other people had tried these IP locators with any degree of success. Ah well, as they said in China, we do live in interesting times!

Social networking

Someone was telling me that some top IT people who write blogs regularly and have a presence on Facebook and Myspace, etc are now so busy with these Web-based interactions that they don’t have time to do their real jobs properly. So, they employ people to live their Web life for them while they get on with their proper work!

This is to confirm that I am really writing this blog and I don’t have an employee doing it for me!!

Facebook (www.facebook.com) and Myspace (www.myspace.com) are two really interesting examples of how the Web has developed. Both started out as the domain of youngsters and are now being colonized by older people – parents and grandparents of the original users. It appears that we are all keen on social networking.

Recently-announced figures suggest that Facebook has grown by 270 percent and Myspace by 72 percent in a year, although Myspace still has more users logging in each day (28.8 million) than Facebook (which has 15 million).

The good thing about these sites, according to marketeers, is that they identify new trends very early in their life-cycle. So marketing people know exactly what products they should be selling this season.

The downside, I suppose, is that this cult of newness means that after a time the excitement goes from these sites and they gradually shrink in terms of usage. At one time, everyone was talking about friendsreunited (www.friendsreunited.co.uk) and catching up with old school friends. Once you’ve caught up, the point of such a site diminishes. Similarly, Friendster (www.friendster.com) was very popular, but is perhaps less so now.
Youtube (www.youtube.com) is also very popular with youngsters because of the humorous and other short videos you can see there. I’m not sure that much interaction occurs between users on this site beyond one person uploading a video and other people watching it, but lots of people have joined.

A question many people ask is, are they dangerous? Facebook allows you to collect friends – in fact a colleague and I were having a competition earlier this year to see who could get the most friends on Facebook! When we stopped, we were still not even slightly close to the totals my children and their friends have. But is it dangerous? Does it encourage sexual predators, and are our youngsters at risk? The answer is probably not, because the more real friends you have on these networks, the less likely you are to talk to strangers.

Wikipedia (itself often maligned) lists 100 social networking sites at http://en.wikipedia.org/wiki/List_of_social_networking_websites. You can see how many you belong to.

Like lots of other people, I have an entry on LinkedIn (www.linkedin.com), Zoominfo (www.zoominfo.com), Plaxo (www.plaxo.com), and Naymz (www.naymz.com), and I have links to other people. However, if I really want to talk to any of these people, I e-mail them, which is exactly what I would have done if I didn’t belong to the networking site.

I think the plethora of social networking sites will eventually shrink to a few that everyone can use and a few that are specialized. I think some will grab people’s attention and become somewhere that you must have a presence, and others will wither and die as they forget to update or update with facilities that no-one really cares about. I think they could be useful as business tools if you could get people to join your group. For example, all the subscribers to Xephon’s (www.xephonusa.com) CICS Update could form a group on Facebook and share information about CICS. However, I’m not sure that most people belonging to these networks take them that seriously and would spend enough time talking to their group for there to be a business case, at the moment.

Anyway, the real Trevor Eddolls will not be blogging next week because I am going to be a tourist in China. Any burglars reading this post, please note that someone will be feeding our large and fierce dogs twice a day.

Viper 2

Last week I was talking about AIX 6, which IBM is making available as an open beta – which means anyone can test it out so long as they report their findings to IBM. This week I want to talk about Viper 2, the latest version of DB2 9, which is also available as a download for beta testers. You can register for the Viper 2 open beta program at www.ibm.com/db2/xml. The commercial version is slated to ship later this year.

DB2 9 (the original Viper) was released in July 2006. What made it so special was the way it could handle both relational and XML data easily, which was made possible by the use of pureXML. Users were able to simultaneously manage structured data plus documents and other types of content. This, IBM claims, made it superior to products from other database vendors – you know who they mean! Oracle 11g, which will have probably been announced when you read this, will have full native XML support. Sybase already has this facility.


According to recent figures from Gartner, IBM’s database software sales increased 8.8 percent in 2006 to just over $3.2 billion. However, Oracle had 14.9 percent growth and Microsoft had 28 percent growth in databases. As a consequence, IBM’s share of the $15.2 billion relational database market decreased to 21.1 percent in 2006 from 22.1 percent in 2005.


Viper 2 offers enhanced workload management and security. The workload management tools will give better query performance from data warehouses, and better handling of XML data within the database – they say. There is also automated hands-off failover for high availability.


In addition, there’s simplified memory management and increased customization control. DB2 9 can perform transactional and analytical tasks at the same time, and Viper 2 offers improved management tools for setting priorities between those tasks.


And, it apparently has greater flexibility and granularity in security, auditing, and access control. It’s now easier to manage the process of granting access to specific information in the database. Viper 2 makes it simpler to manage and administer role-based privileges, for example label-based access control, which allows customers to set access privileges for individual columns of data. It is also easier to add performance and management enhancements to the system's audit facilities.


It is well worth a look.

Good news for AIX users?

IBM has announced that it is making available an open beta of AIX Version 6.1 – an upgrade to the currently available version of AIX. Now, the questions that immediately spring to mind are: is this a good thing? and why is IBM doing it?

Before I try to answer my own questions – or at least share my thoughts about those questions – let’s have a look at what AIX 6.1 has to offer. The big news is virtualization enhancements, with improved security also included. The IBM Web site tells us that workload partitions (WPARs) offer software-based virtualization that is designed to reduce the number of operating system images needing to be managed when consolidating workloads. The live application mobility feature allows users to move a workload partition from one server to another while the workload is running, which provides mainframe-like continuous availability. For security there is now role-based access control, which gives administrators greater flexibility when granting user authorization and access control by role. There is also a new tool called the system director console, which provides access to the system management interface via a browser. The bad news for adventurous adopters is that IBM is not providing any support – there’s just a Web forum for other users to share problems and possible solutions.

So, is it a good thing? The answer is (of course) a definite maybe! If lots of people pick up on the beta, and do thoroughly test it for IBM, then the final product, when it is released, will be very stable and not have any irritating teething problems. There could be thousands of beta testers rather than the usual small group of dedicated testers. Plus it could be tested on a whole range of hardware with almost every conceivable peripheral attached and third-party product run on it. And beta testers will get the benefit of the new virtualization features and security features.

Why is IBM doing it? Apart from getting their software beta tested for free, they also make it look like their version of Unix is part of the open source world. The reason I say that is because it is called an “open beta” – hence the verbal link with open source, which is perceived as being a good thing – rather than being called a public beta. To be clear, while some components of AIX are open source, the actual operating system isn’t open source.

AIX Version 6 is a completely new version – the current one is 5.3. The final version will probably be out in November. Announcing an open beta programme means that IBM can steal some limelight back from Unix rivals HP and Sun. All in all, it is good news for AIX users.

A year in blogs

Without wishing to get all mushy about it, this is my blog’s birthday! It’s one year old today. This is blog 52, and blog 1 was published on 19 July 2006.

I’ve tried to comment on mainframe-related events that have caught my eye, and at times I have blogged about other computer-related things. I discussed stand-alone IP phones, problems with my new Vista laptop, and wireless networks. I also talked about “green” data centre strategies and virtualization, although a lot of the time I was focused on CICS and z/OS and DB2.

Perhaps one measure of how successful a blog has become is by seeing whether anyone else on the Internet mentions it. Here are some of the places that have picked up on this blog.

The blog was referred to by Craig Mullins in his excellent blog at http://www.db2portal.com/2006/08/mainframe-weekly-new-mainframe-focused.html. It was also talked about at the Mainframe Watch Belgium blog http://mainframe-watch-belgium.blogspot.com/2007/04/fellow-bloggers.html. At least one blog has been republished at Blue Mainframe (http://bluemainframe.com/).

James Governor mentioned Mainframe Weekly in his Mainframe blog at http://mainframe.typepad.com/blog/.

The blog about William Data Systems’ use of AJAX in its Ferret product is also linked to from William Data Systems’ Web site at http://www.willdata.com. There’s a reference to the “When is a mainframe not a mainframe?” blog at the Hercules-390 site at http://permalink.gmane.org/gmane.comp.emulators.hercules390.general/25845/.

My first blog on virtualization ("Virtualization – it's really clever") was also published on the DABCC Web site at http://www.dabcc.com/article.aspx?id=3553. The second one ("On Demand versus virtualization") can also be found on the DABCC Web site at http://www.dabcc.com/article.aspx?id=4346.

There is a reference to the same blog on the V-Magazine site at http://v-magazine.info/node/5189 and this links to Virtualization Technology news and information's VM blog page at http://vmblog.com/archive/2007/05/07/on-demand-versus-virtualization.aspx, where the full blog is republished.

That particular blog is also republished in full on the Virtual Strategy Magazine site at http://www.virtual-strategy.com/article/articleview/1999/1/7/. There's also a pointer to it at the IT BusinessEdge site at http://www.itbusinessedge.com/item/?ci=28159. Arthur Cole refers to this blog in his blog at http://www.itbusinessedge.com/blogs/dcc/?p=127. It was also quoted from at the PC Blade Daily site at http://www.pcbladecomputing.com/virtualization-plays-well-with-others.
It’s good to know that people are reading the blog and referring to it in their own blogs and on their Web sites. Looking to the future, in the next year, I plan to continue highlighting trends and interesting new products in the mainframe environment, while occasionally discussing other computing developments that catch my attention.


And finally, a big thank you to everyone who has read my blog during the past year.

The times they are a-changin’

Today (Monday 16 July 2007) is my youngest daughter’s 21st birthday – so happy birthday to Jennifer. I started to think how things were different 21 years ago from how they are today – and hence I stole the title from Bob Dylan’s third album (released 1964) for the title of this blog.

21 years ago I’d just started working for Xephon (which I still do). I had a small computer at home that I used for all my computing – although it was a Sinclair Spectrum and needed to be plugged in to a TV to see anything! IBM was the top mainframe computer company and you could use VM, VSE, or MVS as your operating system. CICS and IMS were very popular transaction processing systems. But no-one had heard of OS/390 or z/OS. SNA was still king of communication, with TCP/IP hardly being mentioned.

At work we shared Apple II computers – a luggable Mac each was still in the future. And we had so many pieces of paper!! We needed manuals and cuttings from the papers – you forget how the arrival of the Internet has made research so much easier. So, that’s another thing that’s changed – the Internet has revolutionized our lives. I can remember giving a course at that time where I would explain to people how many ways they interacted with a computer without them realising it. It sounds laughable today – you’d never do anything else on the course if you stuck to listing each person’s computer interactions!

The other thing that was missing 21 years ago that is such a necessary part of our lives is the mobile (cell) phone. You could be out of contact for a whole day and this was considered normal. Nowadays people expect an immediate answer. If you’re not getting calls on the phone then it’s text messages. There’s never been a generation of humans with such strong thumb muscles before! Teenagers can’t spell, but they can text amazingly fast.

21 years ago computer games were very simple. There was just no thought that a game would be able to respond to movements of your body like the Wii does. But, perhaps, back in those halcyon days, we went outside and played tennis or went swimming – sport that didn’t involve a TV screen.

Was it really a better, simpler time? Were politicians less corrupt and the world a safer place? This is probably the wrong blog to answer those kinds of questions. Would a CICS user from 1986 recognize a CICS screen from 2007? The answer is probably no. Gone are those green screens, replaced by browsers. They wouldn’t recognize SOA, Web services, and all the other current buzzwords.

And yet despite all these changes listed above (and many others), a typical CICS or IMS user would still understand the concept of entering data and getting a suitable response.

So perhaps when you look at things from a personal perspective, although Dylan was right that the times they are a-changin’ (laptops, phones, Internet, etc), the man-in-the-street still goes to work – it’s just what happens behind the scenes that has changed. For him, the French expression plus ça change, plus c'est la même chose – the more things change, the more they stay the same – might have been a more accurate title for this review of 21 years.

What do you think?

Where can you go for help?

You’re an IBM mainframe user, where can you go for help with your mainframe problems? (If you were thinking of more personal problems, you’re reading the wrong blog!!) Well, my obvious answer would be Xephon’s Update publications (see www.xephonusa.com) or, perhaps, a search on Google (www.google.com), but IBM has recently introduced Destination z (http://www-03.ibm.com/systems/z/destinationz/index.html).

IBM’s new Web-based portal is designed to allow its customers, system integrators, and software developers to talk about mainframe usage, share ideas, and ask for technical help from other users. And just in case you might find you need to buy something, Destination z has links to IBM sales. To be fair, though, it is meant to contain technical resources such as case histories and mainframe migration tools. Part of the thinking behind this development is to provide the expertise to help potential customers migrate workloads from other platforms to mainframes.


In marketing speak, the IBM announcement said that it will also provide space for business partners to drive business developments and provide a broad spectrum of technical resources.


Going back to Xephon for a moment, the June issue of TCP/SNA Update shared some interesting ideas from mainframe networking specialists. Two articles included code that could be used to monitor and measure exactly what was going on: the first looked at VTAM storage utilization and the second at VTAM subpool storage utilization. A third article looked at the need to apply a PTF if you use the VTAM Configuration Services Exit. Two further articles are also interesting: the first talks about SNA modernization, and the second discusses Enterprise Extenders.


If you have some mainframe networking information you would like to share you can send your article to me at trevore@xephon.com.

Let’s hear it for Power6

A while ago I mentioned IBM’s ECLipz project in this blog – its unannounced and mainly rumoured plan to create a single chip for System i, System p, and System z (hence the last three letters of the acronym). The big leap forward in this plan (according to rumour mills on the Web and elsewhere) was the much-touted Power6 chip, which IBM finally unveiled at the end of May.

Before we look at whether it fulfils any of the ECLipz hype, let’s see what was actually in the announcement. Running at a top speed of 4.7GHz, the microprocessor offers double the speed of a Power5 chip, yet still uses about the same amount of electricity to run and cool it (all part of the “green machine room”). This means customers can either double their performance or cut their power consumption in half by running at half the clock speed.
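The back-of-envelope arithmetic behind that claim can be sketched out. This is only an illustration with normalized, assumed numbers, and it uses the simplification that dynamic power scales roughly linearly with clock speed at a fixed voltage – real chips are messier:

```python
# Normalized figures (assumed for illustration, not IBM's actual data).
power5_speed = 1.0   # Power5 performance, our baseline
power6_speed = 2.0   # "double the speed" within the same power envelope
same_power = 1.0     # power envelope shared by both chips

# Option 1: run Power6 flat out -- double the performance, same power.
# Option 2: run Power6 at half the clock -- roughly Power5-level
# performance for roughly half the power.
half_clock_speed = power6_speed / 2
half_clock_power = same_power / 2
```

On these simplified assumptions, half-clock Power6 matches the Power5 baseline (`half_clock_speed == power5_speed`) while drawing about half the power – which is the choice IBM is offering customers.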


And while we’re talking “green”, the processor includes techniques to conserve power and reduce heat. In fact, the processor can be dynamically turned off when there is no useful work to be done and turned back on when there are instructions to be executed. Also, if extreme temperatures are detected, the Power6 chip can reduce its rate of execution to remain within an acceptable, user-defined, temperature range.


In terms of that other hot topic, virtualization, Power6 supports up to 1024 LPARs (Logical PARtitions). It also offers “live partition mobility”, which allows the resources in a specified LPAR to be increased or decreased, but, more interestingly, the applications in a virtual machine can be quiesced, the virtual machine can be moved from one physical server to another, and then everything restarts as though nothing had happened.
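The quiesce/move/restart sequence is easier to picture as pseudocode. The sketch below is a toy illustration only – the class and method names are invented, and real live partition mobility happens at the firmware level, not in application code:

```python
class VM:
    """Stand-in for a logical partition (names invented for illustration)."""
    def __init__(self, name, memory):
        self.name = name
        self.memory = memory   # stand-in for memory and device state
        self.running = True

    def quiesce(self):
        # Pause applications and flush in-flight work.
        self.running = False

    def resume(self):
        # Applications continue, unaware they have moved.
        self.running = True


class Host:
    """Stand-in for a physical Power server."""
    def __init__(self):
        self.partitions = {}

    def capture(self, vm):
        # Snapshot the partition's state and release it from this host.
        self.partitions.pop(vm.name, None)
        return dict(vm.memory)

    def restore(self, vm, state):
        # Recreate the partition on this host from the snapshot.
        vm.memory = state
        self.partitions[vm.name] = vm


def live_migrate(vm, source, target):
    vm.quiesce()                  # 1. quiesce the applications
    state = source.capture(vm)    # 2. capture the partition's state
    target.restore(vm, state)     # 3. recreate it on the other server
    vm.resume()                   # 4. restart as though nothing happened


source, target = Host(), Host()
vm = VM("lpar1", {"app": "running"})
source.partitions[vm.name] = vm
live_migrate(vm, source, target)
```

After the call, the partition exists only on the target host and its applications are running again – the essence of the “nothing happened” illusion.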


The new Systems Director Virtualization Manager eases virtualization management by including a Web-based interface and provides a single set of interfaces for managing all Power-based hardware and virtual partitions; and for discovering virtualized resources of the Virtual I/O server. Virtualization Manager 1.2 supports Power6 chips. It also supports Xen hypervisors included in Red Hat and Novell Linux distributions, as well as VMware, XenSource, and Microsoft Virtual Server.


As far as Project ECLipz goes, the Power6 chip does have redundancy features and support for mainframe instructions (including 50 new floating-point instructions designed to handle decimal maths and binary and decimal conversions). It’s the first Unix processor able to calculate decimal floating point arithmetic in hardware – previously calculations involving decimal numbers with floating decimal points were done using software. There’s also an AltiVec unit (a floating-point and integer processing engine), compliance with IBM’s Power ISA V2.03 specification, and support for Virtual Vector Architecture-2 (ViVA-2), allowing a combination of Power6 nodes to function as a single Vector processor.
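The problem that hardware decimal floating point addresses is easy to demonstrate. Python's decimal module does in software what Power6 now does in silicon – binary floating point cannot represent most decimal fractions exactly, which matters a great deal for money:

```python
from decimal import Decimal

# Binary floating point: 0.1 and 0.2 have no exact binary
# representation, so small errors creep into the sum.
binary_ok = (0.1 + 0.2) == 0.3          # False

# Decimal arithmetic keeps decimal fractions exact -- done in
# software here, and in hardware on the Power6.
decimal_ok = (Decimal("0.1") + Decimal("0.2")) == Decimal("0.3")  # True
```

Banks and insurers have been paying the software-emulation cost of this for decades, which is why doing it in hardware is worth 50 new instructions.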


And in case you were wondering, IBM listed benchmark tests showing the Power6 chip was faster than Lewis Hamilton’s Formula 1 car, and perhaps hinted that H-P’s Itanium-based machines may as well just give up now!

IBM acquisitive and dynamic

It looks like IBM has a plan. A number of recent events seem to indicate that IBM has decided how it wants things to look this time next year, and has started to set about making it happen. What am I talking about? Well I have in mind the recent acquisition of Watchfire, a Web application security company, and the “Web 2.0 Goes to Work” initiative.

Watchfire has a product called AppScan, which has been around for a few years now – in fact Watchfire got it by acquiring a company called Sanctum in 2004. IBM needed a good Web security product to go with RACF, its well-known mainframe security software, and, of course, its ISS purchase. Internet Security Systems cost IBM $1.3bn; the company sold intrusion detection and vulnerability assessment tools and services to secure corporate networks. Once it’s happy the Internet is secure, IBM can move forward with its new Web initiative.

Before I go on to talk about that, you might be interested to know that HP has bought SPI Dynamics, another Web security company. Whether HP bought the company to stop IBM getting it, or whether they have plans to integrate WebInspect (one of SPI’s products) with their own products, I just don’t know.


Anyway, the “Web 2.0 Goes to Work” initiative, announced 20 June, is IBM’s way of bringing the value of Web 2.0 into the enterprise. By the value of Web 2.0, they are thinking about things like easy access to information-rich browser-based applications, as well as social networking and collaboration software. No IBM announcement is complete these days without the letters S, O, and A appearing somewhere. IBM said that SOA helps build a flexible computing infrastructure and Web 2.0 provides users with the software required to create rich, lightweight, and easily-deployable software solutions.


Cutting through the hype, IBM has actually announced Lotus Connections, comprising social bookmarking and tagging, rich directories including skills and projects, activity dashboards, collaboration among like-minded communities, and weblogs or blogging. Lotus Quickr is a collaboration tool offering blogs, wikis, and templates. Thirdly, WebSphere Commerce now makes online shopping easier. Full details of the announcement can be found at www.ibm.com/web20.

IBM is clearly thinking ahead and definitely doesn’t want to be seen as the company selling “dinosaur” mainframes. A strong move into the Web 2.0 arena is clearly sensible – and making sure security is locked down tightly means IBM can retain its reputation for reliability.

SOA still making an impact

IBM’s SOA (Service-Oriented Architecture) conference, IMPACT 2007, attracted nearly 4,000 attendees to Orlando, Florida. IBM used the occasion to make some software and services announcements.

IBM introduced a new mainframe version of WebSphere Process Server, which, they claim, automates people- and information-centric business processes, and also consolidates mission-critical elements of a business onto a single system. IBM suggests that a combination of DB2 9, WebSphere Application Server (WAS), and WebSphere Process Server will deliver process and data services for SOA on a mainframe.


IBM also announced DB2 Dynamic Warehouse, which integrates Information on Demand and SOA strategies to implement Dynamic Warehousing solutions – they said. It also integrates with Rational Asset Manager (a registry of design, development, and deployment related assets such as services) to improve SOA governance and life-cycle management. At the same time, IBM announced a new WAS feature pack to simplify Web services deployment.

The trouble with SOA is that there are a lot of people talking about it, but not enough people who really understand how to implement SOA in an organization. IBM has thought about that issue and at IMPACT 2007 announced 218 self-paced and instructor-led courses, conducted online and in the classroom. IBM also claimed that it has good relationships with colleges and universities round the world and is working on the development of SOA-related curricula with them.


If you want to visualize how an SOA affects different parts of an organization, IBM had an interactive 3D educational game simulator. Called Innov8, this BPM simulator is designed to increase the understanding between IT departments and business executives.


At the same time, IBM announced an online portal containing Webcasts, podcasts, demos, White Papers, etc for people looking to get more SOA-related information.


Lastly, IBM announced its SOA portfolio, which contained integrated technology from DataPower SOA appliances, FileNet content manager, and Business Process Management (BPM). Included in the announcement was the WebSphere DataPower Integration Appliance XI50, which can now support direct database connectivity. Also, IBM has integrated the capabilities of WebSphere with the FileNet BPM.


So, not surprisingly, SOA and WebSphere are definitely THE hot topics for IBM at the moment.

Virtualization – a beginner’s guide to products

Let’s start with a caveat: I’m calling this a beginner’s guide not a complete guide – so, if you know of a product that I haven’t mentioned, sorry, I just ran out of space.

Now the thing is, on a mainframe, we’ve got z/VM, which is really the grandfather of all these fashionable virtualization products. In fact, if I can use a science fiction metaphor, VM is a bit like Dr Who, every few years it regenerates as a re-invigorated up-to-date youthful product, ready to set to with those pesky Daleks and Cybermen, etc.

And, of course, mainframers are all familiar with LPARs (Logical PARtitions), which are ways of dividing up the hardware so it can run multiple operating systems.

The real problem for mainframers is when they are asked to bring their wealth of experience with virtualized hardware and software to the x86 server arena. Where do you start? What products are available? Well, this is what I want to summarize here (for beginners).


I suppose the first product I should mention is IBM’s Virtualization Manager, which is an extension to IBM Director. The product provides a single console from which users can discover and manage real and virtual systems. Now, the virtual systems would themselves be running virtualization software – and I’ll talk about that layer in a moment.


If you don’t choose IBM, an alternative would be VMware’s product suite, which comprises eight components: Consolidated Backup (for backing up virtual machines), DRS (for resource allocation and balancing), ESX Server, High Availability (an HA engine), Virtual SMP (offering multiprocessor support for virtual machines), VirtualCenter (where management, automation, and optimization occur), VMFS (a FileSystem for storage virtualization), and VMotion (for migration).


Also, quite well-known is HP’s ProLiant Essentials Virtual Machine Management Pack, which more-or-less explains what it does in the title.


Last in this list of management software are CiRBA’s Data Center Intelligence (now at Version 4.2) and Marathon Technologies’ everRun. Marathon also has its v-Available initiative.


In terms of software that actually carries out the virtualization on an x86 platform perhaps the two best-known vendors would be VMware and XenSource. VMware has its ESX Server (mentioned above) and XenSource has XenEnterprise, XenServer, and XenExpress.


VMware reckons ESX Server has around 50% of the x86 virtualization marketplace. It installs straight on to the hardware and then runs multiple operating systems on top of it. The Xen products use the Xen Open Source hypervisor running straight on the hardware and allow Windows and Linux operating systems to run under them. Virtual Iron also uses the Xen hypervisor and is similar to the Xen products. It’s currently at Version 3.7. Also worth a quick mention is SWsoft, which produces Virtuozzo.


One other company that has a small presence in the world of virtualization is Microsoft – you may have heard of them! Microsoft has Virtual Server 2005 R2, which, as yet, hasn’t made a big impact on the world of virtualization.


So, any virtualization beginners out there – I hope that helped.

When is a mainframe not a mainframe?

The April/May 2007 issue of z/Journal (http://zjournal.tcipubs.com/issues/zJ.Apr-May07.pdf) has an interesting article by Philip H Smith III entitled, “The state of IBM mainframe emulation”. Emulation is a way of letting hardware run software that shouldn’t be able to run on that hardware! It’s an extra layer of code between the operating system and the hardware. The operating system sends an instruction and the emulation software converts that instruction to one that the existing hardware can understand. The hardware then carries out the instruction. Any response is then converted by the emulator into something that the operating system would expect, and the originating program carries on processing unaware of the clever stuff that’s been going on. Often there is a native operating system involved between the emulation software and the hardware, but not always.
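That instruction-by-instruction translation can be sketched in a few lines. This is a toy illustration only – the instruction set below is made up, not a real mainframe one – but it shows the dispatch loop at the heart of any emulator: the guest issues instructions it understands, and the emulator maps each one onto operations the host can actually perform:

```python
def emulate(program):
    """Run a 'guest' program written in a made-up instruction set."""
    registers = {"R1": 0, "R2": 0}   # the guest's view of the machine

    def load(reg, value):            # guest LOAD -> a host assignment
        registers[reg] = value

    def add(dst, src):               # guest ADD -> host integer addition
        registers[dst] += registers[src]

    handlers = {"LOAD": load, "ADD": add}

    # The dispatch loop: fetch each guest instruction, translate it
    # into host operations, and execute it.
    for opcode, *operands in program:
        handlers[opcode](*operands)
    return registers

# The guest program never knows what the host hardware really is.
result = emulate([("LOAD", "R1", 2), ("LOAD", "R2", 3), ("ADD", "R1", "R2")])
```

Real emulators like Hercules or FLEX-ES do this for the full instruction set, plus device emulation – but the shape of the loop is the same.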

Philip talks about FLEX-ES from Fundamental Software. Its business partners offer integrated FLEX-ES solutions on Intel-based laptops and servers, which means developers can test mainframe software on a laptop. FLEX-ES runs as a task under Linux and emulates a range of devices, including terminals and tape drives. Fundamental Software also sells hardware to allow real mainframe peripherals to connect to the laptop, and PC peripherals that can emulate their mainframe counterparts. There is currently a legal dispute between IBM and Fundamental Software.


There was also UMX Technologies, which offered a technology that was apparently developed in Russia. The company arrived in 2003 and disappeared in 2004.


Hercules is an Open Source mainframe emulator that was originally developed by Roger Bowler. Hercules runs under Linux, as well as Windows and Mac OS X. IBM, however, won’t license its operating systems for Hercules systems, so users have to either run older public domain versions of IBM operating systems (eg VM/370 or OS/360) or illegally run newer operating systems.


Platform Solutions has a product called the Open Mainframe, which provides a firmware-based mainframe environment on Intel-based hardware. It is built on intellectual property from the time that Amdahl offered a Plug-Compatible Mainframe (PCM). It’s not a complete solution because it doesn’t support the SIE instruction, with the result that z/VM won’t run. However, z/OS and z/Linux work OK. Open Mainframe runs straight on the hardware; it doesn’t need an operating system. Unsurprisingly, perhaps, IBM’s and PSI’s legal teams are now involved.
I also found Sim390, which is an application that runs under Windows and emulates a subset of the ESA/390 mainframe architecture. Its URL is http://www.geocities.com/sim390/index.htm.


I hope Philip H Smith III won’t mind me borrowing from his article, but there are two very interesting points leading on from this. One, which Philip makes in his article, is that if mainframe emulation is available on a laptop, it is easier to use and more likely that younger people (remember that awful bell-shaped curve showing the average age of experienced mainframers and COBOL programmers) will want to have a go.


The second point is that emulation is only a short step away from virtualization, which I’ve talked about before. Wouldn’t it make sense (from a user’s point of view) if they had one box of processors (Intel quad processors, P6s, whatever) on which they could run all their operating systems? The virtualization software would also be the emulation software. It could run Windows, Linux, z/VM, z/OS, etc. If a user’s needs were simple, it would be a small box with few chips and not too many peripherals. If a user’s needs were complex, it would be a big box with lots of everything. Virtualization is appearing everywhere, and I can quite easily see it absorbing the concept of mainframe emulation (IBM’s legal team permitting, of course!).