Sunday 27 July 2008

More mainframe futures?

Sometimes, writing a weekly blog means that only half the story gets published at a time. And Steve Craggs from Lustratus was very quick to pick me up on the fact that my article about the future of the mainframe a couple of weeks ago focused on data and not on applications. And he was quite right to do so.

Steve said: "Wile I agree with your conclusion that the mainframe will continue to contribute important business value in the future, I think you do it a disservice positioning it as simply a store for data" He continues: " see at least equal if not greater value coming from the mainframe as an APPLICATION store – all those trillions of dollars of investment can continue to deliver returns".


I completely agree. Not only is your data safely stored on the mainframe (which was the focus of my blog), but the mainframe also runs lots of high-performing applications that, with the growth of SOA (Service-Oriented Architecture), can be made available to customers, business partners, and others over the Internet. Accessing data is very important, and accessing it in a way that maintains security and offers high performance has got to be a plus – and this is clearly what the mainframe is capable of delivering.


Steve Craggs also helpfully points out that a best-of-breed functionality guide, setting out what is needed in a tool that makes mainframe applications available in a distributed environment (preferably as an SOA service), is available at www.lustratus.com. This will certainly help any site starting to evaluate the various vendor solutions.


Some time ago I did mention that the acronym SOA was said by some cynics to stand for Same Old Architecture! Now people are talking about WOA (Web-Oriented Architecture) as a possible way to deliver something similar to SOA, but in a more lightweight fashion. Kind of SOA meets Web 2.0!


In essence, WOA describes a core set of Web protocols (fairly standard things like HTTP and XML) that are scalable, interoperable, and dynamic. WOA makes use of REST (REpresentational State Transfer), which is a way of using HTTP as a Web service.
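To give a flavour of how lightweight that can be, here is a minimal sketch of a REST-style call in Python. The host name and resource path are invented purely for illustration; a real mainframe-hosted service would publish its own resource URLs.

import urllib.request

def get_customer(customer_id):
    # A REST-style request is just an HTTP GET against a resource URL;
    # the response might be an XML (or JSON) representation of the resource.
    url = "http://mainframe.example.com/customers/" + customer_id
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

if __name__ == "__main__":
    print(get_customer("00012345"))

No WSDL, no SOAP envelopes, no special toolkit: just standard Web plumbing, which is really the whole point of WOA.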


So, going back to the mainframe, my conclusion is the same as last time, but strengthened and reinforced by Steve Craggs' comments. The mainframe can not only look after your data, it can also make applications available across the spider’s web I described in my earlier blog.

Sunday 20 July 2008

Sometimes small isn’t beautiful


We all know that z/OS runs on a mainframe, and so do z/VSE and z/VM. So what would ordinary punters do if they could run their favourite mainframe operating system on another cheaper platform? What would IBM do?


We found part of the answer recently when IBM bought a company called PSI (Platform Solutions Inc). If you’ve not heard of PSI, it was a company that originally sold Hewlett-Packard’s Itanium-based Superdome servers with an emulator that allowed mainframe software to run on them. PSI had been on the receiving end of an IBM patent-infringement lawsuit since 2006, and in 2007 it countered with its own suit claiming that IBM was abusing its mainframe monopoly to keep out competitors. The consequence of all this was that IBM bought PSI for an undisclosed sum.

Two interesting things come out of this. The first is about the money. It’s known that Microsoft invested $37.5 million in PSI last November. Also, HP tried to buy the company for $200m or thereabouts. So some big money must have changed hands – although fees for lawyers may well have been reaching astronomical levels for PSI, which had also filed an anti-trust complaint in the European court in December 2007. So how much did IBM pay for PSI? The Internet is abuzz with figures of $260m.

The second interesting point is what IBM will do with the Itanium stuff. Up till now, IBM has not had any Itanium-based servers in its range, so it could mean that IBM ends up rebadging servers from HP.

The other off-mainframe choices for people wanting to run mainframe operating systems are Hercules and Flex-ES. Hercules is an open source emulator that runs on Linux and Windows boxes. Its users are now fairly secretive about what they do, in case IBM and its lawyers get tough. Flex-ES, from Fundamental Software, is also a mainframe emulator, and again they are in dispute with IBM.

As I’ve mentioned in the past, I believe that IBM is killing off these emulators so it can sell its own – as a way of bringing on board potential new mainframe customers. IBM claims to have won around 200 new customers for its mainframes since 2000 – or figures of that order. The emulator market could help it increase that figure hugely. We shall see.

In the case of PSI, it does look like Schumacher was wrong in 1973 when he said, "Small Is Beautiful". It looks like small companies like PSI (and perhaps Hercules and Flex-ES) are treated like irritating warts by IBM, and money is thrown at them until they disappear!

Sunday 13 July 2008

Mainframe futures?

I was thinking about the direction that mainframe evolution is moving in, and I came to the conclusion that it will definitely survive well into the foreseeable future. In fact, it will even take on a more important and strategic role within organizations. And it will do this by embracing distributed platforms and the Internet.

So how do I come to this seemingly contradictory conclusion? If everyone is busy working on laptops and running Linux servers, where does the mainframe fit into this scenario? Surely I’m misreading the signs!?!

But that’s the very reason that mainframes will be able to survive. It’s because so many users are working away on laptops that they need a central reliable repository for their bet-the-business data. A device where security and performance issues were solved decades ago, and continue to improve. And the mainframe needs these distributed platforms in order for it to survive. Think of a giant spider at the centre of an enormous web, and you have a picture of the computing environment of the future.

I was talking a little while ago about Enterprise Information Integration (EII) and how users need to make use of information from different data sources in real time in order to make the best decisions. I also wrote a couple of weeks ago about a lingua franca for translating (migrating) data from one format to another. This translation can be done off the mainframe, and a distributed platform is a sensible place to do it. But the data itself needs to be stored on the mainframe because of all the security and performance characteristics the mainframe possesses. CONNX Solutions has this kind of product.
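As a very rough illustration of the sort of translation I mean (the record layout and field names here are invented, and a real product would drive this from the actual copybook or schema), here is a Python sketch that turns a fixed-width EBCDIC record into something a distributed application can use:

RECORD_LAYOUT = [           # (field name, offset, length) - invented for illustration
    ("account_no", 0, 8),
    ("surname",    8, 20),
    ("balance",   28, 9),   # numeric, two implied decimal places
]

def translate_record(raw):
    # cp037 is a common US EBCDIC code page supported by Python's codecs
    text = raw.decode("cp037")
    record = {}
    for name, offset, length in RECORD_LAYOUT:
        record[name] = text[offset:offset + length].strip()
    record["balance"] = int(record["balance"]) / 100.0
    return record

if __name__ == "__main__":
    sample = ("00012345" + "SMITH".ljust(20) + "000012999").encode("cp037")
    print(translate_record(sample))

A real EII or data-migration tool would of course also handle packed and zoned decimal fields, code-page differences, and so on, but the principle is the same: do the translation off the mainframe and keep the data of record on it.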

The same is true with BPEL (Business Process Execution Language), which I also blogged about a few weeks ago. BPEL is important for sites that are interested in SOA and want to integrate their mainframes with their distributed systems in a way that allows scaling up – rather than building small point-to-point solutions that don’t integrate or scale. Again, the mainframe can and should be used as the main data source. A product that makes use of industry standards for this is available from DataDirect.

What about database auditing? What’s the best platform for that? Again, an interesting choice is to perform the auditing off the mainframe, but keep your data on the mainframe. Products like Guardium for Mainframes can ensure that users are able to audit their mainframe database’s activities and comply with the latest regulations, without impacting on the performance of the database.

The same thinking can apply to database archiving. You still want the database on your mainframe, but the archiving operation and the archived data can be on a distributed platform. NEON Enterprise Software has TITAN Archive, which can look after mainframe database archiving from a Linux platform.

These are just four (of many) different products currently available for different aspects of mainframe computing, but they illustrate the fact that the mainframe can run more efficiently with some activities taking place off-mainframe. But it’s only a short step (in terms of the thinking process) to realise that storing the active data on the mainframe in the first place is a really sensible approach. From the mainframe it can be made available over the Internet or locally to employees, customers, business partners, and whoever else it makes sense to share it with. And there’s an enormous lacuna in your computing environment if you don’t have a mainframe (with its many benefits) at the centre. The mainframe can be the heart of your computing environment – the spider at the centre of an intricate web.

Sunday 6 July 2008

Lookout PST files

As you know, I am not Microsoft’s biggest fan and I am always encouraging people to look at alternatives. Our local secondary school could save a fortune each year if it didn’t pay for Microsoft licences and used Linux instead of Windows and OpenOffice instead of MS Office. While I think the Microsoft products are satisfactory in their way, I’m just amazed at how many people can’t picture a world outside of Microsoft products. I ask whether they have tried Web-based applications, but too many are Microsoft zombies, or else argue that they need to use the products at work and so find it convenient to use them at home.

And this, in a roundabout sort of way, is how I’ve ended up discussing Microsoft Outlook PST files in this week’s blog.

Microsoft Outlook stores messages, contacts, appointments, tasks, notes, and journal entry data in Messaging Application Programming Interface (MAPI) folders. These MAPI folders can be stored either in a mailbox located on a Microsoft Exchange Server (if you have one) or in a personal folder (.pst file) on a local hard disk. PST stands for Personal Storage Table.

With Outlook 97 the PST file was encoded in ANSI format and had a maximum file size of 2GB. With Outlook 2003 and 2007 the PST file is encoded in Unicode format and the format supports a maximum size of 33TB, although Outlook itself imposes a much lower default limit. PST data can be imported or exported using the File/Import and Export… menu item.
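Out of curiosity, here is a rough Python sketch (based on the reverse-engineered PST header layout, not on any supported Microsoft API) that checks whether a given PST file is in the old ANSI format or the newer Unicode format. The version word at offset 10 is 14 or 15 for ANSI files and 23 for Unicode files.

import struct
import sys

def pst_format(path):
    with open(path, "rb") as f:
        header = f.read(12)
    if header[:4] != b"!BDN":            # every PST file starts with these magic bytes
        return "not a PST file"
    (version,) = struct.unpack_from("<H", header, 10)   # little-endian version word
    if version in (14, 15):
        return "ANSI PST (2GB limit)"
    if version == 23:
        return "Unicode PST"
    return "unknown PST version %d" % version

if __name__ == "__main__":
    print(pst_format(sys.argv[1]))       # e.g. python pstcheck.py Outlook.pst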

Microsoft recommends that personal folders be used only by Outlook users who access Internet mail alone (ie who do not use an Exchange Server). Microsoft also recommends that, apart from mobile users who need to access data while offline, Exchange users should store all messages within the Exchange information store rather than in a personal folder.

A big problem users have with PST files is recovering from some kind of file corruption. One way to recover is to use a back-up copy of the file. However, if PST files are stored locally, there is a strong chance that they won’t be backed up. If they are stored on a network server, they won’t be backed up if they are open – ie if a user stays logged on overnight. Another problem with using a network server is performance: if lots of users update their PST files, the volume of messages sent and received can place a huge load on the network.

Another way to recover PST data is to use the Inbox Repair Tool (SCANPST.EXE), but this cannot always recover all the data. The Inbox Repair Tool rewrites the PST file’s header information, and deletes anything in the file it can’t understand. This means it’s good at repairing a damaged file header, but less good at recovering damaged data.

There is also a performance issue with PST files and Outlook 2007. Microsoft Knowledge Base article 932086 says that users may experience performance problems when working with items in a large PST file or in a large OST file. It goes on to suggest that part of the reason for the slowdown is a new internal structure for PST and OST files that increases the amount of disk access. The patch Microsoft released to address this optimizes the way writes are committed to OST and PST files. Microsoft originally recommended that users keep the default PST file as small as possible, eg using AutoArchive to move out messages older than a few months.

An OST file, by the way, is an Outlook offline folder file. Users can work offline and then synchronize any changes with the Exchange server when they next connect.

So there we are, in this my 101st blog, I have turned to the dark side and talked about something to do with a Microsoft product. What next, Silverlight and XAML??