Monday, 29 September 2008

IMS – still life in the old dog!

IBM has recently announced the IMS Version 11 beta programme. What this means is that the latest version of IMS can be tested in real environments to iron out any wrinkles before the product is made generally available in the fourth quarter of next year.

IMS, which stands for Information Management System, first saw the light of day back in August 1968. Depending on who you speak to, it was designed either as a way for IBM to sell more disk capacity or to help Rockwell and Caterpillar with their Bills of Materials for the space programme. It was probably both.

IMS comes in two parts – database management and transaction management – and both parts have been updated in V11. Enhancements to the database manager include:
  • IMS Open Database support offers direct distributed TCP/IP access to IMS data, providing cost efficiency, enabling application growth, and improving resilience (see the Java sketch after these lists).
  • Broadened Java and XML tooling eases development and access of IMS data.
  • IMS Fast Path Buffer Manager, Application Control Block library, and Local System Queue Area storage reduction utilize 64-bit storage to improve availability and overall system performance.
  • Enhanced commands and user exits simplify operations and improve availability.

Enhancements to Transaction Manager include:
  • IMS Connect (the TCP/IP gateway to IMS transactions, operations, and now data) enhancements offer improved IMS flexibility, availability, resilience, and security.
  • Broadened Java and XML tooling eases IMS application development and connectivity, and enhances IMS Web services to assist developers with business transformation.
  • Enhanced commands and user exits simplify operations and improve availability.
  • IMS Application Control Block library and Local System Queue Area reduction utilize 64-bit storage to improve availability and system performance.
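
To give a flavour of what Open Database means in practice, here’s a minimal Java sketch of the kind of code it enables. This is illustrative only – the driver class name, the connection URL format, and the HOSPITAL/PATIENT names are my own assumptions for the example, so check the IMS V11 documentation for the real details.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ImsOpenDatabaseDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical driver class and URL format - the point is
            // simply that a distributed client can now reach IMS data
            // over TCP/IP using the standard JDBC programming model.
            Class.forName("com.ibm.ims.jdbc.IMSDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:ims://mainframe.example.com:5555/HOSPITAL");
            try {
                Statement stmt = conn.createStatement();
                // The driver maps SQL onto the hierarchical segments
                ResultSet rs = stmt.executeQuery(
                        "SELECT PATNUM, PATNAME FROM PATIENT");
                while (rs.next()) {
                    System.out.println(rs.getString("PATNUM") + " "
                            + rs.getString("PATNAME"));
                }
                rs.close();
                stmt.close();
            } finally {
                conn.close();
            }
        }
    }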

IMS is a hierarchical database, unlike DB2, Oracle, SQL Server, and the other relational databases. Because applications navigate a predefined hierarchy of segments rather than joining separate tables, data can be retrieved exceptionally quickly, which is why IMS is used at most of the large financial institutions and other large organizations.
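
To see why hierarchical retrieval is fast, here’s a simplified sketch – my own illustration in Java, not actual IMS code. Each child segment is stored with (or pointer-chained to) its parent, so fetching a customer and its orders is a single top-down walk rather than a join:

    import java.util.ArrayList;
    import java.util.List;

    // A simplified picture of a hierarchical record: the root segment
    // "owns" its child segments directly, so no join is ever needed.
    class OrderSegment {
        String orderId;
        OrderSegment(String orderId) { this.orderId = orderId; }
    }

    class CustomerSegment {              // the root segment
        String custId;
        List<OrderSegment> orders = new ArrayList<OrderSegment>();
        CustomerSegment(String custId) { this.custId = custId; }
    }

    public class HierarchyDemo {
        public static void main(String[] args) {
            CustomerSegment cust = new CustomerSegment("C001");
            cust.orders.add(new OrderSegment("O42"));
            cust.orders.add(new OrderSegment("O43"));

            // Retrieval is a direct walk down the hierarchy - the
            // database equivalent of following pointers, rather than
            // matching keys across tables in a join.
            for (OrderSegment o : cust.orders) {
                System.out.println(cust.custId + " -> " + o.orderId);
            }
        }
    }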

There is an IMS user group on the Web at www.virtualims.com, and its next virtual meeting is on the 7th October. Anyone wishing to take part can join the user group (it’s free) and will be sent the appropriate joining details. The speaker at the next meeting is BMC’s Nick Griffin. He says that autonomic computing is not a new approach to the problem of effectively managing database systems, but it has begun to evolve. To be self-managing, an autonomic database management system must understand key aspects of its workload, including its composition, frequency patterns, intensity, and resource requirements. In the presentation, he will examine what autonomic database computing is and where software can help.

Full details of IBM’s IMS announcement are available from http://www-01.ibm.com/common/ssi/index.wss?DocURL=http://www-01.ibm.com/common/ssi/rep_ca/8/897/ENUS208-258/index.html&InfoType=AN&InfoSubType=CA&InfoDesc=Announcement+Letters&panelurl=index.wss%3Fbuttonpressed%3DNAV002PT090&paneltext=Announcement+letter+search.

Monday, 22 September 2008

InfoSphere Information Server

I want to talk about IBM’s InfoSphere Information Server Data Integration (DI) software this week – especially as the latest version becomes available this month – because it now comes with so many mainframe-related features. But what is InfoSphere, I hear you say? It’s more than a marketing scheme to sell WebSphere and DB2 data integration products that were once sold separately. 

The story really starts five or six years ago, when IBM bought CrossAccess for its connectivity products. Later, IBM bought Ascential Software for its ETL (Extract, Transform, Load) software, and then DataMirror for its CDC (Changed Data Capture) and replication software. All of these fed into the WebSphere data integration stack.

In previous blogs I’ve talked about zIIP and zAAP specialty engines and how there is now software available from a number of vendors that makes use of these processors. Well, InfoSphere’s DI features also make use of these specialty engines.

InfoSphere Information Server now comes with expanded SOA and grid support. Grid environments get new automation tools for managing and optimizing Information Server when it is deployed across large-scale server farms.

According to Michael Curry, director of product strategy and management for IBM's Information Platform and Solutions, the existing InfoSphere product is SOA-able, but some interfaces are more SOA-able than others! Curry added: “Everything that you create with Information Server can be published as a service within an SOA. That’s where we are right now. But what we’ve added in this release is stronger support for security – so we’ve added support for WS-Security standards. We’ve also added support for Web 2.0 interfaces, so you can create mash-ups that consume data from InfoSphere Information Server.” There’s support for Web 2.0 services using REST and RSS standards, as well as support for direct service publishing from Oracle databases.
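
By way of illustration, a mash-up client consuming one of those REST services might look something like the fragment below – the endpoint URL and the shape of the response are my own assumptions for the example, not taken from the announcement.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RestMashupDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical REST endpoint published from Information
            // Server; the real URL would come from the service's
            // publication step.
            URL url = new URL("http://infoserver.example.com:9080"
                    + "/services/customers?format=xml");
            HttpURLConnection conn =
                    (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");

            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // feed into the mash-up layer
            }
            in.close();
            conn.disconnect();
        }
    }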

Perhaps the big news about the announcement is the enhanced and improved mainframe access. It’s now much easier for programmers to expose legacy VSAM data sources as services. This advance builds directly on IBM’s acquisition of CrossAccess. Other important enhancements include the addition of CDC capabilities (thank you DataMirror) and VSAM-to-VSAM replication capabilities (again, thank you DataMirror). Users are now able to access VSAM data directly through Web services.

For IMS users (who I’m sure are all members of the Virtual IMS Connection user group at www.virtualims.com), the product now offers IMS change data capture.

Enhancements announced for IBM InfoSphere Classic Replication Server include stand-alone replication for a subset of mainframe data sources, both to ensure high availability of critical information and to offload query processing to a secondary environment for improved system performance.

Also announced was IBM InfoSphere Classic Data Event Publisher, which provides a file-based interface to InfoSphere Information Server in addition to the existing IBM WebSphere MQ interface for changed-data delivery. The enhancements enable lower-latency capture of source data and ensure replication accuracy, IBM claims.
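
For readers who haven’t seen the MQ side of this, the sketch below shows the general shape of a Java client draining change events from a delivery queue. The queue manager and queue names are invented for the example, and the real message format is defined by the Event Publisher documentation.

    import com.ibm.mq.MQC;
    import com.ibm.mq.MQGetMessageOptions;
    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;

    public class ChangeEventReader {
        public static void main(String[] args) throws Exception {
            // Invented names - substitute your own queue manager and
            // the queue the Event Publisher has been set up to feed.
            MQQueueManager qmgr = new MQQueueManager("QM1");
            MQQueue queue = qmgr.accessQueue("CDC.DELIVERY.QUEUE",
                    MQC.MQOO_INPUT_AS_Q_DEF);

            MQGetMessageOptions gmo = new MQGetMessageOptions();
            gmo.options = MQC.MQGMO_WAIT;
            gmo.waitInterval = 5000; // wait up to five seconds

            MQMessage msg = new MQMessage();
            queue.get(msg, gmo);     // throws if nothing arrives in time

            // Each message carries one or more captured source changes
            String payload = msg.readString(msg.getMessageLength());
            System.out.println("Changed-data event: " + payload);

            queue.close();
            qmgr.disconnect();
        }
    }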

Monday, 15 September 2008

System z ideas come to your laptop!

Two weeks ago I blogged about desktop virtualization and said how it worked best when it used concepts from the mainframe – and this week I’d like to continue that theme.

Although VM/370 was first released in 1972, there had been earlier versions – CP-40/CMS and CP-67/CMS – available since 1967, which makes IBM’s VM operating system – or, to give it its proper name, hypervisor – pretty old. 1980 saw the release of VM/SP, 1988 saw VM/ESA, and 2000 saw z/VM – the current version. What made VM so clever and so useful to customers was that it allowed other operating systems to run under it as if they were running natively (ie directly on the hardware). So rather than running one copy of MVS on one set of hardware, it was possible to run two or three (or more) copies of MVS, all of which appeared to be running natively, on a single set of hardware.

Which means that much of the virtualization technology available on servers and the desktop is pretty much equivalent to mainframe technology in 1972 or thereabouts! It doesn’t seem quite so clever any more, does it?

IBM, of course, moved the concept of virtualization forward by including the hypervisor code with the hardware, producing what became known as the Processor Resource/Systems Manager (PR/SM). The IBM Web site defines PR/SM as a type 1 hypervisor integrated with all IBM® System z™ models that transforms physical resources into virtual resources so that many logical partitions can share the same physical resources. Effectively, the VM operating system is now included in microcode in the hardware, which means that the hardware can be split into partitions, each of which can run an operating system. So users no longer need VM itself to virtualize their hardware; it’s built into the hardware.

Can servers do that? The answer is yes and no – or really no, but there are some early attempts at a yes! An organization can now buy a server with an embedded hypervisor sitting on a memory card. When the server is first booted up, a menu-driven configuration process starts that results in the hypervisor being loaded and ready to accept guest operating systems. I guess that moves the timeline for off-mainframe virtualization up to about 1986!

And while we’re talking about virtualization, you may have noticed that Sun Microsystems has announced Version 2.0 of Sun xVM VirtualBox, its free open source desktop virtualization software. And for people who aren’t sure they really know how to get the best out of the software, or what to do when they hit a problem, Sun also announced the Sun xVM VirtualBox Software Enterprise Subscription, which offers 24x7 support at a price.

Sun xVM VirtualBox supports Windows (including Vista), Linux (including 64-bit versions), Mac OS X, Solaris, and OpenSolaris. Sun claims that virtual desktops are the future of business desktops because they are more flexible, manageable, and secure than traditional PC architectures. It also claims that the xVM VirtualBox platform provides organizations with an easier way to deliver a standard operating environment across their enterprises. The xVM VirtualBox download comes in at around 20MB, for those of you thinking of trying it. There’s more information on the Sun Web site at www.sun.com/xvm.

It won't be long until the world of desktops and servers enters the 1990s – as defined by mainframes.

Monday, 8 September 2008

Whatever happened to CICS Update?

The CICS Listserv at LISTSERV@LISTSERV.UGA.EDU had a message on it recently from John Klavon with the subject, “Has anyone received CICS Update Manuals Lately”. He goes on to say, “CICS Update is a Xephon magazine... I have been trying to contact them for a long time since we have not been receiving the book. Does anyone know anything regarding this company?”

John almost immediately got a reply from Scott McFall of ProTech Professional Technical Services, who said: 
“Xephon unfortunately is no more :-(
I still carefully hoard my various copies of the "The Handbook of IBM Terminology" (circa '90-98).....a bible for z/OS sales/bus dev guys like me!
I have been in touch with some former Xephon employees recently, Trevor Eddolls and Mark Lillycrop. Mark and Trevor now each have their own companies in the UK and among other things publish The Arcati Mainframe Yearbook (a free download I believe).  Trevor is still publishing books, articles, and online content as well...I believe now for TCI Publications and z/Journal.”

I just wanted to use this blog to fill in some of the gaps in that answer and talk a bit about sharing mainframe information.

CICS Update was the first of the Update publications produced by Xephon. Its first issue appeared in December 1985. It was an A5-sized publication containing about ten articles written by systems programmers for systems programmers, complete with code and JCL, so that the techniques described could be implemented at other sites.

Xephon originally ran seminars and published surveys. The company was set up in 1980 by Chris Bunyan, Dave Bates, and Jeff Hosier (who created The Handbook of IBM Terminology – mentioned above). Chris sadly died in 2004. There’s a tribute to him at http://www.itindepth.com/Chris%20Tribute.htm. In addition to the Update journals, Xephon also produced Enterprise Middleware, The Mainframe Market Monitor (now published by Arcati – www.arcati.com), IBEX surveys, News IS, and Insight IS. It also produced the original Dinosaur Myth – debunking the thinking that mainframes would disappear and that alternatives were hugely cheaper.

Other Updates that were produced over the next 20 years included: AIX Update, DB2 Update, Domino/Notes Update, NT Update, Oracle Update, RACF Update, TCP/SNA Update, VM Update, VSAM Update, VSE Update, Web Update, WebSphere Update (formerly MQ Update), and z/OS Update (formerly MVS Update).

In 2004, the Update publications were sold to TCI Publications – the people who produce z/Journal. I carried on editing them until 2007. Then in March 2008 publication ceased.

The problem with the Updates was that they were subscription journals, ie people had to pay up-front to receive their monthly issue. Nowadays, I guess, when people have a problem with an application or with implementing an idea, they just Google it and get free access to information off the Internet. Having said that, there are still a number of people, like John Klavon, who miss receiving their monthly dose of CICS information – information that could stimulate ideas about what might be done at their site, solve a nagging back-of-the-mind query, or just tell them more about a particular CICS-related topic.

So, the question I’d like to pose is this: would anyone be interested in subscribing (or contributing articles) to a new monthly journal that would be similar in nature to CICS or any of the other Update journals? If you want to get in contact with me directly, my e-mail is trevor@itech-ed.com. I look forward to hearing from you.

Monday, 1 September 2008

Desktop choices

Now I like to think of myself as a mainframer – someone who appreciates all that mainframes have to offer in the world of computing. However, like everyone else who works with mainframes, I also use PCs. In the past, I have tried to persuade readers of this blog that there are far more choices than just Windows and Office on their PCs. In fact, I have sometimes suggested Macs as alternatives, and I have definitely extolled the virtues of Linux. This week, I’d like to examine a slightly different alternative – desktop virtualization.

You are probably all aware of the success of server virtualization as a way of letting the IT department regain control of its servers. Rather than having hundreds of boxes (and at some sites that was exactly the position!) each doing a bit of important work, with nobody quite sure how much work that actually was, virtualization has allowed sites to make full use of each server and to know exactly what percentage of its capacity each virtual server is using. This has allowed sites to use less space and less electricity, and to claim to be leaner and greener while at the same time saving money.

Now, desktop virtualization looks set to make a similarly major impact. How do I know? Well, the US-based Enterprise Strategy Group found that 32% (about a third) of the 700 companies it surveyed were already piloting some kind of desktop virtualization technology. In addition, Gartner has predicted that desktop virtualization will become mainstream by 2010; and IDC is predicting that a year later (2011) desktop virtualization software will be a $2 billion global market.

The problem with PCs is that they are costly and buggy, and there aren’t enough expert staff around to support them. In addition, they have big question marks over security, and managing them is extremely difficult. What’s really needed is something like a central mainframe with terminals attached to it!!!

One attempt at this model is the thin-client approach used in task-based environments, where Windows Terminal Services or Citrix’s ICA protocol has been used. These deployments have delivered significantly improved returns on investment, with quoted figures of between 20% and 40%. Bear in mind that the average cost of a PC across its lifetime is reckoned to be 10 times its purchase price.

Virtual Desktop Infrastructure (VDI) is set to expand beyond the limits we’ve seen up to now. In future, all the advantages of centralized security, scalability, control, and management (and cost savings) will be combined with the advantages of highly personalized PCs. In effect, each user will have available only those applications that they actually use, rather than a whole range of software that different users at different times might want.

In this environment, PC images become much more manageable, allowing administrators much greater control over them and letting them meet end users’ needs more easily. End users are meant to enjoy a better IT experience than they do now – although we shall see how true that claim is in a few years’ time! The big advantage is that organizations make major savings on the operational costs of their PCs – and this is in addition to the gains in security, improved service levels, and the ease of deploying new applications.

So, what company names should you look out for in the VDI space? IBM, HP, VMware, Citrix, and smaller players like ClearCube, Parallels, RingCube, and Teradici – and I’m sure there are others. Plus Microsoft, which isn’t a player at the moment, but will surely enter the market at some point in the next few years.

It’s good to see mainframe ideas pervading the PC world!