Sunday, 29 November 2009

Clouding your thoughts

Cloud computing has received a number of boosts quite recently, and I thought I’d just run them down for you.

IBM made absolutely sure you knew their latest announcement was from them by calling it IBM Smart Business Development and Test on IBM Cloud (I usually put the acronym for a new product in brackets just after I give its full name, but this time I’ll just leave it to you to work it out). This gives customers a free public cloud beta for software development. Get in early, because the beta will be open and free until general availability (sometime in early 2010).

IBM also announced the IBM Rational Software Delivery Services for Cloud Computing. This includes a set of ready-to-use application life-cycle management tools for developing and testing in the IBM Cloud.

Microsoft also has its head in the clouds and has announced Azure as a way to bridge the gap between desktop and cloud-based computing. I guess the allure of Azure is that applications developed using a common Windows programming model can now run as a cloud service.

Also clouding the distinction between what's a desktop application and what isn't, we've got the new Chrome OS. It just assumes that all applications are JavaScript-based Web applications that you use from a Web browser. So, effectively, every application is running in the cloud. And once a connection to the Internet is established, Chrome OS automatically synchronizes data using cloud storage.

Now I've read people commenting on how Chrome OS will hit low-end Windows machines, but my guess is that it will actually hit Linux netbooks. Don't get me wrong, I'm a big fan of Linux and all things Open Source, but the real reason for running Linux on a netbook is that Vista is too memory hungry. OK, I know XP and Windows 7 are much better than Vista (or should I say, much, much better!), but running Chrome OS on a netbook takes away the need to run Linux, and would appeal more to people keen to experiment. Just a thought.

Going back to IBM: the company has developed the world’s largest private smart-analytics cloud-computing platform – codenamed Blue Insight – which combines the resources of more than 100 separate systems to create one large, centralized repository of business analytics. According to IBM, “cloud computing represents a paradigm shift in the consumption and delivery of IT services”. Blue Insight has allowed IBM to eliminate multiple Business Intelligence systems that were performing more-or-less the same ETL (Extract-Transform-Load) processes for different user groups.

Gartner Research are big fans of cloud computing, telling us that: “The use of cloud computing in general has been spreading rapidly over the past few years, as companies look for ways to increase operating efficiency and data security. The cloud industry, which is in its infancy, will generate $3.4 billion in sales this year.”

Merrill Lynch reckons that by 2011 the cloud computing market will reach $160 billion, including $95 billion in business and productivity applications. With that kind of money around, it’s no wonder that IBM and Microsoft are keen to get some of it.

And finally on this topic, IBM has announced a program designed to help educators and students pursue cloud-computing initiatives and better take advantage of collaboration technology in their studies. They’re calling it the IBM Cloud Academy. IBM provides the cloud-based infrastructure for the program, with some simple collaboration tools.

This is where I do the “every cloud has a silver lining” joke – or not!

Sunday, 22 November 2009

Guest blog – Shadow ROI

This week, for a change, I’m publishing a blog entry from DataDirect’s Jeff Overton, product marketing manager for Shadow. Jeff looks at the return on investment for Shadow users through the Mainline/DataDirect TCO calculator.

For horizontal technologies like integration, it is difficult to quantify Return On Investment (ROI) because they underpin many business systems. What can be quantified is the Total Cost of Ownership (TCO). Nowhere is TCO more important than on mainframes, where a single IBM System z10 can have the capacity of 1,500 or more Intel servers. Licensing hardware and software on such an enterprise scale can be costly, so there is, and always has been, a need to manage this capacity and understand how its resources, such as processor time, are being allocated.

IBM’s line of specialty engines, including the System z Integrated Information Processor (zIIP), is designed to help lower mainframe TCO by processing qualified workloads rather than having that work run on a General Purpose Processor (GPP). These engines are just like a GPP except:
  • Their capacity is typically not counted when software licensing fees are calculated from mainframe capacity.
  • Their processing speed is not governed; that is, they run at full speed all the time.
  • Their processing capacity is enormous – a single zIIP engine for an IBM System z10 machine has a capacity of 920 MIPS.
At Progress DataDirect we recognized the potential TCO savings from these engines, and four years ago re-architected DataDirect Shadow, our single unified platform for mainframe integration, to exploit them. In 2007 we introduced the first generation of that effort, and earlier this year the second generation. Today we can offload up to 99% of the integration processing performed by the product. It is important to note that our implementation is in strict accordance with ISV use of the zIIP and does not cause IBM or any other third-party code to become zIIP-enabled. The market reception to using zIIP specialty engines to legally reduce integration costs has been extremely positive.

However, IT decision makers asked us to provide estimates of the potential savings based on THEIR workloads. In response we partnered with Wintergreen Research, a well-respected analyst firm specializing in TCO/ROI analysis, to deliver a Web-based calculator. The calculator models the potential capacity savings, measured in MIPS, as well as the monetary savings. It uses what is called the Willhoit constant, named after Gregg Willhoit, our Chief Software Architect, who developed the algorithm that estimates the offload of DataDirect Shadow processing. Today the calculator covers two processor-intensive types of integration processing: Web services and SQL. In as little as an hour, our field-engineering team can quantify the savings using your workload profile for:
  • Number and type of Web services (requester or provider) or SQL statement type (join, aggregate, etc.)
  • Size of SOAP payload or estimated result set size for SQL
  • Invocations over a modelled timeframe – such as per day or per peak period – to help model peak capacity requirements
  • Cost per MIPS. Because cost can be calculated differently, the model offers the option to use the comprehensive Wintergreen mainframe-costing model, which includes hardware, software, data centre, and labour costs, or a way to use your own numbers.
Using these core metrics, the capacity and monetary savings are presented immediately. The calculator goes further by modelling this workload over an additional five years, and provides parameters to account for changes in workload, mainframe capacity, and MIPS costs. This is a great, low-investment way to get a clear picture of the costs over the typical five-year planning horizon that many organizations rely on.
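As an illustration only, here is a minimal Python sketch of the kind of arithmetic such a calculator performs. The function names, the 99% offload figure, and the growth and cost parameters below are my own assumptions for the sake of the example – this is not the actual Wintergreen/Willhoit model:

```python
# Hypothetical sketch of a zIIP-offload savings model.
# The real Wintergreen/Willhoit calculator is proprietary; this simply
# illustrates the shape of the arithmetic described above.

def offload_savings(workload_mips, offload_fraction, cost_per_mips):
    """MIPS and monetary savings for one year of a given workload."""
    mips_saved = workload_mips * offload_fraction
    return mips_saved, mips_saved * cost_per_mips

def five_year_projection(workload_mips, offload_fraction, cost_per_mips,
                         workload_growth=0.10, cost_change=-0.05):
    """Project savings over a five-year planning horizon, letting the
    workload grow and the cost per MIPS change each year."""
    results = []
    for year in range(1, 6):
        mips_saved, money_saved = offload_savings(
            workload_mips, offload_fraction, cost_per_mips)
        results.append((year, round(mips_saved, 1), round(money_saved, 2)))
        workload_mips *= 1 + workload_growth
        cost_per_mips *= 1 + cost_change
    return results

# Example: 500 MIPS of integration work, 99% offloadable, $2,000 per MIPS
for year, mips, dollars in five_year_projection(500, 0.99, 2000):
    print(f"Year {year}: {mips} MIPS saved, ${dollars:,.2f}")
```

The real model is, of course, far richer; the point is simply that a handful of inputs – workload, offload fraction, cost per MIPS, and a planning horizon – are enough to produce a first-cut savings picture.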

In as little as an hour, IT can be in a much stronger position to provide detailed and accurate information supporting ROI analysis – not only of mainframe integration investments, but also of the potentially large MIPS dividend available to the entire mainframe from using zIIP specialty engines to process up to 99% of the integration processing performed by Progress DataDirect Shadow.

Thanks Jeff for being our first guest blogger. And remember, there's still time to complete the mainframe user survey or a vendor entry for the Arcati Mainframe Yearbook 2010.

Sunday, 15 November 2009

GSE conference

I was lucky enough to attend the Guide Share Europe National Conference on 4th and 5th November at Whittlebury Hall. This pulled together lots of mainframers, who were very interesting to talk to – including three young lads who are mainframe apprentices! – plus numerous excellent speakers. There were also a number of vendors there in the exhibition area who were keen to chat and pass on information about their new products – which was also very informative.

I managed to have a long chat with NEON’s Tony Lubrano who gave a presentation in the New technologies stream on zPrime. He explained how zPrime 1.2 now includes an Enablement Console, making it easier for users to select the applications they want to move from the central processor to the zIIPs or zAAPs. There’s also an LE (Language Environment) Initialization Exit feature that automates the task of enabling LE-compliant applications to migrate to the specialty engines. Tony explained how these requirements had come from users and had been delivered in the new release.

The people from Innovation Data Processing were keen to talk about their core FDR products, plus the newer FDRERASE, FDRERASE/OPEN, FDRVIEWS, FDRMOVE, and FDRPAS.

I had an enjoyable catch-up with the team from Compute (Bridgend) who demonstrated their new SELCOPY/i, which is part of SELCOPY or CBLVCAT and provides multiple windows for user action, producing what they call a “mainframe desktop”. It’s worth checking the huge number of facilities on their Web site.

I was surprised to find mainframe companies I didn’t know. There was Thesaurus, which offers products, consultancy, and managed services, and has expertise with mainframe Linux. There was EZLegacy, with EZSource, their application-oriented configuration management database. There were two EPV products: EPV for z/OS and EPV for DB2. Olga Henning represented Blue Sea Technology. Stephen Golliker represented Higobi.

There were many other exhibitors who were friendly and helpful in discussing their products.

But I didn’t really go for the exhibitors, I wanted to see some of the presentations. There were streams for CICS, IMS, DB2, Enterprise security, zLinux, Large systems working group, Network management working group, Software asset management, and New technologies.

I was particularly interested in the IMS stream – because of my work with the Virtual IMS Connection user group – and managed to see an excellent presentation by IBM’s Alan Cooper on “Rock solid security in the post-SMU era”. I also sat in on the “Birds-of-a-feather” session to see how real IMS users are finding the product, and particularly what difficulties they have to overcome in their environments.

It was an excellent event. It was well-organized and run. It was in a lovely location. And everyone I spoke to was friendly and helpful, and keen to talk mainframe technical talk. Many thanks to the organizers for setting up such an excellent event, and to Mark Wilson who was conference manager for this year’s conference.

BTW: if you like this blog, go to Look for “Individual IT professional male”, then use the drop-down menu to find “Mainframe update” and select it. Then go down the page and press “Done” – and you will have voted for my blog. Tell all your friends!

Sunday, 8 November 2009

The big daddy of virtualization just got better

While all those Windows-warriors are talking about Windows 7 and virtualization strategies, the king of virtualization – IBM’s VM software – has seen the release of z/VM Version 6.1.

Microsoft has its desktop virtualization technology, and is up to Version 2 of the Microsoft Desktop Optimization Pack 2009 (MDOP) – the add-on you need for most of the Windows 7 virtualization capabilities – assuming you have the right chip in the first place. The big thing about Windows 7 is that it lets users run their software in XP emulation mode! The App-V (Application Virtualization) client, which is built into MDOP, provides the client side for virtual application launches. Users can click desktop icons to launch a server-based application, which they can use as if it had launched on their own machine. Microsoft Enterprise Desktop Virtualization (MED-V) allows Virtual PC to launch on top of Windows 7 and adds a management capability by linking to Microsoft’s management server and providing the client-side support for policy-based usage controls, provisioning, and delivery of a virtual-desktop image. But enough about that!

Anyway, the new release of z/VM is available only on the IBM System z10 Enterprise Class server and System z10 Business Class server, and future System z servers (z11 and whatever comes next).

According to IBM, z/VM V6.1 offers:
  • Guest LAN and Virtual Switch (VSWITCH) exploitation of the Prefetch Data instruction to use new IBM System z10 server cache prefetch capabilities to help improve the performance of guest-to-guest streaming network workloads

  • Closer integration with IBM Systems Director by shipping the Manageability Access Point Agent for z/VM to help simplify installation of the agent

  • Inclusion of post-z/VM V5.4 enhancements delivered in the IBM service stream.
IBM adds that this release provides the basis for some major future enhancements as indicated by the announced Statements of Direction that include:
  • z/VM Single System Image: IBM intends to provide capabilities that permit multiple z/VM systems to collaborate in order to provide a single system image. This is planned to allow all z/VM member systems to be managed, serviced, and administered as one system across which workloads can be deployed. The single system image is intended to share resources among all member systems.
  • z/VM Live Guest Relocation: IBM intends to further strengthen single system image support by providing live guest relocation. This is planned to provide the capability to move a running Linux virtual machine from one single system image member system to another. This is intended to further enhance workload management across a set of z/VM systems and to help clients avoid planned outages for virtual servers.
CA was quick on the scene, offering Day One support for its many z/VM solutions.

I’m always interested in VM developments – I wrote two books about VM many years ago and still have a soft spot for it. It seems that the big daddy of virtualization is still well ahead of any competitors out there and just keeps getting better.

Sunday, 1 November 2009

A couple of HTML tips

This time, just a couple of Web coding tips for valid HTML.

Have you ever wanted to embed a YouTube video on a page AND have it validate? YouTube allows you to specify the size you want and so on, and then gives you the code using the embed tag – so it looks like this:

<object width="480" height="295">
<param name="movie" value="">
<param name="allowFullScreen" value="true">
<param name="allowscriptaccess" value="always">
<embed src="" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="480" height="295"></embed></object>

If you’re curious, it’s Gavin Bate talking about climbing Everest.

Anyway, the code won’t validate because the embed tag isn’t part of valid HTML. What works is the following:

<object type="application/x-shockwave-flash" style="width:480px; height:295px;" data="">
<param name="movie" value="" />
</object>

Notice that I also added “amp;” after the &s in the URL (writing &amp; rather than a bare &), which valid HTML requires.

My second tip is to do with blob lists inside blob lists, also referred to as nesting bullet points.

You might think that the correct way to code the following:

  • Adam

  • Eve

    • Cain

    • Abel

was like this:

<ul>
<li>Adam</li>
<li>Eve</li>
<ul>
<li>Cain</li>
<li>Abel</li>
</ul>
</ul>

But that is invalid, because a ul element may contain only li elements. It is correctly written with the nested list inside the second li, like this:

<ul>
<li>Adam</li>
<li>Eve
<ul>
<li>Cain</li>
<li>Abel</li>
</ul>
</li>
</ul>
And just talking about blob lists... You do know that you can control what type of blob you get. For example, code:
<ul type="square">

and you’ll get a square blob. You can also use “circle”. For ordered lists, try lower- or upper-case Roman numerals (“i” or “I”), or lower- or upper-case letters (“a” or “A”).

And if you need a new Web site designed and coded, or if you need your tired old one revamped, please contact me at

And don't forget to complete the Arcati Yearbook user survey at