Sunday, 30 September 2007

Compliance, data storage, and Titans

The Titans, in Greek mythology, were originally twelve powerful gods. They were later overthrown by Zeus and the Olympian gods. I'm not talking about them. Nor am I talking about the fictional characters created by Brian Herbert and Kevin J Anderson in their Legends of Dune novels. Today I want to talk about an interesting announcement from NEON Enterprise Software (www.neonesoft.com) called TITAN Archive.

So what makes TITAN Archive more interesting than anything else announced in September? Well, basically, its simplicity and usefulness. It is described as a "database archiving solution", which means that an organization can use it to store structured data for long periods of time. And why should anyone want to do that? Well the answer is compliance.

Regulations are getting stricter in many countries, and companies are now compelled for legal reasons to store large amounts of data for long periods of time. In fact, retention periods could now be anywhere between 6 and 25 years. Many organizations are defining their own retention policies and are looking for ways to implement those policies that are economical, allow data to be recalled quickly and easily (now called e-discovery if it's needed for a court case), and, at the same time, don't affect the performance of their current computing workloads. They are looking for a solution that meets all compliance and legal requirements and can be used in the event of litigation.

At the moment, TITAN Archive works with DB2, but plans are in place for a version for Oracle and one for IMS. Both data and metadata are stored in what's called an Encapsulated Archive Data Object (EADO). The EADO format is independent of the source DBMS (which may very well change at a company in the course of 25 years!) and can be accessed or queried using standard SQL queries or reports – which makes accessing it very easy. The data can be stored for as long as necessary. TITAN Archive can also have a discard policy, which makes sure that data is deleted when it is no longer required for legal or commercial purposes.
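
To give a flavour of what "accessed or queried using standard SQL" means in practice, here's a minimal JDBC sketch of the sort of query an auditor might run against the archive. To be clear, this is an illustration rather than anything taken from NEON's documentation – the connection URL, table name, and columns are all made up.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ArchiveQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical JDBC URL pointing at the archive's SQL interface
        Connection con = DriverManager.getConnection(
            "jdbc:db2://archive-appliance:50000/ARCHDB", "auditor", "secret");

        // An e-discovery style query: pull back orders from a given year
        PreparedStatement ps = con.prepareStatement(
            "SELECT order_id, customer, order_date " +
            "FROM archived_orders WHERE order_date BETWEEN ? AND ?");
        ps.setString(1, "2001-01-01");
        ps.setString(2, "2001-12-31");

        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString(1) + " " + rs.getString(2)
                + " " + rs.getString(3));
        }

        rs.close();
        ps.close();
        con.close();
    }
}

The point is that, to the application, 25-year-old data sitting in an EADO looks like any other table.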

TITAN Archive connects to a storage area network and is managed from a Java interface that could be deployed across the enterprise or secured to a single location. The heart of TITAN Archive is the archive appliance. This is a Linux server that performs all the TITAN Archive processing.

Moving archive data off the mainframe and being able to access it easily, while retaining it for the longer periods of time now required, is a problem many companies face. TITAN Archive seems like a very useful and economic solution to this problem.

Wednesday, 26 September 2007

How Green Was My Valley – and how green are my computers?


How Green Was My Valley is a 1939 novel by Richard Llewellyn and a 1941 film directed by John Ford. It was written and filmed in the days when green was just a colour and not an aspirational lifestyle. I blogged about IBM's green data centre plans a few months ago, but I wanted to revisit this whole issue.

There do seem to be a lot of misconceptions about what's green and what isn't, and the answer often depends on how you look at the issue.

For example, I have heard it said that because flat screens use less energy than cathode ray tubes, we should all (if we haven't done so already) get rid of those old screens and replace them with new flat ones. Apparently that's wrong! Because of the huge amount of energy and resources it takes to create a CRT or a flat screen in the first place, it is, in fact, more energy efficient to use that CRT right up to the moment it fails, and only then change to a flat screen. Although the flat screen is greener per hour of use, the energy it took to extract the raw materials and construct the new screen far outweighs the energy the old one will use in the rest of its life. So we should be using the old device until it no longer works and then change over.

Interestingly, thinking about raw resources, it has been suggested that manufacturing a standard PC consumes around 1.8 tonnes of raw materials.

Another common comment is that recycling computers is a good thing. The idea is that computers contain lots of expensive metals (like gold), so old ones should be stripped down and the expensive metals extracted and reused. Unfortunately, the energy audit for this is quite high. So is there a better alternative? Well yes, or else I wouldn't have mentioned it! There are a variety of companies and charities that will refurbish computers and peripherals. A refurbished PC could be re-sold or it could be shipped to the developing world – both better choices than trying to reclaim the metal from the old PC and then using it in a new one. It's the difference between re-use and recycling.

Storage vendor ONStor recently found that 58% of the companies it surveyed were either still talking about creating a green IT environment or still had no plans to do anything. But with conflicting and confusing messages like these, that isn't completely surprising.


Things like consolidation and virtualization could help reduce power, cooling, and other operational expenses – and these would therefore help reduce energy consumption and carbon dioxide emissions, etc.

Of course, we could all do more. Many sites (and many of my friends' houses) have old machines sitting in cupboards and under unused desks. These could be given to charities and sent on to developing countries. They're certainly not doing anyone any good gathering dust. And even if a computer doesn't work, given two or three machines, enough spare components could be put together to get one that does work – and which would then be put to good use.


Even if we're not concerned with being green, with saving the planet, or helping third-world countries, we are paying the electricity bill. So in terms of simple economics, powering off unused printers and computers and anything else we leave in stand-by mode will save us money and is a way of being green too. I know you can't power off your mainframe, but there are often a lot of laptops left on in offices. Think, how green can your offices be – not just your data centre?!

Office of the future?

It had to happen – I was bound to be sent a DOCX file. This is the new file type associated with Microsoft Office 2007. It's all to do with the Office Open XML format Microsoft is keen on, and, of course, my copy of Office 2000 can't open it. To be fair, Microsoft does have a download that allows Office 2000 to open DOCX files, but it comes with health warnings and caveats, so I haven't tried it.
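
Incidentally, the reason an old copy of Office can't just open the file is that a DOCX isn't a tweaked binary DOC at all – it's a ZIP archive full of XML parts. A quick bit of standard Java shows this; the only assumption is a file called example.docx in the current directory.

import java.io.InputStream;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class PeekDocx {
    public static void main(String[] args) throws Exception {
        ZipFile zip = new ZipFile("example.docx");

        // List the parts of the package: content types, relationships, the body, etc
        for (Enumeration<? extends ZipEntry> e = zip.entries(); e.hasMoreElements();) {
            System.out.println(e.nextElement().getName());
        }

        // The main text of a word-processing document lives in word/document.xml
        ZipEntry body = zip.getEntry("word/document.xml");
        InputStream in = zip.getInputStream(body);
        byte[] buf = new byte[300];
        int n = in.read(buf);
        System.out.println(new String(buf, 0, n, "UTF-8"));  // the first few hundred bytes of XML

        in.close();
        zip.close();
    }
}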

I have wondered in the past about keeping the faith with Microsoft or whether I should go the Open Source route and install OpenOffice etc. Indeed I wrestled for a long time with getting Linux installed permanently on my PC (and not just booting up a distro from a CD every now and again).

So, I read with interest that IBM has decided to join the OpenOffice.org development community and is even donating some code that it developed for Lotus Notes. (Interestingly, Ray Ozzie, who developed Notes, now works for Microsoft.) OpenOffice.org was founded by Sun and works to the Open Document Format (ODF) ISO standard – not Microsoft's Office Open XML (OOXML or Open XML) format.

Apparently, the code that was developed for Notes was derived in part from what was originally Microsoft-developed technology! It seems that IBM's IAccessible2 specification, which makes accessibility features available to visually-impaired users interacting with ODF-compliant applications, was developed from Microsoft Active Accessibility (MSAA). IBM has already donated the IAccessible2 specification to the Linux Foundation. IAccessible2 can run on Windows or Linux and is a set of APIs that makes it easier for visual content in applications based on ODF and other Web technologies to be interpreted by screen readers, which then reproduce the information verbally for blind users.

Luckily, I'm not visually impaired and have no use for this technology myself, but I have a friend who works a lot on making Web sites usable by visually-impaired people, and I have listened with interest while he talks about things I previously took for granted. It is important.

Anyway, even if IBM’s motives are not pure and they secretly hope that OOXML never becomes an ISO standard, making this kind of technology freely available has got to be a good thing.


Maybe we should all take another look at OpenOffice.

Facebook – cocaine for the Internet generation?

It was only a couple of weeks ago that I was blogging about social networks on the Internet and how I thought that Facebook was being colonized by older people, not just students and other youngsters. And now I find that Facebook is being treated by some companies as the most evil thing since the last virus or worm infection!

What's happened is that Facebook has caught on, and a large number of ordinary working people have uploaded photos to it and linked with other "friends". That all sounds rather good – where's the harm in that? Well it seems that these same working people have been seduced by the many "applications" available with Facebook – and I particularly like Pandora (but that's because I listen to www.pandora.com anyway), Where I've been, and My aquarium. But the truth is, there are lots of these applications, such as: FunWall, Horoscopes, Fortune cookie, My solar system, The sorting hat, Moods, Superpoke, Likenesses, Harry Potter magic spells, etc, etc.

The problem for employers is three-fold. Firstly, too many employees are spending too much time interacting with their friends, uploading photos and videos, and messing about with the applications. The "lost" hours of work are mounting up, and so companies are banning access to Facebook. Some are allowing access at lunch times and after the defined working day, but other companies, apparently, have gone for a blanket ban.


The second problem is that a large amount of a company's broadband bandwidth is being used by Facebookers rather than by people doing productive work.


The third problem is that these applications seem to get round corporate firewalls and anti-virus software, with the result that they create a backdoor through which anything nasty could enter. No-one wants a security risk left undealt with.


This must be good publicity for Facebook, making it seem especially attractive – nothing boosts sales of a product like a ban! However, many wiser heads have been here before. I remember the first computer game – the one that was text only, and where a small dwarf threw an axe at you and killed you. Lots of hours were lost to that until the mood passed. More recently, MSN Messenger has been banned at some sites because people spent all day talking to each other on it rather than getting on with work. These things come in phases: work time is lost, then the mood changes, the work is caught up with, and the old hot item is ignored. I would expect to see, this time next year, that Facebook is still popular, but not as compulsive as it is now. People won't need to be banned from Facebook because they will not feel compelled to access it. But, I would bet, there'll be some other must-visit Web site, and we'll be off again!


These things have been compared to crack cocaine and other "recreational" drugs. In truth they can be very compelling for a while, but, unlike narcotics, eventually you want less and less of them, not more and more.

The “dinosaur” lives on

I can still remember those distant days of the 1990s when everyone you spoke to “knew” that mainframes were doomed to extinction, and dates were confidently predicted when the last one would be turned off. These sit alongside, in terms of accuracy, predictions about how many computers a country would need in the future – I think two was the best guess, just one fewer than in my office at the moment!

Not only have the “dinosaurs” lived on, they are continuing to evolve and flourish – as witnessed by this “summer of love” for all things mainframe from IBM. They started with the latest version of CICS (V3.2), then we had the latest DB2 (9.1), and now we have the operating system itself, z/OS V1.9.


In summary, the new release has been enhanced so that typical Unix applications, such as ERP or CRM (which are usually found on mid-range machines at the moment), can be ported to z/OS more easily.


There have also been upgrades in terms of security and scalability. With improved network security management tools, it's now easier to set consistent network security policies across distributed systems that communicate with the mainframe, as well as across multiple instances of the operating system. Other security improvements come from enhanced PKI (Public Key Infrastructure) Services and RACF to help improve the creation, authentication, renewal, and management of digital certificates for user and device authentication directly through the mainframe. This now provides centralized management for Web-based applications. z/OS's PKI could be used to secure a wireless network infrastructure or the end nodes of a Virtual Private Network (VPN) that might be hosting point-of-sale or ATM communications traffic. Lastly, the z/OS Integrated Cryptographic Service Facility (ICSF) will be enhanced to include the PKCS#11 standard, which specifies an Application Programming Interface (API) for devices that hold cryptographic information and perform cryptographic functions.
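
PKCS#11 is just an API, so once ICSF exposes it, any PKCS#11-aware application should be able to use keys held on the mainframe. As a generic illustration of what that API looks like from Java – and this is the standard Sun PKCS#11 bridge shipped with Java 5 and 6, not anything ICSF-specific; the config file and PIN here are invented – listing the contents of a token goes something like this:

import java.security.KeyStore;
import java.security.Provider;
import java.security.Security;
import java.util.Enumeration;

public class Pkcs11Peek {
    public static void main(String[] args) throws Exception {
        // pkcs11.cfg names the vendor's PKCS#11 library, for example:
        //   name = MyToken
        //   library = /usr/lib/pkcs11/vendor_pkcs11.so   (hypothetical path)
        Provider p = new sun.security.pkcs11.SunPKCS11("pkcs11.cfg");
        Security.addProvider(p);

        // A PKCS#11 token is presented as a keystore; the PIN unlocks the device
        KeyStore ks = KeyStore.getInstance("PKCS11", p);
        ks.load(null, "1234".toCharArray());

        // List whatever keys and certificates the token holds
        for (Enumeration<String> aliases = ks.aliases(); aliases.hasMoreElements();) {
            String alias = aliases.nextElement();
            System.out.println(alias + " (certificate: "
                + (ks.getCertificate(alias) != null) + ")");
        }
    }
}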


One of the biggest improvements is the ability for logical partitions to span up to 54 processors – previously they were limited (if limited is the right word here) to 32 processors.


The upgrade becomes available on 28 September 2007.


So are mainframes going extinct, and is this little more than a dead-cat bounce? Definitely not. IBM is saying that its revenue grew by 12% in the first quarter of the year over the previous quarter and was up 25% over the previous year. Remember that dinosaurs ruled the earth for 186 million years!

Where am I?

I am just back from China and suffering from the usual effects of jet lag – so just a short blog (you'll be pleased to hear).

I thought I'd pass on lots of Chinese wisdom, but you've probably heard it all before. Anyway, as I think they say, the longest blog begins with a single word!

So, I was thinking about my IP address now that I'm back – I was wondering what it was. So I downloaded a widget called what.ip.i.have by Vlad Sasu. I'm a big fan of widgets – the widget engine is now a Yahoo product (widgets.yahoo.com). I use widgets for the weather and the rainfall, and I have a BBC newsfeed and one showing my blogs (although it could be set to any other RSS feed). The new widget installed and told me my IP address.

The next stage, I thought, would be to look up that IP address on one of those sites that tell you where in the world each IP address comes from. I live in the beautiful West Country near Bath and Bristol in the UK. My broadband is a slightly dear, but usually reliable, connection through BT.

So, my next stage was to go to Google and search for sites that would tell me where my IP address came from. I thought it would be an interesting test. In no particular order, I first tried www.ip-adress.com. Like many of the others, it “knew” my IP address already and showed that I was located in Silvertown in Newham, which is east London near the River Thames. I thought that perhaps BT’s cables joined the rest of the world at that point.

Next I tried http://whatismyipaddress.com, and that came up with Silvertown as well. So I thought that definitely must be where I am (Internetwise that is).

http://www.melissadata.com/lookups/iplocation.asp, however, thought I was in Basingstoke, in England. And so did http://www.geobytes.com/IpLocator.htm, which even gave Basingstoke’s longitude and latitude. http://www.ip2location.com/ also had me in Basingstoke.

http://www.analysespider.com/ip2country/lookup.php knew I was in the UK, but had my IP address as coming from Suffolk – which is about as far east of me as you can go without falling into the North Sea.
http://www.blueforge.org/map/ had me miles from anywhere on the Yorkshire Dales – very picturesque, of course, but miles away. And that’s when I stopped.

I just wondered whether other people had tried these IP locators with any degree of success. Ah well, as they said in China, we do live in interesting times!

Social networking

Someone was telling me that some top IT people who write blogs regularly and have a presence on Facebook and Myspace, etc are now so busy with these Web-based interactions that they don’t have time to do their real jobs properly. So, they employ people to live their Web life for them while they get on with their proper work!

This is to confirm that I am really writing this blog and I don’t have an employee doing it for me!!

Facebook (www.facebook.com) and Myspace (www.myspace.com) are two really interesting examples of how the Web has developed. Both started out as the domain of youngsters and are now being colonized by older people – parents and grandparents of the original users. It appears that we are all keen on social networking.

Recently-announced figures suggest that Facebook has grown by 270 percent and Myspace by 72 percent in a year, although Myspace still has more users logging in each day (28.8 million) than Facebook (15 million).

The good thing about these sites, according to marketeers, is that they identify new trends very early in their life-cycle. So marketing people know exactly what products they should be selling this season.

The downside, I suppose, is that this cult of newness means that after a time the excitement goes from these sites and they gradually shrink in terms of usage. At one time, everyone was talking about friendsreunited (www.friendsreunited.co.uk) and catching up with old school friends. Once you’ve caught up, the point of such a site diminishes. Similarly, Friendster (www.friendster.com) was very popular, but is perhaps less so now.
Youtube (www.youtube.com) is also very popular with youngsters because of the humorous and other short videos you can see there. I'm not sure that much interaction occurs between users on the site beyond one person uploading a video and other people watching it, but lots of people have joined.

A question many people ask is: are they dangerous? Facebook allows you to collect friends – in fact a colleague and I were having a competition earlier this year to see who could get the most friends on Facebook! When we stopped, we were still not even slightly close to the totals my children and their friends have. But is it dangerous? Does it encourage sexual predators, and are our youngsters at risk? The answer is probably not, because the more real friends you have on these networks, the less likely you are to talk to strangers.

Wikipedia (itself often maligned) lists 100 social networking sites at http://en.wikipedia.org/wiki/List_of_social_networking_websites. You can see how many you belong to.

Like lots of other people, I have an entry on LinkedIn (www.linkedin.com), Zoominfo (www.zoominfo.com), Plaxo (www.plaxo.com), and Naymz (www.naymz.com), and I have links to other people. However, if I really want to talk to any of these people, I e-mail them, which is exactly what I would have done if I didn't belong to the networking site.

I think the plethora of social networking sites will eventually shrink to a few that everyone can use and a few that are specialized. I think some will grab people's attention and become somewhere you must have a presence, and others will wither and die as they forget to update or add facilities that no-one really cares about. I think they could be useful as business tools if you could get people to join your group. For example, all the subscribers to Xephon's (www.xephonusa.com) CICS Update could form a group on Facebook and share information about CICS. However, I'm not sure that most people belonging to these networks take them that seriously and would spend enough time talking to their group for there to be a business case, at the moment.

Anyway, the real Trevor Eddolls will not be blogging next week because I am going to be a tourist in China. Any burglars reading this post, please note that someone will be feeding our large and fierce dogs twice a day.

Viper 2

Last week I was talking about AIX 6, which IBM is making available as an open beta – which means anyone can test it out so long as they report their findings to IBM. This week I want to talk about Viper 2, the latest version of DB2 9, which is also available as a download for beta testers. You can register for the Viper 2 open beta program at www.ibm.com/db2/xml. The commercial version is slated to ship later this year.

DB2 9 (the original Viper) was released in July 2006. What made it so special was the way it could handle both relational and XML data easily, which was made possible by the use of pureXML. Users were able to manage structured data alongside documents and other types of content. This, IBM claims, made it superior to products from other database vendors – you know who they mean! Oracle 11g, which will probably have been announced by the time you read this, will have full native XML support. Sybase already has this facility.


According to recent figures from Gartner, IBM's database software sales increased 8.8 percent in 2006 to just over $3.2 billion. However, Oracle had 14.9 percent growth and Microsoft had 28 percent growth in databases. As a consequence, IBM's share of the $15.2 billion relational database market decreased to 21.1 percent in 2006 from 22.1 percent in 2005.


Viper 2 offers enhanced workload management and security. The workload management tools will give better query performance from data warehouses, and better handling of XML data within the database – they say. There is also automated hands-off failover for high availability.


In addition, there's simplified memory management and increased customization control. DB2 9 can perform transactional and analytical tasks at the same time; Viper 2 offers improved management tools for setting priorities between those tasks.


And it apparently has greater flexibility and granularity in security, auditing, and access control. It's now easier to manage the process of granting access to specific information in the database. Viper 2 makes it simpler to manage and administer role-based privileges and features such as label-based access control, which allows customers to set access privileges for individual columns of data. It is also easier to add performance and management enhancements to the system's audit facilities.
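
Label-based access control actually arrived with the original DB2 9; Viper 2 is about making it easier to administer. To make the idea concrete, here's a rough sketch of setting up a policy that protects a single column. The statement syntax is from my memory of the DB2 9 manuals and the names are invented, so treat it as a flavour rather than a recipe.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LbacSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection; LBAC setup needs SECADM authority
        Connection con = DriverManager.getConnection(
            "jdbc:db2://dbserver:50000/SAMPLE", "secadm", "secret");
        Statement s = con.createStatement();

        // A one-component security policy with two levels
        s.execute("CREATE SECURITY LABEL COMPONENT level ARRAY ['CONFIDENTIAL', 'PUBLIC']");
        s.execute("CREATE SECURITY POLICY payroll_policy COMPONENTS level WITH DB2LBACRULES");
        s.execute("CREATE SECURITY LABEL payroll_policy.confidential COMPONENT level 'CONFIDENTIAL'");

        // Protect just the salary column with that label
        s.execute("CREATE TABLE payroll (" +
                  "  emp_id INTEGER, " +
                  "  salary DECIMAL(9,2) SECURED WITH confidential) " +
                  "SECURITY POLICY payroll_policy");

        // Only users granted the label can read the protected column
        s.execute("GRANT SECURITY LABEL payroll_policy.confidential TO USER hradmin FOR READ ACCESS");

        s.close();
        con.close();
    }
}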


It is well worth a look.

Good news for AIX users?

IBM has announced that it is making available an open beta of AIX Version 6.1 – an upgrade to the currently available version of AIX. Now, the questions that immediately spring to mind are: is this a good thing? and why is IBM doing it?

Before I try to answer my own questions – or at least share my thoughts about those questions – let's have a look at what AIX 6.1 has to offer. The big news is virtualization enhancements, with improved security also included. The IBM Web site tells us that workload partitions (WPARs) offer software-based virtualization that is designed to reduce the number of operating system images needing to be managed when consolidating workloads. The live application mobility feature allows users to move a workload partition from one server to another while the workload is running, which provides mainframe-like continuous availability. For security there is now role-based access control, which gives administrators greater flexibility when granting user authorization and access control by role. There is also a new tool called the system director console, which provides access to the system management interface via a browser. The bad news for adventurous adopters is that IBM is not providing any support – there's just a Web forum for other users to share problems and possible solutions.

So, is it a good thing? The answer is (of course) a definite maybe! If lots of people pick up on the beta, and do thoroughly test it for IBM, then the final product, when it is released, will be very stable and not have any irritating teething problems. There could be thousands of beta testers rather than the usual small group of dedicated testers. Plus it could be tested on a whole range of hardware with almost every conceivable peripheral attached and third-party product run on it. And beta testers will get the benefit of the new virtualization features and security features.

Why is IBM doing it? Apart from getting their software beta tested for free, they also make it look like their version of Unix is part of the open source world. The reason I say that is because it is called an “open beta” – hence the verbal link with open source, which is perceived as being a good thing – rather than being called a public beta. To be clear, while some components of AIX are open source, the actual operating system isn’t open source.

AIX Version 6 is a completely new version – the current one is 5.3. The final version will probably be out in November. Announcing an open beta programme means that IBM can steal some limelight back from Unix rivals HP and Sun. All in all, it is good news for AIX users.

A year in blogs

Without wishing to get all mushy about it, this is my blog's birthday! It's one year old today. This is blog 52, and blog 1 was published on 19 July 2006.

I've tried to comment on mainframe-related events that have caught my eye, and at times I have blogged about other computer-related things. I discussed stand-alone IP phones, problems with my new Vista laptop, and wireless networks. I also talked about "green" data centre strategies and virtualization, although a lot of the time I was focused on CICS, z/OS, and DB2.

Perhaps one measure of how successful a blog has become is by seeing whether anyone else on the Internet mentions it. Here are some of the places that have picked up on this blog.

The blog was referred to by Craig Mullins in his excellent blog at http://www.db2portal.com/2006/08/mainframe-weekly-new-mainframe-focused.html. It was also talked about at the Mainframe Watch Belgium blog http://mainframe-watch-belgium.blogspot.com/2007/04/fellow-bloggers.html. At least one blog has been republished at Blue Mainframe (http://bluemainframe.com/).

James Governor mentioned Mainframe Weekly in his Mainframe blog at http://mainframe.typepad.com/blog/.

The blog about William Data Systems' use of AJAX in its Ferret product is also linked to from the William Data Systems Web site at http://www.willdata.com. There's a reference to the "When is a mainframe not a mainframe?" blog at the Hercules-390 site at http://permalink.gmane.org/gmane.comp.emulators.hercules390.general/25845/.

My first blog on virtualization ("Virtualization – it's really clever") was also published on the DABCC Web site at http://www.dabcc.com/article.aspx?id=3553. The second one ("On Demand versus virtualization") can also be found on the DABCC Web site at http://www.dabcc.com/article.aspx?id=4346.

There is a reference to the same blog on the V-Magazine site at http://v-magazine.info/node/5189 and this links to Virtualization Technology news and information's VM blog page at http://vmblog.com/archive/2007/05/07/on-demand-versus-virtualization.aspx, where the full blog is republished.

That particular blog is also republished in full on the Virtual Strategy Magazine site at http://www.virtual-strategy.com/article/articleview/1999/1/7/. There's also a pointer to it at the IT BusinessEdge site at http://www.itbusinessedge.com/item/?ci=28159. Arthur Cole refers to this blog in his blog at http://www.itbusinessedge.com/blogs/dcc/?p=127. It was also quoted from at the PC Blade Daily site at http://www.pcbladecomputing.com/virtualization-plays-well-with-others.

It's good to know that people are reading the blog and referring to it in their own blogs and on their Web sites. Looking to the future, in the next year I plan to continue highlighting trends and interesting new products in the mainframe environment, while occasionally discussing other computing developments that catch my attention.


And finally, a big thank you to everyone who has read my blog during the past year.

The times they are a-changin’

Today (Monday 16 July 2007) is my youngest daughter’s 21st birthday – so happy birthday to Jennifer. I started to think how things were different 21 years ago from how they are today – and hence I stole the title from Bob Dylan’s third album (released 1964) for the title of this blog.

21 years ago I'd just started working for Xephon (which I still do). I had a small laptop at home that I used for all my computing – although it was a Sinclair Spectrum and needed to be plugged in to a TV to see anything! IBM was the top mainframe computer company, and you could use VM, VSE, or MVS as your operating system. CICS and IMS were very popular transaction processing systems. But no-one had heard of OS/390 or z/OS. SNA was still king of communications, with TCP/IP hardly being mentioned.

At work we shared Apple II computers – a luggable Mac each was still in the future. And we had so many pieces of paper!! We needed manuals and cuttings from the papers – you forget how the arrival of the Internet has made research so much easier. So, that’s another thing that’s changed – the Internet has revolutionized our lives. I can remember giving a course at that time where I would explain to people how many ways they interacted with a computer without them realising it. It sounds laughable today – you’d never do anything else on the course if you stuck to listing each person’s computer interactions!

The other thing that was missing 21 years ago that is such a necessary part of our lives is the mobile (cell) phone. You could be out of contact for a whole day and this was considered normal. Nowadays people expect an immediate answer. If you’re not getting calls on the phone then it’s text messages. There’s never been a generation of humans with such strong thumb muscles before! Teenagers can’t spell, but they can text amazingly fast.

21 years ago computer games were very simple. There was just no thought that a game would be able to respond to movements of your body like the Wii does. But, perhaps, back in those halcyon days, we went outside and played tennis or went swimming – sport that didn’t involve a TV screen.

Was it really a better, simpler time? Were politicians less corrupt and the world a safer place? This is probably the wrong blog to answer those kinds of question. Would a CICS user from 1986 recognize a CICS screen from 2007? The answer is probably no. Gone are those green screens, replaced by browsers. They wouldn't recognize SOA, Web services, and all the other current buzzwords.

And yet despite all these changes listed above (and many others), a typical CICS or IMS user would still understand the concept of entering data and getting a suitable response.

So perhaps when you look at things from a personal perspective, although Dylan was right and the times they are a-changin' (laptops, phones, Internet, etc), the man in the street still goes to work; it's just what happens behind the scenes that has changed. For him, the French expression plus ça change, plus c'est la même chose – the more things change, the more they stay the same – might have been a more accurate title for this review of 21 years.

What do you think?

Where can you go for help?

You’re an IBM mainframe user, where can you go for help with your mainframe problems? (If you were thinking of more personal problems, you’re reading the wrong blog!!) Well, my obvious answer would be Xephon’s Update publications (see www.xephonusa.com) or, perhaps, a search on Google (www.google.com), but IBM has recently introduced Destination z (http://www-03.ibm.com/systems/z/destinationz/index.html).

IBM’s new Web-based portal is designed to allow its customers, system integrators, and software developers to talk about mainframe usage, share ideas, and ask for technical help from other users. And just in case you might find you need to buy something, Destination z has links to IBM sales. To be fair, though, it is meant to contain technical resources such as case histories and mainframe migration tools. Part of the thinking behind this development is to provide the expertise to help potential customers migrate workloads from other platforms to mainframes.


In marketing speak, the IBM announcement said that it will also provide space for business partners to drive business developments and provide a broad spectrum of technical resources.


Going back to Xephon for a moment, the June issue of TCP/SNA Update shared some interesting ideas from mainframe networking specialists. There were two articles that included code that could be used to monitor and measure exactly what was going on: the first looked at VTAM storage utilization, and the second looked at VTAM subpool storage utilization. A third article looked at the need to apply a PTF if you utilize the VTAM Configuration Services Exit. There were also two other interesting articles: the first discusses SNA modernization, and the second Enterprise Extenders.


If you have some mainframe networking information you would like to share you can send your article to me at trevore@xephon.com.

Let’s hear it for Power6

A while ago I mentioned IBM's ECLipz project in this blog – their unannounced and mainly rumoured plan to create a single chip for System i, System p, and System z (hence the last three letters of the acronym). The big leap forward in this plan (according to rumour mills on the Web and elsewhere) was the much-touted Power6 chip, which IBM finally unveiled at the end of May.

Before we look at whether it fulfils any of the ECLipz hype, let’s see what was actually in the announcement. Running at a top speed of 4.7GHz, the microprocessor offers double the speed of a Power5 chip, yet still uses about the same amount of electricity to run and cool it (all part of the “green machine room”). This means customers can either double their performance or cut their power consumption in half by running at half the clock speed.


And while we’re talking “green”, the processor includes techniques to conserve power and reduce heat. In fact, the processor can be dynamically turned off when there is no useful work to be done and turned back on when there are instructions to be executed. Also, if extreme temperatures are detected, the Power6 chip can reduce its rate of execution to remain within an acceptable, user-defined, temperature range.


In terms of that other hot topic, virtualization, Power6 supports up to 1024 LPARs (Logical PARtitions). It also offers “live partition mobility”, which allows the resources in a specified LPAR to be increased or decreased, but, more interestingly, the applications in a virtual machine can be quiesced, the virtual machine can be moved from one physical server to another, and then everything restarts as though nothing had happened.


The new Systems Director Virtualization Manager eases virtualization management by including a Web-based interface, and it provides a single set of interfaces for managing all Power-based hardware and virtual partitions and for discovering virtualized resources of the Virtual I/O Server. Virtualization Manager 1.2 supports Power6 chips. It also supports Xen hypervisors included in Red Hat and Novell Linux distributions, as well as VMware, XenSource, and Microsoft Virtual Server.


As far as Project ECLipz goes, the Power6 chip does have redundancy features and support for mainframe instructions (including 50 new floating-point instructions designed to handle decimal maths and binary and decimal conversions). It’s the first Unix processor able to calculate decimal floating point arithmetic in hardware – previously calculations involving decimal numbers with floating decimal points were done using software. There’s also an AltiVec unit (a floating-point and integer processing engine), compliance with IBM’s Power ISA V2.03 specification, and support for Virtual Vector Architecture-2 (ViVA-2), allowing a combination of Power6 nodes to function as a single Vector processor.
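
If you're wondering why anyone would spend silicon on decimal arithmetic, the short answer is money. Binary floating point can't represent most decimal fractions exactly, so commercial code has always fallen back on (slower) software decimal arithmetic – Java's BigDecimal is a familiar example:

import java.math.BigDecimal;

public class DecimalDemo {
    public static void main(String[] args) {
        // Binary floating point: neither 0.1 nor 0.2 has an exact representation
        System.out.println(0.1 + 0.2);   // prints 0.30000000000000004

        // Software decimal arithmetic gives the answer the accountants expect,
        // but it does the work in software rather than in the processor
        BigDecimal a = new BigDecimal("0.10");
        BigDecimal b = new BigDecimal("0.20");
        System.out.println(a.add(b));    // prints 0.30
    }
}

Power6 moves that kind of decimal work into the hardware.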


And in case you were wondering, IBM listed benchmark tests showing the Power6 chip was faster than Lewis Hamilton's Formula 1 car, and perhaps hinted that HP's Itanium-based machines may as well just give up now!

IBM acquisitive and dynamic

It looks like IBM has a plan. A number of recent events seem to indicate that IBM has decided how it wants things to look this time next year, and has started to set about making it happen. What am I talking about? Well I have in mind the recent acquisition of Watchfire, a Web application security company, and the “Web 2.0 Goes to Work” initiative.

Watchfire has a product called AppScan, which has been around for a few years now; in fact Watchfire got it by acquiring a company called Sanctum in 2004. IBM needed a good Web security product to go with RACF, its well-known mainframe security software, and, of course, its ISS purchase. Internet Security Systems cost IBM $1.3bn. The company sold intrusion detection and vulnerability assessment tools and services to secure corporate networks. Once it's happy the Internet is secure, IBM can move forward with its new Web initiative.

Before I go on to talk about that, you might be interested to know that HP has bought SPI Dynamics, another Web security company. Whether HP bought the company to stop IBM getting it, or whether they have plans to integrate WebInspect (one of SPI’s products) with their own products, I just don’t know.


Anyway, the “Web 2.0 Goes to Work” initiative, announced 20 June, is IBM’s way of bringing the value of Web 2.0 into the enterprise. By the value of Web 2.0, they are thinking about things like easy access to information-rich browser-based applications, as well as social networking and collaboration software. No IBM announcement is complete these days without the letters S, O, and A appearing somewhere. IBM said that SOA helps build a flexible computing infrastructure and Web 2.0 provides users with the software required to create rich, lightweight, and easily-deployable software solutions.


Cutting through the hype, IBM has actually announced Lotus Connections, comprising social bookmarking and tagging, rich directories including skills and projects, activity dashboards, collaboration among like-minded communities, and weblogs or blogging. Lotus Quickr is a collaboration tool offering blogs, wikis, and templates. Thirdly, WebSphere Commerce now makes online shopping easier. Full details of the announcement can be found at www.ibm.com/web20.

IBM is clearly thinking ahead and definitely doesn’t want to be seen as the company selling “dinosaur” mainframes. A strong move into the Web 2.0 arena is clearly sensible – and making sure security is locked down tightly means IBM can retain its reputation for reliability.

SOA still making an impact

IBM’s SOA (Service-Oriented Architecture) conference, IMPACT 2007, attracted nearly 4,000 attendees to Orlando, Florida. IBM used the occasion to make some software and services announcements.

IBM introduced a new mainframe version of WebSphere Process Server, which, they claim, automates people- and information-centric business processes, and also consolidates mission-critical elements of a business onto a single system. IBM suggests that a combination of DB2 9, WebSphere Application Server (WAS), and WebSphere Process Server will deliver process and data services for SOA on a mainframe.


IBM also announced DB2 Dynamic Warehouse, which integrates Information on Demand and SOA strategies to implement Dynamic Warehousing solutions – they said. It also integrates with Rational Asset Manager (a registry of design, development, and deployment related assets such as services) to improve SOA governance and life-cycle management. At the same time, IBM announced a new WAS feature pack to simplify Web services deployment.

The trouble with SOA is that there are a lot of people talking about it, but not enough people who really understand how to implement SOA in an organization. IBM has thought about that issue and announced at IMPACT 2007 a set of 218 self-paced and instructor-led courses conducted online and in the classroom. IBM also claimed that it has good relationships with colleges and universities round the world and is working with them on the development of SOA-related curricula.


If you want to visualize how an SOA affects different parts of an organization, IBM had an interactive 3D educational game simulator called Innov8. This BPM simulator is designed to increase understanding between IT departments and business executives.


At the same time, IBM announced an online portal containing Webcasts, podcasts, demos, White Papers, etc for people looking to get more SOA-related information.


Lastly, IBM announced its SOA portfolio, which contained integrated technology from DataPower SOA appliances, FileNet content manager, and Business Process Management (BPM). Included in the announcement was the WebSphere DataPower Integration Appliance XI50, which can now support direct database connectivity. Also, IBM has integrated the capabilities of WebSphere with the FileNet BPM.


So, not surprisingly, SOA and WebSphere are definitely THE hot topics for IBM at the moment.

Virtualization – a beginner’s guide to products

Let’s start with a caveat: I’m calling this a beginner’s guide not a complete guide – so, if you know of a product that I haven’t mentioned, sorry, I just ran out of space.

Now the thing is, on a mainframe, we’ve got z/VM, which is really the grandfather of all these fashionable virtualization products. In fact, if I can use a science fiction metaphor, VM is a bit like Dr Who, every few years it regenerates as a re-invigorated up-to-date youthful product, ready to set to with those pesky Daleks and Cybermen, etc.

And, of course, mainframers are all familiar with LPARs (Logical PARtitions), which are ways of dividing up the hardware so it can run multiple operating systems.

The real problem for mainframers is when they are asked to bring their wealth of experience with virtualized hardware and software to the x86 server arena. Where do you start? What products are available? Well, this is what I want to summarize here (for beginners).


I suppose the first product I should mention is IBM’s Virtualization Manager, which is an extension to IBM Director. The product provides a single console from which users can discover and manage real and virtual systems. Now, the virtual systems would themselves be running virtualization software – and I’ll talk about that layer in a moment.


If you don't choose IBM, an alternative would be VMware's product suite, which comprises eight components: Consolidated Backup (for backing up virtual machines), DRS (for resource allocation and balancing), ESX Server, High Availability (an HA engine), Virtual SMP (offering multiprocessor support for virtual machines), VirtualCenter (where management, automation, and optimization occur), VMFS (a file system for storage virtualization), and VMotion (for migration).


Also, quite well-known is HP’s ProLiant Essentials Virtual Machine Management Pack, which more-or-less explains what it does in the title.


Lastly for this list of management software, there are CiRBA's Data Center Intelligence (now at Version 4.2) and Marathon Technologies' everRun. Marathon also has its v-Available initiative.


In terms of software that actually carries out the virtualization on an x86 platform, perhaps the two best-known vendors are VMware and XenSource. VMware has its ESX Server (mentioned above), and XenSource has XenEnterprise, XenServer, and XenExpress.


VMware reckons ESX Server has around 50% of the x86 virtualization marketplace. It installs straight onto the hardware and then runs multiple guest operating systems on top of it. The Xen products use the Xen Open Source hypervisor running straight on the hardware and allow Windows and Linux operating systems to run as guests. Virtual Iron also uses the Xen hypervisor and is similar to the Xen products. It's currently at Version 3.7. Also worth a quick mention is SWsoft, which produces Virtuozzo.


One other company that has a small presence in the world of virtualization is Microsoft – you may have heard of them! Microsoft has Virtual Server 2005 R2, which, as yet, hasn’t made a big impact on the world of virtualization.


So, any virtualization beginners out there – I hope that helped.

When is a mainframe not a mainframe?

The April/May 2007 issue of z/Journal (http://zjournal.tcipubs.com/issues/zJ.Apr-May07.pdf) has an interesting article by Philip H Smith III entitled, “The state of IBM mainframe emulation”. Emulation is a way of letting hardware run software that shouldn’t be able to run on that hardware! It’s an extra layer of code between the operating system and the hardware. The operating system sends an instruction and the emulation software converts that instruction to one that the existing hardware can understand. The hardware then carries out the instruction. Any response is then converted by the emulator into something that the operating system would expect, and the originating program carries on processing unaware of the clever stuff that’s been going on. Often there is a native operating system involved between the emulation software and the hardware, but not always.
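
Stripped right down, that "extra layer of code" is just a loop that fetches an instruction meant for the emulated machine, decodes it, and does the equivalent work on the real hardware. Here's a toy sketch of the idea – an invented two-instruction machine, nothing to do with the internals of FLEX-ES, Hercules, or any other real emulator:

public class ToyEmulator {
    // An invented instruction set: opcode in the high byte, operand in the low byte
    static final int LOAD = 0x01;   // load an immediate value into the register
    static final int ADD  = 0x02;   // add an immediate value to the register

    public static void main(String[] args) {
        int[] memory = {0x0105, 0x0203, 0x0000};   // LOAD 5, ADD 3, halt
        int pc = 0;                                 // program counter
        int reg = 0;                                // the emulated machine's one register

        while (true) {
            int instruction = memory[pc++];         // fetch
            int opcode  = instruction >> 8;         // decode
            int operand = instruction & 0xFF;

            if (opcode == LOAD) {                   // execute on the real hardware
                reg = operand;
            } else if (opcode == ADD) {
                reg += operand;
            } else {
                break;                              // anything else: halt
            }
        }
        System.out.println("Register = " + reg);    // prints Register = 8
    }
}

A real emulator has hundreds of opcodes, device emulation, and a lot of performance trickery, but the shape of the job is the same.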

Philip talks about FLEX-ES from Fundamental Software. Its business partners offer integrated FLEX-ES solutions on Intel-based laptops and servers, which means that developers can test mainframe software on a laptop. FLEX-ES works by running as a task under Linux, and it emulates a range of devices including terminals and tape drives. Fundamental Software also sells hardware to allow real mainframe peripherals to connect to the laptop, and PC peripherals can be used to emulate their mainframe counterparts. There is currently a legal dispute between IBM and Fundamental Software.


There was also UMX Technologies, which offered an emulation technology that was apparently developed in Russia. The company arrived in 2003 and disappeared in 2004.


Hercules is an Open Source mainframe emulator that was originally developed by Roger Bowler. Hercules runs under Linux, as well as Windows and Mac OS X. IBM, however, won’t license its operating systems for Hercules systems, so users have to either run older public domain versions of IBM operating systems (eg VM/370 or OS/360) or illegally run newer operating systems.


Platform Solutions has a product called the Open Mainframe, which provides a firmware-based mainframe environment on Intel-based hardware. It is built on intellectual property from the time that Amdahl offered a Plug-Compatible Mainframe (PCM). It's not a complete solution because it doesn't support the SIE instruction, with the result that z/VM won't run. However, z/OS and z/Linux work OK. Open Mainframe runs straight on the hardware; it doesn't need an operating system. Unsurprisingly, perhaps, IBM's and PSI's legal teams are now involved.
I also found Sim390, which is an application that runs under Windows and emulates a subset of the ESA/390 mainframe architecture. Its URL is http://www.geocities.com/sim390/index.htm.


I hope Philip H Smith III won't mind me borrowing from his article, but there are two very interesting points leading on from this. One, and Philip makes this point in his article, is that if mainframe emulation is available on a laptop, it is easier to use, and it is more likely that younger people (remember that awful bell-shaped curve showing the average age of experienced mainframers and COBOL programmers) will want to have a go.


The second point is that emulation is only a short step away from virtualization, which I’ve talked about before. Wouldn’t it make sense (from a user’s point of view) if they had one box of processors (Intel quad processors, P6s, whatever), and they could then run all their operating systems on it? The virtualization software would also be the emulation software. It could run Windows, Linux, z/VM, z/OS, etc on it. If a user’s needs were simple, it would be a small box with few chips and not too many peripherals. If a user’s needs were complex, it would be a big box with lots of everything. Virtualization is appearing everywhere, I can quite easily see it absorbing the concept of mainframe emulation (IBM’s legal team permitting, of course!).

The Color Purple

OK, I’ve stolen the title from Steven Spielberg’s 1985 film – or from the title of Alice Walker’s 1982 novel. And this blog has nothing to do with racism, but it is to do with colours – the colours you see on your computer screen.

I have recently been using a little device called a Huey (from Pantone/GretagMacbeth – http://www.pantone.com/pages/products/product.aspx?pid=79&ca=2), which is a computer monitor calibration tool. It checks what colours your monitor produces and corrects them so you see more accurate colours. Mine came from a company called Colour Confidence (www.colourconfidence.com).


The device is about the size of a slightly thick and slightly short pen, and spends most of its life in a cradle connected to your computer through a USB port, where it monitors the ambient light. But let’s start at the beginning…


When you purchase the device (which costs about $90 in the USA and around £60 in the UK) you get the Huey device, a cradle, an extension USB cable, and a CD. The CD I had contained Version 1.0 of the software, which is OK if you have XP installed (or a Mac), but I have Vista. This meant I had to go to the Pantone Web site, register, and then download the Vista-capable version – which is 1.0.5. The software installed quickly and then needed to reboot my laptop.


The next stage was to wipe and dry the screen with the supplied wipes and cloth, and then connect the Huey device. Once Vista recognized it, I started the Huey software. Next I stuck the Huey to the screen using the very small suckers attached to it. The software then quickly ran through a number of colours and shades of grey. Lastly I was given a chance to compare the original settings with the new suggested settings. They weren't enormously different, but they were definitely different. The colours are now "warmer". In fact, you can select what type of use your computer is put to and select the appropriate colour scheme for that. There are options such as Web browsing and photo editing, graphic design and video editing, and warm low contrast. Each week, the Huey recalibrates, which is a good idea.


Some reports on the Web suggest that the Huey is much cheaper and less accurate than other products. But I think that's the point. If you need 100% accurate colour management for work, you would buy one of those more expensive devices. For the rest of us, the Huey makes a useful additional tool at an affordable price. If you spend all day sitting in front of a computer screen, you want it to display more-or-less the right colours. I think it's a handy little gadget.


And finally, and for the last time (I promise)... Xephon’s (www.xephonusa.com) WebSphere Update is looking for new authors to broaden its base of contributors. If you work with WebSphere and you have discovered something you wished you'd known before you started, or you've implemented something useful that others could benefit from, please contact me on TrevorE@xephon.com.

We hate Microsoft – or is it Microsoft hates everybody else!!

Let’s make it clear, this isn’t a personal rant – although I am finding the lack of Vista drivers for devices that happily attached to my XP laptop a bit frustrating – it’s a look at recent news stories and their significance.

Firstly then, there's Adobe, who have announced that Vista-compatible drivers for Postscript-enabled printers will be available in July. Hang on, didn't Vista appear in the shops in February? Why has Adobe waited six months? Well, of course, the answer is that Adobe hates Microsoft. (For legal reasons I need to point out that the use of hyperbole is solely to make this blog more interesting and is not taken from opinions stated by representatives of any company mentioned herein – phew!) Users of Adobe's DreamWeaver, InDesign, and Photoshop products will be aware that there isn't (or wasn't last time I looked, anyway) a Vista version of the products. Also, in Europe, Adobe and Microsoft have taken up the cudgels. Adobe claims that Microsoft has violated EU trade laws by bundling Vista and the XML Paper Specification – you know, the thing that's more than a bit like Acrobat. Plus, of course, not only is Microsoft trying to eat Adobe's PDF lunch, it's also set its sights on Flash. Microsoft now has the Silverlight multimedia authoring tool. But Microsoft is also a little vexed by the fact that Adobe will soon have an equivalent to Media Player. The beta of Adobe Media Player should be out later this year.


Then there's the Open Source community. Microsoft really hates them. Microsoft is now claiming that the Linux kernel violates 42 of its patents. And that's not all – other Open Source programs (and I've heard OpenOffice is included in the list) apparently infringe 193 patents.


Who else? Well, let's not forget that Microsoft hates Google. Google is king of the Internet and is offering all those Office-like applications over the Web. They have a PowerPoint equivalent coming soon. Plus Google has its own desktop gadgets much like the ones found in Vista – only more of them. Plus, Google has outbid Microsoft for advertising company DoubleClick. And talking of Office-like applications, have you looked at www.zoho.com yet? They're definitely going to appear on Microsoft's hate list!

And now Dell has been added to the hate list. Dell has been supplying computers with Windows as the operating system for years. Now, it is saying that customers are asking for XP rather than Vista as the operating system, and it is supplying them with it. But that isn't its cardinal sin – no, Dell is now supplying new computers with Ubuntu installed. Another victory for Linux.


Microsoft is apparently going to upset the phone and network companies with its unified communications device (to be announced).


Does Microsoft like anyone? Well yes, they are good friends with flash card manufacturer SanDisk. Together they plan to put application programs on memory cards. As a user, you plug your memory card into any computer and you can access personalised e-mail programs, Web browsers, productivity tools, multimedia applications, and, so they say, more. What a stupid idea! What happens when you forget your memory stick or you are trying to use two computers? Haven’t they heard of the Internet?


In addition to its unified communications announcement, Microsoft is also showing off its online storage facility. So it has a lot going for it at the moment. It just seems that in order to stay at the top it is upsetting other companies rather than trying to work with them. We don’t all hate you Microsoft, you just seem to have a habit of putting people in a difficult position.


As I mentioned last week... Xephon’s (www.xephonusa.com) WebSphere Update is looking for new authors to broaden its base of contributors. If you work with WebSphere and you have discovered something you wished you'd known before you started, or you've implemented something useful that others could benefit from, please contact me on TrevorE@xephon.com.

Aglets – a new way for mobile computing

I must have been messing around at the back of the class recently, because I have only just heard of aglets – a portmanteau word created from agent and applet – and of how they can be used with distributed DB2 databases.

A mobile agent is an exceptionally clever piece of software that can migrate during execution from machine to machine in a heterogeneous network. Once it arrives on a new machine, the agent interacts with service agents (which are permanently located on that machine) and other resources to perform its mission. So why would you want to use this kind of technology? Well the answer, as so often, is improved performance. Because the agent moves to the remote machine, performs its search (or whatever) there, and sends only the results back, there is a huge reduction in the amount of data crossing the network, and therefore other network-reliant applications aren't slowed down – hence the improved performance. Any other method would involve copying large amounts of data from one computer to another and then searching it (or whatever) there. Using aglets moves the processing to the computer on which the data resides, so much less network traffic is necessary.


So how do you get hold of aglets? IBM’s Tokyo Research laboratory has created the Aglet Workbench, and the package can be downloaded from http://sourceforge.net/projects/aglets/.


The mobile agents are 100% pure Java. The Java Aglet API (J-APPI) is the interface used to build aglets and their environments. The API is platform agnostic, but it does require JDK 1.1 or higher to be installed for it to run. There is an agent server, called Tahiti, which (by default) uses port 4434. Transferring agents between computers is achieved using ATP (the Agent Transfer Protocol).
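To give a flavour of the programming model, here is a minimal sketch of an aglet written from my reading of the Workbench documentation – the class and method names (Aglet, onCreation, dispatch) and the atp:// URL form are as I understand them, so treat the details as illustrative rather than gospel. The idea is that the same object carries its state with it: the first time run() executes it dispatches itself to the remote machine, and the second time (on that machine) it does its work next to the data.

    import com.ibm.aglet.Aglet;
    import java.net.URL;

    // A toy aglet that hops to a remote Tahiti server and does its work there,
    // so that only the (small) results would need to cross the network.
    public class SearchAglet extends Aglet {
        private boolean dispatched = false;   // serialized with the aglet, so it survives the move

        public void onCreation(Object init) {
            // Called once, on the machine where the aglet is first created.
        }

        public void run() {
            try {
                if (!dispatched) {
                    dispatched = true;
                    // Move this aglet to the remote machine; 4434 is the default
                    // Tahiti port mentioned above, and the host name is made up.
                    dispatch(new URL("atp://remote.example.com:4434/"));
                } else {
                    // This branch runs on the remote machine, close to the data.
                    doLocalSearch();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        private void doLocalSearch() {
            // Placeholder for the real work, eg a JDBC query against the local DB2 database.
        }
    }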

It is definitely an interesting and useful development.

And for people who enjoy quiz nights – an aglet is also the little piece of plastic (or, perhaps, metal) at the end of a shoelace (usually) that stops the lace from unravelling.


My thanks to Nikola Lazovic, a regular contributor to Xephon's (www.xephonusa.com) DB2 Update journal, for drawing aglets to my attention.


And on a different, but related note… WebSphere Update, also from Xephon, is looking for new authors to broaden its base of contributors. If you work with WebSphere and you have discovered something you wished you’d known before you started, or you’ve implemented something useful that others could benefit from, please contact me on TrevorE@xephon.com.

On Demand versus virtualization

You might very well think this is a strange title for a blog – after all, they seem like two completely different things. It’s like saying apples versus mountain bikes!

However, think about it a little more deeply. Both are trends in computing, and both would take IT departments in totally different directions. At some stage, managers are going to have to sit down and decide which route they are going to take.

Let me explain my thinking in more detail.

On Demand computing describes a method of making computing resources available to users at the time those resources are needed. Basically, what happens is this: users have lots of computer capacity available at their site, but pay only for the capacity they actually use. They are not paying to support peak capacity during quieter periods; they use as much capacity as they need at any one time, and that’s all they pay for. On Demand computing has been tipped by many to be the next big trend in computing. After all, it makes sense for a company to have all the capacity it will ever need, but only pay for the capacity it is actually using.
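To make the money side concrete, here is a back-of-an-envelope comparison – all the numbers (capacity units, prices, usage pattern) are invented purely for illustration:

    // Toy comparison of paying for peak capacity all year versus paying
    // only for the capacity used each month. All figures are invented.
    public class OnDemandSums {
        public static void main(String[] args) {
            int peakUnits = 16;                       // capacity needed in the busiest month
            int[] monthlyUsage = {6, 6, 7, 6, 8, 9, 7, 6, 10, 12, 14, 16};
            double pricePerUnitOwned = 100.0;         // per unit, per month, if installed permanently
            double pricePerUnitOnDemand = 130.0;      // on-demand premium per unit actually used

            double ownedCost = 12 * peakUnits * pricePerUnitOwned;
            double onDemandCost = 0;
            for (int used : monthlyUsage) {
                onDemandCost += used * pricePerUnitOnDemand;
            }

            System.out.println("Own peak capacity all year: " + ownedCost);
            System.out.println("Pay only for what is used:  " + onDemandCost);
        }
    }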

Virtualization, on the other hand, is a technique that allows users to maximize the use they get from their existing hardware. You don’t need more; you just utilize what you already have better. Virtualization techniques allow hardware to “appear” to be available to users. It can even make non-existent devices appear to be available, taking the calls to such a device and routing them somewhere else – all completely transparent to the user. However, the important point in this debate is that it maximizes the usage of the hardware that is installed. Mainframers are familiar with VM, PR/SM, and all the other virtualization techniques that have been around for many years. AIX users are now able to benefit from virtualization, and so are System i users and even sites with x86 servers.


The advantages of choosing the virtualization route are that it will reduce the amount of hardware that needs to be switched on, and so will reduce electricity bills. On Demand requires that data centres have the extra hardware delivered and installed; not having it delivered saves fuel and can be a component of a company’s “green” strategy. Companies will also need less air conditioning (and so, again, less electricity) because they won’t have to cool these extra boxes.


The one big thing in favour of On Demand computing is that it is an easier strategy. The extra boxes are used when needed and money is paid out at the end of the month (or whatever the charging period is). With virtualization you need to buy the software and have someone who knows what they’re doing to install it. You then need expertise to set up and run the virtual machines.


What I am suggesting is that when IT departments sit down to decide where they want to be in five years’ time and how they can achieve those goals, they will prefer to develop their own expertise in virtualization – and be able to take all the advantages this will offer both now and in the future – rather than install extra just-in-case capacity.
What do you think?

Sign of the Zodiac

I mentioned in last week’s blog that I’d been to Mainz in Germany with IBM. The focus of the meeting was on SMB customers rather than mainframe users, although I would guess plenty of mainframe sites have a host of other boxes around the place.

One thing that surprised me was the number of horror stories they could quote of sites that had a number of x86 servers around the company, but weren’t sure quite how many there were or what the boxes they knew about actually did – ie what applications were running on them.


Before these sites could even think about virtualizing, they needed to discover what they had installed. They needed some way of discovering what boxes they owned and what applications were running on them, and they needed to do this without having to install an agent on each box to do the job – because, if you don’t know what boxes you’ve got, you can’t put an agent on them!


This is where a very clever piece of software called Zodiac comes in. This complex software can link into other software where necessary and help a company build up an accurate picture of what’s going on where on its servers. The software sits on the network and picks up message traffic – eg it will spot a query going to a database and the response coming back from that database.
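Zodiac does its discovery passively, by watching real traffic, which is far cleverer than anything I could sketch here. But purely to illustrate the idea of agentless discovery – finding out what a box is doing without installing anything on it – here is a toy example in Java that simply checks which well-known service ports answer on a host (the host address and the port list are made up):

    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Toy agentless discovery: see which well-known service ports answer on a host.
    // Nothing is installed on the target machine - we only open TCP connections.
    public class PortProbe {
        public static void main(String[] args) throws Exception {
            String host = args.length > 0 ? args[0] : "192.168.1.10";   // hypothetical server
            Map<Integer, String> ports = new LinkedHashMap<Integer, String>();
            ports.put(50000, "DB2");
            ports.put(1521, "Oracle");
            ports.put(1433, "SQL Server");
            ports.put(80, "HTTP");

            for (Map.Entry<Integer, String> entry : ports.entrySet()) {
                Socket socket = new Socket();
                try {
                    socket.connect(new InetSocketAddress(host, entry.getKey()), 500);
                    System.out.println(host + ":" + entry.getKey() + " open - possibly " + entry.getValue());
                } catch (Exception e) {
                    // Closed or filtered - nothing listening on this port.
                } finally {
                    socket.close();
                }
            }
        }
    }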


Once a site knows where it is at the moment (in terms of hardware and software), it becomes possible to plan for a more idealized working environment and how to get there from here – because currently there seem to be a lot of sites that don’t know where “here” actually is! Obviously, a business case needs to be built, and Zodiac works with Cobra, a component that can help build that business case. This is perhaps harder for many sites than it might at first appear because, as well as consolidating and reorganizing the hardware and software at a site (sites will be looking to virtualize their servers in order to use fewer of them and reduce overall costs), it also involves reorganizing people. There is a strong likelihood that after the reorganization the jobs needed to run the data centre will be different from those needed before it. Some new skills will be required and some old ones may not be needed, or two or more jobs might be consolidated because the amount of work needing to be done is reduced. These HR effects are important, and will need dealing with by companies making the change.


Steve Weeks, who heads the Zodiac project at IBM, said that they were now visiting sites that had used Zodiac for the initial inventory report and to create the business case for the migration, and who were now ready to move forward again and wanted to use Zodiac facilities a second time.


Zodiac doesn’t depend on IBM boxes being on site; it is completely vendor neutral and will identify whatever it finds from whichever manufacturer. It seemed like a very useful product and one that I was completely unaware of before this meeting.

Big Blue goes green?

I was recently with IBM in Mainz discussing data centre challenges for the 21st century. Interestingly, one of the issues under discussion was about having a green data centre.

Now, environmental friendliness is very much on every politician’s agenda, with everyone trying to outdo the opposing candidate on how green they are – in terms of recycling waste, cutting energy use, and creating fewer carbon emissions by not using cars and planes etc whenever possible. And if people are recycling at home, turning down the thermostat on the heating, and cycling to work, it makes sense for them to look at being green in the work environment too.

Now this is where the problems start. Hands up anyone who can define what is meant by a green data centre. We all know what we think it means, but it is quite hard to come up with a definition that is worth including in a dictionary. And in many ways, it is impossible to have a truly green data centre because of the amount of energy needed to create the processors and data storage devices in the first place, the amount of energy necessary to run them at decent processing speeds, and the energy required to dispose of the hardware in an environmentally friendly way once it is past its best and being shipped out.


At the meeting in Mainz, IBM was suggesting ways that the data centre could become greener – by which they meant more energy efficient. They were specifically talking about blade servers rather than mainframes, but I guess most sites have a mixture of technologies and this will apply to them.


IBM made a statement that I found quite startling, but everyone else in the room nodded sagely, so I guess it’s true. I suppose it’s my mainframe background that made the idea seem so strange – you’ll probably be saying, “of course, everyone knows that”. They suggested that the average utilization of an x86 server was around the 20% mark, and that people were likely to go out and buy another server when they hit 25% utilization. This shopping expedition wasn’t necessarily caused by the increased server utilization; it was just the sort of pattern they had observed. That means these companies end up with rooms full of servers running at around a quarter utilization. The first way to become greener is obviously to get rid of half the servers and double the utilization figures. But how can you do that? Well, IBM is very keen on virtualization (and why wouldn’t they be, having been using VM for forty years?). Obviously, virtualization uses slightly more power on a single server than not virtualizing, but significantly less than running two servers.
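Just to put rough numbers on that argument – and these figures are invented, not IBM’s – the sums look something like this:

    // Back-of-an-envelope consolidation sums (all figures hypothetical).
    public class ConsolidationSums {
        public static void main(String[] args) {
            int serversBefore = 10;          // each at roughly 20% utilization
            int serversAfter  = 5;           // same work, roughly 40% utilization each
            double wattsPerServer = 400.0;   // assumed average draw, including fans
            double hoursPerYear = 24 * 365;

            double kWhBefore = serversBefore * wattsPerServer * hoursPerYear / 1000;
            // Assume ~10% extra draw per box for the hypervisor and the higher load.
            double kWhAfter  = serversAfter * wattsPerServer * 1.1 * hoursPerYear / 1000;

            System.out.printf("Before: %.0f kWh/year, after: %.0f kWh/year, saving: %.0f kWh/year%n",
                    kWhBefore, kWhAfter, kWhBefore - kWhAfter);
        }
    }

And that is before counting the air conditioning needed to remove the heat those extra boxes give off.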


Their other greening strategy was in the way the blades are cooled. Apparently air conditioning warm air works better than air conditioning cooler air! They told horror stories of servers that were drawing lots of power and were then running hotter, so the fans would spin faster to cool them down, which meant that the fans were drawing more power (and creating more heat, which meant…). Their solution simply involved keeping the hot and cold air separate, which results in the air conditioning working more efficiently and less energy being used.


They did also have ways of water cooling the doors of blade servers to keep down the temperature and some of the components were now more energy efficient, which meant they were greener – although this was a consequence of a desire to make them more efficient rather than anyone specifically following a green agenda.


So, to answer my question in the title of this blog, Big Blue is moving towards greenness. It’s doing so because it makes sense to: energy efficiency means customers can save money – always a strong selling point – and because customers are asking for greener solutions at affordable prices, which IBM is able to provide. However, I doubt we’ll be seeing a data centre that an environmentalist would consider to be green for a long time yet.

CICS V3.2 – do I need it?

IBM has been excitedly telling everyone recently about the latest release of CICS. But the real question is whether sites should be looking to upgrade from 3.1 to 3.2. Is there really any point?

IBM reckons that the upgrade rates to CICS 3.1 were the fastest that it had ever experienced and there was probably a good reason for that – SOA. Service-Oriented Architecture was available for the first time with CICS V3.0, but it was V3.1 that provided the first full production implementation. With so much pressure on sites to save money and provide better business value, you can see that a migration to V3.1 was going to be on the agenda for everyone in order to maximize the benefits that SOA can offer a company. And I would list them here, but you see them in every PowerPoint presentation you sit through these days, so I won’t bother. You know what they are!

So having migrated to V3.1 for all these benefits, do I need to take the next step to the newly-announced V3.2? The answer from IBM is obviously yes, so let’s see what 3.2 has to offer.

V3.1 of CICS in a Web services environment is a heavy user of resources. IBM has tried to do better by optimising the HTTP client and server in the new version. There’s also better management, including a way to trace the progress of an end-to-end transaction. This makes use of WSRR – an acronym you’re going to become more familiar with over the next few months. The WebSphere Service Registry and Repository is a single location for all the available Web services.

Some people found the old message size restrictive – a transaction couldn’t handle all the data they wanted to send. IBM now supports MTOM (Message Transmission Optimization Mechanism), which overcomes the problem. V3.2 has increased transaction granularity and, by exploiting 64-bit architecture, it can handle larger payloads. CICS 3.2 has also seen improvements to the user interface, making it easier to install and define regions, and problem location has been enhanced.
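For readers who haven’t met MTOM before, it is the W3C mechanism that lets large binary content travel as an optimized attachment instead of being base64-encoded inside the SOAP body. The sketch below is a generic JAX-WS client – nothing to do with CICS configuration specifically – and the WSDL location, namespace, service name, and port interface are all invented for illustration:

    import javax.jws.WebService;
    import javax.xml.namespace.QName;
    import javax.xml.ws.Service;
    import javax.xml.ws.soap.MTOMFeature;
    import java.net.URL;

    // Generic JAX-WS client sketch: passing MTOMFeature when the port is created
    // means large binary payloads are sent as attachments rather than base64 text.
    public class MtomClientSketch {
        public static void main(String[] args) throws Exception {
            URL wsdl = new URL("http://host.example.com/orderService?wsdl");   // invented
            QName serviceName = new QName("http://example.com/orders", "OrderService");

            Service service = Service.create(wsdl, serviceName);
            OrderPort port = service.getPort(OrderPort.class, new MTOMFeature());

            byte[] largeDocument = new byte[5 * 1024 * 1024];   // eg a 5MB scanned document
            port.submitOrder(largeDocument);                    // travels as an MTOM attachment
        }
    }

    // Hypothetical service endpoint interface; in practice this would be
    // generated from the real WSDL by a tool such as wsimport.
    @WebService
    interface OrderPort {
        void submitOrder(byte[] document);
    }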


So do you need CICS Version 3.2? With the important improvements to SOA and Web services and the other improvements including problem identification, I think the answer is a resounding yes.

SOA – Same Old Architecture

Last week I blogged about a session at a legacy application modernization seminar I attended. This week I’d like to tell you about another presentation I saw later that same day. This second one was by Gary Barnett, Research Director at Ovum Consulting.

His approach was less one of telling us what to do and more one of raising our awareness, to stop us making the same mistakes that other people have made in the past. He is responsible for defining SOA as Same Old Architecture – which, although intended as a joke, made the point that none of this is entirely new. He reminded us that Web services weren’t the first type of services we’d come across; we’d looked at work in terms of services before, with things like CORBA services and Tuxedo services (from BEA).


Gary also confidently predicted that 80% of SOA projects would fail. He based this prediction on the fact that they relied on ASCII and XML and that 80% was probably the number of projects that failed anyway.


He also had some important thoughts on re-use. He suggested that it wasn’t enough simply to have a nice interface; if re-use was to happen, it had to have been planned in from the design phase. There is no way to retro-fit re-use! He also insisted that “best practice” only worked when it really was practised!


Gary likened many IT projects to building a bridge. IT people know how to build metaphorical bridges, so when someone says let’s have a bridge the IT people start building. The reason so many projects fail is because it is not until they are half way across the river that anyone from IT stops to ask the questions, “just how wide is this river?” or, “do you really want the bridge here?”.


Gary said that most presentations show large coloured squares joined by thin lines and warned that the reason the lines were so thin was that people didn’t want anyone to notice them and ask questions. However, he stressed, it is often the links between applications or services that are the most difficult to modernize.
On a serious note, Gary insisted that the focus for change should be on business processes. He said that in any successful company there would be no such thing as a legacy system modernization project, there would only ever be business modernization projects.


Definitely a “make you think” session, and well worth seeing for anyone contemplating modernization (ie all of us!).

Legacy application modernization

Last Monday I was lucky enough to attend a one-day seminar near Heathrow in London organized by Arcati. It had a number of speakers, and gave a very interesting picture of where many companies are today and where they’d like to be – along with the all-important guidelines describing how to get there.

It highlighted two very important points – that getting there is going to take time and effort; and, by the time you have arrived there, you’ll want to be somewhere else and the whole process will start all over again!


Dr Mike Gilbert, principal of Legacy Directions, suggested that there were three immediate problems organizations face in terms of modernization strategies. His first was the COBOL skills problem – he suggested that the average age of a COBOL programmer was 45, and few youngsters wanted to learn COBOL (or could even find places that taught it!). In ten years’ time, most COBOL experts will be looking to retire rather than take on a modernization challenge.
His second problem he likened to an octopus. The core legacy application was the body of the octopus and the tentacles were the peripheral systems that the application touched. While it is easy (a relative term, obviously) to do something with the tentacles, it is much harder to carry out work moving or integrating the octopus’s body.


The third problem was simply cost. What happens if the modernization project goes wrong? The cost can be enormous, and Mike gave an example of one company at which all the senior officers resigned following a project failure because of the expense suffered by the company.


Mike Gilbert then explained that we should be looking at the big picture when thinking of legacy systems. Firstly, there are people involved – not just IT, but the users who are familiar with using the applications. We should think about the processes – there are the old (original) processes and the new ones. Then we get to the applications themselves, which could be locked into particular databases etc. And finally there is the infrastructure that must be considered.


Before a modernization project can be undertaken, it’s important that the business leaders understand the need for the project in business terms and can see the benefits the business will get from the change. The business leaders must support the modernization project. The project must use the best methodologies (and in some cases the methodologies may be in their infancy). Lastly, companies must have appropriate tools for the project – again these might not exist for all modernization projects.


Mike suggested that in any modernization project it was important to go through five stages. Stage 1 was to define the challenge. Stage 2 was to define success – that way you knew when you’d finished. Stage 3 was to plan the project. Stage 4 was to carry out the project. And Stage 5 was to return to Stage 1.


He suggested always starting with processes, because these were used by people. Then look at people, because they use the actual applications. Then look at the applications, which run on the infrastructure. And lastly look at infrastructure. He suggested that by scoring assets, it was possible to produce a decision table and show what changes were possible in terms of cost and risk. The final decisions should always be taken by the business leaders so they are supportive of the project.
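To show what a very simple scoring exercise might look like – with the assets, scales, and numbers entirely invented – a decision table can be little more than a ranked list:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    // Toy asset-scoring table: rank modernization candidates by cost and risk.
    // All names and numbers are invented for illustration.
    public class AssetScoring {
        static class Asset {
            final String name;
            final int cost;   // 1 (cheap to change) .. 5 (very expensive)
            final int risk;   // 1 (low risk) .. 5 (high risk)
            Asset(String name, int cost, int risk) {
                this.name = name; this.cost = cost; this.risk = risk;
            }
            int score() { return cost + risk; }   // lower score = better first candidate
        }

        public static void main(String[] args) {
            List<Asset> assets = new ArrayList<Asset>();
            assets.add(new Asset("Billing batch suite", 5, 4));
            assets.add(new Asset("Customer enquiry screens", 2, 2));
            assets.add(new Asset("Order entry CICS transactions", 3, 4));

            Collections.sort(assets, new Comparator<Asset>() {
                public int compare(Asset a, Asset b) { return a.score() - b.score(); }
            });

            for (Asset a : assets) {
                System.out.println(a.name + "  cost=" + a.cost + "  risk=" + a.risk + "  score=" + a.score());
            }
        }
    }

The real table would carry many more dimensions, but the principle is the same: make the cost and risk visible so the business leaders can make the final call.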


This is just a flavour of one session at the seminar. All in all, it was a very useful day with lots of valuable information.

Vista – final connections

A couple of weeks ago I described the pain of setting up a Vista machine – and, to be honest, most of that pain was simply because we are so familiar with XP machines and anything Vista did differently came as an unpleasant surprise. This blog brings you right up to date with events. Readers may be interested to know that I am now working on my Vista machine each day and this blog was written using it. I am gradually finding my way round it.
Anyway, the story so far: we bought a new Vista laptop for the office and assigned it to our office workgroup. We used the “Network and Sharing Center” to make sharing possible and turned off Norton so that the XP machines in our workgroup could access the new machine. We used Laplink to transfer everything from my old XP machine to the new Vista machine. All the applications seemed OK except Word, which wouldn’t run, so we hacked it to make it work. Now Word documents won’t open except from inside Word.


Historically, Xephon (www.xephonusa.com) used Macintoshes to produce its Update publications, and my XP machine used PC MacLAN to connect to the Macs. The last copy of MacLAN we bought came from a company called Miramar. It allowed the PC to access a Mac as a guest and copy files backwards and forwards. PC MacLAN is now owned by CA (ca.com), which acquired it in March 2004. CA – well, actually their very nice PR people – told us that PC MacLAN is not compatible with Vista and there are currently no plans to update the product. Oh dear. For reasons that are long and historical, the Macs are running Mac OS 9 and not OS X (Tiger). Does anyone know of a product that will connect Vista with an old-style Mac? As a consequence, I am left having to use my old XP machine in order to access the Macs.

My old computer had a parallel port on the back, which could be used to connect to a printer. In fact, mine connected to an Iomega ZIP drive and then to a printer. We used old 100MB ZIP drives because that’s what was connected to the Macs; they were used for a lot of our back-ups. Now, the first very obvious problem was that my new HP laptop does not have a parallel port! But even if it did, the Iomega Web site (www.iomega.com) lists all the operating systems that it has drivers for – DOS, Windows 3.1, Windows 2000, Windows XP, etc – but none for Vista. Yet again I have an older product for which there is no Vista support. So my old XP machine also has the ZIP drive still attached to it. My Vista machine is using an external hard drive for back-ups. We keep talking about network storage – it looks like we’ll really have to start using it.

So two failures with Vista, but the next thing was to connect the printer. Again, historically, we have had a printer attached to each computer so there is no waiting for output. I have a three-year-old HP Business Inkjet 1100 that prints on both sides and has separate colour cartridges. I was told it was a top-of-the-range machine when I bought it. But you’ll never guess what happened next! I went to Hewlett-Packard’s Web site (www.hp.com) to download the Vista drivers for this business-class printer, only to find that HP doesn’t have any! It says, “HP Product Is Not Supported in Microsoft Windows Vista”. I expect HP hadn’t heard about Microsoft’s new operating system until it was launched, and they’re busy catching up now!


So now my printer is left attached to my old XP machine, along with the ZIP drive and my connection to the Mac network. These three problems are not the fault of Microsoft; they are the fault of three other companies that do not want to support older products. I’m just glad I didn’t have the scanner attached to my computer – who knows whether there would be drivers for that!


I remember when legacy was used as a derogatory term for mainframe hardware and software. It now seems that three years is a very long time in the PC world. If you are planning to buy a Vista machine, I’d wait until all these third-party suppliers have caught up. I’d also wait until there is more expertise out there, so the migration process can be done in an afternoon rather than over two weeks.


And do I like using Vista? For ordinary work it is no different, but I do like the size of my new laptop’s screen. The Windows+Tab (Flip 3D) key combination looks impressive when I’m showing Vista to new people. The Search facility is just bizarre – it’s quicker for me to look where I think the file is than to wait for the Search facility not to find it!

DB2 9.1 for z/OS

Finally (ie as of 16 March 2007) mainframers can get their hands on DB2 9.1 and start to use the promised XML facilities that it has to offer.

IBM has been talking about Viper – the codename for DB2 9 – on Windows, Unix, and Linux servers for a while now (that version shipped in July 2006), but now it is available on mainframes (although, of course, the Windows, Unix, and Linux version could always run in a mainframe Linux partition).


The new big thing in this version is pureXML, which is IBM’s name for its XML facility that allows XML documents to be stored in such a way that the hierarchical information is retained and the XML document itself can be queried. Previously, IBM had two less-than-efficient approaches available. Users could store the whole XML document in a single database field, which meant queries couldn’t look inside the document. The alternative was an approach called shredding, in which the XML document is broken up into chunks and those chunks are stored in ordinary relational columns and rows. This approach made queries possible, but the data is no longer a hierarchical XML document that can be used elsewhere. Of course it could be recombined, but that uses more resources and puts the database under pressure.


So, pureXML allows the XML data to be stored natively, the data to be indexed, and SQL queries to search inside the document – a huge improvement for users. It is particularly beneficial for sites that are adopting Service-Oriented Architecture (SOA), because so much information is stored as XML to allow all the different applications, and even different systems, to work together successfully. It allows the formation of composite applications using the services made available from these applications and systems.
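As a small illustration of what querying inside a stored XML document looks like from an application, here is a JDBC sketch – the table, column, XPath, and connection details are all invented, and the exact SQL/XML syntax should be checked against the DB2 9 documentation:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch of querying an XML column in DB2 9 over JDBC.
    // Table, column, host, and credentials are invented for illustration.
    public class PureXmlQuerySketch {
        public static void main(String[] args) throws Exception {
            Class.forName("com.ibm.db2.jcc.DB2Driver");
            Connection con = DriverManager.getConnection(
                    "jdbc:db2://host.example.com:446/SAMPLE", "user", "password");

            Statement stmt = con.createStatement();
            // XMLEXISTS lets the WHERE clause look inside the stored XML document -
            // no shredding into relational columns is needed.
            ResultSet rs = stmt.executeQuery(
                "SELECT id, info FROM customers " +
                "WHERE XMLEXISTS('$d/customer[city=\"London\"]' PASSING info AS \"d\")");

            while (rs.next()) {
                // The document comes back intact, still hierarchical XML.
                System.out.println(rs.getInt(1) + "  " + rs.getString(2));
            }
            rs.close();
            stmt.close();
            con.close();
        }
    }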


Sales of DB2 9.1 for mainframes should also help IBM sell those zIIP co-processors. The System z9 Integrated Information Processor (zIIP) is a specialty engine directed toward data-serving workloads. It is designed to improve the price/performance of DB2 because eligible DB2 work (such as remote DRDA requests) can be offloaded to it from the general-purpose processors.


Also enhanced is the DB2 QMF (Query Management Facility), which has a new Web interface. And various DB2 tools have minor changes so they support DB2 V9.1 for z/OS. These include the DB2 Utilities Suite.


It looks like IBM is onto a winner with this.