Sunday, 3 August 2014

Business continuity planning

It seems strange talking about business continuity planning for mainframe sites because most of them created their plan back in the days when BCP was called DR (Disaster Recovery). And although, for mainframe sites, things don’t seem to have changed to any great extent in perhaps as much as 30 years, the truth is, they have. And it’s a good idea to re-evaluate the Business Continuity Plan now.

In fact, it’s probably a good idea to start from the beginning, in terms of planning, and see what systems you have in place that need to be available for the organization to continue in business, and how long you can afford to be ‘down’ for. It used to be a standing joke that non-mainframe sites had rooms full of servers running Linux and/or Windows and no-one knew exactly what ran on which hardware – and yet something similar can be the case with mainframes. There is nowadays quite a disconnect between what an end user views as a single transaction and how the subsystems see it. An end user may simply need to access some data – but, for that to happen, the transaction may start in CICS, access DB2 data, go back to CICS, involve IMS, go back to CICS, access some VSAM files, and finally end up in CICS again. So subsystem-level recovery can lead to confusion.

But let’s start at the beginning. What’s the first thing to do? Identify the business assets that need to be protected, then assess how business critical each process is and create a priority list. Next, identify the data and technology that each business process needs in order to run. Armed with that list, you can set objectives for their recovery, and design strategies and services that can be used to restore access to data for the applications and end users who need them. This is easier said than done, because it also has to be achieved within time frames that mean your organization stays in business.
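To make that prioritization step concrete, here’s a minimal sketch in Python. The process names, criticality rankings, and recovery time objectives (RTOs) are entirely hypothetical – in practice they come out of your own asset audit – but the shape of the exercise is the same: inventory each business process, note what it depends on, and sort by how quickly it must come back.

```python
from dataclasses import dataclass

@dataclass
class BusinessProcess:
    name: str
    criticality: int        # 1 = most business critical
    rto_hours: float        # recovery time objective: how long you can be 'down'
    systems: list           # the data and technology the process depends on

# Hypothetical inventory -- in practice this comes from your asset audit
processes = [
    BusinessProcess("customer payments", 1, 0.5, ["CICS", "DB2"]),
    BusinessProcess("order entry", 2, 4.0, ["CICS", "IMS", "VSAM"]),
    BusinessProcess("monthly reporting", 3, 72.0, ["DB2"]),
]

# Recover the most critical, tightest-deadline processes first
priority_list = sorted(processes, key=lambda p: (p.criticality, p.rto_hours))
for p in priority_list:
    print(f"{p.name}: restore {', '.join(p.systems)} within {p.rto_hours}h")
```

The useful output isn’t the code, of course – it’s the argument you have with the business about which number goes in the `criticality` column.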

You need to be able to run the applications, probably on new working processors; you need to get them talking to the latest version of the data; and you need to get your users connected to the applications. The options for how to do this range from cheap to hugely expensive. Like all insurance, you don’t want to have to make use of it, but when you do, you want it to cover everything. So what are your choices? You can do nothing – definitely the cheapest, until things go wrong, at which point the company probably goes out of business. You can use a service bureau or another site. This again is a bit like hoping nothing will go wrong, but if it does, you have some way of staying in business until you can get your own hardware up and running. You need to ensure the other site is not in the same building or even the same city – earthquakes and other natural disasters do happen. You could have a cold standby site. This is a dearer option, but once everything is powered up, you’re pretty much back in business. The dearest option is the hot standby site, where you basically copy everything to it as it happens on the main site. This hot site can take over running the business for you at a moment’s notice. If you’re a bank or similar, this is what you need. Your users just experience a small hiccough and carry on working. They connect to the new site without realizing anything has changed.

And that is your first big decision over with. The next step is to look at individual systems (such as IMS and CICS) and see how each of those can failover to the back-up site. Look into how you can ensure data is correct, and how in-flight tasks can have their data backed out and the whole task restarted. How quickly can communications be switched across to the back-up site? And what are the chances of both sites being hit by the same disaster?
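The backout-and-restart idea can be illustrated in miniature. This is a deliberately generic toy – it is not how CICS or IMS actually implement their logging – but it shows the before-image principle: journal the old value before each update, and if the task fails in-flight, restore the before-images in reverse order so the whole task can be restarted cleanly.

```python
def run_task(steps, state):
    """Apply each step; on failure, back out using journaled before-images."""
    journal = []
    try:
        for step in steps:
            key, new_value = step(state)
            journal.append((key, state.get(key)))   # record the before-image
            state[key] = new_value
        return True            # all steps applied: the task commits
    except Exception:
        while journal:         # back out updates in reverse order
            key, old_value = journal.pop()
            if old_value is None:
                state.pop(key, None)   # key didn't exist before the task
            else:
                state[key] = old_value
        return False           # caller may now restart the whole task

def failing_step(state):
    raise RuntimeError("site lost mid-flight")

# A task of three steps; the last one fails in-flight
account = {"balance": 100}
steps = [
    lambda s: ("balance", s["balance"] - 30),
    lambda s: ("pending", 30),
    failing_step,
]
ok = run_task(steps, account)
# ok is False and account is back to {"balance": 100}
```

The real subsystems do this with hardened logs, syncpoints, and coordination across resource managers, which is exactly why the cross-subsystem transaction described above is so much harder to recover than any single subsystem on its own.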

And then you need to practise the BCP and find out what you forgot to put in your plan. Which pieces of kit do you use that aren’t standard and can’t be replicated? There are so many things that can go wrong at each site because the set-up can be so different (while being superficially so similar). Who has access to the BCP? Who needs access to the BCP? What happens if a key person is doing a charity sleepover in some rundown part of town and hasn’t got a phone with them? What happens if your company is being attacked by terrorists, hacktivists, or disgruntled ex-employees?

There’s lots to consider. But the first step is to re-visit your Business Continuity Plan – and do it soon.

Sunday, 27 July 2014

Gamification of health

We all know playing games can be fun, and we all know staying healthy is important. (I write this, ironically, as I bite into a doughnut!) Doesn’t it make absolute sense to use the most effective parts of gamification to encourage people to live healthy lives – to eat more healthily, to take exercise, to keep their weight at a healthy level, to reduce anxiety, to ‘play’ their way out of depressive thoughts, to overcome a phobia, to beat those OCD habits, and so on? But is that a reality?

“Gaming to Engage the Healthcare Consumer”, published by ICF International, defines gamification as “the application of game elements and digital game design techniques to everyday problems such as business dilemmas and social challenges”. It cites the Gartner report, “Gamification: Engagement Strategies for Business and IT”, which predicts that by 2015, 50 percent of organizations will be using gamification of some kind, and that in 2016 businesses will spend $2.6 billion on it.

The ICF report suggests that the trend towards value-based care, the increasing role of the patient as consumer, and the millennial generation as desirable health insurance customers are driving healthcare organizations to look at gamification. And this is all made possible by the huge number of smartphones and tablets that potential gamers own.

However, the Gartner report’s headline figure was that 80 percent of current gamified applications will fail to meet business objectives, primarily because of poor design. It goes on to say: “While game mechanics such as points and badges are the hallmarks of gamification, the real challenge is to design player-centric applications that focus on the motivations and rewards that truly engage players more fully. Game mechanics like points, badges, and leader boards are simply the tools that implement the underlying engagement models.” Keeping players engaged – what’s called ‘stickiness’ in the trade – is a big challenge for any company gamifying health.

So, what healthy games are available? According to “From Fitbit to Fitocracy: The Rise of Health Care Gamification”, UnitedHealth Group has OptumizeMe, an app that lets users engage in fitness-related contests with their friends. They’re also testing Join For Me, an app encouraging obese teenagers at risk of developing diabetes to play video games that require dancing or other physical activities. MeYou Health has a rewards program for people who complete one health-related task per day.

GymPact uses GPS to track its users to the gym. Members meeting their workout goals win cash, which comes from people paying penalties for failing to exercise as promised. Fitbit has wireless tracking devices that sync to smartphones and computers, allowing users to track their fitness activities. Fitocracy is a social network, where people track their workouts, challenge friends to exercise contests, and earn recognition for meeting goals. SuperBetter Labs is beta testing an online social game designed to help people coping with illnesses, injuries, or depression.

Tom Chivers’ blog lists Runkeeper, Nike Run, and Fitocracy as apps that reward you for taking exercise, with extra points for the number of steps taken. There’s DietBet and Skinnyo for weight loss and calorie counting. Sleep Cycle encourages you to get more and better sleep. He suggests that there are apps to make a game of physiotherapy, apps for people with autism, for people with dyslexia, even for pain management for burns. The NHS in the UK has a BMI calculator app.

And that’s pretty much where we are now. Everyone thinks gamification is a great idea for making mundane activities more fun. But, and this is a big ‘but’, just saying something is gamified doesn’t mean that people will come back and use it again and again. We’ve all got apps on our phones and tablets that seemed like a good idea to download at the time, and that haven’t been used much since. A good games app has to engage people. In addition, people have to get some value out of it, such as better health. It would be nice to think that after using the app people have learned something or modified their behaviour in a positive way. Finding programmers who can make this happen is also a challenge.

If only Angry Birds helped you lose weight, cut down on your alcohol consumption, and take more exercise! But if someone finds a way to achieve that, they are on to a winner that we’ll all benefit from.

Sunday, 20 July 2014

IBM and Apple deal

What a surprise! IBM and Apple have announced that they are working together. Who’d have thought it? Would it have happened under Steve Jobs’ leadership? Will it work? The two companies are planning to co-develop business-centric apps for the iPhone and iPad. And IBM is going to sell Apple’s mobile devices, pre-installed with the new software, to its business clients.

People are suggesting that IBM now has special access rights to certain security features on the devices – access that other companies don’t have. As a consequence, IBM can supply apps and services that are similar in behaviour to what users of Microsoft devices would expect. What hasn’t been made clear is what the financial arrangements are and what apps are going to be produced.

It seems that the deal is one that favours Apple. After all, they have a smaller share of the smartphone and tablet market worldwide than Android. According to IDC, Android will have about 60 percent of the smartphone market and Apple less than 20 percent. And Gartner are suggesting that Android has over 60 percent of the tablet market, with Apple’s share, at about 30 percent, shrinking year-on-year. And, after all the things Apple have said over the years, it seems an unlikely combination.

Maybe mainframe users will choose to use an Apple tablet and boost the flagging Apple sales that way. It seems hardly likely that a tablet user will rush out and buy a mainframe! Hence my conclusion that the relationship is very asymmetrical and favours Apple hugely more than IBM. Or, thinking the unthinkable (again), is Big Blue looking to take over Apple at some stage in the future – feeling that it can provide customers with an alternative to Microsoft and Android?

Or, perhaps, IBM looked in the mirror and saw itself 50 years ago, being able to dictate what software ran on its hardware and generally disregarding what every other company was doing as it stood in powerful isolation. And we know how that turns out.

I could make a prediction here that in three years’ time Apple will be a division of IBM. I could make a prediction, but predictions are notoriously unreliable. For example, Steve Ballmer, writing in USA Today, 30 April 2007, said: “There’s no chance that the iPhone is going to get any significant market share”. Or Thomas Watson, chairman of IBM, who in 1943 said: “I think there is a world market for maybe five computers”.

There are more of these unfortunate predictions. Ken Olson, president, chairman, and founder of Digital Equipment Corp in 1977 said: “There is no reason anyone would want a computer in their home”. Or Bill Gates, who in 1981 is meant to have said: “640K ought to be enough for anybody” – although that one probably isn’t true.

Robert Metcalfe, the inventor of Ethernet, writing in InfoWorld magazine in December 1995, said: “I predict the Internet will soon go spectacularly supernova and in 1996 catastrophically collapse”. An engineer at the Advanced Computing Systems Division of IBM, in 1968, said about the microchip: “But what is it good for?” Or the editor in charge of business books for Prentice Hall, who said in 1957: “I have travelled the length and breadth of this country and talked with the best people, and I can assure you that data processing is a fad that won’t last out the year”.

These are predictions that are right up there with H M Warner of Warner Brothers in 1927 saying: “Who the hell wants to hear actors talk?” Or Decca Recording Co rejecting the Beatles in 1962 by saying: “We don’t like their sound, and guitar music is on the way out”. And, of course, journalist Stewart Alsop Jr back in 1991, predicting that the last mainframe would be unplugged by 15 March 1996.

Best not to make predictions, or at least not to publish them, don’t you think? But are we looking at Apple’s last days as an independent company?

Sunday, 13 July 2014

400 blogs

This is my 400th blog on this site, and I thought it was enough of a milestone to deserve some sort of recognition. And I thought it would be an opportunity to look back on all the things that have happened since that very first blog back in June 2006. In truth, I have published some guest blogs – so not all 400 have been written by me. But, I’ve also written blogs that have been published under other people’s names on a variety of sites, and I’ve had nearly 40 blogs published on the Destination z Web site.

Back in 2006, I was doing a lot of work for Xephon – I was producing and editing those much-loved Update journals. You probably remember MVS (later z/OS) Update, CICS Update – the very first one – DB2 Update, SNA (later TCP/SNA) Update, RACF Update, and WebSphereMQ Update. My very first blog was on the Mainframe Weekly blog site and was called “What’s going on with CICS?”. The first paragraph read:

What do I mean, what’s going on with CICS? Well, CICS used to be the dynamic heart of so many companies – it was the subsystem that allowed the company to make money – and as such there were lots of third parties selling add-ons to CICS to make it work better for individual organizations.

And over the months that followed, I talked about AJAX, Web 2.0, Project ECLipz, Aglets (DB2 agent applets), social networking, back-ups and archives, new versions of CICS, DB2, and IMS, and significant birthdays for software. I blogged about mash-ups using IMS, I gave a number of CSS tips, I wrote about BPEL, I even discussed PST files and the arrival of the Chrome browser. And back in November 2008 I first looked at cloud computing.

In 2009 I talked about CICS Explorer, Twitter, cloud computing, specialty processors, zPrime, mainframe apprentices, that year’s GSE conference, IBM’s London Analytics Solution Centre, more anniversaries and software updates, and much more.

2010 saw more blogs about the recession, IBM versus Oracle, social media, Linux, clouds, performance, the zEnterprise, some thoughts about SharePoint, Android, and connecting to your mainframe from a phone, SyslogD, the GSE conference, and lots of other thoughts on the events of the year.

2011 had a lot of blogs about cloud computing and virtual user groups, as well as more about SharePoint. The SharePoint blogs were also published on the Medium Sized Business Blog part of TechNet Blogs. I also had a serious look at tablets. And wrote the “What’s a mainframe, Daddy?” blog. I had a look at IMS costs, mainframe maintenance, and Web 3.0 and Facebook (with the use of OpenGraph). I also examined gamification and augmented reality and what they meant for the future of software.

In 2012 I mentioned IBM Docs, how to create an e-book, BYOD (Bring Your Own Device), operating systems on a memory stick, cloud wars, and using the Dojo Toolkit to make the end user experience of CICS nicer, and more friendly (of course). There was talk of RIM, Hadoop, IOD, and Lotus.

2013 saw quite a few blogs about big data. My Lync and Yammer blog was republished on the IT Central Web site. And I looked at social media, bitcoins, and push technology, as well as IBM’s new mainframe and much else.

So far in 2014, we’ve covered more about big data and enterprise social networks, we’ve looked at NoSQL, Software Defined everything, and our old friends REXX and VM/CMS, and a lot more besides.

Over the years there have been frequent blogs about the Arcati Mainframe Yearbook, and in particular its user survey results.

Are the blogs any good? Well, over the years they have gained various awards and quite a few have been republished on a number of different Web sites, where they’ve been getting positive reviews and plenty of hits.

You can read my blogs on several sites. You can also follow me on Twitter, or on Facebook – and we appreciate you ‘LIKEing’ the page.

What about the future? The blogs will continue and, as usual, I’ll mainly focus on what’s happening in the mainframe industry. But I think it’s important to take a wider view, keep abreast of new IT technologies and ideas as they happen, and try to put them in context and give my evaluation of them.

If you have read all 400 – thank you. If this is the first one you’ve read, then hopefully you’ll be back again next week for more!

Trevor Eddolls
IBM Champion

Sunday, 6 July 2014

Inside Big Data

Everyone is talking about big data, but the things you hear people say aren’t always strictly accurate. Adaptive Computing’s Al Nugent, who co-wrote “Big Data for Dummies” (Wiley, 2013), has written a blog called “Big Data: Facts and Myths” – I thought it would be interesting to hear what he has to say.

He says: “there has been an explosion in the interest around big data (and big analytics and big workflow). While the interest, and concomitant marketing, has been exploding, big data implementations have proceeded at a relatively normal pace.” He goes on to say: “One fact substantiated by the current adoption rate is big data is not a single technology but a combination of old and new technologies and that the overarching purpose is to provide actionable insights. In practice, big data is the ability to manage huge volumes of disparate data, at the right speed and within the right time frame to allow real-time analysis and reaction. The original characterization of big data was built on the 3 Vs:

  • Volume: the sheer amount of data
  • Velocity: how fast data needs to be ingested or processed
  • Variety: how diverse is the data? Is it structured, unstructured, machine data, etc.

“Another fact is the limitation of this list. Over the course of the past year or so others have chosen to expand the list of Vs. The two most common add-ons are Value and Visualization. Value, sometimes called Veracity, is a measure of how appropriate the data is in the analytical context and is it delivering on expectations. How accurate is that data in predicting business value? Do the results of a big data analysis actually make sense? Visualization is the ability to easily ‘see’ the value. One needs to be able to quickly represent and interpret the data and this often requires sophisticated dashboards or other visual representations.

“A third fact is big data, analytics and workflow is really hard. Since big data incorporates all data, including structured data and unstructured data from e-mail, social media, text streams, sensors, and more, basic practices around data management and governance need to adapt. Sometimes, these changes are more difficult than the technology changes.

“One of the most popular myths is the ‘newness’ of big data. For many in the technology community, big data is just a new name for what they have been doing for years. Certainly some of the fundamentals are different, but the requirement to make sense of large amounts of information and present it in a manner easily consumable by non-technology people has been with us since the beginning of the computer era.

“Another myth is a derivative of the newness myth: you need to dismiss the ‘old database’ people and hire a whole new group of people to derive value from the adoption of big data. Even on the surface this is foolhardy. Unless one has a green field technology/business environment, the approach to staffing will be hybridized. The percentage of new to existing will vary based on the size of the business, customer base, transaction levels, etc.

“Yet another myth concerns the implementation rate of big data projects. There are some who advocate dropping in a Hadoop cluster and going for it. ‘We have to move fast! Our competition is outpacing us!’ While intrepid, this is doomed to failure for reasons too numerous for this writing. Like any other IT initiative, the creation of big data solutions need to be planned, prototyped, designed, tested, and deployed with care.”

I thought Al’s comments were very interesting and worth sharing. You can find out more at Adaptive Computing’s Web site.

Sunday, 29 June 2014

Thinking the unthinkable – alternatives to Microsoft

Office 365, with its cloud-based solution to all your office needs, seems like a mature and all-encompassing way of moving IT forward at many organizations. But what if your organization isn’t big enough to justify the price tag of the Enterprise version of Office 365 – what if you’re a school, for example? What other choices do you have? Well, let’s take a look at some cheaper and Open Source alternatives.

The obvious first place to look is Google Apps for Business. It’s not free: it costs $5 per user per month, or $50 per user per year. Google’s definition of a user is a distinct Gmail inbox. Everyone gets 30GB of Drive space, as well as Docs, Sheets, and Slides. Documents created in Drive can be shared individually or across the organization. Google Sites lets you create Web sites. Google Apps Vault is used to keep track of information created, stored, sent, and received through an organization’s Google Apps programs. You can access Apps for Business from mobile devices using Google’s mobile apps. One of the best of these is arguably the QuickOffice app for Android and iOS, which allows users to edit Microsoft Office files stored in Drive. If you are a school, Google Apps for Education is completely free and has the new ‘Classroom’ product coming soon.

There’s also Zoho, which provides the standard office tools as well as a Campaigns tool, which lets you create e-mail and social campaigns for forthcoming events. Then there’s Free Office, which needs Java. And there’s OX, which offers files, e-mail, address book, calendar, tasks, plus a social media portal.

If you’re looking for an alternative to Outlook, there’s obviously Gmail, Yahoo, and Outlook (Hotmail). For Open Source alternatives, there’s Thunderbird, which comes with numerous add-ons, eg Lightning, its calendar software. There’s also Zimbra Desktop. It can access e-mail and data from VMware Zimbra, cloud e-mail like Gmail, and social networks like Facebook. And there’s Incredimail, but it doesn’t have a calendar. And finally there’s the Opera e-mail client.

If you want an alternative to SharePoint then, probably, your first choice is Google Cloud Connect. This simple plug-in connects you to Google Docs and lets you collaborate with other people. Edited documents are automatically synced and sent to other team members. Or you might look at Alfresco, a free platform that allows users to collaborate on documents and interact with others. There’s also Samepage, which comes with a paid-for option. Or you could try Liferay Social Office Community Edition (CE), a downloadable desktop application. And there’s Nuxeo’s Open Source CMS.

If you’re looking for an intranet, then there’s MindTouch Core, which seems to get good reviews. Alternatives include PBWiki, a hosted wiki that isn’t free for businesses. There’s GlassCubes, an online collaborative workspace, which again has a cost implication. There’s Plone, a content management system built on Zope. There’s also Brushtail, HyperGate, Open Atrium, and Twiki.

The bottom line is that if you have power users, who are using lots of the features of Word, Excel, and PowerPoint, then you need those products. If your users are only scratching the surface of what you can do with Microsoft’s Office suite, then it makes sense to choose a much cheaper alternative. If you can use alternatives to Office, then you can probably start to think about using alternatives to other Microsoft products. Perhaps you can live without Outlook for e-mail and calendar. Maybe you’ve never really made the most of SharePoint and you could use an Open Source alternative for file sharing and running your intranet.

The issue is this: you can save huge amounts of money by using Open Source products rather than Office 365, but you will need to spend time learning how to use each ‘best of breed’ alternative and how to integrate it with the other products. That will take up someone’s time. Once you’ve weighed up the pros and cons, you can make a decision: keep the faith, stay with Microsoft, and have a great CV for another job using Microsoft products; or save money and spend lots of time as you take your first steps into the wilderness. What you’ll find is that the wilderness is quite full of people who’ve also stepped away from the easy choice.

What will you do?

Sunday, 22 June 2014

Plus ça change

I’ve worked with mainframes for over 30 years and I’m used to seeing trends move in one direction and then, a few years later, go in the opposite direction. Each initiative gets sold to us as something completely new and the solution that we’ve been waiting for. I imagine you share my experience. I originally worked on green screens with all the processing taking place on the mainframe. In fact, I can remember decks of cards being punched and fed into the mainframe. I can remember the excitement of everyone having their own little computer when PCs first came out. I can remember client/server being the ultimate answer to all our computing issues. Outsourcing, then not outsourcing – we could wander down Memory Lane like this for a long time.

What always amazes me is when I’m working with sites that are predominantly Windows-based, and they still get that frisson of excitement over an idea that I think is pretty commonplace. It was only a few years ago (well, maybe about five) that the Windows IT teams were all excited about VMware and the ability to virtualize hardware. They couldn’t believe mainframes had been doing that since the 1960s.

Then there was the excitement about using Citrix and giving users simple Linux terminals rather than more expensive PCs. Citrix have a host of products, including GoToMeeting – their conferencing software. With Citrix desktop solutions, all the applications live on the server rather than on each individual computer. It means you can launch a browser on your laptop, smartphone, tablet, or whatever device you like that has a browser, and see a Windows-looking desktop and all the usual applications. So, it’s just like a dumb terminal connecting to a mainframe, which does all the work and looks after all the data storage. Nothing new there!

And now Microsoft are selling Office 365, which, once you’ve paid your money, means that all the applications live in the cloud somewhere, and so does the data. It seems that all subscribers are like remote users, dialling into an organization’s mainframe that could be located in a different country or on a different continent. Looked at another way, IT departments are in many ways outsourcing their responsibilities – and we all remember when outsourcing was on everyone’s mind.

Office 365 seems like a very mature product and one whose time is about to come. You get more than just the familiar Office products like Word and Excel. You get SharePoint, Lync, and Exchange (and I’m talking about the Enterprise version of Office 365). Lync lets users chat to each other – a bit like MSN Messenger used to. And SharePoint provides you with an intranet as well as file and document management capabilities. You get Outlook, Publisher (my least-favourite piece of software), Access (the database), and InfoPath (used for electronic forms). You also get a nicely integrated Yammer – Microsoft’s Enterprise Social Networking (ESN) tool. There’s also Power BI, a suite of business intelligence and self-service data mining tools, coming soon. This will integrate with Excel, so users can use the Power Query tool to create spreadsheets and graphs using public and private data, and also perform geovisualization with Bing Maps data using the Power Map tool.

And while the actual tools available on these different platforms and computing models differ over time, it’s the computing concepts that I’m suggesting come and go, and come and go again! It’s like a battle between centralization and decentralization. Everyone likes to have that computing power on their phone or tablet, but whenever you need to do some real work, you connect (usually using a browser) to a distant computing monolith. So, plus ça change, plus c’est la même chose.