Sunday, 20 July 2014

IBM and Apple deal

What a surprise! IBM and Apple have announced that they are working together. Who’d have thought it? Would it have happened under Steve Jobs’ leadership? Will it work? The two companies are planning to co-develop business-centric apps for the iPhone and iPad. And IBM is going to sell Apple’s mobile devices, pre-installed with the new software, to its business clients.

People are suggesting that IBM now has special access to certain security features on the devices – access that other companies don’t get – and that, as a consequence, IBM can supply apps and services similar in behaviour to what users of Microsoft devices would expect. What hasn’t been made clear is what the financial arrangements are and what apps are going to be produced.

It seems that the deal is one that favours Apple. After all, Apple has a smaller share of the worldwide smartphone and tablet market than Android. According to IDC, Android will have about 60 percent of the smartphone market and Apple less than 20 percent. And Gartner are suggesting that Android has over 60 percent of the tablet market, with Apple shrinking year-on-year to about 30 percent. And, after all the things Apple have said over the years, it seems an unlikely combination.

Maybe mainframe users will choose to use an Apple tablet and boost the flagging Apple sales that way. It seems hardly likely that a tablet user will rush out and buy a mainframe! Hence my conclusion that the relationship is very asymmetrical and favours Apple far more than IBM. Or, thinking the unthinkable (again), is Big Blue looking to take over Apple at some stage in the future – feeling that it can provide customers with an alternative to Microsoft and Android?

Or, perhaps, IBM looked in the mirror and saw itself 50 years ago, able to dictate what software ran on its hardware and generally disregarding what every other company was doing as it stood in powerful isolation. And we know how that turned out.

I could make a prediction here that in three years’ time Apple will be a division of IBM. I could make a prediction, but predictions are notoriously unreliable. For example, Steve Ballmer, quoted in USA Today, 30 April 2007, said: “There’s no chance that the iPhone is going to get any significant market share”. Or Thomas Watson, chairman of IBM, who in 1943 said: “I think there is a world market for maybe five computers”.

There are more of these unfortunate predictions. Ken Olson, president, chairman, and founder of Digital Equipment Corp in 1977 said: “There is no reason anyone would want a computer in their home”. Or Bill Gates, who in 1981 is meant to have said: “640K ought to be enough for anybody” – although that one probably isn’t true.

Robert Metcalfe, the inventor of Ethernet, writing in InfoWorld magazine in December 1995, said: “I predict the Internet will soon go spectacularly supernova and in 1996 catastrophically collapse”. An engineer at the Advanced Computing Systems Division of IBM, in 1968, said about the microchip: “But what...is it good for?” Or the editor in charge of business books for Prentice Hall said in 1957: “I have travelled the length and breadth of this country and talked with the best people, and I can assure you that data processing is a fad that won’t last out the year”.

These are predictions that are right up there with H M Warner of Warner Brothers in 1927 saying: “Who the hell wants to hear actors talk?” Or Decca Recording Co rejecting the Beatles in 1962 by saying: “We don’t like their sound, and guitar music is on the way out”. And, of course, journalist Stewart Alsop Jr back in 1991, predicting that the last mainframe would be unplugged by 15 March 1996.

Best not to make predictions, or at least not to publish them, don’t you think? But are we looking at Apple’s last days as an independent company?

Sunday, 13 July 2014

400 blogs

This is my 400th blog on this site, and I thought it was enough of a milestone to deserve some sort of recognition. And I thought it would be an opportunity to look back on all the things that have happened since that very first blog back in June 2006. In truth, I have published some guest blogs – so not all 400 have been written by me. But I’ve also written blogs that have been published under other people’s names on a variety of sites, and I’ve had nearly 40 blogs published on the Destination z Web site.

Back in 2006, I was doing a lot of work for Xephon – I was producing and editing those much-loved Update journals. You probably remember MVS (later z/OS) Update, CICS Update – the very first one – DB2 Update, SNA (later TCP/SNA) Update, RACF Update, and WebSphere MQ Update. My very first blog was on the Mainframe Weekly blog site and was called “What’s going on with CICS?”. The first paragraph read:

What do I mean, what’s going on with CICS? Well, CICS used to be the dynamic heart of so many companies – it was the subsystem that allowed the company to make money – and as such there were lots of third parties selling add-ons to CICS to make it work better for individual organizations.

And over the months that followed, I talked about AJAX, Web 2.0, Project ECLipz, Aglets (DB2 agent applets), social networking, back-ups and archives, new versions of CICS, DB2, and IMS, and significant birthdays for software. I blogged about mash-ups using IMS, I gave a number of CSS tips, I wrote about BPEL, I even discussed PST files and the arrival of the Chrome browser. And back in November 2008 I first looked at cloud computing.

In 2009 I talked about CICS Explorer, Twitter, cloud computing, specialty processors, zPrime, mainframe apprentices, that year’s GSE conference, IBM’s London Analytics Solution Centre, more anniversaries and software updates, and much more.

2010 saw more blogs about the recession, IBM versus Oracle, social media, Linux, clouds, performance, the zEnterprise, some thoughts about SharePoint, Android, and connecting to your mainframe from a phone, SyslogD, the GSE conference, and lots of other thoughts on the events of the year.

2011 had a lot of blogs about cloud computing and virtual user groups, as well as more about SharePoint. The SharePoint blogs were also published on the Medium Sized Business Blog part of TechNet Blogs (http://blogs.technet.com/b/mediumbusiness/). I also had a serious look at tablets, and wrote the “What’s a mainframe, Daddy?” blog. I had a look at IMS costs, mainframe maintenance, and Web 3.0 and Facebook (with the use of OpenGraph). I also examined gamification and augmented reality and what they meant for the future of software.

In 2012 I mentioned IBM Docs, how to create an e-book, BYOD (Bring Your Own Device), operating systems on a memory stick, cloud wars, and using the Dojo Toolkit to make the end-user experience of CICS nicer and friendlier (of course). There was talk of RIM, Hadoop, IOD, and Lotus.

2013 saw quite a few blogs about big data. My Lync and Yammer blog was republished on the IT Central Web site. And I looked at social media, bitcoins, and push technology, as well as IBM’s new mainframe and much else.

So far in 2014, we’ve covered more about big data and enterprise social networks, we’ve looked at NoSQL, Software Defined everything, and our old friends REXX and VM/CMS, and a lot more besides.

Over the years there have been frequent blogs about the Arcati Mainframe Yearbook, and in particular its user survey results.

Are the blogs any good? Well, over the years they have gained various awards and quite a few have been republished on a number of different Web sites, where they’ve been getting positive reviews and plenty of hits.

You can read my blogs at mainframeupdate.blogspot.com, and it.toolbox.com/blogs/mainframe-world/. You can follow on Twitter at twitter.com/t_eddolls, or on Facebook at fb.com/iTechEd – and we appreciate you ‘LIKEing’ the page.

What about the future? The blogs will continue and, as usual, I’ll focus mainly on what’s happening in the mainframe industry. But I think it’s important to take a wider view too: to keep abreast of new IT technologies and ideas as they happen, try to put them in context, and give my evaluation of them.

If you have read all 400 – thank you. If this is the first one you’ve read, then hopefully you’ll be back again next week for more!

Trevor Eddolls
IBM Champion

Sunday, 6 July 2014

Inside Big Data

Everyone is talking about big data, but the things you hear people say aren’t always strictly accurate. Adaptive Computing’s Al Nugent, who co-wrote “Big Data for Dummies” (Wiley, 2013), has written a blog called “Big Data: Facts and Myths” at http://www.adaptivecomputing.com/blog-hpc/big-data-facts-myths/ – I thought it would be interesting to hear what he has to say.

He says: “there has been an explosion in the interest around big data (and big analytics and big workflow). While the interest, and concomitant marketing, has been exploding, big data implementations have proceeded at a relatively normal pace.” He goes on to say: “One fact substantiated by the current adoption rate is big data is not a single technology but a combination of old and new technologies and that the overarching purpose is to provide actionable insights. In practice, big data is the ability to manage huge volumes of disparate data, at the right speed and within the right time frame to allow real-time analysis and reaction. The original characterization of big data was built on the 3 Vs:

  • Volume: the sheer amount of data
  • Velocity: how fast data needs to be ingested or processed
  • Variety: how diverse is the data? Is it structured, unstructured, machine data, etc.

“Another fact is the limitation of this list. Over the course of the past year or so others have chosen to expand the list of Vs. The two most common add-ons are Value and Visualization. Value, sometimes called Veracity, is a measure of how appropriate the data is in the analytical context and is it delivering on expectations. How accurate is that data in predicting business value? Do the results of a big data analysis actually make sense? Visualization is the ability to easily ‘see’ the value. One needs to be able to quickly represent and interpret the data and this often requires sophisticated dashboards or other visual representations.

“A third fact is big data, analytics and workflow is really hard. Since big data incorporates all data, including structured data and unstructured data from e-mail, social media, text streams, sensors, and more, basic practices around data management and governance need to adapt. Sometimes, these changes are more difficult than the technology changes.

“One of the most popular myths is the ‘newness’ of big data. For many in the technology community, big data is just a new name for what they have been doing for years. Certainly some of the fundamentals are different, but the requirement to make sense of large amounts of information and present it in a manner easily consumable by non-technology people has been with us since the beginning of the computer era.

“Another myth is a derivative of the newness myth: you need to dismiss the ‘old database’ people and hire a whole new group of people to derive value from the adoption of big data. Even on the surface this is foolhardy. Unless one has a green field technology/business environment, the approach to staffing will be hybridized. The percentage of new to existing will vary based on the size of the business, customer base, transaction levels, etc.

“Yet another myth concerns the implementation rate of big data projects. There are some who advocate dropping in a Hadoop cluster and going for it. ‘We have to move fast! Our competition is outpacing us!’ While intrepid, this is doomed to failure for reasons too numerous for this writing. Like any other IT initiative, the creation of big data solutions need to be planned, prototyped, designed, tested, and deployed with care.”

I thought Al’s comments were very interesting and worth sharing. You can find out more at Adaptive Computing’s Web site.

Sunday, 29 June 2014

Thinking the unthinkable – alternatives to Microsoft

Office 365, with its cloud-based solution to all your office needs, seems like a mature and all-encompassing way of moving IT forward at many organizations. But what if your organization isn’t big enough to justify the price tag of the Enterprise version of Office 365 – what if you’re a school, for example? What other choices do you have? Well, let’s take a look at some of the cheaper and Open Source alternatives.

The obvious first place to look is Google Apps for Business. It’s not free: it costs $5 per user per month, or $50 per user per year, and Google’s definition of a user is a distinct Gmail inbox. Everyone gets 30GB of Drive space, as well as Docs, Sheets, and Slides. Documents created in Drive can be shared individually or across the organization. Google Sites lets you create Web sites. Google Apps Vault keeps track of information created, stored, sent, and received through an organization’s Google Apps programs. You can access Apps for Business from mobile devices using Google’s mobile apps; one of the best for this is arguably QuickOffice for Android and iOS, which allows users to edit Microsoft Office files stored in Drive. If you are a school, Google Apps for Education is completely free and has the new ‘Classroom’ product coming soon.

There’s also Zoho, which provides the standard office tools as well as a Campaigns tool, which lets you create e-mail and social campaigns for forthcoming events. Then there’s Free Office, which needs Java. And there’s OX, which offers files, e-mail, address book, calendar, tasks, plus a social media portal.

If you’re looking for an alternative to Outlook, there’s obviously Gmail, Yahoo, and Outlook (Hotmail). For Open Source alternatives, there’s Thunderbird, which comes with numerous add-ons, eg Lightning, its calendar software. There’s also Zimbra Desktop, which can access e-mail and data from VMware Zimbra, cloud e-mail like Gmail, and social networks like Facebook. And there’s IncrediMail, but it doesn’t have a calendar. And finally there’s the Opera Mail client.

If you want an alternative to SharePoint then, probably, your first choice is Google Cloud Connect. This simple plug-in connects you to Google Docs and lets you collaborate with other people. Edited documents are automatically sync’ed and sent to other team members. Or you might look at Alfresco, a free platform that allows users to collaborate on documents and interact with others. There’s also Samepage, which comes with a paid-for option. Or you could try Liferay Social Office Community Edition (CE), which is a downloadable desktop application. And there’s Nuxeo Open-Source CMS.

If you’re looking for an intranet, then there’s MindTouch Core, which seems to get good reviews. Alternatives include PBwiki, a hosted wiki, which isn’t free for businesses. There’s Glasscubes, an online collaborative workspace, which, again, has a cost implication. There’s Plone, a content management system built on Zope. There’s also Brushtail, HyperGate, Open Atrium, and TWiki.

The bottom line is that if you have power users, who are using lots of the features of Word, Excel, and PowerPoint, then you need those products. If your users are only scratching the surface of what you can do with Microsoft’s Office suite, then it makes sense to choose a much cheaper alternative. If you can use alternatives to Office, then you can probably start to think about using alternatives to other Microsoft products. Perhaps you can live without Outlook for e-mail and calendar. Maybe you’ve never really made the most of SharePoint and you could use an Open Source alternative for file sharing and running your intranet.

The issue is this: you can save huge amounts of money by using Open Source products rather than Office 365, but you will need to spend time learning how to use each ‘best of breed’ alternative and how to integrate it with the other products. That will take up someone’s time. Once you’ve weighed up the pros and cons, you can make a decision about whether to keep the faith and stay with Microsoft – and have a great CV for another job using Microsoft products – or whether to save money and spend lots of time as you take your first steps into the wilderness. But what you’ll find is that the wilderness is quite full of people who’ve also stepped away from the easy choice.

What will you do?

Sunday, 22 June 2014

Plus ça change

I’ve worked with mainframes for over 30 years and I’m used to seeing trends moving in one direction and then, a few years later, going in the opposite direction. Each initiative gets sold to us as something completely new and the solution that we’ve been waiting for. I imagine you share my experience. I originally worked on green screens with all the processing taking place on the mainframe. In fact, I can remember decks of cards being punched and fed into the mainframe. I can remember the excitement of everyone having their own little computer when PCs first came out. I can remember client/server being the ultimate answer to all our computing issues. Outsourcing, not outsourcing – we could wander down Memory Lane like this for a long time.

What always amazes me is when I’m working with sites that are predominantly Windows-based, and they still get that frisson of excitement over an idea that I think is pretty commonplace. It was only a few years ago (well maybe about five) that the Windows IT teams were all excited about VMware and the ability to virtualize hardware. They couldn’t believe mainframes had been doing that since the 1960s.

Then there was the excitement about using Citrix and giving users simple Linux terminals rather than more expensive PCs. Citrix have a host of products, including GoToMeeting – their conferencing software. With Citrix desktop solutions, all the applications live on the server rather than on each individual computer. It means you can launch a browser on your laptop, smartphone, tablet, or whatever device you like that has a browser, and see a Windows-looking desktop and all the usual applications. So, it’s just like a dumb terminal connecting to a mainframe, which does all the work and looks after all the data storage. Nothing new there!

And now Microsoft are selling Office 365, which, once you’ve paid your money, means that all the applications live in the cloud somewhere, and so does the data. It seems that all subscribers are like remote users, dialling into an organization’s mainframe that could be located in a different country or on a different continent. Looked at another way, IT departments are in many ways outsourcing their responsibilities – and we all remember when outsourcing was on everyone’s mind.

Office 365 seems like a very mature product and one whose time is about to come. You get more than just the familiar Office products like Word and Excel. You get SharePoint, Lync, and Exchange (and I’m talking about the Enterprise version of Office 365). Lync lets users chat to each other – a bit like MSN Messenger used to. And SharePoint provides you with an intranet as well as file and document management capabilities. You get Outlook, Publisher (my least-favourite piece of software), Access (the database), and InfoPath (used for electronic forms). You also get a nicely integrated Yammer – Microsoft’s Enterprise Social Networking (ESN) tool. There’s also Power BI, a suite of business intelligence and self-service data mining tools, coming soon. This will integrate with Excel, so users can use the Power Query tool to create spreadsheets and graphs using public and private data, and also perform geovisualization with Bing Maps data using the Power Map tool.

And while the actual tools available on these different platforms and computing models differ over time, it’s the computing concepts, I’m suggesting, that come and go – and come and go again! It’s like a battle between centralization and decentralization. Everyone likes to have that computing power on their phone or tablet, but whenever you need to do some real work, you connect (usually using a browser) to a distant computing monolith. So, plus ça change, plus c’est la même chose.

Sunday, 15 June 2014

Having your cake and eating it

Everyone knows that mainframes are the best computers you can have. You can run them locally, you can hide them in the cloud, and you can link them together into a massive processing network. But we also know that there are smaller platforms out there that work differently. Wouldn’t it be brilliant if you could run them all from one place?

Last summer we were excited by the announcement from IBM of its new zBC12 mainframe computer. The zBC12 followed the previous year’s announcement of the zEC12 (Enterprise Class), and 2011 saw the z114, with 2010 giving us the z196. So what’s special about those mainframes?

Well, in addition to IFL, zIIP, and zAAP specialty processors, and massive amounts of processing power, they came with the IBM zEnterprise BladeCenter Extension (zBX), which lets users combine workloads designed for mainframes with those for POWER7 and x86 chips, like Microsoft Windows Server. So let’s unpick this a little.

One issue many companies have after years of mergers and acquisitions is a mixed bag of operating systems and platforms. They could well have server rooms across the world and not really know what is running on those servers.

IBM’s first solution to this problem was Linux on System z. Basically, a company could take hundreds of Linux servers and consolidate them onto a single mainframe. They would save on energy to drive the servers and cool them, and they would get control of their IT back.

IBM’s second solution, as we’ve just described, was to incorporate the hardware for Linux and Windows servers in its mainframe boxes. You’d just plug in the blades that you needed and you had full control over your Windows servers again (plus all the benefits of having a mainframe).

But what about if you could actually run Windows on your mainframe? That was the dream of some of the people at Mantissa Corporation. They did a technology demo at SHARE in August 2009. According to Mantissa’s Jim Porell, Network World read their abstract and incorrectly assumed that they were announcing a product – which they weren’t. The code is still in beta. But think about what it could mean: running all your Windows servers on a mainframe. That is quite a concept.

Again, according to Jim, they can now have real operating systems running under their z86VM, although, so far, they are the free versions of Linux. Their next step will be to test it with industrial-strength Linux distros such as Red Hat and SUSE. And then they will need to get Windows running. And then they’ll have a product.

Speaking frankly, Jim said that currently they have a bug in their Windows ‘BIOS’ processing in the area of plug-and-play hardware support. Their thinking is that it’s a mistake in their interpretation of the hardware commands and, naturally, they’re working to resolve it.

The truth is that it’s still early days for the project, and while running Linux is pretty good, we can already do that on a mainframe (although you might quibble at the price tag for doing so). But once the Mantissa technical people have cracked the problems with Windows, it will be a product well worth taking a look at, I’m sure. But they’re not there yet, and they’re keen to manage expectations appropriately.

Jim goes on to say that once the problems are solved it’s going to be about performance. Performance is measured in a number of ways: benchmarks of primitives, large-scale benchmarks for capacity planning, end-user experience, and what users are willing to tolerate. So it seems that they have business objectives around performance where they could be successful if they supported only 10 PCs, more successful with 100 PCs, and even more successful if they can support 1,000 PCs.

Jim Porell describes z86VM as really just an enabler to solving a wide range of ‘customer problems’ by enabling direct access between the traditional mainframe and the PC operating systems that are co-located with it.

I think that anything that gets more operating systems running on mainframe hardware has got to be good. And I’m prepared to wait for however long it takes Mantissa to get Windows supported on a mainframe. I’ll definitely feel then that I’m having my cake and eating it!

Sunday, 8 June 2014

Conversational Monitor System

I do a lot of work with Web sites, and people are often talking about using a CMS to make it easier to upload and place content. For them, CMS stands for Content Management System, but I thought it would be fun to revisit the Conversational Monitor System – one of the first interactive environments, running in its own virtual machine, that people really enjoyed using because of its flexibility and functionality.

Our story starts in the mid-1960s at IBM’s Cambridge Scientific Center, where CMS – then called the Cambridge Monitor System – first saw the light of day running under CP-40, a VM-like control program, which then developed into CP-67. It provided a way of giving every user the appearance of working on their own computer system. So, every user had their own terminal (the system console for their system), as if they had their own processor, unit record devices (what we’d call a printer, card reader, and card punch), and DASD (disk space). The control program looked after the real resources and allocated them, as required, to the users.

In 1972, when IBM announced virtual storage for its System/370 architecture, a new version of VM called VM/370 became available. Unlike CP/CMS, VM/370 was supported by IBM, and CMS retained its initials but was now known as the Conversational Monitor System – highlighting its interactive nature. This commercial product also included RSCS (Remote Spooling Communications Subsystem) and IPCS (Interactive Problem Control System), which ran under CMS.

VM itself was originally not very popular within IBM, but, through quite an interesting story, survived. VM/370 became VM/SE and then VM/SP. There was also a low-end variant called VM/IS. Then there was VM/SP HPO before we had VM/XA SF, VM/XA SP, then VM/ESA, and now z/VM.

But returning to CMS, it was popular because you could do so much with it. You could develop, debug, and run programs, manage data files, communicate with other systems or users, and much more. When you start CMS (by issuing IPL CMS), it runs your PROFILE EXEC, which sets up your virtual machine environment in exactly the way that you want it to be.
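By way of illustration only – the commands are ordinary CMS and CP commands, but the minidisk address and the synonym file name here are invented examples – a very small PROFILE EXEC written in REXX might look something like this:

    /* PROFILE EXEC: an illustrative sketch, not a real user's profile */
    'CP SET MSG ON'            /* let messages from other users through    */
    'ACCESS 192 B'             /* access an extra (hypothetical) minidisk  */
    'SYNONYM MYSYNS'           /* load a personal (made-up) synonym table  */
    say 'Hello,' userid()', it is' date() 'at' time()

Anything in quotation marks is simply passed to CMS (or, with the CP prefix, to the control program), which is what makes the PROFILE EXEC such a convenient place to tailor a session to your own taste.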

Two products that made CMS users very happy were PROFS and REXX. PROFS (PRofessional OFfice System) became available in 1981 and was originally developed by IBM in Dallas, in conjunction with Amoco. It provided e-mail, shared calendars, and shared document storage and management. It was so popular that it was renamed OfficeVision and ported to other platforms. OfficeVision/VM was dropped in 2003, with IBM recommending that users migrate to Lotus Notes and Domino, which it had acquired by taking over Lotus.

REXX (Restructured Extended Executor) is an interpreted programming language developed by Mike Cowlishaw and released by IBM in 1982. It is a structured, high-level programming language that's very easy to get the hang of. In fact, it was so popular that it was ported to most other platforms. REXX programs are often called REXX EXECs because REXX replaced EXEC and EXEC2 as the command language of choice.
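Just to give a flavour of the language – this is a sketch, and the file name HELLO EXEC is made up – a trivial REXX EXEC might look like this:

    /* HELLO EXEC: a minimal, illustrative REXX program */
    parse arg name .                  /* take the first word passed in   */
    if name = '' then name = 'world'  /* default when no argument given  */
    do i = 1 to 3                     /* a simple structured loop        */
       say 'Hello,' name || '! (pass' i 'of 3)'
    end
    exit 0

You would run it from the CMS command line simply by typing HELLO, optionally followed by a name – the same readable style that later made REXX popular as a macro and scripting language on other platforms.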

CMS users will remember XEDIT, an editor written by Xavier de Lamberterie that was released in 1980. XEDIT supports automatic line numbers, and many of its commands operate on blocks of lines. The command line allows users to type editor commands. It replaced EDIT as the standard editor. Again, XEDIT was very popular and was ported to other platforms.
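XEDIT was also extensible: macros are just REXX programs with a filetype of XEDIT that issue editor subcommands. A hypothetical example (the macro name and the text it inserts are invented here) might be:

    /* STAMP XEDIT: an illustrative XEDIT macro written in REXX */
    address xedit                     /* route commands to the editor     */
    'TOP'                             /* move the line pointer to the top */
    'INPUT * Last edited on' date()   /* insert a new line of text        */
    'SAVE'                            /* write the file back to disk      */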

CMS provided a way to maximize the number of people who could concurrently use mainframe facilities at a time when those facilities were fairly restricted. It was a hugely successful environment, spawning tools that were themselves so successful that they were ported to other platforms. I used CMS and VM a lot back in the day, and even wrote two books about VM. Like many users, I have very fond memories of CMS and what could be achieved with it.