Sunday 24 November 2013

Vivat mainframe

“Vivat Rex” is what the populace was meant to shout when a new king of England was crowned. It means “long live the king”. I think that we’ve been able to shout, “long live the mainframe” for a long time now, and recent announcements mean that we can continue to do so.

Mainframes, and I don’t need to tell you this, have been around for a long time now and have faced and overcome all the technical and business challenges that have been thrown at them in that time. And older mainframers can seem somewhat jaundiced when their younger colleagues get over-enthusiastic about some new technology.

We’ve looked at client-server technology and thought how similar it is to dumb terminals logging onto a mainframe. We’ve looked at cloud computing and thought how similar that is to terminals connecting to a mainframe in a different part of the world. But there’s much more to the mainframe than a simple ‘seen it, done it’ attitude. The mainframe is also able to absorb new technologies and make them its own.

We’ve looked recently at Hadoop – there are distributions from Hortonworks, Cloudera, Apache, and IBM (and many others). But you can run Big Data on your mainframe, and a number of mainframe software vendors have recently produced software that connects to Big Data from z/OS. It’s becoming integrated. So long live the mainframe with Big Data.

We’ve also, in this blog, looked at ways that BYOD – personal devices – can be used to access mainframe data, usually through browsers. And many of IBM’s younger presenters at GSE recently were talking about more Windows-like interfaces to mainframe information. Think of it – it’s like 1970 all over again – mainframes in the hands of 20-year-olds! So long live the mainframe with youthful staff and modern-looking interfaces.

We know there are other computing platforms out there, and IBM over the past few years has produced hybrid hardware that contains a mainframe and blades for running these other platforms. This summer’s zBC12 (Business Class) followed last year’s announcement of the zEC12 (Enterprise Class). And 2011 saw the z114, and 2010 gave us the z196. So long live the mainframe and its ability to embrace other platforms. (And I haven’t even mentioned how successfully you can run Linux on a mainframe.)

And thinking about mainframes embracing other technologies, CA has just announced the general availability of technology designed, they say, to help customers drive down the cost of storing data processed on IBM System z by backing up the data and archiving it to the cloud.

What that means is that, by using CA Cloud Storage for System z and the Riverbed Whitewater appliance, customers can back up System z data to Amazon Simple Storage Service (Amazon S3), a storage infrastructure designed for mission-critical and primary data storage, or to Amazon Glacier, an extremely low-cost storage service suited to data for which retrieval times of several hours are acceptable. Both services are highly secure, scalable, and designed to be durable. In addition, disaster recovery readiness is improved, and AWS cloud storage can be accessed without changing the existing backup infrastructure.
So, yet again, we can say, long live the mainframe for the way it’s embracing cloud computing.
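
For anyone who hasn’t touched the AWS side of this, the basic operation – pushing a backup object into an S3 bucket – is a surprisingly small piece of code. Here’s a minimal sketch using the AWS SDK for Java; the bucket name, object key, and file path are made up for illustration, and it shows a generic S3 upload rather than anything specific to the CA or Riverbed products.

import java.io.File;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class ArchiveToS3 {
    public static void main(String[] args) {
        // Credentials come from the local AWS profile; bucket, key, and file
        // names here are purely illustrative.
        AmazonS3 s3 = new AmazonS3Client(new ProfileCredentialsProvider());

        File backup = new File("/backups/zos-dump-20131124.bin");

        // Upload the backup object; a lifecycle rule on the bucket could then
        // transition older objects to the cheaper Glacier storage class.
        s3.putObject(new PutObjectRequest("example-mainframe-backups",
                "zos/zos-dump-20131124.bin", backup));

        System.out.println("Backup uploaded.");
    }
}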

As a side note: Amazon also has Amazon Elastic MapReduce (EMR), a Web service that uses Hadoop to process large amounts of data.

IBM has taken over StoredIQ, Star Analytics, and The Now Factory for their Big Data and Business Analytics capabilities. And it took over SoftLayer Technologies for its cloud computing infrastructure. It’s making sure it has its hands on the tools and the people who are developing these newer technologies.

My conclusion is that there are new problems that need to be solved. And there are new technologies available to solve them. But so often those exciting new things are very similar to things that we mainframers have dealt with before. And where they seem different, mainframe environments are able to work with them and bring them into the fold.

There’s really no danger that mainframes are going away anytime soon. So, we’re very safe in saying, “vivat mainframe”.

On a completely different topic...
Please complete the mainframe users’ survey at www.arcati.com/usersurvey14. And if you’re a vendor, get your free entry in the Arcati Mainframe Yearbook 2014 by completing the form at www.arcati.com/vendorentry.

Sunday 17 November 2013

IOD - what you missed!

Well, that’s it – the exhibition stands have been taken down, the speakers have flown out, the attendees are all back home with their families, and the organizers have packed away their banners. #IBMIOD has come to an end for another year – but what an event it was!

Those people lucky enough to attend were able to find out the latest thinking from IBM and partners in terms of DB2, IMS, Big Data and Analytics, Tools and Utilities for DB2 and IMS, and Cross Platform sessions. In addition, there were client-led sessions, hands-on labs and workshops, expert panel sessions, and birds-of-a-feather sessions. And there were even more relaxing sessions, such as the “Rock the Mainframe” System z reception.

This year’s theme for the event, which ran from Sunday 3 November to Thursday 7 November, was “Think Big. Deliver Big. WIN BIG” – and that was reflected in a number of the announcements IBM made during the conference. Looking for correlations or anomalies in your Big Data? Then you need IBM SmartCloud Analytics Predictive Insights. Looking to find the optimum place to put your data – in terms of speed versus cost? Then you need IBM SmartCloud Virtual Storage Center. Want the fastest Hadoop appliance around? Then it’s the IBM PureData System for Hadoop that you need. Need to anonymize or mask data from various sources? Try InfoSphere Data Privacy for Hadoop. Like to see what’s going on with your big data sets? Then look out for InfoSphere Governance Dashboard.

At 8:15 on Wednesday morning, in “The Next Generation of Innovation” session, delegates were treated to tennis star Serena Williams as a featured speaker. There were also some very interesting keynote sessions with IBM executives sharing their thinking about Business Analytics, Enterprise Content Management, Information Management, and Business Leadership. Plenty of food for thought there for delegates to take back to their organizations.

The EXPO was a great place to visit to talk to all sorts of vendors and get a feel for what they thought was important and how their products and services might make a difference at individual sites. With so many vendors attending, it was the ideal place to really compare and contrast what was on offer.

And, of course, if you weren’t able to make the conference, there was Livestream, which broadcast some of the sessions, such as Monday’s opening general session, “Win Big in the New Era of Computing”. And there were a host of other sessions over the four days. Similarly, the Video Library allowed people to watch replays of general sessions, keynotes, selected elective sessions, and sponsor interviews directly from the Livestream library. There was also an online photo gallery allowing non-attendees and attendees to see “the action each day”.

Let’s not forget the buzz about the conference in the Twittersphere. In a week that saw shares in Twitter not only go on sale but pretty much double in price, it was interesting to see how people were using Twitter to tell others what they were enjoying at the conference. And the IOD Web site kept everyone up-to-date with the #IBMIOD tweets.

So, with tired delegates, speakers, vendors, and organizers having made their way home, they – and those people who followed events at Mandalay Bay, Las Vegas from a distance – can all be confident that Information On Demand 2013 was a hugely useful event to have attended, in person or virtually. And the organizers are going to have to work amazingly hard to top this event next year.

On a completely different topic...
Please complete the mainframe users’ survey at www.arcati.com/usersurvey14. And if you’re a vendor, get your free entry in the Arcati Mainframe Yearbook 2014 by completing the form at www.arcati.com/vendorentry.

Sunday 10 November 2013

Guide Share Europe 2013

The Guide Share Europe Conference at Whittlebury Manor was as excellent this year as in previous years. I was only able to make Day 1 on 5 November, but I had a great day. Apart from the five-star presentations, it’s always fun to catch up with old friends and with people I’ve spoken to on webinars but never actually met. Plus there’s an opportunity to catch up with a number of vendors and find out what’s happening with them and business in general.

The day started with a couple of keynotes: Tesco’s Tomas Kadlec talking about “Technology – the retailer’s battlefield – zSeries reports for duty!”, and the University of Bedfordshire’s Dr Herbert Daly talking about “Re-framing the mainframe: new generations and regenerations on System z”.

I chair the Virtual IMS user group and the Virtual CICS user group, so my time is always split between the CICS and IMS sessions. This year, I started with IBM’s Kyle Milner’s “z/OS Explorer and CICS Explorer (5.1.1)”. Kyle started by saying that z/OS Explorer is really called IBM Explorer for z/OS, and V2.1 is separate from CICS Explorer, which integrates with it. z/OS Explorer is a desktop tool that integrates with MQ Explorer, Data Studio, Rational tools (RTC), Rational Developer for System z (RDz), IMS Enterprise Suite Explorer, and other tools. It allows users to view, delete, and create files on z/OS and Linux; work with SPOOL files; and much more. It’s installed using Installation Manager in a process that’s much like getting apps from an app store. CICS Explorer lets users create new program definitions, clone resources, and perform other life-cycle operations. It’s built on the Eclipse framework V4.2.2.

IBM’s Greg Vance spoke about “GDPS Active-Active and IMS replication”. He described how recovery involved two concepts: the Recovery Point Objective and the Recovery Time Objective. He looked at how recovery had evolved to the point where people wanted almost immediate recovery with almost no lost data. A planned Active-Active switchover involves stopping sending transactions to one database, waiting for the last transaction to replicate across, and then sending transactions to the second database. The transactions caught before the switchover just appear a bit slow to the user. For this to work with IMS, you need InfoSphere Data Replication for IMS for z/OS V1.11. Because it uses asynchronous replication, there are no restrictions on the distance between databases. There’s low latency because of the use of parallelism. And there’s transaction consistency.
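
To make the switchover idea concrete, here’s a deliberately simplified Java sketch of that planned sequence – stop routing work, let replication drain, then route to the other site. The Router and Replicator interfaces are hypothetical stand-ins invented for the illustration; they’re not part of GDPS or InfoSphere Data Replication.

public class PlannedSwitchover {

    interface Router {
        void stopRouting();
        void routeTo(String site);
    }

    interface Replicator {
        /** True once every captured change has been applied at the standby. */
        boolean isDrained();
    }

    static void switchover(Router router, Replicator replicator, String standbySite)
            throws InterruptedException {
        // 1. Stop sending new transactions to the currently active database.
        router.stopRouting();

        // 2. Wait for the last in-flight changes to replicate across; the
        //    transactions caught here just look a little slow to the user.
        while (!replicator.isDrained()) {
            Thread.sleep(100);
        }

        // 3. Start sending transactions to the second (standby) database.
        router.routeTo(standbySite);
    }
}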

Back at the CICS stream, I saw IBM’s David Harris talk about “Eliminating the batch window with modern batch”. David started by explaining that batch jobs often had to run when online services were down because they needed exclusive access to resources. However, there are many drivers to keep the online system available the whole time, leaving little or no time for traditional batch applications to run. The solution he proposed involved running batch at the same time as the online system – and this batch system relied on Java, which comes with a big plus in that the work can run on a zAAP specialty processor. Using the Batch Data Stream Framework (BDSF), it allows checkpointing, and business objects can be re-used. The batch container is a long-running CICS job, and WebSphere Application Server (WAS) is used to schedule the jobs into CICS.
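
Checkpointing is the trick that makes this workable alongside online transactions: the batch step commits a restartable position at regular intervals, so locks are released frequently and a failed run can carry on from where it left off. Here’s a generic Java sketch of that pattern; the RecordSource and CheckpointStore types are invented for the illustration and aren’t the actual BDSF classes.

import java.util.Iterator;

public class CheckpointedBatchStep {

    interface RecordSource {
        Iterator<String> recordsFrom(long position);
    }

    interface CheckpointStore {
        long lastPosition();
        void save(long position);
    }

    static void run(RecordSource source, CheckpointStore checkpoints) {
        long position = checkpoints.lastPosition();   // restart from the last checkpoint
        Iterator<String> records = source.recordsFrom(position);

        int sinceCheckpoint = 0;
        while (records.hasNext()) {
            process(records.next());
            position++;
            // Commit a checkpoint every 1,000 records so the online system's
            // locks are released regularly and the job remains restartable.
            if (++sinceCheckpoint == 1000) {
                checkpoints.save(position);
                sinceCheckpoint = 0;
            }
        }
        checkpoints.save(position);                   // final checkpoint
    }

    private static void process(String record) {
        // The business logic for each record would go here.
    }
}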

Informatica’s John Boyle spoke about “IMS test data management”. He explained that we need only a subset of data to work on in testing, and we need to hide sensitive data – particularly in light of data privacy legislation and to reduce the risk of sensitive data loss. John also stressed the need for the test data to be kept current and internally consistent. Data masking is the technique that hides personal data, and the masking must be applied consistently across the data. Test software must allow policies to be applied and maintain referential integrity. It first has to establish what to mask and how to mask it.
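
One simple way to get masking that stays consistent – so the same customer number always masks to the same token and referential integrity survives across files and databases – is to use a keyed hash. Here’s a minimal Java sketch of that idea; it’s a generic illustration, not Informatica’s masking algorithm.

import java.nio.charset.StandardCharsets;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class DeterministicMasker {

    private final Mac mac;

    public DeterministicMasker(byte[] secretKey) throws Exception {
        // An HMAC gives a repeatable but non-reversible mapping for a given key.
        mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
    }

    /** Masks a sensitive value into a stable token. */
    public synchronized String mask(String sensitiveValue) {
        byte[] digest = mac.doFinal(sensitiveValue.getBytes(StandardCharsets.UTF_8));
        StringBuilder token = new StringBuilder();
        for (int i = 0; i < 8; i++) {                 // keep the token short
            token.append(String.format("%02x", digest[i] & 0xFF));
        }
        return token.toString();
    }

    public static void main(String[] args) throws Exception {
        DeterministicMasker masker = new DeterministicMasker("demo-key".getBytes());
        // The same customer number masks to the same token in every data set.
        System.out.println(masker.mask("CUST-000123"));
        System.out.println(masker.mask("CUST-000123"));
    }
}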

IBM’s Paul Fletcher spoke about “IMS 13 Native SQL for COBOL”. A prerequisite for this is COBOL V5.1, which has only just been released. COBOL programs supply SQL keywords, and SQL comes in static and dynamic forms; at the moment, IMS supports only dynamic SQL. Users need to declare tables, define an SQL Communication Area (SQLCA), which is like a PCB, possibly define an SQL Descriptor Area (SQLDA), declare data items for passing data between IMS and host languages, code SQL statements to access IMS, check the SQLCA to verify the execution of the SQL statements, and handle any SQL error codes. Paul told the group that there are three types of dynamic SQL: firstly, where the whole SQL statement is known when the program is written; secondly, where the SQL is known but the values can vary; and, thirdly, where none of the SQL is known – it’s read from a file. Users need to fully qualify all tables and columns, use a WHERE clause for key fields, and use PREPARE.
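
That second flavour – the SQL text is known but the values vary at run time – is the one most people will recognize from other platforms, and it’s the job that PREPARE and parameter markers do. Here’s a generic JDBC sketch of the same idea in Java; the connection URL, table, and column names are placeholders, and this isn’t the COBOL coding pattern from Paul’s session.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DynamicSqlExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; a real program would use the appropriate JDBC driver.
        try (Connection conn = DriverManager.getConnection("jdbc:example:ims")) {
            // The SQL text is fixed; the value of the key field is supplied at
            // run time via a parameter marker - the equivalent of PREPARE.
            String sql = "SELECT LASTNAME, BALANCE FROM CUSTDB.CUSTOMER WHERE CUSTNO = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, args.length > 0 ? args[0] : "000123");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("LASTNAME") + " "
                                + rs.getBigDecimal("BALANCE"));
                    }
                }
            }
        }
    }
}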

The exhibition hall was packed and lively, giving people a chance to find out about various products and services. An excellent day of learning and networking was rounded off by fireworks and a barbecue dinner. I’m sorry I couldn’t make the second day. If you didn’t make GSE this year, I recommend that you go next year.

Sunday 3 November 2013

When worlds collide

We know that mainframes are rock solid workhorses that ensure the banks and insurance companies and airlines and pretty much every other large organization get their work done correctly and swiftly. And we know that access to mainframes has been extended outside the world of green screens to anyone on a browser with proper authorization. And we also know that there’s little distinction between the world of cloud computing and distributed mainframe computing. But the latest big thing is Big Data – and that seems like a different world.

Big Data is used to refer to huge amounts (exabytes) of data, often unstructured, that can originate from a variety of sources – such as cameras, weather satellites, credit card machines, barcode readers, the Internet of Things, anything! This Big Data usually sits on Linux or Windows boxes, and some of the early developers were Google, Amazon, and Facebook. The data is often stored in HBase, a non-relational, distributed database written in Java. And the underlying file system is what’s called the Hadoop Distributed File System (HDFS). At run time, a process maps the data and then reduces it – that’s called MapReduce.
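
If you’ve never seen MapReduce in action, the canonical example is counting words: the map step turns each line of text into (word, 1) pairs, and the reduce step adds up the counts for each word. Here’s a minimal sketch of that job using the standard Hadoop Java API, with placeholder input and output paths passed in as arguments.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizeMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Map: emit a (word, 1) pair for every word in the line.
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Reduce: add up the counts for each word.
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizeMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}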

So how do these two worlds come together? For a start, a lot of the things you need for Big Data are Open Source and come from the Apache Software Foundation. IBM is a member of the foundation and has a number of products that extend Big Data’s functionality. IBM provides InfoSphere BigInsights, DataStage, Streams, and Guardium. There’s Big SQL with BigInsights V2.1, and the spreadsheet-like BigSheets.

If you want to run Big Data – Hadoop – on your mainframe, you’ll need to do it in a Linux partition (Linux on System z). But IBM isn’t the only mainframe software vendor that’s getting in on the act. We’ve recently heard from BMC, Syncsort, Compuware, and Informatica about their products.

BMC has extended its Control-M automated mainframe job scheduler with Control-M for Hadoop. The product enables the creation and management of Hadoop workflows in an automated environment and is aimed at Hadoop application developers and enterprise IT administrators who are using Hadoop as part of their production workload.

Syncsort has Hadoop Connectivity, which prevents Hadoop from becoming another silo within an enterprise by making it easy to get data in and out of Hadoop. The product provides: native connectivity to all major data sources and targets; native mainframe connectivity and support for EBCDIC/ASCII, VSAM, packed decimal, COMP-3, and more; heterogeneous database access on Hadoop; direct I/O access for faster data transfers; and high-performance compression.
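
To give a flavour of what “native mainframe connectivity” actually has to cope with, here’s a small, generic Java sketch that decodes an EBCDIC text field and a packed decimal (COMP-3) field – the kind of conversion any mainframe-to-Hadoop connector must do. It assumes the JDK’s Cp037 charset is available, and it is not Syncsort’s implementation.

import java.math.BigDecimal;
import java.nio.charset.Charset;

public class MainframeFieldDecoder {

    private static final Charset EBCDIC = Charset.forName("Cp037");

    /** Decodes an EBCDIC byte field into a Java (Unicode) string. */
    static String decodeEbcdic(byte[] field) {
        return new String(field, EBCDIC).trim();
    }

    /** Decodes a packed decimal (COMP-3) field: two digits per byte, sign in the final nibble. */
    static BigDecimal decodePacked(byte[] field, int scale) {
        StringBuilder digits = new StringBuilder();
        for (int i = 0; i < field.length; i++) {
            int high = (field[i] >> 4) & 0x0F;
            int low = field[i] & 0x0F;
            digits.append(high);
            if (i < field.length - 1) {
                digits.append(low);
            } else if (low == 0x0D || low == 0x0B) {  // D or B in the sign nibble means negative
                digits.insert(0, '-');
            }
        }
        return new BigDecimal(digits.toString()).movePointLeft(scale);
    }

    public static void main(String[] args) {
        // 0x12 0x34 0x5C is packed decimal for +12345; with a scale of 2 that's 123.45
        byte[] packed = { 0x12, 0x34, 0x5C };
        System.out.println(decodePacked(packed, 2));

        // "HELLO" in EBCDIC code page 037
        byte[] ebcdic = { (byte) 0xC8, (byte) 0xC5, (byte) 0xD3, (byte) 0xD3, (byte) 0xD6 };
        System.out.println(decodeEbcdic(ebcdic));
    }
}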

Compuware has extended its Application Performance Management (APM) software with Compuware APM for Big Data. This, they claim, allows organizations to tame Big Data applications to eliminate inefficiencies and rapidly identify and resolve problems. Using PurePath Technology, it provides visibility into Hadoop and NoSQL applications. Organizations, they say, use Compuware APM for Big Data to reduce costs, analyse issues, and ensure optimal efficiency from their Big Data investments.

Informatica PowerExchange for Hadoop provides native high-performance connectivity to the Hadoop Distributed File System (HDFS). It enables organizations to take advantage of Hadoop’s storage and processing power using their existing IT infrastructure and resources. PowerExchange for Hadoop can bring any and all enterprise data into Hadoop for data integration and processing. Fully integrated with Informatica PowerCenter, it moves data into and out of Hadoop in batch or real time using universal connectivity to all data, including mainframe, databases, and applications, both on-premises and in the cloud. Informatica PowerCenter Big Data Edition is, they claim, highly scalable, high-performance enterprise data integration software that works with both Hadoop and traditional data management infrastructures.
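
At its simplest, “getting data into Hadoop” means landing a file in HDFS. Here’s a bare-bones Java sketch using the standard Hadoop FileSystem API – the cluster address and file paths are placeholders, and this is a generic illustration rather than how PowerExchange for Hadoop actually works.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyIntoHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder cluster address; in practice this comes from core-site.xml.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");

        try (FileSystem hdfs = FileSystem.get(conf)) {
            // Copy a local extract file into an HDFS landing directory.
            hdfs.copyFromLocalFile(new Path("/data/extract.csv"),
                    new Path("/landing/extract.csv"));
        }
    }
}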

Clearly, these two different worlds have more than collided – we are beginning to see the integration of these previously quite separate worlds with software from a number of vendors helping users with the integration process. And as users, we get the best of both worlds!